AbletonOSC x Tidal?

Daniel Jones presented this project at NIME 2023:

AbletonOSC: A unified control API for Ableton Live
This paper describes AbletonOSC, an Open Sound Control API whose objective is to expose the complete Ableton Live Object Model via OSC. Embedded within Live by harnessing its internal Python scripting interface, AbletonOSC allows external processes to exert real-time control over any element of a Live set, ranging from generating new melodic sequences to modulating deeply-nested synthesis parameters. We describe the motivations and historical precedents behind AbletonOSC, provide an overview of its OSC namespace and the classes of functionality that are exposed by the API, and discuss new types of musical interaction that AbletonOSC enables.

PDF: 28.pdf - Google Drive
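For a flavour of what the API looks like from the client side, here is a minimal sketch using python-osc. It assumes AbletonOSC's default listening port of 11000 on localhost, and the addresses are as I recall them from the README, so double-check there before relying on them:

    from pythonosc.udp_client import SimpleUDPClient

    # AbletonOSC listens on UDP port 11000 by default (replies go out on 11001)
    client = SimpleUDPClient("127.0.0.1", 11000)

    # Set the song tempo to 140 BPM
    client.send_message("/live/song/set/tempo", [140.0])

    # Fire the clip in track 0, clip slot 0
    client.send_message("/live/clip/fire", [0, 0])

    # Start the global transport
    client.send_message("/live/song/start_playing", [])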

Previously, I have attempted an integration between Tidal and Bitwig via BitwigOSC.

My memory is a little weak, but I remember some limitations/frustrations being:

  • Unable to achieve sample-level sync of Tidal MIDI / audio recording without messing around with nudge
  • Lack of bidirectional OSC comms
  • BitwigOSC is written in Java using the official API, but it was not possible (for me) to alter it as needed

I am now evaluating whether to embark on a new effort with AbletonOSC.

In comparison to before:

A few questions for folks here:

  • Would you be interested in using such a project, if so what for?
  • Would you be interested in contributing development time to such a project, and if so what aspect(s)?
  • Would you be more interested in Strudel / Tidal / Vortex integration?

I have my own sense of how I would use this, but I believe it could be quite flexible for different approaches. And if we went the Vortex route, being in the Python ecosystem opens up many possibilities for creative AI/ML.

5 Likes

I’ve not deeply considered the details of what I would be looking for, and am still very inexperienced with Tidal in comparison to others here. My workflow leans on SuperCollider lightly compared to many folks here, as my sound sources are primarily hardware (though I am considering moving some of that into plugins, M4L devices, etc.). This workflow largely relegates SC to control-rate duties like clocking and sequencing of triggers, pitch, and modulation. That’s probably about enough context for the moment.

For a while I’ve been considering how much more, and more easily, I could support and expand my workflow by leaning on Live more than SC, perhaps even entirely eliminating SC.

One of the obvious areas of workflow expansion might be capturing patterns as they are played, both for visualization and for later alteration, arrangement, etc.

In my mind this all starts with an ability to send n-cycles-worth of a pattern (or the results of a transition between patterns, etc.) to a specified new or existing clip in Live.

If the support also included the ability to launch one or more specified clips then that would be most excellent.

A notable area I definitely haven’t yet considered is the sending of automation, which I expect would prove quite valuable.
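To make that concrete, here is a rough sketch of what I imagine the Live-side calls might look like through AbletonOSC, using python-osc as the client. The addresses and argument orders are my best reading of the AbletonOSC docs rather than something I've tested, so treat them as assumptions:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 11000)  # AbletonOSC default port

    TRACK, SLOT = 0, 0

    # Events captured from n cycles of a pattern:
    # (pitch, start in beats, duration in beats, velocity)
    events = [
        (60, 0.0, 0.5, 100),
        (63, 1.0, 0.5, 100),
        (67, 2.0, 0.5, 100),
        (72, 3.0, 0.5, 100),
    ]

    # Create a 4-beat clip in the target slot, then write the notes into it
    client.send_message("/live/clip_slot/create_clip", [TRACK, SLOT, 4.0])
    for pitch, start, dur, vel in events:
        # assumed argument order: track, slot, pitch, start_time, duration, velocity, mute
        client.send_message("/live/clip/add/notes", [TRACK, SLOT, pitch, start, dur, vel, 0])

    # Launch the clip
    client.send_message("/live/clip/fire", [TRACK, SLOT])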

To some of your questions:

  • I might be able to contribute to the effort, but time availability is low and my Python experience is very minimal and rather stale.
  • I would personally be using Tidal, as Haskell is my jam.
1 Like

I'm interested in learning more and staying in touch with any efforts to use OSC from Tidal to Ableton Live. The direction here is intriguing but I don't have a current use case for it. If some work materializes I may be able to contribute thoughts and ideas, and eventually do some testing/validation and documentation.

In addition to Tidal and Sardine, I use Ableton Live as well as Max/MSP and Max4Live. Max4L has an integration with Live, and connects to the LOM. Some time ago, I investigated this and tested out a simple connection to observe some parameters. I assume that AbletonOSC has the same access to the LOM as Max4Live, but it would be good to have that validated by someone who knows.

My personal interest would be to use Python/Vortex. Bubo has some thoughts on what is needed to make Vortex more usable with improved packaging. But I'm not sure what that would mean for a general Tidal integration, which might have a wider potential user base.

Thanks for posting.

1 Like

I have not thought about this in any depth, but at a first pass, I can see this as being very useful if one could pattern parameters of Ableton devices and third-party plugins. I am still a Tidal lightweight, but hoping to have time soon to dive more deeply into the system. Much of my usage is currently sending MIDI to hardware and software instruments, so I do not have much use for the tie to SuperCollider at this point. I am interested in learning more SC in the future as well, but just in terms of what I typically do with Tidal at the moment, it is MIDI-centric.

If I did not need SC and could instead host software instruments and effects in Ableton, and then could both send MIDI notes and control their parameters (perhaps via automation), that would be excellent. If that is a place this could go, I may be able to make some contribution. Python is a good language for this.
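For the parameter side of that, my (untested) understanding is that AbletonOSC addresses device parameters by track/device/parameter index, so patterned parameter changes could boil down to messages like the sketch below. The address and argument order are an assumption drawn from my reading of the AbletonOSC docs, not something I have verified:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 11000)  # AbletonOSC default port

    # Set parameter 3 of the first device on track 1 to 0.75
    # (track_index, device_index, parameter_index, value in the parameter's own range)
    client.send_message("/live/device/set/parameter/value", [1, 0, 3, 0.75])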

Seeing the mention of Bitwig in the first post, I will say that I would be even more excited about this if the integration was with Bitwig instead of Ableton. But I have little facility with Java so that could be a barrier.

Will keep an eye on this as it develops. Interested to hear what others might do with this integration.

2 Likes

i'd be very interested in a deeper Tidal x Ableton integration in general, as i do believe that, while mostly used in a studio production context nowadays, Ableton still is (or can be) an amazing, if not the best, tool for live performance of electronic music.

The fact that Tidal uses Link for scheduling now makes the pairing even more of a no-brainer.

I'm currently returning to Tidal after a long-ish break, working on a hybrid workflow with ableton but currently limited to sending MIDI from Supercollider.

Right now i'm mostly concerned with getting Supercollider out of the equation (as it seems overkill just to send MIDI to ableton). since Tidal-MIDI is dead, GitHub - isyuck/tidalrytm at device-agnosticism seems promising (and rather straightforward to me, as i have to deal with C++ a lot at work).

Another dream of mine would be to launch predefined Tidal patterns from named ableton clips. i know this is possible in theory, but i haven't managed to get tidal-listener to build on my M1 mac so far. i had some success last winter by wrapping ghc inside a node.js instance within max4live and sending strings directly to stdin that way, but the whole thing ended up ridiculously niche, overly complex and hard to use or maintain.

as for AbletonOSC - this looks extremely promising! i have no experience with python and the Live API seemed kinda intimidating the last time i looked at it, but i'd be willing to give python a go for this :+1:

1 Like

Hey, it's been a while but i've found the time again to look into this a little deeper.

I still really like the approach of using the Live API via Tidal to change the state of devices, tracks et cetera (basically to define/redefine the audio graph via text editor).

However, looking at paragraph (? section?? what are the numbered thingys called in academic papers?) "7.1 Timing Resolution", the following sentence irked me:

[...] For these reasons, clients should still use MIDI for relaying note triggers and other time-sensitive events to Live.

(warning, long-ish rant on timing accuracy incoming...)

Triggering events in Live via MIDI with at least somewhat acceptable precision requires very low audio buffer sizes, and even at extremely low values is prone to severe timing jitter. So you have a tradeoff between bad timing and unnecessary CPU overhead, which is really bad. You lose either way, basically.

The thing is that Tidal actually sends events via OSC well in advance, and SuperDirt is able to ensure sample-accurate timing by scheduling these events according to the timestamp contained within the OSC message. This is such a clever approach i'm actually kind of surprised no one else is doing it? Anyway, converting these accurately scheduled events to MIDI and shooting them at Ableton removes all of that precision. Boo!
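for anyone curious what that looks like on the wire, it's just standard OSC bundles with timetags. a minimal python-osc sketch of the principle (the /trigger address and its argument are placeholders, and 57120 is just SuperCollider's default port standing in for a real receiver):

    import time
    from pythonosc import udp_client, osc_bundle_builder, osc_message_builder

    client = udp_client.SimpleUDPClient("127.0.0.1", 57120)

    # build a message (address and argument are made up for illustration)
    msg = osc_message_builder.OscMessageBuilder(address="/trigger")
    msg.add_arg(60)

    # wrap it in a bundle whose timetag lies 0.25 s in the future;
    # a receiver like SuperDirt holds the event and plays it exactly then,
    # which is where the sample accuracy comes from
    bundle = osc_bundle_builder.OscBundleBuilder(time.time() + 0.25)
    bundle.add_content(msg.build())
    client.send(bundle.build())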

The same goes for sending OSC messages directly to a Max4Live device with the "Live" scheduling option in Tidal; in my experience the timing behaviour is completely unpredictable and subject to all kinds of influences beyond our control (mostly the network driver, i suppose).

However!

In the past few days i've been able to program a little custom Max external in C++ that can schedule bangs in response to messages and shoot them out at exact timepoints during the current (or next) measure, leading to sample-accurate triggering of instruments within Live from Tidal OSC messages.

To verify this i have had MIDI clips and Tidal patterns trigger the same sound, with one of them phase-inverted. Bar some slight jittering of the occasional hit every handful of bars for the first minute or so, the result has been dead silent. The best part is that the higher the buffer size in Live's audio settings, the better the performance was!

I'm still hunting down the reason for these rare cases where the MIDI event is offset by a few samples. It seems that latency compensation for devices on other tracks can throw off the timing between a Tidal sequence and an identical MIDI clip by a few samples; manually adjusting the track delay for that track can fix it. Also, the fact that it seems to "warm up" suggests that maybe you don't want your audio thread running on a low-performance CPU core. But this is nitpicking at the highest possible level; even in the cases where it's off, we're talking about a handful of samples here.

Even in its current state this is massively better than any preexisting solution for triggering sound events in Live via Tidal.

@jarm (or anyone else for that matter) If you're interested i can send over my current code for the max external + the corresponding m4l device so you can build and test it yourself. It is of course a messy and probably really naive solution, but it works!

To relate all of the above to the topic at hand: While i'm personally leaning more towards having tidal control the Live API by way of Max, this way of ensuring tight MIDI sequencing would imo be a very solid foundation for any bigger projects built with AbletonOSC. It directly addresses one of the shortcomings mentioned in the paper, and hopefully might reignite interest in controlling Live via Tidal.

1 Like

Absolutely fantastic, Robin!

1 Like

I just saw your post and I was wondering if you continued to work on this Max external. I'd be really interested in seeing it!

I'm currently working on a live coding language and I'd like to write a Haskell library that makes it possible to play musical patterns inside Ableton. Even though I do not plan to make it work specifically for Tidal (my main purpose is my own language, which is still a work in progress), such a library could also be used for Tidal, so I'll try to make it agnostic.

Basically my goal is to be able to tell Ableton "at this timestamp, do this", where the timestamp uses the Ableton Link timeline and "do this" is a LOM command (with additional capabilities to play MIDI as well as automation), just like the ones used in AbletonOSC. The difference from AbletonOSC would be that the scheduling is tight, and one can play MIDI notes without having to actually insert them into a MIDI clip. It would also be possible to schedule automation, in the form of Max envelopes linked to some parameters, also triggered at precise times. For this, I was thinking of making a Max external that listens for OSC commands, parses them, and schedules a message directly using Live's sequencer. Is this what your external does?

hey there! yeah that's pretty much exactly what the external does. it parses the OSC messages and writes events into a multichannel audio buffer, which is then read out via a phasor~ and a ton of index~ objects linked to live's transport. that's pretty much the only way to get sample accurate scheduling within ableton.
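the arithmetic behind it is really just mapping a position within the bar to a sample offset in that buffer. roughly like this (a python sketch of the idea only, not the actual external; the sample rate, tempo and the buffer_index name are example values i've made up here):

    SR = 48_000            # example sample rate
    BPM = 120.0            # example tempo
    BEATS_PER_BAR = 4

    # number of samples the lookup buffer holds for one bar (96000 at 120 BPM)
    samples_per_bar = round(SR * 60.0 / BPM * BEATS_PER_BAR)

    def buffer_index(bar_position):
        # map a position within the bar (0.0 .. 1.0) to the sample index
        # where the event marker gets written into the buffer
        return round(bar_position * samples_per_bar) % samples_per_bar

    # an event on the second beat of a 4/4 bar lands 24000 samples in
    print(buffer_index(0.25))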

i'll make an effort to tidy up the code enough to share it on github over the course of the week. it's not pretty, but it's been working rock solid for months now. and while the required structure of the OSC messages is hardcoded for now, it definitely doesn't have to be (or you could simply adapt it to your purposes). since it's just OSC, you can of course use it with your own Haskell library rather than tidal.

one big caveat though is that my timestamps are not absolute, but always relative to a position within a "cycle" or bar/measure, and therefore the program scheduling the events and the max or live session hosting the external need to share a timeline via ableton link.
i've previously tried working with absolute NTP timestamps instead, which i was sort of able to pull off in a really naive way where i made a syscall at every sample of the audio buffer, which is obviously extremely inefficient (great for heating rooms in winter though).
ever since tidal implemented link i've abandoned that approach; otherwise my next step would have been to figure out how to periodically poll systime and calculate the necessary offsets within each audio buffer.

anyway long story short i'll probably have the repo ready at some point during the weekend :+1:

1 Like

This looks great! Writing events to an audio buffer is a clever trick, I didn't think of that! I use Tidal's Link binding in my library, so if it works with Tidal, it should work for me too. Right now I am focusing on getting it to work with SuperCollider, and as soon as that works I'll start trying to get it to work with Ableton Live.

Anyway, if you post a repo for this on GitHub, I'll probably contribute!