How does the Tidal scheduler work?

Hello everybody, I'm pretty new to Tidal and I love it.
I have been live coding for a couple of years, using Kyma as the backend for sound synthesis and a LiveCoding package I have been writing for Pharo as the frontend.
Now I would love to build some kind of "Tidalism" :slight_smile: inside Pharo. I know it's hard and I don't know if I'll get there, but it's worth trying.
I wonder how I can better understand Tidal's scheduler. Is there a paper about it, or a particular module I should look at in the source code? My Haskell knowledge is pretty basic, but I am studying it in the meantime and slowly getting better :slight_smile: Thanks for your help.

p.s.
Here's a glimpse of the LiveCoding package for Pharo:

Hi!

The scheduler uses Link to translate cycles into wall-clock time. This was introduced in Use ableton link for scheduling by Zalastax · Pull Request #898 · tidalcycles/Tidal · GitHub. The Tempo module is the driver of the scheduling. It works in chunks (whose length is defined by cFrameTimespan, 50 ms by default), scheduling all events within the chunk currently being processed. We process chunks ahead of time; how far ahead is defined by cProcessAhead, which is 333 ms by default.
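
Roughly, in code, the idea looks like this. This is only a minimal sketch of chunked look-ahead scheduling, not the actual Tempo implementation; the names (frameTimespan, processAhead, eventsIn) and the one-event-per-second toy pattern are made up for illustration:

```haskell
-- Sketch of chunked look-ahead scheduling: every frame, prepare the chunk
-- that lies `processAhead` seconds in the future and send its events with
-- timestamps attached. Illustrative only.
import Control.Concurrent (threadDelay)
import Control.Monad (forM_)

frameTimespan :: Double   -- chunk length in seconds (cFrameTimespan is 50 ms)
frameTimespan = 0.05

processAhead :: Double    -- how far ahead chunks are prepared (cProcessAhead is 333 ms)
processAhead = 0.333

-- Stand-in for querying the pattern: one event per whole second, returning
-- the onsets that fall inside the half-open window [from, to).
eventsIn :: Double -> Double -> [Double]
eventsIn from to = takeWhile (< to) [fromIntegral (ceiling from :: Int) ..]

-- Each pass prepares one future chunk, "sends" its events, then sleeps one frame.
loop :: Double -> IO ()
loop now = do
  let from = now + processAhead
      to   = from + frameTimespan
  forM_ (eventsIn from to) $ \onset ->
    putStrLn ("send event timestamped for t = " ++ show onset ++ " s")
  threadDelay (round (frameTimespan * 1e6))
  loop (now + frameTimespan)

main :: IO ()
main = loop 0
```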

Can you explain in more depth how this is done? I'm also curious about the Tidal scheduler and have never found anything written about it. Part of my fascination with live coding environments comes from how they deal with time.

Thanks @Raph and @Zalastax, I will look inside the Tempo module (and it will take time). What interests me particularly is how time "ticks" in Tidal. It is extremely precise and I love it.

A lot of that precision comes from OSC and SuperCollider. Tidal doesn't actually tick precisely, but it does attach the precise time to the event messages it sends, and then SuperCollider uses that information to schedule precise audio events on the sound server.

Because Tidal is always thinking a little ahead (as mentioned above), events get sent out early enough to be properly scheduled.
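
As a toy illustration of the arithmetic involved: the numbers below (and the idea of a fixed "cycle zero" wall-clock time) are made up for the example; in Tidal the cycle-to-clock conversion actually goes through the Link session.

```haskell
-- Converting an event's logical position (in cycles) into the wall-clock
-- timestamp attached to its outgoing message. Made-up numbers.
main :: IO ()
main = do
  let cps        = 0.5625        -- cycles per second (Tidal's default tempo)
      cycleZero  = 1700000000.0  -- wall-clock time (in s) at which cycle 0 started
      eventCycle = 3.25          -- the event's onset, in cycles
      latency    = 0.1           -- headroom so the receiver can schedule it precisely
      timestamp  = cycleZero + eventCycle / cps + latency :: Double
  putStrLn ("event at cycle " ++ show eventCycle
            ++ " gets stamped for wall-clock time " ++ show timestamp ++ " s")
```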

There's also a mode where Tidal sends each event just in time. It spawns a new "thread" for each event to send and delays that thread according to the timestamp. But this is not as accurate, so it's best to avoid it. Tidal/Stream.hs at ba97ca5db7fa0326f67709a34585c1a10962eacc · tidalcycles/Tidal · GitHub
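
In spirit, that just-in-time mode looks something like this. A minimal sketch, not the code from Stream.hs; sendAt is a made-up helper:

```haskell
-- Fork one thread per event and let it sleep until the event is due.
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_, void)
import Data.Time.Clock.POSIX (getPOSIXTime)

now :: IO Double
now = realToFrac <$> getPOSIXTime

sendAt :: Double -> String -> IO ()
sendAt due msg = void . forkIO $ do
  t <- now
  threadDelay (max 0 (round ((due - t) * 1e6)))  -- the wake-up time is not exact
  putStrLn msg                                   -- a real version would send OSC here

main :: IO ()
main = do
  t0 <- now
  forM_ [0.0, 0.5, 1.0] $ \offset ->
    sendAt (t0 + offset) ("event at +" ++ show offset ++ " s")
  threadDelay 1500000  -- keep main alive so the forked threads get to run
```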

this makes a lot of sense to me

This is super interesting, thanks for the insight! I would love to read a book one day about the different strategies for timing in real-time computer music systems, but I haven't found any literature about it. It would allow comparing the strategies used by different software such as SC, Tidal, etc.

Not the book you want, but, e.g.: Scheduling and Server timing | SuperCollider 3.12.2 Help

But then I expect that this also depends on the lower-level drivers that are used (jack, pipewire, alsa).

And it seems a mystery that this all works (and works in combination, and on various hardware).

So yes, I want that book too.

[EDIT] Paul Davis has a section on Synchronization in Ardour (The Ardour Manual)

Tidal represents time as rational numbers, which helps keep precision when it comes to tuplets and complex embedded transformations, and Tidal patterns are represented as functions of time, giving it its particular flexibility and constraints. Scheduling is then just a matter of calling the pattern function to calculate the events for the current timeslice. It calculates slices at 20 Hz by default.
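
A stripped-down sketch of that idea, for the curious (not Tidal's real types, which live in Sound.Tidal.Pattern and carry more information per event, such as whole/part spans):

```haskell
-- "A pattern is a function of time", with time kept as Rational.
import Data.Ratio ((%))

type Time = Rational

data Event a = Event { onset :: Time, value :: a } deriving Show

-- Querying a half-open span [from, to) yields the events inside it.
type Pattern a = (Time, Time) -> [Event a]

-- Four evenly spaced events per cycle, cycling through four sample names.
fourBeats :: Pattern String
fourBeats (from, to) =
  [ Event t (names !! fromInteger (n `mod` 4))
  | n <- [ceiling (from * 4) .. ceiling (to * 4) - 1]
  , let t = n % 4 ]
  where names = ["bd", "sn", "hh", "cp"]

-- The scheduler just calls the function for each timeslice; here we ask for
-- one 1/20-cycle slice (50 ms of time at one cycle per second).
main :: IO ()
main = mapM_ print (fourBeats (1 % 4, 1 % 4 + 1 % 20))
```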

It sounds really interesting. I hope to be able to implement something with a steady pace in Pharo. 20 Hz should not be too demanding for the program. Is there any reason for the 20 Hz choice?

No, not really! Some testing would find the sweet spot in terms of latency and efficiency.

Thinking about this, does it mean that if I send OSC messages to another program that doesn't read timestamps (such as Max or Kyma), I could have some jitter?

Yes, some jitter would be likely. You would need to use the same trick as Tidal: try to send the message at the right time, e.g. by creating separate threads that are paused until they should send the message. This method may get jitter from poor timing of waking up threads, and also from unpredictable delays in communicating with the other software.
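
If you want a feel for how big that wake-up jitter is on your own machine, a quick measurement like this gives a rough idea (plain GHC, nothing Tidal-specific):

```haskell
-- Ask to sleep exactly 50 ms many times and record how late the wake-ups are.
import Control.Concurrent (threadDelay)
import Control.Monad (replicateM)
import Data.Time.Clock.POSIX (getPOSIXTime)

overshootMs :: IO Double
overshootMs = do
  before <- realToFrac <$> getPOSIXTime
  threadDelay 50000                        -- request exactly 50 ms
  after  <- realToFrac <$> getPOSIXTime
  return ((after - before - 0.05) * 1000)  -- how late we woke up, in milliseconds

main :: IO ()
main = do
  samples <- replicateM 100 overshootMs
  putStrLn ("average overshoot: " ++ show (sum samples / 100) ++ " ms")
  putStrLn ("worst overshoot:   " ++ show (maximum samples) ++ " ms")
```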

This is also what I have experienced with multiple techniques/languages. However, computer music has been around for more than 50 years, and I'm sure there must be some document compiling the "methods that work" for scheduling in musical time while taking delay and jitter into account. I'm surprised that nothing can easily be found on this issue, as it would be immensely useful for all live coders (and more generally for anyone wanting to program a sequencer or a musical timeline system).

Where does the high precision of SC come from? Is it because it is running at a very high rate / audio rate? This would mean that spawning an audio thread is necessary to replicate a fully working Tidal (and Strudel does that, am I right?). More generally, it would mean that to have the full Tidal experience and split cycles into tiny, tiny chunks, you would need extreme precision on the scheduler side. Max/MSP can offer similar guarantees if you schedule using signals and audio-rate objects. I wonder if @julian can shed some light on the topic.

The temporal precision of Tidal so far is truly impressive given how unusual the scheduling method is compared to other approaches to scheduling / sequencing. Definitely something that should be emphasized more! :melting_face:

This article touches on some of the points you ask about: Real-time Audio Processing (or, why it's actually easier to have a client-server separation in Supercollider) - Resources - scsynth
