How to replay a live coding session

Hi there,

One of the most fascinating aspects of music is the possibility of replaying it. We can record a track and play it back any time we want.

Does the same hold for live coding? What are the options?

We can record the generated audio. Yes, this allows us to replay a set, but is it enough?

We have videos; this is great because besides listening we can see the coder hitting the keyboard, but is it enough?

For us, writing is the starting point. Our score is code, but unlike a traditional score it lacks a time reference. We focus on the moment and store only the latest snapshot, losing the whole creative process along the way.

Some would say that the whole point of live coding is that it is "live". But the use cases are not limited to the live set: think about rehearsals, releases containing code, education, etc.

Also, while I'm writing this on a Tidal Cycles forum, the same reasoning could easily be applied to any other live coding or algorithmic audio/video software out there. After all, we all share the same technology: a text editor.

Do you know of some tools we could already use?

I'd like to speculate on a set of requirements for such a tool (a minimal sketch of a possible log format follows the list):

  • It should be able to store every single character entered together with the position of the cursor and the timestamp.
  • It should allow handling sessions so we could start, stop and replay them easily.
  • It should be independent of the editor so everyone could benefit from it.
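
To make the first two requirements concrete, here is a minimal sketch (in Haskell, since this is a Tidal forum) of what such a log could look like, assuming an append-only file of timestamped records; the EditEvent type and logEvent function are made-up names for illustration, not an existing API:

    -- One timestamped record per keystroke, appended to a session file.
    import Data.Time.Clock (UTCTime, getCurrentTime)

    data EditEvent = EditEvent
      { evTime   :: UTCTime  -- when the keystroke happened
      , evLine   :: Int      -- cursor line at the time of the edit
      , evColumn :: Int      -- cursor column
      , evChar   :: Char     -- the character that was entered
      } deriving (Show, Read)

    -- Write one record per line so a replay tool can read them back in order.
    logEvent :: FilePath -> Int -> Int -> Char -> IO ()
    logEvent path line col c = do
      now <- getCurrentTime
      appendFile path (show (EditEvent now line col c) ++ "\n")

A session is then just a file, which also covers starting, stopping and replaying.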

I'm very curious to know what you guys think about it.

1 Like

I'm also interested in this though from a slightly different direction. I'm interested in reactive music for interactive systems.

In the past I've always thought about that from the perspective of procedurally layering carefully prepared loops (which is how it's most often done) or some more broadly ambitious generative composition system (which is a huge, fascinating rat hole but still a rat hole). But more recently, especially after spending more time with tidal, it seems like it would be nice to do something like what you describe.

What I imagine is recording a tidal session and then coming back and post-processing it to add parameters which a reactive re-performance system could use to adjust the sound. For example, you might leave the choice of which particular drum sound to use, or which scale to draw notes from, to the re-performance system, so that it could keep those consistent as it sequenced together chunks of music. Or you might identify sections of the performance which could be stretched or omitted if the system needed to change the length of the piece.

It seems like this could be interesting even outside of reactive music. It would let a musician encode moments where they chose to change a pattern in one way but might have chosen to change it in a different way. Then on re-performance the system could choose a different option each time. Obviously that very quickly gets tangled without very careful planning for what decisions the re-performer can make and how those decisions interact. Not that that is a new problem in livecoding, just the same problem in a new layer.
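
Purely as speculation, the annotation layer described above could be as simple as tagging each recorded edit with the choices the re-performer is allowed to make; all of these names are made up for illustration:

    -- Speculative sketch: a recorded edit is either fixed, offers the
    -- re-performance system a choice, or marks a stretchable section.
    data Annotated
      = Fixed String             -- replay exactly the code that was recorded
      | Choice String [String]   -- the recorded option, plus alternatives
                                 -- the re-performer may pick instead
      | Stretchable Int String   -- a section the system may repeat up to n
                                 -- extra cycles if the piece needs lengthening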

Dunno, something very interesting in there.

1 Like

Hey!
If you are looking to reproduce a certain composition you developed in a live coding language, you could just record your screen and study your piece to reproduce it, just like an instrumentalist studies a piece, memorizing the score.
Another thing you could do is create a little program that saves a snapshot of your code, let's say every 25 seconds (see the sketch below). Or it could push your code to a GitHub or GitLab repository. Or you could do it manually!
I think these would be two options for documenting your creative process and how the music develops over time.
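
A minimal sketch of the periodic-snapshot idea, assuming your live code lives in a single file (the file and directory names here are hypothetical, and the snapshots/ directory is assumed to exist):

    -- Copy the working buffer to a timestamped snapshot every 25 seconds.
    import Control.Concurrent (threadDelay)
    import Control.Monad (forever)
    import Data.Time (getCurrentTime, formatTime, defaultTimeLocale)
    import System.Directory (copyFile)

    main :: IO ()
    main = forever $ do
      stamp <- formatTime defaultTimeLocale "%Y%m%d-%H%M%S" <$> getCurrentTime
      copyFile "live.tidal" ("snapshots/live-" ++ stamp ++ ".tidal")
      threadDelay (25 * 1000 * 1000)  -- 25 seconds, in microseconds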

1 Like

This is nothing like a complete tool, but it gets the basics:

I was able to record a session from my editor and play it back. The recorded session is included in the repo. It doesn't capture every keystroke; for that you'd need deeper editor/terminal integration. It just logs the events that actually go to tidal, i.e., when a pattern changes. And it's up to you to figure out how to install it in your editor in whatever way makes sense, which isn't a very user-friendly experience. I'll likely poke at this project some more because it's a thing I'd like to have, and maybe it'll turn into something actually useful.
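
For anyone wondering what playback amounts to: conceptually it's just re-sending each logged chunk of code after the recorded delay. A rough sketch, where sendToTidal stands in for however your editor ships code to the interpreter (both the log shape and that function are assumptions, not this project's actual format):

    -- Replay a log of (secondsFromStart, code) pairs in real time.
    import Control.Concurrent (threadDelay)
    import Control.Monad (foldM_)

    replay :: (String -> IO ()) -> [(Double, String)] -> IO ()
    replay sendToTidal events = foldM_ step 0 events
      where
        step prev (t, code) = do
          threadDelay (round ((t - prev) * 1000000))  -- wait out the gap
          sendToTidal code                            -- re-evaluate, as if typed live
          pure t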

3 Likes

Great idea, I was thinking about this today too - awesome to see you actually produce something!

I use ORCA quite a bit - sounds like this could be useful there too

Would you consider expanding it to include recording MIDI channel commands too?

Oh, wow. ORCA looks astounding. Like Befunge for music. I haven't looked at how it's set up, but I'm going to guess that capturing its state would be trickier. The thing I wrote relies on the fact that there's an air gap between the editor and tidal where you can cram hooks like this. ORCA seems like it's likely more integrated, so you'd need something more invasive if it doesn't already have a logging function.

For capturing MIDI, I suspect you'd be better off looking at existing MIDI recording tools; I haven't used any of them, but I know they're out there. Getting the MIDI and tidal playback synchronized is likely the tricky part. I'll admit that MIDI isn't a part of my workflow, so even if I do expand this prototype that's unlikely to be a priority for me.

This is super interesting... I'll check the code later and give it a try.

My experimental 'feedforward' editor does this. It records every keypress with an accurate timestamp, and there's a way to get it to replay sessions.

I've used it in 'algorithmic drumming circle' sessions, where I got eight kids (8-year-olds) to use tidal together, each with their own speaker and computer (a Raspberry Pi). I then replayed their sessions as a kind of art installation. I wrote about it here: https://zenodo.org/record/3346443

I've long thought that conventional revision control systems need reinventing for algorithmic music. They're really linear, whereas music is cyclic. A music RCS should help you develop a piece of music so that you get back to where you started, mapping out different potential paths, rather than following a linear path where you keep adding features, with only forking and rejoining of that path and no returning (apart from some clunky backporting).

So I think how best to record a live coding performance in terms of symbolic edits, and then allow people to manipulate those edits, is an interesting research question that could be a nice bridge between music improvisation and composition.

7 Likes

I'm keen if you ever wanted to pursue this research further and collaboratively.

1 Like

This is very interesting

That's really cool. The log format Feedforward produces looks great.

I've experimented with this a bit more for my reactive music use case. I have the stub of an embeddable re-performance library. I have some ideas about the intermediary representation it will use, something between a log like Feedforward produces and the final, executable pattern, but I haven't tried implementing anything yet. For my case it needs both to be easy to manipulate by hand and to expose parameters the host system can manipulate programmatically. It may be that using the MIDI-in support that tidal already has is enough for the programmatic side.

A bit more detail on the thing I've written if anyone is interested:
After looking at what would be involved in getting a Haskell library linked into my target environment (a fully statically linked Rust application), I decided to try reimplementing the core bits of tidal that I needed in Rust rather than trying to link. Not because Rust is a better choice for this stuff (it's hard to imagine a worse language for live coding), but because it's the language of the host application, and linking Haskell across an FFI boundary is tricky. After fiddling with how to represent the Haskell-style functional API in Rust (it isn't pretty), I got a thing that supports a handful of functions and a large subset of mini-notation. It actually wasn't that bad to write and should be pretty easy to extend. But my goal is very explicitly not to recreate all of tidal, just to be able to easily use tidal as a composition tool for my reactive music and play those compositions back inside the application.
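
For anyone curious what "the core bits" means: the essential idea being ported is that Tidal represents a pattern as (roughly) a function from a time span to the events inside it. A simplified Haskell sketch of that idea (this mirrors the real Pattern type in spirit only; the actual definition carries more state):

    -- A pattern queried over a span [from, to) yields timed events.
    type Time = Rational

    data Event a = Event { evStart :: Time, evStop :: Time, evValue :: a }

    newtype Pattern a = Pattern { query :: Time -> Time -> [Event a] }

    -- One event per cycle, repeating forever.
    always :: a -> Pattern a
    always x = Pattern $ \from to ->
      [ Event (fromIntegral c) (fromIntegral c + 1) x
      | c <- [ceiling from .. ceiling to - 1] :: [Integer] ]

    -- Speed a pattern up by a factor: the heart of Tidal's 'fast'.
    fast :: Time -> Pattern a -> Pattern a
    fast k (Pattern q) = Pattern $ \from to ->
      [ Event (s / k) (e / k) v | Event s e v <- q (from * k) (to * k) ]

Most of the combinator layer is transformations like fast that rewrite the queried span and the returned events, which is why a port can start small and grow function by function.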

1 Like

@alec is your code open? Really interested in it 🙂

It is. There is a link in the previous message, but it's basically invisible. The code is here: https://github.com/alec-deason/paguroidea

1 Like

Hi ohjimijimijimi,

In SuperCollider there is a class called History, and its convenient sister HistoryGui.
I used them in the past; what they do, once you start a document history, is record every OSC message sent to the server, AFAIR.
They let you see every message received, go back in the history, and re-execute a message if you want.
I am not familiar with Tidal's innards, but I guess it should be possible to build on this class to achieve what you are describing.

My 2 cents

Would something like this work?

1 Like

What a cool project!
I especially like the part in the paper talking about the indigenous American knot-tying language (code). This technology was also utilized in New Mexico by the Pueblos to expel the Spanish in the 17th century.

1 Like

huh, revisited this thread by chance and realised I'd missed this post - awesome feature, awesome project with the kids too!

Side note, one of the highest-value things I think I've gotten from this course/forum is the exposure to some really great projects that you and others are working on or have worked on. It's inspiring and humbling - thank you 🙂

1 Like