I've been experimenting with connecting TidalCycles to Claude via the Model Context Protocol, and I thought I'd share my findings in case anyone finds them useful.
Essentially, it lets you have conversational interactions with Tidal - ask Claude to generate patterns, modify what's playing, or explore ideas without leaving the chat interface. It's not intended to replace the creative process, but rather to offer an alternative way to interact with Tidal when exploring or learning.
The server handles the actual Tidal evaluation, so you can ask things like "create a drum pattern with a half-time feel" or "make that bassline more syncopated", and it'll work with the usual Tidal syntax under the hood.
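For illustration, requests like those might come back as something in this ballpark (hand-written here for the post, not actual output from the server):

-- "a drum pattern with a half-time feel": snare on beat 3 instead of 2 and 4
d1 $ stack [ s "bd ~ ~ bd", s "~ ~ sn ~", s "hh*8" # gain 0.6 ]

-- "make that bassline more syncopated": push some hits off the beat
d2 $ note "c2 ~ [~ eb2] g1 [~ c2]" # s "bass"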
It's still early days, and I'm figuring out what works well and what doesn't. If you're curious and want to try it out, the code is on GitHub.
Fair warning: it's beta software, and using AI for music creation isn't everyone's cup of tea - which I completely understand. If you do try it, I'd be interested to hear what you think (good or bad).
I'm quite against using AI for these tasks overall, but I do find it interesting as an interface. So not precisely for loose prompts, but rather so that people who have a hard time typing for various reasons (such as arthritis, Parkinson's, etc.) can still get into livecoding, or keep their interest in it if they develop such impediments. I see the project is mostly about conversing with Claude rather than having it work in an editor, though.
Does this prompt actually work? If so, please include example output.
Care to explain why it should work? Does the LLM create the audio signal (running Tidal and SuperCollider) and then match it (its spectrogram) to the textual description? Or is it just text-matching - so it would sorta kinda work if it had ingested enough blog posts where some drum pattern notation occurs near the words "half-time feel"?
What if the pattern does not appear verbatim (as a string literal) but uses Tidal's built-in operators, or user-defined (Haskell) functions? Then the LLM must grasp their semantics. That may be possible, but, in the words of Dijkstra - foolish (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html , 1978)
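To make that concrete, a made-up example (mine, not from the project):

-- "half-time" lives in the semantics of slow, not in any string literal
halftime :: ControlPattern -> ControlPattern
halftime = slow 2

d1 $ halftime $ s "bd sn"  -- nothing here literally says "half-time"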
or ... is this an elaborate joke? Those hundreds of emojis in the README? No human writes like that, and I certainly won't read it. But I can appreciate it as a work of art (a demonstration of the Zeitgeist).
By all means, do explore the possibilities. A creative mind can use any method of producing music.
d1 $ stack [ s "bd*2 bd ~ bd" # gain 0.9, s "~ ~ sn ~" # gain 0.85, s "hh*8" # gain 0.6 ]
This (depending on the mode you choose) is either piped to Tidal using the GHCi interface on the command line, or written directly to a Tidal file, which you can trigger yourself.
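Conceptually, the GHCi mode boils down to something like this (a very simplified sketch, not the actual code; the BootTidal.hs path is an assumption):

import System.IO
import System.Process

main :: IO ()
main = do
  -- start a GHCi session that boots Tidal
  (Just hin, _, _, _) <- createProcess
    (proc "ghci" ["-ghci-script", "BootTidal.hs"]) { std_in = CreatePipe }
  hSetBuffering hin LineBuffering
  -- each generated pattern is one line of Haskell for GHCi to evaluate
  hPutStrLn hin "d1 $ s \"bd*2 bd ~ bd\" # gain 0.9"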
It’s not perfect and hallucinates sample banks occasionally - but it’s really just a proof of concept at the moment.
Full disclosure, Claude wrote the README - AI seems to love the emojis. I could remove them (or ask it), but I enjoy the sparkle they add.
If I get the chance over the next few days I’ll record a demo of how it works.
Like I said, it’s a rough first draft, and I’m sharing early to get feedback from the community.
yes I (think I) understand what this is doing, but my question was - why do you think it would associate some Haskell/Tidal expression with the concept of 'half time'?
typing out the prompt for the language model - isn't that typing all the same?
even with current practice (tidal/ghc/supercollider) you're not typing the actual code that is run on the machine (by the CPU), but some text that is several abstraction layers away. The LLM is just another transformation added on top - albeit a very impractical one. Dijkstra's point (cited above) is that "natural language" as an extra layer is unsuitable for programming. It is a regression, after the progress of formalizing the sciences, and programming.
My point is, for artistic expression, this objection does not matter. You can play the funkiest music on the crappiest of instruments. Although, do chat with a sound engineer about proper mics and speakers ...
I feel very strongly that this is a false equivalence. Writing code and a prompt are not the same skill or practice.
The medium is the message, and as much as live coding is about anything, it is about using coding as a form of artistic expression. It runs counter to the norm of code as a means of production, surveillance, extraction, war.
At least until the bubble bursts, the medium of LLMs at large is about content, slop, fascism, and the industrialization of language.
I don't want to make assumptions about the original poster's intentions; plenty of people are certainly trying to use LLMs with different intentions, but I think the well is poisoned at this time, at least for myself.
Even if you're right, this process of enshittification will be wired into our entire social infrastructure before the bubble bursts. It's the biggest bubble ever, and the bubble-makers have deep pockets to keep this going for a long while. We will have to build whatever we can in the ruins of the old world.
whoa, easy there ... I was suggesting we discuss the proposal on its technical merit (how/why would it work? - how many attention heads/layers do we need to connect all the (textual representations of the) necessary concepts for "tidal cycles - half time feel"; and does it indeed? - by looking at example prompts and outputs).
If you're worried about the surveillance economy (I am), and if you care about reproducibility of scientific research (I do), then you can run open-weight, open-source LLMs locally. You could even compute the weights yourself (on the contents of tidalcycles.org and haskell.org?). Perhaps local resources are limited - so we get a restricted model - which could still be fun to use for making music. Art is always based on some restriction? (literature - only use words. painting - only use two dimensions, etc.)
Let's not be like "I don't use operating systems, ever, because Microsoft controls Windows, and Google controls Android". Of course, a Tidal front-end for some commercial LLM just helps feed user data to the provider. But so does linking to videos on YouTube, or to code on GitHub, and putting them there in the first place. In that regard, I appreciate that Tidal's code moved to Codeberg, and the solstice stream to Eulerroom.
I think this is a great initiative. Software for music production is notoriously difficult to learn and use unless you have a lot of time on your hands (speaking from personal experience). An LLM assistant can surely help the live coder learn the Tidal language, suggest patterns, and in particular help in discovery (functions you didn't remember, or never understood, from reading the manual).
what makes you so sure of this, for Tidal specifically? We should remember that an LLM answers the question "what is the most likely continuation if this text (the prompt) appeared on the open internet?"
The quality of this answer depends on:
- what (quantity and quality) the model has seen in training, and
- how much extra information the prompt contains.
it generally seems to "work" (in the sense of "create plausible-looking output") for well-known coding problems in popular programming languages - since Stack Overflow is full of would-be answers, which the poor LLM was trained on.
Do you think Tidal is popular enough? Or - that we can stuff enough information in the prompt?
For the project announced here - what exactly is in the prompt?
I've been experimenting with this too, sans MCP server. It's an easy way to get some interesting beats, but I'm generally underwhelmed by what it produces. It also seems to have a hard time referencing other music. For example, if I tell it to match the style of a real artist, it rarely comes close to what I'd expect.
Still a very exciting and strange thing to have an LLM helping as a production assistant.
Yes, there are very many of these vibe-coded projects around Tidal and Strudel, and they're just encouraging some very dodgy companies to wash the IP away from free/open-source software and music, while producing mediocre results.
I'll be happy to try it out if I can connect it to my local LLM.
I tried something similar with Blender, to help me draw out complex scenes using the Python API, and it actually worked (this was in the GPT-3 era).
I learned Python and Blender from it. It did need a lot of guidance. I imagine if you say "half-time feel" to an LLM, it might need some help implementing it. But it might actually help you learn some Tidal code patterns, to live code them yourself later on.
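For what it's worth, "half-time feel" does map onto fairly simple Tidal idioms once you've seen them - e.g. (my own sketch):

-- literally halve the speed of an existing pattern
d1 $ slow 2 $ s "bd sn bd sn"
-- or slow the pattern and the sample playback together
d1 $ hurry 0.5 $ s "bd sn bd sn"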
With the right intentions, it can be a valuable addition, I think. Thank you for sharing.
I'm not actually sure, I haven't tried the tool out, and it's possible that the results won't be good. Even so, I appreciate the attempt to make LLMs work for Tidal coding. We can't expect an LLM to produce consistent results, but it's very likely that it can help guide the user around the reference documentation, e.g. by suggesting ways to modify a pattern that the user comes up with.
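For example (hypothetical, plain Tidal - my sketch, not output from the tool): given a pattern the user wrote, suggestions might look like:

-- the user's starting point
d1 $ s "bd sn"
-- suggestion: a bit-crushed echo offset by an eighth of a cycle
d1 $ off 0.125 (# crush 4) $ s "bd sn"
-- suggestion: add some swing
d1 $ swingBy (1/6) 4 $ s "bd sn"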