All the MIDI-in-to-Tidal setups I've seen control values of pre-defined patterns. Even if that control is spread across the keyboard, you are still controlling variables instead of triggering events. I wondered about this and commented on it in the last stream (Stream #5). Alex responded that it would be possible, so I'm creating this topic to talk about it. I'm not 100% sure my idea came across in the stream or in this paragraph, so I'll explain better:
I imagine the possibility of writing something like this (pseudocode warning):
d1 $ stutWith 3 (1/8) (|+ note 7) $ midiIn # s "arpy" # room 0.4 # pan rand
And have this work in a way that if you press, for example, A4 on your MIDI keyboard, it would generate the same result as doing:
once $ stutWith 3 (1/8) (|+ note 7) $ note "a4" # s "arpy" # room 0.4 # pan rand
This way Tidal could also turn into a physical instrument itself, and therefore, the most powerful playable sampler in the world. I think it'd be cool.
I don't know anything about Haskell, so I wouldn't even be sure how to start approaching this. Tell me what you think of this idea and whether it'd be possible. I've been told there's a 'midiIn' feature in FoxDot, so that might be something worth looking into too.
I know it is possible to take MIDI in for Control Change (CC) parameters, i.e., you could use the knobs on your synth to control Tidal. I wonder if the implementation below can be adapted to read the MIDI note number (instead of CC).
I recently found a way to trigger events with a MIDI keyboard, by simply translating MIDI events into keystrokes, with software like MidiKey2key, or even xdotool if you're on Linux.
For example, you can associate MIDI note 49 with a combination of:
typing a string like 'once $ note "c2" # s "arpy"'
and then pressing the Ctrl+Enter shortcut to evaluate that string.
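On Linux, the xdotool side of that could look something like this (a sketch; how you bind the MIDI note to the script depends on your MIDI-to-keystroke tool, and the evaluate shortcut depends on your editor):

```
# type a Tidal one-liner into the currently focused editor window
xdotool type --delay 0 'once $ note "c2" # s "arpy"'
# then send the editor's evaluate-line shortcut
xdotool key ctrl+Return
```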
There are some inconveniences for sure, like flooding your code sheet with keystrokes, but I guess some interesting tricks can be found (I haven't explored that technique much yet).
My post linked above explains how to get note values - there is some code there you can use in SuperCollider to do so.
The second part of your question is beyond my skills, but I have seen @yaxu creating Tidal patterns from an old-fashioned music box, so I'm sure it is possible.
I don't think you'll be able to achieve this with the current state of the FoxDot MIDI implementation. If you're good with Python it might be possible.
It should be more feasible in Tidal, but maybe a bit tricky.
Can you explain what you are trying to achieve? Not from a technical perspective as that's clear from your post, but in terms of a workflow?
This sort of thing can be done by sending mini-notation strings to Tidal over Open Sound Control (OSC), which it then reads with the cP function.
So you could send an OSC message with path /ctrl and two string values "chord" and "[0,2,7]" to localhost port 6060, and pick it up with cP in a pattern.
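A minimal sketch of such a pattern (the exact name and signature of cP can vary between Tidal versions, so treat this as an approximation):

```
-- read the "chord" control as a mininotation pattern of note numbers
d1 $ n (cP "chord") # sound "superpiano"
```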
You could then write something in supercollider to take midi input, construct the mininotation string and send it to tidal.
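A rough sketch of that SuperCollider side, assuming Tidal's ctrl listener is on port 6060 as above (the variable names like ~held are made up for this example):

```
// sketch: build a mininotation chord string from the currently held
// MIDI notes and send it to Tidal's ctrl listener
(
MIDIClient.init;
MIDIIn.connectAll;
~tidal = NetAddr("127.0.0.1", 6060);
~held = Set.new;  // note numbers currently held down
~send = {
    // e.g. holding C4, E4, G4 produces the string "[0,4,7]"
    var str = "[" ++ ~held.asArray.sort.collect({ |n| n - 60 }).join(",") ++ "]";
    ~tidal.sendMsg("/ctrl", "chord", str);
};
MIDIdef.noteOn(\chordOn, { |vel, num| ~held.add(num); ~send.value; });
MIDIdef.noteOff(\chordOff, { |vel, num| ~held.remove(num); ~send.value; });
)
```

Note this sends "[]" when no keys are held; you may want to guard against that or send a rest instead.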
That works, and I'm fairly sure that @eris has a video demo of a nicely developed use of this. It isn't as immediate as pressing a key and hearing the result, though. Interesting that @a.d.du.nord has found a workaround.
It could be possible to have something that takes direct MIDI (or other) input to trigger events, but then what happens if you manipulate time in the pattern? It could keep a history, so that going back in time would work, but what about going forward in time? The timelords and ladies wouldn't be happy.
I guess there would be two main workflows, but both could be summed up as 'using Tidal as a playable instrument':
Using Tidal as a sampler instrument, either live or for recording in a larger production. Kind of like creating a SynthDef inside SuperCollider that's meant to be played with a MIDI keyboard.
Livecoding an instrument while another person plays it. This is also something that's done in live SuperCollider performances. You could have, let's say, an "electricpiano" sample and define it as playable. Another person plays that sample on a MIDI keyboard while you add effects and randomize parameters: an interactive livecoding-keyboardist session.
Yes, I'm sure a lot of functions wouldn't work because of the clear difference between a pattern and a stream of arriving notes, but a lot of stuff would still be available to work with. Even just a simple every 2 (# pan 0.2) with a # vowel "a e i" at the end would be amazing, although I'm most interested in using 'stutWith', 'superimpose' or 'off' to make crazy delays and effects. But I don't know much about how any of these functions work under the hood, so maybe I'm talking nonsense.
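To make that concrete, here is a rough Tidal sketch of wrapping effects around incoming notes, assuming they arrive on a made-up "mnote" control via the cP mechanism described earlier (cP's exact signature varies across Tidal versions, so this is only illustrative):

```
-- sketch: pan every 2nd cycle, add the stutWith delay idea,
-- around notes arriving on a hypothetical "mnote" control
d1 $ every 2 (# pan 0.2)
   $ stutWith 3 (1/8) (|+ note 7)
   $ note (cP "mnote") # s "arpy" # vowel "a e i"
```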
Well, if you want to manipulate live incoming audio with Tidal, and you don't mind using a DAW like Ableton, you can set one channel to record your live instrument, add a filter (for the vowels a e i o u), and control that filter's parameters with MIDI CC values sent by Tidal. Wrap those parameters in a variable.
Hey @geikha - I think as Alex says you are going to run into some problems with timing, but I think ultimately what you want is do-able with a combination of the techniques described in this thread. But you might lose some of the fluency you are imagining.
I've been thinking about this problem a lot, and found something like this almost works - maybe this can give you a starting point for further tweaking.
d1 $ sound "arpy*16" # up (cF 1 "n" - 60) # gain (cF 1 "t")
Where "n" is taking the midi note number from your keyboard, and "t" is using noteon and noteoff to control gain. So you aren't triggering the note, but making it audible with a keypress.
Use this script in SuperCollider in your startup.
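Presumably the script looks something like this (a sketch: it feeds the "n" and "t" controls used in the pattern above and sends to the ctrl port mentioned earlier in the thread; adjust the port to match your config):

```
// sketch: forward MIDI note-on/note-off to Tidal's ctrl listener
(
MIDIClient.init;
MIDIIn.connectAll;
~tidal = NetAddr("127.0.0.1", 6060);
MIDIdef.noteOn(\tidalNoteOn, { |vel, num|
    ~tidal.sendMsg("/ctrl", "n", num.asFloat);  // note number, read by cF _ "n"
    ~tidal.sendMsg("/ctrl", "t", 1.0);          // gate on, read by cF _ "t"
});
MIDIdef.noteOff(\tidalNoteOff, { |vel, num|
    ~tidal.sendMsg("/ctrl", "t", 0.0);          // gate off: gain drops to 0
});
)
```

Note that any note-off zeroes the gate, even if another key is still held; a more careful version would track held notes.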
You could change the number of values per cycle to something a bit different. Maybe you could do something clever with legato too, I need to think about that one a bit more.
It might work better with a granular synthesis type approach too.
As I said, probably not as fluent as you were hoping, but it might be the start of a solution. Let me know if you get anywhere; I'll keep thinking too.
Ah, that's an interesting approach. One problem might be that Tidal feels slow to react, because by default it's tuned for efficiency and accuracy rather than immediacy.
You could make it a bit more responsive by tweaking your BootTidal.hs; e.g. you could try reducing cFrameTimespan from 1/20 to 1/40, and reducing oLatency to 0.05 or lower.
These are tradeoffs - reducing cFrameTimespan too far will cause high cpu usage, and reducing oLatency will cause 'late' messages to supercollider and timing issues.
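In a stock BootTidal.hs those settings live in the startTidal call, so the tweak would look roughly like this (a sketch using the values suggested above; your boot file may differ):

```
-- excerpt from BootTidal.hs with the suggested responsiveness tweaks
tidal <- startTidal
  (superdirtTarget {oLatency = 0.05, oAddress = "127.0.0.1", oPort = 57120})
  (defaultConfig {cFrameTimespan = 1/40})
```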
Granular synthesis is a good idea too. Have you tried it with a high chop? e.g. d1 $ (slow 16 $ chop 256 $ s "bev") # up (cF 1 "n" - 60) # gain (cF 1 "t")