Live workflow and mixing/EQ options

Hi. Super new to Tidal and looking for some input/sharing regarding live workflow.

Here is a screenshot of my current thinking:

(First post. Hopefully the screenshot uploads correctly.)

Wondering if it is an inherently bad idea to split channels into separate files in order to better visualise all that is going on. I found myself getting quite lost recently when improvising with a friend and am wondering how others manage.

Also, in a live setting, what do people do for EQ/mixing? Pure code, or is it best to output to a DAW (using a controller), or to route separate audio channels to a mixer in order to control EQ, gain/volume, etc.? If a DAW is a good solution, any recommendations for a minimal Linux DAW? (I don't need a full-blown DAW, just simple audio processing.)

Alternatively, is there a way to use SuperCollider (with a controller) to handle EQ and levels? (I am sure there is...)

And one last question: does anyone have info on receiving an external MIDI clock via USB in Tidal or SuperDirt?

Any input would be greatly appreciated.

This is the only bit I feel I can handle... if it works for you, it's not inherently bad. Defining what "works" actually means is crucial, but if there are no issues with your Tidal/SC comms or resource load, I can't see why you wouldn't use this approach.
I personally would struggle with the line length, but otherwise it looks pretty functional.

Another alternative to keep from getting lost is to consider changing your syntax. I noticed @kit-christoper (and I believe @kindohm as well) use a slightly unorthodox approach where characters that usually begin a line (e.g. #, $) are often placed at the end of the previous line, and line terminators (frequently the , in stacks and so forth) are placed at the beginning of lines.

The commas in particular add some nice visual markers, imo.
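Roughly like this (my own quick sketch to illustrate, not taken from their sets):

d1 $ stack [
  s "bd*4" #
    gain 0.9
  , s "~ cp ~ cp"
  , s "hh*8" #
    pan sine
  ]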

Here's one of the sets where I noticed this layout:


Thanks for the input and vid. Lots to learn. I have picked up Tidal for a collaboration with a friend using modular gear and Max/MSP, so I have the added factor of operating in a more improvised fashion rather than a linear one. Some of the vid's syntax structure will definitely help. My screenshot above shows initial setup thoughts, which relate to the MIDI controller I am considering using (XoneK2). Thought I would see how others handle things before committing. Again, thanks for your feedback.

I like to keep all the code related to a single session/piece/song in a single file; that's often not too much to deal with. I like your indentation, it's pretty similar to mine (example: https://github.com/ndr-brt/sc-adente/blob/master/tidal/practices/nice/deuskan.tidal).

I do everything from the code; it's more immediate, and single tracks' gain can be controlled by functions too :slight_smile:
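For example, something like this (just a minimal sketch):

-- ramp a track's gain between 0.7 and 1 over 8 cycles
d1 $ s "bd*4" # gain (range 0.7 1 $ slow 8 sine)

-- or simply retype a static value and re-evaluate
d2 $ s "hh*8" # gain 0.8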

Thanks. More code to study!

I am sure that as I get more experience with Tidal and how it can best be structured, a single-file solution will probably win out. I am currently working on a four-file approach to better visualise the corresponding channels on my MIDI controller... will see how that goes in rehearsal/development tomorrow.

One of my main reasons for wanting to incorporate a controller / hardware mixer was to avoid being an idiot and missing decimal points in on-the-fly gain/FX declarations ((>.<))

For the decimal stuff, you could define another gain function (e.g. g) that divides your value by 100 and passes it to the gain function, so you can use values 0..100, no decimals involved :slight_smile:


Nice. Do you have an example snippet of such a declaration and its usage handy? If not, it should be easy enough to figure out when I have a chance. Thanks!

yeah, sorry but I was on the phone.
Here's the function:
let g x = gain (x/100) -- scale 0..100 down to gain's usual 0..1 range

so you can set gain from 0 to 100, e.g.
d1 $ s "bd" # g 80 -- equivalent to gain 0.8


@cleary busted! yes, i definitely ripped off Mike's "indent-everything-on-its-own-line" style. literally the only reason i started putting the operator at the end of the line is because Atom doesn't recognize and syntax-highlight the operator at the beginning of a newly indented line.
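e.g. the difference between these two (made-up) layouts:

-- operator at the start of the indented line (atom's highlighter trips on this):
d1 $ s "bd*4"
  # gain 0.9

-- operator at the end of the previous line (what i switched to):
d1 $ s "bd*4" #
  gain 0.9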



Nice mix btw Kit. Will go through your channel when I have the chance. Got a lot to learn.

Maybe we can fix that in the Atom plugin... can you open an issue? :slight_smile:

Changing habits to satisfy editor needs is a typical developer antipattern; it's better to change the editor :smiley:


absolutely. thank you for offering to fix!

edit: @ndr_brt, i have opened the issue but am having trouble with the markdown examples. not sure how to make that part more clear. let me know if you want me to edit!

For a Linux DAW, I can’t recommend Reaper enough...

https://www.reaper.fm/

Thanks, I have tried Reaper in the past. Have been using Ableton forever, but am now trying to switch everything over to Linux. I think I am leaning more towards getting things done in Tidal/SuperDirt/SuperCollider or via hardware, rather than adding extra system load.

How heavy is Reaper on CPU and memory? (Obviously it would depend on what you are doing with it...) Also, is there anything to be aware of or missing in the free version? (As far as I know they have a paid upgrade, right?)

fixed, in v3.10.1 :wink:


sick! confirming it works both ways now... it might take me some time to undo my antipatterns :rofl:


thanks @ndr_brt!


I haven't noticed any performance issues with Reaper yet... but I am only using it for a few things: live monitoring (with some VST compression and EQ), and recording into one stereo track...

I was using Audacity for a bit to achieve a similar outcome, but got a bit frustrated when I found that I couldn't add any real-time effects (VST or otherwise) to the live-monitored signal. All effects must be done post-recording...

It's worth noting that the Reaper Linux build is marked as 'experimental' -- but I haven't run into any issues with it. 30 day trial I believe...

Thanks for the extra info. No realtime FX with Audacity is an immediate flag... but as far as I knew, Audacity was always more of a wave editor / sample prepper than a DAW. MIDI implementation in Tidal is working for me at the moment for mixing and FX, so I am going to explore that before looking at the DAW route.
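For anyone curious, the basic shape of what I'm exploring looks something like this (a sketch based on the Controller Input page, assuming a controller CC has been mapped to the name "gain1" on the SuperCollider side):

d1 $ s "bd*4" # gain (cF 1 "gain1") -- cF: a float control with default 1, read from the "gain1" input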

Hi,

I don't have any experience with Tidal, because I have just started using it, but I did some post-processing and mixing using Sonic Pi. As both use SuperCollider, and this is anyhow just a question of processing audio data, there shouldn't be much of a difference. What I have no idea about is how you route the different IO streams (d1...), but I guess that should be possible.

If you have managed that (please let us know here), and if you work with Linux, you can use Carla, which is a very capable audio plugin host for the variety of plugin formats available in the Linux world (LADSPA, DSSI, LV2, VST2, VST3 and AU).

If you want a mixer, you could try Nonmixer, which I have successfully tried out but did not use in the longer term. It also speaks OSC, so if you happen to own an OSC-capable hardware controller you can control it from the outside; one solution (although this is software) could e.g. be a tablet with TouchOSC or Open Stage Control. (By the way: Carla also speaks OSC, but I have never tried that out.)

You will have to find a solution for session management so that you can simply load your routing and plugins, but I was quite content with what Carla had to offer (you can simply save your setup and load that settings file again). If you want to know more, I'd recommend this video talk from 2017 by Filipe Coelho (aka falkTX), who is also the current maintainer of the Jack Audio Server.

Hope this goes in the direction you were looking for.


Wow, thanks for all the info! Nonmixer looks insane. Can I ask what you settled on in terms of setup when you were using Sonic Pi, if not Nonmixer?
In terms of IO streams, I believe this is what you are referring to: https://tidalcycles.org/index.php/Separate_audio_outputs (the outputs can then be routed via Jack).
Also, this: https://tidalcycles.org/index.php/Controller_Input converts MIDI to OSC, negating the need for OSC hardware.
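On the Tidal side, sending streams to those separate outputs is just a matter of assigning orbits, e.g. (a sketch, assuming SuperDirt was started with enough output channels):

d1 $ s "bd*4" # orbit 0 -- first output pair
d2 $ s "hh*8" # orbit 1 -- second output pair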