Bringing AudioStellar capabilities to SuperDirt and Tidal


So, for the last three years I've been participating in a research project whose main by-product is AudioStellar. AudioStellar is an open-source, data-driven experimental sampler. It basically lets you build a 2D map out of a collection of audio files. Usually the spectrum of each sound is used, so the resulting map organizes sounds by spectral similarity. Around this visualization we've built different creative ways of exploring the map (like casting particles that travel with different types of movement, or using the distance between sounds as rhythm to create sequences). The software also has an OSC API that lets you create other kinds of explorations of the map.

The problem

As a performer who has been using Tidal for over a year, I've found that integrating AudioStellar into my performances has been difficult and uncomfortable, for a few reasons:

  • AudioStellar has a GUI that needs to be onscreen in order for things to work. I don't actually want or need to be looking at the GUI.
  • AudioStellar's audio engine is much more limited than what SuperCollider offers. SC allows creating and destroying audio chains on the fly, which helps with polyphony and FXs; AudioStellar takes a more traditional approach, with channels and FXs added to each channel.
  • Extending AudioStellar's sound capabilities is harder than extending SuperCollider's, mainly because AudioStellar is written in C++, and for me designing sounds and FXs in SuperCollider is much easier than in C++.

So, what I want to explore now is bringing some of AudioStellar's ideas to Tidal and SuperDirt.
If I stripped AudioStellar down to its core, it is:

  1. a pipeline that builds a map from a collection of audio files
  2. a way to ask questions of that map, e.g. which sound is closest to this coordinate (x, y)? Which sounds belong to this group? etc.

Setting aside the first part (which is mainly implemented in Python, and which I have no intention of porting since it's an offline process), the second part could easily be implemented in SuperDirt, mainly as a wrapper around the default sampler.
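As a rough sketch of that second part, and assuming the dataset format described later in the thread (a JSON-like array where each sound has a path and a 2D position; the `cluster` field and all names here are hypothetical), the two kinds of queries could look like this in Python:

```python
import math

# Hypothetical in-memory dataset: each sound has a file path,
# a 2D position on the map, and a cluster index.
sounds = [
    {"path": "kick1.wav", "position": [0.10, 0.20], "cluster": 0},
    {"path": "kick2.wav", "position": [0.15, 0.25], "cluster": 0},
    {"path": "hat1.wav",  "position": [0.80, 0.70], "cluster": 1},
]

def nearest(x, y):
    """Which sound is closest to coordinate (x, y)?"""
    return min(sounds, key=lambda s: math.dist([x, y], s["position"]))

def in_cluster(cluster):
    """Which sounds belong to this group?"""
    return [s for s in sounds if s["cluster"] == cluster]

print(nearest(0.3, 0.5)["path"])                 # -> kick2.wav
print([s["path"] for s in in_cluster(0)])        # -> ['kick1.wav', 'kick2.wav']
```

The sclang wrapper would answer the same questions, just resolving to a buffer number instead of a path.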

What I imagine is something along the lines of:

-- really rudimentary example, where we choose the sample by a coordinate, which is not very interesting
d1 $ 
s "supersampler"
# x 0.3
# y 0.5

-- play samples only belonging to a specific group (probably similar in timbre)
d1 $ 
n "0*3 2(3,8)"
# s "supersampler"
# cluster "0"

-- instead of choosing a group by an index, you could use previously tagged groups
d1 $ 
n "0*3 2([3|5],8) <~ 7*2>"
# s "supersampler"
# cluster "[kicks|toms]"

Of course these are really simple examples; they're just to illustrate the possibilities.

So, what I think would be an easy way to start experimenting is to wrap the SuperDirt sampler with a way to resolve which buffer needs to play:

~dirt.soundLibrary.addSynth(\supersampler, (play: { |dirtEvent|
  // grab the event's parameters (#cluster, #note, etc.) and use them to resolve
  // which buffer needs to be played. This is where the AudioStellar API would come into play.
  // pass the rest of the parameters (like #pan, #speed) through as-is
  Synth(\sampler, [params]);
}));

So, for now I've set up a repo here to start experimenting with loading a dataset (a JSON array where every sound has a path and a position field) into a Dictionary.
I've also basically copied SuperDirt's sampler implementation to use here, but it would be better to reuse the SuperDirt sampler itself.
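For reference, that loading step is simple enough to sketch; here is a minimal Python version, assuming only the field names described above (`path` and `position` — the file contents are made up). The sclang version does the same thing with a Dictionary, so the sounds can be addressed by index:

```python
import json

# Hypothetical dataset matching the described format: a JSON array
# where every sound has a "path" and a "position" field.
raw = """
[
    {"path": "samples/kick.wav",  "position": [0.12, 0.34]},
    {"path": "samples/snare.wav", "position": [0.56, 0.78]}
]
"""

# Index the sounds by their position in the array, so a pattern
# parameter like ~n can address them directly (as in dataset[~n]).
dataset = {i: s for i, s in enumerate(json.loads(raw))}

print(dataset[0]["path"])   # -> samples/kick.wav
```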

The test I made for now kinda works, but it sounds weird (mainly the envelopes; I believe SuperDirt has a separate SynthDef that acts as an envelope, outside of the sampler).
So, any pointers on how to reuse the SuperDirt sampler with this approach?

Also any commentaries, suggestions, questions are kindly welcome! <3


This sounds like a very nice project, and I think it would be a great feature to have and experiment with. I think @yaxu, @lwlsn (and possibly others) have talked about implementing similar ideas in the past, especially figuring out a way of representing the metadata of samples and being able to query it. I don't think any of it got as far as implementing something. Here is a related thread and issue:


I could also imagine this working as a translation layer on the Tidal end. Writing a Haskell function that uses the database to turn an x,y pair (or a cluster name and index?) into a sample name might be more straightforward than doing it in SuperCollider.


Yeah, I thought about that, but I'm not that versed in Haskell and I wouldn't know where to start.
For now I want to experiment with what I know and sketch some things.
The main roadblock I have right now is routing the sampler through SuperDirt's chains (envelope and FXs), since it's sounding super weird :frowning: (mainly because synth instances won't release themselves and pile up).

Looking at the hacks folder, I came up with:

~dirt.addModule(\supersampler, (play: {
  // example just using the ~n param as an index
  b = dataset[~n].at(\sound).bufnum;
  Synth(\dirt_sample_1_2, [\bufnum, b]);
  // I think there are different versions of this SynthDef, for mono/stereo samples and n outputs
}));

Anyways, thanks for the comments!

Yeah, I think augmenting samples with metadata could be awesome. For these types of maps it's trickier, since a sound's coordinates are tied to a specific map, and these maps are usually non-deterministic (the same sounds run through the same process yield a different result).
I think other metadata like loudness, spectral centroid, or chromagram would work better.
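To make the contrast concrete, here is a toy Python sketch of that kind of deterministic metadata query. The loudness and centroid values are made up; in practice they would come from an offline analysis pass, but unlike map coordinates they don't change if the dataset is reanalyzed:

```python
# Hypothetical per-sample metadata (values invented for illustration):
# loudness in dB, spectral centroid in Hz.
metadata = {
    "kick.wav":  {"loudness": -6.0,  "centroid": 120.0},
    "snare.wav": {"loudness": -9.0,  "centroid": 1800.0},
    "hat.wav":   {"loudness": -14.0, "centroid": 6500.0},
}

def query(max_centroid=None, min_loudness=None):
    """Return sample names whose metadata falls inside the given ranges."""
    matches = []
    for name, m in metadata.items():
        if max_centroid is not None and m["centroid"] > max_centroid:
            continue
        if min_loudness is not None and m["loudness"] < min_loudness:
            continue
        matches.append(name)
    return matches

# Deterministic: the same metadata always yields the same selection.
print(query(max_centroid=2000.0))    # -> ['kick.wav', 'snare.wav']
print(query(min_loudness=-10.0))     # -> ['kick.wav', 'snare.wav']
```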

~dirt.addModule(\supersampler , (play: { …

Modules are meant only as event-based effects. I see this should be documented a little more!
It is better to use the sound library; there is also a hook called playInside, which is useful if you want to use SuperDirt's bus structure:

var audiostellar = { |x, y| /* add the functionality here ... */ };
~dirt.soundLibrary.addSynth(\supersampler,
	(playInside: { |e|
		// derive the sample buffer number from x and y
		var bufNum = audiostellar.value(~x, ~y);
		// make sure that you play it within the ~synthGroup and to the out
		(
			instrument: \sampler,
			bufNum: bufNum,
			out: ~out,
			freq: ~freq,
			amp: ~amp,
			group: ~synthGroup
		).play
	})
);

This is from an example in hacks/function-hacks.scd.


Well, thinking about it a little more, I think this is a good opportunity to actually learn Haskell :no_mouth:

Ohh, thanks! I didn't know about playInside (I was looking at hacks/core-synths-hacks.scd).
Yeah, and I was mixing up soundLibrary.addSynth and addModule.

I'll probably keep trying things in sclang in parallel with reading Learn You a Haskell.

Yes, it could go either way, but I'd probably recommend keeping the audio-analysis-driven stuff on the SuperDirt side.