Context
So, for the last three years I've been participating in a research project whose main by-product is AudioStellar. AudioStellar is an open-source, data-driven experimental sampler. It basically allows one to build a 2D map of a collection of audio files. Usually the spectrum of each sound is used, so the resulting map organizes sounds by spectral similarity. Around this visualization, we've built different creative ways of exploring the map (like casting particles that travel with different types of movement, or using the distance between sounds as rhythm to create sequences). The software also has an OSC API that lets one build other kinds of explorations of the map.
The problem
As a performer who has been using Tidal for over a year, I've found that integrating AudioStellar into my performances has been difficult and uncomfortable. There are a few reasons for this:
- AudioStellar has a GUI that needs to be onscreen in order for anything to work. I don't actually want or need to be looking at the GUI.
- AudioStellar's audio engine is much more limited than what SuperCollider offers. SC allows creating and destroying audio chains on the fly, which helps with polyphony and FXs. AudioStellar has a more traditional approach, with channels and FXs that are added to each channel.
- Extending AudioStellar's sound capabilities is harder than extending SuperCollider's, mainly because AudioStellar is written in C++, and designing sounds and FXs in SuperCollider is, for me, much easier than doing it in C++.
So, what I want to explore now is bringing some of AudioStellar's ideas to Tidal and SuperDirt.
If I were to strip AudioStellar down to its core, it is:
- A pipeline that builds a map from a collection of audio files
- A way to ask questions of that map, e.g. which sound is closest to this coordinate (x, y)? Which sounds belong to this group? Etc.
Setting aside the first part (which is mainly implemented in Python, and which I have no intention of porting since it's an offline process), the second part could easily be implemented in SuperDirt, mainly as a wrapper around the default sampler.
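To make the "questions to the map" concrete: since the dataset is just a JSON array where every sound has a path and a position, the two queries can be sketched in a few lines of Python. This is only an illustration under assumed field names (`path`, `position`, `cluster`) and a made-up dataset, not AudioStellar's actual schema or API:

```python
import json
import math

def load_dataset(json_str):
    """Parse the dataset: a JSON array where each sound has a path
    and a 2D position (field names here are an assumption)."""
    return json.loads(json_str)

def closest_sound(sounds, x, y):
    """Which sound is closest to coordinate (x, y)?"""
    return min(sounds, key=lambda s: math.dist(s["position"], [x, y]))

def sounds_in_cluster(sounds, cluster):
    """Which sounds belong to this group?"""
    return [s for s in sounds if s.get("cluster") == cluster]

# Tiny hand-made dataset for illustration only
data = """[
  {"path": "kick1.wav",  "position": [0.1, 0.2],   "cluster": 0},
  {"path": "kick2.wav",  "position": [0.15, 0.25], "cluster": 0},
  {"path": "hihat1.wav", "position": [0.8, 0.9],   "cluster": 1}
]"""

sounds = load_dataset(data)
print(closest_sound(sounds, 0.3, 0.5)["path"])          # -> kick2.wav
print([s["path"] for s in sounds_in_cluster(sounds, 0)])  # -> ['kick1.wav', 'kick2.wav']
```

A SuperDirt version of this would do the same lookup, just keyed by the event's parameters instead of function arguments.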
What I imagine is something along the lines of:
-- really rudimentary example, where we choose the sample by a coordinate, which is not very interesting
d1 $
s "supersampler"
# x 0.3
# y 0.5
-- play samples only belonging to a specific group (probably similar in timbre)
d1 $
n "0*3 2(3,8)"
# s "supersampler"
# cluster "0"
-- instead of choosing a group by an index, you could use previously tagged groups
d1 $
n "0*3 2([3|5],8) <~ 7*2>"
# s "supersampler"
# cluster "[kicks|toms]"
Of course these are really simple examples; they're just meant to illustrate the possibilities.
So, what I think would be an easy way to start experimenting is to wrap the SuperDirt sampler with a way to resolve which buffer needs to play:
~dirt.soundLibrary(\supersampler, (play: { |dirtEvent|
    // Grab the parameters of the event (like #cluster, #note, etc.)
    // and use them to resolve which buffer needs to be played.
    // This is where the AudioStellar API would come into play.
    // Pass the rest of the parameters (like #pan, #speed) along as-is.
    Synth(\sampler, [params]);
}))
So, for now I have set up a repo here to start experimenting with loading a dataset (a JSON array where every sound has a path and a position field) into a Dictionary.
I've also basically copied SuperDirt's sampler implementation to use here, but it would be better to reuse the SuperDirt sampler itself.
The test that I made for now kind of works, but it sounds weird (mainly the envelopes, since I believe SuperDirt has a separate SynthDef that acts as an envelope, outside of the sampler).
So, any pointers on how to reuse the SuperDirt sampler with this approach?
Also any commentaries, suggestions, questions are kindly welcome! <3