Hi all, new here. I have an idea I'm trying to bring to fruition, and would love some extra eyes to see if it's even possible.
The basic idea is that I have a performer, an audience, and a projector. The:
- Performer has a DI into a machine either running Tidal (local) or reading from and writing to a stream with Tidal (remote)
- Projector displays a QR code that leads audience members to a link. The display for the QR code knows (a) how many connections are currently open (i.e. how many people have opened the link) and (b) when there are no active connections.
- Audience (maybe 20-30 people) open the link, which brings them to a web server that plays some randomly generated oscillator. The oscillator is different for each audience member; however, each oscillator gets sent to the performer's Tidal instance, where it is used as FM, AM, PM, or LFO modulation on the performer's input signal. That is, the more people that have the link open, the more modulation occurs on the performer's raw signal (and if no one has it open, the signal is basically a bypass).
tl;dr: audience opens link, a single oscillator plays on their phone and is sent to modulate the performer's sound; more links open = louder individual amplitude for each oscillator/connected device. When there are no more active connections, there are no more oscillators and no more sound.
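To make the "more links open = louder" idea concrete, here's a minimal sketch of one possible amplitude rule (the function name, the linear curve, and the `base`/`cap` values are my own assumptions, not anything fixed by the design):

```python
def oscillator_amplitude(n_connections: int, base: float = 0.05, cap: float = 1.0) -> float:
    """Per-oscillator amplitude: grows with the number of open links, capped at `cap`.

    With zero connections there is no oscillator at all, so the performer's
    signal is effectively bypassed.
    """
    if n_connections <= 0:
        return 0.0
    return min(cap, base * n_connections)

# 1 listener -> each oscillator is quiet; 20 listeners -> each is at full amplitude
print(oscillator_amplitude(1))   # 0.05
print(oscillator_amplitude(20))  # 1.0
```

Any monotonic curve would do here; a logarithmic one might feel more natural so the first few connections make the biggest difference.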
My technical implementation idea would be that the web server is maybe a GCP or AWS instance running all of the software. When an incoming connection happens, spin up a Docker container (or k8s pod) and run Tidal inside, open two ports (say 1234 and 5678), play audio on the audience member's phone from port 1234, and send the same audio on 5678 to another running Tidal instance that is waiting for incoming oscillators. Each pod/container talks to a master controller that keeps track of how many active connections are on the server; the amplitude increases with the number of connections.
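Whatever the transport ends up being, the "master controller" part is just bookkeeping. A rough sketch, under the assumption that it only needs to track active connection IDs and report the count (the class and method names are made up):

```python
class ConnectionController:
    """Tracks active audience connections; modulation depth scales with the count."""

    def __init__(self) -> None:
        self.active: set[str] = set()

    def connect(self, conn_id: str) -> int:
        """Register a new open link; returns the updated connection count."""
        self.active.add(conn_id)
        return len(self.active)

    def disconnect(self, conn_id: str) -> int:
        """Remove a closed link; returns the updated connection count."""
        self.active.discard(conn_id)
        return len(self.active)

    @property
    def bypassed(self) -> bool:
        # No open links -> the performer's signal passes through unmodulated
        return not self.active
```

Each container/pod would call `connect`/`disconnect` (e.g. over HTTP or a message queue), and the Tidal side would poll the count to set modulation amounts.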
My questions are as follows:
- Can I run Tidal in a Kubernetes cluster?
- Can I have multiple instances talk to each other and send/receive input via a Kubernetes cluster network?
- Can I have yet another instance (container/pod/etc.) read every input and use it as modulation for a local input signal (say, one going directly into my computer running the server)? I actually don't know how this would work.
Any thoughts or ideas would be much appreciated!
Hi hive9, welcome to the club!
I have a bit of difficulty understanding your idea, unfortunately. Do you want the oscillator of the phone to play over the device's (phone's) loudspeakers? Or do you want to provide a graphical interface to create an oscillator that is used on the audio of the performer? Or do you want both?
In the first case, I think synthesising the audio on the devices (i.e. with web audio) seems to me the most desirable solution. Depending on the specifics, you might need to implement synchronisation. I definitely do not recommend synthesising the sound on a server and then streaming it to the devices, because of bandwidth (you're talking about 20-30 people on the same wifi) and latency (reliable synchronisation will be almost impossible in a live setting).
In case you want to use the phones to control effects that will be applied to the performer's audio, you could send the control messages via OSC directly to the performer's computer and synthesise the effect there. You could distinguish the different devices either within the OSC message or by opening a different port for every device. The performer's PC would have to listen to these messages and react to them, which should be doable without too much hassle in SuperCollider.
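For a feel of what "sending control messages via OSC" involves, here's a sketch of encoding and sending a simple OSC message over UDP without any third-party library (the address scheme `/audience/<id>/freq` and the target host are made up for illustration; 57120 is SuperCollider's default language port):

```python
import socket
import struct

def _osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying only float32 arguments."""
    msg = _osc_pad(address.encode())
    msg += _osc_pad(("," + "f" * len(floats)).encode())  # type tag string, e.g. ",f"
    for f in floats:
        msg += struct.pack(">f", f)  # arguments are big-endian float32
    return msg

def send_mod(sock: socket.socket, host: str, port: int, device_id: int, freq: float) -> None:
    # Hypothetical address scheme: one OSC address per audience device
    sock.sendto(osc_message(f"/audience/{device_id}/freq", freq), (host, port))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_mod(sock, "192.168.1.10", 57120, 3, 440.0)
```

In practice you'd use a library like python-osc (or whatever your web stack offers) rather than hand-encoding, but the wire format really is this simple.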
If you want both, I still recommend synthesising on the device locally and exchanging osc messages for communication and synchronisation.
Hi @fnab, thanks for the response!
Do you want the oscillator of the phone to play over the device's (phone's) loudspeakers? Or do you want to provide a graphical interface to create an oscillator that is used on the audio of the performer?
The former - I want the phone to basically play an oscillator with as little input from the user as possible.
In the first case, I think synthesising the audio on the devices (i.e. with web audio) seems to me the most desirable solution.
I agree, this seems the most logical. OSC allows me to send messages to a listening device, correct? What would be the route to the device? Would I need my machine to have a SuperCollider listening on the public internet? Or is there some middle man approach that would work?
In general, this is a helpful response, thank you for your clarifications!
OSC allows me to send messages to a listening device, correct?
It's specifically designed for communication between services on a network, and it's also how Tidal communicates with SuperCollider. To receive a message, you simply need to listen on a specific port (and the other device has to send an OSC message to that device on that port).
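The listening side is equally simple. A sketch in plain Python, assuming messages carry only float32 arguments (the port number is arbitrary, and in practice SuperCollider's `OSCdef` would do this for you):

```python
import socket
import struct

def parse_osc(packet: bytes):
    """Decode the address and float32 arguments of a simple OSC message."""
    def read_padded_string(data: bytes, offset: int):
        end = data.index(b"\x00", offset)
        s = data[offset:end].decode()
        # skip the null terminator plus padding to the next 4-byte boundary
        return s, offset + ((end - offset) // 4 + 1) * 4

    address, off = read_padded_string(packet, 0)
    tags, off = read_padded_string(packet, off)  # e.g. ",f" or ",ff"
    # Assumes every argument is a 4-byte float32; real OSC allows other types
    args = [struct.unpack_from(">f", packet, off + 4 * i)[0]
            for i, t in enumerate(tags[1:]) if t == "f"]
    return address, args

def listen(port: int = 9000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))  # one shared port, or one port per device
    while True:
        packet, _ = sock.recvfrom(1024)
        print(parse_osc(packet))
```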
What would be the route to the device? Would I need my machine to have a SuperCollider listening on the public internet?
If it's a viable option for you, I recommend setting up a wifi network to which the audience can connect, and doing all the networking there. That saves you a lot of hassle with firewalls, port forwarding, and whatnot. You could show a QR code on the projector that connects to that wifi, and perhaps even directly redirect to the web page that contains the audio app, which you host only within this dedicated network (either from the performer's device or from the access point).
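The wifi-joining QR code mentioned above has a de facto standard payload format (`WIFI:...`) recognised by phone camera apps. A sketch of building it (SSID and password here are placeholders); note that this payload only joins the network, so opening the web page would need a second QR code or a captive-portal redirect:

```python
def wifi_qr_payload(ssid: str, password: str, auth: str = "WPA") -> str:
    """Build the 'WIFI:' payload string understood by phone camera apps.

    Per the format, the characters \\ ; , : and double-quote must be
    backslash-escaped inside the SSID and password fields.
    """
    def esc(s: str) -> str:
        for ch in "\\;,:\"":
            s = s.replace(ch, "\\" + ch)
        return s
    return f"WIFI:T:{auth};S:{esc(ssid)};P:{esc(password)};;"

print(wifi_qr_payload("tidal-show", "modulate-me"))
# WIFI:T:WPA;S:tidal-show;P:modulate-me;;
```

Feed the resulting string to any QR generator and put the image on the projector.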
The web page could run a web-audio app. Unfortunately I have never worked with web audio, so I cannot provide any specifics on how to implement it exactly, but synthesising simple waveforms should be easy with web audio. One caveat: browser pages can't listen on raw UDP ports, so to get OSC messages to the phones you'd likely need to bridge them over something like WebSockets.