Learning more about sound on Linux

(I apologize if this is not in the right category, since it doesn't directly concern Tidal)

I'm getting a bit lost with all the different components of sound on Linux (JACK, OSC, ALSA...), and I'd like to learn a bit more in depth about how it all works. Does anyone have any good resources or courses on this topic?
Thank you!

I find the Arch wiki to be a great resource regardless of which distribution of Linux you are using.

https://wiki.archlinux.org/index.php/Professional_audio


I don't know of any good resources explaining how all this works for musicians (other than what I've linked below). However, my current Linux production setup (which is gloriously streamlined compared to my Mac/Windows setups) is based on Sevish's guide.

At some point I plan on doing a video on my production setup, as it has become rather complex and involves combining a lot of different tools (Tidal included), so I've had to find ways to automate setup & routing. The blog post below is a great start though, and is what my setup is based on (specifically the KXStudio toolset, including Carla).


Frankly, this forum may be your best resource? I'd start digging (like you are) and ask questions as you get stuck - here's a 10,000 ft view of the items you mentioned:

JACK:
An audio routing service (JACKD <- the D refers to daemon, i.e. a continually running service).
JACK-aware audio applications can register audio/MIDI inputs and outputs with the JACK service, and you can use something like qjackctl's Connect dialog to connect the inputs and outputs of different applications and "route" audio between them.
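For example, the same routing can be done from the command line with the JACK utilities; a rough sketch (the client and port names are just examples, check the jack_lsp output for yours):

# list every audio/MIDI port currently registered with the JACK server
jack_lsp

# connect SuperCollider's outputs to the system playback ports
# (client/port names vary per application; these are typical defaults)
jack_connect SuperCollider:out_1 system:playback_1
jack_connect SuperCollider:out_2 system:playback_2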

OSC:
An IP-network-based protocol for sending control messages to listening devices/software/services. It could be considered a control protocol somewhat like MIDI (in the sense that you can control similar sorts of devices with OSC), but the comparison doesn't hold up for long. I find OSC significantly more flexible, and significantly more complicated to understand.
OSC communication operates outside of something like JACK; it just requires an IP network interface (local or external).
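As a rough sketch, if you have the liblo tools installed you can fire a test OSC message straight from the shell; the /hello address and port 57120 (sclang's default listening port) are just examples here:

# send an OSC message to localhost on UDP port 57120
# "sif" declares the argument types: string, int, float
oscsend localhost 57120 /hello sif world 1 0.5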

ALSA:
The low-level sound architecture implemented in the Linux kernel to manage sound card/MIDI devices. JACK and PulseAudio both operate as an abstraction layer on top of ALSA, i.e. they both require ALSA in order to function, but their intent is to provide more convenient tools/functionality for certain purposes.
PulseAudio is intended to simplify desktop audio management, JACK provides realtime audio routing capabilities, and you can even bridge PulseAudio into JACK using JACK-aware PulseAudio modules (allowing you to route audio from non-JACK-aware applications like web browsers into JACK via the PulseAudio JACK modules).
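If you want to see what ALSA itself knows about, a couple of commands (from alsa-utils) are enough:

# list the sound cards the kernel has registered
cat /proc/asound/cards

# list ALSA playback devices in more detail
aplay -l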

Hope that clears up a little of the confusion. I was on the same journey you are about three years ago; the only way I got through was a lot of trial and error and reading bits and pieces, and I don't recall finding a single coherent resource that explained everything.


kinda new to linux myself :slight_smile:
lots to learn...
this helped me on elementary (ubuntu-ish) linux

Thank you all for your inputs! I'm going to read all that attentively.

@eris I had no idea Sevish used Linux! This made me dive back into his music, which is always a nice thing :grin:

The repo @abalone1969 linked itself links (in the scripts) to https://wiki.linuxaudio.org/wiki/start, which seems to be a good resource as well, so I figured I would add it to this thread too.


www.linuxmusicians.com and LAU (the Linux Audio Users mailing list) are good resources.

For modular setups, and when using things like SuperCollider, JACK (qjackctl) is what people use.

On the Linux desktop you're dealing with PulseAudio (which is a layer above ALSA, I think). On some occasions PulseAudio and JACK fight over access to the sound card. There are several solutions for this; mine is to use the internal sound card for PulseAudio and the external card for JACK.
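For what it's worth, one way to pin JACK to the external card only is to name it explicitly when starting jackd; the device name hw:USB and the buffer settings below are just examples (check aplay -l for your card names):

# run JACK on the external interface, leaving the internal card to PulseAudio
jackd -d alsa -d hw:USB -r 44100 -p 256 -n 2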

The app 'Cadence' might be an interesting way to deal with this problem. It has tools for bridging PulseAudio to JACK, and also for bridging ALSA MIDI to JACK MIDI (which is more accurate).

Yes, it's not an ideal world, but with the script mentioned you have a decent start. Another option is to install the AV Linux distro, which does the configuration for you.

One interesting way to manage setups, when using softsynths and a software mixer for instance, is the Non Session Manager. Applications like ZynAddSubFX, Carla and Non Mixer are handy then too.


Hey, just FYI, this has been possible in PulseAudio itself for many years - on most Debian-based distros I've run across you'll find the module-jack-sink and module-jack-source PulseAudio modules.

They usually load automatically from the following stanza in /etc/pulse/default.pa:

### Automatically connect sink and source if JACK server is present
.ifexists module-jackdbus-detect.so
.nofail
load-module module-jackdbus-detect channels=2
.fail
.endif

and can also be loaded manually:

pactl load-module module-jack-sink
pactl load-module module-jack-source

a2jmidid is also available to do the ALSA MIDI to JACK MIDI bridging - just add a2jmidid --export-hw to the qjackctl startup script (and potentially a killall a2jmidid in the pre-shutdown script too).
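For reference, an "execute script after startup" for qjackctl could look something like this minimal sketch:

#!/bin/sh
# bridge hardware ALSA MIDI ports into JACK MIDI, in the background
a2jmidid --export-hw &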

Just mentioning this because I believe Cadence is not that widely available in the default package repos (whereas both of these options are).


I've spent hours trying to configure sound on various Linux distributions. I've learned a lot in the process, but it definitely feels like casting spells, invoking stuff and taking a shamanism-oriented approach to audio routing. Recently, I've learned about PipeWire, a huge project aiming to make audio/video routing easy and painless on Linux. The project has gained a lot of traction and is almost ready to be released in various distributions (Fedora, etc.).

For those of you who are running an Arch-based distribution, it is now possible to give it a try. All you need to do is follow the instructions on the Arch Wiki. Be careful to install all the drop-in packages; they will route the audio of your apps into PipeWire automatically. Catia is still needed for manual audio routing, and everything you know about MIDI still applies as well. It is not really "automatic" and "painless" yet, but it is definitely nicer than starting and configuring various incompatible sound servers.
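On Arch that boils down to installing PipeWire plus the drop-in replacements for the other sound servers (package names as of this writing; check the wiki in case they change):

# PipeWire core plus the ALSA, PulseAudio and JACK drop-in packages
sudo pacman -S pipewire pipewire-alsa pipewire-pulse pipewire-jack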

I just started to use it, so here are some of the things I've noticed:

  • Some applications will sometimes "unplug" themselves from PipeWire.
  • Performance is really good, but not quite as good as with a traditional JACK server.
  • All the heavy lifting is done behind your back. This is a relief in many cases, but sometimes it is still difficult to know what is going on, or how to configure things manually if needed (there are config files, of course).

I hope that PipeWire will make it into the main distros in a year or two.
