Why asoundrc?

What is it good for, and why do I want one?

Neither the .asoundrc nor the asound.conf file is required for ALSA to work properly. Most applications will work without them. They are used to add extra functionality, such as routing and sample-rate conversion, through the alsa-lib layer.

The .asoundrc file

This file gives you more advanced control over your card/device. The .asoundrc file consists of definitions of the various cards available in your system. It also gives you access to the pcm plugins in alsa-lib. These allow you to do tricky things like combine your cards into one or access multiple I/Os on your multichannel card.

Where does asoundrc live?

The asoundrc file is typically installed in a user's home directory

	$HOME/.asoundrc

and is called from

	/usr/share/alsa/alsa.conf

It is also possible to install a system wide configuration file as

	/etc/asound.conf

When an ALSA application starts, both configuration files are read.

Below is the most basic definition.

The default plugin

Make a file called .asoundrc in your home directory (and/or in root's home directory).

        vi /home/xxx/.asoundrc

Copy and paste the following into the file, then save it.

	pcm.!default {
	    type hw
	    card 0
	}

	ctl.!default {
	    type hw
	    card 0
	}

The keyword default is defined in the ALSA lib API and will always access hw:0,0 - the default device on the default soundcard. Specifying the !default name supersedes the one defined in the ALSA lib API.

Now you can test:

	aplay -D default test.wav

The naming of PCM devices

A typical asoundrc starts with a 'PCM hw type'. This gives an ALSA application the ability to start a virtual soundcard (plugin, or slave) by a given name. Without this, the soundcard(s)/device(s) must be accessed with names like hw:0,0 or default. For example:

	aplay -D hw:0,0 test.wav

or with ecasound

	ecasound -i test.wav -o alsa,hw:0,0

The numbers after hw: stand for the soundcard number and device number. This can get confusing, as some sound "cards" are better described as sound "devices", for example USB sound devices. However, they are still "cards" in the sense that they have a specific driver controlling a specific piece of hardware. They also correspond to the index shown in

	/proc/asound/cards
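
For example, you can list your cards with:

	cat /proc/asound/cards

On a system with one PCI card and one USB device, the listing might look something like the following (the hardware names here are hypothetical):

	 0 [ICH5     ]: ICH4 - Intel ICH5
	                Intel ICH5 with AD1985 at irq 17
	 1 [Headset  ]: USB-Audio - Logitech USB Headset
	                Logitech USB Headset at usb-0000:00:1d.0-1, full speed

so hw:0 would address the Intel card and hw:1 the USB headset.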

As with most arrays, the first item usually starts at 0, not 1. This is true for the way pcm devices (physical i/o channels) are represented in ALSA, starting at pcm0c (capture) and pcm0p (playback).

We use subdevices mainly for hardware which can mix several streams together. It is impractical to have 32 devices with exactly the same capabilities. The subdevices can be opened without a specific address, in which case the first free subdevice is opened. We also temporarily use subdevices for hardware with a lot of streams (I/O connectors) - for example MIDI. There are several limits imposed by the minor numbers used (8 PCM devices per card, 8 MIDI devices per card, etc.).

For example, to access the first device on the first soundcard/device, you would use

	hw:0,0

to access the first device on the second soundcard/device, you would use

	hw:1,0

to access the second device on the third soundcard/device, you would use

	hw:2,1
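
A third number may optionally be appended to select a specific subdevice (assuming the card actually provides more than one); when it is omitted, the first free subdevice is chosen, as described above. For example, to address the second subdevice of the first device on the first card:

	aplay -D hw:0,0,1 test.wav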

The Control device

The control device for a card is the way that programs modify various "controls" on the card. For many cards this includes the mixer; some cards, for example the rme9652, have no mixer. However, they do still have a number of other controls, and some programs like JACK need to be able to access them. Examples include the digital i/o sync indicators, the sample clock source switch and so on.
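
For example, you can list a card's controls from the command line with amixer from alsa-utils:

	amixer -c 0 controls

This prints one line per control element; amixer -c 0 contents also shows their current values.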

Aliases

With the 'PCM hw type' you are able to define aliases for your devices. The syntax for this definition is:

	pcm.NAME {
	    type hw              # Kernel PCM
	    card INT/STR         # Card name or number
	    [device] INT         # Device number (default 0)
	    [subdevice] INT      # Subdevice number, -1 first available (default -1)
	    mmap_emulation BOOL  # enable mmap emulation for ro/wo devices
	}

For example, this gives your first soundcard an alias:

	pcm.primary {
	    type hw
	    card 0
	    device 0
	}

Now you can access this card by the alias 'primary'.

	aplay -D primary test.wav

Plugins

To see a full list of plugins, visit the alsa-lib documentation, which is generated daily from CVS.

Q: What are plugins?

A: In ALSA, PCM plugins extend the functionality and features of PCM devices. The plugins deal automagically with jobs like naming devices, sample rate conversion, sample copying among channels, writing to a file, joining sound cards/devices for multiple inputs/outputs (not sample synced), and using multichannel sound cards/devices, with other possibilities left for you to explore. To make use of them, you need to create a virtual slave device.
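
As one small illustration, the "writing to a file" job mentioned above is handled by the file plugin. A minimal sketch (the device name tee_out and the output path are arbitrary choices):

	pcm.tee_out {
	    type file
	    slave.pcm "hw:0,0"    # play through the real card...
	    file "/tmp/out.raw"   # ...while also dumping the raw samples here
	    format "raw"
	}

Playing through it with aplay -D tee_out test.wav plays the file normally while writing a copy of the samples to /tmp/out.raw.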

A very simple slave could be defined as follows:

	pcm_slave.sltest {
	    pcm "hw:0,0"
	}

This defines a slave without any parameters. It's nothing more than another alias for your sound device. The slightly more complicated thing to understand is that parameters for 'pcm types' must be defined in the slave-definition block. Let's set up a rate converter, which shows this behaviour.

	pcm_slave.sl2 {
	    pcm "hw:0,0"
	    rate 48000
	}

	pcm.rate_convert {
	    type rate
	    slave sl2
	}

Now you can call this newly created virtual device by:

	aplay -D rate_convert test.wav

This automatically converts your samples to a 48 kHz sample rate while playing. It's not very useful, because most players and ALSA itself already convert samples to a rate your soundcard is capable of, but you can use it, for example, to convert to a lower fixed sample rate. A more complex tool for conversion is the pcm type plug. The syntax is:

	type plug                # Format adjusted PCM
	slave STR                # Slave name (see pcm_slave)
	# or
	slave {                  # Slave definition
	    pcm STR              # Slave PCM name
	    # or
	    pcm { }              # Slave PCM definition
	    [format STR]         # Slave format (default nearest) or "unchanged"
	    [channels INT]       # Slave channels (default nearest) or "unchanged"
	    [rate INT]           # Slave rate (default nearest) or "unchanged"
	}
	route_policy STR         # route policy for automatic ttable generation
	                         # STR can be 'default', 'average', 'copy', 'duplicate'
	                         # average: result is average of input channels
	                         # copy: only first channels are copied to destination
	                         # duplicate: duplicate first set of channels
	                         # default: copy policy, except for mono capture - sum
	ttable {                 # Transfer table (bidimensional compound of
	                         # cchannels * schannels numbers)
	    CCHANNEL {
	        SCHANNEL REAL    # route value (0.0 ... 1.0)
	    }
	}
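
As a short sketch of the ttable in isolation, here is a stereo-to-mono downmix in which each client channel contributes half its level to the single slave channel (the name mono_out is arbitrary):

	pcm.mono_out {
	    type plug
	    slave {
	        pcm "hw:0,0"
	        channels 1       # the slave is opened with a single channel
	    }
	    ttable {
	        0 { 0 0.5 }      # client channel 0 -> slave channel 0 at 0.5
	        1 { 0 0.5 }      # client channel 1 -> slave channel 0 at 0.5
	    }
	}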

For full format conversion, we can use the plug type as follows:

	pcm_slave.sl3 {
	    pcm "hw:0,0"
	    format S16_LE
	    channels 1
	    rate 16000
	}

	pcm.complex_convert {
	    type plug
	    slave sl3
	}

By calling it with:

	aplay -vD complex_convert test.wav

This converts the samples during playback to the format S16_LE, one channel, and a 16 kHz sample rate. Because you called aplay with the verbose option -v, you can see these settings as the stream is converted from the original file. With:

	aplay -v test.wav

You see the original settings of the file.

Software mixing

Software mixing is the ability to play multiple sound files or applications at the same time through the same device. There are many ways to achieve software mixing in the Linux environment. Usually it requires a server application such as aRTsd, ESD or JACK. The list is long, and the apps can often be confusing to use.

dmix

These days we have a native plugin for ALSA called the dmix (direct mixing) plugin. It allows software mixing with an easy-to-use syntax and without the hassle of installing/understanding a new application first.

A very interesting and potentially extremely useful aspect of this plugin is using it combined with the default plugin name. In theory this means that all applications with native ALSA support will share the sound device. In practice not many applications are able to take advantage of this functionality yet. However, if you have time to test and report your findings to the application developers, it is worth a try:

	pcm.!default {
	    type plug
	    slave.pcm "dmixer"
	}

	pcm.dmixer {
	    type dmix
	    ipc_key 1024
	    slave {
	        pcm "hw:1,0"
	        period_time 0
	        period_size 1024
	        buffer_size 4096
	        rate 44100
	    }
	    bindings {
	        0 0
	        1 1
	    }
	}

	ctl.dmixer {
	    type hw
	    card 0
	}

(To use it with your own name, just change the word !default above.)
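
For example, with the arbitrary name mymix instead of !default:

	pcm.mymix {
	    type plug
	    slave.pcm "dmixer"
	}

	aplay -D mymix test.wav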

Now try:

	aplay -f cd -D default test.wav

on multiple consoles.

Some notes:

- The dmix PCM name is already defined in the global configuration file /usr/share/alsa/alsa.conf.

- The default sample rate for this device is 48000 Hz. If you would like to change it, use:

 	aplay -D"plug:'dmix:RATE=44100'" test.wav

- An example command for the dmix plugin using a 44100 Hz sample rate and the hw:1,0 output device:

 	aplay -Dplug:'dmix:SLAVE="hw:1,0",RATE=44100' test.wav

Defaults are:

	SLAVE="hw:0,0",RATE=48000

NB: the dmix plugin is not based on a client/server architecture; it writes directly to the soundcard's DMA buffer. There is no limit to the number of instances that can run at one time.

dsnoop

While the dmix plugin is for mixing multiple output (playback) streams together, if you want to use multiple input (capture) clients you need the dsnoop plugin:

	pcm.mixin {
	    type dsnoop
	    ipc_key 5978293      # must be unique for all dmix plugins!!!!
	    ipc_key_add_uid yes
	    slave {
	        pcm "hw:0,0"
	        channels 2
	        period_size 1024
	        buffer_size 4096
	        rate 44100
	        periods 0
	        period_time 0
	    }
	    bindings {
	        0 0
	        0 1
	    }
	}
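
You can then record through it, from several consoles at once if you like:

	arecord -D mixin -f S16_LE -r 44100 -c 2 test.wav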

JACK plugin

This plugin allows the user to connect applications that support ALSA natively to the JACK daemon.

To compile and install the JACK plugin, you need to move to the src/pcm/ext directory and run "make jack" and "make install-jack". This is intentional.

To use the JACK plugin you will need to add the following to your .asoundrc.

	pcm.jackplug {
	    type plug
	    slave { pcm "jack" }
	}

	pcm.jack {
	    type jack
	    playback_ports {
	        0 alsa_pcm:playback_1
	        1 alsa_pcm:playback_2
	    }
	    capture_ports {
	        0 alsa_pcm:capture_1
	        1 alsa_pcm:capture_2
	    }
	}

Now, you can use:

	aplay -Djackplug somefile
	arecord -Djackplug somefile

Virtual multi channel devices

If you would like to link two or more ALSA devices together so that you have a virtual multi-channel device, it is possible. However, this will not create the mythical "multi-channel soundcard out of el-cheapo consumer cards": the real devices will drift out of sync over time. It is sometimes helpful to make applications see, for example, one 4-channel card to allow flexible routing when they can't easily be made to talk to multiple cards (making use of JACK being one example).

Copy and paste the following into your asoundrc.

	# Create a virtual four-channel device from two sound devices.
	# This is in fact two interleaved stereo streams in
	# different memory locations, so JACK will complain that it
	# cannot get mmap-based access. See below.

	pcm.multi {
	    type multi;
	    slaves.a.pcm "hw:0,0";
	    slaves.a.channels 2;
	    slaves.b.pcm "hw:1,0";
	    slaves.b.channels 2;
	    bindings.0.slave a;
	    bindings.0.channel 0;
	    bindings.1.slave a;
	    bindings.1.channel 1;
	    bindings.2.slave b;
	    bindings.2.channel 0;
	    bindings.3.slave b;
	    bindings.3.channel 1;
	}

	# JACK will be unhappy if there is no mixer to talk to, so we set
	# this to card 0. This could be any device but 0 is easy.

	ctl.multi {
	    type hw;
	    card 0;
	}

	# This creates a 4 channel interleaved pcm stream based on
	# the multi device. JACK will work with this one.

	pcm.ttable {
	    type route;
	    slave.pcm "multi";
	    ttable.0.0 1;
	    ttable.1.1 1;
	    ttable.2.2 1;
	    ttable.3.3 1;
	}

	# See above.
	ctl.ttable {
	    type hw;
	    card 0;
	}

This will give you xruns, but it's suitable for testing. To test the above setup, feed an audio signal to the real sound devices, play it back, and listen to it with an external mixer:

	arecord -f S16_LE -r 44100 -c 4 -D multi | aplay -f S16_LE -r 44100 -c 4 -D multi

To start JACK with the new device, use

 	jackd [-a] -R [-v] -d alsa -d ttable [-p 1024]

Bindings explained

The above example for a virtual multi channel device uses bindings to make the connections work. The following is a more advanced asoundrc for two RME Hammerfalls, a professional multichannel sound device. Below is a full explanation of how bindings work.

	# This is for two RME Hammerfall cards. They have been split into a top row
	# and a bottom row, with channels 0-7+16-25 on rme9652_0 and channels
	# 8-15+26-27 on rme9652_1. NB channels 24-27 are used as two stereo
	# channels while the others are mono.

	# eg. card1
	# |  0  1  2  3  4  5  6  7 |
	# |  8  9 10 11 12 13 14 15 24 25 |
	# card2
	# | 16 17 18 19 20 21 22 23 24 25 |

	pcm_slave.rme9652_s {
	    pcm rme9652_0
	}

	pcm.rme9652_1 {
	    type hw
	    card 1
	}

	ctl.rme9652_1 {
	    type hw
	    card 1
	}

	pcm.rme9652_0 {
	    type hw
	    card 0
	}

	ctl.rme9652_0 {
	    type hw
	    card 0
	}

	ctl.rme9652_48 {
	    type hw
	    card 0
	}

	pcm.rme9652_48 {
	    type multi;
	    slaves.a.pcm rme9652_0;
	    slaves.a.channels 26;
	    slaves.b.pcm rme9652_1;
	    slaves.b.channels 26;
	    bindings.0.slave a;
	    bindings.0.channel 0;
	    bindings.1.slave a;
	    bindings.1.channel 1;
	    bindings.2.slave a;
	    bindings.2.channel 2;
	    bindings.3.slave a;
	    bindings.3.channel 3;
	    bindings.4.slave a;
	    bindings.4.channel 4;
	    bindings.5.slave a;
	    bindings.5.channel 5;
	    bindings.6.slave a;
	    bindings.6.channel 6;
	    bindings.7.slave a;
	    bindings.7.channel 7;
	    bindings.8.slave a;
	    bindings.8.channel 16;
	    bindings.9.slave a;
	    bindings.9.channel 17;
	    bindings.10.slave a;
	    bindings.10.channel 18;
	    bindings.11.slave a;
	    bindings.11.channel 19;
	    bindings.12.slave a;
	    bindings.12.channel 20;
	    bindings.13.slave a;
	    bindings.13.channel 21;
	    bindings.14.slave a;
	    bindings.14.channel 22;
	    bindings.15.slave a;
	    bindings.15.channel 23;

	    # Use rme9652_1

	    bindings.16.slave b;
	    bindings.16.channel 8;
	    bindings.17.slave b;
	    bindings.17.channel 9;
	    bindings.18.slave b;
	    bindings.18.channel 10;
	    bindings.19.slave b;
	    bindings.19.channel 11;
	    bindings.20.slave b;
	    bindings.20.channel 12;
	    bindings.21.slave b;
	    bindings.21.channel 13;
	    bindings.22.slave b;
	    bindings.22.channel 14;
	    bindings.23.slave b;
	    bindings.23.channel 15;

	    # stereo channels

	    bindings.24.slave a;
	    bindings.24.channel 24;
	    bindings.25.slave a;
	    bindings.25.channel 25;
	    bindings.26.slave b;
	    bindings.26.channel 24;
	    bindings.27.slave b;
	    bindings.27.channel 25;
	}

What is happening?

There are two sound cards which are linked with a wordclock pipe. That allows them to keep sample sync with each other, which is very important for multichannel work. If the sample rates are not in sync, your sounds become out of time with each other.

Each sound card has a number of physical channels (19 + 10). They are represented in /proc/asound/cardX as pcmXc (capture) and pcmXp (playback), where X is the index of the physical input/output (i/o) device, starting at 0.
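
You can check this on your own system; the card's directory under /proc contains one pair of entries per pcm device:

	ls /proc/asound/card0

Expect entries such as pcm0c, pcm0p, pcm1c, pcm1p ... alongside the card's other status files.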

If you look at the lines:

	type multi;
	slaves.a.pcm rme9652_0;
	slaves.a.channels 26;

You can see that the card has been nicknamed "a" and given a range of 26 channels. You can assign the card any number of channels you want, but you can only use as many channels as the card physically has available. The bindings start at the first available pcm device for the card, ie. pcm0c/pcm0p, and move upwards sequentially from there.

The first card for this file has 18 physical pcm devices. Nine of them are capture devices, ie. pcm0c pcm1c pcm2c ... pcm8c, each with corresponding playback channels, ie. pcm0p pcm1p pcm2p ... pcm8p. The second card has 10 physical pcm devices, ie. pcm0c pcm1c pcm2c ... pcm9c.

If you look at the lines:

	bindings.0.slave a;
	bindings.0.channel 0;
	bindings.1.slave a;
	bindings.1.channel 1;

The first binding points to the first available pcm device on the card represented as "a". The second binding points to the second available pcm device on "a" and so on up to the last one available. We then assign a channel number to the binding so that the channels on the new virtual "soundcard" we have created are easy for us to access.

Another way of saying it is:

	address of.the first channel on my new soundcard.using my real soundcard called "a";
	make this address of.the first channel on my new soundcard.be the first pcm device on my new soundcard;
	address of.the second channel on my new soundcard.using my real soundcard called "a";
	make this address of.the second channel on my new soundcard.be the second pcm device on my new soundcard;


Referenced applications

  • aRTsd - the aRTs sound server is the basis of desktop sound for KDE.
  • ESD - the Enlightened Sound Daemon mixes several audio streams for playback by a single audio device.
  • Ecasound - a commandline multitrack recorder and editor with various GUI apps.
  • JACK - Jack Audio Connection Kit. If you don't know this app you don't know JACK.

Notes:

  1. Example: Alternative .asoundrc and modules.conf files
  2. Tricks for getting the most out of the card.

