WiLMA - Wireless Large Scale Microphone Array

Everyday situations are rich in acoustic events from many different origins. Such acoustic scenes may comprise conversations between people, chirping birds, cars, cyclists, and much more. So far, no recording or scene-analysis technique exists for this rich and dynamically changing acoustic environment, although one would be needed to document or actively shape an acoustic scene. Customized techniques exist for recording symphony orchestras with a fixed setup, but none that automatically adapts to scenes with varying content. A new recording technique is therefore called for, one that analyzes the signal content, the position, and the activity of all sources in a scene. The wireless large-scale microphone array (WiLMA) is the mobile infrastructure at the Institute of Electronic Music and Acoustics (IEM) that enables the investigation of such new recording and analysis techniques.

This project was partly funded by the MINT/Masse program of the Federal Ministry of Science and Research.



System

Traditionally, the sensor nodes of a wireless sensor network (WSN) for capturing sound events are equipped with low-quality microphones, amplifiers, and analog-to-digital converters (ADCs) in order to reduce sensor node size, power consumption, and cost.
The wireless large-scale microphone array (WiLMA) developed at the Institute of Electronic Music and Acoustics introduces high-quality audio processing to wireless sensor networks. Each of the sixteen sensor modules (SMs) captures up to four high-end microphone signals, which enables the use of a four-channel microphone array (e.g., a first-order Ambisonics tetrahedral microphone) per SM. The system thus operates as a large-scale microphone array with a total of 64 audio channels. A single SM and the microphone array used are depicted in figure 2.

The acquired data from all SMs is transmitted (either wirelessly or over a wired connection) to a central unit running the host application depicted in figure 3. The host application visualizes input levels, synchronization, and battery status. Furthermore, it allows the user to configure each SM individually for a specific task.
Each SM is equipped with a local processing unit that performs computations on the acquired data. Instead of sending raw data to the central unit responsible for data fusion, the sensor modules can use their processing abilities to carry out simple computations locally and transmit only the required, partially processed data. This intelligent sensor network approach reduces network traffic and increases the flexibility of the system.
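To illustrate this approach, the following Python sketch shows a sensor module that computes only short-term RMS levels from a locally captured audio block and sends these compact features to the central unit over UDP instead of streaming the raw four-channel audio. The address, port, packet layout, and the capture routine are hypothetical and are not part of the actual WiLMA software.

import socket
import struct
import numpy as np

CENTRAL_UNIT = ("192.168.0.1", 9000)   # hypothetical address of the central unit
BLOCK = 4800                           # 100 ms at 48 kHz
CHANNELS = 4

def capture_block():
    """Placeholder for the local audio capture (noise is used here for illustration)."""
    return np.random.randn(BLOCK, CHANNELS).astype(np.float32)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

for frame_index in range(100):
    audio = capture_block()
    # Local processing: reduce 4 x 4800 samples to four RMS values per block.
    rms = np.sqrt(np.mean(audio ** 2, axis=0))
    # Transmit only the compact feature packet (frame index plus four floats).
    packet = struct.pack("!I4f", frame_index, *map(float, rms))
    sock.sendto(packet, CENTRAL_UNIT)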

An example application of the WiLMA hardware is to separate the sources of an acoustic scene and to track their movement. This makes it possible to analyze the separated source signals and to assign a specific event to a specific source. Figure 1 conceptually depicts this process of spatial transcription. Areas that could benefit from such an application include assisted-living scenarios, acoustical planning, the surveillance of urban areas, multi-channel source separation, event detection, and source tracking.
Another application is the high-quality multichannel recording of an acoustic scene, with the added benefit of flexible microphone positioning due to the wireless operation of the system.

Sensor Module

Each SM consists of several sub-modules. A simplified block diagram is depicted in figure 4.

ANALOG FRONT-END: The analog front-end consists of four high-quality, digitally controllable microphone preamplifiers and a four-channel ADC with a resolution of 24 bits and a sampling rate of 48 kHz. Timestamps provided by the sync module are captured via I2S and multiplexed with the audio data into an 8-channel time-division multiplexed (TDM) stream.
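One way to picture this TDM stream is as interleaved frames of eight 32-bit slots, four carrying left-justified 24-bit audio samples and four carrying timestamp words. The following Python sketch demultiplexes such a buffer; the slot ordering and word format are assumptions made for illustration and do not describe the documented WiLMA frame layout.

import numpy as np

SLOTS = 8          # 8-channel TDM frame: assumed 4 audio slots + 4 timestamp slots
AUDIO_SLOTS = 4

def demux_tdm(buffer):
    """Split an interleaved 8-slot TDM buffer into audio samples and timestamps."""
    words = np.frombuffer(buffer, dtype="<i4").reshape(-1, SLOTS)
    # Assumed layout: the first four slots hold 24-bit audio left-justified in 32 bits.
    audio = (words[:, :AUDIO_SLOTS] >> 8).astype(np.float64) / (1 << 23)
    # The remaining slots are taken here to carry the sync timestamps.
    timestamps = words[:, AUDIO_SLOTS:].astype(np.uint32)
    return audio, timestamps

# Example: one period of 256 TDM frames (256 frames * 8 slots * 4 bytes).
raw = bytes(256 * SLOTS * 4)
audio, timestamps = demux_tdm(raw)
print(audio.shape, timestamps.shape)   # (256, 4) (256, 4)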

SYNC MODULE: Wireless synchronization within the WiLMA system is established by a master module that sends timestamp messages at defined periodic intervals on a sub-GHz channel. On each sensor module, a frequency-locked loop (FLL) synchronizes a local oscillator to the master. The word clock for the ADC on the analog front-end is derived from this disciplined oscillator.
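Conceptually, such a frequency-locked loop compares the number of local oscillator ticks counted between two master timestamps with the expected number and low-pass filters the resulting rate measurement. The following Python sketch illustrates this with made-up numbers; the loop gain, timestamp interval, and oscillator values are not the parameters used in the WiLMA sync module.

import random

NOMINAL_RATE = 48000.0     # nominal word-clock rate in Hz
INTERVAL = 1.0             # master timestamps arrive once per second (assumed)
GAIN = 0.2                 # first-order loop gain (illustrative)

true_rate = 48003.7        # actual (unknown) rate of the free-running local oscillator
rate_estimate = NOMINAL_RATE

for k in range(1, 16):
    # Ticks counted with the local oscillator between two master timestamps,
    # with a little measurement jitter.
    ticks = true_rate * INTERVAL + random.uniform(-0.5, 0.5)
    # Frequency-locked loop: low-pass filter the instantaneous rate measurement.
    rate_estimate += GAIN * (ticks / INTERVAL - rate_estimate)
    # The ADC word clock is derived by scaling the disciplined oscillator.
    correction = NOMINAL_RATE / rate_estimate
    print(f"{k:2d}: estimated rate {rate_estimate:8.2f} Hz, divider correction {correction:.6f}")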

SYSTEM ON CHIP: The heart of each sensor module is a BeagleBone A6 equipped with an ARM Cortex-A8 based processor running Linux. An ALSA driver developed for the analog front-end captures the TDM data stream, which is then processed by a real-time application and streamed to the central unit via Ethernet or WLAN.
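As a rough illustration of such a capture-and-stream loop, the following Python sketch uses the pyalsaaudio bindings to read the 8-channel TDM stream and forward each period to the central unit over TCP. The device name, stream parameters, and network address are assumptions; the actual WiLMA real-time application is a separate piece of software and is not reproduced here.

import socket
import alsaaudio

# Hypothetical central-unit address; the real streaming protocol is not shown here.
central = socket.create_connection(("192.168.0.1", 10000))

# Open the TDM capture device exposed by the ALSA driver (device name assumed).
pcm = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL,
                    device="hw:0,0", channels=8, rate=48000,
                    format=alsaaudio.PCM_FORMAT_S32_LE, periodsize=256)

while True:
    nframes, data = pcm.read()
    if nframes > 0:
        central.sendall(data)   # forward the raw period to the central unit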

POWER MODULE: The power module generates the supply voltages for the different modules from either the wall-plug supply or the battery. It also generates an optional 48 V supply for microphones requiring phantom power.

BATTERY MANAGEMENT: The LiPo battery pack is connected to a battery management system that controls the charge voltage and charge current, switches between power sources, and provides information about the battery status via the I2C bus. In case of battery undervoltage, the battery management system autonomously disconnects the load from the battery in order to keep the battery in a safe state.
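Such status information could be polled over I2C roughly as in the following Python sketch, which uses the smbus2 package. The bus number, device address, and register map (loosely following the Smart Battery convention) are placeholders and do not correspond to the battery management IC actually used in WiLMA.

from smbus2 import SMBus

BUS = 1             # I2C bus number on the BeagleBone (assumed)
BMS_ADDR = 0x0B     # hypothetical device address of the battery management system

# Hypothetical register map, loosely following the Smart Battery convention.
REG_VOLTAGE = 0x09  # battery voltage in mV
REG_CURRENT = 0x0A  # charge/discharge current in mA (signed)
REG_CHARGE = 0x0D   # relative state of charge in %

def read_battery_status():
    """Poll the battery management system for voltage, current, and state of charge."""
    with SMBus(BUS) as bus:
        voltage_mv = bus.read_word_data(BMS_ADDR, REG_VOLTAGE)
        current_ma = bus.read_word_data(BMS_ADDR, REG_CURRENT)
        if current_ma >= 0x8000:      # interpret as a signed 16-bit value
            current_ma -= 0x10000
        charge_pct = bus.read_word_data(BMS_ADDR, REG_CHARGE)
    return voltage_mv, current_ma, charge_pct

if __name__ == "__main__":
    v, i, soc = read_battery_status()
    print(f"battery: {v} mV, {i} mA, {soc} % charged")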

ASD Team (IEM)

Schörkhuber, Christian DI

schoerkhuber(at)iem.at
+43 316 389 3800

Zaunschirm, Markus DI

zaunschirm(at)iem.at
+43 316 389 3604

zmölnig, IOhannes DI

zmoelnig(at)iem.at
+43 316 389 3634

Contact

Inffeldgasse 10/III
8010 Graz
Tel: +43 316 389 3170
Fax: +43 316 389 3171
Email
Website

Office:
Mon-Thu 7:30-16:00
Fri 7:30-13:30