Author Archives: hagen

In order to enable interactive exploration of the environment, including changing ear signals, the Two!Ears model can either be used together with a binaural simulation stage, or the ear signals can be acquired directly by a robotic platform. In our project we have built two such robotic platforms, one at the Institut des Systèmes Intelligents et de Robotique (ISIR) in Paris, and one at the Laboratoire d’Analyse et d’Architecture des Systèmes (LAAS-CNRS) in Toulouse.

In both cases we used the KEMAR head and torso simulator as a basis, as it provides a standard way of acquiring binaural signals. The KEMAR was then placed on movable robotic platforms: ODI in Paris, which was specially made for KEMAR by Enova Robotics, and the in-house built Jido platform in Toulouse.

[Image: kemar_on_robots]

In addition to the movements provided by the robotic platforms, we modified the KEMAR by adding a motor to its body in order to allow head rotations. The motor has an accuracy of better than 1° and is also useful if you want to acquire binaural room impulse responses with the KEMAR.
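Stepping the head through azimuths like this is typically combined with a sweep measurement at each position. The measurement chain itself is not part of this post; the following numpy sketch only illustrates the sweep-deconvolution step commonly used to recover an impulse response from a recorded sweep (all signals and parameters are made up for illustration):

```python
import numpy as np

def exponential_sweep(f1, f2, duration, fs):
    """Exponential sine sweep from f1 to f2 Hz, a common measurement signal."""
    t = np.arange(int(duration * fs)) / fs
    rate = np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * duration / rate * (np.exp(t / duration * rate) - 1))

def deconvolve(recording, sweep, n):
    """Recover the first n samples of the impulse response by spectral division."""
    N = len(recording) + len(sweep)  # zero-pad so circular equals linear convolution
    H = np.fft.rfft(recording, N) / np.fft.rfft(sweep, N)
    return np.fft.irfft(H, N)[:n]

fs = 8000
sweep = exponential_sweep(100.0, 3000.0, 1.0, fs)

# Simulated "room": a direct path plus one delayed, attenuated reflection.
true_ir = np.zeros(256)
true_ir[10], true_ir[100] = 1.0, 0.4
recording = np.convolve(sweep, true_ir)

ir = deconvolve(recording, sweep, len(true_ir))
```

In a real measurement the recording additionally contains noise, so the division is usually regularised and restricted to the sweep's frequency band.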

[Image: kemar_head_motor]

If you want to see both robots in action, have a look at the companion videos for a localisation approach presented at ICASSP 2016 by Bustamante et al., or at a video showing a navigation task performed in the lab at ISIR.

This entry was posted in General.

A new version of our software framework was published today. Please go to the download section and have a look at the installation guide in order to try it out.

Besides lots of bug fixes, the main new features of this release are:

Blackboard system:
* Replaced GmtkLocationKS with GmmLocationKS
* Removed the dependency on the external GMTK framework
New Examples:
* GMM-based localisation under reverberant conditions
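The GmmLocationKS models full binaural feature distributions with Gaussian mixtures; as a rough illustration of the underlying idea, here is a minimal numpy sketch that assigns an observed interaural time difference (ITD) to the azimuth whose Gaussian model explains it best (all numbers are invented for this example, not taken from the framework):

```python
import numpy as np

# One Gaussian per candidate azimuth over a single interaural time
# difference (ITD) feature; a real system uses Gaussian mixtures over
# multi-dimensional binaural cues. All values below are illustrative.
azimuths = np.array([-60, 0, 60])        # candidate directions in degrees
itd_means = azimuths / 90.0 * 0.7e-3     # rough ITD per direction in seconds
itd_std = 5e-5                           # assumed spread of the ITD feature

def log_likelihood(itd, mean, std):
    return -0.5 * ((itd - mean) / std) ** 2 - np.log(std * np.sqrt(2.0 * np.pi))

def localise(itd):
    """Pick the azimuth whose Gaussian explains the observed ITD best."""
    scores = [log_likelihood(itd, m, itd_std) for m in itd_means]
    return int(azimuths[int(np.argmax(scores))])

# An ITD of about 0.45 ms points to the source at +60 degrees.
estimate = localise(4.5e-4)
```

Training such models on reverberant data is what makes the shipped example robust under reverberant conditions; this sketch only shows the classification step.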

This entry was posted in Software release.

A new version of our software framework was published today. Please go to the download section and have a look at the installation guide in order to try it out.

Besides lots of bug fixes, the main new features of this release are:

Binaural simulator:
* Now works under Matlab 2015b
New processors in the Auditory front-end:
* Precedence effect processor
* MOC feedback processor
New knowledge source in the Blackboard system:
* Segmentation knowledge source
* Deep neural-network based localisation knowledge source
* Coloration knowledge source
* Localisation knowledge source for evaluating spatial audio systems
New Database entries:
* Results from listening test on coloration in wave field synthesis
New Examples:
* DNN-based localisation under reverberant conditions
* Segmentation with and without priming
* (Re)train the segmentation stage
* Prediction of coloration in spatial audio systems
* Prediction of localisation in spatial audio systems
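The deep neural-network localisation knowledge source maps frames of binaural cues to a probability for each candidate azimuth. As a purely conceptual illustration, here is a minimal numpy sketch of such a forward pass (the layer sizes and random weights are made up; the framework loads trained models):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy shapes: 32 binaural cues in, 16 hidden units, posterior over
# 5 candidate azimuths. Random weights stand in for trained ones.
W1, b1 = 0.1 * rng.normal(size=(16, 32)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(5, 16)), np.zeros(5)

def azimuth_posterior(cues):
    """Forward pass: binaural feature vector -> probability per azimuth."""
    return softmax(W2 @ relu(W1 @ cues + b1) + b2)

posterior = azimuth_posterior(rng.normal(size=32))
```

The softmax output sums to one, so downstream knowledge sources can treat it directly as a discrete probability distribution over source directions.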

This entry was posted in Software release.

The Sound Field Synthesis Toolbox is a tool for working with the spatial audio methods Wave Field Synthesis and Higher Order Ambisonics in Matlab/Octave. It allows for numerical simulations of sound fields in the time and frequency domain. Furthermore, by generating binaural simulations of multichannel loudspeaker setups, it allows for creating stimuli for listening tests in order to evaluate different sound field synthesis methods.
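To give an impression of what such a frequency-domain simulation computes, here is a minimal numpy sketch evaluating the sound field of a monochromatic point source via the 3D free-field Green's function (a generic textbook formula, not the Toolbox API):

```python
import numpy as np

def point_source_field(x0, f, X, Y, c=343.0):
    """Monochromatic point source at x0: Green's function exp(-1j*k*r)/(4*pi*r)."""
    k = 2.0 * np.pi * f / c
    r = np.hypot(X - x0[0], Y - x0[1])
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

# 1 kHz point source at (0, 0.5) m, evaluated on a grid in the listening plane.
x = np.linspace(-2.0, 2.0, 80)
X, Y = np.meshgrid(x, x)
P = point_source_field((0.0, 0.5), 1000.0, X, Y)
```

The Toolbox's own functions additionally handle driving functions, secondary source selection and tapering; this sketch only shows the free-field building block that such simulations rest on.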

After a few years of work, we are finally releasing version 1.0.0 today.
You can download the latest release, and you should have a look at the tutorial on GitHub to learn how to use it.

[Image: sfs-1.0.0]

NEWS:
– added references for all driving functions
– streamlined nested conf settings; e.g. now it is no longer necessary to set conf.ir.hcompfile if conf.usehcomp == false
– added WFS driving functions from Völk et al. and Verheijen et al.
– removed secondary_source_number() and xy_grid, because they are no longer needed
– enabled pre-equalization filter of WFS as default in SFS_config_example()
– fixed sound_field_mono_sdm_kx()
– Green’s function for line sources now returns real values
– corrected the y-direction of plane waves for 3D NFC-HOA
– updated the test functions in the validation folder
– several small fixes

This entry was posted in Publications.

If you want to talk about the auditory system to your students, it is sometimes hard to find free material such as illustrations of auditory phenomena. Here we provide an illustration that is available under a Creative Commons license and can be used for all purposes. The image is available as png, svg, eps, and pdf.

[Image: auditory_perception]

The parts highlighted in blue form the auditory pathway, starting with the cochlea and ending in the auditory cortex.
The markers in between highlight the processing steps, starting at the cochlear nucleus, superior olivary complex, and lateral lemniscus in the brainstem, and continuing to the inferior colliculus in the midbrain and the medial geniculate body in the thalamus.

The idea for this illustration is borrowed from B. Grothe, M. Pecka, and D. McAlpine, “Mechanisms of Sound Localization in Mammals”. The cochlea and outer ear are from L. Chittka and A. Brockmann, “Perception space–the final frontier”. The sketch of the brain is based on K. Talbot et al., “Synaptic dysbindin-1 reductions in schizophrenia occur in an isoform-specific manner indicating their subsynaptic location”.

This entry was posted in Media.