2D/3D acoustic source localization using Deep Learning techniques and arbitrary microphone array configurations
Abstract
Protecting sensitive sites from drone threats requires an accurate strategy for drone detection and localization. To this end, the Deeplomatics project combines acoustic detection and localization using compact microphone arrays with optical recognition based on active imaging to monitor these threats.
Rather than estimating the direction of arrival of the source with a localization algorithm based on a propagation model (such as the MUSIC method), a deep learning approach, named BeamLearning, has been specifically designed to determine the position of an acoustic source in real time, directly from the raw microphone signals.
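To illustrate the principle, the sketch below shows a minimal raw-waveform network that maps multichannel frames to a grid of DOA cells. The class name, layer sizes, number of microphones, and output grid are hypothetical placeholders; the actual BeamLearning architecture is described in the full paper and differs in detail.

```python
import torch
import torch.nn as nn

class RawWaveformDOANet(nn.Module):
    """Minimal sketch of a DOA classifier fed with raw multichannel audio.

    All hyperparameters are illustrative, not those of BeamLearning.
    """

    def __init__(self, n_mics: int = 32, n_doa_cells: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_mics, 64, kernel_size=64, stride=4, padding=30),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=16, stride=4, padding=6),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(128, n_doa_cells)

    def forward(self, x):
        # x: (batch, n_mics, n_samples) raw waveforms, no hand-crafted features
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # logits over DOA cells


# Example: a batch of 1024-sample frames from a 32-microphone array
net = RawWaveformDOANet()
frames = torch.randn(8, 32, 1024)
doa_logits = net(frames)  # shape (8, 128)
```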
The neural network is optimized using data obtained either from numerical simulations or from multichannel microphone recordings. One advantage of the latter is that the learning variables are optimized on real signals that embed all the characteristics of the array used for the recordings, from the frequency responses of the sensors to the diffraction by the array body. In this case, the learning phase also performs an intrinsic calibration of the microphone array.
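A possible training step is sketched below, assuming the illustrative model above and a hypothetical `loader` that yields labelled frames built either from simulated signals or from recordings made with the source position known exactly; the actual optimization procedure used in the project may differ.

```python
import torch.nn as nn

def train_epoch(model, loader, optimizer):
    # loader yields (frames, doa_cell) pairs: frames (batch, n_mics, n_samples),
    # doa_cell the index of the known source direction on the output grid.
    criterion = nn.CrossEntropyLoss()
    model.train()
    for frames, doa_cell in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), doa_cell)
        loss.backward()
        optimizer.step()
```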
These measured datasets can be recorded with any compact microphone array. During the learning process, the array is placed at the center of a sphere of loudspeakers that reproduces a perfectly known pressure field thanks to the spherical harmonic formalism. This formalism makes it possible to spatialize acoustic sources from numerically computed signals, as well as to reproduce a pressure field measured during UAV flights in real environments. A single on-site measurement campaign can therefore be used to build several datasets, each corresponding to a different microphone array placed in the spatialization sphere.
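As a reminder of the formalism involved (a standard textbook expression, not a formula quoted from the project), the pressure field inside the source-free region bounded by the loudspeaker sphere can be expanded on spherical harmonics and truncated at order L:

```latex
% Interior spherical-harmonic expansion of the reproduced pressure field,
% truncated at order L (standard textbook form, not taken from the paper)
p(r,\theta,\varphi,k) \;=\; \sum_{l=0}^{L} \sum_{m=-l}^{l} a_{lm}(k)\, j_l(kr)\, Y_l^m(\theta,\varphi)
```

Here j_l are spherical Bessel functions and Y_l^m the spherical harmonics; the (L+1)^2 coefficients a_{lm} can encode either simulated sources or a field recorded during UAV flights, and, in a typical higher-order spatialization setup, the loudspeaker gains are chosen so that the field reproduced at the array matches this expansion.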
In addition to localizing a source in real time, the proposed BeamLearning approach achieves both a better average experimental localization accuracy and a lower dispersion of localization errors than SH-MUSIC. On the GPU architecture used for source localization, the computation time required to estimate the source DOA also favors our approach, with a reduction by a factor of 75 compared to the SH-MUSIC method.