Advances in Acoustic Intelligence

The field of acoustic intelligence is advancing rapidly, with new methods for analyzing and interpreting acoustic signals. Recent work has explored acoustic signals for high-fidelity environmental perception, causal physical reasoning, and predictive simulation of dynamic events. There is also growing interest in applying machine learning to medical audio, enabling automated analysis and potentially standardizing how medical sounds are processed. Other active directions include non-invasive object classification from acoustic scattering and cooperative contactless object transport with acoustic robots. Two noteworthy papers illustrate these trends: MUDAS introduces a framework for unsupervised domain adaptation in multi-label sound classification and reports notable improvements in classification accuracy, while "Making deep neural networks work for medical audio" analyzes infant cry sounds to predict medical conditions using neural transfer learning, model compression, and domain adaptation.
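To make the general recipe behind several of these papers concrete, the sketch below shows a minimal multi-label sound classifier built on a frozen pre-trained backbone with a small trainable head, trained with per-class sigmoid outputs. It is an illustrative assumption, not the actual architecture or training setup of MUDAS or the medical-audio paper; the backbone, label count, and input shapes are placeholders.

```python
# Minimal sketch: multi-label sound classification with a frozen pre-trained
# backbone and a small trainable head. The backbone, label set, and data are
# placeholders, not the method of any cited paper.
import torch
import torch.nn as nn

NUM_CLASSES = 10   # hypothetical number of sound-event labels
EMBED_DIM = 128    # hypothetical embedding size of the backbone


class AudioTagger(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():      # freeze pre-trained weights
            p.requires_grad = False
        self.head = nn.Linear(embed_dim, num_classes)  # task-specific head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(x)              # (batch, embed_dim) embeddings
        return self.head(feats)                   # raw logits, one per label


# Stand-in backbone: in practice this would be a network pre-trained on a
# large audio corpus, taking e.g. log-mel spectrograms as input.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 100, EMBED_DIM), nn.ReLU())
model = AudioTagger(backbone, EMBED_DIM, NUM_CLASSES)

# Multi-label training uses an independent sigmoid per class, hence
# binary cross-entropy with logits rather than softmax cross-entropy.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)

# One toy training step on a random "spectrogram" batch.
x = torch.randn(8, 64, 100)                       # (batch, mel bins, frames)
y = (torch.rand(8, NUM_CLASSES) > 0.8).float()    # multi-hot label targets
logits = model(x)
loss = criterion(logits, y)
loss.backward()
optimizer.step()
print(f"toy loss: {loss.item():.4f}")
```

Freezing the backbone and training only the head is the simplest form of transfer learning; domain adaptation methods such as MUDAS go further by adapting the model to unlabeled audio from the deployment environment.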

Sources

15,500 Seconds: Lean UAV Classification Leveraging PEFT and Pre-Trained Networks

MUDAS: Mote-scale Unsupervised Domain Adaptation in Multi-label Sound Classification

A Survey on World Models Grounded in Acoustic Physical Information

A Cooperative Contactless Object Transport with Acoustic Robots

Making deep neural networks work for medical audio: representation, compression and domain adaptation

Acoustic scattering AI for non-invasive object classifications: A case study on hair assessment

pycnet-audio: A Python package to support bioacoustics data processing
