In bioacoustics research, field biologists study the dynamics of acoustic communication by recording audio streams of animal and bird vocalizations in situ. The ability to identify an individual's vocalizations is important for classification and census, and relating those vocalizations to the caller's geographic position can provide insight into behavior.
Previous work using distributed acoustic sensing platforms has shown that on-line automated event detectors can be used to detect and record only events of interest for off-line source localization. However, this approach risks losing useful data if the automated event detectors are poorly configured.
In this work, we describe the design of VoxNet, an end-to-end system that provides hardware and software support for gathering and processing audio data in both on-line and off-line modes. VoxNet allows the user to dynamically reconfigure the network in the field and to analyze data using dynamically adjustable visualizers.
This approach enables on-line interactions that are not possible with purely off-line analysis, such as using localization results to direct a scientist's observations immediately after an event occurs. Providing on-line interaction with the system can enable new bioacoustic questions, currently out of reach, to be asked in the future.