15 Years of Audio Data Show Whales’ Movements
Over the last several years, Google AI Perception teams have developed audio event analysis techniques that they applied on YouTube for non-speech captions, video categorization, and indexing. The group’s AudioSet evaluation set and open-sourced model code are now helping conservation organizations analyze large quantities of acoustic data for wildlife monitoring.
As part of its AI for Social Good program, and in partnership with the Pacific Islands Fisheries Science Center of the U.S. National Oceanic and Atmospheric Administration (NOAA), Google AI Perception developed algorithms to identify humpback whale calls in 15 years of underwater recordings from several locations in the Pacific. Matt Harvey, a software engineer on Google AI Perception, explained the process of tracking the humpback whales in a blog post.
“The research provided new and important information about humpback whale presence, seasonality, daily calling behavior, and population structure. This is especially important in remote, uninhabited islands, about which scientists have had no information until now.”
Since the dataset spans such a long period, knowing when and where humpback whales were calling is expected to reveal whether their movements have changed over the years, especially in relation to increasing human ocean activity. Researchers could then decide how to limit human impact on the whales.
Using passive acoustic monitoring, the practice of listening to marine mammals with underwater microphones called hydrophones, the researchers recorded signals so that detection, classification, and localization could be done offline. This method improves on ship-based visual surveys: it can detect submerged animals, and it offers longer detection ranges and longer monitoring periods.
Since 2005, NOAA has collected recordings from ocean-bottom hydrophones at 12 sites in the Pacific Island region, a winter breeding and calving destination for certain populations of humpback whales.
Devices called high-frequency acoustic recording packages, or HARPs, recorded the data. In total, NOAA provided about 15 years of audio, or 9.2 terabytes after decimation from a 200 kHz to a 10 kHz sampling rate.
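A factor-of-20 decimation like the one described can be sketched in a few lines of Python. This is an illustrative example only, not NOAA’s actual pipeline: the synthetic tone stands in for a HARP recording, and the stage sizes are an assumption.

```python
import numpy as np
from scipy.signal import decimate

fs_orig = 200_000   # original HARP sampling rate (Hz)
fs_target = 10_000  # target sampling rate after decimation (Hz)

# One second of synthetic audio standing in for a real recording;
# 300 Hz falls within the humpback vocalization range.
t = np.arange(fs_orig) / fs_orig
audio = np.sin(2 * np.pi * 300 * t)

# decimate() low-pass filters before downsampling to prevent aliasing.
# A large factor (20) is applied in stages (4 x 5) for filter stability.
audio_10k = decimate(audio, 4)
audio_10k = decimate(audio_10k, 5)

print(audio_10k.shape)  # (10000,): one second of audio at 10 kHz
```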
Manually marking humpback whale calls, even with the aid of computer-assisted methods, is extremely time-consuming, so the group decided to use image classification for audio event detection by creating a spectrogram, a plot of sound power across time and frequency axes.
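As a minimal sketch of that idea, the snippet below turns an audio signal into a log-scaled spectrogram that can be treated as a single-channel image for a classifier. The window and overlap values are illustrative assumptions, not the parameters used by the Google AI team.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 10_000  # sampling rate after decimation (Hz)

# Placeholder 2-second signal standing in for a whale recording.
t = np.arange(fs * 2) / fs
audio = np.sin(2 * np.pi * 300 * t)

# STFT magnitudes form a frequencies-by-time-frames grid: an "image"
# whose pixel intensity is sound power at each time-frequency point.
freqs, times, power = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)

# Log-scaling the power is a common step before feeding a CNN classifier.
log_spec = 10 * np.log10(power + 1e-10)
print(log_spec.shape)  # (513, n_frames): one channel of an input "image"
```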
read more at ai.googleblog.com