Scientists are using sound to track nighttime bird migration

[Photo: A flock of Common Cranes at night]

Researchers at New York University and the Cornell Lab of Ornithology are tracking the nighttime migratory patterns of birds by teaching computers to recognize their flight calls.

The technique, called acoustic monitoring, has existed for some time, but advances in computer algorithms may now give researchers better information than they have been able to gather in the past.

“We want to make as many different kinds of measurements as we can,” says Andrew Farnsworth, of the Cornell Lab of Ornithology. “The intro point to go from human to computer is about thinking of these sounds in terms of frequency and time, and figuring out how to measure that in increasing detail and feed that information into the machine’s listening models.”

“On the sensors, there is a spectral template detector that scans the audio as it comes in, checking for potential matches,” explains Justin Salamon of NYU, one of the collaborators on the project. “When a potential match is identified, it snaps roughly one second of that audio centered around the detection and sends that through the server.”
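Salamon doesn't detail the detector's internals, but a minimal sketch of the idea he describes, sliding a spectral template across incoming audio and snapping a roughly one-second clip around each match, might look like the following Python. The sample rate, window sizes, threshold, and function names here are illustrative assumptions, not the project's actual code.

```python
# Hedged sketch of a spectral template detector, NOT the project's real code.
# Assumes mono audio at 22.05 kHz, numpy/scipy available, and a precomputed
# spectrogram "template" of a flight call with the same frequency resolution.
import numpy as np
from scipy.signal import spectrogram

SR = 22050          # sample rate in Hz (an assumption)
CLIP_SECONDS = 1.0  # snap roughly one second of audio around each detection

def detect_and_snap(audio, template, threshold=0.7):
    """Scan audio for spectral matches; return ~1 s clips centered on hits."""
    freqs, times, spec = spectrogram(audio, fs=SR, nperseg=512, noverlap=256)
    spec = spec / (spec.max() + 1e-9)  # crude normalization

    clips = []
    t_frames = template.shape[1]
    for i in range(spec.shape[1] - t_frames):
        window = spec[:, i:i + t_frames]
        # Normalized correlation between the template and this window.
        score = np.sum(window * template) / (
            np.linalg.norm(window) * np.linalg.norm(template) + 1e-9)
        if score >= threshold:
            # Snap ~1 s of raw audio centered on the detection.
            center = int(times[i + t_frames // 2] * SR)
            half = int(CLIP_SECONDS * SR / 2)
            clips.append(audio[max(0, center - half):center + half])
    return clips  # in the real system, these clips go off to a server
```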

Flight calls are distinct from birdsong. Birdsong is made up of many different notes strung together. Flight calls are single notes, almost exclusively less than one hundred milliseconds long. The researchers ‘teach’ the machine to recognize these calls by giving it a large collection of recordings.
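For a sense of scale, assuming a typical 22.05 kHz sample rate (a detail the article doesn't specify), a 100-millisecond flight call spans only a couple of thousand samples and a handful of spectrogram frames, which is why detectors can work on short snippets rather than long, song-like sequences:

```python
# Back-of-the-envelope arithmetic; sample rate and hop size are assumptions.
SR = 22050                      # samples per second
call_seconds = 0.100            # flight calls run about 100 ms or less
hop = 256                       # spectrogram hop size in samples

samples = int(SR * call_seconds)   # 2205 samples of raw audio
frames = samples // hop            # about 8 spectrogram frames
print(samples, frames)             # -> 2205 8
```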

Then they use what's called ‘unsupervised feature learning,’ which means that they don't tell the algorithm what to look for. Rather, by giving it a large number of examples, the computer builds a statistical model of the specific patterns that are representative of a certain species.
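The article doesn't name the specific method, but one common flavor of unsupervised feature learning for audio clusters small spectrogram patches so that recurring spectro-temporal patterns emerge from the examples alone. The sketch below, built on scikit-learn's KMeans, is an illustration under that assumption; every name and parameter here is hypothetical.

```python
# Hedged sketch of patch-based unsupervised feature learning, illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(spectrograms, patch_frames=8, n_features=64):
    """Discover recurring patterns in unlabeled spectrograms (no labels given)."""
    patches = []
    for spec in spectrograms:                    # spec: (freq_bins, frames)
        for i in range(0, spec.shape[1] - patch_frames, patch_frames):
            patches.append(spec[:, i:i + patch_frames].ravel())
    # Each cluster centroid becomes one learned "feature": a pattern the
    # algorithm found on its own, without being told what to look for.
    return KMeans(n_clusters=n_features, n_init=10).fit(np.asarray(patches))

def encode(spec, codebook, patch_frames=8):
    """Describe a recording by how often each learned pattern appears in it."""
    patches = [spec[:, i:i + patch_frames].ravel()
               for i in range(0, spec.shape[1] - patch_frames, patch_frames)]
    assignments = codebook.predict(np.asarray(patches))
    return np.bincount(assignments, minlength=codebook.n_clusters)
```

A classifier trained on these pattern counts can then build the kind of per-species statistical model the researchers describe.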

The eventual goal is to put names to these nocturnally migrating species automatically and in real time, in order to understand the biology of this acoustic communication and apply it to conservation.

Right now, scientists rely on two sources of information when trying to understand migratory patterns. The first is bird watchers: the many people observing birds all across the country. Cornell has been effective at gathering information from citizen scientists, who record the occurrence of certain species by location and time. But those observations are mostly made by day, and migrations often occur at night.

The second source is surveillance radar, but this can show only the volume, the speed and the direction of a migration. Radar says nothing about the species composition of a given migration. Acoustic monitoring could reveal the missing piece of this puzzle, telling researchers something about the precise species composition of a migration at a specific time and place, says Juan Pablo Bello, an associate professor of music technology and electrical and computer engineering and the director of the Music and Audio Research Lab at NYU.

“Biologists in the project are interested in better understanding migratory patterns for bird species across the continental US,” Bello explains. “Specifically they are looking at two things. The first is to understand precisely the onset of the migration across the different seasons; and also to understand the compositions of migratory flights that are started during those periods of time.”

Acoustic monitoring of birds flying at night is the audio equivalent of finding a needle in a haystack. The computer must discern a bird's flight call within all the background noise and separate it out from other, similar sounds. The call of a spring peeper (a frog), another bird, or even a mechanical sound like a battery alarm can all have very similar frequency patterns that blend together in a recording.

It’s a much more difficult task than asking Siri for directions to the nearest Starbucks. “Speaking sources and speaking content are a fundamentally different problem than putting a microphone outside, where you have a signal that is probably very, very far from the microphone and is a very tenuous occurrence in the acoustic environment,” Bello explains.

“Then you have all sorts of background conditions — noise from the wind, from rain, from other species, from human activity,” he adds. “Being able to pick out the sounds in these very complicated and complex environments is a challenge your phone cannot address right now, and that we're trying to address with this technology.”

This article is based on an interview that aired on PRI's Science Friday with Ira Flatow.
