The Story

“What’s that bird?”

[Photo: Yellow-bellied Sapsucker perched on a branch. © Daniel Jauvin / Macaulay Library]

Part of the mission of the Cornell Lab of Ornithology is to help people find the answer to “what’s that bird?” We know that sorting through a massive field guide or sifting through search results can make it hard to figure out what you just saw; our goal is to make that challenge easier.

Merlin is designed to be a birding coach for bird watchers at every level. Merlin asks you the same questions that an expert birder would ask to help solve a mystery bird sighting. Notice that date and location are Merlin’s first and most important questions. It takes years of experience in the field to know what species are expected at a given location and date. Merlin shares this knowledge with you based on more than 800 million sightings submitted to eBird from birders around the world.

Merlin also asks you to describe the color, size, and behavior of the bird you saw. Because no two people describe birds exactly the same way, Merlin presents a shortlist of possible species based on descriptions from Cornell Lab experts as well as thousands of bird enthusiasts who helped “teach” Merlin by participating in online activities. They’ve contributed more than 3 million descriptors to help Merlin match your input with the most likely birds. When you identify a species and click “This is My Bird,” Merlin also saves your record to help improve its future performance.

Some people experience birds through the viewfinder of their camera, and putting a name to the bird they just photographed can be both rewarding and educational. The Photo ID feature in Merlin allows anyone with a camera to snap a photo and get a list of suggestions. Photo ID is yet another method to help you identify the birds you encounter.

We launched Merlin in 2014 with the goal of adding more species and more features in time. We appreciate your feedback about what’s working for you, what isn’t, and features you’d like to see. If you’d like to support our efforts to continue developing Merlin, please consider making a donation.

We hope you enjoy using Merlin and sharing it with your friends and family!

Our Team

The Merlin staff is based at the Cornell Lab of Ornithology, but our team includes more than 5,000 birders around the globe who have contributed photos and audio recordings through eBird, as well as everyone who has submitted their sightings to eBird.

We are grateful to our talented undergraduate students who have put many years of work into curating the data Merlin relies on, managing the text in multiple languages, updating maps, and many other tasks.

Current students: Benjamin Hack, Stella Hao, Tristan Herwood, Jack Hutchison, Archie Jiang, and Alyssa Nowicki.

Past students: Ben Barkley, Larry Chen, Jeremy Collison, Hermione Deng, Gates Dupont, Kevin Ebert, Sam Heinrich, Luke Seitz, and Alex Wiebe.

Photographers and Recordists

The photographs and audio recordings used in Merlin were sourced from more than 5,000 contributors who uploaded their media to Macaulay Library.

Photos and audio cuts included in Merlin attempt to cover the full range of variation of each species, and are selected and edited by the collections team at Macaulay Library and partners. Special thanks also to recordists from xeno-canto, WikiAves, AVoCet, and the Internet Bird Collection who contributed their recordings.

Additional Contributors

Identification text: The text is tailored for those moments when you are watching a bird in the field and using Merlin to pick among species. Merlin text is written by Callan Alexander, Nick Athanas, Keith Barnes, Jessie Barry, Ken Behrens, Garima Bhatia, Larry Chen, Mat Gilfedder, Lisle Gwynne, Charley Hesse, Steve N.G. Howell, Praveen J, Alexander Lees, Steve Mlodinow, Nárgila Moura, Adhithyan NK, Brian O’Shea, Yoav Perlman, Suhel Quader, Krishna Murthy R, Estevão F. Santos, Luke Seitz, Ramit Singal, Andrew Spencer, Rajneesh Suvarna, Sarah Toner, Ashwin Viswanathan, Drew Weber, Alex Wiebe, and Sam Woods.

Identification text copyeditors: Kathi Borgmann, Ned Brinkley, Nathan Pieplow, Hugh Powell, and Michael Retter.

Media editors: Tayler Brooks, Evan Lipton, Jay McGowan, Ramit Singal, Andrew Spencer, Andres Vasquez, Drew Weber.

Translators:

Chinese (Simplified): Hermione Deng, Stella Hao, Archie Jiang, and Tao Liu.
Chinese (Traditional): Scott Lin and others.
French: Jean-Pierre Artigau and the eBird Quebec team.
German: Angelika Nelson.
Hebrew: Yoav Perlman.
Japanese: Hiko Komatsu, Hiroko Okamoto, and Joye Zhou.
Portuguese: Alexander Lees, Nárgila Moura, Pedro Fernandes, Lorena Patrício, Estevão F. Santos, and Vitor Bernardes Valentini.
Russian: Nathan Pieplow.
Spanish: Daniel Arias, Roselvy Juárez, Vanessa Navarro Rodríguez, Vicente Rodríguez, Jennifer Romero, and John van Dort.

Our Thanks

Merlin is made possible by support from the National Science Foundation (grant number DRL-1010818), Pennington® Wild Bird Food, SWAROVSKI OPTIK, the Faucett Catalyst Fund, and friends and members of the Cornell Lab of Ornithology. We would like to express our gratitude to them as well as to the dedicated volunteers who made this app possible: eBird participants, visitors to All About Birds, and sound recordists and photographers who contributed their media to the Macaulay Library.

Photo ID was developed in collaboration with Dr. Pietro Perona’s computational vision lab at Caltech, and Dr. Serge Belongie’s computer vision group at Cornell Tech, collaborators on the Visipedia project. Merlin Photo ID uses computer vision technology, developed as part of Dr. Grant Van Horn’s doctoral work at Caltech, to identify birds in photos. Photo ID was first publicly released on November 30, 2017.

Sound ID was developed in-house at the Cornell Lab of Ornithology, led by Dr. Grant Van Horn with assistance from Dr. Benjamin Hoffman. Sound ID uses recordings archived in the Macaulay Library to learn to recognize the vocalizations of different bird species. It is trained on audio recordings that are first converted to visual representations (spectrograms), then analyzed using computer vision tools similar to those that power Photo ID. Dataset preparation began in 2020, with model development starting in early 2021. We thank the many annotators who helped curate hundreds of audio recordings for each species. Sound ID was first publicly released on June 23, 2021.
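For readers curious about the spectrogram step mentioned above: converting audio into a time-frequency image is what lets image-style models analyze sound. The sketch below is a toy illustration of that one step only, using NumPy; the function name, window, and frame sizes are our own illustrative choices and are not Merlin’s actual implementation.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Compute a magnitude spectrogram: slice the signal into overlapping
    frames, apply a Hann window, and take a real FFT of each frame,
    yielding a 2-D time-frequency image."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Rows are frequency bins, columns are time frames.
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Example: a pure 1 kHz tone sampled at 16 kHz for 0.5 s.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
tone = np.sin(2 * np.pi * 1000 * t)
spec = spectrogram(tone)
print(spec.shape)  # (129, 61): 129 frequency bins x 61 time frames
```

In the resulting image, the tone shows up as a bright horizontal band at the bin corresponding to 1 kHz; a bird song would instead trace out sweeps and trills, which is exactly the kind of visual pattern computer vision models are good at recognizing.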