Google's AI division, DeepMind, has introduced a groundbreaking artificial intelligence model named DolphinGemma, aimed at helping researchers interpret dolphin vocalizations and better understand the complexity of dolphin communication. This initiative is part of a broader effort to bridge the gap between human technology and animal behavior using advanced AI.
DolphinGemma was developed in collaboration with the Wild Dolphin Project (WDP), a nonprofit research organization that has studied Atlantic spotted dolphins in their natural habitat for nearly four decades. Built on Google's lightweight, open-source Gemma model architecture, DolphinGemma is designed to generate dolphin-like sound sequences and analyze real dolphin vocalizations for meaningful patterns.
What sets DolphinGemma apart is its ability to run efficiently on mobile devices. According to Google, the model is optimized for smartphones, which significantly enhances field research capabilities. This summer, WDP will deploy the model on the Google Pixel 9 smartphone, enabling real-time interaction with dolphin vocalizations. The device will not only produce synthetic dolphin calls but also "listen" to and analyze natural responses from dolphins to determine possible matches, essentially simulating a two-way communication loop.
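To make the idea of this call-and-response loop concrete, here is a minimal, purely illustrative sketch of how a device might check a recorded reply against a library of known call templates. All names, feature vectors, and the cosine-similarity matching strategy are assumptions for illustration; they are not part of DolphinGemma or WDP's actual pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(reply, call_library, threshold=0.8):
    """Return the name of the library call most similar to the
    recorded reply, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, template in call_library.items():
        score = cosine_similarity(reply, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy feature vectors standing in for acoustic embeddings
# (hypothetical call types, not real dolphin data).
library = {
    "signature_whistle": [0.9, 0.1, 0.0],
    "click_train": [0.0, 0.2, 0.95],
}
reply = [0.85, 0.15, 0.05]  # a recorded response, close to the whistle
print(best_match(reply, library))  # → signature_whistle
```

In a real deployment the templates would be learned acoustic embeddings and the matching would run continuously on-device, but the core step, scoring an incoming sound against known calls and keeping the best match above a confidence threshold, follows this shape.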
Previously, WDP researchers used the Pixel 6 for similar studies, but the newer Pixel 9 offers improved processing power, allowing both the AI model and signal-matching algorithms to run simultaneously. This advancement could accelerate discoveries in marine biology and deepen our understanding of how intelligent marine species, like dolphins, use complex sounds to interact.
Google and WDP believe this project could pave the way for more ethical, AI-enhanced research practices in wildlife studies, potentially unlocking new dimensions in interspecies communication. The effort not only highlights the versatility of AI but also underscores its potential in conservation science and animal behavior research.