I personally was happy to see this project get built. The dolphin researchers have been doing great science for years, and from the computational/mathematics side it was quite neat to see how that was combined with the Gemma models.
>By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication — a task previously requiring immense human effort.
But this doesn't really tell me anything. What does it mean to "help researchers uncover" this stuff? What is the model actually doing?
The article reads like the press releases you see from academic departments, where an earth-shattering breakthrough is juuuuust around the corner. In every single department of every single university.
It's more PR fluff than substance.
LLMs are multi-lingual without really trying, assuming the languages in question are sufficiently well-represented in their training corpus.
I presume their ability to translate comes from the fact that there are lots of human-translated passages in their corpus: the same work appearing in multiple languages, which lets them figure out the necessary mappings between semantic points (words).
But I wonder about the translation capability of a model trained on multiple languages but with completely disjoint documents (no documents that were translations of another, no dictionaries, etc).
Could the emerging latent "concept space" of two completely different human languages be similar enough that the model could translate well, even without ever seeing examples of how a multilingual human would do a translation?
I don't have a strong intuition here but it seems plausible. And if so, that's remarkable because that's basically a science-fiction babelfish or universal translator.
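To make that intuition concrete, here's a toy sketch (purely illustrative, nothing to do with how any real model is trained, and all the numbers are made up): if two "languages" are just different rotations of one shared concept space, their internal similarity structure comes out identical even with zero parallel data, and that shared structure is exactly the kind of signal an unsupervised aligner could exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared "concept space": 50 concepts, 16 dimensions.
concepts = rng.normal(size=(50, 16))

def random_rotation(dim, rng):
    # Random orthogonal matrix via QR decomposition.
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    return q

# Each "language" sees the same concepts in its own arbitrary coordinates,
# i.e. completely disjoint vocabularies, no translated documents.
lang_a = concepts @ random_rotation(16, rng)
lang_b = concepts @ random_rotation(16, rng)

def cosine_sims(x):
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

# The relational "shape" of the two spaces is identical, which is what
# unsupervised cross-lingual alignment methods latch onto.
print(np.allclose(cosine_sims(lang_a), cosine_sims(lang_b)))  # True
```

Real languages obviously aren't exact rotations of one another, so real alignment is much messier, but it's why the babelfish idea doesn't seem completely crazy to me.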
In the case of non-human communication, I know there has been some fairly well-motivated theorizing about the semantics of individual whale vocalizations. You could imagine a first pass at something like this if the meaning of (say) a couple dozen vocalizations could be characterized with a reasonable degree of confidence.
Super interesting domain that's ripe for some fresh perspectives imo. Feels like at this stage, all people can really do is throw stuff at the wall. The interesting part will begin when someone can get something to stick!
> that's basically a science-fiction babelfish or universal translator
Ten years ago I would have laughed at this notion, but today it doesn't feel that crazy.
I'd conjecture that over the next ten years, this general line of research will yield some non-obvious insights into the structure of non-human communication systems.
Increasingly feels like the sci-fi era has begun -- what a time to be alive.
Yes. I remember reading that the EU parliamentary proceedings in particular are used to train machine translation models. Unfortunately, I can't remember where I read that. I did find the dataset: https://paperswithcode.com/dataset/europarl
Languages encode similar human experiences, so their conceptual spaces probably have natural alignments even without translation examples. Words for common objects or emotions might cluster similarly.
But without seeing actual translations, a model would miss nuances, idioms, and how languages carve up meaning differently. It might grasp that "dog" and "perro" relate to similar concepts without knowing they're direct translations.
And it gets even more complex because the connotations of "dog" in the USA in 2025 are unquestionably different from "dog" in England in 1599. I can only assume these distinctions also hold across languages. They're not a direct translation.
Let alone extreme cultural specificities... To follow the same example, how would one define "doge" now?
Regardless of whether or not it works perfectly, surely we can all relate to the childhood desire to 'speak' to animals at one point or another?
You can call it a waste of resources or someone's desperate attempt at keeping their job if you want, but these are marine biologists. I imagine cross-species communication would be a major achievement, and it seems like a worthwhile endeavor to me.
If we want to know something about what's going on in the ocean, or high on a mountain or in the sky or whatever - what if we can just ask some animals about it? What about for things that animals can naturally perceive that humans have trouble with - certain wavelengths of light or magnetic fields for example? How about being able to recruit animals to do specific tasks that they are better suited for? Seems like a win for us, and maybe a win for them as well.
Not sure what else, but history suggests that the more people have been able to communicate with each other, the better the outcomes. I assume this holds true more broadly as well.
However, if universal communication were achieved, don't you think the animals are going to be pretty pissed to discover what we have done with their kingdom?
"Hi Mr Dolphin, how's the sea today?" "Wet and a plastic bottle cap got lodged in my blowhole last night..."
Sometimes it's good just to know things. If we needed to find a practical justification for everything before we started exploring it, we'd still be animals.
The cynicism on display here is little more than virtue signalling and/or upvote farming.
Sad to see such thoughtless behaviour has reached even this bastion of reason.
Plenty of people genuinely dislike the concentration of economic and computing power that big tech represents. Them expressing this is not "virtue signaling", it is an authentic moral position they hold.
Plenty of people genuinely dislike the disregard for labor and intellectual property rights that anything Gen AI represents. Again, an authentic moral position.
"Virtue signaling" is for example, when a corporate entity doesn't authentically support diversity through any kind of consequential action but does make sure to sponsor the local pride event (in exchange for their logo being everywhere) and swaps their social media logos to rainbow versions.
Calling something "trendy" is a great way to try to dismiss it without actually providing any counterargument. The deep suspicion of anything Google does is extremely well justified IMHO.
Please give me one example in the last decade where Meta or Google research has led to actual products or open-sourced technologies, and not just expensive proprietary experiments shelved after billions were spent on them.
I'll wait.
To me this task looks less like next token prediction language modeling and more like translating a single “word” at a time into English. It’s a pretty tractable problem. The harder parts probably come from all the messiness of hearing and playing sounds underwater.
I would imagine adapting to new vocabulary would be pretty clunky in an LLM-based system. It would be interesting if it were able to add new words in real time.
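Mechanically, adding new symbols to an existing model is easy; the clunky part is that the new embeddings start out meaningless. A rough sketch with the Hugging Face API (the model name and the whistle/click token names are placeholders I've invented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder; any causal LM from the hub works the same way.
model_name = "google/gemma-2-2b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Treat each newly identified vocalization type as its own token.
new_vocab = ["<whistle_017>", "<click_burst_03>"]
num_added = tokenizer.add_tokens(new_vocab)

# Grow the embedding matrix; the new rows are randomly initialized, so the
# model still has to learn what these tokens mean from data.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, vocab size is now {len(tokenizer)}")
```

So the real-time part is the hard bit: every new "word" needs enough examples, in context or via fine-tuning, before the model can do anything useful with it.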
https://www.nytimes.com/2017/12/08/science/dolphins-machine-...
The difference between recognizing someone from hearing them, and actually talking to them!
If it can't do the most basic stuff, please explain to me how in the fuck it is going to understand dolphin language, and why we should believe its results anyway?
It's rather unsound reasoning, but you certainly can.
But, since context is so important to communication, I think this would be easier to accomplish with carefully built experiments with captive dolphin populations first. Beginning with wild dolphins is like dropping a guy from New York City into rural Mongolia and hoping he'll learn the language.
> WDP is beginning to deploy DolphinGemma this field season with immediate potential benefits. By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication — a task previously requiring immense human effort. Eventually, these patterns, augmented with synthetic sounds created by the researchers to refer to objects with which the dolphins like to play, may establish a shared vocabulary with the dolphins for interactive communication.
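For anyone wondering what "identifying recurring sound patterns, clusters and reliable sequences" might look like in its crudest form, here's a toy sketch on synthetic data (emphatically not WDP's or Google's actual pipeline, just the general shape of the idea):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in for real recordings: 300 short vocalizations, each summarized as a
# small feature vector (in practice something like MFCCs or a learned audio
# embedding; here just synthetic blobs around 4 hidden "call types").
true_centers = rng.normal(scale=5.0, size=(4, 12))
features = np.vstack([center + rng.normal(size=(75, 12))
                      for center in true_centers])

# Group similar vocalizations; sequences of cluster labels over time are then
# what you'd mine for recurring patterns.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))  # rough size of each recurring "call type"
```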
And then the world will suddenly understand... But that alone would tell us almost nothing of what dolphin dialogue means.
To understand their language we need shared experiences, shared emotions, common internal worlds. Observation of dolphin-dolphin interaction would help but to a limited degree.
It would help if the dolphins are also interested in teaching us. They or we could say to the other, "... that is how we pronounce sea-cucumber". Shared nouns would be the easiest.
The next level, a far harder level, would be to reach the stage where we can say 'the emotion that you are feeling now, that we call "anger"'.
We will not quite have the right word for "the anxiety that I feel when my baby's blood flow doesn't sound right on the Doppler".
Teaching or learning 'ennui' and 'schadenfreude' would be a whole lot harder.
This raises a question: can one fully feel and understand an emotion we do not have a word for? Perhaps Wittgenstein has an answer.
Postscript: I seem to have triggered quite a few of you and that has me surprised. I thought this would be neither controversial nor unpopular. It's ironic in a way. If we can't understand each other, understanding dolphin "speech" would be a tough hill to climb.
For all the words that they don't have in their language, we/they can invent them. Just like we do all the time: artificial intelligence, social network, skyscraper, surfboard, tuxedo, black hole, whatever...
It might also be possible that dolphins' language uses the same patterns as our language(s) and that an LLM that knows both can manage to translate between the two.
I suggest taking a slightly more optimistic view of the world, especially of something that can hardly have any negative consequences for humanity.
If you had read this part --
"But that alone would tell us almost nothing of what dolphin dialogue means.
To understand their language we need shared experiences, shared emotions, common internal worlds. Observation of dolphin-dolphin interaction would help but to a limited degree."
it ought to have been clear that what I am arguing is that a corpus of dolphin communication fed to an LLM alone will not suffice. A lot of investment has to be made in that part: to understand their language we need shared experiences, shared emotions, common internal worlds.
I am sure both you and I would be very happy the day we can have some conversation with dolphins.
Even a limited success would gladden my heart.
Why? With modern AI there exists unsupervised learning for translation, where you don't have to explicitly make translation pairs between the two languages. It seems possible to eventually create a way to translate without needing a teaching process for individual words like the one you describe.
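A rough sketch of the flavour of those methods, on toy data (in real unsupervised pipelines the word-to-word correspondence is itself bootstrapped from structure or adversarial training; here the rows already line up, which is exactly the part being hand-waved):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy embeddings: "language B" is a hidden rotation of "language A" plus noise,
# standing in for two vocabularies learned from completely disjoint corpora.
n_words, dim = 200, 32
emb_a = rng.normal(size=(n_words, dim))
hidden_rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
emb_b = emb_a @ hidden_rotation + 0.01 * rng.normal(size=(n_words, dim))

# Orthogonal Procrustes: the rotation W minimizing ||A @ W - B||_F.
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
w = u @ vt

# "Translate" by mapping A's vectors into B's space and taking nearest neighbours.
mapped = emb_a @ w
dists = np.linalg.norm(mapped[:, None, :] - emb_b[None, :, :], axis=-1)
accuracy = np.mean(dists.argmin(axis=1) == np.arange(n_words))
print(f"nearest-neighbour 'translation' accuracy: {accuracy:.2f}")
```

With real languages the geometries only roughly match, so it works far less cleanly, but that's the basic trick behind translation without explicit translation pairs.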
There was a NASA-funded attempt to communicate with dolphins. The eccentric scientist behind it, John C. Lilly, created a house that was half water (a series of connected pools) and half dry spaces. A woman named Margaret Howe Lovatt lived full-time with the dolphins, attempting to learn a shared language with them.
Things went completely off the rails in many, many ways. The lead scientist became obsessed with LSD and built an isolation chamber above the house. This was like the sensory deprivation tanks you get now (often called float tanks). He would take LSD, place himself in the tank, and believe he was psychically communicating with the dolphins. [1]
1. https://www.theguardian.com/environment/2014/jun/08/the-dolp...
She also had sex with a male dolphin called Peter.
>He would take LSD and place himself in the tank and believed he was psychically communicating with the Dolphins.
He eventually came to believe he was communicating with a cosmic entity called ECCO (Earth Coincidence Control Office). The story of the Sega game "Ecco the Dolphin" [1] is a tongue-in-cheek reference to this. I recommend watching the Atrocity Guide episode on John C. Lilly and his dolphin "science" [2]. It's on par with The Men Who Stare at Goats (the non-fiction book [3], not the movie).
He has a website that looks like it's been untouched since his death in 2001: http://www.johnclilly.com/
[1] https://en.wikipedia.org/wiki/Ecco_the_Dolphin
[2] https://www.youtube.com/watch?v=UziFw-jQSks
[3] https://en.wikipedia.org/wiki/The_Men_Who_Stare_at_Goats
Paraphrasing Carl Sagan: "You don't go to Japan, kidnap a Japanese man, start j*rking him off, give him f*cking acid, and then ask him to learn English!"
The bad outcome is the "AI" will translate our hellos as an insult, the dolphins will drop the masquerade, reveal themselves as our superiors and pound us into dust once and forever.
Picture the last surviving human surrounded by dolphins floating in the air with frickin laser beams coming out of their heads... all angrily asking "why did you say that about our mother?".
And in the background, ChatGPT is saying "I apologize if my previous response was not helpful".