American Sign Language is not English; in fact, it's not even particularly close to English. Much of the language is conveyed with body movements beyond the hands and fingers, particularly facial expressions and "named placeholders."
> All this is to say, that we need to build a 5000 hour scale dataset for Sign Language Translation and we are good to go. But where can we find this data? Luckily news broadcasters often include special news segments for the hearing-impaired.
You need _way_ more than just 5000 hours of video. People who are deaf or hard of hearing, in my experience, dislike the interpreters in news broadcasts. It's very difficult, as an interpreter, to provide _worthwhile_ translations of what is being spoken _as_ it is being spoken.
It's more of a bad, broken transliteration that, if you work at it, you can parse out and understand.
The other issue is that most interpreters are hearing, and so use the language slightly differently from deaf people themselves. Training on news-broadcast interpretation will leave the model very weak at understanding and interpreting anything outside that context. ASL has "dialects" and "slang."
Hearing people always presume this will be simple. They should really just take an ASL class and work with deaf and hearing impaired people first.
During the pandemic she’d get very frustrated by the ASL she saw on the news. Her mom and deaf friends couldn’t understand the interpreters. It wasn’t long before she was on the news regularly herself to make sure better information was going out. She kept getting COVID because she refused to wear a mask while working; covering up the face would have made the message much harder to convey. I had to respect the dedication.
It's common to think that we think in languages, but at a fundamental level, we simply don't. Ever have the experience of not being able to remember the word for something? You know exactly what you are thinking but can't come up with the word. If you thought in your language, this wouldn't happen.
On a related note, this sort of explains why our model is struggling to fit our current 500-hour dataset (it underfits even the training set). Even so, the current state of automatic translation for Indian Sign Language is that, in the wild, even individual words cannot be detected very well. We hope that what we are building might at least improve the state-of-the-art there.
> It's more of a bad, broken transliteration that, if you work at it, you can parse out and understand.
Can you elaborate a bit more on this? Do you think that if we make a system that produces bad/broken transliteration and funnel its output through ChatGPT, it might give meaningful results? That is, ChatGPT might be able to correct the errors, since it is a strong language model.
No, because ChatGPT's training data has practically no way of knowing what a real sign language looks like, since there's no real written form of any sign language and ChatGPT learned its languages from writing.
Sincerely: I think it's awesome that you're taking something like this on, and even better that you're open to learning about it and correcting flawed assumptions. Others have already noted some holes in your understanding of sign, so I'll also just note that I think a solid brush up on the fundamentals of what language models are and aren't is called for—they're not linguistic fairy dust you can sprinkle on a language problem to make it fly. They're statistical machines that can predict likely results based on their training corpus, which corpus is more or less all the text on the internet.
I'm afraid I'm not in a good position to recommend beginner resources (I learned this stuff in university back before it really took off), but I've heard good things about Andrej Karpathy's YouTube channel.
> We hope that what we are building might at least improve the state-of-the-art there.
Do you have any theoretical arguments for how and why it would improve it? If not, my concern is that you're just sucking the air out of the room. (Research into "throw a large language model at the problem" doesn't tend to produce any insight that could be used by other approaches, and doesn't tend to work, but it does funnel a lot of grant funding into cloud providers' pockets.)
I'm not sure I understand your point. Chinese is also not English but machine translation of Chinese to English can be done.
You're right that laypeople often assume, wrongly, that a given country's sign language is an encoding of the local spoken language. In reality it's usually a totally different language in its own right. But that shouldn't mean that translation is fundamentally impossible.
It seems to be more common to see sign language interpreters now. Is it just virtue signaling to have that instead of just closed captions?
Regardless, I’m curious how (dis)similar ISL is to ASL.
From my understanding, they are quite dissimilar. A person who knows ISL will not understand ASL, for example.
Repetition of a sign usually indicates an additional emphasis.
All the dialects need to be covered, with each sign mapped to the multiple words it can correspond to.
Furthermore, YouTube has an excellent collection of really bad or fake ASL interpreters in news broadcasts: so bad, really really bad, worse than the Al Gore hanging-chad news coverage or the "hard-of-hearing" inset box from Saturday Night Live's news sketches.
You still need an RID-certified or CDI-certified ASL interpreter to vet the source.
In the ASL world, most news translations into ASL are delayed or sped up from the person talking and/or the captions if they happen to also be available.
You are going to have sync problems.
Secondly, it's not just moving the hands: body movements, facial expressions, etc. all count in ASL, and I'm betting they count in ISL as well.
Thirdly, the quality of interpretation can be really bad. Horrendous. It's not so common these days, but it was fairly common for speakers to hire an interpreter and mistakenly end up with someone willing to just move their arms randomly. It happened to me once at a doctor's office. The "interpreter" was just lost in space. The doctor and I started writing things down, and the interpreter at least seemed a little embarrassed.
Sometimes they hire sign language students; imagine hiring a first-year French student to interpret for you. It's no different, really. Sometimes they mean well; sometimes they are just there for the paycheck.
I bet it's a lot worse with ISL, because it's still very new: most students are not taught in ISL, and there are only about 300 registered interpreters for millions of deaf people in India. https://islrtc.nic.in/history-0
We are still very much struggling with speech-to-English transcription using AI, despite loads of work from many companies and researchers. The systems are getting better, and in ideal scenarios are actually quite useful; unfortunately, the world is far from ideal.
The other day I was on a meeting where two people were using the same phone. The AI transcription got highly confused, and it went very, very wrong.
I'm not trying to discourage you, and it's great to see people trying. I wish you lots of success; just know it's not an easy thing, and I imagine lifetimes of work will be needed to produce signed-language-to-written-language services on par with the best voice-to-text systems we have today.
I do think these problems are much, much worse for ISL as you rightly noted.
I think I should have been careful when I said "solve" in my post. But that really came from a place of optimism/excitement.
Just know it won't be a weekend hack to solve the problem(s), though.
It's the same reason we prefer interpreters to Google Translate when the message is important.
Interpretation adds the nuance of a human that all the automatic tools miss.
I'm sure it could make a small dent in the market for interpreters but only a small one.
Unfortunately, there's no getting away from that. While the scarcity of data is indeed an issue, and your idea is nice (congratulations!), the actual problem is the scarcity of _useful_ data. Since sign language doesn't correspond to the oral language, there are many problems with alignment and with deciding just what to translate to. Glosses (oral-language words used as representations for signs) are not enough at all, since they don't capture the morphology and grammar of the language, which among other things relies heavily on space and movement. Video plus audio/audio captions is nearly useless.
Good luck with your efforts, this is a fascinating area where we get to combine the best of CS, AI, linguistics... but it's hard! As I said, let me know if you want some literature, by PM/email if you want, and I'll get back to you later.
Detecting the presence of sign language in a video is an interesting subset of the problem and is important for building out more diverse corpora. I would also try to find more conversational sources of data, since news broadcasts can be clinical as others have mentioned. Good luck.
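As a toy illustration of that presence-detection subproblem (not anyone's actual method), one cheap first-pass filter is to run an off-the-shelf pose estimator over a clip and flag sustained hand motion. Everything below is a made-up sketch: the frame format, the 21-keypoint assumption, and the threshold are all invented for the example, and a real system would need a learned classifier rather than a motion heuristic.

```python
import math

def hand_motion_score(frames):
    """Mean frame-to-frame displacement of hand keypoints.

    frames: list of frames, where each frame is a list of (x, y)
    hand-keypoint coordinates (e.g. landmarks from a pose estimator).
    """
    total, count = 0.0, 0
    for prev, cur in zip(frames, frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, cur):
            total += math.hypot(x1 - x0, y1 - y0)
            count += 1
    return total / count if count else 0.0

def contains_signing(frames, threshold=0.01):
    """Crude presence test: sustained hand motion above a threshold.

    The threshold is arbitrary here; a real pipeline would calibrate it
    (or replace this heuristic entirely with a trained classifier).
    """
    return hand_motion_score(frames) > threshold

# Synthetic demo: an oscillating "signing" clip vs. a motionless one.
signing = [[(0.2 * math.sin(0.1 * f) + p, 0.2 * math.cos(0.1 * f))
            for p in range(21)] for f in range(120)]
still = [[(float(p), 0.0) for p in range(21)] for f in range(120)]
```

A heuristic like this would only narrow down candidate segments; distinguishing signing from ordinary gesturing is exactly where the harder, learned part of the problem begins.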