In 2014 my idea of a futuristic translation tool was a souped-up Vim plugin or an ncurses TUI app that would autocomplete typing and do hyper-fast translation memory lookups. A decade-plus later, I've moved out of full-time translation and into dev work for translation/localisation agencies. Google's translation research gave us the neural MT improvements around 2016, and then the transformer model in 2017. Then workable coding agents surfaced and ideas continued to percolate. After a few discussions with colleagues who'd built similar things pre-AI, and a few projects that failed to gain traction with clients, I thought I should give it a go myself.
I knew exactly what I needed to build and what stack I wanted to use, and I had been testing LLM post-editing of (neural) machine translation in an elaborate script with batching, RAG, error handling, and so on. But models and "harnesses" have kept improving: features I thought would take months of work took a week (a video dubbing and subtitling suite, with cloned voices, time alignment, etc.). Performance and security audits from multiple models tightened things up, and continue to do so. Django takes care of the secure basics; I work on bugs and performance everywhere else.
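For the curious, that early post-editing script was roughly this shape: take a batch of segments, ask an LLM to post-edit each MT output, and retry with backoff on failure. This is a minimal sketch only, assuming an OpenAI-compatible client; the model name and prompt are invented for illustration, and the real script's RAG / translation-memory lookups are omitted.

```python
# Sketch of LLM post-editing of MT output with batching and simple retry.
# Assumes the openai package and an OPENAI_API_KEY in the environment;
# model choice and prompt are hypothetical, not the author's actual setup.
import time
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are a professional post-editor. Improve this machine translation "
    "of the source text. Return only the edited translation.\n\n"
    "Source ({src}): {source}\nMT output ({tgt}): {mt}"
)

def post_edit_batch(pairs, src="de", tgt="en", retries=3):
    """pairs: list of (source_text, mt_output) tuples; returns edited strings."""
    edited = []
    for source, mt in pairs:
        for attempt in range(retries):
            try:
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder model choice
                    messages=[{
                        "role": "user",
                        "content": PROMPT.format(src=src, tgt=tgt,
                                                 source=source, mt=mt),
                    }],
                )
                edited.append(resp.choices[0].message.content.strip())
                break
            except Exception:
                if attempt == retries - 1:
                    edited.append(mt)  # fall back to the raw MT on repeated failure
                else:
                    time.sleep(2 ** attempt)  # exponential backoff before retrying
    return edited
```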
I now have a next-gen translation tool. It can do genuinely useful things that the existing SOTA CAT tools cannot (yet) do, held back as they are by inertia and big-corp culture.
I've still got hundreds of tasks to do (Todoist MCP ftw) and doubtless many more bugs to iron out. But I'm slowing down on features now and switching to marketing, distribution, talking to audiences, etc., so I can concentrate on delivering value.
Keen to hear your thoughts. The homepage for the "Studio" is here: https://studio.languageops.com. And if you're not interested in the tool, come and spin a 3D globe that says hello in every language of the world as you hover over it: https://languageops.com. Easter egg somewhere south.