At the Feb. 27 Content Delivery & Security Association (CDSA) Summit in London, Konstantin Dranch, founder of Custom Machine Translation (Custom.MT), which integrates leading machine translation and GenAI models into localization workflows, said he sees a change happening in how AI is being used for content creation, and in who is using it.
His opening keynote — “LLMs in Media Localization: Creators vs. Studios” — examined how individual creators have harnessed large language models (LLMs) to produce video content, using the technology to create everything from voiceovers and subtitles to generated images and video effects. Meanwhile, the professional media localization industry remains (largely) untouched by AI.
Traditional players — broadcasters, streamers, and vendors — continue to operate with conventional methods, showing no significant productivity gains from this technology, Dranch said, while creators are already producing content faster and creating more of it.
This faster adoption of AI in content creation could spell trouble for the traditional studio content ecosystem, he suggested.
Custom.MT got its start helping subtitlers, dubbers, and translators ease manual tasks and produce translations in almost any voice imaginable (and do so accurately). “What everybody wants is to implement AI and drop the costs,” Dranch said. “And so what the text localization industry is doing right now is called quality estimation … when you ask a large language model to check for you.
Instead of a human checking after AI, it’s AI checking after AI.” Companies employing the technology are seeing real savings, he said, and, beyond the private sector, governments small and large are beginning to see cost benefits as well.
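As a rough illustration of that “AI checking after AI” idea, a quality-estimation step can be as simple as prompting a second model to grade each machine translation and routing only low-scoring segments to a human reviewer. The sketch below is hypothetical and is not Custom.MT’s actual pipeline; the model name, prompt wording, and scoring rubric are assumptions, and it uses the OpenAI Python client as one possible backend.

```python
# Minimal sketch of LLM-based quality estimation ("AI checking after AI").
# Hypothetical example: the model, prompt, and rubric are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def estimate_quality(source: str, translation: str, target_lang: str) -> str:
    """Ask an LLM to grade a machine translation instead of a human reviewer."""
    prompt = (
        f"Rate the following {target_lang} translation of the source text "
        "on a 1-5 scale for accuracy and fluency, then list any errors.\n\n"
        f"Source: {source}\n"
        f"Translation: {translation}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# In practice, only segments scored below some threshold would go to a human.
print(estimate_quality(
    "The film opens next Friday.",
    "La película se estrena el próximo viernes.",
    "Spanish",
))
```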
But in media specifically, roadblocks are appearing, Dranch said. Among the concerns are trust issues, he added: viewers’ wariness of hearing a synthetic voice, and doubts about whether AI can be relied on to translate accurately. “ … Broadcasters and the [studios], they are limited by this trust challenge. … So yes, the situation is challenging, there’s been guild strikes, there’s been a hype, but the content spend is going on up and I’m hopeful that 2025, 2026, your market will recover a little bit,” Dranch said.
That trust issue isn’t present for content creators, though: they’re adopting GenAI “like crazy,” Dranch said.
“The content doesn’t have to be so polished or trustworthy. It just needs to answer to something which we haven’t heard before,” he said. Google is making free AI dubbing available to content creators, and while the quality may not be top-notch, “it’s free,” Dranch said. “Everybody who sees an opportunity [will use] YouTube, right? This here is the content grow zone. This is where the boundaries have not been drawn yet, not completely. These people who are building to be there, they need monetization, they need dubbing, they need subtitling, they need guidance on how to make it bigger. And I think the fight of 2025, for those who want to change a little bit in the world, who want to be brave, explore, fail many times, is there.”
With the theme “Where AI and Localization Converge,” the Content Delivery & Security Association (CDSA) Summit in London brought together the European community to talk about key trends, challenges and the future for the localization industry.
Attendees heard from subject matter experts, academics, content creators, creatives, and their service provider partners as they delved into issues around the industry landscape following a tumultuous period, including how artificial intelligence, voice technologies, and machine learning are playing an ever-more important role, and how the adoption of smart, targeted cloud-based solutions can help achieve greater workflow efficiencies.
The CDSA Summit London was sponsored by Papercup, Red Bee, Deluxe, EIDR, Iyuno, Tech Align Group, OOONA and Voiseed.