“Where do you expect the language industry to go within the next 10 years?”
“How will the role of the linguist change?”
“How will quality management needs and methods evolve?”
Jan Nohovec posed these questions at the end of his Language Quality Manager course at Localization Academy.1 He left them open. I haven’t been able to.
In October 2023, Maria Schnell, Chief Language Officer at RWS, offered a partial answer. Goodbye translators, she wrote. Hello Language Specialists – as she described them: creative linguists, smart technologists, cultural consultants, content optimizers, industry experts.2 Schnell’s framing held in 2023. The industry has moved fast since, and the answer needs extending.
What’s missing is the distinction the AI moment has made urgent: the difference between QA and LQA.
Everyone thinks they know what quality is – and that assumption is the problem. Separating linguistic assessment from generic quality assurance – that’s what the L in LQA actually means. Jan Nohovec frames the problem plainly in his course material: quality is widely assumed and rarely defined. Angelika Ottmann and Carmen Canfora make the same point from a different angle in their 2020 MDÜ analysis: quality cannot be tested into a translation after the fact – it has to be built in through the process. Final-stage QA catches symptoms; it doesn’t produce quality where the workflow didn’t generate it.4 That is why the language part cannot be left to general quality management. It requires someone who holds both – the linguistic authority and the quality methodology – simultaneously.
This distinction has always mattered. What’s changed is that AI-assisted translation has made it simultaneously more necessary and more invisible. Enterprises are not retreating from AI – nor should they. The problem is deploying it without first defining what good looks like in the target language and culture.
The disciplinary gap is older than AI. Christopher Kurz traced it in the same 2020 MDÜ issue: ISO 9000 defines quality as the degree to which a set of inherent characteristics fulfills requirements, and ISO 9001 builds the management system around that definition. Both presuppose that requirements have been specified. In translation, they routinely haven’t been – which is why generic quality management treats linguistic output as if requirements were self-evident. They aren’t.5
LLMs produce output that automated quality checks pass with ease. The output is fluent, consistent, and fast. It also carries a failure mode none of those qualities can catch – text that reads as linguistically accurate but functions as culturally wrong.
Automated QA doesn’t catch this; the gap runs beneath what automated checks measure. Style guide governance can close it: a published Lionbridge case study showed that integrating style guide rules into LLM translation prompts produced significant, measurable quality improvements.3 What the rules don’t constrain up front is caught by the human layer that understands what the content is actually supposed to do in the target language and culture.
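The pattern the case study describes can be sketched in a few lines. Everything here is illustrative – the function name, the example rules, and the prompt layout are assumptions, not Lionbridge’s actual implementation; the point is only that the style guide constrains the model before generation rather than checking it after.

```python
# Sketch of the style-guide-in-prompt pattern. All names and rules below
# are illustrative assumptions, not the case study's actual implementation.

def build_translation_prompt(source_text: str, style_rules: list[str],
                             target_locale: str) -> str:
    """Prepend explicit style guide rules to a translation instruction,
    so the LLM is constrained up front instead of audited afterwards."""
    rules_block = "\n".join(f"- {rule}" for rule in style_rules)
    return (
        f"Translate the following text into {target_locale}.\n"
        f"Follow these style guide rules strictly:\n"
        f"{rules_block}\n\n"
        f"Text:\n{source_text}"
    )

# Hypothetical German B2B rules of the kind discussed above.
prompt = build_translation_prompt(
    source_text="Our game-changing platform transforms your workflow.",
    style_rules=[
        "Avoid superlative marketing claims (legal risk under §5 UWG).",
        "No military or combat metaphors.",
        "Prefer concrete benefit statements over hype adjectives.",
    ],
    target_locale="de-DE",
)
print(prompt)
```

The design choice matters more than the code: rules encoded in the prompt shape the output distribution, whereas rules encoded only in a post-hoc QA profile can merely flag what has already been generated.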
“In this new domain, the STEM fields of education will still be important, but it is the humanities, the arts, and the social sciences that will increase in relevance, and therefore power and influence.”
– Don Norman, Imminent Report 2026

Norman is naming the broader cultural shift; at the language layer, the part of LLM output that automated QA can verify is exactly the part that doesn’t carry meaning for the human reader.
For German, that gap is particularly consequential. In content for the German-speaking audience – where “game-changing” reads as hollow and military metaphors land as aggressive rather than energizing – the gap between linguistically correct and culturally functional sits beyond the reach of automated quality assurance. When AI produces “bahnbrechende, branchenführende Plattform” – groundbreaking, industry-leading – the German reader doesn’t register conviction; they register noise that legal review will flag under §5 UWG. Academic research on AI translation of real German corporate texts confirms the pattern: AI systematically neutralizes the performative and evaluative dimensions that carry meaning for the German reader.
An LLM output that satisfies every automated check can still fail that test. Reinhard Tillmann named the structural problem in 2020: in translation, the absence of complaints is routinely treated as evidence of quality. It isn’t. It’s evidence that no one has yet noticed – or that the readers most likely to notice have stopped reading.6
The Language Specialist title captures the skill evolution correctly. What it doesn’t capture is the governance layer – the systematic framework for deciding whether AI output actually works for the reader it’s written for, in the context it will be read in, with the consequences it carries if it doesn’t. That’s what LQA is. The industry is still working out how to provide it.
And that’s the question Jan left open – the one I find I can’t leave open. For now.
Sources
- Jan Nohovec, Language Quality Manager course, Localization Academy, 2025. Jan Nohovec is Head of Quality at Argos Multilingual; formerly RWS/Moravia/SDL. The three questions are posed as open reflection prompts at the end of the certification course. ↩
- Maria Schnell, Chief Language Officer, RWS. Public statement, October 2023. Widely cited within the localization industry. ↩
- Lionbridge. Published case study on style guide integration into LLM translation prompts. Specific publication details to be verified by Patrick Roye; the finding on measurable quality improvements is the operative claim. ↩
- Angelika Ottmann and Carmen Canfora, “Risikomanagement für Übersetzungen,” MDÜ – Fachzeitschrift für Dolmetscher und Übersetzer, 6/2020. ↩
- Christopher Kurz, “Translation Quality Management nach ISO 9001,” MDÜ – Fachzeitschrift für Dolmetscher und Übersetzer, 6/2020. ↩
- Reinhard Tillmann, contribution to MDÜ – Fachzeitschrift für Dolmetscher und Übersetzer, 6/2020. ↩