Today, with news and information coming from all corners of the globe, nearly everyone has had the experience of trying to puzzle out the meaning of a machine translation. But now, thanks to a widget created by Microsoft Research, anyone reading technical documentation on support.microsoft.com can review the content in its original English and, if they have one, provide a better translation. One question remained, though: who decides which translations are approved for publication on the Microsoft site?
“We receive thousands of submissions for translated sentences,” explained Jose Riesco, an international project manager in Engineering, Community and Online Support at Microsoft. “We needed to find trusted partners to help determine which ones to publish. Because of their technical expertise and proven community leadership, MVPs seemed like a natural choice.”
When Jose and his team reached out to a few MVPs to gauge their interest, the MVPs were enthusiastic about the project.
Because of the scope of Microsoft’s products and services, and the ongoing evolution of its technologies, the company delivers an enormous amount of technical documentation, and all of it needs translation. Microsoft’s machine translation technology is world-class. In fact, Microsoft’s Collaborative Machine Translation presentation recently won a TAUS Award of Excellence in Beijing. But that doesn’t mean it can’t improve, and to get better, machine translation technology depends on community involvement.
Microsoft Research developed a widget that lets a reader hover over a machine-translated sentence, see the original English, and compare the quality of the translation. Readers who think they can improve on the translation simply hit a button and submit their own.
Those submissions are then reviewed by approved moderators at Microsoft. Under a program that Jose and his team have been piloting for the past few months, a number of MVPs were invited to serve as moderators. “I was so impressed by their willingness to bring quality documentation to their regions,” said Jose. In fact, in one month, one MVP reviewed and approved nearly 1,900 sentences.
“Many MVPs prefer to read technical content in English, but that’s not true of a lot of other people around the world, especially consumers, people who are not necessarily technology professionals,” explained Jose. “We’re especially concerned with languages that have a smaller volume of native speakers. They are at a disadvantage because machine translation follows a statistical model: the more bilingual strings you have in the system, the better the quality of the output. Since for smaller languages we don’t have many pairs of sentences (English-target), the quality is not as high. When MVPs post-edit sentences in their native languages, those sentences get retrofitted into the system and improve the translation quality. These are the same engines that Microsoft uses in Bing.”
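The statistical idea Jose describes can be illustrated with a toy phrase table. This is only a sketch with made-up names and data, not how Microsoft’s engines actually work, but it shows why more bilingual sentence pairs lead to better output: translation probabilities are estimated from counts, and sparse data makes those estimates unreliable.

```python
from collections import Counter, defaultdict

def build_phrase_table(sentence_pairs):
    """Count how often each source phrase is paired with each target phrase,
    then turn the counts into translation probabilities."""
    counts = defaultdict(Counter)
    for source, target in sentence_pairs:
        counts[source][target] += 1
    table = {}
    for source, target_counts in counts.items():
        total = sum(target_counts.values())
        table[source] = {t: c / total for t, c in target_counts.items()}
    return table

def best_translation(table, source):
    """Pick the highest-probability target phrase, if the source is known."""
    candidates = table.get(source)
    if not candidates:
        return None  # no bilingual data at all: the model cannot translate this
    return max(candidates, key=candidates.get)

# With only a few pairs, one poor pairing carries a lot of weight;
# each additional post-edited pair sharpens the estimate.
pairs = [
    ("close the window", "cierra la ventana"),
    ("close the window", "cierra la ventana"),
    ("close the window", "cerca de la ventana"),  # a poor pairing
]
table = build_phrase_table(pairs)
print(best_translation(table, "close the window"))  # prints "cierra la ventana"
```

For a language with few speakers, the table has few entries and each one rests on a handful of counts, which is exactly why every post-edited sentence an MVP contributes matters.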
The pilot has been so successful that Microsoft is expanding the program to all MVPs.
“Once an MVP is added as a translator/moderator in our Collaborative Translation Framework database, they are able to approve/reject/edit contributions made by Microsoft users as well as enter their own. Once an MVP submits an edit, it gets automatically approved and immediately becomes part of the machine translation database,” said Jose. “Plus, seeing what an MVP chooses to translate or review helps us identify the most relevant content.”
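The moderation flow Jose outlines can be sketched as a simple review queue. All names and statuses here are hypothetical illustrations, not the Collaborative Translation Framework’s actual API: ordinary community submissions wait for a moderator’s decision, while a moderator’s own edits are approved immediately.

```python
from dataclasses import dataclass

PENDING, APPROVED, REJECTED = "pending", "approved", "rejected"

@dataclass
class Submission:
    source: str        # original English sentence
    translation: str   # community-proposed translation
    status: str = PENDING

class TranslationQueue:
    """Hypothetical sketch of the moderation flow described above."""

    def __init__(self):
        self.items = []

    def submit(self, source, translation, from_moderator=False):
        # A moderator's own edit skips review and is approved immediately.
        status = APPROVED if from_moderator else PENDING
        item = Submission(source, translation, status)
        self.items.append(item)
        return item

    def review(self, item, approve):
        # A moderator approves or rejects a pending community submission.
        item.status = APPROVED if approve else REJECTED

    def approved(self):
        # Approved sentences feed back into the translation database.
        return [i for i in self.items if i.status == APPROVED]
```

A community submission would sit in the queue until an MVP moderator calls `review` on it, whereas the MVP’s own post-edits enter the database right away, matching the auto-approval Jose describes.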
You can see Jose’s profile in the Community Champ section and the profiles of a number of MVPs who have participated as translation moderators in the MVP Spotlight section of this site.