
Wednesday, April 26, 2017

Optimizing LSP Performance in the Artificial Intelligence Landscape

Artificial Intelligence and Machine Learning have been all over the news of late, and all the internet giants are making huge investments in acquiring AI expertise and/or applying "machine intelligence," the term now used to describe how these two areas come together in business applications. AI is said to be attracting more venture capital than any other single area at the moment, and people now regularly claim that AI-guided machines will dominate, or at least deeply influence and transform, much of our lives in the future, perhaps dangerously so. This overview of AI and this detailed overview of Neuralink are quite entertaining, give a sense of the velocity of learning and knowledge acquisition we are currently facing, and are IMO worth at least skimming through.

However, machines learn from data and find ways to leverage patterns in that data in innumerable ways. The value of this pattern learning can only be as good as the data used, and however exciting the technology seems, we need to understand that our comprehension of how the brain (much less the mind) works is still in its infancy. “Machine learning” is a fancy way of saying “finding patterns in data”. If the data is not good, the patterns will be suspect and often wrong. This will not change no matter how good the algorithms and technology are.

The business translation industry is only just beginning to understand the value of "big data". Very few companies collect data at a level that lets them really leverage these new options. In my years working with SMT, I have seen the industry's terrible and archaic practices around translation data organization, management and collection, with LSPs struggling to find data to build engines. A huge amount of time is spent gathering data in these cases, and little is known about its quality and value. However, as this post points out, there is a huge opportunity in gathering all kinds of translation-process-related data and leveraging it so that most man-machine processes improve and continually raise productivity and quality.

This is a guest post by Danny de Wit about a cloud-based technology he has developed called Tolq. It is an AI-infused translation management system with multiple translation processes that can be continually enhanced with AI-based techniques as process-related data is acquired. The system combines TMS, project management and CAT tool functionality, and can be continually updated and enhanced to improve workflow ease and efficiency for both traditional and MT-related workflows.

====================


Boom. There it was in the news: Google makes all LSPs redundant with its Neural Machine Translation efforts! And if not now, then surely within a few years. Neural machine translation solves translation. Right?

Translation industry insiders already know this is not the case, nor is it going to be the case soon. Translation is about understanding, and not just about figuring out patterns, which neural networks do so well. So there are still huge limitations to overcome: homonyms, misspelled source words, consistency, style and tone-of-voice, localisation, jargon/terminology, incorrect grammar in the source text and much more.

For now, artificial intelligence is a tool, just like translation memory and CAT tools are. But it's important to understand that it is a very powerful one, and one of a different kind: a tool which, when applied correctly, gains strength every day and will open up more and more new opportunities.

Within a period of just a few years, the entire translation industry will be reshaped. Translation is not the only industry where the impact will be felt. The same will go for virtually any industry. The technology is that powerful. But it won’t be in the shape of zero-shot NMT.

Artificial intelligence is a wave of innovation that you have to jump on. And do so today.

Fortunately, one key thing that will help you do this is something LSPs have already been doing for years: collecting high-quality data.



The Key is Data & Algorithms



All A.I. advancements are built upon models that are generated from data sets. Without data, there is no model. Data is both the enabler and the limit: what's not in the data is not available to be used.

LSPs have been collecting data for years and years. Using that data to power A.I. algorithms, today and in the future, is a key strategy to implement.

In addition, LSPs have client-specific data and add to that data on a daily basis. This means there is an opportunity to offer client-specific A.I. tools that gain in strength over time. Offering this to your clients is a big differentiator.

Having generic NMT plus different layers of client-specific NMT and other client-specific A.I. tools can provide you with workflows that were previously unachievable.
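
As a rough illustration of what such layering can mean in practice, here is a minimal sketch (in Python) of routing a segment to a client-specific engine when one exists, and falling back to a generic engine otherwise. The class and engine names are hypothetical placeholders, not Tolq's actual API; a real client-specific layer would itself be trained on that client's own translation memories and process data, which is exactly why collecting that data matters.

    # Minimal sketch of layered engine routing: client-specific first, generic NMT as the fallback.
    # All names here are hypothetical placeholders, not Tolq's actual API.
    class TranslationBackbone:
        def __init__(self, generic_engine, client_engines):
            self.generic_engine = generic_engine    # general-purpose NMT engine
            self.client_engines = client_engines    # dict: client id -> client-specific engine

        def translate(self, client_id, segment):
            engine = self.client_engines.get(client_id)
            if engine is not None:
                return engine.translate(segment)           # client-specific layer
            return self.generic_engine.translate(segment)  # generic fallback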

But how do you integrate this into your operations? We all know that clients, and therefore LSPs, have a hard time managing data. The incentive to centralise was always there, but it has never been as clear or as impactful as it is right now.



 

Unification of Workflow, Data Storage and Algorithms: the new holy trinity for LSPs

The interests of LSPs and clients are aligned when it comes to the unification of these elements.

Clients can take advantage of huge cost savings and new services (such as generated multilingual content, among others).

LSPs will operate more efficiently and gain huge workflow management advantages.

LSPs strengthen their position as indispensable partners for clients because of the complexity of the technology involved. Clients will not be able to implement anything like this themselves, in part due to the combination of data and algorithms required to make it all work.

Simple workflow improvements, such as those recently offered by companies like Lilt, are a step forward, but due to their architecture they can only add some efficiency gains inside an existing process; they cannot lift the LSP organisation to a new level that takes advantage of all A.I. has to offer.

Instead, an architecture that brings together the three key elements to take full advantage of the future is a much better alternative. Tolq calls this the "A.I. Backbone", and it is present in all our operations: all workflows and data storage are unified into a central, but still layered, structure, and different A.I. algorithms are then added on top to optimise and expand the translation services process.

An architecture that makes your company stronger each day, with each translated word.
 

 

New Opportunities: Algorithms Galore!


What other advances should we expect from A.I. in the near term?

LSPs can look forward to more advances in workflow optimisation and the possibility of introducing new services.

Generic engines will be one of the tools to take advantage of, but to get to the final product that clients require, LSPs that centralise their operations and data will be able to use new algorithms to offer clients new services. Some examples are shown in the diagram below, but many more will become available as the A.I. wave picks up speed.
 

 

Value Creation Strategies for LSPs


When it comes to creating value, the core strategy for LSPs should be to start combining technology with data. Even the technology giants will be envious of that combination. We expect to see huge acquisitions in that space.

In addition, combining data with algorithms can provide powerful, scalable profit centers by their very nature. It's data crunching vs. human workflows.
========


Danny de Wit - CEO & Founder, Tolq

Danny de Wit founded Tolq in 2010. Danny’s drive to radically innovate the traditional world of translation was fuelled by the lack of satisfying solutions for website translation.

Danny de Wit studied Business Administration at the Erasmus University Rotterdam (Netherlands). After a career in sales, consultancy and interim management, Danny’s primary focus and passion have been startups and technology.

Prior to founding Tolq, Danny was involved as an entrepreneur in starting up several online businesses. He founded Exvo.com in 2000 and Venturez.com in 2009.

Danny’s innovative ideas for virtualizing business organization using technology have been acknowledged by two European patents, two Dutch patents and one American patent pending. The intellectual property covers specific methods of distributing work over large populations of resources while optimizing the quality and efficiency of the work in real time. This technology is applied in Tolq.com.

In 2012, Tolq was a finalist at the Launch! Conference in San Francisco and was recognised by Forbes as a startup with very promising technology, the platform being described as a faster, easier and cheaper way to translate any website.

In 2014 Tolq was selected as one of the top 15 companies at The Next Web Conference and mentioned by Forbes again as one of the leading startups in Holland.

Tolq.com provides LSPs with a technology architecture that puts A.I. at the core of all their operations.

Thursday, April 20, 2017

LSP Perspective: MT Post-Editing Means a Drastic Reduction in Translation Cost

This is a short guest post by @translationguy also known as Ken Clark.  

These initial preamble comments in italics are mine. 

Today, many LSPs and enterprises are working with MT, and there is enough evidence that MT works even when you don't really know what you are doing. Unfortunately, many agencies still try to do it themselves with Moses, and most of these DIY experiments either fail completely or produce systems that are not as good as the public systems from Microsoft and Google, which defeats the whole point of doing it. MT as a technology only provides business leverage if you have a superior MT system and have aligned your processes to take advantage of it.

Ken differentiates between light and full post-editing, and I would like to add another dimension to this discussion. It is my experience that full post-editing is done with smaller (in MT terms) projects, or when the information translated is very critical to get right. Thus, in a knowledge-base project context, content related to security, privacy, and legal terms may be sent for full post-editing, while other content may just get a lighter post-edit. Also, when one is involved with very large MT projects like the team at eBay is, where hundreds of millions of words are involved, it is not possible to do a full post-edit on all the data, so a light post-edit is done, or maybe nothing beyond the very specific linguistic work on high-frequency n-grams and important patterns that Silvio Picinini describes in this post. Unfortunately, it’s hard for translators and clients to agree on when we’re done with “light” post-editing, so it’s a headache to manage, as editors often cannot tell when to stop.

Thus, as agencies really get involved with "real MT " projects they will do corpus profiling work and focus their attention on critical patterns as Juan Rowda has described in this post.

To me, real competence with MT in an agency or enterprise is demonstrated when there is some expertise with as many of the following core functions as possible:

  • Understanding the Data - Corpus Analysis
  • Focusing Linguistic Work on High-Frequency Patterns
  • Working with Expert MT Systems Developers in a pro-active way
  • Understanding MT Output Quality
  • Driving MT quality higher with specific linguistic feedback
  • Managing Post-Editing Processes and Compensation
 
TAUS provides an excellent overview of the larger perspective in this post on best practices in MT.


 
As MT technology evolves, I think we will see that strategies that made great sense with phrase-based SMT may not always make sense with the new neural MT technology. I am talking to SYSTRAN about the realities of the NMT paradigm and hope to produce a post on this soon.

 

-------------------------


Machine translation has improved by leaps and bounds. What was once considered machine-produced gibberish is increasingly giving human translators a run for their money, particularly for predictable texts like weather reports.

While machine translation (MT) is also more economical than human translation, it’s not a true alternative yet. In most cases, machine translation can’t be used as is. And that’s where the expertise of machine translation post-editors comes in. Machine translation post-editors are the human editors who work to improve the output of machine translation. They combine the MT output with their linguistic expertise to provide a better reading experience for human audiences.

Besides the cost savings, it is estimated that machine translation plus post-editing is 40% more efficient than human translation alone. But what exactly do machine translation post-editors do, and how do they do it?

Types of Machine Translation Post-editing

Machine translation post-editing comes in two flavors: light post-editing and full post-editing.

Light post-editing suggests a lighter touch, only asking the human editors to ensure that the MT output is accurate in meaning and understandable to the reading audience. However, this means that style is not taken into account, grammar and syntax may be awkward, and the text may sound as if it were produced by a computer. It’s the most economical option, but for reasons of quality, light post-editing is typically only used when a translation is needed urgently and/or for an organization’s internal purposes.

Full post-editing, on the other hand, calls for a higher level of involvement by the post-editor. (This makes it more expensive than light post-editing, but still less expensive than full human translation.) In addition to making sure that the MT output is accurate in meaning and understandable to the reading audience, full post-editing addresses the text’s grammar, syntax, and punctuation, ensuring they are correct and appropriate. The result is similar in quality to a human translation, although it may not yet match the style of a native-speaking translator. Full post-editing is typically used when a machine-translated text is intended to be published, or widely disseminated inside or outside an organization.

MT Post-editing Strategies

How do they do it? Let’s examine some of the things that post-editors watch out for.

Light post-editors use the machine translation output as much as possible. However, they take special care that information has not been inadvertently added in or left out. They also edit anything they have identified as offensive or culturally unacceptable.

In addition to the above, full post-editors correct any grammatical and syntactical errors. They pay particular attention to terminology, making sure that the terms have been translated in the appropriate way (or left untranslated per the client’s wishes). They also ensure that the spelling and punctuation, as well as formatting, are correct.



Read more at http://www.responsivetranslation.com/blog/machine-translation-postediting/#r4ZiiLOHouYJ8E2O.99

Tuesday, April 11, 2017

The Problem with BLEU and Neural Machine Translation

There was a great deal of public attention and publicity given to the subject of Neural Machine Translation in 2016. While experimentation with Neural Machine Translation (NMT) has been going on for several years, 2016 proved to be the year that NMT broke through and became more widely understood to be of great merit outside of the academic and research community, where its promise had already been recognised for some time.

The reasons for the sometimes excessive exuberance around NMT are largely based on BLEU (not BLUE) score improvements on test systems, which are sometimes validated by human quality assessments. However, it has long been understood by some that BLEU, still the most widely used measure of quality improvement, can be misleading when it is used to compare certain kinds of MT systems.



The basis for the NMT optimism is related both to the very slow progress in recent years in improving phrase-based SMT quality and to the striking BLEU score improvements seen from neural-net-based machine learning approaches. Much has been written about the flaws of BLEU, but it remains the most easily implementable measurement metric and really the only one for which long-term longitudinal data are available. While we all love to bash BLEU, there is clear evidence of a strong correlation between BLEU scores and human judgments of the same MT output. The research community and the translation industry have not been able to come up with a better metric that can be widely implemented for ongoing testing and evaluation of MT output, so it remains the primary metric. The alternatives are too cumbersome, expensive or impractical to use as widely and as frequently as BLEU is used.


However, there is also evidence that BLEU tends to score SMT systems more favorably than RBMT and NMT systems, both of which may produce translations that are very accurate and fluent from a human perspective but that differ greatly from the reference translations used in calculating the BLEU score. To a great extent, the BLEU score is based on very simplistic "text string matches": very roughly, the larger the cluster of words you can match exactly, the higher the BLEU score.


To illustrate this, let's take a very simple example. Say the reference translation is "The guests walked into the living room and seated themselves on the couch." and an NMT system produces something like "The visitors entered the lounge and sat down on the sofa." This would result in a very low BLEU score for the NMT segment, even though many human evaluators might say it is quite an acceptable and accurate translation, and as valid as the reference sentence.
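
For readers who want to see the arithmetic, here is a simplified, sentence-level sketch (in Python) of BLEU-style n-gram matching. The tokenization and the tiny epsilon used to smooth zero precisions are simplifications of my own, not the exact smoothing used by standard tools such as sacreBLEU, but applied to the two sentences above it produces a score close to zero, because almost no word sequences match the reference exactly.

    # Simplified sentence-level BLEU-style scoring to illustrate n-gram matching.
    import math
    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def modified_precision(ref, hyp, n):
        ref_counts = Counter(ngrams(ref, n))
        hyp_counts = Counter(ngrams(hyp, n))
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(count, ref_counts[gram]) for gram, count in hyp_counts.items())
        return overlap / max(sum(hyp_counts.values()), 1)

    def sentence_bleu(reference, hypothesis, max_n=4):
        ref = reference.lower().replace(".", "").split()
        hyp = hypothesis.lower().replace(".", "").split()
        # Use a tiny epsilon so zero precisions do not collapse the geometric mean.
        precisions = [max(modified_precision(ref, hyp, n), 1e-9) for n in range(1, max_n + 1)]
        geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
        # Brevity penalty: penalise hypotheses shorter than the reference.
        bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
        return bp * geo_mean

    reference = "The guests walked into the living room and seated themselves on the couch."
    nmt_output = "The visitors entered the lounge and sat down on the sofa."
    print(f"BLEU: {sentence_bleu(reference, nmt_output):.4f}")  # close to zero, despite an acceptable translation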

If you want a quick refresher on BLEU you can check this out:

The Need for Automated Translation Quality Measurement in SMT: BLEU


Some of the optimism around NMT is related to its ability to produce a large number of sentences that look very natural, fluent and astonishingly human. Thus, many of the early results with NMT output show that human evaluators consider it clearly better, even though BLEU scores may show only a 5% to 15% improvement (which is also significant). The improvements are most noticeable in fluency and word order. NMT is also working much more effectively in what were considered difficult languages for SMT and rule-based MT, e.g. Japanese and Korean.

And here are some examples provided by SYSTRAN from their investigations, where NMT seems to make linguistically informed decisions and changes the sentence structure away from the source to produce a better translation. But again, these would not necessarily score much better in terms of BLEU, even though humans might rate them as significant improvements in MT output quality and naturalness.



But we have seen that in spite of this there are still many cases where NMT BLEU scores significantly outpace the phrase-based SMT systems. These are described in the following posts in this blog:

A Deep Dive into SYSTRAN's Neural Machine Translation (NMT) Technology

 

An Examination of the Strengths and Weaknesses of Neural Machine Translation

 

Real and Honest Quality Evaluation Data on Neural Machine Translation 

 

and this is even true to some extent of the exaggerated, over-the-top claims made by Google, who said that Google NMT was “Nearly Indistinguishable From Human Translation” and that “GNMT reduces translation errors by more than 55%-85% on several major language pairs", as described below.

The Google Neural Machine Translation Marketing Deception

 

The KantanMT NMT vs PB-SMT Evaluation Results


I had an interesting conversation with Tony O'Dowd at KantanMT about his experience with his own initial NMT experiments. While Kantan does plan to publish their results in full detail in the near future, here are some highlights Tony provided from their experiments, which certainly raise some fundamental questions. (Emphasis below is mine.)

  1. Scope of Test - We built identical systems for SMT and NMT in the following language combinations - en-es, en-de, en-zh-cn, en-ja, en-it. Identical training data sets and test reference materials were used throughout the development phase of these engines. This ensured that our subsequent testing would be of identical engines, only differing in the approach to build the models. The engines were trained with an average of 5 million parallel segments ranging from 44 - 110 million words of training data.
  2. BLEU Scores - In all cases, the BLEU scores of NMT output were lower than those of SMT.
  3. Human Evaluation: We deployed a minimum of 3 evaluators for each language group and used KantanLQR to run the evaluation. We used the A/B Testing feature of KantanLQR. Sample A was from SMT, Sample B was from NMT. We randomized the presentation of the translations to ensure evaluators did not know which was NMT and which was SMT - this was done to remove any bias for one approach or the other. We sampled 200 translations for each language set.
  4. In all cases NMT scored higher in our A/B Testing than SMT. On average NMT was chosen twice as often as SMT in our controlled A/B testing.
  5. For low-scoring BLEU NMT segments, we found a high correlation with these segments being the translations preferred by our [human] evaluators - this pretty much proves that BLEU is not a useful and meaningful score for use with NMT systems.
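
KantanLQR's internals are not public, so the sketch below (in Python) is only a generic illustration of the randomised, blind A/B setup described in point 3: each evaluator sees the two candidate translations in random order, and preferences are tallied per engine. The evaluator.choose() call is a hypothetical stand-in for the human judgment step.

    # Generic sketch of blind A/B preference testing; not KantanLQR's implementation.
    import random
    from collections import Counter

    def run_ab_evaluation(segments, smt_outputs, nmt_outputs, evaluators):
        votes = Counter()
        for source, smt, nmt in zip(segments, smt_outputs, nmt_outputs):
            for evaluator in evaluators:
                pair = [("SMT", smt), ("NMT", nmt)]
                random.shuffle(pair)  # hide which engine produced which sample
                # evaluator.choose() is a hypothetical placeholder for the human judgment:
                # it returns 0 or 1, the index of the preferred translation.
                preferred = evaluator.choose(source, pair[0][1], pair[1][1])
                votes[pair[preferred][0]] += 1
        return votes  # per-engine preference counts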


Clearly, this shows that BLEU is of limited value when the human and automated metric results are so completely different, even diametrically opposed. The whole point of BLEU is that it should provide a quick and simple way to estimate what a human might think of sample machine-translated output. So, going forward, it looks like we are going to need better metrics that map more closely to human assessments. BLEU is not a linguistically informed measure, and therein lies the problem. This is easy to say but not so easy to do. A recent study pointed out the following key findings:

  • Translations produced by NMT are considerably different from those produced by phrase-based systems. In addition, there is higher inter-system variability in NMT, i.e. outputs from pairs of NMT systems differ more from each other than outputs from pairs of phrase-based systems.
  • NMT outputs are more fluent. We corroborate the results of the manual evaluation of fluency at WMT16, which was conducted only for language directions into English, and we show evidence that this finding is true also for directions out of English.
  • NMT systems do more reordering than pure phrase-based ones but less than hierarchical systems. However, NMT re-orderings are better than those of both types of phrase-based systems.
  • NMT performs better in terms of inflection and reordering. We confirm that the findings of Bentivogli et al. (2016) apply to a wide range of language directions. Differences regarding lexical errors are negligible. A summary of these findings can be seen in the next figure, which shows the reduction of error percentages by NMT over PBMT. The percentages shown are the averages over the 9 language directions covered.

 Reduction of errors by NMT averaged over the 9 language directions covered


Given that there are currently no practical alternatives to BLEU, there is perhaps an opportunity for an organization like TAUS to develop an easy-to-apply variant of their overall DQF framework that focuses on these key elemental differences and can be applied quickly and easily. NMT systems will gain in popularity, and better measures will be sought. The need for an automated metric will not go away either, as developers need some kind of measure to guide system tuning during the development phase. Perhaps there is research underway that I am not aware of that might address this; I have seen that SYSTRAN uses several alternatives, but everybody still comes back to BLEU.

Comparative BLEU-score-based MT system evaluations are particularly problematic, as I pointed out in my critique of the Lilt Labs evaluation, which I maintain is deeply flawed and will lead to erroneous conclusions if you take the reported results at face value. Common Sense Advisory also wrote recently about how BLEU scores can be manipulated to make outlandish claims by those with vested interests, and also points out that BLEU scores naturally improve as you add multiple references.

"However, CSA Research and leading MT experts have pointed out for over a decade that these metrics are artificial and irrelevant for production environments. One of the biggest reasons is that the scores are relative to particular references. Changes that improve performance against one human translation might degrade it with respect to another. "
Common Sense Advisory, April, 2017


There is really a need for two kinds of measures: one for general developer research that can be used every day, like BLEU today, and one for business translation production use that indicates quality from that different perspective. So as we head into the next phase of MT, driven by machine learning and neural networks, it would be good for us all to think of ways to better measure what we are doing. Hopefully some readers, or some in the research community, have ideas on new approaches, as this is an issue worth keeping an eye on. And if you come up with a better way to do this, who knows, they might even name it after you. I noticed that Renato Beninatto has been talking about NMT recently, and who knows, he could come up with something; I know we would all love to talk about our Renato scores instead of those old BLEU scores!


Wednesday, March 22, 2017

LSP Perspective: A View on Translation Technology

This is an unsolicited guest post that provides a view of translation technology that is typical of what is believed by many in the translation industry.  

These initial preamble comments in italics are mine.  

It provides an interesting contrast to the previous post (Ending the Globalization Smoke Screen) on the need for LSPs to ask more fundamental questions and climb up higher in the value chain and contribute higher value advice on globalization initiatives. This is a view that sees the primary business of LSPs, and thus the role of technology, as being the management and performance of human translation work as efficiently as possible. 

I think we have already begun to see that the most sophisticated LSPs now solve more complex and comprehensive translation problems for their largest customers, which often extend well beyond human translation work. In December 2016, the new SDL management reported that they translate 100 million words a month using traditional TEP human translation strategies, but also 20 billion words a month with MT. The VW use case also shows that for large enterprises, MT will be the primary means of translating the bulk of customer-facing content, in addition to being the dominant way to handle internal-communications translations. Clearly, much of the translation budget is still spent on human translation, but it is also much clearer that MT needs to be part of the overall solution. MT competence is valuable and considered strategic when choosing an agency, and by this I don't mean running sub-standard Moses engines. Rather, it is about working with agencies who understand multiple MT options, understand corpus data preparation and analysis, and can steer multiple types of MT systems competently.

Aaron raised what I think are many very interesting questions for the "localization" industry: how do we as an industry add more value in the process of globalization? He suggests, quite effectively I think, that it has more to do with things other than using basic automation tools to do low-value things more efficiently. The globalization budget is likely to be much higher than the translation budget and to involve answering many questions before you get to translation.

It is also my sense that there is a bright future for translation companies that solve comprehensive translation problems (i.e. MT, HT, and combinations), help address globalization strategies, or perform very specialized, high-value, finesse-driven human translation work (sometimes called transcreation, an unfortunate word that nobody in the real world understands). The future for those that do none of these things, I think, will be less bright, as the freely available and pervasive automation technology for business translation tasks will only get easier to use and more efficient. The days when building a TMS to gain competitive advantage made sense are long gone. Many excellent tools are already available for minimal cost, and it is foolish to think that your processes and procedures are so unique as to warrant your own custom tools. The value is not in the tools you use but in how, when, and how skillfully you use them. Commoditization happens when industry players are unable to clearly demonstrate their value-add to a customer. That is when price becomes the prime determinant of who gets the business, because you are easily replaceable. It also means you are likely to find that the wind is no longer in your sails and it is much harder to keep forward momentum. In the post below, the emphasis is not mine.




======================================= 


What’s out there, and what’s to come?


Technology is improving all the time. Advances like Artificial Intelligence (AI), Virtual Reality (VR) and the rise of smartphones are rousing the public’s interest.

It’s the same for translation technology. The way we translate and interpret content is changing all the time. Reliable translation technology is making it easier, faster and more productive to do our jobs.

Take for instance machine translation (MT). We see this type of technology as more of an additional language service to enable more content to be translated – rather than as a substitute language service to replace human translation.

MT is often considered in circumstances where the volume of content requiring translation cannot realistically be approached as a human translation task, for reasons of cost or speed. In this setting automatic translation can be deployed as part of a wider workflow.

Technology like this, for example, may enable a company to translate millions of words of user-generated content which would otherwise be completely out of reach. MT would not, however, be advisable for public-facing content, such as press releases.

Machines that translate


The benefits of machine translation largely come down to two factors: it’s quicker and less expensive. The downside is that the standard of translation can be anywhere from inaccurate to perplexing – machines can’t translate context, you see.

The disadvantages noted above can be avoided if the machine translation is checked by a professional. The last thing you want is a call from a lawyer telling you you’ve mistranslated one of their clients’ quotes.




Rule-based machine translation systems generate a translation by combining a vocabulary of words with syntactical rules. With statistical MT, by contrast, the engine is fed large volumes of translations that are analyzed using pattern matching and word substitution to predict the translation that is statistically most likely to be correct.
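
As a toy illustration of the statistical idea (and only that idea), the sketch below, written in Python with invented example data, builds a tiny phrase table from aligned pairs and, for each source phrase, picks the target phrase seen most often. Real statistical engines such as Moses combine phrase tables with language models and reordering models, which this deliberately ignores.

    # Toy phrase table: pick the statistically most frequent translation for each source phrase.
    from collections import defaultdict

    def build_phrase_table(aligned_pairs):
        counts = defaultdict(lambda: defaultdict(int))
        for source_phrase, target_phrase in aligned_pairs:
            counts[source_phrase][target_phrase] += 1
        # Keep the most frequently observed (i.e. statistically most likely) translation.
        return {src: max(tgts, key=tgts.get) for src, tgts in counts.items()}

    training_pairs = [("guten morgen", "good morning"),
                      ("guten morgen", "good morning"),
                      ("guten morgen", "morning"),
                      ("vielen dank", "many thanks")]
    table = build_phrase_table(training_pairs)
    print(table["guten morgen"])  # -> "good morning"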

It can be argued that machine translations are more suited to internal use: if your documents are only being used within your company, complete accuracy may not be vital. Another example would be very basic documents – the simpler your original documents are, the easier they will be for a machine to interpret.

You need to be certain there is enough precision in your machine translations to actually speed up the process. Otherwise, it will only slow things down and you’ll gain very little by using it. Machine translation is a time-saving tool – if it doesn’t save time, then it’s not worth using, or at least not worth relying on solely. That’s not to say that machine translation isn’t vital in some cases. It certainly is – more on that later.

Human translation basically turns the tables in terms of pros and cons: a higher standard of accuracy comes at the price of longer turnaround times and higher costs. What you have to decide is whether that initial investment outweighs the potential cost of errors.

More creative or intricate content such as poems, slogans or taglines asks far too much of machine translation tools, so it will always make sense to opt for a human translator. And when accuracy is paramount – take, for example, legal translation, safety instructions, and healthcare – machines leave far too much room for error.

Another case for choosing human over machine translation is when the source material doesn’t give machines enough to work with. If the content is too chaotic, it may be easier for a human translator to work on and edit the original text – machine-free.

Translating content with machine translation is attainable, don’t get me wrong – for example, when translating high-volume content that changes every hour of every day, humans just can’t keep up and it would cost far too much. But if you require full control of your communication, then a human translator is the better option – provided the task isn’t too large or would not be achievable without using MT.

In the machine versus human translation debate, the latter has the edge – for now anyway – because the translator can provide a more accurate translation of your message.

This aside, companies like TripAdvisor and Amazon rely on machine translation because their online content and the daily visitor numbers on their websites are so vast. Machine translation gives them the chance to stay up to date and offer users multilingual content rapidly. Companies like these would find solely relying on humans to translate their message a demanding, if not impossible, task.

Machine translations have their place in the world – an important place for sure – and can deliver the basic meaning of a text when your company is in a bind. However, they cannot live up to the quality of a human-powered translation, which is the service you should choose when you want an official communication from your company to be fully understood by its readers.

I want translation now


Moving swiftly on, the advanced translation software that allows users to centralize all their translation requirements and makes it simple to tailor translation workflows is the translation management system (TMS). Though nothing new anymore, software like this is not only saving people time; its automated processes mean it saves money too.


Larger companies are opting for one easy-to-use TMS platform in order to have complete control over their translation workflows. The software gives users a 360-degree overview of every current and completed translation job submitted. A TMS platform also gives users real-time project status information.

Moreover, creative translation tools allow teams of graphic designers and creative agencies to use web browsers rather than costly Adobe packages to review INDD or IDML content. Translation technology like this means no extra license fees to pay when localising and reviewing content.

Reviewers who do not have InDesign installed on their systems can see a live preview, edit text so it is exactly how they want it, and save their changes so that the InDesign file is updated. Tools like this give users peace of mind in the knowledge that the InDesign document cannot be “broken”, and that no time is spent copying and pasting text, trying to decipher reviewers’ comments, or repeatedly transferring files back and forth.

As technology improves, so do the expectations of its consumers. People want their information fast. When it comes to translation, online content specifically needs to remain up to date and be easy to find. TMS platforms can be integrated with websites, CMS, DMS and database applications – this makes them an essential part of translation services.

Big technology brands like Apple and Google offer translation services of their own. iTunes and Google Play allow you to download transcription apps, giving you your own personal translator in the palm of your hand.

They’re handy tools, but useless if you need accurate voice recognition and more than a few sentences transcribed. Your main concern with any transcription app will always be accuracy. You want the device to understand every word you say and accurately type it out in text form. Well, unfortunately, this is where the technology continues to fall way short of the demand.

Technology has evolved. But we’ve been evolving too, and we still have a few tricks up our sleeves. A mother-tongue translator is still the only sure-fire way to ensure the most natural-reading target text, and arguably the best way to get the most relevant translation possible. Language services companies still receive far more human translation inquiries: more than 90% of job requests are for human translation.

What’s next?


In the realm of AI, for instance, a lot. Take AI web-design software and the increasing list of automated marketing tools hitting the scene, all hoping to make translation services a more streamlined process – one that’s faster, cheaper and demands less manpower.

Personal assistant apps such as Apple’s Siri and Amazon’s Alexa are driving online business in a way never seen before. These AI-powered apps are also changing web localisation in a huge way. More than ever, businesses need to be aware of third-party sites like Google Maps, Wikipedia, and Yelp, because apps such as Microsoft’s Cortana are pulling snippets of content from all around the web.

According to Google Translate’s FAQ section, “Even today’s most efficient software cannot master a language as well as a native speaker and have by no means the skill of a professional translator.”

If the technology available helps – if it saves us time, money and precious resources – then surely it’s vital and something we should be taking advantage of. But at the same time, the translation technology available to us should be used wisely. More often than not it should be used as a tool to aid us, not something to be relied upon entirely.

Don’t get me wrong, I’m not for a second lessening the importance of machines when it comes to translation services. All I will say is that human translators are more familiar with the expressions, slang, and grammar of a modern language. Often human translators are native speakers of the target language, which gives greater depth and a better understanding of the tone required of the translation.

Human translators also boast translation degrees, and some specialise in a specific area of expertise, where their understanding of the project’s field expedites the translation. Although, it’s safe to say, one type of language translation still baffles the most educated of linguists: emojis.

The multifaceted landscape of interpreting symbols makes translating these icons tough for both machines and humans – fact.

These ideograms are actually making it into court cases, where text messages are regularly submitted as evidence. So it’s paramount that the context and interpretation of each emoji is understood.

The meanings of each smartphone smiley are often unclear and sometimes puzzling. This leaves far too much room for misunderstanding; in fact, in 2016 professional translators from around the world attempted to decipher emojis and the results were miserable.

Technological advances in the translation industry are going to change the way businesses operate for the better. Even though some of this technology threatens to compete against us, we fully expect it to continue, and to start happening in a much wider range of industries.

Machine translation services will play a vital part in producing multilingual content on a large scale. Big brands that need content fast and in large quantities will opt for MT. In fact, more and more professional translators will need to adapt to working closely with this technology as it advances over time.

For things like AI, VR and apps, the future is bright. We’re entering a world in which we take technology for granted. What technology is capable of now, and what it will be capable of in the future, will aid us, maybe even guide us one day. But for the time being – though essential – the translation technology at our disposal still fails to deliver what humans are fully capable of.




                                                          = = = = = = = = 



 





Tom Robinson, Digital Marketing and Communications Executive at translate plus, a Global Top 50 language services provider by revenue, offering a full range of services, including translation, website localisation, multilingual SEO, interpreting, desktop publishing, transcription and voiceover, in over 200 languages. All this is complemented by its cutting-edge language technology, such as i plus®, a secure cloud-based TMS (translation management system).