Guide for community members
Introduction
A Language Quality Assessment (LQA) analyzes a source text and its translation to record and classify any issues using an error typology. The result is a score used to inform the quality management process.
Auto LQA is a Phrase feature powered by OpenAI. It automatically generates quality analysis predictions inside the Phrase environment.
We can perform this analysis as a standard quality assurance measure during the revision process, or when we suspect quality issues in a file.
In this task, we need you to analyze AI-generated Auto LQA predictions and:
- Implement necessary changes in Phrase TMS so our files are finalized.
- Help us decide whether to update our glossary or create specific instructions or style guides.
- Ensure consistency across translations and help us improve future work.
If you need support during this task, please reach out to your Project Officer by email.
Evaluate translation quality
Auto LQA uses an adapted version of the MQM (Multidimensional Quality Metrics) error typology to assess the quality of translations.
Below are the main categories Auto LQA works with:
Accuracy: Ensure the translation communicates the meaning of the original text correctly and precisely. Examples:
- Addition or omission
- Mistranslation
- Under-translation or over-translation
Fluency: Issues related to the form or content of a text, irrespective of whether it is a translation or not. Examples:
- Punctuation
- Spelling
- Grammar
- Inconsistencies
Style: Create a translation that sounds idiomatic (natural) when you read it and is appropriate for the readers. Examples:
- Awkward
- Inconsistent
- Unidiomatic
Locale Convention: Errors that occur when the translation violates locale-specific content or formatting requirements for data elements. Examples:
- Formats of numbers, currencies, measurements, time, dates, addresses, and telephone numbers
- Shortcut keys
Design: Whether the translation is appropriately formatted, and whether tables, images, and other visual elements in the document are easy to read and follow.
Terminology: Whether key terms and phrases are translated accurately, and whether the same translation is used for each term throughout the text.
These categories are further divided into subcategories.
Auto LQA assigns each error a severity level (minor, major, or critical) depending on its impact on the translation.
Note that there are additional factors that may impact the quality of the translation:
- the quality of the source document
- the formatting of the source files
- the volume of content
- the complexity of the content
- tight turnaround times
Auto LQA checks of a translation in Phrase TMS
Instructions
If you have any questions about the instructions provided or need anything, please do not hesitate to contact your Project Officer. They will be happy to help out!
Important: For all tasks, please make sure to read this post on How to Provide Constructive and Respectful Feedback.
Request
You will receive the documentation below to help you perform this task:
- The source text and the translation with Auto LQA results
- Language style guide
- Glossary
Process
The Phrase TMS Auto LQA feature calculates a PASS/FAIL score prediction based on all segments and the word count in a given job. It also marks errors in context, explains why each one is an issue, and suggests a correction.
- To see the results, open the editor.
- Click the purple flag icon next to the assessed segment, or select Flag from the right-click menu.
- The issues will appear in a list.
- You can also filter issues by clicking the filter on the right.
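To make the PASS/FAIL prediction easier to reason about, here is a minimal sketch of how an MQM-style quality score is typically computed from error severities and word count. The severity weights and pass threshold below are illustrative assumptions, not Phrase's actual values.

```python
# Illustrative MQM-style scoring sketch.
# ASSUMPTIONS: the weights and threshold are common MQM-style defaults,
# not the values Phrase Auto LQA actually uses.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}
PASS_THRESHOLD = 98.0  # assumed cutoff on a 0-100 scale

def quality_score(error_severities, word_count):
    """Return a 0-100 score: 100 minus penalty points per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[sev] for sev in error_severities)
    return max(0.0, 100.0 - (penalty / word_count) * 100.0)

def verdict(error_severities, word_count):
    """PASS if the score meets the assumed threshold, FAIL otherwise."""
    score = quality_score(error_severities, word_count)
    return "PASS" if score >= PASS_THRESHOLD else "FAIL"

# Example: two minor errors and one major error in a 500-word job
# give a penalty of 7 points, i.e. a score of 100 - (7/500)*100 = 98.6.
score = quality_score(["minor", "minor", "major"], 500)
```

Under this model, a single critical error weighs as much as ten minor ones, which is why reviewing critical flags first is usually the best use of your time.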
Go through the segments flagged as problematic and apply any necessary corrections.
Feel free to add any other comments or suggestions, and to flag or ignore an error if you disagree with it. You can also manually edit the automatic annotations in the panel.
You can find more information about this feature in the Phrase Auto LQA article.
Pre-delivery checklist
Do I have all the materials needed for this task?
Have I confirmed any doubts with the Project Officer?
Have I reviewed all changes/suggestions/revisions?
Have I analyzed all segments flagged with errors?
Have I applied all necessary changes based on the Auto LQA flags?
Once you are done, let the Project Officer know the task is completed.