Empowering Human Translators via Interpretable Interactive Neural Machine Translation

Activity: Talk and presentation › Academic presentation

Description

Recent advances in neural machine translation (NMT) have led to the integration of deep learning-based systems as essential components of most professional translation workflows. As a consequence, human translators increasingly work as post-editors of machine-translated content. This project aims to empower NMT users by improving their ability to interact with NMT models and to interpret their behavior. To this end, new tools and methodologies will be developed, and adapted from other domains, to improve prediction attribution, error analysis, and controllable generation for NMT systems. These components will drive the development of an interactive CAT tool designed to improve post-editing efficiency, and their effectiveness will then be validated through a field study with professional translators.
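
To make "prediction attribution" concrete, the following is a minimal, hypothetical sketch of gradient-based saliency for a single NMT prediction, assuming a Hugging Face MarianMT checkpoint (the model name and attribution method are illustrative assumptions, not the project's actual tooling).

```python
# Sketch: attribute one generated target token to the source tokens via
# input gradients. Model checkpoint and method are assumptions for illustration.
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-it"  # assumed example checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
model.eval()

src = "The report was published yesterday."
enc = tokenizer(src, return_tensors="pt")

# Translate greedily to obtain the target token ids.
with torch.no_grad():
    gen_ids = model.generate(**enc, max_new_tokens=40)

# Re-run the model on source embeddings that require gradients.
embed = model.get_input_embeddings()
src_embeds = embed(enc["input_ids"]).detach().requires_grad_(True)
decoder_input_ids = gen_ids[:, :-1]

outputs = model(
    inputs_embeds=src_embeds,
    attention_mask=enc["attention_mask"],
    decoder_input_ids=decoder_input_ids,
)

step = 2  # attribute the third generated token (arbitrary choice)
target_id = gen_ids[0, step + 1]
score = outputs.logits[0, step, target_id]
score.backward()

# L2 norm of the gradient per source token as a simple saliency score.
saliency = src_embeds.grad[0].norm(dim=-1)
for tok, s in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), saliency):
    print(f"{tok:>15s}  {s.item():.4f}")
```

In an interactive CAT setting, scores like these could be surfaced as highlights over the source sentence so that a post-editor can see which source words most influenced a suspect target word.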
Period: 14-Dec-2021
Event title: Workshop on eXplainable AI approaches for debugging and diagnosis
Event type: Workshop
Degree of Recognition: International

Keywords

  • Neural Machine Translation
  • Interpretability for NLP
  • Explainable AI