Description
With the astounding advances in artificial intelligence in recent years, the field of interpretability research has emerged as a fundamental effort to ensure the development of robust AI systems aligned with human values. In this talk, two perspectives on AI interpretability will be presented alongside two case studies in natural language processing. The first study leverages behavioral data and probing tasks to examine how linguistic complexity is perceived by humans and encoded in language models. The second introduces a user-centric interpretability perspective for neural machine translation, aimed at improving post-editing productivity and enjoyability. The need for such application-driven approaches will be emphasized in light of current challenges in faithfully evaluating advances in this field of study.

| Period | 18-May-2022 |
| --- | --- |
| Event title | Tech Talk at Translated |
| Event type | Other |
| Degree of Recognition | Local |
Keywords
- interpretability
- natural language processing
- machine translation
Research output
- InDeep × NMT: Empowering Human Translators via Interpretable Neural Machine Translation
  Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review
- DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages
  Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review
- That Looks Hard: Characterizing Linguistic Complexity in Humans and Language Models
  Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review