Description

In this talk I will present the PAT project, in which we investigate the use, effects and optimisation of documents that contain pictures and text (PAT). While the benefit of including pictures has been established, the design of pictures, text, and picture-text relation(s) has not been researched in a systematic manner. PAT aims to gain an in-depth understanding of their characteristics to augment existing theories on cognitive processing of multimodal presentations. Resulting models will be validated by implementing them in natural language generation algorithms and comparing their output to human-authored presentations.
PAT launches a methodical investigation of multimodal instructions (MIs) used in first-aid practices to help people in need. Currently, there are no guidelines for the design of MIs that effectively instruct people to operate an AED, place a victim in a recovery position, remove ticks, etc. The wide variation in the pictorial and verbal means employed in these instructions demonstrates the urgent need for validated guidelines based on empirical evidence collected from readers and users. Investigating multimodality in these MIs allows us to evaluate the effectiveness of combining pictures and text in a practical context, focusing on, e.g., attention, comprehension, recall, user judgements, and task performance.
The PAT project makes use of an annotated corpus of MIs and a workbench that has been developed for their annotation and retrieval. The MIs are first-aid instructions that appear in Het Oranje Kruisboekje, together with variations of these instructions from other sources, such as hospitals, health and safety organisations, and the internet.
The PAT project combines approaches from Information Design Research and Computational Linguistics, employing corpus collection and analysis, (automatic) annotation, experimentation, and natural language generation. The project will deliver theoretical results in the form of empirically validated models for effective MIs. Results of practical value include new annotated multimodal corpora, implemented taggers that automatically annotate potentially effective properties of MIs, algorithms that automatically generate effective text-picture combinations, and authoring guidelines for producing good-quality instructions.
Held at: University of Gothenburg, Gothenburg, Sweden