Artificial witness: Generative AI and the visual politics of war representation

    Activity: Talk and presentation › Academic presentation

    Description

    One application of generative AI is the production of highly realistic war images, often indistinguishable from photographs taken by human witnesses. These images do not merely document events but actively shape public opinion, legitimizing certain perspectives while obscuring others. This raises broader cultural questions about how AI-generated imagery may (mis)represent the cultural or political nuances of conflicts and wars, prompting reflection on the ethical and perceptual consequences of supplementing journalistic authenticity with synthetic, machine-generated representations.

    Framing generative AI as an artificial witness – an ‘instrument for seeing’ and for ‘assessing how well we can see’ – this talk addresses the uses of generative AI in imaging and imagining wars. By exploring what Gillian Rose terms ‘the site of the image itself’ (Visual Methodologies, 2016), we can uncover biases in AI model training that otherwise remain hidden due to proprietary models and algorithmic opacity. This knowledge can then be applied to consider the relationship between AI-generated images of war and the contexts in which they are shared and viewed, ultimately contributing to public awareness of how generative AI is reshaping the boundaries of authenticity, trust, and power in war representation.
    Period: 21 Jan 2025
    Held at: Manchester Metropolitan University, United Kingdom
    Degree of recognition: International

    Keywords

    • generative AI
    • critical AI
    • representation
    • conflict and war
    • critical generative AI studies