Beyond magic: Prompting for style as affordance actualization in visual generative media

Nataliia Laba*

*Corresponding author for this work

    Research output: Contribution to journal › Article › Academic › peer-review

    Abstract

    As a sociotechnical practice at the nexus of humans, machines, and visual culture, text-to-image generation relies on verbal prompts as the primary technique to guide generative models. To align desired aesthetic outcomes with computer vision, human prompters engage in extensive experimentation, leveraging the model’s affordances through prompting for style. Focusing on the interplay between machine originality and repetition, this study addresses the dynamics of human-model interaction on Midjourney, a popular generative model (version 6) hosted on Discord. It examines style modifiers that users of visual generative media add to their prompts and addresses the aesthetic quality of AI images as a multilayered construct resulting from affordance actualization. I argue that while visual generative media holds promise for expanding the boundaries of creative expression, prompting for style is implicated in the practice of generating a visual aesthetic that mimics paradigms of existing cultural phenomena, which are never fully reduced to the optimized target output.
    Original language: English
    Journal: New Media and Society
    DOIs
    Publication status: E-pub ahead of print, 29 Oct 2024

    Keywords

    • visual generative media
    • prompting for style
    • affordance actualization
    • text-to-image generation
    • prompt modifiers
    • Midjourney
