Beyond magic: Prompting for style as affordance actualization in visual generative media

    Research output: Contribution to journal › Article › Academic › peer-review

    5 Citations (Scopus)
    103 Downloads (Pure)

    Abstract

    As a sociotechnical practice at the nexus of humans, machines, and visual culture, text-to-image generation relies on verbal prompts as the primary technique to guide generative models. To align desired aesthetic outcomes with computer vision, human prompters engage in extensive experimentation, leveraging the model’s affordances through prompting for style. Focusing on the interplay between machine originality and repetition, this study addresses the dynamics of human-model interaction on Midjourney, a popular generative model (version 6) hosted on Discord. It examines style modifiers that users of visual generative media add to their prompts and addresses the aesthetic quality of AI images as a multilayered construct resulting from affordance actualization. I argue that while visual generative media holds promise for expanding the boundaries of creative expression, prompting for style is implicated in the practice of generating a visual aesthetic that mimics paradigms of existing cultural phenomena, which are never fully reduced to the optimized target output.
    Original language: English
    Pages (from-to): 148-168
    Number of pages: 20
    Journal: New Media and Society
    Volume: 28
    Issue number: 1
    Early online date: 29 Oct 2024
    Publication status: Published - Jan 2026

    Keywords

    • visual generative media
    • prompting for style
    • affordance actualization
    • text-to-image generation
    • prompt modifiers
    • Midjourney
