Abstract
Our lives are increasingly mediated, regulated, and produced by algorithmically driven software that is often invisible to the people whose lives it affects. Online, much of the content that we consume is delivered to us through algorithmic recommender systems (“recommenders”). Although the techniques of such recommenders and the specific algorithms that underlie them differ, they share one basic assumption: that individuals are “users” whose preferences can be predicted from past actions and behaviors. While this assumption may be largely unconscious and even uncontroversial, we draw upon Andrew Feenberg’s work to demonstrate that recommenders embody a “formal bias” that has social implications. We argue that this bias stems from the “technical code” of recommenders, which we identify as a form of behaviorism. Studying the assumptions and worldviews that recommenders put forth tells us something about how human beings are understood in a time when algorithmic systems are ubiquitous. Behaviorism, we argue, forms the episteme that grounds the development of recommenders. What we refer to as the “behavioral code” of recommenders promotes an impoverished view of what it means to be human. Leaving this technical code unchallenged prevents us from exploring alternative, perhaps more inclusive and expansive, pathways for understanding individuals and their desires. Furthermore, by problematizing formations that have successfully rooted themselves in technical codes, this chapter extends Feenberg’s critical theory of technology into a domain that is both ubiquitous and undertheorized.
Original language | English |
---|---|
Title of host publication | The Necessity of Critique |
Subtitle of host publication | Andrew Feenberg and the Philosophy of Technology |
Editors | Darryl Cressman |
Publisher | Springer |
Chapter | 8 |
Pages | 143-159 |
Number of pages | 16 |
ISBN (Electronic) | 978-3-031-07877-4 |
ISBN (Print) | 978-3-031-07876-7 |
DOIs | |
Publication status | Published - 28-Sept-2022 |
Publication series
Name | Philosophy of Engineering and Technology |
---|---|
Publisher | Springer |
Volume | 41 |
ISSN (Print) | 1879-7202 |
ISSN (Electronic) | 1879-7210 |
Keywords
- Technical code
- Behaviorism
- Recommender systems
- Formal bias
- Andrew Feenberg
- B.F. Skinner
- Algorithms
- Data