TY - JOUR
T1 - Addressing discrimination in algorithmic profiling
T2 - Examining risk governance in Dutch public social security agencies
AU - Haitsma, Lucas M.
N1 - Publisher Copyright:
© The Author(s) 2025. This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
PY - 2025/6
Y1 - 2025/6
N2 - In the social security sector, both in the Netherlands and abroad, the irresponsible use of algorithmic profiling technologies to combat misallocation of social security benefits has contributed to instances of discrimination. While a developing legal framework—made up of fundamental rights, the GDPR, and the recently adopted AI Act—requires risks of discrimination to be identified and mitigated, it offers limited guidance on how to implement these obligations. Social security agencies seeking to address the systemic risk of discrimination in algorithmic profiling are frustrated by the associated sociotechnical complexity, scientific uncertainty, and socio-legal ambiguity. This study examines how two Dutch social security agencies address discrimination risks in algorithmic profiling, using van Asselt and Renn's principles of systemic risk governance—communication and inclusion, integration, and reflection—as a theoretical framework. Through case studies involving document analysis and interviews, the research explores how these agencies address discrimination risks. This study highlights the importance of socially robust risk governance structures that encompass both simpler rule-based selection systems and trained algorithmic systems, include scientific and client perspectives, and draw on the experiences of other agencies.
AB - In the social security sector, both in the Netherlands and abroad, the irresponsible use of algorithmic profiling technologies to combat misallocation of social security benefits has contributed to instances of discrimination. While a developing legal framework—made up of fundamental rights, the GDPR, and the recently adopted AI Act—requires risks of discrimination to be identified and mitigated, it offers limited guidance on how to implement these obligations. Social security agencies seeking to address the systemic risk of discrimination in algorithmic profiling are frustrated by the associated sociotechnical complexity, scientific uncertainty, and socio-legal ambiguity. This study examines how two Dutch social security agencies address discrimination risks in algorithmic profiling, using van Asselt and Renn's principles of systemic risk governance—communication and inclusion, integration, and reflection—as a theoretical framework. Through case studies involving document analysis and interviews, the research explores how these agencies address discrimination risks. This study highlights the importance of socially robust risk governance structures that encompass both simpler rule-based selection systems and trained algorithmic systems, include scientific and client perspectives, and draw on the experiences of other agencies.
KW - Algorithmic profiling
KW - discrimination
KW - risk governance
KW - social security
UR - https://www.scopus.com/pages/publications/105010350780
U2 - 10.1177/13882627251351093
DO - 10.1177/13882627251351093
M3 - Article
AN - SCOPUS:105010350780
SN - 1388-2627
VL - 27
SP - 191
EP - 212
JO - European Journal of Social Security
JF - European Journal of Social Security
IS - 2
ER -