Artificial Intelligence and Human Rights: Corporate responsibility in AI governance initiatives

Lottie Lane*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

3 Citations (Scopus)
253 Downloads (Pure)

Abstract

Private businesses are central actors in the development of artificial intelligence (AI), giving them a key role in ensuring that AI respects human rights. Since the establishment of the State-centric framework of international human rights law (IHRL), technological developments have occurred that were not envisaged by its drafters, leaving IHRL scrambling to catch up. Despite progress in the development of international legal standards on business and human rights, uncertainties regarding the role and responsibilities of AI businesses remain. This article addresses these uncertainties from a governance perspective and against the backdrop of the public/private divide, viewing laws as instruments of governance, which comprises activities by many public and private actors. In Part 2, the current framework of IHRL regarding AI and businesses is briefly assessed, focusing on the lack of legal certainty. In Part 3, AI initiatives beyond IHRL, adopted at the international, regional and national levels, are critically analysed to gain insight into specific standards of behaviour expected of AI businesses, as well as to challenge a dichotomous public/private divide in this context. Conclusions and recommendations are provided in Part 4.
Original language: English
Pages (from-to): 304-325
Number of pages: 22
Journal: Nordic Journal of Human Rights
Volume: 41
Issue number: 3
Early online date: 5 January 2023
Publication status: Published - 2023
