Preventing long-term risks to human rights in smart cities: A critical review of responsibilities for private developers of AI

Lottie Lane*

*Corresponding author for this work

Research output: Academic › peer review

4 Citations (Scopus)
61 Downloads (Pure)


Privately developed artificial intelligence (AI) systems are frequently used in smart city technologies. The negative effects of such systems on individuals’ human rights are increasingly clear, but we still have only a snapshot of their long-term risks to human rights. The central role of AI businesses in smart cities places them in a key position to identify, prevent and mitigate risks posed by smart city AI systems. The question arises as to how such preventive responsibilities are articulated in international and European governance initiatives on AI and corporate responsibility, respectively. This paper addresses this question with reference to three initiatives: (1) the Organization for Economic Cooperation and Development’s ‘Business and Finance Outlook 2021: AI in Business and Finance’; (2) the EU’s proposed ‘AI Act’; and (3) the EU’s ‘Proposal for a Directive on corporate sustainability due diligence’. The paper first discusses the role of private AI developers in smart cities and the relevant limitations of applicable legal frameworks (section 1). Section 2 categorises long-term risks to human rights posed by the private development of smart city AI systems. Section 3 discusses how preventive responsibilities in the three initiatives reflect considerations of long-term risks. Critical observations and recommendations are provided in section 4, and conclusions are drawn in section 5.
Original language: English
Number of pages: 28
Journal: Internet Policy Review
Issue number: 1
Status: Published - 31 March 2023
