TY - JOUR
T1 - Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI
AU - Blauth, Tais Fernanda
AU - Gstrein, Oskar Josef
AU - Zwitter, Andrej
PY - 2022/7/18
Y1 - 2022/7/18
N2 - The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of enhanced AI applications that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables advanced reflection on governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against malicious use and abuse of AI.
AB - The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of enhanced AI applications that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables advanced reflection on governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against malicious use and abuse of AI.
KW - Artificial intelligence
KW - Artificial Intelligence Typology
KW - Computer crime
KW - Data models
KW - Legislation
KW - Machine learning
KW - Malicious Artificial Intelligence
KW - Security
KW - Social Implications of Technology
KW - Taxonomy
KW - Training data
UR - http://www.scopus.com/inward/record.url?scp=85135233337&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2022.3191790
DO - 10.1109/ACCESS.2022.3191790
M3 - Article
AN - SCOPUS:85135233337
SN - 2169-3536
VL - 10
SP - 77110
EP - 77122
JO - IEEE Access
JF - IEEE Access
ER -