AI and Technology-Facilitated Violence and Abuse

Document Type

Book Chapter

Publication Date

2021

Keywords

AI, Artificial Intelligence, Technology-Facilitated Violence, Technology-Facilitated Abuses, TFVA, TFV, Deep Fake, Canada, Algorithmic Profiling

Abstract

Artificial intelligence (AI) is being used—and is in some cases specifically designed—to cause harms against members of equality-seeking communities. These harms, which we term “equality harms,” have individual and collective effects, and emanate from both “direct” and “structural” violence. Discussions about the role of AI in technology-facilitated violence and abuse (TFVA) sometimes do not include equality harms specifically. When they do, they frequently focus on individual equality harms caused by “direct” violence (e.g. the use of deepfakes to create non-consensual pornography to harass or degrade individual women). Often little attention is paid to the collective equality harms that flow from structural violence, including those that arise from corporate actions motivated by the drive to profit from data flows (e.g. algorithmic profiling). Addressing TFVA in a comprehensive way means considering equality harms arising from both individual and corporate behaviours. This will require going beyond criminal law reforms to punish “bad” individual actors, since responses focused on individual wrongdoers fail to address the social impact of the structural violence that flows from some commercial uses of AI. Although, in many cases, the harms occasioned by these (ab)uses of AI are the very sort of harms that law is, or has been, used to address, existing Canadian law is not well placed to meaningfully address equality harms.