Keywords
artificial intelligence, explainability, algorithmic accountability, GDPR, Bill C-11, PIPEDA reform, Digital Charter Implementation Act
Abstract
In November 2020, the Privacy Commissioner of Canada proposed creating GDPR-inspired rights for decision subjects and allowing financial penalties for violations of those rights. Shortly afterward, the proposal to create a right to an explanation for algorithmic decisions was incorporated into Bill C-11, the Digital Charter Implementation Act. This commentary proposes that creating duties for operators to properly select and supervise artificial agents would be an accountability mechanism complementary to, and potentially more effective than, a right to an explanation. These duties would be a natural extension of employers’ duties to properly select and retain human employees. Allowing victims to recover under theories of negligent hiring or supervision of AI systems as agents would reflect those systems’ increasing (but less than full) autonomy and avoid some of the challenges that victims face in proving the foreseeability elements of other liability theories.
Recommended Citation
Richard Zuroff, "Recognizing Operators’ Duties to Properly Select and Supervise AI Agents – A (Better?) Tool for Algorithmic Accountability" (2023) 19:1 CJLT 93.
Included in
Computer Law Commons, Intellectual Property Law Commons, Internet Law Commons, Privacy Law Commons, Science and Technology Law Commons