Document Type
Conference Proceeding
Publication Date
2020
Keywords
Artificial Intelligence, AI, Machine Learning, ML, Adversarial Machine Learning, Ethics, Physical Testing, Physical Domain, Methodology, ML Attacks, FRT, Facial Recognition Technology, Invisibility, Algorithms, Research Ethics, Design, Security, Privacy, Human Rights
Abstract
This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as “real world.” Despite this framing, however, we found that the physical or real-world testing conducted was minimal, provided few details about testing subjects, and was often performed as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, and then critique the physical domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.
Recommended Citation
Kendra Albert et al., "Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning" (Paper delivered in Workshop on Dataset Curation and Security // Workshop on Navigating the Broader Impacts of AI Research, Proceedings of the 34th Conference on Neural Information Processing Systems, 6-12 December 2020) [unpublished].
Included in
Computer Law Commons, Human Rights Law Commons, Internet Law Commons, Legal Ethics and Professional Responsibility Commons, Science and Technology Law Commons