Document Type

Conference Proceeding

Publication Date

2020

Keywords

Artificial Intelligence, AI, Machine Learning, ML, Adversarial Machine Learning, Ethics, Physical Testing, Physical Domain, Methodology, ML Attacks, FRT, Facial Recognition Technology, Invisibility, Algorithms, Research Ethics, Design, Security, Privacy, Human Rights

Abstract

This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as “real world.” Despite this framing, however, we found that the physical or real-world testing conducted was minimal, provided few details about testing subjects, and was often conducted as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, and then critique the physical domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.