TPRC45
Saturday, September 9 • 3:05pm - 3:40pm
Smile for the Camera: Privacy and Policy Implications of Emotion AI

We are biologically programmed to display emotions publicly, as social cues and involuntary physiological reflexes: grimaces of disgust alert others to poisonous food, pursed lips and furrowed brows warn of mounting aggression, and spontaneous smiles relay our joy and friendship. Though shaped by evolutionary pressure to be public, these signals were once seen only within a few feet of our compatriots: purposefully fleeting, fuzzy in definition, and rooted in the immediate social context.

The application of artificial intelligence (AI) to visual images for emotional analysis obliterates the natural subjectivity and contextual dependence of our facial displays. The technology can be easily deployed in numerous contexts by diverse actors, for purposes ranging from the nefarious to the socially assistive, such as proposed autism therapies. Emotion AI acts as an algorithmic lens on our digital artifacts and real-time interactions, creating the illusion of a new, objective class of data: our emotional and mental states. Because these emotion algorithms build on a rich network of existing public photographs, as well as fresh feeds from surveillance footage or smartphone cameras, they require no additional infrastructure or improvement in image quality.

The privacy and security implications of emotional surveillance are unprecedented, especially when emotional data are collected alongside physiological biosignals (e.g., heart rate or body temperature). Emotion AI also presents new methods to manipulate individuals, such as targeting political propaganda or phishing for passwords based on micro-reactions. The lack of transparency or notice around these practices makes public inquiry unlikely, if not impossible.

To better understand the risks and threat scenarios associated with emotion AI, we examine three distinct technology scenarios: 1) retroactive use on public social media photos; 2) real-time use in adaptive advertisements, including political ads; and 3) mass surveillance of people in public.

Based on these three technically plausible scenarios, we illustrate how the collection and use of emotion AI data fall outside existing privacy legal frameworks in the U.S. and the E.U. For instance, even the comprehensive EU General Data Protection Regulation restricts only data that are identifying and thus considered biometric. Many risks associated with emotion AI, such as adaptive marketing or screening at an international border, do not require individual identification. Emotional data are also not currently considered health information, yet they could reveal sensitive information about an individual's internal mental state, especially when recorded over time.

Our research points to the unique privacy and security implications of emotion AI technology and its potential impact on both communities and individuals. Based on our assessment of analogous privacy laws and regulations, we illustrate the ways the collection and use of emotional data could cause harm even when conducted in accordance with EU and US law. We then highlight elements of these laws that could be restructured to cover these threat scenarios. Given the challenges of controlling the flow of these data, we call for the development of policy remedies responsive to the outlined emotion AI threat models.

Moderators

Jesse Sowell

Senior Advisor, Vice Chair of GDC Directing Outreach, Cybersecurity Fellow, M3AAWG / Stanford

Presenter

Elaine Sedenberg

UC Berkeley

Saturday September 9, 2017 3:05pm - 3:40pm EDT
ASLS Hazel Hall - Room 225