Biometric Mirror exposes the possibilities of artificial intelligence and facial analysis in public space. The aim is to investigate the attitudes that emerge as people are presented with different perspectives on their own, anonymised biometric data, distilled from a single photograph of their face. It sheds light on the specific data people object to and accept, the sentiments it evokes, and the underlying reasoning. Biometric Mirror also presents an opportunity to reflect on whether the plausible future of artificial intelligence is a future we want to see take shape.
Big data and artificial intelligence are two of today’s most popular buzzwords. Both promise to deliver insights that were previously too complex for computer systems to calculate. With examples ranging from personalised recommendation systems to automatic facial analysis, user-generated data is now analysed by algorithms to identify patterns and predict outcomes. The common view is that these developments will have a positive impact on society.
Within the realm of artificial intelligence (AI), facial analysis is gaining popularity. Today, CCTV cameras and advertising screens are increasingly linked with analysis systems able to detect the emotions, age, gender and demographic information of people passing by. Facial analysis has been shown to increase advertising effectiveness in retail environments, since campaigns can now be tailored to specific audience profiles and situations. But facial analysis models are also being developed to predict your aggression level, sexual preference, life expectancy and likelihood of being a terrorist (or an academic) simply by monitoring surveillance camera footage or analysing a single photograph. Some of these developments have gained widespread media coverage for their innovative nature, but the ethical and social impact is often only an afterthought.
Current technological developments are approaching the ethical boundaries of the artificial intelligence age. Facial recognition and analysis in public space raise concerns because people are photographed without prior consent, and their photos disappear into a commercial operator’s infrastructure. It remains unclear how the data is processed, how it is tailored for specific purposes, and how it is retained or disposed of. People also have no opportunity to review or amend their facial recognition data. Perhaps most worryingly, artificial intelligence systems may make decisions or deliver feedback based on the data, regardless of its accuracy or completeness. While facial recognition and analysis may be harmless for tailored advertising in retail environments or for unlocking your phone, they quickly push ethical boundaries when the broader purpose is to monitor society more closely.
This project is a collaboration between The University of Melbourne’s Microsoft Research Centre for Social Natural User Interfaces (SocialNUI) and Science Gallery Melbourne.
Niels Wouters, Digital Media Specialist, School of Computing and Information Systems, University of Melbourne
Frank Vetere, Professor & Director, Microsoft Research Centre for SocialNUI, University of Melbourne
Rose Hiscock, Director, Science Gallery Melbourne
Eduardo Velloso, Lecturer, School of Computing and Information Systems, University of Melbourne
Ryan Kelly, Research Fellow, School of Computing and Information Systems, University of Melbourne
Hasan Shahid Ferdous, Research Fellow, School of Computing and Information Systems, University of Melbourne
Zaher Joukhadar, Lead Software Engineer, Microsoft Research Centre for SocialNUI, University of Melbourne
Joshua Newn, PhD Candidate, Microsoft Research Centre for SocialNUI, University of Melbourne
Nick Smith, PhD Candidate, Microsoft Research Centre for SocialNUI, University of Melbourne