Emotion recognition technologies and dignity in AI-based surveillance capitalism

Bruce Baer Arnold, Wendy Elizabeth Bonython, Tess Rooney

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Businesses, governments and other entities are increasingly presented with AI-based ‘emotion recognition’ biometric systems, promoted as tools offering robust insights into the honesty, comprehension or health support needs of individuals, particularly students and employees. Australian universities may consider adopting this technology as they expand their AI engagement in learning/assessment platforms and student support systems. Automated emotion recognition systems pose legal and human rights challenges arising from their potential to be used deterministically; their potential lack of reproducibility, replicability and validity; and their susceptibility to bias, notwithstanding their possible utility. Further, they rely on non-consensual or co-opted participation of individuals whose dignity is eroded by consequent reduction from persons to data subjects. This article evaluates such systems through a dignitarian human rights lens, highlighting the need for a precautionary approach.
Original language: English
Pages (from-to): 28-44
Number of pages: 17
Journal: Griffith Journal of Law & Human Dignity
Volume: 12
Issue number: 2
DOIs
Publication status: Published - 29 Jul 2025
