Engineering for social justice instead of fairness: a perspective from the context of policing

By Fieke Jansen

The integration of data-driven decision-making and Artificial Intelligence (AI) into many aspects of present-day life has triggered debates about its impact on society, a prominent one being the effect of biased datasets and unfair algorithms on affected communities. The HUMAINT Winterschool on Human Intelligence and Machine Intelligence in Seville, organized by the Joint Research Centre of the European Commission, provided five days to reflect on the ethical, social, legal and economic impact of AI. At this interdisciplinary event, computer scientists presented approaches that predominantly focused on how to engineer fair AI models by minimizing bias and error rates, at times overlooking the messy, complex, ever-changing and unjust contexts in which these systems are deployed. There is a tension between accounting for datafication as a continuation of historical and ongoing power struggles over the organization of society, and practically translating societal values and norms into functions and code. In this blog I argue that, if the integration of optimized and opaque data-driven decision-making and AI systems into policing, migration, labour markets and insurance is seemingly inevitable, then designing fairer systems requires engineers to work with individuals, communities and organizations to understand how to support these people’s ability to challenge the premise of these systems and their implementation and, when needed, to make social justice claims.

Relying on engineering to fix ingrained social inequalities, or optimizing AI and data-driven systems for fairness, can be considered a techno-fix (Huesemann & Huesemann, 2011): it overlooks the premises on which these systems are deployed in the first place. Looking at a system in isolation from its context prevents an understanding of whose interests are being served, and of whether and how the system reinforces and obscures power asymmetries in society. Selbst et al. (2019) identified this tension between fairness as an engineering principle and justice as a property of societal structures:

“We contend that by abstracting away the social context in which these systems will be deployed, fair-ML researchers miss the broader context, including information necessary to create fairer outcomes, or to even understand fairness as a concept. Ultimately, this is because performance metrics are properties of systems in total, not subsystems, and fairness and justice are properties of social and legal systems like employment and criminal justice, not the technical systems within” (Selbst et al., 2019: 2).

Integrating justice as a property of society into computational systems requires an approach that takes the broader context into account. In her lecture at the HUMAINT Winterschool, Virginia Dignum proposed a design for values approach, a methodology that aims to put human values at the core of AI systems (Pigmans et al., 2017; Dignum et al., 2018). She argues that AI systems are artefacts designed by people to solve specific problems; the design for values approach therefore requires a clear articulation of the goals of the system, of who decides on these objectives, and of which values are important and prioritized. The second half of the HUMAINT Winterschool was dedicated to project work, creating a space to put the different theoretical approaches into action. The project group I was part of was asked to use the design for values approach in the development and implementation of a facial recognition technology for police, and to translate the values ‘privacy’ and ‘security’ into norms and functionalities.

The project group stipulated that our facial recognition system had technically eliminated most bias in terms of racial and gender disparities, and that it would only be used by police to identify and arrest suspects whose mugshots appear on the watch list of an active police investigation; it would not be used for prediction or profiling. This triggered a discussion on the extent to which facial recognition, even when potential racial or gender bias is eliminated, can still reinforce structural inequalities. To address this question it is important to step back and look at how facial recognition systems are currently implemented by police in the wild. Researchers from the ACLU (Snow, 2018) and the MIT Media Lab (Buolamwini and Gebru, 2018) showed that Amazon’s facial recognition software ‘Rekognition’, which is used by law enforcement in the USA to identify potential suspects, falsely identifies individuals, and especially women of colour. Big Brother Watch reported that the implementation of a facial recognition system by police in the UK was deeply problematic: “On average, a staggering 95% of ‘matches’ wrongly identified innocent people” (Ferris, 2018). While optimizing the system to reduce error rates for people of colour, and especially women of colour, might distribute the negative impacts of facial recognition more equally across society, it is important to understand the system in the context of policing. Police, the enforcement arm of the state, have to make choices about where to deploy a facial recognition system, for which crimes, and against which suspects. These choices are informed by historic and ongoing societal struggles and contested state practices (Gates, 2011; Dencik et al., 2018; Vitale, 2017). In the case of police, the definition of who populates the watch lists is contested in itself. “It has been estimated by the Biometrics Commissioner that hundreds of thousands of images on the Police National Database are of innocent people” (Ferris, 2018: 6).
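To make concrete what findings like Buolamwini and Gebru’s rest on, here is a minimal sketch of the kind of per-group error accounting an audit performs. All records and group labels are invented for illustration; they are not data from the studies cited above.

```python
# Illustrative sketch (not from the studies cited above): per-group
# false match rates for a face recognition system. Each record is a
# positive 'match' decision, labelled with the subject's demographic
# group and whether the match actually identified the right person.
from collections import defaultdict

# (group, system_said_match, actually_same_person) - invented data
records = [
    ("lighter_male", True, True), ("lighter_male", True, True),
    ("lighter_male", True, False),
    ("darker_female", True, False), ("darker_female", True, False),
    ("darker_female", True, True),
]

def false_match_rate(records):
    """Share of positive 'matches' per group that named the wrong person."""
    matches, errors = defaultdict(int), defaultdict(int)
    for group, said_match, is_same in records:
        if said_match:
            matches[group] += 1
            if not is_same:
                errors[group] += 1
    return {g: errors[g] / matches[g] for g in matches}

print(false_match_rate(records))
```

Even with invented numbers, the point of the disaggregation is visible: an aggregate error rate can look acceptable while one group bears a far higher share of wrong identifications.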
Reports by Amnesty International (2018) and Scott (2018) on the Gangs Matrix, a programme of the London Metropolitan Police to curb gang activity, revealed how innocent citizens were put on the list because they were black, male, under 30, lived in certain neighbourhoods and had been subjected to stop-and-search policing. At the same time, other types of criminals are not included on these watch lists. The Dutch public prosecutor decided not to prosecute the Dutch bank Rabobank and its employees for their involvement in the Libor affair, the fraudulent banking practice of inflating or deflating interbank rates, which had an immense negative impact on financial markets and on people across the world. The argument for not prosecuting revolved around Rabobank’s financial settlement with the oversight committee, which not only absolved the bank and its employees from any responsibility (Driessen, 2017; Deira, 2015), but also prevents Rabobank employees who were involved in a criminal activity that financially affected millions of people from appearing on police watch lists. This societal bias towards a certain type of crime and criminal shapes who will be subjected to facial recognition systems. No engineering property can account for this societal bias, so which functions can be designed that allow these systems to be monitored and challenged?

When criminal databases are themselves contested, how can people challenge disproportionate targeting by police? Current protection mechanisms and possibilities for recourse are not sufficient to challenge discriminatory practices that are ingrained in society and technology. The first challenge is that rights are given to individuals, not groups. An individual can submit a subject access request to learn what information the police hold on them and challenge their presence on a watch list, but cannot question the system as a whole. Secondly, finding, exposing and legally challenging the systematic disproportionate targeting of specific communities in Europe runs into the problem that the majority of police forces do not officially collect data on ethnicity and race (Deutch, 2015). Thirdly, the legal regimes that govern the use of personal data and the implementation of technology in policing, such as the European Union’s data protection regime for police, the Police Directive 2016/680 (European Commission, 2016), focus among other things on protecting personal data, privacy impact assessments, data retention and audit trails. They do not account for the need to collect data in order to reflect on police practices. In the design for values session I argued that if we listen to the needs of interest groups in society during the design process, it might become clearer that we need to integrate functions that enable people to challenge the broader system in which facial recognition is implemented. One approach could be to make it mandatory for these systems of oppression to collect and publish an annual insight into which crimes the tools are targeting, in which locations, against which (ethnic) groups, and where police acted on the tools’ output. This would not create a fair system, but it would enable anti-discrimination groups to challenge the implementation and use of these tools in court if they were disproportionately applied to specific communities.
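The transparency function proposed above can be sketched in a few lines: given a published log of whom a facial recognition tool was used against, compare each group’s share of deployments to its share of the local population. All numbers, group names and the simple ratio used here are invented for illustration, not a prescribed legal or statistical standard.

```python
# Hypothetical sketch of the proposed annual transparency report:
# a disproportionality ratio per demographic group. A ratio above 1
# means a group is targeted more often than its population share
# would suggest. All figures below are invented.

deployments = {"group_a": 700, "group_b": 300}   # tool uses per group
population  = {"group_a": 0.40, "group_b": 0.60} # population share per group

def disproportionality(deployments, population):
    total = sum(deployments.values())
    # each group's share of deployments divided by its population share
    return {g: (deployments[g] / total) / population[g] for g in deployments}

print(disproportionality(deployments, population))
# group_a: (700/1000) / 0.40 = 1.75, i.e. targeted at 1.75x its share
```

A published figure of this kind is exactly what anti-discrimination groups would need as evidence when challenging a deployment in court; the point is not the arithmetic but the obligation to publish it.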
Therefore, the questions introduced by Virginia Dignum about who is involved in the design process and whose values count are crucial. Integrating the concerns of those affected by technology into the process of responsible AI development allows our focus to move beyond reducing bias and error rates, to account for the context in which these tools will be utilized. That is, the aim is not to build a tool to fix society’s failures but to give people the tools to challenge power in an age of opaque technologies.


Amnesty International (2018). Trapped in the Matrix: Secrecy, stigma, and bias in the Met’s Gangs Database. May 2018

Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81:1–15, 2018

Deira, S. (2015). Waarom Rabobank niet wordt vervolgd voor Libor-affaire. Elsevier weekblad

Dencik, L., Jansen, F. & Metcalfe, P. (2018). A conceptual framework for approaching social justice in an age of datafication.

Deutch, J. (2015). Profiling (In)justice: Disaggregating Data by Race and Ethnicity to Curb Discriminatory Policing.

Dignum, V., Baldoni, M., Baroglio, C., Caon, M., Chatila, R., Dennis, L., Génova, G., Kließ, M., Lopez-Sanchez, M., Micalizio, R., Pavón, J., Slavkovik, M., Smakman, M., van Steenbergen M., Tedeschi, S., van der Torre, L., Villata, S., de Wildt, T. & Haim, G. (2018). Ethics by Design: necessity or curse? Conference paper PRIMA2017

Driessen, C. (2017). Libor-fraude laat Rabobank maar niet los. NRC

European Commission (2016). Directive (EU) 2016/680. Official Journal of the European Union

Ferris, G. (2018). Face Off: The lawless growth of facial recognition in UK policing. Big Brother Watch

Gates, K. A. (2011). Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance. New York: New York University Press

Huesemann, M., & Huesemann, J. (2011). Techno-fix: why technology won’t save us or the environment. New Society Publishers.

Pigmans, K., Aldewereld, H., Dignum, V. & Doorn, N. (2017). The Role of Values. In Proceedings of Coordination, Organizations, Institutions, and Norms in Agent Systems XII

Scott, S. (2018). The War on Gangs or a Racialised War on Working Class Black Youths. Monitoring Group

Selbst, A.D., Boyd, D., Friedler, S.A, Venkatasubramanian, S. & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency FAT* ’19

Snow, J. (2018). Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots. ACLU

Vitale, A. S. (2017). The End of Policing. Verso Books: London and New York

