PANDORA – Criminal Law and Artificial Intelligence

Research projects

Research Group: Foundations of Criminal Normativity and Interdisciplinary Relations with Other Sciences and Philosophy


Main Researchers: António Brito Neves, Ricardo Tavares da Silva and Jorge Silva Santos


Researchers: Armando Dias Ramos, Catarina Abegão Alves, Christoph Bublitz, David Ramalho, Mafalda Moura Melim, Margarida Neiva Antunes, Myriam Herrera Moreno, Nuno Igreja Matos, Rita do Rosário, Vanessa Pelerigo, Maria Fernanda Palma, Helena Morão


Project Status: Ongoing (2024-2028)


Description

   The research project Criminal Law and Artificial Intelligence is shaped by the transdisciplinary approach that characterizes the activities of CIDPCC. It is not limited to building a one-way bridge of dialogue between Artificial Intelligence (AI) and Criminal Law: it also involves other sciences and areas of knowledge that are necessarily engaged with the questions AI compels Criminal Law to answer and that, in turn, offer contributions capable of prompting and reformulating these and other issues.

   Regarding human mental functioning, AI renews the relevance of advances in Neuroscience: the naturalistic reduction that Neuroscience seems to support finds a parallel in the translation of the decision-making process into algorithms replicable by computer systems. Indeed, if the phenomena of consciousness can be reduced to brain mechanisms, and if something equivalent to those mechanisms can be reproduced in computational systems, then AI, like Neuroscience, will compel a questioning of the traditional criteria of criminal liability and of the assumption of the agent's decision-making freedom that underlies them. AI also threatens to turn these advances against the role of criminal culpability, for instance when its techniques are used to build automated profiles that predict a person's behavior and restrict a suspect's freedom before a crime has even been planned. And if AI allows the human decision-making process to be reproduced, the prospect of replacing the human judge with a machine judge immediately arises.

   If such scenarios raise questions about the definition of what is still, and what is no longer, human behavior, questions whose answers should be sought in Neuroscience and Philosophy, they also confront us with doubts about the risks and benefits they entail for the criteria of Criminal Law. This line of thought questions not only the limits of individual responsibility but also those of the machine itself. If an algorithm can be programmed so that the decision about what to do in a given case dispenses with human intervention, and if that operation is shown to replicate human decision-making closely enough, then it is worth asking whether Criminal Law, or some form of sanctioning response built on similar criteria, should extend its scope to AI systems themselves.

   Decisions made by machines, without the involvement of a human operator in the specific situation, are perhaps most clearly illustrated by conflicts of interests arising from the action of programmed devices, such as autonomous vehicles. Defining decision-making criteria and tracing the lines of programmers' criminal responsibility can only succeed by drawing on the teachings of Philosophy and Ethics, rethinking classical dilemmas in light of new scenarios.

   The services provided by AI need not imply the replacement or removal of human operators; they can play an auxiliary role in criminal proceedings. Even so, it is important to understand the extent to which these services shape the convictions of investigators and decision-makers, and how easily they become pretexts or ready-made grounds for adopting restrictive measures, including convictions. The black-box effect, in which the decision-making process inside the algorithm becomes inaccessible, is perhaps the clearest sign of the risk that AI will end up making the reasons for state intervention in citizens' lives more impenetrable, and therefore more resistant to understanding and scrutiny.

   These dangers need not be limited to decision-making regarding investigative measures but could also extend to the substantive analysis of criminal liability. The multiplication and growing complexity of algorithms urge us to question whether they could be used to verify elements of the theory of crime, such as objective imputation or intent.


Objectives

The Criminal Law and Artificial Intelligence project aims to pursue the research lines outlined above, bringing together various disciplines to address the challenges posed by AI to the classical structures of Criminal Law and Criminal Procedure Law.

Neuroscience, the Philosophy of Mind, and Ethics are not merely seen as suppliers of contributions to be considered; they intervene to the extent that the law, guided by a critical openness, is compelled to rethink and even deconstruct its traditional categories. Non-legal sciences thus play a role in the very definition of the issues and criteria assumed in the theory of criminal responsibility and the decisions made at key moments of the criminal process.

The responses obtained, or the paths suggested, are also intended, in the reverse direction, to contribute to the areas involved—both by imposing limits and regulatory criteria (since that is also the role of the law) and by defining perspectives and lines of questioning to be explored further by those sciences (for the law also seeks answers and guidance).


Activities:

  • Bimonthly Research Seminar (with researchers)
  • Final Collective Conference (with researchers and invited experts)
  • Publication of a collective work disseminating the results

Articulation with Postgraduate Education

The Project will integrate, as junior researchers, Master's and PhD students whose dissertation topics align with its description and purposes, as well as students from the postgraduate course ‘Criminal Law and AI’ whose contributions prove to be an asset, particularly in light of the final reports they submit.
