The Accountability Principles for AI (AP4AI) Project develops solutions to assess, review and safeguard the accountability of AI usage by internal security practitioners in line with EU values and fundamental rights.

AP4AI will deliver a step-change in the application of AI by the internal security community by offering a robust and application-focused Framework that integrates security, legal, ethical and citizens’ perspectives on AI.

The project started in 2021 as a collaboration between CENTRIC and Europol and is supported by FRA, Eurojust, EUAA and CEPOL in the framework of the EU Innovation Hub for Internal Security. It is currently completing its first phase.

Products and results

  • Accountability Principles for AI Framework
  • Toolkit for AP4AI
  • Implementation Road Map
  • Citizen engagement
  • Expert consultation

The core result of this project will be a defined and validated set of universal Accountability Principles for AI that internal security practitioners, including the justice sector, may adopt in order to demonstrate accountability in their use of AI. The principles will be universal and jurisdiction-neutral, intended as a guide for internal security and justice practitioners globally to support existing governance and accountability mechanisms through self-audit, monitoring and review.

In a second step, the project will create the AP4AI Framework, which will offer interrelated, governance-based guidelines as well as an openly and freely accessible toolkit to assist organisations and the public in identifying and implementing AI accountability needs.


AP4AI aims to ensure that project results are grounded in and acceptable to the different groups involved in and affected by AI applications in the security and justice field. Therefore, engagement with a broad range of stakeholder groups is key, as well as an empirically grounded, bottom-up approach.

AP4AI consults and engages with the following groups:

  1. Law enforcement agencies and border police
  2. Justice and Judiciary
  3. Human rights experts
  4. Legal AI experts
  5. Ethical AI experts
  6. Civil Society and NGOs
  7. Technical AI experts
  8. Citizens

To ensure the robust development and validation of accountability principles for AI, the project employs a sequential mixed-method approach with consecutive steps of exploration, integration and validation across three cycles.

  • Cycle 1 - Development of an agreed set of accountability principles for AI through expert consultations with the above groups.
  • Cycle 2 - Validation and refinement of the accountability principles through citizen consultation.
  • Cycle 3 - Integration into the AP4AI Framework and validation by expert consultation.

The AP4AI consortium further recognises that AI in the internal security domain is strongly affected by the national contexts in which AI capabilities are deployed. The consortium therefore conducts its consultation across 30 countries (all 27 EU Member States, the UK, the USA and Australia).

Current status

AP4AI is currently completing Cycle 1, having conducted consultation sessions with six expert groups and gathered written input from 69 experts in 28 countries. Overview of the expert consultation sessions to date:

  • 08/04/2021: Expert domain: Legal; Participants: Public prosecutors, judges, liaison prosecutors, justice sector experts
  • 04/05/2021: Expert domain: Law enforcement; Participants: Interior ministries, counter-terrorism experts, national police forces
  • 05/05/2021: Expert domain: Technical; Participants: Private sector AI providers, software developers, academia (technical)
  • 02/06/2021: Expert domain: Human rights; Participants: Fundamental rights experts, NGOs, academia
  • 17/06/2021: Expert domain: Legal; Participants: Academia (law)
  • 14/07/2021: Expert domain: Law enforcement; Participants: Law enforcement agencies

The written expert inputs cover all expert communities and stakeholder groups listed above, with the exception of citizens.

The citizen consultation (Cycle 2) is currently in progress across 30 countries. The results of the consultation of approximately 6,000 citizens from across Europe, the US, the UK and Australia will be available in March 2022.

Project Coordination


Europol is the Law Enforcement Agency of the European Union. Europol hosts the EU Innovation Hub for Internal Security, a collaborative European network of innovation labs aimed at ensuring coordination and collaboration between EU internal security actors (law enforcement, justice, fundamental rights, border security, immigration, customs, etc.) in the field of innovation. It supports the delivery of innovative solutions for internal security practitioners working for citizens' security in the area of freedom, security and justice. The Innovation Hub also contributes to establishing a common innovation picture for internal security and promotes the alignment of innovation and security research efforts across Europe.



CENTRIC (Centre of Excellence for Terrorism, Resilience, Intelligence and Organised Crime Research) is a multi-disciplinary and end-user-focused centre of excellence located within Sheffield Hallam University. The global reach of CENTRIC links academic and professional expertise across a range of disciplines, providing unique opportunities to progress ground-breaking research. The mission of CENTRIC is to provide a platform for researchers, practitioners, policy makers and the public to focus on applied security research.

Supporting Partners


European Union Agency for Criminal Justice Cooperation (Eurojust)


European Union Agency for Asylum (EUAA)


The EU Agency for Law Enforcement Training (CEPOL)

Additional Organisations

Advice and contributions by the EU Agency for Fundamental Rights (FRA).