What is AP4AI?
The Accountability Principles for AI (AP4AI) Project develops solutions to assess, review and safeguard the accountability of AI usage by internal security practitioners in line with EU values and fundamental rights.
AP4AI represents a step-change in the application of AI by the internal security community by offering a robust, application-focused Framework that integrates security, legal, ethical as well as citizens’ positions on AI.
- Operational objective: Improve the knowledge and capabilities of practitioners in the internal security domain to integrate AI Accountability into their decision-making about AI capabilities throughout the full AI lifecycle (i.e., design, procurement, deployment, migration); provide practical capabilities to assess and demonstrate that specific AI capabilities and uses adhere to AI Accountability principles.
- Policy-related objective: Support policy-making and governance bodies with a mature, tested and expert- and citizen-validated definition of AI Accountability.
- Societal objective: Improve societal awareness of AI Accountability and participation in AI Accountability procedures; improve informed public trust in AI deployments in the internal security domain.
The AP4AI Project is jointly conducted by CENTRIC and Europol Innovation Lab and supported by Eurojust, the EU Agency for Asylum (EUAA), the EU Agency for Law Enforcement Training (CEPOL) and the EU Agency for Fundamental Rights (FRA), in the framework of the EU Innovation Hub for Internal Security.
The project started in 2021 as a collaboration between CENTRIC and Europol. The first two phases of the project are completed. In the third phase, AP4AI is finalising a self-assessment guideline to support compliance with the forthcoming EU Artificial Intelligence Act (AI Act). The first version of the tool (a beta version) was based on the content of AP4AI and aimed at testing the user interface and the functions of the web-based application. Several agencies were granted access to the tool and evaluated its capabilities. This tool has now been reworked and rebranded as CC4AI to reflect the requirements set forth by the EU AI Act, and will be deployed soon.
The AP4AI Framework is grounded in empirically verified Accountability Principles for AI as a carefully researched and accessible standard, which supports internal security practitioners in implementing AI and Machine Learning tools in an accountable and transparent manner and in line with EU values and fundamental rights.
AP4AI aims to ensure that project results are grounded in and acceptable to the different groups involved in and affected by AI applications in the security and justice field. Therefore, engagement with a broad range of stakeholder groups is key, as well as an empirically grounded, bottom-up approach.
AP4AI consults and engages with the following groups:
- Law enforcement agencies and border police
- Justice and Judiciary
- Human rights experts
- Legal AI experts
- Ethical AI experts
- Civil Society and NGOs
- Technical AI experts
To ensure the robust development and validation of accountability principles for AI, the project employs a sequential mixed-methods approach, consisting of consecutive steps of exploration, integration and validation across three cycles.