Reports
Disclaimer
AP4AI is an ongoing research project. This website and its content are developed jointly by several parties to support security and judiciary community practitioners. Outcomes, opinions, critical reflections, conclusions and recommendations on this website and in related reports, publications, tools and presentations represent the output of a collaborative effort. They do not necessarily reflect the individual views of participants in the AP4AI project, Europol, CENTRIC, FRA, Eurojust, EUAA or CEPOL.

AP4AI has received ethics approval from the university ethics board of Sheffield Hallam University, where CENTRIC, the academic lead of the AP4AI project, is located. AI legal regulation is still in development: adherence to these non-binding principles cannot replace compliance with legal rules (e.g. national legislation), nor do the project partners replace the role of a regulatory authority in relation to the use of AI for internal security purposes.

This website contains links to other websites. These are not under the control of the partners to AP4AI and are not endorsed by the project. By clicking a link and navigating to a separate website, you may agree to the terms and conditions of that website.

Building on the foundations of AP4AI, we have created a web-based tool to help internal security practitioners assess the compliance of their AI systems with the requirements of the AI Act. It allows users to evaluate whether existing or future applications meet the criteria set by the new regulatory framework.
Recognising the transformative potential of AI, the EU has proposed a legal framework to balance innovation with the protection of fundamental rights and societal values. This legislative proposal is referred to as the Artificial Intelligence Act (AIA).