Details
- Status
- Closed
- Reference
- DIGITAL-2022-DEPLOY-02-LAW-SECURITY-AI
- Publication date
- 15 February 2022 (https://europa.eu/!8VgCM9)
- Opening date
- Deadline model
- Single-stage
- Deadline date
- 17 August 2022, 17:00 (CEST)
- Funding programme
- Department
- European Health and Digital Executive Agency
- Programme Sector
- Digital
- Programme
- Digital Europe Programme
- Tags
- Digital technology
- Digital transformation
- EUFunded
- Proposals
Description
Objective
Constantly growing digitalisation across all sectors and a rapidly changing technological landscape provide vast opportunities for criminals and terrorists. Law Enforcement Agencies (LEAs) often lack the necessary technical and financial means, as well as the digital skills, to prevent, detect, investigate or prosecute criminal and terrorist activities supported by advanced technologies. In this context, supporting Member States' law enforcement (LE) cyber capacity building is paramount, in particular in the field of AI applications, which are key to addressing the data overload. Projects under this action should pay specific attention to fundamental rights challenges, notably by proposing adequate bias mitigation and non-discrimination mechanisms and by ensuring enhanced data quality and protection. They should also demonstrate strict compliance with the EU legal framework on data processing for police purposes, as set out in Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 and in the GDPR.
The activities supporting this policy are organised around two complementary actions: the data space for security and law enforcement (see topic 2.2.1.12.2) and the pilots, which are the subject of this action.
The overall objective is to enable the final validation and foster the uptake of AI systems for LE by running large-scale pilots on LEAs' premises. This is necessary because, in most cases, AI systems for LE need a final validation on real operational datasets, which can only be accessed in stand-alone secured environments.
This action will contribute to closing the gap between prototypes developed with the support of EU-funded security research and innovation programmes (i.e. up to TRL 7) and systems proven in an operational environment that bring clear added value to police practitioners (i.e. TRL 8/9).
Due to the sensitivity of the data handled in investigations, this can only be done by LE, on their own premises and on real use cases. This is particularly true in the context of AI, where the representativeness of datasets plays an important role in avoiding inaccurate, biased or even discriminatory outcomes.
From a data perspective, this action complements the creation of a data space for security and law enforcement. The data space will gather pseudo-operational data (or anonymised datasets) to train and test AI systems, while this action will make full use of real operational data in stand-alone environments to assess, validate and better train AI systems.
The involvement of the Europol Innovation Lab in a steering role (e.g. to identify the most promising prototypes and to contribute to the assessment of the applications), together with the participation of end-user-driven networks such as the European Anti-Cybercrime Technology Development Association (EACTDA), will ensure European added value. The large-scale pilots will be coordinated across Member States through the establishment of Core Groups within the framework of the Europol Innovation Lab, in order to contribute to the emergence of European technological solutions in key areas.
Scope
To achieve the above-mentioned objective, it is necessary to foster the testing, validation and optimisation of innovative digital forensic and investigation tools over sufficient periods of time (a minimum of 6 months) in a real operational environment. It is also necessary to coordinate the pilots and to ensure that the validated solutions can benefit EU LE at large and duly address fundamental rights challenges, notably by enhancing data quality, mitigating bias, detecting errors and avoiding any form of discrimination in the decision-making process.
This would be done by:
- Setting up a validation methodology for innovative investigative tools, designed and validated by the Europol Innovation Lab with the involvement of its dedicated core groups of EU Member States.
- Running large-scale pilots on LE premises to train, validate and adapt, in a real environment, a limited number of best-in-class innovative AI tools, selected among a set of tools proposed by the Europol Innovation Lab that comply with EU standards in terms of regulation and the protection of fundamental rights.
- Creating, when necessary, a set of annotated data during the pilot projects that could be shared among LE and potentially with Europol (and eventually feed the data space for security and law enforcement).
- Ensuring that the solutions validated through the pilots can benefit a number of EU LEAs, with the support of the Europol Innovation Lab and networks of LE practitioners and through the enforcement of appropriate intellectual property rights (IPR).