Policy Lab 1

25th May 2022 – Athens, Greece

Main outcomes, case study 1: predictive, research, and detection systems using crime data to improve policing and combat crime

Organizational/Regulation Level

AI systems should support decision making, not make the decisions. There should always be human supervision throughout the AI system's lifecycle, and the final decision should be made by humans.

Sandboxes should be developed for implementing the systems in protected environments/settings. In this way, explainability and cyber security could be further explored without putting data subjects at risk.

Regulation should promote and ensure citizens' awareness of the existence and implementation of an AI system and enable objection to potentially unjust decisions.

Establishment of an AI observatory body: potentially an independent authority with the technical, organizational, and practical capabilities to assess a system's compliance with legal and ethical rules and regulations set by interdisciplinary committees and stakeholders.

Certification of system accountability through specific processes and frameworks; algorithm audits covering "democratic" data and "robust" algorithms: what data are collected, for what purpose, qualitative assessment, potential bias, etc.

Need for qualified staff/users and model/technology designers, with continuous training processes in place. Relevant certifications should be described in the legal framework.

Legal harmonization of AI usage at the national and European level. A legal framework should ensure data protection and enable judges' intervention regarding permission for data usage.

Interdisciplinary assessment of the whole process of development, implementation, and regulation of the system, ensuring also the ethical processing of data.

Before system procurement, the technical specifications must be accepted by social organizations and agencies, while during implementation the system needs to be checked by representatives of social and other bodies, who should give an opinion on whether it complies with the institutional framework.

Technical Level

Systems need to be constantly improved/updated based on specific criteria, regulations, and limitations that serve specific needs (optimality).

A multi-disciplinary approach is needed, with collaboration between ethicists, lawyers, psychologists, and the humanities and social sciences in general, and data scientists and software engineers (the technical approach).

Transparency can be approached with open-source algorithms.

Metrics, benchmarks, and thresholds need to be explored so as to ensure the quality of model development.
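
The idea of explicit thresholds can be sketched as a quality gate that a model must pass before deployment. This is a hypothetical illustration; the metric names and threshold values below are assumptions, not recommendations from the lab.

```python
# Hypothetical sketch: gating a model release on explicit quality thresholds.
# Metric names and threshold values are illustrative assumptions only.

QUALITY_THRESHOLDS = {
    "precision": 0.85,      # minimum share of flagged cases that are correct
    "recall": 0.70,         # minimum share of true cases that are caught
    "subgroup_gap": 0.05,   # maximum allowed error-rate gap between groups
}

def passes_quality_gate(metrics: dict) -> bool:
    """Return True only if every benchmarked metric meets its threshold."""
    return (
        metrics["precision"] >= QUALITY_THRESHOLDS["precision"]
        and metrics["recall"] >= QUALITY_THRESHOLDS["recall"]
        and metrics["subgroup_gap"] <= QUALITY_THRESHOLDS["subgroup_gap"]
    )

# A model that is accurate overall but uneven across subgroups fails the gate.
print(passes_quality_gate({"precision": 0.9, "recall": 0.8, "subgroup_gap": 0.12}))  # False
```

The point of the sketch is that the thresholds are written down and checked automatically, rather than judged informally at release time.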

System explainability regarding outcomes and recommendations: it is important that the system can indicate the key parameters considered for a specific result/outcome. The system should make its "logic" clear to the user, so the end user can assess the validity/relevance of the outcome to support the decision-making process.
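
One minimal way to realize this is for every prediction to carry its own explanation: the parameters that contributed most to the score. The sketch below is illustrative only; the feature names, weights, and linear scoring are assumptions, not a description of any deployed system.

```python
# Illustrative sketch: a prediction that reports the key parameters behind it,
# so a human reviewer can judge the "logic" before acting on the outcome.
# The linear scoring scheme and all names/values are invented for the example.

def predict_with_explanation(features: dict, weights: dict, top_k: int = 3):
    """Return a score plus the top-k parameters driving it."""
    # Contribution of each parameter = its value times its learned weight.
    contributions = {name: features[name] * weights.get(name, 0.0) for name in features}
    score = sum(contributions.values())
    # Rank parameters by the magnitude of their influence on the score.
    key_parameters = sorted(
        contributions, key=lambda name: abs(contributions[name]), reverse=True
    )[:top_k]
    return score, key_parameters
```

Returning the explanation together with the score keeps the human in the loop: the end user sees not just the recommendation but which inputs drove it.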

Interoperability between different databases for the best and most effective implementation of the system (for example, records of unaccompanied minors, foster care cases, and adoptions). This needs to be assessed under the relevant legal and ethical framework as well. Where interoperability is granted, the regulation needs to describe the purpose, use, etc. in specific detail.

Main outcomes, case study 2: systems for predicting dangerous driving using video footage from traffic management cameras or other real-time footage to prevent traffic accidents

Organizational/Regulation Level

Creation of data intermediaries: bodies that provide their services to citizens free of charge, managing third parties' use of their data based on their preferences. In this way, when police use data originally collected for other purposes, citizens will be notified about the process, the purpose, storage, etc. This process implements the right to informed consent in an automated way.

Open data for accountability and transparency purposes, and an entity or civil society organization for data benchmarking.

Operators' training and supervision, and a clear institutional operating framework justifying access to the system and describing the process for accountable, regular control of its use (how, by whom, and why the system data can be used).

Clear legal framework on protection/restricted use of biometric data and copyright issues.

Legal frameworks relevant to data protection in the context of the case study: GDPR, AI Act, Data Governance Act.

Technical Level

Interoperability between crime predictive systems and traffic division for more effective profiling and valid predictions.

Securing user access (access grading) and providing usage logging.
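
Access grading and usage logging can be combined so that every access attempt, allowed or denied, leaves an audit trail. The sketch below is a minimal illustration; the roles, actions, and log format are assumptions made for the example.

```python
# Minimal sketch of graded access with usage logging. Roles, actions, and the
# log record format are illustrative assumptions only.
import datetime

# Each role is granted a specific set of actions (access grading).
PERMISSIONS = {
    "operator": {"view_footage"},
    "supervisor": {"view_footage", "export_report"},
}

usage_log = []  # append-only record supporting regular accountability checks

def access(user: str, role: str, action: str) -> bool:
    """Grant the action only if the role permits it; log every attempt,
    allowed or denied, so audits can reconstruct who used what, and why."""
    allowed = action in PERMISSIONS.get(role, set())
    usage_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
    })
    return allowed
```

Logging denied attempts as well as granted ones is the design choice that makes the "how, by whom, and why" question answerable after the fact.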

Driving behavior exception parameters (e.g. ambulances).

Data and system security (transparency, accountability).

Record system decisions.

Continuous evaluation of system decisions and readjustment of the system accordingly.
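
Recording decisions and continuously evaluating them can be sketched together: each decision is logged, outcomes are filled in as they become known, and accuracy over reviewed cases signals when readjustment (e.g. retraining or threshold tuning) is due. All names and values below are illustrative assumptions.

```python
# Hypothetical sketch: record each system decision, attach the eventual
# outcome once known, and periodically compute accuracy over reviewed cases
# to decide whether the system needs readjustment. Illustrative only.

decision_log = []

def record_decision(case_id, decision, outcome=None):
    """Log a decision; outcome stays None until a human review confirms it."""
    decision_log.append({"case_id": case_id, "decision": decision, "outcome": outcome})

def evaluation_accuracy():
    """Share of reviewed decisions the system got right; None if none reviewed."""
    reviewed = [d for d in decision_log if d["outcome"] is not None]
    if not reviewed:
        return None
    correct = sum(1 for d in reviewed if d["decision"] == d["outcome"])
    return correct / len(reviewed)
```

A falling accuracy over successive evaluation windows would be the trigger for readjusting the system rather than continuing to trust its decisions unchanged.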

Agenda of the meeting