Policy Lab 3

13th December 2022 – Bratislava, Slovakia

The third popAI Policy Lab was held on 13th December 2022 at the Police Academy in Bratislava. The event was conducted in hybrid mode and hosted by the Vice-Rector for Science and Foreign Relations, prof. JUDr. Mojmír Mamojka, PhD. Thirty-two people attended, both onsite and remotely. The experts came from different sectors, such as academia, applied research, the Ministry of the Interior and national agencies. This varied composition helped address the topics from diverse angles, allowing a more comprehensive understanding of the phenomenon and its challenges.

Ethical aspects of using AI tools in Law Enforcement

Given the local dimension of the event, the group addressed several aspects relevant to law enforcement in the Slovak Republic and to the use of AI tools in law enforcement more broadly. Participants recognised that AI tools can genuinely support and facilitate the daily tasks of law enforcement (in the Slovak Republic and elsewhere); nevertheless, it is vital to have a common and coherent framework of action to ensure that the development and use of such systems do not contravene any legal or ethical principle.

At present, in fact, AI components are not widely present in the technologies used by law enforcement agencies. The police, for instance, use standard databases as information systems, which lack an AI component. The exception is a system from the company SOITRON, which is able to recognise license plates.

More consistent use of AI-based technologies would be advisable, provided that the ethical aspects and potential risks deriving from the use of such applications, particularly in relation to the GDPR, are carefully taken into consideration.

Case study: AI tools in monitoring social networks

The workshop then addressed a specific case study dealing with the monitoring of social networks. The Ministry of the Interior of the Slovak Republic, in cooperation with other law enforcement authorities, has been preparing tools with AI elements to search and evaluate information found on social networks.
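For illustration only, the kind of AI-assisted triage discussed here can be imagined as a simple filter that screens public posts and escalates likely hateful content to a human analyst. The sketch below is a minimal, hypothetical example in Python; the watch-list, names and logic are assumptions and do not describe the Ministry's actual tools, which would in practice rely on trained models, GDPR-compliant data handling and human oversight.

from dataclasses import dataclass

# Illustrative watch-list of hate-related phrases (an assumption, not a real lexicon).
HATE_TERMS = {"exterminate", "subhuman", "deserve to die"}

@dataclass
class Post:
    author: str
    text: str

def flag_for_review(post: Post) -> bool:
    """Return True if the post should be escalated to a human analyst."""
    text = post.text.lower()
    return any(term in text for term in HATE_TERMS)

if __name__ == "__main__":
    posts = [
        Post("user_a", "Great match tonight!"),
        Post("user_b", "They are subhuman and deserve to die."),
    ]
    for p in posts:
        if flag_for_review(p):
            print(f"Escalate post by {p.author} for human review")

Even in such a simplified form, the example makes the ethical questions raised at the workshop concrete: who defines the watch-list, who reviews the flagged posts, and under what legal basis the data are processed.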

In the Slovak Republic, it is considered that discussions on the internet should only be allowed for verified user accounts (i.e. it must be clear who the user is). In this regard, questions arose about whether such a requirement is in line with ethical principles and freedom of speech.

Following this general introduction, a case study based on an actual situation was presented. In October 2022, two people from the LGBTI+ community were murdered in Bratislava. In the weeks before the murders, the perpetrator had posted hateful tweets targeting several minorities, and further tweets were posted immediately before and after the attack. The workshop participants were acquainted with some of these tweets and then split into four working groups to discuss them, before reconvening for a final open discussion. Each working group was composed of a representative of the police or another law enforcement authority, a technical expert, a member of the Police Academy in Bratislava, and a member of another university or authority with an impact on law enforcement.

Key outcomes:

Law enforcement authorities must be able to act as quickly as possible to identify the perpetrator, using all means of operative-search activity

These actions must be carried out in accordance with legal regulations (e.g. the Criminal Code) and ethical standards to ensure that no abuse of power can occur

Thorough monitoring of social networks is necessary, but hardly possible without the help of AI tools; corresponding ethical standards must therefore be created and clear legal limits set

AI tools are needed in law enforcement, but in the Slovak Republic prevention should also be strengthened at all levels in order to reduce the risk of criminal acts, both on the internet and beyond

Questions also emerged about whether prevention should start at an early stage, i.e. in elementary schools (familiarity with the law, ethics, the risks of social networks, etc.)

In this framework, the popAI project can help in understanding the challenges, risks and also the opportunities related to the use of AI tools in support of Law Enforcement Agencies, thus contributing to finding an appropriate balance between the needs of public law in law enforcement on the one hand and the protection of private rights on the other.