Policy Lab 4
20th April 2023 – Italy
popAI's fourth Policy Lab was held on 20th April 2023 in Rome, Italy. This page summarizes the results of a discussion conducted among three multidisciplinary groups composed of representatives of the local police, legal experts, ethics experts, and engineers and artificial intelligence experts. They were asked to discuss the potential benefits, challenges and possible solutions regarding a use case on the implementation of artificial intelligence in a video surveillance system. During the meeting, various perspectives and key issues emerged related to ethics, regulation, and the balance between security and privacy. The following are the main conclusions reached by the three participating groups.
Case study: Implementation of artificial intelligence in the video surveillance system
Following a brutal murder in which the killer struck a random victim among passers-by, an AI system has been set up in your city's video surveillance network, adopting algorithms for real-time data recognition, extraction and analysis from video streams. These allow the production of massive amounts of value-added information (metadata) in the domains of security, monitoring, analysis and planning. This will allow police, starting from information derived from witness accounts, which is fragmentary and qualitative, and excluding the use of biometric data, to extract frames of interest that then need to be validated. By way of example only, for vehicles this includes: vehicle type; color, lettering and markings; license plate and country of registration; direction and speed; and for pedestrians: the distinction between adult and child; the color of clothing and shoes; the presence of objects such as bags, backpacks, hats and glasses.
The system will be able to process video streams acquired from the City's cameras and from unconnected private cameras and, once these are uploaded to the platform, will be able to extract metadata from them, comparing and integrating it with the metadata present in the video streams generated by the connected camera system.
Based on the algorithms adopted, it will be possible to:
- detect anomalies in the behavior of people or vehicles;
- locate in real time the presence of people or vehicles in the various areas of the city based on their description;
- acquire and validate the frames of interest so that preventive actions or checks can be planned;
- identify in real time the presence of an individual and his or her movements in the various areas of the city from the description given by a witness;
- significantly reduce the time it takes police to acquire digital sources of evidence after a crime;
- counteract the phenomenon of road homicides committed by hit-and-run drivers.
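As an illustration of how the metadata-based retrieval described above might work, the following sketch (a hypothetical simplification; the class, field and function names are assumptions for illustration, not part of the actual system) filters non-biometric pedestrian metadata against a partial witness description to select frames of interest for human validation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PedestrianMetadata:
    """Non-biometric metadata extracted from one video frame."""
    camera_id: str
    timestamp: float          # seconds since start of the stream
    is_adult: bool
    clothing_color: str
    shoe_color: str
    carried_objects: set = field(default_factory=set)

def matches_description(record: PedestrianMetadata,
                        clothing_color: Optional[str] = None,
                        carried_object: Optional[str] = None,
                        adult: Optional[bool] = None) -> bool:
    """Return True if the record is consistent with a (possibly partial)
    witness description; any field the witness did not specify matches."""
    if clothing_color is not None and record.clothing_color != clothing_color:
        return False
    if carried_object is not None and carried_object not in record.carried_objects:
        return False
    if adult is not None and record.is_adult != adult:
        return False
    return True

def frames_of_interest(records, **description):
    """Select candidate frames, to be validated afterwards by an operator."""
    return [r for r in records if matches_description(r, **description)]
```

Because the query works only on coarse attributes (clothing color, carried objects, adult/child), many people matching the description will be returned alongside the person sought, which is precisely why the extracted frames must then be validated by a human, as the use case requires.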
The introduction of AI software in surveillance has undeniable advantages, notably a significant reduction in investigation times for law enforcement. The system could in fact analyze many hours of video in a very short time, whereas the same job would take several police officers several days. At the same time, the adoption of this technology can enhance the public's perception of safety, although there is no proof that it also serves as a deterrent: despite expectations of increased security, no direct correlation has been shown between the adoption of such software and a decrease in crime rates.
Among the disadvantages, there is a risk that such tools could be used for social scoring, labeling individuals or groups, and creating prejudices and biases. It is important to consider that the misuse of these tools may have negative consequences far removed from the purpose for which the software was initially designed. Additionally, the security and protection of personal data are crucial. The lack of clear regulation of data collection, retention and access has raised concerns about data management; examples cited included the duration of data retention and the number of individuals with access to the data. It is also necessary to consider that this software will indiscriminately collect information on a wide range of people captured in the footage, even though only a fraction of them will actually be involved in the investigation.
During the discussion, the need for clear norms and regulations governing the use of AI in surveillance became evident. It is essential to minimize the collection of data and metadata to what is necessary for the project in order to mitigate the associated risks. Important questions that participants said required concrete answers included: How long are the data retained? How many people have access to them? Moreover, it is crucial to reflect on the fact that data are collected indiscriminately on a wide range of people during a preliminary investigation, even though only some of them will be involved in the subsequent investigation. Current regulations are inadequate; detailed norms are therefore necessary to ensure the protection of citizens' rights.
The integration of AI in surveillance systems raises complex ethical concerns. The discussion is not solely about individual data protection, but also about individual and collective freedoms that may be compromised in the pursuit of enhanced security. The stakeholders stressed how important it is to understand that the implementation of such projects represents a choice of great societal importance. The invasive nature of AI usage in surveillance systems calls for careful consideration. Special attention must be paid to groups of people who may be disadvantaged by an AI system that does not recognize everyone equally well, and who thereby risk excessive surveillance.
In order to mitigate the potential risks associated with the introduction of AI systems in surveillance, it would be beneficial to draw on existing best practices. At present, however, there is a lack of data on the positivity rate of the software tools that law enforcement could use.
A contextual evaluation should be conducted for each specific use case, involving a dedicated multidisciplinary team with diverse expertise capable of assessing the applicability of these technologies within a given context. This assessment should consider a range of contextual trade-offs that depend on the objective or the city where the technology will be deployed. It is important to determine the specific purposes for which this technology will be used. Will it only be applied to specific cases?
The technological process behind such a system is not entirely clear. Although the system is said not to collect biometric data, gathering only metadata, the identification of individuals ultimately requires the analysis of biometric data.
On the legal side, the experts suggested the enactment of comprehensive legislation, including laws determining administrative offenses and the conditions under which the use of these systems is lawful. The principles underlying such legislation would include transparency, functionality, and procedural requirements. There should be public, rather than purely technical, control based on ethical values. Additional principles to consider include maintenance, to prevent and correct algorithmic errors, and the non-exclusivity of decision-making algorithms. Furthermore, the principle of non-discrimination should be implemented to ensure that the algorithm does not disproportionately focus on individuals with specific physical characteristics or in certain territories.
The professionalism and ethical conduct of those carrying out the assessments should be given careful consideration, with the implementation of training courses and adherence to professional codes of conduct. It is impossible to halt the progress of such technology, but it is unwise to rely on it solely for its time-saving advantages; continuous verification is essential. In addition to operator training, transparency and clear communication with the public are vital aspects that should not be overlooked. It is crucial to inform citizens transparently about the deployment of AI in the surveillance network. By providing clear information, the public can develop a better understanding of the system's capabilities, limitations, and the safeguards in place. This transparency fosters trust and ensures that citizens are aware of how their privacy and security are being protected.
On the other hand, the points outlined in this hypothetical legislation align with the requirements identified by the group of experts currently drafting guidelines for the ethical use of artificial intelligence at the European level. One possible solution is to adopt an ethics-by-design approach, incorporating ethical considerations during the design phase. Involving multiple stakeholders from the outset can minimize risks during the programming phase. Despite efforts to anticipate the potential applications of a technology, there is always a risk of discovering concerning aspects that were not initially foreseen; but this holds true for fields other than computer science as well.
None of the computer scientists advocate the complete replacement of humans with technology; rather, these tools are intended to provide support, since a fully automated decision-making process is undoubtedly risky. The scientific community agrees that such AI systems should be transparent and explainable. These systems can explain why they have selected specific individuals in a video stream and are not considered “black boxes.” The ability to understand the instrument, together with the quality of the data and the algorithm, also helps protect the right to a defense.
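A minimal sketch of what such a selection explanation could look like (a hypothetical illustration; the actual systems discussed are not specified in this document): for each attribute in the witness description, the tool reports the expected value, the value observed in the frame's metadata, and whether they match, so that a human validator can audit the selection rather than face an opaque yes/no decision:

```python
def explain_selection(record: dict, description: dict) -> dict:
    """For each attribute the witness supplied, report the expected value,
    the value observed in the frame's metadata, and whether they match."""
    return {
        attribute: {
            "expected": wanted,
            "observed": record.get(attribute),
            "match": record.get(attribute) == wanted,
        }
        for attribute, wanted in description.items()
    }
```

An operator reviewing a candidate frame can then see at a glance which parts of the description were or were not satisfied, which supports both transparent decision-making and, as noted above, the right to a defense.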
In conclusion, training and information play a central role in introducing an AI system into the surveillance network. Adequate operator training and transparent communication with the public are essential for responsible and effective utilization of AI technology. Encouraging academic research enables a deeper understanding of the potential benefits and risks, facilitating informed decision-making. Establishing robust regulations and technical specifications ensures proper data handling and privacy protection. Through a holistic approach that combines rigorous training, transparent communication, academic research, and robust regulations, we can navigate the complexities of integrating AI into surveillance systems while upholding ethical principles and safeguarding societal values.