Daiana Maura VESMAȘ
Andreea Nicoleta DRAGOMIR
Ana MORARI (BAYRAKTAR)
Abstract: In a world where big data analytics and artificial intelligence are rapidly evolving into new forms of policing, attention has returned to the delicate line between public security and individual rights. Against this background, the article presents a modern, empirical framework for analyzing predictive policing technologies and defines human security as the protection of the individual across the seven dimensions set out in the 1994 UNDP Human Development Report. With the rise of automated systems making decisions about suspicion and risk, the long-standing guarantees of the presumption of innocence, equality of arms, and the right to a fair trial are under significant pressure. The analysis is doctrinal and interdisciplinary, drawing on developments in Europe and internationally. By examining experiences from the United States and Europe, the article highlights the contrasting ways in which democratic systems attempt to regulate predictive policing and to balance efficiency with fundamental rights. Particular attention is paid to the new EU regulatory framework, namely the Artificial Intelligence Act, the GDPR's protections concerning automated decision-making, and the European Court of Human Rights' case law in judgments such as S. and Marper, Gaughran and Big Brother Watch. Taken together, these instruments provide critical standards for assessing whether algorithmic policing can be reconciled with democratic legality. Predictive policing will remain deeply contested unless robust institutional safeguards ensure transparency, accountability, and meaningful human oversight. The extent to which legal systems can manage the tension between technological disruption and respect for human dignity will ultimately determine the resilience of the rule of law in the age of algorithms.
Keywords: predictive policing, human security, artificial intelligence, algorithmic accountability, fundamental rights, rule of law
