The use of artificial intelligence in law enforcement: the scope of ethics and the impact on human rights
Keywords:
Artificial Intelligence, Law Enforcement, Ethics, Human Rights, Digital Technology
Abstract
This research aims to study: (1) the scope of artificial intelligence use in law enforcement; (2) ethical issues and accountability principles in its use; and (3) the potential impact on human rights in the justice system. This qualitative study drew on 10 key informants with expertise in law, artificial intelligence, and human rights. Data were collected through document review, in-depth interviews, and field observations, and analyzed using content analysis and thematic analysis. Data quality was verified through triangulation and member checking, and the study was conducted in accordance with research ethics principles so as to reflect a range of perspectives on the application of AI in the justice process.
Research findings indicate that the use of artificial intelligence (AI) in law enforcement has a broad scope, covering surveillance, crime data analysis, and prediction of high-risk areas, which helps improve the efficiency and speed of officers' operations. However, significant ethical issues were identified, such as algorithmic bias, a lack of transparency in decision-making processes, and unclear accountability when system errors occur; these issues affect human rights, particularly the right to privacy and the right to a fair justice process. Therefore, the use of AI in the public sector should be guided by principles of ethics and transparency and supported by oversight mechanisms to prevent rights violations and build public trust.
References
Kittikhun Tanrak. (2021). Law and artificial intelligence: Fundamental concepts and legal issues (in Thai). Bangkok: Winyuchon Publication House.
Thailand Institute of Justice (TIJ). (2021). AI and the Thai justice process: Risks, opportunities, and policy directions (in Thai). Bangkok: TIJ.
Royal Thai Police. (2024). Report on the technology crime situation in Thailand (in Thai). Bangkok: Technology Crime Suppression Center, Royal Thai Police.
Digital Government Development Agency (Public Organization). (2023). Digital government policy and guidelines B.E. 2565–2570 (2022–2027) (in Thai). Bangkok: DGA.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals—and it’s biased against blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Bryson, J. J. (2019). The past decade and future of AI’s impact on society. In Towards a New Enlightenment? BBVA OpenMind.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
Council of Europe. (2021). Guidelines on Artificial Intelligence and Human Rights. Strasbourg: Council of Europe Publications.
European Commission. (2022). Ethics guidelines for trustworthy AI. Brussels: European Union.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People-An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Grimson, E. (2025). AI: Past, present, and future [keynote summary, in Thai]. Massachusetts Institute of Technology (MIT). Retrieved from https://techsauce.co/tech-and-biz/ai-past-present-future-eric-grimson-mit-keynote
Perry, W. L., McInnis, B., Price, C. C., Smith, S. C., & Hollywood, J. S. (2013). Predictive policing: The role of crime forecasting in law enforcement operations. RAND Corporation.
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Harlow, England: Pearson Education.
Toronto Declaration. (2018). Protecting the right to equality and non-discrimination in machine learning systems. Amnesty International and Access Now. Retrieved from https://www.accessnow.org/the-toronto-declaration
United Nations Human Rights Council. (2021). The right to privacy in the digital age: Report of the United Nations High Commissioner for Human Rights. Geneva: United Nations Publications.