
AI Surveillance: New Study Exposes Hidden Risks to Your Privacy

Scientists have created a model to evaluate AI identification risks, helping to protect privacy. The tool helps balance the benefits of AI with data protection. Credit: scitechdaily.com

A new mathematical model improves the evaluation of AI identification risks, offering a scalable solution for balancing the benefits of the technology with privacy protection.

AI tools are increasingly used to track and monitor people both online and in person, but their very effectiveness carries significant privacy risks. To address this, computer scientists from the Oxford Internet Institute, Imperial College London, and UCLouvain have developed a new mathematical model designed to help people better understand the dangers of AI and support regulators in safeguarding privacy. Their findings were published in Nature Communications.

This model is the first to offer a solid scientific framework for evaluating identification methods, particularly when handling large-scale data. It can assess the accuracy of techniques like advertising codes and invisible trackers in identifying online users based on minimal information—such as time zones or browser settings—a process known as “browser fingerprinting.”
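
As a rough illustration of how little information a browser fingerprint needs (a toy sketch, not one of the tracking scripts the study examined), the Python snippet below hashes a few hypothetical browser attributes into a single identifier:

    import hashlib

    def browser_fingerprint(attributes: dict) -> str:
        # Canonicalise the attribute set so the same inputs always
        # produce the same hash, regardless of insertion order.
        canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

    # Hypothetical visitor: each signal is low-entropy on its own,
    # but the combination can be close to unique.
    visitor = {
        "timezone": "Europe/London",
        "language": "en-GB",
        "screen": "2560x1440",
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    }
    print(browser_fingerprint(visitor))

Each attribute alone reveals little, but the combination can be nearly unique, which is what makes fingerprinting both effective as a tracking technique and difficult to assess without tools like the one the researchers propose.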

Lead author Dr. Luc Rocher, Senior Research Fellow, Oxford Internet Institute, part of the University of Oxford, said: “We see our method as a new approach to help assess the risk of re-identification in data release, but also to evaluate modern identification techniques in critical, high-risk environments. In places like hospitals, humanitarian aid delivery, or border control, the stakes are incredibly high, and the need for accurate, reliable identification is paramount.”

Leveraging Bayesian Statistics for Improved Accuracy

The method draws on the field of Bayesian statistics to learn how identifiable individuals are at a small scale, and to extrapolate the accuracy of identification to larger populations up to ten times better than previous heuristics and rules of thumb. This gives the method unique power in assessing how different data identification techniques will perform at scale, across different applications and behavioral settings. It could help explain why some AI identification techniques perform with high accuracy in small case studies but then misidentify people in real-world conditions.
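
The paper's actual scaling law is not reproduced here, but the general idea of measuring identifiability on small subsamples and extrapolating it to larger populations can be sketched as follows; the data, the logit-linear model form, and the resulting numbers are assumptions for illustration only:

    import numpy as np

    # Hypothetical pilot study: identification accuracy measured on
    # small subsamples of increasing size (illustrative numbers, not
    # data from the paper).
    sizes = np.array([100, 200, 500, 1000])
    accuracy = np.array([0.999, 0.995, 0.99, 0.98])

    # Assume logit(accuracy) falls linearly in log(population size) --
    # one simple functional form, not the authors' scaling law.
    logit = np.log(accuracy / (1 - accuracy))
    slope, intercept = np.polyfit(np.log(sizes), logit, 1)

    def predicted_accuracy(n: float) -> float:
        # Extrapolate the fitted curve to a population of size n.
        z = slope * np.log(n) + intercept
        return 1 / (1 + np.exp(-z))

    for n in (10_000, 1_000_000):
        print(f"population {n:>9,}: predicted accuracy {predicted_accuracy(n):.1%}")

Even under this crude model, accuracy that looks near-perfect on a few hundred people collapses when projected to millions, echoing the gap between small case studies and real-world deployment that the researchers set out to quantify.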

The findings are highly timely, given the challenges posed to anonymity and privacy caused by the rapid rise of AI-based identification techniques. For instance, AI tools are being trialed to automatically identify humans from their voice in online banking, their eyes in humanitarian aid delivery, or their face in law enforcement.

According to the researchers, the new method could help organizations strike a better balance between the benefits of AI technologies and the need to protect people's personal information, making daily interactions with technology safer and more secure. Their testing method allows potential weaknesses and areas for improvement to be identified before full-scale implementation, which is essential for maintaining safety and accuracy.

A Crucial Tool for Data Protection

Co-author Associate Professor Yves-Alexandre de Montjoye (Data Science Institute, Imperial College London) said: “Our new scaling law provides, for the first time, a principled mathematical model to evaluate how identification techniques will perform at scale. Understanding the scalability of identification is essential to evaluate the risks posed by these re-identification techniques, including to ensure compliance with modern data protection legislation worldwide.”

Dr. Luc Rocher concluded: “We believe that this work forms a crucial step towards the development of principled methods to evaluate the risks posed by ever more advanced AI techniques and the nature of identifiability in human traces online. We expect that this work will be of great help to researchers, data protection officers, ethics committees, and other practitioners aiming to find a balance between sharing data for research and protecting the privacy of patients, participants, and citizens.”

Reference: “A scaling law to model the effectiveness of identification techniques” by Luc Rocher, Julien M. Hendrickx and Yves-Alexandre de Montjoye, 9 January 2025, Nature Communications.
DOI: 10.1038/s41467-024-55296-6

The work was supported by a grant awarded to Luc Rocher by the Royal Society (Research Grant RG\R2\232035), the John Fell OUP Research Fund, the UKRI Future Leaders Fellowship [grant MR/Y015711/1], and by the FRS-FNRS. Yves-Alexandre de Montjoye acknowledges funding from the Information Commissioner's Office.
