
Pioneering new mathematical model could help protect privacy and ensure safer use of AI


AI tools are increasingly being used to track and monitor us both online and in person, yet their effectiveness comes with significant risks. Computer scientists at the University of Oxford have led a study to develop a new mathematical model that could help people better understand the risks posed by AI and assist regulators in protecting people's privacy. The findings have been published today in Nature Communications.

For the first time, the method provides a robust scientific framework for evaluating identification techniques, especially when dealing with large-scale data. This could include, for instance, measuring how accurately advertising code and invisible trackers identify online users from small pieces of information such as time zone or browser settings (a technique called 'browser fingerprinting').
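To illustrate why even weak signals like time zone or language settings can be identifying, here is a toy sketch (the attribute names and the five-user population are invented for illustration, and this is not the study's actual model): it hashes a few browser attributes into a single fingerprint and measures what fraction of a population that fingerprint singles out uniquely.

```python
import hashlib
from collections import Counter

def fingerprint(attrs: dict) -> str:
    """Hash a stable serialization of browser attributes into one identifier.

    Real fingerprinting scripts combine dozens of signals (fonts, canvas
    rendering, installed plugins, ...); two are enough for this sketch.
    """
    serialized = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(serialized.encode()).hexdigest()[:16]

# Hypothetical population of five users described by only two weak signals.
population = [
    {"timezone": "UTC+0", "language": "en-GB"},
    {"timezone": "UTC+0", "language": "en-GB"},
    {"timezone": "UTC+0", "language": "fr-FR"},
    {"timezone": "UTC+1", "language": "de-DE"},
    {"timezone": "UTC+1", "language": "en-GB"},
]

counts = Counter(fingerprint(p) for p in population)
# A user is uniquely identifiable if their fingerprint occurs exactly once.
unique_fraction = sum(1 for c in counts.values() if c == 1) / len(population)
print(f"uniquely identifiable: {unique_fraction:.0%}")  # 3 of 5 users
```

Even with just two coarse attributes, three of the five users are singled out; adding more signals shrinks the "anonymity sets" further, which is the kind of re-identification risk the study's framework aims to quantify.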

Lead author Dr Luc Rocher, Senior Research Fellow at the Oxford Internet Institute, part of the University of Oxford, said: 'We see our method as a new approach to help assess the risk of re-identification in data release, but also to evaluate modern identification techniques in critical, high-risk environments. In places like hospitals, humanitarian aid delivery, or border control, the stakes are incredibly high, and the need for accurate, reliable identification is paramount.'
