A recent demonstration involving Meta’s smart glasses has sparked serious privacy concerns after two Harvard students showed how the technology can be used to collect sensitive personal information about anyone in view. Using a system they call I-XRAY, built around the glasses, AnhPhu Nguyen and Caine Ardayfio combined facial recognition with artificial intelligence to identify individuals and retrieve private details about them from the internet.
Powered by a large language model (LLM), the glasses can scour the web in real time, surfacing information such as home addresses, phone numbers, and even Social Security numbers. The LLM allows the glasses to quickly cross-reference captured images with publicly available data, piecing together a profile of an individual without their consent.
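The general shape of such a pipeline can be illustrated with a minimal sketch. This is not the students' actual code: the data sources below are stubbed dictionaries, and all names, IDs, and details are invented. The point is only to show the cross-referencing step, where a face match is joined against publicly listed records to assemble a profile.

```python
from dataclasses import dataclass, field

# Stub "reverse face search" index: maps a face ID to a probable name.
# In a real system this would be a facial-recognition service.
FACE_INDEX = {"face_001": "Jane Doe"}

# Stub "people-search database": maps a name to publicly listed details.
PUBLIC_RECORDS = {
    "Jane Doe": {"city": "Cambridge, MA", "phone": "555-0100"},
}

@dataclass
class Profile:
    name: str
    details: dict = field(default_factory=dict)

def identify(face_id: str):
    """Look up a face in the (stubbed) reverse-search index."""
    return FACE_INDEX.get(face_id)

def build_profile(face_id: str):
    """Cross-reference an identified name against public records."""
    name = identify(face_id)
    if name is None:
        return None  # unknown face: nothing to cross-reference
    return Profile(name=name, details=PUBLIC_RECORDS.get(name, {}))

profile = build_profile("face_001")
print(profile.name, profile.details)  # → Jane Doe {'city': 'Cambridge, MA', 'phone': '555-0100'}
```

What makes the real-world version alarming is that each stubbed dictionary here corresponds to a service that actually exists at scale, so the join itself is the only engineering required.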
This type of technology raises serious privacy questions, as it enables users to access sensitive data simply by looking at someone. Critics argue that this could lead to a host of issues, including identity theft and harassment, as the glasses make it easier to expose private information.
Nguyen and Ardayfio recommend that people protect themselves by removing their personal details from people-search databases and opting out of reverse image search engines. However, as AI becomes embedded in everyday technology, these protections may not be enough.
Many are calling for stricter privacy laws and regulations to address the risks posed by AI-driven tools like these glasses. The potential for misuse is high, and without proper safeguards, individuals’ private data could be exposed on an unprecedented scale.
As the technology continues to advance, concerns about privacy are likely to grow, prompting calls for greater accountability and oversight in the development of AI-driven tools.