Meta Smart Glasses and the Privacy Problem

I recently had my roofing company come by to look at a water leak. The owner came over, we chatted, and we went upstairs to the bathroom where the problem was. That’s when I realized he was wearing Meta glasses that were on and recording. I didn’t say anything in the moment because, honestly, I was caught off guard and didn’t know how to respond. I wish I had, because he should have told me before stepping into my home that he was recording and asked whether I was comfortable with that. It made me deeply uncomfortable afterward. My privacy and my home felt violated. I don’t want my personal life captured without my consent.

In retrospect, what I should have done, as soon as I realized he was wearing the smart glasses, was ask him to please stop recording and then educate him, kindly, on why he should ask permission before entering someone’s home with an active recording device. But the fact that I had to think through that response at all says something. The burden shouldn’t be on the person being recorded.

I’ve thought about the ethical use of smart glasses. In all honesty, is there really an ethical use for them? But, like most tech, I doubt smart glasses are going away. So it comes down to privacy, our right to it, and what we can do, especially when regulations are not protecting consumers.

There are serious ethical considerations with Meta glasses and smart glasses in general. The first, and most obvious, is the privacy concern, and it runs deeper than most people realize.

Image Credit: Cath Virginia / The Verge, Getty Images, Meta

What Meta Promised in 2021

When Meta and Ray-Ban launched the first generation of Ray-Ban Stories in 2021, Meta framed the product as one designed with privacy in mind from the start. The main safeguards were a hardwired capture LED that lights up during recording, a physical power switch for the cameras and microphone, and user controls in the Facebook View app for managing content. Photos and videos were encrypted on the glasses, and the glasses could only be paired with one account at a time, with data automatically deleted if someone tried to pair them to a new account.

Those baseline features still exist. But the product has changed substantially since then, and the gap between what was promised and what the technology now does is significant.

The Ethical Issues

1. A Light Nobody Sees Isn't Consent

The small LED light is the cornerstone of Meta’s argument that bystanders are notified when recording is happening. It hasn’t held up. Smart glasses are designed to resemble regular glasses, to the point where most reviewers note that friends didn’t notice cameras were embedded in them, and the small indicator light can be disabled with cheap hacks. My roofer is a perfect example. I’m reasonably tech-aware, and I still nearly missed the little LED light being on. Regulators in Ireland and Italy have questioned the effectiveness of the LED as a notice mechanism for bystanders since 2021. The design of the glasses undermines the consent architecture built around the light.

2. AI Features Shifted the Data Defaults, Without Much Notice

This is the most significant change from the original product. In April 2025, Meta revised the privacy policy for Ray-Ban Meta glasses, expanding data collection and notifying users that AI features would be switched on by default, including the “Meta AI with camera” function. Any time an AI feature is triggered, like the “Hey Meta” voice command, footage is fed to Meta. You can use the glasses without the Meta AI app, but since you can’t easily download footage to your phone without it, most users end up using the app anyway.

Voice recordings triggered by the wake word are stored in the cloud by default and can be kept for up to a year to help improve AI systems, with no option to opt out beyond manual deletion. The glasses that entered your home, your doctor’s waiting room, or your child’s school play are not just cameras. They are live data collection nodes connected to one of the world’s largest advertising companies.

3. Your Footage May Be Reviewed by Human Contractors

A joint investigation by two Swedish newspapers found that some videos captured through Meta glasses are sent to contractors in places like Nairobi, Kenya for data annotation, a process of labeling footage so machine learning algorithms can learn from it. Meta says data can only be used this way if users opt in, but the investigation raised questions about how consistently that distinction holds in practice.

4. Bystanders Have No Rights in the Current Framework

My roofer wasn’t recording with my consent. The people on the street near someone wearing Meta glasses haven’t consented either. People who unintentionally appear in the background of Meta smart glasses photos are at risk of having their images and data used for AI training, as Meta utilizes both automated systems and human reviewers to process this captured content. Meta’s reliance on user behavior to uphold privacy norms may not be sufficient to address complex questions around consent, surveillance, and data exploitation.

A number of women have shared experiences of being secretly filmed by people wearing smart glasses in public. One woman discovered that a video of her, recorded by a stranger wearing the glasses during a conversation on a walk, had been posted online and received nearly a million views. In October 2025, the University of San Francisco issued a warning after similar incidents were reported on campus. In February of this year, a Swedish report revealed that employees of a Meta subcontractor providing data annotation services had watched footage captured by Ray-Ban Meta smart glasses that showed sensitive user content.

5. Real-Time Facial Doxing Is Already Possible

This is the clearest proof of what the hardware enables in the wrong hands. In October 2024, Harvard students AnhPhu Nguyen and Caine Ardayfio built a tool called I-XRAY that combined the Ray-Ban Meta glasses, AI, and publicly available databases to reveal a stranger’s name, address, and other sensitive information in just over a minute by capturing an image of their face. They identified dozens of people, including fellow Harvard students, without those people ever knowing.

I-XRAY connected the glasses to PimEyes, a facial image search engine, and used a large language model to compile names, addresses, and phone numbers within seconds. The students demonstrated how this could be used to deceive people or facilitate stalking. They did not release the code, but the mechanism relies entirely on publicly available tools. Every piece of data I-XRAY pulled was publicly available, making this a potential open source intelligence nightmare at scale.

One of the students was direct about the implications: someone could find a stranger’s home address on the train and follow them home. That’s the current capability of hardware that is now in its third generation and has sold millions of units.

6. Facial Recognition: Officially Excluded, Practically Accessible

Meta chose not to include facial recognition in the first generation of Ray-Ban Meta glasses, citing ethical concerns, but recent reports suggest the company may revisit that decision for future models. I-XRAY demonstrated that the gap between “we don’t include it” and “it can’t be done with this hardware” is essentially zero. The ethical restraint is self-imposed and not structural.

Image Credit: AI generated image / Shutterstock

The U.S. Regulatory Gap

Here is where the concern becomes a policy failure. There is no federal law specifically governing the use of smart glasses in the United States. They fall under a patchwork of existing regulations covering recording consent, wiretapping, and state-level privacy laws, none of which were designed with always-on wearable AI in mind.

In practice, that means:

Recording consent laws vary by state. Some states require all parties to consent to being recorded (two-party consent states). Others require only one party. Violations of two-party consent laws carry criminal penalties, but the fact that many AI glasses lack obvious recording indicators compounds the risk, and AI-generated transcripts created without consent raise further legal questions that existing statutes never anticipated. New Jersey, where I live, is a one-party consent state, meaning that as a party to our conversation, my roofer was technically within the law to record it. That’s exactly the problem. The law wasn’t written with a stranger walking through your home wearing an AI-connected camera in mind, and it shows. There is no enforcement mechanism proactively protecting me, or anyone else in a similar situation.

Biometric data has almost no federal protection. Three states have dedicated biometric privacy statutes. Illinois’s Biometric Information Privacy Act is the most consequential, requiring informed written consent before collecting biometric identifiers, which include face geometry. Illinois residents have legal recourse, but most Americans do not.

Children are a particular exposure. Under the FTC’s 2025 updates to COPPA, minors’ biometric data became tightly restricted. But for a device to determine whether someone is underage, it has to scan and process their face first. That initial scan already counts as collecting data, which is prohibited. As one legal expert noted, you cannot comply with the law without first breaking it.

The most active legislative response is happening at the state level. In February 2026, California Senator Eloise Gómez Reyes introduced Senate Bill 1130, the Wearable Device Privacy Protection Act, which would make it a crime to secretly record people using wearable devices inside places of business. It would also target manufacturers, sellers, and distributors of technology that disables recording indicator lights. This is a meaningful step, but it is one state, focused on commercial settings. It doesn’t cover what happened in my home.

At the federal level, there is no comprehensive privacy law in the United States, full stop. The American Data Privacy and Protection Act has stalled repeatedly in Congress. There is no equivalent to the EU’s GDPR, and there is no federal body with clear jurisdiction over what Meta’s glasses collect from bystanders, in homes, in schools, in hospitals, or on the street.

The EU Is Doing More, But It’s Not Enough Either

For comparison, in May 2025, privacy advocacy group NOYB sent Meta a cease-and-desist letter alleging unlawful use of EU personal data for AI training, threatening collective redress actions under the EU Collective Redress Directive. The EU AI Act, which entered into force in 2024, already classifies real-time biometric identification in public spaces as high-risk, requiring risk management systems, data governance controls, human oversight, and transparency measures. Full enforcement of those high-risk obligations is currently scheduled for August 2, 2026, though the European Commission proposed in late 2025 to delay that deadline further while harmonized standards are still being developed. AI-powered wearable devices that perform real-time biometric categorization in publicly accessible spaces are prohibited outright under the EU AI Act.

Regulatory pressure abroad has produced real changes, including restricted AI feature rollouts and delayed product capabilities in EU markets. But European rules were never designed to protect you in your state, and no current U.S. framework fills that gap. In a one-party consent state like New Jersey, a contractor can walk through your front door wearing a live recording device, and under current law, that’s permitted.

The Costs Outweigh the Benefits

I think the costs far outweigh the benefits of these glasses. There are no meaningful protections for bystanders who are recorded without consent. The safeguards Meta designed in 2021 were built for a camera; the product is now a mobile AI node. The ethics haven’t kept pace with the capability, and in the United States, the law is even further behind.

So what can you do right now? If someone enters your home or a private space wearing smart glasses, you have every right to ask them to turn them off or remove them. That conversation is uncomfortable and it shouldn’t have to be. But until regulation catches up to the technology, the burden is falling on individuals, one awkward moment at a time.


This post was written by me, with editing support from AI tools, because even writers appreciate a sidekick. This post originally appeared on www.kristinakroot.me.


References

AI Act | Shaping Europe’s digital future. (2026, March 19). https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Becher, B. (2026, March 25). Smart Glasses Laws: Are They Legal in the U.S.? | Built In. Built In. Retrieved April 2, 2026, from https://builtin.com/articles/are-smart-glasses-legal

California bill targets secret recordings by smart glasses in workplaces. (2026, February 25). PPC Land. https://ppc.land/california-bill-targets-secret-recordings-by-smart-glasses-in-workplaces/

Growcoot, M. (2025, May 1). Meta Updates Smart Glasses Policy to Expand AI Data Collection. PetaPixel. https://petapixel.com/2025/05/01/meta-updates-smart-glasses-policy-to-expand-ai-data-collection/

Harding, S. (2026, March 5). Workers report watching Ray-Ban Meta-shot footage of people using the bathroom. Ars Technica. https://arstechnica.com/gadgets/2026/03/workers-report-watching-ray-ban-meta-shot-footage-of-people-using-the-bathroom/

Klosowski, T. (2026, March 10). Think Twice Before Buying or Using Meta’s Ray-Bans. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2026/03/think-twice-buying-or-using-metas-ray-bans

Law, R. (n.d.). Recording Law—Recording Laws by State & Country. Recording Law. Retrieved April 2, 2026, from https://www.recordinglaw.com/

Lazzarotti, J. J., & Lewis, J. (2025, December 18). Compliance Concerns with Artificial Intelligence Smart Glasses. https://natlawreview.com/article/hidden-legal-minefield-compliance-concerns-ai-smart-glasses-part-2-two-party

Markovic, S. (2026, February 5). Smart glasses are back, privacy issues included. Help Net Security. https://www.helpnetsecurity.com/2026/02/05/ai-smart-glasses-privacy-risk/

McArthur, V. (Vicky). (2024, September 23). Meta’s AI-powered smart glasses raise concerns about privacy and user data. The Conversation. https://doi.org/10.64628/AAM.rxn9s4aks

Merritt, B. (2024, October 4). Students created a way to access personal info via AI and smart glasses. Morning Brew. Retrieved April 2, 2026, from https://www.morningbrew.com/stories/students-access-personal-info-via-smart-glasses

Nassar-Smith, A. (2024, October 3). Harvard Students Connect Meta Ray-Bans to PimEyes Face Search, Provoking Privacy Concerns. ID Tech. https://idtechwire.com/harvard-students-connect-meta-ray-bans-to-pimeyes-face-search-provoking-privacy-concerns/

Navigating the AI Act | Shaping Europe’s digital future. (n.d.). Retrieved April 2, 2026, from https://digital-strategy.ec.europa.eu/en/faqs/navigating-ai-act

Ray-Ban and Facebook introduce Ray-Ban Stories, first-generation smart glasses. (2021, September 9). Tech at Meta. https://tech.facebook.com/reality-labs/2021/9/ray-ban-and-facebook-introduce-ray-ban-stories-first-generation-smart-glasses/

Ray-Ban Meta. (2026). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Ray-Ban_Meta&oldid=1342589237

Wanjala, A. (2026, March 15). New Investigation Reveals Smart Glasses Are Recording Your Most Private Moments. BGR. https://www.bgr.com/2121168/smart-glasses-record-private-moments/
