Meta is facing significant backlash over the privacy risks of its new wearable technology. Regulatory probes are underway in both the UK and the US, and concerns about AI data security and privacy are mounting. Recent reports indicate that human reviewers examine recorded user footage daily, with contractors in Kenya watching clips that capture intimate private moments.

Users might inadvertently expose bedroom activities or banking details. The company maintains that people consent to these practices through its terms of use, but few users actually read those lengthy agreements. As a result, many have no idea that strangers may be watching their daily lives.
The Human Element Behind Machine Learning Algorithms
Tech giants rely on human workers to label data, categorizing visual information so AI systems can be trained properly. This reliance, however, creates serious data privacy problems. Lawsuits allege that the company violates privacy laws and misleads consumers. UK authorities recently launched an official privacy probe, and US courts are now reviewing false advertising claims.
Meta insists that media stays on the user's device until it is shared, but once uploaded, it may be reviewed by human eyes. Users need to understand exactly what they are giving away, and these corporations owe the public far greater transparency.
Trust Issues Piling Up For The Tech Giant
This situation is another public relations crisis for a company with an already poor track record on user data. The brand is simultaneously fighting battles over teen safety, and adding smart glasses controversies only compounds the damage. Public trust erodes each time an incident like this occurs. Consumers love innovative gadgets.
Still, they value their personal boundaries and private data. Wearable tech companies must respect privacy rules, or they will face serious legal consequences. Nobody wants strangers watching their most sensitive private moments.