In a bid to improve its AI-powered security cameras, Anker’s Eufy asked customers to submit and even simulate footage of thefts and break-ins. The program raised fresh privacy and ethics concerns.
By TechTrib Newsroom
Anker’s smart home brand, Eufy, quietly launched an unusual campaign to boost the capabilities of its AI-powered security cameras, and it involved turning users into data providers. Between December 2024 and February 2025, Eufy offered customers $2 per submitted video of package thefts, car break-ins, and other suspicious activity, even allowing users to stage incidents for training purposes.
The move has sparked renewed concerns about privacy, transparency, and ethics in AI development, particularly as tech firms increasingly rely on user-contributed data to train their algorithms.
A $2 Incentive for Crime Footage, Even the Fake Kind
The program encouraged Eufy camera owners to share real security footage of crimes captured outside their homes. Notably, users were also invited to simulate those events, for example by pretending to steal a package from their own porch, as a way to “help improve AI accuracy.”
Participants were required to upload clips through a Google Form and provide a PayPal address to receive payment. More than 120 users reportedly took part, though the company hasn’t disclosed how many videos it collected in total.
The campaign ran for just over two months, but the company continues to promote similar incentives through its Eufy Security app. Instead of cash, users now earn points, gift cards, or free devices for submitting their footage. One user reportedly contributed more than 200,000 videos, according to public app leaderboards.
What the Data Is Used For
According to Eufy, the videos are used exclusively to train AI algorithms to better detect suspicious behavior, including identifying when someone is loitering, stealing a package, or tampering with a vehicle. The goal is to make Eufy cameras more proactive and accurate in recognizing potential threats.
The company claims that submitted videos will not be shared with third parties and are only intended for internal machine learning purposes.
But the idea of encouraging users to reenact crimes, even for the sake of training AI, has opened a new conversation about the ethics of simulated data, the quality of those datasets, and the transparency of how consumer-generated content is handled.
Privacy Concerns Resurface
While the campaign was voluntary, critics argue that Eufy has not earned the benefit of the doubt when it comes to responsible data practices.
In 2022 and 2023, Eufy faced backlash for misleading users about how their video data was stored and transmitted. The company had publicly stated its systems were fully encrypted and that footage was never uploaded to the cloud. But researchers later discovered that some streams could be accessed unencrypted via web portals, without user passwords.
Following the disclosure, Eufy promised to improve its privacy documentation and strengthen encryption, but trust was shaken, and that history remains a sticking point in any new data initiative involving the brand.
With this latest campaign, the key questions remain:
– How long are the videos stored?
– Are users allowed to delete their data, and is that deletion permanent?
– Is footage anonymized or stripped of identifying details?
– How is staged footage labeled or separated from authentic clips?
So far, Eufy has not provided detailed answers to most of these questions.
AI Training Risks: Real Data vs. Fake Crime
From a technical standpoint, offering users a cash reward for footage of rare events like thefts or break-ins runs into a difficult problem: real crime is unpredictable and seldom captured on a home security camera.
By inviting users to simulate those events, Eufy gained access to “ideal” examples for AI training, but experts warn that this could introduce bias or false patterns into machine learning models. If the AI is trained mostly on exaggerated or choreographed actions, it may struggle to recognize subtle or authentic behavior in real-world situations.
“Simulated crime footage might be visually clean, but it doesn’t reflect the messy, uncertain conditions of actual incidents,” said a security AI researcher contacted by TechTrib. “There’s a risk of overfitting your model to unrealistic data.”
This problem becomes even more pressing as smart home devices are used not just for recording, but for real-time alerts and decision-making, such as locking doors, triggering sirens, or alerting law enforcement.
The Bigger Picture: AI vs. Privacy in the Smart Home
Eufy’s video training campaign is a clear example of a broader trend: smart device manufacturers are increasingly looking to their user base as a source of training data. Whether it’s video footage, voice commands, or usage patterns, user-generated data is the fuel for next-gen AI systems.
But this approach comes with trade-offs.
While some users are willing to share data in exchange for small rewards or improved services, others may not fully understand what they’re giving up or how their data might be used in the future. The line between consent and exploitation becomes blurry when the product relies on a continuous stream of behavioral data.
And once a company has your video footage, the question is no longer just about AI. It’s about control.
Final Thoughts
On the surface, Eufy’s campaign might seem like a clever and relatively harmless way to improve its product. But it sits at the intersection of some of today’s biggest tech dilemmas: privacy, transparency, and trust.
Users should think carefully before opting into such programs, especially with companies that have a spotty data protection record. Brands like Eufy must do more to earn user trust, not just by improving their AI, but by being crystal clear about what they collect, how they use it, and who has access to it.
As AI becomes more deeply embedded in the devices we bring into our homes, the most important question may not be “how smart is it?” but “how safe is it?”