Google Pays $68M to Settle Claims Its Voice Assistant Spied on Users
Class-action settlement addresses allegations of unauthorized recording and data sharing for advertising purposes
Google has agreed to pay $68 million to settle claims that its voice assistant unlawfully recorded users and, among other things, used those recordings to serve them advertisements, according to Reuters.
The Allegations
Google did not admit wrongdoing in the settlement of the class-action case, which accused the firm of “unlawful and intentional interception and recording of individuals’ confidential communications without their consent and subsequent unauthorized disclosure of those communications to third parties.”
The suit further claimed that “information gleaned from these recordings was wrongly transmitted to third parties for targeted advertising and for other purposes.”
The “False Accepts” Problem
The case centered on what the lawsuit termed “false accepts”—instances in which Google Assistant allegedly activated and recorded users’ communications even though they had not intentionally prompted it with a wake word.
This technical issue raises fundamental questions about how voice assistants determine when they should be listening and recording. In theory, assistants like Google Assistant should activate only when they hear their designated wake phrase (such as “Hey Google” or “OK Google”). However, the lawsuit alleged that the system frequently misidentified other sounds or words as wake commands, leading to unauthorized recordings of private conversations.
False accepts occur when a voice assistant mistakenly believes it has heard its wake word and begins recording, even though the user did not intentionally activate it. This can happen when:
• Similar-sounding words or phrases trigger activation
• Background noise is misinterpreted as the wake command
• The device’s sensitivity settings are too high
• Multiple devices in proximity react to ambient sounds
The lawsuit alleged these “mistakes” resulted in systematic recording of private conversations that users never intended to share with Google.
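The failure mode described above can be sketched in a few lines of Python. This is purely illustrative: real wake-word detectors score audio with an acoustic model rather than text similarity, and the phrases and threshold values here are hypothetical stand-ins, not anything from Google’s actual system.

```python
from difflib import SequenceMatcher

WAKE_PHRASE = "hey google"

def wake_score(heard: str) -> float:
    """Toy stand-in for an acoustic wake-word model: returns a
    0.0-1.0 similarity score between what was heard and the wake phrase."""
    return SequenceMatcher(None, heard.lower(), WAKE_PHRASE).ratio()

def should_activate(heard: str, threshold: float) -> bool:
    """The device starts recording whenever the score clears the threshold."""
    return wake_score(heard) >= threshold

# A strict threshold fires only on the real wake phrase...
print(should_activate("hey google", threshold=0.95))  # True
print(should_activate("hey poodle", threshold=0.95))  # False

# ...while an overly sensitive threshold produces a "false accept":
# a similar-sounding phrase triggers recording the user never intended.
print(should_activate("hey poodle", threshold=0.6))   # True
```

The sketch shows why “sensitivity settings are too high” appears in the list above: lowering the activation threshold makes the assistant more responsive to its real wake word, but also widens the set of unrelated sounds that can trip it.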
TechCrunch reached out to Google for comment but did not receive a response by publication time.
A Growing Pattern of Suspicion
Americans have long suspected that their devices inappropriately spy on them. Those suspicions have increasingly led to claims of legal wrongdoing and multimillion-dollar settlements.
In a similar case, Apple agreed to pay $95 million to settle claims that its voice assistant, Siri, had recorded users’ conversations without a prompt. The parallels between the Apple and Google cases are striking—both involved allegations of unauthorized recording through false activation, and both resulted in settlements without admission of wrongdoing.
Google’s Privacy Track Record
Google, like other tech giants, has faced other privacy-related litigation in recent years. The $68 million settlement is just the latest in a series of legal challenges to the company’s data practices.
What the Settlement Means
While $68 million may seem substantial, it represents a relatively small fraction of Google’s massive revenue. For context, Google’s parent company Alphabet reported revenues of over $300 billion in 2024, making this settlement equivalent to less than a day’s worth of revenue for the tech giant.
The settlement also follows a familiar pattern in tech litigation: companies agree to pay substantial sums to make lawsuits go away, but never admit they did anything wrong. This approach allows them to avoid the potential for much larger judgments at trial while maintaining they acted lawfully.
The Broader Privacy Implications
This settlement raises important questions about the privacy of voice-activated devices more broadly:
Consent and Control: If devices can activate and record without intentional user prompting, how much control do users really have over their own privacy? The “false accepts” problem suggests that merely having a voice assistant in your home creates ongoing privacy risks that users may not fully understand or accept.
Data Usage: The allegations that recorded data was shared with third parties for advertising purposes highlight how voice data can be monetized. Even if users accept that their intentional interactions with voice assistants might inform advertising, unintentional recordings represent a different category of privacy violation entirely.
Transparency: Google, Apple, and other tech companies have long promoted their voice assistants as privacy-respecting, with various safeguards and user controls. However, repeated lawsuits and settlements suggest a gap between marketing promises and actual practices.
While these settlements continue, users concerned about voice assistant privacy can take several steps:
• Review and adjust sensitivity settings on voice-activated devices
• Use mute functions when having sensitive conversations
• Regularly review and delete voice recording history in device settings
• Consider disabling voice assistants entirely for maximum privacy
• Place voice-enabled devices in common areas rather than bedrooms or private spaces
• Read privacy policies to understand what data is collected and how it’s used
Industry-Wide Implications
The Google settlement, following Apple’s earlier Siri settlement, suggests that voice assistant privacy violations may be a systemic industry problem rather than isolated incidents at specific companies.
Amazon’s Alexa, Microsoft’s Cortana, and other voice assistants use similar technologies that could potentially face the same “false accepts” challenges. While these companies have not faced comparable lawsuits yet, the Google and Apple settlements may embolden plaintiffs’ attorneys to pursue similar claims against other tech giants.
From a regulatory perspective, these settlements highlight the limitations of current privacy laws. The fact that companies can settle for relatively modest sums without admitting wrongdoing or making binding commitments to change their practices suggests that stronger regulatory frameworks may be needed to meaningfully protect consumer privacy.
Looking Ahead
As voice assistants become increasingly ubiquitous—embedded not just in smartphones and smart speakers, but in cars, appliances, and even clothing—the privacy stakes only grow higher. The Google settlement serves as a reminder that the convenience of voice-activated technology comes with significant privacy trade-offs that many users may not fully appreciate.
Whether this settlement will lead to meaningful changes in how Google Assistant operates, or simply become another line item in the company’s legal budget, remains to be seen. What is clear is that the tension between voice technology’s convenience and users’ privacy rights will continue to generate litigation and regulatory scrutiny for years to come.
For consumers, the lesson is straightforward: if you invite a voice assistant into your home, you may be granting access to far more of your private conversations than you realize—and that access may extend not just to the device manufacturer, but to advertisers and other third parties with commercial interests in your data.
Update: This article will be updated if Google responds to requests for comment or if additional details about the settlement terms become available.