Why Chatbots and Voice Assistants Are the Next Big Battleground in Privacy Law
Originally published on the Darrow blog.
The explosion of AI-driven tools like chatbots and voice assistants is transforming how businesses engage with consumers—but it's also triggering a new wave of privacy litigation. As companies race to implement conversational technologies, many are doing so without fully understanding the legal risks associated with capturing, processing, and storing user communications. Courts are beginning to take notice.
Over the past year, privacy lawsuits citing the California Invasion of Privacy Act (CIPA) and the federal Wiretap Act have surged, particularly suits focused on how companies collect data through pixel tracking, session replay, and now voice and chatbot interfaces. These tools, often deployed for customer service or lead generation, may record conversations without meaningful disclosure or user consent, which exposes companies to liability even if they never intend to misuse the data.
In this piece, I analyze three recent class actions that illustrate how plaintiffs are challenging the boundaries of lawful data collection in the context of automated communication tools:
In Valenzuela v. Nationwide, a court allowed claims under CIPA Section 631 to proceed after a user alleged that her chat with a website’s embedded AI assistant was intercepted by a third-party vendor. The ruling suggests that companies can be held liable not only for direct data collection, but also for enabling surveillance through third-party tools.
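To see how these third-party claims arise as a technical matter, here is a minimal sketch of how an embedded chat widget typically routes messages. The endpoint and function names are hypothetical, not Nationwide's or any real vendor's API:

```typescript
// Hypothetical sketch: how an embedded chat widget typically routes
// messages. VENDOR_ENDPOINT and sendToVendor are illustrative names,
// not any real vendor's API.

const VENDOR_ENDPOINT = "https://chat.example-vendor.com/v1/messages";

interface ChatMessage {
  sessionId: string; // identifies the visitor's conversation
  text: string;      // the raw content the user typed
  timestamp: number;
}

// From the visitor's perspective they are "chatting with the website,"
// but each message is POSTed to the vendor's own servers, where it may
// be stored, analyzed, or used for training, depending on the contract.
// That routing is the factual basis for third-party "interception"
// theories under CIPA Section 631.
async function sendToVendor(msg: ChatMessage): Promise<void> {
  await fetch(VENDOR_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(msg),
  });
}
```

Nothing in this flow is unlawful in itself; the exposure turns on whether the site disclosed the vendor's role and obtained consent before the first message left the page.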
In a high-profile settlement, Apple agreed to pay $95 million to resolve claims that its Siri assistant recorded users without consent. This case illustrates the risks associated with always-on or passively triggered devices that capture private conversations—even if that data is later anonymized or used for product training.
In Ambriz v. Google, a federal court took a broad view of CIPA liability by allowing claims to proceed based solely on Google’s capacity to access and exploit recorded voice data. The ruling shifts the focus from what a company actually did with the data to what its systems could do, expanding the scope of legal exposure for AI vendors and their enterprise customers.
Together, these cases mark a turning point in privacy enforcement. Courts are becoming increasingly attuned to how AI-mediated communication blurs the line between human interaction and digital surveillance. For plaintiff-side litigators, this means new opportunities to challenge technologies that fail to clearly inform users when they are being recorded—and by whom.
Consent, disclosure, and third-party involvement are quickly becoming central issues in privacy law, especially as conversational interfaces become more prevalent. Companies deploying these tools would be wise to re-evaluate their consent flows, vendor relationships, and privacy policies—before the lawsuits come knocking.
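On the consent-flow point, here is a minimal sketch of what "consent before capture" can look like in a chat context, assuming hypothetical getStoredConsent, recordConsent, showDisclosureBanner, and initChatWidget helpers. It illustrates the pattern, not any particular vendor's API, and is not a compliance guarantee:

```typescript
// Hypothetical sketch of a consent-gated chat flow. All helper names
// are illustrative, not a specific library's API.

interface ConsentRecord {
  granted: boolean;
  vendorDisclosed: boolean; // the user was told a third party receives the chat
  timestamp: number;
}

function getStoredConsent(): ConsentRecord | null {
  const raw = localStorage.getItem("chat-consent");
  return raw ? (JSON.parse(raw) as ConsentRecord) : null;
}

function recordConsent(): void {
  const record: ConsentRecord = {
    granted: true,
    vendorDisclosed: true, // the disclosure names the vendor explicitly
    timestamp: Date.now(),
  };
  localStorage.setItem("chat-consent", JSON.stringify(record));
}

// The widget script (and thus any transmission to the vendor) never
// loads until the user affirmatively agrees after being told who will
// receive the transcript.
function maybeStartChat(
  showDisclosureBanner: (onAgree: () => void) => void,
  initChatWidget: () => void,
): void {
  const consent = getStoredConsent();
  if (consent?.granted && consent.vendorDisclosed) {
    initChatWidget();
  } else {
    showDisclosureBanner(() => {
      recordConsent();
      initChatWidget();
    });
  }
}
```

The design choice that matters here is ordering: the widget, and with it any transmission to the vendor, loads only after an informed, affirmative opt-in, which is precisely the disclosure gap the cases above turn on.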
To dive deeper into each case and its implications, read the full article on Darrow’s blog → Chatbots, Voice Assistants, and the Escalation of Privacy Enforcement