Lawyer says Google shut down his Gmail, Voice and Photos after NotebookLM upload

His case highlights a broader issue as U.S.-based AI tools block analysis of sensitive public records, including documents from the Epstein files.

Imagine losing your email address, phone number, photos, contacts and more after using an AI tool for work.

That’s what Brian Chase says happened after he uploaded text-only law enforcement reports to Google’s NotebookLM while working on a criminal case. NotebookLM is an AI research tool that summarizes and answers questions about files and links that users upload.

Chase is an adjunct professor at the University of Arizona law school and managing director of digital forensics and eDiscovery at ArcherHall.

In a Feb. 16 LinkedIn post, he wrote that he uploaded reports to NotebookLM and “within seconds” received a notification that he had violated Google’s terms of service. He said the reports referenced child sexual abuse material because the defendant was charged with possessing it, but that the upload included “no images or videos … only text.”

“Google stored all my photos, contacts, phone backups, Gmail account, and even my phone number,” he wrote. “I cannot access any of it today.” He added that his phone number was a Google Voice number and that other services tied to his Google account stopped working.

Chase said he uploaded the report on Saturday, Feb. 14, received a terms-of-service warning and deleted it the same day. He said that on Monday, he woke up signed out of Google services and saw an alert that his account was disabled.

“Although I submitted an appeal,” Chase wrote that day, “Google offers no way to contact them to provide additional information.”

Early Tuesday, Chase said he received an email stating the material violated Google’s terms of service and that if he agreed to them, he could download his account contents through Google Takeout. He said the email “never really said my account was restored.” Later Tuesday, he posted a comment on the LinkedIn post saying, “Google restored access to my account.”

Chase said he was doing routine legal work. “Nothing I uploaded was illegal. Nothing I did violated the attorney ethical rules. But Google flagged it anyway, and there is very little recourse once that happens.”

I emailed Google’s media team on Monday with questions about Chase’s post, whether NotebookLM activity can trigger an account-level enforcement action and why a text-only upload tied to lawful legal work would lead to an account-wide lockout. I followed up multiple times through late Tuesday. Google did not respond.

In other cases involving sensitive material, the consequence is not an account lockout but an AI tool that won’t answer.

Epstein files

NotebookLM users report that the tool refuses to summarize or answer questions about public records from the Epstein files. In a Reddit thread, users say it returns a standard message, “NotebookLM can’t answer this question. Try rephrasing it, or ask a different question,” when asked to summarize documents or extract basic information, including questions about associates.

In my own testing, NotebookLM repeatedly refused to summarize or answer questions about Justice Department documents from the Epstein case, sometimes after generating a few lines in response. I sent Google a screenshot from that testing as part of my request for comment.

OpenAI ‘working on a fix’

On OpenAI’s ChatGPT, users, including me, noticed a similar pattern when analyzing Epstein case records. The AI tool begins generating an answer, then the text disappears and is replaced by a red warning that says, “This content may violate our usage policies.”

OpenAI’s communications team responded by email on background, saying, “This was an incorrect refusal, and we’re working on a fix to address it.” 

The company did not answer follow-up questions about what caused the behavior or when a fix would roll out.

The refusal behavior is not uniform across AI systems.

In my own testing, I gave the same Epstein case document to DeepSeek and Kimi, two AI tools developed by companies based in China. Both summarized it and answered questions without the refusals I encountered in ChatGPT and NotebookLM. Reddit users described similar experiences.

Last Updated on February 18, 2026 by Joe Douglass