We all do it – research online – when we need answers, information, or to learn about something. And now, AI systems like ChatGPT, Claude, Gemini, Perplexity, and Copilot are becoming increasingly popular and helpful tools in everyday life and business.
Research shows that AI usage among the working-age population in the US reached 28.3% in the second half of 2025. Globally, roughly one in six people use AI to work, learn, or find solutions and guidance for nearly any problem.
Countless industries and businesses have been impacted by the technology, and the legal industry is no exception. Law firms across the country have integrated AI systems into their business models, and there are significant advantages to doing so: these tools have helped lawyers increase productivity by handling time-consuming tasks like reviewing and summarizing data, performing basic research, and drafting agreements.
While there are clear benefits, there are also clear risks; AI isn’t perfect, and cases of made-up legal precedents and errors indicate that law firms must use these systems carefully and set parameters around their use.
But some of the greatest risks of AI use for legal purposes affect the very people who need lawyers or are involved in upcoming or ongoing litigation. And what’s been seen so far suggests that the dangers of using AI for legal advice are immense.
Most recently, a judge in New York ruled that a defendant’s AI files can be used by the FBI in a fraud case. These include his query history and the documents and arguments prepared by the chatbot Claude.
Judge Rules Chatbot Files and Queries Admissible
The FBI had seized the electronics of Bradley Heppner, chairman of GWG Holdings, Inc., upon his arrest for alleged fraud. Heppner’s laptop contained documents prepared by Claude, which he had shared with his attorneys prior to his arrest. What followed was a motion from federal prosecutors arguing that they should be able to use the AI files and queries because they aren’t protected under attorney-client privilege.
The judge ruled in favor of the FBI, and now prosecutors can question Heppner in court about his specific interactions with the chatbot and any files it created. What’s possibly even worse for Heppner and his legal team is that the FBI now has access to legal strategies they may have planned to use.
This ruling should make anyone involved in a legal situation hesitant to use AI, whether as a plaintiff or a defendant. But the risks extend far beyond what happened in New York.
What Are the Risks of Getting Legal Advice from AI?
There are several dangers of getting legal advice from AI. That’s not to say it shouldn’t be used at all if you have questions about the law. Rather, anything you take away from chatbot outputs needs to be approached with caution.
One of the reasons for this is that even the creators of artificial intelligence programs don’t know exactly what they’re capable of. They do, however, know how training data can affect outputs – and the problems these systems may create.
So, if you’re using Claude, Gemini, ChatGPT, or a similar system for legal advice, keep the following in mind.
AI Can ‘Hallucinate’
One of the most dangerous issues with AI chatbots and other programs is that they can make up information. These programs aren’t perfect, and there have been recorded cases of AI hallucinations – outputs that are false, inaccurate, or fabricated.
The reasons for this relate mostly to how these systems are trained and how they function: they generate text by predicting what’s statistically plausible, not by verifying facts. The key takeaway is that AI can invent a wide range of outputs, including legal cases that never occurred and incorrect interpretations of laws. Knowing these tools aren’t always right and can fabricate outputs is crucial if you use AI for legal advice.
AI Lacks Attorney Insight
Attorneys know more than just the laws related to their practice areas. Their skills and experience practicing law give them deep insight into factors that can affect cases, and AI simply doesn’t have this capability.
Lawyers anticipate and assess potential issues, then provide legal advice and guidance – and even build legal arguments – with those factors in mind. What AI claims is your best legal option may, in fact, be the wrong one because it lacks the insight a lawyer has. The same goes for negotiations.
Negotiating is an integral part of law. The experience of sitting at the negotiating table, dealing with real people and their attorneys, cannot be learned or factored into legal advice from AI.
Information from AI Can Be Outdated
Because of the way these systems learn and function, the information they output can be outdated or irrelevant, yet it seems factual and credible. This is a serious risk of using AI for legal advice. Laws change quickly and frequently, and these systems don’t always access – or even have access to – the most current information.
In the US, there are city, state, and federal laws, and knowing what applies to a civil or criminal lawsuit is crucial. If you act on an outdated or irrelevant output, it could affect your case. At worst, taking the wrong step could destroy your chance at justice and/or compensation – a high cost for AI-generated legal advice.
Legal Nuances Don’t Exist in AI
There’s a reason becoming a lawyer in the US takes so many years, and a reason attorneys have practice areas – the specific areas of law they specialize in. The law is complicated, plain and simple, and legal nuance is not something AI can currently develop or interpret from training data.
Context and legal complexity directly affect all types of situations; how legal language is interpreted can change from one case to another, and AI simply can’t account for this. The same goes for emotional implications and motives, other factors lawyers weigh in legal disputes. And yet, such nuances are often the cornerstone of success in litigation and negotiations.
Attorney-Client Privilege Does Not Apply to AI
Whether you’re a plaintiff or defendant, or expect to become one, do not assume that the questions you ask AI, or the commands you give it, are private. As seen in the FBI case against Heppner, what you use these programs for is not protected under attorney-client privilege.
The terms of service for chatbots cover privacy, and these terms are clear: the information gathered through prompts may be disclosed to third parties. If you choose to use one for legal purposes, you’re essentially waiving your right to privacy.
Consult a Lawyer Before You Use AI for Legal Advice
There are many benefits for businesses that use chatbots to help streamline certain tasks and improve productivity. But while AI may shape the future of law firms, using it when you’re the client – or may soon be facing legal trouble – is a completely different matter, and one that comes with several risks.
Given the dangers of using AI for legal advice, the safest way to learn your legal options is to reach out to a lawyer first. Many attorneys offer free consultations and can provide guidance on what you should or shouldn’t do, and what the next step is.