What Does It Mean to Regulate Legal AI?

Flow chart that shows a series of questions explored in this piece, about all the things to consider when discussing "what does it mean to regulate AI?"

My colleague Damien Riehl and I, in collaboration with the Minnesota State Bar Association’s AI Committee, recently broke down some ideas about what we should consider when we talk about “regulating AI in legal.” This blog post shares those illustrations and text, which are also available on the Minnesota State Bar Association’s Artificial Intelligence Committee page.

Legal advice is often described as “applying facts to law,” and as something only licensed lawyers can give. But anyone can provide “legal information” like cases, statutes, and regulations, which also include many facts.

Illustration that shows the distinction between legal advice and legal information (the line is fuzzy).


So can we know with certainty when someone or something is providing “legal advice” rather than “legal information”? Can an LLM apply facts to law and provide “legal advice”? Courts and regulators haven’t given us clear answers here. Are clear answers even possible?

Should we regulate “legal advice” that comes from someone or something (an LLM) that is not a licensed lawyer? If we deem it “legal information,” will any resultant harm be addressed by common law claims (e.g., negligence, product liability, fraud)?

Illustration showing a person believing an AI large language model, and an arrow asking, “Regulate this?”

If we do regulate “legal advice” or “legal information” from someone or something that is not a licensed lawyer, who would enforce it? County prosecutors? State attorneys general? A state agency? The courts? And will it be enforced against everyone, including the giants like OpenAI, Google, Anthropic, etc.?

Illustration that has the question "Who would enforce it?" and shows county prosecutor and other regulators pointing at a large language model illustration.

On the other hand, how do we regulate the use of AI by licensed attorneys? Do the rules of ethics and professional responsibility serve this purpose already?

Illustration of a licensed lawyer + AI + bad legal advice, then rules of ethics? It’s an illustrated equation.

Do we regulate the AI tool itself, or the entity building or operating the tool?

Illustration of a computer (representing an AI/large language model) versus the entity (a person on their computer, messing with the AI tool). The text says, “Which one?”

Whether we regulate attorneys who use AI, or anyone (or anything) other than attorneys that provides legal advice using AI, what method would we use? Unauthorized practice of law (UPL) statutes? Waivers allowing certain entities to provide legal advice when they would otherwise risk prosecution? A regulatory sandbox where ideas can be explored? Prosecutors’ “de-prioritization” of UPL claims?

Illustration showing the text “What method?” and a number of different regulatory methods (de-prioritized UPL claims, sandboxes, waivers, etc.), drawn in a cartoonish style.

Do we regulate any of these up front (beforehand), or wait for a harm (if any) to appear?

Illustration that says, “Regulate when, exactly?” and points to a spot before and a spot after the words “HARM, if any.”

Should lawyers who use AI be regulated in the same way as entities or LLMs that provide “legal advice” or “legal information” with AI?

So when we talk about “regulating AI use in legal,” which of these do we tackle?

Or do we tackle them all? Or none?

If we do nothing, where will people who can’t afford lawyers get legal advice? Will it be reliable? What if the LLM is more reliable than the average lawyer? What if the tool does them harm? And if they get no help at all, from anyone or any AI, what’s the resulting harm?

So when someone says “Let’s regulate legal AI,” it’s crucial to identify precisely what we mean.

