How Attorneys Can Think About Artificial Intelligence by Will Fletcher


Imagine the future of legal practice: A potential client calls about representation for injuries from an automobile accident. But instead of a human answering the phone, the person speaks with an “AI intake specialist,” a form of artificial intelligence (“AI”)[1] trained to ask probing questions about their injuries, background, and the crash. After the call, the same AI system orders relevant medical and billing records, as well as the police report, which it uses to evaluate the claim. A reviewing attorney accepts the case, prompting the AI to draft an initial demand letter with a settlement figure informed by its database of thousands of similar matters.  

Across the server aisle, dueling AI “agents”[2] negotiate a major contract between two large companies, agreeing on terms from each side’s digital playbook of preferred and fallback positions. Lawyers will only step in later to settle any major sticking points.[3]

If this future sounds far-fetched, consider that it is beginning to unfold now. For example, the legal tech startup EvenUp currently offers an AI-powered platform for helping plaintiff’s attorneys prepare cases and demand letters. The company says its AI was trained on hundreds of thousands of injury cases and millions of medical records, and that it has helped its customers claim over $1.5 billion in damages. In October 2024, EvenUp was valued at over $1 billion.[4]

“Generative AI”—the type of artificial intelligence that learns from data, creates new content, and communicates with users in their “natural language” instead of code—became a household term with ChatGPT’s public release in November 2022. The three years since have marked the beginning of a renaissance in how people work and interact with the world. For lawyers, this has included the realization that GenAI has the power to reshape much of legal practice.

Unsurprisingly, thinking about GenAI in the law can be dizzying. For starters, there is the hype: legal AI tools can promise near-magical results that they may struggle to deliver. Legal AI has become big business, with investor firms contributing approximately $2.2 billion to legal AI startups in 2024.[5] Consequently, attorney inboxes are often filled with solicitations for these new tools, at times with eye-catching prices. In a profession traditionally resistant to technological change, it is no wonder many practitioners would like this “AI revolution” to slow down or go away. That may be wishful thinking. The American Bar Association has already imagined the day when GenAI use could be part of a lawyer’s duty of competence.[6]

Even without a current requirement, though, lawyers should not wait to explore the AI tools relevant to their work.[7] Even seemingly minor uses can provide tremendous value to both attorneys and clients, particularly in small-practice and public interest settings that lack the resources of their big-law counterparts. For those overwhelmed or frustrated by our new “AI age,” this article offers a few thoughts for getting started.

Embrace GenAI, but Don’t Over-Rely

Finding the boundaries between what AI should do and what must remain the province of licensed, human lawyers is often central to the legal-AI discussion. So are many other questions of professionalism and ethics, such as obtaining informed client consent, billing practices, and candor to the tribunal, which are outside the scope of this article.[8]

Given how quickly the technology is advancing,[9] these boundaries can feel urgent, especially as “agentic” AI systems capable of making decisions and performing tasks with limited human involvement begin to appear in legal settings.[10] In its formal opinion on GenAI tools, the American Bar Association Standing Committee on Ethics and Professional Responsibility described GenAI tools as a “rapidly moving target—in the sense that their precise features and utility to law practice are quickly changing and will continue to change in ways that may be difficult or impossible to anticipate.” To navigate this shifting landscape, it can be helpful to reflect on what, exactly, we attorneys are hired to do.

Preamble 2 to the rules of professional conduct outlines the lawyer’s primary functions. We are foremost informed legal and practical advisors, zealous advocates, honest but adept negotiators, and critical evaluators.[11] Fulfilling these functions requires consistently exercising competence, promptness, and diligence.[12] Powering this work is the use of critical and strategic thinking, sound judgment, strong interpersonal skills, and the ability to nimbly plan and direct the events in our legal matters. These are the lawyer’s core “powers,” and they create the central value we bring to clients.

The problem is that GenAI can increasingly do these things, too. When considering any new GenAI use case, one of the first risks to evaluate is the potential for uncritical use or overreliance. Indeed, the hypotheticals at the start of this article illustrate a few moments where this line might be crossed.

“Lawyers, then, must think of AI as a tool to enhance the core attorney functions outlined in the rules of professional conduct rather than as a means to outsource them.”

Many state bar associations—including Idaho’s—have yet to issue formal, comprehensive guidance on GenAI use in legal practice, including avoiding overreliance. However, the ABA’s Standing Committee on Ethics and Professional Responsibility addressed the issue in Formal Opinion 512, released in July 2024. The opinion warns against overreliance on GenAI, stating:

While [GenAI] may be used as a springboard or foundation for legal work—for example, by generating an analysis on which a lawyer bases legal advice, or by generating a draft from which a lawyer produces a legal document—lawyers may not abdicate their responsibilities by relying solely on a [GenAI] tool to perform tasks that call for the exercise of professional judgment.[13]

The Committee further emphasized, “lawyers may not leave it to [GenAI] tools alone to offer legal advice to clients, negotiate clients’ claims, or perform other functions that require a lawyer’s personal judgment or participation.”[14] Lawyers, then, must think of AI as a tool to enhance the core attorney functions outlined in the rules of professional conduct rather than as a means to outsource them.

The State Bar of California has also addressed the issue of AI overreliance in clear terms. Its Standing Committee on Professional Responsibility and Conduct has stated: “Overreliance on AI tools is inconsistent with the active practice of law and application of trained judgment by the lawyer.” The committee continued: “A lawyer should take steps to avoid over-reliance on generative AI to such a degree that it hinders critical attorney analysis fostered by traditional [skills such as] research and writing.”[15]

Avoiding GenAI overreliance typically begins with independently verifying that the outputs it produces are correct. The level of verification needed, though, depends on the GenAI tool and the context. As an example, the ABA explained:

[I]f a lawyer relies on a [GenAI] tool to review and summarize numerous, lengthy contracts, the lawyer would not necessarily have to manually review the entire set of documents to verify the results if the lawyer had previously tested the accuracy of the tool on a smaller subset of documents by manually reviewing those documents, comparing them to the summaries produced by the tool, and finding the summaries accurate.

Moreover:

[a] lawyer’s use of a [GenAI] tool designed specifically for the practice of law or to perform a discrete legal task, such as generating ideas, may require less independent verification or review, particularly where a lawyer’s prior experience with the [GenAI] tool provides a reasonable basis for relying on its results.[16]

The now-frequent headlines about attorneys facing sanctions for citing nonexistent caselaw,[17] or the example of attorneys submitting a ChatGPT analysis to support the reasonableness of their heightened hourly rate in a motion for attorney’s fees,[18] offer clear examples of uncritical reliance on GenAI. Few would argue, though, that using GenAI to help present an argument in the best possible terms, automate routine or low-risk tasks to save time and client expense, or simulate a “hot bench” in preparation for arguments crosses any professional or ethical lines. But blindly relying on GenAI to tell you what the law is, or to make key legal decisions without human input, supplants rather than enhances the attorney’s core functions under Preamble 2.

AI Is Getting Better with Facts—but Still Check Your Cites

Understanding a bit about how GenAI works is also helpful for navigating the boundaries between proper and improper use. The ABA has said that lawyers don’t need to be experts to competently use GenAI for legal work. But they must have “a reasonable understanding of the capabilities and limitations of the specific [GenAI] technology that the lawyer might use.”[19]

On their own, large language models—the engines behind most GenAI tools—don’t come equipped with a database of facts from which they draw their answers. Instead, they generate responses by making predictions of each next word in a sequence based on patterns in data to which they’ve previously been exposed (i.e., their training).

“Public tools like ChatGPT can do a lot of things, but unless connected to a proven database of authorities, they remain a poor choice for researching case law.”

For example, if you asked an LLM to name the capital of France, it would not consult a definitive list of national capitals. Instead, it would generate a response based on statistical patterns learned during training. That’s why GenAI might produce a “hallucination” and give you an answer other than Paris. Additionally, the model’s training data may have been incomplete, biased, or incorrect. And over time, training data inherently becomes outdated. In a sense, then, all outputs can be thought of as hallucinations; many just happen to also be correct. This is one reason publicly available tools like ChatGPT have been better suited for creative tasks, where there is no single right answer. But that’s changing.
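The prediction process described above can be sketched in a few lines of code. The vocabulary and probabilities below are invented purely for illustration; a real model learns distributions over tens of thousands of tokens from its training data.

```python
import random

# Toy "language model": for a given prompt, a learned probability
# distribution over possible next words (numbers invented for illustration).
next_word_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "London": 0.03},
}

def sample_next_word(prompt: str) -> str:
    """Pick the next word by sampling from the learned distribution,
    not by looking it up in a database of facts."""
    dist = next_word_probs[prompt]
    words = list(dist)
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

# Usually "Paris" -- but occasionally a confident-sounding wrong answer.
print(sample_next_word("The capital of France is"))
```

Because the answer is sampled rather than retrieved, even a well-trained model occasionally emits the low-probability (and wrong) option—which is the mechanism behind hallucinations.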

With retrieval augmented generation (“RAG”), the GenAI model connects to the internet or an external database to pull in data for its outputs. For legal research, this should be a well-curated database of relevant and trustworthy legal information. RAG-based tools are far more reliable for tasks requiring definitive answers, like providing a legal citation. RAG-based tools can also link you to the source of the information, allowing you to verify it. Most specialized legal AI tools (like the ones in all those vendor emails) now incorporate RAG to varying degrees. Still, RAG-based GenAI can get it wrong. That’s why independent review remains critical, or, at a minimum, why you should develop strong trust in a tool through repeated use before relying on it. Public tools like ChatGPT can do a lot of things, but unless connected to a proven database of authorities, they remain a poor choice for researching case law.
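The RAG pattern can be illustrated with a minimal sketch. The case names, descriptions, and word-overlap scoring below are all invented stand-ins; real legal tools use curated authority databases and semantic (embedding-based) search, and the final step would send the grounded prompt to the model.

```python
# Minimal RAG sketch: retrieve citable sources first, then ground the
# model's prompt in them. All sources here are hypothetical.
authorities = {
    "Smith v. Jones (2019)": "negligence standard for automobile accidents",
    "Doe v. Roe (2021)": "damages caps in personal injury claims",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank sources by simple word overlap with the query
    (a toy stand-in for semantic search)."""
    q = set(query.lower().split())
    scored = sorted(
        authorities,
        key=lambda src: len(q & set(authorities[src].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved, verifiable sources."""
    sources = retrieve(query)
    context = "\n".join(f"- {s}: {authorities[s]}" for s in sources)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

print(build_prompt("negligence standard for automobile accidents"))
```

Because the output is anchored to named sources, the lawyer can click through and verify each citation—the key advantage RAG offers over a model answering from its training patterns alone.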

It’s also helpful to know that GenAI is, by design, stochastic—meaning it can give you different answers to the same question asked multiple times. This is why the handlers behind The Economist’s “SCOTUSbot,” a GenAI tool created to predict Supreme Court rulings, run each query at least 10 times to find an “average” outcome for how the justices might rule.[20] Recognizing these limitations can be crucial for lawyers to use GenAI to enhance, rather than undermine, their work.
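The repeated-query approach can be sketched as follows. The `predict_ruling` function and its probabilities are invented stand-ins for a stochastic GenAI query; the point is only to show why averaging multiple runs gives a more stable answer than trusting any single one.

```python
import random
from collections import Counter

def predict_ruling() -> str:
    """Stand-in for a stochastic GenAI query: the same question can
    return different answers on different runs (invented probabilities)."""
    return random.choices(["affirm", "reverse"], weights=[0.7, 0.3])[0]

def averaged_prediction(n: int = 10) -> str:
    """Run the query n times and report the most common answer,
    mirroring the repeated-query approach described above."""
    runs = Counter(predict_ruling() for _ in range(n))
    return runs.most_common(1)[0][0]

print(averaged_prediction())
```

A single run might return the minority answer; across ten or more runs, the majority answer dominates, which is the intuition behind averaging stochastic outputs.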

A Word on Confidentiality and Security

Unless you’re certain otherwise, assume that any GenAI tool you use is processing both your inputs and its outputs in the cloud. GenAI tools capable of “self-learning” also create the risk that information relating to representation can be inadvertently disclosed to another user outside of the attorney-client relationship.[21] These factors raise a host of client confidentiality, consent, work product protection, and data security concerns, which the ABA has acknowledged can be difficult to navigate.[22] That doesn’t mean it’s impossible to use GenAI tools for legal work involving client confidential information. But doing so requires thoroughly investigating the provider’s practices and the risks, and at times even consulting with cybersecurity and IT professionals.[23] According to the ABA, key considerations include:

  • Ensuring the tool is properly configured for confidentiality and security, and that the provider’s obligations are legally enforceable;
  • The ability to receive notices in case the provider’s obligations are breached;
  • Maintaining control over your data, particularly whether it is retained and when it is deleted; and
  • Confirming whether the provider has any rights to use your data beyond delivering the AI service to you (e.g., using your data to train its models).[24]

As a rule of thumb, paid subscriptions to legal-specific tools tend to offer stronger—although not infallible—privacy and security assurances than general-use tools (like Anthropic’s Claude or Google NotebookLM). And the “enterprise” or “professional” grade versions of general-use tools (like ChatGPT) offer better assurances than the free versions of the same tools, which often come with minimal or no assurances.[25]

Fortunately, there is a universe of potential GenAI uses that don’t require sending client confidential information to the cloud, like creating your own version of “SCOTUSbot” to draw insights about a local court you regularly appear before. Other uses include creating rough drafts of or refining standard documents or summarizing public documents like cases and statutes.

Finding the Right Use Cases

It can be easy to feel like incorporating GenAI requires instantly reinventing your practice. It doesn’t. Instead, start by simply identifying existing challenges in your practice. Then consider how adding AI to the things you already do could make them a little more efficient. Also talk to other practitioners about how they are using GenAI.

Early research suggests that, more than improving work quality, GenAI’s greatest value may lie in providing large and consistent improvements to the speed with which legal work can be done.[26] There’s also GenAI’s ability to automate many legal tasks. According to a 2024 Goldman Sachs study, up to 44 percent of current legal tasks performed in the US could be automated by AI.[27] Moreover, a 2024 Thomson Reuters report estimates that within five years the efficiencies AI can create in legal practices could free up to 12 hours per week for lawyers and other white-collar professionals.[28]

And as a side benefit to all this efficiency, a 2023 study found that participants reported increased satisfaction when completing legal work-related tasks with GenAI. According to the study, this was presumably because the technology reduced or eliminated the burden of tedious tasks common throughout the legal profession. The study authors noted that while this might seem to be a minor point, “In an era where lawyer dissatisfaction and burnout are widespread, a tool that has the potential to increase lawyer wellbeing. . . is one that is worth taking seriously.”[29] For those who’ve yet to consider how GenAI might help them grow their legal powers, the promise of improved job satisfaction may be reason enough to take a closer look.


Will Fletcher is the General Counsel at Zasio, a software and consulting company based in Boise. Since 1987, Zasio has helped enterprise companies around the world solve their toughest information governance challenges. Outside of the office, Will and his wife keep busy raising three wonderful, active daughters.


[1] “Artificial Intelligence” generally refers to a computing system capable of performing tasks typically associated with intelligent beings, such as the ability to reason, discover meaning, generalize, or learn from past experience. Britannica, https://www.britannica.com/technology/artificial-intelligence (last visited July 2, 2025).

[2] “Agentic AI” generally describes AI systems capable of performing autonomous actions without human intervention. See Agentic AI, Wikipedia,  https://en.wikipedia.org/wiki/Agentic_AI (last visited July 2, 2025).

[3] See Tollen D., What’s agentic AI, and how does it compare to typical gen-AI for contracting? (May 14, 2025) https://www.linkedin.com/pulse/whats-agentic-ai-how-does-compare-typical-gen-ai-i0uac/?trackingId=cHr5Wsmw7WzVzSCFoOFYyw%3D%3D.

[4] “EvenUp Raises $135M in Series D Funding and Launches New Products to Help Level the Playing Field in Personal Injury Cases,” EvenUp news release (October 4, 2024), https://www.prnewswire.com/news-releases/evenup-raises-135m-in-series-d-funding-and-launches-new-products-to-help-level-the-playing-field-in-personal-injury-cases-302270003.html.

[5] Glasner, J., Legal Tech Startup Investment is Riding High, Thanks to AI Boost, Crunchbase News (Feb. 26, 2025). https://news.crunchbase.com/venture/legal-tech-startup-investment-ai-clio-harvey/

[6] American Bar Association Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512, Generative Artificial Intelligence Tools, p 5 (July 29, 2024) https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf

[7] See id.

[8] See Montoya, T., Generative AI in Legal Practice: A Survey of Professional and Ethical Challenges, The Advocate, Vol. 68, No. 3/4, pp. 20–23 (March/April 2025).

[9] ABA, Formal Opinion 512, p. 2.

[10] See Ambrogi, B., Thomson Reuters Teases Upcoming Release of Agentic CoCounsel AI for Legal, Capable of Complex Workflows, LawSites (June 2, 2025). https://www.lawnext.com/2025/06/thomson-reuters-teases-upcoming-release-of-agentic-cocounsel-ai-for-legal-capable-of-complex-workflows.html

[11] Idaho Rules of Professional Conduct preamble 2 (2014); American Bar Association, Model Rules of Professional Conduct preamble 2 (2025).

[12] Id. at preamble 4; Id. at preamble 4.

[13] ABA, Formal Opinion 512, p. 4.

[14] Id.

[15] See AI and Attorney Ethics Rules: 50-State Survey https://www.justia.com/trials-litigation/ai-and-attorney-ethics-rules-50-state-survey/ (last visited June 24, 2025).

[16] ABA, Formal Opinion 512, p. 4.

[17] See e.g., Butler Snow LLP’s Response to Order to Show Cause, Johnson v. Dunn, et al., Case No. 21:21-CV (N.D. Alabama, So. Div.) (May 19, 2025) (Instead of the beleaguered junior associate having ChatGPT provide non-existent citations for submission in a brief, in this case, it was the supervising partner who added case law imagined by ChatGPT to the more junior attorney’s work.).

[18] See DC Bar, Ethics Opinion 388, Attorney’s Use of Generative Artificial Intelligence in Client Matters, pp. 8—9 (April 2024). https://www.dcbar.org/for-lawyers/legal-ethics/ethics-opinions-210-present/ethics-opinion-388

[19] ABA, Formal Opinion 512, pp. 2–3.

[20] Meet SCOTUSbot, our AI tool to predict Supreme Court rulings, The Economist (June 4, 2025). https://www.economist.com/united-states/2025/06/04/meet-scotusbot-our-ai-tool-to-predict-supreme-court-rulings

[21] American Bar Association Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512, Generative Artificial Intelligence Tools, p. 7.

[22] Id. at p. 6.

[23] Id. at 7.

[24] Id. at 11.

[25] See DC Bar, Ethics Opinion 388, pp. 10—12 (stating the confidentiality assurances given by publicly available “free” tools like ChatGPT can be so inadequate that clients shouldn’t even be asked to provide consent for their use with client confidential information).

[26] See generally, Choi, J.H., Monahan, A.B., & Schwarcz, D. (2024), Lawyering in the Age of Artificial Intelligence, MINN. LAW REV., 109(1).

[27] Ma, M., Sinha, A., Tandon, A., & Richards, J., Generative AI Legal Landscape 2024, p. 6, Goldman Sachs (March 2024). https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.html

[28] Future of Professionals Report, Thomson Reuters, p. 19 (July 2024). https://www.thomsonreuters.com/content/dam/ewp-m/documents/thomsonreuters/en/pdf/reports/future-of-professionals-report-2024.pdf

[29] Choi, J.H., et al., Lawyering in the Age of Artificial Intelligence, at 191.