Generative AI in Legal Practice: A Survey of Professional and Ethical Challenges by Texie Montoya


Across industries and around the globe, generative Artificial Intelligence (“AI”) is revolutionizing the way we work, create, and solve problems. While many use AI for tasks like answering basic questions or generating creative content, its impact extends far beyond these everyday applications. From manufacturing to healthcare to entertainment, this technology is driving unprecedented evolution, opening doors to possibilities once thought impossible. Naturally, these sweeping changes have also reached the legal profession, where tools like CoCounsel,[i] Spellbook,[ii] and Clearbrief[iii] are reshaping how attorneys practice law and engage with their clients. These tools bring immense potential for drafting documents, conducting research, and generating creative content. However, their integration raises critical ethical and professional questions.

Notably, generative AI has become embedded in the tools and platforms professionals already use daily. Word processors suggest text completions, email solutions use predictive text to draft replies, and legal research platforms, including Fastcase, identify relevant case law with AI-powered algorithms. Whether or not you realize it, you are now likely interacting daily with generative AI during routine tasks, underscoring the importance of understanding its capabilities and limitations.

This article explores the professional and ethical considerations of using generative AI in legal practice, highlights applicable rules of the Idaho Rules of Professional Conduct (“IRPC”), and concludes with some high-level practical tips to avoid professional or ethical missteps.

Competence: Understanding the Technology

Competence, as outlined in IRPC 1.1, requires attorneys to provide skilled and informed representation.[iv] Comment [8] to IRPC 1.1 explicitly states, “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”[v] This reflects the profession’s acknowledgment that technology, including AI, is not optional but integral to modern practice.

In the context of generative AI, competence means more than knowing these tools exist. Attorneys must understand their capabilities and limitations. AI can generate drafts, suggest arguments, and analyze data patterns, but it is not infallible. I thought we’d all heard of the 2023 case of Mata v. Avianca, Inc., where a lawyer’s blind reliance on AI resulted in a brief containing totally fabricated case law. The court noted the unprecedented nature of submitting fictitious legal authorities.[vi] But apparently some attorneys did not get the memo, and lawyers in multiple jurisdictions have continued to submit filings that rely upon and cite cases that are entirely made up.[vii] Ensuring accuracy and staying updated on AI’s rapid evolution through continuing education or industry publications are essential aspects of competence.

Generative AI tools also bring risks of “hallucinations,” where outputs appear plausible but lack factual basis. Lawyers must verify all AI-generated content, especially when it affects legal advice or is submitted to a court. Because these tools are often embedded in widely used platforms like email clients and document editors, attorneys might unknowingly rely on them for tasks like grammar suggestions or summarizing client correspondence. This makes it even more critical to verify outputs, test tools, and recognize when AI is influencing the work product.

Attorneys new to AI can begin by using it for low-stakes tasks like drafting internal memos or brainstorming arguments. Comparing the results produced by an AI tool to known cases or scenarios can help attorneys assess the tool’s accuracy, which can in turn build confidence while ensuring reliability. Investigating how the AI tool was trained and how it handles biases is another step toward responsible use. Oftentimes, the provider of the tool is transparent about its data sources, training methods, and bias mitigation on its website and in its documentation, but an attorney may also consider questioning the vendor, independently testing the AI’s outputs for bias, or consulting legal tech reviews and academic studies. While AI can enhance efficiency, it remains a helpful assistant, not a substitute for human expertise.

Confidentiality: Safeguarding Client Information

IRPC 1.6 provides that “a lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted” or an exception applies. Generative AI often requires data inputs, posing risks if sensitive client information is mishandled. Some platforms retain input data for training purposes, potentially exposing confidential details. Attorneys have already faced ethical complaints over platforms that retain input data; platforms that train on client data pose an even greater risk of serious complaints.

“Generative AI should enhance, not replace,
human expertise, with attorneys exercising
judgment and diligence at every step.”

As mentioned, many of the tools lawyers have been using for years or even decades, including email, document review platforms, and case management solutions, now incorporate generative AI features like automatic summarization or predictive text. While these features can be convenient, they also introduce potential risks if sensitive client information is inadvertently shared or processed inappropriately. For example, an attorney might use their email platform’s AI-powered summarization feature to condense a long email thread about a case into brief highlights. To do so, the AI tool likely uploads the email content to cloud-based servers for processing; that content, which could include confidential client information, is now accessible to the AI provider’s system. Continuing that example, if the AI-powered email tool includes auto-reply suggestions, it could generate a premature or inappropriate response which, if sent, could also violate IRPC 1.6 or have other unintended consequences.

Lawyers must consider whether they can anonymize client data before using AI. Replacing specific details, such as names, addresses, and dates of birth, with placeholders or generalized descriptions is one strategy to mitigate risk. Reading and understanding the terms of service of AI providers is equally critical to ensure compliance with confidentiality obligations. Moreover, choosing tools designed specifically for legal professionals, such as those described previously, which prioritize data security, adds an extra layer of protection.

Failing to safeguard client information can result in severe consequences, including ethical violations, loss of trust, and legal liability. Thoughtful integration of AI, supported by clear confidentiality protocols, helps attorneys navigate these challenges responsibly.

Communication: Informed Use of Technology

Effective client communication, as mandated by IRPC 1.4, includes informing clients when AI is being used in their representation. IRPC 1.4(a)(2) specifies that a lawyer must “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.”[viii] If the use of AI significantly impacts the outcome of legal work—such as drafting key documents—disclosure is advisable. Similarly, when AI contributes to cost savings, clients may appreciate knowing how it is improving efficiency.

Transparency fosters trust. Clients should understand how AI is being used, including its potential risks and benefits. For example, a transactional lawyer using AI to draft a merger agreement might explain, “We utilize advanced technology to streamline drafting, but rest assured, I review every detail to ensure it aligns with your goals and complies with the law.” Such communication reassures clients that technology enhances the practice without compromising quality.

Fees: Ethical Billing Practices

Under IRPC 1.5, “A lawyer shall not make an agreement for, charge, or collect an unreasonable fee or an unreasonable amount for expenses.”[ix] Lawyers using AI tools must ensure that fees for AI-assisted tasks align with this requirement by reflecting the actual time spent and any associated costs. For example, billing for tasks automated by AI must not result in inflated rates but should correspond to the efficiencies gained.

Candor Toward the Tribunal: Ensuring Accuracy in Submissions

IRPC 3.3 imposes a duty of candor, stating, “A lawyer shall not knowingly make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal.”[x] Lawyers relying on AI-generated outputs must meticulously verify the accuracy of such content to prevent submitting fabricated case law or misleading analysis.

Nonlawyer Assistants: Supervising the Use of AI

Attorneys remain fully accountable for all work products, including those involving generative AI tools. Under IRPC 5.3, lawyers have a duty to supervise nonlawyer assistants, including their use of technological tools, ensuring they are used in a manner consistent with professional obligations.[xi] This rule highlights the importance of oversight when delegating tasks to subordinates or third-party vendors.

Supervisory responsibilities require attorneys to establish clear standards for AI use and provide thorough review of its outputs. For example, cross-checking AI-generated citations and arguments against authoritative sources is essential to prevent errors or misrepresentations. Failure to verify such outputs could harm a client’s case and undermine the attorney’s credibility. Attorneys must also implement policies and training programs within their firms to ensure subordinates understand the ethical implications of using AI tools. Generative AI should enhance, not replace, human expertise, with attorneys exercising judgment and diligence at every step.

Personally, I wonder if Rule 5.3 will someday include generative AI tools in its definition of “nonlawyer assistants.”

Marketing and Advertising: Reviewing AI-Generated Content

“Lawyers must verify all AI-generated
content, especially when it affects legal
advice or is submitted to a court.”

AI-generated marketing content must comply with ethical rules, such as IRPC 7.1 and 7.2, which prohibit false or misleading statements.[xii] Attorneys should review all AI-generated promotional materials to ensure they accurately reflect their qualifications and experience. Maintaining a professional tone is essential, as overly generic or exaggerated content may undermine the firm’s reputation.

For instance, while AI can draft LinkedIn posts or client newsletters, human oversight ensures alignment with the firm’s branding and ethical standards. This balance between automation and authenticity is key to effective and responsible marketing.

Avoiding Misconduct: Honest and Accurate Outputs

Under IRPC 8.4(c), “It is professional misconduct for a lawyer to […] engage in conduct involving dishonesty, fraud, deceit, or misrepresentation.”[xiii] This rule applies to the use of generative AI tools, as relying upon or citing inaccurate or misleading outputs—even those generated unintentionally—could constitute misconduct. Attorneys must carefully evaluate AI-generated work to ensure it is accurate and transparently presented, safeguarding the integrity of their practice and maintaining ethical standards.

Access to the Legal System: Opportunities and Disparities

The preamble of the Idaho Rules of Professional Conduct emphasizes that “a lawyer should seek improvement of the law, access to the legal system, the administration of justice, and the quality of service rendered by the legal profession.”[xiv] Generative AI offers significant opportunities to lower costs and improve efficiency, but not all sectors can equally benefit. Sole practitioners, small firms, and legal aid organizations often lack the resources to adopt advanced AI tools, exacerbating existing disparities in access to justice. Furthermore, AI models trained on biased data risk perpetuating systemic inequities, potentially affecting vulnerable clients who rely on these organizations.

For individuals, AI-driven platforms provide new avenues for self-help, guiding users in creating legal documents, understanding their rights, and navigating legal processes. Advocating for open-source AI tools and ensuring inclusivity in their design can empower individuals while reducing disparities. By addressing both resource gaps for legal professionals and promoting fair tools for individuals, generative AI has the potential to bridge divides in the legal system and advance the mission of improving access to justice.

Conclusion

Generative AI is not just a technological advancement—it is a transformative force that can revolutionize legal practice for the better. Its ability to enhance efficiency, improve accessibility, and streamline complex legal tasks makes it an invaluable asset for attorneys who embrace innovation. While ethical considerations must be carefully navigated, in my opinion, the benefits of AI outweigh the risks when used responsibly. By integrating AI thoughtfully and strategically, lawyers can elevate their practice, serve clients more effectively, and focus on the sophisticated reasoning and advocacy that remain uniquely human strengths. After all, no matter how advanced AI becomes, the nuanced art of lawyering—and the occasional bad pun in a legal brief—will always require a lawyer’s touch (or will it?).


Texie Montoya is an Associate General Counsel at Boise State University where she has worked since 2012. Prior to joining Boise State, Texie clerked for the Honorable Stephen M. Brown at the Washington State Court of Appeals Division III in Spokane, Washington. Texie received her bachelor’s degree from Boise State, where she served as Student Body Vice President and delivered the commencement address with her twin sister in 2006. She earned her juris doctor from Gonzaga University School of Law in Spokane, where she served as President of the Student Bar Association. Texie currently serves on the executive boards of the Professionalism and Ethics Section, the Government and Public Sector Lawyers Section, and Attorneys for Civic Education. Texie is also the board president of Go Lead Idaho, a local non-profit organization dedicated to women’s leadership. Texie lives in Boise with her husband, stepson, and two daughters.


[i] CoCounsel is an AI-powered legal research tool.

[ii] Spellbook is an AI-powered contract drafting and review tool.

[iii] Clearbrief is an AI-powered legal writing and analysis tool.

[iv] Idaho Rules of Prof’l Conduct r. 1.1 (2014).

[v] Id.

[vi] Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).

[vii] Anna Tong, AI ‘Hallucinations’ in Court Papers Spell Trouble for Lawyers, Reuters (Feb. 18, 2025), https://www.reuters.com/technology/artificial-intelligence/ai-hallucinations-court-papers-spell-trouble-lawyers-2025-02-18/.

[viii] Idaho Rules of Prof’l Conduct r. 1.4.

[ix] Id. r. 1.5.

[x] Id. r. 3.3.

[xi] Id. r. 5.3.

[xii] Id. r. 7.1, r. 7.2.

[xiii] Id. r. 8.4.

[xiv] Id. Preamble.