Balancing the Positive Impacts of Generative AI With The Very Real Risks to In-House and Outside Counsel
In recent years, much has been reported concerning both the positive impact and the risks of generative artificial intelligence. From the public's rapid embrace of the technology to intellectual property lawsuits over the material used to train AI large language models (“LLMs”) and increasing sanctions assessed against attorneys for AI misuse, not a day goes by without pundits from many sectors issuing new praise—or stark warnings—related to the use of AI. Within this general framework, both in-house and outside counsel face specific risks arising out of the use of the technology that must be weighed against any reward.
AI Ethics and Attorney Responsibility in Legal Practice
While certainly all who use AI bear some level of moral or ethical responsibility, attorneys bear special obligations related to AI, ranging from protecting confidential and privileged information to upholding the duty of candor to the courts and other tribunals. Balancing these obligations with the obvious reward of AI technology requires a reasonable and competent understanding of the capabilities, limitations, and risks of generative AI. In other words, attorneys, in particular, must understand how AI tools handle, retain, and even share input data prior to using the technology. Still further, any use of AI technology by attorneys must always include human review and the exercise of professional judgment to (1) oversee and protect against the input of confidential, privileged, or trade secret information and (2) verify the accuracy of any output. The bottom line is this: the attorney is always responsible. That is particularly true for supervising attorneys, as discussed in more detail below.
For further reference, see the American Bar Association’s Model Rule 1.1 on attorney competence, which requires attorneys to provide competent representation to clients, exercising the “legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation,” and, per the accompanying comments, to understand “the benefits and risks associated” with the technologies used to deliver legal services to clients.
Confidentiality and Data Security Obligations When Using AI
One of the most important responsibilities attorneys bear related to the use of AI is data security and confidentiality. Whether in-house or outside counsel, attorneys must understand the risks to companies, consumers, clients, and even individuals arising out of generative AI. As IBM recently reported, the average cost of a data breach neared $4.5 million in 2025. With AI, the risk can be even greater, resulting in the inadvertent disclosure of damaging and highly sensitive client information, the waiver of attorney-client privilege or work product protections, or personal and/or institutional sanctions related to false or misleading information generated by AI.
In-house counsel and outside law firms should prioritize strict protection of privilege, confidentiality, and data security by understanding how AI tools ingest, process, retain, and share sensitive data and by regulating how such information can or cannot be used with AI. This also means ensuring security parameters related to data retention, data sharing, and the risk of breach or disclosure. Many AI models—including public models like ChatGPT—store, use, and even share input data, resulting in what many consider the public dissemination of input data. For example, it appears that OpenAI may share users’ personal information with undisclosed third parties and that users may not be able to request the removal of specific prompts from storage. The danger is obvious, and it has led some state bar associations either to prohibit (California) or to strongly warn against (New York) the input of confidential client information into general public models. Even outside the specific confines of AI, attorneys must be cognizant of the use of sensitive data with any technology. In 2023, for example, it came to light that although Amazon had repeatedly assured customers they could delete voice recordings, the recordings were retained and used to improve Amazon’s algorithms. Amazon was charged by the U.S. Department of Justice with violations of the Children’s Online Privacy Protection Act.
Yet another important confidentiality consideration is the need to restrict or monitor employee and third-party access to sensitive content. For in-house counsel, this means oversight both internal to the company and external with outside counsel to ensure policies are in place to prevent the input of confidential information into AI models. This particular risk played out recently with Ring, LLC, the manufacturer of Ring cameras and other accessories. Though not specifically involving AI, Ring failed to prevent employee and contractor access to private consumer videos recorded on its products and used the videos to improve its own algorithms, resulting in the filing of an FTC complaint and entry of a final order in the U.S. District Court for the District of Columbia. In the context of AI, the “self-learning” aspect of AI LLMs is such that even restricting the use of confidential or sensitive information to an internal AI program can risk inadvertent exposure. This is because information used as input for an AI tool is potentially available to others within the organization using the same AI tool and can appear in output that is ultimately shared with a different firm client, filed with a court, or otherwise exposed.
Generally speaking, utilizing private, enterprise-level AI or API accounts (paid AI services) with contractual security and confidentiality measures in place (as opposed to public AI tools) can help mitigate risk. OpenAI, for example, states that its business services, such as ChatGPT Team, ChatGPT Enterprise, and the API, do not use customer content to train its models by default. Still, attorneys should be diligent in ensuring sensitive information is not input into AI models and that all outputs are accurate and verifiable.
For further information, see the ABA’s Model Rule 1.6 regarding confidentiality and the obligation to “keep confidential all information relating to the representation of a client, regardless of its source, unless the client gives informed consent.”
Attorney-Client Privilege and Work Product Protections in the Age of AI
Attorney-client privilege and the work product doctrine are important subsets of the confidentiality and data security issue. Recently, in U.S. v. Bradley Heppner (Feb. 17, 2026), Judge Rakoff of the Southern District of New York addressed “a question of first impression nationwide”—specifically, whether communications between a criminal defendant and an AI platform were protected from government inspection. There, shortly after indictment on charges of securities fraud, wire fraud, and related offenses—and prior to contacting defense counsel—the defendant used the public AI platform Claude to prepare potential defense strategies. When the government sought disclosure of both the input data and the AI output, defense counsel argued the information was protected by the attorney-client privilege and work product protections because the defendant had conducted the AI search for purposes of speaking to counsel and had provided his counsel with the results. The court disagreed. As an initial matter, the court noted that the Claude AI platform is not an attorney and there was no reasonable expectation of privacy in using the technology, such that the attorney-client privilege did not apply. Likewise, the work product protection did not extend to the data because the content was not prepared by defense counsel or even at the instruction of defense counsel. In light of this holding, attorneys must be careful to avoid entering client information, personally identifiable information (“PII”), draft briefs, or client documents into any AI platform in order to avoid inadvertent disclosure and waiver of legal protections.
Supervisory Duties and AI Policy Management for Legal Teams
As referenced above, in addition to ensuring that their own conduct and use of AI and other technologies do not disclose sensitive information or otherwise violate ethical rules, attorneys (and others) in supervisory roles must implement and enforce clear internal policies and training programs for the ethical use of generative AI by counsel, employees and other non-legal staff, and third parties such as vendors. Given that generative AI has grown from a novel technology into a core business tool, those in supervisory or management roles must ensure proper protections exist for all internal (and some external) employees and contractors.
This means, at a minimum, defining what AI means to your organization, your clients, and your context, and understanding how employees and third-party vendors (including outside counsel) use generative AI in order to ensure internal policies cover (or, where necessary, prohibit) all such use. Supervisors should also identify which AI models can and cannot be used and provide clear parameters regarding what use is “acceptable,” including specifically identifying what information can be input into AI large language models. In fashioning any policy, be aware of the local rules of the various jurisdictions in which you practice, as well as state, federal, and international statutes and regulations specific to AI.
The most important part of any supervisory role when it comes to AI is ensuring proper human review and validation processes for AI-assisted work, overseeing and verifying both the input and the output of all employees and third-party vendors. Understand and check for fake case law, false quotations, and inaccurate legal analysis, collectively known as “hallucinations.” Recent studies suggest general public platforms hallucinate 40 to 80+ percent of the time, and even Lexis+ AI and Westlaw AI-Assisted Research have been shown to hallucinate at higher rates than may be anticipated. Similarly, be aware that biased or faulty input often results in biased or faulty output; know what is being input and by whom. Still further, cross-check AI output against other reliable sources.
Lastly, be aware of possible ownership issues regarding AI output. To date, the Copyright Office continues to hold the position that output generated by AI is not entitled to copyright protection (although some applicants have been able to secure some level of protection for the human-authored aspects of AI-assisted works). If there is a chance any employee or third-party contractor could claim ownership of any AI output, address the issue on the front end.
Candor Requirements for Attorneys Using Generative AI
Much of what we have all seen in the news recently related to AI involves a violation of the obligation of candor. For in-house counsel, this arises in the context of complete truthfulness to clients, tribunals, or other agencies overseeing investigations or administrative review. For outside counsel, it typically refers to candor with our clients (including in-house counsel) and the court. Regardless, “candor” in the context of AI includes (1) being candid about how and when AI is used; (2) acknowledging mistakes when they occur; and (3) ensuring no hallucinations or false, misleading, or biased information is relied upon.
The risk is clear. As noted above—and frequently in the news—AI hallucinations included in documents submitted to courts have resulted in increasingly steep sanctions against attorneys (both supervising and associate attorneys) and law firms, including a $10,000 fine handed down by the California Court of Appeal, Second Appellate District, last September. See here and here for more information on the risk of sanctions associated with AI hallucinations.
Informed Consent and Client Authorization for AI Use in Legal Representation
As referenced above and in the ABA Model Rules related to confidentiality, attorneys have an obligation to protect the confidentiality and/or privilege of data and such data should not be entered into any AI LLM without informed consent. Indeed, as noted last July in the ABA’s Formal Opinion 512 regarding AI, “because many of today’s self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client’s informed consent is required prior to inputting information relating to the representation into such a GAI tool.” See American Bar Association Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512, p. 7 (July 29, 2024), accessible here. Importantly, for “the consent to be informed, the client must have the lawyer’s best judgment about why the GAI tool is being used, the extent of and specific information about the risk, including particulars about the kinds of client information that will be disclosed, the ways in which others might use the information against the client’s interests, and a clear explanation of the GAI tool’s benefits to the representation.” Id. Similarly, informed consent “requires the lawyer to explain the extent of the risk that later users or beneficiaries of the GAI tool will have access to information relating to the representation.” Id. It is further important to note the ABA’s conclusion that “[t]o obtain informed consent when using a GAI tool, merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient.”
Staying Current: Regulatory and Compliance Considerations for AI in Law
AI policies and regulations are in constant motion and evolution. Whether you are in-house or outside counsel, institute regular check-ups and check-ins to keep pace with global, federal, state, local, and industry-specific legislation and regulations, including state and federal privacy laws and regulations (e.g., laws restricting the dissemination of personally identifiable information and the Children’s Online Privacy Protection Act), state and federal AI-specific laws and regulations, global privacy and AI-specific regulations (the EU AI Act, Canada’s AIDA), and even state bar regulations and the local rules of the various federal courts in your jurisdiction. The state of the law is fluid and must be reviewed and updated regularly, much like overall cybersecurity.
Likewise, attorneys should conduct regular risk and impact assessments related to the risk of harm, appropriateness of input, and the credibility of output. Are your attorneys, employees, staff, third-party contractors, and vendors inputting only approved data? Are humans (and where necessary, attorneys) reviewing and confirming the accuracy of the output and eliminating the risk of faulty information or hallucinations? Is there any bias or faultiness in the input that is tainting the output? In general, are policies in place and being followed? Staying on top of these important inquiries is the key to avoiding increased risk associated with the use of AI technology.
Finally, while it is true that AI-specific laws and regulations are still evolving, attorneys should not forget that many of the existing laws and our own rules of professional conduct already provide instruction and guidance on how to navigate emerging technologies.
For more information about AI law, see our technology law services practice page.
Klemchuk PLLC is a leading IP law firm based in Dallas, Texas, focusing on litigation, anti-counterfeiting, trademarks, patents, and business law. Our experienced attorneys assist clients in safeguarding innovation and expanding market share through strategic investments in intellectual property.
This article is provided for informational purposes only and does not constitute legal advice. For guidance on specific legal matters under federal, state, or local laws, please consult with our IP Lawyers.
© 2026 Klemchuk PLLC | Explore our services