By: Romaine Marshall and Jennifer Bauer, Polsinelli PC
This article addresses the broad scope of artificial intelligence (AI) laws in the United States that focus on mitigating risk, and discusses the patchwork of laws, regulations, and industry standards that are forming duties of care and legal obligations. Until a mature regulatory regime with established legal frameworks for AI emerges, professionals seeking to mitigate AI risks within their businesses will need to consider other signals, such as Federal Trade Commission (FTC) enforcement trends and the National Institute of Standards and Technology's (NIST) AI Risk Management Framework (AI RMF), to identify, evaluate, and mitigate AI risks.
The NIST AI RMF provides voluntary guidance for managing AI risks throughout the AI life cycle to promote trustworthy AI systems. It outlines characteristics of trustworthiness, including validity, reliability, safety, and transparency, and suggests risk management techniques such as AI risk management policies, AI system inventories, and AI incident response plans. It also highlights the importance of balancing these characteristics based on the AI system's use case.
Applying the framework to a particular use case requires a practical interpretation of the broad risks it identifies; the accompanying NIST CSF 2.0 Implementation Examples provide greater clarification. A high-level overview of the risk management techniques, risks, and controls we typically recommend including in such a program appears in the visual that accompanies the complete article.
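To make the inventory technique more concrete, the following minimal Python sketch (illustrative only, and not drawn from the NIST framework or the authors' program) shows one way an organization might record entries in an AI system inventory; every name, field, and value is hypothetical.

# Illustrative sketch only: a minimal AI system inventory record, one of the
# risk management techniques suggested by the NIST AI RMF. All field names
# and values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemInventoryEntry:
    system_name: str                  # internal identifier for the AI system
    business_owner: str               # accountable person or team
    use_case: str                     # what the system is used for
    risk_tier: str                    # e.g., "high" for consequential decisions
    trustworthiness_gaps: List[str] = field(default_factory=list)  # e.g., validity, safety, transparency
    incident_response_plan: str = ""  # reference to the applicable response plan

# Example entry for a hypothetical high-risk deployment
inventory = [
    AISystemInventoryEntry(
        system_name="resume-screening-model",
        business_owner="HR Technology",
        use_case="Initial screening of job applications",
        risk_tier="high",
        trustworthiness_gaps=["transparency", "bias testing"],
        incident_response_plan="IRP-2024-07",
    )
]

for entry in inventory:
    print(f"{entry.system_name} ({entry.risk_tier} risk), owned by {entry.business_owner}")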
The FTC has been active in addressing improper AI use and development, as seen in cases against Rite Aid Corporation and 1Health.io Inc. These cases underscore the importance of proper data handling, transparency, and adherence to privacy policies. The FTC's actions serve as a warning to companies about the consequences of retroactive privacy policy changes and inadequate data protection measures.
In the absence of federal AI laws, states like Utah and Colorado have enacted their own legislation. Utah's Artificial Intelligence Policy Act focuses on transparency and data privacy, while Colorado's AI Act, influenced by the NIST AI RMF, imposes requirements on developers and deployers of high-risk AI systems. California has also passed AI-related bills, emphasizing disclosure and risk assessments in collaboration with industry leaders.
The Colorado Act imposes obligations on developers and deployers of high-risk AI systems, including a duty of reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.
Holistic AI Risk Management
The complete article outlines key takeaways and recommendations for developing an AI risk management strategy.
These strategies aim to reduce business risk and ensure compliance with evolving legal and regulatory frameworks.
The above information is a summary of a more comprehensive article included in Practical Guidance. Customers may view the complete article by following this link.
Not yet a Practical Guidance subscriber? Sign up for a free trial to view the complete article and other current AI coverage and guidance.
Romaine Marshall is a shareholder at Polsinelli PC. He helps organizations navigate legal obligations relating to data innovation, privacy, and security. He has extensive experience as a business litigation and trial lawyer, and as legal counsel in response to hundreds of cybersecurity and data privacy incidents that, in some cases, involved litigation and regulatory investigations. He has been lead counsel in multiple jury and bench trials in Utah state and federal courts, before administrative boards and government agencies nationwide, and has routinely worked alongside law enforcement.
Jennifer Bauer is counsel at Polsinelli PC. She has extensive experience in global privacy program design, evaluation and audit, regulatory compliance and risk reporting, remediation, and data privacy and cybersecurity law. She is a trusted and experienced leader certified by the International Association of Privacy Professionals. Jenn has transformed the privacy programs of five Fortune 300 companies and advised blue-chip companies and large government entities on data privacy and security requirements, regulatory compliance (particularly GDPR / CCPA), and operational improvement opportunities.
For more practical guidance on artificial intelligence (AI), see
> GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RESOURCE KIT
For an overview of proposed or pending AI-related federal, state, and major local legislation across several practice areas, see
> ARTIFICIAL INTELLIGENCE LEGISLATION TRACKER (2024)
For a survey of state and local AI legislation across several practice areas, see
> ARTIFICIAL INTELLIGENCE STATE LAW SURVEY
For a discussion of legal issues related to the acquisition, development, and exploitation of AI, see
> ARTIFICIAL INTELLIGENCE KEY LEGAL ISSUES
For an analysis of the key considerations in mergers and acquisitions due diligence in the context of AI technologies, see
> ARTIFICIAL INTELLIGENCE (AI) INVESTMENT: RISKS, DUE DILIGENCE, AND MITIGATION STRATEGIES
For a discussion of the use of AI and automation in e-commerce, see
> ARTIFICIAL INTELLIGENCE AND AUTOMATION IN E-COMMERCE
For a checklist of key legal considerations for attorneys when advising clients on negotiating contracts involving AI, see
> ARTIFICIAL INTELLIGENCE (AI) AGREEMENTS CHECKLIST
National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (July 2024).
National Institute of Standards and Technology, The NIST Cybersecurity Framework (CSF) 2.0 (Feb. 26, 2024).
The NIST Cybersecurity Framework (CSF) 2.0.
National Institute of Standards and Technology, CSF 2.0 Implementation Examples.
FTC v. Rite Aid Corp., No. 23-cv-5023 (E.D. Pa. Dec. 19, 2023).
FTC v. Rite Aid Corp., No. 23-cv-5023 (E.D. Pa. Feb. 26, 2024).
1Health.io Inc., 2023 FTC LEXIS 77 (Sept. 6, 2023).
See Utah Artificial Intelligence Policy Act, Utah Code Ann. §§ 13-72-101 to -305.
Colo. Rev. Stat. §§ 6-1-1701 to -1707.
Colo. Rev. Stat. § 6-1-1701(6) and (7).
See Colo. Rev. Stat. §§ 6-1-1702 to -1704.
Colo. Rev. Stat. § 6-1-1706.
See Colo. Rev. Stat. § 6-1-1701(3).