Copyright © 2025 LexisNexis and/or its Licensors.
05 May 2025
Artificial Intelligence (AI) Technology Legal Risks Checklist
The following article is a summary of the full checklist, available to Practical Guidance subscribers by following this link. Not yet a Practical Guidance subscriber? Sign up for a free trial here.
The complete checklist is written by Yasamin Parsafar of Sheppard, Mullin, Richter & Hampton LLP.
This checklist provides best practices and considerations for corporations to effectively manage the legal risks associated with the development and use of artificial intelligence (AI) technologies.
The checklist addresses the issues companies and their counsel must understand in order to ensure compliance and mitigate potential legal challenges associated with AI. Notably, the development and use of AI technology implicate numerous considerations, so this checklist is not comprehensive. The legal issues a company will encounter depend on the type of AI development and use it engages in and its specific role in the AI ecosystem. For example, a company may license its content for others to train AI, train AI models itself, use third-party AI models as-is or fine-tune them, or create applications built on AI models.
- Establish a Multidisciplinary AI Governance Team
  - Assemble members from diverse backgrounds and roles within the company.
  - Provide comprehensive education on legal, technical, and business aspects of AI.
  - Ensure ongoing training to keep pace with evolving legal developments.
- Define the Team’s Core Responsibilities
  - Policy Creation and Implementation
    - Develop written policies covering areas such as:
      - Licensing content for AI training.
      - Collecting and using data to train, fine-tune, and augment AI models.
      - Implementing and deploying applications that leverage AI.
      - Managing the use and output of both internal and third-party AI tools.
  - Employee Training and Compliance
    - Educate staff on AI risks, proper usage, and compliance issues.
    - Establish clear employee-use policies, ensuring staff understand both the rules and the reasons behind them.
  - Vendor Diligence and Approval
    - Enhance standard vendor due diligence to include AI-specific issues.
    - Approve AI tools against company-defined criteria and ensure the tools comply with established policies.
  - Protection of Intellectual Property (IP)
    - Ensure that:
      - AI input data remains confidential and exclusively owned by the company.
      - Outputs from AI systems are owned by the company without inadvertently licensing rights to AI tool providers.
      - AI tools, especially code generators, do not compromise proprietary technology or trigger unwanted open-source obligations.
- Ensure Legal and Regulatory Compliance
  - Comply with applicable federal, state, and international laws and guidelines, including:
    - AI-specific laws (e.g., the Colorado AI Act, chatbot disclosure laws, and the EU AI Act).
    - Broader laws governing consumer protection, privacy, and specific industries (healthcare, education, transportation, etc.).
  - Follow guidance from regulatory bodies such as the FTC, EEOC, FCC, SEC, USPTO, and FDA.
  - Map, track, and audit any data used to train AI models to confirm proper rights and compliance (a minimal tracking sketch follows this list).
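One lightweight way to implement the data-mapping step above is to keep a structured record for each dataset used in training, so that datasets lacking confirmed rights can be surfaced automatically. The Python sketch below is purely illustrative: all field names (such as `permits_ai_training`) are hypothetical, and any real registry should be designed with input from counsel.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal, illustrative record for tracking rights in training data.
# Field names are hypothetical; adapt them to your own compliance program.
@dataclass
class TrainingDataRecord:
    dataset_name: str
    source: str                   # where the data came from (vendor, public web, internal)
    license_terms: str            # governing license or contract reference
    contains_personal_data: bool  # triggers privacy-law review if True
    permits_ai_training: bool     # confirmed right to use for model training
    last_audited: date
    notes: list[str] = field(default_factory=list)

def flag_for_review(records: list[TrainingDataRecord]) -> list[TrainingDataRecord]:
    """Return records lacking confirmed training rights or holding personal data."""
    return [r for r in records if not r.permits_ai_training or r.contains_personal_data]

if __name__ == "__main__":
    registry = [
        TrainingDataRecord("support-tickets-2024", "internal CRM export",
                           "internal use only", True, False, date(2025, 1, 15)),
        TrainingDataRecord("licensed-news-corpus", "Acme Media Ltd. (hypothetical)",
                           "Agreement #1234, AI training permitted", False, True,
                           date(2025, 3, 2)),
    ]
    for r in flag_for_review(registry):
        print(f"Review needed: {r.dataset_name} ({r.source})")
```

Keeping the registry in code, or in a database it reads from, turns "flag anything without confirmed training rights" into a routine, auditable query rather than an ad hoc review.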
- Address Issues Related to Training AI Models
  - Data Rights and Privacy
    - Confirm that training data is properly licensed and compliant with privacy regulations.
    - Avoid unauthorized data scraping and ensure the data is not used beyond what existing privacy policies permit.
  - Copyright and Infringement Risks
    - Determine whether training content is protected by copyright and whether its use qualifies as fair use.
    - Secure necessary licenses, or assess the risks before using copyrighted content for AI training.
  - Bias, Discrimination, and Data Quality
    - Evaluate datasets for representativeness and bias.
    - Use proactive measures to ensure fairness and avoid discriminatory outcomes.
- Build and Deploy AI Tools with Responsible AI Principles
  - Emphasize transparency, accountability, and explainability in AI system design.
  - Prioritize accuracy and establish mechanisms for timely error correction.
  - Clearly mark and label AI-generated (synthetic) content.
  - Implement strong cybersecurity practices to safeguard AI systems.
  - Avoid misleading claims about AI capabilities and ensure that the systems do not lead to algorithmic discrimination.
- Use Third-Party AI Tools Responsibly
  - Perform thorough AI-specific vendor diligence before integrating or deploying third-party AI tools.
  - Review and understand licensing terms for AI tools, including:
    - Use restrictions, confidentiality of inputs, and the disposition of ownership rights.
    - Indemnity clauses and liability for infringement or open-source compliance issues.
  - Carefully assess any AI-generated content to determine whether it raises copyright or patent considerations.
  - Establish safeguards (such as filters or reference checks) when using AI code generators to manage open-source risks (a simple screening sketch follows this list).
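As a concrete illustration of the "filters" idea above, some companies run AI-generated code through a simple textual screen that flags license-indicative phrases for human review before the code is committed. The sketch below is a minimal, assumption-laden example: the marker list is hypothetical and incomplete, and a match only signals that review is needed, not that the snippet was actually copied from open-source code.

```python
# Naive, illustrative pre-commit screen for AI-generated code snippets.
# A match does NOT establish provenance; it only flags the snippet for
# human/legal review. Real programs pair screens like this with
# provider-side filters and code-similarity scanning tools.
LICENSE_MARKERS = (
    "gnu general public license", "gpl-2.0", "gpl-3.0", "agpl",
    "copyright (c)", "all rights reserved", "spdx-license-identifier",
)

def flag_snippet(snippet: str) -> list[str]:
    """Return any license-indicative phrases found in a generated snippet."""
    lowered = snippet.lower()
    return [marker for marker in LICENSE_MARKERS if marker in lowered]

if __name__ == "__main__":
    generated = "// SPDX-License-Identifier: GPL-3.0\nint add(int a, int b) { return a + b; }"
    hits = flag_snippet(generated)
    if hits:
        print("Hold for review; matched:", ", ".join(hits))
```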
- Consider Application-Specific Issues
  - Employment Decisions and Consumer Benefits
    - Ensure AI systems used for employment or consumer-facing applications do not introduce bias or violate consumer protection laws.
    - Address risks such as:
      - Disproportionate impact on specific demographic groups (see the adverse-impact sketch after this section).
      - Inaccuracies or “hallucinations” in critical decision-making scenarios.
  - Chatbots and Interactive AI Systems
    - Disclose to users when they are interacting with a chatbot.
    - Confirm compliance with relevant laws (e.g., wiretapping/eavesdropping, data privacy).
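For the disproportionate-impact risk flagged above, one widely cited screening heuristic is the EEOC's "four-fifths rule": a selection rate for any group below 80% of the highest group's rate may indicate adverse impact. The sketch below is a minimal illustration using hypothetical numbers; it is a first-pass screen, not a substitute for proper statistical analysis or legal advice.

```python
# Minimal illustration of the EEOC "four-fifths rule" screen for adverse
# impact. The counts below are hypothetical; real analyses require larger
# samples, statistical testing, and review by counsel.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group's."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical hiring outcomes: group -> (selected, total applicants)
    outcomes = {"Group A": (48, 100), "Group B": (30, 100)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "REVIEW (below 0.80)" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

Here Group B's selection rate (0.30) is 62.5% of Group A's (0.48), which falls below the four-fifths threshold and would warrant closer review.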
- Update Contracts and Internal Policies as Needed
  - Define acceptable-use matrices specific to different AI tools and applications.
  - Clearly specify the rights and obligations regarding the use of AI outputs.
  - Ensure vendor agreements include warranties related to intellectual property protection and consumer/civil rights compliance.
This summary captures the main points to consider when forming an AI governance team and implementing policies for the responsible development, deployment, and use of AI. Practical Guidance subscribers may review the complete checklist by following this link.
Not yet a Practical Guidance subscriber? Sign up for a free trial here.
The complete checklist is written by Yasamin Parsafar, a partner in the IP Practice Group at Sheppard Mullin.