
Cautions and Legal Considerations of Using Generative AI in Healthcare

August 24, 2023

By: Sara Shanti, Phil Kim, Christopher Rundell, Arushi Pandya, and Elfin Noce, SHEPPARD MULLIN

The use of generative artificial intelligence (AI) and machine learning (ML) in healthcare is developing at a frenetic and fascinating pace.

Because the consequences of such technology are yet to be fully understood, thoughtful consideration of its use by industry stakeholders and users is necessary, especially with respect to the legal implications within the healthcare industry. This article discusses AI’s development in healthcare and federal and state efforts to regulate its use. It provides health law practitioners with an overview of the legal considerations associated with AI’s use in healthcare, including data privacy, corporate practice of medicine, provider licensing, reimbursement, intellectual property, and research. It concludes with a discussion of the ethical considerations involved with AI in healthcare and of protections against potential liability.

AI’s Development in the United States and Certain Foreign Jurisdictions

Although AI can be described simply as the engineering and science of making intelligent machines, its effects are much more complex. ML is a subset of AI focused on improving a system’s performance by learning from data and statistical patterns. While AI programming has existed for decades, recent developments in generative AI have been transformative in mainstream use. Accelerated growth in healthcare can be attributed, at least in part, to the COVID-19 Public Health Emergency (PHE), when digital healthcare, including products driven by AI, emerged as a marketable means of delivering accessible care.
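
For readers less familiar with the mechanics, the sketch below illustrates what "learning from data" means in practice: a toy statistical model fit to synthetic patient records. The features, labels, and numbers are invented for illustration and imply nothing about any real clinical model.

```python
# Illustrative only: a toy supervised-learning model, the kind of
# statistical pattern-fitting that the "machine learning" label refers to.
# All data here is synthetic; no validated clinical model is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: [age, systolic blood pressure] for 200 patients.
X = np.column_stack([rng.normal(55, 12, 200), rng.normal(130, 15, 200)])
# Hypothetical label: 1 if an acute episode occurred, correlated with both.
y = ((X[:, 0] + X[:, 1] + rng.normal(0, 20, 200)) > 190).astype(int)

model = LogisticRegression().fit(X, y)          # "learn" from past cases
risk = model.predict_proba([[68, 150]])[0, 1]   # score a new patient
print(f"Estimated risk for a 68-year-old with SBP 150: {risk:.0%}")
```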

Pre- and post-PHE, the United States has been a premier healthcare leader in breakthrough innovations and research, and this continues to be the case with AI’s evolution. However, the current sparse regulatory landscape has cast a shadow over AI’s potential, which is particularly significant in light of an aging population; high Medicaid and Children’s Health Insurance Program enrollment, which grew 29.8% from February 2020 to December 2022; and multiple ongoing epidemics in mental health and substance abuse. Considering this healthcare climate, AI, as a regulated and well-governed tool, has a singular opportunity to improve the health and wellness not only of the nation but of the entire global population at a pivotal point in history.
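
For concreteness, the cited growth rate is simple percentage arithmetic; the snippet below reproduces the calculation using illustrative enrollment totals (the exact CMS figures are not restated here).

```python
# Hypothetical enrollment totals, in millions, chosen to illustrate the
# arithmetic behind the cited 29.8% growth figure.
feb_2020 = 71.2   # illustrative baseline enrollment
dec_2022 = 92.4   # illustrative later enrollment

growth = (dec_2022 - feb_2020) / feb_2020
print(f"Enrollment growth: {growth:.1%}")  # -> Enrollment growth: 29.8%
```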

Such optimism stands in stark contrast to warnings about AI’s potential to harm or mislead. In fact, the World Health Organization (WHO), which issued the Ethics & Governance of Artificial Intelligence for Health in 2021, recently called for caution to be exercised as “the data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness.” While international bodies, like the European Union, have been actively monitoring and pushing for limitations on AI for years, to date, the United States has largely allowed the industry to regulate itself. Without swift action, de facto legal regimes for AI may be established outside of the United States, most significantly in China, if only due to the size of its population. The risk is compounded by federally elected officials’ and staff’s limited experience at the intersection of computer science and law, and by Congress’s notorious aversion to imposing sweeping limitations on technology companies. The United States nonetheless has a tremendous opportunity to grow and lead in this arena. Alternatively, many experts strongly believe that governing AI must be a global collaboration with international monitoring, similar to how the nuclear field is regulated. While AI now has legislators’ attention and future regulation is ultimately expected, stakeholders are hyper-aware of the implications of further delay.

Undeterred by legislative battles, AI/ML in healthcare has advanced across a broad range of applications, from innovations in identifying acute health episodes and improving the personalization of care and treatment plans, to pharmaceutical development and the prevention of isolation and self-harm. Understanding that AI is constantly evolving, this article focuses on the legal considerations of AI in healthcare in the United States that can be applied alongside regulatory developments to support protective and successful implementation.

Existing Legal Framework of AI Regulation in the United States

Currently, no comprehensive federal framework to regulate AI/ML exists. The White House’s Blueprint for an AI Bill of Rights does offer high-level direction in the design, deployment, and use of automated systems to prioritize civil rights and democratic values. A number of federal agencies have issued high-level guidance or statements, and Congress is taking steps to educate itself, including through hearings with stakeholders and technology executives. However, material and standardized safeguards have yet to be established. In contrast, certain states are actively developing and implementing laws to oversee the development and deployment of AI that impacts healthcare. For example, the California Consumer Privacy Act (CCPA) provides consumers with rights to opt out of automated decision-making technology. Illinois’ proposed Data Privacy and Protection Act would regulate the collection and processing of personal information and the use of so-called covered algorithms, which include computational processes utilizing AI/ML. Approximately half of the country’s states already have pending or enacted AI legislation.
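
As a sketch of what honoring an opt-out right of this kind might look like in software, the hypothetical routine below checks a consumer's recorded preference before running any automated decision and falls back to human review when the consumer has opted out. The record structure and function names are assumptions for illustration, not a statutory requirement or any regulator's prescribed design.

```python
# Minimal sketch of one compliance pattern suggested by opt-out rights
# like the CCPA's: gate automated decision-making on the consumer's
# recorded preference. All names and structures here are hypothetical.
from dataclasses import dataclass

@dataclass
class ConsumerRecord:
    consumer_id: str
    opted_out_of_adm: bool  # automated decision-making opt-out flag

def run_model(features: dict) -> str:
    # Placeholder for an actual scoring model.
    return "approved" if features.get("score", 0) > 0.5 else "denied"

def route_decision(record: ConsumerRecord, features: dict) -> str:
    if record.opted_out_of_adm:
        return "queued_for_human_review"   # honor the opt-out
    return run_model(features)             # automated path

print(route_decision(ConsumerRecord("c-123", True), {"score": 0.9}))
# -> queued_for_human_review
```

The structural point is that the opt-out check precedes the model call, so the protective path is the default whenever the preference is set.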

Stakeholder and industry groups are also actively releasing guidance, despite the lack of enforceability, which materially limits its implementation. For instance, in order to align on health-related AI standards in a patient-centric manner, the Coalition for Health AI released a Blueprint For Trustworthy AI Implementation Guidance and Assurance for Healthcare. The American Medical Association (AMA) has similarly published Trustworthy Augmented Intelligence in Health Care, a literature review of existing guidance, in order to develop actionable guardrails for trustworthy AI in healthcare.


Ethical Considerations of AI Use in Healthcare

AI/ML has the potential both to mitigate and to exacerbate health inequities, especially those driven by the social determinants of health (SDOH). Incorporating SDOH into AI/ML technologies may yield higher quality care. However, human review and oversight are key mechanisms for promoting the ethical deployment of AI and monitoring its potential harms. The possibility of AI/ML inflicting harm in healthcare encompasses a broad range of malicious and unintended consequences, including some, such as biohacking and the creation and use of bioweapons, that could work to the tremendous detriment of whole societies.
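
One common form that human oversight takes is a human-in-the-loop gate: automated outputs that are low-confidence, or that touch sensitive categories, are escalated to a clinician rather than acted on automatically. The sketch below is illustrative only; the threshold and category list are invented, not clinical guidance.

```python
# Minimal sketch of a human-in-the-loop review gate. The threshold and
# sensitive-category list are illustrative assumptions.
SENSITIVE_CATEGORIES = {"self_harm", "substance_use"}
CONFIDENCE_THRESHOLD = 0.90

def triage(prediction: str, confidence: float, category: str) -> str:
    if category in SENSITIVE_CATEGORIES or confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATE to clinician: {prediction} ({confidence:.0%})"
    return f"auto-route: {prediction}"

print(triage("elevated risk", 0.72, "cardiology"))      # escalated: low confidence
print(triage("elevated risk", 0.97, "self_harm"))       # escalated: sensitive category
print(triage("routine follow-up", 0.96, "cardiology"))  # auto-routed
```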

Bias and Discrimination

While the utilization and development of AI implicate a variety of ethical concerns, these issues are exacerbated and amplified within the healthcare industry. Ethical frameworks have been developed by a variety of stakeholders, including the AMA, the WHO, and academia. Chief among the ethical risks of AI in healthcare is that the source and integrity of the data underpinning AI/ML technologies can greatly affect their accuracy and consistency and, ultimately, produce bias and discrimination. Biases can be further perpetuated in data sets through inaccuracies introduced by human annotation. Algorithms may incorporate biases at multiple stages of their development and can consequently compound and perpetuate preexisting inequities in the healthcare system.
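
A basic technical expression of this concern is a subgroup audit: comparing a model's error rates across demographic groups before deployment. The sketch below uses synthetic records and a single metric (false-negative rate); a real audit would rely on validated outcomes and a much broader battery of fairness measures.

```python
# Minimal sketch of a subgroup bias check. Records are synthetic tuples
# of (subgroup, model_prediction, actual_outcome); the group labels and
# values are hypothetical.
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

misses = defaultdict(int)     # positives the model failed to flag
positives = defaultdict(int)  # actual positives per subgroup

for group, pred, actual in records:
    if actual == 1:
        positives[group] += 1
        if pred == 0:
            misses[group] += 1

for group in sorted(positives):
    fnr = misses[group] / positives[group]
    print(f"{group}: false-negative rate {fnr:.0%}")
# Disparate rates (here 33% vs. 67%) are a signal to investigate the
# training data and labeling process before deployment.
```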

Integrity of Healthcare Delivery

The risk at the forefront of using AI/ML technologies in healthcare is that these systems can be inaccurate, which could result in patient harm. Generative AI systems are known to hallucinate, creating false information, and inaccuracies can also be caused by algorithmic bias. Security is another risk, given the large, highly sensitive data sets necessary to produce quality AI/ML models for healthcare use cases. Hallucination and false information illustrate how AI, by its very nature, can propagate any bias, discrimination, or misinformation quickly and extensively if it is not mitigated or caught.
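
One naive illustration of a hallucination safeguard is a grounding check that declines to surface a generated statement unless its key terms actually appear in the source record it purports to summarize. The function below is a deliberately simple sketch, not a production control; real systems rely on retrieval grounding, evaluation suites, and human review.

```python
# Minimal sketch of a grounding check: require that most key terms in a
# generated statement appear in its claimed source. The heuristic and
# threshold are illustrative assumptions only.
def is_grounded(generated: str, source: str, min_overlap: float = 0.6) -> bool:
    gen_terms = {w.lower().strip(".,;") for w in generated.split() if len(w) > 4}
    if not gen_terms:
        return False
    src = source.lower()
    hits = sum(1 for term in gen_terms if term in src)
    return hits / len(gen_terms) >= min_overlap

source = "Patient reports mild headache; prescribed acetaminophen 500 mg."
print(is_grounded("Patient prescribed acetaminophen for headache.", source))    # True
print(is_grounded("Patient prescribed warfarin after cardiac surgery.", source))  # False
```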


Conclusion—Successful AI Requires Sophisticated Regulation and Regulatory Counsel

The healthcare regulatory framework surrounding AI/ML is unsettled and still developing, yet its implications are far-reaching. Unless the federal government adopts wide-ranging, preemptive rules for the creation and use of AI/ML products, a patchwork of varying state laws, overlaid with overarching global standards, is likely to govern this arena. As a result, legal developments require careful monitoring, and industry actors should proceed with caution and thoughtful corporate citizenship when developing AI/ML products or entering into arrangements to use them. It is key to build flexibility into AI/ML products and arrangements so they can adjust and pivot as needed to accommodate the legal developments to come.

The revolutionary nature of AI/ML renews the urgency of healthcare’s age-old oath to care for patients and to do no harm. In the context of AI, this oath must be applied in a broader and more deliberate manner, encompassing the many and society at large, to ensure that the benefits of AI in healthcare are not reaped at the cost of individual or public rights and safety.


Sara Shanti is a partner in the Corporate Practice Group in the firm’s Chicago office. Sara’s practice sits at the forefront of health technology, focusing on practical counsel on healthcare innovation and complex data privacy matters. Using her medical research background and HHS experience, Sara advises providers, payors, start-ups, and technology companies, and their investors and stakeholders, on digital and novel healthcare regulatory compliance matters, including artificial intelligence and machine learning, augmented and virtual reality, data assets and privacy, gamification, implantable and wearable devices, and telehealth.

Sara has deep experience advising clients on data use and protection under Part 2, HIPAA, GINA, and state privacy laws such as BIPA and CCPA, as well as on cross-border data transfers. She also assists clients in implementing compliance programs, launching health innovations and investments, and responding to governmental investigations. Her experience extends to consumer and patient rights, including under the Americans with Disabilities Act and Section 1557 of the Affordable Care Act, medical staff relationships, and navigating the evolving regulatory landscape for next-generation technology.


Phil Kim is a partner in the Corporate and Securities Practice Group in the firm’s Dallas office. Phil advises various types of healthcare providers in connection with transactional and regulatory matters. He counsels healthcare systems, hospitals, ambulatory surgery centers, physician groups (including non-profit health organizations), home health providers, and other healthcare companies on the buy- and sell-side of mergers and acquisitions, joint ventures, and operational matters, which include regulatory, licensure, contractual, and administrative issues.

Phil has a particular interest in digital health. He has assisted a number of multinational technology companies entering the digital health space with various service and collaboration agreements for their wearable technology. He also assists public medical device, biotechnology, and pharmaceutical companies, as well as the investment banks that serve as underwriters involved in the public securities offerings for such healthcare companies.


Christopher Rundell is an associate in the Corporate Practice Group in the firm’s Chicago office and a member of the Healthcare Team. He advises healthcare corporations on mergers and acquisitions and other corporate transactions and governance matters. His representative experience includes representing healthcare provider and management organizations, technology companies, commercial insurers, managed care organizations, Medicare Advantage health plans, nonprofit and for-profit health systems, academic medical centers, community hospitals, and post-acute and sub-acute providers, such as home health, hospice, and behavioral health providers.


Arushi Pandya is an associate in the Governmental Practice Group in the firm’s Washington, D.C. office. Arushi advises healthcare clients on regulatory and transactional matters. Prior to joining Sheppard Mullin, Arushi was an associate at a large Texas firm. During law school at Texas Law, she served as Managing Editor of the Journal of Law and Technology, Pro Bono Scholar, Dean’s Fellow, Community Engagement Director of the Women’s Law Caucus, and a health law research assistant.


Elfin Noce is an associate in the Intellectual Property Practice Group in the firm’s Washington, D.C. office. He also is a member of the Privacy and Cybersecurity Team. Elfin counsels his clients on a wide range of data privacy and cybersecurity matters. Elfin’s practice includes managing cyber breach response, drafting incident response plans, breach simulations, drafting privacy policies, negotiating and drafting complex technology agreements, and defending companies in cybersecurity litigation.

