Free subscription to the Capitol Journal keeps you current on legislative and regulatory news.
FTC, 17 States File Antitrust Lawsuit Against Amazon
The long-expected antitrust action against Amazon finally came last week with the filing of a complaint in the U.S. District Court for the Western...
NC Budget Would Preempt Local Government Minimum Wage Rates
The state budget (HB 259) approved largely along party lines this month in North Carolina’s Republican-controlled legislature includes...
Medicaid Expansion Coming to NC in December
North Carolina Gov. Roy Cooper (D) announced last week that the state will launch Medicaid expansion on Dec. 1, which will leave just 10 states that haven’t...
In the early days of the COVID-19 pandemic, Congress enacted the Families First Coronavirus Response Act, which among other things required state Medicaid programs to keep people continuously enrolled...
Biden Administration Seeks to Exclude Medical Debt from Credit Scores
The Biden administration announced plans to develop new rules that would prevent unpaid medical bills from counting towards consumers’...
For years, movies have depicted a future in which artificial intelligence provides us with medical care and counseling.
In Star Wars: Episode III - Revenge of the Sith, Anakin Skywalker’s life is saved by droids who replace his missing limbs with robotic appendages, transforming him into the iconic villain Darth Vader.
In 2013’s Elysium, Matt Damon’s hardscrabble character Max has a frustrating encounter with a humorless AI parole officer who doesn’t listen to him and eventually asks, “Would you like to talk to a human?”
Even in Disney’s animated superhero movie, Big Hero 6, the plot turns on Baymax, a huggable “personal healthcare companion” programmed “with over 10,000 medical procedures.”
Today’s AI technology, of course, isn’t at Hollywood’s level – at least, not yet. But AI has seeped far enough into America’s medical system that legislators across the country have started to take notice.
From Massachusetts to California, state lawmakers are working on bills to regulate the use of AI in healthcare.
The legislation being considered today doesn’t deal with anything as advanced as robot surgeons or automated therapists. But while it’s more mundane, it’s no less potentially impactful.
For example, California AB 1502 seeks to ban healthcare service plans or health insurers from discriminating on the basis of race, gender, national origin, age or disability through the algorithms they use in their decision making.
Similarly, Georgia’s HB 203, which has already been enacted, regulates the use of AI in conducting eye assessments, while Illinois’s pending HB 1002 would regulate the use of algorithms to diagnose patients.
New Jersey’s SB 1402 outlaws discrimination through the use of “automated decision systems” by a litany of professionals and institutions, including banks, mortgage companies, insurance companies and healthcare providers.
Another area legislators seem particularly concerned about is the use of AI in guiding nurses’ decision-making. Illinois’s HB 3338 would require hospitals, surgical treatment centers and other healthcare facilities to defer to a human nurse’s judgment over that of AI. Maine’s SB 656 would regulate the use of algorithms to achieve nursing-care objectives.
“Artificial intelligence, the development of computer systems to perform tasks that normally require human intelligence, such as learning and decision-making, has the potential to spur innovation and transform industry and government,” the National Conference of State Legislatures writes on its website, where it is tracking AI legislation considered this year. “Concerns about potential misuse or unintended consequences of AI, however, have prompted efforts to examine and develop standards. The U.S. National Institute of Standards and Technology, for example, is holding workshops and discussions with the public and private sectors to develop federal standards for the creation of reliable, robust and trustworthy AI systems.”
One area receiving particular attention from lawmakers is the use of AI in mental health assessments.
Massachusetts’ HB 1974 seeks to regulate the use of AI in providing mental health services, requiring, in part, that “Any AI system used to provide mental health services must be designed to prioritize the safety and well-being of individuals seeking treatment and must be continuously monitored by a licensed mental health professional to ensure its safety and effectiveness.”
Rhode Island’s HB 6285 contains the exact same language.
“One of the obvious costs associated with replacing a significant number of human doctors with AI is the dehumanization of healthcare,” write Francesca Minerva of the University of Milan and Alberto Giubilini of the University of Oxford in a May 2023 article, “Is AI the Future of Mental Healthcare?” published by Topoi, an international philosophy journal. “The human dimension of the therapist-patient relationship would surely be diminished. With it, features of human interactions that are typically considered a core aspect of healthcare provision, such as empathy and trust, risk being lost as well. Sometimes, the risk of dehumanizing healthcare by having machines instead of persons dealing with patients might be worth taking, for instance when the expected outcomes for the patient are significantly better. However, some areas of healthcare seem to require a human component that cannot be delegated to artificial intelligence. In particular, it seems unlikely that AI will ever be able to empathize with a patient, relate to their emotional state or provide the patient with the kind of connection that a human doctor can provide. Quite obviously, empathy is an eminently human dimension that it would be difficult, or perhaps conceptually impossible, to encode in an algorithm.”
At least 11 states have considered legislation this year dealing with the use of artificial intelligence in healthcare, according to the National Conference of State Legislatures’ AI legislation tracker. Two states have enacted such measures.
For attorneys, weighing the benefits and risks of incorporating AI into healthcare can be challenging, as the laws and regulations can vary from state to state, or between technologies.
“Rather than a uniform law governing AI technologies, certain aspects of these technologies are governed by a patchwork of laws and regulations,” wrote Jennifer L. Urban and Avi B. Ginsberg of the law firm Foley & Lardner. “The resulting current regulatory framework is complex and may vary based on several factors including types of data implicated, geographic location, use case, and technical implementation. There are various efforts underway to address the regulatory gap, including the European Union’s Artificial Intelligence Act (which the European Parliament recently approved) and the Blueprint for an AI Bill of Rights published by the Biden Administration in October 2022. While this shows progress toward enacting formal AI regulation, the patchwork remains.”
Kyle Y. Faget, a partner at Foley, said regulators, particularly the U.S. Food and Drug Administration, are still trying to get their arms around how to handle AI in healthcare. She said that’s because the regulatory process is designed to evaluate products that are fixed at a point in time. AI is different because it’s constantly changing. How do you assess the safety and risk of algorithms that are learning in real time?
This is especially concerning for some, Faget said, as AI is increasingly being applied to diagnostic functions. For all these reasons, she said, she believes AI is a top priority in both the medical and regulatory communities.
“I think this is the hot issue in healthcare right now,” she said. “No question about it.”
—By SNCJ Correspondent BRIAN JOSEPH
Please visit our webpage to connect with a State Net® representative and learn how the State Net legislative and regulatory tracking solution can help you identify, track, analyze and report on relevant legislative and regulatory developments.