Few technology innovations in recent memory have touched off as wide a range of emotions in public conversation as the nascent category of generative artificial intelligence (AI) tools. Depending on whom you ask, generative AI represents an exciting breakthrough in human creativity, a terrifying new world of deepfakes run amok, or both.
After spotting the tech industry a head start since the launch of ChatGPT—the first commercially available generative AI tool—in November 2022, U.S. legislators and regulators have quickly entered the fray with various attempts to get their arms around this revolutionary technology.
And as is so often the case, state and local governments are moving much faster and with greater results than the federal government.
“Concerns about potential misuse or unintended consequences of AI have prompted efforts to examine and develop standards,” reported the National Conference of State Legislatures (NCSL). “State lawmakers are considering AI’s benefits and challenges — a growing number of measures are being introduced to study the impact of AI or algorithms and the potential roles for policymakers.”
In fact, more than two-thirds of the states have now introduced or passed bills that seek to address AI governance or ethics issues since the start of 2023, according to the LexisNexis® State Net® legislative tracking database.
Given the rapid growth and transformation of generative AI applications, it is certain that this early legislation is just the tip of the iceberg. For professionals charged with overseeing corporate compliance, this is likely to be the new frontier of legislative monitoring and regulatory compliance.
Legislative Themes
The State Net Capitol Journal™ reports that most of the measures introduced to date by state legislators involve the creation of task forces or government agencies to oversee how AI technologies are deployed in their states, while others are striving to impose specific regulatory conditions right away.
For example, one interesting bill in Massachusetts takes direct aim at ChatGPT and similar generative AI models with a number of proposed guardrails to restrain the technology as it develops. A California measure would require the deployers of AI products to perform annual impact assessments on any AI tools they build or use.
Here are some of the primary topics that state legislators are targeting with their AI-related bills in 2023, based on analysis by NCSL:
Government Use and Oversight of AI
An Arizona bill would establish an automated law enforcement crime victim notification system that leverages conversational AI technology. A California proposal would require an interagency review and inventory of all high-risk automated decision systems that utilize AI. A Pennsylvania bill would establish a registry of businesses operating AI systems in the state and vest regulators with oversight of the industry.
Employment and the Workforce
A Massachusetts proposal has the lofty goal of “preventing a dystopian work environment” by regulating the use of AI in the workplace. In Maryland, a bill would establish a technology grant program to provide financial aid to small and medium-sized manufacturing companies seeking to leverage AI technologies. A proposal in North Carolina seeks to document the impact of AI and automation on the state’s workforce.
Healthcare
A bill in Maine addresses healthcare facility staffing by prohibiting the use of AI for monitoring patients. A Georgia proposal seeks to regulate the use of AI devices and equipment in vision care. Another Massachusetts bill proposes to regulate the use of AI in the provision of mental health services; a similar bill has been introduced in Texas. This is an important area to monitor as state regulators strive to ensure that patients’ privacy rights and treatment protocols are protected amid the expansion of AI-driven mental health care.
Deepfakes and Intimate Images
Proposed legislation in Texas targets the use of AI in the creation of “intimate visual material” that depicts another person. A bill in Minnesota would make it a crime to disseminate “deep fake” sexual images without the consent of the depicted person and would establish a cause of action for aggrieved individuals.
Disclosure of AI Use
A number of measures have been introduced that would require the disclosure of AI use in content such as: publicly displayed images and videos (Illinois); advertising (New York); social media (Illinois); and political campaigns (Washington).
State Net can help businesses monitor these emerging AI developments by providing access to a comprehensive database of legislative and regulatory activity at all levels of government, along with in-depth analysis that helps executives understand the potential impact of new AI-related legislation and plan accordingly.
Eager to learn about the AI legislation impacting your company? Request your complimentary personalized AI Report, powered by State Net, here.