Government & Politics

2019-2022 State-Level Artificial Intelligence Legislation Tracker

By Caroline Thompson

Artificial intelligence has the potential to disrupt U.S. politics, business, and social life. The rapid advance of AI technology over the last decade follows the exponential growth in computer performance, the increased availability of large datasets to train machine learning algorithms, advances in machine learning techniques, and significant and rapidly growing commercial development.

Rapid innovation in artificial intelligence has created a disconnect between industry and government efforts to regulate and manage its proliferation. The private sector currently drives progress in AI development and investment, creating governance gaps around how AI is developed and the displacement it may cause. Congressional action has lagged behind. If policy is not built to guide technology, technology will guide policy.

In the absence of comprehensive U.S. federal AI legislation, 30 states and the District of Columbia have introduced over 150 AI-related bills since 2019. An analysis of the 152 pieces of state-level AI legislation proposed between 2019 and 2022 reveals growing interest among state legislatures in researching, harnessing, and regulating the public impact of artificial intelligence.

As state legislation forms to address the concerns surrounding artificial intelligence-enabled applications, federal legislation will likely echo similar priorities: supporting privacy and transparency, mitigating bias, and investing in research and development to maintain a technological lead over strategic competitors. And while state-level policies are nominally limited to economic and civil activities within their borders, the policies states enact often affect activities far more broadly.

The 2019-2022 State-Level Artificial Intelligence Legislation Tracker provides greater insight into the breadth and depth of these trends, offering a preview of likely goals for national-level AI legislation as AI is further integrated into the lives of Americans.

Key Findings

While many proposals never made it out of session, 27 AI laws and statutes were passed by 14 states between 2019 and 2022. Of these 27 pieces, 16 related to the creation or scope of work of a state-level task force dedicated to researching the opportunities and threats of artificial intelligence within a state. Over half of the enacted rules relate to privacy, transparency, and bias (PTB) or research and development (R&D). Across all 152 proposed pieces of legislation, just under two-thirds fall within the PTB and R&D categories. While states prioritize a better understanding of this emerging technology, they are also attempting to mitigate the potential adverse effects of AI-enabled technologies related to bias and data privacy.

AI proposals have grown substantially over the last four years, rising from 36 proposals in 2019 to 58 in 2022. Patterns emerge among the states enacting AI legislation. Not surprisingly, legislation has moved quickly in states home to technology hubs: Silicon Valley, New York City, Seattle, Austin, and Denver. About a third of all proposed legislation comes from just five states. As investment, research, and development in AI technologies increase, it is understandable that the field would attract the attention of state legislatures eager to capitalize on the emerging technology's strengths while also trying to mitigate its pitfalls.

The recent spread of proposed AI legislation shows that concern extends from coast to coast, but not all states are moving at the same pace. While some states, including New Jersey, Massachusetts, Georgia, and Nevada, have yet to pass legislation creating task forces to analyze the use cases and impact of AI within their states, others, including Illinois, California, Idaho, and Utah, have already developed research programs and enacted transparency and disclosure requirements.

Additionally, 20 states have no record of proposed AI legislation over this period. Many of these states are within regions associated with blue-collar industries, including farming, mining, energy, and production. While flashy advances in AI technologies such as ChatGPT or Stable Diffusion garner the most attention, some of the most consequential changes will come from production-level AI-enabled automation. Therefore, the absence of state-level AI legislation in the farming and manufacturing-rich regions of the Midwest and Appalachia should not be conflated with an absence of influence from artificial intelligence.

While the increase in state-level proposed AI legislation indicates a heightened focus on the impact of emerging technologies within specific states, AI-enabled technologies operate without borders. As states continue to enact AI-related legislation, they will begin to identify and define the opportunities and threats posed by artificial intelligence, a necessary step before enacting stringent regulations.

As more states create regulations and requirements for developing and deploying AI-enabled technologies, conflicts may arise between competing statutes across state lines. Conflicting rules may leave users unable to access certain products or processes within particular states, or may change which markets companies find most attractive. Further analysis is required to understand the specific commonalities and distinctions between enacted state-level legislation nationwide. As national AI policy debates continue, these local and state-level policies will play prominent roles in shaping AI policy beyond state lines.

Methodology

The 2019-2022 State-Level Artificial Intelligence Legislation Tracker dataset is built from information on pending, failed, and enacted pieces of state-level legislation related to artificial intelligence available through LegiScan and the National Conference of State Legislatures. Each piece of legislation is coded based on its topic and goal.

Topics

Topics include privacy, transparency, bias, research and development, responsible AI, review, and social impact. Each piece may be related to more than one topic but has been coded based on an overarching theme.

Privacy, Transparency, and Bias (PTB): Transparency around the ways in which algorithms utilize private and public data is necessary to monitor and evaluate algorithms for bias. Proposed legislation related to PTB is typically intended to achieve a goal of disclosure.

Research and Development: Legislation related to investment and efforts into better understanding, preparation for, and development of AI-enabled tools or strategy.

Responsible Artificial Intelligence: Legislation whose main thematic element is creating and enforcing principles of responsible AI, such as accountability, reliability, and safety, is coded as Responsible Artificial Intelligence.

Review: Many states are in the beginning stages of analyzing the existing impact and use cases of AI-enabled tools and algorithms within their states. Legislation intended to review the current state of the field is coded as Review.

Social Impact: Proposed legislation citing concerns about the social impact of AI-enabled algorithms, particularly regarding social media, creditworthiness, and eligibility decision-making, is categorized under the topic Social Impact.

Goals

Legislative goals include disclosure, prohibition, task forces, and a miscellaneous category. Each piece of legislation is assigned one goal based on the intended outcome of the legislation's theme; a brief sketch of how a coded record might look follows the definitions below.

Disclosure: Proposed legislation created with the intent to enforce disclosure of the reasons for utilizing AI-enabled systems, the use of such systems, and/or their impact.

Prohibition: Proposed legislation meant to restrict the use cases of AI-enabled systems.

Task Force: Proposed legislation intended to establish a new reviewing committee, task force, or office related to AI-enabled systems.

Other: Proposed legislation with an intended outcome that does not align with the more common prior categories. This includes efforts to establish education subsidies and incentives, host AI conferences, and make various funding or language amendments to substantive legislation.
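As a rough illustration of how this coding scheme could translate into a working dataset, the sketch below shows one way a single tracker record might be represented in Python. The field names, enumeration values, and the example bill are hypothetical assumptions for illustration, not the tracker's actual schema or data.

# A minimal, hypothetical sketch of how one tracker record might be coded.
# All field names, enum values, and the example bill below are illustrative
# assumptions, not the tracker's actual schema or data.
from dataclasses import dataclass
from enum import Enum

class Topic(Enum):
    PTB = "Privacy, Transparency, and Bias"
    RND = "Research and Development"
    RESPONSIBLE_AI = "Responsible Artificial Intelligence"
    REVIEW = "Review"
    SOCIAL_IMPACT = "Social Impact"

class Goal(Enum):
    DISCLOSURE = "Disclosure"
    PROHIBITION = "Prohibition"
    TASK_FORCE = "Task Force"
    OTHER = "Other"

@dataclass
class Bill:
    state: str    # two-letter state abbreviation
    year: int     # year introduced, 2019-2022
    bill_id: str  # identifier assigned by the state legislature
    status: str   # "pending", "failed", or "enacted"
    topic: Topic  # overarching theme (one per bill)
    goal: Goal    # intended outcome (one per bill)

# Illustrative record only: a fictional bill establishing an AI study committee.
example = Bill(state="XX", year=2021, bill_id="HB 0000", status="enacted",
               topic=Topic.REVIEW, goal=Goal.TASK_FORCE)

Coding each bill to exactly one overarching topic and one goal, as described above, keeps the category counts reported in the Key Findings mutually exclusive.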


About the Author:

Caroline Thompson is currently pursuing an MA in U.S. Foreign Policy and National Security, focusing on emerging technologies and cybersecurity, with an interest in biotech, big data, and artificial intelligence.


*THE VIEWS EXPRESSED HERE ARE STRICTLY THOSE OF THE AUTHOR AND DO NOT NECESSARILY REPRESENT THOSE OF THE CENTER OR ANY OTHER PERSON OR ENTITY AT AMERICAN UNIVERSITY.
