
Key Takeaways:

  • Artificial intelligence (AI) will be one of the most prevalent issues addressed by state lawmakers this year. In addition to the flurry of expected bill introductions on the subject, state regulators are also preparing for the proliferation of AI tools.
  • Recently, the California Privacy Protection Agency (CPPA) released a draft of regulations on businesses' use of “automated decision making technology.” And the breadth of the draft text has put many businesses, even outside of the tech space, on notice.
  • This entire saga reveals a few themes regarding the challenges of regulating AI. First is the huge interest in regulating this issue, even if policymakers haven’t settled on the best way to move forward with regulation. Second is that we’re still in the very early innings of AI regulation. Lawmakers are still doing their best to get educated on this complex issue.


As lawmakers enter the 2024 legislative sessions, artificial intelligence (AI) is squarely within their crosshairs. In addition to the flurry of expected bill introductions on the subject, state regulators are also preparing for the proliferation of AI tools. Recently, the California Privacy Protection Agency (CPPA) released a draft of regulations on businesses' use of “automated decision making technology.” And the breadth of the draft text has put many businesses, even outside of the tech space, on notice. 

The draft regulations in California would establish a framework for how businesses can implement automated decision making technology (ADMT) that uses personal information to make a decision or acts as a replacement for human decision making. The draft text outlines a notice requirement: consumers must be told that ADMT is being used, and that notice must include a specific, plain-language explanation of how the business is using the technology. Another key issue the CPPA draft regulations address is when a consumer can and cannot opt out of a business's use of this technology.

Last week, the CPPA held a public board meeting where board members criticized the draft regulatory text as so broad that it could cover essentially any technology. We’ve previously discussed how properly defining “AI” and its key terms is a critical challenge for policymakers looking to regulate AI. If the definition is too broad, you end up regulating a ton of widely deployed technologies that predated the current AI boom; if the definition is too narrow, you open potential loopholes. Right-sizing the scope of a regulatory target is a challenge policymakers are accustomed to, but AI is so abstract and rapidly evolving that they are unlikely to hit on the correct scope right out of the gate.

The CPPA board members also had issues with the opt-out provisions of the draft text, which allow consumers to opt out of having their information collected, evaluated, and used by an ADMT. Board members noted that the current opt-outs could permit consumers to opt themselves out of nearly any technology. As a result, the CPPA Board directed staff to prepare revised drafts that take into account the feedback from board members. The Board is expected to meet again early next year.

This entire saga reveals a few themes regarding the challenges of regulating AI. First is the huge interest in regulating this issue, even if policymakers haven’t settled on the best way to move forward with regulation. A repeated refrain we’ve heard from policymakers looking to regulate AI is regret that lawmakers failed to act on social media, along with a determination not to make the same mistake with AI. Second is that we’re still in the very early innings of AI regulation. Lawmakers are still doing their best to get educated on this complex issue. The CPPA sent its draft regulations back to the drawing board before industry had even provided input. Establishing a broad regulatory framework is going to be a monumental task, which is why we’re going to see a lot of targeted bills aimed at specific AI issues like deepfakes, bias in technologies like facial recognition, and other problematic aspects of AI as they crop up.

Now that legislative sessions are underway, state lawmakers are off to the races on regulating AI. And as the California regulatory case shows, these laws will pull in many widespread technologies and industries beyond “AI” as we think of it today and beyond just “tech” firms.


multistate.ai

To help you make sense of state activity on artificial intelligence and other emerging technology, we launched multistate.ai, a resource website and weekly update highlighting key developments in state AI policy and diving deep into select issue areas.


Morning MultiState

This article appeared in our Morning MultiState newsletter on December 12, 2023. For more timely insights like this, be sure to sign up for our Morning MultiState weekly morning tipsheet. We created Morning MultiState with state government affairs professionals in mind — sign up to receive the latest from our experts in your inbox every Tuesday morning. Click here to read past issues and sign up.