California Leads in Setting First Principles for AI

Peter Leroe-Muñoz
General Counsel and SVP of Technology & Innovation, SVLG

Chairman of Responsible AI Working Group, SVLG

President Biden’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is a sweeping framework that seeks to lay the foundation for future regulations and requirements. 

Over the past few years, the White House has refined its understanding and approach to artificial intelligence, from issuing a Blueprint for an AI Bill of Rights to securing voluntary commitments from AI companies regarding the safe and responsible deployment of the technology. 

The most recent Presidential Executive Order follows many of the ideas and principles set forth in Governor Newsom’s September GenAI Executive Order. That executive order charts a course toward the trustworthy and efficient use of AI in the public sector by directing state agencies to explore how its implementation can improve government service delivery. 

The slow pace of federal legislation in this area, combined with California’s position as home to the AI innovation ecosystem, means Sacramento will be a leader in setting first principles for trustworthy AI rules and standards that can be replicated in other states and at the federal level.

There are several reasons why California will be a first-mover in this space. 

Congressional gridlock is historically severe, even by D.C. standards, making it difficult to get anything done. In contrast, California policymakers have already begun to consider a series of AI bills that will be voted on next year. As these regulations are set, the federal government will look to the Golden State as it fashions federal rules. California is also home to the industry leaders, entrepreneurs and research institutions that have spent considerable time thinking about AI’s foundational principles and how they will contribute to shaping the future of the technology. 

Because of this, SVLG is committed to working with policymakers, particularly in Sacramento, to develop guardrails that accelerate AI’s positive impact without slowing down innovation. 

SVLG’s Responsible AI Working Group recently unveiled our principles for responsible AI deployment: 

  1. AI should be human-centered, so that design and evolution consider how humans will use and be impacted by AI, while also preserving human control as a final arbiter. 
  2. Outcomes should be accurate and appropriate for the user, and people should know when AI is involved. 
  3. AI should avoid creating or perpetuating biases, and should be respectful of the norms of diverse communities. 
  4. Artificial intelligence systems should be secure from unauthorized access or use, with multilevel security checks. 
  5. Laws and regulations associated with AI systems should be balanced and weigh potential benefits against the challenges of different use cases.

Developing and deploying responsible AI will require private and public actors to commit to honor first principles that protect and prioritize users. California is poised to get there first and set precedent for subsequent federal rules. We look forward to continuing to engage with policymakers and regulators at all levels as we collectively build trustworthy AI rules and systems.
