On Tuesday, August 29, the Silicon Valley Leadership Group’s Responsible AI Working Group met with state Assembly members, senators, and legislative staff representing diverse regions across California to discuss the state of artificial intelligence, private-sector use cases, and potential applications in government operations.
SVLG member companies, including Amazon, Amazon Web Services, DLA Piper and others, joined working group co-chairs Google and Johnson & Johnson for two sessions on AI’s positive use cases and potential pitfalls. In conversations with lawmakers, the working group emphasized that this cutting-edge technology must be used in ways that are human-centric, produce explainable outcomes, and remain authentic and equitable.
“It’s important for government leaders and industry to proactively engage in conversations to ensure that AI models are deployed in a responsible manner,” said Javier González, Google’s Head of Government Affairs, Public Policy, and External Affairs for California. “We look forward to being part of further conversations about creating an AI future that is safe, inclusive, and unlocks the economic opportunities and benefits.”
Sameer Desai, Assistant General Counsel for Johnson & Johnson, offered insight into how the company is pushing the boundaries of healthcare with AI. “This technology makes great doctors even better,” he said. “Physicians and specialists are able to use AI to augment and expand their skill set to more efficiently and effectively serve patients. We look forward to continuing the conversation with California lawmakers on how AI can be used responsibly to improve patient outcomes.”
For their part, legislators shared their interest in the technology, as well as questions about AI’s impact on jobs, workforce training, and digital security. Conversations focused heavily on the need for government investment in AI workforce development programs and on the technology’s potential to streamline government services.
One idea raised was the use of cleaned and anonymized public data as training data for future applications in government. Concerns were also voiced over how to keep such data and technology out of the hands of malicious actors already using deepfakes and AI-enabled voice clones to scam unsuspecting victims.
Policymakers and the Responsible AI Working Group agreed that this was the first of many discussions between industry and legislators, and both look forward to further engagement on policy and regulatory action in the future.