Governor Gavin Newsom signed an executive order on Monday establishing new requirements for artificial intelligence companies that want to secure state contracts. The move aims to cement California’s role as a leader in the ethical development of AI, setting a high bar for safety, privacy, and responsible use while national policies are being dismantled.

The order directs state agencies to develop procurement processes that scrutinize AI vendors based on their policies and safeguards. The state wants to ensure the technology it adopts cannot be used to exploit user data, compromise security, or violate civil rights. This initiative stands in direct contrast to what the governor’s office describes as the Trump administration’s rollback of protections and contracting standards at the federal level.

As the fourth-largest economy in the world and the global epicentre of the tech industry, California could create a ripple effect with its new rules, influencing AI development standards for companies far beyond the state’s borders. The order also includes a provision allowing California to separate its procurement authorization process from the federal government’s, granting the state more autonomy in vetting its technology partners.

A new standard for AI procurement

The executive order directs the Government Operations Agency to formulate a new plan for state contracting that vets AI companies on their commitment to public safety. Vendors seeking business with the state will need to attest to and explain their policies for preventing the misuse of their technology.

California’s always been the birthplace of innovation. But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk. California leads in AI, and we’re going to use every tool we have to ensure companies protect people’s rights, not exploit them or put them in harm’s way. While others in Washington are designing policy and creating contracts in the shadow of misuse, we’re focused on doing this the right way.
— Gavin Newsom, Governor of California

The state’s vetting process will focus on three key areas of risk: the exploitation or distribution of illegal content, the potential for AI models to display bias, and the violation of civil rights and free speech. Companies will have to demonstrate they have the technology and policies in place to prevent such outcomes.

Furthermore, the order tasks the California Department of Technology with creating recommendations for watermarking AI-generated images and manipulated videos. This measure, a first of its kind nationwide, aims to combat the spread of deepfakes and misinformation, consistent with existing state law.

Governor Newsom signs an executive order establishing new standards for AI companies conducting business with California.

California’s leadership in the age of AI

The executive order seeks to leverage California’s immense economic and cultural influence. The state is home to 33 of the top 50 privately held AI companies globally and leads the United States in AI patents and research papers. From the third quarter of 2024 to the second quarter of 2025, the Bay Area alone attracted 51 per cent of all U.S. venture capital funding for AI startups, far outpacing New York and Boston combined.

This concentration of talent and capital gives California a unique position to shape the industry’s future. The state is home to tech giants like Google, Apple, and Nvidia, all of which are deeply involved in AI and have created hundreds of thousands of jobs. According to the 2025 Stanford AI Index, California had the highest demand for AI talent in the U.S. in 2024, accounting for 15.7 per cent of all job postings.

The governor’s office emphasized that this leadership comes with a responsibility to establish safeguards. The state has already passed laws to guide the development of frontier AI models, protect children online, crack down on sexually explicit deepfakes, and prevent scams from AI-generated robocalls. The new executive order builds on this legislative foundation.

Addressing workforce and public concerns

Recognizing that the rapid advancement of AI raises questions about job security, the governor also announced a new statewide engagement effort through the "Engaged California" platform. This digital democracy tool will be used to gather public input on how the state should respond to AI’s impact on the workforce and the economy.

The platform was previously used in pilot programs, including helping shape recovery efforts after the Los Angeles firestorms and soliciting efficiency ideas from state employees. This will be its first statewide deployment, inviting all Californians to contribute to the conversation around AI policy.

This public engagement initiative is presented as a sharp contrast to the federal government’s approach, which the governor’s office claims has failed to pass even basic protections related to AI. In addition to public consultations, California has partnered with companies like Nvidia, Google, and Microsoft to expand AI training for over two million students and faculty in the state’s public high schools, community colleges, and universities.

The state is also committed to expanding its own use of GenAI to improve services. One new tool will help Californians navigate available government programs and benefits based on life events, such as starting a business or looking for a job. The upcoming launch of the statewide "Engaged California" effort in the coming months will provide a direct channel for residents to help guide the future of AI in the state.