Business and Professions Code section 22757.11
(a) “Affiliate” means a person controlling, controlled by, or under common control with a specified person, directly or indirectly, through one or more intermediaries.
(b) “Artificial intelligence model” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(c) (1) “Catastrophic risk” means a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident involving a frontier model doing any of the following:
(A) Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon.
(B) Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense.
(C) Evading the control of its frontier developer or user.
(2) “Catastrophic risk” does not include a foreseeable and material risk from any of the following:
(A) Information that a frontier model outputs if the information is otherwise publicly accessible in a substantially similar form from a source other than a foundation model.
(B) Lawful activity of the federal government.
(C) Harm caused by a frontier model in combination with other software if the frontier model did not materially contribute to the harm.
(d) “Critical safety incident” means any of the following:
(1) Unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury.
(2) Harm resulting from the materialization of a catastrophic risk.
(3) Loss of control of a frontier model causing death or bodily injury.
(4) A frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
(e) (1) “Deploy” means to make a frontier model available to a third party for use, modification, copying, or combination with other software.
(2) “Deploy” does not include making a frontier model available to a third party for the primary purpose of developing or evaluating the frontier model.
(f) “Foundation model” means an artificial intelligence model that is all of the following:
(1) Trained on a broad data set.
(2) Designed for generality of output.
(3) Adaptable to a wide range of distinctive tasks.
(g) “Frontier AI framework” means documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.
(h) “Frontier developer” means a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power to train the frontier model as would meet the technical specifications found in subdivision (i).
(i) (1) “Frontier model” means a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
(2) The quantity of computing power described in paragraph (1) shall include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model.
(j) “Large frontier developer” means a frontier developer that together with its affiliates collectively had annual gross revenues in excess of five hundred million dollars ($500,000,000) in the preceding calendar year.
(k) “Model weight” means a numerical parameter in a frontier model that is adjusted through training and that helps determine how inputs are transformed into outputs.
(l) “Property” means tangible or intangible property.
Source:
Section 22757.11, https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=BPC&sectionNum=22757.11 (updated Jan. 1, 2026; accessed Dec. 22, 2025).
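
Illustrative note (not part of the statute): subdivisions (i) and (j) turn on two numeric thresholds — more than 10^26 training operations, aggregated across the original training run and any later material modifications, and more than $500,000,000 in combined annual gross revenues. The sketch below is a minimal, non-authoritative illustration of those two comparisons; the function names, constant names, and example figures are assumptions introduced here, not statutory language.

```python
# Hypothetical illustration of the numeric thresholds in subdivisions (i) and (j).
# Only the two statutory thresholds (10^26 operations; $500,000,000 in revenues)
# come from the statute; everything else is an assumption for this sketch.

FRONTIER_MODEL_COMPUTE_THRESHOLD = 1e26          # integer or floating-point operations, subd. (i)(1)
LARGE_DEVELOPER_REVENUE_THRESHOLD = 500_000_000  # USD, preceding calendar year, subd. (j)


def is_frontier_model(compute_per_run: list[float]) -> bool:
    """Per subd. (i)(2), sum compute for the original training run and any
    subsequent fine-tuning, reinforcement learning, or other material
    modifications, then compare the total against the 10^26-operation threshold."""
    return sum(compute_per_run) > FRONTIER_MODEL_COMPUTE_THRESHOLD


def is_large_frontier_developer(annual_gross_revenues_with_affiliates: float) -> bool:
    """Per subd. (j), a frontier developer whose revenues, combined with its
    affiliates', exceeded $500,000,000 in the preceding calendar year."""
    return annual_gross_revenues_with_affiliates > LARGE_DEVELOPER_REVENUE_THRESHOLD


# Hypothetical example: an 8e25-operation pretraining run plus 3e25 operations of
# fine-tuning crosses the threshold in aggregate, though neither run does alone.
print(is_frontier_model([8e25, 3e25]))      # True
print(is_large_frontier_developer(6.2e8))   # True ($620,000,000 > $500,000,000)
```

The aggregation in the example mirrors subdivision (i)(2): the comparison is made against total computing power across the original run and subsequent material modifications, not against any single training run.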