Business and Professions Code section 22602


(a)

If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.

(b)

(1)

An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm.

(2)

The operator shall publish details on the protocol required by this subdivision on the operator’s internet website.

(c)

An operator shall, for a user that the operator knows is a minor, do all of the following:

(1)

Disclose to the user that the user is interacting with artificial intelligence.

(2)

Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.

(3)

Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.

Source: Section 22602, https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=BPC&sectionNum=22602 (updated Jan. 1, 2026; accessed Dec. 22, 2025).
