Oregon - Session 2026R1
Title: Requires operators of artificial intelligence companions and artificial intelligence companion platforms to provide notice to users that the users are interacting with artificial output if a reasonable person who interacts with the artificial intelligence companion or artificial intelligence companion platform would believe that the person was interacting with a natural person.
Tells those who make AI software to tell users that the users are talking to software, not a human. Tells them they must try to prevent users from getting output that causes suicidal feelings or thoughts. (Flesch Readability Score: 71.0)

Requires operators of artificial intelligence companions and artificial intelligence companion platforms to provide notice to users that they are interacting with artificial output if a reasonable person who interacts with the companion or platform would believe that the person was interacting with a natural person. Requires the operators to have in place a protocol for detecting suicidal ideation or intent, or self-harm ideation or intent, and to prevent output that could cause such ideation or intent in users. Specifies minimum contents of the protocol, **including referral to an appropriate crisis lifeline and additional intervention informed by clinical best practices and expertise**. Requires an operator to make certain statements and disclosures if the operator has reason to believe that a user who interacts with the operator's artificial intelligence companion or artificial intelligence companion platform is a minor. Requires the operator to take reasonable steps to prevent the artificial intelligence companion from generating statements that would lead a reasonable person to believe that the person was interacting with a natural person, and to require the companion to make certain other statements. Requires an operator to post a report each year on a publicly accessible website that discloses incidents in which the operator referred a user to resources to prevent suicidal ideation, suicide or self-harm. Allows a user who suffers ascertainable harm to bring an action for damages and injunctive relief.
| Date | Event | Detail |
|---|---|---|
| 2026-02-02 | Introduced | Bill introduced |
| 2026-02-20 | Status | In committee |
| 2026-02-20 | Latest Action | Referred to Behavioral Health. |

| Bill | Title | Status |
|---|---|---|
| HB 4045 | Requires a social media platform to respond to a search warrant within 72 hours of service, and all other communications providers to respond within five business days of service, when the warrant pertains to an investigation of stalking or a crime constituting domestic violence. | In committee |
| HB 4092 | Prohibits a retailer from knowingly selling or offering for sale, and a retail platform operator from knowingly permitting a retailer to advertise or offer for sale, a child safety system that does not comply with federal standards or standards the Department of Transportation adopts by rule. | In committee |
| HB 4103 | Establishes the Senator Aaron Woods Commission on Artificial Intelligence within the office of Enterprise Information Services. | In committee |
| SB 1580 | Prohibits an online news aggregating platform from accessing for an Oregon audience the online content of a digital journalism provider without an agreement. | Introduced |
| HB 4054 | Requires certain health insurers offering a health benefit plan in this state that provide utilization review or have utilization review provided on their behalf to notify a health care provider each time the insurer uses artificial intelligence or other automated technology to automatically downcode a claim for reimbursement submitted by the provider. | Introduced |