Tennessee - Session 114
Title: AN ACT to amend Tennessee Code Annotated, Title 4; Title 10, Chapter 7; Title 47; Title 58 and Title 68, relative to artificial intelligence.
This bill prohibits a frontier model developer (excluding an accredited college or university to the extent that the college or university is developing or using frontier models exclusively for academic research purposes) or a "large chatbot provider" from making a materially false or misleading statement about a catastrophic risk or a child safety risk (together, a "covered risk") arising from the developer's or provider's activities or its management of covered risks. A "large chatbot provider" is a person who makes a covered chatbot available in this state and who, together with the person's affiliates, collectively had annual revenue of at least $25 million. A "covered chatbot" is a service that allows an ordinary person to have conversations in which humanlike responses are generated by a foundation model, is foreseeably likely to be accessed by minors, and has at least 1 million monthly active users. Further, a "large frontier developer" (a frontier developer that, together with its affiliates, collectively had annual revenue of at least $500 million) and a large chatbot provider are prohibited from making a materially false or misleading statement about their implementation of, or compliance with, their public safety plan or child safety plan.

Large Frontier Developers

This bill requires a large frontier developer to implement and clearly publish on its website a public safety plan that describes in detail how the large frontier developer defines and assesses the thresholds used by the developer to determine whether a frontier model has capabilities that could pose a catastrophic risk.
As used in this bill, a "catastrophic risk" means a foreseeable and material risk that a frontier developer's development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, 50 people or more than $1 billion in damage to, or loss of, property arising from a single incident involving (i) providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon; (ii) engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; or (iii) evading the control of its frontier developer or user. However, catastrophic risk does not include a foreseeable and material risk from (i) information that a frontier model outputs if the information is otherwise publicly accessible in a similar form; (ii) lawful activity of the federal government; or (iii) harm caused by a frontier model in combination with other software if the frontier model did not materially contribute to the harm.

This bill requires the plan to also describe how the large frontier developer addresses all of the following:

- Applies mitigations to address the potential for catastrophic risks.
- Reviews assessments of catastrophic risk as part of the decision to deploy a frontier model or use it internally.
- Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
- Implements cybersecurity practices to secure unreleased frontier model weights (the numerical parameters in a frontier model that are adjusted through training and that help determine how inputs are transformed into outputs) from unauthorized modification or transfer by internal or external parties.
- Assesses and manages catastrophic risk resulting from the internal use of the frontier developer's frontier models.
- Incorporates national standards, international standards, and industry-consensus best practices into the large frontier developer's public safety plan.
- Revisits and updates the public safety plan.
- Identifies and responds to "critical safety incidents," which means (i) unauthorized access to, or modification, inadvertent release, or exfiltration of, the model weights of a frontier model; (ii) the death of, or serious injury to, more than 50 people or more than $1 billion in damage to, or loss of, property resulting from the materialization of a catastrophic risk; (iii) loss of control of a frontier model that causes death or bodily injury, or that demonstrates materially increased catastrophic risk; or (iv) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit such behavior and in a manner that demonstrates materially increased catastrophic risk. Loss of value of equity does not count as damage to or loss of property.
- Institutes internal governance practices to ensure implementation of the public safety plan.

This bill requires a large frontier developer to clearly publish any material modifications to its public safety plan, along with a justification for the modification, within 30 days of making the material change. Before a large frontier developer deploys a new frontier model, this bill requires the large frontier developer to publish summaries of any assessments of catastrophic risks from the frontier model, the results of the assessments, the extent to which third-party evaluators were involved in the assessments, and other steps taken to fulfill the requirements of the public safety plan.
This information may be published as part of a larger document, including a system card or model card.

Large Chatbot Providers

This bill requires a large chatbot provider to implement and clearly publish on its website a child safety plan that describes in detail how the large chatbot provider assesses the potential for child safety risks. As used in this bill, a "child safety risk" means a material and foreseeable risk that a frontier developer's artificial intelligence model that is trained on a broad data set, designed for generality of output, and adaptable to a wide range of distinctive tasks (a "foundation model"), when used as part of a covered chatbot operated by the frontier developer, will engage in behavior when interacting with a minor that, if it had been engaged in by a human, would be deemed to intentionally or recklessly cause death or bodily injury to the minor, including as a result of self-harm, or damage to the mental health of the minor that constitutes severe emotional distress.

This bill requires the plan to also describe how the large chatbot provider addresses all of the following:

- Applies mitigations to address the potential for child safety risks.
- Uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks.
- Incorporates national standards, international standards, and industry-consensus best practices into the large chatbot provider's child safety plan.
- Revisits and updates the child safety plan.
- Identifies and responds to "child safety incidents," which means a covered chatbot engaging in behavior when interacting with a minor that, if the behavior had been engaged in by a human, would be deemed to intentionally or recklessly cause death or bodily injury to the minor, or damage to the mental health of the minor that constitutes severe emotional distress.
- Institutes internal governance practices to ensure implementation of the child safety plan.
This bill requires a large chatbot provider to clearly publish any material modifications to its child safety plan, along with a justification for the modification, within 30 days of making the material change. Before a large chatbot provider integrates a foundation model into a covered chatbot, this bill requires the large chatbot provider to publish summaries of any assessments of child safety risks, the results of the assessments, the extent to which third-party evaluators were involved in the assessments, and steps taken to fulfill the requirements of the large chatbot provider's child safety plan.

Confidentiality

This bill authorizes a large frontier developer or large chatbot provider to make redactions to published safety plan documents if the redactions are necessary to protect the developer's or provider's trade secrets; to protect cybersecurity, public safety, or the national security of the United States; or to comply with federal or state law. However, the developer or chatbot provider must describe the character and justification of the redactions and retain the redacted information for at least five years.

REPORTING OF INCIDENTS

This bill requires the attorney general to establish a means for reporting a child safety incident or a critical safety incident (together, a "safety incident"). The form must allow the report to include, at a minimum, the date of the safety incident, the reasons the incident qualifies as a safety incident, and a short and plain statement describing the safety incident. A frontier developer is required to report a critical safety incident to the attorney general within 15 days of discovery. If a critical safety incident poses an imminent risk of death or serious physical injury, then this bill requires the frontier developer to disclose the incident within 24 hours to an appropriate authority.
This bill requires a large chatbot provider to report a child safety incident to the attorney general within 15 days of discovery. This bill also requires the attorney general to establish a mechanism for a large frontier developer to confidentially submit summaries of any assessments of catastrophic risks resulting from internal use of frontier models. Large frontier developers must transmit to the attorney general a summary of such assessments beginning on January 1, 2027, and every three months thereafter.

This bill authorizes the attorney general to transmit reports of safety incidents, summaries of assessments of catastrophic risk resulting from internal use, and reports from employees to the general assembly, governor, federal government, or appropriate state agencies. In transmitting such reports, the attorney general may consider any risks related to trade secrets, public safety, cybersecurity, or national security.

This bill requires the department of safety and the attorney general to designate one or more federal laws or guidance documents that impose standards or requirements for safety incident reporting that are equivalent to or stricter than the reporting requirements described above and that are intended to assess, detect, or mitigate catastrophic or child safety risk. If a frontier developer or large chatbot provider intends to comply with the reporting requirements of such a designated federal law or guidance, then it must declare its intent to do so to the attorney general and the department of safety. After declaring such intention, the frontier developer or large chatbot provider is in compliance with this bill to the extent that it meets the requirements of the designated federal law or guidance. However, the failure of a frontier developer or large chatbot provider to comply with the designated federal law or guidance constitutes a violation of this bill.
CIVIL PENALTIES

This bill provides that a large frontier developer that violates this bill is subject to a civil penalty of up to $1 million per violation for a first violation and up to $3 million per violation for subsequent violations. By contrast, a large chatbot provider that violates this bill is subject to a civil penalty of up to $50,000 per violation. The attorney general has the exclusive right to enforce this bill.

RULEMAKING

This bill authorizes the department of safety, in consultation with the office of the attorney general, to promulgate rules to effectuate this bill.

APPLICABILITY

This bill applies to conduct occurring on or after January 1, 2027.
| Date | Event | Detail |
|---|---|---|
| 2026-01-22 | Introduced | Bill introduced |
| 2026-03-25 | Status | in_committee |
| 2026-03-25 | Latest Action | Placed on cal. Government Operations Committee for 3/30/2026 |

| Bill | Title | Status |
|---|---|---|
| HB 1441 | AN ACT to amend Tennessee Code Annotated, Title 38; Title 39 and Title 40, relative to criminal impersonation. | in_committee |
| HB 1628 | AN ACT to amend Tennessee Code Annotated, Title 4, Chapter 3, Part 22, relative to tourism. | in_committee |
| HB 1631 | AN ACT to amend Tennessee Code Annotated, Title 4, Chapter 57; Title 43, Chapter 21 and Section 48-101-502, relative to exhibitions. | enrolled |
| HB 1639 | AN ACT to amend Tennessee Code Annotated, Title 4, Chapter 29; Title 4, Chapter 3, Part 20 and Section 38-3-114, relative to the office of homeland security. | in_committee |
| HB 1642 | AN ACT to amend Tennessee Code Annotated, Section 10-7-504, relative to the expiration dates of public records exceptions. | enrolled |
| HB 1705 | AN ACT to amend Tennessee Code Annotated, Title 4; Title 5; Title 6; Title 7; Title 8; Title 12 and Title 50, relative to employment. | in_committee |
| HB 1710 | AN ACT to amend Tennessee Code Annotated, Title 4, Chapter 1 and Title 4, Chapter 58, relative to public benefits. | in_committee |
| HB 1770 | AN ACT to amend Tennessee Code Annotated, Title 63, Chapter 6 and Title 63, Chapter 9, relative to the practice of medicine. | in_committee |