Privacy & Data Security Alert

June 12, 2025

Texas Legislature Passes House Bill 149 to Regulate AI Use

By Amanda Witt, Jennie Cunningham

On May 31, 2025, the Texas legislature passed House Bill 149 (H.B. 149), the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”), and presented the bill to Governor Greg Abbott on June 2, 2025, for signature. If enacted, TRAIGA would impose a regulatory framework for AI in Texas, and Texas would join the growing number of states that have enacted AI legislation (Colorado, Utah, and California, in addition to state laws targeting certain sectors or use cases). TRAIGA primarily affects companies developing or deploying AI products available to Texas consumers, businesses using AI in customer-facing contexts, government agencies using AI for any purpose, and vendors and contractors supplying AI-related services to the government. The bill underwent significant changes during the legislative process, however, and no longer resembles a broad, risk-based framework; instead, it pairs prohibitions on certain egregious uses of AI with a reinforcement of existing state and federal protections, such as anti-discrimination statutes.

Governor Abbott has until June 22, 2025, to sign or veto the bill. Should the governor sign the bill into law or allow the 20-day period to pass without a veto, TRAIGA will take effect on January 1, 2026. The law will apply to all individuals and entities operating, developing, or deploying AI systems in Texas, or offering AI products or services to Texas residents.

Notably, even if the bill becomes law, its impact could be limited. Language in the yet-to-pass 2025 federal budget reconciliation bill would impose a 10-year moratorium on new state AI laws, potentially halting measures such as TRAIGA before they take effect.

Key Provisions

Prohibitions on Discrimination and Other Harmful and Manipulative AI Uses

If enacted, the bill will prohibit the development and deployment of AI systems that intentionally discriminate against protected classes under state and federal law. TRAIGA clarifies that existing law, which already prohibits intentional discrimination, applies to the use of AI tools. The bill also clarifies that, in line with recent guidance from the current administration, disparate impact alone is not sufficient to show discrimination.

The Texas bill aims to prohibit certain other harmful or manipulative uses of AI systems. TRAIGA bars people from developing or deploying AI systems with the sole intent of impairing any person’s rights under the U.S. Constitution. In addition, the bill makes it unlawful to use AI tools to manipulate human behavior in a manner designed to incite self-harm, violence, or criminal behavior. TRAIGA also prohibits the development or deployment of AI systems to produce or distribute certain sexually explicit content or chatbots.

Government entities are further banned from using AI systems for “social scoring”: classifying people based on social behavior or other characteristics in order to assign or estimate a scaled score for an individual, where the scoring results in certain types of harm to that individual.

Transparency

State government agencies, but not private businesses, must disclose to consumers when they are interacting with an AI system. The disclosure is required regardless of whether the AI interaction seems obvious.

Biometric Guardrails

TRAIGA updates the Biometric Identifier Act’s existing consent requirements for the collection and use of biometric data. The bill states that an individual has not consented to the collection of biometric data (defined under the bill to include retina or iris scans, fingerprints, voiceprints, and records of hand or face geometry) simply because media containing one or more of these biometric identifiers is available on the internet or was otherwise made publicly available, unless it was made publicly available by the individual to whom the biometric identifiers relate. In practice, this means that an individual’s online presence does not constitute consent to harvest that individual’s biometric data unless the individual is the one who made the online media “publicly available,” a term that is not defined in TRAIGA.

The consent requirements do not apply to financial institutions using voiceprint data, to businesses that use biometric data only to train AI systems and do not use or deploy those systems to identify individuals, or to biometric data used in AI systems for security, fraud prevention, and similar purposes.

TRAIGA also prohibits government entities from developing or deploying AI systems to perform biometric identification of individuals, or to identify individuals using images and other media gathered from the internet, without obtaining the individual’s consent, if gathering that data would infringe on the individual’s constitutional rights or violate state or federal law. This prohibition extends to “data generated by automatic measurements of an individual’s biological characteristics,” including fingerprints, voiceprints, eye retina or iris scans, or other unique biological patterns or characteristics used to identify a specific individual, with certain exceptions.

AI Sandbox Program

The bill appears to have been designed to allow AI innovation to continue while addressing specific types of harm to consumers. In that vein, the bill allows businesses to test AI systems for up to 36 months in a controlled environment without the need for full regulatory compliance. Businesses that participate in the sandbox must submit quarterly reports on performance, risk mitigation, and consumer and stakeholder feedback for the system in question.

Enforcement and Oversight

Under the bill’s enforcement provisions, the state Attorney General has the sole authority to enforce AI regulations and may do so regardless of where the AI system is based. The bill includes no private right of action; however, individuals may file complaints with the state Attorney General’s office through an online reporting mechanism. Remedies include civil penalties of up to $200,000 for incurable violations, fines of up to $40,000 per day for continued violations, and injunctive relief. The bill, however, provides a cure period and a safe harbor related to NIST compliance, and sets out various exemptions from penalties, such as where a violation is discovered through red-team testing.

The Texas Artificial Intelligence Council, attached to the Department of Information Resources, will be tasked with ensuring AI systems in Texas are ethical and developed in the public’s best interest. The Council may offer guidance in the form of non-binding reports to the legislature on the regulation and use of AI systems in Texas. The Council will also be tasked with implementing training programs on the use of AI systems for state and local government.

Conclusion

TRAIGA addresses certain harms related to AI systems but, on balance, the enrolled version is a decidedly innovation-friendly regulation rather than a broad, risk-based framework. While the bill creates some new obligations and guardrails targeted at protecting civil liberties and preventing behavioral manipulation, TRAIGA relies in large part on reinforcing existing constitutional and legal rights and prohibitions. The harshest regulations are aimed at governmental agencies, while private businesses are afforded more space to innovate within the civil liberties guardrails contained in the bill.

For more information, please contact Amanda Witt and Jennie Cunningham.