Sunday, March 8, 2026

Parents are calling on New York’s governor to sign landmark artificial intelligence safety bill


On Friday, a group of more than 150 parents sent a letter to New York Governor Kathy Hochul, urging her to sign the Responsible Artificial Intelligence Safety and Education (RAISE) Act into law. The RAISE Act is a high-profile bill that would require developers of frontier AI models – such as Meta, OpenAI, DeepSeek and Google – to create safety plans and adhere to transparency requirements around reporting safety incidents.

The bill passed both the New York State Senate and Assembly in June. But this week, Hochul reportedly proposed a near-total rewrite of the RAISE Act to make it more favorable to tech companies, similar to some of the changes made to California’s SB 53 after major artificial intelligence companies expressed concern.

Not surprisingly, many AI companies are strongly opposed to the legislation. The AI Alliance – which counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks and Hugging Face among its members – sent a letter in June to New York lawmakers describing their “deep concern” about the RAISE Act, calling it “unworkable.” And Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), OpenAI president Greg Brockman and Palantir co-founder Joe Lonsdale, has targeted New York State Assemblymember Alex Bores, who co-sponsored the RAISE Act, with recent ads.

Two organizations, ParentsTogether Action and the Tech Oversight Project, organized Friday’s letter to Hochul, which states that some of the signatories have “lost children to AI chatbots and social media.” The signatories called the RAISE Act in its current form a “bare minimum” guardrail that should become law.

They also stressed that the bill passed by the New York State Legislature “does not regulate all AI developers – only the largest companies that spend hundreds of millions of dollars annually.” Those companies would have to disclose large-scale safety incidents to the attorney general and publish safety plans. Developers would also be prohibited from releasing a frontier model “if doing so would create an unreasonable risk of critical harm,” which is defined as the death or serious injury of at least 100 people, or at least $1 billion in damages to money or property, arising from the creation of chemical, biological, radiological or nuclear weapons; or from an AI model that “operates without meaningful human intervention” and, “if committed by a human,” would constitute certain crimes.

“Big Tech’s covert opposition to these basic protections feels familiar because we have seen this pattern of avoidance and evasion before,” the letter reads. “The widespread harm done to young people – including to their mental health, emotional stability and ability to function at school – has been widely documented since major tech companies embraced algorithmically driven social media platforms without transparency, oversight and accountability.”
