According to Meta’s Ad Library, a group of technology companies and academic institutions spent tens of thousands of dollars last month – likely between $17,000 and $25,000 – on an ad campaign against New York’s landmark artificial intelligence safety bill, a campaign that may have reached more than two million people.
The groundbreaking bill is called the RAISE Act, or the Responsible AI Safety and Education Act, and a few days ago a version of it was signed into law by New York Governor Kathy Hochul. The closely watched law dictates that AI companies developing large models – OpenAI, Anthropic, Meta, Google, DeepSeek, etc. – must establish safety plans and transparency policies for reporting large-scale safety incidents to the attorney general. However, the version Hochul signed – different from the one passed by both the New York State Senate and Assembly in June – was reworked in ways that make it much more favorable to technology companies. A group of over 150 parents sent a letter to the governor urging her to sign the bill without any changes. And a group of tech companies and academic institutions called the AI Alliance took part in the charge to gut it.
The AI Alliance – the organization behind the opposition advertising campaign – counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks and Hugging Face among its members, which is not necessarily surprising. In June, the group sent a letter to New York lawmakers expressing “deep concern” about the bill and calling it “unenforceable.” But the group doesn’t consist only of tech companies. Its members include many colleges and universities around the world, including New York University, Cornell University, Dartmouth College, Carnegie Mellon University, Northeastern University, Louisiana State University and the University of Notre Dame, as well as Penn Engineering and Yale Engineering.
The advertisements began running on November 23 under the headline “The RAISE Act Will Halt Job Growth.” They claimed the legislation would “slow New York’s tech ecosystem, which supports 400,000 high-tech jobs and heavy investment. Instead of stifling innovation, let’s champion a future where AI development is open, trustworthy, and empowers the Empire State.”
When The Verge asked the academic institutions listed above whether they were aware they had participated, even inadvertently, in an advertising campaign against the much-discussed AI safety regulations, none responded to a request for comment except Northeastern, which did not provide comment by press time. In recent years, OpenAI and its competitors have increasingly courted academic institutions, joining research consortiums or offering technology directly to students for free.
Many of the academic institutions that are part of the AI Alliance are not directly involved in individual partnerships with AI companies, but some are. For example, Northeastern’s partnership with Anthropic this year has resulted in Claude access for 50,000 students, faculty and staff at 13 campuses around the world, according to an announcement by Anthropic in April. In 2023, OpenAI financed New York University’s Journalism Ethics Initiative. Dartmouth announced a partnership with Anthropic earlier this month, and OpenAI and Anthropic have funded programs at Carnegie Mellon University.
The original version of the RAISE Act prohibited developers from releasing a frontier model “if doing so would create an undue risk of serious harm,” which the act defined as the death or serious injury of at least 100 people, or $1 billion or more in damages to money or property, resulting from the production of chemical, biological, radiological, or nuclear weapons. The definition also extended to an artificial intelligence model that “operates without significant human intervention” and commits acts that would constitute certain criminal offenses “if committed by a human.” In the version signed by Hochul, this clause was removed. Among other changes, Hochul also extended the deadline for disclosing safety incidents and reduced penalties.
The AI Alliance has previously lobbied against AI safety policies, including the RAISE Act, California’s SB 1047, and President Biden’s executive order on artificial intelligence. The group states that its mission is to “bring together developers and experts from diverse fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits,” particularly through “member-led working groups.” Beyond lobbying, the group’s projects have included cataloging and managing “credible” data sets and creating a ranked list of artificial intelligence safety priorities.
The AI Alliance wasn’t the only organization spending advertising dollars in opposition to the RAISE Act. As The Verge recently reported, Leading the Future – a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir co-founder Joe Lonsdale and OpenAI president Greg Brockman – spent money on ads targeting the RAISE Act’s co-sponsor, New York State Assemblymember Alex Bores. But Leading the Future is a super PAC with a clear agenda, while the AI Alliance is a nonprofit that functions as an industry association, with a stated mission of “developing artificial intelligence together, transparently and with a focus on safety, ethics and the greater good.”
