OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies laid out their positions in proposals published this week, with OpenAI arguing that applying fair use protections to AI "is a matter of national security."
The proposals come in response to a request from the White House, which asked industry groups, private sector organizations, and other parties for input on President Donald Trump's "AI Action Plan." The initiative is supposed to "strengthen America's position as an AI power" while preventing "burdensome requirements" from hindering innovation.
In its comment, OpenAI argues that allowing AI companies to access copyrighted content would help the US "avoid forfeiting" its AI lead to China, citing the emergence of DeepSeek.
"There is little doubt that the PRC's [People's Republic of China] AI developers will enjoy unlimited access to data, including copyrighted data, that will improve their models," OpenAI writes. "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over."
Google, unsurprisingly, agrees. The company's response similarly states that copyright, privacy, and patent policies "can impede appropriate access to data necessary for training leading models." It adds that fair use policies, along with exceptions for text and data mining, have been "critical" for training AI on publicly available data.
"These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders, and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation," Google says.
Anthropic, the AI company behind the chatbot Claude, also submitted a proposal, but it makes no mention of copyright. Instead, it asks the US government to develop a system for assessing the national security risks of AI models and to strengthen export controls on AI systems. Like Google and OpenAI, Anthropic also suggests that the US bolster its energy infrastructure to support AI development.
