Saturday, February 22, 2025

OpenAI is trying to “uncensor” ChatGPT


OpenAI is changing how it trains AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy.

As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics it won’t talk about.

The changes might be part of OpenAI’s effort to land in the good graces of the new Trump administration, but they also seem to be part of a broader shift in Silicon Valley over what counts as “AI safety.”

OpenAI on Wednesday announced an update to its Model Spec, a 187-page document that lays out how the company trains AI models to behave. In it, OpenAI unveiled a new guiding principle: Do not lie, either by making untrue statements or by omitting important context.

In a new section called “Seek the truth together,” OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects, all in an effort to be neutral.

For example, the company says ChatGPT should assert that “Black lives matter,” but also that “all lives matter.” Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its “love for humanity” generally, then offer context about each movement.

“This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,” OpenAI says in the spec. “However, the goal of an AI assistant is to assist humanity, not to shape it.”

The new Model Spec doesn’t mean ChatGPT is a total free-for-all now. The chatbot will still refuse to answer certain objectionable questions or respond in a way that supports blatant falsehoods.

These changes could be seen as a response to conservative criticism of ChatGPT’s safeguards, which have long seemed to skew center-left. However, an OpenAI spokesperson rejects the idea that the company made these changes to appease the Trump administration.

Instead, the company says its embrace of intellectual freedom reflects OpenAI’s “long-held belief in giving users more control.”

But not everyone sees it that way.

Conservatives allege AI censorship

Venture capitalist and Trump’s AI “czar” David Sacks. Image Credits: Steve Jennings / Getty Images

Trump’s closest Silicon Valley confidants, including David Sacks, Marc Andreessen, and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump’s crew was setting the stage for AI censorship to become the next culture-war issue in Silicon Valley.

Of course, OpenAI doesn’t say it engaged in “censorship,” as Trump’s advisers claim. Rather, the company’s CEO, Sam Altman, previously claimed in a post on X that ChatGPT’s bias was an unfortunate “shortcoming” the company was working to fix, though he noted it would take some time.

Altman made that comment just after a tweet went viral in which ChatGPT refused to write a poem praising Trump, though it would do so for Joe Biden. Many conservatives pointed to this as an example of AI censorship.

While it’s impossible to say whether OpenAI was truly suppressing certain points of view, it is a plain fact that AI chatbots tend to lean left.

Even Elon Musk admits that xAI’s chatbot is often more politically correct than he’d like. That’s not because Grok was “programmed to be woke,” but more likely a consequence of training AI on the open internet.

Nevertheless, OpenAI now says it’s doubling down on free speech. This week, the company even removed the warnings ChatGPT showed users when they violated its policies. OpenAI told TechCrunch it was purely a cosmetic change, with no effect on the model’s outputs.

The company seems to want ChatGPT to feel less censored for users.

It wouldn’t be surprising if OpenAI was also trying to impress the new Trump administration with this policy update, former OpenAI policy leader Miles Brundage notes in a post on X.

Trump has previously targeted Silicon Valley companies, such as Twitter and Meta, for having active content moderation teams that tend to shut out conservative voices.

OpenAI may be trying to get out in front of that. But there is also a larger shift underway in Silicon Valley and the AI world around the role of content moderation.

Generating answers to please everyone

The ChatGPT logo appears on a smartphone screen. Image Credits: Jaque Silva / NurPhoto / Getty Images

Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and engaging.

Now, AI chatbot providers are in the same information-delivery business, but arguably with the hardest version of this problem yet: How do they automatically generate answers to any question?

Delivering information about controversial, real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don’t like to admit it. Those stances are bound to upset someone, miss some group’s perspective, or give too much airtime to some political party.

For example, when OpenAI commits to letting ChatGPT represent all perspectives on controversial subjects (including conspiracy theories, racist or antisemitic movements, or geopolitical conflicts), that is inherently an editorial stance.

Some, including OpenAI co-founder John Schulman, argue that this is the right stance for ChatGPT. The alternative, doing a cost-benefit analysis to determine whether an AI chatbot should answer a user’s question, could “give the platform too much moral authority,” Schulman notes in a post on X.

Schulman isn’t alone. “I think OpenAI is right to push in the direction of more speech,” said Dean Ball, a research fellow at George Mason University’s Mercatus Center, in an interview with TechCrunch. “As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important.”

In previous years, AI model providers have tried to stop their chatbots from answering questions that might lead to “unsafe” answers. Almost every AI company stopped its chatbot from answering questions about the 2024 U.S. presidential election. The decision was widely considered safe and responsible at the time.

But OpenAI’s changes to its Model Spec suggest we may be entering a new era for what “AI safety” really means, in which allowing an AI model to answer anything and everything is considered more responsible than making decisions for users.

Ball says this is partly happening because AI models are simply better now. OpenAI has made significant progress on AI model alignment; its latest reasoning models consider the company’s safety policy before answering. This allows AI models to give better answers to delicate questions.

Of course, Elon Musk was the first to implement “free speech” into xAI’s Grok chatbot, perhaps before the company was really ready to handle sensitive questions. It may still be too early for leading AI models, but now others are embracing the same idea.

Shifting values for Silicon Valley

Guests including Mark Zuckerberg, Lauren Sanchez, Jeff Bezos, Sundar Pichai, and Elon Musk attend the inauguration of Donald Trump. Image Credits: Julia Demaree Nikhinson / Getty Images

Mark Zuckerberg made waves last month by reorienting Meta’s businesses around First Amendment principles. He praised Elon Musk in the process, saying the X owner took the right approach by using Community Notes, a community-driven content moderation program, to safeguard free speech.

In practice, both X and Meta ended up dismantling their longstanding trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.

The changes at X may have hurt its relationships with advertisers, but that may have had more to do with Musk, who took the unusual step of suing some of them for boycotting the platform. Early signs indicate that Meta’s advertisers were unfazed by Zuckerberg’s free speech pivot.

Meanwhile, many tech companies beyond X and Meta have walked back the left-leaning policies that dominated Silicon Valley for the last several decades. Google, Amazon, and Intel have eliminated or scaled back diversity initiatives in the last year.

OpenAI may be reversing course, too. The ChatGPT maker appears to have recently scrubbed a commitment to diversity, equity, and inclusion from its website.

As OpenAI embarks on one of the largest American infrastructure projects ever with Stargate, a $500 billion AI datacenter project, its relationship with the Trump administration is increasingly important. At the same time, the ChatGPT maker is vying to unseat Google Search as the dominant source of information on the internet.

Coming up with the right answers may prove key to both.
