Friday, May 16, 2025

Altman’s goal for ChatGPT to remember “your whole life” is both exciting and disturbing


OpenAI CEO Sam Altman laid out a grand vision for the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.

Asked by one attendee how ChatGPT can become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.

The ideal, he said, is “a very tiny reasoning model with a trillion tokens of context that you put your whole life into.”

“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context,” he described.

“Your company just does the same thing for all of its data,” he added.

Altman may have reason to think this is the natural future of ChatGPT. In the same discussion, when asked about cool ways young people use ChatGPT, he said, “People in college use it as an operating system.” They upload files, connect data sources, and then use “complex prompts” against that data.

Additionally, with ChatGPT’s memory options, which can use previous conversations and remembered facts as context, he said one trend he has noticed is that young people “don’t really make life decisions without asking ChatGPT.”

“A gross oversimplification is: older people use ChatGPT as a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”

It’s not a big leap from there to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that’s an exciting future to think about.

But the frightening part? How much should we trust a Big Tech, for-profit company to know everything about our lives? These are companies that don’t always behave in model ways.

Google, which began life with the motto “don’t be evil,” lost a lawsuit in the US that accused it of engaging in anticompetitive, monopolistic behavior.

Chatbots can also be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week randomly discussed a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, suggested intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.

Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding dangerous decisions and ideas. Altman quickly responded, promising the team had fixed the tweak that caused the problem.

Even the best, most reliable models still just make things up from time to time.

So an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of questionable behavior, it’s also a situation ripe for misuse.
