Tuesday, April 29, 2025

Artificial intelligence uses likes to get inside your head

What is the future of the like button in the era of artificial intelligence? Max Levchin, co-founder of PayPal and CEO of Affirm, sees a new and extremely valuable role for like data: training artificial intelligence to draw conclusions more consistent with those of human decision-makers.

It is a well-known pattern in machine learning that a computer presented with a clear reward function will engage in continual reinforcement learning to improve its performance and maximize that reward, but that this optimization path often leads AI systems to results very different from what human judgment would produce.
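To make that divergence concrete, here is a minimal, purely illustrative sketch (the content options, click rates, and "human scores" below are invented): an agent that reinforces whatever maximizes its proxy reward, clicks in this case, converges on the option human raters value least.

```python
# Minimal sketch (not from the article): a bandit-style agent maximizing a
# clear numeric reward. The proxy reward (clicks) is easy to optimize, but the
# arm it converges on differs from the one humans actually judge to be best.
import random

# Hypothetical content options: (proxy reward = click rate, human judgment score)
ARMS = {
    "clickbait":    {"click_rate": 0.9, "human_score": 0.2},
    "quality_news": {"click_rate": 0.5, "human_score": 0.9},
    "cat_videos":   {"click_rate": 0.7, "human_score": 0.6},
}

def pull(arm: str) -> float:
    """Return 1.0 if the item was clicked, else 0.0 (the proxy reward)."""
    return 1.0 if random.random() < ARMS[arm]["click_rate"] else 0.0

# Simple epsilon-greedy reinforcement loop over the proxy reward.
estimates = {arm: 0.0 for arm in ARMS}
counts = {arm: 0 for arm in ARMS}
for step in range(5000):
    if random.random() < 0.1:                      # explore
        arm = random.choice(list(ARMS))
    else:                                          # exploit current estimates
        arm = max(estimates, key=estimates.get)
    r = pull(arm)
    counts[arm] += 1
    estimates[arm] += (r - estimates[arm]) / counts[arm]  # incremental mean

best_by_proxy = max(estimates, key=estimates.get)
best_by_humans = max(ARMS, key=lambda a: ARMS[a]["human_score"])
print("agent converges on:", best_by_proxy)    # typically "clickbait"
print("humans prefer:     ", best_by_humans)   # "quality_news"
```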

To introduce a corrective force, AI developers often employ so-called reinforcement learning from human feedback (RLHF). They essentially put a human thumb on the scale as the computer arrives at its model, training it on data that reflects the real preferences of real people. But where does this human preference data come from, and how much of it is needed to get the inputs right? Until now, that has been the problem with RLHF: it is a costly method when it requires hiring evaluators and paying them to provide feedback.
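As a rough illustration of the mechanism, and not any particular lab's implementation (the features and preference pairs below are synthetic), a reward model can be fitted to pairwise human choices so that preferred items score higher, a Bradley-Terry-style objective commonly used in RLHF:

```python
# Illustrative RLHF-style reward model: learn weights so that, for each
# human-labeled pair, the preferred item scores higher. The resulting scores
# could then steer further reinforcement learning.
import numpy as np

rng = np.random.default_rng(0)

# Each item is a small feature vector, e.g. [helpfulness, length, flattery].
# Humans compared pairs and marked which item they preferred.
preferred     = rng.normal(loc=[1.0, 0.0, -0.5], scale=0.3, size=(200, 3))
not_preferred = rng.normal(loc=[0.0, 0.5,  0.5], scale=0.3, size=(200, 3))

w = np.zeros(3)   # reward model: reward(x) = w @ x
lr = 0.1
for _ in range(500):
    # Bradley-Terry / logistic pairwise loss: P(preferred beats the other item)
    margin = (preferred - not_preferred) @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = ((p - 1.0)[:, None] * (preferred - not_preferred)).mean(axis=0)
    w -= lr * grad

print("learned reward weights:", np.round(w, 2))
# Items scoring high under w now align with what the raters preferred, which is
# exactly the kind of data Levchin argues a pile of likes could supply cheaply.
```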

And this is a problem that, according to Levchin, the like button can solve. He sees the accumulated resource now sitting in Facebook's hands as a godsend for any developer who wants to train an intelligent agent on human preference data. And how large is the supply? "I would say that one of the most valuable things Facebook has is that mountain of preference data," Levchin told us. Indeed, at this inflection point in the development of artificial intelligence, access to "what content people like, for use in training AI models, is probably one of the most valuable things on the internet."

But while Levchin anticipates AI learning from the human preferences captured by the like button, AI is already changing how those preferences are shaped. In fact, social media platforms are actively employing artificial intelligence not only to analyze likes but to predict them, which could make the button itself obsolete.

This was a striking observation for us, because most of the people we spoke with made predictions from the opposite direction, describing not how the like button would affect the performance of artificial intelligence but how AI would change the world of the like button. We had already heard that AI is being used to improve social media algorithms. For example, in early 2024 Facebook experimented with using AI to retune the algorithm that recommends videos to users. Could it find a better weighting of variables to predict which video a user would most want to watch? The result of this early test showed that it could: applying artificial intelligence to the task paid off in longer watch time, the performance metric Facebook hoped to boost.
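The article does not describe Facebook's system, so the following is only a schematic sketch with invented signals and logs: a model searches for a better weighting of ranking variables than a hand-tuned guess, scoring candidate videos by predicted watch time.

```python
# Illustrative re-weighting of ranking signals. All features, weights, and
# "logged" data are invented; the point is that a fitted weighting can predict
# watch time better than a hand-tuned one.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ranking signals for 1,000 logged impressions:
# [past likes from this user, topic match, video freshness]
X = rng.uniform(size=(1000, 3))
true_weights = np.array([0.2, 0.7, 0.1])                   # unknown to the ranker
watch_time = X @ true_weights + rng.normal(0, 0.05, 1000)  # minutes watched

hand_tuned = np.array([0.6, 0.3, 0.1])    # a human's guess at the weighting
learned, *_ = np.linalg.lstsq(X, watch_time, rcond=None)   # fit to the logs

def mean_error(w):
    return np.mean(np.abs(X @ w - watch_time))

print("hand-tuned weights error:", round(mean_error(hand_tuned), 3))
print("learned weights error:   ", round(float(mean_error(learned)), 3))
# The learned weighting predicts watch time more accurately, so ranking by it
# should surface videos users stay with longer, the metric the test targeted.
```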

When we asked YouTube co-founder Steve Chen what the future holds for the button, he said: "Sometimes I wonder whether the button will even be needed once AI is sophisticated enough to tell the algorithm with 100 percent accuracy what you ultimately want to watch, based on your own viewing and sharing."

He pointed out, however, that one reason the like button may always be needed is to handle abrupt or temporary changes in viewing behavior caused by life events or circumstances. "There are days when I want to watch content that is a bit more suitable for, say, my children," he said. Chen also suggested that the like button may have longevity because of its role in attracting advertisers, the other key group alongside viewers and creators, since the like works as the simplest possible hinge connecting these three groups. With a single tap, the viewer simultaneously provides acknowledgment and feedback directly to the content creator, as well as evidence of engagement and preference to the advertiser.
