Over the past year, Jay Prakash Thakur, a software veteran, has spent nights and weekends prototyping AI agents that could order meals and engineer mobile apps almost entirely on their own. His agents, though surprisingly capable, also surfaced new legal questions that await companies trying to capitalize on Silicon Valley's hottest new technology.
Agents are AI programs that can act mostly independently, allowing companies to automate tasks such as answering customer questions or paying invoices. While ChatGPT and similar chatbots can draft emails or analyze bills on demand, Microsoft and other tech giants expect agents to take on more intricate functions, and, most importantly, to do so with little human supervision.
The tech industry's most ambitious plans involve systems of many agents, with dozens of them one day linking up to replace entire workforces. For companies, the benefit is clear: saving time, labor, and money. Demand for the technology is already growing. The tech research firm Gartner estimates that agentic AI will resolve 80 percent of common customer service queries by 2029. Fiverr, a service where businesses can book freelance coders, reports that searches for "ai agent" have surged 18,347 percent in recent months.
Thakur, a mostly self-taught coder living in California, wanted to be at the forefront of the emerging field. His day job at Microsoft is not related to agents, but he has been tinkering with AutoGen, Microsoft's open source software for building agents, since he worked at Amazon in 2024. Thakur says he has developed multi-agent prototypes using AutoGen with just a bit of programming. Last week, Amazon introduced a similar agent development tool called Strands; Google offers what it calls an Agent Development Kit.
Because agents are meant to act autonomously, the question of who bears responsibility when their errors cause financial damage has been Thakur's biggest concern. He believes that assigning blame when agents from different companies miscommunicate within a single, sprawling system could become contentious. He compared the challenge of reviewing error logs from various agents to reconstructing a conversation from different people's notes. "It's often impossible to pinpoint responsibility," says Thakur.
Joseph Fireman, senior legal counsel at OpenAI, said on stage at a recent legal conference hosted by the Media Law Resource Center in San Francisco that aggrieved parties tend to go after those with the deepest pockets. That means companies like his will have to be prepared to take some responsibility when agents cause harm, even when a kid messing around with an agent may be the one at fault (that person is unlikely to have money worth pursuing, he suggested). "I don't think anybody is hoping to get through to the consumer sitting in their mom's basement on the computer," Fireman said, adding that the insurance industry has begun rolling out coverage for AI chatbot issues to help companies cover the costs of mishaps.
Onion rings
Thakur's experiments have involved stringing together agents into systems that require as little human intervention as possible. One project he pursued aimed to replace software developers with two agents. One was trained to search for the specialized tools needed to build apps; the other summarized those tools' usage policies. Thakur says that in the future, a third agent could use the identified tools and follow the summarized policies to develop an entirely new app.
When Thakur put his prototype to the test, the search agent found a tool that, according to its website, "supports unlimited requests per minute for enterprise users" (meaning high-paying clients can lean on it as much as they want). But in trying to distill the key information, the summarization agent dropped the crucial qualification "per minute for enterprise users." It erroneously told the coding agent, which did not qualify as an enterprise user, that it could write a program making unlimited requests to the outside service. Because this was a test, no harm was done. Had it happened in real life, the truncated guidance could have caused the entire system to break down unexpectedly.
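The failure mode here can be made concrete with a toy sketch. The code below is purely illustrative and is not Thakur's actual AutoGen pipeline: the agent roles, function names, and policy string are all hypothetical stand-ins, with the summarizer modeled as a crude truncation step that discards the trailing qualifiers of a usage policy.

```python
from typing import Optional

# Hypothetical usage policy the search agent found on a tool's website.
POLICY = "supports unlimited requests per minute for enterprise users"

def summarize(policy: str, max_words: int = 4) -> str:
    """Stand-in for the summarization agent: keeps only the first few
    words of a policy, silently discarding qualifiers at the end."""
    return " ".join(policy.split()[:max_words])

def configure_rate_limit(summary: str, is_enterprise: bool) -> Optional[int]:
    """Stand-in for the coding agent: picks a requests-per-minute cap
    (None means no cap). It can only act on what the summary says."""
    if "unlimited" in summary and "enterprise" not in summary:
        # The qualifier was lost, so every caller looks entitled to unlimited use.
        return None
    if "enterprise" in summary and not is_enterprise:
        return 60  # non-enterprise callers get a conservative cap
    return None

summary = summarize(POLICY)  # "supports unlimited requests per"
cap = configure_rate_limit(summary, is_enterprise=False)  # None: no cap applied
full_cap = configure_rate_limit(POLICY, is_enterprise=False)  # 60 with the full policy
```

Fed the full policy, the downstream agent would have capped a non-enterprise caller; fed the truncated summary, the cap silently disappears, which is the kind of compounding error that makes blame hard to assign across a chain of agents.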
