Should I set up a personal AI agent to assist me with everyday tasks?
—In Search of Assistance
My baseline belief is that leaning on any kind of automation for everyday life is risky when taken to the extreme, and even in moderation it can create a sense of alienation, especially around interpersonal interactions. An AI agent that organizes my to-do list and gathers online links for further reading? Great. An AI agent that automatically texts my parents a quick life update every week? Horrifying.
Still, the strongest argument against folding more generative AI tools into your daily routine remains the environmental toll these models take, both during training and when generating outputs. With all that in mind, I dug through the WIRED archives, back to articles published during the glorious dawn of this mess we call the internet, for historical context on your question. After a little research, I came away convinced that you likely already use AI agents every day.
The idea of AI agents, or, God forbid, "agentic AI," is the buzzword of the moment for every tech leader hyping their latest investments. But the concept of an automated assistant dedicated to handling software tasks is not new. Much of the 1990s discourse around "software agents" mirrors the current conversation in Silicon Valley, where executives are promising a wave of generative AI agents trained to do work online on our behalf.
“One of the problems I foresee is that people will question who is responsible for an agent’s actions,” MIT professor Pattie Maes said in a WIRED interview originally published in 1995, “especially when agents spend too much time on the machine or buy something in your name that you don’t want. Agents will raise a lot of interesting issues, but I’m convinced we won’t be able to live without them.”
In early January, I called Maes to ask how her view of AI agents has changed over the years. She remains bullish on the potential of personal automation but believes that “extremely naive” engineers aren’t spending enough time grappling with the complexities of human-computer interaction. Their carelessness, she says, could trigger another AI winter.
“The systems being built right now are optimized from a technical and engineering point of view,” she says. “But they are not at all optimized for human design problems.” She points to how AI agents are still easily fooled or fall back on biased assumptions, despite improvements to the underlying models. This, in turn, can lead users to trust AI-generated answers when they shouldn’t.
To better understand the other potential pitfalls of personal AI agents, let’s break this nebulous term into two distinct categories: agents that feed you and agents that represent you.
Agents that feed you are algorithms that use data about your habits and preferences to comb through seas of information and surface what matters to you. Sound familiar? Any social media recommendation engine populating your timeline with tailored posts, or the relentless ad tracker showing me those mushroom gummies on Instagram for the thousandth time, could be considered a personal AI agent. In her 1990s interview, Maes pointed to another example: a news-gathering agent trained to deliver the stories she wanted to read. That sounds a lot like my Google News landing page.