Daniel Rausch, Amazon's vice president of Alexa and Echo, is in the midst of a major transition. More than a decade after launching Alexa, Amazon set out to build a modern version of the voice assistant, one powered by large language models. As he put it in my interview with him, this new assistant, called Alexa+, is a “complete reconstruction of the architecture.”
How did his team approach the biggest overhaul in the history of Amazon's voice assistant? Naturally, they used artificial intelligence to build artificial intelligence.
“The pace at which we're using AI tools throughout the entire development process is quite stunning,” says Rausch. While building the new Alexa, Amazon used AI at every stage of development. And yes, that includes generating some of the code.
The Alexa team also brought generative AI into the testing process. Engineers used a “large language model as a judge” during reinforcement learning, in which an AI model picked which of two Alexa+ outputs it considered the better answer.
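The article doesn't describe Amazon's actual pipeline, but the “LLM as a judge” pattern it mentions can be sketched in a few lines. In this illustration, `judge_fn` stands in for a call to a real judge model; the stub used here, and all function names, are hypothetical.

```python
# Minimal sketch of the "LLM as a judge" pattern: a judge model compares
# two candidate answers and the winners become preference data for RL.
# judge_fn is a placeholder for a real LLM call; names are illustrative.

def pick_better(prompt: str, answer_a: str, answer_b: str, judge_fn) -> str:
    """Ask the judge which candidate is better; judge_fn returns 'A' or 'B'."""
    verdict = judge_fn(prompt, answer_a, answer_b)
    return answer_a if verdict == "A" else answer_b

def collect_preferences(triples, judge_fn):
    """Turn (prompt, answer_a, answer_b) triples into (chosen, rejected)
    preference records, the raw material for reinforcement learning."""
    records = []
    for prompt, a, b in triples:
        chosen = pick_better(prompt, a, b, judge_fn)
        rejected = b if chosen == a else a
        records.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return records

# Stub judge for demonstration only: prefers the longer, more specific answer.
# A production system would prompt a large language model here instead.
def stub_judge(prompt, a, b):
    return "A" if len(a) >= len(b) else "B"

prefs = collect_preferences(
    [("Set a timer", "Timer set for 10 minutes.", "OK.")],
    stub_judge,
)
print(prefs[0]["chosen"])  # -> Timer set for 10 minutes.
```

The resulting chosen/rejected pairs are the kind of signal a preference-based training method consumes; the judge simply replaces a human rater in producing them.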
“People get leverage and can move faster and better when equipped with AI,” says Rausch. Amazon's focus on internal use of generative AI is part of a larger wave of disruption for working software engineers, as new tools like Anysphere's Cursor change how the job is done, as well as how much work is expected of them.
If these AI-focused workflows prove out at scale, what it means to be an engineer will change fundamentally. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” Amazon CEO Andy Jassy said in a note to employees this week. “It's hard to know exactly where this nets out over time, but in the next few years we expect that this will reduce our total workforce as we get efficiency gains from using AI extensively across the company.”
For now, Rausch is mainly focused on rolling out the generative version of Alexa to more Amazon users. “We really didn't want to leave customers behind in any way,” he says. “And that means hundreds of millions of different devices that you need to support.”
The new Alexa+ talks with users in a more conversational way. It's a more personalized experience that remembers your preferences and can carry out online tasks you assign it, such as searching for concert tickets or buying groceries.
Amazon announced Alexa+ at a company event in February and rolled out early access to some members of the public, though without the complete list of announced features. Now the company says that over a million people have access to the updated voice assistant, still a tiny percentage of potential users; ultimately, hundreds of millions of Alexa users will gain access to the AI tool. A wider release of Alexa+ is tentatively planned for later this summer.
Amazon faces competition from many directions as it works on a livelier voice assistant. OpenAI's Advanced Voice Mode, introduced in 2024, was popular among users drawn to the AI's voice. In addition, Apple announced an overhaul of its native voice assistant, Siri, at last year's developer conference, with many context and personalization features similar to what Amazon is attempting with Alexa+. Apple has yet to launch the rebuilt Siri, even in early access, and the new voice assistant is now expected next year.
Amazon declined to give WIRED early access to Alexa+ for hands-on testing, and the new assistant has not yet rolled out to my personal Amazon account. As with OpenAI's Advanced Voice Mode, which launched last year, WIRED plans to test Alexa+ and provide firsthand context for readers as it becomes more widely available.
