Ford’s new AI-powered voice assistant will be available to customers later this year, the company’s chief software officer said today at CES. And in 2028, the automaker will introduce hands-free Level 3 autonomous driving capabilities as part of its cheaper (and hopefully more profitable) Universal Electric Vehicle (UEV) platform, which is scheduled to launch in 2027.
Most importantly, Ford said it will develop many of the key technologies behind these products in-house to reduce costs and maintain greater control over them. To be clear, the company won’t build its own large language models or design its own silicon the way Tesla and Rivian do. Instead, it will build its own electronic and computer modules that are smaller and more capable than current systems.
“By designing our own software and hardware in-house, we found a way to make this technology more affordable,” Doug Field, Ford’s general manager of electric vehicles and software, wrote in a blog post. “This means we can bring advanced hands-free driving features to the vehicles people actually buy, not just unaffordable ones.”
Ford has said it will develop many of the key technologies related to these products in-house
The news comes amid growing pressure on Ford to bring cheaper electric vehicles to market after a massive bet on electric versions of the Mustang and F-150 pickup truck failed to excite customers or turn a profit. The company recently canceled production of the F-150 Lightning amid cooling EV sales, and said it would produce more hybrid vehicles as well as battery storage systems to meet growing demand from AI data center construction. Ford recalibrated its AI strategy after shutting down its Argo AI autonomous vehicle program in 2022, shifting its focus from Level 4 fully autonomous vehicles to Level 2 and Level 3 conditional driver-assistance capabilities.
Still, the company is trying to strike a middle ground on AI: not betting everything on robots, as Tesla and Hyundai have, while continuing to pursue AI-based products like voice assistants and automated driving features.
Ford said its AI assistant will launch in the Ford and Lincoln mobile apps in 2026, expanding to in-car support in 2027. Picture a Ford owner standing next to their truck, unsure how many bags of mulch will fit in the bed. The owner can take a photo of the mulch and ask the assistant, which can provide a more precise answer than ChatGPT or Google Gemini, for example, because it has access to all the information about the owner’s vehicle, including cargo bed size and trim level.
Ford CFO Sherry House said at a recent technology conference that Ford would integrate Google’s Gemini into its vehicles. That said, the automaker is designing its assistant to be chatbot-agnostic, meaning it will work with many different LLMs.
“The key thing is that we take that LLM and then give it access to all the relevant Ford systems so that the LLM knows what specific vehicle you’re using,” Sammy Omari, director of ADAS and infotainment solutions at Ford, told me.
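That chatbot-agnostic design can be illustrated with a minimal sketch: the LLM backend is just a swappable callable, and vehicle-specific data is injected into the prompt before it reaches whichever model is plugged in. Everything here is hypothetical (the function names, the vehicle profile fields, and the values); Ford has not published its actual API.

```python
# Hypothetical sketch of a chatbot-agnostic assistant. The backend LLM is any
# callable that maps a prompt string to a reply string, so it could wrap
# Gemini, ChatGPT, or another model. None of this reflects Ford's real code.
from typing import Callable

# Illustrative stand-in for the vehicle data the assistant would pull from
# Ford's own systems (field names and values are made up).
VEHICLE_PROFILE = {
    "model": "F-150",
    "trim": "Lariat",
    "cargo_bed_length_in": 67.1,
    "cargo_bed_width_in": 50.6,
}

def build_prompt(question: str, profile: dict) -> str:
    """Prepend the owner's vehicle details so any backend LLM can use them."""
    context = ", ".join(f"{k}={v}" for k, v in profile.items())
    return f"Vehicle context: {context}\nOwner question: {question}"

def ask_assistant(llm: Callable[[str], str], question: str) -> str:
    """Backend-agnostic entry point: swap `llm` without touching the rest."""
    return llm(build_prompt(question, VEHICLE_PROFILE))
```

Because the model only sees a plain prompt string, switching from one LLM vendor to another means swapping the callable, not rewriting the assistant.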
Autonomous driving features will come later, with the launch of Ford’s Universal Electric Vehicle platform. Ford’s current flagship product is BlueCruise, a hands-free Level 2 driver-assistance feature that works on most highways. Ford plans to follow it with a point-to-point hands-free system able to recognize traffic lights and navigate intersections. Eventually, a Level 3 system will launch, in which the driver must still be able to take over the vehicle on request but can also take their eyes off the road in certain situations. (Some experts have argued that L3 systems may be unsafe, given that the driver must remain alert even while the vehicle performs most driving tasks.)
Omari explained that by rigorously analyzing every sensor, software component, and compute unit, the team arrived at a system that is approximately 30 percent cheaper than today’s hands-free system while offering significantly greater capability.
All of this will depend on a “radical rethink” of Ford’s computing architecture, Field said in the blog post. That means a more unified “brain” that can handle infotainment, ADAS, voice commands, and more.
For nearly a decade, Ford has been building a team with the right expertise to lead these projects. The former Argo AI team, originally focused on Level 4 robotaxi development, was brought aboard the mothership for its expertise in machine learning, robotics, and software. And a BlackBerry engineering team, initially hired in 2017, is now working on building the next generation of electronic modules that will enable some of these innovations, Paul Costa, executive director of Ford electronics platforms, told me.
But Ford doesn’t want to participate in the “TOPS arms race,” Costa added, referring to a metric that measures AI processor speed in trillions of operations per second. Other companies like Tesla and Rivian have highlighted the processing speed of their AI chips to prove how powerful their automated driving systems will be. Ford has no interest in playing this game.
Instead of optimizing solely for performance, the team aimed for a balance of performance, cost, and size. The result is a compute module that is significantly more capable, cheaper, and 44 percent smaller than the system it replaces.
“We’re not just picking one area to optimize here at the expense of everything else,” Costa said. “We were able to optimize across the board and that’s why we’re so excited about it.”
