Choosing an AI model is as much a strategic decision as a technical one. And the choice between open, closed and hybrid models comes with trade-offs.
At this year’s VB Transform, model architecture experts from General Motors, Zoom and IBM discussed how their companies and customers approach AI model selection.
Barak Turovsky, who in March became GM’s first chief AI officer, said there is a lot of hype around each new model release, and the leaderboard changes every time. Long before leaderboards were the mainstream debate, Turovsky helped launch the first large language model (LLM), and he recalled how open-sourcing AI model weights and training data led to major breakthroughs.
“That was probably one of the biggest breakthroughs that helped OpenAI and others to start launching,” said Turovsky. “So it’s actually a funny anecdote: open source helped create something that became closed, and now maybe it’s going back to being open.”
Decision factors vary and include cost, performance, trust and safety. Turovsky said enterprises sometimes prefer a mixed strategy: using an open model for internal use and a closed model in production and with customers, or vice versa.
IBM’s AI strategy
Armand Ruiz, vice president of IBM’s AI platform, said IBM initially launched its platform with its own LLMs, but then realized that wouldn’t be enough, especially as more powerful models arrived on the market. The company then expanded to offer integrations with platforms such as Hugging Face so that customers could choose any open-source model. (The company recently debuted a new model gateway that gives enterprises an API to switch between LLMs.)
More enterprises are opting for multiple models from multiple vendors. When Andreessen Horowitz surveyed 100 CIOs, 37% of respondents said they use five or more models, up from 29% a year earlier.
Choice is critical, but sometimes too much choice creates confusion, Ruiz said. To help customers with their approach, IBM doesn’t worry about which LLM they use during the proof-of-concept or pilot phase; the main goal is feasibility. Only afterward does it look at whether to distill a model or customize one based on a client’s needs.
“First we try to simplify all that analysis paralysis with all those options and focus on the use case,” said Ruiz. “Then we figure out the best path for production.”
How Zoom is approaching AI
Zoom customers can choose between two configurations for its AI Companion, said Zoom CTO Xuedong Huang. One involves federating the company’s own LLM with other larger foundation models. The other configuration, for customers concerned about using too many models, lets them use only Zoom’s model. (The company also recently partnered with Google Cloud to adopt an agent-to-agent protocol for AI Companion for enterprise workflows.)
Huang said the company built its own small language model (SLM) without using customer data. At 2 billion parameters, it is actually very small, yet it can still outperform other industry-specific models. The SLM works best on complex tasks when working alongside a larger model.
“That’s really the power of a hybrid approach,” said Huang. “Our philosophy is very straightforward. Our company is very much like Mickey Mouse and the elephant dancing together. The small model will perform a very specific task. We’re not saying the small model will be good enough … Mickey Mouse and the elephant will work together as one team.”
