As AI applications permeate enterprise operations, from improving patient care with advanced medical imaging to powering sophisticated fraud detection models and even supporting wildlife conservation, a critical bottleneck often emerges: data storage.
At VentureBeat Transform 2025, Greg Matson, head of products and marketing at Solidigm, and Roger Cummings, CEO of PEAK:AIO, talked with Michael Stewart, managing partner at M12, about how innovations in storage technology are enabling the use of AI in healthcare.
The MONAI framework is a breakthrough in medical imaging, making it faster, safer, and more secure. Advances in storage technology let researchers build on top of frameworks like this, iterating and innovating quickly. PEAK:AIO partnered with Solidigm to integrate power-efficient, high-capacity storage, which enabled MONAI to store more than two million CT scans on a single node within its IT environment.
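For readers who want to see what building on MONAI looks like in practice, here is a minimal sketch of the pattern this kind of deployment targets: caching preprocessed CT volumes on fast node-local flash so training reads never leave the box. The file paths and cache directory are hypothetical placeholders; the classes are MONAI's own PyTorch-based API.

```python
# Minimal sketch: stage CT volumes through MONAI's PersistentDataset so that
# preprocessed tensors are cached on node-local flash. Paths are hypothetical.
from monai.data import DataLoader, PersistentDataset
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    ScaleIntensityRanged,
)

# Each record points at one CT volume stored on the node.
files = [{"image": f"/mnt/nvme/ct/scan_{i:07d}.nii.gz"} for i in range(1_000)]

transforms = Compose([
    LoadImaged(keys="image"),            # read the NIfTI volume from disk
    EnsureChannelFirstd(keys="image"),   # move to channel-first layout
    ScaleIntensityRanged(                # window the CT intensity range
        keys="image", a_min=-1000, a_max=1000, b_min=0.0, b_max=1.0, clip=True
    ),
])

# PersistentDataset writes each preprocessed sample to cache_dir once, so
# every later epoch reads straight from fast local flash instead of
# re-decoding and re-transforming the raw scans.
dataset = PersistentDataset(
    data=files, transform=transforms, cache_dir="/mnt/nvme/monai_cache"
)
loader = DataLoader(dataset, batch_size=2, num_workers=4)

for batch in loader:
    print(batch["image"].shape)  # e.g. torch.Size([2, 1, D, H, W])
    break
```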
“As enterprise AI infrastructure evolves, storage hardware has to be increasingly tailored to specific use cases, depending on where they sit in the AI data pipeline,” said Matson. “The kinds of use cases we talked about with MONAI, the edge use case, as well as feeding a training cluster, are well served by very high-capacity solid-state storage, but the actual model training and inference require something different: very high-performance, very high-IOPS SSDs. The same goes for use cases like RAG, which depend on storage being integrated across the whole pipeline.”
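One way to make Matson's point concrete is to write out the stage-to-storage mapping he describes. The sketch below is purely illustrative editorial shorthand, not a Solidigm product matrix:

```python
# Illustrative only: the stage-to-storage mapping described above, written
# out as data. Names and categories are editorial shorthand, not vendor specs.
STORAGE_PROFILE_BY_STAGE = {
    "edge_capture_and_archive": "capacity-optimized flash (maximize TB, minimize power)",
    "training_cluster_feed":    "capacity-optimized flash (sustained throughput)",
    "model_training":           "performance-optimized flash (high IOPS, low latency)",
    "inference_and_rag":        "performance-optimized flash (high random-read IOPS)",
}

for stage, profile in STORAGE_PROFILE_BY_STAGE.items():
    print(f"{stage:>26} -> {profile}")
```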
Improving AI inference on the edge
To get peak performance at the edge, storage has to scale down to a single node that sits close to the data, and the key is removing the memory bottleneck. That can be done by building AI infrastructure memory and storage that scale along with the data and its metadata. Keeping data close to compute dramatically improves time to insight.
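As a rough illustration of keeping data next to compute, the read-through cache below pulls an object onto local flash the first time it is requested and serves every later read locally. The bucket name, paths, and the choice of boto3/S3 are assumptions of this sketch, not part of either vendor's stack:

```python
# Illustrative read-through cache for an edge node: fetch once from remote
# object storage, then serve all later reads from node-local flash.
# Bucket name, paths, and the use of boto3/S3 are assumptions of this sketch.
import os

import boto3

LOCAL_CACHE = "/mnt/nvme/cache"  # fast node-local flash
s3 = boto3.client("s3")

def fetch(bucket: str, key: str) -> str:
    """Return a local path for the object, downloading only on a cache miss."""
    local_path = os.path.join(LOCAL_CACHE, key.replace("/", "_"))
    if not os.path.exists(local_path):  # miss: one remote round trip
        os.makedirs(LOCAL_CACHE, exist_ok=True)
        s3.download_file(bucket, key, local_path)
    return local_path  # hit: local-flash latency from here on

# First call pays the network cost; subsequent calls read from local flash.
path = fetch("imaging-archive", "ct/scan_0000001.nii.gz")
```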
“You see all the huge deployments, the large greenfield data centers for AI, using very specific hardware designs to bring data as close to the GPUs as possible,” said Matson. “They've built their data centers around very high-capacity solid-state storage to keep petabytes of data highly available to the GPUs at very high speeds. Now the same technology is playing out in microcosm at the edge and in the enterprise.”
For buyers of AI systems, it is becoming crucial to get the most performance out of the system by running it entirely on solid-state storage. That makes it possible to feed in huge amounts of data and enables remarkable processing power in a small system at the edge.
The future of AI hardware
“It's imperative that we provide solutions that are open, scalable, and run at memory speed, using some of the latest and greatest technologies to do that,” said Cummings. “Our goal as a company is to deliver the openness, speed, and scale that organizations need. I think you'll see the economics fall into line as well.”
Across the training and inference pipeline as a whole, hardware needs will keep growing, whether that means very fast SSDs or very high-capacity drives that are power-efficient.
“I'd say it's going to push even further in the direction of very high capacity, whether in a couple of years that's a single drive that runs at very low power and can essentially replace four times as many hard drives, or a very high-performance product that gets close to memory speed,” said Matson. “You'll see the big GPU vendors looking at how to define the next storage architecture so it can closely augment the HBM in the system. What was a general-purpose SSD in cloud computing is now evolving toward dedicated high-capacity and high-performance products.”
