Saturday, March 14, 2026

The new AI infrastructure reality: Bring compute to the data, not data to the compute





As AI transforms business operations across industries, a critical challenge keeps surfacing around data storage: no matter how advanced a model is, its performance depends on fast, secure, and reliable access to massive amounts of data. Without adequate storage infrastructure, even the most powerful AI systems can be hobbled by sluggish, overwhelmed, or unreliable data pipelines.

This topic took center stage on day one of VB Transform, during a session on medical imaging AI innovation led by PEAK:AIO and Solidigm. Together with MONAI (Medical Open Network for AI), the open-source project for developing and deploying medical imaging AI, they are redefining how data infrastructure supports real-time inference and training in hospitals, from improving diagnostics to powering advanced research and operational use cases.

>> See all our Transform 2025 coverage here

Pioneering storage at the edge for clinical AI

Moderated by Michael Stewart, managing partner at M12 (Microsoft's venture fund), the session featured insights from Roger Cummings, CEO of PEAK:AIO, and Greg Matson, head of products and marketing at Solidigm. The conversation examined how high-capacity storage architectures open new doors for medical AI, delivering the speed, security, and scalability needed to support massive datasets in clinical environments.

Most importantly, both companies have been deeply involved with MONAI since its earliest days. Developed in collaboration with King's College London and others, MONAI is purpose-built for developing and deploying AI models in medical imaging. The open-source framework offers tools tailored to healthcare's unique requirements, including DICOM support, 3D image processing, and model pretraining, enabling researchers and clinicians to build high-performance models for tasks such as tumor segmentation and organ classification.

A key design goal of MONAI was to support on-premises deployment, allowing hospitals to maintain full control over sensitive patient data while using standard GPU servers for training and inference. This closely ties the framework's efficiency to the underlying data infrastructure, requiring fast, scalable storage systems to fully support real-time clinical AI requirements. This is where Solidigm and PEAK:AIO come in: Solidigm brings high-density flash to the table, while PEAK:AIO specializes in storage systems purpose-built for AI workloads.
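Before committing a node to on-premises training or inference, a hospital IT team might sanity-check its local storage with a quick sequential-read measurement. The sketch below is illustrative only (the file and chunk sizes are arbitrary assumptions, and on a real system the OS page cache can inflate the number unless it is bypassed):

```python
import os
import tempfile
import time

# Illustrative sketch: rough sequential-read throughput of local storage.
# Sizes are small, assumed values; a real benchmark would use much larger
# files and direct I/O to avoid measuring the OS page cache.
CHUNK = 4 * 1024 * 1024          # 4 MiB per read
FILE_SIZE = 64 * 1024 * 1024     # 64 MiB test file

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

start = time.perf_counter()
read = 0
with open(path, "rb") as f:
    while chunk := f.read(CHUNK):
        read += len(chunk)
elapsed = time.perf_counter() - start
os.unlink(path)

throughput_mb_s = read / elapsed / 1e6
print(f"read {read / 1e6:.0f} MB at {throughput_mb_s:.0f} MB/s")
```

A node that cannot sustain the read rates a training loop demands will leave GPUs idle regardless of how capable the model is.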

“We were fortunate to work early with King’s College London and Professor Sebastien Ourselin to help develop MONAI,” explained Cummings. “Working with Ourselin, we built the underlying infrastructure that lets researchers, doctors, and life scientists stand up these frameworks very quickly.”

Meeting the dual storage demands of healthcare AI

Matson noted a clear bifurcation in storage hardware, with different solutions optimized for specific stages of the AI data pipeline. In use cases like MONAI's edge AI deployments, as well as scenarios involving feeding training clusters, high-capacity solid-state storage plays a key role: these environments are often space- and power-constrained, yet they require local access to massive datasets.

For example, MONAI was able to store more than two million CT scans on a single node within a hospital's existing IT infrastructure. “Very space-constrained, very power-constrained, and a very large amount of storage drove some extraordinary results,” said Matson. This kind of density is a breakthrough for edge AI in healthcare, allowing institutions to run advanced AI models locally without compromising performance, scalability, or data security.
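A rough back-of-envelope calculation shows why high-density flash makes a single-node deployment like this plausible. The average study size and per-drive capacity below are assumptions for illustration (61.44 TB is the capacity class of current high-density QLC SSDs), not figures from the session:

```python
# Back-of-envelope sketch: capacity needed for two million CT studies on
# one node. The 300 MB average study size is an assumed, illustrative value.
num_scans = 2_000_000
avg_scan_mb = 300                          # assumed average CT study size
total_tb = num_scans * avg_scan_mb / 1e6   # MB -> TB (decimal units)

drive_tb = 61.44                           # one high-density QLC SSD
drives_needed = -(-total_tb // drive_tb)   # ceiling division

print(f"{total_tb:.0f} TB total, ~{drives_needed:.0f} drives per node")
```

Under these assumptions the whole archive fits in roughly ten drives, which is why a single space- and power-constrained node can hold it.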

Real-time inference and heavy model training, on the other hand, place very different demands on a system. These workloads require storage that can deliver extremely high input/output operations per second (IOPS) to keep up with the data feeds needed by high-bandwidth memory (HBM) and ensure that GPUs remain fully utilized. PEAK:AIO's software-defined storage layer, combined with high-performance solid-state drives (SSDs), addresses both ends of this spectrum, delivering the capacity, efficiency, and speed required across the entire AI pipeline.
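To make the IOPS side of that spectrum concrete, here is a sketch of the arithmetic relating GPU ingest rate, I/O request size, and required IOPS. All of the figures are illustrative assumptions, not numbers from the session:

```python
# Illustrative sketch: random-read IOPS needed to keep a group of GPUs fed.
# Every figure below is an assumed value for the sake of the arithmetic.
gpus = 8
feed_gb_s_per_gpu = 2.0        # assumed ingest rate each GPU needs (GB/s)
io_size_kb = 128               # assumed size of each read request

total_bytes_s = gpus * feed_gb_s_per_gpu * 1e9
iops_needed = total_bytes_s / (io_size_kb * 1e3)

print(f"{iops_needed:,.0f} IOPS to sustain {gpus * feed_gb_s_per_gpu:.0f} GB/s")
```

Halving the I/O size doubles the IOPS requirement at the same bandwidth, which is why inference-heavy workloads with small random reads stress storage so differently from bulk capacity workloads.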

A software-defined layer for clinical AI workloads at the edge

Cummings explained that PEAK:AIO's AI-defined storage technology, combined with Solidigm's high-performance SSDs, enables reading, writing, and archiving of massive datasets at the speed clinical AI demands. The combination accelerates model training and improves the accuracy of medical imaging, all while operating as part of an open-source stack adapted to healthcare environments.

“We provide a software-defined layer that can be deployed on any commodity server, turning it into a high-performance system for AI or HPC workloads,” said Cummings. “In edge environments, we take that same capability and scale it down to a single node, bringing inference to where the data lives.”

A key capability is how PEAK:AIO helps eliminate traditional bottlenecks by integrating memory more directly with the AI infrastructure. “We treat memory as part of the infrastructure itself, something that is often overlooked. Our solution scales not only storage but also the memory working space and its associated metadata,” said Cummings. This is a significant differentiator for customers who cannot afford, in space or in cost, to repeatedly rerun large models. By keeping tokens resident in memory and accessible, PEAK:AIO enables efficient, localized inference without constant recomputation.
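The idea of keeping previously computed results resident rather than recomputing them can be illustrated with a minimal memoization sketch. This is a generic illustration of the caching principle, not PEAK:AIO's implementation, and `compute_step` is a hypothetical stand-in for an expensive inference step:

```python
# Minimal sketch of the principle: cache the result computed for a token
# prefix so repeated requests skip the expensive recomputation.
# This is NOT PEAK:AIO's implementation; compute_step is a stand-in.
cache: dict[tuple[str, ...], str] = {}
calls = 0

def compute_step(prefix: tuple[str, ...]) -> str:
    """Stand-in for an expensive inference step over a token prefix."""
    global calls
    calls += 1
    return f"state-for-{len(prefix)}-tokens"

def infer(tokens: list[str]) -> str:
    """Return the cached result for this prefix, computing it only once."""
    key = tuple(tokens)
    if key not in cache:
        cache[key] = compute_step(key)
    return cache[key]

infer(["the", "patient", "scan"])   # computed once
infer(["the", "patient", "scan"])   # served from cache, no recompute
print(f"compute calls: {calls}")
```

The same trade-off, spending memory capacity to avoid repeated compute, is what makes scaling the memory working space alongside storage matter.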

Bringing intelligence closer to data

Cummings emphasized that enterprises will need to take a more strategic approach to managing AI workloads. “You can't just be a destination. You have to understand the workload. As inference puts this much pressure on infrastructure, we're seeing generalists become more specialized. We're now taking the work we've done on a single node and pushing it closer to the data to make it more efficient. We want smarter data, right? The only way to get that is to get closer to the data.”

Some clear trends are emerging from large-scale AI deployments, especially in newly built greenfield data centers. These facilities are designed around highly specialized hardware architectures that bring data as close to the GPUs as possible. To achieve this, they rely heavily on all-solid-state storage, particularly ultra-high-capacity SSDs, engineered to deliver petabyte-scale capacity at the speed and availability needed to keep GPUs constantly supplied with data.

“Now that same technology is basically playing out in microcosm at the edge, in the enterprise,” explained Cummings. “So for buyers of AI systems, it becomes crucial how you choose your hardware vendor and system, to make sure that if you want to get the most performance out of the system, you run it entirely on solid-state storage.”
