Alluxio, creator of the open-source data platform, announced the immediate availability of the latest enhancements to Alluxio Enterprise AI. Version 3.2 showcases the platform’s ability to universally leverage GPU resources, improve I/O performance, and deliver end-to-end performance competitive with HPC storage. It also introduces a new Python interface and advanced cache management features. These enhancements enable organizations to run their AI infrastructure with maximum efficiency, cost-effectiveness, flexibility, and manageability.
AI workloads face several challenges, including a mismatch between data access speed and GPU compute, which leads to GPU underutilization due to slow data loading in frameworks like Ray, PyTorch, and TensorFlow. Alluxio Enterprise AI 3.2 addresses this issue by improving I/O efficiency and achieving over 97% GPU utilization. Additionally, while HPC storage provides strong performance, it requires significant infrastructure investment. Alluxio Enterprise AI 3.2 offers comparable performance using existing data lakes, eliminating the need for additional HPC storage. Finally, managing complex integrations between compute and storage is difficult, but this release simplifies it with a Pythonic file system interface that supports POSIX, S3, and Python, making it easy for different teams to adopt.
Alluxio Enterprise AI includes the following key features:
● Apply GPUs anywhere for increased speed and agility – Alluxio Enterprise AI 3.2 enables organizations to run AI workloads anywhere GPUs are available, ideal for hybrid and multi-cloud environments. Its intelligent caching and data management bring data closer to GPUs, ensuring efficient utilization even when data is remote. A unified namespace simplifies access across storage systems, enabling seamless AI execution across diverse and distributed environments, so organizations can scale AI platforms without data location constraints.
● Comparable performance to HPC storage – MLPerf benchmarks show that Alluxio Enterprise AI 3.2 matches HPC storage performance by leveraging existing data lake resources. In benchmarks such as BERT and 3D U-Net, Alluxio delivers comparable model training performance across multiple A100 GPU configurations, proving its scalability and performance in real-world production environments without the need for additional HPC storage infrastructure.
● Higher I/O performance and GPU utilization of over 97% – Alluxio Enterprise AI 3.2 increases I/O performance, achieving throughput of up to 10 GB/s and 200K IOPS with a single client, scaling to hundreds of clients. This performance fully saturates 8 A100 GPUs on a single node, showing over 97% GPU utilization in large language model benchmarks. New checkpoint read/write support optimizes the training of recommendation engines and large language models, preventing GPU idle time.
● New Filesystem API for Python Apps – Version 3.2 introduces the Alluxio Python FileSystem API, an implementation of FSSpec, enabling seamless integration with Python applications. This extends Alluxio’s interoperability across the Python ecosystem, allowing frameworks like Ray to easily access local and remote storage systems.
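To illustrate what an FSSpec-based integration looks like, here is a minimal sketch of the standard FSSpec usage pattern. The "memory" protocol below is a stand-in backend; the actual protocol name and connection options for Alluxio's implementation are not specified in this release, so treat them as details to look up in the Alluxio documentation.

```python
# Sketch of the FSSpec interface pattern that an FSSpec-compatible
# filesystem (such as Alluxio's Python FileSystem API) plugs into.
# "memory" is used here only as an illustrative stand-in backend.
import fsspec

fs = fsspec.filesystem("memory")  # swap in Alluxio's registered protocol

# Write a sample object, then list and read it back through the
# uniform FSSpec interface that frameworks like Ray also use.
fs.pipe("/datasets/sample.txt", b"training shard 0")
print(fs.ls("/datasets", detail=False))
with fs.open("/datasets/sample.txt", "rb") as f:
    print(f.read())
```

Because Ray, PyArrow, and pandas all speak FSSpec, pointing them at an Alluxio-backed filesystem requires no change to application code beyond the filesystem instance.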
● Advanced Cache Management for Performance and Control – Version 3.2 offers advanced cache management features, giving administrators fine-grained control over data. A new RESTful API facilitates seamless cache management, while an intelligent cache filter optimizes disk utilization by selectively caching warm data. The cache free command offers granular control, improving cache efficiency, reducing costs, and increasing data management flexibility.
Availability
Alluxio Enterprise AI version 3.2 is now available for download here: https://www.alluxio.io/download/.