Remote-First Company | NEW YORK CITY, Feb. 13, 2024 (GLOBE NEWSWIRE) -- VAST Data, the AI data platform company, today announced a groundbreaking partnership with Run:ai, the leader in compute orchestration for AI workloads. This collaboration marks a monumental step in redefining AI operations at scale, offering a full-stack solution encompassing compute, storage, and data management. Together, VAST and Run:ai are addressing the critical needs of enterprises embarking on large-scale AI initiatives.
Run:ai streamlines accelerated NVIDIA AI infrastructure across private, public, and hybrid clouds, boosting AI project efficiency through dynamic workload scheduling and innovative GPU fractioning that improve GPU allocation and utilization. The platform caters to a range of needs, from data scientists’ interactive environments to large-scale training and reliable, scalable inference. Run:ai’s Open Architecture ensures a future-proof, collaborative platform that integrates with a broad ecosystem of industry leaders. For the data layer, the VAST Data Platform unifies storage, database, and containerized compute engine services into a single, scalable software platform built from the ground up to power AI and GPU-accelerated tools in modern data centers and clouds.
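To make the GPU fractioning concept concrete, the minimal sketch below shows one way a fractional GPU request is commonly expressed on a Kubernetes cluster managed by the Run:ai scheduler, using the Kubernetes Python client. The annotation name, scheduler name, container image, and training script are assumptions for illustration rather than details taken from this announcement, and should be verified against the Run:ai documentation for a given cluster version.

```python
# Minimal sketch (not an official Run:ai example): submitting a pod that asks the
# Run:ai scheduler for half of one GPU via a fractional-GPU annotation.
# The "gpu-fraction" annotation and "runai-scheduler" scheduler name are assumptions
# based on Run:ai's documented fractional-GPU mechanism; image and command are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig to reach the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="fractional-gpu-job",
        annotations={"gpu-fraction": "0.5"},  # assumed annotation: request 50% of a single GPU
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # hand scheduling to Run:ai instead of the default scheduler
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # example NGC image (placeholder tag)
                command=["python", "train.py"],            # hypothetical training script
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```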
“We've recognized that customers need a more holistic approach to AI operations,” said Renen Hallak, CEO and co-founder of VAST Data. “Our partnership with Run:ai transcends traditional, disparate AI solutions, integrating all of the components necessary for an efficient AI pipeline. Today’s announcement offers data-intensive organizations across the globe the blueprint to deliver more efficient, effective, and innovative AI operations at scale.”
Together, VAST Data and Run:ai are providing organizations with:
- Full-Stack Visibility for Resource and Data Management: The synergy between the VAST Data Platform and Run:ai creates a comprehensive AI solution, providing enterprises with full-stack visibility encompassing compute, networking, storage, and workload management across their AI operations.
- Cloud Service Provider-Ready Infrastructure: CSPs are pivotal in bringing GPU availability to enterprises integrating AI into their business processes. VAST and Run:ai are offering CSPs a blueprint to deploy and manage AI cloud environments efficiently. The VAST Data Platform, together with Run:ai, presents a tested and validated framework for CSPs to deliver secure, enterprise-grade AI environments across a single shared infrastructure, with a Zero Trust approach to compute and better data isolation and utilization.
- Optimized End-to-End AI Pipelines: From multi-protocol ingest and data processing to model training and inferencing, organizations can accelerate data preparation using the NVIDIA RAPIDS Accelerator for Apache Spark, along with the other AI frameworks and libraries available with the NVIDIA AI Enterprise software platform for developing and deploying production-grade AI applications. The VAST DataBase complements these with high-performance data pre-processing (see the configuration sketch after this list).
- Simple AI Deployment and Infrastructure Management: Run:ai offers fair-share scheduling so users can easily and automatically share clusters of GPUs without memory overflows or processing clashes, paired with simplified multi-GPU distributed training. In addition, the VAST DataSpace makes data access across geographies and multi-cloud environments easy, while providing the encryption, access controls, and data security that customers require.
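To illustrate the GPU-accelerated data preparation step called out in the pipeline item above, the following minimal sketch shows how a PySpark session is commonly configured to enable the NVIDIA RAPIDS Accelerator for Apache Spark. The application name, data paths, column name, and resource amounts are hypothetical placeholders, and the exact settings should be confirmed against the RAPIDS Accelerator documentation for the Spark version in use; this is an illustrative sketch, not a configuration taken from the announcement.

```python
# Minimal sketch: enabling the NVIDIA RAPIDS Accelerator for Apache Spark so that
# supported DataFrame/SQL operations run on GPUs during data preparation.
# Assumes the rapids-4-spark plugin jar is already on the Spark classpath
# (for example via spark.jars or --jars at submit time).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-data-prep")                               # hypothetical application name
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  # load the RAPIDS Accelerator plugin
    .config("spark.rapids.sql.enabled", "true")             # run supported SQL/DataFrame ops on the GPU
    .config("spark.executor.resource.gpu.amount", "1")      # one GPU per executor
    .config("spark.task.resource.gpu.amount", "0.25")       # four concurrent tasks share each GPU
    .getOrCreate()
)

# Example pre-processing pass: read raw records, drop incomplete rows, filter,
# and write back in a training-friendly layout (paths and column are placeholders).
raw = spark.read.parquet("/data/raw/events")
clean = raw.dropna().filter("event_type = 'train'")
clean.write.mode("overwrite").parquet("/data/prepared/events")
```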
“A key challenge in the market is providing equitable access to compute resources for diverse data science teams,” explained Omri Geller, CEO and co-founder at Run:ai. “Our collaboration with VAST emphasizes unlocking the maximum performance potential within complex AI infrastructures and greatly extends visibility and data management across the entire AI pipeline. This is a first-of-its-kind partnership that will provide immense value to our joint customers.”
The VAST Data Platform is VAST’s breakthrough approach to data management and accelerated computing. For enterprises and CSPs, the platform serves as the comprehensive software infrastructure required to capture, catalog, refine, enrich, store, and secure unstructured data with real-time analytics for AI and deep learning. Through its Open Architecture, Run:ai integrates seamlessly with NVIDIA AI Enterprise and NVIDIA accelerated computing, helping customers speed up development, scale AI infrastructure, and lower compute costs so they can orchestrate and manage compute resources effectively.
By deeply integrating NVIDIA’s market-leading AI computing with the dynamic AI workload orchestration of the Run:ai platform and VAST’s industry-disrupting AI data platform, organizations can make optimal use of their resources and gain better control and visibility across both the compute and data layers.
VAST + Run:ai blueprints, solution briefs, and demos will first be available at NVIDIA GTC 2024. Learn more about VAST and Run:ai at NVIDIA GTC by visiting VAST Data at Booth #1424.
Additional Resources:
- Explore the possibilities with VAST Data at NVIDIA GTC
About VAST Data
VAST Data is the data platform company built for the AI era. Organizations trust the VAST Data Platform, the new standard for enterprise AI infrastructure, to serve their most data-intensive computing needs. VAST Data empowers enterprises to unlock the full potential of their data by providing AI infrastructure that is simple, scalable, and architected from the ground up to power deep learning and GPU-accelerated data centers and clouds. Launched in 2019, VAST Data is the fastest-growing data infrastructure company in history. For more information, please visit the VAST Data website and follow VAST Data on X (formerly Twitter) and LinkedIn.
About Run:ai
The Run:ai platform brings cloud-like simplicity to AI resource management, providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system, which includes a workload scheduler and an abstraction layer, helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline the development, management, and scaling of AI applications across any infrastructure, including on-premises, edge, and cloud. For more information, please visit the Run:ai website and follow Run:ai on X (formerly Twitter) and LinkedIn.