Cloudera, the hybrid data, analytics and artificial intelligence platform, today announced the expansion of its Enterprise AI ecosystem at EVOLVE24, its annual data and artificial intelligence conference in New York. This initiative brings together a diverse group of industry-leading AI providers to deliver comprehensive, end-to-end AI solutions that help customers maximize the value of AI.
Large enterprises have unique requirements for running AI applications at scale, including:
- Demonstrating business value that justifies the total cost of ownership within a reasonable time frame.
- Adhering to strict security and privacy standards to protect sensitive data and maintain compliance.
- Maintaining the flexibility to deploy a diverse range of models from a wide selection of vendors in the optimal environment for each use case – often the environment that hosts the supporting data.
At last year’s EVOLVE conference, Cloudera launched the Enterprise AI ecosystem, whose founding members include:
- NVIDIA, which provides accelerated full-stack computing for developing and deploying AI workloads in both private and public clouds. Cloudera’s recent announcement highlighted the expansion of Cloudera’s AI inference service through integration with NVIDIA NIM, part of the NVIDIA AI Enterprise software platform – a set of easy-to-use microservices designed to securely and reliably deploy high-performance AI model inference across clouds, data centers and workstations.
- Amazon Web Services (AWS) with Amazon Bedrock, which enables customers to build and scale generative AI applications with a single API.
- Pinecone for its leading vector database that powers the most popular technical applications of artificial intelligence: retrieval-augmented generation (RAG) and semantic search.
Over the past year, the Enterprise AI ecosystem has generated significant interest and a steady flow of requests for Cloudera to build on existing AI partnerships and establish new ones. Now Cloudera is proud to introduce its newest set of AI ecosystem partners at EVOLVE24 in New York. These are:
- Google Cloud: Google Cloud’s Vertex AI Model Garden is a centralized hub for discovering, customizing, and deploying a diverse range of models. This includes a selection of over 150 foundation, proprietary, open and third-party models, including Gemini, Chirp, Imagen and others from Google. Google Cloud infrastructure also supports Cloudera’s DataHub platform, which provides the data foundation for building AI applications.
Additionally, as part of this first ecosystem collaboration, Cloudera has released an Accelerator for Machine Learning Projects (AMP) titled “Summarization with Gemini from Vertex AI” to help customers quickly implement a summarization use case that leverages the cost effectiveness and performance of Gemini Pro models, available from Vertex AI Model Garden via the API.
- Anthropic: Anthropic’s Claude large language models (LLMs) are ideal for code generation, vision analysis, data insights, and text generation use cases. Anthropic’s family of Claude models will enable Cloudera users to balance performance and cost, and Anthropic’s commitment to AI safety research will help ensure reliable, unbiased and harmless results. Cloudera is releasing an AMP titled “Image Analysis with Claude LLM from Anthropic,” which will significantly shorten the time needed to create production image analysis applications. Cloudera is also making Claude the default base model for the Cloudera AI Coding co-pilot.
- Snowflake: Cloudera and Snowflake, the AI Data Cloud company, are building on their strategic collaboration, also announced at EVOLVE24, with Snowflake’s Arctic Embed models, which excel in SQL generation and offer strong price-performance. Snowflake’s Iceberg-powered platform provides interoperability with Cloudera, making it easier to share data for AI workloads. Cloudera is actively working on product integrations with Snowflake, which you can read about here.
“We pioneered the Enterprise AI ecosystem to meet the complex and ever-evolving enterprise-grade security, privacy, authorization and LLM requirements of large organizations; this includes a complete suite of solutions spanning accelerated compute, semantic search, vector embeddings, multimodal agents, RAG applications, fine-tuning and foundation models,” said Abhas Ricky, Chief Strategy Officer at Cloudera. “Since then, AI researchers and practitioners have deployed over 400 state-of-the-art AI accelerators (AMPs) and numerous agent applications supporting high-value use cases such as customer voice analytics, invoice reconciliation, and underwriting automation. Together, we deliver a fully integrated Enterprise AI platform, built on leading models and knowledge bases, to further develop high-quality, production-ready solutions delivered with experts at your side.”