DeepL, a leading global Language AI company, today announced that it will be one of the first to commercially deploy an NVIDIA DGX SuperPOD with DGX GB200 systems. The NVIDIA DGX SuperPOD, expected to become operational at DeepL in mid-2025, will be used for research computing. It will give DeepL the additional computing power needed to train state-of-the-art models and develop the features and products that will take its innovative Language AI platform – which breaks down language barriers for businesses and professionals around the world – to the next level.
Scalable to tens of thousands of GPUs, the liquid-cooled, rack-scale NVIDIA DGX GB200 systems feature NVIDIA GB200 Grace Blackwell Superchips, enabling DeepL to run the high-performance AI models essential for advanced generative AI applications. The next-generation clusters are designed to deliver extreme performance and consistent uptime for large-scale AI training and inference.
This marks DeepL’s third deployment of an NVIDIA DGX SuperPOD and will offer more processing power than DeepL Mercury, a Top500 supercomputer and DeepL’s previous flagship NVIDIA DGX SuperPOD with DGX H100 systems, deployed a year ago in Sweden. The latest installation will be housed in the same Swedish data center.
With a rapidly growing customer base of over 100,000 companies and governments around the world, including 50% of Fortune 500 companies and industry leaders such as Zendesk, Nikkei, Coursera and Deutsche Bahn, DeepL is transforming global communication with its groundbreaking Language AI platform. The company’s industry-leading translation and writing tools enable businesses to break down language barriers, expand into new markets and collaborate across borders at unprecedented scale.