Although there are no literal dragons in the world of generative artificial intelligence, thanks to Abacus.ai, the term Dracarys now has some meaning there too. Dracarys is the name of a new family of open large language models (LLMs) for coding.
Abacus.ai is an AI model platform and tools provider that is no stranger to using fictional dragon names in its technology. In February, the company released Smaug-72B. Smaug is the name of the dragon from the classic fantasy book The Hobbit. While Smaug is a general-purpose LLM, Dracarys is optimized for coding tasks.
In its first release, Abacus.ai applied its so-called “Dracarys recipe” to models in the 70B-parameter class. The recipe includes optimized fine-tuning, among other techniques.
“It’s a combination of training dataset and fine-tuning techniques that improve the coding ability of any open-source LLM,” Bindu Reddy, CEO and co-founder of Abacus.ai, told VentureBeat. “We’ve shown that it improves both Qwen-2 72B and Llama-3.1 70b.”
Gen AI for coding tasks is a growing space
Generative AI for application development and coding is an area full of activity.
An early pioneer in this space was GitHub’s Copilot, which helps developers with code completion and app development tasks. A number of startups, including Tabnine and Replit, have also built features that bring the power of LLMs to developers.
Of course, there are also the LLM providers themselves. Dracarys provides a fine-tuned version of Meta’s general-purpose Llama 3.1 model. Anthropic’s Claude 3.5 Sonnet also emerged in 2024 as a popular and capable LLM for coding.
“Claude 3.5 is a very good coding model, but it is a closed-source model,” Reddy said. “Our recipe improves on the open-source model, and Dracarys-72B-Instruct is the best coding model in its class.”
The numbers on Dracarys and its AI coding capabilities
According to LiveBench benchmarks of the new models, the Dracarys recipe delivers a clear improvement.
LiveBench gives the meta-llama-3.1-70b-instruct-turbo model a coding score of 32.67. The Dracarys-tuned version raises that score to 35.23. For Qwen2 the results are even better: the existing qwen2-72b-instruct model has a coding score of 32.38, which the Dracarys recipe lifts to 38.95.
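Taken at face value, those LiveBench numbers translate into sizable relative gains for both base models. A quick sketch of the arithmetic (the dictionary keys here are informal shorthand, not official model identifiers):

```python
# LiveBench coding scores reported for the base and Dracarys-tuned models.
scores = {
    "llama-3.1-70b-instruct": (32.67, 35.23),
    "qwen2-72b-instruct": (32.38, 38.95),
}

def pct_gain(before: float, after: float) -> float:
    """Relative improvement of the tuned model over the base model, in percent."""
    return 100 * (after - before) / before

for model, (base, tuned) in scores.items():
    print(f"{model}: {pct_gain(base, tuned):.1f}% relative gain")
# llama-3.1-70b-instruct: 7.8% relative gain
# qwen2-72b-instruct: 20.3% relative gain
```

By that measure the recipe helps Qwen2 considerably more than Llama 3.1, roughly a 20% relative improvement versus about 8%.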
While Qwen2 and Llama 3.1 are currently the only models tuned with the Dracarys recipe, Abacus.ai plans to apply it to more models in the future.
“We will also release Dracarys builds for Deepseek-coder and Llama-3.1 405b,” Reddy said.
How Dracarys will lend enterprises a hand with coding
There are several ways developers and enterprises can potentially benefit from the improved coding performance promised by Dracarys.
Abacus.ai currently provides model weights on Hugging Face for both the Llama- and Qwen2-based models. Reddy noted that the fine-tuned models are now also available as part of Abacus.ai’s enterprise offering.
“These are great options for enterprises that don’t want to send their data to public APIs like OpenAI and Gemini,” Reddy said. “We’ll also make Dracarys available on our wildly popular ChatLLM service for small teams and professionals if there’s enough interest.”
