As the engineering organization at monday.com, the cloud-based project tracking software company, grew past 500 developers, the team began to feel the strain of its own success. Product lines proliferated, microservices multiplied, and code flowed faster than reviewers could keep up. The company needed a way to review thousands of pull requests every month without burning out developers and without sacrificing quality.
That’s when Guy Regev, vice president of R&D and head of the Growth and Monday Dev teams, began experimenting with the AI tool from Qodo, an Israeli startup focused on development agents. What started as a lightweight test quickly became a key part of monday.com’s software delivery infrastructure, according to a new case study published by Qodo and monday.com.
“Qodo doesn’t feel like just another tool — it’s like adding a new developer to the team who is actually learning how we work,” Regev told VentureBeat in a recent video call, adding that it “prevents over 800 issues per month from reaching production – some of which could have caused serious security vulnerabilities.”
Unlike code-generation tools such as GitHub Copilot or Cursor, Qodo doesn’t attempt to write fresh code. Instead, it specializes in reviewing it – using what the company calls context engineering to understand not only what changed in a pull request, but also why, how it aligns with business logic, and whether it follows internal best practices.
“You can call Claude Code or Cursor and get 1,000 lines of code in five minutes,” said Itamar Friedman, co-founder and CEO of Qodo, on the same video call as Regev. “You have 40 minutes and you can’t check it. That’s why Qodo has to actually check it.”
For monday.com, this capability wasn’t just helpful – it was transformative.
Code review at scale
At any given time, monday.com developers are delivering updates to hundreds of repositories and services. The engineering organization works in tightly coordinated teams, each aligned to specific parts of the product: marketing, CRM, development tools, internal platforms, and more.
This is where Qodo came in. The company’s platform uses AI not only to check for obvious errors or style violations, but also to assess whether a pull request follows team-specific conventions, architectural guidelines, and historical patterns.
It does this by learning from a company’s own code – training on previous PRs, comments, calls, and even Slack threads to understand how each team works.
“Qodo’s comments are not general – they reflect our values, our libraries, and even our standards for feature flags and privacy,” Regev said. “It is context-aware in a way that traditional tools are not.”
What does “context engineering” actually mean?
Qodo calls its secret sauce context engineering – a system-level approach to managing everything the model sees when making a decision.
This includes the PR’s code diff, of course, but also previous discussions, documentation, relevant files from the repository, and even test results and configuration data.
The idea is that language models don’t really “think” – they predict the next token based on the input they receive. Thus, the quality of their results depends almost entirely on the quality and structure of the input data.
As Dana Fine, Qodo’s community manager, put it in a blog post: “You don’t just write prompts; you design structured input within a set token limit. Each token is a design decision.”
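The idea of structured input under a token budget can be sketched in a few lines. This is a hypothetical illustration, not Qodo's actual implementation: the section names and the rough 4-characters-per-token heuristic are assumptions.

```python
# Hypothetical sketch of "context engineering": packing prioritized
# sources of context into a fixed token budget before calling a model.
# Section names and the 4-chars-per-token heuristic are assumptions.

def estimate_tokens(text: str) -> int:
    """Rough heuristic: about 4 characters per token."""
    return max(1, len(text) // 4)

def build_review_context(sections: list[tuple[str, str]], budget: int) -> str:
    """Greedily include sections in priority order until the budget is spent.

    `sections` is a list of (label, text) pairs, highest priority first,
    e.g. [("diff", ...), ("team_guidelines", ...), ("related_prs", ...)].
    """
    parts, used = [], 0
    for label, text in sections:
        cost = estimate_tokens(text)
        if used + cost > budget:
            continue  # skip sections that don't fit; smaller ones may still fit
        parts.append(f"## {label}\n{text}")
        used += cost
    return "\n\n".join(parts)

prompt = build_review_context(
    [
        ("diff", "+ api_key = 'test-env-123'  # hard-coded secret"),
        ("team_guidelines", "Never commit credentials; use the secrets manager."),
        ("prior_review_comments", "x" * 8000),  # too large for this budget
    ],
    budget=100,
)
```

The point of the sketch is the prioritization: the diff and the team's conventions always make it into the prompt, while bulkier, lower-priority material is dropped when the budget runs out.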
This isn’t just theory. At monday.com, it meant Qodo could catch not only the obvious bugs but also the subtle ones that usually slip past reviewers – hard-coded variables, missing error handling, or violations of cross-team architecture conventions.
One example stood out. In a recent PR, Qodo flagged a line that accidentally exposed a testing-environment variable – something no human reviewer had caught. Had it been merged, it could have caused production problems.
“The hours we would have spent fixing that security breach – and the resulting legal issues – would have been significantly greater than the hours it took to withdraw the request,” Regev said.
Pipeline integration
Today, Qodo is deeply integrated into monday.com’s development workflow, parsing pull requests and providing contextual recommendations based on the team’s previous code reviews.
“It’s not just another tool… It feels like another team member has joined the system – someone who is learning how we work,” Regev noted.
Developers receive suggestions during the review process and retain control over final decisions – the human-in-the-loop model was critical to adoption.
Because Qodo integrates directly with GitHub via pull request actions and comments, monday.com’s infrastructure team faced no long learning curve.
“It’s just a GitHub Action,” Regev said. “It runs alongside our tests and comments on the PR. It’s not a separate tool that we had to learn.”
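The integration pattern Regev describes – a CI step that posts review feedback back to the pull request – can be sketched against GitHub's REST API, which treats PR-level comments as issue comments. The repository names, token, and comment text below are placeholders; this is an illustration of the pattern, not Qodo's actual code.

```python
# Minimal sketch of a CI step posting review feedback to a PR via the
# GitHub REST API. Repo, token, and comment text are placeholders.
import json

API_ROOT = "https://api.github.com"

def build_comment_request(owner: str, repo: str, pr_number: int,
                          body: str, token: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, payload) for POSTing a comment to a PR.

    GitHub exposes PR-level comments through the issues endpoint:
    POST /repos/{owner}/{repo}/issues/{number}/comments
    """
    url = f"{API_ROOT}/repos/{owner}/{repo}/issues/{pr_number}/comments"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    payload = json.dumps({"body": body}).encode()
    return url, headers, payload

url, headers, payload = build_comment_request(
    "example-org", "example-repo", 42,
    "Warning: this diff exposes a test-environment variable.",
    "ghp_placeholder_token")
# In a real workflow you would send this with urllib.request or similar,
# using a token provided by the CI environment.
```

Because the whole interaction is ordinary HTTP against an API developers already use, a review bot built this way needs no separate client or dashboard – which is the "not a separate tool" property Regev highlights.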
“The goal is to actually help developers learn the code, take ownership, give each other feedback, learn from that, and set standards,” Friedman added.
Results: Saving time, preventing errors
Since implementing Qodo more broadly, monday.com has seen measurable improvements across many teams.
Internal analysis shows that developers save on average about an hour on each pull request. Multiply that by thousands of PRs per month and the savings quickly add up to thousands of developer hours per year.
The issues Qodo catches aren’t just cosmetic – many involve business logic, security, or runtime stability. And because Qodo’s suggestions reflect monday.com’s real-world conventions, developers are more likely to follow them.
The accuracy of the system is based on its data-driven design. Qodo trains on each company’s private code and historical data, adapting to different team styles and practices. It is not based on universal principles or external data sets. Everything is customized.
From internal tool to product vision
Regev’s team was so impressed with Qodo’s impact that it began planning a deeper integration between Qodo and Monday Dev, the developer-focused product line monday.com is building.
The vision is to create a workflow where business context – tasks, tickets, customer feedback – flows directly into the code review layer. This way, reviewers can evaluate not only whether the code “works” but also whether it solves the right problem.
“Before, we had linters, threat rules, static analysis… rule-based… you had to configure all the rules,” Regev said. “But a rule doesn’t know what you don’t know… Qodo… feels like it’s learning from our engineers.”
This aligns closely with Qodo’s roadmap. The company isn’t just checking code – it’s building a full platform of development agents, including Qodo Gen for context-aware code generation, Qodo Merge for automated PR analysis, and Qodo Cover, a regression-testing agent that uses runtime validation to ensure test coverage.
All of this is powered by Qodo’s own infrastructure, including its new open-source embedding model, Qodo-Embed-1-1.5B, which outperforms offerings from OpenAI and Salesforce on code search benchmarks.
What’s next?
Qodo now offers its platform in a freemium model – free for individuals, with a discount for startups through Google Cloud’s Perks program, and enterprise-grade for companies that need single sign-on, non-stop deployment, or advanced controls.
The company already works with teams at NVIDIA, Intuit, and other Fortune 500 companies. And thanks to its recent partnership with Google Cloud, Qodo’s models are available directly in Vertex AI Model Garden, making them easier to integrate into enterprise pipelines.
“Contextual engines will be the biggest story of 2026,” Friedman said. “Every enterprise will need to build its own second brain if it wants AI that actually understands and helps them.”
As AI systems become more embedded in software development, tools like Qodo show how the right context—delivered at the right moment—can transform the way teams create, ship, and scale code across the enterprise.
