
As cloud project tracking software company monday.com’s engineering organization scaled past 500 developers, the team began to feel the strain of its own success. Product lines were multiplying, microservices were proliferating, and code was flowing faster than human reviewers could keep up. The company needed a way to review thousands of pull requests every month without drowning developers in tedium or letting quality slip.
That’s when Guy Regev, VP of R&D and head of the Growth and monday Dev teams, started experimenting with a new AI tool from Qodo, an Israeli startup focused on developer agents. What began as a lightweight test soon became a critical part of monday.com’s software delivery infrastructure, as a new case study released today by Qodo and monday.com reveals.
“Qodo doesn’t feel like just another tool—it’s like adding a new developer to the team who actually learns how we work," Regev told VentureBeat in a recent video call interview, adding that it has "prevented over 800 issues per month from reaching production—some of them could have caused serious security vulnerabilities."
Unlike code generation tools like GitHub Copilot or Cursor, Qodo isn’t trying to write new code. Instead, it specializes in reviewing it — using what it calls context engineering to understand not just what changed in a pull request, but why, how it aligns with business logic, and whether it follows internal best practices.
"You can call Claude Code or Cursor and in five minutes get 1,000 lines of code," said Itamar Friedman, co-founder and CEO of Qodo, in the same video call interview as with Regev. "You have 40 minutes, and you can't review that. So you need Qodo to actually review it.”
For monday.com, this functionality wasn’t just useful; it was transformative.
Code Review, at Scale
At any given time, monday.com’s developers are shipping updates across hundreds of repositories and services. The engineering org works in tightly coordinated teams, each aligned with specific parts of the product: marketing, CRM, dev tools, internal platforms, and more.
That’s where Qodo came in. The company’s platform uses AI not just to check for obvious bugs or style violations, but to evaluate whether a pull request follows team-specific conventions, architectural guidelines, and historical patterns.
It does this by learning from your own codebase, training on previous PRs, comments, merges, and even Slack threads to understand how your team works.
"The comments Qodo gives aren’t generic—they reflect our values, our libraries, even our standards for things like feature flags and privacy," Regev stated. "It’s context-aware in a way traditional tools aren’t."
What “Context Engineering” Really Means
Qodo calls its secret sauce context engineering: a system-level approach to managing everything the model sees when it makes a decision.
This includes the PR code diff, of course, but also prior discussions, documentation, related files from the repo, even test results and configuration data.
The idea is that language models don’t really “think”; they predict the next token based on the inputs they’re given. So the quality of their output depends almost entirely on the quality and structure of those inputs.
As Dana Fine, Qodo’s community manager, put it in a blog post: “You’re not just writing prompts; you’re designing structured input under a fixed token limit. Every token is a design decision.”
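That framing translates fairly directly into code. Below is a minimal Python sketch of the general idea, assuming a simple priority-ordered packing scheme and a rough four-characters-per-token estimate; the section names and budget are illustrative placeholders, not Qodo’s actual implementation.

```python
# Minimal sketch: pack prioritized review context into a fixed token budget.
# Section names, priorities, and the chars/4 token estimate are illustrative
# assumptions, not Qodo's implementation.

def estimate_tokens(text: str) -> int:
    # Crude approximation: roughly 4 characters per token for English and code.
    return len(text) // 4

def build_review_context(sections: list[tuple[str, str]], budget: int = 8000) -> str:
    """Add labeled sections, highest priority first, until the budget is spent."""
    parts, used = [], 0
    for label, content in sections:
        cost = estimate_tokens(content)
        if used + cost > budget:
            continue  # a real system might summarize or truncate instead of skipping
        parts.append(f"### {label}\n{content}")
        used += cost
    return "\n\n".join(parts)

context = build_review_context([
    ("PR diff", "--- a/api.py\n+++ b/api.py\n+BASE_URL = 'https://staging.internal'"),
    ("Team conventions", "Never hardcode environment URLs; read them from config."),
    ("Prior review comments", "Reviewer: prefer feature flags over hardcoded toggles."),
    ("Test results", "42 passed, 0 failed"),
])
print(context)  # this structured block is what accompanies the review prompt
```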
This isn’t just theory. In monday.com’s case, it meant Qodo could catch not only the obvious bugs but also the subtle ones that often slip past human reviewers: hardcoded variables, missing fallbacks, or violations of cross-team architecture conventions.
One example stood out. In a recent PR, Qodo flagged a line that inadvertently exposed a staging environment variable, something no human reviewer had caught. Had it been merged, it could have caused problems in production.
"The hours we would spend on fixing this security leak and the legal issue that it would bring would be much more than the hours that we reduce from a pull-request," stated Regev.
Integration into the Pipeline
Today, Qodo is deeply integrated into monday.com’s development workflow, analyzing pull requests and surfacing context-aware suggestions based on prior team code reviews.
“It doesn’t feel like just another tool… It feels like another teammate that joined the system — one who learns how we work," Regev noted.
Developers receive suggestions during the review process and remain in control of final decisions — a human-in-the-loop model that was critical for adoption.
Because Qodo integrated directly into GitHub via pull request actions and comments, monday.com’s infrastructure team didn’t face a steep learning curve.
“It’s just a GitHub action,” said Regev. “It creates a PR with the tests. It’s not like a separate tool we had to learn.”
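Qodo’s action itself is a packaged product, but the underlying integration pattern, a CI step that inspects a pull request and posts its findings back as comments, can be sketched in a few lines. The example below uses GitHub’s standard REST endpoint for issue comments; the environment variable names and the canned finding are placeholder assumptions.

```python
import os
import requests

# Generic sketch of a CI review step that comments on a pull request.
# Not Qodo's implementation; GITHUB_TOKEN, REPO, and PR_NUMBER are placeholder names.
token = os.environ["GITHUB_TOKEN"]
repo = os.environ["REPO"]            # e.g. "acme/web-app"
pr_number = os.environ["PR_NUMBER"]

findings = ["Hardcoded staging URL in api.py; load it from configuration instead."]

# Pull request comments are posted through GitHub's issues-comments endpoint.
resp = requests.post(
    f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={"body": "Automated review findings:\n" + "\n".join(f"- {f}" for f in findings)},
)
resp.raise_for_status()
```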
“The purpose is to actually help the developer learn the code, take ownership, give feedback to each other, and learn from that and establish the standards," added Friedman.
The Results: Time Saved, Bugs Prevented
Since rolling out Qodo more broadly, monday.com has seen measurable improvements across multiple teams.
Internal analysis shows that developers save roughly an hour per pull request on average. Multiply that across thousands of PRs per month, and the savings quickly reach thousands of developer hours annually.
The issues Qodo flags aren’t just cosmetic, either: many relate to business logic, security, or runtime stability. And because Qodo’s suggestions reflect monday.com’s actual conventions, developers are more likely to act on them.
The system’s accuracy is rooted in its data-first design. Qodo trains on each company’s private codebase and historical data, adapting to different team styles and practices. It doesn’t rely on one-size-fits-all rules or external datasets. Everything is tailored.
From Internal Tool to Product Vision
Regev’s team was so impressed with Qodo’s impact that they’ve started planning deeper integrations between Qodo and monday Dev, the developer-focused product line monday.com is building.
The vision is to create a workflow where business context — tasks, tickets, customer feedback — flows directly into the code review layer. That way, reviewers can assess not just whether the code “works,” but whether it solves the right problem.
“Before, we had linters, danger rules, static analysis… rule-based… you need to configure all the rules," Regev said. "But it doesn’t know what you don’t know… Qodo… feels like it’s learning from our engineers.”
This aligns closely with Qodo’s own roadmap. The company doesn’t just review code. It’s building a full platform of developer agents, including Qodo Gen for context-aware code generation, Qodo Merge for automated PR review, and Qodo Cover, a regression-testing agent that uses runtime validation to ensure test coverage.
All of this is powered by Qodo’s own infrastructure, including its new open-source embedding model, Qodo-Embed-1-1.5B, which outperformed offerings from OpenAI and Salesforce on code retrieval benchmarks.
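Because the model is open source, teams can try that retrieval capability directly. The sketch below ranks a few code snippets against a natural-language query using the sentence-transformers library; it assumes the model is published on Hugging Face under the ID “Qodo/Qodo-Embed-1-1.5B”, so check the model card for the exact identifier and loading options.

```python
# Toy code-retrieval example. The Hugging Face model ID is an assumption based on the
# model's published name; consult the model card before relying on it.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B", trust_remote_code=True)

snippets = [
    "def get_user(session, user_id): return session.query(User).get(user_id)",
    "def render_header(title): return f'<h1>{title}</h1>'",
    "def hash_password(pw, salt): return hashlib.sha256(salt + pw.encode()).hexdigest()",
]
query = "function that fetches a user record from the database"

# Embed the query and the candidate snippets, then rank snippets by cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
snippet_embs = model.encode(snippets, convert_to_tensor=True)
scores = util.cos_sim(query_emb, snippet_embs)[0]

for score, snippet in sorted(zip(scores.tolist(), snippets), reverse=True):
    print(f"{score:.3f}  {snippet[:60]}")
```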
What’s Next?
Qodo now offers its platform under a freemium model: free for individuals, discounted for startups through Google Cloud’s Perks program, and enterprise-grade for companies that need SSO, air-gapped deployment, or advanced controls.
The company is already working with teams at NVIDIA, Intuit, and other Fortune 500 companies. And thanks to a recent partnership with Google Cloud, Qodo’s models are available directly within Vertex AI’s Model Garden, making them easier to integrate into enterprise pipelines.
"Context engines will be the big story of 2026," Friedman stated. "Every enterprise will need to build their own second brain if they want AI that actually understands and helps them."
As AI systems become more embedded in software development, tools like Qodo are showing how the right context, delivered at the right moment, can transform how teams build, ship, and scale code across the enterprise.

