Qodo Raises $70M to Scale Code Verification for AI-Generated Code


This article takes a closer look at Qodo, a New York–based startup that’s building AI agents for code review, testing, and governance. It digs into how the company wants to solve the verification bottleneck as AI-generated code spreads across organizations.

There’s also a look at Qodo’s recent funding, some benchmarks, and the firm’s stance that trust and context matter if we want safer, more reliable AI-assisted software development.

Qodo’s approach to code verification at scale

Most AI review tools stick to diff-level changes, but Qodo goes further. It checks how code changes ripple across entire systems.

The platform pulls in organizational standards, historical context, and each company's risk tolerance to figure out what a change really means for reliability and safety. In other words, Qodo weighs both performance and system-wide context, not just the lines that changed.

The company is trying to bridge the gap between what AI coding tools can do and what big engineering teams actually need. Instead of treating software development as a bunch of isolated snippets, Qodo sees it as a stateful process. That way, governance of AI-generated code gets a lot more meaningful.

  • System-wide evaluation over simple diffs
  • Organizational alignment with internal standards and risk tolerance
  • Context-aware verification informed by historical data and governance policies
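To make the idea concrete, here is a minimal, hypothetical sketch of what "context-aware verification" could look like in principle. None of this reflects Qodo's actual implementation; the `OrgPolicy` structure, the risk-prefix rule, and the `review_change` helper are all invented for illustration. The point is simply that the verdict depends on organizational policy, not just the diff itself.

```python
from dataclasses import dataclass

@dataclass
class OrgPolicy:
    """Hypothetical stand-in for an organization's coding standards."""
    max_function_length: int          # internal style limit
    high_risk_prefixes: set           # module paths the org treats as risky

def review_change(changed_files, functions_touched, has_tests, policy):
    """Toy context-aware check: the same diff can pass or fail
    depending on the organization's policy and risk tolerance."""
    findings = []
    # Diff-level check, but against an org-specific threshold
    for name, length in functions_touched.items():
        if length > policy.max_function_length:
            findings.append(f"{name} exceeds {policy.max_function_length} lines")
    # System-level check: changes to high-risk modules need tests
    risky = [f for f in changed_files
             if any(f.startswith(p) for p in policy.high_risk_prefixes)]
    if risky and not has_tests:
        findings.append(f"high-risk modules changed without tests: {risky}")
    return findings

policy = OrgPolicy(max_function_length=80, high_risk_prefixes={"billing/"})
print(review_change(["billing/invoice.py"], {"apply_discount": 120}, False, policy))
```

A real system would, per Qodo's description, also fold in historical context and learned "tribal knowledge," but even this toy version shows why two companies can legitimately disagree about whether the same change is safe to merge.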

Funding momentum, performance, and product evolution

Qodo recently announced a $70 million Series B led by Qumra Capital. That brings total funding up to $120 million.

Investors seem confident, especially as more and more code gets written by AI and needs reliable verification. Qodo says it leads the market in performance: it recently ranked No. 1 on Martian's Code Review Bench with a 64.3% score, a solid jump ahead of competitors.

The startup also rolled out Qodo 2.0, a multi-agent code review system. This new version learns and enforces each organization’s unique definition of code quality.

Big names like Nvidia, Walmart, Red Hat, Intuit, and Texas Instruments are already on board. Fast-growing teams at Monday.com and JFrog are picking up the platform too, hoping to scale up trusted AI-assisted development.

The trust gap in AI-assisted coding

One big issue for Qodo is the trust-to-practice gap in AI-generated code. According to a survey they cite, 95% of developers don’t fully trust AI-generated code, and only 48% actually review it every time.

That says a lot about why LLMs alone can’t handle serious software governance. Tools need to bake in context, provenance, and company policies if they want to close that gap.

Founder Itamar Friedman points out that large language models just can't capture the internal context or "tribal knowledge" needed for real code review. He compares it to moving an engineer between companies: so much crucial, unspoken knowledge gets lost.

From stateless AI to artificial wisdom: the real-world impact

Qodo describes its mission as moving software development away from stateless, generic AI outputs. Instead, they want stateful systems that offer something closer to artificial wisdom for safer, more dependable AI-generated code.

By weaving in organizational nuance, Qodo aims to give recommendations that fit with long-standing development norms, regulations, and risk considerations. This shift could really change how engineering teams handle verification, governance, and compliance as AI-generated code keeps ramping up.

Real-world impact: customers and benchmarks

Qodo works with a wide range of enterprise and fast-growing customers, which shows a real path from funding to actual adoption.

Their performance benchmarks and real-world deployments point to a scalable way forward. It's not just theory; they're shipping, and that says a lot about trust in AI-enabled software development.

Industry watchers might want to pay attention to Qodo’s multi-agent system. Their focus on organizational learning makes them a strong player in the shift toward stateful AI for software engineering.

Implications for the future of software governance

Qodo’s approach puts a spotlight on some important trends for both researchers and folks in the trenches:

  • Context-rich AI tools that actually consider company policies and past data
  • Governance-ready AI built for reliability and risk management
  • Provenance and trust at the heart of code review and testing platforms

As AI spreads across more codebases, blending in internal context, cultural knowledge, and governance standards is going to be critical. That's how you get software people can actually trust.

Here is the source article for this story: Qodo raises $70M for code verification as AI coding scales
