I was sitting across from a team at a large cloud company — the kind with a logo you'd recognize and a sales deck with a lot of colors. They were demoing a new enterprise product. The UI was clean, the pitch was polished, and it was pretty obvious that a significant chunk of the application had been built with AI assistance. The code patterns had that particular flavor. The feature set had that "we generated our way to this" breadth.
I asked a simple question. Not a technical one — not about architecture or APIs or latency. I asked about the bounds of the product. What it could do. Where it stopped. What problems it wasn't designed to solve.
The answer was: "We're not really sure."
I don't mean they were being cagey. I don't mean they hadn't prepped for the question. I mean they genuinely didn't know. A major company, in an enterprise sales meeting, demoing a product they had shipped — and they couldn't tell me what the product was capable of.
I've been thinking about that moment ever since.
This Isn't a Code Quality Problem
Everyone is having the code quality conversation. AI-generated code has more bugs. Higher security vulnerability rates. Hallucinated dependencies. I've written about it. It's real.
But what happened in that meeting was something different — and in some ways more unsettling. It wasn't that the code was bad. The demo worked. The product ran. The question I was asking wasn't even about the code. It was a product question. A strategy question. What does this thing do, and what doesn't it do?
And nobody in the room could answer it.
Here's what I think is happening. When a human team builds software the traditional way, product knowledge lives in the people who built it. The developer who wrote the payment module knows every edge case in it because they wrestled with each one. The engineer who built the search feature knows exactly why it returns results in that order — because they made that call. There's a living map of the product distributed across the team's heads.
AI-assisted development can build features faster than a team can develop that map. You generate your way to a feature set, the product works, and you ship it. But nobody walked the territory on foot. Nobody has the map.
The Product Comprehension Gap
I want to give this problem a name, because I think it's going to matter a lot more as we go forward: the product comprehension gap.
It's not technical debt, exactly. The code might be perfectly functional. It's not a product-market fit problem. The features might be exactly what customers want. It's the gap between what a product does and what the team knows it does.
That gap has always existed to some degree. Large software teams have always had knowledge silos. But the gap was constrained by the speed of human development. You could only build as fast as you could understand.
AI breaks that constraint. You can now build faster than you can understand. And the gap between "shipped" and "comprehended" can widen into something genuinely dangerous — particularly in enterprise software, where customers are making purchasing decisions and deployment commitments based on what a product does.
When the team demoing the product can't describe its bounds, one of two things is true: either the product has no intentional bounds, which is a product strategy problem, or the bounds exist but the team hasn't mapped them, which is a comprehension problem. Both are bad. In an enterprise context, they're bad in ways that come back to bite you hard — in support tickets, in security audits, in the deals you lose because your own sales team can't answer basic questions.
Why Enterprise Amplifies Everything
In consumer software, a comprehension gap is embarrassing. In enterprise software, it's a liability.
Enterprise buyers don't just buy products. They buy into commitments — integration contracts, compliance postures, security reviews, SLAs. They need to know what the product does in edge cases, because their legal and security teams will ask. They need to know what it doesn't do, because they're deciding whether to build adjacent tooling or expect it from you.
When the answer to "what can this do?" is "we're not really sure," that's not a sales problem. That's a trust problem. Enterprise is built on trust. You don't recover easily from a meeting where the vendor's own team can't describe their product.
And honestly — as a developer watching this happen from the other side of the table — it raised a deeper question. If the team doesn't know what the product can do, who's accountable for what it will do once it's running in a customer's environment?
What To Actually Do About This
If you're building with AI assistance at speed — and you should be; the productivity gains are real — the product comprehension gap is something you need to close deliberately. It won't close itself.
A few things I'd actually recommend:
- Map the product after you build it, not just before. Pre-build specs are great. They're also obsolete the moment AI generates features you didn't plan for. Do a post-build audit: what does this product actually do now? Where does it break? What's its blast radius in production?
- Own your edge cases before your customers find them. AI-generated code is particularly weak on edge cases — it optimizes for the happy path. Someone on your team needs to own adversarial testing: what happens when users do something unexpected, or when data comes in malformed, or when the environment is degraded?
- Write your product's bounds explicitly. Not for marketing. For your team. A living document that says: "This product does X, Y, and Z. It does not do A, B, or C. In conditions Q and R, behavior is undefined." If you can't write that document, you have a comprehension gap.
- Train your sales team on limits, not just capabilities. The meeting I described failed not just because the team didn't know the bounds — it failed because they were in a sales meeting and had no answer. Customers respect "here's what we don't do yet" infinitely more than "we're not really sure."

The pace of AI-assisted development is only going to accelerate. That's mostly good. But speed without comprehension is how you build a product nobody — including you — fully understands.
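To make the edge-case and bounds recommendations concrete, here's a minimal sketch of what "owning your edge cases in executable form" can look like. Everything in it is hypothetical — `parse_amount` stands in for any input-handling path in your product — but the shape is the point: a small adversarial audit that records, in code, what the product does at its bounds instead of leaving that knowledge unmapped.

```python
import math

def parse_amount(raw):
    """Parse a user-supplied money amount into cents, or raise ValueError.

    Hypothetical stand-in for a real product function whose bounds
    we want on record. Not any particular product's implementation.
    """
    if not isinstance(raw, str):
        raise ValueError("amount must be a string")
    text = raw.strip().lstrip("$")
    if not text:
        raise ValueError("amount is empty")
    value = float(text)  # raises ValueError on garbage like "abc"
    if not math.isfinite(value):
        raise ValueError("amount must be finite")  # bound found by auditing "1e999"
    if value < 0:
        raise ValueError("amount must be non-negative")
    return round(value * 100)

# Adversarial inputs: each entry documents behavior at a bound, so the
# answer to "what does it do with X?" is written down, not guessed at.
MALFORMED = ["", "   ", "abc", "1e999", None, "-5", "$-5"]

def audit():
    """Run the malformed inputs and record how each one is handled."""
    results = {}
    for raw in MALFORMED:
        try:
            results[repr(raw)] = parse_amount(raw)
        except ValueError as exc:
            results[repr(raw)] = f"rejected: {exc}"
    return results

if __name__ == "__main__":
    for case, outcome in audit().items():
        print(case, "->", outcome)
```

The audit doubles as the bounds document from the previous bullet: the keys are the conditions, the values are the product's committed behavior, and any input whose outcome surprises the team is a comprehension gap made visible before a customer finds it.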
I left that meeting thinking about all the enterprise software quietly deployed right now that nobody has actually mapped. Running in production. Handling customer data. Doing... something.
If your own team can't describe what your product does, maybe slow down long enough to find out.