I've run technical due diligence on over 50 companies at various stages, from seed to pre-IPO. Some of those deals went through. Many didn't. And in almost every case where we walked away, the warning signs were there from the first day if you knew where to look.
The thing about technical red flags is that they're not always obvious. A clean demo and a confident CTO can mask serious problems. Investors without deep technical backgrounds often miss what's lurking beneath the surface. And founders, whether consciously or not, become skilled at steering conversations away from their weaknesses.
Here are the red flags I've learned to spot, the ones that separate healthy technical organizations from those hiding problems that will surface after the check clears.
1. The "Hero" Architecture
Ask who understands how the core system works. If the answer is one person, that's a red flag. If that person is also the only one who can deploy to production, it's a bigger one.
I've seen companies where a single engineer held the entire system in their head. Documentation was sparse because "they just know how it works." When I asked what would happen if that person got hit by a bus (or, more commonly, got recruited by a FAANG company), the room went quiet.
Hero architecture isn't just a bus factor problem. It's a symptom of deeper issues: poor knowledge sharing, inadequate documentation, and a culture that rewards individual heroics over sustainable practices. These companies often can't scale because they can't onboard new engineers effectively.
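You can get a rough read on this from the repository itself before anyone answers a question. Here's a sketch of the kind of check I mean, using nothing but git history; the module paths and the 80% threshold are illustrative, not a standard:

```python
import subprocess
from collections import Counter

def top_author_share(repo: str, path: str) -> float:
    """Fraction of commits touching `path` that came from its single busiest author."""
    authors = subprocess.run(
        ["git", "-C", repo, "log", "--pretty=format:%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    if not authors:
        return 0.0
    counts = Counter(authors)
    return counts.most_common(1)[0][1] / len(authors)

# Flag core modules where one person wrote the overwhelming majority of the history.
for module in ["services/billing", "services/auth"]:  # illustrative paths
    share = top_author_share(".", module)
    if share > 0.8:
        print(f"{module}: {share:.0%} of commits from one author -- bus-factor risk")
```

Authorship concentration isn't proof of a hero architecture on its own, but when the same name dominates every critical path in the repo, the interview answers start to make more sense.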
2. Test Coverage That Doesn't Match Reality
Every company will tell you they have tests. The question is whether those tests actually catch bugs.
I look at three things: coverage numbers, test run time, and test reliability. A codebase with 80% coverage but a 45-minute test suite that fails randomly 20% of the time is worse than one with 40% coverage and a fast, reliable suite. The former trains engineers to ignore test failures. "Oh, that test is flaky, just re-run it" is a phrase that should terrify any investor.
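Flakiness is cheap to measure if you're willing to burn some CI time: run the same suite against the same commit repeatedly and see how often it fails. A minimal sketch, assuming the suite is driven by a command that signals failure through its exit code (pytest here is just a stand-in for whatever runner the team uses):

```python
import subprocess

def flake_rate(test_command: list[str], runs: int = 20) -> float:
    """Re-run the same suite against the same commit and report how often it fails."""
    failures = sum(
        subprocess.run(test_command).returncode != 0 for _ in range(runs)
    )
    return failures / runs

# Any nonzero rate on an unchanged commit means a red build carries no information.
rate = flake_rate(["pytest", "-q"], runs=20)
print(f"suite failed {rate:.0%} of identical runs")
```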
I also ask when the tests last caught a real bug before it hit production. If nobody can remember, the tests might be checking boxes rather than providing actual safety.
3. The Rewrite That's Always Coming
"We're planning a major refactor next quarter" is something I hear in nearly every due diligence. Sometimes it's true and necessary. Often it's a way to acknowledge technical debt without taking responsibility for it.
The red flag isn't that they need to rewrite something. It's when the rewrite has been "coming soon" for a year. Or when the scope keeps growing. Or when they can't articulate exactly what's wrong with the current system and how the rewrite will fix it.
Healthy engineering teams make incremental improvements continuously. They don't accumulate debt until it requires a massive rewrite, then defer that rewrite indefinitely while the debt compounds.
4. Security as an Afterthought
Ask about their security practices. If the answer starts with "Well, we're planning to..." or "Once we have more resources...", proceed with extreme caution.
I've seen companies storing passwords in plaintext. API keys committed to public repositories. Admin panels accessible without authentication. Data being transmitted unencrypted. These aren't sophisticated vulnerabilities; they're basic hygiene failures.
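Some of these failures are visible without ever touching the running system, because committed secrets usually stay in git history even after the offending file is deleted. A crude sketch of the kind of scan I run early on; the patterns are illustrative, and purpose-built secret scanners do this far more thoroughly:

```python
import re
import subprocess

# Rough patterns for common credential shapes -- illustrative only.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

# Full patch history across all branches -- "deleted" files are still in here.
history = subprocess.run(
    ["git", "log", "-p", "--all"],
    capture_output=True, text=True, errors="replace",
).stdout

for name, pattern in PATTERNS.items():
    hits = pattern.findall(history)
    if hits:
        print(f"{name}: {len(hits)} possible occurrence(s) in history")
```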
The excuse is always the same: we're moving fast, we'll fix it later. But security debt is uniquely dangerous because it can result in a single catastrophic event that destroys the company. A data breach at a growth-stage startup isn't just a setback; it can be existential.
5. Deployment Fear
How often do they deploy? How painful is it? What happens when something goes wrong?
Healthy engineering organizations deploy frequently and confidently. They have automated pipelines, rollback procedures, and monitoring that catches problems quickly. Deploying on a Friday afternoon might raise eyebrows, but it shouldn't cause panic.
Unhealthy organizations treat deployments like surgery. They deploy infrequently, usually in large batches, often requiring weekend "war rooms." When I hear "we have a deployment freeze before the holidays," I wonder what they're really saying about their confidence in the system.
Deployment fear is a leading indicator of systemic problems: poor test coverage, inadequate monitoring, architectural brittleness. If the team is afraid to ship, there's a reason.
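For contrast, the machinery behind confident deploys doesn't have to be elaborate. Here's a minimal sketch of a release step that checks a health endpoint and rolls back automatically if the release doesn't come up; the endpoint, scripts, and version string are all placeholders for whatever the team's pipeline actually uses:

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "https://app.example.internal/healthz"  # placeholder endpoint

def healthy(url: str, attempts: int = 5, delay: float = 5.0) -> bool:
    """Poll a health endpoint for a short window after the release lands."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # not up yet, or unreachable
        time.sleep(delay)
    return False

def deploy(version: str) -> None:
    # release.sh / rollback.sh stand in for whatever the team's tooling provides.
    subprocess.run(["./release.sh", version], check=True)
    if not healthy(HEALTH_URL):
        subprocess.run(["./rollback.sh"], check=True)
        raise RuntimeError(f"{version} failed health checks and was rolled back")

deploy("v1.42.0")  # placeholder version
```

Teams that have something like this in place deploy small changes often, because the cost of a bad release is a rollback, not a war room.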
6. The Metrics Mirage
Ask for their engineering metrics. Then ask how they're calculated and what they're used for.
Vanity metrics are everywhere. Story points completed. Lines of code. Number of commits. These numbers can all be gamed, and organizations that optimize for them often produce impressive-looking dashboards while actual productivity suffers.
What I want to see is metrics tied to outcomes. Cycle time from idea to production. Time to recover from incidents. Customer-facing bug rates. And I want to see how those metrics trend over time, not just a snapshot that might have been cherry-picked.
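These metrics are also cheap to compute from data most teams already have. As a sketch, here's cycle time measured from first commit to production deploy, assuming the team can export those two timestamps per change (the CSV file and its column names are hypothetical):

```python
import csv
import statistics
from datetime import datetime

def cycle_times_hours(path: str) -> list[float]:
    """Hours from first commit to production deploy, one value per shipped change."""
    hours = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            started = datetime.fromisoformat(row["first_commit_at"])
            deployed = datetime.fromisoformat(row["deployed_at"])
            hours.append((deployed - started).total_seconds() / 3600)
    return hours

times = cycle_times_hours("changes.csv")  # hypothetical export
print(f"median cycle time: {statistics.median(times):.1f}h")
print(f"p90 cycle time:    {statistics.quantiles(times, n=10)[-1]:.1f}h")
```

If a team can't produce even this much, that tells me the metrics on their dashboard were built for board decks, not for running the engineering organization.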
The biggest red flag is when leadership can't explain their metrics or, worse, when the metrics they track don't match the outcomes they claim to care about.
7. The Third-Party Dependency Problem
Every modern software company depends on third-party services. The question is whether those dependencies are managed thoughtfully or accumulated recklessly.
I ask about their critical dependencies. What happens if Stripe is down? What if AWS has an outage in their region? What if the startup they're using for a core feature goes out of business?
Healthy organizations have thought about these questions. They have fallback plans, or at least clear documentation of their exposure. Unhealthy ones give vague answers or, worse, don't seem to understand why I'm asking.
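A fallback plan doesn't have to mean a second provider on standby. Often it just means the code degrades deliberately instead of hanging, or taking the whole checkout flow down with it. Here's a minimal sketch of the pattern I look for around a critical third-party call; the breaker is deliberately simplified, and the helper names in the usage comment are hypothetical:

```python
import time

class CircuitBreaker:
    """Stop calling a failing dependency for a cooldown instead of piling on."""

    def __init__(self, max_failures: int = 3, cooldown: float = 30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = 0.0

    def call(self, fn, *args, fallback=None):
        if self.failures >= self.max_failures:
            if time.monotonic() - self.opened_at < self.cooldown:
                return fallback          # breaker is open: fail fast
            self.failures = 0            # cooldown elapsed: try the dependency again
        try:
            result = fn(*args)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            self.opened_at = time.monotonic()
            return fallback

# Usage sketch (charge_card, order, and queue_for_retry are hypothetical):
#   payments = CircuitBreaker()
#   result = payments.call(charge_card, order, fallback=None)
#   if result is None:
#       queue_for_retry(order)  # degrade instead of blocking checkout on an outage
```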
I've also seen the opposite problem: companies that built everything themselves because they didn't want dependencies. This is its own red flag. If you're maintaining your own authentication system, payment processing, and email delivery, you're probably wasting engineering cycles that should go toward your actual product.
8. Documentation That Exists Only in Heads
"We should really document that" is a phrase that signals trouble. It means they haven't, and they probably won't.
I'm not looking for perfect documentation. I'm looking for evidence that documentation is part of the culture. Are there READMEs that actually explain how to run things? Architecture diagrams that reflect reality? Runbooks for common operational tasks?
The test I use: could a competent engineer figure out how to contribute to this codebase within a week, without constantly interrupting existing team members? If the answer is no, every new hire will slow down the existing team, and scaling becomes painful.
9. Outages Without Postmortems
Every company has outages. What matters is what happens after.
I ask to see their incident history and postmortems. Good engineering organizations have a blameless postmortem culture where every significant incident produces a written analysis: what happened, why it happened, and what they're doing to prevent it from happening again.
The red flags: no postmortems at all ("we just fixed it and moved on"), postmortems that blame individuals ("Bob forgot to check the config"), or postmortems that produce long action item lists that never get completed.
How a team responds to failure tells you more about their engineering culture than how they describe their successes.
10. The Technology Stack Lottery
Diversity in technology can be a strength. But when a small team is running five different languages, three databases, and two container orchestration systems, something has gone wrong.
This usually happens when hiring decisions drove architecture decisions. You hired someone who really liked Rust, so now you have a Rust service. Someone else came from a Kubernetes shop, so now you're running Kubernetes alongside the ECS cluster you already had.
The cost isn't just in operational complexity, though that's real. It's in cognitive load. Every new technology is something the team has to stay current on, debug when it breaks, and teach to new hires. Small teams with sprawling stacks are usually teams that can't say no, and that inability to focus tends to show up in other ways.
What These Red Flags Really Mean
None of these issues are necessarily deal-breakers in isolation. Every company has technical debt. Every team has areas where they've cut corners.
What matters is whether leadership acknowledges these issues honestly and has realistic plans to address them. The most dangerous situation isn't a company with problems; it's a company with problems they either can't see or won't admit.
When I find multiple red flags, I'm not just evaluating the technology. I'm evaluating the judgment and self-awareness of the technical leadership. A CTO who can honestly describe their biggest technical risks is someone I trust to navigate those risks. A CTO who insists everything is fine while the codebase tells a different story is someone who will create surprises after the deal closes.
For founders preparing for due diligence, my advice is simple: find your red flags before we do. Acknowledge them. Have a plan. The goal isn't to hide your problems; it's to demonstrate that you understand them and have the capability to solve them.
Because experienced evaluators will find them anyway. And how you handle that conversation matters as much as what we find.