I shipped code to production last Tuesday with exactly zero tests.
Not incomplete coverage. Not missing edge cases. Zero. And you know what? It worked fine. The feature landed, the client was happy, and I moved on to the next thing. I should feel guilty about this, right? That's what every conference talk and Medium article tells me. But I don't. Not anymore.
Somewhere along the way, 100% test coverage became a moral issue instead of a technical one.
The testing evangelists will tell you that comprehensive tests save time in the long run. They'll show you graphs. They'll reference studies. What they won't tell you is how much time you'll spend maintaining those tests when requirements change. Or how your velocity tanks when every small feature requires scaffolding an entire mock universe.
I've been on projects where we spent more time fixing broken tests than fixing actual bugs.
Let that sink in for a second. We were solving problems that existed only because we had created them. The tests weren't protecting us. They were drowning us.
Here's what I learned after years of trying to follow the rules.
Most bugs live in a surprisingly small section of your codebase. Your authentication logic, your payment processing, your data transformation layers. These deserve tests. Good ones. Integration tests that prove the pieces actually work together. Unit tests for the gnarly edge cases.
Everything else? You're probably fine without them.
The CRUD endpoint that just passes data through to your ORM? The UI component that's basically a styled div? The utility function that formats a date? These are low-risk, high-churn areas where tests cost more than they save.
I started asking myself a simple question before writing any test: "What will break if this code is wrong, and how will I know?"
If the answer is "the feature won't work and I'll notice immediately," I skip the test. If the answer is "money will disappear" or "users will see each other's data," I write the test. It's not rocket science. It's just being honest about risk.
Nobody wants to talk about deadlines, but they exist.
You have a sprint that ends Friday. Your PM needs this feature for a demo. You're already behind because the API you're integrating with had undocumented breaking changes. You could spend two days writing comprehensive tests for this form validation logic, or you could ship it, manually test it, and move on.
Which one keeps your job?
I've worked in codebases where comprehensive testing was a fantasy from day one. Legacy systems with global state, tangled dependencies, and database calls sprinkled everywhere like confetti. You want me to retrofit tests onto that? I'd need to refactor the entire application first. And when I suggest that, management asks why I'm not working on features.
Then there's the junior developer who just needs to get something working. They're learning the framework, the domain, the team's patterns. Now we're going to pile on Jest configuration, mocking strategies, and test pyramid theory? They'll spend a week learning to test before they learn to build.
Sometimes the educational ROI just isn't there.
I know how this sounds.
I can already hear the responses. "Technical debt!" "You'll regret this!" "This is why software quality is declining!" And maybe they're right. Maybe in six months I'll be debugging some weird edge case that a test would have caught.
But I'll also have shipped a dozen features in the time it would have taken to achieve testing perfection on one.
The dirty secret of software development is that pragmatism beats purity almost every time. The perfect codebase with 100% coverage that ships six months late loses to the scrappy MVP that ships next week. Real users don't care about your test suite. They care whether the button works.
I'm not saying don't write tests. I'm saying be honest about which ones matter and which ones are just security theater to make us feel professional. Strategic testing beats comprehensive testing. Shipping beats perfection.
And if that makes me a bad developer in the eyes of the TDD faithful, I'll live with it.