Automated tests seem expensive to set up. They are, but infinitely less so than the absence of tests. A bug found in development costs 1x. In production, it costs 100x.
The first argument against automated tests is always the same: we don't have time. Writing tests takes time, and so does maintaining them. That's true. What's false is believing that not testing is free. The absence of tests has a cost; it's simply deferred and multiplied.
The test pyramid
Mike Cohn formalised the test pyramid: a wide base of unit tests (fast, isolated, numerous), an intermediate layer of integration tests (verifying interaction between components), and a tip of E2E tests (end-to-end, simulating a real user). The higher you go in the pyramid, the slower, more brittle, and more expensive the tests are to maintain. The pyramid is a rule of balance: many cheap tests at the bottom, few expensive ones at the top.
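To make the base of the pyramid concrete, here is a sketch of a unit test in the pytest style: no I/O, no network, milliseconds to run, which is why unit tests can be numerous. The `slugify` function is a hypothetical example written for illustration, not something from the text.

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and replace non-alphanumeric runs with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Unit tests: isolated, deterministic, fast -- the wide base of the pyramid.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_trims_edges():
    assert slugify("  --Draft--  ") == "draft"

if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_trims_edges()
    print("ok")
```

With pytest installed, the same file runs via `pytest` with no main block needed; each `test_` function is discovered and executed automatically.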
The cost of a bug based on where it's found
NIST and IBM studies have documented this since the 1990s: a bug found in development costs ~1x. In staging: ~10x. In production: ~100x. The cost includes diagnosis time, potential rollback, crisis communication, loss of customer trust, and sometimes direct business losses. Tests are not a cost: they are insurance.
Test types and tools
- Unit: JUnit (Java), Jest (JS/TS), pytest (Python) - test a function or class in isolation
- Integration: Testcontainers, Supertest - test interaction with databases, APIs
- E2E: Playwright, Cypress - simulate user journeys in the browser
- Contract: Pact - verify that two services respect their interface contract
- Performance: Gatling, k6 - test load capacity
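To contrast with the unit level, here is a sketch of an integration test: it exercises a data-access class against a real database rather than a mock. In a production suite, Testcontainers would spin up the actual engine; this sketch substitutes Python's standard-library in-memory SQLite so it stays self-contained, and the `UserRepository` class is a hypothetical example.

```python
import sqlite3

class UserRepository:
    """Hypothetical data-access class under test."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users "
            "(id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
        )

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

# Integration test: real SQL, real constraints -- slower than a unit test,
# but it catches errors a mocked database would hide.
def test_roundtrip():
    repo = UserRepository(sqlite3.connect(":memory:"))
    user_id = repo.add("ada")
    assert repo.find(user_id) == "ada"
    assert repo.find(999) is None

if __name__ == "__main__":
    test_roundtrip()
    print("ok")
```

The design choice worth noting: the repository takes its connection as a constructor argument, so the test can inject a throwaway database - the same dependency-injection pattern Testcontainers-based suites rely on.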
TDD: test before you code
Test-Driven Development reverses the order: write the test before the code. Primary benefit: tests become a design tool. You think about the interface before the implementation. The resulting code is naturally more modular and testable. Coverage percentage? A useful indicator, but often misused: 100% coverage doesn't guarantee the absence of bugs; it guarantees every line has been executed.
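The TDD loop can be sketched with a deliberately simple example. The test below was written first and initially failed (red); the implementation was then written just to make it pass (green). The `fizzbuzz` name and signature are illustrative, not from the text.

```python
# Red step: this test existed before any implementation and defined
# the interface (name, argument, return type) up front.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Green step: the minimal implementation that satisfies the test.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

if __name__ == "__main__":
    test_fizzbuzz()
    print("ok")
```

The third step of the loop, refactor, happens with the test kept green: because the interface was fixed by the test first, the implementation can be reshaped freely without breaking callers.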