Soo… I want to share some broader context around my opinion about tests so that I don’t come across as negative out of nowhere.
Short version:
- The wrong tests can waste more time than having no tests at all.
- Most teams I've seen no longer strive for high coverage as a goal in itself.
- There’s a cost to not having tests, but there’s also a cost to having them.
- If a test often fails when there's no actual bug, it's a flaky test (see the sketch after this list for a concrete example). I will aggressively remove such tests until they're fixed, but I'm not going to waste time fixing them myself. Some things are simply impossible to test well and aren't worth the effort.
- I’m okay accepting PRs without tests if the person submitting the PR isn’t comfortable with them. Tests can always be added later by someone motivated to do so.
- I’ll add tests when I see something that frequently breaks and it’s straightforward to create a test that’s consistent and not flaky. Otherwise, I’ll always argue for removing flaky tests immediately.
At the end of the day, there are two formulas:
- Cost of not having a test:
How often something breaks × (how much time it takes to test manually + how much time it takes to debug and fix the bug).
- Cost of having a test:
How much time it takes to write the test + how often it produces false positives × how much time is wasted dealing with those false positives.
If the second formula outweighs the first, then the test loses all its value. In those cases, I’ll argue against having such tests.
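Here's the same comparison as a sketch in TypeScript, since we're a JavaScript shop. Every number is a made-up placeholder; the point is only the shape of the calculation:

```ts
// All values are hypothetical estimates, expressed per month.

// Cost of NOT having the test:
const breaksPerMonth = 2;
const manualTestHours = 0.5; // verifying the feature by hand after a break
const debugAndFixHours = 3;  // tracking down and fixing the bug
const costWithoutTest = breaksPerMonth * (manualTestHours + debugAndFixHours); // 7 h

// Cost of HAVING the test:
const writeHoursAmortized = 0.5;  // e.g. 6 hours to write, spread over a year
const falsePositivesPerMonth = 1;
const hoursPerFalsePositive = 2;  // reruns, investigation, blocked releases
const costWithTest =
  writeHoursAmortized + falsePositivesPerMonth * hoursPerFalsePositive; // 2.5 h

// With these numbers the test wins; flip them and it becomes a net loss.
console.log(costWithTest > costWithoutTest ? "argue against it" : "keep it");
```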
Tests should exist where we don't want change, because tests make change harder and more costly. That's fine for the battle-tested core of a project. But for new features or areas of code we know will change soon, tests are a waste of time. Who writes tests for code that will be thrown out next week?
Longer story:
I have a long history with testing. I didn’t understand it at all during university 15 years ago. Later, I took the Berkeley “Software as a Service” online course on Test-Driven Development (TDD) with Ruby on Rails, and I fell in love with their approach, especially behavior-driven testing with Cucumber/Gherkin.
It’s a beautiful system when you’re working with simple web apps and testing backend logic.
But since 2015, I've focused on full-stack JavaScript apps, spending most of my time on visual/UI elements. At one company I worked for from 2015 to 2018, we used Selenium for integration tests. That's when my "idealistic dreams" of good testing began to fall apart.
At Prezi, we currently use unit tests, Cypress, Playwright, and Selenium. Do you know how long our Selenium suite takes to run? Four hours. Do we block PRs for four hours? Of course not; it's way too slow.
We also regularly run into issues with flaky tests blocking releases. It’s not uncommon for people to rebuild projects four times over two hours just because tests aren’t reliable. And that’s in a company with experienced, full-time QA Automation engineers.
So, am I against tests? No. I’m all for using them frugally, with one goal in mind: saving time.
If you can add a test that's quick and easy to write, easy to maintain, runs consistently, and covers a stable part of the codebase, great: that saves time.
But if even one of those criteria fails, the test can easily waste as much time as not having it would, or more. That's my conclusion after more than a decade of interest in TDD.