Sep 29, 2017
 

Automated software testing is an interesting thing. Books have been written on the topic, libraries have been built to try to bring the practice to traditionally hard-to-automate areas of testing, and in some places, it’s so important it’s the first code that gets written. There are a lot of different philosophies about how it should work, but in my experience, it’s probably an area where the less religious you are about it, the better it’s likely to work for you.

One thing I’ve observed about other people and unit tests is that some people are very traditional about upholding the meaning of that term “unit.” Another thing I’ve observed about unit tests is that I’m not one of them. I understand the premise: keep tests small and focused, just like the code is supposed to be. But my problem with the purism is that it involves mocking a lot more stuff than I’ve found convenient or useful. Personally, I like to have the code I’m testing behave as much like it does in production as possible. That means I try to keep my mocks to a minimum and instead focus on using real objects as much as possible. In fact, my general rule of thumb is to only mock an external service that I’m interacting with via API. For everything else, I just initialize regular objects inside the test environment to get as accurate a picture of what the code really does as possible.
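Here’s a minimal sketch of what that looks like in practice, assuming JUnit 5 and Mockito. PaymentGateway, Invoice, and InvoiceProcessor are hypothetical names I’m making up for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class InvoiceProcessorTest {

    // The one thing worth mocking: an external service reached over the network.
    interface PaymentGateway {
        double lookupBalance(String accountId);
    }

    // Plain domain object; no reason to mock it, just construct it.
    record Invoice(String accountId, int daysOverdue) {}

    // The code under test, wired up with the real Invoice and the mocked gateway.
    static class InvoiceProcessor {
        private final PaymentGateway gateway;
        InvoiceProcessor(PaymentGateway gateway) { this.gateway = gateway; }

        double totalDue(Invoice invoice) {
            double balance = gateway.lookupBalance(invoice.accountId());
            // 10% late fee once an invoice is more than two weeks overdue.
            return invoice.daysOverdue() > 14 ? balance * 1.10 : balance;
        }
    }

    @Test
    void appliesLateFeeToOverdueInvoice() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.lookupBalance("acct-42")).thenReturn(100.00);

        InvoiceProcessor processor = new InvoiceProcessor(gateway);
        assertEquals(110.00, processor.totalDue(new Invoice("acct-42", 30)), 0.001);
    }
}
```

The only thing mocked is the network boundary; everything downstream of it is real code, doing whatever it really does.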

By the way, when testing, always use a real database whenever possible. The simple fact of the matter is that in-memory and embedded stand-ins are never as good as the real thing. As a result, it’s not uncommon to have to rewrite code, not to make your code more testable, but just to work around the fact that your test database can’t accurately mimic your production database. If you control your own build server, install a copy of whatever database your application uses on the build box. You are deleting your test data after each test and dropping the database itself after your unit tests are done, right? If so, then you won’t need a lot of memory to support it. If you don’t control your own build box, I recommend pestering whoever does to give you a database on the machine. If whoever controls the box won’t put a database on it, see if you can set up a remote database somewhere for the tests to run against. Either way, I recommend leaving embedded databases as a last resort.
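As a sketch of what that setup can look like, here’s a test talking to a real database, assuming JUnit 5, plain JDBC, and a PostgreSQL instance on the build box; the URL, credentials, and users table are placeholders for whatever your environment actually provides:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class UserDaoTest {

    private Connection conn;

    @BeforeEach
    void setUp() throws Exception {
        // Real database, real SQL dialect, real constraint behavior.
        conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app_test", "test", "test");
        try (Statement s = conn.createStatement()) {
            s.execute("CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT NOT NULL)");
        }
    }

    @AfterEach
    void tearDown() throws Exception {
        // Delete test data after every test so the database stays small and fast.
        try (Statement s = conn.createStatement()) {
            s.execute("TRUNCATE TABLE users");
        }
        conn.close();
    }

    @Test
    void insertsAndReadsBackAUser() throws Exception {
        try (Statement s = conn.createStatement()) {
            s.execute("INSERT INTO users (name) VALUES ('alice')");
            try (ResultSet rs = s.executeQuery("SELECT count(*) FROM users")) {
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }
    }
}
```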

Another thing to keep in mind as you’re writing tests is that code coverage is really just a (largely arbitrary) number. It doesn’t tell you whether you’re covering the important parts of the codebase, or whether you’ve covered enough possible code paths, or really much of anything besides how many tests you’ve written. It’s a fun metric to watch grow, but the important measurements of coverage tend to involve more subjective judgment based on the particulars of your code. Generally speaking, I usually focus on simple CRUD operations and any stats the application produces. The CRUD operation tests are simple enough to write, which is good for getting the test infrastructure set up. They’re also good for getting a few quick psychological wins, which will come in handy when you start writing the tests for the more complicated parts of the code. I usually test statistical output next because it’s typically straightforward to test but likely covers the most important parts of your code (odds are that if your application is calculating statistics, it’s showing those values to a user, who’s using them to make some sort of business decision, so that code is likely to be fairly mission-critical). I know this is pretty lousy advice since there’s no clear-cut, unambiguous way to know if you’re doing things correctly, but it’s the best I have. If you want to know whether you have enough tests written, ask yourself how comfortable you’d be pushing code with no QA or other tests except for the automated runs in the build process. If you’re OK with that, you probably have enough unit tests. If you aren’t, then it’s back to writing tests for you.
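The statistics tests can stay almost as simple as the CRUD ones. A hedged sketch, assuming JUnit 5, with averageOrderValue as a made-up stand-in for whatever aggregate your users actually look at:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;

import org.junit.jupiter.api.Test;

class RevenueStatsTest {

    // The code under test: a simple aggregate a report screen might display.
    static double averageOrderValue(List<Double> orderTotals) {
        return orderTotals.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
    }

    @Test
    void averagesKnownOrderTotals() {
        // Hand-computed expectation: (10 + 20 + 30) / 3 = 20.
        assertEquals(20.0, averageOrderValue(List.of(10.0, 20.0, 30.0)), 0.001);
    }

    @Test
    void emptyInputYieldsZeroRatherThanCrashing() {
        assertEquals(0.0, averageOrderValue(List.of()), 0.001);
    }
}
```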

However many tests you write, they need to run fast. Remember, these tests are going to run as part of every build. The slower they are, the more tempted you’ll be to turn them off just to get your builds running faster. Seeing as how the whole point of having unit tests is that they run automatically with every build, that completely defeats the purpose of having them. I know this sounds like the stupidest possible place to be worried about performance, but slow tests really do discourage running them as part of the build process, which is where you want them running the most, so keep them snappy. In fact, you should keep an eye out for anything that makes regularly running your tests annoying, both locally during development and as part of a build, and fix it ASAP.
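One small way to keep yourself honest here is to put an explicit time budget on tests that tend to creep. A minimal sketch assuming JUnit 5.5 or later (for @Timeout); the 200-millisecond budget is an arbitrary number picked for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;

class SpeedBudgetTest {

    @Test
    @Timeout(value = 200, unit = TimeUnit.MILLISECONDS)
    void lookupStaysFast() {
        // If this test ever creeps past its budget, the build fails loudly
        // instead of the suite quietly getting too slow to run on every build.
        assertTrue(Integer.toBinaryString(42).length() > 0);
    }
}
```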

One thing I’ve found that comes in handy is injecting errors into the code being tested when I absolutely, positively need it to keep running no matter what. It’s a handy way of testing the resilience of your code, and thanks to mocking, it’s easy to do. In fact, you should feel free to write test cases that inject common errors into just about anything you’re testing normally. It’s a nice little bit of (mostly) free extra assurance that your code isn’t going to come to a crashing halt at the first sign of trouble. Not to mention error-handling functionality is still program functionality, and thus deserves to be tested too.
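A sketch of that kind of error injection, again assuming JUnit 5 and Mockito, with RemoteInventory and StoreFront as hypothetical names:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.io.IOException;

import org.junit.jupiter.api.Test;

class StoreFrontTest {

    interface RemoteInventory {
        int unitsInStock(String sku) throws IOException;
    }

    // Code under test: it must keep running even when the upstream service dies.
    static class StoreFront {
        private final RemoteInventory inventory;
        StoreFront(RemoteInventory inventory) { this.inventory = inventory; }

        String availability(String sku) {
            try {
                return inventory.unitsInStock(sku) > 0 ? "in stock" : "sold out";
            } catch (IOException e) {
                // Degrade gracefully instead of crashing the whole page.
                return "availability unknown";
            }
        }
    }

    @Test
    void survivesInventoryServiceOutage() throws Exception {
        RemoteInventory inventory = mock(RemoteInventory.class);
        // Inject the failure: the mock blows up exactly like a network hiccup would.
        when(inventory.unitsInStock("sku-1")).thenThrow(new IOException("connection reset"));

        assertEquals("availability unknown", new StoreFront(inventory).availability("sku-1"));
    }
}
```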

Writing unit tests is time-consuming, feels like a hassle, and often seems like it isn’t accomplishing anything. It’s also still worth the effort. But remember, the point of the unit tests has nothing to do with whatever’s going on right now. They’re for weeks, months, or even years later, when you’re making another change that you thought was unrelated to the code you’re trying to test. It took me days to figure out how to mock out a Netty server, and it took me days again when we upgraded to Netty 4. That time was worth it because now I know I’m not accidentally breaking that code any time I make a change somewhere in the application. In fact, tests that I’d spent days writing because they covered critical sections of the code have caught several bugs before I even made a pull request. So suck it up and take the time to build out your unit tests; they’ll make up for it later with all the bugs that never make it to production.

All of this being said, I’m not a full-on convert to test-driven development (TDD), but I can certainly see the appeal. The process guarantees that you’re taking the time to write and maintain tests, which is certainly nice. Probably the biggest thing holding it back, at least for me, is that it seems to work best when your requirements are pretty well set, which hasn’t been my personal experience so far. That said, I work somewhere with lots of cultural encouragement for writing unit tests, which in my experience is really the most important thing you can have when it comes to testing.

I think probably the biggest idea behind my personal approach to unit testing is that unit tests are like programming languages – they’re good, but it’s generally best not to get too dogmatic over specifics. The most important thing is to get the job (or in this case, the testing) done. Whatever works, well…works. All that matters is that you can run your tests every time you compile your code, and that they at least cover all the parts of your application that would make people want to pay money for it. Make sure you cover failures as well – just because some upstream service fell apart doesn’t mean that your application needs to completely crash too. Tests are meant to make sure your application can run reliably (and correctly) in the wild. Ignore everything else anybody tells you about testing and write whatever tests guarantee that.
