Zee Spencer

Ignorance and Realism in Testing

I was talking with Enrique Comba through some of my thoughts on applying visual and interaction design critique principles to code. At the time, I was in the middle of analyzing the differences between directly calling the method under test, wrapping the call in nested describes and lets, and moving it into a custom assertion.
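
To make the comparison concrete, here is a rough sketch of the three styles in RSpec. The Calculator class, its add method, and the sum_to matcher are all hypothetical, invented just for illustration:

    # Style 1: call the method under test directly in the example.
    describe Calculator do
      it "adds two numbers" do
        expect(Calculator.new.add(1, 2)).to eq(3)
      end
    end

    # Style 2: hoist the call into nested describes and lets.
    describe Calculator do
      describe "#add" do
        let(:sum) { Calculator.new.add(1, 2) }

        it "adds two numbers" do
          expect(sum).to eq(3)
        end
      end
    end

    # Style 3: bury the call inside a custom matcher. The example itself
    # no longer shows how add is invoked; that breadcrumb now lives in
    # the matcher definition, possibly in a different file entirely.
    RSpec::Matchers.define :sum_to do |expected|
      match { |calculator| calculator.add(1, 2) == expected }
    end

    describe Calculator do
      it "adds two numbers" do
        expect(Calculator.new).to sum_to(3)
      end
    end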

I was pointing out how moving the method call out into a custom assertion hid a breadcrumb developers would follow when trying to understand the system under test, while leaving it in a let would at least keep it in the same test file. Enrique responded with an interesting perspective:

"That's the point of BDD. Your tests shouldn't care about your implementation at all."

I was a little taken aback. One of my favorite things about clearly written specs is how they give the programmer direction in how to use the code being tested. If the test code is completely ignorant of the implementation, how can it provide those insights?!

After an hour or so, I remembered Justin Searls' recent presentation on realism in tests. This got me thinking: what if the level of realism and the level of ignorance of a test are two related variables? A test that provides a high level of realism may not mock out the external services the system under test uses, because it does not know about them. Conversely, a test that provides a low level of realism often points out very clearly how the collaborators of the system under test must be constructed, as well as how the code under test is meant to be used.
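
As a sketch of what I mean, imagine a Signup class that sends a welcome email. The names here (Signup, WelcomeMailer, deliver_to, last_email_delivered) are all made up for illustration:

    # High realism, high ignorance: the spec exercises the real system
    # end to end. It never mentions the mailer, so it could not mock it
    # out even if it wanted to.
    describe Signup do
      it "welcomes a new user" do
        Signup.new.register("ada@example.com")

        # last_email_delivered is a hypothetical test helper that
        # inspects whatever the real mailer actually sent.
        expect(last_email_delivered.to).to include("ada@example.com")
      end
    end

    # Low realism, low ignorance: the spec spells out exactly which
    # collaborator the system needs, how to construct the system with
    # it, and which message gets sent.
    describe Signup do
      it "asks the mailer to deliver a welcome email" do
        mailer = double("WelcomeMailer")
        expect(mailer).to receive(:deliver_to).with("ada@example.com")

        Signup.new(mailer: mailer).register("ada@example.com")
      end
    end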

Hopefully these complementary insights will help me build more purposeful, better-organized test suites in the future.

