Unit Test Independence

Since I just spent the last couple of hours detangling some legacy unit tests, I felt this was worth noting...

Unit tests should never depend on the actions of another unit test!  They should be independent of each other and able to be run in isolation.  For example, a database delete-test should not depend on the preceding insert-test to have completed successfully.  And when running data-access unit tests, use a tool such as NDbUnit to put the test database into a known state both before and after the tests are run.
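As a minimal sketch of what an independent delete-test looks like in NUnit (the `Customer` and `CustomerDao` types here are hypothetical in-memory stand-ins, not from this post; a real data-access test would hit the test database), the test seeds its own row rather than relying on an earlier insert-test:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Illustrative in-memory stand-in for the real data-access class.
public class CustomerDao
{
    private readonly Dictionary<int, Customer> _rows = new Dictionary<int, Customer>();
    private int _nextId = 1;

    public void Insert(Customer c) { c.Id = _nextId++; _rows[c.Id] = c; }
    public void Delete(int id) { _rows.Remove(id); }
    public Customer GetById(int id)
    {
        Customer c;
        return _rows.TryGetValue(id, out c) ? c : null;
    }
}

[TestFixture]
public class CustomerDaoTests
{
    [Test]
    public void CanDeleteExistingCustomer()
    {
        var dao = new CustomerDao();

        // Arrange: this test creates the customer it intends to delete,
        // instead of relying on an insert-test having run first.
        var customer = new Customer { Name = "Acme" };
        dao.Insert(customer);

        // Act
        dao.Delete(customer.Id);

        // Assert: the row is gone, regardless of what other tests ran.
        Assert.IsNull(dao.GetById(customer.Id));
    }
}
```

The key point is that the arrange step lives inside the test itself, so the test passes whether it runs first, last, or alone.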

With that said, it was nice to find tangled unit tests there rather than no tests at all!

Billy McCafferty

Posted 03-29-2007 4:03 PM by Billy McCafferty



Mark wrote re: Unit Test Independence
on 03-29-2007 7:33 PM

So, how do you balance this against the increased maintenance within your unit tests if you have to duplicate the same "setup" code (the insert in your example) in the test that exercises the behavior you actually wish to verify (the delete)?  It seems to me that by relying on one unit test to "setup" your state, you would only have to modify that one unit test should the code under test ever need to change.

Ben Scheirman wrote re: Unit Test Independence
on 03-29-2007 8:57 PM

I fully agree with this in theory, but in practice my integration tests invariably test too much, and I can't really help it.

For example, if I'm testing that I can successfully persist an "Order" with NHibernate, I need to have a sessionFactory, configuration, mapping, and the Order class (and anything that it directly references).  This is obviously testing too much.

If I also have a Foo.hbm.xml in my mapping that is incorrectly defined, then the above test will fail.  Along with all the rest of my tests that use the mapping.

How do you deal with such things?

Billy McCafferty wrote re: Unit Test Independence
on 03-29-2007 9:18 PM

I don't mean to imply that unit tests should be independent to the point that they don't really test anything.  Assume you have a test class called CustomerDaoTests which includes the following methods:  CanInsertNewCustomer, CanGetCustomerById, CanGetCustomerByExample, CanDeleteExistingCustomer.  Each test should be able to run and pass, independent of having to run any of the other unit tests.  So if you run CanInsertNewCustomer by itself, it'll successfully run.  Likewise, if you run CanDeleteExistingCustomer by itself, it'll also successfully run by itself.

Any of these tests, even run independently of each other, may rely on a plethora of infrastructure such as DAOs, app.config files, HBMs, NHibernate, etc.  But the point is that they can run independently of running other unit tests.

Going back to the example, CanInsertNewCustomer should get rid of the customer after it's been inserted to return the DB to its original state.  Likewise, the customer needs to exist before the test CanDeleteExistingCustomer can be run.  NDbUnit can be used to clean the database after CanInsertNewCustomer and populate the database before CanDeleteExistingCustomer.  This allows you to test each bit of functionality without creating interdependencies among the unit tests.

Another approach, instead of creating separate insert, get, and delete unit tests, and without having to use a tool such as NDbUnit, is to create a single unit test called CanPerformCustomerLifeCycle, or something like that.  This test would insert the customer, get the customer just inserted, and then delete the customer.  The key is that this happens in a single unit test which can run independently of other unit tests being run before it or after it.  The drawback to this "full object life-cycle in one unit test" is that the unit test is essentially violating the single responsibility principle.  It might sound a bit esoteric at first, but sticking to a single responsibility in each unit test keeps the unit tests atomic and will serve to alert you to the exact problem that's occurring if and when a bug is introduced.
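A sketch of that life-cycle style, using a hypothetical in-memory DAO (the names here are illustrative, not from this post):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class CustomerLifeCycleTests
{
    // Hypothetical in-memory stand-in for the real data-access class.
    private class CustomerDao
    {
        private readonly Dictionary<int, string> _rows = new Dictionary<int, string>();
        private int _nextId = 1;

        public int Insert(string name) { _rows[_nextId] = name; return _nextId++; }
        public void Delete(int id) { _rows.Remove(id); }
        public string GetById(int id)
        {
            string name;
            return _rows.TryGetValue(id, out name) ? name : null;
        }
    }

    [Test]
    public void CanPerformCustomerLifeCycle()
    {
        var dao = new CustomerDao();

        // Insert, read back, then delete, all inside one test, so no other
        // test needs to run before or after this one.
        int id = dao.Insert("Acme");
        Assert.AreEqual("Acme", dao.GetById(id));

        dao.Delete(id);
        Assert.IsNull(dao.GetById(id));
    }
}
```

The trade-off described above is visible here: the test is self-contained, but a failure could point at insert, get, or delete.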

Billy McCafferty wrote re: Unit Test Independence
on 03-29-2007 9:24 PM


The NUnit SetupFixture and Setup methods may be run alongside a unit test, and that unit test can still be considered independent of other unit tests.  This is very much what NDbUnit does:  performs the DB setup, allows you to run your test, and then performs any necessary rollback.
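To make the shape of that concrete, here is a minimal sketch; `TestDatabase` is a hypothetical helper standing in for the actual NDbUnit calls, which aren't shown here:

```csharp
using NUnit.Framework;

// Hypothetical stand-in for NDbUnit loading and resetting a test data set.
public static class TestDatabase
{
    public static int RowCount;

    public static void LoadKnownState() { RowCount = 3; } // seed known rows
    public static void Reset() { RowCount = 0; }          // roll back
}

[TestFixture]
public class DatabaseStateTests
{
    [SetUp]
    public void PutDatabaseIntoKnownState()
    {
        // Runs before every test, so each test starts from the same state
        // no matter which other tests ran, or in what order.
        TestDatabase.LoadKnownState();
    }

    [TearDown]
    public void RollBackChanges()
    {
        TestDatabase.Reset();
    }

    [Test]
    public void DatabaseStartsInKnownState()
    {
        Assert.AreEqual(3, TestDatabase.RowCount);
    }
}
```

Because the setup and rollback belong to the fixture rather than to a preceding test, any single test can still be run on its own.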

The major problem I encountered in the code I was detangling was that four unit tests needed to be run in the correct sequential order for each subsequent unit test to pass.  So if you didn't run all of the first three, in the correct order, then the fourth unit test would fail.  I should be able to run the fourth unit test without having to worry about running any other unit tests in any special order.

Dependent wrote re: Unit Test Independence
on 04-02-2007 3:26 PM

Considering the number of people who regularly ask on the JUnit mailing-list how they can have tests that depend on each other, I find this characterization a bit too extreme.  Sometimes, you do want some of your tests to be skipped if previous ones didn't run.  And you want these tests to be reported as "skips" and not as "failures".

This is exactly how it's implemented in TestNG.

Billy McCafferty wrote re: Unit Test Independence
on 04-03-2007 2:19 PM

That's an interesting addition to an xUnit tester.   With the ability to define unit test dependencies explicitly, rather than implicitly via top-to-bottom test ordering, maintaining independence certainly becomes less of a concern.  On the other hand, it also seems to turn the idea of a unit tester (emphasis on "unit") into a user-story tester.  Although I can see some value in it for short "do/undo" scenarios (e.g. specifically the insert/delete example), I would lean towards Fit and/or StoryTeller to manage tests involving test dependencies.  I'm thinking of Fit's ActionFixture, specifically.

If used with tight discipline, I think TestNG could serve as both a unit tester and a user-story tester.  But I'd be worried about developers making it nearly impossible to test a single piece of functionality because the unit test depends on a chain of four others.  To interpret the results of the fifth, you'd need to review, in detail, the execution and results of the previous four.  To mitigate this, the developer could simply write a new unit test to isolate the code in question; but then testing overlap has been introduced into the unit tests.  I understand that some overlap is inevitable, but the less, the better for minimizing unit test maintenance.

I've observed that unit tests which are hard to understand, complicated to follow, or hard to run in isolation, tend to stop being run altogether.  Then again, that's why we (should) have continuous integration environments. ;)

Andy Swain wrote re: Unit Test Independence
on 08-28-2008 11:02 AM

I am relatively new to unit testing, so bear with me.  I will agree that tests should be independent and that it should be possible to run any test in isolation.  However, when I am writing tests I tend to test the simple behaviour, such as "create an object", before writing more complex tests such as "add two objects together".  Specifying the order would allow me to test the simple behaviour before moving on to test the more complex.  If I later make a change to the code that causes object creation to fail, I get a sea of red blobs and no idea where to start.  If the create-object test failed, that makes the other tests irrelevant.  There is a dependency, not in the execution of the tests, but in the significance of the results.  It would be useful to express that dependency in the test order and have the important failure presented first.  Any thoughts on that?

Billy McCafferty wrote re: Unit Test Independence
on 09-01-2008 4:46 PM

I would agree that there are definitely exceptions to the rule of keeping tests completely independent, but, IMO, they should be exceptions rather than the norm.  Unit tests that depend on an order and, therefore, on each other usually end up being fragile tests.

Rick O'Shay wrote re: Unit Test Independence
on 01-05-2009 3:41 PM

>> "I would agree that there are definitely exceptions to the rule of keeping tests completely independent, but..."

There's no such rule. Unit tests are independent, period. If you have order dependencies you have integration tests. They are both highly valuable, you could argue unit tests are more valuable because any number of sequences are likely to pass. I would lean toward keeping ordered tests (or tests that contain a sequence of calls) independent of automated unit tests, a separate project perhaps.

AndyB wrote re: Unit Test Independence
on 02-09-2009 5:15 AM

The big problem I have with rigorously enforcing test independence is that in most real-world applications the result of any particular unit test depends on the current system state.  Take the simple add-item/delete-item example: in order to test that an object can be deleted, it must first be added; if the 'add' fails, then the test fails *before* the true object of the test is actually carried out.  It must be assumed that there is also a test for the add function, which will also fail.

So we have two tests: one tests 'add'; the other, though nominally testing 'delete', actually tests both 'add' *and* 'delete'.  Taking this to its conclusion, you can end up with the most specific tests, in effect, performing the same tasks as many less specific tests in order to perform their own test function.

In other words, you have code duplication, which, IMHO, is a bad thing, as you risk the setup of the more specific tests *not* using the same code or state as other tests that require the same or similar state to be configured.  To put it another way, because you are not actually testing all of the steps in the state configuration (because you wish to maintain test independence), you risk the test using an invalid application state, making the test itself invalid.
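One common middle ground for the duplication concern raised here, sketched with hypothetical names, is to pull the shared setup into a private helper method: the "add" steps then live in one place, every test goes through the same code path, and each test still runs on its own.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ItemTests
{
    private List<string> _store;

    [SetUp]
    public void SetUp()
    {
        // Fresh state per test, so the tests stay independent.
        _store = new List<string>();
    }

    // Shared helper: the setup steps live in one place, so a change to how
    // items are added is made once, not duplicated across tests.
    private void AddItem(string name)
    {
        _store.Add(name);
    }

    [Test]
    public void CanAddItem()
    {
        AddItem("widget");
        Assert.IsTrue(_store.Contains("widget"));
    }

    [Test]
    public void CanDeleteItem()
    {
        AddItem("widget");   // same setup path as CanAddItem, so no drift
        _store.Remove("widget");
        Assert.IsFalse(_store.Contains("widget"));
    }
}
```

This doesn't remove the fact that CanDeleteItem exercises the add path too, but it does keep that path identical everywhere it's used.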

Billy McCafferty wrote re: Unit Test Independence
on 02-09-2009 8:11 AM


There are certainly times when testing multiple facets of logic concurrently is warranted and desirable.  Fit is a great tool for testing these types of integrated tests.  A simple add/delete wouldn't necessarily warrant the introduction of such a framework, but it begins to become useful quite quickly when you want to look at the results of multiple actions sequentially.


Site Copyright © 2007 CodeBetter.Com
Content Copyright Individual Bloggers