Recently I was giving a presentation on mocking, and I made the statement that I prefer strict mocking semantics over loose semantics. At the end of the presentation, one of the audience members pushed back on my preference for strict over loose. We had a great discussion, and I thought I would recap it here and give my reasons for liking strict over loose.
Definition of Strict Replay Semantics:
Only the methods that were explicitly recorded are accepted as valid. This means that any call that was not expected will cause an exception and fail the test. All the expected methods must be called if the object is to pass verification.
Definition of Loose Replay Semantics:
Any method call during the replay state is accepted, and if there is no special handling set up for a method, a null or zero is returned. All the expected methods must be called if the object is to pass verification.
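The original discussion is presumably in the context of a .NET mocking framework, but the two semantics can be sketched with Python's `unittest.mock`. The `Db` class and its `save` method below are hypothetical stand-ins; a plain `Mock` behaves loosely, while `spec=` gives a rough approximation of strict behavior by rejecting calls that are not part of the recorded interface:

```python
from unittest.mock import Mock

# Loose semantics: a plain Mock accepts ANY call and returns another Mock
# (roughly the "null or zero" default described above).
loose_db = Mock()
loose_db.save("order-1")        # never recorded, but accepted silently
loose_db.delete_everything()    # also accepted -- a typo would go unnoticed

# Strict-ish semantics emulated with spec=: constrain the mock to a known
# interface and verify that ONLY the expected calls happened.
class Db:
    def save(self, record): ...

strict_db = Mock(spec=Db)
strict_db.save("order-1")

strict_db.save.assert_called_once_with("order-1")  # the expected call happened
try:
    strict_db.delete_everything()  # not on the interface -> rejected
except AttributeError:
    print("unexpected call rejected")
```

Note that `spec=` only rejects calls to methods that do not exist on the real interface; a full strict-record/replay framework also fails on extra calls to methods that *do* exist but were not recorded.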
Pros of Strict Replay
- The developer creating the test must know EXACTLY what the code they are testing does
- Will cause the test to fail in the future if anyone changes the underlying code
- Creates a 'what you see is what you get' scenario in your tests, meaning that if you do not set up an expectation, you will NOT have success
Cons of Strict Replay
- May appear to make the test brittle
- Requires the developer to know EXACTLY what the code does (yes, this is also a pro). Sadly, a lot of developers are too lazy to completely understand the code they are creating, hence this also being a con.
- May create a lot of 'noise' (i.e. a lot of mock setups) if your code is not loosely coupled (this is a con, but it is really the result of a code smell)
Replay of the conversation
During my presentation I had commented that I prefer strict semantics over loose and simply left it at that. I did not really go into detail, but at the end of the session one of the audience members wanted to dive deeper into the why, so we did.
He made a very good observation about strict mocking: my test will fail if it was originally set up to expect two calls to the 'db mock', but the method was later modified to need only one call. I agreed with his observation, but told him I actually see this as a good thing.
When we create a test, we are testing our code base at that point in time. We are also testing the business rules at that point in time. Whenever I set up my mocks (create expectations), I do so with the knowledge that a given dependency will be called ONLY N times. If I refactor or enhance that method and change the number of times the dependency is called, I want my test to fail. That failing test should prompt me to review my test logic and ensure that I am still testing the intent of the method.
By this logic, strict mocking semantics are the way to go. If I had used loose semantics, my original test would still pass when it should fail, potentially hiding flaws in my test logic. Remember: green tests do not equal good code coverage.
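The refactoring scenario from the conversation can be sketched in Python's `unittest.mock` (the `process_orders_*` functions and the `db.save` / `db.save_all` methods are hypothetical). A strict-style call-count expectation fails loudly after the refactor, while a loose mock would absorb the change silently:

```python
from unittest.mock import Mock

def process_orders_original(db, orders):
    # Original implementation: one db.save() call per order.
    for order in orders:
        db.save(order)

def process_orders_refactored(db, orders):
    # Later refactor: a single batched call instead of N calls.
    db.save_all(list(orders))

# The original test's expectation: db.save is called exactly twice.
db = Mock()
process_orders_original(db, ["a", "b"])
assert db.save.call_count == 2  # passes against the original code

# Run the SAME expectation against the refactored code. A loose mock
# happily accepts db.save_all(), so without the count check the test
# would stay green; the strict-style count check fails and forces a
# review of the test's intent.
db = Mock()
process_orders_refactored(db, ["a", "b"])
strict_check_passes = (db.save.call_count == 2)
print(strict_check_passes)  # False -> the test fails, as it should
```

This is exactly the trade-off from the conversation: the strict expectation is "brittle" in the sense that the refactor breaks it, but that breakage is the signal that the test needs to be re-examined.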
Till next time,
***** Updated *****
Removed a misinterpreted statement from Ayende over at Rhino
***** Updated *****
05-19-2008 8:27 AM