If you're not familiar with the Refactor It! challenges, please read this post. (Yes, this one is a little late in being published...but it's worth it!) Now on to challenge 2...
Welcome back to another refactoring challenge! I'm excited about this week's challenge and am sure it'll be interesting and challenging for all skill levels. It should take about two hours for senior developers and a couple more for intermediate developers. While last week's challenge served as a good introduction to refactoring in general, this week's challenge focuses on design pattern refactorings. Here's the premise...
Background & Problem
This application continues the theme started in last week's challenge and includes a logging system for the local tiki bar. Each day, very early in the morning, a logging process kicks off which creates two logs summarizing the orders placed the previous day. An order has a customer, a date, a specific tiki drink, and a quantity of that drink. If the customer wants different types of tiki drinks, they need to place an order for each type. It's not your tiki bar, so that's just the way it is! The first log created each night is for the bar owner and includes all orders which had a total cost of at least $20 or had a quantity of at least 5. The second log is for the auditor and includes all tiki drink orders which had a total cost of at least $10 and were not for the drink called the "London Fog Cutter." (Don't ask me why the auditor doesn't want to know about these drinks...I just work here!)
The problem, in a nutshell, is two-fold. The first problem is that the filtering mechanism for figuring out which orders should be logged is getting quite complicated. The specification design pattern, a specialized version of the interpreter design pattern, needs to be employed to make the filtering easier to maintain. The second problem is that a bi-directional dependency exists between the data layer and the business logic layer. Consequently, it's very difficult to unit test the business logic layer without actually hitting the database. We'll need to introduce a separated interface, an application of the dependency inversion principle, to decouple the business logic layer from the data access layer. We'll then need to use a mock object so that we can predictably test the business logic layer without needing to worry about having a database around. (Tools exist to automate mock object creation, but we'll use one developed by hand.)
Your Incentive
Up for grabs this week is the following book, along with a CodeIt.Once Refactoring tool 3-User license pack! This is a great book and will be a well-earned prize.
Applying Domain-Driven Design and Patterns: With Examples in C# and .NET, by Jimmy Nilsson.
Instructions to Enter
Make sure you read all the instructions as they're different from last week's challenge!
Challenge 2 Download Overview
- Make sure you have NUnit installed; available at http://www.nunit.org/.
- Download Challenge2.zip found below.
- Open the solution with VS 2005 and compile it.
- Open NUnit, go to File/Open, then open <solution directory>/Challenge2.Tests/bin/Debug/Challenge2.Tests.dll. Click Run and make sure all the tests pass. There should be a few unit tests that are ignored, but none should be failing.
- After running the tests, confirm that two logs were generated in <solution directory>/Challenge2.Tests/bin/Debug/. The log files are named Auditor.log and Owner.log and contain a number of order summaries pulled from the "database."
- Complete the refactorings as described below. Be sure to follow the style of the code that is already in place: classes, methods and public properties in PascalCase, private members in camelCase, etc. Following the style that's already there is always a good rule of thumb when modifying legacy code.
- When finished with the refactorings, make sure all the unit tests are still passing and that no test is being ignored, then zip up your solution and send it to firstname.lastname@example.org. As you may only win once, submit a solution only if you'd like to be eligible for the prize, or let me know that you don't want to be entered in the drawing.
Because of the lateness of this post and this week's holiday, the due date for your submission is Wednesday night at midnight, Nov. 29, US mountain time. It's not an easy one, so I wouldn't wait until Wednesday to get started!
The VS 2005 solution contains two projects (at the moment):
- Challenge2.Core: This class library contains the data access and business logic layers of the application. This class library is further broken down as follows:
- The Data folder contains all the data-access code. The only class within this folder, OrderDao, simulates talking to a real database by generating zero or more sample orders. The unpredictability of the results serves to simulate a real-life database.
- The Domain folder contains all the entities and value objects. It also contains a sub-folder, OrderFinder, albeit still in the same namespace, which holds the order-filtering logic.
- The Services folder contains the nightly logger process.
- The Utils folder contains a design-by-contract utility class written by Kevin McFarlane. This utility class provides the Check.Require calls you'll see throughout the code.
- Challenge2.Tests: This class library contains the unit tests of the application. As there is no defined presentation layer, the unit tests serve as a portal to how the application works.
- Challenge2.Data: This isn't in the download, but you'll be adding this class library soon enough.
Now Refactor It!
Below is a listing of smells that need to be fixed within the project. Each point indicates the file that contains the smell, a description of the problem and any guidelines for completing the refactoring. There may be other smells in the code, but these are the only ones that should be addressed.
Smell: Combinatorial Explosion
Refactoring: Replace Implicit Language with Interpreter
I have yet to write an application that doesn't include a bit of filtering capability. Furthermore, it's often the case that the filtering capabilities need to be extended throughout the life of the application. If only a couple of filters are required, then hard-coding the filter logic isn't that big of a deal. But if the number of filters grows or the logic becomes more complicated than just "AND"s, then using the interpreter design pattern may be an appropriate fit. But there's also the question as to whether filtering should be done by the database or by the code. As I prefer to keep database logic to the simplest CRUD (create/read/update/delete) functionality, I avoid putting complicated filtering logic into the database unless performance warrants otherwise. A good rule of thumb is that if you have a couple hundred items or fewer to filter, and you have more than a few filters that change regularly, then putting the filtering into the business logic layer is a good way to go; otherwise, the database should be doing the basic filtering. Alternatively, ORMs such as NHibernate provide their own implementation of the interpreter design pattern. So onto the smell...
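To make the pattern concrete before diving in, here's a minimal sketch of the specification pattern in C#. The class names and the shape of the Order entity below are my own illustration for this post, not the actual challenge code:

```csharp
// Order is a stand-in for the challenge's domain entity; its exact
// shape here is an assumption for illustration.
public class Order
{
    private readonly string drink;
    private readonly int quantity;
    private readonly decimal totalCost;

    public Order(string drink, int quantity, decimal totalCost)
    {
        this.drink = drink;
        this.quantity = quantity;
        this.totalCost = totalCost;
    }

    public string Drink { get { return drink; } }
    public int Quantity { get { return quantity; } }
    public decimal TotalCost { get { return totalCost; } }
}

// The heart of the pattern: one tiny interface...
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// ...a small leaf specification per business rule...
public class MinimumTotalCostSpec : ISpecification<Order>
{
    private readonly decimal minimum;
    public MinimumTotalCostSpec(decimal minimum) { this.minimum = minimum; }
    public bool IsSatisfiedBy(Order candidate) { return candidate.TotalCost >= minimum; }
}

public class MinimumQuantitySpec : ISpecification<Order>
{
    private readonly int minimum;
    public MinimumQuantitySpec(int minimum) { this.minimum = minimum; }
    public bool IsSatisfiedBy(Order candidate) { return candidate.Quantity >= minimum; }
}

public class NotDrinkSpec : ISpecification<Order>
{
    private readonly string drink;
    public NotDrinkSpec(string drink) { this.drink = drink; }
    public bool IsSatisfiedBy(Order candidate) { return candidate.Drink != drink; }
}

// ...and composites that combine specifications into richer filters.
public class AndSpec<T> : ISpecification<T>
{
    private readonly ISpecification<T> left;
    private readonly ISpecification<T> right;
    public AndSpec(ISpecification<T> left, ISpecification<T> right)
    {
        this.left = left;
        this.right = right;
    }
    public bool IsSatisfiedBy(T candidate)
    {
        return left.IsSatisfiedBy(candidate) && right.IsSatisfiedBy(candidate);
    }
}

public class OrSpec<T> : ISpecification<T>
{
    private readonly ISpecification<T> left;
    private readonly ISpecification<T> right;
    public OrSpec(ISpecification<T> left, ISpecification<T> right)
    {
        this.left = left;
        this.right = right;
    }
    public bool IsSatisfiedBy(T candidate)
    {
        return left.IsSatisfiedBy(candidate) || right.IsSatisfiedBy(candidate);
    }
}
```

With these pieces, the owner's filter from the background section reads as new OrSpec<Order>(new MinimumQuantitySpec(5), new MinimumTotalCostSpec(20m)) and the auditor's as new AndSpec<Order>(new MinimumTotalCostSpec(10m), new NotDrinkSpec("London Fog Cutter")). Adding a new filter then means adding one small class, not another verbosely named method.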
- Challenge2.Core.OrderFinder: This order-filtering class exposes the verbosely named methods FindByNotTypeAndAtLeastTotalCost and FindAtLeastQuantityOrAtLeastTotalCost. Besides taking a while to read, these methods are difficult to extend, likely to change and far from flexible. Without removing these methods (yet), refactor to the specification design pattern to make the filtering capabilities much more flexible. To assist, uncomment the unit tests within Challenge2.Tests.Core.Domain.OrderFinderTests one at a time, following the order suggested. Get each unit test working by implementing the implied code. Put all your "Spec" classes into the folder Challenge2.Core.Domain.OrderFinder but keep the namespace of those classes as Challenge2.Core.Domain.
- Challenge2.Core.Services.NightlyOrderLogger: This class writes two logs, one for the tiki bar owner and one for the auditor. To get the appropriate orders to log, it makes a call to the database to get all of yesterday's orders and then leverages the OrderFinder class to filter accordingly. Refactor this class to use the specification classes that you just created and delete the methods FindByNotTypeAndAtLeastTotalCost and FindAtLeastQuantityOrAtLeastTotalCost from Challenge2.Core.OrderFinder.
Smell: Dependency on Concrete Service
Refactorings: Extract Interface and Introduce Separated Interface
If you're wondering why you've never heard of the smell listed above, you probably won't be alone. (And if I ever write a book, I'll be sure to change that!) One of the most frequent issues I find with legacy code is direct dependencies on concrete classes that provide a service. A service, in this case, may be a data-access object which communicates with a database, an emailing utility class, an external web service, etc. A service is usually resource intensive, time consuming, or something that shouldn't be executed outside of a production environment. Having dependencies on concrete service classes does not present a problem if the code won't ever be unit tested, but neither will standing in the middle of a highway as long as there's no traffic...but I digress.
A solution for removing a dependency on a concrete service is to introduce, instead, a dependency on an interface. (I'll discuss more of the benefits of this in a moment.) But if the service class is still within the same physical assembly as the business logic layer, then there's nothing to stop a developer from bypassing the interface and going back to the concrete service class directly. To enforce a policy of accessing services via interfaces, a separated interface may be introduced.
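In code, a separated interface simply means the interface is declared alongside its client while the implementation lives in a separate assembly that references the client — the compiler then guarantees the dependency arrow points the right way. The names below mirror the challenge (IOrderDao, OrderDao, GetOrderPlacedOn), but the method signature and the Order entity's shape are my assumptions:

```csharp
using System;
using System.Collections.Generic;

// Business-logic assembly (Challenge2.Core): declares what it needs.
namespace Challenge2.Core.DataInterfaces
{
    public interface IOrderDao
    {
        IList<Challenge2.Core.Domain.Order> GetOrderPlacedOn(DateTime date);
    }
}

namespace Challenge2.Core.Domain
{
    public class Order { /* entity details elided for this sketch */ }
}

// Data-access assembly (Challenge2.Data): references Challenge2.Core and
// supplies the implementation. Challenge2.Core never references this
// assembly back, so the bi-directional dependency disappears.
namespace Challenge2.Data
{
    public class OrderDao : Challenge2.Core.DataInterfaces.IOrderDao
    {
        public IList<Challenge2.Core.Domain.Order> GetOrderPlacedOn(DateTime date)
        {
            // A real implementation would query the database here;
            // an empty list keeps the sketch compilable.
            return new List<Challenge2.Core.Domain.Order>();
        }
    }
}
```

The payoff is that the business logic layer compiles against IOrderDao alone, so any implementation — real, mock, or a future replacement — can be slotted in without touching Challenge2.Core.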
- Challenge2.Core.Services.NightlyOrderLogger: Notice that this class creates a new instance of OrderDao to retrieve relevant orders from the database. This presents two problems. The first is that the process is time consuming from a unit test point of view. If you have a couple hundred unit tests and many of them require communications with a database, the unit testing process can begin to take 10 minutes or more. Once this happens, developers stop running unit tests altogether. (I experienced this on a previous project.) Furthermore, unit tests expect predictable results to verify code is working properly. Unless a database is reinitialized at the beginning of each test run, the results may vary and the tests will begin to break. This is another motivating factor for developers to stop running unit tests. (This was another unpleasant experience from a previous project.) One action you can take to mitigate this is to label any test that depends on a database with a "Database Test" categorization. With this in place, it's easy to "ignore" these tests within NUnit. But how can we test the business logic if we're ignoring much of it?
You can alleviate these problems by leveraging a mock object that simulates the service in an efficient and predictable manner. Take a look at Challenge2.Tests.MockObjects.Data.MockOrderDao. This class simulates the behavior of the real OrderDao class but returns the exact same results each time. Consequently, it's possible to use this mock object in place of OrderDao to verify that the business logic is working properly. Using mock objects in place of production data-access objects, you won't need to reinitialize the database on each test run. Carry out the following steps to introduce a Separated Interface and inject mock objects into your business logic layer for testing purposes.
- Use the refactoring technique Extract Interface on the class OrderDao to create an interface called IOrderDao. Put the extracted interface into Challenge2.Core.DataInterfaces and have OrderDao implement it. (The interface should have just one method called GetOrderPlacedOn.)
- Compile and run the unit tests to make sure everything is still working.
- A mock object already exists, to simulate this data-access object, found at Challenge2.Tests.MockObjects.Data.MockOrderDao. Have this class also implement the IOrderDao interface.
- Within Challenge2.Tests.Core.Services.NightlyOrderLoggerTests, there's a method called CanExecuteNightlyOrderLogger. Within this method, create an instance of MockOrderDao and pass it as a parameter to the method ExecuteNightlyLogging. You'll get a compilation error at this point. This is OK as we want the unit test to drive our code changes.
- Add the following two unit test assertions to the same CanExecuteNightlyOrderLogger method. (You'll still be getting compilation errors from the previous step.)
- Modify the ExecuteNightlyLogging method of Challenge2.Core.Services.NightlyOrderLogger to accept an IOrderDao object. Be sure to use Check.Require to make sure it's not null. The name of the parameter should simply be orderDao. This will conflict with the instantiation of OrderDao a few lines down; that's expected. Delete the line OrderDao orderDao = new OrderDao();. The NightlyOrderLogger is no longer dependent on a concrete service class (i.e., OrderDao). Instead, it depends only on the IOrderDao interface. But developers are still free to use OrderDao directly if they so wish. Now we'll introduce the separated interface.
- Before proceeding, make sure everything compiles and the unit tests are passing.
- Add a new project to the solution called Challenge2.Data. This project will contain classes which communicate with the database.
- Move Challenge2.Core.Data.OrderDao to this new project and delete the Data folder from Challenge2.Core. (You'll need to add a reference to Challenge2.Core from the Challenge2.Data project. While you're at it, also add a reference to Challenge2.Data from the Challenge2.Tests project.)
- Finally, repair the namespace reference problems found within Challenge2.Core and Challenge2.Data, compile and confirm that all the unit tests are still passing.
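Put together, the end state of the steps above looks roughly like the following: a hand-rolled mock standing in for the real DAO, injected through the refactored method. Order's constructor, the body of the Check helper and the int return value (added only so the sketch has something observable) are all simplified assumptions, not the challenge code:

```csharp
using System;
using System.Collections.Generic;

public class Order
{
    public readonly string Customer;
    public readonly DateTime PlacedOn;
    public readonly string Drink;
    public readonly int Quantity;

    public Order(string customer, DateTime placedOn, string drink, int quantity)
    {
        Customer = customer; PlacedOn = placedOn; Drink = drink; Quantity = quantity;
    }
}

public interface IOrderDao
{
    IList<Order> GetOrderPlacedOn(DateTime date);
}

// Hand-rolled mock: same interface as the real DAO, but instant and
// perfectly predictable -- identical results on every call.
public class MockOrderDao : IOrderDao
{
    public IList<Order> GetOrderPlacedOn(DateTime date)
    {
        IList<Order> orders = new List<Order>();
        orders.Add(new Order("Customer A", date, "Mai Tai", 6));
        orders.Add(new Order("Customer B", date, "Zombie", 1));
        return orders;
    }
}

// Simplified stand-in for Kevin McFarlane's design-by-contract class.
public static class Check
{
    public static void Require(bool assertion, string message)
    {
        if (!assertion) throw new ArgumentException(message);
    }
}

public class NightlyOrderLogger
{
    // The DAO now arrives as a parameter instead of being new'ed up
    // inside the method, so a test can pass in MockOrderDao.
    public int ExecuteNightlyLogging(IOrderDao orderDao)
    {
        Check.Require(orderDao != null, "orderDao may not be null");
        IList<Order> yesterdays = orderDao.GetOrderPlacedOn(DateTime.Today.AddDays(-1));
        // ...filter with the specification classes and write the logs...
        return yesterdays.Count;
    }
}
```

A test can now run new NightlyOrderLogger().ExecuteNightlyLogging(new MockOrderDao()) with no database anywhere in sight, and the Check.Require precondition still guards the production path.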
As an optional, bonus step (and to get your name thrown into the hat twice), pass a mock log4net logger to NightlyOrderLogger in addition to the mocked OrderDao so that log files aren't necessarily generated during unit testing.
So what did we just accomplish? There are a few benefits to what we just did: 1) the unit tests are blazing fast as they don't depend on database communications, 2) the data-access and business logic layers are now loosely coupled, 3) the data-access layer can be easily mocked or even, albeit not likely, replaced with a completely different database communications layer, and 4) developers can no longer introduce new, bi-directional dependencies between the data-access and business logic layers. The last point goes a long way to keeping an application clean, testable and maintainable.
This refactoring challenge is quite a few steps beyond the simple, introductory refactorings we saw last week. But I hope I've illustrated that refactoring techniques can not only make a method cleaner but also provide direction for refactoring application architecture as a whole.
Good luck on the challenge and let me know if you have any questions!
11-23-2006 1:46 AM