Are abstractions an illusion of improvement?

Last week I kicked off a contest challenging entrants to integrate the Castle Project Automatic Transaction Management facility into my NHibernate Best Practices sample code.  I felt that integrating this framework into the sample code would make the code more flexible, simpler to use and more explicit with respect to transaction management.  That sounds like an improvement, right?  But is adding this higher level of abstraction actually a detriment to our goals of software development?  A concerned, anonymous reader, going by the name "My Word," left the following comment (you'll have to ignore the syntax in favor of the semantics of the comments):

"Has anyone one stopped to think of all types_of *overhead* with this approach?

I would like to present a contest write a Script (choose VB or Java ) app (on any end, including components from scripts) that will perform worse, slower or with more management (and/or 'wu ha star trek practices') involved.

You can not, show me, you can not, not that easy :-) Test it, and see it for yourself, how it scale especially.

Porting all that bloat from Java, including log4j (my word), hibernate (gulp!), or spring is, always was and will be nonsense.

These things leak like hell, bringsmore overhead on what should be short, fast and snappy and most of all:

All techniques and Java-trash was made obselete with Windows OS integrated features, with WWF, with Linq, with SQL server and much more.

I swear they shouldn't have left Fowler do anything with his conceptual theories too.

But I guess that is what all the followers deserve when fashion of 'enterprise theorists' gets working, and produces bloat on top of bloat squared and plus ducks on top of misunderstanding that data is not objects (not since 1998 DOH!)

Java mistakes repeated all the time."

I'm not posting this to start a flame war with this reader.  On the contrary, I believe some interesting points were raised, albeit somewhat unintelligibly.  Consider what would happen if the Castle Project Automatic Transaction Management facility were integrated with the sample code.  The following constitutes the basic flow of how it would be employed during an HTTP request:

  1. A user makes an HTTP request after filling out an update form.
  2. The server, after receiving the request, registers the page controller classes with the Castle Windsor container.
  3. The container watches for any [Transaction] attributes as the registered classes' methods are invoked.
  4. When encountered, another class handles ensuring that a transaction exists.
  5. The code interacts with the NHibernate layer, configured via XML mapping files, to retrieve data which is then converted into domain objects.
  6. Updates are made to the domain objects.
  7. The controller/presentation layer converts any necessary domain object properties into a readable form for the users.
  8. An HTTP module, or other mechanism, watches for open transactions at the end of the HTTP request and commits them accordingly.
  9. When the transaction is committed, NHibernate checks to see if any domain objects are dirty and flushes their changes to the database.
  10. And throughout it all, a logger hooked up via Aspect Oriented Programming may be ready and waiting to spit out a bunch of info for the benefit of the developer.
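The attribute-driven steps above can be sketched in a few lines of C#.  This is not Castle's actual implementation; the TransactionAttribute and TransactionalInvoker below are hypothetical stand-ins for what the Windsor facility wires up via dynamic proxies, using System.Transactions to do the actual transaction bookkeeping:

```csharp
using System;
using System.Reflection;
using System.Transactions;

// Hypothetical stand-in for Castle's [Transaction] attribute.
[AttributeUsage(AttributeTargets.Method)]
public class TransactionAttribute : Attribute { }

public class AccountService
{
    public decimal Balance { get; private set; }

    [Transaction]
    public void Deposit(decimal amount)
    {
        Balance += amount;
    }
}

// Rough sketch of what an interception facility does: before invoking
// a method marked [Transaction], open a transaction scope; commit it
// only if the method completes without throwing.
public static class TransactionalInvoker
{
    public static void Invoke(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        if (method.GetCustomAttribute<TransactionAttribute>() != null)
        {
            using (var scope = new TransactionScope())
            {
                method.Invoke(target, args);
                scope.Complete(); // commit on success; rollback if an exception escaped
            }
        }
        else
        {
            method.Invoke(target, args);
        }
    }
}
```

The real facility does this transparently through the container, so calling code never sees the invoker; the point here is only that "automatic" transaction management is a thin reflective wrapper, not magic.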

And this is oversimplifying the process!  Furthermore, a couple of the steps are a little out of order and slightly misconstrue the responsibilities.  The point is, there is A LOT going on between steps 1 and 10 to process an update request.  Wouldn't it be much more efficient to write some ADO code directly into the code-behind page to start a transaction, send the raw form data to a stored proc, and commit the transaction?  Don't all these layers just add a lot of unnecessary overhead?  Perhaps.

Let's look at the flip side...go ahead and abandon the domain layer, the stand-alone data-access layer, the dependency injection, the AOP-driven logging, and the attribute-driven transaction management.  What do you have when you throw all this away and focus on minimizing the abstractions between the user and the data storage?  You have a mess formerly known, in the Microsoft realm, as Active Server Pages.  You have duplicated, verbose, and unmaintainable code which degrades further into incomprehensible drivel with each additional line added during maintenance.  I'm quite sure that all the overhead associated with the additional layers adds an appreciable amount of lag to each web request made within my applications.  Is it worth it?  Perhaps.

I'm sure it would measurably increase the performance of my applications if I abandoned all of these layers of abstraction in favor of an ultra-thin layer between the presentation and database.  Would that be worth it?  Definitely not.  It would lead to fragile code, ticked-off developers, unhappy clients (due to being terribly behind schedule and over budget) and an inability to quickly adapt to change.  My primary goal is not to make the fastest application I can conceive; it's to keep the client happy.  And I can only do this if I focus on a clean domain layer, testable controllers, and dispense with time-consuming plumbing issues such as ADO.NET.  And when I find performance bottlenecks, I have the time to address them properly.

Coding for maintainability and testability is far more important than making your code as efficient as possible.  Performance tuning should be left to code profilers, which will let you know exactly where you need to spend more time in speeding up your application.  So does all this mean that we have to use all these intermediary layers?  Perhaps not.

I've been a Microsoft web developer for 10 years.  During this time, I've never been completely satisfied with the options I have had for writing software.  To write maintainable, domain-driven software, I feel I have little choice but to use a number of layers which cause performance overhead and added indirection.  There's a learning curve to be had if you're a developer on my team, but once you learn how it all works, it's very easy to extend.  Down the road, in 20 years or so, I believe that people will look back upon our toolset just as we do upon the days of punch cards; software engineering still has a very long way to go.  In the meantime, we do what we need to do to write clean, maintainable software.

But exciting alternatives are arising...look at Ruby-on-Rails.  It's clean, simple to develop with, very maintainable, and does not have a terrible amount of technical overhead running behind the scenes.  But the layers of abstraction are still there and still hide a lot of magic that's happening behind the code you write.  The primary difference between Ruby-on-Rails and ASP.NET, in my opinion, is that the layers of abstraction are all but invisible in RoR.  ASP.NET, with its code-behind pages and page life-cycle, still has a ways to go to match the simplicity of RoR...but again, if you write ASP.NET applications, you have to work with what's available.

Higher layers of abstraction have always been, and always will be, a driving force in emerging technologies.  The primary benefit is that more and more details do not have to be managed explicitly.  The cost of this is added performance overhead and giving up some low-level control.  (I recall talking to an ex-Assembly programmer who dreaded giving up control when he had to move to C++.)  As software becomes more sophisticated, further abstractions will be needed to manage its development.  Software factories and the intention behind Microsoft Robotics Studio are two such examples of this.  (On a related note, Gödel, Escher, Bach by Douglas Hofstadter does a brilliant job of discussing abstractions and should be a required read for all software developers.)

When used appropriately, layers of abstraction add immense value to software development with the obligatory cost of performance overhead and added indirection.  Fortunately, a well designed, open-sourced abstraction always leaves room for performance driven circumvention and a means to see exactly what's going on.  But if you don't think it's worth it, you can always program in binary.

Billy McCafferty

Posted 05-10-2007 2:33 PM by Billy McCafferty



lilconnorpeterson wrote re: Are abstractions an illusion of improvement?
on 05-10-2007 9:34 PM

Great post!

Rob Eisenberg wrote re: Are abstractions an illusion of improvement?
on 05-10-2007 11:08 PM

Well put.

Evan wrote re: Are abstractions an illusion of improvement?
on 05-11-2007 12:33 AM

To each his own.  I think coming to the TDD/DDD/modeling mindset is very much a journey, or at least it was for me.  I also think personality has a lot to do with it.  Some of us (myself included) love working with abstract concepts.  The same can't be said for the rest of the population (which will include some percentage of developers).  For these people, the Smart UI pattern might just be best.  All I know is that I have been down the road the commenter describes, and I will not be going back.  I'd rather switch professions than go back.  Maybe he hasn't felt the pain yet.

In the end, you can lead a horse to water, but you cannot make him drink.

Billy McCafferty wrote re: Are abstractions an illusion of improvement?
on 05-11-2007 9:30 AM

You're right on the money.  In hindsight, I definitely went through that as well.  In fact, I'd say that I only began to appreciate OOP and a strong separation of concerns after years of reading piles of books, experimenting with what I read, making plenty of dreadfully terrible decisions, and learning from the experiences of maintaining some of the worst code possible; aka, my own.

Boris Drajer wrote re: Are abstractions an illusion of improvement?
on 05-11-2007 12:23 PM

Hey, he forgot SQL... There's an overhead in parsing SQL queries, for example. Everyone knows it is much faster to iterate through records manually - ok, if I wanted to make queries faster I'd write some indexing code, and it would surely be more performant than the server's generic implementation... And also, pushing data over TCP/IP, do you know how many packaging layers there are (I mean, why would I need the routing logic, it only slows me down)? And why should TCP/IP talk to the driver when it's much faster to access the network card directly?

Etc etc... You just cannot argue with people like these because they are right in saying things like "applications would work much faster if you wrote them directly in microcode". Yes, it's true... But do I want to write microcode? Only when I want to program everything by myself, operating system included. In other words, never.

Udi Dahan - The Software Simplist wrote re: Are abstractions an illusion of improvement?
on 05-11-2007 5:46 PM

First of all, layers hardly ever impact performance in terms of latency per request. It's just one DLL calling another. Tiers are another thing entirely.

Also, to the quote above - are not WF and LINQ abstractions as well?

Anyway, as one whose main business is designing high-performance, ultra-scalable systems, I have to say that I focus most on I/O-related things: web services, DB, etc.  It's all about finding the bottleneck, like you mentioned, using profilers and such.

The biggest problem my customers have after we've found the bottleneck is changing their software to fix it.  It is exactly there that we need the code to be pliable.

Abstractions, for their own sake, are rare in my experience when basing our code on OO designs. I find them much more common in procedural designs, so maybe that's where "My Word" is coming from.

Pete W wrote re: Are abstractions an illusion of improvement?
on 05-11-2007 7:29 PM

Yeah Billy, I've locked horns with people of this mindset before.

In my new company [which will remain unnamed] I did an introductory presentation of Windsor, and how to use it for abstract factory and decorator situations. About half of the eyes of my audience glazed over.

Most of your biggest opponents to abstraction and modularity come from an older practice, where they are afraid to abandon their familiar mechanisms for something new.

It seems like yesterday I heard people complaining, why should I use Java when everything I need is right here in C++?

The common argument they always carry is performance.  Abstraction forces you to relinquish a degree of control for the sake of consistency and simplicity.  Yes, there will always be anecdotal situations where you need a higher degree of control, and abstraction hides away too much of this control, but that's why a good framework will allow you to harness and override levels of abstraction.  A good example is NHibernate 1.2 and its support for stored procedures.
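(As a concrete reference point for that last example: NHibernate lets a mapping route writes through stored procedures instead of its generated SQL.  The fragment below is a made-up illustration — the Customer class and the proc names are hypothetical — showing the custom-SQL elements; the positional parameters must line up with the order in which NHibernate binds the mapped columns.)

```xml
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Customer" table="Customers">
    <id name="Id" column="CustomerId">
      <generator class="assigned" />
    </id>
    <property name="Name" />

    <!-- Route writes through stored procedures instead of
         NHibernate's generated SQL.  Parameter order must match
         the order NHibernate binds the mapped columns. -->
    <sql-insert>exec InsertCustomer ?, ?</sql-insert>
    <sql-update>exec UpdateCustomer ?, ?</sql-update>
    <sql-delete>exec DeleteCustomer ?</sql-delete>
  </class>
</hibernate-mapping>
```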


John wrote re: Are abstractions an illusion of improvement?
on 05-22-2007 7:26 PM

(my 2 cents)

Admittedly, I'm a bit wet behind the ears in real world experience with software architecture/development.

I took Software Engineering in college (in 2002-2003). I didn't do very well in it. Probably the biggest reason was that we were being taught to do OO Analysis and OO Design (among other things), but there was no coding in the class!! It was really hard to piece these concepts together in my mind without some hands-on examples and an understanding of how it all fit together at the code level.

I never really had time in college to try a project using those principles. But soon after college, I was looking for work and in the meantime working on my own project every day in C#, ASP.NET, .NET 1.1 and ADO.NET. I was trying my best to use these principles, but all the MS documentation on MSDN for ASP.NET was little help for designing something that seemed to me to be done "the right way". So I designed and coded the best I could with what I knew.

One of the largest drawbacks with the way I was building that application was that I had to try to foresee all my DB fields for any particular table before I coded. This is because the only way I knew how to do it at the time would basically require me to rewrite/rebuild a large portion of the application if I just wanted to add one simple field to the DB after the fact. It was a grim situation to have to deal with, but I was determined to make my application. Even if it didn't take off, at least I got my feet wet and had some hands-on experience to learn from.

Fortunately, I found Billy's articles on CodeProject about the same time I started my next big application. I recognized this architecture/framework was like RoR in many ways, and I learned a lot from his articles and the samples. I basically used modified versions of the samples as a starting point for my project. Building off of this framework and trying to use the various concepts I learned in Software Engineering and elsewhere has proved to work out very well and allowed me to build an application that I wouldn't have been able to otherwise, I'm sure. The fact that I can change some things here and there and that it has no effect on the rest of the application allows the flexibility necessary to build in the real world. I can't always see exactly what the product is going to be when it is started, and that is the nature of the software industry that I've seen.

Although I am for efficiency, and the thought of all those layers and their overhead is kind of disheartening, I'm convinced that this is the "right way" to develop software. As it's said, hardware is relatively cheap. And being able to develop like this is priceless when compared to the thought of the cost of developers trying to maintain code that is not easy to change.

Most everything is built on layers of other things anyway, and I'm sure there will be bigger and more layers underneath the applications of the future. It's the price we pay for higher level languages/frameworks. But we'll pay it because they allow us to accomplish more with less work. The work we don't do still needs to be done somehow, and it is, by the computer. That's what computers are for right? They do the work so we don't have to.

(like I said, my 2 cents :)

Billy McCafferty wrote re: Are abstractions an illusion of improvement?
on 05-23-2007 12:50 PM

Hear hear!!

DamonCarr wrote re: Are abstractions an illusion of improvement?
on 05-27-2007 12:03 PM

Billy (and to the poster),

Your response was great. I’ve run into a few people like this.

They are usually the ones trying to optimize code to shave a few milliseconds off a sorting routine THEY WROTE when the framework already has a binary sort that is blazing.

And right after this code, there is an out of proc DB call that takes .5 seconds and these are not concurrent.

Here are my 2 cents:


1) The history of our field and just 'what can we achieve' with software and why has that improved since say… The 40s?

2) The evolution of Individual/Team Productivity via Abstraction (I am ignoring all the silly teams trying to do waterfall or CMM level -1 – ha ha)

It is a simple fact that the entire evolution of the software industry is based on one behavior: Abstraction. Full stop.

This is so obvious I am embarrassed....

Binary --> Assembly --> 2GL --> 3GL --> 4GL -- DUH

This post is ignoring the fact that (in spite of the fact software is a miserable failure almost all of the time STILL, mostly for human issues, not technology), we could not have made any progress without abstraction.

And yes, NHibernate is the best abstraction I am aware of for the .NET developer doing 'Domain' driven work.

SOMEDAY Microsoft will deliver something I am sure.. But who cares right now.

And if you are a Microsoft guy and waiting? SHAME ON YOU! Read Billy’s articles and get off your lazy butt.

And another thing (grin – RANT CONTINUES)… Do I even have to say it?!

Hardware is now cheap, people are not (and don’t even mention India… Yeah that works….NOT! – said in a Borat accent).

Inefficiencies in software are becoming irrelevant if the cost benefit shows a massive improvement based on the abstraction.

Memory leaks? Well they can be tolerated if they are managed by a system restart at 3AM once a week.

Is that great? No... A purist would cringe (even I do a little)..

But if you can save 20K a month perhaps it is ok….

So you spend an extra 20K on hardware. Is that OK?

Well yeah if you save 100K a month on people.

I suggest the poster:

1) Take a few business classes

2) Read up on the history of our field

3) Choose blogs of perhaps people who are not actually working in software to vent his misinformed dogma

And I really am a fairly nice guy I swear.

Kind Regards,

Damon Carr, CTO




Site Copyright © 2007 CodeBetter.Com
Content Copyright Individual Bloggers
