I was presented with the following discussion opener concerning the management of complexity via the introduction of abstraction...
How do we handle complexity? I asked a few software architects I know, and all of them answered, “Abstraction.” Basically they're right, but as a math major in college I picked up a principle that I believe software architects and designers miss: complexity is constant. In other words, if you're designing a system that is inherently complex, you can't reduce that complexity. In chemistry I remember learning that energy is constant; it can't be created or destroyed, it's just there. In software, if the solution to a problem is complex, the complexity is always going to be there.
What about abstraction? Abstraction basically hides complexity. This is good, right? The problem is that in a lot of designs, once abstraction hides the complexity, the designers tend to forget about it. If you work hard at a good design to hide the complexity and then forget about it, it will come back to haunt you some day. So what does one do?
I love this discussion! The heart of this is determining whether the introduction of abstractions, for the sake of hiding complexity, is truly an overall benefit. In robotics, we're continually dealing with increasing levels of abstraction for this very reason. For example, in Architectural Paradigms of Robotic Control, I briefly discussed the 3T architecture, which has three separate layers, implemented as increasing levels of abstraction. E.g., the skill/servo layer would likely be implemented in C++, the sequencing/execution layer might be implemented in a sequence modeling language such as ESL or NDDL, while the planning/deliberative layer might be implemented at a higher abstraction yet, such as with the Planning Domain Definition Language or Ontology with Polymorphic Types (Opt).
In response to the concerns put forth, I would tend to agree that abstraction hides complexity and that it does make it more likely that you may be bitten by the hiding of the complexity. With that said, encapsulation of complexity into well-formed abstractions is an inevitable step in taking on increasingly complex problems. For example, in the .NET world of data access, Fluent NHibernate is really just a way of hiding the complexities of NHibernate. NHibernate is really just a way of hiding the complexities of ADO.NET. ADO.NET is really just a way of hiding the complexities of communicating with a database via TCP/IP sockets, or whatever underlying mechanism is employed. In the same vein, tools such as Herbal, NDDL, and ESL similarly exist to provide an abstraction that hides complexity.
Because these layers of complexity have been encapsulated in a manageable fashion, we're now able to take on project work which would be far too complex to manage if we were using a lower level of implementation, e.g., pure C++, or Assembly for that matter. Indeed, there will be times when the added layers of abstraction will make it more difficult to tweak a low-level capability, but the improved complexity management that the abstractions provide should far outweigh the sacrifice of losing some low-level capabilities.
I think the crux of determining whether an abstraction is worthwhile comes down to these questions:
- Does the abstraction reduce complexity of interacting with the underlying layer that it encapsulates?
- Does the abstraction make it easier to tackle increasingly complex problems?
- Does the abstraction provide enough tweak points to accommodate the 5-10% of times that more low-level control is needed?
- Does the abstraction increase maintainability and ease of understanding of the overall goals of the system?
If the answers to the above are yes, then I believe that the encapsulation of complexity, codified as a new layer of abstraction, is pulling its weight. Otherwise, you might not want to throw that Assembly language reference away just yet.
08-18-2010 4:27 PM