Lessons Learned: A Grand Post-Mortem

Taking the development blogosphere at face value, readers would get the impression that development teams rarely run into complications during project development and that things go quite smoothly throughout client engagements.  You often read blogs about the ideal ways of using a Kanban board, the benefits of burn down charts, how great user stories are, how agile development leads to greater project success, and how various design patterns make maintainability a manageable task.  It's hard to find many blog posts that begin with "Man, that was rough!"  The fact is that perfect projects are few and far between.  And sometimes, no matter which techniques are leveraged and no matter how hard a team tries, the project still seems like it's sinking further and further into chaos.  You're certainly not alone if you've ever seen clients get upset due to schedule and/or budget overruns, had to deal with developers quitting halfway through a project, suspected that management feels out of control, felt overwhelmed by too much work and too little time, realized that the initial estimates were way off, or felt that quality was being sacrificed.  Obviously, these are undesirable situations to run into, but all of them have been encountered by most developers at one point or another.  I've just completed a couple of six-month-long projects which I see as hard-won successes.  Accordingly, I'll discuss some of the lessons learned from these projects along with some other items of (possible) wisdom I've picked up over the past few years of using agile development from both a developer and project manager perspective.

Agile Development Ain't All or Nothin' (and same for TDD, DDD, xDD)

I frequently hear people state that their project is too small for Domain Driven Design or that it just won't work with Agile Development or that it'll be hampered by TDD or some other reason why they're ignoring a complete set of practices.  Tossing out an entire practice area, such as Agile, DDD, or TDD, is akin to the cliché of throwing the baby out with the bath water...at least keep the bath water!  The point is that even if you feel a practice area is not entirely applicable, there are inevitably specific techniques and/or tools from every practice area that will provide benefit to your project.

For example, you may find immense benefit from applying DDD's ideas of Repositories, Application Services, and Bounded Contexts to your project without necessarily having to conform to the remainder of DDD's principles.  Likewise, even if you're not going for 100% test coverage, or even 10%, you can still find immense value in applying unit testing to a few fragile pieces of code.  Applied in the right areas, TDD (or at least Post Development Testing) can go a long way toward making your code easier to understand and maintain.  And with respect to Agile Development, if you don't feel that Pair Programming is practical in your environment, then don't do it...but don't ignore all the other agile techniques such as burn down charts or frequent client feedback.  Just like design patterns, the specific practices described within a practice area may frequently be used in isolation, resulting in great benefit to the project.
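To make that concrete, here's a minimal sketch of what using just the Repository and Application Service patterns might look like in C#; the Order, IOrderRepository, and OrderService names are hypothetical examples rather than code from any particular project.

    // A minimal sketch of two DDD building blocks used in isolation.
    public class Order
    {
        public int Id { get; set; }
        public bool IsSubmitted { get; set; }
    }

    // Repository: the only place data access code is allowed to live.
    public interface IOrderRepository
    {
        Order Get(int id);
        void Save(Order order);
    }

    // Application Service: coordinates a use case without leaking
    // persistence details into the UI layer.
    public class OrderService
    {
        private readonly IOrderRepository _orders;

        public OrderService(IOrderRepository orders)
        {
            _orders = orders;
        }

        public void SubmitOrder(int orderId)
        {
            Order order = _orders.Get(orderId);
            order.IsSubmitted = true;
            _orders.Save(order);
        }
    }

Nothing else about DDD needs to be adopted for these two patterns to pay for themselves.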

But just as you shouldn't take an all or nothing approach to practice areas, you shouldn't claim you're doing an "agile project" or a "DDD project" if you're only taking bits and pieces from various practice areas.  For example, on a recent project, what began as an agile project quickly morphed into a hybrid project taking bits from Waterfall, Agile, DDD, MDD and just about everything in between.  This evolution wasn't necessarily bad, but what was bad was the continued use of agile nomenclature to describe everything.  While the only thing truly agile on the project was the daily scrum, the team continued to refer to user stories, iterations and product owners which bore little resemblance to their agile equivalents.  The drawbacks to this included giving stakeholders and developers new to agile a very wrong impression of what user stories, iterations and product owners are, and unfairly giving agile development a bad name when these misdefined artifacts were seen as the culprit of confusion in a few instances.

In order to set expectations correctly, to mitigate misunderstandings caused by incorrectly used nomenclature, and to endorse the use of practices selected from a wide array of good practice areas, avoid pigeonholing your project as "agile" or "not-DDD" or "data-centric."  Instead, focus on coming to team agreement on which techniques from which practice areas will be most beneficial to the project, and agreeable to the team as a whole, and make that your process.

Putting It Into Practice:  As a project kick-off activity (among others), maintain two worksheets, called "Development Practices" and "Project Management Practices."  On the former, list those practices that the development team will make a commitment to practicing; e.g., 50% code coverage, data access code only within Repositories, extensive use of Application Services, etc.  On the latter, list those practices that the product owners, project managers, scrum masters, and/or management will commit to practicing; e.g., x-week long iterations, requirements adhering to a referenceable requirements template - be that user stories, use cases or otherwise, burn down charts updated weekly, etc.  During team agreement, make sure that all team members have a complete understanding of what has been committed to by both sides.  This goes a long way towards clearing up misunderstanding and setting defined expectations.  Finally, review what the team has committed to after each iteration (a "sprint retrospective" in scrum speak) to discuss what's working, what's not working, and to re-adjust as needed...everyone must give their buy-in if an adjustment to either list is being made.

Developers Totally Suck at Estimating

To be fair, this isn't totally the developers' fault.  We developers have a number of things going against us:  initial requirements are typically only educated best guesses, very little time is typically given to provide estimates, one developer usually acts as the estimator for the entire project (and therefore for the other developers), developers are terribly optimistic, and requirements ALWAYS change.  But just because our estimates end up totally sucking doesn't mean we can't get better or use techniques to ease the blow.

There are two prevalent estimation anti-patterns that are difficult to overcome:  giving estimates in inflexible hours/days and not taking the time to learn from previous estimation attempts.

A problem with giving estimates in hours/days is that they are typically from the perspective of the estimator.  Consequently, what's 4 hours to the estimator may end up being 12 hours to the developer who ends up working on the task.  The team then has to go back and adjust the estimates accordingly.  Another problem with giving estimates in hours/days is that they are not easily adjustable if it is found that the estimates were universally too high or, much more likely, too low.  So again, the team must go back and provide new estimates or apply an estimate multiplier.  As I've never been a part of a project wherein an estimate multiplier did not come in handy, I've learned that estimating in hours/days (especially for an entire project) is a relatively futile effort due to the fact that the meaning of an "hour" or a "day" always ends up changing.  Therefore, I've become a strong proponent of estimating in "points."

In Agile Estimating and Planning, Mike Cohn suggests using "points" to estimate work instead of hours or days.  To start this process, a unit of easy-to-estimate work is selected to act as a baseline for other estimates.  This baseline unit of work may be given a points estimate of 3, for example.  Other units of work are then compared to the baseline unit of work and given estimates relative to that baseline.  For example, if the login process is expected to take 3 points of work, then the shopping cart may be expected to take 8 points of work - or almost three times the effort of the login process.  Because the difference between 5 points of work and 6 points of work can't realistically be predicted, estimates adhere to a points scale; an example scale is the first few non-zero numbers of the Fibonacci sequence:  1, 2, 3, 5, 8 and 13.  After the highest value / highest risk work units have been estimated, a project schedule can then be organized by determining how many points can be completed in a given iteration and then assigning units of work to the iterations accordingly.  If the project team realizes that it's delivering fewer or more points per iteration than originally expected, then only the points-completed-per-iteration needs to be adjusted to fix the schedule expectations; the tasks themselves don't necessarily need to be re-estimated.  As the schedule progresses, it becomes evident how many hours of work are in one point.  The same applies to individual developers as it does to the team:  it becomes evident how many points of work each team member can reasonably accept at the beginning of each iteration.
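As a rough illustration of the schedule math this enables (with made-up numbers, not figures from a real project), the remaining iterations fall straight out of the backlog total and the observed velocity:

    using System;
    using System.Linq;

    // A rough sketch of points-based scheduling; all numbers are illustrative.
    class PointsSchedule
    {
        static void Main()
        {
            int[] backlogEstimates = { 3, 8, 5, 13, 2, 5, 8 };   // points per story
            int totalPoints = backlogEstimates.Sum();            // 44 points remaining

            double observedVelocity = 11.0;                      // points completed per iteration so far
            int iterationsRemaining = (int)Math.Ceiling(totalPoints / observedVelocity);

            // If velocity turns out to be 8 instead of 11, only that one number
            // changes; the individual story estimates stay exactly as they are.
            Console.WriteLine("Roughly {0} iterations remain.", iterationsRemaining);
        }
    }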

In my experience, points are great for estimating the project and determining approximately how much work can be completed in a given iteration.  But once an iteration begins, time should be taken to estimate, in hours or days, how long each user-story/use-case/requirement/task will take.  This exercise helps to verify that developers have a proper understanding of the tasks at hand (by forcing them to think carefully enough about the scope of a task to be able to give a concrete estimate on it) and sets the developers up for improving their estimation skills...

This leads us back to the second anti-pattern of estimating:  developers rarely expend any effort to determine if their estimating skills are getting better or worse.  While "points" are a useful band-aid to cover up the fact that estimates are infrequently correct, we should still strive to improve our estimating skills.  Ideally, developers would compare their low level estimates to actuals on an iterative basis; but in reality, developers should, at the absolute minimum, spend at least one project iteration carefully providing low level estimates in hours for each task given to them, tracking actual hours for each task, and then comparing actuals to the original estimates.  This exercise helps to improve estimating skills which will benefit estimating capabilities on subsequent iterations and projects.

Putting It Into Practice:  On your team's next project, when asked for an estimate, come up with your estimate using the "points" approach.  If management feels uncomfortable with the idea of communicating points to a customer, or if you need a finite "hours estimate" to create a budget, then do your best to determine your initial points-to-hours multiplier for the sake of the budget, but continue to hone the multiplier after each iteration by comparing how many points were attempted vs. how many were actually completed.  This will help your team determine if you're on schedule when looking at how many points remain.  If you see that the estimates were way off initially, it's best to realize this early in the project schedule and work with management and the client to determine if the schedule can be extended or if scope can be adjusted.  I find that if I remain open and honest with a client early on in the project life-cycle, they're much more likely to reduce scope to help the schedule.

With respect to honing your estimation skills, select at least one iteration in a project to carefully estimate - in hours - individual user-stories/use-cases/tasks and track actual hours spent on each to determine where you are with your own estimating skills...we all have our own multiplier; it's our job to find out what it is!  Honestly, mine's about 2.2; i.e., when I estimate a task, I multiply my initial gut estimate by 2.2 and that's just about what it turns out to be.  Don't be ashamed of your multiplier...take the time to figure out what it is and apply it to your estimates.  You'll save yourself a lot of suffering (and overtime) in the long run.
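Finding your multiplier is nothing more than dividing total actuals by total estimates over an iteration.  Here's a tiny sketch of that bookkeeping in C#; the sample hours are made up:

    using System;
    using System.Linq;

    // A tiny sketch of deriving a personal estimation multiplier by comparing
    // gut estimates to actual hours over one iteration; sample numbers only.
    class EstimationMultiplier
    {
        static void Main()
        {
            double[] estimatedHours = { 4, 2, 8, 3, 6 };
            double[] actualHours    = { 9, 5, 16, 7, 14 };

            double multiplier = actualHours.Sum() / estimatedHours.Sum();

            // Prints roughly 2.2; apply it to future gut estimates before committing.
            Console.WriteLine("Personal multiplier: {0:0.0}", multiplier);
        }
    }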

A Backlog of User Stories is Like a Novel Cut to Shreds

I love user stories.  They're easy to write, they reflect the language of the client, they're pretty easy to estimate, and they're typically easy to convert into code when granular enough.  But as a team gets larger and scope increases, I've found them to make a mess of keeping the team "on the same page" and communicating the big picture of the project.  This loss of "the big picture" shows up when user stories appear disjointed in the realized functionality of the project.  It can also take a toll when an important architectural decision isn't made until later in the project.  One could argue that this is due to inadequate user stories.  But when one loses sight of the big picture, it's difficult to ask the right questions to avoid such undesirable situations.  After years of using user stories (sometimes "by the book," other times more casually), I'm leaning towards a combination of both "waterfall" style documentation and user stories.  I still like user stories, but I feel that a "big picture" document is also often required to show how the puzzle pieces fit together.  On a recent project, the business analyst began writing waterfall style requirements early on in the project.  After much protesting, the requirements were re-written into user stories and the original documentation was discarded to avoid confusion.  About two-thirds of the way into the project, as user stories evolved, we found that some user stories began to conflict with the direction of other user stories.  We found that some workflows, especially those with many user story interdependencies, began to have holes and misguided assumptions.  As a team, we began wanting to resurrect much of the original documentation to be able to once again talk about how the user stories fit into the overall project.  Only by keeping track of user stories in the context of a well organized big picture did we start having a common vision again and get back on track.  A user story backlog, without sight of the overall context, is like a bag full of puzzle pieces - when those puzzle pieces begin to evolve in isolation of each other, you may end up building a puzzle that doesn't fit together.

Putting It Into Practice:  If using a scrum-like product backlog, if using use-cases, or if using an approach to requirements which breaks the functionality down into isolated pieces, agree as a team how you will keep track of the "big picture."  (This agreement should go into the "Project Management Practices" worksheet.)  One approach is to use a backlog "map" as described at http://www.agileproductdesign.com/blog/the_new_backlog.html.  Another approach is to maintain a project "vision" document which lightly describes the overall project in about 5-12 pages, using UML flow diagrams, module descriptions, critical success factors and such which the team can continually refer back to in order to discuss the overall context of the project.  As user stories are created, user story identifiers could be added into the UML diagrams so that everyone knows how the user story fits into the overall context of data flow within an application.  There's no singular best practice here, but it's important that the team agrees on what mechanism will be used to keep track of the overall context.

Sometimes Waterfall Requirements aren't the Spawn of the Devil

What?!?!  Here's an interesting situation that I ran into on a recent hard-won-success project...  The business analyst who was labeled the "product owner" was not given the appropriate authority from upper management (as we discovered) to decide priorities, to make changes, and to truly represent the client in determining requirements.  This was not a fault of the business analyst, just an attribute of the organization that we were dealing with.  As the project progressed, we discovered that essentially a committee was involved in determining requirements and priorities.  When the project began, I had simply assumed that the product owner had the appropriate authority and domain expertise to properly fulfill the role.  Consequently, we went forward with an agile approach to requirements management wherein user stories were defined with the assumption that they would be more fully elaborated at the beginning of each iteration and that scope would be flexible to accommodate changing priorities.

We began running into complications with this approach that took their toll on both the schedule and the coherence of the overall application.  Firstly, when each iteration began, it was terribly time consuming to try to get the needed elaboration of user stories.  Consequently, assumptions were sometimes made in an effort to continue moving forward.  At other times, the user story elaboration would conflict with the originally assumed context of the user story because a committee of stakeholders (behind the business analyst) was being pulled in to provide the elaboration of the requirements.  When a committee gets involved with providing user story elaboration, it's much more likely that user stories will begin to digress from their original intention, both for the sake of intra-committee compromising and because each individual on a committee typically has a different "big picture" of the project than the others.  The project continued to fracture and become more and more disjointed until we were able to find a person who had the appropriate authority and domain expertise to cut out the committee and get back to a singular vision.

In this instance, wherein the product owner did not have adequate authority and domain expertise and a committee was involved with driving elaboration, I truly wish we had taken a waterfall approach to requirements definition.  If we could have begun development with a 40 page requirements document having mock screens, expanded use-cases, and committee buy-in, I believe we would have realized appreciable benefits.  We could have still approached development in an iterative manner but would have been able to greatly reduce requirements elaboration at the beginning of each iteration, been able to reduce the number of assumptions that were made due to committee-agreement delays, and ended up with a more cohesive effort throughout the project by avoiding the effects of multiple decision makers having subtly different visions of the project during requirements elaboration.  I'm a huge proponent of agile requirements management, but a waterfall approach to gathering and agreeing upon them is sometimes justified.

Putting It Into Practice:  At the beginning of the project, ask the following questions point blank:

  • Who is the product owner?
  • Does the product owner have the authority to decide priorities?
  • Does the product owner have the authority to adjust scope?
  • Does the product owner have intimate knowledge of the domain of the project, or is there a single representative who can stand in for the product owner for particular modules of functionality?
  • Is the product owner going to be logistically available to provide timely expansion and clarification of requirements when needed?
  • Can the product owner fulfill all of the above without having to go back to a committee of stakeholders?

If your answer to a few of these questions is "No," you may want to consider a more waterfall approach to requirements development and management.  Having waterfall requirements doesn't preclude iterative development, and it can improve the chances of project success in some organizations.

Configuration Management First or Suffer

On a recent project, I set up the continuous integration environment to get and build changes and deploy them to a development environment.  At first I had it deploy only when all unit tests were passing and a minimum level of code coverage was met.  But I neglected to automate deployment of changes to the DB.  Consequently, if changes were made to the DB and a developer checked in code, there was a good chance that unit tests would break and the code wouldn't deploy until the changes were applied to the DB.  This wouldn't have been a big deal to resolve properly; I just never took the couple of hours needed to automate the process.  As a "temporary" band-aid, we turned off running the unit tests in the continuous integration process and allowed automated deployments to the development environment to occur as long as everything compiled successfully.  This temporary band-aid turned into the long term configuration; consequently, developers stopped being very concerned about checking in a broken unit test.  Once you have a number of broken unit tests, it's difficult to muster the motivation to clean them all up unless you prevent the problem from occurring again in the future.  The lesson learned here is that if you have expectations of your team, such as unit testing coverage, be sure to take appropriate configuration management steps to ensure team compliance from day one.  Policies that aren't enforced are policies that are likely to be broken.

On a similar note to configuration management, another lesson learned over the past couple of years is that you should begin maintaining a comprehensive "Dev Setup and Deployment Guide" from day one of the project.  When I say "comprehensive," I do not mean to imply that it should be long, only that it should include every single step necessary to set up a new development environment and to deploy the project to a staging server or to production.  This will save you many hours of tedium and deployment hardship later on in the project when time is at a premium.

Putting It Into Action:  After agreeing upon "Development Practices" at the beginning of the project, put configuration management (e.g., your continuous integration environment) into place to automatically enforce as many of the development practices as is practically possible.  Not doing so makes it that much easier for the agreements to be broken.
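As one hedged example of what "automatically enforce" can mean, the build could end with a small gate that fails (and therefore blocks deployment) whenever coverage drops below the agreed minimum.  The summary-file format and the 50% threshold below are assumptions for illustration, not the output of any particular coverage tool:

    using System;
    using System.Globalization;
    using System.IO;

    // A minimal sketch of a CI coverage gate; assumes an earlier build step wrote
    // the overall coverage percentage (e.g., "63.4") to a plain-text summary file.
    class CoverageGate
    {
        const double MinimumCoverage = 50.0;   // the team's agreed-upon minimum

        static int Main(string[] args)
        {
            string summaryPath = args.Length > 0 ? args[0] : "coverage-summary.txt";
            double coverage = double.Parse(
                File.ReadAllText(summaryPath).Trim(), CultureInfo.InvariantCulture);

            if (coverage < MinimumCoverage)
            {
                Console.Error.WriteLine(
                    "Coverage {0:0.0}% is below the agreed {1:0.0}%; failing the build.",
                    coverage, MinimumCoverage);
                return 1;   // non-zero exit code fails the build and blocks deployment
            }

            Console.WriteLine("Coverage {0:0.0}% meets the agreed minimum.", coverage);
            return 0;
        }
    }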

Additionally, from day one, maintain a "Dev Setup and Deployment Guide" for your project and keep it up to date as needed.  Every 5 minutes that you put into this early in the project will likely save you an hour towards the end of the project...exactly when you don't have an hour to spare.

Enable Team Members with Proper Training (or control potential damage)

On one of my recent projects, we had 5 developers, including myself as lead.  This isn't a particularly large group, but it's large enough to have warranted some extra steps when getting each developer involved.  The first is that the lead developer MUST take the appropriate time to assess, and train if necessary, each developer with respect to how the project is architected and any development assumptions.  For example, DDD was to be the development paradigm for the core layer of our application.  I had asked each developer to read Domain Driven Design Quickly (http://www.infoq.com/minibooks/domain-driven-design-quickly) and had followed up to make sure everyone had done so.  What I hadn't done is taken the time to A) personally describe the actual implications of DDD I was expecting to be carried out in development, B) hold frequent one-on-one code reviews shortly after each developer had gotten involved, and C) restrict some developers to a subsection of the codebase based on their ability.  These three inactions led to developers misinterpreting my vision of DDD on the project, developers leaving an obvious "signature" of their style of development, and some developers mucking in code which they had no business working with; the consequence will be refactorings which could have been avoided.

On another project, agile nomenclature was being used incorrectly.  For example, a business analyst was providing "user stories" which bore no resemblance to user stories.  The lesson from this is that time should have been taken either to correct how user stories were being written, or to agree as a team upon an appropriate nomenclature and a means to properly organize the requirements.  Because no such agreement was reached, the requirements on the project never adhered to a predictable structure or means of organization.  It's OK for project teams to adopt hybrid styles of requirements management, but it's important that the team agrees upon what that style is and is properly educated to adhere to the agreed-upon process; e.g., requirements as user stories, use cases, or otherwise.

Putting It Into Practice:  At the beginning of a project, if you are the lead developer, take a day to make sure developers truly understand your vision for development.  Carefully explain how you expect unit testing to be implemented (e.g., using behavior driven testing), how you expect important ideas to be expressed (e.g., application services), and explicitly state which areas of the application each developer should be involved with (e.g., stay out of the data access layer, and enforce this via source control).  If a new developer comes on board, again take a day to explain these important areas and train the new member on them.  You want your application to have a predictable style, as if only one developer wrote it - taking the appropriate time up front can greatly assist with this.

With respect to project management practices, if you notice nomenclature (e.g., "user story") being used incorrectly and being bastardized, take swift action to properly train the person or come to a team agreement on new nomenclature to describe the hybrid approach.  And if your team takes a hybrid approach (e.g., a user-story crossed with a use-case), then make sure everyone has a clear definition of what the hybrid approach is and provide a "best practice" example and/or template for better repeatability.

Don't Leave Hidden TODOs and Refactor Until No Longer Nauseous

I won't talk about this much other than to say I've observed that it's the rare developer who takes the time to F3 for TODOs within the code and resolve them accordingly.  Instead of a hidden TODO comment, throw a NotImplementedException() and/or add a new task to your requirements management tool to make missing functionality or a much needed refactoring more obvious.

After a project is done, every developer has a spot or two that he/she knows is kludgy or would be embarrassed to show someone else.  This bit of code is likely a maintenance nightmare and is likely to break.  Take the time to refactor when you notice the refactoring needs to be done.  If you procrastinate, there's a good chance that it'll never be properly refactored and will become the bane of a future developer (or, very likely, yourself).

Putting It Into Practice:  If you add a TODO code comment, add a new item to the requirements management tool to make sure it's known to the team.  And if you find (or introduce) a bad code smell, refactor it until you no longer have that sick feeling in your stomach...if you're embarrassed for someone to see it, keep refactoring.
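As a small illustration (with hypothetical names) of making a gap loud in the code itself rather than in a comment nobody searches for:

    using System;

    // A small sketch of replacing a silent TODO with a loud failure; the
    // ShippingCalculator and its rates are hypothetical.
    public class ShippingCalculator
    {
        // Before:  "// TODO: handle international shipping rates" hidden in the
        // method body, silently returning a wrong value.
        // After:  the gap fails the moment anyone exercises it, and a matching
        // item is added to the requirements/backlog tool so the team sees it.
        public decimal CalculateShipping(bool isInternational, decimal weightInKg)
        {
            if (isInternational)
                throw new NotImplementedException(
                    "International shipping rates not implemented yet; tracked as a backlog item.");

            return 4.95m + (0.50m * weightInKg);   // simple domestic flat rate for the sketch
        }
    }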

Gold Plating Kills Budgets & Schedules

Contrary to the client's perspective, gold plating killed the budget of a recent project of mine.  What he saw as "core" elements of the application, I saw as features that would likely never be used and features which had never been discussed previously.  Although I brought up my concerns about gold plating, I didn't take the appropriate time to create proper documentation to back up my claims of scope creep via gold plating.  Consequently, the client didn't feel compelled to drop a comparable amount of lower priority scope, and it was more difficult for me to argue for a schedule extension.  This is not the client's fault!  It's the client's job to get a product that they feel is appropriate for the users.  They're going to ask for changes no matter what you do to prevent it.  In these cases, it's your job as the project team to keep the client very aware of the impact of the changes.  Changes can be dealt with by extending the schedule and/or budget or by dropping lower priority scope, but it's very important that you agree with the client at the beginning of the project what action will be taken if and when change requests come up.  And in order to have the client agree that changes are having an impact on the project, it's important to take the appropriate steps to maintain the evidence to back up your claims.

Putting It Into Action:  My favorite way to communicate the impact of changes to a client (although I neglected to do so on a recent project) is to maintain a burn down chart with "change debt" tracked below the X-axis.  If a change comes up, let the client know that it is a change, express how long you expect the change to take, and come to an agreement that the time involved will be added to the "change debt" bar below the X-axis on the burn down chart.  In this way, you'll typically see the vertical bars above the X-axis slowly shrinking as you come closer to project completion, while the "change debt" bar below the X-axis continues to grow in the opposite direction.  This makes it very clear to the client that they either have to drop a comparable amount of functionality above the X-axis to stay on schedule and budget, or that the schedule and budget must be extended to accommodate the "change debt."  Without such a measurement of change maintained throughout the project, and shared frequently with the client, you'll have little to stand on when it comes time to ask for a schedule and/or budget extension due to gold plating and/or change requests.
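For a concrete (and entirely made-up) picture of the bookkeeping behind such a chart, the data is nothing more than two running tallies per iteration:

    using System;
    using System.Linq;

    // A rough sketch of the data behind a burn down chart with "change debt";
    // the numbers are illustrative, not from a real project.
    class ChangeDebtBurnDown
    {
        static void Main()
        {
            int[] pointsRemaining = { 60, 48, 39, 30, 24 };   // plotted above the X-axis
            int[] changeDebt      = {  0,  3,  8, 13, 18 };   // plotted below the X-axis

            for (int i = 0; i < pointsRemaining.Length; i++)
            {
                Console.WriteLine(
                    "Iteration {0}: {1} points remaining, {2} points of accepted change debt",
                    i + 1, pointsRemaining[i], changeDebt[i]);
            }

            // The resulting conversation with the client: drop ~18 points of lower
            // priority scope, or extend the schedule/budget to absorb the debt.
            Console.WriteLine("Current change debt: {0} points", changeDebt.Last());
        }
    }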

Stubs Kick Ass! (but watch out for assumptions)

On a recent project, I was developing an application that integrated with three other applications.  Only one of the three other applications was available for integration at the beginning of the project.  I ended up writing extensive stubs that I felt adequately stood in for the other two applications so that I could continue developing in parallel with the development of the others.  But on one of the stubs, I ended up making an assumption concerning what the "Id" of a given item would be (e.g., that it would be a user's email address).  It ended up being an integer.  Although the impact of this wasn't extensive, it was an annoying realization to be faced with in the final days of development.  Something that would have removed this incorrect assumption, or at least mitigated the impact, would have been to hold short, weekly meetings with the developer of the application that I would be integrating with to discuss my assumptions of the integration effort and to ensure they were in alignment with his assumptions of how he intended his application to be integrated with.  One action that greatly helped is that we agreed upon a couple of class interfaces that we would share in a common assembly.  We both programmed against these shared interfaces throughout the project life-cycle.  In this way, it was trivially easy for me to use test doubles as a stand-in and then to replace the stub with the actual integration layer via dependency injection configuration.  The only thing that made the effort a bit difficult was a couple of methods that we decided should return "object" or accept "object" parameters.  It was here that we made assumptions about what "object" actually was that were not in alignment with each other.

Putting It Into Practice:  When integrating with another application, especially when the other application is still in development, code to an interface!  Taking this further, if two applications are in development and are planned to be integrated towards the end of the project, agree upon a set of interfaces that each project team will program against to act as an integration contract.  Be wary of any interface method which returns "object" or accepts "object" as a parameter.  Although you may not know what "object" really is until later in the development effort, take the time to meet regularly to mitigate any such assumptions as early as possible.
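Here's a minimal sketch of that integration-contract idea in C#; the IMembershipGateway, StubMembershipGateway, and RegistrationService names are hypothetical, not code from the project described above:

    // The contract lives in an assembly shared by both project teams.  A specific
    // return type (rather than "object") keeps either side from making conflicting
    // assumptions about what comes back, e.g., an email address vs. an integer Id.
    public interface IMembershipGateway
    {
        int GetUserId(string email);
    }

    // The stub my side develops against until the real application is available.
    public class StubMembershipGateway : IMembershipGateway
    {
        public int GetUserId(string email)
        {
            return 42;   // canned value; good enough to keep development moving
        }
    }

    // The consuming code depends only on the interface, so swapping the stub for
    // the real gateway later is just a dependency injection configuration change.
    public class RegistrationService
    {
        private readonly IMembershipGateway _membership;

        public RegistrationService(IMembershipGateway membership)
        {
            _membership = membership;
        }

        public int Register(string email)
        {
            return _membership.GetUserId(email);
        }
    }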

Keep Team Communication Channels Open

On a recent project, having a sizable project team with developers, QAers, business analysts, etc., efforts were taken by management to reduce unnecessary chatter between business analysts and developers.  Management was afraid that these "offline" conversations would result in undocumented changes and/or undocumented requirements elaboration.  These undocumented conversations and decisions would then hamper the testers' ability to know a priori how the system should behave.  This fear was well founded, but restricting the development team's access to the business analyst resulted in many assumptions being made during the coding process.  You can get lucky every once in a while with an assumption, but more frequently requirements are misinterpreted and/or wrongly implemented.  So on one hand, it's very important to keep communication channels very open between the development team and the Business Analyst and/or Product Owner to avoid assumptive development.  But on the other hand, it's important that the remainder of the team is made aware of any changes and/or clarifications that will have an impact on a tester's ability to confirm desired functionality.

Putting It Into Action:  Always keep communication channels open between the development team and the person(s) who can elaborate/confirm requirements...do it subversively if you must to avoid assumptive development!  With that said, ensure that QA and Scrum Masters (or PMs or equivalent) are invited to these conversations to ensure that all team members are aware of important clarifying and/or corrective remarks.  Any changes or clarifications that would impact QA's ability to test should be documented in the appropriate user story.  This doesn't mean that tons of documentation need to be written and maintained, but "enough" needs to be maintained to ensure that all project team members have enough information to do their jobs adequately, without assumption.

Our Job Isn't Easy...

Writing quality software that meets the needs of the end-user is tough - doing this successfully with a pressing schedule, a big team, changing requirements, and a fixed budget makes it far tougher.  While it's easy to find literature on ideal project practices, the real world frequently thwarts even the best of our efforts.  Accordingly, throughout each project, take the time to learn from small successes and failures to keep your project processes evolving.  If I've learned anything over the years, it's that every project is unique and requires a unique set of tools and techniques to be managed successfully.  It's never too late to realign the selected tools and processes with the project's needs, and the sooner corrective actions are taken, the more likely it is that the project will succeed and that all parties involved won't end up killing each other...which is sometimes an appreciable real-world success in and of itself.

Billy McCafferty
http://www.itsamuraischool.com/


Posted 07-14-2009 1:17 PM by Billy McCafferty
