Monday, July 30, 2012

The hidden cost of planning, precision, and predictability


There appears to be a perception that it would be irresponsible for us to do any work without a very clear idea of what we are going to deliver, how we are going to deliver it, and how much it will cost. I'm not at odds with this idea, but it crowds out all other notions of responsibility. Consider:
  • Is it responsible to make no decisions until all the facts are known even if, by delaying, it is too late to act?
  • Is it responsible to avoid proposing potentially risky courses of action which may yield high returns, because we're unsure of the effort/cost involved?
  • Is it responsible to invest time and energy defining solutions to problems rather than delivering solutions to problems?
Most sane people would agree that all of the above show some level of irresponsible behaviour, but all too often exactly this behaviour is hidden behind the notion that having a clear and precise plan is the only responsible course of action. Now, a clear and precise plan is a great thing to have, especially if you can come up with it at almost no cost, but as the military says, 'No plan survives first contact with the enemy', and the corollary, 'if your attack is going to plan, it's an ambush'…

So what is the responsible approach? I’d suggest it’s a compromise, i.e. that the responsible thing is to
  • Balance risks with rewards. If I know something will cost between $100 and $200, I don't need to go into any more detail if the return is $300. I should switch to delivery mode immediately and get that benefit ASAP rather than spend time and effort working out that it should cost $173.24. On the other hand, if the return is $150 I might want to do more investigation (a sketch below makes this arithmetic concrete).
  • Balance planning with action. Have a rough plan for the long term and a detailed plan only for the immediate short term, possibly with other levels between. Spend most of our time delivering value against the short term plan, and revise the long term plan less frequently.
  • Balance precision with effort. Being precise is admirable, but if the effort required to be precise is too high then the benefits of precision are undermined. E.g. if it costs me $20 to determine an investment will be between $100 and $200 (for a total cost of between $120 and $220), and $50 to know the cost is exactly $173.24 (for a total cost of $223.24), I've realised no benefit from the extra effort involved in gaining that precision.
  • Balance predictability with adaptability. Having a predictable outcome is admirable, but if it means missing out on opportunities to change course and deliver a better-value outcome, the advantage is undermined. Knowing that I can spend $150 to achieve a return of $300 is great, but if it means missing an opportunity to increase that return to $400 for the same cost, I have failed to maximise the return on my effort.
  • Balance budgets with benefits. Rather than try to define a deliverable with a cost, define the benefit associated with an outcome and set a budget accordingly. If the budget appears unachievable reconsider the approach, but if it does appear achievable start delivering and continue to monitor the budget and benefits until the crossover point is reached – i.e. when the incremental benefit of more work is not worth the cost that will be incurred.
That is to say that whilst there's benefit to planning, precision, and predictability, they're useless unless coupled with taking action, delivering benefits, and being adaptable. Of course, whilst you're stuck in an environment where an investment has to be proposed and approved through a lengthy bureaucratic process this balanced approach is difficult to achieve, but unless the status quo is challenged, effort will continue to be wasted and opportunities missed.
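As a concrete (if toy) illustration of the 'balance risks with rewards' and 'balance precision with effort' bullets, here is a minimal decision rule in code. The function worth_refining and all the figures are hypothetical, just restating the dollar examples above:

```python
# A toy decision rule using the dollar figures from the bullets above.
# worth_refining and its thresholds are hypothetical, not a real estimating tool.

def worth_refining(cost_low, cost_high, expected_return, refine_cost):
    """Is it worth paying refine_cost to replace a rough estimate
    in [cost_low, cost_high] with a precise one?"""
    # Profitable even at the pessimistic estimate (plus the cost of
    # refining): act now, extra precision buys nothing.
    if expected_return > cost_high + refine_cost:
        return False
    # Unprofitable even at the optimistic estimate: walk away.
    if expected_return < cost_low:
        return False
    # Only here does the decision hinge on the unknown cost, so a
    # sharper estimate could actually change what we do.
    return True

# $100-$200 estimate, $300 return: deliver immediately, don't refine.
print(worth_refining(100, 200, expected_return=300, refine_cost=50))  # False
# Same estimate, $150 return: the answer is unclear, so investigate.
print(worth_refining(100, 200, expected_return=150, refine_cost=50))  # True
```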

Wednesday, May 30, 2012

Misunderstandings of agile


A list of common misunderstandings of Agile development:
  • Incremental vs iterative development. There's a big difference between incrementally developing an app (which it appears many people are doing when I ask them about their agile process) and iteratively developing one. Incremental development normally involves building one piece of the app at a time and stitching the pieces together as they're built. I'm not saying this is a bad idea, it's just not 'at the core' of agile. Iterative development, on the other hand, involves putting the 'bare minimum' end-to-end functionality in place and then revisiting that same functionality over and over again, adding features (and business value) to the application on each pass (or iteration). Of the two methods, incremental development can result in 'gold plating' of the first pieces developed at the expense of the quality of the later ones. Iterative development quickly allows the business to view the end-to-end functionality and identify which parts of the system need further work, focusing development on the best value. Iterative development relies far more on automated unit testing – since the code is reworked many times we need those tests to ensure we're not introducing regressions. Because of the nature of iterative development the resulting code tends to be simpler, standardised, resilient to change, well factored, and well understood – in ideal shape to be supported in the long term.
  • There's no way to manage scope (aka 'How can we pin the client down on what they want without a requirements document?'). In agile projects the scope is not fixed, but the project constraints (time, cost) are. Waterfallers tend to hear the first part of that statement but not the second. Yes, the business can add requirements late in a project, but they may need to drop some other as-yet undeveloped requirement of a similar cost to develop. Scope is not constrained but the overall budget is. If there is benefit in increasing the budget to deliver more valuable requirements then do so – as you would for a CR in a waterfall project.
  • The project never ends. Let's first look at the definition of 'ends' – an agile project should end as soon as the business value of any new functionality delivered in the next sprint/iteration is less than the cost of delivering that sprint/iteration (a rough sketch in code appears at the end of this post). With this definition some projects would never start, since the first one or two sprints deliver little value on their own, so we generally apply this rule only once we have some functionality working. Does this mean a project could never end? Well, yes – but what's wrong with that if it's delivering more and more value?
  •  If the developers can choose their tasks, they’ll only choose the ‘fun’ ones. To some extent this is true, but remember that the tasks are created each sprint and need to be completed within the sprint. Developers could pick off the ‘fun’ ones to start with, but very quickly they’d find they have to pick up the rest. Also, because of the natural and usually obvious dependencies between tasks there would be a certain amount of team pressure to pick up the tasks that are urgent rather than letting individuals decide for themselves.
  • Agile has too many unproductive practices – TDD, refactoring. Personally, I'm very dubious of someone saying they practice agile development but don't have automated builds, tests, and high code coverage. If you're doing projects without these processes I suspect you're doing incremental rather than iterative development to avoid the need to refactor 'delivered' functionality. If you avoid refactoring then there's less need for automated tests to confirm that nothing has broken after a refactor. But in order to avoid refactoring you either have to compromise your design (creating add-ons rather than spotting opportunities for reuse) or you are not getting the benefits of agile by iterating on the same pieces of functionality. Saying automated testing is unnecessary is like saying scaffolding is unnecessary for painting the exterior of a building. For small buildings you can probably get away without it, but for anything larger the scaffolding will make you more productive in the long run and will probably result in a better quality finish than using a ladder.
  • Agile works better for smaller projects. I'm actually surprised by the number of people who have stated this opinion: 'It's fine for <that small project you're working on> but you wouldn't use it to build <the air traffic control system / MRI scanning system I'm working on>'. Actually, I'll bet those complex systems would be perfect candidates for agile development. Though they may require more thorough testing before being put into production than a run-of-the-mill business system, they have too much complexity to design solutions for at the start of the project. Agile's sweet spot is projects with complex requirements that can't be worked out up front. If you already know the requirements up front you might as well just write a spec and use a waterfall approach.
  • Refactoring should be unnecessary with some more design up front. This is missing the point. The assumption is that not all of the future requirements can be known up front (otherwise you should just write a spec and use waterfall). If you accept that currently unknown requirements will materialise during the project, affecting unknown parts of the system, then you can accept that it is impossible to design the system up front, and that doing so is wasted time since the designs will necessarily have to change later. Refactoring should be seen as a positive activity rather than a 'time waster'. Revisiting and working on the same code should make it simpler, promote reuse, and ensure more eyes look over the code – in short, make it easier to maintain and enhance in the long term.
  •  It’s too unpredictable. Done poorly, agile is very unpredictable. Agile processes require discipline by the team to frequently estimate at the micro (task) level and the macro (release) level. Estimation at the task level helps to track sprint progress. Velocity measurements help to predict the release schedule, but only if the definition of ‘done’ is clear and rigorous. However, agile projects can be far more predictable than waterfall projects when managed correctly. The main reason for this is that agile projects track the progress of the product itself (we know at the end of each sprint how complete the system is), whereas waterfall projects track activities against a schedule, which is sometimes a very poor representation of real progress (we may have completed 80% of the work, but the product is in some unknown state of completeness – maybe 50%, maybe 90% - we won’t really know until we’ve completed testing). 
  • It's an excuse to avoid documentation. Some people certainly behave as if this is their belief, and it is generally true that less documentation is produced on an agile project. On a waterfall project documentation is seen as equivalent to process – therefore if you are lacking documentation you are obviously lacking process. On agile projects the documents are more numerous, much smaller, and created over the duration of the project. In agile, requirements are written down and acceptance criteria are defined – what more is necessary? You might be inclined to produce some screen mock-ups – but that would require knowing all the functionality that will be displayed on that screen up front rather than iterating towards the design. You might want to document test results – but why bother? If they pass, all is well; if they fail, you raise it with the developer and/or add another product backlog item depending on when it was found. End-user and support documentation is generally valuable, but it should be created as part of the deliverables of the project, and it tends to be overlooked on both agile and waterfall projects. How many requirements documents from waterfall projects are valuable after the software is launched?
I tried to list 10 misunderstandings but only reached 9. Maybe you can suggest the tenth one yourself?
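For the 'project never ends' misunderstanding above, here is a minimal sketch of that stopping rule in code. The function name and projected values are hypothetical:

```python
# A sketch of the stopping rule: fund the next sprint only while its
# projected business value exceeds its cost. Figures are invented.

def sprints_worth_funding(projected_values, sprint_cost):
    """projected_values: estimated business value of each future sprint."""
    funded = []
    for value in projected_values:
        if value < sprint_cost:
            break  # crossover point reached: further sprints destroy value
        funded.append(value)
    return funded

# Early sprints deliver little on their own, so apply the rule only once
# end-to-end functionality exists - here, from the third sprint onwards.
projected = [40, 60, 90, 70, 45, 25, 10]
print(sprints_worth_funding(projected[2:], sprint_cost=30))  # [90, 70, 45]
```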

Tuesday, April 24, 2012

Responsive Support


We have a couple of old apps with a development cycle like this: a business person calls the developer and describes a problem. The developer codes and tests a solution against the production database. The developer applies the changes to the production system. The developer checks in the code changes. How many problems did you spot? No unit tests, no CI, no safeguards against corrupted data, no rollback plan – yes, yes, yes. But the biggest problem of all? The business thinks this is great!

The business perceives that they:
  1. Can request a change without requiring any analysis of its impact or describing it in any great detail
  2. Don't have to worry about justifying the change from a business value point of view
  3. Don't have to test in those annoying staging environments which are too hard to set up and maintain
  4. Don't have to worry about asking for the wrong thing - because they can just ask for something else straightaway
  5. Don't have to wait too long for code changes to be deployed
  6. Still have the right to complain if they find a critical bug
Essentially we've absolved them of the responsibility to carefully consider the change being requested, justify the change, and thoroughly test the change before it's deployed, whilst giving them a fast turnaround.

What's more, the developer also thinks this is great! They get to be a customer-focused team player with the respect of the business – these developers are often regarded as 'the best' by the business. Not to mention a certain amount of perceived job security, because 'this system would fall over without me to maintain it'.

Are we servicing the business well with this arrangement? To some extent this depends on the level of developer doing the work, but in most cases a developer who allows this to happen is a C-level developer who thinks they're an A or a B, and therefore that 'the normal rules of development do not apply to me'. This in turn means the application code quality diminishes over time, and as more changes are made, more changes become necessary to fix regression bugs. Eventually this 'fast turnaround', 'responsive' mode of operation builds up the technical debt of the application; it becomes legacy, harder and slower to maintain, and eventually needs to be replaced.

How do we turn this situation around? With difficulty! By the time you've got to this situation (or inherited it, like me) it may be too late to save the code. If it's not too late, it requires educating the business to understand the long-term damage to the health of the application, and it requires education or (more usually) replacement of the developer supporting the application. It also requires someone with analysis skills to understand the business domain in which the application sits and the role of the application in the business process, and to translate the business change requests into development requirements / test cases. All of this is perceived as a costly overhead – especially when 'we just call Bob and he fixes it now, but you want us to fill in forms and do testing...'. If it IS too late to save the code it may even be better to let the current sub-standard process continue and expend energy on the replacement system.

Tuesday, April 17, 2012

User stories are not a breakdown of a requirements document


I’ve been interviewing a few business analysts recently (yes we still need them despite having nominal business product owners). Many of them claim some agile experience in their past and manage to mention the words ‘incremental development’ but very few appear to think in terms of ‘iterative development’.

A brief word on incremental vs iterative here: iterative development involves going back to already-written functionality, reviewing it, and deciding what, if anything, needs to be done to improve it. Incremental development is developing a large system in pieces – not the same thing. Compare...

Incrementally building piece by piece with a clear upfront design

Iteratively improving on an initial vague design
 
(And just calling sprints 'iterations' does not make your process iterative!!!!)

I know that back when I was a BA in a waterfall environment I tried very hard to think of everything during the requirements-gathering phase so I could include it in the requirements specification, as a good BA naturally should. But in transitioning to an agile environment this often results in BAs simply 'compartmentalising' a traditional requirements document into smaller chunks and calling them user stories.

So what’s the problem with this?
  1. It removes the ability for the business to prioritise the functionality. Imagine a text search function – a BA may specify the multitude of different ways the search should return results given various common search operators (and / or / + / etc) in one user story. This negates the option to have a story for just a basic text search and a lower priority one for more advanced search options (the business may decide the benefits of the advanced search are not as great as another feature). 
  2. It increases the chances of gold plating. Good BAs can always think of improvements that _could_ be made to software and they tend to put all that knowledge in one story. But the _best_ BAs will recognise which improvement has the biggest business benefit, and will separate the stories based on potential business priority, rather than a blanket statement like 'The text search should be like Google'.
  3. It slows perceived progress. Instead of spending 1 day getting a basic search working and demonstrable to the end users, the developers have to spend a week getting the indexing and search engine customised, which in turn prevents them from delivering other features. The value of agile is the fast feedback mechanism it provides; the more we do to stay in this zone the better.
  4. It makes you blind to opportunities. If you've already thought of the 'solution' at the start of the project, you're not going to look for alternative better solutions later on. That's why vagueness is sometimes a good thing - it encourages delaying decisions until the last responsible moment - the moment at which you have the most information available to make the best decision. E.g. 'with the feedback from users of the basic search we think we should add a date filter rather than improving the text search options'.
I encourage BAs and product owners, in the early sprints of a product, to write stories that will probably never be released into production. By this I don't mean the stories should lack precision or functional coherence, but that while they may be basic enough to show how something could be delivered, and can form a basis for further discussion on the improvements required, they will probably need more 'feature' added before release – or the business may decide the basic version is adequate, of course!

Saturday, April 07, 2012

[off topic] Carcassonne on the iPhone/Pad

Downloaded Carcassonne a few weeks ago on the iPad – the 2001 game of the year, played with tiles and wooden 'followers'. Like all the best games, deceptively simple yet very rich in game-play strategies. In fact, I hadn't realised just how rich it was until playing the built-in AI players. Initially I lost frequently to the 'Easy' players, but I eventually learned the strategies to overcome them and managed to raise my level to at least on par with the strong players, without really understanding the subtle changes I'd made to my game play.


Haven't tried my new found powers on any humans yet, though I'm pretty sure I'm a better player now than a month ago.

The app also lets you play against human opponents or a mix of AI and human, and has a very nice 'solo' game too with completely different rules which is addictive. Highly recommended all round.

Tuesday, April 03, 2012

Scrum For Team System Product Cumulative Flow

If you've used the Scrum For Team System (SfTS) templates for TFS then you may have come across the Product Cumulative Flow Diagram, which in my view is the most useful chart for a scrum project. The standard scrum process has only 'Not Done', 'In Progress', and 'Done' states. We extended our states to include 'Ready' – the state of a PBI which is ready for development but not yet in progress. We also split out 'In Progress' to indicate the 'stage' of the work – dev, review, test – but this probably isn't always necessary.

With these additional states the product cumulative flow diagram radiates a lot of information. It shows the entire product backlog in terms of story point estimates and the state of the backlog over time. A typical (idealised) cumulative flow diagram is shown below for a completed project. This diagram gives an indication of velocity, product scope changes, effectiveness of the grooming process, and the size of the work in progress, all in one picture.


The project should start with a large backlog of 'Not Done' PBIs, with a small set of these 'Ready' for development. During each sprint some of these Ready PBIs will be assigned to the sprint and worked on, setting them to 'In Dev'. Ideally, within the sprint these will move to 'In Test', be tested, and be marked as 'Done'. Sometimes work in progress will be carried over to the following sprint. At the end of each sprint some new PBIs will be added to the backlog during the sprint review. Within each sprint some time should be dedicated to ensuring there are enough 'Ready' PBIs for the following sprints to work on; this may require breaking down large PBIs and re-estimating the pieces. SfTS in TFS can generate the cumulative flow diagram automatically, and it can be used to indicate potential problems with the project (a sketch of the underlying calculation follows the list below). For example:
  1. PBIs not being created or estimated early enough in the release – the curve should rise steeply at the beginning and then level off with one or two stories being added at sprint review, and fluctuations in the size of the project when large PBIs are broken down into more detailed ones. Towards the end of the release the height may even reduce if PBIs are de-scoped from the release 
  2. If testing is delayed (not performed within the sprint) then the ‘In Test’ work will accumulate – if this happens expect a late surge of PBIs / Bugs when the testing does start 
  3. The diagram should display obvious cycles – In Progress / In Test work should approach zero at sprint boundaries 
  4. At minimum the rate of items moving from ‘Not Done’ to ‘In progress’ should be matched by the rate of items being moved from ‘In Progress’ to ‘Done’ – testing should keep up with development. 
  5. If too little grooming is being done the Ready PBIs will drop close to the In Progress line 
  6. If too much grooming is being done the Ready PBIs will rise up towards the Not Done line – ideally there should be between 2 and 3 sprints worth of Ready PBIs at any one time.
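For the curious, here is a rough sketch of how the chart's underlying data could be computed from backlog history. SfTS derives this from TFS automatically; the data structures below are hypothetical stand-ins, purely for illustration:

```python
# Derive cumulative flow rows (story points per state, per sprint) from
# backlog snapshots. All structures here are illustrative stand-ins.
from collections import Counter

STATES = ["Done", "In Test", "In Dev", "Ready", "Not Done"]  # stacked bottom-up

def cumulative_flow(snapshots):
    """snapshots: list of (sprint_number, {pbi_id: (state, story_points)}).

    Returns one row per sprint totalling story points in each state;
    stacking the rows in STATES order gives the cumulative flow chart.
    """
    rows = []
    for sprint, backlog in snapshots:
        totals = Counter()
        for state, points in backlog.values():
            totals[state] += points
        rows.append((sprint, {s: totals[s] for s in STATES}))
    return rows

history = [
    (1, {"a": ("Ready", 5), "b": ("Not Done", 8), "c": ("Not Done", 13)}),
    (2, {"a": ("Done", 5), "b": ("In Dev", 8), "c": ("Ready", 13)}),
]
for sprint, row in cumulative_flow(history):
    print(sprint, row)
```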
Unfortunately, this chart is not available with the Visual Studio Scrum and MS Agile templates for TFS, though of course you could write your own.

Friday, March 30, 2012

Good software is never complete


We often think in terms of projects to deliver software solutions with a defined scope and deadline. But this often leads to the software reaching maturity early, and in the long term it shortens the software's life span. If software has been architected well, is in a modern language/platform with good developers freely available, has very little technical debt, and is well understood by business users and developers, then it is likely that improvements can be made which bring more value to the business than the incremental cost of the development.

Instead of handing the software to support, mothballing the source code, and waiting for the business to raise a request for a change, we should be more proactive. Meet regularly with the business owners of the software, and check that it's still being used in the way originally intended. If any parts have fallen into disuse, find out why – too complicated, business needs changed, found a bug but didn't report it, adopted offline workarounds – and discuss ways of addressing these issues effectively. Understand how the business is changing and what impact this will have on the software.

This is a similar issue to the product vs project centric nature of agile vs waterfall. Traditional software development has been dominated by a tendency to treat a problem as a project – plan the approach, budget it, resource it, execute it, then stop. However, the solution to one problem often reveals a different business problem to solve.

The project centric approach also drives us towards separating PMO and Support functions with often a very abrupt handover, and ultimately a painful transition for both customers and IT staff. Support's remit is to keep the software running, but customers see problems with it and want enhancements. Eventually the frustration builds up enough to kick off another project to cover the gaps between what the product delivers and the current business needs. By the time the project is developed, delivered, and handed off to support there are new gaps, and the cycle continues.

Agile practices like Scrum and XP are as appropriate to a support mode of operation as they are to a project, and can be used in a continuum to remove the transition step. Of course, in 'support' mode the number of resources can be reduced and sprint lengths can be increased (if there is less need to be as 'agile'), but there should continue to be a product owner with responsibility for managing the ROI on any investment and for maintaining a backlog of proposed changes. At the beginning of each budgeting cycle the product owner can apply for and justify the money allocated to their product for the next budget period – if this is a large amount with significant change, the next period will feel like an agile project; if it is a small amount with minimal change, it will feel more like support.

Wednesday, March 14, 2012

Product vs Project Health

Imagine a doctor treating a patient. She listens to what the patient says, probing for more information, then makes some direct observations – asking if it hurts when she presses here, listening to breathing, etc. – before making some treatment recommendations. Imagine another doctor who never actually meets his patient, but just sends them for blood tests, looks over the results, and assumes everything is fine. Which doctor would be more effective at maintaining healthy patients?

Well run agile projects regularly take time to inspect the actual product being produced and judge the health of that product directly. Waterfall projects tend to look at secondary artefacts to give clues (or reassurance) about the health of the product – normally these are adherence to milestones in a plan and to budget constraints. In fact waterfall focuses very much on the project rather than on the product and the terminology reflects this – project plans and project managers vs product backlogs and product owners.

Of course, good project managers on waterfall projects instinctively know how their product is looking and will take corrective action if required, though there is nothing in the methodology to force them to. However, many 'green' PMs are genuinely astonished at the end of the project to find the product doesn't stack up with users: 'But everything has gone to plan and we are under budget – I don't understand what went wrong'.

In summary, agile measures the health of the Product, waterfall measures the health of the Project – and these are different things.

Wednesday, February 08, 2012

Choosing a sprint length


In interview discussions with candidates I've heard about many different flavours of Scrum. Most teams seem to settle on 2-week sprint durations, though I've heard of anything up to 10-week sprints (which probably stretches the definition a bit). So how do you choose an ideal sprint length for your project? Firstly, I'd say going beyond 3 weeks probably means you're not going to reap the benefits of the iterative nature of agile – beyond three weeks the danger of gold plating and reverting to incremental development is strong.

So what factors should you consider when choosing a sprint length?
  1. If the requirements are expected to evolve rapidly from sprint to sprint once the business has had a chance to review progress then choose a shorter sprint - i.e. be more agile
  2. If you think it will be a struggle to groom requirements to have them ready for a sprint choose a shorter sprint duration so you only need one or two weeks of work ready at any one time
  3. If the team is inexperienced with scrum, choose a shorter length to gain more frequent practice in the process
  4. Larger projects may benefit from a longer sprint length, especially if the majority of requirements are well known
  5. If you think getting commitment from the product owner is going to be a problem then the choice of sprint length won't fix it, but I'd suggest choosing a shorter length is probably wise. That way they have shorter but more regular meetings to attend, and you can re-engage with them on a more regular basis.
Ideally the sprint duration should not change during the project – e.g. going from 2-week sprints for the first few then changing to 3 weeks. Changing the sprint length distorts some of the metrics, like sprint velocity and product burndown charts, which then require more care to interpret.
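If the sprint length does have to change, one small mitigation (my suggestion, not a standard Scrum practice) is to chart velocity per week rather than per sprint so the numbers stay comparable:

```python
# Normalise velocity per week so sprints of different lengths compare fairly.
# A hypothetical helper with invented figures.

def weekly_velocity(points_done, sprint_weeks):
    return points_done / sprint_weeks

# 30 points in a 2-week sprint is the same pace as 45 in a 3-week sprint.
print(weekly_velocity(30, 2), weekly_velocity(45, 3))  # 15.0 15.0
```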

Also up for debate is when to start and stop sprints. We've migrated to having sprint boundaries mid-week, so that lessons learned from reviews and retrospectives are at the forefront of our minds during sprint planning held the same day or the next morning, rather than after a weekend has passed.


Monday, February 06, 2012

Code coverage considered harmful

'Code coverage considered harmful'. I interviewed a developer not that long ago and he said this to me (in rather more words). If I could remember his name I'd give him the credit. For at least 7 years now we've had a continuous integration process in place for all our projects, with automated unit tests and code coverage measurements. Since DB changes have always been part of our CI process, and since our unit tests had the ability to roll back DB transactions (I know, that makes them integration tests, right?), we've always aimed for reasonably high code coverage (over 85%), since there should be no excuse for not achieving it. And since there was an open source tool to measure it (NCover), we started to measure – what harm could it possibly do?

Well, over the years this has become a problem for a number of reasons:
  1. Since the developers know this is being measured and have easy access to the results, they often just find uncovered code and write tests to cover that code with meaningless asserts – e.g. asserting that a class has 8 properties. Or, potentially worse, with no asserts at all (see the sketch below)!
  2. Developers have started covering code which is not actually ever used by the software. It’s some lava flow code that’s been superseded by some new refactoring but not cleaned up. Instead of checking whether or not the code is required a test is created to bring up coverage.
  3. Developers have not bothered using TDD (or BDD) practices – since the tooling can tell them _after_ they’ve written their code which tests are ‘missing’ they can just write tests to cover the code after the fact.
  4. Which also means they are just coding to a design in their head rather than to a business requirement expressed as a TDD test+assert (or a BDD behaviour+should).
  5. Writing tests after the code also means they rarely fail, since the developer assumes the code is correct; if a test fails they assume the test is wrong rather than the code. They also start using automated assert-generation tools – which is pretty scary when you think about it – yes, I've just confirmed that my code does exactly what it does… duh
  6. Boundary conditions are ignored. It doesn't matter that a range condition exists – one test can 'cover' it, even though the min-1, min, max, and max+1 values should ideally be tested.
There is no business reason why a class should contain 8 properties; there is no business reason for a class to exist at all, for that matter. There is no business reason to test code which can never run in production, and no business reason to test code just so that coverage is higher. There's no business reason to generate tests just to satisfy a metric.
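To make the contrast concrete, here is a small sketch of a 'coverage-only' test next to a proper boundary-value test. It's in Python for brevity (our context was .NET/NCover), and the discount rule is invented:

```python
# Two kinds of test for the same invented business rule.
import unittest

def discount(order_total):
    """10% discount for orders from $100 up to $1000, otherwise none."""
    return 0.10 if 100 <= order_total <= 1000 else 0.0

class MeaninglessCoverage(unittest.TestCase):
    def test_discount_executes(self):
        discount(500)  # executes the line with no assert: coverage rises,
                       # confidence doesn't

class BoundaryConditions(unittest.TestCase):
    def test_range_boundaries(self):
        # min-1, min, max, max+1 - the values the post says get skipped
        self.assertEqual(discount(99), 0.0)
        self.assertEqual(discount(100), 0.10)
        self.assertEqual(discount(1000), 0.10)
        self.assertEqual(discount(1001), 0.0)

if __name__ == "__main__":
    unittest.main()
```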

What's the solution? Probably we should stop measuring coverage, but that alone will not fix the issues above, and it might be throwing the baby out with the bathwater. Would it be better to have only 50% coverage with good, meaningful tests? After all, a big chunk of any code is simple and may not benefit from test automation.

The real solution is to start doing TDD or BDD as it's supposed to be done, and to review the tests that are being written – there is still no good substitute for code inspections. At minimum, extract all the test case names and put them in front of the business person – if they can understand them, you're on the right track. If they ask 'what's this stuff about making sure we have a non-empty collection of strings?', you're probably not.

Friday, January 06, 2012

Software life spans


All software has a life span – same as humans. In general a piece of software's life is much shorter – how much software is actually in use 10 years after it was written? – but it follows a similar pattern. In its childhood years it is nurtured and cared for, grows relatively quickly, and is the proud achievement of its creators. Then for a while it stands on its own, though still receiving regular attention, getting minor improvements, and generally being kept healthy. Slowly, though, it falls into a state of disrepair; it may have lost some of its former gloss, and parts may have fallen into disuse. Finally it is decommissioned, replaced with a newer solution.

My belief is that once software gets past middle age there is no way back. It has become legacy code and the chances of restoring it to a healthy state are minimal. Unfortunately, some code transitions to legacy very quickly (often before it has even made it to production). Code transitions to this legacy state for a number of reasons:
  1. The software is written in an obsolete language / framework which very few good developers would want to work in (good developers choose the new technologies), resulting in C-level developers maintaining the system.
  2. The language / framework is fine, but the architecture is brittle. Few good developers want to polish turds, resulting in C-level developers maintaining the system.
  3. The architecture and language are fine but there are no automated builds, deployments, tests, comments, or documents explaining what it does. Good developers will work on this software, but prefer to work on new software, resulting in C-level developers maintaining the system.
  4. The architecture, language, and automation are fine, but the application is large and complex, making it difficult to extend without a lot of investigation. Good developers will work on this software, but prefer to work on new software, resulting in C-level developers maintaining the system.
  5. There are no good developers in your team because you have no new project work, resulting in C-level developers maintaining the system. These developers have worked on the software long enough to leave it in worse shape than when they started. The slippery slope…
  6. The software was written entirely by C-level developers in the first place. It is a tangled ball of mud and is legacy before it has even gone live.
  7. The software is considered 'complete' by the business rather than being reviewed regularly for improvements. Eventually the business finds that the system cannot be modified efficiently and accurately enough to support their changing processes.
We (development teams) need to address these issues. Let’s look at them one by one:
  1. Ensuring modern languages and frameworks are used is relatively easy, though we need to invest more time in planning upgrade paths for software – the progression of .Net Frameworks has been a fairly straightforward migration (except for 1.0 to 1.1 in the very early days). Almost all modern languages share a common DNA, and many frameworks are available in different flavours.
  2. A clean architecture and understandable design is basic coding practice, but all too often solutions are plain hacky. Using NDepend and similar code design inspection tools can help assess existing architectures, but really all that is required is some smart thinking and regular code reviews for compliance. 
  3. All modern apps should be built using TDD / BDD, CI, with high code coverage and appropriate levels of commenting and documentation. Integrated wikis for both users and tech staff should be mandatory. 
  4. Modern apps are complex, but we often make them more so by coupling the internals of the app needlessly. Designing apps as composites of services can decouple the component parts leaving each one more understandable to the developers charged with maintaining them, and more easily modified individually. 
  5.  Hire better developers – or at least only use C-level developers on project work with good developers. 
  6. See above
  7. Create a culture of continuous improvement – state in every meeting you have with the business that 'good software is never complete' – there will always be a way to effectively and efficiently extend it to bring more benefits to the business. This is not a negative statement ('you'll never see the end of this project'), it's a positive one ('this software will be a long-term partner for the business and will grow and evolve in ways we can't even imagine right now').
Legacy code is a curse for an organisation. It demotivates A level developers making them more likely to move on, and provides a safe haven for C level developers making them more likely to stay. The slow downward progression of skills in the dev team makes it more likely that more code will become legacy which exacerbates the issue.