The first 90% of a project takes 90% of the time, and the last 10% of the project takes the other 90% of the time.
A key tension for development teams is defining what it means to be “done”. It seems like every time we learn something new, the goalposts move. This is especially true when what is being delivered needs external validation. External validation can take several forms, especially when different parts of a business are at different stages in their agile journey:
- Quality assurance is a separate team.
- Business stakeholders don’t know what they want until they see it; then they want something different.
- Regulatory bodies or auditors must approve the results, for legal or compliance reasons.
- Competition and customer demands are constantly evolving, requiring rapid innovation.
All these challenges create a similar problem: nailing down what it means to be done. Agile frameworks promote a “definition of done” to guide a team’s work in small increments. While each team may have a different definition of what “done” means, they should all have in common:
… a potentially releasable Increment of “Done” product at the end of each Sprint.
Potentially Releasable Increment
Note that the product increment — new functionality or a change to functionality — does not need to be “released” to be “done”. But it needs to be potentially releasable. That means, if given to customers and end users, they should be able to do something new and worthwhile with it. If it’s a patch or bug fix, it should actually fix the problem.
The epitome of agile delivery models is continuous integration (CI) and continuous delivery (CD). Taken together, they mean that work can be pushed to production as soon as it’s completed. I can attest firsthand that for teams that embrace the CI/CD model, the benefits are huge. Customers get what they want faster, and changes can be made much more quickly. But sometimes this is not possible, or even desirable.
Why, then, might there be a delay between “done” and “released”?
- End users may need training before new functionality is dropped on them.
- Additional validation or approval cycles may be needed that are completely outside the development team’s control, especially for legal or compliance reasons.
- The new functionality works just fine, but is no longer relevant.
- The new functionality works just fine, but needs more functionality to be strategically complete.
- Marketing strategy may combine several increments into a single release, to make a bigger splash or a dramatic “reveal”.
- The release process may be too expensive for a continuous delivery model, so releases must be grouped.
While each of these reasons may be strategically valid, they do raise opportunities for significant improvement.
If user training is a bottleneck, can the product be made more intuitive? For example, Apple insists that from the very first launch, an iPhone app should be “fast, fun, and educational”, with quick and easy tutorials.
The one time I tried to write an app for the iOS App Store, it was rejected for this very reason — a brand new user should understand the app and how it works in less than a minute!
Compliance and validation constraint
A distinction between “done” and “released” is critical when external approval processes are required. Especially when the approval timeline is out of the dev team’s control, your feature tracking and task tracking systems must be set up to handle this limbo state.
The ideal resolution is to embed quality assurance into the development team and use a test-first model, but sometimes this is impossible due to business or regulatory constraints.
The business tendency is for the validators to say, “it’s not done until I say it’s done, so you have to wait to call it done.” This mindset creates two problems. Consider a team on a two-week sprint cycle whose work then goes through a one-to-two-month approval process:
- Reports of the team’s velocity (how much is getting to done each sprint) are meaningless, because the “done” work is reported based on the approval team’s cycle, not the development team’s work cycle.
- The development team cannot celebrate the completion of their sprint goal at the end of each sprint. Thus one of the key emotional motivators of “doing agile” is lost.
The fix: keep the feedback loop on productivity (getting to “done”) distinct from the feedback loop on approval for “release”.
When the validation process finds a problem, a defect can be reported as a “bug” for the team to work on, most likely in their current or next sprint. The other situation where “it’s all wrong” is when “you built what I said but that’s not what I meant” or “that’s not what I want now”; in that case, a new task (aka user story) can be created for the development team to schedule in their work.
Release timing constraint
Sometimes release timing is driven by cost constraints or other strategic considerations. In that situation, it’s important to have a system to track what is done all the way through to final release in the product or service being delivered.
The better we get at coordinating activity between development and operations, the more our systems must handle this intermediate state between “done” and “released”. For the technically minded, here are some topics to track down and understand for handling “done but not released” work:
- Feature toggle
- Cherry picking (not my first choice, but may be useful)
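Of the two, feature toggles fit this situation most directly: completed code ships to production but stays dark until the business flips the switch at release time. A minimal sketch (the flag names and plain-dict store are illustrative; real systems use a config service or database so flags can change without a deploy):

```python
# Minimal feature-toggle sketch: "done" code is deployed but hidden
# behind a flag until the release decision is made.

FLAGS = {
    "new_checkout_flow": False,  # done, awaiting compliance sign-off
    "bulk_export": True,         # approved and released
}

def is_enabled(flag: str) -> bool:
    """Return whether a feature is released; unknown flags stay off."""
    return FLAGS.get(flag, False)

def checkout(cart):
    # The "done" increment is in production, but users only see it
    # once the toggle is flipped.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

def new_checkout(cart):
    return f"new:{len(cart)}"

def legacy_checkout(cart):
    return f"legacy:{len(cart)}"
```

Flipping `FLAGS["new_checkout_flow"]` to `True` is then the “release” event, fully decoupled from the deploy that delivered the code.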
Final warning: Don’t be a Scrum But
It’s far too easy to use challenges like this to compromise on agile principles and proven DevOps practices. Hats off to Bill Rinko-Gay of the Scrum Alliance for challenging us to consider whether we’re making the tough choices we need to make to advance the cause of agility and grow toward more healthy, productive teams: https://www.scrumalliance.org/community/articles/2013/february/you-may-be-a-scrum-but
Embrace the journey. It may take many iterations to implement these principles. Many agile processes and principles are designed precisely because we do not live in a perfect world. When we set our priorities and make one improvement at a time, eventually those changes add up like a snowball going downhill.