The iteration backlog is the list of refined items chosen from the Product Backlog for development in the current iteration, together with the team’s plan for accomplishing the work. In essence it is a subset of the product backlog, reflecting the team’s forecast of what can be completed during the iteration.
With the iteration backlog in place, the iteration begins and the Team develops the new Product Increment defined by the iteration backlog.
I prefer to keep the iteration backlog artefact directly on the cards used throughout the iteration, on the whiteboard (real or virtual), but you can use your preferred tool, even an Excel sheet.
What follows is an example of how a backlog can be quickly created with a tool like a spreadsheet.
During the planning meeting I fill in a temporary Excel sheet. It contains a couple of useful macros, and I pre-fill it in advance with information such as the start and end dates, the days available in the iteration net of any holidays, the most important stories (priority!) taken from the product backlog, and so on.
One macro will then create the final tasks in our tracking system.
The temporary sheet. Each row holds the information for a topic (user story/issue/etc.), has multiple sub-rows, one for each of its tasks, and includes the following fields:
ITERATION BACKLOG (example) – July iteration (priority decreasing) – Start: 1. July, End: 28. July

| Story | Task | Owner | Estimation | Definition of done | Notes |
|-------|------|-------|------------|--------------------|-------|
| As a campaign manager I want to display in my dashboard the statistics by time … | Design the UI | Alice | 8 | wireframes approved by PO | based on UX product guidelines |
| | Code the front-end part | Bob | 6 | standard DoD | – |
| | Adapt the user manual | Carole | 3 | accepted by training team | – |
- Story – the story name or ID taken directly from the product backlog, included here as a reference.
- Task – the task title (something meaningful but not too long); if you wish, you can add an extra column for the task description. List under the story all the tasks that are necessary to complete it.
- Owner – a name or initials, something to identify the volunteer who will implement the task.
- Estimation – what was decided in the planning meeting, for example a number of hours. Some teams add extra columns (one for each day of the iteration) and put the estimation in the specific day’s column; I prefer not to fix in advance on which days tasks will be implemented, and not to track progress inside this file.
- Definition of done – this one requires some more explanation below.
One of the macros takes the estimations and subtracts them from the owner’s availability. This way we can see in real time when a team member has already committed to enough tasks, and also when all team members have enough to do for the iteration, at which point planning can stop without adding any more stories.
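The capacity check behind that macro can be sketched in a few lines of Python (the names and hour figures below are illustrative, not taken from any real sheet):

```python
# Hypothetical sketch of the capacity-check macro: subtract each task's
# estimation from its owner's remaining availability (all values illustrative).
availability_hours = {"Alice": 40, "Bob": 40, "Carole": 40}  # net of holidays

tasks = [
    {"task": "Design the UI", "owner": "Alice", "estimation": 8},
    {"task": "Code the front-end part", "owner": "Bob", "estimation": 6},
    {"task": "Adapt the user manual", "owner": "Carole", "estimation": 3},
]

remaining = dict(availability_hours)
for t in tasks:
    remaining[t["owner"]] -= t["estimation"]

# Anyone below zero has committed to too much; planning can stop once
# nobody has meaningful spare capacity left.
overcommitted = [name for name, hours in remaining.items() if hours < 0]
```

After the three tasks above, the remaining availability would be Alice 32, Bob 34, Carole 37, so planning would continue with more stories.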
As mentioned, at the end another macro exports all the lines in the Excel sheet into a tracking system, where every task gets a parent (the story), an iteration, an owner, an estimation and a status (initially “NEW”). From the system it is then easy to move the tasks, visualise a virtual whiteboard and extract the current progress, e.g. as a burn-down chart.
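The export step could look roughly like this (the function and field names are hypothetical; a real macro would call your tracking system’s actual API):

```python
# Hypothetical export of sheet rows into a tracking system: every task gets
# a parent (the story), an iteration, an owner, an estimation and a status.
def export_to_tracker(rows, iteration):
    """Turn each sheet row into a tracker task dict (stand-in for an API call)."""
    return [
        {
            "title": r["task"],
            "parent": r["story"],
            "iteration": iteration,
            "owner": r["owner"],
            "estimation": r["estimation"],
            "status": "NEW",  # initial status, as described above
        }
        for r in rows
    ]

rows = [
    {"story": "Dashboard statistics", "task": "Design the UI",
     "owner": "Alice", "estimation": 8},
    {"story": "Dashboard statistics", "task": "Code the front-end part",
     "owner": "Bob", "estimation": 6},
]
tracker_tasks = export_to_tracker(rows, "July")
```

With the parent link in place, the tracker can group tasks under their story and build the virtual whiteboard and burn-down chart from the same data.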
This is just an example, easy to set up; there are many tools, commercial or not, that can accomplish the same.
Definition of done
When a backlog item is described as “Done”, everyone must understand what “Done” means. Although this varies significantly per team, its members must have a shared understanding of what it means for work to be complete, to ensure transparency. Especially, it is important that the product owner and the team agree on a clear definition of “done”.
Is a backlog item complete when all code is checked in? When your teammate is convinced? Or is it complete only when it has been deployed to the customers, including a user guide? Or a middle way, when it has been deployed at least to a test environment and verified by an integration test team?
Up to your team. My suggestion is to focus on the primary artefact: a working product / feature.
Therefore a good DoD could be “ready to deploy to production”, meaning the item has been implemented, peer-reviewed and verified in a test environment.
The DoD is neither unique nor static.
Not all backlog items can be treated the same: for an item named “Write the installation guide”, the DoD might simply be “accepted by the operations team”.
This is why it’s a good idea to have a “definition of done” field on each individual backlog item; you can always put something like “use standard definition” there.
The product owner (or sometimes a proxy, such as a tester rather than a developer: developers often say something is done when it really isn’t) could use a checklist to support the verdict: is the code checked in? Have the unit tests passed? All the automated test cases? And so on. This is especially useful at the beginning; some tracking systems can support or even enforce it, and in general I am a big fan of checklists.
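Such a checklist is also trivial to automate; here is a minimal sketch (the checklist items are just examples, not a prescribed standard):

```python
# Hypothetical done-checklist: an item is "Done" only when every check passes;
# failed checks are reported so the team knows what is still missing.
checklist = {
    "code checked in": True,
    "unit tests passed": True,
    "all automated test cases passed": False,
}

failed = [name for name, ok in checklist.items() if not ok]
is_done = not failed
```

Here the item is not done, and `failed` tells the team exactly which check is still open.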
The DoD may change over time.
As you progress and become more experienced, you can decide to use a DoD not only for backlog items but for whole iterations: the Product Owner’s decision about what it takes to release. Not only has the demo passed the “how to demo” specification, but all necessary documentation is ready: test protocols, certifications, trainings, whatever applies.
Or you can extend your initial DoD to be stricter: you can add that release notes have been written, or that the code base has not been messed up. Here is one example from Scrum Inc.:
“Done means coded to standards, reviewed, implemented with unit Test-Driven Development (TDD), tested with 100 percent test automation, integrated and documented.”
If you are at a level where you can consistently apply this kind of DoD, congratulations! Excellent team.