The Iteration Review
The review happens at the end of each Iteration (each team organises its own) and provides the true measure of progress by demonstrating the team’s working, tested increment: one or more new functionalities of software or other components.
Teams demonstrate every Story, spike, refactor and non-functional requirement (NFR) implemented during the Iteration.
Attendees are the team and its stakeholders.
If a major stakeholder cannot attend, the Product Owner should follow up individually.
As always, the Iteration Review is timeboxed; one to two hours should suffice.
Because it is a realistic review of what was done during the iteration, preparation should be kept to a minimum: the goal is not a polished demo. One or two hours is more than enough to prepare it.
The content of an Iteration Review is very similar to that of a single-team review (how we did in the iteration) and follows the same Agile best practices:
Did we meet the goal?
It’s basically a Story-by-Story review
But there is an extra view, specific to the scaled version of the review:
How are we doing in the Program Increment (PI)?
- Review of PI Objectives
- Review of remaining PI scope and reprioritising if necessary
Sample Iteration Review Agenda
This is an example of an agenda comprising all the points above:
- Review business context and Iteration goals
- Demo and solicit feedback for each story, spike, refactor and NFR
- Discuss Stories not completed and why
- Identify risks and impediments
- Revise Team Backlog and team PI Objectives as needed
The best approach is for the team to begin considering how and what to demo during Iteration Planning.
This is the responsibility of the Scrum Master and Product Owner.
During the demo, also make sure that the team has time to celebrate its accomplishments and that stakeholders acknowledge them.
The Team Iteration Review is a prologue to the System Demo, which brings all teams together and which we will look at in detail later.
Common anti-patterns to avoid
- A lot of time is spent preparing for the demo
- Demo is mainly talk and slides rather than working software and hardware
- PO sees things for the first time in the Team Demo
- System Demo is not done because ‘the Team Demo is enough’
- The Team concentrates only on its own part and is not ready for the System Demo, failing to coordinate with the other Teams and the System Team
- Team members are not invited to the Demo to save time
- The Scrum Master does everything and Team Members do not have the opportunity to demo
- The Demos are not interesting or relevant to ART stakeholders
- The right participants are not present
Agile Teams continuously adapt to new circumstances and improve the methods of value delivery.
Too often, organisations assume that the culture, processes and products that led to today’s success will also guarantee future results. Instead, that mindset increases the risk of decline and failure.
In this fast moving world, it’s the adaptive learning organisations – with the ability to learn, innovate and improve more effectively and faster than their competition – that will dominate their markets going forward.
This is achieved by committing to relentless learning and improvement.
Since its inception in the Toyota Production System, kaizen – or the relentless pursuit of perfection – has been one of the core tenets of Lean.
Though perfection is impossible to reach, the act of striving for it drives continuous improvement.
Taiichi Ohno, the creator of Lean, emphasised that the only way to achieve kaizen is for every employee at all times to have a mindset of continuous improvement. The entire enterprise as a system is continuously being challenged to improve.
But improvement requires learning.
It’s never easy to identify the causes and solutions of complex problems.
The Lean model for continuous improvement is based on a series of small iterative and incremental improvements and experiments that enable the organisation to learn its way to the most promising answer to a problem.
Many of SAFe’s principles and practices directly support these efforts; here are some of the ways to promote a learning organisation:
- Use both successes and failures as learning moments to build mastery.
- Iteratively refine a shared vision during each PI Planning period.
- Learn continuously through daily collaboration and problem solving, supported by events such as team retrospectives and Inspect & Adapt.
- Apply Systems Thinking, a cornerstone of Lean-Agile and one of the ten SAFe principles.
- Dedicate regular time and space for learning through the Innovation and Planning (IP) iteration that occurs every Program Increment.
Competing in the digital age requires a culture of creative thinking and curiosity—an environment where norms can be challenged and new products and processes emerge.
The Iteration Retrospective is a critical event for fostering continuous improvement. In SAFe it is much like any Agile Retrospective: timeboxed, attended only by the Agile Team (no managers!), and producing one or two things to do better, or to preserve, in the next Iteration. The improvement items are added to the Team Backlog.
It is the responsibility of the Scrum Master to organise it and to encourage improvement between retrospectives.
Common anti-patterns to avoid
- The only focus is on what to improve and not what to preserve
- Focus on problems that are outside of the team’s control
- Failure to follow through on improvement items, so no results are achieved
- Inviting people outside the team (especially management) to the retrospective
The retrospective is only one of the improvement tools.
For example, a Scrum Master should actively engage with other Scrum Masters to drive improvement on the ART, much like a Community of Practice.
Another good example is actively tracking iteration metrics with the goal of continuously improving them, and adapting the metrics themselves as the Teams get better and can tackle more challenges.
Here are some metrics to start that a Team can easily track in each iteration:
- # Stories (loaded at beginning of Iteration)
- # accepted Stories (defined, built, tested, and accepted)
- % accepted
- # not accepted (not achieved within the Iteration)
- # not accepted: deferred to a later date
- # not accepted: deleted from backlog
- # pushed to next Iteration (rescheduled in next Iteration)
- # added (during Iteration; should typically be 0)
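These story counts are straightforward to compute from the team's tracker data. As a minimal sketch (the record fields `status` and `added` and the function name are illustrative assumptions, not part of any SAFe tooling):

```python
# Minimal sketch: computing the story metrics above from a list of story
# records. Field names ("status", "added") are illustrative assumptions.

def story_metrics(stories):
    loaded = [s for s in stories if not s.get("added")]      # in backlog at Iteration start
    accepted = [s for s in stories if s["status"] == "accepted"]
    not_accepted = [s for s in stories if s["status"] != "accepted"]
    return {
        "loaded": len(loaded),
        "accepted": len(accepted),
        "pct_accepted": round(100 * len(accepted) / len(stories), 1) if stories else 0.0,
        "not_accepted": len(not_accepted),
        "deferred": sum(1 for s in not_accepted if s["status"] == "deferred"),
        "deleted": sum(1 for s in not_accepted if s["status"] == "deleted"),
        "pushed": sum(1 for s in not_accepted if s["status"] == "pushed"),
        "added": sum(1 for s in stories if s.get("added")),  # should typically be 0
    }
```

Recomputing these figures each Iteration, rather than once, is what makes the trend visible to the team.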
Quality and test automation
- % Stories with test available / test automated
- Defect count at start of Iteration
- Defect count at end of Iteration
- # new test cases
- # new test cases automated
- # new manual test cases
- Total automated tests
- Total manual tests
- % tests automated
- Unit test coverage percentage
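The quality figures above reduce to a few simple ratios and deltas. A minimal sketch, with illustrative parameter names (not from any standard tool):

```python
# Minimal sketch: quality and test-automation metrics for one Iteration.
# Parameter names are illustrative assumptions.

def quality_metrics(total_automated, total_manual, defects_start, defects_end):
    total_tests = total_automated + total_manual
    return {
        "total_tests": total_tests,
        "pct_automated": round(100 * total_automated / total_tests, 1) if total_tests else 0.0,
        # Negative delta means the defect count went down during the Iteration.
        "defect_delta": defects_end - defects_start,
    }
```

Watching `pct_automated` rise and `defect_delta` stay at or below zero across Iterations is a simple, concrete signal of relentless improvement.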