The idea behind velocity is to improve estimation.
A team’s velocity is simply how many stories (or points, depending on what you use for estimation) the team can complete in a standard iteration.
Given a team’s known historical velocity in a given domain (i.e. the average, measured after several iterations), it is possible then to predict how long (how many iterations) it will take them to complete an arbitrary amount of work. It is also a valuable calibration that will help them estimate the next round of stories more accurately.
Note: in theory the story size should be taken into account when measuring velocity, but in practice, once you have several iterations and stories behind you, story sizes tend to be fairly evenly distributed and you can simply count the number of features or stories.
Let’s say the team just completed a project that was split into n iterations and its average velocity was 10 features per two-week iteration (in the team’s standard iteration of two weeks, the team was able to complete on average ten features, small or big).
Now the same team is starting a similar project and we want to estimate its duration. The project has a backlog of 230 features, again some small, some big.
Just use this formula:
estimated backlog size / team velocity = 230 / 10 = 23 iterations needed = 46 weeks
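The calculation can be sketched in a few lines of Python (a minimal illustration; the function name and the rounding-up choice are my own, the numbers come from the example above):

```python
import math

def estimate_duration(backlog_size, velocity, weeks_per_iteration=2):
    """Estimate project duration from a team's historical velocity.

    backlog_size: number of features (or story points) in the backlog
    velocity: features completed per iteration, measured historically
    """
    # Round up: a partially filled final iteration still takes a full iteration.
    iterations = math.ceil(backlog_size / velocity)
    return iterations, iterations * weeks_per_iteration

iterations, weeks = estimate_duration(230, 10)
print(iterations, weeks)  # → 23 46
```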
This is a simple and reliable process that works quite well, but keep in mind these caveats:
- It is based on historical data and is predictive only to the extent that the future (new stories) looks similar to the past (stories already completed).
- It is valid only to the extent that the team continues to have the same individuals. If you change the team members, velocity will change (but it should stabilise after a few iterations).
- A team’s velocity cannot be compared to any other team (estimations depend on what the team uses as the smallest story and compares everything to that).
- It isn’t easy for the team to understand, and it’s even harder for the stakeholders, including those who provide project management assistance and financial governance.
- It’s hard to get started. Until teams have done a few iterations, they have no idea how to predict what they can accomplish. That gets even trickier at the program level, where you need to aggregate these estimates to attempt to predict when some larger functionality will be available.
- Getting to schedule and cost estimates is very indirect. You have to work through relative estimates, establish velocity, and so on, and you have to understand the burden cost of each individual team, before you can translate a story point into a cost.
- Teams occasionally struggle to adjust their velocity based on the availability of team members, for example if a team member is only part-time for a sprint or a key resource is not available for a period.
- Team velocities are not normalised. It’s not unusual for one small team to have a given velocity while a team twice its size has half that velocity. That makes for some pretty uncomfortable discussions.
Make it visual.
In many knowledge-work domains there are queues, but because they are invisible (usually just a bunch of digital data) they are not seen as queues or felt as problems. A business person, on the other hand, who has invested ten million euros to create a gigantic pile of partially done stuff will feel the pain and urgency to get it moving.
The idea is to make those queues of wasteful work-in-progress (WIP) immediately visible: untested features, incomplete information, half-written documents and manuals, bugs, …
Whether you use a spreadsheet or a whiteboard to track the work in progress during an iteration, it’s easy to draw a simple chart that gives a visual idea of how the iteration is going. Visual signals will help your team stay focused.
The most famous visual signal is the so-called burn-down chart from Scrum.
On the X axis you have the time: the iteration days.
On the Y axis, the remaining effort measured in hours, or the remaining number of features to complete (the chart above combines both).
Then each day you track how many features or how much effort remains: on day zero all the features are still in front of you, i.e. the complete iteration’s work; the next day, say you completed one feature, the remaining count is the initial count minus one; and so on, plotting a curve that should step down, ideally reaching zero.
That is why it is called a burn-down chart: it shows, as time goes by, how progress towards zero remaining work is going and whether you are on track (the ideal burn-down is a straight line from the initial work at day zero to zero work on the last day).
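As a minimal sketch of the tracking logic, the ideal line and an actual remaining-work curve can be compared day by day (the iteration length and the daily numbers below are invented for illustration):

```python
def ideal_burndown(total_work, days):
    """Ideal burn-down: a straight line from total_work at day 0 to 0 on the last day."""
    return [total_work * (1 - d / days) for d in range(days + 1)]

# Hypothetical iteration: 10 features, 10 working days.
ideal = ideal_burndown(10, 10)

# Actual remaining features counted each day (invented numbers).
actual = [10, 10, 9, 7, 7, 6, 4, 4, 2, 1, 0]

for day, (i, a) in enumerate(zip(ideal, actual)):
    status = "on track" if a <= i else "behind"
    print(f"day {day:2}: ideal {i:4.1f}, actual {a:2} -> {status}")
```

Plotting both series gives exactly the burn-down chart described above: the actual curve hugging (or drifting away from) the ideal line.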
If you prefer to see it the other way around, how something is built up, you can (additionally) use an information radiator I have sometimes used (it works better on projects with a large GUI part, such as web interfaces). You print, on a big sheet of paper, how the interface will look at the end, in the form of a simple wireframe; then every time a feature is completed (for example the rating system) you print its final GUI in full colour and glue it onto the wireframe, like a collage that takes shape feature after feature. It is a strong motivator for a team to see something taking shape!
There are many more metrics you can collect and display visually, depending on your goals, projects, and teams. Here are some examples:
- how many tasks were included in the iteration
- how many tasks have been completed (according to the definition of done)
- how long each task stayed in each state (new / ongoing / developed / tested / done / etc.), and the averages
- elapsed time per task: how many days it took to cross the board (from new to done)
- touch time: how many days the task was actually worked on (net of waiting times)
- process cycle efficiency %: touch time / elapsed time
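The last three metrics can be derived from the per-state durations in the first. A minimal sketch (the state names, the choice of which states count as "worked on", and the numbers are all invented for illustration):

```python
# Days a hypothetical task spent in each board state.
state_days = {"new": 3, "ongoing": 2, "developed": 4, "tested": 1, "done": 0}

# Assumption: these are the states where the task was actively worked on;
# the others ("new", "developed") are waiting queues.
working_states = {"ongoing", "tested"}

elapsed = sum(state_days.values())  # elapsed time: days to cross the board
touch = sum(d for s, d in state_days.items() if s in working_states)
efficiency = touch / elapsed * 100  # process cycle efficiency %

print(f"elapsed: {elapsed} days, touch: {touch} days, efficiency: {efficiency:.0f}%")
```

A low efficiency figure is exactly the kind of invisible queue the previous section talks about: most of the task’s lifetime was spent waiting, not being worked on.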