AIRWave Guideline

Adapt to Inspect Ratio (AIR) – guideline for Continuous Improvement

Inspection and adaptation are fundamental to our personal and professional lives. If we are not inspecting, we will never know whether we need to improve in the first place. When we do inspect, if we do not act on the improvements it exposes, why bother inspecting at all? It gets tricky when we don’t have a guideline to measure how much we are improving, and whether we are improving enough to stay ahead in the game. These are real concerns in any industry, not just software development.

Let’s take the Retrospective event as the example inspection time box. It is a mandatory event Scrum proposes the team hold at the end of each sprint. It is, in my opinion, the most important of the four. Yet in some teams it is still considered a waste of time. Not because it is boring, but because it gives an opportunity to bitch about (or applaud) something we may or may not feel comfortable with; it can raise questions about us and our contribution to the team, or expose vulnerable areas we might have been hiding for years… *evil laugh*.

The generalised format is this: the team members explore the topics they want to discuss, debate, establish a shared understanding, and document a list of things to improve or impediments to remove which need to stay on the radar. Team members then share ownership depending on the type and try to act on them in the following sprint. So what is missing that triggered the necessity of what I am about to introduce to the world?

The missing link, in any team or group, is inspecting the fate of those outcomes.

Are they actually being actioned? Are they buried deep inside a tool? Have they joined the long list in a corner of the whiteboard, alongside previous actions that were never actioned? How many do we need to action in the following sprint? Can we measure this without complicating the process?

Why the name AIR Wave?

The answer to all these questions lies in a simple new guideline metric, the “Adapt to Inspect Ratio (AIR)”: the ratio of improvements resolved (adapted) to improvements raised (inspected), used to visualise a trend and to act on improvements continuously. The ratio is expressed as a decimal for simplicity and can never have a value of 0. In other words, it is mandatory to act on at least one identified improvement over a negotiable “Inspection Cadence”. Simply identifying an improvement doesn’t magically get it solved or actioned; hence this is the one and only mandatory aspect of the metric that we need to monitor carefully.

Inspection Cadence (IC) – the only constant time box, in place to reduce vigorous monitoring of adaptations and to keep this background activity simple. Continuous inspection identifies the improvements and continuous adaptation resolves them. The cadence simply visualises the AIR on a graph to project a trend; it does not predict a deadline. The resulting plot will resemble a wave, called the “AIR Wave” (more on this later).
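To make the definition concrete, here is a minimal sketch in Python of how AIR could be recorded once per Inspection Cadence. The function name and the numbers are mine, purely for illustration; this is not an existing tool.

```python
# Hypothetical bookkeeping for one Inspection Cadence (IC): AIR is the
# number of improvements resolved divided by the number raised within
# the cadence; plotting one value per cadence produces the "AIR Wave".

def air(raised: int, resolved: int) -> float:
    if raised < 1 or resolved < 1:
        # AIR can never be 0: at least one improvement must be
        # identified and at least one must be actioned per cadence.
        raise ValueError("inspect and adapt at least once per cadence")
    return resolved / raised

print(air(raised=4, resolved=3))  # 0.75 -> inside the recommended range
```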

*WARNING: It is important to establish that this is meant to provide a guideline for Improvement Measurement and NOT a metric for Performance Management on the basis of these measurements, at any level.

How and Why?

The AIR is based purely on resolved versus raised improvement ideas. It visualises how well the culture responds to change and directly reflects adaptation competency. The aim is to keep the decimal value of the ratio close to 1 (see the sketch after this list), where –

  1. The recommended AIR stays between 0.5 and 1.5 (1 ± 0.5).
  2. A score higher than 1.5 may seem excellent (a higher number is better, right?), but it is designed to flag non-validated or unnecessary adaptations made without honest inspection, which can have disastrous effects.
  3. A score from 0.5 to 1 reflects satisfactory, steady adaptation competency. We should always aim for this.
  4. A score below 0.5 reflects the existence of too many identified but unactioned improvements and should be treated as a warning for that area of the business.
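The thresholds above translate into a trivial classification. A small sketch, again with a hypothetical helper name of my own; the 0.5 and 1.5 cut-offs come straight from the list and are negotiable per team:

```python
def air_zone(air: float) -> str:
    """Map an AIR value to the zones described in the list above."""
    if air <= 0:
        raise ValueError("AIR can never be 0 or negative")
    if air < 0.5:
        return "warning: too many identified but unactioned improvements"
    if air <= 1.5:
        return "recommended: steady adaptation competency (aim for ~1)"
    return "suspect: possibly unvalidated or unnecessary adaptations"

print(air_zone(0.8))  # recommended
print(air_zone(1.8))  # suspect
```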

AIR greater than 1 – How?

In theory, the ratio should always be 1 and, in a perfect world, would plot as a straight line. In reality, we will never manage to identify and resolve most improvements at the same time (or within a few days). It heavily depends on what the identified improvement is.

For example, can these be resolved within a few days of being raised?

  1. We want to reduce the duration of our meetings from 2 hrs to 1.5 hrs – Maybe.
  2. We want to go from our current 20% test coverage to 80% – Nope.

We will therefore generate – yes, you guessed it – an Improvements Backlog (IB) to act upon continuously. In a given week (or whatever the inspection cadence is), we may manage to remove 3 of the 5 items on the IB, leaving 2 waiting to be removed (current AIR = 3/5 = 0.6). The following week, say another 5 identified improvements are added, but this time some of them are linked to the remaining 2 from last week. Suppose we manage to remove all 7 in that second week (keeping it simple for the sake of explanation); that shows 7/5, making the AIR 1.4.

Therefore, looking at both weeks: AIR = (0.6 + 1.4) / 2 weeks = 1
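The same two-week example as a quick calculation, a sketch only, using the numbers from the paragraph above:

```python
# Each tuple is one week of the Improvements Backlog (IB):
# (improvements raised, improvements resolved).
weeks = [(5, 3),   # week 1: 3 of 5 resolved -> AIR = 0.6
         (5, 7)]   # week 2: 5 more raised, all 7 outstanding resolved -> 1.4

per_week = [resolved / raised for raised, resolved in weeks]
print(per_week)                       # [0.6, 1.4]
print(sum(per_week) / len(per_week))  # 1.0 -> steady over the two weeks
```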

The AIR (and the resulting AIR Wave) gives a unique visual interpretation for almost any field we can think of that needs improvement: a team, a codebase, a department, a process, and more. Go on, add “we need a free-drinks Friday” to the backlog if that helps you.

This assumes all improvements are achieved within those 2 weeks, which may be a stretch of the imagination, but you get the point. In fact, I would be surprised to see any team or department go beyond an AIR value of 1 at all in such a short time, which is why it is recommended to monitor the AIR every month or every few months. The aim is to identify and implement improvements continuously; that’s why it’s a backlog and not a regimented training camp.

Case Study

It’s always good to have real examples (the client is confidential), especially from a greenfield product with 3 teams working on it. The teams were “sort of” owning 3 feature areas of the application. AIR was measured over 10 weeks for all 3 Scrum teams, where the Inspection (Retrospective) cadence was 2 weeks (over 5 sprints), but the AIR was documented on a weekly basis, in the background. Have a look at the plots below; here is what we learned from them:

  • During their first retrospective, all 3 teams loved the concept of the improvement backlog (linked above) proposed by Mike Cohn.
  • All 3 teams, as we would expect, identified far more improvements than they thought they would need and ended up with a huge list that was impossible to clear within the next few sprints. Good; it’s all part of the learning.
  • Initially, all teams were told that the range had a minimum of 0.5 (1 being ideal) with no upper limit.

Team A – The Opportunists

Team A started clearing out the backlog quite fast, with most improvements being small things like getting a whiteboard, buying stationery, reducing the length of the refinement session, and so on. By week 5, they had almost managed to gamify the metric by adding items like “create new swimlane” and removing them from the backlog within an hour. This served as an experiment to establish an upper limit (of 1.5) – a reasonable modification of the original range – treating their behaviour as a worst-case scenario for any other team.

Knowing the upper limit, Team A realised that no one was actually using this for performance management; it had been a “guideline” all along, and no one would get paid more for their numbers game. Moment of truth: they stopped adding random improvements and started focusing on real ones. As we can see in the graph, they continued to raise and resolve improvements that made sense to them, continuously and with no specific time limit. They became the A-Team 😉

Team B – The Pessimists

An interesting group, who helped us realise that some improvements become obsolete after a few weeks when they are not actioned in time. They started rather slowly, as they wanted another team (A) to try it out first. The team members were used to following orders and were unwilling to self-organise, for several reasons, which is why their improvements were remarkably different week by week.

They did remove a lot of improvement ideas simply because those ideas had lost their purpose. Some were taken care of by other teams (common interests), some were so big that they would have stayed on the backlog for a year or more (e.g. a microservices implementation across all components), and some were simply out of budget (standing desks for a healthy work life).

Team C – The Teacher’s Pet

They were the steadiest team and followed the instructions by the book. After a few hiccups in the first few weeks, they made this second nature and a way of working rather than treating it as a separate activity – much as we would expect a team to benefit from the AIR Wave visualisation in general.

I did measure the improvement beyond 10 weeks and found that, in most cases, the AIR stays between 0.5 and 1 for the first few months and then fluctuates quite a bit depending on the product/project, when it is actively used as a guideline.

Conclusion

It would be great to see the AIR Wave implemented in the background by fellow Agile Coaches/Scrum Masters, helping teams see what they are capable of. It is meant to generate enthusiasm, creating a subtle observer effect. As mentioned before, the best use of this metric is to start with an AIR of 1 (one identified improvement to one resolved) and then establish a suitable AIR range, based on empirical evidence, as a standard for that backlog. The goal is to stay close to 1 at all times for steady, continuous improvement.
