In the event of failure (ha ha ha, I couldn’t resist that), this is what I’m aiming to cover…
The Swiss Cheese Model of Accident Causation (to give it its full name) was developed by Professor James T. Reason at the University of Manchester about 25 years ago. The original 1990 paper, “The Contribution of Latent Human Failures to the Breakdown of Complex Systems”, published in the Philosophical Transactions of the Royal Society of London, makes clear that these are complex human systems, which is important.
Also well worth reading is the British Medical Journal (BMJ) paper from March 2000, ‘Human error: models and management’. It gives an excellent explanation of the model, along with the graphic I’ve used here.
The Swiss Cheese Model, my 300-second explanation:
- Reason compares Human Systems to Layers of Swiss Cheese (see image above).
- Each layer is a defence against something going wrong (mistakes & failure).
- There are ‘holes’ in the defence – no human system is perfect (we aren’t machines).
- Something breaking through a hole isn’t a huge problem – things go wrong occasionally.
- As humans we have developed to cope with minor failures/mistakes as a routine part of life (something small goes wrong, we fix it and move on).
- Within our ‘systems’ there are often several ‘layers of defence’ (more slices of Swiss Cheese).
- You can see where this is going…
- Things become a major problem when failures follow a path through all of the holes in the Swiss Cheese – all of the defence layers have been broken because the holes have ‘lined up’.
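The ‘holes lining up’ idea can also be sketched numerically. Below is a minimal, illustrative Python sketch, not anything from Reason’s paper: the layer names and per-layer probabilities are invented for illustration. Each layer is treated as independently letting a hazard through a ‘hole’ with a small probability, and an accident happens only when every layer fails at once.

```python
import random

# Hypothetical per-layer probability that a hazard slips through a 'hole'.
# Layer names and numbers are invented for illustration only.
layers = {
    "Policies & Procedures": 0.10,
    "Management Behaviours": 0.15,
    "Professional Standards": 0.10,
    "Team Behaviours": 0.20,
    "Individual Skills": 0.20,
}

def hazard_breaks_through(layers, rng=random):
    """A hazard causes an accident only if it passes through every layer."""
    return all(rng.random() < p for p in layers.values())

# If the layers are independent, the chance that all the holes 'line up'
# is the product of the individual probabilities.
p_accident = 1.0
for p in layers.values():
    p_accident *= p

print(f"Chance all five holes line up: {p_accident:.5f}")  # 0.00006
```

Even with each layer letting 10–20% of hazards through, the chance of a hazard passing all five layers is tiny, which is why adding (or strengthening) layers of defence matters, and why a disaster usually means several defences failed together.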
Who uses it? The Swiss Cheese Model has been used extensively in Health Care, Risk Management, Aviation, and Engineering. It is very useful as a way of explaining the concept of cumulative effects.
The idea of successive layers of defence being broken down helps to show that things are linked within the system, and that intervention at any stage (particularly early on) could stop a disaster unfolding. In activities such as petrochemicals and engineering it provides a very helpful visual tool for risk management. The graphic from Energy Global, who deal with Oilfield Technology, helpfully puts the model into a real context.
Other users of the model have gone as far as naming each of the Slices of Cheese / Layers of Defence, for example:
- Organisational Policies & Procedures
- Senior Management Roles/Behaviours
- Professional Standards
- Team Roles/Behaviours
- Individual Skills/Behaviours
- Technical & Equipment
What does this mean for Learning from Failure? In the BMJ paper Reason talks about the System Approach and the Person Approach:
- Person Approach – failure is a result of the ‘aberrant mental processes of the people at the sharp end’, such as forgetfulness, tiredness, poor motivation etc. There must be someone ‘responsible’, or someone to ‘blame’, for the failure. Countermeasures are targeted at reducing this unwanted human behaviour.
- System Approach – failure is an inevitable result of human systems – we are all fallible. Countermeasures are based on the idea that “we cannot change the human condition, but we can change the conditions under which humans work”. So, failure is seen as a system issue, not a person issue.
This thinking helpfully allows you to shift the focus away from the ‘Person’ to the ‘System’. In these circumstances, failure can become ‘blameless’ and (in theory) people are more likely to talk about it, and consequently learn from it. The paper goes on to reference research in the aviation maintenance industry (well-known for its focus on safety and risk management) where 90% of quality lapses were judged as ‘blameless’ (system errors) and opportunities to learn (from failure).
It’s worth looking at the paper’s summary of research into failure in high reliability organisations (below) and reflecting: do these organisations have a Person Approach or a System Approach to failure? Would failure be seen as ‘blameless’ or ‘blameworthy’?
It’s not all good news. The Swiss Cheese Model has attracted a few criticisms, which I have written about previously in ‘Failure Models, how to get from a backwards look to real-time learning’. It is worth looking at the comments on that post for a helpful analysis from Matt Wyatt. Some people feel the model represents a neatly engineered world: great for looking backwards at ‘what caused the failure’, but of limited use for predicting failure. The suggestion is that organisations need to maintain a ‘consistent mindset of intelligent wariness’. That sounds interesting…
There will be more on this at #LFFdigital, and I will follow it up in another post.
So, What’s the PONT?
- Failure is inevitable in Complex Human Systems (it is part of the human condition).
- We cannot change the human condition, but we can change the conditions under which humans work.
- Moving from a Person Approach to a System Approach to failure helps move from ‘blameworthy’ to ‘blameless’ failure, and learning opportunities.