Good Practice Case Studies, You’re Doing It Wrong

Good practice case studies are like Hollywood movie trailers… they only show you the best bits of what happened (with some exceptions).

Sorry if that's upset anyone – I am trying to be helpful.

There is a great deal that can be learnt from the things that 'didn't quite go to plan' (failure, in many cases). However, you rarely get to find out about these golden nuggets of learning.

That’s a Bleak View of the World.  Well, I do go to a lot of conferences and seminars, where I listen to lots of people presenting their good practice case studies. I also read lots of case studies on a variety of topics (for good reasons, it’s not an obsession or anything).

The one thing that strikes me about 'Good Practice Case Study Land' is that… Nothing Ever Goes Wrong!

OK, I’m exaggerating a bit, but things hardly ever seem to go wrong in case studies, particularly when they are written down and presented at a big conference, or even a modest workshop with colleagues.  It’s not quite the world I experience where mistakes, errors and deviations are part of everyday life.

I’m not suggesting that the people who are presenting their ‘perfect’ good practice case studies are fibbing, or are committing any sort of terrible crime. To use the Hollywood Movie analogy, the case studies are just a ‘trailer’ for what actually happened.

When you condense weeks, months or even years of work down into 800 words or 15 minutes on stage you have to leave things out. Dropping the less glorious events (things that didn’t quite work to plan) feels like a very reasonable thing to do.

Did anyone suffer? The point I'm trying to get across is that by not including any of the things that didn't work (or failed), we aren't helping people who might want to learn from our experience. If they don't know about our pitfalls, mistakes and failures, they will probably make them themselves. I think it would be nice if we saved them some suffering.

An unexpected benefit could be 'helping' those people who seek 'simple quick fixes' understand that transferring good practice might be a little more complicated in reality. You might have experienced one of them, racing back from a conference (literally, in the car, talking excitedly over the phone) insisting on the implementation of some latest 'good practice', without fully recognising some of the complex issues (and failures) that sit behind the 15 minutes of highly polished case study they have experienced. If you are ever on the receiving end of this, just ask, "…did they mention any failures or setbacks?"

So, what’s the PONT?

  1. Good Practice Case Studies are like Hollywood Movie Trailers – they show the best bits and there’s usually a lot more going on.
  2. It’s rarely a straight line from problem to solution. Failures and mistakes happen, which are golden learning opportunities – we need to share them.
  3. If you are presenting a good practice case study, do introduce some failure. People might like you more – honestly, it’s called the Pratfall Effect.

Disclaimer. For Matt Wyatt's friend Gareth (who works on the Oracle programming in the basement), the sequence in the graphic of things going wrong is not prescribed. It is only an illustration. These are things that might happen. You could have just one of those things (an almost perfect project), all of those things, or 77 of them in a long line (a bit like Edison's thousands of lightbulb attempts). It's just to get the idea across. Apparently though, there is a formula for failure and success in IT startups, mainly in California. Thanks to Dave Snowden for sharing this. For a successful Digital Startup you need to:

  • Socialise the Idea (talk to people about it),
  • Fail 3 times,
  • Pivot (a sort of Plan B / do something completely different), and
  • Success!

Finally: Have a look at this trailer for the Sylvester Stallone film Cliffhanger, considered to be one of the best trailers ever (see below). I'll leave it up to you to decide if the film lived up to the promise of the trailer. And have a think: do you have any examples of good practice case studies where the reality wasn't quite as glossy? (Please let me know.)




Corporate Amnesia: Deliberately Forgetting Failure Can Be Hard Work

I love this – cruel, but fair

The recent VW emissions scandal has brought together two interesting ideas, corporate amnesia and learning from failure.

Instead of learning lessons from failure and retaining them in the corporate memory (so you don’t make the same mistakes in the future), what if organisations adopted the opposite approach?

What if an organisation took action to deliberately forget failure, accidents and mistakes (and wrongdoing), so they were free to repeat the practice in years to come? Deliberate corporate amnesia sounds a bit unlikely? Do read on…

Corporate Memory Loss. At the moment there's a fair bit of angst in (some) organisations about 'corporate memory loss'. You know the sort of thing: a financial crisis appears, and the response is to offer lots of people severance and/or restructure. The result is that people with many years of knowledge, skills and experience walk out of the door, taking with them what gets referred to as 'corporate memory'. The potentially negative effects of this loss of knowledge are fairly obvious, which makes it a bit strange that so little is often done to retain some of that knowledge.

Shelves full of (failed) products at GfK Custom Research – aka The Museum of Failed Products

When you combine this organisational loss of knowledge with learning from failure things get interesting.

The Museum of Failed Products (which I've mentioned before) is a good example. Basically, it's a vast collection of consumer products produced over the last 40+ years. All neatly arranged on the shelves of a huge facility in Ann Arbor, Michigan, it serves as a reminder that most new consumer products (8 out of 10, it's claimed) don't succeed.

A secondary feature is the 'rediscovery of lost corporate memory'. Apparently it's not uncommon for a Product Developer/R&D Scientist, new to 'Company X' (and bursting with ideas), to browse the shelves and 'discover' that Company X tried that product idea 10 years ago, and it was a complete failure. A classic piece of corporate amnesia.

Meanwhile at the Emissions Test Center… Bringing things right up to date, it turns out that VW aren't exactly strangers to the world of engine emission 'defeat devices'. Way back in 1973 they ran into trouble with the US Environmental Protection Agency (EPA) for using emission defeat devices.

Megan Geuss has written a fascinating article in Ars Technica UK, ‘Volkswagen’s emissions cheating scandal has a long, complicated history’, which brings together all the key facts. To be fair to VW, they weren’t alone in this sort of practice (but you probably suspected that anyway).

The picture above, with 'defeat devices' written in the margin, is from the 1973 EPA press release – well worth a look.

The point is that in 1973 VW were involved in activities that have a very similar odour to the emissions cheating scandal that happened 42 years later. Did they learn nothing from 1973? Have they forgotten the ('not an admission of guilt') payment of $120,000 they made to the EPA? Surely there must be some story about '1973 and the EPA' floating around VW? There must be some crusty old engineer lurking in an obscure department who remembers it well? Apparently not…

Forgetting is Hard Work. As it turns out, there may be a phenomenon of deliberate 'forgetting' in some organisations. Researchers (Sebastien Mena et al) at The Cass Business School in London have written this paper, 'On the Forgetting of Corporate Irresponsibility', which gives a thorough analysis. If you fancy something shorter, have a look at 'VW and the never-ending cycle of corporate scandals' from the BBC. Professor Andre Spicer at Cass is also worth looking at on Twitter (@andre_spicer).

What this all boils down to is that it takes hard work and some determination to 'forget' a corporate scandal. The table below provides a helpful (non-academic, very loosely based upon what I've been reading) summary. Do see if you recognise anything from organisations you have worked in.


Remember, all of this 'forgetting' takes time and quite a bit of deliberate effort – it doesn't feel like something that happens by accident. So, you have to ask the question, "Is it worth the effort?"

So, What’s the PONT?

  1. Corporate memory exists and can have a negative as well as a positive effect.
  2. ‘Forgetting’ corporate memory requires a deliberate effort, and I would suggest it probably isn’t worth it in most cases.
  3. Better to remember the lessons, to avoid repeating the mistakes of the past.

Checklists don’t work* (*sometimes, particularly if you get implementation wrong)

B17 Flying Fortress Checklist, 1944

This isn’t an anti-checklists post.

It's an illustration of why picking up an example of good practice in one location and dropping it down in another doesn't always work. No matter how brilliantly conceived, beautifully constructed or obviously 'good' the original good practice might be.

So, next time a politician or clever speaker at a conference tells you “it’s just a simple matter of transferring good practice”, please ask them; “what about Atul Gawande and the Hospital Checklists?” (and then get sacked or thrown out of the conference).

Checklists are a good thing. The thinking behind checklists is difficult to argue against. Basically a checklist breaks down a complicated procedure into a series of logical, easily understood steps. This helps a person/operator successfully complete the procedure.
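To make the idea concrete, here's a minimal sketch in Python. The steps are invented for illustration (this is not a real B17 or WHO checklist); the point is simply that the operator must confirm each step, in order, before moving on.

```python
# A minimal sketch of the checklist idea: a complicated procedure broken
# into small, explicit steps that must each be confirmed in order.
# The steps below are invented for illustration, not a real checklist.

PRE_FLIGHT = [
    "Controls and seats - checked",
    "Instruments and switches - set",
    "Fuel valves - open",
    "Flaps - set for take-off",
    "Trim tabs - neutral",
]

def run_checklist(steps):
    """Walk through each step; refuse to continue until it is confirmed."""
    for number, step in enumerate(steps, start=1):
        answer = input(f"Step {number}: {step}. Confirm? [y/n] ")
        if answer.strip().lower() != "y":
            print("Stop: resolve this step before continuing.")
            return False
    print("Checklist complete.")
    return True

if __name__ == "__main__":
    run_checklist(PRE_FLIGHT)
```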

The origins of the modern checklist (which is a fascinating story) track back to 1935 and the adoption of the B17 Flying Fortress Bomber by the U.S. Air Force. The B17 checklist helped to reduce accidents caused by human error, and versions of it have been ubiquitous in the aircraft industry for the last 80 years.

Checklists are so effective that they aren't just restricted to aviation. They have a role in everything from 'packing for your round the world trip' through to auditing and 'shutting down your nuclear reactor in an emergency'. They are useful in any complicated situation where the human brain could become confused or muddled, particularly in a stressful situation. So, it's no surprise that the medical world (eventually) picked them up.

Atul Gawande helped us love checklists. You've probably heard of Atul Gawande, author of the 2009 bestseller, The Checklist Manifesto? (Yes, I do own a copy, a birthday present from my Father-in-Law).

The book talks about how eight hospitals took part in a pilot of the World Health Organisation (WHO) 19-point Surgical Safety Checklist during 2007/8. The results were impressive:

  • Post-operative complications reduced by almost a third.
  • Death rates reduced by almost a half.

The WHO went on to recommend that the Surgical Safety Checklist (or something similar) be adopted by all hospitals. The UK National Health Service (NHS) required that all its facilities use the checklist, and by 2012 over 2000 facilities worldwide had tried checklists. An impressive example of 'evidence based good practice' being rolled out across organisations.

If you want to get a feel for just how enthusiastically this example of good practice has been 'rolled out', have a look at YouTube and search for 'WHO Surgical Safety Checklist'. You might be surprised by just how many diverse organisations and groups have uploaded videos – I was. You can lose a few hours browsing the 3000+ search results. 'How NOT to do the WHO Surgical Safety Checklist' is a particularly uncomfortable yet compelling video to watch.

Checklists haven't worked like they should. This feels like a bit of a spoiler, but do have a look at this article from Nature: 'Hospital checklists are meant to save lives – so why do they often fail?'

The Nature article uses an example of 200,000 procedures carried out at 101 hospitals in Canada, where there was no evidence of any reduction in complications and deaths following use of checklists. It also details the failure to replicate the results of the Michigan Checklist, which was used to prevent problems associated with the introduction of catheters into veins (eurrgh!).


Source: Nature

For me, the article is making a very clear point: checklists aren't the problem, it's how the people who receive the 'new' checklist choose to use it. There are a number of 'issues' used to illustrate this (have a look at the helpful graphic):

  • Staff resisted, or failed to complete the checklist. Sounds like a bit of the ‘not invented here syndrome’ to me.
  • The checklist was illogical or inappropriate. Fair point, one size rarely fits all.
  • The checklist was seen as a waste of time. A bit like the first point, people don’t see it as useful.
  • ‘Parachuted in’ solutions. It was just another ‘initiative’ dropped on front line staff by Managers and Administrators.
  • It felt ‘Imposed’. People do have a tendency to resist things that are imposed upon them, regardless of how much it is the ‘right thing to do’ (I know I do).
  • It didn’t fit the local context. The requirements of a checklist developed in a well-resourced American hospital might not apply in an under-resourced hospital in a conflict zone. You may not have the same standard of equipment or even the same number of people available to perform the task.

It's all about the implementation. The Nature article goes on to talk about the need for careful thinking about how you transfer good practice, and the growth of Implementation Science.

As an example of proven good practice (which saves lives) the WHO Surgical Safety Checklist stands out. It's been enthusiastically adopted in many places – just have a look at YouTube. However, there are places where it has failed to have the desired impact. Based on this experience, I think there is a case for thinking more widely about the process of implementation in the transfer of good practice, and the role of Implementation Science. Good practice transfer is not always a simple case of 'just do it'.

So, what’s the PONT?

  1. Checklists are a very effective way of reducing human error in complicated (and straightforward) procedures. An example of ‘transferable good practice’.
  2. Issues around 'implementing' the WHO Surgical Safety Checklist have prevented it from being fully effective in some settings.
  3. Universal acceptance and use of the Surgical Checklists might take time (Aviation checklists are after all 80 years old).

Related Post: Why is Good Practice such a bad traveller?

The Rise of Troll Farms. Can You Really Trust Social Media in a Crisis?

Wikinomics – I've owned several copies

A version of this post originally appeared on

Way back in 2008 I read 'Wikinomics' by Don Tapscott and Anthony Williams, and it pretty much changed my life. At the core of Wikinomics was the idea that the large-scale collaboration of people online was going to change everything we do.

The phrase du jour was Web2.0, used a lot at the time to describe the ideas around 'online mass collaboration', including what we now recognise as social media.

Wikinomics was full of inspiring examples of Web2.0 working in real life, including one about the response to a disaster that really intrigued me.

Computer Coders to the Rescue. This was the story of how a group of computer coders quickly came to the rescue following Hurricane Katrina in 2005, which had devastated many southern US states, and in particular New Orleans. Responding to the missing persons crisis, almost 3000 computer coders gathered online and developed and populated a system called "People Finder".

The system automatically collated data from the dozens of fragmented databases, message boards and other sources online where people were posting information. The result was a single source which outperformed any government initiative and, more importantly, helped to reunite people with loved ones. It's an inspiring story, well worth reading about in Wikinomics (pp. 185–188).
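Wikinomics doesn't show how People Finder worked under the hood, but the core idea – collating scattered, near-duplicate records into one searchable source – is easy to sketch. Here's a toy version in Python; the field names, records and matching rule are all invented for illustration.

```python
# Toy sketch of the "People Finder" idea: collate missing-person records
# from several fragmented sources into a single de-duplicated list.
# Field names and data are invented for illustration.

from collections import defaultdict

source_a = [{"name": "J. Smith", "last_seen": "New Orleans", "status": "missing"}]
source_b = [{"name": "j smith", "last_seen": "New Orleans", "status": "found"}]
source_c = [{"name": "A. Jones", "last_seen": "Biloxi", "status": "missing"}]

def normalise(name):
    """Crude normalisation so near-duplicate names merge together."""
    return name.lower().replace(".", "").strip()

def collate(*sources):
    merged = defaultdict(list)
    for source in sources:
        for record in source:
            merged[normalise(record["name"])].append(record)
    # Where sources disagree, prefer a "found" report over "missing".
    return {
        key: max(records, key=lambda r: r["status"] == "found")
        for key, records in merged.items()
    }

print(collate(source_a, source_b, source_c))
```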

Google Person Finder from the Haiti Earthquake

Since Hurricane Katrina organisations like Google have become active in helping locate people during a crisis with Google Person Finder, continuing the good work started by the Web2.0 Katrina Coders.

Mass collaboration online, Web2.0 as it was described, (including social media) was going to save the world, and I was hooked!

Imagine my disappointment several years later when some people who really know about Web2.0 started telling me that “you need to be cautious of social media in a crisis”.

My 'caution' was needed because social media was being systematically used and manipulated to influence how people behave. They suggested that "approximately 30% of tweets in a crisis are 'bot' (automatically) generated, and many are specifically aimed at escalating the incident". The reason why this happens was not explained (or fully known, I suspect), so I was happy to partially ignore it. Web2.0 and social media are a force for good… aren't they?

Troll Farms – not full of clones of Pabbie the Gentle Troll King

The Rise of The Troll Farm. Then I bumped into the phenomenon of Troll Farms, so I sat up and paid close attention.

Basically Troll Farms are coordinated efforts to spread misinformation through social media for an unclear but often dubious purpose.

This article, 'The Agency', in The New York Times describes a social media crisis that 'somebody' manufactured in St Mary Parish, Louisiana, in 2014. Anyone looking at social media at the time would have been convinced that a local Chemical Plant had actually exploded.

Imagine how that would make you feel if you lived anywhere nearby, or had friends and family in the vicinity? Basically there was a concerted effort via Twitter, fake websites, YouTube videos, Wikipedia pages, texts and other methods to create the impression that there had been a major explosion at a Chemical Plant in St Mary Parish.

Nobody seems to know why, but the scale of the effort suggests something very well-resourced and highly organised was behind the 'crisis' – possibly a 'foreign state'? The New York Times article goes on to talk about the workings of the Internet Research Agency, an alleged Russian Troll Farm*.

It's all very interesting, detailing how workers would spend 12 hours a day on multiple false social media profiles spreading misinformation for a particular purpose. Hundreds of staff doing things like innocuously commenting on blog posts while subtly (or not) inserting political messages. Depending on your point of view, you can decide how much of The New York Times article might also be misinformation about the Russian misinformation service (it's a funny old world).

*Note – Other Troll Farms are probably available – this isn’t just a Russian thing. Other Countries/Large Corporations/Political Groups very probably have their own equivalents of ‘Troll Farms’.

What Happened to Web2.0? This all feels very different to the world of Web2.0 that Tapscott and Williams talked about in Wikinomics:

  • Why on earth would someone want to manufacture a crisis using social media?
  • Why would someone want to escalate an existing crisis using ‘Twitter bots’?

I’m not clear on any of the answers to these questions. I do however think that some of the challenges aren’t very different to what we’ve always faced – people want to influence what we do by feeding us certain types of information.

The rise of Web2.0 and Troll Farms just seems to give a bit more scale and speed to the transfer of the (mis)information. But no need to panic just yet (Mr Mainwaring) and switch off your social media.

The trick in responding might be a bit of old-fashioned good practice: check your sources, work out who you can trust, test what you find and do a bit of 'triangulation'… before you press the share or retweet button.
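Just to make the 'triangulation' idea concrete, here's a toy rule sketched in Python. Everything about it (the trusted source list, the threshold of two) is invented for illustration – it's a caricature of the habit, not a real verification system.

```python
# Toy "triangulation" rule before sharing a claim: only pass it on if it is
# corroborated by a minimum number of independent sources you already trust.
# The trusted list and threshold are invented for illustration.

TRUSTED_SOURCES = {"local_news", "official_agency", "known_eyewitness"}
MIN_INDEPENDENT_SOURCES = 2

def should_share(claim_sources):
    """Return True only if enough independent, trusted sources back the claim."""
    corroborating = set(claim_sources) & TRUSTED_SOURCES
    return len(corroborating) >= MIN_INDEPENDENT_SOURCES

print(should_share(["random_tweet", "local_news"]))              # False
print(should_share(["local_news", "official_agency", "a_bot"]))  # True
```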

So, What’s the PONT?

  1. Tapscott and Williams were right in Wikinomics – Mass Collaboration Online has changed (just about) everything.
  2. As with many ‘tools’ available to humans, we can choose to use them for good or bad purposes.
  3. How we deal with the information we receive hasn't changed that much: always check your sources and work out who you can really trust, especially in a crisis.


The James Reason Swiss Cheese Failure Model in 300 Seconds

James Reason Swiss Cheese Model. Source: BMJ, 2000 Mar 18;320(7237):768–770

This week I’m at the Cardiff pilot of Practical Strategies for Learning from Failure (#LFFdigital), explaining the Swiss Cheese Failure Model in 300 seconds (5 minutes).

In the event of failure (ha ha ha, I couldn’t resist that), this is what I’m aiming to cover….

The Swiss Cheese Model of Accident Causation (to give it its full name) was developed by Professor James T. Reason at the University of Manchester about 25 years ago. The original 1990 paper, "The Contribution of Latent Human Failures to the Breakdown of Complex Systems", published in the Philosophical Transactions of The Royal Society of London, clearly identifies that these are complex human systems, which is important.

Well worth reading is the British Medical Journal (BMJ), March 2000 paper, ‘Human error: models and management’. This paper gives an excellent explanation of the model, along with the graphic I’ve used here.

The Swiss Cheese Model, my 300 second explanation (there's a toy simulation in code just after the list):

  • Reason compares Human Systems to Layers of Swiss Cheese (see image above).
  • Each layer is a defence against something going wrong (mistakes & failure).
  • There are 'holes' in the defence – no human system is perfect (we aren't machines).
  • Something breaking through a hole isn't a huge problem – things go wrong occasionally.
  • As humans we have developed to cope with minor failures/mistakes as a routine part of life (something small goes wrong, we fix it and move on).
  • Within our 'systems' there are often several 'layers of defence' (more slices of Swiss Cheese).
  • You can see where this is going…
  • Things become a major problem when failures follow a path through all of the holes in the Swiss Cheese – all of the defence layers have been broken because the holes have 'lined up'.
Source: Energy Global Oilfield Technology
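To show that cumulative effect in action, here's a toy Monte Carlo sketch in Python. The layer names are the sort used later in this post; the hole probabilities are entirely invented for illustration.

```python
# Toy simulation of the Swiss Cheese Model: a hazard becomes an accident
# only when it passes through a "hole" in every layer of defence.
# Layer names and hole probabilities are invented for illustration.

import random

LAYERS = {
    "Organisational Policies & Procedures": 0.10,  # chance of a hole
    "Team Roles/Behaviours": 0.05,
    "Individual Skills/Behaviours": 0.08,
    "Technical & Equipment": 0.02,
}

def hazard_gets_through(layers):
    """True if, on this occasion, every layer happens to have a hole lined up."""
    return all(random.random() < p for p in layers.values())

trials = 100_000
accidents = sum(hazard_gets_through(LAYERS) for _ in range(trials))
print(f"Accidents in {trials} hazards: {accidents}")

# Each layer leaks fairly often on its own, but all four lining up is rare:
# roughly 0.10 * 0.05 * 0.08 * 0.02 = 8 accidents per million hazards.
```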

Who uses it? The Swiss Cheese Model has been used extensively in Health Care, Risk Management, Aviation, and Engineering. It is very useful as a method of explaining the concept of cumulative effects.

The idea of successive layers of defence being broken down helps us understand that things are linked within the system, and intervention at any stage (particularly early on) could stop a disaster unfolding. In activities such as petrochemicals and engineering it provides a very helpful visual tool for risk management. The graphic from Energy Global, who deal with Oilfield Technology, helpfully puts the model into a real context.

Other users of the model have gone as far as naming each of the Slices of Cheese / Layers of Defence, for example:

  • Organisational Policies & Procedures
  • Senior Management Roles/Behaviours
  • Professional Standards
  • Team Roles/Behaviours
  • Individual Skills/Behaviours
  • Technical & Equipment

What does this mean for Learning from Failure?  In the BMJ paper Reason talks about the System Approach and the Person Approach:

  • Person Approach – failure is a result of the 'aberrant mental processes of the people at the sharp end', such as forgetfulness, tiredness, poor motivation etc. There must be someone 'responsible', or someone to 'blame' for the failure. Countermeasures are targeted at reducing this unwanted human behaviour.
  • System Approach – failure is an inevitable result of human systems – we are all fallible. Countermeasures are based on the idea that “we cannot change the human condition, but we can change the conditions under which humans work”. So, failure is seen as a system issue, not a person issue.

This thinking helpfully allows you to shift the focus away from the ‘Person’ to the ‘System’. In these circumstances, failure can become ‘blameless’ and (in theory) people are more likely to talk about it, and consequently learn from it. The paper goes on to reference research in the aviation maintenance industry (well-known for its focus on safety and risk management) where 90% of quality lapses were judged as ‘blameless’ (system errors) and opportunities to learn (from failure).

It's worth having a look at the paper's summary of research into failure in high reliability organisations (below) and reflecting: do these organisations have a Person Approach or a System Approach to failure? Would failure be seen as 'blameless' or 'blameworthy'?

High Reliability Organisations. Source: BMJ, 2000 Mar 18;320(7237):768–770

It's not all good news. The Swiss Cheese Model does have a few critics. I have written about this previously in 'Failure Models, how to get from a backwards look to real-time learning'; it is worth looking at the comments on that post for a helpful analysis from Matt Wyatt. Some people feel the model represents a neatly engineered world and is great for looking backwards at 'what caused the failure', but is of limited use for predicting failure. The suggestion is that organisations need to maintain a 'consistent mindset of intelligent wariness'. That sounds interesting…

There will be more on this at #LFFdigital, and I will follow it up in another post.

So, What’s the PONT?

  1. Failure is inevitable in Complex Human Systems (it is part of the human condition).
  2. We cannot change the human condition, but we can change the conditions under which humans work.
  3. Moving from a Person Approach to a System Approach to failure helps move from ‘blameworthy’ to ‘blameless’ failure, and learning opportunities.

Learn the Rules Like a Pro, So You Can Break Them Like an Artist.

"Learn the rules like a pro, so you can break them like an artist" is a quote attributed to Pablo Picasso. Variations of it have been used widely by people from the Dalai Lama to Fashion Guru Alexander McQueen, which is where my story begins…

Once upon a time my Friday evenings typically involved quality moments at the bar in Pontyclun Rugby Club. You get the picture… a robust discussion of culture, philosophy, macro-economics and global politics (and beer). So, you can imagine how easy it was for me (8pm last Friday evening) to merge seamlessly with the crowds at The Victoria & Albert Museum, waiting patiently in a very warm queue to see the sellout Alexander McQueen exhibition, Savage Beauty.

I'm ashamed to say that I wasn't expecting to enjoy it. High Fashion isn't really my thing, and my impressions of Alexander McQueen's work were based upon occasionally flicking through the pages of Vogue. All pretty scary stuff as far as I could see. So… I'm shuffling along with the crowds, feigning interest, when I'm suddenly confronted with a quote from McQueen that changed everything: "You've got to know the rules to break them…. That's what I'm here for – to demolish the rules but keep the tradition".

I was transfixed – I was immediately out of the line and pushing my way back in at the start of the exhibition. I was like a man on a mission, reading everything I could and looking at the displays with renewed interest. What on earth was going on, you might now be asking…

Alexander McQueen Spent 5 Years as an Apprentice. It turns out that McQueen left his East End school (with one O Level in Art) at 16 years of age and went to work as a Tailor's Apprentice in London's Savile Row. During this period, including two years at a military tailor's, he learnt how to produce beautifully tailored garments.

A Master's Degree in Fashion at St Martin's College followed, and the rest is history (very interesting history as it happens). One of the things that stood out for me in the exhibition was the fact that McQueen was always recognised for his skill and ability in garment making – he knew how things worked.

When you look at what he produced in his early years, you can see this – and you know it is there in the more 'avant garde' productions from later years; they wouldn't exist without that deeper understanding (even the birch plywood dress). The quote "you've got to know the rules to break them…" beautifully sums up that approach. It also embodies why I think the idea of an apprenticeship (time spent learning your craft) is so very important. As Picasso said, it's all about "learning the rules like a pro, so you can break them like an artist".

How do Apprenticeships Work? Apprenticeships have been around since the 1300s, and an apprentice has been broadly defined as 'a person who learns a trade from a skilled employer, for a fixed period at low wages'. I think there's a bit more to it nowadays, and it is worth a look at the Wikipedia description of apprenticeships, particularly in countries like Germany and France.

If you fancy going deeper, The Educational Theory of Apprenticeship is also worth a look. This is all about 'learning through the physical integration into practices associated with the subject' – getting your 'hands dirty'. I particularly like the explanation of the passage of a novice through to 'journeyman' in 5 Phases:

  1. Modelling – the learner observes and contemplates. Basically you have a look at what the expert is doing and think about what you have seen.
  2. Approximating – in non-critical scenarios, the learner mimics the actions of the teacher. There is opportunity to make mistakes and fail in this 'non-critical' environment, an essential part of the learning process.
  3. Fading – the learner (still within a safety net) starts to 'play' with what they have learnt, and there is less dependence upon the teacher.
  4. Self Directed Learning – the learner attempts actions in the real world, where the scope is well understood. They only seek assistance from the expert when required.
  5. Generalising – the learner's skills are applied to multiple scenarios in the real world as they continue to learn and grow their ability.

Within each of these phases there is an emphasis upon thinking and reflecting upon what you have experienced and learnt.

I think this is a very useful model that can be applied far more widely than trade and craft apprenticeships. All sorts of activities and professions (and dare I say Leadership and Management) could benefit from people spending time going through these phases.  To sum up what I’ve picked up from my recent experiences of learning about apprenticeships:

  • They take time (typically between 2-7 years to fully learn and understand)
  • You need to know how things work (learn the rules)
  • There needs to be a ‘safe’ space to learn from mistakes (and failure)
  • You have to keep learning (or you’ll stay as a ‘Journeyman’)

Fast Track Graduate Schemes. Finally, I do wonder if there is a case for taking more of an apprenticeship approach to 'fast track' graduate schemes?

Based on what I’ve been reading I think that you do need time to learn how things work. You also need time to test, fail and make mistakes – it’s a key part of learning. How many fast track schemes allow this?

Ultimately you really need to learn the 'rules' (written and unwritten) – only then can you behave like Picasso, the Dalai Lama and Alexander McQueen. It takes time and hard graft.

So, What’s the PONT?

  1. Understanding ‘how things work’ really is necessary to perform any task or job.
  2. This will take time, commitment and a fair bit of reflection / thinking, as well as the practical stuff.
  3. If you want to break the rules effectively, like a pro, you really need to understand what you are breaking – so learn the rules in the first place.

Thanks to my old school friend Ian Davies for also inspiring this post with our chats about apprenticeships, and for coaching one of my sons in the dark arts of quality management systems.

The Ladder of Inference. Climbing Down from Expert Bias

The Ladder of Inference is a concept developed by the late Harvard Professor Chris Argyris, to help explain why people looking at the same set of evidence can draw very different conclusions.

The difference comes from the idea that, based on their beliefs, people ‘choose’ what they see in amongst a mass of information.

More on that later, but first off, who fancies an experiment?

If I was being a bit hipster I could claim it as a Randomised Controlled Trial (RCT if you are uber hipster), but I'll stick to plain old 'experiment'.

Try this experiment at home or in the office:

  1. Go to Twitter and find a hashtag for a recent conference or seminar where people have been busily tweeting.
  2. The topic doesn't really matter, but something reasonably linked to your area of interest/business might be useful.
  3. Search on the hashtag so that you can see a good selection of tweets – about 100 will do.
  4. Copy the 100 tweets and paste them into a document. The aim is to have about 3 pages of written text for people to look at (pretty straightforward so far).
  5. Now find two Test Subjects (people). Colleagues with very different views on the world would be good. Mr Grumpy and Miss Sunshine or Cruella de Vil and Ronald McDonald (we all have them).
  6. Now ask the Test Subjects to independently review the tweets and provide you with a summary of the key points emerging from the Twitter stream.
  7. Just to make it interesting, you could ask the Test Subjects to summarise their findings in no more than 5 tweets.
  8. Sit back and wait for the results.
  9. If you are feeling ambitious, you could repeat this experiment a number of times, with different Test Subjects, different collections of tweets or a different context.
  10. For example, choose Test Subjects (people) with a very similar outlook. Before they do the analysis, brief one that you thought the seminar was excellent, and the other that you thought it was rubbish (sneaky!).

The Results. I’ve not formally run this experiment (yet) but I have experienced a fair number of the summaries of Twitter conversations that people like to share about conferences. Storify Twitter summaries are almost mandatory for public sector conferences nowadays.

What has intrigued me is just how differently people can interpret and present the summary of the discussion at the same event. I appreciate that there will be a certain amount of bias and pushing of corporate messages. If your organisation is running the conference/seminar you will probably want to push key messages – dissenting voices and challenge probably aren't something you are going to 'share', given the choice.

However, it is the variation in the summaries from the apparently independent/unbiased people that intrigues me. I’m sure that people will have very good intentions, but I do think there is a fair bit of ‘expert bias’ going on in these situations.

This is something neatly illustrated by The Ladder of Inference, developed by Chris Argyris and also used by Peter Senge in his book The Fifth Discipline.

The basic idea, sketched in a toy simulation after the list, is that:

  • When presented with a range of information/data/facts, we select what fits with a belief we already hold about that situation.
  • All other information is ignored.
  • We make our decision on the ‘evidence’ we have selected (evidence based decisions are always the best).
  • Our beliefs become stronger based on that good decision we have made (a feedback loop).
  • In the future, when we look at information/data/facts, what we select will be influenced by our now more strongly held beliefs.

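I haven't tried to code the full Twitter experiment, but the feedback loop itself is easy to caricature. Here's a toy simulation in Python – all the numbers (the prior belief, the 20% chance of noticing contrary data, the step size) are invented for illustration.

```python
# Toy simulation of the Ladder of Inference feedback loop: an observer with
# a prior belief "selects" mostly the evidence that fits it, and each
# selective reading strengthens the belief. All numbers are invented.

import random

random.seed(1)

belief = 0.6  # prior: how strongly we believe "the seminar was good"
evidence = [random.random() < 0.5 for _ in range(200)]  # 50/50 good and bad

for item in evidence:
    # We mostly "see" evidence that agrees with what we already believe.
    agrees = (item and belief > 0.5) or (not item and belief < 0.5)
    noticed = agrees or random.random() < 0.2  # contrary data rarely registers
    if noticed and item:
        belief = min(1.0, belief + 0.01)
    elif noticed and not item:
        belief = max(0.0, belief - 0.01)

print(f"Belief after reading balanced evidence: {belief:.2f}")

# Starting just above 0.5 on perfectly balanced evidence, the selective loop
# drives the belief towards certainty rather than back towards the balance.
```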
Both of the graphics I've used in this post illustrate the concept very effectively. If you would prefer, here is a 3 min video from Edward Muzio, The Ladder of Inference Creates Bad Judgement.

Am I suffering from ‘Expert Bias’? I’ve been worrying for a while that I’m a candidate for The Ladder of Inference. It’s not just how I view the tweets from a seminar, but just about everything where I’m presented with data/information/facts.

Everything I've ever experienced informs how I see things and gives me an 'expert bias'. You might now be thinking, 'that would never happen to me…' (oh, lucky you…)

At one level it could be argued that I should stop getting anxious and get over it – that's the job, to make sense of complicated information. At another level, I would like to put some rigour into what I'm doing: how do I perform a cross check?

The work from Chris Argyris suggests a process of 'climbing down' The Ladder of Inference: going back down into the facts and looking at all of them more closely, attempting to remove your bias. The Edward Muzio video summarises this as:

  • Climb down the Ladder of Inference (a lovely metaphor)
  • Question your Assumptions and Conclusions (with a trusted colleague)
  • Seek Out Contrary Data (to test what you are seeing)

These are all good suggestions, but quite hard to do. What if you have no idea of how you are biased? The topic of unconscious bias, how you recognise it, and how you deal with it is helpfully the focus of HR Training Programmes in many organisations, linked to things like equalities and diversity work, but helpful in so many other areas.

At the end of the day, if we just get as far as recognising there is such a thing as The Ladder of Inference and Expert Bias, I think that’s a pretty good start. It might help me when I’m reading the next tweet Storify.

So, What’s the PONT?

  1. People can interpret the same set of ‘facts’ in different ways.
  2. Recognising it as Expert Bias, Unconscious Bias or The Ladder of Inference is a useful first step, as it can prevent wrong conclusions and bad decisions.
  3. Once your bias is recognised, you need to take steps to ‘Climb Down The Ladder’. It’s always better to fall from a low step.