Saturday, 17 June 2017

The Case of the Missing Methodology - An Inspector Ofsted Mystery

Inspector Ofsted thumbed through his notebook in an attempt to review the evidence he had collected. He had certainly written a lot, but the problem was he just didn’t know how valid and reliable it all was. “If only the methodology had not gone missing,” he thought, “the mystery would be solved.” But sadly, nobody knew where the methodology was, or even where to begin looking for it …
In a recent post, I recounted what seemed to me a sorry tale. I have been asking Ofsted questions about their inspection methodology, particularly how their social care inspectors avoid selecting samples of work which are unrepresentative of the work done by the local authority they are inspecting. What precautions, if you like, do they take to avoid selecting only examples of poor work, or only examples of good work, when they choose cases to ‘sample’ or ‘track’?

Published sources cast no light on this issue. Reference is made to inspectors making “proportionate decisions ... in order to secure a representative judgement”, but how they take these decisions and what ‘proportionate’ means remain mysterious.

After much wasted effort trying to explain my Freedom of Information Act request, I seem to have hit Ofsted’s buffer stops. The well has run dry. It appears that nobody there knows the answer to my question. Put another way, there seems to be a massive black hole in the inspectorate’s methodology.

Does that matter? You bet it matters! If, as seems to be the case, Ofsted inspectors just roll up to inspect child protection and children’s social care with no prior thinking about how they are going to get a sample of work which truly represents the standards achieved by the authority, then the inspection is a lottery. One authority might just be unlucky and have lots of poor cases selected, resulting in a verdict of ‘inadequate’, while its neighbour may be lucky in having only good cases selected for inspection.
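To put a rough number on that lottery, here is a minimal sketch in Python. The good/poor split, the sample size and the ‘inadequate’ threshold are all assumptions of mine, chosen purely for illustration; nothing here reflects Ofsted’s actual figures or procedure:

    import random

    # Suppose (purely hypothetically) that 70% of an authority's casework
    # is sound and 30% is poor, and that inspectors judge the authority on
    # a random sample of just 6 tracked cases.
    TRUE_GOOD_RATE = 0.7   # assumed share of sound cases
    SAMPLE_SIZE = 6        # assumed number of cases sampled
    TRIALS = 100_000       # number of simulated inspections

    misleading = 0
    for _ in range(TRIALS):
        sample = [random.random() < TRUE_GOOD_RATE for _ in range(SAMPLE_SIZE)]
        # Assume a verdict of 'inadequate' if half or more sampled cases are poor.
        if sample.count(False) >= SAMPLE_SIZE / 2:
            misleading += 1

    print(f"Chance this mostly sound authority is judged 'inadequate': "
          f"{misleading / TRIALS:.0%}")

Run it and the answer comes out at roughly 26%: even with genuinely random selection, a sample of six cases condemns a mostly sound authority about one time in four. And if the selection is not random at all, the odds are anyone’s guess, which is precisely the point.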

And we all know the consequences. An authority that fails an inspection usually has to reorganise, fire staff and hire new ones. It is likely to lose committed staff as morale slumps. Those who remain are likely to be disorientated by the frantic pace of change. Their work will suffer. Service users are also likely to suffer, as all attention is diverted towards managing the results of the inspection, not the service they receive. Some of that might be justified if the eventual result was guaranteed to be accurate, and the ultimate consequences constructive. But all I have learnt recently suggests the opposite.

Because this post is just a tiny bit critical of Ofsted, I thought it only fair to send it to their Press Office and invite a comment before posting. This is what they said:
“Inspectors select samples that reflect the full range of the local authority’s work with children. Throughout the inspection, they ask social workers, their managers and leaders to identify good practice and practice they think they can improve, so that inspectors can evaluate the quality and extent of this work. 
“Inspectors meet regularly with local leaders to discuss their findings and give them an opportunity to challenge any findings they feel are not representative. 
"We are confident that our methodology and the parameters in our inspection guidance are sufficient to ensure a fair and accurate judgement.”
Nicely put, I’m sure. It’s good to know that they give people the opportunity to challenge their findings. But that’s just politeness and common sense. What I wanted to know is how Ofsted ensures that the cases selected are representative and that the findings of the inspection are valid. I don’t think they’ve told me that!

Back to square one, perhaps?