Thursday, 22 June 2017

The cupboard is bare

“I'm afraid there is not a lot (in the Conservative Party manifesto) around children in care and/or young people more widely which is a concern with me… It was a shame children and young people's issues didn't play such a prominent part in the campaign.”

Not my words, but those of former Children’s Minister and Conservative MP, Tim Loughton, speaking to Children and Young People Now last week.

Now, with a hung Parliament and an unhealthy preoccupation with Brexit, there is even less in the Queen’s Speech, which, for those unfamiliar with the archaic ways of the British Parliament, is the way the Government of the day announces its forthcoming legislative programme. In fact, there is hardly anything. Certainly, as Children and Young People Now reports, there is nothing about improving (perhaps I should say 'saving') local child protection services and children’s social care.

The only silver lining, perhaps, is that this Government may not last long. We live in hope!


Monday, 19 June 2017

Gloucestershire – blaming the blamers?

Some of the most interesting comments in the Ofsted report on children’s services in Gloucestershire are to be found in the section on leadership, management and governance, an area in which the inspectorate found the authority ‘inadequate’.
  
The first worrying statement in this section of the report is:

“Inspectors discovered significant discrepancies in some information provided to them by the senior leadership team, bringing into question the integrity of the leadership of children’s services.” (p. 29)

This is an extraordinary statement which, in the absence of further clarification, suggests that inspectors were deliberately misled. A natural first reaction to reading it might be to jump up and down with moral outrage and blame those who may have done the misleading.

But my reaction, while not condoning egregious behaviour, is to ask whether the current inspection regime generally predisposes towards a lack of candour. The stakes in Ofsted inspections are very high. For those at the top of a local authority, a good Ofsted can result in personal approbation and career success. A poor Ofsted is equally likely to end promising careers or, at the very least, to tarnish professional reputations indelibly. It is difficult to be open and honest if the personal consequences of doing so are likely to be disastrous. An inspection regime which makes people afraid to tell the truth will always be one which encourages some level of dishonesty.

Fear, and the evasion and dissembling that go with it, are the inevitable consequences of inspections which judge rather than enlighten, inform and support. There are alternatives to the ‘take-names-and-kick-arse’ philosophy currently adopted by Ofsted. One approach would be to help local authorities develop better systems of quality management. Another would be to analyse the genesis of problems like staff shortages and high turnover and offer helpful suggestions about how to rectify them or mitigate their impact. A third would be to act as a catalyst for continuous improvement and greater understanding of the needs of children and young people.

The next worrying statement from the report is:

“Senior managers do not provide an environment in which healthy challenge is evident and social work practice is allowed to flourish, and a high number of staff reported that they feel vulnerable, unsupported by senior managers and fearful of challenging or exposing poor practice.” (p.29)

I’m sure the inspector is right to disparage a management culture in which staff feel afraid to speak out about things that they think are wrong, but I baulked at the concluding words of this paragraph – “… challenging or exposing poor practice”. Those words leave me with an uncomfortable image of encouraging practitioners to speak out about their perceptions of their colleagues’ failings; an image of an organisation in which everybody is challenging, confronting and criticising everybody else.

It suggests to me that the inspector has what I believe is a wrong-headed view about how to improve services for children and young people and make them safer. It is what Professor James Reason calls “the person approach” which he says focuses on the errors and failings of individuals, blaming them for “forgetfulness, inattention, or moral weakness”. He goes on to observe that “… (b)laming individuals is emotionally more satisfying than targeting institutions” but he concludes that it “… has serious shortcomings” because “…(e)ffective risk management depends crucially on establishing a reporting culture”. [1]

In contrast, Reason believes that what he calls “the system approach” is the correct path to organisational safety. That approach focuses on the systems and conditions under which individuals work and attempts to construct defences which prevent errors or mitigate their impact. The system approach is about openness and transparency, but it is not about blame. It recognises that we all make mistakes and that we all perform below expectations from time to time. The vast majority of mistakes and failings occur not because somebody has acted egregiously; they are made by people acting in good faith.

The key to better services – greater safety, greater quality – is to focus not on blame and accountability, but on learning and improvement. If people feel free to report and discuss errors and service failings without fear of reproach or sanction, they are also free to analyse the things that go wrong and to understand their causes. They are free to put things right. That is the route to constructing better defences and more resilient systems and processes. It is the route to safer organisations delivering higher quality services.

A third worrying remark caught my eye. The inspector writes:

“Instability in the workforce is having a significant impact on the quality of practice. The turnover of social workers and managers is high. The majority of social workers have less than two years’ post-qualifying experience and, for too many, the caseloads are too high and include complex cases that require a good depth of knowledge and experience.” (p. 29)

To be sure, the inspector is right to draw our attention to this fact. But it is baldly stated and there is no analysis. How has this situation arisen, what are its causes and what can be done to reverse the trend? I suspect, although I do not know, that turnover has been high not least because “…a high number of staff reported that they feel vulnerable, unsupported by senior managers”. But again, that raises the question: why? How has such an unconducive working environment arisen and what needs to be done to change the culture? I don’t know; I suspect the inspector doesn’t know; and I doubt that those charged with putting right the mess will be very open about what they believe. Just holding folk accountable for creating a bad culture, without trying to understand why it happened, is not going to put things right.

Once again, it seems to me, an Ofsted report is part of a cycle of fear and recrimination. It’s almost as if somebody had said: “We are going to punish all those responsible for sustaining the blame culture”. That’s what philosophers like Bertrand Russell [2] used to call a ‘semantic paradox’. Paradoxes may be interesting to logicians, but they aren’t a very good basis for building better organisations.

Just blaming the blamers will get us nowhere.

[1] Reason, J. “Human error: models and management”, British Medical Journal 2000; 320: 768–70.
[2] Russell, B. The Problems of Philosophy (London: Oxford University Press, 1912).


Saturday, 17 June 2017

The Case of the Missing Methodology - An Inspector Ofsted Mystery

Inspector Ofsted thumbed through his notebook in an attempt to review the evidence he had collected. He had certainly written a lot, but the problem was he just didn’t know how valid and reliable it all was. “If only the methodology had not gone missing”, he thought, “the mystery would be solved”. But sadly, nobody knew where the methodology was or even where to begin looking for it…

In a recent post, I recounted what seemed to me to be a sorry tale. I have been asking Ofsted questions about their inspection methodology, particularly how their social care inspectors avoid selecting samples of work which are unrepresentative of the work done by the local authority they are inspecting. What precautions, if you like, do they take to avoid selecting just examples of poor work or just examples of good work, when they choose cases to ‘sample’ or ‘track’?

Published sources cast no light on this issue. Reference is made to inspectors making “proportionate decisions ... in order to secure a representative judgement”, but how they take these decisions and what ‘proportionate’ means remain mysterious.

After much wasted effort trying to explain my Freedom of Information Act request, I seem to have hit Ofsted’s buffer stops. The well has run dry. It appears that nobody there knows the answer to my question. Put another way, there seems to be a massive black hole in the inspectorate’s methodology.

Does that matter? You bet it matters! If, as seems to be the case, Ofsted inspectors just roll up to inspect child protection and children’s social care with no prior thinking about how they are going to get a sample of work which truly represents the standards of work achieved by the authority, then the inspection is a lottery. One authority might just be unlucky and have lots of poor cases selected resulting in a verdict of ‘inadequate’, while its neighbour may be lucky in only having good cases selected for inspection.
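
To put a rough number on that worry, here is a minimal sketch in Python. All the figures are mine, not Ofsted’s: I assume a hypothetical authority whose casework is genuinely good in 80% of cases and an inspector who tracks 30 cases chosen entirely at random, and then look at how much the sampled ‘good’ rate swings by chance alone.

```python
import random

# A hedged illustration, not Ofsted's method: assume a hypothetical authority
# whose casework is genuinely good 80% of the time, and an inspector who
# tracks 30 cases drawn completely at random.
TRUE_GOOD_RATE = 0.8
SAMPLE_SIZE = 30
TRIALS = 10_000

random.seed(1)  # fixed seed so the illustration is reproducible

sampled_rates = []
for _ in range(TRIALS):
    good = sum(random.random() < TRUE_GOOD_RATE for _ in range(SAMPLE_SIZE))
    sampled_rates.append(good / SAMPLE_SIZE)

print(f"Lowest 'good' rate seen in a 30-case sample:  {min(sampled_rates):.0%}")
print(f"Highest 'good' rate seen in a 30-case sample: {max(sampled_rates):.0%}")
print(f"Share of samples suggesting under 70% good:   "
      f"{sum(r < 0.7 for r in sampled_rates) / TRIALS:.0%}")
```

On those made-up figures, the very same authority can look anywhere from comfortably good to worryingly poor in a single 30-case sample, before any question of biased selection even arises.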

And we all know the consequences. An authority that fails an inspection usually has to reorganise, fire staff and hire new ones. It is likely to lose committed staff as morale slumps. Those who remain are likely to be disorientated by the frantic pace of change. Their work will suffer. Service users are also likely to suffer, as all attention is diverted towards managing the results of the inspection, not the service they receive. Some of that might be justified if the eventual result was guaranteed to be accurate, and the ultimate consequences constructive. But all I have learnt recently suggests the opposite.

Because this post is just a tiny bit critical of Ofsted, I thought it was only fair to send it to their Press Office and invite a comment before posting. This is what they said:
“Inspectors select samples that reflect the full range of the local authority’s work with children. Throughout the inspection, they ask social workers, their managers and leaders to identify good practice and practice they think they can improve, so that inspectors can evaluate the quality and extent of this work. 
“Inspectors meet regularly with local leaders to discuss their findings and give them an opportunity to challenge any findings they feel are not representative. 
"We are confident that our methodology and the parameters in our inspection guidance are sufficient to ensure a fair and accurate judgement.”
Nicely put, I’m sure. It’s good to know that they give people the opportunity to challenge their findings. But that’s just politeness and common sense. What I wanted to know is how Ofsted ensures that the cases selected are representative and that the findings of the inspection are valid. I don’t think they’ve told me that!

Back to square one perhaps?

Wednesday, 14 June 2017

What’s in a name?

The only apparently relevant qualification of the new Children’s Minister is his name: Goodwill.

We certainly need a lot of goodwill in children’s social care. 

Otherwise, Robert Goodwill seems to know about farming, transport and immigration, but there is no evidence that he knows very much about children or child protection.

Perhaps he is a quick learner? 

I won’t hold my breath.

Wednesday, 7 June 2017

Ofsted's methodology - how good is it?

Ofsted social care inspectors spend a lot of time reading case files in order to make judgements about the quality of the casework undertaken with children and young people.

Published sources are quite clear about this. Ofsted’s inspection handbook and a document nattily titled “Framework and evaluation schedule: children in need of help and protection and care leavers and Local Safeguarding Children Boards” say much the same thing. Paragraph 61 of the handbook puts it like this:

“Inspectors will track the individual experiences of at least 25 and usually no more than 30 children in need of help and protection, children looked after and care leavers. In exceptional circumstances the number of cases tracked may need to exceed 30 in order to secure a representative judgement. The lead inspector should make a proportionate decision about the number of additional cases to track. They will take an in-depth look at the quality of the help, care and protection children and young people have experienced and the implementation of children in need, child protection, care, placement and pathway plans.”

Paragraph 65 of that document goes on to say that the sample of children and young people whose cases are to be tracked and sampled should be adjusted to ensure a balance of the following:
  • age, gender, disability and ethnicity
  • type of maltreatment/problem
  • educational achievement and ability
  • type of placement
  • stages of “their journey”
  • at least one child from a large sibling group
  • practitioner and team
  • children and young people supported by a third-party provider operating with social services functions delegated to it by the local authority

What all this fascinating detail fails to show, however, is how the lead inspector ensures that the sample of cases is representative of the work undertaken by the authority. How does s/he ensure that an undue proportion of either good or bad work is not included in the sample? The answer to that question is clearly crucial because if a biased sample is inspected, that will influence the overall judgement of the inspection, effectively invalidating its findings.
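
Even setting aside deliberate or accidental bias, a sample of 25 to 30 cases carries a good deal of statistical uncertainty in its own right. The sketch below uses invented figures (21 ‘good’ cases out of 30, a number I have made up rather than taken from any inspection) and a standard back-of-envelope confidence interval for a proportion to show how loosely such a sample pins down an authority’s true standard of work.

```python
import math

# Invented figures for illustration: suppose 21 of 30 tracked cases are
# judged to show good practice. Even with perfectly unbiased selection,
# how precisely does that estimate the authority's true rate of good work?
good_cases, sample_size = 21, 30
p_hat = good_cases / sample_size

# Normal-approximation 95% confidence interval for a proportion.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
lower = max(0.0, p_hat - margin)
upper = min(1.0, p_hat + margin)

print(f"Observed rate of good practice: {p_hat:.0%}")
print(f"Rough 95% interval:             {lower:.0%} to {upper:.0%}")
```

On these figures the interval runs from roughly the mid-fifties to the mid-eighties per cent: a very wide range on which to hang a published judgement, and a biased selection would only make matters worse.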

The handbook clearly recognises that this is an issue. Paragraph 96 says: “The lead inspector should make a proportionate decision about the number of cases to sample in order to secure sufficient evidence for a representative judgement.” But it doesn’t say how. And what is meant by the phrase ‘proportionate decision’ is not explained. And Paragraph 61 says: “In exceptional circumstances the number of cases tracked may need to exceed 30 in order to secure a representative judgement. The lead inspector should make a proportionate decision about the number of additional cases to track.” What these ‘exceptional circumstances’ are is not explained. Nor is there any explanation of what securing a ‘representative judgement’ involves. But clearly, we can all relax because that good old ‘proportionate decision’ will, no doubt, be made to resolve the issue!

That, you may be surprised to hear, is it. The handbook says no more on the issue of how to ensure that the sample of cases inspected is representative. There appears to be no further guidance to inspectors. It’s over to them, I suppose.

It seems to me that this is very worrying. In a post I made some time ago I looked at what I call the red bead problem. In that post I considered the problem of natural variation and asked the question: “How can Ofsted be sure that the variation between different local authorities, revealed in its inspections of children’s services in England, is due to differences in performance rather than just due to chance?”

Of course, I never got an answer from Ofsted, probably because, I expect, nobody there reads my blog. I have eventually got around to putting in yet another Freedom of Information Act request to Ofsted, asking them how they ensure their samples are representative. So far, they haven’t given me much of an answer, but when they do, I’ll keep you informed.


Watch this space.