When I was a student social worker in the 1970s I undertook
one of my practice placements – in what was then called ‘social services’ – in a
large building in the centre of a large town not too far from London. I may
have only been a student social worker, but, like every other social worker
that worked there, I had my own office. To be sure it was not a large office
and it was nothing fancy, but it was private and quiet and clean.
It seems things have changed … for the worse.
This week I read a sad report in Professional Social Work of an alarming survey
concerning the working conditions of social workers in Britain in 2015. It is
peppered with words like ‘cramped’, ‘noisy’, ‘dark’ and ‘dirty’. Indeed, more
than 60% of those questioned did not think their workplace was fit for purpose.
Many respondents complained vociferously about shared work spaces and ‘hot
desking’: more than 60% had no quiet place to make sensitive phone calls, and
more than 70% said there was no quiet place to work and concentrate.
It is shocking that in 2015 working conditions for social
workers are as bad as they are. It is not just that such an environment is
unpleasant and dispiriting to work in; it is downright dangerous.
Distraction due to background noise is an important
factor in workplace error and has been recognised as something that needs to be
minimised in many safety-critical industries. In civil aviation the ‘sterile
cockpit rule’ makes it a mandatory requirement that there be no distractions on
the flight deck during critical phases of flight.
https://en.wikipedia.org/wiki/Sterile_Cockpit_Rule
In nursing, red tabards worn during drug rounds indicate that the wearer
should not be disturbed. They have been found to reduce medication errors.
http://www.ncbi.nlm.nih.gov/pubmed/24930500
There is simply no excuse for employers creating or tolerating conditions
in which social workers are more likely to make errors. Serious attention needs
to be given to how to achieve safe workplaces – albeit on limited budgets –
which are quiet, comfortable and have dedicated private spaces. Failure to do so
risks more than the welfare of members of the workforce – it risks the safety of children
and young people.
Wednesday, 23 December 2015
Saturday, 19 December 2015
Ofsted and the Parable of the Red Beads
The government’s proposed reforms of children’s services in
England assign a pivotal role to the inspectorate Ofsted. If a local
authority’s children’s services department is rated ‘inadequate’ by Ofsted, it
will now be given six months to improve or risk being taken over. That’s
drastic stuff, so there has never been a better time to think very hard about
how valid and reliable Ofsted inspections are.
To help do just that I have developed a thought experiment based on the red
bead game used by the quality guru Dr. W. Edwards Deming as a teaching aid in
the seminars and lectures he gave across the world until his death in 1993.
Dr. Deming used the game to
demonstrate that even with identical methods and tools there will always be
variation in results and that this variation often has nothing to do with what
individuals and groups actually contribute to delivering a particular process.
My thought experiment adapts the red bead game as follows:
Imagine you have 150 pots, each one corresponding to a local
authority in England. In each pot you place 5000 beads, 4000 of which are white
and 1000 of which are red. The beads represent ‘cases’ or ‘service episodes’.
The white beads are examples of acceptable or good practice and the red ones
are examples of poor practice. So 1 in every 5 cases (20%) is substandard. [1]
Now simulate the activity of an inspector by randomly
extracting from each pot 50 beads and examining what you get [2]. You will be
very lucky indeed to find that each extract contains 40 white beads and 10 red
ones (corresponding to the overall proportion of 20% red beads in the pot). On
the contrary you are highly likely to have quite a lot of variation in the
white/red proportion of each extract. In some cases the number of red beads
will be well below 10, in some it may even be 0, and in some cases it will be
considerably higher than 10. In a few cases there may even be more red beads in
the extract than white.
Results for the first 10 pots might look like this:
Pot | No. (%) red
A | 5 (10)
B | 15 (30)
C | 11 (22)
D | 19 (38)
E | 2 (4)
F | 17 (34)
G | 8 (16)
H | 23 (46)
I | 5 (10)
J | 18 (36)
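The draw described above is easy to simulate. The following Python sketch (the seed and names are my own illustration, not part of the thought experiment itself) repeats the 50-bead extraction for ten identical pots:

```python
import random

# Each pot holds 5000 beads: 4000 white ('acceptable practice')
# and 1000 red ('poor practice'). An inspection draws 50 beads
# at random, without replacement.

def inspect_pot(pot_size=5000, red=1000, sample_size=50, rng=random):
    """Return the number of red beads in one random draw of sample_size beads."""
    pot = ['red'] * red + ['white'] * (pot_size - red)
    return rng.sample(pot, sample_size).count('red')

random.seed(1)
results = [inspect_pot() for _ in range(10)]
for pot_label, n_red in zip('ABCDEFGHIJ', results):
    # With a sample of 50, each red bead is worth 2 percentage points.
    print(f"Pot {pot_label}: {n_red} ({n_red * 2}%) red")
```

Run it a few times with different seeds: the pots are identical by construction, yet the counts bounce around the expected value of 10 from one draw to the next.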
This variation cannot be ascribed to anything that is going
on inside the pots (because we know that we put 4000 white and 1000 red
beads into each one and that they have just stayed there until they were
extracted). So it would be very wrong indeed to ascribe to any particular pot a
description such as “too many reds” or “too much poor practice” or
“inadequate”. And it would be very wrong to conclude that pots D, F, H and J
should be made subject to special measures while those responsible for pots E,
A and I should be lauded for their outstanding performance!
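The size of this chance variation can be worked out exactly. A short Python calculation (my own sketch, using the hypergeometric distribution that drawing without replacement implies) gives the probability that a pot identical to all the others nevertheless yields an extreme-looking sample:

```python
from math import comb

# Pot of N = 5000 beads, K = 1000 of them red; sample of n = 50
# drawn without replacement, so the red-bead count is hypergeometric.
N, K, n = 5000, 1000, 50

def p_red(k):
    """Probability that the 50-bead sample contains exactly k red beads."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Chance that an identical pot looks unusually good or unusually bad:
p_low = sum(p_red(k) for k in range(6))        # 10% red or fewer
p_high = sum(p_red(k) for k in range(15, 51))  # 30% red or more
print(f"P(10% red or fewer): {p_low:.3f}")
print(f"P(30% red or more):  {p_high:.3f}")
```

Roughly one pot in twenty will look markedly ‘worse’ than the 20% baseline purely by chance, and a similar fraction will look markedly ‘better’ – with nothing whatever distinguishing them from the rest.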
But I hear you ask, perhaps Ofsted has taken steps in the
way it has designed its inspections, and the ways in which it selects its
samples, to minimise the natural variation which occurs in the red bead game?
Perhaps they use clever statistics to ensure that their results are valid?
Well, perhaps they do, but there is no
evidence of it. I have scoured the Ofsted website for anything which suggests
that they have thought about the red bead problem. And I have written to them
and pursued them with a Freedom of Information Act request to find out if they
use statistical techniques to try to ensure inspections are valid. The reply I received
gives no indication that they do. [2]
But it is not really up to me to justify Ofsted’s methods.
It is up to them. In 2012 Professor Dylan Wiliam, of the University of London’s
Institute of Education, challenged Ofsted to evaluate the reliability of its
school inspections and publish the findings, asking: “If two inspectors inspect
the same school, a week apart, with no communication between them, would they
come to the same ratings?” (Times Educational Supplement, 03/02/12).
I don’t know whether Prof. Wiliam got an answer but I can’t
find one that has been published. Maybe in 2016 Ofsted could answer a similar
question for me. “How can Ofsted be sure that the variation between different local
authorities, revealed in its inspections of children’s services in England, is
due to differences in performance rather than just due to chance?”
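In the red bead model, Prof. Wiliam’s two-inspectors question is easy to explore. The sketch below sends two independent inspectors to the same pot and asks how often their ratings agree; the rating bands are invented for illustration and are not Ofsted’s actual criteria:

```python
import random

# Two inspectors independently draw 50 beads from the SAME pot
# (5000 beads, 1000 of them red) and each assigns a rating.
# The thresholds below are assumed purely for illustration.

pot = [True] * 1000 + [False] * 4000  # True = red bead (poor practice)
rng = random.Random(2)

def inspect():
    """One inspector's visit: count red beads in a random draw of 50."""
    return sum(rng.sample(pot, 50))

def rating(n_red):
    if n_red >= 15:   # 30% red or more (assumed threshold)
        return 'inadequate'
    if n_red >= 10:   # 20-28% red (assumed threshold)
        return 'requires improvement'
    return 'good'

trials = 10_000
agree = sum(rating(inspect()) == rating(inspect()) for _ in range(trials))
print(f"Two independent inspections agree {agree / trials:.0%} of the time")
```

Even though every ‘authority’ in this model is identical by construction, the two inspectors frequently disagree, simply because each sees a different random handful of cases.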
If Ofsted cannot answer that question in a convincing way it
should not be in the business of inspecting children’s social care and the
government should certainly not be assigning a pivotal role to Ofsted in its so-called
‘reforms’.
Notes
[1] I have no evidence that 1 in 5 cases is in fact substandard,
although it seems to me a reasonable ‘guesstimate’, especially in view of
the fact that Ofsted finds such a large number of authorities ‘inadequate’ or
‘requiring improvement’. I have tried, without success, to discover if Ofsted
is able to estimate what the proportion of substandard cases is in the entire
‘population’ of the cases they have reviewed in (say) the last 10 years.
[2] Ofsted’s ‘Inspection Handbook’ speaks of ‘tracking’ no
more than 30 children during an inspection and ‘auditing’ a ‘sample’ of 20 case
files. I could find no detailed information in this document about how the cases
are chosen.
Friday, 18 December 2015
Outcomes of Ofsted child protection inspections
Hard on the heels of the much trumpeted, but in my view
wrongheaded, government ‘reforms’ comes the latest state of play information
from Ofsted.
Apparently Ofsted has informed BBC News that more than three-quarters
of inspection reports published between February 2014 and September 2015
found local authority children’s services in England to be in need of
improvement.
19 out of 74 (just over a quarter) were judged to be
‘inadequate’ by Ofsted inspectors. 38 out of 74 (just over a half) were judged
to ‘require improvement’. Only 17 out of 74 (just under a quarter) were judged
to be ‘good’. None were judged to be ‘outstanding’.
The pie chart says it all.
With the government now prepared to give inspections such a
pivotal role in determining the future of local authority children’s services, and
with recent inspections seemingly revealing such a bleak picture, Ofsted should
be required to provide a robust defence of the validity of its inspections. If,
as I suspect, Ofsted has been getting it wrong and unnecessarily labelling some
authorities ‘inadequate’, then viable organisations are now in danger of being
unnecessarily dismantled and the services they provide needlessly turned over
to untried newcomers.
Tuesday, 15 December 2015
We don’t want ‘landmark reforms’ – we want safer child protection services
Yesterday the Prime Minister outlined what is being
described as a major overhaul of child protection in England. He claims that it
is “a landmark reform”. There is a very thorough account of what was announced
in the Yorkshire Post.
At the heart of the package of changes is the proposal that “failing”
local authority children’s services will be taken over unless they rapidly
recover from an Ofsted finding of ‘inadequate’. Trusts composed of other, more
successful local authorities and charities will step in to run the services.
Something similar has, of course, already happened in places
like Doncaster, so the idea is not entirely new, but what the Government is now
proposing appears to be much more of an automatic process. If Ofsted rates an
authority ‘inadequate’, and the authority does not improve within the next six
months, a commissioner, who can call in outside help from other local
authorities and charities, will be put in charge. It is as simple as that.
For reasons outlined
below I don’t like these reforms. They seem to me to be Ofsted-driven, process
focused and not based on a clear understanding of how service failures occur.
In short, they are a bad idea.
Instead of setting out the argument in a separate post I am
going to reproduce (with permission) an article being published by the Safer Safeguarding Group.
This is a recently formed group of professionals from a
variety of fields who all want to see much clearer thinking about safety in
child protection. Needless to say I am a member. If you would like to join the
group or want any more information, use the ‘contact’ section of the group’s
website or email: SaferSafeguarding@gmx.com
Here is the article:
At the heart of the
package is a proposal that failing local authority children’s services will be
taken over unless they rapidly recover from an Ofsted finding of ‘inadequate’.
In the absence of marked improvement within six months, a commissioner will be
appointed who can establish a trust, composed of other more successful local
authorities and charities, to step in and run the services. With one in four Ofsted
child protection inspections resulting in a verdict of ‘inadequate’, there
looks likely to be plenty of scope for commissioners to step in and for trusts
to take over. We have important reservations about any improvement
process that is largely driven by the outcomes of Ofsted’s inspections, which
tend to concentrate on issues of process rather than the fundamental
issues of safe operation.
Nor do we believe that
root and branch organisational change is a good way to develop safer services.
New commissioners, trust boards and management structures may all sound like a
‘new broom’, but we believe that lasting safety advances come about through
slow, incremental and continuous improvement in which front-line practitioners,
in particular, are involved in understanding how service failings occur and how
to prevent and mitigate them. Large-scale organisational change is highly
disruptive. All too often, once the dust has settled, unhelpful and unsafe
working practices are found to have persisted unaddressed. Not only that, but
changes of this type do not come cheap. Large-scale reorganisations eat up
scarce resources and seldom demonstrate value for money. We believe that scarce
resources should be targeted on front-line services and on trying to understand
where the weaknesses in organisational defences are to be found. Initiatives to
eliminate those weaknesses and so increase safety and service quality should be
the priority.
In short we believe
that the Government has fallen into the trap of believing that lasting
improvements can be brought about by heavy-handed top-down initiatives. In their
ideal world Ofsted will point the finger and a new commissioner and a new trust
will sweep in - like the proverbial cavalry - to reconfigure services. But in
reality this type of approach is unrealistic. It will fail to engage those
people who actually do the work, causing a more stressful working environment,
and it will fail to identify the systematic and structural weaknesses that
underlie poor performance and safety failings.
What is required
is to create a learning culture in which the people who do the work feel
free to explore how things go right and how things go wrong and to propose and
research improvements; in other words, promoting a just reporting culture. In contrast, blame cultures, in which bullying
and threats impede thinking, are inherently unsafe because fear prevents people
from challenging the hierarchy and initiating change. Only through the
development of a just learning culture can an organisation achieve real
progress towards making children safer.