The final evaluation report on the Children’s
Social Care Innovation Programme [1] seemed to me to raise more questions than
it answered.
While the authors were generally upbeat
about the impact of ‘Wave 1’ of the programme, to my mind their report
qualifies many of its conclusions about the service quality improvements which
the programme may have spawned. I found myself wrestling with phrases such as “The
quality of services increased in 42 of the 45 projects that reported outcomes
in Wave 1, in so far as these outcomes reflected the aims, or service users
reported improvements” (page 70). I am still debating with myself exactly what
that sentence means.
Many of the ‘hard’ indicators used to
evaluate the projects (such things as reducing the number of children looked
after, the number of children assessed as being in need, the number of
re-referrals and so on) also seemed to me to suffer from being what I call the ‘usual
suspects’ – data that is collected centrally in the belief that it somehow
relates to service quality, though no-one is exactly sure how.
And some of the ‘soft indicators’ seemed
very soft indeed – e.g. ‘improving the quality of relationships between young
people and their peers’ and ‘improving young people’s and families’ resilience’.
I can’t think how I would measure either of those.
I also wasn’t convinced by the section of
the report dealing with the value for money of the projects. While there seems
to be evidence that there were some savings as a result of the projects, the
report gives little information on the methodology used, except to say that not
all the projects used the same methodology to monitor costs and benefits. There
is also no discussion of the considerable difficulties in measuring unit costs
in organisations which have large overheads and in assigning indirect costs to
particular activities. [2] And I could find no discussion of whether local
authority costs might be reduced, not because of greater efficiencies but as a
result of work being picked up by other agencies.
My final reservation about the qualified
optimism of this report concerns what is known as the Hawthorne effect [3]. In
the 1920s an Australian psychologist, Elton Mayo, conducted research at a
factory in Illinois. The aim of the study was to see if workers would become
more productive in improved lighting conditions. At first productivity appeared
to improve when changes to the lighting were made. But the changes were not
sustained and productivity dived when the study ended. Mayo and others
hypothesised that the productivity gains occurred because of the effect on workers’
motivation as a result of the interest being shown in them by the researchers.
Subsequent research confirmed the existence of such an ‘observer effect’.
Armed with this piece of knowledge from
what used to be called ‘industrial psychology’, it does not take a great deal
of imagination to see how many of the perceived improvements witnessed in the
innovation projects may be a result of workers and managers experiencing
improved morale and motivation as a result of the interest shown in them by the
project’s funders and by evaluation researchers. It follows that a true test of
the effectiveness of the innovation can only be made some time after the first
evaluation has taken place. Are the changes sustained or do they quickly erode
after all the fuss has died down?
A lot of what we know about innovation in
organisations suggests that innovations often make a
considerable initial impact, only to be followed by a period of sustained retrenchment.
That thought brings me to developments in theory and practice that took place
in Japan in the second half of the twentieth century [4] and that will be the
subject of my next post.
Notes
[1] Sebba, J., Luke, N., McNeish, D. and Rees,
A., Children’s Social Care Innovation Programme – Final Evaluation Report,
Children’s Social Care Innovation Programme Evaluation Report 58,
Department for Education, London, November 2017.
[2] For a brief account of some of these issues
see the following article in The
Economist: “Activity-based costing”, 29th June 2009.
[4] Imai, M., Kaizen: The Key to Japan’s Competitive Success (McGraw-Hill, New York, 1986).