Tuesday, 21 November 2017

Two, four, six, eight, let’s all inn-o-vate!

The final evaluation report on the Children’s Social Care Innovation Programme [1] seemed to me to raise more questions than it answered.

While the authors seemed generally upbeat about the impact of ‘Wave 1’ of the programme, to my mind their report qualifies many of its conclusions about the service quality improvements which the programme may have spawned. I found myself wrestling with phrases such as “The quality of services increased in 42 of the 45 projects that reported outcomes in Wave 1, in so far as these outcomes reflected the aims, or service users reported improvements” (page 70). I am still debating with myself exactly what that sentence means.

Many of the ‘hard’ indicators used to evaluate the projects (such things as reducing the number of children looked after, the number of children assessed as being in need, the number of re-referrals etc. etc.) also seemed to me to suffer from being what I call the ‘usual suspects’ – data that is collected centrally in the belief that it somehow relates to service quality, but no-one is exactly sure how.

And some of the ‘soft indicators’ seemed very soft indeed – e.g. ‘improving the quality of relationships between young people and their peers’ and ‘improving young people’s and families’ resilience’. I can’t think how I would measure either of those.

I also wasn’t convinced by the section of the report dealing with the value for money of the projects. While there seems to be evidence that there were some savings as a result of the projects, the report gives little information on the methodology used, except to say that not all the projects used the same methodology to monitor costs and benefits. There is also no discussion of the considerable difficulties in measuring unit costs in organisations which have large overheads, or of the problems of assigning indirect costs to particular activities. [2] And I could find no discussion of whether local authority costs might have been reduced, not because of greater efficiencies, but as a result of work being picked up by other agencies.
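
To illustrate the overheads point, here is a minimal sketch, with entirely invented figures and team names (none of this comes from the report), of how the choice of method for allocating shared overheads changes the apparent unit cost of a service, and therefore any ‘savings’ attributed to an innovation project.

```python
# Hypothetical illustration only: invented figures, not taken from the evaluation report.
# The same direct costs can produce quite different 'unit costs' depending on how
# shared overheads (buildings, IT, management) are allocated to activities.

direct_costs = {"edge_of_care_team": 400_000, "assessment_team": 600_000}
caseloads = {"edge_of_care_team": 200, "assessment_team": 1_000}
overheads = 500_000  # shared across both teams

def unit_cost(team, overhead_share):
    """Unit cost per case = (direct costs + allocated overhead) / number of cases."""
    return (direct_costs[team] + overhead_share) / caseloads[team]

# Method A: split overheads equally between the two teams.
print(unit_cost("edge_of_care_team", overheads / 2))         # 3250.0 per case

# Method B: split overheads in proportion to caseload.
share = overheads * caseloads["edge_of_care_team"] / sum(caseloads.values())
print(unit_cost("edge_of_care_team", share))                  # ~2416.7 per case
```

The numbers are make-believe, but the point stands: unless projects use a common costing methodology, the savings they report are not directly comparable.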

My final reservation about the qualified optimism of this report concerns what is known as the Hawthorne effect [3]. In the 1920s an Australian psychologist, Elton Mayo, conducted research at a factory in Illinois. The aim of the study was to see if workers would become more productive in improved lighting conditions. At first productivity appeared to improve when changes to the lighting were made. But the changes were not sustained and productivity dived when the study ended. Mayo and others hypothesised that the productivity gains occurred because of the effect on workers’ motivation as a result of the interest being shown in them by the researchers. Subsequent research confirmed the existence of such an ‘observer effect’.

Armed with this piece of knowledge from what used to be called ‘industrial psychology’, it does not take a great deal of imagination to see how many of the perceived improvements witnessed in the innovation projects may be the result of workers and managers experiencing improved morale and motivation because of the interest shown in them by the projects’ funders and by evaluation researchers. It follows that a true test of the effectiveness of the innovation can only be made some time after the first evaluation has taken place. Are the changes sustained or do they quickly erode after all the fuss has died down?

A lot of what we know about innovation in organisations suggests that innovations can make a considerable initial impact, only to be followed by a period of sustained retrenchment. That thought brings me to developments in theory and practice that took place in Japan in the second half of the twentieth century [4] and that will be the subject of my next post.

Notes

[1] Sebba, J., Luke, N., McNeish, D. and Rees, A. (2017). Children’s Social Care Innovation Programme: final evaluation report. Children’s Social Care Innovation Programme Evaluation Report 58. London: Department for Education.

[2] For a brief account of some of these issues see “Activity-based costing”, The Economist, 29th June 2009.


[4] Imai, M. Kaizen, the Key to Japan’s Competitive Success (McGraw-Hill, New York, 1986).

Tuesday, 14 November 2017

Quality in children’s services – the central problem?

I missed a very important, if small-scale, piece of research which was published about a year ago, and so have only belatedly finished reading it. This underreported piece of work points very clearly to what I think should be seen as the central problem of understanding issues of quality in children’s services in England.

The report in question [1], funded by the Nuffield Foundation and undertaken by researchers from the NSPCC, Loughborough University and the Child Outcomes Research Consortium, sets out the findings of a feasibility study into undertaking a larger project to try to understand how to define ‘good’ children’s social care services and how to assess if improvements occur.

It consists of two parts: a very useful and excellently reported literature review; and an analysis of the relationship between the Department for Education’s (DfE) outcome data for children and Ofsted ratings of children’s services.

The literature review section of the report concludes that:
  • There is a lack of consensus about what are good and what are bad outcomes for children’s social care services. 
  • There is no clarity about what indicators should be used to measure these outcomes.
  • There is only ‘mixed evidence’ about what characterises good children’s social care services and much of it is based on expert opinion rather than quantitative research.

Perhaps there are few surprises there, but having such a clear and systematic account provides an important baseline for future thinking.

Much more surprising is the analysis of the relationship between the Department for Education’s outcome data for children and Ofsted ratings of children’s services [2].

While one might reasonably expect to find that local authorities rated as ‘good’ or ‘outstanding’ by Ofsted also scored highly on the DfE’s outcome data [3], in fact the researchers found very little association between the two. Perplexingly, of the six local authorities ranked in the best 10% according to the DfE outcome data, only two were judged to be ‘good’ by Ofsted, while two were found to be ‘inadequate’ and one ‘requires improvement’ [4].

Looking at the data for all the local authorities, a regression analysis showed that only one child outcome variable and one workforce variable had statistically significant relationships with Ofsted ratings, and even these associations were weak. Bizarrely, the child outcome variable concerned was 'the percentage of looked after children who had a missing incident during the year'. The analysis showed a weak positive relationship, indicating that the better the Ofsted rating, the more missing incidents there were! The workforce indicator that had a weak statistically significant relationship with Ofsted ratings was the agency worker rate. Reassuringly this showed a negative relationship: the lower the agency worker rate, the better the Ofsted rating.

There were no statistically significant relationships between the other nine variables and the findings of the Ofsted inspections.
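
For readers unfamiliar with this kind of analysis, the sketch below shows its general shape: code each Ofsted rating as an ordered score and test whether a DfE indicator is associated with it. This is purely illustrative; the data are invented, treating an ordinal rating as a numeric score is a simplification, and the study itself may well have used a different model.

```python
# Illustrative sketch only: invented data, not the study's dataset.
# Ofsted ratings coded 1 = inadequate, 2 = requires improvement, 3 = good, 4 = outstanding.
from scipy import stats

ofsted_score = [1, 2, 2, 3, 3, 3, 4, 2, 1, 3, 4, 2]            # one value per local authority
missing_incident_rate = [3.1, 4.0, 2.5, 5.2, 4.8, 6.0, 7.1,    # % of looked after children with a
                         3.9, 2.8, 5.5, 6.8, 4.2]              # missing incident (invented figures)

result = stats.linregress(ofsted_score, missing_incident_rate)
print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.3f}")

# A positive slope with p < 0.05 would mirror the report's counter-intuitive finding:
# the better the Ofsted rating, the more recorded missing incidents.
```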

These are very disturbing results. 

The authors of the study put their findings before a seminar which was attended by what they describe as a variety of ‘experts’ from the DfE, Ofsted, the Association of Directors of Children’s Services, the Local Government Association, local authorities, the NSPCC and researchers from various universities. The seminar, we are told, concluded with a ‘strong consensus’ that the DfE data and Ofsted ratings could not be relied upon to assess the quality of children’s social care services.

Just in case anybody thinks that we can just note these conclusions and move on, we need to be clear that they point to the systematic unreliability of at least one, and possibly both, of the main approaches used to measure the quality of children’s services in England. That is a major problem, a fundamental flaw.

Clearly both Ofsted and the DfE should take this very seriously. But there is no evidence that they are doing so. Ofsted’s Chief Inspector, Amanda Spielman, was recently asked about Ofsted’s fitness to inspect children’s social care at a meeting of the House of Commons Education Select Committee, but she didn’t say anything about this research [5].

And as usual the DfE seems to apply a least-said-soonest-mended philosophy to communicating with the rest of us, so I could find nothing from them either.

To my mind the research points to a very hard rock in a very hard place, namely that the whole edifice of quality improvement in children’s social care is built on very shaky foundations.

I think there’s a mixed metaphor in that last sentence but frankly I don’t care!

Notes

[1] La Valle, I., Holmes, L., Gill, C., Brown, R., Hart, D. and Barnard, M. (2016). Improving Children’s Social Care Services: Results of a feasibility study. London: CAMHS Press.

[2] Ofsted is the Office for Standards in Education, Children’s Services and Skills. It inspects and regulates services in England that care for children and young people, and services providing education and skills in England for learners of all ages. Ofsted’s inspectors carry out inspections of children’s services, including child protection, which rate individual local authorities as ‘outstanding’, ‘good’, ‘requires improvement’ or ‘inadequate’. https://www.gov.uk/government/organisations/ofsted/about

[3] As the government department responsible for children’s services in England, including child protection, the Department for Education (DfE) has amassed a ‘data set’ relating to the outcomes for children it believes to be important. The data set comprises child outcome indicators (such as referrals within 12 months of a previous referral, repeat child protection plans, return home from care, and the emotional and behavioural health of looked after children) and workforce indicators (children in need per social worker, social worker turnover rate and agency worker rate). See La Valle et al. (op. cit.) Chapter 4 for more details.

[4] One inspection was incomplete.

[5] House of Commons, Education Committee, Tuesday 31st October 2017

Wednesday, 8 November 2017

Up and up and up we go ...

The Department for Education’s Characteristics of Children in Need: 2016 to 2017 England statistics have just been published.


It has been yet another year of unrelenting growth in the work of children’s services departments throughout the country.

The number of Section 47 enquiries (conducted when there is a concern a child may be at risk of significant harm) has increased yet again, from 172,290 in 2016 to 185,450 in 2017, an increase of 7.6%.

There was also an increase in the number of initial child protection conferences which took place in the year, from 73,050 in 2016 to 76,930 in 2017, an increase of 5.3%.

And the number of children who were the subject of a child protection plan at 31st March has increased, from 50,310 in 2016 to 51,080 in 2017, an increase of 1.5%.
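
The percentage increases quoted above are simple year-on-year changes; here is a minimal sketch for anyone who wants to check the arithmetic:

```python
# Year-on-year percentage increases calculated from the figures quoted above.
figures = {
    "Section 47 enquiries": (172_290, 185_450),
    "Initial child protection conferences": (73_050, 76_930),
    "Children on a child protection plan at 31 March": (50_310, 51_080),
}

for name, (y2016, y2017) in figures.items():
    change = 100 * (y2017 - y2016) / y2016
    print(f"{name}: +{change:.1f}%")   # +7.6%, +5.3%, +1.5%
```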

These worrying figures are part of the marked upward trend in child protection work in England since 2010. We hear a lot from ministers and civil servants about things like their half-baked scheme for the accreditation of social workers.

But we don’t seem to hear a great deal from them about these sustained and unremitting increases in workload which are the single biggest issue facing child protection work in England today. 

Unless some substantial action is taken, the volumes of work will become unmanageable; it is as simple as that.

Mandatory Reporting of Child Abuse and Neglect – sadly the saga continues

I was sorry to read in Children and Young People Now that the British government has not yet ditched the idea of introducing mandatory reporting of child abuse and neglect in England, although to be fair it is hard to see how they could have just ‘lost’ the results of the consultation that they carried out in 2016.

My principal objections to mandatory reporting are as follows.

Firstly, it perpetuates the blame culture by making a failure to report child abuse and neglect a criminal offence for some groups of workers. If people feel that they may be blamed and criminalised for making mistakes at work, they are unlikely to be open and honest about those mistakes and are likely to practise in an increasingly defensive manner. By increasing the fear of blame, mandatory reporting would reduce openness about service failures and reduce the reporting of slips and lapses. That would make services less safe than they currently are.

Secondly, it ignores the fact that all too often professionals and other practitioners are not faced with clear cases of child maltreatment, but rather develop suspicions and concerns over a period of time. What is needed is not a threat of punishment to force people to report, but support and guidance for those who have a concern which they may not yet fully understand. Mandatory reporting is an on/off process: if you believe a child is being abused you must report it or face sanctions; if you do not, you just carry on. In the real world things are a lot fuzzier, and it is often difficult to see the wood for the trees. Helping workers understand more about the nature of child abuse and neglect, and how to recognise it, is much more likely to ensure that the right children are referred for help.

Thirdly, it is a distraction. The government should be pursuing policies which actually make children safer, not introducing punishments for people who get things wrong. Once you have a mandatory reporting regime, people have to be trained to work in it. Suspected violations have to be investigated. Decisions about prosecution have to be made. There have to be trials. Some people might be wrongly convicted. And the impact of further stoking up the blame culture would have to be managed as defensive practice proliferates.

Mandatory reporting is a bad idea. I hope the government has the guts to resist its introduction.

Thursday, 2 November 2017

Cuts

In Britain ministers have an unwholesome habit of turning on public officials and others who draw attention to the inevitably negative impact of funding cuts. “Just get on with the job and stop behaving like wimps” appears to be their knee-jerk response.

Only the other day our Home Secretary, Amber Rudd, was caught lambasting police chiefs for their whingeing about swingeing cuts and for daring to point out that these coincide with rising crime and increased public demand. The Metropolitan Police Commissioner, Cressida Dick, was unrestrained in condemning the funding squeeze faced by her force, an “incredibly demanding” £400m more in annual savings on top of the £600m a year of cuts already made. But her pleas fell on Amber Rudd’s deaf ears. The Home Secretary wants no coming-the-old-soldier or shroud-waving on her watch.


The government has tried for many years to pretend that child protection in the UK is immune from funding cuts, so I dare say it wants to hear no more from organisations like the National Children’s Bureau, which has had the temerity to conduct very useful research showing the scale and impact of cuts on children’s services. Forty per cent of local authorities are reported as being unable to meet their statutory duties.


The problem, of course, is not just restrictions on cash budgets, but rising demand. The children’s minister, Robert Goodwill, naively points to what he calls increased spending, but fails to set this against unprecedentedly high levels of demand. He forgets that you don’t get ‘owt’ for ‘nowt’, as they say in his native Yorkshire.

In a post last year, I drew attention to the impact this sort of thinking had had on services in Louisiana, where year after year of cuts and squeezes had emaciated services. Now the same is happening here.


What ministers don’t realise, or don't want to admit, when it comes to cutting is that services don’t become more efficient simply because you give them less money. Usually they just shrink. Services can become more efficient and so require less funding, but that doesn’t happen by fiat. It needs to be planned for and carefully engineered.


Penny-pinching usually brings nothing but seriously negative consequences. The likes of Mrs. Rudd and Mr. Goodwill need to take note.