I just managed to get my response to the consultation on the British Government’s “Draft of Children’s Safeguarding Performance Information” to the Department for Education in time. See my earlier post for my first reaction.
Below is an edited version of what I said.
It is naïve to think that a small central group, however
well informed, can specify in advance “… the key information required by local
areas to understand changes in concentrations of children's need and trends
over time, inform service planning and development and inform service
improvement”. At best, what can be achieved in this way is a core data set from which national aggregations and comparisons can be derived.
The collection and use of information in delivering any service is a dynamic process. Attempts to constrain it too much result in those who deliver the service having to work with the ‘wrong’ information – i.e. information that other people think they should have, not information which they need.
I believe that Government should mandate local authorities
to base the provision of their Children’s Services on sound, relevant and
reliable information. But I believe that Government should be extremely
cautious about specifying what this should be.
Government itself needs to be realistic about what information collected through national statistics can show. Rather than proposing detailed, specific measures, it would be much more helpful for Government to be clear and consistent about what types of information it seeks to hold.
For example the “Draft of Children’s Safeguarding
Performance Information for Consultation” document is inappropriately named. In
the first place there is no attempt to make clear whose ‘performance’ is being
measured. Secondly, many of the proposed measures are not measures of
‘performance’ but rather descriptors of states of the environment and
indicators of demand for services. It is really a crass and potentially
damaging mistake to be unclear or ambiguous about whether performance is or is
not measured by a particular statistic. The careless use of ‘performance’ in
the titling of the consultation document suggests to me that thinking by the
Department about this issue remains rudimentary, if not crude.
I suggest that in general, both at local and national level, decision-makers and service deliverers will be well served by information of the following types:
1. Information about the environment in which the service operates, such as general demographics, trends in population growth, statistics concerning economic trends and social well-being, etc.

2. Specific information about the demand for services as they are presently configured – e.g. number of referrals, number of assessments, number of Section 47 enquiries, number of child protection conferences, number of children made subject to a child protection plan, number of care proceedings initiated, number of children taken into care, number placed for adoption, etc.

3. Views of service users (especially children and young people), service providers (especially those who work at the front-end) and other key stakeholders about how the service needs to respond and develop in order to better meet the needs of children and young people.

4. Information about how services are meeting the five performance objectives (Slack, N. et al., “Operations and Process Management”, 2nd ed., Harlow: Prentice Hall, 2006, page 40). These are quality, speed, dependability, flexibility and cost.

5. Information about the single most important input into the system – i.e. information about the people who provide the service.
Many of the measures proposed in the consultation document have an arbitrary character and suggest the question, ‘if this, then why not that as well?’. It is hard not to suspect that some measures have been included simply because they are already collected and, conversely, that some equally interesting (or uninteresting) measures have been excluded simply because the data is not currently collected.
A great deal more information is required in order to achieve a proper understanding of the environment
in which a child protection service operates. For example, at local level
quite rapid changes can occur as a result of depopulation, migration or changed
economic conditions. Service planning needs, so far as is possible, to take
these into account. Birth rates, the opening of new businesses attracting young
families, an influx of migrants or movements of children and young people from
inner cities to suburbs are examples of the kind of information that needs to
be factored into any consideration of the extent and nature of the services to
be provided.
An important element of such environmental information is the kind provided by the NSPCC’s prevalence studies (Cawson et al., 2000; Radford et al., 2011). Both nationally and locally there need to be more frequent and more sophisticated attempts to understand both the prevalence and incidence of child abuse and neglect. No commercial service
(e.g. an airline) would try to operate without having at least a rough idea of
the size of demand for particular services, but that is what we seem to expect
local authority practitioners and managers to do as a matter of course in
providing child protection services. How much child abuse is there in London or
Birmingham or Leeds or Manchester or Bristol? Presently the answer is that
nobody knows. I found little in the draft that would cast much light on these
most important considerations.
Specific information about the demand for current services is more adequately addressed in the draft, but even here there are some inexplicable omissions in what is proposed. I frequently despair at how inaccessible basic statistical information about the demand for current child protection services remains; it should be available in a form that is readily accessible to all and which can be replicated at both national and local level.
The number of children referred (and the rate), the number assessed, the number of section 47 enquiries undertaken, the number of emergency protection orders or police places of safety, the number of child protection conferences, the number of children made subject to a plan and the number made subject to care proceedings (and the corresponding percentages) seem like items of very basic information which I would expect everyone to know at local and national level. That way a practitioner or a manager in one location can immediately tell an enquirer that the rate of referral in their locality is higher than the national rate, but that a smaller proportion of cases results in an assessment or section 47 enquiry, or that the proportion of all cases resulting in care is smaller than the national average, and so on.
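To make this concrete, here is a minimal sketch in Python of the kind of local-versus-national comparison such basic data would support. All the figures and field names are invented for illustration; they are not drawn from the draft or from published statistics.

```python
# A minimal sketch, with invented figures, of the local-versus-national
# comparison that basic demand data would support.

def rate_per_10k(count: int, child_population: int) -> float:
    """Events per 10,000 children."""
    return 10_000 * count / child_population

# Hypothetical figures for one year (assumptions, not real statistics).
local = {"referrals": 2_400, "s47": 310, "children": 48_000}
national = {"referrals": 605_000, "s47": 111_700, "children": 11_300_000}

for name, d in (("this locality", local), ("nationally", national)):
    print(f"{name}: {rate_per_10k(d['referrals'], d['children']):.0f} "
          f"referrals per 10,000 children; "
          f"{d['s47'] / d['referrals']:.0%} proceed to a section 47 enquiry")
```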
There is very little in the consultation document about the views of service users, especially children and young people, and what there is appears to be envisioned at the local level. Collecting this type of information is relatively expensive and it might, in the first instance, be better collected nationally than through under-resourced local initiatives. It ought to be possible to fund a national programme of data collection, perhaps under the overall direction of the Children’s Commissioner, using a grant from central government.
There is some information in the draft relating to the five
operations objectives, but this is confined largely to quality and speed.
Little relates to dependability, flexibility and cost.
In my view a major component of child protection quality concerns the issue of re-abuse.
There is an urgent need to develop valid and reliable measures of the number of
children who, having received some part (or all) of a child protection service,
suffer re-abuse. Such a measure would also be a good indicator of the amount of
‘re-work’ that is in the system – i.e. cases which are recycling because wrong
decisions were taken on a first attempt at providing a service. A significant difficulty with providing such a measure lies in achieving an objective assessment of whether re-abuse has occurred. However, it should not be impossible to develop an adequate measure.
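As a rough illustration of how such a measure might be operationalised, here is a minimal sketch in Python. The twelve-month window, the notion of a ‘substantiated’ further referral and the toy data are all my assumptions, not a proposed definition.

```python
# A minimal sketch of one possible operationalisation of a re-abuse measure:
# the proportion of closed cases followed by a further substantiated referral
# within twelve months. The window, the data and the idea of 'substantiated'
# are assumptions made purely for illustration.
from datetime import date, timedelta

cases = [
    # (child_id, date case closed, date of further substantiated referral or None)
    ("a", date(2011, 1, 10), date(2011, 6, 2)),  # re-abuse within the window
    ("b", date(2011, 2, 1), None),               # no further referral
    ("c", date(2010, 11, 5), date(2012, 3, 1)),  # further referral, outside window
]

WINDOW = timedelta(days=365)
re_abused = sum(1 for _, closed, again in cases
                if again is not None and again - closed <= WINDOW)
print(f"re-abuse rate: {re_abused / len(cases):.0%}")  # 33% in this toy sample
```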
Another important dimension of service quality is the user
perspective. How responsive, helpful, respectful, listening, caring and
sagacious are those who deliver the service? How do children and young people
perceive the services they are offered? To what extent do they see the services
as meeting their needs in ways that they want? How satisfied are they with what
they receive? How would they improve the service? There are obviously very
great difficulties in collecting this type of information from very young
children, but there are certainly ways in which these problems can be
addressed. Having researchers expert in child psychology observe a sample of cases involving babies and toddlers is one way in which relevant information can be generated.
Capturing this aspect of service quality might also fall
within the remit of the Children’s Commissioner as outlined above. National
data collection may need to precede local initiatives because of the complexity of the research design and the expense of data collection.
A final aspect of quality concerns recording of information.
The focus here needs to be on the accuracy, not simply on the completeness, of
records. The retrievability and accessibility of information are also
important.
The issue of speed has bedevilled previous attempts to develop satisfactory safeguarding performance indicators. Speed is often relatively easy to measure as the difference between two dates. But what is frequently forgotten is that it is bottlenecks in a service process that limit the output of the whole process. If any part of a process exceeds the rate of activity permitted by the bottleneck, work is being produced that cannot be used in time, thus squandering scarce resources.
So the decision about which parts of the system’s speed to
measure must be taken very carefully. There must be good intrinsic reasons for
specifying a particular speed target. For example it is reasonable to try to
measure the speed at which a fire engine reaches the scene of a fire, because
there is a crucial safety consideration involved here – the sooner the fire
brigade are on-scene the sooner help can be delivered. But it is not reasonable
to focus on the time interval between leaving the fire station and first
delivering water at the incident scene, because in some cases there will be
very good reasons why water should not be deployed too quickly if at all (e.g.
some types of electrical fires). Likewise with child protection, it is
important to respond to the referral quickly, but this does not always mean
rushing an assessment or Section 47 enquiry, especially if to do so would lower
the quality of the assessment. I think that the new measures proposed by
Professor Munro, which focus on the distribution of the time taken for parts of
the work, are particularly helpful – because they do not lend themselves to
being converted into meaningless targets.
However, it is usually the case that when an issue of a
child’s safety is raised, those responsible for delivering a safeguarding
service should respond quickly and someone should see the child quickly to
determine to what extent any emergency protection is required. I would
therefore want to include a ‘first response’ statistic, based on a distribution
and not a target.
It is also reasonable to measure the speed of the whole
process and to derive a distribution.
There is a range of possible statistics about dependability, some of which are mentioned in the draft. The extent to which service users are kept informed, the keeping of promised dates or actions, the extent to which aspects of the service design are invariably provided (e.g. seeing the child): these are all important aspects of dependability.
Flexibility,
however, is not well addressed. Crucial is the extent to which the service can
cope with changes in circumstances, such as increased demand or labour
shortages. How often are events (such as Child Protection Conferences)
cancelled or delayed? How are unusual circumstances (e.g. a large influx of
cases resulting from the activities of a serial abuser) coped with? How quickly
can a service be recovered when things go wrong?
Then there is the issue of cost. It is very surprising that no mention of cost is made in the draft, especially since a very large and easily accessible body of information resides in the accounts of local authorities. Unit costs are often difficult to produce, and should not be attempted in an amateurish way, but the costs of providing the whole service, or parts of it, at both local and national level should be readily available.
A local authority with high cost, low demand, low service volume and low quality would certainly be one requiring further investigation.
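As a closing illustration, here is a minimal sketch in Python of the kind of screening that point implies. All figures and thresholds are invented assumptions, purely for illustration.

```python
# A minimal sketch of screening for authorities whose unit cost is high while
# demand, volume and a quality score are all low. Invented figures throughout.
authorities = [
    # (name, annual cost in GBP, referrals, children served, quality score 0-1)
    ("Authority A", 9_000_000, 1_200, 900, 0.45),
    ("Authority B", 6_500_000, 2_300, 2_000, 0.70),
]

for name, cost, referrals, served, quality in authorities:
    unit_cost = cost / served
    flag = (unit_cost > 8_000 and referrals < 1_500
            and served < 1_000 and quality < 0.5)
    print(f"{name}: £{unit_cost:,.0f} per child served"
          + (" -> investigate further" if flag else ""))
```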