Serious Case Reviews (SCRs) have hit the headlines again this week, with a study by OFSTED showing that 1 in 3 reports is "inadequate". According to The Guardian, only 22% of the reports inspected were found to be "good".
I have tried to find the source material for this story on the OFSTED website, but the best I have found so far is a spreadsheet containing only some raw data. It includes no discussion of how OFSTED defines the categories used to classify the reviews, nor any discussion of the methodology. Do OFSTED inspectors simply read the reports, or do they make further enquiries about the case in order to determine how accurate the report is? And do they attempt to measure how much learning from the SCR has occurred in local agencies? I suspect that most of the evaluation is based on the report itself and how well it conforms to OFSTED guidelines.
Sadly, we seem to be drifting into "ends-means displacement". What is important about the SCR process is not the report itself, nor for that matter the process, but the organisational learning which is supposed to accompany it. There is a danger of creating a purely bureaucratic beauty contest based on the formal characteristics of the written documents themselves.
As I have argued before, I do not think that SCRs are the best means of learning from child protection mistakes. A process carried out by a small group, operating in conditions of secrecy, and resulting in most people only ever seeing a highly edited executive summary, must have severe limitations. Rather than producing an annual evaluation of SCRs, with its similarities to school league tables, OFSTED would be better advised to conduct thematic research into how best to promote learning in child protection.