A good example of a lot of data confusing a problem was the release of this year’s National Curriculum tests for 11-year-olds. The BBC website reported yesterday that “Sats test results show improvement”, and the leader of the NASUWT said, “The results continue the year-on-year rise in standards we have seen over the last decade in schools.” Well, up to a point.

The headline statistic is the global figure for 11-year-olds taking Sats tests, which were marked and forwarded to the Department for Education: 81% of those taking the English test, and 80% of those taking the mathematics test, achieved the level expected of them. Nowhere in the news reports I have read are the confidence limits for this statistic mentioned, which the Department for Education’s “Statistical First Release” identifies as 1% (they don’t actually mention confidence limits in the report; I got the 1% from this tantalising passage: “The Department’s Head of Profession for Statistics has determined that a sufficient volume of results is available to give a representative estimate of achievement nationally, but has advised that caution needs to be exercised in over-interpreting small changes of 1 per cent or less in changes between years and between groups”).

Once we know the confidence limits, we can say that since 2005 (or possibly 2004) there has been no statistically significant change in global Sats scores in English, and a similar situation holds for maths since 2008, since the totals have hovered around 79–80%. If Sats scores really do measure what they purport to measure, which is pupil achievement in these subjects, then this suggests that whatever drives pupil achievement has not changed in the last few years. And this, I would have thought, is the real information “hidden in plain sight” in the data.
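The Department’s own caution rule is easy to apply mechanically. Here is a minimal sketch (the function name and the 1-percentage-point threshold are taken from the passage quoted above; the figures are the headline percentages from the post) showing why a one-point move in the headline figure should not be read as an “improvement”:

```python
def meaningful_change(this_year_pct, last_year_pct, threshold_pct=1.0):
    """Apply the Department's caution rule: year-on-year changes of
    1 percentage point or less should not be over-interpreted."""
    return abs(this_year_pct - last_year_pct) > threshold_pct

# English: 81% this year against totals hovering around 80% in recent years
print(meaningful_change(81, 80))  # False -> within the 1-point caution band
```

On this rule, neither the English nor the maths headline figure clears the bar for a real change.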