Speaking with our staff about data on Monday evening, I had four (yes!) thoughts:

a) What is the purpose of data reported to SMT on a termly basis? Is it to report on assessments? Is it to track the rhythms of students’ performance?

I have seen data reported to SMT at KS4 as a staff prediction of where a student will be by the end of the course. While there are issues with this (especially for NQTs, who may lack the experience to make such judgements), there are benefits:

– SMT gain a big picture of potential progress across the year, albeit one based on judgement in the initial terms.
– Intervention can be put into place quickly if students are judged not to be making progress.
– Students and parents are informed early of potential ‘under-performance’, with appropriate initiatives put into place in such instances.

If the data reported is based solely on student assessment, does this take into account ‘easier’ modules? Speaking and listening modules in the UK, for example, are marked on average at a ‘B’ grade; reading and writing average ‘C’ grades (with a broad definition of the word ‘average’ there!). Coursework grades are often higher than exam grades, too (particularly for girls). What analysis can be done with this data if there is not a story accompanying it?

b) How applicable are CATs progress targets when aspiring to produce higher-level skills in humanities and language subjects?

Teachers with decades of experience can mark a whole grade differently based on handwriting (and spelling). Senior markers can be out by two grade boundaries when marking papers. The criteria for the top levels are vague at best: “personal insight”, “sophistication”. Exam boards and consultants revert to saying ‘an A is the one in the standardising book’, as if that particular essay (or five) can encompass all that makes the best writing.

Exam boards cannot even agree on whether an A* is the best that a student of fifteen or sixteen can achieve, or whether it represents a flawless performance.

The truth is that, as with reading-age tests (see my earlier post), there are no better tests to indicate potential progress. If they are to produce credible points of progress, though, I think they might do well to be linked with teacher end-of-course predictions (see above). Unless a child has SPLD, or entirely inextricable social issues, they deserve to be shown and treated as if the top criteria are something to which they can aspire.

Reading the above, I should add the proviso that I have taught classes of students grouped by ‘personality’ as well as ability, and I know the arduous work of managing and motivating students who have a decade or more of negative schooling experience behind them. It is not easy. But if you can’t show students what a sophisticated analytical response feels like, or what adjusting your register for effect can do, who can? Outstanding is not a charismatic performance (and a funny pseudonym) but rather students making more progress than they would by osmosis.

c) How should mock and other ‘exam-like’ data be reported?

Students in Year 10 may bomb on mocks, especially if the teaching focuses on skills of literary analysis (with coursework requirements) in Year 10. There can be an issue with reporting pass grades for coursework modules all year until the mock grades indicate a drop (from Cs to Fs or Us, for example).

A student who makes errors of style or exam approach may benefit from having a ‘predicted’ grade reported alongside their mock exam performance data. Again, all this depends (I think) on the character of the student, and how they might respond to diagnostic feedback and criticism. This is something that should be trained in students on a whole-school basis, and from their entry. It is Learning to Learn, it is HPL, it is making students aspire to think like a writer, an architect, a scientist or a mathematician.

d) How can formative data be reported?

This is a big one. Formative data informs what a teacher does, and why they do it. It moves away from content-based teaching into criteria-based teaching. I have written fairly extensively about how formative data can be recorded, and used to inform future teaching: http://www.thequillguy.com/an-example-of-using-my-markbook/

I think that formative data is useful to report to an observer, and for showing patterns to the teacher. I presented to our whole-school staff about this last year, and I attach the PowerPoint here: Using Data in the Classroom (c) The Quill Guy 2013