Jack Marwood's Icing on the Cake - An education blog

Using Data Properly: Ditch the Cargo Cult Data for Actual Data

26/5/2014

6 Comments

 
Cargo Cult Science is the term popularised by Richard Feynman in his 1985 book ‘Surely You're Joking, Mr. Feynman!’, referring to activity which has the trappings of real science, but which lacks the rigour which we might reasonably expect of real science. Tom Bennett, in his book 'Teacher Proof', gives a set of signs by which to spot ‘Cargo Cult’ research. Tom’s list of klaxons includes: research carried out by interested parties, small sample sizes, a lack of a control group, the absence of double-blind testing, confirmation bias, illusory correlation, mistaking cause and effect, the Hawthorne effect and the appeal to novelty.

Much of the progress-tracking ‘data’ which has become all too important in English education could actually be more correctly described as Cargo Cult Data. Cargo Cult Data has the appearance of real data, without having any of the requirements of statistically valid actual data from which one could reasonably draw inferences. It doesn’t pass the simple Fair Test most primary school children would recognise, much less the expectations of those who wish to Teacher Proof the data they are working with.

Michael Tidd has made the clear case that tracking is not the same as assessment, a point which is often lost in schools. As Michael notes, “Inevitably, the way that Ofsted works has meant that schools have been forced to use their assessments in the form of National Curriculum levels to demonstrate that they are tracking progress towards the end-of-key-stage expectations. However, in doing so we have all but divorced the act of assessment from the processes of teaching and learning.”

All teachers assess, all the time. We all have a good idea which of the children we teach are thinking hard, making progress, struggling with the demands of school, not in a position to learn or actively disrupting the learning of others, and so on. Assessment is an instinctive thing to do if you are trying to help children to learn.

Most data used to track progress, however, is simply guesswork, and often fairly random guesswork at that since, as Daniel Willingham is fond of saying, we can’t get into children’s heads to find out what they are learning or have learnt. We can ask children to write things down, but this isn’t the same as tracking their progress. It is pretty much the source of all progress-tracking data, however.

Where the progress-tracking data originates from high stakes written tests, at best it indicates how well children can answer (or be taught to answer) a written test. Where the progress-tracking data originates from what has become known as ‘teacher assessment’, it is hugely compromised by what Owen Elton refers to as the Teacher’s Dilemma, the effects of targets and many other distortions. The end result is that much of the progress-tracking ‘data’ used to assess learning, schools and teachers is simply ‘Cargo Cult Data’ – it looks like it can be used in the way statisticians use data, but its inherent flaws mean that this simply isn’t the case.

How did we end up in this mess?

Whilst many teachers will know the history of progress-tracking data, many of those not working in schools might not know how we came to be where we are. It’s worth looking at a bit of background.

Up to the mid-1980s, English schools worked in splendid isolation, for the most part. Teachers taught, children learned, but no one really had any overview of what teachers were teaching or children were learning in different schools below the age of 16, and there was no data which could be used to compare schools. That all changed when SATs were introduced, examination results began to be published, and the National Curriculum introduced ‘levels’ in the early 1990s. At that point schools began to develop progress-tracking Cargo Cult Data, taking guesses about the ‘level’ a child might be working at any given point in their education.

Levels were first used as indicators, and if that was all they were being used for, there would be some argument for them being useful. After all, it is helpful to have some information on where children are and where they need to go next. Many argue that teachers have always done this, and were doing it before the National Curriculum levels were introduced. Having a rough linear(ish) plan of development across the curriculum clearly makes sense. What doesn’t make sense is what happened next.

Within schools, teachers began to balk at the assumptions which were being made about the Cargo Cult Data which was rapidly being generated from the broad stages of development which the NC levels outlined. A figure of two levels of progress across a key stage was plucked from the air. The levels, which were already very broad, very subjective descriptions of learning, were subdivided into three sublevels.

In primary schools, with four years in Key Stage 2, this soon transmogrified into an expectation that a child would make two sub-levels of progress each year to be deemed to be making ‘good progress’. Sublevels had numbers attached to them, which were labelled ‘point scores’. These point scores look, to those who simply don't understand the assumptions underlying actual data, misleadingly like something which has been measured, rather than simply guessed.
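For anyone who wants to see quite how mechanical this all is, here is a sketch of the arithmetic using the point score scale commonly quoted in schools (the exact values, and the `expected_progress` helper, are my own illustration, not any official calculation):

```python
# Sublevel-to-point-score mapping as commonly used in English primary
# schools (reproduced here as an assumption for illustration):
# each sublevel is worth 2 points, so a whole level spans 6 points.
POINTS = {
    "2C": 13, "2B": 15, "2A": 17,
    "3C": 19, "3B": 21, "3A": 23,
    "4C": 25, "4B": 27, "4A": 29,
    "5C": 31, "5B": 33, "5A": 35,
}

def expected_progress(start_sublevel: str, years: int,
                      sublevels_per_year: int = 2) -> int:
    """Point score a child would be 'expected' to reach, treating
    guessed sublevels as if they were measurements."""
    return POINTS[start_sublevel] + 2 * sublevels_per_year * years

# A child guessed at 2B at the end of KS1, four years later:
print(expected_progress("2B", 4))  # 31, i.e. level 5C
```

The point, of course, is that the inputs are guesses, so the tidy arithmetic lends them a precision they never had.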

The point scores had become measures, at which point the true progress-tracking Cargo Cult Data was born. Driven by companies such as RM plc, the numbered point scores were treated as if they were accurate measurements, which could then be subjected to statistical analysis. RM’s targeting offshoot, the FFT, then extrapolated this Cargo Cult Data into ‘estimates’ of future progress, in a way of which the financial services industry would be ashamed – past performance being no indication of future performance, as we all now surely know.

The attempt to find out how children were progressing in school spiralled downward into an unholy mess which ended up eating itself – or did it?

Enter politicians and their well thought out ideas

In a bizarre final twist, we are now on the threshold of a brave new world, since the current government have officially abolished levels. What’s that, I hear you ask? They’ve been abolished? Really? How did that happen? Well, with all of the criticism of the levels system, it was fairly obvious to all concerned that whatever levels were supposed to do, they weren’t very good at it. So they’ve gone.

Except they haven’t. Kevin Bartle wrote an excellent article about this this time last year (Spirit Levels: Exorcising The Ghost of Assessment Past), and Joe Kirby followed it up last November with thoughts on Life after Levels.

Liz Truss, Minister for Schools, speaking in April 2014, said that, "The old system for tracking a child’s progress was called ‘levels’. Levels became an end in themselves. So in 2012, we decided to scrap levels." And in one bound we were all free, to do what we want, any old time.

The Dead Levels message hasn’t got through to many schools, however. My school, for example, has a numeracy policy entirely based on levels, and children’s literacy work continues to be graded into sublevels each half term. The school I left in December had levels embedded deeply within all its progress-tracking. When OFSTED judged my school last year, they did so entirely based on progress-tracking Cargo Cult Data based on levels – Achievement of Pupils (based on levels) led to a Quality of Teaching grade, which led to an Overall Effectiveness grade, as it does in (bar one or two exceptions) every OFSTED report.

So what do we do now?

Liz Truss still seems to think that progress-tracking Cargo Cult Data can be used effectively in the brave new No-Level world. Here she is again in that speech in April, “Children will get a score. If they get 100, they’ve hit the expectations for their age. Above that score - and they’re ahead. Below, and they’re behind. It’s consistent across year groups. This is how the new end of key stage 2 tests will work - and schools can decide whether to have similar tests for the other years.”

So at 11 years of age (or significantly younger in some children's cases), children's knowledge will be guessed at and given a number on a scale. As to what schools do to show the progress demanded by the government's accountability system 'for the other years', well, who knows. The government, based on Liz Truss's remarks above, clearly has no idea. Conferences are being held, and schools are trying to figure out what to do.

My strong advice is that we ditch anything which is Cargo Cult Data and only allow actual data to be used to track progress. I know. I can but hope, and keep chipping away at the flawed foundations of the Data Disaster.

What actual data can we collect and analyse?

The truth is, not very much. Knowledge is too complicated to be reduced to numbers, on the whole. Tests assess test-taking, and Cargo Cult Data is likely to rear its ugly head again. That said, some knowledge is fundamental to making progress in education, and that knowledge can be assessed and tracked numerically.  

The actual data I collect and analyse is:

  • Number Bond knowledge
  • Times Table Knowledge

This, added to the birthdate information I wrote about in my last article, is just about all that I can see being reasonably described as 'actual data'. I'd be interested to hear what those in education think they collect which isn't Cargo Cult Data. In particular, I'd be interested to hear what data secondary teachers collect. Do secondary maths teachers track times tables and number bond knowledge? Do you track information which is actual data?

I assess number bond and times table knowledge on a weekly basis with all the children I teach. The children take a two minute test, and I record children’s results and progress over time. The process takes around ten minutes a week and it gives me excellent information which I can then use in my teaching.
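As a sketch of quite how little machinery this needs, here is roughly how such a weekly record might be kept (the names, scores and the `trend` summary are all invented for illustration, not my actual records):

```python
from collections import defaultdict
from statistics import mean

# child -> list of weekly two-minute test scores
scores = defaultdict(list)

def record(child: str, score: int) -> None:
    """Append this week's score to the child's running record."""
    scores[child].append(score)

def trend(child: str) -> float:
    """How far the latest score sits above (or below) the child's
    average so far: a crude but honest progress indicator."""
    history = scores[child]
    return history[-1] - mean(history)

record("Child A", 12)
record("Child A", 15)
record("Child A", 18)
print(trend("Child A"))  # 18 - mean(12, 15, 18) = 3.0
```

Ten minutes a week, a spreadsheet or a few lines like these, and you have numbers which actually measure something.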

For the record, I also keep an eye on the following:
 
  • Knowledge of the Alphabet
  • Knowledge of phonemes and graphemes
  • Knowledge of punctuation marks

This is harder to track in any meaningful 'numerical data' way, but it is core knowledge which I can and do monitor. For some children, I do record this using numbers, particularly when I've spotted gaps and am attempting to fill them.

Beyond this, I record indicators of children’s progress. My guess is that, currently, this is what most teachers actually do, even if the 'data' then ends up being used as Cargo Cult Data by others. I find the National Curriculum levels quite useful as indicators. Yes, using the NC levels is hugely subjective, and subject to all kinds of biases, and I probably under-record some children and over-record others. However, as long as no-one tries to subject this information to techniques used to summarise actual data (a forlorn hope in most cases), I have no problem with it, and I find it useful.

I'd be interested to hear what actual data other teachers and schools have found to be useful, and how this data is collected and used. Please comment below or contact me at jack.marwood@yahoo.co.uk.

Using Data Properly: Which side of March 2nd?

19/5/2014

7 Comments

 
Whilst much of the ‘data’ collected and analysed in English education is so heavily compromised as to be of little use, some data is incredibly useful. One set of data which is (or should be) readily available is the date when children were born, and their position within their cohort. Age data can and should inform practice within a classroom, and it helps teachers to be more aware of how and why children are likely to be able to do what they can do.

Knowing and using birth date information can ensure that, whatever age a child is, sensible assumptions can be made about progress, ability and development. It can help us to understand both those who seem to be high flyers and those who appear to struggle. If age is not considered, wrong and unhelpful conclusions are highly likely to be drawn about what children can do.

Age matters

In England, we group children into a school year cohort based on their date of birth. Unlike a calendar year, the school year starts on September 1 and continues through to August 31. There is no particular reason for continuing to do this other than historical precedent – and other countries have quite different school years, often running from January 1 to December 31 – but it is what it is, and we have to work with the implications of the arbitrary cut-offs.
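The cut-off itself is trivial to express; here is a sketch (the function name is my own) which shows how a single day's difference in birthdate puts two children a whole school year apart:

```python
from datetime import date

def school_cohort_start(birthdate: date) -> int:
    """Calendar year in which a child's school-year cohort begins
    (the English school year runs 1 September to 31 August)."""
    # September-December births join the cohort starting that year;
    # January-August births belong to the previous September's cohort.
    return birthdate.year if birthdate.month >= 9 else birthdate.year - 1

# One day apart, one school year apart:
print(school_cohort_start(date(2008, 8, 31)))  # 2007
print(school_cohort_start(date(2008, 9, 1)))   # 2008
```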

Whilst children are supposed to start school when they are five years old, in practice this has become the September before their 5th birthday. For some children, this means starting school when they are 364 days away from their 5th Birthday, and, on average, children start six months earlier than they should. Campaigns highlighting this anomaly are underway, but there seems to be little official desire to enforce current legislation, and some groups are actively encouraging assessments of children at earlier and earlier stages of their lives.

The difference in age within a cohort is significant for most of children’s time in education; in some important ways it is always significant, and it can have a huge influence on a child’s sense of their own abilities when compared to their peers. It’s worth looking closely at the difference birthdate can make to a child in school.

September-borns just got lucky

In reception, the youngest child in a cohort will start at 48 months old, and the oldest will be 59 months old. By the time of the Year 1 phonics check in May of Year 1, the youngest will be 69 months and the oldest 80 months. The oldest have had around one seventh more experience as a result.

If you consider that most children become confident talkers at around 30 months of age (as a rough figure), and that thought develops with language, that means that when they start reception the September-borns have actually been developing their intellect for 29 months compared to just 18 months for the youngest. That’s more than 50% more time than their August-born classmates.

The difference is still significant when children sit SATs in Year 6, when the youngest are 129 months old compared to the 140 months the September-borns have under their belts. Remove the first 30 months in the pre-language phase, and the difference is 99 months to 110 months, a huge 11% more time to grow, learn and develop.
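The arithmetic above is easy to check; here is a minimal sketch of it, using the same rough 30-month figure for the pre-language phase:

```python
# All ages in months; 30 months is the rough "confident talker"
# threshold used above, not a precise developmental fact.
PRE_LANGUAGE = 30

def development_time(age_months: int) -> int:
    """Months of post-language development, on the rough model above."""
    return age_months - PRE_LANGUAGE

# Starting reception: oldest 59 months, youngest 48 months.
print(development_time(59) / development_time(48))    # 29/18, over 50% more

# Sitting KS2 SATs: oldest 140 months, youngest 129 months.
print(development_time(140) / development_time(129))  # 110/99, about 11% more
```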

Here is a graph of mean standardised average point score at Key Stage 1 and Key Stage 2 by birth date and cohort, based on analysis undertaken by the Institute for Fiscal Studies:

[Graph: mean standardised average point scores at KS1 and KS2, by birth date and cohort]
Whilst this data is dubious on many levels, and says nothing about individuals, it does suggest that there is a huge difference in average outcome simply based on a child's age within their school cohort.

Being older is a huge advantage in non-academic areas


Malcolm Gladwell explored this in his book Outliers, where he looked at research into the ages of NHL Ice Hockey Players. Players are grouped by year based on their birthdate, with a hockey year beginning in January and ending in December. The physically bigger children born in the first three months of the year are consistently selected for higher level coaching and game play, and by the time players enter the NHL, the difference in the quarter in which you were born is staggering.

In school, older children have been ahead of the rest of their class from the off. They are more likely to find school easy, to be selected for responsible roles, to be more articulate, numerate and generally academic than their peers. The cumulative effect is startling, and it’s not surprising once you are aware of the advantage of age to find that those children who become head boy and girl, captain sports teams and generally shine compared to their classmates are often those born in the Autumn term.

Of course, this is not written in stone, and children can gain advantage in other ways; the children of sportspeople are often good at sport, musicians tend to have musical children and so on. Parents who are aware of the effect of age can do quite a bit to mitigate the effects of birthdate, but the simple fact that some children are considerably older or younger than their peers has a huge effect.

The more able are often simply the older children

A few years into my teaching career, I was asked to teach a Year 6 class. With the test-based requirements of the last year of primary hanging over me, I set about assessing the children’s ability to shine under test conditions. And, because I was young and keen, I looked for patterns in the data I gathered.

The thing which became obvious as I looked closely at the class I was teaching was that the oldest children were invariably also the most able, whichever subject I looked at. Likewise, the ‘less able’ tended to be the younger children in the cohort. I taught in a single form entry school, and all 30 children were in my class, so I could see the effect of age clearly.

Since then, I’ve always started the year by seating children in age order. After a few weeks, I put everyone into ability groups, as demanded by the current view of primary education. I usually have to move a few children around, but not by very much. But it reminds me that, just because a given child can do more than another, it doesn’t necessarily make them more able. It often means they are older, or that their parents know what will help their child in school.

March 2nd is important

The middle of the school year is on the second day of March, with 182 possible birthdates before and after this date. For a given school cohort, it is very illuminating to look at the spread of ages within a class around this date.

I recommend looking at two different measures:

1)      Know your seasonal groupings

Group the children into Autumn, Spring and Summer, with a four month spread in each group. Look at this regularly so that you stay familiar with the different groups.

2)      Know the mean age difference between different groups within your class

I define the 'mean age difference' as the mean difference between a group's birthdates and March 2nd, the middle of the school year.
Compare the data for boys and girls, and for maths, reading and writing groups, using a simple spreadsheet to find differences between birthdate and 2nd March and then calculating the mean age difference.
It’s often illuminating to see how age affects sporting and creative ability, too. Some of the younger children in a cohort are often neglected because they have had less time to practice.
I always look for those children who are not where you might expect them to be, either because they have made significantly more or less progress than their peers.
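The spreadsheet calculation described in 2) can equally be sketched in a few lines of code (the example birthdates are hypothetical, not real pupil data):

```python
from datetime import date
from statistics import mean

def age_offset_days(birthdate: date) -> int:
    """Days older (+) or younger (-) than the cohort midpoint of
    2 March. The school year runs 1 September to 31 August."""
    cohort_start = birthdate.year if birthdate.month >= 9 else birthdate.year - 1
    midpoint = date(cohort_start + 1, 3, 2)
    return (midpoint - birthdate).days

# A hypothetical top maths group: mostly autumn-born, as is so often
# the case once you start looking.
top_group = [date(2013, 9, 10), date(2013, 11, 2), date(2014, 5, 20)]
print(mean(age_offset_days(d) for d in top_group))  # mean age difference, in days
```

Run the same calculation for each ability group, and for boys and girls, and the age patterns tend to jump out.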

There is a reasonable body of research on the effect of age on children’s progress in school, and I recommend the Institute for Fiscal Studies report ‘When you are born matters: the impact of date of birth on educational outcomes in England’ for further reading.

This is the kind of data which we should use in school, free from bias and providing useful insight into how best to help children in the school system.

August 2014 update: I have added a sample spreadsheet here following a request by TFScientist below. It should be self explanatory, but feel free to ask questions in the comments below.

NTENRED: Thinking hard about CPD

12/5/2014

1 Comment

 
One of the best things about being in education is being inspired to learn by those you teach. When the children you deal with are developing before your own eyes, it seems churlish not to think about your own development, both personally and professionally. There are many ways to develop your own educational knowledge, skills and understanding, of course, and the plethora of educational books, films, websites and journals is testament to this.

Schools have institutionalised development in teaching practice under the umbrella heading of Continuous Professional Development (CPD). Of late, teachers have taken a greater interest in organising their own CPD, and 3rd May 2014 saw the latest in the events organised by ResearchED, the loose collective steered by teacher, blogger and all round good egg Tom Bennett.

The NTEN ResearchED conference at Huntingdon School in York was quite the most frustrating and, at the same time, downright inspiring CPD I have done so far in my career, and I’d advise anyone who wants to develop their core ideas in education to make time to attend something similar.

It wasn't possible to attend everything, which meant that everyone missed some sessions they wish they had attended – and people tweeting memorable or inspirational quotes from sessions happening right next door was something I found, initially, immensely frustrating. But as frustrating as this was, it was also inspiring, as it encouraged me to find out more about the speakers I had missed, and to talk to people about sessions they had attended. Only being able to attend 5 out of 27 sessions meant that we all had to think hard before, during and after the day about what choices we had made.

There are a lot of excellent reviews of the day collated here, which in themselves provide enough food for thought to keep your inner learner busy for weeks. There are films of speakers in the main hall here. I want to reflect on the power of ‘festivals of ideas’, and what this might mean for future CPD in schools.

Much of the CPD I have done has seen me spend a day at a drafty conference centre, listening to speakers telling me things I have tried desperately hard to be interested in, but which have often left me frustrated, somewhat bored and with usually little more than a collection of doodles on printouts of PowerPoint presentations. I have always been surprised by presenters who have made no effort to find out what their audience already think or know, at how little opportunity delegates have to discuss the ideas presented, and how often I have been unclear as to what exactly I was expected to learn from the experience.

Mostly, I’ve used CPD as an opportunity to put myself in children’s shoes, considering how to avoid the children I work with feeling as frustrated as I do when those tasked with helping me to develop my knowledge, skills and understanding fall short, for whatever reason. We all know teaching isn’t easy. CPD sessions nearly always remind us why.

But some CPD does work. Some of it works really, really well. In the past few years I’ve had two days of CPD which have stood head and shoulders above the rest. The first was a ‘Teacher Speed Dating’ event, where attendees were asked to prepare short five minute presentations on ideas which had worked for us, much in the style of Pedagoo (http://www.pedagoo.org), although we did this one-on-one and we didn’t get to present to or hear everyone speak. The enthusiasm, originality and sheer inspiration were a joy to behold.

The second event had a range of presenters, with teachers, headteachers, advanced skills teachers and external companies presenting ideas in thirty-minute sessions. With three sessions held concurrently in five different time slots, my colleagues and I split up and came together at the end of the day to share ideas and to discuss the ideas we wanted to take forward.

Both of these sessions were hugely motivating. When I reflected on why this was the case, it was clear that the element of choice was massively important. Additionally, since my colleagues and I had attended different sessions or heard different presentations, we had things to discuss before, during and after the day. I missed some things which I wish I had seen, and sat through some sessions which made me want to argue with the core ideas. All of it made me think, and to think hard.

And that’s what ResearchED made me do, too. I thought hard. About the core assumptions being made, about the excellent ideas which were presented, about the next steps I would take to explore and understand new concepts and research, about… being a teacher; being inspired to learn more about teaching; being responsible for my own continuing professional development.

And that’s why I heartily recommend this kind of ‘festival of ideas’ approach to CPD. Give teachers a choice of areas to investigate, and that in itself will inspire them to think hard about what they do, and to share ideas with others who have thought hard too. Thanks to everyone who made NTEN ResearchED York such an excellent event. An inspiring, amazing piece of CPD. And that’s a sentence I never expected to write.



FFT: Tea Leaves in Education

5/5/2014

7 Comments

 
Having prised the mask off the Fischer Family Trust, and looked at the FFT Governor Dashboard, it's time to have a look at what has made the FFT infamous in schools throughout England - supplying schools with dubious data, primarily for use when setting targets.

Playing Mystic Meg has made the FFT a household name, at least in the homes of countless teachers and senior managers who have been force-fed its dubious rubbish at taxpayers’ expense. Peddling stories of the past and tales of the future, conjuring up ‘estimates’ and foisting target culture onto an unsuspecting educational world has cost bucket loads of cash and wasted huge amounts of teachers' time and effort.

It has to be said that the ‘estimates’ crunched by the FFT are so loose, so woolly and, even according to the FFT itself, so hedged with caveats the size of Belgium that they are worse than useless. They give the impression of foretelling the future much as any sideshow charlatan might. Worse still, this rubbish is paid for by you and me, at an estimated cost of £15 million over the last 13 years, and is another substantial cog in the money-extracting Data Driven Disaster machine leeching English education.

Take some data and construct Castle Doom

The FFT currently runs an entity called FFTLive, a cartoonishly colourful website which looks like this:

[Screenshot: the FFTLive website]
According to their blurb, it is ‘a powerful online reporting system used by schools, LAs and Academy Sponsors. We process data for all schools and pupils in England and Wales and provide online reports which analyse pupil results and pupils' progress across all subjects and key stages, comparing performance to similar schools and the national average.  FFTLive provides estimates of future pupil performance using FFT’s unique models which have been developed over 10 years.’

There is a fair bit of info on the FFT website about its history and magic, which I suggest you read for yourself. The highlights are briefly:

2001: FFT Founded by Mike Fischer of RM Plc and Mike Treadaway, ICT Advisor
2004: DFE awards National Pupil Database contract to FFT
2005: FFTLive launched
2006: RMFFT win contract to manage NPD and Performance Tables
2013: FFT launch Governor Dashboard
2014: Due to launch FFT Aspire in Autumn


If you’d like to have a look at what FFTLive looks like for a school, you can log in using either of the following usernames: 9992004X (Primary) or 9994002X (Secondary), with the password ANON. (I found these here and here, by the way, in case you’re interested).

There is far too much stuff available on the FFTLive website for me to go into in too much depth. Feel free to poke around yourself to see quite how much has been wrung out of the data. There are various guides which you can download (often called ‘Quick Start Guides’ accessed through ‘Help’ buttons), which are worth reading, although they don’t tell you anything at all about the methodology behind the data crunching.

Here are some highlights before we get to estimates and target setting, the bit of FFT magic at which every teacher, parent and politician should take a very, very close look.

Dashboards: You can find the 4 page Governor Dashboard here, along with enormously data-intense ‘self evaluation booklets’, which have an extraordinary 26 pages at KS1, 32 pages at KS2 and 16 pages at KS4 of stuff to plough through.

Explore: This has magic such as ‘opportunities and alerts indicators’ and ‘turbulence and context factors’ for which no methodology is given. I assume that we are simply supposed to accept the ‘analysis’ at face value, which I’m fairly sure we shouldn’t.

Interactive reports: Here you get into the murky world of ‘Reviewing Past Progress’ and ‘Supporting Target Setting (Estimates)’. ‘Reviewing Past Progress’ borrows the idea of ‘Value Added’ from economics, and, like many Data Disaster proponents, the FFT makes the highly disputed assumption that you can isolate a ‘teacher effect’ or ‘school effect’ from a ‘pupil effect’.

I’ve shown before that most people in schools don’t have the knowledge, skills or understanding to question this assumption, which is entirely unjustified and makes Value Added Not Even Wrong. Suffice to say that it simply makes no sense to assume that a child’s educational development is 100% school and teacher and nothing else, much less to model an individual child's future performance based on the performance of entirely different children in the past, but that’s what happens here.

It’s worth noting at this point that the FFT does two very separate things within FFTLive:

  • Assess the past
  • Predict the future

The methodology for both of these is highly suspect, and almost entirely opaque. I can make educated guesses about what RMFFT does in each area, but they haven’t made it easy to find out exactly what they do to data. Before looking at these two different but related aims of FFTLive, here are the final things to look at:

Innovate: New ideas for crunching data by ‘Reviewing Past Progress’ and ‘Supporting Target Setting (Estimates)’ similar to the current Interactive reports. This shows that the FFT has started to think beyond some of the issues I’ll highlight below, and that they are desperately trying to keep their teeth around the government’s DDD jugular.

You can also export the data to perform more daft analysis yourself, or have consultants charge you to ensure that you are a ‘Data confident school’, and the information section tells you a few things before it tries to sell you training to become an Operating Data Thetan and explain that we are all actually ruled by lizards (this may not be true).

So, there’s a lot here, but you don’t get to charge the government a lot of money for nothing, even if what you have produced has no value. And speaking of no value, let’s have a look at the Big Daddy of the FFT: reporting the past and guessing the future.

Looking back with FFTLive

All schools have to justify themselves to OFSTED when the inspectors come to call. These days, data is just about everything when being judged, and the FFT has been at the vanguard of the Data Driven Disaster. It has pushed a ‘Value Added’ model since its inception in 2001, and now all schools are expected to be solely responsible for the academic development of their pupils, as if children existed in suspended animation for the 80% of their waking hours they aren’t in school each weekday.

Value Added is, in essence, a (deeply flawed) measure of how much a school has added to a child’s academic development. It’s far from clear how all the FFT’s Value Added alchemy works. There is an indication of the thinking of the FFT in some of the data which is crunched in FFTLive, however.

In reading at KS2, for example, some children have ‘Actual Levels’ of 5.1, 5.3, 5.7, which may be 5C, 5B and 5A; but then some children have 4.2, 4.3, 4.4, 4.7 and 4.9, which can’t correspond to 4C, 4B and 4A. Some ‘Actual Levels’ are coded in blue, which is apparently ‘lower than estimate by half a level or more’. Some are green for ‘higher than estimate by half a level or more’.

So what are these estimates that the 'actual levels' are measured against? Well, in order to calculate how much ‘value’ a school had ‘added’, the FFT required an estimate of a given pupil’s future test results. This had to be a single number, which could then be compared with what a pupil actually got in the tests at Key Stage 2, 4 or 5.

As far as I can guess, and based on the way in which RMFFT creates estimate models for the OSDD, RAISEonline and the Performance Tables, data for previous students is crunched through regression analysis to produce a model with fixed coefficients: a linear line of best fit. Deep breath, non-mathematicians. It’s not so bad, really. Basically, it means this:
[Image: the FFT’s line of best fit for KS2 to KS5 results]
This is for KS2 to KS5, but it was produced by Mike Treadaway, who clearly understands how dubious the models he pushes actually are. As Mike explains, this is actually a line of best fit for data which looks more like this:
[Image: scatter plot of the underlying KS2 to KS5 data]
And as you can see from this, the line is a huge oversimplification of what actually happens between one key stage and another, because that’s how regression works. The analysis just about works at a group level, provided the data is independently and identically distributed, so a group of children with a mean of x at one Key Stage can be assumed to be likely to get a mean of y at another. But any statistician worth their salt would make it clear that any line of best fit is just that, and only works at the group level; reading a single predicted value off the line for a given child is clearly the work of a fool.
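The point above is easy to demonstrate for yourself. Here is a minimal sketch using entirely made-up (synthetic) cohort data — the scores, slope and noise are my assumptions, not the FFT’s actual model — showing that a line of best fit recovers the group average almost perfectly while still being hopeless for any individual pupil:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: prior-attainment score x, later score y,
# with lots of pupil-level scatter (all numbers invented)
n = 10_000
x = rng.normal(15, 4, n)
y = 1.2 * x + 8 + rng.normal(0, 6, n)

# Ordinary least-squares line of best fit, as a regression model would produce
slope, intercept = np.polyfit(x, y, 1)
estimate = slope * x + intercept

# At group level the line is spot on: residuals average out to ~0
print(np.mean(y - estimate))   # ~ 0

# But for an individual it tells you very little: the typical pupil
# misses their "estimate" by around 6 points in either direction
print(np.std(y - estimate))    # ~ 6
```

The spread of the residuals is exactly the width of the cloud around the line in the scatter plot: the group mean is predictable, the individual child is not.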

Showing an ‘Estimated level’ of 4.6 was Not Even Wrong, because the student could get literally anything within a wide range and not surprise anyone with a vague idea of how grouped data works. To Mike Treadaway’s credit, he acknowledges this. But then he goes on to use it anyway to assess how well a school has ‘added value’ to children. I’ve demolished the whole ‘estimates’ nonsense before here, but that doesn’t make this any less irritating or wrong.

Predicting the future, or not

Most people probably know the FFT for its futurology, which we’ll look at next. The ‘Supporting Target Setting (Estimates)’ report is the FFT data most teachers are presented with when setting targets with their senior management teams.

Until 2009, teachers were given lists of ‘Estimated levels’ a child might get in their Key Stage 2 SATs, as used in the Value Added models above. They looked like this:

[Image: list of single-number ‘Estimated levels’ for KS2 SATs]
Someone at the FFT clearly realised that this was incredibly daft at an individual pupil level, since children were getting all kinds of different results and the estimates clearly made no sense for individuals. In 2009, the FFT (having used this Not Even Wrong model for eight years) amended the way it produced estimates for individual children whilst, as I showed above, continuing to use the dubious ‘single number’ estimates to calculate 'Value Added'. Shamefully, many schools still used the single-number estimate because they'd become used to it. Many may still do so. In their defence, I doubt many teachers would have understood the deep-seated problems with FFT futurology, but it clearly demonstrates the danger of bad data use in education.

Currently, primary schools get FFT Estimates which look like this:
[Image: current primary FFT Estimates report]
This is similar to the data currently presented to teachers in secondary which looks something like this:
[Image: current secondary FFT Estimates report]
The secondary estimates are presented in a slightly more palatable version of the older presentation still used in primary (a good example of the old secondary version is on David Didau’s Learning Spy blog here), in that the percentages are given as cumulative probabilities. A 41% chance of a B+ (that is, a B or better) looks a bit less appealing when you realise that the model actually means the student is most likely to get a C.
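The arithmetic behind that last point is worth spelling out. Only the 41% figure comes from the report above; the other cumulative chances below are invented for illustration. Differencing the ‘grade or better’ percentages gives the chance of each individual grade, and the modal grade can easily sit below the headline one:

```python
# Hypothetical cumulative "grade or better" chances, FFT-style.
# Only the 41% (B or better) is from the post; the rest are made up.
cumulative = {"A*": 4, "A": 16, "B": 41, "C": 78, "D": 93, "E": 100}

# Difference successive cumulative figures to get per-grade probabilities
per_grade = {}
prev = 0
for grade, cum in cumulative.items():
    per_grade[grade] = cum - prev
    prev = cum

print(per_grade)   # {'A*': 4, 'A': 12, 'B': 25, 'C': 37, 'D': 15, 'E': 7}

# The single most likely grade for this "41% chance of a B+" pupil
modal = max(per_grade, key=per_grade.get)
print(modal)       # C
```

So a headline ‘41% chance of a B or better’ is perfectly consistent with a C being the single most likely outcome — which is the sleight of hand the cumulative presentation hides.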

Either way, this stuff shows you two important things:

In Primary, the ‘Estimated Levels’ tell you nothing.
In Secondary, the ‘Estimated Levels’ tell you nothing.

In case it isn’t obvious why this is the case, I’ll repeat: A student could get literally anything between the lowest and highest level available and not surprise anyone with a vague idea of how grouped data works. You might get a B, then again you might not. You might get level 4, then again you might not. The estimate tells you nothing which you, as a child’s teacher or parent, couldn’t work out for yourself.

There are umpteen other things wrong with this model, but here are a few to start with:

  • What data is used to produce the regression models for the estimates? All of it? Complete data points only? Partially complete data?
  • Is the data in the model, and therefore each estimate, changed each year that a child is in a key stage? If not, why not? If it is, what does it suggest? 
  • What exactly is the methodology used to produce this magic?

Examining the Educational Tea Leaves

Once again, it's hard to know where to start. So much energy has gone into this stuff - at least £15 million over the years, by my reckoning - and it doesn't tell you anything whatsoever that someone working in a school couldn't tell you, given the opportunity. The 'Value Added' fiction is just that - the models are so deeply flawed as to be meaningless. The 'Estimates' are so woolly that they add little to the professional judgement of the staff on the ground.

I haven't even gone into the vagaries of FFTA, FFTB and FFTD, as you can find information about them elsewhere. I can't find any criticism of the kind I've made here about the fundamental error of using grouped data analysis to predict individual outcomes, which is why I've written about this here. I hope that this article provokes the debate as to whether using data in the way RMFFT does has any justification, and I'd like to hear your thoughts in the comments below.

Thirteen years of FFT analysis has shown that trying to summarise every diverse school community in England in this way is witchcraft of the highest order and, at individual child level, little better than examining patterns in tea leaves. The cost, both financial and in the diminished education of children caused by the limited focus on badly assessed levels, is simply not worth paying. Examining tea leaves is ultimately pointless, because they tell you nothing you couldn't have worked out for yourself. And in this case, having looked closely at the tea leaves, we need to stop throwing our money away on yet more worthless data-driven nonsense and completely rethink the way we assess 'achievement' and 'progress' in English schools.

    Author

    Me? I work in primary education and have done for 18 years. I also have children in school. I love teaching, but I think that school is a thin layer of icing on top of a very big cake, and that the misunderstanding of test scores is killing the love of teaching and learning.
