The second of my posts inspired by Daniel Kahneman’s ‘Thinking, Fast and Slow’
As a student, you will slowly be getting used to teachers making predictions about you (you may never grow to like it, but you will at least get used to it). You have predicted grades for your GCSEs and predicted grades for your AS and A-Levels. The education system to some extent relies upon the idea that teachers can predict with reasonable accuracy how students will do in their exams, before they have even taken them. But how good are teachers’ predictive abilities really?
In the 1950s a psychologist called Paul Meehl reported an example in which trained school counsellors were asked to predict the grades of students at the end of the year. They were allowed to interview the students for 45 minutes and had access to large amounts of other data, such as a personal statement, previous grades and a number of aptitude tests. At the end of the year, the counsellors’ predictions were compared with a much simpler method: a statistical equation that looked only at previous results and one aptitude test. Who do you think was more accurate? In 11 of 14 cases, the simple statistics did a better job than the professionals. Before you decide that teaching professionals are useless, however, be aware that over the following decades similar patterns were found across numerous areas: predicting success in pilot training, the price of bottles of wine, cancer survival rates, the prospects of new businesses and many more.
These cases are called ‘Meehl patterns’, and they occur when an expert in a field tries to predict something complicated but has a lower success rate than a simple statistical formula (or ‘algorithm’). Meehl thought the patterns illustrated a potential problem with knowing a lot about a subject: it makes us too confident in our judgements, and more likely to try to be clever or unexpected in our predictions rather than just sticking to the simple data. Meehl patterns arise in so-called ‘low-validity’ environments: ones that involve a significant degree of uncertainty and are very hard to predict. In such difficult situations, it is often better to use simple statistics to predict performance than to trust an ‘expert’.
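To make the idea concrete, the ‘simple statistical formula’ that beat the counsellors might be nothing more elaborate than an equally weighted average of past results and one test score. The sketch below is purely illustrative (the weights, scales and function name are my own, not Meehl’s actual equation):

```python
def predict_grade(previous_results, aptitude_score):
    """A deliberately simple, Meehl-style prediction (illustrative only):
    equally weight the average of previous exam results and a single
    aptitude-test score, both assumed to be on a 0-100 scale."""
    avg_previous = sum(previous_results) / len(previous_results)
    return 0.5 * avg_previous + 0.5 * aptitude_score

# A student averaging 72 in past exams who scores 64 on the aptitude test:
print(predict_grade([70, 74, 72], 64))  # 68.0
```

The striking part of Meehl’s finding is precisely that something this crude, applied consistently, can outperform a trained professional with far more information at their disposal.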
This got me thinking about two potential examples of ‘Meehl patterns’ in school settings. The first, as I mentioned at the start, is predicted grades. I would actually argue that predicting exam performance is likely to be less of a Meehl pattern now than it may have been in the 1950s, as teachers today are far more used to using data to inform their decisions, especially data from aptitude tests and the like. This does not mean, of course, that we can predict grades with any great accuracy (these are still ‘low-validity’ environments, so even the best guess of an aptitude test is a pretty poor prediction); it just means that teachers might be a bit less bad at it than they used to be! Every teacher will be able to name individuals who have far exceeded their expectations (and predictions) in exam situations, and probably just as many who have gone the other way. This will often be especially pronounced in the more subjective subjects such as English, History or Psychology, where one examiner may grade an answer very differently from another. Such environments (where individual performance, question topic and marker judgements can all vary greatly) are very low-validity indeed; in fact, it makes me think it’s amazing that we ever get any predictions right at all. The next time I am tempted to give a prediction completely at odds with the data before me, just because I have a feeling that I know the student better, I might have to stop and reassess my own biases and the illusion of my own expertise!
The second area of school life in which I see a clear example of a Meehl pattern shows itself every October and November as students prepare their university applications and, in particular for those applying to Oxford or Cambridge, begin to practise their interview technique. Oxbridge interviews are of course the stuff of folklore; the subject of countless column inches each year and the source of no end of student angst and public bemusement. They are defended to the hilt, of course, by academics who maintain that such questioning and face-to-face interaction with the applicants provides crucial insight which cannot be ascertained from the pile of glowing school references and string of A and A* grades that each student will arrive with. I can’t help wondering, however, whether they are falling prey to the illusion of their own expertise, over and above any real ability to pick out the most talented. Nervous students, intimidating surroundings, a random battery of questions which may or may not by chance have relevance to the wider reading that they have been desperately trying to do over the previous weeks; it’s hard to envisage a more ‘low-validity’ environment in which to predict future academic success.
After they’ve made their decisions, of course, the professors will have their judgements vindicated by confirmation bias, another cognitive error. Some students who the professors remember giving particularly impressive answers to interview questions will go on to do important and noteworthy things at university and beyond. Given the number of people from these two institutions who go on to high-profile and important jobs, this is not actually very surprising, but it will create the impression in the minds of the professors that they were right all along, and that interviews remain the way forward.
In actual fact, I suspect that there would be a much simpler way to predict the likelihood of success at university, a method that Meehl would no doubt have approved of: AS-level scores, plus the result in a standardised admissions test set by the university. The top performers across these two measures are admitted. Simpler, cheaper, very possibly fairer (especially to students from poorer backgrounds, who may be far more intimidated by the atmosphere of an Oxford college than other applicants) and perhaps, if the lessons of Paul Meehl are anything to go by, a fair bit more accurate as well.
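The mechanical rule I have in mind really is this trivial: combine the two measures and take the top of the list. A sketch, with field names and scales of my own invention, just to show how little ‘expertise’ it requires:

```python
def admit(applicants, places):
    """Rank applicants by the sum of two standardised measures
    (an AS-level score and an admissions-test score, each out of 100)
    and admit the top performers. Purely illustrative."""
    ranked = sorted(
        applicants,
        key=lambda a: a["as_score"] + a["test_score"],
        reverse=True,  # highest combined score first
    )
    return [a["name"] for a in ranked[:places]]

applicants = [
    {"name": "A", "as_score": 88, "test_score": 81},  # total 169
    {"name": "B", "as_score": 92, "test_score": 70},  # total 162
    {"name": "C", "as_score": 85, "test_score": 90},  # total 175
]
print(admit(applicants, 2))  # ['C', 'A']
```

No interviews, no folklore; just the data, applied the same way to every applicant.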