As a student, you will slowly get used to teachers making predictions about you (you may never grow to like it, but you will at least get used to it). You have predicted grades for your GCSEs and predicted grades for your AS and A-Levels. The education system relies, to some extent, on the idea that teachers can predict with reasonable accuracy how students will do in their exams before they have even taken them. But how good are teachers’ predictive abilities really?
In the 1950s a psychologist called Paul Meehl reported an example in which trained school counsellors were asked to predict the grades of students at the end of the year. They were allowed to interview the students for 45 minutes and had access to large amounts of other data, such as a personal statement, previous grades and a number of aptitude tests. At the end of the year, the counsellors’ predictions were compared with a much simpler prediction method: a statistical equation that looked only at previous results and one aptitude test. Who do you think was more accurate? In 11 of 14 cases, the simple statistics did a better job than the professionals. Before you decide that teaching professionals are useless, however, be aware that over the following 30 years similar patterns were found across numerous areas: predicting success in pilot training, the prices of bottles of wine, cancer survival rates, the prospects of new businesses and many more.
These cases are called ‘Meehl patterns’, and they occur when an expert in a field tries to predict something complicated but has a lower success rate than a simple statistical formula (or ‘algorithm’). Meehl thought that the patterns illustrated a potential problem with knowing a lot about a subject: it makes us too confident in our judgements, and more likely to try to be clever or unexpected in our predictions rather than just sticking to the simple data. Meehl patterns appear in so-called ‘low-validity’ environments: ones that involve a significant degree of uncertainty and are therefore very hard to predict. In such difficult situations, it is often better to use simple statistics to predict performance than to trust an ‘expert’.
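To make the idea concrete, the kind of ‘algorithm’ Meehl had in mind is often nothing more sophisticated than a weighted sum of a couple of measurements. The sketch below is purely illustrative: the variable names and weights are invented for the example, not taken from Meehl’s study, and in practice the weights would be estimated from the results of previous cohorts.

```python
def predict_end_of_year_grade(previous_average, aptitude_score,
                              w_previous=0.7, w_aptitude=0.3):
    """Toy 'Meehl-style' predictor: a fixed weighted sum of two inputs.

    The weights here are made up for illustration; a real formula would
    fit them to past students' results (e.g. with linear regression).
    """
    return w_previous * previous_average + w_aptitude * aptitude_score


# Example: a student averaging 68 with an aptitude score of 74
# gets a predicted end-of-year grade of about 69.8.
print(predict_end_of_year_grade(68, 74))
```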
Consider, for example, the admissions interviews used by Oxford and Cambridge, in which professors question applicants and then judge who deserves a place: a classic low-validity environment. After they’ve made their decisions, of course, the professors will have their judgements vindicated by confirmation bias, another cognitive error. Some students whom the professors remember giving particularly impressive answers to interview questions will go on to do important and noteworthy things at university and beyond. Given the number of people from these two institutions who go on to high-profile and important jobs, this is not actually very surprising, but it will create the impression in the professors’ minds that they were right all along, and that interviews remain the way forward.
In fact, I suspect that there is a much simpler way to predict the likelihood of success at university, a method that Meehl would no doubt have approved of: AS-level scores, plus the result of a standardized admissions test given by the university. The top performers across these two measures are admitted. Simpler, cheaper, very possibly fairer (especially to students from poorer backgrounds, who may be far more intimidated by the atmosphere of an Oxford college than other applicants) and perhaps, if the lessons of Paul Meehl are anything to go by, a fair bit more accurate as well.
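As a rough sketch of what that rule might look like in practice (the applicant data, the 0–100 score scales and the equal weighting of the two measures below are all invented purely for illustration, not a claim about any real admissions policy):

```python
def select_applicants(applicants, places, w_as=0.5, w_test=0.5):
    """Rank applicants by a weighted sum of two scores and admit the top N.

    `applicants` is a list of dicts with 'name', 'as_score' and
    'test_score' keys (both assumed to be on a 0-100 scale); the
    equal weights are an illustrative assumption only.
    """
    ranked = sorted(
        applicants,
        key=lambda a: w_as * a["as_score"] + w_test * a["test_score"],
        reverse=True,
    )
    return ranked[:places]


# Made-up example: admit the top 2 of 3 applicants.
candidates = [
    {"name": "A", "as_score": 88, "test_score": 72},
    {"name": "B", "as_score": 75, "test_score": 90},
    {"name": "C", "as_score": 60, "test_score": 65},
]
print([a["name"] for a in select_applicants(candidates, places=2)])
# -> ['B', 'A']
```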