by Sugi Sorensen, October 9, 2018

What if someone distilled all the thousands of studies in education and learning down to a rank-ordered list of educational strategies based on how well they work? As an educator or educational administrator, you would then have a crystal ball that could guide all of your policy decisions. This is essentially what New Zealand Professor of Education John Hattie claimed to do in his “ground-breaking” book Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement [1] when it was published in 2009.
When first published, the Times Educational Supplement wrote about Hattie’s book:
It is perhaps education’s equivalent to the search for the Holy Grail – or the answer to life, the universe and everything.
Grappled with by teachers and educationists for millennia, the perennial question goes a bit like this: if you could change one thing about the way our schooling system is run, what would it be?
Now, what is believed to be the largest ever educational research study – covering more than 80 million pupils and bringing together more than 50,000 smaller studies – has come up with the answer. [2]
I first became aware of Hattie’s work at a La Canada Unified School District (LCUSD) parent forum on homework in February 2018, when Superintendent Wendy Sinnette presented a summary of Hattie’s findings called the “Hattie Ranking” as assigned reading for all forum attendees:

[Image: partial screenshot of the “Hattie Ranking” of effect sizes]
Note: The above is a partial listing. The complete listing may be accessed here.
Sinnette presented it as a yardstick against which to measure homework as an educational practice. Homework sat low on the ranking, with an effect size of just 0.29. Hattie prescribes that any educational intervention with an effect size lower than 0.40 should be ignored: he calculated that the average of all effects in his meta-meta-analysis was 0.40, and declared that figure equivalent to one year of learning in an average classroom.
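For readers unfamiliar with the statistic, the “effect size” Hattie ranks is a standardized mean difference (Cohen’s d): the gap between the treatment and control group means, divided by their pooled standard deviation. A minimal sketch, with scores invented purely for illustration:

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)  # sample variance (n - 1 denominator)
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical test scores for two small groups (invented for illustration).
treatment = [78, 85, 90, 74, 88, 92, 81]
control = [72, 80, 75, 70, 84, 77, 73]
print(f"d = {cohens_d(treatment, control):.2f}")
# Hattie's "hinge point": he treats any intervention with d < 0.40 as not
# worth pursuing, on the claim that 0.40 equals one year of typical growth.
```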
As I perused the listing shown above, it became clear that something was wrong. The overly broad and sometimes cryptic topics, and the incongruous comparison of vastly different interventions, seemed odd. After all, the highest-ranked intervention, self-report grades, was ambiguous, and its stated effect size of 1.44 was astronomically large. It seemed nearly impossible to believe that encouraging students to report their own grades, instead of receiving grades from teachers, would be the most effective educational practice on earth. A little farther down the list sits feedback, which is equally ambiguous. I asked myself what it could mean: teacher feedback to students? Student feedback on teachers? Student feedback on their own work? Teachers providing specific versus general feedback to students? And how could the hundreds of educational studies on various forms of feedback be distilled down to a single effect size of 0.73? The more I scrutinized the list, the more ridiculous the ranking appeared.
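To see why collapsing hundreds of disparate studies into one number is suspect, consider a toy calculation, with every figure below invented: if studies of very different “feedback” interventions report wildly different effects, their average can land near 0.73 while telling us nothing about any particular practice.

```python
import statistics

# Invented effect sizes from seven hypothetical "feedback" studies that
# measure very different things (teacher comments, peer feedback,
# computer-generated hints, ...). All values are made up for illustration.
effects = [-0.3, 0.1, 0.2, 0.9, 1.2, 1.5, 1.5]

print(f"mean d = {statistics.mean(effects):.2f}")    # ~0.73
print(f"sd of d = {statistics.stdev(effects):.2f}")  # enormous spread
# A single summary number near 0.73 hides the fact that some of these
# hypothetical interventions appear harmful while others look spectacular.
```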
After the forum, I began to investigate, and within a few Google searches I discovered that Hattie’s methodology was indeed problematic and that dozens of peer reviews of his work had found it both mathematically flawed and baseless in its conclusions. Perhaps the most concise critique is by Pierre-Jérôme Bergeron, a statistician at the University of Ottawa, writing in the McGill Journal of Education:
Unfortunately, in reading Visible Learning and subsequent work by Hattie and his team, anybody who is knowledgeable in statistical analysis is quickly disillusioned. Why? Because data cannot be collected in any which way nor analyzed or interpreted in any which way either. Yet, this summarizes the New Zealander’s actual methodology. To believe Hattie is to have a blind spot in one’s critical thinking when assessing scientific rigor. To promote his work is to unfortunately fall into the promotion of pseudoscience. Finally, to persist in defending Hattie after becoming aware of the serious critique of his methodology constitutes willful blindness.
[Source: Bergeron, Pierre-Jérôme, “How To Engage In Pseudoscience With Real Data: A Criticism of John Hattie’s Arguments in Visible Learning from the Perspective of a Statistician”, McGill Journal of Education, Volume 52, No 1 (2017), http://mje.mcgill.ca/article/view/9475/7229.]
A little more digging unearthed at least a dozen peer reviews finding serious methodological problems with his “research”:
- Slavin, Robert (2018), “John Hattie is Wrong”, Robert Slavin’s Blog. https://robertslavinsblog.wordpress.com/2018/06/21/john-hattie-is-wrong/
And sure enough, Hattie is profoundly wrong. He is merely shoveling meta-analyses containing massive bias into meta-meta-analyses that reflect the same biases.
- Nielsen, Klaus (2017), “The Blind Spots of Visible Learning”, Nordic Studies in Education, 37 (01): 3-18. March 2017.
Abstract: “This article poses a number of critical questions to John Hattie’s work about visible learning and the so-called ‘Hattie revolution’, which presently dominates Danish educational debate. This article analyzes and discusses Hattie’s methodological presupposition and a number of the theoretical assumptions on which Hattie’s work is founded. It is argued that the large number of meta-analyses which Hattie uses in his work do not only function as scientific documentation, but also as rhetorical components in discussions about what the problems and the solutions might be in today’s educational system. A central aim in Hattie’s work is to develop a theory about what constitutes good learning and good teaching based on the evidence-based measures from a large number of meta-analyses. In the paper, however, we question if this aim is realized in Hattie’s own work.”
- Solfjell, Eivind (2012), “Did Hattie Get His Statistics Wrong?”, http://sporetterspor.blogspot.com/2012/02/did-hattie-get-his-statistics-wrong.html
…an article has emerged, titled “Can we trust the use of statistics in educational research” (“Kan vi stole på statistikkbruken i utdanningsforskinga”) (Topphol, 2011), published in the Norwegian Journal of Pedagogy (Norsk pedagogisk tidsskrift), questioning Hattie’s use of one of the statistical measures in Visible learning, namely the Common Language Effect size (CLE).
- Steve Higgins & Adrian Simpson (2011) “Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement. By John A.C. Hattie”, British Journal of Educational Studies, 59:2, 197-201, DOI: 10.1080/00071005.2011.584660. Retrieved from: https://www.tandfonline.com/doi/full/10.1080/00071005.2011.584660?needAccess=true.
- Ollieorange2, “John Hattie admits half of the Statistics in Visible Learning are wrong”, Aug. 25, 2014, https://ollieorange2.wordpress.com/2014/08/25/people-who-think-probabilities-can-be-negative-shouldnt-write-books-on-statistics/
At the researchED conference in September 2013, Professor Robert Coe, Professor of Education at Durham University, said that John Hattie’s book, ‘Visible Learning’, is “riddled with errors”. But what are some of those errors?
The biggest mistake Hattie makes is with the CLE statistic that he uses throughout the book. In ‘Visible Learning’, Hattie only uses two statistics, the ‘Effect Size’ and the CLE (neither of which Mathematicians use).
The CLE is meant to be a probability, yet Hattie has it at values between -49% and 219%. Now a probability can’t be negative or more than 100% as any Year 7 will tell you. This was first spotted and pointed out to him by Arne Kare Topphol, an Associate Professor at the University of Volda, and his class, who sent Hattie an email.
[How the CLE is actually computed, and why it can never be negative or exceed 100%, is sketched just after this list.]
- Eacott, Scott (2017), “School Leadership and the Cult of the Guru: the Neo-Taylorism of Hattie”, School Leadership & Management, DOI: 10.1080/13632434.2017.1327428
“…the work of John Hattie, principally beginning with Visible Learning (Hattie 2009), has become not only the latest fad or fashion, almost to the point of saturation, but reached a level where it can now be labelled the ‘Cult of Hattie’. … Hattie’s work has become the dominant feature in contemporary educational leadership rhetoric in Australia – especially by the largest professional association, the Australian Council for Educational Leaders (ACEL). To be clear, this paper is not a critique of Hattie personally or the quality of his analysis, although to say I am neutral would be mistaken. Rather, I argue that the message of brand Hattie has been uncritically adopted by the masses and spread across Australian education systems in a previously unmatched scope and scale.”
- Lilley, George, “An Investigation of the Evidence John Hattie presents in Visible Learning”, Jan 18, 2016, VisibleLearning, https://visablelearning.blogspot.com/.
“Our discipline needs to be saturated with critique of ideas; and it should be welcomed. Every paradigm or set of conjectures should be tested to destruction and its authors, adherents, and users of the ideas should face public accountability.” (John Hattie, 2017, p 428).
The peer reviews are saturated with detailed critiques of Hattie’s work. Most educators do not seem to be aware of them. The aim of this blog is to raise awareness of these critiques in the hope that Hattie’s conjectures are “tested to destruction”.
A full list of 40 peer reviews of Hattie’s work can be found here:
https://visablelearning.blogspot.com/p/references.html
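As background to the Topphol and Ollieorange2 critiques above, here is a minimal sketch of how the common language effect size is conventionally defined (McGraw and Wong’s formulation, assuming normally distributed scores with equal variances; the d values fed in below are merely examples, including figures from the rankings discussed earlier). Computed this way, the CLE is a probability by construction:

```python
from statistics import NormalDist

def common_language_effect_size(d):
    """McGraw & Wong's common language effect size: the probability that
    a randomly chosen treatment-group score exceeds a randomly chosen
    control-group score, assuming both groups are normal with equal
    variances. Being the value of a normal CDF, it always lies in (0, 1)."""
    return NormalDist().cdf(d / 2 ** 0.5)

# Example effect sizes, including those discussed above (0.29, 0.40, 1.44).
for d in (-0.5, 0.0, 0.29, 0.40, 1.44):
    print(f"d = {d:+.2f}  ->  CLE = {common_language_effect_size(d):.1%}")
# Even at d = 1.44 the CLE is only about 85%, and a negative d pushes it
# toward 0% -- it can never be negative or exceed 100%.
```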
In summary, there appeared to be serious problems with the approach and findings of Hattie’s Visible Learning research. Most troubling was that the basic mathematical errors in his book went undiscovered for three years, and were then caught by statisticians, not by members of the educational community, who had uncritically accepted Hattie’s work as gospel truth. Worse still, thousands of educators around the world used Hattie’s work, as LCUSD had done in its homework forum, as a yardstick against which to measure proposed educational interventions. When confronted with some of the above critiques in February 2018, LCUSD administrators and Board members did not respond. In academia and science, a researcher whose methodology is questioned is normally expected to defend it, and a disputation follows until a common understanding is reached. Apparently such conventions of intellectual rigor are beneath practitioners in education.
After using Hattie’s findings to justify its review of homework, LCUSD returned to Hattie in the fall of 2018 to guide its selection of math intervention practices. Superintendent Sinnette announced her goals for the year to the LCUSD Governing Board at a regularly scheduled board meeting on Tuesday, October 9th, 2018. One of her minor goals was presented as below:
[Image: slide from the Superintendent’s goals presentation]
Visible Learning for Mathematics, Grades K-12: What Works Best to Optimize Student Learning (2017) is Hattie’s version of Visible Learning (VL) for mathematics. Hattie has co-authored about a dozen books since the publication of the original Visible Learning in 2009, all variations on the same theme, and all taking the same flawed analysis as their starting point. One will find VL books for math, literacy, K-5, science, teaching, and a host of other spinoffs.
Concerned that the district was again using Hattie’s pseudoscience to evaluate potential educational practices, La Canada Math Parents met with Superintendent Sinnette on Tuesday, October 9th, 2018, a few hours before her presentation to the Governing Board. Sinnette defended the use of Hattie by stating that “the book in question is highly regarded by our consultants, is researched based, and evidences many best practices in math education and instruction.” A follow-up question revealed that the consultant who had recommended the book was the Teachers Development Group (TDG), a nonprofit consulting group in Oregon whose stated mission is “dedicated to increasing all students’ mathematical understanding and achievement through meaningful, effective professional development.” TDG has been working with LCUSD for several years to help the district implement the Common Core State Standards in Mathematics and to provide other professional development.
When presented (again) with the evidence that there were severe problems with Hattie’s research approach, Sinnette said that Hattie’s book would be only one of many sources used to inform the elementary principals as they develop site plans to improve the delivery of mathematics instruction, particularly to provide differentiation of instruction.
The district has made no further attempt to defend its use of such problematic research in such an important role. It continues along, blithely indifferent to the growing body of evidence that Hattie’s approach is fundamentally flawed. Most concerning, the district seems unable to tell the difference between good research, bad research, and pseudoscience.
[1] Hattie, John (2008). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. NY: Routledge. p. 392. ISBN 978-0-415-47618-8.
[2] Mansell, Warwick (2008). “Research Reveals Teaching’s Holy Grail”, TES, Nov. 21, 2008, https://www.tes.com/news/research-reveals-teachings-holy-grail.