Ten Common Misunderstandings, Misconceptions, Persistent Myths and Urban Legends about Likert Scales and Likert Response Formats and their Antidotes
Copyright: © 2020 James Carifio and Rocco J. Perla. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
A recent article by Jamieson in Medical Education outlined some of the (alleged) abuses of "Likert scales," with suggestions about how researchers can overcome some of the (alleged) methodological pitfalls and limitations. However, many of the ideas advanced in the Jamieson article, as well as in a great many of the articles it cited, and in similar recent articles in medical, health, psychology, and education journals and books, are themselves common misunderstandings, misconceptions, conceptual errors, persistent myths, and "urban legends" about "Likert scales" and their characteristics and qualities that have been propagated and perpetuated across six decades for a variety of reasons. This article identifies, analyses, and traces many of these problems and presents the arguments, counter-arguments, and empirical evidence showing that these persistent claims and myths about "Likert scales" are factually incorrect. Many studies have shown that Likert scales (as opposed to single Likert-response-format items) produce interval data, and that the F-test is very robust to violations of the interval-data assumption and to moderate skewing; it may therefore be used to analyze "Likert data" (even if those data are ordinal), but not on an item-by-item "shotgun" basis, a current research and analysis practice that must stop. After sixty years, it is more than time to dispel these particular research myths and urban legends, along with the damage and problems they cause, and to put them to bed once and for all.
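The distinction the abstract draws, analyzing a summed multi-item Likert scale score rather than individual items, can be illustrated with a minimal simulation sketch. The code below is not from the article; the group sizes, item counts, response-probability weights, and function names are all hypothetical. It sums eight 5-point items into a scale score per respondent (the kind of composite the authors argue yields interval-like data) and computes a one-way ANOVA F statistic on the summed scores of two groups.

```python
import random

random.seed(42)

def likert_scale_scores(n, weights, n_items=8):
    # Hypothetical respondents: each answers n_items 5-point Likert items
    # drawn with the given category weights; the scale score is the sum
    # across items (range 8..40), not a single item's ordinal response.
    return [sum(random.choices([1, 2, 3, 4, 5], weights=weights)[0]
                for _ in range(n_items))
            for _ in range(n)]

# Two hypothetical groups with slightly different response tendencies
group_a = likert_scale_scores(100, weights=[1, 2, 4, 3, 1])
group_b = likert_scale_scores(100, weights=[1, 1, 3, 4, 2])

def f_statistic(*groups):
    # One-way ANOVA F: between-group mean square over within-group mean square
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

F = f_statistic(group_a, group_b)
print(f"F(1, 198) = {F:.2f}")
```

Running the F-test once on the summed scale score is the practice the article defends; running it separately on each of the eight items would be the item-by-item "shotgun" analysis the abstract says must stop.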