10 Surprisingly Common Neuromyths


What is a neuromyth? A neuromyth is an erroneous belief about how the brain (or the mind) works that is held by a substantial number of people. Neuromyths have become increasingly common and pervasive, and in some sense they represent a failure of the ‘public engagement’ that scientists are supposed to be doing. Indeed, neuromyths often have a kernel of truth, or are at least distortions of something much more reasonable, which makes them even harder to combat. But when so many people believe things about the brain that are so wrong, the consequences can be serious, because these beliefs impinge on areas of tremendous societal importance, such as education.

Think for a minute: Where do you tend to get information about the brain and the mind? How reliable are those sources? 

Here are 10 common neuromyths that impact society. Some of them leave the realm of neuroscience and touch on the closely related field of psychology, but all have at least a link to research on the brain. 


1. People Only Use 10% of their Brains

This neuromyth has formed the basis of two recent(ish) sci-fi films: Neil Burger’s Limitless (starring Bradley Cooper) and Luc Besson’s Lucy (starring Scarlett Johansson). It’s rare to see neuromyths stated as boldly as the one on the poster for Lucy: ‘The average person uses 10% of their brain capacity.’ These films seem to only add fuel to the fire of scientific misinformation. 

As has been noted many times, this is a ‘feel-good’ myth. People want to believe that they have some huge, untapped well of brainpower that they might be able to access one day. Unfortunately, marketers have latched onto this myth and used it to promote all manner of pills, courses, and ‘brain training’ games that promise to help people ‘unlock their potential’. None of them has much more than a shred of evidence for its benefit.

Obviously, the 10% myth is not true. First, it completely contradicts the research from neuropsychology, particularly studies of patients who have had strokes or other trauma that damages parts of the brain. These people lose critical functions such as speech, vision, memory, movement or grammatical understanding – often almost entirely. There is no section of the brain that can be damaged or removed without some such negative effect on function.

Second, there’s plenty of evidence from functional brain scans that, no matter what someone is doing while in the scanner (even if they’re told just to lie there and not do anything in particular), pretty much all areas of the brain show some activity.

Third, taking a wider view, it makes no evolutionary sense to have an organ in which 90% goes unused. This is especially noteworthy because the brain uses around 20% of the body’s energy despite being only 2% of its weight. If you only needed 10% of your brain to perform as you do today, then a much smaller, more streamlined brain would confer a massive advantage in energy efficiency. There are some aspects of the body that are wasteful and poorly designed, but an energy-hungry organ consuming a fifth of the body’s energy largely for no reason would surely have been strongly selected against as we evolved.
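To get a feel for those numbers, here is a back-of-the-envelope calculation. It is only a rough sketch: the 2,000 kcal daily intake is an illustrative assumption, and the 20% and 2% figures are the approximate shares cited above.

```python
# Rough estimate of the brain's daily energy budget (illustrative numbers only).
daily_intake_kcal = 2000        # assumed typical adult intake
brain_energy_share = 0.20       # ~20% of the body's energy, as cited above
brain_mass_share = 0.02         # ~2% of body weight

brain_kcal = daily_intake_kcal * brain_energy_share
print(f"Brain energy use: ~{brain_kcal:.0f} kcal/day")
print(f"Energy use per unit mass vs. body average: "
      f"~{brain_energy_share / brain_mass_share:.0f}x")
```

On these assumptions the brain burns roughly 400 kcal a day, about ten times more energy per unit mass than the body’s average tissue – which is exactly why an organ that was 90% idle would be so evolutionarily implausible.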

The psychologist Barry Beyerstein attempted to trace the 10% myth back to its origins. His best guess is that it arose from a misunderstanding of something written by the philosopher and ‘father of psychology’ William James in the late 19th and early 20th centuries. James wrote that he thought it unlikely that the average person reached 10% of their potential. That was possibly a little overstated, but it’s largely a matter of opinion rather than a scientific question, and it certainly isn’t a claim about anything biological. Yet somehow, over the years, ‘10% of their potential’ became ‘10% of their brains’, and we were left with an enduring popular myth that refers to the actual physical structure of the brain rather than the abstract notion of people’s untapped potential.

2. ‘Right-brained’ vs ‘Left-brained’

This neuromyth is so well-known that it’s become a household phrase. It’s very common to describe someone who’s creative as ‘right-brained’ or highly analytical as ‘left-brained’. But there is no scientific evidence to support this idea at all.

This myth is also probably an over-extrapolation of what some researchers have previously stated – in this case, most likely about ‘lateralisation’ in the brain. Lateralisation refers to the fact that some skills – for example, language – seem to be more strongly represented in one brain hemisphere than in the other. In the case of language, that hemisphere is usually the left. However, ideas like this one are a massively dumbed-down version of the truth, which is much messier. It is also possible that this neuromyth emerged from a misunderstanding of the split-brain work of Nobel Prize winner Roger Sperry, who noticed differences in behavior when he studied people whose left and right hemispheres had been surgically disconnected.

Indeed, functional magnetic resonance imaging studies show that both hemispheres are active when people engage in creative tasks. The set of brain images on the left below shows that, while performing a ‘divergent thinking’ task in the scanner (the task where you have to come up with as many uses for an object as you can within a time limit – a rather poor proxy for creativity writ large), participants showed strong activation in both hemispheres. There’s certainly no reason to think that the right hemisphere had some kind of specific role.

[Figure: brain activation during a divergent-thinking task (left) and during piano improvisation (right), showing activity in both hemispheres]

The brain images on the right are from a study in which professional pianists improvised while having their brains scanned. Again, there was activation in both hemispheres. If anything, the top row (which shows activity associated with the goal of pressing specific piano keys) shows stronger activation in the left hemisphere, whereas the bottom row (which shows activity associated with the goal of expressing specific emotions) shows activation in both the right and the left.

Overall, there’s absolutely no reason to think that creativity is, in any sense, ‘localised’ to the right hemisphere of the brain. The review paper from which these brain images come concludes that ‘creative thought involves dynamic interactions of large-scale brain networks’. Nothing in this up-to-date review of the brain and creativity suggests that neuroscientists have any use for the idea of some people being ‘right-brained’.

3. There are Multiple Intelligences

The Harvard psychologist Howard Gardner came up with the theory of multiple intelligences in the 1980s. I say ‘came up with’ because there wasn’t any actual data involved in this theory: it was simply Gardner’s opinion that multiple intelligences exist and that their levels can vary wildly across different people. If the lack of empirical data didn’t worry you enough, you should also know that Gardner has, apparently on a whim and certainly not on the basis of any actual data, added new ‘intelligences’ to his theory over the years. To his original list of seven intelligences – including visual-spatial intelligence (favoured in artists), interpersonal intelligence (favoured in occupations that involve dealing with people) and intrapersonal intelligence (favoured in novelists, who need to be in touch with their own emotions) – he added naturalistic intelligence (the ability to know the names of flowers and animals in nature) in 1995 and, in the early 2000s, existential intelligence and moral intelligence, and he has even suggested that cooking intelligence and sexual intelligence, among others, might also need to be added.

To reiterate: no statistical analysis went into this and these categories are not based on any measurements or metrics. Gardner just decided these new intelligences should be part of the picture.

The second thing you might wonder is what the actual data say about the topic of intelligence. Do we have all these skills that are, as Gardner would suggest, broadly independent of one another (such that you’d regularly find people who are high on one skill but not others)? 

The answer, from 100 years of research on intelligence, seems to be a resounding: no. An enormous amount of data attest to the fact that if you test people on a wide range of mental tests, their scores will tend to correlate positively together – people who are good at one test tend to be good at them all. 

That is, the different tests might all share something in common. That ‘something’ has been described as ‘general intelligence’ or the ‘g-factor’. Obviously, people can specialize – there are specific abilities (being particularly good at spatial tasks, for instance) that only partly overlap with g. But there is a big part of intelligence that’s general, and this is in direct contradiction to Gardner’s ‘multiple intelligences theory’. 
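To see what this ‘positive manifold’ looks like in practice, here is a minimal simulation. It assumes a simple one-factor toy model (every test score is a shared general ability plus test-specific noise); the loadings, sample size, and number of tests are arbitrary illustrative choices, not estimates from any real dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

# Toy one-factor model: each test score = shared ability g + test-specific noise.
g = rng.normal(size=(n_people, 1))
loadings = np.full(n_tests, 0.6)          # how strongly each test reflects g
noise = rng.normal(size=(n_people, n_tests))
scores = g * loadings + noise * np.sqrt(1 - loadings**2)

# Every pairwise correlation comes out positive -- the 'positive manifold'
# that motivates extracting a general factor.
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))
```

With loadings of 0.6, the simulated tests correlate with one another at roughly 0.36 – broadly the kind of positive pattern real mental-test batteries show, which is what the g-factor summarises.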

Probably the best, most balanced textbook on intelligence is Mackintosh’s IQ and Human Intelligence.

It’s worth taking a moment to discuss IQ tests as well. There is a widely held myth that ‘IQ tests just tell you how good you are at doing IQ tests’. However, IQ tests – while certainly flawed in all sorts of ways – are some of the most reliable and predictive tests we have in the whole of psychological science. 

[Figure: IQ score (in nine bands) and relative risk of death over the following 20 years]

Consider this graph: it shows the relation between IQ score (split into nine levels) on the x-axis and risk of death in the 20 years following the IQ test on the y-axis. The data come from almost a million men who did an IQ test as part of their military service in Sweden. You can see that there’s an impressive, staircase-like relation between IQ and the risk of death: the people with higher IQ scores were less likely to be dead at the follow-up. The way to read the graph is that if the men in the highest IQ category, ‘9’, have a chance of dying that’s set at ‘1’, then people in the lowest category have a chance that’s almost three-and-a-half times higher than that. By the way, this has been shown in other studies for women too – there’s a similar link. 
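To make the ‘three-and-a-half times higher’ figure concrete, here is the relative-risk arithmetic with made-up numbers; these are purely illustrative rates, not the Swedish study’s actual data.

```python
# Hypothetical 20-year death rates per 1,000 men (illustrative only,
# not the figures from the Swedish conscript study).
deaths_per_1000_highest_iq = 20   # reference group, plotted as a risk of 1.0
deaths_per_1000_lowest_iq = 68

relative_risk = deaths_per_1000_lowest_iq / deaths_per_1000_highest_iq
print(f"Relative risk: {relative_risk:.1f}x")   # ~3.4x, roughly what the graph shows
```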

Why do smarter people live longer? There are at least four reasons and they aren’t mutually exclusive. The first reason might be that being smarter helps you get a better education and a better job and, thus, more money to live in nicer areas and access better quality healthcare. The second might be that smarter people look after themselves better: IQ is negatively correlated with smoking and getting into car accidents, for example. 

The third reason might be that the lower IQ is a signal that things haven’t gone well for you as you developed: maybe you’re from a deprived background, or you had malnutrition or an illness as a child – this might have affected your IQ by stopping your brain from developing to its full potential, and it also might affect your life expectancy. The fourth reason might be that some luckier people just have better-built systems: the genes that build a healthier brain may also build healthier hearts, lungs and so on. 

Whatever the reason, IQ tests do predict mortality, as well as educational attainment, occupational success, physical and mental health, and many other things. The myth that they only tell you how good you are at IQ tests has been comprehensively debunked. 

4. There are Different ‘Learning Styles’

This neuromyth has become enormously popular in schools and universities, and it can have damaging consequences for students. The theory goes that people differ in the way they learn because of some innate property of their brains, and that teaching should therefore be customised to each student’s particular style.

Probably the most common taxonomy of learning styles is VAK: Visual-Auditory-Kinaesthetic. Now, nobody is denying that some people have a preference for learning in a particular way: people revise differently for tests and exams and do what feels right to them (though other evidence indicates that people are very bad at judging which way of studying is actually best, and end up working in suboptimal ways). But the important question is whether there really are these brain types, these learning styles, that would require teachers to change the way they teach to accommodate them.

Still the best discussion of the idea of learning styles is the paper published in the journal Psychological Science in the Public Interest by Hal Pashler and colleagues over a decade ago. They set out what kind of evidence would be required to convince scientists that learning styles are a real thing that should be considered by teachers in the classroom.

Overall, as Pashler and colleagues say, to justify personalising teaching – and to justify the idea that there’s a taxonomy of different learning styles with a real impact in the classroom – you would need to see solid evidence that learning is optimised when instruction matches a student’s style and suboptimal when it doesn’t. But here’s the catch: the number of studies that provide this ‘acceptable’ evidence is... zero. I’m not aware of any such research appearing in the decade since Pashler’s paper, despite the enduring popularity of learning styles as an idea in schools.
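The ‘acceptable’ evidence Pashler and colleagues asked for amounts to a crossover interaction: classify learners by style, randomly assign them to teaching methods, and show that each group does best under its matched method. The toy data below sketch what such a result would look like; every number is invented purely for illustration, since no real study has produced this pattern.

```python
# Hypothetical mean test scores from a style-by-method experiment (invented data).
# Keys are (assessed learning style, teaching method used).
means = {
    ("visual", "visual_teaching"): 75,
    ("visual", "auditory_teaching"): 60,
    ("auditory", "visual_teaching"): 60,
    ("auditory", "auditory_teaching"): 75,
}

# A crossover interaction means matched conditions beat mismatched ones
# for *both* groups -- the pattern the learning-styles idea predicts.
matched = means[("visual", "visual_teaching")] + means[("auditory", "auditory_teaching")]
mismatched = means[("visual", "auditory_teaching")] + means[("auditory", "visual_teaching")]
print("Crossover interaction present:", matched > mismatched)
```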

A neuroscientist’s advice to educators: there’s no reason to customize your curriculum. There’s no evidence, as yet, to support the idea of ‘learning styles’.

5. ‘Brain-boosting’ Games Can Increase Your Intelligence

You’ve probably come across someone who is a fan of ‘brain training’ computer games and claims to be training their brain in order to increase their general intelligence. It’s not possible. So where did this idea even come from?

[Figure: the n-back memory test]

A high-profile paper in 2008 by Jaeggi and colleagues kicked the whole thing off. Specifically, the authors used ‘n-back’ working memory training. When you’re doing n-backs, you see a series of little white squares presented one at a time in different places on a blank screen, and you have to remember where the square was ‘n’ presentations back in the sequence. This is very easy when it’s ‘2-back’ – it isn’t hard to recall where the square was just two presentations ago – but it gets more and more difficult as you go up to 5-, 6- or 7-back, simply because of the sheer amount of information you have to hold in memory. What makes it even harder is that, at the same time, you have to do the same thing with letters played through headphones. It’s pretty nightmarish.
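For a sense of the mechanics, here is a minimal sketch of a single-stream (visual-only) n-back sequence and how matches are counted. The real dual n-back adds a simultaneous auditory letter stream, and the grid size and sequence length here are just illustrative choices.

```python
import random

def n_back_matches(sequence, n):
    """Return the indices at which the current item repeats the item n steps back."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

# Illustrative example: square positions on a 3x3 grid, coded 0-8.
random.seed(1)
positions = [random.randrange(9) for _ in range(20)]

n = 2
matches = n_back_matches(positions, n)
print(f"{len(matches)} of {len(positions) - n} trials are {n}-back matches")
```

A participant’s job is to respond whenever one of those matches occurs, while simultaneously monitoring the auditory stream for its own matches.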

The authors claimed that regular training on this dual n-back task boosted people’s intelligence and thus started a craze for brain training.

But the research since 2008 hasn’t been kind to the n-back hypothesis.

Meta-analytic studies have shown that what’s happening with working memory training is ‘near transfer’ – that is, when you train working memory with tasks like the n-back you get better at doing n-backs, and also doing other quite similar working memory measures. But there’s very little evidence for ‘far transfer’ – the effect of working memory training on cognitive skills that aren’t working memory tasks. So, whereas you might get better at verbal working memory tasks (which might be great and fair play to you if this specific skill is what you’re after), it doesn’t seem like you’ll get much better at tasks like arithmetic or intelligence more broadly. 

So, the working memory hype boils down to ‘if you train hard at doing a task, you’ll get better at doing that specific task’. Hardly revolutionary. But sadly, many people still believe the myth sold to them by marketers (and, alas, a few of the scientists who were responsible for the initial working-memory-training findings) that this kind of brain training can improve their general abilities.

Sometimes there are consequences when the marketing goes way beyond the science. The company Lumosity had to pay a $2 million fine to the US government in 2016 after being charged with ‘deceptive advertising’ in promoting its brain training products, which it claimed could raise your cognitive abilities and stave off age-related cognitive decline. Since then, other companies have been far more circumspect in the claims they make for their products.

6. Brain Development is Finished by Puberty

Like other neuromyths, this belief is grounded in some truth but is not articulated accurately. The human brain does reach about 90% of its adult size by roughly age 6-7 and does not increase much in size after puberty (girls seem to reach their largest brain size around age 11, and boys around age 14). However, the brain continues to develop well into the mid-to-late 20s, especially in the prefrontal cortex, which is critical for executive function and decision-making. During the later teenage and early adult years, the folding of the cortex becomes more complex and there is a significant increase in myelination and synaptic pruning in the prefrontal cortex, which improves the efficiency of information processing.

Strikingly, many of these big changes in the brain occur in late adolescence, which is also when many mental health disorders commonly emerge – schizophrenia, anxiety, depression, bipolar disorder, and eating disorders among them.

The limbic system, which plays a key role in evaluating rewards and punishments and in emotional processing, develops a few years ahead of the prefrontal cortex. During puberty, hormones act directly on the amygdala, making emotional sensations more compelling, and levels of the neurotransmitters dopamine and serotonin shift in ways that make young teens more responsive to emotions and stress. fMRI studies suggest that cognitive control is not fully developed until the mid-20s, because the prefrontal cortex is still undergoing this myelination and synaptic pruning process. Because of this relative immaturity of the prefrontal cortex, adolescents are more likely to seek immediate thrills from risky behavior, such as smoking, drinking, or drug use, as they are less able to deploy cognitive control to weigh potential future risks.

7. Mental Capacity is Something You’re Born With and It Cannot Change

Neuroplasticity, also known as neural plasticity or brain plasticity, is the ability of neural networks in the brain to change through growth and reorganization. The brain is not a "hard-wired" structure with fixed neuronal circuits: there are many documented instances of cortical and subcortical rewiring of neuronal circuits in response to training or during recovery from injury. There is also now ample evidence, much of it from the Karolinska Institute in Stockholm, Sweden, that neurogenesis (the birth of new brain cells) occurs in the adult mammalian brain, a process that continues well into old age.

So what does this mean for you, exactly? That it’s possible to improve focus and mental energy. Your brain, like any other organ in the human body, can be trained and optimized. All of the cells in your body run on adenosine triphosphate, or ATP, which is made by your mitochondria. About 20% of the ATP produced in your body every day is used to fuel your brain, which is why brain cells are so densely packed with mitochondria. To improve your focus and mental energy, you’ll want to keep your mitochondria in good shape by exercising your mental muscles (your brain cells live and die by the use-it-or-lose-it principle).

To support optimal brain health, you’ll want to optimize your diet, exercise, and sunlight exposure, reduce your exposure to toxins, manage stress, optimize your sleep, and expose yourself to new stimuli.

8. When You Sleep, Your Brain Shuts Down

One of the most pervasive neuromyths is that our brains shut down while we sleep. In fact, it’s closer to the opposite: the brain remains highly active during sleep. It is the body that takes a break, while the brain coordinates a range of physiological processes that repair and protect it. Sleep is what we call a neuroprotective process: when you sleep, your brain clears out cellular waste and repairs old and damaged brain cells.


A minimum of around 7 hours of sleep per night seems to be necessary for proper cognitive and behavioral function. Even moderately poor sleep can impair your reaction time, attention, memory, and mental accuracy, whereas a good night’s sleep (with plenty of quality deep sleep) can give you up to 50% faster mental processing, a more stable mood, and an increased capacity for learning.


To optimize your sleep, avoid blue-light exposure before bed, keep your bedroom cool, don’t eat too late, and stop all caffeine consumption by 2:00 PM.

9. The ‘Mozart-effect’ Will Make Your Children Smarter

The ‘Mozart-effect’ was first ‘discovered’ in 1993 in a paper in the top scientific journal Nature and quickly became a worldwide sensation. The idea is that you can improve people’s intelligence (particularly babies) just by playing them Mozart! This proved irresistible to marketers, who took a finding that had been made on a small number of students and applied it to babies. The ‘baby Mozart’ phenomenon (play Mozart to your child, and maybe they’ll become smarter) had always seemed implausible, but the findings crumbled when they were meta-analyzed in 2010. 

A 2010 paper entitled ‘Mozart Effect – Schmozart Effect’, reported a meta-analysis of all the Mozart effect findings. The overall effect of listening to Mozart on intelligence was – as you may be able to tell from the paper’s title – essentially zilch. The original result was likely a fluke (and this is something we’re finding from a lot of psychological studies recently, as the subject goes through its so-called ‘Replication Crisis’). 

Does this mean that you shouldn’t bother playing your children (or yourself) Mozart or other classical music? Of course not. Mozart is great!

10. Multitasking is Possible

This neuromyth has not only become incredibly pervasive, but has somehow also morphed into an idea that women are even better multitaskers than men. Simply put: the brain can’t attend to two or more attention-rich stimuli simultaneously. Multitasking doesn’t work.

A study by Naveh-Benjamin and colleagues offers a deeper understanding of the implications of multitasking for learning. The authors noted a significant difference between encoding and retrieval when people process information while multitasking: encoding requires more attention than retrieval, and divided attention during the encoding phase of learning significantly reduced memory. Since encoding is the first of the three memory stages (1. encoding, 2. storage, 3. retrieval), this research implies that the quantity and quality of memory are profoundly diminished by multitasking.


An fMRI study revealed that while multitasking, the brain attempts to store information in the striatum, which does not result in a strong ability to recall that information. Conversely, when not multitasking, information is encoded through the hippocampus, a region of the brain involved in sorting, processing and recalling information, and critical for declarative memory (memory for facts and events). 


Bonus: Smarter People Have Bigger Brains

This ‘neuromyth’ is actually true, and supported by a great deal of scientific evidence. It turns out that measures of brain volume from structural MRI scans do correlate with measures of IQ.

In 2015, a meta-analysis by Pietschnig and colleagues gathered all the studies that had ever been performed in which there was both an IQ measure and a structural MRI brain scan, and assessed the overall correlation, which can be seen in the meta-analytic ‘forest plot’ below. The further the central black square is to the right, the larger the correlation between total brain volume and full-scale IQ score.

[Figure: forest plot of the correlation between total brain volume and IQ across studies, from Pietschnig and colleagues (2015)]

The bigger the square, the bigger the sample size. 

The overall correlation – the average of all the samples which, when added together, include around 5,000 people – is shown at the bottom of the second column next to ‘RE model’ (which stands for ‘random-effects’ model, a standard way of doing meta-analysis). It came to a correlation of r = 0.26 on the usual correlation scale between -1 and 1. That is, it’s a pretty modest-sized correlation, but it definitely exists. 
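For anyone curious how correlations from many separate samples get combined into one number, here is a simplified sketch of the standard approach: convert each correlation to Fisher’s z, take an inverse-variance-weighted average, and convert back. (This is a fixed-effect simplification rather than the random-effects model used in the published analysis, and the study values are invented for illustration.)

```python
import math

# Hypothetical (correlation, sample size) pairs -- not the actual studies
# from the Pietschnig et al. meta-analysis.
studies = [(0.31, 120), (0.22, 300), (0.28, 85), (0.24, 450)]

# Fisher z-transform each correlation; the variance of z is 1 / (n - 3),
# so each study is weighted by n - 3.
weighted = [(0.5 * math.log((1 + r) / (1 - r)), n - 3) for r, n in studies]
pooled_z = sum(z * w for z, w in weighted) / sum(w for _, w in weighted)

# Back-transform the pooled z to a correlation.
pooled_r = math.tanh(pooled_z)
print(f"Pooled correlation: r = {pooled_r:.2f}")
```

Larger samples get more weight, which is also why the bigger squares in the forest plot pull the pooled estimate toward them.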

What we don’t really know is why having a bigger brain is related to better performance on IQ tests. Is it to do with the sheer number of neurons? Are the neurons themselves bigger and, thus, healthier and more robust, giving better cognitive performance? We simply don’t know at this point. Overall, though, it would be incorrect to deny that the correlation exists, especially now that there’s so much research on it.


For further debunking of myths from the world of education – mainly aimed at teachers but of interest to anyone – see the book Urban Myths about Learning and Education by De Bruyckere, Kirschner and Hulshof.

For a broader and more detailed discussion of myths, not restricted to education but covering neuroscience and psychology more generally, see Tall Tales about the Mind and Brain, an edited volume by Sergio Della Sala.
