That’s what happened to a new (old) study showing that selling junk food to kids in school doesn’t lead to overweight kids. According to the study, “Competitive Food Sales in Schools and Childhood Obesity: A Longitudinal Study,” led by Penn State sociology professor Jennifer Van Hook:
Employing fixed effects models and a natural experimental approach, they found that children’s
weight gain between fifth and eighth grades was not associated with the introduction or the duration
of exposure to competitive food sales in middle school. Also, the relationship between competitive foods
and weight gain did not vary significantly by gender, race/ethnicity, or family socioeconomic status, and it
remained weak and insignificant across several alternative model specifications. (bolded for emphasis)
The real travesty is that Prof. Van Hook sat on the data for almost two years.
Van Hook said that the findings surprised the researchers so much that they held off publishing for nearly two years “because we kept looking for a connection that just wasn’t there.”
This is a problem with a lot of junk science today. Many researchers fall victim to belief bias: they judge the validity of research by whether it matches what they already believe the conclusion should be. In this case, Dr. Van Hook had already made up her mind that junk food in middle schools should lead to more overweight kids. When the data fail to show a correlation, the researcher assumes there is an error in the data, not an error in their own thinking. They then tease (more like torture) the data to fit the preconceived paradigm. Here, they couldn’t torture the data enough to find anything that fit what they thought ought to be true. The opposite also happens in academia all the time: when researchers have one piece of data that confirms their bias, they tout it as proof positive that their hypothesis is right.
To be clear, there is nothing wrong with double-checking a surprising result; this is how science is done. You make a hypothesis, design an experiment, and look at the data to see if it fits your hypothesis. Three things can happen. First, the data can fit your hypothesis, in which case you run different experiments to test it further; if repeated experiments all confirm the hypothesis, you can reasonably conclude it is correct. Second, the data can refute your hypothesis, in which case you reject it and try again. Third (and this is common), some systematic error in your experiments makes the data inconclusive, and the only thing to do is reformulate your experimental procedure to eliminate the error. That is what should happen.
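What “the data refutes your hypothesis” looks like in practice can be made concrete with a quick simulation. The sketch below is purely illustrative and has nothing to do with the actual study’s data: it fabricates weight-gain numbers for two groups drawn from the same distribution (so no real effect exists) and runs a simple permutation test, the kind of check that should lead an honest researcher to report “no association”:

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:  # shuffled labels produce a gap at least this big
            hits += 1
    return hits / n_perm

# Hypothetical weight-gain numbers (pounds) for students with and without
# junk-food sales at school -- drawn from the SAME distribution, so the
# null hypothesis of "no effect" is true by construction.
data_rng = random.Random(42)
with_sales = [data_rng.gauss(10.0, 4.0) for _ in range(150)]
without_sales = [data_rng.gauss(10.0, 4.0) for _ in range(150)]

p = permutation_p_value(with_sales, without_sales)
print(f"permutation p-value: {p:.3f}")
# A large p-value gives no grounds to reject "no association"; the honest
# move is to accept that, not to keep torturing the data for an effect.
```

When the p-value comes back large, the correct conclusion is the boring one: the hypothesized effect is not supported, which is exactly what the Van Hook study reported.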
The problem now is what to do with all the legislation that was passed aiming to help the children. Policies were put into place based on bad science: lawmakers assumed junk food in schools WAS the cause of obesity before any data could be examined. This is the central fallacy of most statist (paternalist) solutions to societal problems: they are rarely based on any actual science. The same can be said for cell phone bans around the country, when there is no evidence that banning cell phone use while driving actually accomplishes anything.
The other thing about this junk food study is that it shows once again that the conventional wisdom is usually wrong. It shows that academics are the easiest people to fool, and it shows the depths to which people will hold on to their beliefs even when the data are staring them in the face, telling them they are wrong. Still, I have to give credit where credit is due: the researcher, Dr. Van Hook, actually published the study. A lot of researchers get so married to their pet hypothesis that they will never publish anything that might refute it.
I highly recommend listening to this Econtalk podcast with Gary Taubes.
Gary Taubes, author of Good Calories, Bad Calories, talks to EconTalk host Russ Roberts about what we know about the relationship between diet and disease. Taubes argues that for decades, doctors, the medical establishment, and government agencies encouraged Americans to reduce fat in their diet and increase carbohydrates in order to reduce heart disease. Taubes argues that the evidence for the connection between fat in the diet and heart disease was weak yet the consensus in favor of low-fat diets remained strong. Casual evidence (such as low heart disease rates among populations with little fat in their diet) ignores the possibilities that other factors such as low sugar consumption may explain the relationship. Underlying the conversation is a theme that causation can be difficult to establish in complex systems such as the human body and the economy.