You can read my initial reactions in an earlier post, but now that I've had more time to digest the results (and others have had more time to react to them), a couple of things are worth noting:
First, as is frequently the case with the Big-Splash Popularizations of research, the discussion continues, but that discussion isn't allowed to make a splash*.
This is a general problem, not limited to any particular type of science reporting.
It is a Bad Thing for the understanding of science. Just as popularizations of study findings which were later retracted (or never actually approved for publication in the first place) can still be found, uncorrected, all over the Internet, bits and pieces of that retracted research will still float around in the brains of the original readers of those popularizations. The incentives to correct this problem appear nonexistent.
That argument is not about the Ingalhalikar et al. study specifically, and I am not saying that it will be retracted or will fail to replicate. The point I want to make is that neither the science community nor the community of science journalists has any incentive to resist the Big Splash form of science reporting, even when something that causes a big splash may have a higher-than-usual probability of eventually being found incorrect, or at least exaggerated or misapplied.
The research community rewards new research, not replication of old research or criticism of the research of others (unless that is the basis for new research). The research community definitely does NOT get rewarded for negative findings.** Hence the usual omission of any mention of gender differences when they were not found. That biases the findings of the field on, say, brains and gender, towards reporting differences, not similarities.
The journalistic community is stressed for time and expertise, and under pressure to get more eyeballs on the popularizations. The arrival of the Internet has created a setup where the first to get to the goal line gets the goodies! So if a press release about a new study is sent to all the journalists before the study is even available in published form, and if it looks like a Big Splash, out go the popularizations! A few careful science writers ask for comments from someone not involved in that research. Often those comments are fuzzy, general and cold weak tea, and that's probably because the paper isn't actually available for scrutiny yet!
By the time perhaps critical responses to the study start dribbling in, the journalists are pressed to look for new Big Splashes, to get more eyeballs and more clicks for the advertisers.
What is supposed to keep this system from collapsing?
I think it's the idea of peer review. If academic peers have passed an article, it must be OK, right? But the peer review system was never as strong as outsiders might think, and the proliferation of new research and new e-journals for science, together with the increased pressure to publish-or-perish, not only in the top research universities, has created impossible demands for those who do the reviewing.
Most don't have access to the original data, and even if they did, they would not have the days, months or years that looking at the analyses in detail would require. Indeed, peer review was never expected to do that kind of work, just to check that the numbers, calculations and so on looked reasonable.
A different problem arises when a study is about a particular theory and uses a particular, perhaps novel method. If the peer reviewers are selected on the basis of their knowledge of the theory, they may know nothing about the method, not even whether it is the correct one for the particular questions. If the peer reviewers are selected on the basis of their knowledge of the methodology, they may not be able to state whether the conclusions about the theory are correct.
This is a particular problem in social sciences. In some sense, every review of empirical research should have a statistician looking at the study. But statisticians are few and already busy with reviews of their own fields.
Then you have the bubble fields. In those, all peers believe in the same basic theories, so the reviews they provide will never attack the theories themselves.
Second, the tendency for negative findings to be left in the file drawer, unpublished, could be particularly severe in fields such as the study of gender differences. A field named like that could be (just could be!) especially tuned towards seeking differences.
As there is no field of gender similarities, what happens to those studies which don't find a gender difference, after looking for it?
My guess is that they are often recast to be about something other than gender, something which can be viewed as a positive finding. If that is true, then these particular results about gender won't have much impact on our discussions about gender and the brain, say. In short, my hypothesis is that the field which studies gender differences will under-report gender similarities. The overall impact of that is to tilt the published research in one direction. A useful project would be to go through all the brain studies which didn't find a gender difference, despite including gender in the variables studied, and add those to the existing literature on brain and gender.
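The tilt this kind of selective publication produces can be illustrated with a quick simulation (a sketch with made-up numbers, not a model of any real field: assume there is NO true difference between the groups, a conventional 5% significance threshold, and that only "significant" differences get written up as gender findings):

```python
import random
import statistics

random.seed(42)

def run_study(n=30):
    """Simulate one study comparing two groups drawn from the SAME
    distribution -- i.e., there is no true gender difference."""
    men = [random.gauss(0, 1) for _ in range(n)]
    women = [random.gauss(0, 1) for _ in range(n)]
    # Crude two-sample test on the difference in means.
    diff = statistics.mean(men) - statistics.mean(women)
    se = (statistics.variance(men) / n + statistics.variance(women) / n) ** 0.5
    return abs(diff / se) > 1.96  # "significant" at roughly the 5% level

results = [run_study() for _ in range(1000)]
found_difference = sum(results)

# Around 5% of studies "find" a difference by chance alone. If only
# those get published as gender findings, the published record on
# this question consists entirely of difference-findings, even though
# the truth in this simulation is perfect similarity.
print(f"studies finding a 'difference': {found_difference} / 1000")
```

The point of the sketch is just the last step: the roughly 95% of studies that (correctly) find similarity never enter the visible literature, so a reader of the published record sees nothing but differences.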
Third, those prior beliefs! The Economist's popularization of the Ingalhalikar et al. study is probably the worst in this respect. It begins with a whole paragraph on the author's own prior beliefs:
MEN and women do not think in the same ways. Few would disagree with that. And science has quantified some of those differences. Men, it is pretty well established, have better motor and spatial abilities than women, and more monomaniacal patterns of thought. Women have better memories, are more socially adept, and are better at dealing with several things at once. There is a lot of overlap, obviously. But on average these observations are true.
Suggesting why they are true in evolutionary terms is a game anyone can play. One obvious idea is that because, in the days of hunting and gathering, men spent more time wandering away from camp, their brains needed to be adapted to be able to find their way around. They also spent more time tracking, fighting and killing things, be they animals or intrusive neighbours. Women by contrast, politicked among themselves and brought up the children, so they needed to be adapted to enable them to manipulate each other’s and their children’s emotions to succeed in their world.
This is an excellent description of what the person who wrote this particular popularization believes! But it has very little or nothing to do with the imaging findings of the Ingalhalikar et al. study.***
Prior beliefs matter in this field tremendously, and mostly for gender-political reasons. This is very clear from the comments threads to the popularizations.
But prior beliefs of the researchers themselves matter, too****. Suppose that those with prior beliefs in, say, evolutionary ("hard-wired") gender differences in cognition tend to choose fields such as evolutionary psychology or the study of gender differences in neuroscience more often than those whose prior beliefs are not so strongly linked to evolutionary explanations or the idea of "hard-wiring". What effect might this have on the research that is produced?
I'm not sure, because some researchers are able to ignore their own prior beliefs in their work. But I think this could introduce a subtle bias. To flip that around, those whose prior beliefs about gender differences privilege cultural and environmental explanations might be more likely to enter fields such as feminist studies. In both cases the prior beliefs could influence how research topics are picked (and what is not studied) and how the results are interpreted.*****
Is that important to keep in mind when writing (or reading) popularizations? I believe it is.
-----
*One example of this process is discussed (contents: race and intelligence) here (pdf).
**It's somewhat ironic that the popularization the Economist has published on the Ingalhalikar et al. study is pretty weak on the very aspects that the earlier Economist article critiques.
***I love to read that description of how the author believes our prehistoric ancestors lived! It assumes that women seldom left "the camp" (given that most evolutionary psychologists assume that the prehistoric people they theorize about were small nomadic tribes, the concept of "a camp" is debatable), that gatherers didn't have to learn how to find their way around at all, that "politicking" is something men clearly haven't evolved to do (which rather contradicts current political arenas almost everywhere), and that women and men hardly interacted, so women only had to "manipulate" the emotions of other women and children while men didn't have to learn to "manipulate" anyone's emotions. Not to mention the fact that "manipulation of emotions" isn't something the Ingalhalikar et al. study (or any study?) has studied as an evolutionary adaptation.
All that is fun, of course, and as I stated, it tells much about the person who wrote this particular popularization. But sadly, we cannot go back in time to collect information about gender roles at various points in time, including the hypothetical period in the Pleistocene when evolutionary psychologists assume that various adaptations "stuck" to us. That means that stories of this kind really are Just-So stories. Or rather, they are stories to "explain" differences observed today as evolutionary adaptations, and the differences observed today depend on one's own framework. Thus, this author seems to think that women are emotionally manipulative and then works backwards to find an explanation for it.
****A neurogeneticist, say, who recommends Simon Baron-Cohen's book The Essential Difference as good reading in the field of differences between the sexes certainly comes across as biased to me, because I have read the book. Baron-Cohen argues that there is a male brain which is wired for building and understanding systems and a female brain which is wired for empathy!
He created a test for measuring such important differences. The test contains many biased questions (many more than that short post covers) which steer answers in a certain direction by using male hobbies as examples in the systematizing questions and by not using female hobbies as such examples. It even included a question about being able to understand the electrical wiring in a house. Given that this can be learned over time, we are not measuring something innate. Given that taking care of house repairs is coded male in this culture, we are tying the test results to gender roles. Essentially all the systematizing questions or assertions in the original test that used examples picked those examples from male hobbies, thus making it less likely that women would come across as systematizing.
Despite that, the test seems to have trouble correctly predicting "male" brains and "female" brains by gender, and it doesn't seem to have anything to say about the idea that one might score high in both systematizing and empathizing, or in neither.
*****Or as one commentator at Mindhacks.com jokingly puts it:
I would really like to see a study which examines if the brains of neuroscientists who believe there is a fundamental difference between men and women are wired differently than those of neuroscientists who don’t.
------
A postscript, added later:
Yes, I know I said I was finished with the topic (and there might be more if I ever get a response to an e-mail question I sent the study authors), but this piece in the Guardian is worth reading, mostly because it is openly and boldly on the other side of all the other popularizations I have read, and we do need some balance in the coverage.
It gives one of the main reasons why studies of this type should be carefully scrutinized: the history of the field:
For more than 30 years, I have seen a stream of tales about gender differences in brain structure under headlines that assure me that from birth men are innately more rational and better at map-reading than women, who are emotional, empathetic multi-taskers, useless at telling jokes. I am from Mars, apparently, while the ladies in my life are from Venus.
And there are no signs that this flow is drying up, with last week witnessing publication of a particularly lurid example of the genre. Writing in the US journal Proceedings of the National Academy of Sciences, researchers at the University of Pennsylvania in Philadelphia revealed they had used a technique called diffusion tensor imaging to show that the neurons in men's brains are connected to each other in a very different way from neurons in women's brains.
The Big Splash in these kinds of studies is to find sex differences, not to find sex similarities, and the differences some study finds are usually sold as the final word in the field. It's preferable if, indeed, we can state that women and men are so different that they might as well be from different planets.
Science and its treatment in the popular media cannot ultimately be understood without understanding the culture in which they happen. Part of that understanding is that explanations favoring (marriage-based) stereotypes (about how husbands and wives cannot communicate) are privileged, and that those who argue otherwise are labeled feminazis, unscientific or simply in denial.
Hence the obligatory warning: none of what I say here implies that I believe there are no innate differences between male and female brains. But I don't think they are as large as most of the armchair speculators assume, and I also believe that the brain changes based on how it is used. As long as gender roles are operative, the average woman and the average man are likely to have somewhat different use patterns for their brains.