Remember that study in 2012: "The Fluctuating Female Vote: Politics, Religion and the Ovulatory Cycle" by Kristina M. Durante, Ashley R. Arsena and Vladas Griskevicius? It argued that women's voting is affected by their menstrual cycles, and linked the argument to the usual evolutionary psychology stuff about reproductive drives and how they might influence women differently at different stages of the ovulatory cycle.
The next stage of the game: A new study*, Harris, C., & Mickes, L. (2014), "Women Can Keep the Vote: No Evidence That Hormonal Changes During the Menstrual Cycle Impact Political and Religious Beliefs," Psychological Science, reports that a replication of the Durante et al. study failed to find the same effects.
Neuroskeptic, at a Discover magazine blog, writes about some of the methodological concerns with that particular field of psychological studies, concerns which might apply to both the original studies and the replications, especially something called "researcher degrees of freedom," from here:
[I]t is unacceptably easy to publish “statistically significant” evidence consistent with any hypothesis.

I'm not sure if that's a polite way to hint at the possibility that researchers can go on fishing trips with the data until they find significant results in at least one tiny part of the analyses, and that it is those significant results which then will be published.
The culprit is a construct we refer to as researcher degrees of freedom. In the course of collecting and analyzing data, researchers have many decisions to make: Should more data be collected? Should some observations be excluded? Which conditions should be combined and which ones compared? Which control variables should be considered? Should specific measures be combined or transformed or both?
It is rare, and sometimes impractical, for researchers to make all these decisions beforehand. Rather, it is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields “statistical significance,” and to then report only what “worked.” The problem, of course, is that the likelihood of at least one (of many) analyses producing a falsely positive finding at the 5% level is necessarily greater than 5%.
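The arithmetic behind that last sentence is worth spelling out. If each of k independent analyses carries a 5% chance of a false positive, the chance that at least one of them comes up "significant" by luck alone is 1 − 0.95^k. A minimal sketch (the independence assumption is mine for simplicity; analyses run on the same data set are correlated, but the inflation works in the same direction):

```python
# False-positive inflation under multiple analyses ("researcher degrees
# of freedom"): with k independent tests at alpha = 0.05, the chance
# that at least one is falsely "significant" is 1 - (1 - alpha)^k.
alpha = 0.05

for k in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:2d} analyses -> {p_at_least_one:.0%} chance of a false positive")
# 1 analysis stays at 5%, but 10 analyses already push past 40%.
```

So a researcher who quietly tries ten analytic variants and reports the one that "worked" is, in the worst case, running a 40% false-positive machine while advertising a 5% one.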
Setting all that aside, replication still matters, even a "replication"** suffering from possible "researcher degrees of freedom" problems. That's because replications which fail to produce the original results tell us something about the fragility of those results, so to speak, and about the ease with which opposite results can be manufactured.
Finally, it's worth noting that the women-and-their-hormones (and men-and-their-hormones) field also needs to be studied by people outside the evolutionary psychology camp, because the overt biases of those researchers will be different. Evolutionary psychologists seek to verify their reproductive theories; those outside that field have different basic theories and thus different blinders.
*I have not read this replication study or the response to it, though I did read the original Durante et al. study.
**In quotation marks, because a true replication should not deviate from the steps the original study took. But if we don't know what those steps were...