Are millennial men (in the US) as sexist as their dads? That's what Andrea S. Kramer and Alton B. Harris ask in the June Harvard Business Review (HBR). They begin by setting up the needed tension in the article by proposing that this is not so at all:
Millennials, those Americans now between 16 and 36 years old, are often spoken of as if they’re ushering in a new era of enlightened interpersonal relations. For example, in 2013 Time predicted Millennials would “save us all” because they are “more accepting of differences…in everyone.” That same year, The Atlantic stated that Millennials hold the “historically unprecedented belief that there are no inherently male or female roles in society.” And in 2015 the Huffington Post wrote that Millennial men are “likely to see women as equals.”
They then give us the evidence that millennial men might be every bit as sexist as their dads and granddads, maybe even more so.
The main bits of evidence are two:
The first is a study published last February which looked at how biology students ranked other students in their class in terms of intelligence and grasp of the material taught. The study found that female students ranked their classmates the way objective measures would rank them, but male students showed a bias that favored other male students as particularly intelligent and well prepared, even when, say, a female student had the highest grades in the class. I have written about that study on this blog.
The second bit of evidence is the truly interesting one. Kramer and Harris:
Millennial men’s views of women’s intelligence and ability even extend to women in senior leadership positions. In a 2014 survey of more than 2,000 U.S. adults, Harris Poll found that young men were less open to accepting women leaders than older men were. Only 41% of Millennial men were comfortable with women engineers, compared to 65% of men 65 or older. Likewise, only 43% of Millennial men were comfortable with women being U.S. senators, compared to 64% of Americans overall. (The numbers were 39% versus 61% for women being CEOs of Fortune 500 companies, and 35% versus 57% for president of the United States.)
Now that made my hair stand up and my mood plummet! Back to the patriarchy we go. Hmm.
I downloaded that 2014 survey, to find out more about it: How many subjects were interviewed in each age-and-sex category? Did the survey standardize for other possible demographic differences between the older and the younger men? How did the millennial and older women compare to each other and to the men in their answers to those questions? And what exactly did the questions ask, and how exactly were those questions framed?*
The survey report gives an e-mail address for anyone who wishes to know more about the research methodology, so I gave it a try. But my e-mail was returned to me with one of those "recipient unknown" messages. Bitter despair followed, of course, because I couldn't get any answers to those very important questions, so important that they determine how much our hairs should rise and our moods plummet after reading the findings.
But what I did find was this:
Respondents for this survey were selected from among those who have agreed to participate in online surveys. The data have been weighted to reflect the composition of the adult population. Because the sample is based on those who agreed to participate in the panel, no estimates of theoretical sampling error can be calculated. For complete survey methodology, including weighting variables, please contact ..
In ordinary speech, the participants were people who had agreed to participate in online surveys. That means they were not picked as numbers are in a lottery, which means that they are not a random sample from the general population. That, in turn, means that we can't use the results to draw statistical conclusions about that general population. All this is hiding behind that 'theoretical sampling error' talk.
Isn't that fun? Because those who are keen to take online surveys could differ from those who are not keen in other ways, too, we cannot tell if the 2014 Harris poll says something about the millennial men in the US. All we can tell is that some unknown number of young men in that specific poll answered certain questions, the exact specification of which we are not told, in a certain way.
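A toy simulation makes the self-selection problem concrete. All the numbers below are invented for illustration; the point is only that when willingness to opt in correlates with the attitude being measured, the opt-in panel misses the population value even when the sample is large:

```python
import random

random.seed(0)

# Invented population of 100,000 people; 60% hold some attitude X.
population = [1] * 60_000 + [0] * 40_000

# A proper random sample, lottery-style.
random_sample = random.sample(population, 2_000)

# An opt-in panel: suppose people holding X agree to take online
# surveys 30% of the time, everyone else only 10% of the time.
opt_in_pool = [p for p in population
               if random.random() < (0.30 if p else 0.10)]
opt_in_sample = random.sample(opt_in_pool, 2_000)

print(sum(random_sample) / 2_000)   # close to the true 0.60
print(sum(opt_in_sample) / 2_000)   # far above 0.60, near 0.82
```

Weighting by visible demographics can repair part of this, but only the part driven by traits we can observe and weight on; it cannot fix self-selection on the attitudes themselves.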
The two numbers the survey summary gives us are the total sample size, 2047, and the number of those 2047 who were employed full-time or self-employed: 889. Both of those apply to people over the age of eighteen.
We could go and look up the statistics for full-time and self-employed people as percentages of all people eighteen and over in the US in the Bureau of Labor Statistics tables, to see if at least the two numbers we are given in the survey seem to match what is going on out there in the bigger population.
I did that for ten minutes. The data I found was only for full-time employed people and for the groups of those aged sixteen and over or twenty and over. But even that rough research suggests that the Harris poll respondents tilt strongly towards those not employed. It's likely, therefore, that the sample wouldn't look like the population in other ways, either.
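For concreteness, here is the back-of-the-envelope arithmetic. The two survey numbers come from the summary; the population benchmark below is a rounded placeholder of my own, an assumption for illustration rather than a citation, so check the actual BLS tables before leaning on it:

```python
# The two numbers the survey summary reports:
total_respondents = 2047
employed_respondents = 889   # full-time employed or self-employed

survey_share = employed_respondents / total_respondents
print(f"Employed share in the sample: {survey_share:.1%}")  # 43.4%

# Placeholder employment-population ratio for US adults circa 2014
# (invented round number for illustration, not an actual BLS figure):
benchmark_share = 0.59

print(f"Gap versus the benchmark: {benchmark_share - survey_share:.1%}")
```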
None of that means the results can be proved not to apply to the population of young millennial men in the US, just that they cannot be proved to apply to it.
So where does that leave us? Asking for a better study, my erudite and sweet readers (all willing to work with female engineers), and restraining ourselves from reading too much into this particular one.
-----------
* Those questions would matter more if the sample had been a statistically random one, but they still matter for the understanding of the answers.
To address my questions one by one:
- How many subjects were interviewed in each age-and-sex category?
This clearly matters. If the number of young men in the sample was, say, 25 or 50, we would judge the results differently than if it was 250 or 500. Note that we cannot guess those sub-sample sizes from the overall sample size (which we are told) by using our knowledge about how many people, in general, are of different ages, because the sample is not a random drawing from the population.
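To see how much precision the sub-sample size buys, here is the standard 95% margin-of-error formula for a proportion. It only strictly applies to simple random samples, which this panel was not, but it still gives a feel for why 25 respondents and 500 respondents are very different things. The 41% figure is the survey's reported share of Millennial men comfortable with women engineers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.41  # reported share of Millennial men comfortable with women engineers
for n in (25, 50, 250, 500):
    print(f"n = {n:>3}: 41% ± {margin_of_error(p, n):.1%}")
```

With n = 25 the interval runs from the low twenties to the low sixties, which tells us almost nothing; with n = 500 it narrows to about plus or minus four percentage points.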
- Did the survey standardize for other possible demographic differences between the older and the younger men?
This matters, because the HBR article implicitly compares young men to their fathers, thus assuming that the sample of young men in the Harris poll would differ from the sample of old men in the poll only in age, not in the percentages of, say, different ethnic groups in the sub-samples.
To see why these other demographics matter, suppose that in some country immigration has changed the average makeup of the citizens a lot in the last thirty years or so. Then any apparent change in values might not be a change over time within the same population, but a reflection of the different values new immigrants have brought with them from their countries of origin. Those values could be more progressive or less progressive than the 'native' values, depending on the source countries, but we cannot interpret the apparent change in values as meaning that the long-time citizens of that country have changed their values over time.
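That composition effect is easy to show with invented numbers. Below, neither group changes its views at all between the older and the younger cohort; only the population mix shifts, yet the aggregate appears to move:

```python
# (population share, fraction holding some view) for two groups,
# in an older and a younger cohort. All numbers invented.
older_cohort   = [(0.90, 0.50), (0.10, 0.80)]
younger_cohort = [(0.60, 0.50), (0.40, 0.80)]

def aggregate(cohort):
    # Population-weighted average of the group-level fractions.
    return sum(share * view for share, view in cohort)

print(f"{aggregate(older_cohort):.2f}")    # 0.53
print(f"{aggregate(younger_cohort):.2f}")  # 0.62 -- looks like a value shift
```

The within-group fractions (0.50 and 0.80) are identical in both cohorts; everything that looks like change comes from the shares.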
- How did the millennial and older women compare to each other and to the men in their answers to those questions?
This question matters for checking purposes and for the purposes of interpretation: The HBR article looks at the more sexist attitudes of young men, compared to older men. That case would be made stronger if we found that young women are less sexist than older women, and it would be made different, and weaker in one sense, if we found that sexism has apparently increased in the youngest population for both men and women.
It's always good to check what the reported percentages might be in the implicit comparison group in any social science study. If I tell you a made-up finding that 90% of Italian-origin people eat pasta you are unlikely to assume that 0% of other groups eat pasta, because you know about pasta-eating customs. But in other contexts it's easy to slip into the alternative assumption that some percentages are 0% in the implicit comparison group. Think of crime rates by race or ethnicity or literacy rates for girls and boys in some developing countries.
- And exactly what is it the questions asked and exactly how were those questions framed?
The importance of this is fairly obvious. If I asked you when you stopped flossing your teeth, you would find that question enraging. The way a question is formulated is crucial for the proper interpretation of the answers. Were the people asked if they would be comfortable working with a female engineer, say, or were they asked if they thought female engineers were less capable, or if the occupation was less gender-appropriate? The exact formulation of the question may carry one or more of these types of associations.
Note, also, that Kramer and Harris suggest that the findings might be because young men haven't had exposure to female engineers, CEOs and presidents (hmm). A question asking about that would not have cost very much and would have made the survey results more useful. Pilot studies (tiny pre-studies for methodological and question-setting purposes) are useful for making improvements of these types.