Tuesday, May 31, 2016

More on The Demos Misogyny Study And The Gender Of The Sender


Remember how I ranted about the lack of a written report on the Twitter misogyny study by Demos?  Well, now there IS a written report, of a sort (thanks to AT in my comments for bringing it to my attention).  I quote from the beginning of the report:



The results were presented at House of Commons launch of Recl@im on *26 May 2016*.

Emphasis is mine.

My previous blog post on the study was also posted on 26 May, so the report wasn't available then.  What a relief to be right (goddesses mostly are), for Demos to have been right when they told me on 25 May that there was no report, and also to now have that report!

Then to the analysis.  I am particularly interested in how the study defined the gender of the person who sent misogynistic tweets, for the simple reason that when I look at the information various people on Twitter give about themselves, I'm often unable to deduce their gender from it.


Here is how the report tells us it was done:

[Image: excerpt from the Demos report describing how the gender of tweeters was determined.]
It seems that the algorithm uses someone's Twitter handle (or actual name, if given) and the user description people provide about themselves on Twitter.  Demos tells us that the algorithm was tested in 2015 against traditional survey questions, which allowed them to put its accuracy at approximately 85%.  But there is no link to that study.
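To make my worry concrete, here is a minimal sketch, in Python, of what a classifier of this general type might look like.  This is my own illustrative toy, not Demos' actual algorithm; the name lists and bio keywords are invented placeholders, and a real system would use large name-frequency tables:

```python
# Illustrative sketch only: NOT Demos' actual algorithm.
# A toy classifier that guesses gender from a Twitter display name
# and user description, the two inputs the report mentions.

# Hypothetical name and keyword lists, invented for illustration.
MALE_NAMES = {"james", "john", "mike", "david"}
FEMALE_NAMES = {"mary", "linda", "susan", "emma"}
MALE_KEYWORDS = {"father", "husband", "he/him"}
FEMALE_KEYWORDS = {"mother", "wife", "she/her"}

def tokenize(text: str) -> list[str]:
    """Lowercase and strip simple punctuation from each word."""
    return [t.strip(".,!?:;") for t in text.lower().split()]

def classify_gender(display_name: str, description: str) -> str:
    """Return 'male', 'female', or 'institution/unknown'."""
    tokens = tokenize(display_name) + tokenize(description)
    male_score = sum(t in MALE_NAMES or t in MALE_KEYWORDS for t in tokens)
    female_score = sum(t in FEMALE_NAMES or t in FEMALE_KEYWORDS for t in tokens)
    if male_score > female_score:
        return "male"
    if female_score > male_score:
        return "female"
    # No signal either way: organizations, pseudonyms, ambiguous names.
    return "institution/unknown"

print(classify_gender("Emma B", "mother, nurse, cat person"))  # female
print(classify_gender("Acme News", "breaking stories 24/7"))   # institution/unknown
```

Even this toy version shows why I'm skeptical: a pseudonymous handle with an empty or joking user description gives a classifier of this kind nothing to work with.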

I don't want to sound like a grumpy goddess here, but academics are taught how to write research reports for a very good reason:  transparency of methods and the ability of others to trace back to any sources they wish to study in greater detail.  I can't go and read that 2015 study, because it is not sourced*.

This criticism does not mean that I'm saying the 2015 study doesn't exist or doesn't show what the above quote says it does.  All it means is that I can't access that information on my own and must place complete trust in Demos' say-so.  And note that this 2015 survey or study is crucial:  it's the only information we are given about the accuracy of the method Demos used.

The results section of the current study tells us that when the algorithm was applied to all 213,000 tweets labeled as aggressive (also by an algorithm), it classified 48% of the originators as male, 42% as female, and the remaining 10% as "institution," a category which seems to also cover any tweeters whose gender the algorithm cannot ascertain.  If the latter category is omitted, the breakdown would be roughly 53% male and 47% female.

A sample of 250 tweets was selected for closer scrutiny by a human analyst, with these results:

[Image: table of the human analyst's gender classifications for the 250-tweet sample.]
The report then concludes that the algorithm "slightly over-estimated" the proportion of male tweets.

Which means that the human analyst is assumed to have gotten the gender of the senders right, right?  What did the human analyst base his or her decisions on?  Was that person asked to use the same rules the algorithm uses, or just his or her own feelings about what the gender of the sender might be?

The answers to those questions matter.  If the analyst was asked to use the same rules as the algorithm, the verification process itself would depend on the accuracy of the algorithm, and that takes us back to the 2015 results.  If the analyst was asked to use some other criteria, or just personal feelings, it would still be important to understand what those criteria and feelings might be.**
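For what it's worth, a proper write-up would report something like the following comparison between the algorithm's labels and the human analyst's labels.  The data here are invented placeholders, since the report doesn't publish the underlying cross-tabulation:

```python
from collections import Counter

# Invented placeholder labels for the 250-tweet validation sample;
# the report does not publish the underlying cross-tabulation.
algorithm_labels = ["male", "female", "male", "unknown"]   # ... 250 items
human_labels     = ["male", "male",   "male", "female"]    # ... 250 items

# Cross-tabulation (confusion matrix) of algorithm vs. human judgments.
confusion = Counter(zip(algorithm_labels, human_labels))
for (algo, human), n in sorted(confusion.items()):
    print(f"algorithm={algo:8s} human={human:8s} count={n}")

# Overall agreement rate; on its own this says nothing about
# *why* the two disagree, which is the question I'm raising.
agreement = sum(a == h for a, h in zip(algorithm_labels, human_labels))
print(f"agreement: {agreement / len(human_labels):.0%}")
```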

Once again, I'm asking for a more thorough write-up of the study, because that is necessary for greater understanding of its results.

The report contains some information I didn't get from the various summaries of its findings:  it's possible that specific events in popular culture might have had an impact on the results:

[Image: excerpt from the report on popular-culture events during the sampling period.]
That seems to be a very good reason not to sample just one short time period in studies of this sort but, say, to sample one day in each of the preceding twelve months.
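A sketch of what I mean, assuming access to some archive of tweets indexed by date (the fetching function named in the comment is hypothetical):

```python
import random
from datetime import date

def sample_days(months_back: int = 12, seed: int = 42) -> list[date]:
    """Pick one random day from each of the preceding `months_back` months,
    so that no single event dominates the sample."""
    random.seed(seed)
    days = []
    year, month = date.today().year, date.today().month
    for _ in range(months_back):
        month -= 1
        if month == 0:
            year, month = year - 1, 12
        # Days 1-28 exist in every month, which keeps the sketch simple.
        days.append(date(year, month, random.randint(1, 28)))
    return days

# Each chosen day would then be fed to the (hypothetical) tweet archive:
# tweets = fetch_tweets_for_day(day, keywords=["slut", "whore"])
for day in sample_days():
    print(day)
```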

Note that none of this is really a critical reading of the study itself, because for that I'd need the 2015 study about the accuracy of the algorithm and more data on how the human analyst classified gender.  Rather, this post is a critique of the way the study is reported.

It is possible that women tweet misogyny at roughly the same rate as men, because both women and men grow up in the same cultures and are taught the same tools of attack against women.  Evidence from various discrimination studies, both about gender and race, suggests that those who belong to the discriminated-against groups can also be found among those who discriminate against those groups.***

At the same time, much more precise studies are required before we can state something like that.  I'd like such studies to create a sample which amounts to some averaging over time, so that specific events don't influence the likely composition of the hate-tweeters, and to do (or at least show) a lot more work on the question of how to identify the gender of those who send misogynistic tweets.  Finally, a more thorough analysis of the contents of a sample of those tweets truly is needed.  There's a big difference between someone tweeting "You slut, how dare you not like x" and "I'd like to hate fuck you, you slut."




--------------

*  And no, it's not OK to expect me to spend hours looking for some such study on the net.  Giving one's sources is a courtesy researchers are expected to extend to their readers.

**   For example, only pictures, Twitter handles, and the self-descriptions people give in the user description field?  Or also the contents of tweets the person has sent recently?  Or something else?

***  For gender, see this, this and this post of mine.  For race, see this recent study about Airbnb discrimination against African-American guests, which found that African-American hosts were no less likely than white hosts to be among those who discriminated.

At the same time, other anecdotal evidence suggests something different about the quality of anonymous hatred.  For instance, the sample of voicemails Roberta Lange, the Chairwoman of Nevada's Democratic Party, received from various callers shows a gender difference in the type of hostility she received.  Likewise, the hate tweets I've seen women receive on Twitter often describe the sender's desire to commit various sexual (often violent) acts on the recipient, and those acts require a male body.