Thursday, July 29, 2010

Popularizing Research in the Conventional Media



(This is a reprint from here. I believe the ideas I discuss in it are important enough to pardon the stilted language.)


Being a journalist tasked with writing about research results for a general audience must be very hard. You are supposed to have the statistical skills to understand all kinds of methods, you are supposed to understand several fields of science and social science well enough to distill them into simpler sound bites, and you are supposed to write the popularization pieces in a few days' time.

Anyone who has done academic research in a field knows that the task is pretty much impossible. There are no Renaissance scholars with the whole toolkit hanging off their belts, no geniuses instantly aware of every single new study in every obscure academic journal, no Doctors of General Criticism out there. Certainly none of them holds the job of popularizing, say, social science research.

Add to that the usual restrictions journalists face. Where's the hook for this piece? Why would anyone want to read it? Where's the sex? How can you write the piece so that it presses all those emotional buttons which will guarantee maximal readership numbers? If you write a decent and careful analysis, won't the competing newspapers or websites steal the show from you?

I'm not envious of the jobs of popularizers. Still, I'm going to criticize the results of popularized versions of academic research. I see at least four major problems in what the media tells us about social science research.

The first one is that the need for a journalistic hook biases which studies are given more publicity. A study which finds, say, that women and men are pretty much the same in some behavior will not make the front pages of any major magazines or newspapers. A study which finds, say, a 9% gender difference in something will not attract readers like a magnet either, as long as the actual number is reported. Much better to ignore the number and just write about the chasm that separates men from women.

That studies are selected for publicity for reasons which have little to do with how well they were constructed, how general the conclusions that can be drawn from them are, or how much they agree with the mainstream thinking in an area of research is a serious problem. It makes the general audience draw faulty conclusions about what such studies, in general, find.

The second problem is also related to the journalistic need for a hook, and it concerns the fact that issues become stale very fast. Hence, if a popularization of a bad study makes the headlines this week, the corrections and criticisms of that same study will not make the headlines next week. The story is old and stale; let's move on. Never mind that the story was also false, and now remains in the memory banks of many in the audience.

The third problem has to do with the excessive reliance popularizations place on the authors of a study. Every single popularization I have read contains several direct quotes from the study authors. But the authors of a study are going to sell it; their quotes are not going to be those of a neutral observer. The neutral observer is supposed to be the popularizer, who, in general, does not have the expertise to provide the necessary counterweight.

This is assumed to be solved by the academic system which screens studies before they get into print. But that screening system has its problems. For instance, suppose that I started an academic journal called "Echidne Studies". To get published in it you must find good things about Echidne. I can gather together several like-minded people, people who appreciate the true essence of Echidne, and I can use them as my anonymous referees, to make sure that all the articles published in the journal will be of interest to us Echidneites. Don't you think that some of those reviewers might let a few statistical problems slip through, assuming the anonymous reviewers I picked included anyone familiar with statistics at all?

Then there is the fact that academic reviewers are, in general, busy people, that the number of journals needing reviewers is very large, and that some journals have a better reputation than others. All this means that anyone who really tries can find quite silly articles printed in some peer-reviewed journal. Not all of those are equally worthy of public attention.

The fourth problem has to do with the way expert assessment is usually added to popularizations, at least the better ones (the not-so-good popularizations skip this part altogether). It consists of asking someone else, presumably another researcher in the same field, for a quote about the study being popularized. The problem with many of these quotes I've read is that they appear to come from someone who has not read the article at all. Whether that is actually true is impossible to say, but mostly I learn nothing new from the additional expert statements. And these are always kept very, very short, certainly in comparison to the space the study authors are given.

I'm sure that there are more problems than these four. But even these four are serious, because the way most of us learn about new research findings is from those popularizations. As a result, we will end up with distorted ideas about what research has actually found.
------
Another post on analyzing research and its popularizations is this one. I now think it's the better post, but the two overlap only partially.