Catching all the emotions
This wasn't the blog post I was intending to write today. But in light of recent revelations about the extent to which a social network is allowed inside our brains, I was worked up enough to want to discuss it. Thus, blog post.
The story goes, Facebook selectively edited the newsfeeds of 689,003 users to weed out either positive or negative postings from friends over the course of a week in January 2012. They then monitored whether the subjects' own postings became more positive or negative, to examine whether emotions are contagious online. Initially, I was irate. I did not give informed consent for this. I don't care that there's a vaguely worded line about using the information they collect for research, buried deep in Facebook's Data Use Policy. If I didn't explicitly say yes, it doesn't count.
So what are the details behind informed consent? I took an undergrad course in medical ethics about an eternity ago, so I supplemented my knowledge with Google. Certainly for medical procedures or research, informed consent must satisfy three criteria: disclosure, capacity, and voluntariness. The doctor or researcher must give full disclosure of what's going to happen, including the use of placebos and any possible side effects. The patient/subject must be intellectually and emotionally capable of understanding everything that is discussed. And the consent must be given entirely voluntarily, with no coercion (which is where some research gets into trouble by offering monetary incentives).
Facebook doesn't hit any of those key targets. The Data Use Policy is very ambiguous. It references using data for "internal operations, including... research" and giving your information to "outside vendors to... conduct and publish research" in order to "provide, understand and improve the services we offer". That's it. And really, I wouldn't personally consider this experiment to fall under internal operations or improving Facebook's services. So this is, at best, a poorly worded disclosure. But there's also no vetting of subjects for comprehension of the experiment, and the drawbacks to saying no (partial social exclusion for not agreeing to the policy) are too steep for consent to be considered entirely voluntary.
But then I started reading up on the legal ethics in play here. It turns out that informed consent may not actually be required in this situation. I won't get into the details because, again, I'm no expert. The gist is that if there's minimal risk to the research subjects and the study couldn't realistically be performed with full consent obtained up front, the requirement can be waived. But the same guidelines also state that, "whenever appropriate, the subjects will be provided with additional pertinent information after participation", which clearly didn't happen.
Reading that info definitely left me feeling less angry and more... skeeved out? Side note: I may have just let my age slip by using that term, but it just feels so appropriate. Good thing I'm not concerned about aging, since I was "ma'amed" twice last week! So maybe this Facebook experiment will eventually be determined to be legally and ethically sound, but that doesn't make it any less slimy in my opinion. Because covertly manipulating people isn't ok.
See, here's the thing: people everywhere are trying to influence you. I'm not being a pessimist or some sort of conspiracist. When you smile at someone to make them like you more, you're trying to manipulate their emotions to suit yourself. It's how we interact as a species, and it's not necessarily a bad thing. The problem is that corporations aren't the same as people, and purposefully toying with our emotions isn't really ok. Even ads that intentionally evoke strong emotions to convince people to buy products irritate me. Fear-mongering news reports designed to boost viewer numbers (no link necessary, just turn on your TV) are a big pet peeve. I would love to see a move toward responsible advertising and reporting, with emphasis on product quality and actual facts, respectively, but that's an unlikely wish.
And now for one of my biggest pet peeves: altering data representation to manipulate readers! Guess which journal article demonstrates this? Yep, you got it, contagious Facebook! The one and only figure reverses the vertical axes on the negative-word panels (the bottom two bar graphs) relative to the positive-word panels. I can't think of a reason for doing that other than to emphasize the changes in the data. If you need visual tricks to sell your conclusions, something's wrong. This certainly isn't the biggest violation of data representation (something I'll get into in later posts about writing science papers), but it's not a good sign.
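To show what I mean by an axis trick, here's a minimal sketch in Python/matplotlib, with made-up numbers rather than the paper's data or its actual figure code: the same tiny difference plotted twice, once on a normal axis and once on an inverted one, gives two very different first impressions.

```python
# Toy illustration with made-up numbers (not the paper's data): the same
# small difference, plotted once with a normal y-axis and once inverted.
import matplotlib.pyplot as plt

conditions = ["Control", "Experimental"]
pct_negative_words = [1.75, 1.70]  # hypothetical values, for illustration only

fig, (ax_normal, ax_flipped) = plt.subplots(1, 2, figsize=(8, 3))

for ax, title in [(ax_normal, "Normal axis"), (ax_flipped, "Inverted axis")]:
    ax.bar(conditions, pct_negative_words)
    ax.set_ylabel("% negative words")
    ax.set_title(title)

# Reversing the axis direction changes the visual impression of identical data.
ax_flipped.invert_yaxis()

plt.tight_layout()
plt.show()
```

Flip the axis and a small dip suddenly reads like something much more dramatic, which is exactly the kind of nudge I'm complaining about.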
As for the actual science by Kramer et al.*, I'm skeptical. Sure, skepticism is a researcher's default state, but a strong paper can convince me otherwise. This isn't one of those papers. Their conclusion that this experiment demonstrates "emotional contagion" is interesting, but not well supported. The biggest issue is that they assume the negative/positive words in Facebook statuses indicate mood, but there's no research showing that to be true, and I'm doubtful that it is. I've personally found that a lot of people use social media to echo the surrounding sentiments and "fit in" rather than to accurately represent their mental state. So changes in their own statuses after being exposed to more negative or positive newsfeeds aren't necessarily changes in emotion.
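For context, the paper measures "emotion" by counting positive and negative words in each status with a standard word-list tool (LIWC). Here's a toy sketch of that kind of word counting, using tiny made-up word lists rather than the real LIWC dictionaries; it makes the limitation obvious, because the count only sees surface words.

```python
# Toy word-count "sentiment" scoring with tiny made-up word lists
# (not the actual LIWC dictionaries the study used).
POSITIVE = {"happy", "great", "love", "awesome"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def count_emotion_words(status: str) -> tuple[int, int]:
    """Return (positive, negative) word counts for one status update."""
    words = [w.strip(".,!?").lower() for w in status.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos, neg

# Scores as "positive" even if the poster is just echoing the room, not happy.
print(count_emotion_words("So happy for everyone, what an awesome day!"))  # (2, 0)
```

A status that scores "positive" this way tells you which words someone typed, not how they actually felt, and that's precisely the gap between the data and the "contagion" conclusion.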
Overall, I'm just not impressed by this research. The ethics are iffy, the effect of the newsfeed manipulation is small, and the conclusions are over-reaching. I'm no longer angry, but I'm not particularly happy either. It's kind of ironic; that's often how I feel after reading a bunch of stuff on Facebook itself! Maybe the journal article is right after all. :P
* This is just the fancy, sciencey way of saying "the journal article by Kramer and those guys". It's short for the Latin et alia, which means "and others".