Run a web site, measure anything, make any changes based on measurements? Congratulations, you’re running a psychology experiment!
— Marc Andreessen (@pmarca) June 28, 2014
If you a/b test to make more money that’s fine, but if you a/b test to advance science that’s bad? I don’t get it.
— Chris Dixon (@cdixon) June 29, 2014
In July 1961, Yale University psychology professor Stanley Milgram sought to understand how far people would go against their conscience when following orders from an authority figure. Milgram, trying to understand the culpability of Nazi soldiers and other accomplices to the Holocaust, ran an experiment in which subjects, at the behest of a lab proctor, believed they were administering electric shocks to other participants (they were not).
Apart from the striking findings (more than 60% of subjects were willing to administer the highest level of shock), the main takeaway was the physical and psychological toll on the test subjects, who believed they had inflicted harsh electrical shocks on innocent people. As a result of this and other questionable social science research projects, most (all?) educational institutions now require that research involving human subjects be approved by an institutional review board.
When I was in grad school studying the efficacy of anti-drug advertising on teens, I had to jump through a number of hoops to ensure the safety and well-being of the participants. A key component of the whole process was informed consent; because I was dealing with minors, things were even more complicated.
All of this background should be helpful in understanding why there is legitimate concern about Facebook performing a psychology experiment on 689,000 of its users. The lack of informed consent and the apparent lack of institutional review would be problematic in a study of 100 people. When you’re talking about nearly seven thousand times that many people being affected, it becomes a whole other story. Studying user behavior makes a lot of sense, but manipulating the news feed to purposefully provide more negative content leaves Facebook open to a lot of questions about the impact.
I wonder if Facebook KILLED anyone with their emotion manipulation stunt. At their scale and with depressed people out there, it's possible.
— Lauren Weinstein (@laurenweinstein) June 29, 2014
I’m sure that $FB shareholders would hope that there was some business objective behind this study, but at this point that seems entirely unclear. In fact, it seems more like a skunkworks psychology experiment, because valuable business findings are rarely published in peer-reviewed journals. This clearly separates it from an A-B test, which has tangible business objectives.
But now that the argument has been raised that A-B tests are psychology experiments, new questions arise.
Should all A-B tests be subject to an institutional review?
What is the moral and ethical responsibility of computer engineers and web developers?
If their impact can be as great on a person’s well-being as a doctor or lawyer, should there be a code of conduct that needs to be followed?
My guess is that, rather than having this debate, it would be easier to admit that A-B tests are not psychology experiments.