Monday, May 23, 2016

Dhiraj Murthy is a Reader of Sociology at Goldsmiths, University of London. His current research explores social media, virtual organizations, and big data quantitative analysis. His work on social networking technologies in virtual breeding grounds was funded by the U.S. National Science Foundation, Office of Cyberinfrastructure. Dhiraj also wrote the first book about Twitter, published by Polity Press, and his work on innovative digital research methods has been cited widely. For further information, visit his website.
@dhirajmurthy

The Facebook psychology ‘experiment’, which manipulated the emotional content seen by nearly 700,000 users, provides evidence that corporations need the kind of ethical review procedures that universities have been developing for some years around social media research. In a university context, Institutional Review Boards (IRBs) are responsible for monitoring the ethics of any research conducted at the university. The US government’s Department of Health and Human Services publishes very detailed guidance for human subjects research. Section 2(a) of their IRB guidelines states that “for the IRB to approve research […] criteria include, among other things […] risks, potential benefits, informed consent, and safeguards for human subjects”. Most IRBs take this mission quite seriously and err on the side of caution, as people’s welfare is at stake.
The reason for this is simply to protect human subjects. Indeed, IRB reviews also evaluate whether particularly vulnerable populations (e.g. minors, people with mental or physical disabilities, pregnant women, and various other groups depending on context) are not additionally harmed by the research conducted. Animal research protocols follow a similar logic. Before university researchers conduct social research, its ethical implications are broadly evaluated alongside other criteria. If any human subject is participating in a social experiment or any social research, most studies require either signed informed consent or a similar protocol which informs participants of any risks associated with the research and allows them to opt out if they do not agree with the risks or any other parameters of the research.
I was therefore tremendously saddened to read the Proceedings of the National Academy of Sciences (PNAS) paper co-authored by Facebook data scientist Adam D. I. Kramer, Jamie E. Guillory of the University of California, San Francisco, and Jeffrey T. Hancock of Cornell University, titled ‘Experimental evidence of massive-scale emotional contagion through social networks’. The authors of this study argue that agreement to Facebook’s ‘Data Use Policy’ constitutes informed consent (p. 8789). The paper uses a Big Data (or in their words ‘massive’) perspective to evaluate the emotional behavior of 689,003 Facebook users. Specifically, the authors designed an experiment with a control and an experimental group in which they manipulated the emotional sentiment of content in a selection of Facebook users’ feeds, omitting positive and negative text content. Their conclusion was that the presence of positive emotion in feed content encouraged users to post more positive emotional content, and that the presence of negative emotion in feed content encouraged the production of negative content (hence the disease metaphor of contagion). In my opinion, any potential scientific value of these findings, however valuable they may be, is outweighed by gross ethical negligence.
This experiment should never have gone ahead. Why? Because manipulating people’s emotional behavior ALWAYS involves risks. Or as Walden succinctly put it, ‘Facebook intentionally made thousands upon thousands of people sad.’
In some cases, emotional interventions may be thought justifiable by participants. But it is the potential research subjects who should make that decision, via informed consent. Without informed consent, a researcher is playing God, and the consequences are steep. In the case of the Facebook experiment, hundreds of thousands of users were subjected to negative content in their feeds. We do not know whether the experimental group included suicidal users or individuals with severe depression, eating disorders, or conditions of self-harm. We will never know what harm this experiment did, which could span a spectrum from low-level malaise to suicide. Some users had a higher percentage of positive or negative content omitted (between 10% and 90%, according to Kramer and his co-authors). Importantly, some users had up to 90% of positive content stripped out of their feeds, which is significant. And users whose negative content was stripped out could argue that they were subjected to social engineering.

For a psychological experiment to be properly scientific, ethics needs to be central, and that is truly not the case here. Facebook and its academic co-authors have conducted bad science and given the field of data science a bad name. PNAS is a respected journal, and anyone submitting to it should have complied with accepted ethical guidelines, regardless of the fact that Facebook is not an academic institution. Additionally, two of the authors are at academic institutions and, as such, have professional ethical standards to adhere to. In the case of the lead author from Facebook, the company’s Data Use Policy has been used as a shockingly poor proxy for a full human subjects review with informed consent. What is particularly upsetting is that this was an experiment that probably did real harm. Some have argued that at least Facebook published its experiment while other companies are ultra-secretive. Rather than praising Facebook for this, we should recognize that such experiments cast light on the major ethical issues behind corporate research on our online data and on our need to bring these debates into the public sphere.
