Wednesday, March 5, 2014

In 2011, then-Census Director Robert Groves wrote on the Census Director’s Blog about the burgeoning volume of “organic data”—data that, as opposed to “designed data,” have no meaning until they are used (surveys are a primary example of the latter). He noted that finding ways to combine these two types of data to increase the “information-to-data ratio” was a challenge, but also represented the future of surveys. Using terms identified as “big data descriptors” in Groves’ piece, as well as a few other terms I think qualify, I put together the graph below showing the number of AAPOR presentation titles between 2010 and 2013 that contain a big data descriptor.1,2
[Figure: big data descriptors in AAPOR presentation titles, 2010–2013]
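For anyone curious how a count like this could be reproduced, here is a minimal sketch in Python. It assumes the conference programs are available as plain-text files with one presentation title per line; the file names and the descriptor list are hypothetical placeholders for illustration, not Groves’ exact terms or my actual procedure.

```python
# Sketch: count, per year, how many AAPOR presentation titles contain
# at least one "big data descriptor" (illustrative reconstruction only).
import re
from collections import Counter

# Hypothetical descriptor list -- substitute the actual terms of interest.
DESCRIPTORS = ["big data", "twitter", "facebook", "social media",
               "social network", "sensor"]
pattern = re.compile("|".join(re.escape(t) for t in DESCRIPTORS), re.IGNORECASE)

counts = Counter()
for year in range(2010, 2014):
    # Each (hypothetical) file holds one presentation title per line.
    with open(f"aapor_titles_{year}.txt", encoding="utf-8") as f:
        counts[year] = sum(1 for title in f if pattern.search(title))

for year in sorted(counts):
    print(year, counts[year])
```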
One takeaway is the increased interest researchers have shown in big data over the past few years. An equally important lesson is that almost all of the attention big data has received from AAPOR members—at least as measured by the number of presentations they’ve given—has been on social networking sites (SNSs). I found only one presentation in the past four AAPOR conference programs that contained a big data descriptor for a non-social media topic—a demonstration in 2012 by Ben Waber on the use of wearable sensors for measuring behavior.
To some extent this is explained by scientific lag. Just as there is cultural lag—the time between the emergence of a new technology and when culture catches up—there is a lag between when consumers adopt technologies and when our research methodologies catch up (i.e., scientific lag = cultural lag + the time until research methodologies using those technologies are implemented). And technologies often don’t remain static; they evolve, making it a continuous game of catch-up for research methodologists. I’ll go into more detail about this in a presentation I’m giving at the AAPOR conference this year, but one quick example from the annals of survey research history is the development of computer-assisted telephone interviewing (CATI). While telephone exchanges had existed for almost a century and programmable computers emerged in the 1940s, it took until 1971 for CATI systems to be developed by market researchers, another five years for academic researchers to begin using them, and another seven years for the federal government to implement their use. Certainly, cultural lag played a role. It took years for enough households to have telephones for probability-based telephone sampling to make sense. In addition, it took time for the programmable computer to develop into a device usable for this purpose. But it also took researchers time to figure out that such a system was possible and to recognize the value it presented.
Now, let’s fast forward a bit. In 1997, one of the first SNSs, sixdegrees.com, was created. It lasted until 2001. A host of other networking sites, the ones most of us are familiar with, sprang up in the early 2000s—Myspace (2003), Facebook (2004), and Twitter (2006). There are, I suppose, two ways of looking at the cultural and scientific lags around SNSs. On the one hand, it took a few years for SNSs to grow to significant numbers. For example, it took Facebook four years (2004–2008) to grow to 100 million users. Within four years of that development there were multiple presentations at AAPOR on the subject. That’s certainly much faster than the development and adoption of CATI technology. On the other hand, social researchers took nearly a decade from the birth of widely popular SNSs to begin formally recognizing their research utility.
Now, we may be at the cusp of another such tsunami of consumer technology adoption. Groups disagree on the exact timing (e.g., Forbes says 2014 and MIT Technology Review says 2013), but the evidence points to the start of rapid growth in the use of internet-connected sensors and devices for a multitude of purposes. I’ve recently written about how and why I think these devices and the Internet of Things (IoT) will affect social science data collection.
My question is whether the research community can be more proactive and thereby decrease the scientific lag between adoption and research implementation. My hope is that we can, and that doing so will have a positive effect on survey data collection.
I’ll be presenting more thoughts on this topic at AAPOR and look forward to the discussion about big data in the session. Between now and then, I’d welcome others’ thoughts on, or experiences with, using wearable tech, sensors, or the IoT for research.
This was first posted on Survey Post on 24/02/14.
Brian Head is a research methodologist at RTI International with 5 years of experience in the government and not-for-profit research sectors. Training in sociology, research methods, and statistics led him to a career in research where his work has included questionnaire design and evaluation, managing data collection efforts, and qualitative and quantitative data analysis.