Friday, February 28, 2014

This post was first published on Survey Post on Feb. 3rd, 2014.
Recently, I attended two statistical events in the Washington, DC, area: the 23rd Morris Hansen Lecture on “Envisioning the 2030 U.S. Census” and the SAMSI workshop on “Computational Methods for Censuses and Surveys.” “Big data” was a popular keyword at both events and stirred up discussion on how to use big data sources (such as administrative records and online data) in current government statistics, especially in combination with traditional survey data.
Statisticians are exploring new ways in which big data can be used. The U.S. Census Bureau has initiated investigations into using administrative records in the 2020 Census. The National Center for Health Statistics (NCHS) has identified research opportunities for combining multiple data sources. University-based researchers have launched studies on the use of Google Trends and other online data in small area estimation.
While big data dominated the mainstream discussion at these events, I found myself thinking more about “small data.” Can small data help us make better use of big data? Here are some of my thoughts.
  1. Applying conventional sampling-based approaches to big data: more and more administrative records are collected electronically, and statisticians are excited about using these records, which may cover the entire population, for analytic purposes. The literature of the past two decades has extensively discussed the advantages of administrative records. Processing administrative records, however, can be quite time consuming, and the sheer data volume makes analyses cumbersome to run, particularly when analysts handle, store, and analyze the data in conventional statistical software such as SAS, Stata, or R. The question is: is there a way to reduce the data volume and increase computational speed? Applying conventional sampling-based approaches (e.g., optimal sampling, calibration weighting) can make a big dataset smaller and more manageable while allowing researchers to maintain decent data quality (a rough illustration is given in the first sketch after this list).
  2. Combining non-probability sample data with probability sample data: many big data sources, such as data collected by Google, Twitter, or Facebook, are not census (population) data. We may treat them as non-probability samples: elements enter these datasets arbitrarily, there is no way to estimate the probability that any given element of the population is included, and it is not even guaranteed that every element has a chance of being included. This makes it impossible to assess either the validity (usually measured in terms of “bias”) or the reliability (usually measured in terms of “variance”) of the data. One way to make the data more representative of the entire population is to combine them with probability sample data (e.g., survey data), which can be relatively small. This approach can also help us estimate sampling variability and identify potential bias in the big data (see the second sketch after this list).
  3. Using high-quality small data to measure and adjust for errors in big data: big data are often not only unrepresentative of the target population but also laden with measurement error, because the construct behind a particular measure in these data can differ from the construct that analysts require. To evaluate errors in big data and improve precision, small survey data can be collected for validation. Take the National Health Interview Survey (NHIS) as an example. This is a household interview survey that collects only self-reported data. To improve analyses of the NHIS self-reported data, an imputation-based strategy was implemented that uses clinical information from an examination-based health survey (the National Health and Nutrition Examination Survey, NHANES) to predict clinical values from self-reported values and covariates. Estimates of health measures based on the multiply imputed clinical values differ from those based on the NHIS self-reported data alone, and they have smaller estimated standard errors than estimates based solely on the NHANES clinical data. Similarly, we may assess potential errors in big data through a more sophisticated and accurate small survey (the third sketch after this list illustrates the general idea).
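
To make point 1 concrete, here is a minimal sketch, not production code, of drawing a stratified sample from a large administrative file and keeping design weights so that estimates still refer to the full file. The file name, the “region” stratifier, and the “benefit_amount” variable are all hypothetical, and a real application would add calibration weighting and more careful allocation.

```python
# Sketch: shrink a large administrative file with a stratified sample
# and design weights (hypothetical file and column names).
import pandas as pd

records = pd.read_csv("admin_records.csv")   # hypothetical full administrative file
n_per_stratum = 1_000                        # illustrative sample size per stratum

samples = []
for stratum, group in records.groupby("region"):        # "region" is a hypothetical stratifier
    n_h = min(n_per_stratum, len(group))
    samples.append(
        group.sample(n=n_h, random_state=2014)
             .assign(weight=len(group) / n_h)            # design weight N_h / n_h
    )
sample = pd.concat(samples)

# Weighted estimate of a population mean from the much smaller sample
y, w = sample["benefit_amount"], sample["weight"]        # "benefit_amount" is hypothetical
print((y * w).sum() / w.sum())
```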
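For point 2, the sketch below shows one of several published ideas for blending the two sources: propensity-based pseudo-weighting. A non-probability “big data” sample is stacked with a probability reference survey, membership in the non-probability sample is modeled from covariates observed in both sources, and its units are weighted by the inverse odds of the estimated propensity. File names, variable names, and the covariate list are assumptions for illustration only.

```python
# Sketch: pseudo-weighting a non-probability sample against a probability
# reference survey (hypothetical data and variable names).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

covariates = ["age", "female", "college"]       # covariates observed in both sources

web = pd.read_csv("web_sample.csv")             # hypothetical non-probability sample
ref = pd.read_csv("reference_survey.csv")       # hypothetical probability survey with "svy_weight"

stacked = pd.concat([web[covariates], ref[covariates]], ignore_index=True)
in_web = np.r_[np.ones(len(web)), np.zeros(len(ref))]
# Weight the reference cases by their survey weights so they stand in for the population
case_weights = np.r_[np.ones(len(web)), ref["svy_weight"].to_numpy()]

model = LogisticRegression().fit(stacked, in_web, sample_weight=case_weights)
propensity = model.predict_proba(web[covariates])[:, 1]

web["pseudo_weight"] = (1 - propensity) / propensity    # inverse-odds pseudo-weight

# Pseudo-weighted estimate from the non-probability sample
print(np.average(web["outcome"], weights=web["pseudo_weight"]))
```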
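For point 3, this last sketch conveys only the general idea behind regression-based multiple imputation; it is not the official NHIS/NHANES procedure. A model is fit on the small survey that has both self-reported and clinically measured values, and measured values are then imputed, with added noise, for the large survey that has only self-reports. File and variable names are hypothetical, and a real implementation would also account for survey weights and the complex sample design.

```python
# Sketch: impute "measured" values for a self-report-only survey using a
# model fit on a small validation survey (hypothetical data and names).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

exam = pd.read_csv("exam_survey.csv")             # has self_report, age, and measured_bmi
interview = pd.read_csv("interview_survey.csv")   # has self_report and age only

predictors = ["self_report", "age"]
model = LinearRegression().fit(exam[predictors], exam["measured_bmi"])
resid_sd = np.std(exam["measured_bmi"] - model.predict(exam[predictors]))

rng = np.random.default_rng(2014)
imputations = []
for m in range(5):                                # five imputed datasets
    noise = rng.normal(0, resid_sd, len(interview))
    imputations.append(model.predict(interview[predictors]) + noise)

# Average the estimates across imputations (Rubin's rules would also combine variances)
print(np.mean([imp.mean() for imp in imputations]))
```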
While big data provide us with massive and timely information from various sources (e.g., social media, administrative records), small data are simple, easy to collect and process, and can be more accurate and representative. Can small data help you when dealing with your big data problems?


Dan Liao is a research statistician at RTI International. She currently works on multiple aspects of data processing and analysis for large, multistage surveys of health care in the United States, including sampling design, calibration weighting, data editing and imputation, statistical disclosure control, and the analysis of survey data. Her survey research interests include multiphase survey designs, combining survey and administrative data, domain estimation, calibration weighting, and regression diagnostics for complex survey data. Dan has a PhD in Survey Methodology from the Joint Program in Survey Methodology at the University of Maryland and has published research focusing on regression diagnostics, calibration weighting, and predictive modeling.
