Challenges Facing the Disclosure Review Board

Written by: William Wisniewski, Center for Disclosure Avoidance Research

At the U.S. Census Bureau, the Disclosure Review Board is best known as the team that establishes and reviews official Census Bureau disclosure avoidance policies, ensuring that publicly released data products do not reveal information about individual survey respondents. Yet the board's members also serve other important and lesser-known roles. For example, they work with researchers in the Center for Disclosure Avoidance Research to determine how effectively current disclosure avoidance techniques protect data products. In addition, these researchers study and develop new techniques that may be applied to future data product releases.

This work is critical in meeting the guidelines established under Title 13 and Title 26 of the U.S. Code, which require the Census Bureau to protect the confidentiality of individual respondents when it releases data to the public.

This seemingly simple mission can often pose challenges. For example, what happens if a researcher wants to release counts and demographic characteristics of individuals for every county in the United States? What if a researcher wants to release an infinite number of variables in a Public Use File? What should a researcher do if they encounter small cell sizes within their data product?
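
To make the small-cell question concrete, the sketch below applies a simple minimum-cell-size check to a hypothetical tabulation. The threshold, counts and rule are purely illustrative assumptions for this post, not actual Census Bureau disclosure avoidance policy.

```python
import pandas as pd

# Hypothetical tabulation of respondent counts by county and characteristic.
table = pd.DataFrame({
    "county": ["A", "A", "B", "B"],
    "characteristic": ["x", "y", "x", "y"],
    "count": [1250, 2, 870, 4],
})

MIN_CELL_SIZE = 3  # illustrative threshold only; real rules are more involved

# Flag nonzero cells small enough to risk identifying individual respondents.
table["flag_for_review"] = (table["count"] > 0) & (table["count"] < MIN_CELL_SIZE)
print(table)
```

A flagged cell might then be suppressed, collapsed with a neighboring cell or otherwise perturbed before release.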

These types of questions and others, along with their solutions, will be presented in a topic-contributed session at the 2016 Joint Statistical Meetings on Wednesday, August 3, 2016, titled "Innovations in Disclosure Avoidance at the U.S. Census Bureau." We explain specific issues and walk through some of the methods and techniques that are used to ensure the Disclosure Review Board meets its mission: to support the Data Stewardship Executive Policy Committee in its efforts to ensure that the Census Bureau protects the confidentiality of all Title 13 and Title 26 respondents in publicly released data products.

Looking to the future, the Disclosure Review Board will also continue to face other challenges. It is likely that Census Bureau and other researchers will need to develop, test, and apply new methodologies and techniques to Census Bureau data, particularly as the quantity of potentially linkable data outside of the Census Bureau increases.

 


Evaluating Possible Administrative Records Uses for the Decennial Census

Written by: Andrew Keller and Scott Konicki

When a household does not respond to the census, the U.S. Census Bureau must send a field worker to that address to complete a nonresponse follow-up interview. For the 2010 Census, 72 percent of American households mailed back a completed census form. The remaining 28 percent that did not respond by mail were counted by a census taker who visited their address. In-person interviews are much more costly than getting a response back in the mail. For the 2020 Census, the Census Bureau is researching the possible use of administrative records to provide a status and count for some addresses in the nonresponse follow-up universe—that is, to indicate whether the housing unit is likely to be occupied or vacant, and how many people may live in it. As outlined below, this information will aid in reducing the number of contacts during the nonresponse follow-up operation.

Over the last four years, the Census Bureau has tested various methods using administrative records to reduce the nonresponse follow-up workload. All tests used administrative records modeling with varying levels of complexity. In these tests, administrative records allowed us to split the nonresponse follow-up address universe into three categories: (1) units identified as administrative records occupied, (2) units identified as administrative records vacant, and (3) addresses identified as no determination.
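
As a rough sketch of how such a split might look, the example below assigns hypothetical model probabilities to addresses and applies illustrative cutoffs. The identifiers, probabilities and thresholds are invented for this post; the actual administrative records models are considerably more complex.

```python
import pandas as pd

# Hypothetical model output for nonresponding addresses: estimated
# probabilities that each address is occupied or vacant.
addresses = pd.DataFrame({
    "address_id": [101, 102, 103, 104],
    "p_occupied": [0.95, 0.10, 0.55, 0.88],
    "p_vacant":   [0.02, 0.85, 0.20, 0.05],
})

OCCUPIED_CUTOFF = 0.90  # illustrative cutoffs, not production values
VACANT_CUTOFF = 0.80

def categorize(row):
    if row["p_vacant"] >= VACANT_CUTOFF:
        return "administrative records vacant"
    if row["p_occupied"] >= OCCUPIED_CUTOFF:
        return "administrative records occupied"
    return "no determination"

addresses["category"] = addresses.apply(categorize, axis=1)
print(addresses[["address_id", "category"]])
```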

The figure below shows the flowchart of the contact strategy related to administrative records cases for the nonresponse follow-up operation specific to the 2016 Census Test. When administrative records indicated that an address was vacant, it received no in-person visits during the nonresponse follow-up operation.

[Figure: Contact strategy flowchart for administrative record cases in the 2016 Census Test nonresponse follow-up operation]

Addresses that the administrative records indicated were occupied received only one visit in the 2016 Census Test. All units in the nonresponse follow-up address universe, whether the administrative records indicated they were vacant or occupied, did receive an additional postcard by mail during the nonresponse follow-up operation. The postcard told people at these addresses how to self-respond by filling out the questionnaire online or by responding through the questionnaire assistance line. In short, both before and during nonresponse follow-up, the Census Bureau attempts in several ways to obtain and use self-responses before relying on administrative records determinations.

The development of possible administrative records models has been guided by comparing models retrospectively against 2010 Census results. Doing so provides a national evaluation of potential administrative records models. However, one difficulty in evaluating the use of administrative records models is handling concerns such as undercounts and erroneous enumerations. Although the analysis using the 2010 Census results provides a solid basis for assessing model performance, it is not the only way to measure it.

To learn more about the "Nonresponse Follow-Up Contact Strategy for Administrative Record Cases," please join us at the Joint Statistical Meetings.


Researching Methods for Scraping Government Tax Revenue From the Web

Written by: Brian Dumbacher, Mathematical Statistician, Economic Statistical Methods Division, and Cavan Capps, Big Data Lead, Associate Directorate for Research and Methodology

The Quarterly Summary of State and Local Government Tax Revenue is a sample survey conducted by the U.S. Census Bureau that collects data on tax revenue collections from state and local governments. Much of the data are publicly available on government websites. In fact, instead of responding via questionnaire, some respondents direct survey analysts to their websites to obtain the data. Going directly to websites for those data can reduce respondent burden and aid data review.

It would be useful to have a tool that automatically collects, or scrapes, relevant data from the web. Developing such a tool can be challenging. There are thousands of government websites but very little standardization in terms of structure and publications. A large majority of government publications are in Portable Document Format (PDF), a file type not easily analyzed. Finally, both web and PDF documents have constantly changing formats.

To solve this problem, researchers at the Census Bureau are studying and applying methods for unstructured data, text analytics and machine learning. These methods belong to the realm of “Big Data.” Big Data refers to large and frequently generated datasets representing a variety of structures. As opposed to designed survey data, Big Data are “found” or “organic” data. Typically, these data are created for some other purpose, such as a click log, a social media blog or an online PDF report, and are then innovatively repurposed, for example to infer behavior. Because the data were not designed for inference, they often pose unique challenges.

The goal of this research is to develop a web crawler with machine learning that performs three tasks:

  1. Crawls through a government website and discovers all PDFs.
  2. Classifies each PDF according to whether it contains relevant data on tax revenue collections.
  3. Extracts the relevant data, organizes it and stores it in a database.

For task 1, we used the open-source web crawler Apache Nutch. In a production environment, the process will scale up by distributing the work over many computers and then combining the results.
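
The research uses Apache Nutch for the crawl itself; as a highly simplified, single-page illustration of the PDF-discovery step, a sketch along the following lines could be used (the URL is hypothetical):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def find_pdf_links(page_url):
    """Return absolute URLs of PDF documents linked from a single page."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    links = (urljoin(page_url, a["href"]) for a in soup.find_all("a", href=True))
    return sorted({url for url in links if url.lower().endswith(".pdf")})

# Hypothetical example:
# print(find_pdf_links("https://www.example.gov/revenue/reports.html"))
```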

For task 2, we developed a technique to convert PDF documents to text and reorganize the output. A classification model applied to the converted text determines whether the document contains relevant data on tax revenue collections. This model uses the occurrence of key sequences of words, such as "statistical report" and "sales tax income," along with other text analysis techniques.
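
The sketch below illustrates the general idea of a keyword-based classifier. The key phrases, the tiny training set and the choice of logistic regression are assumptions made for illustration; they are not the actual features or model used in the research.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Key word sequences like those mentioned above; the real feature set is richer.
KEY_PHRASES = ["statistical report", "sales tax income", "tax revenue",
               "motor fuel tax", "total collections"]

# Tiny invented training set: text extracted from PDFs, labeled 1 if the
# document contains relevant tax revenue data and 0 otherwise.
docs = [
    "statistical report of total collections and sales tax income by month",
    "monthly tax revenue and motor fuel tax collections summary",
    "meeting minutes of the state board of education",
    "press release announcing a new park opening",
]
labels = [1, 1, 0, 0]

# Count occurrences of the key phrases and feed them to a logistic regression.
model = make_pipeline(
    CountVectorizer(vocabulary=KEY_PHRASES, ngram_range=(1, 3)),
    LogisticRegression(),
)
model.fit(docs, labels)

new_doc = "quarterly statistical report: sales tax income and total collections"
print(model.predict([new_doc])[0])  # 1 means the document looks relevant
```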

For task 3, we are considering various ideas. Relevant data would probably be found in tables and in close proximity to key sequences of words. We will explore table identification methods based on the distribution of terminology in the PDF and additional modeling that maps the nonstandard data in PDFs to standard definitions in Census Bureau publications.

The Census Bureau looks forward to continuing this web scraping research and exploring new machine learning algorithms that reduce respondent burden, speed survey processing and improve data collection.

To learn more about the research methods for scraping government tax revenue from the web, please join us at the Joint Statistical Meetings on August 2, 2016.


Reducing Respondent Burden in Counting Juveniles

Written by: Suzanne Marie Dorinski, Economic Statistical Methods Division

The U.S. Census Bureau conducts the Census of Juveniles in Residential Placement every other year for the Office of Juvenile Justice and Delinquency Prevention. The survey collects data from almost 2,400 public and private juvenile facilities that hold juveniles charged with or adjudicated for a delinquency or status offense, providing a count of juveniles in publicly and privately run juvenile correctional facilities.

The data collection has two parts: (1) questions about the facility and (2) questions about each charged or adjudicated juvenile held in the facility.

For each juvenile, we ask the following:

  • Gender.
  • Date of birth.
  • Race.
  • Who placed the juvenile in the facility.
  • Most serious offense.
  • State or territory where offense was committed.
  • Adjudication status.
  • Admission date.

Facilities have the option of responding by mail, through the internet or by fax. Those that respond online can enter the data for each juvenile or they can upload a data file. For the 2013 collection, we suggested that larger facilities should upload a data file but did not define what counts as a larger facility.

Our online data collection tool collects paradata for each response. The paradata file captures the values that the facility enters, as well as any changes that the facility makes, and keeps track of the edit messages that the facility sees while reporting its data. Each action has an associated time stamp, so we can tell how long each facility spends online reporting its data.
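
As a minimal sketch of how the paradata time stamps can be turned into time spent online, the example below computes the span between a facility's first and last recorded action. The facility identifiers, time stamps and column names are invented; the actual paradata file is richer, and a full analysis would also account for separate sessions.

```python
import pandas as pd

# Hypothetical paradata extract: one row per action in the online tool.
paradata = pd.DataFrame({
    "facility_id": [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime([
        "2013-10-01 09:00:00", "2013-10-01 09:45:00", "2013-10-01 11:30:00",
        "2013-10-02 14:00:00", "2013-10-02 14:20:00",
    ]),
})

# Time spent online per facility: last recorded action minus first.
time_online = (
    paradata.groupby("facility_id")["timestamp"]
    .agg(lambda ts: ts.max() - ts.min())
    .rename("time_online")
)
print(time_online)
```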

The graphic below shows that as the number of juvenile records entered online increased, the amount of time spent in the data collection tool increased. To reduce the burden on the juvenile facilities, we could include this graphic in the next data collection and suggest that facilities with 50 or more juvenile records upload a data file instead of spending hours entering that data in the data collection tool. Knowing this information is essential to helping us make responding to the survey easier for staff at the juvenile facilities.

[Figure: Time spent in the online data collection tool versus number of juvenile records entered]

We have also shared these results with the Office of Juvenile Justice and Delinquency Prevention, which plans to use them to adjust the estimates of respondent burden hours it reports to the Office of Management and Budget each year.

I will provide more suggestions for reducing respondent burden for juvenile residential facilities at the 2016 Joint Statistical Meetings and in the conference proceedings.


Estimating the Reliability of Product Sales Totals in the Economic Census

Written by: Katherine Jenny Thompson, Complex Survey Methods and Analysis Group; Matthew Thompson, Business Register and MEPS Statistical Methods Branch; and Roberta Kurec, Economic Census and Related Surveys Statistical Methods Branch, Economic Statistical Methods Division

The economic census is the U.S. Census Bureau’s official five-year measure of American business and the economy. It provides industry and geographic detail not typically available from other sources of economic statistics, benefiting businesses, policymakers and the American public.

The term “census” in this case is actually a slight misnomer. The Census Bureau requests data from most large businesses and a sample of small businesses. We ask each of these businesses to provide data on sales, shipments, and receipts or revenues for each of its establishments (i.e., each single physical location), as shown in Figure 1.

[Figure 1: Economic census request for sales, shipments, and receipts or revenues for each establishment]

We also ask for the revenues obtained by each establishment from the types of products likely to be produced or sold based on its primary industry. Product statistics are needed by the Bureau of Economic Analysis to benchmark the national accounts, as well as by the Bureau of Labor Statistics in constructing producer price indexes. The North American Product Classification System defines over 8,000 different products that can be reported across the entire census.

As an example, Figure 2 provides a short extract from the product collection for establishments in the “Automobile Dealers” retail trade industry from the 2012 Economic Census. Notice that, on the surface, these products don’t seem to be related to automobile dealers, but they are products that could be found at automobile dealerships, which is why they are included on the questionnaire. The product list for some establishments can span more than 50 potential products. Additionally, for certain industries the Census Bureau designates “must-have” products. For example, an automobile dealer should report revenue from automobile sales.

[Figure 2: Extract from the 2012 Economic Census product collection for the “Automobile Dealers” retail trade industry]

In most industries, only a few products are frequently reported and many sampled establishments do not report any data on products. This makes it difficult to produce good product statistics and measures of reliability.

For the past two years, the Census Bureau has conducted extensive research into product statistics. Initial research by the team focused on determining a single missing data treatment method for products in the 2017 Economic Census. This research was presented in a topic-contributed session titled "Evaluating Alternative Imputation Methods for Economic Census Products: The Cook-Off" at the 2015 Joint Statistical Meetings.

This year, we have been exploring how to estimate the variance for product sales. Besides the sampling, imputation and post-stratification components, there are additional challenges caused by the lack of good predictors and high expected zero rates for many products, compounded by high product nonresponse rates. We believe that it is possible to find a variance estimator with good statistical properties for the well-reported products, but we remain concerned about the others. So far, the team has conducted two separate simulation studies that investigate whether a variance estimator can perform well across many different products, considering (1) only sampling variance and post-stratification, and (2) only product nonresponse and hot deck imputation. We will share these results on August 1, 2016, at the Joint Statistical Meetings. The next phase of our research will combine the findings from the two studies to develop a single variance estimator for products.
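
As a rough illustration of the hot deck component of the second study, the sketch below fills in missing product sales using donors from the same imputation cell (here simply the industry). The industry codes, sales values and cell definition are invented for this post and do not reflect the production imputation system.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2017)

# Hypothetical product-line reports for one product, with item nonresponse.
reports = pd.DataFrame({
    "industry": ["441110"] * 6 + ["441120"] * 4,
    "product_sales": [500.0, np.nan, 320.0, np.nan, 410.0, 0.0,
                      np.nan, 150.0, 90.0, np.nan],
})

def hot_deck(values):
    """Replace missing values with random draws from responding donors
    in the same imputation cell."""
    donors = values.dropna().to_numpy()
    out = values.copy()
    missing = out.isna()
    out[missing] = rng.choice(donors, size=missing.sum())
    return out

reports["imputed_sales"] = (
    reports.groupby("industry")["product_sales"].transform(hot_deck)
)
print(reports)
```

Repeating the imputation across many simulated samples is one way to gauge how much nonresponse and hot deck imputation contribute to the variance of the product totals.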


Update on the Current Population Survey Research

Written by: Stephanie Chan-Yang, Yang Cheng and Aaron Gilary, U.S. Census Bureau

The U.S. Census Bureau’s Current Population Survey is one of the oldest and largest household surveys in the United States. Since 1940, it has produced monthly labor force statistics. The Current Population Survey interviews about 72,000 households each month to estimate the numbers of people who are unemployed, employed and not in the labor force, leading to the official estimate of the national unemployment rate.

The Current Population Survey applies a stratified two-stage cluster sampling design to select a representative sample of U.S. households. A housing unit selected for the sample is interviewed for four consecutive months, rotated out for eight months, and then interviewed for another four months. This approach aims to develop overall monthly estimates while also tracking monthly and annual changes among the sampled households.
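
A minimal sketch of this 4-8-4 rotation pattern (the month indexing here is just for illustration):

```python
def in_sample(months_since_first_interview: int) -> bool:
    """True if a housing unit first interviewed in month 0 is interviewed in
    the given month under the 4-8-4 rotation: in sample for four months,
    out for eight, then in sample for four more."""
    m = months_since_first_interview
    return 0 <= m <= 3 or 12 <= m <= 15

# A unit's 16-month pattern of interview (True) and rest (False) months.
print([in_sample(m) for m in range(16)])
```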

To manage these design features, the survey team has long relied on cutting-edge research on sampling, weighting and variance estimation. Given several key factors beyond our control, such as budgets, computational power and stakeholders’ needs, the survey team must be agile enough to adapt to change. This nimble approach requires a strong understanding of the underlying theory, an ability to adapt the survey quickly, and an opportunity to hone our methods under peer review.

Research Presented at the Joint Statistical Meetings

The survey team will give three presentations in the session “Update on the Current Population Survey Research” at this year’s Joint Statistical Meetings in August 2016.

  • Stephanie Chan-Yang will speak about the sample size for the Current Population Survey. She will also explain the sample size and allocation in relation to the Bureau of Labor Statistics sample design requirements for accuracy. Chan-Yang will further describe the Children’s Health Insurance Program expansion to the survey sample. This expansion increases the survey sample size in order to provide better estimates of low-income children without health insurance. These data feed into the Current Population Survey Annual Social and Economic Supplement. Finally, Chan-Yang’s presentation will explore recent research on reducing the sample size and on budget constraints.
  • Yang Cheng will explore a new method to improve our composite estimates. In his research, he proposes an iterative version of our composite estimator (known as the AK composite estimator) for the Current Population Survey. This new method includes the current AK composite estimator as a special case. In addition, the proposed method can reduce the mean squared error of the AK composite estimator when we choose the optimal estimator in this general family. Finally, Cheng will demonstrate the proposed method via comprehensive numerical studies.
  • Aaron Gilary will give an overview of Current Population Survey variance methodology. This talk discusses the survey’s current methods of calculating variances, with a focus on the Balanced Repeated Replication method, which constructs a variance estimate by resampling the data using replicate factors (a minimal illustrative sketch follows this list). The talk highlights the components of the variance estimate that come from the survey sample design, and the different variance measures that the survey produces. The presentation will conclude with ideas for future improvements.
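
The sketch below illustrates the replication idea behind Balanced Repeated Replication under strong simplifying assumptions: a design with two sampled PSUs per stratum, full replicate weights (no Fay adjustment) built from a Hadamard matrix, and invented data. It is not the Current Population Survey's production variance system.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(7)

H = 8                                     # number of strata (a power of 2 here)
y = rng.normal(50.0, 10.0, size=(H, 2))   # invented PSU-level weighted totals
w = np.ones((H, 2))                       # base weights

def estimate(weights):
    return float((weights * y).sum())

theta_full = estimate(w)

# Each Hadamard row defines one replicate: in every stratum, one PSU's weight
# is doubled and the other's is set to zero.
Hmat = hadamard(H)
replicates = []
for r in range(H):
    wr = w.copy()
    keep_first = Hmat[r] == 1
    wr[keep_first, 0] *= 2.0
    wr[keep_first, 1] = 0.0
    wr[~keep_first, 0] = 0.0
    wr[~keep_first, 1] *= 2.0
    replicates.append(estimate(wr))

# BRR variance: average squared deviation of the replicate estimates from the
# full-sample estimate.
variance = np.mean((np.array(replicates) - theta_full) ** 2)
print(theta_full, variance)
```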

To learn more about Current Population Survey methodologies and research, please join us at the Joint Statistical Meetings on August 3, 2016, or contact us at: <stephanie.chan.yang@census.gov>, <yang.cheng@census.gov>, or <aaron.j.gilary@census.gov>.

 


Visit Us at the 2016 Joint Statistical Meetings in Chicago

On July 30, U.S. Census Bureau staff will join several thousand statisticians and experts in related professions to present testing and research results on many topics at the Joint Statistical Meetings in Chicago, Ill. Presented annually by the American Statistical Association, this year’s Joint Statistical Meetings will take place from July 30 to Aug. 4. The theme of this year’s conference is “The Extraordinary Power of Statistics.”

Attendees will present and hear about advances in statistical methodology and applications, including statistical theory and methodological development, state-of-the-art technological advances for data processing, and new advances in statistical sampling, estimation, and modeling.

Census Bureau experts will present on a spectrum of topics, including:

  • New machine learning research for collecting data from the web.
  • Estimating reliability in the economic census.
  • The treatment of imputed earnings.
  • Reducing respondent burden.
  • The Current Population Survey.

The Joint Statistical Meetings offer a unique international forum for Census Bureau staff to present their research for professional discussion. It is a major setting for ensuring that the Census Bureau’s statistical methodology remains at the cutting edge. We look forward to sharing our ideas at this year’s conference. For a complete listing of Census Bureau research presentations, see <http://www.census.gov/research/conferences/jsm/2016.html>.

 


Do Refund Anticipation Products Help or Harm American Taxpayers?

By: Maggie R. Jones, Center for Administrative Records Research and Applications

Many taxpayers rely on for-profit tax preparation services to file their income taxes. To make tax filing more appealing to taxpayers, preparers offer financial products that speed up the delivery of refunds. However, recent U.S. Census Bureau research suggests that these products may make families less financially secure.

“A Loan by any Other Name: How State Policies Changed Advanced Tax Refund Payments” examines the impact on taxpayers of state-level regulation of refund anticipation loans (RALs). Both refund anticipation loans and refund anticipation checks (RACs) are products offered by tax preparers that provide taxpayers with an earlier refund (in the case of a refund anticipation loan) or a temporary bank account from which tax preparation fees can be deducted (in the case of a refund anticipation check). Each product carries high fees and interest rates (often an annual rate of more than 100 percent) and is very costly compared with the value of the refund.

States have responded to the predatory nature of refund anticipation loans through regulation. The working paper looks specifically at how New Jersey’s 2008 cap on RAL interest rates (no more than a 60 percent annual rate) affected taxpayers. Evidence suggests that the use of refund anticipation products among taxpayers living in ZIP codes near New Jersey’s border with another state increased after the policy changed. In other words, New Jersey’s regulation appears to have suppressed the volume of refund anticipation products offered within the state, with taxpayers near the border crossing into a bordering state to use the products.

Meanwhile, border taxpayers’ use of key social programs such as the Supplemental Nutrition Assistance Program, Temporary Assistance for Needy Families and Supplemental Security Income also increased. In other words, after the change in policy, use of both refund anticipation products and social programs increased for taxpayers in New Jersey border ZIP codes compared with other families, indicating greater hardship. The map below shows the ZIP codes used in the analysis.

[Map: ZIP codes used in the analysis]

At one time, the Internal Revenue Service informed preparers if there was an offset on a taxpayer’s refund. Under pressure from consumer advocates, the IRS stopped providing the indicator in 2010. By 2012, all of the major tax preparation companies in the industry had withdrawn from the RAL market, turning to RACs as a replacement. Consumers paid a minimum of $648 million in RAC fees in 2014. The maps show the withdrawal from the RAL market and the increase in the RAC market between 2005 and 2012.

[Maps: Withdrawal from the RAL market and growth of the RAC market, 2005-2012]

Refund anticipation products pose important questions for policymakers. To obtain higher refunds for people filing taxes, tax preparers file additional forms that include claims for credits and deductions, which increases tax preparation costs. This translates to higher charges for low-income taxpayers who are eligible for these credits and deductions. Moreover, preparers target RALs and RACs to low-income taxpayers who expect substantial refunds through redistributive credits such as the Earned Income Tax Credit, arguing that the products speed up refund receipt and help taxpayers pay off pressing debts or bills more quickly, making low-income families better off. However, some portion of this refund money goes directly from the tax and transfer system to tax preparers rather than to its intended recipients.


Investigating Alternative Methods to Estimate Time Use Behaviors

Written by: Rachelle Hill, Center for Economic Studies, and Katie Genadek, University of Minnesota

Time diary surveys collect information about the different activities survey respondents participate in throughout a pre-selected diary day, including a general description of each activity and the amount of time spent on it. This unique data structure creates novel research opportunities as well as challenges in choosing the appropriate analytic method. In our paper, "Investigating Alternative Methods to Estimate Time Use Behaviors," we compare four analytic methods for estimating time use from time diary data and demonstrate the importance of considering how different modeling techniques may affect the results. We investigate these alternative methods to help time diary researchers better understand the complexities of choosing the correct analytic method and its potential impact on the results.

The Bureau of Labor Statistics sponsors the American Time Use Survey (ATUS), which is conducted by the U.S. Census Bureau. This annual, cross-sectional, time diary survey began in 2003 and is conducted throughout the year. The survey captures a respondent’s daily activities from 4 a.m. of the day prior to the survey until 3:59 a.m. of the survey day.

Interviewers record each activity according to a six-digit coding scheme. Activities include everything from biking to doing laundry to looking for a job. This coding scheme protects the respondent’s identity while also condensing the information into a useable structure that allows researchers to investigate their activity of interest. Despite the detailed coding structure, some aspects of time diary data make analysis difficult.

The American Time Use Survey diary is limited to a small window of time, specifically 24 hours. This short period of time increases the chances that the respondent may not record participation in some activities of interest regardless of whether or not the activity is one in which they frequently engage. For example, some respondents will report no time spent with extended family members because they did not see them on the diary day but see them at other times. In contrast, other respondents will report no time spent with extended family members because they never see them. This is referred to below as a true zero.

Figure 1 illustrates the variability in the percentage of zeros across different family members. Using the relationship variables captured in the survey instrument, the figure shows the percentage of respondents who spend a given number of minutes with children (among parents), with spouses/partners (among members of couples) and with extended family members (among all respondents). We draw on these similar measures of family time with differing proportions of zeros to compare different analytic methods.

[Figure 1: Percentage of respondents spending a given number of minutes with children, spouses/partners and extended family members]

We compare four analytic methods used in time diary data analyses while drawing on different measures of time with family members (including children under 6, all children, spouse/partner, only spouse/partner, parents and extended family members) from the 2003-2010 American Time Use Survey. By comparing measures of similar concepts across model types, we can compare the estimates produced by the different analytic methods.

The four methods we examine are Ordinary Least Squares, Tobit, Double Hurdle and Zero-Inflated Count models. Ordinary Least Squares assumes that the variable of interest is continuous and may be biased when the variable is censored at zero. Tobit accounts for a censored distribution and is often applied in time diary analyses, but it assumes that cases censored at zero are true zeros rather than a mismatch between the diary day and the activity. Double Hurdle predicts both the likelihood of not participating on the diary day and the amount of time spent, but there is some evidence of bias when the covariates are related to the likelihood of not participating. Zero-Inflated Count models effectively handle a large proportion of zeros, predict both the likelihood of participating in an activity and the amount of time spent, and assume two causes for not reporting time in a given activity.
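
To illustrate why the zero process matters, the sketch below fits Ordinary Least Squares and a zero-inflated Poisson model to simulated diary-style data with a large share of true zeros. All parameter values are invented, the single covariate is a stand-in for the richer ATUS covariates, and Tobit and Double Hurdle are omitted because they are not part of the standard statsmodels distribution.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(2016)
n = 2000

# Simulated minutes with extended family on a diary day: a structural-zero
# process ("never sees them") plus a count process for those who participate.
x = rng.normal(size=n)
never_sees_family = rng.random(n) < 0.55
minutes = rng.poisson(np.exp(3.0 + 0.3 * x))
y = np.where(never_sees_family, 0, minutes)

X = sm.add_constant(x)

# OLS treats minutes as continuous and ignores the mass of zeros.
ols_fit = sm.OLS(y, X).fit()

# The zero-inflated model separates the participation decision from the
# amount of time spent, matching the two kinds of zeros described above.
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(
    maxiter=500, disp=False
)

print(ols_fit.params)
print(zip_fit.params)
```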

In our preliminary results, we find that the model coefficients vary by the proportion of respondents who spend no time with family members. When the proportion of respondents who report no time is smaller, as is the case with parents’ time spent with children, the predictions are nearly the same across the four model types. When the proportion of respondents who report no time is larger, as is the case with respondents’ time with extended family members, then the predictions vary considerably. Specifically, we find that Tobit and Double Hurdle estimates are more variable than Ordinary Least Squares and Zero-Inflated Count models. Such variability is evidence of the need to consider and evaluate different analytic methods and their effects on reported results.

The next step in our analysis is to explore the four methods using simulated data. We will compare estimates from various possible American Time Use Survey data structures including all true zeros, no true zeros and a mix at different proportions. This comparison will help time diary researchers choose the analytically appropriate method for their research question and better understand the implications of their choice for their results.


Implementing Bring Your Own Device (BYOD) in a Survey Organization

Written by: Jessica Holzberg, Mathematical Statistician, Demographic Statistical Methods Division, and Casey Eggleston, Mathematical Statistician, Center for Survey Measurement

When interviewers administer a survey in a Computer Assisted Personal Interview (CAPI) mode, survey organizations like the U.S. Census Bureau incur substantial costs acquiring and maintaining devices such as laptops, tablets or cell phones for interviewers to use in the field. One potential way to mitigate these costs is to ask interviewers to use their own personal devices, an approach known as Bring Your Own Device (BYOD).


The Census Bureau is no longer considering BYOD for the 2020 Census due to potential logistical and administrative challenges (see Memorandum 2016.01: Decision on Using Device as a Service in the 2020 Census Program). However, BYOD may still be considered for other Census Bureau surveys in the future, and by other survey organizations. Technical feasibility and cost savings are two major considerations. In this blog, we highlight a few findings from the Center for Survey Measurement on the feasibility of a BYOD program from two other perspectives:

  1. How do current and potential interviewers perceive BYOD? Are people willing to use their own devices for work tasks, including survey fieldwork?
  2. How does the public feel about interviewers using their own devices to collect personal information?

We conducted both qualitative and quantitative research on BYOD. Our qualitative research included focus groups with the public and with Census Bureau interviewers following two major tests leading up to the 2020 Census (the 2014 and 2015 Census Tests). Our quantitative research included surveys of Census Bureau interviewers as well as survey questions asked of the public on a nationally representative Random Digit Dial (RDD) telephone survey conducted by the Gallup organization.

1. How do current and potential interviewers perceive BYOD? Are people willing to use their own devices for work tasks, including survey fieldwork?

We asked the general public about using their own smartphones for work to understand whether potential interviewers would be willing to participate in a BYOD initiative. Generally, we found evidence in our nationally representative Gallup telephone survey that the majority of smartphone and tablet owners would be willing to use their personal device for work-related purposes; for example, 72 percent of owners were willing to use their device for work-related email. These statistics represent data collected from January through April of 2015.

Many of the Census Bureau interviewers we spoke to in our focus groups were open to using their own devices for work as well. However, interviewers were unsure how BYOD would work from a logistical perspective. For example, one concern interviewers had was whether the Census Bureau would have access to private content on their devices. Reimbursement for personal data use was also a common concern. Interviewers who had unlimited data plans for their devices were less concerned about how they would be reimbursed, however.

2. How does the public feel about interviewers using their own devices to collect personal information?

While interviewers’ willingness to participate in BYOD seems promising, public perception of data collection on personal devices is also an important concern. Analyzing responses from Gallup survey questions administered to the public in January through April 2015, we found that less than a quarter of respondents favored interviewers using their personally owned devices to collect Census Bureau data when it was presented as a cost-saving measure (23.6 percent).

However, nearly one-fifth of respondents (18.5 percent) neither favored nor opposed BYOD enumeration. Those who were opposed to BYOD were asked an open-ended, follow-up question to learn more about their concerns. Respondents most commonly reported privacy concerns, as well as concerns about security, data getting into the wrong hands, interviewer misuse of data and fairness to interviewers.

Responses from members of the public to whom we spoke during a series of focus groups echoed many of these concerns. They were unsure about how BYOD would work and tended to assume that their information would not be secure when using a personal device. However, it is not clear that survey respondents will be able to tell whether an interviewer is using a personal device unless the respondent is explicitly told. Focus group respondents struggled to name ways in which they might be able to tell that a device is not a government-issued device.

We also believe that some of these concerns regarding privacy and security may not be as great for surveys that do not ask respondents about their addresses and household composition. Thus, while BYOD is not currently under consideration for the 2020 Census, we hope that our research will help guide other surveys and survey organizations considering it. Surveys that implement BYOD will need to outline clear strategies for reimbursing interviewers and alleviating respondent concerns.
