Do Refund Anticipation Products Help or Harm American Taxpayers?

By: Maggie R. Jones, Center for Administrative Records Research and Applications

Many taxpayers rely on for-profit tax preparation services to file their income taxes. To make tax filing more appealing to taxpayers, preparers offer financial products that speed up the delivery of refunds. However, recent U.S. Census Bureau research suggests that these products may make families less financially secure.

“A Loan by any Other Name: How State Policies Changed Advanced Tax Refund Payments” examines the impact on taxpayers of state-level regulation of refund anticipation loans (RALs). Both refund anticipation loans and refund anticipation checks (RACs) are products offered by tax preparers that provide taxpayers with an earlier refund (in the case of a refund anticipation loan) or a temporary bank account from which tax preparation fees can be deducted (in the case of a refund anticipation check). Each product carries high interest rates (often an annual rate of more than 100 percent) and fees, making it very costly relative to the value of the refund.
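
To see why the annualized cost can exceed 100 percent, consider a hypothetical advance (the dollar figures below are illustrative, not from the paper): a modest flat fee on a refund delivered only a week or two early implies a triple-digit annual rate.

```python
# Illustrative arithmetic only; the refund, fee and timing below are hypothetical.
def implied_annual_rate(refund, fee, days_early):
    """Annualize the flat fee charged for receiving `refund` dollars `days_early` days sooner."""
    period_rate = fee / refund                     # cost as a share of the advance
    return period_rate * (365 / days_early) * 100  # simple annualized rate, in percent

# An $80 fee to receive a $1,500 refund 10 days early:
print(f"{implied_annual_rate(1500, 80, 10):.0f}% annual rate")  # roughly 195%
```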

States have responded to the predatory nature of refund anticipation loans through regulation. The working paper looks specifically at how New Jersey’s 2008 interest rate cap on RALs (no more than a 60 percent annual rate) affected taxpayers. Evidence suggests that the use of refund anticipation products among taxpayers living in ZIP codes near New Jersey’s border with another state increased after the policy changed. In other words, New Jersey’s regulation appears to have suppressed the volume of refund anticipation products offered within the state, with taxpayers near the border crossing into a bordering state to use the products.

Meanwhile, border taxpayers’ use of key social programs such as the Supplemental Nutrition Assistance Program, Temporary Assistance for Needy Families and Supplemental Security Income also increased. In other words, after the change in policy, use of both refund anticipation products and social programs increased for taxpayers in New Jersey border ZIP codes compared with other families, indicating greater hardship. The map below shows the ZIP codes used in the analysis.

[Map: ZIP codes used in the border analysis]

At one time, the Internal Revenue Service informed preparers whether there was an offset on a taxpayer’s refund. Under pressure from consumer advocates, the IRS stopped providing the indicator in 2010. By 2012, all of the major tax preparation companies had withdrawn from the RAL market, turning to RACs as a replacement. Consumers paid a minimum of $648 million in RAC fees in 2014. The maps below show the withdrawal from the RAL market and the increase in the RAC market between 2005 and 2012.

[Maps: RAL market withdrawal and RAC market growth, 2005 to 2012]

Refund anticipation products pose important questions for policymakers. To secure higher refunds for filers, tax preparers file additional forms claiming credits and deductions, which increases tax preparation costs. This translates into higher charges for the low-income taxpayers who are eligible for these credits and deductions. Moreover, preparers target RALs and RACs to low-income taxpayers who expect substantial refunds through redistributive credits such as the Earned Income Tax Credit, arguing that RALs and RACs speed up refund receipt and help taxpayers pay off pressing debt or bills more quickly, making low-income families better off. However, some portion of this refund money goes directly from the tax and transfer system to tax preparers rather than to its intended recipients.


Investigating Alternative Methods to Estimate Time Use Behaviors

Written by: Rachelle Hill, Center for Economic Studies, and Katie Genadek, University of Minnesota

Time diary surveys collect information about the different activities survey respondents participate in throughout a pre-selected diary day, including a general description of each activity and the amount of time spent on it. This unique data structure creates novel research opportunities as well as challenges for choosing the appropriate analytic method. In our paper, Investigating Alternative Methods to Estimate Time Use Behaviors, we compare four analytic methods for analyzing time diary data and demonstrate the importance of considering how different modeling techniques may affect the results. We investigate these alternative methods to help time diary researchers better understand the complexities of choosing the correct analytic method and its potential impact on the results.

The Bureau of Labor Statistics sponsors the American Time Use Survey (ATUS), which is conducted by the U.S. Census Bureau. This annual, cross-sectional, time diary survey began in 2003 and is conducted throughout the year. The survey captures a respondent’s daily activities from 4 a.m. of the day prior to the survey until 3:59 a.m. of the survey day.

Interviewers record each activity according to a six-digit coding scheme. Activities include everything from biking to doing laundry to looking for a job. This coding scheme protects the respondent’s identity while also condensing the information into a usable structure that allows researchers to investigate their activity of interest. Despite the detailed coding structure, some aspects of time diary data make analysis difficult.

The American Time Use Survey diary is limited to a small window of time, specifically 24 hours. This short period of time increases the chances that the respondent may not record participation in some activities of interest regardless of whether or not the activity is one in which they frequently engage. For example, some respondents will report no time spent with extended family members because they did not see them on the diary day but see them at other times. In contrast, other respondents will report no time spent with extended family members because they never see them. This is referred to below as a true zero.

Figure 1 illustrates the variability in the percentage of zeros across different family members. Using the relationship variables captured in the survey instrument, we limit the samples to parents, members of couples and all respondents; the figure then shows the percentage of respondents who spend a given number of minutes with children, spouses/partners and extended family members, respectively. We draw on these similar measures of family time with differing proportions of zeros to compare different analytic methods.

[Figure 1: Percentage of respondents spending a given number of minutes with children, spouses/partners and extended family members]

We compare four analytic methods used in time diary data analyses while drawing on different measures of time with family members (including children under 6, all children, spouse/partner, only spouse/partner, parents and extended family members) from the 2003-2010 American Time Use Survey. By comparing measures of similar concepts across model types, we can compare the estimates produced by the different analytic methods.

The four methods we examine are: Ordinary Least Squares, Tobit, Double Hurdle and Zero-Inflated Count models. Ordinary Least Squares assumes that the variable of interest is continuous and may be biased when the variable is censored at zero. Tobit accounts for a censored distribution and is often applied in time diary analyses but assumes that cases censored at zero are true zeros rather than a mismatch between the diary day and the activity. Double Hurdle models predict both the likelihood of not participating on the diary day and the amount of time spent, but there is some evidence of bias when the covariates are related to the likelihood of not participating. Zero-Inflated Count models effectively model a large proportion of zeros, predict both the likelihood of participating in an activity and the amount of time spent, and assume two causes for not reporting time in a given activity.
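
As a rough illustration of how two of these model families treat zeros differently, the sketch below fits Ordinary Least Squares and a Zero-Inflated Poisson model to simulated minutes data using statsmodels (Tobit and double-hurdle estimators are not built into statsmodels, so they are omitted; the data-generating assumptions here are ours, not the paper’s).

```python
# A minimal sketch, not the analysis from the paper: OLS vs. a zero-inflated count
# model on simulated "minutes with extended family" data with many zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 2000
works_full_time = rng.integers(0, 2, n).astype(float)   # one illustrative covariate
X = sm.add_constant(works_full_time)

# Roughly 60 percent of simulated diary days record zero minutes; the rest are Poisson counts.
saw_family_today = rng.random(n) < 0.4
minutes = np.where(saw_family_today, rng.poisson(90 - 30 * works_full_time), 0)

ols_fit = sm.OLS(minutes, X).fit()                                   # zeros treated as continuous outcomes
zip_fit = ZeroInflatedPoisson(minutes, X, exog_infl=X).fit(maxiter=200, disp=False)

print(ols_fit.params)   # one linear effect on expected minutes
print(zip_fit.params)   # separate coefficients for the zero ("inflation") part and the count part
```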

In our preliminary results, we find that the model coefficients vary by the proportion of respondents who spend no time with family members. When the proportion of respondents who report no time is smaller, as is the case with parents’ time spent with children, the predictions are nearly the same across the four model types. When the proportion of respondents who report no time is larger, as is the case with respondents’ time with extended family members, then the predictions vary considerably. Specifically, we find that Tobit and Double Hurdle estimates are more variable than Ordinary Least Squares and Zero-Inflated Count models. Such variability is evidence of the need to consider and evaluate different analytic methods and their effects on reported results.

The next step in our analysis is to explore the four methods using simulated data. We will compare estimates from various possible American Time Use Survey data structures including all true zeros, no true zeros and a mix at different proportions. This comparison will help time diary researchers choose the analytically appropriate method for their research question and better understand the implications of their choice for their results.
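
A simulation of that kind might look something like the sketch below, where zeros arise either from respondents who never engage in the activity or from a mismatch between the diary day and the activity (the proportions and distributions are illustrative assumptions, not our study design).

```python
# An illustrative simulation, not our study design: zeros come either from respondents
# who truly never engage in the activity ("true zeros") or from a mismatch between the
# diary day and the activity.
import numpy as np

def simulate_diary_minutes(n, share_true_zero, daily_participation_prob, mean_minutes, seed=0):
    rng = np.random.default_rng(seed)
    never_engages = rng.random(n) < share_true_zero           # true zeros
    off_day = rng.random(n) >= daily_participation_prob       # diary-day mismatch zeros
    minutes = rng.exponential(mean_minutes, n)
    minutes[never_engages | off_day] = 0.0
    return minutes

# Vary the share of true zeros from none to most of the sample:
for share in (0.0, 0.3, 0.6):
    y = simulate_diary_minutes(5000, share_true_zero=share,
                               daily_participation_prob=0.5, mean_minutes=60)
    print(f"true-zero share {share:.1f}: {np.mean(y == 0):.2f} of diary days show zero minutes")
```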


Implementing Bring Your Own Device (BYOD) in a Survey Organization

Written by: Jessica Holzberg, Mathematical Statistician, Demographic Statistical Methods Division, and Casey Eggleston, Mathematical Statistician, Center for Survey Measurement

When interviewers administer a survey in a Computer Assisted Personal Interview (CAPI) mode, survey organizations like the U.S. Census Bureau incur great costs acquiring and maintaining devices such as laptops, tablets or cell phones for interviewers to use in the field. One potential way to mitigate these costs is to ask interviewers to use their own personal devices, an approach known as Bring Your Own Device (BYOD).


The Census Bureau is no longer considering BYOD for the 2020 Census due to potential logistical and administrative challenges (see Memorandum 2016.01: Decision on Using Device as a Service in the 2020 Census Program). However, BYOD may still be considered for other Census Bureau surveys in the future, and by other survey organizations. Technical feasibility and cost savings are two major considerations. In this blog, we highlight a few findings from the Center for Survey Measurement on the feasibility of a BYOD program from two other perspectives:

  1. How do current and potential interviewers perceive BYOD? Are people willing to use their own devices for work tasks, including survey fieldwork?
  2. How does the public feel about interviewers using their own devices to collect personal information?

We conducted both qualitative and quantitative research on BYOD. Our qualitative research included focus groups with the public and with Census Bureau interviewers following two major tests leading up to the 2020 Census (the 2014 and 2015 Census Tests). Our quantitative research included surveys of Census Bureau interviewers as well as survey questions asked of the public on a nationally representative Random Digit Dial (RDD) telephone survey conducted by the Gallup organization.

1. How do current and potential interviewers perceive BYOD? Are people willing to use their own devices for work tasks, including survey fieldwork?

We asked the general public about using their own smartphones for work to understand whether potential interviewers would be willing to participate in a BYOD initiative. In our nationally representative Gallup telephone survey, we found that the majority of smartphone and tablet owners would be willing to use their personal device for work-related purposes; for example, 72 percent of owners were willing to use their device for work-related email. These statistics represent data collected from January through April of 2015.

Many of the Census Bureau interviewers we spoke to in our focus groups were open to using their own devices for work as well. However, interviewers were unsure how BYOD would work from a logistical perspective. For example, one concern interviewers had was whether the Census Bureau would have access to private content on their devices. Reimbursement for personal data use was also a common concern. Interviewers who had unlimited data plans for their devices were less concerned about how they would be reimbursed, however.

2. How does the public feel about interviewers using their own devices to collect personal information?

While interviewers’ willingness to participate in BYOD seems promising, public perception of data collection on personal devices is also an important concern. Analyzing responses from Gallup survey questions administered to the public in January through April 2015, we found that less than a quarter of respondents favored interviewers using their personally owned devices to collect Census Bureau data when it was presented as a cost-saving measure (23.6 percent).

However, nearly one-fifth of respondents (18.5 percent) neither favored nor opposed BYOD enumeration. Those who were opposed to BYOD were asked an open-ended, follow-up question to learn more about their concerns. Respondents most commonly reported privacy concerns, as well as concerns about security, data getting into the wrong hands, interviewer misuse of data and fairness to interviewers.

Responses from members of the public to whom we spoke during a series of focus groups echoed many of these concerns. They were unsure about how BYOD would work and tended to assume that their information would not be secure when using a personal device. However, it is not clear that survey respondents will be able to tell whether an interviewer is using a personal device unless the respondent is explicitly told. Focus group respondents struggled to name ways in which they might be able to tell that a device is not a government-issued device.

We also believe that some of these concerns regarding privacy and security may not be as great for surveys that do not ask respondents about their addresses and household composition. Thus, while BYOD is not currently under consideration for the 2020 Census, we hope that our research will help guide other surveys and survey organizations considering it. Surveys that implement BYOD will need to outline clear strategies for reimbursing interviewers and alleviating respondent concerns.


Methodological Challenges and Opportunities in Web Survey Usability Evaluation

Written by: Lin Wang, Human Factors and Usability Research Group, Center for Survey Measurement

As a scientific investigation, evaluation of web survey usability requires sound methodology. Yet, web survey usability evaluation is a relatively young field that is filled with challenges and opportunities. To promote research in this area and exchange information with colleagues in the public opinion research community, researchers at the U.S. Census Bureau will present a panel on methodological challenges and opportunities in web survey usability evaluation at the 2016 American Association for Public Opinion Research conference.

Web surveys are now widely used to gather information from the public. Like all software applications, web surveys are subject to usability or user experience issues. Usability, in this context, refers to the extent to which a respondent can self-administer a web survey effectively, efficiently and satisfactorily. Usability problems may frustrate respondents and slow down survey completion, thereby compromising their ability to provide accurate responses. It is thus crucial to ensure adequate ease of use of web survey instruments through rigorous evaluation.

Usability evaluation can both reveal problems affecting user experiences and help address those problems. Addressing usability issues during the development lifecycle of the instrument can help to minimize usability-induced measurement or nonresponse errors in a cost-effective manner.

In our panel at AAPOR, we will introduce the general approach the Census Bureau uses to evaluate web survey usability. We will discuss five major challenges that we experience in practice: (1) conducting cognitive probing techniques, (2) utilizing and interpreting eye tracking data, (3) evaluating accessibility on mobile devices, (4) incorporating usability evaluation in the agile software development process, and (5) completing surveys on a smartphone.


Continuing to Explore the Relationship Between Economic and Political Conditions and Government Survey Refusal Rates: 1960 to 2015

Written by: Joanna Fane Lineback, Center for Survey Measurement

Survey programs are operating in a difficult climate. Response rates for a number of major government surveys have declined. Among them is the Current Population Survey, where the response rate has fallen below 90 percent.

Research into this phenomenon has focused on micro-level influences, such as interviewer workloads, because survey programs are looking for data collection improvements that will maintain or increase response rates. Recently, survey methodologists in the Center for Survey Measurement and the Center for Adaptive Design began thinking about macro-level influences on response rates and asking the following questions: Can we identify large-scale influences on survey response? If so, what are the implications?

We begin to answer these questions by extending the work covered in the article Exploring the Relation of Economic and Political Conditions with Refusal Rates to a Government Survey (1999) by Brian Harris-Kojetin and Clyde Tucker. The authors used Current Population Survey data and a time-series regression approach to examine economic and political influences (unemployment rates, presidential approval ratings, inflation rates and consumer sentiment scores) on survey refusal rates from 1960 to 1988. The authors found relationships between refusal rates and some macro-level characteristics, including presidential approval rating, consumer sentiment score and unemployment rate.
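
The sketch below shows one way to set up a regression of this kind with simulated monthly data (the series are fabricated for illustration, and the HAC standard errors are our choice for handling serial correlation, not necessarily the specification Harris-Kojetin and Tucker used).

```python
# Illustrative only: simulated monthly series, not CPS refusal rates or the authors' data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
months = pd.date_range("1960-01", "2015-12", freq="MS")
df = pd.DataFrame({
    "unemployment": rng.normal(6, 1.5, len(months)),
    "pres_approval": rng.normal(50, 10, len(months)),
    "inflation": rng.normal(3, 2, len(months)),
    "consumer_sentiment": rng.normal(85, 10, len(months)),
}, index=months)
df["refusal_rate"] = (2 + 0.2 * df["unemployment"] - 0.02 * df["pres_approval"]
                      + rng.normal(0, 0.5, len(months)))

X = sm.add_constant(df[["unemployment", "pres_approval", "inflation", "consumer_sentiment"]])
fit = sm.OLS(df["refusal_rate"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
print(fit.summary())
```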

We are investigating whether the authors’ findings hold over the extended period of 1960 to 2015 and if there is additional information to add to their time-series models. Most of the data the authors used are still collected, allowing us to pick up where they left off. We successfully replicated their work up to 1988 and extended it through 2015, and we are identifying covariates for additional analysis that the authors had not considered. We will present our initial findings at the 2016 American Association for Public Opinion Research conference and in its proceedings.


Communicating Data Use and Privacy: In-Person Versus Web-Based Methods for Message Testing

Written by: Aleia Clark Fobia and Jennifer Hunter Childs, Center for Survey Measurement

Communicating messages about privacy, data use and access, and confidentiality is critical to earning and keeping the trust of respondents and to ensuring their willingness to participate in surveys. Informing respondents about their rights and how their data will be used is often required by law. However, there is currently not much data available on how respondents react to these messages or how they understand the meanings we try to convey.

Our research addresses this gap in our knowledge about respondents’ understanding of an intended message. We focused our research on sets of messages that convey different types of information. The sets of messages informed respondents of a range of factors including:

  • Who has access to survey responses.
  • Survey responses are confidential.
  • Data are for statistical use only.
  • Individuals will not be identifiable when statistical data are released.
  • Responses will not be shared with law enforcement or used for eligibility for government benefits.
  • Data are sometimes shared with other federal agencies.

Four additional sets of messages tested statements that the Census Bureau is legally required to provide, such as the mandatory nature of the census, confidentiality protections, burden notifications and other language from the Paperwork Reduction Act.

We used both web-based and in-person methods to test respondent comprehension of messages about privacy and confidentiality. First, we used an open-ended internet instrument to collect qualitative data on respondent comprehension of these messages. Web-based testing was remote and respondents did not interact with an interviewer. We analyzed these data to identify high and low performing messages. We then tested some of those messages in a smaller-scale, in-person cognitive test with 30 respondents.

Combining these two methods allowed for a larger scale data collection than is typical in a qualitative study while retaining the ability to elicit rich description and allow for spontaneous probing. This research not only helps us understand how respondents comprehend our messages, but also facilitates exploration of web-based methods for testing survey questions.

One of the central limitations of in-person interviewing is the difficulty of obtaining respondent diversity. In-person interviewing is also costly in both labor hours and respondent compensation. However, this type of interviewing allows for considerable interviewer flexibility. Through direct comparison with more traditional methods, this research highlights the advantages and limitations of using alternatives to traditional in-person cognitive interviewing. For an update with our findings, come see our presentation at the American Association for Public Opinion Research conference on Saturday, May 14, 2016, in Austin, Texas.


Validating Self-Reported Health Insurance Coverage: Preliminary Results on CPS and ACS

Written by: Joanne Pascale, U.S. Census Bureau; Kathleen Call, State Health Access Data Assistance Center; Angela Fertig, Medica Research Institute; and Don Oellerich, U.S. Department of Health and Human Services

Many federal, state and private surveys include questions that measure health insurance coverage. Each survey has different origins, constraints and methodologies and, as a consequence, the surveys produce different estimates of coverage. While several factors could contribute to the variation in the estimates, research points to subtle differences in the questionnaires as driving much of this variation.

Validation studies that evaluate survey responses for individuals whose coverage is known from insurance plan records are rare. In September 2014, a number of researchers and sponsors from different agencies came together to launch the Comparing Health Insurance Measurement Error study. The goal of this study is to assess reporting accuracy in two major federal surveys, the Current Population Survey Annual Social and Economic Supplement (CPS ASEC) and the American Community Survey (ACS), by comparing survey reports of coverage to enrollment records from a private health plan. Individuals known to be enrolled in a range of different coverage types, including employer-sponsored insurance, nongroup coverage, qualified health plans from the marketplace and public coverage, were sampled.

Phone numbers associated with these individuals were then randomly assigned to one of two survey treatments, and a split panel telephone survey was conducted in the spring of 2015. Person-level matching was then conducted between the survey data and the enrollment records, and the accuracy of reported point-in-time health insurance coverage was established. My presentation at the 2016 American Association for Public Opinion Research annual conference covers reporting accuracy for public and private coverage in both the CPS ASEC and ACS, and future research will explore reporting accuracy for more detailed coverage types.

The line between public and private coverage is becoming increasingly blurry. For example, some states offer public programs that charge a premium, while other states offer marketplace coverage (which is considered private) that is completely subsidized. In addition, the “no wrong door” marketplace encourages individuals to explore different plans and complete one application to determine eligibility for a range of coverage types, from completely subsidized Medicaid to unsubsidized marketplace coverage. These blurry lines make it increasingly difficult to capture coverage type in surveys. Our research suggests that no single data point is sufficient for categorizing coverage type; rather, several data points are needed, including the general source of coverage (employer, government, direct purchase, etc.) and whether the coverage (1) was obtained on the marketplace, (2) has a premium and (3) has a subsidized premium.

Judgments must also be made as to how these data points should be pieced together to categorize coverage. For example, coverage reported to be obtained through the government, on the marketplace, with a subsidized premium could be subsidized marketplace coverage (i.e., private), or it could be the Children’s Health Insurance Program (CHIP), Medicaid or another public program that requires enrollees to pay a premium. As a result, an algorithm converting these data points into an estimate of coverage type is needed.
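
To make the idea concrete, the sketch below shows a purely hypothetical rule-based categorization built from those data points; it is not the study’s V1 or V2 algorithm, and the rules are illustrative only.

```python
# Hypothetical rules for illustration; not the V1 (conceptual) or V2 (machine learning) algorithm.
def categorize_coverage(source, on_marketplace, has_premium, premium_subsidized):
    """source: 'employer', 'government' or 'direct purchase', as reported by the respondent."""
    if source == "employer":
        return "private (employer-sponsored)"
    if on_marketplace:
        # Marketplace coverage is treated as private here, subsidized or not.
        return "private (subsidized marketplace)" if premium_subsidized else "private (marketplace)"
    if source == "government":
        # Some public programs charge a premium, so a premium alone does not imply private coverage.
        return "public (premium-paying)" if has_premium else "public"
    if source == "direct purchase":
        return "private (nongroup)"
    return "unclassified"

# The ambiguous example from the text: government-sourced, marketplace, subsidized premium.
print(categorize_coverage("government", on_marketplace=True,
                          has_premium=True, premium_subsidized=True))
# These rules call it private marketplace coverage, but as noted above it could
# instead be CHIP, Medicaid or another premium-charging public program.
```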

Beginning in 2014, the CPS ASEC added questions about the marketplace, premiums and subsidies. In the Comparing Health Insurance Measurement Error study, two algorithms were developed for categorizing public versus private coverage. The first algorithm, V1, was strictly conceptual and based on program eligibility rules, while the other, V2, was data driven, using a machine learning approach. The ACS does not yet include these questions, so coverage type was categorized based on answers to a standard “laundry list” of questions on coverage type (direct purchase, Medicaid, etc.), where the direct purchase category is assumed to capture marketplace coverage.

Two different metrics are used to evaluate the agreement between survey estimates of coverage type and the actual coverage type from enrollment records. First is underreporting (see Figure 1). Among those known to be enrolled in private coverage under the CPS ASEC/V1 treatment, 93.7 percent reported private coverage. This was slightly higher than both the CPS ASEC/V2 at 92.1 percent and the ACS at 91.6 percent. There was no significant difference between the CPS ASEC/V2 and the ACS estimates. Among those known to be enrolled in public coverage under the CPS ASEC/V2 treatment, 79.8 percent reported public coverage. This was significantly higher than both the ACS at 71.8 percent and the CPS ASEC/V1 at 68.96 percent. There was no significant difference between the CPS ASEC/V1 and the ACS.

[Figure 1]

The other metric is the flip side of underreporting (see Figure 2). Among those who reported private coverage, the percent that could be validated in the enrollment records to have private coverage was highest in the CPS ASEC/V2 (92.6 percent), which was significantly higher than both the CPS ASEC/V1 at 86.2 percent, and the ACS at 81.5 percent. The 4.7 percentage point difference between the CPS ASEC/V1 and the ACS was also significant. Among those who reported public coverage, the percent that could be validated in the enrollment records to have public coverage was 92.7 percent in the CPS ASEC/V1 and 91.4 percent in both the CPS ASEC/V2 and the ACS, and no differences across surveys were significant. My presentation places these results in the context of past research and discusses the implications of the findings for questionnaire design.

[Figure 2]
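
The two metrics can be summarized as follows (the sketch below uses toy record/report pairs, not the study’s data): the first is computed among people whose enrollment records show a given coverage type, the second among people who report that type.

```python
# Toy data for illustration; the study's matched record/report pairs are not public.
import pandas as pd

pairs = pd.DataFrame({
    "record": ["private", "private", "public", "public", "private", "public"],
    "report": ["private", "public",  "public", "private", "private", "public"],
})

def correct_reporting_rate(pairs, coverage):
    """Among those whose records show `coverage`, the share who report it (1 minus underreporting)."""
    known = pairs[pairs["record"] == coverage]
    return (known["report"] == coverage).mean()

def validation_rate(pairs, coverage):
    """Among those who report `coverage`, the share whose records confirm it."""
    reported = pairs[pairs["report"] == coverage]
    return (reported["record"] == coverage).mean()

print(correct_reporting_rate(pairs, "private"), validation_rate(pairs, "private"))
```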


Advancements in Cross-Cultural and Multilingual Questionnaire Design and Pretesting

Written by: Patricia Goerman, U.S. Census Bureau Center for Survey Measurement, and Mandy Sha, RTI International

Several U.S. Census Bureau employees with the Center for Survey Measurement’s Language and Cross-Cultural Research Group will be presenting in a special panel at the American Association for Public Opinion Research conference in Austin, Texas, this May. The panel is called “Advancements in Cross-Cultural and Multilingual Questionnaire Design and Pretesting” and includes five papers that describe research conducted as a part of a large language research contract in 2015-2016. The Center for Survey Measurement supported and collaborated with the Decennial Language Team on the work that led to this panel. We also collaborated with contractors from RTI International and Research Support Services.

The panel will be of interest to methodologists and practitioners who would like to improve measurement comparability across languages, design linguistically and culturally appropriate instruments and encourage participation in Census Bureau surveys among non-English speakers. The larger 2015 language contract was designed to pretest the 2020 Census test questionnaires and the materials used to encourage participation in the census. Our panel will focus on research findings that emerged from these larger pretesting studies.

The research was conducted using English, Spanish, Chinese, Korean, Vietnamese, Russian and Arabic forms and materials and focused on three data collection modes: internet self-response, visits from an enumerator and self-administered paper forms. A total of 384 respondents participated in cognitive and usability interviews and around 300 participated in focus groups over the year-and-a-half-long project.

One example of how the Census Bureau has adapted questionnaires and other survey materials to facilitate the participation of different language speakers comes from the Sha et al. study to be presented in this panel. In English we write from left to right, so it is useful for data capture purposes to provide a separate box for each letter. In Arabic, however, writing goes from right to left and letters in a word are connected, similar to cursive writing in English. Therefore, in order to be culturally and linguistically appropriate, the last name box on Census Bureau forms needs to be laid out as seen below when Arabic-language names are collected.

English:

[Figure: English last name field with a separate box for each letter]

Arabic:

[Figure: Arabic last name field]

Another example comes from the Hsieh et al. paper to be presented in this panel. This paper discusses the testing of a draft internet “landing page,” or the first page that users might visit on the Census Bureau website. Hsieh et al. found that using tabs in addition to drop-down menus on a page of this sort can help Asian, non-English speakers to choose their preferred language and facilitate their participation. The language tabs at the top of the draft page were very appealing to test respondents, as Chinese, Korean and Vietnamese speakers tended to click the tab of their native tongue to seek materials presented in their languages. Furthermore, even non-English speakers whose language was not shown liked seeing the different language options and received the message that multilingual support is available.

[Figure: Draft internet landing page with language tabs]

The panelists (listed below) will discuss the following:

  • Overview of the methods and the results of the multilingual and multimode research (Goerman, Park & Kenward).
  • Optimizing the visual design and usability of government information to facilitate access by Asian, non-English speakers (Hsieh, Sha, Park & Goerman).
  • Visual questionnaire design of Arabic language survey forms (Sha & Meyers).
  • Russian immigrants’ interpretation and understanding of survey questions (Schoua-Glusberg, Kenward & Morales).
  • Evaluation of the appropriateness of utilizing vignettes in cross-cultural survey research (Meyers, Garcia Trejo, Lykke & Holliday).

Identifying Hard-to-Survey Populations Using Low Response Scores by Census Tract

Written by: Kathleen Kephart, Center for Survey Measurement

The U.S. Census Bureau’s Planning Database is a publicly available data set derived from the 2009-2013 American Community Survey (ACS) and 2010 Census data. It has many potential uses not just for survey practitioners, but for local governments and planners as well. It can be used to locate tracts or block groups with characteristics of interest (e.g. seniors, children, Hispanics, languages spoken, poverty rates, health insurance coverage rates, etc.) to inform sample design and the allocation of financial resources. Additionally, users can employ the planning database to provide information about a target population, create geographic information systems (GIS) maps, enhance reports and construct models.

Unlike the ACS summary files, the Planning Database only contains the “greatest hits” of ACS variables such as age, gender, race and languages spoken at the tract and block group level, so it is a smaller, more manageable file for users of all experience levels. It also contains derived percentages and their margins of error.

In addition to the ACS five-year estimates and 2010 Census data, the Planning Database has a variable called the Low Response Score (LRS). This score is a block group’s or tract’s predicted mail nonresponse rate. Similar to the Census Bureau’s Hard to Count Score, the LRS allows us to identify areas likely to need additional targeted marketing and field follow-up.
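
As an example of how a practitioner might use the file, the sketch below pulls the highest-LRS tracts from a downloaded tract-level Planning Database extract with pandas (the file name and column names, such as "GIDTR" and "Low_Response_Score", are assumptions; check the codebook that ships with the file).

```python
# A minimal sketch; the CSV name and column names are assumptions about the tract-level file.
import pandas as pd

pdb = pd.read_csv("pdb_tract.csv", dtype={"GIDTR": str})  # tract-level Planning Database extract

# Tracts in the top decile of the Low Response Score: candidates for extra outreach and follow-up.
cutoff = pdb["Low_Response_Score"].quantile(0.90)
hard_to_count = (pdb.loc[pdb["Low_Response_Score"] >= cutoff, ["GIDTR", "Low_Response_Score"]]
                    .sort_values("Low_Response_Score", ascending=False))
print(hard_to_count.head())
```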

The figure below is a national map of the LRS distribution by tract. Darker colors indicate tracts with higher predicted mail nonresponse rates that may require more resources for follow-up, while lighter colors indicate tracts with lower predicted nonresponse rates.

[Map: Low Response Score by census tract]

While the current Planning Database is ideal for users who need tract or block group level data, there are instances when practitioners need information at other geographic levels, such as ZIP code. While ZIP codes do not map perfectly to census geography, the Census Bureau has created ZIP Code Tabulation Areas (ZCTAs), which can be linked to tracts.
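
One way to build such a ZCTA-level file is to merge the tract-level Planning Database onto a tract-to-ZCTA crosswalk and take a weighted average; the sketch below assumes a crosswalk with "GIDTR", "ZCTA" and "pop_share" columns and population-share weights, which are illustrative choices rather than the method we will demonstrate.

```python
# Illustrative sketch; the crosswalk file, its columns and the weighting scheme are assumptions.
import pandas as pd

pdb = pd.read_csv("pdb_tract.csv", dtype={"GIDTR": str})
xwalk = pd.read_csv("tract_to_zcta_crosswalk.csv", dtype={"GIDTR": str, "ZCTA": str})

merged = xwalk.merge(pdb[["GIDTR", "Low_Response_Score"]], on="GIDTR", how="left")
merged["weighted_lrs"] = merged["Low_Response_Score"] * merged["pop_share"]

# Population-share-weighted average LRS for each ZCTA.
sums = merged.groupby("ZCTA")[["weighted_lrs", "pop_share"]].sum()
zcta_lrs = (sums["weighted_lrs"] / sums["pop_share"]).rename("Low_Response_Score")
print(zcta_lrs.sort_values(ascending=False).head())
```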

At the 2016 American Association for Public Opinion Research conference, we will highlight the current tract and block group Planning Database, compare the distribution of Planning Database variables between two geographies (ZCTA and tract), and provide a demonstration of how to create a customized Planning Database at the ZCTA and potentially other levels of geography.


Digital Advertising: Encouraging Participation in the Decennial Census

Written by: Matt Virgile, Monica Vines, Nancy Bates and Gina Walejko, U.S. Census Bureau; Sam Hagedorn, Kiera McCaffrey and John Otmany, Reingold Inc.

The U.S. Census Bureau conducted a test of digital advertising and other communications techniques as part of the 2015 Census Test in the Savannah, Ga., test site. This test marks the first time the Census Bureau used communications and paid advertising to drive direct response through the online data collection instrument, both through visits to the web address prominently featured in advertising materials and through digital advertisements. Additionally, this was the first opportunity for some households to participate without receiving any mailing materials since the Census Bureau adopted the mailout/mailback approach in 1970.

The Census Bureau selected 120,000 households to receive mail materials as part of concurrent operational tests, while all remaining households (approximately 320,000) were also eligible to respond. These households learned of the test via television and radio commercials, print advertisements and billboards, news stories, partnership events, social media, and digital and targeted digital advertisements. Digital advertising refers to online advertisements in any platform designed for mass consumption. Targeted digital advertising refers to paid digital advertising that is designed for and delivered to a specific audience, based on demographics or other characteristics, and that uses tailored messaging, content and imagery.

Our research focused on the following hard-to-count groups with low internet usage and historically lower response rates to both the decennial census and the annual American Community Survey:

  • Young adults (ages 18-25)
  • Seniors (ages 65+)
  • Renters
  • Low-income households
  • African-Americans
  • No high school education
  • Parents/families with children
  • Households with a female head
  • Hispanics


The test produced many encouraging results (See Table 1). Eighty percent of respondents completed the test questionnaire via the internet. Of those submissions, nearly half (49.2 percent) were directly attributable to the advertising campaign. These respondents entered the online response instrument and completed the test either by typing in the advertising campaign URL (35.5 percent) or by clicking an advertisement online (13.7 percent).

[Table 1]

Among households that received 2015 Census Test mailings, 69 percent of responders opted for online completion (See Table 2). The mailing materials contained a unique response URL; however, 16.4 percent of these respondents did not use that URL to access the online form, instead entering directly from digital advertisements, via partnership efforts or by visiting the advertising URL used for this test. Among households not selected to receive a mailing, nearly 24 percent responded after clicking on a digital ad, which is impressive given this was the first attempt to link respondents from a digital ad to the response site.

[Table 2]

Accuracy of digital targeting (alignment between the respondent’s self-reported demographics and the targeted ad consumed) is also important (See Table 3). We were largely successful in this area, especially for targeted ads aimed at seniors (88 percent accuracy), African-Americans (75.4 percent) and Hispanics (72.9 percent). However, we were less successful with young adults (48.9 percent) and with renters (45.5 percent).

[Table 3]

Overall, the results from this advertising test show considerable promise for the use of digital and targeted digital advertising as a primary means to increase awareness about the 2020 Census, to motivate people to respond, to connect them directly to the online response instrument and to reach hard-to-count populations. The results also point to an interaction between mailing strategies and traditional and digital advertising that contributes to increases in self-response.
