
When the Federal Government Needs Data Fast, Look to Panel Surveys

Research Brief
Center for Panel Survey Sciences

August 2024

Federal surveys aspire to a higher calling. In an industry whose key guiding principle is “fit for purpose,” federal surveys sit at the top. 

Because their purpose is to produce official statistics, federal surveys aim, more than any other surveys, to attain estimates with the lowest possible total survey error, even when those goals bear considerable cost.

Conducting very high-quality surveys requires careful, methodical steps in every facet of the survey process: not just questionnaire development, fielding, and back-end data processing, weighting, estimation, and reporting, but contracting and payment functions as well. As a result, federal surveys tend to have very long production and data collection periods.

Yet sometimes data are needed quickly, especially during health crises such as COVID-19 or avian flu, or when sudden, impactful global events occur. There are also many use cases where governments need insight, but the required precision of that insight does not rise to the level of an "official statistic."

Meeting the Need for Timely Federal Data

In these cases, probability-based survey panels are often a good option for fielding surveys and attaining data. However, like many cross-sectional surveys (perhaps all those without an in-person data collection component), probability panels often suffer from low response rates and, as a result, nontrivial total survey error. What can probability panels do to meet government "halfway": perhaps not to the level of official statistics, but still maintaining, insofar as possible, low total survey error and a moderately high degree of statistical precision?

One model begins with rigorous, high-quality panel recruitment strategies. As an example, the AmeriSpeak probability panel at NORC at the University of Chicago samples 20 percent of the households that did not respond to its initial three-mailing-plus-telephone recruiting step and contacts these households through an extensive nonresponse follow-up process, including a highly incentivized FedEx mailing and then personal face-to-face visits. Such strategies increase the weighted recruitment rate (AAPOR RECR) from under five percent to nearly 30 percent, and cumulative panel survey response rates (AAPOR RR3) from about three percent to 15 percent. But still, more can be done.
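A cumulative panel response rate like the AAPOR RR3 cited above can be thought of as a product of stage-level rates: a household must first be recruited, then remain on the panel, then complete the given survey. A minimal sketch of that arithmetic (the stage rates below are round illustrative numbers, not AmeriSpeak figures):

```python
def cumulative_response_rate(recruitment_rate, retention_rate, completion_rate):
    """Cumulative panel response rate as the product of stage-level rates."""
    return recruitment_rate * retention_rate * completion_rate

# Illustrative only: a 30% recruitment rate, 80% panel retention,
# and a 60% survey completion rate compound to a ~14% cumulative rate.
rate = cumulative_response_rate(0.30, 0.80, 0.60)
print(f"{rate:.1%}")  # 14.4%
```

The multiplication explains why even a large gain at the recruitment stage yields a cumulative rate far below typical single-survey response rates.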



AmeriSpeak’s Federal Panel

AmeriSpeak recently developed a specialized subpanel called “AmeriSpeak Federal” that takes additional measures to increase recruiting and response rates even further as well as lower the typical design effect of a given survey. (Design effect or unequal weighting effect is the cumulative level of variance introduced to survey estimates due to sampling design and weighting. All surveys have this, and its effect is to increase the margins of error associated with survey estimates—so the lower the design effect, the lower the margin of error.)
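The unequal weighting effect described above is commonly approximated with Kish's formula, deff = n · Σw² / (Σw)², which equals 1.0 when all weights are equal and grows as the weights become more variable. A minimal sketch (the weights are invented for illustration):

```python
def kish_design_effect(weights):
    """Kish's approximate design effect due to unequal weighting:
    deff = n * sum(w^2) / (sum(w))^2. Equals 1.0 when all weights are equal."""
    n = len(weights)
    total = sum(weights)
    sum_sq = sum(w * w for w in weights)
    return n * sum_sq / (total * total)

equal = kish_design_effect([1.0] * 100)                # no weighting penalty
unequal = kish_design_effect([0.5] * 50 + [1.5] * 50)  # variable weights inflate deff
print(equal, round(unequal, 2))  # 1.0 1.25
```

A deff of 1.25 means the weighted estimates carry roughly 25 percent more variance than a simple random sample of the same size, which is why reducing the design effect directly shrinks margins of error.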

AmeriSpeak Federal increases the percentage of a sample attained by nonresponse follow-up (NRFU, including Federal Express mailings and in-person recruiting) from 20 percent to 40 percent. In addition, when a survey is fielded, a special prenotification letter is sent to panelists with historically low cooperation rates in past surveys; incentives are doubled; respondents who prefer to take surveys by telephone are given two additional call attempts; a postcard reminder is sent to nonrespondents after two weeks; and the overall field is extended from the standard two weeks to four.

As noted in Bilgen et al. (2019), AmeriSpeak panelists recruited via NRFU are substantially more likely to be persons of low socioeconomic status, the young, Spanish speakers, those who are not white, and those without a college degree. These are also the groups most likely to be underrepresented in most surveys. As such, increasing NRFU sampling from 20 percent to 40 percent increases the share of these hard-to-reach, low-participation groups. For example, as of 2023, Hispanics were 17 percent of AmeriSpeak panelists but 23 percent of AmeriSpeak Federal panelists; African Americans increase from 14 percent to 17 percent, and those under age 35 from 23 percent to 27 percent.

Of course, one consequence of such a strategy is that the cooperation rate of an AmeriSpeak Federal study could in fact decrease rather than increase, since the respondent pool now includes more persons with traditionally lower cooperation rates. In part for this reason, AmeriSpeak Federal employs the fielding strategies noted above to maximize survey cooperation despite a panelist cross-section that skews toward groups historically less likely to participate in surveys.

Results from Three Experiments

To test the efficacy of AmeriSpeak Federal, three independent experiments were conducted, with a random half of each study receiving the standard AmeriSpeak sampling and interviewing protocol and the other half receiving the AmeriSpeak Federal approach. These tests were part of AmeriSpeak surveys fielding a range of questions on politics, current events, and health.

Overall, the Federal condition improved the AAPOR recruitment rate (RECR) by 60 percent. This is unsurprising, given that the sampling of in-person recruits roughly doubles and in-person recruiting attains a recruitment rate about six times higher than mail and phone. The survey completion rate (COMR) improved by only 5 percent, but as noted above, this was anticipated given that the AmeriSpeak Federal sample pool contains a greater share of hard-to-reach respondents. The AAPOR response rate (RR4) improved by two-thirds. And notably, while not shown in the table, when limited to just active panelists (a practice AmeriSpeak tends to shy away from, but which other panels commonly employ to boost reported response rates), the response rate on average went from 10.5 percent to 24.6 percent.

Finally, as anticipated, design effects were reduced, here by 12 percent. This reflects both a smaller base-weight correction (since NRFU sampling doubles from 20 percent to 40 percent) and a sample that aligns more closely with Census benchmarks, reducing the correction required by raking in the weights.
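The link between design effect and margin of error can be made concrete: for a proportion, the 95 percent margin of error is approximately z · sqrt(deff · p(1 − p)/n). A short sketch using Test 1's reported design effects, with an assumed p = 0.5 and n = 1,000 (the sample size is an assumption for illustration, not a figure from this brief):

```python
import math

def margin_of_error(p, n, deff, z=1.96):
    """95% margin of error for a proportion, inflated by the design effect."""
    return z * math.sqrt(deff * p * (1 - p) / n)

# Test 1's design effects (1.85 standard vs. 1.60 Federal), assumed n = 1,000:
standard = margin_of_error(0.5, 1000, 1.85)
federal = margin_of_error(0.5, 1000, 1.60)
print(f"{standard:.1%} -> {federal:.1%}")  # 4.2% -> 3.9%
```

Because the margin of error scales with the square root of the design effect, a 14 percent reduction in deff translates into a roughly 7 percent tighter margin of error at the same sample size.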

Table 1: Differences by Response Rate, Design Effect, and Margin of Error

                 |        Test 1           |        Test 2           |        Test 3           | Grand
                 | Standard Federal Change | Standard Federal Change | Standard Federal Change | Change
AAPOR RECR       |   21%     34%    +60%   |   21%     34%    +64%   |   22%     33%    +55%   |  +60%
AAPOR COMR       |   18%     24%    +30%   |   25%     26%     +2%   |   22%     19%    -17%   |   +5%
AAPOR RR4*       |  3.9%    8.0%   +105%   |  4.1%    6.9%    +68%   |  3.8%    4.9%    +29%   |  +67%
Design Effect    |  1.85    1.60    -14%   |  1.83    1.68     -8%   |  1.95    1.70    -13%   |  -12%
Margin of Error  |  4.2%    3.9%     -7%   | 2.47%   1.96%    -21%   | 2.57%   2.19%    -15%   |  -14%

* These RR4s are for sampling with all panelists; panel surveys that field only to active panelists will attain significantly higher COMR and thus significantly higher RR4.

Of course, a key question is whether the survey estimates themselves are different between the standard and Federal conditions. These are reported in Table 2.

Table 2: Differences in Survey Estimates

                 |      Test 1       |      Test 2       |      Test 3       | Grand
                 | Estimates Percent | Estimates Percent | Estimates Percent | Change
Tested           |    141      --    |    412      --    |    522      --    |   --
Sig Differences  |     28     20%    |      5      1%    |      5      1%    |   7%
p < .01          |      9      6%    |      3      1%    |      2      0%    |   2%
p < .001         |      4      3%    |      1      0%    |      1      0%    |   1%

Overall, fewer than one in 10 estimates exhibited significant differences, and only about two percent were significant at the p < .01 level. After applying a Bonferroni adjustment for multiple tests of significance, no differences in estimates across the two approaches remain statistically significant.
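The Bonferroni adjustment mentioned above simply divides the significance threshold by the number of tests performed. A minimal sketch, assuming the three experiments' estimate counts (141, 412, and 522, or 1,075 in all) are pooled for the correction:

```python
def bonferroni_alpha(alpha, num_tests):
    """Bonferroni-corrected per-test significance threshold."""
    return alpha / num_tests

# With 1,075 estimates tested across the three experiments, the per-test
# threshold falls below every p-value tier reported in Table 2.
print(bonferroni_alpha(0.05, 1075))  # ≈ 4.65e-05
```

At that threshold, even the handful of differences significant at p < .001 would not survive the correction, which is the basis for the no-difference conclusion.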

Conclusion

The findings suggest that while standard AmeriSpeak does just as well as AmeriSpeak Federal in attaining survey estimates, researchers may still choose AmeriSpeak Federal for greater precision in those estimates, via a lower design effect and margin of error, as well as for a higher completion rate and cumulative response rate. Overall, AmeriSpeak Federal lowers the risk of error in estimates and produces response rates more palatable to federal and other government researchers.


References

Bilgen, I., Dennis, J. M., & Ganesh, N. (2019). The Undercounted: Measuring the Impact of Nonresponse Follow-up on Research Data (AmeriSpeak White Paper). NORC at the University of Chicago.

Citation

Dutwin, D. and Bilgen, I. (2024, August 1). Should the Federal Government Use Probability-Based Panels? NORC at the University of Chicago. Retrieved from https://www.norc.org.

