
American Beliefs Study Methodology

The American Beliefs Study began decades ago as a way to offer insights into the religious preferences, practices, and beliefs of Americans within specific geographic areas, along with their media preferences. This is possible only through access to a large sample of respondents who have provided their home zip code, unlocking a wealth of data about their habits and preferences through geodemographics.

This study harnesses the power of research to help local church leaders better understand and reach their communities. 

In 1991 Percept Inc. fielded its first ‘Ethos’ survey to assess American religious preferences and practices. A somewhat larger study followed in 1994, and another in 1997. MissionInsite was later formed by Percept team members who pledged to continue this study of American faith every few years as The Quadrennium Project.  The purpose was to keep the data fresh, ask many similar questions and track changes over time. MissionInsite is now a division of ACST, and the Quadrennium study is now the American Beliefs Study.

Several surveys following the American Beliefs Study pattern have now been fielded. The first, in 2012, was superseded by the 2017 edition, which is now updated in this 2021 study.

Each edition assesses current religious preferences, practices and beliefs. Of particular interest in each survey are current social and moral issues. Given that these evolve over time, some new questions are added, while others may cycle out.

But the heart of the study is a focus on the American religious landscape, especially across generational groups.

The goal of this study is to equip the Church in America with hyper-local resources for understanding and reaching their communities.

Summary

This online study among 14,942 American adults was conducted for ACST by Campbell Rinker from October 2020 through February 2021. Results were balanced across US Census regions and 19 Experian ‘Mosaic’ clusters, and weighted by age to align with known population characteristics. The study carries a maximum margin of error of ±1.97% at the 95% confidence level within any US Census region. A comparative 2017 study involved an audience of the same size.

Note: the authors request that media outlets quoting from this paper use the summary paragraph above to describe the study in keeping with AP Style.

Developing the Questionnaire

The first step in any study is to develop the survey questionnaire. This is a challenge in long-term ‘longitudinal’ studies for several reasons. First, respondents change over time; survey takers now have measurably shorter attention spans than they once did. Regardless, the goal is to measure the same metrics wave after wave, with an audience that is faithful to the original sampling methods.

Also, the common currency of language may have changed; the authors had to consider whether the ongoing questions retained their original meaning despite modern interpretations.

Central to this effort was the question of whether the content should remain the same. The first concern was to gather the type of data that would aid our readers: church and nonprofit leaders. What would be helpful for them to know? The data that helped church leaders in years past may now be less relevant, and new areas of inquiry might add value in the context of changing social mores and sensibilities. What would readers potentially do differently with insights on American sentiment in new areas? At a strategic level, what kind of information would assist churches and denominational leaders in shaping relevant ministries? What data would reflect current attitudes and beliefs in their mission areas? The authors considered what lines of inquiry might help faith communities collaborate effectively with partners, and even what kind of information would impact how a nonprofit designs and delivers services. They included an entire section on media preferences to identify interaction trends and help ministries reach people.

The authors approached this task by reviewing whether the main areas of inquiry were still relevant and whether they required any editing:

  • Beliefs About God
  • Beliefs About Jesus
  • Beliefs About Social and Moral Issues
  • Faith Involvement or Non-Involvement
  • Religious Preferences
  • Religious Affiliations
  • Life Concerns
  • Program and Ministry Preferences
  • Media Preferences and Practices

 

There are, of course, many directions to go within each of these areas. But again, our experience, along with input from several collaborators, helped us narrow the list significantly. Our clients were very helpful in telling us what they would like to know. We also spent time researching the kinds of things people were talking about when they talked about religion. Finally, we researched the kinds of issues that concern Americans today.

Questionnaire Topline

The full text of the survey is available in the next section. The following list provides a sense of the scope of the survey topics.

  • 29 religious/non-religious preferences both now and 10 years ago
  • Active membership for 23 denominations now and 10 years ago
  • Level of personal concern for 44 lifestyle issues using an attitude scale, up from 34 in 2017
  • Level of agreement or disagreement with 25 social and moral issues using an attitude scale, up from 24 in 2017
  • Level of agreement with 11 statements about the existence of a god using an attitude scale to measure level of belief
  • Level of agreement with 11 statements about the person Jesus using an attitude scale to measure level of belief
  • The level of significance (if any) of religious faith in one’s life now and 10 years ago using an attitude scale
  • If currently active in a religious congregation or other religious community; level of activity
  • For respondents not involved in a faith community, 25 possible reasons for non-participation, up from 21 in 2017
  • 25 possible reasons for considering dropping out of a congregation or religious community, among respondents who had thought of it
  • Rating 33 factors seekers use when evaluating participation in a faith community, up from 25 in 2017
  • 16 forced pairs that indicate which kinds of media and which specific media outlets people prefer
  • How often the respondent uses each of 15 different social media platforms, up from a dozen measured in 2017

Study Partners and Participation

Several MissionInsite denominational clients participated in designing and writing the questionnaire and the fielding methodology. We are deeply indebted to them for their commitment to this project and their trust in MissionInsite to deliver on the concept.

How does one obtain 15,000 completed surveys? The answer is by using a market research firm that specializes in fielding and interpreting survey research for nonprofit clients and has deep experience with faith communities. In 2020, ACST hired research firm Campbell Rinker to accomplish these tasks.

MissionInsite had formerly collaborated with experts in stratified sampling design to develop a framework for response within each of the 19 Mosaic clusters by Census region to meet exacting standards for validity and reliability. The final target sample was 15,000.

Further, fielding a study of this extent in the modern era requires an online panel provider with a wealth of potential respondents and a trove of targeting data. The panel selected for this project met both these criteria, with roughly two million possible respondents whose households were coded with Mosaic cluster data.

Stratified Random Sampling

As noted earlier, a core concept of our study methodology is that human beliefs, preferences and practices correlate to particular demographic profiles. For this reason, survey respondents were coded to the 19 Mosaic clusters. 

There are actually 71 Mosaic types, which aggregate into the 19 clusters used for this study. Mosaic clusters are essentially very complex demographic profiles, each comprising a number of sub-groups. A review of the clusters reveals the common demographic characteristics within each of them. For example, one of the nineteen is titled ‘Family Union.’ It comprises 5.9% of all US households: mostly middle-income, middle-aged families living in homes supported by solid blue-collar occupations.

A representative distribution across the 19 Mosaic clusters was the first requirement. The second was that respondents needed to roughly match each cluster’s representation within the four geographic Census regions. The Census Bureau divides the 50 states plus Washington, DC into nine divisions, which are aggregated into four regions, as shown in the map below.

[Map: US Census Bureau regions and divisions]
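To make the stratified quota design concrete, the Python sketch below shows one way a proportional allocation across region-by-cluster strata could be computed before fielding. The stratum names, household shares, and total sample size are hypothetical placeholders, not figures from the study.

    # Illustrative sketch of proportional quota allocation across strata.
    # Stratum names and household shares are hypothetical placeholders,
    # not figures from the American Beliefs Study.

    def allocate_quotas(household_shares, total_target):
        """Split a total sample target across strata in proportion to each
        stratum's share of US households, using largest-remainder rounding."""
        raw = {s: share * total_target for s, share in household_shares.items()}
        quotas = {s: int(q) for s, q in raw.items()}
        shortfall = total_target - sum(quotas.values())
        # Give any leftover interviews to the strata with the largest remainders.
        for s in sorted(raw, key=lambda k: raw[k] - quotas[k], reverse=True)[:shortfall]:
            quotas[s] += 1
        return quotas

    # Hypothetical (Census region, cluster) strata with placeholder shares summing to 1.0.
    shares = {
        ("West",  "Cluster A"): 0.30,
        ("West",  "Cluster B"): 0.20,
        ("South", "Cluster A"): 0.35,
        ("South", "Cluster B"): 0.15,
    }
    print(allocate_quotas(shares, total_target=1000))
    # {('West', 'Cluster A'): 300, ('West', 'Cluster B'): 200,
    #  ('South', 'Cluster A'): 350, ('South', 'Cluster B'): 150}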

Survey Fielding

Once the final 2021 survey questionnaire was approved, Campbell Rinker programmed the survey for online fielding among contracted research panelists, thoroughly tested the skip logic and branching, and fielded a pre-test.

Fielding required several months to complete in late 2020 and early 2021. Some days produced hundreds of responses, while others delivered only a handful of respondents who met very specific qualifying criteria.

As noted, we contracted for 15,000 completed surveys fitting the mandated demographic profile within two percentage points of the target population. Once the final survey data had been collected, the researchers discovered inequities between the data collected in 2017 and in 2021 that could only be corrected by mathematically weighting younger respondents to count for more and older respondents to count for less. With this weighting implemented, the dataset fell into much closer alignment with the desired statistical variance. In the process, the number of responses counted fell to 14,942.
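As a concrete illustration of this kind of age weighting, the Python sketch below computes a weight for each age bracket as the ratio of its target population share to its achieved sample share. The brackets, shares, and sample sizes shown are hypothetical placeholders, not the study's actual figures.

    # Illustrative sketch of post-stratification weighting by age bracket.
    # Age brackets and target shares are hypothetical placeholders,
    # not figures from the American Beliefs Study.

    from collections import Counter

    def age_weights(sample_brackets, target_shares):
        """Return a weight for each age bracket equal to
        (target population share) / (achieved sample share), so that
        under-represented brackets count for more and over-represented
        brackets count for less."""
        counts = Counter(sample_brackets)
        n = len(sample_brackets)
        return {b: target_shares[b] / (counts[b] / n) for b in target_shares}

    # Hypothetical sample in which younger respondents are under-represented.
    sample = ["18-34"] * 200 + ["35-54"] * 350 + ["55+"] * 450
    targets = {"18-34": 0.30, "35-54": 0.34, "55+": 0.36}

    weights = age_weights(sample, targets)
    # weights["18-34"] == 0.30 / 0.20 == 1.5  -> younger respondents count as more
    weighted_total = sum(weights[b] for b in sample)  # ~1,000: weights rebalance, not add, respondents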

The following table shows the percentage of 2021 households for each of the 19 Mosaic clusters and the number of respondents representing each cluster. 

The Delivered Survey Data Set

The table below provides the percentage of US households for each region by Mosaic cluster and shows the weighted percentage of respondents for each region by cluster. In most cases, the balance was achieved very closely.

[Table: percentage of US households and weighted percentage of respondents, by Mosaic cluster and Census region]

Error and Bias

It is in the nature of survey research that some level of error and/or bias exists. Efforts were made, from survey design through respondent selection criteria, to minimize both.

The study carries a maximum margin of error of ±1.97% at the 95% confidence level within any US Census region.

The study carries a maximum margin of error of ±5.0% at the 95% confidence level within any Mosaic cluster.
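For reference, margins of error for a proportion are conventionally computed from the standard formula below, using the worst-case assumption p = 0.5. The subsample sizes noted here are back-calculated from the reported margins under that formula; they are not figures published with the study.

    \[
    \mathrm{MOE} = z \sqrt{\frac{p(1-p)}{n}}, \qquad z = 1.96,\quad p = 0.5
    \]

Under this formula, a margin of ±1.97% corresponds to a subsample of roughly 2,475 respondents (1.96 × √(0.25 / 2475) ≈ 0.0197), and a margin of ±5.0% corresponds to roughly 384 respondents (1.96 × √(0.25 / 384) ≈ 0.050).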

Sampling Error: Ordinary sampling error occurs because one is not conducting a census of the entire population. Rather, one takes a sample and assumes, based upon statistical principles, that the sample fairly represents the full population. This is a non-biased type of error: it is random in relation to the true values.

Sampling Bias: The data-gathering technique does not have an equal probability of reaching each US household, and of those it does reach, some are more likely to respond than others. Since there is likely some correlation between being the sort of household that gets included in the sample and holding particular attitudes, there will in general be a gap between the attitudes captured in the survey and the true attitudes of all US households. This is a bias-type error because the selection process systematically tilts toward households that are likely to be included. MissionInsite tried to address sampling bias by insisting that the sample include representation of each of the 19 Mosaic clusters. We are aware, however, that non-English-speaking households may be under-represented in the total sample even though they are represented in the 19 Mosaic clusters. (See small-sample error below.)

Aggregation error: The aggregation of the surveyed households into 19 groups for each of four large geographical regions may wash out important differences. For example, ‘Golden Year Guardians’ in the West theoretically includes responses from Silicon Valley (tech sector, politically liberal) and rural Oregon (agricultural, politically conservative). This error will be unbiased on average but potentially biased in application to a local community. The survey was divided into the four Census Bureau regions to offset this error to the degree possible.

Small-sample error: If one or more of the targeted population strata (such as non-English-speaking Hispanics or newly immigrated Asians) did not generate a large enough sample, then any profile that relies heavily on that part of the data will carry a larger error than the others. This is unbiased error. We are aware of it, and while we attempted to take it into consideration in the respondent pool, we are less than confident that it has been adequately handled. When one conducts a national survey, cost is a large factor, and fully mitigating small-sample errors is financially unrealistic.