Appendix II: Survey Method

Jan 30, 2023

Climate Change in the American Mind: Politics & Policy, December 2022

The data in this report are based on a nationally representative survey of 1,085 American adults, aged 18 and older. Results are reported for the subset of 938 registered voters who participated in the survey. The survey was conducted December 2–12, 2022. All questionnaires were self-administered by respondents in a web-based environment. The median completion time for the survey was 19 minutes.

The sample was drawn from the Ipsos KnowledgePanel®, an online panel of members drawn using probability sampling methods. Prospective members are recruited using a combination of random digit dial and address-based sampling techniques that cover virtually all (non-institutional) residential phone numbers and addresses in the United States. Those who agree to join the panel but do not have Internet access are loaned computers and given Internet access so they can participate.

The sample therefore includes a representative cross-section of American adults—irrespective of whether they have Internet access, use only a cell phone, etc. Key demographic variables were weighted, post-survey, to match US Census Bureau norms.

From November 2008 to December 2018, no KnowledgePanel® member participated in more than one Climate Change in the American Mind (CCAM) survey. Beginning with the April 2019 survey, panel members who have participated in CCAM surveys in the past, excluding the most recent two surveys, may be randomly selected for participation. In the current survey, 308 respondents, 268 of whom are registered voters included in this report, participated in a previous CCAM survey.

The survey instrument was designed by Anthony Leiserowitz, Seth Rosenthal, Jennifer Carman, Marija Verner, Sanguk Lee, Matthew Goldberg, and Jennifer Marlon of Yale University, and Edward Maibach, John Kotcher, and Teresa Myers of George Mason University. The categories for the content analysis of the open-ended responses about the Inflation Reduction Act (IRA) were developed by John Kotcher of George Mason University, and open-ended responses were coded by Patrick Ansah and Nicholas Badullovich of George Mason University. The figures and tables were designed by Sanguk Lee, Marija Verner, and Liz Neyens of Yale University.

Margins of error

All samples are subject to some degree of sampling error—that is, statistical results obtained from a sample can be expected to differ somewhat from results that would be obtained if every member of the target population was interviewed. Average margins of error, at the 95% confidence level, are as follows:

  • All Registered Voters (n = 938): Plus or minus 3 percentage points.
  • Democrats (total; n = 435): Plus or minus 5 percentage points.
  • Liberal Democrats (n = 240): Plus or minus 6 percentage points.
  • Moderate/conservative Democrats (n = 193): Plus or minus 7 percentage points.
  • Independents (n = 78): Plus or minus 11 percentage points.
  • Republicans (total; n = 385): Plus or minus 5 percentage points.
  • Liberal/moderate Republicans (n = 114): Plus or minus 9 percentage points.
  • Conservative Republicans (n = 270): Plus or minus 6 percentage points.
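The margins above are consistent with the standard conservative approximation for a proportion near 50%. The sketch below (an illustration, not the report's own computation; it ignores any design effect from weighting) reproduces each figure from the subgroup size alone:

```python
import math

def moe_95(n: int) -> float:
    """Approximate 95% margin of error, in percentage points, for a
    proportion near 50%: MOE = 1.96 * sqrt(0.25 / n) * 100."""
    return 1.96 * math.sqrt(0.25 / n) * 100

# Subgroup sizes from the survey
for label, n in [("All registered voters", 938),
                 ("Democrats (total)", 435),
                 ("Independents", 78)]:
    print(f"{label} (n = {n}): +/- {moe_95(n):.0f} percentage points")
```

Rounded to whole points, this yields 3, 5, and 11 for the three groups shown, matching the list above.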

Rounding error and tabulation

In data tables, bases specified are unweighted, while percentages are weighted to match national population parameters.

For tabulation purposes, percentage points are rounded to the nearest whole number. As a result, percentages in a given chart may total slightly higher or lower than 100%. Summed response categories (e.g., “strongly support” + “somewhat support”) are rounded after sums are calculated. For example, in some cases, the sum of 25% + 25% might be reported as 51% (e.g., 25.3% + 25.3% = 50.6%, which, after rounding, would be reported as 25% + 25% = 51%).
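The sum-then-round convention described above can be verified with a line of arithmetic; this sketch simply contrasts it with rounding the addends first:

```python
# Two response categories, each 25.3% before rounding
a, b = 25.3, 25.3

rounded_addends = (round(a), round(b))   # each reported as 25%
summed_then_rounded = round(a + b)       # 50.6% reported as 51%

print(f"{rounded_addends[0]}% + {rounded_addends[1]}% = {summed_then_rounded}%")
```

This is why a chart can legitimately show 25% + 25% = 51%: the sum is computed on the unrounded values and rounded only at the end.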

Instructions for coding Section 4.2: Open-ended responses about the Inflation Reduction Act (IRA)

A doctoral student and a postdoctoral fellow coded the open-ended responses using instructions and categories developed by one of the Principal Investigators. Percent agreement ranged from 93% to 99% for the categories coded. Differences between the two coders were resolved via discussion between them and the Principal Investigator. The “Haven’t heard of IRA” classification was determined by a “nothing at all” response to the preceding question, “How much, if anything, have you heard about the Inflation Reduction Act of 2022 (also known as the IRA), a bill that was passed by the U.S. Congress and signed by President Biden?” Participants who provided that response were not shown this open-ended question. Definitions of the other categories used by the coders are listed below.

For the following variables, we code each survey response for the presence or absence (0 = absent; 1 = present) of each of the categories listed below. The order in which categories are mentioned in a survey response does not matter for coding purposes; only the presence or absence of a particular category does.

A survey response can be coded positive for multiple content variables. For example, the response, “Green energy scam” would be coded positive for both climate/clean energy (for the reference to green energy) and skepticism (for referring to it as a scam). Definitions for each content variable are provided below.

  • Climate Change/Clean Energy – This category includes any reference to climate change, global warming, clean/renewable/green energy, the environment or sustainability. This includes any references to solar panels, wind, electric vehicles, energy efficient appliances, and reducing or transitioning away from fossil fuels (i.e. coal, oil, natural gas). Examples include: “Trying to get a handle on the climate crisis” “Climate resiliency and sustainability” “Green energy funding” “Incentives for individuals to purchase energy efficient cars, appliances, solar panels, etc.”
  • Infrastructure – This category includes any reference to infrastructure or repairing roads and bridges. Examples include: “I think it was related to infrastructure” “Infrastructure. building and repairing roads and bridges” “Infrastructure support”
  • Economic Harm – This category includes any reference to claims that the IRA will HARM the economy broadly, or that the respondent will personally experience financial HARM from the law. This includes claims that the law will REDUCE economic fairness and equality, or that the law will benefit high-income earners more than low-income earners. Examples include: “It’s not helping the poorest among us.” “More debt” “Cost me more and doesn’t do anything” “loose all money” “Some of the wrong people will get rebates…..”
  • Economic Benefits – This category includes any reference to claims that the IRA will BENEFIT the economy broadly, or that the respondent will personally experience financial BENEFITS from the law. This includes claims that the law will INCREASE economic fairness and equality. Examples include: “Cheaper groceries and gas.” “lower costs” “making big companies and the wealthy pay their share of taxes.” “Alleviate the high costs of goods for those residing in the United States.”
  • Economic Neutral – This category includes any NON-SPECIFIC reference to economic concepts without any explicit or implicit claims about how the law will affect economic outcomes. Examples include: “Money” “Economy” “Stock Market” “jobs” “interest rates” “tax credits”
  • Drug prices/healthcare costs – This category includes any reference to drug prices or healthcare costs. Examples include: “drug costs” “Medicare can negotiate prices on some drugs” “Medicine prices” “Drug cost reduction”
  • Skepticism – This category includes any general expression of negative sentiment or opposition to the IRA or reference to the claim that the IRA is wasteful, ineffective, deceptive in its name or intentions WITHOUT any explicit reference to economic harm. This also includes expression of negative affect toward major proponents of the law, including Democrats or Joe Biden. Examples include: “It’s a lie!” “Biden is an idiot!” “Government waste” “Lots of pork” “it does not fight inflation!” “It creates inflation.. another democrat scam” “That is a crock of ****”
  • Don’t Know/Nothing – This category includes any response that expresses a lack of sufficient knowledge to provide an answer. Examples include: “Don’t know enough about it.” “Nothing” “Nothing comes to mind.” “None”
  • Other – This category includes any responses that are intelligible, but that don’t fit at least one of the other categories.
  • Unintelligible – This category includes any response that includes random strings of characters, OR a response that does not provide sufficient information to categorize it into one of the above categories. This category should only be applied if the ENTIRE response is unintelligible.
  • Skipped – This category includes any response that is left blank or skipped over with a response of “n/a” or any other variation of “not applicable”.
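The coding itself was done manually by the two trained coders, but the presence/absence scheme can be sketched programmatically. In this illustration the keyword lists are hypothetical stand-ins for the actual codebook, chosen only to show how one response can be coded positive for multiple categories:

```python
# Hypothetical keyword lists -- NOT the actual codebook, which relied
# on human judgment rather than string matching.
KEYWORDS = {
    "climate_clean_energy": ["green energy", "climate", "solar"],
    "skepticism": ["scam", "waste", "lie"],
}

def code_response(text: str) -> dict:
    """Return a 0/1 presence indicator for each content category."""
    t = text.lower()
    return {category: int(any(kw in t for kw in kws))
            for category, kws in KEYWORDS.items()}

print(code_response("Green energy scam"))
# {'climate_clean_energy': 1, 'skepticism': 1}
```

As in the “Green energy scam” example from the text, the response is coded positive for both climate/clean energy and skepticism.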

Leiserowitz, A., Maibach, E., Rosenthal, S., Kotcher, J., Carman, J., Lee, S., Verner, M., Ballew, M., Ansah, P., Badullovich, N., Myers, T., Goldberg, M., & Marlon, J. (2023). Climate Change in the American Mind: Politics & Policy, December 2022. Yale University and George Mason University. New Haven, CT: Yale Program on Climate Change Communication.

Funding Source

The research was funded by the 11th Hour Project, the Energy Foundation, the MacArthur Foundation, the Heising-Simons Foundation, and the Grantham Foundation.