As more states, including Virginia and New York, continue to legalize marijuana, an overwhelming share of U.S. adults (91%) say either that marijuana should be legal for medical and recreational use (60%) or that it should be legal for medical use only (31%). Fewer than one-in-ten (8%) say marijuana should not be legal for use by adults.
The new survey, conducted by Pew Research Center from April 5-11, 2021, comes as congressional Democrats consider legislation that would decriminalize marijuana nationally. Views of marijuana legalization have changed very little since 2019.
At a time when the labor movement in the United States has been facing formidable challenges, majorities of Americans see the long-term decline in the share of workers represented by unions as a bad thing for both the country and working people in the U.S., according to a Pew Research Center survey conducted April 5-11.
In the survey, 56% say the large reduction over the past several decades in the percentage of workers who are represented by unions has been “somewhat” or “very” bad for the country, while 60% say this has been bad for working people. The survey was largely fielded before the vote by workers in an Amazon warehouse in Alabama against forming a union was tabulated and reported.
The European Union and the United States have both been deeply affected by the coronavirus outbreak. The two contribute equally to the world economy, each accounting for about 16% of global output. A key difference is that the EU is home to about 100 million more people than the U.S. But Americans have lost significantly more jobs than their EU counterparts during the COVID-19 downturn.
Roughly 9.6 million U.S. workers (ages 16 to 64) lost their jobs, based on averages of the first three quarters of 2019 and the first three quarters of 2020. In contrast, only about 2.6 million workers in the EU (ages 15 to 64) lost their jobs over this period, despite the EU's larger population. Young adults in both regions were more likely to lose jobs during the pandemic, according to a new Pew Research Center analysis of U.S. government and Eurostat data.
The STEM workforce (science, technology, engineering and math) has grown rapidly in recent decades. A Bureau of Labor Statistics analysis, updated since the coronavirus outbreak began, projects strong growth for many STEM occupations in the United States, particularly epidemiologists, medical scientists, biochemists and biophysicists, and biological technicians, among others.
But Black and Hispanic workers remain underrepresented in STEM jobs compared with their share of the U.S. workforce, according to a new Pew Research Center analysis of U.S. government data. The representation of women varies significantly across the job clusters that make up the STEM workforce: In health-related jobs, women are overrepresented compared with their 47% share of the overall workforce, while they remain starkly underrepresented in computing and engineering jobs.
The coronavirus outbreak that began in February 2020 sent shock waves through the U.S. labor market, pushing the unemployment rate to near record highs and causing millions to leave the workforce. A year later, a full recovery for the labor market appears distant. Employment in February 2021 was 8.5 million jobs below its February 2020 level, a deficit that could take more than three years to recoup if job creation proceeds at roughly the same monthly rate as it did from 2018 to 2019. But a faster recovery is possible if the job gains seen in March 2021 are sustained in the coming months.
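The "more than three years" timeline can be checked with back-of-the-envelope arithmetic. The sketch below assumes a hypothetical monthly job-gain pace of about 200,000 (roughly in line with the pre-pandemic years); the article itself does not state the exact monthly rate.

```python
# Rough estimate of how long it could take to recoup the pandemic
# jobs deficit. The monthly gain is an assumption (approximating the
# 2018-19 pace), not a figure from the article.
jobs_deficit = 8_500_000   # Feb 2021 employment vs. Feb 2020
monthly_gain = 200_000     # assumed monthly pace of job creation

months_to_recover = jobs_deficit / monthly_gain
years_to_recover = months_to_recover / 12
print(f"about {months_to_recover:.0f} months (~{years_to_recover:.1f} years)")
```

Under this assumed pace, the arithmetic lands a bit over three and a half years, consistent with the article's characterization.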
Border Patrol apprehensions of migrants at the U.S.-Mexico border are on the rise again. Although the majority of people attempting to enter the United States illegally are stopped, this trend could foreshadow an increase in the U.S. unauthorized immigrant population after years of relative stability. Yet the activity at the southwestern U.S. border is only one part of the overall story of unauthorized immigration, as a growing share of this population came from regions other than Mexico or Central America and entered the U.S. legally but overstayed their visas.
The unauthorized immigrant population is always changing and churning. The total number in the country can remain stable or decline even as new immigrants enter illegally or overstay a visa, because some voluntarily leave the country, are deported, die or become lawful residents. In short, the dynamic nature and pace of migration patterns have resulted in an unauthorized immigrant population whose size and composition have ebbed and flowed significantly over the past 30 years.
Here are key facts about this population and its dynamics.
Asia appears to be top of mind for the Biden administration when it comes to foreign policy. Japan and South Korea were the first two international destinations of Cabinet officials after Joe Biden’s inauguration as U.S. president. Looking to coordinate in the face of China’s efforts to assert itself in the region, the administration initiated a first-of-its-kind “Quad” summit with the leaders of Australia, India and Japan. The United States also held a high-level, in-person meeting with key Chinese officials in mid-March.
As Americans eye the Asia-Pacific region, they see a mix of friends and some foes, according to a new Pew Research Center survey conducted Feb. 1-7, 2021. Asked to rate their feelings toward four countries in the region on a “feeling thermometer,” where a 0 indicates the coldest and most negative rating possible, 50 indicates a neutral rating, and 100 indicates the warmest and most positive rating possible, Americans generally have warm feelings toward Japan. They give the country an average rating of 59 – largely unchanged since 2018, when the country had an average rating of 61. India receives a more neutral rating of 48 – also largely unchanged from its average rating of 51 in 2018.
Asian Americans recorded the fastest population growth rate among all racial and ethnic groups in the United States between 2000 and 2019. The Asian population in the U.S. grew 81% during that span, from roughly 10.5 million to a record 18.9 million, according to a Pew Research Center analysis of U.S. Census Bureau population estimates, the last before 2020 census figures are released. Furthermore, by 2060, the number of U.S. Asians is projected to rise to 35.8 million, more than triple their 2000 population.
Hispanics saw the second-fastest population growth between 2000 and 2019 (70%), followed by Native Hawaiians and Pacific Islanders (NHPI) at 61%. The nation’s Black population also grew during this period, albeit at a slower rate of 20%. There was virtually no change in the White population.
The 2016 and 2020 election cycles weren’t the best of times for public opinion polls. In 2016, many preelection surveys underestimated support for Donald Trump in key states. And last year, most polls overstated Joe Biden’s lead over Trump in the national vote, along with several critical states. In response, many polling organizations, including the American Association for Public Opinion Research (AAPOR), the survey research field’s major professional group, have taken close looks at how election surveys are designed, administered and analyzed.
Pew Research Center is no exception. Today, the Center releases the second of two reports on what the 2020 election means for different aspects of its survey methodology. The first, released in March, examined how the sorts of errors that led most polls to understate Trump’s support might or might not affect non-election polls – especially the issue-focused surveys that are the Center’s bread and butter. Today’s report looks at what we’ve learned about the American Trends Panel (ATP) – the Center’s online survey panel of more than 10,000 randomly selected U.S. adults – how well it represents the entire U.S. population, and how it could be improved.
We spoke with the lead authors of the two reports, Director of Survey Research Courtney Kennedy and Senior Survey Advisor Scott Keeter, about their findings. Their responses have been edited for clarity and concision.
Scott, your report from last month concluded that errors in election polls – those that focus on who’s ahead in the race and who’s behind – don’t necessarily lead to similar errors in polls that try to measure public opinion on issues of the day. Courtney, your report today says the Center is taking steps to address underrepresentation of Republicans in the American Trends Panel. At first blush, these two reports don’t seem to track exactly. How do you reconcile them?
Kennedy: Both reports explore the implications of survey samples underrepresenting Republicans. But beyond that, they posed two very different questions. Scott’s piece essentially asked: “Can flaws like those seen in some recent preelection polls lead to wrong conclusions about public opinion on issues?” The answer to that was “no.” This new report, by contrast, focuses on the role and responsibility of pollsters, both to diagnose whether underrepresentation is occurring and to identify ways to address it.
Keeter: Even if a particular problem – in this case, underrepresenting Republicans – doesn’t seriously threaten the validity of our measures of public opinion, we’re obligated to do what we can to fix the problem. Often we can correct imbalances in the composition of a sample through the statistical process of weighting, but it’s much better to solve the problem at its source – especially if we have reason to believe the problem might be getting worse over time. In any case, pollsters should always strive to have their surveys accurately represent Republican, Democratic and other viewpoints.
The Center doesn’t do “horserace”-type preelection polling and hasn’t for several years. Why should we be concerned about errors in election polls in 2016 and 2020?
Kennedy: It’s true that we don’t predict election outcomes, but we do ask people who they would vote for, and we ask about many topics, like immigration and climate change, that are correlated with presidential vote. So if we see an industry-wide problem in measuring vote preference, this signals the possibility of challenges in measuring related things that we do study. For instance, if recent election-polling problems stem from flawed likely-voter models, then non-election polls may be fine. But if the problem is fewer Republicans (or certain types of Republicans) participating in surveys, that could have implications for us and the field more broadly.
Before going any deeper, let’s define our terms. How do issue polling and election polling differ, both conceptually and practically?
Keeter: They’re certainly related, in that both rely on the same research methods to select samples and interview respondents. And issues play a role in elections, of course, so we often measure opinions about issues in polls that also measure candidate preferences and voting intentions. But they differ in two important ways.
First, election polls typically try to estimate which candidate the respondents support and whether they’ll actually turn out to vote. Issue polls usually don’t need to identify who will vote.
Second, election polls are judged by their accuracy in depicting the margin between the candidates. By contrast, issue polls typically are trying to characterize the shape and direction of public opinion, and generally that can’t be summed up in a single number or margin like an election poll result. Often, we want not just an expression of opinion – for example, whether a person believes the earth is warming because of human activity – but also how important they believe the issue is, what factual knowledge they have about the issue, or how the problem might be mitigated.
So given that, how can we assess the accuracy of issue polls, when there’s no ultimate outcome to measure them against like there is for election polls? Put another way, how can the average person tell whether an issue poll’s findings are accurate or not?
Kennedy: We know from various benchmarking studies, where polls are evaluated against known figures like the U.S. smoking rate or the health care coverage rate, that rigorous polls still provide useful and accurate data. We’ve conducted several studies of that nature over the years. Our polling estimates tend to come within a few percentage points of most benchmarks we can measure. And if that’s the case, we can have confidence that a poll’s other findings are valid too.
The analysis in the March report used simulated survey results based on different assumptions about partisan divides among voters and nonvoters. What can we learn from such a simulation?
Keeter: With the simulation, we were trying to find out how different our measures of opinion on issues would be if the survey sample had more Republicans and Trump voters. So, statistically, we added more Republicans and Trump voters to our samples and then looked at how our measures changed. What we found was that, in most cases, opinions on issues weren’t much affected.
This was true for two reasons. First, people don’t fall perfectly into line behind a candidate or party when expressing opinions on issues. What that means is that adding more supporters of a candidate, or more members of that candidate’s party, won’t move the poll’s issue measures by the same amount.
Second, even though we may think the election poll errors in 2020 were large, correcting them actually requires adding relatively few Trump voters or Republicans. And that small adjustment makes even less difference in the issue questions.
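The logic of the March report's simulation can be illustrated with a toy example. The sketch below is purely hypothetical (the sample sizes, partisan shares and opinion rates are invented, not the Center's actual data): it reweights a sample so one party's share rises by a few points and shows that, because party predicts issue opinion only imperfectly, the issue estimate moves by much less than the partisan correction.

```python
import random

random.seed(0)

# Toy respondents: party predicts issue opinion imperfectly.
# The 75%/30% support rates are hypothetical illustrations.
def make_respondent(party):
    p_support = 0.75 if party == "R" else 0.30
    return {"party": party, "supports": random.random() < p_support}

# Hypothetical sample in which Republicans are underrepresented (45%)
sample = ([make_respondent("D") for _ in range(5500)] +
          [make_respondent("R") for _ in range(4500)])

def weighted_support(sample, target_rep_share):
    """Weight the sample to a target Republican share, then
    return the weighted share holding the issue opinion."""
    rep = [r for r in sample if r["party"] == "R"]
    dem = [r for r in sample if r["party"] == "D"]
    w_rep = target_rep_share / (len(rep) / len(sample))
    w_dem = (1 - target_rep_share) / (len(dem) / len(sample))
    total = w_rep * len(rep) + w_dem * len(dem)
    support = (w_rep * sum(r["supports"] for r in rep) +
               w_dem * sum(r["supports"] for r in dem))
    return support / total

# Raise the Republican share by 3 points and compare issue estimates
low = weighted_support(sample, 0.45)
high = weighted_support(sample, 0.48)
print(f"issue estimate shifts by {abs(high - low) * 100:.2f} points")
```

Because the two parties' opinion rates differ by well under 100 points, a 3-point partisan correction shifts the issue estimate by only a fraction of 3 points, which mirrors the report's finding that issue measures are not much affected.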
Courtney, how did you conclude that Republicans and Trump voters are underrepresented in the American Trends Panel? And do you have any ideas why?
Kennedy: Basically, we ran several different tests and “triangulated” the results. Each individual test showed only a small amount of evidence for underrepresentation, but taken together, we found the evidence quite compelling.
Let’s start at the beginning of the survey process – recruiting the sample. Since 2018, the ATP has used address-based recruitment. Invitations are sent to a random, address-based sample of households selected from the U.S. Postal Service’s database, which means nearly every U.S. adult has a chance of being selected.
What we’ve found is that, in 2020, people living in the country’s most – and least – pro-Trump areas were somewhat less likely than others to join our survey panel. We also noticed a trend in our recruitments: Adults joining our panel in recent years are less Republican than those who joined in earlier years. There are several possible explanations for that, but as we say in the report, the most plausible explanation is increasing resistance among Trump supporters to taking surveys.
We also looked at who has stayed active in our survey panel since 2016 and who has dropped out. We found that a higher share of 2016 Trump voters stopped taking our surveys during the subsequent four years, in comparison with other voters. It’s worth noting, though, that the demographic makeup of 2016 Trump voters basically explains this difference: When we account for voters’ age, race and education level, presidential vote preference doesn’t help predict whether they later decided to leave the panel.
We don’t have any hard data that speaks to why this is happening. That said, it’s clear that Republicans have relatively low levels of trust in various institutions. The polling field is intimately connected with some of those institutions, particularly the news media, which sponsors a good deal of polling. It’s also the case that President Trump had some strong, often critical views of polls, and sometimes messages like that resonate with supporters.
If Republicans are underrepresented, why can’t you correct for that by simply weighting (or reweighting) the raw data?
Kennedy: The short answer is that we do correct for underrepresentation with weighting. ATP surveys have always been adjusted so that Republicans and Democrats are represented in proportion to their share of the population.
The longer answer is that while weighting can cover a lot of imperfections, it’s not a perfect cure-all. For one thing, there isn’t timely benchmark data for what share of Americans are Republicans or Democrats. The targets that we use to weight are certainly close, but they may not be exactly right. Also, when a pollster relies on weighting to fix something, that tends to make the poll estimates less precise, meaning a wider margin of error. A third limitation with weighting is that it relies on assumptions – the most important one being that the opinions of people who don’t take the survey are just like those who do take the survey, within the groupings that the poll uses in weighting (things like age, education and gender).
We should be clear: Weighting is a best practice in polling. We don’t put any stock in unweighted public opinion polls. But relying on weighting alone to fix any and all skews that a sample might have can be risky. If a pollster’s weighting doesn’t capture all the relevant ways that the sample differs from the general public, that’s when estimates can be off.
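As a concrete illustration of the kind of adjustment being discussed, here is a minimal post-stratification weighting sketch. The categories, counts and population targets are hypothetical (the ATP's actual weighting adjusts on many more variables than party), and the last step shows the precision cost Kennedy mentions, via the Kish effective sample size.

```python
from collections import Counter

# Hypothetical sample in which Republicans are underrepresented
sample = ["D"] * 560 + ["R"] * 380 + ["I"] * 60

# Hypothetical population benchmarks for each group
targets = {"D": 0.48, "R": 0.45, "I": 0.07}

counts = Counter(sample)
n = len(sample)

# Post-stratification weight = target share / sample share
weights = {g: targets[g] / (counts[g] / n) for g in counts}

# After weighting, each group's weighted share matches its target
weighted_shares = {g: weights[g] * counts[g] / n for g in counts}
print(weighted_shares)

# Weighting costs precision: the Kish effective sample size is
# smaller than n, implying a wider margin of error.
w_list = [weights[g] for g in sample]
n_eff = sum(w_list) ** 2 / sum(w ** 2 for w in w_list)
print(f"n = {n}, effective n = {n_eff:.0f}")
```

The sketch also makes Kennedy's third limitation visible: the adjustment assumes that the nonrespondents within each weighting cell hold the same opinions as the respondents in that cell, which no amount of reweighting can verify.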
Specifically, what changes are you making to the American Trends Panel? What are you hoping to accomplish?
Kennedy: We’ve identified five action steps in total. Most are direct changes to our survey panel, and one is an experiment that may lead to direct changes. The direct changes are retiring several thousand demographically overrepresented panelists; weighting to new targets for the partisan balance of Americans; developing new recruitment materials; and empaneling adults who initially prefer taking surveys by mail rather than online. The experiment involves testing an offline response mode – specifically, an option for people to call into a toll-free number and take a recorded survey (what we in the field know as “inbound interactive voice response”).
These steps are designed to increase the representation in our surveys of people who are rather hesitant to take surveys online. Our goal is to make joining and participating in our survey panel just as appealing to rural conservatives as it is to urban progressives – or as close to that ideal as possible.
Do these changes mean that previous survey results were inaccurate or that they should no longer be relied on?
Kennedy: No. Polling practices have always evolved over time in response to changes in society and technology, and they’ll continue to evolve in the future. But that doesn’t invalidate polling data from previous years. And as Scott’s piece showed, the magnitude of errors that we’re dealing with here is small – generally on the order of 1 percentage point or so for major findings.
The American Association for Public Opinion Research is working on a broader examination of the performance of preelection polls in 2020. What more will that report tell us, and could its findings change anything?
Keeter: The AAPOR Task Force, on which I am serving, will provide a detailed description and analysis of the performance of the polls, so that any future discussion of the issue can have a solid base of evidence. But like the 2016 Task Force, this one is also attempting to understand why polls understated Trump’s support. Was the pandemic a factor? Did pollsters have trouble correctly estimating who would vote? Or was it simply the case that polls had an easier time locating and interviewing Biden supporters and Democrats? By working through the various possibilities systematically, the task force is taking something of a Sherlock Holmes approach. As the great detective said, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
Democrats are more likely than Republicans to say they use social media overall (77% and 68%, respectively), according to a new Pew Research Center survey of U.S. adults conducted Jan. 25-Feb. 8, 2021. There are also notable differences in the shares of Democrats and Republicans who use particular platforms.
A majority of Americans on both sides of the political aisle say they use Facebook and YouTube. Roughly seven-in-ten Democrats (72%) and Republicans (69%) – including independents who lean toward each party – say they ever use Facebook. And 85% of Democrats report using YouTube, compared with a slightly smaller share of Republicans (79%).
Still, for several other sites and apps measured in this survey, there are large gaps in use by political party. For example, about half of Democrats (49%) report using Instagram, 19 percentage points more than the share of Republicans who say the same (30%). Democrats are also roughly 10 percentage points or more likelier than Republicans to say they ever use Twitter, WhatsApp, LinkedIn or Reddit.
About Pew Research Center

Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts.