The Dire Meaning of Gallup’s Announcement

The company’s presidential-approval poll is the latest casualty in a divided, suspicious nation.
Illustration by Akshita Chandra / The Atlantic
Last week, the polling firm Gallup announced that it would no longer survey presidential-approval ratings. This news stirred suspicions. President Trump’s numbers are declining badly, much worse than Joe Biden’s at the equivalent point in his presidency. Gallup’s most recent presidential-approval poll, in December, had Trump at 36 percent—well below the RealClearPolitics poll average of 42 percent. Trump is known for taking punitive action. He sued The Des Moines Register and its pollster, Ann Selzer, for an ego-bruising 2024 survey that suggested he might lose Iowa to Kamala Harris.

Other companies targeted by the president appear to have folded. When sued by Trump in cases that many legal experts expected them to win easily, CBS and ABC paid huge settlements to Trump’s presidential-library fund. On Monday, Stephen Colbert told his Late Show viewers that his bosses at CBS had scrapped his taped interview with James Talarico, a Democratic Senate primary candidate in Texas, owing to threats from Brendan Carr, the chair of the Federal Communications Commission. Had Gallup taken the next logical step toward appeasing Trump’s vindictive ego?

Assuming the worst is often prudent, but Gallup’s own explanation—citing changes in the company’s business strategy—makes a sad commercial sense. Quality polling companies such as Gallup inhabit a world of rising costs, declining rewards, and multiplying competition. Polling worked because people once accepted a call on the phone the same way they accepted jury duty: as one of the small obligations of citizenship that helped democracy work better. Large numbers of citizens have come to perceive the institutions of democracy as unfriendly to them. The dispassionate stranger on the phone inquiring how a citizen intended to vote—and why—is one of those institutions.

Iowa-born George H. Gallup taught Americans the power of modern polling during the 1936 presidential campaign. Until then, election prediction had been dominated by a magazine called The Literary Digest. In 1916 and every four years thereafter, The Literary Digest mailed postcards to a large sample of Americans to ask them how they intended to vote. These surveys successfully predicted the squeaker election of 1916; then the Republican landslides of 1920, 1924, and 1928; and Franklin D. Roosevelt’s victory in 1932. In 1936, The Literary Digest reached out to a record 10 million Americans, about a quarter of whom replied. Their answers predicted a crushing rejection of the Roosevelt administration and the triumphant election of Republican Alf Landon and his running mate, Frank Knox.


Gallup, who turned 35 in 1936, had launched a research company the year before. To promote his work, he undertook his own election survey in 1936. Gallup reached only 50,000 people, a pitiful fraction of The Literary Digest’s awe-inspiring mailbag. He predicted—correctly—a solid Roosevelt win.

Why did Gallup succeed where The Literary Digest failed? The latter got its list of addresses from places such as state automobile-registration lists and local telephone exchanges. In the Depression, people who had their own phone number—let alone a car!—and also felt so passionately about the race as to take the time to answer a survey were disproportionately Republican. Roosevelt’s strength lay in the much larger number of Americans who went without such things—and who felt no seething anger they wanted to share by mail. The Literary Digest sample was huge but unrepresentative. Gallup’s sample was smaller but more representative. His fame was made; a new industry was born.

For decades, Gallup’s company and its imitators improved their techniques. Then things began going wrong. As the frequency of polling intensified and caller ID caught on, Americans ceased picking up the phone. In the late 1990s, 28 percent of those contacted by Gallup agreed to participate in a poll. By 2017, only 7 percent agreed. At present, the company’s response rate is down to 5 percent, a Gallup spokesperson confirmed to me by email. That figure is typical for the industry. In other words, in the late 1990s, Gallup had to place about 3,500 calls to build a 1,000-person sample. Today it must place 20,000 such calls. Obviously that costs much more.
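The cost implication of those falling response rates can be checked with back-of-the-envelope arithmetic. The rates (28 percent in the late 1990s, 5 percent today) come from the figures above; treating every call as equally likely to yield an interview is an illustrative simplification.

```python
def calls_needed(target_sample: int, response_rate: float) -> int:
    """Calls a pollster must place to complete `target_sample` interviews,
    assuming a uniform per-call response rate."""
    return round(target_sample / response_rate)

# Late 1990s: 28 percent of those contacted agreed to participate.
print(calls_needed(1000, 0.28))  # -> 3571, i.e. "about 3,500 calls"

# Today: a 5 percent response rate.
print(calls_needed(1000, 0.05))  # -> 20000
```

At a fixed cost per dial, that is roughly a sixfold increase in fieldwork just to hold sample size constant.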

The difficulty of building sample groups leads to a second and more insidious problem. As fewer Americans answer surveys, are those who do inherently nonrepresentative? Are they more cooperative or more opinionated or, in some other way, simply different from the 95 percent who decline to participate? There are ways to correct this problem. Courtney Kennedy, the vice president of methods and innovation at Pew Research Center, told me that because survey respondents are more likely to claim they volunteer for civic and charitable causes than Americans do generally, the organization overweights the answers of those who say that they don’t volunteer, to make the sample more representative of the country. Developing work-arounds like this, too, costs money.
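The correction Kennedy describes is a form of post-stratification weighting: if people who volunteer are overrepresented among respondents, downweight them and upweight non-volunteers until the weighted sample matches a known population benchmark. A minimal sketch of that idea, with invented numbers (Pew’s actual procedure, which rakes across many variables at once, is far more elaborate):

```python
def volunteer_weights(sample_share: float, population_share: float):
    """Return (weight for volunteers, weight for non-volunteers) so the
    weighted sample matches the population share of volunteers."""
    w_vol = population_share / sample_share
    w_non = (1 - population_share) / (1 - sample_share)
    return w_vol, w_non

# Hypothetical figures: 50% of respondents claim to volunteer,
# but only 30% of American adults actually do.
w_vol, w_non = volunteer_weights(0.50, 0.30)
print(round(w_vol, 2), round(w_non, 2))  # -> 0.6 1.4
```

Each volunteer’s answers then count for 0.6 of a respondent and each non-volunteer’s for 1.4, pulling the weighted sample toward the country as a whole.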

For many pollsters, costs are covered by media partners. ABC News often hires Ipsos; The New York Times uses the Siena College Research Institute. Much of the country’s best polling is done by nonprofit foundations, such as Pew, or by affiliates of educational institutions, such as Quinnipiac University and the National Opinion Research Center at the University of Chicago. Gallup, however, is a profit-seeking company that earns its living by doing commissioned research for governments and corporations. In 2006, CNN and Gallup ended their long partnership. Since then, Gallup has operated its presidential-approval research for more or less the same reason that department stores install shop-window displays at Christmas: in the hope that this public amenity might bring more traffic through the door.

However, that hope was often misplaced. Over time, new technologies made it easy for anyone to create an attention-grabbing poll that met minimal standards of respectability. Remember the 1994 survey that claimed that more young Americans believed in UFOs than believed they would collect Social Security? The poll’s methods were misleading but earned countless headlines nonetheless. In the internet age, the attention economy elevated returns on investment for bad polls, and the returns on good polls correspondingly diminished.

Some companies have responded by developing new methods. Morning Consult, which often partners with Politico, collects very large samples—sometimes in the tens of thousands—by recruiting respondents online, which costs far less than phone interviewing. The hope is that the sheer size of the sample offsets concerns about the demographics of respondents.

All of these methods—traditional and high-tech—have been called into question in the past decade by the worst series of shocks to the U.S. polling industry since Gallup predicted that the Republican candidate Thomas Dewey would defeat incumbent President Truman in the election of 1948.


In 2016, election-eve polls showed Hillary Clinton beating Trump by an average of 3.2 percentage points. Nate Silver’s FiveThirtyEight, which aggregated high-quality polls, projected that Clinton would win 302 electoral votes; she was favored in Florida, Michigan, North Carolina, Pennsylvania, and Wisconsin. Instead Clinton lost those five states and beat Trump in the popular vote by only 2.1 points. She won just 232 electoral votes.

What went wrong with the 2016 polls? First, the decline of local newspapers and television stations shrank the resources available for state polling. Many of the polls that purported to measure opinion in swing states relied on smaller samples and weaker methods. The bigger problem was that even the best polls failed to measure what was important. Every pollster must begin with a theory about what the American electorate will look like in the coming election year. Typically, the people most likely to vote are older, better educated, more affluent, and more trusting of institutions than the American adult population as a whole. Yet Trump powerfully appealed to Americans who were less educated, less affluent, and more alienated from institutions—people, in other words, who might not show up in a polling sample recruited by traditional methods but who showed up at the polls when Trump headed the ticket.

Pollsters received an early warning that their methods were under-measuring the disaffected. In June 2016, most polls predicted that British voters would elect to remain in the European Union. Of seven major surveys, only one—which gathered responses online, a conventionally frowned-upon method—accurately predicted that a British majority would vote to leave the EU.

Pollsters readjusted their methods and weightings. Yet they never quite caught up. In 2024, the consensus forecast again underestimated Trump. The core of the polling problem seems to be the changing public itself: How does a company measure public opinion when a big segment of public opinion resists being measured—and when that segment is not randomly distributed, but concentrates behind certain kinds of politicians (such as Trump) or certain kinds of political movements (such as Brexit)? When Gallup’s methods show Trump six points lower than the polling consensus, does that reveal something about Trump? Or does it reveal that methods that worked well in a cohesive, pro-social, pro-institutional country are mismeasuring a polarized country that contains a large anti-social, anti-institutional minority waiting to be mobilized by the right leader?

The Gallup poll once seemed to be nothing less than the voice of the people. The Gallup poll now departs for the same Valhalla as the big three broadcast networks, bowling leagues, and roast beef for Sunday dinner—institutions that were once almost universally accepted but did not survive in a more divided and mutually suspicious America.
