How to Close the Democrats’ Rural Gap | Washington Monthly, January/February/March 2019
https://washingtonmonthly.com/2019/01/13/how-to-close-the-democrats-rural-gap/

Forget Trump’s tariffs. Big Ag is driving a new farm crisis.


The midterm elections made two things very clear about Democratic voters in the Donald Trump era. First, they vastly outnumber Republican voters: Democrats gained control of the House of Representatives and won the popular vote by more than eight percentage points, the biggest midterm “blue wave” since Watergate. But, second, those Democratic supporters are geographically clustered in a way that wastes millions of their votes. While the party improved its margins among rural voters compared to 2016, its candidates still lost by a whopping fourteen to eighteen percentage points outside of metro areas, lagging well behind Barack Obama’s 2012 performance. As a result, Democrats lost two net Senate seats. The Senate map in 2020 is only slightly less foreboding, as statewide races in heavily rural states like Iowa, Missouri, and the Dakotas seem to slip ever further out of reach.

There was a time in the very recent past when many on the left were confident that an era of Democratic dominance was just around the corner, the inevitable result of a “rising electorate” of younger, better educated, and more diverse voters. The midterms were a wake-up call. The rising electorate may be swinging left, but there has been an equal reaction among older and less educated whites—and they represent a lot more of the political map. Without doing much better among voters outside of metro areas, Democrats have little hope of regaining the Senate for years, maybe decades, to come, and may even continue to lose the Electoral College despite winning the popular vote. 

Mindful of this reality, many Democratic strategists are rightly warning that the party desperately needs a strategy to win back rural voters. Unfortunately, the most prominent plans tend to combine small-bore ideas that are insufficient to the scale of the problem (like more money for rural broadband) with attacks on Trump’s agricultural tariffs, which Trump-supporting farmers actually tend to give the president a pass on.

While the notion that all Trump voters are motivated by “economic anxiety” has been thoroughly debunked, there’s no denying that America’s agricultural communities have been in stark decline for years, struggling to turn a profit from farming and suffering epidemic levels of opioid addiction and suicide. Democrats thus have both an opportunity and a moral obligation to try to win these voters back by offering policies that will improve their livelihoods in a way that stoking white cultural grievance never will.

Just ask J. D. Scholten. The thirty-eight-year-old former minor league baseball player garnered national attention in November for almost unseating Iowa Representative Steve King, an eight-term Republican incumbent in one of the reddest districts in America—and perhaps the most unabashed racist in Congress. The last Democrat to run against King lost by 22.6 percent; in 2016, the district went for Trump by twenty-seven points. Yet Scholten lost by only 3.4 percentage points—a swing from 2016 that, if it could be replicated nationally, would all but wipe out the current incarnation of the Republican Party. How did he do it? 

“I have a lot of folks calling me thinking of running for president and they want to know what their rural message should be,” Scholten says. His answer: “Talk about market consolidation.” 

At his thirty-nine town hall meetings, across every county in Iowa’s Fourth District, Scholten spoke about improving the economy by addressing the growing power of agribusiness monopolies, which, by raising prices on what farmers buy and pushing down prices of what farmers sell, are devastating farm incomes. “Agriculture is the backbone of this district,” Scholten says. “At every [town hall] I talked about how farmers are being squeezed on the input and on the output side. . . . That resonated more than tariffs ever did, and I think that’s one thing that national reporters never understood.

“I think farmers view tariffs as temporary, whereas market consolidation is a long-term issue,” he adds, noting that the call for fair competition has bipartisan appeal. “Anti-trust . . . has not been a partisan issue. Traditional Republicans, they want competitive markets, and that goes against what’s happening in the ag business.” 

Scholten’s message on agribusiness monopolies may have resonated with farmers, but it has not yet broken through with big-city liberals, too many of whom write off the possibility that progressive economic populism could appeal to rural voters more than right-wing cultural warfare. Until Democratic leaders and candidates find their voice on the key issue affecting rural communities’ economic fortunes, even the biggest blue wave won’t be enough to take back the map. 

If you have heard about the plight of America’s farmers, it was likely in the context of Trump’s media vortex. You may have seen stories about dairy farms closing because Trump made a fuss about Canadian dairy markets while renegotiating NAFTA. Or you know that China aimed retaliatory tariffs at farmers to target Trump’s base. But while Trump’s trade policies have certainly made matters worse for many farmers, they are hardly the prime cause of the full-blown crisis gripping America’s farm economy. Farming communities have been dwindling for decades. The three years leading up to the 2016 election saw the sharpest decline in farm incomes since the Great Depression. In 2015, more than half of all farm households lost more money than they made farming.

By now, many observers are predicting that we are on the verge of a farm crisis more dangerous than the one that ripped rural America apart in the 1980s. The downturn is particularly devastating for grain and dairy farmers, who rely on large annual operating loans to keep going each season. After four to five years of losing money on every acre of grain or gallon of milk, these farms have exhausted their credit lines. Farmers’ debt-to-income ratios are the highest they’ve been in three decades. Wisconsin alone has lost 1,100 dairy farms over the past two years. One large dairy co-op in the Northeast sent its members a list of suicide and mental-health hotlines along with their dairy checks. 

As farm income declines, so do whole regions. First, local equipment dealers and seed and feed suppliers close. Then, with the decline in economic activity and eventual loss of population, so do local banks, schools, and hospitals. Most people in rural America are not farmers, but in traditionally ag-dependent regions, even non-farmers’ livelihoods depend, directly or indirectly, on farm income, which is often the only substantial source of incoming wealth. 

What is behind the accelerating decline? One popular idea in elite quarters is that the plight of rural America was foreordained by technological change. As agriculture becomes more productive, the thinking goes, we need fewer farmers, and so rural communities naturally depopulate. Meanwhile, a digitized, global economy naturally gives the highest reward to highly skilled “knowledge workers” who feed off each other’s creativity by clustering in elite cities. That’s the theme of a recent report by the Brookings Institution, revealingly entitled “Strategies for Left-Behind Places.” It concludes that the growing gap between elite cities and rural America is the inevitable result of digital technology, which has “increasingly rewarded the most talent-laden clusters of skills and firms.”

That’s a narrative that appeals to many experts, who not coincidentally live in those “talent-laden clusters.” It suggests that the plight of Left-Behind Places is nobody’s fault—it’s the result of an impersonal evolutionary process that just happens to favor coastal elites while crushing the maladapted. 

Well, here’s a different take—one that has far more resonance in heartland America and is backed up by overwhelming evidence. The biggest cause of growing regional inequality isn’t technology; it’s changes in public policy, embraced by both parties, that have enabled predatory monopolies to strip wealth away from farmers and rural communities and transfer it to America’s snazziest zip codes.

Here’s how it works. Farmers are caught between monopolized sellers and buyers. They must pay ever higher prices to the giants who dominate the market for the supplies they need, like seed and fertilizer. At the same time, they must accept ever lower prices from the giant agribusinesses that buy the stuff they sell, like crops and livestock. 

Start with how corporate concentration affects the prices farmers pay. In 1994, the top four seed companies controlled only 21 percent of the global seed market. By 2013, just the top three controlled 55 percent, with Monsanto alone controlling more than a quarter. With that increase in concentration has come a shocking increase in the cost of seed, because these giants face little pressure to compete on price. USDA data shows that the per-acre cost of soybean and corn seed spiked dramatically between 1995 and 2014, by 351 percent and 321 percent, respectively. 

Today’s seeds are often genetically modified to produce higher yields, but that doesn’t translate into more net income for farmers. Not only is the cost of genetically modified seed high, but patent monopolies often make it illegal for farmers to use a portion of their crops to produce their own seeds, as most did in the past. Moreover, even as farmers are paying monopoly prices for a diminishing selection of seed strains produced by a handful of giant corporations, they are also paying monopoly prices for fertilizers and pesticides, often to the same corporations. Since 2017, the Big Six seed and agrichemical companies have shrunk to four, after Dow merged with DuPont and Bayer purchased Monsanto. The top four producers of nitrogen fertilizer controlled 34 percent of the market in 1977, but by 2015 had increased their share to more than two-thirds.

As crop prices fall and farmers look for areas to cut back on the cost of inputs, many feel up against a wall. “[Seed companies] figure out, how much will a farmer actually pay for seed corn before he’ll go switch to some other company,” says Nebraska farmer Vern Jantzen. “If I don’t like what Mycogen is charging for seed corn I can go to Pioneer and I can go to Dekalb, but there’s only three guys. If they all kind of talk to each other a little bit, there isn’t a whole lot of difference in prices.” That is, of course, when there are even multiple sellers to choose from. 

The same story holds for chemicals used in agriculture, whose prices have roughly tripled since 1990, with the steepest increases coming after 2007. The fertilizer market is dominated by several international phosphorus and potash cartels, and a 2013 monograph by the American Antitrust Institute made the case that price swings in these markets were due to oligopolistic behavior.

Altogether, the average farmer spends three times more on inputs per acre today than in the 1990s. Recently, the president of the Nebraska Farmers Union, John Hansen, looked up the corn prices from 1973, when he started farming. At that time, he sold corn for $3.30 a bushel; in 2017 the average price per bushel was $3.33. “So, how does it work to live and farm in 2017 and pay for those 2017 costs with 1973 prices?” Hansen asked in a message to NFU members. “The honest answer is that it does not work.”

It does work, though, for the seed and chemical companies. Before its acquisition by Bayer and its recent legal troubles, Monsanto touted record seed sales and profits in 2017. “They’ve had a boom while farmers have gone broke,” says Joe Maxwell, a farmer, former lieutenant governor of Missouri, and executive director of the Organization for Competitive Markets.

Even as monopolization means that farmers pay more for the supplies they need, it also means that they receive less for the food they produce. 

Nationally, the top four beef packers slaughtered 25 percent of cows in 1977; today that’s up to 84 percent. As recently as 1990, four companies processed 61 percent of all soybeans; today those same four process 85 percent. These statistics actually understate the problem, because at the local level, dominant agribusinesses often have total monopolies. “Today you’ll find that in most all this country, really, there’s just one buyer,” says Maxwell. “It’s as if they all sat in a room somewhere and carved the country up.”

This degree of corporate concentration has turned farmers into price takers. If there’s a single statistic that captures the plight of rural America, it is this: In the 1980s, when American consumers spent a dollar on food, 37 cents out of that dollar went back to the farmer. Today, farmers receive less than 15 cents on every dollar. The difference is increasingly flowing to powerful and concentrated agribusinesses, middlemen, and retailers. In 2016, the big meat-packer Smithfield openly bragged about how its record profits were due to a fourteen-year low in the prices paid to hog farmers and simultaneously higher consumer prices for packaged pork products. 

The same story repeats across agricultural sectors, from grains and produce to eggs and poultry. The Contract Poultry Growers Association of the Virginias says its chicken farmer members haven’t seen an increase in base pay for the past twenty years. Nor do consumers benefit. Chicken and turkey retail prices grew steadily, by 19 percent and 47 percent, respectively, between 2007 and 2013, before finally slowing and then slightly declining over the past three years.

As Big Ag cuts farmers’ margins, often the only way farmers can see to stay in business is to try making it up on volume. This is what accounts for the emergence of “concentrated animal feeding operations,” or CAFOs, in which thousands of animals are crammed together in inhumane conditions. CAFOs reflect the ruinous competition farmers face with one another. They also pose a massive threat to human health, as some farmers routinely use nontherapeutic antibiotics on animals crowded into confined spaces, producing antibiotic-resistant bacteria. And they cause vile pollution from large manure lagoons, further detracting from the quality of life across expanding sections of rural America.

When Barack Obama was competing for the Democratic presidential nomination in 2008, he seemed to understand the role that corporate concentration was playing in immiserating much of rural America. Campaigning in Iowa, North Carolina, and Colorado, he promised to take on abuses by monopolistic agribusinesses, particularly meat-packers. 

Early in his presidency, he followed up on these promises by having top Agriculture Department (USDA) and Justice Department officials hold hearings across the country to investigate malpractice in the poultry, cattle, dairy, and seed industries, as well as the growing gap between the prices consumers paid and farmers received. At the conclusion of these hearings the USDA proposed rule changes that would have given farmers far greater power to stand up to abuses by ag monopolies. 

But the blowback was immediate. Big Meat threw its lobbying weight behind an effort to block the reforms. Soon, sixty-eight Republicans and forty-seven Democrats delivered a letter to the USDA saying that the new rules were unjustified and required more industry input and economic analysis. Obama could have implemented the rules unilaterally, but for whatever reason his secretary of agriculture, Tom Vilsack, hesitated. Then, in 2010, Republicans took the House and began passing appropriations riders that stripped the USDA of the necessary funds to implement the rules even if they had gone into effect. In December 2016, Vilsack finally signed off on a significantly watered-down rule change. But shortly after President Trump took office, new Secretary of Agriculture Sonny Perdue shot down even these modest reforms and dissolved the USDA’s antitrust agency entirely, burying its duties within the agribusiness-friendly Agricultural Marketing Service agency.

So it turns out that the Democrats briefly did have a rural policy that took monopoly seriously, but gave it low priority and retreated in the face of corporate opposition. Today, this record leaves many farmers bitter, especially those who risked retaliation by testifying at the USDA hearings about the abuses they suffered at the hands of monopolists. That bitterness is only amplified when they hear voices in the Democratic Party arguing for solutions that fail to acknowledge the party’s own role in fostering agribusiness monopolies. Writing in the New York Times shortly after the election, columnist Michael Tomasky recommended that the party embrace a “four pillar” plan written by Tom Vilsack, which calls for, among other things, spending more money on opioid addiction treatment, encouraging more local farmers’ markets, and directing more payments to farmers who set aside land for conservation. But Vilsack’s plan pointedly fails to mention a central cause of American farmers’ misery, let alone any strategies for taking on the agribusiness monopolies he failed to roll back during his time in office. Perhaps not coincidentally, Vilsack is currently the president and CEO of the U.S. Dairy Export Council, an agribusiness lobbying group.

Other plans circulating in Democratic quarters embrace a kind of social Darwinism to explain the growth of regional inequality and then propose solutions that are almost insulting in their inadequacy and implicit victim blaming. The Brookings Institution’s “Strategies for Left-Behind Places,” for example, states flatly that it would be “inefficient to ‘save’ every left-behind small city or rural community in the U.S.” It then argues that the best that can be done is to offer displaced farmers and rural workers more broadband networks, some targeted federal economic development programs, and training in programming skills. The report concludes by calling for changes to zoning laws that would produce more affordable housing in high-cost cities like New York and San Francisco, thereby allowing more heartlanders to move to those thriving metropolises—a solution that would, of course, only exacerbate the problem of coastal population clustering. Though the report does mention rising monopoly as a cause of regional inequality, it stops short of recommending anti-monopoly measures as part of the solution.

Big Ag would obviously oppose any Democratic effort to win in rural America with an anti-monopoly playbook. But policy levers, large and small, are available at every level of government to politicians who want to show their solidarity with ordinary voters in rural communities. The big policy levers include things like busting up ag monopolies and reforming giant food co-ops. An encouraging start is a pair of complementary bills, introduced by Senator Cory Booker and Representative Mark Pocan, both Democrats, that would put an eighteen-month moratorium on large agribusiness mergers while a new commission studies ways to improve competition policy enforcement. (For more details, see “The Three Ways Democrats Can Fix the Farm Economy” by Adam Diamond.)

Smaller-scale but still substantive measures include legislation that would prevent meat-packers from vertically integrating into owning livestock or forcing farmers into exploitative contracts. Booker and Republican Senator Mike Lee introduced an amendment to the farm bill that would have addressed agricultural monopoly in still another way, by reforming so-called “checkoff” programs. These are federally sponsored programs that collect money from farmers to market their products but that often wind up financing agribusiness lobbying groups. (The amendment failed, but did draw thirty-eight votes from across both parties.) At the local level, mandating that schools use more locally grown food in their cafeterias is one available weapon for combating agribusiness monopoly.

Advocating for such policies could appeal to a far broader set of Americans than just farmers. One reason is that monopolization in agriculture has by now become so extreme that even as it drives down prices for farmers, it can also drive up prices for consumers. This reality has been reinforced by recent investigations into price fixing at tuna companies, mega-brewers, dairy co-ops, chicken processors, and pork packers. Monopolization also affects urban and suburban consumers by forcing farmers into a system of industrial-scale monoculture that lacks resiliency in the face of climate change, poses substantial human health and environmental threats, and produces a level of cruelty to animals that more and more Americans can no longer abide. Though rural and urban Americans may differ on many cultural wedge issues, their common victimization by monopolists provides an issue that bridges the divide. 

Taking on agribusiness monopolies also provides a way to do something important for farm and food chain workers exploited under the current system. Many of the people who pick our food and cut our meats, whether they are undocumented immigrants, refugees, people of color, or working-class whites, are further marginalized when a single agricultural giant controls all the jobs available to them across whole regions. 

Rural communities are also deeply oppressed by other forms of monopoly. These include the closing of local hospitals bought out by giant corporate chains; the takeover of locally owned financial institutions by taxpayer-subsidized “too big to fail” banks; the loss of connectivity to the global economy that results when airlines cut off service to “flyover country”; and the threats posed by deregulated freight railroad monopolies that are abandoning service to farms and industries along branch lines while charging monopoly prices on the mainline traffic they keep. 

J. D. Scholten also points to how large chain stores like Dollar General “sink money out of the community” by shutting down locally owned retailers. Restoring antitrust enforcement across the board, including against companies like Walmart and Amazon, would go a long way toward allowing local economies to once again compete on an even playing field. “Part of that rural identity is being independent,” argues Scholten. “Now [rural communities] are reliant on a corporation rather than being self-employed, and I think that’s part of the issue.” 

In the face of such challenges and opportunities, leaders interested in addressing the nation’s growing economic and political divides should stop listening to people who think we can fix the problems of rural America with more broadband, coding classes, and zoning changes. Instead they should pay attention to people like Scholten who know what they are talking about and are willing to say it out loud.  

Taking a stand against Big Ag: J. D. Scholten, left, made taming the agricultural monopolies a central theme of his campaign to win over voters like Iowa farmer Charles Rasmussen. (Credit: J. D. Scholten for IA-04)
To Take Back the Map, Democrats Need a Plan to Revive Heartland Cities | Washington Monthly, January/February/March 2019
https://washingtonmonthly.com/2019/01/13/to-take-back-the-map-democrats-need-a-plan-to-revive-heartland-cities/

The clustering of growth in a few coastal metro areas is partly to blame for the rise of GOP minority rule.

It had the trappings of a reality television show: 238 competitors from across the country, twenty finalists, but only one victor. (Or, as it turned out, two.) Bidders sent gifts ranging from cacti to free sandwiches in hopes of getting Amazon to place a slice of its headquarters within their boundaries. Less tacky, but more concerning, were the billions of public dollars that cities and states offered the company, which has the eighteenth-highest revenue of any company in the world.

Grotesque as it was, Amazon’s HQ2 competition did offer insight into the strengths of its many contestants. Pittsburgh, which made the final twenty, highlighted the presence of Carnegie Mellon and the University of Pittsburgh, the thousands of local graduates with computer science degrees, and its “top food scene as ranked by Zagat.” Atlanta, another finalist, emphasized its multiple direct flights to Seattle and its subway system. A joint bid by Detroit and Windsor, Canada, pointed out that, by straddling the border, Amazon could take advantage of Canada’s weaker currency and friendlier immigration policies while remaining in a U.S. metropolis.

But Amazon was never going to Detroit or Atlanta or Pittsburgh. When the company announced that it was splitting the bounty between New York City and Arlington, Virginia, Jersey City Mayor Steven Fulop said what many were thinking, calling the search “a big joke just to wind up exactly where everybody guessed.” While many of the bidders had local talent, amenities, and often infrastructure, New York and D.C.—America’s financial capital and America’s literal capital—offered an unmatched concentration of economic and political power. For firms like Amazon, a monopolistic corporation that controls nearly 50 percent of America’s e-commerce market and has its eyes on much more, access to this kind of influence is an ideal way to protect the ability to dominate markets. It didn’t hurt that Amazon CEO Jeff Bezos already owned a house in each place, as well as the major newspaper, the Washington Post, in one of them.

The company’s decision is emblematic of a trend that goes far beyond Amazon. In recent years, growth in income and opportunity has overwhelmingly flowed to cities that are already wealthy, most of which are on the East or West Coast. In 1980, the per capita income in the richest 10 percent of metro areas was 1.4 times greater than in the poorest 10 percent. By 2014, it was 1.7 times greater. Similarly, a 2018 study by Issi Romem, a researcher at the University of California, Berkeley, found that the income of people moving into wealthy coastal areas far exceeds the income of people leaving. New residents of the San Francisco Bay Area, for example, earn around $13,000 more than the people they replaced. In struggling interior cities, the reverse is true. Americans who are moving to Greater Detroit make $4,000 less than the people moving out.

The vibrant growth of wealthy cities, in other words, comes at the expense of perfectly viable heartland cities. Places like Detroit or Pittsburgh have people and infrastructure aplenty. Almost all of these cities have dramatically lower housing costs than do elite coastal metro areas, a point that journalists bemoaning Amazon’s decision uniformly mentioned. If Amazon really cared about affordability, as the company suggested it did, then why did it go to two of the most expensive places on earth? Why would any company? The lack of easy answers confirmed Americans’ growing sense that the geographic clustering of opportunity is the result of a rigged economy.

Exactly one week before Amazon’s announcement, America held a midterm election that produced an equally strange-yet-predictable outcome. Thanks to a “blue wave” of support, Democrats picked up forty seats in the House of Representatives, taking control of the chamber. Yet despite the Democrats’ nine-point advantage in the national vote, the GOP gained a net of two Senate seats. 

The GOP’s disproportionate—and antidemocratic—Senate representation and the clustering of economic opportunity in elite coastal metro areas are closely related. Democrats won big in cities and suburbs all over the country, as they increasingly do. But metro areas in traditional swing states away from the coasts generally haven’t been growing much in recent decades, leaving the populations of those states skewing much more rural than they otherwise would. Meanwhile, wealth, opportunity, and growth have increasingly flowed to a handful of metro areas in states that are already Democratic strongholds. The Democratic Party, for example, managed to flip four seats and thus win every House district in fast-growing Orange County, California. Given Orange County’s history as a bastion of conservatism, that’s no small feat. But when it comes to the Senate, it changes nothing. California was already blue. 

It also doesn’t change the Electoral College. Hillary Clinton was the first Democratic presidential candidate to win Orange County since 1936, en route to winning the popular vote by nearly three million. Unfortunately, her votes were lopsidedly concentrated in states where her victory was already assured. Donald Trump, meanwhile, eked out game-changing victories in Electoral College–rich midwestern states like Michigan, Wisconsin, and Pennsylvania (plus a blowout in Ohio, a former swing state). These states still have large metro areas, and Clinton won those regions. But these places—like Detroit, Cleveland, and Milwaukee—have for decades either grown very modestly or declined in population. So even big wins there didn’t offset the surge of votes Trump received in exurban, small-town, and rural parts of those states. This fact may have as much to do with Democrats’ struggles in these states as does the rightward shift of rural voters. After all, Colorado is heavily rural, but it has turned solidly blue in recent elections thanks to the explosive growth of Denver, one of the few recent non-coastal urban success stories. 

To avoid watching in horror as the Senate slips away forever while the Electoral College map becomes ever more daunting, liberals need a long-term strategy to combat the decline of heartland cities—to turn Clevelands into Denvers. To do so, they need to first recognize that geographic inequality did not come out of nowhere. It is not the inevitable product of free market forces clustering new skill and innovation around where all the old skill and innovation are found—nothing makes people in St. Louis or Milwaukee any less talented than people in San Francisco or Washington, D.C. Instead, it’s the result of nearly four decades of policy choices in Washington—such as giving large banks and other corporations in elite coastal cities free rein to acquire rival firms headquartered in cities in America’s interior. This has stripped those interior cities of what were once their economic engines, even as it has enriched the already wealthy coastal megalopolises. 

Fixing America’s regional inequality would be a good idea irrespective of its political implications. It would increase innovation and GDP across the country. With economies, as with professional sports leagues, having more cities that can compete ups everyone’s game. It would help curb the broader scourge of income inequality. And it would improve our quality of life by making it easier for talented people to stay with family and friends in the communities where they grew up, or to move wherever else they might like to go, rather than being channeled to a handful of overly expensive, traffic-choked megacities.

But reducing regional inequality is a case where what’s good for the country would also be good for the Democrats. In fact, if the party can’t find policy levers to boost growth rates—and hence the number of Democratic voters—in purple and red state metro areas, it will have a hard time ever overcoming the Republican geographic advantage in the Senate and Electoral College. Yet almost no one on the left talks as if they understand this reality. 

To fix the problem, Democrats first have to realize they have one, and how it came to be. 


Regional prosperity wasn’t always a zero-sum game. From 1930 through 1980, virtually every geographic section of the United States saw its per capita income trend towards the national average. In 1933, average income in the Southwest was not much over 60 percent of the national average. By 1979, it was nearly on par. New England, once 1.4 times wealthier than the country as a whole, fell to just slightly above average. These gains were visible in the nation’s cities. In 1969, the per capita income of Greater St. Louis was 83 percent as high as New York’s, and it climbed even higher in the subsequent decade. In 1978, metro Detroit’s average income was about the same as that of the New York tri-state area. In the mid-1960s, the twenty-five richest metropolitan areas included Milwaukee, Des Moines, and Cleveland. (Throughout this piece, I’m referring to metro areas as defined by the U.S. Office of Management and Budget, which uses census data to designate “metropolitan statistical areas.”)

This convergence helped the country develop a broad middle class. According to Harvard economists Peter Ganong and Daniel Shoag, approximately 30 percent of the decline in America’s hourly wage inequality from 1940 to 1980 resulted from wages across different states increasingly resembling one another. In other words, growing equality between regions helped foster more equality within regions.

But suddenly, these trends reversed, and over the next several decades, regional inequality exploded. In 1980, New York City’s per capita income was 80 percent above the national average. By 2013, it was 172 percent higher. Incomes in Washington, D.C., and San Francisco, respectively, went from being 29 and 50 percent above average to 68 and 88 percent higher. Heartland cities, meanwhile, saw their wealth slip away. Gone from the list of America’s richest cities were Milwaukee, Des Moines, and Cleveland. By 2018, twenty of the top twenty-five were on the East or West Coast, seven of them in California. Minneapolis, clocking in at number twenty-four, was the only entrant from the entire Midwest.

How did this happen? Some analysts cite the impact of deindustrialization. But while the decline of industrial jobs certainly played a role in the stagnation of the Midwest, other places that once had strong manufacturing industries—like New York and Boston—managed to rebound from manufacturing busts. Seattle provides an especially revealing case. Although it’s now one of America’s most affluent cities, in the 1970s it was one of the country’s most distressed. The local economy was heavily reliant on a sole manufacturer—Boeing—and when a mild recession led to a collapse in the airplane market, the region entered a tailspin. One out of every eight jobs in Greater Seattle was eliminated, and unemployment ticked above 12 percent. But, unlike Detroit, the city had the good luck of rearing Microsoft cofounders Bill Gates and Paul Allen, who met while attending the same private school on Seattle’s north side. They decided to return home to grow their then-nascent company, a choice that helped save their city.

Other experts argue that growing regional inequality is the inevitable product of the need for talent to cluster in today’s “innovative” economy. Berkeley economist Enrico Moretti has argued that “once a city attracts some innovative workers and innovative companies, its economy changes in ways that make it even more attractive to other innovators. In the end, this is what is causing the Great Divergence among American communities.” 

Moretti is right that the education levels of a region’s population help shape its future. Most of today’s most prosperous cities had a higher proportion of residents with bachelor’s degrees in 1980 than did areas that are now struggling. Access to higher education is crucial. But his diagnosis is incomplete, and his determinism is unfounded. Struggling metro regions had, and still have, hundreds of thousands of residents with college degrees, more than enough to sustain vibrant creative industries. In the 1980s, for instance, St. Louis had booming advertising, pharmaceutical, and financial sectors, and even today is a hub of tech start-ups thanks in part to the presence of an elite research institution, Washington University in St. Louis. In their best-selling recent book Our Towns, journalists James and Deborah Fallows recount their travels to modest-sized cities like Duluth, Minnesota, and Sioux Falls, South Dakota, that are being remade thanks to a combination of civic activism and entrepreneurial energy.

There’s thus little reason why smaller-sized metro areas can’t succeed in the twenty-first-century economy. “Innovation,” after all, didn’t start being important in 1980; it’s something economies have depended on for centuries. To the extent that the digital age is different, it’s that innovative people can now connect and work remotely. If anything, today’s coders should have less need to all be in the same place than did educated professionals in the 1960s.

The likeliest explanation for the regional divergence, then, doesn’t come from economics or sociology. It comes from politics and policy. Between the mid-1930s and the mid-1970s—the height of America’s regional wealth convergence—elected officials worked to level the economic playing field through policies specifically designed to enhance regional and local competitiveness. Federal laws passed in the 1930s, for example, blocked the growth of domineering chain stores by cracking down on practices that would undercut smaller businesses. The federal government also made vigorous use of antitrust laws. In the mid-1950s, for instance, the Justice Department successfully sued to keep two shoe companies from merging. It argued that the resulting firm—which would have controlled just over 2 percent of the nation’s footwear market—could suppress competition and harm consumers. Today, the idea that such an entity might be monopolistic would be roundly dismissed by the courts. But in 1962, the Supreme Court unanimously sided with the Justice Department. In his opinion, Chief Justice Earl Warren wrote that the Court had to respect “Congress’ desire to promote competition through the protection of viable, small, locally owned business.” 

Yet in the latter half of the 1970s, just as regional equality was cresting, the government changed course. The process began under Jimmy Carter. In 1978, the president signed legislation that deregulated the airline industry by abolishing the Civil Aeronautics Board. For decades, the CAB had made sure that passengers flying to and from small and midsize cities paid a similar price per mile as passengers flying to and from the country’s largest ones. It required that airlines offer service to places even when such routes were unprofitable, to ensure that no city was left behind. Eliminating the CAB did reduce airfare costs in the nation’s biggest cities, at least initially. But its ultimate effect was the suffocation of many inland metro areas. Since the board’s demise, flights to interior cities have become far less frequent and far more expensive. In Memphis and Cincinnati, they’ve nearly doubled in price.

But regional inequality really took off in the 1980s, when both the Supreme Court and Ronald Reagan’s Department of Justice narrowed the definition of what was enforceable under federal antitrust laws and began approving an enormous number of corporate mergers. The single largest increase in corporate acquisitions in American history happened between 1984 and 1985. This laissez-faire attitude toward monopolies didn’t stop when Reagan left office, or even when Democrats won back the White House. In 1998, for example, Bill Clinton’s administration approved the merger of Exxon and Mobil, then the country’s two largest oil companies. The upshot of these policies is that large firms located in big, economically powerful cities have increasingly captured the market. They have bought out their heartland competitors in industries ranging from banking to retail. The result has been a one-way flow of wealth out of middle America and into elite metropolises.

Greater St. Louis is a prime example of how airline deregulation and the demise of antitrust laws can suck the vitality out of a prominent city. St. Louis was once home to a vibrant collection of internationally competitive corporations and—given its location at America’s center—was a transportation hub and business convention destination. But then it was hit with the by-products of pro-monopoly government policies. Locally headquartered Ozark Airlines was bought in 1986 by Trans World Airlines, which was then bought by Chicago-headquartered American Airlines in 2001, which then cut flights to St. Louis by more than half. In 1980, the area had twenty-two Fortune 500 companies. Today, there are nine. One of them, the health care firm Express Scripts, is in the process of being acquired by Cigna, a Fortune 500 health insurance company based in Connecticut. 

When a city loses the headquarters of its major employers, the damage extends far beyond just the thousands of lost jobs. As regional firms are acquired, many local executives are replaced by managers with less incentive to engage with the community. The new parent company may make business decisions that undermine the subsidiary and its hometown. Executives may decide to relocate local staff, draining talent and resources from the area. After the Belgian beer conglomerate InBev bought St. Louis–based Anheuser-Busch, it promptly eliminated more than a fifth of the company’s St. Louis workforce. Even if local jobs aren’t eliminated, profits that once would have stayed in the region are now channeled elsewhere—functionally making these places economic colonies of distant super-cities.

The problem extends beyond the flight of existing capital. As markets become less open and more monopolized, it gets harder for new businesses to break through. While St. Louis boasts a fairly robust start-up scene, many of its biggest successes have been acquired by companies elsewhere, as when a Philadelphia company purchased St. Louis biotech start-up Confluence Life Sciences for $100 million in 2017. In the late 1970s and early 1980s, Gates and Allen could build Microsoft in Seattle—despite the city’s woes—because of a relatively equitable national economy. Today’s entrepreneurs don’t live in that reality. Instead, most eventually cash out by selling their projects to an existing monopoly.


Increasingly, experts and politicians seem aware that monopolization is a serious economic problem. It allows price gouging, forces low wages on an increasingly captive labor force, and redistributes wealth upward. But it’s also a political problem. There are fifty-three metro areas with one million or more residents, located in thirty-eight states. Together, they accounted for roughly 55 percent of votes cast nationwide in 2016. According to research by Patrick Adler, an associate at the Martin Prosperity Institute, Hillary Clinton won two-thirds of these metro areas, including a majority of those located away from the coasts. Averaged across all metro areas with over one million residents, her margin was twelve percentage points, more than nine points greater than her national popular-vote victory.

But the electoral power of America’s metropolises appears to be declining. Clinton became the first candidate in modern U.S. history to lose the presidency while winning counties where a majority of Americans live. Of the country’s 100 largest counties, Clinton won eighty-eight, the same number that Obama took in his resounding 2008 victory, and eighteen more than Al Gore won in 2000. But because of the Electoral College, it wasn’t enough. Conservatives often complain that large liberal cities are too politically powerful. The truth is closer to the reverse: if politicians can win races while carrying only a small minority of America’s metropolitan areas, then the voices of urban and suburban voters—including in middle America—are ignored.

The most obvious driver of this trend is the intense rightward shift of white, rural voters. But the relative health of America’s metropolises may play an equally large, if barely appreciated, role. Consider the cases of Minnesota and Wisconsin. In addition to a border, the states share similar populations and demographics, shaped by a history of German and Scandinavian immigration. Until Scott Walker came along, they also had a shared tradition of progressive populism, stemming from Minnesota’s Democratic-Farmer-Labor Party and the legacy of Wisconsin’s Fighting Bob La Follette—who made breaking the “combined power of the private monopoly system over the political and economic life of the American people” one of his central tenets. 

In 2016, the rural parts of both states shifted sharply to the right, matching the national trend. Minnesota, however, stayed blue, while Wisconsin went red. To understand why, take a look at the growth rates of the states’ largest metro areas, both of which voted heavily for Clinton. Between 1970 and 2017, Greater Minneapolis grew at an annual rate of roughly 1.4 percent, above the national rate of 1.1 percent. The Minneapolis region—home to a variety of corporate giants like Target—now has roughly 3.6 million residents, up from 1.87 million in 1970. By contrast, Greater Milwaukee, buffeted by business closures and a shrinking middle class, grew at an annual rate of only 0.26 percent over the same period. 

In Minnesota, Minneapolis’s growth was enough to offset Democratic losses in rural areas. In Wisconsin, Milwaukee’s wasn’t. If Greater Milwaukee had grown at the same rate as Greater Minneapolis, then Clinton would have carried Wisconsin by approximately 16,000 votes instead of losing by roughly 23,000.

The nearby states of Illinois and Michigan are also illustrative. The two states voted for the same presidential candidate in all but one election from 1952 through 2012 (1968 was the exception). But in 2016, Clinton’s vote share in Michigan dipped by seven points, enough to lose the state, while her more modest two-point decline in Illinois kept that state safely blue. Once again, big cities helped make the difference. Growth in Chicago is nothing to write home about, but unlike Greater Detroit, the area at least hasn’t lost residents over the last forty-seven years. As recently as 1990, Chicago and Detroit were the third and fifth largest metro areas in the country, respectively. By 2010, they were third and twelfth. Clinton won 2,381,476 votes in the Chicago metro area alone, more than Trump won in the entire state, powering her to victory in Illinois. But her 169,025-vote margin in metro Detroit was less than a tenth of either candidate’s statewide vote total. 

It isn’t just the upper Midwest. Virginia’s transformation from deep red to bright blue is largely the story of metropolitan Washington, D.C., which includes northern Virginia. The area has been a hotbed for high-quality economic opportunity over the last few decades, courtesy of both the federal government and the variety of major private companies that have sprung up around it. In 1970, northern Virginia accounted for only 12 percent of the state’s population. By 2010, one-third of state residents lived in the counties surrounding the District. This growth has been driven by highly educated professionals, including a rapidly growing Asian American population drawn to the region’s glut of high-tech jobs. This has been a godsend for Democrats, who dominate the area. Indeed, Virginia would not be blue otherwise. Had Fairfax County alone cast only as many ballots in 2016 as it did in 1972, Hillary Clinton would have lost the state, even if the county had still voted Democratic by its actual 2016 margin.

Contrast Virginia with Missouri, a similarly sized state that also sits just below the Mason-Dixon Line. Once America’s quintessential swing state (it voted for the winning presidential candidate in every election from 1904 to 2004, with the lone exception of 1956), Missouri’s politics have lurched to the right as St. Louis, its largest city, has stagnated. From 1970 to 2017, the region’s annual growth rate was an anemic 0.26 percent, even less than the growth rate of Missouri as a whole. 

To be sure, not all growing metro areas skew blue. Some, especially those with large energy industry and retirement sectors, vote red. But the pattern is clear: big cities help Democrats, and the bigger the city, the more help it provides. This makes sense demographically. Metro areas tend to be younger, more diverse, and have more college-educated voters than rural areas. But it isn’t just a matter of liberal-leaning demographics clustering in cities. Experts have also found that even traditionally conservative demographic groups are more likely to vote Democrat when they live in more densely populated places. A study by Catalist, a major progressive data research organization, shows that in 2018, the Democratic vote share among white voters without college degrees—also known as Trump’s base—was thirty-four points higher in suburbs and cities than in rural areas. Indeed, the average non-college-educated white person residing in a city voted Democratic. “Even if you look at white non-college voters, the closer you get to the city, they tend to be more Democratic,” said Ruy Teixeira, a sociologist and senior fellow at the Center for American Progress. “Maybe that’s partly because they’re used to living with people who are different from them, and that produces a certain kind of outlook that’s less Republican.” 


Given these trends, it would seem obvious that liberals should be keenly interested in promoting policies that would equalize geographic opportunity in America. Yet what you tend to hear instead is smug satisfaction about the economic superiority of liberal big cities. “I win the coast, I win, you know, Illinois and Minnesota, places like that,” Hillary Clinton told an audience in Mumbai last March. “I won the places that represent two-thirds of America’s gross domestic product. So I won the places that are optimistic, diverse, dynamic, moving forward.”

Her comments, though no doubt born of understandable frustration, were tone-deaf and prompted an equally understandable online roast. But she also put into words what a whole lot of residents of major metropolitan areas think—and not just liberals. In a March 2016 essay in National Review, conservative writer Kevin Williamson assailed Trump-supporting white working-class voters and the distressed areas in which they reside. “The truth about these dysfunctional, downscale communities is that they deserve to die,” he wrote. Their residents, he argued, “failed themselves.”

Williamson’s invective makes more sense, ideologically, than Clinton’s unscripted outburst. Letting a monopolized market pick winners and losers fits with modern right-wing economic dogma. Unfortunately, too many liberals have unconsciously bought into the same “free market” view when it comes to the divergent paths of metro areas. Writing in the New Republic in 2017, contributing editor Kevin Baker boasted that “cities now generate the vast majority of America’s wealth—the cities, that is, where blue folks live,” and half-seriously proposed that these nodes of pro-liberal wealth virtually secede from the union via radical federalism—“Bluexit,” he called it.

Liberals ought to reconsider taking such pride in the economic dominance of blue metro areas. It was Robert Bork, the archconservative legal scholar and failed Supreme Court nominee, who laid the intellectual groundwork for destroying antitrust regulations. It was Ronald Reagan who took Bork’s ideas and made them ascendant, channeling wealth to the coasts. Many Democratic politicians then adopted Bork and Reagan’s antitrust ideology, furthering the outward flow of wealth. In other words, thriving blue cities like San Francisco and New York owe much of their privileged status to decades of reactionary laissez-faire economic policy. 

The bigger blind spot is that liberals seem relatively uninterested in the plight of heartland metro areas, and disproportionately preoccupied with what happens in cities in blue states. Arguments over whether the best way to fight gentrification in places like Boston and San Francisco is affordable housing or looser zoning requirements consume liberal policy communities. Those questions are neither irrelevant nor unimportant. But by dwelling endlessly on the problems facing the residents of elite coastal cities, these debates ignore a broader class of victims: the less-than-affluent people living everywhere else. 

The fundamental problem in wealthy, high-cost cities is not zoning laws that are too restrictive or that don’t mandate enough affordable housing—even if those are real issues. It’s not that they have too many white-collar working professionals moving into once-affordable communities. It’s that they have too many white-collar working professionals, period. Stagnant heartland cities, on the other hand, don’t have enough. 

But rather than seeing the connection between these two issues, many urbanists attempt to address each problem separately. For affluent cities, the emphasis is on alleviating cost-of-living concerns. For struggling cities, it’s on improving educational opportunities and getting more federal aid. A recent Brookings Institution report on the growth of regional inequality, for example, calls for “an urgent push to boost the tech skills of left-behind places.” It also proposes that the government provide funding for certain “promising heartland metros.” But without simultaneously tackling monopolization, this type of investment would be like trying to fill a bathtub with an open drain. Whatever resources and capital are added would just keep getting sucked out by firms on the coasts.

To its credit, the Brookings report does cite economic concentration as a cause of regional inequality. But its proposed solutions make no mention of monopolization, suggesting that the authors do not think antitrust enforcement is a viable remedy. That’s a shame, because cities have proven that they can turn themselves around when economic power isn’t overly concentrated, as Seattle once did. Big cities like St. Louis, Cleveland, and Detroit could do the same thing today if the playing field were even. They have cultural amenities, school systems as good as or better than those in elite coastal cities, and excellent nearby universities. They have a surplus of lovely, affordable homes, many in walkable urban neighborhoods—and most local residents would see an influx of affluent professionals not as evil gentrification but as a godsend. They have mass transit systems that they’ve expanded in recent years, mostly with local tax funding. They don’t need huge new sums of federal money to thrive economically, though it wouldn’t hurt. What they need are rules allowing them to compete fairly. 

The Republican Party’s current economic strategy—tax cuts and less regulation with tariffs on top—will not help heartland cities. It isn’t designed to. It’s therefore up to Democrats to advance policies that will distribute economic power and opportunity to parts of America beyond the coasts. That means, first and foremost, challenging monopolies head-on. The next Democratic administration needs to turn up the dial on antitrust enforcement, blocking proposed mergers like the Express Scripts–Cigna deal and breaking up giants that have already accrued too much market power. Watching major American cities fall over themselves wooing Amazon was nauseating, but given the company’s size, the prostration was understandable. It would be better if, instead of competing to have a part of Amazon, these metro areas were able to have their own successful online retailers. Creating that kind of economy means putting limits on Jeff Bezos’s empire. 

It also means rewriting banking legislation to disperse financial power from the big coastal money centers out to the rest of the country, as was the case until the recent era of deregulation. Local businesses can’t thrive without sources of financing, and study after study shows that local and regional banks—because of their rootedness and greater local knowledge—are more willing and able to make those loans than Citibank or Bank of America. In Detroit, for example, a consortium of nonprofits and regional banks is creating a program that will make it easier for prospective homeowners to get mortgages that enable them to both purchase and renovate houses. That’s an enormous step in a city where derelict properties deter homeownership and drive down real estate prices. The federal government needs to strengthen and protect these kinds of institutions.

Helping smaller metro areas thrive also means giving them back the connectivity they once enjoyed. Airline deregulation has not only made flying a miserable experience for everyone, it also has jeopardized the viability of companies in places like Cincinnati, where it costs twice as much per mile to fly anywhere compared to New York or San Francisco. The policy elites in the 1970s who decided that regulating fares and routes was an intolerable burden didn’t foresee how much worse it would be when those decisions were made by hedge fund managers in New York, who own the four big carriers that have locked up the air passenger market. 

Of course, boosting competition and growth in America’s metro areas won’t be enough on its own to solve the growing problem of geographic polarization. It must be in addition to, not in place of, an equally aggressive strategy aimed at rural America. As Claire Kelloway argues elsewhere in this issue, the same bag of antitrust and pro-competition tools would further that effort, too—if anything, rural areas are feeling the sting of monopoly even more acutely than heartland cities. But it’s essential to understand that America’s political geography isn’t just about urban versus rural. It’s also about coastal cities getting richer, more crowded, and more expensive while an overwhelming number of heartland metro areas get left behind. 

Strengthening competition policy and breaking up monopolies requires a national solution. This creates something of a catch-22. To win in more places, Democrats may need to foster healthier heartland cities. But to foster healthier heartland cities, Democrats need to win in more places. Still, it’s imperative that the party do whatever it can as soon as it has the chance. That’s especially true because America’s economic landscape will not change overnight. New businesses take time to grow. Major companies and their executives will aggressively combat new limitations and fight breakups. But if history is any guide, fighting monopolization will prove well worth the effort. Doing so in the early 1900s led to a half century of economic growth that was shared by every region of the country. 

Simply painting a vision for how America can restore that kind of broad-based growth, and the specific policies we’ll need to achieve it, could go a long way to helping Democrats win elections in the near term. But it’s the long term they need to worry about most. Despite repeated predictions that America’s increasing diversity will eventually build a wall around the GOP, Republicans’ willingness to double down on white rural voters, franchise restrictions, and gerrymandering—plus the “natural” advantage they accrue from the unrepresentative Senate and Electoral College—is continuing to pay dividends. Democrats need to make this template impossible for the GOP to keep following. And to do that, they need to vigorously enforce policies that make America’s purple and red state metro areas too big and too vibrant for Republicans to ignore or suppress.  

The post To Take Back the Map, Democrats Need a Plan to Revive Heartland Cities appeared first on Washington Monthly.

The Democrats’ Pop-Up Foreign Policy Problem https://washingtonmonthly.com/2019/01/13/the-democrats-pop-up-foreign-policy-problem/ Mon, 14 Jan 2019 02:00:12 +0000

Conservatives have created a permanent infrastructure of think tanks to generate foreign policy positions regardless of who’s in the White House. Liberals need to do the same.


Last March, three U.S. senators undertook a seemingly quixotic task. Democrat Chris Murphy, Independent Bernie Sanders, and Republican Mike Lee demanded a vote on their resolution calling for the U.S. government to end its support for Saudi Arabia’s war against Houthi rebels in Yemen. The offensive, they argued, was creating a humanitarian nightmare: the war had already killed thousands of civilians through air strikes and caused a famine resulting in more than 50,000 children’s deaths. 

But the senators’ efforts went nowhere. Republicans sided with the Trump administration’s view that support for Saudi Arabia was vital to counter Iran, which backs the Houthis. Democrats, meanwhile, were conflicted: it was the Obama administration that had first argued for supporting the Saudis in Yemen. With ten Democrats voting against it, the procedural vote was defeated on the Senate floor.

Eight months later, the same three senators pushed for another vote on the same resolution. This time, the results were different. Every Democratic senator voted to advance the measure to the full chamber, as did fourteen Republicans, defying Senate Majority Leader Mitch McConnell’s attempts to bury the resolution. Two weeks later, it passed 56–41. 

What changed? The main reason for the turnaround was the grisly murder and dismemberment of Saudi dissident and Washington Post contributor Jamal Khashoggi at the Saudi consulate in Istanbul by a fifteen-member “assassination squad.” Equally disconcerting to senators was Donald Trump’s refusal to admit that Saudi Crown Prince Mohammed bin Salman had ordered the hit, despite overwhelming evidence, including from the CIA, that he had. 

But while the killing of Khashoggi was the precipitating event, the Senate vote got an important assist from a group almost no one had previously heard of: National Security Action. Founded last winter by Ben Rhodes, former deputy national security adviser to President Obama, and Jake Sullivan, former senior foreign policy adviser to Hillary Clinton, the organization boasts an advisory board made up of national security bigwigs from the Bill Clinton and Obama administrations—including former CIA Director John Brennan, former National Security Adviser Susan Rice, and former United Nations Ambassador Samantha Power. 

For months leading up to the vote, National Security Action (which lends itself to an unfortunate initialism) had been quietly mobilizing support around getting the U.S. out of the Yemen crisis through Capitol Hill briefings and cooperation with progressive advocacy networks. Then, on November 11, a month after Khashoggi’s murder, the group released a statement signed by thirty top Obama-era officials calling for an end to all American involvement in Yemen. Central to their argument was admitting their own role in the policy they were asking to be reversed. “We did not intend U.S. support to the coalition to become a blank check,” they wrote, referring to the Saudi-led forces fighting the Houthis. “But today, as civilian casualties have continued to rise and there is no end to the conflict in sight, it is clear that is precisely what happened.” The successful procedural vote came two weeks later. A senior Democratic Senate aide told me that, while “a number of contributing factors” led to the resolution’s success, “that letter certainly helped galvanize the Democratic caucus.” It also became a selling point for Democrats in the House; Nancy Pelosi cited the letter in her statement on the need to change course in Yemen and limit Trump’s wartime powers. 

National Security Action’s role in the Yemen resolution illustrates the importance of something Democrats desperately need that Republicans have in abundance: an infrastructure of nonprofit groups staffed with substantively knowledgeable, politically plugged-in foreign policy experts. For decades the American right has had multiple ideological think tanks—the Heritage Foundation, the American Enterprise Institute, the Hudson Institute, the Foundation for the Defense of Democracies—that generate national security ideas, disseminate them to mainstream and conservative media, and drive consensus among Republican lawmakers. But when Trump took office in 2017 and began undoing Obama’s signature foreign policy accomplishments—the Iran deal and the Paris climate accord, especially—the Democrats, stuck in the minority in the House and Senate, could count on only one major liberal D.C. think tank with enough foreign affairs chops to fight back: the Center for American Progress. And while CAP’s resources are considerable—it received $40.5 million in contributions in 2016, the most recent year for which federal data is available—they’re a fraction of what’s available on the right: Heritage alone raised $79 million in 2016. 

There are plenty of think tanks that elected officials can go to for nuanced views and balanced proposals on foreign and military affairs: the Brookings Institution, the Center for Strategic and International Studies, the Carnegie Endowment for International Peace. But these organizations typically try to stay above the partisan fray and don’t much involve themselves in the rough-and-tumble of lobbying and advocacy on Capitol Hill. 

Barack Obama and Ben Rhodes
President Barack Obama confers with Ben Rhodes, Deputy National Security Advisor for Strategic Communications, in the Oval Office, Sept. 10, 2014. (Official White House Photo by Pete Souza) Credit: Obama White House/Flickr

That’s where the more ideological think tanks and groups come in, and where National Security Action has found a lane for itself. One of its biggest missions is to create the connective tissue between the left’s brightest foreign policy thinkers and day-to-day Democratic politics. “There has always been a significant gap in the progressive infrastructure around national security that connected people who were experts and practitioners with the political debate,” Rhodes told me. National Security Action’s staff of fourteen organizes near-daily briefings with the press and members of Congress; helps its advisory board members place op-eds and make TV appearances; and coordinates with much larger liberal advocacy groups, like MoveOn and Indivisible, when they need national security expertise and talking points for Democratic candidates, as they did during the 2018 midterms.

But while National Security Action is filling a long-term problem, it is only positioning itself as a short-term solution. “We did set ourselves up as a temporary organization dealing with the emergency moment of the Trump presidency,” Rhodes said. “With the idea being, if things go well, this type of organization is not needed if a Democratic administration takes office.” He and Sullivan have raised the funds, they said, for the group to operate for just three years, from 2017 to 2020. 

In other words, the good news for Democrats is that there’s finally an organization filling what has been a major vacuum in the party’s foreign policy infrastructure. The bad news is that they’re already planning to go out of business. 

It is extremely difficult to get a party’s lawmakers on the same page when it comes to foreign policy and national security. Members of Congress are, by nature, generalists. They have to vote on every issue under the sun—from health care to federal budgets to judicial nominees. While a few have a natural interest in foreign affairs and angle for seats on the Intelligence and Foreign Relations Committees, the vast majority do not. 

The right caught on to this reality a long time ago. Frustrated in the 1970s by what they considered the dovish tendencies of the foreign policy establishment—especially the Democrats who controlled Congress at the time—conservatives created their own more hawkish foreign policy infrastructure to exist outside of government. The Heritage Foundation was established in 1973, while a number of existing think tanks, like AEI, began pumping huge sums of cash into the operation with the idea of creating a conservative “marketplace of ideas.” This work paid dividends when Ronald Reagan became president. Heritage played a critical role in helping the Reagan administration vastly expand the military and take a more confrontational approach to the Soviet Union. By 1986, Time magazine called Heritage “the foremost of the new breed of advocacy tanks.” 

Democrats, meanwhile, had no equivalent organizations. That’s one reason why Clinton’s administration struggled early on to articulate coherent strategies for the major foreign crises of the time. By its second term, however, the Clinton White House had found its foreign policy groove. It scored key successes in expanding the NATO alliance, negotiating a treaty that brought peace to Northern Ireland, and orchestrating military actions in Bosnia and Kosovo that ended wars in both places without losing high numbers of American soldiers—all while garnering bipartisan support for the efforts in Congress. 

But when Democrats lost the White House in 2000, the seasoned professionals who managed the national security and foreign policy portfolios for Clinton scattered to nonpartisan think tanks, universities, and law firms. No organization existed to fashion a liberal foreign policy and national security agenda, much less to work with Democratic lawmakers and liberal advocacy groups to build consensus around it. 

This became tragically apparent after September 11, when the George W. Bush administration put the party on the defensive and peeled off significant numbers of Democrats to support the Iraq invasion. As the Iraq War descended into bloody chaos, key former members of the Clinton administration—led by John Podesta and backed by frustrated liberal donors—created the Center for American Progress, which was meant to be the Democratic counter to Heritage and AEI, combining scholarly and practical expertise on a range of issues, including national security, with a keen ability to work Capitol Hill. One of the think tank’s earliest achievements was helping then Pennsylvania Representative John Murtha release his plan in 2006 for “strategic redeployment,” a phased withdrawal from Iraq.

But it quickly became clear to leading Democrats that having one big think tank, with other domestic policy issues on its plate, was not enough. So in 2006, Rand Beers, a former Marine and counterterrorism expert who had served as a senior adviser to President Clinton and to John Kerry during his 2004 presidential run, founded a new organization: the National Security Network. Its aim was to be a dedicated resource on military and foreign policy for both the media and Congress—to provide “innovative national security solutions that are both pragmatic and principled,” in the words of its mission statement. With an unpopular ongoing war in Iraq, that idea resonated with the Democratic donor class. The NSN played a key role in formulating and disseminating Democratic messaging during the 2008 election, after which Beers left to join the Obama administration. 

During the Obama years, the NSN played a crucial supporting role in helping the administration devise strategies for winding down the Iraq War and clinching the Iran nuclear deal, according to former administration officials. It was also pivotal in helping to craft the White House’s plan to broker a nuclear arms reduction treaty, known as New START, between the United States and Russia in 2010. 

But it also struggled to find the funding it needed to operate. “A very searing memory that I have from 2009 is being told by not one but multiple large individual donors, ‘Now that we’ve elected Obama, national security is solved,’ ” said Heather Hurlburt, a former State Department official and White House speechwriter in the Clinton administration who ran the NSN after Beers. In 2008, the group’s political wing took in $1.2 million in grants and contributions; in 2011, it received only around $279,000. Through an annual gala with corporate sponsors—unlike CAP and virtually every other national security think tank, the NSN refused donations from foreign governments—the organization managed to raise just enough money to keep going, Hurlburt told me. But after advisory board chair Sandy Berger, Bill Clinton’s former national security adviser, died in late 2015, the funding dried up. National Security Network shut its doors in 2016, when the broadly shared view among Democrats—and everyone else, for that matter—was that Hillary Clinton would become the next president and the task of coordinating Democratic foreign policy would be taken care of. 

After Trump’s election, Democrats were left almost exclusively with the Center for American Progress as their foreign policy infrastructure, and there is only so much one think tank can do. “The core thrust of where my program is focused is on ideas generation,” said Kelly Magsamen, a former Obama official who now heads CAP’s national security and international policy division. While her outfit sometimes engages with Capitol Hill, she said, that isn’t its main function. No organization on the left, therefore, was filling that role in 2017, when Trump began trashing NATO, palling around with Russian President Vladimir Putin, cheering on Brexit, and pulling the United States out of the Iran nuclear deal and Paris climate change agreement.

That’s why the creation of National Security Action in February 2018 was so welcome on the left. “There was this gap between CAP and other purely advocacy groups, like MoveOn, for example,” Magsamen said. “National Security Action is somewhere in between. They’re able to connect the two spaces between advocacy and political candidates and idea generation. I think there was that missing piece.”

Crucially, the group represents a fairly wide swath of opinion within the liberal coalition, rather than any specific camp. Its advisory board includes former Obama officials known for their relative hawkishness, such as Samantha Power and former National Security Adviser Tom Donilon, as well as more dovish progressives like Joe Cirincione, president of the Ploughshares Fund, and Ben Wikler, MoveOn’s Washington director. The conscious aim is to be an entity that lots of different Democrats can trust, in order to better build consensus.

The problem is that National Security Action was launched with the intent of existing only for the Trump era. “This is more about the unique nature of the moment we’re in now,” said Ned Price, National Security Action’s director of policy and communications. Indeed, when I went to visit with some of the group’s top brass in October, I couldn’t help but notice that it doesn’t even have real offices. It’s set up in a WeWork-like shared office space in D.C.’s Thomas Circle—an obvious sign of its impermanence. 

This pop-up quality is not necessarily the fault of Democratic officials and operatives, like the ones who started National Security Action. Multiple sources said the central issue has long been a lack of sustained commitment from the donor base, who tend to prefer investing in domestic initiatives and support foreign policy organizations only for specific campaigns, like the Iran deal, or during moments of panic when the Democrats suddenly and unexpectedly lose power in Washington. “When the Iraq War concluded, donors started giving to something else; when the Iran deal was secured, they did the same,” said the senior Democratic Senate aide. But right-wing donors, the aide added, are “in it for ten or more years because they understand that to create an environment conducive to these ideas, you need to think in decades, not in years.”  

Another problem—if you want to call it that—is the occasional willingness of Democratic elected officials to cut the defense budget. As a result, defense contractors don’t support liberal national security nonprofits anywhere near as generously, if at all, as they do organizations on the right. Indeed, one of the main reasons think tanks like the Heritage Foundation have been able to maintain national security and foreign policy programs for the long haul is the millions of dollars in funding they receive every year like clockwork from companies like, as in the case of Heritage, Lockheed Martin. 

The founders of National Security Action, which has relied mostly on donations from private individuals and a few grants, according to Price, understand the dilemma, and hope the organization, despite its own impermanence, can prove that groups like it should become a permanent fixture of Democratic politics. “The money is out there, but you have to build up the case for why people should allocate it in this direction,” Sullivan explained. “Our view is if we can start making it a more natural part of the Democratic muscle memory, it will be an enduring contribution going forward because this will just become one of the areas that people give money to.”

Without a sustained foreign policy infrastructure, liberals are destined to repeat the mistakes of the past—like when they had no organizations to help guide the fledgling Clinton administration, or stiffen the spines of lawmakers against the launch of the Iraq War, or orchestrate a unified front against the pro-authoritarian policies of the Trump administration. If liberals really want to get their foreign policy act together, they will need to focus on building and sustaining a permanent infrastructure even when—especially when—one of their own is in the White House. Otherwise, they will eventually have to start from scratch all over again.

The World Is Choking on Digital Pollution https://washingtonmonthly.com/2019/01/13/the-world-is-choking-on-digital-pollution/ Mon, 14 Jan 2019 01:58:29 +0000

Society figured out how to manage the waste produced by the Industrial Revolution. We must do the same thing with the Internet today.

The post The World Is Choking on Digital Pollution appeared first on Washington Monthly.


Tens of thousands of Londoners died of cholera from the 1830s to the 1860s. The causes were simple: mass quantities of human waste and industrial contaminants were pouring into the Thames, the central waterway of a city at the center of a rapidly industrializing world. The river gave off an odor so rank that Queen Victoria once had to cancel a leisurely boat ride. By the summer of 1858, Parliament couldn’t hold hearings due to the overwhelming stench coming through the windows.

The problem was finally solved by a talented engineer and surveyor named Joseph Bazalgette, who designed and oversaw the construction of an industrial-scale, fully integrated sewer system. Once it was complete, London never suffered a major cholera outbreak again.

London’s problem was not a new one for humanity. Natural and industrial waste is a fact of life. We start excreting in the womb and, despite all the inconveniences, keep at it for the rest of our lives. And, since at least the Promethean moment when we began to control fire, we’ve been contributing to human-generated emissions through advances intended to make our lives easier and more productive, often with little regard for the costs.

As industrialization led to increased urbanization, the by-products of combined human activity grew to such levels that their effects could not be ignored. The metaphorical heart of the world’s industrial capital, the Thames was also the confluence of the effects of a changing society. “Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface, even in water of this kind,” noted Michael Faraday, a British scientist now famous for his contributions to electromagnetism. 

Relief came from bringing together the threads needed to tackle this type of problem—studying the phenomenon, assigning responsibility, and committing to solutions big enough to match the scope of what was being faced. It started with the recognition that direct and indirect human waste was itself an industrial-scale problem. By the 1870s, governmental authorities were starting to give a more specific meaning to an older word: they started calling the various types of waste “pollution.”  

A problem without a name cannot command attention, understanding, or resources—three essential ingredients of change. Recognizing that at some threshold industrial waste ceases to be an individual problem and becomes a social problem—a problem we can name—has been crucial to our ability to manage it. From the Clean Air Act to the Paris Accords, we have debated the environmental costs of progress with participants from all corners of society: the companies that produce energy or industrial products; the scientists who study our environment and our behaviors; the officials we elect to represent us; and groups of concerned citizens who want to take a stand. The outcome of this debate is not predetermined. Sometimes, we take steps to restrain industrial externalities. Other times, we unleash them in the name of some other good. 


Now, we are confronting new and alarming by-products of progress, and the stakes for our planet may be just as high as they were during the Industrial Revolution. If the steam engine and blast furnace heralded our movement into the industrial age, computers and smartphones now signal our entry into the next age, one defined not by physical production but by the ease of services provided through the commercial internet. In this new age, names like Zuckerberg, Bezos, Brin, and Page are our new Carnegies, Rockefellers, and Fords. 

As always, progress has not been without a price. Like the factories of 200 years ago, digital advances have given rise to a pollution that is reducing the quality of our lives and the strength of our democracy. We manage what we choose to measure. It is time to name and measure not only the progress the information revolution has brought, but also the harm that has come with it. Until we do, we will never know which costs are worth bearing.

We seem to be caught in an almost daily reckoning with the role of the internet in our society. This past March, Facebook lost $134 billion in market value over a matter of weeks after a scandal involving the misuse of user data by the political consulting firm Cambridge Analytica. In August, several social media companies banned InfoWars, the conspiracy-mongering platform of right-wing commentator Alex Jones. Many applauded this decision, while others cried foul, alleging a left-wing conspiracy afoot in the C-suites of largely California-based technology companies.

Perhaps the most enduring political news story over the past two years has been whether Donald Trump and his campaign colluded with Russian efforts to influence the 2016 U.S. presidential election—efforts that almost exclusively targeted vulnerabilities in digital information services. Twitter, a website that started as a way to let friends know what you were up to, might now be used to help determine intent in a presidential obstruction of justice investigation.

And that’s just in the realm of American politics. Facebook banned senior Myanmar military officials from the social network after a United Nations report accusing the regime of genocide against the Muslim Rohingya minority cited the platform’s role in fanning the flames of violence. The spread of hoaxes and false kidnapping allegations on Facebook and messaging application WhatsApp (which is owned by Facebook) was linked to ethnic violence, including lynchings, in India and Sri Lanka.

Concerns about the potential addictiveness of on-demand, mobile technology have grown acute. A group of institutional investors pressured Apple to do something about the problem, pointing to studies showing technology’s negative impact on students’ ability to focus, as well as links between technology use and mental health issues. The Chinese government announced plans to control use of video games by children due to a rise in levels of nearsightedness. Former Facebook executive Chamath Palihapitiya described the mechanisms the company used to hold users’ attention as “short-term, dopamine-driven feedback loops we’ve created [that] are destroying how society works,” telling an audience at the Stanford Graduate School of Business that his own children “aren’t allowed to use that shit.”

The feculence has become so dense that it is visible—and this is only what has floated to the top. 

For all the good the internet has produced, we are now grappling with effects of digital pollution that have become so potentially large that they implicate our collective well-being. We have moved beyond the point at which our anxieties about online services stem from individuals seeking to do harm—committing crimes, stashing child pornography, recruiting terrorists. We are now face-to-face with a system that is embedded in every structure of our lives and institutions, and that is itself shaping our society in ways that deeply impact our basic values. 

We are right to be concerned. Increased anxiety and fear, polarization, fragmentation of a shared context, and loss of trust are some of the most apparent impacts of digital pollution. Potential degradation of intellectual and emotional capacities, such as critical thinking, personal authority, and emotional well-being, are harder to detect. We don’t fully understand the cause and effect of digital toxins. The amplification of the most odious beliefs in social media posts, the dissemination of inaccurate information in an instant, the anonymization of our public discourse, and the vulnerabilities that enable foreign governments to interfere in our elections are just some of the many phenomena that have accumulated to the point that we now have real angst about the future of democratic society.

In one sense, the new technology giants largely shaping our online world aren’t doing anything new. Amazon sells goods directly to consumers and uses consumer data to drive value and sales; Sears Roebuck delivered goods to homes, and Target was once vilified for using data on customer behavior to sell maternity products to women who had yet to announce their pregnancies. Google and Facebook grab your attention with information you want or need, and in exchange put advertisements in front of you; newspapers started the same practice in the nineteenth century and have continued to do it into the twenty-first—even if, thanks, in part, to Google and Facebook, it’s no longer as lucrative. 

But there are fundamental and far-reaching differences. The instantaneity and connectivity of the internet allow new digital pollution to flow in unprecedented ways. This can be understood through three ideas: scope, scale, and complexity. 

The scope of our digital world is wider and deeper than we tend to recognize. 

It is wider because it touches every aspect of human experience, reducing them all to a single small screen that anticipates what we want or “should” want. After the widespread adoption of social media and smartphones, the internet evolved from a tool that helped us do certain things to the primary surface for our very existence. Data flows into our smart TV, our smart fridge, and the location and voice assistants in our phones, cars, and gadgets, and comes back out in the form of services, reminders, and notifications that shape what we do and how we behave. 

It is deeper because the influence of these digital services goes all the way down, penetrating our mind and body, our core chemical and biological selves. Evidence is mounting that the 150 times a day we check our phones could be profoundly influencing our behaviors and trading on our psychological reward systems in ways more pervasive than any past medium. James Williams, a ten-year Google employee who worked on advertising and then left to pursue a career in academia, has been sounding the alarm for years. “When, exactly, does a ‘nudge’ become a ‘push’?” he asked five years ago. “When we call these types of technology ‘persuasive,’ we’re implying that they shouldn’t cross the line into being coercive or manipulative. But it’s hard to say where that line is.” 

Madison Avenue had polls and focus groups. But they could not have imagined what artificial intelligence systems now do. Predictive systems curate and filter. They interpret our innermost selves and micro-target content we will like in order to advance the agendas of marketers, politicians, and bad actors. And with every click (or just time spent looking at something), these tools get immediate feedback and more insights, including the Holy Grail in advertising: determining cause and effect between ads and human behavior. The ability to gather data, target, test, and endlessly loop is every marketer’s dream—brought to life in Silicon Valley office parks. And the more we depend on technology, the more it changes us.

The scope of the internet’s influence on us comes with a problem of scale. The instantaneity with which the internet connects most of the globe, combined with the kind of open and participatory structure that the “founders” of the internet sought and valorized, has created a flow of information and interaction that we may not be able to manage or control in a safe way. 


A key driver of this scale is how easy and cheap it is to create and upload content, or to market services or ideas. Internet-enabled services strive to drain all friction out of every transaction. Anyone can now rent their apartment, sell their junk, post an article or idea—or just amplify a sentiment by hitting “like.” The lowering of barriers has, in turn, incentivized how we behave on the internet—in both good and bad ways. The low cost of production has allowed more free expression than ever before, sparked new means of providing valued services, and made it easier to forge virtuous connections across the globe. It also makes it easier to troll or pass along false information to thousands of others. It has made us vulnerable to manipulation by people or governments with malevolent intent. 

The sheer volume of connections and content is overwhelming. Facebook has more than two billion active users each month. Google executes three and a half billion searches per day. YouTube streams over one billion hours of video per day. These numbers challenge basic human comprehension. As one Facebook official said in prepared testimony to Congress this year, “People share billions of pictures, stories, and videos on Facebook daily. Being at the forefront of such a high volume of content means that we are also at the forefront of new and challenging legal and policy questions.” 

Translation: We’re not sure what to do either. And instead of confronting the ethical questions at stake, companies often define incremental policies based on what technology can do. Rather than technology serving actual human needs, people and society evolve toward what digital technology will support. 

The third challenge is that the scope and scale of these effects relies on increasingly complex algorithmic and artificial intelligence systems, limiting our ability to exercise any human management. When Henry Ford’s assembly line didn’t work, a floor manager could investigate the problem and identify the source of human or mechanical error. Once these systems became automated, the machines could be subjected to testing and diagnostics and taken apart if something went wrong. After digitization, we still had a good sense of what computer code would produce and could analyze the code line by line to find errors or other vulnerabilities.

Large-scale machine-learning systems cannot be audited in this way. They use information to learn how to do things. Like a human brain, they change as they learn. When they go wrong, artificial intelligence systems cannot be seen from a God’s-eye view that tells us what happened. Nor can we predict exactly what they will do under unknown circumstances. Because they evolve based on the data they take in, they have the potential to behave in unexpected ways.

Commercial forces are taking basic questions out of our hands. It is treated as inevitable that there must be billions of posts, billions of pictures, billions of videos. The focus is on business: more users, more engagement, and greater activity.

Taken together, these three kinds of change—the scope of intertwining digital and non-digital experience, the scale and frequency leading to unprecedented global reach, and the complexity of the machines—have resulted in impacts at least as profound as the transition from agricultural to industrial society, over a much shorter period of time. And the very elements that have made the internet an incredible force for good also come together to create new problems. The shift is so fundamental that we do not really understand the impacts with any clarity or consensus. What do we call hate speech when it is multiplied by tens of thousands of human and nonhuman users for concentrated effect? What do we call redlining when it is being employed implicitly by a machine assigning thousands of credit ratings per second in ways the machine’s creator can’t quite track? What do we call the deterioration of our intellectual or emotional capacities that results from checking our phones too often? 

We need a common understanding, not just of the benefits of technology, but also of its costs—to our society and ourselves. 

Human society now faces a critical choice: Will we treat the effects of digital technology and digital experience as something to be managed collectively? Right now, the answer being provided by those with the greatest concentration of power is no. 

The major internet companies treat many of these decisions as theirs, even as CEOs insist that they make no meaningful decisions at all. Jack Dorsey warned against allowing Twitter to become a forum “constructed by our [Twitter employees’] personal views.” Mark Zuckerberg, in reference to various conspiracy theories, including Holocaust denialism, stated, “I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.” 

These are just the explicit controversies, and the common refrain of “We are just a platform for our users” is a decision by default. There can be no illusions here: corporate executives are making critical societal choices. Every major internet company has some form of “community standards” about acceptable practices and content; these standards are expressions of their own values. The problem is that, given their pervasive role, these companies’ values come to govern all of our lives without our input or consent.

Commercial forces are taking basic questions out of our hands. We go along through our acceptance of a kind of technological determinism: the technology simply marches forward toward less friction, greater ubiquity, more convenience. This is evident, for example, when leaders in tech talk about the volume of content. It is treated as inevitable that there must be billions of posts, billions of pictures, billions of videos. It is evident, too, when these same leaders talk to institutional investors in quarterly earnings calls. The focus is on business: more users, more engagement, and greater activity. Stagnant growth is punished in the stock price. 

Commercial pressures have shaped how the companies providing services on the internet have evolved. Nicole Wong, a former lawyer for Google (and later a White House official), recently reflected during a podcast interview on how Google’s search priorities changed over time. In the early days, she said, it was about getting people all the right information quickly. “And then in the mid-2000s, when social networks and behavioral advertising came into play, there was this change in the principles,” she continued. After the rise of social media, Google became more focused on “personalization, engagement . . . what keeps you here, which today we now know very clearly: It’s the most outrageous thing you can find.”  

The drive for profits and market dominance is instilled in artificial intelligence systems that aren’t wired to ask why. But we aren’t machines; we can ask why. We must confront how these technologies work, and evaluate the consequences and costs for us and other parts of our society. We can question whether the companies’ “solutions”—like increased staffing and technology for content moderation—are good enough, or if they are the digital equivalent of “clean coal.” As the services become less and less separable from the rest of our lives, their effects become ever more pressing social problems. Once London’s industrial effluvia began making tens of thousands fall ill, it became a problem that society shared in common and in which all had a stake. How much digital pollution will we endure before we take action?

We tend to think of pollution as something that needs to be eradicated. It’s not. By almost every measure, our willingness to tolerate some pollution has improved society. Population, wealth, infant mortality, life span, and morbidity have all dramatically trended in the right direction since the industrial revolution. Pollution is a by-product of systems that are intended to produce a collective benefit. That is why the study of industrial pollution itself is not a judgment on what actions are overall good or bad. Rather, it is a mechanism for understanding effects that are large enough to influence us at a level that dictates we respond collectively.

We must now stake a collective claim in controlling digital pollution. What we face is not the good or bad decision of any one individual or even one company. It is not just about making economic decisions. It is about dispassionately analyzing the economic, cultural, and health impacts on society and then passionately debating the values that should guide our choices—as companies, as individual employees, as consumers, as citizens, and through our leaders and elected representatives.

Hate speech and trolling, the proliferation of misinformation, digital addiction—these are not the unstoppable consequences of technology. A society can decide at what level it will tolerate such problems in exchange for the benefits, and what it is willing to give up in corporate profits or convenience to prevent social harm.  

We have a model for this urgent discussion. Industrial pollution is studied and understood through descriptive sciences that name and measure the harm. Atmospheric and environmental scientists research how industrial by-products change the air and water. Ecologists measure the impact of industrial processes on plant and animal species. Environmental economists create models that help us understand the trade-offs between a rule limiting vehicle emissions and economic growth. 

We require a similar understanding of digital phenomena—their breadth, their impact, and the mechanisms that influence them. What are the various digital pollutants, and at what level are they dangerous? As with environmental sciences, we must take an interdisciplinary approach, drawing not just from engineering and design, law, economics, and political science but also from fields with a deep understanding of our humanity, including sociology, anthropology, psychology, and philosophy. 

To be fair, digital pollution is more complicated than industrial pollution. Industrial pollution is the by-product of a value-producing process, not the product itself. On the internet, value and harm are often one and the same. It is the convenience of instantaneous communication that forces us to constantly check our phones out of worry that we might miss a message or notification. It is the way the internet enables more expression than at any point in human history that also amplifies hate speech, harassment, and misinformation. And it is the helpful personalization of services that demands the constant collecting and digesting of personal information. The complex task of identifying where we might sacrifice some individual value to prevent collective harm will be crucial to curbing digital pollution. Science and data inform our decisions, but our collective priorities should ultimately determine what we do and how we do it. 

The question we face in the digital age is not how to have it all, but how to maintain valuable activity at a societal price on which we can agree. Just as we have made laws about tolerable levels of waste and pollution, we can make rules, establish norms, and set expectations for technology. 

Perhaps the online world will be less instantaneous, convenient, and entertaining. There could be fewer cheap services. We might begin to add friction to some transactions rather than relentlessly subtracting it. But these constraints would not destroy innovation. They would channel it, driving creativity in more socially desirable directions. Properly managing the waste of millions of Londoners took a lot more work than dumping it in the Thames. It was worth it.

The post The World Is Choking on Digital Pollution appeared first on Washington Monthly.

The FTC Might Just Be Progressives’ Secret Weapon https://washingtonmonthly.com/2019/01/13/the-ftc-might-just-be-progressives-secret-weapon/ Mon, 14 Jan 2019 01:54:42 +0000 https://washingtonmonthly.com/?p=91490

A century ago, reformers gave the Federal Trade Commission extraordinary powers to take on abusive corporations. It’s time to wake the agency up.

The post The FTC Might Just Be Progressives’ Secret Weapon appeared first on Washington Monthly.


It is by now widely acknowledged that the United States has a corporate concentration crisis. Sectors across the economy—from agriculture to airlines to online search to pharmaceuticals to telecommunications—are dominated by a handful of giant corporations, and the trend is only accelerating. Globally, 2017 saw the most mergers and acquisitions in history; as of this writing, 2018 was on pace to set new records. Think AT&T and Time Warner, CVS and Aetna, Disney and 21st Century Fox, and so on. The consequences of all this consolidation include lower wages, extortionate health care prices, and government at every level beholden to big business. It’s great for corporate executives and shareholders, who are enjoying record profits—and terrible for most everyone else.

The trend derives primarily from radical policy shifts during the Reagan administration combined with the activism of conservative judges who, applying cartoonish right-wing economic hypotheses, intentionally reinterpreted antitrust laws—which were designed to limit monopolies and consolidation—in ways that favor monopolies and encourage mergers. With scores of Trump-appointed judges coming on board and Brett Kavanaugh ensconced at the Supreme Court, that phenomenon is likely to get even worse. It now looks like we may be stuck for another generation with a federal judiciary that is ideologically opposed to the government doing anything about the increasing dominance of corporate goliaths. 

Yet a powerful anti-monopoly instrument already exists and is waiting to be used. It turns out that progressives faced a similar impasse more than a century ago, in the wake of the original Gilded Age, when they saw their attempts to fight back against monopolies thwarted by a comparably reactionary Supreme Court. The solution they crafted involved the creation of an institution with the explicit legal authority to prevent and undo concentrated corporate power. That institution is in sad shape today; indeed, it perversely devotes resources to attack the very people it was designed to protect. But it remains a potentially potent weapon. With the right leadership and prodding from Congress, it could once again be used to strike back at the monopolists who are choking off America’s democracy and economy, and overcome the judicial forces protecting them. 

America, meet the Federal Trade Commission. 

The origins of the FTC date to 1911, when the Supreme Court ordered that John D. Rockefeller’s Standard Oil be broken up for violating the Sherman Antitrust Act. That decision is often remembered as a great triumph over plutocracy, but, in a key way, it was actually the opposite. While the Court found that Standard Oil had engaged in specific abusive and unfair practices, it also held that there was nothing wrong or per se illegal about being a monopoly. So long as a business didn’t engage in “unreasonable” behavior, as determined by the courts, it was free to grow without limit in scale and dominance.

Credit: Federal Trade Commission

The Court’s reasoning enraged populists and progressives, who argued that private, unregulated monopoly was inherently dangerous to American democracy and accused the conservative justices of ignoring Congress’s clear intention in passing the Sherman Act. In response, Congress in 1914 enacted two landmark statutes. First was the Clayton Antitrust Act, which updated the Sherman Act by explicitly outlawing a range of practices, including mergers that could reduce competition. Second was a law that created a powerful new agency called the Federal Trade Commission. 

Governed by five commissioners who are appointed by the president and subject to Senate confirmation, the FTC was designed to make sure that judges would never again undermine anti-monopoly laws. Congress specifically gave the FTC the explicit authority not merely to prohibit “unfair methods of competition,” but also to define what counts as “unfair.” Under Section 5 of the Federal Trade Commission Act, the agency can use its broad investigatory capacity to build expertise on new business practices and industries and update its own powers over time. 

In delegating this authority to the FTC, Congress sought to guarantee that antitrust policy would be made by a body required to answer to the public. One senator supporting the law stressed that the agency would be independent—commissioners are appointed to fixed terms and can only be fired for cause, but the FTC would also be accountable to Congress for its funding and direction: “I would rather take my chance with a commission at all times under the power of Congress, at all times under the eye of the people…than…upon the abstract propositions, even though they be full of importance, argued in the comparative seclusion of the courts.” The principal Senate sponsor of the FTC Act aimed to create an agency that would be a “servant of Congress.”

The FTC’s early years produced a mixed record. In 1919 it published a landmark report on the meat-packing industry, laying the groundwork for a breakup of the dominant meat-packers and for legislation to ensure competitive markets in livestock. But in the 1920s, progressives lost control of Congress and the White House, and the FTC became cautious about using its great statutory power to regulate big business. 

In the 1930s, Democrats moved to force the agency to once again stand up to concentrated private power. In 1933, populist Texas Representative Wright Patman cut $300,000 from the FTC’s budget and threatened to abolish the agency, complaining that it had strayed from its “very useful purpose” and become “the most useless board in existence in Washington.” That same year, President Franklin Roosevelt went so far as to fire one overly business-friendly commissioner—a move that was overruled in court, but that got the message across. Later, the New Dealers turned to the Antitrust Division of the Department of Justice as the main tool for fighting monopoly. But for fifty years, the FTC provided vital support to the DOJ in the fight against monopoly, using its Section 5 power to stop business practices that threatened competitive market structures in industries like auto parts, footwear, and even movie trailers. In the 1960s, the agency took a lead in efforts to stop conglomerate mergers that would have consolidated unrelated lines of business under one firm. In 1975, it settled a case against Xerox that ended its dominance in photocopiers and helped unleash competition and innovation in the then-emerging market.

The FTC also took a leading role in protecting independent retailers against the power of large manufacturers and giant chains. In the 1930s, the agency successfully sued the A&P grocery chain—arguably the Walmart of its time—for using its power to squeeze suppliers and obtain discounts that rival grocery stores didn’t get. A decade later, the government sued A&P again, ultimately forcing it to break its food-trading arm off from its retail division.

Over the years, liberals and populists nonetheless often expressed frustration with the FTC. In the early 1950s, Patman’s House Small Business Committee criticized both FTC leadership and case selection. In the 1960s, an American Bar Association report faulted the agency for being chronically inefficient, failing to develop priorities, and pursuing trivial cases. A 1969 report by Ralph Nader’s study group was even more scathing.

But if the FTC in this era rarely used its full statutory authority to overcome hostile courts and tackle corporate concentration, that’s mostly because it didn’t have to. Beginning with the New Deal, and especially under the leadership of Chief Justice Earl Warren, the Supreme Court applied Congress’s economic and political objectives for the antitrust laws and supported vigorous antitrust enforcement and other progressive economic regulation. Working with a friendly judiciary, the FTC, along with the Department of Justice and other federal regulators, helped reduce corporate concentration across the economy through the early 1980s.

But a backlash was building among business interests and the conservative judges and academics they supported. In the 1960s and ’70s, a generation of economists and law professors associated with the University of Chicago, most notably Robert Bork, radically reinterpreted antitrust law in a way designed to reverse the post-1930s progress. Although the Nixon administration generally enforced anti-monopoly law with vigor, the judicial conservatives Nixon appointed to the Supreme Court had been converted to the Chicago School’s pro-monopoly worldview. In fact, that approach was so influential in elite legal circles that even Democratic-appointed judges have largely gone along with it.

To see how profound this shift in legal philosophy was, consider two different pronouncements from the bench. In 1958, the Supreme Court described the first federal anti-monopoly law, the Sherman Act, as “a comprehensive charter of economic liberty” designed to promote “an environment conducive to the preservation of our democratic political and social institutions.” By contrast, in 2004, a majority opinion by Justice Antonin Scalia—joined by liberal Justices Stephen Breyer and Ruth Bader Ginsburg—asserted that monopoly “is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts ‘business acumen’ in the first place; it induces risk taking that produces innovation and economic growth.” In interpreting a law that Congress intended to prohibit monopolization, the Supreme Court was paying tribute to the virtues of monopoly. According to Scalia’s reasoning, society needed to be protected from the risk of too few opportunities for businesses to monopolize markets. 

Unfortunately, in the years since the Reagan administration embraced Robert Bork’s pro-monopoly approach to competition policy, the FTC has largely failed to push back. Today, the FTC, along with the DOJ’s Antitrust Division, operates on the assumption that corporate mergers typically generate economies of scale and other productive efficiencies and thereby lower prices to consumers. (A partial exception to this rule was when Bill Clinton appointee Robert Pitofsky chaired the FTC between 1995 and 2001.)

Nor has the FTC bestirred itself to break up or tame the giant monopolies that dominate more and more of the economy. Following a lengthy probe of Google for exclusionary and other anti-competitive practices in the search market, the FTC in early 2013 opted not to file suit and instead accepted “voluntary commitments” of good behavior from the tech giant. An inadvertent leak of an internal FTC memo in 2015 indicated that legal staff had recommended suing Google but were overruled by the commissioners.

Worse, even as it has accommodated corporate power, the FTC has trained its guns on the little guy. The agency has filed dozens of lawsuits against independent contractors for engaging in collective bargaining and other concerted activity. Targets of the FTC’s recent anti-labor actions have included church organists, ice skating coaches, music teachers, and public defenders. The agency has even weighed in against state and local collective bargaining rights for home health aides and ride-sharing drivers, and—straying far outside its statutory mandate—has devoted significant resources to opining on state and local occupational licensing rules that provide a way for blue- and pink-collar workers to earn a decent living. (Phillip Longman recently explored this phenomenon in more detail.)

All this may sound like a pretty strong brief for just blowing the FTC up. But today’s progressives should remember that, thanks to the progressives of yesteryear, the agency still has extraordinary power to make antitrust policy thanks to its statutory authority to identify and prosecute “unfair methods of competition.” In the right hands, the FTC could become just the tool we need today to roll back decades of consolidation and monopolization. 

The agency’s expansive mandate, combined with well-established legal doctrine that instructs courts to defer to federal agencies that administer open-ended statutes, gives the agency effective legislative power to regulate and structure nearly every kind of market. Big business lobbyists and conservatives in Congress—and even Republican-appointed FTC commissioners—have long known this, which is why they have repeatedly demanded that the agency’s Section 5 authority be somehow curbed, if not by changes in law, then by the appointment of commissioners and staff who promise never to use it. Now some progressives are waking up to the reality of the FTC’s potential greatness as well. In early September, Rohit Chopra, one of two Democrats on the five-member commission, urged his colleagues to use their authority to crack down on harmful business practices.

Large corporations and their trade associations would almost surely challenge any such effort by the FTC, and today’s conservative Supreme Court majority may be tempted to block it. But doing so would require a particularly brazen effort to ignore not only the clear intent of Congress, but two of the Court’s own precedents. In its most recent pronouncements on the FTC’s statutory power, the Court has ruled that “courts are to give some deference to the Commission’s informed judgment” and that the FTC can consider “public values beyond simply those enshrined in the letter or encompassed in the spirit of the antitrust laws.” In other words, according to the Supreme Court itself, FTC commissioners have ample room to restrict corporate mergers and harmful business practices (like below-cost pricing and employee non-compete clauses) as they see fit. 

But first the FTC must be convinced to act. Already, the Democratic-controlled House alone can credibly threaten the agency’s funding if it doesn’t live up to its mission. The House, for instance, can use both appropriations and public hearings to pressure the FTC to stop prosecuting the church organists of the world, to concentrate on harmful mergers, and to take action against other anticompetitive corporate practices. If the agency still doesn’t get the message, Congress can redirect its budget to state attorneys general, many of whom—such as newly elected Keith Ellison of Minnesota—might be eager to take on abusive corporate power if they had the necessary resources. 

A progressive Congress created the FTC to protect our democracy and economy from the crippling force of concentrated corporate power at a time when the courts were making that task impossible. A century later, we find ourselves in the same position. But this time, progressives don’t need to design a new monopoly-fighting weapon—it already exists. Congress just needs to be reminded how to use it.

Credit: Casmoe Photography. The enforcer: Rohit Chopra, one of two Democrats on the five-member FTC, has urged his colleagues to use their authority to crack down on harmful business practices.
The Forgotten Lessons of LBJ’s Domestic Legacy https://washingtonmonthly.com/2019/01/13/the-forgotten-lessons-of-lbjs-domestic-legacy/ Mon, 14 Jan 2019 01:51:52 +0000 https://washingtonmonthly.com/?p=91619

From food stamps to Medicare, Johnson’s Great Society offers a model for how to create enduring federal initiatives by pulling together broad, diverse coalitions.

The post The Forgotten Lessons of LBJ’s Domestic Legacy appeared first on Washington Monthly.


The first time I taught an undergrad American history survey course, I tried an experiment. Finding myself with extra room in the schedule, I slid a heading into the syllabus: “LBJ and the Great Society.” I opened the lesson as I would go on to open each class on LBJ’s bid to change America. “Who heard of the New Deal before this course?” I asked. Most students’ hands shot up. “What about the Great Society?” Only two hesitant hands. 

Then I asked, “Who can name a New Deal program that directly touches on your lives or those of your close family?” “Medicare?” one student ventured. “Nope,” I answered, to some surprise—that wasn’t part of the New Deal. Students started flipping back through their notes, ruling out New Deal programs like the National Recovery Administration, Agricultural Adjustment Administration, and Civilian Conservation Corps. Finally they got Social Security—but that was about it. And so it goes every time I teach the lesson.

“Anybody here ever watch Sesame Street?” I ask. “Mister Rogers? Anybody listen to NPR?” Hands go up. “Anybody know someone on Medicaid or Medicare?” More hands. “Anybody know someone who had a subsidized school lunch? How about a school library?” Most of the class is usually raising a hand at this point, but still, I ask the kicker: “Anybody here on work study? Does anybody have student loans?” Now all the hands are up. Here, I tend to pause for dramatic effect. “Folks, welcome to the Great Society.”

Today’s conventional wisdom about Lyndon B. Johnson’s presidency tends to see a liberalism that largely failed due to domestic policy overreach and the administration’s disastrous handling of the Vietnam War, and that set in motion two generations of reactionary politics. While this view unquestionably has truth to it, the collective liberal recoil from Johnsonian initiatives has obscured an important parallel legacy: even with its limitations, the Great Society—Johnson’s ambitious project, launched in 1964, to expand on his hero Franklin Delano Roosevelt’s New Deal—offers a text in how to create consequential, popular, and above all enduring federal initiatives. Beyond the Johnson of the Vietnam War, there was Johnson the visionary and master legislator—a leader who didn’t just play to his base, as politicians of both parties tend to do today, but who knew how to use his skills as backroom deal maker to build broad coalitions and get big legislation passed. 

Johnson’s broker-style liberalism had largely disappeared by the time today’s rising generation of progressives came along. But for those who want to see big government do big things, whether about climate change or health care or inequality, Johnson’s legacy offers important lessons. These are in many ways more relevant to today’s circumstances than FDR’s New Deal, which arose out of the historically unique conditions of the Great Depression. Johnson understood that in a culturally fractured and polarized America, lasting political achievements are built through coalitions in which diffuse groups have their own reasons for supporting some common outcome. Programs that benefit one group and rely on everyone else’s continued altruism are easy to dismantle, and, indeed, some Great Society programs succumbed to that fate. But Johnson’s most enduring achievements—including Medicare, food stamps, school lunches, and federal student loans—have survived a half century of Republican assaults precisely because they were designed to give groups otherwise divided by cultural and economic interests different reasons to fight for them.

Johnson’s most enduring achievements—including Medicare, food stamps, and federal student loans—have survived a half century of Republican assaults because they were designed to give otherwise divided groups different reasons to fight for them.

This lesson is worth recovering in today’s political moment. First, Democrats need to figure out how to expand their base in order to both win and keep power and, more importantly, to pass major legislation that is adequate to the problems we face. Practically speaking, that means building coalitions of people drawn from, among other groups, rural, white Americans in order to regain control of the Senate. Second, any ambitious progressive policy is going to have to get buy-in from the various groups that make up the existing Democratic coalition: minorities, college-educated whites, wealthy suburban moderates, and so on. This goes beyond the eternal argument over idealism versus pragmatism. Realizing big, ambitious idealistic goals requires pragmatic coalition building with various constituencies who each have different goals and interests. 

It is important to keep in mind that Johnson’s most lasting and influential programs involved trade-offs and imperfections. But as today’s would-be progressive reformers, looking ahead to 2020 and beyond, think about how to achieve ambitious policies that create new coalitions and survive conservative backlash, they would do well to ask themselves: What would LBJ do? 

Perhaps the purest display of Johnson’s coalitional approach was in his creation of two major federal nutrition programs aimed at helping the poor: food stamps and expanded federal school lunches. These programs were—and still are—built on a coalition of interests linking agricultural policy and food availability. Food stamps were initially challenging because they replaced a New Deal–era program, popular with farmers, in which the government bought agricultural surpluses to stabilize commodity prices and used those surpluses to feed hungry people. While this did wonders for agricultural pricing, it did less to provide the poor with reliable access to healthy food. Urban interests in the liberal coalition favored food stamps as an alternative, because instead of being dependent on whatever farmers happened to have left over, poor people would be better able to buy what they actually needed. 

The key was to bring in the reluctant farmers. To do so, Johnson paired food stamps with an agricultural subsidy bill. The message to farmers was clear: If you support nutrition reform, we will support you. The pairing worked; the Food Stamp Act of 1964 sailed through the Senate and squeaked by in the House, drawing support from Democrats representing urban, suburban, and rural areas. 

Who came up with this delicate balancing act? The sponsor of the House food stamp bill, Missouri Democrat Leonor Kretzer Sullivan, gave credit to Johnson. At the time, Congressional Quarterly described the bill’s success as “largely . . . the result of skillful legislative maneuvering in the House.” The program not only navigated the fraught ground between urban and rural interests, it also addressed concerns, including among conservative southern Democrats, over a ballooning federal government. While the federal government oversaw the program, certification of eligible families was delegated to state and local governments; food stamp programs would only be established in communities at the request of the states. The trade-off is that this structure gives conservative-controlled states more freedom to restrict eligibility or make the benefits less generous.

Establishing an urban-agricultural coalition in support of federally funded school lunches, Johnson’s second major food policy achievement, was somewhat easier, because careful tooling of the nutritional requirements offered farmers an outlet for surplus crops and propped up the dairy industry through explicit provisions mandating milk. Again, this structure has shortcomings: its attention to farmers’ bottom line has yielded a national diet that privileges grains, starches, and cheap sugars, contributing to epidemic obesity and related health problems. But the coalition linking federal nutrition programs and America’s farmers has made the programs impossible for conservatives to eliminate. To this day, nutrition support is negotiated through the farm bill process, linking one of the most dependably conservative demographics with one of the most progressive. This past fall, a Republican effort to impose stricter work requirements on food stamps failed; Senate Democrats wouldn’t let what CNN referred to as “the massive, must-pass farm bill” through until Republicans agreed to nix that move. 

A tightly knit and largely impermeable coalition similarly helps explain the durability of Medicare, Johnson’s weightiest legislative achievement. The legislation, created via amendments in 1965 to the Social Security Act, struck what would become a grand bargain between proponents of nationalized health care—the Democrats’ long-standing white whale—and other interests, most notably conservatives who were ideologically opposed to “socialized medicine,” and the already powerful medical-industrial complex, which sought to protect the income of doctors, hospitals, drug companies, and medical device makers against government-imposed cost controls.

Johnson first enlisted predictable allies in his push. The major unions had already announced their support for health care for the elderly; Johnson turned next to the leadership of the civil rights movement. He did this in part by designing the law in a way that would explicitly promote racial equality: any hospital that received Medicare funds—and for poor, rural hospitals in the South, these funds were a potential lifesaver—could no longer practice segregation among staff or patients. Speaking in Chicago, Martin Luther King Jr. critiqued the American Medical Association’s reservations as preserving racial discrimination. King accused medical organizations of a “conspiracy of inaction,” complaining that “of all the forms of inequality, injustice in health care is the most shocking and inhumane.” The NAACP and the black National Medical Association (the AMA was segregated) campaigned on behalf of the legislation.


But Johnson knew that he had to bring less-likely constituencies into the tent as well. Wilbur J. Cohen, an administration official who as a young policy staffer during the New Deal had helped develop Social Security, came up with a plan that he thought could satisfy fiscal conservatives, free market enthusiasts, and liberals alike. He described it as a “three-layer cake”: The first layer was a mandatory social insurance program called Medicare Part A, which would provide hospital insurance to people sixty-five and older who paid into the system during their working years (with the first generation of beneficiaries getting a free ride). The second layer was a voluntary program, financed by individual premiums and general revenues, called Medicare Part B, which would cover the cost of physician services. And third was what would come to be called Medicaid: a means-tested program, financed by both state and federal general revenues, that the states could use to offer various levels of health care benefits to low-income individuals, mostly poor children and the elderly. 

Finally, Johnson had to get the medical-industrial complex to go along as well. He did this primarily by promising that the government wouldn’t tell health care providers how to practice medicine or how much they could charge. Medicare would pay doctors their “usual and customary” charges for any procedure, no questions asked, while reimbursing hospitals for their self-described “reasonable” costs. To sweeten the deal further, Medicare would largely assume the cost of financing residency programs and other forms of graduate medical education, thereby giving teaching hospitals and medical schools a lucrative subsidy. Medicare would also be expressly forbidden from negotiating lower prices with drug companies. The legislation passed its final House vote by an overwhelming margin of 313 to 115.

These provisions led to an explosion in health care costs, so much so that many policy wonks, as well as ordinary Americans, concluded in subsequent years that expanding Medicare to the population under sixty-five would be prohibitively expensive. But at the same time, the large subsidies brought the medical-industrial complex into a coalition with seniors that made cutting Medicare politically impossible. Medicare remains one of the most popular federal programs; public support for it routinely tops 70 percent in opinion polling, indicating that support is not only wide but also bipartisan. And while it’s impossible to know whether the bargain cost the nation a shot at a true single-payer system, the fact of Medicare’s popularity and durability is now being used as Exhibit A in the renewed push for universal health care. 

Johnson’s reforms to higher education, which may have had an even greater impact on American life than Medicare, were likewise structured around coalitional interests. The Higher Education Act of 1965 dramatically changed and expanded federal involvement in post-secondary education, particularly through the federal student loan program. Under this program, the federal government promised private banks that it would insure 100 percent of any losses on student loans the banks might make. This instantly turned banks into allies of the program instead of enemies and unleashed a torrent of bank lending to students. Institutions of higher learning also became instant allies, since the student loan program allowed them to attract more students and charge higher tuition. Eventually, of course, student loan debt spiraled into a major problem, while the availability of unlimited federal money is partly responsible for the skyrocketing sticker price of a college degree. The Obama administration would eventually largely replace federal loan guarantees to private banks with a system of direct lending. But, despite its engineering flaws, Johnson’s coalition building established a precedent for federal involvement in financing higher education that has lasted to this day. 


Or consider legislation to fund public broadcasting, which Johnson saw as a vehicle to bring early childhood education and high culture to American hinterlands. He brought in a broad coalition of interests including the Audubon Society, supporters of the arts, early education advocates, anti-commercial interests, representatives of areas with limited broadcasting markets—mostly rural, sparsely populated communities—and public radio advocates, who joined so late in the process that the words “and radio” were famously added with Scotch tape alongside “television” in the text of the bill. Once again, urban-rural alliances that knit together different demographics were paired with other cross-cutting interests. Rather than presenting the program as a federal, centralizing authority, much of the support went to intensely local pursuits, supporting regional television and radio initiatives that might not have reached a broad enough base to succeed on the open market. Johnson’s bill won extensive Republican support in the House and Senate and was approved by overwhelming majorities in both chambers. Although Republican administrations starting with Richard Nixon’s would actively seek to reduce federal involvement in nonprofit broadcasting, they have mostly been unsuccessful. 

Contrast these programs with ill-fated, more narrowly targeted initiatives. The great expansion of welfare spending that came as part of Johnson’s “War on Poverty” would wind up being rolled back under the Clinton administration. Other initiatives, like Legal Services, barely made it through the Reagan years, while the Community Action Program and others remain chronically underfunded due to lack of coalitional support. The Model Cities Program was created to solve the problems of urban America but failed to enlist other stakeholders; it was shut down in 1974. 

To be clear, none of Johnson’s coalition-based initiatives can be said to be perfect public policy. All involved trade-offs and limitations. What they illustrate, however, is permanence. Republicans beginning with Nixon have tried to chip away at Johnson’s domestic legacy, an effort that became a full-scale frontal assault under Ronald Reagan, who wrote in his diary on January 28, 1982, that he was “trying to undo the ‘Great Society.’ ” Despite these and more recent attempts, however, Johnson’s coalitions have proven difficult to destroy, gaining resilience from the wide geographic, cultural, and economic distribution of their appeals. 

The Johnson administration’s most enduring domestic programs offer a window of possibility into what progressive policymaking could look like, even with a Democratic Party divided between centrists and progressives, whites and people of color, red-staters and blue-staters, and urban and rural folk. The key is to think beyond the idea of “finding common ground” and look for areas in which divergent interests draw different benefits from a given initiative. Modern policies borrowing from Johnson’s playbook could create sustainable urban-rural alliances—something desperately needed to push back against the staggering contemporary levels of geographic polarization.

One example explored elsewhere in this issue is the alignment of interests between food growers and food consumers created by the predatory role of monopolistic agribusinesses in both driving down farm prices and driving up grocery store prices. Similarly, mergers and the spread of corporate medicine equally threaten struggling rural communities and impoverished inner-city neighborhoods that are trying to keep their local hospitals from closing. 

Or consider the coalitions potentially created by an old-school infrastructure project: expanded mid-range rail passenger service. Properly done, it not only provides high-end business travelers with a way to get between major metropolises—such as Boston to New York or New York to Washington, D.C.—but it can also provide vital connectivity to the midsize cities and small towns in between, like New London, Connecticut, or Wilmington, Delaware, that would otherwise become increasingly isolated. There’s a reason why Republicans have never been able to kill Amtrak, particularly its Northeast Corridor service. A Johnson-style approach to infrastructure would begin by trying to design projects that would rope together as many disparate stakeholders as possible. 

Addressing climate change—one of the paramount policy challenges of our time—could also be framed around coalitional interests. Urban sophisticates may have their own reasons for favoring green energy, but for farmers, installing things like wind turbines or solar panels can dramatically increase land values, bringing a much-needed source of revenue into struggling parts of the country. 


Or take internet access, a major issue for rural voters. A 2016 study found that some 39 percent of rural Americans (compared to 5 percent of urban Americans) had no access to an internet service that allows users to telecommute, take online college courses, or watch streaming content. The people who most need the capacity to telecommute to distant employers or take remote classes are, perversely, the least likely to have the online infrastructure to do so. While hardly the panacea for the troubles facing rural America that D.C. technocrats sometimes make it out to be, fixing this disparity would be an obvious part of any broader rural economic agenda. The problem is that the installation and maintenance of broadband in remote areas is not a cost-effective prospect for private industry. 

But think about who else could benefit. Companies that provide internet-based services—especially streaming media powerhouses like Netflix and Amazon—have an interest in increasing their potential client base. Employers and educators, concentrated in metropolitan areas, would be able to access a new pool of available employees, and a federal broadband program could also address the residual 5 percent of disproportionately poor urban residents who still have no high-speed internet. 

Not coincidentally, this has echoes of the rural electrification initiatives of the original New Deal, envisioned by Johnson’s idol, FDR. While rural electric cooperatives weren’t initially cost-effective for private industry, federal support provided employment, infrastructure, and expanding opportunities for their members and customers. (Applying an LBJ-style approach to rural broadband would be poetic: the Texan president’s first political experience came during the New Deal, as the twentysomething political novice worked to bring electricity to rural Johnson City, Texas.) 

As has been remarked upon time and time again since the 2016 presidential election, this is a moment for proponents of ambitious progressive policy to use or squander. Workers’ rights, universal health care, mental health and substance abuse initiatives, and job availability all are areas in which the Democratic Party has a chance to pick up supporters who are searching for federal solutions to grassroots problems. For all its limitations, Johnson’s Great Society proves that federal policy can provide these solutions at the grassroots level in a way that can’t be easily dismantled by future reactionary governments. 

Days after one of my Great Society lectures, a student came to my office hours. “You know, I never knew anything about the Great Society,” she said. “I went to Head Start, and I thought it was just something that happened in my community. I didn’t know it had this whole history behind it.” Imperfect as the Great Society—like Johnson himself—was, my student was a testament to its lasting accomplishments and the resilience of its programs. The Great Society at its savviest built unanticipated coalitions, created local buy-in to national programs, and provided lasting opportunities three generations forward. In both their promise and their potential pitfalls, the examples of the Great Society provide a powerful playbook for a new generation of social reformers.

The post The Forgotten Lessons of LBJ’s Domestic Legacy appeared first on Washington Monthly.

Editor’s Note: Check Your Coastal Urban Privilege
https://washingtonmonthly.com/2019/01/13/editors-note-check-your-coastal-urban-privilege/
Mon, 14 Jan 2019

If affluent liberals want to avoid a permanent GOP dictatorship, they need to reform a system that benefits them but screws everybody else.

The post Editor’s Note: Check Your Coastal Urban Privilege appeared first on Washington Monthly.


It’s been three decades since Peggy McIntosh, a women’s studies scholar at Wellesley College, published her landmark essay, “White Privilege and Male Privilege.” In it, she observed that even her most well-meaning male colleagues—those aware of the discrimination female scholars faced—were nevertheless blind to the ways that they, as male academics, benefited from that discriminatory system, from easier career advancement to the power to decide curriculum. She then posited a corollary: that whites like her were similarly unaware of the advantages they enjoyed from a system of policies and social norms that disadvantaged minorities. “I think whites are carefully taught not to recognize white privilege, as males are taught not to recognize male privilege,” she wrote. As illustration, she offered a list of some of those privileges, such as the freedom to go shopping “fairly well assured that I will not be followed or harassed by store detectives,” or the ability to “criticize our government and talk about how much I fear its policies and behavior without being seen as a cultural outsider.” 

McIntosh’s essay is the urtext of what has become a core principle among academics, diversity trainers, social justice activists, and liberal society at large. The concept has been applied to other dominant groups—“cisgender privilege,” “age privilege”—and led to the catchphrase “Check your privilege,” which roughly means, “Be aware of and admit to your own favored personal situation when talking about problems in society.”

Like a lot of people in my generation, I have issues with the use of the word “privilege” in some of these contexts—among other things, it implies that a freedom as basic as not getting harassed by police is a kind of special perk rather than a fundamental right. But the basic point is correct, and essential: many of us, especially we white men, go about our daily lives blissfully unaware of the benefits we enjoy—McIntosh calls them “unearned advantages”—from not being the victims of oppression. 

In fact, I’d like to extend McIntosh’s concept to another form of unearned advantage: the economic opportunities enjoyed by residents of a handful of the nation’s largest and wealthiest coastal metropolitan areas. Cities like San Francisco, New York, and Washington, D.C., and their suburbs, have raced ahead of most of the rest of the country in recent decades. But the well-educated liberals who most benefit from all this economic growth—the same people who are most likely to be woke to privilege in other contexts—don’t seem to recognize that a substantial portion of the gains are unearned, ill-gotten, and coming at someone else’s expense. 

Put simply, these big liberal metro areas aren’t thriving simply because they’re liberal, or educated, or “innovative”; they’re doing so in large part thanks to federal policies that allow them to suck up more than their fair share of the nation’s wealth. Since 1980, the gap between the per capita income in the wealthiest 10 percent of metro areas and the poorest 10 percent has grown by 21 percent. The contrast with rural America is even starker. As recently as twenty years ago, sparsely populated counties were doing fine; in fact, during the first four years of the 1990s boom, their rates of business start-ups and employment growth exceeded those of the big cities. But their economies collapsed during the 2008 recession and have not recovered, even as the largest metro areas have flourished. 

These growing geographic disparities ought to alarm everyone—especially those who claim to care about inequality. But instead of concern, what you too often hear from coastal liberals is smug satisfaction that they are the forward-thinking economic drivers of the national economy and that they disproportionately pay the nation’s taxes. Writing in the New Republic in 2017, contributing editor Kevin Baker took this triumphalism a step further by proposing, tongue only partially in cheek, that wealthy blue states (or, as he dubbed them, “the United States of We Pay Our Own Damn Way”) should virtually secede from the union.

This lack of empathy is partly understandable. Red state voters elected and continue to support a corrupt, racist, misogynist, dissembling authoritarian whose agenda and impulses threaten the whole world. But it is also a stark example of privilege as McIntosh defined it: these blue megacities are getting ahead in part because of a system of unearned advantages that directly contributes to the heartland’s economic distress. 

This magazine has long made the argument—advanced in this issue in twin cover stories by Daniel Block and Claire Kelloway—that the migration of wealth and opportunity to a handful of coastal metro areas is not primarily the consequence of inevitable market forces, but of policy choices made in Washington. Beginning in the 1970s, and with accelerating force in the 1980s, elected officials dismantled a set of rules that for decades had allowed all parts of the country to compete on a level playing field. These included safeguards for local banks and retailers, regulations that kept the costs of air travel relatively uniform throughout the country, and strict enforcement of federal antitrust laws. 

The results of this dismantling have been devastating. Chain stores headquartered in distant cities bought out local retailers and are themselves now threatened by a single fast-growing Seattle-based monopoly, Amazon. Megabanks in New York and San Francisco acquired local lenders, depriving regional economies of access to capital. Four airlines now control 80 percent of the domestic market, and the New York–based hedge funds that own them have cut service and raised prices for flights to and from small and midsize cities, rendering them less competitive places to do business. 

The same thing has happened across industries. As cities in the interior lost companies headquartered there through corporate mergers, they lost their economic engines, while the already wealthy coastal megalopolises gained horsepower. And as those interior cities try to reinvent themselves by nurturing entrepreneurs, their local start-ups are being acquired by tech monopolies located (where else?) on the coasts.

Rural America has been even harder hit by this orgy of industry consolidation. As Claire Kelloway documents, Big Ag monopolies in recent years have relentlessly raised prices on what farmers buy (seed, fertilizer) and pushed down the prices of what farmers sell (grain, livestock), to the point where “we are on the verge of a farm crisis more sweeping than the one that ripped rural America apart in the 1980s.”

One of the key insights of Peggy McIntosh’s original essay on privilege is that unearned advantages harm even those who have them. That is true for residents of elite coastal metros in more ways than one. As Daniel Block notes, plenty of talented professionals would prefer to make their careers in their home towns to be close to friends and family but find themselves instead “channeled to a handful of overly expensive, traffic-choked megacities.” This phenomenon is also what’s driving the gentrification problem. “It’s not that [elite metro areas] have too many white-collar working professionals moving into once-affordable communities,” Block argues. “It’s that they have too many white-collar working professionals, period. Stagnant heartland cities, on the other hand, don’t have enough.”

But the biggest harm is political. In the 2018 midterms, Democrats rode a “blue wave” of support to their first House majority since 2011. Yet, despite a nine-point advantage in the national vote, they lost a net of two Senate seats. That’s because their voters are increasingly clustered in solid-blue states like California and New York and too thin on the ground in states like North Dakota and Ohio. If this situation continues, Democrats will have a hard time overcoming the advantage Republicans enjoy in the Senate (where sparsely and heavily populated states each get two senators), and may even continue to lose the Electoral College despite winning the popular vote. 

The challenge is not only that Democrats have hemorrhaged support in rural areas. It’s also that metro areas in red and purple states, which generally support Democrats, haven’t been growing enough to offset those rural losses. If these places had been thriving at anywhere near the level of the coastal megalopolises, there would be millions more blue votes across these key battleground states. Consider Colorado, a formerly purple state that is now reliably blue thanks to the expansive development of Denver, one of the few metro areas in America’s interior that have enjoyed coast-like economic growth. 

Or compare Minnesota and Wisconsin. In 2016, Minnesota remained blue while Wisconsin went red. In both states, rural voters lurched right. The difference was in the size of their main metro areas, both of which voted heavily for Hillary Clinton. Greater Minneapolis’s economy has grown at almost double the national rate since 1970, while Greater Milwaukee’s has increased at about a quarter of that rate. Consequently, the Minneapolis area’s population has swelled while Milwaukee’s has stagnated. Had they both grown at the rate of Greater Minneapolis, Block concludes, “Clinton would have carried Wisconsin by approximately 16,000 votes instead of losing by roughly 23,000.”

The single best long-term strategy to break through the geographic wall that preserves GOP power is to reverse the policies that have enabled the rise of monopoly firms that cluster opportunity in a few lucky cities. In the short term, committing to that path could help Democrats make inroads among the rural voters they desperately need to woo back. Last fall, Iowa Democrat J. D. Scholten, running on an economic message that heavily emphasized the need to break up the dominant agricultural monopolies, came within 3.4 percentage points of beating Representative Steve King, the notorious racist, in a heavily rural Iowa district that went for Donald Trump by twenty-seven points. If that swing from 2016 could be replicated nationally, Kelloway observes, it “would all but wipe out the current incarnation of the Republican Party.” Over the longer haul, anti-monopoly policies could restore the freedom of small and midsize cities around the country to compete for business, economic growth, and residents—and take away the GOP’s geographic advantage for good.

This obviously won’t be easy. The monopoly powers—the Citibanks, Comcasts, and Facebooks—will resist with all the considerable political clout at their disposal. Democrats who take up the challenge will also have to overcome the resistance of their own voters in coastal megacities who are benefiting from the current setup. “It is difficult to get a man to understand something, when his salary depends on his not understanding it,” as Upton Sinclair famously put it. 

The good news is that political convictions can be an even more powerful force than economic self-interest. Why else would millions of affluent liberals vote for Democrats who promise to raise their taxes? These voters need to be made to see that the prosperity they enjoy is at least partly the result of a rigged set of rules that makes it crushingly difficult for anyone living anywhere else to compete. Those rules are also what’s helping the Republican Party keep control of Washington despite lacking majority support in the country. So if coastal liberals want to avoid a permanent GOP dictatorship, they need to become aware of how a system they benefit from is also screwing them and everybody else. They need, in other words, to check their privilege.


The Lab-Grown Meat Industry’s Problem With Regulation? There’s Not Enough.
https://washingtonmonthly.com/2019/01/13/the-lab-grown-meat-industrys-problem-with-regulation-theres-not-enough/
Mon, 14 Jan 2019

American entrepreneurs are close to perfecting cell-cultured meat. But thanks to the conservative war on the regulatory process, you may never get to eat it.

The post The Lab-Grown Meat Industry’s Problem With Regulation? There’s Not Enough. appeared first on Washington Monthly.


“We came up with the idea to use one feather, from the single best chicken we could find,” says the narrator of a video for Just, a Bay Area biotech company. Later, a handsome farmer pulls a white feather out of his shirt pocket and puts it in a test tube. It’s a promotion for the company’s new product: chicken grown in a lab. “You can take just a handful of cells, and keep growing them, essentially infinitely.” Fast forward to the company’s research chef seasoning chicken nuggets at an outdoor cookout. A group of young people sit at a picnic table in a grassy backyard while a chicken wanders near their feet. They’re eating nuggets, the video claims, made from the cells of that very chicken. 

Just has been racing against other biotech start-ups toward a revolution in food science: meat that doesn’t require slaughtering animals. By culturing chicken, pig, and duck cells, it’s now possible to make chicken nuggets, chorizo, and duck à l’orange—and the products are getting closer and closer to tasting just like traditional meat. The technology holds the promise of being dramatically less land- and water-intensive than conventional animal farming. Just hasn’t announced when its products will hit grocery store shelves, but competitor Memphis Meats expects to sell cultured chicken and duck by 2021. 

But there’s something that could stand in the way of this rapid progress: government regulation. Not in the way you might be thinking, though. The issue isn’t too much regulation—it’s too little. 

This past October, Josh Tetrick, Just’s CEO, sent a letter to Agriculture Secretary Sonny Perdue. He wasn’t writing to complain about burdensome rules. Rather, his worry was that there were no clear regulations for his industry at all. “Companies pursuing innovative, sustainable meat technology,” he wrote, “need clarity about regulatory expectations and a defined path to market.” Without oversight from the federal government, it would be impossible to build up the brand—why would anyone feel safe eating the product? Anyone could claim to sell cell-cultured meat, make people sick, and sully the whole idea, bringing responsible companies down with them. And if they tried to just start selling their products, regulations be darned, they could be opening themselves up to a world of lawsuits. 

At a public meeting a few weeks later, Perdue admitted the problem. “The industry,” he said, “is already ahead of us.” 

The slowest thing in Washington, next to Beltway traffic at rush hour, is the regulatory process. For example, as Barack Obama left office, neither the Affordable Care Act nor the Dodd-Frank banking bill—Obama’s two signature legislative achievements—had been fully translated into regulations, meaning that parts of those laws had not gone into effect seven years after being passed by Congress. These delays are less the result of regulatory agencies dragging their feet than of an ever-growing series of roadblocks, checkpoints, tolls, and drawbridges put in their way by elected officials convinced that federal regulations suppress job growth and innovation. 

Since the 1970s, the fight against “job-killing regulations,” a phrase made famous by Ronald Reagan, has been a policy obsession of conservative and libertarian think tanks. The American Enterprise Institute went so far as to launch a magazine, Regulation, which may be the only publication devoted to trashing the thing it’s named after. (It’s now published by the Cato Institute.) Overall, the idea that regulations are bad for business is perhaps the only issue besides tax cuts that essentially every Republican agrees on. 

This obsession has translated into policy: Over the past forty years, Congress and successive administrations have imposed a series of laws and executive orders, often sold on the grounds of making the process more “transparent,” that actually make it more cumbersome and easier for opponents to shut down. They have slashed agencies’ budgets, making it harder for them to do their job, and required that agencies subject proposed regulations to ever more stringent cost-benefit analysis—and have even created other agencies to review that analysis. 

Donald Trump, a rebel when it comes to Republican dogma on subjects like trade, has long been a doctrinaire conservative on deregulation. On the campaign trail he said he would “cancel every needless job-killing regulation” and once in office added his own obstacle to the rule-making process: an executive order requiring agencies to cut two old regulations for every new one they create. 

Adding hurdles to the regulatory process might make sense if, as conservatives claim, federal regulations are nothing but a drag on the economy. But, in fact, there’s little evidence for that claim, and much evidence that it’s dead wrong. A compilation of research edited by economists at the University of Pennsylvania and George Washington University showed that, on the whole, regulations do little to change the number of available jobs. Last year, when Alex Tabarrok, a prominent libertarian economist at George Mason University, set out to see whether the increase in federal rule making had impacted the rate at which businesses grow, he was surprised to find zero correlation. 

That doesn’t mean regulations can’t be burdensome to comply with, or that they never cost jobs. But the flip side—and the reason that, on balance, there is no evidence that federal regulations suppress jobs and growth overall—is that regulations often make possible whole new industries that lead to more jobs and economic growth. It was a rule phasing out incandescent light bulbs that spurred the commercialization of alternatives, and now you can cheaply buy LEDs that last about twenty-five times longer than incandescent bulbs and are 80 percent more efficient. The profusion of finance apps like Betterment and Digit is the direct result of regulations that came out of Dodd-Frank, forcing banks to allow customers to give outside firms access to their account information. 

The fact that new industries need regulations to grow, however, means that growth can stall when regulators can’t push out those rules in a timely manner. The commercial drone industry is a good example. The U.S. was an early leader in drone research, but the Federal Aviation Administration was painfully slow to give companies clarity on how they could test and sell their inventions. So American start-ups, like the drone data platform Airware, began doing business in European countries that beat the U.S. to the regulatory punch. Google and Amazon started testing their products in Australia and Canada, respectively, and a company based in China became the clear leader in small commercial drones. 

Cell-cultured meat would likely have an even bigger impact than drone technology. A booming global population, rising incomes, and increasing demand for meat in countries like China and India mean that the world will need to produce 470 million tons of meat per year by 2050—an increase of more than 200 million tons per year from 2007. That will put pressure on the resource-intensive animal farming system we have now, which uses 660 gallons of water to make a single beef patty. And even as the planet sprints toward catastrophic climate change, animal farming alone accounts for nearly one-sixth of greenhouse gas emissions.

Lab-made meat could circumvent the factory farming and slaughter of billions of animals, a process that consumers increasingly find grotesque, and a cleaner process could result in healthier meat. (Several reports have found disturbing amounts of fecal bacteria in our meat and poultry.) And if companies can perfect cell-cultured fish—which Finless Foods, a San Francisco start-up, is working on—it could alleviate the decimation of fisheries, allowing consumers to eat overfished species, like bluefin tuna, without moral conflict. 

One indication of the new industry’s potential is that the two biggest meat-processing companies, Tyson Foods and Cargill, have already invested. “If we can grow the meat without the animal, why wouldn’t we?” Tom Hayes, then Tyson’s CEO, said in an interview with Bloomberg last August. If these start-ups are successful in mass production, America could be the leader in selling cell-cultured meat to the rest of the world. 

But Washington regulators may already be behind—not just behind the industry, as Sonny Perdue suggested, but behind other countries, too. The European Union added rules for cell-cultured products to one of its major food regulation laws back in 2015; the rules took effect in January 2018. Karin Verzijden, a lawyer for the Dutch firm Axon who advises companies navigating EU food regulations, says the European Commission “has followed developments in the industry and anticipated that this would be needed.”

While the EU has had a clear path to market in place for a full year, in the U.S., formal discussions between industry representatives and regulators didn’t even start until last July. Now, agencies are looking at a gauntlet of obstacles that could drag the process out for years.

The first quagmire was deciding which agency would actually do the regulating. The natural choice would have been the Food and Drug Administration, which already oversees cell-culturing techniques. But as with all innovation, there is an incumbent industry that stands to lose if the emerging lab-grown meat industry takes off: livestock ranchers. And in April of last year, that industry, represented by the National Cattlemen’s Beef Association (NCBA), wrote that it wanted the U.S. Department of Agriculture, an agency over which it holds considerable sway, to be the regulator of the cell-cultured meat industry. 

Over the summer and through the early fall, the two agencies tussled over the turf. While the FDA made a public announcement laying claim to the industry, lobbyists successfully petitioned members of Congress to get language into a bill requiring the USDA to have oversight responsibilities. Finally, in November, the agencies made a Solomonic compromise: they would share jurisdiction, with the FDA overseeing cell collection and growth, and the USDA overseeing production and product labeling. 

Would it be harder for small start-ups to comply with the rules of two agencies instead of one? Probably. But after months of stalemate, this felt like progress. The industry rejoiced. 

That was just the first obstacle, however. The next one is a “public comment” period during which citizens can send in their written concerns, all of which regulators are obligated by statute to review. Industry groups have learned to exploit this process by deluging agencies with thousands of comments. The first comment period was already extended once, and it may be extended again, said Danielle Beck, director of government affairs for the NCBA. Her organization is soliciting comments from its members. One clear bone of contention is how cell-cultured products will be labeled. The NCBA uses unappetizing variations on “fake meat” and “lab-grown meat product” to describe its new competitors, while some start-ups call their products “cell-based meat”—already a concession from their initially preferred “clean meat.” 

Next, each agency will have to decide whether the regulations currently on the books suffice, in which case they can just issue “guidance”—detailed language on how to comply with the regulation—or whether they will need to craft a new set of regulations. If they choose the former, industry could conceivably have the regulatory foundation it needs by the end of this year, though it might take longer. If the agencies decide that new rule making is required, however, a whole new set of procedures kicks in. Economists, lawyers, scientists, and other experts must be convened to offer their input on potential regulations, after which the agencies issue a notice of proposed rule making. Cue another statutorily obligated comment period. 

Then, after the rule is reformulated based on feedback, it may have to go through the Office of Information and Regulatory Affairs (OIRA), which, among other things, would review the agency’s economic cost-benefit analysis. Studies have found that, since 1981, the amount of time it takes OIRA to review a rule has increased from less than ten days to about two months—before factoring in the weeks or months of extra time regulatory agencies invest to bulletproof their rules against expected OIRA objections. All told, the process could take years.

Regardless of whether agencies go with guidelines or rule making, the end result will be vulnerable to lawsuits. If a group wants to argue, for example, that a guideline is actually acting as a rule, they could take the agency to court—and the federal judiciary is packed with anti-regulatory judges. Half of the judges on the Court of Appeals for the D.C. Circuit, the body most likely to hear cases against federal agencies, were appointed by Republican presidents, with more likely on the way. If the court tosses out a rule, the agency typically goes back to square one. That’s what happened to Obama’s Education Department when it finalized rules in 2011 designed to cut off federal funding for lousy vocational colleges whose graduates earn so little they can’t pay back their loans. The for-profit college industry sued repeatedly, necessitating redrafts of the rules, which finally went into effect in the last weeks of the Obama administration. (Trump’s education secretary, Betsy DeVos, has refused to enforce the rule—for which she, in turn, is being sued.) The looming possibility of lawsuits is a big reason why agencies take so much time drafting the rules in the first place. 

Right now, cell-cultured meat companies are optimistic. Most are still a year or two, if not more, from commercializing their products, so there is time for Washington to get its act together and provide a solid regulatory framework. The danger is that if the process drags on, the companies will move some or most of their operations to countries that already have one. Just CEO Josh Tetrick said regulations are one factor among many, but that “having a regulatory framework that’s clear and rational and science based” would play a role in “where we might want to move manufacturing in the future.”

That’s a worst-case scenario. But when it comes to contemporary Washington, it’s folly to ignore worst-case scenarios. 

The post The Lab-Grown Meat Industry’s Problem With Regulation? There’s Not Enough. appeared first on Washington Monthly.

Three Ways Democrats Can Fix the Farm Economy https://washingtonmonthly.com/2019/01/13/three-ways-democrats-can-fix-the-farm-economy/ Mon, 14 Jan 2019 01:43:58 +0000 https://washingtonmonthly.com/?p=91528

Executing these ideas may require the White House and Senate, but the House majority can begin developing and publicizing them right now.

The post Three Ways Democrats Can Fix the Farm Economy appeared first on Washington Monthly.

The farm economy is in crisis, and Democrats will need to do something about it to have a chance at winning back the votes of rural America. Executing the three policy suggestions below may require winning back the White House and Senate in 2020—but the House majority can begin developing and publicizing them right now.

Break Up Big Agribusiness

For decades, federal antitrust regulators have given carte blanche to mega-mergers in the food sector. To fix this, Congress should pass legislation requiring the Department of Justice to take antitrust action in any agribusiness sector in which the four biggest firms control more than 50 percent of the market. Appropriate remedies range from forced divestiture of assets to complete firm breakups. Meanwhile, mergers between companies with more than 5 percent market share should be blocked if they will result in a loss of competition. Congressional Democrats should also follow up on campaign promises made by Barack Obama by pressing for amendments to the Packers and Stockyards Act that would make it easier for farmers to file unfair practices claims and protect them from retaliation for airing grievances. 

Reform Agricultural Co-ops

By forming cooperatives, farmers can bargain more effectively with monopolistic buyers. That’s the reason farm co-ops are exempt from certain antitrust restrictions on coordination and price setting. However, over time, some cooperatives have become so big that they have themselves become abusive monopolies. In the dairy industry, large cooperatives such as the Dairy Farmers of America and Land O’Lakes have swallowed up rivals, made deals with investor-owned firms, and earned record profits, while their members suffer from record-low prices and go out of business by the thousands. To fix this, Congress should pass a law ensuring that the governance of co-ops remains in the hands of member farmers, and that no co-op becomes so big that it monopolizes the local market for farm products.

Replace Farm Subsidies With Supply Management

A broad array of income support programs subsidize different kinds of farmers. But this approach gives the most benefits to the biggest farmers, results in overproduction, and has failed to stem the trend toward consolidation. It also creates perverse price signals, such as by allowing multinational meat companies to procure feed grains at below the cost of production. A better way is to go back to the supply management approach the U.S. used under the New Deal, in which farmers were paid to reduce acreage when stocks were high, and government provided price supports and bought up surpluses if the market price fell below the designated price floor. This policy tool kit stabilized commodity prices and allowed farmers to predict revenue within a fairly narrow band, making it much easier to plan farm operations. Research by agricultural economists at the University of Tennessee indicates that a revamped supply management system would boost farm income, increase farm exports, and save the government billions of dollars annually.

Taking the Monopoly Threat Seriously https://washingtonmonthly.com/2019/01/13/taking-the-monopoly-threat-seriously/ Mon, 14 Jan 2019 01:42:45 +0000 https://washingtonmonthly.com/?p=91708

Louis Brandeis, the legendary jurist, warned of the spiritual and political costs of economic concentration. Today, that’s more relevant than ever.

The post Taking the Monopoly Threat Seriously appeared first on Washington Monthly.

We associate Louis Brandeis most with Boston. The legendary Supreme Court justice attended law school at Harvard, opened his private practice on Devonshire Street, and had a college named for him in the city’s suburbs. But as Tim Wu reminds us in his new book, The Curse of Bigness, the place that shaped Brandeis’s most influential thinking was not Boston but the smaller city where he grew up: Louisville, Kentucky. It was there that his Prague-born father, Adolph, decided to settle and came to prosper as a grain merchant. And it was there that Louis developed an abiding attachment to the American ideal of the level playing field. 

The Curse of Bigness:
Antitrust in the New Gilded Age
by Tim Wu
Columbia Global Reports, 154 pp.

“Louisville was no world capital, nor the seat of any corporate empire, but nonetheless a flourishing regional center, in a United States far more economically decentralized than today’s,” writes Wu. “It was, economically speaking, dominated by no few large concerns but a multitude of small producers.” Wu quotes biographer Melvin Urofsky, who wrote that Louisville seemed to Brandeis “the quintessential democratic society, in which individuals . . . could do well by dint of their intelligence and perseverance.”

The Curse of Bigness is intended to show how democratic society is threatened today by a new wave of “large concerns,” and to make the case for dismantling them. (The title is a reference to Brandeis’s famous evocation of the problem of monopolies.) Wu, a law professor at Columbia, is well suited to the task. In recent years, most notably with his 2016 book, The Attention Merchants, he has distinguished himself by combining analysis of the tech giants with a lyrical evocation of the changes they are inflicting on our daily lives, social interactions, and politics. 

Wu’s thesis is straightforward and admonitory: we are, he argues, reenacting the economic concentration of the Gilded Age, with the only difference being that today’s “insensitive behemoths” traffic primarily in clicks and online sales rather than railroads and oil. And to overcome these goliaths, we need to channel the trustbusters of that earlier era—above all Brandeis, who, with what Wu calls his “sensitivity to human ends,” was among the first to identify the broader spiritual and political cost of concentration. “What Brandeis really cared about,” writes Wu, “was the economic conditions under which life is lived, and the effects of the economy on one’s character and on the nation’s soul.” A functioning democracy relied not only on a duly elected government, but also on citizens’ freedom from the abuses of overlords in the other realms of existence. That New Englanders could vote for their legislators was well and good; it was also important that they be spared the wrecks, derailments, and delays that flowed from J. P. Morgan’s consolidation of the New Haven Railroad monopoly, which caused twenty-four deaths and 105 injuries in 1911 alone.

After introducing his hero, Wu gives a concise history of the revolution in antitrust enforcement that Brandeis inspired. There were the trust-busting cases brought by Teddy Roosevelt, who didn’t mind bigness per se but was concerned about “preventing the growth of monopoly corporations into something that might transcend the power of elected government to control.” Then there were the cases brought by the less famous Thurman Arnold, who as head of the antitrust division under Franklin Roosevelt brought an astonishing 1,375 complaints involving forty industries. As Wu notes, the New Dealers’ federal competition policy was crucial to the broad-based prosperity that would characterize the postwar era. What goes mostly unsaid is that wealth wasn’t only being shared up and down the income ladder; it was also being broadly distributed in geographic terms, as guards against monopoly allowed for thriving regional-level businesses in retail, media, and countless other industries—including in midsize cities like Louisville. 

Wu gives a succinct account of what happened next: the undermining of antitrust enforcement by the one-two punch of, first, Aaron Director, the obscure ex-socialist academic who inspired a generation of antitrust skeptics at the University of Chicago Law School; and then Robert Bork, the Chicago Law graduate who weaponized Director’s ideas as a law professor and federal judge. Beginning with an influential 1966 paper, Bork declared that antitrust law had only one legitimate objective: “the maximization of consumer welfare”—that is, low prices. Any concern for the broader economic or political costs of concentration must be ignored. A decade later, this position would be adopted by the Supreme Court; it has governed antitrust law ever since. Wu argues that Bork’s success lay in his having a moral element to his case, one that countered Brandeis’s. “In Bork’s critique, it seemed an antitrust law driven by anything but consumer welfare was the law of the libertine, degenerate and debauched,” writes Wu. In fact, Bork saw antitrust as just one front in a general rearguard action he was waging against the excesses of the Warren Court, which, he wrote, “wrecked many fields of law in its reckless and primitive egalitarianism. Antitrust was one such field.”

Bork had “managed to embed the culture war” in the antitrust debate, writes Wu, and proceeded to win that war by convincing a broad swath of lawyers and judges that respectability and intellectual rigor lay on the side of tamping down antitrust fervor. Never mind that, as Wu argues, there was nothing particularly rigorous about the highly simplistic consumer welfare standard. Bork’s victory was undeniable. There was the pullback in antitrust enforcement under Ronald Reagan—in 1981, the Federal Trade Commission even suspended a program that collected data on industry concentration. (See Sandeep Vaheesan, “Progressives’ Secret Weapon,” on page 39.) The Clinton administration did bring a big case against Microsoft, but George W. Bush’s administration settled it and brought not one antitrust case of its own. The Obama administration took a friendly stance toward the expansion of the tech titans—indeed, the upper ranks of those titans are now riddled with Obama veterans. 

Meanwhile, since 2000, 75 percent of U.S. industries have grown more concentrated, with tech being the most obvious example—Google and Facebook now control more than 60 percent of all digital advertising, while Amazon controls more than half of the e-commerce market. This exacerbates the growing divide not only between urban and rural America but also between a handful of winner-take-all cities and many lagging ones. Nearly half of all venture capital is now invested in the Bay Area, the five largest metro areas now produce more than a quarter of all economic output, and half of the wealthiest counties in the country are in the Washington, D.C., area. “If there is a sector more ripe for the reinvigoration of the big case tradition,” writes Wu, referring to Microsoft-style breakups, “I do not know it.”

Wu then proceeds through a concise marshalling of the arguments for a return to a more aggressive antitrust approach. No, breaking up our new tech overlords won’t slow the economy or hamper innovation—quite the contrary, considering the advances that flowed from the breakup of giants like AT&T. The tech monopolies strain credulity when they claim that their voracious acquisitions of rivals are not subject to antitrust concern, as when Facebook claimed that it was not actually in competition with Instagram. “A teenager could have told you that Facebook and Instagram were competitors—after all, teenagers were the ones who were switching platforms,” Wu writes.

The odds of the Supreme Court being won over by such arguments anytime soon are slim, of course, given that liberals may not regain a majority for decades. But Wu sketches out a way forward for the other branches of government. Congress could do its part by passing legislation to “make clear, at a minimum, that the Anti-Merger Act of 1950 meant what it said” in making it harder for companies to buy the individual assets of competitors, in addition to undertaking wholesale mergers. He suggests setting a higher bar for giant mergers (those above $6 billion in value) and an outright ban on mergers that reduce the number of major firms in a market to less than four. He calls for more transparency in the merger review process, where regulators now withhold too much information, purportedly to avoid politicizing reviews. “Big mergers are political,” he writes, “and the idea that the public or its representatives be kept in the dark is hard to support.” He urges regulators to mimic their British counterparts by undertaking routine “market investigations” to prevent concentration, and to take inspiration from their EU counterparts, who still have the gumption for big cases against the tech giants. 

Building the political will for a new direction is no small challenge. A survey conducted in June and July by Georgetown and NYU found that Democrats expressed more confidence in Amazon than in any other entity listed—ahead of unions, government, and the press—while Republicans ranked the company behind only the military and police. But sentiments on that front may well shift. For one thing, there’s palpable disquiet over the HQ2 extravaganza launched by Amazon, which resulted in its being showered with offers of billions of dollars in subsidies by countless cities. And we all now know how that turned out. HQ2 is headed to New York and Washington—not to Milwaukee or Cleveland or St. Louis or Baltimore. Or Louisville.
