July/August 2019 | Washington Monthly
https://washingtonmonthly.com/magazine/july-august-2019/

Free Trade For Liberals
https://washingtonmonthly.com/2019/07/12/the-case-for-a-progressive-atlantic-trade-alliance/
July 13, 2019

How a U.S.-Europe economic alliance could protect democracy, fight inequality, and save the environment.


It is difficult to overstate the dangers facing Europe. A hopelessly divided Britain is on the verge of crashing out of the European Union with no deal, risking economic turmoil, medicine shortages, and conflict in Northern Ireland. Aspiring autocrats lead governments in Poland and Hungary—once models of democratization. An angry populist movement with incoherent demands is wreaking havoc in France. A predatory Russia feeds these trends using everything from Twitter bots to assassins. 

Seven decades of peace and prosperity have made it too easy to forget what happens when Europe divides. More than 100 million people, half a million of them Americans, were killed during World War I and World War II. Those two conflicts capped centuries of nearly nonstop war, conquest, and revolution. The continent’s recent stability is the exception, not the rule. 

The peace stems largely from American foreign policy. The United States was essential to defeating the continent’s fascist powers during World War II, helped revive its economy with the Marshall Plan, and now safeguards its security through the North Atlantic Treaty Organization. When the Balkans descended into war in the 1990s, it took American military action, through NATO, to end the bloodshed.

Donald Trump has ignored this history. He has abandoned pacts that the EU values, like the Paris Agreement on climate change and the Iran nuclear deal. He has lobbed insults at the bloc’s leaders, tweeting that French President Emmanuel Macron “suffers from a very low Approval Rating”  and that “the people of Germany are turning against” Chancellor Angela Merkel. He has even threatened to withdraw from NATO. 

The president’s actions have rocked Europe’s confidence in the U.S. Last year, Merkel remarked that “it’s no longer the case that the United States will simply just protect us.” Macron called for a European army to safeguard the continent from “China, Russia, and even the United States of America.” The president’s behavior has created openings for the Kremlin, which is forging political and financial alliances with increasingly powerful far-right parties across the continent. 

But the fracturing of Europe predates Trump, and it will likely outlast him. Extreme nationalist parties have made steady gains in EU parliamentary elections. Unchecked, they will win control of more European countries, and they will use their authority to persecute minorities, weaken civil society, and perhaps even rip the union apart. Given the continent’s history, that’s a scary prospect. The United States needs new ways to bind European states to each other, to America, and to liberal democratic values.

Here’s one: establish a large, powerful trade agreement with the European Union. Call it the Atlantic Alliance. If implemented, it would bring Europe and America closer together by making them part of the largest free trade zone in the world. 

Proposing a new free trade deal in 2019 might sound backward. Recent trade agreements have been archetypes of the runaway free market-ism that produced the very inequality fueling nationalist backlash on both sides of the Atlantic. There’s a reason that Trump ran—and won—on a stridently anti-trade platform. And indeed, the deal that policymakers have already proposed, called the Transatlantic Trade and Investment Partnership (TTIP), shows the flaws of the existing system. It’s designed to liberalize the flow of capital without serious regard for the consequences. Like trade agreements past, it would establish international tribunals that let corporations challenge and undermine the regulations of sovereign states.

But there’s no reason why a new deal with Europe has to follow this discredited template. Tariffs between the U.S. and Europe are already low. The Atlantic Alliance would lower them further not as an end in itself, but as a means of reinforcing strategic ties and strengthening liberal values.

For American liberals, a deal with Europe could advance domestic progressive policies. Unlike the Asian and Latin American partners in past trade agreements, European countries have labor and environmental standards that tend to be stronger than ours. A deal, if done right, would curb greenhouse gas emissions and raise salaries. It would create stronger antitrust rules to liberate entrepreneurs, and it would crack down on the use of tax havens by multinational corporations and the ultrarich on both sides of the Atlantic. Like any other trade deal, it would increase economic growth. But by making the U.S. and Europe more equitable and prosperous, it would protect both places from authoritarian countries and help defang the right-wing nationalism tearing the West apart. 

“It makes eminent strategic sense for the U.S. to want this,” said Wesley Clark, NATO’s former supreme allied commander. “Everything we can do to pull the United States and Europe closer together is essential. The United States cannot deal effectively with China without the strong, complete support of its European allies. And right now, the U.S. does not have that support.”

But this is more than just a way to benefit Western democracies. The United States and the European Union make up 45 percent of global GDP. Outside nations will have to raise their production standards for any goods they want traded as part of the agreement. If they want to eventually join the deal, they’ll have to adopt these terms wholesale. That could mean improved labor conditions for billions of workers and a united front on climate change. 

In other words, if done right, this deal could save the world.

In the 1930s, the international system exploded. Japan invaded China. Italy seized control of Ethiopia and Albania. Germany annexed Czechoslovakia and occupied Poland. Britain and France declared war on Germany. The deadliest conflict in human history had begun.

Initially, Americans were reluctant to send troops to Europe or the Pacific. So the United States responded using trade policy instead. In a 1940 speech, Franklin Delano Roosevelt told Americans that, to save its allies, the U.S. needed to become “the great arsenal of democracy.” Roosevelt then provided friendly powers with vital military and agricultural supplies at virtually no cost. Many allied leaders later credited this aid with saving their countries. After Roosevelt’s death, Winston Churchill referred to the president’s decision as “the most unselfish and unsordid financial act of any country in all history.”

Postwar liberals shared Roosevelt’s appreciation for the power of trade to promote democracy. In 1947, New Deal–style policymakers negotiated the Havana Charter, a multilateral trade agreement that would have strengthened international labor rights and limited the power of major corporations and financiers. But business interests opposed the deal, and Congress never ratified it.

Still, over the next several decades, the U.S. used trade to help rebuild the economies devastated by the war in a way that would prevent future conflict. Free trade helped Japan grow rapidly and bound its economic fortunes closely to America’s. In Europe, the Marshall Plan made billions of dollars in aid contingent on greater economic cooperation. And in 1949, Congress declared that “the policy of the people of the United States [is] to encourage the unification of Europe.”

European governments grudgingly went along. In 1950, French Foreign Minister Robert Schuman proposed that European nations pool coal and steel production so that war between France and Germany would be “not merely unthinkable, but materially impossible.” 

Four other European countries ended up joining the new European Coal and Steel Community, which allowed coal and steel (and eventually other goods) to travel freely across borders. Over the course of the next several decades, more countries joined, leading to the formation of what was eventually named the European Union.

The Marshall Plan and the EU are perhaps the best examples of how economic policies and trade deals can advance growth, security, and democracy. The Marshall Plan kick-started the integration of Europe while tying the bloc to the United States in an alliance against the Soviet Union. The EU used economic ties and development spending to turn a continent riven by violent divisions into one of the most peaceful and prosperous places on earth. It is an imperfect institution, pushing business-friendly policies at the expense of unions, for example. But scholars have found that the incentive of EU membership turned many once-communist countries into democracies. It is proof that globalization can be a means to liberal democratic ends.

That, of course, is not the dominant narrative today. “We’re in a retrenchment mode,” said Edward Alden, a senior fellow at the Council on Foreign Relations and a trade expert. “I think the issue now is not, ‘How do trade agreements, how does trade liberalization, move forward?’ I think the question is, ‘How far does it recede?’ ”

There are many things that motivated British and American voters to opt for Brexit and Trump. Racism and opposition to immigration were key, as they have been in right-wing victories throughout the world. But economic skepticism of globalization—shared by many on the left—also played a role. This is, at least in part, a reaction to post–Cold War trade agreements. Instead of treating increased interdependence as a means of advancing democracy and liberalism, as the European Coal and Steel Community once did, these deals tend to serve the interests of multinational corporations. 

Consider the World Trade Organization. Arguably the most important international institution established since the Cold War, it has no labor or environmental requirements. Instead, it was designed to liberalize capital flows as much as possible with little regard for avoiding harmful side effects. 

Individual trade agreements negotiated by the U.S. are hardly better. Many of these deals include “investor-state dispute settlement provisions”—the mechanisms that let companies undercut the regulations of sovereign countries. Rather than pushing for better labor rights or environmental standards, the U.S. spends its negotiating capital on obtaining strong patent and copyright protections for U.S. pharmaceutical companies and media conglomerates like Disney.

That’s because multinational corporations are interested in shareholder returns, not democracy and equality. They want patent protections, not pollution control. They want to be able to source as much as possible from China and other low-wage countries with lax environmental and labor standards. And they have disproportionate power to push for these preferences. Through “industry trade advisory committees,” corporations get to see classified negotiating texts and provide drafting suggestions. 

“Current agreements set a ceiling: a country cannot have stronger health, financial stability, environmental, or other policy standards,” said Lori Wallach, the director of Public Citizen’s Global Trade Watch. “But there is no floor. So there’s no rule that says, if your food safety is truly miserable, or if you have child labor, or slave labor, or no minimum wage, it’s kept out of global trade.”

That doesn’t mean these deals don’t have upsides. Military experts say that agreements like NAFTA and the never-implemented Trans-Pacific Partnership increased (or would have increased) America’s national security by drawing allies closer to the U.S. and away from hostile powers. And there is evidence that post–Cold War trade deals have helped lift hundreds of millions of people in developing nations out of extreme poverty.

But research shows that trade deals have also had unsavory consequences. A 2015 study by three prominent economists found that between 1999 and 2011, Chinese imports alone cost the U.S. roughly one million manufacturing jobs. NAFTA’s impact on the U.S. economy is exceptionally complex, but there is evidence that it has put downward pressure on the wages of non-college-educated workers. This year, several political scientists at the University of Pittsburgh found that regions in Europe that are highly exposed to trade are more likely to support right-wing populists.

Unfortunately, the growing power of these populists makes cutting any kind of trade deal difficult. “The nation-state is coming back to life,” said Charles Kupchan, a professor of international affairs at Georgetown University and the former senior director for European affairs on the Obama administration’s National Security Council. “This is not a moment when free trade is going to find a lot of adherents.”

For European leaders, crafting a deal with the U.S. is especially difficult because it entails negotiating with a country with weaker labor and environmental laws. Even French President Macron, Europe’s consummate centrist, has expressed hesitation. “No European standard should be suppressed or lowered in the areas of environment, health, or food,” he said last fall.

But that doesn’t mean the Atlantic Alliance would be impossible to pull off. While far-right parties did well in Europe’s 2019 parliamentary elections, they didn’t pick up as many seats as expected. More importantly, a variety of smaller, pro-Europe parties on the left—like the Greens—made serious gains. Meanwhile, in the U.S., a 2018 Pew survey found that 67 percent of Democrats have positive views of free trade, up from 53 percent in 2009—likely in reaction to Trump’s hostility to it. If a Democrat wins the presidency, there’s reason to think she can work with the EU’s combined majority of centrists and leftists.

There are plenty of areas where liberal Americans and Europeans could establish a united front. Perhaps the most prominent is climate change. Macron’s government has demanded that any nation signing a trade deal with the EU must also sign the Paris Agreement. The next American administration should go even further and put the Paris Agreement in the new trade deal’s text, making lower tariffs conditional on saving the planet.

“If you put things in that Europeans care a lot about and like, such as the Paris climate agreement, that would be a big deal,” said Kupchan. “That would win back a lot of confidence in American leadership.”

There’s more that Europeans and liberal Americans could accomplish with the promise of lower tariffs. Their trade deal, for example, should require participating countries to set minimum labor standards and pay their citizens a livable minimum wage, which would force the U.S. to lift its own wage floor. The deal should also include strong antitrust provisions that would make the U.S. address its high levels of corporate concentration. In both of these areas, European law is more stringent than America’s.

Experts I spoke with also pointed to international tax evasion as an area ripe for U.S.-EU collaboration. “It’s become almost impossible to tax corporations because they are so mobile,” said Alden, of the Council on Foreign Relations. Europe’s efforts to deal with the problem have been frustrated by some of its own members. While Germany and France want to fight evasion, tax haven states like Ireland and Luxembourg do not.

Evasion, then, is an area where the United States could actually help Europe. “If the United States were on board for a cooperative approach that could be put into a chapter of the trade agreement, that might alter the internal politics in Europe,” said Robert Howse, a trade expert and international law professor at New York University. This, in turn, would give major European powers an incentive to craft an agreement with the U.S.

Then there are reasons to cooperate on trade that are only indirectly related to American domestic politics. The U.S., for example, could make it easier for Europe to access natural gas, reducing its dependence on Russia. More importantly, a trade deal with strict rules of origin would help the U.S. and Europe establish a united front against China’s mercantilist, environmentally degrading, and labor-exploiting industrial practices. 

Taken together, these provisions would make the Atlantic Alliance far more popular than the corporate-driven trade deals of the past. There is, after all, something quite populist about a deal that binds countries to chasing down corporations that avoid taxes, and then making them pay their fair share.

For the deal to accomplish these goals, the Atlantic Alliance would need to be rigorously enforced and hard to reverse. The most effective way to do that may be to put the rules into the implementing legislation—codifying the deal’s terms in European and American law. In the case of the United States, that means the implementing bill needs to make it through Congress. 

This may seem daunting. But trade deals have a good legislative track record, in part because they are often voted on under “fast track” authority. When subject to fast track, trade deals can’t be filibustered, and Congress can’t amend their components. All it can do is vote up or down. If Congress wants the economic growth that comes with trade deals, and if business communities want freer access to European markets, they will have to support whatever agreement the president produces.

Tucking progressive causes into a trade deal is a double-edged sword—much like abolishing the filibuster. Conservatives could then use the same process to advance their own agenda. But they already do. To ratify the agreement creating the World Trade Organization, for example, the United States loosened its meat and poultry inspection standards and gave pharmaceutical companies three extra years of patent exclusivity on their medicines. Liberals shouldn’t be afraid of also using this tool, particularly for causes that require international solutions. There’s no way to fight climate change without commitments from the United States. And it will be much harder for Republicans to renege on those commitments if they’re enshrined in a U.S. law that’s essential to American economic growth.

Even under the most favorable political circumstances, the United States and Europe will face obstacles in negotiating a deal. Agriculture is a major minefield. The U.S. has long been frustrated by European labeling requirements. Meanwhile, the U.S. historically hasn’t entered into trade agreements that don’t help its politically powerful agricultural industry. But if American and European leaders are serious about saving international liberalism, they’ll have to make some tricky compromises. Decades of interdependence helped bring unprecedented stability to the West. The last twenty-five years of globalization have eroded that promise and helped turn millions of people against it.

“Trade and commercial policies that don’t take into consideration any sort of impact on working people lead people to really distrust the system,” Global Trade Watch’s Lori Wallach said. “It’s how we got Trump. It’s an important part of how a bunch of these right-wing movements are rising in Europe.”

What we need is the opposite: a deal that fights income inequality, environmental degradation, and corporate concentration. This would show middle-class Americans and Europeans the upsides of interdependence and reduce the appeal of right-wing populism.

If we have learned anything from the last few years, it’s that the forces behind this kind of populism are global in nature, able to thrive in a wide variety of national soils. The solution must therefore be global, too. Illiberalism is simply too big and powerful to be stopped by any one state, or even one region. Working with Europe—a continent full of democracies, many our partners since the end of World War II—is the best place to start.

What Sanders and Warren Get Wrong on Free College
https://washingtonmonthly.com/2019/07/12/what-elizabeth-warrens-free-college-plan-gets-wrong/
July 13, 2019

Their plans would bake into place the injustices of the current system. Here’s a better approach.


In April, Senator Elizabeth Warren unveiled yet another ambitious policy proposal: a $1.25 trillion plan to make college more affordable. It includes canceling up to $50,000 in student loan debt for 95 percent of borrowers, billions of dollars for historically black colleges, and $100 billion in new money for the federal Pell Grant program.

Almost as an afterthought, Warren’s plan also includes a proposal to make tuition free at every public college and university in America. While light on details, Warren’s version of free college seems to be modeled after Senator Bernie Sanders’s. Sanders, of course, built his improbable 2016 primary campaign in part by galvanizing Millennial student debtors who were outraged by the broken promise of affordable higher education. Now, every serious Democratic contender will have to propose some version of free college—or, as Senator Amy Klobuchar and Mayor Pete Buttigieg did recently, explain why not.

The broad case for free college is very strong. Many states have slashed public funding for higher learning, shifting the burden to students and parents. Private schools have hiked prices into the stratosphere in pursuit of status and fame. As real public university tuition tripled over the last three decades while middle-income wages stagnated, the federal government’s main response was to lend students ever-larger sums of money to make up the difference, with no control over how much colleges charged or whether the degrees were any good. It was a policy mistake of epic proportions, leaving the path to economic mobility badly narrowed and a generation of collegians saddled with unaffordable loans.

Fixing this blunder makes a lot of sense as a matter of both politics and policy. There’s just one problem: the Warren and Sanders free college plans are badly designed. The Sanders proposal would give states federal grants equal to two-thirds of the cost of bringing tuition at all public colleges and universities in the state down to zero, contingent on states matching with one-third of the necessary money. Warren’s plan is vague, but similar: the federal government would “partner with states to split the costs of tuition and fees.” Both, in other words, would force the federal government to make up the difference between the funding that states already provide and the funding necessary to make tuition free. This approach takes the vast disparities and injustices of the existing higher education funding system and permanently bakes them in place, punishing the states already doing the most to support students and rewarding the ones doing the least. 

If this version of free college becomes the Democratic consensus, the party could be headed for disaster: a 2020 victory followed by a policymaking collapse akin to the 1993 health care fiasco, hobbling the victor’s presidency and setting back reform for a generation. Fortunately, there is a better way. To understand why the Warren and Sanders plans don’t work, and how to improve them, we need look no further than the flagship public university in Sanders’s home state. 

The University of Vermont was founded in 1791 and sits on a lovely redbrick campus a short walk uphill from Lake Champlain. The surrounding city of Burlington, where Bernie Sanders began his career in government as mayor in 1981, has a low-key vibe, with streets full of restaurants and boutique clothing stores. Lately, the region has become a hub of small craft brewing companies. The fall foliage is beautiful, and skiing opportunities in the nearby Green Mountains abound. 

All of this makes UVM an attractive destination for out-of-state students, who make up nearly 80 percent of undergraduates. Those students pay well to attend. List-price tuition and fees for non-Vermonters start at almost $44,000 per year, more than many private colleges. On average, factoring in financial aid and in-state students, the university takes in nearly $25,000 per student in tuition, more than any other flagship public university in the nation.

This is one of the reasons why the state of Vermont spends relatively little of its own money on public colleges and universities. In 2017, the state allocated only $2,700 per student to higher education, less than half the national average of $7,640, and less than a third of what neighboring New York provided.

Less than two miles away, in neighboring Winooski, the Community College of Vermont takes in only $5,340 per student in tuition revenue—barely one-fifth of what flows to UVM. State budget policy makes the disparity even worse. The community college’s annual allotment of $1,500 per student is less than half what the university receives. 

In these two disparities—first, between Vermont’s stingy higher education funding and what other states provide; second, between the bounty that four-year universities receive and the pittance that goes to community colleges—we can see the fatal flaws of the Warren and Sanders free college plans. They take these huge inequities as a given, rewarding states that have done the least to finance higher learning and giving far more money to middle- and upper-income students who attend wealthier public universities. 

This approach benefits cheapskate states like Vermont, or Pennsylvania, which spends about $4,300 per student from the public treasury and leaves undergrads to pay an average of $11,400 a year in tuition. North Carolina, by contrast, spends more than double what Pennsylvania does—$10,400 per student—and only charges about $5,500 in tuition. Total spending in both states is fairly similar, but one sticks students with most of their college bill; the other does not.

For that exact reason, the Warren and Sanders plans would give Pennsylvania much more money than North Carolina to pay down tuition, since tuition in Pennsylvania is higher to begin with. That’s grossly unfair and a political nonstarter; members of Congress in states that more generously subsidize higher learning would rebel. The obvious alternative is to force Pennsylvania and other low spenders to come up with billions of dollars in additional matching funds right away—but that’s a choice the states would almost certainly decline, since the Sanders plan would allow states to opt out entirely (and a mandatory plan would be unlikely to survive a constitutional challenge). 
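For the numerically inclined, here is a rough sketch of that asymmetry, using the per-student figures cited above and the Sanders plan’s announced two-thirds federal match. The arithmetic is purely illustrative; real grants would depend on enrollment mixes and statutory details.

```python
# A stylized comparison of what a "zero out current tuition" formula sends to
# each state. Figures are the per-student numbers cited above; the two-thirds
# federal share is the Sanders plan's announced split.

FED_SHARE = 2 / 3

states = {
    "Pennsylvania":   {"state_spending": 4_300,  "tuition": 11_400},
    "North Carolina": {"state_spending": 10_400, "tuition": 5_500},
}

for name, s in states.items():
    total_per_student = s["state_spending"] + s["tuition"]  # who pays is what differs
    federal_grant = FED_SHARE * s["tuition"]                 # grant scales with tuition charged
    print(f"{name}: ~${total_per_student:,} total per student, "
          f"~${federal_grant:,.0f} federal grant per student")

# Pennsylvania:   ~$15,700 total per student, ~$7,600 federal grant per student
# North Carolina: ~$15,900 total per student, ~$3,667 federal grant per student
```

Similar totals, wildly different federal checks: that is the fairness problem in a nutshell.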

In addition to rewarding miserly states and penalizing generous ones, the Warren and Sanders plans would give much more money to four-year students than to two-year students, because four-year tuition is currently much more expensive. This is, frankly, the opposite of how good liberals and democratic socialists should think. Community college students are more likely to be immigrants, working parents, and first-generation collegians from low-income backgrounds. Why spend $25,000 on tuition for a UVM freshman with well-off parents and only $5,000 for a single mom working on her associate’s degree at night so she can get a better job?

These disparities also explain why another popular, more modest free college plan—free community college only—isn’t close to enough to solve the problem of affordable higher education.

To be clear, $0 tuition community college is, on its own, a perfectly good idea. Community college should be free. States including Tennessee already have similar initiatives under way. But many of these programs aren’t especially generous, because most community colleges are already inexpensive for everyone and “free,” or close to it, for low-income students who qualify for the maximum federal Pell Grant of $6,095. Meanwhile, “free tuition” still leaves those students with hefty bills for books and living expenses. (Warren’s plan, to her credit, includes $100 billion in new funding for the Pell Grant program for exactly this reason.)

Additionally, some states rely much more on community colleges than others. In Illinois, 62 percent of students who are enrolled in public institutions attend community college. In Michigan and Wisconsin, the proportion is only 32 percent. The latter states use relatively open-access four-year institutions to provide affordable higher education to students with diverse academic backgrounds. A policy that makes only community college free would put these states at an enormous disadvantage—and anger their elected representatives.

Both the parsimonious free community college approach and the much more ambitious and expensive Warren and Sanders strategies have other design flaws. They do little to make sure that college is not just free but also good. Neither has strong accountability measures to ensure that colleges give students a high-quality learning experience and help them graduate on time.

The politics will also be tricky. Republicans will oppose free college on general principle, but that’s true of any legislation that involves spending more federal money to help people in need. The real political problem is that colleges will fight against free college—specifically, private colleges that are already struggling for financial survival and would get nothing from Sanders- and Warren-style plans. These schools have the ear of hundreds of members of Congress. Meanwhile, the elite universities whose graduates disproportionately populate the Washington, D.C., staffer and lobbying class don’t want more government money to keep prices low. They like being wealthy institutions that sell expensive services to rich people, and will oppose any plan with mandatory price controls and regulatory strings attached.

Any new federal free college plan should be guided by four principles. First, help students who need help the most. Second, reward states that invest their own money in higher education. Third, create incentives for colleges to cooperate with one another. Fourth, make sure that college is good as well as free. At the same time, such a plan needs to avoid the pitfalls of the Warren and Sanders approaches, which would reward the stingiest states and devote more money to the four-year students, who are, on average, less needy. 

There’s a way to achieve all these goals. It’s smarter, more politically viable, and, while still representing an enormous new federal investment in college affordability, less expensive. Here’s how it would work. 

It starts with cutting out the middleman. Instead of grants to states, the federal government should give grants directly to any public—or private nonprofit—college that agrees to join a national network of institutions dedicated to providing free, high-quality higher education. In exchange for charging zero tuition and fees for all students, in-state and out-of-state, participating colleges would receive a direct annual subsidy of $5,000 per full-time-equivalent undergraduate student. This funding would be in addition to the Pell Grant program. Pell-eligible students could use their grants tax-free to defray the costs of books and living expenses.

Five thousand dollars may not seem like much given news headlines about $70,000 college tuition and six-figure student loan burdens. But it’s actually enough money to make college tuition-free for millions of students nationwide, because most public universities and community colleges aren’t nearly that expensive, and many students get scholarships. 

Because colleges would receive a standard amount in exchange for setting tuition at zero, generous states like North Carolina would be rewarded. Public colleges in the Tar Heel State only take in $7,000 per student in tuition now, on average; many receive less. Elizabeth City State, a public historically black institution, currently takes in $3,500 per student in tuition. A grant of $5,000 per student would be enough not only to set tuition at $0, but also to invest another $1,500 per student in more professors and facilities. States that are less generous, by contrast, would have a powerful new incentive to invest in higher learning in order to bring colleges’ costs down to the point where $5,000 would cover their tuition needs. And because colleges and universities could join the network on an institution-by-institution basis, there is no state-level opt-out problem. 

Unlike the Warren and Sanders plans, this approach wouldn’t just make open-access colleges free. It would also make them better, by providing billions of dollars in new funding. But it shouldn’t stop there. To make sure college isn’t just free but is also good for the country, colleges that join the network should have to agree to certain conditions:

• Enroll a student body that is broadly economically representative of their state and region (as recently proposed by economists Caroline Hoxby of Stanford University and Sarah Turner of the University of Virginia).

• Graduate a reasonable percentage of students, as compared to peer institutions with similar missions and student profiles.

• Accept credits earned at other colleges in the network, to facilitate transferring, which in turn promotes graduation and saves students money.

• Publish annual reports detailing how they assure the quality of their work preparing students to succeed in further education, citizenship, and careers.

No college would be forced to accept this bargain, and many would decline to do so. Most selective private schools are in the business of providing a very expensive service to mostly rich students, and would have neither the means nor the inclination to forgo that money and open their doors to a demographically representative undergraduate body. 

But many colleges would jump at the opportunity. By my calculations, using publicly available federal data, if every institution that currently takes in less than $5,000 per full-time student in tuition and fee revenue chose to join the network, the cost to the federal government would be $25 billion per year. This would be a huge new investment in affordable higher education, but at only one-third the cost of the Sanders plan. (Warren’s plan would average $125 billion annually, about half of which would finance debt cancellation.) 

At that level, the network would start with nearly 1,100 colleges and universities that currently charge less than $5,000 on average. They enroll the full-time equivalent of 4.9 million students and include 821 community colleges, 208 public four-year universities, and forty-eight private nonprofit colleges. (One of those private nonprofits is Berea College, which perennially tops the Washington Monthly’s liberal arts college rankings.) Another 176 schools, educating nearly a million additional students, charge more than $5,000 but less than $6,000, creating a powerful incentive to raise additional state and private money in order to drop tuition to zero and become eligible for new federal funds. Forty-one historically black colleges and universities would qualify. 
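For readers who want to check the math, a minimal back-of-envelope sketch follows. It uses only the aggregate figures cited above; a real estimate, like the author’s, would work from college-level federal data.

```python
# Back-of-envelope check of the network's first-year cost, using only the
# aggregate figures cited above (not the underlying college-level data).

SUBSIDY_PER_FTE = 5_000        # flat annual federal grant per student
FTE_STUDENTS = 4_900_000       # enrollment across the ~1,100 qualifying colleges

annual_cost = SUBSIDY_PER_FTE * FTE_STUDENTS
print(f"Network cost: ${annual_cost / 1e9:.1f} billion per year")  # ~$24.5 billion

# The comparisons drawn in the text, for scale:
print("Sanders plan: ~$75 billion per year (the network is one-third the cost)")
print("Warren plan:  ~$125 billion per year on average, about half of it debt cancellation")
```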

Not all students would have access to free tuition colleges at first. But this problem could be partly offset through online education. To be sure, fully online education is not, by itself, the solution to America’s college access problem. Research suggests it’s unwise to put academically at-risk low-income students in cheap fully online courses. But online learning is undeniably a valuable option for many people, especially nontraditional, working, and adult students who may want to combine in-person and online courses.

A smart free college plan would expand the reach and quality of online higher education by allowing colleges in the network to create online programs that could be taken, for credit and free of charge, by students at any other college in the network, regardless of where they live. These students would count for enrollment when calculating the federal subsidy, creating strong incentives for colleges to develop high-quality programs that appeal to many students. Students, meanwhile, would have access to a wide array of online course offerings, organized in a common portal by the U.S. Department of Education, rather than relying on the online curriculum of any one institution. 

Overall, because many community colleges and open-access four-year universities currently make less than $5,000 per student in tuition revenue, the plan would focus federal resources on public institutions that have long suffered from underfunding and would help students who are most in need. The accountability provisions would ensure a baseline level of consumer protection, while giving colleges the freedom to develop different approaches to learning and maintain their unique identity. 

This approach to free college would give states an incentive to invest enough money in their higher education institutions to keep tuition free. It would restore the promise of affordable college that has crumbled over the last generation. It would halt and reverse the spiral of undergraduate borrowing that is undermining economic opportunity and public trust in higher learning. It would be a signature achievement for the next president—if she gets the policies right.

This article has been updated.

The Strange Political Silence On Elder Care
https://washingtonmonthly.com/2019/07/12/the-strange-political-silence-on-elder-care/
July 13, 2019

Millions of middle-aged women struggle to care for ailing older relatives, and the crisis is only getting worse. So why is no one talking about it?


For Alexis Baden-Mayer, who lives with and cares for her two elderly parents, the audiobook of Marcel Proust’s six-volume novel, In Search of Lost Time, has two distinct benefits. First, it provides 150 hours of literary distraction. Second, it features a character who jokes about excrement. 

“Play it in the car as you drive your loved ones to doctors’ appointments,” she wrote in a blog post about her caregiving experience. “Play it each morning as you strip soiled linens from the mattresses, make beds and fold laundry. Play it, as I have, to try to calm and distract yourself as you bark commands to your dementia-addled mother to wipe her butt and drop the toilet paper in the toilet.”

Baden-Mayer, a freckled forty-five-year-old, put her house on Airbnb three years ago and moved with her husband and two kids into her parents’ home in Alexandria, Virginia. Her mom, who has Alzheimer’s disease, was no longer able to take care of her dad, who had suffered from heart failure. “I didn’t really have a good idea of what I was getting into, quite honestly,” she said, reflecting on what a truly frank conversation with her husband would have sounded like: “What do you think of living with my parents for about ten years while their health declines and they die?” 

When I went to visit one morning in May, her day had started at five a.m. Hair still wet from her shower, she steered her mother through a morning routine. She told her where to put her hands to wash herself, then placed her mom’s feet through the leg holes of her adult diaper. Without Baden-Mayer’s kind but firm instructions, her mother would start staring into space, seemingly happy but unsure of where to go next. More than once, when her mother was smiling at me, perplexed, Baden-Mayer explained my presence. (“She’s a journalist. She’s working on a story about family caregiving.”) The long dining room table was a laundry-folding assembly line, piled with six people’s clothes. 

Baden-Mayer is one of about thirty-four million Americans providing unpaid care to an older adult, often a family member. Most of these caregivers are middle-aged, and most are women. They are individually bearing most of the burden of one of America’s most pressing societal challenges: how to care for a population of frail elders that is ballooning in size. 

Most people assume that Medicare will cover the type of long-term personal care older people often need; it does not. Neither does standard private health insurance. And the average Social Security check can only make a medium-sized dent in the cost of this care, which can easily exceed $100,000 a year if provided in a nursing home. Medicaid, unlike Medicare, does cover long-term care, but only for patients who have exhausted their savings, and coverage, which varies from state to state, can be extremely limited. So the safety net you thought would catch you in old age is less like a net and more like a staircase you get pushed down, bumping along until you’ve impoverished yourself enough to hit Medicaid at the bottom. 

Private long-term care insurance exists, but it’s the designer bikini of insurance: too expensive, skimpy coverage. Since people tend to buy it only when they know they’ll soon be making a claim, there are never enough healthy people paying into the plans to keep them affordable. Insurance companies have realized this and jacked up premiums—or stopped selling policies altogether. 

Meanwhile, the cost of hiring a home health aide to take care of a frail parent can add up to $50,000 or more per year. So tens of millions of individual women across the United States wind up providing the care themselves for free, and bearing its cost in the form of stress, lost wages, and lost opportunities to nourish their other needs, and their families’. When we talked on the phone, Baden-Mayer wondered aloud, “Why is it that we don’t have a good system that we can plug into when our parents need care?” 

Why indeed? You might expect that a problem that affects so many people so profoundly would become a major political issue. Recent years have seen other issues, including ones that disproportionately affect women in their personal lives, become highly politically salient—from sexual harassment and pay equity to the push for universal pre-K education and improved access to child care. Yet even though American women today are politically organized and running for office in record numbers, elder care remains widely viewed as a purely personal matter. You could be a news junkie, following the 2020 race closely, and have heard nothing about it. 

Why is that? And could long-term care go from being a sleeper issue to one that boosts a candidate out of the 2020 pack? 

Demographic trends have prodded and pulled America’s long-term care problem into a long-term care crisis. A driving factor is the increasing risk of reaching a point in our lives when we can no longer perform some of the essential activities of daily life, from getting dressed to using the toilet. Approximately half of us will need some form of long-term care, and an estimated 15 percent will face related medical bills exceeding $250,000. 

Paradoxically, this is partly due to advances in medicine. Since the 1940s, for example, antibiotics have dramatically reduced the numbers of Americans dying of pneumonia, which was once a leading cause of death among older Americans. But advances like those mean more people are living long enough to contract debilitating chronic conditions like Alzheimer’s. 

On the flip side are broad public health trends like obesity and the spread of sedentary lifestyles. These have led to an epidemic of chronic diseases like diabetes that, while not necessarily fatal, leave more and more people struggling with disabling conditions for decades.

Then there’s the looming impact of Baby Boomers hitting retirement, so massive that it’s often referred to in the terminology of natural disasters, like “the gray tsunami.” If you look at a chart of the ratio of middle-aged adults (potential caregivers) to people over eighty (the people most likely to need care), it’s like the steep downhill of a roller coaster, starting at seven to one in 2010, and plummeting to four to one by 2030. In addition, average family size has shrunk significantly since the 1970s. With smaller families now the norm, the strain on individual caregivers within families has increased enormously. The imbalance will become even more acute if America cuts back on the flow of immigrants, who make up a large portion of professional caregivers.  

This was easy to see coming, by the way. As far back as 1971, Congress held hearings on the impending crisis in long-term care, and throughout the 1980s and ’90s, think tanks and blue-ribbon commissions issued a stream of reports on what to do about it, predicting catastrophic consequences by the 2020s if the problem went unaddressed. But it did go unaddressed, perhaps because, like climate change, it was both unpleasant to contemplate and seemingly far off in the future. Meanwhile, other countries with aging populations, including Japan, Canada, and most European nations, took action, offering a range of substantial benefits to family care providers, from directly compensating their work to subsidizing professional home care. But in the United States, public attention to long-term care faded even as the problem grew increasingly acute. 

Sandra Levitsky has a theory about why long-term care has not yet gained traction as a political issue. A sociologist at the University of Michigan, she’s the author of Caring for Our Own: Why There Is No Political Demand for New American Social Welfare Rights, a book she researched in part by schlepping between adult day care centers, nursing homes, and a hospital in Los Angeles, interviewing caregivers and scribbling notes at the back of support group meetings. 

Levitsky found that the lack of public outcry for long-term care didn’t reflect an absence of need. Instead, it was driven by a widely held belief that caregiving is a family responsibility, tied up with what it means to be a good son or daughter. And because it’s so time intensive and takes place in the home, caregiving is often extremely isolating, making it hard to see it as a systemic issue. One woman who was caring for her husband told Levitsky that when she went to a support group for the first time, “I just started to cry. I just thought, ‘My god! I’m not in this alone!’ ” 

Rachel McCullough, an organizer affiliated with Caring Across Generations, a national campaign, noticed this while canvassing door to door in the Bronx. She found that asking people whether they were a caregiver didn’t really work; people didn’t identify themselves that way. Instead, she found that to get a conversation going, she had to ask more descriptive questions—“Have you taken care of your parents?”—or share her own stories. 

The fact that people don’t identify as “caregivers” helps to explain why even women who are otherwise politically engaged don’t view the care they provide to their aging parents as a political issue. Baden-Mayer is a good example. A former women’s studies major, she has a laptop as layered with stickers as a college student’s—“Vote YES on Prop 37”—and she works full time as a political director for a nonprofit advocacy group for organic food consumers. In the foyer of her house hangs a photo of a man throwing up a peace sign in front of the U.S. Capitol. If anyone were to connect their own experience to a systemic problem, you’d expect it to be someone like her. But she admits that, for a long time, she really didn’t. And she definitely didn’t question the relative silence from lawmakers on the issue.

Another barrier to politicizing the long-term care crisis is the fact that there’s no clear bad guy. As McCullough put it: environmentalists have the fossil fuel industry, gun control activists have the NRA, and consumer advocates have the big banks. Who, exactly, are caregivers fighting? Instead of feeling anger, which research shows is linked to political activation, people struggling with providing for their parents tend to feel guilt and shame, directing the blame inward. Once the stressful experience is over, most people want to put it behind them. Still, Levitsky found that some people come out of it wanting to improve the system, particularly middle-aged women. “It was a subset of the group, but they were really politicized,” she said. “And that’s the constituency that I do believe could be mobilized.”

But someone is going to have to mobilize them. Even when participants in Levitsky’s study were directly asked about whether their experience had changed their attitude about the government’s responsibility for helping, a common response was that they simply hadn’t thought of the government’s role. Levitsky said, “When you believe something is so natural, you can’t imagine things being another way.” 

In fact, when it comes to long-term care, it is possible for things to be another way. In mid-May, for example, Washington State Governor and long-shot presidential candidate Jay Inslee signed off on the country’s most sweeping long-term care bill. The law provides eligible residents with a lifetime benefit of up to $36,500 to pay for things like meal delivery, nursing home fees, and home help, including paying a family member who is providing care. 

Passing the bill required a diverse coalition—including the nursing home industry, home health worker unions, disability rights advocates, and the Alzheimer’s Association—to put aside their differences and get on the same page when talking to legislators. It helped that one of the law’s champions, State Representative Laurie Jinkins, had both professional public health experience—she works for a county health department—and a personal connection to the issue. In a speech on the state house floor in support of the bill, Jinkins explained how her mother-in-law ended up having to spend herself into poverty to qualify for Medicaid when she could no longer live alone. 

A crucial factor in getting the bill passed was a study, conducted by the national actuarial firm Milliman, showing that it would soon save hundreds of millions per year in Medicaid costs. “What we found was that it was critically important that legislators could have confidence in the numbers,” said Sterling Harders, president of a regional SEIU union that represents care workers, who advocated for the bill. 

The law is financed by a 0.58 percent state payroll tax. How can the state finance such a large new benefit with such a modest tax hike? The key is that everyone contributes, including people who are still young and healthy, and to reap the benefit, you have to pay into the system.

This solves the problem of adverse selection that makes the private provision of long-term care ruinously expensive. Rather than trying to buy insurance only when they’re old and frail enough to expect to make a claim in the near future, Washington residents are now in effect compelled to spread out the cost of their insurance over their entire adult lives, making it much more affordable. 
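A toy calculation shows how stark the difference is. Only the benefit cap below is Washington’s; the probabilities and paying-year counts are invented purely for illustration (the state’s actual rates came from the Milliman actuarial study).

```python
# Toy illustration of adverse selection in long-term care insurance.
# Only the $36,500 benefit cap comes from the text; the claim shares and
# paying-year counts are invented for illustration.

LIFETIME_BENEFIT = 36_500

def break_even_premium(share_who_claim: float, paying_years: int) -> float:
    """Annual premium that covers the pool's expected payout."""
    return (LIFETIME_BENEFIT * share_who_claim) / paying_years

# Mandatory pool: everyone pays across a working lifetime, and roughly
# half of adults eventually need some long-term care.
print(break_even_premium(0.5, paying_years=40))   # ~$456 a year

# Voluntary market: buyers skew heavily toward people who expect to claim,
# and they tend to buy late in life.
print(break_even_premium(0.9, paying_years=10))   # ~$3,285 a year
```

Same benefit, roughly seven times the premium: that gap is why voluntary plans collapse and mandatory ones pencil out.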

Washington’s approach is also much more efficient than expecting people to save up a nest egg to cover the cost of their own long-term care. Roughly half of us will never need it; among those of us who do, some will need it only for a short time, while others will consume hundreds of thousands of dollars of care over several years. And yet for most of our lives we can’t really know which group we belong to. That makes long-term care a logical candidate for financing collectively through insurance, so long as paying into the system is mandatory. When plans aren’t mandatory, not enough healthy, young people self-select to buy them, and they tank. That’s one of the reasons that the Obama administration ultimately had to pull the plug on its attempt to address long-term care; because the program was voluntary, not enough people enrolled, making premiums far too expensive.

That’s not to say that providing universal long-term care insurance wouldn’t cause sticker shock when it shows up in government budgets. But the fact is that, one way or another, society is already bearing these costs—mostly in the form of care provided by stressed-out, uncompensated women who have the misfortune of having a family member who needs care and can’t afford to pay for it. What we need is a way to distribute that burden more equitably. 

“You can divide the world of politicians into two groups,” said Howard Gleckman, a senior fellow at the Tax Policy Center. “It’s not Democrats and Republicans, it’s people who have been caregivers and people who haven’t.” When he’s talking to members of Congress who recognize the problem, it’s far more likely that their understanding comes from personal experience than from an outpouring of calls from constituents. Gleckman himself started working on the issue after he and his wife struggled to care for their own parents. “Don’t underestimate the importance of policy by anecdote,” he said.

It’s a point that several other advocates and policy experts echoed. One organizer working on caregiving issues in Michigan found an ally in a Republican legislator with a prime perch on a budget committee. That legislator’s mother, the organizer found out, had qualified for Medicaid and was placed in a nursing home because there was a long waiting list for home services. 

One lawmaker who feels strongly about an issue could be worth twenty who merely support it. A prominent example came in 2008, when Congress voted on a bill requiring insurers to cover mental illnesses at the same level as physical ones. It was the result of over a decade of determined lobbying from Senator Pete Domenici, a senior Republican, fiscal hawk, and chairman of the powerful Senate Budget Committee. Otherwise an unlikely champion, Domenici was propelled by his daughter’s experience with schizophrenia. He joined forces with one of the most liberal senators at the time, Minnesota Democrat Paul Wellstone, whose brother had suffered from mental illness, and together they built alliances with a number of other legislators who had likewise been personally affected. 

The prospects for long-term care coverage at the national level got a boost this past April, when Bernie Sanders added it to his single-payer health care plan. But if support for family caregivers is to become a priority in the coming election cycle, it may be because some of the other candidates have had their own brushes with long-term care. Amy Klobuchar, the 2020 candidate with perhaps the longest legislative history of working on issues that affect seniors, has talked about her father’s struggle with alcoholism. Cory Booker has been vocal about Parkinson’s disease, which his father suffered from, and is proposing an expansion of the Earned Income Tax Credit that would give caregivers more money. “I watched my mother be his primary caretaker, and it affected her physical health,” he told a small crowd at a campaign event in February. “The personal pain I saw it causing my mom was devastating to me.” He added, “This is a common problem in our country. We are weak in America when we let people struggle and suffer in isolation.” 

Rachel McCullough, the organizer in New York, said her group is already thinking about how to bring this issue to the forefront of the 2020 presidential campaign. The group has organizers and volunteers working on a state campaign in Iowa, which is dense with national press and where it’s relatively easy to get face time with candidates. In televised town hall meetings, their Iowa counterparts may try to force candidates to articulate a position on caregiving. McCullough said, “A case we’re trying to make, and that we will be making to the presidential candidates, is if their goal in the face of Trump and Trumpism is to speak to and unite the vast majority of Americans, with a focus on women—this is the issue.”

Superpredators https://washingtonmonthly.com/2019/07/12/superpredators/

How Amazon and other cash-burning giants may be illegally cornering the market.

Earlier this year, the Institute on Taxation and Economic Policy, a liberal think tank, reported that Amazon, one of the most valuable corporations in the world, paid no federal taxes on a supposed $11.2 billion in profits in 2018. Many Americans felt outraged, and shortly thereafter Senator Elizabeth Warren introduced a plan to force companies like Amazon to pay their “fair share” of taxes. 

But in this case, the outrage was somewhat misplaced. We should not be astonished that Amazon pays no taxes, for the simple reason that it doesn’t actually turn a profit. While the company used accounting techniques to show positive earnings on paper, its zero-dollar tax bill more accurately reflects the nature of the business.

Today, many firms, not just Amazon, have adopted a growth strategy based on rapid expansion and negative cash flow. They are propped up by investors and by low interest rates that provide cheap and easy access to capital. They can’t be unprofitable forever, the thinking goes, and they must have an exit strategy, even if they don’t share it publicly. Until then, they continue to hemorrhage cash in their quest for an ever greater market share. The orthodox narrative on Wall Street is that these firms are reinvesting what would otherwise be profits, instead of sharing them with investors and shareholders. This narrative suggests that we are witnessing one of the greatest wealth transfers in the history of capitalism. By investing all their profits back into the firm, these companies are essentially transferring wealth from their investors to us, the consumers.

However, it’s unclear how, or even whether, that’s actually happening. Selling below cost is a classic way for aspiring monopolists to seize market share from smaller competitors who can’t afford to consistently lose money. This technique, known as predatory pricing, is bad for consumers and for the economy as a whole, because it drives companies out of the market not because they’re less competitive or efficient, but because they don’t have enough funds to survive without turning a profit. That’s why predatory pricing is illegal under federal antitrust law.

Today the U.S. economy is rife with spectacularly valuable corporations that fail to turn a profit, relying on the continuing faith of investors. It’s not just Amazon: Uber, Netflix, and WeWork are some of the many other examples. To the average person, these companies appear to be using super-low prices to gain market share. But if predatory pricing is illegal, how can this be happening? 

The answer is that what the average person thinks about Amazon’s business strategy doesn’t matter, because the Supreme Court has all but defined predatory pricing out of existence. Taking cues from the conservative law and economics movement, the Court has held that the strategy is irrational as a matter of economic theory, because for it to pay off, the monopolist will have to recoup today’s losses by raising prices dramatically in the future. But that won’t work, the logic goes, because when it does, competitors will swoop in and offer the same service or product at lower prices, frustrating the entire scheme. Under that thinking, the Court has set up rules making it nearly impossible to prove that predatory pricing is happening.

But the Court and most antitrust scholars have been making a systematic mistake. The prevailing doctrine assumes that there is only one way for a company to recoup its losses once it has cornered the market: raising prices. It ignores the other half of the profit equation: costs. This is a serious error, because giants like Amazon have tremendous power to lower costs by squeezing other parties, like employees and suppliers. When you take both costs and prices into account, predatory pricing begins to look much more rational, and therefore more common, than the courts have imagined. 
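To make the recoupment arithmetic concrete, consider a stylized sketch; the dollar figures are invented for illustration and come from no company’s actual books. Per-unit profit is

\[
\pi = p - c,
\]

where p is the price charged and c is the per-unit cost. In the predation phase, the firm sets p below c, say a price of $8 against a cost of $10, losing $2 on every sale. In Bork’s telling, recoupment requires later raising the price above cost, the very move that supposedly invites new entrants and dooms the scheme. But the identity cuts both ways: holding the price at $8 while driving the cost down to $5, by squeezing suppliers, workers, or third-party sellers, yields a $3 margin per unit with no telltale price increase at all.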


In early June, the Federal Trade Commission, Department of Justice, and House Antitrust Subcommittee all announced that they would be opening investigations into tech companies. So far they haven’t said anything about whether they will look into predatory pricing. It would be a mistake not to. Amazon and the other “unicorns” on Wall Street claim to be heavily investing in innovation now to reap the benefits in the future. But there is reason to believe that in addition to making these legitimate investments, they are spending heavily to subsidize selling below cost. In other words, they may be getting away with predatory pricing in broad daylight. If federal regulators don’t start asking the right questions, however, we may not know until it’s much too late. 

Predatory pricing is not a new phenomenon. It was one of the allegations brought against John D. Rockefeller’s Standard Oil in 1911, when the Supreme Court decided to break up the company. In subsequent decisions, the Court came to hold that the practice was illegal under federal antitrust laws. The rationale was simple. It’s good when a company gets so efficient that it can charge less than its rivals. But if a company reduces prices below its own costs, then that doesn’t reflect efficiency, and must instead be aimed at cornering the market. That’s harmful to consumers in the long run, because eventually the company will start charging higher prices than it would if it hadn’t crushed the competition. 

Through the late 1970s, cases involving predatory pricing were common, but critics pointed out that the courts lacked a consistent framework for deciding whether it was happening in a given situation. In 1975, a groundbreaking Harvard Law Review article established a straightforward test: to prove that a company’s pricing scheme is predatory, a plaintiff must show both that the prices are below the per-unit cost of producing the product or service and that using that strategy to eliminate competitors is economically rational. The scheme counts as rational if the firm will be able to recoup its losses in the future “through higher profits earned in the absence of competition”—the whole point of having a monopoly.

But just a few years later, this formula was subtly revised. In his 1978 magnum opus The Antitrust Paradox, which provided the blueprint for a conservative counterrevolution in the field, Robert Bork narrowed the definition of “recoupment” from turning a profit to specifically charging higher prices. And any company whose strategy depended on raising prices in the future, he reasoned, would run into the problem of new competitors emerging to undercut it. Predatory pricing, he concluded, was almost always irrational, and courts should be highly suspicious of parties bringing forward such claims.

That position, like most of Bork’s views on antitrust, made its way into the official doctrine of the Supreme Court, which assumed that investors and shareholders would refuse to allow companies to lose money on a scheme that was unlikely to work. And so, according to the judiciary and mainstream antitrust lawyers, we have little to fear from the practices of companies that sell products and services for ridiculously low prices, year after year, without making a profit. 

But the rise of platforms that are both insanely valuable and persistently unprofitable has made the Court’s assumptions look increasingly shaky. As Amazon, Netflix, and Uber have shown, investors and shareholders can be more than willing to tolerate losses if they expect the firm to eventually translate these losses into a money-making scheme. The question is: How? 

As I argued in a paper recently published in the Oxford Journal of Antitrust Enforcement, scholars on both sides of the antitrust debate have been overlooking the other side of the predatory pricing equation: lowering costs without passing those savings along to the consumer.

Take Amazon. While the company publicly claims to be profitable, it has reported a cash outflow in statements filed with the Securities and Exchange Commission as recently as the end of 2017. That means that—once various debt obligations were taken into account—Amazon was losing money. For example, for the 2017 calendar year, it reported a net cash outflow of $1.5 billion. This raises the question: If Amazon is indeed losing money because it is pricing below cost, what’s its strategy for recouping those losses down the line? 

In my paper, I argued that Amazon, which controls nearly 50 percent of all U.S. e-commerce, might eventually recoup its losses by growing so efficient that tomorrow’s costs drop far below today’s prices. Then, in the absence of competition, it would face little pressure to pass the savings on to customers.

However, in the weeks that followed the publication of the paper, a more ominous hypothesis occurred to me. It begins with the fact that Amazon’s e-commerce business is really composed of two main parts. In the first, Amazon operates as a retailer, buying products in bulk from vendors and then selling directly to consumers. Under this model Amazon bears all of the costs associated with storage, fulfillment, and shipping. This operation is similar to a traditional brick-and-mortar shop. The second branch is the Amazon Marketplace: a virtual mall in which sellers pay Amazon for the right to display and sell their goods on its platform. The crucial difference is that in the Marketplace, sellers shoulder the costs associated with storage and fulfillment. In recent years the share of the Marketplace has grown dramatically. In 2001, 6 percent of merchandise sales on Amazon came through the Marketplace. Today, the figure is around 58 percent.

Here we see the potential for Amazon to recoup its losses in a way that Robert Bork never imagined. Instead of raising consumer prices, Amazon can sell the same products, for the same price, but push more and more vendors to become third-party sellers on the Marketplace—offloading the costs of fulfillment and allowing Amazon to charge those same entities higher fees. 

This recoupment process appears to be under way. Bloomberg recently reported that Amazon was preparing for a “supplier purge”—ceasing to buy from thousands of wholesale vendors, and pushing them instead to become sellers on the Marketplace. At the same time, there is anecdotal evidence that Amazon is making terms less generous to sellers, so that Amazon keeps a bigger cut of the money coming in. In early May, I was contacted by the CEO of a company that has generated over $65 million in revenue on the Marketplace over the last five years. He spoke on the condition of anonymity, since he relies on Amazon for his livelihood, but agreed to let me look at his company’s books. His story matched accounts I’ve heard from other sellers. 

Until recently, the CEO explained, Amazon treated his company well, because sellers like him solved a key problem: big brands were refusing to sell directly to Amazon, meaning their products were unavailable on the site. Sellers on the Marketplace filled this crucial void. They would buy products from the big brands and then sell them on Amazon, thereby circumventing the embargo. Amazon treated sellers like royalty in exchange. For instance, the CEO explained, the company charged unusually low fees for storing his company’s products, and if Amazon made a mistake in the fulfillment process or lost products shipped by sellers, it would quickly reimburse them, no questions asked. And most importantly, it offered its own fulfillment services, known as “Fulfilled by Amazon,” or FBA, at extremely low prices—far lower than what it would have cost the CEO to fulfill orders on his own. Those were the good days, he said, when his business grew from just under $1.5 million to more than $6 million in annual revenue. 


But now he fears that his company’s days are numbered, due to ever-shrinking profit margins. What happened? First, FBA became much more expensive. In 2014, fulfillment fees stood at 17 percent of the seller’s total costs; today they hover at about 27 percent. Second, Amazon now charges much more for inventory storage: over the past four years, the monthly rate per cubic foot that the CEO pays has increased by over 40 percent. (Add to this the fact that, more recently, the CEO said, he has had to pay Amazon for advertising for his products to have a chance of appearing at the top of search results. In the past, a successful product could top the list without a boost from ad money.) Amazon touts that it continuously invests in research and development and in vertical integration (such as buying its own fleet of planes), ever striving toward greater cost efficiencies. It’s hard for the CEO to square that with the fact that fulfillment services fees have grown by about 60 percent as a proportion of his costs over the past five years.
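A quick check shows that the CEO’s figures are internally consistent: a fee share that climbs from 17 percent of his costs to 27 percent has grown, in relative terms, by

\[
\frac{27 - 17}{17} \approx 0.59,
\]

or roughly 60 percent, even though the share itself rose by only ten percentage points.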

Why not leave Amazon fulfillment, then? It’s too risky, the CEO explained: even if he could raise enough capital to set up an efficient fulfillment infrastructure, his company would risk bankruptcy in the event that Amazon suspended its account for any missed or delayed deliveries. And leaving Amazon entirely would be suicide for most sellers. 

All in all, it appears that if Amazon was indeed engaging in predatory pricing, it has now moved on to the recoupment phase by shifting the costs of fulfillment onto third parties and by squeezing higher commissions and fees from those sellers. By Amazon’s own account, those represent one of the company’s fastest-growing sources of revenue. In 2018, Amazon’s cut of the revenue from third-party sellers totaled $42.7 billion, which translated to nearly one out of every five dollars the company made. 

The threat of predatory pricing goes far beyond Amazon. Uber, for instance, has been up front in public announcements, including the paperwork for its IPO, about the fact that it is running its Uber Pool service at a loss in order to gain market share. And it has even discussed the need to recoup its losses by lowering the cut of fares that drivers receive. It’s far from clear that Uber’s strategy will work, as its lackluster IPO suggests—the ride-share industry may be impossible to monopolize, since the barriers to entry for new rivals are relatively low. But Uber is just one high-profile example. WeWork, a cash-burning mammoth gearing up for its own IPO, is another. It could be leasing and buying office space around the world in order to attract customers, only to recoup its losses by squeezing landlords once it snatches up a large enough share of the world’s prime real estate.

Ultimately, however, we generally lack concrete evidence that these companies have been charging below cost—the key way to determine whether predatory pricing is taking place—because that information is not part of mandatory corporate disclosures. Similarly, we can’t know for sure that the squeeze Amazon sellers are feeling these days is part of a recoupment plan. But the available indicators should be triggering alarm bells in Washington. As part of their upcoming investigations of the tech industry, the Federal Trade Commission, Department of Justice, and House Antitrust Subcommittee should make companies turn over the information that would show whether they’re violating the prohibition on predatory pricing. For a more forward-looking solution, regulators should create new rules requiring companies that consistently lose money to submit a confidential report showing whether they’re pricing below cost.

The good news is that at least one powerful member of the judiciary, which wields disproportionate power over how antitrust law is applied, seems somewhat attuned to the dangers of corporations that use their position as a dominant buyer to impose unfair terms on sellers—a phenomenon known as monopsony power. 


As the judge put it in a 2017 decision of the U.S. Court of Appeals for the D.C. Circuit, “[T]he exercise of monopsony power to temporarily reduce consumer prices does not qualify as an efficiency that can justify an otherwise anti-competitive [activity]. . . . Although both monopsony and bargaining power result in lower input prices, ordinary bargaining power usually results in lower prices for consumers, whereas monopsony power usually does not, at least over the long term.”

That was Brett Kavanaugh, now a member of the Supreme Court and, perhaps surprisingly, a potential swing justice when it comes to antitrust law. In May, Kavanaugh joined the Court’s four liberals to allow an antitrust case to proceed against Apple based on the terms it imposes on third-party developers in the App Store. “A retailer who is both a monopolist and a monopsonist may be liable to different classes of plaintiffs—both to downstream consumers and to upstream suppliers—when the retailer’s unlawful conduct affects both the downstream and upstream markets,” he wrote in the majority opinion. 

There’s no doubt that negative cash-flow juggernauts like Amazon have used technological innovations to provide consumers with excellent services at a fantastic bargain. The question is whether they are also using illegal tactics to push competitors—including tomorrow’s would-be innovators—out of the market. If they are, and they aren’t stopped soon, we will likely come to find that the bargain wasn’t worth it.

Breaking the Faith https://washingtonmonthly.com/2019/07/12/breaking-the-faith/

It took centuries to fulfill James Madison’s unique vision of religious freedom. Donald Trump threatens to undo it.

American history is checkered with ugly bouts of religious persecution—from Protestant mobs burning convents in the 1830s, to Henry Ford publishing anti-Semitic propaganda in the 1920s, to anti-Muslim violence after September 11. But there was one thing that, until 2016, had never happened before in the history of our country. No one had ever won the presidency on a campaign that prominently and persistently attacked a religious minority.

As a candidate, Donald Trump didn’t just demonize Muslims rhetorically. He offered specific policies that ran against our shared consensus about religious freedom. He proposed banning Muslims from immigrating to the country, claiming that Muslim refugees were “trying to take over our children and convince them how wonderful ISIS is and how wonderful Islam is.” Just as stunning, Trump said he would “absolutely” require American Muslims to register in a special database to make it easier for the government to track them. Finally, he said that “there’s absolutely no choice” but to close down some American mosques as a way of combating extremism.

Anti-Muslim animus grew as the 2016 election approached and Republican voters learned to take their cues from Trump. The percentage of Republicans who believed that at least half of Muslims living in the United States were anti-American jumped from 47 percent in 2002 to 63 percent in 2016, according to the Pew Research Center. Most shockingly, according to a Public Policy Polling survey, only half of Republicans were willing to say that Islam should be legal in America. So when Trump, a week after his inauguration, signed an executive order banning foreign nationals from seven predominantly Muslim countries from entering the United States, he was doing so with the overwhelming support of the voters who put him in office.

But then something encouraging happened. Thousands of people of different faiths flooded airports to protest the Muslim ban. The courts blocked the ban from taking effect, leading Trump to introduce what he called a “watered-down” version. Federal courts then blocked that one, too, because it still prioritized Christian refugees over Muslims. In June 2018 the Supreme Court’s five conservatives upheld a third version of the ban, revised to drop the preference for Christians and add two token non-Muslim countries (including North Korea, which of course has never been a significant source of U.S. immigration). 

The Muslim ban exemplified two facts about religious freedom in America: It is deeply baked into our system, cherished as one of our most sacred liberties. Yet it is also fragile. The consensus can unravel quickly.


On some level, liberals understand that this is a problem. But religious freedom is rarely top of mind on the left. To some degree, this reflects the right’s success at casting religious freedom as a conservative issue—one that typically concerns expanding the role of conservative Christianity in the public sphere. It is also because the Democratic coalition includes more atheists and nonreligious people. But it’s a mistake for liberals to ignore religious freedom. First, remember that the most successful progressive movements in history were driven in great measure by religion. Abolitionism and the twentieth-century civil rights movements were to a great degree religious crusades that drew power from their ability to use language and ideas that spoke to the fundamental beliefs of a broad range of Americans. 

More important, when religious freedom collapses, it is the marginalized who suffer most. The moral commitments of liberalism thus require that the right to worship freely be defended. But in order to do that, we first need to understand the specifically American approach to religious freedom—an approach unique in the history of the world.

Societies have puzzled for millennia over how to have both religion and freedom. Today, most nations still have not found the right balance. More than three-quarters of the world’s population lives in countries with limited religious freedom, according to Pew, and 42 percent of nations still have an official or preferred religion. Varieties of oppression have flowered: Eastern Orthodox Christians harass Protestants in Russia, Muslims persecute Coptic Christians in Egypt, Buddhists attack Muslims in Myanmar. Even Western democracies have stumbled, as when, in 2016, French policemen forced female Muslim beachgoers to strip off their head scarves and burkinis on the grounds that their religious attire showed disrespect to secularism.

By comparison, the United States was, at least until the Trump presidency, managing its religious diversity well. America is home to 350,000 houses of worship, from Adventist to Zoroastrian, from urban storefronts to Christian mega-churches that hold 40,000 people. Nearly three-quarters of Americans say they pray at least once a week. Notably, affluence has not dampened our religiosity as it has in other countries. The Pew Research Center recently mapped the relationship between wealth and religious practice. On the upper left of the chart is a cluster of countries that are religious and poor—Afghanistan, Nigeria, Guatemala. On the lower right are wealthy, secular nations, including Norway, Switzerland, and Germany. Way off by itself in the upper right of the chart is a single stray dot: the United States, wealthy and religious. America has reduced religious persecution without subduing religious passion.

But the struggle to make religious freedom real in America has been long and tempestuous. As with civil rights, the journey began with a set of ideas. The most significant visionary—and the most effective activist for religious liberty—was James Madison, who wrote the seminal treatise “Memorial and Remonstrance Against Religious Assessments,” engineered the passage of the Virginia Statute for Religious Freedom, and guided the creation of both the United States Constitution and the First Amendment. More than anyone else, Madison devised the ingenious, counterintuitive, and often misunderstood blueprint for the religious liberty we enjoy today.

Madison’s views were shaped by a shocking wave of religious persecution against Baptists near his home when he was a young man in Virginia. In 1771, in Caroline County, an Anglican minister approached the pulpit where Reverend John Waller was preaching and jammed the butt of a whip into his mouth. Waller was dragged outside and brutally beaten by a local sheriff. He then spent 113 days in jail. This was only one of 150 major attacks between 1760 and 1778 against Virginia’s Baptists, who today would be called evangelical Christians. In 1774, in a letter to a friend, Madison, then twenty-two, complained that the arrest of Baptist preachers “vexes me the most of any thing.”

Madison went on to devise a two-part formula for religious freedom. First, he argued that the best way to promote religion was to leave it alone. This was revolutionary. In all of previous human history, those who wanted to encourage religion had enlisted the government’s help. Madison believed that the state should neither constrain nor coddle religion and, above all, that it should not favor one faith over another. Even well-intentioned efforts would backfire, he insisted, sapping religion of its vitality. 

Second, he wanted religion to have its own checks and balances. Skeptical of the efficacy of mere “parchment barriers”—lofty declarations of rights in constitutions—Madison believed that the surest path to religious liberty would come from a “multiplicity of sects” all jostling for followers. In a free marketplace of faiths, no one religion could dominate. Spiritual innovation would spread. New styles, denominations, and religions would continually emerge, creating still larger constituencies for religious freedom. Madison approached religion the way an early-twentieth-century progressive approached capitalism: he wanted open competition, but with rules to keep the big players from undermining the upstarts.

In the first quarter of the nineteenth century, the First Amendment, ratified in 1791, helped trigger a virtuous circle of liberalization. One by one, states dropped their religious regulations, getting rid of taxation-based establishments and most religious tests. Meanwhile, religious fervor erupted in the form of the evangelical Second Great Awakening, which fueled new denominations and styles (especially among Methodists and Baptists). Whole new religions, like Mormonism, sprouted. The two trends reinforced each other. Less regulation meant more religious newcomers, who then demanded still more freedom. Around the same time, immigration, especially a flood of Catholics from Ireland, further contributed to diversity.

In 1819, Madison concluded that the First Amendment had worked well—not because of the decline in religious persecution but because of the rise in enthusiasm: “On a general comparison of the present & former times, the balance is certainly & vastly on the side of the present, as to the number of religious teachers, the zeal which actuates them, the purity of their lives, and the attendance of the people on their instructions. . . . The number, the industry, and the morality of the priesthood & the devotion of the people have been manifestly increased by the total separation of the Church from the State.” 

Religious freedom was nonetheless still in an early stage of development. Persecution of unpopular religious minorities continued into the nineteenth and twentieth centuries, in part because the First Amendment only applied to the national government. The persecution followed certain patterns. Minorities were often depicted as violent and too alien to ever fully blend into America. An 1838 editorial in a Missouri newspaper  declared of Mormons, “Their manners, customs, religion and all, are more obnoxious to our citizens than those of the Indians, and they can never live among us in peace. The rifle will settle the quarrel.” 

Harsh words like these led to horrific acts. In the fall of 1838, Missouri Governor Lilburn Boggs issued Executive Order 44, declaring that “the Mormons must be treated as enemies, and must be exterminated or driven from the State if necessary for the public peace.” Three days after the order was issued, on October 30, 1838, the biggest massacre of a religious minority in American history occurred. About 250 Missourians, including a state senator, arrived at Haun’s Mill, a small Mormon community, and opened fire. The mob murdered nineteen Mormons, including children, and wounded fifteen.


Catholics were likewise thought to be unable to accept or understand American democracy. In 1835, the famous minister Lyman Beecher warned that Catholics were a “dark minded, vicious populace—a poor, uneducated reckless mass of infuriated animalism,” and that the Catholic Church was working to “throw down our free institutions.” The day after one of Beecher’s sermons, in Boston, a few thousand people gathered around the Ursuline Convent in Charlestown. (It’s not clear whether any of them had attended Beecher’s sermon.) A few hundred of the men busted through the convent gate, chanting, “Down with the pope! Down with the convent!” While the mother superior hurried the nuns and the students out the back, the men rampaged, destroying Bibles, the nuns’ belongings, and musical instruments. They raided the crypt, collecting the teeth of deceased nuns as souvenirs. Then they burned the convent to the ground while a fire company stood by and watched.

Religious minorities were often depicted as ethnically alien. In a report submitted to the U.S. Senate in 1860, military doctor Robert Bartholow described a typical Mormon: “yellow, sunken, cadaverous visage; the greenish-colored eyes; the thick, protuberant lips, the low forehead; the light, yellowish hair, and the lank, angular person, constitute an appearance so characteristic of the new race, the production of polygamy, as to distinguish them at a glance.” In 1870, Thomas Nast, the nation’s most famous political cartoonist, launched a series in Harper’s Weekly that depicted Irish Catholics as apelike trolls.

Catholics were said to be loyal to a foreign power, the pope. Nativists warned that foreign governments were not sending their best people (so to speak). Lyman Beecher warned that foreign governments were “emptying out upon our shores” so many paupers—“the sweepings of the streets”—that the result would be “multiplying tumults and violence, filling our prisons, and crowding our poor-houses, and quadrupling our taxation.” 

[Image: July-19-Waldman-InteriorMormons. Credit: Courtesy Church History Collections of Latter-day Saints and Intellectual Reserves, Inc.]

The conflict was ugly, but, stepping back, we can see how it pushed freedom forward. Consider the bitter fight over teaching the Bible in public schools. In the early part of the nineteenth century, Protestants insisted that schools teach their translation of the Bible and the Ten Commandments. Catholics resisted. Things got ugly. In 1844, Catholic churches and houses in Philadelphia were burned to the ground and about thirty people died. But over time, Catholics made headway, through both the state courts (which increasingly recognized that requiring them to read the Protestant Bible violated religious freedom) and the ballot box (by then, Catholics made up a sizable voting bloc in many cities). By 1887, only one-third of public schools taught the Bible.

The Mormon experience showed the system at work in a different way. In 1862, in an effort to destroy the religion, the Republican Congress passed the Morrill Anti-Bigamy Act, which outlawed polygamy, annulled the incorporation of the Mormon Church, and forbade the church from owning real estate valued at more than $50,000. In 1871, Brigham Young, the head of the church, was indicted for practicing polygamy. From 1882 to 1893, nearly 1,000 Mormons were jailed. By sticking to their principles and refusing to renounce their own family structures, Mormons engaged in massive civil disobedience. 

But eventually politics intervened. Mormons wanted Utah to become a state, which would give it more control over its own affairs than if it continued as a territory. Republicans had initially resisted the idea, but the calculus changed. The political balance of power was shifting from east to west as new states continued to join the union. Mormons now represented a sizable number of votes beyond Utah: from 1876 to 1879, more than 100 Mormon settlements had been established in Arizona, Nevada, Wyoming, Colorado, and other states. 

Congress let it be known: We’ll allow Utah in the union—and effectively accept Mormonism as a mainstream religion—if it just renounces polygamy. So the church did something that American religions often do, but don’t like to admit: it shape-shifted to accommodate the law, agreeing to end polygamy. Congress admitted Utah into the union soon thereafter. The historian Kathleen Flake has suggested that the growing acceptance of Mormons reflected a key strain of Progressive Era politics. In the economic sphere, Progressives sought to create a set of rules that would ensure fair competition. They applied the same principles to religion. Mormonism could be tolerated as long as the LDS Church was willing to play by the same rules as other faiths did. 

These accommodations hardly ended persecution. In the 1920s, a resurgent Ku Klux Klan spread anti-Catholicism around the country. In 1927, 1,000 white-robed Klansmen joined the Memorial Day parade in Queens. Police and Klan members fought, with the police claiming that the Klan had violated a pledge to go hoodless. Klan members subsequently claimed that “Native-born Protestant Americans” were being “assaulted by Roman Catholic police of New York City.” (One of the seven people arrested during the rally was Fred Trump, Donald Trump’s father. News accounts do not specify whether he was there as a Klan member or not.)

The 1928 presidential candidacy of Al Smith, the first Catholic major-party nominee, triggered allegations that a Catholic president would be beholden to the Vatican. Klansmen claimed that a photo of the recently completed Holland Tunnel in New York City actually showed a newly built secret pathway from Rome to the United States, through which the pope would arrive and take over the country. One KKK flier showed an image of a priest throwing a baby into a fire, with the title “Will It Come to This?” In Muncie, Indiana, a twofer conspiracy theory spread: the Catholics had invented a powder that would bleach the skins of black men so they could seduce and marry unsuspecting white women. 

Part of the argument against Catholics was strikingly similar to modern attacks on sharia, the religious code of behavior that is part of traditional Islam. In the Atlantic, a Protestant named Charles C. Marshall cited various Vatican rulings that, he claimed, proved that Smith would have to defer to the pope and Catholic laws. To combat the claim, Smith had to vocally support the separation of church and state, putting him at odds with the Vatican. It was not the last time that American Catholic politicians would break with the pope in order to chart a course through the pluralistic U.S. system.

Smith lost in a landslide. But it turned out that 1928 saw both a massive increase in urban populations and a shift in urban voting to the Democrats, the birth of a new coalition that would sweep Herbert Hoover out of office in 1932 and lead to Catholics thereafter having tremendous political clout within the Democratic Party. “The Republican hold on the cities was broken not by Roosevelt but by Alfred E. Smith,” declared political scientist Samuel Lubell. 

When religious groups are very small, however, electoral politics offers little help. The Constitution then has to assert itself—championed by an independent judiciary. The group that most tested this view was Jehovah’s Witnesses. From 1933 to 1951, there were 18,866 arrests of Witnesses for refusing to salute the flag or comply with the military draft. Mobs punished them brutally. In Litchfield, Illinois, a mob smashed Robert Fischer’s head against the hood of a flag-draped car, demanding that he salute Old Glory. In Richwood, West Virginia, Witnesses were brought to the mayor’s office, where they were roped together like cattle, at two-foot intervals, and forced to drink castor oil. 

The Witnesses responded with an unprecedented wave of lawsuits that changed the course of history. At least thirty-seven religious freedom cases involving Jehovah’s Witnesses were argued in front of the U.S. Supreme Court. In Cantwell v. Connecticut, in 1940, the Court ruled for the first time that the First Amendment’s “free exercise” clause applied to state and local government, not just to Congress. In 1943, in West Virginia State Board of Education v. Barnette, the Court held that the state could not force a pair of young Witnesses to salute the flag in school. The next year, in a law review article titled “The Debt of Constitutional Law to Jehovah’s Witnesses,” retired federal judge Edward Waite asked, “If ‘the blood of the martyrs is the seed of the Church,’ what is the debt of Constitutional Law to the militant persistency—or perhaps I should say devotion—of this strange group?”

It was World War II that ultimately cemented the American idea of religious freedom. The presence of two major existential threats, fascism and communism, forced the nation to emphasize the central role that religious liberty—not just religion—played in the American identity. Franklin Roosevelt listed it as one of the Four Freedoms. Harry Truman said it was the heart of the argument against communism. 

A form of competition Madison had never envisioned—competition with totalitarian foreign adversaries—was leading America’s leaders, and increasingly its citizens, to interpret the principles of the First Amendment in a new way. “Our form of government has no sense unless it is founded in a deeply felt religious faith, and I don’t care what it is,” Dwight Eisenhower famously declared. “With us of course it is the Judeo-Christian concept, but it must be a religion that all men are created equal.” Some mocked his “I don’t care what it is” line. As one critic put it, Eisenhower seemed to be “a very fervent believer in a very vague religion.” But the president had captured the way Americans were increasingly approaching faith—with a combination of passion and tolerance. Blood had been spilled. Religious freedom therefore needed to be revered and protected. It became a sacred liberty.


Over the next few decades, religious freedom continued to march forward. The Supreme Court expanded the rights of religious minorities while restricting government’s role in favoring one religion over another. Political coalition building led to further thawing in the relations between Protestants and Catholics: first, on the left, as liberal Catholics and Protestants joined to support John F. Kennedy; and then in the 1970s, on the right, as conservative Catholics and Protestants joined to fight abortion and secularism.

Meanwhile, the 1965 immigration act had loosened up immigration from non-European parts of the world, altering the religious makeup of new arrivals. Up to that point, the top ten countries sending immigrants to the United States were all majority-Christian nations. But after the effects of the 1965 law fully kicked in, the melting pot became filled with very different ingredients. From 1986 to 2012, three of the top five countries sending immigrants—China, India, and Vietnam—were majority non-Christian. The Pew Research Center estimated that from 1992 to 2012, 25 percent of immigrants followed non-Christian religions, with the largest groups being Muslims (10 percent), Hindus (7 percent), and Buddhists (6 percent). 

At this point, we are not only a nation of immigrants; we are a nation of religious minorities. The original American majority was composed of Anglicans and Congregationalists. Those denominations now make up 1.7 percent of the American population. Most everyone else descends from a group that was once considered a religious minority. Our system reflects that. 

The terrorist attack of September 11, 2001, posed a major challenge to America’s culture of religious pluralism. President George W. Bush delayed the rise of Islamophobia somewhat with his admirable embrace of Muslim Americans. But within a year or two, religious bigotry began to show, led in part by conservative evangelicals—an irony, since evangelicals had so often been at the forefront of expanding religious freedom in America’s past. While Billy Graham had complimented Islam in 1997, his son Franklin in 2002 called Islam a “wicked, violent religion.” The popular televangelist Pat Robertson said, “Ladies and gentlemen, we have to recognize that Islam is not a religion. It is a worldwide political movement bent on domination of the world.” 

Anti-Muslim sentiment grew even louder with the presidential candidacy of Barack Obama. Rumors that he was a secret Muslim, educated in a “madrassa,” spread easily from the fringes of conservative media to the minds of millions of Republican voters. During the first few years of his presidency, anti-Islamic sentiment intensified on the local level, often in the form of attempts to block the building of mosques. After someone set fire to construction equipment at a mosque site in Murfreesboro, Tennessee, one resident said, “I think it was a piece of their own medicine. They bombed our country.” Echoing the arguments once made against Mormonism, opponents argued that Islam was not a real religion and therefore not worthy of First Amendment protections. 


In past decades, this localized bigotry might have remained marginalized. But a media infrastructure now existed to give it national scope and legitimacy. Conservative outlets, especially Fox News, gave positive coverage to these stories and invited on “experts” to validate other lunatic ideas, such as the menace of sharia. (Sharia is similar to the Halacha rules that govern some Orthodox Jews and to Catholic canon law, which affects all practicing Catholics.) “Is Islam a destructive force?” asked Bill O’Reilly. “There are exceptions to the rule, but they are few.” Brian Kilmeade of Fox & Friends suggested, “Not all Muslims are terrorists, but all terrorists are Muslims.”

In 2011, another frequent Fox talking head picked up and ran with the anti-Muslim madness: Donald Trump. The real estate mogul began airing his anti-Muslim message during his drive to prove that Barack Obama wasn’t really a U.S. citizen. In March 2011, he said to radio host Laura Ingraham, “Now, somebody told me—and I have no idea if this is bad for him or not, but perhaps it would be—that where it says ‘religion,’ it might have ‘Muslim.’ ” 

When Don from Queens became a presidential candidate, we entered uncharted waters. Religious freedom has been sustained not just by laws and court rulings but also by an informal consensus that past attacks on minority religions were fundamentally un-American. When the president of the United States doesn’t respect that idea, the consensus becomes vulnerable. 

Trump’s attacks on American Muslims were indirect at first. But his rhetoric escalated heading into the first Republican primaries, a period that coincided with two terrorist attacks—the mass killing orchestrated by ISIS on November 13, 2015, in Paris and the shooting in San Bernardino, California, on December 2, when two Muslims murdered fourteen coworkers at a Christmas party. Trump deployed many of the same lines of attack used by anti-Muslim activists and previous generations of religious bigots. Echoing the old attack against Mormons, Trump insisted that Muslims couldn’t become fully American: “I’m talking about second and third generation,” he told Fox’s Sean Hannity. “For some reason, there’s no real assimilation.” And as earlier demagogues did to Catholics and Jehovah’s Witnesses, he suggested that Muslims were dangerously disloyal. “When they see trouble they have to report it,” he said. “They are not reporting it. They are absolutely not reporting it and that is a big problem.” (In fact, the evidence is overwhelming that law enforcement has been able to thwart a huge number of attacks because of the cooperation of rank-and-file Muslim Americans. According to a Duke University study, American Muslims provided tips in forty-eight of the 120 violent terrorist plots that were thwarted between 2001 and 2011.) 


Most important, in December 2015, Trump called for a “total and complete shutdown of Muslims entering the United States until our country’s representatives can figure out what is going on.” Gone was the idea that we should focus on Islamic fundamentalists or terrorists. No Muslims of any kind could be trusted. It was in this same period that he made his promises to register American Muslims in a special database and to close down some American mosques.

Meanwhile, violent attacks on American Muslims multiplied. Hate crimes against Muslims reported to the FBI grew 76 percent from 2014 to 2017. Almost one-third of the attacks in 2015 came in December—just one month—as Trump’s anti-Muslim campaign hit full gear. On December 10, a mosque was firebombed in Coachella Valley, California. On December 12, in Grand Rapids, Michigan, a robber called a convenience store clerk a “terrorist” before shooting him in the face. On December 24, a shooter ranted about Muslims before killing one man and injuring another outside a Muslim-owned tire shop in Pleasant Grove, Texas.

Trump continued to lead in Republican primary polls.

At the heart of James Madison’s vision was a system of fair competition among religions: the power of the state should not be used to favor one over another. Trump’s ascent to the presidency has challenged that principle directly: he proudly advertises his desire to favor one group, white evangelicals, over others, especially Muslims. 

“The Christians are being treated horribly because we have nobody to represent the Christians,” Trump said during the 2016 campaign. He promised not only to protect Christians from persecution but also to restore their dominance: “We have to band together. . . . Our country has to do that around Christianity.” Although Trump has advocated a few legitimate expansions of rights for religious people generally, he has mostly defined religious liberty downward, using the concept, for instance, to justify allowing tax-exempt churches to endorse political candidates.

Meanwhile, Trump stocked his government with men allied to the most extreme anti-Muslim activists. Michael Flynn, his first national security adviser, dismissed Muslims’ claims that they should be protected by the First Amendment as a treacherous tactic. John Bolton, the current national security adviser, appointed as his chief of staff Fred Fleitz, the senior vice president of Frank Gaffney’s Center for Security Policy, one of the leading groups peddling conspiracy theories about the looming threat of sharia. After the Boston Marathon bombing in 2013, Secretary of State Mike Pompeo, then a member of Congress, claimed that the “silence of Muslim leaders has been deafening” and that therefore “these Islamic leaders across America [are] potentially complicit in these acts.”

Trump and the anti-Muslim extremists he has empowered have already degraded the basic rules that had long propelled America’s unique model of religious freedom. But things could still get much worse. After ten years of propaganda from Fox News, right-wing trolls, talk radio hosts, and now the president of the United States, a substantial minority of Americans don’t believe that Muslims are worthy of First Amendment protections. The foundation of religious freedom has been soaked with gasoline. 

Now imagine there’s a large-scale terrorist attack on American soil committed by a Muslim radical. Does anyone expect Trump to caution his followers against blaming Islam as a whole? He would more likely add fuel to the fire. How many hours would pass before we heard him say, “See, I was right about the Muslims!” And since the whole thrust of the anti-Muslim movement of the last decade has been to blur the line between Muslim terrorists and ordinary Muslims, Trump’s reaction could embolden more of his supporters to take matters into their own hands. And history is full of reminders that once animus is normalized against one religious minority, others are at risk of being next in line.

It is hard to imagine mass religious violence in modern America. But remember Trump’s words when asked, in November 2015, whether he would consider shutting down mosques as president: “We’re going to have to do certain things that were frankly unthinkable a year ago.” America’s system of religious freedom has been so successful that liberals have stopped worrying very much about how to defend it. If we don’t start again, the unthinkable may become frighteningly thinkable.

King Modi https://washingtonmonthly.com/2019/07/12/king-modi/

India’s recent election shows that majoritarian nationalism is hard to defeat.

“You got some bad dudes coming in,” Donald Trump told the West Virginia crowd. It was the end of September 2018, and the president was trying to rally his base before the midterms. Democrats, Trump warned, wanted open borders. They wanted sanctuary cities that “unleash violent predators” and leave “innocent Americans” at the mercy of “really ruthless animals.”

Several days before—and thousands of miles away—the president of India’s governing Bharatiya Janata Party (BJP) delivered a similar address. “There are illegal infiltrators in Delhi,” Amit Shah said. “Like termites, they have eaten the future of the country. Shouldn’t they be uprooted?” The audience cheered, and Shah pointed toward the crowd. He accused the BJP’s rivals of being too cowardly to deal with undocumented immigrants, people who “enter here, throw bombs, and kill innocent citizens.”

To an American, Indian politics can seem impossible to follow. There are more than thirty-five parties with seats in the country’s national parliament, many specific to particular linguistic communities. Together, they serve nearly 900 million registered voters, an electorate more than four times the size of America’s and close to twice as large as the population of the entire European Union.

But the country’s seventeenth general election featured many themes that Westerners would recognize. The country’s leader, Prime Minister Narendra Modi, who won reelection, is a right-wing nationalist widely seen as having made India a less tolerant place for minorities. Religious hate crimes have increased more than fivefold since Modi and the BJP came to power in 2014. Most of the perpetrators are part of the country’s vast Hindu majority. Most of the victims belong to the country’s population of 190 million Muslims.

“The BJP has always been known for its Hindu nationalism, which, more often than not, translates into anti-Muslim ideology on the ground,” Paranjoy Guha Thakurta, a prominent Indian journalist, told me. When the party and its supporters speak about immigrants, “the whole idea is to whip up a kind of xenophobic, jingoistic sentiment, where the enemy is perceived to be the Muslim refugees who are reportedly taking over.”

The message isn’t delivered only through speeches. India has right-wing TV networks where anchors angrily berate liberals as unpatriotic. The country is experiencing its own fake-news epidemic. And during the 2019 contest, activists and journalists reported that millions of Indian citizens may have been purged from voter rolls. Most of those missing appeared to be Muslims, low-caste Hindus, and women.

“India is several years further along down a path of vicious nationalism than we are,” said Audrey Truschke, a professor of Indian history at Rutgers University. “It’s a wake-up call to be more proactive.”


India’s first prime minister, Jawaharlal Nehru, was an avowed progressive. “It is the duty and responsibility of the majority community, whether in the matter of language or religion, to pay particular attention to what the minority wants,” he declared in 1955. “The majority is strong enough to crush the minority which might not be protected. Therefore, whenever such a question arises, I am always in favor of the minority.”

These beliefs were, by all accounts, genuine. They were also critical for Nehru’s project: creating a stable and democratic India. The country hosts dozens of languages. It is the birthplace of multiple religions and is home to hundreds of millions of people who practice non-native faiths. Nehru knew that keeping India free and whole required tolerance.

It’s therefore no surprise that he clashed with Hindu nationalists. After Mahatma Gandhi’s assassination, Nehru’s government temporarily banned a far-right group with which the assailant was associated, the Rashtriya Swayamsevak Sangh (RSS). Historians say the RSS was related to the fascist organizations of interwar Europe. B. S. Moonje, one of the group’s founders, visited Italy in 1931, where he met Benito Mussolini and toured the National Fascist Party’s military schools and educational institutes. He was impressed. “The idea of fascism vividly brings out the conception of unity amongst people,” Moonje wrote. “India and particularly Hindu India need some such institution for the military regeneration of the Hindus.” The RSS, he continued, “is of this kind.”

The Indian election featured many themes that Westerners would recognize. Prime Minister Narendra Modi, who won reelection, is a right-wing nationalist. Religious hate crimes, mostly against Muslims, have increased more than fivefold since Modi came to power in 2014.

For decades, the organization’s direct political impact was limited, kept in check by the electoral hegemony of the Indian National Congress, the party of Gandhi and Nehru. Instead, the RSS operated in the background, with a particular focus on teaching children its Hindu nationalist ideology. 

One of those children was Narendra Modi. Born into a poor family in the westernmost part of the country, Modi joined the RSS when he was eight. During his early twenties, he led the group's regional student wing. By the age of thirty-one, he was one of the group's principal leaders in his home state. In 2001, the RSS tapped Modi to lead its political party, the BJP, in the state of Gujarat. The BJP already controlled Gujarat's parliament, and Modi became the state's chief minister, akin to a U.S. governor. He would hold the position for the next twelve and a half years.

As Modi tells it, he led Gujarat through a period of great investment and development. The state’s economy did in fact grow steadily under Modi, although economists say that may have more to do with its coastal location and history of trade than with the chief minister’s policies.

But Modi’s tenure was also defined by disturbing communal violence. In 2002, fifty-eight people were killed when a train coach carrying Hindu nationalists was set ablaze inside a Muslim neighborhood. How the fire began is a subject of intense dispute. What happened next is not: Hindus across the state rioted. They forced children to drink kerosene, stabbed people to death, and electrocuted entire families. Ahsan Jafri, a Muslim member of parliament, was dragged out of his house, covered with wax, and burned alive. All in all, the rioting killed as many as 2,000 people, the vast majority of whom were Muslims. (Hundreds of Hindus died as well.) More than 500 Islamic religious sites were either damaged or destroyed.

Indian politicians continue to argue over Modi’s involvement with the pogrom. But at a minimum, it’s clear he did little to intervene. Legislators from his party helped lead attackers to their targets. A senior minister in Modi’s cabinet told an investigatory tribunal that, in the aftermath of the train fire, Modi instructed police officials not to stand in the way of Hindu vengeance. That minister was later shot dead in his car.

In Gujarat’s December 2002 elections, less than a year after the riots, the BJP increased its state parliamentary majority. The party performed best in districts most affected by the violence. During the campaign, BJP advertisements featured images of the train coach that burned.

Modi has denied any wrongdoing, and an investigatory team appointed by the Indian Supreme Court decided there was not enough evidence to prosecute him. But the violence rocked the world. In 2005, George W. Bush’s State Department refused to let Modi enter the United States, citing his “particularly severe” violations of religious liberty.


When Modi was tapped to lead the BJP into the 2014 general election, he made little mention of the riots. Instead, he campaigned as a pro-business politician committed to fighting corruption, liberalizing the economy, and paring down India’s bloated state bureaucracy. The message clicked, and he won in a landslide. The BJP became the first party in thirty years to win an outright majority in India’s fractious 545-member parliament. The incumbent Congress Party, the storied institution of Nehru and Gandhi, won just forty-four seats.

Modi was now welcomed into the international community. Barack Obama called almost immediately after the elections to congratulate the new prime minister. In September 2014, nine years after he was denied a visa, Modi traveled to the United States to meet with Obama, address the United Nations General Assembly, and hold a rally for 19,000 adoring fans at Madison Square Garden.

In 2005, George W. Bush’s State Department refused to let Modi enter the United States, citing his “particularly severe” violations of religious liberty.

Swadesh Singh, a political scientist at Delhi University, told me Modi’s first term lived up to the hype: “Prime Minister Modi has started and provided an ecosystem for entrepreneurship.” As evidence, he cited the array of programs the BJP has launched to streamline and digitize India’s economy. One prominent scheme is bringing banking services to rural areas. Another, called Digital India, will expand high-speed internet and make government services available online.

But many of these programs draw on, or are rebrands of, policies created by Modi’s predecessors, said Sunil Khilnani, the director of the India Institute at King’s College London. Much of the Digital India plan, for instance, began under the previous government. And Modi’s first signature achievement—streamlining the country’s tax code—was initially proposed by the Congress Party. India’s GDP growth under the BJP government, while strong, is roughly in line with what it was before.

It’s also unclear to what extent the public can trust the Modi government’s economic figures. In January, one state agency estimated that India’s unemployment rate from 2017 to 2018 was 6.1 percent—a forty-five-year high. But the government refused to release this data, prompting two of the agency’s officials to leak their findings to the press. The other government department traditionally responsible for employment data, the Labor Bureau, abruptly stopped releasing it in 2016. But outside estimates suggest that unemployment is rising. India’s economy simply isn’t growing fast enough to accommodate its increasing population.

“We’ve had five years of BJP rule in India, and we’ve seen what it means. It does not mean greater economic growth,” Audrey Truschke said. “What we have seen is a significant uptick, really a surge, in violence against religious minorities.”


In April 2017, Pehlu Khan, a Muslim dairy farmer from the Indian state of Haryana, was driving home with two of his neighbors and two of his sons. Khan had cows he had purchased in neighboring Rajasthan in the back of his truck. Cows are considered holy by many Indian Hindus, and killing them is outlawed in many states. Khan and his sons said they bought the animals not to slaughter, but to milk. But, while still in Rajasthan, they were stopped in the street by a mob, which pulled the passengers out of their vehicles and beat them savagely. Two days later, the fifty-five-year-old Khan died.

The attackers filmed the assault, and it spread across the internet. Civil rights groups protested the murder, but Rajasthan’s BJP-led government mostly blamed Khan. “People know cow trafficking is illegal, but they do it,” the state’s home minister said. “Cow worshippers try to stop them. There’s nothing wrong with that but it’s a crime to take the law in their [own] hands.” The police later charged Khan’s sons with transporting cattle for slaughter. The surviving farmers argued that they in fact had the necessary permits to transport cows for dairy production.

Khan is just one of the many people who have been maimed or killed by “cow vigilantes” since Modi took office. Almost all of the victims of these lynchings are Muslims or low-caste Hindus. (Higher castes are more likely to consider the animals sacred.)

India’s cattle laws are almost entirely decided by the states, but the BJP has long made protecting cows one of its aims. Modi has rarely commented on the country’s cow-related violence. Not until several months after Khan’s murder, and after at least three more people were similarly killed, did Modi speak out. “Violence is not a solution to the problems,” he said.

One year later, Modi introduced a nationwide ban on selling cattle for slaughter. But the country’s supreme court, which in July 2018 warned that India was descending into “mobocracy,” blocked the ban from taking effect. The petitioners argued that the law would needlessly undermine industries that employ many Muslims and low-caste Hindus. The chief justice agreed. “The livelihood of people should not be affected by this,” he wrote.

Polls had suggested that Modi would need coalition partners to stay in power. But the polling was wrong. Much like in America’s 2016 contest and the United Kingdom’s Brexit vote, right-wing nationalists overperformed.

Swadesh Singh, who helps lead a pro-Modi activist group, argued that these incidents unfairly skew Modi’s record on religious tolerance. “The last big communal violence took place in 2013,” before Modi took office, he said. “Small incidents are a law-and-order problem, which should be tackled by the state governments, because law and order is a state subject.”

But under Modi’s reign, the BJP has become similarly dominant at the state level, giving the prime minister considerable sway over regional politics. The man he picked to lead Uttar Pradesh, India’s largest state, is a Hindu priest who has been accused of weaponizing the police force against Muslims. Reporting suggests that the state has seen an explosion in orchestrated shoot-outs against Muslims. It’s a charge the chief minister hasn’t exactly denied. “In 1,200 encounters, more than forty criminals have been killed,” he said in one speech. “This trend will not stop.”

Journalists say it’s becoming more difficult to cover the violence. “There’s a relatively small section of the media that’s really criticizing the government and holding truth to power,” Guha Thakurta told me. Many newspapers, he explained, are dependent on the government for advertising revenue. One of India’s largest newspapers allegedly fired its editor in chief under pressure from the BJP. That editor had placed a “Hate Tracker” on the newspaper’s website to catalog the country’s hate crimes. After he left, the feature was taken down.


Two months before Indians began voting, Modi ordered air strikes against Pakistan, India’s Muslim-majority neighbor and geopolitical rival. The strikes came in the wake of a suicide attack in Pulwama, a district in Indian-administered Kashmir, by a terrorist group based in Pakistan. The attack killed forty Indian paramilitary soldiers. 

For the remainder of the campaign, the prime minister routinely referenced Pakistan and the attacks in his messaging. "Can your first vote be dedicated to the valiant soldiers who carried out the air strike in Pakistan?" Modi asked young Indians. "Can your first vote be dedicated to the brave martyrs of Pulwama?" The BJP also intensified its focus on religious identity. It tried to tie the Congress Party to Pakistan. It claimed that competing parties were treating illegal immigrants as a "vote bank" and again promised to kick them out. The day voting began, the party Twitter account tweeted: "We will remove every single infiltrator from the country, except Buddhists, Hindus, and Sikhs." Left out were Christians and Muslims. 

On May 23, the BJP won a landslide victory, picking up more seats than it did in the previous election. Even after the Pakistan air strikes, this result was unexpected. High unemployment was thought to have dented the BJP’s popularity. The party had performed poorly in recent state elections, losing three state legislatures to the Congress Party during December 2018. General election polling suggested that it would need coalition partners to stay in power. But the polling was wrong. Much like in America’s 2016 contest and the United Kingdom’s Brexit vote, right-wing nationalists overperformed. 

Singh told me that Modi’s reelection was a vindication of the BJP’s economic program. “Over the span of his first term, Modi tried to reach every section of society through a range of schemes and programs,” he said. “It has been a result of this that the 2019 elections have defied the traditional vertical identities of caste, language, and religion, among others.”

But surveys suggest that the electorate was indeed polarized along religious grounds. One post-election study found that 54 percent of all Hindus wanted the Modi government to return, while only 15 percent of Muslims and 17 percent of Christians did. These findings suggest a different conclusion: majoritarian nationalism is incredibly difficult to defeat. 

In West Bengal, for example—a Hindu-majority but traditionally left-leaning state—the BJP promised to implement a controversial citizens’ register designed to track down illegal immigrants and evict them from the country. That appeal may have worked. In the 2014 elections, when the BJP became the first party in thirty years to win an outright parliamentary majority nationwide, it only won two of West Bengal’s forty-two seats. This year, however, it won eighteen.

“A large part of Modi’s victory can be attributed to nationalist and religion-based emotions which were incited to appease the majority before the elections,” said Dhruv Rathee, an Indian activist and political commentator. “The BJP’s return could worsen the communal divisions in the country, seeing their previous track record.”

Early evidence suggests this may be right. The day after the elections, a video of Hindus thrashing three Muslims in the name of cow protection went viral. On May 25, a Muslim man was allegedly beaten by a Hindu mob while returning from his mosque. But in a speech the next day, Modi said that India’s minorities live in “imagined fear.”

In the lead-up to the contest, Singh told me that the election would be a turning point in Indian history. “This is not just a battle of votes,” he said. “This is a battle of ideas and narratives.”

On that, Modi’s supporters and critics agree. “This is not really a routine moment in Indian democracy,” Sunil Khilnani told me. He worried that continued BJP rule will undermine Nehru’s vision of an India that is tolerant, stable, and free. “In the Indian constitution, being Indian was not defined by any particular bloodline or religion or language or ethnicity. And the BJP have been trying to change that,” he said. “Five more years of the current government and India may really start to look like quite a different place.”

National Service Solves Everything https://washingtonmonthly.com/2019/07/12/national-service-solves-everything/ Sat, 13 Jul 2019 01:23:25 +0000

A simple plan to deliver free college and a federal jobs program while restoring faith in American government.

It’s midsummer in Washington. Sweat gathers under suit jackets. Electric scooters clog the sidewalks. And presidential campaigns get serious about firming up policy positions before the primary season gets into full swing. As is our wont, we offer a few ideas in this issue: Daniel Block sketches a blueprint for a sweeping, progressive trade pact with Europe; Grace Gedye explores the latent political potential of tackling the long-term elder care crisis; and Kevin Carey lays out an innovative vision for establishing a national consortium of zero-tuition colleges. 

Another policy idea bouncing quietly around this cycle is an evergreen: expanding national service. Several presidential hopefuls, most notably Pete Buttigieg, have publicly discussed it. As well they should. Voters don’t just want stuff; they want opportunities to contribute to society, and not necessarily through the military. This insight guided John F. Kennedy when he established the Peace Corps; Bill Clinton when he created its domestic counterpart, AmeriCorps; and Barack Obama when he signed legislation designed to more than triple the number of AmeriCorps slots. (Unfortunately, the law required Congress to approve annual spending increases, which went pretty much how you’d imagine. Today, the program still only deploys about 75,000 members per year.) 

Now, in an era of vicious partisanship, the need to rebuild a real sense of national identity is especially urgent. Yet, oddly, politicians may not be thinking opportunistically enough about national service. It has become a far better idea than most people in politics seem to realize, because, done properly, it represents an elegant way to execute two extremely buzzy but decidedly half-baked policy goals in liberal circles: free college and a federal jobs guarantee. 

Ever since Bernie Sanders ran it up the flagpole in 2015, the notion that the federal government should eliminate tuition at all public universities has been both left-wing orthodoxy and a policy headache. As Kevin Carey explains, the Sanders approach perversely rewards the states that now provide the least support for higher education while punishing the ones that provide the most. It isn’t even all that popular beyond the Democratic base. A recent Quinnipiac poll found 52 percent of registered voters opposed, the latest of several similar results. 

But if we turn “free college” into “free college if you serve,” then something easily demonized as an expensive handout transforms into an earned benefit. A few Democratic candidates have picked up on this. (Unfortunately, as of this writing, their polling averages all begin with a decimal point.) Massachusetts Representative Seth Moulton, a military veteran, has the best proposal: granting generous tuition assistance, modeled after the GI Bill, for every year of national service. AmeriCorps members today get an education benefit capped at the maximum federal Pell Grant, currently $6,095. Moulton’s plan, on the other hand, would give up to 100 percent of in-state tuition (or $24,000 for job training). 

That’s the right idea. But even better would be to tie national service to one more objective: a massive federal jobs program. 

Like free college before it, the idea of a universal job guarantee got its biggest boost when Sanders endorsed it last year. The appeal is intuitive. Even with today’s low unemployment rate, millions of people can’t find work. Of particular concern are “displaced workers”: adults who are out of work because their employer or position ceased to exist. From 2015 to 2017, according to the Department of Labor, three million people were displaced from jobs they’d held for at least three years; by January 2018, one million were still unemployed. All those numbers will, of course, explode in the next recession. That makes a job guarantee automatically countercyclical. When the economy goes sour, spending goes up, helping to stanch the worst effects of a downturn.

But the idea wilts under scrutiny. The problem is right there in the name: How can you guarantee everybody a job? Not everyone who wants work is qualified to do something that needs doing. Even if they were, you’d have to convince Americans that the federal government is up for directly creating, staffing, and managing millions of new positions. And you’d have to figure out how long the guarantee lasts—is it life tenure for people who can’t find a better job elsewhere?

Dramatically expanding national service would be free of these issues. First, it’s time limited, with an education benefit at the end, like a bridge back into the broader economy. Second, it isn’t a super-centralized government bureaucracy. AmeriCorps, the marquee domestic service program, is essentially a network of nonprofits around the country that receive federal funding but handle hiring, training, and management themselves. Finally, it doesn’t aspire to universality. AmeriCorps is competitive; not everyone who applies gets in. Even if we expanded it by a factor of twenty, we could keep an element of selectivity. 

But we can’t just expand the existing system. AmeriCorps has been an overall success, with studies finding that its programs, which range from tutoring to disaster relief to legal assistance, more than pay for themselves in economic terms. The problem is that the setup—a grant program for nonprofits—hides its national character. 

“Many AmeriCorps members don’t really know that they’re AmeriCorps members,” said John Gomperts, who ran the program during the Obama administration. “You’re going to associate who you’re working for with where your paycheck is coming from.” That’s a serious shortcoming. The whole point of national service is that it’s, well, national. And for any government program to be politically durable—and worth fighting for in the first place—it really helps for people to know who’s behind it. 

An expanded national service program should therefore create a more direct relationship between citizen and government. Let people apply to the federal government directly. If accepted, they’d get a voucher that they could use to apply to any approved direct-service nonprofit organization, with the promise that the federal government would pay the bulk of their salary. That organization would choose whether to hire the person or not, but paychecks would be signed by the U.S. government. 

What I’m describing is basically a federally funded marketplace for national service. That might sound strange. But Shirley Sagawa, a leading expert on service programs, pointed out that it’s a setup that already exists in higher education. “It’s like federal financial aid,” she said. “The student, not the college, applies for aid.” Then the student decides which college to go to—provided the college accepts them. 

Just like colleges, the nonprofit employers would need to be accredited. Some authority would have to make sure that they’re real organizations, sincerely aimed at solving real problems, perhaps within certain broad categories like education and public health. But the government wouldn’t be in the business of deciding which organizations get money. Just let the market forces of job openings and worker preferences take control. 

For the program to be successful, organizations would have to actually want to hire the people getting the vouchers. That means that, like AmeriCorps today, it wouldn’t be open to everyone; vouchers should be awarded on a somewhat competitive basis, with a strong preference for work experience. Rather than mainly targeting eighteen-year-olds, this would likely end up benefiting older displaced workers the most. These are people who have proven they can hold down a job, and who could use two years of employment at a nonprofit, plus the two years of free college, to transition to a new career. 

A built-in countercyclical funding scheme would be crucial. Not only do the ranks of the unemployed swell during a recession; so, too, does the demand for the kind of social services that many nonprofit organizations provide. Legislation establishing a new national service program should therefore automatically tie funding to the unemployment rate, so that as unemployment rises, so do the available positions. (In a white paper, Sagawa suggests funding 25,000 new positions for every 0.1 percentage point by which the long-term unemployment rate goes above 1 percent. That would have meant 475,000 jobs at the peak of the Great Recession.)
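To see how the trigger would scale, here is a minimal sketch of the formula as described above; the function name and the example input are illustrative assumptions of mine, not details drawn from Sagawa's paper beyond the figures already quoted.

def service_positions(long_term_unemployment_rate):
    """New national service positions under the proposed formula:
    25,000 positions for every 0.1 percentage point by which the
    long-term unemployment rate exceeds 1 percent."""
    excess = max(0.0, long_term_unemployment_rate - 1.0)  # points above 1%
    return round(excess / 0.1) * 25_000  # 25,000 jobs per 0.1-point step

# Reproduces the Great Recession figure cited above, which implies
# a long-term unemployment rate of about 2.9 percent:
print(service_positions(2.9))  # 475000

Because funding would rise mechanically with the long-term unemployment rate, no new act of Congress would be needed when a recession hits.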

It’s impossible to know in advance exactly how a program like this would play out. But when it comes to addressing two of the perennially dominant political issues in America—employment and education—national service is a powerful tool lying in plain sight. A political leader with vision just might pick it up.

What Your Data Is Really Worth to Facebook https://washingtonmonthly.com/2019/07/12/what-your-data-is-really-worth-to-facebook/ Sat, 13 Jul 2019 01:19:12 +0000

And why you deserve a cut.

Americans who use the internet—85.5 percent of us—have made a tacit bargain with Facebook, Google, MasterCard, Verizon, and most other sites and products we use regularly. We get access to these companies’ services, and they get to scoop up, analyze, and sell our personal information. Few people question this setup, perhaps because most of us assume that our data isn’t worth much.

But that assumption is wrong.

Earlier this year, my colleague Siddhartha Aneja and I published a deep-dive study into the value of the personal information that every major website sells access to. It’s a complicated problem. Much of the value comes from advertising revenue, disclosed in annual reports and SEC filings by public companies. But we also had to determine how much of that ad revenue is derived specifically from the micro-targeting that user data makes possible, as well as how much the companies spent to gather, analyze, and market user profiles. In the end, we calculated that internet companies earned an average of $202 per American internet user in 2018 from personal data. We believe that’s a conservative estimate.

The value reflects the extraordinarily varied and detailed data that companies collect. Google collects not only the personal information you reveal when you use its search engine, but also the data that comes from whatever you do when you visit or use any of its dozens of properties—YouTube, Gmail, Google Maps, the Chrome browser, Google Pay—or apps accessed by logging in through Google. Similarly, Facebook gathers all the data crumbs you leave whenever you visit the site itself or use its Messenger service, plus whatever you do on subsidiaries like Instagram and on apps accessed by logging in through Facebook.

Amazon is newer to this business, and unlike Facebook and Google, its basic business model doesn’t rely on data-derived profits. But Amazon’s public records show that its earnings from user data likely more than doubled between 2016 and 2018. Beyond the major platforms, hundreds of other companies take part in the burgeoning personal data business. Our study also explored the revenues from digital advertising earned by smaller internet services, ranging from Snapchat and Spotify to internet media holding companies such as IAC, which owns Match.com, the Daily Beast, and Investopedia.

For a general sense of the value of people's personal data to these companies, we started with digital advertising revenue. In 2018, Facebook earned an average of roughly $110 in ad revenue per American user. This calculation, however, ignores what it costs to collect, analyze, and market user data. According to its financial statements on file with the SEC, Facebook took in $55.8 billion worldwide in 2018, virtually all of it from targeted advertising. Facebook also reported that relevant costs came to $20.6 billion. That implies that the value of its users' personal information was equivalent to $35.2 billion, or 63 percent of its revenue.
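The arithmetic can be checked in a few lines; this sketch simply restates the figures cited above, with variable names of my own choosing.

# Facebook, 2018, worldwide (in billions of dollars, as cited above)
revenue = 55.8       # virtually all from targeted advertising
data_costs = 20.6    # reported costs to collect, analyze, and market user data

net_value = revenue - data_costs
print(f"${net_value:.1f} billion")              # $35.2 billion
print(f"{net_value / revenue:.0%} of revenue")  # 63% of revenue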

Imagine the profits General Motors could achieve if it didn’t have to pay for the steel it uses to build cars. Like GM, online companies should pay their users for the valuable raw materials they provide.

In other cases, determining a company's data-related costs is less straightforward. Google's cost data, for example, is embedded in the financial statements for Alphabet, the holding company that also includes Google Fiber, GV (formerly Google Ventures), CapitalG, Waymo, and DeepMind, as well as the suite of Google-branded internet services.

Given these issues, we developed a proxy to distinguish the value of personal information to the online platforms from the costs to gather and monetize those data. We relied on a careful study by three business school economists who analyzed certain aspects of the AdChoices program created by the online advertising industry. In 2010, AdChoices announced that anyone could opt out of all targeted online ads by registering at a special website. While relatively few internet users opted out, the researchers found that those non-targeted ads produced 52 percent less revenue than comparable targeted ones. This suggests that 52 percent of digital advertising revenue is derived from the personal information used to target those ads to particular individuals.

Targeted online advertising generated $108.6 billion in overall revenue in 2018; on that basis, we estimate that the profits derived from American internet users' personal data totaled $56.5 billion. Some 279.7 million Americans used the internet last year, so the value of their personal data to these online operations, after costs, came to an average of $202 per American internet user.
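The headline number falls out directly from the two inputs just described; the short sketch below restates them (the variable names are mine).

# Industry-wide, 2018 (figures cited above)
ad_revenue = 108.6e9      # targeted online advertising revenue, in dollars
targeting_share = 0.52    # share attributable to personal data (AdChoices study)
internet_users = 279.7e6  # Americans online in 2018

data_value = ad_revenue * targeting_share
print(f"${data_value / 1e9:.1f} billion")              # ~$56.5 billion
print(f"${data_value / internet_users:.0f} per user")  # ~$202 per user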

Our personal information is worth so much because the profiles created from it are remarkably probing and detailed. Algorithms track and save data on what we search for, what we write in emails and messages, what we buy, and everything else we do online, whether on our phones or laptops. Not only do the algorithms then build up a basic profile based on gender, age, ethnicity, and so on; they also determine our individual interests, likes and dislikes, family background, political leanings, sexual orientation, and much more. Everything we reveal online is fair game.

As the software for data mining and targeting has advanced, the revenue from digital ads and the consequent value of the data used to target them have risen rapidly. Our study found that from 2016 to 2018, the value of the information Google mined from Americans grew 40 percent; for Facebook, 85 percent; and for the latecomer Amazon, 312 percent. Overall, the value internet companies derive from Americans’ personal data increased almost 54 percent. At that rate, the number will reach $127.9 billion in four years. Adjusting for the estimated increase in the number of Americans using the internet, people’s personal data will be worth an average of $434 per American user in 2022.

Given how profitable these data operations are, the practice is sure to spread, and the commercial, social, and other uses of our personal data—and the related side effects—will proliferate. In response, we can try to regulate it, but we shouldn’t be surprised when tech companies figure out how to deflect or defeat such regulation.

Instead of regulation, then, we should consider a more direct economic response. Imagine the profits General Motors could achieve if it didn’t have to pay for the steel it uses to build cars. Like GM, online companies should pay their users for the valuable raw materials they provide.

Behind this proposal is the principle that people have a property interest in their personal information, or at least in the wealth generated by that information. This theory does not depend on any particular position on privacy. Rather, it comes directly from John Locke’s analysis of the origins of property, which are embodied in the U.S. Constitution. Locke said that a person can create property by mixing his or her labor with materials from nature, because each of us has personal ownership of our own bodies and minds and, therefore, of whatever we produce by using our minds to direct our bodies. Every piece of personal information online companies gather, analyze, and sell is the result of that very same process, so it meets Locke’s test for personal property. Someone can persuade you to give it away—but if they mislead you about its value to them, either you’re a sucker or they’re a fraud.

Congress should recognize people’s property rights over their personal information and direct that internet companies do so as well. It’s true that the structure of the internet complicates the analysis, because the value of one person’s personal information to an online platform is bound up in the platform’s access to the personal information from many millions of other people. Economists call this a “network effect.” Just as Facebook’s value to any individual increases as more people use it, so the value of any individual’s personal data to Facebook and its advertisers increases as Facebook gathers and analyzes data from millions of other users.

In other words, internet companies and their users both contribute to the commercial value of the personal profiles that drive digital advertising. A straightforward solution is thus to require the companies to share the profits from those operations with users on a fifty-fifty basis. Of course, asking internet companies to write a check to every individual user would be impossibly inefficient. Instead, each company could write a single check to the government, and the government could distribute the proceeds to every household based on the number of its internet users. So, in 2022, a family of four internet users would receive $868 in payment for their personal data. The era of free riding for online companies would be over. Corporations have gotten rich by exploiting our data. It’s time for them to share the wealth.
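The household figure follows mechanically from the fifty-fifty split and the projected 2022 per-user value; here is a minimal sketch, with a hypothetical function name of my own.

def household_payout(per_user_value, household_users, user_share=0.5):
    """Annual payment to a household under the proposed fifty-fifty split
    of data profits between internet companies and their users."""
    return per_user_value * user_share * household_users

# $434 per user projected for 2022; a family of four internet users
print(household_payout(434, 4))  # 868.0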

The Hustling, Sweating, Flawed Greatness of Richard Holbrooke https://washingtonmonthly.com/2019/07/12/the-hustling-sweating-flawed-greatness-of-richard-holbrooke/ Sat, 13 Jul 2019 01:17:56 +0000

The legendary diplomat was a deeply imperfect specimen of the human condition. Yet at his most statesmanlike moments, he could be extraordinary.

The late American diplomat Richard Holbrooke (1941–2010) had a knack, that is to say a weakness, for self-promotion. He lobbied for the Nobel Peace Prize. He hinted broadly that he could serve as secretary of state. When one of his old friends died, Holbrooke petitioned the man’s widow to be included among the eulogists. During meetings in the Situation Room, he started out against the wall with the other assistant-level staff, but slowly inched his chair forward to the table among the cabinet officers. Secretary of State Cyrus Vance’s personal assistant had to write a memo in 1978 admonishing Holbrooke not to “insert yourself as a passenger in the Secretary’s car unless this office has specifically approved your request.” Where a lesser striver might have perceived a rebuke, Holbrooke had the memo framed.

Our Man: Richard Holbrooke and the End of the American Century
by George Packer
Knopf, 608 pp.

He liked to dispense with the fussier conventions of statecraft. “There are two kinds of people,” he told his sons: “those who like fart jokes and those who don’t. We know where the Holbrookes stand.” Most diplomats keep safely to the other side of the fart-joke line. Yet most diplomats also do not sweat through a half-dozen pairs of socks per day or hang their used ones on an airplane’s first-class seat pocket to dry. Nor do they sit on the bed in their underpants eating Russian caviar straight from the jar. Their French is not delivered in a New York accent. They do not delight in pranks, like de-blousing a colonel’s resplendent uniform in the innermost halls of the Pentagon. After organizing a goodwill game of softball with staff from the Japanese embassy, they would not crush the ambassador’s first pitch for a home run and laugh their way around the bases. 

Holbrooke was less a statesman or strategist than a character from a Philip Roth novel: a hustling, sweating, deeply imperfect, occasionally inspiring, mouth-full-shouting specimen of the human condition. He had three marriages and too many mortgages; he disregarded his children; he wept when called upon to speak of his father in public. He flirted with one of the doctors who tried and ultimately failed to save his life. Yet at his most statesmanlike moments, like negotiating an end to the Bosnian war in 1995, he could be extraordinary. During a half century in public life, Holbrooke became America’s all-purpose hot-zone fixer, the one who jumped into quagmires with both feet and somehow emerged with a deal. The journalist and former diplomat Ronan Farrow, who worked closely with Holbrooke and described him as a father figure, has written that “he was the rare asshole who was worth it.”

In an entertaining and humane new biography, Our Man, George Packer portrays a deeply flawed figure of tremendous energy, blindness, and passion. Holbrooke was constantly “doing something to you,” Packer writes, “cajoling, flattering, bullying, seducing, needling, analyzing, one-upping you.” By the end of an encounter, “you found yourself far out from where you’d started, unsure how you got there, and mysteriously exhausted.” One negotiating team member on the Bosnian detail wrote in his journal, “A true character. Can’t help but like him. He is something to watch.” 

Packer, a heavyweight journalist who joined the Atlantic last year after a distinguished run at the New Yorker, was friendly with Holbrooke and is well positioned to tell his story. It is a subject worthy of Packer’s considerable narrative gifts: a tragicomic hero who poured all of his infuriating ambitions and intensities into a life of purpose on stages both global and bureaucratically small. Bosnia is the heart of Our Man, and it was Holbrooke’s great accomplishment. And don’t think he wouldn’t have let you know it: on his bookshelf he kept three dozen copies of the memoir in which he portrayed himself as the savior and indispensable figure of the peace negotiations. Which, in fact, he was. 

Holbrooke came to diplomacy almost by accident. Growing up in Scarsdale, New York, he was the surrogate son of Dean Rusk, the father of his best friend and a future secretary of state. Holbrooke's own father died of cancer when he was fifteen, and he did not learn of his family's Jewish heritage until later in life. Addressing the senior class of Scarsdale High in 1958, Rusk gave bland but fateful advice: "When you're thinking of careers, think of the Foreign Service." After an internship with the New York Times, Holbrooke passed the Foreign Service exam and became a member of the class of 1962. 

Packer organizes Our Man around three wars: Vietnam, Bosnia, and Afghanistan. Holbrooke served the United States government in varying capacities in each. In Vietnam, he learned; in Bosnia, he triumphed; and in Afghanistan, he failed. As an aid worker handing out farming supplies for USAID in Vietnam, Holbrooke witnessed the senseless waste of American blood and treasure in pursuit of hubristic goals. He worked in Saigon and the countryside, throwing himself into harm’s way by volunteering for service deep in Vietcong territory. Breaking with diplomatic practice, he befriended and caroused with journalists like David Halberstam and Neil Sheehan, two of the war’s most perceptive critics. 

He also offered trenchant analysis. One of Our Man’s strengths is Packer’s willingness to quote at length from Holbrooke’s journals and correspondence. Holbrooke was no antiwar radical, but he spotted the weaknesses in American policy sooner than most. Here he is in 1963 or early ’64, prefiguring the misgivings that would later typify most critiques of the war:

We arrive here with no knowledge of the country or of the situation and immediately start giving advice, some of which we can really turn almost into orders because of the materials and money and transportation that we fully control. I think that no American would stand for such a deep and continuing interference in our affairs, even if it appeared that survival was at stake. 

Too junior to influence policy, Holbrooke instead watched, worked, and remembered. “His ambition still had a clean smell,” Packer writes, foreshadowing the way his subject’s ladder climbing would later outweigh his idealism. Holbrooke also gave himself over to the romance of service abroad in pursuit of national goals. “Some things I enjoy about Vietnam, not necessarily related to our mission but to my disposition,” he confided in a letter: “I enjoy the fast pace of the people who are good, the men who are doing the best job for us.” It was thrilling to return to the city or “an airstrip somewhere in the Delta” and run into “these people, with whom you may have shared a tough day in the field somewhere.” And, of course, he loved “the drama of the helicopters.” 

In Vietnam Holbrooke made the most consequential friend of his life: Anthony Lake. They were yin and yang: Lake the Harvard WASP to Holbrooke’s New York Jew; Lake who was understated and subtle where Holbrooke never entered a room he could not fill. Holbrooke developed a close friendship with Lake and his wife Toni in Vietnam, playing tennis and lingering over lazy meals. Their lives and careers would run in parallel until Holbrooke and Toni fell in love, dooming the friendship and poisoning Holbrooke’s first marriage. Holbrooke somehow contrived to blame Lake for this.

After participating in the Paris peace negotiations and then editing Foreign Policy during the Nixon years, Holbrooke joined the Carter administration as assistant secretary of state for East Asia. In that position he started making enemies in earnest. Slights and snubs began to add up; so did more Shakespearean enmities with heavyweights like Zbigniew Brzezinski. Holbrooke advised Carter against naming Brzezinski national security adviser, and Brzezinski apparently found out. Brzezinski engineered a campaign of petty retaliation: excluding Holbrooke from meetings, sticking him at the back of motorcades, refusing to let him see the talking points. The great strategist behaved like a third-grade bully, and Holbrooke frantically tried to save face. 

Yet, Packer writes, Holbrooke brought his egotism and idealism into balance in order to do good. In a remarkable effort, he championed the cause of South Vietnamese refugees, helping persuade multiple governments to increase their quotas. It was a smaller moment in a career full of blockbuster set pieces; a lesser biographer might have overlooked it. “Human suffering didn’t plunge Holbrooke into psychological paralysis or philosophical despair,” Packer asserts. “It drove him to furious action.” Holbrooke earned loyalty from his staff, giving them “not personal warmth—during conversations he was always on a phone call and shuffling paperwork—but intellectual stimulation, openness to dissent, and a sense of collective mission. In return they gave him their best.” And he proved a natural at bureaucratic tricks, exceeding his negotiating authority to achieve breakthroughs and shamelessly leaking to the press. When Reagan became president, Holbrooke spent a decade in the wilderness, cashing in his name and network for a sinecure at Lehman Brothers. It was a terrible fit, but at least it paid well.  

Upon Bill Clinton’s election, Holbrooke scrambled for a top foreign policy position in the administration. But he had made too many enemies, and his brash and openly ambitious style alienated too many people. To borrow Saul Bellow’s description of Vladimir Nabokov, Holbrooke was one of the great wrong-way rubbers of all time. Anthony Lake was named national security adviser, but by then the two men were practically enemies, and Lake declined to help him. So did other putative friends. 

So Holbrooke returned to the field, sending himself on a fact-finding mission to Bosnia in January 1993. He again felt the excitement of stepping into a war zone, and once more linked arms with a journalist, this time John Burns of the New York Times. They toured Sarajevo, which had been under siege by the Serbian army for 270 days and running. Holbrooke encountered camp survivors, bloodstains, rubble, and refugees. He confided to his journal, “If I don’t make my views known to the new team, I will have not done enough to help the desperate people we have just seen; but if I push my views I will appear too aggressive. I feel trapped.” The ambivalence quickly resolved in favor of trying to force action to stop the genocide. Holbrooke wrote unanswered memos and news articles urging the use of force against Serbian aggressors.

His government exile ended in 1993, when Clinton named him ambassador to Germany. The next year he became assistant secretary of state for Europe and Canada; a colleague told him that the job would include solving “the Bosnia problem.” Holbrooke spent much of his wedding day in 1995—his third—on the phone with Washington. While Clinton waffled during the Srebrenica massacre, Holbrooke seethed. (“If we’d bombed these fuckers as I had recommended,” he said, “Srebrenica wouldn’t have happened.”) He nearly lost his life in a car accident on Mount Igman in Bosnia that claimed three American lives. Eventually NATO airstrikes brought the Serbs to the negotiating table, where Holbrooke was determined to press his advantage during the short window while the bombs fell.

Packer’s chapters on Holbrooke’s negotiating efforts in Belgrade, and then in Dayton, where the peace talks concluded, are the centerpiece of Our Man. They show Holbrooke not exactly at his best, but certainly at his most. He alternately finessed and screamed at Slobodan Milosevic. (An observer commented that their “two egos danced all night.”) He sat through endless banquets. He lifted glasses of Scotch to his lips but barely drank, in order stay clearheaded. “He didn’t stick to talking points—had no real talking points—but let the conversation run its meandering course while looking for openings to run through,” Packer writes. Through it all, Holbrooke never let up, “always pushing the pace, and this intensity created momentum for the next small breakthrough, and each breakthrough added more speed and power.” To the extent that he had a strategy, Packer writes, “it was this: he set himself in motion and caused others to move, and things became possible that never happened with everyone at rest.” The war ended, on terms that were less than just—but it ended.

Nothing could top that experience, and nothing did. When Clinton was reelected, Holbrooke hoped to be named secretary of state but was passed over for Madeleine Albright—another enemy. (Later, during the 2000 presidential campaign, she was heard to remark, “I hope Gore gets elected, but I’ll be damned if Holbrooke is going to succeed me.”) Instead Holbrooke continued to troubleshoot hot spots, serving as Clinton’s special envoy to Cyprus, U.S. ambassador to the UN, and U.S. negotiator during the Kosovo war in 1998. During the 2000s, he returned to Wall Street, allied himself with Hillary Clinton—his most faithful patron—and worked on his marriage. Around this time Anthony Lake made a perceptive and generous observation about his former friend: “What Holbrooke wants attention for is what he’s doing, not what he is. That’s a very serious quality and it’s his saving grace.” 

A reckless and impulsive action-diplomat with a flair for the dramatic was bound to rub the cerebral Barack Obama the wrong way. Packer ably chronicles the friction between the two men. During one briefing on Afghanistan, as Holbrooke described a decision point as being “at the savage intersection of policy, politics, and history,” Obama murmured, “Who talks like this?” But as Afghanistan proved more and more intractable, Obama did what other presidents had done before him: he sent in Holbrooke, this time as special envoy for Afghanistan and Pakistan. It was an impossible job. Holbrooke fell out with Hamid Karzai; he could not afford to show daylight between himself and Secretary Clinton; he was nearly fired. Like Vietnam—which Obama quickly tired of hearing about—it was just too big a problem to solve.

If there is a line running through Holbrooke’s public life, it is a combination of liberal internationalism, a willingness to use American military power, and an easily overlooked decency toward the forgotten people of the world. Some of his worldview has gone out of fashion after the disastrous Iraq War. Is humanitarian intervention worth its very high costs? In Bosnia, yes. Libya, no. Kosovo, yes. Iraq, very much no. Rwanda, we will never know. But in the end, Richard Holbrooke’s life says less about foreign policy than about humanity itself. His profound imperfections reveal vulnerability and bathos, no more so than when he desperately tried to name all the people he loved as he lay dying—and then said to a staffer, “Make sure you’re recording my every witticism.” When a great man departs, it is often said that we will not see his like again. The mere thought of another Holbrooke is exhausting.

The Radicalizations of the American Mind https://washingtonmonthly.com/2019/07/12/the-radicalizations-of-the-american-mind/ Sat, 13 Jul 2019 01:15:03 +0000

Why liberals need to grapple with the complexity of the new left.

Avant-garde social movements are jarring to the non-adherent. That's part of their point—and a familiar experience at the moment. By 2019, a set of radical political theories on the left has spread from college campuses into professional media and mainstream culture, variously emphasizing demands for safe spaces free of posttraumatic triggers, limits on free speech to offset structural privilege, new pronouns to help de-normalize cisgender identity, and other unconventional imperatives. You've probably seen scare quotes around "safe spaces," "structural privilege," or "cisgender identity" as often as not.

Panic Attack: Young Radicals in the Age of Trump
by Robby Soave
All Points Books, 336 pp.

A more familiar but still disruptive politics of anti-capitalism has meanwhile been on the rise across the left under the old banner of socialism, while a politics of white nationalism, resurgent on the right, has echoed the tropes of fascism. All together, these movements haven’t just bewildered the uninitiated majority of the country; they’ve seemed to hasten a steep erosion of consensus around the terms of political debate in America and a metastatic spread of conflict without common ground.

For anyone committed to the foundational mores of liberal democracy, such as toleration, free expression, and due process, the encroachment of illiberal consensus from all sides represents a special problem. Liberal-democratic politics may be compatible with any number of radical ideas over time, but it also depends on the maintenance of a common social repertoire that includes a broad investment in mutual persuasion, a commitment to evidentiary argument, and an openness to creating diverse coalitions for distinct political ends. The more radical politics thrives by working against this social repertoire, rather than with it, the more it stands to threaten liberal-democratic politics as such.

In his new book Panic Attack, Robby Soave takes a comprehensive and critical look at the flourishing ecosystem of American radicalism with this problem in view. That means surveying a lot of ground: the hugely influential worldview informed by intersectional theory, which focuses on the complex interactions among racism, sexism, and other forms of oppression; the related politics of identity, culture, and power behind Black Lives Matter, fourth-wave feminism, and trans activism; the new salience of the Democratic Socialists of America; and the alarming phenomenon of the hateful, cruel, and ironical alt-right.

It also means going to some historical depth. While Soave’s subtitle is Young Radicals in the Age of Trump, he understands that, like so much about this age, the state of its radicalism has vastly more to do with decades of emerging conditions that preceded Donald Trump’s presidency than it does with Trump himself. In a sense, Panic Attack works as an anthology of historical explainers. Perhaps you’re uncertain about the origins and significance of ideas like “cultural appropriation” or “gender dysphoria,” the importance of Title IX of the Education Amendments Act of 1972, or the influence of Herbert Marcuse and the Frankfurt School of Critical Theory on undergraduate instruction in the humanities and social sciences? Soave lays these out clearly, without condescension to either his subject matter or the reader.

He integrates them, as well, in an overarching historical narrative. Despite their eclecticism and sporadic mutual enmity, Soave argues, the array of radical movements asserting themselves since Trump's election share a core set of themes, apart from which we fundamentally can't understand them. More specifically, they're driven by a core set of experiences that have unmoored "Zillennials"—a mash-up term of art combining Soave's own Millennial generation (born 1981–96) with the younger Gen Z (1997–)—from some of the basic assumptions of their forebears.

As Soave sees it, Zillennials have been raised in a chronically freaked-out “safety culture,” at home and at school, that has left them broadly unprepared for some of modern life’s tougher realities. They have then been loaded with unprecedented student debt and dropped into an extraordinarily rocky economy. With the intensifying influence of social media, and the powerful new modes of social normalization they’ve catalyzed, Zillennials have been primed for immersion into a networked set of radical ideas that have seemed to speak meaningfully to their lives. Coming largely out of the academy, these ideas have emphasized a capacious view of victimhood and an associated need for protection from psychological harm; an antipathy to capitalism; and a hard turn away from the traditionally youth-friendly norms of free expression that characterized the civil rights and antiwar eras.

The result has been a proliferation of new forms of common sense that are contemptuous not only toward liberals’ traditional embrace of markets in the economic realm, but also toward the traditional defenses of free expression, open debate, and due process that liberals have often shared with radicals on the left in the social, political, and legal realms.

Like many concerned about “political correctness,” Soave locates these tendencies first and foremost on college campuses, the sources of some of the more familiar and dramatic illustrations. These include the deliberate disruption and harassment of campus speakers—occasionally quasi-fascists, sometimes mainstream conservatives, but just as often heterodox liberals or even left radicals—on account of the putative harm their ideas do to one or more categories of oppressed people. Other illustrations include the establishment of new campus regulatory powers to enforce radical norms and various successes in reshaping curricula to conform with them. But Soave also sees these new norms beyond academia, in patterns of fierce division, exclusion, and censure that inflect different race-, sex-, and gender-based identity movements across society at large.

Notably, when Soave reports conversations with left-radical protestors, they tend to convey little interest in liberal theories of the case at all. “Free speech is allowing people to express themselves in a way that doesn’t put other people down,” one tells him at an anti-alt-right counterdemonstration last year. “It doesn’t oppress people and damage our society.” Elsewhere, an antifascist activist is incredulous about any need to justify physically attacking a peaceful group of right-wing demonstrators: “They’re fucking Nazis.” Elsewhere still, in arguing to cancel the screening of a pro-gay film about Stonewall that a group of student activists deemed in an open letter “discursively violent” toward members of the trans community, an undergrad student dismisses the idea of showing the film and then debating about it. “Critical discussion,” she says, “is simply a way of engaging in respectability politics.” Overall, the lack of engagement with liberal ideas is at least as striking as the substance of any radical ideas themselves.

The role of this negative space seems key to interpreting the story Soave is telling in Panic Attack. It’s not uncommon in skeptical accounts of “political correctness” to encounter the idea that radical politics goes too far in an illiberal direction. But Soave’s account is different. The physics of contemporary radicalism that he describes isn’t really linear, in the sense that “too far” would imply, so much as circular: as liberal-democratic mores become increasingly meaningless among radicals, they themselves increasingly see only left versus right, socialist versus fascist, radicalism versus radicalism. The only lucid way to respond to the world and win the future is, now, to understand that liberals are objectively allied with your most extreme political enemies; and the more forceful you imagine those enemies to be, the more force you’ll consider necessary and justified to counter them.

This dynamic may account for what some will no doubt consider a glaring disproportionality in Soave’s book: he devotes one chapter, fewer than twenty-two pages out of 280, to the alt-right. This appears to be because he considers the movement’s reach tightly limited. “The alt-right doesn’t have a comparable base of power,” he writes, in reference to the radical left’s dominance on college campuses, “but its members have played an important role in infesting social media sites and making them miserably toxic places.” The alt-right has been violent, even lethally so; it is everywhere hateful and ugly; but, for Soave, its only true political achievement is to have dressed itself up as something the left would maximally fear and react to, while feeding cues to a nihilistic and insecure president who, of course, reads the world extensively through social media.

The alt-right, Soave believes, has failed to grow for the same reason it’s succeeded at all: despite all the anger and disaffection it can speak to, it has no argument for destroying the legacy of liberal democracy in the United States that’s in any way compelling to most Americans. It just feeds off the twisted psychology governing a very online, disaffected fringe population of white men who consider themselves victims. “Alt-right activists often chant ‘You will not replace us!’ at their marches,” Soave writes. “They are specifically radicalized by the idea that multiculturalism is about erasing white identity.” That is, as timeworn as its KKK-style white-supremacy tropes are, the alt-right makes no sense as a contemporary movement, even to its members themselves, apart from the core narrative that a “politically correct” left is oppressing them. The alt-right is, for Soave, only marginally significant apart from its codependent relationship with the radical left.


This is an account in which cause and effect can play out ambiguously. That won’t please everyone. But it does help give Panic Attack an unusually subtle aspect. Soave is interested, above all, in the underlying political psychology of our fragmented moment. Self-identified radicals may inevitably feel stereotyped or altogether misunderstood. They may also feel suspicious, particularly knowing that Soave identifies as a libertarian. But the dominant themes of the book aren’t specifically libertarian; they’re generally liberal-democratic, and Soave gets at them in a remarkably open spirit of liberal-democratic dialogue.

There’s important substance to this style. Critical commentary about threats to free speech and due process can often work itself up about a monstrous formation like “postmodern cultural Marxism” that’s ostensibly beneath everything one could hate about the present-day radical left. It can imagine something like a single rhizome supporting and sustaining all sorts of toxic social tendencies—“politically correct” degradations of plain language, performative anger, and so on—that we might otherwise understand as distinct.

In effect, single-rhizome thinking like this reapplies the fundamental analogy in the concept of radicalism itself, with its etymological origins in the Latin radix, meaning “root.” Radical politics understands itself as going to the roots of society’s problems and addressing them there. Single-rhizome thinking is radical anti-radicalism. It sees no compromise, let alone prospect for transformative mutual understanding, with people immersed in the vocabulary of the radical left; the root has to be dug out and destroyed before it destroys us.

Soave rejects this way of thinking, which allows him not only to distinguish different theoretical frameworks (for example, “postmodern” theories from Marxism), but also to see value where radically anti-radical analysts might not—as he does in the relentless attention Black Lives Matter has brought to policing and the criminal justice system, or in the achievements that trans and queer activists have made in extending the equality and dignity that gay activists won before them. This in turn allows Soave to be uncommonly specific rather than overly general in his criticisms. It’s a rare virtue.

Occasionally, Soave struggles to balance the intricacies this virtue demands with the scope of the book. At times he simply moves too briskly, as in his summary of G. W. F. Hegel’s significance for modern politics, which he interprets in one paragraph by reference to a single blog post. At others, he invokes idiosyncratically libertarian reasoning about a complicated and contestable issue. This tendency is perhaps most pronounced in a chapter assessing a miscellaneous range of contemporary political movements without consistently establishing their connection to his central argument. For instance, his discussion of post-Parkland youth activism on gun control offers no reason to see it as a form of radicalism, apart from conjectural links to “safety culture.” The effect is one Soave might have preferred to avoid in an argument otherwise characterized by recurring efforts to establish good faith.

Toward the end of the book, Soave shifts from sympathetic critic to frustrated potential ally. In his chapter on the alt-right, for example, he argues that the contemporary left is engaged in a devastating array of strategic and tactical blunders, alienating armies of potential allies and foreclosing the sort of broad coalitions that could effectively counter Trumpian politics. “Political power, at least for now, is firmly in the right’s grasp,” he writes. “And it’s probably more likely to stay that way as long as the right’s foes—the left, but also liberals, centrists, and libertarians—remain hopelessly divided.”

The idea that we could overcome the far right’s political dominance through strategic and tactical compromises among its opponents is, in its way, hopeful. But it may also wish away the problem: Soave’s left radicals aren’t, as they see it—or even as he presents them on the whole—really here to defeat Trump; they’re here to do what they can to ensure that what follows Trump is a radical future. And as Soave notes throughout the book, this is a future liberated quite precisely from the politics of compromise, the “moderation” and the “centrism,” that radicals on the left believe have allowed radicals on the right to gain so much ground in America over the decades—and have enabled systematic forms of oppression for longer still.

In the end, the challenge for coalition building between liberal-democratic stalwarts and left radicals isn’t really strategic or tactical; it’s political. They don’t obviously want the same things, and so they can’t obviously come together to undo Trump or the Republican Party in the same way. Neither should liberals, or anyone else intent on defeating the Trumpian right, accept without argument that a coalition with left radicals makes sense as a strategic priority at all—compared with, say, helping the Democratic Party win at the state level, undo GOP-engineered gerrymandering, or just run better national campaigns. As Soave himself reminds us throughout Panic Attack, the politically disaffected population in America vastly outnumbers the actually radicalized one. But then, if liberals ever needed a demonstration of how crucial it is for them to see beyond electoral strategy, renew their ability to speak compellingly to a world rife with injustices, and intellectually seed a liberal-democratic future that no one can take for granted, the Age of Trump is it.

Panic Attack: Young Radicals in the Age of Trump by Robby Soave, All Points Books, 336 pp.