Just the Medicine https://washingtonmonthly.com/2016/10/25/just-the-medicine/

How the next president can lower drug prices with the stroke of a pen.

Presidential desk with pills

It’s hard to watch television or read a newspaper these days without seeing stories about outrageous prescription drug price increases. This past summer, the company Mylan was in the spotlight for hiking the price of its EpiPen, an injector containing cheap but life-saving allergy medicine, from $94 for a two-pack in 2007 to over $600 today. Last fall, Martin Shkreli, CEO of Turing Pharmaceuticals, became the face of greed when his company purchased the AIDS drug Daraprim and promptly raised its price from $13.50 to $750 per pill—an increase of some 5,000 percent. Prior to that, Valeant Pharmaceuticals drew widespread scorn for jacking up the prices of two heart medications, Nitropress and Isuprel, by 212 percent and 525 percent respectively. Meanwhile, Medicare, Medicaid, and private insurers were buckling under the $84,000 per-dosage-cycle price of Sovaldi, Gilead Sciences’ treatment for Hepatitis C, and of Medivation’s prostate cancer drug Xtandi, which costs $129,000 for an annual treatment.

These are not isolated incidents. List prices for drugs in general rose 12 percent last year, on top of similar increases over the previous five years. That increase is helping to drive up health insurance premiums and patient deductibles. According to an August 2015 report by Kaiser Health News, 24 percent of Americans taking prescription drugs reported being unable to afford a prescription from their doctor within the previous year.

The only thing more depressing than these out-of-control drug prices is the seeming inability of politicians to do anything about the problem. President Barack Obama has called for, among other things, faster approvals of generic copies of expensive biologic drugs and new authority to drive down prices for Medicare Part B. His proposals have gone nowhere in the GOP-controlled Congress. This summer, Hillary Clinton released a more aggressive plan for statutory changes that would make drugs cheaper and cut some advertising tax breaks for the drug industry. Even Donald Trump said he would break with his own party and support changing the law to allow Medicare to bargain with the pharmaceutical industry over drug prices.

Yet none of these proposals has even the slightest chance of being taken up by Congress during the lame-duck session, and the chances will be hardly better in the new Congress, given Big Pharma’s power over lawmakers in both parties. Indeed, legislation introduced in September by a bipartisan group of lawmakers that would merely require drug companies to give warning about upcoming price increases—an effort just to give incumbents up for reelection something they could tell voters they were doing—was widely seen as DOA.

But what if the next president doesn’t need Congress’s approval in order to act? What if previous statutes have already given the executive branch the authority it needs to beat back extreme drug price increases? And what if the Obama administration, which otherwise has not been shy about using executive power aggressively—to battle climate change, to reform immigration, and to defend transgender rights, for example—simply hasn’t used that power to curb drug prices, even though it could?

Government funding played a role in nearly half of the 478 drugs approved by the FDA between 1998 and 2005 and almost two-thirds of the most important and cutting-edge ones.

That’s exactly what a group of progressive Democratic lawmakers, including Massachusetts Senator Elizabeth Warren and Vermont Senator Bernie Sanders, have been saying for months. The source of that authority, they say, comes from provisions in a thirty-six-year-old law, the Bayh-Dole Act, that empower the executive branch to get pharmaceutical companies to reduce prices on drugs invented with the help of federal research funds. “We already have leverage in the law to force down prices—why isn’t President Obama using it?” asks the group’s leader, Texas Representative Lloyd Doggett.

According to a months-long investigation by the Washington Monthly—including interviews with a dozen current and former Obama administration officials as well as scores of outside experts—these progressive Democrats have a case. If they’re right, the next president could have leverage not only to bring down excessive drug prices, but also to reform the increasingly dysfunctional federally funded biomedical research and commercialization system that gives rise to those insane prices in the first place. 

Last September, as public outrage over price hikes by Valeant and Martin Shkreli was spiking, Doggett invited a few fellow Democratic representatives, including Rosa DeLauro of Connecticut, Jan Schakowsky of Illinois, and Peter Welch of Vermont, along with staffers, for a series of meetings in his Rayburn Building office. Doggett is a former Texas state supreme court judge with a bit of a laconic western drawl who represents a safe liberal district that includes Austin. He enjoys a reputation among his colleagues as a non-flashy legislative workhorse who fights hard behind the scenes for his causes. One of those causes is lowering drug costs; he has been pushing legislation to that end since he entered Congress in 1995.

At one of these meetings in Doggett’s office, researchers from the Center for American Progress (CAP), a liberal think tank, gave the group an eye-opening presentation on the extent to which federal—meaning taxpayer—dollars support critical drug research. Government funding played a role in nearly half of the 478 drugs approved by the FDA between 1998 and 2005, according to one study, and almost two-thirds of the most important and cutting-edge ones. These more innovative drugs, such as the critical oncology medication Gleevec, not only originated through federal support but continue to receive it thanks to Medicare, Medicaid, and other government programs that subsidize their purchase. Taxpayers, in other words, are paying for these drugs on both the front and back ends, even as the prices drug companies charge escalate seemingly without end. The CAP researchers also explained how the Bayh-Dole Act—officially the Patent and Trademark Law Amendments Act of 1980—could be utilized to lower those prices.

A complex piece of legislation that took four years to write and pass, Bayh-Dole was designed by its sponsors, Indiana Democratic Senator Birch Bayh and Republican Robert Dole of Kansas, to encourage the commercialization of federally sponsored research. At the time, much of that research was sitting on shelves in university and federal labs because companies could not get secure enough title to the discoveries to be willing to invest the extra dollars necessary to turn them into salable products. Bayh-Dole mandated that the labs and universities could patent their discoveries and sell the royalty rights to private-sector firms.

The law was, by most accounts, a big success. Over the next two decades, U.S. universities increased their patent output tenfold and founded more than 2,200 companies to exploit those patents, thus creating 260,000 new jobs and contributing $40 billion to the economy (though some of this increase is probably due to the biomedical revolution, which gave university researchers tools such as gene splicing to more easily create patentable medical applications).

Bayh-Dole also mandated, however, that the federal government retain its own rights to the patents, which it could exercise under certain conditions. If, for instance, a company failed to use a federally funded discovery to get a product to the market, the federal agency could “march in” and offer the rights to another company to commercialize and sell the drug to the public. Or it could offer “royalty-free rights” on any patent to companies that would manufacture products strictly for government use—say, a drug sold only to the military.

Yet in the thirty-six-year history of Bayh-Dole, there had only been five attempts (petitions from patients, advocacy groups, or corporations) to get the government to invoke march-in or royalty-free rights—together referred to as “retained rights”—against a pharmaceutical company. All five petitions had been rejected by the National Institutes of Health (NIH), an agency of the Department of Health and Human Services (HHS).

The main reason for the NIH’s hesitation, Doggett and his team learned, is that the agency has powerful institutional reasons not to want to exercise its retained rights. The NIH’s main mission—the thing Congress funds it to do and holds it accountable for—is encouraging medical advances. It achieves this by partnering with university researchers and pharmaceutical companies. Anything that upsets these partnerships is seen within the agency as hampering its mission, and as a threat to its budget. “NIH won’t ever agree to exercise march-in or royalty-free rights, no matter the strength of the case,” says James Love of the think tank Knowledge Ecology International, who led three of the five failed NIH petitions and was involved in the other two. In briefings with Doggett and his team, Love suggested that the only way to get the NIH to use its power would be to convince higher-ups in the Obama administration to force it to do so.

So that’s what the lawmakers decided to do. In early January, fifty-one House Democrats, including Doggett and his group, sent a letter to HHS Secretary Sylvia Mathews Burwell and NIH director (and Nobel laureate) Francis Collins, saying, “We respectfully urge you to use your existing statutory authority to respond to the soaring cost of pharmaceuticals.” Specifically, they asked the NIH and HHS to finally propose guidelines for triggering the use of march-in rights, saying, “We believe that just the announcement of reasonable guidelines in response to price gouging would positively influence pricing across the pharmaceutical industry.”

“That’s the point,” says Doggett. “Just the threat” of exercising these rights, or even of reviewing the amount of U.S. government support that over the years has gone to the companies holding exclusive patents, would probably “cause the pharmaceutical companies to blink.”

Like a lot of policy battles in Washington, the one over the government’s retained rights on patents to federally funded research revolves around contested interpretations of a few words in a long statute. The Bayh-Dole Act states that the federal government can exercise its retained rights only under certain conditions. The main one is if the company that holds the patent rights has failed to make the fruits of the discovery “available to the public on reasonable terms.” Another is if the agency that originally disbursed the research funds determines that exercising its retained rights “is necessary to alleviate health or safety needs” of the public. The fundamental legal dispute is whether, under Bayh-Dole, exorbitant drug prices constitute a violation of “reasonable terms” and/or a threat to “public health and safety.”

It is fair to say that the vast majority of attorneys who know anything about Bayh-Dole have concluded that the answer is no: high drug prices are not one of the conditions that would trigger the government’s ability to exercise its retained rights.

It is also fair to say that most of the attorneys who make this argument represent drug companies. That is true even of the law’s cosponsors. In 2002, Birch Bayh and Bob Dole, by then retired from the Senate and working as lobbyists for law firms representing drug companies (Dole himself was starring in ads for Viagra), wrote a letter to the editor in the Washington Post. In it they stated that Bayh-Dole “did not intend that government set prices on resulting products” and that government could exercise its retained rights “only when the private industry collaborator has not successfully commercialized the invention as a product.” A few years earlier, Bayh had argued the opposite when he was representing a firm that would have benefited from the government’s exercise of royalty rights.

Opposed to the industry’s position is a small group of lawyers, researchers, and scholars who have long argued that the government does have pricing rights under Bayh-Dole. They include public interest lawyers such as Love and Robert Weissman, the president of the advocacy group Public Citizen; law professors such as Michael Davis of Cleveland-Marshall College of Law and Rachel Sachs of Washington University in St. Louis; medical policy experts such as Peter Arno of the University of Massachusetts Amherst and Aaron Kesselheim and Jerome Avorn of Harvard Medical School; and the philanthropist and former pharmaceutical patent attorney Alfred Engelberg.

These experts have their differences. The latter three, for instance, believe march-in rights apply only to drugs based on patents that all derive directly from government research. That’s a small portion of the drugs on the market, though many of those are among the priciest. (If a drug’s patent has expired, as was the case with Daraprim, Bayh-Dole no longer applies.)

In general, however, these experts all agree that “reasonable terms” and “health and safety” can include price, for several reasons. For one, many U.S. laws other than Bayh-Dole use the phrase “reasonable terms,” and courts have typically defined that phrase as including price. Also, when the legislation was being considered, many lawmakers and witnesses at hearings raised the very issue of price, out of worry that granting private companies lengthy, exclusive patents on government-funded research—that is, monopolies—would lead them to jack up the prices. March-in and royalty-free rights were the provisions these lawmakers demanded in order to secure their votes for the bill. Finally, there’s the fact that the NIH, in its written rejections, has never explicitly stated that Bayh-Dole prohibits using pricing as a factor. Instead, the agency has come right up to the line—stating, for instance, that march-in rights are “not an appropriate means of controlling prices.” This, say advocates, suggests that the NIH knows that its own case is not legally ironclad.

Doggett determination: Representative Lloyd Doggett (center) speaking at a press conference announcing the formation of a prescription drug pricing task force in November 2015. Pictured in back, from left: Representatives Elijah Cummings, Marcy Kaptur, Jim McDermott, and Rosa DeLauro.

So, which side is right? The proponents of march-in rights power certainly have a reasonable case. But it’s impossible to say with certainty, because the question has never been litigated. The only way to know for sure would be for the government to actually test its powers. It could do so by proposing a regulation, or even just a regulatory guideline, based on those rights; evaluating the arguments that come back from the public and interested parties; and waiting to see how the courts ultimately decide any lawsuits that challenge those regulations or guidelines. The problem is that, for thirty-six years, the government has been too scared to try.

In addition to parsing the language of the statute, the drug companies deploy a second argument against the government’s use of retained rights to regulate prices. It is they, not the government, who put up the lion’s share of R&D funding for new drugs, say the companies. So it would be unjust and confiscatory for the government to use its retained rights to lower prices.

As a matter of pure law, advocates note, this is beside the point. Bayh-Dole does not set out any percentages or other metrics for what the government’s share of R&D on a drug must be before its retained rights kick in. “It doesn’t matter if the government grant was for millions of dollars,” James Love says, “or for a few thousand.”

In any event, the government’s impact on R&D and the amount spent to support drug development is much higher than the drug industry likes to acknowledge and most voters understand. This is especially true of breakthrough drugs (of which there are far fewer coming online than in years past) as opposed to the “me too” variety—modest tweaks on existing treatments—that the drug industry has increasingly produced. A 2011 study published in the journal Health Affairs found that government-funded research contributed to most of the new medications that, because of their innovative nature, qualified for “priority review” by the Food and Drug Administration between 1998 and 2005. A 2014 study in the same journal found that the majority of the twenty-six most transformative drugs—those judged by medical experts both to be innovative and to have groundbreaking effects on patient care—developed between 1984 and 2009 were discovered with the help of federal research funding.

The drug industry will on occasion grant that the most innovative drugs require federal research funding—usually when they’re lobbying Congress for more such funding. Still, they say, government’s share of the research and development costs behind any particular such drug is small compared to the drug company’s own R&D costs, which, the industry says, typically exceed $1 billion.

Independent researchers, however, have challenged that $1 billion–plus figure. They note that it is derived from unverifiable industry data, that half is accounted for by federal tax breaks pharmaceutical companies receive, and that a substantial portion of the rest comes from dubiously counting such expenses as “cost of capital”—what companies theoretically would have earned investing in something else. The drug industry vigorously defends the figure.

Whatever the merit of Big Pharma’s claim to be a big investor in drug innovation, that claim is less true every year. In the past decade, major drugmakers have cut R&D costs in order to slash expenses and maintain high returns to shareholders. Nine of the top ten drugmakers spend more on marketing than R&D. A McKinsey & Company report called even this reduced level of R&D spending “a luxury that investors no longer tolerate.” In general, the big drugmakers are leaving the innovation to small pharmaceutical and biotech firms, which originated 64 percent of the new drugs approved by the FDA last year, up from less than 50 percent a decade ago.

Figuring out the pharmaceutical industry’s share of drug R&D costs is made even more difficult by the fact that the government doesn’t bother to tote up the overall value of all of its subsidies. Beyond research grants to academia, medical centers, and small start-up companies, Washington spends millions of dollars on the infrastructure that keeps the drug discovery process moving globally. The NIH helps many drug researchers in the earlier stages get through the maze of federal bureaucracy in order to advance a novel medicine. The NIH and the Food and Drug Administration work with drug developers to create frameworks for testing for safety and efficacy, in order for companies to be certain of the data they must collect and the standards they must meet for approval. The NIH strikes “cooperative research and development agreements” with commercial firms, sharing resources and work on projects that might ultimately lead to new medicines. Many modern medical devices and prosthetics marketed by major corporations start as experiments in Department of Veterans Affairs hospitals and laboratories. The VA and the Department of Defense have conducted large clinical trials in cardiology, diabetes, prostate cancer, and smoking cessation that help shape the direction of industry research in those areas. And universities and medical consortiums win government grants for disease “awareness” and “testing” programs, which all contribute to the ongoing market success of a drug invented to treat a certain condition. Add to this the multitude of tax breaks and seemingly endless extensions of patent exclusivity that government showers on drug companies, strengthening their monopolies. Most of this government largesse is not counted in the many (often industry-funded) studies of drugmakers’ R&D investment floating around Washington; government programs like Medicare that subsidize the purchase of the industry’s products are also rarely considered.

“Industry always compares individual federal research grants to what they claim to be their overall cost—which they greatly exaggerate,” says a senior NIH official who didn’t want to be identified by name. “But we finance the whole system that basically keeps global drug development on track and launching successful drugs.”

The industry defends high drug prices as necessary for companies to recoup the R&D costs of the many drugs they invest in that don’t ever make it onto the market—the “risk-adjusted price.” There may be some truth in this. But the same logic also applies to the government’s investment. For every federal research grant that leads to a patent sold to a drug company, there are hundreds of others that don’t (even if they extend the boundaries of scientific knowledge). Drug companies are beneficiaries of that winnowing-out process, too.

The drug companies’ third argument is that any attempt by government to exercise its retained rights on a drug patent would wreak havoc on the whole pharmaceutical industry. Without the certainty of a patent term and end date, pharmaceutical companies would be reluctant to invest in new drugs coming out of universities and biotech start-ups. Moreover, even a whisper of such threats would spook the Wall Street banks and hedge funds that have become increasingly big investors in the pharmaceutical industry.

It’s not just the drug companies who make this argument. You hear it from insiders at the NIH, even if they won’t say it on the record. You hear it from the Department of Defense, which also funds medical research. In a letter this summer opposing a march-in rights petition, the Defense Department mentioned the concerns of investors three times, saying, “NIH has consistently declined to exercise march-in authority because market dynamics could be affected for all products subject to the provisions of the Bayh-Dole Act.” 

You hear it from independent market researchers like Ira Loss at Washington Analysis. If the government were to use march-in rights to exercise pricing power, even once, “there’d be widespread panic,” predicts Loss. “It would really impact investment in pharma/bio, maybe even the overall medical sector.”

Views like this are so widespread that it would be folly to ignore them. Nevertheless, there are good reasons to take them with a grain of salt. Similar warnings were voiced by the telecommunications industry during the long “net neutrality” debate over whether the Federal Communications Commission should apply the same strict regulations to cable broadband providers that it does to telephone companies. In 2015, the FCC ruled in favor of net neutrality. Since then, the telecoms have issued reports based on proprietary data that, indeed, broadband investment has declined. But the current FCC chairman, Tom Wheeler, citing public SEC data, has countered that there has been a 35 percent increase in investment in internet-specific businesses and sizable increases by large network companies like AT&T.

In the case of drugs, as we’ve seen, pharmaceutical companies have already been cutting back their R&D investments. If Big Pharma’s profits can only be supported by greater and greater federal subsidies and monopoly rents that gouge the public, the industry is operating with an unsustainable business model—one that bears an alarming likeness to a real estate sector that, a decade ago, could only be propped up by predatory mortgages. The greater folly, says Robert Weissman of Public Citizen, would be to allow “us to be kept locked into the status quo, because of threats of a market collapse from pharma any time the government tries to control drug prices.”

The letter that Lloyd Doggett and fifty other House members sent to the heads of HHS and the NIH in early January of this year landed at a propitious moment for advocates. Martin Shkreli had recently been indicted on securities fraud. The pharmaceutical behemoth Pfizer had just announced major price increases on 100 of its drugs. And the president was unveiling another in a flurry of new executive actions, this one narrowing the loophole that allowed guns to be sold privately—at gun shows, for instance—without licensing or background checks. It seemed at least plausible that he would soon take similar unilateral action on drug prices.

That possibility appeared to become a near certainty later that month, when the New York Observer quoted New York Democratic Representative Charles Rangel saying that Obama would “use his executive powers, to deal with this thing [high drug prices] as soon as he gets back” from a trip to Detroit.

Rangel’s statements put the pharmaceutical industry and its battalion of Washington lobbyists on red alert. According to one lobbyist, the biopharmaceutical lobby and Big Pharma’s official trade group were scrambling to answer angry calls from corporate drug company headquarters around the world. How had they missed this? Who could get information from the White House? The lobbyist said that pharmaceutical CEOs were particularly annoyed that Obama hadn’t warned them first. Didn’t the president owe them something for their having supported the passage of Obamacare?

But Rangel’s story also surprised the White House. Over the next few days, the administration let it be known that there were no immediate plans for an executive order affecting drug prices.

Still, Doggett and his allies had another card to play, one they thought would give the president the perfect opportunity to take executive action, were he so inclined. James Love’s organization, Knowledge Ecology International, along with another nonprofit, the Union for Affordable Cancer Treatment, had just filed a petition with the NIH and the Department of Defense, arguing that the U.S. government should use march-in rights on the prostate cancer drug Xtandi.

What if the next president doesn’t need Congress’s approval in order to act? What if previous statutes have already given the executive branch the authority it needs to beat back extreme drug price increases?

There could hardly be a better example to trigger Bayh-Dole rights than Xtandi. The drug targets a widespread disease; according to the American Cancer Society, prostate cancer attacks one in seven American men, and killed 27,500 in 2015. It is also outrageously expensive—$129,000 for a year’s supply, about four times higher than the same drug sells for in Japan and Canada—putting it in the top ten most expensive drugs for Medicare. Best of all, from a legal point of view, all the patents on the drug came directly from government-funded research at UCLA; no pharma companies had added on patents, which would have weakened—at least in the eyes of some experts—the government’s ability to exercise its retained rights.

In February, while Love’s petition was making its way through the system, HHS Secretary Sylvia Burwell came to Congress to testify before the Ways and Means Committee, on which Lloyd Doggett serves. The Texan used the opportunity to ask her pointedly if she could assure him and his fifty colleagues that their earlier letter requesting guidelines on march-in rights was “receiving thorough consideration.” Burwell answered carefully. “We are continuing to try and pursue every administrative option,” she said, adding, “We welcome your letter and your suggestion.”

But just a couple of weeks later, she sent Doggett her official dismissal. While the administration had not ruled out using Bayh-Dole rights “when presented with a case where the statutory criteria are met,” Burwell wrote, after consulting with the NIH the administration had decided that “the statutory criteria are sufficiently clear and additional guidance is not needed.” Doggett said he wished she had just given him a straight “Hell, no!” to begin with.

Love’s petition on Xtandi was still alive, though, so Doggett’s group decided to bring in the big guns. On March 28, they sent another letter to Burwell, this time including six Democratic senators—among them then presidential candidate Bernie Sanders (who had tried to clarify Bayh-Dole language protecting taxpayer rights in 2001) and Elizabeth Warren.

The lawmakers weren’t just requesting general guidelines. They wanted to hear what the NIH had to say about Xtandi. “We do not think that charging U.S. residents more than anyone else in the world meets the obligation to make the invention available to U.S. residents on reasonable terms,” they said. They asked HHS to review the facts and issues in the Xtandi case in a public hearing, not behind closed doors.

Burwell rebuffed that request, too, writing on June 7 that “the NIH believes [the current] process allows the agency to collect sufficient information to consider the [Xtandi] petition without a public hearing.” The NIH formally rejected Love’s petition two weeks later.

The financial press trumpeted the decision as “good news” for Astellas and Medivation, the two firms that share the blockbuster drug’s profits. And indeed it was. In August, Pfizer Inc. announced it was buying Medivation for $14 billion, nearly double what the company had been worth six months earlier. This was a pretty good return for a drug that would never have existed without $31.5 million in NIH grants.

Why did the Obama administration refuse to exercise—or even hint at exercising—its power under Bayh-Dole to bring down excessively high drug prices? A White House spokesperson would only say that the president “deferred to HHS,” which is more a statement of the obvious than an answer.

One possibility is that administration lawyers looked at the statute, read all the relevant pro and con arguments, and came to the conclusion that Bayh-Dole does not, in fact, give government that power. This seems unlikely, though: Sylvia Burwell’s February letter certainly stops short of saying that.

Another possibility is that the administration had political reasons not to want to cross Big Pharma. To be sure, the White House did deals with the industry to pass Obamacare. But with that law secured, the need for the president to play nice with the industry significantly lessened. In fact, taking on the pharmaceutical industry would have been excellent politics in an election year, especially with the Democratic base. Moreover, Obama hasn’t been shy about signing executive orders that have infuriated other powerful interests, such as the energy industry and the National Rifle Association.

A third possibility, and a plausible one, is that Obama was briefed on retained rights, concluded that he might indeed have the power to use them to lower drug prices, but then chose not to do so, out of fear of spooking the markets and putting the economy at risk in an election year.

A final possibility—one that fits the known facts and may be familiar to anyone who’s served in government—has to do with timing. The issue of high drug prices, though long simmering, didn’t reach a political boiling point until last year. By then, many of the long-serving White House officials who might have been most able to see the bubbling crisis as an opportunity to take action—those with policy chops, knowledge of the bureaucracy, and close relationships with the president—were cycling out. And as happens in any administration, those who have taken their place are younger, less experienced staffers with less inclination, and less of a mandate, to take risks. It’s entirely possible that none of them even raised the idea of exercising march-in rights with the president.

“I know Barack Obama very well,” says a former senior Obama White House official. “When he said he wanted to do something about high drug prices, I believe him.” This official also believes that the executive branch probably does have the power to use Bayh-Dole to bring down drug prices, and should have at least tried to exercise it by proposing regulations or guidelines. “My guess is [his current staff] told him there’s nothing he can do unilaterally.” Evidence for that view is that the president never publicly voiced support for march-in rights, as he did for net neutrality.

On January 20, 2017, a new president will enter the White House, along with a fresh—and, one hopes, capable—White House staff. The new administration will then begin a months-long dance with Congress to win approval of its agency nominees and to build support for its agenda. One item near the top of that list should be high drug prices. The administration’s need to woo lawmakers will bring with it a temptation to forswear any intention to act unilaterally on that issue.

It should resist that temptation, however. Long experience shows that Congress is extremely unlikely to take any meaningful steps toward reeling in drug prices. The clout of the pharmaceutical industry and the fear of upsetting Wall Street and the markets are simply too strong. In such an environment, a president who wants to get something done needs leverage.

The threat to exercise the government’s retained rights under Bayh-Dole would do the trick. And some powerful lawmakers would like a president to take that step. “March-in rights provide a powerful tool to improve access to federally funded medicines, but that tool has lain dormant for decades, even while drug prices soar out of reach for millions of Americans,” Elizabeth Warren told the Washington Monthly. “While Congress needs to do more,” she added, the executive branch “needs to step up.”

The next president could have leverage not only to bring down excessive drug prices, but also to reform the increasingly dysfunctional federally funded biomedical research and commercialization system that gives rise to those insane prices in the first place.

There are other statutory powers the next president could draw on. One is the authority of the Medicine Equity and Drug Safety Act of 2000 to allow the re-importation of lower-priced drugs from countries like Canada. (The Canadian drugmaker Biolyse Pharma has already offered to sell a generic version of Xtandi to the U.S. government at a roughly 90 percent discount.) Another is a section of the United States Code that allows government agencies to buy generic versions of drugs at steep discounts. In 2001, during the anthrax scare, then HHS Secretary Tommy Thompson used the threat of this power to force drugmaker Bayer AG to cut the price of its anti-anthrax medication Cipro in half.

By threatening to invoke Bayh-Dole and other existing powers broadly, the next president could get price reductions on a range of outrageously expensive medications. But perhaps even more importantly, the threat may be the only way to force the drug companies and lawmakers in both parties to sit down with the administration and hammer out a broader array of reforms.

Bayh-Dole was in many ways an inspired piece of legislation, giving rise to a biomedical and commercial research system that has produced some miracles. But in the intervening thirty-six years, that system has grown increasingly dysfunctional, predatory, and dependent on public largesse. Fortunately, the legislation that created the system provides the tools we need to reform it.

Top Cop https://washingtonmonthly.com/2016/10/25/top-cop/

As police chief of gritty Richmond, California, Chris Magnus embraced Black Lives Matter, all but eliminated fatal shootings by police, and cut the homicide rate in half.


In my nearly fifty years of picketing and protesting for one cause or another, I’ve learned that brushes with police officers, much less their bosses, may not end well. So my first encounter, three years ago, with one of America’s most successful police chiefs was a novel experience. Then fifty-one-year-old Chris Magnus, a fair-haired midwesterner fond of quoting Robert Peel, was mixing with “the public” in Richmond, California, a blue-collar city of 110,000 across the bay from San Francisco.

Several thousand of us had just marched to the front gate of our local Chevron refinery, in Richmond’s largest environmental protest ever. Magnus was circulating around with no visible sign of rank or authority. He was hatless and wearing blue jeans and a windbreaker with a small, barely noticeable Richmond Police Department (RPD) logo on it. When he approached a small group of us, we were discussing the large turnout. As if already attuned to the topic of our conversation, he stopped and declared, “Isn’t this a terrific crowd? What a great day for our city!”

In America today, due to the alarming spread of police misconduct or counter-violence, many police chiefs—and their cities—are having more bad days than great (or even good) ones. And local law enforcement leaders don’t tend to be very complimentary about protest activity, even when it’s not directed at them. As the New York Times reports, fatal shootings and other civilian abuse cases have “radically changed their work—making jobs more difficult, far more political and much less secure. Being fired by a mayor on live television now comes with the territory.” Ronal Serpas, a former chief in two southern cities, told the Times, “It isn’t a 20-year career to be chief of Baltimore or Chicago or New Orleans. We know we are on a four- to five-year lifeline.”

Chris Magnus’s tenure in Richmond lasted twice that long. By the time Magnus left last year, under his own steam, to become chief in Tucson, Arizona, he had greatly improved public safety by repairing relations with a majority-minority community long estranged from the police. Between 2009 and 2014, killings in Richmond—often gang related—declined five years in a row. Violent crime in general was 23 percent lower, and property crime fell by 40 percent during that period. By the end of 2015, the city’s homicide rate was 50 percent lower than a decade earlier.

Search committee members wondered how Magnus, then police chief of Fargo, North Dakota, would fare in largely nonwhite Richmond, whose homicide rate in 2005–06 made it one of the most dangerous cities in the United States per capita.

Unlike colleagues elsewhere, Magnus was able to overcome union and political opposition, from inside and outside the department. He never lost the backing of municipal officials, community leaders, or even progressive activists with little past fondness for cops. And he did all this while serving as one of the nation’s few gay police chiefs—and the first to ever participate in a Black Lives Matter protest.

The story of Chris Magnus and Richmond’s remarkable public safety turnaround is both inspiring and instructive. As a rare case study in successful public safety reform, the RPD, under Magnus, generated much favorable media coverage, both national and local. In 2015, President Obama welcomed RPD officer Erik Oliver to the White House for a briefing on what Richmond was doing right. Attorney General Loretta Lynch visited Richmond for similar information-gathering purposes. Magnus himself was tapped by the Department of Justice to investigate police department dysfunction in Ferguson and Baltimore. But Richmond’s experience also reminds us that there are no quick fixes for an arm of local government in need of fundamental repair. If it took progressive city leadership more than a decade to make institutional change in the RPD, how long will police reform take in cities lacking such vision and determination?

Despite his calm, self-effacing, even stolid manner, Magnus is regularly described, in the press, as “unconventional.” Given his chosen profession, that’s not surprising. A native of Lansing, Michigan, Magnus grew up in university circles, with no other cops in the family tree. His father taught at Michigan State (MSU); his mother was a local piano teacher. He first worked as a police dispatcher and then became a paramedic. After joining his hometown police force as a patrol officer, he rose to the rank of captain in sixteen years. Along the way, he earned a master’s degree in labor relations at MSU. Then he departed to become chief in Fargo, North Dakota.

Six years later, when Magnus was forty-five, he applied for the top job in Richmond. In Fargo, he had a strong record of accomplishment in one of the safest and whitest places in America, a city then averaging only one homicide every two years. Richmond’s city hall search committee wanted a new chief committed to the crime reduction strategy known as “community policing.” But some members wondered how Magnus would fare in largely nonwhite Richmond, whose homicide rate in 2005–06 made it one of the most dangerous cities in the United States per capita.

Before its current wave of popularity, community policing gained some traction three decades ago when it was championed by the Clinton administration. But the idea never took hold in Richmond, where, in the 1970s and ’80s, a crew of trigger-happy cops known as the “Cowboys” ran roughshod over the community, garnering national publicity on 60 Minutes and other media outlets. Among their costly misdeeds was the fatal shooting of two African Americans, whose families won a $3 million judgment in an NAACP-assisted case accusing city officials of ignoring or condoning a “pattern of misconduct.”

After the Cowboys were disbanded, overly aggressive street teams, known as the “Jump-Out Boys,” took over. In RPD publicity photos, they posed proudly “decked out in full tactical gear and toting MP5 submachine guns,” as San Francisco Magazine’s Joe Eskanazi discovered. But in interviews with Eskanazi, RPD veterans from that era sadly acknowledged that those tactics left “murderous gangs locked in a futile stalemate with police.” As the city’s Latino population grew, Richmond cops were not even friendly to newly arrived immigrants just trying to survive peacefully in El Norte.

The RPD hassled day laborers outside of Home Depot, conducted traffic stops targeting Spanish-speaking drivers, and roughed up participants in a 2002 Cinco de Mayo festival. The former city employee Andres Soto won a $175,000 settlement over that misconduct, and cofounded the Richmond Progressive Alliance, a group dedicated to community improvements like police reform.

By 2005, gang strife had become so bad that some city councilors favored local deployment of the National Guard. Veterans of Middle Eastern wars, newly hired by the RPD, were shocked by the street violence they encountered. “I couldn’t believe I was in an American city,” recalls RPD officer Ben Therriault, who served with a military police brigade in Iraq. “I thought I was back in Baghdad.”

Richmond’s new city manager Bill Lindsay and city councilors like liberal Democrat Tom Butt and Progressive Alliance leader Gayle McLaughlin decided to employ Magnus, not the failed paramilitary responses of the past. Beginning in early 2006, the new chief reshuffled the RPD’s command structure and began promoting like-minded senior officers. As Magnus told me later, he limited the use of “street teams” in high-crime neighborhoods because of their tendency to “roust anybody who’s out walking around, doing whatever, with the idea that they might have a warrant outstanding or be holding drugs or something.”

According to the new chief, that kind of law enforcement activity, if conducted on a regular basis, only served “to alienate the whole population that lives in those neighborhoods.” Instead, more officers were switched to regular beats, where they were encouraged to do foot patrolling, where possible. Under a new job evaluation system, career advancement became more closely tied to each officer’s ability to build long-term relationships with individual residents, neighborhood groups, and community leaders.

“We assign people for longer periods of time to specific geographic areas with the expectation that they get to know and become known by residents,” Magnus explained. “They are in and out of businesses, nonprofits, churches, a wide variety of community organizations, and they come to be seen as a partner in crime reduction.”

Richmond cops were given personalized business cards, with their work cell phone numbers and email addresses, and were urged to give them out. The RPD even began hosting “Coffee with a Cop” conversations in places where residents could meet officers responsible for their neighborhoods, ask them questions, and get crime-fighting tips.

To set a personal example in a city with few resident police officers, Magnus bought a home in the Richmond neighborhood known as the North and East. From there he could bicycle to work, and even when off duty he was never far from the daily challenges he faced on the job. He could hear police sirens late into the night, the occasional shot being fired, and members of his neighborhood association knocking on his door to report nearby crimes.

City budgets kept the RPD at full strength during a period when fiscal pressures on neighboring cities, like Oakland, led to force reductions and what some residents believed was a related increase in crime. So Magnus was able to hire and promote more women, minorities, and Richmond residents. The chief also sought applicants of all races and genders who, he said, could “show empathy with victims of crime, who are not afraid to smile, to get out of the police car and interact in a positive way with people, who can demonstrate emotional intelligence, who are good listeners, who have patience, who don’t feel that it takes away from their authority to demonstrate kindness.”

Through attrition, Magnus was able to personally select more than ninety of the department’s nearly 140 patrol officers and all but four of its forty-six supervisors. By 2014 about 60 percent of the department’s 182 active police officers were from minority groups. The RPD had twenty-six women on its payroll, including several female officers who were highly visible in the community. Only a dozen or so officers serving when Magnus arrived remained on the force. As the chief explained to one reporter, “it’s easier to get new people in a department than it is to get a new culture in a department.”

The chief’s personnel decisions were much applauded later on. But his initial shake-up of the RPD roster threatened an institutional hierarchy with well-established perks, power, and promotional expectations. In late 2006, seven senior officers sued Magnus and the city for racial discrimination, claiming that he had created a hostile work environment for them as African Americans.

Richmond police had hassled day laborers outside of Home Depot, conducted traffic stops targeting Spanish-speaking drivers, and roughed up participants in a 2002 Cinco de Mayo festival.

Robert Rogers, then a reporter for the Contra Costa Times, covered the resulting state and federal court cases, which took five years to resolve and cost the city nearly $5 million. He believed this litigation was part of broader “old guard” resistance to city government changes introduced by newcomers like Magnus, city manager Lindsay, and Gayle McLaughlin, the California Green who became Richmond’s mayor for eight years. After deliberating in April 2012, a Contra Costa County jury rejected $18 million worth of damage claims, including $3 million for “emotional distress.” As John Geluardi reported for the East Bay Express, the jurors concluded that the plaintiffs were past beneficiaries of a “buddy system that facilitated their rise to the highest positions in the department through intimidation, race baiting tactics, and backroom dealing.” The court determined that racial discrimination had not been a factor in the alleged sidelining of their careers.

In December 2014, a Richmond youth group organized a downtown vigil lasting four and a half hours, the period of time that Michael Brown lay in the street in Ferguson, Missouri, after being shot by a police officer. About 100 people attended, including Chris Magnus. When a young protestor handed him a hand-painted sign declaring that “Black Lives Matter,” Magnus displayed it to passing traffic, while chatting with others at the event.

This peaceful scene in Richmond stood in sharp contrast to the almost simultaneous street clashes between police and protestors in Berkeley and Oakland, just a few miles away, after grand juries failed to indict Michael Brown’s killer or the officer who fatally choked Eric Garner in New York City. Yet, after a photo of Magnus and his sign appeared in local and national media outlets, the Richmond Police Officers Association (RPOA) criticized the chief for participating in political activities while in uniform. Backed by Richmond city officials, Magnus strongly disputed the union’s criticism. At a public appearance, he asked why it should be objectionable “to acknowledge that ‘black lives matter’ and show respect for the very real concerns of our minority communities.”

The RPOA reaction was muted compared to police union outbursts directed at protestors or public officials who sided with them in other cities. Magnus’s own view of police unionism was not hostile at all, based on his own rank-and-file experience in Lansing. There, he once joined union reformers who favored what he calls “an alternative way of addressing a lot of the issues we had with management.” His opposition slate defeated longtime incumbents no longer in touch with the concerns of younger officers.

Sounding very much like a local community organizer, Magnus recommended a two-step approach to fighting violent crime: “Get to know your neighbors, then get organized!”

The RPOA underwent its own leadership change a few months after the chief’s BLM sign holding. To clear the air about that controversy, Magnus met with Virgil Thomas, the union’s new president, and other officers. After an exchange that one participant recalls “was not warm and fuzzy,” Thomas told the press that he better understood what Magnus “was trying to do—he’s trying to bridge the gap, like we all are.” A year later, Thomas was defeated for reelection by Ben Therriault, the former military policeman who joined the RPD after serving in Iraq. Therriault grew up on a Flathead Indian reservation in Montana, where his father was tribal chairman. Like Magnus, he became a Richmond resident—in his case by taking advantage of a program that enables officers to live rent free in local public housing. “A lot of coworkers thought I was crazy,” he told a reporter, but Therriault’s new neighbors were quite happy to have him in their midst. Even though Magnus and Therriault were both model community police officers in that respect, Therriault told me fellow officers faulted Magnus for his “problematic symbolism” with the BLM movement. The movement, he said, “is not viewed as law-enforcement friendly.”

By the summer of 2014, there had not been a fatal shooting by the Richmond police since 2007. Between 2008 and 2014, the RPD averaged less than one officer-involved shooting of any kind annually. On September 6, 2014, the Contra Costa Times ran a story highlighting these favorable statistics, under the headline “Use of Deadly Force by Police Disappears on Richmond Streets.”

Unfortunately, that media celebration was premature. Just a week later, Richmond officer Wallace Jensen pumped three bullets into twenty-four-year-old Richard Perez, after a sidewalk struggle in which Jensen claimed that an intoxicated Perez tried to grab his gun. Both the district attorney and the RPD’s Professional Standards Unit found Jensen blameless, although the city later reached an $850,000 settlement with Perez’s family. After returning to duty briefly, Jensen applied for and was granted medical disability retirement.

The Perez case stirred heated local debate about Jensen’s use of deadly force and how such incidents are investigated. At a city council hearing, Magnus, who attended the young man’s funeral, argued that Richmond officers were “using a wide set of skills, including good verbal, sensitivity, and de-escalation skills to gain people’s cooperation. The results clearly speak for themselves. We have had only two fatal officer-involved shootings in a period of ten years. . . . That’s one of the lowest rates of force you’re going to find in any urban police department in this country.”

In 2014, Magnus told the council, the RPD responded to 122,159 “calls for service,” as he called them. Nearly 3,000 resulted in somebody being arrested (and 357 guns being confiscated). But only 6 percent of all Richmond arrests required the use of force in some form. In those situations, a Taser was deployed about 25 percent of the time. Seventy of the 182 suspects arrested forcibly were injured, along with twenty-two of the officers who detained them.

In twelve of those cases, officer conduct was reviewed by the RPD itself or, less frequently, the Richmond Police Commission (RPC), a nine-member civilian body appointed by the mayor and the city council. To encourage more independent probes in the future, Magnus facilitated the transfer of RPD “internal affairs” functions to a new Office of Professional Accountability located in city hall. The OPA’s first director is Eddie Aubrey, a former police officer, prosecutor, and judge who told the Mercury News that this rare arrangement can be “a model for agencies around the country,” and can “benefit communities” long distrustful of the police. In one of his initial investigations, involving police sexual misconduct with a teenage sex worker, Aubrey recommended that one officer be fired and eight others be suspended, demoted, or given letters of reprimand.

In early 2016, the city council overhauled the RPC as well. It’s now known as the Citizens’ Police Review Commission (CPRC), and has an expanded investigative mandate and additional resources of its own. As these changes were being made, Allwyn Brown, the RPD deputy chief under Magnus (and later his successor), told the East Bay Express that he wasn’t worried about having “another pair of eyes and ears for checks and balances.”

Even in Richmond, reformers like Magnus and Brown can’t rest on their laurels for long. In June 2015, in an open letter to Richmond residents, Magnus reported that the city was “experiencing a troubling upswing in both violent and property crime”—a 16 percent increase overall compared with the same point the previous year, with armed robberies going up 26 percent. More alarming was a 9 percent increase in January-to-June calls to the RPD about shootings. Gunfire claimed ten lives during this period, just one fewer than Richmond’s homicide total for the whole previous year. This murder rate spike was national in scope and, in other cities, spawned the hypothesis that it reflected “the so-called Ferguson effect”—that is, less aggressive policing methods as a result of protests against police killings of African Americans.

In fact, in Richmond and elsewhere, many police officials saw the problem in terms of more young people settling their disputes with guns—an uptick in retaliatory gang violence that had to be addressed with more, not less, community engagement. “We can reverse this recent trend, but we must take it seriously and respond now by working together,” Magnus warned in the open letter. He urged residents to form more Neighborhood Watch groups and to support Operation Ceasefire, the RPD-backed, church-based campaign to defuse turf rivalries with neighborhood walks, one-on-one meetings, and peace vigils. Sounding very much like a local community organizer, Magnus recommended a two-step approach: “Get to know your neighbors, then get organized!”

Six months later, Magnus was getting to know new neighbors himself, in Arizona. Looking for new career challenges, he left Richmond to become police chief in Tucson, which has a police department five times larger than Richmond’s. Awaiting him there was a not-very-warm welcome from the local police union, whose leaders had opposed his appointment and preferred a candidate from Dallas instead. During the hiring process, Brad Pelton, vice president of the Tucson union, came to Richmond with a fellow officer to interview Magnus. He noticed a framed political cartoon of the controversial vigil commemorating the death of Michael Brown. “That it was hanging prominently on his wall spoke volumes to me,” Pelton reported back.

On the Arizona union’s scorecard of his record in Richmond, Magnus was credited with “reduced crime, increased police staffing, increased officer compensation, and improved community relations.” But that didn’t outweigh his negatives, which included the fact that he had “participated in a ‘Black Lives Matter’ protest and brought in a civilian to replace the commander in the internal affairs division.”

In Richmond, community policing continues under Magnus's successor, Allwyn Brown, a thirty-two-year veteran of the RPD. That's a resume still preferred by the RPOA, whose president, Ben Therriault, says he already sees a "big difference with someone who grew up within the department versus an outsider." Nevertheless, "outsiders" sometimes still have ideas worth checking out or trying.

Last November, for example, Brown joined U.S. law enforcement leaders on a trip to the United Kingdom to learn more about “de-escalation” tactics used by its largely unarmed police officers. “My experience in Scotland sort of changed my lens, in terms of how I look at force incidents,” Brown told the New York Times. “Our cadence, leading up to the moment of truth when force is used, seems like it can be a little fast.”

Ten months later, in the wake of the fatal officer-involved shootings in Baton Rouge and suburban St. Paul and the retaliatory killing of five cops in Dallas, Richmond police sergeant Ernest Loucas assured KQED, the Bay Area public radio station, that Richmond's more positive police-community relationships would "keep officers safe and help them fight crime," as the reporter put it, even if "just putting on the uniform and driving around in a marked vehicle makes him a target in some neighborhoods."

By the summer of 2014, there had not been a fatal shooting by the Richmond police since 2007. Between 2008 and 2014, the RPD averaged less than one officer-involved shooting of any kind annually. 

When a small multiracial crowd gathered in front of Richmond city hall to protest police violence elsewhere and pray for the dead Dallas officers, Brown, like Magnus before him, dared to appear and address the concerns of those assembled. “We are proud of the gains that we’ve made, but that’s just today, right?” he said. “We are proud of the relationships that we’ve built up, but all relationships are based on trust and trust is fragile. And trust is an easy thing to break, so we don’t take it lightly.”

In Richmond, at least, seeing police chiefs on the same general side as anti-violence protestors is still commonplace. In too many other cities, the gulf between the public and the police remains an unhealthy reality, and signs of progress are harder to find.

How to Make Conservatism Great Again https://washingtonmonthly.com/2016/10/25/how-to-make-conservatism-great-again/ Wed, 26 Oct 2016 00:11:16 +0000 https://washingtonmonthly.com/?p=61164

To save their party from Trumpism, Republicans need to once again take on monopolists.

In the aftermath of Mitt Romney’s defeat in the last presidential election, the political press focused briefly on a network of conservative writers, most of them still in their thirties, who were challenging at least some of the orthodoxies of the Republican Party. “Reformish conservatives,” the Washington Monthly called them in one of the first articles to take note of this coterie. In a long and sympathetic group profile published a year later, the New York Times Magazine tagged them “reformicons,” and suggested that they might make the Republicans the “party of ideas.”

If there was any single theme that defined these would-be reformers, it was their insistence that the GOP needed to stop mindlessly following the agenda of the donor class and start focusing on the increasing economic insecurity facing the majority of working-class Americans. Long before Trump's capture of the Republican Party proved their point, two prominent reformicons, Ross Douthat and Reihan Salam, used the term "Sam's Club" voters to describe a demographic that accounted for a growing majority of the Republican Party's voters but was increasingly ill served by, and alienated from, the glib "free market" ideology peddled by the party's plutocratic elites.

In the pages of journals such as the National Review and National Affairs, many reformicons put forward specific and practical, if rather small-bore, policy proposals targeted at Sam’s Club voters. These included measures like “mobility grants,” which would provide workers struggling to make a living wage in Middle America with the money they needed to move to the thriving metropolises where the best-paying jobs were. Others, like Yuval Levin, whom the New Republic in 2013 called “the conservative movement’s great intellectual hope,” offered up gauzier, somewhat contradictory big-think formulations, like his proposal to promote “subsidiarity”—a ten-dollar word for the not-so-new idea of pushing government power as far out of Washington as possible and into the hands of local heartland communities.

Coming into this election cycle, it looked as if the reformicons might actually gain some real power and influence. They had a pipeline into the office of House Speaker Paul Ryan. They pitched their ideas to many of the 2016 GOP presidential candidates, including Scott Walker and Rick Perry. Jeb Bush and Marco Rubio actually ran on a number of reformicon ideas, along with more standard-issue pro-corporate-libertarian fare.

This was, of course, in those antediluvian days before the party was taken over by Donald Trump, whose xenophobia, authoritarianism, and policy cynicism the reformicons generally deplore but have been powerless to counter. Going forward, it is far from clear what role, if any, intellectuals of any stripe will play in the Republican Party. Yet it is still worth hoping that out of this crisis a new generation of conservative thinkers will emerge. If this election proves anything, it’s that America badly needs a conservatism that responsibly addresses the legitimate fears and resentments of working-class Americans who are falling behind. Healthy political debate also requires, now as always, a conservatism that sensibly challenges liberals and progressives when they fall into dogmatism and groupthink as, yes, sometimes happens.

What would such a conservatism look like? Let me make a humble suggestion. Going back to the Reagan-era fixation on cutting taxes and regulations isn’t going to wrest control of the GOP base away from Trump or Trumpism—not at a time when most Americans know instinctively that something has been fundamentally wrong with the economy for decades and distrust the stories told by elites about how it will all work out great for everyone in the end. Nor will grafting some kind of “compassionate conservatism” on top of that dead paradigm do the trick.

But there is a deeper, and by now nearly forgotten, tradition of “free market” conservatism that speaks directly to the major structural economic challenges faced by the country today. Better than that, it is a tradition that, as history shows, has broad potential appeal not only to those who think of themselves as principled conservatives, but also to many progressives—especially those, and there are many, who are alert to the many flaws of socialism. But to reconnect with this history, we must first break free of a false narrative about how we got here, one that has profoundly corrupted the meaning of terms like “free markets,” “regulation,” and “conservative.”

In his recent book, The Fractured Republic, Yuval Levin tells a story about the evolution of America’s political economy that is the received wisdom of most of today’s conservative intellectuals, including the reformicons. According to this story line, up until the late nineteenth century America had a highly decentralized, laissez-faire economy. But then, starting in the Progressive Era, the federal government began to grow more powerful, intrusive, and centralizing. Under Teddy Roosevelt and Woodrow Wilson, gigantic bureaucracies emerged that had the nominal purpose of taming the excesses of capitalism but that were really about concentrating economic power.

“Although these regulatory measures all took shape in response to the consolidating power of the industrial economy,” Levin writes, “they functioned not by pushing back against that aggregating tendency, but by further consolidating American society—in the process often reducing economic competition to increase government control over the economy and expanding the scope and scale of the state itself.”

Levin continues to recite the standard story line when it comes to the New Deal. “Most of the New Deal initiatives pursued by Franklin D. Roosevelt’s administration to combat the economic collapse amounted to crude, if well intentioned, cartel-building exercises,” he writes. “They were intended to protect incumbent businesses and workers while restraining production . . . and propping up prices. The result was a highly centralized economy characterized by an unprecedented degree of corporatism.”

Still following the standard line, Levin tells us next that the collusion of Big Government, Big Labor, and Big Business continued to consolidate the economy through the 1950s and ’60s. “At the core of the postwar economy, as of the prewar economy, was a corporatist, cartel-based approach to regulation,” he writes. “Its purpose was to stifle competition to help large, incumbent players and to maintain an artificial balance between powerful producer interests and powerful labor.”

Faithfully moving on to the next chapter of the standard narrative, Levin recites how in “the late 1970s, leaders in both parties began to recognize that the consolidation of the economy was itself part of the problem.” And then Levin brings us to the big inflection point, explaining how, after Reagan, “a nation of big, powerful institutions . . . [gave] way to a nation of smaller, more nimble players competing intensely in a highly dynamic, if therefore also less stable, economy.”

So powerfully entrenched is this story line that even many liberals believe large parts of it. Across the political spectrum, received wisdom holds that beginning in the 1980s and continuing today, “creative destruction” has been “disrupting incumbents” and creating a far more competitive economy. When liberals tell the story, they usually don’t dispute that American society has become “market driven” over the last forty years. They just bemoan the increase in inequality and economic insecurity, which they see as a consequence of letting markets rule.

But there's a big problem with letting your worldview and policy prescriptions be informed by this story: it's mostly false. For example, its claim that cartels and economic concentration increased in the decades leading up to Reagan's election is demonstrably and importantly untrue. Standard economic statistics show that the opposite happened. As the government grew, cartels lost power and markets became more competitive, even in the face of technological trends that pushed strongly in the opposite direction.

The standard story line is also demonstrably false in its claim that market competition has increased since the Reagan revolution rolled back government management of the economy. Nominal “deregulation” and “privatization” have instead brought well-documented levels of economic concentration and outright monopolization not seen since the Gilded Age. Getting this history right, and reconnecting with the real tradition of free market conservatism, is the first step in constructing a reformed platform that can stand up to the forces of Trumpism.

The big piece the standard story line leaves out is the anti-monopoly movement. Most people today, if they are aware of this movement at all, associate it with turn-of-the-last-century figures such as Teddy Roosevelt—the “trust buster” who, as American schoolchildren are still sometimes taught, struck a blow for the little guy by breaking up John D. Rockefeller’s Standard Oil Company. Others might recall reading about the Sherman Anti-Trust Act, passed by Congress almost unanimously and signed into law by President Benjamin Harrison, a Republican, back in the “gay old ’90s.”

But the anti-monopoly movement doesn’t just belong to the days of barbershop quartets and bicycles built for two. It inserted itself into the heart of the New Deal and actually reached its zenith of power in the 1960s. By that time, it was at once so institutionalized and so successful in de-concentrating the American economy that it barely figured in political discourse. You might say it was so successful that it wound up being written out of history.

Here’s a prime example of the movement’s forgotten legacy. It is true, just as Levin and the standard story line claim, that in the early years of the New Deal the federal government implemented policies that directly and purposefully limited market competition. The particular vehicle was the notorious National Industrial Recovery Act (NIRA). Under this early New Deal legislation, the government suspended all anti-monopoly enforcement against companies that promised to adopt minimum wages, establish minimum prices, and work closely with competitors and “code authorities” in government-sponsored cartels. The central idea was a carry-over from the Hoover administration, which had encouraged the growth of colluding business associations under the theory that this would lead to less ruinous competition and therefore to fewer bankruptcies and job losses.

But the standard line leaves out what happened next. In 1935, the Supreme Court struck down key provisions of NIRA as unconstitutional. And with that decision, the New Deal completely changed course. Rather than trying to suppress market competition, public policy shifted aggressively and successfully for the next four decades to making markets more open and competitive.

It is hard to exaggerate the scale on which the federal government, from the late 1930s through the end of the ’70s, used antitrust and other measures to bust up cartels and monopolies and to foster competition in concentrated markets. Declaring that the average citizen “must have equal opportunity in the market place,” Franklin Roosevelt began the process by ramping up the staff of the Antitrust Division of the Department of Justice, from just eighteen in 1933 to nearly 500 by 1943.

By February 1941, the DOJ was prosecuting ninety different antitrust cases, involving 2,909 defendants, and had thirty additional grand juries authorized or in progress. Among the results were consent decrees that, for example, forced General Electric to license the patents on its light bulbs, and, perhaps most consequentially, required AT&T to share its transistor technology, which became the basis for the digital revolution. (See “The Real Enemy of Unions,” Washington Monthly, May/June 2011; “Estates of Mind,” Washington Monthly, July/August 2013.)

Roosevelt framed the fight on concentrated economic power as necessary to save democracy from dictatorship. “The liberty of a democracy is not safe,” Roosevelt warned Congress in 1938, “if the people tolerate the growth of private power to a point where it becomes stronger than their democratic state itself. That, in its essence, is Fascism—ownership of Government by an individual, by a group, or by any other controlling private power.”

In the postwar era, successive presidents and Congresses continued to turn up the dial on antitrust enforcement, convinced that monopoly was a threat not only to democracy but also to the market competition on which democratic capitalism depends. In 1950, for instance, Congress passed one of its strongest anti-monopoly laws, the Celler-Kefauver Act, which prohibited companies from acquiring each other’s assets, such as patents, trademarks, and copyrights, as a means of stifling competition.

Estes Kefauver, a senator from Tennessee, described the goals of the bill using a then-common formulation that linked antitrust enforcement to checking the growth not only of Big Business, but also of Big Government and Big Labor: “The concentration of great economic power in a few corporations necessarily leads to the formation of large nation-wide unions. The development of the two necessarily [leads] to big bureaus in the Government to deal with them.”

Given the conceptual boxes we live in today, it may be surprising to learn that such formulations were common not only among liberals like FDR and Kefauver, but also, and especially, among libertarians and advocates of limited government. A prime example is Henry Simons, author of such titles as “A Positive Program for Laissez-Faire,” and a progenitor of the modern libertarian movement.

For Simons and his many admirers, whose ranks included Friedrich Hayek and the young Milton Friedman, as well as other highly influential conservatives clustered around the University of Chicago's economics department in the 1940s and '50s, it was axiomatic that Big Business threatened freedom as much as Big Government did. "The great enemy of democracy is monopoly in all its forms," Simons wrote: "gigantic corporations, trade associations and other agencies for price control, trade-unions—or, in general, organization and concentration of power within functional classes."

Shop around: Federal antitrust enforcement from the New Deal until the Reagan administration helped independent businesses—like the ones on this Montana main street in the 1950s—flourish.

It was also axiomatic that anti-monopoly policies were essential to keeping both Big Business and Big Government in check. In mid-century America, the rising generation of libertarian intellectuals—men like Friedman, George Stigler, Aaron Director, and Jacob Viner—were far from the corporate apologists that some of them would later become. Instead, as Robert Van Horn and other historians of the movement have shown, the leading libertarian thinkers of this era opposed bigness in all its forms and accordingly advocated for such measures as limiting the size of corporations, rolling back patent monopolies, eliminating interlocking directorates, and even, in Simons’s case, heavily taxing corporate advertising.

Today, this framing may seem paradoxical in large part because of the false history we have been told, which holds that a “free market” is a market in which government plays no role. But it wasn’t paradoxical at all to Simons and other leading libertarian intellectuals in the postwar period who were trying to build a conservative movement free from corporate backing and control.

For them, true laissez-faire required that government have a “positive program” of antitrust and other policies to keep monopolists from seizing markets and destroying their efficiency. The only alternative was an expanding regulatory state that would inevitably lead to some form of socialism or fascism. The best approach, argued Simons and other libertarians at the time, was a “night watchman state” that enforces contracts and guards against explicit theft, importantly including the theft that inevitably results when monopolists set prices and control production rather than free and open markets.

As the economist J. Bradford DeLong has pointed out, Simons and other free market conservatives of the era were not so naive as to think that market forces would prevent or even tame monopolies. They knew that exposure to real competition causes most business owners and managers to want to avoid it at all costs. Had not Adam Smith himself famously written in the Wealth of Nations that “[p]eople of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices”? For mid-century advocates of limited government, as for Smith, it was an obvious empirical truth that markets tend toward monopoly in the absence of some countervailing force.

Reflecting the consensus among liberals and conservatives on the importance of checking corporate power, the Eisenhower administration continued the aggressive use of antitrust legislation to foster market competition. Eisenhower's Department of Justice actively enforced the Celler-Kefauver Act while prosecuting such goliaths as Kodak and the Continental Can Company, then headed by Ike's friend and comrade in arms, retired General Lucius Clay. Some liberals complained that Ike wasn't aggressive enough. But largely because of his antitrust and other competition policies, there was no general increase in concentration during his time in office. The number of mergers taking place in the 1950s remained flat, while the rate of new business formation surged. And, in most sectors of the economy, data on market shares shows that effective competition either remained level or, in many markets, increased substantially.

If you want to put a face on those statistics, think of all the independently owned hardware stores, family diners, auto-repair shops, drugstores, and other small- and medium-sized businesses that sprang up across America’s vastly expanding postwar suburbs before the coming of the malls and Walmart. Or think of the growing ranks of regional home builders, community banks, light manufacturers, and other mid-sized firms that came into being, especially across dynamic metro Sunbelt areas like Atlanta, Dallas, and Los Angeles. For better or worse, this was an era of hyper-localism compared to our own, and it was brought to America largely by anti-monopoly policy.

Adding to the ways in which public policy restrained concentration in this era were financial regulations that limited the opportunities for leveraged buyouts and that prevented communities from losing local ownership of their banks, thrifts, and savings and loan associations. Also important were fair trade laws that prevented, for example, the emergence of a giant trading company on the scale of today’s Walmart, let alone platform monopolies like the one Amazon is rapidly becoming. In the expanding postwar suburbs, as in the inner cities, most retailing remained under the control of locally or at least regionally owned businesses. Concurrently, inequality of income between the country’s heartland and its elite cities like New York and Boston faded as government policies continued to target monopolies and thereby ensured broad geographic distribution of economic power. (See “Bloom or Bust,” Washington Monthly, November/December 2015.)

By the 1960s, government measures to prevent economic concentration reached a crescendo. The Supreme Court, for example, blocked a merger that would have given a single distributor control over a mere 2 percent of the nation’s shoe outlets. Joining the antitrust actions of the Justice Department in the 1960s was a highly aggressive Federal Trade Commission that relentlessly targeted mergers like Procter & Gamble’s attempted takeover of Clorox. Meanwhile, exploding numbers of antitrust suits brought by private plaintiffs added to the strong deterrent against the kind of wheeling and dealing in mergers and acquisitions that would turn Wall Street into a casino in the 1980s and lead to the dominance of concentrated financial power over most of the economy.

Reflecting the broad conservative consensus supporting rigorous antitrust enforcement, Gordon Tullock, then a professor of economics at Rice University, published a highly influential paper in 1967 in which he likened monopoly to theft and antitrust enforcement to crime prevention. “As a successful theft will stimulate other thieves to greater industry and require greater investment in protective measures,” wrote Tullock, “so each successful establishment of a monopoly . . . will stimulate greater diversions of resources to organize further transfers of income.” The paper laid the foundation for what later became the “public choice” school of conservative thought, using monopoly as a paradigm of what has come to be known as “rent seeking,” or behavior that subtracts from society’s wealth by manipulating and abusing market rules.

At the time, this brand of conservative thinking stood in opposition to certain thought leaders who were emerging on the left, most notably the Harvard economist and public intellectual John Kenneth Galbraith. While still a small minority within the Democratic Party, Galbraith and like-minded liberals were peddling a competing vision under which Big Business would be contained by “the countervailing power” (as Galbraith put it) of Big Government and Big Labor, making antitrust, in the telling, quaint and unnecessary.

Coming into the 1970s, however, these new liberal voices had no effect on competition policy, while mainstream conservatives continued to see anti-monopoly enforcement as a bulwark against an expanding regulatory state. Accordingly, the long, slow postwar trend away from concentration and toward increased competition continued. In 1947, the share of the market controlled by the top four firms in producer industries was 45 percent. By 1972, that number had declined to 43 percent. An exception to the general trend occurred in consumer goods manufacturing, where, studies show, the growth of television advertising led to a modest increase in concentration in this sector despite the strict enforcement policy against mergers during this period. Relentless TV spots may have made American children in the 1960s and ’70s “cuckoo for Cocoa Puffs,” but at least antitrust legislation prevented cereal makers from combining into the monoliths that dominate today’s global food markets.

By the time Reagan was first elected president, in 1980, most markets had been becoming more and more competitive for decades. Economists use standard measures and definitions to express how concentrated or competitive any given market is. If a single company controls 100 percent of a market, it is, of course, a total monopoly. If three or four firms split a market, that’s an oligopoly. Effective competition usually emerges only when the top four firms in a market control less than 40 percent of sales. Using these definitions, the share of the service sector subject to effective competition shot up during the era between the New Deal and the election of Reagan, from 54 percent to 78 percent. In the wholesale and retail trade sector, the rise was even more dramatic, from 58 percent to an astounding 93 percent.
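For readers who want the arithmetic behind those labels, here is a minimal sketch in Python of the four-firm concentration ratio those definitions imply; the 40 percent cutoff comes from the description above, while the function names and example market shares are hypothetical.

```python
def cr4(market_shares):
    """Four-firm concentration ratio: the combined sales share of the four
    largest firms, with shares given as percentages summing to about 100."""
    return sum(sorted(market_shares, reverse=True)[:4])

def classify(market_shares):
    """Rough classification using the thresholds described in the text."""
    top4 = cr4(market_shares)
    if max(market_shares) == 100:
        return "total monopoly"
    if top4 >= 40:
        return f"concentrated (CR4 = {top4:.0f}%)"
    return f"effectively competitive (CR4 = {top4:.0f}%)"

# Hypothetical markets: one resembling the 45 percent CR4 the article cites
# for producer industries in 1947, and one split among many smaller firms.
print(classify([20, 12, 8, 5] + [5] * 11))   # concentrated (CR4 = 45%)
print(classify([9, 8, 7, 6] + [5] * 14))     # effectively competitive (CR4 = 30%)
```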

If this causes some cognitive dissonance, it should. After all, not only does it contradict the received wisdom that expanding government stifles competition; this decline in concentration also occurred at a time when all the emerging technologies of the era were pushing in the opposite direction.

Just think about it. New and expanding communication networks like television and radio made it far easier for advertisers to build dominant national brands that could wipe out local and niche competitors. Rapidly expanding automobile ownership and interstate highways collapsed distances and thereby threatened to disrupt and displace many small businesses, from neighborhood grocers to local craft producers. New materials like plastics and aluminum required expensive plants that had to operate at very high volumes to be economical. The rapidly expanding airline industry had network effects that then, as now, favored large carriers over smaller ones. One of the most popular ideas among social commentators in the 1950s was that America was becoming a “mass” society.

Yet in most markets competition was increasing and monopoly was fading. In short, the evidence clearly shows that the master narrative we have all heard about how more government regulation led to more concentration under the post–New Deal order is wrong. Instead, rigorous antitrust and other competition policies, supported by liberals and conservatives alike, played a significant role in either reversing market concentration or preventing it from getting worse. But that’s hardly the only part of the master narrative that turns out to be false.

In the next chapter of this folktale, America supposedly becomes a much more “market-driven” society. We’re told that the “deregulation” of airlines, railroads, banks, and other key industries that began under Jimmy Carter and vastly accelerated under Reagan produced a burst of “creative destruction” that “disrupted incumbents” and caused the economy to become much more competitive and dynamic.

One part of this is right. A revolutionary break in competition policy did occur. This included new guidelines adopted in 1982 by Reagan’s Justice Department that led to a wholesale retreat from antitrust enforcement except in rare cases of provable and brazen collusion to fix prices. Meanwhile, under Reagan and the next three administrations, the federal government abandoned many of the policies and regulations that it had previously used to shape and structure markets.

This change reflected in part the rising influence of liberals like Galbraith, as well as the arrival of a new generation of “New Democrats” like the MIT economist Lester Thurow. But it also reflected an abrupt change in the thinking of many self-styled advocates of “free markets,” who went from being opponents of economic concentration to defenders of monopoly. Starting in the 1970s, figures like Robert Bork and others associated with the “law and economics movement,” as it came to be known, started arguing that antitrust enforcement was an example of unnecessary and inefficient government intervention in the economy, rather than a precondition for keeping markets open and competitive, as previous generations of market conservatives had understood.

Just how and why this shift happened is a story for another time. If we are not in a generous mood, we might conclude that professional libertarians wound up getting captured by the corporations and plutocrats whose favor they needed to fund their think tanks and academic departments. If we are inclined to be more generous, we can allow that it is only with the benefit of three-plus decades of hindsight that we now know beyond a doubt that monopolies are not automatically disrupted by creative destruction.

Either way, it’s clear what the demise of the anti-monopoly movement has wrought. Since the early 1980s most markets have been becoming less competitive, not more. Even the Economist now concedes, as its editors wrote in a recent headline, that “[t]he rise of the corporate colossus threatens both competition and the legitimacy of business.” In virtually every sector of the economy, from coffins to credit cards, airlines to hospitals, markets are increasingly controlled by just a handful of dominant firms earning unprecedented profits. Even in—or, better put, especially in—a highly networked, globalized age of big data controlled by big corporations, the threat of monopoly looms far larger than in the age of steam.

Along with the rise in concentration has come a decline in entrepreneurship as more and more markets become closed to new competitors. As this magazine first reported in 2012, and as many others have since confirmed, the increasing dominance of large firms has caused the number of new businesses per capita to shrink far below what it was in the supposedly stagnant 1970s. (See “The Slow-Motion Collapse of American Entrepreneurship,” Washington Monthly, July/August 2012.)

Many people have trouble accepting the reality of this tidal shift because it contradicts their received ideas about how the world works. But it makes sense once you escape cognitive capture by the master narrative.

True free market thinkers, from Adam Smith to Henry Simons and the founding fathers of the Chicago School, could easily tell us what is wrong with the way we have come to conceptualize the relationship between government rule and market rule. So could populist, anticommunist Democrats like Kefauver, Wright Patman, and Lyndon Johnson, who dominated the party before the 1970s. In monopolized markets, they would tell us, “market forces” don’t determine outcomes; monopolists do. From this it follows that concentrated markets are never “deregulated.” Instead, they are regulated by monopolists who are as much a threat to liberty and free enterprise as any bloated socialist or communist bureaucracy.

And monopolists will come to control more and more markets unless government checks that outcome. Truly free, or laissez-faire, markets, meaning the kind that have even the potential to allocate resources efficiently, only exist when, as Simons put it, government adopts a “positive program” to prevent their monopolization. Or, as Tullock said in the 1960s, abuse by monopolies will increase just as other forms of property crime will increase in the absence of an effective police force.

As we face the crisis that the rise of Trumpism has revealed, what matters now is that we learn the actual lesson of the past: markets work their magic only when public policy prevents their capture by monopolists. This means that reform conservatives need to apply their talents to figuring out how to make a “positive program” of anti-monopoly work again, rather than repeating false narratives about how markets work only when government doesn’t.

For many, the road back to true market conservatism will require revisiting the concept of rent seeking. Many policy intellectuals today are focused on onerous rules and regulations that have the effect of increasing barriers to entry within specific occupations; licensing requirements for dog groomers and yoga instructors are favorite examples. Others focus on the cartelization of higher education and medicine that comes with the abuse of credentialing and accreditation systems. Why do you need the permission of established institutions of higher learning to start a university? Why can’t nurse practitioners compete with primary care doctors?

Still others focus on how exclusionary zoning can be used to drive up the cost of housing by stifling competition from new construction, or on how taxi regulation leads to fewer, higher-priced taxis. These are important lines of inquiry, and they illustrate how bad regulation can lead to concentration and monopoly rents in specific markets.

But the implication is not that monopoly rents will disappear in the absence of government. Indeed, the experience of the last forty years has shown that in the absence of effective anti-monopoly policy, concentration will occur on a scale that is many orders of magnitude larger, and of far more macroeconomic consequence, than any trends in local zoning or occupational licensing requirements.

Yes, governments can and have pursued ill-considered anti-monopoly policies. The Nixon White House used the threat of antitrust suits to shake down corporations like ITT for campaign contributions. But again, the implication is not that anti-monopoly policy inevitably degenerates into rent seeking, any more than instances of police corruption or incompetence prove we don’t need police. The implication is that we need smart and effective policies to preserve competition, and all the more so if we want true market-driven solutions to more of society’s problems.

The real trade-off, as the jurist Louis Brandeis once put it, is between regulating competition with a minimal government and regulating monopoly with a big government. The choice shouldn’t be hard, especially for the rising generation of conservatives who have come of age since the Great Recession. When banks become “too big to fail,” that leads to more regulations and bureaucracy to limit the systemic risks these giants pose (see Dodd-Frank). This growing regulatory burden, in turn, cripples smaller banks and prevents the formation of new banks. It’s better to not let banks get too big to fail in the first place.

I would suggest, too, that we reflect on how anti-monopoly policies connect with conservative thinking about the importance of families, local communities, and self-governance, or what Yuval Levin calls "mediating layers of society." Historically, this connection was obvious to conservatives. The eighteenth-century thinker Edmund Burke, whom Levin celebrates, denounced the monopolies of his own time, among them the British East India Company, as a violation of the "Natural Right" of people to supply each other's needs as they saw fit. America's Founding Fathers—most notably those who, like Thomas Jefferson and James Madison, were exceptionally suspicious of concentrated power—believed that civic virtue presupposed the widespread distribution of property, and that "checks and balances" extended to checking monopoly in all its forms.

Today, many conservatives rightly focus on the importance of cohesive families rooted in strong local communities that are supported by civic institutions like churches and fraternal organizations. And many, such as Charles Murray, as well as many younger reformicons, point to the breakdown of these mediating institutions as a prime cause of the spreading hardships suffered by working-class blacks and whites. But few of them see how the loss of social capital in many inner-city and heartland communities is at least in part a consequence of the wave of corporate consolidations that have stripped local communities of the locally owned banks and other businesses that were once the pillars of their civic life. Lest there be any doubt about the relationship, abundant empirical studies show that levels of charitable giving and civic engagement decline when local companies are bought out by absentee owners. (See “The Real Reason Middle America Should Be Angry,” Washington Monthly, March/April/May 2016.)

Here is another consideration. While thinkers like Levin can advocate all they want for decentralizing political and governmental power to states and localities, liberals will never let that happen. But liberals could well become partners with conservatives in decentralizing economic power. Already in the Senate we see progressive firebrands like Elizabeth Warren and Tea Party champions like Mike Lee talking along remarkably similar lines about the dangers of economic concentration.

Championing stronger antitrust laws will require conservatives to challenge many of their wealthiest and most powerful individual and corporate backers, the kind of people and companies that thrive in the current rent-seeking environment. But if Donald Trump has taught them anything, it's that there's less of a political price to pay for that than they thought.

Conservatives desperate to find an agenda that appeals to base Republican voters and that is different from, and better than, the racist Know-Nothingism of Donald Trump should also consider how the geography of monopoly links up with the political map. The dozen or two thriving metropolises that are hogging all the growth and gobbling up all the corporate ownership are solid blue and mostly on the coasts. It’s the mostly red state flyover country that’s getting the shaft. Republicans may think of pro-competition, anti-monopoly politics as leftist and progressive, but not only is that historically inaccurate, it is also their own (furious, desperate) voters who would most likely rally to, and benefit from, such a politics. The commercials almost write themselves: “It’s time to stop the liberal monopolists in San Francisco and New York from stealing our jobs and robbing our future!”

As longtime readers of the Washington Monthly know, this magazine has been making the case for strengthening antitrust laws for well over a decade. A number of leading progressive intellectuals have joined the crusade, from New York Times columnist Paul Krugman to Council of Economic Advisers chairman Jason Furman. So too have a few leading Democratic politicians, including, just recently, Hillary Clinton. Still, there is great resistance to these ideas within the Democratic Party, especially from the tech giants that underwrite Democratic electoral campaigns and benefit mightily from monopoly control of their markets. It is far from certain that progressives can overcome this resistance on their own.

The best hope for a revival of anti-monopoly, then, is for conservatives to awaken to its promise, and to challenge progressives on that turf. As I’ve tried to argue here, there are profound conservative reasons for them to do so.

Higher Red https://washingtonmonthly.com/2016/10/25/higher-red/ Wed, 26 Oct 2016 00:10:40 +0000 https://washingtonmonthly.com/?p=61200

Why China’s universities may never make the grade as world-class institutions.

Every March, thousands of representatives from around China descend on Beijing for annual government meetings to lay out directions for the coming year. The so-called “Two Meetings” for the legislature and the political consultative committee—part policy, part show—offer delegates a chance to raise their concerns about what needs fixing and how to do it. In March 2010, Beijing was abuzz with talk about universities: weeks earlier, the Ministry of Education had released a draft of an education reform plan for the next decade that would get rid of “the ways in which schools are run like government appendages.” Key to this change was a policy of “de-administration” (qu xingzhenghua) that would limit the power of government-affiliated administrators in universities. If implemented, it would reorient China’s top schools toward the model of the university as an independent ivory tower, an ideal that liberal academics had long hoped to realize in China.

The impetus for change was not confined to academics sitting on campus; the country’s top leadership voiced their support as well. China’s higher education system was still weighed down by remnants of the Soviet era, when the central government refashioned universities as specialized technical institutes under tight central control. More autonomy, leaders hoped, could propel China’s top schools into the upper echelon of global universities alongside the Harvards, Stanfords, and Oxfords of the world—able to command greater international respect and train future innovators to drive China’s economy past the middle-income trap. The de-administration of universities would allow China to compete in the twenty-first century, in terms of both higher education and economic growth.

Zhu Qingshi, the former president of the prestigious University of Science and Technology of China, led the charge for de-administration at the 2010 meetings. A respected chemist and advocate of higher education reform, Zhu had just signed on to lead the newly formed Southern University of Science and Technology (SUSTech) in Shenzhen. Channeling the innovative spirit of a city that had grown from a small fishing village to a global manufacturing powerhouse in mere decades, SUSTech sought to be China's first professor-led, bureaucracy-free university. Administrators would not be given government-level rankings, as they are at other universities.

To Zhu, changing this structure was a key to a better higher education system. “Many professors now pursue bureaucratic rank instead of academic excellence,” Zhu told the journal Science in 2009. “If you attain a high rank, you get money, a car, research funding. This is why Chinese universities have lost vitality.”

Zhu and the reformers in Shenzhen envisioned a Chinese university more like the California Institute of Technology than Tsinghua University or Peking University, the top universities in China. Frustrated with Chinese universities’ inability to cultivate enough talented, innovative graduates, Zhu wanted China’s leading universities to bring together the synergies of teaching and research. Within ten years, Zhu said, SUSTech’s new model of Chinese higher education would position it among the top universities in Asia.

It was a bold plan, and Zhu charged ahead. Rather than gradually push the envelope of reform, Zhu quickly instituted structural changes that went further than any other top university in China. He wanted to bypass the national college entrance exam and allocate the authority to issue diplomas to the school itself, rather than central government agencies. The Ministry of Education, unwilling to accept this ambitious slate of changes all at once, balked. The forty-five students in SUSTech’s first class, which did not take the entrance exam, were not awarded official diplomas by the Ministry of Education. The scope of reform was scaled back. Zhu retired a few years later, his goal of major higher education reform unfulfilled; he ducked out of the spotlight to live a quiet retirement in his old hometown, where he spends his time studying the philosophy of nature.

In the more than twenty years since former President Jiang Zemin launched a campaign to build 100 “world-class universities” in China by the end of the twenty-first century, China’s central government has poured funding into the country’s leading universities. Policymakers allocate extra resources to a group of the country’s top schools, and give even larger amounts to a more select group of the most elite institutions.

This trickle-down approach to higher education has been a resounding success—at least in terms of creating schools that perform well in international rankings. China’s top schools have bounded up the global rankings, particularly in science and engineering. Twenty-four universities in China were listed in the top 500 of the QS World University Rankings in 2015–16, and U.S. News & World Report named Tsinghua the top engineering school in the world in 2015, dethroning MIT. These top institutions, as one higher education researcher said casually, “are putting in a lot of money and getting some pretty good results.”

Yet in dozens of interviews with professors, students, and administrators within China’s universities, the tone is far less positive. China’s research and teaching quality are improving, but much more slowly than the rankings suggest. Nearly anyone who has worked or studied in a Chinese university, even the most well-funded flagship schools, will tell you that the quality of the education still trails top foreign schools by a wide margin. Those who can afford an education abroad vote with their feet: more than 300,000 Chinese students studied in the United States last year.

Most foreign observers place the blame on worn-out tropes, such as limited academic freedom or the inability of students to think critically due to the rigid teach-to-the-test education in middle and high schools. These criticisms are at best tangential, if not flat-out wrong. Few, if any, Chinese academics or foreign scholars in China say that direct government censorship is among the biggest problems facing China’s higher education system. And recent research from Stanford found that Chinese students entering college actually have better critical-thinking skills than their peers in the United States. The advantage dissipates, however, during the college years—suggesting that the problem lies within the higher education system, not outside of it.

Officials acknowledge that the power of school administrators is at the root of the problems that stymie the creation of a better university system. “This bureaucratic structure runs counter to the rules of academic development,” Wang Feng, of the Ministry of Education’s Educational Development Research Center, told People’s Daily late last year. At the 2010 Two Meetings, then Prime Minister Wen Jiabao said, “There is a notable gap between domestic and foreign universities: foreign administrators serve academic ends, and professors have a powerful voice; but domestically, there is too much administrative control over academics.” Delegates have raised the topic of de-administration at the Two Meetings every year since Zhu and others brought it to the fore in 2010; the newly appointed president of Peking University used this year’s gathering to call yet again for more de-administration.

Yet more than two-thirds of the way through the Ministry of Education’s decade-long plan, de-administration efforts have failed to make significant progress. Funding has increased, and teachers’ qualifications have improved, but no school has managed to eliminate the system of government rank for university administrators, and only a handful of schools have managed to even move toward an open selection process for the school president. “In terms of schools themselves, government management style and school functional structure, administrative power has actually gotten even stronger,” says Xiong Bingqi of the 21st Century Education Research Institute.

Education reform, like numerous other policy areas in China, is hemmed in by intractable political and institutional constraints. After all, the control provided by the administrative structure has allowed China's universities to expand and gain prestige while keeping political fears of campus dissent in check. It is these constraints that set China's problems apart from those in the United States, and that render simple, imported solutions unworkable.

The process of moving back to China was not an easy one for Bai Tongdong, a scholar of Confucian philosophy. In 2010, Bai gave up a tenured philosophy position at Xavier University in Ohio to join the faculty at Shanghai's elite Fudan University, where he would have the opportunity to train China's best and brightest young minds.

At Xavier, the Philosophy Department shared administrative services with another department; at Fudan, the department had its own secretaries. Bai was excited about the possibility of more institutional support for his research and teaching. He walked over to the departmental office to make copies of the syllabus for his first course, but the secretaries told him he wasn’t allowed to use the copy machine in the department. If he wanted to make copies of his syllabus, the department said, he needed to go outside, find a copy shop, and pay for the copies himself. When a friend asked what academic work he did in the six months after he arrived in China, he replied, “Filled out forms.”

All universities around the world, even leading American schools, are constantly struggling to stem the tide of over-administration. But the administration that suffocates China’s universities is fundamentally different from that in the United States. The balance of power in American universities has shifted in favor of ever-increasing numbers of administrative staff, leading to out-of-control costs and a smaller share of resources dedicated to academics. But faculty members in the United States retain the power to set priorities within their universities and departments. The job of administrators is to support research and teaching, and well-known faculty members still command the highest respect on campus.

In China, however, over-administration is a question of political power. Administrators have political standing on par with government officials under the strict, status-driven hierarchy of the nomenklatura system. Under this system, a departmental dean at a national Chinese university like Tsinghua enjoys the same political rank as a county-level governor, while the president of the school is equivalent to a vice minister in the central government. It is this system that skews incentives in Chinese academia, as one young lecturer at Renmin University of China, or People’s University, explained. If Professor A is a renowned scholar, and Professor B is the departmental dean, Professor B will always have higher status because he is an administrator. Anything Professor A wants to do—obtain research funding, invite guest lecturers, offer new classes, get reimbursed for expenses—must be approved by Professor B or multiple other levels of government-ranked administrators, deans, or directors, who may have only a tenuous connection to academic teaching and research. Professor A will have little power, money, or autonomy.

“Normally, administrators are supposed to serve teachers and researchers,” said Qiao Mu, an outspoken journalism professor at Beijing Foreign Studies University. “In China, it is the other way around. You serve them.” Professors who have visited libraries at universities in the United States are caught off guard when the librarians try to be helpful. As one lecturer at an elite university in Beijing told me, “In China, if you ask a librarian for help, they will often think it is outside of their responsibility.”

As a result, the ultimate goal for many academics is not to do top-quality research, as Zhu Qingshi argued in 2010, but to find a way to become an administrator. This creates fierce competition for a few administrative positions, at the cost of academic research and teaching. “If a research project reduces the chance of becoming an administrator, such as needing to take a year to do fieldwork somewhere, then you won’t do it,” said Daniel Bell, a philosophy professor at Tsinghua.

At top schools, academics have plenty of money available for research that is in line with the government agenda. But there is little time to actually do it, professors say, given that they are the ones also supporting administrators’ needs. Professors have to organize and file reimbursements themselves, an extraordinarily complicated and time-consuming process; the reimbursement rules have become so strict under the government’s anti-corruption campaign that some professors have simply given up on getting paid back. Younger faculty I spoke to feel an incredible pressure simply to stay afloat, given all of the responsibilities they have in addition to academic work. Only after spending a full day doing other work, such as supervising groups of students, welcoming foreign professors, and, of course, filing reimbursements, will a young scholar start to concentrate on doing research.

The administrative structure in China’s universities has also had a deleterious effect on teaching. What Bai, the philosophy professor at Fudan, took away from his time in the U.S. was the importance of short assignments and essays to build skills and grasp small, discrete ideas. But in China, teaching assistants are a rare luxury; professors try not to assign any shorter papers, essays, or tests besides the final exam or paper because they have no time to grade them or give feedback. “There’s no training on academic writing, or how to actually construct an argument,” a former student at China Agricultural University told me. “You submit a paper, they give you a grade, and that’s the end. There’s no accumulation of knowledge.”

Not that students want more work; when a student at Renmin University told me that she takes up to twenty classes per semester, I was shocked. She told me not to worry: sitting in class from eight a.m. to nine p.m. every day was doable because there wasn’t much homework. Professors know the students have no time to do it, and the students know the professors have no time to grade it.

The problematic administrative structure is not without its benefits, however: there is no danger that China’s schools will face the over-administration problem that plagues universities in the United States. The rigidity of the Chinese civil service system limits the number of administrators any university can have, thereby warding off administrative bloat and rising nonacademic costs. “All this money has flowed into Chinese universities, and there’s been a gigantic expansion of students, teaching staff, and postdocs—but not of administrators,” said Elizabeth Perry, a political scientist at Harvard. Those who do achieve a coveted administrative position end up with both power and status, but the number of administrators cannot endlessly proliferate because the number of civil servants—like tuition levels—is set from above by the state.

With little to show for their efforts thus far, central agencies are again rolling out policies to improve the higher education system. In November 2015, officials announced a “World Class 2.0” plan that called for making China a global education powerhouse by 2050. Plans are under way to retool the hierarchical system that funnels vast quantities of special funds to the most elite schools, like Tsinghua, Fudan, and Peking University; the existing policies have led to massive funding gaps between the few elites and the rest. At this year’s Two Meetings, the former president of Guizhou University caused controversy when he said that the past thirty years of government funding at the school amounted to less than what Tsinghua receives in three months—despite Guizhou University’s being part of the original world-class university campaign.

In July, the government announced that it was exploring the elimination of the civil servant pay system at hospitals and universities, which would pave the way for granting true autonomy to universities and faculty. Yet further progress is likely to remain slow, as reform faces off against deeply vested interests. Most importantly, the people responsible for changing the system are the administrators who benefit from it. To put academics ahead of administrators would be to upend the power structure that forms the core of the university project. “It is equivalent to requesting that administrative departments decapitate their own authority and role,” said Xiong of the 21st Century Education Research Institute. “Such a reform gets caught in a paradox: we demand that they give up power, but that power is their own.”

Part of the problem is that administrative status is connected to China’s overall governance structure. “Factories have been largely privatized, but universities—like military institutions—are the places the state has the strongest control,” explained Harvard’s Perry. “The fact that the top leaders are part of the nomenklatura system and that university presidents are given vice ministerial rank gives them a sense that they are part of the establishment. And particularly in a cultural context such as China, where there is this very old tradition of meshing academic success with government service, that combination is very powerful.”

Nor can Communist Party leaders overlook the potential political consequences of granting universities and faculty true autonomy. University campuses around the world serve as hotbeds of protest and social dissent; in China, university students were the spark for the protests that culminated in the brutal crackdown in Beijing’s Tiananmen Square on June 4, 1989. Since then, university campuses have been extremely quiet; the only protests organized on college campuses have been those for pro-nationalist causes, often encouraged by officials.

The current administrative structure gives leaders in Beijing greater control over the most influential universities and more tools for intervening in them. Separating schools from the central administrative structure could jeopardize this control, making deep reform a political nonstarter. Even the policies that expanded the higher education system were first and foremost about regime stability: political scientist Wang Qinghua notes that the driving force behind the expansion of China’s higher education in the late 1990s was the palliative effect that increased enrollments could have on unemployment in the aftermath of the Asian financial crisis. The education policy’s “side effects on higher education were of secondary importance when the Party considered that its rule was threatened,” wrote Wang in a 2014 paper.

The quality of Chinese higher education still trails top foreign schools by a wide margin, which is why so many Chinese students vote with their feet. More than 300,000 Chinese students studied in the United States last year.

A slowing economy—and thus a greater potential for social unrest—further diminishes the prospects of reform. Even nationalist protests, often encouraged, are now too dangerous. Immediately before an international tribunal ruled against China over its territorial claims in the South China Sea this summer, university officials in Beijing released a preemptive note demanding no protests or assemblies, even though the protestors would have been supporting the government’s position.

What is left is a series of small tweaks to the existing structure. Despite the cavalcade of new, seemingly useful policies, changes that would move the university away from being a danwei, or government service agency—what both Zhu Qingshi and the education reform plan vowed to do—have not materialized.

For some, this eliminates the possibility that Chinese schools can ever reach the top echelon of the global education elite. “Chinese universities cannot be world-class universities, because they are danwei first and university second,” a professor at Peking University told me. But others are less sure. “Higher education is never politically neutral,” says Michael Gow, a researcher of Chinese higher education at Xi’an Jiaotong-Liverpool University. Few, if any, scholars believe that Chinese higher education should directly mimic American schools; articulating the Chinese alternative, however, has not been easy. President Xi Jinping’s calls for a world-class university with “Chinese characteristics” have sounded far more like an attempt to impose central ideological control on higher education than a way to promote a constructive alternative to the American model.

In Shenzhen, the first lecture that incoming students at SUSTech now hear is a history of the school’s founding. In a country that wears its 5,000-year history on its sleeve, a school less than a decade old is still lying swaddled in its crib. Students listening in the lecture hall are moved to tears. “Actually implementing education reform is not an easy task,” one student wrote after last year’s introductory lecture. “But in twenty years, when foreign visitors come to China, they won’t look to Tsinghua or Peking University. They’ll look to SUSTech.” China’s goal of becoming an education powerhouse may depend on it.

Hillary Opens the Overton Window https://washingtonmonthly.com/2016/10/25/hillary-opens-the-overton-window/ Wed, 26 Oct 2016 00:09:42 +0000 https://washingtonmonthly.com/?p=61212 The larger political sphere is finally wising up on antitrust policy.

One evening in early October, my wife and I were in the kitchen, half-listening to MSNBC, when I suddenly found myself riveted to the screen. There was Hillary Clinton, in a speech given earlier that day in Toledo, linking Donald Trump’s tax code manipulations and other examples of corporate abuse—big banks creating phony customer accounts, Big Pharma jacking up drug prices—to a broader critique of growing corporate monopoly power. That power, Clinton said, “threatens business of all sizes, as well as consumers. With less competition, corporations can use their power to raise prices, limit choice for consumers, cut wages for workers, crowd out start-ups and small businesses. . . . As president, I will appoint tough, independent authorities to strengthen antitrust enforcement and really scrutinize mergers and acquisitions, so the big don’t keep getting bigger and bigger.”

My first reaction was to give my wife a fist bump. As longtime readers of the Washington Monthly know, we’ve been hammering away at the issue of market concentration and the need for stronger federal competition policies for more than fifteen years. Finally, our white-whale-like issue has made it to the forefront of the national political debate.

My second reaction was to think, Why did Clinton wait until five weeks before the election to bring this up? In a campaign that is supposedly all about the economy, inequality, and populist anger, isn’t an idea as big as going after corporate monopolies something to build your economic message around from the beginning, rather than tossing it out, almost as an afterthought, at the end?

But then my third reaction was, Don’t go blaming Hillary. At least she is talking about the issue. None of the other 2016 presidential candidates did. Not Bernie Sanders, who had much to say about breaking up the big banks but never suggested expanding that idea to other sectors of the economy that are in many cases more monopolized than banking. Not the two short-lived Democratic contenders, Martin O’Malley and Jim Webb—and Lord knows those two could have used an idea to make them stand out. And certainly not any of the seventeen GOP hopefuls, including the eventual nominee, Donald Trump (who finally did raise the consolidation issue, two weeks after Hillary).

The failure of the antitrust issue to garner any serious political attention until the end of the campaign is an object lesson in the difficulty of opening the Overton Window—academese for the range of ideas deemed politically respectable enough to consider publicly. For most of the last decade-plus, the Washington Monthly and our partners at New America’s Open Markets Program were pretty much voices in the wilderness on the issue of corporate consolidation and antitrust policy. Then, beginning a few years ago, several leading economists, including Paul Krugman, Joseph Stiglitz, Jason Furman, and Peter Orszag, joined the crusade. So too did a couple of other think tanks, including the Roosevelt Institute and the Center for American Progress. In recent months other thought leader publications, including the Atlantic, the New Yorker, the New Republic, and Democracy, have picked up on the consolidation trend and published important pieces on its dangers.

Yet the issue has barely penetrated the mainstream press and has been virtually absent from the country’s larger political debate. In fact, it’s been decades since political leaders and mainstream reporters engaged in a real examination of monopoly and antitrust policy. Most of them do not have the intellectual background to even discuss it.

I saw a stark example of that on MSNBC that night. After running the clip from Clinton’s speech, host Chris Hayes asked his guest, Ohio Senator Sherrod Brown, a typically insightful question about how the decline of antitrust policies might be behind declining wages and rising inequality. Brown responded as if he had no idea what Hayes was talking about, shifting the discussion to more familiar causes like outsourcing and tax policy.

You can understand why presidential candidates might be reluctant to talk to voters about a complex, deep-in-the-source-code issue like antitrust enforcement that even their Senate colleagues don’t quite grasp. Sometime soon, however, we are bound to have the big national debate on monopoly power the country needs. One reason, as Phillip Longman argues in the current issue (see “How to Make Conservatism Great Again” here), is that while Democrats have been the first to seize on the issue, Republicans are likely to figure out that it’s also perfect for their Trumpian base. After all, Trump voters don’t give a hoot about the big corporate donors who give the establishment party its marching orders and who profit most from monopoly rents. And those voters live in the towns and second-tier metro areas that are being robbed of their locally owned employers by giant firms based mainly in the liberal bastions on the coasts, like New York and San Francisco.

Another reason is that toughening antitrust enforcement can be done without congressional approval, something the next president is unlikely to get much of. That’s also true of the subset of antitrust policy that has to do with the politically red-hot issue of high prescription drug costs, as Alicia Mundy makes clear in her months-in-the-making cover investigation (see “Just the Medicine” here).

One way or another, you’re going to end up reading a lot more about antitrust policy soon, and not just in the pages of the Washington Monthly.

The Enigma of Ulysses S. Grant https://washingtonmonthly.com/2016/10/25/the-enigma-of-ulysses-s-grant/ Wed, 26 Oct 2016 00:08:43 +0000 https://washingtonmonthly.com/?p=61201 A magisterial new biography fails to crack the mystery of America’s greatest general.

“He is a strange character,” remarked William Tecumseh Sherman in 1879, trying to explain Ulysses Simpson Grant, his old chief during the Civil War and, by then, a former president of the United States. “I knew him as a cadet at West Point, as a lieutenant of the Fourth Infantry, as a citizen of St. Louis, and as a growing general all through a bloody civil war. Yet to me he is a mystery, and I believe he is a mystery to himself.”

American Ulysses: A Life of Ulysses S. Grant by Ronald C. White, Random House, 864 pp.

Grant has not proven any less a mystery since then, and it has been hard to connect the dots of the man’s qualities in a pattern that will explain how he became the greatest commanding general of the U.S. military forces in the nineteenth century and the conqueror of the fabled Confederate Robert E. Lee. Charles Dana, who was sent as a War Department observer to evaluate Grant’s aptitude for high command midway through the war, described him as “the most modest, the most disinterested, and the most honest man I ever knew, with a temper that nothing could disturb, and a judgment that was judicial in its comprehensiveness and wisdom”—just the sort of thing that could be said about almost anyone in shoulder straps in 1863 who was not a complete idiot. On the centennial of Grant’s death, in 1985, John Leo wrote a partly humorous tribute to Grant that seemed to spear the man exactly, saying that Grant was the sort whose first words upon being introduced would be “Meet the wife.”

It did not take long, even during the Civil War, for puzzled critics to assume that there was no pattern to the dots at all. They concluded that Grant succeeded as a general only because he tapped the enormous manpower resources of the North and just threw them at the Confederates, regardless of the cost in blood. They railed at Grant the two-term president (from 1869 to 1877) as an incompetent for protecting corruption and allowing the Reconstruction of the South to fizzle. Moreover, Grant set two unhappy precedents in American history: that of second-term presidents whose administrations collapse in scandal, and that of great generals who make for bland politicians. Grant’s presidency, complained Henry Adams, “avowed from the start a policy of drift.” Grant himself was “inarticulate, uncertain, distrustful of himself, still more distrustful of others.” He “should have lived in a cave and worn skins.”

Ronald C. White’s American Ulysses is the seventh doorstop-sized biography of Grant to be published in the last thirty-five years, following William McFeely (1981), two volumes from Brooks Simpson (1991 and 2000), Geoffrey Perret (1997), Jean Edward Smith (2002), and H. W. Brands (2013). This puts quite a burden on White, a former seminary professor and historian of the Social Gospel who rocketed to prominence in 2002 with the first of three well-received books on Abraham Lincoln. But White’s fundamental take on Grant is to stress the basic decency of the man. White’s Grant disliked war, politics, and slavery (more or less in that order). “I am called a man of war,” Grant complained in 1878, “but I was never a man of war.” He was no student of military science, either, like his great contemporary Helmuth von Moltke, and once reduced the essence of strategy to a sound bite: “Find out where your enemy is. Get at him as soon as you can. Strike him as hard as you can, and keep moving on.” He protested to Otto von Bismarck that “I am more of a farmer than a soldier. I take no interest in military affairs. . . . I never went into the army without regret and never retired without pleasure.” He was devoted to his wife, his family, and his horses. (The rare moments when Grant emerged from his customary stolidity into raging fury were the ones in which he witnessed mistreatment of animals.) He never used profanity and was a lifelong Methodist. Not surprisingly, it is on that last point that White wants to dwell, since in all the other Grant biographies, “Grant’s religious odyssey has been overlooked or misunderstood.”

John Keegan, in The Mask of Command, observed that Grant possessed two great virtues as a general: he had the coup d’oeil (the ability to size up terrain and know instinctively what to do on it) and a clear, brisk style of communication that never left his subordinates in doubt of his wishes. (This may not sound like a remarkable talent, but the American military on the eve of the Civil War had no general staff and no established pattern for field communications, and much of the Civil War was pockmarked by bad decisions prompted by unclear or vague orders.) These were, however, virtues that required a circumstance to draw them into view, and the trouble with Grant’s first thirty-eight years was that no such circumstance found him. Born in Point Pleasant, Ohio, in 1822, he was packed off at the age of seventeen to West Point by his father, mostly for the benefit of a free education, and served creditably as a junior officer and quartermaster in the Mexican War. He filled a variety of small posts in the years afterward, until he was finally dispatched to Fort Humboldt on the Northern California coast. There, separated from the support of his wife, Julia, he took to drink and was compelled to resign his commission in 1854. Having hit bottom, Grant proceeded to bump along that bottom for the next six years, finally ending up as a clerk in his father’s leather-goods store in Galena, Illinois.


Then came the war, and the desperate need for anybody who had at least some small bits of military know-how. Grant was handed command of the 21st Illinois Volunteers, and from that moment he went nowhere but up. Promoted to brigadier general, he organized a joint Army-Navy expedition up the Tennessee and Cumberland Rivers in February 1862, which knocked aside the western Confederacy’s barrier forts, Henry and Donelson. This feat, and his blunt demand for the unconditional surrender of Fort Donelson, earned him quick national celebrity. (U. S. Grant became “Unconditional Surrender” Grant, and the nickname earned him the first of many boxes of the congratulatory cigars that eventually killed him.) Grant almost threw it all away when he was caught napping by the Confederates at Shiloh that April. But he redeemed himself by managing a superb, light-footed campaign against the Confederacy’s citadel on the Mississippi River, Vicksburg, in mid-1863. He then pivoted militarily, and won a tremendous come-from-behind victory at Chattanooga later that year. That was enough to convince Abraham Lincoln that Grant was the general who could win the war. Waving aside insinuations that Grant still had a drinking problem, Lincoln had him promoted to lieutenant general and gave him a mandate to crush the main Confederate army, under Robert E. Lee.

Curiously, Grant hadn’t originally been enamored of Lincoln. He didn’t vote for him in the 1860 election, and in the first year of the war had cautiously refused to harbor fugitive slaves behind Union lines. However, keeping a finger in the political winds became a Grant specialty. Soon enough he pivoted politically as well, publicly endorsing Lincoln’s emancipation policy and quieting any fears on Lincoln’s part that he had political ambitions of his own.

The Overland Campaign that Grant devised against Lee in 1864 was not, as Grant’s enemies often whined, a war of brute attrition. To the contrary, it was a war of the most nimble maneuvers, with Grant consistently outfoxing Lee until he had pinned Lee’s army into a siege around the Confederate capital of Richmond. Lincoln and the politicians had badgered Grant from the start to look for a knockdown fight with Lee’s army. Grant knew better. Warfare in the nineteenth century had become too large scale to be settled by Napoleonic set-piece showdowns. Once it became clear at the Wilderness, Spotsylvania, and Cold Harbor that Lee could not be overcome by mere head-down fighting, Grant did what he had wanted to do all along, which was to cross the James River and grasp Richmond and its rail-junction neighbor, Petersburg, in a remorseless headlock. Lee understood all too well what Grant was up to, and knew it spelled the end. The siege isolated Richmond and demoralized and exhausted Lee’s army. When Richmond’s defenses finally collapsed in April 1865, Lee was only able to flee as far as Appomattox Court House before Grant caught up and compelled his surrender without the need for any final, bloody climax.

This might have been the happy ending for Ulysses Grant. Instead the political vacuum created by Lincoln’s murder the week after Appomattox sucked Grant in. Lincoln’s vice president and successor, Andrew Johnson, as politically inept a president as has ever taken the oath, descended into a swift spiral of conflict with Congress that ended in his impeachment in 1868 (which Johnson survived by just one vote in the Senate). Grant was at first supportive of Johnson. But by 1866, Grant’s finger in the wind had steered him to the congressional side, and in 1868, Lincoln’s Republicans enthusiastically adopted him as their candidate for president.

Alas for Grant, his presidency undid in peace nearly everything he had accomplished in war. Congress expected him to do as Johnson had not, and apply the severe measures for Reconstruction that Congress had embodied in the Fourteenth Amendment and the three Reconstruction Acts of 1867. Grant was only too willing to do so. He became an ardent defender of civil equality for the freed slaves, and when southern irredentists resorted to terror to nullify black votes in the South through the Ku Klux Klan, Grant’s attorney general, Amos Akerman, prosecuted the Klan with the hand of an avenging angel.

But Congress’s Reconstruction strategies were severe only by comparison with doing nothing; after only three years, even under congressional rules, most of the old Confederacy had been reintegrated into the Union without sufficient safeguards to protect the freedpeople. Congress displayed even less wisdom in managing financial affairs, and was swallowed up in 1872 by the Crédit Mobilier scandal, which saw members of Congress standing in line for what amounted to bribes from the railroads. When a financial panic struck the nation in 1873, an enraged electorate ripped away the Reconstruction majorities Grant had relied upon in the House of Representatives, and for the remainder of his second term, the enforcement of federal voting rights laws was handicapped by an unsympathetic opposition on Capitol Hill.

Grant’s capacity to anticipate events seemed to desert him, especially when it came to dealing with corruption within his own administration. Eager to be appreciated as one who stood by his friends, Grant appointed old Army cronies to positions everywhere from post offices to cabinet departments. Like Grant, many of them had never known anything before the war except genteel poverty; unlike Grant, they could not resist the manifest opportunities that political power offered them for graft. Grant saw the chair of his Indian Commission, his personal secretary, his secretary of war, and his secretary of the interior all forced into resignation over financial chicanery. Grant himself was unsullied by any charges, but it cost him respect, a cost made worse by his propensity to cling to his friends far longer than their misdeeds warranted. The month after he left office, the last Reconstruction governments in the South collapsed, and the steady road to Jim Crow was open.

White narrates this mud-daubed life with palpable sympathy and no small skill. The book is handsomely written, and it is accompanied by a brace of extraordinarily fine maps, including ones that chart Grant’s post-presidential world tour. Moreover, White has taken the trouble—and how many biographers short-change this part of the process!—of visiting the places Grant lived and worked in. But White never quite moves us to conviction about the single most unusual aspect of his biography, and that is Grant’s religion. From time to time, we are reminded that Grant rented a pew in this or that Methodist church in Galena or Detroit, that he listened attentively to the “feeling discourses from the pulpit” of his Galena pastor, John Heyl Vincent, or that he endorsed John Wanamaker’s Sunday School Times with a letter that urged its readers to “hold fast to the Bible as the sheet-anchor of your liberties; write its precepts in your hearts, and PRACTISE THEM IN YOUR LIVES.” These are, as White rightly complains, aspects of Grant’s life that most of the other biographies miss entirely.


But beyond this, even White finds it nearly impossible to detect in Grant the sort of mental anguish on religious subjects that marked Abraham Lincoln. In 1862, Grant issued a general order banning “the Jews, as a class” from his military department for “violating every regulation of trade established by the Treasury,” only to rescind the order under pressure from Lincoln, and to spend the rest of his life apologizing for it. In 1875, he endorsed the Blaine Amendment, legislation proposed ostensibly to reserve religion “to the family altar, the church, and the private school,” but designed in practice to defund Catholic parochial schools. It does not appear that Grant was baptized until he was dying of cancer in 1885—and even then, he was unconscious. “From many years of my acquaintance with General Grant I cannot recall an instance of a reference to theological opinions upon controverted topics of faith,” remembered George S. Boutwell, who served as Grant’s secretary of the treasury. “He disliked controversy even in conversation.” His posthumously published Personal Memoirs contain only two direct references to God, and both are mere bromides (“Man proposes and God disposes”). Julia Grant’s memoirs were substantially more forthcoming about religion, but not about her husband’s beliefs.

And so Ulysses Simpson Grant remains as much a mystery as ever Sherman found him. In British history, military heroes have been eccentrics—sometimes, highly religious eccentrics, like Orde Wingate, Henry Havelock, or Charles George Gordon. But Americans are only amused by their eccentric generals; they rarely give them top command. White takes, as the trope of his title, the comparison of Grant with a very different kind of soldier, his classical namesake Ulysses. To the extent that Grant turned into quite a world tourist after 1877, it is true that he really could not “rest from travel”; to the extent that he was a foil for Sherman, the Civil War’s Achilles, he was an American Ulysses. But he was certainly not an American Caesar or an American Cincinnatus, and in truth, he may only have been an early Eisenhower.

Trump’s Supporters Revealed https://washingtonmonthly.com/2016/10/25/trumps-supporters-revealed/ Wed, 26 Oct 2016 00:07:02 +0000 https://washingtonmonthly.com/?p=61202 Two new books underscore the big lesson of 2016: GOP base voters hate big government spending only when the “wrong” people benefit.

How on earth did Donald Trump become the Republican presidential nominee in 2016? How did we wind up with such a nasty and bitter election between two of the most disliked candidates in modern history? Is there any hope for a better, more civil politics? These questions will surely launch scores of books in the years to come, but for now, we have two excellent ones to start with, both of which offer compelling, if pessimistic, explanations for how we got where we are (including anticipating Trump, albeit indirectly), and why things are unlikely to get better anytime soon.

The Politics of Resentment: Rural Consciousness in Wisconsin and the Rise of Scott Walker by Katherine J. Cramer, University of Chicago Press, 256 pp.
Why Washington Won’t Work: Polarization, Political Trust, and the Governing Crisis by Marc J. Hetherington and Thomas J. Rudolph, University of Chicago Press, 256 pp.

Katherine J. Cramer’s The Politics of Resentment: Rural Consciousness in Wisconsin and the Rise of Scott Walker is the product of a six-year listening tour in which Cramer, a political science professor at the University of Wisconsin–Madison, got in her car and drove around rural Wisconsin to learn how voters there actually discuss politics when they talk to each other over coffee and in discussion groups. Cramer isn’t interested in putting down working-class whites because they may get their facts wrong or fail to vote their self-interest. She’s interested in understanding how they reason through a complex world. As a result, these conversations—many of which she quotes verbatim—infuse The Politics of Resentment with a complex humanity that is rare in books about public opinion. The payoff is a narrative with both nuance and depth.

For many rural voters, resentment toward urban elites has become the organizing principle of their politics, writes Cramer. They look around and see their towns struggling. They believe that they are ignored by policymakers and are convinced that they do not get their fair share of resources. (This is more perception than reality: data shows that mostly rural Wisconsin counties actually get a little more state and federal money per capita than mostly urban counties. At the national level, a Mother Jones analysis found that 81 percent of predominantly rural states got more federal spending than they paid in taxes, while only 44 percent of predominantly urban states did.) These rural Wisconsinites do not feel that their values and lifestyles—which they see as fundamentally distinct—are understood and respected by those living in cities. “Our votes mean nothing,” said one person; “I think we are just hung out there to dry,” said another; “I think you’ve forgotten rural America,” said a third.

Thus it follows that big-city politicians must be taking that money and spending it on their constituents. “Many people in small towns perceived that their tax dollars are being ‘sucked in’ by Madison or Milwaukee, never to be seen again,” writes Cramer.

Rural voters often resent how much government employees make—that information is public, and it looks to them like pretty good money, especially with the benefits these employees get. And they don’t understand what those people actually do, other than occasionally harassing them for fishing without a license or telling them how to run their small business. To rural residents, public employees are, in the words of one respondent, “[s]ecretaries, with secretaries, with secretaries.”


Which is very different from how these voters see themselves. “Many people [take] enormous pride in using their hands rather than . . . sitting behind a desk all day,” Cramer writes. “I still know how to work,” boasted one man. “I’m eighty-two years old and I’m driving a semi.” Some have two or three jobs, each demanding hard physical labor. There was special disdain for those who, in the words of one interviewee, “shower before work, not afterwards.” Add this up, and you begin to see how many low-income residents in rural areas who would very much benefit from some redistribution have come to resent government.

Conservative politicians have played skillfully on these resentments, rallying these folks to small-government causes on the premise that government spending is just a giant wealth transfer from hardworking honest country folks to undeserving urban dwellers, of both the paper-pushing government bureaucrat type and the stroller-pushing welfare queen type.

And yes, there is a racial element to this, as Cramer notes. “Urban” can be code for black. But she’s careful to point out that the racial element can’t be disentangled from a larger sense of identity. Racial differences are far from the only aspect of the “us versus them” construct that separates rural and urban identities.

It is little wonder that both Donald Trump and Scott Walker do so well among this demographic. Walker is seen as somebody who is finally sticking it to those bureaucrats, who laze around in their cushy office jobs all day. “There is no reality in Madison,” said one person Cramer interviewed. In a survey Cramer conducted, 16 percent of suburban and 25 percent of urban respondents felt rural areas received “much less” or “somewhat less” than their fair share. But 69 percent of rural respondents felt that way. “The people at the top,” said one respondent, “they are just milking us dry on taxes.”

Rural counties have always been poorer, and there is less private investment because the workforce tends to be less skilled and less well educated. Local ownership of mom-and-pop businesses, which were once the commercial lifeblood of rural towns, is disappearing. According to an analysis by the Economic Innovation Group, the effects of the recent recovery haven’t always trickled down to rural counties. Between 2010 and 2014, roughly two out of three of those counties lost jobs, compared to under one in five that lost jobs in the 1990s. So despite the actual public investment, Cramer reports, “many rural residents perceive that rural communities are the victims of economic injustice.”

To this, the conservative pitch has been simple: You country folks keep sending money to the government, and all you get is disrespected and ignored. So let’s just stop sending your money and cut government spending instead. Donald Trump has skillfully played on these grievances, simply by acknowledging them and promising to right all wrongs, using vague promises to return to a halcyon, prosperous past. In Cramer’s telling, this is precisely the message that America’s rural residents have been waiting to hear for years.

Marc J. Hetherington and Thomas J. Rudolph, professors of political science at Vanderbilt University and the University of Illinois at Urbana-Champaign, respectively, have together written Why Washington Won’t Work, about the decline of trust in government. Theirs is a simple takeaway: our politics have become so polarized that Democrats only trust Democrats and Republicans only trust Republicans. It becomes increasingly difficult to deal with political opponents if you suspect that they are acting in bad faith. And so a negative spiral of dysfunction, disappointment, and further distrust is created.

Both of these books cast doubt on the prevalent narrative long peddled by conservative elites: that America is a deeply conservative country that loves free markets and hates government entitlements, as proven by the electoral successes of conservative Republicans who ran against big government and won. As Trump has demonstrated, Republican voters were indeed angry. But not because government was spending too much. Rather, as these voters saw it, government was spending mostly to help people other than themselves.

Both books also cast doubt on the long-enduring, bipartisan fantasy of compromise being just a matter of politicians’ willingness to engage in a little give-and-take at work and then grab drinks with their colleagues after hours. Instead, the authors demonstrate that the us-versus-them divisions run deep in American politics, with powerful pressures that cannot simply be papered over by Beltway socializing. Polling by the Pew Research Center found that 52 percent of registered voters—regardless of color, gender, or age—want to see their candidate be willing to compromise. (Interestingly, that number jumps to 63 percent among registered Democrats, while only 35 percent of registered Republicans want their candidate to reach across the aisle.)

Had pundits internalized the analysis of these two books, they might have been less surprised by Trump’s rise, because they would have understood that politics is far more about identity than it is about ideology; that is, that most voters and even politicians care far more about their side winning than they do about specific policy principles. Witness the many Republican politicians who denounced Trump for the entire primary campaign but then quickly got on board to support him after he became the Republican nominee, because he, like them, identified as a Republican. After all, even the craziest Republican has to be better than a Democrat. Identity, be it partisan or geographic, invariably pushes voters to ask, “Is this good for my side?” They might also have understood that conservatives who believe philosophically in smaller government have a natural incentive to stoke resentment and distrust of government among their constituents—but that distrust and resentment is not the same thing as an actual preference for smaller government.

Where The Politics of Resentment draws on face-to-face conversations, Why Washington Won’t Work is a more traditional political science public opinion book: a few chapters of theory, then a few chapters of survey data analysis. That said, it’s clearly and engagingly written, and full of intelligent insights.

Where Cramer focuses on rural consciousness and resentment as the organizing concepts of political conflict, Hetherington and Rudolph focus on partisanship and distrust. “Political trust is critical,” they write, “because it helps create consensus in the mass public by providing a bridge between the governing party’s policy ideas and the opinions of those who usually support the other party.” If Republicans trust the government to do what is right even with a Democrat in the White House, they’ll give Democrats at least some benefit of the doubt in negotiating a policy compromise. But without that reservoir of trust, there’s no room to negotiate. Hence, the current gridlock. Trust is a complex indicator, driven by multiple factors. But generally it declines when government screws up, when the economy is bad, and when the political process seems messy and corrupt.

During the recession in 2008, trust in government declined somewhat generally. When Barack Obama took the White House in 2009, Republican trust in government declined dramatically. In that sort of environment, it became very difficult for Obama to get any Republican support for the needed stimulus, which forced a compromise stimulus bill that many believed was far short of what was actually necessary. (By contrast, tax cuts sell well in a low-trust environment: if you don’t trust the government to spend your money, why give it to them in the first place?)

The noisy and visible conflicts that have increasingly dominated Washington turn people off from government and politics. The gridlock of these conflicts also further depresses the responsiveness of government. “Americans do not like to see democracy in action,” write Hetherington and Rudolph, “even though they profess loving democracy in theory.” Which might explain why the recent economic numbers showing rising incomes across the board failed to assuage voters’ anxiety. “If economic performance is the engine that drives political trust,” Hetherington and Rudolph write, “that engine needs to work about four times as hard for about four times as long to raise trust as it does to lower it.” That’s because when the economy improves, people stop paying attention to it and fail to notice the good job government is doing. Instead, they focus on new problems.

And, like Cramer, Hetherington and Rudolph note that conservatives have an obvious incentive to fan the flames of mistrust, because they benefit politically. While liberal policy initiatives depend on government activity, a lack of political trust “will more often put a brake on liberal policy initiatives than conservative ones.”

Both books end on a pessimistic note. In their final chapter, “Things Will Probably Get Better, but We Are Not Sure How,” Hetherington and Rudolph argue that the high-trust era of the 1950s and ’60s may be a historical anomaly, since it benefited from the unique combination of long-run economic growth and the predominance of international issues, both of which tend to improve trust in government.

Certainly, a global conflict would be one way to improve trust, since as Hetherington and Rudolph write, “Out-group threats trigger various manifestations of in-group solidarity.” (Think 9/11 and George W. Bush’s stratospheric approval ratings.) War may be good for the health of the state, as Friedrich Hegel once put it. But it has other obvious drawbacks.

Both books note that many specific sectors of government and individual government programs do enjoy high approval and trust, and suggest that politicians who want to build more trust in government and reduce resentment focus more attention there. For example, people have favorable opinions about government programs like Social Security and Medicare. Unfortunately, talking about government programs that function well is boring, and not newsworthy by modern media standards. The headlines and political rewards go to those who are crusading against corruption and wrongdoing. These are powerful incentives.

“My fear is that democracy will always tend toward a politics of resentment,” admits Cramer, “in which savvy politicians figure out ways to amass coalitions by tapping into our deepest and most salient social divides: race, class, culture, and place.” The evidence is compelling. The norm of American democracy appears to be deep and divisive conflict between competing social groups and political tribes, all of whom want government to spend money on them and them alone. This stands as a powerful counterargument to those who are convinced that the problem with Washington is that politicians don’t spend enough time after hours in Georgetown salons, where Rs and Ds give way to Maker’s Marks and Woodford Reserves, and where moneyed elites can all agree that the biggest problem is government spending on entitlements.

In a broader sense, these books vindicate James Madison’s age-old observations about the unavoidability of factions. Political conflict, resentment, and mistrust may just be the natural state of things, with the postwar era of bipartisan consensus being an aberration that resulted from a rare period of continued economic growth, shared prosperity, and the primacy of international conflict. If we grant this conclusion, perhaps the task of political reform becomes not trying to minimize conflict, but instead accepting, channeling, and managing the inevitable struggles that are endemic to politics—just as Madison divined.

Seven Habits of Successful Nations https://washingtonmonthly.com/2016/10/25/seven-habits-of-successful-nations/ Wed, 26 Oct 2016 00:06:11 +0000 https://washingtonmonthly.com/?p=61203 Some unlikely places around the world are tackling some of the world’s toughest challenges and winning.

Jonathan Tepperman, managing editor of Foreign Affairs, is hardly sanguine about the state of the world. He points to “the slow-motion disintegration of Westphalian nation-states in multiple parts of the world” and “a growing weakness at the heart of the liberal, rules-based global order.” But he hasn’t given up hope. The Fix: How Nations Survive and Thrive in a World in Decline sees Tepperman take a tour around successful interventions to deal with ten of the “toughest and most persistent challenges states have faced in the modern era,” put in place by “a tiny band of freethinking, often underrated leaders,” many of whom he interviewed for the book. The Fix suggests that such successes could be replicated, if other world leaders were willing to be as brave and freethinking as their colleagues.

The Fix: How Nations Survive and Thrive in a World in Decline by Jonathan Tepperman, Tim Duggan Books, 320 pp.

Tepperman’s enjoyable and informative book demonstrates the power of politicians to make the world a better place. It should be a welcome tonic for those who no longer believe that governments can do anything right anymore. But, if anything, the book is not positive enough. The Fix presents its case studies as exceptions to the norm of a world in decline. Instead, they should be considered fine illustrations of why we are seeing such widespread and unprecedented global progress.

The Fix begins with a grim recounting of recent miseries, from Iraq’s disintegration and Russia’s annexation of the Crimea to the global financial crisis and the near collapse of the Eurozone project. It quotes General Martin Dempsey, former chairman of the Joint Chiefs of Staff, suggesting it is “the most dangerous time” in his life—impressive for a man born during the Korean War who lived through the Cuban Missile Crisis, the Soviet invasion of Afghanistan, and the attacks on the Twin Towers. Tepperman suggests that our present travails stem from the failure of leaders to address ten big problems: inequality, immigration, Islamic extremism, civil war, corruption, the resource curse, energy extraction, the middle-income trap, and gridlock (twice). And he argues that “while the details of all of the troubles currently wracking the world vary, they share an underlying cause: the failure of politicians to lead.”

The Fix provides scant justification for why these problems are the big ten planetary concerns, and some of them don’t belong on the list (climate and threats to global health are obvious additions, and below are suggestions for some that might be cut). Nor does the text really make the case that their solution would bring stability to the Eurozone or an end to Russian adventurism in the Ukraine. But the opening discussion is a convenient appetizer for the (succulent) meat of the book: chapters recounting successful attempts to foster peace, security, economic growth, and equality alongside multiculturalism and openness.

The first case study Tepperman presents involves Bolsa Familia (Family Grant), a Brazilian antipoverty program that gives cash payments to the poorest families in the country on the condition that their children be vaccinated and in school, and that both mothers and children get regular health checkups. The transfers provide poor people what they most need in order to be a little less poor—money—and the conditions both help make the program politically palatable and improve health and enrollments. Bolsa Familia has been a big factor in a decline in the country’s extreme poverty rate, from 9 percent at the program’s start in 2003 to less than 3 percent today. It has also played a role in dramatically reducing inequality. Over a similar period, school enrollments have climbed, Brazil’s infant mortality has declined by 40 percent, and deaths from malnutrition have fallen by more than half. All of this has come at a cost of around one-half of 1 percent of GDP. Bolsa Familia is a widely admired program, and former President Lula da Silva—despite  the allegations of corruption recently made against him—receives (and deserves) Tepperman’s plaudits.


Next on the list of solvable intractables is immigration policy. Canada has one of the highest per capita immigration rates in the world, with 20 percent of the country’s population foreign born. Yet it was still a photo opportunity rather than political suicide when the country’s new prime minister, Justin Trudeau, welcomed the first of 25,000 Syrian refugees into the country saying, “You’re safe at home now.” Immigration has helped turn “a small, closed, ethnically homogeneous state into a vibrant global powerhouse and one of the most open and successful multicultural nations in the world,” suggests Tepperman, thanks to a long-lived and very deliberate strategy of promoting pluralism. Among other interventions, a billion-dollar budget to support pro-immigration documentaries, teaching aids, and community integration efforts helps to account for a country where 85 percent of the population thinks multiculturalism is important to the national identity and two-thirds feel that immigration is one of Canada’s key positive features.

The Fix also provides compelling case studies on security and stability. Indonesia has seen a significant decline in terrorist attacks while preserving democracy and fostering development, thanks to political leadership that has kept radicals “weak and off-guard.” Tepperman notes that the country has treated terrorism “more like a law-enforcement problem than a military one,” coopting more extreme parties rather than attempting to crush them, generally detaining suspects only when there is good evidence, holding public trials for those it seeks to imprison, and attempting to rehabilitate as well as incarcerate.

Tepperman also reports on Rwanda, which has rebounded from a genocide and civil war that killed or displaced more than 40 percent of the population, obliterated the civil service, and left the country with a per capita yearly income of just $217. President Paul Kagame went about rebuilding the country and attempting reconciliation by stripping all mention of Hutu, Tutsi, and Twa ethnic groups from official texts. He introduced a transitional justice system based on a precolonial model that used elected local tribunals to combine confession, reconciliation, and punishment in dealing with the perpetrators of mass violence. The system heard nearly two million cases before it was dismantled in 2012. Thanks to peace, stability, and economic reform, per capita income in the country has tripled over a decade, and child mortality has fallen by more than two-thirds—a world record pace. Ninety-three percent of Rwandans report that they are confident about the direction of the country, according to Gallup. While far from a heaven on earth, the country is no longer a problem from hell, and those in the Democratic Republic of Congo, Syria, or Afghanistan would surely be delighted if you suggested that similar progress could be theirs over the next decade.

The Fix discusses other economic successes. Botswana has seen the fastest growth rate in the world over the first thirty-five years of its independence, never suffering hyperinflation, recession, or famine, by successfully harnessing its immense diamond resource to the cause of development under effective democratic rule. South Korea has seen almost as rapid growth. Tepperman argues that this is thanks in part to leaders who have successfully implemented an industrialization strategy followed by reforms that have allowed efficiency and competition to drive creative destruction. And in 2012, Mexico undertook a rapid and far-reaching set of reforms under a bipartisan coalition that an expert quoted by Tepperman suggests was “the most ambitious process of economic reform seen in any country since the fall of the Berlin Wall.”

To add to the list of problems that may appear insoluble but aren’t, The Fix includes chapters on Singapore’s successful anti-corruption efforts, the American shale revolution, which extracted previously inaccessible fossil fuels, and New York’s anti-terror policing, which may have helped keep the city safe after 9/11.

For all that these are cases of success, Tepperman is usually straightforward about the limitations of the stories he highlights. In the case of Rwanda, Kagame’s leadership is the subject of considerable debate, for example, and the transitional justice system is burdened with significant concerns over due process and plentiful examples of misuse of power to settle personal scores.

Nonetheless, The Fix is often a little too certain about what has worked. Was it the FBI, New York’s Finest, or something else entirely that should take the credit for the lack of a mass terror event in New York City since 2001, for example? And vital elements sometimes vary: The Fix suggests that high government salaries were an important part of the anti-corruption fight in Singapore, but that keeping government salaries low was an important part of avoiding wasteful spending in Botswana.

Tepperman does worry about this latter problem—what social scientists would term “external validity.” For example, he notes that Canada’s positive policies and attitude toward immigration are based in part on long-standing fears that the country was underpopulated and on the fact that undocumented immigration has rarely been a significant problem there. Many other countries don’t share those features. And if there is a lesson from Rwanda, it is that “local problems require local answers,” he writes. For all of that, some of the solutions are generalizable; the model used by Bolsa Familia, for example, has been used in many other countries to similar effect.

Tepperman’s preferred generalizable lessons of success regard leadership—specifically, the central role of Brazil’s Lula, Botswana’s Seretse Khama, and Singapore’s Lee. A final chapter suggests that the most important leadership components are pragmatism, not letting a good crisis go to waste, acting magnanimously from strength, and so on. Tepperman suggests that the reason many global leaders have not been able to achieve the success of the book’s case studies is that the leaders in question “haven’t yet found the wisdom and intestinal fortitude to do what’s necessary.”

Leadership vision and drive have an important role in tackling The Fix’s terrible ten. Changing the immigration system in the United States to be more generous would have a huge payoff, because immigrants add immense value to the economy (they do not, contrary to current right-wing conventional wisdom, steal jobs or raise crime rates). But rather than leadership on reform, we get “Trump and his quasi-fascist race-baiting” and European leaders who, instead of fighting bigotry, pander to it.

Nonetheless, the evidence suggests that if leadership is vital to progress, it must be far more common than The Fix implies. Bolsa Familia, for instance, is not a unique example of a large-scale conditional cash transfer program, nor even the first. Mexico’s Progresa program, later rebranded Oportunidades, began in 1997. Like Bolsa Familia, it provided cash to poor families on the condition that they send their kids to school and keep their shots up to date, and it has reduced poverty and improved health and learning. Today there are conditional cash transfers running in countries from Bangladesh to Jamaica to Indonesia and beyond. The growth of such programs may help to explain why average inequality in countries of the developing world has been fairly flat since 2000, and Brazil is not the only country in which it has rapidly declined. Mali and Peru both saw nearly as rapid reductions in inequality alongside more robust growth than Brazil managed—suggesting that they did even better at raising the incomes of their poorest citizens.

To the extent that political regimes are primarily (excessively) judged on their ability to produce jobs through economic growth and low inflation, it has been a fantastic fifteen years around much of the world. The average inflation worldwide during the first ten years of the new millennium was less than 8 percent, compared to 66 percent in the previous decade. Economic growth has been so widespread that the number of low-income countries (those with an annual GDP per capita of less than $1,005) fell from sixty-three to thirty-five between 2000 and 2010.

It isn’t just income and inequality: while Rwanda is indeed a world leader in reducing child mortality, five high-mortality countries managed to reduce child deaths by three-quarters over the past couple of decades, and the world as a whole has more than halved the rate of child mortality since 1990. For all of the horrors of Syria, the world is getting less violent. Terrorism remains a tragic but extremely rare phenomenon outside of Iraq, Afghanistan, Nigeria, and Syria, and, even including those countries, global terror deaths are still far below annual automobile traffic deaths in the U.S. alone. In the United States, terror deaths run far behind deaths from lawnmower accidents and from falling out of bed. Meanwhile, there isn’t much evidence that global migration flows are higher or lower than in the recent past, for all of the toxic rhetoric around the subject in the West. And even in the United States, that rhetoric spews from an aging, declining minority: in recent polls, 59 percent of Americans say that immigrants strengthen the country through their hard work and talents, and only 33 percent describe them as a burden who take jobs, housing, and health care. In 1994, opinions were worse: 63 percent said immigrants were a burden, and 31 percent said they strengthened the country.

Even better, some of the “terrible ten” problems highlighted by The Fix may not even be problems at all. It isn’t really clear that there’s such a thing as a middle-income trap; there’s no evidence that countries “bunch” below a particular GDP per capita because it takes some fundamental and transformative institutional change to leap over it. In addition, countries with more natural resources tend to be richer than countries without resources. And the looming problem with energy isn’t so much that regulations are keeping shale oil in the ground in many countries; it is that if we extract and burn all we can, we’ll fry the planet.

Tepperman’s case studies, alongside evidence of widespread global progress, suggest that the idea that significant development challenges can only be overcome with a rare miracle of leadership under unique circumstances is wrong; success is not as hard as that. Some combination of the comparative ease of replicating good ideas, a reasonable global stock of competent leadership, and multiple paths to progress means that improvement has been ubiquitous rather than elusive worldwide over the past ten years. Tepperman does a wonderful job of illustrating that government leaders can achieve great things if they put their minds to it. Widespread global progress suggests that many do—and we should expect nothing less.

One for the Money
https://washingtonmonthly.com/2016/10/25/one-for-the-money/
How Alan Greenspan’s disastrous reign at the Fed came to be.

In December 2015, America’s central bank, the Federal Reserve, raised its target interest rate by one-quarter of 1 percent. It was the Fed’s first move on rates in exactly seven years, since it had cut them all the way to zero in the depths of the Great Recession. The increase came despite the fact that the Fed’s preferred measure of inflation was six-tenths of a point below its 2 percent target, a target it had not met since the spring of 2012, and was trending down, suggesting that there was room for employment to grow before inflation threatened.

The Man Who Knew: The Life and Times of Alan Greenspan
by Sebastian Mallaby
Penguin Press, 800 pp.

The Fed committee looked past all that, projecting four similar rate increases during 2016 as the economy gained strength. That projection turned out to be far too optimistic. Employment and inflation surveys repeatedly came in weak enough that the Fed was forced to delay further hikes; at the time of writing, there has been no rate increase in 2016.

For Fed watchers, this is an entirely unsurprising state of affairs. For every one of those seven years, as America suffered its worst bout of mass unemployment since the 1930s, the Fed has been consumed with worry over future inflation, and very obviously itching for some excuse to halt unconventional stimulus and raise interest rates above zero. Replacing Republican Ben Bernanke as Fed chair with Democrat Janet Yellen did precisely nothing to address this tendency.

Why? One place to look is The Man Who Knew: The Life and Times of Alan Greenspan, a new biography by the former Economist reporter and author Sebastian Mallaby. Greenspan, who was Fed chair from 1987 to 2006, was more responsible than any other single person for transforming that institution and its broader political culture into the form we know today: one obsessed with inflation, unconcerned with economic inequality, ranking price stability far above full employment, and always ready to backstop a bloated and disaster-prone financial sector.

Mallaby’s book is compelling reading. Against my better judgment, I found myself rather sympathizing with Greenspan, while still noting his immensely negative influence—but not for the reasons Mallaby gives. When it comes to assessing Greenspan’s legacy, Mallaby gets both his worst flaws and his best qualities wrong.

But first, the good stuff. Mallaby deftly leads the reader through Greenspan’s life, from Ayn Rand cultist to consultant to D.C. power player to world’s greatest economic policymaker. The skill with which the portrait is painted, and the tremendous amount of research that clearly went into the work, are the best parts of the book.

Greenspan was an ambitious but shy data-obsessed nerd who stumbled into extreme political skill. For a relative newcomer to the Greenspan legendarium, this was the most surprising and interesting part of the book. It turns out that a deep facility with statistics, a reticent demeanor, and a nose for power relationships can be parlayed into a serious political career. Whether it was helping Richard Nixon intimidate Fed chair Arthur Burns into keeping rates low before the 1972 election, besting Henry Kissinger in a straight bureaucratic knife fight while appearing not to do so, or stacking the Federal Reserve Board of Governors with his favored cronies, Greenspan was one of the most fearsome political operatives in Washington in his day.

Also surprising were Greenspan’s economic beliefs. He never bought the neoclassical theories coming out of the University of Chicago, dismissing “rational choice theory” and other such ideas that cast government regulation as always prone to backfiring. Though he broadly sympathized with their laissez-faire conclusions, his own economic perspective was much more ad hoc—fundamentally premised on the idea that relationships between economic variables were constantly changing and therefore not very amenable to modeling.

Instead of constructing complex models, he loved to dig into any sort of obscure data set he could find, to see what relationships might pop up. As a result, he was among the first to spot the increasing rate of personal indebtedness that began in the 1970s, particularly for home loans, understanding early on that recovery from recessions would be harder, as people struggled to spend while under a load of debt.

That data-focused, theory-skeptical approach was partially vindicated by the financial crisis, as many heavy-duty theoreticians failed to see it coming—or, indeed, loudly predicted that it could never happen. However, it also illustrates a blind spot of both Greenspan and Mallaby. Throughout the book, to his credit, Mallaby is concerned with detailed descriptions of how monetary policy works. But, like Greenspan, he does not investigate one major reason why personal indebtedness began to grow in the mid-1970s: inequality. That is the point at which productivity increases were infamously decoupled from wages, and inequality began to grow in earnest. But because, as Franklin D. Roosevelt’s Fed chair Marriner Eccles once wrote, “mass production has to be accompanied by mass consumption,” the middle class needed a new source of purchasing power to be able to partake in new production. They did that by borrowing more and more, until the 2008 crisis.

This is not a minor point. Inequality was central to Fed policy during the so-called “Great Moderation,” over most of which Greenspan presided as chair. With the marginal purchase increasingly financed by borrowed money, the Fed’s interest rate tool became much more immediately effective; there was no longer any need to induce recessions to stop inflation. Yet this came at the expense of a massive debt buildup that eventually led to disaster.

Instead, Mallaby’s major criticism of Greenspan is that he was too hesitant to use interest rates to prevent bubbles from forming. In the late 1990s, and again in the mid-2000s, Greenspan kept rates low because inflation was low and stable and employment was trucking along.

The argument here is that because financial crises can cause such lasting damage, the Fed should use its monetary policy to maintain a stable financial sector, at the cost of reduced employment and output—taking a mild hit now to avoid a bigger one later. The wisdom of this argument is core to any assessment of Greenspan’s legacy, so it deserves close consideration.

As an initial matter, it is far from clear that higher interest rates would actually reduce financial instability at all. A paper published by the International Monetary Fund a couple of years ago examined the probable effects of raising interest rates on financial stability, and outlined two possible positive effects and three negative ones. The net outcome would depend on the relative size of each effect, and it’s not obvious which way it would cut.

Indeed, during an expanding financial bubble, higher interest rates might even harm the economy as a whole while failing to prick the bubble. As John Kenneth Galbraith wrote about the pre–Great Depression bubble, “higher interest rates would have been distressing to everyone but the speculator,” because returns on the stock market were so spectacular that a somewhat higher cost of funds would barely even have been noticed. The several rate raises in 1999–2000 had little if any effect on the stock bubble at the time.

Still, during the housing bubble, mortgage rates were generally pegged to the Fed’s interest rate, so it’s at least possible that raising rates might have choked off the boom before it got too big. Let’s grant that for the sake of argument.

We must be very clear about what this means: deliberately creating a mini recession because Wall Street is doing its supposed job (allocating capital to its most productive use) like a cracked-out baboon. It may mean permanently lowering growth and eschewing full employment if bubbles are a frequent occurrence, as they have been in the last few decades. The housing bubble, for example, began to inflate only a couple of years after the dot-com collapse.

This is a crappy way to run a country. A capitalist system is prone to serious crisis, as we learned in 1929 and again in 2008. Hence, the number one task of economic policy is to prevent depressions and mass unemployment. Without that, there could be enormous upheaval. But the New Deal, World War II, and, to a lesser extent, the Obama presidency proved that the upheaval can be minimized. Government spending can restore full employment, thus undoing the damage of a recession, and stringent regulation can restrain the size and riskiness of the financial sector, thus keeping it from blowing up the economy. High taxation, full employment, and redistributive policy can reduce inequality, which keeps income circulating widely and thus preserves the strength of monetary policy tools.

To abandon all that in favor of using the central bank to deflate bubbles is to submit to atrocious economic performance for the foreseeable future. As John Maynard Keynes wrote in 1936,

The right remedy for the trade cycle is not to be found in abolishing booms and thus keeping us permanently in a semi-slump; but in abolishing slumps and thus keeping us permanently in a quasi-boom. . . . [A]n increase in the rate of interest, as a remedy for the state of affairs arising out of a prolonged period of abnormally heavy new investment, belongs to the species of remedy which cures the disease by killing the patient.

Mallaby, by contrast, casually dismisses the possibility of restraining Wall Street with regulation. Regulators failed before the 2008 crisis “and will likely fail in future,” he writes. What about the thirty-year period following World War II, when there were no financial crises? That is not discussed, and neither is the Dodd-Frank financial reform legislation. Mallaby also implicitly dismisses the value of government spending as economic stimulus, writing that “no amount of cleanup work could prevent a prolonged downturn” after the 2008 crisis. This is at odds with the opinion of most economists, who believe that a much bigger stimulus would have quickly restored full employment. But Mallaby doesn’t discuss Obama’s Recovery Act at all.

The basic reason the United States economy is sick is that it has slowly abandoned all the hard-won lessons from the New Deal era. Alan Greenspan is more responsible than anyone else for this act of forgetting. For his entire massively influential career, he helped to secure lower taxes on the rich, to cut social programs, and, especially, to deregulate finance. He ran the Fed with little concern for issues of distribution, effectively enabling the growth of inequality by viewing any wage increases as a harbinger of inflation. For a child of the Great Depression, this was a fantastic effort of willful ignorance.

In the beginning, his ideology had the brittle idiot mania of the Ayn Rand fan he was in his younger life (and Mallaby’s portrayal of those days is quite good). As he got older, it evolved into the weird inverted communism of the later George W. Bush years, where government handouts were generally frowned upon except when Wall Street firms landed themselves in trouble, in which case they got huge government cash infusions and nearly endless cheap loans.

Mallaby cites the fact that this became the hegemonic view in both parties as evidence that Greenspan was “scarcely ideological” in his later years. On the contrary, it shows that his very strong ideology gained widespread acceptance—so much so that even long after he is gone, it is still a major obstacle preventing sensible monetary policy.

All this inverts Mallaby’s portrayal of Greenspan’s decision in the late 1990s and the 2000s to let economic growth ride instead of attacking stock bubbles with interest rates. The former choice, in particular, was remarkable. He made the decision after a prolonged investigation into data minutiae, developing a hunch that productivity growth wasn’t showing up properly in the traditional statistics—a view not shared by most of the rest of the economics profession, including current Fed Chair Janet Yellen, who was on the Fed board at the time.

Given that attacking the stock bubble with higher interest rates might not have slowed the bubble in the slightest, and that the late 1990s were the one period when the relentless march of inequality temporarily reversed itself, that decision was in reality Greenspan’s finest hour. It could not have happened without his unusual intellectual skills and independent mind, and the result was the only period of sustained full employment from 1980 through the present day.

A final point: Mallaby does not address potential defects in the Fed itself, Greenspan’s signature institution. At this point, most Fed watchers have lost hope that the institution will be able to cast off the Greenspan legacy and begin to take full employment as seriously as it takes inflation and the stability of Wall Street. Yet there may be hope in changes to the Fed’s structure. Many writers, including me (see “Free Money for Everyone,” Washington Monthly, March/April 2014), have advocated that it be granted the ability to deposit newly created money directly into the bank accounts of citizens, on a per capita basis.

Since the 2008 crisis, during its several halfhearted efforts at unconventional stimulus, the Fed nearly quintupled the American monetary base (from $860 billion to $4.1 trillion) in an attempt to get spending and employment up and moving again. If even a small fraction of that money had been placed directly in American pockets (sometimes called “helicopter drops”), the damage of the Great Recession would have been healed years ago. More importantly, the danger of the zero lower bound would be permanently abolished.

Despite the recent good economic news, there is little chance in the near future of reversing the huge increase in inequality or seriously shrinking the bloated financial sector ushered in by Greenspan and his political allies. But a helicoptering Fed could at least keep the economy pressurized and growing. For future economic policymakers, it’s an idea worth considering.

The Revolution Will Be Analyzed
https://washingtonmonthly.com/2016/10/25/the-revolution-will-be-analyzed/
America changed in 1969, but our history isn’t quite complete.

Nineteen sixty-eight has often been cited as the year when America and the world went crazy. It was, after all, the year Martin Luther King Jr. and Robert Kennedy were assassinated, and American cities, the nation’s capital included, were set ablaze by black Americans enraged by King’s assassination. Paris exploded in a bloody student revolt; the Soviet Union invaded Czechoslovakia; the Tet Offensive in South Vietnam galvanized North Vietnam to fight on. Demonstrators raged outside of the Democratic National Convention in Chicago. Richard Nixon was elected president.

Witness to the Revolution: Radicals, Resisters, Vets, Hippies, and the Year America Lost Its Mind and Found Its Soul
By Clara Bingham
Random House, 611 pp.

Not so, says journalist and writer Clara Bingham: the time of psychosis really began in 1969. Her 600-page oral history Witness to the Revolution describes 1969 to 1970 as modern America’s most radical, and possibly most transformative, period. It was a time in which nearly two million Americans dropped acid and the nation experienced eighty-four acts of arson and bombing, and it was an era of major social transformation, in which various subordinate communities—blacks, Latinos, gays, women—began to emerge and challenge the status quo.

Bingham has divided her book into twenty-six chapters that cover the draft, the psychedelic revolution, the women’s liberation movement, and radicals and resisters. From 2012 to 2015, she traveled the country interviewing some 100 individuals—(some) black and (mostly) white, (mostly) male and (some) female, primarily early Baby Boomers who “played an important role in bucking the system,” including Bernardine Dohrn, Greil Marcus, and Julius Lester. Many of those she interviewed, like Carl Bernstein, Daniel Ellsberg, Morton Halperin, Michael Kazin, Tony Lake, and Richard Reeves, went on to have a lasting effect on American politics and culture.

Bingham designates 1969 as the year the sixties generation “awakened”; she argues that the decade designation “the sixties” is arbitrary. For some, that specific sense of time—what the cultural critic Raymond Williams described as “a structure of feeling”—began with President John F. Kennedy’s assassination in 1963. For the writer and poet John Perry Barlow, “the sixties” didn’t begin until 1966. “Prior to that,” he writes, “it was Eisenhower’s America.”

“From the start of the academic year in 1969 until the classes in September 1970, a youth rebellion shook the nation in ways we may never see again,” Bingham writes. “It was the crescendo of the sixties, when years of civil disobedience and mass resistance erupted into anarchic violence.” What makes that era different from today’s politics—when political debate is often confined to shouting on cable television and posting on Twitter—was that it was damn near close to actually being a revolution, she says. “The marches, demonstrations, rebellions, and resistance, in these often overlooked twelve months, threatened the very order of society.” During that period there were twenty-five political trials, the most famous being the prosecution of the Chicago 8.

Bingham chronicles the time before the “revolution” with voices such as David Harris, a draft resister who attended Stanford but also worked as an organizer in Mississippi when the New Left’s first struggle was the civil rights movement in the South. Harris, who married Joan Baez, was later jailed for his draft resistance; he was also one of those who saw the folly in the embrace of violence by some on the New Left, an embrace that would leave such an ugly legacy.

Frustration at being unable to effect change in U.S. policy in Vietnam led some Americans to “bring the war home” by engaging in arson and bombings, as in the rise of the radical, violent group Weather Underground, which included Mark Rudd, Bernardine Dohrn, and Bill Ayers. Tom Hayden and others recount how Students for a Democratic Society (SDS) went from being concerned with and supporting black empowerment to taking a more radical position opposing the Vietnam War before morphing into the Weather Underground. That group launched a program, “Days of Rage,” to bring the “revolution” to the U.S. At that point the New Left took the wrong turn and went down Revolutionary Road, which turned out to be a dead end. In the words of David Harris, it was “all these white kids pretending they were Black Panthers, and Black Panthers pretending they were third-world revolutionaries.”

Dohrn, one of the most wanted fugitives of that era, was a founding member of Weather Underground. When she and Bill Ayers (her husband and fellow Weatherman) finally surrendered in 1980, Dohrn was fined just $1,500 and placed on probation because the FBI had illegally obtained evidence against her. The Bureau had also targeted and incited violence against the Black Panthers, which led to the deaths of Fred Hampton and Mark Clark in Chicago. Panther leader Ericka Huggins always believed that her husband, John, was set up by the police for assassination, orchestrated by the FBI’s Counterintelligence Program—founded in 1956 primarily for targeting the activities of the American Communist Party, but often used to monitor groups from the Ku Klux Klan to the Black Panthers.

Even Cuban and Vietnamese allies told Mark Rudd, then an SDS leader at Columbia, “You’re way too far ahead of where the base is, and not only that, but the Vietnamese want a united antiwar movement. And you’re already calling for a revolution.” But the Weather Underground’s response to such criticism was that they knew America was ripe for revolution. “[The] Cubans and the Vietnamese don’t understand our situation,” Rudd told Bingham. (Turns out that the Cubans and the Vietnamese did; it was the Americans who didn’t.)

Rudd best sums up what happened to the New Left movement: “The Weather made a fundamental mistake in forgetting about base-level organizing, which is relationship building, coalition building, all the things that built the antiwar movement up to that time, had built the civil rights movement, had built the labor movement.” The black professor and Newbery Award–winning writer Julius Lester agrees. “[W]hen movements become ideological,” he observes, “they lose sight of people. Ideology becomes more important than people.”

With Americans now reflexively thanking volunteer armed forces “warriors” for their service in Iraq and Afghanistan, it is hard to recall a time when the Army rank-and-file was in near mutiny and returning veterans organized against the war. Remember the famous footage of John Kerry and hundreds of other vets tossing their medals over a fence in front of the U.S. Capitol? Resistance to the Vietnam War draft resulted in 3,250 men being held in federal prisons. More than 400,000 deserted, with 100,000 fleeing to Canada and Sweden.

It is, nonetheless, worth remembering how returning servicemen were treated. Crowds of antiwar protesters would meet the buses and hurl invective at them. Wayne Smith, one such soldier, recalled, “I went into the men’s room and there were all these khaki uniforms bulging out of the trash bin, where people had taken off their uniforms and thrown them away as fast as they could. . . . The uniform thing was big, not wanting people to know you were in the military.”

Although the political revolution may have fallen short, the awakening had enormous social impact: Americans, as individuals and as members of social groups, were beginning to question the society they lived in and what kind of society they wanted. Blacks were no longer going to be second-class citizens; women wouldn’t be content simply as mothers and wives; gay men and women weren’t going to exist in the closet or be persecuted. Unfortunately, these voices are barely represented in Witness. The rise of the Native American and Puerto Rican liberation movements isn’t even mentioned, despite the fact that Native American activists occupied Alcatraz Island for nineteen months beginning in 1969.

All of this is deeply ironic for a 600-page opus. Indeed, one of the most decisive moments of the revolution occurred on the night of the Stonewall riots, which launched the gay rights movement. But the gay experience is only hinted at, as an episode in the life of David Mixner, a closeted gay man who was one of the 1969 Vietnam Moratorium organizers. His story is interesting, since it shows how he was targeted by a “honey trap” and then shown photos of the tryst by men with badges. In response, he simply stepped back from his high-profile organizing against the war.

Will uninformed readers, particularly younger ones, believe that gays and blacks, Puerto Ricans, Chicanos, and Native Americans—the proverbial people of color—didn’t participate in the “revolution”? This narrative doesn’t account for how many black, Latino, and Native American activists were shot, killed, or incarcerated, compared to white activists, many of whom went on to relatively normal lives.

By excluding these voices, Bingham unintentionally paints a picture of white middle-class activists who “went crazy” but then settled down, got jobs, became moms and dads, and watched as their children created apps and businesses like Uber. While there may be nothing wrong with that, it underscores that the brunt of state repression fell on the black and brown backs of those who are mostly absent from Bingham’s book.

Witness to the Revolution is an important legacy document. Most of the people interviewed are Baby Boomers, who are going to begin exiting the scene within the next two decades, and it is important to have a record of what they saw and how they acted during that time. Yes, they made mistakes. But in the best American tradition, they cared about the soul of their country.
