March/April/May 2017 | Washington Monthly
https://washingtonmonthly.com/magazine/marchaprilmay-2017/

Can the ACLU Stop Trump?
https://washingtonmonthly.com/2017/03/19/can-the-aclu-stop-trump/ | March 20, 2017

New legal director David Cole thinks he knows how.

On a sunny afternoon in the first week of January, I met David Cole at his office at Georgetown Law Center. In a few days, he would officially take over as national legal director of the American Civil Liberties Union—but first he had to finish grading exams. Tall and gangly, with wire-rim glasses and unfussy clothes a size too big for his skinny frame, Cole looks every bit the public interest lawyer and professor he has been for nearly three decades. At fifty-eight, he has a permanently tousled mop of thinning gray-brown hair and an open, boyish face that breaks easily into a grin.

It was lunchtime, so we headed down to the cafeteria. Around mouthfuls of tuna sandwich, Cole ruefully recalled his expectations for the ACLU job when he accepted it last summer. “I, like everybody else, thought that Donald Trump didn’t really have a chance,” he said. Hillary Clinton would win the presidency and pick Antonin Scalia’s replacement, and Cole would get to spearhead the ACLU’s effort to move the law to the left under the first liberal Supreme Court majority since the 1970s. “And no memos were written on ‘What if Trump wins?’ ”

You know what happened next.

“On November 8, the job completely changed,” Cole said. “Suddenly, instead of thinking about incremental ways to advance the law in a more progressive direction, we’re in full defense mode.”

Trump’s election made certain jobs matter much more than they would have in normal times. Cole’s is one of them. As a candidate, Trump specialized in constitutionally suspect policy proposals: criminalizing abortion, “national stop-and-frisk,” mass deportations, a Muslim registry. Democrats in Congress simply don’t have the numbers to stop Trump from following through, and Republicans don’t appear interested. That means the only plausible place to challenge him is through the legal system.

As legal director of the ACLU, Cole is the new top lawyer of the largest and most powerful public interest law firm in the country—one whose war chest has exploded since the election, thanks to an enormous spike in donations. He supervises about 100 litigators in its national offices in New York and Washington, D.C., and provides support and advice for the 200 or so lawyers in the affiliate offices across all fifty states. (Those numbers are set to grow as the ACLU spends some of its new funds.) He is also, in the words of ACLU executive director Anthony Romero, “the keeper of the keys to the Supreme Court docket,” personally signing off on everything filed at the high court, and potentially arguing cases himself.

On his third day on the job, in January, Cole testified at the Senate Judiciary Committee hearing on Jeff Sessions’s nomination to be attorney general, where he delivered a lacerating critique of Sessions’s record on civil rights and sparred with Senator Ted Cruz. Two weeks later, he was editing the ACLU’s briefs in three high-profile Supreme Court cases, about transgender bathroom access, immigration detention, and voter suppression in North Carolina. Then, late that Friday afternoon, the Trump White House tossed its first bomb: an executive order temporarily banning immigration from seven Muslim countries and shutting down the refugee program.

The ACLU had been preparing for this moment since the election. Within minutes, its immigrants’ rights project had teamed up with a student clinic at Yale Law School, the National Immigration Law Center, and the International Refugee Assistance Project to file a case on behalf of two Iraqi men detained at John F. Kennedy Airport. Meanwhile, something astonishing was happening: thousands of ordinary people were flocking to airports around the country to protest the ban.

By Saturday afternoon, Lee Gelernt, deputy director of the immigrants’ rights project, was arguing their case in federal court in Brooklyn, and by around 9 p.m., the judge had issued a ruling blocking the government from deporting anyone who had been detained under the order. Gelernt and Romero emerged from the courthouse to a crowd of hundreds of cheering supporters. In the first of many looming battles with Trump, they were the first to draw blood.

The ongoing fight over the immigration ban is a preview for what any successful resistance to Trump will have to look like: swift, coordinated action in the courts combined with extensive public mobilization. No one is more sensitive to the interplay between these two forces than Cole. In fact, he wrote a whole book about it. In Engines of Liberty: The Power of Citizen Activists to Make Constitutional Law, published last year, Cole argues that constitutional rights must be won in the court of public opinion before they can be vindicated in courts of law. Civil society groups like the ACLU lay the groundwork for change by working simultaneously through many channels—state and international law, information requests, media campaigns—to shift public and expert opinion.

“In the end, I’m optimistic that our institutions are stronger than the will of a populist demagogue, even if he doesn’t back off,” Cole said when we first spoke, over the phone, in late November. “It’s not like they’re stronger in some essential or abstract way; they’re strong enough if people are concerned enough, if institutions push back enough. If people don’t resist, if people accept it, if people are chilled from expressing resistance, then I don’t think there’s anything magical about our institutions that will necessarily stop him.”

Cole takes the helm of the ACLU’s litigation team when the organization enjoys more popular support than perhaps at any other time in its ninety-seven-year history. Because civil rights are by definition protections against majority rule, the ACLU has made its reputation representing some of the most unpopular members of society, like when it successfully defended the First Amendment right of Nazis to march through Skokie, Illinois, at the Supreme Court in 1977. In the 1988 presidential campaign, George H. W. Bush played up the image of the ACLU as outside the American mainstream when he accused his opponent, Michael Dukakis, of being a “card-carrying member of the ACLU.”

The perception of the ACLU as a bunch of First Amendment zealots defending misfits and Communists began to shift after 9/11, as the organization emerged as one of the most dogged opponents of the war on terror. But the election of Trump vaulted it to the mainstream of liberal American civic life overnight. For the year 2015, total online donations added up to $3.5 million; for a few days after the election, people were giving a million dollars per day. The immigration ban triggered an even bigger flood of donations: $33.9 million in three days, according to an ACLU spokesman. The organization estimates that its active membership has more than doubled since October, from around 500,000 to more than a million.

The ACLU is far from the only organization gearing up to defend constitutional rights under a Trump presidency, but it’s certainly the most prominent. Unlike most legal nonprofits, which specialize in a particular area of law, the ACLU is set up to defend the full spectrum of constitutional rights (though conservative critics note that it doesn’t seem to care much about the right to bear arms). That makes it a natural clearinghouse for liberals and libertarians who can’t decide what to be most scared about. Cole himself has been litigating and studying just about every area where the ACLU is active since he was in his twenties. He has defended the First Amendment rights of artists and political protestors at the Supreme Court; challenged bans on using federal funds for abortion counseling; and, since long before 9/11, defended Muslims from prosecution and deportation in the name of national security. “David is the perfect top lawyer for this organization,” said Romero. “He’s someone who comes at it as one of the great legal minds in the country, but also someone who thinks about a diversity of tactics and strategies that need to be employed in addition to litigation to make a difference.”

In Engines of Liberty, Cole writes, “Framing and messaging are as essential to a constitutional campaign as formal legal argument.” His job now is to do both. Since long before his official start date, he has been helping craft the ACLU’s legal argument against the immigration ban—namely, that Trump’s extensive public comments prove that the ban was designed to target Muslims and favor Christians, in violation of the First Amendment right to freedom of religion. At the same time, he has been making that argument relentlessly in public forums. A longtime contributor to the Nation and the New York Review of Books, among other outlets, he churns out legal commentary incredibly quickly. The morning after the Ninth Circuit Court of Appeals upheld a nationwide freeze of Trump’s policy—and noted that the claims about Trump’s intent “present significant constitutional questions”—Cole had published a reaction on the New York Review of Books website. By evening, he had an op-ed in the Washington Post arguing that Trump’s immigration order, coupled with his public comments, is “like a governor signing a ‘voter ID’ law and simultaneously holding a news conference to announce that the purpose of the law is to suppress black votes.”

By breaking down the legal theory against Trump’s order into plain English, Cole hopes both to persuade undecided readers and to energize those who already care. In a 1992 law review article, he noted that the ACLU historically faced the “tension inherent in appealing to the mainstream while representing those whom the mainstream seeks to suppress, silence, or exclude.” If the ACLU is more mainstream than ever, it’s because Trump has largely dissolved that tension. “He is so divisive, and so threatening, that he is, ironically, a great unifier,” said Cole. “He is uniting people who care about civil liberties and civil rights like I’ve never seen before in my career as a lawyer.”

Cole almost didn’t become a lawyer at all. He graduated from Yale in 1980 with an English degree, aspirations to be a journalist, and no idea what to do next. So, like many gifted young people who were ambitious but unfocused, he applied to Yale Law School, which had a reputation as the liberal arts college of law schools. He got waitlisted. He was about to move home to Chicago and take a job trading stock options when, in the last week of August, he got a letter from Yale asking if he could start in a week.

“So I was the last person admitted to that class,” Cole said. “Someone pulled out at the last minute and they said, ‘Who can we pull in?’ ”

At law school, Cole kept taking classes in the English Department and wrote arts reviews for the undergraduate newspaper. “He had deep ambivalence, maybe more, about wanting to do law,” said Owen Fiss, one of Cole’s first-year professors. “He wore a black jacket, high-top sneakers. He was interested in being a jazz writer.”

An internship at the Center for Constitutional Rights changed his mind. The CCR is a scrappy public interest firm in New York City that specializes in long-shot constitutional litigation. It was the Reagan era, and Cole worked on lawsuits to shut down some of the administration’s aggressive foreign policy adventures in Latin America and eastern Europe. He laughed when I asked if any of those suits were successful: of course not. Still, he was intoxicated by the work. The CCR “just had kind of a chutzpah that I hadn’t seen anywhere else,” he said. He put in forty hours a week during his last year of law school, taking the train back to New Haven to attend classes a few days each week.

One lawyer Cole worked with was Jules Lobel, who is now president of the CCR. “At that point, I realized that he was going to be a star,” Lobel told me. “He wasn’t like any other law student that I’d worked with. His intelligence, his articulateness, and his writing—all three of those in combination were far better than any law student that I had ever worked with. It was better than most lawyers that I’d worked with.”

After graduating and clerking for a federal judge, Cole returned to the CCR as a staff attorney. His caseload grew “serendipitously,” he said, giving him early experience with a broad range of civil liberties issues. In one early case, he successfully defended the American-born writer Margaret Randall, who had become a Mexican citizen in the 1960s and was facing deportation under a McCarthy-era statute because she had written favorably of the Communist governments in Cuba, Nicaragua, and Vietnam. After that, he was contacted by the lawyers for a group of Palestinians who were being held in a California prison. The government, claiming national security interests, was refusing to share the evidence against them. It charged them under the same anti-Communist statute, since they allegedly had ties to a branch of the Palestinian Liberation Organization that dabbled in Marxism. “They charged them with ‘world Communism,’ and the lawyers out there said, ‘Well, has anyone done a Communism case in the last thirty years?’ ” Cole recalled. “And I had, so they called me and I joined that team.” The team convinced the judge hearing the case to give the government a choice: share its evidence with the detainees’ lawyers, or let the men go. It let them go. (The case dragged on for two decades, until the George W. Bush administration gave up trying to deport them.)

People’s court: The early legal victories against Trump’s immigration order took place against a backdrop of popular protests, like this one at John F. Kennedy Airport.

That led to more immigration and national security work, which made Cole unusually well prepared to take on Bush’s war on terror policies. “Because I did that case, I started getting contacted by various other Arab and Muslim immigrants who were getting deported or detained on the basis of secret evidence,” Cole said. “And that was all before 9/11.”

The cases that solidified Cole’s reputation as a top-flight civil liberties attorney were about political speech. The heated debate throughout the 1990s about a constitutional amendment to ban flag burning sprang from a pair of infamous Supreme Court cases that Cole litigated. In 1989, he and William Kunstler, one of the CCR’s founders and a celebrity of the legal left, persuaded the Supreme Court to overturn a Texas law criminalizing burning the American flag. That opinion, which held that flag burning was self-expression protected by the First Amendment, prompted a massive backlash, and Congress responded immediately by passing a ban of its own. So in 1990, Cole and Kunstler persuaded the justices to strike down that law, too. In each case, Kunstler did the oral arguments, but Cole developed the legal theory and wrote the briefs. A lawyer from the solicitor general’s office, which had defended the law, later told a reporter at Legal Times that Cole’s was the best opposing brief he had ever read.

Shortly after the flag-burning victories, Cole left the CCR to teach at Georgetown. He wanted to pursue academic writing, teaching, and journalism while continuing to work on cases on the side. (He lives in Washington, D.C., but now spends the workweek at ACLU headquarters in New York City; his wife, Nina Pillard, is a judge on the D.C. Circuit Court of Appeals.)

Many law professors litigate some cases, and many lawyers teach law classes. But Cole belongs to a smaller group of top-tier litigators who are also serious eggheads. His most interesting scholarship approaches the law not as a system of rules, and not as policymaking by judges, but as both—a process by which abstract legal rules are shaped by cultural currents and the unconscious needs of the judges who craft them. One of his first articles, published in 1986 in the Yale Law Journal, applies the literary critic Harold Bloom’s theory of poetic interpretation to the question of what makes Supreme Court justices “great.” Cole’s insight is that while following precedent is a foundational principle of judging, to be remembered as a great judge requires breaking from precedent—just as being remembered as a great poet requires first mastering, then  breaking from, poetic tradition.

In Engines of Liberty, Cole analyzes the meticulous, decades-long efforts to legalize same-sex marriage and persuade the Supreme Court to formally recognize an individual’s right to bear arms. But it’s the third section, on the fight against Bush’s war on terror policies, that’s most relevant to his job now. It’s easy to forget how brazenly the Bush administration claimed authority to act without legal constraints, because Bush left office cowed by both public opinion and the Supreme Court. But in the aftermath of 9/11, the administration basically declared itself above the law when it came to national security. It established a military tribunal to try alleged terrorists in which, Cole writes, “the executive branch would be judge, jury, and executioner,” with no room for judicial review. And it insisted that it could detain so-called “enemy combatants” at Guantánamo indefinitely, without even a hearing to determine whether they are subject to detention as prisoners of war. The law appeared to be on Bush’s side: the Supreme Court had ruled in 1950 that foreign prisoners of war held overseas had no right to challenge their detention in American courts. Former CCR president Michael Ratner, Cole’s friend and mentor, who died last year, told Cole that the case he brought challenging Bush’s detention policies seemed “completely hopeless.”

Yet two surprising things happened. First, the Supreme Court—with a conservative majority—ruled against the Bush administration four times in national security cases, including Ratner’s, between 2004 and 2008, rejecting its arguments that detainees could be held without a hearing and affirming Guantánamo prisoners’ right to judicial review. Second, the administration rolled back some of its policies even without any court saying it had to. By the time Bush left office, he had suspended the CIA’s “enhanced interrogation” program, closed its secret prisons, stopped sending detainees abroad to be tortured, released more than 500 of the 779 Guantánamo prisoners, and agreed to judicial oversight for warrantless wiretapping.

In Cole’s account, this was the product of a combination of strategies. The Supreme Court victories, he argues, were possible because human rights lawyers successfully framed the cases as a clash between Bush and the rule of law itself—not just in their legal filings, but in reports and speeches designed to marshal the opinion of the legal profession as a whole. They enlisted respected national security figures, like retired generals, to speak out against torture, which made it harder for the administration to justify itself to the public. They also waged a deliberate campaign to stir up international opinion against Bush’s policies, particularly in the United Kingdom, which had several citizens detained at Guantánamo. Cole interviewed former Bush administration officials, who told him that “foreign pressure had a significant impact on the curtailment of its counterterrorism measures,” because the U.S. depended on international cooperation to pursue its national security agenda.

Transparency was also crucial. “Human rights groups could not challenge what they could not see,” Cole writes. In 2003, two new ACLU staffers, Jameel Jaffer and Amrit Singh, began a Freedom of Information Act campaign that, over the next decade, would reveal nearly 6,000 documents detailing the administration’s torture program. Getting that information out may have made the Supreme Court less willing to defer to the administration’s promises that it was obeying the law.

Cole isn’t saying that Supreme Court justices turn on CNN, learn that people are protesting a certain policy, and so decide to rule against it. If courts always obeyed the majority will, constitutional rights would be toast. The influence of public opinion is more subtle. The 1944 case Korematsu v. United States, in which the Supreme Court upheld the legality of Japanese internment during World War II, has never been formally overturned. But decades of advocacy by Japanese American groups led to widespread recognition that the decision, like internment itself, was a national disgrace. Congress formally apologized for Japanese internment in 1988. By the time Fred Korematsu filed an amicus brief in the Supreme Court Guantánamo cases, the justices couldn’t help but be aware of the risk of again being on the wrong side of history. “To accept Bush’s position that he had unchecked authority to detain without judicial oversight would have looked dangerously like the excessive deference employed in Korematsu,” Cole writes.

The point is that the act of judging inevitably involves weighing abstract values that can’t be measured and put into effect without a sense of common knowledge and community beliefs. Judge James Robart, the federal district judge in Seattle who blocked the immigration ban nationwide, admitted as much in his written order. “Although the question is narrow,” he wrote, “the court is mindful of the considerable impact its order may have on the parties before it, the executive branch of our government, and the country’s citizens and residents.”

In some ways, there is more reason to expect the judiciary to check the executive branch now than in the early Bush years. In the terrified aftermath of 9/11, there was little public outcry at first on behalf of detained terrorist suspects, in sharp contrast with the immediate and overwhelming popular protests against Trump’s immigration ban. Bush had some goodwill stored up after campaigning as a moderate conservative and ably performing the role of strong-willed leader after the attacks. Trump, on the other hand, is virtually guaranteed to face a regular wave of skepticism unrivaled in modern presidential history. As Cole noted in the New York Review of Books, the judicial rulings against Trump’s immigration ban are part of a broader backlash from all corners of civil society. More than a hundred tech companies, including Google, Apple, and Facebook, supported the case filed in Seattle, as did a bipartisan group of national security officials. Cole marveled that General Michael Hayden, who ran the CIA and the NSA under Bush, tweeted, “Imagine that. ACLU and I in the same corner.”

“Ordinarily, when the government targets foreign nationals in the name of national security, you don’t see a widespread public reaction from Americans,” Cole said in early February. “We’re in a different moment, where people are so concerned about the threat that Trump poses to them that they are making alliances with those whose interests they don’t ordinarily share.”

Cole’s observation that human rights lawyers triumphed in court when they could cast Bush’s policies in terms of “the rule of law v. the government” bodes ill for Trump, who has already made a trademark (not literally, but give him time) of personally attacking judges who rule against him. After Robart blocked the immigration ban nationwide, Trump took to Twitter to blast “this so-called judge.” In its appeal, the Justice Department made the Orwellian argument that judicial oversight of executive orders violated the separation of powers.

That didn’t go over well. “There is no precedent to support this claimed unreviewability, which runs contrary to the fundamental structure of our constitutional democracy,” wrote the Ninth Circuit Court of Appeals panel that unanimously upheld Robart’s order. The opinion cited two of the national security Supreme Court rulings against Bush to support the proposition that the president isn’t above the law.

Cole’s optimism about the ability of lawyers and the public to partner to keep constitutional rights safe from a Trump presidency is infectious. But is it justified?

The Supreme Court victories against Bush were extremely modest. As Cole acknowledges in Engines of Liberty, they simply required giving detainees some legal due process rather than none. Cole himself was on the losing end of several lawsuits against the government. In one that particularly rankles, he represented Maher Arar, a Canadian citizen who was seized at JFK, sent to Syria to be tortured for ten months on the U.S.’s behalf, and never charged with a crime. A federal court ruled that letting him sue the government would interfere too much with national security.

Trump probably won’t bungle everything as badly as he did the immigration order. If the administration had simply provided some facts justifying its national security judgment, it might not have been blocked. Even if, on appeal, the Supreme Court were to agree with Cole’s theory about the order’s unconstitutionality, the lesson would be that Trump would have gotten away with it if only he hadn’t spoken so loosely. (As this article went to press, the administration was preparing to issue a revised, ostensibly more legally sound travel ban.)

Most urgently, we still don’t know what will happen after the next terrorist attack on U.S. soil—how opportunistically the administration will use it to crack down on the rights of immigrants, Muslims, and political opponents. Even here, though, Cole is cautiously optimistic. “To the extent Trump tries to make radical changes, I think he’s much less likely to succeed,” he said. The same slow processes that made the fight against Bush-era policies drag on for years, he argued, will make it hard for Trump to try to return to those policies. Indeed, shortly after inauguration, career national security officials rejected a draft executive order that would have revived the torture program. “I could be wrong,” Cole said, “but I think we do tend to learn from our mistakes.”

The question is to what extent lessons of the past apply to a Trump presidency. The Bush administration infamously relied on the “unitary executive” theory to argue that it could set aside laws that would limit the president’s power over national security. But that was still a theory of constitutional authority, crafted by lawyers operating within the norms of legal discourse. So far, Trump and his inner circle don’t appear to see the need to justify themselves in those terms. Their theory seems to be: “We won. Get over it.”

Even if the ACLU and others can marshal public and international opinion against Trump, will it matter? Trump acts like a man at once desperate for approval yet, paradoxically, unwilling to change his behavior to earn it. Critics are to be defeated, not listened to. Trump may really believe that all opposition is a concoction of the biased media—that, as he tweeted after surveys showed that most Americans disapproved of the Muslim ban, “[a]ny negative polls are fake news, just like the CNN, ABC, NBC polls in the election.” It’s hard to predict whether even massive resistance will constrain his use of executive authority.

The Constitution both empowers and limits government. The central idea of constitutional democracy, the principle the ACLU represents, is that there are some things elected officials may not do. But Trump is in charge now, and he embodies the opposite idea. This is most obvious in his refusal to extricate himself from his businesses, barely hiding his intention to use the presidency to enrich himself and his family. Enabled by a pliant Congress, Trump thus poses a high-stakes test of Cole’s theory and of his capacity as an advocate.

Cole begins Engines of Liberty with a quote from the great jurist Learned Hand: “Liberty lies in the hearts of men and women; when it dies there, no constitution, no law, no court can save it; no constitution, no law, no court can even do much to help it. While it lies there it needs no constitution, no law, no court to save it.” The first part is true, but the second is not. Even liberty-loving men and women need a legal system to preserve their rights, and they need groups like the ACLU to fight for those rights. Cole and the ACLU have to do two things: harness the energy of those Americans concerned about the rule of law, and persuade those who aren’t concerned that they should be. Without that, all the lawyering in the world won’t matter.

How to Make the Electoral College Work for Everyone
https://washingtonmonthly.com/2017/03/19/how-to-make-the-electoral-college-work-for-everyone/ | March 20, 2017

The Constitution asks us to elect a president of the United States, but what we get is a president of Ohio and Florida. There’s an easy way to fix that.

Since Donald Trump won the presidency despite losing the popular vote to Hillary Clinton by three million votes, the Electoral College has once again taken a thorough public flogging. Democrats, in particular, are enraged that the loser of the popular vote has now won two of the past five elections, first by a few hundred votes in Florida in 2000, and then in 2016 by fewer than 80,000 combined votes in three Rust Belt states.

But there is an even deeper problem with the Electoral College as it operates today, even when the popular vote winner and the Electoral College winner are the same. In every presidential election, the voters in all but a handful of states, and their concerns and views, are ignored by the campaigns and later by the administration of whoever is elected. This affects both Republicans and Democrats, voters in small states and voters in big states. As Wisconsin Governor Scott Walker, then a serious contender for the Republican nomination, put it in the fall of 2015, “The nation as a whole is not going to elect the next president. Twelve states are.” The Constitution asks us to elect a president of the United States. What we get is a president of Ohio and Florida.

This problem actually lies not with the Electoral College itself but with how states use it. The Constitution lets each state legislature decide how to award its electoral votes. Two states, Nebraska and Maine, award votes by congressional district, while the remaining states award all of their Electoral College votes to the winner of that state’s popular vote. It’s these state-level winner-take-all laws that make the votes of millions of Americans effectively meaningless.

The good news is that, just as states were free to adopt or not adopt a winner-take-all approach, they remain free to change their laws to ensure that the popular vote winner becomes president. The way to do this is simple: states pledge to award all of their Electoral College votes to whoever wins the national popular vote. A project to make this happen is already under way. It’s called the National Popular Vote Interstate Compact.

Ten states (and the District of Columbia, which has three Electoral College votes) have already passed legislation to take this approach, with a proviso that the law kicks in only when the number of Electoral College votes of the enacting states reaches 270, the number necessary to win the presidency. So far, 165 electoral votes have been pledged. Twelve more states (with ninety-six votes) have passed the law in one legislative body. If these states, plus just a few others, pass the law, then the Electoral College will function as most Americans want it to: it will award the presidency to the winner of the popular vote. More importantly, presidential campaigns will finally start paying attention not just to the very few voters in “battleground” states, but to all voters, everywhere.

[Figure 1 (Silberstein_PieChart). Credit: FairVote]

If your only knowledge of U.S. geography came from watching presidential campaign appearances, you would be forgiven for thinking that the most powerful country in the world is a pretty small place. Clinton and Trump, and their running mates, made a total of 399 campaign appearances after officially winning their party’s nomination. Of these appearances, over two-thirds were in just six states: Florida, North Carolina, Pennsylvania, Ohio, Virginia, and Michigan. If you add in the next six states considered competitive, that accounts for 375 of the 399 visits. In other words, the candidates campaigned almost exclusively in only twelve states. (Three or four of these are true battlegrounds every four years; the others lean Democratic or Republican, but are still contested.) Consider Pennsylvania, population 12.7 million. The candidates went there fifty-four times to meet with voters. Do you want to guess how many times they visited neighboring New York, New Jersey, Maryland, and West Virginia (combined population 36.4 million)? If you guessed zero, you would be right. In fact, just shy of half of all states were completely ignored by the campaign road show. See Figure 1.

Not surprisingly, the lack of attention candidates pay to non-battleground, or spectator, states is also reflected in which issues get prioritized: the declining coal and steel industries in Rust Belt Ohio and Pennsylvania, like the foreign policy preoccupations of Cuban immigrants in Florida, take center stage in presidential politics. Meanwhile, barely a word is spoken about, say, the catastrophic droughts regularly faced by spectator states like California and Texas.

It would be bad enough if the battleground problem manifested itself only in campaign schedules and local TV saturated with almost nothing but campaign ads. But because presidents never really stop campaigning—either for reelection or for their successor—battleground states exert a gravitational pull on their agendas even after they take office. The fact that it took until 2016 to move away from the absurd Cuba embargo is not, of course, because Cuba changed in some important way, but rather because a rapprochement would have alienated a bloc of Cuban American voters in south Florida crucial for winning that state’s Electoral College votes.

The battleground problem affects not only the positions presidents adopt, but also how their administrations distribute tax dollars. The 2009 stimulus bill, for instance, set aside money for high-speed rail and gave President Obama discretion as to where to build it. So where did he decide the first project should be? Maybe the D.C.-to-Boston corridor, the nation’s busiest and perhaps most dilapidated commuter rail line? Or possibly one of the top priorities of the voters of the nation’s most populous state, a high-speed rail connection between San Francisco and Los Angeles? No, the president instead proposed spending $2.4 billion in federal funds to build a high-speed rail line covering the eighty-four miles between Tampa and Orlando. (In 2011, Florida’s then Governor Rick Scott told Obama to spend the money elsewhere.)

That’s just one anecdote, of course. But in his 2014 book, Presidential Pork, political scientist John Hudak found that, controlling for other factors, battleground states receive 7 percent more than other states in federal grant money, even in a president’s second term.

In short, the winner-take-all approach of awarding a state’s Electoral College votes hurts the majority of Americans no matter who ends up in office. Unless you live in one of the very few battleground states, your concerns, your policy preferences, and your votes simply don’t matter. And your taxes are disproportionately funneled to the states whose citizens get an outsize role in calling the shots on who will be the president.

The momentum for moving to a popular vote without amending the Constitution began brewing after the 2000 election, when the law professor brothers Akhil and Vikram Amar, as well as Robert Bennett, wrote papers introducing the idea and pointed out that if just the eleven most populous states adopted it, they would clear the 270 electoral vote hurdle. In 2006, computer scientist Dr. John Koza, now chairman of the National Popular Vote organization, wrote up a detailed proposal for what he called the National Popular Vote Interstate Compact. Here is how it works: A state legislature passes a statute pledging its electors to whoever wins the national popular vote. The brilliant part is that the law doesn’t kick in until states representing at least 270 Electoral College votes have passed the law as well. That ensures that Texas, for instance, wouldn’t risk giving a victory to a Democrat who won the popular vote but otherwise wouldn’t have won the Electoral College.
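To make the compact’s trigger mechanism concrete, here is a minimal sketch in Python. The function names are invented for illustration, and the membership roster in the usage example is a hypothetical partial subset rather than the actual list of enacting states (though the electoral-vote counts for those four states are real).

# A minimal sketch of the compact's trigger logic. The member roster below is
# a hypothetical partial subset, not the actual list of enacting states.
ELECTORAL_VOTES_TO_WIN = 270  # a majority of the 538 electoral votes

def compact_in_effect(member_votes):
    """The compact activates only once member states control 270 or more votes."""
    return sum(member_votes.values()) >= ELECTORAL_VOTES_TO_WIN

def award_electors(member_votes, statewide_winners, national_winner):
    """Return each member state's award: its own statewide winner while the
    compact is dormant, the national popular-vote winner once it activates."""
    active = compact_in_effect(member_votes)
    return {state: (national_winner if active else statewide_winners[state])
            for state in member_votes}

# Hypothetical usage: four members totaling 114 votes leave the compact dormant.
members = {"Maryland": 10, "Illinois": 20, "California": 55, "New York": 29}
print(compact_in_effect(members))   # False: the members are still short of 270
print(270 - sum(members.values()))  # 156 more electoral votes needed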

In the decade since National Popular Vote (on whose board I sit) got started, it has made swift progress. In 2007, Maryland became the first state to pass the compact, after Jamie Raskin, a state senator (and now congressman), spearheaded the effort and Governor Martin O’Malley signed it into law. Since then, nine more states (plus the District of Columbia) have followed suit, including small states like Rhode Island and Vermont and big states like Illinois, New York, and California. In twelve additional states, including Arizona and Connecticut, the measure has passed one state legislative body. See Figure 2.

Opponents of the compact raise a number of objections. First, they claim that the state-level winner-take-all approach used by most states today is what the Founding Fathers wanted. This is false. Only three states used it in the first presidential election. It did not come into widespread use until 1836, after a dozen presidential elections had been held using other methods.

[Figure 2 (Silberstein_Map). Credit: FairVote]

Another argument is that the Electoral College gives more weight to small states that would be drowned out in a popular vote. By giving all states one elector per senator, plus one for each member of the House of Representatives, the smallest states have three electoral votes instead of just one. Switch to a popular vote, some critics worry, and small states will be totally ignored as New York, California, and Texas voters call the shots. (A variation of the argument is that small states tend to vote Republican. In fact, the eleven states besides New Hampshire that have only three or four electoral votes, plus D.C., are split evenly between red and blue.)
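The allocation rule described in the paragraph above reduces to a one-line formula. The sketch below is illustrative; the House seat counts it uses (one for Wyoming, fifty-three for California) come from the 2010-census apportionment in effect for the 2016 election and are not stated in the article.

# Electoral votes = 2 senators + the state's House delegation.
def electoral_votes(house_seats):
    return 2 + house_seats

print(electoral_votes(1))   # Wyoming: 3 electoral votes despite a single House seat
print(electoral_votes(53))  # California: 55 electoral votes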

But in practice, it’s not true that small states get any benefit from the Electoral College. If their votes were so important, candidates would seek them out. Yet Iowa, Nevada, and New Hampshire are the only states with fewer than five million citizens among the twelve that the campaigns paid attention to in 2016. Having three electoral votes instead of one doesn’t mean anything if your state is uncompetitive—just ask true-blue Vermont, which got zero general election campaign appearances while purple New Hampshire (four electoral votes) got twenty-one. The important distinction is battleground versus spectator, not big versus small. If you live in an uncompetitive state, whether it’s California or Wyoming, your vote has the same practical value: zero.

Still others object that the National Popular Vote Interstate Compact is an end run around the Constitution and that the proper way to solve the problem is via a constitutional amendment. In fact, the current winner-take-all system in the states has no basis in the Constitution. The Founding Fathers’ original vision was for state legislatures to decide how to assign their electoral votes in the best way for their citizens. The compact is completely in line with that vision: letting the spectator states decide for themselves to adopt a popular vote means letting them opt in to a system that will bring them more attention from presidential candidates and a fairer share of the economic pie from presidents.

A final objection is not substantive, but practical: Why would Republicans, who overwhelmingly control state governments, go for this? After all, their guy won thanks to the Electoral College—twice. Why give up such a clear advantage?

First of all, all spectator states, red and blue alike, are getting punished by the current system. Governors and state legislators want federal money flowing to their states. They want their policy concerns to carry weight with the White House. But President Trump doesn’t have to pay the slightest attention to what Texans want, because Texas’s thirty-eight electoral votes are safely Republican. It’s true that the states to have fully passed the compact so far are all blue states. But in both Oklahoma and Arizona, two reliably red states, one of the two legislative houses has passed the popular vote statute in an overwhelming and bipartisan vote. It also passed 57–4 in the Republican-controlled New York state senate. These Republican lawmakers recognize that no matter what your party is, it’s not good for your state to be irrelevant in presidential politics.

Second, while it’s true that the GOP benefited from the Electoral College in 2000 and 2016, the tables can turn. In fact, they almost did in 2004, when George W. Bush won the popular vote by three million but only won the Electoral College because he carried Ohio, with twenty electoral votes, by about 120,000. A shift of 60,000 votes in Ohio from Bush to John Kerry would have given Kerry the state and a 271–266 Electoral College victory. Kerry would have become president despite losing the popular vote, just as Bush did four years earlier.

Third, demographic changes may soon make heretofore reliably red states such as Arizona, Georgia, and Texas up for grabs. If that happens, the Electoral College could spell doom for Republican presidential prospects.

Several leading national Republicans have expressed support for moving to a popular vote. Newt Gingrich has endorsed National Popular Vote explicitly, writing in a letter to John Koza that “our president must be the president of an enormously complex and varied country—of those in midtown Manhattan and southern California, as well as those in rural Oklahoma and the wilderness of Alaska.” Trump himself called the Electoral College a “disaster for democracy” in 2012 and, shortly after his 2016 victory, told 60 Minutes, “I’m not going to change my mind just because I won. But I would rather see it where you went with simple votes.” In December, he tweeted, “I would have done even better in the election, if that is possible, if the winner was based on popular vote—but would campaign differently.” And in January, the Wall Street Journal reported, he brought up replacing the Electoral College with a popular vote in a meeting with congressional leaders.

Trump’s claim may have been yet another example of salesman’s bluster, or it could be true; there’s no way to know. Campaigning really would be different in a popular vote system. There are tens of millions of votes given up for dead by both parties because they’re in states that inevitably go for the other party. One reason the Democrats seem to have an advantage in the popular vote is that the GOP doesn’t even bother going after Republican voters in California and New York, two of the biggest states. But California has five million registered Republicans; Trump got about 4.5 million votes there, just shy of his total in Texas. Who knows how many more Republican votes Trump could have gotten in California (or how many more Democratic votes Clinton could have gotten in Texas) if they had had reason to make an effort?

Enacting the National Popular Vote Interstate Compact wouldn’t just make presidents campaign and govern for the whole country. It would also create radically more room for political engagement overnight. If there is anything about the American political system that gets as much abuse as the Electoral College, it is the fact that so few people vote. The two problems are related: turnout is about 11 percent higher in battleground states. Presidential contests are the one situation in which almost every person knows who the candidates are and has an opinion on them. Yet the votes of almost every person not living in a battleground state simply don’t matter. People know that, and hence don’t waste their time voting. The compact will increase voter turnout, plain and simple, by giving every voter in the country a reason to participate in our democracy.

The popular vote may also be the only way of keeping things from getting much worse. Some deeply partisan Republican political operators are introducing state legislation to award Electoral College votes based on who wins the popular vote in each congressional district. This would be a disaster, even if every state were to adopt it. Fewer people live in “swing districts” than in swing states, meaning the system would take us even further away from the national popular vote. And in the many districts with extreme partisan gerrymandering, candidates would win electoral votes beyond their proportional share of a state’s popular vote. Republican candidates would peel away electoral votes from states that otherwise go blue in presidential elections. The popular vote compact is the best way to neutralize these and other anti-democratic efforts, because once 270 electoral votes’ worth of states sign on, it doesn’t matter what the rest of the states do.

There are many problems with American democracy that seem impossible to fix. The Electoral College is not one of them. The national popular vote initiative is well on its way to making sure that the loser of the popular vote never again is awarded the presidency and that presidential candidates and administrations pay attention to all Americans—whether Democrat or Republican, urban or rural, living in big states or small states—and all fifty states, not just twelve.

But it needs one last push, and the commitment of 105 more Electoral College votes, to get over the finish line. Ultimately that is up to the American people. Signing a petition urging a constitutional amendment feels good, but accomplishes nothing. The road to the popular vote runs through statehouses. State legislators, unlike U.S. senators, pay attention to letters and phone calls from constituents, in part because they don’t receive that many. If the popular vote compact is to become a reality, it will be because enough Americans, tired of being ignored by the president and the campaigns, tell their state legislators to fix the system using the power that the Founding Fathers gave them.

The Thinking Person’s Guide to Infrastructure
https://washingtonmonthly.com/2017/03/19/the-thinking-persons-guide-to-infrastructure/ | March 20, 2017

Instead of embracing Donald Trump’s vision for gargantuan and indiscriminate building projects, Congress should insist that any federal infrastructure legislation be focused on delivering what the market says Americans want: walkable communities.

SIDEBAR: Oklahoma City, Oklahoma — Walking Gains Ground in City Once Rated “Worst” for Pedestrians

In addition to the three major policy shifts described above, any new infrastructure bill, to be successful, will also require both liberals and conservatives to accept some concessions.

Liberals must concede that conservatives are right about environmental regulations having become a way for some people to simply stop beneficial infrastructure and real estate projects they don’t like. Environmental regulation in general, and especially the complex process of public comment and multiple reviews mandated by the federal government’s National Environmental Policy Act, must be modified to speed up the approval process and not encourage endless lawsuits and delays. It is imperative that we establish a maximum time period for appeals of infrastructure projects. Sound Transit’s fourteen-mile light rail line from downtown Seattle to the east side of Lake Washington is taking seventeen years to plan and build, rather than the four to six it should take. In Beijing, by contrast, the government has been building subways at eighteen miles per year. The slow progress is not just a waste of money in itself; it also delays the economic and tax-producing benefits that attend walkable urban development around new stations. Meanwhile, polluting highway congestion continues to strangle the regional economy.

Conservatives must concede that liberals are justified in warning that the building of walkable development often harms poor and lower-income Americans. Enabling more such projects could eventually bring down prices to the point where middle-class families could afford them. Plus, walkable communities are more equitable than they might first appear. Low-income households—defined as those making 80 percent or less of a metro area’s median income—typically devote about 40 percent of their income to housing. But counter-intuitively, they pay the same (punishing) percentage whether they live in metros areas with few walkable places (say, Tampa) or many (say, Seattle). And those who live in the latter typically pay 40 percent less in household transportation costs—because they don’t need a car, or can get by with only one—and have access to two to three times more jobs, which on average pay much better.

SIDEBAR: St. Michael, Minnesota — Life in the Exurbs

Still, there’s no getting around the fact that many walkable developments cater to the affluent and push out the poor. The only remedy is to consciously add affordable housing requirements. Any federal infrastructure bill should therefore stipulate that a local government receiving federal infrastructure funding (grants or loans) must require that 10 to 20 percent of housing units built within walking distance of those projects be affordable to families making below the area’s median income.

Finally, both liberals and conservatives must come to a shared understanding of how we want emerging technologies to serve us. New transportation technologies and business models, including Zipcar, Uber, drones, and self-driving cars, will have a dramatic impact on how our built environment evolves. Some predict that they will inevitably revitalize drivable suburban development (long commutes aren’t such a problem if you can watch a movie on your phone while riding). Others foresee them making walkable urban development even more popular (cheap driverless vehicles will make car ownership obsolete and the drive from home to rail stations an affordable breeze). The truth is, of course, that nobody knows the future impact of these technologies, and we have to make surface transportation decisions now.

The transportation policies fashioned by elected officials in Washington more than half a century ago created the drivable suburbs. Today, the market is clearly signaling that more and more Americans want a different, more walkable built environment for themselves and their children. Our leaders in Washington should listen.

SIDEBAR: Charleston, South Carolina — Where Walking is a Luxury

Walkable urbanism is now taking place in nearly every metro area in the country, including San Diego, Salt Lake City, Chattanooga, and Philadelphia. The most walkable urban metros in the country have a 49 percent higher GDP per capita than the most drivable ones—equivalent to the difference between the GDP per capita of a First World country such as Germany and a Second World country such as Russia.

While there are more walkable urban places than there used to be, demand still substantially outstrips supply. Progress has been slow, in part because real estate developers can’t build as fast as market preferences shift (the country adds only about 2 percent to the real estate inventory in a good year), and in part because government policy has not caught up with what the market wants.

Take zoning and building codes. As mentioned above, these codes, enacted by local governments, have long mandated drivable suburban development. This means that in vast numbers of American municipalities it is literally illegal to build walkable communities. Any developer who tries to get those local codes changed in order to construct walkable projects runs into a multiyear variance process and a jihad of NIMBY protests and lawsuits from local residents worried about increased traffic, overloading of schools—especially with students from lower-income families—and loss of open space.

The irony of this opposition (which often comes from a small minority of residents) is that walkable places don’t take up much land, as Arlington, Virginia, demonstrates. According to recent research, typically 5 to 7 percent of total metropolitan land is all that is required for walkable urban development; the rest of a region’s real estate will likely stay drivable in nature for decades. And despite the outcry by some homeowners, single-family homes immediately adjacent to walkable urban town centers, such as Birmingham, Michigan; Kirkland, Washington; and Park Cities, next to Dallas, have a 40 to 100 percent price premium over drivable homes in these same towns. Residents can live in suburban splendor and still walk to fine restaurants, theaters, and shopping outlets. Sometimes the most vociferous NIMBY critics are the very ones who would benefit the most from the urbanization of the suburbs.

Two recent shifts in public policy are beginning to accelerate the trend toward walkable development. One is the increasing willingness of local voters to support proposals to raise local taxes for rail, bus, biking, and pedestrian projects. A record seventy-seven such proposals were on local ballots in 2016. Of those, 71 percent passed—a success rate that has been consistent for such ballot measures for more than a decade. The 2016 measures commit more than $200 billion, primarily for major rail and bus expansions, in metro areas like Los Angeles, Atlanta, and Raleigh—cities that have been poster children for sprawl. That’s a lot of local money. By comparison, the big transportation bill that President Obama signed in 2015 authorized $305 billion over five years for the entire country, the bulk of it for highways.

The other positive development is a set of provisions in the 2015 highway bill passed by a Republican Congress and signed by Obama that will make it much easier for places such as Los Angeles, Atlanta, and Raleigh to get their transit projects up and running. Even with dedicated local sources of revenue, local governments need cash-on-hand financing before they can break ground on big-ticket ventures such as rail lines. They can raise the money themselves by floating bonds. But that risks lowering their bond ratings, which can drive up the cost of other financing. A much better option is to borrow the money directly from the federal government, which itself can borrow at rock-bottom rates (less than 3 percent), or to have the feds guarantee their municipal bonds, which has the same effect. The U.S. Transportation Department has long provided two loan programs for this kind of funding—one for the building of toll roads, the other for freight and passenger rail projects (which was so restrictive it seldom got used). The 2015 transportation law merged these two programs and broadened the availability of financing to include rail and other transit systems and the infrastructure around stations. Los Angeles is leveraging this program to finance new transit projects approved by voters last fall.

With this system in place—transit-oriented infrastructure built with locally pledged revenues but financed by low-cost federal loans—walkable communities will spring up in more and more places throughout the country, even if the Trump infrastructure push fizzles. But clearly this economic and development transformation could spread much faster with a major infrastructure bill.

Such a bill should hew to three basic principles.

First, the federal government shouldn’t play favorites and pick winners and losers by advantaging one form of transportation infrastructure over others. In practice, this means that highway, mass transit, bike, and walking projects would get the same percentage of federal matching grants—as opposed to the current practice, in which highways get much larger matches. The federal share of that match would likely be somewhere between 20 and 40 percent, depending on the expansiveness of the infrastructure offerings.

Because of Republican resistance in Congress, Trump will almost certainly need large numbers of Democratic votes to pass any substantive infrastructure bill. That means Democrats will likely have significant leverage in determining the shape of such a bill. What should they demand in return for their support?

Second, the bill should allow the level of governance closest to the voters—cities, counties, metropolitan planning organizations, and place-specific organizations such as business improvement districts—to take the lead in determining what infrastructure investments best serve their communities. Traditionally, decisions on specific projects were made either by lawmakers in Washington’s “smoke-filled rooms” (the elimination of earmarks has curbed their power) or by state departments of transportation, which much prefer funding rural highways rather than urban rail transit projects. Local governments have had virtually no formal role in the decisionmaking process. This should change. Under a new transportation infrastructure bill, municipal governments, metropolitan-wide entities, and local governance organizations should have the same rights as state DOTs to compete for federal transportation dollars.

Third, Washington should insist that localities have skin in the infrastructure game—that is, that they find local sources for the funds needed to maintain the infrastructure and service the debt that federal grants and loans make possible. Repayment could come in the form of pledged sales or property tax increases, or as a percentage of the increased value of adjacent real estate, paid for by private-sector developers.

Indeed, any new infrastructure program should encourage private property owners and developers to share the burden of building local infrastructure, as their counterparts did a century ago. Developers are increasingly willing to effectively tax themselves by sharing the financial upside resulting from real estate projects made possible by these transportation investments. With federal infrastructure grants in short supply, this kind of model—combining local taxes and federal support with private-sector investments paying off cheap federal loans—makes great sense, far more sense than the expensive tax credit scheme the Trump team has put forth.

To help out, the feds should lift the ban on state and local governments charging tolls on interstate highways. Currently, tolls are allowed on only a few stretches, such as in New Jersey and Pennsylvania, where previously built state toll roads were incorporated into the national interstate system. Localities everywhere should be permitted to charge tolls and use the revenue as they see fit. This practice, especially when based on “congestion pricing”—tolls that go up when traffic is higher and down when it’s lower—is a market-based tool to allocate highway usage and mitigate traffic congestion, while at the same time paying for the overdue maintenance of those roads.

These three federal policies (equal treatment for transportation modes; the lowest level of government control; and skin in the game) would give local communities both the opportunity and the responsibility to make their own decisions. If they want to build “bridges to nowhere”—that is, extend transportation and other infrastructure beyond the metropolitan fringe—they’d be free to do so, but they would need to figure out how to repay the federal loans.

Based on the recent experience of places such as Atlanta, Raleigh, and Los Angeles, it’s more likely that metro areas will respond to market pressures by using their enhanced freedom to develop walkable communities. They’ll build new mass transit systems, lay down networks of bicycle lanes, and replace stretches of interstates that have destroyed the pedestrian capacities of their cities. They can achieve the latter by creating underground tunnels for their interstates, as Boston did for I-93 with its “Big Dig” project. Alternatively, and far less expensively, they can build land bridges across freeways, as Seattle, Dallas, and Duluth have done, or turn highways into boulevards, as San Francisco did after the elevated Embarcadero Freeway collapsed in the 1989 earthquake. Today, Embarcadero Boulevard is one of the most popular parts of the city, with adjacent property values and tax revenues far higher than before yet with no reduction in traffic movement.

While the old policy of endlessly widening and expanding highways makes no sense in an era when the public increasingly wants walkable development, many existing highways are in dire need of investment, especially in metro areas where they take a constant pounding. This will be an immensely expensive undertaking and won’t spur adjacent walkable development, but it still needs to be done. The Maryland section of Washington’s famous Beltway, for instance, needs to be rebuilt from the dirt up. The price will be far higher in constant dollars than what it cost to build it in the first place because the renovation has to happen lane by lane, at night and on weekends, in order to accommodate tens of thousands of cars every day.

SIDEBAR: Oklahoma City, Oklahoma — Walking Gains Ground in City Once Rated “Worst” for Pedestrians

In addition to the three major policy shifts described above, any new infrastructure bill, to be successful, will also require both liberals and conservatives to accept some concessions.

Liberals must concede that conservatives are right about environmental regulations having become a way for some people to simply stop beneficial infrastructure and real estate projects they don’t like. Environmental regulation in general, and especially the complex process of public comment and multiple reviews mandated by the federal government’s National Environmental Policy Act, must be modified to speed up the approval process and not encourage endless lawsuits and delays. It is imperative that we establish a maximum time period for appeals of infrastructure projects. The Seattle Sound Transit’s fourteen-mile light rail line from downtown to the east side of Lake Washington is taking seventeen years to plan and build, rather than the four to six it should take. In Beijing, by contrast, the government has been building subways at eighteen miles per year. The slow progress is not just a waste of money in itself, but also delays the economic and tax-producing benefits that attend walkable urban development around new stations. Meanwhile, the polluting highway congestion continues to strangle the regional economy.

Conservatives must concede that liberals are justified in warning that the building of walkable development often harms poor and lower-income Americans. Enabling more such projects could eventually bring down prices to the point where middle-class families could afford them. Plus, walkable communities are more equitable than they might first appear. Low-income households—defined as those making 80 percent or less of a metro area’s median income—typically devote about 40 percent of their income to housing. But counter-intuitively, they pay the same (punishing) percentage whether they live in metros areas with few walkable places (say, Tampa) or many (say, Seattle). And those who live in the latter typically pay 40 percent less in household transportation costs—because they don’t need a car, or can get by with only one—and have access to two to three times more jobs, which on average pay much better.

SIDEBAR: St. Michael, MinnesotaLife in the Exurbs

Still, there’s no getting around the fact that many walkable developments cater to the affluent and push out the poor. The only remedy is to consciously add affordable housing requirements. Any federal infrastructure bill should therefore stipulate that a local government receiving federal infrastructure funding (grants or loans) must require that 10 to 20 percent of housing units built within walking distance of those projects be affordable to families making below the area’s median income.

Finally, both liberals and conservatives must come to a shared understanding of how we want emerging technologies to serve us. New transportation technologies and business models, including Zipcar, Uber, drones, and self-driving cars, will have a dramatic impact on how our built environment evolves. Some predict that they will inevitably revitalize drivable suburban development (long commutes aren’t such a problem if you can watch a movie on your phone while riding). Others foresee them making walkable urban development even more popular (cheap driverless vehicles will make car ownership obsolete and the drive from home to rail stations an affordable breeze). The truth is, of course, that nobody knows the future impact of these technologies, and we have to make surface transportation decisions now.

The transportation policies fashioned by elected officials in Washington more than half a century ago created the drivable suburbs. Today, the market is clearly signaling that more and more Americans want a different, more walkable built environment for themselves and their children. Our leaders in Washington should listen.

SIDEBAR: Arlington, Virginia — Who Says No One Walks in the Suburbs?

Meanwhile, Baby Boomers’ kids, the Millennials, aspire to pretty much the same lifestyle, which is causing a problem for Boomers trying to sell their large drivable homes. Having mostly grown up in traffic-choked suburbs, Millennials have never been instilled with the old American romance about driving. Smartphones, not cars, are the objects of their affection. Many Millennials have settled in city centers and now, as they start to have children, are focusing their energy on improving the public school systems, not home hunting in the suburbs. And Millennials who do live in the suburbs generally pick closer-in, urbanizing ones over the greenfield developments to which their parents flocked in the 1970s and ’80s. Together these two generations, Baby Boomers and Millennials, account for a majority of the U.S. population.

The urbanization of the suburbs is the most understudied development trend in the country right now. A fascinating example is Arlington, Virginia. In the mid-twentieth century, Arlington was an entry-level bedroom community for Washington, D.C. It became car dependent starting in the 1950s, when the streetcars that ran from D.C. along Wilson Boulevard, Arlington’s commercial “Main Street,” were shut down and a new shopping mall, Parkington (named for its then-innovative multistory parking garage), was built.

By the 1980s, as the cutting edge of development in the D.C. region moved farther out, the area around Wilson Boulevard had become economically depressed. But change was happening beneath the surface, literally, as a line on the underground Metrorail system was being constructed below Wilson. Since then, a building boom has transformed the area. Each Metro stop along Wilson has become a mini downtown of high-rises filled with offices, condos, and rental apartments, interspersed with shops and restaurants. These walkable neighborhoods, which make up 10 percent of Arlington’s landmass, now contribute 55 percent of the county’s tax revenue, up from 20 percent from the same area twenty-five years ago. Tax revenue now supports a highly diverse and academically distinguished public school system, where students speak more than eighty languages. Arlington’s new challenge is that housing prices are now among the highest in the region because of pent-up demand, especially by Millennials looking for an urbanizing suburb with good schools.

Other parts of the D.C. suburbs aren’t faring quite as well. Housing prices in exurban communities such as Manassas Park, Virginia, and Upper Marlboro, Maryland, are flat or just beginning to rise following many years of decline, and prices for high-end homes in tony suburbs with no Metro lines, such as Great Falls, Virginia, and Potomac, Maryland, remain below their pre-recession levels. Fortunately for greater D.C., the Metrorail system is extending new lines to currently unconnected suburbs (even as it wrestles with deferred maintenance and high demand), creating more opportunities for walkable developments and for the companies that build them. Last year, the area’s biggest real estate development firm, JBG Smith, shed nearly all its properties except for those located within a half mile of a Metro stop, and all of its seventy new development projects are within walking distance of the Metrorail.

SIDEBAR: Charleston, South Carolina — Where Walking is a Luxury

Walkable urbanism is now taking place in nearly every metro area in the country, including San Diego, Salt Lake City, Chattanooga, and Philadelphia. The most walkable urban metros in the country have a 49 percent higher GDP per capita than the most drivable ones—equivalent to the difference between the GDP per capita of a First World country such as Germany and a Second World country such as Russia.

While there are more walkable urban places than there used to be, demand still substantially outstrips supply. Progress has been slow, in part because real estate developers can’t build as fast as market preferences shift (the country adds only about 2 percent to the real estate inventory in a good year), and in part because government policy has not caught up with what the market wants.

Take zoning and building codes. As mentioned above, these codes, enacted by local governments, have long mandated drivable suburban development. This means that in vast numbers of American municipalities it is literally illegal to build walkable communities. Any developer who tries to get those local codes changed in order to construct walkable projects runs into a multiyear variance process and a jihad of NIMBY protests and lawsuits from local residents worried about increased traffic, overloading of schools—especially with students from lower-income families—and loss of open space.

The irony of this opposition (which often comes from a small minority of residents) is that walkable places don’t take up much land, as is the case in Arlington, Virginia. According to recent research, typically 5 to 7 percent of total metropolitan land is all that is required for walkable urban development. The rest of a region’s real estate will likely stay drivable in nature for decades. Ironically given the outcry by some homeowners, single-family homes immediately adjacent to walkable urban town centers, such as Birmingham, Michigan; Kirkland, Washington; and Park Cities, next to Dallas, have a 40 to 100 percent price premium over values of drivable homes in these same towns. Residents can live in suburban splendor and still walk to fine restaurants, theaters, and shopping outlets. Sometimes the most vociferous NIMBY critics are the very ones who would benefit the most from the urbanization of the suburbs.

Two recent shifts in public policy are beginning to accelerate the trend toward walkable development. One is the increasing willingness of local voters to support proposals to raise local taxes for rail, bus, biking, and pedestrian projects. A record seventy-seven such proposals were on local ballots in 2016. Of those, 71 percent passed—a success rate that has been consistent for such ballot measures for more than a decade. The 2016 measures commit more than $200 billion, primarily for major rail and bus expansions, in metro areas like Los Angeles, Atlanta, and Raleigh—cities that have been poster children for sprawl. That’s a lot of local money. By comparison, the big transportation bill that President Obama signed in 2015 authorized $305 billion over five years for the entire country, the bulk of it for highways.

The transportation policies fashioned by elected officials in Washington more than half a century ago created the drivable suburbs. Today, the market is clearly signaling that more and more Americans want a different, more walkable built environment for themselves and their children.

The other positive development is a set of provisions in the 2015 highway bill passed by a Republican Congress and signed by Obama that will make it much easier for places such as Los Angeles, Atlanta, and Raleigh to get their transit projects up and running. Even with dedicated local sources of revenue, local governments need cash-on-hand financing before they can break ground on big-ticket ventures such as rail lines. They can raise the money themselves by floating bonds. But that risks lowering their bond ratings, which can drive up the cost of other financing. A much better option is to borrow the money directly from the federal government, which itself can borrow at rock-bottom rates (less than 3 percent), or to have the feds guarantee their municipal bonds, which has the same effect. The U.S. Transportation Department has long provided two loan programs for this kind of funding—one for the building of toll roads, the other for freight and passenger rail projects (which was so restrictive it seldom got used). The 2015 transportation law merged these two programs and broadened the availability of financing to include rail and other transit systems and the infrastructure around stations. Los Angeles is leveraging this program to finance new transit projects approved by voters last fall.

With this system in place—transit-oriented infrastructure built with locally pledged revenues but financed by low-cost federal loans—walkable communities will spring up in more and more places throughout the country, even if the Trump infrastructure push fizzles. But clearly this economic and development transformation could spread much faster with a major infrastructure bill.

Such a bill should hew to three basic principles.

First, the federal government shouldn’t play favorites and pick winners and losers by advantaging one form of transportation infrastructure over others. In practice, this means that highway, mass transit, bike, and walking projects would get the same percentage of federal matching grants—as opposed to the current practice, in which highways get much larger matches. The federal share of that match would likely be somewhere between 20 and 40 percent, depending on the expansiveness of the infrastructure offerings.

Because of Republican resistance in Congress, Trump will almost certainly need large numbers of Democratic votes to pass any substantive infrastructure bill. That means Democrats will likely have significant leverage in determining the shape of such a bill. What should they demand in return for their support?

Second, the bill should allow the level of governance closest to the voters—cities, counties, metropolitan planning organizations, and place-specific organizations such as business improvement districts—to take the lead in determining what infrastructure investments best serve their communities. Traditionally, decisions on specific projects were made either by lawmakers in Washington’s “smoke-filled rooms” (the elimination of earmarks has curbed their power) or by state departments of transportation, which much prefer funding rural highways rather than urban rail transit projects. Local governments have had virtually no formal role in the decisionmaking process. This should change. Under a new transportation infrastructure bill, municipal governments, metropolitan-wide entities, and local governance organizations should have the same rights as state DOTs to compete for federal transportation dollars.

Third, Washington should insist that localities have skin in the infrastructure game—that is, that they find local sources for the funds needed to maintain the infrastructure and service the debt that federal grants and loans make possible. Repayment could come in the form of pledged sales or property tax increases, or as a percentage of the increased value of adjacent real estate, paid for by private-sector developers.

Indeed, any new infrastructure program should encourage private property owners and developers to share the burden of building local infrastructure, as their counterparts did a century ago. Developers are increasingly willing to effectively tax themselves by sharing the financial upside resulting from real estate projects made possible by these transportation investments. With federal infrastructure grants in short supply, this kind of model—combining local taxes and federal support with private-sector investments paying off cheap federal loans—makes great sense, far more sense than the expensive tax credit scheme the Trump team has put forth.

To help out, the feds should lift the ban on state and local governments charging tolls on interstate highways. Currently, tolls are allowed on only a few stretches, such as in New Jersey and Pennsylvania, where previously built state toll roads were incorporated into the national interstate system. Localities everywhere should be permitted to charge tolls and use the revenue as they see fit. This practice, especially when based on “congestion pricing”—tolls that go up when traffic is higher and down when it’s lower—is a market-based tool to allocate highway usage and mitigate traffic congestion, while at the same time paying for the overdue maintenance of those roads.

These three federal policies (equal treatment for transportation modes; the lowest level of government control; and skin in the game) would give local communities both the opportunity and the responsibility to make their own decisions. If they want to build “bridges to nowhere”—that is, extend transportation and other infrastructure beyond the metropolitan fringe—they’d be free to do so, but they would need to figure out how to repay the federal loans.

Based on the recent experience of places such as Atlanta, Raleigh, and Los Angeles, it’s more likely that metro areas will respond to market pressures by using their enhanced freedom to develop walkable communities. They’ll build new mass transit systems, lay down networks of bicycle lanes, and replace stretches of interstates that have destroyed the pedestrian capacities of their cities. They can achieve the latter by creating underground tunnels for their interstates, as Boston did for I-93 with its “Big Dig” project. Alternatively, and far less expensively, they can build land bridges across freeways, as Seattle, Dallas, and Duluth have done, or turn highways into boulevards, as San Francisco did after the elevated Embarcadero Freeway collapsed in the 1989 earthquake. Today, Embarcadero Boulevard is one of the most popular parts of the city, with adjacent property values and tax revenues far higher than before yet with no reduction in traffic movement.

While the old policy of endlessly widening and expanding highways makes no sense in an era when the public increasingly wants walkable development, many existing highways are in dire need of investment, especially in metro areas where they take a constant pounding. This will be an immensely expensive undertaking and won’t spur adjacent walkable development, but it still needs to be done. The Maryland section of Washington’s famous Beltway, for instance, needs to be rebuilt from the dirt up. The price will be far higher in constant dollars than what it cost to build it in the first place because the renovation has to happen lane by lane, at night and on weekends, in order to accommodate tens of thousands of cars every day.

SIDEBAR: Oklahoma City, Oklahoma — Walking Gains Ground in City Once Rated “Worst” for Pedestrians

In addition to the three major policy shifts described above, any new infrastructure bill, to be successful, will also require both liberals and conservatives to accept some concessions.

Liberals must concede that conservatives are right about environmental regulations having become a way for some people to simply stop beneficial infrastructure and real estate projects they don’t like. Environmental regulation in general, and especially the complex process of public comment and multiple reviews mandated by the federal government’s National Environmental Policy Act, must be modified to speed up the approval process and not encourage endless lawsuits and delays. It is imperative that we establish a maximum time period for appeals of infrastructure projects. The Seattle Sound Transit’s fourteen-mile light rail line from downtown to the east side of Lake Washington is taking seventeen years to plan and build, rather than the four to six it should take. In Beijing, by contrast, the government has been building subways at eighteen miles per year. The slow progress is not just a waste of money in itself, but also delays the economic and tax-producing benefits that attend walkable urban development around new stations. Meanwhile, the polluting highway congestion continues to strangle the regional economy.

Conservatives must concede that liberals are justified in warning that the building of walkable development often harms poor and lower-income Americans. Enabling more such projects could eventually bring down prices to the point where middle-class families could afford them. Plus, walkable communities are more equitable than they might first appear. Low-income households—defined as those making 80 percent or less of a metro area’s median income—typically devote about 40 percent of their income to housing. But counter-intuitively, they pay the same (punishing) percentage whether they live in metros areas with few walkable places (say, Tampa) or many (say, Seattle). And those who live in the latter typically pay 40 percent less in household transportation costs—because they don’t need a car, or can get by with only one—and have access to two to three times more jobs, which on average pay much better.

SIDEBAR: St. Michael, MinnesotaLife in the Exurbs

Still, there’s no getting around the fact that many walkable developments cater to the affluent and push out the poor. The only remedy is to consciously add affordable housing requirements. Any federal infrastructure bill should therefore stipulate that a local government receiving federal infrastructure funding (grants or loans) must require that 10 to 20 percent of housing units built within walking distance of those projects be affordable to families making below the area’s median income.

Finally, both liberals and conservatives must come to a shared understanding of how we want emerging technologies to serve us. New transportation technologies and business models, including Zipcar, Uber, drones, and self-driving cars, will have a dramatic impact on how our built environment evolves. Some predict that they will inevitably revitalize drivable suburban development (long commutes aren’t such a problem if you can watch a movie on your phone while riding). Others foresee them making walkable urban development even more popular (cheap driverless vehicles will make car ownership obsolete and the drive from home to rail stations an affordable breeze). The truth is, of course, that nobody knows the future impact of these technologies, and we have to make surface transportation decisions now.

The transportation policies fashioned by elected officials in Washington more than half a century ago created the drivable suburbs. Today, the market is clearly signaling that more and more Americans want a different, more walkable built environment for themselves and their children. Our leaders in Washington should listen.

About the only issue on which Donald Trump and Hillary Clinton agreed during the 2016 presidential race was the need to rebuild America’s infrastructure—roads, bridges, mass transit systems, and the like. In fact, Trump proposed spending twice as much on infrastructure as Clinton did. That endeared him to his base voters, who saw it as a concrete (literally) manifestation of his promise to “make America great again.”

The country’s infrastructure, particularly surface transportation, is certainly in sorry shape. The American Society of Civil Engineers rates the overall condition of our bridges as C+ and our roads and mass transit systems as D. Many of these structures, built decades ago, are reaching the end of their useful lives.

Under pressure from traditional conservatives in Congress, who are no fans of big domestic spending projects, Trump has been shy about the details and timing of his plan. But in his February speech to Congress, he reiterated his intention to push for major legislation.

Indeed, Trump has profound political and personal reasons to fulfill the promise he made during the campaign. His whole professional identity, after all, revolves around the creation of steel and concrete structures (usually built by others) with his name on them. And an infrastructure build-out of the size he has talked about—upwards of $1 trillion—would surely juice an already growing economy and provide well-paying jobs for many of the non-college-educated white male voters who supported him.

Moreover, some close Trump advisers see infrastructure as a way to pull Democratic voters, including some minorities, into a new political coalition that will remake the Republican Party and keep it in power for decades. “With negative interest rates throughout the world, it’s the greatest opportunity to rebuild everything,” Trump campaign CEO and now chief White House strategist Steve Bannon told the Hollywood Reporter soon after the elections. “Shipyards, ironworks, get them all jacked up. We’re just going to throw it up against the wall and see if it sticks. It will be as exciting as the 1930s, greater than the Reagan revolution—conservatives, plus populists, in an economic nationalist movement.”

Such grandiose political hopes aside, many Democratic lawmakers are clearly open to working with Trump. In fact, Senate Democrats have proposed spending the same amount on infrastructure as Trump has: $1 trillion. That is far more than most GOP lawmakers are comfortable spending. But precisely because of that Republican resistance, Trump will almost certainly need large numbers of Democratic votes to pass any substantive infrastructure bill. That means Democrats will likely have significant leverage in determining the shape of such a bill.

What should Democrats demand in return for their support? For one thing, that Trump drop the idea his advisers floated in December for $85 billion in new tax credits for infrastructure investors. The private sector must be involved in America’s infrastructure build-out, but tax credits are a very expensive way of making that happen. Tax credits attract private investors and corporations needing high rates of return to offset their tax liabilities—returns in the range of 18 to 35 percent annually. But infrastructure investments don’t typically produce those kinds of high returns unless the risks are somehow shifted onto others, typically taxpayers. With interest rates at 3 percent, it’s much cheaper and less risky to the public for the federal government to simply borrow the money and have it paid back by local sources, both public and private.
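
For a rough sense of why the loan route is cheaper, here is a minimal back-of-the-envelope sketch in Python. The roughly 3 percent federal borrowing rate and the 18 to 35 percent return range are the figures cited above; the $1 billion project size, thirty-year term, 20 percent equity slice, and 25 percent target return are assumptions chosen purely to make the arithmetic concrete.

```python
# Illustrative only: compares the annual carrying cost of a hypothetical
# $1 billion project financed entirely with cheap federal debt versus a
# structure in which a slice of tax-credit equity must earn high returns.
# All project-specific numbers are assumptions, not figures from the article.

def annual_debt_service(principal, rate, years):
    """Standard level-payment amortization formula."""
    return principal * rate / (1 - (1 + rate) ** -years)

PROJECT_COST = 1_000_000_000   # assumed project size
YEARS = 30                     # assumed repayment term

# Option A: the federal government borrows at roughly 3 percent and lends it on.
federal_loan = annual_debt_service(PROJECT_COST, 0.03, YEARS)

# Option B: assume 20 percent of the capital is tax-credit equity that must
# earn 25 percent a year (roughly the middle of the 18-35 percent range cited),
# with the remaining 80 percent financed as ordinary 3 percent debt.
equity_share, equity_return = 0.20, 0.25
debt_piece = annual_debt_service(PROJECT_COST * (1 - equity_share), 0.03, YEARS)
equity_piece = PROJECT_COST * equity_share * equity_return  # return alone, no principal
tax_credit_structure = debt_piece + equity_piece

print(f"All federal loan:     ${federal_loan/1e6:5.1f} million per year")
print(f"Tax-credit structure: ${tax_credit_structure/1e6:5.1f} million per year")
print(f"Extra annual cost:    ${(tax_credit_structure - federal_loan)/1e6:5.1f} million")
```

Even with most of the capital still borrowed cheaply, the small high-return slice nearly doubles the annual carrying cost under these assumptions, and that gap is what ultimately lands on taxpayers or toll payers.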

Even more importantly, liberals should recognize that it is no longer 2009. Back then, to stabilize an economy in free fall, Democrats passed a massive stimulus bill with tens of billions of dollars devoted to whatever “shovel ready” projects state transportation departments happened to have on the shelf—widening interstates, paving rural roads, and so on.

Today, with unemployment low and economic growth steady, there’s no need for an immediate Keynesian stimulus. Nor is it in Democrats’ political interests to embrace the Trump-Bannon vision for gargantuan and possibly indiscriminate building projects. In this, they have, unexpectedly, reason to make common cause with their fiscally conservative GOP colleagues.

Instead of agreeing to open the federal funding spigot and spraying infrastructure dollars haphazardly, Democrats and their conservative Republican allies should insist that any federal infrastructure legislation be focused on a long-term strategy based on two simple questions: What do Americans want their communities to look like twenty-five years from now? And what unique set of infrastructure investments will get them there?

Answers to these questions are already out there, as revealed by Americans’ choices in the marketplace. Almost every market metric we have—particularly in shifts in for-sale housing and rental prices—indicates a vast unmet demand for homes and commercial spaces in or near what real estate professionals call “walkable urban communities.” These are relatively densely built places where people can get to stores, parks, restaurants, bars, and movie theaters, as well as to their jobs, without having to use their cars. “Walkable urban” doesn’t necessarily mean city living, though the revitalization of American cities is strong evidence of the trend. Suburbanites, too, want walkable urban places in their communities, as shown by the popularity of new suburban “town centers” in places like downtown Bellevue, Washington; Plano, Texas; and Reston, Virginia. Small-town Americans also are seeing their traditional, often neglected town squares and main streets come back to life, in places such as Albert Lea, Minnesota, and Batesville, Arkansas.

The problem is that demand for walkable places far outstrips supply, artificially jacking up real estate prices in these communities—consider the insane rents and sale prices today in Brooklyn’s Park Slope, Chicago’s Lincoln Park, or the Virginia Highland neighborhood in Atlanta, compared to other parts of their metropolitan areas. This phenomenon, otherwise known as gentrification, makes it seem as if only the affluent want walkable neighborhoods, when in fact their appeal transcends class, race, geography, and political inclination. These price premiums suggest that the market is saying, “Build more of this stuff.”

This is not to say that all American communities, households, and businesses are rejecting drivable subdivisions, strip malls, and business parks. These are the exact types of communities many Americans, probably a slight majority, want. But we have vastly overbuilt this type of development; the market has been more than satisfied.

A major reason for this mismatch between market demand and supply is federal infrastructure policy that favors drivable development at the expense of walkable urban development. For instance, the feds have long been far more generous in subsidizing the building and widening of interstate highways that funnel traffic out to the distant suburbs than in funding rail, bike, and pedestrian improvements that make walkable neighborhoods practical. That policy bias resulted in this vast overbuilding of low-density suburbs on the metropolitan fringe—development that sparked the Great Recession and made it much deeper and longer than it otherwise would have been. The prices of homes in exurban communities such as Riverside County, outside Los Angeles, and Prince William County, outside Washington, D.C., plummeted and in many cases have still not returned to their pre-recession levels, leaving many homeowners with mortgages that remain underwater.

Any new infrastructure bill should aim to use precious and limited federal tax dollars in a way that enhances the market’s own ability to deliver the kind of real estate development Americans clearly want, instead of providing what would essentially be a bailout for an out-of-date development model. Creating federal infrastructure policy that encourages the walkable developments millions of Americans crave would unleash a virtuous cycle of change. It would attract hundreds of billions, perhaps trillions, of private-sector dollars to residential and commercial real estate markets that have not fully recovered from the Great Recession. Today, in the midst of a reasonably strong economy, we are building fewer new housing units per capita than during recessions over the past sixty years. The flow of investment to satisfy this pent-up demand would last a generation, not unlike the postwar drivable suburban boom. Because the so-called built environment (infrastructure plus real estate) is America’s largest asset class, representing 35 percent of the wealth of the country, this extra investment could increase the medium- to long-term growth rate of the U.S. economy by 50 percent—that is, add an extra point to the 2 percent average GDP growth we’ve seen over the past five years.

This demonstrates the adage “Transportation drives economic development.”

If done right, the infrastructure investment would also advance a range of other important national goals, from lowering health care costs (nothing is better for your health than walking) to fighting global warming (denser developments are far more energy efficient) to reversing inequality (as I’ll explain below). Finally, it would profoundly reshape our landscape so that many more Americans can live in the ways that make them happiest.

“Walkable urban” is really just a new phrase for the traditional development pattern that humans have known since the dawn of civilization, and that was standard in American towns and cities before World War II. Walkable communities tend to be far more compact than drivable ones. Commercial and residential properties are located near one another, not separated by long distances, as we find in most suburbs today. People can comfortably get to where they need to go on foot or by a mix of transportation options: cars, buses, bikes, and rail. Back in the day, the infrastructure that made these various transportation modes possible was built and paid for not by Washington but by state and local governments and the private sector. Indeed, the streetcar lines that were fixtures in nearly every U.S. city and town with more than 25,000 people were generally built by real estate developers as a way to get customers to their newly built walkable projects at what was then the edge of town.

Starting in the mid-twentieth century, this traditional settlement pattern was abandoned in favor of drivable suburban development: single-family homes on large lots, with offices and retail stores clustered all by themselves miles away. In such developments, cars and trucks are the only viable means of getting around. Anyone brave or desperate enough to go by foot must traverse long distances along arterial roads that often lack sidewalks, with cars whooshing by at fifty or more miles an hour. Buses come infrequently, if they are available at all.

Originally, few people complained about this new form of living, aside from urban visionaries like Jane Jacobs. Many in fact loved it (and still do). That was especially true of the GIs who returned from World War II eager to start families. Much of that generation had grown up in crowded cities that, for lack of a tax base during the Depression and the war years, had become dilapidated by the late 1940s—and in many cases were filling up with poor black migrants from the rural South. Racial segregation has always been a factor in how we have built our country.

The acute postwar demand for new housing fueled the suburban boom, but it was public policy that made it possible. The interstate highway system, 90 percent of which was paid for by the federal government, made cheap rural land outside cities accessible and valuable to developers. Suburban municipalities used federal grants to extend water, sewer, and electric lines to new subdivisions, charging developers and homeowners a fraction of the real costs of those projects. Under pressure from federal regulations, municipalities enacted zoning codes that effectively outlawed walkable development. On top of that, government-insured mortgages for veterans and others were regulated in ways that required that they be used only to buy newly constructed homes, not to purchase or remodel existing homes—an incentive that for decades strongly steered growth away from cities and toward the drivable suburbs.

Starting in the mid-1990s, the market pendulum began swinging back toward the traditional walkable model. First appearing in coastal metropolitan areas such as Washington, New York, Boston, Seattle, and San Francisco, the trend toward walkable urban development can now be seen in almost every metropolitan area, even unexpected spots such as the downtowns of Oklahoma City, Boise, and Phoenix. For the first time in a century, walkable urbanism is dramatically gaining market share and enjoying substantial price and rent premiums. Meanwhile, in all thirty of America’s largest metropolitan areas, drivable suburban development is losing market share and generally bringing much lower prices and rents.

As a result, some business parks and regional malls are losing tenants to walkable urban places, leading development organizations such as the Newmark Grubb Knight Frank international brokerage firm and the Urban Land Institute to figure out what to do about this obsolete real estate inventory. On the suburban fringes, drive-until-you-qualify single-family homes have generally not recovered their pre-recession valuations, while houses, townhouses, and condominiums in walkable urban areas have skyrocketed in value. Even some recently built luxury single-family mansions and McMansions in drivable suburbs have values below their pre-recession level and have a hard time finding buyers, at prices that are sometimes below replacement cost. These low prices are the reason housing production is at recessionary levels today, even in the midst of a steady economic expansion.

The declining value of drivable suburban development in many places is the consequence of both oversupply and demographics. Millions of Baby Boomers are emptying their nests and looking to downsize into apartments, condos, townhouses, or small-lot single-family houses in cities, urbanizing suburbs, or small towns where retail, community facilities, and employment are within walking distance. According to recent surveys by the National Association of Realtors, the top category of places where Baby Boomers say they want to retire is that of smaller cities and towns—as long as they’re walkable and have amenities like shopping and restaurants. Many such communities, such as Santa Fe, Ann Arbor, and Lancaster, Pennsylvania, are benefiting from Boomers’ retirement already, and that wave still has another ten years to run before it crests.

PHOTO: A sidewalk in Arlington, Virginia.

SIDEBAR: Arlington, Virginia — Who Says No One Walks in the Suburbs?

Meanwhile, Baby Boomers’ kids, the Millennials, aspire to pretty much the same lifestyle, which is causing a problem for Boomers trying to sell their large drivable homes. Having mostly grown up in traffic-choked suburbs, Millennials have never been instilled with the old American romance about driving. Smartphones, not cars, are the objects of their affection. Many Millennials have settled in city centers and now, as they start to have children, are focusing their energy on improving the public school systems, not home hunting in the suburbs. And Millennials who do live in the suburbs generally pick closer-in, urbanizing ones over the greenfield developments to which their parents flocked in the 1970s and ’80s. Together these two generations, Baby Boomers and Millennials, account for a majority of the U.S. population.

The urbanization of the suburbs is the most understudied development trend in the country right now. A fascinating example is Arlington, Virginia. In the mid-twentieth century, Arlington was an entry-level bedroom community for Washington, D.C. It became car dependent starting in the 1950s, when the streetcars that ran from D.C. along Wilson Boulevard, Arlington’s commercial “Main Street,” were shut down and a new shopping mall, Parkington (named for its then-innovative multistory parking garage), was built.

By the 1980s, as the cutting edge of development in the D.C. region moved farther out, the area around Wilson Boulevard had become economically depressed. But change was happening beneath the surface, literally, as a line on the underground Metrorail system was being constructed below Wilson. Since then, a building boom has transformed the area. Each Metro stop along Wilson has become a mini downtown of high-rises filled with offices, condos, and rental apartments, interspersed with shops and restaurants. These walkable neighborhoods, which make up 10 percent of Arlington’s landmass, now contribute 55 percent of the county’s tax revenue, up from 20 percent from the same area twenty-five years ago. Tax revenue now supports a highly diverse and academically distinguished public school system, where students speak more than eighty languages. Arlington’s new challenge is that housing prices are now among the highest in the region because of pent-up demand, especially by Millennials looking for an urbanizing suburb with good schools.

Other parts of the D.C. suburbs aren’t faring quite as well. Housing prices in exurban communities such as Manassas Park, Virginia, and Upper Marlboro, Maryland, are flat or just beginning to rise following many years of decline, and prices for high-end homes in tony suburbs with no Metro lines, such as Great Falls, Virginia, and Potomac, Maryland, remain below their pre-recession levels. Fortunately for greater D.C., the Metrorail system is extending new lines to currently unconnected suburbs (even as it wrestles with deferred maintenance and high demand), creating more opportunities for walkable developments and for the companies that build them. Last year, the area’s biggest real estate development firm, JBG Smith, shed nearly all its properties except for those located within a half mile of a Metro stop, and all of its seventy new development projects are within walking distance of the Metrorail.

SIDEBAR: Charleston, South Carolina — Where Walking is a Luxury

Walkable urbanism is now taking place in nearly every metro area in the country, including San Diego, Salt Lake City, Chattanooga, and Philadelphia. The most walkable urban metros in the country have a 49 percent higher GDP per capita than the most drivable ones—equivalent to the difference between the GDP per capita of a First World country such as Germany and a Second World country such as Russia.

While there are more walkable urban places than there used to be, demand still substantially outstrips supply. Progress has been slow, in part because real estate developers can’t build as fast as market preferences shift (the country adds only about 2 percent to the real estate inventory in a good year), and in part because government policy has not caught up with what the market wants.

Take zoning and building codes. As mentioned above, these codes, enacted by local governments, have long mandated drivable suburban development. This means that in vast numbers of American municipalities it is literally illegal to build walkable communities. Any developer who tries to get those local codes changed in order to construct walkable projects runs into a multiyear variance process and a jihad of NIMBY protests and lawsuits from local residents worried about increased traffic, overloading of schools—especially with students from lower-income families—and loss of open space.

The irony of this opposition (which often comes from a small minority of residents) is that walkable places don’t take up much land; Arlington, Virginia, is a case in point. According to recent research, typically 5 to 7 percent of total metropolitan land is all that is required for walkable urban development. The rest of a region’s real estate will likely stay drivable in nature for decades. Despite the outcry by some homeowners, single-family homes immediately adjacent to walkable urban town centers, such as Birmingham, Michigan; Kirkland, Washington; and Park Cities, next to Dallas, command a 40 to 100 percent price premium over drivable homes in these same towns. Residents can live in suburban splendor and still walk to fine restaurants, theaters, and shopping outlets. Sometimes the most vociferous NIMBY critics are the very ones who would benefit the most from the urbanization of the suburbs.

Two recent shifts in public policy are beginning to accelerate the trend toward walkable development. One is the increasing willingness of local voters to support proposals to raise local taxes for rail, bus, biking, and pedestrian projects. A record seventy-seven such proposals were on local ballots in 2016. Of those, 71 percent passed—a success rate that has been consistent for such ballot measures for more than a decade. The 2016 measures commit more than $200 billion, primarily for major rail and bus expansions, in metro areas like Los Angeles, Atlanta, and Raleigh—cities that have been poster children for sprawl. That’s a lot of local money. By comparison, the big transportation bill that President Obama signed in 2015 authorized $305 billion over five years for the entire country, the bulk of it for highways.

The other positive development is a set of provisions in the 2015 highway bill passed by a Republican Congress and signed by Obama that will make it much easier for places such as Los Angeles, Atlanta, and Raleigh to get their transit projects up and running. Even with dedicated local sources of revenue, local governments need cash-on-hand financing before they can break ground on big-ticket ventures such as rail lines. They can raise the money themselves by floating bonds. But that risks lowering their bond ratings, which can drive up the cost of other financing. A much better option is to borrow the money directly from the federal government, which itself can borrow at rock-bottom rates (less than 3 percent), or to have the feds guarantee their municipal bonds, which has the same effect. The U.S. Transportation Department has long provided two loan programs for this kind of funding—one for the building of toll roads, the other for freight and passenger rail projects (which was so restrictive it seldom got used). The 2015 transportation law merged these two programs and broadened the availability of financing to include rail and other transit systems and the infrastructure around stations. Los Angeles is leveraging this program to finance new transit projects approved by voters last fall.
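
To put a number on that rate advantage, here is a small illustrative calculation. The sub-3-percent federal rate comes from the text; the $500 million project size, thirty-year term, and the 4.5 percent stand-in for what a locality might pay on its own credit are assumptions for illustration only.

```python
# Illustrative only: how much a lower borrowing rate matters for a big-ticket
# transit project. The roughly 3 percent federal rate is from the text; the
# $500 million project, thirty-year term, and 4.5 percent stand-in for a
# locality borrowing on its own credit are assumptions for this sketch.

def total_interest(principal, rate, years):
    """Total interest paid over the life of a level-payment loan."""
    payment = principal * rate / (1 - (1 + rate) ** -years)
    return payment * years - principal

PRINCIPAL, YEARS = 500_000_000, 30

muni_alone = total_interest(PRINCIPAL, 0.045, YEARS)      # assumed stand-alone muni rate
federal_backed = total_interest(PRINCIPAL, 0.03, YEARS)   # federal loan or guaranteed bond

print(f"Interest at 4.5%: ${muni_alone/1e6:.0f} million")
print(f"Interest at 3.0%: ${federal_backed/1e6:.0f} million")
print(f"Savings:          ${(muni_alone - federal_backed)/1e6:.0f} million")
```

Under these assumptions, the cheaper money saves on the order of $150 million over the life of the loan, money that can instead go toward stations, maintenance, or the next line.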

With this system in place—transit-oriented infrastructure built with locally pledged revenues but financed by low-cost federal loans—walkable communities will spring up in more and more places throughout the country, even if the Trump infrastructure push fizzles. But clearly this economic and development transformation could spread much faster with a major infrastructure bill.

Such a bill should hew to three basic principles.

First, the federal government shouldn’t play favorites and pick winners and losers by advantaging one form of transportation infrastructure over others. In practice, this means that highway, mass transit, bike, and walking projects would get the same percentage of federal matching grants—as opposed to the current practice, in which highways get much larger matches. The federal share of that match would likely be somewhere between 20 and 40 percent, depending on the expansiveness of the infrastructure offerings.

Second, the bill should allow the level of governance closest to the voters—cities, counties, metropolitan planning organizations, and place-specific organizations such as business improvement districts—to take the lead in determining what infrastructure investments best serve their communities. Traditionally, decisions on specific projects were made either by lawmakers in Washington’s “smoke-filled rooms” (the elimination of earmarks has curbed their power) or by state departments of transportation, which much prefer funding rural highways over urban rail transit projects. Local governments have had virtually no formal role in the decisionmaking process. This should change. Under a new transportation infrastructure bill, municipal governments, metropolitan-wide entities, and local governance organizations should have the same rights as state DOTs to compete for federal transportation dollars.

Third, Washington should insist that localities have skin in the infrastructure game—that is, that they find local sources for the funds needed to maintain the infrastructure and service the debt that federal grants and loans make possible. Repayment could come in the form of pledged sales or property tax increases, or as a percentage of the increased value of adjacent real estate, paid for by private-sector developers.

Indeed, any new infrastructure program should encourage private property owners and developers to share the burden of building local infrastructure, as their counterparts did a century ago. Developers are increasingly willing to effectively tax themselves by sharing the financial upside resulting from real estate projects made possible by these transportation investments. With federal infrastructure grants in short supply, this kind of model—combining local taxes and federal support with private-sector investments paying off cheap federal loans—makes great sense, far more sense than the expensive tax credit scheme the Trump team has put forth.

To help out, the feds should lift the ban on state and local governments charging tolls on interstate highways. Currently, tolls are allowed on only a few stretches, such as in New Jersey and Pennsylvania, where previously built state toll roads were incorporated into the national interstate system. Localities everywhere should be permitted to charge tolls and use the revenue as they see fit. This practice, especially when based on “congestion pricing”—tolls that go up when traffic is higher and down when it’s lower—is a market-based tool to allocate highway usage and mitigate traffic congestion, while at the same time paying for the overdue maintenance of those roads.
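
A congestion-priced toll schedule can be sketched in a few lines; the specific prices and thresholds below are invented for illustration and are not drawn from the article or any real toll facility.

```python
# Purely illustrative congestion-pricing schedule: the toll rises as observed
# traffic approaches the road's capacity. The base toll, multipliers, and
# thresholds are invented for this sketch; they are not from the article.

def congestion_toll(volume, capacity, base_toll=1.00):
    """Return a per-trip toll that scales with the volume-to-capacity ratio."""
    ratio = volume / capacity
    if ratio < 0.5:        # free-flowing traffic: charge the base toll
        return base_toll
    if ratio < 0.8:        # traffic building: modest premium
        return base_toll * 2
    if ratio < 1.0:        # near capacity: steep premium to hold speeds up
        return base_toll * 4
    return base_toll * 8   # over capacity: highest price

# Example: a highway segment with capacity for 2,000 vehicles per hour per lane.
for volume in (800, 1_400, 1_900, 2_300):
    print(f"{volume:>5} veh/hr -> toll ${congestion_toll(volume, 2_000):.2f}")
```

Real systems typically adjust prices continuously rather than in steps, but the logic is the same: charge more when road space is scarce, less when it is not, and put the revenue toward maintenance.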

These three federal policies (equal treatment for transportation modes; the lowest level of government control; and skin in the game) would give local communities both the opportunity and the responsibility to make their own decisions. If they want to build “bridges to nowhere”—that is, extend transportation and other infrastructure beyond the metropolitan fringe—they’d be free to do so, but they would need to figure out how to repay the federal loans.

Based on the recent experience of places such as Atlanta, Raleigh, and Los Angeles, it’s more likely that metro areas will respond to market pressures by using their enhanced freedom to develop walkable communities. They’ll build new mass transit systems, lay down networks of bicycle lanes, and replace stretches of interstates that have destroyed the pedestrian capacities of their cities. They can achieve the latter by creating underground tunnels for their interstates, as Boston did for I-93 with its “Big Dig” project. Alternatively, and far less expensively, they can build land bridges across freeways, as Seattle, Dallas, and Duluth have done, or turn highways into boulevards, as San Francisco did after the elevated Embarcadero Freeway collapsed in the 1989 earthquake. Today, Embarcadero Boulevard is one of the most popular parts of the city, with adjacent property values and tax revenues far higher than before yet with no reduction in traffic movement.

While the old policy of endlessly widening and expanding highways makes no sense in an era when the public increasingly wants walkable development, many existing highways are in dire need of investment, especially in metro areas where they take a constant pounding. This will be an immensely expensive undertaking and won’t spur adjacent walkable development, but it still needs to be done. The Maryland section of Washington’s famous Beltway, for instance, needs to be rebuilt from the dirt up. The price will be far higher in constant dollars than what it cost to build it in the first place because the renovation has to happen lane by lane, at night and on weekends, in order to accommodate tens of thousands of cars every day.

SIDEBAR: Oklahoma City, Oklahoma — Walking Gains Ground in City Once Rated “Worst” for Pedestrians

In addition to the three major policy shifts described above, any new infrastructure bill, to be successful, will also require both liberals and conservatives to accept some concessions.

Liberals must concede that conservatives are right about environmental regulations having become a way for some people to simply stop beneficial infrastructure and real estate projects they don’t like. Environmental regulation in general, and especially the complex process of public comment and multiple reviews mandated by the federal government’s National Environmental Policy Act, must be modified to speed up the approval process and not encourage endless lawsuits and delays. It is imperative that we establish a maximum time period for appeals of infrastructure projects. Sound Transit’s fourteen-mile light rail line from downtown Seattle to the east side of Lake Washington is taking seventeen years to plan and build, rather than the four to six it should take. In Beijing, by contrast, the government has been building subways at eighteen miles per year. The slow progress not only wastes money; it also delays the economic and tax-producing benefits that attend walkable urban development around new stations. Meanwhile, the polluting highway congestion continues to strangle the regional economy.

Conservatives must concede that liberals are justified in warning that the building of walkable development often harms poor and lower-income Americans, largely because demand for such places far outstrips supply, driving up prices and pushing out longtime residents. Enabling more such projects could eventually bring down prices to the point where middle-class families could afford them. Plus, walkable communities are more equitable than they might first appear. Low-income households—defined as those making 80 percent or less of a metro area’s median income—typically devote about 40 percent of their income to housing. But counterintuitively, they pay the same (punishing) percentage whether they live in metro areas with few walkable places (say, Tampa) or many (say, Seattle). And those who live in the latter typically pay 40 percent less in household transportation costs—because they don’t need a car, or can get by with only one—and have access to two to three times more jobs, which on average pay much better.

SIDEBAR: St. Michael, Minnesota — Life in the Exurbs

Still, there’s no getting around the fact that many walkable developments cater to the affluent and push out the poor. The only remedy is to consciously add affordable housing requirements. Any federal infrastructure bill should therefore stipulate that a local government receiving federal infrastructure funding (grants or loans) must require that 10 to 20 percent of housing units built within walking distance of those projects be affordable to families making below the area’s median income.

Finally, both liberals and conservatives must come to a shared understanding of how we want emerging technologies to serve us. New transportation technologies and business models, including Zipcar, Uber, drones, and self-driving cars, will have a dramatic impact on how our built environment evolves. Some predict that they will inevitably revitalize drivable suburban development (long commutes aren’t such a problem if you can watch a movie on your phone while riding). Others foresee them making walkable urban development even more popular (cheap driverless vehicles will make car ownership obsolete and the drive from home to rail stations an affordable breeze). The truth is, of course, that nobody knows the future impact of these technologies, and we have to make surface transportation decisions now.

The transportation policies fashioned by elected officials in Washington more than half a century ago created the drivable suburbs. Today, the market is clearly signaling that more and more Americans want a different, more walkable built environment for themselves and their children. Our leaders in Washington should listen.

The Decline of Black Business https://washingtonmonthly.com/2017/03/19/the-decline-of-black-business/ Mon, 20 Mar 2017 00:40:23 +0000 https://washingtonmonthly.com/?p=63948

And what it means for American democracy.


At the new National Museum of African American History and Culture in Washington, D.C., a hallway of glass display cases features more than a century of black entrepreneurial triumphs. In one is a World War II–era mini parachute manufactured by the black-owned Pacific Parachute Company, home to one of the nation’s first racially integrated production plants. Another displays a giant time clock from the R. H. Boyd Publishing Company, among the earliest firms to print materials for black churches and schools. Although small, the exhibit recalls a now largely forgotten legacy: by serving their communities when others wouldn’t, black-owned independent businesses provided avenues of upward mobility for generations of black Americans and supplied critical leadership and financial support for the civil rights movement.

This tradition continues today. Last June, Black Enterprise magazine marked the forty-fourth anniversary of the BE 100s, the magazine’s annual ranking of the nation’s top 100 black-owned businesses. At the top of the list stood World Wide Technology, which, since its founding in 1990, has grown into a global firm with more than $7 billion in revenue and 3,000 employees. Then came companies like Radio One, whose fifty-five radio stations fan out among sixteen national markets. The combined revenues of the BE 100s, which also includes Oprah Winfrey’s Harpo Productions, now total more than $24 billion, a ninefold increase since 1973, adjusting for inflation.

A closer look at the numbers, however, reveals that these pioneering companies are the exception to a far more alarming trend. The last thirty years also have brought the wholesale collapse of black-owned independent businesses and financial institutions that once anchored black communities across the country. In 1985, sixty black-owned banks were providing financial services to their communities; today, just twenty-three remain. In eleven states that headquartered black-owned banks in 1994, not a single one is still in business. Of the fifty black-owned insurance companies that operated during the 1980s, today just two remain.

Over the same period, tens of thousands of black-owned retail establishments and local service companies also have disappeared, having gone out of business or been acquired by larger companies. Reflecting these developments, working-age black Americans have become far less likely to be their own boss than in the 1990s. The per capita number of black employers, for example, declined by some 12 percent just between 1997 and 2014.

What’s behind these trends, and what’s the implication for American society as a whole? To be sure, at least some of this entrepreneurial decline reflects positive economic developments. A slowly rising share of black Americans now hold white-collar salaried jobs and have more options for employment beyond running their own businesses. The movement of millions of black families to integrated suburbs over the last forty years also is a welcome trend, even if one effect has been to weaken the viability of the many black-owned independent businesses left behind in historically black neighborhoods.

But the decline in entrepreneurship and business ownership among black Americans also is cause for concern. One reason is that it largely reflects not the opening of new avenues of upward mobility, but rather the foreclosing of opportunity. Rates of business ownership and entrepreneurship are falling among black citizens for much the same reason they are declining among whites and Latinos. As large retailers and financial institutions comprise an ever-bigger slice of the national economy, the possibility of starting and maintaining an independent business has dropped. The Washington Monthly has addressed the role of market concentration in suppressing opportunity and in displacing local economies in depth (see, for example, “The Slow-Motion Collapse of American Entrepreneurship,” July/August 2012, and “Bloom and Bust,” November/December 2015). Other studies, including a report published last year by President Obama’s Council of Economic Advisors, have substantiated these developments.

The role of market concentration in depressing black-owned businesses is also troubling because of the critical role that such enterprises have played in organizing and financing the struggle for civil rights in America. In the 1950s and ’60s, black Americans employed by whites, including professionals like teachers, often faced dismissal if they joined the civil rights movement, whereas those who owned their own independent business had much greater freedom to resist. This is a largely forgotten history, but one that is gaining new urgency for all Americans in the age of Donald Trump. It shows the crucial way in which advancing and protecting basic civil rights can depend not only on moral and physical courage, but also on possessing the economic independence to stand up to concentrated power.

The decline of black-owned independent businesses traces to many causes, but a major one that has been little noted was the decline in the enforcement of anti-monopoly and fair trade laws beginning in the late 1970s. Under both Democratic and Republican administrations, a few firms that in previous decades would never have been allowed to merge or grow so large came to dominate almost every sector of the economy.

This change has hurt all independent businesses, but the effects have disproportionately hit black business owners. Marcellus Andrews, Bucknell University professor of economics, says that pulling back on anti-monopoly enforcement was a “catastrophic intellectual and political policy mistake,” and that for the black community, the “presumed price advantages of concentration often do not translate into better economic opportunities.”

A case in point is the decline of black-owned financial institutions, including banks and insurance companies. “Mainstream insurers went after black insurance companies for their top personnel to sell their products,” says Wichita State professor Robert Weems Jr. When MetLife bought United Mutual Life Insurance Company in 1993, it ended the sixty-three-year run of the last black-owned insurance company in the Northeast. Black Enterprise called the 1990s “a virtual bloodbath” for the black insurance industry, noting that from 1989 to 1999, the number of black-owned insurers declined by 68 percent.

Parks Sausage—which many readers of a certain age may remember for its jingle “More Parks Sausages, Mom, please”—also serves as an example of how market concentration led to the decline of black-owned independent businesses. Founded in 1951 by Henry Parks Jr., the Baltimore-based company grew into a multimillion-dollar operation, selling pork products from New England to Virginia. In 1969, Parks took the company public, making the 200-employee firm the first black-owned business on the New York Stock Exchange. Yet by the 1990s, after a turbulent series of ownership changes, the company had fallen into bankruptcy. In 1996, two black Americans, the former football stars Franco Harris and Lydell Mitchell, attempted to revive the company, but faced an increasingly consolidated meatpacking industry in which the four largest meatpackers controlled 78 percent of the market. As Harris put it, before selling out, “it’s been hard to get distribution.”

Much the same story played out with black-owned grocers. Beginning in 1969, J. Bruce Llewellyn grew ten Bronx supermarkets into the nation’s largest minority-owned retail business. By the 1990s, however, a retreat from antitrust enforcement and other fair trade laws permitted a few giant corporations like Walmart to engage in anticompetitive behaviors that in previous decades would have resulted in civil and criminal prosecution. These included undermining the pricing power of suppliers and loss leading, or the practice of selling below cost in order to drive competitors out of business. In 1999, Llewellyn sold his last remaining stores to the Dayton-Hudson Corporation, now known as the Target Corporation.

In 1986, a top executive at Revlon made a prediction about the future of the beauty and hair care industry. “In the next couple of years,” he told Newsweek, “the black-owned businesses will disappear. They’ll all be sold to white companies.” The prediction proved accurate. In 1993, IVAX Corp. purchased Johnson Products Co., the thirty-nine-year-old maker of Ultra Sheen, beginning a decade-long series of acquisitions that wiped out remaining black ownership in the hair care industry. One consequence was fewer new hair care products for black customers. Funds once channeled into research and development, University at Buffalo professor Robert Mark Silverman explains, now were accrued as profits by the larger firms.

Much the same has happened to black-owned firms in the entertainment, communications, and publishing sectors. In response to the merger wave, the founder of Black Entertainment Television, Robert Johnson, told an audience at an investment conference in 1997, “You cannot get big anymore by being 100 percent black-owned anything.” Four years later, Viacom bought out BET for $2.34 billion. In 2005, Time Warner acquired Essence Communications Partners, the publisher of the then-leading black women’s magazine, Essence.

Meanwhile, “small enterprises,” writes the business scholar John Sibley Butler, “could not compete with the expansion of larger retail chains, shopping malls, and franchises which developed.” Black-owned funeral homes are a prime example. During the civil rights era, writes author Suzanne Smith in her book To Serve the Living, “funeral directors were usually among the few black individuals in any town or city who were economically independent and not beholden to the local white power structure.” Yet, today, black-owned independent funeral homes are an imperiled institution, as national chains like Service Corporation International muscle out more and more businesses. Charlotte Clark, a black funeral home owner in Roanoke, Alabama, explains that these companies “buy local folks’ funeral parlors but leave up the signs and play it off like there’s been no change, but call the shots from elsewhere.” The National Funeral Directors and Morticians Association, which represents black funeral directors, has seen its membership decrease by 40 percent since 1997.

Reflecting on the past three decades, Bob Dickerson, CEO of the Birmingham Business Resource Center in Alabama, says, “Had our institutions and businesses been maintained, had that money been plowed back into our communities, it could have meant a world of difference.”

The role of market concentration in driving down the number of black-owned independent businesses becomes all the more concerning when one considers some mostly forgotten history. In principles, people, and tactics, the fight for black civil rights, going back to before the Civil War, was often deeply intertwined and aligned with America’s anti-monopoly traditions.

An early example is the Free Soil Party. Emerging in the 1840s, its members opposed slavery on moral grounds. They also opposed all forms of monopoly power as a threat to liberty, including its most terrible manifestation: the monopoly of slave owners over slaves. Marching under the banner of “Free Soil, Free Speech, Free Labor, and Free Men,” the movement focused on granting equal citizenship to all Americans, in large part by promoting the then-radical idea of giving black freedmen and slaves the right to own the land on which they labored.

No more Parks Sausages: Former National Football League star Franco Harris made the cover of Black Enterprise magazine in 1996 when he tried to rescue the first black-owned business listed on the New York Stock Exchange. A rapidly monopolizing meat processing industry doomed the effort.

Credit: Courtesy Black Enterprise

Free Soilers proposed breaking up land monopolies and dividing western lands into 50-to-100-acre homesteads that would grant white and black families independence. As the antebellum history scholar Jonathan Earle explains, “Opponents of aristocracy, land monopoly, and slavery saw yeoman farming and inalienable landownership as the true opposites of servitude.” It was therefore no surprise when Helen Douglass, the wife of the Free Soiler and abolitionist Frederick Douglass, wrote that his opposition to any type of coercion “not only made him a foe to American slavery, but also to all forms of monopoly.”

After the Civil War, passage of the Fourteenth Amendment, which outlawed racially discriminatory laws like the Black Codes, depended in no small part on white supporters who saw it as a means of prohibiting all grants of monopoly or class privilege. An 1866 article in the Boston Daily Advertiser said it “[threw] the same shield over the black man as over the white, over the humble as over the powerful.” In fact, one of the earliest applications of the law was by a group of independent Louisiana butchers who argued that a state-sanctioned twenty-five-year slaughterhouse monopoly violated the Fourteenth Amendment.

During Reconstruction, the relationship between black political enfranchisement and economic independence gained strength. Abolitionists in Congress passed the Southern Homestead Act, which promised to replace the monopoly of slavery with the creation of a black yeomanry secured with grants of free land. Like General William Tecumseh Sherman’s Special Field Order No. 15, with its promise of “forty acres of tillable ground” to newly freed slaves, it fell victim to white backlash and sectional compromise, and was rescinded. Nevertheless, by the 1890s, black Americans who owned or aspired to own their own land joined with independent white farmers in the multiracial Populist Party, in states including Georgia and Texas.

Keeping it in the community: During the mid-twentieth century, anti-monopoly laws, particularly fair trade legislation, contributed to an increase in the number of black-owned, independent businesses like this one in Harlem.

Credit: Library of Congress

The natural alliance between supporters of black civil rights and opponents of monopoly, though strained by race-baiting demagogues, especially during the darkest days of Jim Crow, would endure. As a 1913 editorial by the National Association for the Advancement of Colored People (NAACP) put it, the Emancipation Proclamation and the legacy of the Free Soil movement “gave black men not simply physical freedom, but it attempted to give them political freedom and economic freedom and social freedom. It knew then, as it knows now, that no people can be free unless they have the right to vote, the right to land and capital, and the right to choose their friends.”

To be sure, many black leaders during this era recognized that economic independence was a necessary but not sufficient condition for securing full rights as citizens. Unlike their white counterparts, independent black business owners were vulnerable to the brutality of lynching, to voter suppression at the polls, and to the plundering of black business districts by white supremacists. Yet many black leaders nonetheless saw that the fight for racial justice also required expanded economic independence, which in turn depended on containing market concentration.

In the early twentieth century, groups like the National Negro Business League, for instance, supported anti–chain store legislation as a way to preserve black Americans’ economic self-sufficiency and freedom. As an editorial in the black newspaper New Journal and Guide bemoaned, “Chain stores are constantly draining every dollar, every week, from all of our Southern communities . . . never putting any of it back so that the communities can use it again.” Indeed, many black leaders supported these laws, even though the Ku Klux Klan and many racist white populists also championed them for their own reasons.

In 1928, W. E. B. Du Bois validated the black community’s embrace of anti-monopolism when he wrote, “To ask the individual colored man . . . to sell meat, shoes, candy, books, cigars, clothes or fruit in competition with the chain store, is to ask him to commit slow but almost inevitable economic suicide.” In 1932, the Associated Negro Press and the National Negro Business League, with the cooperation of the U.S. Department of Commerce, printed a newspaper column called “Business and Industry.” One article in the series noted that “an embarrassing problem confronts the 70,000 or more Negro-owned individual enterprises in the U.S. today[:] . . . Big Business, which so perceptibly handicaps the small industrial business units in which category Negro enterprise unquestionably belongs.”

Coming into the New Deal era, the federal government adopted many policies that enormously benefited whites but did little or nothing to help black Americans. The Federal Housing Administration engaged in redlining, the destructive practice of refusing to issue mortgages in predominantly black neighborhoods. The Wagner Act left black workers still unable to join unions. Black agricultural and domestic laborers couldn’t reap the benefits of Social Security. “Roosevelt’s New Deal,” Ta-Nehisi Coates has argued, “rested on the foundation of Jim Crow.” But the expansion of anti-monopoly laws that also occurred during this period provided one important exception to this pattern.

These measures included stepped-up antitrust enforcement, along with new fair trade laws, like the Robinson-Patman Act of 1936 and the Miller-Tydings Act of 1937, that prevented dominant firms from exploiting their market power. Combined with anti–chain store measures passed in twenty-seven states, the new legal and regulatory constraints on market concentration benefited independent enterprise, including black-owned independent businesses. Between 1935 and 1939, the number of black-owned retail stores increased by 31 percent and the number of black employees hired by black-owned retail stores grew by 14.5 percent.

Into the 1940s, black leaders battled segregation while continuing to advocate for anti-monopoly laws. In August 1941, the student organization Negro Youth published a list of demands from the National Defense Program, including “that the Attorney General investigate and prosecute all violations of the Sherman Antitrust laws.” In response to a 1947 New York fair trade law that prohibited loss leading, a coalition of black wholesale grocers declared that the law “will afford additional protection to the small businessman, be he Negro or white.”

Independent business owners also played a key role in financing civil rights protests, especially during their peak in the 1950s and ’60s. In Tallahassee, black grocery store owner Daniel Speed bankrolled a bus boycott similar to that in Montgomery, and his shop served as a meeting ground for black leaders. In Biloxi, Gilbert R. Mason, owner of Modern Drug Store, led a “wade-in” against the whites-only section of a federally funded Gulf Coast beach. In his autobiography, Mason wrote, “Pharmacists represented an economically independent class of black businessmen who might have been thought difficult for the white establishment to control. In many cases, the black-owned pharmacy was itself a nexus in black communities.”

Funeral home owners emerged as another powerful bloc of civil rights activists. In 1956, funeral home owner William Shortridge cofounded the Alabama Christian Movement for Human Rights, a group that sought to end employment discrimination and abolish segregation in public accommodations. A. G. Gaston, who built his business empire as the owner of the Smith and Gaston Funeral Home, threatened to transfer his accounts from a white-owned bank unless it removed a “Whites Only” sign from a water fountain. In 1963, he lent Martin Luther King Jr. a room at his Gaston Motel. Soon known as the “War Room,” it was there that King decided to submit himself to arrest in Birmingham, a galvanizing moment in the civil rights movement.

King himself connected part of the civil rights movement with the struggle against market concentration. While giving a talk in 1961 to students at the Southern Baptist Theological Seminary in Louisville, King drew parallels between the Sherman Antitrust Act and discrimination in public accommodations, noting, “This is what is said in the Sherman [Antitrust] Act, that if a business is in the public market it cannot deny access . . . [a]nd I think the same thing applies here . . . that a man should not have the right to say on the basis of color or religion, one cannot use a lunch counter that is open to everyone else in another racial group but not to these particular people; he has an obligation to the public.”

In this era, support for the civil rights movement and opposition to monopoly were political stands often advocated by the same person. For instance, Justice Felix Frankfurter, who made anti-monopoly policy one of the causes of his life, served on the NAACP’s National Legal Committee and later became the first member of the Supreme Court to hire a black law clerk. New York Representative Emanuel Celler sponsored the Celler-Kefauver Act of 1950, a major anti-monopoly law, and also introduced the Civil Rights Act in the House. Sargent Shriver, the architect of Lyndon Johnson’s War on Poverty program, said at a dinner reception, describing his vision for anti-discrimination laws and programs like Head Start, VISTA, and Job Corps, “The day may well come when Congress enacts a new Sherman Act for the social field—an antitrust law to ensure that . . . monopoly power is not used to expand and perpetuate itself.”

Attorney General Robert F. Kennedy similarly drew a link between civil rights and anti-monopoly policy. “The principles of free enterprise which the antitrust laws are designed to protect and vindicate,” he said in 1961, “are economic ideals that underlie the whole structure of a free society.” Two years later, King, in his sermon “On Being a Good Neighbor,” echoed Kennedy’s vision when he said, “Our unswerving devotion to monopoly capitalism makes us concerned about the economic security of the captains of industry, and not the laboring men whose sweat and skills keep the wheels of industry rolling.”

Black Americans employed by whites, including professionals like teachers, often faced dismissal if they joined the civil rights movement, whereas those who owned their own independent business had much greater freedom to resist.

A seminal moment in the history of the civil rights movement came on a bloody Sunday in 1965 when Alabama state troopers attacked John Lewis and hundreds of others marching across the Edmund Pettus Bridge in support of voting rights. Here, too, the important link between black-owned independent businesses and civil rights was operating behind the scenes. Civil rights leader Amelia Boynton and her husband, Sam, for example, dedicated half the office space of their real estate and insurance company in Selma to host organizers from the Southern Christian Leadership Conference.

Student Nonviolent Coordinating Committee founder Bernard LaFayette also set up an office in Selma because he knew that the black commercial class would provide a measure of protection for activists. Betty Boynton, the wife of Sam and Amelia’s son Bruce, explained in an interview, “School administrators fired teachers and workers who were sympathetic to the movement.” Indeed, one of the reasons so much of the activity of the civil rights movement was centered in Selma is that its strong community of black business owners offered critical logistical, financial, and other forms of support.

The link between civil rights and anti-monopoly policy also was a matter of tactics. In 1961, the owners of ten independent medical practices used the Sherman Antitrust Act against sixty-one local hospitals and medical organizations in Chicago that barred black Americans from the medical staff. The suit claimed that the hospitals, which provided more than 75 percent of the city’s private hospital beds, discriminated against black physicians. The settlement slowly helped integrate black citizens into the medical profession.

In 1964, Reginald Johnson, secretary of the National Urban League, encouraged the use of antitrust laws to break up housing segregation in the nation’s cities. Of the twenty million dwellings built since World War II, only 3 percent had been open to black families. “Widespread conspiracies in flagrant restraint of trade,” Johnson said, “have confined millions of the nation’s Negro citizens to lives of squalor, misery, and privation.” Antitrust actions taken by the American Civil Liberties Union, the NAACP Legal Defense and Educational Fund, and other organizations helped force desegregation of neighborhoods and realty boards in cities including Trenton, St. Louis, Pittsburgh, Akron, and New York City.

Bruce Boynton even sought to join the Justice Department’s Antitrust Division to combat discrimination and fight for greater equality. In a 1964 interview with Jet he said, “I purposely picked Antitrust instead of the Civil Rights section because we have to get involved in other areas, too. . . . Negroes have to learn how to operate stores, as well as boycott them.” He never made it to the Justice Department but made history anyway. The Alabama Bar Association refused to grant him his law license because of his previous arrest for refusing to leave a “Whites Only” lunch counter. Boynton’s protest led to the Supreme Court case Boynton v. Virginia, which helped desegregate interstate bus travel. NAACP lawyer Thurgood Marshall, who was already famous for having successfully argued Brown v. Board of Education, represented Boynton in that case.

As it happens, Marshall later became the nation’s first black Supreme Court justice and one of the Court’s last great defenders of anti-monopoly laws. Marshall grew up in the largely middle-class Druid Hill neighborhood of Baltimore, the grandson of two grocery owners, and as a young boy worked in their stores. Marshall’s philosophy, his biographer Juan Williams writes, “was the result of being the child of a proud, politically active, black, middle-class family that owned successful businesses and lived in an integrated neighborhood.” His greatest defense of the anti-monopoly vision came in the majority opinion he authored in United States v. Topco Associates, in which he argued that “antitrust laws, in general, and the Sherman Act, in particular, are the Magna Carta of free enterprise.”

After the late 1970s, both Democrats and Republicans generally retreated from the long-standing tradition of using anti-monopoly laws to foster economic and political equality. Since then, successive administrations have evaluated mergers only for their “efficiency,” and by and large have resisted antitrust actions except in the most egregious instances of collusion and price fixing. The subsequent three decades of merger mania have brought steep increases in both market concentration and inequality.

Some members of the black community applauded these changes. In a 1986 interview, Dr. William Bradford, chairman of the University of Maryland Finance Department, said, “Selling out will result in gaining future expansion opportunities. . . . [Black businesses] will move up the hierarchy and control more resources.” But other voices expressed worry. An editorial in the Atlanta Daily World noted, “Mergers don’t always make for better service or lower prices to the consumer, and one certain result of weakening the antitrust laws is more and more mergers.”

Indeed, the number of mergers did keep growing, and in most instances involved smaller black-owned companies being bought out by larger firms controlled by whites. In 1988, MCA and Boston Ventures bought Motown Records for $61 million. In 1995, Shorebank Corporation acquired Chicago’s black-owned Drexel Bank. In 1999, the French advertising giant Publicis Groupe acquired 49 percent of the black-owned marketing firm Burrell Communications Group. In 2005, a group of white investors purchased the nation’s oldest black-owned bank, Consolidated Bank & Trust Co.

The process continues today. Indeed, one of the legacies of Obama’s economic policies has been a particularly sharp drop in the number of black-owned banks. This is not only the result of lessened enforcement of the anti-monopoly laws but also an unintended side effect of measures like the Dodd-Frank Act. In the process of attempting to keep big banks from failing, Dodd-Frank created regulatory burdens that small banks could not meet. These policy changes contributed to a 14 percent decrease in the number of community banks between 2010 and late 2014. Particularly hard hit were black-owned banks, which decreased by 24 percent during this period.

Black-owned financial institutions and the businesses that depend on them for credit were also deeply damaged by the misallocation of bank bailout funds. Referring to the government’s Troubled Asset Relief Program (TARP), former Atlanta banker George Andrews says, “If there ever was a crime committed to our community it was in the way the government handled TARP funds.” According to a 2013 study of TARP investments, black-owned banks were only one-tenth as likely to receive bailout money as nonminority-owned banks. Black Americans suffered disproportionately from the predatory lending practices of big banks and from the reform measures put in place to contain banks that had become too big to fail.

The story of how the struggle for civil rights historically intertwined with the struggle against monopoly provides a lesson for the future. It suggests that going forward we also should consider how political independence connects with economic independence in the struggle for social justice. Without freedom from domination in one sphere, there is no freedom in the other. Allowing the powerful to corner markets erodes the democratic spirit that makes America great.

A Cure for High Health Care Costs https://washingtonmonthly.com/2017/03/19/introduction-a-cure-for-high-health-care-costs/ Mon, 20 Mar 2017 00:35:55 +0000 https://washingtonmonthly.com/?p=64011

Republican reform plans misdiagnose the problem. The solution is better care for the minority of patients who drive most of the spending.


Republicans have long articulated a case for reforming American health care. That case, in short, is this: Too many people are buying too many health care services with other people’s money. The key to controlling health care spending, according to this view, is to give individuals “skin in the game”—that is, financial incentives to be more prudent health care consumers.

This theory underlies virtually all of the GOP proposals floating around Washington, including House Speaker Paul Ryan’s plan, revealed in February, to “repeal and replace” Obamacare and transform Medicaid. The basic idea is to limit the federal government’s role (and financial stake) in health care by shifting more of the costs and burdens onto individuals via measures such as high-deductible health care plans. Make health care “consumers” feel at least some of the pain of paying for their care, the thinking goes, and they will shop for better deals and stop demanding care they don’t need. This, in turn, will force providers to be more efficient, reduce their prices, stop pushing unnecessary care, and thus lower the nation’s health care bill.

Special Report: Health Care

Missouri, Compromised
Home Remedy
Stanford’s Big Health Care Idea
Mind-Body Connections

Republicans are right about the desperate need to control health care spending, which is eating away at both the federal budget and the livelihoods of individual Americans. But their theory of change rests on a peculiar vision of human nature, which, not to put too fine a point on it, assumes that most Americans are hypochondriacs. While we all know somebody who fits that description, most of us are actually not eager to hand our bodies over to be punctured with needles, probed with instruments, and cut open with scalpels. We do so only when the pain gets bad enough or when our doctors say we should. And when it comes to making health care purchasing decisions, our own judgment isn’t necessarily the best guide. A recent study of workers whose Fortune 500 employer switched them to a high-deductible health insurance plan found that employees never learned to do price comparisons, and while they reduced their health care spending, they did so not only by cutting back on wasteful services like unneeded CT scans, but also by forgoing necessary care, such as a follow-up visit after a diagnosis of diabetes.

Even if policy changes could somehow make Americans savvier medical consumers, the effect on overall health care costs would be surprisingly small. That’s because the vast majority of Americans aren’t big users of the health care system. Rather, statistics show that 5 percent of the population accounts for fully 50 percent of all health care spending, and 20 percent of individuals consume fully 80 percent.

Who are these people? They are our elderly parents, whose health is slowly deteriorating and who need help coping with their worsening illnesses. They are younger people, most of them still working, who suffer from multiple chronic conditions, such as diabetes, lung disease, and rheumatoid arthritis. Many of these patients have mental health problems that make it a challenge for them and their doctors and families to deal with their chronic physical ailments. These are people who, almost by definition, can’t use their purchasing power to fix an out-of-control health care market, because they rapidly spend down their deductibles on necessary care, leaving the bulk of the cost to be paid by insurance. These folks have plenty of skin in the game—their own skin.

The real source of our spending problem is not a nation of hypochondriacs, nor even the sickest 5 percent of Americans. It’s our sick health care system. American medicine is the best in the world at “acute care,” saving the lives of victims of car accidents and heart attacks. But it is remarkably bad at caring for people with chronic conditions, whose symptoms can be controlled but rarely cured and who require routine help in managing their health. Without that help, they will inevitably be hit with sudden health crises that leave them nowhere to go but the emergency room and the hospital. And that’s the most expensive place to treat them.

There are many reasons American health care is doing such a bad job, one of them being the way insurers—both private companies and Medicare and Medicaid—pay for the majority of the care we receive. Most doctors and hospitals are paid on a “fee-for-service” basis; that is, they charge a fee for each individual office visit, test, drug, minor procedure, major surgery, and hospitalization. This means that the more patients a primary care doctor sees in a day, the more she makes; the more stuff that gets done to patients in a hospital, the higher its revenue. Providers are rewarded for delivering more care, not better care. Fee-for-service has made it difficult for even the most ambitious and well-meaning hospitals, clinics, and clinicians to transform their practices to provide the routine, largely low-tech, often home-based care the 5 percent really needs.

In this Washington Monthly special report, we offer four examples of health care programs that have walked away from this broken system. They employ “low-tech, high-touch care,” much of it provided by non-doctors, that not only improves the day-to-day lives of people with chronic illnesses but also saves money for the government and private insurers. Each takes advantage of an alternative payment model, which opens the door to “wraparound” care that can keep the sickest 5 percent out of the emergency room and hospital.

In San Diego, the Transitions program, run by Sharp HealthCare, provides extra care at home that helps elderly people who are not yet sick enough for hospice avoid frequent hospitalizations. At Stanford, a clinic coordinates doctors, nurses, and medical assistants to care for university employees with multiple chronic illnesses. A “health home” in Missouri makes sure that people with mental illness get the treatment that enables them to manage their physical health challenges. And in Pennsylvania, four counties run an innovative program that has dramatically improved the coordination of physical and mental health care for patients with serious mental illness.

Each of these programs is at risk if Congress repeals the Affordable Care Act or tinkers in the wrong way with Medicare or Medicaid. In addition to covering nearly thirty million people who previously had limited or no access to health insurance and adequate care, the ACA supports novel payment schemes that can break the cycle of neglect toward the chronically ill. If these programs are dismantled, or even slowed, we will continue to see rising health care spending—and worsening health in America. If the GOP is serious about lowering costs, it should reconsider its assumptions about how the health care market works. The lives of millions of American patients, and the financial health of the whole system, are riding on it.

How an Obscure Obamacare Provision Is Quietly Saving Lives, and Money, in Missouri https://washingtonmonthly.com/2017/03/19/missouri-compromised/ Mon, 20 Mar 2017 00:30:08 +0000 https://washingtonmonthly.com/?p=64014

Patients with mental as well as physical illnesses are hard to treat. The Show-Me State has figured out a better, cheaper way, using funds from Obamacare. Will Republicans in Washington kill it?


About a decade ago, Pat Powers’s life began to spin “out of control,” as she puts it. Powers, a soft-spoken Missouri native, was stressed out from working at Walmart and two other part-time jobs to make ends meet. She was also suffering from diabetes and severe anxiety and depression. She found her way to Crider Health Center, in St. Charles, Missouri, west of St. Louis, a federally funded community health facility that provides a host of physical and mental health services to Medicaid patients like Powers, all under one roof. There, a psychiatrist, realizing that Powers could not juggle multiple jobs and hope to get better, got her onto disability as he worked to stabilize her mental health issues. A physician also prescribed medication for her diabetes.

Yet despite the treatment, Powers still couldn’t tame what she called her “nervous breakdowns”—emotional storms that left her seeking help in Missouri emergency rooms. While she fought to regain her emotional stability, her diabetes only got worse.

The problem was that the care Powers was receiving wasn’t well coordinated, and she wasn’t receiving the guidance and support she needed. Her psychiatrist and the doctor treating her diabetes weren’t communicating sufficiently, and no one knew whether she was taking her prescribed medications (it turns out that she wasn’t, at least not regularly).

The management at Crider, which is part of the Compass Health Network, was aware of the lack of coordination, and, like thousands of health care professionals around the country, was seeking ways to fix it. But like any facility trying to survive on Medicaid’s penurious reimbursement rates, Crider lacked the funding to do much more. Meanwhile, the lack of investment in coordination was, ironically, costing taxpayers a bundle. In 2011 alone, Powers racked up more than $10,000 in care, primarily in emergency rooms.

Powers is a member of a group with a dubious distinction: the “5/50” population, short for the 5 percent of patients who account for approximately 50 percent of the nation’s health care costs. These are people who typically suffer from two or more chronic, complex health conditions. (See Anna Gorman, “Home Remedy.”) Many are elderly. Some, like Powers, also have mental health issues that make treating their physical ailments especially challenging.

“They could be hearing voices, they could be in a manic phase and not be able to focus,” Pam Haynes, a nurse care manager at Crider says of her patients. “Some people don’t understand, for example, that a piece of paper they are given by a doctor is a prescription and that you actually take that to a pharmacy…[They] might go back to the hospital in three days and say, ‘I’m still having the same problem. What am I supposed to do?’”

Things began to turn around for Powers in 2012. That’s when Crider and twenty-five other community mental health centers around Missouri began to receive two years of enhanced federal funding to test integrated care for high-need Medicaid patients as part of Medicaid’s Section 2703, a provision of the Affordable Care Act.

Section 2703 grants help health care providers defray the costs of becoming “health homes”—that is, organizations that offer a range of carefully integrated services, including clinical and behavioral health care, along with supportive social services—care thought to be particularly effective for high-need, high-cost patients like Pat Powers. Crider used its share of the funds to, among other things, hire and train nurse managers to help patients set goals and guide their care. It also brought in integrated care managers to help all of the health home’s various care providers—and often outside social service agencies—work in concert for every patient. Those social services might include home visits and support in addressing tough issues like homelessness, unemployment, and social isolation.

In Powers’s case, health home care involved sharing information about her medications, hospitalizations, diet, diabetes management, and even employment and housing status. “Like a lot of my clients here, she struggled with basic things, like ‘What’s a carbohydrate’ and how to eat for her diabetes,” says Mary Puetz, the dietician on Powers’s team. By working together, the team helped Powers lower her A1C (a measure of blood glucose) from a dangerous 9.0 to a more manageable 7.4 (a level of 6.5 or higher indicates diabetes) and her cholesterol from 210 to 172. “They helped me understand my depression and cope with things that I stress on, and they helped me with my weight control and diabetes,” says Powers. “Now I’m taking my medication…and I lost ten pounds.” They also helped her find a part-time job. Now, instead of showing up at emergency rooms, she shows up to work the buffet at a local restaurant. “I feel better about myself,” she says with evident pride. “I know I can handle any situation I come across.”

Missouri health care officials arranged for primary care doctors, psychiatrists, social workers, and others to be located in the same building so that patients could have “one-stop shopping.” But putting these professionals under the same roof did not guarantee that they would work together. They were neighbors, but not yet teams.

The 2703 program is one of the many types of care delivery and payment reform buried in the ACA, and it’s been notably successful in improving patient outcomes while driving down health care costs in many states. Yet as congressional Republicans and the Trump administration try to make good on their promise to “repeal and replace” Obamacare—ostensibly because of its high cost—the 2703 program is at no small risk of getting wiped out.

The idea of coordinating care for better results is hardly new. The concept dates back to at least the 1960s, when some pioneering physicians became concerned that ever-increasing medical specialization and the growth of complex chronic diseases among the elderly required a more integrated and scientifically driven approach to health care. These physicians organized the first large-scale health maintenance organizations (HMOs), in which primary care doctors would coordinate care in large, multi-specialty medical group practices that would be part of a system of hospitals, labs, and pharmacies. But the HMO experiment largely fizzled out, and numerous other attempts to encourage integration also failed to take off. One reason is that most health care payment systems, be they private insurance companies or government programs like Medicare, make it difficult for providers to be reimbursed for much of the work—like home visits and coordination meetings—that integrated care typically requires. Another obstacle is cultural: organizations are hard to change, and doctors, nurses, and other health care providers were trained to work in silos. That level of care integration—or “wraparound care,” as some experts call it—remains a challenge to achieve.

Health care professionals in Missouri understood the need for integrated care, especially after a 2006 report by Joe Parks, a researcher and the current director of MO HealthNet, a division of Missouri’s Department of Social Services, showed that patients with serious mental illness were dying twenty-five years earlier than the rest of the population. “They were dying of things we could help with—chronic health problems,” recalls Nancy Gongaware, a senior vice president of outpatient health care at Missouri’s Compass Health Network, “but we needed to develop a new way of taking care of them.” Among other things, Missouri health care officials arranged for primary care doctors, dentists, and psychiatrists, along with social workers, dieticians, and others, to be located in the same building so that patients could have “one-stop shopping” for all their health care needs. But just because these professionals worked under the same roof did not guarantee that they would work together. They were neighbors, but not yet teams.

The opportunity to go further came when Barack Obama signed the Affordable Care Act in 2010. Missouri, under then-Governor Jay Nixon, was one of the first six states to apply for the ACA’s new 2703 grants. Eventually, nineteen states (counting the District of Columbia) would do the same.

Now, instead of showing up in emergency rooms, Pat Powers shows up to work the buffet at a local restaurant. “I feel better about myself,” she says with evident pride. “I know I can handle any situation I come across.”

For years, health policy experts have known that “a lot of the expense in health care comes from poor care coordination,” says Cheryl Damberg, who studies payment reform for RAND. The ACA established policies, backed by billions of dollars, that fast-tracked experiments in new and better ways of delivering comprehensive health care while achieving savings through that improved care.

According to a review by the Missouri Department of Mental Health, the results of the 2703 grant program in that state have been impressive. The more than 23,000 Missourians who have received care under the health home initiative met or exceeded six of nine benchmark goals for disease management after the ACA-supported expansion. For patients with diabetes alone (America’s most costly disease, at approximately $332 billion a year), the number with controlled blood glucose levels rose from 18 percent to 61 percent. The percentage of patients with hypertension and cardiovascular disease who controlled their blood pressure went from 24 to 67 percent, and the share with healthy cholesterol levels soared from 21 to 56 percent. On the cost side, hospitalizations and emergency room visits for this group dropped 14 percent and 19 percent respectively. This saved the state $31 million just in the first year of the program, and the savings have continued, according to Natalie Fornelli, manager of integrated care at Missouri’s Division of Behavioral Health. In 2015, Missouri’s health home program won the American Psychiatric Association’s Gold Achievement Award for community health services. The program is now considered a national model.

If there is a downside to the health home initiative, it is that too few Missourians benefit from it. That’s largely because of the politics of Obamacare. While Governor Nixon, a Democrat, had the statutory ability to request the 2703 grant funds, and did so aggressively, Missouri’s GOP-controlled legislature adamantly opposed accepting federal Medicaid expansion funds under the ACA. As a result, 632,000 Missourians remain uninsured, including 40 percent of the one in ten Missouri residents with serious mental illness. That situation is unlikely to change anytime soon, unless Missouri’s new Republican governor, Eric Greitens, who replaced term-limited Jay Nixon in January, can convince his legislators to change course—an uphill climb at best. Last April, the legislature went in the opposite direction, passing a bill requiring Medicaid recipients to pay an $8 copay for any ER visit that is not deemed an emergency, or for any missed doctor’s appointment. Former Governor Nixon vetoed the bill in July; if the legislature revives it, it will go to Greitens next.

At the national level, the fate of the 2703 program is also in doubt. It’s possible that, as Republican lawmakers in Washington and the Trump administration wrestle with the complexities of repealing and replacing Obamacare, they’ll conclude that ending the 2703 grants would likely cost more in tax dollars than it saves, even as it deprived hundreds of thousands of poor, mentally ill Americans of the coordinated treatment that can save their lives. But, as Sidney Watson, a professor at the Saint Louis University School of Law and an expert on health care access for the poor, observes, Trump’s new Health and Human Services secretary, Tom Price, “has expressed a lot of skepticism about the Medicare and Medicaid demonstration centers.”

Still, the advances made at places like Crider Health Center are real and ongoing, even if, without more 2703 grants, they’re unlikely to spread to other community mental health centers. The improved care at Crider has certainly done a world of good for patients like Pat Powers. “Without it,” she says, “I wouldn’t be here. I’d be gone.”

Home Remedy https://washingtonmonthly.com/2017/03/19/home-remedy/ Mon, 20 Mar 2017 00:20:33 +0000 https://washingtonmonthly.com/?p=64013

A San Diego “pre-hospice” program helps chronically ill patients live longer, live better, and stay out of the hospital—all while saving the health care system money.


Gerald Chinchar isn’t quite at the end of life, but there have been times when it seemed that it might not be far away. The seventy-six-year-old fell twice last year, shattering his hip and femur, and now he navigates his San Diego home in a wheelchair. He has multiple conditions, including diabetes, chronic obstructive pulmonary disease, and congestive heart failure, all of which increase his chances of landing in the hospital.

Chinchar says the hospital is the last place he wants to be. He still likes to watch his grandchildren’s sporting events and play blackjack at the casino. “If they told me I had six months to live or go to the hospital and last two years, I’d say leave me home,” Chinchar proclaims. “That ain’t no trade for me.”

Like Chinchar, most elderly people would rather avoid the hospital in their last years of life. But for many, it doesn’t work out that way: they are in and out of the ER, getting treated for flare-ups of various chronic illnesses. Often, they spend the last few days or weeks of their lives in hospital beds undergoing unpleasant treatments and procedures that have little or no chance of extending their lives in a meaningful way. It’s a massive problem that has galvanized health providers, hospital administrators, and policymakers to search for solutions.

Some seniors repeatedly land in the hospital because they are seriously ill and can’t get care at home that could have prevented a trip to the emergency room. They are not ready for hospice care, which is limited to those expected to live less than six months. But they could still benefit from the type of services hospice provides, including home visits by health providers and medications aimed at relieving pain and other symptoms. Those services not only help improve quality of life; they also keep people out of expensive emergency rooms and inpatient units.

Fortunately for Gerald Chinchar, he landed in one of the few programs set up for people like him. Sharp HealthCare, the nonprofit San Diego health system where Chinchar receives care, has devised a way to fulfill his wishes and reduce costs at the same time. It’s a “pre-hospice” program called Transitions, designed to give elderly patients the care they want and need at home and to help them avoid the hospital.

Through Transitions, social workers and nurses from Sharp regularly visit patients in their homes to explain what they can expect in their final years, help them make end-of-life plans, and teach them how to better manage their diseases. Physicians track the patients’ health and scrap unnecessary medications. Unlike in hospice care, patients don’t need to have a prognosis of six months or less and can continue getting treatment for their diseases.

Before the Transitions program started, the only option for many of its patients in a health crisis was to call 911 and be rushed to the emergency room. Now, they have round-the-clock access to the program’s nurses, just a phone call away. “Transitions is for just that point where people are starting to realize they can see the end of the road,” said San Diego physician Dan Hoefer, one of the creators of the program. “We are trying to help them through that process so it’s not filled with chaos.”

“At this point in the patient’s life, we should be bringing health care to the patient, not the other way around,” said Jeremy Hogan, a neurologist at Sharp.

The chaos for patients and the expense for Medicare that programs like Transitions seek to address are likely to grow in coming years—10,000 Baby Boomers turn sixty-five every day, and many of them have multiple chronic diseases. Transitions was among the first of its kind when it started ten years ago, but several such programs, formally known as “home-based palliative care,” have since opened around the country. They are part of a broader push to improve people’s well-being and reduce spending through better coordination of care and more treatment outside hospital walls.

Health policymakers increasingly recognize that to control health care costs, they must target the sickest patients. About a quarter of all Medicare spending for beneficiaries sixty-five and older is to treat people in their last year of life, according to a report by the Kaiser Family Foundation.

Another expensive group includes those who are seriously ill but not necessarily at the very end of life. People who are chronically ill and have functional disabilities (like the inability to walk or bathe themselves) make up about 14 percent of the population but account for 56 percent of health care costs, according to a 2014 Institute of Medicine report, Dying in America.

But one huge barrier stands in the way of home-based palliative care: Medicare and private insurers have not traditionally paid for it. Under regular fee-for-service Medicare, the federal government reimburses health providers for office visits and procedures, and hospitals for patients in their beds. The services provided by home-based palliative care, like home visits, don’t fit that model.

Sharp has made Transitions work financially because its patients chose Medicare Advantage plans, the HMO-like part of Medicare. Instead of being paid for each visit or procedure, Sharp receives a set amount from the government (via insurers) for all medical services, including hospital-based care, for every patient every month. If Sharp can save money overall by spending more on home visits and less on hospitalizations, it pockets the savings.
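
To make that incentive concrete, here is a minimal sketch of the capitation logic described above. Only the structure comes from the article (a fixed per-member monthly payment, with Sharp keeping whatever it doesn’t spend on care); the member count, payment rate, and cost figures below are hypothetical numbers chosen purely for illustration.

```python
# Illustrative sketch of the Medicare Advantage capitation incentive described above.
# Only the structure comes from the article; every dollar figure here is hypothetical.

MEMBERS = 1_000          # hypothetical Medicare Advantage panel size
PMPM_PAYMENT = 900       # hypothetical fixed per-member, per-month payment to the system


def annual_margin(home_visit_spending: int, hospital_spending: int) -> int:
    """Revenue is fixed by the capitation rate; the system keeps what it doesn't spend."""
    revenue = MEMBERS * PMPM_PAYMENT * 12
    return revenue - (home_visit_spending + hospital_spending)


# Spending more on home visits pays off only if it cuts hospital spending by more.
usual_care = annual_margin(home_visit_spending=200_000, hospital_spending=6_000_000)
with_home_care = annual_margin(home_visit_spending=900_000, hospital_spending=4_500_000)

print(f"Margin under usual care:      ${usual_care:,}")
print(f"Margin with home-based care:  ${with_home_care:,}")
```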

While few Medicare Advantage providers have followed Sharp’s lead, more and more home-based palliative care programs are cropping up nonetheless, mainly as a result of new rules and pilot programs in the Affordable Care Act that reward the quality rather than the quantity of care. But with the Trump administration and Republican congressional leaders vowing to “repeal and replace” Obamacare and making noises about overhauling Medicare, it’s an open question whether the home-based palliative care trend will continue.

Palliative care is for people with serious illnesses, such as cancer, dementia, and heart failure, no matter what their prognosis. The care focuses on relieving patients’ stress, pain, and other symptoms and helps them maintain their quality of life while they continue to receive curative treatment. Hospice care, on the other hand, is designed for people who are expected to live six months or less and have stopped trying to treat an underlying disease; to receive hospice care under Medicare, patients must discontinue such treatment. The idea behind the Transitions program is for patients to first get palliative care and then move into hospice, although they don’t always make that transition.

Delivery system: Nurse Sheri Juan gives Sharp HealthCare Transitions patient Gerald Chinchar a checkup at his home. “If they told me I had six months to live or go to the hospital and last two years, I’d say leave me home,” Chinchar says.

Credit: Heidi de Marco/KHN

Dying in America recommended that all people with serious advanced illness have access to palliative, or pre-hospice, care. About two-thirds of hospitals with fifty beds or more have palliative care programs, according to the Center to Advance Palliative Care. The care is delivered by teams of social workers, chaplains, doctors, and nurses. Until recently, however, few such efforts had opened beyond the confines of hospitals.

Kaiser Permanente set out to address this gap. Nearly twenty years ago, it created a home-based palliative care program, testing it in California and later in Hawaii and Colorado. Two studies by Kaiser and others found that participants were far more likely to be satisfied with their care and more likely to die at home than those not in the program. One of the studies, published in 2007, found that 36 percent of people receiving palliative care at home were hospitalized in their final months, compared to 59 percent of those getting standard care. The overall cost of care for those who participated in the program was a third less than for those who didn’t. “We thought, ‘Wow. We have something that works,’” said Susan Enguidanos, an associate professor at the University of Southern California’s Leonard Davis School of Gerontology, who worked on both studies. “Immediately we wanted to go and change the world.”

But Enguidanos knew that Kaiser Permanente was unlike most health providers. It was responsible for both insuring and treating its patients, so it had a clear financial motivation to improve care and control costs. Enguidanos said she talked to medical providers around the nation about this type of care, but the concept didn’t really take off at the time. Providers kept asking the same question: How do you pay for it without charging patients or insurers? “I liken it to paddling out too soon for the wave,” Enguidanos said. “We were out there too soon. . . . But we didn’t have the right environment, the right incentive.”

Dan Hoefer’s medical office is in the city of El Cajon, which sits in a valley in eastern San Diego County. Hoefer, a former hospice and home health medical director and nursing home doctor, has spent years treating elderly patients. He learned an important lesson when seeing patients in his office: when patients suffered a crisis, which could mean anything from increased swelling in their legs to difficulty breathing, “they were far more likely to be admitted to the hospital than make it back to see me,” he said.

Many of Hoefer’s patients would decline quickly after being hospitalized. Simply being immobilized for even a short period could cause elderly people to lose strength and become confused. Even if their immediate crisis was treated successfully, they sometimes left the hospital less able to take care of themselves. While in the hospital, some got infections, suffered from delirium, or fell.

Hoefer’s colleague Suzi Johnson, a nurse and administrator in Sharp’s hospice program, saw the other side of the equation: patients admitted to hospice care would make surprising turnarounds once they started getting medical and social support at home and got off the hospitalization merry-go-round; some lived longer than doctors had expected. In 2005, she and Hoefer hatched a bold idea: What if they could design a home-based program for patients before they were eligible for hospice?

Thus Transitions was born, modeled in part on the Kaiser experiment. Hoefer and Johnson said the Sharp program focuses more on preventing exacerbations among chronically ill patients and doesn’t require patients to be hospitalized before entering the program. Rather, patients can be referred directly by primary care providers or specialists.

Hoefer and Johnson set out to convince doctors, medical directors, and financial officers from Sharp’s physician groups and hospitals to try it. But they met resistance from doctors and hospital administrators who were used to getting paid for seeing patients and were reluctant to refer them to palliative care. Despite the concerns, the Grossmont Hospital Foundation, one of the philanthropic foundations of Sharp HealthCare that helps fund patient care and innovative projects, gave Hoefer and Johnson a $180,000 grant to test out Transitions. And in 2007, they started with heart failure patients and later expanded the program to those with advanced cancer, dementia, chronic obstructive pulmonary disease, and other progressive illnesses.

They began to win over some doctors, convincing them to refer patients to Transitions. Jeremy Hogan, a neurologist with Sharp, was initially skeptical, but after he referred some of his dementia patients to the program he quickly realized that the extra home support meant fewer panicked calls to his office and emergency room trips for his patients. Hoefer said doctors started realizing that home-based care made sense for these patients—many of whom were too frail to get to a doctor’s office regularly. “At this point in the patient’s life, we should be bringing health care to the patient, not the other way around,” he said.

Across the country, more doctors, hospitals, and insurers are starting to see the value of home-based palliative care and are figuring out how to pay for it, said Kathleen Kerr, a health care consultant who researches palliative care. One such program, Advanced Illness Management, is run by Sutter Health in Northern California. The program is designed to help patients with late-stage chronic illnesses manage symptoms and medications and plan for the future. Data on the program show that patients had 63 percent fewer hospitalizations after enrolling in it compared to the ninety days prior—an average savings of about $2,000 per member each month.

Providers are motivated in part by a growing body of research. Studies have shown that patients with terminal illnesses who receive palliative care live longer than those who receive the usual care for their conditions. A study published in January showed that in the last three months of a patient’s life, medical care in a home-based palliative care program cost $12,000 per patient less than more typical treatment. Patients in the program also were less likely to use the emergency room and more likely to go into hospice. Two studies of Transitions in 2013 and 2016 reaffirmed that such programs save money. The second study, led by outside evaluators, showed that the program saved more than $4,200 per month for each cancer patient and nearly $3,500 per month for those with heart failure.

One reason for the success of these programs is that the teams get to know their patients well, said Christine Ritchie, a professor at the medical school at the University of California, San Francisco. In addition, social workers and nurses say that through home visits they may notice problems—that a patient is out of medication, doesn’t have any help at home, or has no food in the refrigerator—that could lead to a physical and mental decline. “There is nothing like being in someone’s home, on their turf, to really understand what their life is like,” Ritchie said.

On a cold morning in January, Sheri Juan and Mike Velasco, a nurse and a social worker who work for Sharp, walk up a wooden ramp to Gerald Chinchar’s front door. Juan is rolling a small suitcase behind her that holds a blood pressure cuff, a stethoscope, books, a laptop computer, and a printer. They are greeted by Gerald’s wife, Mary Jo. Before Gerald’s doctor suggested, late last year, that he enroll in Transitions, the couple already knew about the program; Mary Jo’s mother was in it before she entered hospice and died, in 2015, at the age of 101.

Gerald served in the Navy in the late 1950s as a payroll clerk, and then he and Mary Jo married and moved to San Francisco. He traveled the country for jobs, painting and sand-blasting fuel tanks. He suffers from breathing problems, including asthma, which he’s had since he was a boy. He also has diabetes, the disease that led to his mother’s death. He recently learned that he has heart problems as well.

Chinchar knows he’s not a young man anymore; after all, he has nine grandchildren and four great-grandchildren. He also never expected to live into old age—his father, a heavy drinker, died of cirrhosis of the liver at forty-seven. Still, he believes he has a lot of life left. So he and Mary Jo want to know how to keep his precarious health from getting worse.

This is where Juan comes in. Her job is to make sure the Chinchars understand Gerald’s diseases so he doesn’t have a flare-up that could send him to the emergency room. She sits beside the couple in their living room. Any pain today? she asks. How is your breathing? Are you more fatigued than before? Is your weight the same?

Juan then checks Gerald’s blood pressure and examines his feet and legs for signs of swelling. She looks through his medications and tells him which ones the doctor wants him to stop taking. “What we like to do as a palliative care program is streamline your medication list,” she says. “They may be doing more harm than good.”

Mary Jo says she appreciates the visits, especially the advice about what Gerald should eat and drink. Her husband doesn’t always listen to her, she says. “It’s better to come from somebody else.” Indeed, Gerald has given up alcohol and has cut back on some of his favorite foods.

On another January day, doctors, nurses, and social workers gather in a small conference room for their bimonthly meeting to discuss patient cases. Information about the patients—their hospitalizations, medications, diagnoses—is projected on the wall. The group’s task is to decide if new patients are appropriate for Transitions, if current patients should remain there, and whether any current or new patients should be admitted to hospice. Patients typically stay in Transitions for about seven or eight months, but some last as long as two years. Some stabilize and are discharged from the program. Others go directly to hospice. Still others die while they are still in Transitions.

It’s nearly impossible to predict how long someone will live; any estimate is an inexact calculation based on the patient’s mental state, appetite, social support, the severity of the disease, and other factors. Nevertheless, the team tries to do just that, and if a patient is expected to live less than six months, the team may recommend hospice care, which Sharp also provides. This is the case with an eighty-seven-year-old woman suffering from Alzheimer’s disease. She has fallen many times, sleeps about sixteen hours a day, and no longer has much of an appetite. Those are all signs that the woman may be close to death, so she is referred to Sharp’s hospice program.

The group then turns its attention to an eighty-nine-year-old woman with dementia. She also suffers from depression and kidney disease, and was hospitalized twice last year. “She’s a perfect patient for Transitions,” Hoefer tells the team, adding that she could benefit from extra home help.

Another morning in January, Hoefer is seeing patients in his office. One is ninety-four-year-old Evelyn Matzen, who has started to lose weight and is having more difficulty caring for herself. They took her into Transitions eight months ago because “we were worried that it was going to start what I call ‘the revolving door of hospitalization,’ ” Hoefer says. Now, Hoefer checks her labs and listens to her chest. Her body is starting to slow down, but she is still doing well, he tells her. “Whatever you are doing is working,” he says. Matzen’s son Bill says that she has started to stabilize since going into Transitions. “She is on less medication, she is in better condition, physically, mentally,” he says.

Hoefer explains that frail elderly patients have fewer reserves to tolerate medical treatment, and hospitalization in particular. Bill Matzen says his mother learned this the hard way after a recent fall. Though the Transitions nurse had come to see her, the Matzens decided to go to the hospital because they were still concerned about a bruise on her head. While she was in the hospital, Evelyn Matzen started hallucinating and grew agitated. Being in the hospital “kicks her back a notch or two,” her son says. “It takes her longer to recover than if she had been in a home environment.”

Outpatient palliative care programs are cropping up in various forms, some run by insurers, others by health systems or hospice organizations. Others are for-profit, including Aspire Health, which was started by former Senator Bill Frist in 2013. Some of this growth is because providers have seen that it works, but it is also because of changes in how providers are being paid. For example, the federal government began to shift Medicare payments under President Obama from a model based on traditional fee-for-service to one that rewards better care.

The Affordable Care Act has also driven the trend by giving providers the freedom—and the funding—to innovate. Sutter Health expanded its program after receiving a $13 million, three-year grant in 2012 from the Center for Medicare & Medicaid Innovation, an agency created by the ACA to fund experiments that improve health and control costs. Another program in New York was part of an accountable care organization, a network of doctors and hospitals formed under the health law that shares financial and medical responsibility for patient care.

Late last year, the University of Southern California and Blue Shield of California received a $5 million grant through another Obamacare creation, the Patient-Centered Outcomes Research Institute. They plan to provide and study different types of community-based palliative care and measure the impact on emergency room visits, hospital stays, caregiver depression, and patients’ symptoms and pain.

Christine Ritchie believes there will be even more home-based programs in the years to come, especially if palliative care providers work alongside primary care doctors. “My expectation is that much of what is being done in the hospital won’t need to be done in the hospital anymore, and it can be done in people’s homes,” she said.

But home-based palliative care is still not widespread, and the programs that exist face hurdles. First, there are not enough trained providers available to meet the need. In addition, creating and sustaining home-based palliative care programs requires a cultural shift for doctors and patients. Some providers are unfamiliar with the concept, so they may not refer patients. And patients themselves may be reluctant to participate, especially those who haven’t been told clearly that their condition is terminal.

Whether the home-based palliative model continues to spread also depends on what President Trump and the Republican Congress do with health care. If they repeal and replace Obamacare but don’t include provisions to encourage innovation, the outpatient palliative trend may lose momentum. The new health and human services secretary, Tom Price, has criticized the Center for Medicare & Medicaid Innovation and efforts to move health care providers away from fee-for-service payments. Republicans have also signaled support for transforming Medicare and providing seniors with premium support to buy private insurance, a move that some experts believe could result in more seniors selecting Medicare Advantage. In theory, that could create more opportunities for innovation, since Medicare Advantage plans provide greater payment flexibility to health care providers. In practice, however, very few providers have utilized that flexibility the way Sharp has.

Researchers say it’s too early to tell what will happen, but some fear that without that flexibility provided by the Affordable Care Act, the recent gains in home-based palliative care could be lost. “To have that go away when you know you could be doing a lot of good is heartbreaking,” Kathleen Kerr said. “I am very worried about what’s going to happen.”

Stanford’s Big Health Care Idea https://washingtonmonthly.com/2017/03/19/stanfords-big-health-care-idea/ Mon, 20 Mar 2017 00:15:08 +0000 https://washingtonmonthly.com/?p=64015

How an unconventional team of doctors figured out how to provide high-need university employees with better health care, for less money.

When Arnold Milstein arrived at Stanford University in 2010 to create the Clinical Excellence Research Center, he already had several careers’ worth of experience in medical innovation. He had been in private practice as a psychiatrist; founded a health care consulting company; examined the organizational structure of hospitals and private practices, poring over the data on the quality of health care; and applied what he learned to improve care for Boeing employees in Seattle and hotel workers in Atlantic City. The biggest lesson he took from all those experiences was that American health care was ill-serving the very people who needed it the most. He had come to Stanford to study ways to make health care work better.

Tall and slim, with a kind face and short hair cropped straight across his forehead, the sixty-seven-year-old Milstein explains the problem succinctly: “It’s a 5/50, 10/70 world.” That is, 5 percent of patients account for 50 percent of health care spending, and 10 percent account for 70 percent, whether they’re insured privately or by the government. These high spenders are the sickest and frailest, patients Milstein calls the “medically fragile.”

At Stanford, in sunny, health-conscious California, Milstein saw the same thing. As a member of Stanford’s employee benefits committee, which oversees the university’s self-funded health insurance plan, he knew that medically fragile Stanford employees were sucking up the vast majority of health care spending and straining Stanford’s system, without many signs of improved health. He had a theory for why this was happening. The patients weren’t the problem; the problem was that the health care system was treating them the way it treats everybody else.

Milstein also had a theory for how to solve the problem. What if you took the concept of an intensive care unit—a single location that pulls together all the personnel and technology needed to care for the sickest patients in a hospital—and applied it to patients who were well enough not to be in the hospital but a lot sicker than the average patient in a primary care doctor’s practice? Some of these people are old and frail, but many are young, hold down jobs at Stanford, raise families, and coach Little League, even though they have one or more chronic illnesses, like diabetes, depression, or cancer. They are among the most expensive to care for not just because they are sick, but also because the health care system is inefficient and disorganized when it comes to taking care of their multiple conditions. Why not organize the care around them the way a hospital organizes all the nurses, doctors, and technology needed for patients in the ICU?

Through his experience with Boeing and in Atlantic City, Milstein knew that the hardest part of setting up this new model of care would be not the patients, but the limitations of the current system, which organizes individual physicians around individual appointments, not overall patient health. Years ago, he said, he had realized that for this kind of project he would need what he jokingly called a “mission impossible” team, one willing to organize a radically new kind of primary care practice. He immediately thought of Alan Glaseroff, a “diabetes guru” who ran a private practice and led a group of 240 physicians in California’s Humboldt County. Glaseroff’s group had figured out how to control diabetes far more effectively than most primary care practices without raising costs. So Milstein invited Glaseroff to meet; Glaseroff, in turn, invited Ann Lindsay, Humboldt County’s public health officer and the co-owner of Glaseroff’s own private primary care practice—and his wife. Milstein asked the couple if they would consider bringing their innovative model of care down from the forests of Humboldt County to the rolling hills and shopping malls of Palo Alto. It would be part of a new clinic, Glaseroff remembers Milstein telling him—Stanford Coordinated Care, or SCC—that would be “designed specifically for the 5 percent of people who cost 50 percent of the money.” The pair decided to give it a try.

If you exit the eight-lane freeway that bisects the hills just south of San Francisco, pass the turn for the local community college, and travel up a winding hill dotted with eucalyptus trees, you’ll find a wood-shingled bungalow with a sign next to the door: “The Emerald Hillbillies.” The name is a play on the grandiose name of the town, Emerald Hills, and the doctors who have lived there, with their poodle, Lola, since 2011.

Glaseroff and Lindsay met in college and attended medical school at Case Western Reserve University in Cleveland. For both of them, medical training was essential, but their personal experiences were perhaps more instructive.

Lindsay’s epiphany came first. As a college student in the 1960s, involved in the women’s health movement, she did something that women a generation previous never would have considered: a breast self-exam. She found a lump. The next logical step was to visit her doctor—who asked, condescendingly, “What are you doing feeling your breast?”

“At that time, the medical care for women in particular was that things were done to them rather than done with them,” she said. In part as a result of that encounter, she decided to become a doctor herself, committed to the idea that physicians should work with patients, male and female, not dictate to them.

Glaseroff’s moment of enlightenment came in 1983, when he was diagnosed with Type 1 diabetes. By that time, the couple was already in Humboldt, sharing a primary care practice. When he received the diagnosis, he pictured his life unspooling like that of a family friend with the disease: amputations, dialysis, and, finally, death.

What he learned was that diabetes only kills if it’s not well managed. Chronic illness requires patients to reorient their whole lives around daily vigilance. And, Glaseroff said, having a supportive medical team was essential. He incorporated those lessons into his own practice. “What I did was stop yelling at people,” he said. When doctors scold patients about the consequences of their weight, diet, and exercise habits, he explained, all it does is scare them.

Soon, he was seeing most of the diabetes patients in Humboldt County. While he emphasized self-management, he also saw that the way primary care practices typically work—medical assistant measures blood pressure and weight; doctor spends seven minutes with patient, writes prescription, tells patient what to do—wasn’t doing the trick. Many of Glaseroff’s patients would do well for a while and then, for lack of a better word, relapse. A prescription for insulin does no good for a patient who doesn’t have a refrigerator to store it in. If the reason someone can’t control her eating is that she is coping with the untreated after-effects of trauma, nutrition counseling is useless.

Suddenly Glaseroff and Lindsay were pulling in multiple disciplines to meet their patients’ mental and physical needs. Their medical assistants, who in traditional primary care offices may do little more than take vitals and escort patients to exam rooms, began following up with the clinic’s patients on the phone—monitoring their progress, providing test results, and answering questions. This new strategy improved the care they were able to provide to their patients, but it was tough under the standard fee-for-service model, which pays physicians only for seeing patients in the office. Glaseroff and Lindsay managed to pay their assistants by making do on half of a typical primary care doctor’s income each.

While Glaseroff was learning how to manage diabetes, Lindsay became the public health officer for Humboldt County. She’d always understood the concept of “social determinants of health”—that is, that things like education levels, socioeconomic status, and the physical environment can make it easier or harder for patients to get and stay healthy. But until she worked in public health, she didn’t have deep experience with it.

Then came an outbreak of shigella—infectious diarrhea—at a homeless encampment on a beach jetty in Humboldt. When Lindsay sent public health outreach workers out to the beach, they found that what most of the jetty residents needed wasn’t health care; it was tires on their cars, spaying or neutering for their pets, official identification, addiction counseling. And, inevitably, housing. So the public health workers got to work hooking the people on the jetty up with social services that were already available. As more people were housed, sanitation improved—and the shigella outbreak evaporated.

When Lindsay and Glaseroff arrived at Stanford to create the coordinated care clinic, they had three guiding principles: one, they wanted to emulate the community outreach model Lindsay had used to deal with the shigella outbreak; two, they knew that engaged patients fared better; and three, fee-for-service payment would have to go.

The fee-for-service model is straightforward and profitable for specialists like orthopedic surgeons, gastroenterologists, dermatologists, and cardiologists. They get a handsome fee each time they perform a knee surgery, do a colonoscopy, or spend thirty seconds burning off a suspicious mole. For primary care doctors, however, who don’t do procedures, and whose fees are far lower, the only way to make more money is to see as many patients as possible a day, often in fifteen-minute slots. Short visits work for healthy patients who have a sudden, minor ailment that can be cleared up with a prescription or a referral. But for patients who are medically fragile, fifteen minutes isn’t enough.

Lindsay and Glaseroff began interviewing potential SCC patients—Stanford employees who, on average, had nine separate medical conditions and cost the health plan $43,000 per year, according to data from a study funded by the National Institutes of Health. The couple heard over and over again that these patients were dissatisfied with their primary care. They were treated like a list of conditions, they said, and were constantly being shuttled between specialists who didn’t communicate with one another. If one specialist glossed over something in the chart, the patient had to catch it. And their primary care providers—whom many of the patients valued—just didn’t have time to help them decipher the complicated and often conflicting medical advice they were getting from their various doctors.

“In many ways, they were just overwhelmed,” Lindsay said. “Many had lost all hope. . . . No one [on the medical team] put their health needs in the context of their overall goals.” And how could they, in fifteen minutes?

To figure out how to improve care for the most fragile patients, Lindsay and Glaseroff analyzed the records of the biggest regular health care users among Stanford employees and their dependents, looking for conditions shared by multiple patients. The goal, said Glaseroff, was to build a team that could meet the most common needs. That way, they wouldn’t have to refer to specialists outside of the clinic. Fewer referrals meant fewer additional charges to Stanford Health Care—the university’s self-funded health insurance system—and fewer copays for patients. The intensive care would also, theoretically, help patients avoid health crises that often sent them to the emergency room.

The final team was small but powerful: two physicians (Glaseroff and Lindsay each served part-time and together made up less than a single full-time equivalent physician), a registered nurse, a physical therapist, a social worker, a pharmacist, and four medical assistants who would be trained to act as “care coordinators.”

Glaseroff and Lindsay divided the total cost of those providers’ salaries by twelve to arrive at a monthly cost for the clinic, and calculated that they could provide care to about 500 patients. Spreading that monthly cost across the 500-patient panel produced their figure: the clinic needed $286 per patient per month from the Stanford Health Care plan. The monthly amount would allow the practice to do all the work that was “unbillable” under fee-for-service, like phone calls, emails, and video calls, to keep patients connected without having to come into the office.
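
As a rough check on that arithmetic, the sketch below works backward from the two numbers the article supplies, the $286 rate and the roughly 500-patient panel; the implied annual team budget is an inference from those figures, not a number reported by Stanford.

```python
# Back-of-envelope check of the SCC capitation rate, using only the figures above.
# The implied annual budget is inferred from them, not reported in the article.

patients = 500                    # planned panel size
rate_per_patient_month = 286      # dollars per patient per month from the health plan

monthly_clinic_budget = patients * rate_per_patient_month   # $143,000 per month
implied_annual_team_cost = monthly_clinic_budget * 12       # about $1.7 million per year

print(f"Monthly clinic budget: ${monthly_clinic_budget:,}")
print(f"Implied annual cost of the care team: ${implied_annual_team_cost:,}")
```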

Under fee-for-service, most primary care practices want to keep their waiting rooms filled, whether the patient really needs to come in or not. “If a patient thought they had a urinary tract infection, they would have to come in,” said Glaseroff. Under the per-patient, per-month payment plan, “We handled them on the phone and got them treated. We handled them in a way that under fee-for-service would have been financial suicide.”

But the even more important benefit of up-front payments, Glaseroff said, was that if patients didn’t have to come into the office, the clinic could see fewer patients in a day. That let first-time appointments expand from fifteen minutes to two hours. The physician would spend an hour with a patient, going through their medical history thoroughly and addressing complaints. That session would be bookended by two thirty-minute meetings with a care coordinator, who would help the patient set reasonable goals. Follow-up appointments would allow thirty minutes with a doctor and another thirty with a care coordinator. In between, patients could call or email their care coordinators and doctor at any time, and get a thorough response.

They had determined the model; now they had to sell it to Stanford. It wasn’t easy, said Milstein. For one thing, while SCC stood to save Stanford Health Care substantial money, it would hurt Stanford’s hospital in the short term. Assuming they succeeded in keeping patients out of the hospital, fewer admissions would mean less revenue, because the hospital is paid on a fee-for-service model. The insurance side stood to win, the hospital to lose.

They also had to win over their Stanford Medical Center colleagues. Unlike in Humboldt County, where Glaseroff had made a name for himself leading the local physicians’ association and everyone knew Lindsay from her public health work, none of the doctors at Stanford had ever heard of the couple. And bureaucracy meant that what would have taken three months to set up in Humboldt took eighteen months at Stanford.

Theresa Munoz Wynn was a typical patient for SCC. When she began coming to the clinic, five years ago, she had diabetes, high blood pressure, high cholesterol, sleep apnea, and depression. Even with insurance, she was spending more than $500 a month out of pocket for four insulin shots a day, another diabetes drug, cholesterol and blood pressure drugs, and an antidepressant.

For all the money she (and Stanford Health Care) were spending, her blood sugar was still wildly out of control, she smoked a pack of cigarettes a day, and she woke up just about every morning feeling achy and heavy, with a headache and, sometimes, flu-like symptoms. Her first thought would be, “Ugh. I wish I hadn’t woken up.” She worked up to seventy hours a week at Stanford’s contracts and procurement department, and all she wanted when she got home was to get back into bed.

“I was on a path of self-suicide,” she said. “I didn’t have a gun or a handful of pills or a rope, but I was on a collision course. I didn’t care. How I felt was that everything was futile. I was going to be sick forever.”

Most primary care practices struggle to protect patients like Munoz Wynn from health crises in the midst of what often seems like intractable pathology. Insurers struggle to predict the cost of their care, and the sticker price of medical fragility under a fee-for-service model can be astronomical for payers, whether they are the employer, a private insurer, or the government.

Helping these patients begins at SCC with a team-wide meeting every Friday. On a typical Friday, Samantha Valcourt, a registered nurse, takes the train to Palo Alto from her home in San Francisco. As the train ticks past the stops, she scans through her work email, checking for messages from patients with critical needs or questions. When she gets to the office, she takes the elevator to the fourth floor and enters what is, essentially, SCC’s command center.

It’s a large room, set up for collaboration: twelve workstations line the four walls, with small pony walls dividing each team member slightly from the others. Just before 9 a.m., Valcourt is at her desk and, like everyone else, juggling calls and emails from patients. One patient might report that his prescriptions haven’t been filled. Another might be anxious and need thirty minutes of reassurance to stave off a trip to the ER. In just about all of the calls, the care coordinators ask their patients an important question: What’s your goal?

For one diabetic patient with toe pain, the answer is to dance again. For another with heart failure, it’s the dream to fly to Norway for a family reunion. For many patients, said care coordinator Delila Coleman, it’s simply to get off most of their medications.

As they go through the calls and emails, Valcourt, Coleman, and the rest of the team constantly remind patients that they are in charge of their own health and can make their lives better. The theory is that framing care in terms of helping patients achieve their own goals makes them more hopeful, more motivated to care for themselves, and more likely to call SCC before heading to the ER when they may not need to.

At 9 a.m., team members turn from their desks and assemble around a modest table. After five minutes of mindfulness exercises, they begin reviewing new patients and existing patients who are experiencing complications or are in need of additional support. After discussing a patient’s medical issues, staff members divvy up duties for what the patient needs—a care coordinator will volunteer to call the patient back to talk about his or her goals, or the physical therapist will schedule an appointment to address pain issues, or the pharmacist will do a review to make sure the patient is on the right combination of medications. The approach sounds simple, but it’s completely different from standard American health care, in which the various practitioners often only know what they see in the chart—if they even have time to thoroughly review it.

Over time, that approach saved Theresa Munoz Wynn’s life. First, she started getting mental health and nutrition counseling. With Coleman, she began to devise an exercise plan. Every once in a while, when her steps toward self-care faltered, she would be the subject of that 9 a.m. meeting, and, Munoz Wynn said, the team would come back to “regroup and redirect my care until we found a self-management approach that worked.”

Today, Munoz Wynn doesn’t smoke. She has lost 100 pounds, is no longer diabetic, and doesn’t have high blood pressure. The only medication she still takes is an antidepressant. With more energy, she has picked up hobbies. She crochets now. She has joined some clubs. She is back in school, working toward a bachelor’s degree in business.

In March 2016, Lindsay and Glaseroff presented the initial results of their approach, based on the clinic’s first 253 patients. After six months at SCC, the overall cost of their health care dropped by 13 percent compared to the previous six months, largely due to a 59 percent drop in ER visits and a 29 percent drop in inpatient admissions. (Those numbers exclude one astronomically expensive heart transplant patient.) Using the $43,000 average health care cost figure, that would amount to a savings of about $1.4 million in one year for that cohort.
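
For readers who want to see where the $1.4 million comes from, the estimate follows directly from the figures already given (253 patients, a $43,000 average annual cost, and a 13 percent reduction); the sketch below simply redoes that arithmetic and assumes, as the article’s estimate implicitly does, that the six-month reduction holds for a full year.

```python
# Reproducing the article's rough savings estimate from the figures it cites.
# Assumes the 13 percent reduction observed over six months persists for a full year.

patients = 253               # initial cohort, excluding the heart transplant outlier
avg_annual_cost = 43_000     # dollars per patient per year before joining SCC
cost_reduction = 0.13        # overall drop in health care costs after six months

baseline_spending = patients * avg_annual_cost            # about $10.9 million per year
estimated_savings = baseline_spending * cost_reduction    # about $1.4 million per year

print(f"Estimated one-year savings for the cohort: ${estimated_savings:,.0f}")
```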

Today, Lindsay and Glaseroff are in the midst of moving back to their beloved Humboldt, which Lindsay missed every day of the five years they spent at Stanford. Glaseroff will commute to Milstein’s Clinical Excellence Research Center, where he will help spread SCC’s model and others the center has developed. Lindsay, meanwhile, will be working remotely with the Pacific Business Group on Health, a large group of self-insured employers who are looking for ways to improve the quality of care and reduce costs, and might be willing to test out SCC’s model.

Spreading that model will depend in part on how the Trump administration and Congress approach their promise to “repeal and replace” Obamacare. A key provision in the Affordable Care Act was the creation of the Center for Medicare & Medicaid Innovation, which supports the development and implementation of new models of care delivery. Recently, the Department of Health and Human Services, which runs the CMMI, released a new rule for care for medically fragile patients based in part on Stanford’s model. But Tom Price, the new health and human services secretary, has made no secret of his disdain for the CMMI and any effort to reform fee-for-service payment.

The stakes are not merely financial. The care Theresa Munoz Wynn gets from SCC has restored not just her health, but also her dignity and self-worth. “I’m a person, not a statistic,” she said. “I don’t want to sit and die anymore. I want to live.”

Charles Peters on Recapturing the Soul of the Democratic Party https://washingtonmonthly.com/2017/03/19/charles-peters-on-recapturing-the-soul-of-the-democratic-party/ Mon, 20 Mar 2017 00:10:56 +0000 https://washingtonmonthly.com/?p=63800

In a new book, the Washington Monthly founding editor explains where liberal elites went wrong — and suggests a way forward.

Charles Peters, the Washington Monthly's founder and longtime editor in chief, died two years ago at 96.

Most of us, as we get older, tell ourselves that we’ll keep working past age sixty-five, or at least use our skills and experience productively in retirement. That’s especially true of writers. But few of us will pull off what Charlie Peters has done. At ninety years old, Peters, my mentor and the founding editor of the Washington Monthly, has just published an important book on the central issue facing the country.

We Do Our Part: Toward a Fairer and More Equal America by Charles Peters

We Do Our Part is a history of how American political culture evolved from the communitarian patriotic liberalism of Peters’s New Deal youth to a get-mine conservatism in which someone like Donald Trump could be elected president. It’s a fall-from-grace story interlaced with Peters’s rich life experiences and generally consistent with the Greatest Generation narrative we’ve all come to know. The arguments and anecdotes will also be familiar to anyone who has read Peters’s previous books and the Tilting at Windmills column he wrote for so many years.

But as he told me when, as a young Washington Monthly editor, I groused about having to commission a version of a story we’d previously published, “there’s no sin in repeating the truth if the truth hasn’t sunk in yet.” The truth Peters aims to impart in this book is one that all Americans, and especially liberals, need to understand: An America in which the elite serves the interests of the majority isn’t a pipe dream. That world actually existed, in living memory. And there are signs, in the country’s reaction to the election of Donald Trump, that it could exist again.

Peters was a six-year-old in Charleston, West Virginia, when Franklin Delano Roosevelt took office at the height of the Great Depression. He remembers unemployed men, mostly from the outlying rural areas, selling apples on the street corners and knocking on the back door of his home asking for food. He also vividly remembers the popular culture of his youth—Spencer Tracy and Jimmy Stewart playing Average Joe heroes, comedies that mocked the pretensions of the rich. Over the course of the 1930s he saw the numbers of apple sellers and beggars decline as a result of New Deal policies that were crafted and implemented by thousands of idealistic bureaucrats who had poured into Washington to do their part for the country.

At seventeen, he caught a glimpse of the most brutal side of that era when the local police chief gave him a tour of the jail and, “trying to treat me as a man of the world, said he wanted to show me how they dealt with niggers. He opened a door to a closet that was full of bloody garments.” But soon after, as an Army draftee, Peters broke his back in basic training, and during several months spent recuperating in a racially integrated hospital ward saw signs of a more hopeful future. “Our laughter came so frequently and with enough volume that the nurses would tell us to quiet down. There was absolutely no racial tension. [It]…made you think of what could be.”

From there came Columbia University, law school at the University of Virginia, and a move home to Charleston to join his father’s law firm. In 1960 he ran for the state legislature while also helping lead John F. Kennedy’s presidential primary campaign in West Virginia. Both men won, and after a short time in the statehouse Peters, like the young New Dealers a generation earlier, went to Washington. There he ran evaluations for the newly founded Peace Corps, a job he held well into the Johnson administration.

In the standard telling, the decline of big government liberalism begins sometime around the Tet Offensive and the assassination of Bobby Kennedy. Peters fixes the date much earlier: 1946. That’s the year a number of senior advisers to the recently deceased FDR, people like Thurman Arnold and Abe Fortas, decided to become lobbyists. Few New Dealers had done this before, so the connections and insider knowledge these men possessed were rare and valuable. Arnold and Fortas grew rich and powerful—the advance guard of what would become a vast Washington industry.

Peters’s concern isn’t just with how lobbying corrupted the political process, though it certainly did that—Fortas, for instance, was denied the job of chief justice of the Supreme Court thanks to shady payments from a client-connected foundation—but more broadly with how it corrupted the incentives and worldview of those who came to Washington. Men like Fortas, a brilliant Yale Law School grad from a modest background who owned multiple homes and Rolls-Royces, set a new lifestyle standard in Washington. As more staffers and ex-congressmen followed the lobbying path, those still in government began to see their salaries, which they once considered comfortable, as penurious. (Eventually they became so, as all the high incomes bid up real estate prices and the local cost of living.)

This acquisitiveness was connected to another rising sin: snobbery, specifically the practice of signaling superiority to the hoi polloi through one’s purchases and discriminating tastes in food, drink, and culture. JFK himself, despite his war heroism and inspiring call to service, embodied the trend by marrying the high-born, fashionable Jacqueline Bouvier and surrounding himself with celebrities.

The twin viruses of greed and snobbery are not, to say the least, conducive to a focused and sympathetic concern for average Americans. But Peters reminds us that these behaviors were not widespread among educated people in Washington or throughout America in the 1950s and ’60s. The postwar prosperity and compression of incomes continued, the draft was still nearly universal—even baseball greats served their two years—and the federal government continued to deliver impressive new national projects, from interstate highways to Medicare, that the vast majority of Americans appreciated.

All that changed in the tumultuous late ’60s and early ’70s. Rising crime, race riots, and the spectacle of draft-deferred college students protesting the Vietnam War while working-class kids fought and died all alienated white working- and middle-class voters. Lyndon Johnson’s lies about Vietnam, Richard Nixon’s about Watergate, and Jimmy Carter’s fecklessness made educated Baby Boomers cynical about the government.

Under these conditions, the viruses of snobbery and selfishness spread wildly over the course of the 1970s and ’80s. Graduates from top colleges flocked to high-paying jobs at law firms and investment banks rather than to public service, and the caliber of the civil service accordingly declined. Magazines that catered to consumer chic and cultural signaling, like New York, Vanity Fair, and Washingtonian, grew fat with advertisers and subscribers. On PBS, the TV home of the educated elite, Louis Rukeyser’s Wall Street Week became the number one show.

“Money had become a major and open interest of the meritocratic class,” writes Peters, in a way it simply hadn’t been from the 1930s through the ’60s. As a consequence, “the cause of lower taxes and of conservatism in general flourished, as shown by the election of Ronald Reagan in 1980.” Even elites who didn’t support Reagan were sympathetic to the growing idea that the market should deliver more “shareholder value.” So they didn’t protest (some even cheered) when corporations closed plants, busted unions, and spent their cash on stock buyback schemes rather than on new products and services. To the extent that they expressed their public spiritedness, it was by supporting causes—gay rights, the environment—that weren’t the central concerns of most middle- and working-class voters, whose incomes were stagnating while the meritocrats’ were soaring.

The result was greater and greater resentment of the educated elite. The Rush Limbaughs and Roger Aileses of the world fed off that resentment to boost their ratings and advance a conservative movement that didn’t, in the end, improve their audiences’ economic situation—a fact that Trump exploited by running against establishment conservatives as well as liberal elites.

Peters credits Bill Clinton with being the only Democratic president or candidate in decades who managed, through his policies and gift for empathy, to bridge the gap between the meritocrats and the white middle and working classes. And he sees evidence that Democrats have awakened to the problems of greed, snobbery, and elite detachment, including “the radical increase in awareness of income inequality” and “some meritocrats overcoming their snobbery to make a serious effort to understand the Trump vote.” He also sees signs “that people are beginning to question their relentless pursuit of money, or at least some of the reasons why they think they have to make a lot of money.”

More concretely, he is heartened by examples of elites returning to government service. These include the investment banker Steve Rattner, who joined the Obama administration and helped save the auto industry, and the top Silicon Valley talent Obama personally recruited to the new U.S. Digital Service after the disastrous rollout of the health care exchange website. Peters makes a plea for more Americans, especially liberals, to run for office at the local, state, and national levels—something that, in the months since his book went to press, actually seems to be happening.

Charles Peters with President John F. Kennedy.

If anything, I think Peters underestimates the degree to which Americans are hungry to serve. What confounds his call for more of the best and brightest to join government is a lack of opportunity. The problem is political. There are eight applicants for every slot in AmeriCorps, the national service program founded by Bill Clinton. But Democrats’ attempts to expand the program have been consistently checked by Republicans. Trump’s budget office has drawn up plans to eliminate it altogether. More broadly, the federal workforce, at 2.8 million employees, is the same size it was in the 1960s when Peters was part of it, even though the U.S. population since then has more than doubled and the federal budget has quadrupled in real terms. Lawmakers control the federal head count and don’t want to be seen as “growing the bureaucracy.” The most Democrats in Congress have been willing to do is beat back repeated Republican efforts to further decimate the federal workforce.

To make up for the inadequate number of staff, the government increasingly relies on contractors. Peters bemoans this trend, citing numerous examples of how it has hurt government’s performance. He’s right. But he doesn’t call for the obvious solution: boost the number of federal employees so more of the work can be done in-house. This would require hiring a million new federal workers, according to University of Pennsylvania political science professor John DiIulio, and boosting their pay as well.

That is also the key to curbing the power of lobbyists, which won’t happen merely by inveighing against their greed. Lobbyists’ power comes mainly from their control of information—about the industries they represent, about the ways government programs work—that congressional staffers, many of them young and inexperienced, often lack. The way to neutralize that power is to strengthen government’s capacity to get that information independently, by hiring more staffers and researchers and paying them more so they can make a decent living without having to join the private sector.

Of course, a politician who called for hiring a million more federal workers, and raising their salaries, might appear suicidal in the current political climate. But if Peters is correct—and I think he is—that a key to bridging the class gap is for more Americans, especially the elite, to serve in government, a political way has to be found. The same bilious anti-government fever that gave America Ronald Reagan and Newt Gingrich has now given us Trump. Peters reminds us that government service was once both a broadly shared experience and a value the elite embraced. To cure the fever, today’s liberals must figure out how to make it so again.


The D.C. Working Man’s True Power Suit https://washingtonmonthly.com/2017/03/19/the-d-c-working-mans-true-power-suit/ Mon, 20 Mar 2017 00:05:52 +0000 https://washingtonmonthly.com/?p=64017

Forget the French cuffs. Washington is run by an army of underpaid schlubs.



There is a misconception that the male D.C. uniform is a “power suit” and tie.

It isn’t.

That ensemble belongs to the select number of lobbyists, lawyers, and self-titled trusted advisers who inhabit the dazzlingly white hallways of influence along K Street. They are rare specimens, populating a lucrative microclimate where jackets aren’t too tight at the shoulders or too loose at the waist, where slacks break at just the right point on the ankle, and where shirts have subtle geometric patterns woven into the soft fabric. These are the lucky few who shop at select boutiques in Georgetown where rich leather dress shoes decorate storefront windows (no price tag—far too gauche) and honey-accented Brits wait patiently inside for the next prospect to enter.

The real male uniform—for the tens of thousands of unpaid interns, underpaid Hill staffers, bureaucrats, think tank wonks, and nonprofit advocates who do the grunt work in Washington, D.C.—is much less refined. It’s typically an off-the-rack dark suit, bought on sale from Calvin Klein or Banana Republic and worn Monday through Friday. (Of course, these guys work alongside an equally underpaid army of female toilers. But as a man, I don’t feel qualified to comment on the ladies’ uniform.) Each day features a different shirt-and-tie pairing, which is supposed to make colleagues believe that the wearer has a closet full of suits. But they all know the truth—that closet has one, maybe two jacket-pant combinations—because they’re running the same con. It’s what you have to do when you’re faced with a formal dress code and a tight budget. The District’s average monthly rent on a one-bedroom apartment is $2,000. Uber pricing seems to be permanently on surge. Craft beers cost upwards of $9 a pint; a decent shot of whiskey at a bar in a gentrifying neighborhood might set you back $8.80, tip not included. All of this chisels deeply into the budgets of Washington’s white-collar laboring class: staff assistants’ salaries on Capitol Hill average just under $34,000 a year, and their entry-level brethren elsewhere fare only slightly better.

Note the shiny elbows of a dark suit worn too often at an office job that pays too little. Pay particular attention to the right sleeve, constantly rubbing across a legal pad as notes are furiously scribbled for a boss who seldom reads them. Check out the cracked leather around the belt holes, worn too long across an expanding waistline—the result of snatched meals of greasy but free passed hors d’oeuvres and happy hour specials. It was a cheap belt to begin with, probably from J.C. Penney, Sears, or the like. But it’s just a belt, right? Who looks that closely at your waist, unless they’re trying to divert their gaze from a garish polyester-blend necktie? (Banana Republic as well. Cheap material, but what a price!)

Look, now, at the seat of the pants. That’s the dead giveaway for the D.C. uniform, the sign of a man gone local. It shimmers with each step, the reflection of light in odd contrast to the deep hue of the rest of the trousers: the mark of the Metro.

This fabric, once a deep and satisfying navy blue or midnight black, has become a glistening reminder of the realities of life inside the Beltway. This particular suit may have begun life as a gift: a college graduation or first-Washington-job kind of present, never intended to really last, just like the job itself. And yet each lingers, as stagnant as the wages paid out every other Friday.

Public transportation, and the wear and tear it leaves on a wardrobe, has long surpassed the once-envisioned black Suburban as the means of daily transit for the wearer of the D.C. uniform. Do quiet mutterings into the ears of power still happen in the backseats of SUVs? Absolutely. Just not often enough to rescue a Men’s Wearhouse buy-one-get-one-free-type getup from repeated slides back onto subway seats with too little cushion, smashed down into the cheap plastic by all the commuters who have come before. A little sparkly? A year, maybe two into the tour of duty. Bright to the eye? Three years, minimum. Missing a back pocket button? Getting close to five, the unspoken tipping point of the Beltway Stay, when the short-timers heading back to their home states are separated from the District lifers.

Power suit–clad downtowners look at this sartorial deterioration with a mixture of smugness and relief. They can remember a time when they, too, had an overburdened wardrobe and an under-stuffed wallet. Yet even now, as their flesh shifts satisfyingly beneath costly fabrics, they sit mired in an irony of their own. With the casting off of a worn old suit for a fine new one, they’ve moved from decisionmaker to influence seeker, from teller to asker. Ill-clad as he is, the guy sporting the D.C. uniform can take some comfort in knowing that he steers the power, even if he doesn’t wear it.

Washington isn’t Manhattan or Hollywood, where power and money always walk hand in hand. Much of the business in D.C. gets done by legislative toilers whose cuffs are rubbed raw with the keystrokes of a hundred bills, and by uncomfortably clad service members, thrust into the heart of the National Security Council’s decisionmaking apparatus, who are more used to fatigues than suits and slacks. Bills are written, troops deployed, and regulations designed by this intrepid collection of twenty- and thirty-somethings. Want to add a zero to a line item in that appropriations bill? Put on your best suit and sit down with the staffer responsible—don’t mind his baggy dress shirt. Talking points and regulations and senators’ schedules are all written by this class of people, the gatekeepers to the powerful and the architects of the world’s most significant minutiae. Even a staff assistant, the poverty-wage sentinel for a billionaire cabinet secretary, can “lose” a meeting request with the click of a mouse. Their outfits may scream insignificance, but their job descriptions belie it as, day after day, the District floats on a sea of emails covered in their electronic fingerprints.

Wearers of the D.C. uniform could have pursued a more profitable career path, or at least one available in a cheaper part of the country. But these men have chosen climbing the ladder of influence over riding the escalator of profit. They labor tirelessly in the capital city of the world’s last superpower, with the full knowledge that wealth is out of reach and fame is confined to decades’ worth of C-SPAN B-roll. Such is the nature of true power in D.C. And those wrinkles will iron out, right?

