June/July/August 2014 | Washington Monthly
https://washingtonmonthly.com/magazine/junejulyaug-2014/

The Big Lobotomy
https://washingtonmonthly.com/2014/06/09/the-big-lobotomy/
Mon, 09 Jun 2014

How Republicans Made Congress Stupid

Last September, as they scrambled to decide on one final ultimatum before shutting down the federal government, Republican House leaders came up with what seemed like an odd demand: to strip their own staff of health care benefits.

At the time, staffers reacted to the news with a mixture of despair and disbelief. “It was like getting sucker-punched by your boss,” one aide told me. “Everyone was thinking, What’s the point? How is screwing us going to help you?”

The dubious logic behind the House Republicans’ demand can be traced back to a contested provision in the Affordable Care Act (ACA), the gutting of which was the price the Republicans were demanding for agreeing to fund the government. The provision requires employees of the U.S. Congress, including members and their staffs, to buy insurance on the new health care exchanges, while still allowing them to receive subsidies from their employer. Over the course of more than a year, ideologues at several conservative think tanks, especially the Tea Party-friendly Heritage Foundation, which was pushing for the shutdown, managed to put an imaginative spin on the provision, convincing the conservative world that members and their staff were getting a sneaky, backroom deal, a “special exemption from Obamacare.”

In fact, had the Republicans’ desired language passed, congressional personnel would have become the only employees in America whose employer (in their case, the federal government) was explicitly forbidden from contributing to their health care—a blow that, in all likelihood, would have caused most of the best and brightest staffers, and perhaps some lawmakers, to simply hightail it for the door. Some quite conservative members even said as much. Representative Jim Sensenbrenner, in a candid moment later, called the move “political theater” that would do nothing more than catalyze a rapid “brain drain” in Congress.

While Sensenbrenner was right, one must appreciate the irony. A debilitating brain drain has actually been under way in Congress for the past twenty-five years, and it is Sensenbrenner and his conservative colleagues who have engineered it.

A quick refresher: In 1995, after winning a majority in the House for the first time in forty years, one of the first things the new Republican House leadership did was gut Congress's workforce. They cut the "professional staff" (the lawyers, economists, and investigators who work for committees rather than individual members) by a third. They reduced the "legislative support staff" (the auditors, analysts, and subject-matter experts at the Government Accountability Office [GAO], the Congressional Research Service [CRS], and so on) by a third, too, and killed off the Office of Technology Assessment (OTA) entirely. And they fundamentally dismantled the old committee structure, centralizing power in the House speaker's office and discouraging members and their staff from performing their own policy research. (The Republicans who took over the Senate in 1995 were less draconian, cutting committee staff by about 16 percent and leaving the committee system largely in place.) Today, the GAO and the CRS, which serve both House and Senate, are each operating at about 80 percent of their 1979 capacity. While Senate committee staffs have rebounded somewhat under Democratic control, every single House standing committee had fewer staffers in 2009 than in 1994. Since 2011, with a Tea Party-radicalized GOP back in control of the House, Congress has cut its budget by a whopping 20 percent, a proportionally deeper cut than any federal agency has endured, leading, predictably, to staff layoffs, hiring and salary freezes, and drooping morale.

Why would conservative lawmakers decimate the staff and organizational capacity of an institution they themselves control? Part of it is political optics: What better way to show the conservative voters back home that you’re serious about shrinking government than by cutting your own staff? But a bigger reason is strategic. The Gingrich Revolutionaries of 1995 and the Tea Partiers of 2011 share the same basic dream: to defund and dismantle the vast complex of agencies and programs that have been created by bipartisan majorities since the New Deal. The people in Congress who knew those agencies and programs best and were most invested in making them work—the professional staffers, the CRS analysts, the veteran committee chairs—were not going to consent to seeing them swept away. So they had to be swept away.

Of course, all of this slashing and cutting has done nothing to actually help shrink the federal government. Real federal spending has increased 50 percent since 1995, in line with the growth of the U.S. population and economy. Meanwhile, Washington has fought two major land wars, added two large new entitlement programs (Medicare’s prescription drug benefit under George W. Bush, the ACA under Barack Obama), and created several new federal bureaucracies, ranging from the Consumer Financial Protection Bureau to the gigantic Department of Homeland Security.

At the same time, as political scientist Lee Drutman of the Sunlight Foundation has noted, both the government and the issues it has to deal with have grown more complex. There are more contractors to manage, more stakeholders to liaise with, more technologies to adapt to, more industry-funded research studies to take account of. That, in turn, has made the job of congressional staffers, which is to keep an eye on government and sort through the ever-growing amount of information coming at them from lobbyists and constituents, far more difficult, even as their numbers have not remotely kept pace with the growth of government and K Street. In 2010, the House spent $1.37 billion and employed between 7,000 and 8,000 staffers. That same year, corporations and special interests spent twice as much—$2.6 billion—on lobbying (a figure that excludes billions spent on other forms of influence) and employed 12,000 federally registered lobbyists, according to the Sunlight Foundation.

Instead of helping to shrink the government, the gutting of congressional expertise and institutional capacity—what New America Foundation scholar and former congressional staffer Lorelei Kelly refers to as a “self-lobotomy”—has had two other effects, both of which have advanced conservative power, if not necessarily conservative ideals.

The first effect is an outsourcing of policy development. Much of the research, number crunching, and legislative wordsmithing that used to be done by Capitol Hill staffers working for the government is now being done by outside experts, many of them former Hill staffers, working for lobbying firms, think tanks, consultancies, trade associations, and PR outfits. This has strengthened the already-powerful hand of corporate interests in shaping legislation, and given conservative groups an added measure of influence over Congress, as the shutdown itself illustrates.

Recall that last summer and fall many establishment Republicans, having lived through Newt Gingrich's disastrous shutdown in the 1990s, argued that doing so again would be folly. So why did so many GOP House members ignore those warnings and listen instead to the Heritage Foundation? Part of the reason was that they were conditioned to do so. Over the years, as Congress's in-house capacity for independent policy thinking atrophied, the House GOP largely ceded that responsibility to Heritage, which has aligned itself with the Tea Party since former Senator Jim DeMint took the helm in 2013. The think tank became the only outside group that was allowed to brief members and their staff at the influential weekly lunches of the Republican Study Committee, the policy and messaging arm of House conservatives. So when Heritage promised, despite all the evidence to the contrary, that the Democrats would cave to GOP demands for a delay in the individual mandate and cuts to "special" health care benefits for congressional staffers, many GOP members believed them. (Many who didn't believe them followed Heritage's instructions anyway when its lobbying arm, Heritage Action, orchestrated a grassroots email campaign demanding that members hang tough. The subtext: or else.)

The second effect of the brain drain is a significant decline in Congress's institutional ability to monitor and investigate a growing and ever-more-complex federal government. This decline has been going on quietly, behind the scenes, for so many years that hardly anyone even notices anymore. But like termites eating away at the joists, there's a danger of catastrophic collapse unless regular inspections are done. While Congress continues to pour what limited investigative resources it has into the fished-out waters of the Internal Revenue Service and Benghazi "scandals" (thirteen Benghazi hearings in the House alone, with a new select committee launched in May), just in the last year we've witnessed two appalling government fiascoes that better congressional oversight might have avoided: the botched rollout of the health insurance exchanges and the uncontrolled expansion of the National Security Agency's surveillance programs. (Fun fact: while annual federal spending on intelligence has roughly doubled since 1997, staff levels on the Senate Select Committee on Intelligence have actually declined.) Debacles like these, by undermining the public's faith in government, wind up perversely advancing the conservative antigovernment agenda—another reason why many Republicans don't worry much about the brain drain on the Hill. But the rest of us should.

The organizational capacity that conservatives began attacking in 1995 had been painstakingly built up by their liberal and moderate predecessors over the previous quarter century. In the late 1960s, there was a general sense in Congress that the institution needed to upgrade its ability to understand and confront the challenges of a more technologically and socially complex country. Meanwhile, with the Vietnam War heading south and the Richard Nixon administration resorting to such high-handed moves as the secret bombing of Cambodia, many liberal Democrats and moderate Republicans became convinced of the need to counter the power of the White House and of the hawkish southern Democrats, who, because of seniority and other rules, treated the major congressional committees like personal fiefdoms. The result was a series of major reforms in the early to mid-1970s that changed the institution in two fundamental ways.

First, recognizing that information is power in Washington (the first standing committees in the House were established in the 1790s as an independent source of information to counter that of George Washington's powerful but controversial treasury secretary, Alexander Hamilton), Congress enhanced its internal data-gathering and analytical capacities. It bulked up the staffs of committees and member offices. It expanded its in-house think tank, the Legislative Reference Service, renaming it the Congressional Research Service. It overhauled the rules of the budget process and created the Congressional Budget Office (CBO) to produce nonpartisan fiscal information and projections. And it formed the Office of Technology Assessment to provide timely analyses of the promises and pitfalls of cutting-edge science and technology developments. This expansion of expertise changed the very landscape of Capitol Hill. Congress built the vast Madison Building on Independence Avenue to house the expanded CRS. It bought the Congressional Hotel to accommodate the growing ranks of committee staff and appropriated an old FBI fingerprint records warehouse for the new CBO.

Second, congressional reformers took on the committee chairs and their ironfisted control over everything from the hiring and firing of staff to which lawmakers got to sit on which subcommittees. A series of rules changes in the House allowed chairmen to be deposed via a secret ballot of committee members; some were, and subcommittees won more control over their budgets, staff, and agendas. The minority party was guaranteed a set percentage of resources and staff. As power flowed down and out, it also flowed up, with the speaker of the House garnering the authority to, among other things, refer bills to committees, privileges once reserved for committee chairs. In the Senate, where individual members always enjoyed more freedom of action, various reforms decentralized power even further.

The result was a great spike in congressional policy development and oversight. A rough but useful measure of both is the number of committee meetings. These rose by half in the Senate and 80 percent in the House from the late 1960s through the '70s. Through the 1980s and into the mid-'90s, they plateaued at about 5,000 to 6,000 per year. (Then, with the GOP takeover in 1995, the number of hearings plummeted by nearly 50 percent in the House and by a quarter in the Senate. To put it in perspective, in 1958 congressional committees met almost three times more often than they did in 2010. Those numbers rose again, if only briefly, under the Democrats from 2007 through 2010, the latest years for which figures are available.)

The 1960s and ’70s marked one of the great eras of congressional oversight, with the Church and Pike committees investigating intelligence abuses and the Watergate hearings exposing the crimes of the Nixon White House. The latter investigations not only made a bipartisan group of committee members household names (Sam Ervin, Howard Baker) but also employed staffers who would themselves become famous (Fred Thompson, Hillary Rodham).

It was also an important era of policymaking. In his book The Last Great Senate, former Senate staffer Ira Shapiro details how lawmakers of that period—George McGovern, Bob Dole, Charles Mathias, Jacob Javits, Robert Byrd, Ted Kennedy—used their mastery of subject matter and process to move complex, politically gnarly legislation, from the successful bailouts of Chrysler and New York City to the Panama Canal Treaty. He recounts, for instance, how Senator Henry “Scoop” Jackson made himself so knowledgeable on defense issues that he became a thorn in the side of Nixon and Henry Kissinger, whose policy of détente he deplored for, among other things, ignoring the Soviets’ human rights abuses. Aided by brilliant and well-connected staffers who shared his hawkish views—people like Richard Perle and Dorothy Fosdick—Jackson passed the Jackson-Vanik Amendment, which denied most-favored-nation trading status to communist-bloc countries that restricted emigration. The amendment ultimately led to the emigration of millions of Soviet Jews and was used by Soviet dissidents as a vital tool in mobilizing support for the overthrow of communism.

The House, too, became a bastion of professional expertise. In the early 1970s, for instance, Representative Henry Reuss, a diligent conservation-minded Wisconsin liberal, and his staff on the Subcommittee on Conservation and Natural Resources, discovered a dusty old piece of legislation, the Refuse Act of 1899, that required anyone who pollutes a lake or stream to have a permit to do so from the Army Corps of Engineers. Reuss then got the U.S. attorney in his home state to successfully sue four major polluters—actions that, Reuss later recalled, “convinced industry to stop fighting federal antipollution legislation and instead accept the reasonable federal regulatory system created by the Clean Water Act of 1972.”

Similarly, in the early days of the effort to pass tax reform in 1984, House Ways and Means Committee Chairman Dan Rostenkowski organized a retreat on an Air Force base in Florida where twenty committee members of both parties and ten committee staffers spent three days, with no lobbyists or reporters around, listening to fifteen experts, both liberal and conservative, lecture on how tax reform might work. A year later, when the tax reform legislation was on the ropes, Rostenkowski organized another retreat in rural Virginia between members and top Treasury officials. This kind of deep, bipartisan engagement in the complexities of the tax code (almost inconceivable in today's House) helped lead to what is still seen as one of the great legislative achievements of the decade, the Tax Reform Act of 1986.

That’s not to say that the 1970s and ’80s were some golden age of evidence-based legislating. The era saw its share of ill-advised government programs, like the Synthetic Fuels Corporation, launched by a Democratic Congress during the 1979 energy crisis despite prescient warnings from the GAO that it would turn out to be a boondoggle. The bipartisan willingness to work together on substantive issues also frayed in the late 1980s and early ’90s when, among other things, a pair of hard-core conservative judicial nominees (Robert Bork, Clarence Thomas) received especially rough treatment by Senate Democrats. Meanwhile, in the House, as the ranks of confrontational antigovernment conservatives grew, Democrats responded by arrogantly exploiting their majority control. Republicans were especially incensed when Speaker Tip O’Neill orchestrated the seating of a Democratic candidate in a contested election for an Indiana House seat and his successor, Jim Wright, used parliamentary maneuvers to limit the GOP’s ability to affect legislation. The leader of the Republicans’ restive House conservatives, Newt Gingrich, rose to power in part by decrying the Democrats’ tactics as “corrupt” in front of C-SPAN cameras. The requirement that all House floor speeches be televised was, ironically, one of the democratizing reforms liberals put in place in the early 1970s.

When Newt Gingrich became speaker of the House in January 1995, following the Republican sweep of the previous fall, he set about almost immediately creating "the most controversial majority leadership since 1910," according to longtime Congress watchers and political scientists Thomas Mann and Norman Ornstein in their 2006 book, The Broken Branch. Under his leadership, backed up by the seventy-three conservative Republican freshmen who swept in with him, the goal was not to reform, but to destroy; not to compromise, but to advance a highly conservative agenda no matter the means. The shift in culture was palpable almost immediately, with freshman lawmakers eschewing bipartisan freshman orientations in favor of partisan ones, and the vast majority joining what's known as the "Tuesday-Thursday Club," flying in on Tuesday evening and out Thursday afternoon so as to reduce the likelihood of contracting "Potomac fever." "There was a total contempt for the institution," said Scott Lilly, who served as a high-level staffer in Congress for thirty-one years before joining the Center for American Progress in 2004. John Dingell, who will have served in the House for fifty-nine years when he retires this year, said it succinctly: "The place just got meaner."

Gingrich’s strategy, as he explained it to Mann and Ornstein, was simple: Cultivate a seething disdain for the institution of Congress itself, while simultaneously restructuring it so as to eliminate anything—powerful chairmen, contradictory facts from legislative support agencies, more moderate Republicans—that would stand in the way of his vision.

Gingrich's first move in 1995 was to dismantle the decentralized, democratic committee system that the liberals and moderates had created in the 1970s and instead centralize that power in his own hands. Under his new rules, committee chairs were no longer determined by seniority or a vote by committee members, but instead appointed by the party leadership (read: by Newt himself, who often made appointees swear their loyalty to him). Subcommittees also lost their ability to set their own agendas and schedules; that too largely became the prerogative of the leadership. At the same time, Gingrich imposed six-year term limits and required chairs to be reappointed (by leadership) every two years. Finally, Gingrich protected, and in some cases bulked up, the staffs of the leadership offices and increasingly had those offices write major pieces of legislation and hand them to the committees.

These rules, taken together, essentially stripped all congressional Republicans, especially those in previously senior positions, of power; instead, whether or not they advanced in their careers—whether they were reappointed or on which committee they were appointed—would be determined by party leaders based on their loyalty and subservience. (Two years after the Democrats took the majority in the House in 2007, they eliminated the term-limits rule; Speaker John Boehner reinstated it when the Republicans regained control in 2010.) “If you were thinking about the next stage in your career, you did what you were told to do,” observes Scott Lilly. The point of this centralization of power was to give the leadership maximum control of the legislative agenda and to jam through as many conservative bills as possible. That, it achieved: the Gingrich House passed 124 measures in 1995, more than double the 53 that Tip O’Neill’s House passed in 1981. But over time it also had the effect of dumbing down the institution.

After the first round of term-limit expirations rolled around in 2001, for instance, Republican Representative Ralph Regula was termed out of his chairmanship of the Interior Subcommittee, a position he had held off and on for twenty-six years, during which time he had become the chamber’s de facto expert on public lands and natural resources. Regula was famous in environmental circles for his relentless interest in the unglamorous issue of national park infrastructure maintenance. (At one point his committee uncovered a $330,000 outhouse complete with a slate roof, picture windows, and a twenty-nine-inch-thick earthquake-proof foundation.) His detailed understanding and thrifty instincts helped Regula win support in his caucus for increased funding to reduce the backlog of national park maintenance projects. But that knowledge and clout went with him when he was termed out.

It’s worth noting, of course, that term limits do, in theory, have an upside. They sweep away lawmakers who, over the years, have been captured by the agencies they oversee and the special interests they interact with. And they bring new blood into the committee leadership. In the 1970s, many liberal reformers advocated for term limits for these reasons, and as a means of limiting the power of long-serving southern Democrats who then dominated the chairmanships of the powerful committees. But in order to work, limiting committee chairs’ power and hard-won knowledge needs to be offset by enough staff who have sufficient institutional memory to educate the new members and explain, for instance, when they’re being lied to by the agencies and the special interests. Gingrich, of course, cut the staff, too.

And anyway, the problem with term limits in the mid-’90s was not only a loss of experience in a given subject; it was also a decline in the motivation to learn a subject in depth in the first place. After all, members who know they will move to a new committee in a few years are sometimes hard-pressed to really dig into a subject matter. That natural inclination has been greatly exacerbated by the fact that, beginning in 1995 and continuing to the present day, the leadership often dictates to committees what it wants bills to look like or drafts them outright. So instead of learning deeply about a given subject, debating various policy options, engaging in the nitty-gritty of a topic over the course of years and sometimes decades, committee members nowadays are often asked either to reverse-engineer a piece of legislation based on party leadership’s description of what kind of bill they’d like to see or to simply vote on a bill they did not write to begin with. Is it any surprise that, under those circumstances, deep policy knowledge, curiosity, and innovation have gone out the window? “What’s the payoff for doing a good job? If you take your job seriously as a chairman, who gives a shit?” says Bruce Bartlett, who worked as a congressional staffer in the 1970s and ’80s for Representative Jack Kemp, Representative Ron Paul, and the Joint Economic Committee.

In the past, members angled for committee assignments in part based on their personal backgrounds and the interests of their states or constituencies—another factor that favored the accumulation of subject-area knowledge—but under the new rules, leadership made the choices based more on political calculation. Seats on the “prestigious” committees began to go most often to members who were likely to face a strong challenger in the next election, so that they could brag to constituents about their powerful role or, more to the point, position themselves for corporate campaign contributions. “There was an immediate atrophy of the professional qualifications of the committees,” said Mike Lofgren, a former Republican congressional aide who has since been publicly critical of the Republican Party. “Knowing anything about the committee’s jurisdiction just didn’t factor in.”

Of course, it’s hard to learn much about the substance of the issues if you’re spending four hours a day “dialing for dollars” to raise campaign funds, as both parties recently instructed their members to do, in addition to another hour or two of attending fund-raisers. Compare that to the three to four hours per day members spend on average attending hearings, voting, meeting with constituents, studying, debating, and legislating. But while both parties are equally aggressive in hunting for money, Democrats have repeatedly tried to pass public financing bills to lessen the fund-raising burdens on incumbents and challengers alike. Republicans have just as frequently (and successfully) fought those bills. That’s an indication of how the parties differ philosophically on the role of private money in politics. But it’s also a fair gauge of how much weight each party gives to the importance of lawmakers knowing the substance of the issues they are legislating on.

Gingrich’s second move in 1995 was to go after the pooled funding that paid for the so-called professional staffers, who worked for the institution itself. Professional staff, most of whom were not explicitly partisan, were often deeply knowledgeable not only about policy issues within their expertise but also about the institution itself. They knew what had worked in the past, what members’ preferences and personalities were like, and how to draft a bill that would pass. As one member described it to me, professional staffers are “legislative lubricant,” often acting as referees or liaisons during committee debates. Within months, Gingrich had laid off about 800 of them. All told, he cut the total population of professional staffers by more than a third—a wound from which Congress has never recovered. In 1993, Congress employed nearly 2,150 professional committee staff; in 2011, there were just 1,316, according to the most recent data.

The professional staff also helped to run legislative service organizations (LSOs), informal study groups where members, often of both parties, would discuss specific issues, debate, share information, build trust, and “gain expertise on ‘big-picture’ national issues outside the jurisdiction of committees,” notes the New America Foundation’s Lorelei Kelly. The LSOs vanished along with pooled funding and shared staff in 1995.

Those professional staffers who were there at the time remember the atmosphere changing. “After Newt put the kibosh on shared staff, the whole place began to work more like Politico,” said one former senior staffer who worked on the Hill for two decades, referring to the website’s 24/7 attention to scandal, intrigue, and political strategy over deeper policy discussions. “No one was asking, What kind of legislation will this be? How can we make it better? Where do we need to go to get a compromise? They were asking, What kind of political fallout will this have? Who will look good? Who can we make look bad?”

Gingrich also cut the number of staffers working directly for House members. While those numbers later rose, they did so in a way that further reflects the intellectual hollowing out of the institution. Nearly all of the net increase in member staff since the late 1990s has been not in Washington, where the actual legislating happens, but in district offices, where the main jobs are handling constituent complaints, shuttling members around to local events, and getting them press—in short, ensuring their reelection. Not surprisingly, in the past decade, members have moved roughly 33 percent of their staff capacity away from policymaking and toward communications roles, according to a recent Congressional Management Foundation report.

Keeping good staff, professional or otherwise, is also a struggle considering the pay scales. While it’s difficult to compare salaries—the direct equivalent of a “legislative assistant” in the private sector is hotly debated—a measured 2009 report by the Sunlight Foundation concluded that Hill staffers are paid roughly a third less than they could make in the private sector. The on-the-job knowledge and connections staffers accumulate become exponentially more valuable over time to the lobbying shops on K Street, and the opportunity costs of staying become hard to ignore. According to a 2012 Washington Times analysis, 82 percent of Senate staffers and 70 percent of House staffers hired in 2005 had left the Hill by 2012.

The one thing that has traditionally kept at least some Hill staffers from leaving for the private sector is the heady nature of the work on Capitol Hill—the ability to fight for issues they believe in, to be in the room when the big decisions are being made, to put their personal stamp on legislation that will change history. But even that motivation has been undermined by the Tea Party-inspired gridlock that has blocked most major legislation in both houses since 2010 and squandered staff energies in pointless budget standoffs. “Those who are nourished by accomplishment are starving,” observes former Senator Byron Dorgan. “People who come highly motivated, they want to feel good about their challenge, their work, what they’re doing for the country. When they’re not getting that, they start looking around.”

The third target of Gingrich's attacks was the legislative support agencies. The Government Accountability Office, the Congressional Budget Office, the Congressional Research Service, and the now-defunct Office of Technology Assessment have all operated on a bipartisan basis, offering measured reports on topics suggested by members themselves. Sometimes these agencies act as helpful librarians, and sometimes they're more like referees, carefully adjudicating among the competing quantitative claims of various members and outside groups.

Gingrich, perhaps not surprisingly, viewed them all as potential rivals to his singular narrative. He particularly despised the OTA, as did many other conservatives, despite its evident usefulness. For instance, Congress saved hundreds of millions of dollars by incorporating OTA recommendations when the Social Security Administration moved to replace its old-school mainframe computers with a new computer network in the mid-’90s. (One wonders how things might have been different if the OTA had been around when the health care insurance exchanges were being built.) But over the years, the OTA had also cast doubt on some conservative ideas—a mortal sin as far as Gingrich and his followers were concerned. For example, an OTA report raising serious questions about the feasibility of Reagan’s Star Wars project was later used by Democrats to help defund the program.

Within a few months of taking the helm in 1995, Gingrich eliminated the OTA entirely and cut roughly a third of the staff in all the other congressional service agencies. According to the Brookings Institution's Vital Statistics of Congress, those agencies have never recovered. The GAO lost more than 2,000 staffers between 1993 and 2010. The CRS lost about 20 percent of its staff capacity. All told, in 1993 Congress employed 6,166 researchers; by 2011, that number was down to just over 4,000.

How has the brain drain affected Washington? To begin with, just look at all the construction cranes that dot the city’s skyline. The ongoing migration of talent out of Capitol Hill has helped drive the building boom in downtown D.C. as surely as the Pentagon’s contracting-out craze, which also took off in the 1990s, gave rise to the corporate office towers of Northern Virginia.

You can see the effect in the shabby, politicized work product coming out of many committees. In May, for instance, the House Energy and Commerce Committee released a survey conducted by its GOP staff purporting to show that only 67 percent of people who signed up for health insurance on the federal exchange had paid their first premium—a number that, if true, would have embarrassed the administration. In fact, the survey gave a false impression by counting as nonpayers people who hadn’t yet been billed. Insurance company executives later testified in public to the committee that their estimated payment rate was 80 percent. “Republicans were visibly exasperated,” reported The Hill, “as insurers failed to confirm certain claims about ObamaCare, such as the committee’s allegation that one-third of federal exchange enrollees have not paid their first premium.”

You can see it in the recent string of surprise retirement announcements from House GOP committee chairmen who will be term-limited out of their positions next year. That includes Ways and Means Chairman Dave Camp, whose committee (which has miraculously retained some level of bipartisan competence) labored to put together a credible tax reform plan. The GOP said the plan was one of the party's top legislative goals, but John Boehner tabled the measure in an effort not to muddy the midterm elections with substantive issues. "It used to be that the chairman would call the speaker up and say, 'I want this bill on this floor at this time,'" explains Dingell. "Now it's the opposite."

You can see it in frequent little dustups meant to undermine the legitimacy of the findings of congressional service organizations, like the one that engulfed the Congressional Research Service in 2012, when its economics division published a report surveying the effects of tax cuts going back decades and concluding that they do not generate sufficient new tax revenues from economic growth to pay for themselves—the main tenet of supply-side economics. A firestorm of anger from Senate Republicans led the CRS to pull the report.

You can also see the effects in stories like the one that appeared on the front page of the New York Times last May about a House bill that would exempt broad swaths of derivative trades from new Dodd-Frank Act regulations. The bill, which passed the House 292 to 122 before dying in the Senate, was written not only at the behest of Citigroup but, in large part, by Citigroup's own lobbyists:

In a sign of Wall Street’s resurgent influence in Washington, Citigroup’s recommendations were reflected in more than 70 lines of the House committee’s 85-line bill. Two crucial paragraphs, prepared by Citigroup in conjunction with other Wall Street banks, were copied nearly word for word. (Lawmakers changed two words to make them plural.)

It’s true that both parties have outsourced much of their policy development over the years. Groups like the Center for American Progress to some extent do for Democrats what Heritage does for Republicans (or did prior to Jim DeMint’s takeover), and plenty of lawmakers from both parties take their policy instructions from Wall Street lobbyists. But whereas for Democrats the outsourcing of policy has happened more by necessity, for Republicans it’s been by design. Newt Gingrich began the process in the 1990s with his attacks on in-house congressional expertise. Leaders like Tom DeLay in the House and Rick Santorum in the Senate advanced that process in the 2000s with the “K Street Project,” an organized effort to place GOP Hill staffers in key jobs in the most important D.C. law firms and trade associations.

As Nicholas Confessore explained in these pages (“Welcome to the Machine,” July/August 2003), the K Street Project tried to harness the muscle and campaign cash of a fractious lobbying community behind the specific legislative agenda of the George W. Bush administration with the ultimate aim of creating a permanent GOP majority. While it failed in that larger goal, it did succeed in providing GOP congressional leaders with something they needed: an alternative to the in-house legislative expertise Gingrich had decimated. With the leadership’s own former employees now in charge of D.C.’s biggest lobbying shops and all the research and other resources they commanded, K Street became, in a sense, the new permanent staff of the GOP Congress. (Democratic leaders have since attempted to place more of their former staffers on K Street, but have yet to catch up to Republicans in terms of numbers and clout.)

In addition to the outsourcing of policy development, the other big effect of the brain drain has been the atrophying of congressional oversight. Good oversight requires teams of educated, detail-oriented staffers who have the time to cull through documents, review thousands of line items in a budget, read budget justifications, and then follow up with federal agencies or local programs to determine what is really happening in government programs on the ground. Those teams have traditionally resided in the committees, buttressed by permanent staff and long-serving members, and in the legislative service agencies like the GAO. As we've seen, both were greatly downsized in the 1990s and remain profoundly understaffed and under-resourced.

Of course, good oversight has always been more the exception than the rule in Congress, in part because it has never been a particularly sexy part of a Congress member’s job, and in part because voters don’t generally reward members who excel at it. Rare are the headlines congratulating Congress for catching disasters before they happen.

Even today, valuable oversight still happens on occasion. In the run-up to the 2010 census, for instance, the GAO identified fatal flaws in the handheld computer devices the Census Bureau was planning to use as a cost-saving measure. Thanks to the GAO’s reports, major fixes to the devices were made, the officials originally in charge of the project canned, and a possible disaster with the decennial census averted.

Still, there has unquestionably been a massive falloff in congressional oversight. In the decade after the GOP takeover of Congress in 1994, the number of Senate oversight hearings dropped by a third, and House oversight hearings fell by half, according to the Brookings Institution. And even these numbers probably understate the problem. A lot of oversight hearings today are almost strictly for show, especially in the House. And even those that are meant to be serious suffer from the ignorance and poor preparation of many lawmakers. “In the old days, the member used to know more than any witness from the outside that came before the committee,” Dingell said. “Today, they don’t. Members don’t even understand the issues. They don’t even ask questions that are relevant. Sometimes they just want to give a political speech.”

Congress's failure of oversight is perhaps least obvious but most critical on the appropriations committees and subcommittees. These entities control the purse strings for every government program and agency. It has traditionally been their job—and they once took it seriously—to ensure that dollars were being spent on programs that were doing what they said they were doing. That sort of line-item oversight takes time and a dedicated staff that is paying an inordinate amount of attention to detail. "It was never a thrilling process," said Scott Lilly, who served as a clerk and staff director of the House Appropriations Committee, "but it was vital."

And it has all but ceased to happen in the past decade or so, as staff numbers have dwindled and the passage of sweeping omnibus budgets has become the norm. Even when they do try to look, appropriations subcommittees are snowed under by literally thousands of pages—"multiple tomes," as one staffer put it—of oversight reports that no one has the time to read. "Agencies just fill up these budget justifications with all sorts of meaningless metrics, which is a convenient tool to overwhelm a handful of staffers, who are stretched so thin they don't have the time to find out anything that's going on," Lilly said. The result, Republican Senator Tom Coburn pointed out in a 2012 report, is wasted money, uncontrolled government programs, and a panicky sense of "fire-alarm oversight" in which members of Congress don't ask questions until a scandal breaks and there's a mad scurry to assign blame.

This widespread, decades-long congressional brain drain could be fixed overnight. Members of Congress, after all, control the national budget. All they need to do is allocate a couple hundred million bucks—chump change in a federal budget of well over $3 trillion—to boost staff levels, increase salaries to retain the best staff, and fill out the institutional capacity of the body. This wouldn't necessarily mean recreating precisely the infrastructure of the 1970s—hundreds of guys in white short-sleeved shirts sitting in cubicles in some building on South Capitol Street. As New America's Lorelei Kelly has observed, technology now allows for any number of ways to create distributed networks of expertise. Congress could place policy and oversight staff in district offices, for instance, where they'd be closer to the ground, or create research and advisory partnerships between Congress and universities.

Regardless of how it’s organized or what new technologies can be brought to bear, what’s clear is that members of Congress need the institutional capacity to help them make sense of it all. As the issues facing members of Congress become increasingly intertwined and technological in our complex global economy, what we need is not fewer people in government who understand the implications of, say, the international derivatives market; what we need is more. And we need them, whether they be knowledgeable committee chairs or long-serving professional staff, to be experienced, well paid, and appreciated so they want to stick around for a while.

The problem, however, is that conservatives as a rule don’t see this lack of expertise as a problem. Quite the contrary: they’ve orchestrated the brain drain precisely as a way to advance the conservative agenda. Why, when your aim is less government, would you want to add to government’s intellectual capacity?

The answer, as some conservatives are beginning to realize, is that making Congress dumber has not, in fact, made government smaller. As the conservative but independent-minded Senator Tom Coburn wrote in his 2012 report, cuts to the GAO budget and declines in Senate and House committee oversight activity have resulted in billions of dollars in unnecessary, duplicative, and wasteful government spending. In another sign of dawning awareness, last year the House leadership, having been led astray one too many times by the Heritage Foundation and its Heritage Action lobbyists, barred those lobbyists from attending the Republican Study Committee’s weekly meetings.

At a press conference in the aftermath of last fall's pointless government shutdown, a dazed and incredulous Speaker Boehner squinted into the cameras and proclaimed that groups like Heritage had "lost all credibility." You'll recall, he noted, that "the day before the government reopened, one of those groups stood up and said, 'We never really thought it would work,'" his eyes bugging theatrically. He waited a beat or two for dramatic emphasis before his voice crackled with dismay: "Are you kidding me?"

It’s a long way from these glimmers of recognition that outsourcing Congress’s thinking ability may not be such a good idea to a willingness to do something serious to reverse the brain drain. The Republicans are nowhere near even considering that (which means the best hope for now may be a Democratic takeover of both houses). But it’s a start.

Talk of the Toons
https://washingtonmonthly.com/2014/06/06/talk-of-the-toons-7/
Fri, 06 Jun 2014

Our selection of recent political cartoons.

Making School Choice Work
https://washingtonmonthly.com/2014/06/06/making-school-choice-work/
Fri, 06 Jun 2014

Two D.C. schools, a traditional public and a nonunionized charter, are experimenting with socioeconomic integration.

Over the past few decades, Americans have witnessed an explosion in public school choice—the ability to choose a magnet school, a charter school, or an out-of-boundary public school. While the neighborhood public school remains the norm for most American children, the number of families who chose a non-neighborhood public school increased by 45 percent between 1993 and 2007. Nationally, more than a quarter of parents choose a school other than the public school their children are assigned by neighborhood to attend. In New Orleans, 80 percent of students attend charter schools, and in Washington, D.C., 44 percent of public school students attend charters and another 28 percent choose out-of-boundary public schools.

The rise in school choice is not hard to understand. In a culture where parents are accustomed to enjoying a wide variety of choices in most facets of their lives, they also like the idea of choosing a public school that meets the individual needs of their children.

Our School: Searching for Community in the Era of School Choice
by Sam Chaltain
Teachers College Press, 208 pp.

But education reformers have advocated school choice for larger public purposes as well. Teachers union leader Albert Shanker, for example, first proposed charter schools in 1988 as a vehicle for empowering teachers to tap into their expertise and create new teaching methods and approaches from which traditional public schools could learn. Shanker and other early advocates of charters also saw the opportunity to move beyond schools that reflect residential neighborhood segregation by race and class, drawing children from a variety of backgrounds who could learn from one another. Over time, even conservatives advocated charters and other forms of choice as a way of boosting competition between schools and fostering innovation.

In Our School: Searching for Community in an Era of School Choice, the author and consultant Sam Chaltain looks into how these theories play out in practice at two schools located in Washington’s mixed-income and racially diverse Columbia Heights/Mt. Pleasant neighborhood. One, Mundo Verde Bilingual Public Charter School, is a brand-new elementary school whose founders made the most of their chance to “start a new school from scratch.” Teaching in Spanish and English, the school focuses on environmental sustainability and hands-on expeditionary learning, in which students delve deeply into a topic for weeks at a time, producing original research and presenting the results to the public.

The other school, Bancroft Elementary, founded in 1924, is a traditional public school that educated some of the parents and grandparents of current students. But Bancroft also draws almost half of its population through out-of-boundary transfers. And, like Mundo Verde, it divides classes into sections that are taught alternately in English and Spanish.

One of the strengths of this thoughtful, highly readable book is that Chaltain, himself a former teacher, takes the concerns of teachers, parents, and students seriously as he spends an entire school year observing them in action. If some education policy analysts view teachers as workers to be tested, coaxed, better “distributed,” and (sometimes) “weeded out,” Chaltain sees them as flesh-and-blood human beings who often work extraordinarily hard to improve the lives of children such as Albert, who has never met his father and never will, or Harvey, who knows the local cops because they visit his home so often to settle domestic disputes.

As the book opens, Chaltain introduces us to two kindergarten teachers at Mundo Verde: Molly Howard, an idealistic Yale graduate and former policy wonk, and Bernice Pernalete, who immigrated to Houston as a child from Venezuela. These teachers, like educators in 88 percent of charter schools nationally, are not unionized. Charter advocates tout the absence of unions as a strength because it is easy to fire bad teachers and reward great ones. But it comes with a large downside: lower pay, fewer benefits, and longer hours. In the view of some, Chaltain writes, Mundo Verde created "an organizational model that relied on smart single women whose social lives would allow for crowded workweeks and little else." The lack of union representation also means teachers have no one to help them mediate their concerns. At one meeting Chaltain sits in on at Mundo Verde, he observes two teachers who complain about a lack of teacher input, but "then quickly censored themselves. 'So we don't get fired.'"

Lacking voice, many charter teachers exit. Research finds that the rate of teacher turnover at charters is roughly double that at traditional public schools. And the nonunion environment in charters can create an adversarial relationship with unionized public schools. Kristin Scotchmer, Mundo Verde’s inaugural executive director, told Chaltain she worried “whether charters will find ways to work collaboratively with other public schools in the city.” Shanker’s vision, of teacher-led schools that would share ideas with traditional public schools, had been turned on its head.

But Chaltain hardly depicts Bancroft, with its unionized workforce, as a paradise for teachers. He focuses on two third-grade teachers, Rebecca Lebowitz, a recent Brown graduate, and Rebecca Schmidt, a young Grinnell graduate, who feel discouraged by the challenges their students bring (only 5 percent read at grade level), by the general dysfunction of the District of Columbia school system, and by D.C.’s intensive focus on judging teachers by student test scores. Both of these highly effective teachers end up leaving at the end of the year. Altogether, Bancroft lost 25 percent of its teachers that year, and at Mundo Verde, three of the four teachers whom Chaltain observed most closely also quit.

Though both schools are located less than half a mile from one another, and both are bilingual, the demographics are quite different. Bancroft, Chaltain reports, is heavily low income and minority: 75 percent of students receive subsidized lunch; 74 percent are Hispanic, 10 percent white, 8 percent black, and 7 percent Asian. Mundo Verde is more racially and economically balanced: 45 percent of students are Hispanic, 27 percent white, 19 percent black, and 5 percent belong to two or more races. Some 33 percent qualify for subsidized lunch. Students go to great lengths to attend Mundo Verde—one took three bus rides every day to get there, and the school has a long waiting list of applicants.

With its healthy racial and economic mix, however, Mundo Verde is an outlier among charter schools. In fact, many charters brag of having campuses that are made up almost entirely of poor and minority students. Some consciously target specific immigrant or racial groups.

Chaltain is an advocate of integrated schooling, in charter schools and in traditional schools, in part as a way of building social cohesion. (Full disclosure: I coauthored an op-ed with him and Michael Petrilli in favor of integration in D.C. schools earlier this year.) Chaltain writes, “The specific landscape of school choice may be new, but the general challenge is as old as the country itself: E Pluribus Unum—out of many, one.”

Chaltain does not suggest that socioeconomic integration is easy. He acknowledges that it takes a skilled teacher to educate students who come to schools with differing levels of academic preparation. But a mix of students is far less overwhelming than a classroom of highly needy students, and the burnout level of teachers is much lower than in high-poverty schools.

That is what is so exciting about charter schools like Mundo Verde (and magnet schools throughout the country): by virtue of location and an enticing academic program, they have been able to attract a broad cross-section of students. The bilingual program, in particular, vividly illustrates how diversity helps everyone. Spanish-speaking students help English speakers learn Spanish and vice versa. Students from different backgrounds become a resource for one another.

But Chaltain notes that choice, by itself, will not promote equity, citing Michael Sandel's research on "the moral limits of markets." Integration often slips away, Chaltain says, and Mundo Verde's founder worried that "there was no way to guarantee that new families would maintain a healthy mix between English and Spanish speakers." Weighted lotteries to promote integration could help, but at the time they were not permitted in D.C.

Some people worry that public school choice can destroy a sense of community as neighborhood children head off to different schools and make different sets of friends. But at their best, if choice programs are designed to support integration, they can create new school communities that transcend the race and class divisions that define so many of our neighborhoods. Chaltain suggests that, in fact, this is a big part of what public schools in America are designed to do—move us beyond segregation to a place where students can celebrate diversity and learn what they have in common as Americans.

This democratic message of integrated schools—that we are all social equals—can be reinforced if school administrators treat their teachers well, as professionals who can contribute to the strength of a school, rather than as factory workers who must be closely supervised. As Chaltain notes, giving teachers and parents and students a say in school affairs can “model democratic principles, practices and policies”—preparing students to be self-governing citizens, which is, after all, the primary rationale for public education in the first place.

A Liberal's Call to Real Liberty
https://washingtonmonthly.com/2014/06/06/a-liberals-call-to-real-liberty/
Fri, 06 Jun 2014

How FDR redefined freedom and changed America.

While I was reading Harvey J. Kaye's The Fight for the Four Freedoms for this assignment, my ninety-two-year-old great-uncle, Jim Fischer, died. He came of age during the Great Depression, joined the Army when called to fight in World War II, and worked in an armament plant in Pittsburgh after he was discharged. Jim also worked at times as a milkman and a welder.


The Fight for the Four Freedoms: What Made FDR and the Greatest Generation Truly Great
by Harvey J. Kaye
Simon & Schuster, 304 pp.

Similarly, my aunt Chris, Jim’s wife, worked in an armament plant during the war, and later she ran a neighborhood children’s clothing shop for many decades. Jim and Chris were born into poverty during the Depression; Chris recalled vividly how, when she was a child, her family was kicked out of company housing in the middle of the night after her father and his fellow miners mounted a strike for better working conditions. She often told stories of how, as children, she and her siblings and friends helped heat their family homes by stealing pieces of coal from trains stopped at stations dotting Pittsburgh’s rivers. This was not theft, she explained, it was justice. That coal belonged to everyone, and everyone had a right to a warm home.

It was clear that the Depression and the war never left my great-aunt and -uncle; throughout their lives, those experiences continued to animate their views about work, community, justice, and service. One can’t help but be filled with admiration for how they lived and what they sacrificed, and their stories are fairly typical of Americans who came of age during the Depression and served during World War II. For that they’ve collectively been dubbed—to use that hoary phrase—“the greatest generation.” In his new book, historian Kaye seeks to reexamine the generation’s greatness by looking not only at what they sacrificed but also at what they built—and he describes how, through that process, that cohort radically redefined America.

Franklin Delano Roosevelt introduced the concept of the Four Freedoms several weeks after he won an unprecedented third term as president. In a speech on January 6, 1941, he declared,

In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms. The first is freedom of speech and expression. . . . The second is freedom of every person to worship God in his own way. . . . The third is freedom from want. . . . The fourth is freedom from fear. . . . That is no vision of a distant millennium. It is a definite basis for a kind of world attainable in our own time and generation.

By balancing the civil liberties contained in the nation’s founding documents with a new vision of economic security, Roosevelt invited another reconstruction of our democracy—one no less radical than the upheaval that took place after the Civil War. He declared that the “right to life” described in the Declaration of Independence means that everyone “has also a right to make a comfortable living. . . . Our Government . . . owes to everyone an avenue to possess himself of a portion of [America’s] plenty sufficient for his needs, through his own work.” Roosevelt challenged the notion that economic laws were products of nature, and instead argued that the rights of each individual trump the rights of business. While signing a minimum-wage bill, Roosevelt declared, in words that bear repeating today, that “no business which depends for existence on paying less than living wages has any right to continue in this country.”

If these ideas sound radical, it is because they were. They were also wildly popular: a May 1942 survey found that the Four Freedoms had “a powerful and genuine appeal to seven persons in ten.” Throughout his book, Kaye does a masterful job of showing how the American ideal, captured succinctly in the Four Freedoms, was one of the animating forces of the generation.

Nine days after hearing President Roosevelt describe the Four Freedoms, labor and civil rights leader A. Philip Randolph called for a march on Washington to fight for jobs in national defense, the integration of the armed forces, and “the abolition of Jim Crowism in all Government departments and defense employment.” In response to Randolph’s call, FDR issued Executive Order 8802, creating the Fair Employment Practices Committee (FEPC), which sought to ban discrimination by defense contractors.

The war effort was similarly viewed in terms of freedom and democracy. The majority of Americans polled in a May 1942 survey asking for an alternate name for World War II chose “War of World Freedom,” “War of Freedom,” “War of Liberty,” or “Anti-Dictator War.” Reporting from London in 1943, John Steinbeck stated in no uncertain terms that the soldiers were fighting “under a banner of four unimplemented freedoms.” A March 1943 survey of enlisted men found that 89 percent of white soldiers and between 66 and 70 percent of black soldiers agreed that “[t]he United States is fighting for the protection of the right of free speech for everyone” and “for a fair chance for everyone to make a decent living.” The United States that these soldiers knew had never provided a fair chance for everyone—most notably black servicemen—to make a decent living (the unemployment rate in the United States in the years 1937 to 1940 hovered between 15 and 20 percent), so these soldiers’ answers as to why they were fighting indicated that they fought not for what America was, but for what they wanted to make it.

How is it, then, that such a central concept of the time has been so thoroughly misremembered and deradicalized? How has one of the driving forces of the “greatest generation” been selectively written out of their history?

Since the Four Freedoms were an important source of radical change—especially once Roosevelt used them in arguing for an economic bill of rights—they were regarded as dangerous by many conservatives. So, taking the advice of Walter Fuller of the National Association of Manufacturers, conservatives and business leaders wasted no time in co-opting Roosevelt’s principles for their own ends. They did this through a process of appending and supplanting. First, the U.S. Junior Chamber of Commerce promoted what it termed the “Fifth Freedom,” the opportunity of free enterprise, arguing that without it the other freedoms were “meaningless.” Similarly, Republican Congresswoman Edith Nourse Rogers of Massachusetts presented a congressional resolution to add the freedom of private enterprise as the Fifth Freedom. Liberals timidly backed away from the radical view embodied in the Four Freedoms, allowing it to be disfigured and contorted. In time the idea became an empty vessel, a brand name that conservatives filled with their own ideals. This transformation was apparent by 1987, when President Ronald Reagan announced his plan to enact an “Economic Bill of Rights that guarantees four fundamental freedoms: The freedom to work. The freedom to enjoy the fruits of one’s labor. The freedom to own and control one’s property. The freedom to participate in a free market.”

Currently, one can see a similar story unfolding with regard to the First Amendment, which also contains four freedoms. Liberals and progressives fought throughout the twentieth century to define the First Amendment as an individual right that protects speech, without regard to the speaker or the dangerousness of the content. After decades of fighting this idea, conservatives have now co-opted it and are using it as a tool of suppression. In the past few years, the First Amendment has been used to invite unlimited (and often anonymous) money into politics, to fight against regulation, and to diminish workers’ rights. It has become a vessel used to shelter undemocratic practices and unfair economic ideas. The story of the life and death of FDR’s Four Freedoms offers a useful lens through which to see how progressive ideas can be used for opposite ends if they are not continually guarded and fought for.

In The Fight for the Four Freedoms, Kaye is as much trying to recover a lost history as to issue a progressive call to arms. Throughout the book, one cannot help but notice the parallels between “the greatest generation” and our own. Both periods are marked by high unemployment, low wages, increased inequality, and wealth and power concentrated in the hands of a few. The first step toward doing something about it is to remember the courage and radicalism of the previous generation, and the world it created. The second step is to guard it.

The post A Liberal’s Call to Real Liberty appeared first on Washington Monthly.

]]>
The Unkindest Cut https://washingtonmonthly.com/2014/06/06/the-unkindest-cut/ Fri, 06 Jun 2014 15:08:34 +0000 https://washingtonmonthly.com/?p=12576 The visionary guidance counselor in a poor urban high school discovers why some top colleges don't want even his best students: money.

The post The Unkindest Cut appeared first on Washington Monthly.

]]>
It sometimes feels like low-income students are to our K-12 education system what cadavers are to hospitals. Often teachers secure their first jobs in challenging schools in poorer districts, where the turnover rate is high. Here, they hone their teaching skills, and in a few years they trade up to districts with higher salaries and better working conditions. Poor students are left behind to train the next crop of educators.



Hold Fast to Dreams: A College Guidance Counselor, His Students, and the Vision of a Life Beyond Poverty
by Joshua Steckel and Beth Zasloff
New Press, 320 pp.

Joshua Steckel, coauthor of Hold Fast to Dreams: A College Guidance Counselor, His Students, and the Vision of a Life Beyond Poverty, intentionally went the other way. After four years at Birch Wathen Lenox, an expensive private school on Manhattan’s Upper East Side, Steckel became a college counselor (and sometime teacher) at an overwhelmingly poor, black, and Latino public school in Brooklyn, the Secondary School for Research (now called Park Slope Collegiate). Once there, he used the skills and connections he had developed at Lenox to help get his new charges admitted to some of the country’s more selective colleges.

Along with his wife and coauthor, Beth Zasloff, Steckel chronicles his relationship with ten of his students, from their senior year of high school into young adulthood. The stories are invaluable both to educators who deal with children from similar backgrounds and to non-educators, who often don’t appreciate the overwhelming odds stacked against poor children. The first chapters cover the students’ past and Steckel’s experiences with them in high school; subsequent chapters cover post-high school struggles; the final chapters talk about the students in their early twenties. Each chapter is satisfying on its own, but the reader is eager to find out what happens next in these students’ lives.

I have taught for more than fifteen years in a Maryland public high school that has demographics similar to those at the Secondary School, and a colleague and I similarly shepherded promising students to selective residential colleges. Steckel’s stories remind me of my own. Hold Fast neither exaggerates nor minimizes what these kids are faced with. Steckel and Zasloff write about the rawness and trauma of the working poor, the family life constantly disrupted by parents’ late-night shifts, long hours at work, and unstable employment. Some of Steckel’s students move frequently (including in and out of homeless shelters), work part-time to contribute to the family budget, babysit younger siblings, and protect them from getting caught up in life on the street. While some dream of going to college, living at the social bottom makes that goal seem beyond reach; others see higher education as so remote that they don’t bother dreaming about it.

It is here that Steckel takes his stand. Many educators who endure at these schools are sustained by a social vision of helping needy children lead happy and productive lives, and Steckel is no exception. While a 2004 Century Foundation study found that the most selective colleges drew 74 percent of their students from the richest quartile and only 3 percent from the poorest, Steckel tells his students at the start of school that his aim is to get them into these competitive colleges—the kind of schools where future leaders are born. “[T]hese colleges [are] training the country’s future leaders,” Steckel told his classes, “and it wasn’t right that they should be filled with rich kids.”

As part-time teacher, part-time college adviser, and full-time mentor, Steckel slowly gains the trust and confidence of students who have been let down by adults so often. He must be able to nurture their fragile self-confidence while pushing them to compete to gain admittance to institutions that will completely change their lives. Hold Fast includes several compelling college application essays that Steckel helped to edit, a process that no doubt encouraged students to share and process past traumas. By the book’s end, some students have graduated from selective residential colleges that would be the envy of any parent, some have graduated from community colleges, and others have not managed to complete any college course work. For all Steckel’s skill and dedication, he cannot overcome all the problems facing his students.

And those students are a diverse lot. With a high school transcript full of Cs and Ds, and little motivation, Dwight Martin did not seem like he was college bound. Hospitalized for a concussion in a gang fight just weeks before graduation, Martin—with Steckel’s help—moves to North Carolina, earns straight As at Guilford Technical Community College, and becomes a certified aviation mechanic. Michael Forbes’s apartment burns down while he’s in high school, and he has to write his college essay while living in a homeless shelter with his ailing mother and two younger brothers. As a Skidmore College graduate four years later, Michael moves his mother and brothers to Durham, North Carolina, where he now teaches at a public charter school. Nkese Rankine determines at age thirteen to be “the girl who gets out of the ’hood”—and she does get out. At Bates College she struggles to overcome feelings of cultural isolation but proudly receives her diploma in four years. Santiago Hernandez, an undocumented immigrant who has lived in the United States since he was six, finds the economic imperatives of supporting his mother and brother too staggering to think about college. Feeling “already old” at eighteen, he gets the money to attend community college for a semester, but lacks the money to continue. Four years after high school graduation, he still has only a semester of course work to his name.

At Birch Wathen Lenox, where the cost of tuition approaches that of a residential college, Steckel’s students were, not unnaturally, focused primarily on getting into college, not on getting financial aid. But the reverse is true for poor kids: the most difficult part isn’t getting them accepted, reports Steckel, it is getting the money to pay for tuition, room and board, books, and travel. Of the 2,000 private colleges in the United States, only a tiny fraction have the endowments to offer full scholarships to complement the paltry federal grants and student loans available to low-income students. Poor children, after overcoming myriad struggles to excel in high school, have to find colleges that will not only accept them but also pay their freight. “Most aid packages asked students to contribute more than their families could handle,” the authors write, but even Steckel was surprised by the extent to which colleges would “gap” students, leaving them far short of what they needed to accept an offer to attend the school of their choice. This is perhaps the unkindest cut of all that the poor sustain during the admissions process—being offered a $20,000 scholarship to attend a $50,000-a-year college. This leaves Steckel “the task of having to communicate to students which colleges were real choices and which were not, without undermining the pride they felt at getting in.”

To succeed, Steckel exploits the handful of national and New York state programs aimed at assisting first-generation students to attend college, and he also markets his students to wealthy selective colleges looking to diversify their student body. Colleges such as Bates (Maine), Muhlenberg (Pennsylvania), and Williams (Massachusetts) want to increase the number of minority students, not only to fulfill their liberal educational missions but also to enhance the educational experiences of all of their students. But a look at the numbers demonstrates that each school can only afford to admit a handful of poor students—so the more Josh Steckels there are, the less successful each will be. The resources of even wealthy colleges are limited. Middlebury College in Vermont has accepted and generously funded several of our Maryland students, for example, yet even with the maximum amount of federal aid available, a low-income student brings $200,000 less over four years to Middlebury than a student who pays full tuition. Even with a commitment to economic diversity, how many low-income students can a college afford to accept?

This reality undermines a widely publicized conclusion of “The Missing ‘One-Offs’: The Hidden Supply of High-Achieving, Low-Income Students,” the 2012 paper by economists Caroline M. Hoxby and Christopher Avery. The study argues that there is a large, untapped supply of high-achieving, low-income students who should be considered at more competitive schools that can offer them more money. While such schools may be able to accept and fund a few more poor students, even the wealthiest colleges are unable to admit all high-achieving applicants regardless of economic status. Many of Steckel’s students who do not end up at selective residential colleges attend local public colleges, often out of fear of the social isolation of faraway campuses filled with affluent students, or because they worry about leaving their families in need. But finances are a struggle at public schools, too. A generation ago, strong state university systems offered a high-quality education to any resident who could save money from a summer job and was willing to work part-time during the school year. Unfortunately, tuition and fees at these institutions have increased in recent decades, while state aid has dropped. The average cost of tuition at a four-year public university (even with grants and student loans) now runs more than $10,000 a year. Low-income students and their families must either take out loans they cannot afford or lower their academic expectations to fit their budget.

Hold Fast to Dreams takes a hard look at the obstacles highly motivated poor children must surmount to attend and graduate from college. It also shows that a dedicated, resourceful counselor can help a few students escape the lives prescribed by their circumstances; many of Steckel’s students end up with college degrees and productive careers. But a few dedicated educators cannot help poor kids succeed in a vacuum. We need the political will to devote enough resources to ensure that all our students are prepared for, and can afford, quality higher education. We see they can succeed, so why are they not given the chance to do so?

The post The Unkindest Cut appeared first on Washington Monthly.

]]>
Alone on His Own Ice Floe https://washingtonmonthly.com/2014/06/06/alone-on-his-own-ice-floe/ Fri, 06 Jun 2014 15:06:02 +0000 https://washingtonmonthly.com/?p=12577 How Antonin Scalia ceased to be a powerhouse jurist and became a crank.

The post Alone on His Own Ice Floe appeared first on Washington Monthly.

]]>
The cover of this book says it all. There he is, grinning complacently, his black judge’s robes fading into black. The hair has thinned and the jaw is heavier than it used to be. He is an old bull now instead of a young buck. No one is there on the cover with him: he is all alone. Smug, cocksure, fond of his own wit, certain of his rectitude, Justice Antonin Scalia of the Supreme Court of the United States is oblivious to the way his strident, obnoxious, moralistic hectoring has chased away every friend. If such a doubt occurs to him for an uncertain moment, it soon dissolves, for the upside to being surrounded by idiots is a clear conscience. Let them call him intolerant—let them try to prove he’s wrong. It is all background noise to the sound of his own voice.



Scalia: A Court of One
by Bruce Allen Murphy
Simon & Schuster, 736 pp.

Scalia was once a force to be reckoned with on the legal right. He taught at the University of Chicago and cofounded the Federalist Society. When I entered law school in 2001, his dissents were required reading, and could erode confidence in any majority opinion. Scalia was particularly strong on procedure: he made nuances about jurisdiction and standing seem even more important than the merits. A brilliant rhetorician, he was a funny and colorful writer in a profession that seemed wedded to the stereo-instructions model of prose. Scalia also exuded scholarly refinement, peppering his decisions with allusions to classical works in Latin and Greek, and parsing statutory language with precision and rigor. When cracking wise off the cuff, he pivoted on arcana like the distinction between Gothic art and Rococo.

And then something happened. Somewhere in the mid-2000s, Scalia ceased to be a powerhouse jurist and became a crank. He began thumbing his nose at the ethical conventions that guide justices, giving provocative speeches about matters likely to come before the Court. He declined to recuse himself from cases where he had consorted with one of the parties—including, famously, Vice President Dick Cheney. He turned up the invective in his decisions. His colleagues’ reasoning ceased to be merely unpersuasive; it was “preposterous,” “at war with reason,” “not merely naïve, but absurd,” “patently incorrect,” and “transparently false.” More and more, he seemed willing to bend his own rules to achieve conservative results in areas of concern to social conservatives, like affirmative action, gay rights, abortion, gun ownership, and the death penalty. Above all, Scalia stopped trying to persuade others. He became the judicial equivalent of Rush Limbaugh, who has made a career of preaching to the choir. But Limbaugh is not merely a shock jock; he is also a kingmaker. Scalia’s position on the bench precludes any such influence. As a result, he has more fans than power.

The deterioration of Supreme Court justices is a sad tradition in our public life. Abe Fortas was ruined by scandal; William Douglas suffered a stroke and remained on the Court well past the point of incapacity. William Rehnquist became grouchier and nastier with each term; liberals like Harry Blackmun and William Brennan grew acid tongues and scolded their colleagues where before they had built coalitions. Frustrated by the Court’s ascendant conservatism in the 1980s, Thurgood Marshall all but checked out.

Scalia’s fall has been loud and it has been public. He is the Court’s most outspoken and quotable justice, and whether he is flicking his chin at reporters or standing at the lectern attacking secular values, he makes headlines. So when he was passed over for the position of chief justice in 2005, the legal world noticed. President George W. Bush had cited Scalia as well as Clarence Thomas when asked as a candidate to name justices he admired. Yet when Rehnquist suddenly died, Bush did not seriously consider elevating Scalia. “Nino” had rarely demonstrated leadership in assembling or holding together majorities; he had alienated every one of his colleagues at one point or other. His flamboyant antics off the bench might compromise the dignity of the office of chief justice. He would be the devil to confirm. Bush nominated instead John Roberts, an equally brilliant but far more disciplined judge, and one who was better suited to the responsibilities of leadership. After that, Scalia stopped playing nice and started using real buckshot.

Bruce Allen Murphy’s Scalia: A Court of One is the second biography of Antonin Scalia in the past five years—an indication of an unusually high level of interest in a sitting Supreme Court justice. The previous volume was Joan Biskupic’s American Original (2009), a fluid journalistic account filled with insights gleaned from the author’s access to Scalia and the other justices as a Supreme Court reporter. Murphy, a political scientist at Lafayette College, has produced a book more comprehensive and scholarly but with less color and texture. The author interviewed neither Scalia nor his colleagues, and relies for backstage anecdotes on the reporting of others. Scalia is nevertheless a significant achievement—Murphy’s third impressive biography of a member of the Court. Murphy’s and Biskupic’s books differ in tone and emphasis: Murphy is not exactly hostile to Scalia, but he is less sympathetic to him than Biskupic. Yet both authors agree on the central fact that seems likely to define Scalia’s career. In Murphy’s nice phrase, Scalia is “alone on his ice floe,” and it is drifting away from the Court’s center.

Murphy makes a comprehensive study of the way Scalia has alienated the three swing voters to sit on the Court with him: first Lewis Powell, then Sandra Day O’Connor, and then Anthony Kennedy. In one of his first cases as a justice, Hodel v. Irving (1987), Scalia set the wrong tone with the chivalrous Powell. The case concerned the ability of Native Americans to bequeath tribal land as property, but the Court’s focus became the litigants’ standing to sue. Defying the convention that a junior justice should be modest and deferential, Scalia dominated the oral argument, prompting Powell to whisper to a colleague, “Do you think he knows that the rest of us are here?” After the argument, Scalia badgered and browbeat O’Connor in an uncivil draft opinion, on which Powell handwrote, “I don’t like this.” Only Rehnquist’s intervention kept the majority together. It would not be the last time the chief justice would have to repair Scalia’s damage. “Nino! You’re pissing off Sandra again. Stop it!” he wrote at one point.

O’Connor is a tough cookie, and claims not to have been bothered by Scalia’s harsh tone over the years. Whether or not that is true, the same cannot be said for the sensitive Kennedy, whose social libertarianism and grandiloquent rhapsodies have made him a particular target for derision. Scalia chased him away like a bully on a playground. In the critical abortion decision Planned Parenthood v. Casey (1992), Kennedy, contributing to a joint opinion along with O’Connor and David Souter, described a woman’s “right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.” Moist and misty though those sentiments be, a prudent opponent would hold his tongue so as not to alienate a potential ally in future cases. Not Scalia. When referring to Casey in a later decision, he openly mocked its “sweet-mystery-of-life passage.”

Casey is a good example of Scalia’s ineptitude as a coalition builder. The Court initially voted 5-4 to uphold Pennsylvania’s tough abortion law and to overturn Roe v. Wade. But then Kennedy began to wobble. Scalia tried to reinforce his vote, inviting him on a walk to discuss the case. (When Kennedy’s papers become available to future historians, this encounter will be a must-read.) As Murphy writes, Scalia left the conversation believing he had convinced Kennedy to stay on board. But Kennedy had already defected: along with Souter and O’Connor, he was secretly at work on the joint opinion that would modify but ultimately spare Roe. Scalia had engendered no loyalty in previous cases that would have enticed Kennedy to hold the line; he could not call upon close friendship or affection. Who knows, Kennedy may have even enjoyed sticking it to a colleague who had savaged him ruthlessly in case after case. It was the type of scenario where the more pragmatic and collegial tactics of a Brennan or a John Paul Stevens might have made the difference.

Indeed, Murphy narrates several cases where those two justices easily outmaneuvered Scalia to advance their own interests. In South Carolina v. Gathers (1989), a capital case concerning the use of victim impact statements during sentencing, Justice Byron White was the undecided justice. The liberals wanted to prohibit such statements as inflammatory, while the conservatives, including Scalia, believed they were permissible. Brennan picked up White without difficulty by writing a narrow decision permitting victim impact statements generally, but finding that such a statement was inappropriate in the case at hand because of its unique facts. In another abortion case, Webster v. Reproductive Health Services (1989), Stevens picked off O’Connor by writing a memo catering to her doubts about the majority decision, minimizing the damage from a conservative victory.

Scalia purports to be above such deal making. The Constitution says what it says, and it means what it means. You might as well horse-trade verses of holy scripture! But while this type of sanctimony is common in the law, it is nevertheless foolish. When nine judges must jointly interpret statutes, rules, and constitutional text, those who cooperate will get things done. Those who rigidly go their own way will not. Scalia’s recent victories in Second Amendment cases (District of Columbia v. Heller, 2008) and campaign finance law (Citizens United v. FEC, 2010, and McCutcheon v. FEC, 2014) represent not the triumph of reason or coalition building but of raw political power: Republican presidents have appointed enough justices to win the day. It is no surprise that Scalia’s hero is Sir Thomas More, the patron saint of lawyers, who refused to agree to the annulment of King Henry VIII’s marriage and was executed for it. To Scalia, More is an inspiring and principled role model. Yet there are other interpretations of More’s defiance. Hilary Mantel suggests in the novel Wolf Hall that at the time, everyone actually thought More was a pompous ass.

Murphy devotes much of his book to examining Scalia’s vaunted method of constitutional interpretation, which is called originalism. Scalia’s originalism differs from that of, say, Judge Robert Bork, who claimed to divine the “original intent” behind constitutional provisions. What, Bork asked, did the Founders intend with phrases like “cruel and unusual punishments,” “reasonable searches and seizures,” and “equal protection of the laws”? In a series of books and lectures, Scalia emphasized that he was interested not in the Founders’ original intent, but in the text’s “original meaning.” In his own words: “What was the most plausible meaning of the words of the Constitution to the society that adopted it?” What did the general public understand “cruel and unusual punishments” to mean when the Bill of Rights was ratified in 1791? This approach—judicial interpretation as time travel—has led Scalia to some offensively wooden results, like insisting that the Fourteenth Amendment’s Equal Protection Clause, which was adopted after the Civil War, does not prohibit sex discrimination.

Eminent historians of the founding era think Scalia is on a fool’s errand. Murphy renders a great service by quoting their reflections at length. Stanford historian Jack Rakove notes,

Historians have little stake in ascertaining the original meaning of a clause for its own sake, or in attempting to freeze or distill its true, unadulterated meaning at some pristine moment of constitutional understanding. They can rest content with—even revel in—the ambiguities of the historical record, recognizing that behind the textual brevity of any clause there once lay a spectrum of complex views and different shadings of opinion.

Likewise, Gordon S. Wood of Brown University points out, “not only were there hundreds of Founders, including the Anti-Federalists, with a myriad of clashing contradictory intentions, but in the end all the Founders created something that no one of them ever intended.” Judge Richard Posner has derided Scalia’s originalism as “law office history,” in which the justice, untrained and incompetent as a historian, searches out historical sources to bolster his position, ignoring or downplaying those that contradict it. Scalia has made a career of scorning the use of legislative history—committee reports and floor speeches—to interpret statutes, because he says the process amounts to looking out over a crowd and picking out your friends. Finding the Constitution’s original meaning seems awfully similar.

If those are the mechanics of originalism, what about its politics? Scalia has insisted, loudly and repeatedly, that originalism “establishes a historical criterion that is conceptually quite separate from the preferences of the judge himself.” This is nonsense dressed up in jargon. Originalism is by its very nature a reactionary enterprise that addresses contemporary problems by turning back the clock 250 years. Small wonder that it overlaps closely with Scalia’s socially conservative politics. Suppose a justice were to announce that she planned to interpret the Constitution in accord with the latest practices in Vermont—or, better yet, Sweden. This “neutral” principle would of course be blatantly progressive. Suppose further that the justice gave speech after speech insisting that Swedenism was strictly divorced from her politics—even though she herself was the most outspoken liberal on the Court. You would not only disbelieve her, you would begin to lose respect for the institution she served.

And that is Antonin Scalia’s legacy. He is toxic. He has corroded the public’s faith in the Supreme Court. His vituperative rhetoric has undermined its tradition of collegiality while making its work seem like cheap partisanship. His ethical improprieties have besmirched the Court’s reputation for integrity and impartiality. And as he has become increasingly and nakedly right-wing—he astonished onlookers by lambasting President Obama’s immigration policy during a recent oral argument—he has continued to insist, more and more shrilly, on the apolitical glories of originalism. No one believes him anymore. And no one is listening.

The post Alone on His Own Ice Floe appeared first on Washington Monthly.

]]>
Can Listicles Fund the Baghdad Bureau? https://washingtonmonthly.com/2014/06/06/can-listicles-fund-the-baghdad-bureau/ Fri, 06 Jun 2014 15:04:44 +0000 https://washingtonmonthly.com/?p=12578 How BuzzFeed and HuffPo manage to do real journalism. For now.

The post Can Listicles Fund the Baghdad Bureau? appeared first on Washington Monthly.

]]>

The Internet has harmed journalism in part by obliterating our self-delusions. In the past, when media companies funded labor-intensive journalism—foreign coverage, investigative projects, beat reporters who spend days tracking down leads—we believed this reportage was very valuable, even financially. Readers wanted to know, advertisers liked the prestige that high-quality reporting brought, and the publications made plenty of money.

Occasionally a wiseass would say something like, “The box scores are paying for the Baghdad bureau,” and we thought, Well, maybe that cross-subsidy exists, maybe it doesn’t—but the whole package seems to be doing just fine.

The Internet blew apart the package and eliminated the cross-subsidy. Now readers can go to ESPN and get box scores, and they can go to a separate site to get news. Sports scores no longer subsidize the foreign correspondent, and the comics no longer support the city hall reporter.

This has led us to confront the ugly reality of just how lousy—financially speaking—many of our journalistic projects were. Media managers can now produce a profit-and-loss statement not only for the news division as a whole, but for each reporter—and each piece of content. Managers have mostly concluded that volume—getting reporters to do faster stories and more of them—generates a better P&L outcome. Articles that take a few days to report, let alone a few weeks or months, rarely have a positive return on investment.

In a world where each piece of content has to earn its own way, we end up with more listicles (“The 10 Hottest Women on the Texas Sex Offenders List,” which, yes, actually exists) and fewer reporters covering the sewer commission. ProPublica, a nonprofit online news organization, once estimated that its exposé on the health hazards of acetaminophen cost about $750,000 to produce. To attract enough ad revenue to pay for itself, the project would have had to generate about 100 million monetizable page views. The entire (superb, Pulitzer Prize-winning) ProPublica site hasn’t generated that many page views cumulatively over the past five years.

This pattern has led many media analysts, including myself, to worry that certain types of reporting—especially reporting that takes real time and money to complete—would simply have no business model. Recently, however, I’ve been pleasantly surprised at a few efforts to reestablish the cross-subsidy. Huffington Post rose to fame and girth through aggregating and summarizing other people’s content, with very little original reporting. Arianna Huffington said she’d always planned to do real journalism, and it turned out she meant it. The Huffington Post has real reporters, and even won a Pulitzer Prize for foreign reporting in 2012.

BuzzFeed is an even more surprising candidate to reestablish the cross-subsidy. It rose to mammoth size—more than 130 million unique visitors, far bigger than the New York Times—through delectable fare such as “Pictures of Animals with Betty White” and “14 Cats Who Think They’re Sushi.” And then, lo and behold, they started hiring real journalists to do real work. Recent pieces included “A Late-Night Phone Call Between One of Syria’s Top Extremists and His Sworn Enemy” and “Former Cisco Execs Allege Vast Kickback Scheme in Russia.”

Here are three reasons why my hat goes off to BuzzFeed: First, they figured out how to draw massive amounts of traffic in socially harmless, enjoyable, and sometimes informative ways. Second, when they got to be big they spent some of that capital on real journalism. Third, they have a sense of humor, and sometimes that punctures the powerful better than anything else—as with their masterpiece, “16 Homoerotic Photos of Vladimir Putin.”

But here’s why I don’t think this is going to become the new model—and why I fear that even Huffington Post and BuzzFeed won’t be able to keep it up.

First, the model won’t work locally. HuffPo and BuzzFeed are able to generate enough financial cushion to support harder reporting in part because of their mammoth size. That offers some hope for national enterprises but not for local news entities, whose size is fundamentally limited by their populations. “40 Things You Never Knew About Game of Thrones” may garner traffic from all over the world. “5 Most Significant Mistakes the Albuquerque Planning Board Made” may garner traffic from all over . . . well, Albuquerque. At best.

Second, the metrics will stalk them, rubbing their noses in the financially poor (and journalistically impressive) decisions they like to make. There will come a time at any digital media organization—and it will happen at HuffPo and BuzzFeed too, if it hasn’t already—when they hit a financial bump in the road, and the CEO scans the list of which efforts lose money. He will say, “What kind of business intentionally invests in products that they know are financial losers?” A few admirable media leaders will say, “To hell with you and your crass metrics!” But most won’t.

Third, advertisers care less about content than they used to. In the past, one of the strongest financial arguments for good journalism was that it made the publication more respected and thereby more desirable for advertisers. That translated directly into income, because respected publications could charge premium ad rates, unlike the bottom-feeding publications with lousy content. But most advertisers no longer care where their ads appear. They can market directly to readers through Google, Facebook, or new technologies that allow ads to follow consumers around the Internet.

The publications that want a patina of prestige can get it via less time-consuming forms of smart commentary—Ezra Klein comes to mind—or quick-hit reporting (covering events, press conferences, and so on). This can be valuable, and will often give us a sense that things are well covered. What will be lost are the stories that take time to tell.

Is all hope lost? No. Innovation is still occurring at the national level, and locally the nonprofit media sector can have a major impact. We just have to hope that funders of nonprofit media don’t start evaluating them with the same criteria as commercial players.

Ben Smith, BuzzFeed’s talented editor, does not buy the argument that BuzzFeed is recreating a cross-subsidy, with frothy lists subsidizing the Russian coverage. He believes that excellent journalism is very much in their financial interest. Personally, I think he’s kidding himself—and I hope he keeps on doing it.

The post Can Listicles Fund the Baghdad Bureau? appeared first on Washington Monthly.

]]>
Tilting at Windmills https://washingtonmonthly.com/2014/06/06/mobilizing-chick-power-male-menopause-can-dems-match-gun-raffles/ Fri, 06 Jun 2014 15:03:27 +0000 https://washingtonmonthly.com/?p=12579 Is McConnell’s “free speech” machine about to meet its match?

The post Tilting at Windmills appeared first on Washington Monthly.

]]>
Does it make me a bad person that I’m tickled pink by Mitch McConnell’s scramble to keep pace in the money race for his Senate seat?

During my tenure as an editor at the Washington Monthly (late 1996 through 1998), among the political hot topics were the Clinton fund-raising scandals. Amid the buzz about John Huang and “no controlling legal authority,” advocates for campaign finance reform clamored for action. I recall trudging up to the Hill to talk with John McCain about his and Russ Feingold’s signature reform plan (which took another seven years to pass). I also recall reporting multiple stories on McCain-Feingold’s chief nemesis, McConnell. Known back then as “the Darth Vader of campaign finance reform” (a moniker he wore with pride), McConnell did his damnedest to smother any attempt to reduce the influence of money in politics with his big, fluffy, money-equals-free-speech pillow.

Flash-forward nearly two decades (and multiple Supreme Court decisions empowering big donors), and McConnell is trying to weather an incoming storm of hostile free speech. Though he ultimately squashed his primary challenger, the Senate minority leader was compelled to spend millions hammering at Tea Party darling Matt Bevin, who had the backing of numerous outside groups, including FreedomWorks, the Madison Project, and the Senate Conservatives Fund. (Plus, Bevin had the personal resources for some supplemental self-financing.) Better still, buoyed by McConnell’s flaccid approval numbers, Dems have helped Alison Lundergan Grimes out-raise McConnell in two of the last three quarters, and the race is expected to be one of the most, if not the most, expensive this cycle.

I assume that, in the end, McConnell will live to serve another term. The GOP can’t afford the humiliation of having its Senate leader booted (though just think of the vicious succession battle!). Plus, as you might imagine, McConnell has a “free speech”-raising machine most pols would kill for. Still, it’s been a while since Darth Vader had a real race on his hands, and nobody more deserves to have to go out and grub for campaign cash.

Jeb stands up for “Obamacore”

Say this for Jeb Bush: the man has cojones. The former Florida governor threw conservatives into a tizzy when, during an April 6 shindig at his daddy’s presidential library in Texas, he told a Fox News interviewer that illegal immigration is not a felony but “an act of love.” Immigrant bashers from Iowa Representative Steve King to rodeo clown Donald Trump jockeyed to see who could most vigorously slam Jebbie as a combination of stupid, naive, and opportunistic.

But getting all touchy-feely about immigrants wasn’t the only giant bull’s-eye Jeb slapped on his chest in that interview. He also reaffirmed his love for the Common Core State Standards, or CCSS, the set of education benchmarks that enjoyed strong bipartisan backing until conservatives decided that it was too closely associated with President Obama and so, by definition, must be a tool of Satan. Trust me: do not get Tea Party types talking about “Obamacore” unless you are ready to have your ears seared with talk about the federal government’s plot to control America’s children. But not Jebbie, who has spent much of his post-governorship focused on education reform: “I just don’t feel compelled to run for cover, when I feel like this is the right thing to do for our country,” he insisted.

Way to stand tall, big guy! Of course, the question isn’t really whether Bush should run for cover. It’s whether the base will support his running for anything else. At this rate, his best shot at the presidency in 2016 may be on the Democratic ticket.

Extremism in the denial of moderation

Then, of course, there’s the GOP’s anti-Jeb, Louisiana Governor Bobby Jindal, who has been struggling mightily for the past few years to erase any sign of the pragmatic reformer he was once known to be.

As Bush courts the base’s wrath with his defense of the CCSS, Jindal scurries to sandbag the education reform he too once supported. In early April, the governor cheered efforts by a small band of state legislators to derail Louisiana’s adoption of the new standards. A couple of weeks later, Jindal called on the state to pull out of the consortium, known as the Partnership for Assessment of Readiness for College and Careers (PARCC), which has been working to develop the assessment test associated with the new standards. He even made noises about wielding his executive pen if the legislature failed to act.

As the New Orleans Times-Picayune points out, pulling out of PARCC is, practically speaking, a symbolic gesture at this point. The test is already pretty much developed. But, as the Times-Picayune also notes, all Jindal’s huffing and puffing will play well with the CCSS’s conservative critics.

Mobilizing chick power

Despite my admiration for Texas senator turned gubernatorial hopeful Wendy Davis’s true grit, I expect the Democratic nominee to lose her bid against Republican State Attorney General Greg Abbott. For all the talk about its shifting demographics, Texas, at this point, is still Texas. Just ask Ted Cruz.

That said, Davis may have done more to influence the midterms thus far than any Democratic pol not named Obama. And unlike POTUS, who has Dems playing defense, Davis has helped put them on offense. Specifically, her relentless gigging of Abbott over the wage gap this winter proved to have such traction that it revived the issue as a national cause. Chick-power groups like Emily’s List started firing off blistering press releases accusing the GOP of not caring about women’s economic struggles. Before long, the entire Democratic Party was looking to the pay gap as a way to reorient 2014 away from Obamacare and back toward the Republican-war-on-women meme. The Democratic Senatorial Campaign Committee went so far as to launch a “GOP Pay Gap” campaign, looking to, as a spokesman explained to CNN, “hold Republican Senate candidates accountable for their baseless and partisan opposition to equal pay for equal work.”

Not bad for a woman sneeringly dismissed by right-wingers as “Abortion Barbie.”

Learning from ladies eating

OMG! OMG! OMG! This PR email landed in my in-box from Vogue about its May cover girl, actress Emma Stone: “Emma Stone is flying high—major movie roles, a Spider-Man beau, fashion-world heat—but, as Jason Gay discovers, she’s just as down-to-earth and devilish as ever.”

Of course he does, because that’s what every writer for every glossy magazine discovers about his or her celebrity interview subject. No matter how rich or famous, Hollywood A-listers profiled in Vogue or GQ or Vanity Fair or Esquire are always revealed to be so very, very real. And, bizarrely enough, writers’ absolute favorite method for illustrating this realness—at least when profiling women—is to describe the ladies eating.

I’m serious here. The first thing that crossed my mind when I read Vogue’s press release was: I bet you a million bucks the piece talks about how Stone likes to scarf down junk food just like you and me. Sure enough. The opening sentence: “It is a quiet Tuesday afternoon in Los Angeles, and Emma Stone and I are at a mall, eating hot dogs on a stick from Hot Dog on a Stick, sitting with our teddy bears from Build-a-Bear Workshop.”

Hot dogs on a stick! How adorably down-to-earth is that? I was immediately reminded of an old GQ profile of Mad Men’s January Jones (November 2009, it turns out), which opened with the actress wanting to hit the Chili’s down near “the H gates” at O’Hare. Or how about the lead of this May 2010 Esquire profile of Jennifer Lawrence: “Jennifer Lawrence is hungry. It’s 9:00 A.M., she’s been up for an hour, and she hasn’t eaten a thing. ‘I’m freakish about breakfast,’ she says, by which, thank God, she doesn’t mean she wants an extra diet cookie. ‘You’re not gonna order, like, fruit or something, are you?’ she asks, with real concern. ‘Because I’m gonna eat.’ She orders the eggs Benedict without looking at the menu.”

Turns out, the svelte-actresses-who-eat-like-lumberjacks trope is so common that a veteran movie publicist coined the term “Documented Instance of Public Eating,” or DIPE, to describe it. The New York Times Dining and Wine section ran a feature on DIPE in early 2011. Clearly, the situation has not improved since then.

Heavy sigh. It’s not enough for magazines to airbrush cover models to the point that they only vaguely resemble real people. Do they have to pile on by trying to convince us that these alien creatures look like they do despite cruising airport terminals for beer and queso?

Rising tide swamps Republicans I

Much has been written about the emerging fault lines in the Republican Party over climate change. The Christian Coalition, of all groups, has been pushing GOP pols to stop foot-dragging on the issue. Out in Arizona, Barry Goldwater Jr. has put his family’s famous brand name behind a crusade for solar energy. Multiple polls show that, particularly among non-antiquated Republicans, patience is wearing thin for politicians who toe the global-warming-is-a-hoax line even as rising sea levels spill into cities like Miami and Norfolk.

Rising tide swamps Republicans II

And now for our latest installment in the ongoing adventures of How Conservatives Love to Bash Big Government Right Up Until They Need It. This month’s topic: federally subsidized flood insurance.

To review: The National Flood Insurance Program (NFIP), created in 1968, established a basic compact between the federal government and residents of flood-prone areas. As Scott Gabriel Knowles, author of The Disaster Experts: Mastering Risk in Modern America, neatly explained in Slate, “In exchange for government-subsidized insurance, communities would undertake and maintain serious commitments to restricting development in low-lying areas. Government scientists would provide the detailed floodplain maps necessary to judge where to build or not.”

Alas, the latter half of this compact collapsed under the combined weight of aggressive state-level lobbying by developers, the NFIP’s failure both to enforce coverage requirements and to keep up with its mapping duties, and, of course, Americans’ deep-rooted conviction that nothing as silly as frequent floods or hurricanes should deter them from living anywhere they damn well please. Perhaps unsurprisingly, the NFIP wound up bankrupt post-Katrina, and today, taxpayers are on the hook for an estimated $24 billion in flood insurance debt.

Looking to fix this mess, in 2012 Congress overhauled the NFIP so that high-risk residents would pay premiums closer to market rate. But—whoopsie!—the minute those higher insurance bills started rolling in last October, residents flipped out and ran screaming to their congressional reps. Rather than seek targeted fixes (like, say, issuing vouchers to low-income residents), besieged lawmakers scrambled to restore, indefinitely, the bulk of the subsidies—which they did with the March passage of the Homeowner Flood Insurance Affordability Act.

The rich irony here is the abundance of conservatives who lined up to roll back the 2012 reform, including big-government bashers such as Marco Rubio (Florida), Jeff Sessions (Alabama), and, Mr. Government Shutdown himself, Ted Cruz (Texas). Hmmmm. Wonder if this we-love-subsidies vote will hurt Cruz’s 100 percent rating from the American Conservative Union?

Of Dems and denial

Cruz et al.’s hypocrisy on flood insurance may be snicker worthy, but the underlying problem is downright depressing. Subsidizing people to live in flood-prone areas isn’t merely irrational, it’s dangerous. As Ari Phillips at ClimateProgress.org pointed out, “This is especially true in coastal flooding zones where rising sea levels due to climate change, extreme weather events and human-induced erosion and environmental degradation can make the risks outweigh the benefits, and the costs—for which taxpayers are liable—exceedingly high.”

The 2012 NFIP reform was a vanishingly rare instance of Congress coming together to address a long-term problem. But the second the blowback started, members on both sides of the aisle raced to put short-term political concerns ahead of sensible, long-term policy. Hardly a bipartisan portrait in courage.

Closing the confidence gap

Tied to last month’s Coachella festival, the New York Times Style section ran a piece about how, increasingly, men are feeling pressure to prove their fashionista credentials at the annual music, arts, and recreational-drug extravaganza. Good-bye baggy shorts and Ts. Hello mesh cardigans and pastel onesies.

The piece made me smile—and not just because it pictured a guy with a ZZ Top beard clad in a baby-blue shortie jumpsuit printed with honeybees. Nope. My satisfaction ran deeper. I have long felt that, if pop culture is going to make women paranoid about everything from the shape of our eyebrows to the brand of our sports bras, then men should face similar expectations. Seriously. Why should guys get away with rolling out of bed, pulling on the same sweatpants and grungy ball cap they’ve been wearing since college, and calling it a look? No sirree. I want men angsting about whether their new Tom Ford jeans make their butts look big just like women do. Fair is fair.

Male menopause

In February, the New York Times looked at one of the newest studies, a comprehensive analysis of some 2.6 million people born in Sweden between 1973 and 2001, which ominously concluded, “Children born to middle-aged men are more likely than their older siblings to develop any of a range of mental difficulties, including bipolar disorder, autism and schizophrenia.”

This shouldn’t shock us. Boomers may have changed the way we think about aging, but, from a baby-making standpoint, fifty is not the new thirty. The fact that more and more of us (myself included) are finding reasons to put off having kids until well into our thirties or forties or even later does not change the biological reality that reproductive organs wear out. Cells mutate. The center cannot hold.

Until very recently, however, discussion of parental age revolved almost wholly around the fertility and genetic freshness of mothers—which meant that debate over the pros and cons got swept up into the super-touchy terrain of gender politics. The reasons why are not mysterious: waiting to have kids until one is professionally, economically, and/or emotionally stable has done much to expand women’s horizons. To bring up the downsides makes many of us anxious, as though even acknowledging the trade-offs is an endorsement of the keep-’em-barefoot-and-pregnant thinking of yore.

The new focus on old sperm is unlikely to defuse this bomb entirely. But at least now women aren’t the only ones who have to put up with annoying biological-clock jokes.

Baby Clinton

I’m sure you’ve heard by now that Chelsea Clinton is expecting a baby in the fall. I’d like to request that the family save us a whole lot of time and drama by simply naming the baby “President.”

Can Dems match gun raffles?

This election cycle has seen the rise of a hot new campaign gimmick among Republican candidates: gun raffles. Voters hand over all sorts of valuable personal information for a chance to win a shiny new firearm—the higher caliber, the better.

Some folks are appalled by this trend. But you have to admit it’s got a raw genius to it that goes beyond the usual, boring “Send us all your money for a chance to win dinner with Politician X.” As inflammatory culture war totems go, guns are hard to beat, imbuing these giveaways with the irresistible frisson of rebellion.

In the interest of fighting fire with fire, Dems need to get creative with contests of their own. Forget breakfast with Nancy Pelosi or even beer and pizza with POTUS. There have to be some awesome giveaways that would mobilize left-leaning voters. But what exactly? Most of the things that progressive pols want you to have that conservatives dislike don’t lend themselves to sexy raffle prizes: Obamacare, a higher minimum wage, reproductive rights, comprehensive immigration reform, marriage equality. . . . Maybe voters could register for a chance to have Joe Biden officiate at their same-sex wedding. (Or, really, any kind of wedding. Biden makes everything more festive.)

Obviously, this calls for some imaginative crowdsourcing. Poll your friends. Send along your best ideas.

W. gets with transparency, sort of

Calling out politicians for dishonesty, venality, cowardice, and general stupidity is part of what journalists are paid to do. But we should also give credit where credit is due. So count me among those impressed by the recent revelation that, in 2010, George W. Bush quietly signed directives expediting the public release of records from his presidency. According to Politico, the documents affected include “purely factual or informational” memos from aides, policy talking points, adviser recommendations on bills, and scheduling info. Plenty of hush-hush stuff will remain under wraps. But still, this is not what you’d expect from a guy who, in his first year in office, signed an executive order giving presidents greater power to keep their papers secret.

Friendly and unfriendly skies

News you can use: In the wake of the Spring Break rush, the fine folks at Airfarewatchdog.com released a survey on which airlines have the rudest flight attendants.

Voted worst (with 26 percent): Spirit Air.

And at the other end of the niceness spectrum: a tie between Southwest and Alaska (whose attendants only 1 percent of respondents found horrible).

Book your summer travel accordingly.

The post Tilting at Windmills appeared first on Washington Monthly.

Thrown Out of Court https://washingtonmonthly.com/2014/06/06/thrown-out-of-court/ Fri, 06 Jun 2014 14:47:29 +0000 https://washingtonmonthly.com/?p=12580

How corporations became people you can't sue.

The post Thrown Out of Court appeared first on Washington Monthly.


Late last year a massive data hack at Target exposed as many as 110 million consumers around the country to identity theft and fraud. As details of its lax computer security oversight came to light, customers whose passwords and credit card numbers had been stolen banded together to file dozens of class-action lawsuits against the mega-chain-store company. A judge presiding over a consolidated suit will now sort out how much damage was done and how much Target may owe the victims of its negligence. As the case proceeds, documents and testimony pertaining to how the breach occurred will become part of the public record.

All this may seem like an archetypical story of our times, combining corporate misconduct, cyber-crime, and high-stakes litigation. But for those who follow the cutting edge of corporate law, a central part of this saga is almost antiquarian: the part where Target must actually face its accusers in court and the public gets to know what went awry and whether justice gets done.

Two recent U.S. Supreme Court rulings—AT&T Mobility v. Concepcion and American Express v. Italian Colors—have deeply undercut these centuries-old public rights, by empowering businesses to avoid any threat of private lawsuits or class actions. The decisions culminate a thirty-year trend during which the judiciary, including initially some prominent liberal jurists, has moved to eliminate courts as a means for ordinary Americans to uphold their rights against companies. The result is a world where corporations can evade accountability and effectively skirt swaths of law, pushing their growing power over their consumers and employees past a tipping point.

To understand this new legal environment, consider, by contrast, what would have happened if Amazon had exposed its 215 million customer accounts to a security breach similar to Target’s. Since Amazon has taken advantage of the Court’s recent decisions, even Amazon users whose bank accounts were wiped clean as a direct result of the hack would not be able to take the company to court. “The lawsuits against Target would almost certainly not be possible against Amazon,” says Paul Bland, executive director of Public Justice. “It’s got its ‘vaccination against legal accountability’ here.”

Following the 2011 and 2013 Supreme Court rulings, dozens of other giant corporations—from Comcast and Wells Fargo to Ticketmaster and Dropbox—have secured the same legal immunity. So have companies ranging from airlines and gyms to payday lenders and nursing homes, all of which have quietly rewritten the fine print of their contracts with consumers to include a shield from lawsuits and class actions. Meanwhile, businesses including Goldman Sachs, Northrop Grumman, P. F. Chang’s, and Uber have tucked similar clauses into their contracts with workers.

Hastily clicking through terms of service is now all it can take to surrender your rights to these companies. Once you do, your only path for recourse if you’re harmed by any one of them is “mandatory arbitration,” where the arbitrator is often chosen by the corporation you’re challenging, and any revelations about the company’s wrongdoing tend to be kept secret. Rather than band together under the light of the public courtroom, each individual has to work through the darkness of a private tribunal, alone, where arbitrators can interpret laws however they wish. Certain inalienable rights, the Court has ruled, are actually kind of alienable.

The court decisions that birthed this brave new world coincided with a rising conservative legal movement that advocates judicial restraint and a corporate lobby that has successfully pushed the idea that America is an excessively litigious society in dire need of “tort reform.” The result, lawyers and scholars say, is that thousands of cases that individuals once had a shot of winning can no longer even enter a courtroom, jeopardizing enforcement of laws spanning consumer and employee protection, civil rights, and antitrust.

“Arbitration is being used to keep individuals from having any effective ability to enforce their rights,” says Margaret Moses, a professor at Loyola University Chicago School of Law. “You’ve completely undercut the law, cut it off at the knees.”

As with many of America’s legal traditions, our right to sue was born of a deep skepticism of concentrated power. Early Americans recognized that their ability to bring civil suits against politically connected wrongdoers would lessen their dependency on often-corrupt government officials. Following ancient British traditions, the Founders also enshrined the right to a jury trial in the Seventh Amendment, while preserving the principle, dating back to the 1267 Statute of Marlborough, that all trials be open to the public. Recognition of these rights reflected a fundamental awareness that laws created in a democratic society would be meaningless unless citizens also ensured their fair enforcement.

In the nineteenth century, Congress further embedded these principles by allowing individual plaintiffs to assume much of the enforcement role played by large regulatory bureaucracies in other industrializing countries. Practices like awarding triple damages to successful plaintiffs in antitrust suits, for example, encouraged private parties to take the lead at a time when “politically powerful institutions [were] able to intimidate and subvert public enforcement,” explains Paul Carrington, professor at Duke University School of Law. “Congress made the assessment that if it wanted the antitrust law enforced, it would have to rely primarily on private lawyers advising and representing the smaller businessmen whom the law was intended to protect.”

None of these principles were disturbed when, in 1925, Congress passed the Federal Arbitration Act, which recognized a limited use of arbitration as a way for businesses to speedily resolve disagreements with each other outside of courts. As corporate transactions had risen dramatically in the early 1900s, so had corporate disputes. Lobbied heavily by New York business interests, lawmakers recognized that freeing judges from resolving procedural skirmishes over contracts could benefit everyone. Some officials were wary that expanded use of arbitration might, as one put it, let “the powerful people … come in and take away the rights of the weaker ones.” But the business lobby assured them that arbitration would only be used between equally sophisticated companies, and only if both parties agreed.

And for many decades to come, arbitration worked as promised. But starting with a series of decisions in the 1980s, unlikely bedfellows on the Supreme Court would steer us down an entirely new path.

The slow creep began with a 1983 case, Moses H. Cone v. Mercury Construction. Though arbitration wasn’t the central issue at play, the Supreme Court used its opinion to offer a radically novel interpretation of the Federal Arbitration Act. Writing for a 6–3 majority, Justice William Brennan—a leader of the Court’s liberal wing—declared that the FAA reflected a “federal policy favoring arbitration.” The idea that Congress had intended arbitration as preferable to courts rather than just as an alternative hadn’t been aired before. Still, Brennan’s language was clear and decisive—and future judges would lean heavily on it as they razed the walls that had kept arbitration in its place.

Two successive decisions accelerated what might have been a brief and quirky deviation into a major turning point. In 1984, the Supreme Court heard a case brought in California by 7-Eleven franchisees against their franchisor, Southland, which had included in their contracts a binding arbitration clause. California outlawed these clauses, recognizing that the franchisees rarely had the power to negotiate these terms. Yet Southland boldly argued that its contract overrode the state’s law. Drawing on Brennan’s unusual interpretation from the previous year—that Congress had intended a “federal policy favoring arbitration”—a 7–2 majority on the Supreme Court ruled for Southland, eroding the power of states to regulate how companies use arbitration.

In a striking dissent, Justice Sandra Day O’Connor, a conservative, berated the majority for ignoring legislative history. “Today’s decision is unfaithful to congressional intent, unnecessary, and … inexplicable,” she wrote. “Although arbitration is a worthy alternative to litigation, today’s exercise in judicial revisionism goes too far.”

It would soon go farther. In 1985, the Supreme Court heard Mitsubishi v. Soler Chrysler-Plymouth, a case in which a car dealer had sued the Japanese manufacturer for violating antitrust laws, and Mitsubishi had pushed to arbitrate. Recalling the Federal Arbitration Act, the car dealer pointed out that companies could only use arbitration to settle contracts they had written, not interpret laws Congress had passed, like the Sherman Antitrust Act. Stunningly, a five-justice majority—riding its recent wave—sided with Mitsubishi. Arbitrators could now rule on actual law—civil rights, labor protections, as well as antitrust—with no accountability or obligation to the public.

Penning an impassioned dissent, Justice John Paul Stevens warned that there were great hazards in allowing “despotic decision-making,” as he called it, to rule on law like antitrust. “[Arbitration] is simply unacceptable when every error may have devastating consequences for important businesses in our national economy, and may undermine their ability to compete in world markets,” he wrote.

Three years, three decisions: the Supreme Court had drastically enlarged the scope of arbitration. The way the Court split didn’t map neatly onto partisan ideology: liberal justices led the majority in two of the cases and dissented in others, while the Court’s conservative wing—which generally preferred to leave arbitration to the states—also jumped around.

Little evidence suggests that Brennan’s analysis followed congressional intent. “There was nothing in the legislative history that says Congress favored arbitration,” says Loyola’s Margaret Moses. “The Supreme Court just stated it and then kept citing itself. It’s spurred a huge policy shift, with no basis in legislation.”

However baffling its reasoning, this drastic shift by the Court followed a decade during which the conservative legal movement had rapidly gained intellectual clout and political power. The infamous 1971 “Powell memo”—a call to arms to corporations written by then-corporate lawyer Lewis Powell, who would join the Supreme Court the following year—had galvanized the business community into organizing against liberal groups and consumer activists like Ralph Nader.

The Court’s turn also accorded with a well-financed political campaign for “tort reform,” a conservative cause backed by groups such as the Federalist Society and the Olin Foundation. George H. W. Bush campaigned on tort reform in 1991, while Vice President Dan Quayle headed up the Council on Competitiveness, which held as a central aim eliminating class-action litigation against business. As one scholar of the movement put it, the prevailing belief at the time was that America suffered from “too much law, too many lawyers, courts that take on too much—and an excessive readiness to utilize all of them.”

And true enough, by some measures litigation had increased. In 1962, for example, U.S. district courts conducted just under 6,000 civil trials; by 1981, they conducted more than 11,000. Public figures and the media tended to attribute all of this growth to “frivolous” lawsuits and zealous trial attorneys, but the rise also traced back to other factors, such as the civil rights wins of the 1960s, which meant that laws now protected a much larger segment of the population.

Nonetheless, it became received wisdom in many quarters that America had become an excessively litigious society. Over the 1990s, books like The Litigation Explosion: What Happened When America Unleashed the Lawsuit proliferated, shaping the climate in which the Court continued to restrict lawsuits and promote arbitration. In 1998 the Chamber of Commerce founded the Institute for Legal Reform, committed to reducing “excessive and frivolous” lawsuits. The Federalist Society convened discussions such as “Is Overlawyering Taking Over Democracy?”

Against the ongoing meme of superfluous litigation, the courts further expanded the realms in which companies could compel arbitration. In the 1995 case Allied Bruce, the Supreme Court approved the use of arbitration clauses by companies in routine consumer contracts. In 2001 the Court ruled against a group of Circuit City workers, holding that employers could use arbitration clauses in contracts with employees. In 2004 a court ruled that arbitration clauses were enforceable against illiterate consumers; another court ruled that they were enforceable even when a blind consumer had no knowledge of the agreement.

Yet the true watershed moment came in 2011, in the case of AT&T Mobility v. Concepcion. Vincent and Liza Concepcion had sued AT&T in California court, charging that the company had engaged in deceptive advertising by falsely claiming that their wireless plan included free cell phones—a practice that had shortchanged millions of consumers out of about $30 each. When they tried to litigate as a class, AT&T pointed to the fine print that prohibited consumers from banding together.

The Concepcions countered that these kinds of class-action bans violated California law as well as that of twenty other states. Moreover, scores of federal judges had forbidden this kind of class-action ban, on the grounds that people often had no practical way to make a claim unless they joined with other plaintiffs in sharing the cost. Allowing companies to wipe away this right in “take-it-or-leave-it” contracts for products like credit cards or phone service would effectively let corporations write themselves a free pass.

The district court and the Ninth Circuit Court of Appeals both supported the Concepcions, ruling that AT&T’s terms were “unconscionable,” a term of art historically used to describe contracts that so favored parties with superior bargaining power as to be unjust. When the case reached the Supreme Court, eight state attorneys general, as well as a legion of civil rights organizations, consumer advocates, employee rights groups, and noted law professors, also weighed in, arguing that allowing these kinds of class bans would enable companies to evade entire realms of law. But the Supreme Court, in a 5–4 split, blessed AT&T’s contract, opening the door for companies to ban class actions routinely in their fine print.

At this point, there was one slender thread of protection left: class-action bans still weren’t enforceable if they eliminated the only way someone could bring a case. But in 2013, the Supreme Court gutted even this protection in a case pitting Italian Colors, a family restaurant in Oakland, California, against American Express. This time around, the same five-justice majority ruled that class-action bans in arbitration contracts were legal—even when they left citizens with no recourse at all.

Immediately, law firms around the country blasted out advisories to their corporate clients: Time to rewrite your contracts. The law firm Baker & McKenzie called it a “sea change,” comparing it to the “disruptive innovations from chemical photography to digital photography, from personal computers to smart phones and from snail mail to email,” and noting that if employers drafted the right language, “[e]mployment class action suits are no longer necessary.”

Schnader Harrison Segal & Lewis put it most succinctly: “For practical purposes the ‘effective vindication’ doctrine is a dead letter.” Courts no longer cared whether the fine print blocked individuals from claiming their statutory rights. Now, companies would be foolish not to adopt this innovative clause.

So corporations have taken heed, quietly folding these new terms into what are often “take-it-or-leave-it” agreements based on pure market power—realizing the exact scenario Congress feared ninety years ago. “These terms get foisted on us,” said Pamela Gilbert, partner at Cuneo Gilbert & LaDuca and consumer rights advocate. “They’re not really ‘contracts’ at all.”

Stories documenting Americans’ fabled zeal for lawsuits are legion. There’s the one about the old lady who sued McDonald’s over a cup of coffee that was too hot, or about the guy who took Anheuser-Busch to court because his six-pack failed to deliver visions of beautiful women clad in bikinis on a balmy beach.

Yet while anecdotes about frivolous litigation have risen to the rank of cliché, the number of lawsuits brought by Americans has actually been falling for decades. The latest data shows that, on a per capita basis, the total number of cases commenced in U.S. district courts fell by 11 percent between 1996 and 2013, personal injury cases by 58 percent, and civil rights cases by 29 percent. At the state level, the number of tort cases filed per capita between 2001 and 2010 dropped by 23 percent in Texas district courts, by 29 percent in California superior courts, and by 30 percent in New York supreme and county courts.

It is still too early to quantify the fallout since the Supreme Court’s latest decisions. But anecdotes capture many instances in which companies have taken advantage of the rulings to thwart suits. In 2011, for example, students won a $40 million settlement by filing actions against the Career Education Corporation (CEC), owner of for-profit culinary schools. According to students, the CEC misrepresented the value of its degrees and deceived them into taking on crippling debt. Since then, the CEC has added a binding arbitration clause and class-action ban to its contracts. An attorney who represented students in one of the earlier cases says he will no longer bring suits for similarly situated students, as the CEC’s new clause is effectively impossible to overcome.

Julie Strandlie, legislative and public policy director of the National Employment Lawyers Association, says that the organization now regularly sees employees forced into arbitration on matters spanning alleged wage theft, discrimination, and unlawful termination. “These terms are slipped into contracts between parties of unequal bargaining power; they force people to give up their rights to get or to keep a job,” she said.

Consumer and employee advocates note that arbitration can be a fair and speedy alternative to courts, especially when tight budgets have slashed judiciary funds and shuttered courtrooms in places like California, which has closed 51 courthouses and 205 courtrooms since 2008. But if individuals truly stand to gain from arbitration, these critics note, they will sign up for it willingly. “Nobody is against alternative mechanisms to resolve disputes—just those that are unknowingly and involuntarily forced on us,” says Gilbert.

If the only way businesses can get individuals to arbitrate is by imposing it on them, it seems clear who’s reaping the gains. “The whole notion of mandatory arbitration clauses is designed to disenfranchise consumers and citizens through language that nobody reads,” says Bill Brauch, director of the consumer protection division at the Iowa attorney general’s office.

When coupled with class-action bans, binding arbitration can wipe out private cases entirely. Experts say that this could become more common in the realm of antitrust, where the cost of bringing a case is usually far beyond the reach of any single individual or small business. So learned Alan Carlson, the plaintiff in the 2013 landmark American Express v. Italian Colors case. The owner of an Italian family restaurant, Carlson charged that American Express was abusing its market power by forcing him and other business owners to accept new cards with much higher rates. But when the Supreme Court denied Carlson the right to join with other small businesses in bringing an antitrust suit, it effectively prevented him and everyone else affected from pursuing any recourse at all.

Because antitrust cases today require extensive economic analysis, bringing such a case would cost anywhere from $200,000 to $1 million in fees, while the highest judgment any individual might win would be around $40,000. Even if he’d been able to scrounge up the money somehow, “no attorney would take the case because it made no economic sense,” Carlson says. “The bully is holding all the cards.”

Bert Foer, president of the American Antitrust Institute, says that many sound antitrust cases no longer get heard, by a judge or an arbitrator, because of class-action bans. “The American Express decision cuts back the quantity of antitrust cases that can be brought [by private parties],” he says. “It takes away citizens’ rights.”

In a fiery dissent in the American Express case, Justice Elena Kagan homed in on how the same muscle that empowers a company to impose a monopolistic scheme also enables it to force its customers into signing away their right to sue or engage in collective action. The Court’s decision means that a “monopolist gets to use its monopoly power to insist on a contract effectively depriving its victims of all legal recourse,” she wrote. The Court’s answer, according to Kagan? “Too darn bad.”

Countering these stories are studies purporting to show that compelling consumers and employees into arbitration actually works in their interests. One by an attorney at the National Arbitration Forum claims that consumers prevail in 61 percent of the arbitrations they initiate. Another funded by the U.S. Chamber Institute for Legal Reform surveyed empirical research to conclude that individuals achieve “superior” results in arbitration compared to courts.

Groups like Public Citizen and Public Justice point out, however, that many of these studies cherry-pick data or offer up misleading analyses (a charge that the authors of those studies level right back at them). A 2007 report by Public Citizen, for example, reviewed data from one of the big arbitrator companies and found that arbitrators ruled against consumers 94 percent of the time. In 2008 it also reviewed the Chamber’s study to reveal how the underlying data actually showed that individuals generally win less often and receive less money in arbitration than in court. A recent study by Mark Gough, a PhD candidate at the School of Industrial and Labor Relations at Cornell University, focused on employee-employer disputes and documented the same trend: arbitration decreases the odds of an employee win by 59 percent, and decreases the amount awarded by 35 percent. For employees, “[o]utcomes in arbitration are starkly inferior to outcomes in litigation,” Gough concluded.

James Baker, a partner at Baker & McKenzie who specializes in defending employers against suits over pension plans, agrees. “There’s no question that [arbitration] favors the company’s interest over employees,” said Baker, adding that he sees fewer class-action cases filed in the wake of the Concepcion and Italian Colors decisions.

Bear in mind that all these studies on the outcomes of arbitration must either draw from small sample sizes or depend on the big arbitrators—like the American Arbitration Association and the Judicial Arbitration and Mediation Services—to voluntarily share data. No public body tracks even the number of arbitration claims filed. In 2002 California enacted legislation requiring arbitration companies to publish basic data on the consumer arbitrations they administer. A state compliance review last year found that as of December 2013, only eight of the twenty-five private arbitration companies in California posted any of the required information at all.

Critics also point out that assessing arbitration based on the outcomes of cases is misleading because it overlooks all the cases that arbitration may suppress. Studies have found that a tiny fraction of the population actually chooses to arbitrate. Last December the Consumer Financial Protection Bureau released the preliminary results of its study on the use of arbitration clauses in credit cards, checking accounts, and pre-paid cards. It found that between 2010 and 2012, out of 80 million American cardholders, only 1,033 consumers arbitrated with companies—roughly 0.001 percent. It’s the same picture across industries: between 2003 and 2007, only 170 of AT&T’s 70 million customers filed an arbitration claim, or 0.0002 percent.

Consider, too, how the incentives on an arbitrator differ from those on judges. Judges are paid from public taxes; arbitrators are paid by whoever is retaining them. Sometimes both parties split the cost equally; often companies will offer to cover the entire thing. Either way, critics say the chance of repeat business can give arbitrators an incentive to rule in a company’s favor.

In 2009, the National Arbitration Forum, the largest administrator of credit card and consumer collections arbitrations in the country at the time, was found to be persuading companies to insert arbitration clauses in contracts and then appointing itself to arbitrate them—in other words, peddling its services to the very organizations whose actions it would later judge. Similarly, in cases where there is a dispute over whether a contract requires arbitration, the courts have ruled that arbitration companies themselves should decide whether or not they get the business.

Arbitrators are also unlike judges in that they don’t have to follow legal precedent, they’re not bound by the same rules of evidence as courts, and—in some states—they don’t have to be lawyers at all. “Arbitrators are much less demanding in the evidence they require,” said Allan King, a partner at Littler Mendelson with extensive experience defending companies against class-action suits.

Still, courts now give great deference to whatever arbitrators decide. In one Seventh Circuit case, the court said it would uphold an arbitrator’s decision even if his interpretation of a contract were “incorrect or even wacky.” Only if an arbitrator shows “manifest disregard” for the law can the decision be overturned.

The greatest damage here isn’t to us as individuals. “Mandatory arbitration is a basic threat to our democracy,” says Deepak Gupta, who argued the 2011 AT&T case before the Supreme Court. “This isn’t about us all getting our $30 checks when a company has ripped us off. It’s about laws that Congress passes being enforceable. The Supreme Court is allowing corporations to overturn law made by people we elect.”

Diverting all cases to arbitration also promotes a culture of impunity, enabling wrongdoers to more easily continue their wrongdoing. And when the threat of litigation is strong, it discourages corporations from engaging in misconduct in the first place. By contrast, says Ed Mierzwinski, consumer program director at the U.S. Public Interest Research Group, “[t]he use of forced arbitration clauses has essentially immunized corporate America from any responsibility for its actions.” In the same way that the Justice Department’s decision to fine banks rather than prosecute executives encourages financial institutions to build in penalties as a cost of business, arbitration incentivizes companies to write down settlement awards as a routine cost, while perpetuating harms at large.

Public officials say they cannot fill the void created by the drop-off in private suits. “We cannot bring every case. No state has ever had or will have enough resources to supplant the role that private-action attorneys have,” said Brauch from the Iowa attorney general’s office. The proliferation of binding arbitration means that “state laws are basically being gutted,” he said.

By enabling companies to keep their wrongdoing secret, arbitration chokes off information vital to the public. Consider the long course of anti–tobacco company litigation and how it ultimately affected policy. Individual plaintiffs filed over 800 suits against tobacco companies between 1954 and 1994, bringing reams of internal documents into the public domain. Although the companies overwhelmingly prevailed in the private actions, the information the suits unsealed eventually emboldened forty-six states to file their own cases, culminating in a $206 billion settlement that also imposed sweeping changes across the industry.

If people harmed by tobacco companies had been forced to arbitrate their cases, there’s a good chance the public today wouldn’t know how tobacco companies maneuvered to make cigarettes more addictive and to hide their lethal health effects. More recently, details on how Bank of America, JP Morgan, and other financial institutions wrongfully seized people’s homes in the wake of the subprime mortgage bust also emerged out of a case brought by a private lawyer.

What will become of all this depends on whether Congress chooses to act. Last year Minnesota Senator Al Franken and Georgia Representative Hank Johnson reintroduced the Arbitration Fairness Act, which would prohibit mandatory arbitration in employment, consumer, civil rights, and antitrust disputes. Lawmakers have been floating a version of the bill for seven years. Even its supporters admit that—given the level of opposition from the Chamber of Commerce and other business interests—it is unlikely to pass anytime soon.

If Congress doesn’t act, though, core legislation that Americans have won through decades-long fights for a more just society—including minimum-wage laws, bans on racial discrimination, and checks on monopolies—is greatly imperiled. The outcome will go to the heart of whether laws function as a check on corporate power in an era where the weight and reach of that power have grown immensely. Unless we seize back our right to courts, these other rights will exist only in name.

Working the GOP’s Weak Spot https://washingtonmonthly.com/2014/06/06/working-the-gops-weak-spot/ Fri, 06 Jun 2014 14:45:54 +0000 https://washingtonmonthly.com/?p=12581

How Barack Obama is following Bill Clinton's minimum wage game plan to try to hold onto the Senate.

The post Working the GOP’s Weak Spot appeared first on Washington Monthly.


On April 30, Senate Republicans filibustered a bill sponsored by the Democrats and heavily promoted by President Obama that would have raised the federal minimum wage from $7.25 to $10.10 an hour. It was an impressive show of unity by GOP senators, only one of whom, Bob Corker of Tennessee, supported the Democrats’ failed effort to send the bill to the floor for a vote.

But that very day, former Minnesota Governor Tim Pawlenty made news when he said on MSNBC that “Republicans should support reasonable increases to the minimum wage.” Five days later, another 2012 GOP presidential candidate broke ranks. “Let’s not make this argument that we’re for the blue-collar guy but we’re against any minimum wage increase ever,” said Rick Santorum. “It just makes no sense.” A month after that, Mitt Romney himself joined in the apostasy: “I, for instance, as you know, part company with many of the conservatives in my party on the issue of the minimum wage. I think we ought to raise it.”

Though Pawlenty, Santorum, and Romney aren’t running for office (at least not in 2014), their prominence in the party—and the journalistic rule that says three’s a trend—was enough to prompt this front-page Washington Post headline: “Split Appears in GOP as More Call for Raising Federal Minimum Wage.”

And it isn’t just Republican ex-presidential candidates who are having second thoughts on the minimum wage. In several hotly contested Senate races, GOP challengers have commenced various sorts of waffling on the issue. In Arkansas, Representative Tom Cotton, aiming to unseat Senator Mark Pryor, let it be known that he will “carefully study” a state minimum wage increase proposal after having previously opposed the very idea of a minimum wage. In North Carolina, Thom Tillis, during the GOP primary to take on Senator Kay Hagan, called the bill to increase the federal minimum wage a “dangerous idea.” But after winning the nomination in May, he told MSNBC’s Chuck Todd that it would be appropriate for the state legislature to decide whether to raise the minimum wage, though he refused to say whether he, the speaker of the North Carolina house, would support such a move.

That there’s some dissension and subtle repositioning occurring among Republicans on this topic is understandable. The GOP is well placed to retake the Senate this November, but the minimum wage is the one issue that could rob them of that prize. Most Republican elected officials oppose raising it (or even having it) out of a conviction that it will cost jobs (and it might—as many as 500,000 jobs, according to a recent CBO projection), or to match the beliefs of their most conservative voters, or because of pressure from business groups, or all three. Yet they know full well that a higher minimum wage is hugely popular, with 70 percent of voters, including about half of all Republicans, favoring it.

Democrats know that too, which is why they’ve made raising the minimum wage the main weapon in their 2014 electoral arsenal. It’s a core Democratic conviction, an evergreen the party has periodically and profitably turned to for decades. Yet it feels newly relevant in an era of rampant income inequality. And it is the perfect rhetorical snare for any Republican who tries to capitalize on the lack of income growth in the current recovery. The same recent CBO report projects that if the Democrats’ bill becomes law, more than 16.5 million workers would see their wages go up—for a collective total of $31 billion—including nearly a million people who would be lifted above the poverty line.

Indeed, Democrats have lined the route to November with a series of minimum wage traps. The April 30 Senate vote was the first; Senate leaders plan to keep bringing the bill up for a vote again and again throughout the summer and fall. Meanwhile, Democrats and groups supporting them, including labor unions, have managed to get initiatives to raise state minimum wage levels on the ballot in several states, including Arkansas and Alaska, where key Senate contests are also taking place. Such ballot measures have been shown to boost Democratic voter turnout in midterm elections by several percentage points.

Whether this will be enough to help the Democrats keep the Senate remains to be seen. But the power of the minimum wage, if handled right, to wreak havoc in the Republican ranks should not be underestimated.

The best way to understand that power is to look at the last time a sitting Democratic president tried to get a minimum wage increase past a recalcitrant Republican Congress in an election year. In 1996, Bill Clinton was running for a second term on the promise to, among other things, raise the minimum wage for the first time in five years, from $4.25 to $5.15 an hour. The House was under the control of market fundamentalists Newt Gingrich and Tom DeLay. The Senate was led by the very man running against Clinton for the presidency, Bob Dole. A less accommodating environment, in other words, is hard to imagine. Yet in the end, Clinton not only won reelection, he actually got the minimum wage bill through Congress, driving Dole out of the Senate in the process. That astonishing rout is very much on the minds of senior Obama administration officials, some of whom were directly involved in the 1996 minimum wage fight. So it’s worth looking into that fight in detail.

Bill Clinton occasionally mentioned raising the minimum wage during the 1992 presidential race, but it was not a major feature of his campaign. Nor was there much talk of it from the White House during the first two years of his presidency, despite lobbying by Labor Secretary Robert Reich and liberal stalwarts like Senator Ted Kennedy. Instead, Clinton and his New Democratic advisers were much keener to expand the Earned Income Tax Credit (EITC), essentially a subsidy for the working poor, which they did in the 1993 budget bill. Because it is targeted to families whose incomes fall below a certain level, the EITC is dollar for dollar much more effective at lifting families out of poverty than the minimum wage, for the simple reason that a large portion of those who earn minimum wage live in middle-income households (about a quarter of them are teenagers). The administration also wanted to avoid picking unnecessary fights with Republicans and business interests while they were trying to pass health care reform in 1994, a policy that, had it passed, would have done far more to improve the lives of lower-income families.

But when the push for health care reform failed, bringing in its wake a GOP takeover of both houses of Congress in the November 1994 elections, Clinton’s reluctance to engage on the minimum wage began to slowly lift as he gathered advice on how to right his presidency. In a private White House meeting in late December, Ted Kennedy argued that, rather than move to the right, as others were counseling him to do, the president should stand up for core liberal positions, like raising the minimum wage and defending Medicare, Medicaid, and education against attacks by the new conservative revolutionaries in Congress. “Their harshness will not wear well over time,” Kennedy promised. In early January, Reich made the case to Clinton that he should call for a higher minimum wage in the upcoming State of the Union address. When news leaked that Clinton would do just that, the new House majority leader, Dick Armey, a conservative former economics professor, vowed to fight a higher minimum wage “with every fiber of my being.” Other Republicans, like Dole, were more circumspect, not quite promising to oppose a popular idea but emphasizing their concerns about its potential negative effects on the economy.

Clinton’s decision to push the minimum wage issue was about more than political strategy and defending the liberal base. It was also part of the coming battle over reforming welfare, a core New Democratic issue he had run on in 1992. Having already expanded the EITC, raising the minimum wage was the logical next step in achieving the administration’s goal of “making work pay”—that is, assuring that a full-time job was a better deal economically than a welfare check.

Clinton and many of his advisers were also impressed with a study published the previous fall by Princeton University economists David Card and Alan Krueger—the latter having joined the administration as chief economist at the Labor Department. Their study looked at whether an increase in New Jersey’s minimum wage reduced employment at fast-food restaurants, comparing employment in the state with that in seven border counties in Pennsylvania. They found no significant differences in job gains or losses between the two states. These results, though controversial at the time, were consistent with other recent studies from the 1980s and early ’90s that had cast doubt on older assumptions by economists that modest increases in the minimum wage always bring with them serious job losses. “Now I studied the arguments and evidence for and against the minimum wage increase,” Clinton said in his State of the Union address that January. “I believe the weight of the evidence is a modest increase does not cost jobs and may even lure people back into the market.”

Republicans countered with studies of their own predicting catastrophic job losses, especially for minority youth, if the minimum wage were raised. How voters would process these dueling claims wasn’t clear, but Clinton was soon feeling better about the political wisdom of his decision. The president delightedly related to his staff a CNN interview with a woman who said her husband favored Clinton’s minimum wage proposal because it would raise wages for the vast majority of low-wage workers like himself even though some might lose their jobs. “Honey, I’ll take my chances,” the husband said.

If the proposed minimum wage increase was smart politics, almost nobody in Washington in early 1995 thought it could actually become law. Republican lawmakers, feeling triumphant from their historic takeover of both houses, were in no mood to concede on the issue. Aside from a couple of perfunctory hearings there was little discussion of it that year, and certainly no votes were allowed that might have put Republicans in the uncomfortable position of having to vote against a measure that was widely popular with the public.

Instead, Newt Gingrich pushed ahead with passing the elements of his “Contract with America” reform proposal, which ended with his engineering a series of government shutdowns in late 1995 and early ’96 meant to force the administration to accept major budget cuts. But Clinton held firm, the shutdowns famously backfired on the GOP, and by the spring of 1996, the president’s approval ratings were climbing while Republican lawmakers struggled to regroup. It was then that the Democrats sprung their trap.

In his run for the presidency, Bob Dole had decided to follow a “Rotunda strategy.” He would remain Senate leader and use his position to pass a series of bills that Clinton would either have to sign or veto, making it seem like he, not the president, was the man in charge. The dubiousness of this strategy was revealed on March 26, 1996, the very day Dole clinched the GOP nomination by winning the California primary. Ushering through a noncontroversial parks bill, Dole made an uncharacteristic slip in parliamentary procedure that allowed an alert Ted Kennedy to offer an amendment to raise the minimum wage. (“He’s just as good at what he does as Michael Jordan is at playing basketball,” Clinton would later tell Newsweek about Kennedy’s cleverness that day.) Blindsided, Dole put the Senate into a slow quorum call until he figured out what to do. He eventually utilized another parliamentary maneuver to avoid having to vote on the measure that day. “That won’t happen again,” he was heard muttering as he left the chamber.

But the item was now officially on the Senate agenda. Two days later, several moderate Republican senators from heavily unionized states announced their support for it, creating a fifty-one-vote majority in favor. Dole had to engineer a filibuster by Republicans to keep it off the floor. “We’ll be back offering this, week after week, until we get it passed,” vowed Senate Minority Leader Tom Daschle. A few weeks later, Democrats tried to add a minimum wage amendment to an immigration bill, forcing Dole to pull that bill off the floor, too.

All that spring, Democrats managed to bring the Senate nearly to a standstill with procedural tactics over the minimum wage. Meanwhile, the president was stepping up his rhetorical attacks on Dole for blocking the minimum wage proposal and deploying members of his administration, including First Lady Hillary Clinton, to stump on the issue. In his book The Politics of the Minimum Wage, political scientist Jerold Waltman of Baylor University describes the situation this way:

From a purely political perspective, nothing could be going better for the Democrats, or worse for Dole. He was on the wrong side of the popular issue; he appeared to be stifling democracy by not even allowing a vote; he was tied down in the Senate; he was caught between moderate and pragmatic Republicans who wanted to vote and put the issue behind them and diehard conservatives and business interests who were adamantly opposed to doing so; and his reputation for legislative effectiveness was being tarnished. One Democratic strategist gloated, “Every day spent on the minimum wage is another good day for the Democrats.”

Road wage: Barack Obama has been travelling the country arguing for a higher minimum wage. Credit: Getty Images

Things were hardly better for Republicans on the House side that spring. While Minority Leader Dick Gephardt was pressing for a vote on a similar bill to boost the minimum wage, the AFL-CIO began broadcasting pro-minimum wage ads in the districts of moderate and vulnerable House Republicans. By mid-April, twenty House Republicans had broken ranks and announced their support for the minimum wage increase.

Newt Gingrich could see that a majority was forming in both the House and the Senate to pass the minimum wage. So instead of fighting the inevitable, he decided to use the occasion to extract as much as he could for his party’s allies in the business community. “We need to be able to say that while we’re doing something that kills jobs, we’re doing other things to create jobs,” Gingrich told officials of the National Federation of Independent Business, a GOP-aligned trade group. “So you guys need to tell us items on your agenda that fit that bill. Just give us the list.” Republicans took that list—which included more-generous tax treatment for small businesses that made equipment purchases—combined it with demands from larger corporations ranging from pharmaceutical companies to soft-drink makers, and negotiated a deal with the Democrats (who also had corporate tax breaks they wanted to deliver) involving $21 billion in business benefits in return for an increase in the minimum wage. Democrats also agreed to a provision demanded by Republicans and heavily lobbied for by the National Restaurant Association’s then president (and future GOP presidential candidate), Herman Cain, freezing the sub-minimum wage for tipped employees at $2.13 an hour.

Meanwhile, Senate Democrats continued using the minimum wage to bollix up Dole. Giving his Rotunda strategy one last shot, on May 7 Dole attempted to get a vote on a temporary cut in the gasoline tax, a proposal he’d unveiled ten days previously to help energize his flagging presidential campaign. Democrats countered with a demand that their minimum wage proposal be considered along with the gasoline tax cut. Not wanting to have to vote on the minimum wage measure, Dole threw in a “poison pill”: a provision allowing companies to negotiate workplace issues with unionized employees without union involvement. The provision was anathema to organized labor, and Clinton had threatened to veto any bill that included it. So the day ended in legislative gridlock, with no votes taken.

A week later, Dole announced that, after thirty-five years in Congress, he was leaving the Senate to run full-time for the presidency. He and his staff later confirmed to reporters that his decision was driven by the grueling weeks of legislative trench warfare over the minimum wage, though he didn’t blame Democratic senators. “He told colleagues that he would have done the same thing to the Democrats if their positions were reversed,” wrote the New York Times of the consummate Washington insider.

In late May, the House approved its version of the minimum wage increase, complete with the billions in corporate tax goodies Gingrich had negotiated. The Senate then took up the House bill over the summer and, after tussles between Democrats and Dole’s successor as Senate leader, Trent Lott, passed its own version. The final conference measure passed both houses on August 2. Clinton signed the bill on August 20, a week before the start of the Democratic National Convention, in a ceremony on the South Lawn featuring a bipartisan group of lawmakers. Over the next few days he also put his pen to a bill allowing employees with preexisting medical conditions to change jobs without losing their health insurance, and to bitterly negotiated welfare reform legislation, which was made politically and programmatically stronger by the fact that those leaving the welfare rolls for the workplace would be earning modestly higher wages. Dole campaign manager Scott Reed called it “the worst week of the campaign.” On October 1, a month before the election Clinton would win handily, eleven million low-income working Americans got a raise.

What lessons can be extracted from that eighteen-year-old battle that might shed light on the minimum wage fight now going on in Washington? One lesson is that predictions of economic and social doom typically ginned up by foes of raising the minimum wage should be taken with great skepticism. Back in 1996, House Majority Whip Tom DeLay called Clinton’s proposed minimum wage hike “a job killer cloaked in kindness.” Representative John Shadegg warned that one in four young minority workers not in school would lose his job, and Senator Hank Brown predicted a juvenile crime wave “of epic proportions.” Instead, in the five years that followed the signing of the minimum wage bill, the U.S. economy added record numbers of jobs, incomes for the working poor rose, unemployment among African American young people plummeted, and juvenile crime rates continued to recede.

Another lesson is that minimum wage increases do not happen without leadership. As political scientist Jerold Waltman notes, while raising the minimum wage is typically popular, support tends to be shallow and mild (the exception being unions), whereas opposition from ideological conservatives and business interests tends to be deep and passionate. In such cases, defenders of the status quo have a built-in advantage. What the events of 1996 show is that it is possible to raise the minimum wage, but only when top political leaders, and especially the president, throw their weight behind it.

There are, of course, many differences between 1996 and 2014 that complicate the comparison. Then it was a presidential election year, now it’s a midterm. The economy in 1996 was more clearly on the mend than it is today (though the full force of the ’90s boom wasn’t yet obvious to most Americans). The Republican Congress Clinton faced was not as implacably hostile to compromise as the current one, which has far fewer GOP moderates and a larger base of hard-core conservatives. Clinton was also able to fit the minimum wage issue into his broader theory about economic growth and the value of work—that work needs to be both required (via welfare reform) and rewarded (via an expanded EITC and a higher minimum wage)—that resonated with the public. For all these reasons, Obama and congressional Democrats will have a tougher time using the minimum wage issue to divide and conquer the other party.

But the Democrats do have a few advantages going for them. One is that wage stagnation and rising inequality are palpably worse problems today than they were even at the bottom of the early-’90s recession. Another is that Obama has top members of his administration who were participants in the 1996 minimum wage battle and know the drill. White House senior counselor John Podesta was counselor to Minority Leader Tom Daschle in 1995 and 1996. Labor Secretary Thomas Perez was special counsel to Senator Ted Kennedy at the same time. Both men have lead roles in orchestrating the current political battle over the minimum wage. The fissures we’ve so far seen in the GOP over this issue, in other words, are no accident.

Bill Clinton, too, is playing his part. In February, the former president headlined a fund-raiser in Louisville, Kentucky, for Alison Lundergan Grimes, the Democratic Senate nominee hoping to unseat Senator Mitch McConnell. She and McConnell are currently tied in the polls. At the event Clinton said, according to the Associated Press, that McConnell’s opposition to the minimum wage is reason enough to support Grimes. If she wins in November it wouldn’t be the first time Clinton used the minimum wage to help drive a GOP Senate leader out of office.
