March/April 2013 | Washington Monthly https://washingtonmonthly.com/magazine/marchapril-2013/

Slaves of Defunct Economists https://washingtonmonthly.com/2013/03/02/slaves-of-defunct-economists/ Sat, 02 Mar 2013 17:35:36 +0000 Why politicians pursue austerity policies that never work.

The post Slaves of Defunct Economists appeared first on Washington Monthly.

On January 25, the British statistics office announced that the United Kingdom’s economy had shrunk by 0.3 percent in the last quarter of 2012. After enduring two recessions in the last four years, Britain is now well on its way into a third. The pain has been compounded by a succession of austerity budgets, in which Britain’s Conservative-led government has tried to hack away at spending. Repeated rounds of cuts have battered the British economy. However, Britain’s chief economic policymaker, Chancellor of the Exchequer George Osborne, wants still more pain. He is pushing the government to identify £10 billion more in cuts this year.

Austerity: The History of a
Dangerous Idea

by Mark Blyth
Oxford University Press, 304 pp.

This makes no economic sense. Olivier Blanchard, chief economist at the International Monetary Fund, has pleaded for Britain to start focusing on growth rather than fiscal virtue, claiming that “we’ve never been passionate about austerity.” It doesn’t make any political sense, either. Voters like vague proposals for “reducing government waste” in the abstract, but hate cuts to programs that they care about. Why do so many members of the political elite disagree with Blanchard in their visceral passion for austerity? Why do they keep on pushing for pain when it threatens economic ruin and hurts their election chances?

Mark Blyth’s new book, Austerity: The History of a Dangerous Idea, gives us some important clues. Many books have been published in the last few years explaining why some economic ideas (the efficient markets hypothesis; the Black-Scholes option pricing model) are dangerous. Blyth, a professor of international political economy at Brown University (and a friend of mine), explains why a blind fixation on austerity is one of these terrible ideas. However, his book does two additional things that other books in this genre do not. First, it asks why bad economic ideas, like austerity, have such powerful consequences. Economists themselves do not think that ideas are powerful, and their models usually assume that people are motivated by straightforward self-interest rather than complicated notions. Second, it asks why these ideas keep on coming back. Every time governments have experimented with austerity, it has led to disaster, and yet a couple of decades later, their successors try again, with equally dismal consequences.

Blyth cares about bad ideas because they have profound consequences. We do not live in the tidy, ordered universe depicted by economists’ models. Instead, our world is crazy and chaotic. We try to control this world by imposing our economic ideas on it, and sometimes can indeed create self-fulfilling prophecies that work for a while. For a couple of decades, it looked as though markets really were efficient, in the way that economists claimed they were. As long as everyone believed in the underlying idea of efficient markets, and believed that everyone else believed in this idea too, they could sustain the fiction, and ignore inconvenient anomalies. However, sooner or later (and more likely sooner than later), these anomalies explode, generating chaos until a new set of ideas emerges, creating another short-lived island of stability.

This means that ideas are fundamentally important. The world does not come with an instruction sheet, but ideas can make it seem as if it does. They tell you which things to care about, and which to ignore; which policies to implement, and which to ridicule. This was true before the economic crisis. Everyone from the center left to the center right believed that weakly regulated markets worked as advertised, right up to the moment when they didn’t. It is equally true in the aftermath, as boosters of neoliberalism have moved with remarkable alacrity from one set of bad ideas to another.

After the initial shock wore off, American neoliberals interpreted the economic crisis as a morality tale about the need to reduce government debt by ending entitlements and hacking away at out-of-control government spending. Their European counterparts used Greece’s travails to tell another morality tale—one about the dire consequences of dishonesty and political corruption. As Blyth argues, by interpreting the problem as one of government failures, they sedulously overlooked the bad behavior of the private sector and made taxpayers liable for banks’ morally hazardous behavior.

These twin mythologies of austerity reinforced each other. American Republicans took up dubious academic claims and turned them into a variety of economic know-nothingism. During the presidential debates, Mitt Romney warned that Barack Obama was turning America into a second Greece. European politicians, for their part, took comfort from the arguments of U.S.-based economists and commentators such as Kenneth Rogoff, who argued against economic stimulus.

The consequences for American politics were bad enough: without austerianism, the Obama administration might have gotten a proper second stimulus through. In Europe, however, the impact of austerity has been crippling. Some countries have had austerity imposed upon them, by European Union officials who were apparently convinced that austerity would increase business confidence and help these countries pull themselves by their own bootstraps out of the quagmire. Others, like the United Kingdom, have embraced it voluntarily. None of them have had a happy experience. The countries held up as examples of the benefits of austerity, such as Ireland and Latvia, have in fact suffered brutally. For sure, things would have been even worse if rich countries such as Germany had not helped. However, the conditions accompanying this assistance have led to an explosion of anger and resentment. Austerity—especially austerity imposed at the demand of foreigners—is not a vote winner.

Given all of this, why did politicians ever think that austerity was a good idea in the first place? They should have known that it hadn’t worked in the past. Every few decades, politicians implement austerity programs in response to some economic shock, and every time, it is a disaster—from the gold standard crunches of the nineteenth century to the idiotic response of German Social Democrats to the Great Depression. And then, after a couple of decades, politicians begin to forget how bad it was. Blyth doesn’t have a complete explanation for this peculiar form of recurring amnesia. He does, however, have the beginnings of an intellectual history.

Blyth argues that austerity had its beginnings in the inability of classical liberal theorists like David Hume, Adam Smith, and John Locke to think straight about the state’s role in the economy. While their intellectual heirs recognized that economic crises happened, they thought of them as an inevitable hangover from previous economic exuberance. All that the state could do was balance the budget, and perhaps even raise taxes, to restore economic confidence. Under this theory, austerity was something like the apocryphal vomitorium at Roman feasts, allowing the economy to purge itself between successive bouts of overindulgence.

These arguments acquired ever fancier mathematical trappings. Economists came up with toy models under which austerity could actually expand the economy by restoring business confidence. And this general wisdom seeped down into politics. In 2009, Alberto Alesina and Silvia Ardagna wrote a paper arguing that austerity was a signal that politicians sent to entrepreneurs, guaranteeing that tax increases would not happen in the future so that they would have the confidence to invest in the present. When the crisis hit, they were invited to deliver a version of this paper to the gathered economics and finance ministers of Europe. Many of these ministers likely now regret having listened to its recommendations, but the damage is done.

Blyth’s book is not perfect. It veers between entertaining polemic and detailed analysis, and sometimes overstates its case. To take one example, states like Ireland were more profligate than Blyth suggests. As the political scientists Niamh Hardiman and Sebastian Dellepiane have shown, they maintained budget surpluses only through unsustainable tax policies, which assumed that the property bubble would keep on expanding forever. Blyth’s indictment of banks and private finance sometimes lets government off a little too lightly. And the book isn’t as well organized as it might be; Blyth tries to stuff 400 pages of analysis into 200 pages of prose, leading to some unseemly bulges.

Nonetheless, it’s essential reading. John Maynard Keynes famously argued that politicians are the unwitting slaves of the ideas of defunct economists. Blyth’s book is a practical application of Keynes’s dictum, asking what those ideas are, why they are so important, and where they came from in the first place. If Blyth is right, we are only going to get out of the mess we’re in by developing new ideas that work better than austerity and can shape a new economic order, at least for a while. He doesn’t know any better than I do where those ideas will come from, but at least he has some understanding of why and how they are important. The economy is much too important to leave to economists. We need to understand how ideas shape it, and Blyth’s new book provides an excellent starting point.


If you are interested in purchasing this book, we have included a link for your convenience: Buy from Amazon.com.

Charity Case https://washingtonmonthly.com/2013/03/02/charity-case/ Sat, 02 Mar 2013 16:40:05 +0000 How taxpayers subsidize failing philanthropies.

The post Charity Case appeared first on Washington Monthly.

In 2006 the oil tycoon T. Boone Pickens gave $165 million to his alma mater, Oklahoma State University, to build a new football stadium, marking one of the largest charitable gifts on record to an American university and the most gargantuan to an athletics program. Pickens directed that the money be spent on a new west end for the Boone Pickens Stadium, for fields and practice facilities, and for a residential village and dining buffets for OSU athletes.

With Charity for All: Why
Charities Are Failing
and a Better Way to Give

by Ken Stern
Doubleday, 272 pp.

Less than an hour after Pickens’s donation landed in the bank account of Cowboy Golf, one of the athletic department’s fund-raising arms, the money was rerouted to BP Capital Management, Pickens’s hedge fund, where it was soon matched by an additional $37 million in unrelated donations that the university had agreed to invest as a condition of Pickens’s largess. With the funds now illiquid, OSU took on debt to construct the stadium—a strategy, notes author Ken Stern in With Charity for All, that is not unusual for a nonprofit organization, which can “leverage” its tax-exempt status to borrow at lower rates and invest existing assets in matching but higher-yield, taxable securities. The resulting arbitrage, in theory, produces a lower-risk cash flow for the organization, particularly when the matching funds are allocated in a relatively conservative way. OSU’s investment with Pickens, in an oil and gas fund betting on changes in commodity prices, initially spiked in value to $300 million. Then, in the 2008 crash, BP Capital Management lost more than $1 billion—and with it most of the OSU donations, Pickens’s and otherwise.
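The “leverage” strategy Stern describes is, at bottom, a rate-spread calculation: borrow cheaply against tax-exempt status while existing assets earn a higher, taxable-market yield. A minimal numeric sketch in Python, where only the $165 million figure comes from the article and both interest rates are hypothetical assumptions for illustration:

```python
def annual_arbitrage_gain(principal, borrow_rate, invest_rate):
    """Net annual cash flow from borrowing `principal` at borrow_rate
    while the same amount stays invested at invest_rate."""
    return principal * (invest_rate - borrow_rate)

principal = 165_000_000   # size of the invested gift (figure from the article)
borrow_rate = 0.04        # hypothetical tax-exempt borrowing cost
invest_rate = 0.07        # hypothetical expected fund return

gain = annual_arbitrage_gain(principal, borrow_rate, invest_rate)
print(f"Expected annual spread: ${gain:,.0f}")
```

The sketch also shows why the strategy is “lower-risk” only when the matching funds are invested conservatively: the moment `invest_rate` falls below `borrow_rate` — as it did catastrophically for OSU in 2008 — the spread turns negative while the debt service remains.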

Stern’s account is an indictment neither of college athletics nor of university resource management, though the Pickens affair raises important questions about both—particularly in the wake of this recent recession, which has nearly eviscerated state and private financing for education programs in many institutions of higher learning across the country. Nor does Stern single out Pickens, a man who, as Stern puts it, “never did things in a small way,” whether as a ruthless corporate raider, the sponsor of the Swift Boat attack ads that helped derail John Kerry’s presidential bid, or the godfather of OSU sports. After all, Stern concedes, “it was Pickens’s money and he had the right to dispense with it as he pleased,” as did the philanthropists John D. Rockefeller and Andrew Carnegie a century before, who distributed their ruthlessly acquired wealth in the form of libraries, museums, and major research centers. The difference today is that the charitable sector is shaped and supported by a wealth of public subsidies: substantial tax breaks in the form of deductions for donors, and waivers on income and property taxes (among others) for charities themselves, which render acts of private giving anything but. “In the end,” Stern reminds us, “the public does pick up the tab for a significant portion of such gifts, and it is entitled to question whether they are a wise use of public resources.” His insistence on this fundamental question about the purpose of American charity is the great and original strength of this book.

Stern is an engaging storyteller, and his catalog of venality and graft in the charitable sector borders on farce—and would be, were the theft not approximately $40 billion each year that donors and taxpayers had intended for public purpose. We hear about the double lives of priests who minister to the poor by day and spend millions on trips to Las Vegas and escort services in their off hours. Politicians who lard local social service organizations with government funds to employ their family and friends, who, in turn, get out the vote on election day. Executive directors of charities of all sizes who regularly raid the till for personal perks, and scam artists who prey on public sympathy with words like “hurricane relief,” “veterans,” and “cancer” to raise money for nonexistent organizations and services. The remarkably low barriers to entry for new nonprofits (inexpensive filing fees, almost certain approval of charitable status), along with minimal government oversight and virtually nonexistent “market” mechanisms to shutter fraudulent or incompetent organizations, have led to unchecked growth in the number of nonprofits in the United States—from just 12,000 in 1940 to two million today.

Part of the problem, Stern contends, is the lack of definitional clarity about just what constitutes a publicly subsidized charity. While we cling to historical, legal, and cultural expectations that nonprofits must be engaged in “purposes beneficial to the community” that serve some kind of “public function,” we regularly encounter organizations that may not fit the bill. For example, Stern cites a roller derby league in Bend, Oregon, the All Colorado Beer Festival, and the Grand Canyon Sisters of Perpetual Indulgence, a drag queen sorority. Surely we can all name tax-exempt nonprofit organizations of questionable “public function” within our own communities. Stern’s critique is hardly one born of cultural elitism, as he questions whether the Metropolitan Opera, priced well beyond the reach of most New York City residents, serves a community benefit sufficient to warrant forgone tax revenues.

Even more complicated are highly profitable charities. Here, too, some seem to fall squarely outside of the “public function” domain, such as the various football bowls that earn hundreds of millions of dollars each year and pay executives in salaries and golf club memberships on par with their corporate peers. Other cases are thornier: what happens, Stern asks, when nonprofit hospitals become “virtually indistinguishable from for-profit institutions” in terms of lavish facilities and compensation, high-end procedures, aggressive bill collection, and, in some cases, very little true charitable service? Should these institutions and their donors still receive significant public subsidy?

Stern argues that, more than malfeasance or subsidized profits that do not return commensurate social benefit, the greatest affliction of the charitable sector is “the absence of market mechanisms that reward good work and punish failure.” This theory of inefficient social capital markets isn’t new; it has been brewing, particularly in the “new” or venture philanthropy corners of the nonprofit world, for more than a decade. The most business-jargoned dialect of the theory tends to be spoken by philanthropists or executives who come to the charitable sector from the commercial realm. Pierre Omidyar, eBay founder turned über philanthropist, has described his journey thus: “Every business person who first engages in the nonprofit sector goes through a lot of growing pains, disappointments. It is a very different kind of sector, a different cultural environment.” Stern, by his own account, seems to have experienced something similar in becoming COO, and then CEO, of National Public Radio, as evidenced by his faith in “market discipline, a new culture of effectiveness, and efficient, results-oriented service.” Although there are many strands of inefficient social capital market theory, at its heart it is about measurement and evaluation (M&E) and demonstrating “social impact”—what works and what doesn’t—in the delivery of social services. Or, as Bill Gates announced on the cover of the Wall Street Journal in January: “My Plan to Fix the World’s Biggest Problems: Measure Them!”

Stern, like Gates and others, describes some of the skewed incentives in the nonprofit sector that discourage rigorous M&E: donors often give out of personal, emotional, or “warm glow” motivations; they do not rely on performance metrics and in fact often chastise grantees that spend too much on “overhead,” the kind of organizational infrastructure necessary to measure (and often achieve) results.

Despite his pessimism, Stern points to bright spots in the charitable sector. The common characteristic of the organizations he extols—New Profit, Inc., a venture philanthropy “rooted in the idea that business innovation could propel the charitable sector forward,” Youth Village, an organization that works with emotionally and behaviorally challenged children and their families, and the Nurse Family Partnership, focused on home visitation for maternal and early child health services—is a devotion to rigorous M&E and proof of social impact. This is not the same, it is worth noting, as a business model or orientation. In fact, the randomized controlled trial approach to evaluation, a kind of gold standard of evidence for many in the field of social services, and one that Youth Village and the NFP, remarkably, achieve, draws not from the for-profit sector, but from the fields of science and medicine.

This may seem semantic, but it is an important distinction in parts of the nonprofit world, where, despite Stern’s remonstrations to the contrary, a “business knows best” bias has come to prevail. Jim Collins, author of Good to Great and well known for his insights into high-performing companies, asserts, “We must reject the idea—well-intentioned but dead wrong—that the primary path to greatness in the social sectors is to become more like a business.” Collins instead argues that the “culture of discipline” required for high-performing organizations across sectors “is not a principle of business; it is a principle of greatness.” Or, as Phil Buchanan, the president of the Center for Effective Philanthropy, recently wrote in the Harvard Business Review, “No type of organization—government, business, or nonprofit—has a monopoly on effectiveness. And nonprofits are typically tackling the most complex problems of all. If those problems could have easily been solved by government or business, they wouldn’t exist at all.”

Stern laments that organizations like New Profit, Youth Village, and the NFP are too few and far between. “For all its prominence,” he writes, “the social entrepreneurship movement” that these organizations represent “has hardly put a dent in the American charitable world.” I would posit otherwise. Omidyar distinguishes between “charity” (relief for immediate suffering) and “philanthropy” (from the Greek for “love of humanity,” aimed at solving the root cause of problems). Arguably, the philanthropy industry has been profoundly influenced by the social entrepreneurship movement; nearly all professionally run foundations report that they are engaged in some kind of formal evaluation of the impact of their grants. The fact that foundation funding represents only approximately 15 percent of total giving in dollar terms (roughly 75 percent of giving comes from individuals) understates the outsized role of philanthropists like Gates and Omidyar, organizations like New Profit, and thoughtful books like With Charity for All, in shaping practices in the field. Perhaps the most interesting and underreported manifestation of the social entrepreneurship movement’s sway is in the public sector, where, as Stern describes, there is a new penchant for evidence-based grant making and use of tools like innovation funds that allow government agencies to work with intermediaries like New Profit to channel public dollars to high-performing nonprofits. These kinds of partnerships indeed offer a better way to give, and a better use of all resources, public and private, to advance the common good.


Consequential Drift https://washingtonmonthly.com/2013/03/02/consequential-drift/ Sat, 02 Mar 2013 16:25:52 +0000 The government program where party differences have widened the most, and matter the most, is Medicaid.

The post Consequential Drift appeared first on Washington Monthly.

On June 28, 2012, when the U.S. Supreme Court announced its decision in the case of National Federation of Independent Business v. Sebelius, the happiness of progressives over the upholding of the Affordable Care Act’s individual health insurance purchasing mandate was accompanied by a rare moment of dismay over the Court’s treatment of provisions affecting Medicaid. Long called “Medicare’s poor second cousin,” the principal federal-state program supplying health care for the poor and disabled (and long-term care for the improvident elderly) had been transformed by the ACA, from being a stopgap and sloppily designed entitlement destined to be ultimately supplanted by a universal health care initiative into a key element of health care reform.

Medicaid Politics:
Federalism, Policy Durability,
and Health Reform

by Frank J. Thompson
Georgetown University Press, 288 pp.

With the Court’s ruling that the ACA’s expansion of Medicaid would be optional rather than mandatory for the states, the program was thrust into the spotlight of national politics, where it remains today as Republican-controlled states seek to thwart Obamacare by rejecting the Medicaid expansion. There were even brief moments when Medicaid served as an issue during the 2012 presidential campaign, albeit a minor one when compared with the constant references to each party’s thrusts and parries over Medicare, considered the most politically crucial existing health care program.

For those with a special concern over the fate of low-income Americans, and for the states in whose budgets Medicaid is typically the weightiest item, the emergence of Medicaid as what the vice president would call a “BFD” was long overdue. But for everyone else, a crash remedial course in the ever-evolving history, arcane structure, and multi-dimensional politics of the program has become essential. It is largely supplied by Medicaid Politics: Federalism, Policy Durability, and Health Reform, a new book by Rutgers professor Frank J. Thompson.

Thompson’s book focuses on the period from the beginning of the Clinton administration through most of Barack Obama’s first term. Its sustaining concern is explaining how a program with so many flaws and anomalies—widely varying levels of funding and commitment among the states, an early unsavory reputation as second-class “welfare medicine,” disparate and often competing constituencies, and an occasionally receding but eventually powerful hostility from the Republican Party—has come to cover sixty-seven million Americans and potentially fifteen to twenty-two million more.

Medicaid has never really been the “health care for the welfare population” it is so often presumed to be. As Thompson notes, prior to the ACA 15 percent of Medicaid beneficiaries suffered from disabilities, and another 10 percent were over sixty-five. Half were children, and thus only a quarter were able-bodied adults. From the perspective of Medicaid expenditures, seniors and the disabled account for about two-thirds, and other adults and children only a third.

But precisely because Medicaid has served as the ever-changeable, ever-expandable vehicle for health care initiatives at the federal and state levels, its political identity has been hard to pin down. Since it originally reflected the Republican alternative to universal health coverage—a state-administered means-tested program for the very poor—it enjoyed bipartisan support until the Reagan administration, when the first of four major attempts to limit federal responsibility for it fell short in Congress. In the late 1980s and early ’90s, Medicaid became the object of liberal efforts to expand coverage (particularly for children in families with incomes a bit higher than the original Medicaid population) and also to provide alternatives to nursing home care for the very elderly and the disabled. By the ’90s (and up through the George W. Bush administration) first Democrats and then Republicans expanded the executive branch’s waiver authority significantly to support a series of state variations from the program’s original design, including use of managed care, expansion of coverage into the ranks of working families with limited income, and, eventually, conservative privatization schemes for Medicaid service delivery.

A major departure point for Medicaid occurred with the enactment of welfare reform legislation in 1996, which effectively de-linked income support and health care services for the poor. Less dramatically but still significantly, the creation of the Children’s Health Insurance Program, or CHIP—part of Medicaid in some states, a free-standing but parallel program in others—made Medicaid-style services more of a middle-class phenomenon, reinforcing the middle-class stake in the program created by its original and continuing long-term-care component.

Thompson treats all these phenomena as further complicating an already complex program, but also as increasing its “durability” as a politically broad-based federal-state service.

Less visible but major contributors to partisan differences over Medicaid have been Democratic efforts to increase the “take-up rate” of Medicaid and CHIP—the share of eligible people who actually participate in the program—and Republican efforts to reduce expenditures via heavy-handed procedures designed to deter “waste, fraud and abuse.”

The major threats to the entitlement status of Medicaid have been a series of three Republican efforts to turn it into a “block grant” with capped funding and extensive state control over eligibility and services. The first effort was probably the most serious, made by a Republican Congress in 1995 in the budget showdown with Bill Clinton that produced two government shutdowns before Congress retreated. Thompson analyzes this and subsequent “block grant” battles in great detail, noting that in each case the typical unity of governors over Medicaid policy succumbed to national partisan pressures. The ’95 fight had another political by-product Thompson does not mention: the elevation of Medicaid into a big Democratic Party priority, as evidenced by the 1996 Clinton-Gore ticket’s mantra of balancing the budget while protecting what became known as “M2E2”: Medicare, Medicaid, Education, and the Environment.

The third Medicaid block grant fight is theoretically still under way: it was a feature of the “Ryan budget” passed twice by House Republicans in 2011 and 2012. Indeed, the relative treatment of Medicaid and Medicare in the Ryan budget showed that perceptions of the two programs’ political salience remained unchanged: even as analysts showed that the Medicaid block grant would reduce real funding for the program by about a third over ten years, the budget’s treatment of Medicare (with the biggest cuts back-loaded to the distant future and eliminated entirely for current and soon-to-be beneficiaries) got vastly more attention.

But the extraordinarily varying treatment of Medicaid in the Ryan budget and in the Affordable Care Act dramatized the vast differences between the two parties over the program that have only fully emerged during the Obama administration. Another area of big partisan differences has only begun to become apparent, as conservative “entitlement reform” enthusiasts view Medicaid’s long-term care component as a potential fiscal nightmare and as the final factor that will convert Medicaid into an unassailable “middle-class entitlement” like Medicare.

And that brings us back to the Medicaid-expansion battle.

The conservative vision of a market-based system of limited subsidies for the purchase of private health insurance (and, to some extent, “personal responsibility” for buying health services from providers) that increasingly dominates Republican health policy thinking depends on Medicaid as a low-priority, high-population experiment station. As noted above, Medicaid is now central to the progressive vision of a universal health care system. This provides Republicans at the federal and state levels with a dual motive for sabotaging the Medicaid expansion, even if that means that federally run health care exchanges must pick up the slack.

In other words, two very different and largely incompatible points of view are colliding in the politics of Medicaid at the moment.

This reality is discussed by Thompson near the conclusion of his book, where his general optimism over the “durability” of Medicaid from 1993 to 2010 begins to fade. After discussing the adoption of an openly hostile position toward Medicaid by Republican members of Congress via their votes for the Ryan budget, Thompson expresses hope that Republican governors will resist the pressure to go along: “[I]f they sustain some sense of the pragmatic, incremental approach to the program that they have frequently exhibited in the past, they may well temper Medicaid cuts.”

He wrote those words before the Supreme Court decision gave the states the opportunity to opt out of the ACA’s exceptionally generous Medicaid expansion provisions. Since then, only seven of the thirty Republican governors have indicated that they will follow all but one Democratic governor in going along with the Medicaid expansion.

As occurred in the mid-’90s, the loss of bipartisan support for Medicaid may have at the same time made the program more attractive to Democrats. They did not campaign for protecting “M2E2” in 2012, but they did insist on protecting Medicaid from appropriations “sequestrations” in the 2011 deficit reduction deal that’s hanging fire in Washington in early 2013. In an era of asymmetrical polarization, that may be the best fate available for any element of the New Deal/Great Society legacy.



Bar Examined https://washingtonmonthly.com/2013/03/02/bar-examined/ Sat, 02 Mar 2013 16:21:26 +0000 https://washingtonmonthly.com/?p=18967 The ever-diminishing advantages of a career in the law versus the undiminished enthusiasm of law schools to mint new attorneys.

Steven J. Harper has been blessed with notably good timing.

Born smack in the middle of the Baby Boom, the Minneapolis native excelled in school, collected bachelor’s and master’s degrees in economics from Northwestern University, then headed straight to Harvard Law School, where he was a classmate of future U.S. Supreme Court Chief Justice John Roberts.

Neither of Harper’s parents—a truck driver dad and a stay-at-home mom—got past high school. A working-class kid, Harper financed his future with student loans. By the time he graduated Harvard Law, magna cum laude, in 1979, the total debt he had incurred for his three degrees from Northwestern and Harvard came to about $16,000.

The Lawyer Bubble: A Profession in Crisis
by Steven J. Harper
Basic Books, 272 pp.

It paid off. Harper went straight from Harvard Law to Kirkland & Ellis in Chicago, where he had been a summer associate. His starting salary was $25,000. He flourished, and was mentored and trained as a litigator by sage elder partners who invested their time and energy in the new blood brought into the old-even-then firm. This was the start of the Reagan era, and the U.S. economy began to boom after a long malaise. The future for the bright young corporate litigator was sweet.

“I led what anyone would call a charmed life in the law,” Harper writes in his new book, The Lawyer Bubble: A Profession in Crisis. “Then, as now, most people assumed that the legal profession offered financial security and a way to climb out of the lower or middle class. Career satisfaction, upward mobility, social status, financial security—who could ask for more?”

Harper spent his entire career at Kirkland & Ellis, made equity partner by the time he was thirty-four, and did so well financially that he was able to retire from the practice of law in 2010, when he was fifty-three years old. He now writes books. His latest seeks to warn bright young sons and daughters of midwestern truck drivers that they’d best not try to climb the ladder that served him so well.

Some of the rungs are broken, others greased and impossibly slippery, and that ladder doesn’t stretch to any place you would want to be, really. The era of law being the safe and well-compensated “traditional default option for students with no idea what to do with their lives,” Harper notes, is over, even if the tens of thousands who still flood into law school each year stubbornly believe that these macro forces will somehow not apply to them.

New JDs coming out of private law schools carry an average debt load of $125,000, the American Bar Association reported in March 2012. Student loan debt cannot be discharged in bankruptcy, and the supposed relief now offered by federal loan balances being forgiven after twenty-five years is hardly a cheerful prospect.

Though the Bureau of Labor Statistics expects 73,600 new lawyer jobs to be created in the U.S. in the current decade, American law schools graduate about 44,000 new JDs each year. That’s roughly 440,000 new graduates over the decade chasing 73,600 jobs: about six new lawyers for each new job.

And a good number of those jobs are hardly the stuff of which dreams—or early retirements in favor of more creative pursuits—are made. They are low-end document-review gigs, at $20 or $25 an hour. New lawyers with loan payments to make cannot be too choosy, notes University of Colorado School of Law professor Paul Campos.

Legal-outsourcing firms “hire these desperate law school grads, who sit in horrible little basement rooms. They are performing mindless work in Dickensian conditions, stuck in there,” Campos says. “They will never be able to get regular legal work. This is the only thing they can get to pay the bills. Doc reviewers are the detritus of a collapsing system.” Campos likens doc review to “doing porn if you are an actor.”

Predictably, salaries even at traditional law firms have been falling. New associates hired by a big firm—as Harper was hired by Kirkland & Ellis—saw their median salaries fall by more than a third, to $85,000, between 2009 and 2011. The social mobility Harper enjoyed three or four decades ago thanks to his smarts, his hard work, an affordable top-notch education, and a job market that valued his new degree has vanished.

Harper chronicles the disruption of his once-genteel profession with considerable sadness, and places the blame squarely at the wing-tipped feet of two breeds of scoundrel: law school deans, and executive committees that have run big law firms (sometimes into the ground) for the benefit of a handful of big-name partners and at the expense of any old-fashioned notion of partnership and institutional longevity.

The bulk of Harper’s book dissects the structural and cultural transformation of Big Law, and goes on (for many, many pages) about factors that caused such firms as Finley, Kumble and Dewey & LeBoeuf to fold. He laments that the concentration of power in the hands of a few executive partners atop global behemoths means that relatively few of the thousands of junior lawyers will ever be granted a full equity-partner role. With no job security or likely financial payoff for years of oftentimes mind-numbing junior work, Harper notes, a good number of young and mid-career lawyers devolve into alcoholism, depression, and even suicide.

Some of Harper’s most pointed criticism is aimed at law schools, “operating on the outer perimeter of candor to fill their classrooms,” funded by “free-flowing student loan money for which law school deans never have to account.” (In a stunning bit of research, Am Law Daily blogger Matt Leichter estimated recently that the federal government will loan law students $53 billion over this decade—a huge public investment to educate an army of new lawyers who are then rejected by the job market.)

Law school deans and professors occupy an odd moral ground. They are educators, dedicating their careers to teaching and the concept of justice. They are also well-paid cogs in an industry that is increasingly accused, by Harper and others, of essentially perpetuating a marketing fraud that destroys the financial lives and careers of half its students.

I got a glimpse of law schools’ reality-distortion field last fall, when I interviewed Erwin Chemerinsky, the founding dean of the new law school at the University of California, Irvine, a public university. Including living expenses (which are often borrowed along with tuition), a year in UC Irvine’s program costs nonresident students $77,000—the second-most expensive legal education in the country. (In-state students pay $71,000.)

Why, I asked Chemerinsky for a Washington Post Magazine story, did he design a new program that was so notably expensive? First, he denied that UC Irvine was more expensive than the University of Southern California or Stanford, which is not true. Then he said that UC Irvine had to charge so much because the school receives no public subsidies or taxpayer support: “If we are not going to be subsidized by the state,” Chemerinsky said, in an oddly high-pitched, singsong voice, as if he were explaining the simplest concept to a child, “and we are going to be a top-quality law school, there is not an alternative in terms of what it is going to cost.”

Chemerinsky’s no-subsidy explanation for why UC Irvine’s program is so expensive is not true either, it turns out. Last fall, the law school and the university refused to answer questions about how much, if anything, UC or any other public entity had spent to get the law school up and running. Repeated Public Records Act requests and months later, long after the Post story was published, I got my answer: as of June of last year, the state had “subsidized” Chemerinsky’s school to the tune of $75 million, and will do so at about $25 million a year going forward.

When I asked Campos last fall about my conversation with Chemerinsky, he was unsurprised. There is no crisis yet in the business of running a law school—there are plenty of students willing to fill those seats, and the federal government will loan them whatever their law school costs. “Here in legal academia,” Campos said, it is “like the French aristocracy in 1785. They have no idea what is going on.” While noting that he admires Chemerinsky as a constitutional scholar, Campos says that it is “impossible” for Chemerinsky to understand or assume any culpability for playing a role in what Campos regards as a crisis. Citing Upton Sinclair, Campos says that “it is difficult to get a man to understand something if his salary depends on his not understanding.”

So with law schools still marketing away to prospective students, insiders like Harper (who teaches as an adjunct at Northwestern University’s law school) and Campos are serving a need as they seek to warn those thinking about a career in law.

In fact, here again Harper finds himself a beneficiary of timing. A whole genre of disillusioned-lawyer/law student/law professor blogs has sprouted up, known generally as “scamblogs.” Campos started a (briefly anonymous) one called “Inside the Law School Scam” in 2011, and in October came out with his new book, Don’t Go to Law School (Unless): A Law Professor’s Inside Guide to Maximizing Opportunity and Minimizing Risk. And Brian Z. Tamanaha, a professor at Washington University School of Law, made his own big splash with Failing Law Schools last summer.

There will certainly continue to be law schools, and lawyers, and plenty of legal work to be done. Harper and his brethren simply seek to counsel bright young aspiring lawyers to think long and hard before booking impossibly expensive passage on a creaky ship headed toward one nasty-looking reef.



Chávez’s Magical Realism https://washingtonmonthly.com/2013/03/02/chvezs-magical-realism/ Sat, 02 Mar 2013 16:17:47 +0000 https://washingtonmonthly.com/?p=18968 How the Comandante may get the last laugh, even from the grave.

For months, the enemies of Hugo Chávez have hardly even tried to conceal their delight at his apparent end. With the Venezuelan president lying uncharacteristically silent on what may be his deathbed, the goal that has obsessed but eluded them for fifteen years seems within reach—thanks, ultimately, not to elite machinations or imperial meddling, but to simple mortality. The long chavista nightmare, as they see it, is finally over.

Comandante: Hugo Chávez’s Venezuela
by Rory Carroll
Penguin Press HC, 320 pp.

But if Chávez’s past teaches us anything, it is that opponents should think twice before celebrating his defeat. His greatest successes have always come just as his antagonists are preparing to declare victory. In the early 1990s, when he was an obscure lieutenant colonel, his disastrously inept attempt to overthrow the government won him national fame and launched his political career. A decade later, a short-lived coup cheered on by the Bush administration secured his place as the world’s premier anti-imperial hero and revolutionary standard-bearer. When the Venezuelan opposition was on the verge of deposing him in a referendum, oil prices shot up, and he poured the huge profits into popular social programs and cemented his control. If the pattern holds, Chávez will somehow get the last laugh after all, even from the grave.

Chávez has attributed these repeated resurrections to divine favor and good luck. The real explanation may have more to do with sheer political skill—with his “cunning, foresight, and subtlety,” as Rory Carroll puts it in Comandante: Hugo Chávez’s Venezuela, his sharply observed portrait of Chávez in power. Carroll, a correspondent for the Guardian, watches Chávez at work—powerfully and genuinely human in moments, clownishly charismatic in others, diabolically manipulative when necessary—and admits that “the effect [is] mesmerizing.” Yet for Carroll, this appreciation only heightens the tragedy of what Chávez’s “Bolivarian revolution” has wrought: “Here was a sublimely gifted politician with empathy for the poor and the power of Croesus—and the result, fiasco.”

In a decade and a half as Venezuela’s president (and almost as long as a global figure, notorious or valiant according to taste), Chávez has attracted plenty of chroniclers. Most of their chronicles, however, have been warped by adulation or animosity. Defenders tend to dismiss criticism as the griping of a vanquished predatory elite; detractors let their hostility blind them to Chávez’s undeniable appeal. Carroll’s virtue as a chronicler is that he is often charmed but never seduced.

Gabriel García Márquez once called Chávez “a natural storyteller,” with “a touch of the supernatural,” a master of magical realism in his own right. Carroll agrees, watching, fascinated, as the Comandante spins his myths. Despite what must have been strenuous effort and repeated requests, Carroll never gets close to the man himself. Instead, he experiences Chávez the way most Venezuelans do: on TV. “He was on television,” Carroll writes, “almost every day for hours at a time, invariably live, with no script or teleprompter, mulling, musing, deciding, ordering.” Even with their preposterous frequency and length, these performances are, to Carroll, rarely dull: “Sublime, unexpected moments lit up the screen and showed why the Comandante remained popular even after a decade in power.”

Sometimes, it is a moment like one televised encounter with a barely literate old woman, whose voice quavers as she shows off the communal garden she cultivates (with help from Cuban advisers sent by Fidel Castro to support “twenty-first-century socialism”). As a smiling Chávez reassures her, it is easy to scoff at the populist pandering, or to point out the illogic of directing resources into unproductive agricultural communes. And yet: “Laura seemed fit to explode with happiness…. A long, humble life of scratching subsistence from baked earth, a life anonymous like those of her ancestors, had just been sprinkled with magic.” Other times it is Chávez’s earthy monologues, which repulse elite society as much as they endear him to the street. He discusses bowel problems on camera (“I was sweating so bad”). On Valentine’s Day he says his wife “is gonna get hers,” and on another occasion he explains that he would never sleep with Condoleezza Rice, not even for the sake of his beloved homeland. He gleefully labels adversaries squealing pigs, rancid oligarchs, bandits, vampires, and perverts—all of them escualidos, squalid ones, his catchall term for hated elites.

Entrancing, astute, comical as these performances may be, Chávez’s populist touch would hardly be enough if he didn’t also have the enviable luck of presiding over the country with the world’s largest oil reserves at a time of skyrocketing energy prices. Political scientists Javier Corrales and Michael Penfold have documented just how closely Chávez’s popularity has tracked with growth in oil-fueled public spending. Still, as Carroll points out, many of Chávez’s predecessors in the Venezuelan political elite have had similarly enviable luck and haven’t always used it to the same advantage. “When he accused [them] of looting the nation’s oil wealth,” Carroll writes, “he was essentially correct.” Chávez hasn’t shown the foresight to direct the windfall toward building a diversified modern economy—oil now accounts for 96 percent of Venezuela’s exports, up from an already-dismaying 80 percent—but the patronage of the petro-state has at least helped allay extreme poverty. The revolution’s social “missions,” which provide, among other things, health care and subsidized food to slum dwellers, are popular enough that Chávez’s savvier opponents have promised to maintain and even expand them.

Yet one of the many dark ironies of twenty-first-century socialism is that while extreme poverty has fallen, inequality—Venezuela’s longtime national scourge, and a large part of the explanation for why a Chávez could arise in the first place—has persisted, in sometimes surreal forms. Under twenty-first-century socialism, a new class of Bolivarian oligarchs—”Boligarchs”—has traded on revolutionary credentials to supplant the escualidos exiled to Miami. They have given Venezuela not only one of the highest per capita rates of whiskey consumption in the world, but also, as cocaine imports have increased more than 400 percent, newfound distinction as a top transport route for South American smugglers. Although Chávez has a reputation for personal scrupulousness, rampant corruption among those around him seems to suit him just fine. “He gets intelligence reports detailing who is stealing what,” one fallen sycophant tells Carroll. “That way if someone steps out of line, bang, he has them.”

The toll of this “parasitic ecosystem” on the rest of Venezuela has become harder and harder to ignore, for everyone except perhaps Chávez himself. In reporting around the country, Carroll documents this toll in grim detail. Infrastructure is crumbling, nationalized companies are imploding, inflation is surging, and the state oil company has become “a bloated hydra so overloaded with social and political tasks it neglect[s] its core business of drilling and refining.” The capricious and curiously haphazard tyranny of the revolutionary state, blessed by a twice-rewritten constitution, has gutted democratic institutions and protections. “Venezuela’s social contract,” Carroll writes, has “shredded under Chávez.” That shredding is most vividly and violently demonstrated by rising crime, the issue that Venezuelans say most worries them today. The increase in assaults and murders and kidnappings under Chávez has made, Carroll notes, “Venezuela more dangerous than Iraq, and Caracas one of the deadliest cities on earth.”

Much of this dysfunction is the sad but predictable fate of a country living under the resource curse. In that sense, Chávez is as much a symptom as a cause of what ails his country. After all, it was a Venezuelan who first called oil “the devil’s excrement,” long before anyone knew the Comandante’s name. Yet in the end, what remains extraordinary about Chávez is how little touched he has been by the atrophy spreading beneath him, how he has hovered above it all and retained the faith of so many. By 2011, some of his supporters had even taken to shouting, “Long live Chávez, down with the government.”

In the past, Chávez has likened himself to Christ, “a great rebel … an anti-imperialist,” and attacked opponents with cries of “Burn the Judas!” As his cancer has progressed and his voice gone silent, that comparison has become more frequent and direct. One sign at a recent rally: “Chávez Christ I love you.” When he is dead, that faith will keep Chávez looming above Venezuelan politics—one final victory over his enemies.

The apostles have already started scrambling to claim and define his legacy. In December, Chávez anointed Nicolás Maduro, a union leader turned foreign minister, as his successor. But Maduro represents only one of many factions and interests: Castroite socialists, military men, street militia chiefs, Boligarchs who have grown fat and prosperous on the fruits of twenty-first-century socialism. When Chávez is gone, the knives will come out as they fight to protect the spoils and take up the mantle of the revolution, knowing well that for years, the fundamental question of Venezuelan politics will be, What would Chávez have done? He will be hailed as the model of every policy. He will remain the touchstone for ambitious upstarts eager to flaunt their opposition to the establishment or the United States. His name, his revolution, will be invoked to justify an expanding range of ideologies. Eventually, perhaps decades from now, chavismo will become so elastic a concept as to be meaningless. Only then will Hugo Chávez be truly defeated.



Three Ways to Bring Manufacturing Back to America https://washingtonmonthly.com/2013/03/02/three-ways-to-bring-manufacturing-back-to-america/ Sat, 02 Mar 2013 16:14:00 +0000 https://washingtonmonthly.com/?p=18969

The much-ballyhooed “in-sourcing” trend is real enough. But it won’t amount to much unless Washington acts.


In January 2012, President Barack Obama convened nineteen CEOs and business leaders at a White House forum to tout a potentially promising new phenomenon: instead of “shipping jobs overseas,” U.S. companies were bringing them back. “[W]hat these companies represent is a source of optimism and enormous potential for the future of America,” Obama said. “What they have in common is that they’re part of a hopeful trend: they are bringing jobs back to America.”

Anecdotally, the record is impressive. A number of major companies—including some of the same firms that first took flak for “offshoring” jobs to China—are now expanding their manufacturing operations stateside. General Electric, for example, says it has created 16,000 new U.S. jobs since 2009, including jobs at a new locomotive plant in Fort Worth, Texas; a solar panel factory in Aurora, Colorado; and an engine manufacturing facility in Pennsylvania. The company’s recent revival of Appliance Park, in Louisville, Kentucky, as a maker of high-end refrigerators, was the subject of high-profile coverage, including a recent piece in the Atlantic.

Other companies that have seemingly caught the “reshoring” wave are appliance maker Whirlpool (which rejected sites in Mexico in favor of Tennessee), and iconic brands like Intel, Canon, Caterpillar, and DuPont. All of these firms have reported expanding or building new U.S. facilities in the last few years. In December 2012, computing giant Apple announced it would bring some Mac production back to America, investing about $100 million to do so.

So given these recent wins, can “insourcing” save America’s economy?

No. And yes. On one hand, insourcing is unlikely to be the magic elixir for a job market that’s only slowly gaining steam more than three years after the official end of the Great Recession. Only some jobs are coming back, and not in nearly large enough numbers to reverse the overall decline in U.S. manufacturing employment. While manufacturing gained about 530,000 jobs between January 2010 and December 2012, America is still 7.5 million manufacturing jobs down from its last peak in 1979. Even if reshoring picks up steam, manufacturing employment is unlikely to recapture the heights of the 1950s, when more than one in three employed Americans worked the line.

Nevertheless, policymakers should encourage insourcing as much as possible, even if net job growth might be a fraction of what’s been lost. At stake is something much broader—America’s future capacity for innovation.

Though President Obama has praised insourcing’s leaders as “CEOs who take pride in hiring people here in America,” what’s prompting some companies to consider insourcing isn’t mere corporate patriotism.

For one thing, some companies have learned the hard way about offshoring’s hidden downsides, such as the lack of intellectual property protection for proprietary manufacturing processes, quality control issues, and the frustration of waiting weeks for products to arrive by container ship while rivals potentially rush hot new products to the shelves. Moreover, long supply chains mean more exposure to earthquakes and tsunamis, wars, oil shocks, and other unpredictable disruptions.

In an oft-cited 2011 study, the consulting firm Accenture surveyed 287 major companies and found that nearly half are plagued by “cycle or delivery time” problems and quality issues due to offshoring. For some manufacturers, such as baby crib makers, the problems went far beyond quality to basic safety. In 2010, the Consumer Product Safety Commission recalled more than two million cribs—made mostly in China, Thailand, Mexico, Indonesia, and Croatia—for safety defects leading to at least thirty-two infant deaths.

The breadth of these recalls prompted at least one crib manufacturer—North Carolina-based Stanley Furniture Company, Inc.—to invest about $9 million in a plant in Robbinsville, North Carolina, to produce its “Young America” line of cribs and other children’s furniture. In its 2011 annual report, the company made this business case:

Because we target a younger, often pregnant consumer shopping for her child, we believe Young America products must be domestically made in our own manufacturing facility to meet both a well-informed consumer’s demands for product safety and the health of her child.… In addition, our retailers are finding that this consumer, unlike the consumer for our Stanley brand, has a growing desire for certain products carrying the words “Made in USA.”

Another advantage of “Made in the USA” is the ability to meet customers’ needs better and more quickly if production is nearby. BMW and Nissan, for example, have built plants in South Carolina and Tennessee instead of just shipping more cars from Germany and Japan to meet growing U.S. demand.

Perhaps most significant, however, are offshoring’s shifting economics, which have begun favoring U.S. production. In a 2011 study by the industry-supported Manufacturing Institute, researchers calculated that the “raw cost” of American manufacturing—including wages, raw materials, and capital costs, but excluding taxes, regulatory compliance, and other “structural costs”—is now 9 percent lower than the average raw cost of production among America’s nine largest trading partners (including China, but also high-cost places like Canada). In 2003, by comparison, American raw production costs were 20 percent higher than the average among our trading partners.

Several factors account for this dramatic about-face. The Manufacturing Institute estimates, for instance, that raw production costs in China skyrocketed 132 percent from 2003 to 2011. This includes Chinese wages, which the Bureau of Labor Statistics (BLS) reports more than doubled from 2003 to 2008 (albeit from 62 cents an hour to $1.36). Meanwhile, American production costs have fallen. The natural gas boom, for example, has meant that industrial prices for natural gas are less than half what they were in 2001.

New advances in technology have also made American production cheaper by increasing productivity and shrinking labor as a share of total costs. Instead of hiring low-cost Chinese workers, companies can now use even lower-cost American machines. 3-D printing, for example, is part of a hot new trend in “additive manufacturing,” where 3-D objects are built through the successive layering of materials from a computerized blueprint. Not only does this technology erase the lag time (and even the distinction) between thinking and making, it’s portable and absurdly cheap. For about $1,500, a “desktop” 3-D printer can eliminate the need for a Chinese factory, along with its attendant headaches.

The downside is that 3-D printing, as well as other advances in automation, robotics, and other technologies, eliminates the need for American workers too. When DuPont recently built a highly automated $20 million battery production facility in Chester, Virginia, to make state-of-the-art lithium ion batteries for electric cars, it created just eleven new manufacturing jobs.

But again, insourcing’s potential isn’t so much about new manufacturing jobs. Rather, the current mini trend toward insourcing might be a golden opportunity to lure back the research and development and innovative capacity that the nation lost in the rush to offshore—and perhaps turn the mini trend into a mega trend that could transform the nation’s economy.

In their highly influential book, Producing Prosperity: Why America Needs a Manufacturing Renaissance, Harvard Business School professors Gary Pisano and Willy Shih argue that when countries lose the ability to manufacture, they also lose the ability to innovate. Pisano and Shih push back on claims that innovation can happen successfully in one country (America) when manufacturing happens in another (China), arguing that this thinking “is based on false premises about the divisibility of R&D and manufacturing in the innovation process.”

For some industries, they write, a product’s design is “so tightly intertwined” with the production process “that it makes little, if any, sense to talk about them separately.” As a consequence, they argue, when companies offshore manufacturing, they’re not only offshoring jobs, they’re offshoring future innovation, plus all of its spillover benefits.

For instance, Pisano and Shih cite the case of photovoltaic cells, which were developed at Bell Labs and later improved upon by a host of American universities and companies including Boeing and IBM. Now they’re largely manufactured in Asia. In 2008, only 6 percent of PV production was American.

What happened, the professors say, is that many of the technologies involved in PV cell production were offshored to Asia long ago. Consequently, Asia started out with a technological edge, as well as geographical proximity to key component suppliers (also Asian), which enabled them to dominate the competition from the get-go.

A host of other American-born inventions have moved offshore and taken their economic potential with them. A list of these lost opportunities, compiled by Dieter Ernst, senior fellow at the East-West Center, includes many staples of modern connected living: laptops, tablets, smartphones, cell phone batteries, and flat panel displays, in addition to the entire semiconductor industry. Pisano and Shih add more esoteric innovations, such as “ultra-heavy forging,” a process required for making large, super-strong structures such as nuclear reactors.

Nevertheless, Ernst says, “our innovation capabilities are still way ahead of China.” According to the National Association of Manufacturers, U.S. manufacturers account for two-thirds of the nation’s private R&D. America’s computer and electronics industries also account for half of all U.S. patents, according to Brookings scholars Susan Helper, Timothy Krueger, and Howard Wial, and manufacturers employ more than a third of the nation’s engineers.

But maintaining this track record means making sure that the industries most fueled by innovation stay at home. Fortunately for us, the companies most likely to consider insourcing right now are in exactly these innovation-intensive industries. And public policy could help tip the balance.

America still holds many of the advantages that made it the world’s foremost inventor in the first place: the world’s best universities, abundant natural resources, a stable democracy and rule of law, an entrepreneurial culture, and respect for personal and intellectual property.

Nevertheless, public policy is critical to whether insourcing picks up momentum or fizzles out. First, we need to remember that insourcing is far from the new normal. It’s true that companies are much more likely now to rethink where they make their products, rather than stampeding like lemmings as they once did toward the country with the cheapest labor, but it’s hardly a sweeping trend. Second, even if many economic fundamentals are moving in America’s direction, foreign governments have plenty of policy tricks they can and do use to lure manufacturing firms.

The question now is whether America is doing enough to push the companies sitting on the fence firmly in our direction and to compete with other countries that are doing the same. To that, the experts say no: we are falling behind by standing still.

To his credit, President Obama has made manufacturing a central plank of his economic agenda. But to accelerate insourcing’s pace and potential, here’s a three-step framework for what government can do.

First, we should shamelessly court companies to America and help them expand when they get here. Even as offshoring’s economics tilt America’s way, other
countries—less opposed than we are to heavy-handed industrial policy and even downright cheating—are busily dreaming up strategies to put a thumb on the scales in their favor.

In their book Innovation Economics, Ezell and Robert Atkinson document a slew of incentives regularly offered by foreign governments to woo business. Israel subsidized $1.2 billion of Intel’s investments in the country, while Vietnam waived all corporate taxes on the first four years of Intel’s operations there (and offered big discounts thereafter). Singapore and South Korea freely provide footloose firms tax holidays and free land. South Korea also routinely offers companies tax-free and interest-free loans, while India offers some firms a chance to deduct all profits for ten years. China plays the most aggressive and strategic game of all, often demanding that firms move high value-added manufacturing operations to China or transfer valuable technologies as a condition of accessing its market.

Some of these tactics are offered in blatant disregard of international trade laws, and the U.S. government is often quick to complain to the World Trade Organization. But because of the multiplicity of alleged violations, the complexity of proving a case, and the tremendous time and resources required to win, few cases are ever pursued, and then only when enormous sums are at stake, as in the long-running Boeing-versus-Airbus dispute.

So given the current free-for-all, why hasn’t Washington joined the game? Partly it’s because recruiting individual companies has never been seen as a federal responsibility. Presidents are supposed to think in terms of big sweeping policies, not lower themselves and their administrations to the level of marketers. Partly it’s the ingrained fear of doing anything that smacks of taboo “industrial policy” or “picking winners and losers.” Whatever the reason, such things just aren’t done. Nevertheless, there’s plenty of room for Washington to get more aggressive without crossing the line into industrial policy. For starters, suggests Economic Strategy Institute President Clyde Prestowitz, U.S. officials could simply ask companies to stay, and then thank them when they do.

Prestowitz recalls one meeting between a senior government official and an American CEO over some intellectual property issues the CEO’s company was having overseas. Instead of offering to help with the problem, the official asked the CEO if he’d ever considered moving to Singapore. Eventually, the CEO and his company did. “[This CEO] came from the Ukraine, was grateful to the U.S. for giving him a living, and didn’t want to leave,” Prestowitz says. “If somebody from the government had just called him and told him to stay in the U.S., he would have.”

While President Obama’s 2012 showcase of insourcing CEOs was a good first step, something far more systematic is called for. For example, the president should direct his senior staff to ask for pledges from any CEO they meet with to expand operations in the U.S. These pledges would be reported directly to the president, who would then publicly thank the executives. Such recognition is prized by even the most powerful business leaders—recall the White House cuff links JPMorgan Chase CEO Jamie Dimon wore at his congressional grilling last June (though he refused to say which president gave them to him). Most of these executives are patriotic Americans who are acutely aware of the reputational hit their companies are taking because of outsourcing, so they might well be eager to move some production back to America if they can justify it on economic grounds. Routinely asking them to do so would cost the federal government nothing and would require no congressional approval.

And the president could go further still. He should ask Congress for a several-billion-dollar “war chest” to meet or beat any incentive offered to a company by a foreign government to lure production there.

As iffy as it sounds under current trade laws, the reality is that America’s competitors routinely engage in this behavior, and the United States would only be leveling the playing field by doing the same. By entering the game, Washington could use the opportunity to argue for clearer and more restrictive rules on such recruitment activity, which would be the best outcome.

Moreover, some of our elected leaders are already out on the field anyway. Governors and mayors routinely give away public funds, in the form of tax breaks, special infrastructure spending, and the like, to lure firms to their states and municipalities—to the tune of $80 billion a year, according to an investigative series in the New York Times last December.

For instance, the Texas Enterprise Fund, which calls itself “the largest ‘deal-closing’ fund of its kind,” has spent nearly $473 million since 2003 to entice companies to the Lone Star State. Some of this state and local spending probably does result in a net increase in jobs on American soil. But the bulk of it is wasted in a costly, senseless competition between states and cities for jobs that would have come to the United States anyway or that were already here to start with. For instance, according to the New York Times, in 2011 Kansas awarded AMC Entertainment $36 million to move from neighboring Missouri, only to watch as Missouri, a few months later, lured Applebee’s headquarters from Kansas.

But while states can compete against each other, they can’t win against China. Having the federal government play the lead role in the global incentive game would not only minimize this beggar-thy-neighbor dynamic among states and cities. It would also allow America to effectively use the much more powerful tools available at the federal level—such as federal tax and trade policies—to win business.

The second step to bringing back manufacturing would be to make America the most “user-friendly” country on the planet. To get the companies and industries we want, we not only need to make the first move, we need to be the prettiest girl at the ball. This means investing in what Pisano and Shih call the “industrial commons”—the shared infrastructure, human capital, and other resources that make a country attractive to business and that can support a company’s success.

The ingredients of these commons include a large pool of educated workers with skills well matched to the industries that want to invest. They also include first-rate physical infrastructure, such as well-maintained and efficient roads, bridges, and railways. As everyone knows, ours are in terrible shape compared to our competitors’. With interest rates low and plenty of private investment dollars looking for a safe haven, now would be the perfect time to set up a national infrastructure bank, as the Obama administration has suggested.

Infrastructure also means universal and fast broadband—the Federal Communications Commission ranks America twenty-fourth in the world on broadband speed, with average speeds roughly a third of South Korea’s and half of Bulgaria’s. And it means a state-of-the-art energy grid. As Jeffrey Leonard has argued in these pages (“How We Could Blow the Energy Boom,” November/December 2012), ours is decrepit and blackout prone but fixable with the right regulations and minimal government investment. While thousands of pages have been written about the ideal economic habitat for wooing business, a couple of fresh ideas are worth considering.

One idea is to become more like Germany. In his recent State of the Union address, President Obama reiterated his proposal to create a National Network for Manufacturing Innovation—a $1 billion effort that was launched in August 2012 with a pilot, the National Additive Manufacturing Innovation Institute, in Youngstown, Ohio. The NNMI is the first move in replicating a robust network of nearly sixty national research labs—the Fraunhofer Institutes—that Brookings scholars Helper, Krueger, and Wial argue have been central to Germany’s manufacturing success. The country’s manufacturing sector is among the world’s best, employing one-fifth of German workers and paying an average of $46 an hour to boot (versus $33 an hour here). In a 2012 GE survey of 3,000 global business executives, Germany was top ranked as the most “innovation conducive.” Jointly funded by government and industry, the Fraunhofer Institutes act as incubators, research labs, and clearinghouses. As a result, Germany leads patent registrations in Europe. Among their inventions: the MP3.

Some U.S. states have created “mini Fraunhofers,” such as the Connecticut Center for Advanced Technology, the Florida Center for Advanced Aero-Propulsion, and Virginia’s Commonwealth Center for Advanced Manufacturing. Connecticut’s center, for example, provides a suite of services, including technical assistance for small manufacturers, field testing of innovations, and training curricula for workers. But these state centers, Wial says, “are not integrated with each other and by themselves aren’t likely to have big impact nationwide.” A coordinated national network, however, could greatly accelerate “radical product innovation.” Moreover, it could be self-funding and even profitable. In Germany, licensing rights for MP3 have generated millions of euros in revenues.

For example, an American Fraunhofer network could help accelerate advances in robotics and automation. Even as these technologies are booming here, America is actually far behind several other advanced economies. In an indication of just how much our advanced trading partners have been investing in manufacturing while we’ve been letting ours go, manufacturing firms in Germany, Japan, and South Korea use two to three times as many robots per 10,000 employees as do American manufacturers, according to a study by the International Federation of Robotics, an industry trade group. But this also shows how much potential the United States has when it comes to future innovation in this sector.

A final benefit of this network might be to counteract what Robert Hayes and William Abernathy called (in 1980!) “competitive myopia,” in which “maximum short-term financial returns have become the overriding criteria for many companies” as the imperative to maximize shareholder returns is increasingly urgent. By providing both the venue and the support for “patient capital” and longer-term investments, the network could help American companies recapture the vision that led to the breakthroughs of the twentieth century.

Another idea would be to train more American engineers, and to keep the foreign-born ones we educate. America produced just 10 percent of the world’s science and engineering graduates in 2008. Moreover, a high proportion of those American graduates were foreign students, including 57 percent of engineering doctorate recipients and 54 percent of computer science grads. America is facing a potential shortage in one of manufacturing’s key raw ingredients: engineers.

One thought-provoking proposal to boost the pipeline, offered by a Florida task force on higher education reform, is to offer discount tuition for degrees in “strategic areas of emphasis,” such as math, science, and engineering. With college costs soaring, bargain engineering degrees might draw students who would otherwise choose different majors.

And if immigration reform does happen this year, Congress should dramatically expand high-skilled immigration so we don’t export back to our competitors all the foreign-born engineering graduates we’re producing. One idea, proposed in this magazine, is to provide foreign science and engineering students with green cards on graduation, so they can land a U.S. job more easily.

The third and final way Washington can lure back manufacturing is to stop building new cliffs to leap from. We need to end the self-inflicted wounds caused by short-term dramas over our long-term fiscal challenges. “If Congress has limited political capital, we should be spending less of it inflicting harm on the country and more of it on doing good,” says Ed Gresser, executive director of Progressive Economy, a Washington think tank. “If the U.S. government weren’t doing so much to damage confidence, we might be better off.”

Before businesses decide to invest, they need long-term certainty about their future tax liabilities, confidence that the government will be operational for more than three months at a time, and assurances that America won’t default on its debts or stay fixed on a course toward budgetary ruin.

Few things could do more for the long-term economic stability of America than a true “grand bargain” on deficit reduction, health care cost control, and government spending. Tax reform also wouldn’t hurt.

Hopeless as it currently seems, Congress and the administration shouldn’t give up. As Australia’s foreign minister, Bob Carr, observed, “The United States is one budget deal away from restoring its global pre-eminence.”

The post Three Ways to Bring Manufacturing Back to America appeared first on Washington Monthly.

A Tale of Two Trade Deals https://washingtonmonthly.com/2013/03/02/a-tale-of-two-trade-deals/ Sat, 02 Mar 2013 15:42:58 +0000

Never mind Asia, time to pivot to Europe.

While Washington is consumed by political furor over how to get the federal budget deficit under control, strangely few people are talking about its troublesome twin sister. Unlike the budget deficit, the half-trillion-dollar U.S. trade deficit does nothing to stimulate the economy even in the short term. Rather, it is sucking jobs out of the country year in and year out while also raising doubts about America’s ability to maintain its global security commitments. Taking sensible measures to reduce our chronic imbalance of trade would require neither austerity nor tax increases, and is the key both to creating jobs and to restoring confidence in America.

So what, you might ask, is the administration’s policy on trade? Right now its primary focus is on a deal known as the Trans-Pacific Partnership, or TPP, which President Obama wants finished up by October. If concluded according to plan, the TPP will include the United States, Canada, Mexico, Peru, Chile, New Zealand, Australia, Brunei, Singapore, Malaysia, and Vietnam, with the possibility that Japan and Korea might also join. The treaty would also be open for other countries to join if they could meet the required standards.

Beyond this, as Obama announced in his State of the Union address, the White House is looking for a deal gazing in the opposite direction. Known as the Transatlantic Free Trade Agreement (TAFTA), it would tie the United States and the European Union into the world’s largest trading bloc.

As with most trade deals, both the TPP and TAFTA have geopolitical as well as economic significance. Indeed, one reason the administration is placing strong priority on the TPP is because it sees the deal as an important part of its larger foreign-policy “pivot to Asia.” With the rise of China and recent U.S. emphasis on Iraq, Afghanistan, and the Middle East, some Southeast Asian and East Asian countries have been looking for assurances that the U.S. will remain engaged in the region and provide a countervailing power. So along with stationing 2,500 Marines and increasing the U.S. naval presence in Australia, opening a drone base in the Cocos Islands, increasing naval visits to Singapore and other Asian ports, and raising the U.S. naval presence in the western Pacific to 60 percent of all U.S. ships, the administration is seeking special economic ties with the nations noted above—which, of course, do not include China.

A potential trade deal with Europe also has economic and geopolitical implications, but it has not yet generated the same level of expectation, commentary, and lobbying. This is partly because it is not as far advanced as the TPP, but it is also because few imagine that Europe will be a source of either major opportunities or major threats. Why place bets on an aging, stagnant Europe, goes the conventional thinking, when this is likely to be the century of Asia?

Yet the geopolitical case for the TPP is not nearly so strong as the administration argues, and the agreement is certainly not worth the cost the U.S. is likely to have to pay. Meanwhile, the economic case for forging closer trading ties with Europe is comparatively much stronger. It’s time to look deeper at how these two trade deals fit into America’s grand strategy.

In early 2011, Deputy National Security Advisor for International Economic Affairs Mike Froman invited me and a few other trade experts to the White House to request suggestions and support for the TPP.

Froman made two basic arguments. The first was geopolitical: the TPP, Froman explained, was the administration’s way of demonstrating to our Asian friends and allies that we are back and committed to them. The second was economic: the deal would further open some important markets to American business, he argued, and, most importantly, would serve as a template for negotiating much broader and purer global free trade deals in the future.

To understand the administration’s reasoning, it is necessary to have a little background. In 2001, the World Trade Organization (WTO), the 158-nation body that attempts to govern global trade, launched the so-called Doha Round (after Qatar’s capital city, which hosted the launch meeting). The purpose of these negotiations was to achieve a dramatic increase in global trade liberalization. By 2008, however, the talks had gone nowhere and American frustration was at the boiling point.

In response, the Bush administration developed a theory of competing free trade agreements, or FTAs. The idea was that by concluding a series of special bilateral and regional trade deals, the U.S. would eventually force reluctant countries to sign on to Doha for fear of being frozen out of preferred access to key markets. While such FTAs were permissible under WTO rules, they were anathema to free trade economists because they inevitably distort trade and welfare by granting preferential treatment to favored partners.

As a test of the theory, the Bush White House announced in the fall of 2008 that it was joining an existing, largely ignored regional trade agreement among Singapore, Brunei, New Zealand, and Chile. After this, however, not much happened until the Obama administration committed to its “pivot to Asia.” That put crafting a much larger “Trans-Pacific” deal on Washington’s front burner, as Froman explained.

This reasoning struck me at the time, and still strikes me, as dubious at best. Let’s start with the geopolitical calculation.

There is no doubt that the growth of China’s power does unnerve other Asian nations. A Singaporean minister for foreign affairs expressed the concern succinctly to me over dinner one night. “As an ethnic Chinese myself,” the minister said, “I know that China views the world hierarchically—with a particular position, either up or down but not equal, for each country. Furthermore, I know the place the Chinese are likely to have for Singapore, and I don’t want to be in it in a Chinese-dominated world. So we need America to prevent Chinese domination.”

Yet if the rise of China makes Singapore and other Asian nations feel insecure, it’s hard to see how the TPP should make them feel better. It won’t halt the rise of China nor the relative decline of the U.S. And in any case, the United States has hardly abandoned Asia. The Pentagon maintains 100,000 troops in East Asia and the Pacific and keeps the Seventh Fleet patrolling the western Pacific as it has for nearly seventy years. We have security treaties with Japan, Korea, the Philippines, and Australia and quasi-security arrangements with Singapore.

We have done this despite no longer having to worry about the Soviet Union or the spread of world communism. We have done it despite China’s lacking any motive to interrupt its trade with us, and any desire to attack us. Meanwhile, most of the Asian countries benefiting from our security umbrella pursue industry targeting and strategic trade policies that have contributed to the chronic U.S. trade deficit and the offshoring of millions of U.S. jobs even as our Navy dutifully patrols the trade lanes. Geopolitically, the question should not be what we can do for prospering Asian countries made anxious by the growth of China. Rather, it should be what they can do to help relieve us of the economic burden of our continuing military commitment.

Then what about the economic case for the TPP? The Peterson Institute for International Economics has done an analysis showing that the TPP might result in a 0.0038 percent increase in GDP for the U.S. by 2025. In actuality the economic benefits, if any, are likely to be still less.

The agreement would virtually abolish tariffs. It also has provisions for liberalizing textile trade, and for reducing agricultural subsidies and barriers. It would ensure that state-owned enterprises compete fairly with private industry, and provide for stronger protection of intellectual property and investment rights. And it would reduce barriers to entry in a variety of service industries, including telecommunications and environmental goods and services.

No doubt, many multinational companies headquartered in the U.S. would benefit from these provisions. Companies such as Apple and General Electric, for example, would find it easier and safer to offshore R&D and production and to avoid U.S. taxes by keeping profits in Asia.

But whether it will be good for the U.S. economy as a whole is doubtful. That’s because the TPP ignores the most important drivers of global trade and investment. For example, it has no provisions for dealing with currency manipulation, even though several of the countries in the negotiation, and others that are likely to join later, routinely drive down the cost of their exports and drive up the cost of their imports by keeping their currencies artificially low.

The effects of this easily negate any benefits that might result from lowering trade barriers. For example, new Japanese Prime Minister Shinzo Abe’s first policy action to restart the Japanese economy was to devalue the yen by about 20 percent versus the dollar. That is a multiple of Japan’s average tariff level.

Nor does the TPP address the many structural issues that lock foreign producers out of Asian markets. Consider the Japanese car market, for example. Because of the strong yen and high wages, Japan has become a high-cost location for automobile production. At the same time, the United States has become a low-cost production center, thanks to the recent restructuring of its industry and the stagnation of American wages. One would expect that in view of its high costs, Japan’s imports of foreign cars would be soaring and Japan’s producers would be closing factories as U.S. and European producers have done under similar circumstances.

But none of that is happening, despite the fact that Japan imposes no tariffs on imported cars. So what else is at work? Complex Japanese rules and taxes that favor its domestic industries, plus a dealer network designed to exclude foreign-built cars. Joining with Japan in the TPP would not fix any of that. Indeed, by reducing the 2.5 percent U.S. tariff on cars built in Japan, it would help Japanese automakers to avoid having to reduce their costs.

Nor does the TPP deal with the problem of investment incentives. These are the packages of tax holidays, free land, state-financed worker training, regulatory exemptions, and capital grants that countries use to attract investment in production, R&D labs, and other facilities by global companies.

For example, Intel recently opened a new plant in China to make Pentium chips, the microprocessors that drive most of the world’s computers. Why China? Chip fabrication is highly automated, making labor costs insignificant. In the absence of distorting subsidies, the low-cost places to produce Pentiums would be Intel’s facilities in New Mexico and Arizona. But Intel CEO Paul Otellini has pointed out that the financial incentives offered by China are not available in America, and that they are worth about $100 million in annual Intel profits. These kinds of incentives are far, far more important as drivers of trade, production, and jobs than anything the TPP is talking about.

An additional problem is how the TPP would destroy the Central American Free Trade Agreement (CAFTA) and poke big holes in the North American Free Trade Agreement (NAFTA). For example, under both agreements, textile producers in the Caribbean and Mexico who use U.S. yarn receive duty-free access to the U.S. market for textiles and apparel. The U.S. struck these deals partly in response to the discriminatory trade and industrial policies of some Asian countries that were distorting markets and causing the loss of U.S. jobs. A second objective was to help create jobs in Mexico and the Caribbean and thereby reduce the number of undocumented immigrants from these countries while also providing an alternative to employment in the drug-trafficking trade.

By removing tariffs on textile imports from Vietnam, the TPP would displace an estimated 1.2 million textile workers in the Caribbean Basin and Mexico along with about 170,000 in the United States, according to Mary O’Rourke, an industry analyst. Some see that as simply the price of achieving true free trade and optimizing the planet’s division of labor. But Vietnam is dominated by state-owned enterprises and is far from being a market economy. Furthermore, under a situation of true free trade, it would be China, not Vietnam, that would take most of the textile business, because China has gigantic excess capacity in textiles, as it does in just about everything else.

Meanwhile, it isn’t even clear that the TPP will change trading patterns within Asia to the advantage of the U.S. economy. Beijing is now pushing its own Regional Comprehensive Economic Partnership, and so far all ten member countries of the Association of Southeast Asian Nations, plus Japan, Korea, Australia, New Zealand, and India, have signed up. This China club has more members—and more important members—than the American TPP club. Indeed, all the TPP members except those from the Americas are also in the China club.

None of this is to suggest that the United States couldn’t prosper from a deeper trading relationship with Asia if it were done on the right terms. Indeed, imagine if the U.S. went for the whole enchilada and proposed something like a trans-Pacific European Union? Take the advanced democratic economies of the Pacific—Canada, the U.S., Mexico, New Zealand, Australia, Japan, and Korea—and make them one integrated economy with a common antitrust regime, a common set of employment and environmental standards, one banking system, and eventually one currency—call it the Yollar or the Yelarso or the Denso. Make the union open to new entrants if and when they reform their economies enough to qualify for membership.

That entity would be the world’s biggest, richest economy. It would eliminate the structural impediments to U.S. exports in much of Asia, and bring most of the world economy under a true free trade standard. With this kind of a union there would be much less currency manipulation and much less (if any) need for “pivots” and more U.S. military in Asia. But, alas, Washington isn’t even thinking about anything like this, even though it would be a game-changing economic and geopolitical move.

So if the TPP looks like a lose-lose situation for the United States, what about a deal with Europe?

This notion was first voiced at a high level in 1995 when then Secretary of State Warren Christopher proposed a joint effort to better bridge the Atlantic. In November of that year, I was with 100 European and American CEOs when they met in Seville, Spain, and proposed a far-reaching agenda to increase trade and investment in the European and American markets. The following December, Washington and Brussels agreed to conduct a joint study on how to reduce tariff and non-tariff barriers. In a book at the time, I cited analysis by my own Economic Strategy Institute estimating that a TAFTA would boost U.S. GDP by 1.6 to 2.8 percent while raising EU GDP by 1 to 1.9 percent.

Nevertheless, objections were raised. Because it would be so large, some feared that a U.S.-EU deal might destroy the newly created WTO. Because the markets were already highly integrated, the likely gains from the deal might be small. A formal negotiation might actually worsen U.S.-EU relations by bringing contentious issues, like subsidies to our respective aircraft manufacturers, brutally to the fore. Some also said it would really be the rich, white guys ganging up against the rest of the world. So nothing happened. Eventually the WTO launched the aforementioned Doha Round, in which important parts of the rest of the world thumbed their noses at the European and American visions of free trade.

This experience is one reason why TAFTA deserves a second look, but there are also others. First is the need for growth in an age of high debt and austerity. Neither the U.S. nor Europe is politically prepared to stimulate its economy through any significant increase in deficit spending. That means growth must come from some other source, and the efficiencies that would come from further integration of European and American economies are a plausible answer.

Labor unions in the EU are strong and wages are high, so there will be no race to the bottom. And the EU and the U.S. largely share a commitment to free markets, free trade, and democracy. Much of what has gone wrong with the WTO derives from systemic conflicts between the U.S. and the EU on the one side and the more interventionist and authoritarian political economies of the rest of the world on the other.

The biggest economic gain from TAFTA would come from harmonizing regulation. Inconsistencies in regulation raise the costs of transatlantic trade by 27 percent in automobiles, for example, and by 6.5 percent in electronics. The United States and Europe both have safe headlights, for instance, but EU cars exported to America must have different headlights than those sold in Europe and vice versa. That non-tariff barrier inhibits exports while raising costs by forcing producers to keep extra stocks of different headlights. The same holds true for electronics and most other products. Mutual recognition of essentially equivalent standards and removal of similar non-tariff barriers would boost U.S. GDP by 1 to 3 percent ($150 billion to $450 billion), according to a 2005 study by the Organisation for Economic Co-operation and Development.

Beyond this, there are also significant economic gains to be had by eliminating non-tariff barriers to trade in services. More than 70 percent of the GDP in both markets derives from provision of services. Yet service trade is highly restricted by non-tariff barriers, such as differing rules and conflicting standards that bar the integration of cell phone service in the U.S. and Europe.

Meanwhile, although tariffs between the two economies are already low, there would also be significant immediate economic gains from tariff elimination. These potential gains range from 1 to 1.3 percent of GDP ($135 billion to $181 billion) for the United States. The European Commission has estimated that a comprehensive deal would result in a 50 percent increase in overall transatlantic trade. Putting all this together suggests that TAFTA would add 2 to 4 percent to U.S. GDP.

There is also a strong geopolitical case for TAFTA. For one, it would help contain the centrifugal forces in the EU that are gaining strength. British Prime Minister David Cameron wants to renegotiate the UK’s membership and submit the results to a referendum. This creates a real threat of a UK exit from the EU that might take others along. TAFTA would provide glue to hold Europe together, something that is a major long-term U.S. geopolitical and economic objective for the very good reason that the EU is not only a zone of peace and democracy but also America’s major partner in dealing with Russia, the Middle East, and Africa. It is also, of course, America’s biggest economic partner by far.

To those who view Europe as a demographic and economic backwater compared to Asia, it may be surprising to know the facts. Yes, Europe faces an aging population. But birth rates have plunged far deeper and faster in East Asia than in Europe, and as a result East Asia’s population is aging far more quickly. Fully 30 percent of Japan’s population will be over sixty-five by 2030, and its working-age population has been shrinking since the mid-1990s. China’s workforce is shrinking now too, thanks to its one-child-one-family norm. Demographers expect China to soon experience hyper-aging unlike anything in the cards for Europe. South Korea, Singapore, and Taiwan are already experiencing the same fate. The United Nations projects that by 2030, France and South Korea will each have the same proportion of elders in their population.

Europe also remains, despite its slow recovery from the Great Recession, the world’s largest economy and a powerhouse that is vital to the United States. Americans sell three times more merchandise exports to Europe than to China, while the EU sells the United States nearly twice the goods it sells China. Transatlantic trade generates $5 trillion in sales annually and employs fifteen million workers. It accounts for three-quarters of global financial markets and more than half of world trade. No other commercial artery is as integrated. Roughly $1.7 billion in goods and services cross the Atlantic daily, equaling one-third of total global trade in goods and more than 40 percent in services.

Finally, far from destroying the prospects for a global economy based on true free trade, TAFTA may be the only thing that can get China and other mercantilist nations on board. Given that the U.S.-EU economies constitute more than half of the global economy, TAFTA would tend to set global standards, and along lines affirming the fundamental precepts of free trade.

The conclusion is clear. The TPP is lose-lose for the U.S., both geopolitically and economically. The White House should drop it, and Congress should oppose it, while we focus on the vast promise of a stronger transatlantic partnership that will benefit the whole world.

The post A Tale of Two Trade Deals appeared first on Washington Monthly.

Why Agencies Are Always Missing Their Deadlines https://washingtonmonthly.com/2013/03/02/why-agencies-are-always-missing-their-deadlines/ Sat, 02 Mar 2013 15:38:50 +0000 https://washingtonmonthly.com/?p=18971

Even the best mainstream news stories on the regulatory process tend to mention the number of deadlines an agency has missed as if that’s an indication of its performance. But that kind of coverage is actually an indication of just how little we know about what’s going on behind the scenes.

For one, deadlines are essentially nonbinding. While agencies are technically required by law to aim for them, if (and, more often, when) they miss them, there are very few consequences. As long as an agency can demonstrate that it’s working on a rule, Congress and special interest groups have very little recourse, except to attempt to embarrass and discredit the agency for missing its deadline—which is exactly what they do.

Deadlines are also, more often than not, simply unrealistic. Agencies are required to adhere to the plodding processes outlined in the 1946 Administrative Procedure Act, all of which can be slowed down by external forces—through no fault of the agency’s own. For example, the APA requires that the public be given an adequate amount of time to comment on a proposed rule. During the rule making for the Dodd-Frank financial reform law, financial industry groups have regularly buried rule makers under mountains of comment letters and studies, all of which the agencies must respond to in detail in the reports justifying a new rule. Industry groups have also regularly claimed that they need more time to review a rule, deliberately delaying its progress—a tried-and-true tactic for stalling rules they deem unsavory.

When an industry group requests more time, the agency is caught between a rock and a hard place. If it does not grant the extra time, industry can sue the agency for violating the APA or for not performing an adequate cost-benefit analysis. When that happens, the agencies usually don’t fare well. Of all the SEC rules overturned by the court on the grounds that the agency failed to do an adequate cost-benefit analysis, none have ever been re-proposed. If the agency does grant the extra time, industry and its allies in Congress can use public testimony and the media to harangue the agency for—what else?—missing its deadline.

The post Why Agencies Are Always Missing Their Deadlines appeared first on Washington Monthly.

He Who Makes the Rules https://washingtonmonthly.com/2013/03/02/he-who-makes-the-rules/ Sat, 02 Mar 2013 15:28:58 +0000 https://washingtonmonthly.com/?p=18972

Barack Obama’s biggest second-term challenge isn’t guns or immigration. It’s saving his biggest first-term achievements, like the Dodd-Frank law, from being dismembered by lobbyists and conservative jurists in the shadowy, Byzantine rule-making process.

The post He Who Makes the Rules appeared first on Washington Monthly.


In late 2010, Bart Chilton, one of three Democratic commissioners at the U.S. Commodity Futures Trading Commission (CFTC), walked into an upper-floor suite of an executive office building to meet with four top muckety-mucks at one of the biggest financial institutions in the world.

There were a handful of staff members present, but it was a pretty small gathering—one, it turns out, that Chilton would never forget.

The main topic Chilton hoped to discuss that day was the CFTC’s pending rule on what are known as “position limits.” If implemented properly, position limits would put a leash on speculation in the commodities market by making it harder for heavyweight traders at places like Goldman Sachs and JPMorgan Chase to corner a market, make a killing for themselves, and screw up prices for the rest of us. Position limits are also one of many ways to tamp down the amount of risk big institutions can take on, which keeps them from going belly up and minimizes the chance taxpayers will have to bail them out.

The financial institution Chilton was meeting with that day was a big commodities exchange, which is like a stock exchange except that instead of trading stocks they trade derivatives based on the value of actual products, like oil and gas. Chilton wouldn’t say which major commodities exchange he was meeting with that day, but suffice it to say two of the biggest—the Chicago Mercantile Exchange and Intercontinental Exchange—have a lot to lose from federally administered position limits. To them, the more derivatives traded, the better. They’ve been fighting the CFTC’s attempts to establish position limits for years.

The passage of the Dodd-Frank Wall Street Reform and Consumer Protection Act in July 2010 seemed to promise meaningful reform on this front. The law includes Section 737, which explicitly directs the CFTC to establish position limits and lays out detailed guidelines on how they should do so. “The Commission shall by rule, regulation or order establish limits on the amount of positions, as appropriate,” it reads.

Still, even with the strength of the law behind him, Chilton waited until the end of the meeting to broach what he knew would be a tense subject. He began diplomatically. Now that the CFTC was required by law to establish position limits, his commission wanted to do so “in a fashion that made sense—one that was sensitive to, but not necessarily reflective of, the views of the exchange,” he told the executives.

Chilton’s gracious overture fell flat. His hosts, who had been openly discussing other topics moments before, were suddenly silent. They deferred instead to their top lawyer, who explained that the exchange’s interpretation of Section 737 was that the CFTC was not required to establish position limits at all.

Chilton was blindsided. While other parts of Dodd-Frank were, admittedly, vague and ambiguous and otherwise frustrating to those, like him, who were tasked with writing the hundreds of rules associated with the act, Section 737 didn’t exactly pull any punches. The Commission shall establish limits on the amount of positions, as appropriate.

“You gotta be kidding,” Chilton told the executives. “The law is very clear here. The congressional intent is clear.”

But the executives stood their ground. Their lawyer quietly referred Chilton to the end of the sentence in question: as appropriate. Those two little words, the lawyer said, clearly modify the verb “shall.” Therefore, the statute can be interpreted as saying that the commission shall—but only if appropriate—establish position limits, he explained.

Anyone with a passable command of the English language should, faced with that argument, feel both dismay and a grudging sort of admiration. After all, given the context in which that sentence appears, the sheer brazenness of such a linguistic sleight of hand is, in a way, inspired. It’s the kind of thing that would make Dick Cheney and John Yoo proud. Joseph Heller has written books on less.

But it’s still, rather obviously, just that: a linguistic sleight of hand. The words “as appropriate” have appeared in statutes governing the CFTC’s authority to implement position limits for at least forty years without challenge. In fact, the CFTC used the authority of that exact line, complete with its “as appropriate,” to establish position limits on grain commodities decades ago. Even those who drafted Dodd-Frank later weighed in, saying they had intended for the language to explicitly instruct the CFTC to establish position limits at levels that were appropriate. The summary of Dodd-Frank, drafted by the Congressional Research Service, doesn’t quibble either: “Sec. 737 Directs the CFTC to establish position limits,” it reads. No ifs, ands, or “as appropriate”s.

“But this kind of thing”—manipulating the minutiae—”is how the game is played,” said Bartlett Naylor, a financial policy advocate at Public Citizen, one of a handful of public interest groups tracking the rule-making process for Dodd-Frank. Since the law passed, the financial industry has been spending billions of dollars on lawyers and lobbyists, all of whom have been charged with one task: weaken the thing. One strategy has been to carve loopholes into the language of the law, Naylor said. A verb. An imprecise noun. A single sentence in an 876-page statute. “With a thousand lawyers on your payroll, that’s nothing.”

In the meeting that day, Chilton couldn’t believe what he was hearing. He pointed out to the executives that, in Dodd-Frank, Congress had not only directed the CFTC to establish position limits, it had also imposed a deadline asking the commission to do so months before almost any other rule. It was obvious, he argued, that it was a matter of when position limits would be in place. Not if.

But the executives refused to discuss the matter further. The meeting ended abruptly, and Chilton wandered out into the hallway, dazed and reeling. One of the muckety-mucks from the meeting walked with him to the elevator. While they waited, away from the rest of the group, Chilton turned to his host. “You guys have got to be kidding about this ‘as appropriate’ stuff, right?” he said.

“I know,” the muckety-muck replied, admitting it was a stretch. He let out a little chuckle—”but that’s what we’re going with.”

“He laughed,” Chilton told me recently, remembering that day. “He was laughing about how ludicrous it was.”

A couple of months after that inauspicious meeting, the CFTC released a proposed rule establishing position limits on oil, gas, coffee, and twenty-five other commodities markets. The commission received about 15,000 letters during the public comment period and spent the next six months reading through all of them, incorporating the suggestions into the draft, meeting with industry and consumer groups, and revising the rule. Fearful of being sued, the CFTC held off voting on the rule several times and agreed to delay its implementation for a year to help financial institutions comply. Finally, in October 2011, the CFTC issued a final rule. It was a victory, but a short-lived one.

Two months later, two powerful industry groups, who together represent the biggest speculators in the world, hired Eugene Scalia, the son of Supreme Court Justice Antonin Scalia, as their lead counsel, and launched a lawsuit against the CFTC. The Securities Industry and Financial Markets Association and the International Swaps and Derivatives Association were suing on the same grounds that the exchange executives’ lawyer had cited in that meeting with Chilton a year earlier: the CFTC had not demonstrated that establishing position limits was necessary and appropriate, they claimed. They also argued that the commission had not sufficiently studied the economic impact of the rule.

House Democrats and nineteen senators, some of whom had drafted Dodd-Frank, petitioned the court to rule in favor of the CFTC, a handful of op-eds beseeched judges to do the right thing, and financial reform advocates cried foul.

None of it made a difference. In September 2012, the U.S. District Court for the District of Columbia overturned the CFTC’s rule. In the decision, the court wrote that the commission lacked a “clear and unambiguous mandate” to set position limits without first demonstrating that they were necessary and appropriate. And with that, more than two years after the passage of Dodd-Frank, there were still no federally administered position limits for any commodities except grain, and the CFTC was back to square one. The muckety-mucks at the exchanges rejoiced, as appropriate.

Welcome, dear readers, to the seventh circle of bureaucratic hell.

As Obama begins his second term, all the talk in Washington is about whether ongoing congressional gridlock and soul-crushing partisanship will block the administration from achieving significant legislative victories, be they immigration reform, a big fiscal deal, or an infrastructure bank. But at least as important to the future of the country and to the president’s own legacy is whether that potentially game-changing legislation he signed in his first term—like the Affordable Care Act and Dodd-Frank, as well as a slew of other landmark bills—is actually implemented at all.

It may seem counterintuitive, but those big hunks of legislation, despite being technically the law of the land, filed away in the federal code, don’t mean anything yet. They are, in the words of one CFTC official, “nothing but words on paper” until they’re broken down into effective rules, implemented, and enforced by an agency. Rules are where the rubber of our legislation hits the road of real life. To put that another way, if a rule emerges from a regulatory agency weak or riddled with loopholes, or if it’s killed entirely—like the CFTC’s rule on position limits—it is, in effect, almost as if that part of the law had not passed to begin with.

As of now, there’s no guarantee that either Obamacare or Dodd-Frank will be made into rules that actually do what lawmakers intended. That’s partly because the rule-making process is a dangerous place for a law to go. We might imagine it as a fairly boring assembly line—a series of gray-faced bureaucrats diligently stamping laws into rules—but in reality, it’s more of a treacherous, whirling-hatchet-lined gauntlet. There are three main areas on this gauntlet where a rule can be sliced, diced, gouged, or otherwise weakened beyond recognition.

The first is in the agency itself, where industry lobbyists enjoy outsized influence in meetings and comment letters, on rule makers’ access to vital information, and on the interpretation of the law itself.

The second is in court, where industry groups can sue an agency and have a rule killed on a variety of grounds, some of which make sense and some of which most definitely do not.

The third is in Congress, where an entire law can be retroactively gutted or poked through with loopholes, or where an agency can be quietly starved to death through appropriations bills.

And here’s the really alarming part: rules run this gauntlet largely behind closed doors, supervised by people we don’t elect, whose names we don’t know, while neither the media nor great swaths of the otherwise informed public are paying any attention at all. That’s not because we don’t care what happens; we do. After all, millions of us spent the better part of a year closely monitoring the battles to pass Obamacare and Dodd-Frank. Remember? It was high drama! Every detail was faithfully chronicled in front-page headlines and long disquisitions on The Rachel Maddow Show; in countless posts by wonky bloggers, who dissected every in and out, every committee hearing, every new study about the public option or the Volcker Rule.

That kind of stuff is the Washington journalist’s bread and butter: the artful, insidious process by which a bill becomes a law. And since reporters know how the process works, how influence gets wielded and where the pressure points are, the rest of us were able to follow along closely. We knew what to root for, what to keep our eye on, and which decision makers in Washington we could remind to do the right thing.

But fast-forward a couple of years, and as the fate of those very same laws is being determined in the rule-making process we’ve found ourselves distracted by new shiny objects, like women in combat and how Pennsylvania will allocate its electoral votes in 2016. Part of the reason for that, no doubt, is that many Washington journalists, underpaid, overworked, and required to write a dozen blog posts a day, don’t have time to dedicate to following the rule-making process. Others simply don’t understand it.

Regardless, the result is that the rest of us haven’t followed the progress of these landmark laws in anywhere near the same way that we followed it during the legislative process. And in our inattention we’ve made it infinitely easier for industry lobbyists and members of Congress who voted against the laws to begin with to destroy them by subtle, nuanced, backdoor means. By quibbling over “as appropriate”s and misplaced verbs. By crafting crafty legal arguments and drowning understaffed rule makers in industry-funded hogwash. This is the way a law ends: not with a bang but with a whimper.

For purposes of illustrating the problem, this article will focus on just one of these landmark laws, Dodd-Frank. It passed more than two and a half years ago, in July 2010, but most of its rules have yet to make it through the rule-making gauntlet. While many liberals have already written it off as a total failure—some were, in fact, writing its eulogy the day it passed—it’s time we had some perspective. It’s true that it’s not as strong as many experts on financial markets had called for. It’s true that it doesn’t break up the big banks, nor fundamentally change the structure of our financial system. We may have been hoping for, say, a bulletproof SUV with state-of-the-art airbags; what we got instead are a few seat belts that need to be welded into our old rig. But as of now, those jury-rigged seat belts are the only thing we’ve got, and given the gridlock on the Hill they’re all we’re likely to get. And the truth is that they’re strong enough that the financial industry is willing to spend billions of dollars to keep them from being installed.

As of now, roughly two-thirds of the 400-odd rules expected to come from Dodd-Frank have yet to be finalized. That includes big, potentially game-changing rules governing inappropriate risk taking and international subsidiaries of American banks, and how exactly we’ll go about regulating derivatives. In the next year or so, the vast majority of these rules will be launched down the rule-making gauntlet. The necessary first step in assuring that they come out the other end as strong as they should be—or that they come out the other end at all—is to understand the challenges they’ll face along the way.

The Basic Rules of Rule Making

The rule-making process is governed by the Administrative Procedure Act, which became law in 1946, in response to the New Deal-era expansion of the federal bureaucracy. In the late 1930s and early ’40s, all the new agencies were dancing to their own beat; the APA established a system-wide metronome. Since then, a handful of other laws have been passed, including the Regulatory Flexibility Act, Paperwork Reduction Act, Government in the Sunshine Act, and Congressional Review Act, which also govern parts of the process; but for the most part the APA is the foundation.

Every stage in the rule-making process is guided by the APA. It begins the moment a law is passed and shunted off to the regulatory agency that will oversee its implementation. Once it’s in the agency, the APA governs the activities of a team of rule makers—researchers, analysts, economists, and lawyers—who do a bunch of fact gathering, perform studies, and hold a ton of informational meetings in an attempt to get a handle on how best to abide by the intention of the law and how to apply that intention to real life. Since big laws like Obamacare and Dodd-Frank deal with complex issues, Congress often makes the statutes deliberately vague, deferring to rule makers’ technical expertise and policy decisions, and giving them a significant amount of authority on how to interpret a law. All of that interpretation generally happens in the very beginning of the rule-making process, which is called the Notice of Proposed Rulemaking, or, in the acronymic parlance of the federal bureaucracy, NPRM.

After spending months and months in the NPRM process, the agency eventually publishes a proposed rule, on which, the APA stipulates, the public gets an “adequate” amount of time to comment. Usually, that’s about sixty days, but it can be shorter or longer, depending on how complex or controversial a rule is. After that, the rule makers revise the rule again, taking into account concerns raised by regulated industries and the public’s comment letters.

From there, executive branch agencies like the Food and Drug Administration and the Environmental Protection Agency send their rules to the White House Office of Management and Budget’s Office of Information and Regulatory Affairs (OIRA), which reviews the projected costs and benefits of those agencies’ major new rules, as well as the suggestions of other agencies, before the final rule is published and implemented. At independent agencies like the Securities and Exchange Commission (SEC) and the CFTC, a bipartisan panel of commissioners publicly debates and votes on the rule—a process that often results in further revisions and compromises.

Like the rest of us, rule makers use the Track Changes feature in Microsoft Word, which assigns a different color font to each contributor. By the time a complex rule has made it through this whole process, it is “lit up like a Christmas tree,” said Leland Beck, who worked for various agencies for thirty years and practiced administrative law. “A rule becomes a decision on all the comments and revisions and compromises between agencies and all the individuals who got their hands on it.” Eventually the agency publishes a final rule, which is implemented and enforced. Voilà.

Or that’s how it’s supposed to work. But like many things in Washington, that’s just half the story. The rule-making process is actually a much messier, much more cacophonous affair, dictated to a large degree by lawmakers who voted against the law to begin with, and by industry groups who would often prefer that no rules be implemented at all. In the last decade, conservative members of Congress have built ever-higher hurdles that agencies must clear, and done so while cutting their staff and budgets.

Meanwhile, since the passage of Dodd-Frank, financial industry groups have also sabotaged parts of the APA’s carefully plodding process, overwhelming rule makers with biased information and fear tactics and threatening to sue the agencies over every perceived infraction. That’s a big reason why agencies have missed so many of their deadlines for implementing Dodd-Frank—a subtlety reporters frequently miss. (See “Why Agencies Are Always Missing Their Deadlines.”)

“It’s just this constant, never-ending onslaught,” a former SEC staffer told me. “You’re doing battle every day.”

The Gauntlet, Stage 1: Asymmetric Warfare in Rule Making

Public interest and consumer advocates tend to describe the fight over the rules of Dodd-Frank in martial terms. “It’s like World War II,” said Dennis Kelleher, the president and CEO of the nonprofit Better Markets. “There’s the Pacific theater, the Atlantic, the European, the African theater—we’re fighting on all fronts.” Former Senator Ted Kaufman, an outspoken advocate for financial reform, says it’s “more like guerrilla warfare.” The reformers are trying “to make it at the margins, but they’re totally outgunned,” he said.

The financial industry certainly has a spectacularly enormous arsenal. Since the passage of Dodd-Frank, the industry has spent an estimated $1.5 billion on registered lobbyists alone—a number that most dismiss as comically low, as it doesn’t take into account the industry’s much more influential allies and proxies, including a battalion of powerful trade groups, like the U.S. Chamber of Commerce, Business Roundtable, and American Bankers Association. It also doesn’t take into account the public relations firms and think tanks, or the silos of campaign cash the industry has dumped into lawmakers’ reelection campaigns.

“The amount of money and resources they’re willing to deploy to protect the status quo is unlimited,” said Kelleher. His organization, Better Markets—one of the slickest and most vocal financial reform shops in town—has a $2 million annual budget, Kelleher said, which is about how much the financial industry spends on its lobbying team every day and a half.

While there’s no record of the total amount the industry has spent, it’s clear that there’s no shortage of money in its war chest. In the last quarter of 2010, just a few months after Dodd-Frank passed, the financial industry raked in nearly $58 billion in profits alone—about 30 percent of all U.S. profits that quarter. With that sort of bottom line, spending a hundred million or so to kill a single rule that could “cost” them a couple billion in profits is a pretty good return on investment.

In 2009, researchers at the University of Kansas and Washington and Lee University studied the return on corporations’ investment in lobbying for the American Jobs Creation Act, which included a one-time corporate tax break, and found that it was a staggering 22,000 percent. That means that for every dollar the corporations spent lobbying, they got $220 in tax benefits. Based on the billions Wall Street has spent to weaken Dodd-Frank, it seems that they have done similar math.

One thing all that industry money buys is a well-disciplined army. According to public records, representatives from the financial industry have met with the dozen or so agencies that regulate them thousands of times in the past two and a half years. According to the Sunlight Foundation, the top twenty banks and banking associations met with just three agencies—the Treasury, the Federal Reserve, and the CFTC—an average of 12.5 times per week, for a total of 1,298 meetings over the two-year period from July 2010 to July 2012. JPMorgan Chase and Goldman Sachs alone met with those agencies 356 times. That’s 114 more times than all the financial reform groups combined.

“For every one hundred meetings I have, only one of them is with a consumer group or citizens’ organization,” said Chilton. While it’s good that regulated industries have a chance to express their opinions and concerns to those who regulate them, he said, “the deck is just stacked so heavily against average people.”

It’s not just the quantity of meetings, it’s the quality, too. Kimberly Krawiec, a professor at Duke Law School, published a study last year analyzing the role of external influence during the NPRM period of Dodd-Frank’s Volcker Rule. (The Volcker Rule would ban proprietary trading, which is when banks trade for their own profits, and not on behalf of their customers, making them more likely to fail.) In her study, Krawiec found that while public interest organizations met with agencies in giant group meetings on the same day, head honchos from the industry often met with the agencies’ top staff alone. Former Goldman Sachs CEO Lloyd Blankfein, for instance, was not expected to share the floor.

That’s not an insignificant advantage, considering that the NPRM period is when “the majority of the actual agenda setting and rule making happens,” Krawiec said. Because APA stipulations require that the public get a fair shake at commenting on a rule before it is implemented, a proposed rule can’t be too different from the final rule or an agency can get sued, she explained. That has the effect of pushing most of the rule making to the very beginning of the process, which is also the least transparent, since agencies don’t have to publish what they’re up to or who their staff is meeting with during this time. Because of increased transparency efforts surrounding Dodd-Frank, agencies have been encouraged to publish all of the meetings that occur during the NPRM period—hence Krawiec’s study.

Krawiec has also found that after the Volcker Rule was proposed the vast majority of substantive public comment letters were from the financial industry, trade groups, and their various proxies—lawyers, lobbyists, and industry-underwritten think tanks—all of whom have the time and money to present extensive, if wildly biased, legal and economic arguments. Often, industry lawyers will simply rewrite entire paragraphs of the proposed rule, fashioning loopholes or limiting an agency’s scope with a single, well-placed adjective or an ambiguous verb. Whether a rule survives that change, whether it then can be effectively implemented and enforced, really does come down to such trivialities. In the rule-making process, the minutiae aren’t incidental to the rule; they are the rule. (Don’t believe me? The U.S. Supreme Court recently heard a case on a 1934 SEC rule on fraud that centered entirely on different definitions of the verb “to make.”)

Industry lobbyists are well aware that they don’t need to outright kill a rule; they need only to maim it, and it’s as good as dead. In fact, it’s better than that: it’s on the books, the newspapers cover it—it looks like a success for financial reform—but industry remains as unfettered as it was before. “That happens all the time,” said a former rule maker at the CFTC, who spoke on the condition of anonymity. “The public interest groups get the headline, but if you look at the details, the industry group has actually won. There’s an order of magnitude between the public interest groups’ and the industry groups’ attention to detail.” When I spoke to an industry lobbyist in mid-January, he put that another way. “We can’t kill it, but we can try to keep it from doing any damage,” he said.

Jeff Connaughton, a lobbyist turned crusader for financial reform, said that the “ubiquitous presence of Wall Street” goes beyond meetings and legalese in comment letters. In his book The Payoff: Why Wall Street Always Wins, he describes the tight-knit relationships between industry lobbyists and proxies and government officials as the “Blob,” which, in his experience, “oozed through the halls of government and immobilized the legislative and regulatory apparatus, thereby preserving the status quo.” Many in the Blob are married to one another and move fluidly from industry to government and back again, he told me. For example, CFTC Commissioner Jill Sommers, who recently announced her resignation, is married to Speaker of the House John Boehner’s top aide. She used to work at the Chicago Mercantile Exchange, one of the biggest exchanges in the world, which is overseen by the CFTC; she also worked at the International Swaps and Derivatives Association, the organization that later sued the CFTC to overturn the rule on position limits.

In this light, the traditional notion of “regulatory capture” doesn’t go far enough. Instead, we should think of it as “cultural capture,” writes the political scientist James Kwak. There may be no bags of cash exchanging hands, but that doesn’t matter when regulators, like many of the rest of us, have been steeped for so long in the idea that Wall Street produces the best and brightest our society has to offer. Regulators often look up to industry representatives, or know them personally, which begets “the familiar effect of relationships,” Kwak wrote in Preventing Regulatory Capture, a compilation of essays that will be published this year by Cambridge University Press in collaboration with the Tobin Project, a nonprofit research center. “You are more favorably disposed toward someone you have shared cookies with, or at least it is harder for you to take some action that harms her interests.”

Like many reformers, Connaughton points a finger at the so-called “revolving door,” which sends former bureaucrats into the private sector and vice versa, blurring the line between the regulators and the regulated. From 2006 to 2010, 219 former SEC employees filed 789 statements saying that they would be representing a lobbyist or industry group in front of the SEC, according to the Project on Government Oversight. A complex law like Dodd-Frank accelerates that cycle, Connaughton said, as industry has even more incentive to hire people directly from the agencies to help them navigate the new regulations. “Put your time in at one of these regulatory agencies while they’re doing the Dodd-Frank rule making and it’s a license to print money when you come out,” he told me.

Of course, the revolving door doesn’t explain everything. A lot of the agencies are packed with ten-, fifteen-, and twenty-year veteran rule makers, who are motivated by the esprit de corps and have no interest in leaving for industry. “Money isn’t everything. If you leave, there’s the feeling that you’re in the audience, and no longer on the public policy stage,” the former CFTC rule maker told me. “That, and at the agency you’re actually performing a public service. People recognize that. It’s a factor.”

Also, the revolving door revolves both ways. Industry leaders who are later appointed as commissioners sometimes provide a valuable asset to rule makers. In agency parlance, “they know where the bodies are buried.” In many instances, these former industry officials head agencies at the end of their careers and have no intention of returning to the private sector. CFTC Chairman Gary Gensler, for example, spent eighteen years at Goldman Sachs, eventually rising to partner, before becoming one of the most outspoken advocates in recent years for better regulation. (In 1934, President Franklin Delano Roosevelt appointed Joseph Kennedy to head the brand-new SEC for this exact reason.)

Another swinging mace in this stage of the rule-making gauntlet is what Kelleher, the head of Better Markets, calls the “Wall Street Fog Machine.” “They come at you with this jargon,” he said. “They want to make you feel like it’s too complicated for you to understand. You’re stupid, and they’re the only ones who get it—that’s the end game.” This is particularly true when it comes to financial products, like customized swaps, which traders on Wall Street have spent the last decade designing precisely in order to swindle their clients.

“That’s how you make money. You make it so complicated the clients don’t understand what it is they’re buying and selling, or how much risk they’re taking on,” said Alexis Goldstein, who worked in cash equity and equity derivatives on Wall Street for several years, first at Merrill Lynch and then at Deutsche Bank, before joining the reform movement. The more complex the product, the higher the commission you can charge, and the less likely it is that there will be copycats driving down your profit margins with increased competition, she explained. In other words, complexity “isn’t a side effect of the system—it’s how the system was designed.”

Partly as a result of that business model, the system really is complicated—extraordinarily so. But that doesn’t mean it can’t also be regulated in the right ways, reformers say. How exactly that should be done is often a bone of contention. Take those customized swaps, for example. Right now, they’re traded in the private “over the counter” market, which means that they’re contracted bilaterally, often between a single bank and a counterparty during a phone call, and they aren’t transparent. Dodd-Frank gives the CFTC the power to regulate them, and many suggest that all trades should be conducted in clearinghouses, where customers can easily compare prices and are therefore less likely to be fleeced. Banks claim they’re too complex to be traded in that way.

Kelleher says that’s “just plain false.” A customized swap is nothing more than a bundle of so-called “two-legged” swaps, he said. If you unbundle them, which the banks themselves do, for lots of reasons, like hedging, there’s no reason we can’t regulate them, he said. Just as Wall Street used the excuse of complexity to hoodwink their clients, they’re now using the excuse of complexity to hoodwink their regulators—“it’s the greatest coup they’ve managed to pull off,” Kelleher said.

Others argue that customized swaps should be regulated but clearinghouses aren’t the answer. They worry that if all such trading is moved to clearinghouses, then those institutions will balloon, leaving them vulnerable to collapse, said Peter J. Ryan, a fellow at the University of California Washington Center whose research focuses on financial services policymaking. In other words, the clearinghouses themselves could become too big to fail.

The real problem here is not that rule makers can’t understand Wall Street’s complex financial products. It’s that they often don’t have enough information about those products or the systems that govern them to see the whole picture, and therefore to choose the best possible way to regulate. As it stands, rule makers, as well as the teams of agency researchers who help them, rely to a large degree on industry to provide data about things like banks’ internal trading. For proprietary reasons, only the banks have access to much of that information, and they have no incentive to share it. When regulators request data in public comment letters, industry rarely provides it; when they do, it’s often incomplete, one-sided, or missing crucial variables. “If there’s a datum that supports their argument, they produce it. If not, they don’t—why would they?” said Naylor of Public Citizen.

This is one of the main reasons the Volcker Rule has been such a mess. It requires that regulators determine what’s proprietary trading (when banks trade with their capital base for their own profit) and what’s market making (the backbone of a bank’s basic business model). A Credit Suisse lobbyist claimed recently that the metrics in the Volcker Rule were flawed since, in a test run, the bank found that proprietary trading and market making were indistinguishable. Credit Suisse’s claim will go into the rule makers’ record, which, in turn, can be used as evidence in court, should implementing agencies be sued. In that situation, rule makers and reformers are left without a card to play. “We can’t dispute [their claim], because Credit Suisse owns the data and won’t share it publicly,” Naylor said.

While Dodd-Frank provides rule makers with access to a variety of new information sources—the new Office of Financial Research, the SEC’s Consolidated Audit Trail, the CFTC’s Swaps Report—none of these tools do enough yet to keep them ahead of the financial industry’s constantly morphing business model, which changes every time an analyst invents a new product or a new way to trade it. “The regulators need to be able to pool all of this disparate information together into a complete picture of the financial system, which I’m not sure if they have the funding and coordination to do,” said Marcus Stanley, the policy director at Americans for Financial Reform, a coalition of consumer, labor, small business, and public interest groups. If a shape shifter shows up as a mouse, building a mouse trap will only get you so far.

It is in some ways a Sisyphean task. Here you have a group of rule makers—lawyers, economists, analysts, and specialists—sitting around a table. On one side, they’ve got the language of Dodd-Frank, which requires them, by congressional mandate, to effectively regulate new, never-before-regulated products in never-before-regulated markets that change by the month. On the other side, they’ve got a pile of reports, nine out of ten of which were provided by the same industry they’re trying to rein in. Meanwhile, industry lobbyists and lawyers are crowding into their conference rooms on a nearly daily basis, flooding their in-boxes with comment letters, and telling them that if they do something wrong, they’ll be personally responsible for squelching financial innovation and destroying the economy. “They’re scared to death,” said Naylor of Public Citizen, who compares the effect the financial industry has on rule makers to Stockholm syndrome. “No one wants to be the one who writes the rule that screws up the entire financial system.”

Wall Street is well aware of rule makers’ human vulnerabilities. Last year, when the SEC was writing rules governing money markets, the U.S. Chamber of Commerce, one of the financial industry’s staunchest allies, launched a public relations campaign in D.C.’s Union Station, which abuts the SEC building. They papered the place with dozens of bright purple and orange posters, billboards, and backlit dioramas on the train platforms and above the fare machines, asserting that money markets are strong: “Why risk changing them now?” It is not coincidental that a good number of rule makers began and ended their daily commute beneath those very banners. “We certainly want to get the attention of those who are capable of giving us the answers,” David Hirschmann, a Chamber of Commerce official, told Bloomberg at the time. One imagines him stifling a smirk.

Given the many whirling hatchets in this stage in the regulatory gauntlet, it’s a miracle any rules have emerged in the last couple years reasonably unscathed. But they have. When that happens, industry can appeal to the second stage in the gauntlet: litigation.

The Gauntlet, Stage 2: Cost-Benefit Analysis and a Conservative Court

On a sweltering summer day in 2011, the U.S. Court of Appeals for the D.C. Circuit—the de facto second most powerful court in the land, and the body that oversees the agencies—sent shockwaves through the regulatory apparatus.

In a now-infamous case, Business Roundtable v. SEC, a three-judge panel decided in favor of two of the financial industry’s biggest backers and overturned the SEC’s so-called “proxy access” rule. The rule would have made it easier for shareholders to elect their own candidates to corporate boards, allowing investors to put the brakes on out-of-control CEO pay. In the past decade, the SEC had attempted to establish a proxy access rule on three separate occasions, but each time it was cowed into submission by industry lobbyists claiming that the rule would destroy corporate growth. In 2011, emboldened by the language of Dodd-Frank, which explicitly authorizes the SEC to establish a proxy access rule, the agency tried once again.

Almost immediately after the final rule was published, the Business Roundtable and the U.S. Chamber of Commerce sued the SEC on the grounds that the agency’s cost-benefit analysis was inadequate. The judges agreed, marking the first time that the court had overturned a rule explicitly authorized by Dodd-Frank. But that’s not the part that sent shockwaves through the regulatory apparatus. The D.C. Circuit has overturned dozens of regulations over the years, including six SEC rules in the previous seven years, for lots of reasons, including inadequate cost-benefit analyses.

What sent the shockwaves was that this case didn’t seem to have anything to do with cost-benefit analysis at all. In the vitriolic decision, the panel of judges, all of whom were appointed by Republican presidents, lamented that due to “unutterably mindless” reasoning, the SEC had “failed once again” in its cost-benefit analysis. But the court never explained how exactly the agency’s twenty-three-page economic impact report could have done better. It simply appeared to disagree with the agency’s policy choice—and that, apparently, was grounds enough to overturn the rule.

“It was a shot across the bow,” said Michael Greenberger, a former regulator and professor at the University of Maryland Carey School of Law. The decision set a radical new precedent that would affect not only the SEC but all the independent agencies tasked with implementing Dodd-Frank, he said. It would also raise a powerful question: Should specific policy judgments be made by the agencies or the courts? “It upset the balance of power,” Greenberger said.

Part of the issue here is that the D.C. Circuit is packed with conservative judges. Eight out of eleven on that bench were appointed by Republicans, and Obama’s nominations to fill four vacancies have been consistently stymied by Republicans in Congress. The three-judge panel that decided Business Roundtable included two Reagan appointees, Judge Douglas Ginsburg and Chief Judge David Sentelle, a Jesse Helms protégé. (That’s the same Sentelle, by the way, who headed the panel that fired Whitewater independent counsel Robert Fiske, a moderate Republican, and replaced him with Kenneth Starr.) The third judge was George W. Bush appointee and consummate Ayn Randian Janice Rogers Brown. All three have made a bit of a name for themselves over the years as conservative activists, unafraid to mold precedent to fit their ideological ends. Their decision in Business Roundtable didn’t break that mold.

In one section, for instance, the judges ask why the SEC would have dismissed public comments suggesting that proxy access could exact a significant economic cost to corporations. Judge Ginsburg writes, “One commenter, for example, submitted an empirical study showing that ‘when dissident directors win board seats, those firms underperform peers by 19 to 40% over the two years following the proxy contest.’ ” But hold the phone. Or, better yet: WTF? Ginsburg fails to note here that the “one commenter” in question is one of the plaintiffs, the Business Roundtable. And as for that “empirical study”? It was conducted by an economic consulting group hired by that same plaintiff. In the rest of the decision, Ginsburg appears to ignore the precedent set by the foundational 1984 Chevron case, which, among other things, stressed that judges must afford “deference” to an agency’s interpretation of a statute, especially when it’s “evaluating scientific data within its technical expertise.”

Questionable judicial behavior aside, the Business Roundtable decision marked “the culmination of a trend empowering regulated entities to strike down regulations almost at will,” wrote Bruce Kraus, a former counsel at the SEC, in a subsequent report. For one, it established an inherent bias—reformers cannot, after all, challenge a rule in court to make it stronger. For another, it opened up the floodgates for future suits. If two of the industry’s most powerful organizations could sue the SEC and overturn a rule on such grounds, it was suddenly feasible for industry groups to sue any agency and overturn any new Dodd-Frank rule using the same arguments.

It was a point that did not go unnoticed by industry. “I would hope the agencies are taking to heart the potential consequences for Dodd-Frank rules,” said lead counsel Eugene Scalia, after the case was decided. (Scalia was also lead counsel on the case that overturned the CFTC’s rule on position limits a year later.) Industry groups have since brought a half-dozen more cases against agencies on practically identical grounds.

The Business Roundtable decision had the immediate effect of adding a whole new lethal section to the regulatory gauntlet, this time complete with flypaper and trapdoors. In the months following, the SEC’s progress on Dodd-Frank rule making is estimated to have slowed by half as the agency struggled to “bulletproof” its rules from future lawsuits. (“They have to be more than bulletproof,” Chilton told me, when I asked him if that was a factor for the CFTC, too. “They have to be layered in Kevlar. We go way beyond the requirements of the law.”)

The decision also had the effect of tipping the balance of power at independent agencies. Because the ruling made an agency’s cost-benefit analysis the centerpiece of litigation, economic models now hold disproportionate weight. If a single economist at an agency produces a report, based on a single model, that “demonstrates” that a rule would exact steep costs from a given industry, it acts like a trump card, according to former staffers at the SEC and the CFTC. Even if the majority of that economist’s colleagues disagree with him, his report will enter the public record, where it can be cited in a subsequent lawsuit and end up determining whether a rule is implemented or not. And economic models are like statistics; you can always find one that supports your position.

Along those same lines, in the wake of Business Roundtable a single commissioner—one of five on a bipartisan panel—now has the de facto power to torpedo a rule simply by questioning its economic impact in a public forum. For example, if a Republican commissioner disagrees with a rule, he will, under normal circumstances, be required to compromise with his fellow commissioners, or risk being simply outvoted. If at least three of his colleagues disagree with him, the rule will pass. The Business Roundtable decision seemed to suggest that a single commissioner’s verbal expression of disapproval could be used later as grounds for litigation and as evidence in court. Indeed, a year after the Business Roundtable decision, in the CFTC’s position limits case, part of Scalia’s argument rested on the fact that former CFTC Commissioner Michael Dunn had expressed misgivings about the rule.

“When a commissioner says publicly, ‘I’m concerned about the economic impact of this rule,’ that’s enough to lay the groundwork for a future case,” said Chilton. Several former rule makers and staffers at the CFTC and the SEC told me they would “not be surprised,” given the wording of these public expressions of disapproval, if the commissioners were getting their language directly from industry lawyers.

The most profound weapon the Business Roundtable decision introduced into the regulatory gauntlet is stupefying uncertainty. “It has been paralyzing for the agencies,” the former CFTC rule maker told me. How extensive must their cost-benefit analyses be? What kind of costs must be measured? And costs to whom—the industry or the investors? What were the criteria? “It’s like going into a class and not having any idea how your professor grades,” he said. “Everyone is trying to figure out how to move forward without getting sued.”

In the past, when an agency has been sued over a rule, that litigation has often marked the end of the rule altogether. Most are never re-proposed, and those that are often emerge pitifully weak. A lawsuit also has the effect of sending an agency back to the starting line, where it must run the gauntlet yet again, only this time with more attention from Congress—which is often the most lethal weapon of all.

The Gauntlet, Stage 3: Congress’s Retroactive Attacks

Many of us think of Congress as passing a law, shunting it off to the agencies, then washing its hands of the matter. Not the case. Lawmakers, and particularly those who voted against Dodd-Frank to begin with, have a number of tools at their disposal, which they’ve been using consistently since 2010 in an attempt to retroactively weaken the act.

One way has been to go after the regulators personally, lambasting them publicly, smearing their reputations, and wasting their time. In the wake of the Business Roundtable decision, for example, the House Financial Services Committee summoned former SEC Chairwoman Mary Schapiro to testify before Congress about why the SEC had failed in its cost-benefit analysis. The Senate Banking Committee, obliquely questioning her competency as a leader, also requested a series of investigations into why her agency’s cost-benefit analyses were falling short. While lawmakers have a legitimate right to ask the heads of regulatory agencies to testify, in the past few years Congress has seemed to blur the line between inquiries and something more akin to the Inquisition. All told, since 2009 Schapiro has been called to testify before Congress forty-two times.

“On one hand, those attempts to create a scandal don’t mean anything,” said Lisa Donner, the executive director of Americans for Financial Reform, referring to Congress’s harassment of Schapiro late last year. “But on the other hand, those performances waste an enormous amount of time. It plays a role. It’s intimidating.”

Also in the wake of Business Roundtable, Alabama Republican Senator Richard Shelby, as if on cue, wielded another of Congress’s favorite weapons for killing a law in the regulatory process. He introduced a bill suspending all the independent agencies’ major rules until they could be reviewed by the Office of Information and Regulatory Affairs (OIRA), the arm of the Office of Management and Budget that vets the cost-benefit analyses for new executive branch rules. Had that bill passed, it would have had the effect of stopping all Dodd-Frank rule making in its tracks indefinitely. It didn’t pass, but last summer a similar bill—this one bipartisan—the Independent Agency Regulatory Analysis Act, was introduced and passed in the House, before failing, in the nick of time, in the Senate.

In the two and a half years since Dodd-Frank passed, lawmakers have introduced dozens of other such bills, so-called “technical amendments,” that purport to change or clarify certain sections of Dodd-Frank but would actually gut, defang, or kill the act entirely. Because the bills are presented as mere tweaks to an existing law, and because industry cash is the only way many of these congressmen will get reelected, the bills are often voted on quickly, sometimes even coming up for a voice vote—a procedure usually reserved for uncontroversial issues.

Take the Swap Jurisdiction Certainty Act, for example. That bipartisan bill would have prevented the CFTC and the SEC from regulating derivatives trades conducted by American companies’ subsidiaries overseas. That’s insanity. First, if any of those subsidiaries—much less hundreds of them at once—were to fail, they would threaten and potentially take down the U.S. market. (Indeed, during the 2008 crash, U.S. taxpayer money was used to bail out those foreign-based subsidiaries too, for precisely that reason.) And second, if you only regulate the derivatives traded by American institutions on U.S. soil, American traders will simply scoot their business over to the thousands of subsidiaries abroad, making those unregulated markets even larger and more dangerous. In other words, had this bipartisan, innocent-looking bill passed, it would have undermined all the provisions in Dodd-Frank that attempt to regulate the derivatives market at all.

While the efforts of public interest groups and financial reform advocates, like Americans for Financial Reform, have succeeded thus far in keeping any of these bills from passing, they still have an effect behind the scenes. “There are instances where regulators say, ‘I know what we want to do with this, but if we go too far, Congress is just going to wipe out the whole thing, and I want what we’re doing to last,’ ” said Stanley, the policy director at Americans for Financial Reform. “That’s a calculation.”

A much more common weapon congressional opponents can wield after a law has been passed is a little less dramatic. By attaching riders to appropriations bills, Congress can simply forbid an agency from using its money to enforce one specific rule or another—and, of course, an unenforced rule is a dead rule. Lawmakers can do that even if Congress has passed another law that pointedly mandates that an agency take the action in question. In 2011, for instance, the House Appropriations Committee, which is dominated by Republicans, attached a rider to its funding bill preventing the U.S. Department of Agriculture from using its funds to finalize and implement a series of specific rules helping small farmers fight back against big livestock and poultry corporations. Despite the Obama administration’s attempts to get those exact rules implemented, the rider passed, tying the USDA’s hands and sending small farmers adrift. (For more on this, see Lina Khan, “Obama’s Game of Chicken.”)

Using the same mechanism, Congress also has the power to defund or severely underfund any agency that relies on congressional appropriations, including the CFTC and the SEC—a guillotine it has successfully used for decades. Just last year, for instance, the House Appropriations Committee cut the CFTC’s annual budget by $25 million, leaving it with an anemic $180 million. (For a sense of how little money this is, consider that San Bernardino, a county of about two million people in California, spends more than $180 million just on its public works department.) In 2011, congressional opponents of financial regulation blocked any increase in the SEC’s budget, despite or perhaps because of the agency’s massive new workload with Dodd-Frank. The Republicans’ argument against funding the independent agencies is delightfully absurd: since the agencies have not written and enforced rules fast enough, Congress should “punish” them, rather than “reward” them with adequate funding.

Yet another weapon Congress uses to retroactively kill rules in the rule-making process is to block presidential appointments. In January, another three-judge panel at the D.C. Circuit, led by the same conservative crusader who voted to overturn the SEC’s proxy access rule, Judge Sentelle, ruled that Obama’s recess appointments were unconstitutional. It was a radical decision that has the potential to invalidate rules and guidelines promulgated by the National Labor Relations Board and the Consumer Financial Protection Bureau over the previous year. The decision may be reconsidered (and, heaven help us, affirmed) by the Supreme Court, but in the meantime it brings the independent agencies further into Congress’s orbit.

Congressional Republicans are already using the decision to strong-arm the administration into weakening the CFPB’s independence. The only way Congress will allow Obama to reappoint CFPB Director Richard Cordray, or to install another head, Republican lawmakers say, is if the agency’s funding is brought under congressional appropriations controls. It’s an underhanded move that would eliminate the CFPB’s strongest asset—that it’s not subject to Congress’s manipulative purse strings—and may have the effect of gutting the entire agency, one of the strongest reforms to come out of Dodd-Frank thus far.

Gunning for the Finish Line

It’s true that Dodd-Frank started out as a compromise. “It was compromise on top of a compromise—a pile of compromises,” said Kelleher of Better Markets. And that’s what we can expect from the rule-making process too, he said. As it stands, the law’s record in its journey down the regulatory gauntlet has been mixed.

Some rules have been spectacularly hacked to death. Take, for example, a joint rule by the SEC and the CFTC, which was intended to force swaps dealers into maintaining more capital and to prevent horrible scenarios, like the collapse of AIG, from ever happening again. When it was first proposed, the rule required that every dealer trading more than $100 million in swaps be subject to regulatory oversight. A bill proposed by Illinois Republican Representative Randy Hultgren would have raised that threshold to $3 billion, but the agencies, intimidated by lobbyists’ doomsday scenarios and under the constant threat of litigation, raised it even higher: to $8 billion. The rule that eventually emerged now exempts about two-thirds of all swaps dealers from new capital requirements.

Scenarios like that can be deflating for reformers, but there have been wins, too. The CFPB remains a major success for consumer and investor advocates, and the SEC’s rule on whistleblowers appears to have emerged fairly intact. The CFTC’s brand-new Swap Data Repositories, which were designed to collect data about over-the-counter derivatives transactions, are also up and running, with the potential to shed some much-needed light on that shady industry. Whether the new repositories will be useful to regulators, or whether they will be undermined by a future lawsuit or lack of funding, is still unclear.

In some arenas, most notably the D.C. Circuit’s activist bench, reformers have faced crushing defeats. Yet all is not lost. In a case this past December, the U.S. District Court for D.C., a notch below the D.C. Circuit, handed the industry its first loss in years, deciding in favor of the CFTC’s rule requiring registration of mutual funds that engage in derivatives trading. It also marked the end of a five-case winning streak in lead counsel Eugene Scalia’s battle against agency rules. Judge Beryl A. Howell, an Obama appointee, decided against the U.S. Chamber of Commerce and the Investment Company Institute. (Both are now appealing that case to the D.C. Circuit.)

The Dodd-Frank rules that, against all odds, have emerged relatively intact underscore an important point: those who favor strong regulations are not without shields to protect rules against the many whirling weapons along the regulatory gauntlet. But in order to be effective, of course, those shields have to be used.

First and foremost, the White House has to get more involved in defending its own legislative achievements from being gutted in the rule-making process. In addition to appointing more judges to the D.C. Circuit (and that’s no guarantee of success; the judge who decided against the CFTC’s position limits rule was a Democratic appointee), the administration should deploy its best Justice Department lawyers to defend against the industry’s court attacks on Dodd-Frank rules. It should aggressively push to fill vacancies at the agencies with pro-regulation commissioners and other agency heads, and fight harder for bigger agency budgets. And the president himself should shine a spotlight on the process, and support the work rule makers do by paying personal visits to the agencies.

Second, the administration and its allies in Congress must address as quickly as possible the asymmetry of information in the agencies. In order to do their jobs, regulators must be armed with objective information to offset the biased or incomplete reports they receive from industry. This is particularly important for a small, underfunded agency like the CFTC, which doesn’t have the stable of researchers and economists employed by some of its brethren, including the Fed, the CFPB, and the FDIC.

The good news is that Dodd-Frank mandated the creation of a new office whose mission, in part, is to correct this imbalance of information. Housed in the Treasury and funded by bank fees, the new Office of Financial Research was conceived as a kind of giant weather station monitoring the financial industry in order to detect potential “storms” before they arrive. To that end, it’s statutorily authorized to gather, with subpoena power if necessary, granular-level data from financial institutions, including information about banks’ trading partners, positions, and transactions, and to make that data available to other regulatory agencies. The only question is whether the OFR will have the political backing it needs to fulfill those ends.

As of now, it has a very small budget and an advisory board heavily weighted with industry insiders. It’s also facing extraordinary political opposition, mostly from congressional Republicans, who have called for nothing less than its immediate abolition, arguing that it compromises data security and encroaches on the private sector. Making sure that the OFR survives and overcomes any legal challenges to its ability to share key information with regulators should be a top agenda item for congressional Democrats and the new treasury secretary, Jacob Lew.

Third, reformers and reform-minded analysts, lawyers, and academics need to do a better job of making their voices heard in the agencies. The Administrative Procedure Act, which governs the rule-making process, painstakingly enshrines public commentary, but as of now the vast majority of the substantive comments are coming from industry groups and their proxies, including bought-and-paid-for think tanks, trade groups, and consulting firms, which have the time and legal expertise to dedicate to such things. Launching a counterinsurgency in kind will obviously require a hefty chunk of change. Perhaps it’s a place where foundations can make a real difference. If more individuals and groups weighed in with smart ideas and substantive research to counter industry, it could help strengthen the rule makers’ hands.

Rule makers read and make note of every comment letter, and those letters have a cumulative effect of pushing policy, staff members at the SEC and the CFTC told me. That’s particularly true in instances where a rule-making team believes the best public policy differs from what industry is advocating. “To the extent that there was already an argument for a given position, a public letter will give a team support. There’s a sense of ‘See? Other people think this too,’ ” a former SEC staffer told me.

Reform groups like Americans for Financial Reform, Better Markets, and Public Citizen have thus far done a heroic job writing substantive, evidence-based letters of concern and organizing public letter-writing campaigns. Groups like Occupy the SEC, which is run by people with direct experience in the financial industry, have also submitted long, well-informed reports to the agencies and engaged with rule makers personally. Those voices make a big difference. But they go only part of the way toward countering the overwhelming influence that industry has enjoyed.

Fourth, what’s needed is the vigilance of the wider public. That may seem unreasonable to expect—who has the time or inclination to follow the grammatical arcana of rule making as it moves through the process? But in an age of Wikipedia, when millions of people write and edit tomes on obscure and complex issues on a daily basis, there’s no reason in theory why more Americans couldn’t weigh in on regulations that most of them clearly favor. Nearly 75 percent of voters, Republicans and Democrats alike, support “tougher” rules and enforcement for Wall Street financial institutions, according to a 2012 poll commissioned by a coalition of consumer, reform, and public interest groups.

Those same citizens should also prod their members of Congress. The political scientist Susan Webb Yackee has found that the attention of lawmakers is one of the primary factors that can help curb industry influence in the regulatory process. In the stew of congressional power struggles, and with the financial industry furiously underwriting lawmakers’ reelection campaigns, members of Congress have a variety of reasons not to stick their necks out. Their constituents should insist that they do.

Finally, there’s no mystery about how to stir up public attention: the press needs to do a better job of covering the regulatory process. Again, that may seem unreasonable, especially in an age when for-profit news has lost its business model. But it needs to be done. Those same editors, reporters, bloggers, and wonky producers at The Rachel Maddow Show who followed the passage of Dodd-Frank so closely two and a half years ago should tune in again.

As of early February, fewer than 150 of the estimated 400 rules from Dodd-Frank had been finalized, according to Davis Polk & Wardwell, a law firm that keeps track of such things. Nearly the same number had not even been proposed yet. Altogether, almost 65 percent of the law, including potentially significant hunks, like rules on extraterritoriality and systemic risk, have yet to be finalized.

In the next year or so, the vast majority of these new rules will enter the regulatory gauntlet, while agencies and industry will watch carefully as those that have already been finalized are implemented and enforced. Industry and its allies in Congress will scream bloody murder and claim that Dodd-Frank rules are imposing an insurmountable burden on industry, the economy, and the American people. Meanwhile, the agencies either will attempt to hold the line or, without the glaring light of public scrutiny, they will allow industry to take the lead again. What happens in the next year or two will have a profound effect not only on Dodd-Frank, but on the future of our financial industry. “We’re in the fifth inning,” said Kelleher. “The only way to guarantee you’ll lose is if you walk out before the end of the game.”

The post He Who Makes the Rules appeared first on Washington Monthly.

The Republican Case for Waste in Health Care | March 2, 2013

Conservatives love to apply “cost-benefit analysis” to government programs—except in health care. In fact, working with drug companies and warning of “death panels,” they slipped language into Obamacare banning cost-effectiveness research. Here’s how that happened, and why it can’t stand.

Why are you reading this when you could be doing jumping jacks?

And how come you’ve gone on to read this sentence when you could be having a colonoscopy?

You and I could be doing all sorts of things right now that we have reason to believe would improve our health and life expectancy. We could be working out at the gym, or waiting in a doctor’s office to have our bodies scanned and probed for tumors and polyps. We could be using this time to eat a steaming plate of broccoli, or attending a support group to help us overcome some unhealthy habit.

Yet you are not doing those things right now, and the chances are very strong that I am not either. Why not?

Even people who take their health very seriously calculate costs and benefits. Time spent at the gym, for example, is time we cannot spend playing with our kids, or making the money we need to pay for our ever-rising health insurance premiums. Submitting to a colonoscopy costs us time, money, and discomfort, and may not provide us with any personal benefit whatsoever—all of which we put into the mix before deciding if this is the day we have the test done.

In short, in our day-to-day lives we regularly apply a kind of informal cost-benefit analysis to the decisions we make about health care. To take another example, say you decide it’s worth the effort to lose twenty pounds and firmly resolve to do so. Then your mind will instantly turn to mulling what would be the most cost-effective way to go about it: eat less or exercise more, for example, or perhaps take a pill or undergo a liposuction operation or some combination of all of those.

[Chart: Where’s the Waste in Health Care? Source: Institute of Medicine]

In making this decision, you may well act on assumptions that are shortsighted or misinformed. You may ascribe more effectiveness to those interventions that seem easy (taking a diet pill) than to those that seem hard (giving up sweets and sweating it out in the gym more often).

Similarly, you may underestimate the risk that a liposuction will bring with it a hospital infection and other complications that will get you killed. Your decision may also be constrained by lack of money to throw at the problem, or lack of time, or competing ambitions. Yet however imperfectly, your mental energies will be directed at how to achieve your goal of losing those twenty pounds at the least cost in other things that matter to you.

This pattern of “rationing,” if you will, our own health and health care on the basis of perceived costs and benefits is arguably a defining feature of what makes us human. Certainly my fat cat does not deliberate the pros and cons of how and whether to overcome her obesity.

Yet here is a curious fact about humans, in the United States, at least. Though we spend more per person on health care than any other people on earth, with results that are no better and often worse than those of all other advanced nations, we have allowed conservatives and corporate interests to bind us with laws that explicitly forbid the use of formal cost-benefit analysis to determine how health care dollars are spent. Until we get our heads around this contradiction, we are in big trouble.

The stunning inefficiency of the U.S. health care system as a whole is now beyond dispute. To see the magnitude of aggregate waste, one only has to look at the gross disparities in how medicine is practiced in different parts of the country and with what results.

The best-known work in this area comes from the Dartmouth Atlas Project. For more than a decade, researchers there have systematically reviewed the medical records of deceased Medicare patients nationwide, including those who suffered from specific chronic conditions during their last two years of life. And by doing so, the researchers have uncovered striking anomalies that point to vast inefficiencies.

In Miami, for example, the Dartmouth researchers have discovered that the average number of doctor visits for a Medicare patient during the last two years of his or her life is 106. But in Minneapolis, among Medicare patients suffering from the same chronic conditions, the average number of doctor visits during the last two years of life is only twenty-six. Yet in both cities, all of these patients are equally dead at the end of two years.

The implication is unavoidable. The much higher volume and intensity of medicine as it is practiced in Miami as compared to Minneapolis may benefit some patients in some ways. But all the extra exams, as well as the extra tests, drugs, and operations that doctors in Miami regularly order for their patients, bring no aggregate gain in life expectancy.

By extrapolating from such disparities in medical practice around the country, Dartmouth researchers have developed the widely accepted estimate that roughly a third of all health care spending in the United States is pure waste or worse, mostly in the form of unnecessary and often harmful care—amounting to some $700 billion a year. Using a similar approach of comparing best and worst practices, a recent study by the Institute of Medicine concludes that overtreatment and other forms of waste in the system consume $750 billion annually. That’s roughly the cost of the entire Iraq War.

This finding is in line with that of another recent study published in the Journal of the American Medical Association (the house organ of America’s doctor lobby!). It calculates that on its current course the U.S. will spend nearly $11 trillion between 2011 and 2019 on health care that has no benefit to patients and that is often harmful to their health. Cutting that waste by just 4 percent a year, the study concludes, would be enough to keep health care spending in line with the growth of the economy, which in turn would be enough to make the federal government’s long-term deficits evaporate. And it would mean that wasteful health care would no longer crowd out care that actually improves and prolongs the lives of patients.

Yet while we know the system as a whole is grossly inefficient, it remains easy for those responsible for the waste to escape detection, let alone accountability. The biggest single reason is that, due to the insistence of conservatives allied with drug manufacturers and medical device makers, the federal government is not allowed to consider the cost-effectiveness of different treatments in deciding how to invest health care dollars.

To be sure, in recent years, the Obama administration has begun to underwrite research into the so-called “comparative effectiveness” of different drugs and treatments. It has done this primarily through a new entity called the Patient-Centered Outcome Research Institute (PCORI), which was created by the Affordable Care Act. Late last year, PCORI announced its first grants and is now funding research on, for example, how well stroke victims do when they receive rehabilitative therapy at home as compared to care in nursing homes.

That’s important to know and long overdue. Most ordinary Americans would be shocked to learn how little research has been done on the outcomes of different practices in medicine, including on the actual health effects of such common, costly, and invasive procedures as back and heart surgery. For example, from 2000 through 2005, American cardiologists performed more than seven million coronary artery angioplasties, atherectomies, and stent insertions. Yet only in recent years has there been any research to determine whether these procedures work any better than simple noninvasive treatments, such as aspirin or cholesterol pills, for patients with stable coronary disease. It turns out that they don’t.

Yet while the work of PCORI is important, it will never tell us what we most need to know to get the waste out of the U.S. health care system. That’s because, as PCORI’s executive director told a health care conference in 2011, “You can take it to the bank that PCORI will never do a cost-effectiveness analysis.”

PCORI’s work compares benefits to benefits, but not, as a matter of law, cost to benefits, and that’s a big deal. Such research does not tell us, for example, whether measures to prevent a stroke would be more cost-effective than measures to deal with its consequences. The same is true of all other government research into “comparative effectiveness.”

And by ignoring costs, such research also cannot tell us how to make sure that the money we spend on health care saves the most lives or reduces the most suffering. More fundamentally, even if the work of PCORI and other comparative effectiveness research could answer those questions, the government could not act on the information. That’s thanks to obscure but deeply consequential language inserted into the Affordable Care Act by the very corporate interests that stand to lose the most from our actually knowing which drugs and procedures offer the highest value.

The story of how this happened and what it means is full of perverse ironies. Leading up to the Obama years, mainstream health care policy experts and many politicians in both parties generally agreed on the need for the federal government to fund cost-effectiveness studies. As far back as 1996, a panel convened by the U.S. Public Health Service called for evaluating specific drugs and treatments based on how many years of healthy life they produced per dollar. When President George W. Bush signed the Medicare Modernization Act into law, he authorized $50 million to study the clinical effectiveness and appropriateness of health care services, including prescription drugs, while Bush’s Medicare program administrator, Mark McClellan, pushed for using such research in Medicare coverage decisions.

Conservatives have long championed the use of cost-benefit analysis in other realms, including those that involved putting a dollar value on both the length and quality of human life. In 1981, for example, President Ronald Reagan issued Executive Order 12291, which established the still-routine practice of evaluating consumer safety and environmental regulations based on the estimated number of lives saved per dollar. In 2003, under the Bush administration, the Environmental Protection Agency even went so far as to adopt the so-called “life expectancy” factor in cost-benefit analysis, which recognizes that more years of life are saved when children are spared death than when elders are. At the time, it wasn’t conservatives who objected, but some environmentalists and senior groups, who characterized the policy as imposing a “senior death discount.”

As late as 2008, Republican presidential nominee John McCain issued a position paper, entitled “Straight Talk on Health System Reform,” that reflected the bipartisan consensus on the need for government research into the actual value of different drugs and treatments. Based on the thinking of one of his health care advisers, Gail R. Wilensky, who had long championed the cause, the position paper stated, “We must make public more information on treatment options and doctor records, and require transparency regarding medical outcomes, quality of care, costs and prices. We must also facilitate the development of national standards for measuring and recording treatments and outcomes.”

President Obama came to office strongly sharing this conviction and committed to putting it into practice. But as it happened, even the administration’s most tentative moves in this direction were met by a firestorm of opposition from the drug and medical device lobbies. This opposition would have far-ranging consequences, including, in the end, an effective ban on government even sponsoring cost-effectiveness research in health care, let alone using it as a guide for setting health care policy.

The firestorm began in early 2009, when, as part of the stimulus bill, the administration proposed $1.1 billion (or one-twentieth of 1 percent of total U.S. health care spending at the time) for research into the efficacy and safety of different medical procedures. In an accompanying report, the administration explained that the research could help eliminate costly treatments. Immediately, pharmaceutical companies and device makers pounced.

“You have to be very careful,” warned W. J. “Billy” Tauzin, then president of the Pharmaceutical Research and Manufacturers of America, in explaining why he mobilized his industry’s legions of lobbyists in fierce opposition to the administration’s proposal. “An arrogant staffer writing a report was about to dramatically change the direction of health care in America,” Tauzin told the Los Angeles Times, adding ominously, “I hope it is a clear warning. There are a lot of beehives out there. You don’t just go around punching them.”

Soon, the entire conservative noise machine was in full swing. Just how much of this was the result of marching orders from Tauzin and his lobby, and how much was the result of Republican ideologues seizing on what they saw as an opportunity to destroy Obama’s chances for passing comprehensive health care reform, will remain forever hard to sort out, but the effects were devastating. On January 23, 2009, the Republican Study Committee sent out an alert that Obama’s true intent was to create “a permanent government rationing board prescribing care instead of doctors and patients” (emphasis in original). “Every policy and standard,” the statement warned, “will be decided by this board and would be the law of the land for every doctor, drug company, hospital, and health insurance plan.”

Within days, more than sixty patient advocacy groups, many of them funded by the drug industry, cosigned a letter to influential members of Congress making parallel arguments, which also appeared in a Wall Street Journal editorial and op-ed piece by George Will. The American Spectator took up the task of mobilizing the pro-life movement, writing, “Euthanasia is another shovel ready job for Pelosi to assign to the states. Reducing health care costs under Obama’s plan, after all, counts as economic stimulus, too—controlling life, controlling death, controlling costs.” By early February, Rush Limbaugh was outraging and terrifying his listeners by charging that a new federal bureaucracy “will monitor treatments to make sure your doctor is doing what the federal government deems appropriate.”

Republicans and the drug industry did not succeed in killing off cost-effectiveness research at that point, but they had sent a powerful shot across the bow. The stimulus bill passed and included some provision for the research Obama wanted. But politicians on both sides of the aisle were deeply intimidated by how a once second-tier issue that enjoyed support from wonks and politicians in both parties had suddenly become the target of a death ray of demagoguery.

The message would still be ringing in their ears later that year as the national agenda turned toward comprehensive health care reform. The architects of “Obamacare” knew full well, of course, that there was no conceivable way to expand access to health care, improve its quality, and simultaneously “bend the cost curve” unless medical practice became far more driven by information about the actual costs and benefits of different treatments. They knew it wouldn’t be necessary, appropriate, or even practical to use such cost-benefit analysis to determine what specific treatments doctors could give specific patients. But they also knew that such information was absolutely necessary to guiding rational decisions, on, for example, how much Medicare should pay, or whether it should pay at all, for treatments that offered fewer benefits than lower-cost alternatives.

After all, why should Medicare pay surgeons tens of thousands of dollars to perform costly, dangerous back surgeries if research established that the patients undergoing these operations do better with low-cost physical therapies? Isn’t medical practice supposed to be driven by science? And how is the public supposed to know how to allocate health care dollars if no one even knows the value of different procedures?

Accordingly, bills introduced by Democrats in both the House and the Senate called for the creation of some kind of entity to do the necessary research. But by the summer of 2009, the death ray was back, and now packing super-high voltage that threatened the political life of anyone who stood anywhere near these bills.

We all remember Sarah Palin’s sensational talk about “death panels.” And who can forget the images of lawmakers being assailed in town hall meetings that summer by constituents, even as erstwhile “moderate” Republicans like Charles Grassley fanned the flames? What many people don’t know, however, is how this firestorm forced the administration and Democrats in Congress to cave on the very measure most necessary to improving the quality of the U.S. health care system and, by extension, making it sustainable.

During the legislative battles that eventually led to the passage of the Affordable Care Act, Republicans repeatedly introduced amendments that would bar the government from any use of cost-effectiveness research in health care. For example, in September 2009, Republican Senator Jon Kyl introduced an amendment “prohibiting the use of taxpayer dollars to conduct cost-based research and ration care.”

Meanwhile, conservative Democrats, such as Senate Finance Chairman Max Baucus, though remaining publicly committed to the idea of government sponsoring some kind of “comparative effectiveness research,” also began introducing measures that would ensure strong industry influence over any entity conducting the research. After the August 2009 recess, Baucus introduced legislation stipulating that the research not be done by a federal agency, but rather by a nonprofit group with the drug and medical device industries well represented on its board.

At the same time, according to Brookings fellow Kavita Patel, who was then following the legislative maneuvering as a White House aide, the pressure was on all Democrats to back away from cost-effectiveness research. In an account published in Health Affairs in 2010, she recounted how “[a]n endless stream of organizations, citizens, researchers, and thought leaders weighed in with Senate Finance Committee staff,” with at least “some” warning “of the ramifications of using cost-effectiveness in the research.”

Their lobbying was effective, especially after Republican Scott Brown was elected to the Massachusetts Senate seat vacated after the death of Ted Kennedy. At that point, the Democrats’ balance of power was shifting away, and so was any thought of holding out for an independent federal agency that would study cost-effectiveness in health care. Instead, with Baucus driving the train, Democrats found a way to capitulate that would allow them to give the opposite impression to all but those who were paying very close attention.

In its final language, the ACA specifically bars policymakers from using cost-effectiveness as a basis for even recommending different drugs and treatments to patients. In practical effect, the ACA ensures that such research won’t even be done, let alone be used as a criterion for guiding how the nearly $2.6 trillion the U.S. spends on health care each year might be put to best use. Here’s what you need to know to understand how the fix was put in behind the scenes and why correcting it must become a high priority for health care reformers.

To understand the story, you have to be familiar with a basic concept that researchers around the world use to measure cost-effectiveness in health care. It’s known as a QALY. What’s a QALY? It stands for quality-adjusted life years, and despite its technical sound, it’s based on the common sense we all use in our day-to-day lives.

Let’s start by asking ourselves what it is that we mere mortals want from health care. Of course, most of us would like it to help us live to a ripe old age. But we also want health care to improve not just the quantity of our lives but, to the extent possible, their quality as well. If a genie came to you and said she would grant you any wish, you might blurt out that you wanted to live to be 110. But what if she granted your wish and at the same time condemned you to living out the rest of your years in extreme pain or in a coma? Obviously, quality of life is a factor, and often a huge one, in what we want from health care.

Researchers evaluating the effectiveness of different health care practices and policies recognize that, too. Say the comparison is between two different drugs. People who take the first live one year longer in perfect health than those who don’t. People who take the second drug also live one year longer than those who don’t, but as a side effect, they also go blind. Which is the better drug? Obviously, the first one.

This reality is captured by the concept of a QALY, which weighs the gain in life expectancy that a medical intervention is estimated to bring against its effects on a patient’s quality of life. There are many different ways to do this, but a common formula, used officially by such other advanced industrial countries as Canada, the United Kingdom, and Australia, goes like this: Take each year of extra life that a medical intervention is found to produce and multiply it by a variable that reflects the intervention’s effect on a patient’s quality of life. If the intervention results in one extra year of perfect health, the value of that variable is put at 1. If it results in something less than one year of perfect health, the value of that variable is put at some number less than 1.

If we apply this formula to the example above, here’s what we get: Because the first drug results in one year of extra life in perfect health, we multiply that year of extra life times 1, and since 1 x 1 = 1, the QALY score is 1. In evaluating the second drug, we would multiply the one extra year of life it produces by some number less than one to reflect the fact that the drug causes blindness. If that number were 0.75, we would multiply 1 times 0.75 and conclude that the second drug produced 0.75 years of quality-adjusted life.
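The arithmetic above is simple enough to put into a few lines of code. Here is a minimal sketch of the QALY calculation, using the article’s illustrative 0.75 quality weight for blindness (real evaluations derive such weights from patient surveys, not fiat):

```python
def qalys_gained(extra_years, quality_weight):
    """Quality-adjusted life years gained by an intervention:
    extra years of life scaled by a quality-of-life weight,
    where 1.0 means perfect health and 0.0 means death."""
    if not 0.0 <= quality_weight <= 1.0:
        raise ValueError("quality weight must be between 0 and 1")
    return extra_years * quality_weight

# Drug one: one extra year in perfect health -> 1 x 1 = 1 QALY
drug_one = qalys_gained(1.0, 1.0)

# Drug two: one extra year, but with blindness as a side effect,
# weighted at the illustrative 0.75 -> 1 x 0.75 = 0.75 QALYs
drug_two = qalys_gained(1.0, 0.75)
```

The contested part, as the article goes on to discuss, is the weight itself; the formula is just multiplication.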

Now right here you may be starting to have qualms about QALYs. By picking that number, 0.75, are we implying that the lives of blind people are 25 percent less valuable than the lives of sighted people? No, though the drug industry and many other powerful forces in our society would like you to conclude that any use of QALYs demeans the handicapped and sets them up to be killed off by death panels. But researchers use QALYs simply to rate the value of different health care interventions, not the value of people, and unless you believe that people simply don’t care if they are blinded or otherwise disabled by a medical practice, that’s surely appropriate.

In practice, interventions that help people to overcome or deal with their disabilities will score highly in QALYs, including interventions that elderly people with chronic disabilities particularly want and need. Writing in the New England Journal of Medicine, researchers Peter J. Neumann and Milton C. Weinstein note that populations with more impairment have more to gain from effective interventions, and that therefore interventions that benefit these populations will score high in QALYs per dollar.

Of course, there are many reasons why people might legitimately argue over just how much to discount the value of a drug that produces blindness. Is 0.75 the right number to use, or maybe 0.85? Who can say exactly how valuable it is to be able to see? Many studies show that humans tend to both overestimate how much they’ll enjoy getting what they wish for in life and underestimate how much they’ll suffer if their worst fears come true. People who become blind may not find it as bad as they thought it would be before they lost their sight.

Which is all quite fascinating, and a reason why we can’t just turn the whole matter over to experts, let alone computers. But certainly the right number to use in evaluating the second drug is some number less than 1, because otherwise we’d be ignoring the fact that most people do care if a pill makes them blind. (Would you?)

Whatever the philosophical and process issues involved in estimating QALYs, they pale in moral difficulty compared to a stance in which we simply ignore the full health consequences of health care. The concept of getting the most years of quality life for the least cost clearly captures what we all want from health care, including in our personal lives. Without the use of QALYs or some equivalent measure, measuring cost-effectiveness in health care is simply impossible.

And that brings us back to the Affordable Care Act. The ACA established the Patient-Centered Outcomes Research Institute, and gave it the mission of doing so-called "comparative effectiveness" research. But in so doing, it specifically forbids PCORI from using QALYs, or anything like them. The statute stipulates that PCORI "shall not develop or employ a dollars per quality adjusted life year (or similar measure that discounts the value of a life because of an individual's disability) as a threshold to establish what type of health care is cost effective or recommended."

More broadly and pointedly, it also states that the secretary of health and human services, who oversees Medicare, Medicaid, and the soon-to-be-up-and-running exchanges for private health insurance, “shall not utilize such an adjusted life year (or such a similar measure) as a threshold to determine coverage, reimbursement, or incentive programs.”

The implications of this language are far reaching, and explain why PCORI’s executive director so emphatically asserts, “You can take it to the bank that PCORI will never do a cost-effectiveness analysis.” As long as using the concept of a quality-adjusted life year is forbidden, along with “similar measures,” there is no way to measure the cost-effectiveness of any given drug or treatment. And that’s just what the big drug companies and medical device makers wanted all along.

Taken literally, it means that it is impossible for health care policymakers to even “recommend” one drug over another just because one costs, say, $1 million per pill and produces blindness as a side effect and another costs only $10 and leaves you with your sight intact.

Other language in the ACA ensures that it will be taken literally. The statutes that govern PCORI, for example, establish it as a nonprofit organization and then specifically require that “members representing pharmaceutical, device, and diagnostic manufacturers” are guaranteed seats on its board and allowed to serve on its expert advisory panels. Today, for example, PCORI’s board includes representatives from Pfizer, the world’s largest drug company; from Medtronic, a $16.2 billion manufacturer of pacemakers, stents, and other medical devices; and from a “patient advocacy” group called Friends of Cancer Research, whose funding derives from such Big Pharma players as Pfizer, Bristol-Myers Squibb, AstraZeneca, and GlaxoSmithKline.

The ACA also stipulates that scientists doing research under contract with PCORI may not publish their findings unless PCORI determines that the research is “within bounds”—the meaning of which, of course, the board itself gets to decide. If PCORI’s board deems it out of bounds, both the offending scientist and his or her institution are banned from receiving any other grant for a period of not less than five years, which for many scientists who specialize in evaluating the quality of health care would be a career ender.

Welcome to the real politics of health care.

How do we go forward? Explaining what’s at stake to the American public will not be easy. Every penny of that $750 billion the Institute of Medicine says we waste every year in health care goes into someone’s pocket. And the beneficiaries of that spending are not going to be quiet, from millionaire cardiologists performing unnecessary stent operations to drug and medical device makers peddling products that cannot be justified.

Not only are these interest groups highly concentrated and highly motivated, they are well funded and well practiced at manipulating public opinion. At the same time, even though prices in the U.S. health care system are primarily determined by a combination of market concentration, political manipulation, poor information, and sheer inefficiency, many citizens are predisposed to assume that more expensive treatments are always better than cheaper alternatives. And so, when told that “faceless bureaucrats in Washington” are busy putting a number on the dollar value of their lives in preparation for “rationing” their health care, they do indeed fear that “death panels” will decide who lives and who dies.

For those truly committed to the cause of health reform, overthrowing the ban on cost-effectiveness research must now move high on the agenda, and doing so requires clearly and forthrightly explaining what such research really is and why it's essential to everyone getting the best care possible. I have done the best I know how here to explain what's going on and what's at stake in ordinary language, but I certainly don't have it down to a sound bite. Others must also try.

The post The Republican Case for Waste in Health Care appeared first on Washington Monthly.
