March/April/May 2014 | Washington Monthly

FDR as Preacher, Campaigner, Bon Vivant

Three new books fill in our picture of Roosevelt

The Good Neighbor: Franklin D. Roosevelt and the Rhetoric of American Power
by Mary Stuckey
Michigan State University Press, 376 pp.

Roosevelt’s Second Act: The Election of 1940 and the Politics of War
by Richard Moe
Oxford University Press, 492 pp.

Young Mr. Roosevelt: FDR’s Introduction to War, Politics, and Life
by Stanley Weintraub
Da Capo Press, 388 pp.

The life of Franklin Delano Roosevelt, America’s 32nd and longest-serving commander-in-chief, has been closely examined in hundreds of books since his death nearly 70 years ago. The conclusions drawn by the majority of FDR scholars – James MacGregor Burns, William Leuchtenburg, and Alan Brinkley, among other preeminent 20th-century historians – are remarkably similar: President Roosevelt’s contribution to the American political tradition is vast. Their research portrays a patrician son turned populist governor of New York, a reformer (if ultimately a pragmatic one) who remained true to his 1932 presidential campaign pledge of “bold and persistent experimentation” to alleviate the depressed economy.

Roosevelt brought to the Oval Office an unmatched zeal for politics and propensity for action. He battled the ominous tide of totalitarianism in Europe while doubling down on his determination to achieve economic freedom for the vast majority of Americans. “We cannot be content, no matter how high that general standard of living may be, if some fraction of our people—whether it be one-third or one-fifth or one-tenth—is ill-fed, ill-clothed, ill-housed, and insecure,” FDR declared in what has been dubbed his 1944 Second Bill of Rights address. He promised, moreover, that a “basis of security and prosperity can be established for all—regardless of station, race, or creed.”

In No Ordinary Time, her Pulitzer Prize-winning 1994 book, historian Doris Kearns Goodwin describes Roosevelt as a dynamic leader who, alongside his wife and political partner, Eleanor, squared up to unprecedented global challenges in a rapidly modernizing world. Historian H. W. Brands’s more recent A Traitor to His Class, which also won national recognition, recast FDR as a fearless, even radical, political crusader who defied his aristocratic pedigree to fight for the Depression-plagued masses.

Those presidents we view as the nation’s most iconic are in a permanent historical spotlight. As New York Times executive editor Jill Abramson reminded us in an essay exploring newly released JFK biographies on the 50th anniversary of his death, there is invariably opportunity for more complete history. While 40,000 books have chronicled Kennedy since his assassination, Abramson concluded that “to explore the enormous literature is to be struck not by what’s there but by what’s missing.” Whether because of the longer passage of time since his death or because Roosevelt’s giant personality loomed over America’s political landscape during perhaps the most momentous decade since the nation’s founding, the books on FDR, to date, show the 32nd president to be anything but “elusive.” A trio of recently published Roosevelt books adds to the already plentiful historiography of FDR and his age, but not redundantly. Each deepens and complicates our understanding of Roosevelt.

Mary E. Stuckey, who specializes in political communication at Georgia State University, tackles the entirety of FDR’s rhetoric. The Good Neighbor: Franklin D. Roosevelt and the Rhetoric of American Power is an impressively comprehensive analysis of Roosevelt’s presidential rhetoric, which affirmed, before anything else, human dignity, and which culminated in his famous Four Freedoms speech. (The “Good Neighbor” of the title refers not to his Latin American foreign policy but to the notion of being a better neighbor to humanity here at home.)

The ideals underpinning FDR’s rhetoric made it possible for him to invite Americans “to participate through him in an embodied politics in which he represented the shared perspective of the audience.” Unlike current-day political language, whose goal is to win hearts and minds but principally votes, Roosevelt’s rhetoric assumed the country was unified in its struggle for economic recovery and, later, victory in World War II. Stuckey starts by reexamining a less historically cited line from Roosevelt’s first inaugural address: “In the field of world policy I would dedicate this Nation to the policy of the good neighbor – the neighbor who resolutely respects himself and, because he does so, respects the rights of others – the neighbor who respects his obligations and respects the sanctity of his agreements in and with a world of neighbors.” It is such a neighborhood, one motivated to advance the “betterment of humanity,” a progressive value system, that she contends is at the heart of Roosevelt’s politics. Collective confidence in democracy, according to Stuckey, was essential to mobilizing good neighborliness.

Roosevelt’s enormously popular Fireside Chats were his primary vehicle for communicating with the electorate, but their success would have been substantially harder to achieve without a unified populace with a stake in its government. The increasingly diverse generation listening to Roosevelt, a coalition of Jews, Catholics, and Protestants, found itself on common ground by virtue of shared Judeo-Christian values represented in New Deal politics and its argument for the common good. To Stuckey, this was a vastly appealing application of “religious identity in the service of politics.” Roosevelt’s Fireside engagement with the nation “brought mass media and the presidency together in ways that the nation had never seen,” adds Stuckey. FDR combined the language of an educator with the fervor of a minister eager to crusade – with “vicious language” – against the “foxes and weasels” or “appeasers” among the Republican opposition. In this undertaking, Roosevelt employed creative metaphors in arguing for action against Nazi Germany and totalitarianism in Italy and Japan: “No man can tame a tiger into a kitten by stroking it. There can be no appeasement with ruthlessness. There can be no reasoning with an incendiary bomb.”

Though he refused to take a stand in favor of anti-lynching legislation, Roosevelt’s public rhetoric on race suggested a new tolerance toward African Americans, even an unstated commitment to their economic security. The New Deal’s benefits for citizens of color, however, went largely unacknowledged. “By wielding ambiguous arguments on the issue of race, and through the more explicit interventions of his wife,” Stuckey writes, “Roosevelt managed to earn the support of African Americans without alienating southern conservatives.” Even if his support was never specific to blacks, as a genuine advocate for all of America’s dispossessed he was an advocate for people of color as well.

A former political advisor to President Carter, Richard Moe is not a historian by training, but his book is a compelling read. Roosevelt’s Second Act: The Election of 1940 and the Politics of War is a superbly reconstructed chronology of the 1940 campaign, chronicling Roosevelt’s journey through it amid international chaos. Despite his huge appetite for politics, FDR never intended to break Washington’s precedent of the two-term presidency. Journalists mocked FDR for his coyness on the third-term question, but with his health already declining, his instinct was to find a successor within his inner circle and among political allies. Only when he failed to find the right match among viable Democratic successors did Roosevelt decide to run for a third term.

Moe characterizes FDR’s special relationship with the American public as central to his argument for re-election: “The President sought to reassure a nation beset by fears of the unknown, and he did so with his now well-developed style of speaking with confidence in a calm, conversational manner.” In his two prior conventions, Roosevelt had bowed to the Southern wing of the Democratic Party and picked Texan John Nance Garner during the balloting for a vice presidential nominee. This time he zeroed in on the youthful Agriculture Secretary and New Deal stalwart Henry Wallace, vowing that he would not run without choosing his own vice president. When Roosevelt was finally nominated for a third term, he wrote to the Democratic National Convention that he would turn down the nomination unless it decisively accepted his vice presidential candidate: “I wish to give the Democratic Party the opportunity to make its historic decision clearly and without equivocation.” Before settling on Wallace, whom he called “honest as the day is long” and credited with sharing “the general ideas we have,” Roosevelt asked Frances Perkins, the Labor Secretary, for her support. When she agreed his selection was wise, Roosevelt enlisted her to share the news. “Would you mind going over to tell Harry [Hopkins]? You’d better not telephone. Probably someone is listening in on Harry’s wires.” On domestic politics, Wallace fervently championed an expansionist government to improve the livelihood of Americans.

From the convention into the general election campaign, Moe probes FDR’s fascinating Republican opponent, Wendell Willkie, the dark-horse candidate whose improbable nomination Washington Monthly founder Charles Peters recounted in his last book, Five Days in Philadelphia. A Republican convert whose opposition to the New Deal was less ferocious than that of his GOP counterparts, Willkie was at heart an internationalist like Roosevelt, yet was forced to reconcile his campaign positions with the “isolationist core of his new party” – as Moe wryly points out, “a contradiction that was widely noticed.” As he closes his survey of the 1940 campaign, Moe describes what he considers perhaps the most “moving” campaign oratory in American political history, delivered by Roosevelt on the eve of the election: “There is a great storm raging now, a storm that makes things harder for the world. And that storm is … the true reason that I would like to stick by these people [factory workers in Cleveland] of ours until we reach the clear sure footing ahead.”

Stanley Weintraub, who most recently chronicled Roosevelt and Churchill’s Christmas season together after the attack on Pearl Harbor, has written in Young Mr. Roosevelt: FDR’s Introduction to War, Politics, and Life what is – disappointingly – the least substantive of the three titles. Weintraub, a professor emeritus at Penn State University, offers a patchwork of Roosevelt’s naval history, updates on the Roosevelt family – including Theodore Roosevelt’s encouragement of young Franklin – and FDR’s own romantic dalliances. While Weintraub’s book is certainly colorful, it is a somewhat aimless adventure into Roosevelt’s youth. There is no serious historical analysis, and the book does not explore the impact of FDR’s Navy Department travels or the development of his global mindset toward world affairs. FDR wanted to join the military in his boyhood, but this was against his father’s wishes. While he never served in combat, Roosevelt’s hands-on exposure to naval and broader military matters as assistant secretary of the Navy during the Wilson administration informed his own presidency’s imperative to mobilize militarily in anticipation of World War II. And while he clearly took inspiration from Wilson’s worldview – manifested in his own role in the creation of the United Nations – Roosevelt eventually abandoned his advocacy of the controversial League of Nations.

In a chapter-by-chapter account, Weintraub displays Roosevelt’s responsibilities, ranging from inspections to the commissioning of battleships, as the Navy entered “a rebuilding mode” for which Roosevelt fiercely campaigned for support from the White House and Congress. When the Great War began, Roosevelt wrote his colleagues, “One of us…ought to go and see the war in progress with his own eyes; else he is a chess player moving his pieces in the dark.” Weintraub does show that Roosevelt’s Navy Yard years, perhaps more than any other moment of a tumultuous life, threatened his marriage. Roosevelt, then a handsome thirty-something, was rumored to have had a dalliance with Lucy Mercer, who had been Eleanor’s social secretary but now worked at the Navy Department. Roosevelt’s relationship with Eleanor became fraught with tension; there was talk of divorce, but FDR’s mother, Sara, made the stakes clear: “Whatever his qualms of conscience, if he [FDR] broke up his marriage, even with Eleanor’s reluctant consent… he would be disinherited.”

Weintraub’s narrow plane of focus gave him enormous potential to investigate Roosevelt’s transition from Navy Department official to vice presidential aspirant – or how the specific knowledge he obtained on the job aided him in World War II. But his book omits such details. In fact, the most valuable aspect of the book may be its seldom-viewed photographs of a lively Roosevelt before his paralysis from polio. At the conclusion, Weintraub abruptly weaves in FDR’s selection as the vice presidential nominee in the 1920 presidential campaign, an unsuccessful partnership with Democratic presidential candidate James Cox that nonetheless marked Roosevelt’s entry into the national limelight prior to his New York governorship and his own place at the top of the ticket in 1932.

Faults aside, these titles collectively show the merit of delving into whatever uncharted aspects of a president’s life may remain. Even in the case of FDR, there is more to explore. Stories of national unity are a welcome respite from the flame-throwing partisanship that is the backdrop of contemporary politics. Over the last two years, we have witnessed again the notorious curse of a president’s second term. Roosevelt’s challenges as he considered a third term largely derived from the major setbacks of his second term, namely his failed plan – an unconstitutional overreach, critics argued – to pack the Supreme Court. As the proverbial pendulum of American politics has swung, to varying degrees of intensity, from the Reagan to the Clinton to the Bush to the Obama years, one of the most hotly debated questions is to what extent Roosevelt’s presidency was transformational in the long haul.

Stuckey, for now, has an answer. “We still live in Roosevelt’s world,” she writes, “but [that] is changing.” By our own unraveling, we have lost camaraderie on the home front and a consensus about what constitutes a good neighborhood. In 2014, Stuckey laments, “a fractious mass public” faces a president “stuck in educative discourse” who has failed to reignite unity. Facing a ubiquitous and increasingly ideologically charged media – and in the absence of institutions that facilitate and help rally an informed public – we may ask ourselves whether a broad mobilized coalition such as the one Roosevelt created can survive today’s political pressures. Does not the very construct of blue and red states suggest the improbability of New Deal-style coalition-building? The politics of FDR, while combative, fused an increasingly diverse population in four consecutive election cycles. It is difficult to imagine a presidency in 2014 that could shore up the kind of national confidence that the Roosevelt administration instilled in the American people.

Abolition and Backlash

Efforts to ban capital punishment are growing. But keep this in mind: the last time the Supreme Court tried to end the death penalty, we got more executions.

Not too long ago it was difficult to find a politician in America who would publicly oppose capital punishment. Today, abolition is ascendant. Six states have scrubbed the death penalty from their books in the last decade—most recently Maryland, where governor and presidential aspirant Martin O’Malley signed repeal legislation last year.

The Maryland repeal was a victory for the Baltimore-based NAACP, which had lobbied hard for the measure. The civil rights organization is also promoting abolition in other states, and it has declared an audacious endgame. Once twenty-six states outlaw executions, the NAACP says, it will ask the U.S. Supreme Court to invalidate the death penalty nationwide by declaring it a “cruel and unusual punishment” under the Eighth Amendment.

A Wild Justice: The Death and Resurrection of Capital Punishment in America
by Evan J. Mandery
W. W. Norton & Co., 544 pp.

This may seem a quixotic quest, but both the NAACP and the Supreme Court have done it before. The justices shocked the nation by declaring executions “cruel and unusual” in the 1972 case of Furman v. Georgia. The decision was the product of a decade-long litigation campaign led by the Legal Defense Fund (LDF), a public interest law firm affiliated with the NAACP. At the time, Furman was widely interpreted as the end of capital punishment in America.

But the abolitionist triumph was short-lived. Furman became an outlet for all the anger the Supreme Court had prompted with its decisions on civil rights, criminal cases, and—soon after—abortion. Riding the wave of outrage, state politicians rewrote their death penalty statutes and dared the Court to invalidate them again. In 1976, in Gregg v. Georgia, the justices gave the green light for executions to resume, setting off a new spree of state killing in America.

How did the justices reach their unexpected and radical decision in Furman? And, having crossed the Rubicon, why did they reverse course four years later? Evan J. Mandery, a former capital defense attorney and a professor at New York’s John Jay College of Criminal Justice, answers these questions in his new book, A Wild Justice: The Death and Resurrection of Capital Punishment in America. As Mandery vividly shows, litigating the death penalty is like riding a bull. You can’t tame it—so just hang on tight and prepare to be thrown.

Mandery draws his title from a quote by Francis Bacon, who declared in 1625, “Revenge is a kind of wild justice; which the more man’s nature runs to, the more ought law to weed it out.” The difficulty of this weeding-out is the central drama of the story. Many government decisions have profound moral dimensions, but they are seldom as stark as with the death penalty. As Justice Potter Stewart wrote in Furman, “The penalty of death differs from all other forms of criminal punishment, not in degree but in kind.”

There was another problem with capital punishment that the Supreme Court was loath to acknowledge but unable to ignore: racism. In the South, where the death penalty has always been strongest, it is the historic and symbolic heir to the lynch mob. That’s why the LDF, which made its name in cases such as Brown v. Board of Education and Smith v. Allwright, had always found itself compelled to represent black defendants in capital cases.

Before 1963, the LDF fielded such cases individually, with no intention of ending the death penalty altogether. But that year, Justice Arthur Goldberg, a John F. Kennedy appointee, set in motion the chain of events that nine years later would lead to Furman. Goldberg published an opinion arguing that the Supreme Court should consider the constitutionality of the death penalty for crimes less than murder—an incredible breach of protocol, given that no litigant had brought the issue up. The justice’s brilliant young clerk, Alan Dershowitz, had drafted the opinion to focus on the racism of the death penalty system. But Goldberg’s colleagues prevailed upon the justice to remove all references to race.

Still, the LDF took note, and it soon launched a litigation campaign to challenge the constitutionality of the death penalty. True to Goldberg’s concerns, the organization initially focused on racial disparities in sentencing. But in the years it took the LDF to mount an elaborate statistical study and walk it through the lower courts, its mission morphed. The lawyers decided that it was impossible to represent only black capital defendants when they had the expertise to do the same for whites.

As its client list grew, the LDF’s legendary lead strategist, Anthony Amsterdam, hit upon a new idea: the organization would launch a rearguard action in the lower courts, with the goal of blocking all executions until the Supreme Court had settled the matter once and for all. The idea was to create a situation in which approving the death penalty would be tantamount to unleashing a bloodbath.

It would be the justices, however, who would choose the terms of a showdown. Over the years, the LDF and other lawyers supplied the Court with numerous abolitionist arguments to weigh. The one it never chose to tackle was race. The justices flatly refused to review a lower court decision dismissing the LDF’s statistical evidence. Mandery does not reveal why, but chances are that the justices’ political antennae were better than the LDF’s: going to bat for convicted rapists and murderers in the name of racial justice was, at this stage, likely to pollute both causes.

Instead, the litigation that culminated in Furman revolved around a different question: whether death penalty cases demanded special procedures above and beyond the protections afforded to ordinary defendants. In 1968, for example, the Court ruled that jurors opposed to capital punishment could not be screened out of death penalty cases, on the grounds that people disposed to hanging were generally more likely to convict a defendant in the first place.

However, the biggest debate was about how and when jurors should pick a death sentence. Under the prevailing system, conviction and sentencing decisions happened at once, with no opportunity for a guilty defendant to separately plead for his life. What’s more, the jurors who made these decisions had boundless discretion—they were given no guidance as to who should live and who should die. The result, the LDF argued, was that death sentencing was arbitrary, more lottery than law.

The Court wrestled with these questions for years without issuing a decision, rehearing a single case multiple times, arguing viciously, finding itself tripped up by departures and arrivals of justices. Finally, in 1971, the Court decided in a pair of cases that neither split trials nor jury standards were required. The LDF campaign appeared doomed.

But later that year, something odd happened: Justice Hugo Black, a self-proclaimed originalist who thought the death penalty was clearly constitutional, opened up another round. Black had chafed at the Court’s constant brooding on capital procedure, which he saw as a back-door effort to abolish the death penalty. He wanted the matter settled once and for all and saw the 1971 cases as a missed opportunity in this regard. They had been decided on the grounds of a Fourteenth Amendment claim, but remained silent on the more fundamental issue: Was the death penalty cruel and unusual punishment under the Eighth Amendment?

Certain of victory, Black persuaded the brethren to take on this ultimate question. It proved to be an epic mistake.

Nobody had been executed in the U.S. since 1967, thanks in part to the LDF’s moratorium strategy. Even so, the math in Furman initially appeared to favor death penalty supporters. Four conservative votes for retaining capital punishment were assured, and even liberals Thurgood Marshall and William Douglas voiced serious doubts about the Eighth Amendment challenge. But changes of heart, aided by brilliant legal argument from the LDF, gradually tipped the balance to the abolitionists.

In the final decision, a 5-4 monster in which each justice wrote separately, Marshall and his colleague William Brennan declared the death penalty cruel and unusual under all circumstances. Douglas wrote that capital punishment in America was fatally compromised by economic and racial discrimination. But in what would become the crucial opinions, Potter Stewart and Byron White focused on the rarity of death sentences. Stewart famously likened execution to being struck by lightning; White argued that so few people were executed each year as to make the death penalty ineffective, and therefore cruel.

The irony was that just a year earlier, both justices had ruled that the likeliest measures to discipline the punishment—split trials and sentencing standards—were not mandated by the Fourteenth Amendment. Now, they seemed to imply that the Eighth Amendment required what the Fourteenth did not: a way to rationalize death sentencing. This held open the possibility that states could restore capital punishment by overhauling procedures.

Mandery argues that Stewart’s opinion would have been stronger had he not compromised to bring White into the majority. Stewart struck the deal in part because he saw himself as hastening the inevitable. It seemed a good gamble at the time: few observers believed that conservatives would actually try to steer through the apparent loophole Stewart and White had created.

But they were wrong. As Mandery explains, conservative populists were thrilled to at last have found a Supreme Court decision they could actually fight back against. State after state passed new statutes that adopted split trials and standards for capital cases, and defendants were soon being sentenced to death again. In 1976, facing a hostile nation brandishing a set of revised laws that claimed to answer the critique of Furman, the Court allowed executions to resume.

Mandery writes about these events as they felt to the people who lived through them—as a thriller. His research is based in part on the incredible archival record of the death penalty cases, filled with snarky memos, threats to write extraordinary dissents, and anguished hand-wringing. Mandery also talked to law clerks and litigators involved in the saga. Armed with this dramatic material, he probes deep into the bumpy lives and brilliant minds of the lawyers and justices and highlights the moral and political logic underlying what seem like arcane legal debates.

What this virtuoso performance does not do, however, is trace just how the drama of 1963 to 1976 got us into the situation we find ourselves in today: clinging to a punishment that remains marred by inequality, too attached to its symbolism to see how it perverts the criminal justice system atop which it is perched.

Justice Harry Blackmun, who voted to uphold the death penalty throughout the 1970s, later famously denounced efforts to rationalize the punishment as fruitless. But as an institution, the Court has clung fast to the idea that better procedures make a better death penalty, despite overwhelming evidence to the contrary. In 1987, for example, the justices accepted that statistical evidence showed race was a strong factor in death sentencing decisions, but refused to see this as a constitutional problem. The Court, it seems clear, did not think it could afford another Furman.

But the system spawned in the wake of Furman churns out more death sentences than it can carry out—indeed, more than the public would countenance being carried out. For the U.S. to clear out its death rows within one year, we would have to carry out eight executions per day. Instead, we spend millions in legal fees to let defendants rot on death row while denying the survivors of their crimes the “closure” they were promised. As the LDF argued decades ago, there remains a gulf “between what public conscience will allow the law to say and what it will allow the law to do.”
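
The arithmetic behind that eight-a-day figure is straightforward (a back-of-the-envelope check, assuming the roughly 3,000 people then on death row nationwide, a figure not cited in the text above):

\[
\frac{3{,}000 \ \text{death row prisoners}}{365 \ \text{days}} \approx 8.2 \ \text{executions per day}
\]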

Meanwhile, the backlash to Furman had lasting effects on the criminal justice system more broadly. It imprinted a generation of politicians with an outraged, Nixonian brand of law-and-order politics, and it skewed the nation’s sense of proportionality as it embarked on a historic prison-building spree. When death is the ceiling for punishment, mandatory minimum sentences of twenty years don’t seem so bad.

It is likely that the fever pitch of the Furman aftermath could have been avoided had the Court moved more carefully, invalidating executions initially just for the crimes of robbery and rape, for example, before moving on to murder. There were enough opportunities to do so. But as Norman Mailer once put it, “Capital punishment is to the rest of all law as surrealism is to realism. It destroys the logic of the profession.”

The Court’s death penalty jurisprudence was not a diligent effort to follow a coherent strand of reasoning. It was a tug-of-war on quicksand.

The United States is now on a trajectory that looks remarkably similar to that of the 1960s, when governors refused to sign death warrants, states abolished their death penalty statutes, and executions eventually ground to a halt even before the Court banned them in Furman.

The shift is partly environmental. Law-and-order politics have lost much of their poison over the last decade, thanks both to the substantial drop in crime and to Democrats’ success in proving that they were just as “tough” as Republicans.

A stream of exonerations from death row has also given abolitionists a leg up. A turning point came in 2003 when Illinois Governor George Ryan commuted all of the state’s death sentences. His action disrupted the fundamental logic of the post-1976 system: with endless stages of legal review, and the fiction of standards and objectivity, no individual had to feel responsible for pulling the switch.

But as a rule, governments do not abolish the death penalty because doing so is popular. In the other industrialized democracies, abolition was an elite project, a decision political leaders closed ranks around and imposed on their citizens. Elite abolition is much harder in the United States, where justice is locally controlled, crime is an easy target for populists, and lethal violence is more widespread.

The LDF’s advocates struggled heroically to achieve a razor-thin victory that soon blew up in their faces. To succeed this time around, abolitionists will have to dig in for a battle that will be even longer and harder.

Buy this book from Amazon and support Washington Monthly: A Wild Justice: The Death and Resurrection of Capital Punishment in America

Taking on the Heiristocracy

History shows that growth alone won’t stop vast economic inequality.

In his annual address, the president of the American Economic Association, Irving Fisher, sounded the alarm about the issue he described as “the great peril today”—the “striking inequality of capital,” which, he argued, was “perverting” American democracy. It was “distressing,” he said, that wages were “actually decreasing while profits have been increasing.” “Something like two-thirds of our people have no capital,” said Fisher, while “the major part of our capital is owned by less than 2 percent of the population.” Moreover, “half of our national income is received by one-fourth of our population.”

In the year 2014, the theme of Irving Fisher’s address is resonant—which is why you may be surprised to learn that he gave it in 1919. Fisher, moreover, was a mainstream economist, very much in the neoclassical tradition. Milton Friedman called him “the greatest economist the United States has ever produced.” Today, the overriding concern of most mainstream American economists is what it has been for decades: economic efficiency. Questions of equity, on the other hand, have fallen by the wayside. But as Fisher’s address vividly demonstrates, concerns about distribution were once seen as vitally important.

Capital in the Twenty-first Century
by Thomas Piketty
translated by Arthur Goldhammer
Belknap Press, 696 pp.

In his important new book, Capital in the Twenty-first Century, French economist Thomas Piketty asserts that one of his chief goals is “putting the distributional question back at the heart of economic analysis.” As he notes, today the concentration of wealth has soared to levels that have not been seen in over a century. In recent years, the issue of economic inequality has moved out of the seminar rooms to become an issue of broad public concern. We’ve heard it in the rallying cry of the Occupy movement—“We are the 99 percent!”—and in Pope Francis’s thundering denunciations of capitalist excess and “trickle-down” economics. We’ve seen it in the surprising electoral success of economic populists like Elizabeth Warren and Bill de Blasio. Late last year, President Barack Obama gave a speech devoted to the subject, and the Democratic Party is pushing economic inequality as its major campaign theme for the 2014 midterm elections.

Inequality is on the political map, all right, and without question, the economist most responsible for putting it there is Thomas Piketty. Beginning in 2003, Piketty, along with his colleague and frequent coauthor Emmanuel Saez, published a series of ground-breaking studies documenting the dizzying rise of income inequality in the United States. Piketty’s innovation in this empirical work was his use of tax returns, rather than household surveys, to measure inequality. Tax returns give a more accurate picture of inequality than household surveys, which frequently fail to capture what is going on at the top of the income distribution. Partly this is because of nonresponse bias (rich people are far less likely to participate in such surveys) and partly it is due to the practice of “top-coding,” which caps the reported top incomes at a maximum value and thus prevents the exact amounts from being disclosed.

Piketty and Saez have demonstrated that while groups at the top of the income distribution have increasingly reaped disproportionately large economic rewards, in recent decades it is those with incomes at the very top—the top 1 percent and, even more, the top 0.1 percent—where the gains have been truly spectacular. And when Piketty and his colleagues examined inequality in other rich countries, the pattern held: since the 1970s, inequality has risen sharply in every developed economy, with the gains concentrated among the richest 1 percent. Saez’s most recent report found that in 2012, the top 1 percent of U.S. earners took in over a fifth of all income—among the highest levels ever recorded since the enactment of the income tax in 1913.

In Capital in the Twenty-first Century, Piketty sums up his research, tracing the history and pattern of economic inequality across a number of countries from the eighteenth century to the present, analyzing its causes, and evaluating some policy fixes. Spanning nearly 700 densely packed pages, it’s a big book in more than one sense of the word. Clearly written, ambitious in scope, rooted in economics but drawing on insights from related fields like history and sociology, Piketty’s Capital resembles nothing so much as an old-fashioned work of political economy by the likes of Adam Smith, David Ricardo, Karl Marx, or John Maynard Keynes. But what is particularly exciting about this book is that, due to advances in technology, Piketty is able to draw on data that not only spans a substantially longer historical time frame, but is also necessarily more complete and consistent than the records earlier theorists were forced to rely on. As a result, his analysis is significantly more comprehensive than those of his predecessors—and easily as persuasive.

Another of Piketty’s strengths is his enthusiastically interdisciplinary approach. One of the pleasures of this book is the way Piketty draws on sources as varied as the classic economic theorists, the great nineteenth-century social novelists like Jane Austen and Honoré de Balzac, recent research by historians and sociologists, and popular movies and TV shows like Titanic and Mad Men. He prefers the richness of these sources to the sterile mathematical models that are prevalent in contemporary academic work in economics. Indeed, he is scathing about much contemporary work in the economics field, condemning its “childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.”

Capital is a consistently engrossing read, encompassing topics including the stunning comeback that inherited wealth has made in today’s advanced economies, the dubiousness of the economic theory that a worker’s wage is equal to his or her marginal productivity, the moral insidiousness of meritocratic justifications of inequality, and more. But the book’s major strength lies in Piketty’s ability to see the big picture. His original and rigorously well-documented insights into the deep structures of capitalism show us how the dynamics of capital accumulation have played out historically over the past three centuries, and how they’re likely to develop in the century to come. From his analysis he’s derived several important lessons, none of them particularly comforting.

The first of these is that, contrary to widely held economic theories, there is no “natural” tendency of inequality to wane in advanced capitalist societies. In the 1950s, economist Simon Kuznets famously argued that in advanced economies, inequality looks like an inverted U curve, with inequality increasing during the early stages of industrialization, then decreasing as economic development spurs growth that benefits all. But as Piketty demonstrates, Kuznets’s inequality theory was based on fatally incomplete data—he only dealt with one country (the United States), from the years 1913 to 1948.

Economic inequality in the U.S. and Europe experienced a precipitous decline between World War I and World War II, but the causes were hardly natural. Inequality fell due to a series of shocks, including the physical destruction in Europe left by two massive wars and the bankruptcies of the Great Depression, and, he says, “above all” due to “new public policies”—policies that included rent control, nationalizations, steeply progressive income taxes, and “the inflation-induced euthanasia of the rentier class that lived on public debt.”

The decline in inequality that began between the wars and continued for decades afterward was a historical anomaly that is unlikely to repeat itself—something that many American liberals, still mired in New Deal and Great Society nostalgia, need to hear. Left to its own devices, capital begets capital, wealth becomes increasingly concentrated, and inequality spirals. Piketty warns us that “[t]he consequences for the long-term dynamics of wealth distribution are potentially terrifying, especially when one adds that the return on capital varies directly with the size of the initial stake, and that the divergence in the wealth distribution is occurring on a global scale.”

Piketty’s second major lesson is that, contrary to the fervent hopes of advocates ranging from supply-side economics’ true believers to many left-leaning economists, growth will not save us. As Piketty explains, economic growth does indeed tend to decrease economic inequality. Demographic growth, which is one component of economic growth, tends to decrease the influence of inherited wealth. For example, if families are larger, the average inheritance per child will be smaller, everything else being equal. In addition, high-growth economies tend to increase social mobility, because they provide greater opportunities for those whose parents weren’t part of the old economic elite. During the late 1990s—the last period of sustained, relatively high growth we saw in the U.S.—workers experienced a nearly full employment economy, which increased wages and caused inequality to decline, albeit only slightly and temporarily. But Piketty argues that the sustained, high rates of economic growth we saw in advanced economies in the twentieth century are very likely a thing of the past. He emphasizes that “there is no historical example of a country at the world technological frontier whose [inflation-adjusted] growth in per capita output exceeded 1.5 percent over a lengthy period of time” (one hundred years or so).

Piketty’s central insight is that inequality is an increasing function of the gap between r, the net rate of return on capital, and g, the rate of economic growth. Throughout most of history, the rate of return on capital has been durably and significantly higher than the rate of economic growth, and this trend is likely to continue, particularly given the likelihood of slower growth rates in advanced economies. The ratio of capital to income—as good a measure as any of the influence of wealth in society—peaked at about 700 percent in Europe in the nineteenth century, then plummeted to between 200 and 300 percent in the 1950s and ’60s. Currently it hovers around 600 percent and shows every sign of returning to its nineteenth-century peak. Indeed, plugging highly plausible assumptions into Piketty’s model yields predictions that Europe and America will experience rates of inequality and wealth concentration that will not only match but exceed their nineteenth-century peaks.
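
For readers who want the mechanics, the relationships in play can be written compactly. The following is a simplified sketch of the two “fundamental laws” Piketty develops in the book, with the caveat that the second holds only in the long run, under stable savings and growth rates:

\[
\alpha = r \times \beta, \qquad \beta \longrightarrow \frac{s}{g},
\]

where \(\alpha\) is capital’s share of national income, \(r\) is the net rate of return on capital, \(\beta\) is the capital/income ratio, \(s\) is the savings rate, and \(g\) is the growth rate. To make the stakes concrete: a capital/income ratio of 600 percent means \(\beta = 6\), so at a return of \(r = 5\) percent, capital claims \(\alpha = 0.05 \times 6 = 30\) percent of national income. As \(g\) falls, \(\beta\) drifts upward, and unless \(r\) falls in step, capital’s share of income keeps rising. That is why the gap between \(r\) and \(g\) matters so much.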

This dystopic scenario is deeply disturbing, but it doesn’t have to be our destiny. Piketty notes that “the history of income and wealth is always deeply political, chaotic, and unpredictable.” This brings us to Piketty’s third major lesson, which is that, happily, there is, in theory at least, a solution to the problem of soaring economic inequality, and a surprisingly simple one at that: taxes. Specifically, Piketty advocates a steeply progressive income tax and a global tax on wealth. He estimates that the optimal top marginal income tax rate is approximately 80 percent. Such a rate, he says, “not only would not reduce the growth of the U.S. economy but would in fact distribute the fruits of growth more widely while imposing reasonable limits on economically useless (or even harmful) behavior.”

Piketty’s proposed global tax on wealth would be progressive, annual, and applicable to all forms of capital, from real estate to financial and business assets. He advocates that the tax be based on bank information that is automatically shared, a move that would enable governments to manage banking crises more efficiently and would also promote financial transparency. Even a modest wealth tax could bring in significant revenue, and it would minimize the “substantial risk that the top centile’s share of global wealth will continue to grow indefinitely.”

Piketty admits that such a tax is “utopian.” International cooperation would be a formidable challenge, particularly given our globalized banking system with its vast panoply of seductive tax havens for the 1 percent. But precisely because our economy is now global, solutions to economic problems need to be global as well. More fundamentally, Piketty lacks a theory of politics that tells a persuasive story about how such changes might come about in the real world. In the twentieth century, it took the economic shocks of two world wars and the Great Depression to compel governments to adopt aggressively redistributionist tax policies. Absent such traumas, and given the declining power of institutions like labor unions that built support for such measures, what would make governments adopt such far-reaching reforms today?

Even in social-democratic Europe, many governments, including that of the French Socialists, the party Piketty supports, are strongly supporting lower taxes and fiscal austerity. In the U.S., while Democrats have adopted economic inequality as a major campaign theme, in substance their anti-inequality political agenda is, sadly, less than meets the eye. It appears to consist mainly of supporting a long-overdue increase in the minimum wage, a modest expansion of food stamp benefits, and, perhaps, tax cuts for lower-middle-class families. Yes, last year regulators implemented far tougher financial reforms than expected. But among most mainstream Democrats, there appears to be little appetite for enacting policies that would seriously upset Wall Street or challenge the entrenched power of the 1 percent. No one is talking about returning to an 80 percent marginal tax rate.

So are we doomed to a dystopic future after all—a Hunger Games-like society where a tiny but powerful elite lives in luxury and splendor while the masses toil and starve? Hardly. As Piketty argues, history suggests that the levels of inequality we are approaching tend to be politically unsustainable. As Piketty notes, there is an odd yet widely accepted idea that the U.S. is significantly more inegalitarian than Europe because we like it this way. Certainly, many U.S. conservatives appear highly invested in persuading their fellow Americans—and perhaps themselves—that this is the case. But recent opinion polls tell a rather different story.

Moreover, as Piketty rightly emphasizes, equality is our American birthright. We Americans are, after all, a proudly democratic people whose signature achievement—ridding ourselves of kings, queens, and a titled aristocracy—marked a revolutionary step forward in human progress. In the early years of the American republic, the U.S. truly was a more egalitarian society than Europe—Piketty’s historical data confirms this. During the Gilded Age of the late nineteenth century, when economic inequality soared and wealthy elites began to exercise unprecedented power over our society and our political system, many Americans—even free market economists like Irving Fisher—expressed alarm.

The inequality crisis America faced in the early twentieth century was a profoundly serious one, but in the end we rose to the challenge. After all, as Piketty points out, America is the country that, after World War I, literally invented “confiscatory” taxes—the type of progressive tax designed not so much to yield revenue as to “put an end to” large incomes and estates, because they are regarded as “socially unacceptable and economically unproductive.” Even an ardent apostle of capitalism like Fisher felt that the best solution to the early-twentieth-century inequality problem was a steeply progressive tax on the largest estates—with a rate that could climb as high as 100 percent for an estate more than three generations old. America’s twenty-first-century inequality crisis is, if anything, even more daunting and complex than the one we experienced a century ago. But as Piketty reminds us, the solutions to this problem are political, and they lie within our grasp. Should Americans choose to deploy those solutions, not only would we be doing the right thing, we’d be living up to our deepest traditions and most cherished ideals.

Buy this book from Amazon and support Washington Monthly: Capital in the Twenty-First Century

Journalism and the CNBC Effect

Before 2007, the press failed to see the growing rot in the U.S. financial system and warn the public. Why?

Recessions are, to a great extent, inevitable. But it is not inevitable that they be on the scale of the most recent one, the worst since the 1930s, which cost the economy somewhere between $6 trillion and $12 trillion. Nearly 8.7 million jobs were lost, each with untold ripple effects in terms of family life decisions, social mobility, divorce, alcoholism, kids not going to college, depression, and who knows what else on the misery index. The historic scale and devastation resulted from an unusual brew of fraud, regulatory laxity, and deliberate and misguided corporate decisions—much of which was preventable.

The Watchdog That Didn’t Bark: The Financial Crisis and the Disappearance of Investigative Journalism
by Dean Starkman
Columbia University Press, 368 pp.

That the press did not understand and aggressively convey this early enough to stop it must therefore rank as one of the great journalistic failures of recent decades. That good accountability journalism matters may seem obvious, but the dirty little truth is that many people believe we aren’t particularly worse off when the ranks of reporters shrink. As newspapers collapsed, a surprising alliance arose to say, Calm down, it’s not such a big deal. Conservatives rejoiced that the liberal media was shrinking while the conservative media commentariat was growing. Progressives shrugged that all this corporate-owned media was useless anyway in, for instance, stopping the Iraq War—Judith Miller, Judith Miller, Judith Miller—so who cares? And digital evangelists said the dead-wood newspaper industry would be replaced by an even better digital news apparatus, flush with iPhone-wielding citizen reporters and freelance blogger networks.

And it turns out that the difficulty of proving a negative makes it challenging to illustrate how accountability reporting matters. If we have no news about a scandal at city hall this week, is it because there is no corruption, or because the rot just hasn’t been uncovered?

Media critic Dean Starkman grapples with these and other questions in his new must-read book The Watchdog That Didn’t Bark: The Financial Crisis and the Disappearance of Investigative Journalism. Interestingly, Starkman does not argue that the press ignored the scandal entirely but instead describes something more nuanced: that it did pretty well covering the economy in the early stages (2000-2003) but then dropped the ball during the crucial years of 2004-2006, when the subprime market was exploding and something could still have been done to avoid the collapse. Starkman and his colleagues at the Columbia Journalism Review arrived at this conclusion in part by asking major business media to send them their best reporting on the subject. An analysis of the material basically concluded that most of what was written during that crucial time almost never told the story with great detail or alarm.

For the most part, reporters didn’t understand the subprime mortgage industry—nor did they understand that it was based on fraud. “Subprime was where the least sophisticated met the most ruthless,” in Starkman’s devastating phrase—as opposed to being just a slightly riskier (and therefore pricier) version of regular mortgages. A handful of reporters were documenting the predatory nature of the sales operations and their victims. But these stories didn’t tend to get picked up by the national media. And on the few occasions that media outlets did convey the shady nature of the business, they rarely explained that these new mortgages could be tied back directly to major financial players like Lehman, Citigroup, or AIG. “What the reporting failed to see was that the real danger was not in shoddy consumer products per se,” writes Starkman, “but in institutionalized, systemic corruption based on misaligned incentives to put as many of the most vulnerable borrowers into loans under the most onerous terms.”

Part of the problem, Starkman explains, is that most business reporting is geared toward helping investors, small or large, assess stock performance on Wall Street. While useful, this approach to assigning, framing, and reporting stories does lead to different questions being asked than if one were taking a “public interest” approach. The increased attention to stock prices coincides with what Starkman calls the rise of “access reporting”—what he dubs the “CNBC-ization” of business journalism. (To justify the special scorn for CNBC, Starkman reminds us that Jim Cramer called out on his March 11, 2008, show that “Bear Stearns is fine!” A week later it had to be bailed out by the Federal Reserve.)

Starkman explains how access reporting works:

I argue that within the journalism “field” a primal conflict has been between access and accountability.… But this is hardly a fair fight. Nearly all advantages in journalism rest with access. The stories are generally shorter and quicker to do. Further, the interests of access reporting and its subjects run in harmony. Powerful leaders are, after all, the sources for much of access reporting’s product. The harmonious relationship can lead to a synergy between reporter and source. Aided by access reporting, the source provides additional scoops. As one effective story follows another, access reporting is able to serve a news organization’s production needs, which tend to be voracious and unending.… Accountability reporting requires time, space, expense, risk, and stress. It makes few friends.

Indeed, at the same time the crisis was developing, many business publications were trumpeting the performance of the companies most implicated in the collapse.

Businessweek in 2004, for instance, described how Lehman Brothers had become a “deal making power.” A Fortune profile of Citigroup’s Charles Prince was slightly critical but only because he hadn’t gotten the stock price moving up, not because Citigroup was at the heart of a massive financial debacle that would soon bring it and the economy to their knees.

I would add another element: thanks to improved tools measuring traffic metrics, media managers could judge every piece of content and the financial value of each reporter. And the reality is that the scoop-per-dollar-spent ratio of access reporting is much better than that of accountability reporting. Most of the examples Starkman cites of those who did great journalism were people who spent weeks, if not months, on stories, such as the seven-month investigation by Southern Exposure magazine (notably, a nonprofit), a practice few media operations now tolerate. And it didn’t help that the subprime crisis began right around 2004, when cutbacks in mainstream journalism were accelerating.

One of Starkman’s most astute observations is that the pullback in regulatory restrictions not only coincided with but also fed the journalism problems—and this is the part that reporters tend to not discuss. “The impact of a compromised federal regulatory system profoundly affected not just mortgage lending but journalism’s coverage of it,” he writes. “Reporters rely on regulators for stories, and regulators rely on reporters for cases. Each provides support and public affirmation for the work of the other while educating the public and creating a context for further reform.”

One might think that when regulators pull back, reporters would do more, plugging the gaps left by government watchdogs. But accountability journalism is not countercyclical that way. Indeed, when regulators pull back it makes journalists’ jobs infinitely harder, and vice versa. They tend, therefore, to do less just when they should be doing more.

Starkman spends most of his time analyzing the output of newspapers and magazines, and while he mocks CNBC, he largely ignores network and local TV news (still the sources of news for most Americans), NPR, Fox Business Network, and, for that matter, major providers of digital news such as Huffington Post or Yahoo! He probably figured they were inconsequential players in this drama, and that in and of itself deserves mention.

And I wanted to know: In the few cases when reporters did write the stories critical of financial giants, why didn’t those stories explode on the public scene? Some interviews with the big-time business editors might have shed more light on why they ignored the growing evidence.

The digital world is supposed to be able to take great stories—whether they’re in a small hamlet or New York City—and bring them to massive audiences. Why didn’t that happen? To some extent it’s because the 2004-2006 run-up to the collapse predated the Twitter explosion and the rise of BuzzFeed, Upworthy, and other online news sources that are focused on accelerating virality. Starkman’s contempt for the digital evangelist types, aggregators, and other newfangled media is unfortunate, and may have blinded him to the profoundly important role that this new amplification system could play in accountability reporting of future scandals.

But we’ll never know: Would those same stories have gotten more traction now because of this new amplification sector? Or would they have been crowded out by lists of “Betty White and Animals” or “Cats Who Think They’re Sushi”?

My guess is that the new media ecosystem—mixing journalism and social media, top down and bottom up—can be very effective at making good journalism more impactful. But as Starkman ably shows, that’s not going to happen if the accountability journalism isn’t done in the first place. And for that we need reporters and institutions who are willing to invest significant sums in potentially low-return reporting that may have small readership but big public impact.

Buy this book from Amazon and support Washington Monthly: The Watchdog That Didn’t Bark: The Financial Crisis and the Disappearance of Investigative Journalism (Columbia Journalism Review Books)

Refuting U.S. Declinism, Sort of
https://washingtonmonthly.com/2014/03/02/refuting-u-s-declinism-sort-of/ (Sun, 02 Mar 2014)

Don’t worry about America losing its dominant position in the global economy. Worry instead about whether average Americans will benefit.

For as long as the United States has dominated the world as its economic, military, and cultural hegemon, fears of American decline have been as much a part of our national psyche as pride of place in our power—this even as our strongest challengers have failed to topple us. The Soviet Union collapsed in 1991, while Japan slid into recession just three years after Japanese investors bought New York’s iconic Rockefeller Center. Still, our anxiety over the threat of subjugation to a foreign rival persists.

Unleashing the Second American Century: Four Forces for Economic Dominance
by Joel Kurtzman
PublicAffairs, 320 pp.

Today’s pessimists especially worry about China. Over a single decade—in part through its invasion of American shelves and closets with cheap goods—the nation has catapulted into becoming the world’s second-largest economy. Since 2002, its gross domestic output has exploded from a mere $1.4 trillion to $8.2 trillion in 2012, fueling fresh concerns about a pending American twilight. As Carl Minzner of the Council on Foreign Relations observed in 2007, “China’s steady rise in economic and political influence is the single event that will reshape international politics in the 21st century.”

To counter the doomsayers, Milken Institute senior fellow Joel Kurtzman argues that America has nothing to fear and much to anticipate from its economic future. In Unleashing the Second American Century, Kurtzman, former editor in chief of the Harvard Business Review, makes the case for why America will continue to dominate the global economy and why its best days are yet to come. “[E]ven in an ailing world,” he writes, “America will grow stronger still.”

But while Kurtzman presents compelling evidence that the American economy as a whole will prosper, he doesn’t answer two questions of equal current salience: Who will benefit, and at what cost?

To build his case, Kurtzman identifies four interlocking forces that he sees as the foundation for future U.S. growth. First, he points to what he argues is a uniquely American brand of creativity rooted in a desire for “self-improvement.” The American way of thinking (“We are antsy, eager to move upward and onward”) enables the kind of “game-changing” innovation that other countries can’t match.

For example, Kurtzman points to the burgeoning robotics industry, which promises to transform not just high-tech manufacturing but also everyday life. U.S. companies are developing robots that could take on such dangerous jobs as mining, undersea exploration, or even firefighting, as well as more mundane tasks such as housework (think Rosie from The Jetsons). Overwhelmingly, Kurtzman says, robotics companies are based in America. At the end of 2012, America had 373 robot makers, compared to eighty-one in Japan, sixty-seven in Germany, and forty in France. While other countries might be ahead of America in deploying big but mindless industrial robots, American companies such as iRobot and Boston Dynamics are pioneering the development of nimble, “autonomous” robots that can essentially think for themselves. iRobot, for instance, manufactures a popular robotic vacuum cleaner called the Roomba, which cleans the floor and then automatically returns itself to a recharging station. Rosie’s robotic grandsire won’t be a German industrial robot, but the Roomba.

The rise of the U.S. robotics industry also exemplifies a second trend Kurtzman identifies: the renaissance in manufacturing. In addition to new manufacturing enterprises such as robotics, Kurtzman documents the relatively recent trend of “reshoring” jobs from overseas. Among the companies rebooting U.S. production are Ford, GE, Whirlpool, Caterpillar, Dow, and the cooler manufacturer Coleman, as well as foreign companies investing in America. BMW, for example, invested $1 billion in 2009 to expand its plant in Spartanburg, South Carolina. In 2012 the company built 301,519 vehicles and exported nearly 70 percent of them—many to China.

The reasons for this American resurgence, Kurtzman argues, include the higher productivity of American workers, rising Chinese labor costs, considerations such as proximity to markets (shipping stuff from China is time consuming and expensive), and the protection of intellectual property. (For another analysis of the reshoring phenomenon, see “Three Ways to Bring Manufacturing Back to America.”)

A third American advantage Kurtzman identifies is abundant energy from domestic shale reserves. According to one estimate he cites, America now holds 17 percent of the world’s total fossil fuel reserves—more than Saudi Arabia and Russia. Cheaper energy, especially for electricity made from natural gas, is helping to tip the balance for companies like Dow, for whom energy costs drive the decision about where to locate. Kurtzman names eighty-nine companies planning to invest a total of $65 billion in new domestic production facilities as a result of lower energy costs.

The final force that will undergird America’s economic preeminence, says Kurtzman, is the huge pools of capital that have been stockpiled by companies since the financial crisis. Much of that capital is parked overseas but still available for investment.

All of these advantages are real, and could potentially lead to higher GDP in the years to come than many economists are forecasting. But in his zeal to make his case, Kurtzman barely acknowledges, much less defends himself against, a host of inconvenient facts and counterarguments.

For example, Kurtzman says nothing about the possibility that the coming robotics revolution could lead to a net elimination of U.S. jobs, including many highly skilled ones. Yes, similar fears about automation have proven overblown in the past. But there is at least some reason to worry that this time may be different. Economists Erik Brynjolfsson and Andrew McAfee of MIT’s Sloan School of Management, for instance, have argued that increased automation is a big reason why recent recoveries, including the current one, have been relatively “jobless” and why employment and productivity growth have not been rising in tandem since the start of the century as they were before.

Kurtzman’s faith in America’s extraordinary level of entrepreneurial creativity—a faith drawn from his focus on a particular and mostly male world of high technology—also deserves scrutiny. As the Kauffman Foundation has shown, overall rates of business creation and entrepreneurship have basically stayed flat since 1996, with a slight decline from 2011 to 2012. And as this magazine has reported, when adjusted for population growth, rates of new business formation have fallen dramatically since the late 1970s (see Barry C. Lynn and Lina Khan, “The Slow-Motion Collapse of American Entrepreneurship”).

And what about those giant pools of capital companies have amassed? Kurtzman sees the presence of all this untapped capital as an argument for corporate tax reform, and he assumes that “unlocking” all this cash will lead to a boom in corporate investment. But he doesn’t address the almost certain critique that some companies, given the opportunity to “repatriate” the funds, might spend the money on CEO bonuses or stock buybacks rather than on new plants, equipment, and other pie-enlarging investments. This is in fact the critique leveled after the last repatriation tax holiday in 2004. One report by the Democratic staff of the Senate Permanent Subcommittee on Investigations charged that even after companies brought home more than $300 billion in offshore money, more jobs were lost than gained and research activities declined.

Even if the forces Kurtzman points to do ultimately allow America to retain its preeminence in the world economy—and I for one very much hope he’s right about that—that’s no guarantee that average Americans will benefit. Kurtzman at least nods to this darker potential reality: “[O]ur kids—if they do the work and get a college education—will have it better than we do” (emphasis by Kurtzman). It’s a whopper of a caveat: in 2012, 62 percent of Americans had no college degree, and just 28 percent had a bachelor’s degree or more.

Even manufacturing’s rebirth could wind up bypassing the masses. Unlike the factories of yesterday, when a high school degree could get you a job on the line, today’s factories require highly specialized and educated workers—and fewer of them.

Kurtzman’s solution is, of course, the default one touted by every policymaker and politician: expanding education, as well as “educating people about the need for education.” He is also unsympathetic toward those who don’t get with the program: “For people who dropped out of high school and lack motivation and interest, or who are unwilling to go back to school, I don’t see much hope. The economy and the riches it will be producing will pass them by.”

But what about the many millions of Americans who’ve “worked hard and played by the rules,” in Bill Clinton’s original formulation, but find themselves displaced in the future economy? What’s the right balance between compassion and competitiveness? How can government promote mobility without sacrificing growth? And if inequality is an inevitable outcome of growth, how much inequality are we willing to “pay” for as the price of economic dominance? The future of the second American century might lie not so much in the four factors that Kurtzman identifies but in how policymakers answer these questions.

In the famed 1941 essay “The American Century,” from which Kurtzman takes his inspiration, Henry Luce painted a vision of America that was as humanitarian as it was commercial. Not only should America be the “dynamic center of ever-widening spheres of enterprise,” he wrote, it should also be “Good Samaritan” to the world and to all “for whom progress and prosperity have been denied.”

In this century, America’s great humanitarian project is likely to be right here at home.

Buy this book from Amazon and support Washington Monthly: Unleashing the Second American Century: Four Forces for Economic Dominance

Backward, Christian Soldiers
https://washingtonmonthly.com/2014/03/02/backward-christian-soldiers/ (Sun, 02 Mar 2014)

To end the culture war that divides America, we need to recognize that each side has the same roots: the radical democratic individualism of America’s Protestant heritage.

For decades our nation has been divided, often bitterly, by the so-called culture wars. During Barack Obama’s presidency these unremitting tensions have manifested themselves in clashes over gay marriage, contraception coverage, and state-level abortion restrictions. Culture war loyalties and worldviews have also helped define who is on which side in the battles over debt and deficits, the size and role of government, and issues of economic fairness that have all but paralyzed the federal government and brought it twice to the edge of default.

The Twilight of the American Enlightenment: The 1950s and the Crisis of Liberal Belief
by George M. Marsden
Basic Books, 264 pp.

Why is it so hard for Americans to talk to one another across the ideological divide, let alone understand the anxieties of those on the other side? In his new book, the historian George Marsden offers a perspicacious answer. Professor emeritus at the University of Notre Dame, Marsden is a distinguished historian of American religion, best known for his biography of the eighteenth-century evangelical firebrand Jonathan Edwards, he of the harrowing sermon “Sinners in the Hands of an Angry God.” Marsden has spent years studying the fundamentalist strain in the American outlook, and in The Twilight of the American Enlightenment he succinctly describes our current polarized predicament: “Secular liberals believe their freedoms are threatened by a conservative Christian takeover. Conservative Christians believe that secularists are excluding their Christian views and using big government to expand their own dominion.” Each side exaggerates its own vulnerability and the malevolence of its opponents; each claims sole possession of the truth on fundamental questions of individual responsibility and public purpose.

The gnarled roots of this stalemate—American society’s inability to accommodate genuine pluralism—reach to the very conception of the nation, which joined the Enlightenment values of the Founders to the pieties of an overwhelmingly Protestant society. Contrary to popular belief, writes Marsden, the United States “does not have well-developed traditions or conceptions of pluralism that can embrace a wide range of both religious and nonreligious viewpoints.” True, the nation from the start had no established church, yet it had a powerful de facto Protestant culture, one that pushed dissenting groups to the margins in every debate about American identity and values. The nation’s much-celebrated commitment to religious diversity—as Catholics, Jews, and minority Protestant sects learned—was observed most often in the breach. Assimilation did not just mean losing one’s accent, it meant losing one’s distinctive worldview, especially if it clashed with reigning Protestant notions of liberty and individualism.

As intellectual and cultural history, The Twilight of the American Enlightenment will be catnip to those who remember the homogenizing pressures and chauvinisms of postwar suburban America. Marsden lays out how the prevailing social conformity and religious piety set the stage for the notorious rebellions of the 1960s. Describing a social ethos shaped by the emergence of mass consumption and the mass media, he summons the bland, Ozzie and Harriet uniformity of popular culture. Such uniformity, he asserts, was at least in part a reaction to deeper anxieties, and indeed the era’s erudite handwringing featured such heavyweights as Reinhold Niebuhr, Hannah Arendt, David Riesman, Daniel Bell, and Time publisher Henry Luce. Luce’s magazine both reflected the values of, and sought to shape the development of, every important American institution. Luce was especially concerned (with good reason) about the challenge posed to traditional Christian values by a skeptical and materialistic modernity, and viewed a certain kind of intellectually flexible liberal Protestantism as essential to shoring up the nation’s strength and ensuring its future, especially in the struggle with the Soviet Union.

The United States was, of course, the first nation to write the separation of church and state into law; the idea, popular in some right-wing circles today, that this is fundamentally a “Christian” nation finds little warrant in our founding documents—or in the convictions of the Founders themselves, most of whom were deists wary of religious enthusiasm and distrustful of ecclesiastical authority. As Marsden points out, however, the framers “took for granted that there was a Creator who established natural laws, including moral laws, that could be known to humans as self-evident principles to be understood and elaborated through reason.” The American enlightenment ideal “of a consensus based on rationally derived, shared humanistic principles,” he writes, “[was] congenial to a broadly theistic Protestant heritage.” This consensus enabled a broad agreement on moral and political boundaries to emerge and to persist for nearly two centuries.

By the 1950s, however, that consensus had been undermined, at least in elite circles, by Darwinism, the prestige of science, and the influence of Freudianism and psychology more generally. The carnage of World War II, the horror of the Holocaust, and the specter of nuclear annihilation all contributed to a deepening moral uncertainty, and appeals to self-evident God-given moral laws increasingly fell on deaf ears. Many of America’s leading postwar thinkers, meanwhile, worried that modern “mass” man, caught up in the imperatives of the modern economy and seduced by the blandishments of affluence, lacked the discipline for self-government. Marsden shows how old-fashioned American individualism, fed by postwar prosperity, undermined traditional morality in the 1950s: World War II and the automobile, for instance, had as much to do with loosening inherited notions of sexual morality as did any of the flamboyant excesses of the following two decades.

Though community values were ritually celebrated, everywhere Americans turned, they now saw individual freedom, self-determination, self-reliance, and self-expression invoked as American absolutes. That exaltation of the autonomous self—whether in the bedroom or the shopping mall—had deep roots in the nation’s Emersonian and radical Protestant traditions, and now, in the new environment created by America’s postwar global dominance, it grew to full bloom. “What is fascinating and revealing,” Marsden writes, “is how easily talk about the unassailable idea of ‘freedom’ in a political sense blended into an ideal of personal attitudes of independence from social authorities and restraints.”

True, the 1950s witnessed crowded churches and teeming parishes, yet in most other spheres of mainstream American life, religion was little seen or heard. Outside of a perfunctory morning prayer, religion as a cultural force was absent from the public schools. Countless social as well as legal barriers guarded against the potentially discordant consequences of competing religious beliefs, turning faith into a largely individual and private matter. Economic rather than religious imperatives played an ever-larger role in how Americans spent their time and conducted family matters, including deciding where to live. “Although American piety had always had some impact on American political culture,” Marsden writes, “it had had almost no impact on its economic culture, where the demands of efficient technique overwhelmed all else.” As a result, “the dominant public discourse of the era … was conducted mostly without religious reference.”

Marsden reminds us that until the 1960s it was rare for anyone but Protestant males to head a major American cultural, business, or educational institution, a fact that helps explain the remarkably homogeneous tone of the larger culture. Change, however, was coming, especially thanks to the civil rights movement, launched in earnest in the 1950s—a movement that, along with the women’s, antiwar, and sexual liberation movements of the 1960s and ’70s, would effectively shatter the old Protestant consensus. Within only a few years, the approach espoused by secular liberals—pragmatism and a faith in supposedly value-neutral science—would prove as anachronistic as appeals to natural law in responding to the social and political fragmentation of the 1960s and ’70s.

The challenge Americans face today, after fifty years of social and political turmoil, is how to build what Marsden calls a truly “inclusive pluralism.” A self-described Augustinian Christian, Marsden is a member of the Calvinist Christian Reformed Church; he urges readers to consider the work of Abraham Kuyper, a Dutch prime minister (1901-1905) who was also a journalist, political theorist, and Reformed theologian. Kuyper helped devise democratic political reforms in the Netherlands to accommodate “principled” or “confessional” pluralism. In that multiparty system, Marsden explains, the “primary task of government is to promote justice and to act as a sort of referee” among a multiplicity of religious and secular groups. Accordingly, not just government but also mediating institutions such as churches, families, and businesses are granted considerable authority within their own spheres.

Marsden discerns lessons for the United States in Kuyper’s efforts. Historically, he writes, much of America’s cultural dynamism and impetus for change has come from its various subcommunities—he points to the vitality of evangelical churches as one contemporary example, noting that not all evangelicals are politically conservative. And yet, as the sociologists Robert Putnam and David Campbell have recently demonstrated, evangelical churches are on the wane. In fact, among Americans under thirty, one-third profess no institutional religious affiliation at all. This is an unprecedented development, and Putnam and Campbell attribute it to how the culture wars have politicized religion, deepening the nation’s divisions.

Marsden is clear-eyed in describing the forces, especially the economic ones, which did so much to weaken community ties and authority in the 1950s and ’60s. Those pressures have hardly lessened. If anything, community is more ad hoc and transient than ever. The historian Mark Lilla addressed such developments in a 1998 essay in the New York Review of Books, “A Tale of Two Reactions.” Analyzing much of the same cultural and political history as Marsden, Lilla asked why the social revolutions of the ’60s and the subsequent Reagan revolution of the ’80s have embedded themselves so deeply in American society. Those seemingly incompatible movements, he concluded, were in fact “complementary, not contradictory,” each finding its spiritual source and justification in the radical democratic individualism of America’s Protestant heritage. Reaganism, Lilla argued, was “an extension of the same utopian vision” as the antinomianism of the ’60s—one viewing economic freedom as an inalienable right, the other individual personal and sexual expression. How, Lilla lamented, “have our notions of equality and individualism been transformed to support a morally lax yet economically successful capitalist society”? Marsden wonders, along similar lines, how Americans can continue to find in individual self-determination and self-fulfillment “a complete standard for a public philosophy that would adjudicate the hard questions that arise when individual interests conflict.”

The Twilight of the American Enlightenment helps explain why so many Americans think this way, and why doing so threatens our tradition of self-government—which, when all is said and done, remains the only real guarantee of individual freedom. Can we alter course and develop the habits of communal solidarity and self-denial needed to loosen the straitjacket of what Christopher Lasch long ago called America’s culture of narcissism? Marsden is hopeful; Americans, he believes, will come eventually to realize that self-seeking rarely brings the happiness promised in the Declaration of—what else?—Independence. If we don’t come to this recognition, we may soon discover that we have more in common with Jonathan Edwards’s sinners in the hands of an angry God than we like to think.

Buy this book from Amazon and support Washington Monthly: The Twilight of the American Enlightenment: The 1950s and the Crisis of Liberal Belief

The Origin of Ideology
https://washingtonmonthly.com/2014/03/02/the-origin-of-ideology/ (Sun, 02 Mar 2014)

Are left and right a feature (or bug) of evolution?

If you want one experiment that perfectly captures what science is learning about the deep-seated differences between liberals and conservatives, you need go no further than BeanFest. It’s a simple learning video game in which the player is presented with a variety of cartoon beans in different shapes and sizes, with different numbers of dots on them. When each new type of bean is presented, the player must choose whether or not to accept it—without knowing, in advance, what will happen. You see, some beans give you points, while others take them away. But you can’t know until you try them.

In a recent experiment by psychologists Russell Fazio and Natalie Shook, a group of self-identified liberals and conservatives played BeanFest. And their strategies of play tended to be quite different. Liberals tried out all sorts of beans. They racked up big point gains as a result, but also big point losses—and they learned a lot about different kinds of beans and what they did. Conservatives, though, tended to play more defensively. They tested out fewer beans. They were risk averse, losing less but also gathering less information.

One reason this is a telling experiment is that it’s very hard to argue that playing BeanFest has anything directly to do with politics. It’s difficult to imagine, for example, that results like these are confounded or contaminated by subtle cues or extraneous factors that push liberals and conservatives to play the game differently. In the experiment, they simply sit down in front of a game—an incredibly simple game—and play. So the ensuing differences in strategy very likely reflect differences in who’s playing.
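
To make the paradigm concrete, here is a toy simulation of a BeanFest-style session. This is a hypothetical sketch, not the researchers’ actual task code; the payoff values, trial count, and accept_prob parameter are invented for illustration:

```python
import random

def play_beanfest(accept_prob, trials=100, seed=1):
    """Toy BeanFest-style session: each new bean secretly adds or
    subtracts points, and a player learns a bean's value only by
    accepting it."""
    rng = random.Random(seed)
    points = 0
    beans_learned = 0
    for _ in range(trials):
        value = rng.choice([10, -10])   # hidden payoff of this bean
        if rng.random() < accept_prob:  # willingness to try the unknown
            points += value
            beans_learned += 1          # trying a bean reveals its value
    return points, beans_learned

# An exploratory player samples most beans; a defensive one samples few.
for label, p in [("exploratory", 0.9), ("defensive", 0.3)]:
    pts, known = play_beanfest(p)
    print(f"{label}: net points = {pts}, beans learned = {known}")
```

On this toy model, the exploratory strategy learns the values of far more beans while swinging to bigger gains and losses, which is the same trade-off the Fazio-Shook results describe.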

Predisposed: Liberals, Conservatives, and the Biology of Political Differences
by John R. Hibbing, Kevin B. Smith, and John R. Alford
Routledge, 304 pp.

The BeanFest experiment is just one of dozens summarized in two new additions to the growing science-of-politics book genre: Predisposed: Liberals, Conservatives, and the Biology of Political Differences, by political scientists John R. Hibbing, Kevin B. Smith, and John R. Alford, and Our Political Nature, by evolutionary anthropologist Avi Tuschman. The two books agree almost perfectly on what science is now finding about the psychological, biological, and even genetic differences between those who opt for the political left and those who tilt toward the right. However, what they’re willing to make of these differences, and how far they are willing to run with them, varies greatly.

Hibbing, Smith, and Alford, a team of researchers at the University of Nebraska-Lincoln and Rice University who have published some of the most penetrating research on left-right differences in recent years, provide a lively and amusing tour of the landscape. But they mostly just walk up to and peer at the overriding question of why these apparently systematic left-right differences exist in the first place. Their explanation for the “origin of subspecies,” as they put it, is tentative at best. Tuschman, by contrast, has written a vast and often difficult book that attempts nothing less than a broad evolutionary explanation of the origins of left-right differences across countries and time—and does so by synthesizing such a huge body of anthropological and biological evidence that it’ll almost bury you. Whether the account deserves to be called merely thought-provoking or actually correct, though, will be up for other scholars to evaluate—scholars like Hibbing, Smith, and Alford.

Let’s begin with the large body of shared ground. Surveying the evidence with a fair mind, it is hard to deny that science is revealing a very inconvenient truth about left and right: long before they become members of different parties, liberals and conservatives appear to start out as different people. “Bedrock political orientations just naturally mesh with a broader set of orientations, tastes, and preferences because they are all part of the same biologically rooted inner self,” write Hibbing et al. The research demonstrating this is so diverse, comes from so many fields, and shows so many points of overlap and consistency that you either have to accept that there’s really something going on here or else start spinning a conspiracy theory to explain it all away.

Our Political Nature: The Evolutionary Origins of What Divides Us
by Avi Tuschman
Prometheus Books, 500 pp.

The most rock-solid finding, simply because it has been shown so many times in so many different studies, is that liberals and conservatives have different personalities. Again and again, when they take the widely accepted Big Five personality traits test, liberals tend to score higher on one of the five major dimensions—openness: the desire to explore, to try new things, to meet new people—and conservatives score higher on conscientiousness: the desire for order, structure, and stability. Research samples in many countries, not just the U.S., show as much. And this finding is highly consequential, because as both Hibbing et al. and Tuschman note, people tend to mate and have offspring with those who are similar to them on the openness measure—and therefore, with those who share their deeply rooted political outlook. It’s a process called “assortative mating,” and it will almost certainly exacerbate our current political divide.

But that’s just the beginning of the research on left-right differences. An interlocking and supporting body of evidence can be found in moral psychology, genetics, cognitive neuroscience, and Hibbing’s and Smith’s preferred realm, physiology and cognition. At their Political Physiology Lab at the University of Nebraska-Lincoln, the researchers put liberals and conservatives in a variety of devices that measure responses like skin conductance (the moistening of the sweat glands) and eye gaze patterns when we’re exposed to different types of images. In doing so, Hibbing and his colleagues have been able to detect involuntary physiological response differences between the two groups of political protagonists when they encounter a variety of stimuli. Once again, it’s hard to see how results like these could mean anything other than what they mean: those on the left and right tend to be different people.

Indeed, here is where perhaps some of the most stunning science-of-politics results arise. Several research groups have shown that compared with liberals, conservatives have a greater focus on negative stimuli or a “negativity bias”: they pay more attention to the alarming, the threatening, and the disgusting in life. In one experiment that captured this, Hibbing and his colleagues showed liberals and conservatives a series of collages, each comprised of a mixture of positive images (cute bunnies, smiling children) and negative ones (wounds, a person eating worms). Test subjects were fitted with eye-tracker devices that measured where they looked, and for how long. The results were stark: conservatives fixed their eyes on the negative images much more rapidly, and dwelled on them much longer, than did the liberals.

Liberals and conservatives, conclude Hibbing et al., “experience and process different worlds.” No wonder, then, that they often cannot agree. These experiments suggest that conservatives actually do live in a world that is more scary and threatening, at least as they perceive it. Trying to argue them out of it is pointless and naive. It’s like trying to argue them out of their skin.

Perhaps the main reason that scientists don’t think these psychological and attentional differences simply reflect learned behaviors—or the influence of cultural assumptions—is the genetic research. As Hibbing et al. explain, the evidence suggests that around 40 percent of the variation in political beliefs is ultimately rooted in DNA. The studies that form the basis for this conclusion use a simple but powerful paradigm: they examine the differences between pairs of monozygotic (“identical”) twins and pairs of dizygotic (“fraternal”) twins when it comes to political views. Again and again, the identical twins, who share 100 percent of their DNA, also share much more of their politics.
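
For readers curious about the arithmetic behind such estimates: in the classic twin design, heritability is approximated with Falconer’s formula, twice the gap between the identical-twin and fraternal-twin correlations. A minimal sketch (the correlation values below are invented placeholders, not figures from the studies discussed here):

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: h2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ
    are the within-pair trait correlations for identical and fraternal
    twins, respectively."""
    return 2 * (r_mz - r_dz)

# Hypothetical correlations for a political-attitudes scale:
h2 = falconer_heritability(r_mz=0.6, r_dz=0.4)
print(f"estimated heritability: {h2:.0%}")  # -> 40%
```

The logic: identical twins share essentially all their DNA and fraternal twins roughly half, so the extra similarity of identical twins is attributed to genes.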

In other words, politics runs in families and is passed on to offspring. Hibbing and his coauthors suspect that what is ultimately being inherited is a set of core dispositions about how societies should resolve recurring problems: how to distribute resources (should we be individualistic or collectivist?); how to deal with outsiders and out-groups (are they threatening or enticing?); how to structure power relationships (should we be hierarchical or egalitarian?); and so on. These are, of course, problems that all human societies have had to grapple with; they are ancient. And inheriting a core disposition on how to resolve them would naturally predispose one to a variety of specific issue stances in a given political context.

All of which brings us to the really big question. It is difficult to believe that systematic psychological and biological differences between those who opt for the left and the right in different countries—differences that are likely reflected in the genetic code—arose purely by chance. And yet, providing an evolutionary explanation for what we see is fraught with peril: to put it bluntly, we weren’t there. We didn’t see it happen.

Moreover, in evolution, some things happen for an explicitly Darwinian “reason”—traits become more prevalent or fixed in populations because they advanced organisms’ chances of survival and reproduction in a particular environment—while others happen more accidentally. Some complex social traits may emerge, for instance, because they are a fortuitous by-product of other, more fundamental traits laid down by Darwinian evolution.

A good example of such a trait may be religion. It’s pretty clear that evolution laid down a series of attributes that predispose us toward religiosity, such as “agency detection,” which refers to the human tendency to detect minds and intentions everywhere around us in the environment, even when they aren’t necessarily there. The evolutionary reason for such a trait seems obvious: after all, better to be safe than sorry when you’re out in the woods and hear a noise. But start thinking that there are intentions behind the wind blowing, or the hunt failing, and you are well on your way to constructing gods. And indeed, religion seems to be a cross-cultural human universal. But does that mean that evolution selected for religion itself, or just for simpler precursors like agency detection?

You see the difficulty. In this context, Hibbing and his colleagues consider a variety of potential explanations for the stubborn fact that there is large, politically relevant psychological and biological diversity among members of the human species, and ultimately settle on a tentative combination of two ideas. First, they assert, conservatism is probably more basic and fundamental, because it is more suited to a world in which life is “nasty, brutish, and short.” Being defensive, risk averse, hierarchical, and tribal makes sense when the threats around you are very real and immediate. As many of these threats have relaxed in modern times, however, this may have unleashed more variability among the human species, simply because now we can afford it. Under this scenario, liberals are the Johnny-come-latelys to the politico-evolutionary pageant; the Enlightenment itself is less than 300 years old, less than an eyeblink in evolutionary time. “Liberalism may thus be viewed as an evolutionary luxury afforded by negative stimuli becoming less prevalent and deadly,” write Hibbing et al.

However, Hibbing and his colleagues also consider a more controversial “group selection” scenario, in which evolution built some measure of variability in our political typologies because sometimes, diversity is strength (for the group, anyway, if not for the individual). The trouble is, it is still fairly novel for evolutionary explanations to focus on the reproductive fitness of a group of individuals, rather than on the fitness of a single individual or even that individual’s DNA. Nonetheless, it’s easy to see why a group of early humans comprised of both conservative and liberal psychologies might have fared better than a more homogenous group. Such a society would have forces in it that want to hunker down and defend, but also forces that push it to explore and change. This would surely make for better adaptation to more diverse environments. It just might enhance the group’s chance of survival.

Yet it would be going much too far to suggest that Hibbing et al. have a strong or highly developed theory for why biopolitical diversity exists among humans. Avi Tuschman does, though. “Political orientations are natural dispositions that have been molded by evolutionary forces,” he asserts. If he’s right, a dramatic new window opens on who we are and why we behave as we do.

One of the most stunning revelations of recent genetic anthropology is the finding that Homo sapiens, our ancestors, occasionally bred with Homo neanderthalensis in Europe or the Middle East some 40,000 to 50,000 years ago. These encounters may have been quite rare: just one offspring produced every thirty years, according to one estimate. But it was enough to shape who humans are today. Recent genetic analyses suggest that many modern humans carry a small but measurable percentage of Neanderthal DNA in their genomes—particularly those of us living in Europe and Asia.

The more you think about it, the more mind-boggling it is that this cross-species mating actually occurred. Imagine how strange it must have been, as a member of Homo sapiens, to encounter another being so closely related to us (much more closely than chimpanzees), and yet still so different. J. R. R. Tolkien buffs can probably visualize it the best, because it would indeed have been something like humans encountering dwarves. Neanderthals were shorter and stronger, with outjutting brows. There is some evidence suggesting that they had high-pitched voices and red hair.

Knowing how prevalent racism and xenophobia are today among members of the same human species, we can assume that many of our ancestors would have behaved even worse toward Neanderthals. And yet some Homo sapiens bred with them, produced offspring with them, and (presumably) cared for those offspring. Which ones were the lovers, not the haters?

The answer, hints Tuschman in Our Political Nature, is that it may have been the liberals. For one core of the apparently universal left-right difference, he argues, is that the two groups pursue different reproductive strategies, different ways of ensuring offspring and fitness in the next generation.

And thus we enter the realm of full-blown, and inevitably highly controversial, evolutionary explanations. Tuschman doesn’t hold back. Conservatives, he suggests in one of three interrelated evolutionary accounts of the origins of politics, are a modern reflection of an evolutionary impulse that leads some of us to seek to control sexual reproduction and keep it within a relatively homogenous group. This naturally makes today’s conservatives more tribal and in-group oriented; if tribalism does anything, it makes it clear who you are and aren’t supposed to mate with.

Tuschman’s liberals, in contrast, are a modern reflection of an evolutionary impulse to take risks, and thereby pull in more genetic diversity through outbreeding. This naturally makes today’s liberals more exploratory and cosmopolitan, just as the personality tests always suggest. Ultimately, Tuschman bluntly writes, it all comes down to “different attitudes toward the transmission of DNA.” And if you want to set these two groups at absolute war with one another, all you need is something like the 1960s.

According to Tuschman, these competing reproductive strategies arise from the fact that there are advantages to keeping mating close within the group, but also advantages to mixing in more genetic diversity. Moreover, there is a continuum from extreme inbreeding to extreme outbreeding, featuring many different reproductive strategies along the way. Thus we see in other species, such as the great tit, a range of mating behavior, from a high level of breeding with more closely related birds to a high level of outbreeding.

Outbreeding brings in diversity, which is vital. For instance, diversity in the genes that create the proteins that ultimately come to comprise our immune systems has obvious benefits. But outbreeding also has risks—like encountering deadly new pathogens when you encounter new human groups—even as a moderate degree of inbreeding appears to have its own advantages: perpetuating genetically based survival strategies that are proven to work, increasing altruism that arises in kin relationships, and also, it appears, having more total offspring.

Extreme inbreeding, to be sure, is deleterious. But Tuschman presents evidence suggesting that there is an optimum—at around third-cousin or fourth-cousin mating—for producing the largest number of healthy offspring. He also cites related evidence from Danish women suggesting that a moderate degree of geographic dispersal to find a mate (measured by the distance between a woman’s birthplace and her husband’s) is related to having a high number of children, but that too much dispersal and too little are both related to lower overall fertility.

Returning to the present, Tuschman emphasizes that conservatives, and especially religious conservatives, always seem to want to control and restrict reproduction (and other sexual activities) more than liberals do. It’s understandably hard for an evolutionary biologist not to see behaviors that systematically affect patterns of reproduction in a Darwinian light.

And it’s not just reproductive patterns: Tuschman also suggests that other aspects of the liberal-conservative divide reflect other evolutionary challenges and differential strategies of responding to them. He traces different left-right views on hierarchy and equality to the structure of families (a move that cognitive linguist George Lakoff has in effect already made) and the effect of birth order on the personalities and political outlooks of siblings. And Tuschman traces more positive and more negative (or risk-averse) views of human nature on the left and the right to different types of evolutionarily based altruism: altruism toward kin on the conservative side, and reciprocal altruism (which can be toward anyone) on the liberal side.

But is all of this really … true? Tuschman’s book is difficult to evaluate on this score. It says so much more about evolution than Hibbing, Smith, and Alford do, and yet manages to do so without leaving the same impression about the importance of caveats and nuances. Is Tuschman advancing a group selection theory, or not? It sometimes sounds like it, but it isn’t clear. And most importantly, is the variation among humans of politically relevant traits just part of the natural order of things, or does it itself reflect something about evolution? Again, it isn’t clear. This is not to suggest that Tuschman lacks a view on such questions; it’s just that he synthesizes so much scientific evidence that this kind of hand-holding seems less of a priority.

In the end, Tuschman’s book attempts a feat that those of us monitoring the emerging science of politics have long been waiting for—explaining the now well-documented psychological, biological, and genetic differences between liberals and conservatives with reference to human evolution and the differential strategies of mate choice and resource allocation that have been forced on us by the pressures of surviving and reproducing on a quite dangerous planet. It may or may not stand the test of time, but it certainly forces the issue.

In the end, what’s so stunning about all of this is the tremendous gap between what scholars are learning about politics and politics itself. We run around shutting down governments and occupying city centers—behaviors that can only be driven by a combination of intense belief and equally intense emotion—with almost zero perspective on why we can be so passionate one way, even as our opponents are passionate in the other.

To see politics as Hibbing, Smith, Alford, and Tuschman see it, by contrast, is inevitably to want to stop fighting so much and strive for some form of acceptance of political difference. That’s why, even though not all of the answers are in place yet, we need their line of thinking to catch on. Ideological diversity is clearly real, deeply rooted, and probably a core facet of human nature. Given this, we simply have no choice but to come up with a much better way to live with it.

Buy these books from Amazon and support Washington Monthly: Predisposed: Liberals, Conservatives, and the Biology of Political Differences and Our Political Nature: The Evolutionary Origins of What Divides Us

It’s All in the Implementation
https://washingtonmonthly.com/2014/03/02/its-all-in-the-implementation/ (Sun, 02 Mar 2014)

Why cannabis legalization is less like marriage equality and more like health care reform.

Is marijuana legalization on the gay marriage track toward decisive and irrevocable public acceptance? The liberals and libertarians who support it—call them liberaltarians, to borrow a term from the Cato Institute’s Brink Lindsey—certainly hope so, and the similarities are not hard to see.

Public approval trends for legal marijuana and gay marriage look remarkably similar. (See chart below.) Both have crossed the magic 50 percent line defining majority support, and both, as a result, have seen recent political breakthroughs. In 2012, Colorado and Washington legalized the production, sale, and use of marijuana; since 2009, meanwhile, eight states have legalized medical marijuana, bringing the total to twenty-one (including Washington, D.C.). Gay marriage has similarly picked up momentum, winning adoption via three state initiatives in 2012 and subsequently being legalized in an additional eight states. Here is one thing we can say for sure: whatever happens next, there will be no going back to the status quo ante. Drug warriors and marriage traditionalists will need to come to terms with that fact.

But, having noticed the obvious similarities between legal marijuana and legal gay marriage, marijuana reform advocates—especially liberals who care about government’s effectiveness and reputation—need to pay at least as much attention to the less obvious differences. Otherwise they may encounter some of the same sickening surprises they have run into with an issue that may seem not at all like marijuana, but that in fact has much in common with it: Obamacare.

At first glance, this might seem like a stretch. What can a top-down federal reform of the health care system tell us about a state-led reform of drug laws? Quite a lot, actually. Marijuana legalization, unlike gay marriage but very much like Obamacare, requires the government to execute a complicated new program well. Indeed, one might argue that legalizing marijuana is to the states that are doing it much as Obamacare is to the federal government: a test of modern government’s ability to innovate at a time when it is under siege.

Consider, then, four lessons Obamacare holds for marijuana reformers.

[Chart: Change in public opinion over same-sex marriage and marijuana legalization, by year]

1. Pragmatism trumps moralism.

Gay marriage is a moral values issue. Proponents see it as a core civil right; opponents see it as a violation of God’s or nature’s laws. Moral attitudes are slow and difficult to change—not a lot of people will be convinced one way or another by looking at statistics. But once moral opinions do change, they tend to change decisively and durably, which is why the change in attitudes toward homosexuality has become a cascade in the last decade. And gay marriage, to its proponents, is not something merely to be grudgingly allowed; it is something to celebrate, the exercise of the virtues of commitment and love. To some extent, marijuana legalization fits the moral template: the stigma that used to attach to toking has diminished since the Reefer Madness days. Public morality no longer supports a zero-tolerance attitude toward pot use. But that is where the similarity ends.

Last year, my colleagues E. J. Dionne and William Galston conducted a comprehensive analysis of the public opinion data available on marijuana. Their conclusion: though some people do see marijuana liberalization as a moral value—a freedom issue—mainstream opinion sees no virtue in smoking weed. Opinion has moved toward legalization because the public—including sizable majorities of conservative Republicans as well as liberal Democrats—has come to believe that prohibition is a failed policy and legalization might work better. As such, support for legalization is pragmatic, conditional, and precarious.

That is not true of support for gay marriage, but it is similar to support for Obamacare. True-blue liberals and hard-boiled conservatives may believe that Obamacare is a moral issue (a test of the country’s compassion; a threat to the country’s values), but most of the public cares mainly about whether it works. If it creates chaos, fails to contain costs, or leaves many people feeling worse off, it will be—in fact, already is—on very thin ice with the public. Likewise, if marijuana legalization creates chaos, fails to contain crime, or leaves many people feeling worse off, a backlash and subsequent rejection and retrenchment are quite possible, leaving policy stranded between failed prohibition and stalled liberalization.

2. It’s the implementation, stupid.

So marijuana legalization needs to work—or at least it needs not to fail. Moreover, it needs to be perceived to work. Unfortunately, it is not self-implementing. Gay marriage requires little more than a few legal changes and the issuance of marriage licenses. There are assuredly complexities, such as what to do about incompatible state and federal policies and how to handle disputes over religious objections, but they are secondary issues that do not call into question the legitimacy of same-sex marriage itself.

Marijuana legalization, by contrast, is like Obamacare in being anything but binary. Changing the law is merely the first step down a long and tortuous road. Colorado, Washington, and any other states that may eventually legalize need to create administrative and bureaucratic structures to regulate the growth, distribution, and sale of marijuana; they also need to coordinate those efforts with continuing law enforcement against illegal sellers. They need to set tax levels high enough to deter heavy use but not so high as to sustain a black market. They need to make all kinds of regulatory determinations, from how marijuana can be marketed to what level of use constitutes impairment; they must defend those rules in court and regroup when they lose. They need to work out a modus operandi with a hostile federal legal regime and a skeptical law enforcement establishment. They need to track outcomes, identify problems, and make adjustments. And not least, as the president so painfully forgot during the fight for Obamacare, they need to make their case effectively to the public all along the way.

All of that sounds almost unmanageable when listed on paper. But in fact, early indications are that Colorado and Washington are faring reasonably well. If they pass the implementation test, marijuana legalization could prove that obituaries for effective, adaptive government—some of them written by me—are premature. But if they yield chaos or crisis, they would discredit the policy they seek to promote.

As of now, I’m cautiously optimistic that the states’ experiments will be made to work, not perfectly but well enough. But liberaltarians and drug reformers need to get it through their heads that just passing legalization initiatives is not enough. They need to stick around once the vote is over and commit to the hard slog of making the policy succeed.

3. Overpromising is perilous.

This, of course, is something Obamacare supporters are learning the hard way. (“Keep your current insurance, if you like it”? And we’ll see about near-universal coverage, cost-curve bending, and budgetary savings.) Of course, to overcome public resistance to any reform, you must make promises, usually optimistic ones. But there is a price to pay for overdoing it.

Here again, marijuana legalization faces some of the same challenges as Obamacare. Voters in Colorado and Washington were told that legalization would produce new gushers of revenue for education and public safety; many experts at the time warned that the revenue claims were unrealistic. Similarly, legalization was touted as a public safety measure, redirecting police resources toward more serious forms of crime, but experts say that suppressing the black market as it fights for market share may initially require more law enforcement resources, not fewer.

The public safety and revenue arguments for legalization are entirely legitimate and, over the long run, likely to prove valid. In the short run, however, the best way to avoid an Obamacare-style collision with reality is to tailor expectations by making clear to the public that legalization is a journey, not a destination, and that magical results won’t come overnight.

4. Avert zero-sum politics.

Obamacare would have been a lot easier to launch and fix if the Republican half of the country did not feel invested in its failure. Zero-sum politics—which scores any Obamacare success as a win for Democrats and a loss for Republicans, rather than as a win for the country—has proved toxic to the environment for reform.

The same has not quite happened for marijuana legalization, but it could. Drug warriors vigorously object to legalization; many expect legalization to fail and hope to nip it, excuse the pun, in the bud. But many are also cognizant that the old approach was unsustainable and unsuccessful. Partly because marijuana legalization has usurped the limelight, the Obama administration’s drug czar, Gil Kerlikowske, has received too little notice and credit for steering federal antidrug policy away from criminalization and toward prevention and treatment; even drug warriors increasingly speak of addiction as a disease and accept that we will never incarcerate or zero-tolerate our way out of the problem.

To these drug warriors of good faith, responsible proponents of marijuana legalization need to address a message. What Colorado and Washington are doing is conditional legalization, wrapping marijuana within a regime of regulation that uses both carrots (legitimate profits for those who follow the rules) and sticks (punishments for those who don’t) to better control the marijuana market and more effectively deploy public resources. The alternative, should regulated legalization fail, might be chaotic legalization or policy drift, which would be worse, even from a drug war point of view. So drug warriors have a stake in helping Colorado and Washington and other states find new paths toward an effective drug-control framework—just as liberalizers have a stake in keeping a regulatory handle on marijuana markets.

To avoid the zero-sum mind-set and all the counterproductive friction that goes with it, that message is politically essential. It has the additional advantage of being true.

Return to “Saving Marijuana Legalization.”

Nonprofit Motive
https://washingtonmonthly.com/2014/03/02/nonprofit-motive/ | March 2, 2014

How to avoid a likely and dangerous corporate takeover of the legal marijuana market.

The standard debate about marijuana legalization has been “Should we, or shouldn’t we?” For better and for worse, the country appears to be moving toward answering that question in the affirmative. The next logical question is, or ought to be, “What sorts of organizations do we want to supply that legal marijuana?”

The debate typically skips past that crucial question and presumes that legal cannabis will be produced and sold by for-profit companies—with the government setting some regulatory limits, such as barring sales to minors. Colorado and Washington, the two states where voters have already approved legalization, have gone this commercial route.

At first glance, it doesn’t seem so bad. After all, the companies that have obtained licenses to produce or sell marijuana in Colorado, and the quasi-legal medical marijuana dispensaries that operate in a number of states, seem quaint and countercultural right now—hippies enjoying a nice middle-class lifestyle, clucking over little plots with fifty or a hundred plants. But such mom-and-pop operations aren’t likely to last.

Cannabis is just a plant. If cannabis production ends up looking anything like modern agriculture, these small, independent operations will be shunted aside by bigger, professional farms. And if cannabis distribution looks anything like the distribution of other consumer goods, those farms will supply companies that use marketing savvy to develop and exploit brand equity. These more organized enterprises will be driven by profit and shareholders’ interests, not concern for public health or countercultural values. And as these companies grow, they’ll begin to wield considerable political power. (Note that even the current legal cannabis industry, in its infancy, already has trade associations and lobbies, and already holds annual conventions.)

While a few small firms may adroitly adapt to a low-volume, high-touch niche market serving primo brands to college-educated connoisseurs—akin to microbreweries—a larger, corporate reality looms. With 60 percent of marijuana being used by people with a high school education or less, we should expect the majority of consumers to shop for value: they’ll seek Walmart-style everyday low prices, not boutique ambience at boutique prices.

So if we want to avoid a legal cannabis market dominated by large companies that push to sell as much as possible to as many people as possible, then what do we do? How do we design a legal market that protects public health and limits drug abuse?

The most effective model, as Mark Kleiman explains elsewhere in this issue, may be restricting sales to government-owned and -run stores. Right now, that is at best a long shot. Marijuana is still prohibited under federal law, and a state law involving the state government directly in the trade would likely be preempted by the Controlled Substances Act. Even if the federal laws do change, government stores may be political nonstarters, hated by moral conservatives and libertarians alike. There are two other models, however, that could work. One restricts production and sale to nonprofit organizations. The other restricts production and sale to small user co-ops.

Model 1: Nonprofit production and sale

Entrusting the supply of sensitive commodities or services to nonprofits—such as blood banks, credit unions, art museums, and so on—is not new. Indeed, a good chunk of economic activity in the United States is conducted by organizations that are neither government nor for-profit enterprises.

Of course, simply designating an organization as a nonprofit doesn’t guarantee that it won’t be self-aggrandizing and politically powerful. (Nonprofit hospitals and universities, for example, compete aggressively, though they are typically more restrained by ethical principles than are corporations.) But in the nonprofit arena one can do more to guard against those tendencies. For example, a nonprofit seeking a marijuana license could be required to have a board of trustees whose members are selected by child welfare and public health agencies. The state could require that any licensee’s charter documents (e.g., its articles of incorporation or bylaws) pledge that it operate in ways that meet demand but eschew promotion, and stipulate that all excess net operating revenues be donated to drug prevention, treatment, or other charitable causes related to substance abuse.

Placing the cannabis industry in the hands of nonprofits committed to fostering public health would rejigger the incentive structure. For-profit corporations seek to grow sales and, as with alcohol and tobacco, the greatest revenues come from the most vulnerable populations; 40 percent of past-month marijuana users today meet clinical guidelines for substance abuse or dependence (on marijuana, alcohol, and/or some other substance). Nonprofits, by design, operate under different imperatives. They are not inherently interested in expanding sales or streamlining production, since their mission is to serve the public interest, not to maximize shareholders’ profits.

The usual knock against nonprofits is that they do not operate as efficiently as do for-profit businesses. But that hardly matters, because after legalization marijuana production costs will be very low. A typical heavy user can be supplied annually by three square feet of greenhouse space (at forty grams per harvest, four harvests per year), and professional farmers’ annual costs for comparable crops run between $5 and $20 per square foot. Outdoor farming would be even cheaper; all of the marijuana currently consumed in the U.S. each year could be produced on about 10,000 to 15,000 acres—or about a dozen modest Midwest farms.

Production costs today are high because growing is done by two kinds of producers: criminal organizations, whose comparative advantage is avoiding enforcement, not practicing agronomy; and plant-loving aficionados supplying medical dispensaries and other high-end markets from artisanal operations. A decade or so down the road, when the for-profit marijuana farming sector approaches the efficiency of tomato or pepper farmers, the production cost of a joint’s worth of basic, high-potency intoxicant, which runs about $4 today, will drop to about a nickel.
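
To make the arithmetic explicit, here is the back-of-the-envelope version, using only the figures above plus one assumption of ours—that a joint contains about half a gram:

```latex
% Per-user supply cost, from the figures in the text:
%   3 sq ft of greenhouse, 40 g per harvest, 4 harvests per year,
%   $5-$20 per sq ft per year in farming costs.
\[
3\,\mathrm{ft}^2 \times \$5\text{--}\$20/\mathrm{ft}^2
  = \$15\text{--}\$60 \text{ per year}
\quad\text{for}\quad
40\,\mathrm{g} \times 4 \text{ harvests} = 160\,\mathrm{g}
\]
\[
\frac{\$15\text{--}\$60}{160\,\mathrm{g}}
  \approx \$0.09\text{--}\$0.38 \text{ per gram}
\;\Longrightarrow\;
\text{roughly } \$0.05\text{--}\$0.19 \text{ per half-gram joint}
\]
```

Even the high end of that greenhouse range is a small fraction of today’s $4; still-cheaper outdoor farming is what carries the figure down toward the nickel.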

The challenge under a system of legal availability is therefore not achieving greater efficiency and lower prices but, rather, keeping prices from falling too far, since low prices are not good for public health. Multiple studies have shown that consumption rises as price falls. Consumption may be especially price sensitive among heavy users, for whom marijuana takes a bigger share of the personal budget, and among cash-strapped teenagers: two groups whose consumption we’d prefer to see go down, not up.

Low prices also undercut revenues from taxes that are assessed as a percentage of value, as in Colorado and Washington. While nonprofits would not pay corporate income or property taxes, their products would still be subject to excise and sales taxes, and their employees’ wages would be taxable. (Bringing those wages above the table might generate roughly as much tax revenue as the sales and excise taxes themselves, something the debate often overlooks.)

Restricting production and sale to nonprofits also solves the problem of how to allow production without also allowing aggressive marketing, including promotions that appeal to youth, such as the Joe Camel campaign or alcopops. It is very difficult to limit commercial entities’ advertising and other promotional efforts, since such promotions enjoy constitutional protection as commercial speech. (Most restrictions on alcohol and tobacco advertising come from voluntary industry codes, negotiated agreements, and lawsuits, not legislation, which helps explain why they are so weak.)

Placing the industry in the hands of public-health-minded nonprofits sidesteps the problem of overzealous promotion, because their interests would be aligned with social welfare, not with shareholders. Big Tobacco and Big Alcohol court youth because it is the only way they can grow, and growth is in their DNA. Nonprofits operate under different incentive structures.

Model 2: Co-ops

Another option would be to restrict production and distribution to user co-ops. The co-op model might be described as “grow-your-own plus share-with-others.” Alaska effectively legalized grow-your-own when its supreme court ruled that growing up to twenty-five plants was protected by the state constitution’s privacy rights, and Colorado’s 2012 proposition, in addition to legalizing large-scale commercial production, allows any adult to grow up to six plants.

One issue with grow-your-own, however, is that not everyone has the time, inclination, or skill to grow marijuana. While growing marijuana would not challenge a professional farmer, it is trickier than growing carrots, and it suffers from the zucchini problem. One productive plant produces more than the average user can consume, but one plant that dies produces nothing. So grow-your-own is feast or famine. Furthermore, some users like variety, and they may not be able to grow all of the strains they would like to consume. Co-ops solve those problems by allowing each registered member to grow one plant or grant that growing right to another member. Members could share or trade within the co-op. Someone who grew Cannabis indica, for example, could swap with someone who grew Cannabis sativa. Someone whose plant thrived might give some of the product to a member whose plant died, knowing that the shoe might be on the other foot after the next harvest. Co-ops would be small, perhaps restricted to 100 members (and thus 100 plants total), enough to overcome the problems of grow-your-own but not enough to allow large-scale production.

Uruguay, which recently became the first country to legalize the production, sale, and use of marijuana, allows such co-ops, or “clubs.” Spain has allowed them, too, for quite some time. (Spanish law criminalizes sale but not possession, so sharing falls into a legal gray area.)

The co-op model would not offer all the advantages of full-blown legalization. It would not generate tax revenue, and its small scale and informal nature would leave production looking much like it does today for medical-grade marijuana—a relatively inefficient activity providing handcrafted products. Co-ops might also offer a narrower range of products: traditional forms like smokable marijuana and simple edibles, but not necessarily candies, bath oils, or highly concentrated forms such as butane hash oil.

Nevertheless, permitting co-ops could take a sizable bite out of the black market. More than half of marijuana users already report that they most recently obtained marijuana for free or by sharing. Others paid, but bought from friends or family. Just 7 percent reported buying from someone other than a friend or relative. So retail marijuana distribution already operates via an informal co-op model. Shifting to a formal co-op system would simply mean that the product would come from co-ops rather than from black-market criminal operations.

There is a spectrum of options for legal supply, ranging from none (prohibition with no medical exception) to for-profit enterprise subject only to standard business regulations. The pros and cons of the extreme positions have been well discussed. Prohibition produces black markets, arrests, and imprisonment; private enterprise promotes greater consumption in all forms, including greater abuse.

Colorado and Washington have chosen to swing from close to one end of the spectrum (prohibition except for a quite permissive medical system) to close to the other end (for-profit enterprise subject to standard regulations plus some regulations particular to marijuana). They have skipped over several viable intermediate alternatives.

Allowing organizations to produce, distribute, and sell marijuana but restricting that privilege to nonprofits is better than the Colorado and Washington strategies in almost every respect. In fact, the only people who should prefer commercial marijuana enterprises are those who themselves hope to make millions operating marijuana businesses, or hardcore libertarians who don’t believe that government should interfere in a consumer’s right to harm himself through his own bad choices.

And there may be wisdom in moving incrementally. States that jump all the way to the commercial version of legalization will have a hard time stepping back to a nonprofit or co-op model. Once a legal industry becomes entrenched, and has lobbying clout, it will be very hard to uproot. Likewise, a state monopoly model, favored by many experts, will only become an option after national legalization if there is not already an established commercial interest that would fight it. Unless voters are certain that they want for-profit businesses to control the marijuana trade, it would make sense to legalize the industry with one of these intermediate models, at least at first.

The author would like to thank GiveWell and Good Ventures for supporting his work on cannabis policy. The views expressed are the author’s and should not be attributed to Carnegie Mellon, GiveWell, or Good Ventures, whose officials did not review this article in advance.

Return to “Saving Marijuana Legalization.”

A Nudge Toward Temperance
https://washingtonmonthly.com/2014/03/02/a-nudge-toward-temperance/ | March 2, 2014

The great policy challenge of cannabis legalization is discouraging problem use. Most consumers have no trouble keeping their consumption within reasonable bounds, but 10 to 15 percent lose control for months at a time, and some of them develop chronic problems. At any one time, a couple of million Americans self-report that they are trying and failing to cut down on their cannabis use. That hasn’t kept zealous advocates from convincing themselves and others that “pot isn’t addictive.”

No one intends to become habituated to cannabis, any more than anyone intends to develop any other bad habit. Abuse creeps up on some people because their long-term desire to avoid habituation, and to function well in school, at work, and at home, has a hard time competing with the short-term allure of getting high.

One way to limit the number of problem users is to prevent the creation of an industry of for-profit firms eager to sell to them (see Jonathan P. Caulkins, “Nonprofit Motive,” page 39). Another possibly complementary approach would be to keep consumers mindful of how much they’re actually using, compared to how much they intend to use, with a system of user-chosen monthly purchase quotas. Since using more, or more often, than intended is among the defining characteristics of substance abuse, helping users enforce on themselves their own chosen consumption patterns would address the problem at its root. This is one version of the sort of “choice architecture” or “libertarian paternalist” strategy described by Richard Thaler and Cass Sunstein in their book Nudge.

Under a quota system, each cannabis user would have to sign up and receive an account number. (There would be no need to match the number with a name or any other identifier.) On signing up, each user would be asked to set his or her own monthly purchase limit. A user who wanted to get high only once a week, and hadn’t yet built up a tolerance, might set a limit of 200 milligrams of THC per month. How to provide appropriate guidance at that stage—and how to convince a new user that habituation might happen to him, not just to someone else—constitutes the hardest operational challenge. One approach would be to set some modest arbitrary number as the “default option” for users who don’t choose a higher or lower limit for themselves.

A user could set any limit he wanted to, but once set, that limit would be binding until changed. If a user asked to buy more than was left of his or her self-selected monthly allowance, the clerk would have to refuse. Those limits would not be carved in stone; users could change them, but only—and here’s the key—after some delay. (Two days? Two weeks? There would have to be a similar waiting period between applying for a new card and its effective date.) Since testing-and-labeling rules mean that each package will contain a measured quantity of THC, the system would only have to track the monthly total for each account number: not a daunting data-management task.
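
For readers who want to see just how small that task is, here is a minimal sketch in Python of the per-account bookkeeping described above. Everything in it is illustrative—the class and method names, the 200-milligram default, and the two-week delay are assumptions chosen to match the article’s hypotheticals, not a specification of any real system:

```python
from dataclasses import dataclass
from datetime import date, timedelta

CHANGE_DELAY = timedelta(days=14)   # "Two days? Two weeks?"—two weeks assumed here
DEFAULT_LIMIT_MG = 200              # the once-a-week user's hypothetical 200 mg/month

@dataclass
class Account:
    """An anonymous account: just a number, a limit, and a monthly running total."""
    limit_mg: int = DEFAULT_LIMIT_MG     # self-selected monthly THC limit (milligrams)
    purchased_mg: int = 0                # running total for the current month
    pending_limit_mg: int | None = None  # requested new limit, not yet binding
    effective_date: date | None = None   # day the pending limit takes effect

    def request_limit(self, new_limit_mg: int, today: date) -> None:
        """Users may change their limit, but only after the waiting period."""
        self.pending_limit_mg = new_limit_mg
        self.effective_date = today + CHANGE_DELAY

    def _apply_pending(self, today: date) -> None:
        # A changed limit becomes binding only once the delay has elapsed.
        if self.pending_limit_mg is not None and today >= self.effective_date:
            self.limit_mg = self.pending_limit_mg
            self.pending_limit_mg = self.effective_date = None

    def record_sale(self, thc_mg: int, today: date) -> bool:
        """Called at the register; False means the clerk must refuse the sale."""
        self._apply_pending(today)
        if self.purchased_mg + thc_mg > self.limit_mg:
            return False
        self.purchased_mg += thc_mg
        return True

    def start_new_month(self) -> None:
        self.purchased_mg = 0


# Example: a 200 mg/month user tries to outrun the limit mid-month.
acct = Account()
assert acct.record_sale(150, date(2014, 3, 1))       # fine: 150 of 200 mg used
assert not acct.record_sale(100, date(2014, 3, 8))   # refused: would exceed limit
acct.request_limit(400, date(2014, 3, 8))            # takes effect March 22
assert not acct.record_sale(100, date(2014, 3, 10))  # still refused during delay
assert acct.record_sale(100, date(2014, 3, 22))      # allowed once delay elapses
```

A real registry would key such records by account number in a database, but the point stands: one running sum and one limit per anonymous account number is the entire state the system needs.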

Of course, no such system is foolproof. Someone will always invent a bigger fool. Some users who run out of quota would simply set a higher limit, and do so again when that higher limit didn’t keep pace with their growing appetite for intoxication. For them, the system might not do much good. However, others—and there’s no way to guess in advance how many—would not reset their limits, choosing instead to keep them in place as external props for occasionally weak wills. Those, and the other users for whom a quota system is an encouragement to think about how much they’re using, would be the beneficiaries of this “nudge” toward temperance.

Like most nudge strategies, a personal user quota has a “chicken soup” aspect: it might not do much good, but it couldn’t hurt.

Return to “Saving Marijuana Legalization.”
