July/August 2023 | Washington Monthly

The Founders Want You to Work From Home

The CEOs who would herd us back to the office are flouting a tradition of economic autonomy and productive leisure that stretches back to the American Revolution.



During the darkest depths of the pandemic, as businesses laid off millions of people and refrigerator trucks idled outside hospitals, one segment of the workforce enjoyed a degree of freedom and autonomy not seen in at least a century. With offices closed, the “laptop class” of knowledge workers was forced—or freed—to phone it in from home. At first, we (I am a member of this tribe) languished, doom-scrolling Twitter and streaming episodes of Tiger King. But then a funny thing happened. Without a boss looming over our shoulders, we were given an unprecedented amount of control over our own schedules. Many of us avoided hours-long commutes, as well. And in that extra time, amid enormous suffering and upheaval, occurred a curious flourishing. We baked bread. We learned to knit. We video-chatted with friends we’d never have thought to reach out to in normal times. We tended our gardens, botanical and spiritual alike.

Working from home lifted burdens we didn’t know we carried. Implicit in office life is a degree of power and control—not only do the bosses buy our services, they also buy our physical presence. Decoupling those things unlocked liberty not seen in any office worker’s lifetime; managers were forced to communicate precisely what they wanted from their employees, rather than just keep them around for incidental productivity—or, as Jamie Dimon, CEO of JPMorgan Chase, put it, “spontaneous idea generation.” 

The change in physical terrain—from a space controlled by the boss to the sanctuary of one’s own home—might seem minor, a matter of personal convenience, but it speaks to societal forces that reach deep into American history. In the Revolutionary Era, founders like Thomas Jefferson feared, more than almost anything else, industrialization and the rise of wage labor. When independent farmers left their homes to toil in factories under the watchful eye of a boss, their fates—and their votes—became yoked to their industrialist masters, Jefferson believed. They lost their independence as citizens as well as workers. To Jefferson, John Adams, James Madison, and others, the yeoman farmer was the ideal citizen who set his own schedule and was beholden to no one. With a little suspension of disbelief, one could imagine the darkest days of 2020 as a time machine transporting knowledge workers back to their freehold farms. There, we tended our intellectual crops at our leisure, reviving a model of work long forgotten. We discovered we liked it.

Now, the Man wants to take it all away. A steady drumbeat of CEO voices is calling for a return to the office, where white-collar workers will allegedly be more productive—and more easily surveilled. Steven Rattner, who manages Michael Bloomberg’s $8 billion personal investment fund, summed up the anxiety of the ownership class in a New York Times op-ed: “Has America gone soft?” In his piece, Rattner briefly acknowledges that many Americans may no longer value the economic incentives of constant toil over the prospect of a less productive, but more leisurely, existence. But he warns that, while we rest, the Chinese are working 72-hour weeks, and that “less output … eventually means a lower standard of living (or a less quickly rising one).” But his assumption—that working from home equals less productivity—is far from an established fact. Yes, some people are using their extra time to play video games. Others, however, are using their freedom to work harder and more efficiently, sending Slack messages in the doctor’s waiting room and folding laundry while answering calls. Besides, what was the “standard of living” that we strove for all those years—having more stuff? Could there, perhaps, exist collective goods other than the wealth we generate for our bosses? It doesn’t matter; for Rattner and his billionaire friends, the one measurable good to be observed in a worker is how much they produce.


That the laptop class can even have this debate reflects a privilege not shared by service workers, who during the pandemic suffered through layoffs, workplace outbreaks, and rudeness and violence from customers. Still, the struggles of taxi drivers and restaurant waiters are like those of knowledge workers in kind, if not degree. What the average working person lacks, from baristas to mechanics to salespeople to journalists, is economic agency. Economic agency moves hand in hand with political power, and when great swaths of people are denied those things for too long, republics crumble.

Much ink has been spilled on the post-pandemic future of work, but little of it has addressed the civic underpinnings of the debate. Some, like Rattner, seem eager to herd us back into our cubicles, with little change to underlying socioeconomic arrangements. Other, more conciliatory, commentators note that more leisure time can improve productivity. Still others chide the marketplace for treating leisure as a “productivity hack” rather than a good in itself. This is closer, but it still misses why the discussion matters to everyone, and not just to whichever class of worker is demanding better treatment.

The debate is about more than greedy bosses and fed-up workers; it’s about democracy’s foundations. An intellectual lineage stretching back to ancient Greece argues that self-governing societies need independent, well-informed, and civically engaged citizens to survive. And to build those people up, societies must give them two things: economic autonomy and leisure time—not vacant hours spent staring at TikTok, but edifying leisure, time spent volunteering, reading, learning a new skill, or raising children. The Greeks called that scholê, the root of our modern-day “school.” Work, in the sense of income-gaining labor, they defined negatively: ascholia, “not-leisure.” Scholê brought societies together by making individual happiness a matter of collective welfare. Building the ideal citizen was everyone’s concern.

Though now largely discarded, this ethos, often known as “civic republicanism,” has informed many of America’s greatest economic and political debates, from the writing of the Declaration of Independence to the creation of child labor laws and the modern workweek. 

Today, great masses are staging a mute protest against the status quo of labor. But they lack the language to frame their objections as something more than selfish, if well-grounded, complaints about their own welfare. By tracing the history of civic republicanism, we see how an issue as apparently quotidian and personal as remote work is, in fact, a matter of democratic importance. Give us the liberty to work unshowered and in underwear, or give us death.

It was tradition in ancient Athens to bury the city’s fallen soldiers together, just as they had fought. Mourning the dead was collective as well; once a year during wartime, the living gathered to hear a funeral oration reminding them of their shared values. In 431 BC, Athens was ending its first year of war against Sparta for hegemony of the Hellenic world. This duel pitted competing visions of government against each other: egalitarian democracy and militarized autocracy. That year’s speech would need to do many hard things—mourn the dead, explain what the living were fighting for, and steel its listeners for another year of loss. 

Thankfully, the speaker was Pericles. The foremost orator and statesman of his time began his eulogy by extolling Athens’s greatness. But he placed the source of that greatness in the citizens themselves. Not only did Athenians enjoy the fruits of democracy—its “regular games and sacrifices” and “many relaxations from toils”—they were the democracy. Their virtues were Athens’s virtues, their faults its faults. “To sum up,” Pericles said,

I say that Athens is the school of Hellas, and that the individual Athenian in his own person seems to have the power of adapting himself to the most varied forms of action with the utmost versatility and grace. This is no passing and idle word, but truth and fact; and the assertion is verified by the position to which these qualities have raised the state.

A democratic society, Pericles was saying, reflects its citizens’ personal qualities, which must be cultivated for the good of all through industry and free time well spent. The Greeks, like many other ancient civilizations, thrived on slave labor. And women had few rights. But what made democratic city-states like Athens unique was that even the humblest male citizens were encouraged to take part in public activities separate from their daily labor, be those theater, athletic games, or government. Indeed, one of Pericles’s greatest innovations was a system of per diem pay for public service that ensured even the poorest citizen could afford to take time off from his toils. Standing, perhaps, among gravestones in the Kerameikos cemetery—we know the speech only through Thucydides—Pericles said of his fellow Athenians, “We alone regard a man who takes no interest in public affairs, not as a harmless, but as a useless character, and if few of us are originators, we are all sound judges of a policy.”

Pericles died of the plague less than two years later, and Athens lost the Peloponnesian War. But its democracy eventually recovered, and just a few decades after Pericles’s speech, another link in the intellectual chain arrived. Aristotle was born in the latter days of Spartan tyranny and mentored Alexander the Great, another conquering warlord. Nevertheless, his thinking on self-improvement and individual autonomy would become essential to the success of democratic societies throughout history. “There remains to be discussed the question,” Aristotle says in Politics, “whether the happiness of the individual is the same as that of the state, or different. Here again, there can be no doubt—no one denies that they are the same.”

In the Nicomachean Ethics, Aristotle laid out what happiness was, charting the path to eudaimonia—the “good life.” It wasn’t just a matter of feeling good or having nice things; it was, the philosopher said, “virtuous activity in accordance with reason.” The word virtue had two faces: virtue in the sense of excellence—striving to be one’s best—but also service to others. Sacrifice for one’s family, neighbors, and country was essential to the good life.

Over the next 2,000 years, political philosophers picked up these ideas and shaped them to their times, forming a lineage that carries us directly to the founding of the United States: Polybius, a Greek who connected the Roman republic’s rise with its civic traditions; the brothers Tiberius and Gaius Gracchus, plebeians who pursued agricultural reforms meant to establish the average citizen as an independent yeoman farmer; Donato Giannotti, a chronicler of class politics in Renaissance mercantile republics; James Harrington, of 17th-century England, who believed that republics flourish when a robust middle class attains both economic and political power; Montesquieu, theorist of the separation of powers as a backstop to civic virtue; and John Locke, model of the American Founders, who disagreed with Aristotle on many things, but not with his idea that the legitimacy and stability of government are bound inextricably with human happiness.

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness

Thomas Jefferson’s words, which reverberate throughout American history, are themselves an echo. Just a few decades before the writing of the Declaration of Independence, parts of Jefferson’s immortal phrase appeared in Locke, who wrote that “the highest perfection of intellectual nature lies in a careful and constant pursuit of true and solid happiness” and that the interests of the state are “life, liberty, health, and indolency of body.” Both Jefferson and Locke lived in dialogue with the civic republican lineage, including with Aristotle, whom they in turns admired and criticized. And both shared the Aristotelian understanding that happiness is, as per the Nicomachean Ethics, “the end to which our actions are directed.”


Meanwhile, Jefferson had a specific idea of the “life, liberty and pursuit of happiness” that he believed we are entitled to pursue. Time and again, the Virginian tobacco planter preached the virtues of the “yeoman farmer”—the self-sufficient man whose leadership in the miniature society of the home prepared him to participate in political decision-making. “The small land holders are the most precious part of a state,” Jefferson proclaimed. Yet, like the Greeks, Jefferson, a slaveholder, reveled in a vision of equality while extending its privileges to only a select few.

The aristocratic Jefferson had one idea of the good life. Another influential model for American work and leisure—the self-made man—was Ben Franklin. The tenth son of a Boston tallow chandler, Franklin ran away to Philadelphia at 17 with only a few shillings to his name. Somehow, he maneuvered his way into the proprietorship of a printing house and, by 23, was editor and publisher of the Pennsylvania Gazette. As a young man on the rise, he cultivated an image of extreme industry and frugality. As he writes in his Autobiography: “I drest plainly; I was seen at no Places of idle Diversion; I never went out a-fishing or shooting; a Book, indeed, sometimes debauch’d me from my Work; but that was seldom, snug, & gave no Scandal.”


Published after Franklin’s death in 1790, the Autobiography was a best seller that influenced generations of strivers. But the tale was incomplete—it covered mostly his rise. At 42, Franklin sold his printing business and pronounced himself a “Man of Leisure,” a designation that didn’t mean idleness. On the contrary, he filled his now-open life with meaningful pursuits: invention, scientific inquiry, social clubs, and international diplomacy. He founded the University of Pennsylvania and some of the first subscription libraries. Unlike Jefferson, Franklin gave up slaveholding to become a full-throated abolitionist, and was a proponent of humility compared to the Virginian’s Old World noblesse oblige. As John Paul Rollert noted in The Atlantic, Franklin’s ethos of hard work and productive leisure—the early retirement, the thrift and restraint—held sway for a long time but lately has been abandoned. Now, to those raised on Gordon Gekko and Jordan Belfort, it looks almost quaint. What entrepreneur today, Rollert asked, would walk away “at the height of his earnings potential”?

Debates over the civic consequences of economic arrangements shaped the country’s development throughout the first half of the 19th century. Alexander Hamilton’s vision of political economy, with a strong federal government and an increasingly industrialized society, would clash repeatedly with Jefferson’s agrarian utopianism. Even as the advantage swayed back and forth between the Hamiltonians and the Jeffersonians, both sides saw their arguments as being not only about the running of the economy but also about the health of the budding democracy.

In this era, there’s perhaps no more powerful negative example of civic republican values than the southern plantation system, where masters forbade reading and limited social gatherings to ensure that slaves were too isolated and uneducated to break free. As Frederick Douglass said in 1894, “Education … means emancipation. It means light and liberty. It means the uplifting of the soul of man into the glorious light of truth, the light only by which men can be free.” During Reconstruction, the Radical Republicans in Congress attempted to cement former slaves’ freedom by providing them with 40 acres of land and a mule taken from the plantations. (An override of Andrew Johnson’s veto failed by only a few votes.) After the federal government abandoned Reconstruction and withdrew its troops from the South, white supremacists struck at Black citizens’ economic and political agency in tandem, forcing them into sharecropping while restricting their access to the polls.

Civic republicans split during the Gilded Age. Some deployed the ideals of economic autonomy to protect the sovereignty of corporations; others saw the emergence of robber barons and grueling factory conditions as a new threat to freedom. Many of the latter joined organizations like the Knights of Labor, whose demands followed the twin prongs of civic virtue: economic independence and edifying leisure. The Knights fought for greater autonomy and a voice for workers within companies (even if they couldn’t return them to their freehold farms), while also establishing reading rooms and education centers where workers could better themselves in their free time. In the 1880s, 700,000 Knights and their allies braved hunger and hired thugs to strike for the modern workday, chanting, “Eight hours for work, eight hours for rest, and eight hours for what you will.” Populist leaders like William Jennings Bryan picked up on those demands. In his 1896 “Cross of Gold” speech, Bryan argued that workers and their bosses ought to have equal economic agency:

The man who is employed for wages is as much a businessman as his employer; the attorney in a country town is as much a businessman as the corporation counsel in a great metropolis; the merchant at the crossroads store is as much a businessman as the merchant of New York; the farmer who goes forth in the morning and toils all day—who begins in the spring and toils all summer—and who by the application of brain and muscle to the natural resources of the country creates wealth, is as much a businessman as the man who goes upon the board of trade and bets upon the price of grain; the miners who go down a thousand feet into the earth, or climb two thousand feet upon the cliffs, and bring forth from their hiding places the precious metals to be poured into the channels of trade are as much businessmen as the few financial magnates who, in a back room, corner the money of the world.

As labor organizations fought the street battles and Populists waged political campaigns, Progressives like Louis Brandeis carried the same ideals to the marble halls of government. Before he was elevated to the Supreme Court, where he served from 1916 to 1939, the first Jewish justice was, among other things, a crusading labor lawyer. In the infamous Lochner decision of 1905, the Court struck down a New York law limiting bakers’ hours to 60 a week, arguing that states could not interfere with freedom of contract. Three years later, Brandeis persuaded the Court to limit the workday for women in factories and laundries—a narrow victory, but an important precedent. 

Brandeis’s conception of leisure was all-encompassing, both Franklinian and Jeffersonian. His clerks became accustomed to leaving notes at his office door at ungodly hours and watching the slips of paper silently be withdrawn underneath; meanwhile, he preached the value of free time well spent and threw himself with equal vigor into philanthropy, Zionist activism, and recreation. “I can do 12 months’ work in 11 months, but not 12,” he once said. 

On the nation’s birthday in 1915, Brandeis gave a speech in Boston that asked how, in an age of immigration and nativism, one should define what is quintessentially American. He might have stolen his answer straight from Pericles’s mouth: “What are the American ideals? They are the development of the individual for his own and the common good.” 

And how should one develop the individual? “The worker must … have leisure,” Brandeis said. “But leisure does not imply idleness. It means ability to work not less, but more—ability to work at some thing besides breadwinning … Leisure, so defined, is an essential of successful democracy.”

It wasn’t enough that a citizen has the opportunity to improve himself, Brandeis added: “He must be free. Men are not free if dependent industrially upon the arbitrary will of another. Industrial liberty on the part of the worker cannot, therefore, exist if there be overweening industrial power.” On that power, he said, “some curb must be placed.”

It’s hard, Brandeis was saying, to start and maintain a small business when monopolists have the market cornered. And it’s hard to pass laws restricting work hours and child labor when monopoly corporations have undue influence over the machinery of government. Progressive reformers like Woodrow Wilson eventually curbed corporate power by enforcing and strengthening antitrust laws. But the Supreme Court blocked their efforts to limit working hours and ban child labor. 

Two decades later, Franklin D. Roosevelt took up those causes. “A self-supporting and self-respecting democracy,” he proclaimed in 1937, “can plead no justification for the existence of child labor, no economic reason for chiseling workers’ wages or stretching workers’ hours.” The following year, he signed the Fair Labor Standards Act, and the justices, cowed by his court-packing threats and by changing public opinion, let the law stand. 


At the same time, FDR’s Justice Department cranked up antitrust enforcement—efforts that continued under four subsequent administrations. With the size and market power of corporations in check, entrepreneurship flourished. America enjoyed a decades-long period of broad prosperity and growing leisure the likes of which it had not seen in a century. Other federal interventions, like the introduction of the 30-year fixed mortgage, helped create a modern version of American yeomanry: the suburban homeowner, with his well-trimmed yard and white picket fence. Again, economic independence moved in concert with political stability, though midcentury efforts to include those left out of the bargain—especially Black Americans—would provide leverage to tear it apart.

Now we are again in a Gilded Age, but one without the language that Brandeis and his allies used to break the control of monopolist overlords. Over the past 40 years, Americans have been encouraged to separate politics from economics, the latter supposedly being the domain of experts better suited to the higher-order calculations that keep the great machine humming and the profits trickling down to all. As a result, the average person has been unable to protest the nation’s growing inequality—at least, not in a way that feels true to America’s shared values. (You want the average worker to have a baseline amount of vacation time and parental leave? What are we, France?)

After the pandemic, the discontent with work runs so deep that some are ready to throw the arrangement out altogether. Recently in Harper’s, Erik Baker linked this disillusionment to the 20th-century rise (and recent decline) of the “entrepreneurial work ethic”—the idea that Americans ought to find their passion and pour themselves into it because “when you do what you love, you won’t work a day in your life.” One might think of it as a reversal of the ancient ethic of scholê-ascholia: You depend on your work, rather than your leisure time, for personal fulfillment. The trouble is that, in present-day economic conditions, the outcome of your strivings (and how much you’re paid for them, and for how long) is in someone else’s hands. The most likely result of this dilemma, Baker concludes, is that Americans will simply give up and reject the entrepreneurial work ethic “in favor of a more cold-blooded understanding of work as a simple exchange of drudgery for money.”


Does it have to be that way? A crisis that, from one angle, might evoke cynicism or despair could, from another, present an opportunity. Instead of just giving up on work, now could be the time to revive an ancient, and foundationally American, understanding of how our toil, complemented by leisure, creates civic virtue. Fighting for remote work can be a first step in advancing those wider goals.

But there are lessons to be learned from the last Gilded Age, when efforts to restore workers’ autonomy and give them edifying leisure time were crushed, one after another, by monopolist oligarchs. Right now, a tight labor market provides workers leverage to demand work from home. Current economic conditions, which have spurred an outflux of white-collar workers from big, expensive cities, can also help to build a national labor market that gives more options to workers and less power to big employers based in a few desirable cities like San Francisco or New York.

The current worker-friendly market won’t last forever, however. Both pillars of the civic republican tradition—economic agency and free time well spent—can be upheld over the long term only by curbing the market power of the large corporations that would take it away. That’s one reason why the Biden administration’s aggressive new antitrust efforts, which are popular even with some Republicans, are so important. Opening consolidated markets will lead to more competing firms that employees can play off against each other to get better pay and benefits, including the right to work from home. And, perhaps counterintuitively, anti-monopoly action can also improve American leisure. Thanks to massive, predatory corporations, much of our free time is spent swirling in the vortex of social media algorithms calibrated to hypnotize and enrage us, rather than teach us something new or inspire us to contribute to our communities. Part of envisioning America’s civic future is thinking about ways to encourage technology that rebuilds community rather than atomizing it.

At first glance, it might seem morally obtuse to worry about comparatively affluent Americans fighting to retain a “lifestyle” benefit when others can barely pay their rent. But the changes needed to win the right to work from home will benefit everyone. That’s because reducing the economic power of monopolies will also lessen their inordinate political power. And that will make it easier to pass laws that help less advantaged workers—for instance, requiring corporations to provide health insurance and paid vacations to truck drivers, office cleaners, and others they employ as independent contractors.

Now that employees have tasted the benefits of remote work, the law of loss aversion suggests that they will be loath to give up those freedoms. The next step is to remind them that this is not just a fight over narrow personal concerns but also over the health of democracy.


Greenwashing Big Ag

A bipartisan law claiming to tackle greenhouse gas emissions instead just helps the agriculture industry launder its reputation.



Late last year, Democratic and Republican lawmakers performed a kind of Washington magic trick. In this famously acrimonious time, a bipartisan group not only succeeded in passing a bill designed to take on greenhouse gas emissions in the agricultural industry, which is responsible for as much as a third of all global climate pollution, but did so while appearing to please almost everyone. 

The law, the Growing Climate Solutions Act, passed as part of the big year-end government funding package. It was cosponsored by more than half the Senate and heralded by top Democratic and Republican leaders, including Agriculture Secretary Tom Vilsack and the ranking Republican on the Senate Agriculture Committee, John Boozman. It was also endorsed by more than 175 nonprofits, corporations, agricultural trade associations, and climate activist groups. “The inclusion of the Growing Climate Solutions Act in the omnibus is a tremendous bipartisan victory that will help combat climate change while rewarding farmers for their climate-smart practices,” Jennifer Tyler of the Citizens’ Climate Lobby, a grassroots advocacy group, said in a statement.

The legislation was built around a simple idea. The federal government would help facilitate private, voluntary, farm-based “carbon markets,” wherein corporations, like Microsoft or Amazon, can purchase from farmers special credits, known as carbon offsets. In exchange, the farmers agree to keep carbon in the soil by, say, planting cover crops or improving cattle grazing methods. Big agricultural companies can also pay farmers within their own supply chains to store carbon in the soil, thus similarly claiming a special credit, known in that case as a carbon inset. Either way, big polluting corporations can purchase enough credits to claim that they are “carbon neutral” or a “green” company in commercials, on packaging, or in presentations to investors and board members. Meanwhile, farmers get to pocket a nice paycheck for doing the right thing. Democrats applauded the law for helping to deliver on Joe Biden’s campaign promise to make agriculture “the first net-zero industry in America,” while Republicans cheered it for helping farmers, corporations, and the environment while avoiding new regulations or government spending. A win, win, win. 

Unfortunately, it was too good to be true. These private, voluntary farm-based carbon markets don’t actually do what they purport to do. They don’t make big polluting corporations carbon neutral. They don’t guarantee that anyone cuts their carbon emissions. And they don’t generally encourage farmers to transform their operations to remove the most carbon. In fact, they don’t even really function as markets at all. Within these shadowy, private exchanges, there is no agreed-upon standard for what counts as “sequestered carbon”; no central oversight mechanism; no cap on corporations’ total allowable carbon use; and no penalty for cheating. Studies of other carbon markets reveal that the vast majority of offsets and insets fail to remove any additional carbon at all. The result is that these farm-based carbon exchanges function, essentially, as state-sanctioned greenwashing facilities. 


The Growing Climate Solutions Act didn’t create these unregulated exchanges, but it did offer them the powerful endorsement of the U.S. government—and that’s arguably worse than if Congress had done nothing at all. By lending credibility to these loosely organized programs, the government is helping to fuel already-surging corporate demand for carbon offsets—which may seem like a good thing, but remember, these voluntary, unregulated exchanges operate according to a kind of magical accounting, wherein the number of carbon offsets sold is often entirely unrelated to the amount of new carbon released into the atmosphere. By one estimate, in order to meet their net-zero goals, corporations will demand two to four times more land-based carbon removal offsets than the Earth’s plants and soil could even plausibly supply. 

It gets worse. With so many phony offsets being bought and sold, these government-endorsed private carbon exchanges may, perversely, result in an increase of total emissions, by allowing big polluters to continue business as usual and to push off demands from activist investors and the public to meaningfully change how they operate. Take, for example, the global meatpacking behemoth JBS. According to one study, JBS’s annual climate footprint in 2021 was already larger than the entire nation of Italy’s. And it’s continuing to grow: The company plans to continue expanding its livestock production business, which generates 90 to 97 percent of its climate footprint. Yet, by leveraging precisely the kind of pay-for-carbon schemes endorsed by the Growing Climate Solutions Act, JBS will soon be able to label itself a “green” company. It claims that it will achieve “net-zero” carbon emissions in the next decade and a half. It might then be possible for a truly environmentally destructive company like JBS to be included in mutual funds sold to environmental, social, and governance (ESG) investors. It’s a farce. 

People on both sides of the ideological spectrum should find this state of affairs depressing because it didn’t need to be this way. We already have effective, voluntary, and broadly respected policy solutions that address greenhouse emissions in the agricultural industry. At a bare minimum, Congress could have increased funding for the handful of federal farm programs that already exist, are already popular among actual farmers, and already support a greater variety of sustainable farming practices than carbon exchanges could ever reach. Dramatically reducing carbon emissions in the U.S. agriculture sector is within reach. But Washington’s misguided enthusiasm for farm-based carbon credits leaves us even further away from that goal. 

The federal government first experimented with allowing companies to trade in pollution credits in the 1980s, as part of an effort to phase out leaded gasoline. The idea was, in part, to make environmental cleanup more efficient, since it might cost less for a big polluter to pay someone else to reduce pollution elsewhere than to reduce their own emissions. 

The 1990 Clean Air Act built on that theory by creating the first cap-and-trade market to break political gridlock around tackling acid rain. The government created a market for sulfur dioxide, the driver of acid rain, by setting a shrinking “cap” on the total amount of pollution allowed, then let polluters meet their requirements by “trading” pollution allowances. Sulfur dioxide pollution decreased dramatically during the program—and lawmakers called the effort a huge success. A handful of retrospective studies have since shown that concurrent changes in the coal and rail industries, which made it much cheaper to ship low-sulfur coal from the western U.S. to power plants in the East, were likely responsible for a large part of the reduction in pollution. But at the very least, cap-and-trade was broadly seen as a politically palatable way to set some pollution limits that otherwise might not have been set at all, and the narrative about cap-and-trade’s acid rain success has buoyed bipartisan support for pollution trading ever since.

Over the past three decades, cap-and-trade has expanded to carbon pollution, most notably in the European Union and the state of California. Public pollution-trading policies have also given rise to private pollution-offsetting projects. Some regulatory cap-and-trade regimes have started letting polluters buy offsets to comply with their shrinking emissions cap, which is where farm-based offsets could come into play. While regions with pollution-trading policies have generally lowered their greenhouse gas emissions, it’s not always clear that cap-and-trade drove those reductions. The EU’s cap-and-trade program appears to have played a moderate role in reducing greenhouse gas emissions, whereas California’s program has had little to no effect. What’s clear is that the structure and implementation of cap-and-trade programs matter a lot. The most effective cap-and-trade programs have the strictest rules and oversight, in which regulators set and enforce aggressively low caps, closely monitor emissions, and penalize noncompliance. The best programs also prevent pollution allowances from becoming too cheap or painless to get.

The private, voluntary farm-based carbon exchanges endorsed by the Growing Climate Solutions Act have none of these features. There is no “cap” on total pollution, no strict oversight body, no penalties for noncompliance, and these private farm-based carbon exchanges are entirely voluntary. Companies choose to buy agriculture carbon offsets, almost always for public relations reasons, then track and self-report their own progress using their own internal metrics—many of which employ an imaginative flourish. For instance, the U.S.-based lumber baron Weyerhaeuser has claimed a carbon reduction credit for cutting down trees—organisms that, if left living, remove carbon from the atmosphere. Weyerhaeuser’s argument was that making a tree into, say, a bookcase, released less carbon than allowing that tree to burn or decompose. If companies fail to meet their own goals, according to their own, self-imposed rules, there’s no financial or regulatory penalty. An analysis by Bain found that corporations miss their own sustainability targets 98 percent of the time. 

Perhaps more fundamentally, these private exchanges do not function as “markets” in the first place because there’s no shared understanding of what’s being bought and sold. Here’s how they work: A collection of private, unorganized, self-regulated companies sell carbon credits directly to corporate buyers. Most of the time, these carbon credit companies hire a third-party “verifier” to certify their claims of how much carbon has been sequestered per credit they’re selling. But that process is dicey. There are no state, federal, or even industry-wide rules defining what counts as sequestered carbon. Instead, each third-party verifier creates its own protocols for measuring and evaluating sequestered carbon, then certifies claims based on those standards. The result is a Wild West of accounting. One 2021 evaluation found such wide variation between different companies’ protocols that they “run the risk of creating credits that are not equivalent or even comparable.” The farm inset landscape operates similarly, except that corporations claim and certify credits for carbon sequestered within their own supply chain. Some critics say that insets are little more than self-dealt offsets.

Within this hall of smoke and mirrors, it should come as no surprise that carbon exchanges are awash in false claims, double-counted credits, and outright fraud. A 2017 report by the European Commission estimated that 73 percent of the carbon credits in the EU’s carbon trading system had a low likelihood of reducing emissions. In California’s forest-based offset program, landowners successfully exploited the state’s oversimplified carbon accounting methods to claim millions of meaningless carbon credits. The result was a net increase in emissions as of 2021. 

One of the biggest challenges of many farm-based carbon exchanges is the so-called additionality problem. That’s when the carbon credits that are being bought and sold don’t represent any new, or “additional,” carbon reductions. Consider, for example, a farmer who owns a tract of forest that she has no intention of cutting down. Under many offset programs, she could credibly sell a carbon credit to, say, Microsoft, for agreeing not to cut down that tract. Microsoft gets to bank a carbon credit and she gets a paycheck. Everyone wins—except the environment. Because, of course, from a climate perspective, nothing has changed; the same amount of carbon would be in the atmosphere had that transaction never occurred. This bit of trickery is extremely common in pollution markets. Studies of the UN’s carbon offset scheme, the Clean Development Mechanism, found as many as 85 percent of available offsets likely fail to represent any additional carbon reductions. 

There’s an especially high risk that farm offsets will fall prey to the additionality problem when prices are too low. Right now, for example, a company like Amazon can purchase a farm-based carbon offset for bargain-basement prices that trickle down to about $20 per acre for farmers. That’s nowhere near enough to entice skeptical farmers to go through the trouble and cost of meaningfully changing their operations. 

But even if farm-based carbon programs got their prices just right and solved the additionality problem—two really big ifs!—there are still three fundamental problems with structuring public policy around farm-based carbon exchanges. The first is that we simply don’t have an easy or cost-effective way to quantify how much carbon is sequestered in soil as a result of farmers using new practices, like planting cover crops. That’s because different soils have very different absorption capacities. “Some soils have more room, and some have less room,” says the University of Nebraska professor Humberto Blanco, who studies soil science. Even on the same, seemingly uniform field, soil carbon concentrations can vary fivefold. Any truly accurate accounting would require prohibitively expensive, site-specific sampling, performed deep in the ground. (Measuring just the top foot of soil often overestimates carbon sequestration potential.) As a result of these challenges, almost all carbon offset programs use limited sampling and imperfect models, which can lead to gross overestimates and generalizations in how much carbon is actually being sequestered. 

The second fundamental problem of farm-based carbon exchanges is that carbon credits are inherently unstable and impermanent. Think about a farmer who sells a carbon offset by agreeing not to plow his fields and instead to plant crops with no-till methods. As soon as he, or a future farmer on that same land, chooses to till those fields, all of the built-up soil carbon is released. The same is true for forests conserved to store carbon. What happens when those forests go up in flames, sending all their carbon into the atmosphere? Should farmers or landowners pay companies back for the carbon offsets they’d promised? Will companies adjust their offsetting claims? 

The third problem is that private, farm-based carbon exchanges are, more often than not, subject to agribusiness’s definitions of what qualifies as “climate smart” farming, which has the effect of skewing programs toward practices that serve agribusinesses’ interests, rather than the best environmental outcome. For example, the seed and chemical giant Bayer launched a carbon program in 2020 that only pays farmers to reduce tillage or plant cover crops. While both practices have real environmental benefits, they are much less effective at sequestering soil carbon than other practices, like planting trees or shrubs between crop rows or in buffer zones. One study found that, even by conservative estimates, agroforestry practices like these can sequester two to five times more carbon per acre than practices such as no-till or cover cropping. But companies like Bayer aren’t interested in paying farmers to do something that means they’ll buy fewer proprietary seeds or chemical treatments. In fact, paying farmers to reduce tillage and plant cover crops actually boosts Bayer’s sales, since large-scale, conventional commodity crop farms generally use a Bayer product, Roundup, to control weeds that would otherwise have been tilled under and to “knock down” cover crops when it’s time to plant a cash crop. Across the board, carbon offset or inset programs disproportionately reward tweaks to the status quo over transformational change.

There’s a good reason why many Democratic and Republican lawmakers seized on carbon exchanges as a potential solution to reducing emissions in the agricultural industry. There’s genuine bipartisan belief that pollution exchanges can work, and it’s also a politically easy path forward. Voluntary carbon exchanges impose no pain points on industry, touch no political third rails, anger no arsenal of lobbyists. But the reality is that any effort to meaningfully reduce greenhouse gas emissions requires getting big polluting agricultural companies to shift how they do business—which requires the hard work of holding them to account. 

That means, at a minimum, that regulators at the USDA and the Environmental Protection Agency must begin treating agriculture like any other polluting industry. Currently, many big agricultural companies, including animal feeding operations, enjoy special deals to avoid air pollution standards in exchange for funding monitoring, and leverage other carve-outs that shield them from complying with federal pollution laws, including the Clean Air and Water Acts. Forcing such companies to simply abide by the same rules that apply to every other industry and tweaking standards to better cover large livestock farms would have a huge effect: The top 6 percent of animal feeding operations produce more than 85 percent of U.S. animal agriculture’s climate pollution, and animal agriculture generates roughly 80 percent of all U.S. agriculture emissions. The EPA and USDA could also do a better job of zeroing in on specific farming methods, like over-applying synthetic fertilizer that doesn’t get absorbed by plants. Such wasteful methods produce outsized greenhouse gas emissions in the form of nitrous oxide. 

There’s also a clear, pragmatic—and politically feasible—policy road map. It starts by expanding and strengthening the farm-based federal environmental programs that already exist. Take, for example, two U.S. Department of Agriculture programs: the Environmental Quality Incentives Program (EQIP) and the Conservation Stewardship Program (CSP). These programs pay farmers to make their operations more climate friendly—by, for instance, planting cover crops, managing rotational grazing, using agroforestry, and restoring wetlands—and they’re extremely popular. Congress hasn’t appropriated anywhere near enough money to accept all of the farmers who want to participate in them. A study by the Institute for Agriculture and Trade Policy found that the USDA denied more than half of all EQIP and CSP applications between 2010 and 2020. That’s a great problem to have. Congress and the USDA should move immediately to fully fund these programs to meet current interest and expand them in the future, delivering an easy win to farmers.

But expanding these programs isn’t enough; Congress needs to make them better, by changing their funding priorities to offer the greatest rewards to farmers pursuing practices that have the greatest climate impact. An analysis by the nonprofit watchdog Environmental Working Group found that only 23 percent of EQIP funds distributed between 2017 and 2020 went to practices that reduce greenhouse gas emissions. Far too many taxpayer funds bankroll expensive false solutions, like manure biodigester systems on large hog and dairy farms. 

These solutions aren’t particularly flashy. It’s difficult to call a press conference announcing that the USDA is dramatically improving an existing program, or to cast a congressional decision to fully fund EQIP as a “bipartisan triumph.” But such basic, deliberate policy pushes could do much more to address the climate crisis than farm-based carbon markets ever could.


Liberty on the Ballot

How Biden’s freedom agenda could win him a second term and save the republic.



Joe Biden officially launched his reelection campaign in April with a three-minute video laying out the stakes in this election: the survival of the American experiment itself. 

“The question we’re facing is whether in the years ahead, we have more freedom or less freedom. More rights or fewer. I know what I want the answer to be, and I think you do too,” the president said, as images flashed by at subliminal speeds of him speaking to union workers on a factory floor, in the Rose Garden, and at the opening of a new Amtrak railroad tunnel in Baltimore. “This is not a time to be complacent.” 

He made the case that, as in 2020, we’re in a battle for the soul of the country. On his side is a commitment to decency, honesty, respect, democracy, and freedom itself. On Donald Trump’s side there’s a die-hard embrace of hatred, lies, and repression, illustrated by tear gas clouds and armed insurgents engulfing the Capitol and a lone woman protesting the overturning of Roe v. Wade in front of the Supreme Court. “Around the country, MAGA extremists are lining up to take on those bedrock freedoms … dictating what health care decisions women can make, banning books, and telling people who they can love, all while making it more difficult for you to be able to vote,” he added. Freedom is the most important and sacred thing to Americans, he said, and making sure we’re all given “a fair shot at making it” is essential to securing it.

This is how Biden has decided to run for what will almost certainly be a rematch with Trump, who, despite launching a coup attempt two and a half years ago to overthrow the republic, is the likely Republican nominee rather than a prison inmate. As someone who has advocated for an approach like Biden’s to repair our democracy since before MAGA was even a thing, I was pleased to see it receive an uncharacteristically warm reception from mainstream commentators, including the columnists E. J. Dionne and Thomas Edsall, the pollster Celinda Lake, ABC News political director Rick Klein, and the union boss Mary Kay Henry. 

But Biden has set up a rhetorical foundation that can anchor more than just the near-term defense of the republic against a proto-fascist movement; he is also providing a long-term agenda that addresses the root causes of the crisis we’ve found ourselves in. 

As Nicholas Lemann argued in the January/February/March issue of the Monthly, we absolutely need a new political economy capable of unwinding the untenable concentration of economic power and the staggering inequality and social and political instability that has followed from it. Lemann sketched some general principles—among them, that in a healthy democracy, economic policy isn’t left to technocratic experts, as the United States has done for the past half century, but is ultimately settled by the clash of competing interests in the political process, as was the case for most of U.S. history. I’ll take that analysis a step further. Even if the Trumpists were vanquished tomorrow, the American experiment will remain vulnerable unless we stop the descent down the laissez-faire crevasse and recommit to building an economy meant to bring shared prosperity. 


I’m going to expand on that here, outlining a philosophical framework designed to further our Constitution’s stated mission: to “promote the general welfare and secure the blessings of liberty, to ourselves and our posterity.” In other words, to ensure the common good and the individual’s liberty, intergenerationally. But first it’s important to understand what went wrong and the consequences of not making a significant change of course.

The political class has been waking up to the fact that Trumpism is going to remain a force with or without Trump himself. Even after he tried to stage a coup, hoarded classified records, got indicted for business fraud, promised to pardon convicted January 6 insurrectionists, and was ordered by a jury to pay $5 million for sexually abusing a woman and lying about it, Trump remains the front-runner for the Republican presidential nomination. Rather than distance themselves from this seditious, corrupt, and amoral man, his rivals within the party emulate him in a race toward fascism, banning books, persecuting transgender people, valorizing vigilantes, and vanishing gays and lesbians from public schools and libraries even as they purge from textbooks mention of why Rosa Parks was asked to move to the back of the bus. They do this because the Republican base wants it. Recent polls show that 68 percent of Republicans still back Trump in his confrontation with the rule of law and two-thirds would vote for him for president even if he were convicted of crimes. Among Republicans, 42 percent say a strong unelected leader is preferable to a weak elected one; 44 percent say the “true American life is disappearing so fast we may have to use force to save it”; nearly 60 percent think the country’s changing demographics pose “a threat to white Americans and their culture and values”; more than 60 percent believe transgender people are “trying to indoctrinate children into their lifestyle”; and a majority say the U.S. stands on the brink of civil war. No wonder so many GOP aspirants are emulating the orange man.

Trump and his fellow demagogues have galvanized a populist movement not unlike that of Hungary’s Viktor Orbán or prewar Vladimir Putin, one that promises to create an illiberal democracy where the aspirations of the so-called majority shall not be inhibited by concessions to the civil liberties of the “others,” be they minorities, immigrants, annoying journalists, or political opponents. The movement has elected authoritarians to represent Ohio and Missouri in the U.S. Senate, to serve as chief executives in Florida and Texas, and to hold seats for swaths of northern Georgia, central Colorado, eastern Houston, and the Florida panhandle in the U.S. House. Polls suggest that the demographic is large enough to give Trump a shot at returning to the White House. This is exactly how liberal democracies fail. They’re voted out of existence by an electorate that has lost faith in its characteristics: free and fair elections, universal suffrage, the separation of powers, the rule of law, and equal protection of citizens’ human rights, civil rights, and liberties. One party is already there, which means we remain one election away from disaster.

How could this have come to pass? Because both political parties, for a half century, essentially abandoned a large chunk of the U.S. electorate, creating an angry, impatient, and illiberal constituency that has upended the Republican Party and can still defeat Democrats and pro-democracy Republicans at the polls. Yes, there’s always been a latent authoritarian, white supremacist constituency in this country—they ruled half the republic for most of our history, after all—and Trump himself speaks and acts from their playbook. He’s a hatemonger with autocratic intentions. But the Trumpist movement is bigger than that. It has drawn in millions of people who voted for Barack Obama twice. It was sufficiently attractive to some 12 percent of Bernie Sanders’s supporters that they voted for Trump in the 2016 general election—a bloc that was larger than his margin of victory in Wisconsin, Michigan, and Pennsylvania. It convinced millions of Latinos who voted for Hillary Clinton in 2016 to vote for Trump in 2020. Fourteen percent of African American men—a group presumably not beguiled by white ethnonationalism—voted for Trump in 2016, and 12 percent stuck with him in 2020. 

The core of Trump’s support—indeed, that of Sanders as well—is white Americans without college degrees, people whose economic interests have been ignored by both parties for decades. An analysis of 2016 voters by the political scientist Lee Drutman showed that almost all of Trump’s general election supporters were conservative on social issues, but on the economic front they were split almost evenly between liberal and conservative tendencies. (The Obama voters who then chose Trump—about 9 million of them in 2016—were almost entirely economically liberal and socially conservative.) This was an electoral coalition of traditional, Mitt Romney–style Republicans (happy with the pro-rich economic order) and angry “populists” (very unhappy about the same), many of whom were likely identifying with the Tea Party movement before Trump came along. It doesn’t really matter if the populist voters are primarily upset about their own economic situation or a perceived loss of status—the issues are related. Take away voters who were unhappy with the 21st-century economy, and Trump wouldn’t have won the 2016 primary, let alone the presidency.

Now, for the first time in a century, we face an illiberal authoritarian movement that is politically competitive on the national stage. And unlike in the heyday of white Anglo-Saxon Protestant supremacy in the 1910s and ’20s, this movement doesn’t even pretend to be upholding the classical republican values of ancient Athens. It’s working from the rhetorical script of the early Nazis, Slobodan Milošević, and contemporary Russia. That means freedom absolutely is the issue in the 2024 election, and Biden is absolutely correct to be running on it. And he’s also pointing in the right direction to get us out of the danger zone.

Freedom is key to all of this. The immediate danger is clear enough: The authoritarians are going to take it away; those of us opposed to them are fighting back. Abortion is the most obvious and potent example. As soon as the Supreme Court’s conservative supermajority overturned Roe, Republicans across the country went from “let the states decide” to efforts to criminalize cross-border travel and commerce and to politicize drug safety. But if abortion is the sharp point of the spear of freedom, that point is being driven into the body politic by the shaft behind it, which is made up of a lot of issues, many of them economic in nature. The challenge for Biden and the Democrats is to articulate how putting our political economy back on the right track is part and parcel of defending and advancing freedom.

Since the beginning, the American political conversation has centered on how to protect and further our liberal democratic experiment—the aspirational pursuit of a society where all individuals can be free. The problem is that we’ve never agreed on how to do that, not just as individuals but as regional cultures, with the Northeast and the Deep South having contradictory traditions that have kept them at loggerheads throughout our history. 

The argument is as follows:

Is freedom ultimately about maximizing the autonomy of the individual, about personal sovereignty and a lack of restraints—especially (but not exclusively) from government? If we had less government, fewer taxes, and less regulation, is it not axiomatic that each of us would be more free? 

Trump and his fellow demagogues have galvanized a populist movement that promises to create an illiberal democracy where the aspirations of the so-called majority shall not be inhibited by concessions to the civil liberties of the “others.”

Or is it that freedom is about building and maintaining the infrastructure and institutions of a free society, the enabling and leveling mechanisms which ensure that each person has a fair shot at achieving their potential, of being meaningfully free, regardless of the circumstances of their birth? Is it a shared endeavor, a social project, a cultivation of a republican citizenry?

I argued in my 2016 book, American Character, that these two sides of freedom—individual liberty and the common good—are both essential components of a liberal democracy, in moderation. If you stray too far toward one or the other, you wind up in tyranny. On the libertarian end, tyranny takes the oligarchic form found in late-20th-century Honduras or El Salvador, where the Five Families or the Fourteen Families (which were always capitalized) had maximized their freedom and killed anyone who wanted some themselves. In the communitarian direction, it’s the Orwellian form of Hitler’s Germany or Stalin’s Soviet Union, where the keepers of the “common good” criminalized dissent and “wrong” thinking, and murdered millions in an attempt to cleanse the nation of the disloyal. Optimizing a free society for the long haul isn’t about one of these aspects of freedom conquering the other; it’s about keeping the two in equilibrium so individuals are neither tyrannized nor deprived of a decent chance at pursuing their freedom and happiness. Different cultures will choose different equilibrium points—Japan is not Australia, and vice versa—which makes things particularly difficult for the U.S., as the centuries-old regional cultures that make up our unwieldy federation don’t agree on these things.

We have regions like the Deep South and Greater Appalachia, where the common good has few friends, and others like Yankeedom—the tier of the Northeast first colonized by New Englanders and their descendants—that prize it. Then there are swing regions that fall in between, like the Quaker-founded, multicultural-from-the-outset Midlands, which cuts through what are or once were swing states: Pennsylvania, Ohio, Iowa, and Missouri. (I explain all of this in detail in my 2012 book, American Nations.) Midlanders are communitarian, but they’re also skeptical of top-down government intervention. Far Westerners—the Great Plains and mountain regions not colonized by Spain—are more individualistic, but the extremities of their settlement environment forced a reckoning with their interdependencies, both with each other and with the federal government and corporate masters. Getting balance in this very individualistic federation of ours has, in large part, been about getting the swing regions aboard a “soft communitarian” regional coalition.  

Is freedom ultimately about maximizing the autonomy of the individual, about personal sovereignty and a lack of restraints—especially (but not exclusively) from government? Or is it that freedom is about building and maintaining the infrastructure and institutions of a free society?

Our democracy has gotten into trouble when we’ve tacked too far in one direction or the other. Antebellum southerners tried, in the 1850s, to force the rest of the country to protect their “liberty to enslave” and to expand it to shared federal territories; 750,000 Americans died in a war to settle the issue. The economic regime created by the Gilded Age collapsed at the end of the 1920s when an unregulated, crony capitalist system devoured itself, ending an era of yawning inequality and decreasing freedom. The crisis was such that on taking office, Franklin D. Roosevelt was urged to seize dictatorial powers to preempt a fascist or communist revolution. Instead, FDR responded with regulations and public investments in the common good—which were wildly popular—but when, in the mid-1930s, he started dabbling in central economic planning (price controls, industrial production quotas), he was severely rebuked at the polls and could well have lost power if Hitler had not started invading countries. 

We’ve again reached such a point. For nearly three decades, between 1980 and 2008, and under administrations and congressional leaders from both parties, the U.S. charged toward laissez-faire individualism—a shift in power, resources, policy, and the law toward the interests of an oligarchical class. Ronald Reagan denigrated the “tax and spend” Democrats but presided over a “tax cut and spend” administration, which ran up massive deficits by cutting taxes for the wealthy in the midst of a huge military buildup. This was intended, his budget director, David Stockman, later admitted, to force huge cuts to social programs and other public investments. It was accompanied by slashing regulations and federal grants to municipalities, which forced draconian cutbacks at public schools, libraries, clinics, hospitals, and housing projects. The number of people living in poverty or experiencing homelessness grew, as did the gap between the wealthy and everyone else, and it kept getting worse from there. His successor, George H. W. Bush, tried to clean up the mess, breaking his “no new taxes” pledge to do so, and was shown the door by a frustrated electorate.

Democrats were back in the White House in 1993, but the march toward libertarian individualism continued. President Bill Clinton didn’t lead the march, and indeed pushed in the opposite direction in some ways, such as creating the AmeriCorps national service program. But on economic policy he largely marched in a libertarian direction: spearheading passage of the North American Free Trade Agreement; preventing meaningful regulation of the credit default swaps that nearly destroyed the world economy in 2008; and delivering the coup de grâce to the Glass-Steagall Act, the cornerstone of New Deal financial regulation, which had forbidden (taxpayer-insured) commercial banks from engaging in speculative ventures and whose power regulators had already largely stripped away. He twice renominated Ayn Rand’s personal friend and apprentice, Alan Greenspan, as the chair of the Federal Reserve, and largely embraced the pro-monopoly merger positions of the Reagan administration.

George W. Bush, raised in Texas’s Deep Southern–settled section, may have occasionally feinted toward the communitarian ethos of “compassionate conservatism,” but he shared the domestic policy concerns of the Deep South’s oligarchy: giving tax cuts to the wealthy and corporations; privatizing Social Security; deregulating energy markets; opening protected areas for oil exploration; appointing industry executives to run the agencies that oversee their own industries; replacing civil servants with corporate contractors; and giving away billions in taxpayer money to corporations in no-bid contracts executed without effective oversight. He scuttled efforts to crack down on offshore tax havens, slashed taxes while engaged in two deficit-funded foreign wars, and so gutted the Federal Emergency Management Agency that it was unable to adequately respond to a hurricane strike on New Orleans. He left his successor to deal with a massive budget deficit, two disastrous military occupations, and the greatest financial and economic collapse since the Great Depression.

Obama sought to bring the country to a middle ground, but it wasn’t enough to overcome the post-2008 populist vitriol or to bring the now-radicalized minority Republicans in Congress on board. Many Americans responded to his call for hope, change, and healing, but few were happy with his decision to let the financial titans off the hook for any criminal accountability, even as taxpayers often paid for their executives’ bonuses, while ordinary mortgage holders lost everything. Consistent with his balanced approach, Obama eschewed single-payer health care reform in favor of a market-driven system championed by the conservative Heritage Foundation and Mitt Romney’s gubernatorial administration in Massachusetts; Republicans demonized him for it all the same. After losing the House, he negotiated a $4 trillion “grand bargain” with Speaker John Boehner—tax increases with entitlement cuts—only to have the Tea Party caucus reject it and instead try to force a default on the national debt. Meanwhile, the anger at the devastation and bailouts caused by crony capitalists raged across the country. It fueled the continued rise of the Tea Party and the Occupy movement, Bernie Sanders, and, most fatefully, Donald Trump.

Demagogues don’t do well when a country is enjoying broad and growing prosperity. They do well when most people’s lives have gotten more perilous. They do well when the gap between the elite and the masses has grown so wide that members of one group seem to always fail upward, avoiding legal or financial consequences for misdeeds, while the other group’s chances of holding ground—better yet, of working up the ladder—have grown more constrained. Then millions are primed to believe outlandish conspiracy theories; to rally to a populist who champions their interests, panders to their resentments, and demonizes their enemies; to back a strongman who will set things right without regard for semantic or even constitutional niceties. Trump exploited that opportunity, and now most Republicans have followed suit. 

Two and a half years into his first term, Biden has racked up enough policy successes that he and his party have a shot at resetting the country by restoring balance to the Force. From June 2022 to April 2023, per capita income in America, after inflation, rose 3.6 percent—the highest real income growth in a quarter century. (Under Trump before the pandemic, it was 2.5 percent.)  Public opinion hasn’t caught up to this reality, which is not surprising—a similar lag occurred in the 1990s, when economically traumatized voters didn’t believe that positive developments would persist. Consequently, Biden’s job approval numbers on the economy remain low. If current trends continue, however, he’s likely to be in a stronger position with voters going into the November 2024 elections. 

Still, to survive the coming GOP onslaught, Biden and the Democrats will need to talk about their past accomplishments and agenda for the future in terms that are persuasive to voters in the swing regions of the country. Seven years ago, in American Character, I wrote that the American way—the set of political values shared by the vast majority of Americans—is about pursuing happiness through a free and fair competition between individuals and the ideas, output, and institutions they produce. If someone becomes fantastically rich through hard work or brilliant innovation, most Americans applaud them. If they squander their opportunities by greed, sloth, or indulgence, most Americans have little sympathy. Rightly or wrongly, we Americans have great faith that when individuals are so freed, their aggregate actions will contribute to the creation and sustenance of a happy, healthy, and adaptable society, one responsive to change and inhospitable to the seeds of tyranny: ignorance, hopelessness, fear, and persecution. There are other approaches to building a happy, thriving society—witness the pre-contact Algonquians or contemporary Scandinavia—but they just aren’t our way. Those are paths we will never follow, at least not without a national breakup. Democrats, understandably concerned about being seen as “blaming the victim,” have trouble speaking plainly about the role of individual initiative and personal responsibility in our national life. But they need to learn how to do it.

Democrats, understandably concerned about being seen as “blaming the victim,” have trouble speaking plainly about the role of individual initiative and personal responsibility in our national life. But they need to find a way.

Notice, though, that I said this competitive society needs to be not just free, but also fair. Indeed, we’ve learned by painful experience that these two values are linked: an unfair society quickly ceases to be a free one. Once formed, monopolistic firms will use their control of their market to (unfairly) crush competitors and their innovations. If left entirely unchecked, the winners of a Darwinian social struggle will seize control not just of a nation’s wealth but also of its government, courts, and internal security, becoming a hereditary oligarchy that (unfairly) prevents others from ever rising to challenge them. Ethnonational states are bad because they exclude those who don’t belong to a chosen tribe from fairly sharing in opportunities, rights, and benefits. Free markets and free societies are not naturally occurring developments, like a mature forest. They’re more like successful gardens, the product of sustained nourishment, attention, and, yes, protection. 

It’s through democratic government that we protect our freedom, be it economic or civic. We use it to keep our external enemies at bay, of course, but also to ensure that our unending domestic competitions remain fairly played. Our system requires a government that’s strong enough to act as our collective referee, one that prevents a slide into corporate or plutocratic oligarchy by stopping “cheaters,” blocking the accretion of hereditary privilege, and maintaining the conditions for free and fair competition. (That we can’t allow it to become so strong that it becomes a tyrannical force is obvious to most Americans, just as it was to the Founders, who built in many checks and balances.) The freedom-and-fairness agenda isn’t about a government handout or hand up, or a plutocracy’s resources trickling down; it’s about the government having your back as you make your way in the world (if you’re not one of the 0.1 percent at the top) or keeping your power in check (if you are). As Americans, we’re committed to defending each other’s equal moral right to pursue happiness, participate in our politics, and not be tyrannized, which is why government should vigorously respond to discrimination and disenfranchisement.

Free markets and free societies are not naturally occurring developments, like a mature forest. They’re more like successful gardens, the product of sustained nourishment, attention, and, yes, protection. It’s through democratic government that we protect our freedom, be it economic or civic.

Biden and his party have a record of policy achievements that beautifully match this freedom-and-fairness creed. The administration, for instance, has begun reversing four decades of lax federal antitrust enforcement that has allowed corporations in a few big metro areas to monopolize much of the economy and has narrowed freedom and opportunity for entrepreneurs, employees, and smaller cities and towns in the middle of the country. The administration has blocked airline mergers that would have raised ticket prices and reduced choices for travelers, sued Google for cornering digital ad revenues and thereby killing off local news outlets that average Americans trust, and proposed a ban on “noncompete” agreements that rob employees of the ability to negotiate higher wages by seeking jobs at rival firms. At times, the president has discussed his antitrust actions in eloquent freedom-and-fairness language. “Capitalism without competition is not capitalism,” he said in his 2023 State of the Union address. “It is exploitation.”

Biden and the Democrats have other big achievements to brag about, but so far they haven’t consistently done so in freedom-and-fairness terms. The infrastructure bill—passed with some Republican support—is a big communitarian investment package that maintains and expands the bridges, roads, tunnels, ports, and rails that allow Americans to freely participate in economic and social opportunities regardless of where they live, and keeps clean water and power running to their homes and communities. The Inflation Reduction Act made nearly $400 billion in clean energy investments, giving hundreds of millions of Americans and their descendants potential freedom from dependence on unreliable petroleum markets controlled by despotic regimes in Russia and the Middle East—and also quicker access to that ultimate expression of American freedom, latest-technology cars, this time low-maintenance, fast-accelerating electric ones. The legislation also made the wealthy and corporations begin to pay closer to their fair share through a new corporate minimum tax, a new 1 percent tax on stock buybacks, and better enforcement and collection by the IRS.

Americans are practical-minded people. They may want government to have a limited scope of action, but they also want it to function well in those areas where it operates. And the vast majority of them want to live in a free and fair society. Biden’s pledge to ensure that Americans have freedom and a fair shot at succeeding is one that can speak not just to Democrats but also to the aspirations of an entire people. 

Most Americans also don’t want to live in the authoritarian, fascistic world Trump and his emulators are trying to create. They don’t hate their neighbors or fear trans kids or want LGBTQ people erased from schoolbooks, Target stores, and legislatures. They don’t want their government overturning democratic elections or pardoning convicted seditionists or kidnapping toddlers from migrant parents at our borders or deploying soldiers to crush those who demonstrate against it. They want women to have control of their bodies and their children to be free to go to school without the need for Kevlar, armed guards, and terrifying safety drills. They don’t think America should be an ethno-state of white Christians. But they need leaders to make stark the alternatives and to rally them to the cause: to build an America that is truly great because it’s a place where we all fight for each other’s inborn and equal entitlement to freedom. Biden has taken the first steps. We’d best not turn back.

The post Liberty on the Ballot appeared first on Washington Monthly.

Don’t Blame Medicare for Rising Medical Bills, Blame Monopolies https://washingtonmonthly.com/2023/06/19/dont-blame-medicare-for-rising-medical-bills-blame-monopolies/ Tue, 20 Jun 2023 01:25:00 +0000 https://washingtonmonthly.com/?p=148075

For decades, hospitals have insisted that they charge the privately insured more to offset losses from Medicare patients. A health care regulator blows the whistle on that myth.

The post Don’t Blame Medicare for Rising Medical Bills, Blame Monopolies appeared first on Washington Monthly.


Medicare’s hospital insurance trust fund is officially projected to be exhausted within five years. To close the gap, President Joe Biden has proposed a tax on families earning more than $400,000, while also calling for large cuts in how much the government reimburses Medicare Advantage plans. Meanwhile, Republicans, from Paul Ryan to Ron DeSantis, have a long history of efforts to defund the program. But what if both sides are missing the point? America does face a big-time health care financing crisis. But Medicare is not the reason.

Yes, Medicare faces a shortfall, but it’s modest. The cost of the main hospital insurance trust fund, which currently comes to just 1.6 percent of GDP, is expected to rise by only half a percentage point by 2045 before flattening out and even falling thereafter with the passing of the Baby Boom generation. By contrast, the continuing cost of the tax cuts implemented under George W. Bush and Donald Trump, measured as a share of GDP, is more than seven times as large. Or to take another measure: according to the “intermediate” range projections by the system’s actuaries, gradually increasing payroll taxes by less than one percentage point before 2045 would keep the trust fund solvent indefinitely.

But even if Medicare is basically fine, the outlook for private health care plans, which cover the majority of working-age Americans, is not. The biggest reason is that the prices these plans and their members pay to doctors and hospitals are out of control. According to researchers at RAND, the prices paid by private payers for hospital care are nearly two and a half times as high, on average, as the prices paid by Medicare for the same treatments in the same hospitals. And the gap keeps increasing. According to a 2020 study by the Congressional Budget Office (CBO), the cost per enrollee for hospital and physician services under traditional “fee-for-service” Medicare rose by just 0.2 percent more than the rate of general inflation from 2013 to 2018. By contrast, the per-person costs in private plans rose by nearly double the rate of general inflation.

Because of those high and rising prices, the cost of health care consumed by a typical middle-class family of four with a typical employer-sponsored preferred provider organization (PPO) plan reached $30,260 in 2022, according to the Milliman Medical Index. This cost is borne almost entirely by the families themselves because, as any economist will tell you, even when the employers nominally cover the premiums, they do so by paying less in wages and other benefits. It’s a burden on middle-class families that is rising far faster than their ability to pay, increasing by nearly $9,000 for the typical family of four between 2010 and 2018 and by another $5,000 in just the past two years. Commercial health insurance doesn’t have a trust fund, but if we accounted for it the same way we do for Medicare, it would show a huge, unsustainable long-term deficit.

Why aren’t we talking about this? The biggest reason is the power of a myth, promoted by monopolistic hospitals, other health care providers, and private insurers, that continues to pervert the thinking of both Republicans and Democrats when it comes to health care finance. It’s time to unpack that myth and show how it obscures what’s really driving up health care prices. 

According to groups like the American Hospital Association, hospitals and doctors charge so much more to treat patients covered by private insurance because they lose money treating Medicare and Medicaid patients and need to make up the difference. It’s a talking point that’s been around for decades, and lots of people involved in the practice of medicine, as well as policy makers and regulators, fervently believe that it’s true. 

I was once one of them. In the early 1990s, I was a physical therapist in private practice. At the time, Medicare cut its reimbursement rates, and the program would pay me only about 80 percent of what I usually charged for treating commercially insured patients. This made me feel like I was losing money every time I treated someone on Medicare, and to make up for that “loss,” I felt that I was more than justified in raising my rates for patients with private insurance. 

But while I told myself I was simply shifting costs to close a deficit, that was not what was really going on. As many studies now show, that story might provide a comforting rationalization to providers when they raise prices, but it has little basis in reality. 

How do we know this? 

For one thing, it turns out that there is no correlation between what hospitals get for treating Medicare and Medicaid patients and what they charge for treating everyone else. 

Medicare, for example, has periodically increased reimbursement rates and expanded enrollment. But when this happens, hospitals and other providers don’t cut their rates for treating privately insured patients. Instead, most keep raising their prices for private payers, and the rest simply leave them unchanged.

Similarly, after passage of the Affordable Care Act, some states expanded Medicaid coverage and some did not. Yet private prices did not rise or fall in lockstep. Instead, they continued to rise at most, but not all, hospitals in expansion and non-expansion states alike.

Or again, when Medicare and Medicaid rates go down, providers don’t uniformly raise prices for everyone else, as they would if they were simply trying to make up for inadequate government funding. Medicare prices were cut in 1990 and 1993 as part of deficit reduction plans. But in many cases providers responded not by raising prices for commercial plans but by cutting them, in hopes of getting more privately insured patients. 

In 2020, the CBO reviewed hospital prices before and after an increase in public-payer reimbursements. It found continued wide variation in what hospitals charge and concluded that there was “no evidence that the share of providers’ patients covered by Medicare or Medicaid played any part in price variation.” 

If cost shifting is not the explanation for why providers keep raising the prices they charge for treating people with commercial health insurance, what is? The answer, it turns out, is increasing monopoly power. In places where hospitals still face competition from other hospitals, the prices they charge private payers are much closer to Medicare and Medicaid prices because otherwise they would lose business to their rivals. To make ends meet, such hospitals must carefully control their expenses, and when they do, according to a landmark study by Jeffrey Stensland of the congressionally chartered Medicare Payment Advisory Commission, they can make profits even while treating patients with Medicare and Medicaid. 

But when hospitals lack competition and start prioritizing margins over mission, they increase the prices they charge for treating people with commercial insurance just because they can. 

Here’s how that plays out behind the scenes in the normal course of business. Every hospital maintains what it calls a “chargemaster.” It’s a price list for every diagnostic test, treatment, procedure, bandage, and aspirin. But virtually no one pays these “retail” prices, except sometimes the uninsured. Medicare and Medicaid set their own prices for different procedures, and hospitals and doctors, by law, must accept these prices when they treat people covered by these programs. By contrast, the prices providers get for treating people with commercial coverage are a matter of backroom, usually secret, negotiation with different commercial insurance plans. 

Every year in every health care market around the country, health care plans meet with local providers and propose a deal like this: “We will bring you lots of new paying patients by including you in our preferred provider network, but we ask in return that you give us and our members a discount off your chargemaster prices.” When this works, the plans are, in effect, acting as purchasing agents for their members and negotiating volume discounts on their behalf. But the whole process breaks down when one hospital dominates the local market and most of the doctors’ practices are under its corporate umbrella. In that case, the local provider monopoly can just dictate what prices it will accept.

And that’s what’s happening around the country. Approximately 80 percent of U.S. hospital markets are now highly concentrated. Studies by the Yale researcher Zack Cooper and others show that as concentration rises, so do the prices hospitals charge for treating patients with private insurance, with no accompanying increase in quality. That’s the crisis facing American health care, not anything to do with the finances of Medicare or Medicaid.

So why does the myth of the cost shift persist? Because it serves the interests of some very powerful forces in health care. First, it provides monopolistic hospitals and other profit-maximizing providers with a way to shift blame onto the government for their price gouging. And second, it perversely provides a sleight of hand that serves as a very convenient way of disguising their profits so they can avoid paying taxes. 

Here’s how that works. Most hospitals in America, even the richest ones, are chartered as charitable, nonprofit organizations. This status exempts them from paying most federal, state, and local taxes. The Kaiser Family Foundation calculates that the value of tax exemptions for nonprofit hospitals reached $28 billion in 2020. In exchange for these subsidies, hospitals are supposed to provide significant community benefits, such as caring for the indigent, training doctors, or investing in public health. Yet instead of meeting these obligations, many simply claim that their “community benefit” is the difference between the reimbursements they receive from government payers such as Medicaid and Medicare and the monopolistic prices they extract from everyone covered by commercial health insurance. 

And remarkably, many state governments not only accept this lie but help enable it. For example, in Vermont, health care is regulated by the five-member Green Mountain Care Board, of which I am a member. By law, each year we must calculate how much more Vermont hospitals would have made if Medicaid and Medicare paid the same as private payers. In 2021, the figure came in at $516,828,045. 

The most accurate way to characterize this number would be to say that it is a measure of how much Vermont hospitals used their monopoly power to overcharge private payers that year. After all, Medicare fully compensates hospitals for their actual expenses, taking into account, for example, whether they are rural or teaching hospitals or treat large, underinsured populations. So, in reality, the half a billion dollars reflects the fact that the prices Vermont hospitals charged private payers in 2021 averaged 214 percent of Medicare prices. But the hospitals spin a different tale. They characterize the number as an official estimate of how much they are underpaid for treating Medicare and Medicaid patients, and then use that as an excuse for not living up to their obligations to the public. Eric Schultheis, a staff attorney at the Vermont Office of the Health Care Advocate, looked at how each of Vermont’s fourteen hospitals calculates its community benefit. He reports that an average of roughly 70 percent of the purported benefit is a claimed insufficient payment from Medicare and Medicaid, with one hospital claiming 96 percent. 
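The arithmetic behind that headline number is worth making explicit. Here is a minimal sketch, in Python, of the calculation the board is required to perform, using hypothetical inputs; the $450 million in public-payer payments and the 2.14 price ratio below are illustrative placeholders, not the board’s actual figures. The point is only to show how a commercial-to-Medicare price gap mechanically produces a half-billion-dollar “underpayment.”

```python
# Minimal sketch of the "how much more would hospitals have made if public payers
# paid commercial prices" calculation. All figures are hypothetical placeholders.

def public_payer_shortfall(public_payer_payments: float,
                           commercial_to_medicare_ratio: float) -> float:
    """Extra revenue hospitals would have collected if Medicare and Medicaid
    had paid at commercial price levels instead of their own rates."""
    return public_payer_payments * (commercial_to_medicare_ratio - 1.0)

public_payer_payments = 450_000_000  # hypothetical: what Medicare/Medicaid actually paid
price_ratio = 2.14                   # hypothetical: commercial prices at 214% of Medicare's

shortfall = public_payer_shortfall(public_payer_payments, price_ratio)
print(f"Claimed 'underpayment': ${shortfall:,.0f}")  # ≈ $513,000,000
```

Read this way, the output is not evidence of government stinginess; it scales directly with how far above Medicare the hospitals have pushed their commercial prices.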

Moving to a single-payer, “Medicare for All” system could help to restrain prices. But that’s not politically possible for the foreseeable future and would not by itself address the problem of hospital monopolies and their growing political and economic power over government. So where does that leave us? 

Obviously, if monopoly is a root cause of the problem, stepped up antitrust enforcement can be at least part of the solution. The Department of Justice recently took a step in the right direction by rescinding prosecutorial guidelines that had previously allowed many more hospital mergers to go forward than they should have. 

Congress also needs to pass legislation that gives the Federal Trade Commission authority to take antitrust actions against nonprofit hospitals. Additionally, state attorneys general and other regulators need to crack down on anticompetitive behavior by monopolistic hospitals. Such behavior includes imposing “gag” and “anti-steering” requirements on health care plans so they cannot share a hospital’s prices with their members or direct them to lower-cost providers. 

Regulation is also needed to stop the common practice of hospitals forcing their salaried doctors and other health care professionals to sign noncompete clauses, which suppresses competition. At the same time, regulators need to make sure that they do not stifle new entrants into health care markets through unnecessary licensing requirements and “certificate of need” processes that many states impose on anyone who wants to build a new hospital. 

But antitrust enforcement cannot fix it all. For one, even in highly competitive health care markets, prices are not set rationally. As classically described by the Nobel Prize–winning economist Kenneth Arrow, purchasers of health care, unlike purchasers of, say, ice cream, have a hard time measuring the value of what they are buying and are also usually insulated from its real costs through insurance coverage. Similarly, insurance companies don’t necessarily care what hospitals and doctors charge as long as they can pass the cost on to their customers in the form of higher premiums, co-payments, and deductibles. 

Finally, just breaking up big hospitals is not always the answer because in rural areas, there often are not enough patients to support more than one hospital. Moreover, when properly regulated, health care can be more clinically effective when it is delivered by large, integrated institutions. When all the different specialists involved in patient care work off a common medical record and coordinate their treatments, that can help to reduce medical errors and dangerously fragmented, inappropriate care. VA hospitals and clinics, for all their faults, consistently turn in superior performance on patient safety and adherence to evidence-based medicine because they operate as a system. 

In places where hospitals still face competition from other hospitals, the prices they charge private payers are much closer to Medicare and Medicaid prices.

What reforms can reconcile these trade-offs? As this magazine has argued before, the best politically possible answer is to outlaw price discrimination in health care. Hospitals and other providers need to start charging the same prices for the same treatments regardless of who the patient is or what health care plan he or she is on. The price of a treatment should reflect the cost of treatment, not the relative market power of different providers and insurers as it largely does today. Competition in health care should be over who can deliver the highest-quality care most effectively and efficiently, not over who can become a monopoly first. 

The CBO has calculated that just capping commercial prices at today’s already inflated levels and limiting future growth would result in a savings of 3 to 5 percent in the first 10 years. But we could do much more than that and still allow hospitals to earn the margins they need to finance their missions. Nationwide, hospitals in 2020 charged commercially insured patients an average of 250 percent of what Medicare paid for the same care, and in three states prices ran at or above 310 percent of Medicare’s, according to a RAND study by Christopher Whaley. Put concretely, in these three states, an MRI scan that cost Medicare $1,000 would typically cost private payers more than $3,100. Yet in three other states, Hawaii, Arkansas, and Washington, hospitals did just fine with prices averaging “only” 175 percent of Medicare’s. 

Of course, many providers might be tempted to make up for lower prices with higher volumes, such as by ordering more unnecessary tests or performing more unnecessary surgeries. That’s an all-too-common phenomenon in American health care, as documented by the Dartmouth Atlas project at Dartmouth Medical School, where I teach, and by many other researchers and health care reporters. 

The solution to that problem is to require hospitals not only to charge everyone the same prices for the same treatments but also to cap their total revenues within a global budget. This is the approach the state of Maryland has recently taken, so far with very promising results. 

Can a public process like Maryland’s find the “right” prices? Medicare goes to excruciating lengths, using a process that involves providers themselves, to estimate the cost of care and then sets rates to cover cost plus a small margin. The results are far from perfect. Medicare overcompensates some kinds of specialists (cardiologists, radiologists) and under-compensates others (primary care physicians) relative to their actual contributions to public health. But overall, the prices Medicare pays reflect the cost of care and are adequate to sustain a well-run hospital. According to the National Association for State Health Policy, 39 percent of hospitals could break even today without any need to cut their budgets if payments from their private payers were reduced to what Medicare pays for the same treatments. Almost all would get by handsomely if we limited their monopoly rents to 175 percent of what Medicare pays—provided, of course, that they didn’t overcompensate their CEOs, bloat up their administrative staff, or go on vanity building programs. 
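To see how those figures hang together, here is a minimal sketch, in Python, of the break-even test they imply: cap commercial prices at some multiple of Medicare and check whether a hospital’s revenue still covers its costs. The hospital, its revenue mix, and its cost figure below are hypothetical assumptions for illustration, not data from the NASHP analysis.

```python
# Break-even under a commercial price cap, with hypothetical figures.

def capped_revenue(medicare_revenue: float,
                   commercial_care_at_medicare_prices: float,
                   cap_multiple: float) -> float:
    """Total revenue if commercial payers paid at most `cap_multiple` times
    what Medicare would pay for the same care."""
    return medicare_revenue + commercial_care_at_medicare_prices * cap_multiple

# Hypothetical hospital: $200M of care paid at Medicare rates, plus commercial care
# that Medicare would have priced at $150M, against $420M in operating costs.
medicare_revenue = 200_000_000
commercial_at_medicare_prices = 150_000_000
operating_costs = 420_000_000

for cap in (1.00, 1.75, 2.50):
    revenue = capped_revenue(medicare_revenue, commercial_at_medicare_prices, cap)
    margin = revenue - operating_costs
    print(f"cap = {cap:.2f}x Medicare: revenue ${revenue/1e6:.0f}M, margin ${margin/1e6:+.0f}M")
```

Under these made-up numbers the hospital loses money at straight Medicare rates but clears a comfortable margin at a 175 percent cap, which is the shape of the trade-off the study describes.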

Despite all the bipartisan fixation on the future of the Medicare trust fund, Medicare is not the problem. It pays providers adequately and would be fiscally sustainable with only modest tax increases. The big problem is with commercial health care plans, and it’s brought to you by too much unregulated, monopoly pricing power. If we are to make the American health system work for the vast majority of working-age people and their children who rely on private insurance, legislators and regulators need to take aggressive action now to end corporate concentration and price discrimination in health care and cap hospital prices using global budgets.

The post Don’t Blame Medicare for Rising Medical Bills, Blame Monopolies appeared first on Washington Monthly.

How College Athletes Finally Got Paid https://washingtonmonthly.com/2023/06/19/how-college-athletes-finally-got-paid/ Tue, 20 Jun 2023 01:20:00 +0000 https://washingtonmonthly.com/?p=148081

Antitrust lawsuits and a landmark state law were the key. But to make the system truly fair and rational, the NCAA needs to regulate, and players may need a union.

The post How College Athletes Finally Got Paid appeared first on Washington Monthly.


 Ed O’Bannon, who led UCLA to a national basketball championship in 1995, was visiting a friend in 2008 when his friend’s son shouted, “Ed, you’re in my video game!” The boy showed him a game, NCAA Basketball 09, that included an avatar that looked and played just like O’Bannon—African American, shaved head, six-eight, wore number 31 for UCLA, and programmed to score and rebound as O’Bannon had. His teammates were also depicted, and the game allowed the boy to match the 1995 Bruins against other teams of yore. 

O’Bannon was astounded and angry. No one had asked his permission to depict him, and he was paid nothing for the use of his image. O’Bannon lawyered up and sued the NCAA, alleging that it had violated antitrust law by preventing college athletes from earning money from licensing opportunities like this one. Since its inception in the early 1900s, the NCAA had been enforcing a strict amateurism regime that prohibited athletes from earning any money on account of their athletic endeavors—salaries, cash inducements to attend a particular school, bonuses for winning championships, or income from endorsements or advertising. This last category depends on athletes’ licensing their name, image, or likeness to businesses, a practice known as “NIL.” The NCAA had long argued that its insistence on strict amateurism was intended to prevent unsavory business arrangements and ensure that college athletes played “for the love of the game.” 

The all-too-predictable result, however, was that, as college football and basketball revenues skyrocketed, players, including many low-income and minority students, were locked out of the billions of dollars they made for their schools. Before the 2014 NCAA basketball Final Four, the University of Connecticut star Shabazz Napier complained that he and his teammates “have hungry nights [where] we don’t have enough money to get food.” Napier, who grew up in the projects in Boston, was about to play in a game watched by 80,000 people who paid an average of $500 per seat, along with a massive TV audience. But beyond his athletic scholarship, he didn’t have the proverbial two nickels to rub together. The social justice dimension of this regime has not been lost on commentators. In a 2011 article in The Atlantic titled “The Shame of College Sports,” the Pulitzer Prize–winning historian Taylor Branch opined that the lack of compensation had an “unmistakable whiff of the plantation.” Harry Edwards, a sociologist whose activism inspired two Black athletes’ iconic fist-raising podium protest at the 1968 Olympics, has called this issue “the civil rights movement of our time.”

The historian Taylor Branch once opined that college athletes’ lack of compensation had an “unmistakable whiff of the plantation.” Harry Edwards, who inspired two Black athletes’ iconic protest at the 1968 Olympics, has called this “the civil rights movement of our time.”

Ed O’Bannon’s lawsuit, which generated enormous publicity and discussion on this issue, delivered the first major blow to the NCAA’s amateurism regime in 2014, and influenced another antitrust case before the same judge, Alston v. NCAA, which culminated in a landmark Supreme Court decision in 2021. My law firm played a supporting role in O’Bannon’s case, and I worked directly on a predecessor case that involved compensation for assistant coaches. I also advised legislators who enacted a groundbreaking California law that allowed college athletes in that state to monetize their NIL. The California statute finished the revolution that O’Bannon had started, inspiring other states to adopt similar laws. In response, the NCAA withdrew its prohibition on athlete licensing deals, effectively ending amateurism as we know it in college sports.

Today, some college athletes whose recent predecessors played for a scholarship or simply the love of the game are making six or seven figures annually. The Athletic recently confirmed that it had seen an $8 million NIL contract, payable over three years, between a high school football star who preferred to remain nameless and boosters from a school that desired his services. (Boosters are groups of alumni or friends of the school who make substantial contributions to the athletic department.) Similarly, John Ruiz, a University of Miami alumnus, has committed $10 million to financing Miami athletes through purported NIL deals. And big money deals aren’t limited to football and basketball players, or to men. The LSU gymnast Olivia Dunne is reportedly making $2 million from apparel endorsements and social media, and the twin basketball stars Haley and Hanna Cavinder, most recently of Miami, have more than 40 endorsement deals, worth a total of $2 million. 

Meanwhile, however, the NCAA has declined to implement any kind of sensible regulation on this burgeoning market, which college officials, journalists, and other observers have dubbed the “Wild West.” The endorsements and apparel deals signed by Dunne and the Cavinder sisters are more or less what O’Bannon sued for, but the shadowy world of boosters is not. Over the past year or so, groups of boosters calling themselves “collectives” have begun recruiting high school athletes (and college athletes willing to transfer schools) with promises of up-front NIL money without specifying what licensing deals they’ll be expected to accept in return. Many college coaches and officials who were just getting comfortable with NIL are quite uncomfortable with this combination of NIL and recruitment, which looks more like salaries or bribes to attend a school. 

Last season, the now-retired Syracuse basketball coach Jim Boeheim accused three Atlantic Coast Conference rival schools of “buying players” with booster-generated NIL deals, only to have the allegation boomerang with a press report that a Syracuse booster was planning to spend $1 million on its recruits. The NCAA continues to insist that recruiting through NIL is improper, but it has taken no steps to regulate this market or punish offending boosters or schools. The turmoil led the legendary Duke basketball coach Mike Krzyzewski, known as “Coach K,” to say after his recent retirement, “This is the most tumultuous time in the history of inter-collegiate athletics, and there’s no leadership, no structure, no direction … It’s really good and it’s really bad.”

To college athletes and those who advocate for their compensation and empowerment, the good outweighs the bad. But there is admittedly a bad look to some shady-seeming booster-generated deals, and many parties in this space are seeking to eliminate or limit the chaos. A congressional subcommittee recently held a hearing on the issue and promises more, and the new NCAA chief (and former governor of Massachusetts), Charlie Baker, seems ready to try to address it. One thing missing from the debate, however, is solid data about how NIL is actually operating, so that the NCAA, Congress, and others might intelligently improve the situation. Regulators might take a cue from the future Supreme Court Justice Louis Brandeis, who wrote in 1913, “Sunlight is said to be the best of disinfectants.” 

One powerful government tool, antitrust law, worked in concert with state legislation to break the back of the “plantation” system that exploited college athletes. Now, to preserve the gains of the past few years, advocates must embrace other tools that historically have established a level playing field for businesses, workers, and consumers. The available tools fall into three categories: further NCAA regulation under its new leadership; federal legislation; and, in the long term, unionization for at least some college athletes, allowing them to take their seat at the table through collective bargaining. Athletes have taken an enormous step toward fair compensation, but it will be a few more years before we see if the civil rights battle championed by Harry Edwards, Ed O’Bannon, and so many others can finally be won. 

College sports is a uniquely American experiment. It has both wonderful and terrible elements. On the positive side, it allows young people to enjoy and hone their athletic skills, preparing a few of them for professional careers and providing others with scholarships or at least memories of great wins and cherished teammates. For the public, college sports creates very popular entertainment, live and on TV, and, for some, emotional bonds to universities they attended or have simply embraced. For universities, sports creates enormous athletic revenues (more than $3 billion for the 70 or so schools in the largest conferences), alumni and student pride, more undergraduate applications, and greater contributions to athletic departments. 

But on the negative side, many athletes are admitted because of their athleticism, not academics, and don’t have sufficient time to focus on their studies. The former Oklahoma State and NFL star Dexter Manley testified to Congress that he couldn’t read or write when he finished college. Many athletes leave school without a degree, and are, as O’Bannon testified, “athletes who masquerade as students.” The University of North Carolina infamously was found to have propped up athletes’ eligibility with classes that never met, but the school somehow avoided NCAA sanctions. Even academically minded athletes are pushed into easy majors so they won’t be distracted from their sport. And we have “college” basketball players who barely visit college, spending only August to March of their freshman year on a campus, practicing basketball, playing as many as 40 games, and then leaving to prepare for the NBA draft. Passing one semester of exams does not a college student make.

Further, the NCAA’s decades-long effort to block athletes from earning a penny for their efforts created an unfortunate under-the-table economy for star players, in which recruits were secretly paid tens of thousands of dollars in cash for choosing a particular school. A federal prosecution put some people in jail for these activities, and evidence in that case included audio of the LSU basketball coach Will Wade complaining that he made a “strong-ass offer” to a recruit (and still didn’t get him), and that he could pay his players better than the NBA minor-league minimum. Despite this, Wade was allowed to continue coaching at LSU for several years, and, when he was finally fired, was hired at another NCAA school. The NCAA claims to believe in amateurism, but violations by coaches are treated with kid gloves. Athletes do not get the same forgiving treatment, as we will see. 

The recent upheaval in college sports stems from a collision between the NCAA’s strict amateurism model and the rapid commercialization of college sports, in which everyone but the players got rich. The ever-increasing money in college sports has made the NCAA’s penurious attitude on athlete compensation look unfair and outdated, and the problems were exacerbated by the NCAA’s vindictive discipline and incoherent regulatory and litigation instincts. It has often been its own worst enemy, which is why many have little confidence in its ability to address current issues about NIL, even though it might be the most obvious candidate to do so.

In addressing these issues, one might think it would be useful to see how other countries handle college sports, but there is no such other country. (Well, Canada has a little.) Ask a Brit if Oxford has a strong soccer team this year, and you will get a long, silent stare. It’s the same in France, Australia, or Brazil. Around the world, potentially elite athletes do not go to college, and colleges do not field strong teams or televise their matches. The top athletes go pro or begin training for the Olympics at sixteen or seventeen. Only we Americans think we can mix education and big-time sports, and we are having a bumpy ride.

Intercollegiate sports began in 1852 with a Harvard-Yale regatta. Harvard won, but its success was marred by questions about whether its coxswain was a current student. Eligibility questions and charges of cheating are as old as college sports itself.

Other sports followed, with Williams versus Amherst in baseball in 1859, and the first track meet in 1873. Football arguably began with Princeton against Rutgers in 1869, but that game was a hybrid of soccer and football, and is also considered the first college soccer game in America. Three games were planned between the two New Jersey schools, but the deciding game was canceled over faculty concerns about distraction from academics. Around this time MIT President Francis Amasa Walker said, “If the movement shall continue at the same rate, it will soon be fairly a question whether the letters BA stand more for Bachelor of Arts or Bachelor of Athletics.” 

Early rulemaking and enforcement came from “conferences,” also known as “leagues.” Predecessors of the Southeastern Conference, the ACC, and the Big Ten were created between the 1890s and 1920s. But 18 college football deaths in 1905 caught the attention of President Theodore Roosevelt, who called for national regulation. As a result, the Intercollegiate Athletic Association of the United States (IAAUS) was formed in 1906, and in 1910 changed its name to the National Collegiate Athletic Association. Today the group includes the vast majority of four-year colleges with athletic programs, broken into Divisions I, II, and III, denoting large, moderate, and small commitments to athletics respectively. 

The original NCAA constitution limited college athletics to “amateurs,” but without a definition of this crucial term. Whatever it initially meant to the NCAA, the concept was borrowed from overseas. In England, playing sports was considered a leisure-class activity that should be limited to gentlemen who ought not be paid for playing. But when the blue bloods were challenged by teams in factory towns in the 1880s, where players weren’t literally paid to play but were recruited to towns with factory job offers, the upper-crust teams complained that their opponents weren’t sufficiently amateur. The factory town teams ultimately prevailed in this debate, and amateurism in English soccer gradually faded away.

Amateurism also had roots in the Olympic movement, which restricted participation in the inaugural modern Olympics of 1896 to amateurs. Jim Thorpe famously (also infamously) had his 1912 track and field medals taken from him when it was learned that he had played a bit of minor-league baseball. But as decades passed, the Olympics came under pressure to abandon amateurism, amid allegations of secret payments to top athletes in Europe and the U.S., and government financial assistance to Soviet bloc athletes. Olympic amateurism was watered down in the 1970s, and abandoned in time for the USA “Dream Team” to win basketball gold in 1992.

Before the 2014 NCAA basketball Final Four, the University of Connecticut star Shabazz Napier, who grew up in the projects in Boston, complained that he and his teammates “have hungry nights [where] we don’t have enough money to get food.”

There is a clear pattern here: in each of these sports, insistence on amateurism generated circumvention through under-the-table payments as the sport grew more popular and profitable, and by the late 1980s amateurism had been abandoned as unrealistic everywhere except American college athletics. The only surviving amateur regimes are the NCAA’s, and those of sports like golf and tennis, which are quite different since they allow athletes to choose whether to be pros or amateurs. There is no such choice in the major American sports, and colleges enroll hundreds of talented young athletes each year with substantial earning power who can’t go pro because of their age, which varies from sport to sport. Thus, the NCAA is the only game in town, but it told athletes for over a century that they couldn’t earn a penny from their athletic fame or talent. Zion Williamson, for instance, played his first year of nationally televised basketball at Duke without any compensation. The next year he entered the NBA with a $9.75 million salary and signed a deal to endorse Nike products that was worth $75 million over five years. Nike would have paid him something substantial in his first year at Duke if NCAA rules had allowed it. It is this forced amateurism, contrasted with the vast and growing sums of money flowing to the schools from football and basketball, that generated growing opposition to the NCAA amateurism model from athletes and the public, and, with the help of antitrust law, ultimately toppled it.

If one chooses to enforce an amateurism regime, one must define “amateurism,” which is no mean feat. In broad concept, it is supposed to keep professional athletes out. But does an athletic scholarship make you a pro? How about free academic tutors? Can parents be transported to games for free? And, most important here, what about NIL money? Payments from an apparel company do not resemble professional salaries in source or terms. This was Ed O’Bannon’s grievance—his depiction in a video game should have earned him a few hundred or thousand dollars, but he couldn’t get anything. It was also described well by the NBA star Andre Iguodala: “After a while it starts to wear on you. Kids on campus were wearing my number and the school was getting $40–50 for each jersey sold, but I still was playing in exchange for tuition, room and board.” 

Looking at NIL from a different angle, college students who are good singers or dancers can make money performing off campus, endorsing a line of instruments, or monetizing their social media accounts. But their classmates who play sports couldn’t earn a nickel from any of those things. 

In 1948, the NCAA made its first effort to define athletic amateurism with what it smugly called its “Sanity Code,” which allowed for tuition scholarships only, no room and board or other financial inducements, and called for expulsion of any member school that violated the rules. Apparently the NCAA believed it insane to allow anything more.

Only three years later, however, “sanity” was redefined, after the NCAA proved unwilling to expel seven schools that self-reported violations of the code. By 1956, full scholarships were allowed—tuition, room, board, books, and a few dollars for incidentals—but nothing more.

While NCAA schools debated whether full scholarships were consistent with amateurism, they were in agreement that growing college football could be a highly profitable endeavor. The University of Chicago was an early pioneer, garnering attention around 1900 with a new coach, a big new football stadium, and entry into the Big Ten, winning two national championships. Its games were big social events. Applications for admission and enrollment at the university rose dramatically. 

Despite all this success, newly appointed President Robert Maynard Hutchins concluded in the 1930s that football and strong academics were incompatible. The university dropped football and left the Big Ten, never to return to big-time athletics. But dozens of other schools mimicked Chicago’s early success and never looked back. Michigan State, for instance, was a small agricultural college until it decided to develop a strong football team, grew enrollment to 40,000, and, with perfect timing, slid into the Big Ten slot vacated by Chicago.

The growth of college sports accelerated after World War II, as the GI Bill provided funds for far more Americans to attend college, growing the alumni and fan base. Even more important, during the 1950s the percentage of U.S. households with televisions jumped almost tenfold, from 9 percent to 86 percent. There are only so many people you can fit into a stadium, but with TV, the sky is the limit.

The NCAA obtained its first national football TV contract in 1952. It was $1.1 million annually, and the price accelerated through the 1960s and ’70s, reaching $31 million annually by 1980 (the equivalent of $150 million in 2023 dollars). Televised college basketball likewise began small but experienced exponential growth in the 1970s and ’80s, with TV rights to March Madness growing from $500,000 annually in 1972 to $16 million per year in 1981. And again, having a strong team with a national audience paid great non-sports dividends to schools; little-known Butler University, for example, saw admissions applications increase 50 percent between 2009 and 2012, during which time it made two Final Fours.

However, as college sports became more profitable, college officials and others pointed out the unfairness (and danger) of schools profiting massively off the accomplishments of athletes who were not allowed to earn a penny from their skills. When an ugly basketball point-shaving scandal in the early 1950s led to criminal convictions of 32 players who had conspired with gamblers to affect the final score of games, James Lewis Morrill, the president of the University of Minnesota, commented that these uncompensated athletes were “easy prey to the easy-money approaches of unscrupulous gamblers.”

The availability of substantial TV money also led to disputes among schools over who got that money, and ultimately introduced the NCAA to a pesky thing called antitrust law, which has plagued it ever since. In the late 1970s, the major football schools objected to the NCAA’s egalitarian approach to television, which limited how often individual teams could appear and required that appearances under the NCAA’s national TV contract be shared among at least 82 teams.

More than 50 prominent football schools formed the College Football Association in 1977, and pressed for less NCAA regulation and more TV games overall, including many more for the most popular teams. When they got outvoted at the NCAA, they went shopping for their own deal, and NBC offered the CFA $180 million for four years of football. The NCAA threatened to expel the CFA schools, and the CFA schools, having to decide between this rich TV offer and their NCAA membership (which implicated basketball and all other sports), opted for something entirely different—an antitrust lawsuit against the NCAA.

Antitrust law might be the lead character in this story. The Sherman Antitrust Act was passed more than 125 years ago, in 1890, and its short and simple text remains substantially unchanged. It prohibits contracts, combinations, and conspiracies that restrain trade, as well as monopolization, and was dubbed “the Magna Carta of free enterprise” by Justice Thurgood Marshall. It creates challenges for many big businesses, but particularly difficult problems for sports leagues, which almost by definition restrain trade—controlling how many games and teams there are, how much money can be paid and to whom (salary caps, rookie scales), and a variety of lesser restraints. Ford, GM, and Toyota do not meet and discuss how they could cooperate to make more money (they would be indicted if they did), but the Yankees, the Red Sox, and the Dodgers do. They call it a “league meeting.” They even put their agreements to restrain trade in writing and call them “rules.” Leagues can be easily put on the defensive by antitrust charges aimed at such rules.

The legendary Duke basketball coach Mike Krzyzewski has said of the nascent licensing market, “This is the most tumultuous time in the history of intercollegiate athletics, and there’s no leadership, no structure, no direction … It’s really good and it’s really bad.”

In 1981, the 63 CFA schools, led by the Board of Regents of the University of Oklahoma on behalf of the Sooners, filed the first major antitrust suit against the NCAA. This began a 40-year period in which the NCAA lost four crucial antitrust cases, leading directly—with crucial help from the California legislature in 2019—to massive changes in college sports. At the time, the NCAA limited the total number of televised games and how many of those slots could be taken by the strongest teams. Thus Notre Dame, which might have wanted to be on TV every Saturday, might be relegated to one or two appearances, while far weaker football programs might get a game every year or every other year. Antitrust law frowns on such artificial limits on competition, and prefers to let the market decide which games will be televised. Consumers get more choices, and more attractive games with top teams, and antitrust law typically favors consumer choice.

Ask a Brit if Oxford has a strong soccer team this year, and you will get a long, silent stare. Only we Americans think we can mix education and big-time sports, and we are having a bumpy ride.

The NCAA’s primary defense in the case, NCAA v. Board of Regents of the University of Oklahoma, was that it didn’t resemble a pro sports league or a big business, the real targets of antitrust law, and that its TV rules were part of the amateur tradition of college sports, which was principally about education and fair competition, not money. But the argument didn’t sell in the trial court, or on appeal, or, most importantly, in the Supreme Court, which ruled for the big football schools by a vote of 7–2. The high court found that the NCAA’s TV rules unreasonably restrained trade, and that competition would be better served by letting each school or conference negotiate whatever TV deals it wished. The NCAA lost control of college football, and the big money earned thereby, and has never gotten it back. (By contrast, it does control college basketball, and runs its entire operation off the profits from March Madness.)

Although the NCAA lost the Regents case, its arguments about the collegiate model obtained a few friendly comments from the Court. The opinion said “college football” was a particular brand in which the players “must not be paid,” and added,

The NCAA plays a critical role in the maintenance of a revered tradition of amateurism in college sports. There can be no question but that it needs ample latitude to play that role, or that the preservation of the student-athlete in higher education adds richness and diversity to intercollegiate athletics and is entirely consistent with the goals of the Sherman Act.

So the NCAA lost the TV case, but it had reason to believe that it would do better if amateurism itself were at stake. Or so it appeared 40 years ago.

Not surprisingly, after Regents, fans got to see the top teams more often, more total games were broadcast, and total TV money increased spectacularly. For example, the Southeastern Conference was paid $17 million per year for football rights in 1996, and is now receiving roughly $722 million from all sports, most of which comes from football. Schools began to jump from conference to conference for increasingly lucrative TV deals, and the Big Ten, which once covered the Midwest, now stretches from sea to shining sea, from Rutgers to UCLA. These huge and spread-out conferences defy logic, convenience, and educational goals, but the TV money is very good.

The big TV deals caused everything in college sports to get bigger. Football and basketball coaches have become multi-millionaires, with many earning over $5 million annually—Coach K earned $12.5 million. To add insult to injury, these coaches are commanding long-term guaranteed contracts but being fired quickly if they don’t win, so in the past decade, public universities spent $530 million on coaches they had already fired. 

As the TV money grew, voices were raised in favor of athlete compensation and empowerment. Dick DeVenzio, a starting guard and Academic All-American at Duke, wrote a whole book about the subject in 1986. Ernest Chambers, a legislator in Nebraska, annually proposed legislation to put Cornhusker footballers on the state payroll, an idea that seemed extreme at the time but is discussed more seriously today. NIL rights for athletes were still decades away, but players in the 1980s got some group perquisites, such as separate dormitories for athletes and lavish amenities, like Clemson’s miniature golf course for football players. Teams began to fly to games via charter planes, a perk that WNBA players have only recently begun to enjoy. But cash compensation remained verboten, both from the schools and from the likes of Nike or a local business wanting the player to sign some autographs for a few bucks. 

Indeed, the NCAA not only stuck to amateurism, but also enforced it with a vengeance. The golfer Dylan Dethier of Williams College was ruled ineligible for the Division III championships because he wrote a book about golf while on a gap year. It was not a best seller. Brittany Collens and several tennis teammates at UMass were found ineligible, and their accomplishments deleted from the record books, because the university had accidentally overpaid them a few dollars on their scholarships, which they promptly repaid. The Colorado wide receiver Jeremy Bloom was ruled ineligible because he had apparel contracts from his prior success as an international skier.

This trend of unfortunate decision-making brings us to another antitrust loss—and one that was totally unnecessary. Despite ever-increasing TV money, in 1991 the NCAA decided to cap the annual pay of the lowest rank of assistant coaches at $12,000. The aptly named “Restricted Earnings Coach” rule was challenged in 1994 in a class action by Norman Law, a coach at Pittsburgh, and the NCAA got clobbered, forced to pay the coaches $55 million in damages. (I was involved in this case on behalf of the coaches, although the case was run day-to-day by others in my and another law firm.)

Importantly for athletes, antitrust law had crept closer to the mother lode of NCAA amateurism, because although Law’s case wasn’t about athletes’ rights, it involved an illegal “price fix” by the NCAA for people who toiled in college athletics, and bore much more resemblance to players’ financial grievances than the Regents TV-based case. If assistant coaches had antitrust rights, why not players?

Here we have several self-inflicted wounds. First, the NCAA should never have enacted the plainly illegal rule. Second, having been sued, it should have withdrawn the rule and settled. And finally, having lost the case, it shouldn’t have appealed, which created a published appellate decision finding the NCAA to be guilty of price-fixing.

The plot truly thickened in 2009 when O’Bannon saw his image in the video game and filed suit in federal court in Oakland. His case was combined with other athletes’ suits to challenge two things: their inability to earn NIL money, and the NCAA’s arbitrary caps on athletic scholarships. The latter capped scholarships at an amount below what federal law defined as a “full cost of attendance” scholarship, a difference of several thousand dollars. But if federal law defines a certain level of scholarship help as the “full cost of attendance,” why would the NCAA set a ceiling on scholarships thousands of dollars below that? Does a truly full scholarship make an athlete a pro?

The NIL issues were hard-fought, but the arbitrariness of the scholarship rule likely hurt the NCAA’s credibility in front of the federal district judge, Claudia Wilken, who presided over both aspects of O’Bannon and later would preside over another crucial athlete compensation case, NCAA v. Alston. In August 2014, Wilken ruled in favor of O’Bannon on both points. She first brushed off the Supreme Court’s friendly remarks about amateurism in Regents as what lawyers call “dicta”—background commentary rather than a binding holding of the Court. Next, she concluded that the scholarship limits were an illegal price-fix. And finally, she turned to the big-ticket item—NIL. The money did not come from the schools, so it was different from a professional salary. And it had a commonsense attractiveness—if O’Bannon’s image and name were popular enough to help sell video games, why did everyone profit but O’Bannon? 

Wilken found this issue difficult, and reached a compromise solution—she held that antitrust law was violated by the NIL prohibition. The proper remedy did not need to permit athletes to personally market their NIL, but it did require the NCAA to at least allow schools to pay their athletes up to $5,000 per year in lieu of their NIL rights, to be held in trust until they finished school. 

O’Bannon’s wins were enormous in precedential significance. First, he got a federal court to use antitrust law to overturn, for the first time, the NCAA’s conception of amateurism; second, he won athletes the opportunity for full cost of attendance scholarships; and third, he got athletes an opportunity for compensation for their NIL rights, albeit with a low ceiling. He also secured direct compensation for athletes who appeared in the video game, whose publisher, Electronic Arts, paid $40 million in a pretrial settlement.

The NCAA appealed. In October 2015, the Ninth Circuit Court of Appeals affirmed Wilken’s ruling on scholarship size, but in a 2–1 vote overturned her NIL compromise. The appeals court held that the NIL trust funds could not be imposed on the NCAA because the association could reasonably limit athletes’ benefits to those “tethered to education,” and NIL rights were not so tethered.

Although the limited NIL rights were lost for the moment, the appeals court’s language about benefits being “tethered to education” opened the door to all kinds of new athlete compensation. Why couldn’t schools award post-graduate scholarships to athletes, or offer cash academic awards limited to athletes, or provide athletes paid internships in their fields of interest? Weren’t these benefits all “tethered to education”?

Ford, GM, and Toyota don’t discuss how to cooperate to make more money, but the Yankees, the Red Sox, and the Dodgers do. They call it a “league meeting.” They even put their agreements to restrain trade in writing and call them “rules.”

When the appeals court ruled in 2015, its “tethering” language was quickly adopted in another antitrust suit against the NCAA, filed in 2014 by Shawne Alston, a running back for the West Virginia Mountaineers, and other athletes. The case challenged various aspects of the NCAA’s athletic compensation limitations. Alston’s lawyers spun their arguments to fit the appeals court’s new test from O’Bannon, and the suit ultimately produced a $200 million settlement in 2017 and an order from Wilken in 2019 that the NCAA must permit schools to offer academic-related benefits such as postgrad scholarships. The NCAA appealed again, seeing this as just another way for schools to recruit athletes with promises of cash, or what it called “improper inducements.”

But before the appeal of Alston could be decided, the action shifted 80 miles northeast, from federal court in Oakland to the state capitol in Sacramento, where state Senator Nancy Skinner was ready to take center stage.

Skinner, who had taken classes with Harry Edwards at Berkeley and was aware of the plight of college athletes, learned about the ongoing legal battles over athlete compensation from Andy Schwarz, an economist who worked on O’Bannon. After considerable study, Skinner introduced California Senate Bill 206 in 2019. The bill, which permitted California college athletes to monetize their NIL without penalty from their schools, threatened to force a showdown with the NCAA, which would otherwise suspend the athletes for amateurism violations. (I became an unpaid consultant to Skinner early in the pendency of the bill, assisted in the amendment process, and testified in favor of the bill at a Senate committee hearing.)

The NCAA vigorously opposed Skinner’s bill, threatening to bar California schools from its championships. Most of the college sports establishment reacted like the retired Purdue basketball coach Gene Keady: “Oh, I hate that. It’s entitlement. Everybody thinks they’re entitled to certain stuff. You’re not entitled to anything. Go to class, get your degree, and earn your own way in life with a good education.”

The California legislature felt differently. Skinner had crafted the bill narrowly to permit only NIL, and not salaries or other university-paid benefits. Furthermore, the commonsense notion that athletes should be afforded the same economic freedom as artistically or musically gifted students caught on with both the legislature’s Democratic majority and, ultimately, the Republican minority. Former college athletes testified in favor of the bill, and the opposition witnesses were somewhat off point, claiming that NIL would be expensive and rob minor sports of their support from the schools. NCAA President Mark Emmert irritated California leaders with his threats against state schools, and the outmanned opposition simply faded away. The bill passed both houses unanimously in September 2019, and Governor Gavin Newsom signed it at a televised event with Ed O’Bannon, LeBron James, and the former UConn basketball star Diana Taurasi. In contrast to the yearslong battles in court, which had not yet delivered NIL, Skinner’s legislation had proceeded from mere concept to groundbreaking law in only a few months.

Whether the NCAA would have blocked California schools from NCAA championships was never really tested, as Florida and more than a dozen other states followed up by passing similar NIL laws. The NCAA then began to think about allowing NIL nationwide but regulating it so it wouldn’t morph into salaries or improper recruitment inducements. Many suggestions were floated, such as mandating full disclosure of deals, requiring them to reflect the fair market value of the rights in question, and keeping the schools out of NIL agreements and leaving it to private businesses. Indeed, each of these limitations was included in one or more of the state laws. 

But as the NCAA pondered how to regulate NIL, a bolt of lightning struck from the Supreme Court. Nine men and women in black robes were about to tilt the battle toward student empowerment and compensation. 

Simultaneous with action in Sacramento, the NCAA had appealed Wilken’s decision in Alston all the way to the Supreme Court, hoping to stop the onslaught on amateurism. The Court had a solid conservative majority, and conservatives are often not friendly to antitrust plaintiffs, which made the NCAA hopeful and the athletes’ lawyers fearful when the Court agreed to hear the case. 

But to both sides’ surprise, in June 2021 the Supreme Court unanimously affirmed the Alston decision, concluding that antitrust law required the NCAA to allow the schools to offer postgraduate scholarships, cash academic awards, and other education-related benefits if they wished. The Court’s opinion began by rejecting the 40-year-old amateurism-friendly comments from Regents as nonbinding, just as Wilken had, and added that when “market realities change, so may the legal analysis.” The justices were talking about television money, and they found that, particularly in this highly profitable market, antitrust law required the NCAA to permit schools to offer their athletes valuable education-related benefits. Indeed, the unanimous Court made its ruling sound relatively obvious. Antitrust law protects competition, and if Stanford wants to offer its athletes academic awards to try to compete more effectively for recruits, what’s the problem? 

But if the Court’s calm and straightforward main opinion frustrated the NCAA, Justice Brett Kavanaugh’s concurring opinion was its worst nightmare. He not only sided with the students, but also vigorously disagreed with pretty much everything the NCAA had argued. His opinion dripped with sarcasm, mocking the NCAA’s arguments that its rules were lawful because its fans preferred amateur athletics: “All of the restaurants in a region cannot come together to cut cooks’ wages on the theory that ‘customers prefer’ to eat food from low-paid cooks. Law firms cannot conspire to cabin lawyers’ salaries in the name of providing legal services out of a ‘love of the law.’ ” He concluded: “The bottom line is that the NCAA and its member colleges are suppressing the pay of student athletes who collectively generate billions of dollars in revenues for colleges every year.” Here he was equating the athletes to workers, anathema to the NCAA’s view of the world, and dangerous to their legal arguments that the athletes weren’t employees and couldn’t unionize.

In closing, Kavanaugh suggested that although the NCAA’s concerns could not be solved by bending antitrust laws to its purposes, they might be addressed by Congress, which could amend the antitrust laws, or by allowing the athletes to form unions, which under the labor exemption to the antitrust laws (described above regarding pro leagues) could legitimize limits on athlete benefits so long as they were collectively bargained with a player union. 

This bombshell—both the loss in Alston and the threats of more drastic action from Kavanaugh—stopped the NCAA’s consideration of regulating NIL in its tracks. The Court’s language was so athlete friendly that the NCAA could foresee losing a challenge to any NIL regulations as a restraint of trade, and such a case, House v. NCAA, was already pending. If Alston cost the NCAA $200 million, House could cost it billions. 

Nine days after the Alston decision, the NCAA announced that athletes in all 50 states could monetize their NIL. Regulations were expected to follow, but there were none for more than a year, and in reality, there have still been none with any teeth.

Many colleges took full advantage of the void. Some awarded the newly allowed cash “academic awards” to any athlete with a 2.0 average, which essentially meant anyone who had athletic eligibility. A Brigham Young University booster provided NIL money so all of its “walk-on” football players had the cash equivalent of full scholarships (undermining the NCAA’s scholarship limits). A University of Nevada, Las Vegas, booster provided free use of a car to all Running Rebel basketball players, surely something unrelated to the fair market value of the NIL of the least prominent benchwarmers. Each of these actions likely angered the NCAA, but there were no known inquiries to the schools, no penalties, no warnings, and no signs of life at NCAA headquarters in Indianapolis.

The world of NIL quickly became chaotic, and the chaos was multiplied by liberalized NCAA player transfer rules that also went into effect in 2021. Transfers previously had to sit out a year before playing for a new school, a serious deterrent, but that rule was withdrawn, and the number of players considering transfer boomed. You could not only buy a good batch of high school seniors, but you could also steal players from competing schools with high NIL offers. If NIL money became a recruitment tool, it might be used not only with high school seniors, but also with talented college freshmen, sophomores, and juniors who might wish to transfer. So many athletes wanted to test the transfer market that an online “transfer portal” was created where coaches could scroll through a compilation of available players on their cell phones. Wry observers dubbed it “Tinder for Coaches.”

Licensing deals no doubt generated several positive effects, as players were earning money commensurate with their market value. Some were staying in school longer because NIL money provided them a good living, and thus fans saw more continuity in their teams’ rosters. But the boosters, which had begun combining into larger groups called “collectives,” quickly introduced chaos. The original concept of licensing deals had been known as “third-party NIL,” where athletes made agreements directly with businesses in a free and open market. Now, boosters and collectives became middlemen to those third-party deals, saying, “Come to our school and get $300,000 per year.” If the offer was accepted, the boosters would try to find NIL deals to fund it, but failing that, would pay the athletes themselves. On rare occasions, the boosters have even failed to provide the promised funds. This bastardization of NIL is what Jim Boeheim was complaining about when he said three ACC rival schools had “bought” their teams (and also the basis for the countercharge against his Syracuse program). 

Boosters and collectives brought us John Ruiz of Miami, with his $10 million NIL fund; a Clemson collective with $5.5 million to spend on players at its inception in 2022; and the $8 million deal with a high school star reported by The Athletic. No wonder observers dubbed the NIL market the Wild West. And the lack of any required disclosure of NIL deals meant that the known deals might be just the tip of the iceberg. It was possible that some other school or its boosters had bought its whole basketball team at inflated prices and just kept quiet about it. But in this maelstrom the NCAA remained cautious, issuing non-enforced “guidance” that frowned on university involvement in licensing agreements. 

“All of the restaurants in a region cannot come together to cut cooks’ wages on the theory that ‘customers prefer’ to eat food from low-paid cooks,” Justice Brett Kavanaugh wrote in his concurring opinion in the Alston case.

As this article was nearing completion, the NCAA finally issued a slap on the wrist as its first sanction for NIL-related activity. The association found that the booster Ruiz and a Miami coach violated recruitment rules when the coach facilitated a meeting, and Ruiz bought the Cavinder sisters dinner in hopes of persuading them to transfer to Miami. But the NCAA expressed no concerns about Ruiz’s $10 million “NIL” fund, and whether any of it was used for this or other recruitment.

Finally, there is one more NIL challenge worth noting—Title IX. That law requires that athletic opportunities and benefits provided to male and female athletes by schools be proportional to the male-female ratio at the school. Schools make an effort to balance their athletic spending on men and women, and a rich apparel deal with Nike by a football star won’t create a violation of Title IX because Nike money doesn’t count in the balance, only school money. But Gator Collective money might, or John Ruiz money. Schools are often held responsible for the conduct of boosters in traditional recruiting, so the same rule might apply here, putting a school out of compliance if men get more NIL money than women, which is likely. The Drake Group, a well-respected nonprofit that supports equity in college sports, has already raised this issue with the U.S. Department of Education, which administers Title IX.

As Charlie Baker, the new NCAA president, takes office, many are pondering how to preserve athletes’ rights but end the chaos. What levers are available to the interested parties to do that? We might usefully divide the options into short-, medium-, and long-term fixes.

First, the NCAA can finally begin to regulate NIL without overly stifling it. As Baker recently acknowledged, “Someone has to set what I would describe as the sort of the rules of the game, [and] I think the NCAA has a pretty important role to play.” As poorly as the NCAA has performed, it is the most likely source of short-term action. It can demand transparency in NIL deals; it can tell schools to stay out of this market, or how to act in it; and it can penalize boosters and schools that misbehave, as it has done for decades on recruitment issues. Transparency would be a great first step. If we don’t know what the athletes, schools, and boosters are doing, it is impossible to know what corrective action is required, if any. But the next steps are harder. The NCAA could try to restore the concept of “third-party NIL,” where the deals are generated by businesses rather than boosters. To go further, and require that NIL deals provide no more than fair market value to the athletes, seems like a challenging and possibly dangerous task. Who is going to decide what fair market value is, and on what basis, and how quickly? This is also a step that surely invites antitrust attack. 

A second method of change is federal legislation. The NCAA and its supporters have suggested that Congress could immunize the association’s licensing rules from antitrust attack, preempt past or future state legislation on this subject to avoid inconsistency, or write its own version of NIL regulations into law. Each of these steps has major problems or will miss the mark. Giving the NCAA antitrust immunity regarding NIL makes no sense given its track record of squelching athletes’ rights. The alleged differences among state NIL laws are rather minor and are not the central problem. And finally, having Congress write NIL rules would take months or years, might never occur, and, worse yet, might result in a political compromise that is counterproductive. Thus, Congress is at best a medium-term solution. 

The third possible fix relates to athlete unionization, a double-edged sword that would give players the power to bargain collectively with schools, but could also provide antitrust protection for schools and the NCAA. (Market-restraining agreements made between unions and leagues—like the NBA salary cap—are exempt from antitrust enforcement.) Indeed, athletes at USC are trying to form unions right now. But it will be a long and slow process. 

Even if college athletes are found to be employees, and thus eligible for union status, the challenges to creating a nationwide or even conference-wide union are enormous. If Alabama’s football players are found to be employees, they are employees of the state of Alabama and must be organized under Alabama law, while athletes at USC are private employees and must be organized under federal law. Combining state employees of multiple states and private employees into one nationwide union would be unprecedented in the annals of labor law. 

Thus, the broad unionization needed for the creation of common work rules (including compensation) will be quite difficult so long as the universities continue to oppose unionization vigorously, as they have. But the Penn State football coach James Franklin recently surprised many by saying, “I also think that ultimately, whether it’s in the next three years or next five years or next two years, there’s going to be some form of revenue sharing or collective bargaining agreement [with the players]. That’s going to happen. I think that’s inevitable.” 

If other coaches and athletic directors agree with Franklin, there are creative and collaborative unionization ideas floating around, such as making Division I football and basketball players employees of a conference or the NCAA who are then “assigned” to the school of their choice. Neither the conferences nor the NCAA is a state entity, so all the athletes could be in one union. But these are bold and complex ideas that would take years to work out.

With both legislation and unionization slow and uncertain, the immediate focus should be on the NCAA. It can (and should) take an iterative approach, starting with disclosure and treading lightly, respecting the new rights athletes have won while trying to enforce fair and open competition among schools. Keep it simple, and if more regulation is needed, do it in year two or three. 

Ed O’Bannon, Nancy Skinner, and many others who supported their efforts were looking for fair compensation for college athletes through NIL rights. They have gotten that, and NCAA amateurism is essentially dead, but the forces they unleashed have, for unforeseen reasons, created chaos in the nascent licensing market. Rational people from all parts of the college sports scene should work together to create more certainty as to what is permitted, and enforce the resulting rules. Otherwise, we will have a race to the bottom, where the boldest (and richest) boosters and collectives strike backroom deals to dominate college sports. If that happens, it will erode not only fair compensation for athletes but also the public’s respect for college sports.

We are far better off than we were five years ago, when many star athletes “majored” in their sport, created millions of dollars for others, and earned nothing for their athletic efforts, talent, and fame. But as in so many areas of economic life, the market-warping forces of concentration, collusion, and worker disempowerment need to be turned around.

The post How College Athletes Finally Got Paid appeared first on Washington Monthly.

Environmental Racism is Real. Ask Chicago.

Like many cities, it’s coping with brownfields, pollution, and minority neighborhoods with ill-health conditions. Here’s how legal action and physical relocation can provide some relief.

The post Environmental Racism is Real. Ask Chicago. appeared first on Washington Monthly.


Environmental health scientists have long known that people who live in lower-income city neighborhoods suffer a disproportionate share of the ill-health effects of air, water, and soil pollution.

Over the past 60 years, their neighborhoods—built to serve industrial workers when city factories were booming—have become reservoirs for displaced workers and low-wage service-sector employees as corporations moved jobs to the suburbs, the non-union South, and overseas. After being torn down, the abandoned factories became brownfields, their land poisoned by a century of unregulated toxic dumping.

Transit planners added insult to injury by targeting the same neighborhoods for the interstate highways that bisected cities. Aided by discriminatory Federal Housing Administration mortgage policies and bank redlining, white residents used the new interstates to flee to America’s burgeoning suburbs along with many of the good jobs.

The remaining businesses were often the worst polluters—scrap metal processors, for instance. And new businesses willing to use the vacant land—warehouses and intermodal transfer facilities, for the most part—often polluted as much or more than the prior occupants due to their daily flow of 18-wheel diesel trucks. 

In recent years, community activists in cities across the country have launched campaigns to combat the ill-health effects of this legacy, often under the banner of fighting environmental racism. Most of the worst-off neighborhoods are majority Black and Hispanic.

In Chicago this spring, several local groups on the city’s predominantly Hispanic Southeast side scored a major victory when outgoing Mayor Lori Lightfoot signed a consent decree with the U.S. Department of Housing and Urban Development (HUD) promising to consider the health impact of industrial and commercial projects before issuing permit approvals.

The conflict began three years ago when Reserve Management Group/Southside Recycling applied for a city permit to build a scrap metal processing plant in the neighborhood. The Southeast Environmental Task Force, the South East Side Coalition to Ban Petcoke, and People for Community Recovery filed suit, claiming that the company’s proposal to build the facility constituted discrimination based on race, since the recycler’s original plant had been located in a gentrifying area of the city’s nearly all-white North Side.

At first, the city claimed that nothing in the law allowed it to block the permit. But then local activists went on a 30-day hunger strike to protest the project, garnering national attention.

In 2022, with help from the Environmental Protection Agency, Chicago’s health department weighed in with a report that concluded the company, which had a long history of violating Illinois emission standards, posed an “unacceptable risk” to community health. “In an already vulnerable community, the findings from the [health impact assessment] combined with the inherent risks of recycling operations and concerns about the company’s past and potential noncompliance are too significant to ignore,” Allison Arwady, the city’s health commissioner, said. The city reversed course and denied the permit.

But the community’s lawyers soldiered on. The three-year consent decree signed this spring, which ends the original lawsuit, goes much further than putting the kibosh on one polluter. It gives Chicago until September 1 to issue a comprehensive report documenting the “environmental burdens, health conditions, and social stressors” in every corner of the city. The goal is to identify “Environmental Justice Neighborhoods” where the city’s departments will be required to change their planning, zoning, and land use policies to mitigate “environmental impacts from existing and new industrial developments.”

The agreement is “very inspirational for communities that are struggling with environmental racism across the country,” Olga Bautista, executive director of the Southeast Environmental Task Force, told the Chicago Tribune. “This really could be a model for how cities [can] work together with impacted communities.”

Of course, it won’t provide immediate relief for the many acute and chronic conditions triggered by excessive air pollution in the neighborhood and others like it. They include elevated levels of heart disease, chronic obstructive pulmonary disease, many cancers, and asthma, where acute attacks can be particularly devastating for children.

A new study in JAMA by researchers at Johns Hopkins University offers another way to address the needs of those suffering from conditions exacerbated by excessive air pollution: help them move to another neighborhood. The study gave housing vouchers to 123 low-income Baltimore families with asthmatic children, allowing them to move into low-poverty areas. Nearly all (106) eventually did.

The researchers compared that group to 115 children who remained in their low-income neighborhoods. The children who moved, who ranged in age from 5 to 17, saw the number of severe asthma attacks cut nearly in half: only 8.5 percent experienced an attack requiring medical intervention in the three months after moving, compared to 15.1 percent in the three months before. The number of days with severe symptoms was also cut in half compared to the matched cohort.

The researchers also looked at changes in lifestyle that might account for the sharp reduction in asthma attacks. Indoor particulate matter (often from smoking) did not change. Levels of mouse and cockroach allergens fell, but that did not explain the improvement. More than a third of the gains were attributed to decreased levels of what the researchers called “psychosocial stress”: better daytime and nighttime safety and greater social cohesion from attending more integrated schools.

“The [Baltimore] housing mobility program has been effective in addressing racism in housing and how changing physical and social environments can improve asthma outcomes,” an accompanying editorial in JAMA by Neeta Thakur and Adali Martinez of the University of California, San Francisco, pointed out. Unfortunately, the study did not evaluate what specific features of living in a more middle-class neighborhood led to the improvement.

So these are two approaches to improving health in low-income neighborhoods: One uses a land-use policy to improve the physical environment over time; the other helps people leave the areas that are making them sick. Which is better?

Robert Weinstock, director of the Environmental Advocacy Center at the Northwestern Pritzker School of Law and the lead attorney in the Southeast Chicago neighborhood’s lawsuit against the city, says both approaches are valid. “You can move the people, or you can move the facilities. In Baltimore, they’re talking about moving the people. In Chicago, we’re talking about moving the facilities or regulating the ones that are there. We need a mix of both approaches for different situations.”

He also says Chicago’s consent decree offers a model for other big cities to address the ill-health effects of environmental racism. “HUD has never used its civil rights jurisdiction to identify violations stemming from industrial land use and pollution issues,” he says. “Advocates at the grassroots level have identified land use reform as a way to advance environmental justice. What’s new is HUD flexing its civil rights authority to require a municipality to directly address inequities that flow from its land use code.”


Home Is Where the Work Is

When I worked at the Washington Monthly in the 1980s, the most notable aspect of the office, aside from its shabbiness, was the almost total physical absence of the boss, founder and editor in chief Charles Peters. Pioneering in so many ways—his column, “Tilting at Windmills,” a collection of delightful, insightful, counter-conventional mini-essays, has often […]

The post Home Is Where the Work Is appeared first on Washington Monthly.


When I worked at the Washington Monthly in the 1980s, the most notable aspect of the office, aside from its shabbiness, was the almost total physical absence of the boss, founder and editor in chief Charles Peters. Pioneering in so many ways—his column, “Tilting at Windmills,” a collection of delightful, insightful, counter-conventional mini-essays, has often been described as “the original blog”—Charlie was the first person I knew who worked from home.

That he managed to pull this off pre-internet is a testament to his will—and his command over his employees. After 12-hour days at the office, one of us (often me, because I had a car) was obliged to drop the pile of paper drafts we’d been working on in the milk bucket on the front stoop of Charlie’s modest row house in D.C.’s Palisades neighborhood. He would retrieve them the next morning and work on them through lunch, which he typically had at Annie’s Paramount Steakhouse or the Iron Gate Inn. He would then burst into the Monthly’s office, divvy up the drafts, bark a few orders, and leave. His remaining communication with us was by telephone, usually short calls, shorn of salutations. (Editor: “Hello, Washington Monthly.” Charlie: “On reflection, I would cut the entire third section.” Click.)

When I took over the Monthly in 2001, that remote oversight wasn’t my style. I put in long hours at the office, out of solidarity with my colleagues but also out of necessity. I’d never run an organization before and felt that I needed to be constantly present. 

Pretty quickly, though, I availed myself of the boss’s privilege of coming in late. This gave me the opportunity to read the morning papers, enjoy coffee with my wife, drive my kids to school, and avoid rush-hour traffic. 

As my self-assurance grew, I began working at home one day a week—then two, sometimes three. I still valued the camaraderie of the office and the spontaneous conversations that gave rise to unexpected insights and story ideas. But technology—email, then Dropbox and instant messaging—made working from home easy. And the hour I saved not commuting, plus the ability to concentrate with fewer interruptions, made it easier to get essential work done. When young editors asked if they could work at home some days, I gladly consented.

Then the pandemic hit. Suddenly, the whole staff had to work remotely. That suited me fine, as it did colleagues who had young kids and aging parents. 

Not everyone liked being out of the office, however. Meetings on Zoom felt weird, at least initially. It was harder to integrate new staff without the in-person touch. The troubles eased after vaccines allowed us to reopen the office; young editors and interns living in cramped quarters now had a comfortable, if largely empty, suite of offices to escape to. 

At the same time, the switch to mostly Zoom-based communication eased the way for us to hire more talent living outside the capital. We’re still experimenting with ways to build esprit de corps, like occasional all-staff lunches. But on balance, the new working-from-home regime has improved both the magazine and our lives.

That is not a view shared by leaders of some of America’s largest corporations. Mark Zuckerberg, Elon Musk, Jamie Dimon, and other CEOs are demanding that their employees return to their cubicles—asserting, with little evidence, that this will improve productivity. One suspects that their real aim is control over their employees. Since when do capitalists say deploying new technologies is the enemy of productivity? The CEOs’ leverage, however, is limited in this tight labor market by the willingness of their skilled employees to jump to firms that will let them work remotely.

The battle over working at home is about more than the lifestyle choices of the laptop class. It is about the same issues of work, freedom, and the pursuit of happiness that obsessed the nation’s Founders. “There’s an intellectual lineage going back to ancient Greece that argues that self-governing societies need independent, well-informed, and civically engaged citizens to survive,” Rob Wolfe writes in this issue’s cover story. “And to build those people up, societies must give them two things: economic autonomy and leisure time—not vacant hours spent staring at TikTok, but edifying leisure, time spent volunteering or reading or learning a new skill or raising children.” 

In our era, the first step to protecting the economic autonomy of citizens is to open up monopolized markets—because that leads to more firms that employees can bargain with to secure better pay and benefits, including the right to work remotely. Reducing the economic power of monopolies will also cut their political power, and that will make it easier to pass laws that help less-advantaged workers who can’t work from home. Indeed, as Len Simon details in this issue, antitrust lawsuits secured unpaid college athletes a share of the wealth that college sports create. Thom Walsh explains how monopoly hospitals, not Medicare, are what’s driving up health care costs. Claire Kelloway reports on Big Agriculture’s latest greenwashing scheme. And Colin Woodard argues that Joe Biden’s crackdown on monopolies is central to the “freedom” message that could win him a second term. 

Charlie Peters is now 96. He still resides with his sainted wife, Beth, in the same home in the Palisades. The milk bucket is still there, too. The cussedly independent way he spent his working years was ahead of its time. Yet I suspect it would have been utterly recognizable and pleasing to the Founders.


The Provocations of Martin Peretz

The legendary editor shook up the political-media power axis. Was he a free-thinking iconoclast or an irredeemable bigot?

The post The Provocations of Martin Peretz  appeared first on Washington Monthly.


The art critic Harold Rosenberg once referred to the New York intellectuals as a herd of independent minds. It’s not a verdict anyone would apply to Martin Peretz, the former publisher of The New Republic, which was founded in 1914. Peretz was the most influential intellectual impresario of the late twentieth century, a left-wing firebrand during the 1960s who ended up moving right but, unlike many of the neoconservatives, never abandoned the Democratic Party. Instead, he battled it from within.

The Controversialist: Arguments with Everyone, Left Right and Center, by Martin Peretz. Wicked Son, 386 pp.

After he acquired the venerable liberal publication for $380,000 in 1974, Peretz, an impassioned supporter of Israel and a foe of affirmative action, moved swiftly to reinvent it. He succeeded. By the early 1980s, TNR, as it was known to its staff and fans, became a hot number in Washington. It boasted some of the leading writers and editors in the business, including Michael Kinsley, Hendrik Hertzberg, and Leon Wieseltier, and attracted attention for its courageous—or, depending on your viewpoint, schizophrenic—stands, ranging from support for the Nicaraguan Contras to denunciations of Reagan administration domestic policy. Conservatives loved to hate it and liberals hated to love it. By the 1990s, the magazine could brag that it was required reading on Air Force One.

But as that decade wore on, the magazine began to attract attention for the wrong reasons—staff writer Stephen Glass, TNR’s chief fact-checker, was outed as a fabulist, and writer Ruth Shalit exposed as a plagiarist. Next, the editors, consistent with their penchant for a hawkish foreign policy, embraced the George W. Bush administration’s push for war in Iraq, while sneering at opposition to it. Readership plummeted. By 2010, when Harvard, where Peretz taught for decades, held a ceremony honoring a new research fund in his name, student protesters chased him through Harvard Yard, chanting that he was a racist. How did it all go wrong?

In his new memoir, The Controversialist, Peretz offers a gritty, propulsive, and fascinating account of his career. While Peretz may be known for his pugnacity, his memoir, it must be said, largely steers clear of cheap shots and tedious self-justifications. Instead, it carefully recounts his role in the rise of a Jewish intellectual movement that replaced an old and tired WASP establishment. What his memoir inadvertently makes clear, however, is that ultimately TNR became a victim of its own success. 

Some of the most gripping portions of Peretz’s memoir center on his formative years growing up in the Bronx, the child of Jewish parents whose numerous relations in eastern Europe were murdered by the Nazis. Peretz, who was born a month before Hitler invaded Poland, attended the Sholem Aleichem Folk Shul 45 in the afternoons to learn Yiddish. He had a turbulent relationship with his Old Testament father, but proudly recalls that “the one thing I had from my father, the thing I could always rely on: I was Jewish and American at all times, and there was no contradiction between those inheritances.”

After graduating from the Bronx High School of Science, Peretz entered Brandeis University, where he became editor of the school newspaper, The Justice, and studied with Herbert Marcuse and Max Lerner. It was the larger-than-life Lerner—a Russian Jew, prominent journalist, educator, lover of Elizabeth Taylor and Marilyn Monroe—whom Peretz sought to emulate. According to Peretz, 

He’d come from some shtetl in Russia and now he was teaching at great universities and writing for a big newspaper and fucking movie stars and fighting for America—and what a great country this was! He was a happy man, and the implicit promise of his life was that you and I could be happy in this country in this way, too.

At Harvard, where Peretz earned his PhD under the legendary Sovietologist Adam Ulam, he found a different atmosphere. Ulam, Peretz reports, had emigrated from Lodz but was “afraid of being thought a Jew” and pretended to be a Protestant. Another set of Jews at Harvard, Peretz writes, were the assimilated upper-class Germans who “weren’t afraid or aspirational but simply established.” With his own pronounced Yiddishkeit, Peretz may have stuck out among the genteel Brahmins, but Harvard’s intellectual traditions exerted a magnetic attraction on him. Too impatient to pursue the standard academic track for tenure, he instead became a permanent lecturer in the fledgling social studies program, whose most talented students he tapped to work at his mothership in Washington.

During his early years at TNR, Peretz was invigorated by the challenge of restoring the weekly to its previous prominence. “Together,” he writes, “we were upstarts—young and pluralist, Jewish and intellectual, not afraid to provoke. But we also came with the imprimatur of the best institutions: Harvard, Columbia, and Oxford.” In addition, Peretz tapped into his contacts abroad, including the French writer Bernard-Henri Lévy, to warn about the Soviet threat. His assessment of Lévy is characteristically unvarnished: “BHL was also a little bit of a showboat, you couldn’t deny that. He was easy for some people to hate. But he didn’t care. And I didn’t either.” 

After savaging Jimmy Carter as an inept and feckless president, TNR was more than a little receptive to the presidency of Ronald Reagan, at least when it came to foreign affairs, much to the dismay of the Council on Foreign Relations crowd, which preached accommodation rather than confrontation with the Kremlin. Peretz himself was often accused of apostasy from liberalism. Not surprisingly, he has a different take. As he depicts it, TNR was struggling against three forces—the Reagan right wing, the left, and the neoconservatives. In some respects, this was true. When Peretz tapped Andrew Sullivan, a young British Tory, to become editor in 1991, it was a boldly unconventional choice. Sullivan, a supremely gifted stylist, recognized that the culture wars were central to American politics, and led a crusade for gay marriage that was as prescient as it was pivotal. But there can be no denying that, as the years went by, the magazine also became increasingly receptive to right-wing lunacy, whether it was Betsy McCaughey’s phantasmagoric cover story, “No Exit,” about Bill Clinton’s health care plan, or Richard J. Herrnstein and Charles Murray’s unsavory meditations about a connection between race and IQ, both of which appeared during Sullivan’s tenure as editor. 

The problem was pretty basic. Over the decades, Peretz had battled for a president who would more or less represent the credo of the Democratic Leadership Council. He thought, or at least hoped, that his protégé Al Gore would capture the Democratic nomination for president, but it was another southern politician, Bill Clinton, who did. Once Clinton won the presidency in 1992, the magazine (where I worked from 1996 to 1999) remained on automatic pilot in its hostility to a mainstream Democratic president. The hostility culminated in the brief tenure of Michael Kelly, who almost exhausted the vocabulary of abuse in the English language as he flayed Clinton each week for his various transgressions. Kelly was a brilliantly talented writer, but he became obsessed with denouncing Clinton. In a climactic showdown, Peretz ended up firing Kelly after he targeted Gore as well.

If TNR gradually became more predictable in its stands, it was also the case that on the issue of Israel it never deviated one iota. Here, more than anywhere else, was where Peretz overlapped with the neocons. Peretz himself observes that his detractors 

thought it was my obsession, and they were right about that. Zionism was the one thing I absolutely would not compromise on, the one way I unilaterally exercised my ownership prerogative. When it came to Israel, I answered to no one but myself. I knew what I was talking about.

Maybe so, but was shielding Israel, as far as possible, from criticism really doing it any favors? The fact that Peretz, together with Wieseltier, Paul Berman, and Michael Walzer, felt compelled recently to write an op-ed in The Washington Post deploring Prime Minister Benjamin Netanyahu’s attempt to subordinate Israel’s judiciary suggests a different conclusion.

Peretz’s own difficulties began when he wrote a blog called “The Spine” that allowed him to ventilate his views about the Middle East as he pleased. In one entry in 2010, he declared that “Muslim life is cheap,” a statement that was widely denounced. The former 1960s radical had come full circle, denounced as a reactionary racist. The truth, as this memoir shows, is more complicated than that. A natural-born fighter with a proclivity for sweeping pronouncements, he acknowledges that he went off the rails. At bottom, Peretz, you could say, is more of a disillusioned liberal than a reactionary figure. 

With his relationship to TNR severed, Peretz wrote only sporadically on politics and foreign affairs. Perhaps that had a liberating effect on him. His memoir is the great book he never wrote at Harvard, a profound accounting of the passions that for several decades propelled him to the center of intellectual disputes about liberalism, Judaism, and America. Anyone interested in those controversies would do well to read The Controversialist.

The Controversialist: Arguments with Everyone, Left Right and Center by Martin Peretz. Wicked Son, 386 pp.
Chamber of Secrets  https://washingtonmonthly.com/2023/06/19/chamber-of-secrets/ Tue, 20 Jun 2023 01:00:00 +0000 https://washingtonmonthly.com/?p=148025

The Supreme Court’s habit of deciding hugely important cases without briefings, arguments, or even a word of explanation threatens democracy.


In December 2022, the Supreme Court’s public affairs office made a discreet announcement about procedures for the end of the 2022–23 term: 

For the remainder of this Term, the Court will resume its traditional practice of announcing merits opinions in open Court. Consistent with past practice, the live audio feed will be limited to oral arguments, and the audio of opinion releases will be recorded and available from the National Archives at the beginning of the next Term.

The provision about the oral announcement of opinions—no audio available until months later—is almost the only feature of “traditional practice” that survives the coronavirus pandemic and the Donald Trump years unchanged. In the past, oral argument was not live-streamed, and recordings were only made publicly available days after the session. The audio of opinion announcements was (and still is) withheld until the following October. 

The Shadow Docket: 
How the Supreme Court Uses Stealth Rulings to Amass Power and Undermine the Republic 
by Stephen Vladeck
Basic Books, 352 pp.

These two moments—argument and announcement—mark the only occasions on which the Supreme Court, one-third of the constitutional structure of our government, even tries to explain itself to the public it nominally serves. 

The Court began live-streaming oral arguments during the pandemic, and so far it has not reversed the practice. But announcements will still not be available to the public until a time when that public will no longer be paying attention. There is no logistical reason for this, and plenty of good arguments against it. Opinion announcements are among the most dramatic moments of a given term. They are not empty drama, nor are they a distraction: Such moments provide a much-needed glimpse of the Court’s mostly shrouded interior processes, and of the passions of the individual justices.

Some progressive legal scholars have coined the term demosprudence to underline that the opinion announcements are the only time that the Court’s members directly address the public. Clearly, however, the justices do not want the public to see or hear their words. This particular Court does not conceive of itself as belonging to the people.

Since Trump’s election in 2016, the Court has acquired four new justices; a new format for oral argument (formerly confined to one hour, oral argument now meanders to a conclusion when everybody runs out of steam); a new concept of precedent (previous cases are binding unless there’s something about them the new conservative majority just doesn’t like); and a new methodology for its constitutional jurisprudence (the “history and tradition”—or, more cynically, the “Look, I found something in Bracton’s De legibus et consuetudinibus Angliæ that agrees with me, case closed!”—test). 

Also consequential is an unannounced procedural change: The Court makes more and more important decisions through its so-called shadow docket, in which it grants or denies orders to decide, delay, or reverse lower-court decisions. These orders are often only one sentence long and announced in written form either on regular Court days or after hours. They often include no explanation of the Court’s reasoning and do not always record individual votes. Always available to the Court for genuine emergency cases (such as last-minute appeals from death row inmates), the shadow docket has become a major way in which the new Court shapes the law and steers the lower federal courts—almost uniformly in an extreme-right direction. 

This semisecret aspect of the Court is the subject of The Shadow Docket by Stephen Vladeck, a law professor at the University of Texas. As a Court observer, Vladeck is a phenomenon. He teaches constitutional law and the federal courts; he has also argued three significant cases (on military law and border security) before the Court. (Because Vladeck is six foot eight, the Court’s lectern had to be specially raised to accommodate him.) He is a contributing editor of Lawfare, the prominent national security blog, and a cohost of the National Security Law Podcast. He is also a legal analyst for CNN and a regular contributor to Slate. I suspect he is studying brain surgery in his spare time. 

The Shadow Docket is a work of profound respect for a Court he plainly loves, and is intended as a warning that it is losing its way, and risks thereby forfeiting the place it has long held in American law and life. And the book places both the current “shadow” controversy and the Court itself within a history quite different from the reigning belief that the Framers had a clear vision of the Court as a check on the elected branches. Instead, beginning with Chief Justice John Marshall’s 1803 power grab in Marbury v. Madison, the Court’s history is largely a tale of an institution that is barely mentioned in the Constitution but has used ambiguity and guile to aggrandize itself. 

The term “shadow docket” was minted in a 2015 law review article by the University of Chicago law professor William Baude. It refers to the Court’s “non-merits docket,” or “orders list”: cases and matters the Court resolves in whole or in part without full briefing and argument, either before an eventual decision on the merits or, more problematically, instead of a fully briefed, argued, and reported decision.

Baude is a prominent conservative scholar, a former clerk for Chief Justice John Roberts, and a Federalist Society member. Those conservative credentials did not prevent Justice Samuel Alito in a 2021 speech from blasting Baude’s neologism as “a catchy and sinister term” that “is part of unprecedented efforts to intimidate the Court and to damage it as an independent institution.” (Baude is a family friend; in my experience, people find him impressive but, well, not exactly intimidating.)

Alito’s chosen term is “emergency docket,” which is at best only partially accurate. Some orders are genuine emergencies—remember death penalty appeals—but many are not. The docket is now in essence an alternative Court by which the new conservative majority can achieve its goals without explaining or, indeed, acknowledging how it is changing the law. Consider, for example, a 2022 order in Louisiana v. American Rivers, in which a district court set aside a Trump-era regulation making it harder for states to review projects within their borders that might worsen water pollution. After the lower court, pending full review, had reinstated the regulation that had been in force for half a century, a group of conservative states asked the Supreme Court to impose the newer Trump regulation on an emergency basis while appeals were pending. Without explanation, the Court did so. In a pointed dissent, Justice Elena Kagan noted that “applicants’ own actions belie the need for a stay: Twice, the applicants waited a month before seeking that relief.” The success of their maneuver, she wrote, “renders the Court’s emergency docket not for emergencies at all. The docket becomes only another place for merits determinations—except made without full briefing and argument.” 

Revealingly, the lineup in American Rivers was 5–4—with Roberts providing the fourth dissenting vote. As Vladeck notes, the growth of the shadow docket has marched along with radical internal polarization of the Court, to the point that the deeply conservative chief, always alert to the institutional interests of the Court, is now a party to some very sharp “liberal” dissents. 

Shadow orders can take a number of forms: “summary” reversals or affirmations of lower-court opinions (making a final decision without waiting for briefs or argument); “vacatur” of those opinions (wiping them away as if they had never been written); “certiorari before judgment” (taking cases before lower courts have had a chance to rule); or granting or denying a stay or an injunction (preventing further proceedings until a specific issue can be decided). None of these requires even a word of explanation. Such orders may come with a brief opinion explaining the majority’s rationale—or with no explanation other than an anguished dissent by the losers. 

The Shadow Docket is an important book for anyone who wants a deep understanding of the way the post-Trump Court is moving to reshape the law. Vladeck is a clear and engaging writer; non-lawyers will appreciate his skill at making as clear as possible some intricacies of appellate jurisdiction. There’s still some complexity to wade through, but the important thing to remember is that the Court has vastly expanded its use of these orders in cases that are not emergencies, and that, as Vladeck points out, have a distinctly partisan valence. 

No one designed the shadow docket; indeed, as Vladeck shows, in the years after adoption of the Constitution, the Court had so little to do that the first chief justice, John Jay, grew bored and resigned to run for governor of New York. In many “federal question” cases, it was a true court of appeals; litigants who lost in lower courts had a right to bring their cases before the high tribunal, which was required to hear and decide them. From the time the Court first assembled in New York on February 1, 1790, and its jurisdiction began to expand, the justices have complained about their workload. Gradually—first in 1891, and again in 1925—Congress gave the Court greater control by inaugurating the “writ of certiorari,” by which the Court could take on a case or decline it. In 1988, at the behest of new Chief Justice William Rehnquist, Congress gave the Court almost complete control over its caseload. 

It should surprise no one (least of all a reader of the Washington Monthly) that when a government bureaucracy is given a veto over its own inbox, its workload often declines precipitously. For much of the 20th century, it was not unusual for the Court to allow full briefing and argument in, and produce full opinions for, as many as 200 cases per term. By 2005, the average had fallen to 80. In the 2021–22 term, the total was fewer than 60. 

Meanwhile, the “non-merits docket” was mostly used for genuine emergencies such as death appeals; it came into full sinister bloom only with the advent of the Trump administration. Between October 2019 and June 2022, Vladeck notes, the Court granted a total of 60 emergency petitions for relief—the most since the 1980s. But unlike the earlier period, comparatively few of these were death appeals, and many (like American Rivers) did not represent any sort of emergency at all. After the confirmation of Justice Amy Coney Barrett created a five-vote hard-right majority, the practice gathered momentum: for example, Vladeck notes, in John Roberts’s first 15 years as chief, the Court granted a total of four of the extreme orders called “injunctions pending appeal”; after the ascent of Barrett in 2020, it granted six in five months.

Over the years, the Court has developed a formula for determining when it should intervene in litigation below. One factor it is supposed to consider is whether the party asking the Court to jump in will suffer “irreparable injury” if the case is not stopped in its tracks. Vladeck was among the first commentators to note that “irreparable injury” has taken on a new meaning for this Court; rather than, let’s say, destruction of property or immediate financial loss, it now often refers to the indignity government officials suffer when forced to delay implementing their favored policies until courts can decide whether they are legal. Under Trump, the Court was quite tender-minded about the sufferings of government, insisting, among other things, that the administration be allowed to proceed with questionably legal construction of the border wall on private property and that the “travel ban” be allowed to take partial effect, with its disruption of families, before final adjudication of its constitutionality. With the advent of the Biden administration, the dreadfulness of this sort of delay has begun to seem less obvious to the conservative majority. Indeed, the partisan background of shadow docket cases seems like their dominant determining factor. “It’s difficult to dismiss as coincidence,” Vladeck writes, “that the Court’s interventions in immigration cases, for example, generally allowed President Donald Trump’s policies to go into effect and generally blocked President Joe Biden’s policies.” 

For years, the Court and lower courts understood that shadow docket orders have no “precedential value”—as nominally temporary orders pending resolution of cases, they are not binding on lower courts. To the new conservative majority, however, “even unsigned Emergency orders … were to be given precedential effect by lower courts, despite a long-standing tradition to give them no such weight,” Vladeck writes. One such decision, Tandon v. Newsom, completely rewrote the law of “free exercise of religion,” tilting the doctrine in favor of religion—a decision that Vladeck calls “indefensibly lawless.” 

Conservatives dismiss criticism as sour grapes. Alito himself, in an October 2021 speech at Notre Dame Law School, noted that the Court has always had a non-merits docket and suggested that criticism of its present state must thus be offered in bad faith. But as Vladeck demonstrates, the current muscular docket bears only a superficial resemblance to that of even a decade ago. Alito also insisted that shadow docket orders are preliminary and procedural, and thus do not decide important constitutional issues. This claim is easily refuted by looking at Whole Woman’s Health v. Jackson, the challenge to Texas’s radically novel “Heartbeat Bill.” That bill banned abortions after about six weeks, and allowed private citizens to sue anyone who helped a woman obtain one after that. It flatly contradicted Planned Parenthood v. Casey—a major precedent of the Court that had not, at that time, been overruled. Having lost in lower courts, abortion providers sought an emergency injunction to block the bill from going into effect before the constitutional issue could be resolved. Without explanation, the Court delayed its decision until 24 hours after the law took effect. In a brief note, the majority wrote that the bill’s jurisdictional provisions were just too darn complicated for it to figure out, so it could go into effect.

Alito insisted that the Court’s brief order did not actually nullify Casey. After all, he noted, the opinion itself said, “we do not purport to resolve definitively any jurisdictional or substantive claim in the applicants’ lawsuit. In particular, this order is not based on any conclusion about the constitutionality of Texas’s law.” But Alito’s claim is almost impudently disingenuous: As Vladeck writes, the order’s prim language “made little difference to the millions of Texans who, in an instant, could not obtain a constitutionally protected abortion anywhere in the state.”

Other observers have claimed that the increased number of shadow docket decisions arises out of the growing practice of “nationwide injunctions.” Most of the Court’s shadow orders, however, do not come in such cases. The other justification, voiced by conservative legal commentators in the Trump years, is that (as one put it to me in private conversation) “the lower courts are out of control.” Early in the Trump years, conservative publicists coined the term “judicial resistance” to suggest that lower-court judges were deliberately thwarting Trump initiatives for partisan or ideological reasons. Though this is rarely made explicit, the current Supreme Court majority clearly distrusts Democratic appointees to the lower courts. 

Justice Clarence Thomas, for example, wrote (and Trump appointees Neil Gorsuch and Brett Kavanaugh joined) a remarkable separate opinion that directly questioned the motives of District Judge Jesse M. Furman, an Obama appointee, who had blocked the imposition of a citizenship question on the 2020 Census form: “I do not deny that a judge pre-disposed to distrust the Secretary [of Commerce, who conducts the census] or the administration could arrange [the] facts on a corkboard and—with a jar of pins and a spool of string—create an eye-catching conspiracy web,” Thomas wrote. (There seems to be, it must be noted, less apparent worry on the right about “judicial resistance” now that the tool of the nationwide injunction is in the hands of Trump appointees for use against Biden initiatives.)

Finally, some conservatives dismiss criticism of the shadow docket by suggesting that critics are liberal ideologues who object to the Court’s conservative turn. Vladeck responds, “I can’t speak for others, but I’ve been critical of the Court’s shadow docket behavior—and I wrote this book—not because I want to delegitimize the Court, but because I fear that the Court is delegitimizing itself, and that not enough people—the justices included—are caring.” 

By May of this year, the Court’s self-delegitimization had escalated from a mere crisis into what looks more like a death spiral, an institutional collapse of which the irresponsible and unaccountable use of its shadow docket power is only one aspect among many. Indeed, the most innocent explanation of the situation would be that the Court’s majority is so ethically challenged, internally divided, jurisprudentially sloppy, and ideologically polarized that it cannot do a competent job despite what by historical standards is a ridiculously light workload. 

The flaws in the current Court’s operation may simply betray an institution no longer able or willing to function at a sustainable level. The general slovenliness of its judicial work may be just one result of an institutional disinclination to any sort of ethical standards, workplace competence, or public accountability. This spring we learned that, over the past decade, the justices have considered adopting a code of ethics but instead decided that, like Melville’s Bartleby the scrivener, they would just prefer not to. In 2011, the chief justice assured Congress that the justices are ethical and thus whatever they do must be all right. 

In reality, meanwhile, the moral and ethical collapse at 1 First Street NE is almost beyond parody. Thomas has become a kind of domestic pet of the far-right billionaire Harlan Crow, accepting hundreds of thousands of dollars’ worth of free travel and entertainment (as well as a suspiciously favorable set of real estate transactions and prep school tuition for a family member) from a benefactor whose other possessions include a trove of Hitler memorabilia. Thomas clearly knew that he was required to disclose these gifts, and at first he did—until, following adverse comment in the press, he decided that he’d really prefer the people not know about them. Thomas’s wife, Ginni, received large payments from a legal advocacy group that regularly files briefs with the Court; Leonard Leo of the Federalist Society, her benefactor, took care to prevent the public from knowing that the money went to a justice’s spouse. Thomas also couldn’t cite the correct name of the real estate enterprise that was paying him as much as $100,000 in annual proceeds; he won’t discuss his failure to recuse himself from a case in which the documents at issue proved deeply embarrassing to his wife. 

These offenses are rank and plainly intentional. Ordinary government employees who consciously lie on official documents may face prison, not festschrifts in their honor at right-wing think tanks. In any previous era, Thomas’s resignation would have been on Roberts’s desk weeks ago.

Gorsuch was also oddly stumped by a disclosure form that, properly filled in, would have revealed that a group headed by a big-firm lawyer had bought some previously slow-to-sell real estate from him immediately after he went on the Court. (Gorsuch also has a billionaire patron, the oil and gas magnate Philip Anschutz.) Alito was lavished with hospitality by real estate investors from Ohio as part of a carefully designed influence operation by a Christian right organization that used bribe-level contributions to the Supreme Court Historical Society as a way to befriend and lobby the right-wing justices. 

The Court’s leak problems also suggest that some inside the institution no longer believe in either its integrity or its autonomy. Breathless reports suggested that the May 2022 leak of Alito’s draft opinion overturning Roe v. Wade was unprecedented; though it was the first time in memory that an actual opinion was leaked, Dobbs v. Jackson Women’s Health Organization was in fact the fourth major leak in the past decade or so—that we know of. In 2012, someone inside the Court leaked information to conservative media in an apparent attempt to pressure Roberts to provide a fifth vote to overturn the Affordable Care Act. In 2014, a Christian right organization knew the result in Hobby Lobby before it was announced; according to at least one person involved, the leaker was Alito. During the 2019–20 term, as Joan Biskupic reports in her new book, Nine Black Robes: Inside the Supreme Court’s Drive to the Right and Its Historic Consequences, someone apparently on the Court’s conservative side leaked internal deliberations in a case about the application of the Civil Rights Act to sexual orientation and gender identity. The aim was apparently to pressure Roberts and Gorsuch, who had strayed from conservative orthodoxy, to switch their votes and thus flip the result—an effort that failed.

Not to worry, though: After the Dobbs leak the next year, the Court, with the punctilio of Peter Sellers’s Inspector Clouseau, announced an “investigation” by the Court’s marshal. Months later, the marshal majestically produced—hey, presto!—nothing at all. This was hardly surprising, since “investigators” apparently did not even trouble the justices themselves with on-the-record interviews. 

Meanwhile, the conservative majority lectures the nation on the people’s absolute obligation to respect the Court and obey its orders without reservation. To borrow a phrase from the English writer Evelyn Waugh, watching the post-Trump majority deface a beloved American institution “is to experience all the horror of seeing a Sèvres vase in the hands of a chimpanzee.” 

Vladeck expresses a more muted version of that horror. “Our constitutional republic needs a legitimate Supreme Court, even one staffed by a majority of justices with whom many of us routinely disagree,” Vladeck warns; indeed, “democracy itself may depend upon a Supreme Court [that is] widely perceived to be legitimate.” 


The hope for reform is Congress, which has authority over the Court’s jurisdiction and, to some extent, its operations. Much of the Court’s freedom from hard work and accountability results from changes Congress has made over the years at the justices’ behest. Lawmakers could change the procedures and requirements of emergency orders by statute—requiring, for example, a higher majority for certain emergency orders or requiring the Court to explain itself when it steps prematurely into a case. It also could impose a binding code of ethics on the Court and create an actual enforcement mechanism (the lower federal courts manage to remain independent even though they have both).

But the likelihood of such reform is, for the moment, relatively remote. “Since 1988 (if not before), Congress has done far too little to assert its institutional authority over the Supreme Court,” Vladeck notes. The best hope for saving one failing institution thus lies in the hands of another. The somber likelihood is that the Court’s internal dysfunction will worsen, and even turn septic, before our system of government acts to save itself. 

If it does.

Why Did Mike Pompeo Write This Book? https://washingtonmonthly.com/2023/06/19/why-did-mike-pompeo-write-this-book/ Tue, 20 Jun 2023 00:55:00 +0000 https://washingtonmonthly.com/?p=147283

"Never Give an Inch" had been perceived as a precursor to a presidential bid. Now, it looks like a vice presidential job application submitted to Donald Trump.


Politicians with presidential ambitions often kick off their campaign by publishing a book telling their life story and laying out why they deserve to be the leader of the free world. Mike Pompeo’s 400-plus page contribution to the genre is a little different, which may explain why he announced that he would not run for president three months after it was published.

Never Give an Inch: Fighting for the America I Love by Mike Pompeo. Broadside Books.

Pompeo chronicles his four years working with President Donald Trump, defending Trump’s foreign policy record and touting his own contributions to it. Pompeo has almost nothing critical to say about his former boss. Never Give an Inch feels less like a presidential campaign book than a vice presidential job application. 

Vice presidents are often attack dogs, and Pompeo appears to be auditioning for that role. His memoir is a furious, judgmental screed, perfectly calibrated to please the conservative Christians who dominate Trump’s political base. He attacks those whom he sees as foreign and domestic enemies of America. He directs a fire hose of vitriol at former President Barack Obama, a slew of former Obama administration officials, the U.S. Foreign Service, and even some former Trump aides, including John Bolton, a vocal Trump critic who might also run for president, and Steve Bannon, whom Trump fired early on in his presidency but who remains in the Trump orbit. 

Pompeo views the world in Hobbesian terms that Trump would probably appreciate, repeatedly describing the world as a “mean, nasty place.” He argues that the Trump administration inherited a disastrous international situation concocted by fools and knaves, requiring a divinely inspired “America First” foreign policy that combines a hard calculation of American interests with a Scripture-infused framework. 

Pompeo’s analysis of global events is not always wrong; he is justified, for example, in touting his own role in rebuilding U.S. relations with Greece and in criticizing Obama’s hesitancy in enforcing his ill-conceived “red line” in Syria. Yet the book is far from a fair-minded assessment of America’s foreign policy history. 

Pompeo devotes the first dozen or so chapters to listing those whom he regards as disloyal or cowardly, or even as traitors to the United States and its values, including former FBI Director James Comey and current FBI Director Christopher Wray. Trump surely won’t mind the special attention Pompeo gives to Hillary Clinton, whom he flimsily accuses of being behind the “Russia hoax,” violating security regulations, causing the debacle at Benghazi, and engaging in corruption. Shoring up his America First bona fides, Pompeo complains that Clinton “led the charge to take out Muammar Ghaddafi.” 

He demonstrates a particular hatred toward his predecessor at the CIA, John Brennan. Not so coincidentally, Trump claimed while president to have revoked Brennan’s security clearance, on the dubious grounds that Brennan had become too partisan after joining the private sector. (Trump never followed through on the paperwork.) Without a trace of self-awareness, Pompeo deems Brennan a “total disaster” who was “wildly political” and the “de facto commissar of the progressive movement.” He even goes as far as to baselessly accuse Brennan of supporting Palestinian terrorism. He heaps scorn on Brennan’s record at the CIA of administering diversity and inclusion policies and prioritizing climate change. Betraying the thinness of his case, Pompeo dredges up Brennan’s admission that as a college student, nearly 50 years ago, he voted for the Communist Party USA presidential candidate. 

Pompeo shares with Trump an active dislike for Europeans, in particular French President Emmanuel Macron and former German Chancellor Angela Merkel. He accuses Merkel and Macron of living in a fantasy world regarding Libya and China and criticizes their support for the Iranian nuclear deal that Trump tried to kill. He states that his counterparts running the European Union’s foreign policy arm, Federica Mogherini (who, Pompeo notes, was once a member of the Italian Communist Youth Federation) and the “socialist” Josep Borrell, hated him and Trump because they believed them boorish and dumb. 

Perhaps most controversially, Pompeo defends the Trump administration’s thaw with Saudi Arabia. Pompeo expresses outrage at the notion that America’s vital reliance on Saudi Arabia should be endangered over “a single unjust murder”—that of the journalist Jamal Khashoggi, whom Pompeo accuses of being a closet Islamist. Pompeo further states that “there is zero intelligence that directly links” Saudi Crown Prince Mohammed bin Salman to the order of the murder, even though the CIA made the opposite determination shortly after Pompeo’s tenure as director ended. 

Pompeo’s account of Trump’s Afghanistan policies raises more questions than any other part of the book. Pompeo agrees with Trump—though at first grudgingly—that the U.S. needed to close the book on the Afghanistan story. He vacillates from page to page on the value of the force increases pushed by Trump’s generals but, in the end, welcomes the final drawdown to 2,500 troops. 

His description of the negotiations with the Taliban at times drifts into the surreal. One comes away with the impression that Pompeo saw the Taliban as tough fighting men who were worthy of his admiration and deserved to win. He saves his wrath for the recognized Afghan government, particularly President Ashraf Ghani, whom he accuses of trying to sabotage the negotiations. He justifies threats he made to cut Afghan aid if Ghani would not cooperate. 

Pompeo makes no mention of the fact that he approved of the decision to exclude the Afghan government and America’s NATO allies, who also had troops on the ground, from the negotiations to wind down U.S. involvement in Afghanistan. Nor does he mention the fact that the American negotiator, Zalmay Khalilzad, acceded to every concrete Taliban demand, such as forcing the Afghan government to release thousands of prisoners who went straight back into the fight. Instead, he praises Khalilzad’s success in extracting future concessions, such as a promise not to take away women’s rights or allow other terrorists to operate in Afghanistan. The book was published in January 2023, well after the Taliban violated those promises, so Pompeo must have known before he went to press that those arguments are no longer operative. 

Pompeo repeats the claim that the Biden administration did not have to order a withdrawal from Afghanistan. But his insistence that the U.S. was not committed to withdrawal—despite the Doha Agreement of February 2020, which Trump signed—rings hollow. Trump was determined to get out of Afghanistan no matter what, and signed away all leverage over the Taliban. 

To be fair, Pompeo takes a few subtle swipes at Trump. He criticizes his former boss for appointing incompetent loyalists from time to time. And he voices his frustration at his own inability to wean Trump away from authoritarian leaders such as Russian President Vladimir Putin and Turkish President Recep Tayyip Erdoğan. 

But when it comes to North Korea’s Kim Jong Un, Pompeo gives a generous account. He acknowledges that Trump’s negotiations failed to end Pyongyang’s nuclear weapons program, yet insists that they nevertheless brought peace in our time, and gives Trump points for trying. 

Overall, Pompeo’s few criticisms of Trump are exceedingly gentle, in a tone of sorrow rather than anger, mixed with many more professions of admiration and loyalty. Trump has forgiven far worse slights from those he deems sufficiently loyal to him now. This book won’t get Pompeo taken off the vice presidential short list. If that’s all Pompeo was trying to achieve, then in all likelihood, mission accomplished.  
