November/December 2024 | Washington Monthly
https://washingtonmonthly.com/magazine/november-december-2024/

Take Your Mind Off the Election With the Fall Books Issue
https://washingtonmonthly.com/2024/10/29/take-your-mind-off-the-election-with-the-fall-books-issue/ | Wed, 30 Oct 2024

The November/December issue is here

The post Take Your Mind Off the Election With the Fall Books Issue appeared first on Washington Monthly.


If you’re a reader of the Washington Monthly, you’re clearly not one of those “low information voters” who needs to be reminded that there’s a world-historic election taking place and you should pay attention to the issues and candidates and have a plan to vote. If anything, you’re probably paying too much attention—and suffering considerable anxiety as a result.

November/December 2024

Truth is, there’s a limit to how much one person’s doomscrolling can alter the course of historical events. So, for your sanity, we invite you to take a little break from the campaign and peruse the November/December issue of the print magazine, which we’re releasing today. Mercifully, there’s nothing in it on “crosstabs” or “recall vote weighting” or whether the entire race hinges on the Assyrian vote in Michigan.

Instead, you’ll find stories and essays calibrated to expand your knowledge without wearing down your soul. Our cover package consists of nine thoughtful reviews of books on politics, history, and cultural criticism—including Matthew Cooper’s delightful take on Johnny Carson, Sara Bhatia’s balanced appraisal of Woodrow Wilson’s legacy, and Garrett Epps’s insights on the unique power of the U.S. Supreme Court. And in his editor’s note, Paul Glastris offers a personal reflection on why growing up in a home without books made him a lifelong fan of book reviews.

Meanwhile, our features deliver deeply reported stories on vital subjects that deserve our attention regardless of what happens on Tuesday. Phillip Longman shows how the monopolization of freight rail is hampering America’s industrial comeback. Courtney Radsch reports on how artificial intelligence is far more dependent on high-quality, human-made information than anyone thought—which means human content creators, including journalists, have far more leverage over the industry than anyone realizes. Rob Wolfe uncovers a little-understood industry that controls who gets recruited for college. And Zachary Marcus reveals how online sports betting threatens to become the next opioid crisis—and how to fix its most predatory practices.

So put on some tea, kick up your feet, and let your spinning mind come to rest in these pages. After you’ve taken time to read these pieces, the election will still be there. That, for better or worse, we can promise.

Enjoy the issue,

The Editors

FEATURES

AI Needs Us More Than We Need It
Without a constant stream of high-quality, human-made information, artificial intelligence models become useless. That’s why journalists and other content creators have more leverage over the future than they might know.
By Courtney C. Radsch 

Train Drain 
How deregulation and private equity have gutted the U.S. freight rail system—and with it, the promise of America’s industrial renewal.
By Phillip Longman 

The App Always Wins
Online sports gambling companies use sophisticated and deceptive techniques to exploit problem gamblers. The same technologies could be used to protect the addicted, if government would only demand it.
By Zachary Marcus

The Student Recruitment Industrial Complex
We tend to think about equity in higher education in terms of how colleges treat students who apply. But long before that, a little-known industry called enrollment management decides who gets the glossy brochures and who gets ignored.
By Rob Wolfe

EDITOR’S NOTE

An Unliterary Childhood
Why growing up in a home without books made me a lifelong fan of book reviews.
By Paul Glastris

BOOKS

He-e-e-e-re’s Johnny! 
He dominated late-night television for 30 years, before our shoutfest era. A new biography of the reclusive Nebraskan is also an elegy for a lost America. 
By Matthew Cooper 

A Millennium of Conflict
Russia’s identity, not its security or the fear of NATO, has historically been the main driver of Moscow’s aggression toward Ukraine. But is the war really a genocide? 
By Tamar Jacoby 

Why They Reign Supreme
A fresh and readable one-volume history of the Court explains how we got from Marbury to Dobbs.
By Garrett Epps 

The Regressive Era 
A new biography of Woodrow Wilson puts the 28th president’s racism and sexism at the center of its narrative—and his world-historic domestic and international achievements on the periphery. 
By Sara Bhatia 

Modi Operandi
A chronicle of the rise of Indian nationalism offers lessons for defeating our own version.
By Daniel Block 

Who Will Protect the Watchdogs?
Inspectors general have fought for accountability in U.S. government for decades, with varying success. They will be prime targets in a second Trump administration.
By Mike Lofgren 

The Tyranny of the Welfare Queen
For decades, the conversation about social services in America has centered on a false and harmful stereotype about who deserves help.
By Anne Kim 

The New Christian Right
The charismatic religious movement that supports Donald Trump is less misogynistic and racist than the old Moral Majority, but far more determined to crush liberalism by any means necessary.
By Ed Kilgore 

From President to Convict
Jonathan Alter takes us inside the hush money trial that made Donald Trump the first ex-president felon in American history.
By James D. Zirin 

AI Needs Us More Than We Need It
https://washingtonmonthly.com/2024/10/29/ai-needs-us-more-than-we-need-it/ | Tue, 29 Oct 2024

Without a constant stream of high-quality, human-made information, artificial intelligence models become useless. That’s why journalists and other content creators have more leverage over the future than they might know.



As magical apps keep appearing that can do everything from compose music to write news stories and legal briefs, many human “content creators” sense the end of a way of life and looming financial doom. How will artists, journalists, or, for that matter, lawyers and analysts of all kinds be able to make a living in the future if a chatbot can easily replicate their creativity and expertise? 

But here’s some inside information about artificial intelligence that is at once scary and promising. It turns out that bots need people more than people need bots. 

Simply put, that’s because to make bots smart you need to feed them high-quality data created by humans. Indeed, for bots to approach anything like human intelligence, they need both massive quantities of data and quality data produced by actual humans. And as it happens, we are running low on such data and will run out all the faster if AI puts more human content creators out of business. 

This means that the more than $930 billion investors have so far poured into AI companies could ultimately turn out to be just inflating another bubble. But there is still a chance to get AI right. It involves using government policy to make sure that humans receive the compensation they deserve for creating the content that makes continued advancements in AI financially and intellectually sustainable.

Large-scale AI models that are capable of a wide array of uses or tasks are known as foundation models. They are trained on vast troves of data scraped off the internet by web crawlers. But to make chatbots, like ChatGPT, or Google’s generative search summaries work, these static models need to be updated with more relevant, timely, and accurate data. The best data comes from online publications, databases, and other digital content repositories that reflect reality and authentic human knowledge and expertise. Without using such sources, the AI applications that average users are familiar with can get trained on information about, say, restaurants that closed years ago or wild conspiracy theories that fact-checkers and scientists have debunked. If you tried to train a bot just using the misinformation and digital pollution that infects so much of the internet, you’d create a different kind of AI: artificial idiocy. 

But here’s the rub. The AI industry is running short of the kind of data it needs to make bots smart. It’s estimated that within the next couple of years the demand for human-generated data could outstrip its supply.

Unless AI training includes access to quality news outlets and periodicals, including local newspapers, it is likely to be based on out-of-date information or on data that’s false or distorted, such as inaccurate poll location information or false reports of illegal immigrants devouring pets.

Even in the early days, before quality training data became so scarce, AI models were beset by inherent challenges. Since AI outputs are created based on statistical correlations of previously created content and data, they tend toward the generic, emblematic, and stereotypical. They reflect what has done well commercially or gone viral in the past; they appeal to universalist values and tastes (for example, symmetry in art and faces, or standard chord progressions in music); they bolster the middle while marginalizing extremes and outliers. 

For the same reason, even the AI models trained on the best data tend to overestimate the probable, favor the average, and underestimate the improbable or rare, making them both less congruent with reality and more likely to introduce errors and amplify bias. Similarly, even the best AI models end up forgetting information that is mentioned less frequently in their data sets, and outputs become more homogeneous. This is why, for example, OpenAI says its image generator, DALL-E 3, displays “a tendency toward a Western point-of-view,” with images that “disproportionately represent individuals who appear White, female, and youthful.” Or why the Stable Diffusion tool generated pictures of toys in Iraq that look like American troops, reflecting the American, English-language association of Iraq with war.

Like a giant autocomplete, generative AI regurgitates the most likely response based on the data it has been trained on or reinforced with and the values it has been told to align with. As a result, the systems and their outputs embed, reinforce, and regurgitate dominant values and ideas and replicate and reinforce biases, some obvious and others not. 

But now these inherent problems with AI are being made much worse by an acute shortage of quality training data—particularly of the kind that AI companies have been routinely appropriating for free. 

One reason for the shortfall is that more and more of the best and most accurate information on the internet is now behind paywalls or fenced off from web crawlers. The Data Provenance Initiative recently audited 14,000 domains commonly used for AI training and found “a clear and systematic rise in restrictions to crawl and train on data” over the past year, as most top web domains—like news, social media, and encyclopedias—have restricted AI crawlers. In another running tally, more than half of 1,165 news publishers surveyed had instructed at least one of the three leading crawlers to exclude their sites, while 76 percent of U.S. publishers have implemented paywalls, another signal that their content is not intended to be available for free. Yet unless AI training includes access to quality news outlets and periodicals, including local newspapers, it is likely to be based on out-of-date information or on data that’s false or distorted, such as inaccurate voting information or false reports of illegal immigrants devouring pets. 
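The crawler opt-outs described above are typically expressed in a site’s robots.txt file. As an illustration only (the exact rules vary by publisher), a news site that wanted to block the major AI training crawlers while still allowing ordinary visitors and search indexing might publish something like this:

```text
# robots.txt — block AI training crawlers, allow everything else
User-agent: GPTBot          # OpenAI's training crawler
Disallow: /

User-agent: CCBot           # Common Crawl, widely used to build AI training sets
Disallow: /

User-agent: Google-Extended # opt-out token for Google's AI training (not Search)
Disallow: /

User-agent: *
Allow: /
```

As the Wired reporting on Perplexity shows, however, compliance with these directives is voluntary: robots.txt is a request, not an enforcement mechanism.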

Some AI companies, to be sure, are finding ways to scrape or steal data from news and other quality publications despite the technical and legal obstacles. Over the summer, journalists at Wired documented how Perplexity.ai, a free AI-powered search engine, surreptitiously scraped their content, and that of other publishers like Forbes, despite explicit instructions, in both code and terms of service, prohibiting its crawlers and unauthorized use by third parties. 

“Perplexity had taken our work, without our permission, and republished it across multiple platforms—web, video, mobile—as though it were itself a media outlet,” lamented Forbes’s chief content officer and editor, Randall Lane. The search engine had apparently plagiarized a major scoop by the company, not just spinning up an article that regurgitated much of the same prose as the Forbes article but also generating an accompanying podcast and YouTube video that outperformed the original on search.

Yet AI companies still find it harder and harder to get quality training data, especially for free. And that leaves them more and more dependent on data scraped from the open internet, where mighty rivers of propaganda and misinformation flow. A NewsGuard investigation recently found, for example, that the top 10 chatbots have a propensity to repeat false narratives on topics in the news and to mimic Russian propaganda, reflecting the scale and scope of Russia’s historic and ongoing state-sponsored information operations.

Then there’s the data problem caused by AI itself. Every day, people and governments around the world are adding a Niagara Falls of nonsense to the internet using increasingly accessible AI apps. Amazon is inundated with AI-generated books and product reviews. Facebook is overrun by zombie accounts. Google Search and Google News are infested with AI-generated fake news and junk content.

As the spread of AI makes it harder and harder to find quality data for training AI bots, the industry has responded by increasingly relying on what some researchers call “synthetic data.” This refers to content created by AI bots for the purpose of training other AI systems. But there are real limits to this approach. It’s like trying to advance human knowledge using photocopies of photocopies ad infinitum. Even if the original data has some truth quotient, the resulting models become distorted and less and less faithful to reality. Eventually they tend to malfunction, degrade, and potentially even collapse, rendering AI useless, if not downright harmful. When such degraded content spreads, the resulting “enshittification” of the internet poses an existential threat to the very foundation of the AI paradigm.

Recently, a team of AI researchers used the term model collapse to describe what happens when you use AI-generated data to train AI models. Model collapse, the authors explain, “refers to a degenerative learning process where models start forgetting improbable events over time, as the model becomes poisoned with its own projection of reality.” To avoid model collapse, they warn, “access to genuine human-generated content is essential.”
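The dynamic the researchers describe can be seen in a toy simulation (an illustrative sketch, not their actual experimental setup): treat a “model” as nothing more than the empirical frequency of tokens in its training corpus, and train each new generation only on the previous generation’s synthetic output. A rare but real piece of information drifts toward extinction, and once it disappears from a generation, it can never come back:

```python
import random
from collections import Counter

def next_generation(corpus, n):
    # "Train" a toy model on the corpus: its learned distribution is just the
    # empirical token frequencies. Then generate n synthetic tokens from it.
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=n)

random.seed(42)
# Generation 0: human-made data containing one rare but real fact ("rare"), 1% of tokens.
corpus = ["common"] * 198 + ["rare"] * 2
history = []
for gen in range(40):
    # Each generation is trained only on the previous generation's synthetic output.
    corpus = next_generation(corpus, 200)
    history.append(corpus.count("rare"))

print(history)  # the rare token's count drifts; once it hits 0, it never recovers
```

Real model collapse is subtler, since probability mass shifts rather than tokens literally vanishing, but the one-way forgetting of improbable events is the mechanism the authors warn about: a model that has never seen the rare token cannot regenerate it.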

The AI industry knows this. And they know that when it comes to chatbots, information retrieval, and fine-tuning, data quality counts more than quantity. “In a nutshell, what has been learned over the last few years is that working with a smaller amount of high quality data with a larger model, often expressed in parameters, is a better way to go,” writes the data architect Dennis Layton, who also notes that the human capacity to learn depends on more than the raw number of examples we are given. “This seems to hold true for machine learning as well.”

The AI industry has taken up several different strategies for trying to overcome its increasing difficulties in appropriating the human-generated content it needs to survive. 

One tack has simply been to use its lobbying power to try to erode copyright laws and other protections for authors, publishers, musicians, and other content creators. AI companies have repeatedly claimed that requiring licensing would stall “progress” or make it impossible to develop some AI systems. “If licenses were required to train [large language models] on copyrighted content, today’s general-purpose AI tools simply could not exist,” according to Anthropic, the Amazon- and Google-backed generative AI firm. 

Similarly, a representative of the Silicon Valley venture capital firm Andreessen Horowitz told the U.S. Copyright Office that the “only practical way for these tools to exist is if they can be trained on massive amounts of data without having to license that data.” According to OpenAI, limiting model training to content in the public domain would not meet the needs of their models. 

Having flouted copyright, digital rights management, and contractual constraints with impunity, Big Tech is working as hard as possible to convince lawmakers to limit the privacy and contract rights of content creators while expanding the legal zone of “fair use.” As Bill Gross, who is best known for coming up with the pay-per-click model for digital advertising, recently told Wired, “It’s stealing. They’re shoplifting and laundering the world’s knowledge to their benefit.”

Authors, publishers, photo and music agencies, entertainers, and ordinary users have tried to fight back, filing more than two dozen lawsuits against the AI companies at the forefront of what one plaintiff characterized as “systematic theft on a mass scale.” But these will take years to resolve, if ever. Meanwhile, entire professions that have evolved in part due to the protections and revenue provided by copyright and the enforcement of contracts become more precarious—think journalism, publishing, and entertainment, to name just three. 

Another tack being tried by the biggest players in AI has been to strike deals with those most likely to sue or to pay off the most vocal opponents. This is why the biggest players are signing deals and “partnerships” with publishers, record labels, social media platforms, and other sources of content that can be “datafied” and used to train their models.

Microsoft-backed OpenAI has made more than a dozen licensing deals and is in discussion with another dozen of the most prominent publishers in the U.S. and Europe, including a $250 million deal with NewsCorp that dwarfs the estimated $1 million to $5 million that several other publishers reportedly received. OpenAI’s deals with AP and Time include access to their archives as well as newsroom integrations likely to provide useful training and alignment, while a slew of other deals include newsroom integration and API credits, ensuring a supply of human-centered data. 

“We are starting to get really concerned that the platforms are going to talk to 15 or 20 of the big media owners around the world, make a lot of nice deals, then they’ll tell the governments and regulators that they’re playing nice with the publishers,” says Alastair Lewis, head of the industry association FIPP, which represents magazine publishers around the world. “But the vast majority of us are going to be left behind again, as we felt we were 15 years ago.”

Both OpenAI and Google made $60 million deals with Reddit that will provide access to a regular supply of real-time, fresh data created by the social media platform’s 73 million daily active users. But those users won’t see a dime. Meanwhile, Google, which was recently found guilty in federal court of using business practices to maintain an illegal monopoly in search engines, is the only search engine able to return results from Reddit because it’s currently the only AI company allowed to scrape its site for training data. Google’s YouTube is also in talks with the biggest labels in the record industry about licensing their music to train its AI, reportedly offering a lump sum, though in this case the musicians appear to have a say, whereas journalists and Reddit users do not.

Yet these deals don’t really solve AI’s long-term sustainability problem, while also creating many other deep threats to the quality of the information environment. For one, if AI models are trained only on data that has been licensed through deals with a handful of the largest, most dominant media and entertainment conglomerates, this in and of itself will exacerbate the distortions to which AI is already inherently prone, such as undercounting outliers and magnifying conventional thoughts and opinions, while reinforcing existing dominance. For another, such deals help to hasten the decline of smaller publishers, artists, and independent content producers, while also leading to increasing monopolization of AI itself. As the AI Now Institute observed, those with the “widest and deepest” data advantages will be able to “embed themselves as core infrastructure.” Everyone else just winds up as vassals or frozen out.

The threat to smaller content creators goes beyond simple theft of their intellectual property. Not only have AI companies grown large and powerful by purloining other people’s work and data, they are now creating products that directly cost content creators their customers as well. For example, many news publications depend on traffic referred to them by Google searches. But now the search monopolist is using AI to create summaries of the news rather than providing links to original reporting. “Google’s new product will further diminish the limited traffic publishers rely on to invest in journalists, uncover and report on critical issues, and to fuel these AI summaries in the first place. It is offensive and potentially unlawful to accept this fate from a dominant monopoly that makes up the rules as they go,” says Danielle Coffey, CEO of the News Media Alliance, which represents more than 2,000 predominantly U.S. publishers. 

And while some U.S. writers and actors won limited protections over how their work can be used by the publishers and studios they work for, following union strikes last year, deep concerns remain about whether and how they will be compensated for the use of their content by AI companies, and whether those AI products will eclipse the need for human authors and actors entirely.

So what’s to be done? Clearly, advances in AI depend critically on humans continuing to create a high volume of new fact-based and creative knowledge work that is not the product of AI. And humans will not create that volume if monopolistic corporations are allowed to use AI technology in ways that strip out all the value created by artists, writers, journalists, and human experts of all kinds, many of whom are already struggling to make a living due to the monopoly power of Big Tech firms. This relationship suggests that both sides need a grand bargain, one that redresses the imbalance of power between human creators and the corporations exploiting their work. 

Some of the inequities can be settled through civil litigation, but that will take years and pit deep-pocketed monopolies against struggling artists, writers, musicians, and small publications. Better enforcement of existing law can also be part of the answer. That means prosecuting AI firms when they violate licensing requirements or violate privacy law by instructing their crawlers to ingest people’s personal information and private data. In France, competition authorities recently fined Google for using news publisher content without permission and for not providing them with sufficient opt-out options.

Competition authorities should also investigate whether data partnerships violate antitrust law. Deals struck by dominant AI firms often contain provisions that could be illegal under long-standing antitrust statutes because they magnify monopoly power. This includes tying the use of one product, like access to content, to exclusive use of another product. An example is Microsoft’s deal with the publisher Axel Springer, which gives Microsoft access to the global publisher’s content but also requires that Axel Springer use Microsoft’s cloud services. Authorities in the United States and Europe are already scrutinizing the anti-competitive effects of this kind of deal when it involves cloud computing; the same scrutiny should be extended to data partnerships.

Some pending legislation could also help. For example, a raft of bills would impose measures making it easier to identify content created by AI and to trace the provenance of data used in generative AI systems. Often the motive behind such bills is to attack the problem of deepfakes and other AI safety concerns. But some of these same measures could also make it easier to track the use of copyright-protected work, allowing a small newspaper, for example, to spot, and prove, when its content is being ripped off. The European Union’s AI Act already requires that AI companies provide information about where they get the content to train their models, allowing “publishers and creators worldwide to peer under the hood of the foundation models.”

But more fundamentally, lawmakers need to look for ways to compel tech companies to pay for the externalities involved in the production of AI. These include the enormous environmental costs involved in producing the huge amounts of electricity and water used by AI companies to crunch other people’s data. And they include the huge and growing societal cost of letting AI companies steal that content from its rightful owners and strip-mine society’s creative ecosphere. 

There are many mechanisms by which government policy could achieve that end as part of grand bargains. Taxes that target AI production could make a lot of sense, especially if the resulting revenue went to shore up the economic foundations of journalism and to support the creative output of humans and institutions that are essential to the long-term viability of AI. Get this one right, and we could be on the cusp of a golden age in which knowledge and creativity flourish amid broad prosperity. But it will only work if we use smart policies to ensure an equitable partnership of human and artificial intelligence.

Train Drain
https://washingtonmonthly.com/2024/10/29/train-drain/ | Tue, 29 Oct 2024

How deregulation and private equity have gutted the U.S. freight rail system—and with it, the promise of America’s industrial renewal.



For all our differences, Americans are remarkably united on one key point. Partisan Democrats and Republicans, business and labor leaders, and just about all the folks sitting in think tanks or on barstools across this great land agree that we must start making more stuff in America.

President Donald Trump, of course, pursued this goal primarily by imposing tariffs and ran for a second term promising to impose far more. The Biden-Harris administration maintained many of Trump’s original tariffs and added some new ones while also creating deep subsidies for many forms of domestic manufacturing, particularly computer chips and green technology.

Responding to these incentives as well as to the supply chain fragilities revealed during the pandemic, many corporations have started building more factories in the U.S. or have announced plans to do so. Some of the deals sound impressive. In March, Intel announced plans to invest $28 billion in the world’s largest chipmaking complex on a site in New Albany, Ohio, for example. 

But overall, the scale of actual and planned manufacturing construction is rather modest. According to an analysis of current trends sponsored by the Commercial Real Estate Development Association, the total number of square feet devoted to major factories is expected to rise by just 6 percent to 13 percent over the next 10 years.

Why so little? Many well-known factors work against the growth of American manufacturing, including shortages of appropriately skilled labor. Permitting requirements and other red tape can also make building a new factory in the U.S. cumbersome and expensive. But even if these challenges were overcome, there is another huge factor threatening the rebuilding of America’s industrial base and transition to a greener economy. 

In order to make stuff in the U.S., you must be able to efficiently transport not just finished goods to consumers but also lots of heavy raw materials and components to your factories. For example, making EV batteries in the U.S. requires making synthetic graphite, and making synthetic graphite requires transporting huge volumes of feeder stock like coal tar or petroleum coke, which are by-products of steel production and oil refining. You can’t use trucks to move that much heavy material for more than short distances—not even trucks running on battery power. You need freight trains. 

Yet our freight rail system is melting down. After being deregulated in 1980, freight railroads merged into a handful of giant monopolistic systems that became highly profitable. Those profits, plus the lack of regulation, in turn attracted financiers who over the past decade have taken control of major railroads and forced them to adopt a new predatory business model. Financiers are maximizing short-term profits and returns to shareholders by effectively liquidating the rail system through radical downsizing and degraded service, all while further damaging the prospects for American manufacturing by charging higher and higher freight rates. 

In order to make stuff in the U.S., you must be able to efficiently transport not just finished goods to consumers but also lots of heavy raw materials and components to your factories.

To deal with their diminished capacity and crew shortages, railroads have frequently imposed embargoes—refusals to accept new traffic—causing freight to stack up in warehouses and in railcars that remain stranded in yards and sidings for days and even weeks. At one point in 2023, millions of chickens faced starvation because of Union Pacific’s failure to deliver animal feed trains on time. Shippers who depend on railroads are often reluctant to voice public criticism for fear of retaliation, but through their trade associations they describe systematic breakdowns in service that prevent them from growing. In September, Jeffrey Sloan of the American Chemistry Council testified before the Surface Transportation Board that railroads threaten the safety and growth in the chemical industry through “excessive rates and charges, unreliable service, and a lack of network resiliency.” 

This state of affairs does not bode well for a manufacturing renaissance. Today’s stripped-down U.S. rail system does passably well at moving Chinese imports from West Coast ports to retailers like Amazon and Walmart. And it still handles large reverse flows of bulk commodities like coal and grain. In this, American railroads have become like the ones the British Empire once constructed in India and other colonies: optimized for importing manufactured goods and for exporting natural resources. But under their current ownership and operating plans, America’s downsized freight railroads are not configured to accommodate any significant reshoring of heavy industry, whether it’s petrochemicals, steel and shipbuilding, or EV batteries and windmills.

Meanwhile, the continuing degradation of the freight rail system is causing many other harms to the public. For example, it’s forcing many shippers to shift from rail to pollution-spewing heavy trucks that cause extensive damage to roads and bridges and that kill or injure a surging number of Americans every year. Freight transportation may seem like a banal subject. But getting it right is key to everything from fixing global warming and improving public health to overcoming dangerous dependencies on geopolitical rivals. 

To understand this problem, we have to know what has caused it. In short, this is a story of what happens when you deregulate an essential piece of infrastructure, allow its managers to extract monopoly prices from captive customers, and then let assorted wolves of 21st-century Wall Street gain control and push the whole system over the edge.

By the late 19th century, even proponents of laissez-faire had come to realize that without regulation railroads threatened to distort the workings of the free enterprise system. In most places, a single railroad held a local monopoly, which it used to extract wealth from the community and retard its economic development. Meanwhile, in places served by more than one railroad, typically large mid-Atlantic cities, the competing carriers often engaged in price wars, giving those places an unfair and unearned economic advantage over rural America and midsize heartland cities. 

In response to these and other inequities, many states began regulating railroads as far back as the 1860s. In 1887, Congress extended railroad regulation to the federal level by passing the Interstate Commerce Act, which forbade railroads from charging more to transport “like kind of property, under substantially similar circumstances and conditions, for a shorter than for a longer distance over the same line.” 

It would be many years before the Interstate Commerce Commission overcame resistance from reactionary judges and gained full statutory power over railroad rates. But by the early 20th century it was effectively setting the industrial policy of the United States by controlling how much railroads could charge for shipping different kinds of products to different places. The ICC also gained the power to end the widespread, market-distorting practice of railroads offering discounts or rebates to powerful shippers, such as John D. Rockefeller’s Standard Oil monopoly.

Guiding the ICC’s approach to regulation was a principle known as “common carriage,” which is akin to what we today call “net neutrality” in debates over internet governance. Shippers, regardless of their market power, had to pay roughly the same price per ton and per mile for transporting the same kinds of goods for the same distance. Where railroads lost money on low-volume and high-cost services, such as serving a grain mill at the end of a branch line used by local farmers only at harvest time or running local passenger trains, they were expected to make up the difference from their high-profit routes and lines of business.  

The bureaucratic processes the ICC used to enforce a neutral, public rate structure were often protracted. It had to determine, for example, what should be the relative price of shipping a hog versus a ham 50 miles. Yet this regulatory structure was nonetheless critical to ensuring fair terms of competition between different businesses and different places and thereby helped launch America as a broadly prosperous economic powerhouse. 

Many other resource-rich countries, like Argentina and Australia, were slow to develop domestic industries in the late 19th and early 20th centuries. The U.S., by contrast, not only took off early but also developed in a way that distributed growth to midsize cities across its hinterlands. Rural America was also unusual in that it supported a large class of free-holding, independent farmers who had access to distant markets. What explains the difference? In a recent essay, Noam Maggor, a historian at Queen Mary University of London, documents one often overlooked factor: the great and unusual advantage the U.S. gained by regulating railroads in ways that preserved equal pricing and terms of service across all regions. 

Starting in 1935, the ICC began applying the same principles to interstate trucking. This was critical to ensuring that railroads and truckers competed under the same rules. By limiting the number of commercial interstate truck licenses and the freight markets individual truckers could serve, the ICC also preserved at least the possibility of each mode being put to its optimal use, such as using trucks for local and expedited regional deliveries, and fuel-efficient railroads for longer hauls. 

All this went by the board just as the country faced an energy crisis and a growing environmental movement. In 1980, President Jimmy Carter signed legislation that stripped the ICC of nearly all its power to regulate rail rates and substantially reduced common carriage requirements as well. That meant railroads were free to offer secret discounts to powerful shippers just as they once did for Standard Oil. It also meant that railroads could exercise virtually unrestrained pricing power over what’s known in the industry as “captive shippers”—that is, mines, factories, or refineries that are served by a single railroad and that are unable to use trucks as a substitute because of the bulk and weight of what they need to move. And finally, deregulation meant that railroads could simply refuse to serve individual shippers and whole communities, as they were prone to do so long as they could make more money pursuing different lines of business. 

At the time, many policy makers saw deregulation as the answer to a pressing problem. The decline of manufacturing in the Northeast and the industrial Midwest was causing many railroads to fail, raising the specter of the government having to engage in a permanent takeover. Policy makers in both parties hoped that deregulation would help them avoid that scenario. Many were also influenced by sustained ideological attacks on regulation mounted in that era not only by “free market” conservatives but also by many Democrats worried about inflation and U.S. competitiveness. The consumer activist Ralph Nader and liberal Democratic Senator Ted Kennedy were united in their calls for abolishing ICC regulation of both railroading and trucking, just as they had successfully worked two years before to dismantle regulation of airlines.

At first, the plan seemed to work. The bankrupt railroads in the Rust Belt merged into a regional monopoly called Conrail, which then became highly profitable by abandoning unprofitable or low-margin lines and customers, and by taking advantage of its ability to price-gouge captive shippers. The federal government even made money on the emergency loans and equity infusions it had once used to get Conrail started. Other privately owned railroads also became highly profitable by merging and taking advantage of their freedom from price regulation and common carriage obligations. But before long, the railroads’ monopoly power began to attract the attention of financiers, and that’s when the real destruction began.

Over the past decade, “activist” investors have gained effective control over most rail management. Their first order of business was to install new managers who would drive up short-term profits through ruthless cost cutting and downsizing. The most notorious of these was the hard-charging railroad executive E. Hunter Harrison, who pioneered a new business model called “precision-scheduled railroading,” or PSR, that has now been almost universally adopted by the nation’s major railroads. Despite its label, PSR has little to do with running trains on time. Instead, it mostly involves the common private-equity playbook of driving up share prices by downsizing workforces, selling off assets, and degrading services while raising prices. It is estimated that Harrison, who was affectionately known as the “trains whisperer” by Wall Street, created $50 billion in increased returns to shareholders during his tenure as CEO of four major North American railroads. 

Using this strategy, railroads have cut expenses to the point that they now spend as little as 60 cents for every dollar in revenue they take in. But this has come at great expense to individual shippers and increasingly to the economy as a whole. Railroads have raised the cost of shipping by rail nearly 30 percent faster than the rate of general inflation since 2004, with captive shippers facing especially steep price hikes and other added fees. Meanwhile, railroads have cut their workforces by approximately 30 percent since 2014, scrapped thousands of locomotives and freight cars, and abandoned many branch lines and low-margin lines of business—all while also running longer and longer trains on less and less track to fewer and fewer places less and less often.

After Harrison brought the PSR business model to the rail giant CSX in 2017, the effects on shippers were devastating. A typical complaint came from the Charles Ingram Lumber Company, of Effingham, South Carolina, a third-generation, family-owned company that shipped approximately 130 million board feet of lumber made from southern yellow pine per year—or tried to. The company’s troubles began when CSX, which controls roughly half of all rail infrastructure east of the Mississippi River, decided to tear up the low-volume branch line leading to the Effingham mill. This forced the lumber company to use trucks to reach the nearest remaining railhead, seven miles away. Then CSX began refusing to provide the company with enough railcars to fill its orders or to provide timely pickup, forcing the company to tie up $750,000 in inventory that it could not get delivered to its furious customers. These days, as railroads continue to downsize their rolling stock, many shippers who want to use rail cannot even count on railroads to supply the freight cars they need and wind up having to buy or lease their own, driving up the incentive to switch to trucks whenever possible. 


Railroads have also been purposely driving away customers by eliminating direct service. For example, in 2017 CSX announced that as part of its new PSR operating plan, it would no longer handle movements of containers between 327 different city pairs. It followed up in 2018 by eliminating intermodal service between another 230 city pairs, including all service to Detroit and service between Miami and such major cities as Baltimore, Philadelphia, Memphis, Nashville, and Cincinnati. During an earnings call with investors, CFO Frank Lonegro made clear that CSX’s goal for its intermodal business was to “take 7 percent of the volume off the railroad intentionally every year, because we shouldn’t be doing that kind of work.” 

Why brag to stock analysts about losing customers? By shedding lower-volume, lower-margin business, CSX and other railroads free up their diminishing track capacity and shrinking train crews for more lucrative business. For instance, hauling lumber from a single mill down a weedy branch line is a low-margin business because it involves comparatively low volume and a higher handling cost. For the same reason, moving containers less than 500 miles between midsize cities is also less lucrative. By contrast, hauling large volumes of grain or coal long distances for shippers who cannot economically use trucks is a high-margin business, as is hauling containers full of merchandise made in China from the West Coast ports to major East Coast metro areas. So railroads optimize their systems to handle the latter two kinds of movements while demarketing anything less profitable. 

Where railroads cannot fully get rid of lower-margin business they find ways of at least discouraging it. For example, every day in Chicago, thousands of containers bound to and from small and midsize cities are unloaded from trains, then “rubber-wheeled” across the city by trucks and reloaded onto other trains for the rest of the journey. The process adds between $200 and $400 to the cost of each container, and is also very time consuming. Faced with this kind of delay and inefficiency, many shippers just switch to trucks exclusively. 

Why don’t railroads just offer direct through services between midsize cities? Because they don’t want to displease their shareholders by using their limited track capacity for lower-margin, short-haul business. And they don’t want to invest in new capacity, as that would leave less money available for rewarding today’s shareholders with dividends and stock buybacks even if it would help grow new business in the long run. 

The payouts to shareholders have been extraordinary. From 2010 to 2021, railroads spent an astounding $183 billion on dividends and stock buybacks, which is far more than the $138 billion they spent on their infrastructure. It was also far more than the cash flow railroads had available even after reducing operating costs to the bone and liquidating assets. To make up for the gap, railroads have deferred maintenance of their tracks and other capital assets while taking on more and more debt.

Take Union Pacific, for example. It operates the original “transcontinental railroad” that linked Omaha with the West Coast in 1869. Today, through a series of mergers, it also controls roughly half the rail infrastructure west of the Mississippi. Though it uses the motto “Building America,” mostly what it builds today are returns to shareholders, due to its capture by hedge funds. In 2021, for example, UP spent nearly $4 billion more on dividends and stock buybacks than its cash flow provided, which is akin to paying more in rent each month than your paycheck leaves after taxes. Meanwhile, because of shrinking capacity and crew shortages, UP declared more than 1,000 embargoes in 2022 alone, thereby aggravating supply chain bottlenecks and inflation.

Yet Union Pacific’s downsizing was still not enough to satisfy Soroban Capital Partners, a hedge fund that in early 2023 used its $1.6 billion stake in the railroad to demand that UP sack its CEO and replace him with someone who could wring out still higher returns to shareholders. As a result of this ongoing pressure to maximize profits, UP has had less money for keeping up its track and other physical assets, which in turn has resulted in a tab of $100 million for deferred maintenance. Succumbing to hedge fund demands has also encumbered the company with soaring financial debt. From the end of 2017 to the end of 2023, the burden of UP’s debt soared from 1.6 times its net income to 5.1 times. 

Similarly, from 2016 to 2023, Norfolk Southern, which along with CSX controls most rail infrastructure east of the Mississippi, spent $8.4 billion more on dividends and stock buybacks than its cash flow provided. The money likely would have been better spent on safety and operational maintenance. After the 2023 wreck of a Norfolk Southern train in East Palestine, Ohio, that sent a mushroom cloud of toxic chemicals across 16 states, federal investigators laid blame on the company’s insufficient investment in wayside sensors that could have detected a failing roller bearing. Norfolk Southern subsequently tried to restore investment in improving service and attracting new customers, but once it began to do so, it was attacked by hedge funds demanding that its managers either boost returns to shareholders or step down. In response, the railroad has recently hired a Hunter Harrison protégé to help it more fully implement PSR.

The breakdown of the freight railroads has deep consequences that go beyond America’s industrial potential. In a 2021 speech, Martin J. Oberman, then chairman of the Surface Transportation Board, noted that there would be 1 million fewer trucks on the road and 8.2 million fewer tons of carbon in the atmosphere if railroads had not ceded so much market share to trucks over the previous two decades. 

And that’s hardly a full accounting. Trucks are much less energy efficient than trains, mostly because rubber tires rolling on asphalt create a lot more friction than steel wheels rolling on steel rails. Railroads can carry a ton of cargo for 472 miles on a single gallon of diesel fuel, making them at least three times more fuel efficient than trucks. Trucks also cause almost five times more fatalities, and almost a dozen times more injuries, for every ton of cargo they carry. And they, of course, impose costs on the rest of us in the form of traffic congestion and damage to publicly maintained roads and bridges. Taking into consideration all these direct and indirect costs, the Congressional Budget Office calculated in 2014 that the societal cost of moving a ton of freight by truck was about eight times higher than moving it by rail. 

The high environmental cost of relying on trucks will remain even if all trucks in the future run on batteries charged with renewable energy. The consulting firm Oliver Wyman has calculated the effects of converting the nation’s entire truck fleet to EVs while continuing the current trend of moving an ever smaller share of freight by rail. Because trucks are so much less energy efficient than trains, under this scenario the U.S. would have to generate an extra 230 terawatt hours of electricity by 2050. To produce that much electricity using solar power would require covering 800 square miles of land with solar panels. 

What can be done? Going forward, we minimally need to find a way to loosen the monopoly power of railroads over captive shippers. So long as manufacturing in America entails being at the mercy of a predatory rail monopoly, both foreign and domestic companies have powerful incentives to avoid building new manufacturing facilities here. The Biden-Harris administration took a step in the right direction by supporting new “reciprocal switching” regulations that will, in some egregious cases, give some shippers more options in which railroads they use. But the many deeper problems remain. 

Tackling those will probably take some combination of reregulation and direct public investment in rail infrastructure. Regulation needs to ensure that railroads still have obligations as common carriers, meaning that they must not engage in price discrimination and purposeful demarketing of all but the most profitable lines of business. Regulation also needs to be extended to and harmonized with other transportation modes so all modes can compete on even terms and in ways that serve the public interest. This means setting rules of competition that favor trucking for short hauls and express service but that shift as much freight as is feasible to safer, more efficient, less environmentally destructive trains. 

Enacting this kind of regulation would probably cause most vulture capitalists to quickly sell off their stakes in railroads because it would severely reduce the opportunities for monopoly pricing in the sector. That would leave the problem of how railroads can attract the capital they need to serve shippers and communities that don’t necessarily offer opportunities for earning high returns.

Some of this challenge can be met, as it was under the ICC, by using common carriage and price regulation in ways that effectively create cross-subsidies. Regulation can force railroads, for example, to use some of the high profits they earn from hauling coal on heavily trafficked mainlines to support lower-margin service, such as hauling boxcars or containers along branch lines or between midsize cities. But in some instances, direct government support may be needed to achieve these ends. This could include direct subsidies to private owners so long as they are attached to clawback provisions for poor performance, or it could include public ownership of specific rail infrastructure needed for passenger service and lower-volume freight. 

As during the era before America deregulated its freight transportation sector, finding the right mix of policy tools will not be easy. But at a time when we need a vital, efficient, and equitable freight system more than ever to achieve key national purposes, we cannot afford to ignore the challenge.

The post Train Drain  appeared first on Washington Monthly.

]]>
The App Always Wins https://washingtonmonthly.com/2024/10/29/the-app-always-wins/ Tue, 29 Oct 2024 23:21:49 +0000 https://washingtonmonthly.com/?p=155926

Online sports gambling companies use sophisticated and deceptive techniques to exploit problem gamblers. The same technologies could be used to protect the addicted, if government would only demand it.


]]>

It’s hard to say exactly what prompted me to start betting on the National Football League last year, when I was a sophomore in college. Maybe I wanted to justify the countless hours spent watching the RedZone channel each Sunday by turning it into some profitable side hustle. Maybe it was the deluded arrogance that my intro to stats class had equipped me with the tools to build a winning predictive model, a belief that somehow I could find an edge. Or maybe—and I think I knew this at the start—I was just avoiding the rapidly impending need to choose a career, and the trade-off between making a living and following one’s passion that comes with it. 

I judiciously created a series of rules for my betting: I would only gamble a set amount of money, only at certain times, and only when I had some angle that made the bet worthwhile—and I would bank half my winnings each week. The season started off great; I turned $500 into $5,000 in under five weeks. Confident that I’d cracked the code, I convinced a friend to join in, pitching it as a smart way to put his idle cash to work. (“Do you think rich people just let their money sit around?”) Then, predictably, I broke every single rule and lost nearly all of our money. I started chasing my losses, upping my bets after bad weeks to try to claw our money back. When the season mercifully ended, I had nothing to show for it except the unsettling realization that I desperately wanted to continue gambling. 

I wasn’t alone. Last year, 73.5 million Americans legally bet on the NFL—a staggering 58 percent increase from the year before. Since the 2018 Supreme Court ruling in Murphy v. National Collegiate Athletic Association overturned a federal ban on sports betting, the U.S. has undergone the most rapid expansion of legal gambling in its history. A 2022 New York Times investigation into the legalization of the industry, which examined thousands of documents and conducted extensive interviews, revealed a systematic state-by-state effort to lavish legislators with financial favors in exchange for regulatory regimes light on oversight and heavy on tax breaks. 

In just six years, sports betting has spread to 38 states, with revenues soaring 25-fold. Last year alone, Americans legally wagered nearly $120 billion. Universities, media companies, and the major sports leagues—longtime gambling opponents—have all signed lucrative contracts to evangelize for various sports betting platforms. Sports have become so parasitically suffused with grating gambling propaganda that the comedian Conan O’Brien quipped in 2022, “I haven’t seen an online sports betting ad in almost seven minutes. Am I dead?” Industry analysts predict a continued rosy outlook, projecting $45 billion in annual revenue as the market matures. 


The danger in all of this is that the house only wins when you lose. Unlike most other industries, which purport to offer a win-win transaction in which both the consumer and the producer are better off than before, the gambling industry wants you to lose as much money as possible in as little time as possible. Three new studies, employing various methodologies and examining different aspects of the betting industry, all come to the same obvious conclusion: Sports betting’s meteoric rise has occurred at the direct expense of consumers’ financial health.

One study conducted by researchers at Southern Methodist University, Emory University, and the University of California at San Diego examined consumer spending on sports betting and found that a staggering 43 percent of bettors exceeded responsible gambling guidelines—defined as betting no more than 1 percent of monthly income—during the average gambling month. The effects of legalization were particularly acute among low-income individuals; the post-legalization increase in the share of bottom-tercile earners spending 10 percent of their income on gambling was five times greater than the increase among top-tercile earners.

A second study, coming from Northwestern University, the University of Kansas, and Brigham Young University, tracked the effects of sports betting on broader economic activity. Contrary to industry claims that legalization would simply redirect spending from illicit gambling or other forms of luxury spending, the study found that “legalization reduces net investment by nearly 14 percent … $1 of betting reduces net investment by just over $2.” (Net investment refers to equity investments.) Again, low-income customers fared worse, cutting net investment by a much higher 41 percent.

The final study, conducted by researchers from the University of Southern California and the University of California, Los Angeles, quantified legalization’s effects on statewide financial health indicators. States with legalized sports betting experienced a small but significant decrease in the average credit score, a decline three times larger in states that allow online betting. Even more alarming, states with online access to betting saw a 28 percent increase in the likelihood of bankruptcy. As Brett Hollenbeck, the lead author, told me, “We didn’t expect to find large effects for the average person because most people are not gambling at all … the fact that we could find any effects suggests that the impacts are quite severe for those who are affected.”

Charles Fain Lehman recently argued in The Atlantic that this new wave of studies justifies banning sports betting altogether. “Unlike regulation—which is complex, hard to get right, and challenged by near-certain industry capture of regulatory bodies—prohibition cuts the problem off at the root.” However, there are three major challenges with this argument. First, after the Murphy decision, a national ban is practically impossible, meaning prohibition would have to happen on a state-by-state basis—a highly unlikely prospect given the recent legalization efforts and the gambling industry’s ability to influence state lawmakers. Second, sports betting bans are deeply unpopular: Nine in 10 Americans view betting as an acceptable form of entertainment, and 75 percent support legal sports betting in their home state. Lastly, people will gamble whether it is legal or not. While legalization certainly increases the number of people who participate, Americans already spend more money on gambling every year than on concerts, plays, movie theaters, sporting events, and all forms of recorded music, combined. Before Murphy opened the door to legal sports betting, offshore operators dominated the market, and they still account for two-thirds of total bets, according to the gambling analysis firm Yield Sec. Criminalizing sports betting would only push the industry further underground, creating a dangerous illicit market that preys on the most vulnerable. Before I turned 21, many of my friends and I used offshore operators; the scope of options was endless (my friend liked to bet on Japanese baseball), and it’s not unheard of for sites to extend six-figure lines of credit to people who can’t possibly afford to pay it back.

Legalization’s failure isn’t necessarily that it exists—it’s that right now, it’s even more damaging than the illegal industry. The sports betting world has developed highly advanced technology, not to safeguard its most vulnerable users, but to target them and drive them to gamble even more. Thankfully, with sufficient political will, most of the industry’s tactics and technology could be repurposed to protect consumers instead of exploiting them for profit. But without safeguards to prevent problem gambling, the industry is incentivized to pursue an addiction-for-profit business model—exactly what we’re seeing unfold today.

When I told a friend about some of the tragic stories I encountered while researching this piece, she responded, “That’s really sad. But also, like, how can you be so stupid?” Even though public opinion toward addiction has broadly liberalized, roughly half of Americans still believe that moral weakness contributes to problem gambling. The attitude benefits the gambling industry, of course—focusing on individual agency emphasizes a narrow sense of personal responsibility that shields other actors from scrutiny. 

Blaming gambling disorders on stupidity or a lack of willpower is a view emphatically rejected by the medical community. Starting in 1980, the American Psychiatric Association added “pathological gambling” to its Diagnostic and Statistical Manual of Mental Disorders (DSM). Today, the DSM-5 classifies gambling disorder as a behavioral addiction—the only behavioral addiction the DSM currently recognizes. 

The casino industry skirted around most regulation and liability concerns by endorsing the medicalization of excessive gambling while strategically focusing the spotlight on individual pathology. As Shannon Bybee, a former casino executive and the first president of the Nevada Council on Problem Gambling, explained, “Failure to resist impulses to gamble means to me that the problem—and the solution—is found within the individual.” To that end, the industry created the National Center for Responsible Gambling, a research institute with the mission of identifying “an objective measure—a blood test, maybe a genetic marker, saying this person is predisposed to addiction.” The industry’s blinkered approach implies that gambling itself is not inherently corrupting; rather, certain individuals are born corrupted. For everyone besides those unlucky individuals, gambling products are safe to use. 

The betting industry is currently doubling down on the casino approach. DraftKings claims that “the casino … doesn’t cause problem gambling any more than a liquor store would create alcohol problems.” The Guardian recently uncovered through a freedom of information request that FanDuel objected to limits on advertising to problem gamblers by arguing that it was “analogous to a liquor store not being able to advertise to customers who ‘may be’ alcoholics.” In this telling, the industry bears no responsibility—people join their platforms either problem gamblers or not. 

The industry’s framing fosters an incomplete and misleading picture of both addiction and its own involvement in it. Lia Nower, director of the Center for Gambling Studies at Rutgers University, rejects the industry’s notion that problem gamblers are a homogenous group that could be identified with something like a blood test. She developed a classification system for problem gamblers with three “pathways”: behaviorally conditioned, emotionally vulnerable, and antisocial impulsivist gamblers. Nower writes that “the behaviorally conditioned subgroup is characterized by the absence of psychopathology.” In other words, this subgroup of problem gamblers develops issues through repeated exposure and continued participation in gambling activities, not because of any biological predisposition to addiction. Surprisingly, this subgroup is the largest of the three, accounting for 44 percent of problem gamblers and directly refuting the industry’s stance. These findings echo those of an independent federal commission in Australia, which, after assessing the nation’s legalization efforts, determined that problem gambling stems just as much from the design of the gambling technology as from any individual consumer behavior.

Perhaps the largest consequence of legalization is that 90 percent of betting is now done online. Simply by making gambling both ubiquitous and frictionless, sports betting platforms are almost certainly exacerbating problem gambling. A 2005 gambling study found that living within 10 miles of a casino led to a 90 percent increase in the odds of becoming a pathological gambler. But even a 10-mile drive represents significantly more friction than betting requires today. When I spoke with Nower about online sports betting, she explained, “With rampant gambling on phones, 24 hours a day, there is a large proportion of people who will develop problems because they have access, and they can do this all the time.”

By adhering to their narrow conception of gambling disorders, which claims that technology plays virtually no role, betting platforms insist that this won’t be the case. On the DraftKings website, they emphasize that only “1 percent of U.S. adults are estimated to meet the criteria for a severe gambling problem … research also indicates that most adults who choose to gamble are able to do it responsibly.” What DraftKings conveniently omits is that the rates of problem gambling among its customers are far worse. Recent research indicates that the rate of gambling problems among online sports bettors is at least twice as high as among gamblers in general, and, shockingly, 30 times more than the population average.

Raymond Estefania, a psychotherapist and addiction specialist with almost 30 years of clinical experience, told me he’s never seen anything like the rise of online sports betting. “The way gambling is being done today is new. It’s automated, it’s online, you can do it right on your phone. It’s become so accessible. We’re going to end up seeing a huge spike in the number of people who end up experimenting and who end up developing a problematic relationship with gambling.” In essence, betting on a phone combines the dopamine-driven instant gratification of social media with the inherently compulsive nature of gambling to create a perfect storm for addiction. 

The problem with gambling goes well beyond its sheer ubiquity, however. Every aspect of the platform has been designed to wring the most value out of each user—by being as addictive as possible. The industry’s insidiousness is that while it claims technology has no impact on addiction, it has poured huge sums of money into optimizing for exactly that. Estefania agrees: “Sports betting platforms are absolutely part of the problem, and they do a lot of things to bait people and get them to engage in gambling. The scale of it just might shock you.” 

For most businesses, the best customers are the most knowledgeable and passionate—the tech enthusiast who buys every accessory along with the latest computer, for example. But in the gambling industry, the most valuable customers are the “whales”—those who think they know the game but don’t, consistently losing large sums of money. Although 96 percent of sports bettors lose money, more than half of the industry’s profits come from just 2.6 percent of its customer base. This dynamic creates the predatory nature of the industry; there is no avoiding the overlap between whales and problem gamblers or between industry profitability and the financial ruin of its customer base. 

These incentives drive the industry’s two-pronged strategy: First, create a platform that maximizes the chances of some users spending excessively; second, leverage detailed customer data to eliminate the winners while encouraging the losers. In the internet economy, the first goal of any enterprise is attentional capture. Sports betting platforms have been relentlessly gamified to maximize engagement. By making betting itself a game, the platforms give users the illusion of winning even when they’re losing tons of money. Point systems that track user activity, rewards for the number of bets per week, badges to commemorate win streaks, leaderboards to compete against friends and strangers, and constant push notifications alerting users to time-sensitive betting opportunities are all designed to keep users engaged and continuously betting.

The goal of gamification is to induce a state of “flow”—a deep concentration where users become so absorbed that they lose self-awareness and track of time. In her fantastic book Addiction by Design, Natasha Schüll describes how slot machine players enter a “zone” of suspended animation. Unlike a positive flow state, the zone depletes one’s mental and financial resources, untethering gamblers from the reality of their time and money losses.

Another crucial part of platform design is figuring out how to get customers to make the least profitable types of bets. The University of Nevada’s Center for Gambling Research found that while the house keeps roughly 5.5 percent of the money wagered on a typical sports bet, it keeps roughly 31 percent on same-game parlays (SGPs)—which combine multiple wagers into one. For bettors to win, all individual wagers within the parlay must succeed; otherwise, they lose everything. This type of betting has only recently been made possible by algorithmic advances, as betting platforms can now simulate individual games thousands of times to quantify the correlation between different in-game bets. The industry and media aggressively promote these bets with their lottery-size returns; when a $20 parlay with odds of 29,000 to 1 hit, it was hailed as the “greatest bet of all time.” As DraftKings CEO Jason Robins explained, “What we are trying to do is … [make] sure that we have a high parlay mix because people like that. Every quarter, the parlay as a percentage of the total bet mix goes up.” DraftKings’ then CFO Jason Park agreed: “Parlay mix is really the silver bullet.” And it’s working; parlays accounted for more than 60 percent of all sports bets placed last year in Illinois, which releases the most detailed data. Two years ago, parlays accounted for only 20 percent of sports bets in that state.
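
The gap between 5.5 percent and 31 percent is mostly arithmetic: the house’s cut compounds with every leg. The sketch below is ours, not the Center’s, and assumes for illustration that every leg is a 50/50 outcome priced at the standard -110, with the parlay’s quoted odds simply multiplying the legs’ odds.

```python
# Why parlays hold so much more than single bets: a minimal sketch.
# Assumptions (ours, for illustration): every leg is a 50/50 outcome
# priced at -110 (risk $110 to win $100), and the parlay's quoted
# odds are the product of its legs' decimal odds.

def hold(true_prob: float, decimal_odds: float) -> float:
    """Share of each wagered dollar the book keeps, on average."""
    return 1 - true_prob * decimal_odds

LEG_ODDS = 210 / 110          # -110 in decimal form, about 1.909
print(hold(0.5, LEG_ODDS))    # ~0.045: about a 4.5 percent cut on one leg

for legs in (2, 3, 5):
    parlay_odds = LEG_ODDS ** legs   # quoted odds multiply...
    true_prob = 0.5 ** legs          # ...and so do the true probabilities
    print(legs, round(hold(true_prob, parlay_odds), 3))
# The cut climbs toward roughly 9, 13, and 21 percent as legs are added.
```

No single leg looks unfair on its own; the multiplication does the damage.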

Sports betting platforms have also made possible “live betting,” in which users can bet on games in real time. This form of betting is “incredibly dangerous,” according to Nower. “There’s a huge amount of impulsivity because you’re in this enhanced emotional state.” Short feedback loops between stimulus and reward have also proved to be more reinforcing. “It’s not just betting on the outcome; it’s placing bets as you go on what could happen next. There’s an addictive component where you lose the last bet and then you place another bet to make up for it,” Hollenbeck warned. And yet, companies like BetMGM have made it a “core theme.” Last year, BetMGM added the ability to live-bet player props—like how many points LeBron James will have in the second quarter—which caused live betting to grow more than 160 percent year over year.

After making betting as addictive and financially damaging as possible, the industry then seeks to identify which customers have been affected the worst. “It was originally the casino industry that pioneered this business model, observing every movement of the players and focusing on the most addicted, giving them rewards to keep them coming back,” explains Wolfie Christl, a Vienna-based data rights activist with whom I spoke to better understand the industry’s surveillance techniques. Last year, Christl published a report that turned the tables on the betting industry by tracking their tracking of him.

Christl first created an account on Britain’s largest online gambling platform, Sky Betting & Gaming (SBG). After using the website 37 times, Christl had recorded 2,154 unique data transmissions to 44 companies. The majority of the data flowed to surveillance tech firms like Signal and Iovation. These firms then build a customer profile with at least 186 different attributes, including details like a player’s frequency of gambling, number of “free” bets used, favorite sport to bet on, and email open rate. SBG takes the data and runs it through algorithms that calculate variables like each customer’s lifetime value, specific products they might use, and even something called the “winback margin,” which refers to how much money SBG should spend trying to win a customer back. The reason for it all, as Christl explained, is to “personalize the marketing.”

In a surprisingly candid moment during an earnings call in 2022, DraftKings CEO Robins boasted, “There’s a lot of data science and customization at a player level to serve up parlays—Same Game Parlays to the customers that we think have proclivity to engage with that type of bet.” In other words—actually, in their own words—the sports betting industry is currently identifying who the most problematic gamblers are and then targeting them with tailored ads that encourage further spending on their platforms. Christl views this as predatory: “This type of surveillance is specifically dangerous when an industry is able to destroy lives.” Not only that, but the surveillance of online gambling platforms far exceeds what traditional casinos could have dreamed of; the ability to track users through their phones grants the companies access to a continuous stream of data to optimize for further gambling.

Anyone who still thinks it’s possible to earn money on these sports betting apps should read The Wall Street Journal’s analysis of the industry’s use of surveillance practices to identify and ban winners. Although companies do not disclose how often they exercise this power, they are transparent about the practice. Speaking at Goldman Sachs in 2022, Robins said, “What we are trying to do is get smart at limiting the sharp action.” Or, as he noted a year earlier, “This is an entertainment activity. People who are doing this for profit are not the players we want.”

The notion that the sports betting industry bears some responsibility for problematic gambling behavior should be uncontroversial. Yet American culture often struggles to reconcile the neoliberal ideal of consumer sovereignty—the notion that consumers are the best judges of their own welfare—with the stark reality of harmful and abusive relationships developing with certain products. History shows that when harm becomes undeniable on a large enough scale, change follows, as has happened in the tobacco and opioid industries. There are now hopeful signs of change within the social media industry, as well. 

Rather than wait for a full public health crisis to unfold, the federal government should enact comprehensive legislation based on the harms documented in recent studies. The clearest implication of the evidence is that online sports betting is very harmful, but it is safe to assume that the industry could muster the political capital to defeat what would essentially be a return to a prohibition on sports betting. If the industry is going to remain online, then federal regulation is clearly needed.

Although the Supreme Court won’t allow Congress to ban sports betting outright, it should uphold a law granting the federal government the authority to regulate it. In Murphy, the Court ruled that the Tenth Amendment prevents Congress from directly regulating state legislatures. But, importantly, it didn’t give sports gambling any special constitutional protection under standard federal regulation. While Congress can’t directly regulate states, it frequently regulates industries and private companies. Take, for instance, Moyle v. United States, a recent case in which the Biden administration argued that the Emergency Medical Treatment and Labor Act (EMTALA) preempted Idaho’s abortion ban provision that omitted protections for the health of the pregnant woman. EMTALA doesn’t regulate states directly; rather, it ensures that hospitals comply with federal regulations. A similar approach should be applied to sports gambling—federal regulations imposed on corporations shouldn’t run afoul of Tenth Amendment concerns. While betting on this radically anti-regulatory Court is, well, a gamble, it’s certainly a chance worth taking.

Congress should establish a gambling regulatory agency, analogous to what exists in nearly every European country where gambling is legal. The primary objective of that agency should be to hold the sports betting industry liable for problem gambling that occurs on their platforms. A story in the December 2016 issue of The Atlantic followed the tragic case of Scott Stevens, a compulsive gambler who took his own life. His family tried to sue the casinos for their role in fueling his addiction, but American courts have been notoriously resistant to such claims, because no such regulation exists. Other countries, however, demonstrate the benefit of a more proactive approach. In Sweden, for instance, “The general starting point of the law is that a license holder shall protect its players from excessive gambling and actively monitor and follow up in order to help players reduce their gambling when there is reason for it,” according to The International Comparative Legal Guide. Sweden has successfully used that law to sue multiple gambling companies, demonstrating the promise of robust regulation. The goal of similar legislation in the U.S. wouldn’t be to ban sports betting any more than airbags ban driving—it would simply make it safer.

Three areas in particular stand out in need of reform. 

Surveillance

The sports betting industry is carefully monitoring its users—but for profit, not protection. Instead of using data to calculate a user’s lifetime value to a company, companies should be required to use the same data to identify and address users’ risk for problem gambling. 

Researchers using machine learning have been able to successfully identify players exhibiting signs of problem gambling. “By identifying key variables that measure the intensity of gambling, such as the number of bets placed and the frequency of betting sessions, we can easily detect the group displaying problem gambling,” researchers from the Corvinus University of Budapest write. Private market solutions also exist, like the British company BetBuddy, which sends personalized, targeted messages to potential problem gamblers to help them understand their own behavior.
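
The researchers’ actual models aren’t spelled out here, but the shape of the idea is simple enough to sketch. Everything below is hypothetical: the inputs mirror the “intensity” variables the researchers name, while the weights and cutoff are invented for illustration.

```python
# Hypothetical risk-flagging sketch. The inputs follow the "intensity
# of gambling" variables the researchers describe (bets placed,
# frequency of sessions); the weights and threshold are invented.
from dataclasses import dataclass

@dataclass
class PlayerWeek:
    bets_placed: int
    sessions: int
    night_sessions: int  # sessions between midnight and 6 a.m.

def risk_score(p: PlayerWeek) -> float:
    """Crude linear score; a real system would learn weights from labeled data."""
    return 0.02 * p.bets_placed + 0.1 * p.sessions + 0.3 * p.night_sessions

def flag_for_outreach(players: list, threshold: float = 2.0) -> list:
    """Accounts whose score crosses the cutoff get a check-in message."""
    return [p for p in players if risk_score(p) >= threshold]

casual = PlayerWeek(bets_placed=10, sessions=3, night_sessions=0)    # score ~0.5
intense = PlayerWeek(bets_placed=60, sessions=14, night_sessions=4)  # score ~3.8
print(len(flag_for_outreach([casual, intense])))  # only the intense account is flagged
```

The point is not the particular weights but the plumbing: the same behavioral data the platforms already collect for marketing could drive protective outreach instead.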

Transparency 

The probability and expected value of betting are poorly understood by the public. To rectify that, gambling platforms should clearly display the expected value of every bet.

For example, imagine a user makes a bet that they feel has a 30 percent probability of occurring. The odds they see are +150, which means that a $100 wager pays a $150 profit if it wins. That appears good! But the expected value of that $100 bet (the probability of winning × the amount won − the probability of losing × the amount lost) nets out to −$25. For wagers like SGPs or live bets, the expected values are almost always terrible. But they appear deceptively lucrative because their associated probabilities are so small.
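
For readers who want to check the arithmetic, here is the same calculation in a few lines of Python; the conversion from American odds is standard, and the helper names are ours.

```python
# Expected value of a bet quoted in American odds, per the worked
# example above: a 30 percent shot at +150 loses money on average.

def profit_if_win(american_odds: int, stake: float = 100.0) -> float:
    """Profit on a winning bet: +150 pays $150 profit on a $100 stake."""
    if american_odds > 0:
        return stake * american_odds / 100
    return stake * 100 / -american_odds

def expected_value(win_prob: float, american_odds: int, stake: float = 100.0) -> float:
    win = profit_if_win(american_odds, stake)
    return win_prob * win - (1 - win_prob) * stake

print(expected_value(0.30, +150))  # -25.0: an average loss of $25 per $100 wagered
```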

Previous research with food labels indicates the potential of such a reform. A meta-analysis found that labels reduced unhealthy dietary choices by 13 percent. Even more promising, labels caused the food manufacturers to create healthier products, reducing sodium content by 9 percent and trans fat content by 64 percent. 

Deposit Limits 

Users should be restricted to depositing only a certain amount of money each month. This kind of gambling regulation could effectively cap the financial damage—something that isn’t as easily achievable with other addictions. Three states already enforce monthly deposit limits: Massachusetts caps deposits at $1,000, Tennessee at $2,500, and Maryland at $5,000. These limits seem entirely reasonable; no betting platform should be extracting more than $12,000 a year from any individual user. 

The betting industry would likely argue that mandatory deposit limits are unnecessary since their platforms already offer voluntary limits. But that claim is misleading. A quick look at user experiences makes it clear just how ineffective the industry’s current “Responsible Gaming” guidelines really are. Despite the industry’s incessant marketing, you rarely hear about these limits, and users would only find them if they actively sought them out. This points to the biggest flaw of voluntary limits: Almost no one uses them until it’s too late. Recent studies show that just 2 percent to 8 percent of customers take advantage of voluntary limit setting. Moreover, these limits are easily bypassed, as many platforms allow users to change them within 24 hours, rendering them nearly useless. 

My grandfather, a journalist with the Associated Press and sports editor for The Christian Science Monitor, was one of the most intelligent people I’ve ever met. He lived a full life, with accomplishments such as interviewing Muhammad Ali and covering six Olympic Games, as well as raising six children. He also suffered from alcoholism and a gambling disorder. He died seven years ago, and I wish that he could have read something I published. 

I also enjoy sports betting, probably more than the average person. While writing this piece, I redownloaded a sports betting app. And even after extensively researching the industry, after reading horror story after horror story, after confronting all the statistics and research proving that sports betting is near equivalent to lighting your money on fire, I still placed a few hundred dollars in bets. Now I’m back at college in Vermont, where the app I was using isn’t legal. But I still feel the urge to download another one and start again. 

Of course, many people can gamble on sports, have fun, and easily move on with their lives. But some clearly cannot. Many others exist in a gray zone somewhere in between, in which the technology with which they interact will have a material impact. Regulations cannot solve gambling disorders; like any addiction, gambling disorders can only truly be solved at the source. But reforms can reduce the cultural saliency of sports betting, as well as the ease with which gambling companies transfer money out of users’ pockets into their own. 

Calls to gambling hotlines have increased 150 percent since 2019, the year after legalization. Raymond Estefania, the addiction specialist, told me he worries that online sports betting is becoming an epidemic among college students. And we’re still only at the tip of the iceberg, just six years into this. 

Meanwhile, the insertion of gambling into every facet of our life continues. You can now gamble on the weather, the odds of a recession, even what policies will be mentioned in an upcoming presidential debate. The pinnacle of gambling’s valorization might be Nate Silver’s latest book, On the Edge: The Art of Risking Everything. Silver, election forecaster and avid poker player, argues that the connective tissue binding most highly successful people together—hedge fund managers, AI accelerationists, effective altruists—is their propensity to gamble. “Blackjack and slots … are fundamentally not that different from trading stock options or crypto tokens or investing in new tech startups,” he writes. Gambling, for lack of a better word, is good: “Ever since 1776, we risk-takers have been winning,” he manages with a straight face. According to Silver, human flourishing—or, at a minimum, immense financial reward—depends on breaking life down into an infinite series of expected value equations and then betting on them. If the world is dominated by gamblers, then maybe you should gamble too. 

The world of sports betting shows why you shouldn’t. Because what Silver fundamentally misunderstands is that today’s economic winners aren’t the gamblers—they’re the house. They win by turning more of us into gamblers. And right now, they’re succeeding. It’s time to change the odds.

The post The App Always Wins appeared first on Washington Monthly.

The Student Recruitment Industrial Complex | Washington Monthly, October 29, 2024 | https://washingtonmonthly.com/2024/10/29/the-student-recruitment-industrial-complex/

We tend to think about equity in higher education in terms of how colleges treat students who apply. But long before that, a little-known industry called enrollment management decides who gets the glossy brochures and who gets ignored.

Ozan Jaquette, a well-respected professor of higher education at UCLA known for his academic mentorship and groundbreaking research, takes great pleasure in calling himself a “slug.” As a high school student decades ago in Newton, Massachusetts, the young Jaquette was, by his own admission, a mediocre student. Smart, yes, but lacking confidence, with a tendency to hide himself behind jokes. Instead of academics, he poured his energy into sports. “I was just trying to get on the basketball team,” he recalls. 

Ozan Jaquette Credit: Courtesy of Ozan Jaquette

Imagine his surprise when, come junior year, the brochures from colleges started pouring in. Dozens of schools—UMass Amherst, Boston University, George Washington University—were interested in this son of college professors who earned “straight Bs,” while avoiding honors and AP classes. Jaquette took his second chance and ran with it: He enrolled at GW, studied hard, and followed his fascination with the inner workings of colleges and universities through to a PhD. Years later, as a professor at the University of Arizona, he advised a young grad student whose experience with college recruitment would forever change how he saw his own. 

Karina Salazar grew up on the south side of Tucson, the daughter of Mexican immigrants in a low-income neighborhood. Unlike Jaquette, Salazar was a standout scholar. She took every AP and honors class available at Sunnyside High School, earning a GPA of above 4.0 with the aim of becoming the first in her family to attend college. “Everything I was supposed to do, I did it, and I did it right,” she told me. But unlike Jaquette, she received no brochures in the mail. The only recruiters to visit Sunnyside were from community colleges. The military was there, too, tabling during lunch hour and handing out swag from a giant trailer at football games. Salazar ended up at the University of Arizona, mostly because, she says, “it was in my backyard. I didn’t know other colleges and universities.”

Karina Salazar Credit: Courtesy of Karina Salazar

As Jaquette and Salazar worked together, first as mentor-mentee and then as colleagues and research partners, they were struck by the disparity in their experiences. Why had Jaquette received so many extra chances to prove himself to colleges, while Salazar, a much better student, wasn’t on their radar at all? Part of the answer lies in the practices of enrollment management, the industry that helps colleges find the students they need to fill classes and pay tuition. When Jaquette calls himself a slug, it’s not just his self-deprecating humor. A slug, to enrollment managers, is something very desirable—a low- or mid-tier student with financial means who’s likely to accept an offer of admission and pay a large share of tuition. Universities, especially mid-ranked state and private schools without large endowments or Ivy League prestige, go looking for slugs to balance their budgets. They go looking for Ozan Jaquettes, not Karina Salazars.

For decades, the national discussion about equity in higher education has centered on the way colleges treat prospective students who apply. Do they choose among applicants based primarily on grades and test scores, for instance, or do they give preference to students who list fancy extracurriculars and legacy status on their admissions forms? Do they consider an applicant’s race, and if so, how? But schools don’t just wait passively for students to knock on their doors. They engage in expensive and time-consuming efforts to lure applicants, and in choosing whom to lure, they make decisions about equity and inclusion long before the admissions process even begins. As the experiences of Jaquette, Salazar, and millions of others illustrate, race and class play into who even gets to apply. 

The general public knows little about college enrollment managers and the vast web of private consultants who decide who gets recruited and who gets ignored. Yet the industry has been around since the 1970s and, despite having arisen in response to genuine financial pressures on schools, has played a large role in the inexorable rise of tuition around the country. Enrollment managers over the years have steadily increased the top-line tuition at their institutions to give themselves room to attract wealthy students with so-called merit aid. With inflated tuition sticker prices, colleges can afford to offer these scholarships that sound performance based but are really meant to flatter high-income students into matriculating while still paying top dollar. Meanwhile, schools offer insufficient aid to poor and middle-class students to cover the difference, which leaves them saddled with massive loans—collateral damage in the hunt for slugs.

So how do enrollment managers find these students and get them to apply? Here, Jaquette and Salazar have broken new ground. As research partners, they set out to explain just how it was that their paths to college came to be so unequal. After filing public records requests and painstakingly scraping university websites, they uncovered a bustling industry within the enrollment management industry—the Big Data realm of student lists. For the past few decades, a typical vice president of enrollment management at a major university has bought student names—somewhere between 50,000 and many hundreds of thousands of them annually—at roughly 50 cents a pop from the College Board, which administers the SAT, and from its competitor, ACT. These lists come with such detailed information as GPA, college intentions, and home addresses. That personal data, which parents and students give away when they sign up for standardized tests, allows colleges to customize their recruitment efforts for maximum returns. Need a student with middling grades and above-average SATs from a wealthy enclave like Newton, Massachusetts, who is interested in attending a school of your size in your region? Just apply a few filters in a computer program, or have your consultant do it for you, and you have thousands of names and addresses to which you can send glossy brochures. 
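
To make the mechanics concrete, a filter of that kind might look like the hypothetical sketch below. The field names, cutoffs, and ZIP codes are invented for illustration; the testing companies’ actual list schema is far richer.

```python
# Hypothetical illustration of filtering a purchased student list for
# "slugs." All field names and thresholds here are invented.
students = [
    {"name": "A", "gpa": 3.0, "sat": 1350, "zip": "02459"},  # Newton, MA
    {"name": "B", "gpa": 4.0, "sat": 1480, "zip": "85706"},  # south Tucson, AZ
    {"name": "C", "gpa": 2.9, "sat": 1310, "zip": "02460"},  # Newton, MA
]

AFFLUENT_ZIPS = {"02459", "02460"}

def looks_like_a_slug(s: dict) -> bool:
    """Middling grades, decent test scores, affluent neighborhood."""
    return s["gpa"] < 3.3 and s["sat"] >= 1300 and s["zip"] in AFFLUENT_ZIPS

mailing_list = [s["name"] for s in students if looks_like_a_slug(s)]
print(mailing_list)  # ['A', 'C']: the standout student from Tucson gets no brochure
```

A few filters like these, applied to hundreds of thousands of purchased names, are all it takes to reproduce the disparity between Jaquette’s mailbox and Salazar’s.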

This technology is powerful, and theoretically could be used for noble purposes—to recruit, say, high-achieving students from low-income families in minority neighborhoods. (A few schools are doing so; more on this later.) But what Jaquette and Salazar found in surveying dozens of major schools is that list data is much more often a tool of exclusion, enforcing with even greater precision the hierarchies of race and class that limit access to college. 

Until very recently, colleges bought these long lists of prospective students directly from the testing companies, but developments in recent years—including the pandemic-era decline of standardized testing, and updates to privacy laws—have made it much harder to sell student information. As they lose access to list data, increasingly desperate schools have been turning to nontraditional sources and new technologies to form personalized connections with students. This new landscape has unlocked the potential for a fairer, more inclusive system of college recruitment, but also the threat of increased control by private consulting companies, which have begun to buy up alternative sources of student names. This moment of turmoil, education scholars argue, is the time for regulators to step in. 

The underlying incentives that spawned the enrollment management industry were established by the federal government, which has the power to set fair rules for the marketplace and ensure that federal aid is benefiting the students who need it. In the short term, that can mean regulating the sector and cracking down on its worst excesses. In the long term, truly fixing the problem will require changing the incentives that drive those excesses, which means fundamentally restructuring the way colleges are funded and their classrooms are filled. If the search for slugs has gone too far, it could be time to add a little salt. 

As soon as Karina Salazar arrived at the University of Arizona, in 2007, she became all too aware of the inequalities of college. Her 94 percent Hispanic high school, where four out of five students last year qualified for free or reduced-price lunch, had not prepared her to meet her new peers, who took European vacations in the summer while she worked full-time back home. “I didn’t realize I was low income until I went to college,” she told me. No one in her family had the experience to help her navigate the system, and so, Salazar remembers, she “stumbled around” in choosing a major, eventually settling on journalism. But even with those obstacles, she graduated in four years and then launched straight into a master’s program in education and public policy, also at UA.

Jaquette was Salazar’s instructor in a graduate-level statistics class. Afterward, Jaquette encouraged Salazar to pursue a PhD, and then became her adviser. Over the years they have formed a complementary dynamic—Jaquette the big, goofy personality, who answers business emails with chilled-out phrases like “ya mon”; Salazar the sharp, intellectually rigorous, morally clear Abbott to his Costello. Two or three years into Salazar’s doctoral program, as she was finishing up her course work and thinking about a dissertation topic, the colleagues remarked on how their stories were two sides of the same coin. Jaquette had already studied public universities’ propensity to seek wealthy out-of-state students in response to decreased state funding, which made him wonder if the disparity in his and Salazar’s recruitment might be a result of that trend. 

Together, they started thinking about how to measure where universities were recruiting. Admissions offices planning visits to high schools often post schedules on their websites, so after training themselves in data management (and eventually enlisting undergraduate researchers and a professional programmer), Salazar and Jaquette were able to pull from the web a list of thousands of visits by 150 colleges and universities in the 2017 school year. They published their findings in a New York Times op-ed. As they had suspected, recruiters largely focused their visits on out-of-state high schools in affluent white communities. One of the most wealth-seeking universities was Rutgers, whose main campus in New Brunswick, New Jersey, visited high schools across the country that served areas with median family incomes of $117,600. The median income for neighborhoods those recruiters skipped over: $67,000. As a follow-up, Salazar conducted a spatial analysis of high school visits in Dallas and Los Angeles, and found a trend that she calls “recruitment redlining.” Mapping out the visits, Salazar found herself tracing lines around poor communities of color that had been ignored by nearly all universities—much as bankers, insurers, and the U.S. government once drew actual red lines on maps to deny financial services to residents in Black neighborhoods. 

Salazar and Jaquette are careful not to portray universities as scheming villains. The scramble to recruit rich out-of-state students stems from systemic state and federal disinvestment from public universities, long-term enrollment decline, and the rising underlying cost of higher education. Aside from the top few universities in the U.S. News & World Report rankings, most schools don’t have a functionally unlimited endowment and instead rely on tuition revenues, and so of course they look for students from rich areas. Those schools will bring on some Pell Grant recipients, but not too many, because that leads to sacrifices elsewhere. As Jaquette told me, “You rob banks because that’s where the money is.” 

Enrollment managers have been helping colleges respond to these pressures for decades. In 1973, a theoretical physicist at Boston College named Jack Maguire received an unusual request from the higher-ups: Pack up your slide rule, move to the admissions office, and use your way with numbers to get us more students. Founded during the Civil War to educate Irish Catholic immigrants, the university now was nearing insolvency due to dropouts and decreased applications. Maguire and another elevated quant, the former business professor Frank Campanella, coined the term enrollment management for their innovation of aligning the formerly separate systems of recruitment, admissions, financial aid, and retention behind a unified strategy to put scholarly derrieres in seats. The first two enrollment managers conducted surveys to figure out why too few students were applying and too many were transferring away. Among other strategies, they redirected financial aid away from upperclassmen and toward the freshman class, while dangling merit scholarships to wealthy students who didn’t need them but would be enticed by the offer. “Were we the first and only ones to do it? You could make a case for lots of folks,” Maguire told the journalist Neil Swidey this year. “But the fact is, I had the great advantage of being a mathematician—a scientist—and knowing nothing about admissions.” 

A slug, to enrollment managers, is something very desirable—a low- or mid-tier student with financial means who’s likely to accept an offer of admission and pay a large share of tuition.

By 1980, application numbers had tripled. Maguire’s tactics were such a hit that he soon left to start his own private consulting firm, the first in what would become a multibillion-dollar industry. Meanwhile, other schools were copying BC, and its tactic of attracting high-income students with merit aid was spreading across the country. As the competition over rich kids heated up, colleges used federal aid dollars to offset the financial aid that they otherwise might have given to low- or middle-income students. (You’ve earned a federal Pell Grant? Good news—for your university, which will take that out of the aid it would have given you, and send that money elsewhere!) With tuition steadily rising, nonwealthy students have been forced to take on more and more loans to fill the void between aid and the total cost of college. This trend, which came to be known as financial aid “gapping,” accelerated as the pressure on higher education rose. The Reagan administration drastically reduced federal student aid, the cost of college steadily increased along with student debt, and long-term demographic change put a squeeze on the pool of applicants. Universities desperate to fill seats competed in the increasingly popular U.S. News rankings, which rewarded them for rejecting applicants, soliciting alumni donations, and raising their SAT averages. 

Swidey’s interview with Maguire, and many of these historical details, appeared this May in a book-length collection of journalism and academic research called Lifting the Veil on Enrollment Management, which was curated by Stephen Burd, a senior education policy writer and editor at New America. As Burd notes in his own chapter, schools soon learned that they could make money and attract applicants by rising in the U.S. News list. The cross-pollination between enrollment management and college rankings made higher education ever more exclusive. In 2011, for instance, Clemson had stalled in its decade-long quest to reach the top 20 public universities. Administrators had already tried every trick in the book, Burd writes. They had stopped admitting students below the top third of their high school classes; asked for micro-donations to inflate their alumni giving rate; and given competitor universities poor ratings in the reputational survey that informs the U.S. News rankings. Still, despite maintaining an acceptance rate higher than what they needed to rise in the rankings, they were enrolling too few students to fill that fall’s freshman class. They turned to private consultants. 

On the advice of Huron Education, one of the biggest enrollment consulting firms, Clemson increased its spending on merit and other non-need-based aid to wealthy students by 160 percent. By 2019, the plan had borne fruit: The acceptance rate was down, the yield rate was up, and the SAT scores of the incoming class had risen by 80 points. (Nonwealthy students, meanwhile, were forced to make up for decreased aid with risky Parent PLUS Loans, Burd writes.) Clemson was far from the only school to do this, at the behest of a fleet of private consultants that had grown throughout the 1990s and 2000s to guide schools through these treacherous waters. One pioneering consultant was Bill Royall, a former Republican political operative who popularized the mass purchasing of names and the mailing of lavish brochures like the ones that lured Jaquette to GW. His firm, Royall & Company, grew steadily through those decades and, in 2015, was acquired by the parent company of the Education Advisory Board, now just EAB, which itself was bought by private equity as part of an ongoing wave of consolidation in the industry. Profit hungry as they were, these businesses grew so quickly because they pointed to a simple truth: Every incentive pushes institutions to cater their recruiting to wealthy students and leave the rest behind.

The next phase of research started with Jaquette, who had heard from friends in admissions offices and private consulting about the world of student lists. Though it was no secret that roughly 87 percent of universities buy, collectively, tens of millions of names annually from consultants or directly from the testing companies, he and Salazar realized that no one outside the industry had attempted to quantify the practice. Doing so would get them close to answering their initial question: Why did Jaquette get the glossy brochures?

Getting access to that data wasn’t easy. Just as the research partners had had to train themselves as data managers, now they became investigative journalists, filing laborious records requests to public universities. As before, funders such as the Joyce and Kresge Foundations pitched in for additional researchers to help scale up the project. Jaquette and Salazar obtained 414 orders for student data that 14 public universities in Arizona, California, Minnesota, Illinois, and Texas made to the College Board from 2016 to 2020, along with the more than 2.5 million names those schools received. 

The researchers found that colleges routinely filtered their requests by test scores, geographic location, and other data points that tend to exclude students of color. The most common set of filters for prospective undergraduates, used in 99 of 414 requests, sorted students by class rank, GPA, SAT, and zip code. The population of students nationwide who don’t take the SATs is disproportionately nonwhite and poor; moving up the score table, the population of test takers becomes disproportionately rich, and disproportionately white and Asian. Centering one’s outreach on affluent zip codes, as the 14 schools often did in their list orders, is also an efficient tool of exclusion by both race and class. Combine multiple such filters, and the chance of a low-income student of color sneaking through becomes very slim indeed. Jaquette and Salazar made a surprising finding in that vein: The more that schools pile on filters such as SAT, GPA, and zip code, the more they exclude Black and Hispanic people across the income spectrum. In other words, universities set out looking for wealthy students to fill their coffers, and end up sorting America’s high schoolers ever more precisely by race.

Once they started digging into the results and asking schools about their practices, Jaquette and Salazar found another surprise. They had started off thinking of recruiting as a process driven by colleges and universities, and came away with a new appreciation for the influence of private consultants. At some schools that bought more than 100,000 names a year, Jaquette said, “you couldn’t find any university employee who knew anything about the names that they were buying, and why did you use these filters and why not these others.” 

College admissions offices are notoriously tough places to work, where an experienced VP often oversees a posse of inexperienced and underpaid recent grads, who are eager to serve their alma mater but overwhelmed by the volume and complexity of the work. As The Chronicle of Higher Education reported in April 2023, burnout and turnover in this critical profession, one responsible for filling the vast majority of seats in university classrooms across America, have reached crisis levels. Many university employees don’t have the time or the expertise to carefully curate student lists; they request names using the filters that a consultant thinks are best, or that were left over from their predecessor in the admissions office, or their predecessor’s predecessor. As Jaquette told me, “When you see them all doing something similar, you might assume it’s because they share common knowledge and goals, but are making individual decisions. But part of the reason we see consistent inequality in recruitment practices is because of these third-party products and enrollment management companies that are kind of telling universities what to do.”

Some university officials are capable of thinking for themselves, however. Matt Lopez, an outspoken critic of the exclusionary practices of enrollment management, took a job as deputy vice president of admission services at Arizona State University in 2016. In his new role, he pushed to bring the school’s recruiting strategy in line with its mission statement, which calls for ASU to measure itself “not by whom it excludes, but by whom it includes and how they succeed.” One of his first moves was to switch the bulk of the university’s list purchases to the PSAT, which includes more underprivileged students than the SAT itself. He also changed the filters—less outreach to out-of-state students, and more to parts of Arizona with high concentrations of rural, low-income, Indigenous, Hispanic, and Black residents. At the same time, Lopez was conscious of the university’s bottom line and how much it could realistically offer to students. He stopped reaching out to some out-of-state areas where it seemed unlikely that students could make an ASU education work financially, in order to prioritize low-income students in Arizona who could benefit from state aid. Other industry experts I spoke with praised Lopez’s approach as a way to use student lists to expand college access, rather than limit it.

But the nationwide landscape was about to change. The pandemic forced school districts to cancel in-person SATs and ACTs, which, together with rising criticism about socioeconomic inequity in standardized testing, pushed many universities to remove the exams from their admissions requirements. Meanwhile, concerns about privacy over the past decade have led at least two dozen states, including New York, Illinois, and Florida, to pass stricter laws controlling the release of student information. Regulators and privacy advocates have lately started to push for those laws, many of which were modeled on California’s 2014 Student Online Personal Information Protection Act, or SOPIPA, to apply to the selling of list data. This summer, for instance, the College Board paid $750,000 to settle a suit from the New York attorney general over alleged violations of the state’s privacy law. As a result, colleges can only get test takers’ names in those states if the students opt in afterward by signing in to an online portal and agreeing to release their data. The number of available names has plummeted as a result.

“You can’t go to College Board and just buy a million names like you used to,” Lopez told me. That’s roughly the number that ASU was buying annually from the testing companies when he arrived. Now, the pool of available names has shrunk, and the prices, which once were around 47 cents a name, have gone up tenfold in some cases, Lopez said. Bobby Andrews, a former vice president of enrollment management at Duquesne, DePauw, and other universities, told me that these well-intentioned privacy restrictions are making it harder for colleges that want to seek out low- and middle-income students to do so. 

The large majority of students who take standardized tests in the classroom during weekdays are no longer showing up in databases, since most don’t know to opt in for data sharing, and also because in some states the companies aren’t allowed to solicit their signoff. The students whose information is available are those with the family resources and experience to sign themselves up for college outreach—and they tend to be rich and white. (Which perhaps would not be such a problem for some slug-hungry schools if the names were not so expensive and so few.) “In the end, you’re restricting the most populous group of underresourced students from being available to colleges who want to recruit them,” Andrews said. More recently, this spring’s rollout of the online SAT triggered internet privacy laws that restrict the selling of student data even further. 

The squeeze on student names is pushing universities from a woeful status quo into a potentially worse unknown. The College Board and the ACT, the duopoly that controlled most list data, were at least nonprofits, bound by transparency requirements and the general expectation that they would work toward some social good. Now, colleges are turning to for-profit companies for student names, and one, EAB, increasingly dominates the market. Long a leading enrollment management consultancy, EAB is on an acquisition spree, gobbling up companies that offer alternative sources of names. In 2022 it purchased the tech firm Concourse, whose online platform flips the college application process on its head by letting students create profiles that schools can peruse and send admissions offers to. This “reverse admissions” process is a promising innovation (see Jamaal Abdul-Alim’s article in our September/October 2023 issue, “When Colleges Apply to Students”). But the Concourse platform also captures detailed data on students, which EAB can turn around and sell to prospect-hungry colleges. EAB also recently bought the leading scholarship finding website Cappex, and created an exclusive partnership to distribute a major college recruiting platform called Intersect. Both of these deals give EAB control over sources of student data. 

If it achieves monopoly power over list data, the private equity–owned EAB will gain even more leverage to force schools to buy expensive software and consulting services that they don’t necessarily need. The transition of student lists from a lightly overseen not-for-profit space to a less accountable, profit-driven one is speeding up. Private equity controls the second-largest enrollment management consulting company, RNL, and this April a private equity firm even bought ACT, which was forced to relinquish its nonprofit status.

Troubling as these trends may be, the federal government does have tools to curb predatory behavior in the industry. If EAB’s acquisitions are threatening competition, for instance, federal antitrust regulators like Lina Khan, chair of the Federal Trade Commission, might consider taking a hard look at those deals. 

Meanwhile, Jaquette and Salazar have advanced a creative answer to discrimination in student lists: Treat them like credit reports. Credit rating agencies such as Equifax are regulated as holders of personal data that helps businesses to determine whether or not to make loans to consumers. Those agencies are not allowed to, say, draw red lines around a Black neighborhood in Chicago and automatically assign a low credit score to anyone living there. A student list, meanwhile, is information that helps colleges to decide whom they should offer admission to—along with financial aid, which includes loans. So why not have the FTC and the Consumer Financial Protection Bureau apply similar regulations to purveyors of student data, who, according to Jaquette and Salazar’s research, systematically exclude minorities in their products? 

In addition to policing the existing market, government can help colleges and students go around the middleman. It can invest more in college advising for lower-income high schools, so those students have the knowledge and resources to navigate the application process themselves rather than be wholly at the mercy of the student recruitment industrial complex. It can also offer “direct admission,” an increasingly popular system where all resident high school students above a certain academic performance standard receive automatic admission into one or more of that state’s public universities. Under direct admission, which 10 states had implemented as of 2023, much of the recruiting that colleges have been doing on their own is, in effect, done for them by the state. 

Even with direct admission, colleges still want the ability to proactively reach out to students—to make sure they get enough to fill their classrooms, and the right ones for the programs they offer. To give colleges access to students without going through private consultants, more states should consider creating a “public option” for list data. In recent years, Arizona began providing all its public universities with high school student data that the state government already captures as the overseer of K–12 schools. Though privacy regulations limit the public lists to just a name and an address, that has been enough for ASU’s Lopez to make his state’s pilot student list program a cornerstone of his recruitment strategy. 

None of these reforms, however, will solve the underlying problem that pushes colleges to behave the way they do: the financing of higher education itself. Many states drastically underinvest in their public universities, and the Pell Grant, the federal government’s main source of direct student support, covers only a fraction of average tuition compared to what it did decades ago. Combine those pressures with the decreasing supply of high school graduates after the large Millennial generation, and it all adds up to an unavoidable need to recruit students from wealthy families and shun those from poorer ones. 

As the Monthly’s own Kevin Carey argued in 2020, American higher ed needs a foundational rebuild, one that relieves the financial pressures that encourage bad behavior. Among other measures, Carey’s plan calls for a restructuring of the federal funding system: For every college, the government would provide a flat $10,000 stipend per student, along with universal tuition reimbursement on a sliding income scale. Give schools the assurance that they can afford to pay for each student’s education, and suddenly the hunt for wealthy kids is not so urgent. Acceptance of that aid would come with conditions. No more deceptive pricing; no more diversion of federal money into wealth-chasing merit scholarships; no more confusing financial aid letters that disguise what’s a grant and what’s a loan. 

The new regime would cause sweeping change in American higher education, perhaps even creating a leveling effect where less influential regional universities and state colleges see a boost in funding while flagship universities see a slight decrease, at least from federal sources. (The most prestigious and wealthy universities wouldn’t see much of a change; Carey’s proposed structure would allow colleges to opt in, which he believes non-elite schools have every incentive to do.) It might also require all of us to reexamine our assumptions about what college is. Is it a magical place where we live in pristine dorms and “find ourselves” during study abroad in Barcelona, or is it something more modest? Which should be more bountiful—the amenities, or the outcomes? If that sounds suspiciously … European, then consider that the American higher education system worked this way as recently as the 1960s and ’70s, when the still-standing federal funding laws were written. Since then, underlying financial realities changed, forcing schools to abuse every loophole in the name of survival; now, it’s a matter of catching up.

Around the time Salazar was finishing her PhD, in summer 2019, she came up with a bright idea. Dissertation defenses are staid affairs held in a conference room before a panel of academics and just a few onlookers. Her research into colleges’ recruitment (or lack thereof) in poor neighborhoods had unusual relevance to her institution and the community around it. And so, she recalls, “rather than just present this work to a committee and my immediate family in this dusty room, I decided to do it at my local high school.”

A few months later, in the same school library where she had filed college applications years before, Salazar presented her findings to Sunnyside High School’s principal, the district superintendent, and administrators from the University of Arizona. Afterward, the local officials shook hands with UA’s vice provost for Hispanic affairs and another high-ranking dean. A partnership was born. 

Today, the district that only received visits from the Army and Marines has a full-time recruiter from the University of Arizona. Salazar, who became a professor at UA after her PhD defense, sat on the hiring committee for that job and recently helped to hire another recruiter, this one focused on outreach to local Indigenous communities. The results have been striking. Between 2019 and 2023, the annual number of graduates enrolling at UA from Sunnyside’s district, which includes two other low-income high schools, rose from 91 to 180. 

Salazar and her collaborators are seeking to create similar partnerships in other Tucson-area school districts and eventually in other areas of the state, including rural ones that don’t get attention from large public universities. But they’re doing so at a deliberate pace, seeking funding while keeping in mind how much programs like these depend on a delicate balance of personal relationships and cultural understanding. This outreach isn’t quickly scalable, and certainly not a panacea. No one solution is.

Still, there’s satisfaction in coming full circle. Not often does an academic get to shape her career around a research subject that affects her own community, and then directly introduce solutions on the ground. Salazar’s applied approach reflects the priorities of her culture, she says—research for its own sake is a fine thing, but even better is to make a difference in the world. 

In a similar way, Salazar doesn’t see her success as her own. All of Sunnyside shares in it. And what that represents to her is something larger—the potential that so many people in that neighborhood and thousands of others would have, if only they received the same opportunities. As Salazar told me this summer, “I’m not the shining example of the community. I’m not an exception.”

The post The Student Recruitment Industrial Complex appeared first on Washington Monthly.

An Unliterary Childhood | Washington Monthly, October 29, 2024 | https://washingtonmonthly.com/2024/10/29/an-unliterary-childhood/

Why growing up in a home without books made me a lifelong fan of book reviews.

Most writers grow up in homes filled with books. Not this writer. My parents’ book collection, accumulated over their nearly 50-year marriage, amounted to roughly two dozen volumes, all of which fit into the nooks of their bedside tables. I’m unsure which, if any, either of them read cover to cover. 

It’s not that they didn’t read. They consumed our local St. Louis newspapers, the Post-Dispatch and the Globe-Democrat. My father devoured Time magazine each week, and my mother read the Orthodox Observer, published by the Greek Orthodox Church Archdiocese of America. They both watched the nightly TV news, and KMOX Radio, the AM news station, was on in the background all day. As citizens, my folks were well informed. They just didn’t get their information from books. 

When my brothers and I helped our mother move out of the suburban home where we grew up, the two dozen books on the nightstands were the same ones that had been there for decades. A hardback copy of The Godfather, by Mario Puzo. Dr. Spock’s Baby and Child Care. Lee Iacocca’s autobiography. Eleni, by Nicholas Gage—a memoir of the Greek Civil War purchased when the author spoke at our church, St. Nicholas. The Hour of the Bell, a historical novel about the Greek War of Independence, also purchased when the author, Harry Mark Petrakis, came by St. Nicholas. The Magus, a postmodern novel of ennui and psychological manipulation, by the British writer John Fowles, which I doubt either of my parents got more than 10 pages into. I suspect they picked it up at a church bazaar because the story was set on a Greek island. 

For a young man with literary ambitions yearning to escape into stories of love and adventure, the Glastris home was a desert. Fortunately, I was not such a young man. Reading books was not my thing. I threw my energies into backyard sports, stomping around the woods, and various short-lived enthusiasms—chemistry set experiments, drag-racing model car kits, and taxidermy. 

I was vaguely aware that other kids at Robinson Elementary School were devotees of children’s fictional series, like Nancy Drew and the Hardy Boys, and in junior high and high school, sci-fi and fantasy tomes like The Hobbit and Dune became the rage. I took no interest in any of these. 

I do not fault my parents for this indifference. When my brothers and I were young, they read to us plenty—Curious George, Dr. Seuss, D’Aulaires’ Book of Greek Myths. As we grew older, they encouraged us to read by dropping us off at the library and buying us an encyclopedia. I availed myself of these opportunities, but mainly by perusing them. On rainy days, I’d pluck out an encyclopedia volume and read a few entries that caught my attention. At the library, I’d pull a few books from the stacks, flip through them, read a few pages, and put them back. I wasn’t looking for stories so much as for information on subjects that interested me: animal life, astronomy, dinosaurs, Native Americans, undersea exploration, ancient civilizations, and anything related to the fairer sex. 

The handful of books I owned—The Boy Scout Handbook, Peterson’s Field Guide to Reptiles and Amphibians—were similarly informational. So too the magazines we subscribed to, like Boys’ Life (a perk that came with Boy Scout membership), Missouri Conservationist (free when you got a fishing license), and Rolling Stone (which I read mainly for the album reviews). 

At some point in high school, I started to notice good prose. Joel Vance, a columnist for Missouri Conservationist, wrote first-person tales—of, say, field-dressing a turkey—that were both instructive and humorously self-deprecating. Euell Gibbons, author of Stalking the Wild Asparagus (and famous for his commercials hawking Grape-Nuts cereal), could make you feel like anyone who didn’t forage lamb’s-quarters for dinner was a fool. I still recall a line from Rolling Stone describing ZZ Top as sounding “like someone rummaging through a toolbox.”

The only novels I read were those our English teachers assigned—typically short, socially conscious ones like Animal Farm and Of Mice and Men. But in my junior year of high school, I signed up for honors English. Suddenly, for the first time, I was with the “smart kids,” who were more practiced readers than I, and I found myself plowing through 50 pages a night of Light in August. It was a struggle to follow the plot and remember the characters’ names, much less identify the symbolism and other “deeper meanings.” 

Still, I found the challenge more bracing than dispiriting. I also discovered that I had a knack for organizing what I learned from novels into book reports that made cogent enough arguments with few enough grammatical errors that I earned top grades and praise from my teachers. This was quite a dopamine rush—second only to those that came from being able to talk intelligently to some of the “smart girls” in the honors classes, whom I increasingly eyed. By the middle of senior year, one of those young ladies was my girlfriend, and I was beginning to think of myself as an intellectual. 

When I showed up for pledge week at the Pi Kappa Alpha fraternity at the University of Missouri in 1977, I ostentatiously carried a copy of Crime and Punishment. The near-total incomprehension of my fellow Pikes was an early sign that maybe I had picked the wrong college; sophomore year, I transferred to Northwestern University, which was a better fit. I spent the remainder of my undergrad years holed up in the Pike house at NU with like-minded guys smoking weed and discussing elasticity of demand, Edmund Husserl’s phenomenology, and the deeper meaning of The Wall.

The main feature of my room at the Pike house was a set of shelves made of unpainted wooden boards on cinder blocks, used to house my prized possessions: stereo system, LP collection, aquarium with piranhas, and books. The shelves traveled with me as I moved on to grad school and a series of off-campus apartments. When I married, my wife Kukula, also a great reader (and later the Monthly’s books editor), thought my shelves hideous. She insisted we replace them with a wall of teak bookcases to accommodate both of our growing collections. 

While my side featured novels by celebrated authors of the day—Saul Bellow, Robert Stone, Thomas Pynchon—my reading tended more toward other genres. One was books by erudite scholars with grand visions: E. O. Wilson on sociobiology, Marvin Harris on cultural materialism, Julian Jaynes on the origin of consciousness, and Stafford Beer on cybernetics. The other was the literary journalism of writers like George Orwell, Joan Didion, and V. S. Naipaul, who specialized in taking the piss out of grand theories. 

That conflict, between the search for unifying explanations and a love of reporting that overturns them, led me to the Washington Monthly, where the journalism embodied that tension. It still does, as you can see in the feature stories of the current issue, which challenge conventional wisdom on everything from reindustrialization to online gambling, artificial intelligence, and college admissions.

Clearly, my unliterary childhood did not stop me from becoming a lover of books. But it did leave marks. One is that I never have learned to read “for fun.” The escapist fare beachgoers pack with their suntan lotion—mysteries, thrillers, westerns, detective stories—does not hold my attention. Nor can I read quickly and lightly for pure pleasure. When I read, it is slowly, deliberately, with 100 percent of my attention. I find reading books immensely rewarding, but a kind of work. I suppose that’s why I can’t read in bed. If I’m tired enough to be in bed, I am too tired to read with the full concentration that is the only way I know how to read. (I can scroll Twitter before nodding off, however.) 

The books on my reading table indicate my idea of a good time. There’s a history of the North Sea; a reported narrative about the brains of octopuses; the autobiographies of
Ulysses S. Grant, Frederick Douglass, Michelle Obama, Joseph Epstein, and Anthony Bourdain; a travelogue of 1930s Europe by Patrick Leigh Fermor; and a collection of essays by Yuval Noah Harari. I have finished a few of these books. The others are bookmarked a quarter of the way in and may stay that way. They’re fascinating, but I manage to acquire books faster than I can finish them. 

For someone like me, who reads widely and seriously—but slowly and primarily for information and ideas—the perfect literary genre is the nonfiction book review. Such reviews give me at least the illusion that I’m keeping up with the deluge of interesting books published every year, 99 percent of which I’ll never get to. Book reviews are the first thing I turn to in any publication. 

Not surprisingly, we commission many nonfiction book reviews for the Washington Monthly—including the nine in this “fall books issue.” (We also give a prize named for Kukula, who passed away in 2017, for the best reviews published in other outlets.) I tell our editors and writers to treat new books like reporting assignments. Summarize them fairly and clearly for the reader, highlight the facts and formulations that are new and interesting, and assess the plausibility and persuasiveness of the thesis. If a book is well structured and beautifully written, that’s worth mentioning. But even if ill-constructed and clunky, a book deserves respectful coverage if its material advances our knowledge and understanding of an important subject. A Monthly review is not so much an aesthetic judgment to help you decide whether to buy a book as an extension of our journalism, meant to give you enough information about a book that you don’t have to read it.

If you think that’s an excessively utilitarian way to treat the art of nonfiction book writing, what can I say? Blame my parents. 

The post An Unliterary Childhood appeared first on Washington Monthly.

He-e-e-e-re’s Johnny! https://washingtonmonthly.com/2024/10/29/he-e-e-e-res-johnny/ Tue, 29 Oct 2024 23:20:19 +0000 https://washingtonmonthly.com/?p=155866

He dominated late-night television for 30 years, before our shoutfest era. A new biography of the reclusive Nebraskan is also an elegy for a lost America.

The post He-e-e-e-re’s Johnny! appeared first on Washington Monthly.


Bill Zehme, the longtime star writer at Esquire, veteran of Rolling Stone and Vanity Fair, and the author of terrific biographies of Frank Sinatra and Andy Kaufman, was a unique chronicler of stars who dedicated much of his dogged career to pursuing a biography of Johnny Carson. 

Carson the Magnificent by Bill Zehme with Mike Thomas Simon & Schuster, 336 pp.

Charm and wiles were crucial to Zehme’s success. (Cancer took his life in 2023 at 64.) With his leonine mane, round glasses, endearing countenance, and inexhaustible determination, he had a way of ingratiating himself with the famous. “It’s hard to explain how one can miss a journalist so much,” Sharon Stone said after Zehme’s premature death. “Bill was a wonderful person, a delicate, delicious, open, deep, robust feast of a human.” 

As a student at Loyola University Chicago in the 1970s, Zehme wrote a lengthy profile of Playboy magazine founder Hugh Hefner for the student newspaper and jimmied his way into the company’s offices off the Loop to plant copies in the elevator lobbies. As he hoped, executives alerted “Hef,” who loved the piece. The stunt earned the savvy young reporter an invitation to the Playmate of the Year bash, reward enough for many a male college student. But Zehme parlayed it into an invite to Hef’s famed bacchanalian Los Angeles mansion. 

While in California, the rangy student scribe sought a coveted ticket to a taping of The Tonight Show, then hosted by Carson and the biggest thing in late-night television. Zehme’s parents devoured the show nightly, and he had loved it as a kid, listening when he should have been asleep. NBC security turned Zehme away, despite his forged press pass. Still, he pursued America’s elusive late-night host for decades, eventually winning backstage access to Carson’s final shows in 1992, which held America sadly enthralled as they ended the entertainer’s 30-year run of putting the nation to bed with his top-of-the-ratings 11:30 p.m. show.

For decades, Zehme sought time with Carson, all the while dashing off books, biographical and ghostwritten, and incisive magazine profiles of Robin Williams, Madonna, Tom Hanks, and other giants of popular culture. Like Ahab, he didn’t live to land his white whale: he died before completing the book. But he did win Carson over, gaining his blessing to talk to everyone in the entertainer’s life. The result? Carson the Magnificent is not only a richly reported look at America’s enigmatic host, but also a glorious romp through a lost era in entertainment and American life before the nation’s media and social fabric frayed. Zehme’s friend and colleague Mike Thomas, the Chicago Sun-Times feature writer and author of books about Second City Comedy Theatre and Phil Hartman, finished the reporting and writing on his behalf. Their triumph is a great read.

As you read about what an elusive paradox Carson was, you can see why it took so long, even accounting for Zehme’s decade-long illness. The Nebraska-raised Carson was among America’s best-known persons between October 1962, when he took over The Tonight Show just days before the Cuban Missile Crisis, and 1992, as Bill Clinton wrapped up the Democratic nomination for president. Carson didn’t consent to many in-depth interviews during his decades in the limelight, let alone the sustained conversations required for a book that’s better than the ones that preceded it, including a collection of New York Post stories by the then journalist, later filmmaker and auteur Nora Ephron. After Carson’s Tonight Show run had ended and he was working out of the small Santa Monica offices of his production company down the road from his Malibu mansion, Zehme pressed and charmed him. Their conversations made the book possible because Carson opened up about family, fears, and fame.

The paradox about Carson was that he was so little known at the very same time he was so well known. He was far more widely recognized than standoffish but iconic figures in the arts such as Marlene Dietrich or the hermit-like J. D. Salinger. Carson so dominated the after-your-late-local-news time slot that he kept the late-night comedy genre to himself for decades. None of the other networks would try to race against NBC’s thoroughbred. And yet, at times Carson revealed himself on air—joking about his four marriages; choking up over the loss of the second of his three sons, who died when his vehicle hit a barrier on the Pacific Coast Highway; and alluding to the nine-figure wealth he had achieved. (Carson joked about being just a regular guy who relaxed at home in a hammock—until he asked the butlers holding it to let him down.) He did hardly any press, just a couple of serious in-depth interviews—including one with Rolling Stone in 1979, but none with Zehme. Even to those around him, he was famously elusive. 

Carson was distant from his sons and his wives; three spouses sued for divorce. He smoked too much; emphysema eventually killed him. And he drank; easily soused on a little liquor—vodka tonics were a favorite—he could be mean. He overcame the booze for periods. But even sober, he was a loner, more comfortable coming home after an afternoon taping at NBC Studios in Burbank, shutting the doors, donning headphones, and playing his beloved drum set to jazz than seeing his kids or wife or even going out. (With the singer-songwriter Paul Anka, he had a writing credit for the brass-heavy, famed, and instantly recognizable Tonight Show theme song.) He was happier gazing at the Pacific from his palatial home atop Point Dume and playing tennis—singles—or tooling in his series of progressively larger yachts, the last one a 130-footer named the Serengeti. Speaking of which … His animal acts, a corny staple in the hands of bad entertainers, were wildly popular, like his pregnant pause before jumping into the arms of his burly on-air sidekick, Ed McMahon, when a baby lion roared at him. An autodidact, Carson read everything and even taught himself Swahili. There’s film of him entertaining locals on the African plains, doing comic bits in their tongue. Even with tribespeople, he was more comfortable performing. Otherwise, to borrow from David Riesman, the giant sociologist of the era, he was an “inner-directed” man. 

Why does Carson still matter? First, he dominated an era when America, always a diverse and roiling continent-wide nation, had something resembling a unified national culture: three TV networks and maybe a half-dozen mass magazines like Life, Time, and Reader’s Digest. The showman so dominated late-night TV that what was on “Carson”—everyone called it that rather than The Tonight Show—was the buzz around the office coffeepot the next day, just as Walter Cronkite dominated evening news. This made Carson the biggest star in a way stars can no longer shine, with rare exceptions like Taylor Swift.

I’m aware as a New Jersey suburban kid—growing up in the 1970s and listening to Carson through my parents’ bedroom wall, turning the TV set off as they invariably conked out before the show ended at 12:30—that such nostalgia must seem like something from the Pleistocene Era to twentysomethings today. Indeed, I was about as far in time from Charlie Chaplin and silent films as they are from peak Carson. 

Carson’s foremost gift was his brand of comedy—reserved, cool, and self-effacing. He was never funnier than when he came back from a joke that bombed, with a stare that made the audience empathize with and laugh at his predicament. His other great ability was to spot a generation of talent, from David Letterman to Jerry Seinfeld. His reserved, understated style suited Marshall McLuhan’s much-discussed “cool” medium of television the way John F. Kennedy did. He forged topical comedy on television, unlike escapist fare from then-contemporary network giants such as Ed Sullivan, who dominated Sunday nights the way Carson owned Monday through Friday. From his late 30s to his late 60s, Carson was impossibly likable. America felt it knew him. 

Carson made comedy not just funny but hip. He was a suave host, handsome, known as a ladies’ man not just because of his myriad nuptials but also because of his charming, flirtatious ways on camera and, more self-destructively, his constant affairs. Carson wasn’t the first comedian to do politics, but he whetted America’s appetite for the political and topical, cranking out bipartisan jabs about Spiro Agnew or Ted Kennedy or on-the-news issues like “smog,” the ecological threat of its day. 

His cool comedy was never sectional, always universal. His first night on the air in 1962 was telling. The premiere was broadcast live—this was before the show was taped in the afternoon, as it was in Zehme’s day and as late-night comedy is still “recorded live” today. It was the night a riotous white mob met James Meredith, the first “Negro” student at the University of Mississippi. In a sly joke that tacitly expressed empathy for the civil rights pioneer and didn’t blatantly offend the still large segregationist section of his spanking-new audience, Carson wondered how the lily-white Ole Miss campus would treat the Jolly Green Giant if the canned vegetable mascot tried to enroll. It was funnier than it reads but emblematic of a comedy that walked the line.

That said, Carson was risqué within the bounds of network TV, still regulated by the Federal Communications Commission before standards loosened and largely unregulated cable TV proliferated. In one bit, the actor Ed Ames, who played the Cherokee buddy “Mingo” on television’s Daniel Boone, was tossing an ax to demonstrate how he didn’t use a stuntman and actually knew how to throw a tomahawk. (This was 1965, and Ames was not Native American; such were the times.) The baritone-turned-actor hit his target—a silhouette sketch of the human body on a big piece of wood—directly in the crotch. It took a few seconds for laughter to build as the studio audience, some sitting at a distance without the TV viewer’s up-close and direct shot, appreciated the hilarity of the blade’s entry. Carson, known for being unafraid of silences, waited a few seconds to let the laughter build and, still more, to subside before delivering the perfect riposte to the ax-meets-penis toss: “I didn’t even know you were Jewish.” The allusion to circumcision became one of his best-known spontaneous sidesplitters. 

Carson’s famous reserve has been attributed to his midwestern roots—like the old joke about the man, happily married for decades, who almost told his wife he loved her—but his demons have been attributed to his mother, including by Zehme. Born in Corning, Iowa, in 1925 and raised in Norfolk, Nebraska, Carson didn’t have a miserable boyhood. It was largely happy, in a small town where his father worked for the power company, but his mother’s emotional withholding did scar him. Ruth Hook Carson could be outgoing, but she was also brutally frank about wishing she had not had two boys—male children, to her mind, were dirty and feral. When Carson graced the cover of Time in 1967, the article’s author shrewdly recorded a Carson monologue and played it for Ruth to gauge her reaction. She declared it was not funny. Yet years later, Zehme reports, Carson discovered that his often condescending mother had proudly kept many newspaper clips about her boy.

If Carson’s mother was his Rosebud—the inscrutable force behind his fame, success, and turmoil—magic was his salvation. He sent away for a magician’s manual as a boy, and thus was born an obsession. As a kid in Norfolk, the skinny boy went by the name of “The Great Carsoni,” and did shows at the Rotary Club for $3. He performed in college, and the prestidigitation obsession never ended. In the film of his 1991 onstage meeting with network affiliates, where he surprised the brass at 30 Rock with his announcement that he’d leave the following year, he can be seen practicing card tricks. 

Carson had the self-awareness to know that performing gave his shy self the confidence he needed. He could control his world onstage. Meeting new people, dealing with his wives and kids—the man with everything could be at sea. Even as he came to the end of his life, dying in 2005 at 79, Carson was distant from his much younger wife—a stunning blonde whom he had met when she had strolled past his beachside home. (The star had one up high in Malibu and one at sea level.) He invited her in for a glass of wine. It was his most placid marriage because she accepted that he was a fully formed loner, unable to change. 

That Carson faced no threats in the 11:30 time slot until the 1980s—when the syndicated Arsenio Hall Show picked up younger viewers and ABC News’s Nightline won some affluent ones—would have seemed unlikely when Carson started in showbiz. His first break came in Omaha radio, where he hosted a talk show, followed by a moderately successful run as a game show host. Then came a stint writing for the very popular comedian Red Skelton’s television show; a surprise on-air appearance, after the old vaudevillian injured himself in rehearsal, drew more of Hollywood’s attention. But Carson almost didn’t make it. A short-lived Johnny Carson Show on CBS was canceled after 39 weeks before he got the nod to replace Jack Paar as host of NBC’s Tonight Show.

But Carson was a hit from the beginning, thanks to the magical sleight of hand that made it all look easy. He brought along McMahon, his sidekick at ABC, and their chemistry worked. They modified the guest format, interviewing one at a time on a swivel seat next to Johnny’s desk, the better to turn toward the host. As new guests arrived, everyone slid down the couch, giving the oft-stilted talk show format more of a party atmosphere. Carson’s trademark golf club swing was improvised on the first show but proved a brilliant nightly tee-off. 

It would be easy to wax nostalgic about the Carson Era in American life, when there was something closer to a national conversation than our current riven shoutfest. Today’s much-lamented atomized, siloed media ecosystems—isolating and self-reinforcing—mean that conservatives find comic relief in Fox News’s version of a late-night talk show, Gutfeld!, while liberals can talk about Jimmy Kimmel’s and Stephen Colbert’s Trump-bashing the next day. 

The Carson approach is better not because it’s bipartisan but because it’s funnier, especially in our day. Even for those sympathetic to anti-MAGA humor, it’s a monotonous diet after a while. Carson’s progeny who avoided a firm political stance—Jay Leno (out of fashion with the cool kids) and David Letterman (forever considered edgy)—kept their own politics largely offstage. The taciturn midwesterner Carson wore the loud plaid jackets of a Governor Reagan–era Corvette or Lincoln driver—which he was—but no one knew how he voted. That hyper-discretion suited a time when talking about politics, in private life or in public, was considered impolite. In that sense, Carson’s reluctance to share his politics—even Zehme doesn’t crack this code—made his comedy better. The same is true for Chris Rock, who is a Democrat but takes enough whacks at “wokeness” to keep the audience guessing. The dad-friendly comedy of Jerry Seinfeld or my friend Jim Gaffigan (whose midwestern bonhomie makes him the perfect Tim Walz on Saturday Night Live) avoids politics, offering an escape from a bruised body politic. A one-note political act means that Colbert, no matter how genial and smart, can be monotonous.

Carson offered lessons for politicians, too. When he did his last show in 1992, sitting not at his iconic desk but on a stool in front of his trademark technicolor stage curtains, he thanked the audience, saying that he might reappear in their living rooms and bedrooms if he found something else he’d like to do that was worthy of them. But he never did, despite setting up that production company in Santa Monica. A 1994 appearance on David Letterman’s show, delivering a Top Ten list, was his last television appearance. Letterman was clearly his favorite over Leno, although Carson stayed out of the succession, not offering counsel to NBC. (The executives did not seek his advice, either, which he considered an affront.) Carson, the loner, didn’t need showbiz or anything else public-facing—tell-all memoirs, lucrative endorsements, corporate speeches, Vegas residencies—to fill the gap for the dozen years until he died. He knew he was going out as number one late at night, and he didn’t need to stay past his prime or do anything else public; sailing his yacht and lunching with Zehme and others were enough. 

Joe Biden, at 81, needed to be nudged and prodded off the stage. As for the 78-year-old Republican nominee, he is all rage, private and public. In late September, he mocked the liberal comics of late night, Colbert, Jimmy Fallon, and Jimmy Kimmel. “Those three guys—they’re being blown away by Gutfeld,” Trump said, adding that “they’re all dying.” Then the former president looked back fondly on an earlier time, which is very on brand for him. “Where is Johnny Carson? Bring back Johnny … these three guys are so bad.” 

But Carson himself took shots at Trump, as a womanizer, self-promoter, and petty child of privilege. He once deadpanned that a game show Trump came up with, Trump Card, would include an “eviction of the week.” It was an apolitical joke about a pre-political Trump that landed. Unlike the Donald, Johnny welcomed guests and elevated them, whether they were lefty and prickly like Gore Vidal, or tricky and conservative like Richard Nixon, or an insult comic like his friend Don Rickles. He teased them, laughed with them. Trump doesn’t elevate anyone for long. All too often, his most fawning acolytes end up on his enemies list, which is miles longer than the one Nixon penned. Johnny Carson didn’t have enemies. There was the family he disappointed, the flashes of anger owing to drink. But cruelty was never part of his act.

A Millennium of Conflict  https://washingtonmonthly.com/2024/10/29/a-millennium-of-conflict/ Tue, 29 Oct 2024 23:18:28 +0000 https://washingtonmonthly.com/?p=155867

Russia’s identity, not its security or the fear of NATO, has historically been the main driver of Moscow’s aggression toward Ukraine. But is the war really a genocide?

The post A Millennium of Conflict  appeared first on Washington Monthly.


An argumentative edge is a risky thing, especially in a book of history, even popular history. We want authors to have views—to see their material in a fresh light, to tell us what’s important, and to impose a frame on the raw facts that deepens our understanding of the past. Some readers even seek out history told from a particular point of view—Marxist history, for example, or postmodern history. But it’s easy for a historian to go too far, for a point of view to start to feel like a tendentious slant. Readers looking for truth quickly come to mistrust a writer who they feel has an ax to grind—especially when the case being made is an argument about genocide.

Intent to Destroy: Russia’s Two-Hundred-Year Quest to Dominate Ukraine by Eugene Finkel Basic Books, 336 pp.

Eugene Finkel, now a professor of international affairs at the Johns Hopkins School of Advanced International Studies, was born in Lviv, in western Ukraine, into a Jewish family deeply scarred by the Holocaust. As he tells us in his new book, Intent to Destroy, his grandfather Lev Finkel returned home from fighting in World War II to find that his extended family—parents, sisters, brothers-in-law, nieces and nephews—had perished. Eugene, born in 1977, went on to become a scholar of the Shoah, studying first in Israel, then the U.S. In 2017, he produced a well-received scholarly book, Ordinary Jews: Choice and Survival During the Holocaust. Clearly, he knows a great deal about genocide and has some authority to make a case about the violence being perpetrated by Russia in Ukraine. 

What he doesn’t seem to grasp is just how overused and muddy the word genocide has become, and how it might undermine rather than strengthen his case about Russia’s brutal aggression against Ukraine.

Perhaps fortunately for the reader, Intent to Destroy is really three books. Some 200 pages, more than three-quarters of the text, is solid history, a retracing of the relationship between Russia and Ukraine from medieval times through the present day. The second book, around 50 pages, is the case about genocide: a gruesome, litigious account of the past two and a half years of war. The third book, brief but thoughtful, is a discussion of what can be done: how Russians, Ukrainians, and Ukraine’s friends in the West can address the deeply rooted Russian attitudes that have given rise to the bitter history described in the book.

The history of Ukraine and Russia is a long, winding story, told very differently over the years by Ukrainians and Russians. It starts with a medieval principality, the Kyivan Rus, that flourished in the 11th century and at its height stretched from what is now Odesa to the Murmansk peninsula in northern Russia. Muscovy, founded in the second half of the 13th century, and the Russian Empire, which arose in the 18th century, claimed to be the principal cultural and political heirs of the Kyivan Rus. Many Western historians question this claim. But even if they sprang from the same seed, the two nations, Russian and Ukrainian, diverged sharply in the 13th and 14th centuries when Moscow was dominated by the Golden Horde, an authoritarian Central Asian political culture, and when Kyiv was swallowed by the Polish-Lithuanian Commonwealth, firmly anchored in the West.

In the centuries that followed, Russia grew into a powerful global empire. Ukraine remained more an idea—a people and a national aspiration—than an established polity, with no state of its own except for a brief interlude in 1917–19, until the country we know today was born in 1991 after the dissolution of the Soviet Union. But this discrepancy in no way diminished the tensions between the two peoples or Ukraine’s significance for, first, Muscovy, then Russia and the Russian Empire, then the Soviet Union, and now the Russian Federation.

Finkel’s narrative centers on six flashpoints in this serpentine conflict: relations between Ukraine’s nomad Cossacks and early tsarist Russia; the 19th-century consolidation of the Russian Empire; World War I and the Russian Revolution; World War II; the collapse of the Soviet Union; and the present day.

It’s not a new story, and Finkel draws on no primary sources. But his history unspools briskly. He’s a good storyteller, mixing large sweeps with close-up detail and a few emblematic anecdotes. And his pointed argument helps distinguish the book from others that explore the same broad territory. 

Finkel’s thesis: “Since the mid-nineteenth century, dominating Ukraine and denying Ukrainians an independent identity, let alone a state, have been the cornerstone of imperial, Soviet, and, eventually, post-Soviet Russian policies.” It’s an unimpeachable argument, and whatever Finkel’s lapses in nuance and shading, it is useful to have it laid out concisely and compactly in a single volume.

An important secondary argument centers on Russia’s motives. Have leaders from Catherine the Great through Joseph Stalin and Vladimir Putin been driven primarily by concerns about security—threats from Ukraine and its Western allies—or national identity? How a much smaller, weaker, poorer, historically less educated people without a state could threaten the identity of the Russian colossus is the deep story of the book, and Finkel lays it out well, focusing as much on intellectual and political currents in Russia as on Ukraine. 

Russian perceptions of the threat took various forms over the years. Already in the 18th century, when Catherine began colonizing and annexing parts of Ukraine, Russian popular opinion viewed the Kyivan Rus as the crucible of Russian civilization. By the 19th century, when the Russian Empire had grown so big that losing Ukraine would reduce Russians to an ethnic minority in their own land, Moscow elites tightened their grip, determined to hold on at any cost, including brutal subjugation. Under Putin, the argument takes a new twist that Finkel stretches to classify as identity driven: a national myth of Russian power and military prowess that derives its potency from a perception of external threats—in this case, Western influence on Russia’s soft underbelly, Ukraine. 

Finkel grasps, as he puts it thoughtfully, that “concerns of identity [invariably] shape perceptions of security.” But he argues, mostly persuasively, that “identity, not security or the fear of NATO, has historically been the main driver of Russian aggression” toward Ukraine.

The consequences of this hostility make for painful reading. Again and again, Russia suppressed Ukrainian language and culture, targeting teachers and libraries, changing city and street names, blanketing the territory with Russian monuments, imposing a Russian curriculum, appropriating Ukrainian artists and ideas as its own, imprisoning and eradicating intellectuals. This oppression often went hand in hand with economic exploitation and, when it was deemed necessary, economic depredation. (Putin is not the first Russian ruler to think that if he can’t have Ukraine, he must destroy it.) The apogee of this horrific urge was Joseph Stalin’s truly genocidal Holodomor, the government-driven grain and food shortages that killed some 4 million Ukrainians in 1932–33. 

Finkel’s focus on Russia—Russian intellectual history and internal political dynamics—generally serves him well. We meet several generations of militant Russian nationalists, eavesdropping on their meetings and reading their journals. We learn how Russian popular opinion was often the tail that wagged the dog of policy, as when the Russian leader Boris Yeltsin, who knew better, caved to the public’s craving for a hard line on Ukraine. Perhaps most chilling for today’s Western readers are the vignettes about otherwise widely revered dissident writers, Aleksandr Solzhenitsyn and Joseph Brodsky, who both venomously dismissed the idea of an independent Ukraine. 

But the book also pays a cost for this Russian focus. It sometimes seems that Ukraine—Ukrainian intellectuals and patriots—gets short shrift in the story. The protagonists of Intent to Destroy are mostly Russian, with Ukrainians often appearing more as objects than subjects, deprived of agency and given oddly scant treatment in many sections of the book. 

Finkel rarely misses an opportunity to dwell on what he sees as Ukrainian collaborators in Russian oppression: Cossack elites who craved the tsar’s protection; 19th-century intellectuals who sided with the then-dominant imperial culture; 21st-century communist sympathizers nostalgic for the good old days of the Soviet Union. It’s a long list that sometimes seems unduly prominent in the narrative. 

Finkel also goes out of his way to emphasize the moments when modern Ukrainian leaders used policy to advance national identity, as if this were something driven largely top down by politicians and not a product of genuine popular yearning. Ordinary Ukrainians—peasant farmers, urban workers, intellectuals, and artists—have been chafing at Russian domination for centuries and often resisting fiercely. What’s missing from Finkel’s story, never fully evoked, is the dynamism and intensity of the Ukrainian national feeling—several hundred years of pent-up national feeling—that is now powering the response to Putin’s war of aggression.

Still, even with this skew and some quibbles about Finkel’s tendency to assert rather than demonstrate his arguments, the book makes a powerful, important case that should be required reading for Americans, both those who sympathize with Ukraine and those, like Donald Trump, who argue that the West provoked today’s war by considering NATO membership for Kyiv. “Ukraine’s desire to join NATO was met with such outrage in the Kremlin not because the move endangered Russia,” Finkel writes, “but because Kyiv sought to break free from Russian dominance … The real threat that Ukraine poses is not to Russia’s national security but to the stability of its autocratic regime.” 

It must have been a heady moment for Finkel in early 2022 when his comments about the war made headlines: “Killings in Ukraine Amount to Genocide, Holocaust Expert Says.” He and no doubt his publisher are likely hoping for the same effect with Intent to Destroy, and he spares nothing in his effort to make a grisly case. 

Finkel's introduction includes a paragraph pointing to the first uses of the term genocide and citing the UN definition: the "intent to destroy" a national, ethnic, racial, or religious group, a phrase that serves as the title of his book. Then, in two later chapters, he offers a selective narrative of the current war, a litany of shocking stories taken from Western and Russian media and a handful of Ukrainian interviews. We hear Russian nationalists fulminating about Ukraine and egging Putin on, as if he needed it. The goal of the operation, one militant argues, must be rendering Ukraine "impossible as a nation state." After the victory, the writer goes on to claim, the population at large will require "reeducation" and "de-Ukrainization"; the elite must be "liquidated."

We also learn about the horrific behavior of Russian troops on the ground: cultural destruction—looting museums and libraries—along with filtration camps, detention centers, torture, summary execution of prisoners, and the kidnapping of perhaps a half-million Ukrainian children. Then there is Bucha, where several hundred Ukrainian civilians were killed and left in the street to rot or buried in mass graves, followed by the destruction of Mariupol, a city all but razed by Russian missiles, where thousands of civilians died.

These facts make for a bitter indictment. But do they add up to genocide? Finkel doesn’t so much argue as assert the case, never considering the possibility that the Russian invasion, monstrous as it is, might not merit his dramatic charge. “Violence against Ukrainian civilians is the defining feature of the conflict,” he writes, “an orgy of uncoordinated mass murder … a genocidal campaign.”

Again and again, Russia suppressed Ukrainian language and culture, targeting teachers and libraries, changing city and street names, blanketing the territory with Russian monuments, imposing Russian curriculum, appropriating Ukrainian artists and ideas, imprisoning and eradicating intellectuals. The apogee was Joseph Stalin’s truly genocidal Holodomor, which killed some 4 million Ukrainians in 1932–33.

Even as someone living in Ukraine, at risk daily from Russian missile strikes on critical infrastructure and other civilian targets, I find this claim somewhat exaggerated. Civilians have been targeted; many civilians have been killed. But “an orgy of mass murder” comparable to the Holocaust? I think that obscures more than it clarifies. 

In the end, no doubt, this is a matter of judgment, and hindsight may vindicate Finkel's view. But in an age when even college curricula and museum exhibitions can be tarred with the brush of genocide, I agree with the poet and critic Adam Kirsch, who has argued that perhaps the term should be "retired." As Kirsch wrote recently in The Wall Street Journal:

“Genocide” has become one of those contested words that can only impede communication; rather than illuminating a wrong, all too often it just provokes debate about whether the wrong meets the definition of the word.

Dropping the term is no doubt out of the question for Finkel, but he could have made a stronger case for his claims if he had been even a little circumspect in asserting them. There’s not much discussion in the book of historical debates about what constitutes genocide: Is the main criterion mass murder or intent to destroy national identity, or, as Hannah Arendt proposed, both in combination? Finkel makes no mention of ongoing disputes about whether Stalin’s Holodomor, surely the worst episode in Russian-Ukrainian history, was indeed genocide. The journalist Anne Applebaum, arguably the West’s leading expert on the Holodomor, argues yes, but many international legal scholars are more hesitant. Finkel could also have bolstered his case with more careful consideration of the numbers. 

There's a reason that human rights monitors, who know how essential it is to be credible, offer painstaking estimates, detailed methodological explanations, and ranges of casualties rather than absolute numbers. Finkel, in contrast, invariably claims the highest number he can find, with little or no explanation. Were 501 civilians massacred in Bucha, as he maintains? Or 458, the official Ukrainian number? Or somewhere between 73 and 178, as the UN High Commissioner for Human Rights estimates? Any of these totals would be horrific—and inexcusable—but Finkel does his case no service when he fails to examine the numbers more closely.


Finkel’s concluding section returns to the sober, measured tone of the early part of the book: a compact but probing review of what can be done to address the deep-seated Russian attitudes that have given rise to the present war. He frames the discussion with an unsettling question: “Is coexistence between the two nations even possible?” Full reconciliation is, he recognizes, highly unlikely unless Russia is roundly defeated, an outcome hard to envision given the situation on the battlefield today. This leaves a variety of policy options, some better than others, for Russia, Ukraine, and the West. 

Of the two types of threats driving Russian belligerence—concerns about security and national identity—security is easier to address, although even there, in Finkel's view, there are no airtight solutions. He dismisses proposals for Ukrainian neutrality, such as promising not to join NATO, and Ukrainian development of nuclear weapons as unrealistic and dangerous. So too the idea, touted by Volodymyr Zelensky and others, of turning Ukraine into a "big Israel," a heavily militarized state, constantly on alert and ready for war. Without nuclear arms, Finkel argues, an Israeli approach is likely to go only so far. The best option, Finkel rightly says, is admitting Ukraine to NATO, the sooner the better. But even this could do only so much, leaving plenty of options—"electoral interference, support for corrupt politicians, economic pressure [and] disinformation"—for continued Russian aggression.

Finkel sees even fewer effective responses to Russians' aggressive fixation on national identity. The West can help to contain Russia militarily; it can support Ukrainian democracy and ensure that Kyiv has the means to protect itself. But there's very little Europe, the U.S., or Ukraine can do to drive meaningful change in Russian attitudes. Postwar Ukraine's best course—arguably, its only course—will be to take care of itself, focusing on reconstruction and democracy building. But all of this leaves the root cause of the problem unaddressed.

Finkel ponders at length how Russia itself might engineer a change in public attitudes, with K–12 education, “films, plays and exhibitions,” political leadership, and more. He acknowledges that this would be difficult and likely to take a generation or more, but nevertheless seems to convince himself that it is possible. His vision: “Russians need to learn, understand, and come to believe that Ukraine is a different country and not a severed limb of Russia, that Ukrainians are not Russians who speak in a funny dialect, and that the Russian World is an invention of politicians seeking resources and prestige.” 

Amen to that. Finkel is not wrong that this would cut to the heart of the problem, relieving Ukraine and Europe of the menace on their doorstep. I only wish I could agree that it is plausible or likely.  

The post A Millennium of Conflict appeared first on Washington Monthly.

Intent to Destroy: Russia's Two-Hundred-Year Quest to Dominate Ukraine by Eugene Finkel Basic Books, 336 pp.
Why They Reign Supreme https://washingtonmonthly.com/2024/10/29/why-they-reign-supreme/ Tue, 29 Oct 2024 23:17:57 +0000 https://washingtonmonthly.com/?p=155868

A fresh and readable one-volume history of the Court explains how we got from Marbury to Dobbs.

The post Why They Reign Supreme appeared first on Washington Monthly.


“The Supreme Court,” Frederick Douglass told a civil rights meeting in 1883, “is the autocratic point in our national government. No monarch in Europe has a power more absolute over the laws, lives and liberties of his people, than that Court has over our laws, lives, and liberties.”

The Most Powerful Court in the World: A History of the Supreme Court of the United States by Stuart Banner Oxford University Press, 626 pp.

Stuart Banner, a legal historian at UCLA, quotes these words in his magisterial new book, The Most Powerful Court in the World, to illustrate one part of the public reaction to the Court's disastrous decisions in the so-called Civil Rights Cases, which eviscerated the Civil Rights Act of 1875 and locked in the legality of Jim Crow for three generations. But he also includes them in part, I suspect, because they echo his central thesis—that since the dawn of the Republic the high court has been a font of unprecedented power, and that this was a feature, not a bug, in the original design. The idea is the subject of some controversy, since the text of the Constitution does not explicitly grant the "one Supreme Court" the power of reviewing state and federal laws for conformity with the Constitution. Some historians regard John Marshall as having grabbed for power in Marbury v. Madison, claiming for the Court a power it did not clearly possess. The Framers, however, discussed the then-nascent practice of judicial review (which was being created in the state courts at that time), and it seems likely that they anticipated it as a feature of the new federal court. Banner reads the evidence as clear. At the Founding, he writes,

the justices were merely a small group of unelected lawyers, yet they were strong enough to set aside state and federal statutes, the ultimate outputs of the democratic process, by holding them unconstitutional. They could issue orders to states and to the other branches of the federal government. No country had ever given its judges so much authority.

Over the 30 years I have been teaching constitutional law, students and ordinary laypeople have asked me for a useful, accurate, dispassionate one-volume history of the Court and its encounter with the Constitution, and I have felt constrained to answer that such a book doesn’t exist. 

Now it does. 

This book will be a boon to anyone who wants to begin a serious study of constitutional law and its chief oracle, the Supreme Court. Though events are changing the Court rapidly as I write, the book is, as of 2024, complete, readable, and authoritative. That I feel called upon to poke at the thesis a bit in what follows is a sign not of its deficiencies but of its success. A good work of history is like a rousing parlor game—readers may begin the evening in the corner feigning indifference, but by evening's end they will be shouting out clues and guesses as part of a happy clamor. There is so much here that everybody will find something to disagree with—and much to appreciate and value.


Banner has taken on the role of writing a history of the Court as an institution—showing how history and legal changes have slowly transformed it from a body that, though important, was mostly peripheral to the main action into one that today clearly believes the hype its acolytes have showered on it in the years since Richard Nixon began the Court's long march to the right. Former Solicitor General Kenneth Starr, an energetic legal conservative, in 2002 proclaimed the Court the "first among equals" in the federal government. The Court, he said, actually set national policy, and the executive and legislative branches carried out its commands. Even in the wake of Bush v. Gore, it seemed an extravagant claim at the time; no longer. The Court has the bit in its teeth, and here is a qualification to Banner's argument that the Court's power was there at the Founding: The extent and pervasiveness of its legal and cultural impact seem well beyond anything that might have been foreseen at the time. Though the Court has always had the power of judicial review, over time a difference in degree—and its willingness to use that power for often-dubious ends—has become a difference in kind. Today's Court would be unrecognizable 200 years—or, for that matter, 75 years—ago.

Banner’s early chapters give an excellent summary of the structural indignities that restrained the Court in its early years. Not only did the justices not have their own courtroom (that would not come until 1935), but they also were saddled by Congress with the onerous task of “riding circuit”—traveling, at great trouble and expense, the primitive roadways of the new nation in order to preside, along with humbler district judges, over actual trials, which they would then review on appeal once they had staggered back to Washington, D.C. This was exhausting work for lawyers of middle age. John Jay (who resigned from the Court in 1795 to run for governor of New York) complained that circuit riding “takes me from my Family half the Year, and obliges me to pass too considerable a part of my Time, on the road, in Lodging Houses, & Inns.” Even when their “rides” were completed, they did not go home. Instead, they bunked in together, Alpha House style, in a boardinghouse. “We live very harmoniously and familiarly,” Justice Joseph Story told a friend in 1812. “We moot questions as they are argued, with freedom, and derive no inconsiderable advantage from the pleasant and animated interchange.” And while this does sound rather jolly, I imagine that some early members of the Court would have preferred to be at home with their families. (Not until 1869 would the justices begin to buy homes in the capital.) 

Oral argument must have been a burden as well. True, it featured the orations of figures like Daniel Webster, but the incessant roar of even a lion can grow tedious over days and hours—and in those days, a case’s oral argument went on from day to day until the adversaries had simply run out of things to say. Balancing that burden, however, was freedom from the constant reading that is the center of action in today’s court: There were no written briefs to read. 

But most problematic of all (and most relevant to Banner's thesis) was the fact that the Court had to take all cases that walked in the door. While law students marvel at the wisdom of Marshall and Story in "great cases," most of the work of the Court was, and remained for decades, appeals from ordinary real estate, commercial, and criminal cases—a flood so incessant that there was little time to confer and draft opinions, most of which tended to be short and so unenlightening that they were not even published.

I hasten to add that readers curious about the “great cases” will find God’s plenty here. Banner knows why readers want to know the Court’s history. The summaries of these cases—from McCulloch v. Maryland to Dred Scott v. Sandford to Plessy v. Ferguson and on to Brown v. Board—are clear and correct. Banner seldom editorializes, even about the Court’s worst decisions; his goal is to situate each one in the institutional history of the Court. Thus it is of importance that circuit riding, despite desperate pleas from the justices, was not abolished until 1891.

Here's where Banner's essential thesis—that the Court today functions as it was designed to by the Framers—can be subjected to some examination. Let us stipulate that the Founding generation understood the Court to have the power of judicial review not simply of state statutes but of acts of Congress as well. Still, the small court hidden in the Capitol basement, grinding through hundreds of humdrum appeals, could exercise that power only seldom, however formally it possessed it.

Indeed, after Marbury v. Madison invalidated one small portion of the federal Judiciary Act of 1789, the Court did not strike down another federal statute until 1857. This was the disgraceful case of Dred Scott v. Sandford, in which Chief Justice Roger B. Taney wrote an opinion explaining that Americans of African descent were not and could never be citizens, and that Congress, which had abolished slavery in the Northwest Ordinance of 1787, had never had the power to do that, and thus that the Slave Power could extend itself wherever the American flag flew. Banner traces the maneuverings around the opinion (two justices had alerted President James Buchanan to the impending decision, which he and they wrongly thought would put an end to the slavery question altogether).

After Dred Scott, however, judicial review gradually picked up steam, until by the 1880s the Court was ready to bar any measures to protect Black Americans from discrimination while at the same time tenderly protecting nascent giant corporations from bothersome legal interference. The Civil Rights Cases were followed by United States v. Reese (Congress could not punish whites who barred Black voters from registering) and United States v. Cruikshank (courts could not punish whites who engaged in an all-out massacre of Black people in order to take over an elected local government); Giles v. Harris (the Fifteenth Amendment did not allow courts to strike down grandfather clauses and “citizenship” tests that eliminated Black voters from Alabama voting rolls); and the “segregation trio”—Plessy v. Ferguson (segregation by law on public transport fully constitutional), Cumming v. Richmond County Board of Education (county government could levy a special tax and use the proceeds to build a whites-only school, providing no school at all for Blacks), and Berea College v. Kentucky (segregated states could require even private schools to expel all Black students).

The roots of the Court’s perfidy are complex and controversial, coming as it did in the midst of the overthrow of Reconstruction and the rise of the American empire. White supremacy was undoubtedly much in vogue at that time, and was gaining in strength as the United States became an imperial power ruling people of color in the Caribbean and the Pacific. Banner writes that “the justices were men of their times.” 

The question seems to me more complicated: The justices might have been “men of their times,” but the Court, from the Civil Rights Cases on, did not so much follow public opinion as take an enthusiastic leading role in the gutting of the Civil War amendments. The justices published in their opinions some of the cruelest anti-Black rhetoric admitted into public discourse even back in that vulgar era. 

I also had questions about Banner's analysis of the fiercely anti-government and anti-labor period we call today "the Lochner court" (after a case holding that New York could not regulate bakers' hours of work as a health measure). Banner explains that the Lochner-era justices

subscribed to a nineteenth-century conception of government as empowered to advance the public good but not the private good of any individual or group. On this understanding, a law that took money or power from one group and handed it to another, for the purpose of benefiting the second group at the expense of the first, was out of bounds.

This explanation seems partial at best, for the same justices saw no problem in "redistribution" when it took the form of redirecting wealth upward; the era's labor cases are, to say the least, extreme and to me inexplicable purely as a matter of anti-redistribution economic theory. For that matter, redistribution doesn't explain Hammer v. Dagenhart, a 1918 case that, by blocking a federal prohibition on child labor, doomed a generation of southern children to service in the mines and mills, until the New Deal Court finally overruled it in 1941. The national consensus at the time was strongly against child labor, and a powerful national movement had inspired the statute at issue. The Court set its face resolutely against its own time in that struggle.


I must confess, too, that I wished for more colorful details about the assorted scoundrels who, elevated to justiceship, did their best to lower the court: Samuel (“Old Bacon Face”) Chase, a gouty wretch whose anti-Jefferson speeches from the bench led to the only successful impeachment of a justice in history (he was not removed); Salmon P. Chase, who was made chief justice largely to stop him from scheming to replace Lincoln as president and who spent the rest of his tenure angling for a presidential nomination—of either party—and sacrificing both his jurisprudence and his own family to that quest; Stephen J. Field, who wore a robe custom tailored to allow him to carry not one but two guns, and who to this day remains the only justice ever arrested for murder (he was quickly released); James C. McReynolds, a curmudgeon so unpleasant that legend attributes his appointment to the Court to Woodrow Wilson’s desire never to speak to him again; and William O. Douglas, another perennial presidential aspirant whose jurisprudence sometimes seemed calculated for electoral advantage and whose extracurricular romances made him “the first sitting justice to be divorced, and the second, and the third.” 

They are all present, to be sure, and if they get less than their due, clearly some ruthlessness was needed to contain a story this large. Banner includes plenty of interesting behind-the-scenes Court minutiae. The details of the period during and after Franklin D. Roosevelt's makeover of the Court are great fun—this is the time when the Court earned the nickname "nine scorpions in a bottle," and their feuds on at least one occasion became the subject of a congressional investigation. Banner paints a moving portrait of Justice Frank Murphy, who served from 1940 until his death in 1949 and was known by most insiders to be half of a lifelong stable "marriage" to a male fellow lawyer. (Banner reports that when one socialite called Drew Pearson and told him she had celebrated so hard that she had to spend the night in Murphy's apartment, the columnist replied, "Well, there's no place you could be safer.")

Later chapters offer equally fascinating gossip: the rise and fall of Abe Fortas, a crony of Lyndon Johnson's who was driven from the Court by pressure from the Nixon administration after it was learned that he had accepted funds from a millionaire, and the attempted impeachment of William O. Douglas, which so infuriated Douglas that he canceled his retirement plans and remained on the Court until a stroke forced him into retirement.

But I must stop here, though I’d like to prattle on into the evening about the things I liked (or, more rarely, disliked) about The Most Powerful Court in the World. Come for the discussion of the Court’s power of judicial review, if you will, but stay for the sweeping narrative of American history. At the end, you might conclude, like me, that the present Court, with total control over its own docket and an aggressive theory of judicial review, is quite different from the powerful Court created by Article III. But no matter your theory of judicial review, if you want to deepen your understanding of the Supreme Court and its role in American history, buy this book.


The Regressive Era  https://washingtonmonthly.com/2024/10/29/the-regressive-era/ Tue, 29 Oct 2024 23:17:26 +0000 https://washingtonmonthly.com/?p=155869

A new biography of Woodrow Wilson puts the 28th president’s racism and sexism at the center of its narrative—and his world-historic domestic and international achievements on the periphery.

The post The Regressive Era  appeared first on Washington Monthly.


In February 1915, President Woodrow Wilson hosted the first screening of a motion picture at the White House. It was a gala affair, and VIPs clad in formal evening wear gathered in the East Room, where President Abraham Lincoln had once lain in state. The movie, Birth of a Nation, an incendiary film glorifying the Ku Klux Klan, had opened in Los Angeles two weeks before, where it was met with both critical acclaim and scathing public protest.

Woodrow Wilson: The Light Withdrawn by Christopher Cox Simon & Schuster, 640 pp.

The movie was not a random Hollywood selection. Rather, the film was based on an equally inflammatory best-selling novel, The Clansman, written by one of Wilson's oldest and most intimate friends, Thomas Dixon. And Wilson didn't merely endorse the movie—his own academic writings as a scholar of American history had provided the film's (and the book's) historical framework. One intertitle card that accompanied the silent film quoted Wilson's description of Reconstruction in his History of the American People as a misbegotten scheme to "put the white south under the heel of the black south." As white-sheeted Klansmen gathered on the screen, a second intertitle quoted Wilson's celebration of the rise of white supremacy: "At last there had sprung into existence a great Ku Klux Klan, a veritable empire of the South, to protect the Southern country."

Wilson was delighted by Birth of a Nation, and discussed with the director, D. W. Griffith, how the administration might use the new medium of motion pictures to sway public opinion. He volunteered to assist Griffith in future historical projects. Only months later, after Griffith and Dixon had publicly touted the White House’s implicit approval and after the movie sparked protests throughout the Northeast and Midwest (even while setting box office records that would persist for 25 years), did Wilson, under pressure from his aides, implausibly claim to have been “entirely unaware of the character” of the film. 

Today, Birth of a Nation is widely credited with normalizing the Klan and rekindling its long-dormant membership, and the White House event is often cited as a stain on the Wilson administration. But as Christopher Cox demonstrates in his deeply researched, important new biography, Woodrow Wilson: The Light Withdrawn, this event was neither an aberration in Wilson's life nor simply a reflection of the casual racism typical of the time. Rather, Cox argues, white supremacist ideology and the related theme of the protection of white womanhood were central to Wilson's life's work, both in academia and in public office. Among contemporary scholars, the implicit racism of Wilson's administration is widely acknowledged, but Cox's biography is richly detailed and provides an array of shocking examples that might be new to armchair historians.

In recent years, cancel culture has come for Woodrow Wilson, with activists citing his academic writings and federal policies implemented during his presidency as evidence of overt racism. In 2020, in the wake of Black Lives Matter protests, Princeton—where Wilson served as a professor and university president—removed his name from a residential college and from its school of policy and international affairs. Monmouth University also removed Wilson’s name from a marquee building, and in 2022, Washington, D.C., renamed Woodrow Wilson Senior High School, the city’s largest public high school, Jackson-Reed. 


Cox's biography offers a scholarly justification for this denunciation of the 28th president and a vigorous counterargument to the generations of Wilson biographers who unequivocally celebrated their subject as a liberal hero. While such biographers have often lauded Wilson as the first modern president, Cox argues that Wilson had more connective tissue with the Confederate past than with the future. Examining Wilson primarily through the lens of race and gender, Cox assembles a convincing body of evidence that Wilson was committed to white supremacy as a matter of public policy, a commitment manifest not just in his racial politics but also in his hostility toward women's suffrage. As Cox—a Republican who served as a U.S. congressman from California for 17 years—writes, "As the first southern Democrat to occupy the White House since the Civil War era, he was superbly unsuited for the moment."

The Light Withdrawn is an important, long-overdue complement to the existing literature. But the hefty volume is a narrow study of the 28th president, with a particular focus on Wilson’s lifelong opposition to racial equality, and how this ideology affected the federal campaign for women’s suffrage, which forms the book’s narrative heart. 

In 1948, the venerated historian Arthur Schlesinger listed Wilson among the six greatest presidents of American history. Like Schlesinger, most Wilson biographers have pointed to their subject’s many progressive reforms, ranging from the creation of a progressive income tax to the birth of government agencies such as the National Park Service, the Federal Reserve, and the Federal Trade Commission. Wilson burnished his legacy as an eloquent champion of democratic ideals during World War I and as an architect of the League of Nations, which established the principle of collective security among allies that has guided American foreign policy for a century. 

Cox treats Wilson's many achievements—and, in particular, his domestic policies—as peripheral to his main narrative, and the imbalance takes some of the force out of the book. A biographer's responsibility, after all, is to paint as complete a portrait of their subject as possible, and a more ambitious biography would concern itself with the moral tension in Wilson's legacy. Detailing his achievements would not mitigate Wilson's racism, but it would provide a fuller, more robust accounting of his profound influence on the 20th century.

Born in Virginia in 1856, Wilson carried the racial prejudices of his southern upbringing for his whole life. His father was a former Confederate officer, and Wilson’s childhood home was staffed by enslaved people. Wilson spent his formative teenage years in South Carolina during Reconstruction, which shaped his worldview. 

As a professor, first at Bryn Mawr—where he expressed open contempt for the women’s college’s formidable president, Martha Carey Thomas—and then at Princeton, Wilson wrote textbooks on U.S. history and government that reflected his commitment to white supremacy. In his textbook The State, for example, Wilson constructed a racial hierarchy with Aryans at the top, and “primitive” and “savage” races, comprising most of the world’s population, at the bottom. He disparaged eastern European immigrants, describing them as “shiftless,” and supported the exclusion of immigrants from China and Japan. In the classroom, he asserted that slavery “had done more for the negro in two hundred and fifty years than African freedom had done since the building of the pyramids.” In his early writings, Wilson described universal suffrage as “the foundation of every evil in this country.” On campus, Wilson was well known for his exaggerated imitations of Black dialect and his racial jokes.

When he became president of Princeton in 1902, Wilson put his ideology into practice, squashing discussions of racial integration and musing that it would be “extremely unlikely” that admissions of Black students would “ever assume a practical form.” A 1910 research report comparing 14 elite universities noted that Princeton alone refused to admit Black students; the school was also strikingly anti-Semitic. “Harvard’s ideal is diversity,” the researcher pointedly concluded, while “the aim of Princeton is homogeneity.” 

Wilson entered politics the same year, winning his race for the New Jersey governorship, which served as a stepping-stone to the presidency. Wilson was elected president in 1912 in a fluke election, thanks to Theodore Roosevelt, whose third-party candidacy split the Republican vote. Nominated at a contested convention on the 46th ballot, Wilson was a compromise candidate for a Democratic Party divided between its northern and southern leadership and all but shut out of presidential politics since the Civil War. 

Wilson brought his racial politics with him to Washington. Within weeks of his inauguration, his cabinet began implementing Jim Crow policies in the previously integrated federal government. Segregation soon marked the entire federal civil service, with separate office spaces, cafeterias, and bathrooms designated by race. Wilson replaced senior Black appointees hired by the Taft administration with white men. When challenged, Wilson defended the segregation of the civil service as in “the best interests of both races in order to overcome friction.” Wilson’s actions cast a long shadow: The federal government remained segregated until 1948.

Wilson’s fraught relationship with the women’s suffrage movement comprises much of The Light Withdrawn’s central narrative. Cox offers a rarely told, behind-the-scenes account of the fight from the perspective of both lawmakers and suffragists. 

Wilson assumed the presidency in 1913, just as the movement was reaching critical momentum. Most histories depict him as a lukewarm proponent of suffrage, unwilling to expend much political capital on the issue, but an eventual convert and essential advocate. Cox, by contrast, argues that Wilson deserves little credit for the passage of the Nineteenth Amendment. Rather, the president spent years trying to foil suffragists’ demands, first by ignoring them, then by censoring them, and finally by denying protesters’ civil liberties. Wilson ultimately supported women’s suffrage when doing so was politically expedient and the passage of the amendment became inevitable late in his presidency.

Until Wilson reached office, women’s suffrage had been an issue left to the states. Although a proposal for a constitutional amendment had been submitted to Congress each session since 1878, it had never received serious consideration. A bipartisan anti-suffrage coalition in Congress and throughout the country had long opposed the women’s vote because of traditional beliefs in feminine purity and the ideology of separate spheres. Wilson shared these beliefs, musing that if women were granted the vote, “it is the home that will be disastrously affected.” 

But Wilson’s opposition to women’s suffrage for most of his presidency rested on more than idealized gender roles. Like that of many southerners, his opposition was deeply entangled with white supremacy. 

For decades, white southerners had successfully limited Black male suffrage, relying on Jim Crow laws that restricted voting and mandated whites-only primary elections, which effectively blocked the power of Black men’s votes. But the so-called Susan B. Anthony amendment would guarantee suffrage to all citizens, including Black women, and the right would be enforceable by the federal government. White southerners considered this an existential threat. It would be “absolutely intolerable,” a Tennessee congressman asserted, “to double the number of ignorant voters by giving the colored woman the right to vote.”

Wilson understood that a race-based argument against women’s suffrage was unpalatable for a national audience. He was remarkably successful in evading the subject, even as the proposal for a constitutional amendment to guarantee women’s suffrage became the nation’s most contentious domestic issue. Year after year, he declined to mention it in his annual address to Congress. When pushed, Wilson continued to argue that the decision should remain with the states, even as he confided to the suffragist Harriot Stanton Blatch that the states’ rights argument was simply a facade. “Dismiss from your minds the idea that my party or I are concerned about states’ rights,” Wilson told her. “It is the negro question, Mrs. Blatch, that keeps my party from doing as you wish.” Meanwhile, behind the scenes, he encouraged Democrats in Congress to do what they could to block what would become the Nineteenth Amendment, and privately supported altering its language to allow states the right to control enforcement, effectively permitting racial voting restrictions.

The suffragists were a perennial thorn in Wilson’s side. In 1916, Alice Paul’s militant National Woman’s Party urged already enfranchised women to vote Wilson and his party out of office, as punishment for failing to support the cause. In January 1917, after he won reelection by a whisker, the party began a campaign of quiet protest, with “silent sentinels” picketing at the gates of the White House. The protests, which continued for a year and a half, involved thousands of suffragists and initially attracted much press attention. Embarrassed by the picketers’ lingering presence, Wilson intervened, suppressing press coverage and, after the U.S. entered World War I, directing the wartime propaganda bureau to label the protests as unpatriotic. He ordered surveillance of suffrage leaders, and condoned police harassment of the pickets. 

Chillingly, Wilson was complicit in the arrest of hundreds of protesters on the trumped-up charge of obstructing the sidewalk. Suffragists were sentenced to up to seven months in squalid prisons and workhouses, where they were denied adequate food and water, legal representation, and communication with their families. When some protesters began a hunger strike, prison guards—under Wilson’s direction—commenced force feedings, while Wilson directed the head of his propaganda agency to deny maltreatment of the prisoners and to assert that “the treatment of the women picketers has been grossly exaggerated and distorted.”

During the war, Wilson issued an executive order permitting government officials to restrict international travel to anyone deemed a threat to public safety; at the war’s conclusion, the administration extended the ban to deny passports to virtually all Black applicants, along with members of the National Woman’s Party.

Suffragists and civil rights leaders pointed to the hypocrisy of Wilson’s soaring rhetoric extolling American democracy while denying its fruits to all Americans. At the conclusion of the war, one suffragist decried Wilson’s lofty evangelism for democratic ideals. “While President Wilson has sailed away to Europe to obtain democracy for the world,” she bemoaned, “American women, after six years, know how hollow his words are.” 

Cox notes in his introduction that more than 2,000 English-language books have been written about Woodrow Wilson, but until Arthur Walworth’s Pulitzer Prize–winning two-volume study in 1958, not one had mentioned either the women’s suffrage movement or the racial segregation of the federal government. Wilson’s exalted status as a progressive titan was seldom challenged before the public reckoning of recent years. By thoroughly excavating the president’s racial and gender ideology, Cox’s book is an important contribution to the scholarship. But it has limits, too, as a corrective. 

The Light Withdrawn does not ignore Wilson’s formidable achievements altogether; indeed, Cox praises Wilson in his introduction as “enormously consequential” for a progressive laundry list ranging from the progressive income tax to the Clayton Antitrust Act. He explains that Wilson was not simply a reactionary and takes pains to show how political alignments in Wilson’s day didn’t fit neatly into contemporary categories. Today, left-leaning economic policies often go hand in hand with calls for racial equality, but in the early 20th century, white supremacy was consistent with—even foundational to—white southern progressivism. Like other progressives, Wilson was concerned with ridding government of corruption, breaking up concentrations of financial and corporate power, and empowering democracy through political reform—even as he introduced racial segregation into the civil service. 

But because Wilson’s domestic and international achievements fall outside the central narrative of The Light Withdrawn, the results feel curiously reductive, as if Wilson’s life and role in history can be distilled to his white supremacy and sexism. Conveying the full breadth of Wilson’s achievements wouldn’t balance the moral scales, but those achievements are nonetheless fundamental to his complicated, contradictory, often infuriating story. Which is also, of course, the history of the United States. As a result, Cox’s biography feels both politically charged and incomplete, even as more traditional biographies, which venerate Wilson but ignore his racism, likewise fall short. 

Despite his damning narrative, Cox labels Wilson merely a disappointment—both to suffragists and to civil rights leaders who had trusted in his democratic ideals, and to contemporary students of history disenchanted by the president’s many shortcomings. Cox points to lesser-known historical figures like Alice Paul, the civil rights leader William Monroe Trotter, the presidential appointee and confidant Dudley Field Malone, and Representative Frank Mondell as the true heroes in the realization of women’s suffrage. 

The author shows considerable restraint in his conclusions about Wilson. Cox writes, “As the poet Whittier teaches, all of us who are Woodrow Wilson’s heirs owe it to ourselves to remember the man in full, and to ‘pay the reverence of old days to his dead fame.’ ” Cox’s thoughtful, deeply researched biography goes a long way toward stripping away the hero worship; perhaps the next biographer will build on this scholarship to offer the more comprehensive treatment this complex historical figure merits, and readers deserve.

The post The Regressive Era  appeared first on Washington Monthly.

Woodrow Wilson: The Light Withdrawn by Christopher Cox, Simon & Schuster, 640 pp.