Draining the Online Swamp
https://washingtonmonthly.com/2025/11/02/social-media-reform-democrats/
Sun, 02 Nov 2025

Democrats must embrace social media reform.

Instead of accepting the existing digital political battlefield as inevitable, Democrats should challenge it as a root cause of our dysfunctional politics, and vow to be the party that cleans it up.

Allowing our political discourse to be swallowed up by the internet and spit back out as chewed-up, attention-grabbing “content” has obviously not been good for the American psyche. Unfortunately, in the wake of the 2024 election, the Democratic Party seems to have decided that there’s no path forward except plunging headfirst into the online cesspool.

The DNC recently poured millions into a project called Speaking with American Men: A Strategic Plan, billed as an effort to “study the syntax, language, and content that gains attention and virality” in male-dominated online spaces. Meanwhile, a startlingly annoying crop of Democratic influencers emerged, teenage boys with resistance-lib politics who mimic online MAGA aesthetics to produce truly terrible videos like “Clueless MAGA Bro Gets SHUT DOWN During Debate.”

Politicians, too, are getting in on the fun. California Governor Gavin Newsom—apparently convinced that the key to being a successful governor is to fill the “liberal Joe Rogan” void—launched a podcast. His debut episode featured the late Charlie Kirk and was so obsequious that Kirk criticized him for being “overly effusive.” Later, after an amicable debate with Steve Bannon, Newsom switched to mocking Donald Trump in posts that mimic the president’s disjointed, braggadocious style. 

As demeaning and debasing as all this is, maybe it’s also part of a necessary correction. The left’s approach in recent election cycles—engaging in digital shaming pile-ons and strong-arming social media companies into moderating speech on their platforms—mostly backfired by convincing millions of Americans that liberals were censorious scolds. But if that strategy proved counterproductive, this frantic attempt to match the online MAGA world’s tactics in an attentional race to the bottom might be just as doomed to fail. 

I’d like to suggest a different approach, one that is already gathering adherents among liberal strategists, grassroots activists, and legal scholars. Instead of accepting the existing digital political battlefield as inevitable, Democrats should challenge it as the root cause of our dysfunctional politics, and vow to be the party that cleans it up.

The core of the issue is that we have a digital economy built on a business model of trapping people in the virtual world for as long as possible. D. Graham Burnett, a historian of science at Princeton University, calls it “human fracking,” where instead of oil, companies are extracting our attention. Should we really be surprised, then, that at the same time the internet is subsuming the real world, what’s emerging around the globe is a convulsive, reactionary politics? Addressing this crisis—rather than fueling it—should be central to the Democrats’ strategy in the digital age. 

There are abundant signs that people want something different. For example, a 2020 Pew survey found that about two-thirds of American adults believe social media is negatively affecting the country’s direction; only one in 10 think it’s helping. Perhaps the most compelling evidence of social media users’ simmering discontent comes from a study conducted by economists at the University of Chicago. Participants were asked how much money they would need to be paid to quit social media. The average answer was around $50. But the participants’ attitudes changed dramatically if asked to imagine that everyone else had already quit. In that case, they would not only give up their accounts—they’d be willing to pay to do so. That exodus may slowly be under way—time spent on social media peaked in 2022 and has slightly fallen since, with young people representing the largest decrease.

Thankfully, there are more options in the playbook than to surrender to the internet or simply log off. As the political speech scholar Susan Benesch put it to me, the question Democrats need to answer is this: “Do you want to ape the game the other side is already playing in an environment that is bad for people, or do you want to change the game?” Democrats don’t need to abandon the digital sphere to challenge its terms; in fact, a few of its younger rising stars have shown that liberal messages can gain purchase. But to answer that hunger for reform, while also creating a level playing field for sane, fact-based discourse, Democrats must make a serious political commitment to reforming these technologies that govern our lives and yet remain subject to no meaningful democratic oversight. 

The movement to make online life healthier is already well underway. Currently leading the charge is a growing coalition of parents, scholars, and activists focused on protecting children from the most harmful aspects of the digital world. This movement has also launched a state-by-state campaign to claw back some modicum of digital privacy, with state legislators finally taking steps to curb the relentless surveillance and manipulation of individuals by tech platforms. 

Nearly every expert I spoke with agreed that these crusades offer Democrats a clear strategic opportunity: embrace these efforts, raise the profile of their issues, and campaign on federal legislation that advances them. If the goal is a more dramatic restructuring of digital life, though, these measures must be seen as just the opening moves. The harms that have galvanized these reform movements are symptoms of a broader underlying sickness plaguing the online world.

Right now, MAGA channels much of the anger toward modern life by offering a fantasy of a lost, mythic past. But if Democrats were willing to engage with this dissatisfaction honestly, they could expose the central lie of the movement: Despite all its posturing, MAGA wants nothing more than for you to live your life on a screen. 

Unlike the current race toward ever more optimized attention capture—which is also disastrous for our politics because it favors the loudest, most outrageous voices—the lane of reform belongs to Democrats alone. With the right dominant online, platform owners like Elon Musk and Mark Zuckerberg politically converted, and Republican leaders like Donald Trump and J. D. Vance pathologically online, the GOP is an unlikely reformer. Rather than join the stampede of human frackers, Democrats could be the one force fighting to protect your mind from being mined.

The most obvious place where the online world has impoverished the real one is childhood. Jonathan Haidt’s book The Anxious Generation documents the teen mental health epidemic that began in the early 2010s, arguing that the “great rewiring of childhood”—the shift from a play-based to a phone-based upbringing—fueled the crisis. Haidt outlines multiple causal pathways linking social media use to mental health harms: social deprivation, sleep deprivation, attention fragmentation, and addiction. To safeguard future generations, he proposes four societal norms: no smartphones before high school, no social media before age 16, phone-free schools, and a renewed emphasis on real-world independence and play. 

The book has achieved remarkable success, generating widespread media attention and sparking an eponymous movement attempting to enshrine Haidt’s norms into law. The most effective policy efforts to date have centered on making schools phone-free, with 26 states enacting legislation or executive orders to restrict or ban student cell phone use during the school day. Beyond that, some states have adopted more aggressive measures, banning social media entirely for children under 14 or implementing age verification laws that require parental consent for minors. Other states have taken aim at platform design itself, passing “Kids Codes” that turn on the highest privacy settings as the default for children or prohibiting personalized, algorithmically driven content feeds. Nearly all of these laws have faced legal challenges from NetChoice, the leading tech industry group, which has succeeded in securing injunctions in many cases as the court battles continue. 

Using the judicial system cuts both ways, though. Over the past couple of years, state attorneys general across the country have launched waves of litigation against social media platforms. The immediate outcome of these lawsuits is the exposure of troubling internal communications that reveal how these companies disregard the harms their platforms cause. For example, Kentucky’s lawsuit against TikTok alleges that the company deliberately concealed the app’s addictive design. Internal documents revealed that TikTok identified the habit-forming threshold—260 videos, or about 35 minutes—and linked compulsive use to negative mental health effects. Despite this, it took no meaningful action. A parental control feature, promoted as a safeguard with 60-minute time limits for kids, reduced usage by only 1.5 minutes on average. “Our goal is not to reduce time spent,” one project manager candidly admitted. 

In addition to revealing the true motivations behind platform decisions, these lawsuits could drive real accountability if legislation stalls. Perhaps most significant in this regard is the nearly nationwide lawsuit against Meta for knowingly harming children’s mental health and thus violating consumer protection laws, building on the documents leaked by the whistleblower Frances Haugen in 2021. “Nobody thought there could be a 44-state lawsuit against social media,” Haugen told me. “And now we have a lawsuit comparable to the tobacco lawsuit.” (The lawsuit involved 41 states and Washington, D.C.)

The other reform effort that has gained real traction is the fight for digital privacy. In recent years, data privacy advocates have slowly begun to restrict the surveillance and profiling rampant in the online world. Today, social media platforms and other tech giants engage in nearly limitless online surveillance in order to create a vast informational asymmetry between the internet platforms and their users. If corporations know everything about you—your communications, preferences, habits—it is much easier for them to manipulate you into acting in their best interest. Last year, U.S. internet users had their information shared 107 trillion times—an average of 747 exposures per person, per day.

California is home to most of the companies that pioneered this business model, which is why it was the first state to wise up and pass a comprehensive consumer data privacy law in 2018. To this day, the California Consumer Privacy Act (CCPA) remains the nation’s strongest privacy regulation. The CCPA was built around the concept of “data minimization,” which means that companies should collect and use personal information only for purposes that consumers would reasonably expect. For example, a person using a ride-share app would expect the app to use their location so the driver can pick them up and drop them off. The user would not reasonably expect the app to keep tracking their location well after the ride, learn that they visited a pawn shop, and then sell that information to Google, which in turn might start showing that individual ads for predatory loans.

The effort caught Silicon Valley by surprise and came as a blow to the online platforms, which have no interest in scaling back their immense data collection practices. Since then, industry lobbyists and privacy advocates have clashed in state legislatures over how to shape privacy laws. Unlike California’s approach, industry-backed bills typically allow companies to collect personal data without meaningful limits, so long as they disclose it somewhere in a privacy policy. Consumers who want control of their data must submit individual requests to every entity that holds it (often hundreds or thousands of companies). These laws also deny individuals the right to take companies to court. The tech industry has brought enormous resources to the fight: In 2021 and 2022 alone, 445 lobbyists and firms representing tech giants were active in the 31 states considering privacy legislation. Of the 19 states that have passed privacy laws, the vast majority have adopted industry-backed models. 

But in one of the most high-profile privacy battles to date, the tech industry suffered its first major defeat since California. Maryland state Senator Sara Love, who introduced the privacy bill in 2024 while still in the House of Delegates, said she had never experienced anything like it. “It was exhausting. I have never seen that level of lobbying. Nor had a lot of other legislators,” said Love, who has served in Maryland’s state government since 2019. “The first year lobbyists were telling legislators, ‘You’re not going to get your Starbucks points anymore,’ ” she recalled with a chuckle. “The biggest bunch of malarkey. But they [other state legislators] didn’t understand the technicalities of the bill.” 

This time around, Love succeeded in passing the strongest privacy regulation since the CCPA, and now a handful of other states are working on similar laws modeled after California’s or Maryland’s approach. The biggest challenge is convincing lawmakers that defying Big Tech’s warnings won’t bring the sky crashing down. “Knowing that it can be done helps,” Love reasoned. “The more of us that get these good strong bills, the more that will follow.” 

In The Sirens’ Call, Chris Hayes compares the development of today’s attention economy to the Industrial Revolution. “Attention now exists as a commodity,” he writes, “in the same way labor did in the early years of industrial capitalism … a social system had been erected to coercively extract something from people that had previously, in a deep sense, been theirs.” In fact, Hayes thinks the digital revolution may be even more disruptive because, “unlike land, coal, or capital, which exist outside of us, the chief resource of this age is embedded in our psyches. Extracting it requires cracking into our minds.” 

Much like in the early days of the Industrial Revolution, people today face a collective action problem: Those whose data is being mined have few institutions capable of pushing back against systemic abuse. Just as labor unions and worker protections emerged to confront the excesses of industrial capitalism, we now need digital equivalents to defend users in the age of surveillance capitalism. Rights like the ability to delete your data, to move it between platforms, and to hold companies accountable for their abuse of it should be the starting point. 

One promising step in this direction is the Digital Choices Act (DCA), a law passed in Utah this year. Doug Fiefia, the sponsor of the legislation, was inspired to act after growing disillusioned with the data practices he witnessed during his time at Google Ads. After joining Utah’s state legislature, he proposed a simple idea: Make it possible for users to take their data from one platform to another or delete it from any platform they choose. This kind of data portability could eventually allow users to organize and demand better treatment, backed by the leverage to deprive tech platforms of the data they so desperately need. “What we’re doing is taking back what we never should have given to this industry in the first place: control of our data,” Fiefia explained. “That should always have been ours.” Since Utah passed the DCA, the first legislation of its kind, six other states have reached out to Fiefia to explore introducing similar bills. 

Meanwhile, a group of legal scholars has been pushing for a promising reform called “friction-in-design” regulation. These reformers argue that the tech industry’s obsession with seamless efficiency has stripped users of meaningful choice and enabled what they call the “technosocial engineering of humans.” They propose deliberately designing pauses—like speed bumps in neighborhoods or warning labels on products—to help users reflect, exercise autonomy, and protect their well-being online. Previous attempts to regulate social media have often floundered because they focused on specific types of speech or particular actors, inviting partisan backlash. Friction-in-design offers a politically neutral alternative, one that addresses the underlying dynamics of online harm without censoring content or favoring one viewpoint over another. Just as speed bumps make roads safer without restricting where people can drive, digital friction can reduce harmful online behavior without infringing on free speech. 

There is a vast array of friction-in-design regulations that could curb harmful platform dynamics. Addictive features like infinite scroll (which continuously loads new content so users never reach a natural stopping point) and autoplay (which automatically plays the next video without user input) could be banned outright. Platforms could be required to impose automatic time-outs after extended continuous use. They could also be required to add a short delay after a user posts, likes, or replies to someone else. Content that begins to go viral could be deliberately slowed as it passes certain thresholds. High-reach accounts could be regulated like broadcasters, with posts that reach a certain audience threshold treated as public broadcasts. In the realm of privacy, courts could refuse to enforce automatic contracts and instead require evidence of actual deliberation by consumers. 

When tech platforms have voluntarily adopted some of these measures, they’ve worked. After a wave of gruesome lynchings in India in 2017 and 2018 that were sparked by viral false rumors of child kidnappings, WhatsApp restricted the forwarding of messages that had already been shared five or more times, allowing them to be sent to only one user or group at a time. That minor tweak resulted in the spread of “highly forwarded” messages declining by 70 percent. Of course, platforms are unlikely to adopt these measures on their own, which is why government action is necessary. 

The situation requires many more reforms than the ones outlined here. Aggressive antitrust enforcement to break up the monopolization of the digital economy is crucial. Today’s internet giants have achieved a level of vertical and horizontal integration with few parallels in American history; the parallels are few in large part because previous generations of lawmakers worked hard to prevent exactly this kind of consolidation. Even at the height of its power, Ma Bell couldn’t listen to your telephone conversations, learn that you were thinking of buying a house, and then sell that information to banks, which could then cold-call you at dinner with loan offers. Government today could largely eliminate this “surveillance” business model by enforcing existing law—breaking up tech companies’ control of competing social media platforms (like Meta’s ownership of both Facebook and Instagram) and requiring that they follow the same “common carrier” rules that governed previous communications technologies. This would curb a great deal, though not all, of their most exploitative behavior. 

Modifying Section 230, which allows platforms to hide behind total legal immunity even as they algorithmically make editorial decisions, is also important. Beyond that, greater transparency around how algorithms function, and stronger oversight to ensure that they serve the public interest rather than manipulate it, will be key to creating a healthier digital ecosystem. Most major platforms’ algorithms promote the content that gets the most user engagement, which gives undue weight to outrage and misinformation. But those incentives don’t have to be written in stone. Scholars and some smaller platforms are testing out different algorithmic systems, such as “bridging-based ranking,” which sorts internet content using metrics that promote constructive disagreement—such as whether users with opposing views engage with it positively. Documents from Haugen, the Facebook whistle-blower, reveal that the company tested bridging-based ranking in its comments sections, and found that it promoted posts that were “much less likely” to be reported for bullying, hate speech, or violence. But they decided not to implement it widely.
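
To make the idea concrete, here is a minimal sketch of what bridging-based ranking means in practice, as opposed to engagement-based ranking. It is purely illustrative: the data shapes and numbers are invented, and no platform's actual scoring formula is implied.

```typescript
// Hypothetical sketch of bridging-based ranking versus engagement-based ranking.
// "Camp A" and "Camp B" stand in for clusters of users with opposing views;
// the point is that a bridging score rewards approval from *both* sides.

interface Post {
  id: string;
  likesFromCampA: number; // positive reactions from one viewpoint cluster
  likesFromCampB: number; // positive reactions from the opposing cluster
}

// Engagement ranking: total reactions, which rewards whatever is loudest.
const engagementScore = (p: Post): number => p.likesFromCampA + p.likesFromCampB;

// Bridging ranking: dominated by the smaller side's approval, so content that
// only one camp loves ranks low no matter how intensely that camp loves it.
const bridgingScore = (p: Post): number => Math.min(p.likesFromCampA, p.likesFromCampB);

const posts: Post[] = [
  { id: "outrage-bait", likesFromCampA: 950, likesFromCampB: 10 },
  { id: "common-ground", likesFromCampA: 300, likesFromCampB: 280 },
];

// Engagement ranking puts the outrage bait first; bridging ranking reverses the order.
console.log([...posts].sort((a, b) => engagementScore(b) - engagementScore(a)).map((p) => p.id));
console.log([...posts].sort((a, b) => bridgingScore(b) - bridgingScore(a)).map((p) => p.id));
```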

Of course, none of this means Democrats can’t or shouldn’t engage with social media. Politicians like Alexandria Ocasio-Cortez and Zohran Mamdani have shown that it’s possible to use social media platforms effectively in service of progressive causes. But they are the exception. The structure of the current internet ecosystem overwhelmingly favors reactionary, conspiratorial content. Democratic engagement should be grounded not in mimicking that logic, but in naming the alienation the internet produces and offering a more humane alternative. 

The way we live online is not good for us. The average American now checks their phone 205 times a day, or about once every five waking minutes. The average young person today spends 5.5 hours a day staring at screens, putting them on pace to spend 25 years of their life online. Rates of depression, anxiety, and behavioral addictions have soared; rates of friendship and romantic relationships have plummeted. 

Meanwhile, those in the tech industry want to double down on all of it. Marc Andreessen, cofounder of the venture capital firm Andreessen Horowitz, is often called Silicon Valley’s “philosopher-king.” He argues that the goal of improving material conditions on Earth is misguided, the folly of those who cannot see past their own “reality privilege”:

A small percent of people live in a real-world environment that is rich, even overflowing, with glorious substances, beautiful settings, plentiful stimulation, and many fascinating people to talk to, and to work with, and to date … Everyone else, the vast majority of humanity, lacks Reality Privilege—their online world is, or will be, immeasurably richer and more fulfilling than most of the physical and social environment around them in the quote-unquote real world … Reality has had 5,000 years to get good, and is clearly still woefully lacking for most people; I don’t think we should wait another 5,000 years to see if it eventually closes the gap. We should build—and we are building—online worlds that make life and work and love wonderful for everyone, no matter what level of reality deprivation they find themselves in.

It’s hard to imagine a more noxious ideology—or less likable messengers—to run against. To think that the answers to our most existential problems could be found by building ever more seductive online worlds to disappear into is essentially a surrender of faith in the human project. No wonder figures like Peter Thiel hesitate when asked if humanity should even survive. Let MAGA have this bleak vision. And let them own the horror that is our online world. 

Democrats, meanwhile, should start listening to other voices. Zadie Smith, more than a decade ago, wrote that the technology shaping our lives is unworthy of us. We are more interesting than it. We deserve better. It’s time to build something different.

Monopoly Men
https://washingtonmonthly.com/2025/11/02/age-of-extraction-tim-wu/
Sun, 02 Nov 2025

Big Tech platforms and the Gilded Age trusts have something in common: a stifling grip on the fundamentals of commerce.

The Age of Extraction: "The protectors of our industries" cartoon showing Cyrus Field, Jay Gould, William H. Vanderbilt, and Russell Sage, seated on bags of "millions," on large raft, and being carried by workers of various professions.

As the world soured on Big Tech platforms in the “techlash” of the late 2010s, the conversation centered on how technology was harming its users. An avalanche of best-selling books, magazine think pieces, and documentaries sounded alarms about the annihilation of privacy and attention spans, crises of loneliness and teen depression, and echo chambers and political polarization. This era of commentary connected broad societal problems back to the intimate relationship between tech platforms and the individual.

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity by Tim Wu, Knopf, 224 pp.

The Age of Extraction, by the Columbia Law professor Tim Wu, takes a different route to the conclusion that Big Tech is destabilizing society: one focused on the economic relationship between platforms and other businesses. Wu, a founding father of the “neo-Brandeisian” anti-monopoly movement that influenced antitrust policy under the Biden administration, contends that platforms such as Google and Amazon have become essential commercial infrastructure and use this power to extract ever-more value from smaller businesses in the form of exploitative fees and pricing. He calls artificially intelligent platform extraction “the emergent form of economic power in our time,” and suggests that it is a driver of inequality and ultimately the rise of authoritarianism.

This argument won’t be revelatory to those steeped in anti-monopoly debates. Nor does the book serve as an especially persuasive introduction to the topic for those encountering it for the first time. Still, Wu’s writing is lively and lucid, and he provides some fresh insights, especially on where the power of platforms is headed in the age of AI and the dangers this poses to the republic.

The Age of Extraction is divided into two parts. The first delivers a rendition of the familiar Big Tech hero-to-villain story, framed around the platforms’ evolution from “enablement” of economic activity to the “extraction” of value. The second, shorter section looks at the global trend toward political instability and democratic backsliding.

In the optimistic late 1990s and 2000s, Wu reminds us, there was never supposed to be a “big” tech. Pundits predicted that the internet, by lowering barriers to business entry and favoring nimbleness, would usher in a decentralized, egalitarian economy. The business writer Seth Godin declared that Small Is the New Big; the blogger Glenn Reynolds envisioned An Army of Davids replacing old corporate Goliaths. The tech thought leader Jeff Jarvis, in his 2009 book, What Would Google Do?, declared that “the Lilliputians have triumphed. The economies of scale must now compete with the economies of small.”

According to Wu, such predictions were based in part on the perception that new tech platforms like Google and Amazon were “public-spirited town squares that existed to help others, almost like corporate charities.” The platforms played into this perception—especially Google, with its famous “Don’t Be Evil” slogan. A letter the company’s founders Larry Page and Sergey Brin wrote to investors in 2004 explained that Google would do “good things for the world even if we forgo some short-term gains.” But in the early days, they also lived up to it. Amazon created Amazon Marketplace, which empowered countless Americans to start small businesses using its built-in customer base and logistics capabilities. In return, the company asked only for a reasonable fee: about 19 percent of a seller’s revenue as of 2014.

But as Amazon cemented itself as the dominant e-commerce platform—in large part by subsidizing shoppers and hoovering up potential rivals—it began to put the squeeze on sellers. It ratcheted up monthly fees and introduced a major implicit fee by placing rows of sponsored results at the top of search results pages. (By 2024, sellers were paying Amazon more than $56 billion per year to make their products visible.) By 2023, fees averaged more than 50 percent of sellers’ revenue. And yet, with Amazon commanding such a large market, sellers couldn’t walk away. Wu shares anecdotes of entrepreneurs who built thriving e-commerce businesses largely through Amazon, only to be put out of business as the fees mounted up.

If Wu wanted to persuade readers that we are truly living in an age of extraction, some additional case studies might have been helpful. Journalists such as the Washington Monthly’s Phillip Longman have compared Big Tech platforms to the railroad monopolies of the Gilded Age for the better part of a decade. There are plenty of other examples to choose from. Google’s dominance in “ad tech,” the stack of platforms connecting advertisers and web publishers, allows it to extract 30 percent of publisher ad revenue through various fees. Apple’s commission on iOS in-app purchases reached 30 percent before a recent court ruling forced the company to allow app developers to route purchases through their own websites. Uber’s “take rate” on ride fares is dynamic and opaque, but it increased dramatically in recent years and has been shown to range from 40 to 70 percent.

Puzzlingly, The Age of Extraction leaves these examples on the table, not even giving them a brief mention. Readers might be left wondering if platform extraction is a problem that extends beyond Amazon as Wu moves ahead to explore tangentially related topics. One chapter observes that the internet failed to translate into “the rise of a new creative class holding significant wealth,” and that even the influencers who have found financial success are “a laboring class” with stressful lives—although it does not tie this reality to any specific extractive practices by platforms. (Wu doesn’t mention, for example, content creators’ paltry share of YouTube and X ad revenue.) Another chapter describes how private equity roll-ups of specialist medical practices raise patient costs while degrading quality of care, and how the mega-landlord Invitation Homes has exploited renters by consolidating local housing markets and then systematically raising rents and piling on absurd “junk fees.” Wu argues that these phenomena represent “platform power beyond tech,” because private equity-backed medical groups bill themselves to doctors they hope to buy as convenient administrative intermediaries, and Invitation Homes uses technology to buy and manage thousands of homes.

The freshest material in The Age of Extraction comes in Wu’s analysis of how Big Tech is diversifying and augmenting its platform ecosystems to maintain their power. Wu describes Google, Apple, and Amazon’s splashy ventures into entertainment and sports broadcasting as an effort to become “fully spun cocoons of life and living.” And, of course, the platforms are now “investing heavily in owning or controlling the relevant talent, data, and technologies” of the AI race. OpenAI and Anthropic are backed by Microsoft and Amazon, respectively; Google, Meta, and Elon Musk’s X are developing popular models in-house and control key distribution channels. Thus while AI technology may disrupt certain Big Tech products, Wu points out that AI market structures appear “headed in the direction of reinforcing [Big Tech’s] advantage.”

Just a few decades ago, Francis Fukuyama was predicting The End of History, “the old dictators, cranky old men, were on their way out,” and “a kinder, gentler future was meant to be on its way in,” Wu writes. “What went wrong?” His answer is a bit slippery, particularly with respect to how much weight it assigns to the tech platforms that are the main subject of his book. At first he blames “the destabilizing effects of laissez-faire capitalism,” and concedes that “the tech platforms are not nearly the entirety of this story.” At another point, he blames “the emergence of platform capitalism and broader trends in the economy.” The theory he actually fleshes out centers on corporate consolidation generally, although the tech platforms certainly fit in.

Wu sketches the progression from consolidation to authoritarianism as a “sequence in five steps, each based on known and well-studied tendencies.” Monopolization is followed by extraction, which, “by its nature … creates a narrow class of winners” and a “broader class” of losers: “consumers who pay more, workers who are paid less, and local, regional, smaller, and medium-sized businesses that are acquired or driven out of business.” This inequality leads to the emergence of mass resentment, then democratic failure—“compounded if the state is understood or credibly portrayed as supporting and perpetuating the ongoing extraction”—and ultimately the rise of the strongman. In a play on the title of the libertarian economist Friedrich Hayek’s iconic book, Wu calls this progression “the real road to serfdom.”

He presents this as a sort of natural law, and doesn’t make much of an effort to support it empirically. That’s not much of an issue with respect to the latter part of the causal chain, as the link between inequality and resentment and political instability is fairly self-evident. But readers might need some evidence to be satisfied that monopolization is a significant driver of inequality to begin with. Wu could have mentioned the work of the economists Marshall Steinbaum, José Azar, and Ioana Marinescu, who have connected employer concentration in labor markets to lower wages. From the consumer perspective, he could have surveyed anti-monopoly research into how consolidation is making household cost centers like health care and groceries more expensive. Or he could have deployed a historical example, such as that of the Gilded Age, to illustrate his point. But as with his argument about platform extraction, Wu declines to elaborate.

Nevertheless, The Age of Extraction concludes—after a few short chapters taking down the ideas that markets are self-correcting and that crypto technology will solve inequality—by presenting policy solutions to the expansion and abuse of monopoly power as a broad “architecture of equality.” The solutions begin with antitrust. Wu mentions that antitrust enforcement “staged a comeback” under the Biden administration and lists major cases, although he doesn’t explain how specifically they could mitigate extraction. Other solutions could include utility-style regulation and price caps, which Wu points out have proved successful beyond the utility sector. For instance, “swipe fees” are capped in the European credit and debit payment processing markets. New “common carrier” rules, such as those historically used to govern railroads and telecommunications networks, could prevent dominant tech platforms from discriminating in favor of their own products or services. And quarantines and “line of business” restrictions could prevent platforms from leveraging their preexisting monopolies to dominate new markets, such as artificial intelligence.

At around 200 pages, The Age of Extraction is a fun and breezy read, and it will hold the attention of casual readers even if they do not end up convinced of its grandest claims. But for neo-Brandeisian true believers, the book is likely to frustrate. At a pivotal moment for the movement, with its Biden-era champions out of power and the Trump administration reversing much of their agenda, Wu is content to retread familiar intellectual territory rather than illuminating what comes next. And amid a contentious factional battle to shape the future of the Democratic Party, the thinness of the book’s argumentation makes it unlikely to win hearts and minds. For an advocate of Wu’s talents, The Age of Extraction represents a missed opportunity on multiple fronts.

The Cory Doctorow Doctrine
https://washingtonmonthly.com/2025/10/30/the-cory-doctorow-doctrine-enshittification/
Thu, 30 Oct 2025

Why are big tech products getting worse and worse? The critic has some answers about the origins of “enshittification.” 

Enshittification Age: Cory Doctorow speaking at the 2018 Phoenix Comic Fest at the Phoenix Convention Center in Phoenix, Arizona.

In the last two years, iPhone customers may have been pleasantly surprised to find a standardized USB-C charging port, allowing them to dispose of Apple’s custom Lightning cables. The world’s 1.5 billion iPhone users can thank Europe for forcing Apple’s hand. Rather than produce a special model just for Europe, which it determined would be too costly, the tech giant switched all its new iPhones to USB-C. 

It’s only in recent years that consumers have woken up to Big Tech’s power over our attention, moods, privacy, stock market, economy, and wallets—over us. It’s fortunate, then, that while the rest of us have had our heads buried in our phones scrolling through social media, consumer advocates, regulatory agencies, and litigators have been sounding the alarm on surveillance and monopoly power and delving into the drier nuts-and-bolts details of right-to-repair and interoperability regulations, like the one that led Apple to standardize its charging port. 

Prolific tech critic Cory Doctorow, whose pronouncements make him akin to a town crier in the digital square, is among those leading the charge. After coining and popularizing the term “enshittification” to describe how tech platforms degrade over time, Doctorow has made it the title of his latest book, Enshittification: Why Everything Suddenly Got Worse and What to Do About It.  

His book, derived mainly from his blog Pluralistic, will be eye-opening to consumers and even to those like me who are already familiar with Big Tech’s bullying methods. (I’m the editorial director of the Open Markets Institute, a think tank that seeks to regulate Big Tech monopolies and curb corporate power.) 

Enshittification is a ride through all the bait-and-switch tactics, financial trickery, and gatekeeping to which Big Tech platforms subject users. A prime example is how Google began degrading search, its always reliable workhorse of a product, in the mid-aughts, once its flagship segment, which had captured a 90 percent global market share, had no more room to grow. In a strategy laid bare in internal memos and emails entered as Department of Justice exhibits in one of the government’s two monopoly cases against the corporation, Google made users input more queries into the search bar to get the answers they wanted, leading to more ads and more revenue for Google. “After all, even if Google couldn’t find more people to search, or more ways to use search, they could certainly find new ways to charge for search,” Doctorow observes. “In other words, once Google stopped growing, it started squeezing.”  

Doctorow takes us through the how and why of enshittification. How tech companies enshittify proceeds in four steps: 1) platforms are good to their individual users; 2) they abuse those users to improve things for their business customers; 3) they undercut their business customers to keep more of the profit for themselves; and 4) finally, they turn into a giant pile of shit. 

Both Amazon and Facebook have turned on their once-prized business customers. Facebook did so by raising the price of ad targeting and failing to show users the ads advertisers paid for. News outlets, in particular, were hurt badly when the platform began downranking short excerpts of news articles in favor of longer ones, effectively cannibalizing the news business. Similarly, Amazon started to shaft the merchants who sell on its marketplace by effectively forcing them to pay to be included in Amazon Prime, forbidding them from selling their products at a lower price on any other website, including their own, and, perhaps most galling, ripping off merchants’ ideas to make its own Amazon-branded copycat products. 

According to Doctorow, the enshittifier’s “credo” is, “Your job is to create as much value on that platform as possible. Our job is to harvest all of that value, leaving behind the smaller quantum of utility that will keep the platform from imploding.” 

But it’s not just the well-known platforms; tech companies in general have gotten into the game. In one of the book’s most brazen examples, Unity, a company that offers tools for video game developers, announced a change to its pricing policy: it would start charging developers a fee every time they sold a game built with its tools, claiming it wanted “shared success” with its customers.  

Unity’s customer base of video game developers balked. Doctorow likens the scheme to selling hammers to build a lemonade stand and expecting a nickel from each drink sold. “Unity is an avatar of the attitudes that produce enshittification,” he writes. “Enshittification is what happens when the executives calculate that they can force you to go along with their schemes, and when they’re right about it.” In Unity’s case, the plan to fleece its customers didn’t work. 

Part of the reason tech companies can get away with such schemes is that the law allows them to do things traditional companies never could, owing to underregulation, copyright law, and regulatory capture. Having an app allows tech companies to break the law and then claim they didn’t because the crime was committed with an app: app-based lending platforms that ignore usury laws, for instance, or cryptocurrency apps that illegally trade in unregistered securities.  

Regulation hasn’t kept pace with technology, as we see most vexingly in the case of AI, which has sent government regulators worldwide scrambling. Add to underregulation the billions of dollars the tech industry has funneled into lobbying, and you have regulatory capture that has helped tech companies weaponize intellectual property laws. For instance, IP laws for apps ban “circumvention,” which means technology companies can destroy rivals that have developed workarounds allowing users to skirt an app’s undesirable features: “In other words, tech companies don’t stop with ‘It’s not a crime if we do it with an app.’ They also say, ‘It’s a crime if you fix our app to defend yourself from our crimes.’” 

Enshittification also describes the waning counterbalancing influence of the tech industry’s white-collar workforce, an aspect of Big Tech’s exceptionalism that gets little attention. Doctorow notes that Google’s way of sorting web pages came from an academic research paper on citation analysis by its founders, Larry Page and Sergey Brin, giving the company its scholarly atmosphere. Technologists were recruited from top universities worldwide, offered generous pay with stock options, and given one day a week to work on side projects. 

For years, Google deferred to its technical staff, who remained a bulwark against enshittification; they believed deeply in Google’s mission statement to “Organize the world’s information and make it universally accessible and useful.” Yet letting engineers run the show exasperated investors, whose greed won out over the workers’ idealism, as we saw with the corporation’s strategy to degrade search quality. 

Google wasn’t the only giant to rein in its high-minded workers; it was just among the most prominent. The rupture between Big Tech corporations and their workforce, which began as they ramped up their enshittificatory (yes, Doctorow uses this form of the word, too) ways over the past decade, became a chasm in 2023, when a quarter of a million tech workers were fired despite the industry’s record profits. In 2025, the unspoken covenant was severed when tech executives embraced Donald Trump at his inauguration.  

Big Tech may have outfoxed regulators and its workforce, but Doctorow sees a reckoning coming. Absent U.S. regulation, as the Trump administration protects the tech platforms, we may have to rely on Europe to check the tech industry’s power.  

We saw this dissonance between the U.S. and Europe this autumn when a federal judge imposed a modest penalty on Google for its illegal search market dominance. That stood in sharp contrast to the much larger, albeit affordable for Google, $3.5 billion fine levied by the European Commission for its digital advertising monopoly. 

Enshittification may seem outdated amid the surge of AI, which the book barely mentions. It’s not that Doctorow hasn’t been thinking and talking about AI—he considers it a bubble—it’s that we have to wait for his AI book to be released next year, by which time his predictions might already have come true. 

Sneak preview: He’s not optimistic. In a recent post, Doctorow warns, “I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and lighting it on fire. Eventually, those other people are going to want to see a return on their investment, and when they don’t get it, they will halt the flow of billions of dollars. Anything that can’t go on forever eventually stops.”  

Doctorow’s Enshittification is an indispensable guide to understanding how we got here. If Part 1 is a must-read, Part 2 will be epic. 

How the Digital Age Changed Us
https://washingtonmonthly.com/2025/05/07/how-the-digital-age-changed-us/
Wed, 07 May 2025

Two new books on high tech and social media examine the toll of relentless shopping, engagement, and the tyranny of the “like” button.

Not a week goes by that book publishers don’t disgorge a tome on Big Tech, whether it’s a tell-all about C-suite leaders keeping us hooked to their algorithms, a boosterish guide to Artificial Intelligence (AI) by a tech insider, or a dire warning on how the tech platforms are hollowing out our attention spans.  

Add two more volumes to the pile. One is Searches: Selfhood in the Digital Age, a meditation on technology and AI by Vauhini Vara, a Wall Street Journal alumna and the first of the paper’s reporters to cover Facebook. The second, Like: The Button That Changed the World, chronicles a key piece of computer code that helped launch the social media age. The latter’s success and the former’s sprawl suggest that a narrower aperture is a more enlightening way to view our tech world.  

Searches is a chimera of a book: part AI experiment, part journalistic pastiche, part exhortation against what Vara calls technological capitalism, with hints of a postmodern novel couched within a memoir. For example, Vara shares a decade’s worth of queries she has typed into Google—something we all do multiple times a day and can easily relate to: “What is wastewater charge on water bill. What is a cardigan without buttons called. What to spray in ovens to clean. What is turbot fish.” She finds the excavation “unexpectedly moving,” as many of us might. “The material that Google valued for its financial potential was, for me, valuable on its own terms,” she writes.  

Similarly, Vara, who landed her first post-undergraduate job covering tech for the Journal in the mid-2000s after attending Stanford, looks back at product reviews she wrote on Amazon and lists the topics X has determined she is interested in based on its algorithm. This record of her behavior, and by extension ours, shows the creepy and long-lived surveillance we subject ourselves to when we engage with any of these platforms. The book’s experimental approach works best in these passages. By showing us evidence that our data is the very product driving the hundreds of billions of dollars generated by the tech giants, rather than simply declaring it, Vara makes us feel the violation.  

When she does resort to telling rather than showing, Vara is prone to familiar assessments that online life resembles but doesn’t replicate or enhance lived experience. “To live like this—endlessly comparing our imperfect fleshy selves with sanitized digital simulacra of selfhood that [sic] appears online and finding ourselves wanting, endlessly finding ourselves trapped in an infinite scroll of algorithmically advantaged outrage and scorn—exerts such a subtle psychic violence that we might not even be aware of it as it’s happening,” she surmises.  

The book’s deconstructive way of critiquing Big Tech falls flat when it becomes something of a gimmick: One in every few chapters is “written” by the latest ChatGPT model (in this case, GPT-4), which summarizes the previous two chapters in a bloodless facsimile of human writing: “Your portrayal of tech companies, particularly Amazon, captures a complex and multifaceted view that many share about the impact of these corporations on society.” If nothing else, these chapters confirm that AI will never replace the writing and art of actual humans, though it’s unclear whether Vara intends to make us feel this way.  

An aspiring novelist, Vara left her plum gig at the Journal to attend the storied Iowa Writers’ Workshop. She published her first novel, The Immortal King Rao, in 2022; it narrates the rise of a visionary tech CEO from humble beginnings in India to presiding over an algorithm-fueled dystopia. In her memoir Searches, Vara appears to deploy some of the techniques picked up in Iowa, interrogating the text and calling authorship into question, when she asks ChatGPT to write about the death of her sister when Vara was still in high school; the result is a chapter called “Ghosts” that went viral when first published a few years ago in the literary magazine The Believer. While “Ghosts” may have already found its way onto syllabi in literary theory seminars, it sits uneasily alongside other chapters, like the compilations of our personal data, that offer a clearer takedown of Big Tech. After her close observation of Silicon Valley, it’s apparent that Vara has emerged as a critic. She lauds efforts to rein in Big Tech by former Federal Trade Commission chair Lina Khan and her mentor, Barry Lynn, executive director of the Open Markets Institute, where I work as the editorial director. Despite its high-flown aspirations, Searches is ultimately a magpie’s nest of diffuse thoughts and musings on the digital world that illuminates little about the handful of companies that own it.  

Taking a radically different approach is Like: The Button That Changed the World. Despite its seemingly circumscribed ambit, it is a far more successful look at the tech industry than Vara’s sprawling meditation. Like is so direct in its scope that at one point the reader is treated to a history of how we came to use an upwardly directed thumb to indicate positive sentiment or enthusiasm. TL;DR: It’s not entirely clear, but the gesture may have started with gladiator fights in ancient Rome as a way for the audience to indicate whether a fallen fighter should be spared.  

One of the coauthors, Bob Goodson, is a Silicon Valley insider who helped invent an early iteration of the like button at Yelp in 2005, a few years before Facebook universalized this symbol, among the most important pieces of computer code ever to be written. Yelp’s early version didn’t include a thumbs-up symbol or the word “like,” but it did offer a way to interact with a posted review by clicking one of three buttons—“useful,” “funny,” and “cool.”  

Goodson and his coauthor Martin Reeves of BCG Henderson Institute, a think tank for developing business ideas, describe the impetus behind Yelp’s emotional reaction button as giving a website user the most effortless way of engaging with online content—one-click commenting—all while remaining on the same page and avoiding a change in URL that would trigger a page refresh. Back then, two decades ago, none of this was intuitive. It was akin to inventing the wheel.  
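
For readers curious what that one-click, no-refresh interaction looks like under the hood, here is a minimal modern sketch of the pattern. The endpoint, element IDs, and response shape are invented for illustration; this is not Yelp's or Facebook's actual code.

```typescript
// Hypothetical sketch of a like button: the click sends a background request
// and updates the count in place, with no URL change and no page reload.
const likeButton = document.getElementById("like-button") as HTMLButtonElement;
const likeCount = document.getElementById("like-count") as HTMLSpanElement;

likeButton.addEventListener("click", async () => {
  // Record the reaction in the background; the browser never leaves the page.
  const response = await fetch("/api/like", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ postId: "example-post" }),
  });
  const { totalLikes } = await response.json(); // assumed response shape
  likeCount.textContent = String(totalLikes);   // update the counter in place
});
```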

The book quotes Max Levchin, a former chair of Yelp’s board of directors, on why the company’s emotional reaction buttons gained so much traction at the time. Noting that only a small percentage of users would write and create web content, he says, “The psychological or psycho-behavioral foundation of the like button is really about breaking out of the ‘only one percent who will say anything online’ assumption.” 

The authors don’t give Yelp all the credit, though. Silicon Valley was humming with web design ideas in the mid-2000s, and other websites, too, developed or borrowed emotive reactions. When Facebook launched the thumbs-up button in 2009, after Mark Zuckerberg had rejected the idea two years earlier, it became today’s familiar icon.  

“After Facebook finally added its like button, the feature proceeded to spread like wildfire, both in its use and through its replication on other sites,” the authors write. “It was a watershed moment.” They estimate that today, like buttons are clicked 160 billion times per day around the internet, tantamount to every human clicking a like button 20 times a day. 

This deep dive into the invention of the button and the ramifications of this piece of code is insightful. The authors credit the like button with underpinning the entire social media industry. Each like is a data point, after all, and once collected and combined with others, it powers algorithms driving more likes, shares, and reposts. “What seems like an ephemeral action produces a data point with a life of its own—a life that may last forever, working its way into endless other corners of influence and action,” they write. 

The authors recognize the dark side of what they call the like economy, the world of people and brands “amassing thumbs-up and finding ways to be paid for that positive attention.” They cite an internal Facebook report that found that the site’s monitoring of users’ emotional states could enable the delivery of ads when young people feel down and need a confidence boost.  

“We can’t have it both ways: we’ve given the like button a lot of credit for fueling the rise of social media, so it must share some responsibility for the repercussions,” the authors write, acknowledging the threat to society posed by social media, namely its impact on mental health and addiction, especially on young people; the invasiveness that defines surveillance capitalism whereby our data is sold to third-party brokers and used to serve us ads; and the extreme political polarization that has led to the election of a man determined to undermine democracy. 

What started as a clever, low-stakes way to keep a website’s users on the page—the like button—has helped create a threat of epic proportions. While Goodson and Reeves don’t have answers, they contribute to our understanding of how a few corporations have transformed our lives in a decade and a half. 

Correction: The original version of this story misnamed one of the co-authors of Like. He is Bob Goodson, not Bob Goodman. Max Levchin was incorrectly identified as the CEO of Yelp. He is a former chair of the board of directors of Yelp.

The post How the Digital Age Changed Us appeared first on Washington Monthly.

Trump Is Steamrolling Corporate America. Democrats Are Taking Notes https://washingtonmonthly.com/2025/04/01/trump-is-steamrolling-corporate-america-democrats-are-taking-notes/ Tue, 01 Apr 2025 17:52:49 +0000 https://washingtonmonthly.com/?p=158541

Historically, Democrats have curbed their attacks on powerful interests for fear that it would cause a backlash. The president’s actions suggest they could stand to be more muscular.

The post Trump Is Steamrolling Corporate America. Democrats Are Taking Notes appeared first on Washington Monthly.

In recent months, President Donald Trump has launched a brutal campaign to bend America’s most influential institutions to his will. He has threatened and cajoled Big Law, Big Tech, elite universities, and the military-industrial complex.

Consider the legal sector. Skadden, Arps, Slate, Meagher & Flom, one of the nation’s premier firms, relented under pressure to provide $100 million in pro bono services for initiatives supported by the Trump administration. This move came after the White House threatened to revoke security clearances and terminate federal contracts, actions that could have crippled the firm. Similarly, Paul, Weiss, Rifkind, Wharton & Garrison capitulated to Trump’s demands by pledging $40 million in pro bono work for Trump-approved causes and altering its internal policies, including those fostering diversity, equity, and inclusion (DEI). These concessions were made under duress, but they were widely denounced as shameful acquiescence.

Trump’s ire has targeted academia with equal fury. Previously renowned for its commitment to academic freedom, Columbia University saw $400 million in federal funding revoked on spurious grounds. The administration’s demands included placing specific departments under receivership and altering admissions policies. Faced with these threats, Columbia’s leadership wilted. Trump’s enforcers are now emboldened to wage similar campaigns against other universities.

In the tech industry, Meta (formerly Facebook) agreed to pay $25 million to settle a lawsuit filed by Trump over his account suspensions following the January 6 Capitol attack. This settlement, which strangely but tellingly included a significant donation to the nonprofit responsible for Trump’s future presidential library, was wholly unnecessary. Meta would likely have won its case and had the resources to fend off the administration easily, but it chose to comply instead.  

These capitulations represent a troubling acquiescence to authoritarian tactics. 

But they also raise interesting issues for Democrats. Historically, progressives have argued that Democrats should adopt a more confrontational stance toward powerful institutions—Big Pharma, Big Tech, the military-industrial complex, Wall Street—to enact change for regular people. Yet Democratic leaders often take a more conciliatory approach, fearing reprisal from these moneyed interests. The prevailing wisdom suggested that aggressively poking these formidable forces would provoke economic and electoral retaliation, jeopardizing Democratic incumbents. 

Trump’s actions have suggested this may be a false choice. The 47th president has aggressively targeted these institutions, causing market instability, foreign policy chaos, and economic uncertainty, yet he faces minimal resistance. The anticipated corporate pushback has not materialized; many have chosen compliance over confrontation. This raises critical questions about corporate America’s political alignments and the strategies that Democrats employ.​ 

Traditional leftist theory posits a symbiotic relationship between far-right regimes and corporate power, rooted in mutual economic benefit. Yet, under Trump, many corporations are experiencing financial losses. Despite this, their resistance to Trump is muted. This suggests that factors beyond immediate profit—perhaps ideological alignment or fear of retribution—influence C-suite acquiescence.​ 

The implications are profound. If Trump can strong-arm these institutions without significant consequence, a progressive Democratic administration could exert similar pressure—firmly and lawfully—to achieve policy goals.  

Even base Democrats, including many moderates, are furious with their leadership, not just because it failed to force a government shutdown over DOGE and the budget negotiations but because they feel, in a larger sense, deceived.

The frustration is understandable and justified. For decades, Democrats were told that confronting entrenched corporate power too forcefully would provoke donor flight, destabilize markets, and invite political defeat. But now they watch Trump destabilize the economy, berate institutions, undermine global stability—and encounter, remarkably, little institutional resistance. The promised backlash if Democrats pushed too hard never materializes, even as Trump tramples through the executive suites. 

This moment is forcing a reappraisal. If Trump can use power this aggressively, should progressives do the same—only in service of justice and democracy rather than autocracy and self-enrichment? The conciliatory path of Democrats from Jimmy Carter to Joe Biden may have reflected prudence, but it increasingly looks like misjudgment. 

If powerful institutions can be bent toward evil, they can also be bent toward the public good. The leaders of corporate America may want to reflect carefully. The next generation of Democrats may come prepared not just to persuade but to compel.

The OpenAI CEO is Out, But the Real Story is the Need for Regulation https://washingtonmonthly.com/2023/11/20/the-openai-ceo-might-be-out-but-the-real-story-is-the-need-for-regulation/ Mon, 20 Nov 2023 06:00:00 +0000 https://washingtonmonthly.com/?p=150293

Sam Altman aside, artificial intelligence is wonderful but threatening. It demands a robust government response, and that's coming too slowly.

The post The OpenAI CEO is Out, But the Real Story is the Need for Regulation appeared first on Washington Monthly.

With the announcement of OpenAI CEO Sam Altman’s surprising ouster, this is a moment to pause and consider where we are with artificial intelligence.

The details of the Altman story will come out, but there’s lots of speculation that the schism at OpenAI was between those who want to accelerate the pace of artificial intelligence development, despite the lack of guardrails, and those who urged caution. This kind of wobbly governance and high drama underscores that we need real oversight. The AI race is out of control. No government, including the United States, has issued mandatory rules on something most experts agree could destroy humanity if left unregulated. Altman was arguably the leader of the pack, but whoever wins the race, we need firm rules for those running it.

Unfortunately, there’s a lot of laissez-faire thinking out there. In his 5,000-word “Techno-Optimist Manifesto,” posted online last month, Marc Andreessen, the billionaire venture capitalist, celebrates technology’s unbounded potential. “There is no material problem that cannot be solved by technology,” he writes. This certitude leads Andreessen to insist that slowing the development of artificial intelligence would be tantamount to murder: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” Andreessen’s audacious declaration reflects the so-called effective accelerationism movement, or e/acc, which draws from philosopher Nick Land’s theory that technology will accelerate the creation of a Utopia. Followers of this movement often flag it in their bios and LinkedIn pages.

While I, too, am excited about AI’s awesome capabilities, we have to take seriously what the scientists and creators of this revolution are saying about the risks we face.

In his final years, with nothing to gain from a cautionary warning, Stephen Hawking, the theoretical physicist, came to this conclusion about AI and humanity: “The development of full artificial intelligence could spell the end of the human race…It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking’s warning has become even more urgent as AI is rolled out in everything from defense to medicine. Tech leaders are not arguing that machine capabilities won’t surpass those of humans. They are debating when that will happen.

Despite the “Silicon Valley knows best” attitude of some, the AI industry itself is calling for regulation. Brad Smith, the vice-chair and president of Microsoft, which is all in on AI, has said, “Companies need to step up … Government needs to move faster.” Microsoft announced this week that it is hiring Altman and Greg Brockman, OpenAI’s president; the two will head an advanced research lab at the technology giant.

Governments are taking good first steps, but they fall short. In July, the White House announced that seven companies involved in the development of artificial intelligence had voluntarily committed to managing the risks. That is no small feat, given what it takes to get the leadership of major companies to agree. On Halloween, the eve of the UK AI Safety Summit, President Joe Biden issued a 63-page executive order. Likewise, the G-7 trumpeted its Agreement on International Guiding Principles on AI and a voluntary Code of Conduct, as more companies signed on.

These policy moves address some of the complex issues around AI risk. The problem is that they don’t require that companies take safety and security measures. Companies need only report the measures they took.

Governments need to be courageous and pass legislation enabling effective regulation of advanced AI, and they need to do this within a tight deadline of months, not years. Choke points, kill switches, measures to stop us from going off the cliff—all must be identified and tested now. It is wise to consider our options before they disappear. For example, allowing companies to connect the largest AI systems to the Internet before we know their capabilities could prove a catastrophic, irreversible decision.

The White House Executive Order and the G-7 agreement on International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct have provided us with the roadmap to formal legislation allowing us to regulate this revolutionary technology. Mandatory rules create a level playing field for all competitors. And given the global nature of this competition, governments need to work together to enforce compliance. When the European Union established the General Data Protection Regulation, an important privacy and cybersecurity measure, in 2016, companies had to comply globally, not just in Europe. This approach works, and we should use it quickly. The EU is set to finalize comprehensive AI regulation this year, including fines to enforce compliance, but those regulations won’t be in effect before 2025. Nobody knows how far AI will evolve in that time, making any delay a risky gamble. Ominously, Meta just disbanded its Responsible AI Team. That seems to be a sign that some companies aren’t taking the voluntary measures seriously.

There are three big strategies government can deploy. Think of them as “Go for Broke,” “Slow Down,” or “Strict Regulation.” Strict regulation based on what has already been agreed is our best bet. Get oversight in place now as we figure out the bigger plan.

We’ve been here before with Big Tech. As social media ascended during the aughts and its promoters were evangelizing the Utopia of a connected world, there were signs of the damage major platforms could inflict. But those pointing this out were ignored. Tech leaders didn’t set out to harm teenage girls, promote religious and ethnic violence, or undermine elections—that was collateral damage in the pursuit of growth. AI’s downside could be more ominous.

This isn’t the first time companies have had to figure out how to do business responsibly. When I was at Nike, we built corporate social responsibility with an aperture wide enough to take in impacts and consequences of all kinds—on the products, the profits, and people—as we dealt with issues such as labor conditions. The industry found a common point on the compass as our shared goal. And companies like Nike grew as they pursued growth and responsibility simultaneously. The AI challenge is much more formidable and will take greater collaboration by companies and governments, but the roadmap is right in front of our faces.

Thankfully, it’s not too late. We have a rare—if fleeting—opportunity to act before AI-driven tools become ubiquitous, their dangers normalized, and what’s unleashed can’t be controlled, just as we’ve seen with social media, now intertwined so deeply in our lives that it seems impossible to rein it in. We won’t have the chance to retrofit the AI industry. Companies are creating the products; they can enact the safety controls on deadline, just as other industries do. It can take, on average, 10 to 15 years to get a new drug to market safely. At this moment, AI developers can just speed ahead, yelling out the window, “I’m working on those reports I’m required to submit!”

This is a historic moment, and we need the kind of binding collaboration we have with nuclear treaties. Companies and governments shouldn’t have the right to take their time when humanity is at risk. The question is what mechanisms we have to deploy, not how far off we think the disaster is.

Once those safety measures are robust and functioning, then there’s cause for real techno-optimism, and who runs one particular company won’t matter as much.

Google’s Participation Trophies  https://washingtonmonthly.com/2023/08/27/googles-participation-trophies/ Sun, 27 Aug 2023 23:00:00 +0000 https://washingtonmonthly.com/?p=148811

Big Tech promised that online career certificates would replace the college degree. They have so far proved to be, at best, a supplement.

The post Google’s Participation Trophies  appeared first on Washington Monthly.

Earlier this year, I enrolled in a Google-sponsored online course to earn a “professional certificate” in data analytics. The course was one of a series of new, video-based classes that the company has suggested might someday replace traditional college. 

“College degrees are out of reach for many Americans, and you shouldn’t need a college diploma to have economic security,” wrote Google’s president of global affairs, Kent Walker, in a 2020 blog post announcing the new curriculum. Walker later added, on Twitter, that in its own hiring, Google “will now treat these new career certificates as the equivalent of a four-year degree for related roles.” 

Under the umbrella “Grow with Google,” the company offers online professional certificate courses in project management, user experience (UX) design, data analytics, IT, and other specialties. All are available through the online learning platform Coursera and cost $39 or $49 a month. On its website for prospective students, Google describes its courses as an educational shortcut to a lucrative gig and lists the median salaries that certificate earners might receive: entry-level data analytics jobs pay $92,000; entry-level project managers, $77,000.

Unfortunately, my experience earning—and then attempting to peddle—a Google-sponsored certificate was less triumphant. My certificate took me just two and a half weeks to get, mainly because I learned to game the system. (I watched videos at double speed and passed quizzes by trial and error.) And when I presented my shiny new credential to prospective employers in the Washington, D.C., area and scoured job postings in Silicon Valley, my credential was less a foot in the door than a plaintive knock at firmly barred gates. Almost every “entry-level” data analytics job I found required either extensive experience or, alas, a supposedly outmoded college degree. The real lesson of my ersatz professional certificate was to illuminate the pitfalls of the burgeoning certificates industry—a rapidly expanding marketplace defined primarily by a lack of data, regulation, and oversight. 

That’s not to discredit the idea behind short-term credentials like certificates. In theory, they’re great. In our rapidly changing labor market, workers need fast and affordable options to acquire additional skills and retrain without going back to college. But in reality, too many of the short-term programs available today, Google’s included, might not meet that need—largely because neither students nor employers have any way of distinguishing credible programs from scammy dreck. The value of a certificate, even when backed by a behemoth like Google, remains iffy, if not downright suspect.

That lack of accountability is bad for everyone. It’s bad for companies in need of skilled workers, and it’s bad for students hoping to keep pace with technological advancements. If professional certificates are to become a meaningful currency in the educational marketplace and act as a pathway to economic security—which they should—there’s much more work to be done. 

Professional certificates have exploded in popularity in recent years. The nonprofit Credential Engine now counts at least 112,000 certificate programs nationwide, offered by trade schools, community colleges, four-year colleges and universities, and companies like Microsoft, IBM, Facebook parent Meta, and, of course, Google. Students hoping to switch careers or boost their earning potential have been driving the trend. Coursera, for instance, reported 8.3 million enrollees worldwide in 79 different professional certificate programs as of March—an increase of 157 percent over the past year. According to a 2023 survey by Gallup and the Lumina Foundation, roughly 40 percent of adults who are considering returning to school now say they would pursue a certificate, versus just 27 percent who’d go for a bachelor’s degree. 

Low-income and minority students, who are less likely to have access to traditional higher education, enroll in these programs at disproportionately high rates. Of the roughly 150,000 U.S.-based learners who’ve earned a Google professional certificate, for example, more than half identified as Asian, Black, or Latino, and 38 percent of those who reported their income earned less than $30,000 a year at the time of their enrollment, according to data provided by Coursera.

When Google entered the online certification world a few years ago, headlines echoed the tech giant’s own puffed-up promises anticipating the demise of traditional higher education. “Google’s Plan to Disrupt the College Degree Is Absolute Genius,” gushed a headline in Inc. “Is This the End of College as We Know It?” asked The Wall Street Journal. It’s easy to see why. Compared to bachelor’s degrees, professional certificates and other short-term credentials are fast and cheap. Students can continue to work or care for their families, and they don’t have to take on loads of debt. Anyone with access to a computer can usually earn a certificate in a few months for a few hundred bucks. 

Sidebar: “How I Earned (Hacked) A Google Career Certificate”

But is a certificate even worth it? The answer’s unsatisfying: We really don’t know. In most cases, no rules require Google, or any other company or institution, to disclose whether certificate earners subsequently succeed in the workforce, or whether they end up making more than they did prior to completing the course. “We know next to nothing about the composition of students who participate in these programs; the rates at which students complete programs and earn workforce-relevant credentials; and whether … program participation leads to subsequent education and training or improved workforce outcomes,” according to a Brookings Institution report published in May. 

The federal higher education database, the College Scorecard, only tracks courses that receive federal financial aid. For the most part, that’s limited to programs that provide 300 “clock hours” or more of instruction over at least 10 weeks (or 600 hours over 15 weeks, depending on the provider). Many certificate programs, including all of Google’s offerings, don’t meet that threshold. 

Of course, some certificates and short-term credentials do translate to lucrative jobs. In its analysis of the minority of programs for which Scorecard data is available, Georgetown University’s Center on Education and the Workforce finds that workers with professional certificates in engineering technologies, for example, can expect to earn a median of between $75,001 and $150,000, while certificate holders in occupations like manufacturing and construction can expect to earn between $40,001 and $50,000. (Like any educational certification, including traditional college degrees, students’ chosen field of study broadly dictates the scope of their earning power.) 

But Michael Itzkowitz, the founder and president of the HEA Group, a higher education consultancy, says the lack of good data on which programs produce real results—and which, crucially, do not—makes certificates the “riskiest” educational investment a learner can make. In May, the HEA Group published an analysis of the roughly 5,600 certificate programs for which College Scorecard data is available, and found that nearly half (46 percent) of graduates make less than $30,000 per year four years after earning their credential. “Certificates, when done well, can offer one of the fastest paths to economic mobility,” Itzkowitz says. “But they’re also disproportionately likely to lead to the majority of their students earning a very low salary after they complete.”

We should expect students who earn certificates in, say, cosmetology to earn relatively little—such is the market for hairdressers and manicurists. But the HEA Group’s analysis also found worrisome variation in salary outcomes even among students who earned certificates in nearly identical subject matter. Recipients of certificates in “computer/information technology administration and management” from the for-profit Asher College in Sacramento earned a median income of $49,525 four years later, the report found; those who earned certificates in “computer and information science” from the public South Texas College earned less than half that—a median of $18,250. 

Free market die-hards might argue that these disparities will self-correct as students flock to programs with the best reputations, leaving the poor performers to wither. But making those comparisons is tricky. Is the 10-week, full-time data analytics “boot camp” taught by Fullstack Academy through Virginia Tech for $13,495 markedly better than the nine-week online class on the same subject from Cornell University for $3,750? Is the online certificate course sponsored by IBM better than the one offered by Meta? It’s impossible to decide without knowing how graduates fare in the job market. 

I ended up choosing Google’s data analytics course in part because it was the most popular certificate program in 2022, according to Coursera. As of late July, more than 1.7 million students had enrolled. According to Bronagh Friel, the head of Grow with Google, 75 percent of the company’s certificate graduates “report a positive career impact, such as a new job, higher pay or a promotion, within six months of completion.” Those are great results, but they’re self-reported, and there’s no way to verify them. 

Public and private colleges and universities that confer four-year degrees in a variety of fields are not, of course, uniform in quality either. Graduates with diplomas in business management tend, on average, to have substantially higher salaries than graduates in fine arts. Graduates of Harvard make more money, on average, than graduates of the for-profit DeVry. But if the goal is to provide students with an educational path to a reliable salary, a bachelor’s degree—from anywhere, in any subject matter—remains the better bet. The HEA study found that 80 percent of bachelor’s graduates earn at least $40,000 per year four years after graduation (in contrast to the relatively poor performance of certificate holders noted above). In some STEM fields, like computer science, bachelor’s degree graduates earn a median of $104,799.

Part of the reason might be the substantive advantages conferred by a four-year educational experience, says Sean Gallagher, the founder and executive director of Northeastern University’s Center for the Future of Higher Education and Talent Strategy. An employer hiring a college graduate with a degree in computer science is relatively confident that the candidate can perform basic data analytics tasks. But what a company really values, Gallagher says, is that the candidate, by making it to graduation, is likely to have “the soft skills, the experiences, the perseverance, and the ability to attain a goal.” An asynchronous online course, or even a “very deep, well-designed, very expensive, few months’ training experience,” Gallagher adds, is far less likely to convey those types of skills.  

This preference for college degrees extends to the tech world, too, despite Silicon Valley’s love affair with college dropouts like Apple’s Steve Jobs and Meta’s Mark Zuckerberg, and commitments from companies like IBM to dramatically reduce the job postings that require a four-year degree. A 2022 report from the labor market research firm Lightcast found that tech companies still disproportionately require college degrees. At Oracle, more than 90 percent of job postings required a four-year degree or above, according to Lightcast. At Apple, it was 72 percent. At Google, 77 percent. 

This trend certainly held true in my admittedly superficial job hunt this summer. Armed with my new Google professional certificate, I surveyed Indeed.com for entry-level data analyst jobs in both D.C. and Silicon Valley, and found that almost every job required either a four-year degree, significant prior experience, or a nuanced set of skills that would be impossible to obtain through short-term certification programs. A June posting for “Big Data Software Engineer” at IBM, for example, did not require a college degree, but it did ask that applicants have Top Secret security clearance; could demonstrate “hands-on experience configuring and operating container-based systems such as Kubernetes, Docker and/or OpenShift”; and be able to program in Python and Java. When I asked Victoria Suarez, the manager of talent strategy and operations at Alpha Omega Integration, an IT firm based in Vienna, Virginia, if she thought my Google certificate could get me in the door, she hesitated. “Oftentimes, you can just sit there, push play on the video, and get a certificate,” she said. “We all kind of know that too.”

At best, my certificate was a signal to potential employers of my interest in the field. “It could demonstrate to an employer that, hey, this person would be worth training because they’re already investing in themselves to take the certificate and to acquaint themselves with the field,” says Matt Deneroff, the D.C. branch director for the global professional placement firm Robert Half.

My job search at Google was perhaps the most deflating. Despite Google’s promise to treat its own career certificates as “equivalent of a four-year degree,” Google’s hiring platform, which allows applicants to sort job openings by the degree required, does not list “certificate” among the filter options. In a forthcoming book, the author and higher education expert Ben Wildavsky quotes a Google executive admitting that the company-sponsored certificates “aren’t intended to prepare people to work at Google itself.” Google employees, the executive explained, “need deeper learning abilities than short-term programs typically provide.” 

The idea, trumpeted by Google and echoed by overwrought pundits, that professional certificates ought to replace traditional degrees is maddening to many champions of professional certificates. Professional certificates and college degrees are not the same, and they’re not supposed to be the same—and that’s a good thing. Professional certificates are narrowly focused courses designed to teach students a specific set of measurable skills, like how to clean up data in an Excel worksheet. A four-year bachelor’s degree in, say, computer science, is designed to confer a much broader, historical, and even philosophical understanding of information technology, along with higher-order skills like problem-solving, analysis, critical thinking, and research. 

Claiming the equivalence of these credentials is misleading, and possibly even damaging. Low-income and minority students, for instance, might get “tracked” into lesser certificate programs in lieu of college, or they might themselves opt out of college under misguided beliefs about a certificate’s value compared to traditional degrees. The right way to think about degrees and credentials, experts like Wildavsky argue, is a “both/and” approach. “We should not think of [them] as completely disconnected separate pathways,” he says, but as complementary mechanisms for producing students with both a broad education and targeted skills.

Improving the certificate marketplace is a prerequisite step toward achieving that vision. First, Congress can help buttress credible certificate programs by allowing students to access Pell Grants to pay for them. In the Senate, Virginia Democrat Tim Kaine and Indiana Republican Mike Braun have cosponsored legislation to allow federal funding for “high quality” programs resulting in “industry-recognized” credentials; in the House, North Carolina Republican Virginia Foxx and Virginia Democrat Bobby Scott have introduced similar plans. Legislation that sets rigorous quality metrics for Pell eligibility and requires comprehensive data reporting could help raise the bar for providers, standardize offerings, and, importantly, provide students with outcomes data to inform their choices.

Second, states could do much more to steer students toward high-quality short-term training programs. Virginia’s “Fast Forward” initiative, for example, funds state-approved programs in high-demand fields, like IT or commercial trucking, where the state has identified worker shortages. Under a unique pay-for-success model, the commonwealth pays two-thirds of a student’s tuition only if they graduate and earn an industry-recognized credential within six months of program completion. According to the Brookings Institution, 95 percent of Fast Forward students complete their programs, and 70 percent earn an industry credential within six months.

Third, employers and providers can help legitimize professional certificates by building certification programs designed to educate students in specific, measurable skills that businesses want—and then ensuring that companies hire those graduates. They can also partner with traditional institutions of higher education to lend pedagogical heft to coursework and underscore a certificate holder’s credibility. 

To its credit, Google is working on both of these fronts. The company is currently building “partnerships” with hundreds of community colleges offering access to Google certificates, and late last year, it announced that it would fund up to 10,000 certificates for students of the University of Texas system, its largest collaboration to date. Google has also assembled a consortium of more than 150 employers, including heavyweights like American Express, Walmart, and T-Mobile, that “recognize the certificates in their hiring processes and are eager to hire talent in these fields,” according to Google’s Friel. (The company does not have data on the number of hires that occur through this consortium.) 

In a similar vein, the University of Virginia’s School of Continuing and Professional Studies now guarantees job interviews with a half-dozen employers to students pursuing certificates in fields that are in demand, including cloud computing, cybersecurity, IT, and project management.

Ross McPherson, a student who lives in Blacksburg, Virginia, enrolled in UVA’s certificate program in project management to switch careers after years of working first on commercial diesel engines and then in operations for US Foods. “I needed to have some kind of competitive advantage to be able [to make] a living with my brain instead of my hands,” he says. He had already earned a bachelor’s degree and an MBA from the University of Northwestern Ohio, but he credits the certificate program for giving him a boost. Thanks to UVA’s “guaranteed-interview program,” he got an interview with Definitive Logic, a UVA partner employer, in October and was offered a job in November. “It was probably the fastest process I’ve ever been a part of,” he says. 

McPherson is, by all counts, a certificate success story, but his trajectory was both less revolutionary than the one initially spun up by Google and more promising. If the rapidly growing professional certificate industry succeeds, it will not be because it disrupted traditional institutions of higher education, but because it worked with them to build the pathways students and businesses both need. 

How I Earned (Hacked) a Google Career Certificate https://washingtonmonthly.com/2023/08/27/how-i-earned-hacked-a-google-career-certificate/ Sun, 27 Aug 2023 22:55:00 +0000 https://washingtonmonthly.com/?p=148815

After writing for years about the benefits of certificates and short-term credential programs, especially for working learners, I had begun to feel inauthentic, like a self-help guru who doesn’t take her own advice. So I finally enrolled in one this spring: Google’s “career certificate” program in data analytics, available online through Coursera for $39 a […]

The post How I Earned (Hacked) a Google Career Certificate appeared first on Washington Monthly.

After writing for years about the benefits of certificates and short-term credential programs, especially for working learners, I had begun to feel inauthentic, like a self-help guru who doesn’t take her own advice. So I finally enrolled in one this spring: Google’s “career certificate” program in data analytics, available online through Coursera for $39 a month. 

With more than 6.5 million learners enrolled worldwide, Google is one of Coursera’s “top instructors,” offering a multitude of certificates in a panoply of languages from Arabic to Spanish. I thought I was gunning for a certificate. What I actually got was a lesson in humility. 

Google’s data analytics program is its most popular offering. The program consists of eight separate courses with titles like “Prepare Data for Exploration” and “Process Data from Dirty to Clean.” Course materials consist of short videos, quizzes, readings, and optional practice activities. Students can also post on message boards to engage with other enrollees. 

Production values are slick, with soft lighting and catchy music, and the content is high quality—this is Google, after all. Modules are succinct and logical; the instructors, all of whom are Google employees, are lucid and charismatic. The course does an exceptionally strong job of explaining what data analysts do, with mini monologues by upbeat Google workers describing their backgrounds and jobs. Also laudable is the focus on soft skills, like collaboration and teamwork. One module, for example, talks about the importance of finding mentors and building networks, while another segment tackles how to handle conflicts with colleagues. Instructors are also refreshingly diverse. In an industry infamously dominated by white men, only one instructor appeared to meet that description.

But the program’s central conceit—to “equip participants with the essential skills they need to get a job” with no degree or prior experience required—is also its central weakness. Google’s aim to make the program accessible to all learners means the material is basic, and the pacing frustratingly slow. It’s not until the final module of Course 4, more than 100 hours into the program, according to the syllabus, that students begin to work with formulas for data analysis, like calculating averages. It’s not until the very end, Course 7, that students finally get to work with the programming language R, and even then, we were only introduced to a handful of commands. 

Though I started out diligently enough, watching every video, clicking every link, and trying to channel the enthusiasm of my Google teachers, the format quickly turned monotonous. What should have been a progression toward mastery became an exercise in sheer endurance—which is about when I started to cheat. 

I began watching videos at double speed, skipping optional exercises, skimming through readings, and passing quizzes through trial and error. You only need 80 percent to pass a quiz, and you can take them as many times as you’d like. I also discovered that the written responses required in some quizzes didn’t have to make much sense. I once typed in, “The quick brown fox jumped over the lazy dogs performing data analysis,” and earned 100 percent. 

According to the American Council on Education, which evaluates academic programs, my Google data analytics certificate program should have taken 175 hours and more than six months to complete. My shortcuts got me mine in about two and a half weeks. My certificate—a digital document that I can share with employers—declares my competence in “tools and platforms including spreadsheets, SQL, Tableau and R.” In truth, my knowledge of the programming languages SQL and R goes about as far as knowing they’re not just letters of the alphabet.

One thing I did learn is the difficulty of completing a self-directed program with static content. Humiliatingly, I learned that I personally lack the discipline and dedication to upskill myself. As I clicked through video after video, I could feel the press of chores and other assignments, while my aspirations for learning withered. My dog suddenly needed a walk; I remembered emails I had to return and errands I needed to run. 

I can only imagine how much more challenging this must be for students in different circumstances than my own—workers with physically demanding jobs, small children at home (my kids are big), or stark financial pressures, or those who are perhaps counting on this course as a gateway to a better life. Coursera and Google do not disclose completion rates, and there are many reasons students don’t finish, but I noticed that while Course 1 in my data analytics program listed 1.9 million enrollees in July, just 328,000 had enrolled in Course 8. That suggests an 83 percent attrition rate.

Policy makers and pundits often talk blithely about re-skilling and upskilling American workers, as if access to a training program is all it takes to transform 20th-century-trained workers into a 21st-century labor force. But my experience underscored that people also need a support network—access to other humans, who can provide mentorship, guidance, motivation, and community. Almost anybody, including those who have been out of school for a long time, is theoretically capable of learning a complex set of new skills. But they can’t be expected to do it on their own. 

A second lesson I learned is that there’s no free lunch when it comes to education and training. Data analyst jobs pay as much as they do because it’s impossible to learn everything you need from a single online course. Google acknowledges as much in the final modules of its program, where it encourages students to build a portfolio and provides extensive links to additional tutorials in coding, data visualization, and other skills. Providers need to make clear that a certificate is only the first step toward a career, not a substitute for the deep knowledge, professional networks, and work experience someone needs to truly succeed in a field. (See main article.) 

I still believe that quality certificate programs can help jump-start, restart, or advance careers. But they’re neither a stand-alone solution nor a silver bullet. My experience in the educational trenches provided another useful reminder: that the champions of policy prescriptions sometimes need to take their own medicine.

What Really Happened to Silicon Valley Bank https://washingtonmonthly.com/2023/03/17/what-really-happened-to-silicon-valley-bank/ Fri, 17 Mar 2023 09:00:00 +0000 https://washingtonmonthly.com/?p=146701

The hubris of Big Finance and Big Tech, the Fed dropping the ball, and the stupidity and greed of SVB were behind the crisis.

The post What Really Happened to Silicon Valley Bank appeared first on Washington Monthly.

If you thought Big Finance and its regulators learned prudence and humility from the financial crisis, the past week’s events were a rude wake-up call. The collapse of Silicon Valley Bank has several subplots. The most obvious was its astonishingly stupid investment strategy of concentrating its assets in instruments whose market value would decline as interest rates rose. But that alone doesn’t explain what’s happening here. SVB’s business was inseparable from the anything-goes venture capitalists of Silicon Valley. And the third central character in this mess is the Federal Reserve. The Fed oversees both interest rates and banks’ balance sheets. But as it pushed up interest rates to put a brake on inflation, it ignored the threat that those rising rates posed to banks whose assets would fall sharply.

At the end of 2022, SVB’s asset portfolio was dominated by $91 billion in mortgage-backed securities issued by Fannie Mae at an average interest rate of 1.64 percent and another $30 billion in U.S. Treasury securities at even lower rates. This position was defensible only so long as the Fed kept interest rates very low and could be expected to keep them there. A reasonable investor would have started to question that expectation in the early spring of 2021, when inflation began to inch up. After several months of debate over whether the inflation would persist, it became obvious that the Fed would raise interest rates soon—and it did. In 2022, the Fed raised rates seven times, from near zero to between 4.25 and 4.5 percent; each time, the value of SVB’s assets took a hit.
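
The reason each hike hurt is basic bond arithmetic: once prevailing rates rise, no buyer will pay full price for a security locked into an old, low coupon. The sketch below is a simplified illustration rather than SVB’s actual portfolio accounting; it reprices a hypothetical ten-year bond paying 1.64 percent at a 4.75 percent discount rate, roughly where short-term rates stood by early 2023.

```typescript
// Present value of a simplified fixed-rate bond: annual coupons plus principal at maturity,
// discounted at the prevailing market rate. Illustrative figures only.
function bondPrice(face: number, couponRate: number, marketRate: number, years: number): number {
  let price = 0;
  for (let t = 1; t <= years; t++) {
    price += (face * couponRate) / Math.pow(1 + marketRate, t); // discounted coupon payment
  }
  price += face / Math.pow(1 + marketRate, years); // discounted return of principal
  return price;
}

const face = 1000;
console.log(bondPrice(face, 0.0164, 0.0164, 10).toFixed(2)); // ≈ 1000 when market rates match the coupon
console.log(bondPrice(face, 0.0164, 0.0475, 10).toFixed(2)); // roughly 757: the same bond is worth about a quarter less after rates rise
```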

Another subplot at play was the unusual character of SVB as an institution. While SVB was a “regional” institution mainly serving Silicon Valley clients, it was also enormous. At the end of 2022, the Federal Reserve ranked SVB the 16th largest bank in the United States, with $209 billion in assets. Moreover, its funding depended on a sweetheart arrangement with Silicon Valley’s large venture capital companies: SVB lent money to the VCs and the startup companies they backed, and the VCs directed their portfolio companies to hold the proceeds of those investments in accounts at SVB. That’s why more than 85 percent of SVB’s deposits were in accounts exceeding the FDIC’s $250,000-per-account guarantee. And since those accounts held the startups’ daily and weekly operating funds, SVB’s failure was an immediate threat to the survival of hundreds of young technology companies.

A third subplot here was the panic itself, which showed that SVB’s public responses to its problems were as dull-witted as its investment strategy. As its financial position continued to deteriorate in 2023, SVB scrambled to attract new investors and prop up its stock price—and several large hedge funds took the bet. But three days before its demise, SVB decided to trumpet a new $2.25 billion stock issue and a fire sale of $21.5 billion of its assets to Goldman Sachs. SVB booked a $1.8 billion loss on the sale, not counting the $100 million fee Goldman Sachs demanded to sweeten the deal. Instead of reassuring anyone, SVB’s public campaign signaled how dire its situation was to the VCs and its own depositors.

The next day, word spread that Peter Thiel’s VC fund had advised its client companies to pull their SVB accounts. Since most of SVB’s depositors relied on those accounts to keep their doors open, and most of those accounts exceeded the FDIC guarantee, SVB was quickly overwhelmed by a modern bank run. California regulators arranged for the FDIC to take over the bank, and over the weekend, Treasury and the Fed moved to quiet the panic by waiving the $250,000 ceiling on the FDIC guarantee.

The fourth subplot in this saga is the inattention and carelessness of the regulators, especially the Federal Reserve. Much has been made of the 2018 change to the Dodd-Frank financial reforms exempting banks with assets of less than $250 billion from mandatory “stress tests” by the Fed. Those tests try to determine if a bank can survive under unusual conditions—such as fast-rising interest rates. But the fact that the test wasn’t mandatory in this case does not excuse the Fed from its primary responsibilities: Its mission, in its own words, is to “promote the safety and soundness of individual financial institutions” and “monitor financial system risks.”

It’s unsurprising that some bankers are stupid and greedy—SVB CEO Greg Becker and other bank executives sold $4.5 million of their SVB stock weeks before its collapse and distributed employee bonuses just hours before the end. But the extent of the Fed’s negligence is sobering. Certainly, the Fed cannot plead ignorance since Becker was a director of the San Francisco Federal Reserve Bank up to the day that his own bank collapsed. And although the reckless change in Dodd-Frank was not dispositive, it did bespeak the dangerous and self-serving attitude prevalent in Big Finance and Big Tech that markets always know best and government oversight is unnecessary and destructive. Now the pressing question is whether the Federal Reserve has finally learned that rigorous regulation to protect the system is far preferable to picking up the shattered pieces.

The New Political Economy  https://washingtonmonthly.com/2023/01/08/the-new-political-economy/ Mon, 09 Jan 2023 01:45:00 +0000 https://washingtonmonthly.com/?p=145095

In the immortal Federalist No. 10, James Madison wrote, The most common and durable source of factions has been the various and unequal distribution of property. Those who hold and those who are without property have ever formed distinct interests in society. Those who are creditors, and those who are debtors, fall under a like […]

The post The New Political Economy  appeared first on Washington Monthly.

In the immortal Federalist No. 10, James Madison wrote,

The most common and durable source of factions has been the various and unequal distribution of property. Those who hold and those who are without property have ever formed distinct interests in society. Those who are creditors, and those who are debtors, fall under a like discrimination. A landed interest, a manufacturing interest, a mercantile interest, a moneyed interest, with many lesser interests, grow up of necessity in civilized nations, and divide them into different classes, actuated by different sentiments and views. The regulation of these various and interfering interests forms the principal task of modern legislation, and involves the spirit of party and faction in the necessary and ordinary operations of the government.

That was back in 1787, before there was a ratified Constitution, when the U.S. government barely existed. But Madison’s framing is instructive in a number of ways. He evidently assumed that the main source of political strife in the new nation would be clashes among economic interests, and that the main task of government would be adjudicating these disputes. Anybody who has ever spent time in a legislative body at any level will recognize the rough ongoing truth of Madison’s observation. Back in the early days of the republic, globalist slave-holding plantation owners battled over trade policy with protectionist northern manufacturers. The First and Second Banks of the United States gave rise to fights over centralized financial power. There were disputes over taxation, territorial expansion, and “internal improvements” like roads and canals. Today, economic interest groups are still fighting each other: over trade, over the role of unions, over the power of Big Tech, over the transition to clean energy, and over a zillion other issues.

There is a disconnect between this version of politics and the version we typically get in contemporary public conversations. We are in the habit of thinking of noneconomic issues (abortion, immigration, policing) as being debated on their merits, often as fundamental moral questions, but of economic issues as being properly understood in terms of the technical management of “the economy” by experts, not of power struggles between interests. What’s the unemployment rate? The inflation rate? The strength of the dollar? The trade deficit? How much is the Federal Reserve going to raise interest rates at its next meeting? How will the financial markets react? These are the kinds of economic questions one is likely to see addressed on front pages and on television news. The Madisonian version of the American political economy still goes on—not exactly behind the scenes, just insufficiently noticed—but it doesn’t command our primary attention.

How did this happen? It seems fair—especially in the light of recent historical work that understands slavery as a form of capitalism—to say that economic issues were at the center of American politics, and were understood and debated as power struggles, from the founding until World War II. These debates were particularly intense as the economy industrialized, generating mass immigration, urbanization, and unprecedentedly large concentrations of wealth (in individual hands) and power (in the hands of trusts and corporations). The early decades of the 20th century saw the advent, in response, of federal regulatory agencies, central banking, a government-enabled mass union movement, and a modern welfare state.

The war ended the Great Depression, and postwar prosperity softened American politics’ focus on economic battles. The progress of the welfare state stalled. The postwar years were full of assertions, including by liberals, that the United States had developed a workable economic order, dominated by heavily regulated industrial corporations that provided their employees with many of the welfare state functions that governments provided in other industrial democracies. The rise of Keynesian economics was a part of this story. In 1946, economics became an academic discipline with an official permanent presence in the White House, the Council of Economic Advisers. This was a manifestation of the new faith that by monitoring and managing fiscal and monetary policy, the government could keep the economy growing, inflation and unemployment under control, and future depressions at bay. This idea had the political advantage of not automatically entailing conflict in the way that, say, labor law or antitrust actions did. And it placed the focus on macroeconomics, instead of the endless jostling for advantage among economic interests. One could see “politics” and “interest groups” as the enemies of government management of the economy, rather than as its essence.

As the 20th century wore on, a series of venerable political economy tools came to be seen, at least in elite circles, as counterproductive—almost silly. On this list would be trade restrictions; price controls; industrial policy; attempts to break up big economic concentrations; attempts to shore up specific cities, towns, and regions; and policies aimed at promoting unionization. Many of these play out in politics as contests between economic institutions (Ida Tarbell battled Standard Oil on behalf of small-scale oil producers, like her father), but both academic economics and economic policy had become uninterested in the interplay of institutions. The overall health of the economy, and the welfare of consumers, became the only proper targets of economic policy. Inequities and disruptions could be addressed after the fact, through redistributionist tax policies. These were not just conservative ideas. Liberal administrations enthusiastically participated in the deregulation of airlines, trucking, energy, telecommunications, finance, and other industries. In 1987, The New York Times published a lead editorial (which it has since renounced) calling for the abolition of the minimum wage. The establishments of both parties supported NAFTA and a long series of succeeding free trade treaties.

This economic regime produced a steady rise in inequality, of both income and wealth, beginning in the early 1980s, that has not abated. That was change on the boiling-a-frog model: gradual rather than in the form of unmissable events. It took the 2008 financial crisis and the subsequent Great Recession to produce a strong political reaction against the economic certainties of the late 20th century. Since then, the unexpected rise of populist and nationalist movements—some on the left, more on the right, sometimes with a strong cultural element, always rooted in economic discontent—has dominated politics all over the world, sometimes in ways that fundamentally threaten the ongoing health of democracy. As happened in the early 20th century, in the early 21st voters have forced policy makers to pay much closer attention to the political economy than they had been paying. We are in the early stages of that period now.

Political economy becomes visible when life isn’t going so well for you. When the factory in your town moves offshore, you can see that trade policy has adversely affected your life. But if you’re well educated, living in a prospering metropolis, and you get a good job, it’s because free markets work. A necessary first step in reawakening our long-dormant awareness of political economy is realizing that economies are made, not born. There are many capitalist countries, each with a distinctive version of capitalism, shaped by law and custom and subject to ongoing modification. Individual companies—farms, hedge funds, auto manufacturers, social media platforms, pharmaceuticals—prosper (or not) based not just on their own work and ingenuity, but also on the way government has laid out the shape of the playing field and the rules of the game for them. Phillip Longman’s essay in this issue (“Everyday High Prices”) calls attention to this aspect of political economy: the vicious, but not publicly visible, struggles for advantage between retailers and their suppliers. These always involve government as a not always impartial referee. The economic prominence of private equity, a field that didn’t exist 50 years ago, rests on a series of little-noticed changes in federal regulations. The consolidation into “Big Four” or “Big Five” firms that has swept across industry after industry would not have happened with more robust antitrust policies. The mega success of tech companies like Google and Facebook was enabled by their exemption from legal responsibility for the content they carry. All these policies are the kind that happen in courts and hearing rooms, with lobbyists but not the press or the public paying close attention.

Just as the making of the current American political economy—featuring high inequality, dramatic regional and racial disparities, and a great deal of disruption of ordinary people’s lives—was too little noticed as it was happening, so too is its remaking, which is already well under way. One of the most underreported stories in America is the Biden administration’s dramatic departure from the economic policies of the past several administrations, including the Democratic ones. This administration is the most aggressive on antitrust in decades. It has made strong regulatory moves in the financial sector. It has made major forays into industrial policy, by, for example, trying to strengthen the domestic semiconductor industry and to jump-start the green energy industry. Barry Lynn’s essay in this issue (“Manufacturing and Liberty”) tells that story, and urges the administration to do more.

One should resist the temptation to believe that Republicans’ taking back control of the House of Representatives means that the age of significant Biden administration economic policy making has come to an end. It may be that a full revival of the multitrillion-dollar Build Back Better bill is not possible, but elements of it, like enhanced programs in education, training, and apprenticeship, and an in-effect industrial policy for “care work,” may well reappear. That is partly because the Republican Party is betting its future on its ability to continue taking working-class voters away from the Democrats, and doing this will require delivering more than just relentless rhetorical assaults on wokeness. Also, because so much of economic policy is made by courts and agencies, the Democrats’ continuing control of the Senate means that the administration can keep getting appointees confirmed who can carry out its mission.


If you had to take a test on your familiarity with the Biden administration’s American Rescue Plan, the Infrastructure Investment and Jobs Act, and the Inflation Reduction Act, down to the level of spending programs of $50 million and up, would you pass? I don’t think I would. These major initiatives tend to be covered as if they were championship games that the White House wins or loses, rather than for their content. It’s vitally important right now for liberals and progressives to pay attention to the enormous changes happening in economic policy. Especially on issues like antitrust, financial regulation, labor policy, and the future of what conservatives call “the administrative state,” the people on the other side are going to be in the room where decisions are made. Will they be there alone?

Along with a closer focus on these economic issues—in general, and in detail—liberals need to develop a new economic vocabulary. If you ever took an introductory economics course, you were probably taught that government attempts to reshape economies are doomed to failure, that any economic burden placed on businesses will just be transferred to consumers, that deficits and debt are irresponsible, that industry concentration is not a problem as long as it doesn’t directly harm consumers, that trade restrictions are always a bad idea, and that creative destruction is an inevitable and healthy aspect of a market economy. Attempts to push back against these bromides are often dismissed as the tiresome and counterproductive activities of politicians trying to get pork-barrel projects for their districts, as opposed to good public policy. A new set of guiding principles for economic policy would help to reframe a wide range of issues, to communicate with the many voters who feel left behind and ignored in the current economy, and to guide our assessments of specific proposals.


I’ll propose just a few of these principles now. First, great concentrations of economic power are not healthy, either for people’s well-being or for the health of our democracy. Economic power converts itself into political power, and that upsets the balances and the protections of minority rights that the Constitution aimed to establish. Princes of property (that’s Franklin D. Roosevelt’s phrase) don’t have to be ill-intentioned to do harm—only excessively influential and blind to the concerns of ordinary people. A country with large and growing gaps between people based on their education, their race, where they live, and the kind of work they do can become unjust and unstable unless those gaps are corrected. In economics as in politics—to quote James Madison again, from another of the Federalist Papers—ambition must be made to counteract ambition.

Second, the economy should be designed and managed not only to promote its overall health and growth, but also to minimize the harms to lives, to health, and to communities that constant economic disruption can bring. When large economic entities swallow up smaller ones, often by taking on debt that creates enormous pressure to lay off employees and otherwise behave in socially destructive ways, we should stop believing that, as long as it was a free market transaction, it must be good for the country. Capitalism always produces dislocation along with dynamism. The guiding principle for dealing with those dislocations should be that prevention—the job not lost, the benefits not cut, the neighborhood not allowed to wither—is far preferable to correction after the fact.

Third, economic politics, like all politics, fundamentally entails conflicts between interests. Madison had that right back in 1787. Political economy isn’t technical. It isn’t nonpartisan. It isn’t best left to experts. It isn’t best handled by applying broad universal concepts. Sweeping assertions about the virtues of markets have often served to shut down discussions that we should have had, and that we need to have now. The future of the American political economy is in play, and that means that we need to engage in the specifics, with our closest attention. That is what this package of stories aims to do.

