As it turned out, nobody asked him to resign—or much of anything difficult. Despite scattered moments of pressure, the overwhelming impression left by the event was how poorly some senators grasped the issues. In the most revealing moment, Orrin Hatch, the eighty-four-year-old Republican from Utah, demanded to know how Facebook makes money if “users don’t pay for your service.” Zuckerberg replied, “Senator, we run ads,” allowing a small smile.
To observers inclined to distrust Zuckerberg, he was evasive to the point of amnesia—he said, more than forty times, that he would need to follow up—but when the hearing concluded, after five hours, he had emerged unscathed, and Wall Street, watching closely, rewarded him by boosting the value of Facebook’s stock by twenty billion dollars. A few days later, on the internal Facebook message board, an employee wrote that he planned to buy T-shirts reading “Senator, we run ads.”
When I asked Zuckerberg whether policymakers might try to break up Facebook, he replied, adamantly, that such a move would be a mistake. The field is “extremely competitive,” he told me. “I think sometimes people get into this mode of ‘Well, there’s not, like, an exact replacement for Facebook.’ Well, actually, that makes it more competitive, because what we really are is a system of different things: we compete with Twitter as a broadcast medium; we compete with Snapchat as a broadcast medium; we do messaging, and iMessage is default-installed on every iPhone.” He acknowledged the deeper concern. “There’s this other question, which is just, laws aside, how do we feel about these tech companies being big?” he said. But he argued that efforts to “curtail” the growth of Facebook or other Silicon Valley heavyweights would cede the field to China. “I think that anything that we’re doing to constrain them will, first, have an impact on how successful we can be in other places,” he said. “I wouldn’t worry in the near term about Chinese companies or anyone else winning in the U.S., for the most part. But there are all these places where there are day-to-day more competitive situations—in Southeast Asia, across Europe, Latin America, lots of different places.”
The rough consensus in Washington is that regulators are unlikely to try to break up Facebook. The F.T.C. will almost certainly fine the company for violations, and may consider blocking it from buying big potential competitors, but, as a former F.T.C. commissioner told me, “in the United States you’re allowed to have a monopoly position, as long as you achieve it and maintain it without doing illegal things.”
Facebook is encountering tougher treatment in Europe, where antitrust laws are stronger and the history of fascism makes people especially wary of intrusions on privacy. One of the most formidable critics of Silicon Valley is the European Union’s top antitrust regulator, Margrethe Vestager. Last year, after an investigation of Google’s search engine, Vestager accused the company of giving an “illegal advantage” to its shopping service and fined it $2.7 billion, at that time the largest fine ever imposed by the E.U. in an antitrust case. In July, she added another five-billion-dollar fine for the company’s practice of requiring device makers to preinstall Google apps.
In Brussels, Vestager is a high-profile presence—nearly six feet tall, with short black-and-silver hair. She grew up in rural Denmark, the eldest child of two Lutheran pastors, and, when I spoke to her recently, she talked about her enforcement powers in philosophical terms. “What we’re dealing with, when people start doing something illegal, is exactly as old as Adam and Eve,” she said. “Human decisions very often are guided by greed, by fear of being pushed out of the marketplace, or of losing something that’s important to you. And then, if you throw power into that cocktail of greed and fear, you have something that you can recognize throughout time.”
Vestager told me that her office has no open cases involving Facebook, but she expressed concern that the company was taking advantage of users, beginning with terms of service that she calls “unbalanced.” She paraphrased those terms as “It’s your data, but you give us a royalty-free global license to do, basically, whatever we want.” Imagine, she said, if a brick-and-mortar business asked to copy all your photographs for its unlimited, unspecified uses. “Your children, from the very first day until the confirmation, the rehearsal dinner for the wedding, the wedding itself, the first child being baptized. You would never accept that,” she said. “But this is what you accept without a blink of an eye when it’s digital.”
In Vestager’s view, a healthy market should produce competitors to Facebook that position themselves as ethical alternatives, collecting less data and seeking a smaller share of user attention. “We need social media that will allow us to have a nonaddictive, advertising-free space,” she said. “You’re more than welcome to be successful and to dramatically outgrow your competitors if customers like your product. But, if you grow to be dominant, you have a special responsibility not to misuse your dominant position to make it very difficult for others to compete against you and to attract potential customers. Of course, we keep an eye on it. If we get worried, we will start looking.”
As the pressure on Facebook has intensified, the company has been moving to fix its vulnerabilities. In December, after Sean Parker and Chamath Palihapitiya spoke publicly about the damaging psychological effects of social media, Facebook acknowledged evidence that heavy use can exacerbate anxiety and loneliness. After years of perfecting addictive features, such as “auto-play” videos, it announced a new direction: it would promote the quality, rather than the quantity, of time spent on the site. The company modified its algorithm to emphasize updates from friends and family, the kind of content most likely to promote “active engagement.” In a post, Zuckerberg wrote, “We can help make sure that Facebook is time well spent.”
The company also grappled with the possibility that it would once again become a vehicle for election-season propaganda. In 2018, hundreds of millions of people would be voting in elections around the world, including in the U.S. midterms. After years of lobbying against requirements to disclose the sources of funding for political ads, the company announced that users would now be able to look up who paid for a political ad, whom the ad targeted, and which other ads the funders had run.
Samidh Chakrabarti, the product manager in charge of Facebook’s “election integrity” work, told me that the revelations about Russia’s Internet Research Agency were deeply alarming. “This wasn’t the kind of product that any of us thought that we were working on,” he said. With the midterms approaching, the company had discovered that Russia’s model for exploiting Facebook had inspired a generation of new actors similarly focussed on skewing political debate. “There are lots of copycats,” Chakrabarti said.
Zuckerberg used to rave about the virtues of “frictionless sharing,” but these days Facebook is working on “imposing friction” to slow the spread of disinformation. In January, the company hired Nathaniel Gleicher, the former director for cybersecurity policy on President Obama’s National Security Council, to blunt “information operations.” In July, it removed thirty-two accounts running disinformation campaigns that were traced to Russia. A few weeks later, it removed more than six hundred and fifty accounts, groups, and pages with links to Russia or Iran. Depending on your point of view, the removals were a sign either of progress or of the growing scale of the problem. Regardless, they highlighted the astonishing degree to which the security of elections around the world now rests in the hands of Gleicher, Chakrabarti, and other employees at Facebook.
As hard as it is to curb election propaganda, Zuckerberg’s most intractable problem may lie elsewhere—in the struggle over which opinions can appear on Facebook, which cannot, and who gets to decide. As an engineer, Zuckerberg never wanted to wade into the realm of content. Initially, Facebook tried blocking certain kinds of material, such as posts featuring nudity, but it was forced to create long lists of exceptions, including images of breast-feeding, “acts of protest,” and works of art. Once Facebook became a venue for political debate, the problem exploded. In April, in a call with investment analysts, Zuckerberg said glumly that it was proving “easier to build an A.I. system to detect a nipple than what is hate speech.”
The cult of growth leads to the curse of bigness: every day, a billion pieces of content are posted to Facebook. At any given moment, a Facebook “content moderator” is deciding whether a post in, say, Sri Lanka meets the standard of hate speech or whether a dispute over Korean politics has crossed the line into bullying. Zuckerberg sought to avoid banning users, preferring to be a “platform for all ideas.” But he needed to prevent Facebook from becoming a swamp of hoaxes and abuse. His solution was to ban “hate speech” and impose lesser punishments for “misinformation,” a broad category that ranged from crude deceptions to simple mistakes. Facebook tried to develop rules about how the punishments would be applied, but each idiosyncratic scenario prompted more rules, and over time they became byzantine. According to Facebook training slides published by the Guardian last year, moderators were told that it was permissible to say “You are such a Jew” but not permissible to say “Irish are the best, but really French sucks,” because the latter was defining another people as “inferiors.” Users could not write “Migrants are scum,” because it is dehumanizing, but they could write “Keep the horny migrant teen-agers away from our daughters.” The distinctions were explained to trainees in arcane formulas such as “Not Protected+Quasi protected=not protected.”
In July, the issue landed, inescapably, in Zuckerberg’s lap. For years, Facebook had provided a platform to the conspiracy theorist Alex Jones, whose delusions include that the parents of children killed in the Sandy Hook school massacre are paid actors with an anti-gun agenda. Facebook was loath to ban Jones. When people complained that his rants violated rules against harassment and fake news, Facebook experimented with punishments. At first, it “reduced” him, tweaking the algorithm so that his messages would be shown to fewer people, while feeding his fans articles that fact-checked his assertions.
Then, in late July, Leonard Pozner and Veronique De La Rosa, the parents of Noah Pozner, a child killed at Sandy Hook, published an open letter addressed “Dear Mr Zuckerberg,” in which they described “living in hiding” because of death threats from conspiracy theorists, after “an almost inconceivable battle with Facebook to provide us with the most basic of protections.” In their view, Zuckerberg had “deemed that the attacks on us are immaterial, that providing assistance in removing threats is too cumbersome, and that our lives are less important than providing a safe haven for hate.”
Facebook relented, somewhat. On July 27th, it took down four of Jones’s videos and suspended him for a month. But public pressure did not let up. On August 5th, the dam broke after Apple, saying that the company “does not tolerate hate speech,” stopped distributing five podcasts associated with Jones. Facebook shut down four of Jones’s pages for “repeatedly” violating rules against hate speech and bullying. I asked Zuckerberg why Facebook had wavered in its handling of the situation. He was prickly about the suggestion: “I don’t believe that it is the right thing to ban a person for saying something that is factually incorrect.”