A few minutes ago I posted a review of The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World. Below are selected highlights from it.
“The thought process that went into building these applications,” Parker told the media conference, “was all about, ‘How do we consume as much of your time and conscious attention as possible?’” To do that, he said, “We need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you more likes and comments.” He termed this the “social-validation feedback loop,” calling it “exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” He and Zuckerberg “understood this” from the beginning, he said, and “we did it anyway.”
That digital amplification [of the News Feed introduction] had tricked Facebook’s users, and even its leadership, into misperceiving the platform’s loudest voices as representing everyone, growing a flicker of anger into a wildfire. But, crucially, it had also done something else: driven engagement up. Way up. In an industry where user engagement is the primary metric of success, and in a company eager to prove that turning down Yahoo’s billion-dollar overture had been more than hubris, the news feed’s distortions were not just tolerated, they were embraced.
“There’s this conspiracy-correlation effect,” DiResta said, “in which the platform recognizes that somebody who’s interested in conspiracy A is typically likely to be interested in conspiracy B, and pops it up to them.” Facebook’s groups era promoted something more specific than passive consumption of conspiracies. Simply reading about contrails or lab-made viruses might fill twenty minutes. But joining a community organized around fighting back could become a daily ritual for months or years. Each time a user succumbed, they trained the system to nudge others to do the same. “If they bite,” DiResta said, “then they’ve reinforced that learning. Then the algorithm will take that reinforcement and increase the weighting.”
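The mechanism DiResta describes, engagement feeding back into recommendation weights, is the core loop of any engagement-optimizing recommender. Here is a toy sketch of that loop, not Facebook's actual system; the group names, weights, and update rule are illustrative assumptions only:

```python
# Toy sketch of an engagement-trained recommender, as described above.
# Not any platform's real system: the groups, weights, and update rule
# are illustrative assumptions only.

from collections import defaultdict

# Learned association weights between a topic a user engages with and
# a group the system might recommend next.
weights = defaultdict(float)

def recommend(user_topics, candidate_groups, top_n=3):
    """Rank candidate groups by their learned weight against the user's topics."""
    scored = [
        (sum(weights[(topic, group)] for topic in user_topics), group)
        for group in candidate_groups
    ]
    return [group for _, group in sorted(scored, reverse=True)[:top_n]]

def record_engagement(user_topics, joined_group, reinforcement=0.1):
    """If the user 'bites' (joins, posts, reacts), strengthen the association,
    nudging the same recommendation toward the next similar user."""
    for topic in user_topics:
        weights[(topic, joined_group)] += reinforcement

# One user interested in conspiracy A joins a conspiracy B group...
record_engagement(["conspiracy A"], "conspiracy B group")
# ...and the system now ranks that group above a neutral one for the next such user.
print(recommend(["conspiracy A"], ["gardening group", "conspiracy B group"]))
```

The point of the sketch is that nothing in the loop knows or cares what the content is; each "bite" simply raises the weight, which raises the exposure, which invites the next bite.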
And he’d joined Facebook to change minds, not to fight. He was getting into animal-rights activism, he said, and thought “it seemed like an interesting platform to be able to spread messages and persuade people.” But often he ended up expressing outrage at them instead. He was behaving in ways, he came to see, that had little chance of advancing the cause and, however fun in the moment, made him feel like a jerk afterward.
She had allowed the platforms to bring out in her the very behavior she otherwise loathed, she said. “And I just don’t see how any of this […] gets any less toxic without more of us realizing that, in our worst moments, we can be that bad guy.”
Online public shaming tended to be “over-determined,” she argued, poorly calibrated to the scale of the crime, and “of little or questionable accuracy in who and what it punishes.”
“I’m telling you, these platforms are not designed for thoughtful conversation,” Wu said. “Twitter, and Facebook, and social media platforms are designed for: ‘We’re right. They’re wrong. Let’s put this person down really fast and really hard.’ And it just amplifies every division we have.”
This thinking was widespread. Goodrow, the YouTube algorithm chief, had written, “When users spend more of their valuable time watching YouTube videos, they must perforce be happier with those videos.” It was a strange assumption. People routinely act against their self-interests. We drink or eat to excess, use dangerous drugs, procrastinate, indulge temptations of narcissism or hate. We lose our tempers, our self-control, our moral footing. Whole worlds of expertise organize around the understanding that our impulses can overpower us, usually to our detriment.
“If your job is to get that number up, at some point you run out of good, purely positive ways,” a former Facebook operations manager has said. “You start thinking about ‘Well, what are the dark patterns that I can use to get people to log back in?’”
The data revealed, as much as any foreign plot, the ways that the Valley’s products had amplified the reach, and exacerbated the impact, of malign influence. (She later termed this “ampliganda,” a sort of propaganda whose power comes from its propagation by masses of often unwitting people.)
The more her team parsed the gigs of data provided by the platforms, she said, the surer she became “that it didn’t matter so much whether it was Russia or anti-vaxxers or terrorists. That was just the dynamic that was taking shape as a result of this system.” For months, there had been signs of a great convergence on what had once been called “the Russia playbook” but increasingly looked like users and groups simply following the incentives and affordances of social media. The line had blurred, maybe for good, between groups that strategically pushed Russian-style disinformation and users who gave rise to it organically. Propagandists had become unnecessary; the system, DiResta feared, did the real work.
The ruthless specificity of YouTube’s selections was almost as disturbing as the content itself, suggesting that its systems could correctly identify a video of a partially nude child and determine that this characteristic was the video’s appeal. Showing a series of them immediately after sexually explicit material made clear that the algorithm treated the unwitting children as sexual content. The extraordinary view counts, sometimes in the millions, indicated that this was no quirk of personalization. The system had found, maybe constructed, an audience for the videos. And it was working to keep that audience engaged.
