“Is there a cure for hate?” NPR asked this morning on Morning Edition, shortly before not answering the question with any degree of certainty.
“I was a Holocaust denier. I ran a computer-operated voice-mail system that was primarily anti-Semitic,” former White Aryan Resistance member Tony McAleer tells Eric Westervelt.
He was very much like the man who walked into a synagogue and massacred people because they were Jews.
He says he started a program called Life After Hate after he “reconnected with his humanity” through the compassion of a Jewish man.
“And there’s nothing more powerful — I know because it happened to me in my own life — than receiving compassion from someone who you don’t feel you deserve it from, someone from a community that you had dehumanized.”
It seems like a wholly inefficient way to change a nation heading toward increasing hate crimes and mass shootings.
“That’s the answer I can’t provide because at this point we really don’t know,” sociologist Pete Simi tells Westervelt on the question of “scalability.”
So what else might work?
Less speech, Wired’s Jason Pontin suggests in his article today, “The Case for Less Speech.”
The digital world, with its smartphones and algorithms, could have enlarged the best in humanity. Instead, he writes, it “liberated the worst.”
At one time he believed society needed all the speech it could bear. No more.
I thought nothing very bad could happen when men and women said what they wished. It’s hard to believe that today. In a catalogue wearying to relate, in just 72 hours last month we learned that Cesar Sayoc was radicalized on Facebook and threatened others on Twitter, before he sent pipe bombs to more than a dozen of President Trump’s critics; that Robert Bowers shared conspiracy theories and anti-Semitic messages on Gab, before he killed eleven people and injured six others; and that Jair Bolsonaro, a right-wing Brazilian politician best known for his pullulating hatreds (for homosexuals, Afro-Brazilians, women—pluralism itself), waged a campaign of disinformation on WhatsApp, before he won election as president. Perhaps all three, even Bolsonaro, were nuts before social media; but social media licensed their malevolence in different ways.
Last month’s outrages deepened the general mood of dismay about the mechanics and influence of social media. No one is proud of their online addictions, but those addictions now seem consequential. Increasingly, research supports the intuition that hateful speech leads to hateful acts. In reaction, some of those who know social media best have rejected the liberal tradition, and suggest that since we cannot regulate our behaviors, the companies that incite and reward bad behavior should be better regulated. Writing in WIRED magazine last January, Zeynep Tufekci, a social scientist who studies the effects of emerging technologies, observed, “John Stuart Mill’s notion that a ‘marketplace of ideas’ will elevate the truth is flatly belied by the virality of fake news,” and demanded “political decisions” with “huge trade-offs.” My colleague at WIRED Ideas, Renee DiResta, the director of research at New Knowledge, which protects companies from social media disinformation attacks, reminded “pundits and politicians howling about censorship” that “free speech does not mean free reach,” and insisted we “hold tech companies accountable… and demand transparency into how their algorithms and moderation work.”
Internet platforms were not designed to be “truth machines,” he writes, and there’s no evidence users want them to perform that role.
I don’t want speech to be less free, exactly. I want less speech absolutely and I want what is said to be less destructive. Less speech is more. Less speech, more coolly expressed, is what we all need right now: a little less goddamn talk altogether.
This, of course, will never happen, which brings us back to Westervelt’s original question, which, in the absence of a realistic and scalable solution, actually has an answer.
“No.”