- Glenn Sonnenberg
Musings from the Bunker 3/17/21
Good morning and Happy Saint Patrick’s Day! I wanted to start with an observation about the number of deaths in the U.S. from COVID, currently around 530,000. Sometimes a number needs to be related to some other measure in order to appreciate its significance. By way of example, Yad Vashem, the Holocaust memorial in Jerusalem, has an exhibit illustrating the faces of six individuals, to give a manageable perspective on six million. Steven Spielberg did it with a single girl in a red overcoat in Schindler’s List. On Saturday, with the 365th day of the Musings, I included the song “Seasons of Love,” which notes that a year is 525,600 minutes. As Simon Furie pointed out, the number of minutes in that year mirrors very closely the number of deaths in the U.S. from COVID-19 during the same period. It is pretty close: one person dead per minute of the past year. Something to ponder…
REGULATION OF HATE SPEECH
The regulation of hate speech and the limitation on its dissemination on social media platforms is one of the great issues of our time. The vaunted “open communication” that was credited with the spread of the Arab Spring and other freedom movements has also abetted the interference with, and manipulation of, elections around the world. Social media platforms are hotbeds of political extremism, crazy conspiracy theories, and hate speech. Moreover, we often don’t know who all the players are that produce all this “content,” but we know some of them affirmatively seek to increase tension and pit American against American. The current debate on regulating social media seems to focus on what government can do through regulation (whether in regulating speech, a la the “fairness doctrine”) or what the industry can do to police itself. Either would be valuable. Some of the conversation is about how the platforms are monitoring unacceptable speech themselves. But let’s be honest about this.
Facebook, Twitter and others are in the business of attracting eyeballs and maintaining attention and “clicks.” These objectives are at odds with our society’s interest in ensuring truth, civil debate and non-violent rhetoric online. Extremism and a “common enemy” sell. Anything that will keep people online as long as possible is in the interest of these platforms. Plus, even with the best intentions, it is difficult for these platforms to monitor the hundreds of millions of posts per day, even if they chose to do so. As of 2019, Instagram alone saw over 95 million posts per day. Today the platforms determine what we see, but what if the users decided what to see? I suppose there is an argument that viewing times might drop, but I suspect so also might tempers. Shouldn’t we put the decisions and the responsibility in the hands of those who post and those who read posts? This might be a better result than relying on the purveyors of a profit-making system.
WHAT IS REQUIRED TO POST
Right now, Facebook and Twitter must employ legions of individuals and/or complex computerized “reviewers” to search the hundreds of millions of posts per day. But at the “top of the decision pyramid” are the executives who determine the methodology and then take action based upon the information unearthed by reviewing posts. It doesn’t work and probably can’t work. My best example of the absurdity of the efforts of social media to regulate posts on their sites is the rather famous (or infamous) video of Nancy Pelosi, manipulated to make her appear drunk or demented, which was reposted over and over. When presented with the need to remove it from Facebook, Mark Zuckerberg said it was legitimate satire and, therefore, should remain. Mr. Zuckerberg did not make the link between what he considered satire, the intentions of the person who posted the video, and the conclusions reached by those viewing it.
Current technology allows us to manipulate pictures and video to make people appear to do and say things that bear no relation to reality. Ought these things fly through the ether without an identifier that they are not products of the “real world”? Very well. Let’s accept his conclusion. Satire can stay. But what if we require that it be labeled as such? What if we require that those posting actually state the purpose for which the post is being made? Here are some “rules of posting” that could go a long way, basically requiring self-categorization of postings. This provides an identifier to those viewing the posts and would also provide a means of policing those postings to assure that they are what they claim to be. So if you say something is factual but it is not, it gets relabeled or eliminated. And photos, videos and writing that are clearly “made up” must be identified as satire or be deleted. Here would be my rules:
First, establish that the person posting is a person. There are too many “bots” creating and reposting content. I don’t know about you, but I have to prove I’m a person in order to use certain websites. Why would that not be possible here? We have the technology.
Second, establish who I am. I have to prove my identity to drive a car, fly on a plane, buy a gun and perform any number of other tasks. Why not require identification on-line? And if that offends someone’s sensibilities, how about at least confirming the country and state of origin, just so we have some context?
Third, require the person posting to label their post as either news, opinion, satire/parody, or scientific fact/conclusion. In that way, those reading would be better informed. Further, the sites could then establish rules for when something is miscategorized. So I’m not censoring; just identifying.
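The three rules above amount to a simple gatekeeping check a platform could run before publishing anything. Here is a minimal sketch in Python; the field names (`author_verified_human`, `author_region`) and the `accept_post` function are hypothetical illustrations of the idea, not any platform's actual API.

```python
from dataclasses import dataclass

# Rule 3: the fixed set of self-categorization labels a poster must choose from.
CATEGORIES = {"news", "opinion", "satire/parody", "scientific fact/conclusion"}

@dataclass
class Post:
    author_verified_human: bool  # rule 1: the poster proved they are not a bot
    author_region: str           # rule 2: at least a country/state of origin
    category: str                # rule 3: the poster's self-declared label
    body: str

def accept_post(post: Post) -> bool:
    """Return True only if the post satisfies all three rules."""
    if not post.author_verified_human:
        return False  # reject bot-generated content outright
    if not post.author_region:
        return False  # require at least some geographic context
    return post.category in CATEGORIES  # require a recognized label

# A labeled satire post passes; an unlabeled one is turned away.
accept_post(Post(True, "US-CA", "satire/parody", "..."))  # accepted
accept_post(Post(True, "US-CA", "", "..."))               # rejected
```

The point of the sketch is that none of this is censorship of content: the check never reads `body`. It only refuses posts that fail to identify themselves.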
CHOOSE THE ALGORITHM
Then there is the notion of the algorithm. The algorithms used by the moguls of Silicon Valley are closely guarded secrets. What an algorithm does is focus the feed of stories to fit what it believes we want, or should want. But what if we allowed users to choose the algorithm for themselves? What if you and I could determine whether we prefer the top stories to be reputable mainstream media, friends, clubs, products, or Proud Boys? My hunch is most people might well opt for a combination of traditional media and sources only on their “side” of the debate.
There now exist some start-up “middlemen” who are trying to assist in this process. We could each retain the content algorithm of a third party not affiliated with the social media site. Think of the options:
Choose an algorithm powered with a traditional media bias
Or one with a PBS/NPR slant
Or one with a liberal slant but with “proven” reliability
Ditto, but for a conservative slant that “curates” from reliable sources
Or either of the above but from more “strident” sources
And then, if you’re into having your news manipulated and processed so that it’s unrecognizable, at least you can do that. And perhaps we can merge these two concepts and identify you in postings based upon the algorithm(s) you choose to curate your intake of information.
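Mechanically, the idea above is just a pluggable ranking function: the platform supplies the pool of posts, and the user's chosen third-party algorithm orders them. A sketch, with made-up algorithm names and post fields (`source_reliability`, `from_friend`) standing in for whatever signals a real curator would use:

```python
# Two hypothetical third-party feed algorithms a user could subscribe to.
def mainstream_first(posts):
    """Rank the most reliable sources to the top of the feed."""
    return sorted(posts, key=lambda p: p["source_reliability"], reverse=True)

def friends_first(posts):
    """Rank posts from friends ahead of everything else."""
    return sorted(posts, key=lambda p: not p["from_friend"])

# A registry of independent curators the user can choose among.
ALGORITHMS = {
    "traditional-media": mainstream_first,
    "friends": friends_first,
}

def build_feed(posts, user_choice):
    """The platform provides posts; the user-selected algorithm orders them."""
    rank = ALGORITHMS[user_choice]
    return rank(posts)
```

The design point is the separation of roles: the platform never decides the ordering, so changing your feed is just changing the `user_choice` key, and the choice itself becomes a visible fact about how you curate your intake.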
BANNING ON-LINE SPEECH
I just think these sorts of alternatives, more sophisticated and nuanced, and less dependent upon the whims of corporations, are the direction to go. I have to say I was delighted to see that former president and current defendant Trump has been banned. His is the clear case, but it leads one to question the media companies’ ability to adjudicate the harder cases. Do we really want them deciding who has a voice in the public square and who does not? I also think we should encourage the industry to create its own standards for “rating itself,” much like the movie industry did in rating movies. Perhaps standards can be established for the labeling, monitoring, and control of speech that warrants certain ratings. Perhaps a third party, a la Consumer Reports, might jump into the fray. I just don’t think the companies themselves, the industry, or even disinterested third parties can be the only sources of regulation. I am very concerned with companies making unilateral decisions to silence those who today may seem incendiary, even hateful, but tomorrow might simply be inconvenient or disagreeable to those sitting in the corporate executive suites.
EVOLUTION OF THE LAW
It will be interesting to watch how antitrust law may evolve. Antitrust law historically has been concerned with protecting the consumer from the effects of market concentration; it must evolve to address the new antitrust issues these platforms present. Other government regulation might restrict or define what it means to speak in the open square on-line and whether there are consequences for fomenting violence or misinforming others. In the meantime, stop-gap measures such as those I support could be a start.
Have a good day,
Glenn