Can Substack handle the truth?
Substack believes in free speech. But free speech can lead people towards harm as much as towards truth. What should Substack do about that?
I’ve never watched A Few Good Men, except for this scene. But it's such a powerful scene. And it’s relevant to issues of truth and opposing views.
As I was finishing this post, I had a few thoughts about Substack's opinions that startled me and led me to other provocative thoughts about opinions in general. I'll put them all here – even though they should logically go later – because you might also find them surprising and significant.
The Substack co-founders' four blog posts about content moderation (links here) emphasize freedom, openness, reason, discussion, and trust. But ironically, the posts themselves aren't part of an open and reasoned conversation. Some opposing views are reduced to strawmen (“pressure” and “censorship”) that are easily knocked down. I can't find a basis to trust the co-founders' views about Substack over anyone else's. These ideas appear in a vacuum.
Not a complete vacuum. Implicitly or explicitly, these posts are responses to other views about speech and to perceived ills on other websites. And the posts have many comments, which I presume the authors have read.
But still, there's a vacuum: the statements about Substack's purported benefits don't appear to be derived from or supported by evidence, or refined by discussion. They spring from thought (perhaps refined by passing drafts around amongst themselves?). I'm left with the sense that the argument is “The old way is bad. Substack is a new way that we believe is good. Therefore, Substack is good.” That doesn't compute. In my view, Substack's opinion of itself has grown more polarized over time, moving further away from differing opinions.
Maybe that's the main problem with many ideas on Substack, on the internet, and in the world in general. Is everyone in a bubble, whether a bubble of one, a group of corporate executives, or a larger set of people that still exhibits groupthink? It's hard enough for each person to know and express what they personally think. Maybe going along with a group is easier and gives individuals the satisfying feeling of being right? And maybe defending and even doubling down on what your group is doing is easier than becoming more moderate? (The Substack co-founders themselves talk along these lines in a couple of paragraphs that conclude “The center does not hold.”)
Maybe what Substack needs is larger, more diverse groups, rather than just each newsletter's writers and supporters of the writers' views? (Maybe Substack as a corporation will someday develop opinions in a more distributed manner, like crypto DAOs ideally can do?)
Substack's co-founders have indicated that Substack won't be the judge or arbiter of truth. And perhaps it can't be, except for a small percentage of the thousands of newsletter posts published every week.
But I think Substack has a role to play in promoting truth. It can also play a role in fostering lies and other dubious content.
In my view, Substack recognizes it should promote truth and undermine lies. It believes it does this, through a hands-off approach to content moderation. But Substack's confidence rests on flawed perceptions of human behavior and understanding.
My Substack isn't going to resolve the age-old question “What is truth?”
However, I think you'll agree that some things are true and some are false.
This post appears on Substack: TRUE (once I click “Publish”)
This post appears in the New York Times: FALSE (unless the Times makes me an offer)
As I see it, a fact should be supported by evidence and not undermined by other evidence.
Substack Inc. is a corporation: TRUE (due to much evidence in favor and none against, as far as I'm aware)
Substack Inc. was co-founded by George Soros: Almost certainly FALSE. (It's not impossible that Soros, or someone else, was a secret co-founder. But without evidence, it can't get the label TRUE.)
Some statements are matters of opinion. They're neither true nor false. Reasonable minds may agree or disagree with the opinion. You could cite facts, evidence, or other opinions to argue for or against it.
Substack is the best place to publish a newsletter: OPINION
Substack isn't the best place to publish a newsletter: OPINION
An opinion benefits from favorable facts – and is weakened by unfavorable facts – that are supported and not undermined by evidence.
Substack enables writers to make a good living: An opinion that would be better supported by evidence that paid writers on Substack typically make over $50,000 per year than by evidence that they typically earn under $5,000 per year.
You get the idea. Evidence-based facts and opinions supported by them have a more solid foundation than assertions with little or no evidence.
Substack's co-founders apparently recognize that some ideas, such as conspiracy theories, can be damaging:
… increasingly, there are questions about how to handle questions of free speech when the internet can spread damaging ideas faster, and when vast conspiracy theories are allowed to take root via social media persuasion.
They go on to say that “the internet is broken” and that media products “distort online discourse.” They contrast these phenomena with “healthy and productive discourse” and “reasonable discussions,” which apparently they believe Substack will facilitate through hands-off policies.
I infer that the Substack co-founders are thinking along the same lines I am about facts and opinions. “Vast conspiracy theories” are “damaging” only if they're not based in fact. (Exposing real conspiracies would be good.) Indeed, the Substack co-founders elsewhere indicate that “conspiratorial narratives” are opposed to “the pursuit of truth.” Discourse is “distorted” – rather than “productive” or “reasonable” – if it fosters misinformation and badly supported opinions.
But will a hands-off approach to content make information, ideas, and discourse better?
Substack's co-founders pin much of their hope on readers. They believe Substack is
empowering readers to thoughtfully evaluate an argument’s merits for themselves.
They suggest that freedom of speech favors “good ideas” while censorship favors “bad ideas”:
Knowing that they are on a platform that defends freedom of expression can give writers and readers greater confidence that their information sources are not being manipulated in some shadowy way. To put it plainly: censorship of bad ideas makes people less likely, not more likely, to trust good ideas.
They assert that paid subscriptions will lead to better content:
People will hate-read and doom-scroll, but they won’t hate-pay or doom-subscribe. While people pay attention to content that makes them agitated, they’ll only pay money for content they trust and value.
I disagree with a few premises of the Substack co-founders' views.
First, I disagree that readers will “thoughtfully evaluate an argument's merits” on Substack if they don't already do so elsewhere. Why would they? They're just as likely as elsewhere to lack the subject-matter knowledge, objectivity, time, attention, and inclination needed to evaluate an argument on any basis other than whether it sounds good to them.
Second, I disagree that freedom of expression gives – or should give – readers more confidence that information is trustworthy. Free speech means more license not only to tell the truth but also to lie. Free speech means freedom not only to engage in sober analysis of facts but also to make hyperbolic claims about evidence-deprived speculations. Discourse between readers and writers is unlikely to help – subscribers will tend to agree with the writer (who in turn tends to seek subscribers' approval).
Third, I disagree that people “won't hate-pay or doom-subscribe.” They will. Simple as that. There's a long history, and there's recent history: populist political campaigns, armed conflicts, the pandemic... People will say outrageous and dangerous things. People will believe and pay for those things, even commit violence or die for them. In other words, they trust and value hate and doom. Some people – hopefully most – won't; but too many will. I'll cite examples if anyone wants, though I doubt it's necessary. I await the co-founders' retraction of their statement.
If Substack gets a complaint with evidence that a newsletter is committing a significant violation of the content guidelines, then I don't see why Substack shouldn't examine the situation.
I'm not saying that Substack should ban newsletter creators every day of the week. And I'm not saying the content guidelines are immutable. (I think Substack should engage in dialogue with the community rather than promulgate the guidelines with no input.)
I am saying that substantial allegations of harm – such as harm created by misinformation or misinformed opinion – shouldn't be swept aside. The allegations undermine Substack's belief in its benefits to the world.
Substack shouldn't just assume that it can't design a scalable process that protects free speech and also protects people from harm.
In any event, Substack can do more than enforce its content guidelines. It can actively promote facts and reasonable opinions.
Substack could promote information literacy – education for writers and readers on how to evaluate whether a text is supported by facts and reasonable opinions.
Substack could also provide a forum for civil discussion of subjects and posts. Or it could support an initiative to create such a forum or some other way to have dialogue across newsletters, and not just within each newsletter’s bubble.
What do you think?
This is an excellent summation, thank you for putting it together. Unfortunately, it outlines a problem that, at the moment, seems to lack any workable solution. I suspect that the approach currently being taken by Spotify with Joe Rogan is the new way forward - removing only problematic posts (podcasts, newsletters, tweets, whatever) and leaving the creator/author alone and un-deplatformed (is that a word?). But there are two issues with that approach that I'm not sure Substack is able to handle:
1. Spotify's approach stands in contrast not just to Twitter's deplatforming of people, but also to services like Facebook, TikTok, and Tumblr, which make blanket decisions about what content is and is not acceptable. Substack is trying to avoid the latter, but sooner or later something will come up and Substack will have to choose one path or the other (or be a lot more inventive than anyone else has been).
2. How much money is Substack going to be willing to throw at the problem when it does arise? Hiring teams to find, analyze, and decide what to do with problematic content doesn't come cheap.
People spout half-truths to willing and trusting audiences every day. Charlatans can be very charismatic, with all the gloss and official-looking copy or design of legit outlets. I don't think they should be surprised that people will throw their hard-earned money at a creator for “the cause,” whichever one they think is worth it.
I'm not sure there is anything they can do except wait. If extremists think they'll be able to hang a shingle here, they'll try it. And they will likely succeed. They can only stand back, tout themselves as, ironically, a safe space, and hope that audiences make a favorable choice.