The Thing Missing From Critics' Censorship Arguments About Tech Platforms And Parler

Let's Start Talking About Solutions

The first few weeks of the new year have been a whirlwind in the political world. Most of the major talking points about the events at the Capitol have already been made, so I’ll spare you my thoughts on that.

What I would like to discuss is the aftermath of what’s happened with the popular, conservative-leaning app Parler. As you may have heard, it was de-listed from the major app stores. Amazon, through its web hosting business Amazon Web Services (AWS), also decided to stop hosting the app.

As a result, Parler is left with few options.

This comes as YouTube decided to suspend Trump’s account and after Facebook and Twitter placed similar restrictions on him in the days prior.

Immediately after all of this, self-described advocates of free speech and critics of big tech companies became increasingly vocal about why the actions taken against Parler were inappropriate. Many warned that this could set a dangerous precedent.

This post isn’t about making a judgment call on what’s right or wrong, or who’s right and who’s wrong. It’s about what’s missing from the argument that critics are making.

What’s lacking in the current discourse is a clear view of where the free speech line is and what alternative actions, if any, should have been taken. People are criticizing the fallout without proposing alternative paths or clearly drawing a line in the sand.

Let’s start with common ground. I think most (or at least many) of us can agree that we want the ability to speak freely on social platforms, so long as the message doesn’t violate the law (libel, slander, threats, etc.). I imagine we can also agree that when someone uses a medium to incite violence, there should be constraints or limitations placed on the message and/or the messenger.

Agree or disagree so far? If you disagree, let me know where and why. (Seriously.)

What’s troubling about what I’ve read so far, primarily on social media, is that the argument that platforms and companies have gone too far comes without more constructive ideas or suggested paths.

Here are a few examples (click through to read each):


Maybe there’s some validity to these views, maybe not; that debate is for a different time and place.

If (and that’s a strong if) people generally agree that there should be some form of limitation on what people can say on these platforms, then what action do you (if you are a critic) think tech companies should take when a user or app crosses that line?

According to Amazon, the apparent violations by Parler seem very straightforward and leave little up for interpretation:

Does this not meet the standard that critics think should apply? If not, should no action or moderation be taken at all? What level of speech would call for action?

These aren’t rhetorical or facetious questions; they’re meant to spur an important conversation. I sincerely think that any app or company that facilitates this level of discourse should be held responsible, if not by the hosting platforms, then by the law.

By focusing the conversation here, maybe we can get one step closer to agreeing on a better path forward and consensus on how to address issues like this in the future.

I may be shouting into the void here, but I’m striving to understand more about the possible solutions versus focusing solely on the problem. Critics, your turn to talk.