The growth of Bluesky over the past couple of weeks has been spectacular. The app now has 3.5 million daily active users and more than 22 million total users. Based on downloads, it is currently the top-ranked Google Play app in the US and the UK, second in Australia, Belgium, Canada and Spain, and third in Denmark and France.
While the initial surge was driven by people fleeing X after the presidential election, the growth now has a self-reinforcing quality through network effects. The claim that the platform is an echo chamber for people “selecting for misinformation that flatters them” was never really accurate, and is now just absurdly false. For example, a starter pack assembled by Thomas Prosser includes Helen Lewis, Razib Khan, Benjamin Ryan, Thomas Chatterton Williams, Helen Pluckrose, Cathy Young, David Frum, Timur Kuran, Sohrab Ahmari, Nicholas Christakis, Tyler Cowen, and Sarah Longwell. These are hardly intolerant ideologues seeking safe spaces.
What interests me most about Bluesky is the platform’s approach to content moderation, which allows for individual customization in an interesting way.
The approach is best illustrated with an example.
About a week ago, Colin Wright of the Manhattan Institute posted the phrase “Sex is not a spectrum” on Bluesky, to which an Intolerance label was attached. According to the platform’s moderation service, this warns users of content that involves “discrimination against protected groups.” The label has since been removed, and later in this post I’ll consider the wisdom of attaching it in the first place. But the main point I’d like to make here is that the content moderation system on the platform makes such labeling decisions quite inconsequential for those concerned about censorship.
On Bluesky, users can decide for themselves which labeled content they would like to see.1 There are several such labels, including one for content that advocates violence, and another for material that is simply “impolite… without constructive purpose.” In each of these cases, users may choose to hide the content entirely, or to see the label and click through to view the content. Or they may choose to turn the label off, in which case they will not even know that the content has been flagged:
This allows individuals to decide for themselves what they would like to see unfiltered, what they would like to see only behind a warning, and what they would prefer not to see at all. It is not censorship in any meaningful sense.
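To make the mechanics concrete, here is a minimal sketch, in Python, of how per-user label preferences might work. This is not Bluesky’s actual implementation (its labels are defined by the AT Protocol and applied by moderation services); the class names, the label string, and the three-way setting are simplified stand-ins for the hide / warn / off choices described above.

```python
from dataclasses import dataclass, field
from enum import Enum


class Visibility(Enum):
    """What a user wants done with content carrying a given label."""
    HIDE = "hide"   # do not show the post at all
    WARN = "warn"   # show a warning; reveal the content only on click-through
    SHOW = "show"   # label turned off; content appears unflagged


@dataclass
class Post:
    text: str
    labels: set = field(default_factory=set)  # e.g. {"intolerance"}


@dataclass
class UserPreferences:
    # Per-label choices; any label not listed falls back to the default.
    choices: dict = field(default_factory=dict)
    default: Visibility = Visibility.WARN

    def visibility_for(self, label: str) -> Visibility:
        return self.choices.get(label, self.default)


def render(post: Post, prefs: UserPreferences):
    """Decide how a labeled post appears in this user's feed.

    The strictest applicable setting wins: HIDE beats WARN, which beats SHOW.
    Returns None when the post should be hidden entirely.
    """
    settings = [prefs.visibility_for(label) for label in post.labels]
    if Visibility.HIDE in settings:
        return None
    if Visibility.WARN in settings:
        flagged = ", ".join(sorted(post.labels))
        return f"[warning: {flagged} - click to view] {post.text}"
    return post.text


# The same labeled post, rendered for two users with different settings.
post = Post(text="Sex is not a spectrum", labels={"intolerance"})
cautious = UserPreferences(choices={"intolerance": Visibility.WARN})
unfiltered = UserPreferences(choices={"intolerance": Visibility.SHOW})

print(render(post, cautious))    # shown behind a warning
print(render(post, unfiltered))  # shown as-is, with no flag visible
```

The point the sketch is meant to capture is that the label travels with the post, while the decision about what to do with it is made on the user’s side, one account at a time.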
Should the intolerance label have been attached to Wright’s post? Nicholas Christakis thinks that it should not have been, and I’m inclined to agree with him in this instance. However, more generally, I think that a convincing case can be made for flagging content that may seem completely innocuous if taken literally, especially with customizable content moderation in place.
The meaning of an expression doesn’t just depend on the meanings of its component words and the rules of language. It depends on who tends to use the expression and for what purpose.
Consider the phrase All Lives Matter. Taken literally, this expresses a sentiment that is perfectly harmless, perhaps even praiseworthy. But to some it is insulting and insensitive. Furthermore, meanings evolve over time through a process of selective adoption and abandonment. If people who want to cause harm keep using the expression while others start to avoid it, it will come to be seen (correctly) as an insult.
Similarly, the phrase Sex is Binary may be interpreted as a bland claim about biology, or as a coded way of saying that Caitlyn Jenner or Deirdre McCloskey should be prevented from using public bathrooms designated for women. The latter is a policy position that many—perhaps even Donald Trump—would consider intolerant.2
I am not arguing that All Lives Matter or Sex is Binary should be flagged as intolerant, or Let’s Go Brandon as rude, or From the River to the Sea, Palestine will be Free as extremist or threatening. One can argue for or against doing so in all these cases. My point is simply this—the content moderation system on Bluesky makes such labeling far less censorious than it would otherwise be. People can decide for themselves the level of protection they desire.
This approach has a somewhat paradoxical effect. By allowing people to mute content in fine-tuned and personalized ways, it reduces the incentive to mute people. As a result, one can continue to follow users who occasionally make statements that one finds offensive or distasteful, as long as they also offer value in other ways. This has the potential to expand reach beyond ideological boundaries. Whether Bluesky will live up to this promise remains to be seen, but if it does, it will end up being less of an echo chamber than its competitors.
I’m grateful to Renee DiResta for explaining exactly how this works. As I said in my last post, Renee has been unjustly maligned in recent years for her work examining and exposing online misinformation. I recommend her recent book on these issues, and a very balanced post on censorship alarmism by Dan Williams.