Beyond the hope, fear, and loathing wrapped in the enigma of Elon Musk's Twitter, there are some hints of possible blue skies and sunlight, whatever your politics. A new architecture document from the Bluesky project that Jack Dorsey funded points to an important strategy for how that might be achieved -- whether by Twitter, or by others. Here are some quick notes on the key idea and why it matters.
That document is written for the technically inclined, so here are some important highlights (emphasis added):
It’s not possible to have a usable social network without moderation. Decentralizing components of existing social networks is about creating a balance that gives users the right to speech, and services the right to provide or deny reach.
Our model is that speech and reach should be two separate layers, built to work with each other. The “speech” layer should remain neutral, distributing authority and designed to ensure everyone has a voice. The “reach” layer lives on top, built for flexibility and designed to scale.
Source: Bluesky
The base layer...creates a common space for speech where everyone is free to participate, analogous to the Web where anyone can put up a website. ...Indexer services then enable reach by aggregating content from the network. Moderation occurs in multiple layers through the system, including in aggregation algorithms, thresholds based on reputation, and end-user choice. There's no one company that can decide what gets published; instead there is a marketplace of companies deciding what to carry to their audiences.
Separating speech and reach gives indexing services more freedom to moderate. Moderation action by an indexing service doesn't remove a user's identity or destroy their social graph – it only affects the service's own indexes. Users choose their indexers, and so can choose a different service or to supplement with additional services if they're unhappy with the policies of any particular service.
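To make that split concrete, here is a minimal sketch in Python (the names and interfaces are my own invention for illustration, not Bluesky's actual protocol) of a neutral speech layer with independent indexers that each decide what to carry:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    author: str  # identity lives at the speech layer; no indexer can revoke it
    text: str

@dataclass
class SpeechLayer:
    """Neutral base layer: anyone can publish, as anyone can put up a website."""
    posts: List[Post] = field(default_factory=list)

    def publish(self, post: Post) -> None:
        self.posts.append(post)  # no gatekeeper at this layer

@dataclass
class Indexer:
    """A 'reach' service: aggregates content under its own policy."""
    name: str
    policy: Callable[[Post], bool]

    def build_index(self, layer: SpeechLayer) -> List[Post]:
        # Moderation here affects only this indexer's own index,
        # not the author's identity or social graph.
        return [p for p in layer.posts if self.policy(p)]

layer = SpeechLayer()
layer.publish(Post("alice", "hello world"))
layer.publish(Post("bob", "buy now!!! cheap pills"))

strict = Indexer("strict", lambda p: "pills" not in p.text)
carry_all = Indexer("carry-all", lambda p: True)

print([p.text for p in strict.build_index(layer)])     # drops the spam
print([p.text for p in carry_all.build_index(layer)])  # carries everything
```

The point of the toy example is that both indexers read from the same speech layer; neither can remove Bob's post from the network, only from its own index.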
There is growing recognition that something along these lines is the only feasible way to manage the reach of social media, which is now running wild in democracies that value free speech. I have been writing extensively about this on this blog and in Tech Policy Press (see the list of selected items).
The Bluesky document also suggests a nice two-level structure that separates the task of labeling from the actioning task that actually controls what gets into your feed:
The act of moderation is split into two distinct concepts. The first is labeling, and the second is actioning. In a centralized system the process of content review can lead directly to a moderation decision to remove content across the site. In a distributed system the content reviewers can provide information but cannot force every moderator in the system to take action.
Labels
In a centralized system there would be a Terms of Service for the centralized service. They would hire a Trust and Safety team to label content which violates those terms. In a decentralized system there is no central point of control to be leveraged for trust and safety. Instead we need to rely on data labelers. For example, one data labeling service might add safety labels for attachments that are identified as malware, while another may provide labels for spam, and a third may have a portfolio of labels for different kinds of offensive content. Any indexer or home server could choose to subscribe to one or more of these labeling services.
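As a rough illustration of that marketplace of labelers (the interfaces here are invented for the sketch, not the actual Bluesky wire protocol), each labeling service can be modeled as a function from a post to a set of safety labels, with any indexer or home server free to subscribe to several:

```python
# Each labeler maps a post to a set of safety labels; these toy
# labelers stand in for real classification services.

def malware_labeler(post: dict) -> set:
    attachments = post.get("attachments", [])
    return {"malware"} if any(a.endswith(".exe") for a in attachments) else set()

def spam_labeler(post: dict) -> set:
    return {"spam"} if "buy now" in post["text"].lower() else set()

class LabelSubscriber:
    """An indexer or home server with a chosen list of labeling services."""

    def __init__(self, labelers):
        self.labelers = list(labelers)

    def labels_for(self, post: dict) -> set:
        labels = set()
        for labeler in self.labelers:
            labels |= labeler(post)  # merge labels across subscriptions
        return labels

server = LabelSubscriber([malware_labeler, spam_labeler])
print(server.labels_for({"text": "Buy now!", "attachments": ["setup.exe"]}))
# prints a set containing both 'spam' and 'malware'
```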
The second source of safety labels will be individuals. If a user receives a post that they consider to be spam or offensive they can apply their own safety labels to the content. These signals from users can act as the raw data for the larger labeling services to discover offensive content and train their labelers.
By giving users the ability to choose their preferred safety labelers, we allow the bar to move in both directions at once. Those that wish to have stricter labels can choose a stricter labeler, and those that want more liberal labels can choose a more liberal labeler. This will reduce the intense pressure that comes from centralized social networks trying to arrive at a universally acceptable set of values for moderating content.
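Continuing the same sketch, moving the bar in both directions is just a different subscription list per user (the profanity labeler below is another made-up example):

```python
# Reuses LabelSubscriber and the labelers from the sketch above: a
# stricter user subscribes to an extra labeler, a more permissive
# user to fewer.

def profanity_labeler(post: dict) -> set:
    return {"profanity"} if "damn" in post["text"].lower() else set()

strict_user = LabelSubscriber([malware_labeler, spam_labeler, profanity_labeler])
liberal_user = LabelSubscriber([malware_labeler])

post = {"text": "Damn it, buy now!", "attachments": []}
print(strict_user.labels_for(post))   # {'spam', 'profanity'}
print(liberal_user.labels_for(post))  # set(): no malware attachment
```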
Actions
Safety labels don’t inherently protect users from offensive content. Labels are used in order to determine which actions to take on the content. This could be any number of actions, from mild actions like displaying context, to extreme actions like permanently dropping all future content from that source. Actions such as contextualizing, flagging, hiding behind an interstitial click through, down ranking, moving to a spam timeline, hiding, or banning would be enacted by a set of rules on the safety labels.
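One way to picture that rule layer (the rule table and action names below are invented for illustration): labels carry no behavior of their own, and a swappable table decides what each one triggers.

```python
# A toy rule table mapping safety labels to actions. Users (or their
# home servers) could swap in a different table without touching the
# labels themselves.

ACTION_RULES = {
    "misleading": "add_context",   # mild: display context alongside
    "nsfw": "interstitial",        # hide behind a click-through
    "spam": "spam_timeline",       # divert to a spam timeline
    "malware": "hide",             # never show
}

# Actions ordered from mildest to most severe.
SEVERITY = ["show", "add_context", "downrank",
            "interstitial", "spam_timeline", "hide"]

def action_for(labels: set) -> str:
    """Take the most severe action that any label triggers."""
    triggered = [ACTION_RULES.get(label, "show") for label in labels]
    return max(triggered, default="show", key=SEVERITY.index)

print(action_for({"misleading"}))          # add_context
print(action_for({"spam", "misleading"}))  # spam_timeline
print(action_for(set()))                   # show
```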
This divide empowers users with increased control of their timeline. In a centralized system, all users must accept the Trust and Safety decisions of the platform, and the platform must provide a set of decisions that are roughly acceptable to all users. By decomposing labels and the resulting actions, we enable users to choose labelers and resulting actions which fit their preferences.
Each user’s home server can pull the safety labels on the candidate content for the home timeline from many sources. It can then use those labels in curating and ranking the user timeline. Once the events are sent to the client device the same safety labels can be used to feed the UX in the app.
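Pulling the sketches above together, the home server's job becomes a small pipeline (illustrative code reusing LabelSubscriber and action_for from earlier): merge labels from the subscribed sources, apply the action rules while ranking, and ship the labels to the client with each post.

```python
# Reuses LabelSubscriber and action_for from the sketches above.

def build_timeline(candidates, subscriber, base_score):
    timeline = []
    for post in candidates:
        labels = subscriber.labels_for(post)  # pulled from many sources
        action = action_for(labels)           # per the user's rule table
        if action in ("hide", "spam_timeline"):
            continue                          # curated out of this timeline
        score = base_score(post)
        if action == "downrank":
            score *= 0.2
        # Labels travel with the post so the client UX can also act on
        # them (e.g., render an interstitial before showing content).
        timeline.append({"post": post, "labels": labels, "score": score})
    return sorted(timeline, key=lambda entry: entry["score"], reverse=True)
```

Note that dropping a post here affects only this one timeline; the post still exists at the speech layer for other services to carry.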
This just hints at the wide array of factors that can be used in ranking and recommendation, which I have explored in a major piece in Tech Policy Press, and in more detail on my blog (notably this post). One point of special interest is the suggestion that a "source of safety labels will be individuals" -- I have suggested that crowdsourcing can be a powerful tool for creating a "cognitive immune system" that can be more powerful, scalable, and responsive in real time than conventional moderation.
The broader view of what this means for social media and society is the subject of the series I am doing with Chris Riley in Tech Policy Press. But this Bluesky document provides a nice explanation of some basic ideas, and demonstrates progress toward making such systems a reality.
The hope is that Twitter applies such ideas -- and that others do.