There are a whole host of issues raised by the case of Guy Adams, the British journalist whose Twitter account was recently suspended and then reinstated — including the potential clash between Twitter’s desire to forge commercial partnerships with media entities like NBC and its commitment to free speech. But the kind of behavior that Twitter engaged in by banning Adams also raises some other important issues for the company: as it expands its media ambitions and does more curation and manual filtering of the kind it has been doing for NBC, Twitter is gradually transforming itself from a distributor of real-time information into a publisher of editorial content, and that could have serious legal ramifications.
To recap, Twitter suspended Adams’ account several days ago because he posted the email address of an NBC executive as part of a stream of tweets criticizing the broadcast network and its Olympics coverage. According to Twitter, doing so was a breach of its “trust and safety” guidelines because the address was considered to be private (even though it was the executive’s work address). After widespread criticism of Twitter’s decision, Adams’ account was eventually reinstated on Tuesday, and the company’s general counsel Alex MacGillivray later wrote a blog post about the incident, in which he described what happened and apologized for how Twitter handled it.
As Matt Buchanan at BuzzFeed noted, however, it’s interesting to look at what Twitter apologized for and what it didn’t: the company didn’t apologize for suspending Adams’ account in the first place, despite the fact that the email address doesn’t really meet most tests of the term “private.” What MacGillivray apologized for was that a Twitter staffer — a member of the media team working with NBC on the official Olympics hub that Twitter runs in partnership with the broadcaster — was the one who alerted the network to the offending tweet, and instructed the network on how to file a complaint and have the account suspended.
This is important because it means that Twitter itself detected the offensive content and took action, rather than waiting for a user to report the message through the usual channels, and MacGillivray’s post goes to great lengths to make it clear that the company does not do this kind of thing on a regular basis, and will never (or should never) do so, saying:
“The Trust and Safety team does not actively monitor users’ content… we do not proactively report or remove content on behalf of other users no matter who they are… we should not and cannot be in the business of proactively monitoring and flagging content, no matter who the user is.”
MacGillivray says the company doesn’t want to do this because it “undermines the trust that our users have in us” — and there’s no question that what Twitter did raises all kinds of questions about how much of the network’s behavior in such cases will be determined by its corporate partnerships with giant entities like NBC, rather than its commitment to being a distributor of real-time information. That’s a real issue for the company going forward, as I tried to outline in a recent post.
But in addition to all of that, proactively monitoring content and removing it also raises some fundamental issues around Twitter’s potential liability for such behavior (which probably helps explain why the company had its general counsel respond to the Adams case instead of a PR person or even CEO Dick Costolo). To the extent that Twitter is manually curating and filtering content that flows through the network — and possibly flagging offensive messages for corporate partners — it is acting as a publisher rather than just a distributor, and therefore it could be on the hook in a legal sense.

Publishers are treated differently than networks
In the United States, there are laws that protect internet providers of various kinds (what the U.S. Communications Decency Act calls “interactive computer services”) from defamation lawsuits and other forms of legal action based on comments or messages posted by third parties. That’s because these kinds of services are defined as “distributors” or carriers of information — much like a phone company — and the idea is that a carrier can’t possibly read or listen to every message and check it for potentially offensive or illegal content.
If the company is filtering and selecting messages, however, and possibly letting certain parties know when a legally questionable one shows up, that is much more like what publishers do — and in many jurisdictions, publishers like newspapers are held to a different standard. Twitter has already been the subject of at least one lawsuit based on that principle: in Australia, a TV personality sued the network earlier this year for publishing allegedly defamatory tweets about her, and at least one lawyer commenting on the case made a direct comparison between what Twitter does and what a newspaper does, saying:
There’s not a lot of difference conceptually between Twitter or other internet publishing and an airmail copy of a newspaper; it’s just quicker.
So far, these kinds of cases have been few and far between. But the Adams case, and MacGillivray’s response to it, makes the point that this could be a significant challenge for Twitter in the future. Not only does it have to find some way to navigate between the demands of its users and the necessity of catering to advertisers and/or corporate partners like NBC — while still upholding its self-declared status as the “free-speech wing of the free-speech party” — but it has to be careful not to become too much of a curator or publisher of content, or face the potential legal liabilities that all publishers face. Welcome to the realities of being a media entity, Twitter.