About Section 230 (May 2023 Update)


We update the history of Section 230 in light of the recent Supreme Court decisions. What it is, what it isn’t and how those decisions affected or didn’t affect the future of the “safe harbor” law in the US.

Featuring Tom Merritt.

MP3

Please SUBSCRIBE HERE.

A special thanks to all our supporters–without you, none of this would be possible.

Thanks to Kevin MacLeod of Incompetech.com for the theme music.

Thanks to Garrett Weinzierl for the logo!

Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit

Send us email to feedback@dailytechnewsshow.com

Episode transcript:

The US Supreme Court has decided two cases that challenged the protections of Section 230 of the US Communications Decency Act, and in both cases the court decided not to touch those protections. In oral arguments for the cases, the court indicated it felt maybe Congress should be the one to do that.
In Twitter v. Taamneh, the plaintiffs argued that Twitter provided unlawful material support to terrorists by failing to remove their accounts from its platform. In Gonzalez v. Google, the plaintiffs claimed that a platform, in this case YouTube, should be liable for content it recommended to users.
A lot of people misunderstand what Section 230 does and doesn’t do. So in this updated episode, I’ll cover the basics of what it is and what it isn’t and what the court did and did not say in these landmark cases.

We covered the history and meaning of Section 230 in depth in the episode About Safe Harbor in July 2020. So if you want the deep dive please listen to that.
This episode will focus on how to properly explain and think about Section 230 no matter what argument you may or may not be trying to make. You may think Section 230 promotes censorship. You may think it protects big tech companies from responsibility. You may think it should be repealed. Those are all reasonable positions to take. But I often hear people argue these sorts of positions from a starting point that is wrong. I just want to give you the correct starting point from which you can make your argument.
So let’s start with the folks who say we should just get rid of it. There is a misconception that if we get rid of Section 230, companies would have to take responsibility for the content on their platforms, or that they would have to stop censoring. Neither one of those things is assured.
Without Section 230, ANY platform. And it’s worth pointing out this applies to a forum you might run on your own website, as well as to Facebook. Without Section 230, any platform would be seen in the eyes of the law as either a publisher of information or a distributor. A publisher is responsible for what it publishes. A distributor is not responsible for the contents of what it distributes.
The easiest way to think about this is a brick-and-mortar bookstore. The publishers of the books and magazines it sells are responsible for what’s in those books and magazines. The bookstore is just the distributor. In fact, a 1959 Supreme Court case ruled that a bookstore owner cannot reasonably be expected to know the contents of every book the store sells. The owner should only be liable if they knew or should have known that selling something was specifically illegal. Otherwise the publisher is liable for what’s in the book or magazine.
Now let’s think about that for a minute. The bookstore can decide what magazines to carry. But it’s not deciding what’s in the magazine. It isn’t allowed to sell magazines that it knows are illegal but it’s not expected to read every word of every magazine to police its content.
On the other hand, letters to the editor published in the magazines are in fact the responsibility of the publisher. Just because a reader wrote the letter doesn’t mean the publisher had to print it. It CHOSE to print it. It exercised editorial control, and therefore is liable for what the reader wrote.
The publisher of the content is not protected from liability. But the bookstore gets protection because it’s not exercising editorial control of what’s in the books. It’s a distributor.
Fast forward to the 1990s. Compuserve and Prodigy are vibrant new parts of the internet where people are talking to each other like never before.
It’s April 1990. Sinéad O’Connor’s new song “Nothing Compares 2 U” (written by Prince) tops the Billboard charts.
Robert Blanchard has developed Skuttlebut, a database for TV news and radio gossip. It’s a new competitor for a similar service called Rumorville, published over on Compuserve’s Journalism forum. Skuttlebut and Rumorville are in stiff competition for the burgeoning online audience that wants TV and radio news industry gossip. This is FIVE YEARS before the Drudge Report mind you.
In the heat of the competition, Rumorville posts that Skuttlebut has been getting info from a back door at Rumorville, that Skuttlebut’s owner, Robert Blanchard, got “bounced” by WABC, and describes Skuttlebut as a “scam.”
So Skuttlebut’s owner, Cubby Inc., sued Rumorville’s parent company, but also sued Compuserve as the publisher. But here’s the thing. Compuserve did not review Rumorville’s content. Once it was uploaded, it was available. Compuserve also didn’t get any money from Rumorville. The only money it made was off the subscribers to Compuserve itself, whether they read Rumorville or not.
In Cubby, Inc. v. Compuserve, the judge ruled that Compuserve was not a publisher. It was a distributor. It could not reasonably know what was in the thousands of publications it carried on its service. Therefore, like a bookstore, Compuserve was not liable for what was published in Rumorville.
Reminder. This is without Section 230. The platform was not exercising control over the content so it was not liable for what was in it.
On to October 1994. Boyz II Men is dominating the charts in a long run at number one with “I’ll Make Love to You.”
Prodigy’s Money Talk message board is still awash in talk about the bond market crisis. And an anonymous user posts that securities investment firm Stratton Oakmont committed crime and fraud related to a stock IPO. Stratton Oakmont takes exception to what it considers defamation and files a lawsuit against Prodigy, alleging the company is the publisher of the information.
So you’d think, given the Compuserve case, that Prodigy is in good shape. It didn’t publish the comments; the commenter did.
Except. It’s been a few years, and a few raging internet flame wars later, and Prodigy, like many other platforms, has developed some Content Guidelines for users to follow. It also has Board Leaders who are charged with enforcing those guidelines. And Prodigy even uses some automated software to screen for offensive language. This is all good community moderation practice, right? A clear set of guidelines. Consequences if you violate them. And even some automated ways to keep some of the bad stuff from ever showing up.
The court looked at that and said, well, looks to us like you’re exercising editorial control. You’re deciding who gets to post what. That feels a lot more like the letters to the editor than it does the bookstore. The court wrote “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.”
In Stratton Oakmont v. Prodigy, the court ruled in favor of Stratton Oakmont.
After that case, the law stood like this: courts will give you the protection of a distributor as long as you don’t moderate. If you moderate the content, you’re on the hook for it.
So in other words before Section 230, you could either leave everything up or you’d have to be responsible for everything, meaning you’d have to pre-screen all posts. Your choice is either zero moderation or prior restraint.
Republican Chris Cox and Democrat Ron Wyden both thought this was not an ideal situation. So they wrote Section 230 of the Communications Decency Act, which reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Those are the 26 words usually cited as Section 230. But that’s just paragraph (1) of subsection (c). There’s a second paragraph of subsection (c) which is also important. It’s titled “Civil liability” and it reads:
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
In other words, even if it’s protected free speech, the platform can take down content it finds objectionable and not lose its protections from liability for other content.
All of this is a long way to say: if the platform didn’t create the content, it’s not responsible for it... with a few exceptions.
This is another part of the discussion of Section 230 that gets left out. Section 230 specifically says that it has no effect on criminal law, intellectual property law, communications privacy law, or sex trafficking law. So the DMCA, for example, still has to be followed. You still have to respond to copyright takedown notices.
So back to the two Supreme Court cases Twitter v. Taamneh and Gonzalez v. Google.
We have to remember that platforms are still responsible for content THEY generate.
If Facebook’s own staff post something on Facebook defaming you, Section 230 does not protect Facebook. Section 230 only means Facebook is not on the hook for what I post.
So what about recommendations? What about the stuff in my feed that Facebook chose to show me without my input? Facebook didn’t create the content, but it chose to show it to me specifically, not to everyone. That would certainly have counted as editorial control before Section 230, but Section 230 was put in place specifically to allow a measure of editorial control, such as removing posts, without having to take responsibility for all posts.
Also remember that “terrorist” content qualifies as criminal content, which Section 230 does not protect. So how long can criminal content stay up before a platform “should” have known about it and taken it down? Specific to Twitter v. Taamneh, was Twitter “aiding and abetting” terrorists when it failed to remove such content?
Bearing on both the question of algorithms and criminal content is one more case that tested Section 230 shortly after it became law.
It’s April 25, 1995. Montell Jordan’s “This Is How We Do It” tops the charts.
And someone has posted a message on an AOL bulletin board called “Naughty Oklahoma T-Shirts” describing the sale of shirts featuring offensive and tasteless slogans related to the Oklahoma City bombing, which had happened six days before. The posting listed the phone number of Kenneth Zeran in Seattle, Washington, who had no knowledge of the posting. He then received a high volume of calls, mostly angry about the post. Some calls were death threats. Zeran called AOL, which said it would remove the post. However, the next day a new post was made, and more posts followed over the next four days. One of the posts was picked up by a radio announcer at KRXO in Oklahoma City, who encouraged listeners to call the number. Zeran required police protection and sued KRXO, and then, separately, AOL.
In its decision, the United States Court of Appeals for the Fourth Circuit wrote “It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect.”
It also wrote that Section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.”
Zeran argued that even if AOL wasn’t a publisher, it was a distributor, and under the 1959 case a distributor would still be responsible for speech it knew was defamatory. And Zeran argued AOL knew, because he called them about it after the first post. The court, however, said that AOL is a publisher, not a distributor, plain and simple. But Section 230 shields it from the liability normally imposed on a publisher. So you can’t just redefine it as a distributor to get around that.
This ended up as a stronger protection than the 1959 case gave distributors. Instead of having to take content down once they knew about it, internet services were given a broader shield.
And that became the principal justification for broad CDA 230 protections.
And if the Supreme Court followed that precedent, it might also consider recommendations to be publishing behavior and therefore protected.
However, that’s not what happened. Instead, the court seems to think that algorithmic recommendations are new enough that it’s unclear whether Section 230 properly applies to them.
During oral arguments for Gonzalez v. Google on February 22, 2023, multiple Justices indicated they thought Congress should rule on whether algorithmic recommendations should be considered to cause liability or not.
Justice Elena Kagan said, “This was a pre-algorithm statute, and everyone is trying their best to figure out how this statute applies. Every time anyone looks at anything on the internet, there is an algorithm involved.”
Justice Ketanji Brown Jackson said, “To the extent that the question today is can we be sued for making recommendations, that’s just not something the statute was directed to.”
And Justice Brett Kavanaugh said, “Isn’t it better to keep it the way it is, for us, and to put the burden on Congress to change that, and they can consider the implications and make these predictive judgments?”
Then on May 18, 2023, the court issued its decision in both cases. Both unanimous.
In Twitter v. Taamneh, the court dismissed the allegations that Twitter violated the US Antiterrorism Act by failing to remove posts before a deadly attack. Justice Clarence Thomas wrote the opinion for the unanimous court, saying that Twitter’s failure to police content was not an “affirmative act.”
And he expressed concern that if aiding-and-abetting liability is taken too far, merchants could become liable for misuse of their goods. He pointed out that email service providers should not be held liable for the contents of emails. In fact, he explicitly compared Twitter to email and cell phone providers, who aren’t culpable for their users’ behavior. A cell phone service provider is not culpable for the illegal drug deals made over its phones.
Specifically regarding Twitter he wrote “There are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants’ relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm’s length, passive, and largely indifferent.”
And he even touched on the main issue from the other case, algorithmic recommendations. He wrote, “the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”
That all meant the court could essentially dodge the entire issue in Gonzalez v. Google, which had rested more on YouTube being liable for its recommendations.
In an unsigned opinion the court wrote that the “liability claims are materially identical to those at issue in Twitter…” And “Since we hold that the complaint in that case fails to state a claim for aiding and abetting … it appears to follow that the complaint here likewise fails to state such a claim.” And “we therefore decline to address the application of section 230.” So the claims in Gonzalez were also dismissed.
In essence, these opinions say that if algorithms are not specific to a kind of content, then recommending is not an “affirmative act.” And if you want to change that, then Congress needs to pass a new law.
These two decisions left Section 230 unchanged.
In the end what I want folks to take away is that Section 230 doesn’t free a tech platform to do whatever it wants. It frees a platform to choose to moderate and exercise editorial control over the posts of others without having to assume responsibility for the thousands, and now millions of posts made every day.
It’s reasonable to argue that perhaps there are some responsibilities that should be restored to tech platforms through legislation. I think it’s worth pointing out that repealing Section 230 altogether would not necessarily achieve that.
So I hope you now have a firmer foundation on which to base your opinion, whatever it is. In other words, I hope you know a little more about Section 230.

CREDITS
Know A Little More is researched, written and hosted by me, Tom Merritt. Editing and production provided by Anthony Lemos in conjunction with Will Sattelberg and Dog and Pony Show Audio. It’s issued under a Creative Commons Share Attribution 4.0 International License.