Talk about a Rorschach test for political leaning. We've got comments here incorrectly stating that 230 doesn't provide safe harbor for businesses that block content [it does; see below, 230(c)(2)(A), plus a toothless 230(d)], and that a business blocking a user can be a violation of 230 [it can't; although, via 230(b)(3), Congress does want to encourage software development that expressly allows users (e.g., Trump) to block content, nothing suggests that 230 is grounds for suing a business that doesn't comply with that goal. Note that Justice Thomas has suggested 230 should be changed so that it does, though, at least in some specific cases]. And then there are others mocking the previous poster and, in so doing, weirdly explaining that (paraphrased) "there's nothing about businesses who don't block content except for, you know, the part where there definitely is" [see below, 230(c)(1)], while conveniently failing to point out the subparts that expressly encourage the ability of the end user (not explicitly the business) to control what is blocked [see below, 230(b)(3)&(4)].
To summarize the relevant parts describing when the business has or doesn't have immunity from lawsuit:

(Limited protection for not blocking content) 230(c)(1)
immunity from being treated as the publisher or speaker* of information
*the originator, as I understand it, which the (f)(3) definitions section defines as "information content provider", i.e. the person who posted the comment. Yes, they confusingly use the word (information content) "provider" to also describe the person who posts the comment, not just the internet "provider" or software "provider".

(Limited protection for blocking content) 230(c)(2)(A)
immunity for good-faith restriction of access to obscene, lewd, ..., excessively harassing, or otherwise objectionable* material
*in line with the adjectives that precede it, as I understand it

(No protection for not providing content-blocking info to parents) 230(d)
no immunity* for failing to notify customers of parental controls
*except that it says this requirement can be met "in a manner deemed appropriate by the provider", which refers to "a provider of interactive computer service"**, meaning the statute requires the business to do something while giving it sole discretion as to what is "appropriate", which makes this subpart of the statute effectively moot. *smh*
**i.e., internet provider, Twitter, etc., as defined in the (f)(2) definitions section and expanded upon in (f)(4)
It is also stated that Congress' policy is to:

(Encourage businesses to give users the control to block) 230(b)(3)
...maximize user control over what information is received by (those) who use the internet

(Encourage blocking tools that empower parents) 230(b)(4)
remove disincentives for blocking and filtering technologies that empower parents to restrict kids' access
Orogogus wrote on Apr 6, 2021, 17:42:
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.
I get that posting rightwing falsehoods is your thing -- not exaggerations or distortions, but just straight up lies -- but this is public information that anyone can look up.
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
It specifically protects entities who do restrict content. There's nothing about entities who don't restrict content, except the blanket protection that shields providers such as ISPs, social media and message board hosts from liability for content they didn't create, regardless of whether or not they restrict content.