selection7 wrote on Apr 7, 2021, 01:49:
Talk about a Rorschach test for political leaning. We've got comments here incorrectly stating that 230 doesn't provide safe harbor for businesses that block content [it does; see below, 230(c)(2)(A) and a toothless 230(d)], and that a business blocking a user can mean a violation of 230 [it can't; although, via 230(b)(3), Congress does want to encourage software development that expressly allows users (e.g., Trump) to block content, it doesn't suggest that 230 is grounds for suing a business that doesn't comply with that goal. Note that Justice Thomas suggested 230 should be changed so that it does, though, at least in some specific cases.]. And then others mocking the previous poster and, in so doing, weirdly explaining that (paraphrased) "there's nothing about businesses who don't block content except for, you know, the part where there definitely is" [see below, 230(c)(1)]. And then conveniently failing to point out the subparts that expressly encourage the ability of the end user (not explicitly the business) to control what is blocked [see below, 230(b)(3)&(4)].
To summarize the relevant parts describing when the business has or doesn't have immunity from lawsuit:
(Limited protection for not blocking content)
230(c)(1) immunity from being treated as the publisher or speaker* of information
*the originator, as I understand it, which the (f)(3) definitions section defines as "information content provider", i.e. the person who posted the comment. Yes, they confusingly use the word (information content) "provider" to also describe the person who posts the comment, not just the internet "provider" or software "provider".
(Limited protection for blocking content)
230(c)(2)(A) immunity for good-faith restriction of access to obscene, lewd, ..., excessively violent, harassing, or otherwise objectionable* material
*in line with the adjectives that precede it, as I understand it
(No protection for not providing content blocking info to parents)
230(d) no immunity* for failing to notify customers of parental controls
*except that it says this requirement can be met "in a manner deemed appropriate by the provider", which refers to "A provider of interactive computer service"**, which means it requires the business to do something for which it has its own discretion as to what is "appropriate", which makes this subpart of the statute effectively moot. *smh*
**i.e., internet provider, twitter, etc., as defined in the (f)(2) definitions section and expanded upon in (f)(4)
It is also stated that Congress' policy is to:
(Encourage businesses to give users the control to block)
230(b)(3) ...maximize user control over what information is received by (those) who use the internet
(Encourage businesses to give users the control to block)
230(b)(4) allow incentives for blocking content that empowers parents to restrict kids' access
Orogogus wrote on Apr 6, 2021, 17:42:
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.
I get that posting rightwing falsehoods is your thing -- not exaggerations or distortions, but just straight up lies -- but this is public information that anyone can look up.
https://www.law.cornell.edu/uscode/text/47/230
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
It specifically protects entities who do restrict content. There's nothing about entities who don't restrict content, except the blanket protection that shields providers such as ISPs, social media and message board hosts from liability for content they didn't create, regardless of whether or not they restrict content.
When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230???
selection7 wrote on Apr 7, 2021, 01:49:
And then others mocking the previous poster and, in so doing, weirdly explaining that (paraphrased) "there's nothing about businesses who don't block content except for, you know, the part where there definitely is" [see below 230(c)(1)].
(snip)
To summarize the relevant parts describing when the business has or doesn't have immunity from lawsuit:
(Limited protection for not blocking content)
230(c)(1) immunity from being equated as the speaker* of information
And then conveniently failing to point out the subparts that expressly encourage the ability of the end user (not explicitly the business) to control what is blocked [see below, 230(b)(3)&(4)].
230(d) no immunity* for failing to notify customers of parental controls
I don't read that the same way. The company's discretion applies to how it decides to send the required notification, not the content of that notification. Like, a mailed letter, an email, a pop-up, part of the user agreement, etc. But there's no "or else" here, so I don't know what the consequences are for not doing anything.
*except that it says this requirement can be met "in a manner deemed appropriate by the provider", which refers to "A provider of interactive computer service"**, which means it requires the business to do something for which it has its own discretion as to what is "appropriate", which makes this subpart of the statute effectively moot. *smh*
Orogogus wrote on Apr 6, 2021, 17:42:
WaltC wrote on Apr 6, 2021, 17:08:
(snip)
jdreyer wrote on Apr 6, 2021, 18:20:
Sepharo wrote on Apr 6, 2021, 17:56:
I wonder if he ever returns and reads these replies.
WaltC wrote on Apr 6, 2021, 17:08:
(snip)
This is not at all true and you should reevaluate the source you're getting this information from.
WaltC wrote on Apr 6, 2021, 17:08:
(snip)
Online platforms are within their First Amendment rights to moderate their online platforms however they like, and they’re additionally shielded by Section 230 for many types of liability for their users’ speech. It’s not one or the other. It’s both.
It's a misconception that platforms can somehow lose Section 230 protections for moderating users’ posts.
ldonyo wrote on Apr 6, 2021, 12:50:
Thankfully it is not an indication of how the entire court stands on the matter.
Clarence Thomas has more than a couple of screws loose.