14 Replies. 1 page. Viewing page 1.
14. removed - Apr 7, 2021, 06:55
* REMOVED *

This comment was deleted on Apr 7, 2021, 09:51. Reason: Intolerance (rule 2)
"Meet the new Boss, same as the old Boss." - The Who.
13. Re: Morning Legal Briefs - Apr 7, 2021, 03:02
selection7 wrote on Apr 7, 2021, 01:49:
Talk about a Rorschach test for politcal leaning. We've got comments here incorrectly stating that 230 doesn't provide safe-harbor for businesses who block content [it does; see below 230(c)(2A) and a toothless 230(d)], and that a business blocking a user can mean a violation of 230 [it can't; although, via 230(b)(3), Congress does want to encourage software development that expressly allows users (e.g., Trump) to block content, it doesn't suggest that 230 is grounds for suing a business that doesn't comply with that goal. Note that Judge Thomas suggested 230 should be changed so that it does, though, at least allow for it in some specific cases.]. And then others mocking the previous poster and, in so doing, weirdly explaining that (paraphrased) "there's nothing about businesses who don't block content except for, you know, the part where there definitely is" [see below 230(c)(1)]. And then conveniently failing to point out the subparts that expressly encourage the ability of the end user (not explicitly the business) to control what is blocked [see below, 230(b)(3)&(4)].

To summarize the relevant parts describing when the business has or doesn't have immunity from lawsuit:
(Limited protection for not blocking content)
230(c)(1) immunity from being treated as the speaker* of information
*the originator, as I understand it, which the (f)(3) definitions section defines as "information content provider", i.e. the person who posted the comment. Yes, they confusingly use the word (information content) "provider" to also describe the person who posts the comment, not just the internet "provider" or software "provider".

(Limited protection for blocking content)
230(c)(2)(A) immunity from good-faith restriction of access to obscene, lewd, ..., excessively harassing, or otherwise objectionable* material
*in line with the adjectives that precede it, as I understand it

(No protection for not providing content blocking info to parents)
230(d) no immunity* for failing to notify customers of parental controls
*except that it says this requirement can be met "in a manner deemed appropriate by the provider", which refers to "A provider of interactive computer service"**, which means it requires the business to do something for which it has its own discretion as to what is "appropriate", which makes this subpart of the statute effectively moot. *smh*
**i.e., internet provider, twitter, etc., as defined in the (f)(2) definitions section and expanded upon in (f)(4)

It is also stated that Congress' policy is to:
(Encourage businesses to give users the control to block)
230(b)(3) ...maximize user control over what information is received by (those) who use the internet

(Encourage businesses to give users the control to block)
230(b)(4) allow incentives for blocking content that empowers parents to restrict kids' access


Orogogus wrote on Apr 6, 2021, 17:42:
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.

I get that posting rightwing falsehoods is your thing -- not exaggerations or distortions, but just straight up lies -- but this is public information that anyone can look up.

https://www.law.cornell.edu/uscode/text/47/230

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

It specifically protects entities who do restrict content. There's nothing about entities who don't restrict content, except the blanket protection that shields providers such as ISPs, social media and message board hosts from liability for content they didn't create, regardless of whether or not they restrict content.


I've read this over and over carefully and can't figure out what your point is...
So just to cut to the chase, do you think Walt is correct when he says this:
When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230
???
12. Re: Morning Legal Briefs - Apr 7, 2021, 02:32
selection7 wrote on Apr 7, 2021, 01:49:
And then others mocking the previous poster and, in so doing, weirdly explaining that (paraphrased) "there's nothing about businesses who don't block content except for, you know, the part where there definitely is" [see below 230(c)(1)].

(snip)

To summarize the relevant parts describing when the business has or doesn't have immunity from lawsuit:
(Limited protection for not blocking content)
230(c)(1) immunity from being treated as the speaker* of information

In the context of this discussion, nothing about 230(c)(1) protection rests on Facebook or Twitter's commitment to not blocking any content. They don't suddenly lose that protection if they choose to block something they deem objectionable. A company that doesn't block content enjoys the same rights as one that does.

And then conveniently failing to point out the subparts that expressly encourage the ability of the end user (not explicitly the business) to control what is blocked [see below, 230(b)(3)&(4)].

Because I felt it wasn't relevant; the protections are what they are, regardless of the intent. But if you want to talk about it, I do think you glossed over 230(b)(2): (It is the policy of the United States) "...to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation". Using the power of the government to force Twitter to do this or not do that seems like it goes directly against this principle.

Not really related:

230(d) no immunity* for failing to notify customers of parental controls
*except that it says this requirement can be met "in a manner deemed appropriate by the provider", which refers to "A provider of interactive computer service"**, which means it requires the business to do something for which it has its own discretion as to what is "appropriate", which makes this subpart of the statute effectively moot. *smh*
I don't read that the same way. The company's discretion applies to how it decides to send the required notification, not the content of that notification. Like, a mailed letter, an email, a pop-up, part of the user agreement, etc. But there's no "or else" here, so I don't know what the consequences are for not doing anything.
11. Re: Morning Legal Briefs - Apr 7, 2021, 01:49
Talk about a Rorschach test for political leaning. We've got comments here incorrectly stating that 230 doesn't provide safe-harbor for businesses who block content [it does; see below 230(c)(2)(A) and a toothless 230(d)], and that a business blocking a user can mean a violation of 230 [it can't; although, via 230(b)(3), Congress does want to encourage software development that expressly allows users (e.g., Trump) to block content, it doesn't suggest that 230 is grounds for suing a business that doesn't comply with that goal. Note, though, that Justice Thomas suggested 230 should be changed so that it does allow for it, at least in some specific cases.]. And then others mocking the previous poster and, in so doing, weirdly explaining that (paraphrased) "there's nothing about businesses who don't block content except for, you know, the part where there definitely is" [see below 230(c)(1)]. And then conveniently failing to point out the subparts that expressly encourage the ability of the end user (not explicitly the business) to control what is blocked [see below, 230(b)(3)&(4)].

To summarize the relevant parts describing when the business has or doesn't have immunity from lawsuit:
(Limited protection for not blocking content)
230(c)(1) immunity from being treated as the speaker* of information
*the originator, as I understand it, which the (f)(3) definitions section defines as "information content provider", i.e. the person who posted the comment. Yes, they confusingly use the word (information content) "provider" to also describe the person who posts the comment, not just the internet "provider" or software "provider".

(Limited protection for blocking content)
230(c)(2)(A) immunity from good-faith restriction of access to obscene, lewd, ..., excessively harassing, or otherwise objectionable* material
*in line with the adjectives that precede it, as I understand it

(No protection for not providing content blocking info to parents)
230(d) no immunity* for failing to notify customers of parental controls
*except that it says this requirement can be met "in a manner deemed appropriate by the provider", which refers to "A provider of interactive computer service"**, which means it requires the business to do something for which it has its own discretion as to what is "appropriate", which makes this subpart of the statute effectively moot. *smh*
**i.e., internet provider, twitter, etc., as defined in the (f)(2) definitions section and expanded upon in (f)(4)

It is also stated that Congress' policy is to:
(Encourage businesses to give users the control to block)
230(b)(3) ...maximize user control over what information is received by (those) who use the internet

(Encourage businesses to give users the control to block)
230(b)(4) allow incentives for blocking content that empowers parents to restrict kids' access


Orogogus wrote on Apr 6, 2021, 17:42:
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.

I get that posting rightwing falsehoods is your thing -- not exaggerations or distortions, but just straight up lies -- but this is public information that anyone can look up.

https://www.law.cornell.edu/uscode/text/47/230

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

It specifically protects entities who do restrict content. There's nothing about entities who don't restrict content, except the blanket protection that shields providers such as ISPs, social media and message board hosts from liability for content they didn't create, regardless of whether or not they restrict content.
10. Re: Morning Legal Briefs - Apr 6, 2021, 18:47
jdreyer wrote on Apr 6, 2021, 18:20:
Sepharo wrote on Apr 6, 2021, 17:56:
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.

This is not at all true and you should reevaluate the source you're getting this information from.
I wonder if he ever returns and reads these replies.

He's returned, but rarely.
He mostly just spews easily disproven information and acts as if he's the smart one for being wilfully misled.
9. Re: Morning Legal Briefs - Apr 6, 2021, 18:20
Sepharo wrote on Apr 6, 2021, 17:56:
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.

This is not at all true and you should reevaluate the source you're getting this information from.
I wonder if he ever returns and reads these replies.
'I am' is reportedly the shortest sentence in the English language. Could it be that 'I do' is the longest sentence? - GC
8. Re: Morning Legal Briefs - Apr 6, 2021, 17:56
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.

This is not at all true and you should reevaluate the source you're getting this information from.
7. Re: Morning Legal Briefs - Apr 6, 2021, 17:42
WaltC wrote on Apr 6, 2021, 17:08:
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.

I get that posting rightwing falsehoods is your thing -- not exaggerations or distortions, but just straight up lies -- but this is public information that anyone can look up.

https://www.law.cornell.edu/uscode/text/47/230

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

It specifically protects entities who do restrict content. There's nothing about entities who don't restrict content, except the blanket protection that shields providers such as ISPs, social media and message board hosts from liability for content they didn't create, regardless of whether or not they restrict content.
6. Re: Morning Legal Briefs - Apr 6, 2021, 17:13
Section 230 is Good, Actually

No, Section 230 Does Not Require Platforms to Be “Neutral”
Online platforms are within their First Amendment rights to moderate their online platforms however they like, and they’re additionally shielded by Section 230 for many types of liability for their users’ speech. It’s not one or the other. It’s both.
It's a misconception that platforms can somehow lose Section 230 protections for moderating users’ posts.
- I refer to it as BC, Before Corona, and AD, After Disaster. -
5. Re: Morning Legal Briefs - Apr 6, 2021, 17:08
Section 230 only provides safe-harbor for sites if they refuse to edit/delete posts from the public without a valid reason, (ie, egregious profanity or threats of violence against named individuals, are valid reasons for deletion.) When FB and Twitter delete and tag posts with prejudicial descriptions--it's the most extreme kind of editing there is. Disagreeing with the opinion expressed is not a valid reason to delete a post on a 230 site. Both of these sites can be sued for violating 230--which is what any honest government would do--but so far no government entity has stepped up to enforce 230's safe-harbor restrictions on these two web sites--the Trump administration, included. They should be fined $100k a day for every post they delete & tag with supercilious warnings, imo--and eventually they'd get the message, I feel sure...;) Today, we have an unusually timid government in Washington when it comes to enforcing our laws. Never seen it this bad.
It is well known that I don't make mistakes--so, if you should happen across an error in something I have written, you can be confident in the fact that *I* did not write it.
4. Re: Morning Legal Briefs - Apr 6, 2021, 15:20
First of all, when did Thomas convert to socialism?

Second of all, it's really the ISPs that need nationalization and/or common carrier designation with the same heavy regulation the power industry has (except for TX, and we saw what happened there when you deregulate). Google/FB/Twitter, those can be handled with anti-trust regulation.
'I am' is reportedly the shortest sentence in the English language. Could it be that 'I do' is the longest sentence? - GC
3. Re: Morning Legal Briefs - Apr 6, 2021, 14:45
Better get ready to force those bakeries to make those cakes for gay weddings, if we're going to force private businesses to host everyone's social media.
2. Re: Morning Legal Briefs - Apr 6, 2021, 13:23
ldonyo wrote on Apr 6, 2021, 12:50:
Clarence Thomas has more than a couple of screws loose.
Thankfully it is not an indication of how the entire court stands on the matter.
Thomas's opinion was not cosigned by any other conservative justice and that should tell you all you need to know.

If social media companies lost the power to police their platforms the internet would be overwhelmed with even more garbage than there is now.
- I refer to it as BC, Before Corona, and AD, After Disaster. -
1. Re: Morning Legal Briefs - Apr 6, 2021, 12:50
Clarence Thomas has more than a couple of screws loose.