Non-motoring > Social Media Censoring / Control / Standards
Thread Author: No FM2R Replies: 11

 Social Media Censoring / Control / Standards - No FM2R
"Joe Biden has said previously that he would repeal Section 230 - the law that protects social media companies from being sued for the things people post. "

A quote from the BBC because I can't find Biden's original quote.

On the one hand you'd think it was a good thing; somebody has got to bring the current social media crap under control and nothing is as motivated as a company with its profits on the line.

On the other hand though, if we had Trump II, should we be worried that it would give him a weapon to force social media companies to be favourable to him?

I think it's clear that the world's population is not grown up enough or clever enough to be exposed to uncontrolled conspiracy theories and incitement, but Trump has shown how a malicious or unbalanced democratically elected leader can use fear to manipulate seemingly reasonable laws.
 Social Media Censoring / Control / Standards - No FM2R
If you care enough (and I do), this will tell you all you need to know, but it is long.

Why the most controversial US internet law is worth saving

Donald Trump and Joe Biden both want to throw out Section 230. Here’s why America should fix it instead.

by Paul M. Barrett
September 9, 2020


US president Donald Trump and his Democratic opponent, Joe Biden, agree on at least one issue: the arcane federal law known as Section 230 of the Communications Decency Act. On September 8, Trump tweeted that Republican lawmakers should “repeal Section 230, immediately.” With similar urgency, Biden had told the New York Times last December that “Section 230 should be revoked, immediately.”

Enacted in 1996 to bolster the nascent commercial internet, Section 230 protects platforms and websites from most lawsuits related to content posted by users. And it guarantees this immunity even if companies actively police the content they host.

By legally insulating online businesses, Section 230 has encouraged innovation and growth. Without the law, new internet companies would have more difficulty getting aloft, while established platforms would block many more posts in response to heightened litigation risks. Pointed political debate might get removed, and free expression would be constricted.

But many people have rightly questioned whether internet companies do enough to counter harmful content, and whether Section 230 effectively lets them off the hook. On Capitol Hill, at least a half-dozen bills have been introduced to curtail the law in various ways.

Driving this debate is the widely felt sense that the major social-media platforms—Facebook and its subsidiary Instagram; Twitter; and YouTube, which is owned by Google—do not properly manage the content they host. Evidence includes the spread of false information about elections and covid-19, conspiracy theories like QAnon, cyber-bullying, revenge porn, and much more.

There are real problems with the way Section 230 is worded today, but that doesn’t mean lawmakers should toss the whole thing out. Its core ought to be preserved, primarily to protect smaller platforms and websites from lawsuits. At the same time, the law should be updated to push internet companies to accept greater responsibility for the content on their sites. Moreover, the US needs a specialized government body—call it the Digital Regulatory Agency—to ensure that this responsibility is fulfilled. I argue for these positions in a new report for the NYU Stern Center for Business and Human Rights.

Revoke or reform?

Drafted in an era of optimism about the internet, Section 230 established a distinctly laissez-faire environment for online business. In the mid-1990s, few anticipated the overwhelming pervasiveness of today’s social-media behemoths—or the volume and variety of deleterious material they would spread.

This doesn’t mean all critiques of Section 230 are created equal. President Trump’s hostility to the law stems from his contention that platforms censor conservative speech. In an executive order he signed in late May, he singled out Twitter for having added warning labels to some of his tweets. The order called for a multi-agency assault on Section 230, involving the commerce and justice departments, the Federal Communications Commission, and the Federal Trade Commission. This appears to violate the Constitution, as the president seeks to punish Twitter for exercising the company’s First Amendment right to comment on his tweets.

Meanwhile, Senator Josh Hawley, a Missouri Republican, has introduced legislation that would encourage individuals to sue platforms for making content decisions in “bad faith”—an unsubtle invitation to conservatives who feel they’ve been the targets of politically motivated slights. In fact, there’s scant evidence of systematic anti-right bias by social-media platforms, according to two analyses by The Economist and a third by a researcher at the conservative American Enterprise Institute.

Other skeptics say Section 230 allows platforms to profit from hosting misinformation and hate speech. This is Biden’s position: that by providing a shield against litigation, the law creates a disincentive for companies to remove harmful content. In a December 2019 conversation with the New York Times editorial board, Biden responded to questions about Section 230 with pique at Facebook for failing to fact-check inaccurate Trump campaign ads about him. The law “should be revoked because [Facebook] is not merely an internet company,” he said. “It is propagating falsehoods they know to be false.”

Biden’s mistake, though, is urging revocation of Section 230 to punish Facebook, when what he really seems to want is for the company to police political advertising. He has said nothing publicly in the intervening months indicating that he has altered this position.

Several more nuanced, bipartisan reform proposals do contain ingredients worth considering. A bill cosponsored by Senators John Thune, a South Dakota Republican, and Brian Schatz, a Hawaii Democrat, would require internet companies to explain their content moderation policies to users and provide detailed quarterly statistics on which items were removed, down-ranked, or demonetized. The bill would amend Section 230 to give larger platforms just 24 hours to remove content determined by a court to be unlawful. Platforms would also have to create complaint systems that notify users within 14 days of taking down their content and provide for appeals.

More smart ideas come from experts outside government. A 2019 report (pdf) published by scholars gathered by the University of Chicago’s Booth School of Business suggests transforming Section 230 into a “quid pro quo benefit.” Platforms would have a choice: adopt additional duties related to content moderation or forgo some or all of the protections afforded by Section 230.

Quid pro quo

In my view, lawmakers should adopt the quid pro quo approach for Section 230. It provides a workable organizing principle to which any number of platform obligations could be attached. The Booth report provides examples of quids that larger platforms could offer to receive the quo of continued immunity. One would “require platform companies to ensure that their algorithms do not skew towards extreme and unreliable material to boost user engagement.” Under a second, platforms would disclose data on content moderation methods, advertising practices, and which content is being promoted and to whom.

Retooling Section 230 isn’t the only way to improve the conduct of social-media platforms. It would also be worth creating a specialized federal agency devoted to the goal. The new Digital Regulatory Agency would focus on making platforms more transparent and accountable, not on debating particular pieces of content.

For example, under a revised Section 230, the agency might audit platforms that claim their algorithms do not promote sensational material to heighten user engagement. Another potential responsibility for this new government body might be to oversee the prevalence of harmful content on various platforms—a proposal that Facebook put forward earlier this year in a white paper.

Facebook defines “prevalence” as the frequency with which detrimental material is actually viewed by a platform’s users. The US government would establish prevalence standards for comparable platforms. If a company’s prevalence metric rose above a preset threshold, Facebook suggests, that company “might be subject to greater oversight, specific improvement plans, or—in the case of repeated systematic failures—fines.”
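To make that arithmetic concrete, here is a minimal sketch (in Python) of how a prevalence metric of that kind might be computed and compared against a threshold. The figures and the threshold value are purely illustrative assumptions; neither Facebook's white paper nor the article specifies them.

    # Minimal illustration of a "prevalence" metric: the share of all content
    # views that landed on harmful material. All figures are hypothetical.

    def prevalence(harmful_views: int, total_views: int) -> float:
        if total_views == 0:
            return 0.0
        return harmful_views / total_views

    # Suppose 1,200 harmful views were measured out of 2,000,000 total views.
    metric = prevalence(1_200, 2_000_000)   # 0.0006, i.e. 0.06% of views
    THRESHOLD = 0.0005                       # hypothetical regulatory threshold

    if metric > THRESHOLD:
        print(f"Prevalence {metric:.4%} exceeds the threshold: greater oversight applies")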

Facebook, which is already estimating prevalence levels for certain categories of harmful content on its site, concedes that the measurement could be gamed. That’s why it would be important for the new agency to have a technically sophisticated staff and meaningful access to company data.

Reforming Section 230 and establishing a new digital regulator may turn, like so much else, on the outcome of the November election. But regardless of who wins, these and other ideas are available, and could prove useful in pushing platforms to take more responsibility for what’s posted and shared online.

[Paul M. Barrett is the deputy director of the NYU Stern Center for Business and Human Rights.]

Last edited by: No FM2R on Tue 17 Nov 20 at 13:17
 Social Media Censoring / Control / Standards - Netsur
The problem is that the internet is anonymous. Few of us here know who the others are.

If everyone were required to post under their real name, and the condition for using a particular forum (say Facebook) was that you indemnified it against libel actions or the spreading of false information, such activity would eventually reduce; especially if your account were frozen enough times for such misdemeanours. You can't spread hate if no one will publish you. Hence the need for verifiable names on an account, not ones based merely on email addresses.
 Social Media Censoring / Control / Standards - No FM2R
>If everyone was required to post in their name......

That wouldn't work very well if you were protesting against a local drug cartel, a fascist dictator or an oppressive state. You'd potentially be putting the whole family at risk.

True, there are problems caused by that anonymity; typically sad schoolboys in their pyjamas typing hate all over the internet when their mother isn't feeding them. But the lack of anonymity would be worse.

Better, I think, to say to a company making significant profit that they must find a way to make it all acceptable. That is probably the best chance of a compromise between letting people speak (and protecting their revenue) and keeping it under control (and protecting their revenue).
 Social Media Censoring / Control / Standards - Netsur
At the end of the day, it's all down to money.....

 Social Media Censoring / Control / Standards - No FM2R
No, it's absolutely not. The presence of a revenue stream gives leverage. But balancing the ability to speak freely against the duty not to mislead is about far more than just money.

People need to work out what they think about this, because I guarantee it's going to be a topic going forward.
 Social Media Censoring / Control / Standards - Netsur
What I meant was that the only way to get the companies to co-operate is to ensure that if they don't, they will disappear due to lack of funds.

The next big issue apart from this will be end-to-end encryption in messaging services like WhatsApp versus the state security apparatus. Living in a benign democracy I want GCHQ to have unfettered access to all communication as and when required by law for the purposes of keeping me safe from terror or major crime.

If I lived in Russia or Iran I certainly wouldn't want that level of intrusion for what I hope would be obvious reasons.

It's all part of the same issue. Is society trying to find a global solution to a problem which has different answers in each country?

 Social Media Censoring / Control / Standards - No FM2R
>>Living in a benign democracy I want GCHQ to have unfettered access to all communication as and when required by law for the purposes of keeping me safe from terror or major crime.

I'd like to agree, but Trump has just shown the major problem with that. You can grant it to a benign leadership and then get badly burned when you get a nutter in the seat.

If you grant access to one Government, you grant it to all future Governments. I can only imagine what advantage Trump could have made of that.
 Social Media Censoring / Control / Standards - No FM2R
A more readable opinion...

www.npr.org/2020/05/30/865813960/as-trump-targets-twitters-legal-shield-experts-have-a-warning
 Social Media Censoring / Control / Standards - Zero
It is an intractable situation, only fixable by employing a Great Firewall of China solution, and even that needs a whole national institutional enforcement apparatus.

The real problem, of course, is not the extreme and weird stuff that's published on the net; it's the fact that a swathe of the global population actually believes it.

You can't fix that.
 Social Media Censoring / Control / Standards - Fullchat
"The real problem of course, is not the extreme and weird stuff that's published on the net, its the fact that a swaith of the global population actually believes it."

So true, and it also influences their outlook. I find it quite disturbing that people are seriously prepared to make comments and decisions based on deliberately edited or shortened footage. They are either thick or it suits their agenda.

But where is the line between freedom of speech and the censoring of agenda-driven or politically driven misinformation?

Currently there seems to be an increasing minority of people who go round filming under the guise of 'journalists' because they have a YouTube channel, since journalism has freedoms under the Covid regulations. They are driven by far-right or anti-police/establishment agendas.
 Social Media Censoring / Control / Standards - Netsur
I had a young member of staff who came into the office one morning and announced that the Rothschilds ran the whole world economy and owned all the banks.

Before I could say anything, a non-Jewish colleague asked him, if that was the case, how come the UK Government owned a majority share in RBS? He was then educated in the wily ways of Facebook misinformation and anti-Semitism.

There are too many instances of agents provocateurs who deliberately film and publish scenes of 'police brutality' when it is nothing of the sort once the wider picture is seen.
