
Repealing Section 230: Giving Mark Zuckerberg what he wants?

from Dean Baker

I have been engaging on Twitter recently about my ideas on repealing Section 230. Not surprisingly, I provoked a considerable response. While much of it consisted of angry ad hominems, some of it involved thoughtful comments, especially those from Jeff Kosseff and Mike Masnick, the latter of whom took the time to write a full column responding to my proposals on repeal.

I will directly respond to Mike’s column, but first I should probably outline again what I am proposing. I somewhat foolishly assumed that people had read my earlier pieces, and probably even more foolishly assumed anyone remembered them. So, I will first give the highlights of how I would like to see the law restructured and then respond to some of the points made by Mike and others.

Narrowing the Scope of 230

In my view, the best way to limit the power of a Mark Zuckerberg or Jack Dorsey to shape our political debates and influence elections is to downsize Facebook and Twitter, and possibly other sites that have grown large enough to have an outsize influence on American politics. Restructuring the protection provided by Section 230 can be a way to accomplish this goal.

As it stands, Section 230 means that Facebook and Twitter cannot be sued for defamation over third party content, whether in the form of paid advertisements or defamatory material contained in any of the billions of posts made on these sites every month. Newspapers and broadcast outlets do not enjoy this protection for third party content.

I would propose taking away this protection for sites that either accept paid advertisements or sell personal information. This means that the only sites that would still have Section 230 protection would be sites that either had paid subscriptions or survived on donations.

Since it would not be practical for a major site like Facebook to monitor every post as it was made, I proposed a notification and takedown rule similar to what now exists with material alleged to be infringing on copyrights. Under the Digital Millennium Copyright Act, a website, such as Facebook, can be subject to penalties for copyright infringement if they have been notified by the copyright holder and fail to take down the infringing material in a timely manner.

A similar rule can be put in place for allegedly defamatory material, where the person (or company) claiming defamation notifies the website, which then would have to remove the material in a timely manner in order to shield itself from potential liability.[1] Of course, many complaints alleging defamation will not be justified. If a site owner determines that a complaint is unfounded, the site need not do anything, but it would then risk a lawsuit, just as a newspaper or television station does now when it circulates allegedly defamatory items.

This sort of change would not have much impact on the vast majority of websites. A business that has its own site would generally have no third party content that it would need to worry about.

Some business sites do have customer reviews of products. For example, many retail sites allow customers to comment on items they purchased. These reviews could include some potentially defamatory comments.

A business could decide to pre-emptively get rid of its review section, avoiding any potential problems. Alternatively, it could take responsibility for monitoring its reviews and be prepared to remove potentially defamatory reviews when a complaint is made. (I assume that most of these review sections already require some degree of moderation, at least to remove comments that are obscene, racist, or otherwise offensive.) As a substitute, it could also simply link to outside sites that host reviews.

There are also a large number of sites that would still enjoy 230 protection by virtue of the fact that they do not have paid advertising or sell personal information. For example, this would be the case with most websites for policy organizations, universities, or other non-profits.

There would be a clear issue with many sites that are essentially dependent on third party content for their business. This would include sites like Yelp, which is based on customer reviews of businesses, or Airbnb, which prominently features guests’ reviews of hosts.

Without Section 230 protection these sites could be held liable for defamatory comments in these reviews. These sites could make the decision to accept responsibility for moderation (they already moderate to exclude offensive content) and be prepared to remove posts that are called to their attention as potentially defamatory.[2]

Another option would be to go to a subscription model where users paid some monthly or annual fee for use of the service. Even large sites could be supported with a fairly limited number of subscribers paying a modest fee.

As I noted in an earlier Tweet thread, the employer-review website Glassdoor had revenue of $170 million in 2020. This could be covered by roughly 3 million people paying $5 a month or 1.5 million paying $10 a month. That hardly seems like a big leap for a major website.
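As a back-of-the-envelope check on these figures (assuming the $170 million is annual revenue and fees are charged monthly):

```python
# Rough check: how many subscribers would cover Glassdoor-scale
# revenue (~$170 million/year, per the figure cited in the text)?
TARGET_REVENUE = 170_000_000  # annual revenue, USD (assumption: annual)

def subscribers_needed(monthly_fee, annual_revenue=TARGET_REVENUE):
    """Subscribers needed to cover annual_revenue at a given monthly fee."""
    return annual_revenue / (monthly_fee * 12)

print(f"$5/month:  {subscribers_needed(5):,.0f} subscribers")   # ~2.8 million
print(f"$10/month: {subscribers_needed(10):,.0f} subscribers")  # ~1.4 million
```

The exact break-even numbers come out slightly below the round 3 million and 1.5 million figures quoted above, which are evidently rounded up for rhetorical convenience.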

It is more than a bit far-fetched to claim such fees would make these sites exclusively for the rich. In prior decades it was common for working class and even poor people to have subscriptions to newspapers, which cost them far more in today’s dollars than $10 a month. There are currently over 290 million smartphone users in the United States and almost all of them are paying far more than $10 a month for their service. Needless to say, we do not have 290 million rich people in this country.

Of course, there is no guarantee that every service that exists today on an advertising model would survive a switch to a paid circulation model, but so what? Companies go out of business all the time; that is capitalism. If it turned out that very few people were willing to shell out money for a site like Glassdoor, we could infer that not very many people valued the site.

I don’t mean to be glib about the prospect that sites that some people may value a great deal may not survive this sort of change in regimes, but almost all policy that accomplishes anything positive will also have negative effects. The growth of Internet retailing put many old-line retailers out of business. And the growth of Facebook, partially fueled by Section 230 protection, has helped to put many newspapers out of business. If we think we have a policy that won’t have any undesirable effects, then we probably don’t understand the policy.

If we saw many sites switching to a paid circulation model, it is likely that we would see aggregators that charge a fee for access to a large number of sites. This would be similar to the combination of television channels offered by major cable providers. This means that instead of individually subscribing to Yelp, Glassdoor, etc., it would be possible to subscribe to a service that provided access to a wide range of sites.

It’s understandable that people would not be happy about paying for access to sites that had previously been free, but we see this all the time. Most newspapers now have paywalls, and many don’t even allow a single article to be viewed for free. (In the past, it was common for newspapers with paywalls to allow free access to some limited number of articles per month.)

Forty years ago, free broadcast television accounted for the vast majority of viewing. Households spent just $3.15 billion on cable TV in 1980, the equivalent (relative to the size of the economy) of $22.7 billion in 2020. By comparison, households spent $96.3 billion on cable television in 2020 (more than $700 per household), more than four times as much as in 1980.[3] In short, there is ample precedent for people being willing to pay for items that were formerly available for free, if they value them.
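The cable TV comparison can be verified with the figures in the text; the household count used for the per-household figure is an outside assumption, roughly the Census Bureau's 2020 estimate of about 128 million U.S. households:

```python
# Sanity-check the cable TV spending comparison.
# Dollar figures are from the text; the household count is an
# assumption (~128.45 million, roughly the 2020 Census estimate).
spend_1980_adj = 22.7e9    # 1980 spending scaled to the 2020 economy, USD
spend_2020 = 96.3e9        # 2020 household cable spending, USD
households_2020 = 128.45e6

ratio = spend_2020 / spend_1980_adj
per_household = spend_2020 / households_2020
print(f"2020 spending is {ratio:.1f}x the scaled 1980 level")  # ~4.2x
print(f"about ${per_household:,.0f} per household")            # ~$750
```

Both of the text's claims hold: the ratio is a bit above four, and the per-household figure comfortably exceeds $700.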

Would This Change Hurt Facebook?

Mike argues in his piece that changing Section 230 in the way that I have proposed would work to the benefit of Facebook, arguing that Facebook is actually now pushing for eliminating Section 230. It is true that Facebook is lobbying to have Section 230 changed, but it does not seem to be advocating eliminating this protection, at least for itself.

I’ll confess to not fully understanding the changes Facebook is advocating, but according to the Electronic Frontier Foundation (EFF), they would amount to protection from liability for defamation if a company spent a certain proportion of its revenue monitoring its site for offensive, dangerous, or defamatory material. That is certainly not the same as asking Congress to eliminate Section 230 protection altogether. (The EFF piece is titled “Facebook’s Pitch to Congress: Section 230 for Me, but not for Thee.”)

If Facebook had to operate without Section 230 protection, as I am proposing, it could face liability for defamation if it left material posted after being given notice by someone claiming damages. It is possible that it would just ignore these notices and operate as it does currently, but it seems more likely that it would take down much of the material that provided the basis for complaints. In fact, if we can extrapolate from the experience with copyright infringement claims, websites have in general been overly cautious after being given notice, often removing material that is not actually infringing.[4]

If we assume Facebook goes the compliance route, many users will see posts removed from their Facebook page over claims that they are defamatory. It seems likely that this would upset users, causing many of them to look for sites that will not remove their posts. Since sites that did not depend on advertising or selling personal information would still enjoy Section 230 protection, it seems likely that many current users would opt to leave Facebook for these alternatives.

I also pointed out that, as a simple financial matter, the Facebook leavers are likely to be more affluent, since they could easily afford the fees charged by a subscription site. While most households may be able to pay $5 or $10 a month for a subscription site, this expense would be trivial for the 30-plus percent of households with incomes over $100,000 a year.[5] This is the group that advertisers on Facebook are most interested in reaching. If a substantial percentage of higher-income users left Facebook, or used the site less frequently, it would be a big hit to the company’s profits.

It is also worth noting that, even if alternative sites may be orders of magnitude smaller in their potential reach than Facebook, this is not likely to make much difference to the overwhelming majority of Facebook users. While Facebook may have billions of users, almost none of them will ever reach more than a tiny fraction of the total with their posts. If their friends and family members moved to a site that was 0.01 percent as large as Facebook, in almost all cases they could count on reaching just as many viewers. As a practical matter, the billions of users who will never see a person’s Facebook page are irrelevant to them.

The other possibility is that Facebook would simply ignore complaints and leave potentially defamatory material posted on its site. Masnick seems to think this is a possibility for Facebook.

“First off, the actual bar for defamation is quite high, especially for public figures. Baker, incorrectly, seems to think that merely saying something false about a public figure is defamatory. That’s not how it works. It has to meet the standard of defamation, including the actual malice standard (which is not just that you were really mad when you said it). Second, and much more important for this situation, is that if the speaker was liable, that does not automatically mean that the intermediary would be liable. Under the two key cases prior to Section 230 becoming law, Cubby v. Compuserve and Stratton Oakmont v. Prodigy, the courts had to wrestle with what makes 3rd party intermediary liability consistent with the 1st Amendment.”

Of course, the bar for defamation is high, especially for public figures. That doesn’t mean such suits are not brought and occasionally successful. General William Westmoreland sued CBS News in 1982 over a segment on his conduct during the Vietnam War. The suit survived summary judgment (the wrong call, in my view) and was settled just before the case went to the jury.

More recently, the former professional wrestler Jesse Ventura won a jury verdict in his defamation suit against the estate of American Sniper author Chris Kyle. The award was later overturned on appeal, and the parties ultimately settled out of court.

But the issue of public figure defamation is the less important one. The overwhelming majority of defamation claims on a site like Facebook would not involve public figures but rather comments about a business or worker, friend, neighbor, or family member. It’s not obvious why, in these sorts of cases, Facebook should enjoy a greater level of immunity (post-notification) than a newspaper or television station.

If a person had a letter printed in a newspaper claiming that a restaurant served rotten meat, causing dozens of customers to get sick, the paper, and not just the letter writer, could be sued for libel if the claim was not true. Why should the restaurant have no recourse against Facebook if it allowed the same false claim to continue to circulate, even after the restaurant brought it to Facebook’s attention?

Apart from the cost that news organizations incur when they defend against, and possibly lose, a defamation suit, they also incur considerable expenses to avoid facing suits. News outlets carefully comb investigative pieces to ensure that they do not contain potentially defamatory material. They review third party submissions, such as columns and letters to the editor, the same way.

Section 230 ensures that Facebook does not now have to incur these expenses. Repealing this protection will unambiguously raise its costs, both relative to the outlets that do not now have Section 230 protection and also relative to sites that would still enjoy this protection.

It is not clear what constitutional issues Masnick envisions in holding intermediaries liable for carrying defamatory material. The two cases he cites both turn on whether the intermediary could reasonably have been expected to know of the defamatory material at the time it was posted. In a case where Facebook has been given a takedown notice, it obviously has knowledge of the material. The courts have apparently not seen any First Amendment issues with holding intermediaries liable for carrying material that infringes on copyrights; it’s not clear why they would then hold that the First Amendment protects them from suits over hosting defamatory material.

Is Mark Zuckerberg a Good Guy and Should We Care?

Facebook has obviously made some effort to limit the amount of false and hateful material that circulates on its site. We can be thankful for this, even if we can debate whether it has done enough.

But the more fundamental question is whether such important decisions should be left to the discretion of a private company. The disproportionate control of the media by large corporations and wealthy individuals has long been a problem, but the situation is much more serious when a single company can have the reach of Facebook.

Even if people are reasonably satisfied with Mark Zuckerberg’s moderation of Facebook, he is not going to be running the company forever. Would people be equally satisfied if some Koch-Murdoch-type billionaire were in control? Would it be okay if they started removing any content pointing out that Donald Trump lost the 2020 election by a wide margin and that the allegations of vote fraud are absurd?

When a key communications outlet gets as large as Facebook it is a real problem. We can hope that it exercises its power responsibly, but the problem is that it has the power in the first place. At the same time, no one can reasonably want the government to dictate Facebook’s moderation policy, which would raise all sorts of First Amendment issues.

The better answer lies in downsizing Facebook so that what Mark Zuckerberg or any other billionaire wants doesn’t matter so much. Taking away its Section 230 protection is an effective route to accomplishing this goal.

[1] With a site like Facebook, which effectively has a record of who has viewed any post, there could be an additional requirement that all the users that viewed the defamatory item be notified that it was removed. This would be equivalent to a newspaper publishing a retraction in response to a threat of a defamation suit.

[2] A site like Airbnb could probably also get their hosts to waive their right to sue for defamation as a condition of listing on the service.

[3] These data are taken from Bureau of Economic Analysis, National Income and Product Accounts, Table 2.4.5U, Line 217.

[4] Mike Masnick called my attention to this issue.

[5] I have argued for a tax credit system, modeled on the charitable contribution tax deduction as an alternative to copyright for supporting creative work. Such a credit would be a great way to ensure that even the poorest households could afford access to subscription sites.

  1. January 5, 2022 at 9:08 pm

The record of who was involved opens a perfect door to resurrecting the fairness doctrine: first come, first served, each given free contact time to refute obvious crapola. The fairness doctrine offering this open door to every member of the mass communication audience may be too old-fashioned and honest for the current system. Well, that’s life in the fast lane.

  2. Meta Capitalism
    January 6, 2022 at 3:53 am

The arguments for or against net neutrality may well determine whether our democracy survives or ends up like Nazi Germany. Hate speech unrestrained has consequences; lies and propaganda unrestrained have consequences, as Trump’s “Big Lie” is proving to have a life of its own and is being operationalized by the GOP while we debate meaningless minutiae, straining at gnats while swallowing the camel. In the same way the Nazis created demonic adversaries of the Jews, the GOP and Republicans are now spewing hate speech aimed at turning Democrats into the enemy to be destroyed at all costs. Such rhetoric pumped onto social media daily has consequences, and the death of democracy will be its first victim.

The Conservative takeover of the Supreme Court may well be the open door to fascism and autocracy in that it has elevated the First Amendment to an absolute in the same way Scalia elevated the 2nd to an absolute despite historical facts and evidence to the contrary.

    Contemporary Formulations of First Amendment Doctrine

    If the clear and present danger cases introduced the public to modern First Amendment jurisprudence, the Supreme Court’s “fighting words” doctrine in Chaplinsky v. New Hampshire advanced things even more.25 The Court upheld Chaplinsky’s conviction for violating a statute forbidding persons from using “offensive, derisive, and annoying” words and “derisive” names against persons in public places. Writing for the majority, Justice Murphy concluded that certain types of speech, such as “fighting words,” are not, and have never been, protected by the Constitution. Fighting words are epithets reasonably expected to provoke a violent reaction if addressed toward an “ordinary citizen.”26 “[S]uch utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”27 (Tsesis 2002, 125-126, https://a.co/fNVO2yQ)

    In Brandenburg v. Ohio,28 the Court revisited the issue of how to determine whether statutes aimed at limiting inciteful speech are constitutional. Decided in 1969, the case remains the ruling authority on this subject. The Court protected the advocacy of illegal conduct and tightened up the “clear and present danger” test in a context it found devoid of any true danger.29 At issue was a film showing the defendant, who was the leader of an Ohio Ku Klux Klan chapter, asserting that revenge might be taken against the U.S. government if it “continues to suppress the white . . . race.” Reversing the defendant’s conviction, the Court held that the First Amendment guarantee of free speech prohibits government from proscribing the “advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Further, the Court found that the Ohio statute violated the First and Fourteenth Amendments because it did not distinguish between persons who called for the immediate use of violence and those teaching an abstract doctrine about the use of force.30 However, in explaining and analyzing its decision, the Court failed to evaluate whether there were historical reasons to think that a Ku Klux Klan rally might spark racist conflict. Thus, its opinion that the speech would not incite listeners to lawless action was not grounded in an empirical foundation. (Tsesis 2002, 126, https://a.co/8a8lsVd)

The most recent Supreme Court decision on the legality of “pure bias” laws—R.A.V. v. St. Paul—imposed a near-blanket prohibition against legislation regulating speech based on its misethnic content.31 The majority opinion, which represented the views of five Supreme Court members, differed substantially from three concurring opinions. The case arose when several teenagers made a cross and then burned it on a black family’s front yard. The juveniles were charged under a St. Paul ordinance that made it a misdemeanor to display, in public or private places, symbols such as Nazi swastikas or burning crosses that are known to arouse “anger, alarm, or resentment on the basis of race, color, creed, religion or gender.”32 (Tsesis 2002, 126-127, https://a.co/bh8gPdP)

Writing for the majority, Justice Scalia found that the St. Paul ordinance violated the First Amendment because it was a form of “content discrimination.”33 Only those “fighting words” that were enumerated by the ordinance were prohibited, while other forms of potentially inflammatory utterances, such as those about persons’ political affiliations, were not so proscribed. Scalia acknowledged that St. Paul had a compelling interest in protecting the human rights of the “members of groups that have historically been subjected to discrimination.” The majority, nevertheless, held that this legislative aim could only be constitutionally effectuated by a total ban of “all fighting words, rather than focusing on hate speech.”34 Scalia made clear his belief that it is unconstitutional for legislators to adopt laws specifically intended to prohibit inflammatory racist and anti-Semitic utterances. (Tsesis 2002, 127, https://a.co/dhkuUxe)

Justice White, who wrote one of the concurring opinions to R.A.V., criticized the majority reasoning as being contrary to Supreme Court precedents. He argued that the Court had long allowed the regulation of low-level speech based on its content. It was disingenuous, in his view, to require the government to proscribe an entire class of utterances (i.e., fighting words) but forbid regulation of a subset of that class which “by definition [is] worthless and undeserving of constitutional protection.”35 According to Justice White, banning all or some fighting words would help eliminate social harms while not limiting the potential for ideas to compete in the marketplace of ideas. The majority’s approach “invites” persons to utilize racist expressions, which, in terms of the First Amendment, are worthless. (Tsesis 2002)

Further, Justice White warned that the majority’s decision would influence future First Amendment case law for the worse. The majority’s opinion signaled to the disseminators of racial and ethnic animus that their expressions are more worthy of governmental protections than the peace and tranquility of the targeted groups. By calling the use of fighting words a “debate,” the majority placed hate speech on the level of political and cultural discourse. Nevertheless, Justice White found that while the ordinance forbade certain speech that was unprotected by the First Amendment, it was nevertheless unconstitutional because of its “overbreadth” in prohibiting expressions that hurt feelings, caused offense, or produced resentment in others.36 (Tsesis 2002)

Justice Blackmun, in a separate concurrence, agreed with Justice White that the St. Paul ordinance was unconstitutionally overbroad. However, Blackmun declared that it was generally constitutional for cities to enact laws aimed at preventing hooligans from burning crosses intended to drive minority residents from their homes.37 (Tsesis 2002)

In yet another concurrence, Justice Stevens pointed out that there are many constitutional, governmental regulations that target utterances based on their content: for example, a city can “prohibit political advertisements in its buses while allowing other advertisements.”38 Therefore, the majority’s contention that all content-based regulations are unconstitutional is insupportable by First Amendment jurisprudence. Furthermore, Justice Stevens believed that just as a governmental entity could constitutionally restrict only certain forms of commercial speech, so too could St. Paul regulate only certain types of fighting words and not others. According to Justice Stevens, a city can regulate certain fighting words more stringently than others based on the greater social harms they caused. However, like other justices, Justice Stevens found that the St. Paul ordinance violated the First Amendment because it was overbroad.39 (Tsesis 2002)

The Supreme Court has long held that government cannot suppress ideas it finds “offensive or disagreeable.”40 But why not place some restrictions on certain forms of hate speech that have low or zero social value?41 The next chapter critically reflects on this issue in light of the Court’s doctrine. (Tsesis 2002)

— Tsesis, Alexander. Destructive Messages: How Hate Speech Paves the Way for Harmful Social Movements. New York: New York University Press, 2002 (Critical America; v. 27).

    Notes:

    31. 505 U.S. 377 (1992), https://supreme.justia.com/cases/federal/us/505/377/

    • Meta Capitalism
      January 6, 2022 at 3:54 am

      “Scalia elevated the 2nd to an absolute despite” should be “Scalia elevated the 2nd to an absolute despite historical facts and evidence to the contrary.”

  3. Meta Capitalism
    January 7, 2022 at 5:34 am

A book I think is critical for providing background to any discussion like the one above is Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics.

  4. Ken Zimmerman
    January 8, 2022 at 12:20 pm

    Sociology Professor Nella Van Dyke studies right-wing social movements, hate-speech, and the first amendment. In this Q&A she sheds light on the legal and social ramifications of free speech and hate speech. How do we know which inflammatory statements are legally protected and which are not?

    Van Dyke is an expert on social movements in relation to hate crimes, with recent studies of the movement against sexual assault, college student protest, LGBTQ+ college student experiences and racist hate crimes on campus. Her work has been published in leading journals including Social Forces, Social Problems and the American Sociological Review. She has co-edited two books: “Strategic Alliances: Coalition Building and Social Movements” and “Understanding the Tea Party Movement.”

    1. First off, please define free speech under the First Amendment for us. How might Americans understand this differently from other global citizens?
    The Constitution itself does not define free speech, but the First Amendment of the Constitution says “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” Because of this, every person in the United States has freedom of speech.

    2. Are there limits to free speech? What speech is not protected under the First Amendment? Is hate speech different than free speech? How do you know if something is hate speech versus someone just expressing an opinion or a bias?
    Because of the First Amendment, most speech is protected in the U.S., but not all types.
    Speech that threatens another individual, defames their character in a manner that causes damage, is considered obscene, incites violence or creates a hostile environment is illegal.
    The goal of hate speech is to silence and exclude. Hate speech is technically legal, unless it occurs in a repeated way in a location which the individual cannot avoid, thus creating a hostile environment, or, if it is directly threatening to the individual who hears it. Note that hate speech can be used as evidence in a hate-crime case. If hate speech occurs during the commission of a crime, it can be used as evidence that the crime was a hate crime, though the speech itself is not illegal.

    There is no question that racist and other bigoted speech is harmful to marginalized students and harmful to the university as a whole. The problem, however, is that hate speech is difficult to define. Some colleges have enacted anti-hate speech policies, but ironically, these have almost always ended up being used against the students they were intended to protect.
    If we allow authorities to enact laws against hate speech, they may use these laws against those seeking social justice. For example, during the 1800s, many Southern states in the U.S. made it illegal to speak out against slavery because they said it would incite violence. While it is understandable that many members of our community would like to see hate speech banned either on campus [and in social media] or by state or federal governments, these policies are unlikely to hold up in court, and we have to be careful about how much power we give authorities over us.

    3. Is the definition of free speech debatable? Are there any hard and fast rules for free speech to keep in mind if you are in doubt of whether your speech, or someone else’s, is protected?
    The courts are very consistent in their rulings on free speech. Decisions and definitions of what speech is allowed do change over time, but not very quickly, and challenges that go against established precedent are not very likely to succeed.
    UC Merced’s principles of community call for all of us to treat one another with dignity and respect, and to be civil when engaged in dialogue. Therefore, we should all try to avoid speech that dehumanizes, disparages or hurts another person. In terms of what is legal, we have more freedom. Legally, we should avoid threatening a specific individual with harm, trying to get others to commit crimes or acts of violence [Donald Trump?], or repeatedly using hate speech around an individual or particular group of individuals. However, we can all do better than that by following UC Merced’s principles of community and encouraging others to do so.

    4. How does free speech apply to the government versus private citizens or businesses?
    Only government entities are required to follow the direct limits imposed by the Constitution. Private actors must follow the law, but not the directives described in the Constitution. Public universities must therefore allow free speech, including hate speech. Private institutions, including businesses and private colleges and universities, can enact policies limiting speech, including anti-hate policies. Private citizens can do what they’d like in private (e.g., at home), as long as they obey the law. When they are acting within an institutional space, they must follow the rules of the space. Thus, an individual at a public university has the right to free speech and cannot get penalized for hate speech (unless it includes a direct threat or otherwise breaks the law), while someone on a private college campus could face disciplinary action for hate speech if it violates the campus’ speech policies.

    5. Twitter’s recent decision to suspend President Trump’s account amid the pro-Trump rebellion at the Capitol has caused much debate over free speech and censorship. Can you lend insight into Twitter’s decision?
    As a private company, Twitter has the right to decide what content or users it wants to allow. Therefore, legally it had the right to suspend Trump’s account. Twitter states that it banned his account because it determined that his tweets violated its policy against the glorification of violence. Twitter decided that his tweets “could inspire others to replicate violent acts and determined that they were highly likely to encourage and inspire people to replicate the criminal acts that took place at the U.S. Capitol on January 6, 2021.” It’s also possible that Twitter was concerned about liability because it is illegal to provide resources to those aiming to overthrow the U.S. government and it is illegal to participate in inciting violence.

    6. Parler, a social network often used by conservatives, made headlines for alleging it was an online space where free speech could truly exist. Many users feel the likes of Twitter and Facebook stunt social discourse. How are people to know whether their social platforms are truly respecting the First Amendment rights?
Social media users can do research on the platforms’ terms of service and posting policies. Users should be aware that even if they have the right to post almost any content, the platforms have algorithms that decide what content to promote. Facebook’s algorithm, for example, promotes content that evokes strong emotions, and therefore has been found to amplify conspiracy theories and fake news. Twitter, Facebook and TikTok have all recently released information about their algorithms in an effort to increase public trust, and users can find these online. Ultimately, I’m not sure anyone can be 100 percent certain that their right to free speech is being fully respected, because these are private companies that are not bound by the First Amendment.
