The story so far: This week, the Supreme Court of the United States (SCOTUS) began hearing two pivotal lawsuits that will, for the first time, ask it to interpret Section 230 of the U.S. Communications Decency Act of 1996, the law that has shielded tech companies from liability for decades and essentially shaped the internet as we know it. The lawsuits pose a long-standing question, asked since the nascent days of the internet: should digital companies be held liable for the content that users post on their platforms?
What are the two lawsuits?
Both lawsuits have been brought by families of those killed in Islamic State (ISIS) terror attacks. The first lawsuit, Gonzalez v. Google, has been filed by the family of Nohemi Gonzalez, a 23-year-old American who was killed while studying in Paris, in the 2015 ISIS terror attacks that left 129 people dead. The family is suing YouTube-parent Google for “affirmatively recommending ISIS videos to users” through its recommendations algorithm. The Court filings say that the video-sharing platform YouTube “aided and abetted” the Islamic State in carrying out acts actionable under U.S. anti-terrorism law.
In their filing, the family of the late Ms. Gonzalez said: “The defendants (Google) are alleged to have recommended that users view inflammatory videos created by ISIS, videos which played a key role in recruiting fighters to join ISIS in its subjugation of a large area of the Middle East, and to commit terrorist acts in their home countries.”
The first hearing in this case took place on Tuesday, February 21. Before approaching the Supreme Court, the petitioners had sued Google and others, including Twitter and Facebook; the case eventually reached the United States Court of Appeals for the Ninth Circuit. Twitter and Facebook were later removed from the case, and Google denied that it had helped spread extremist messages. In 2021, the Ninth Circuit dismissed the claim, holding that Google was protected under Section 230 of the Communications Decency Act of 1996, prompting the family to move the top court.
The second case, Twitter v. Taamneh, heard by SCOTUS for the first time on Wednesday, pertains to a lawsuit filed by the family of a Jordanian citizen killed in an ISIS attack on a nightclub in Istanbul, Turkey, in 2017. The lawsuit relies on the Antiterrorism Act, which allows U.S. nationals to sue anyone who “aids and abets” international terrorism “by knowingly providing substantial assistance.” The family argues that despite knowing that their platforms played an important role in ISIS’s terrorism efforts, Twitter and the other tech companies failed to take action to keep ISIS content off those platforms. The family also says that the platforms assisted the growth of ISIS by recommending extremist content through their algorithms.
What is Section 230 and how did it come about?
If a news website falsely calls someone a con artist, that person can file a libel suit against the publisher. However, with Section 230 of the U.S. Communications Decency Act in place, if a person posts on Facebook that the same individual is a fraud, the individual cannot sue the platform, only the person who posted it. The provision is essentially a “safe harbour” or “liability shield” for social media platforms and any other website on the internet that hosts user-generated content, such as Reddit, Wikipedia, or Yelp.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” — these words, enshrined in Section 230, have been described by legal scholar Jeff Kosseff’s recent book and by Lisa Blatt, Google’s lawyer in the Gonzalez case, as the “26 words that created the internet”.
As the Associated Press puts it, the legal phrase acts as a shield for companies that host trillions of messages, protecting them from a deluge of lawsuits by anyone who feels wronged by something posted by someone else on their platforms, whether the complaint is legitimate or not.
Section 230 also allows interactive computer service providers to engage in content moderation, removing posts that violate their guidelines or are, for instance, obscene. Under the statute, these platforms can remove content posted on them as long as it is done in “good faith”.
In the 1990s, the early days of the World Wide Web, content of all kinds, including obscene material, was circulating online with little oversight. Concerned U.S. lawmakers introduced a bipartisan Bill in 1995 that resulted in the Communications Decency Act the next year, which governed obscene content online and also made platforms responsible for any such content posted on them.
Subsequently, two of the early internet platforms — CompuServe and Prodigy — got entangled in lawsuits over allegedly defamatory posts. CompuServe, which did not moderate content on its site, was sued over a user’s post (the case was later dismissed). On the other hand, a judge ruled that Prodigy, which did moderate content, was liable for posted speech as a publisher, since such moderation amounted to editorial control.
Following the Prodigy judgement, two lawmakers, Rep. Ron Wyden and Rep. Chris Cox, fearing that the Communications Decency Act would stop nascent internet companies from tapping their potential and would discourage them from exercising any editorial control over posted content, authored an amendment to the Act. Thus, Section 230 was born.
Why are tech companies and digital rights groups saying that any alteration to the law will change the internet?
In January this year, a group of tech companies, websites, academics, internet users, and rights groups filed amicus curiae briefs in the Supreme Court, urging it not to change Section 230 and outlining the sweeping impact such a move could have on the internet.
Twitter argued in its filing that for decades, “courts have construed Section 230(c)(1) to protect interactive computer service providers from liability arising from third-party content on their websites”. It stated that in 2020, “40 zettabytes of online data were generated worldwide”, and that Section 230 allows platforms to moderate such huge volumes of content and present the “most relevant” information to users. It added that the company has frequently relied on the statute to protect it from “myriad lawsuits”, including many pertaining to anti-terrorism law.
In its filing, Reddit, the content aggregation and discussion forum, argued that a ruling targeting automated algorithms could in the future hamper manually curated recommendations of the kind found on Reddit. “There should be no mistaking the consequences of petitioners’ claim in this case: their theory would dramatically expand Internet users’ potential to be sued for their online interactions,” said Reddit.
Microsoft, in its filing, pointed out that altering Section 230 would also impact the millions of developers using GitHub, its platform for hosting and building open-source software. It highlighted that GitHub uses recommendation algorithms to suggest “software to users based on projects they have worked on or showed interest in previously”, and that changing the statute could be “devastating” for digital infrastructure.
Digital rights and free speech activist Evan Greer also pointed out that holding platforms liable for what their recommendation algorithms present could lead to the suppression of legitimate third-party information of political or social importance, such as content created by minority rights groups or non-profits.
What is the initial opinion of the Court?
In the initial hearing of the Gonzalez case last week, the Justices of the Supreme Court seemed cautious about altering the liability shield that is Section 230. Chief Justice John Roberts suggested that YouTube was not “pitching something in particular to the person who’s made the request” but was merely acting as a “21st-century version” of something that has been happening for a long time: putting together a group of things the person may want to look at.
Justice Clarence Thomas invoked another analogy used in previous cases against Section 230, asking whether YouTube’s algorithm treats all subjects equally, whether it recommends “rice pilaf recipes and terrorist content” alike. He was answered in the affirmative.
Justice Elena Kagan noted that algorithms were involved in everything anyone searches for on the internet, whether on Google Search, YouTube, or Twitter. She asked the Gonzalez family’s lawyers whether agreeing with them would ultimately make Section 230 meaningless.
Google’s counsel, Ms. Blatt, told the Court that recommendations were just a way of organizing the mountain of information uploaded to the internet every day. While Ms. Blatt submitted that it was Congress’ move to enact Section 230 that protected platforms from liabilities and helped the internet flourish, Justice Ketanji Brown Jackson rejected this interpretation.
However, a separate brief filed in Court by the authors of the statute does state that the law was enacted to give immunity to the nascent internet and bring in a “technology-agnostic immunity provision that would protect Internet platforms from liability for failing to perfectly screen unlawful content”.
(With inputs from agencies)