In 1996, Congress passed Section 230, a law that promoted online free speech. Now, more than two decades later, the darker sides of online speech have become apparent.
Pretty much everyone who voices their thoughts, opinions and ideas online is protected by Section 230. Section 230 is a far-reaching provision that shields not only individual users and small blogs and websites but also tech giants like Twitter and Google, as well as any other online service that facilitates user expression. Its scope has been affirmed by numerous court rulings, which have established that Section 230 precludes lawsuits against both users and service providers for sharing or hosting content produced by others, whether it be forwarding emails, hosting reviews, or sharing objectionable photos or videos. Additionally, Section 230 safeguards the curation of online speech by affording intermediaries the legal space to determine what kinds of user expression they will host and to moderate content in ways they deem appropriate.
This week, the US Supreme Court will hear arguments in Gonzalez v. Google and Twitter v. Taamneh. If the plaintiffs convince the court to increase the legal exposure of platforms such as Twitter and Google, the protections originally enacted as part of Section 230 could be all but destroyed.
Gonzalez v. Google concerns content moderation on Google's video hosting platform, YouTube. At issue is whether YouTube can be sued for hosting videos that feature harmful content and pose a danger to society.
Things become blurrier when we consider that tech giants such as Google and Twitter use algorithms to recommend videos to users. The bigger question is this: should those algorithms enjoy the full protection of Section 230?
Algorithms offering personalised recommendations lead naturally to the question of how the law should treat AI models. Built on learned behaviour, AI presents a new way of consuming content, and it operates in completely uncharted territory.
As an advocate for AI, I can understand the perks of integrating new technology. Whilst there are many positives, I can also see why some view it negatively. Several companies are currently pitching new and exciting ways to search and find answers online, the most popular being ChatGPT and the integration of AI into Microsoft's Bing search engine.
The answers that systems such as ChatGPT provide can be riddled with inaccuracies and potentially biased content. Technologies such as ChatGPT and Google's Bard would not exist without relying on other people's words and views. This is where the law becomes much foggier: integration into search engines leaves the Googles of the world in danger of promoting and spreading false and/or defamatory information.
The search engines we know and love are protected by Section 230, whereas these new innovations are likely to operate slightly outside those protections.
The boundary between AI-generated summaries and traditional search results can be ambiguous. Even regular Google searches feature answer boxes that offer commentary on the results, and such features have been known to unintentionally disseminate harmful misinformation.
As the courts revisit the basic principles of internet law, they are doing so at the start of a new technological era that has the potential to revolutionize the internet. However, this could also entail legal liabilities whilst changing the way we interact with the internet forever.
Tej Kohli is a philanthropist, technologist and investor.