Section 230 of the Communications Decency Act says services — from Facebook and Google to movie review aggregators and mom blogs with comment sections — shouldn’t be liable for most material from third parties. In those cases, it is easy to distinguish between the platform and the poster. With chatbots and AI assistants, it is not. No one quite knows whether Section 230 protects them.
Consider ChatGPT. Type in a question and it will provide an answer. Rather than displaying existing content, such as tweets, videos, or websites contributed by others, it composes its own contribution in real time. The law denies immunity to anyone who “develops” content, even “in part.” And wouldn’t turning a list of search results into a snippet count as development? What’s more, every AI contribution is heavily informed by the AI’s creators, who set the rules for their systems and shape the output by reinforcing the behaviors they like and discouraging the behaviors they don’t.
Yet at the same time, every answer ChatGPT gives is, as one analyst put it, a “remix” of third-party material. The tool generates responses by predicting the next word in a sentence, based on the words that most often follow in similar sentences across the web. And just as the creators behind a machine inform its output, so do the users who ask it questions or engage it in conversation. All of this suggests that the degree of protection afforded to AI models may vary depending on how much a given output is regurgitated versus synthesized, and on how deliberately users induce the model to produce a given response.
So far, there is no legal clarity. In recent oral arguments in a case involving Section 230, Supreme Court Justice Neil M. Gorsuch suggested that what AI generates today “goes beyond picking, choosing, analyzing, or digesting content,” and that such content “is not protected.” Last week, the law’s authors agreed with his analysis. But companies working on the next frontier deserve a firmer answer from lawmakers. To figure out what that answer should be, it’s worth taking another look at the history of the Internet.
Scholars credit Section 230 with the explosive growth of the Web during its formative years. Otherwise, they argue, endless litigation would have prevented any fledgling service from becoming as integral as Google or Facebook. That’s why many refer to Section 230 as the “26 words that created the Internet.” The problem is that, in retrospect, many believe this lack of accountability not only allowed the Internet to grow but also let it spin out of control. With AI, the country has an opportunity to act on the lessons it has learned.
The lesson should not be to preemptively strip large language models of Section 230 immunity. After all, it was good that the Internet was able to develop, even with its ills. Just as websites couldn’t have scaled without Section 230’s protections, these products can’t hope to provide a multitude of answers on a multitude of topics, across a multitude of applications — which is exactly what we should hope they do — without legal protection. At the same time, the United States cannot afford to repeat its biggest mistake in Internet governance: not governing at all.
Lawmakers should provide a temporary safe harbor under Section 230 for new AI models while watching what happens as the industry begins to boom. They should grapple with the difficult questions these tools raise, such as who should be held accountable in a defamation case if not the developer. They should study complaints, including lawsuits, and determine whether they could have been avoided by adjusting the scope of the immunity. In short, they should let the Internet of the future develop the way the Internet of the past did, but this time pay attention.