The Supreme Court may be about to decide the legal fate of AI search

The Supreme Court is about to reconsider Section 230, a law that has been the foundation of the internet for decades. But whatever the court decides, it could ultimately change the rules for a technology that’s only just getting started: AI-powered search engines like Google Bard and Microsoft’s new Bing.

Next week, the Supreme Court will take up Gonzalez v. Google, one of a pair of related legal complaints. Gonzalez is nominally about whether YouTube can be sued for hosting the accounts of foreign terrorists. But the much larger underlying question is whether algorithmic recommendations should receive the full legal protections of Section 230, since YouTube recommended those accounts to others. While everyone from tech giants to Wikipedia’s editors has warned of potential fallout if the court cuts back these protections, the case raises particularly interesting questions for AI search, a field with almost no direct legal precedent to draw on.

Companies are pitching large language models like OpenAI’s ChatGPT as the future of search, arguing they can replace increasingly cluttered conventional search engines. (I’m ambivalent about calling them “artificial intelligence”; they’re really highly sophisticated auto-prediction tools, but the term has stuck.) They typically replace a list of links with a footnote-laden summary of text drawn from across the entire web, providing conversational answers to questions.

Old-fashioned search engines can rely on Section 230, but AI-powered search engines are unfamiliar territory

These summaries are often hedged or framed as relying on other people’s views. But they can still introduce inaccuracies: Bard got an astronomical fact wrong in its very first demo, and Bing completely fabricated financial results for a public company (among other mistakes) in its first demo. And even when they simply summarize other content from the internet, the web itself is full of false information. That means there’s a good chance they’ll pass something false along, just as regular search engines do. If those errors cross the line into spreading defamatory information or other unlawful speech, the search providers could be at risk of lawsuits.

Traditional search engine interfaces can count on some degree of Section 230 protection if they link to false information, since they are simply posting links to content from other sources. The situation for AI-powered chatbot search interfaces is much murkier. “This would be a whole new question for the courts to answer,” says Jeff Kosseff, a U.S. Naval Academy law professor and author of The Twenty-Six Words That Created the Internet, a history of Section 230. “And I think part of it will depend on what the Supreme Court does in the Gonzalez case.”

If Section 230 remains largely unchanged, many hypothetical future cases will hinge on whether an AI search engine repeated someone else’s unlawful speech or produced its own. Web services can claim Section 230 protection even if they slightly change the language of a user’s original content. (In an example Kosseff offers, a news site could fix the grammar of a defamatory comment without taking responsibility for its message.) So simply tweaking a few words might not make an AI tool accountable for what it says. Microsoft CEO Satya Nadella has suggested that AI-powered Bing faces essentially the same legal issues as vanilla Bing, and right now, the biggest legal questions around AI-generated content involve copyright infringement, which falls outside the scope of Section 230.

There are still limits here, though. Language models can “hallucinate” incorrect facts, like the Google and Bing errors above, and if these engines originate a mistake, they’re on shaky legal footing under any version of Section 230. How shaky? Until it goes to court, we won’t know.

“There is a real danger in making a rule very specific to 2023 technology”

But Gonzalez could make AI search riskier even when engines accurately summarize someone else’s statement. The crux of the case is whether a web service can lose Section 230 protection by organizing user-generated content in a way that promotes or highlights it. Courts may not be eager to go back and apply such a ruling to ubiquitous services like old-fashioned search engines, and the Gonzalez plaintiffs have taken pains to argue that it wouldn’t. Even if courts are careful, though, they may be less inclined to give newer services a pass, since those services will have grown up under the new precedent. That’s particularly true for services like AI search engines, which dress up search results as the direct speech of a digital persona.

“This case involves a fairly specific type of algorithm, but it’s also the first time in 27 years that the Supreme Court has interpreted Section 230,” says Kosseff. “There is a danger that whatever the court does will have to last [another] 27 years. And I think there is a real danger in making a rule that is very specific to the technology of 2023, which could look completely outdated in five or 10 years.” If Gonzalez results in tougher limits on Section 230, courts could decide that summarizing a statement makes an AI search engine responsible for it, even when it’s accurately repeating what was said elsewhere.

Precedents around people lightly editing posts by hand offer only a limited guide to complex, large-scale AI-generated writing. Courts will ultimately have to decide how much summarizing is too much for Section 230, and their decision could be colored by the political and cultural climate, not just the letter of the law. Judges have interpreted Section 230’s protections expansively in the past, but amid an anti-tech backlash and a Supreme Court re-evaluation of the law, they may not grant each new technology the kind of leeway earlier platforms were given. And the current Supreme Court has proven willing to discard legal precedent, overturning the landmark Roe v. Wade decision, while some individual justices wage a culture war over online speech. Justice Clarence Thomas, for example, has specifically advocated putting Section 230 on the chopping block.

The line between AI search and conventional search is not always clear

This does not mean that all AI search is legally doomed. Section 230 is an incredibly important law, but removing it wouldn’t automatically hand people a win in a lawsuit over every incorrectly summarized fact. In a defamation case, for instance, plaintiffs must demonstrate, among other things, that the information is false and that they were harmed by it. “Even if 230 didn’t apply, there wouldn’t be automatic liability,” Kosseff notes.

The question gets even muddier because the language people use in queries already affects conventional search results, and you can deliberately trick language models into producing false information with leading questions. If you enter dozens of queries to get Bard to falsely tell you that some celebrity committed murder, is that legally equivalent to Bard making the accusation when someone simply searches the person’s name? So far, no judge has ruled on this question, and it’s not clear it has even been raised in court.

And the line between AI summaries and conventional search isn’t always clear. Google’s regular search results page already features suggested answer boxes that add an editorial layer on top of the search results. These have surfaced potentially dangerous misinformation in the past: in one search snippet, Google inadvertently turned a list of “don’ts” for handling a seizure into a list of recommendations. So far, that hasn’t led to a deluge of lawsuits.

But as courts rethink the fundamentals of internet law, they’re doing so at the dawn of a new technology that could transform the internet, and one that could pose plenty of legal risks along the way.