Should chatbots be protected under the First Amendment? That was the central question at this year’s Ambassador Frank E. Baxter Lecture, where legal scholar and UCLA Law Professor Eugene Volokh argued that the future of free speech may depend less on who is talking and more on who is listening.
Volokh’s lecture, hosted by the Berkeley Liberty Initiative (BLI) and titled “AI, The Law and Free Speech,” examined whether the outputs of generative AI models deserve the same constitutional protections as human speech. He argued that such protections should apply even apart from any rights held by the AI companies, because of the rights of human users and audiences.
“The government shouldn’t be able to say there are certain things that cannot be placed into the marketplace of ideas, and that’s true even if those things are written by people or entities who have no First Amendment rights,” he said. “So, even if AI companies’ rights aren’t involved, the output of AI models is valuable to readers.”
He acknowledged that a court might see an AI company’s own free speech rights as not extending to the outputs of its products, on the theory that “unlike a typical person or a company that speaks, the AI company really doesn’t know what its products are outputting.” While other companies, like newspaper publishers, have humans who review every word that goes to print, AI companies create models that can produce unpredictable results, he said.
But even if that’s so, he said, the outputs should still be constitutionally protected on the basis of users’ rights rather than the company’s own. Restricting AI-generated speech, he argued, would unconstitutionally limit people’s ability to read what an AI program can create, and to use the program to produce material guided by their own prompts.
“Generally speaking, speech is protected by courts under the First Amendment in part because of the interests of the speaker, and in part because of the interests of the listeners — and usually they go hand in hand,” he said. “This is a rare situation where we might imagine the interest of the speaker not being in play. The AI itself has no free speech interests. It’s also possible, though not certain, that the AI company has no interest in the output that its software generates. But even if that’s so, the First Amendment protects the interest of the listener.”
To illustrate his point, Volokh walked through several examples of AI outputs with potentially harmful effects, including chatbots that might encourage self-harm and tools that can generate deepfakes. Even in these cases, he said, both the harm and the value of the speech stem from how humans use or respond to the content — not from whether the content is created by AI. For instance, he said, “The concern isn’t that the AI creates a deepfake, it’s that humans might use it to deceive people.”
The lecture was part of the annual Baxter Lecture series hosted by the BLI, an organization dedicated to ensuring diversity of viewpoints on campus. The BLI is housed in UC Berkeley’s Division of Social Sciences.
A strong dedication to free expression and open conversation lies at the heart of UC Berkeley’s values, a point underscored by Berkeley Social Sciences Dean Raka Ray and UC Berkeley Chancellor Rich Lyons at the event.
“The questions raised in Volokh’s lecture — about AI and protected speech — are questions we must grapple with today. If we want to be willing to disagree and debate, if we care about free speech, we have to be thinking about these issues,” Ray said. “These are the sort of questions that Berkeley Social Sciences, and indeed Berkeley at large, are committed to bringing to our community.”
Lyons thanked the BLI and those involved for the work they are doing to “make sure that all viewpoints are heard and put to the test, as we must always do. It’s so important in today’s world that we engage with ideas around us and form our own thoughts and paths as individuals.”
