It doesn’t take long for a conversation about artificial intelligence to take a dark turn.
The new technology poses a greater threat than climate change, some have said. It poses a “risk of extinction.” And the analogy du jour: AI development is akin to the race for nuclear weapons, so regulation should follow templates like the Manhattan Project or the International Atomic Energy Agency. The juxtaposition is so common that The New York Times recently published a quiz with the headline: “A.I. or Nuclear Weapons: Can You Tell These Quotes Apart?”
Convenient as the comparisons may be, the reality is more complicated, said Cathryn Carson, a UC Berkeley historian of science who has studied the development of nuclear energy and the regulatory policies that followed. The risks are new, Carson said. But the way people and regulators respond follows a familiar set of moves that has unfolded repeatedly in recent history.
As tech companies have gained more power and prominence, historians like Carson have argued that their work should serve as a guidebook — so that we don’t repeat the same mistakes.
“The only way we can understand how the present works is by looking at the actors and the forces at work and how they relate to each other,” Carson said. “And then combining that with understanding where the present came from and what momentum it carries.”
“How you could do that without history, I wouldn't know.”