In just a few years, AI has gone from a tech novelty to a society-altering force. It has revolutionized scientific research, classrooms and workplaces. People around the planet have turned to it to break through writer’s block and to contest parking tickets.
But that growth has not come without risks. The economy is increasingly propped up by bets on the technology’s future — 80% of U.S. stock gains last year came from AI companies. Policy battles are ramping up as some seek to rein in tech companies while others opt for an unregulated Wild West. Deepfakes and explicit videos are causing harm and blurring what is real in an already fragmented information environment.
As a global leader in the development of AI technology, as well as in research into the ethics, policies and practices around its use, UC Berkeley is at the forefront of this rapidly changing field. Below, UC Berkeley News asked some of the campus’s leading AI experts to summarize in 100 words — and in a short phrase — the developments they’ll be monitoring in 2026.
‘Weaponization of AI against workers’
My research focuses on AI that aims to evaluate how people speak. Programs like the Zoom Revenue Accelerator and Read AI are being used by companies, often without worker knowledge or consent, to rate employees.
These systems rate speech on parameters like “charisma,” but because they are black-box algorithms, no one knows how they work. They are trained on “idealized” speech, so they show systematic bias against neurodivergent speakers, second-language English speakers, and anyone who speaks a stigmatized dialect. I’m watching to see how systems are weaponized against workers and how the courts deal with these patterns of bias.
— Nicole Holliday, associate professor of linguistics
‘Intelligence limits and the search for truth’
I’m expecting that, in spite of the commercial pressures, we will realize that there is no such thing as general intelligence, artificial or natural. At the same time, we may see progress toward more realistic models that engage and experiment with the external world, in the way that children do.
My guess is that intrinsically motivated reinforcement learning systems — where the reward is finding the truth rather than getting a good score from humans — may make real progress in this regard.
— Alison Gopnik, professor of psychology