Berkeley Social Sciences explores the promise and pitfalls of AI tools

Top row: Berkeley Social Sciences Dean Raka Ray; Sociology Chair Dave Harding; Geography Professor Clancy Wilmott

Bottom row: Political Science alumna Shaudi Fulp; Cognitive Science Director David Whitney

October 9, 2025

Generative artificial intelligence (AI) tools are a daily reality at Berkeley Social Sciences. Faculty, students and alumni are embracing AI's ability to speed research, uncover new insights and prepare students for changing careers.

But they are also warning about the risks of letting chatbots think for us, as AI tools such as large language models (LLMs) raise growing concerns for research ethics and teaching across the Division.

"AI is here to stay, but our responsibility is to ensure it serves student learning and research rather than shortcutting it," Social Sciences Dean Raka Ray said. "At Berkeley Social Sciences, we are focused on helping students and faculty use these tools wisely with oversight and emphasis on critical thinking. We see generative AI as a tool to supplement — not substitute — intellectual curiosity and the pursuit of knowledge."

Challenges and opportunities
Sociology Chair Dave Harding believes "AI presents both opportunities and challenges." When it comes to research, he added, "it can be a useful tool when used correctly and with oversight."

Harding points to specific examples: AI can help code large sets of data, write computer programs and even assist with literature reviews. "But we also need to be careful not to outsource too much of our thinking to AI, and to validate anything that it produces," he said.

That caution extends to teaching. Harding said some students now rely on AI to summarize readings or draft essays. "That work is an important part of the learning process, and they won't learn if AI does it for them," he said. "We are still trying to figure out how to deal with that."

Still, Harding is not anti-AI. He sees promise in using it as a "tutor to help study for exams" and as "a writing aid that helps students become better writers."

Statistical thinking
Geography Professor Clancy Wilmott shares Harding's caution against outsourcing thought, but she argues the problem is more fundamental: AI has a "heavy bias" toward statistical thinking.

Wilmott notes that AI tools tend to pick the most "common and predictable factors," making it difficult for students who are still "learning the basics" to identify flaws and understand "where the AI is going wrong."

For "oversubscribed and overstressed" students, Wilmott says the temptation to use AI tools to complete class assignments is even stronger.

"I don't think we can overestimate the power of that temptation," she said. "Because for them, what AI gives back looks like good enough work."

Wilmott says it's essential for educators to prioritize teaching skills AI can't replicate: critical thinking, deep reading and original analysis.

Like Harding, she does acknowledge AI's positive uses, such as "simple task work" that "can be automated" or "is already standardized." Ultimately, Wilmott says the responsibility for the future falls on those in Berkeley Social Sciences who must "articulate what we do that AI simply cannot."

Reshaping higher education
For Berkeley Political Science alumna Shaudi Fulp, the question is not whether AI will reshape higher education, but how UC Berkeley can ensure students graduate with the skills to navigate the evolving landscape of knowledge and work.

"The question we are tackling is — how do we equip our students to thrive and compete in an economy, society and workforce that will significantly shapeshift with the inevitable and ubiquitous adoption of artificial intelligence?" Fulp asked.

Fulp, who serves on the Berkeley Social Sciences Dean's Advisory Council, has observed that students come to Berkeley for career advancement, intellectual growth and a sense of belonging. Those motivations, she said, will face new challenges in a world of AI-powered workplaces, economies and civic life.

She argues that universities must teach new skills alongside timeless academic traditions. She lists applied AI proficiency and data ethics as emerging literacies that should join critical debate, interdisciplinary collaboration and live discussion as essential components of education.

But Fulp also reflects on how higher education can cultivate what she calls the "art of attentive, spontaneous exchange." A student who can write flawlessly with AI assistance but cannot bring the same clarity and agility to live debates, team problem-solving or workplace deliberations "will face real limits," she noted. Berkeley Social Sciences, she added, is uniquely positioned to champion extemporaneous dialogue and inquiry as essential competencies — ensuring that graduates develop capacities where AI supplements rather than supplants their intellectual journey at Cal.

Solving research challenges
While others focus on learning and student preparation, David Whitney, director of the cognitive science program, envisions AI as a way for UC Berkeley to solve a big research challenge: finding the right people for interdisciplinary projects.

Right now, Whitney said, identifying collaborators is slow and clumsy. Faculty databases are incomplete, keyword searches are outdated, and often the only way to find someone is by asking around. "Word of mouth... very 11th century," he said.

One possibility is a "distilled LLM for UCB," a large language model trained specifically on UC Berkeley's scholarly output, including publications, public talks, books and videos.

"You could even query it about themes, ideas, connections you hadn't even thought of before," Whitney said. Such a tool, he noted, could help organize conferences, recruit instructors, find authors for academic books or bridge research between different fields. "This product would be wildly useful throughout social science and far beyond."

Training students
Berkeley Social Sciences departments and programs are already training students in pragmatic applications of generative AI tools in social science research. Harding points to the Master of Computational Social Science (MaCSS) program, D-Lab, the Computational Social Science Training Program and a popular two-semester course in computational social science.

For Fulp, these programs reflect Berkeley's leadership in shaping the public conversation about AI. "Under Dean Raka Ray's leadership — and with Chancellor Rich Lyons' mandate to 'question the status quo' — the Social Sciences Division has established a common through-line for rigorous critical thinking already," Fulp said. "They will shape, not merely adapt to, the AI-augmented future."

Even with those efforts, the scholars agree that the real measure of success will be how well Berkeley Social Sciences balances technological innovation with human judgment.

"Today, the arc of intellectual growth may be further along — augmented by AI — but the fundamentals of the task remain the same: to stretch that arc so that graduates achieve a depth of mastery far beyond what they started with," Fulp said.