Philosophy alum Sarah Douglas on her lifelong effort to program computers to understand meaning

April 3, 2026

Technological advancements and ethical debates dominate the media’s coverage of artificial intelligence. AI pioneer and 1966 Cal alum Sarah Douglas asks the sort of big questions — on knowledge, meaning, and consciousness — that are often overlooked by companies and can only be answered in a philosophical context. Unfortunately, the rapid pace of AI development has outpaced society’s capacity to consider these questions.

In response, Douglas launched a faculty fellowship and public engagement initiative at UC Berkeley, where leading scholars will probe deep philosophical debates surrounding AI. She spoke with UC Berkeley writer Alexander Rony about her trailblazing career, her concerns about how people are approaching AI, and how she helped establish the field of human-computer interaction.

What was the nature of AI research when you entered the field?

Sarah Douglas: There are three major periods of AI. The early period, with people like Allen Newell and Herbert Simon, was interested in developing algorithms based on logic that could model human problem-solving. A lot of the early work was getting a machine to learn how to play chess like a human did, because standard algorithms didn't work. They came up with this concept of heuristics, which are rules of thumb that humans develop while they're problem-solving.

I came in just as another phase developed: algorithms that modeled a human expert solving ordinary problems, like deciding what computer would be best for you. Then people tried to integrate common-sense knowledge. That ended up being a vast problem. Some people tried to use encyclopedias to codify all knowledge. It didn't work very well. 

People were making good progress with human language text translation, especially with texts that were scientific, narrow, and had well-defined vocabularies. Then people said, “Why don't we see if we can use the vocabulary and other knowledge resources of the World Wide Web to build this huge database? Then we’ll have a way of statistically picking the most common words and definitions.”

Are you referring to large language models?

Yes. They opened this whole new world. All of a sudden, AI is now in the hands of everybody. It's really changed perspectives, but it leads people to think this thing can think, which it can't. ChatGPT is a statistical correlation with an enormous database, so the chances of being wrong are very low with ordinary vocabulary, but it doesn't have any concept of meaning. In fact, now, they don't even attempt to model human thinking or logical derivation. We did that in the first two phases of AI, but that's gone by the wayside.

When were you first drawn to computers, and how did you decide to make it your academic focus?

When I graduated from a Palo Alto high school in 1962, I was offered an internship at Palo Alto Unified School District. They had received an NSF grant to obtain a computer to teach students, and nobody knew anything about computers. For four years, I worked every summer and did contract programming. I would spend the evening trying to get the computer to understand language. I worked and worked on this question-and-answer format. I had one problem I couldn't figure out: How do I get meaning into the computer? That question has never left my mind. 

I had a full-ride scholarship to Smith College. I wanted to be a medical researcher. In the first two years, I took all the biology and chemistry courses they offered, then realized that wasn't for me. Meanwhile, my family was back in California, and I was feeling left out politically, because events were happening with civil rights and the Vietnam War. Smith College was isolated, so I returned to California and went to UC Berkeley as a junior.

When I arrived at Berkeley, I switched majors because I figured philosophy would know what meaning was and how to get it into the computer. What I learned from the philosophy department was exactly what I needed to understand algorithms for ordinary language. I tend to ask big, controversial questions. At Berkeley, I was able to explore that curiosity.

You didn’t head straight to graduate school after Berkeley. Why not?

The opportunity to do graduate studies didn't even enter my mind. Women were not really invited, and the kind of philosophy I was interested in was quite different from the mainstream at that time. I didn't begin my Ph.D. work at Stanford until I was about 35, and I had already experienced 15 years working in the data processing and computing environment as a programmer, systems analyst, and manager. 

At that time, computers were just beginning to become interactive — that is, using the immediate responses of the human running the program to determine the next actions of the program. Usually, these were simple choices, often coded as language responses, numbers, or “yes” or “no.” This was difficult because it required predicting human behavior. I wanted to study this topic when I entered Stanford. Unfortunately, few faculty had studied this except through research on human conversation, and no one had applied it to the general problem of human-computer interaction.

I went to a professor teaching a course in natural language and asked, “Who could I work with on this?” They said there's this organization called Xerox PARC. I spent two or three years there working on how people use analogy to think about the things they're learning. This would be integrated into AI programs that act as the teacher, called intelligent tutoring systems.

I received my Ph.D. in 1983 on a topic I created called cognitive ergonomics, a combination of computer science, psychology, and linguistics. My dissertation was on how people learned to use computer text editors when they had used a typewriter. The idea is that if you're going to teach someone, you need to know what they're going to do wrong and how to correct that. Teach concepts that will be analogous to their prior experience. That seems really simple, but it was very controversial in the AI community at the time — and still is! 

When you earned your doctorate in 1983, it was still relatively early in AI development. What were job prospects like?

There are points in time when academic hiring is wide open, and I finished my doctoral work during one of those moments. The University of Oregon was building a computer science department with a Ph.D. program, and they really wanted me to build the computer science program and help create the Institute of Cognitive and Decision Sciences. I went to Oregon and started developing AI-related courses and doing research. I retired from Oregon after a long career. I am now an emerita professor.

A lot of my work was in interactive human discourse or conversation. Language was more complicated than I had ever imagined. There's a lot of face-to-face communication that is absolutely essential for understanding what another person is saying that the computer has no access to. The more I worked in this area, the more difficult it became. I was one of the original people who developed this field called human-computer interaction. 

At that point, AI was essentially problem-solving based on logic — first-order predicate calculus, it's called. The proof process isn't usable in many situations. You also had to constantly keep a database of expertise up to date, so people dropped AI. Everybody said it won't work.

AI fell by the wayside. The funding sources dried up. We dropped our courses. I was teaching graduate-level courses in natural language processing, and students weren't interested. Nobody was offering jobs.
