Published in Podcast

Stu Card on inventing the future

Devon Zuegel
27 min read
Stu Card

Stu Card began work in Human-Computer Interaction before it even had a name. His was the first PhD in the discipline, and Stu has made fundamental contributions to HCI, including the studies that established Engelbart's mouse as the best pointing device and the development of Information Foraging theory.

In his work at Xerox PARC and beyond, Stu has always emphasized "theories with a purpose", the idea that academic HCI theories should have practical value and be incorporated into practice.

STUART: Things go a lot better when you actually solve a problem, when people can't live without it or you open up some completely new capability. The best way to predict the future is to invent it. It's a choice you have.

The best way to predict the future is to invent it. It’s a choice you have.

DEVON: My name is Devon, and I'm a software engineer. I'm going to talk to Stu Card today about his work on human-computer interaction and interfaces. I'm really excited for this conversation because you were there when all of this stuff was just beginning. You did a lot of the foundational work that people have been building on ever since. Thanks for taking the time. Can you start by introducing yourself and some of the things that you've worked on?

STUART: I'm Stuart Card. Most of my career has been at Xerox PARC, which is the place that invented the windows type of interface that we use now. We were given 10 years in which we promised that we would try to invent a science that could be useful for designing computer programs for people. Psychology was wishy-washy in the kinds of models it produced. Interesting subject, but the methods were not too rigorous.

We thought that if we used an applied context and tried to make models that you can predict things with, just like you do with chemistry, or physics, or other sciences that are practical, then we could make a difference that way.

The theory allows you to ... invent new devices. We can run this thing backwards and generate new inventions that we hadn't thought of before.

DEVON: What were some of the problems that you guys worked on and applied these theories to?

STUART: One of the first things we worked on was the mouse. Xerox was working on an office of the future at the time, because it was a big company with a little product base. It was a big company, and it just sold copiers. It needed to enlarge its product base beyond that. The office of the future was a natural thing. English and Engelbart invented this thing called a mouse. English was trying to compare it to some other devices. He asked me to help him, which I agreed to do. Then he got too busy, so I had to do most of the work. It turned out that the mouse was the fastest device when we tested it in various tasks.

What's interesting is that because we were looking for ways to advance theory, we modeled each device. The model for the mouse was the most interesting one, because how long it takes you to move the mouse to a target is given by a thing called Fitts' Law, which says the time is basically proportional to the log of the distance to the target divided by the size of the target.

This theory allowed you to do three things. One is, it allowed you to take a device and understand why the performance is like it is. You could evaluate, analyze, characterize a device. The second thing that it allowed you to do was to invent new devices. We can run this thing backwards and generate new inventions that we hadn't thought of before.

Given that we have this little Fitts' Law characterization, we can say, "Well, what would it take to change that constant of proportionality?" If you just use the fingers instead of the wrist to move the device, that would give you 38 bits per second. That tells us that we can invent a new device which beats the mouse.
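The shape of that reasoning can be sketched in a few lines of code. This uses the common Shannon formulation of Fitts' Law, and the constants `a` and `b` are purely illustrative, not the values measured at PARC:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Predicted pointing time in seconds under Fitts' Law.

    distance: distance to the target; width: target size along the
    axis of motion; a, b: device-dependent constants (illustrative).
    A better device is one with a smaller b, i.e. more bits per second.
    """
    return a + b * math.log2(distance / width + 1)

# The same difficulty can come from a far, big target or a near, small one,
# which is the "make the button bigger to compensate for distance" idea:
print(fitts_time(distance=800, width=64))
print(fitts_time(distance=100, width=8))
```

Running the theory "backwards," as Stuart describes, amounts to asking what change to `b` (say, moving with the fingers rather than the wrist) would beat an existing device.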

The third thing that theory allows you to do is come up with abstractions that connect together to form a science or a discipline. If you look at what happened after the mouse, the mouse went out into industry, but then people like Shumin Zhai and I went on working on other things that built on this theoretical base and enlarged it quite a bit.

If you want to build a discipline of human-computer interaction, it can’t just be one result after the other. You get lost in a sea of results.

That then enabled the invention of things that would've been hard to predict, like the keyboard where you move your finger along the keys. It uses statistics about how words are spelled and how words go together in sentences, then it looks at which letters on the keyboard your finger went past. It takes all of that and does the equivalent of typing the words on the keyboard.

That's an advantage you have when you can do some theory with it. If you want to build a discipline of human-computer interaction, it can't just be one damn result after another. Otherwise, you get lost in a sea of results. You need to be able to link them together. The theory provides some opportunities to do that.

DEVON: A phrase that you're famous for is the concept of theories with a purpose. Can you explain what that is?

STUART: A lot of theories in recent psychology are hard to apply right at the moment in an engineering context. They have parameters you have to fit to something. What we wanted to do was make theories that were very practical, blue-collar theories that somebody in the field could evaluate. The theories were much faster to use than doing the experiments, because you just characterize the situation and multiply a few things out.

In fact, the way these theories often get used is like this. Take the one for the mouse. People don't really compute or calculate the formula. What they do is say, "Well, I have this button on the other side of the screen. I know from this theory that if I make the button bigger, it will compensate for the fact that it's way over on the other side of the screen." If you cared, you could calculate precisely how much bigger you would have to make it. Just the general concept that the size of a button compensates for distance is a practical way to use the theory.

We wanted to ... make theories that were very practical, blue-collar theories that somebody in the field could evaluate.

We found a way of writing down the sequences of operations that people do based on goals, operators, methods, and selection rules. This little notation allows you to predict how long something will take. This was used in the design of the Star, because you needed to measure novices and experts. Novices you could find, since everybody's a novice on a new system, but you can't find experts on a system that hasn't been designed yet.
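A calculation in that style can be sketched as follows. The operator times below are rough, illustrative values in the spirit of the keystroke-level model, not the calibrated numbers from the published work:

```python
# Illustrative operator times in seconds (not the calibrated values).
OPERATOR_TIMES = {
    "K": 0.2,   # press a key
    "P": 1.1,   # point at a target with the mouse
    "H": 0.4,   # move the hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_expert_time(sequence):
    """Predicted time for an error-free expert to run an operator sequence."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# e.g. think, point at a word, home to the keyboard, type four letters:
task = ["M", "P", "H", "K", "K", "K", "K"]
print(predict_expert_time(task))
```

The point is that you can predict an expert's time on a system that does not exist yet, just by writing down the operator sequence its design would require.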

This little method allowed the calculation of what the experts did and would do. For example, for a motor operation, I said it would take seven-tenths of a second. I could see that by tapping my finger as fast as I could, the tapping rate. If you measured that, it would come out at a tenth of a second. In fact, Licklider and I had a big battle over this. He told me that I was wrong. I was sure that I wasn't, so I said, "No, I'm not. I'm right." You have to imagine me talking to this legendary professor. This is over breakfast at the National Academy of Sciences. The only way we could settle it was to do an experiment.

I did the tapping. I wasn't acting, because I wanted to make sure that it was done right, or to my advantage anyway. I think Licklider timed it on his stopwatch and somebody else counted the taps, or something like that. Here we were in the National Academy of Sciences lodge, banging on the table with spoons as fast as possible, completely disrupting the other committees. He was right, of course.

I had to adjust all of the parameters of the model as a consequence. When I did, stuff fit that never fit before. If you take the amount of time in seconds and group it by powers of 10, you find that things cluster into little bands. The really fast ones, at 10^-2 seconds, are biological processes.

The interesting one for us is the cognitive band. At 10^-1 seconds, or 100 milliseconds, you get the perception of content. If two percepts occur within less than 100 milliseconds of each other, they will be fused into the same percept. If it's greater than that, they will be two percepts. If somebody makes two noises less than 100 milliseconds apart, you won't hear two noises, you'll just hear one noise, as if in a bigger room.

In around one second, that's the time to do an operation. In around 10 seconds is the time of what's called a unit task. That's how long it would take you to do a command in a text editor, and so on. Then, above that is what you may call a rational band. That's where you're doing some problem solving or something like that. Then above that is the social band. Those processes take even longer.
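Those bands can be written down as a simple order-of-magnitude lookup. The boundaries here are rough, just following the description above:

```python
import math

# Names for each order of magnitude of duration, per the bands above.
BAND_NAMES = {
    -2: "biological",  # ~10 ms processes
    -1: "perceptual",  # ~100 ms: percepts fuse below this
    0: "cognitive",    # ~1 s: a single operation
    1: "unit task",    # ~10 s: one command in an editor
}

def time_band(seconds):
    """Classify a duration by its nearest power of ten."""
    exponent = round(math.log10(seconds))
    if exponent < -2:
        return "biological"
    if exponent > 1:
        return "rational or social"
    return BAND_NAMES[exponent]

print(time_band(0.1))   # perceptual
print(time_band(0.7))   # cognitive: an unprepared reaction
print(time_band(600))   # rational or social
```

Every result the group produced can be filed into one of these slots, which is part of what makes the bands useful as an organizing hierarchy.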

You can see that what we worked on started down in the cognitive band and gradually worked uphill. You can take everything that we did and put it in its little place in that hierarchy. This helps you when doing theories. Take the things that happen on the order of a second, or let's say seven-tenths of a second. That's the time for an unprepared reaction.

If a child runs out in front of your car, it takes you seven-tenths of a second to apply the brake. Because you can't complete any action in less than that, it means that that time is available for doing micro-animations. If I show you a tree expanding, you'll get the wrong notion of which branch went where if I just show you the before and after of the expansion.

... used time which you [could not have used anyways] to make the communications more dense.

If I use that seven-tenths of a second to grow a part of the tree into its next form, then you can see how it grew. You couldn't have done anything else in that time, because it's too little time for you to act. I've used time which you couldn't have used anyway to make the communication more dense. We try to take intellectual or other capacities that you have but aren't using, and match the displays to those so that we can get more capacity out of them.

DEVON: What is the process for developing these theories? Is it that you start with experiments, then you move on, you start seeing the commonalities between those, or do you start with a theory and then test?

STUART: Well, you usually start with a received problem. The problem that somebody tells you is usually never what the real problem is, though it starts out as a real problem to them.

DEVON: What's an example of a received problem that you or someone else ended up adjusting to get down to what the real core problem was?

STUART: An example is the fact that windows seemed to work fine on workstations with 19-inch or 24-inch displays, but they didn't seem to work very well on PCs. You can look at how many window faults there are, that is to say, how many times you have to resize a window or restore it from an icon. As the screens get smaller, there is a certain point at which everything just goes to hell. You're not able to do anything except resize windows. I think everybody's experienced this.

You’ve connected this interface problem with a well-known problem in software design. Now, there’s a number of interesting solutions.

What that is analogous to is virtual memory operating systems. Whenever you need a page that isn't in main memory, you swap it with one of the pages that is, and keep the rest on disk. This all works unless the number of pages you need, your working set, is greater than the memory you have. In that case, you go into a mode in which you can't get anything done, for the same reason.

That starts out with the problem of why windows don't work and how to make them work. By modeling it, you have an analysis of why that is. Now you know why it is, and you can start inventing practical solutions. You've connected this interface problem with a well-known problem in software design. If we look at that problem in software design, there are a number of interesting solutions to it, ones in which you page in the whole working set, the whole group of pages at once, instead of faulting them in one by one. If we project that back into the user interface, that's a lot like having a virtual screen for each problem that you're working on.

DEVON: The virtual memory metaphor is really interesting. What role has analogy played throughout your work?

STUART: I think quite a bit, because what usually ends up happening is that the theories that are new come from an analysis in which you can see how to describe the situation in abstractions that have been used before. That links this new little bit of theory to pieces of theory that have established results behind them. You have all this stuff about operating systems that's quite well developed. You instantiate all of that back into what it would be for an interface.

Another one is information retrieval. Information retrieval is usually measured by the metrics of precision and recall. This problem can be reformulated in terms of a thing called Optimal Foraging Theory. That reformulation is called Information Foraging Theory. In the precision-and-recall formulation, it doesn't matter if you make the thing go a thousand times as fast; it wouldn't count in precision and recall. But most people care if they can go a thousand times faster.

What it tries to optimize is the information gain per unit time, the gain per cost. There's a field in biology called Optimal Foraging Theory that has worked this out. It's one of the unifying concepts in biology, because otherwise you'd have a separate biology for this animal and that animal; every animal would have a separate biology. This is one of the concepts that goes across different animals. You can predict things about feeding from it, like when animals are going to leave a patch, that sort of thing.

Anyway, we connect to that set of abstractions in biology and bring it back to information retrieval. There are lots of things that you just wouldn't think of otherwise.
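The patch-leaving prediction Stuart mentions can be sketched numerically. The gain curve and numbers below are hypothetical; the point is the quantity being optimized, gain per unit time rather than precision and recall:

```python
def patch_gain(t, g_max=100.0, half_time=10.0):
    """Cumulative information gained after t seconds in a patch.

    A hypothetical diminishing-returns curve: half the maximum gain
    arrives in the first half_time seconds.
    """
    return g_max * t / (t + half_time)

def best_time_in_patch(travel_time, horizon=200):
    """Time in patch that maximizes the overall rate of gain.

    Rate = gain in patch / (travel to the patch + time in the patch).
    """
    return max(range(1, horizon),
               key=lambda t: patch_gain(t) / (travel_time + t))

# When patches are far apart, the forager should stay longer in each:
print(best_time_in_patch(travel_time=5))   # leaves sooner
print(best_time_in_patch(travel_time=50))  # stays longer
```

The same arithmetic predicts when a person should abandon one search result or website "patch" and move to the next, which is the kind of question precision and recall cannot ask.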

DEVON: An important priority for you has been not only building theory around these things, but also allowing them to actually solve problems in the real world. What are examples of companies, products, or teams that have incorporated these ideas, either intuitively or by finding your research?

STUART: The mouse was of course popular at PARC. But in the rest of the corporation, it was not. They were dead set against having something that hangs off a keyboard. They were afraid the people in social security would have it around their neck during an interview. All sorts of fantastic excuses. At one time, I flew down to Los Angeles where there was this room of very hostile engineers trying to work on this question. I gave them the data for why one device is better than another. There were so many questions about why didn't you do it this way, why didn't you do it that way. Then when I came to the theory, the room fell silent. There was just a bunch of solemn faces.

We went out and it worked just like I said it would. Nobody beat it in the market. One way I like to do experiments sometimes, not necessarily the approved way, is to make myself the subject. I fiddle with the parameters until things feel right and it looks like I'm making progress. Because experiments are expensive, you don't want a few huge lumbering expensive steps; you want lots of little steps. When I think it's ready and I think it works, then we bring in all the controls, the counterbalancing, and everything.

DEVON: You were one of the first people to really combine psychology and computer science. I think your Wikipedia page says that you did the first PhD in HCI. Maybe the term HCI didn't even exist back then. What did it feel like to be there in that moment? Did you know what you were creating?

STUART: I hadn't the foggiest idea. When I started as a student, the college didn't even have a computer. They put in a new computer, and this little cadre of people formed around it. One day, Herbert Simon came by campus and gave this big address. A bunch of us talked to him afterwards. He told us about this wonderful land at Carnegie Mellon where people would work on artificial intelligence and call it psychology.

That was it for me. I wanted to do that. I got myself out of there and applied as soon as possible. I was housed in the psychology department. After a number of weeks, it became clear that they were thinking of me as just a psychologist. I never wanted to be a psychologist. I used to make fun of them from the physics department. We used to shine lasers from our building into theirs, knowing they'd never figure out what they were.

I designed a new program. I threw out half of my psychology qualifiers, substituted computer science qualifiers, and took a lot of computer science classes. This was when computer science was just getting started; the department itself must've been two years old. My thesis was basically a bunch of experiments that we did at PARC, put together in a coherent way.

There was the work on the mouse, and there was work on text editing. There was the invention of this idea for how you write down users' actions, called GOMS. That eventually became a book, The Psychology of Human-Computer Interaction, written with Tom Moran and Allen Newell. That was the first book to use human-computer interaction in the title.

DEVON: You said that you hadn't the foggiest idea of what HCI was when you first started. What were some of the motivating questions that you had earlier on and how did they change as you got to know the space better?

STUART: When I worked in the computer center, one thing that always intrigued me was watching people try to solve problems about why the computer doesn't work. I was interested in where the structure comes from. At Carnegie Mellon, we were working on a thing called problem solving. A lot of problem solving is heuristic search; there's hill climbing and, I think, about seven methods. The idea that you can get it down to algorithms is pretty cool. You can solve a problem that you don't know how to solve by using heuristics and search.

My roommate for quite a while in computer science was Hans Berliner. Hans Berliner was a graduate student, but he was also the guy that all the chess studies were done on, because he was an international grandmaster. There were experiments done on him where you knock over a chessboard and have him reassemble the pieces. If a gap of two seconds goes by without a piece being placed, that's a chunk boundary. By doing this, you can get his mental representation of the game.

Psychologists are enamored with all these limitations on your performance, like short-term memory. But look at what a real person would do. Say you give him a telephone number that's 10 digits and he can only remember seven digits; what he does is pick up a piece of paper and write down the number. To predict a large amount of behavior, all you have to know is what the rational thing to do would be, and then you can predict it without a detailed theory of the person. You just need a theory of the environment, and people will do the rational thing for that environment.

DEVON: The idea is that even though your brain is really limited, you can build tools and processes around yourself to expand your —

STUART: If you get one of these linear-- the great example Simon gave was something called Simon's Ant. There's an ant trying to go home. He comes to a pebble and he walks around it. The ant has really complicated behavior if you draw a dotted line for where the ant has been. But the complication is not in the ant, it's in the task environment. The ant is actually really simple. It's just optimizing and doing the rational thing, taking the rational detour instead of walking over the pebble. Then Simon says you can apply this to humans. Humans aren't so complicated; it's the environment that's complicated.

Every technology may be basically good, it cures cancer or something, but it has unintended side effects. The invention of the car was great for getting people around, but it caused the suburbs to sprawl, and there are some disadvantages to that. There were some measurement technologies that came up at PARC that I refused to let the group work on. Not because we didn't have ways of handling them at PARC; it's just that once we invented the technology, it would go out somewhere else where those restraints would be removed. It would exist.

There was this thing you could clip to your lapel, and it would track where you were in the building and at what time. What it allowed you to do was recall twice as many things about your day as you otherwise could. You could find things, find out where some piece of information was, and everything. It was too dangerous, I thought.

There's an amazing supply of problems in the world.

DEVON: How did you direct the problems that your group worked on at PARC and beyond?

STUART: Whenever we finished one thing, another problem would just pop up. There's an amazing supply of problems in the world. With the office of the future, we tried to plan it so that our research would actually make a commercial difference, because that would validate the overall idea and, again, the theory. Then Xerox famously fumbled the future, and things weren't going in that direction. As graphical systems got more powerful, the SGI machine and its geometry engine meant that you could do 3D animated graphics for the first time.

DEVON: A lot of really amazing groundbreaking research was done at Xerox PARC around the time you were there. The work you did on pointing devices, object-oriented programming, things on graphical user interfaces. The list goes on and on. What was so special about PARC that made it possible for that to happen?

Xerox subsidized a lunchroom as part of the research program, because they wanted people to talk shop during lunch.

STUART: First, we had an excellent staff. It was said that 50 of the top 100 computer scientists worked at PARC. Since we were funded with hard money, we didn't have to write grants, wait a year, and write 10 grants to get one grant before anything could be done. Xerox subsidized a lunchroom as part of the research program, because they wanted people to talk shop during lunch. We did that. Often at lunch, somebody would come up with an idea, and that idea could be worked on by three or four people that afternoon.

DEVON: One of the things that made PARC really special was the people who were there. A big part of the attraction for bringing good new people in was that there was already a strong community. How did they start that community to begin with?

STUART: Bob Taylor was the guy who funded all of these people. He knew the whole community, he knew who was out of funds. He just picked them off one by one. PARC was such a good deal.

DEVON: Xerox is famous for fumbling the future. Why were those ideas not possible to translate into real products?

STUART: That's an interesting thing. I used to think it was because they were stupid. That might be true, but that wasn't the main reason. The main reason was this thing called the Innovator's Dilemma. It's been replicated many, many times. When a company tries to get into a new business, it's faced at various points with a choice: invest in the new business, which isn't going to bring in much return because a disruptive business starts small, or do more of what it knows will make a return.

Xerox had the principle that it was going to grow 15% each year. After a while, that meant you had to have a new billion-dollar industry every year. You can't just stand those up in one year, from zero to a billion. When they looked at all the disruptive proposals, those didn't look like they were going to be a billion in a year. Whereas if they took one of their existing divisions and just tweaked it a little bit, they could maybe get another billion out of it.

DEVON: It's sort of a hill-climbing problem?

STUART: Yes. The problem is that you can make all the right rational decisions and still lose your whole industry, because the disruption comes through on you incrementally. That's really not a Xerox problem. Xerox was caught in it, but it's a problem that's more general.

DEVON: In your view, what were some key moments where they could have tried to break out of that innovator's dilemma?

STUART: When the office of the future had come along far enough, they held this big event at Boca Raton in Florida. They hired Hollywood producers, and everything was a really big deal. A lot of Xerox execs came, but they weren't really interested in this whole office of the future.

DEVON: One thing you talked about a bit was how there was a moment where you could start doing more visualizations. What role did Moore's Law play in how your research changed over time?

It’s like fairy dust. You take the same problem at 10-year intervals and you get different answers. It gets easier and easier, too.

STUART: Moore's Law is great. This is really under-appreciated. You can have a capital budget, spend your capital budget, and it still isn't gone. You can buy more, because everything that you wanted to buy gets cheaper during the year after you budgeted for it. It's great. It's like fairy dust. You take the same problem at 10-year intervals and you get different answers. It gets easier and easier, too. You just do the same thing over and over again. If you have 10 problems, or 6 problems, you just rotate them around.

... Bell's Law, where every time the price of computation goes down by a factor of 10, you get a new form of computation.

DEVON: You could fill a whole career with that.

STUART: Yes, that's right. Gordon Bell has a thing, sometimes called Bell's Law, where every time the price of computation goes down by a factor of 10, you get a new form of computation. You start out with the Air Defense System for the US, then the TX-2, then probably timesharing, then the personal computer, then iPhones.

DEVON: What's next?

STUART: Probably all these little things that go in your house. When you take the Internet of Things, there isn't just a billion things out there; there are many billions of processors that can do you harm or good.

DEVON: Moore's Law is slowing down by most people's count. How do you see that all--

STUART: I think the thing that was going to end it was that the price of foundry would go up exponentially. You can get a little bit more out of it, but it's going to cost you 10 times as much or something.

DEVON: What do you think the implications will be on the next generation of inventors?

STUART: The processor speed won't be the basis of competition. Actually, there's another thing that's likely to happen. It's that all these gizmos like graphics processors, controllers and all that sort of thing might be made more programmable and more interactive with each other than is currently the case. That would give you a lot more power.

DEVON: I guess we're now fundamentally dealing with different types of problems, because even if the amount of computation that we can get out of a device isn't growing as quickly, there are so many more combinations and permutations you can have, just because so many other things are connected.

In the decades that you've been doing your work, Moore's Law drove a lot of changes. Now that we are not really riding that wave of increasing computational power, where do you think people should be looking for the low-hanging fruit now?

STUART: Communication and low power systems. A lot of systems are trying to harvest power from you walking in your shoes or other things. You can get systems that can be operational in the environment from batteries for a year or two. We're going to have much better bandwidth outside the home that's going to replace what the telephone company has now, the next generation. Even in the house today, the wireless I have is up to one gigabit per second.

I don’t really like what’s happening with texts that you read online, because they lose all their physicality. You get lost in what you're doing.

DEVON: My last question that I'd like to wrap up with is what are some books, or articles, or things you've read or seen that have influenced your work?

STUART: I like this book called Don't Make Me Think. Its premise is that these little glitches in the interface, while they might not look like they take very long, they're quite disruptive of the thinking process.

I like a number of the articles that Bret Victor is writing on a new way of doing explanatory texts.

I don't really like what's happening with texts that you read online, because they lose all their physicality. You get lost in what you're doing, you scroll out sideways, all this sort of stuff.

What I would like to see is building out of that rather than just sort of replacing it... We should take seriously the mechanics of moving through that book.

I think the codex book has had, depending on how you count, on the order of a thousand years of development, maybe 2,000 years. What I would like to see is building on that rather than just replacing it with something that doesn't work as well. There was a time when somebody had to invent the notion of the paragraph, the notions of periods and capital letters. The Carolingian Renaissance invented small letters. Those small letters, put together with the capitals, allowed words to have shapes. They could then be read at faster speed.

DEVON: Just to close out, one of my favorite things that I've read recently was about an early printer, in I think the 15th or 16th century, named Aldus Manutius. Reading about his life, this guy invented tables of contents; he invented the idea of page numbers, things we take for granted. It's really obvious today, but those things had to be invented at some point. I think it's really exciting to think about that in terms of the last 50 years, and the next 50 years as well: what are we going to create that people in the future end up just integrating into their lives?

STUART: I built several virtual books. The one I liked best is called the 3Book. It's a three-dimensional book, figuring that people would want something that corresponds to what's physically in the world, that they would want an electronic copy of that, and that we should take seriously the mechanics of moving through that book. Because, like I say, the virtual documents now are such a pain to move through.

Whenever you can map things onto the environment, onto things that you know a lot about, you can make them easier to learn, faster to use.

DEVON: You can build on this thing that has so many affordances built into it, then expand it and do even more stuff with the new technological capabilities.

STUART: Whenever you can map things to remember onto the environment, onto things that you know a lot about, like how books work, then you can make them easier to learn and faster to use, that sort of thing. You should take advantage of that when you can.

Things go a lot better if, when you do a piece of research, you actually solve a problem that somebody cares about. People in the laboratory will say, "I want this, so this would be nice to have." Those aren't good problems. The ones you want are the ones that people can't live without, or that open up some completely new capability.

Technology is very interesting because it undergoes an evolution, but the evolution is different from natural evolution. In natural evolution, the world is the source of randomness, and then survival of the fittest sorts out which of these random variations is going to do well. In technology, the way of generating new products is not random; there's a lot of careful analysis and thought. You can direct that second kind of evolution.

It's like Alan Kay used to say. When people asked him all the time what the future was going to be, he could say, "What do you want? The best way to predict the future is to invent it. It's a choice you have."


Brought to you by Devon Zuegel
Devon is a software engineer and writer based out of San Francisco.

Video by Slow Clap and The Land Films
Illustrations by Roman Muradov
Photos by Ulysses
