1. You have received the Turing Award and other outstanding awards and world-wide recognition for your contributions to personal computing and programming languages. What does it mean for you to be distinguished as Doctor Honoris Causa by the University of Murcia?
I think of honors as a way to bring the public’s attention to a field (such as computing) and to special ideas (such as the Personal Computer and the Internet).
The person who is given an award was almost always part of a team, especially in my field, and so we are the “symbols” of a larger effort.
In this case I’m happy to accept the award as the representative of a 10 person research group and a community of several hundred who came up with the ideas and the inventions to make Personal Computing and the Internet happen.
Your main contributions have been in two fields: programming and personal computing. We start by asking you about programming.
2. In the seventies, you created the Smalltalk language, and so you are considered the father of object-oriented programming, which is the most widely used programming paradigm nowadays. When did you become aware that this paradigm would become the most appropriate way to develop software?
What I meant when I made up the term “object-oriented” is rather different from the general way the term is used today. What happened at Xerox Palo Alto Research Center was both successful and somewhat startling – so the term itself took on some magic, and there were some misunderstandings about what OOP actually meant. In the 80s many languages appeared that called themselves “object-oriented” (but weren’t), such as C++ and even Object-Oriented COBOL.
Another factor was that people tend to hold on to paradigms they have already absorbed (for example, using data structures and procedures). The most trivial use of OOP allowed the data structure paradigm to be extended, and much of its use today is in this extension of what I thought of as weak, fragile and poorly scalable programming styles.
The idea for OOP that I had around 1966 was to model an unlimited number of whole computers on a network of pure messages (this is partly because of my background in biology and mathematics, and partly because the ARPA research community was starting to tackle the design of an enormous pervasive network of computers that was to become the Internet). I got important insights from several systems that were “almost object-oriented”, including Sketchpad and Simula.
This way of looking at things is a simulation of behavior approach, in which the behaviors should be as high level as possible, and not like data structure programming.
It’s not that it is the “most appropriate way to develop software”, but it was very clear that it would be much more expressive and powerful than the old ways of doing things, and that it fit very well the ideas for graphical interactions that we were starting to invent. And, that it should be very well mapped to both how children and science think about the world.
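Kay’s picture of objects as whole computers on a network of pure messages can be sketched roughly as follows. This is an illustrative toy, not Smalltalk: the `Account` class, the `send` method, and the message names are all hypothetical, chosen only to show objects that interact exclusively by sending messages rather than by touching each other’s internals.

```python
class Account:
    """Each object is a little 'computer': state is hidden, and the
    only way in is to send it a message by name."""

    def __init__(self, balance):
        self._balance = balance

    def send(self, message, *args):
        # Dispatch purely on the message name, like a tiny interpreter.
        handler = getattr(self, "_" + message, None)
        if handler is None:
            # Echoes Smalltalk's "does not understand" behavior.
            return "does-not-understand"
        return handler(*args)

    def _deposit(self, amount):
        self._balance += amount
        return self._balance

    def _balance_query(self):
        return self._balance


acct = Account(100)
print(acct.send("deposit", 50))       # 150
print(acct.send("balance_query"))     # 150
print(acct.send("withdraw", 10))      # does-not-understand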
3. What new paradigm will come after object orientation programming? How will programming languages evolve?
A good idea for the field would be to try doing real object-oriented programming, and then to think about what it would mean to have very capable objects (which might be analogous to the master cells of living things, which are differentiated into at most several hundred kinds of working cells by parameterization). This is a lot neater and more powerful than the way e.g. C++ or Java programming is done today, with thousands of very weak classes, etc.
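The “master cell” analogy might be sketched like this: one capable object that is differentiated by a few parameter changes, instead of a deep hierarchy of thousands of weak subclasses. All names here (`Cell`, `differentiate`, the trait names) are my own illustration, not anything Kay specified.

```python
class Cell:
    """One capable 'master' object; working kinds are produced by
    parameterization rather than by subclassing."""

    def __init__(self, **traits):
        self.traits = traits

    def differentiate(self, **overrides):
        # A new working cell is the master cell plus a few overrides.
        return Cell(**{**self.traits, **overrides})


master = Cell(metabolism="aerobic", divides=True)
neuron = master.differentiate(signals=True, divides=False)
muscle = master.differentiate(contracts=True)

print(neuron.traits["metabolism"])  # aerobic (inherited from the master)
print(neuron.traits["divides"])     # False   (overridden on differentiation)
```

The point of the sketch is that the variation lives in parameters, so there are only as many “kinds” as there are parameterizations actually in use.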
4. What changes are necessary in software development? When will there be a real software industry?
In the 70s at PARC, Smalltalk programmers used a system that was completely composed of real objects, and its development system was written in itself and always active. This meant that any change could take effect in less than a second, and there were no overnight system builds from source code followed by debugging to see if everything was OK. Even though PARC did things very successfully, most software development still uses imitations of files of punched cards, external editing, compiling and linking, and weak debugging (just as it was done in the early 60s).
I think something better than Smalltalk should be done today, but this would be several large steps beyond most practice today.
5. What is your opinion about Java? Why did you say that Java is the saddest thing to happen since MS-DOS emerged? And what is your opinion of Ruby?
In the old days people were chastised for “reinventing the wheel” (that is, for not learning about the good stuff from the past and trying to do better). But today most people are doing much worse: they are “reinventing the flat tire” (and not realizing it is flat and doesn’t even work as well as a wheel).
One question to ask about any system is its complexity-to-payoff ratio: is the complexity actually worth the power that is delivered? Most (but not all) of the languages invented since the 1980s are just terrible in this respect, and Java is one of them.
Ruby is a cleaner design with a better score in a number of areas than Java. It takes some of its ideas from Smalltalk. And it has a bit of a meta-system, but not a full one. This is a mistake and makes many important things needlessly difficult or practically impossible to do.
Now it is the turn of personal computing.
6. In your opinion, which have been the main inventions that made personal computers accessible to everyone? What role has the window-based GUI played? What was your contribution to the emergence of the GUI?
Many important technologies have a large number of inventions that make them work. And quite a few of these – for example, jet planes – are not set up for the general public, but require a lot of special training.
This is what computers were like in the 60s – so it is not a stretch to claim that the most important single invention in personal computing was the graphical user interface which has allowed more than 2 billion people at this point to use computers.
Graphics, and the idea of making an interface on graphic displays goes back to the 50s, and there were several really good examples by the late 60s. The first window appeared in Sketchpad, Engelbart had a form of window (as “panes”), and multiple windows were done both by Ivan Sutherland and in a personal computer project that Ed Cheadle and I did in the late 60s.
But the key to the interface that everyone uses today came about at Xerox PARC when I was trying to figure out a GUI that could be easily learned and used by children. This involved a combination of ideas from the past, several principles of learning and doing from Jerome Bruner, and a few ideas of my own. The actual process required considerable iteration and further ideas, many of which were invented by Dan Ingalls of our research group. Much of this was in pretty good shape after 5 years of experience with many users by around 1975-77. In 1979 Steve Jobs visited PARC, saw many things, including the PARC GUI, and decided that these ideas should be put on the Lisa (the predecessor of the Mac).
7. You are considered a visionary. How could you be thinking about portable computers 40 years ago? What inspired the idea of the Dynabook?
In this case, it was quite easy (providing one was not distracted by the general beliefs of the 60s – about mainframes, punched cards, etc.)
In 1965 Gordon Moore had looked at one of the simpler silicon processes (MOS) and from the near past, physics, and what should be possible with engineering, decided that “all good things” could change by a factor of 2 every one to two years (it turned out to be about every 18 months). If one accepted this argument, then one had a roadmap for the next 30 years to 1995.
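The roadmap arithmetic is easy to check: a factor of 2 every 18 months, compounded over the 30 years from 1965 to 1995, comes to 20 doublings, roughly a millionfold improvement.

```python
# Back-of-the-envelope check of the Moore's Law roadmap Kay describes:
# a factor of 2 every 18 months, projected from 1965 to 1995.
months_per_doubling = 18
years = 1995 - 1965

doublings = years * 12 / months_per_doubling
improvement = 2 ** doublings

print(doublings)    # 20.0
print(improvement)  # 1048576.0 -- about a millionfold over 30 years
```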
Mainframe companies that did think about this thought that it would make the margins better for future mainframes.
I had been working on one of the first desktop personal computers in the late 60s (already a radical idea), and saw in 1968 the work Seymour Papert had been doing to help children learn mathematics using some of the very special and unique properties of the computer. I was very excited by this, and saw immediately that a desktop computer did not fit the active nature of children – this got me thinking about when the transistors and software system in the desktop computer could be put into a notebook-sized computer. There were already 3cm by 3cm flat screen displays starting to appear, and I had seen one that year.
It would have both a stylus and a keyboard (and I used many ideas from the ARPA community here, including ideas from Engelbart, and the RAND corporation).
Using Moore’s Law I was able to guess that the first time a notebook computer could be made would be in about 10 years at the end of the 70s, and this was good because we did not have a good idea about the kinds of software and user interface such a machine should need.
The other properties were very easy. The ARPA community had been working on the first version of the Internet, and the plan was for it to be available both through wires and wireless radio. Both of these fit well into the mobile nature of the notebook computer.
Flying back from the visit with Papert, I drew this cartoon showing two children using these machines.
And then, back in grad school, I made this cardboard model to see what it would feel like. I filled it with lead pellets to see how heavy it could be without being a problem (Ideal weight was about 2 lbs, max weight was about 4 lbs).
I calculated how many pixels you would need to do high quality text rendition on it (about 1000x1000 at about 100 to the inch).
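The display arithmetic works out as follows: 1000x1000 pixels at about 100 pixels to the inch implies roughly a 10-inch-square page area with a million pixels.

```python
# Checking the Dynabook display estimate: 1000x1000 pixels
# at roughly 100 pixels per inch.
pixels = 1000 * 1000
inches_per_side = 1000 / 100

print(pixels)           # 1000000
print(inches_per_side)  # 10.0
```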
Later at Xerox, I wrote a paper about this idea, and gave it the name Dynabook. The paper had more thinking about the features (including the idea of making the whole front surface a finger touch sensitive display and displaying the keyboard when needed), and some estimates of price (basically that it would cost about what a TV set of the time would cost – I estimated about $500).
At Xerox, it was still too early to actually make a Dynabook, but the PARC Alto (which became the Macintosh) was originally called “The Interim Dynabook”. So the Mac came from a successful attempt to make a desktop version of the Dynabook.
8. Why do you claim that “the computer revolution hasn’t happened yet”?
Most revolutions, like civilizations, are not destinations but a “manner of traveling”. So civilization is partly “a culture of people who are trying to become more civilized”. A revolution of personal computers is a culture of people who are trying to find out what these “instruments whose music is ideas” are actually all about and what we can become by making strong use of them.
Almost no one who uses computers today is trying to find out what they really are. So: the real computer revolution hasn’t started yet.
9. You have always been very interested in the education of children. How can computers help in children’s learning process?
The simpler part is to be a world wide extension of what books and libraries have meant to us for the last 500 years.
The more difficult part has to do with what is special and new about computers and how they can represent and help us think about important ideas.
Some of this is “thinking by making”, and the computer is unique both in what can be made on it and in the range of ideas which can be thought about.
This will remain opaque until enough adults start to understand computers (which could take a very long time), or until a user interface that can teach new ideas is invented; then perhaps existing adults can be circumvented, and children can learn more directly the ideas the adults don’t understand.
10. You have worked at Walt Disney. How much can computing influence the future of animation and the film industry? Did you participate in the 1982 film Tron?
Well, the first group I was a part of in grad school in 1966 was at the University of Utah where we invented “continuous tone 3D graphics” (which is the kind of 3D technology that is pervasively in use today).
So I would say that this had quite an effect on the animation and film industry!
I met my wife (the original writer of Tron) when she came up to Xerox PARC looking for a technical adviser to the project. (So I participated quite a bit in the original scheme for Tron, but pulled out when it went to Disney because I didn’t like what they were doing with it.)
That was in the early 80s. I took my group to a very different Disney in 1996.
11. How is Internet changing our society? How do you imagine the information society in the future?
It could be like the “TV society” today, but much worse, or it could be so good that it could only be compared to the changes the printing press caused in the enormous transition from the Middle Ages to the modern era.
The first route is easy, and it is desired both by commercial interests and most people. The second route is very difficult, and education systems of the world will need to be drastically changed in order to make the good path happen.
12. You love music. Are you as good a musician as you are a computer scientist? Do you have time to practice music? Which instruments do you play?
Yes, I have always been attracted to music, and I’m a “medium musician” in pretty much all areas of playing and composing both jazz and classical music. I don’t think of my ideas in computing as being “earth shattering” either, but I think I was greatly helped by my background in not just math and biology, but also in theater and music (for thinking about what the graphical user interface should be like) – and by the unformed nature of the field in the 60s – and by the presence of a few real geniuses (such as Ivan Sutherland and John McCarthy).
I play jazz guitar and classical keyboards (the main one is the baroque pipe organ).