Herbert Stoyan was next to talk at the conference. He covered many obscure details of the earliest days of Lisp. My notes are pretty sparse, but I remember him showing several pictures of McCarthy and Minsky– he said, “Marvin Minsky, he hasn’t ever changed. But if I were to show you twenty pictures of McCarthy, he’d be completely different in each one!” Somewhere in there Richard Gabriel (I think) stood up and said, “John McCarthy is the worst programmer on the planet. He calls me up for the simplest things!” I’m pretty sure it was some sort of strange Lisp-type humor, as everyone laughed.
Stoyan has a Lisp museum in Germany, but he’s in the process of moving everything to the Computer History Museum in Mountain View, California. Everyone with Lisp artifacts needs to contact this guy and find a home for them. (The guy to talk to is Paul McJones.) As a cautionary tale, he noted that we’ve already lost the first Lisp compiler– the guy who made it was actually a mathematician and didn’t even get a PhD in Computer Science! (Stoyan had a slide of the guy’s photo.) During the questions after the talk, JonL recalled McCarthy talking about one of the newer machines in the early days. When questioned about its memory limitations, McCarthy said, “You can’t possibly cons too much on this new machine!” This story got a good laugh from the crowd.
Pascal Costanza was up next. He spoke on his research about Context-Oriented Programming. The basic idea is that the computer should behave differently depending on the situation– for example, if the computer “knew” the user was angry due to heart rate, etc. then it should maybe not do certain things. He noted that you can implement this sort of thing naively by having lots of if-then blocks everywhere. That’s no fun, though. Then he made what I thought was a bombshell of a point: Model-View-Controller takes code dealing with a single object’s behavior and spreads it all over the system. (He had a slide showing a UML diagram raging out of control.) He said the original idea of OOP was that behavior is defined just in the class where it belongs. With behavior spread all over the object model, objects no longer “know” how to behave!
Here’s how Costanza makes that point in his paper: “Among prominent domains to motivate object-oriented programming, graphics and people are the most widely used examples. A reason for their popularity is the fact that one can easily introduce the idea that objects know how to behave, that is how to react to messages…. However, when programs become more complex, the code for displaying objects is usually not contained in the classes to be displayed because there is a need to have different views on the same objects, often at the same time. Therefore, such code is separated into view objects that need to be notified of changes to model objects (such as instances of Rectangle or Person), leading to variants of the well-known Model-View-Controller (MVC) framework originally introduced with Smalltalk. Unfortunately, this distribution of responsibilities that conceptually belong to a single object complicates the original simplicity of the object-oriented paradigm. For this reason, some more recent object systems like Self and Squeak have even changed their frameworks for presenting objects on the screen back to the original idea that objects maintain their own knowledge about how to display themselves. However, with that they lose the desired property to offer different views of the same objects. ContextL provides an alternative approach that both keeps the conceptual simplicity that all of an object’s behavior is indeed associated with that object and still allows an object to be viewed in different ways depending on the context.”
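To make the contrast concrete, here is my own sketch (not anything from the talk) of the naive if-then version in plain CLOS; the global *show-employment-p* is a hypothetical stand-in for whatever represents the current situation:

    ;; Plain CLOS, no ContextL: the context check is hard-wired into the method.
    (defclass person ()
      ((name :initarg :name :accessor person-name)
       (employer :initarg :employer :accessor person-employer)))

    ;; Hypothetical global flag standing in for "the current situation".
    (defvar *show-employment-p* nil)

    (defmethod display ((p person))
      (format t "Person: ~A~%" (person-name p))
      ;; Every method that cares about context needs its own if-then block.
      (when *show-employment-p*
        (format t "  Employer: ~A~%" (person-employer p))))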
Costanza then fired up LispWorks Professional Edition and started hammering out code. This was all CLOS code. He wrote his example using ContextL’s defining forms: define-layered-class, define-layered-function, define-layered-method, and deflayer. To call a method on an object in a given context, you call with-active-layers, passing it a list of layers and a method call on your object: (with-active-layers (employment) (display *pascal*)). He then paused for questions and the crowd went a little crazy. People wanted to know how these features interacted with inheritance and how to write a layer so it knows that other layers are active under certain conditions. Gregor Kiczales was floored: “We in the CLOS community have failed to explain why multiple inheritance is important. The problem is complicated, but our code is simpler.” He wanted to know what we could do to actually win the “political” argument. Pascal responded, “I don’t care about the political discussions. I think we should be focused on science.” (In an informal discussion out in the hall, someone noted that Kiczales didn’t think ideas in the lab actually proved anything– in order to really demonstrate the validity of an idea, you have to have a base of users that are actually applying it in a real situation.)
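For anyone who wants to try this at home, here is a minimal sketch of the kind of thing he was typing. The class, slot names, and output strings are my own guesses, but deflayer, define-layered-class, define-layered-function, define-layered-method, and with-active-layers are the ContextL forms he named (the library is loadable via Quicklisp as :contextl):

    (ql:quickload :contextl)
    (use-package :contextl)

    ;; A layer is just a name that can be activated dynamically.
    (deflayer employment)

    (define-layered-class person ()
      ((name :initarg :name :accessor person-name)
       (employer :initarg :employer :accessor person-employer)))

    ;; The base behavior lives with the object...
    (define-layered-function display (object))

    (define-layered-method display ((p person))
      (format t "Person: ~A~%" (person-name p)))

    ;; ...and the employment layer refines it without touching the base method.
    (define-layered-method display :in-layer employment :after ((p person))
      (format t "  Employer: ~A~%" (person-employer p)))

    (defvar *pascal* (make-instance 'person :name "Pascal" :employer "VUB"))

    (display *pascal*)               ; base view only

    (with-active-layers (employment) ; the context-dependent view
      (display *pascal*))

The point, as I understood it, is that the :after method belongs to the employment layer, so activating or deactivating that layer at runtime changes how *pascal* displays itself– without a single if-then block in the object’s code.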
[Note: Gregor Kiczales is one of the co-authors of The Art of the Metaobject Protocol. Can you imagine presenting a paper about CLOS and having him ask you questions afterward?! Costanza did not flinch....]
The Ruby coder that had jumped in the DSM fishbowl the night before returned right about then. From the back of the room he asked (somewhat agitatedly, no less) how you could possibly save the contexts in the case of a real web application. (I think he was coming from a… what is it… a Spring or a Hibernate standpoint or something. Something like that.) Costanza responded that if you knew what you were doing, you could get at everything. Ruby guy also wanted to know how you could possibly handle exceptions with this stuff.
Costanza put up a slide showing some benchmarks. Inexplicably, he actually made the code faster in some situations just by adding contexts! Everyone wanted to know how this could be, and Costanza responded that it didn’t mean anything. He said to never optimize on the basis of guesses; efficiency is not important– give us more flexibility, he declared! Kiczales couldn’t handle this– in most of the meetings he’d ever been in, a 1% loss of efficiency would mean he’d get shown the door. “We cannot ship something that is less effective than the thing we shipped before,” he said. Costanza said that most of our machines are idle– who cares?! Ruby and Python are very popular and they are dog-slow…. One percent? People these days don’t care about a ten percent performance hit!
JonL gave the final remarks, closing with a word of caution about benchmarking. The competitive impulse magnifies trivial differences. Back in the day, a meaningless difference would mean that a Lisp hacker would be dead in the water and have to go back to C. JonL said there should be no more papers on Garbage Collectors unless you had an example. There should be no more benchmarking without theory and no more theory without benchmarking– it’s not the magnitude, it’s the “why” that matters.