Notes on the 8th OOPSLA Workshop on Domain-Specific Modeling, part III (Sunday Evening)

In the final segment of the Sunday sessions, we had a “goldfish bowl.” Steven Kelly, Gøran Olsen, Laurent Safa, and Arturo Sanchez each said a few words about the “Evolution of DSMs.” Then other participants could filter into the circle of chairs, ask questions, join the discussion, and “fade out” when they were done.

Steve opened with the point that (just like Tennessee in 1925!) people just don’t talk about evolution in DSM. Everything assumes that you write this thing and then you’re done! That attitude is sufficient to get a working prototype for a research paper, but in the real world… clients may not tolerate it so well. He had a very interesting illustration of the interrelationships among the Model Language, the Models, the Generator, the Framework, and the Generated Code. How does a change in the model language cascade through a running system?

As the session topics and crowd response indicated during the day, most people in the room really wanted tools to help cope with evolutionary issues. They wanted to know how to deploy model modifications into an environment where previous iterations are still in operation. They wanted IDE tools to be seamless… they wanted IDEs to understand the maintenance nightmare that DSM can unleash. They wanted things to just work– they wanted “flow through”: if the model changes, everything downstream of the model should be patched or rebuilt automatically, with transformations generated automatically to handle all of this housecleaning. There seemed to be a consensus that UML just didn’t work– it’s too general. At the same time, there were hardly any presentations that didn’t have a completely incomprehensible UML diagram in their slides.

Steve went into detail about what sorts of things tend to break among the various competing tool sets. He noted that the XML-based tools are still too new to have any sort of robust answer to DSM evolution issues. The older tools have features for this because customers insisted on it: when customers found themselves in a situation where their models suddenly stopped working, for some reason they just didn’t tend to respond well to the notion that they were going to have to do all of the grunt work of fixing everything by hand….

Arturo disagreed with all of this. He pointed out that DSMs should emphasize the “specific” part of the acronym: if you can handle all of this evolution stuff, what you’re developing is just another general-purpose programming language! Steve didn’t appear to think that an attitude like this could last long in the presence of real paying clients. He also pointed out that if you can’t evolve, then you’ve pretty much lost the chance to ever reuse anything.

Steve spoke in specific terms about what sort of changes the current tools can tolerate. Most of them will let you add new properties. Some XML tools would make new properties “required”, though, which would cause the old models to break. Some tools could tolerate a renaming. Luis Pedro wondered why you couldn’t use new models and old models together– he’d used a database tool that could manage to produce the right SQL for querying something regardless of the actual version in operation. Steve didn’t seem to think that such tactics could work with DSM.
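To make the breakage concrete, here is a minimal sketch (my own illustration, not from the workshop) of what happens when a metamodel gains a new required property: old models fail validation until a migration transformation patches them. All the names here (`validate`, `migrate`, the “sensor” properties) are hypothetical.

```python
# Models are plain dicts; the "metamodel" is just a list of required keys.

def validate(model, required_props):
    """Return the list of required properties the model is missing."""
    return [p for p in required_props if p not in model]

def migrate(model, defaults):
    """Patch an old model to satisfy a newer metamodel by filling in
    defaults for properties it never had. Existing values win."""
    return {**defaults, **model}

# Version 1 of the metamodel: a 'sensor' needs a name and a pin.
old_model = {"name": "thermostat", "pin": 4}

# Version 2 adds a required 'sample_rate' -- old models now fail validation.
V2_REQUIRED = ["name", "pin", "sample_rate"]
print(validate(old_model, V2_REQUIRED))    # ['sample_rate']

# A migration transformation can repair the old model automatically.
fixed = migrate(old_model, {"sample_rate": 100})
print(validate(fixed, V2_REQUIRED))        # []
```

This is, in effect, the “flow through” people were asking for: ideally the repair transformation would itself be generated from the difference between the two metamodel versions rather than written by hand.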

At some point, a Ruby coder jumped in and said that in the real world, you just don’t have time to mess with a bunch of pointless UML diagrams and fancy modeling. He urged everyone to work toward creating a system where you could make DSLs so quickly that it would be easier to start over than it would be to “evolve.”

At the end of the discussion, Arturo reiterated his point that mixing evolution with DSMs was crazy. (He had a real zinger– something like, DSMs are revolution… and if you add evolution to them, you’ll get some serious retribution. Argh. I missed the last word, but it was something like that. Devolution? Contribution?) Steve said he was depressed: the current crop of tools is inferior to the tool he was using 13 years ago! (Hint: Steve’s tools are built on Smalltalk.) Luis Pedro reiterated the point that reusability requires evolution.

6 Responses to “Notes on the 8th OOPSLA Workshop on Domain-Specific Modeling, part III (Sunday Evening)”

  1. Mark Miller Says:

    What you saw at OOPSLA in general sounds typical. I listened to a podcast with Avi Bryant, the creator of the Seaside web framework, not too long ago and he talked about how he came up with it. He said it began with a visit to OOPSLA back around 2000/2002. He said all of the presentations were in Java and not that interesting. He said the most interesting stuff was the discussions that were going on in the hallways. I think he said these were people who weren’t attending the presentations. They were just small ad hoc groups. A lot of these people were talking about stuff having to do with Smalltalk and Ruby. After that he looked at Apple’s WebObjects framework, started working on his own framework based on WebObjects, in Ruby, and then eventually switched to developing it in Smalltalk.

    I may have mixed up the chronology a bit, but that was the gist of it.

    What the Ruby coder said is interesting to me, because Jeff Moser has talked about the same thing. The way he’s phrased it is developing a “Moore’s Law of software”, and the centerpiece of that for him has been his work on OMeta# which allows one to create DSLs more easily in .Net, though it’s not the endpoint of that effort.

  2. lispy Says:

    The Dolphin guys have a similar take:

    “Then to make matters worse, the computer science academia started to be seduced by the money that was available for making Java better. Blair and I attended all of the OOPSLA conferences from 1997 up until 2004. How many papers did we see on Java garbage collection or generics? All this stuff had either been done 10 years before or shouldn’t have been necessary anyway. None of it seemed to be advancing the state of computer science.” — Andy Bower

    That’s really scary stuff!

  3. Mark Miller Says:

    Hi Lispy.

    Andy Bower says just what Alan Kay has been saying the last few years, as I think you know (if not, you can read where I talk about it here. For more of what he said along these lines, check out the ACM Queue article I reference in my post). Reading Kay talk about this wasn’t so much scary as discouraging. This, along with his presentations, helped me realize that the industry I’ve been involved in and fascinated by for years is no great shakes. He’s “spoiled” it for me, but that’s a good thing. Now I’m more fascinated by the possibility of what the computer really represents in our civilization, which takes one back to the notion of developing CS as a real science.

    What’s been going on at OOPSLA, I suspect, is by and large like a bunch of scribes trying to do neat stuff with hieroglyphics on stone, papyrus, and clay tablets (Kay would use an architectural example, but I like the metaphor of a writer).

    Kay has many complaints about this. He’s blamed Java (“Java and C++ make you think that the new ideas are like the old ones. Java is the most distressing thing to hit computing since MS-DOS.”), but really he blames the commercialization of the microcomputer 28 years ago, saying it was like the introduction of television. Culturally we weren’t ready to turn it into a “high medium”, and I’m sure he’d say we still aren’t. Fortunately we still have books, though he’s complained for years that the internet is making books seem passé to most people. So we still read, but by and large the quality of what we read is a lot lower. His biggest complaint about this is that in the tech industry, and even in CS as a discipline, “we don’t read”, and we don’t have a real sense of history.

    He’s lamented that because of the excitement around “pop culture” stuff like Java there isn’t much interest in putting money towards real computer science, I guess because the pop culture has convinced everyone that *it* is real computer science.

  4. lispy Says:

    Sounds like Kay is getting all “Neil Postman” on us. The book *Amusing Ourselves To Death* was pretty influential to me back when I was in college. JohnL’s remarks at Lisp50 were on the verge of striking a similar vein. Comp Sci must produce its own analog to Lewis Mumford one way or another!

    The bitterness among Smalltalkers in general is a little distressing. That’s one reason I was so impressed by Steven Kelly– he’s just supremely confident and has a pretty good gig going there… and he’s not going the “angry young man” route of shaking his fist at the uncaring “establishment”.

    He just struck me as supremely affable and patient: “Yeah… we had a better solution to this 15 years ago… and you do seem to be doing some of this the hard way… but, you know, I’ll help you along wherever you are in this.” That’s not what he said; it’s just how he came off to me. It looks like he has a sustainable gig working in a fundamentally great language/platform… and he’s not hung up on playing the sore loser game.

    I think Lisp and Smalltalk are being used in places where nothing else can compete. The claims about them making hard things possible are not an exaggeration. It’s just that… for whatever reason, there are only so many operations that are in a position to truly tackle some of that ‘hard’ stuff effectively. The large communities surrounding the “worse is better” approaches are the deciding factor among the “mere mortals” out there….

  5. Mark Miller Says:


    “Amusing Ourselves to Death” is a book Alan Kay has recommended frequently. What he says about the technology industry, though, is true. It’s not just people like him saying it, either. Coding Horror wrote about this recently, but in a different context. From what I’ve read it sounds like the “Become an X programmer in 21 days” books have taken over, though I suspect starting in the next year we’ll see a return to higher-quality programmers…

    IMO Kay is the analog in the CS discipline to Mumford. I’ve followed what he’s said over the years and I think he clearly understands the development of our civilization and the role technology has played in it, and he understands that its advancement was driven by cultural, not technological, advancement. The way he says it is “technology creates an excuse” for this advancement. He also pays attention to McLuhan, that “The medium is the message”, though the way he says it is, “We shape the medium, and then it shapes us.”

    I’ve seen some of the bitterness you speak of in the Smalltalk community (though I would call them bitter old men, not young men–Seasiders being the exception), on the Squeak mailing lists, but it’s not that bad. By and large they’ve been very patient folks to newcomers like me.

    Steven Kelly sounds pragmatic. There are quite a few Smalltalkers who work with Smalltalk on their own time, but work with Java for their day jobs. I personally don’t object to using lesser architectures. I think they’re more painful to use, but I understand that a lot of people don’t understand and don’t have the time or inclination to learn something that would make their lives easier. A big part of it is, as I said earlier, culture. It’s a cultural shift to go from C++, C#, Java, etc. to Lisp, Scheme, Smalltalk, or even Ruby and Python.

    A big thing people have to give up is assuming that the language is going to “keep them safe”. The languages depend a lot more on you having a clear idea of what you intend to create. You in effect are creating your own language and architecture, your own abstractions, and then working with them. Different assumptions also take hold. From what I’ve heard TDD was invented in the Smalltalk community largely out of necessity. When you don’t have static typing, for example, it becomes a lot more important to test the data flow through a program to make sure everyone’s getting what they expect.
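    The point about testing data flow in place of static typing can be sketched like this (my own toy example, not from the comment; all names are made up): without a compiler to reject a bad argument type, a small test documents and enforces the contract instead.

```python
# Without static types, nothing stops a caller from passing strings where
# numbers are expected -- so tests pin down the data flow explicitly.

def average(samples):
    """Mean of a sequence of numbers; callers must not pass strings."""
    return sum(samples) / len(samples)

def test_average_of_numbers():
    assert average([2, 4, 6]) == 4

def test_rejects_strings():
    # In a statically typed language this misuse would fail to compile;
    # here a test documents and enforces the contract at runtime.
    try:
        average(["2", "4"])
    except TypeError:
        pass
    else:
        raise AssertionError("expected a TypeError for string input")

test_average_of_numbers()
test_rejects_strings()
print("all tests passed")
```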

    I think you’re right about where Lisp and Smalltalk are used. I think it’s unfortunate because they could be used in a lot more places. There are plenty of enterprise settings where I think they would work better than what’s currently used. Over the years I’ve heard stories of these places being overburdened with maintenance issues, and code libraries so massive and complicated they make writing software very difficult. They sound like situations where they’re just barely holding everything together.

    Some startups have used them, which I think is a forward-looking idea. Kay has always said that we should use architecture that scales well, and what better example of the need for that than a startup–a small company that has hopes of becoming big.

    Re. your last point, the industry is largely based around architectures that are easy to teach, but don’t scale well. The reason they don’t scale well is because the ideas implemented in them are limited in sophistication. It takes learning some advanced ideas to get the better architectures. Beyond that it takes learning other disciplines besides CS, and a solid grounding in math and science, to entertain even more advanced architectures.

  6. Why UML Fails to Add Value to the Design and Development Process « Learning Lisp Says:

    […] UML Fails to Add Value to the Design and Development Process While attending the Domain Specific Modeling workshop at OOPSLA 2008, I heard many pointed criticisms of UML. No one went into detail, so I bought a book […]
