In the afternoon sessions, everything began to blur together: several of the 20-minute presentations (often in broken English) were either overly specific or only vaguely comprehensible to me. As the morning’s coffee overdose began to wear off, I slipped into a muddled despondency.
The most dynamic presentation (literally!) was by the guys who did “ModelTalk: A Framework for Developing Domain Specific Executable Models”. Atzmon Hen-tov gave a rundown of the overall design; then Lior Schachter walked through a ten-minute change to an existing web application. These guys weren’t academics stuck up in some ivory tower. They maintained over twenty systems with a team of over fifty developers. They needed to deal with an environment of pervasive customization while delivering frequent updates.
This was a whirlwind demo, but I gather they were building in a style closer in spirit to CLOS and Smalltalk than to the typical Java approach. They wanted to be able to extend the models without having to recompile binaries. As far as I could tell, if Java didn’t have an object specification for a specific model, it would just use something similar in its place.
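A minimal sketch of how that kind of fallback might work, assuming a hypothetical registry that maps model names to Java classes and walks up the model hierarchy when there’s no exact match (none of these names come from ModelTalk itself):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: resolve a model name to a Java class, falling back
// to the nearest declared ancestor when no exact binding exists.
class ModelRegistry {
    private final Map<String, Class<?>> bindings = new HashMap<>();
    private final Map<String, String> parents = new HashMap<>();

    void bind(String modelName, Class<?> impl) { bindings.put(modelName, impl); }
    void declare(String modelName, String parentName) { parents.put(modelName, parentName); }

    // Walk up the model hierarchy until we find a bound Java class.
    Class<?> resolve(String modelName) {
        String current = modelName;
        while (current != null) {
            Class<?> impl = bindings.get(current);
            if (impl != null) return impl;
            current = parents.get(current);
        }
        throw new IllegalArgumentException("No binding for " + modelName);
    }
}
```

So a new model like “RushOrder” declared as a child of “Order” would simply resolve to whatever Java class backs “Order” — no recompilation needed.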
I think they referred to this as dependency injection. Code completion, automatic syntax checking, and dynamic error checking were all demonstrated: constructs in the model DSLs had exactly the same level of tool support as the straight Java side. The developer appeared to jump back and forth between the two contexts with ease. (The crowd seemed to be fairly impressed by this feat.)
These developers emphasized repeatedly that their productivity gains were primarily due to their declarative and interpretative approaches. Now, I was probably the dumbest person in the room… but I just couldn’t understand how someone excited by this architectural approach could possibly stay committed to using something like Java. My impression was that they were investing huge amounts of developer effort in order to work around the inherent limitations of the Java language. You’d think that there would be other platforms out there that would be a little more friendly to a dynamic approach to handling their requirements.
One telling thing they said was that, because of their interpretative approach, they could use new classes at run time as long as those classes didn’t need any new behavior. (What they were trying to avoid was having to recompile the binaries.) This didn’t make any sense to me. If you’ve got “new” classes without new behavior… then really all you have is old classes with new property values. It left me with the impression that these guys were bending over backwards to deal with DSM and XML specifications just to work around Java’s pathological type system, and that they weren’t really gaining anything from the DSM that a more traditional data-driven architecture couldn’t give them. Surely I missed an important point in there somewhere….
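To make that objection concrete: here is a hypothetical sketch of the plain data-driven alternative, where a “new class” is nothing but a property bag handed to one generic Java class. All the behavior lives in compiled code; only the data varies at run time:

```java
import java.util.Map;

// Hypothetical sketch: a "new class" defined purely by data. The behavior
// lives in this one compiled Java class; what changes at run time is only
// the type name and the property values.
class ModelInstance {
    final String typeName;
    final Map<String, Object> properties;

    ModelInstance(String typeName, Map<String, Object> properties) {
        this.typeName = typeName;
        this.properties = properties;
    }

    Object get(String key) { return properties.get(key); }
}
```

Introducing a “DiscountOrder” type is then just `new ModelInstance("DiscountOrder", Map.of("rate", 0.15))` — which is exactly what I mean by old classes with new property values.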
In another session, the speaker talked about how people tend to wrap a DSL around a specific framework… but then what do they do when they begin to outgrow the framework? (They also had a cool slide for this point.) After the presentation, Steven Kelly noted that you could wrap the framework in a very thin DSL… and then your main DSL should just talk to that instead of going directly to the framework: this way you could switch frameworks without having to modify your DSL– all changes would be restricted to the thin “buffer” layer. One of the other attendees dismissed this strategy with a wry remark: “There’s nothing in CS that cannot be solved by simply adding another layer of misdirection.” In an informal discussion afterwards, I mentioned that this was an idea presented in the classic SICP as a linguistic abstraction barrier. Nobody in our corner of the room had heard of the famous “wizard” book, though.
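Kelly’s suggestion, as I understood it, might look something like this in Java — the DSL talks only to a thin interface, and each framework gets its own adapter behind it (the interface and class names here are my own invention, not anything from the talk):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical "thin buffer layer": the main DSL speaks only to this
// interface and never touches a concrete framework directly.
interface PersistenceLayer {
    void save(String id, String payload);
    String load(String id);
}

// One adapter per framework. Switching frameworks means swapping this
// class out; the DSL above it never changes.
class InMemoryPersistence implements PersistenceLayer {
    private final Map<String, String> store = new HashMap<>();
    public void save(String id, String payload) { store.put(id, payload); }
    public String load(String id) { return store.get(id); }
}
```

All the framework-specific churn is then confined to the adapter classes — which is precisely the “buffer” Kelly was describing.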
Here’s a sample section that illustrates how the idea is described there:
“Map is an important construct, not only because it captures a common pattern, but because it establishes a higher level of abstraction in dealing with lists. In the original definition of scale-list, the recursive structure of the program draws attention to the element-by-element processing of the list. Defining scale-list in terms of map suppresses that level of detail and emphasizes that scaling transforms a list of elements to a list of results. The difference between the two definitions is not that the computer is performing a different process (it isn’t) but that we think about the process differently. In effect, map helps establish an abstraction barrier that isolates the implementation of procedures that transform lists from the details of how the elements of the list are extracted and combined…. This abstraction gives us the flexibility to change the low-level details of how sequences are implemented, while preserving the conceptual framework of operations that transform sequences to sequences.” — SICP Section 2.2.1
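SICP writes this in Scheme, but the same contrast renders fine in Java (method names are mine; the second definition plays the role of the map-based scale-list):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

class ScaleList {
    // Element-by-element recursion: the traversal is spelled out explicitly.
    static List<Double> scaleListRecursive(List<Double> items, double factor) {
        if (items.isEmpty()) return List.of();
        List<Double> result = new ArrayList<>();
        result.add(items.get(0) * factor);
        result.addAll(scaleListRecursive(items.subList(1, items.size()), factor));
        return result;
    }

    // Defined in terms of map: the traversal detail is suppressed, and all
    // that remains visible is "a list of elements becomes a list of results".
    static List<Double> scaleListMap(List<Double> items, double factor) {
        return items.stream().map(x -> x * factor).collect(Collectors.toList());
    }
}
```

Both compute the same thing; the map version just lets you stop thinking about how the list is walked.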
The point is made much more forcefully in Abelson and Sussman’s video lectures, but you should get the idea. “Linguistic abstraction barrier” sounds really cool, but it’s really just the most basic component of defensive programming. If I have an object model that I have to interact with a lot, and that is likely to change drastically due to new developments, then it’s crazy to embed calls to that object throughout, say, my GUI event routines. I should wrap the object inside another one that “speaks” a higher-level “language” that maps more directly to what typically goes on in the GUI. This sort of barrier frees the GUI from having to “know” anything about the underlying framework. It also provides a high-level organization and commentary on what the framework is actually being used for. This situation is isomorphic to the DSL/Framework issue!
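As a hypothetical illustration (every name here is invented), the GUI would talk to a small task-level wrapper rather than to the raw object model:

```java
// The volatile underlying object model -- a stand-in for whatever
// framework class is likely to change out from under you.
class OrderModel {
    double total;
    void applyDiscount(double rate) { total -= total * rate; }
}

// The abstraction barrier the GUI actually calls. Its methods are named
// for what the GUI *means*, not for how the model happens to do it.
class OrderActions {
    private final OrderModel model;
    OrderActions(OrderModel model) { this.model = model; }

    void giveLoyaltyDiscount() { model.applyDiscount(0.10); }
    double displayTotal() { return model.total; }
}
```

When the model’s API changes, only `OrderActions` has to be touched; every GUI event routine keeps speaking the same small “language”.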
Okay, okay… this is a real minor point. But a guy can’t pass up a chance like this to make oblique references to SICP! (I guess MIT isn’t quite the cult phenomenon in Europe that it is here?) Seriously, though. You just can’t have people scoffing at the idea of an abstraction barrier.