Archive for October, 2007

How Studying SICP Made Me a Better Programmer, part II

October 27, 2007

Okay, I told “Jon” that I would follow this up, so here goes.

The problem I had with the code last time was that I was leaning too much on bad habits I developed staring at too much relational database code. I used a hash table to store everything about the data structures. In all my functions for manipulating the data, I had to tag the arguments with an additional reference to the hash. This was pointlessly cumbersome. What I really needed to do was eliminate my dependency on a data store and just use cons, car, and cdr to construct everything. This allows me to treat my data structures (and pieces of my data structures) as primitives in a linguistic abstraction layer. As a bonus, all of the standard Common Lisp operators work with them without any “crutches” or extraneous code getting in the way.
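
To make that concrete, here’s a minimal sketch of the kind of constructor/selector layer I mean– the names are made up for illustration, and this isn’t the code from the example file:

(defun make-widget (name parent children)
  "Build a widget out of nothing but conses."
  (list name parent children))

(defun widget-name (w) (first w))
(defun widget-parent (w) (second w))
(defun widget-children (w) (third w))

;; Because a widget is just a list, the standard operators work on it
;; directly: (reverse (widget-children w)), (length (widget-children w)), etc.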

One thing I learned going down this road is that the Visitor Pattern is hardly anything more than an application of mapcar. I remember trying to learn about Design Patterns a few years ago and I just couldn’t pick many of them up. The code examples and UML diagrams seemed to do a better job of obfuscating the point than explaining it. Your brain has to “compile” so much information just to get to the point– it’s no wonder so many developers mangle them in application. After reading SICP, it’s much easier to parse them. In a lot of cases they just do things that are very similar to what I want to do in my more functional language anyway.
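
For instance (a toy illustration, not code from the example file), the whole “visit every element and do something to it” idea collapses into a one-liner– the “visitor” is just the function you hand to mapcar:

(mapcar #'string-upcase '("alpha" "beta" "gamma"))
;; => ("ALPHA" "BETA" "GAMMA")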

Another thing I picked up in this exercise was that dynamic variables and hash tables store references to the “conses”. This means you have a lot of control over the lists that are stored in them. It also means that I don’t have to work too hard to write code to manipulate or transform them. Life certainly gets easier coding in Lisp once you think in terms of “pairs” so fluently that they are more natural than anything else. The example code for this post demonstrates some implications of this in detail, so folks that are just beginning their studies with Lisp will want to take note of what’s going on there when we start doing the “forbidden” destructive operations….  (The output of the Common Lisp code should look like this.)
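
Here’s a tiny illustration of what I mean by storing references (again, just a sketch, not the post’s example code):

(defparameter *store* (make-hash-table))
(defparameter *pair* (list 'a 'b))
(setf (gethash 'key *store*) *pair*)   ; the hash table holds a reference, not a copy
(setf (car *pair*) 'z)                 ; a "forbidden" destructive operation...
(gethash 'key *store*)                 ; => (Z B) -- the stored list sees the change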

Finally, after having slogged through most of the first two chapters of SICP, I understand much better how bad a developer I’ve been. On the one hand, I can comprehend a much wider range of architectural approaches… but as a consequence it’s sinking in more and more just how limited my mental programming vocabulary was up until now. I really had no clue how ignorant I was. I knew I was struggling with things that should have had tidier answers, but I really thought I could only ever get just a little bit better than what I was. Now I can sense all those several orders of magnitude that I still have to go, and I wish I could consume a dozen more books before going on to the next project– but life and learning don’t work quite like that….

The main thing is that I now see that I never really understood OOP nearly as well as I thought I did. A lot of people objected to the Golf ball example that a Perl hacker recently used to criticize typical object oriented “design,” but that guy sure had me pegged. Of course, good OOP design is going to be almost indistinguishable from functional programming approaches in some cases– even if it’s a little more verbose– but I don’t think I would ever have understood that so well without my many hours spent with Lisp and SICP.

What You Don’t Know About Closures Can Hurt You

October 25, 2007

Some of you out there may still be wondering why it is that closures and macros are so great. “What exactly can you do with them that I can’t do in my more widely used Turing-complete language,” you ask. That’s reasonable. Well, I don’t think you’ll fully appreciate closures until you grasp some of the basics of functional programming, so let’s start there.

Functional programming languages accord a first-class status to procedures:

* They may be named by variables.
* They may be passed as arguments to procedures.
* They may be returned as the results of procedures.
* They may be included in data structures.

In other words, you can treat procedures as primitives. In the same way that you operate on integers, juggle them, shuffle them, combine them, transform lists of them, do anything you want with them… a functional programmer does the exact same thing with procedures.
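
A quick Common Lisp illustration of all four points (the names are invented for the example):

(defparameter *square* (lambda (x) (* x x)))  ; named by a variable
(funcall *square* 5)                          ; => 25

(mapcar *square* '(1 2 3))                    ; passed as an argument => (1 4 9)

(defun make-adder (n)                         ; returned as the result of a procedure
  (lambda (x) (+ x n)))
(funcall (make-adder 3) 4)                    ; => 7

(defparameter *ops* (list #'+ #'* *square*))  ; included in a data structure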

Now if you’ve never thought much about this or experimented with these ideas, this will sound bizarre. You might have a hard time imagining how this can possibly be so useful. The only way I can think of to describe it is to imagine that you lived in a world where there was no such thing as subroutines, yet– and then suddenly some experimental language came out that had that feature. People would not admit that it was particularly useful and they would point to their terrible hacks and workarounds as evidence that such “ivory tower” concepts were useless in the Real World. Of course, you’ve seen almost the exact same thing in your workplace already, more than likely. When you see code repeated across the code base and you abstract it out into a class that is referenced by multiple projects, suddenly your coworkers are peeved at you because they can’t understand a section of code that they’re looking at without hitting “right-click — Definition” to jump to your new class file. This is why the default code architecture for a crummy business app is to open up a hundred separate form files and freely mix, match, and repeat business logic, data access code, and gui instructions all together. It’s a wonder that these guys even use subroutines…. (Oh wait, you see that, too: gigantic subs that go on for screen after screen that you can’t understand….)

Anyways, when you get procedures being treated as first-class citizens of the language, new abstractions are possible. You can create procedures that take procedures as arguments and that return procedures as values. This means you can take several similar functions and abstract out the basic process that they all share. The procedures that you use as arguments can be used to customize as many other procedures as you like. So on the one hand, you get code reuse via the basic “skeleton” procedures… and on the other, you get opportunities to reuse the code that is being used to customize the “skeletons.” Anonymous functions can be used to in-line the customizations when they’re not going to be reused, so you end up with cleaner, more expressive code.
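
Here’s the kind of thing I mean, as a minimal sketch (the function names are invented):

;; The "skeleton": apply some pricing rule to every order.
(defun price-orders (pricing-rule orders)
  (mapcar pricing-rule orders))

;; Two customizations -- one reusable, one in-lined anonymously:
(defun ten-percent-off (price) (* price 9/10))

(price-orders #'ten-percent-off '(100 250 80))             ; => (90 225 72)
(price-orders (lambda (price) (+ price 5)) '(100 250 80))  ; => (105 255 85)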

Before I looked much into it, functional programming sounded like a step backward. I thought that the whole idea of separating code and data out and encapsulating it into classes was the answer to everything. Of course, you don’t have to give up on classes altogether. You can use functional style techniques to expand the configurability of your objects. In some cases this might be a more appropriate choice than inheritance, for example.

Now, one thing I’ve noticed with my OOP designs is that I end up spending a lot of time tinkering with my interfaces. It seems like I spend more time fiddling with the properties and methods lists and avoiding breaking the Law of Demeter than I do actually working on real code. And then I go back and break apart classes and rework the object model some more. With first class procedures, you can often skip the tedious object instantiation ritual and the maintenance of a separate class file and just jump right to what you’re trying to do. And if you can apply your procedure to a wide variety of data structures, so much the better.

Along into this scene arrives the closure. Again, if you aren’t jumping up and down with excitement for what first class procedures can do for you, you won’t really appreciate what closures add in to the mix. A closure is just a procedure that carries an environment with it. It’s like a class that has just one method and as many private variables as you’d want to cram into it. If you’re entrenched in an OOP mindset, you might think that’s ridiculous– you can make such things in your current language, you think to yourself. But this is one of those cases where less is more… and when mixed with other functional techniques you get some powerful options at your disposal.

So a closure turns out to be just a procedure with state. It’s sort of like an extremely stripped down object. The first thing people tend to show off when they’re going over closures is how you can add a counter to a procedure. Not very exciting, eh? But yeah, you can actually use that simple counter to package all kinds of logging and diagnostic functionality with your procedures. You can make a procedure that takes a procedure as an argument and returns that exact same function with your diagnostic code bolted onto the inside of it. The rest of your code doesn’t care if you’re using the plain vanilla procedure or the one with the extra logging utilities running with it.
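
Here’s a bare-bones sketch of that idea– wrap any procedure so it counts and reports its own calls (illustrative only, not from any particular library):

(defun with-call-count (fn)
  "Return a closure that behaves like FN but logs how many times it has been called."
  (let ((count 0))
    (lambda (&rest args)
      (incf count)
      (format t "call #~a~%" count)
      (apply fn args))))

(defparameter *noisy-sqrt* (with-call-count #'sqrt))
(funcall *noisy-sqrt* 16)   ; prints "call #1", then returns the square root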

Another thing that attaching a counter to a procedure makes possible is the creation of iterators. You might have seen an article recently that talked about how one of these days you just might write your last for loop. This, my friends, is how you get rid of for loops: you write a procedure that takes a procedure and an integer as arguments and returns a closure as its value. Each time you call the closure, the counter gets incremented and the closure returns the next value from the procedure. When the counter reaches the same value as the integer argument, the closure returns a null value. Then what you need is a means of applying your other procedures to the values that are kicked out of your iterators– and with that you are now set free from the tedium of cryptic for-next loop code.
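
A sketch of such an iterator-maker might look like this (just the idea, with made-up names):

(defun make-iterator (fn limit)
  "Return a closure yielding (FN 0), (FN 1), ... then NIL after LIMIT calls."
  (let ((i 0))
    (lambda ()
      (when (< i limit)
        (prog1 (funcall fn i)
          (incf i))))))

(defparameter *squares* (make-iterator (lambda (n) (* n n)) 3))
(funcall *squares*)   ; => 0
(funcall *squares*)   ; => 1
(funcall *squares*)   ; => 4
(funcall *squares*)   ; => NIL from here on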

Now you may be thinking that there’s nothing wrong with your trusty for-next loop. And if you’re an OOPy type of person, you might think that your for-each loop does the job just fine. But how would you say “for each prime number from 200 to 300, execute the following function with that number as an argument”? Functional programming gives you a very expressive way to code exactly that. Another situation where iterators are useful is where you’re working with infinite lists or loops that repeat back on themselves. You might not be able to store an entire list of objects in a collection all at once– but with iterators you can simulate that very thing. You probably already use something very similar to an iterator when you open up a text file and operate on it on a line by line basis.
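
For example, with a prime-testing helper (invented here for illustration), that sentence translates almost word for word:

(defun primep (n)
  "Crude primality test, good enough for an example."
  (and (> n 1)
       (loop for d from 2 to (isqrt n)
             never (zerop (mod n d)))))

;; "For each prime number from 200 to 300, call this function with it."
(mapc #'print
      (remove-if-not #'primep
                     (loop for n from 200 to 300 collect n)))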

It occurs to me as I write this that there was a time when I’d write “do while not rs.eof” at least a dozen times a day. I’d crash my IDE at least once every week or so because I forgot to put in “rs.movenext” at the back end of a loop. That redundant code peppered throughout the application was a major liability– especially when we ended up having to move to a different data access paradigm. Oh the pain! What we needed was an iterator that could be instantiated with a SQL string. Then we could have written procedures to operate on those iterators. No more annoying do-loops! As a special bonus, if we had been writing unit tests back then, we could have made dummy iterators that don’t even call a live database– they just would have pulled in a stream of test data instead!
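
Something as simple as this would have done for the unit-test case (purely hypothetical– no such code ever existed back then):

(defun make-test-row-iterator (rows)
  "A stand-in for a database cursor: returns one row per call, then NIL."
  (lambda ()
    (pop rows)))

(defparameter *rows* (make-test-row-iterator '((1 "Smith") (2 "Jones"))))
(funcall *rows*)   ; => (1 "Smith")
(funcall *rows*)   ; => (2 "Jones")
(funcall *rows*)   ; => NIL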

There’s a lot more we could delve into on this topic, but the point is that with even just a smidgen of functional programming techniques, we could have made our code much more concise. We would have had additional means of separating out redundant code, and it would have been much easier to “rewire” the code to change its behavior.

Now, I mentioned briefly already that in functional programming we’ll strive to make our procedures able to operate on a wide range of data structures. It turns out that the key to making your functions really powerful in a wide range of situations is to use recursion. In your Imperative/OOP world, you may have only used recursion in a few rare cases– like searching through directories or setting up a tree view. Well, in functional programming, the default approach to just about everything is to use recursion. Your data structures are often defined recursively, so your functions that operate on them are consequently recursive as well. Recursion becomes your meat and drink and even the air that you breathe… and you become intimate with every variety of it.

And just as closures are always introduced with a lame “counter” example, so recursion is introduced with Fibonacci numbers. This is a really good bad example, because it shows not only how recursion can be the easiest way to express something, but also that recursion can spin dangerously out of control if it is mishandled.

Anyways, as you study recursion, it often takes a lot of trial and error to get it right. You test your function with several arguments to make sure it’s coming out right. You add in code to print stuff out so that you can get a feel for what’s going on. You take out the test code when you think you’re done and you put it back in when you find a new problem you need to fix. There has to be a better way. Wouldn’t it be great if we could write code to help us write code? Sounds like a job for a macro!

What we’ve got for show and tell in today’s code example is a tool called memoization. When you memoize a procedure, you create a closure that contains the procedure and a hash table to go with it. Each time you call it, the closure checks to see if the argument is already in the hash table. If it is, it returns what’s in the hash table instead of running the potentially time consuming code to determine the value in the usual way. If the argument is not in the hash table, the closure finds the value using your procedure and then stores it in the hash table. In the case of the typical Fibonacci function, this results in a dramatic performance increase.
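
Stripped of the diagnostics and the macro plumbing described below, the core idea fits in a few lines. This is a generic sketch with an invented name (memoize-1), not the testing-memoize code from the example file:

(defun memoize-1 (fn)
  "Return a closure that caches FN's results for single-argument calls."
  (let ((cache (make-hash-table :test #'eql)))
    (lambda (arg)
      (multiple-value-bind (value found) (gethash arg cache)
        (if found
            value
            (setf (gethash arg cache) (funcall fn arg)))))))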

Looking at the code, we have two defuns, a defmacro, and a macro call. The testing-memoize function takes a function as an argument and returns a closure. The print statements we use to show us what our recursive function is doing are here. We also added a “back door” to allow us access to the hash table that is attached to the function. The other function, tree-substitutes, is a recursive function that searches a hierarchical list for a specific symbol and splices in a list of things in its place. The macro takes the same arguments as a defun, but sets up a parameter to hold the closure, defines a slightly modified version of the function using the tree-substitutes function, stores the closure in the parameter, and then redefines the function to call the closure instead. Finally, we have an example of how to apply our memoization macro with our ubiquitous Fibonacci procedure.

If we like the behavior of our recursive function, we can turn off the memoization by changing “memoize” to “defun” in our procedure definition. If we like the performance gains and want to keep them, we can create our closures with a different memoization technique that doesn’t print out diagnostic text or expose a back door.

So, we can see that closures give us a means of abstracting away any extraneous or temporary behavior in our procedures. These techniques don’t replace subroutines and libraries, but they do provide an additional means of eliminating redundant code. We also see that macros can be used to set up all kinds of behind-the-scenes code by giving us a means of establishing additional ways of defining functions. We could potentially set up “testing” and “production” variants of our definition macros to radically change the behavior of our application by switching between which package we reference.

Hopefully you now have a better idea of what you can gain from studying functional programming, closures and macros. We’ve only scratched the surface, of course. If you’d like to know more, I highly recommend the classic Structure and Interpretation of Computer Programs. If you’d rather learn from a cool Perl hacker (instead of a “smug Lisp weenie”) then you might try Mark Jason Dominus’s Higher Order Perl instead.

(By the way, if you take a peek at the hash table after running the fibonacci code, it makes a pretty picture for you at the REPL prompt.)

Lisp Isn’t Really a Programming Language

October 24, 2007

In Lisp, DSL’s are free. In fact, practically any program written in Lisp could be considered to be a DSL. First-class procedures are a big part of this. Any procedure you write becomes a part of the programming language– your procedures are indistinguishable from the built-in ones. You don’t write Lisp code so much as you extend the Lisp language until it becomes a custom language for whatever it is you’re trying to do. It’s a subtle difference, but a significant one.

Car, cons, and cdr can be used to define almost any data structure. You create an interface for your data structure with procedures– just like the constructors you write for objects in that OOP language you use at work. Recursive functions using car, cons, and cdr can perform just about any manipulation or transformation with your custom defined data structures.
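
For example (a throwaway sketch, not anybody’s library):

;; An "interval" abstraction built from a single cons cell.
(defun make-interval (low high) (cons low high))
(defun interval-low (i) (car i))
(defun interval-high (i) (cdr i))

(defun interval-width (i)
  (- (interval-high i) (interval-low i)))

(interval-width (make-interval 3 10))   ; => 7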

A dispatch table is a list of names and their corresponding function objects. If you were to create one for another programming language, you could read a text file and instantiate variables and functions from it. You could set up functions that would allow you to load other text files. With a dispatch table of function names (and their arguments) that’s read into your program sequentially, you get an imperative programming customization language for free that allows your users to customize the program environment as much as they like. Languages that have first class functions make implementing dispatch tables trivial.
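
A dispatch table can be as humble as an association list of names and function objects (a hypothetical sketch, names invented):

(defparameter *commands*
  (list (cons 'greet (lambda (name) (format t "Hello, ~a!~%" name)))
        (cons 'add   #'+)))

(defun dispatch (table name &rest args)
  (apply (cdr (assoc name table)) args))

(dispatch *commands* 'greet "world")   ; prints "Hello, world!"
(dispatch *commands* 'add 1 2 3)       ; => 6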

And Lisp of course has a dispatch table that’s wide open for you to play with however you like. That’s where all the functions you write live… and their names are symbols that allow you to look them up and reference them. And because Lisp allows you to mess with its dispatch table, you can redefine it even while it’s running. I mean you can actually redefine the language itself as your application is running. It is this feature that allows people to debug programs running on satellites travelling to distant planets… and also to debug programs before a client can hang up on the phone to report it to you.

Now there’s an interesting thing about those data structures you make with car, cons, and cdr. They’re an abstract syntax tree. In most other languages, if someone wants to do something cool, they convert what they’re working with to an abstract syntax tree and then go to town with it. If you want to convert something from infix notation to something more Lisp-like, it’s trivial once you can get that original notation into an abstract syntax tree. But the thing is, if you’re using Lisp, you’re using lists. And Lisp’s lists are essentially abstract syntax trees by definition. So in Lisp, a lot of hard things are easy… by default. Finally, your Lisp code is itself written in terms of list data structures. This means it’s easy to write code to execute transformations of your code. This is what macros do and this is why other languages can’t do this.
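
You can see this right at the REPL (a trivial illustration; the unless* macro is a toy I’m inventing here, not part of the language):

(defparameter *expr* '(+ 1 (* 2 3)))  ; a piece of "code" that is just a list
(first *expr*)                        ; => +
(eval *expr*)                         ; => 7

;; A macro is a function from code-as-lists to code-as-lists:
(defmacro unless* (test &body body)   ; toy re-creation of UNLESS
  `(if ,test nil (progn ,@body)))
(macroexpand-1 '(unless* done (print "still working")))
;; => (IF DONE NIL (PROGN (PRINT "still working")))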

Now, you may not like the conventions of the Lisp language. But the thing is, the Reader itself can be redefined. It’s just as wide open as the dispatch table is that you store all of your functions in. This means you can program the reader to transform just about any programming language text into actual Lisp code… and from there do anything else you can imagine.
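
For instance, here’s the classic two-liner that teaches the reader a new piece of syntax– square brackets that read as ordinary lists. This is an illustration only; a reader for a whole alternate language is a much bigger job:

(set-macro-character #\] (get-macro-character #\)))
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    (read-delimited-list #\] stream t)))

;; After evaluating the above, the reader turns [1 2 3] into the list (1 2 3).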

So Lisp isn’t really a programming language. It’s a consistent system for defining programming languages in general. That is why learning it is difficult– you’re going up into a level of abstraction that you never imagined could exist. But if you can master it… you can code in the environment that all the other languages “hack” into when they need to get real work done. So in some sense, Lisp is the Platonic ideal that all other languages fall short of. It is the truth that most programmers glimpse only in brief moments of lucidity. All the other languages are shadows cast on the wall of the cave… but Lisp is what lies behind the veneer of what we think of as “reality” in programming.

Just Put All Monopoly Rules in the .monopoly File and Be Done With It!

October 23, 2007

(Raganwald posted a while back that implementing Monopoly was a great interview question.  I wrote this before reading his blog post– and before reading too closely about this guy’s approach in Ruby.  This problem is very similar to one that has stumped me for years… and it appears to me that Lisp and Scheme provide a very “obvious” means of separating “rules” from code while allowing for a tremendous amount of configurability.  Should this answer get me invited back for a second round, or…?  What do you think?)

Let’s start by looking at the core iterators that run the game and define state. There is an (effectively) infinite list each of die rolls, Chance cards, Community Chest cards, players and board positions– and iterators are a great way to model the wrap-around effect of the board structure. A typical sequence of play will be to increment the player iterator, then increment that player’s board position a number of times equal to the result of incrementing the die roll iterator. Technically, the very concept of the sequence of play as we described it is a rule, but for now we’re going to take it as a given: we’re going to focus on implementing just the set of all “Monopoly-like” games.
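
One way to get that wrap-around behavior is a closure over a circular list– something like this sketch (the names and details are invented, not a spec):

(defun make-ring-iterator (items)
  "Return a closure that cycles through ITEMS forever."
  (let ((ring (copy-list items)))
    (setf (cdr (last ring)) ring)   ; tie the list into a circle
    (lambda ()
      (prog1 (car ring)
        (setf ring (cdr ring))))))

;; A canned "die" for testing; the board and player iterators could be built the same way.
(defparameter *die* (make-ring-iterator '(3 6 1 5 2 4)))
(funcall *die*)   ; => 3
(funcall *die*)   ; => 6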

To define the shape of the board, we need a means of defining variables or objects. Each one will have a Name and an arbitrary number of properties stored in a hash table. Each one will have to reference the name of the next square on the board so that the board Iterator will be able to function.
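
Concretely, each square might be little more than this (a hypothetical sketch):

(defun make-square (name next &rest properties)
  "A board square: its name, the name of the next square, and a property hash."
  (let ((props (make-hash-table)))
    (loop for (key value) on properties by #'cddr
          do (setf (gethash key props) value))
    (list name next props)))

(make-square 'boardwalk 'go :price 400 :color 'dark-blue)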

We’ll need a constructor for the player iterator that takes a list of player names as an argument.

We’re going to need a domain specific language to handle instructions on the cards and some of the board positions. The language has to be able to handle moving money to a player’s account, moving them around the board, and triggering the draw of a card. We’re going to need a means of defining new functions for the language such as “Go To Jail.” It has to be possible to implement any possible change of state in the game through the language.
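
The card language could bottom out in a table of named actions, so that a card itself is nothing but data. Again, a hypothetical sketch– defaction, run-instruction, and the player-as-hash-table representation are all inventions for illustration:

(defparameter *card-actions* (make-hash-table))

(defmacro defaction (name args &body body)
  `(setf (gethash ',name *card-actions*) (lambda ,args ,@body)))

(defaction collect (player amount)             ; e.g. "Bank pays you dividend of $50"
  (incf (gethash 'money player) amount))       ; assumes a player is a hash table

(defun run-instruction (instruction player)
  (apply (gethash (first instruction) *card-actions*)
         player
         (rest instruction)))

;; A Community Chest card is then just the list (collect 50).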

So far we can configure our game with a text file that defines the squares of the board, the cards in the game, and the instructions that go with them. What we need next is a list of options for the player to choose from based on a given state of the board. Each option will have an associated script written in the same DSL that we used for the cards.

In order to allow for additional customization, we can create a set of scripts for various events that occur during our sequence of play. So our final configuration text file will include board, card, and die roll definitions… some general function definitions, some definitions for each player option that determines when each one is available and what its action is, and also some event override code. All of the rules of the game– with the exception of some fundamental sequence of play rules– are separated out into the configuration file. If, for example, someone wanted to add the atrocious game-destroying house rule of placing money in the pot for when people land on free parking, then the user would modify the award-money-from-card function and alter the trigger script for the Free Parking square.

If we allow the users to redefine the iterator definitions and the core sequence of play, then they could create even more radical customizations. At the very least, developers will want to be able to override the die roll iterator so that they could set up their unit tests to work against a set series of rolls.

Call it EMOPS or something like that….

Two Minutes Hate: “Why Functional Programming is a Pointless Dead End”

October 16, 2007

Okay, folks. Don’t forget what the Judges said: “So programmers should worry less about languages and more about good old complexity.” You can take from that the lesson that, for hard problems, raw analytical power and mathematical ability is more important than “coding skills.” But what the judges were talking about was the fact that while C programmers could get away with a slightly more “brute force” approach, a good data structure implemented in Haskell would have been fast enough to do the job as well. The issue was not so much one of language choice, but raw problem solving.

But I really don’t understand the hate that keeps emerging in this discussion. “After three years of functional programming I’m nearly out of that dark cave I’ve been living in, and back into this real world…. At least for me, functional programming seems to have been an infatuation – well worth the time spent on it but ultimately not useful.” Why the religious tone? He might say the same thing for all of those years he spent dabbling in Zen or Christianity.

I keep seeing similar “anti-testimonies” cropping up. Stuff like, “Oh I studied that in university.” Oh yeah… like going to university can make anyone an expert on programming. Others claim to have “tried” Haskell or have “looked into” functional programming. This supposedly gives them the authority to dismiss functional programming languages altogether, and maybe accuse the egg-heads of being lazy… in the “wrong” way.

Admit it, guys, this is pretty shallow. I think it’s perfectly okay to pick a random church in your area, show up randomly, and decide that maybe Christianity is not for you. Just like it’s okay to look at a few random news items from the last few years and conclude that Islam is a violent religion. Just like you can shave your head and eat vegetarian food as a Hare Krishna for a few months before getting bored with it. But while there’s not much chance of us ever (collectively) coming up with an agreed upon standard for judging the merits of the major world religions, I think we do have a means at our disposal for judging the merits of your opinion of functional programming: it’s called blogging.

Pick an off-beat language. Start studying. Attempt hard things. Write up your solution in a “show and tell” fashion. Try to become a better programmer. Then give us your opinion on whether or not you’d like to use Haskell or Lisp or whatever instead of your usual language. At least then we can have an objective way of judging your skill and depth of experience.

I don’t care about your “testimony”. It doesn’t add one whit to your credibility whatsoever. Programming is not a religion; show me your code! Tell me what’s hard for you and how you’re overcoming your limitations. If you’ve learned something cool, put it in a format where even I can understand it. Chances are that the way things are going, you’ll be able to apply the bulk of what you learn in your non-functional language anyway. But please… support your opinion with real code examples.

Echo Chamber: Backstabbing Lisp Hackers and the War to End All Language Wars

October 15, 2007

Interesting stuff from an article I found on raganwald’s feed:

You might know that James Gosling (one of the creators of Java) once wrote a version of Emacs. But did you know that his version did not have a real Lisp implementation running under the hood?

You might know that MIT had a really cool AI lab that Richard Stallman worked in. But did you know that the fallout between two competing Lisp machine companies put an end to the primordial hacker culture that once thrived there? And did you know that Stallman spent two years implementing a free version of every single feature that Symbolics came out with for their Lisp machine in order to punish them for stabbing his fellow hackers (at Lisp Machines Incorporated) in the back?

You might know that TCL is a language used to write GUI code. But did you know that Sun once tried to make it the “de-facto standard extension language” of the world?

You might know that Guile is the GNU implementation of Scheme. But did you know that Stallman intended Guile to become the standard extensibility package for all GNU programs?

You might know that Lisp dialects can be used to write embedded languages. But did you know that such a capability could be marshalled to end the language war in many applications? Stallman: “But when you have a powerful language that can implement others by translating into it, then you give the user a choice of language and we don’t have to have a language war anymore. That’s what we’re hoping ‘Guile’, our scheme interpreter, will do.”

How Studying SICP Made Me a Better Programmer

October 13, 2007

While working through the problems of SICP section 2-2 this week, I had a flash of insight. Things that had frustrated and stymied me for years were suddenly becoming trivial. One data structures problem that had particularly bugged me had become pretty easy, too– though I had to apply everything I’d learned in SICP about recursion, first class functions, and the closure property to do it. Here’s a quick tour of my proof of concept from the REPL prompt:

CL-USER> (setf c (make-hash-table))
#S(HASH-TABLE EQL)
CL-USER> (create-widget c 'division-a 'mega-corp '(factory-1 factory-2 factory-3))
(DIVISION-A MEGA-CORP (FACTORY-3 FACTORY-2 FACTORY-1) #S(HASH-TABLE EQL))
T

CL-USER> (create-widget c 'division-b 'mega-corp '(dist-1 dist-2 dist-3 main-office))
(DIVISION-B MEGA-CORP (MAIN-OFFICE DIST-3 DIST-2 DIST-1) #S(HASH-TABLE EQL))
T

We’re starting out by defining a hash-table to store all of our work in. I guess I could have stored everything in a global variable so that I wouldn’t have to keep referencing the store all of the time, but I figured it was worth it not to have all of my functions hard coded to a particular variable. With the above code we assigned division-a and division-b to a company we like to call Mega-Corp. We created some factories, distribution centers, and a main office while we were at it.

CL-USER> (gethash 'mega-corp c)
(MEGA-CORP NIL (DIVISION-B DIVISION-A) #S(HASH-TABLE EQL))
T

CL-USER> (gethash 'division-a c)
(DIVISION-A MEGA-CORP (FACTORY-3 FACTORY-2 FACTORY-1) #S(HASH-TABLE EQL))
T

CL-USER> (gethash 'factory-1 c)
(FACTORY-1 DIVISION-A NIL #S(HASH-TABLE EQL))
T

Here we take a look at what we’re actually storing in our hash table. Each item in the hash is a list in the format of (WidgetName Parent-Widget Children-Widgets-List Properties-Hash-Table). You can see above that Mega-Corp has children, but no parent. Division-a has both a parent and children. And Factory-1 has a parent but no children.

CL-USER> (create-widget c 'office-staff-f1 'factory-1 '(bob fred joe))
(OFFICE-STAFF-F1 FACTORY-1 (JOE FRED BOB) #S(HASH-TABLE EQL))
T

CL-USER> (create-widget c 'office-staff-d1 'dist-1 '(james mike sally bert))
(OFFICE-STAFF-D1 DIST-1 (BERT SALLY MIKE JAMES) #S(HASH-TABLE EQL))
T

CL-USER> (gethash 'office-staff-f1 c)
(OFFICE-STAFF-F1 FACTORY-1 (JOE FRED BOB) #S(HASH-TABLE EQL))
T

CL-USER> (gethash 'bert c)
(BERT OFFICE-STAFF-D1 NIL #S(HASH-TABLE EQL))
T

Here we extend our data structure even further by adding some office staff groups. We can extend the structure as much as we want– and in principle, there’s nothing preventing us from writing procedures to copy and combine such data structures however we choose. That wide-open combinatorial possibility is what the closure property is all about.

CL-USER> (setprops c '(bob fred joe james mike sally bert) 'salary '(40100 55050 50010 35025 33099 75042 20010))
(40100 55050 50010 35025 33099 75042 20010)
CL-USER> (setprops c '(factory-1 factory-2 factory-3 dist-1 dist-2 dist-3 main-office) 'assets '(800000 950000 725000 225000 235000 230000 125000))
(800000 950000 725000 225000 235000 230000 125000)
CL-USER> (getprop c 'factory-2 'assets)
950000
T

CL-USER> (getprop c 'sally 'salary)
75042
T

These functions use mapcar to set a slew of property values all at once– without having lots of repetitive code. The getprop procedure allows us to read those settings back out again.
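
For the curious, a setprops/getprop pair along these lines could be as simple as the following sketch. It follows the (Name Parent Children Properties-Hash) format described above, but it is not necessarily what’s in the linked code file:

(defun setprops (store names property values)
  "Set PROPERTY on each widget named in NAMES to the corresponding value in VALUES."
  (mapcar (lambda (name value)
            (setf (gethash property (fourth (gethash name store))) value))
          names
          values))

(defun getprop (store name property)
  "Look up PROPERTY in the named widget's own property hash."
  (gethash property (fourth (gethash name store))))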

CL-USER> (getval #'+ 0 c 'sally 'salary)
75042
CL-USER> (getval #'+ 0 c 'office-staff-d1 'salary)
163176
CL-USER> (getval #'+ 0 c 'office-staff-f1 'salary)
145160
CL-USER> (getval #'+ 0 c 'mega-corp 'salary)
308336
CL-USER> (getval #'+ 0 c 'mega-corp 'assets)
3290000
CL-USER> (getval #'+ 0 c 'division-a 'assets)
2475000
CL-USER> (getval #'+ 0 c 'division-b 'assets)
815000

Here we’re using a recursive procedure to apply the plus function to all of the property values from a certain point in the hierarchy down. The zero in the s-expressions there is an identity value– just like the ones we coded in SICP section 1-3. I had to implement that just in case there were all NIL’s in my hash table for a certain property. With a careful use of property names, you can get some benefits of stratified design (covered in SICP section 2-2) in this kind of data structure. We can code in all kinds of information about the separate divisions at one level… and way down the hierarchy we can have all kinds of other information stored about the employees. While working at either end, we can effectively ignore all of the other levels. At the same time we can “roll up” related values however we like at any level we care to focus on.
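
A recursive getval in that spirit might look something like this– a sketch consistent with the session above (same argument order: function, identity, store, name, property), but not necessarily the code in the linked file:

(defun getval (fn identity store name property)
  "Combine PROPERTY with FN over NAME and everything beneath it in the hierarchy."
  (multiple-value-bind (widget found) (gethash name store)
    (if (not found)
        identity
        (let ((own (or (gethash property (fourth widget)) identity)))
          (reduce fn
                  (mapcar (lambda (child)
                            (getval fn identity store child property))
                          (third widget))
                  :initial-value own)))))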

(There’s probably a few other things I could have done to make this better, faster, or slicker. If you’re working through SICP, too, please check out this complete code file and let me know if you have any tips or suggestions.  I used Common Lisp instead of Scheme, though. I really needed to stop a while and apply what I was learning in SICP to problems that I’d developed a personal stake in. Most people tend to abandon the book by section 2-2, so I better get back to work if I want to finish it!)

Back in the day, I would have avoided a recursive data structure like this if I was working on a similar task. I would have (horrors!) hard-coded the hierarchy at maybe 3 or 4 levels deep, coded all my tables as separate entities, and more than likely built in some work-arounds for the places where such choices were inadequate. (Thinking that OOP was the “right way,” I would have coded a class file for companies, divisions, offices, and employees– and then been stuck storing and retrieving data in as many separate RDBMS tables and figuring out how to map them back and forth somehow.  Writing nested hash tables to a file, on the other hand, is trivial in Common Lisp.) The end result would have been a maintenance nightmare…. It’s a good thing I was too lazy to finish any of those pet projects back then! Funny, I could see clearly the problems of coding 50 slightly different forms when one generic form could be rigged up to do the same thing, but I couldn’t break through my mental limitations to think three-dimensionally about data structures. Thanks to many painful hours spent working through SICP, I can now synthesize recursion, first class functions, and the closure property to make more elegant solutions to things that used to be next-to-impossible for me.

Update 10/26/07: I did manage to follow this post up here.

My First Emacs Hack: A Jumpstart Guide

October 11, 2007

While it’s nice to have a function key to bring up an IDE so that we can automate our applications, we see that Emacs goes even further by allowing code to be edited and run from any window. There is no special, separate application that is used for extending or modifying the program. This is strange and unusual– and very fun. This short tutorial will show how to use one of Emacs’ most versatile commands, how to retrieve the code for executing complex commands, how to put them into function definitions, and how to attach them to hot-keys. (This more comprehensive introduction is very good if you need extra information.)

A basic out-of-the-box feature of Emacs is the C-M-% query-replace-regexp command. You can use this to search and replace patterns of text. I try to explain this to people and it doesn’t quite register: they think, “oh yeah, I’ve got find/replace built into my IDE, what are you talking about?!” (Evidently, you can get a Master’s degree in Computer Science and not know what a regular expression is….) I’m talking about searching and replacing not just specific strings, but patterns as well.

Suppose you have inherited a project that has lots of data hard coded into visual basic files. You might have something that looks like this:

strOptions = "OptionOne, OptionTwo, OptionThree,"
strOptions = strOptions & "OptionFour," ' Don't forget this one!
strOptions = strOptions & "OptionFive, OptionSix"

How can we get rid of all the code and get ourselves a nice simple list of options?

What we first want to do is search for lines that have a series of upper and lowercase letters, equal signs, and/or ampersands… followed by upper and lower case letters and commas inside a pair of double quotes… followed, perhaps, by a code comment. We could search for that specific pattern of text and then replace it with just the stuff that’s inside the double quotes.

To do that, just use C-M-% to search for this:

[A-Za-z=& ]*"\([A-Za-z ,]*\)"[' A-Za-z!.,()@#$%^&*]*

And replace it with this:

\1

The [A-Za-z=& ]* part represents the first part of the pattern. The double quotes represent (surprise!) double quotes. The [A-Za-z ,]* represents the stuff inside the double quotes. The \( and \) markers are used to tag regions of text so that we can refer to them later. The [' A-Za-z!.,()@#$%^&*]* represents the comment. The \1 is a reference back to the text that was between the double quotes.

After we apply query-replace-regexp with these two arguments, we should end up with this:

OptionOne, OptionTwo, OptionThree,
OptionFour,
OptionFive, OptionSix

(Note that query-replace-regexp prompts you match by match: press the space bar to approve a single replacement, or press the “!” key to replace every remaining match after the cursor. The search begins wherever the cursor is….)

To get rid of the commas and get a nice simple list of options, we’ll need to know a little Emacs trick. To enter a carriage return as part of an argument to a command like query-replace-regexp, you need to hit C-q C-j. (I’ll use a ~ character to denote that key chord.) So to remove the commas and line up our options, we search for:

,[ ~]*

And replace it with “nothing.” (Just hit enter when prompted for what to replace it with.) Running the command should produce the following:

OptionOne
OptionTwo
OptionThree
OptionFour
OptionFive
OptionSix

Now… you may want to reuse these sorts of commands that have complex arguments… or you may want to be able to save and modify them some more when you find out they don’t work quite like you want them to. To see these commands as Emacs sees them, hit C-x esc esc. This will display in the mini-buffer the last complex command you’ve entered. You can then hit enter to rerun it or cut and paste it to somewhere else.

The skeleton below shows a basic outline for an elisp function. You can paste a complex command “stolen” with C-x esc esc into the place marked <<<PUT YOUR CODE HERE>>>:

(defun your-function (n)
  "Put your function's documentation here"
  (interactive "p") ; n = value passed by the C-u "universal argument"
  <<<PUT YOUR CODE HERE>>>)

The “interactive p” part means that the command can be run from the mini-buffer by typing M-x, entering the name of the function, and then hitting “enter”. For that to work, though, you must first get the function into the environment. You can do this by moving the cursor just to the right of the very last parenthesis in your function definition. From there hit C-x C-e. The name of your function should appear in the mini-buffer when you do this. In fact, you can evaluate any lisp expression that way. Type “(+ 30 12)” and hit C-x C-e after the final parenthesis and you should see “42” appear in the mini-buffer!

To wire in a hot-key, evaluate a form like this:

(global-set-key [f8] 'your-function)

You can collect your code into a text file marked “.el”. Files with that extension will be highlighted properly when you load them into the editor. To execute all of the code in a file, type M-x load-file and then enter the path and/or filename of the code you want running in your environment. Here’s an example file with the code from our tutorial set to run with the function keys f3 and f4.

With regular expressions and the above commands, you can automate just about any complicated text manipulation. In some cases, it can take a while to figure out a good regex to solve a problem, but that’s generally more fun than editing loads of text files by hand.

Enjoy!

Kicking the ctrl-x/ctrl-c/ctrl-v habit with Emacs

October 11, 2007

There’s plenty you can do with Emacs… and lots of documentation floating around on the web. This is just a quick tutorial on some basic Emacs techniques that demonstrate how it can be a pleasure to use. I’m mainly writing this for myself, but maybe someone getting started will find this helpful if they’re just getting past the initial learning curve and are daunted by the prospect of having to “research” things that were trivial with their previous text editors.

Two of my favorite features in Emacs are find-file (C-x C-f) and iswitchb-buffer (C-x b). Emacs doesn’t bother me with a file dialog box that takes forever to appear. How many times in a Windows program have I hit “Open” or “F1” by accident and then had to wait while my computer stalls? The Emacs autocompleting feature makes it much faster to home in on a file I’m looking for and I never have to take my hands off the keyboard. And as far as buffer switching goes, Windows can be annoying no matter how it is set up. If you’ve got tabs on the screen, then you’re wasting on-screen “real estate.” If you are switching buffers using “control – F6” then you’re stuck cycling through the windows in a certain order. Emacs understands that I might want to work with a slew of buffers at once– but also that I tend to switch back and forth between a pair of buffers quite a bit. Accordingly, the buffer switcher lists the names of my open files and autocompletes the file name if I start typing. And the buffers are all listed in order of how recently I’ve touched them, so the default buffer is usually the one I need. Anyways, the point is that you can generally trust the designers of Emacs to do things in a really slick way. A lot of things will seem unnatural at first, but the more you use it, the more you appreciate it.

So fire up Emacs and type “one”, hit return, type “two”, hit return, type “three”, and so on until you’ve typed the words for the numbers one to ten each on their own line. Move the cursor so that it rests on the “o” in the word “one”. You can do this by hitting C-p ten times to move line by line. Or you can type C-u 10 C-p if you want to be fancy. You could even try M-{ C-n. (In Emacs, you should avoid using arrow keys and page up/down.)

Once you get to the “o” in “one” type C-space to set the mark. Type C-n to move down a line and then hit C-w to kill the text. Then alternate between the commands C-n and C-w until you’ve killed all ten of your lines. Hitting C-y will “yank” the most recent bit of text from the kill-ring.

If you type C-y a few times, you should get several lines of text that say “ten”. Try M-y and the last “ten” that you yanked will become a “nine”. Hit M-y a few more times and you’ll keep moving back through the kill-ring. Generally, your position in the kill ring just moves backwards as you keep hitting M-y. To jump several kills back, hit C-u and then a number. Enter zero to move one notch forward in the kill ring… and enter a negative number to move several notches forward.

Type C-y to “yank” some text again… and then hit M-y to cycle back until your yanked text becomes a “one”. Type C-y to paste several lines that say “one”. Then type C-u 0 C-y. (That middle key there is a zero.)  This should make a new line that says “two”. Type C-u -2 M-y to change the “two” to a “four”. (The -2 there is negative two.) If you ever jump forward too far, just hit M-y to cycle back through the kill ring until you get where you want.

So you can move around fairly easily using next-line and previous-line commands. (Plus, knowing about using C-u to supply arguments to commands will help you with other tasks.) The curly braces commands to do next-paragraph and previous-paragraph are pretty useful, too. Use them to get into position and C-space to set your mark. Use the movement keys to go to the other end-point of your text section and type C-w to “cut” what you need. (You can use M-w to copy text without “cutting” it.)  Finally, you can use C-y to “paste”, and then M-y (in conjunction with C-u) to navigate a list of things you’ve previously “cut.”

Now that you’ve mastered the basic use of the kill-ring, you’ll be able to shift your text around however you need. Combine these techniques with the already good file selector and buffer switcher and you have a really powerful tool for navigating and manipulating text files. It takes practice, but if you keep at it the mouse will start looking more like an impediment than a shortcut.

We are in an echo chamber…

October 4, 2007

 He’s right.  This does seem to be something of an echo chamber…

Echo Chamber