How to Deploy Your Skunkworks Application– and Take Over Your IT Department’s Software Development in the Process

The first thing you have to realize is that everyone else in the business is scared to death of software development. If you allow your development to be driven entirely by fear, then you’ve got a one-way ticket to maintenance hell. But how do you get control of the development process if your authority and influence are limited? The direct approach will not work. Your arguments for investing in skills, design, and refactoring will fall on deaf ears. Your only hope is to combine an array of guerrilla tactics with an understanding of the terms in which those in control view reality.

There are only three units of time that register in the consciousness of project managers: five minutes, two days, and two weeks. Most programmers have watched these little “five minute” changes explode into hours of pain, so they never let on that anything can actually be done that fast. In practice, then, the smallest unit of development time a project manager is ever going to hear is in the two-day range– no one is going to admit that anything faster is possible, and (in the tradition of Mr. Scott) everyone will pad their estimates by at least a factor of four.

“Two weeks” is a code phrase signifying that the people involved really have no idea how difficult the problem is or how long it should take. If there’s ever something really hard that has to be done, a manager will assume that two weeks is plenty of time to accomplish it. Anything that *you* want to do that will take two weeks or more to accomplish is going to be rejected immediately. The trick is never to ask to take on anything that ambitious in scope, but rather to focus your efforts over time on gradually increasing what’s possible in the “five minute” range. Once you have a framework in place, the general demands of the users will give you enough leverage to take on more sophisticated tasks that can take up to two days or so to implement. That’s the general idea, anyway. Below are some specific techniques and maxims you can use to work your way up and take control of the development process:

* While no manager will listen to your crazy ideas, they generally don’t care how you accomplish your tasks. This is the leeway you’ll have to leverage to turn things around in your IT department– at the start it’s your only avenue for improvement.

* If something actually does turn out to take the “five minutes” that you thought it would, use the extra time that you’ve scored to improve the elegance of the code or to learn a better way to accomplish the same task. (How this impacts the project is irrelevant: you’re doing this to invest in *yourself*.)

* When other developers are wasting time playing solitaire, trading stocks, or surfing the web, invest your time in developing tools that help you manage your tasks, problems, code, and data.

* How do you know what tools you should build? Look at where the friction is. Anything that’s tedious, irritating, and boring is something you should look at automating or tooling up for.

* A quick-and-dirty tool can help you get a job done in about the same time as the other fellow who does it the hard way. Tasks that you have to get done weeks or months down the road will give you an excuse to improve your tool(s).

* The thing that’s going to give you an edge is the fact that you’ll have a whole library of tricks, a framework that integrates them, and a “sense” for which fancy-schmancy techniques are worth learning and applying on the fly.

* At some point, someone’s going to come in with a task that you’ve pretty much already solved. This is your chance! Basically, you can throw together a simple GUI, code a couple of routines, and suddenly you have a pretty sophisticated solution that’s much more robust than what the other guys would have thrashed out as a hand-crafted one-off. The trick is that your GUI is just a thin veneer on top of the tools you’ve been using for yourself all along! (A rough sketch of what this looks like follows this list.)

* Your solution is not something you could have proposed at the very beginning. You just didn’t know enough back then. So don’t fault your manager for not listening to you back at the start!

* People’s ears will perk up when you say, “I’ve actually got something pretty close to that built already.” The fact that you’ve been the primary user for your stuff for a long while will mean you’ve got a lot of confidence in it. And nobody’s going to take a chance on “vaporware” or “castles in the sky.” “This is practically done,” you say, “I’ve just got to make a couple of quick modifications and we’re there.”

* The fact that you can solve an entire class of problems in five minutes that would take your peers two days is what will get people’s attention. Nobody cares what you can do in two weeks’ time. They don’t even care what you can do in two days’ time. Five minutes is all anybody’s going to care about at this point.

* Once your solution is deployed, there’s a whole world of forces that your pride and joy will be subjected to. The users are going to be a great resource for new ideas, though. Go watch them work and keep talking to them until they can give you some real constructive feedback. After one minute they’ll assure you that everything’s fine. After five they’ll mention a couple of things that could be better. After ten or so, they’ll be telling you what you really want to know: “Now I don’t know anything about computers, but what I really need is….”

* The worst thing about in-house development is that applications only have to be “good enough.” Once they get that far along, the creativity stops and everything freezes into maintenance mode. You can identify a complete lack of design in an IT department when you see a proliferation of forms, screens, procedures, and programs. You’ve got a lowest-common-denominator philosophy because nobody’s taking the time to find the common denominator of all of these forms, screens, procedures, and programs.

* If you’re *designing* software, you’re not just adding new screens or modifying existing ones. Real design is going to incorporate subtraction and abstraction as well. A good abstraction is a technique that allows you to synthesize many ideas at once– and once it’s implemented, you’ll be able to subtract away a lot of unnecessary code.

* As your users begin to get the hang of your application, be “johnny-on-the-spot” when it comes to all of their “five minute” type requests. If they feel like you’re listening to them and taking care of them, they’ll be more willing to give you a chance to invest in the stuff that will provide the real payoffs.

* Anything that the users ask for that’s more in the “two hour” range should be put on hold until you have at least a dozen such requests. List them all on a whiteboard and kick ideas around with the one person at the company who’s willing to listen to you. Then wait some more and let these ideas percolate a while.

* Code up proofs of concept for some of the hard stuff and expose them inside your application with little regard for design. Walk some of your power users through special “previews” of these features so that you have a chance to soak up as many good ideas from them as you can.

* Now go back and look at the whiteboard. Erase it and reorganize everything into related “buckets” of functionality. In the bottom half of the board, sketch out the GUI for something that could address the bulk of the requirements. Underline the stuff you can address right away with only a minor reorganization of existing code along with new GUI elements. Separate out the stuff that appears to be hardest.

* There are undoubtedly areas of the code that have known quantities of suckage in them. There are areas that really irritate you every time you have to go in to modify them. Avoid wasting time fixing this sort of thing until it’s had time to fester for a while. When you’re about to go in and fix everything up for your users, be sure to invest some time in cleaning these things up for yourself while you’re making such big changes. If there’s a huge visible leap in the functionality, your users will at least have something they can blame the breakage on. “By the way, when you added this hot feature that I use every day now, this minor thing over here stopped working the way I want it to.” You’ll never have time to work on something just to make it objectively cleaner for “no reason,” though.

* If this takes longer than folks want, don’t sweat it. Remember, in-house software only has to be “good enough.” You’ve already addressed all of the critical stuff that people absolutely have to have to do their jobs. This reorganization of the code combined with a dozen sorta-optional features just isn’t as urgent. It’s more important for you to control the growth of the application in a sustainable way. Listen to them as if they’re the first person to suggest whatever feature it is that they’re asking for, but remind them that as long as the code is “good enough” for them to do their jobs, they’re just going to have to wait. If you have to respond to these demands immediately, make sure that whatever they’re asking for fits into your overall scheme for how the application should be structured. Better yet, use some of the time you spend focusing on their needs to flesh out, extend, or improve areas of the code that you can use to nail down your vision further down the road.

* You’ve gotten this far by investing in tools first and foremost for *yourself*. You’ve consolidated your power by training the users to think in terms of tools they can use to solve their problems– as opposed to a screen with buttons on it that they just go in and push.

* You may not always have managerial support for each new order-of-magnitude increase in your application’s functionality. If you’re in doubt, build it anyway. Do it on your own time if you have to. Roll the application out on a “Beta” basis to a few of your more savvy users. When your managers find out about the application, they’ll more than likely make you stop everything, load it up on their machines, and pester you like crazy for new features.

* Each order-of-magnitude increase in application functionality will take more time to develop than the one before– but each one will put more and more features within the “five minute” range of your grasp.

* The final coup de grâce comes when you’ve developed your application so much that you’ve gone through several refactorings and paradigm shifts. Your design is so clean and your code so expressive… you’ve rolled with anything the users could throw at you and made something truly beautiful. Then someone comes by and mentions that folks in another department have needs very similar to the ones you’ve been coding for. You listen to their requirements for a while, stroke your chin thoughtfully, and say, “I’ve actually got something pretty close to that built already. If I make just a few modifications to the framework I’m using for this other application, I think we’d really only be a day or two away from having something that could work for them.” And so you get the green light for another project, and you are able to justify ever more ambitious features and refactorings because each hour of effort you invest in either can be harnessed to benefit both projects.

* Once you’re this far along, the question will come up as to what the company will do when you “get hit by a bus.” If you get cornered on this, just start talking about how they really have a good point there. Explain how you’ve had to sacrifice code quality in order to address all of those urgent feature requests– and how there are some areas of the code whose workings even you’ve forgotten. Really, the company would benefit greatly if you spent a month or so polishing and cleaning up that stuff. Even better, you should take some time to write articles about the trickier stuff you’ve done in the application. Once published, the articles will generate valuable feedback from more sophisticated developers who simply cannot keep quiet when they see a technical flaw or inefficiency. Those articles can then be folded back into the project documentation for whoever inherits the code. (Cue maniacal laughter….)

* Of course, you and I both know that no matter how great your application is, all it takes is one random IT manager to come in two years down the road and insist that six mediocre developers get hired to rewrite everything in the latest Blub IDE or whatever. More than likely, your application will remain “good enough” for quite some time, and the average developers who come after you will make only surface changes to it. What will probably happen is that the creative integration of various requirements will cease, and the new developers will gradually get bogged down as they create one lame hand-crafted form/screen/procedure after another and begin to spend all of their time on maintenance instead of design. But that’s okay. They only needed to make something that was “good enough” for the moment. By the time the company fails because of their negligence, no one will suspect how much they were responsible for so many folks losing their jobs….
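As promised above, here is a rough sketch of the “thin veneer” idea. Python and tkinter are used purely for illustration, and the invoice “tool” is made up; the point is only that the window does nothing except collect input and hand it off to the little tools you’ve already been living in for months:

    import tkinter as tk
    from tkinter import filedialog, messagebox

    # Stand-in for the library of little tools you've been running from the
    # command line for months. In real life this would live in its own module.
    def reconcile_invoices(path):
        with open(path) as f:
            lines = f.readlines()
        return "%d invoice lines checked in %s" % (len(lines), path)

    def run_tool():
        path = filedialog.askopenfilename(title="Pick an invoice export")
        if not path:
            return
        # The GUI adds nothing but a button; all the smarts stay in the tool.
        messagebox.showinfo("Done", reconcile_invoices(path))

    root = tk.Tk()
    root.title("Invoice Checker")
    tk.Button(root, text="Reconcile invoices...", command=run_tool).pack(padx=40, pady=20)
    root.mainloop()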

15 Responses to “How to Deploy Your Skunkworks Application– and Take Over Your IT Department’s Software Development in the Process”

  1. Ben Says:

    this is the most accurate post i have ever seen. you just described my last six months of development AND my next six months. thanks.

  2. giles bowkett Says:

    I remember discovering this.

    You left out an absolutely crucial step: resolving never to work for that kind of company ever again. You’ll get to it soon, though.

    “Maniacal” is mis-spelled.

    The reason you will at some point resolve to avoid companies of that nature is that you’re swimming upstream; you’re investing your energy in circumventing or defeating the flaws in your company’s process and culture, on the assumption that these flaws are universal. In fact they are universal for a certain type of company. They are unheard of in other types of companies. These other companies are harder to find, but worth finding. In practice, even after you resolve never to work for this bad kind of company again, you probably will, several times, before finding a healthier company you can work for, but please trust me when I say it’s worth it. All that energy you spend outfoxing morons isolates your code in a silo, and when that happens, the major downside isn’t just that you might get hit by a bus; it’s that isolated code is inherently limited to what one person can come up with. This is only useful when that one person is significantly more inventive and resourceful than everyone else around them. This is the norm at many, many companies, but not all, and it is infinitely more valuable for your skills as a programmer to work for programmers who are better than you than it is to work for corporate automatons who are worse.

    I think every programmer should do what you’re describing *if* they’re in an IT department, but I also think that any programmer smart enough to pull this off should avoid companies with IT departments in the first place.

  3. Dan Bernier Says:

    Giles points out that “maniacal” is mis-spelled, ftw!

    I made it partway through this sequence of steps at my last large corporate gig, before quitting.

    While I still work for a company that has an IT department, it has more programmers like this than is typical (in my experience). It becomes kind of a mix between what Lispy & Giles described: we still have to manage the business customers (our manager helps enormously with this), but there are a bunch of us, so we’re not limited to one person’s cleverness.

  4. lispy Says:

    I fixed the maniacal thing. I almost changed it to something about “manacle laughter” but that was too weird, so I left it.

    I’m currently at a conference for a certain large application that I will not name. I think the folks here are mostly high-up accounting managers and IT heavies. It frightens me to talk to so many of them. It’s as if I were speaking through a tube or something…. There’s this gulf between us. It troubles me that there is so little common ground.

    When I wrote this, I thought what I was discovering was just the way things were: this was the only way forward in a world of intractable problems and conflicting points of view…. But after reading Giles here… maybe all of those other IT departments are much much worse and my current circumstances much luckier than I could have appreciated.

    We grow up in school and always have our assignments and to-do lists laid out for us. And when we go to work, there’s this tacit assumption that you can just sit back and wait for some manager to come tell you what to do. But this is a career that forces you to take so much initiative… and yet, I meet so few people who seem “awake” enough to articulate what’s going on in the greater game….

    And these managers that I’m with at this conference… when I talk about some of my ideas and the issues that I see, there’s almost no connection there. I think it’s more than likely that my analysis/crusade would be forced down this “guerrilla” path even more if I were stuck with them.

    But the idea of getting to work with other people who are “illuminated”… that’s so far beyond my capacity to imagine, it’s almost scary to try to think about that. I’d have to defend my ideas to people who are 10x better than me and unafraid of doing “real” development instead of cheap codemunging. I’d be forced to improve even faster… and I couldn’t conceal anything….

    How do you find those shops and how do you get in?

  5. Logan Says:

    “coup de gras” should be “coup de grâce”. Other than that, excellent article!

  6. lispy Says:

    Fixed; thanks, Logan….

  7. Mark Miller Says:

    Liked the post, and it answered one of the questions I had from your previous one on this. I used to do a version of what you describe. When I would fix bugs, I would analyze the code I was working on and see if it could be refactored as well. Sometimes it was too much of a hairball to deal with, so I’d just fix the bug. A lot of times though there wasn’t a lot of code touching it. So I’d refactor, take care of the dependencies, and I’d be done. There were also a few occasions where I’d have some free time on my hands, and I’d refactor some code just for the sake of doing it. My fellow developers didn’t particularly like me doing this, because they felt like I was “changing the rules” on them. Up until recently I’ve always worked in group projects. It was like that old saw about the wife cleaning up the husband’s mess, organizing it nicely, and the husband complaining, “I can’t find anything now!” The other developers tolerated it, probably because they realized I was refactoring, and that it was “a good thing”–like eating your vegetables. It wasn’t something they wanted to do, but thought they should do.

    It sounds to me like you take this up another level, to not just improving code, but improving how you create solutions as well.

    The way you are writing this makes it sound like you are not working as a group member, but just finding solutions to company problems by yourself, and doing it effectively. Am I right?

    Reading Giles’s response was interesting to me, because he kind of had me pegged on this as well. I’ve hoped that there are enlightened places to work, but wondered, “Where the heck are they? How do I find them?” The same question you’re asking. The image I have is that these are places that I’d find tough to get into because they’ll always ask what I call “the genius puzzle-solver” test questions. They won’t ask me to code something. Instead they’ll have me solving tricky math problems, which I find rather infuriating, because I feel like, “Look. How about you ask me what my philosophy of computing is and its relationship to society, or ask me about the philosophy of programming with objects, or ask me to define what programming is, or have me define my own metalanguage for data access, etc.?” I’d even be willing to entertain some lambda calculus problems. You know, something in relation to computing as opposed to testing my puzzle solving skills!

    This is something I realized that I think is wrong with many of the practitioners in our profession: We’re all trying to perfect our puzzle-solving skills. Software development is not just a problem-solving/puzzle-solving exercise. Sure, these are valuable skills to have, but I now think that if puzzle solving is our only skill then what we’ll produce is exactly what you see most of the time: poorly designed hacks.

  8. lispy Says:

    Yeah, Mark… I am the “lone wolf” here.

    After talking to dozens of IT Managers, Developers, and CFO types the past few days, I can see that things are much worse than I thought. There simply isn’t enough common ground between these groups. Even savvy developers that I admire can view things in radically different terms… and we all suffer from the lack of a shared language for discussing architecture and software design.

    The only way forward is for someone to take the initiative and gradually prepare themselves (and their code) for the day when they can open things up with a substantial proof of concept. You have to be addressing real needs at every step, and you have only five minutes to walk anyone through a given use case. The attention spans are too short and the default fear is just too great.

    Even for myself, seeing this and knowing this… there are so many practical business concerns that are impacted by the stream of business data– and no one else is in a position to address them. This is why the CPA that picks up technical skills and/or the tech that picks up business skills can end up becoming the de facto controller of the company. No one else is in the position or has the leverage & connections from which to act decisively. There’s a whole world of action there that I have yet to dedicate myself to…. I’m not even sure I want to go down that path. But if someone carried the same intensity to the data and business processes that I take to the code architecture….

  9. Mark Miller Says:

    @Lispy:

    Perhaps what you are describing is another angle on this trend I’ve heard about for a few years, that what businesses really want is someone who is technically knowledgeable and has some domain knowledge as well: someone with CS knowledge who also understands how the railroad industry works, for example.

    Your description of a lack of a common language, even within groups, is real interesting to me. The first thing that came to mind was “Design Patterns”, written by the GOF. In the introduction I remember they said what they were searching for when they came up with DP was what they have in the field of architecture (as in buildings), called a “pattern language”. Because of the book they wrote there is now a recognized pattern language for certain structures. From what I’ve heard though the particular patterns that were described in the book have become canonized, and some of the message of the book was lost. The point was not “these are the only patterns you will ever need”. It was supposed to be a “conversation starter”. The authors derived those patterns from constructing a word processing application. The fact that these patterns have been found useful in many other types of applications shows that they did some good work in finding a new architecture that improves what we do, but there are many more patterns to be discovered.

    What I’ve thought about from time to time, prompted by a brief conversation I had with Alan Kay by e-mail, is that the very programming languages we use are pattern languages. What most programmers miss is they are in fact using an architecture with every line of code they write. Instead they use these languages as though they’re standardized communication protocols, both for the computer and each other. We know somebody came up with them, but we don’t seem to care how, or even if they’re appropriate to the task. In any case they’re godly smart, we assume, and so we should just use them without question.

    I think a big part of this is the majority of programmers are never asked to learn this stuff. They are presented with frame-based solutions, which can be reliably used for certain classes of problems. As the name implies they also have boundaries, which are comfortable to people who are used to case-based learning.

    Even our modalities in software design are themselves designed around frame-based solutions. Many expert developers are only familiar with the terminologies used for them, those particular pattern languages.

    Really innovating with design means realizing that there’s a world beyond the frame-based solutions, and instead of trying to shoehorn every problem into these frames, you can ask the question, “What are we really trying to do? What are we as an organization really trying to be?” With your training in SICP you can take a modeling approach to real problems. I think that’s what you’re seeing. You can literally model portions of a business, using a pattern language/architecture that’s appropriate to the domain, and express it in a language of your own making. Secondly, you can see a computer as a dynamic simulation environment, and as such, you should be able to a) experiment and dabble in ideas, to try to find the best solution to a problem, and b) see the results of changes quickly and easily. That’s a tall order to most people, but it’s an idea that I find really interesting. I’ve yet to try it out. It’s only something that’s been gelling in my head.

  10. lispy Says:

    Yes… that dynamic simulation environment is very key. Part of what happens in the 5 minute demos is that people always insist on something that nobody else has asked for. Because the front end is just a GUI that’s bound to an environment (one that’s initialized with the DSL), you need ways to alter that environment at runtime. It’s sort of a REPLerization process that I’m about to code up– doesn’t take much, but I want to turn it into something the power users can take advantage of, too.

    My dream is to take the scripts generated by the power users and integrate them into my solution somehow. Basically… I’m bringing the spirit of the EMACS community directly into a crappy database application. I don’t think any of the dozens of people I’ve talked to the past few days would even know what I was talking about if I tried to discuss this with them… and even if they could they would not see the business value of it.

    Maybe it isn’t “worth it”. For me, though… to be able to go to those little 5 minute demos… and to be able to answer their criticisms and issues right when they’re looking over my shoulder…. Now we can go on and deal with the real issues instead of getting stalled. And if that stuff’s what we want in the release, all I do is kick out a script for what we just did and fold it into the scripts that initialize the system. No recompile. Or I could send a test script for the power users to beta for me first if I wanted….
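    (If that sounds too abstract, here is a toy sketch of the mechanism in Python. All of the names are invented, and the real thing hangs off a GUI and a much richer DSL, but the shape is the same: the application is just an environment built up by scripts, and anything you change at runtime can be kicked back out as another script.)

        # Toy sketch: the application is an environment that scripts build up.
        class Environment:
            def __init__(self):
                self.rules = {}          # name -> callable; the app's behavior
                self.session_log = []    # every script applied since startup

            def apply_script(self, script):
                # Run a chunk of "DSL" (plain Python here) against the environment.
                exec(script, {"env": self})
                self.session_log.append(script)

            def define_rule(self, name, fn):
                self.rules[name] = fn

            def export_session(self):
                # Everything changed at runtime, ready to fold into the init scripts.
                return "\n".join(self.session_log)

        env = Environment()

        # Startup scripts initialize the app through the same mechanism...
        env.apply_script("env.define_rule('discount', lambda total: total * 0.98)")

        # ...and during a five minute demo you can change it while it runs.
        env.apply_script("env.define_rule('discount', lambda total: total * 0.95)")

        print(env.rules["discount"](100))   # 95.0
        print(env.export_session())         # paste into the init scripts; no recompile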

    None of the vendors here can do that. One application we use forces us to maintain modifications by hand; it could have had something like what I’m working on, but there’s no vision for this sort of dynamism/programmability.

    I’m so far out of the mainstream… I don’t think any of the hundred people that I’ve talked to the past few days could even listen long enough to get to this. None of these people have even heard of Unit Testing or Regression Frameworks… and many of them do not use source control. There’s a generation or a culture gap. None of them have this attitude of experimentation and exploration built in as a component of every development project they undertake. No immediate perceived business value, I guess.

    So it gets back to how I want to code. I can set up my architecture however I want…. Why not do so in a way that makes everything easier for me and that puts so much more within the 5 minute zone??

    But maybe there is no need for me in this setting. Maybe it’s just overkill. I wish I could be two people so I could handle more of the hard-core business stuff on top of this. But keeping up with all of the applications and support and technologies that are thrust on me is plenty as it is. Yet all of those issues that I contributed to eliminating… that were never on anyone’s to-do list because they weren’t even something that anybody imagined could be addressed with a little software… that *did* have an impact on our business. The pat on the back may not have been there– and we’re back to “what have you done for me lately”– but what I did was significant.

    I think the thing to do is continue to follow my destiny even though no one understands. I’m mentally in shock because there’s a huge need in front of me now that’s many magnitudes greater in scope than the simple problems I’ve just solved. I can address it without transforming into a CFO/CPA type. My work style is very unorthodox, but once I understand something and codify it, I never have to think about it again. (“Good enough” for most people is nowhere close to that.) There’s just a whole new class of users that I have to engage now, and the relationships aren’t there yet.

    But Mark… don’t blow my solution out of proportion. It’s just a start. It emerged out of nowhere as something that grew on top of a very simple application that just needed to print some stuff out. It merged with some of my homegrown tools that I used daily and ate part of a pet project I’d been spending a few weekends on… and serendipitously, I just addressed a dozen minor requirements with an idea that came out of nowhere. I hardly even decided to make an application– it just happened. It’s a glorified proof of concept at this point… but the question is… what will happen over time as I face more wide-ranging and more thorny problems… and grow into them… and discover new opportunities that I never thought of.

    I don’t even know what the next thing is beyond the very next order-of-magnitude refactoring and extension…. No idea at all. Why is it that I feel like I have the most comprehensive plan for software development of anybody on my block… but I just know that to the normal people here I’d sound like some crazy guy who doesn’t know what he’s doing?

    I think… maybe… the fact that the problem is indeed so hard… maybe that means that, contrary to “common sense,” my approach really is the best fit for this. The status quo for my industry is so far from what’s tolerable to me… and if I’m going to be responsible for it, I have to take control of it and put it in terms that I can assimilate.

    It’s a *right way* mentality dropped into a maelstrom of needs, unknowns, and entrenched mediocrity. You pick your battles and bide your time, but I think there is value in it even if nobody else around me can see it until it’s five minutes away.

  11. Mark Miller Says:

    None of these people have even heard of Unit Testing or Regression Frameworks… and many of them do not use source control.

    Gosh, that’s scary! They don’t use friggin source control??? Gaagh!! Well, I gotta admit. When I was working a job on my own a couple years ago I didn’t use source control either. I just ZIPped up my source directories and put a version number on it. It’s not the way I prefer to work, but all the source control systems I knew about at the time were commercial, and I wasn’t sure if I wanted to put down the dough to get one. I’d heard of CVS, and I had used it before, but didn’t particularly like it.

    There’s a generation or a culture gap. None of them have this attitude of experimentation and exploration built in as a component of every development project they undertake. No immediate perceived business value, I guess.

    More like they don’t know how to do it. Believe me, it’s not because it doesn’t need to be done. It’s not merely a “right way” orthodox approach either. Alan Kay has talked some about this, that the tech culture is rather primitive, even though it pretends it’s “up-to-date” and “advanced”. There is some consideration now for how people interact with computers, but not enough. None of the popular development environments allow you to do what you’re doing. They’re all early-bound environments.

    What surprises me a little is that nobody you’re talking to can even relate what you’re talking about to something they know. I remember several years ago seeing articles in .Net programming magazines (maybe two or three of them) that talked about extending your .Net app via scripts. I knew a Visual C++ developer (a coworker) who talked with me about doing that with C++ back in 2000.

    I have met developers who know about the idea of creating “data driven” apps., that is creating an app. framework that responds to parameters entered into a database repository. So there are analogies around.

    I’m getting this image that these people you’ve been hanging around are “plug it together” developers, who know how to deal with component-based frameworks, but that’s it.

    But maybe there is no need for me in this setting. Maybe it’s just overkill.

    I can understand why it would feel that way. I think that’s what a lot of typical developers would tell you, “You’re making it way too complicated for what it is. It doesn’t need all this.” I think a quote from Alan Kay addresses this concern well. This is from his 1997 speech, “The Computer Revolution Hasn’t Happened Yet”:

    “I just played a very minor part in the design of the ARPANet. I was one of 30 graduate students who went to systems design meetings to try and formulate design principles for the ARPANet, also about 30 years ago. The ARPANet of course became the internet–and from the time it started running, which is around 1969 or so, to this day, it has expanded by a factor of about 100 million. So that’s pretty good. Eight orders of magnitude. And as far as anybody can tell–I talked to Larry Roberts about this the other day–there’s not one physical atom in the internet today that was in the original ARPANet, and there is not one line of code in the internet today that was in the original ARPANet. Of course if we’d had IBM mainframes in the original ARPANet that wouldn’t have been true. So this is a system that has expanded by 100 million, and has changed every atom and every bit, and has never had to stop! That is the metaphor we absolutely must apply to what we think are smaller things. When we think programming is small, that’s why your programs are so big! . . .

    [The] way to stay with the future as it moves, is to always play your systems more grand than they seem to be right now.”

    The point of his speech is just what you’re talking about: Most developers don’t think like this. We need to! What he was saying is by thinking “this is all I need right now” for a small app., you’re dooming yourself to creating a huge mess if and when your “small app.” needs to become huge. By thinking about architecture ahead of time, even for small problems, you’re ensuring that your solution can scale up.

    One of the things Kay is really into is creating systems that are malleable enough to deal with change well, and are able to go from small to huge without major disruptions.

    I get the sense that your solution has kind of been hacked together. I don’t know. Maybe you just think it’s hacked together, but you’re using a good enough architecture that it won’t create problems. You could run into limitations with your approach eventually. That’s the price we pay if we don’t think about architecture. It sounds, though, like you’re getting “real bang for your buck,” because what you’ve done is better than what anyone else there has tried. This isn’t surprising, but a compliment to what you’ve learned.

    It merged with some of my homegrown tools that I used daily and ate part of a pet project I’d been spending a few weekends on… and serendipitously, I just addressed a dozen minor requirements with an idea that came out of nowhere. I hardly even decided to make an application– it just happened. It’s a glorified proof of concept at this point… but the question is… what will happen over time as I face more wide-ranging and more thorny problems… and grow into them… and discover new opportunities that I never thought of.

    This is the reason I brought up the issue of scale above. If you’re using good architecture it’s more likely to scale well to those future unknowns than what anyone else is trying.

    I know you’ve told me your opinions of OOP before, and I’m not trying to sell you on it here. I talked with Alan Kay briefly about programming in general via e-mail last year, and he went on at length about the importance of architecture, and asking questions about the relationships between components of a system. He talks about OOP some, but his main point is thinking about architecture, which could just as easily be non-OOP:

    “I would characterize the ARPA [work we did in the 1960s] (and then the concentration at PARC) as mainly interested in a ‘no-centers’ style scaling architecture (personal computers [as opposed to centralized mainframes], Internet, Ethernet, objects, ‘no OS’ [as it’s traditionally defined], ‘no Apps’ [they created ‘widgets’], etc.).

    So a lot of the interest was how to get things done without having to concentrate the knowledge in one or a few places (because this would require these places to have to control ‘ingredients’). …

    In Computer Science, I think one of the things we (are supposed to) do is to try to abstract much simpler models of the artifacts that still are able to do what the artifacts do. So if someone does a word processor or operating system, we should be asking what is the simplest organizational working model that will also make that word processor or OS (or perhaps a better one). This will often lead to better designs for applications, and also better programming languages, systems architectures, etc. And this process can be turned on programming languages and systems themselves.

    This is what we did at PARC. Smalltalk partly came from ’20 examples that had to be done much better’, and ‘take the most difficult thing you will have to do, do it, and then build everything else out of that architecture’. This paragraph and the one above it have (new kinds of) math lurking as the pathways and principles towards greater ease of expression for programming. …

    ‘In matters of scaling, architecture dominates materials’, so we should first ask questions about architecture before worrying about details of the materials. This is how dynamic objects came about: by worrying about how largish systems could function — ‘no centers’ is a good answer here — this brings ‘messaging’, and then ‘to what?’, and then ‘how?’, etc. Similarly, we need to ask about the architecture of the proposed model above before worrying about details. Some of these questions are algebraic: i.e. are there abstractions that could simplify the number of concepts needed to span the model? It seems there are: both structural and behavioral. Can we separate ‘meaning’ from ‘optimization’? Yes, in many if not most parts of the system. And so forth. …

    In any case, to me, most of programming is really architectural design (and most of what is wrong with programming today is that it is not at all about architectural design but just about tinkering a few effectors to add on to an already disastrous mess). … Lots of programming today is not unlike lots of blogs today: more opinion than knowledge, skills, or style. This is the pop culture infiltrating into a world of ideas that are usually at their best when developed rather than graffitied. …

    If we think of a big computer system as necessarily being made out of components at one or many levels of scale, then we really want to be asking questions about the relations between them. This will come down (partly, but in a pretty large way) to what the components know how to do vs. how much they have to be told how especially in the moment. Data structures don’t know how to do anything and have to be told all – this doesn’t scale – so I abandoned this idea in 1966. Objects can know quite a lot and thus don’t have to be told much. We did not go nearly far enough in this direction in the 70s (in no small part because of the weak and small computer resources — and our weak and small minds).”

    So if you want to take a look at a software system, you could look at its fundamentals: data access/serialization, rules, system communication, human-computer interaction, etc. and ask questions (to yourself I suppose) about the relationships between them, and then write bits of software that will deal with those domains, and facilitate the relationships between them, bringing in the same properties that you’ve come to like about your current creation.

    Another thing Kay communicated to me is that software architecture is the area of computer science where the most research is needed. CS in academia tends to not talk about architecture much, so not much work has been done on it. I say this so you won’t feel bad if you feel inadequate/overwhelmed by the idea of trying to form a better architecture for whatever large system you’re looking at. One of the things he said he’d like to learn is how do people learn architectures and abstractions. So even he doesn’t have a good answer. His advice seems to be to just go for it and give it your best shot. You’ll learn something valuable from it, and I would add, “even if you fail,” though I understand perfectly well that failing is a scary prospect.

  12. To Develop a Good Design, You Must Begin by Caring About People « Learning Lisp Says:

    […] Learning Lisp (notes from an average programmer studying the hard stuff) « How to Deploy Your Skunkworks Application– and Take Over Your IT Department’s Software D… […]

  13. ProjectX Blog » Blog Archive » Xlinks Digest - 27 / 05 / 2008 Says:

    […] How to deploy a skunkworks application Added on 05/21/2008 at 09:17AM […]

  14. Leonardo Says:

    EXCELLENT article, thanks for taking the time to write it. I wish I knew more enthusiastic people like you in my workplace; all I ever see is people who make just a good enough effort at their job to keep it.

  15. using the sphincter-sigil to abuse Perl « Learning Lisp Says:

    […] lisp-ish DSL by itself.) In Blub, we hacked someone’s custom evaluator to go even further to craft our own language from scratch. This proved to be an unsustainable amount of work beyond a certain level of complexity. With Perl, […]
