Tuesday, March 20, 2012

They Eat Their Own Dog Food

Surprisingly, Wikipedia does not offer an exact origin for the term 'eat their own dog food' but places it vaguely in the 1980s. Its most common use now is in reference to software makers that use their own products, like Google running its own office applications internally. I get evidence every day of how few products are used, let alone exhaustively tested, by the people who make them. Today it was my coffee maker.

I love my current coffee maker. It has a burr grinder integrated into it, so I dump water and beans into the device and program it to have my steaming pot of Joe ready when I stumble out of bed. It is nearly perfection, nearly. The one flaw that makes me crazy is that the cover for the carafe is often difficult to remove. The affordances for turning it are minimal, so removing it can sometimes be a significant challenge. This was not a problem when the device was relatively new; it only happens after many uses, and it can be reduced, though not eliminated, by scrupulously cleaning the cover each day. I can't help but wonder how long they used the prototype before it went to production and whether any senior managers actually use the products they sell.

I find this problem in software all the time. You can tell when a product was made by committee and lacks a single, cohesive design. I believe it was Brooks who observed that conceptual integrity, a single unifying design, is the most important attribute a software product can possess. Yet it may also be one of the most difficult attributes to quantify.

I had a difference of opinion the other day with someone who did not share my belief in the centrality of the user in software design. I found it odd, since this is part of the Agile Manifesto and one of the big truths I've learned through hard-won experience. I hope I can eventually drill down on this difference in viewpoint, but for now I'll avoid speculating on its origin and instead elaborate on my own.

Without humans there is no software. Not simply because there is no one to write it but because there is no use for software. I am finding it harder and harder to maintain a bright line between hardware and software as the years go by, simply because without hardware there is no product. While products become more software-centric, it is a simple fact of physics that without hardware to control, the software does nothing. It is not the software alone that solves the problem set at the designer's door but the system of hardware and software that creates the solution.

Navigating from the problem space to the solution space has always been a challenge. I think Christopher Alexander's work illustrates the challenge as well as anyone's. The client engages professionals to perform this navigation on their behalf because it requires extensive education and experience to do with any level of predictability. Where I see the biggest challenge is in the work to define the boundary between the problem space and the solution space. At this border we see two vital artifacts of the SDLC: the requirements and the specification. Done well, the requirements document is completely agnostic to the solution that will be employed. A good specification document will define a solution that mates exactly to the problem space, with no gaps or overlaps. So much for theory.

Requirements suffer from many shortcomings. One in particular is the inability to anticipate every question a designer may ask about the problem. Missing even one attribute that the product must have to meet the needs of the client can result in the development of a sub-optimal solution. These qualities of software-in-use have been notoriously difficult to pin down. Take my coffee maker as an example.

The qualities of a software product as it is used over time are often overlooked in software development projects. The focus is always on the present and the foreseeable future, and often what is foreseeable is surprisingly nearsighted. Just as my coffee maker's maker erred in not testing the product over a long enough time to discover how the oils and acids of the coffee would affect the seal, so too most designers are so consumed with getting the most obvious qualities of the product right that secondary effects are overlooked. These effects do not assert themselves until some time after the initial development team has been disbanded and the product has entered the nebulous world of maintenance. I can think of many instances of the same error in design that I now know to look for but which still must be taught to each new designer.

Putting data into a data store is a fundamental need in almost any business system. What is less obvious is that eventually data must be taken out, lest the data store's qualities degrade. By quality here I mean mean-time-to-retrieval, backup time, or some other attribute that depends on scale. A database table can use an index to speed retrieval, but the need for that index is often overlooked while the table has not yet grown to its steady-state size, or worse, no provision to remove old data has been made. In that case the effects of scale will slowly creep up over the life of the product until a response is forced. It's easy for me to clean my coffee pot and see the relationship between use and the action that returns the product to its original function. Not so with a complex system.
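As a minimal sketch of the two provisions I have in mind, here is how they might look with Python's built-in sqlite3 (the table, columns, and retention window are all hypothetical): the index is planned before the table reaches steady-state size, and a pruning job gives the data a way back out of the store.

import sqlite3

conn = sqlite3.connect("orders.db")

# Create the table and its index together, before growth makes the
# missing index painful to discover in production.
conn.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id         INTEGER PRIMARY KEY,
        customer   TEXT NOT NULL,
        created_at TEXT NOT NULL  -- ISO-8601 timestamp
    )
""")
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_orders_created_at ON orders(created_at)"
)

def prune_old_orders(days=365):
    """Delete rows older than the retention window; return how many were removed."""
    cur = conn.execute(
        "DELETE FROM orders WHERE created_at < datetime('now', ?)",
        ("-%d days" % days,),
    )
    conn.commit()
    return cur.rowcount

print("pruned %d stale rows" % prune_old_orders())

The point is not this particular schema but that both the index and the pruning job exist on day one, before scale forces the issue.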

This is where "eating your own dog food" has its advantages. At worst, you are learning the latent defects of the product in parallel with the maker. But the difference is that a business dependent upon the same product you are is likely to be more attentive to the dangers of a latent defect than one that is merely providing a product to a marketplace. The first can perceive an existential threat while the second merely a potential threat to their market position. This is where I see the open source movement to derive its greatest motive force. People contributing to open source projects more often than not are dependent upon these software products and highly motivated to make them easier to use, more robust and to more nearly solve the problems that they have as people in the service of a larger organization.

Open source developers consume their own products most often because, for them, those products are their tools. Every worker yearns to craft the tools that will make their job as easy as possible. As a child, I briefly apprenticed myself to a retired Danish cabinet maker who used my idle time to help him with his work. He told me, "never stand when you can sit, never sit when you can lie." He taught me how to make jigs for the repetitive work we needed to do, and many other ways to simplify a job by employing tools. Software work is no different from woodwork in this regard. The marketplace has provided many good tools, but the need to define and even build our own tools has always been there, and only we know how our needs grow and change over time. Software development is less often a solo activity than popular images suggest, and the tools needed when programming in the large are very different from those for programming in the small. But the open source movement has allowed alienated developers who share these unfulfilled needs to band together into functional organizations that can effectively create the solutions they know are needed, removing intermediaries such as third parties or markets.

If end users could directly program their own devices, the need for systems development as work for hire would not exist. Despite some improvements in end-user computing, it does not look like this will change soon. Clients will continue to depend on software engineers to create their systems. If developers found it so hard to get their own needs met that they built a movement as vibrant and productive as open source, what hope is there for clients? Is it realistic to expect that clients can articulate their problem space in such detail that a designer with less than perfect knowledge of their point of view can create a specification so perfect that the product will be right the first time? This implied waterfall thinking was a major motivation for the Agile Manifesto and its central reliance on iteration. Rather than blame the client for neglecting to mention something, make them an integral part of the process and accept their shortcomings, just as you expect them to accept yours when the final product is found to have defects traced to your own failures. And please stop calling them bugs. They did not fly through the open windows and ruin your perfect program. They are there because of some failure of communication or logic.

So having clients intimately involved in the development process is a hallmark of a successful project and an indicator of why open source software is so often successful at creating good products. If the developer and the client are one and the same, you cannot keep the client out of the process. In the world of commercial software development, it is still too common to see the client's voice marginalized, and I believe this is one of the great sociological challenges facing software engineering today.

Thursday, March 15, 2012

Where does the study of economics live?

I posed a question last week about where the study of economics resides at UC Davis. This is interesting to me since it is not always where you'd think. Here is what I'm learning:

* at the University of Chicago, the Department of Economics is in the Division of the Social Sciences
* at Harvard, the Department of Economics is in the Faculty of Arts and Sciences
* at Berkeley, the Department of Economics is in the College of Letters and Science
* at Davis, the Department of Economics is in the College of Letters and Science
* at Yale, the Department of Economics is in the Faculty of Arts and Sciences
* the London School of Economics and Political Science is an entire institution built around the subject
* at Georgia Tech, the School of Economics is in the Ivan Allen College of Liberal Arts


Saturday, March 10, 2012

Industrial Design

"If the point of contact between the product and the people becomes a point of friction, then the industrial designer has failed. If, on the other hand, people are made safer, more comfortable, more eager to purchase, more efficient -- or just plain happier -- the designer has succeeded."
Henry Dreyfuss, Designing for People (1955)
found in the Preface to Software Psychology, Ben Shneiderman (1980)

A new programming language

http://mythryl.org/

Sunday, March 4, 2012

In my future research, I expect to perform analysis of a great deal of textual data. I stumbled on this and want to share it with everyone.


WEFT QDA

Weft QDA is an easy-to-use, free and open-source tool for the analysis of textual data such as interview transcripts, fieldnotes and other documents. The software isn't being maintained or updated, but the current version is available and includes some standard CAQDAS features. 


http://www.pressure.to/qda/


Friday, March 2, 2012

To Err is Human...

I stumbled across a YouTube video of P giving a tech talk at Google. I am always amazed at how much he says that I immediately agree with. That is rare for me. It is so gratifying to find someone who seems to see the world the way I do. I am very lucky.

But being as hypercritical as I am, I have a bone to pick. P and others in the group use the term "bug". It really bugs me (pun intended), since I subscribe to Dijkstra's view that there are no bugs, only programmer errors. While this is primarily a semantic issue, I think it has profound pedagogical and professional implications, and I refuse to let my students use the term. As you may know, the story goes that the term stems from the early mainframe days, when machines full of relays were run at night because of their power draw, in rooms with warm evenings. That meant open windows in the days before air conditioning, and moths were inevitable. Since machine failures were often the result of the pesky bugs interposing their bodies in the relays, it was necessary to "de-bug" the machine by sweeping them out before running the program. A bug at that time had nothing to do with a mistake made in programming.

My objection to the term is that it distances the programmer from the mistake that was made. While I appreciate the humanness of not wanting to admit to continually making mistakes, waving them away as something atmospheric and inevitable in the course of writing programs is not helpful to achieving the highest-quality software. What is needed is for programmers to aspire to writing the code correctly the first time, through careful thought and planning. With the average module size shrinking under OOP, this is more possible now than it was in the bad old days of gotos and large modules. At least locally.

Once you appreciate the pain and discomfort of being in a profession where achieving the perfection demanded by the formal machine is recognized as the inhuman task it is, you can appreciate my desire to find improved methods for translating a consistent, comprehensive and complete requirements document into a design and implementation that can be error free. This has been the domain of formal methods, but my belief system allows that something more human is possible.

An aspect of this was driven home to me today when I needed to activate an HSA debit card. The details aren't interesting, but the result was a very frustrating experience, marred by a series of process errors that individually were easy to accept but collectively left me ready to wring someone's neck. They included a voice response system that sent me in circles; a customer service rep who was unrealistically chipper yet incompetent; a claim by said rep that she was only allowed to wait up to two minutes for me to find the information she was asking for (despite having kept me waiting a collective six minutes while she went elsewhere to handle parts of the transaction); and a headset that made it very difficult for me to hear her.

If this kind of customer service snafu were unusual, it wouldn't be worth the pixels to complain. But this is now more common than not in any interaction with a service organization that goes beyond the routine and repetitive. The problems go beyond programmer error: they are failures of process at a higher level and include both the automated and the human systems that comprise the service product. My aim is to get to the bottom of how organizations can rush such half-baked processes into production and then find it so incomprehensible that customers grow increasingly dissatisfied even as they are increasingly assured that excellent customer service is what the organization is all about. I contend that the failure to properly design the overall process is the same failure that leads to "bugs" in code.

Another colleague gave a presentation on an intriguing plan to create a database of "bugs" that can be queried for insight into the source of each "bug". I accept that such work is needed, since we are a long way from methods that will yield error-free code. But shouldn't the emphasis be on understanding what cognitive slip caused the coder to create the error in the first place, and not merely on the repair after it is discovered? P's talk at Google struck me as closer to the mark, since I can see how it could be adapted to the code-generation phase as well as contribute to the repair phase. Despite how passé they are, I still believe that P's talk is closer to patterns than anyone is willing to say out loud. But then this talk was from 2008, and that was soooo long ago.
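To make the distinction concrete, here is a minimal sketch in Python of the kind of record I have in mind (every field, category, and example below is hypothetical, not my colleague's actual schema): a defect record that stores the suspected cognitive cause alongside the repair, so the database can be queried by why the mistake was made and not just by where it was fixed.

from collections import Counter
from dataclasses import dataclass

@dataclass
class DefectRecord:
    defect_id: int
    module: str           # where the error surfaced
    symptom: str          # what the user or tester observed
    repair: str           # what was changed to correct it
    cognitive_cause: str  # the slip behind it, e.g. "off-by-one"

records = [
    DefectRecord(1, "billing", "duplicate invoice", "added idempotency check",
                 "wrong mental model of retry semantics"),
    DefectRecord(2, "reports", "last row missing", "fixed loop bound",
                 "off-by-one"),
    DefectRecord(3, "export", "final page truncated", "fixed slice end",
                 "off-by-one"),
]

# Query by cause rather than location: which slips do we keep making?
for cause, count in Counter(r.cognitive_cause for r in records).most_common():
    print("%dx %s" % (count, cause))

Queried this way, two superficially unrelated repairs surface a single recurring slip, which is exactly the insight a repair-only record would hide.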

I have a lot of catching up to do.

Wishing you beautiful code
d