24 January 2014

Pragmatic Agile - Done is Better than Perfect

Over my years in software I've heard a lot about using a common sense or pragmatic approach to building systems. I've worked in organizations that borrow heavily from the 37 Signals books Getting Real and Rework, which essentially define pragmatism in the context of shipping software. There's a lot to like about their messages and they're certainly in line with what I've learned and applied since I "joined" the Agile world in 2000. I highly recommend both to anyone who wants to improve how they ship systems.

There's one particular aspect of this approach, though, that can be worrisome.
"Done is better than Perfect."
On its face, this seems like great advice. Don't gold plate the software! Ship something for crying out loud! I won't argue with that recommendation at all. Hell, I give that advice to people myself and even wrote about the evils of Perfection Obsession a while back.

However, you have to be very mindful of what Done means in your context. Another tenet of the 37 Signals approach is "Half, Not Half-Assed". This simply means that you provide partial but meaningful, useful functionality to the consumers of your software, functionality that, from a technical standpoint, has quality approaching perfect! This is the key part of the message that I have repeatedly seen ignored.

"Done" and "perfect" refer to the scope of the functionality, not its quality.

Quality is non-negotiable. It has to be so good that the people who use the software simply don't notice. It has to be so good that if something does go wrong, it was such a strange corner case that no one could have thought of it beforehand.

One of my key takeaways from Extreme Programming has been that I've learned how to push quality to near zero defects at a reasonable cost. I've been able to work from the assumption that anything greater than zero defects is unacceptable rather than the opposite, where defects are simply expected as a natural part of developing software. Couple that with Exploratory Testing performed by people who really understand how to test software, and again the quality level rises and defect rates decrease even more.

All this is well and good, but what if a new feature of the software passes all of its automated checks, the code has been peer-reviewed and been given all the requisite blessings, it has been thoroughly explored and the testers have become bored, the software is shipped into production, but no one uses that feature? What happens if it doesn't actually do what the users of the system wanted? Was it actually done? It certainly wasn't perfect!

"Done" also needs to contain the notion that there's a valid business reason for delivering the functionality. If even the smallest feature is delivered "just because", then the time, effort and money was wasted.  This doesn't mean, though, that you shouldn't deliver features that haven't been requested by consumers of your system. After all, they may not even realize that they have a need for the feature until they get their hands on it. It does mean, though, that especially in that case you would want to deliver the tiniest possible functionality to expose the assumptions to the real world in order to validate that they were correct.

Shipping something is by far the best way to obtain the feedback required to build systems that people love to use - that delight their customer. Sacrificing quality to do that may seem to help you move faster initially, but it will slow you down over time as the system (and the defect list) grows.

Ship very small slices of functionality as quickly as you can and your market can bear, but do so with the view that the quality of that functionality must be nothing other than "the best".

If you need help doing that, I happen to know a coach... :)

19 January 2014

The First Rock

Dave Thomas and Andy Hunt, the Pragmatic Programmers, used the concept of "broken windows" to describe the condition of software entropy. I've developed a very simple heuristic for determining when a software system has reached a tipping point with respect to quality. Following the analogy, the first rock has been thrown that broke a window.

This heuristic can be used to make some crucial decisions about the future of the system and where the people developing it need to focus their efforts. It may also be an early indication that you need to think seriously about simply replacing the system altogether.

Suppose your system has a list of defects, and we'll assume that the list has been curated well enough to remove items that aren't true defects but rather feature changes or additions. The tipping point for quality occurs the moment your defect list has grown large enough and contains defects complex enough that there's a decision to mark lower priority defects with the status of "WON'T FIX", or to simply close them without fixing.
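As a toy illustration of the heuristic, here's a minimal sketch of that tipping-point check over an exported defect list. The status names are assumptions on my part, since every tracker labels dismissed defects differently; substitute whatever yours actually uses:

```python
# Minimal sketch of the "first rock" check over an exported defect list.
# The status names ("wont_fix", "closed_unfixed") are illustrative
# assumptions, not taken from any particular tracker.

def first_rock_thrown(defects):
    """True if any defect was dismissed rather than fixed."""
    dismissed = {"wont_fix", "closed_unfixed"}
    return any(d["status"] in dismissed for d in defects)

defects = [
    {"id": 101, "status": "open"},
    {"id": 102, "status": "fixed"},
    {"id": 103, "status": "wont_fix"},  # the rock that broke the window
]

print(first_rock_thrown(defects))  # True
```

In practice you'd run something like this against a real export from your tracker, but the point is how little it takes to trip the check: a single dismissed defect.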

Once that happens, once you compromise to that extent on quality, you're starting on the steep downslope towards the immense pain of making changes to the system that break seemingly unrelated functionality. You're wandering into the wilderness of simple changes that take increasingly longer to make. Your ability to quickly react to market opportunities gradually erodes away.

How do you prevent this? The answer is quite simple - work from the perspective that anything more than zero defects is unacceptable.

I know you're laughing at this point. What a ridiculous thought! Software is far too complex! Defects are inevitable!

At some point in the history of software development, a person or group of people somewhere made a conscious decision to change from "defects are unacceptable" to "defects are inevitable". I completely understand that when computers were powered by vacuum tubes you were at the mercy of blown tubes and actual, living bugs causing short-circuits, but those days were well over a decade before I wrote my first Hello World program in BASIC in 1981. As I wrote in Waterfall Works!, the assumptions underlying many of the techniques we use and approaches we take today are based on the realities of computing environments in the 1950's, 60's and 70's. They don't need to apply anymore.

When I learned about Extreme Programming in 2000, the ideas of working on very small slices of features with rapid feedback, all backed by acceptance criteria and multiple levels of automated checks, opened my eyes to the possibility of shipping software with near-zero defects at essentially no extra cost over the medium to long term. Suddenly, the impossible seemed possible, and yet another outdated assumption bit the dust. Since that time I've also learned about Exploratory Testing and other complementary techniques to drive quality even higher.

Since being introduced to these concepts, I've seen many groups and systems where User Stories are created in prodigious quantity, but not a single one has acceptance criteria. There are thousands upon thousands of automated checks such as unit tests, but defects constantly fall through that net because few people are taking the time to use Exploratory Testing.

If we simply change our mind-set from one of "defects are inevitable" to "anything more than zero defects is unacceptable", and think hard about what it would take to achieve that goal, we can reach the nirvana of great software with levels of quality of which we had only once dreamed.

Ideally, you simply realize the value of the change and start down that path. Failing that, though, ask yourself if the first rock has already broken a window. Have you simply closed defect reports without fixing them? Do you have a "Won't Fix" status in your tracking system, and if so have you used it?

So, has that rock already been thrown?

15 January 2014

Pareto Must Die!

A couple of years ago I wrote a post entitled Fibonacci Must Die in which I discussed how the mathematical series had been abused by lending pseudoscientific credence to software estimation. I see similar abuses of the Pareto Principle, the famous 80/20 Rule.

I have watched entire conversations stop when Pareto is invoked.  I've seen work shipped as good enough under the auspices of Pareto when in fact it didn't work properly or had unexpected side effects.

Like Fibonacci, there's an air of science when invoking Pareto, but the problem is that it's very dependent on the context. Would you like the software developers writing the code for a pacemaker to invoke Pareto? OK, so that's life-critical software. How about those at Amazon saying that the code that calculates shipping costs is good enough after hitting the 80% threshold? Suppose that code overcharges on 10% of orders and you don't even realize it. You could be losing a few dollars on each shipment, multiplied by a bazillion other customers. What if you were undercharged for shipping and a friendly FedEx or UPS person showed up at the door asking for a few more dollars? That isn't the end of the world, but it is certainly embarrassing for both you and Amazon. The Pareto Principle shouldn't be an excuse for doing a half-assed job!
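To make those stakes concrete, here's a back-of-the-envelope calculation. Every figure in it is invented for illustration; none of them come from Amazon:

```python
# Back-of-the-envelope cost of a "good enough" shipping calculator.
# All figures below are invented for illustration.

orders_per_day = 1_000_000   # hypothetical daily order volume
error_rate = 0.10            # the 10% of orders that are miscalculated
avg_error_dollars = 3.00     # average mischarge per bad order

daily_loss = orders_per_day * error_rate * avg_error_dollars
print(f"${daily_loss:,.0f} per day, ${daily_loss * 365:,.0f} per year")
# → $300,000 per day, $109,500,000 per year
```

"A few dollars" per order stops sounding small once it's multiplied out, which is exactly why "good enough at 80%" needs an actual cost attached before anyone accepts it.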

The real issue is that we don't know if the ratio is 80/20, 70/30 or 60/40 or any other arbitrary combination.  For a pacemaker I'd certainly want it to be 100/0. What if the correct ratio is 0/100, meaning that the code shouldn't have been written in the first place? By what measure are we coming to this magical 80%? Lines of code? Time spent? Budget? Finger in the wind?

When using Pareto, there's a form of confirmation bias occurring. When the principle is phrased as, "You get 80% of the value for 20% of the work", it just sounds so good that we believe it must be true. We want it to be true! This bias is the origin of most urban legends, and is the raison d'être for Snopes.com!

So, because it sounds like it must be true we endeavour to ship software quickly with that 80% of the value.

That isn't completely bad, since my experience has consistently shown that shipping smaller groups of features earlier is a more effective way of delivering most types of software. Effective feature or story splitting enables this, and it's one case where I'm more than happy to encourage obtaining feedback as early as practical.  If, for example, you're still working on "must have" features as you near the end of a 12-month project, then it's likely that you haven't split those features as effectively as you could have. Again, though, why 80/20? Why not 99/1?

While I'm not the biggest fan of the research methodology behind the Standish CHAOS reports, I do think that their surveys about feature usage are interesting - that 45% of features in software are never used. Of course there's room for context and interpretation there, but it would be interesting to know in each context whether those features that aren't used were part of the 20% rather than the 80.

We also want to be careful not to use Pareto as a means to avoid work that we perceive to be difficult or monotonous. Just because something is difficult/expensive/monotonous doesn't mean we shouldn't do it. Similarly, I've often heard the phrase "low-hanging fruit" used to find features that are easy and that a development team could complete quickly in order to provide a sense of progress. When I've heard this, though, it has been in a context that's devoid of any notion of what's important to the business.

I'm much more a fan of what Josh Kerievsky calls "bargains". These are features that provide value to the business that's much higher than the effort required to ship them. That evaluation of value and effort is what's really required to iteratively determine the sequencing of features.

Iterating using Pareto is good, though... perhaps get 80% and evaluate.  Then get 80% of what's remaining and evaluate. Repeat until the confidence level of more than one person is high enough for the context in which you're working. Substitute the appropriate ratio for your circumstances.
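The arithmetic of that iteration is worth seeing: delivering 80% of what remains on each pass shrinks the leftover geometrically. The 0.80 ratio here is just the principle's placeholder, not a measured value:

```python
# Repeatedly delivering 80% of the remaining scope, then evaluating.
# The 0.80 ratio is the Pareto placeholder, not a measured value.

remaining = 1.0
for n in range(1, 6):
    remaining *= 1 - 0.80   # each pass leaves 20% of what was left
    print(f"after pass {n}: {remaining:.3%} of the scope remains")
# after five passes, only 0.032% of the original scope is untouched
```

Which is the point: iterate with whatever ratio your context supports, and the "missing" portion becomes vanishingly small after a handful of evaluated passes.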

So, while I have no issues with the concept behind Pareto as a means of shipping only the most important and impactful features first, I believe that the 80/20 ratio has been misused. Features have been shipped with much lower quality. The features shipped haven't been the ones that are most important to the people consuming them. Worst of all, 80/20 has been used as a way to kill conversations that could fix the first two issues.

Learn how to break features down into extremely thin slices, and deliver those with the highest possible quality.  If they're truly thin, maximizing quality doesn't cost much more than shipping crap.

13 January 2014


We all know that the only constant in life is change, and with that in mind I'd like to let the world know that I've moved on from a great 20-month stint at Shopify and I'm returning to the consulting world.

It was great being surrounded by such a vibrant group who were so engaged with shipping software and making their own rules as they went along!  I learned a ton about Ruby and Rails as well as the ins and outs of scaling software to many tens of thousands of users, and crazy requests per minute (RPM) numbers.  I was fortunate to work with a number of real craftspeople who cared deeply about what they did and sought constantly to improve.  I was equally fortunate to work with some of the most approachable, enlightened management I've seen, from the CEO, Tobi Lütke, on down.

As for the future, I'm now available to work with organizations to build or improve their product delivery capabilities.  I can work with your developers to help them learn techniques to drastically improve quality in order to enable them to sustainably ship software faster.  I can work with your product management people to help them identify what to build and how to break it down into minimally small pieces in order to obtain feedback as quickly as possible and delight your customers.  I can work with your support organization to help them engage more directly with both customers and the development people in order to respond to real issues faster.  And, I can work with your management group to help them structure and oversee your organization in a way that effectively supports your people.

If you want more information about what I've done in the past, have a look at my LinkedIn profile.

If you're interested in some coaching or just want to chat about your organization, feel free to e-mail me at dave.rooney@westborosystems.com .