11 October 2014

Uh Oh... We Discovered More Stories!

As I've said before, I'm a huge fan of Jeff Patton's Story Mapping technique. While Story Mapping goes a long way towards identifying the work that needs to be completed to deliver a viable system, you will inevitably miss some stories. This is a natural outcome of the discovery process that is inherent to software development.

When you discover that some functionality is missing or incomplete, it's time for a conversation. The team, including the Product Owner/Customer, must get together to determine what to do next. Again, take the time to break that functionality into thin slices. This discussion doesn't have to be restricted to specific meetings like Scrum's backlog grooming or sprint planning - it can happen anytime the team is able to have it.

16 September 2014

How to Enable Estimate-Free Development

Most of us have been there... the release or sprint planning meeting that goes on and on and on and on. There is constant discussion over what a story means and endless debate over whether it's 3, 5 or 8 points. You're eventually bludgeoned into agreement, or simply too numb to disagree. Any way you look at it, you'll never get those 2, 4 or even 6 hours back - they're gone forever! And to what end? Some of the Stories could have been completed in the time it took to estimate them, while others drag on for days or even weeks longer than anticipated due to differing interpretations of what the story meant.

It doesn't have to be like that!

24 August 2014

"How Thin is Thin?" An Example of Effective Story Slicing

Graphene is pure carbon in the form of a very thin, nearly transparent sheet, one atom thick. It is remarkably strong for its very low weight and it conducts heat and electricity with great efficiency. - Wikipedia
If you have spent any time at all working in an Agile software development environment, you've heard the mantra to split your Stories as thin as you possibly can while still delivering value. This is indeed great advice, but the term "thin" is relative - our notion of what thin means is anchored by our previous experience!

To help communicate what I mean when I say that Stories should be thinly sliced, I'm going to provide examples from a recent client who was building a relatively standard system for entering orders from their wholesale customers.

14 August 2014

An Appetite for Change

I've been part of a discussion on Twitter about the vices of imposed Agile adoptions versus the virtues of the approach put forth by Daniel Mezick, OpenAgile Adoption. Regardless of the arguments for or against each approach, creating this dichotomy misses the point.

In May 2012, organizational change consultant Maureen Cunningham gave a talk at Agile Ottawa about Change. She used a number of very interesting exercises and the talk was enthralling. What caught my attention most, and is something I've used a number of times since, was some simple math Maureen presented. Yes, math.

She drew this formula to represent what is needed for change to be successful:
D x V x F > R
  • D is the Dissatisfaction with the Status Quo;
  • V is a clear, compelling, believable Vision;
  • F is the First and reinforcing steps;
  • R is the Resistance to the change.
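The key to this formula (often attributed to Beckhard and Harris) is that the left side is a product, not a sum: if any one factor is zero, the whole product is zero and change loses to resistance. A minimal sketch can make that visible - note that the 0-10 scale and function name here are my own illustrative assumptions, since none of these factors is truly quantifiable:

```python
# Illustrative sketch of the change formula D x V x F > R.
# The 0-10 scale is an assumption for illustration; the real insight
# is multiplicative: any factor at zero blocks change entirely.

def change_likely(dissatisfaction: float, vision: float,
                  first_steps: float, resistance: float) -> bool:
    """True when the product of the three factors exceeds resistance."""
    return dissatisfaction * vision * first_steps > resistance

# Strong vision and clear first steps, but zero dissatisfaction with
# the status quo: the product collapses to zero, so change stalls.
print(change_likely(0, 9, 8, resistance=5))   # False

# Moderate scores on all three factors overcome the same resistance.
print(change_likely(4, 5, 3, resistance=5))   # True
```

This is why a compelling vision alone is never enough - a team that's perfectly content with the status quo won't move, no matter how good the pitch.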

12 August 2014

An Existence Proof and The Value of Coaching

I found a tweet this morning rather disconcerting:
A tweet I saw this morning was rather disconcerting. The clear implication is that coaches, like all consultants, follow the mantra, "If you can't be part of the solution, there's plenty of money to be made prolonging the problem." In the case of this tweet and numerous others in his Twitter stream, Daniel is suggesting that companies that provide coaching services are simply in the business to make a buck. Any value provided is a nice bonus.

I just completed a 4-month coaching contract in which I spent 4 days a week with a team doing hands-on development, testing and process coaching work. This team didn't ask to "go Agile", their management picked their project as the next one after the pilot. According to Daniel's assertions, this situation was ripe for failure because the approach was mandated rather than accepted by the team.

21 April 2014

Agile Transformation Phase 2 - Get Real With Your Portfolio

If you're scanning my blog for "Phase 1", you will do so in vain! Phase 2 is a reference to the infamous "Underpants Gnomes" from South Park:

In the case of an Agile transformation, Phase 1 is the good old pilot project. If all we coaches ever did was to assist with pilot projects, we'd be rock stars! Pilot projects are by definition set up for success. Mountains are moved in order to ensure that the team is populated with as many of the organization's best people as possible. A team room is created despite a facilities policy of cube farms. Executives remove red tape and silo issues on a regular basis, because everyone wants to ensure the pilot project succeeds.

Like the underpants gnomes stealing from our dresser drawers and laundry baskets, Phase 1 is relatively easy. Senior management is pleased and wants more... much more of what they saw. They want as much of Phase 3 as they can get!

Just like the gnome in the video, there's that poor, ignored little detail called Phase 2. Follow-on projects don't have the ability to say, "Give us a team room!", and have it happen. They can't say, "We want a cross-functional team!", and expect any more than the standard "resource allocation" from which the existing matrix management policies handle the demand for skills. Mountains won't be moved. Roads won't be straightened. In some cases I've seen, the organization's immune system will actively seek out and attack this infection called Agile.

The typical manifestation of these problems is that the next group to attempt to use an agile approach on a project will run face first into the reality of, "Well, I'm allocated to this project for 40% of my time", or, "I will join this project in a few weeks after project XYZ finishes", which of course doesn't finish in a few weeks. This is very simply an issue of an IT organization not understanding its real capacity for work. They have been using the same approach for years or even decades, and systems do indeed ship. However, even the most jaded people recognize that yet another Death March isn't terribly interesting to them. I have never seen an IT group that had too little work to do, and yet so many of them take on far more work than they should and have too much in progress at any given time.

So how do you break out of this trap? The only viable way to make Phase 2 successful that I've seen is to get real and actually manage the project portfolio of the organization. Johanna Rothman literally wrote the book on that topic, and she provides a ton of information, examples and suggested remedies for the issues an organization will face when attempting to rein in their portfolio.

Once a portfolio has been brought under control or at least made completely visible to all those concerned, then the effects of decisions can be tracked. If teams continue to use the same approach to delivering those projects, you at the very least have a baseline from which to track any changes. Suppose the organization starts an experiment of using teams that are fully dedicated to a project for its duration. You can see the effect of that on the portfolio as a whole. Similarly, if teams start using automated testing approaches where they hadn't done so before, that should be visible in the progression of projects through their lifecycle.

Finally, an initiative as large and disruptive to an organization as an Agile transformation truly needs a rational, value & capacity-driven portfolio in order to create an environment for success. If that doesn't happen, the transformation will be challenged at best. After much time, effort and money is spent, the people who "sign the cheques" will wonder what value they've received for their money. If all of the projects are thrashing because they're fighting over getting QA people or waiting for a DBA to do the database work because no one on the team is either able or allowed to do so, then the organization won't see anywhere near the type of improvements that can be achieved by using Agile approaches.

So, in short, if you're in the middle of an Agile pilot project and are wondering what the next step should be - what the question mark is in Phase 2 - then you need to grab a copy (or 10!) of Managing Your Project Portfolio and start determining what work your organization really should be doing at that moment.

Don't be the Underpants Gnomes and end up with a giant pile of underwear and no one to wear them!

24 March 2014

Gourmet Crow, or Wearing a Different Hat

For readers to whom English isn't their first language, we use the phrase "eating crow" to describe a situation when you must admit that you were wrong after taking a rather strong position about something. While this isn't exactly that case, hence the second title, it does illustrate a lesson in perspective.

First Hat - "Sherpa"

While I was working at Shopify, one area on which I focused was product quality. The good news was that there were (and still are) tons and tons of automated tests for the system. What I felt needed improvement, though, was a more widespread application of testing practices to ensure that the system worked as intended beyond a superficial run on a developer's laptop.

This differentiation is likely why people like Michael Bolton prefer to use the term "check" rather than "test", because "testing" is a very different activity. Using that terminology, my view at the time was that Shopify had plenty of checks but not enough testing.

Like any large system, there were defects and my position was that the group needed to improve their testing skills in order to prevent more defects from making it to production.

Second Hat - "Merchant"

Then something rather funny happened. After I left Shopify and rejoined the consulting world, I decided to use my own Shopify store as the platform for my services and some fun Agile swag ideas in The Agile Store. I was no longer viewing the system from the perspective of an insider who knew about the defects, but rather as a customer working with the product.

In 2 weeks of setting up and tweaking my store, I believe I encountered one single "glitch" that I barely even noticed and easily worked around. Even my critical eye was surprised at how smoothly it all worked. That isn't to say that the software is perfect - I know that by entering boundary or out of range values in some places I can cause some errors - but in everyday use from the perspective of a consumer of the software, it works not just fine but quite well indeed.

It was a good lesson on the need to balance striving for perfection from a development perspective vs. striving for the best customer experience from a business perspective.

Of course, you have to determine the optimum balance for your particular domain.  For example, it's tax season, and let's think about the tax return preparation software you use. Imagine that it has a defect that only occurs under very specific circumstances of income and family size, etc., and it leads you to believe that you're going to receive a generous refund. You happily submit your return and await that nice fat cheque! Except the software was wrong and you actually owe money, so you could be charged interest and possibly penalties. Not only that, the information submitted by the faulty tax return software tweaked something that triggered an audit by your tax authority.

That would likely be considered a suboptimal customer experience, to say the least.

There are domains and circumstances where we can put up with plenty of defects and just keep going. If a game crashes, we likely just start over. If we really like the game, our tolerance of defects is much greater.

And there's always the good old standby of the air traffic control system. Not only does the customer experience have to be good, i.e. the air traffic controllers can easily understand what they see and make appropriate inputs, but the defect level of the software has to be extremely low.

So, have the discussion about the level of tolerance of defects within your domain. At the same time, strive for the best possible software. Even after my experience as a consumer, I would still advocate for better testing. The trick is finding the sweet spot between perfect software and what allows the business to grow sustainably. I still maintain that we can drive software defects to near-zero levels at a reasonable cost, but perhaps my lesson in this case has tempered the passion with which I convey that message.

Oh, and does anyone have any wine pairing suggestions for crow?

20 March 2014

Mandated Agile - A Contrarian View

There was an interesting exchange recently on Twitter about Agile adoptions/transformations that are mandated. Dan Mezick asserted that:
Between 75% and 85% of all #agile adoptions fail to last. 99% of these adoptions are implemented as mandates. Any connection here?
I responded, asking for Dan's source to those stats. His answer was that it was "pro coaches" like myself. What ensued was a long conversation (such as one can have on Twitter) including people such as Mike Cottmeyer, George Dinwiddie, Glenn Waters and others.

Dan's position is that mandated Agile doesn't work, and his Open Space-flavoured version called Open Agile Adoption is much more inclusive and grassroots-driven.

Just to be clear, I have no arguments against Open Agile Adoption and I'm a big fan of Open Space and how it can be used to ensure that many more people are engaged in the work process. There are a couple of things, though, that bother me about Dan's statements.

First, even if his statistics are accurate, I want to see that his sources are somewhat more rigorous than anecdotes from other coaches. Laurent Bossavit has made a side career for himself of challenging statements such as Dan's, with much of what he's found catalogued in his book The Leprechauns of Software Engineering. I'm not suggesting that Dan's numbers are wrong, just that if he's using them to market his own product or service then it behooves him to "show his work", as a multitude of math teachers and profs told me.

My second issue is with the implication that mandated Agile is wrong. Of course it would be better if a change such as that began and grew from the grassroots rather than as an imposition from management. Except... I was among the many who had great success with XP in the early 2000s on a single team, but precious little if any success trying to grow it beyond that point. Forget about management, other teams simply weren't interested, regardless of how successful we were.

There's also another funny aspect to this. If I'm working for a private company and the owner wants something done a certain way, it's absolutely her prerogative to do so! If the head honcho wants Agile and says, "Make it so!", then you're faced with a choice: you either work with the owner to make it so, or you can choose to leave.

While that view doesn't fit the mold of how we believe organizations should be run, it is how 99% of them are. OK, so I just grabbed that number out of the air. :) My experience has been invariably that this is how organizations are managed, for better or for worse.

We hear about the Semcos and Gore & Associates of the world because they're so different, not because many other organizations are like them. Of course we should take lessons from those companies and apply them! But we also have to be wary of cargo-culting, such as what the North American auto makers did with Toyota's manufacturing model.

In the end, though, most businesses are not a democracy. Good ones are a benevolent dictatorship, and the leaders in those companies are much more inclusive of others in the decision making process. The people in those organizations feel valued and are motivated to do great things.

But even in those companies, every so often decisions are made by the top leadership without consulting the masses. Those decisions affect everyone, and are imposed via a mandate. If the people trust that the organization's leadership is making these decisions for solid business reasons, then there really isn't a problem. If the leadership communicates those reasons and the vision behind the change, then the people on whom this mandate has been imposed are much more likely to support it.

Not all mandates are bad, and some are necessary. Creating such a false dichotomy serves no one in the long term.

Since I've now given Dan's Open Agile Adoption some free advertising, I would like to state that my own position is to help groups determine what is most effective in their context. The definition of effective will change from team to team, even within the same organization. It will also change from domain to domain. I have accumulated a lot of great principles and practices in 25+ years, as well as the wisdom to know that "one size fits all" means "it doesn't fit anyone properly". If you think that's interesting, come on over to DaveRooney.ca to see how I can help.

19 March 2014

Solve the Right Problem, Solve the Problem Right

Over twenty years ago I learned a valuable lesson about solving the right problem for the people who used the software that I was building.

I was working on a training management system, specifically on the reports needed by the people who handled training for a relatively large organization. There were a number of 'canned' reports, meaning that the format was fixed and the query options were limited to a small set. One of the reporting requirements was for an ad hoc report generator that would allow the people in the training department to create and save their own reports. This report engine was intended to have the flexibility that the canned reports didn't.

At the time, I looked at several options including off the shelf report packages (I think an early version of Crystal Reports). However, my software developer "build a better mousetrap" instincts took over and I decided to write the report generator myself.

I used an existing report file format as the starting point after finding some documentation about its binary format. I then extended it to contain some extra information that I needed for my report engine. The big chunk of the work was in building a quasi-WYSIWYG report designer that would give the people the ability to see how fields were being positioned, headers, footers, groups, etc. It was anything but a trivial task and took me about 3 to 4 weeks to have it working in a reasonable manner.

But no one used it.

I gave some demos. I sat with people and helped them create a report. They still didn't use it. From a functional perspective, my report generator was as good as anything you could get off the shelf and as easy to use... at least from my perspective as a developer. From the perspective of the consumers of the system, they just didn't have the time to learn how to use the tool well enough. So, they simply didn't use it.

Then a funny thing happened. I received an urgent request for a report that had to join data from several tables and aggregate results. My report engine wasn't built to handle such a report, so I had to quickly throw together a program that could do the work. Once I had the basic report in place, I had the person who requested it come and sit with me to review what I had done. There were a couple of tweaks to make, but it was mostly OK. I asked him if this was a one-off situation, or if that report was going to be needed again in the future. The answer, of course, was the latter - this was a new requirement from upper management.

The minor problem, though, was that I was working on another system for another group in the same organization and couldn't take the time to add the new report into the training management system as one of the canned reports. Also, by that point policies around releasing desktop systems had changed and each system had to go through a test process to ensure that it would play well with other systems used by that organization.

So, I quickly created a UI for the person to be able to enter some query parameters, rolled it all together into an app and handed it to the person on a disk. He was very happy, to say the least, and I may have spent half a day on the work.

A month or two later, he came by and sheepishly said that he had been given another report request from upper management. Again, it was just different enough that the ad hoc report generator couldn't handle it. Being a lazy developer, I simply copied the code from the previous report, replaced the report generation code with what was needed for the new report, and changed the UI to handle the different parameters. Pack it up, ship it off, and you have another happy customer!

Then, the third request arrived. This time, I copied the code but created a 'skeleton' of the app that would generate the report. The UI was blank except for buttons, as was the method that created the report. I now had a template app that just needed the code for the report and UI to allow the user to change the report parameters.

Again, I churned out a report and my customer was a happy camper. Another half day at the most.

The fourth request arrived. After half a day, there was the report's app on a disk and I believe at that point I also provided an installer to simplify that process.

I worked in that organization for another 4 years, and I don't know how many more of those reports I generated. My customer was very happy the entire time. There were other people for whom I built these quick reports as well. In those 4 years, not a single person other than me used the ad hoc report generator that I spent 3-4 weeks of my life building.

The moral of this story is that I wasn't solving the right problem with the ad hoc report generator. Yes, the customer had requested the ability to create ad hoc reports. I translated that requirement into a need to build something into the system rather than the simpler approach of writing a small app for each need. The 3-4 weeks I spent building the report generator was quite likely more than the time spent building the individual report apps.

I also didn't take the time to let anyone else try to build a report using the generator I had written. I likely would have seen much sooner that someone who didn't know the tool inside and out the way I did would struggle to create a report from scratch, and that another approach was needed.

Essentially, if I had thought more critically about solving the problem of ad hoc reporting the system could have shipped 3 to 4 weeks earlier than it did.

And that, I suppose, was the cost to learn what problem I actually needed to solve and how to solve it "right". It's a lesson that has stuck with me, and has been the rope that allowed me to climb out of a number of rat holes since by pausing and asking,

Am I solving the right problem here? Is this the right solution to the problem?

9 March 2014

Interesting - Packing List for Your Agile Journey Virtual Training

My good friend Gil Broza, author of The Human Side of Agile, pointed me towards an upcoming virtual training event called the Packing List for Your Agile Journey. It's a 5-day event with a tremendous "cast of characters", including Johanna Rothman, Arlo Belshee, Ted Young and Paul Carvalho among others.

Gil's approach is to interview each of these 10 industry leaders, having them discuss their own experiences - the ups and downs of moving to Agile methods across the spectrum of organization sizes and business domains. Some of the topics to be covered include:
  • Organizational Support
  • Team Collaboration
  • Adapting Agile as you Learn
  • Whole Product Thinking
  • Making Quality a Mind-set
  • and many more...
I've known Gil for a long time now, and my advice is really simple - if he's involved, you'll want to hear what they have to say! You can register here.

6 March 2014

Upcoming Book: Effective Software Delivery - Agility Without the Dogma

I've started working on a new book with the rather lofty goal of cutting through the marketing hype and near religious dogma of the various brands of Agile. My focus is on conveying what is effective in the context of a group of people building software in their particular domain.

Effectiveness is the book's overarching concept. There are a multitude of different ways to deliver software, but in the end effectiveness can be distilled into two key activities:
  • Ship something
  • Reflect on how you shipped it in order to improve
Whether you're a lone app developer working in coffee shops or a multinational corporation building equipment that costs millions of dollars, Ship and Reflect still apply if you're going to be effective.

Of course, the details of how you ship and how you reflect are going to vary from team to team and domain to domain, and that's exactly why people and organizations struggle with the different Agile brands. David Anderson, the originator of the Kanban method, made a very interesting statement on the Kanban Dev Yahoo Group back in 2008:
So while I have heard of agile teams that appear to exhibit high maturity behaviors - objectivity, use of leading indicators, low degrees of variability, and (maybe, just maybe) continuous improvement in a failure tolerant culture, I have not heard of one that existed without the direct leadership of one of the personalities in our community. At this point, it is impossible to take the "David" or "Jeff" or "Israel" factor out of the achievement of the high maturity.
That statement really stuck with me, and even more than five years later I have seen teams face similar struggles. That's why, I believe, a focus on effectiveness versus following a prescribed process is a better approach. What's effective for that lone app developer likely won't be effective for a 50+ person development team building the avionics for a new airliner. What's effective for a group of people customizing a CRM package may not be effective for people maintaining a legacy system running on a mainframe.

The book is intended to cut through the dogma of individual processes to help reveal practices and approaches that are effective in the context of the reader.

Currently I'm self-publishing the book on Leanpub, and you can subscribe to receive updates as they're produced. If you'd like to review the book, please let me know.

5 March 2014

Video of Effective Software Delivery - Agility Without the Dogma

I'm in the process of writing a book entitled Effective Software Delivery - Agility Without the Dogma. This video is an interview with me describing the history and concepts behind the book.

You can register to be notified of updates on the book's Leanpub page.

Please share if you like the video and concept for the book! I will be releasing updates as they're available, with a target of the end of 2014 for full publication.

28 February 2014

The Agile Store Has Been Launched!

After spending nearly two years working at Shopify, I took the plunge and actually created my own online store!

Remote Coaching
There are two aspects to the store. The first represents a simple way to view and purchase the coaching and training services I provide. For example, you can purchase blocks of time for both Remote and Onsite Coaching simply using a credit card. There's no need for the hassle of creating a purchase order or RFP!

The second is The Agile Store - a collection of 'swag' for people who have an interest in Agile methods. The items in The Agile Store are a fun way to show off that you're into Agile, and include T-Shirts, Mugs and Mousepads.

For example, for the Agile Coaches out there we have this mug to show off your effective but low-tech approach to work:

And who wouldn't want to rock a T-shirt like this one, showing your undying support for the Agile Manifesto?!

I Want to Believe - Agile Manifesto T-Shirt (black)
But wait... that's not all! Just to make sure you hammer the message home, you can get these TDD Cycle Mugs that will keep it front and centre.
TDD Cycle Mug

And there's plenty more in The Agile Store!

Sleazy Hype Guy
OK, so enough of sounding like this guy. :) But do please swing by the store and have a look. If you don't see anything you like, let me know and I'll try to get it up there for you.

The underlying message of the store is that we can have fun while delivering software systems. We also need to be mindful of the original reasons why the Agile Manifesto was created, and to ensure that we get back to its values and principles.

And just as one last teaser, the first 50 people who buy from the store can use the discount code FE26QKD146BP at checkout to receive 10% off their order.

27 February 2014

The Prodigal Son

I want Agile back.
Click for more!
(This post was inspired by Tim Ottinger.)

At Agile Coach Camp 2012 here in Ottawa, I led a session called "I'm done with Agile". The discussion during that session focused on how the marketing of various 'brands' had fractured the Agile world to the detriment of people and organizations who were trying to become more effective at their software delivery efforts.

I jokingly made a biblical reference, saying that if the Agile Manifesto represented the Ten Commandments, then we were in need of The Sermon on the Mount. Following the religious metaphor, Susan Davis had a much better way of expressing the sentiment:
I don't want to burn down the temple, I just want to throw the merchants out!
Religion and the politics of the Agile community aside, what the people in that session didn't know was that I really did feel that I was done with "Agile". I had grown so terribly disenchanted with trying to help organizations that were either beyond help or simply weren't ready for the kind of changes needed. Yes, there were some bright spots - most of which I didn't realize until later - but I was literally and figuratively tired of working so hard and seeing so little progress.

At that point, I had left the coaching/consulting world and had taken a full-time position at Shopify. Here was a young, vibrant, successful organization that was the antithesis of many places where I had coached. They were being an agile company vs. trying to do Agile. My role at Shopify was effectively an internal coach, but I wanted so little to do with anything 'Agile' that I came up with 'Sherpa' for my title.

So, I mostly dropped out of the Agile community. I stopped attending Agile Ottawa events, and left the organizing group. I blogged very rarely. I was still active on Twitter, but much less so about agile topics.

During my time at Shopify I watched a development group of about 100 people ship software consistently, with many production deployments a day. Testing was taken seriously, product management worked closely with the developers, and everyone strove to keep improving. It wasn't perfect, but it was a damned sight better than most places I had coached.

What I realized during that time was that teams weren't using an Agile process like Scrum or XP (out of 20+ dev teams, no two used the same process), but they were being quite effective at shipping software. That word effective is the key - if you follow Scrum or XP to the letter, but aren't shipping what the consumers of your software need when they need it, you aren't being effective!

When Shopify and I mutually parted ways as an internal coach, I decided that I was ready to return to the Agile world. To beat the religious metaphor once again, I would be the prodigal son returning.

This time, though, I have a different perspective. While I do believe that XP is a great process and my default way of building software, I'm much more aware of what's effective in the context of a team, the organization in which that team exists, and even the technical environment the team has.

I don't want to teach Agile, I want to teach effective.

You won't see me with the letters CST, KCP or SPC after my name. I'm not interested in tying myself to a brand or spending ridiculous amounts of money for certifications. I'm interested solely in helping people, teams and organizations become more effective at delivering software. So interested, in fact, I'm writing a book about it!

Being effective is what truly represents the values and principles of the Agile Manifesto, and is a return to the roots of the Agile movement. As Tim Ottinger says, "I want Agile back." So do I... I want to believe.

21 February 2014

Technical Competence

I recently read an excellent book by David Marquet entitled Turn the Ship Around! David is a former U.S. Navy nuclear submarine captain who was given command of a ship that was the worst-performing in the submarine fleet. He leveraged his experience as a junior officer in previous assignments to attempt to bring his sub up to the expected norms of the Navy.

While the story is based in a military setting, anyone in the agile community would recognize that David was a classic servant leader, delegating responsibility and decision-making to the people closest to where the decisions needed to be made. Indeed, I found out about David because he was the keynote speaker at the Gatineau-Ottawa Agile Tour 2013!

I highly recommend reading the book, because it's essentially a textbook about how to turn around the mindset and actions of a group of people. I found that there were very direct applications of his concepts to what we encounter in agile adoptions for organizations larger than a single team! I won't go into all the details with a comprehensive review of the book, but I would like to touch on one point that really struck a chord with me.

David wrote an entire section of the book on Competence. When he started pushing control and decision making to lower and lower levels of the crew, they started to encounter safety and operational issues. He realized that the mistake he had made was to assume that the people had the technical competence to make those decisions when in fact they didn't have it. He states,
Control without competence is chaos.
This wasn't a knock against the people or their training, because under traditional leadership models they had relied on the competence of their commander to know what decisions were appropriate. So, David sat down with his officers and chiefs (chief petty officers - the naval equivalent of sergeants), and they hashed out the values and principles for the ship. Core to those principles was constant, continuous learning. Not just learning, but learning by doing.

This had the effect of pushing the ability (competence) to make decisions further and further down the chain of command on the ship. In the space of a year, the submarine went from having the worst ratings in its squadron to the best, receiving awards for its performance.

Anyone who has been involved in a transition to a self-organized, self-managed model would recognize many of the actions David took. He speaks of replacing a leader-follower organization with one that's leader-leader. The notion of empowering people, and the resulting increase in engagement and motivation are quite familiar.

The one thing, though, that I believe that David saw that many involved in agile transitions don't is the effect of not having the technical competence to make decisions. For example, if developers have been empowered to go ahead and change workflows within a product, do those developers have the proper skills to understand the consequences of those changes? Even if those developers aren't touching external-facing functionality, do they have the knowledge and skill to understand how their changes will affect the overall architecture of the system? When business people are making decisions about features and schedules, do they really understand the capacity of the group or groups who will implement those features? Are they including them in the discussions? When operations people are making infrastructure changes, do they understand the ramifications on the business if there is an outage? Do people anywhere in the organization feel that they can question decisions by their leader without repercussions because they have enough knowledge to be able to understand the effects of the decision?

To truly be effective, people building systems need to ensure that constant learning is taking place and being shared in order to push the required knowledge and skills to as many people as possible so that they can make these decisions. This will create the kind of environment where people are motivated to do great work and feel they can experiment with different approaches and technologies. This is the fertile soil from which innovation grows, but it can only work if everyone is constantly learning.

As a final note, it took a full year for the submarine David commanded to transform. There were potholes along the way, and David mentions more than once that he questioned himself and the process and thought about simply reverting to the old way. He had his commander's support, and persevered. This isn't something that can happen overnight; it takes time and patience.

In the end, though, the engagement and innovation create a great environment in which the whole becomes greater than the sum of its parts.

18 February 2014

Forgiveness vs. Permission

It's easier to ask forgiveness than it is to get permission. -- Grace Hopper 
This phrase is oft-cited in the world of agility. Don't wait for permission to do something - just go do it! If someone complains after the fact, simply beg forgiveness. After all, the business will be better off owing to your initiative. Innovation doesn't come from committees, nor from the faint of heart who fear the consequences of action.

So, just freaking do it and sort out the problems while the dust settles.

We celebrate this brash entrepreneurial streak in western society as something to which we all should aspire. People like Steve Jobs and Richard Branson are held up as poster children for this attitude that rules are made to be broken! But for every Jobs and Branson, how many people are actually punished for their actions, their impudence, their insubordination?

I have been, and I probably will be again.

A number of times in my career I've taken that initiative because I felt that it was the right thing to do. I quietly gave some key users early access to a system in order to obtain feedback. I broke the corporate rules and set up a dial-in remote access box so that I could perform support & maintenance without having to waste time traveling from building to building. I've gone ahead and published not-so-great news when it wasn't likely to be received well. I did those things because they were the right things to do in those situations.

In some cases, I was simply asked not to do it again (and quietly thanked). In another case, it contributed to me losing that job. And that's the key...
If you are going to simply do it and ask for forgiveness later, you had better be prepared for the repercussions.
If you rock the boat, you may indeed get the job done. You may indeed get information to those who need it when it would otherwise be hidden. You may also be called out as "not a team player". You may be ostracized by those who have a vested interest in the status quo. You may simply be punished because those in power don't know what you know! Regardless, there will be consequences to your actions.

Barry Schwartz gave a great TED talk a few years back called Our Loss of Wisdom.

In the end, which do you believe you should do? What's right, or what the rules and the cultural norms of your organization dictate? I was willing to accept the consequences of what I did because I knew that I was doing what was right. I'm also quite certain that I don't fit very well in organizational cultures where you're expected to share only good news and "toe the party line".

So, if you feel the need to ask for forgiveness rather than permission, good on you! Just be sure you realize that such actions have consequences.

24 January 2014

Pragmatic Agile - Done is Better than Perfect

Over my years in software I've heard a lot about using a common sense or pragmatic approach to building systems. I've worked in organizations that borrow heavily from the 37signals books Getting Real and Rework, which essentially define pragmatism in the context of shipping software. There's a lot to like about their messages and they're certainly in line with what I've learned and applied since I "joined" the Agile world in 2000. I highly recommend both to anyone who wants to improve how they ship systems.

There's one particular aspect of this approach, though, that can be worrisome.
"Done is better than Perfect."
On its face, this seems like great advice. Don't gold plate the software! Ship something for crying out loud! I won't argue with that recommendation at all. Hell, I give that advice to people myself and even wrote about the evils of Perfection Obsession a while back.

However, you have to be very mindful of what Done means in your context. Another tenet of the 37signals approach is "Half, Not Half-Assed". This means simply that you provide partial but meaningful and useful functionality to the consumers of your software which, from a technical standpoint, has quality that's approaching perfect! This is the key part of the message that I have repeatedly seen ignored.

"Done" and "perfect" refer to the scope of the functionality, not its quality.

Quality is non-negotiable. It has to be so good that the people who use the software simply don't notice. It has to be so good that if something does go wrong, it was such a strange corner case that no one could have thought of it beforehand.

One of my key takeaways from Extreme Programming has been that I've learned how to push quality to near zero defects at a reasonable cost. I've been able to work from the assumption that anything greater than zero defects is unacceptable rather than the opposite, where defects are simply expected as a natural part of developing software. Couple that with Exploratory Testing performed by people who really understand how to test software, and again the quality level rises and defect rates decrease even more.
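Those automated checks are tied back to explicit acceptance criteria. As a minimal sketch of what that looks like, here's a hypothetical criterion ("orders of $50 or more ship free") expressed as automated checks in Ruby with Minitest; the ShippingPolicy class and its threshold are invented purely for illustration.

```ruby
require "minitest/autorun"

# Hypothetical acceptance criterion: "Orders of $50 or more ship free."
# ShippingPolicy and its threshold are invented for this example.
class ShippingPolicy
  FREE_SHIPPING_THRESHOLD = 50.00

  def self.free_shipping?(order_total)
    order_total >= FREE_SHIPPING_THRESHOLD
  end
end

# The acceptance criterion expressed as automated checks, including
# the boundary case on either side of the threshold.
class ShippingPolicyTest < Minitest::Test
  def test_orders_at_or_above_the_threshold_ship_free
    assert ShippingPolicy.free_shipping?(50.00)
    assert ShippingPolicy.free_shipping?(75.00)
  end

  def test_orders_below_the_threshold_do_not_ship_free
    refute ShippingPolicy.free_shipping?(49.99)
  end
end
```

Checks like these run on every change, so a regression in the boundary condition is caught in minutes rather than surfacing weeks later as a defect report.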

All this is well and good, but what if a new feature passes all of its automated checks, the code has been peer-reviewed and given all the requisite blessings, it has been explored so thoroughly that the testers became bored, and the software is shipped into production, yet no one uses that feature? What happens if it doesn't actually do what the users of the system wanted? Was it actually done? It certainly wasn't perfect!

"Done" also needs to contain the notion that there's a valid business reason for delivering the functionality. If even the smallest feature is delivered "just because", then the time, effort and money were wasted.  This doesn't mean, though, that you shouldn't deliver features that haven't been requested by consumers of your system. After all, they may not even realize that they have a need for the feature until they get their hands on it. It does mean that, especially in that case, you would want to deliver the tiniest possible functionality to expose the assumptions to the real world in order to validate that they were correct.

Shipping something is by far the best way to obtain the feedback required to build systems that people love to use - that delight their customer. Sacrificing quality to do that may seem to help you move faster initially, but it will slow you down over time as the system (and the defect list) grows.

Ship very small slices of functionality as quickly as you can and your market can bear, but do so with the view that the quality of that functionality must be nothing other than "the best".

If you need help doing that, I happen to know a coach... :)

19 January 2014

The First Rock

Dave Thomas and Andy Hunt, the Pragmatic Programmers, used the concept of "broken windows" to describe the condition of software entropy. I've developed a very simple heuristic for determining when a software system has reached a tipping point with respect to quality. Following their analogy, it's the moment the first rock is thrown and breaks a window.

This heuristic can be used to make some crucial decisions about the future of the system and where the people developing it need to focus their efforts. It may also be an early indication that you need to think seriously about simply replacing the system altogether.

Suppose your system has a list of defects, and we'll assume that the list has been curated well enough to remove items that aren't true defects but rather feature changes or additions. The tipping point for quality occurs the moment your defect list has grown large enough and contains defects complex enough that there's a decision to mark lower priority defects with the status of "WON'T FIX", or to simply close them without fixing.
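To make the heuristic concrete, here's a minimal sketch in Ruby that scans a defect list for that tipping point. The record shape and status names are invented for illustration; a real version would query whatever tracking system you actually use.

```ruby
# Statuses that mean a defect was closed without being fixed.
# These names are hypothetical; map them to your own tracker's values.
SURRENDER_STATUSES = ["wont_fix", "closed_without_fix"].freeze

# The "first rock" heuristic: has any defect ever been waved away
# rather than fixed?
def first_rock_thrown?(defects)
  defects.any? { |defect| SURRENDER_STATUSES.include?(defect[:status]) }
end

defects = [
  { id: 101, status: "open" },
  { id: 102, status: "fixed" },
  { id: 103, status: "wont_fix" }  # the first rock
]

puts first_rock_thrown?(defects)  # prints "true"
```

The point isn't the code, of course; it's that the signal is binary and easy to detect the moment it first appears.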

Once that happens, once you compromise to that extent on quality, you're starting on the steep downslope towards the immense pain of making changes to the system that break seemingly unrelated functionality. You're wandering into the wilderness of simple changes that take increasingly longer to make. Your ability to quickly react to market opportunities gradually erodes away.

How do you prevent this? The answer is quite simple - work from the perspective that anything more than zero defects is unacceptable.

I know you're laughing at this point. What a ridiculous thought! Software is far too complex! Defects are inevitable!

At some point in the history of software development, a person or group of people somewhere made a conscious decision to change from "defects are unacceptable" to "defects are inevitable". I completely understand that when computers were powered by vacuum tubes you were at the mercy of blown tubes and actual, living bugs causing short-circuits, but those days were well over a decade before I wrote my first Hello World program in BASIC in 1981. As I wrote in Waterfall Works!, the assumptions underlying many of the techniques we use and approaches we take today are based on the realities of computing environments in the 1950's, 60's and 70's. They don't need to apply anymore.

When I learned about Extreme Programming in 2000, the ideas of working on very small slices of features with rapid feedback, all backed by acceptance criteria and multiple levels of automated checks, opened my eyes to the possibility of shipping software with near-zero defects at essentially no extra cost over the medium to long term. Suddenly, the impossible seemed possible, and yet another outdated assumption bit the dust. Since that time I've also learned about Exploratory Testing and other complementary techniques to drive quality even higher.

Since being introduced to these concepts, I've seen many groups and systems where User Stories are created in prodigious quantity, but not a single one has acceptance criteria. There are thousands upon thousands of automated checks such as unit tests, but defects constantly fall through that net because few people are taking the time to use Exploratory Testing.

If we simply change our mind-set from one of "defects are inevitable" to "anything more than zero defects is unacceptable", and think hard about what it would take to achieve that goal, we can reach the nirvana of great software with levels of quality of which we had only once dreamed.

Ideally, you simply realize the value of the change and start down that path. Failing that, though, ask yourself if the first rock has already broken a window. Have you simply closed defect reports without fixing them? Do you have a "Won't Fix" status in your tracking system, and if so have you used it?

So, has that rock already been thrown?

15 January 2014

Pareto Must Die!

A couple of years ago I wrote a post entitled Fibonacci Must Die in which I discussed how the mathematical series had been abused by lending pseudoscientific credence to software estimation. I see similar abuses of the Pareto Principle, the famous 80/20 Rule.

I have watched entire conversations stop when Pareto is invoked.  I've seen work shipped as good enough under the auspices of Pareto when in fact it didn't work properly or had unexpected side effects.

Like Fibonacci, there's an air of science when invoking Pareto, but the problem is that it's very dependent on the context. Would you like the software developers writing the code for a pacemaker to invoke Pareto? OK, so that's life-critical software. How about those at Amazon saying that the code that calculates shipping costs is good enough after hitting the 80% threshold? Suppose that code overcharges on 10% of orders and you don't even realize it. You could be losing a few dollars on each shipment, multiplied across a bazillion other customers. What if you were undercharged for shipping and a friendly FedEx or UPS person showed up at the door asking for a few more dollars? That isn't the end of the world, but it's certainly embarrassing for both you and Amazon. The Pareto Principle shouldn't be an excuse for doing a half-assed job!

The real issue is that we don't know if the ratio is 80/20, 70/30 or 60/40 or any other arbitrary combination.  For a pacemaker I'd certainly want it to be 100/0. What if the correct ratio is 0/100, meaning that the code shouldn't have been written in the first place? By what measure are we coming to this magical 80%? Lines of code? Time spent? Budget? Finger in the wind?

When using Pareto, there's a form of confirmation bias occurring. When the principle is phrased as, "You get 80% of the value for 20% of the work", it just sounds so good that we believe it must be true. We want it to be true! This bias is the origin of most urban legends, and is the raison d'être for Snopes.com!

So, because it sounds like it must be true we endeavour to ship software quickly with that 80% of the value.

That isn't completely bad, since my experience has consistently shown that shipping smaller groups of features earlier is a more effective way of delivering most types of software. Effective feature or story splitting enables this, and it's one case where I'm more than happy to encourage obtaining feedback as early as practical.  If, for example, you're still working on "must have" features as you near the end of a 12 month project, then it's likely that you haven't split those features as effectively as you could have. Again, though, why 80/20? Why not 99/1?

While I'm not the biggest fan of the research methodology behind the Standish CHAOS reports, I do think that their surveys about feature usage are interesting - that 45% of features in software are never used. Of course there's room for context and interpretation there, but it would be interesting to know in each context whether those features that aren't used were part of the 20% rather than the 80.

We also want to be careful not to use Pareto as a means to avoid work that we perceive to be difficult or monotonous. Just because something is difficult/expensive/monotonous doesn't mean we shouldn't do it. Similarly, I've often heard the phrase "low-hanging fruit" used to find features that are easy and a development team could complete quickly in order to provide a sense of progress. When I've heard this,  though, it has been in a context that's devoid of any notion of what's important to the business.

I'm much more a fan of what Josh Kerievsky calls "bargains". These are features that provide value to the business that's much higher than the effort required to ship them. That evaluation of value and effort is what's really required to iteratively determine the sequencing of features.

Iterating using Pareto is good, though... perhaps get 80% and evaluate.  Then get 80% of what's remaining and evaluate. Repeat until the confidence level of more than one person is high enough for the context in which you're working. Substitute the appropriate ratio for your circumstances.
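The arithmetic behind that iteration is easy to sketch: if each pass delivers a fixed fraction of whatever value remains, the leftover shrinks geometrically. A small Ruby illustration (the 80/20 ratio here is just the classic assumption, not a measured value):

```ruby
# If each pass delivers `delivered_fraction` of the remaining value,
# the fraction left after `passes` iterations shrinks geometrically.
def remaining_after(passes, delivered_fraction = 0.8)
  (1.0 - delivered_fraction)**passes
end

(1..3).each do |pass|
  printf("after pass %d: %.1f%% remaining\n", pass, remaining_after(pass) * 100)
end
# after pass 1: 20.0% remaining
# after pass 2: 4.0% remaining
# after pass 3: 0.8% remaining
```

Three passes at the classic ratio leave under 1% of the value on the table, which is exactly why iterating and re-evaluating beats invoking 80/20 once and walking away.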

So, while I have no issues with the concept behind Pareto as a means of shipping only the most important and impactful features first, I believe that the 80/20 ratio has been misused. Features have been shipped with much lower quality. The features shipped haven't been the ones that are most important to the people consuming them. Worst of all, 80/20 has been used as a way to kill conversations that could fix the first two issues.

Learn how to break features down into extremely thin slices, and deliver those with the highest possible quality.  If they're truly thin, maximizing quality doesn't cost much more than shipping crap.

13 January 2014


We all know that the only constant in life is change, and with that in mind I'd like to let the world know that I've moved along from a great 20-month stint at Shopify and I'm returning to the consulting world.

It was great being surrounded by such a vibrant group who were so engaged with shipping software and making their own rules as they went along!  I learned a ton about Ruby and Rails as well as the ins and outs of scaling software to many tens of thousands of users and crazy requests-per-minute (RPM) numbers.  I was fortunate to work with a number of real craftspeople who cared deeply about what they did and sought constantly to improve.  I was equally fortunate to work with some of the most approachable, enlightened management I've seen, from the CEO, Tobi Lütke, on down.

As for the future, I'm now available to work with organizations to build or improve their product delivery capabilities.  I can work with your developers to help them learn techniques to drastically improve quality in order to enable them to sustainably ship software faster.  I can work with your product management people to help them identify what to build and how to break it down into minimally small pieces in order to obtain feedback as quickly as possible and delight your customers.  I can work with your support organization to help them engage more directly with both customers and the development people in order to respond to real issues faster.  And, I can work with your management group to help them find effective ways to structure and oversee your organization in support of your people.

If you want more information about what I've done in the past, have a look at my LinkedIn profile.

If you're interested in some coaching or just want to chat about your organization, feel free to e-mail me at dave.rooney@westborosystems.com.