29 May 2011

Waterfall Works!

When I'm providing training or giving a talk on Agile Software Development, I love to shock the attendees with the following statement:
Waterfall works!
Gasps of disbelief abound... "WTF?!  This guy who has just described how he has been working with Agile for over a decade is telling me that Waterfall works?!"

The truth, however, is undeniable.  Tens and probably hundreds of thousands of software systems and products have been shipped and used in the 40+ years since Winston Royce's paper Managing the Development of Large Software Systems was published.  That process has been applied to everything from modern web-based applications to massive telecom projects, from tiny programs built by one person in a few weeks to systems with tens of millions of lines of code built by hundreds of people over multiple years.

We simply cannot say that Waterfall doesn't work.  The catch is, though, that it's at best a sub-optimal way to deliver systems, especially now in the 21st century.  When I present this to people, though, I frame it in the context of when Dr. Royce wrote his paper and presented it at the IEEE Wescon conference.

First, if you actually read the paper, you'll notice the classic waterfall model at the top of page 2, showing serial steps with each being fully completed before the next starts.


If you read the very first sentence of the very first paragraph after that figure, you will see this:
I believe in this concept, but the implementation described above is risky and invites failure.
So, Dr. Royce actually knew that the process adopted by so many people was flawed from the start!  He goes on in the paper to show an iterative model that would be quite familiar to anyone in the Agile community:


Too bad most people didn't read past page 2 of the paper!

The second point I make is that we need to consider when Dr. Royce presented this paper: at the IEEE Wescon conference in August 1970.  I was a month away from starting Kindergarten then, and Ron Jeffries was still in his first decade of programming professionally. ;)

Let's also consider the computing environment in 1970.  A mainstream IBM System/360 in 1970 had a single processor capable of up to 0.034 MIPS (34 KIPS), a theoretical maximum of 16MB of memory (typically 256KB main memory and 8MB secondary), and 225MB of disk space.  It cost about $50K per month (in 1970 dollars!) to lease, and about $15 million to buy.


Compare that to my mainstream (2011) Android phone, which has a processor running at 740 MIPS, 384MB of RAM and 16GB of SD secondary storage.  Its retail price was $349.
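MIPS comparisons across 40 years of wildly different architectures are crude at best, but even as a crude measure the gap is staggering.  Here's a quick back-of-the-envelope calculation using the rough figures quoted above (and ignoring four decades of inflation, which only widens the gap):

    # Rough comparison of the 1970 mainframe and the 2011 phone described above.
    # These are the approximate figures quoted in this post, not exact specs.
    mainframe_mips = 0.034
    phone_mips = 740
    mainframe_price_usd = 15_000_000   # purchase price, 1970 dollars
    phone_price_usd = 349              # retail price, 2011 dollars

    speedup = phone_mips / mainframe_mips                 # ~21,765x faster
    price_ratio = mainframe_price_usd / phone_price_usd   # ~42,980x cheaper

    print(f"The phone is roughly {speedup:,.0f}x faster and "
          f"{price_ratio:,.0f}x cheaper than the mainframe.")

In other words, the phone in my pocket is four orders of magnitude faster, and four orders of magnitude cheaper, than the machine Dr. Royce's contemporaries were programming.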


We also need to consider the programming environment in 1970.  Most work was done with punch cards, and it could take hours to determine whether a large card deck had even compiled, let alone run, let alone worked properly.  Given that environment, of course you're going to spend a lot of time up front designing, writing and reviewing the code before punching the cards and submitting them.
Contrast that with contemporary IDEs such as Eclipse, IntelliJ IDEA, Visual Studio, Xcode, etc.  Those tools usually have a preference setting for how long they should wait before highlighting a syntax error or compiler warning!  We have tools and frameworks for unit and acceptance testing that can execute thousands of tests per second on our desktop machines.  We have automated tools to check for problems in our designs, and automated tools to safely refactor our code to improve the design incrementally.
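To make that contrast concrete, here's a trivial unit test written with Python's built-in unittest framework (the function and the backlog structure are invented purely for illustration).  The entire write-compile-run-verify cycle that took hours with punch cards now completes in milliseconds:

    import unittest

    def story_points_remaining(backlog):
        """Sum the estimates of the unfinished items in a backlog."""
        return sum(item["points"] for item in backlog if not item["done"])

    class StoryPointsTest(unittest.TestCase):
        def test_only_unfinished_items_are_counted(self):
            backlog = [
                {"points": 3, "done": True},
                {"points": 5, "done": False},
                {"points": 8, "done": False},
            ]
            self.assertEqual(story_points_remaining(backlog), 13)

    if __name__ == "__main__":
        unittest.main()  # runs the whole suite in a fraction of a second

Run that from the command line, or from inside any of the IDEs above, and the feedback is effectively instantaneous.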

The final point I try to make is that, at the time he wrote the paper, Dr. Royce was working with the IBM Federal Systems Division on projects for the U.S. Department of Defense.  These projects were typically very large - hence the name of the paper - and the contracting model employed treated the development of software the same as the construction of a building.  Given the tools and development environment of 1970, that analogy was much closer to the truth then than it is now, but it was still a flawed view of software development.  We now know that software development is a design activity, not unlike the work that architects do in designing a building or aerospace engineers do in designing an aircraft.  Both of those domains use many iterations to create and refine their designs prior to the actual construction work.  The key difference is that software development is almost entirely a design effort; the construction aspect is merely compilation, linking and deployment.

So, yes, the waterfall model works - you can deliver software that way.  Our view of how software is developed, however, has been refined over the decades, and the dizzying pace of technological advances has enabled us to move to more iterative, incremental approaches to delivering systems.

In the end, it's not that waterfall doesn't work, it's that we no longer need it.

25 May 2011

The "Real" Work

Quite often I hear teams lamenting the number and duration of meetings they must attend.  I've encountered this in organizations large and small, in both the public and private sectors.  It usually manifests itself like this:
When can we finish with all these meetings and get some real work done?!
I can certainly sympathize with that feeling... it seems like teams are meeting constantly:
  • Daily Standups
  • Sprint Planning Parts 1 & 2
  • Sprint Review
  • Retrospective
  • Backlog Refinement
  • Design Workshops
  • Release Planning
Those are typical meetings that occur in the single-team, single-backlog version of Scrum.  If you are working in a scaled environment with multiple teams, you likely need to add:
  • Joint Backlog Refinement
  • Joint Retrospective
  • Scrum of Scrums for team coordination
That's a lot of meetings, and a lot of time not doing real work.  Or is it?

Years ago I saw an e-mail signature line from Ron Jeffries that really struck me:
We accomplish what we understand. If we are to accomplish something together, we need to understand it together.
In other words, it's the shared understanding of the work to be performed that is critical to the success of that work.  Every requirements process in existence, from binders full of "the system shall..." statements to a business person sitting beside a developer giving instructions, seeks to achieve the same goal - a shared understanding of the work to be done.

As this video shows, a shared understanding is critical to success:



While the video is intended to be humorous, how many examples of misinterpreted requirements can you remember from previous projects?  The words may have been clearly written in a document, but what the author of the requirement meant was "see if he's still alive" while the development team interpreted the requirement to mean "please ensure he's dead".  Even when the requirement is stated face-to-face to a representative from the team, that person may carry a misinterpretation back to the team.

In most organizations in which I've coached, the transition to an Agile process meant that all team members were now involved in the meetings mentioned above, rather than a select few.  There are some exceptions, such as the joint meetings held in the scaled model of Scrum, but the general rule is to be much more inclusive about who attends meetings rather than limiting attendance to, for example, only the more senior people.

On the face of it, this inclusive nature seems rather wasteful.  Where you once had a weekly team meeting for an hour or two, you now have daily meetings that can take 15 minutes each plus follow-on discussions.  Where you once had perhaps two team members attending requirements meetings of a few hours, you may now have an entire team.  So, yes, the amount of time spent per person in meetings does increase.

The benefit of that increase is that everyone is hearing the same message, contributing to the same discussions, and moving closer to a shared understanding of the work - shared not only within the development team, but with the business for whom the work is being done.  That significantly reduces the risk of misinterpretation and increases the probability of getting the work right the first time.  It prevents costly rework when a mistake is discovered, and can even avoid unnecessary work altogether when the discussion leading to the shared understanding uncovers simpler ways to accomplish the business goal.

In other words, those meetings are real work.

18 May 2011

A Survival Guide for New Agile Coaches - It's Quiet... Too Quiet

Kids make noise.  It's that simple.  They talk, they yell, they bang and crash.  Their toys rattle and buzz and play music.  Put multiple kids together and the noise level increases with the square of the number of children.*

This is just a simple fact of life, and as long as the noise doesn't indicate injury or imminent doom, it's also a good thing.  You see, when young children suddenly become quiet, all sorts of nefarious things can be occurring.  Little Johnny, who only learned to walk last week, could be silently climbing the dining room cabinets or applying Mom's makeup to the dog.  Yes, children who are doing something they aren't supposed to be doing are as silent as ninjas, moving rapidly and seemingly without even touching the ground.

So, noise is good - noise is safe!

* Sorry, no hard data to back that up, but go to a birthday party with 8 kids of any age from 1 to 21 and then dispute my assertion! :)

Coaching Point

When working with teams new to Agile, a common concern I hear is that moving to an open "team room" will reduce productivity because of the noise.  People won't be able to concentrate because all of their co-workers will be talking about last night's episode of [insert TV show here], or will have death metal screaming from their speakers.  My experience just doesn't support those concerns.

While the transition may involve some discomfort as people become accustomed to working outside the confines of a tiny cubicle, the benefits of having a team in very close proximity are clear and proven.  One of the principles of the Agile Manifesto is explicit about this:
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Face-to-face conversations are very high-bandwidth communication channels.  Think of them as 10Gb/s Ethernet right to the back of your computer.  Not only is the content of the conversation being passed, but the tone of voice and body language of the people involved factor into the communication.

If you move that conversation to a video chat, you're still doing OK - perhaps a 100Mb/s connection - but some of the intimacy is lost as senses such as touch and smell are removed from the equation.  Move that conversation to a phone call, and you're down to a DSL connection, since the visual aspect is lost and for all you know the other person is making faces at you while you speak. :)

If that conversation moves further to e-mail, you now have a 56K modem, because not only are all of the physical aspects of the conversation lost, but the conversation loses its time element as well.  E-mails are asynchronous - if you send me a message, I don't have to respond to it immediately.  An e-mail, being written text, can also easily be misunderstood - how many times have you been in trouble because you forgot to put a smiley in a message? :)

Finally, using a written document to communicate is the equivalent of the old "phone in the cups" 300-baud acoustic coupler modem.  For those of you not old enough to know what I mean, here's a picture:



As a communication medium, a written document is simply awful.

So, we've made the case for face-to-face conversations, but can't we have those in a cube farm just as effectively as in an open team area?  Well, no, we can't.  The walls of cubicles create barriers to communication, even when someone is only a few cubes away, while an open workspace without internal walls encourages communication.  Indeed, the entire cubicle concept originated at Herman Miller in the late '60s as a way to make the effective "bullpen" work area concept more comfortable.  As we all know, it was subsequently bastardized into the Dilbert-esque cube farm in order to cram more people into the same amount of floor space as a cost-saving measure.

An open workspace where the team members can see and hear each other fosters face-to-face communication because the probability of communication is much higher.  MIT professor Thomas Allen studied this in the late '70s, and his results showed that the probability of face-to-face communication decreases rapidly with distance.  This is known as the Allen Curve, and I've personally witnessed its effect over distances of only 4-6 metres.

So, if you have teams of 7 +/- 2 people working in open workspaces, all communicating face to face, isn't there going to be a lot of noise?  Well, to an extent, yes.  This isn't a crowded pub - everyone can't be yelling!  What you should hear, though, is a 'buzz' on the floor: constant conversations that aren't loud enough for everyone to hear, but are effective for the people involved.  Team members have to be respectful of others, and I've found that simple tools such as foam bricks and Nerf weaponry are very effective at enforcing that point. :)

In the end, a workplace with a buzz on the floor signifies a healthy workplace.  After all, silence indicates trouble.  Noise is good - noise is safe!

15 May 2011

Simplicity, Planning and the Weather

(Thanks to George Dinwiddie for inspiring me to finish this post, which had languished for several weeks!)

Simplicity is a core tenet of Agile Software Development.  The 10th principle of the Agile Manifesto has to do with Simplicity:
Simplicity--the art of maximizing the amount of work not done--is essential.
Simplicity is a core value of Extreme Programming:
Simplicity: We will do what is needed and asked for, but no more. This will maximize the value created for the investment made to date. We will take small simple steps to our goal and mitigate failures as they happen. We will create something we are proud of and maintain it long term for reasonable costs.
Scrum was born out of the need to simplify heavyweight processes that didn't work:
Scrum, a scalable, team-based “All-at-Once” model, was motivated by the Japanese approach to team-based new product development combined with simple rules to enhance team self-organization.
Lean Software Development also supports the notion of simplicity.  In Mary & Tom Poppendieck's seminal book Lean Software Development, the words "simple", "simplicity" and "simplify" appear 76 times (thanks Kindle!).

Most people with whom I've worked assume that simplicity applies to the actual architecture and code of the software itself.  That's absolutely true, but as a principle and a value, simplicity needs to extend to the process as well.

For example, I've dealt with many teams who, during Sprint Planning, will spend anywhere from 10 minutes to an hour calculating the team's capacity in person-hours.  They try to factor in every possible contingency to ensure that the number is as accurate and precise as possible.  For a team new to Agile, this makes sense - if you haven't been used to delivering something every couple of weeks, you need to at the very least enumerate all of the things that team members do that use their time.

However, after the first sprint the team will have one very powerful data point - the amount of work that was completed to that team's definition of done.  Regardless of the unit of measure used for the backlog items, e.g. Story Points, that amount of work is fundamentally more accurate than any predictive calculation.

My experience, and that of many other coaches, has been that a team will be slower in their first sprint than in any other.  That's fine - the process is new and the team members need to get used to a new estimation process, etc.  After 2 to 4 sprints, though, the velocity stabilizes.  For most teams I've encountered, if you took the average velocity of sprints 2 to 4 you would have a very close approximation of their long term velocity, barring changes to the team.

So, the amount of time and effort spent during Sprint Planning to determine the team's capacity and how much work they can pull from the backlog quickly becomes wasteful.  They could simply use the technique from XP, coined by Martin Fowler, called Yesterday's Weather:
This is the principle that says you'll get as much done today as you got done yesterday. In iterative projects it says that you should plan to do as much this iteration as you did last iteration.
Martin goes on to say:
I remembered a story I think I might have read while at school.
Some country decides to build a sophisticated computer system to predict the weather. After spending more money than I can imagine, they come up with a wonderful result - and proudly claim that the system is 70% accurate. Somebody then figures out that in this country if you predict today's weather will be the same as yesterday's weather you will be 69.5% accurate. 
The point of course is that while Yesterdays Weather is a crude mechanism, it ends up being not significantly less accurate than more sophisticated (i.e. complicated) ways of doing it.
So, by simply using the velocity from the last sprint you can determine how much work the team should pull for the current sprint in about, oh, 5 seconds.  If there are going to be significant disruptions to the team, such as people taking vacations, etc., then you can factor that in.  That should increase the amount of planning time to about 30 seconds.

If you didn't complete any work to the done state in the previous sprint, which does occasionally happen, you can either use the velocity of the last sprint in which work was completed or use a historical average of velocity.  That may bump up the planning time to a minute.

Regardless, once a team has completed a couple of sprints, determining capacity and pulling the appropriate amount of work from the backlog according to that capacity shouldn't take any more than 1 minute... hell, I'll be generous and give you 2 minutes.
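To show just how little ceremony is involved, here's a minimal sketch of the whole "calculation" in Python.  The function name, the availability fudge factor and the sample velocities are all invented for illustration - nothing here is official XP or Scrum machinery:

    def sprint_capacity(velocities, availability=1.0):
        """Yesterday's Weather: plan to pull as much work as you finished last sprint.

        velocities   -- completed ("done") points per sprint, oldest first;
                        a sprint in which nothing reached done is recorded as 0
        availability -- optional factor for known disruptions, e.g. 0.8 if
                        vacations will cost the team a fifth of its time
        """
        completed = [v for v in velocities if v > 0]
        if not completed:
            return 0  # no history yet - estimate capacity the long way, once
        # Default to last sprint's velocity; because zeroes are filtered out,
        # a sprint with nothing done falls back to the last productive sprint.
        return round(completed[-1] * availability)

    print(sprint_capacity([18, 23, 21]))       # 21 - the 5-second plan
    print(sprint_capacity([18, 23, 21], 0.8))  # 17 - the 30-second, vacation-adjusted plan
    print(sprint_capacity([18, 23, 0]))        # 23 - nothing done last sprint? use the sprint before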

I'm serious.  It's that simple.