17 November 2011

Riding One Syllable to Success

Flow.  Play.  Slow.

Although I haven't read nearly as much as I would have liked, over the past decade I've spent a lot of time with my nose in a book learning whatever I could about the many facets of XP, Scrum, Kanban and other Agile topics.  Over the past two years, though, my thinking has been markedly influenced by three single-syllable words and the books about them.

Back in early 2010, it was Flow.  I presented Confessions of a Flow Junkie at the Agile Ottawa meetup in January and again at the main Agile Conference in Orlando that summer.  I spent a lot of time delving into work by people such as Mihaly Csikszentmihalyi and Don Reinertsen about how to establish and maximize the flow of work, knowledge and learning through a team, group and organization.  While many of the concepts and practices were well known to me, I did learn much more about systems thinking and took a much deeper dive into the world of Lean than I had up to that point.

Late in 2010, it was Play.  My wife had read Dr. Stuart Brown's book of the same name for a course she was taking, and suggested I read it.  It took me a good number of months, but I finally got to it and was immediately hooked.  Within the first 10 pages I had laughed out loud, cried and recognized myself in a lot of what Dr. Brown was saying.  I've always tried as much as possible to make work fun, but Play really drove home the need to use the techniques of play in order to foster creativity and help drive innovation.  Play also provided ways (or at least reasons) to break stagnant teams out of their funk.  It made it OK to have fun at the endeavour that takes up nearly half our waking hours as adults - work!  Indeed, I mentioned before in The "F" Word that every team with whom I've worked has valued 'fun' as something important to them in their work.  Dr. Brown's Play illustrates how that isn't only something pleasant, but from a business perspective it's quite lucrative - happy workers are productive workers, and that results in happy customers!

Finally, in late summer of 2011 I started reading Carl Honoré's (In Praise of) Slow.  Again, this was a book my wife had read for a course, but I have to admit I was skeptical.  Yeah, yeah... we have to slow down and live like organic-farming, granola-eating, Birkenstock-wearing hippies, yadda yadda yadda, peace, love, understanding and all that.  Then I saw something on the back cover of the book about the author receiving a speeding ticket in Italy (they have those there?!) while rushing to a meeting with one of the people he interviewed, and saw just a little of myself (really, only a little!).  So, I broke down and started reading it.  Again, there were many, many insights into a lot of what I was starting to feel was broken in how we worked and even how we lived.  I thought of a new ScrumMaster at a client who said one day,
All I want to do is not feel guilty for leaving work at a time that lets me be with my family.
She was by no means the only person who was feeling that way, and part of my job at that client as a coach was to accelerate their work even more... at least that's what her management thought!

Slow, the book, delved into deceleration in all aspects of our lives while keeping one foot in the reality of 21st century society.  Yes, we have technology and the ability to communicate like never before, but we don't always have to be 'on'.  We actually need to slow ourselves down at times in order to promote Csikszentmihalyi Flow, which goes back to my first single-syllable word.

While there was nothing specifically about software development in Slow, there were so many aspects of Honoré's message that apply.  Even teams, groups and organizations that successfully apply Agile processes such as Scrum can end up running in a hamster wheel, feeling like they can't stop to take a breath.  Nothing could be further from the truth; refusing to pause is actually harmful.  Like a golf swing, slowing down can lead to much more effective teams with the ability to deliver constantly in a sustainable way over the long term.

So, these three words - flow, play and slow - may seem simple (or simplistic), but they have much deeper meaning.  Focusing on them and making changes that promote each one can radically improve how a group works, not only today or this week, but for years.

Oh, by the way, I really don't mind organic food, actually like granola and find Birkenstocks quite comfortable. :)

9 November 2011

What I Do When Coaching

In the past few months I've seen more and more articles, discussion group entries and blog posts that talk about Agile Coaches using words such as charlatans, snake oil salesmen and '#@$!!!#$%$#' (I'm not exactly sure what that translates to, but I believe I can safely assume it isn't complimentary).

Of course, I have no control over what other people do to hang up their shingle as an Agile Coach, but I can talk about what I do as a Coach.

First and foremost, my job is to listen and listen a lot! From my initial meetings with a potential client to daily standups with teams with whom I've been working for weeks or months, I have to listen not only to what's being said, but how it's being said, and often what isn't being said.  Every coaching engagement starts with a discussion about what problem or problems the client is trying to solve.  Only then will I have even a vague idea of how to help, and even then I almost never have the full picture.  That leads directly to the second aspect of my job.

I ask questions... a lot of questions.  If you're annoyed by a toddler constantly asking, "Why", then there's a good chance I'm going to annoy you as well.  However, to be able to help I really need to ask all those questions.  Sometimes the answers alone are sufficient, but again I listen to how a question is answered, observe the body language, and look for avoidance or deflection in the answers.  Questions that receive direct, clear answers usually point to things that either aren't problems, or at least are problems that people are conscious of and can find ways to solve themselves.  The hairy issues are the ones behind questions that people don't want to answer, or that they deflect using responses like, "It has always been like that here!", or, "That group never cooperates with us!", or one of my favourites, "It's above my pay grade to worry about that!".

The third aspect of how I coach is to challenge assumptions.  Again, this can annoy people, but I usually point out that someone has hired me because they believe there's a problem that I can help solve.  Why does your build take 15 minutes after a trivial one-line change?  Has anyone tried building on a local machine?  Has anyone tried optimizing the build scripts or partitioning the build such that you only have to rebuild a small fraction of the system after that one-line change?  Has anyone actually tried having teams work in a common work area?  Has anyone actually tried pair programming?  Did someone from management actually say to the teams that they shouldn't refactor code so that it's maintainable over the long term?  Has management actually said to the teams that they expect them to refactor the code in order to make it maintainable over the long term?  I could go on and on...

The fourth aspect is mentoring in any number of Agile practices.  I come from an XP background, so I'm often called upon to work with teams on technical practices.  I also do everything from providing ab initio training to teams and working with them directly for a few iterations, to simply acting as a sounding board for people who have been working in an Agile environment for a while and want a different perspective on what they're doing.  In these respects, I'm an expert by virtue of my 10+ years of experience, and I'm constantly learning myself!

Finally, the most important part of what I do as a coach is to work myself out of a job.  A single team should be able to fly on its own after a few iterations of coaching.  The most important thing I help them with is learning how to reflect on their work and make their own improvements.  I can show them the mechanics of a process very quickly, but effective retrospectives are key to long-term and sustained improvement.  Lean principles are also a key aspect of this.  Essentially, I try to impart a mentality of "Things are good, but we can always find a way of doing better!"

On larger engagements with many teams, another way I work myself out of a job is to help develop an internal training and coaching capability for the client.  That allows the client to sustain the changes they have decided to make, and especially to tailor the training and coaching to their specific domain.  While I do learn a reasonable amount about a client's business domain during my engagement, there's no substitute for people who have been working in that domain for many years.  That perspective allows the internal coaches to provide better situational coaching than I might be able to do, because they already know the work, the people, the politics, possibly the code base, etc.  That said, looking back at my second and third points, my external voice can shake loose some assumptions about the current environment that an internal coach may see as issues or positions that can't be changed!

So, to summarize, here's what I do as an Agile Coach:
  • Listen
  • Ask boatloads of questions
  • Challenge assumptions
  • Teach/Coach Agile practices
  • Work myself out of a job
The length of an engagement can be anywhere from a few hours to many months depending on the situation.  I had one client where I spent about 6 hours going through the steps above, because their actual need was relatively small.  Other clients have taken longer due to the number of issues, the number of teams or a combination of both.  There's no set formula for determining how much coaching a client needs, although it generally becomes apparent once the engagement starts.

So, yes I call myself an Agile Coach, and yes I specialize in helping teams to use Agile values, principles and practices.  However, the actual work that I do may not be to teach a client Scrum.  It may not be to show them XP technical practices.  It may not be to talk about Lean.  It does, however, always follow the steps above.

Now that you know what I do as a coach, if you think I can help then give me a call! :)

13 October 2011

Agile is a Cop-Out?

In a blog entry entitled "Agile Software Is A Cop-Out, Here’s What’s Next", Forrester's Mike Gualtieri makes some bold statements about what he sees as hype and a lack of empirical evidence of success from the Agile community.

Now, Mike's post is bound to raise the hackles of many people in the Agile community, but I do agree he has some good points.

He doesn't specifically use the term "hype", but certainly the promise of hyperproductivity coming from Agile thought leaders like Jeff Sutherland doesn't help.  I have witnessed trainers routinely telling their students to expect incredible gains in productivity once they drink the Scrum kool-aid.  It's a great sales pitch - others are doing this and if you aren't achieving those sorts of increases then you must need more training or coaching!  The amount of Bad Agile or Bad Scrum that has resulted is a testament to both the hype that has pulled people into trying Agile and the difficulty in actually doing it well.

I will defend the authors of the Agile Manifesto, though, in that they were making a statement about the world of software development as it was in the late 90's and into early 2001.  Mike cites Steve Jobs using "insanely great" to describe products, but in February 2001 the world still hadn't seen its first iPod.  OS X was a month away from going GA.  The aluminum-cased MacBook Pro and MacBook Air were 5 and 7 years away respectively.  The iPhone was a pipe-dream.  Tablets had come and gone several times at that point, long before the iPad was developed.  So, most of that insanely great design was still well ahead of us when the Manifesto was written.

Also, as an industry we sucked at building software, and the dot-com boom hadn't helped much to fix that.  You had either code-and-fix or some hard-ass waterfall process to follow, and precious little in between.  It's no wonder that the Manifesto's authors focused on that problem.  "Working Software" wasn't narcissistic, as Mike states; it was something that was sorely lacking in early 2001 (and arguably still is today).

Mike mentions that he first wrote about his approach, Parallel Immersive Software Studio (he acknowledges the acronym that creates!), in 2008 and has refined the ideas since then.  Well, the software development world is a little different today than it was in 2001.  The success of Apple's products is mainly due to the total user experience, which is something that has grown and been refined over the ensuing decade.

I see a lot of good ideas in Mike's approach.  I certainly agree fully with the ISS parts of his approach, but I do have issues with Parallel.  Any time that groups go off in isolation to work in parallel, you incur multiple risks:
  • One or more groups go in the wrong direction;
  • You have to integrate the work at some point, and the longer you wait the more difficult the integration will be;
  • Based on Mike's description of parallel work, you are isolating experts in different areas, thereby decreasing your Truck Number.
Another point Mike makes is, "Parallel work streams can reduce uncertainty and the number of costly iterations."  Could I please see some empirical data to support that?  If he's going to criticize the Agile community for not providing empirical data, then I believe it behooves him to do the same.

Mike concludes the post with 5 key points about his process:
  1. Software is not code. It creates experience.
  2. Development teams are not coders. They are experience creators.
  3. Technical talent is “table stakes”. Great developers must be design and domain experts.
  4. Process is bankrupt without design. You get what you design so you better get the design right.
  5. Software is a creative endeavor, not an industrial process like building automobiles. The methodology is structured to support the creative talent.
I agree with all of these, although #4 concerns me.  What is Mike's definition of design?  What does it mean for a design to be 'right'?  How do you know when a design is right?  Is design ever done?  How much design is enough?

This segues nicely into my growing interest in the Lean Startup community.  Lean Startup assumes you're going to use good engineering practices (many from the XP world) to build the software because they enable you to obtain the kind of rapid feedback required to refine a total product design into something for which people will actually pay money.  What I don't see in Mike's post is anything about that sort of validation - his process may ensure that you build some kick-ass, well-designed systems, but if you can't sell the product because you didn't realize that no one wanted it, have you really succeeded?

Mike has some very good ideas in his post.  I also saw some good ideas in a similar "post-agile" blog entry by Michael Dubakov.  While I don't agree with all of the points the respective authors make, I do think this sort of "what's next" thinking is healthy and will help make the second decade after the creation of the Agile Manifesto even better.

After all, reflection and continuous improvement are core concepts in Agile and the ideas put forth by Mike, Michael and Eric Ries of Lean Startup provide exactly that.

10 October 2011

Great New Scrum Extensions!

This past week, Jeff Sutherland and Ken Schwaber announced via Scrum.org that Scrum is now open for modification and extension.  This is good, and addresses an issue that has been simmering in the Scrum community for some time now.

Fortunately, there have been some forward-thinking folks who got a jump on the rest of us, and had their extensions ready to go at the time of the announcement:


This extension, known by the rather provocative name Extreme Programming, was created by Kent Beck and is apparently targeted towards smaller teams and organizations.  It adds to Scrum by providing several engineering practices that many have indicated were missing.

If you are targeting a larger group or organization, then this extension known as Industrial XP may be a better fit:


This process was created by Joshua Kerievsky and adds business facing practices to Extreme Programming (and hence to Scrum) such as Project Chartering and Test-Driven Management.

My congratulations to these gentlemen for possessing the foresight to have extensions to Scrum ready the minute the announcement was made.

:)

5 October 2011

Goodbye Steve Jobs

It was about this time of year in 1981 that I dropped into my high school's Computer Science and Data Processing classroom to talk to a buddy of mine.  At the time I thought I was on a track that would lead me through high school to an aeronautical engineering or commercial pilot program somewhere, and I would become a professional aviator.  That would have surprised exactly no one, since I had been engrossed in aviation since... well, as long as I can remember.

I asked my friend what he was doing, and he showed me some little BASIC program he was writing that did some silly thing with the low-res graphics on the Apple ][ computer he was working on.  Hmmm... that's kinda cool, I thought.

It wasn't long after that I saw another buddy who had taken the same Data Processing class writing some more code.  I asked what he was doing, and this time he explained the program to me.  I don't remember what it was, but I remember how I could understand what the code meant... although he did have to explain why some of his variables had a "$" at the end.

Coincidentally, the Grade 11 Physics class I was taking was studying rectilinear motion, and we had learned several equations.  While I didn't have any problem doing the equations by hand, I wondered if I would be able to use this computer thing to do the calculations for me.  So, I asked one of my friends what material they had on BASIC, and was shown a shelf containing a raft of Apple manuals.  I politely asked the teacher if I would be able to use one of the 4 computers to write some programs during lunch and after school, and he happily agreed.

After some initial fumbling and stumbling and asking a lot of questions of the guys who seemed to know what they were doing, I figured out the mechanics of writing and executing a program.  I was then able to run calculations for one of the equations, cross-checking the result with my hand-calculated version (an early acceptance test!).  If I recall correctly, the next thing I did was purchase my first 5.25" floppy disk for, I believe, $8 from the teacher so I could save the program.  When I wanted to move on to the next equation, I created my first UI that allowed me to select which equation I wanted to run.

After a week or so, including some time with one of the math teachers to explain how to solve quadratic equations - which we hadn't learned yet - I had a fully functional program that some of the other students were interested in using.

And, I was hooked.

Next came explorations into graphics, both the clunky low-resolution and the much more interesting high-resolution modes available on the Apple.  I played with shape tables, learning how to move shapes smoothly around the screen.  When BASIC code became too slow for what I wanted to do, I had one of my more advanced fellow geeks show me this "assembler" thing I had heard was pretty fast.  That led to my first computer book purchase so I could dig even deeper.  I had gone over to the dark side... I could now lock up the computer faster than anyone had imagined!

I also started reading magazines such as Byte at the local library, and learned about the Apple founders, Steve Jobs and Steve Wozniak, and their vision for computers everywhere.  It was now early 1982, and my choice of future vocations had completely changed.

I remember the 1984 announcement of the Mac, and seeing one first-hand at my Uncle's place a little while later.  I still remember how easy it was to use, and how the floppy disks were so tiny!  The future had arrived, or so I thought.

When I went off to university, now to become a professional software developer, I was struck by how primitive the mainframes I now had to use seemed compared to the desktop Apple ][ I had.  These so-called "powerful" computers couldn't do anything remotely like the graphics I was able to write on the Apple and, dammit, I couldn't even directly access specific memory locations!

A couple of years later I was back on Macs while working summers at Bell-Northern Research, and back to that familiar, easy to use feeling.  It all just made sense, and it all just seemed to work (for the most part).

It wasn't to last, though.  At some point I had to work with PCs, and I was appalled at how primitive, how clunky and how... un-fun they were.  Over time I became used to the PC, although UAEs in Windows 3.0 constantly made me want to go back to Macs.  Unfortunately, all of the clients I served were PC shops.

It wasn't until 2006 that I was able to work with a Mac again, this time a MacBook Pro.  What a fantastic machine, although I had to leave it behind when I moved on from that company.  In the ensuing years it became more and more clear that not only were all the cool kids using Macs, but the people who I respected and did serious programming work used them.

I resisted the iPhone, initially, kind of as a matter of principle about Apple's closed model for the App Store.  Once the iPad came out, though, I was once again hooked and picked one up.  I haven't looked back.  I'm typing this from a MacBook Air, whose Bluetooth mouse has about as much computing power as the Apple ][ that started this 30-year journey.

Steve Jobs' vision made computers approachable to everyone.  Yes, there are more PCs running Windows out there, but Windows wouldn't exist if not for the original Mac.  Steve Jobs' vision also focused squarely on making the total user experience a primary driver in what they created and how it worked.  He will certainly be missed, but he does make St. Augustine's words ring true:
If you want to be immortal, live a life worth remembering.
I could have been that voice from the pointy end of the airliner you're strapped into, telling you how we're number 14 in line for takeoff at O'Hare and it'll "only" be another 30 or 40 minutes until we're ready to roll.  Because of Steve Jobs, I'm here writing about software development and helping others to improve how they do it.

Thanks, Steve.  My heartfelt condolences to your family, and may you rest in peace.

3 October 2011

Slow

Some recent events came together in one of those "The Universe is trying to tell you something, Dave" sort of ways.  It could all just be a coincidence, of course, but that wouldn't make much of a story, would it?

First, I've been reading Carl Honoré's book "In Praise of Slow", which discusses the rather negative effects that closely watching time has had on our society.  In the book Carl looks back at how we have become more and more accelerated as we've been able to keep better and more precise time.  Carl wrote about the Curator of Time at the Science Museum in London, who oversees a collection of some 500 timepieces from ancient sundials to atomic clocks.

This curator has what Carl described as a "claustrophobic relationship with time", including a wristwatch with a radio receiver that read broadcast time signals in order to set itself and maintain accurate time.  Carl spoke of how the curator would experience anxiety when the day's signal was missed and the watch could be 'off' by a millisecond or two.  Essentially the curator was obsessed with precise, accurate time.

The curator's name is David Rooney.  There's nothing quite like seeing your own name in a bestselling book about slowing down to catch your attention!!

The second event was actually a conglomeration of a few things, including watching the rain run down my home office window, that made me think of a quote I saw years ago from Kent Beck.  I recalled that it went something like, "when you have water and a mountain in conflict, bet on the water".  Off to Google I went to find the exact quote, and the very first hit was a short blog post I wrote about it in 2006... evidently it wasn't the first time I'd found that quote interesting.

The post referred back to a message in the XP Yahoo Group, in which Kent said:
Bruno pointed out the effects of water. If you have a mountain and water in conflict, bet on the water. The water knows where it is going--downhill. It actively works at getting there. If it gets blocked, though, it doesn't hammer away at the mountain, it flows around. Eventually, the water gets where it is going. When I keep in mind where I am going, when I respond to resistance with listening instead of belligerence, when I keep doing my work as best I can, then I have influence. Sometimes I miss the lack of drama and spotlight, but I appreciate the results.
These events on their own were amusing and informative, but then a third thing happened.  At a client this past week, a manager asked to talk to me.  I was asked for advice on how they could implement several things that I had been pushing them to do for about 18 months.

My frustration at the perceived lack of action had been growing, and I had begun to wonder about my skills as a coach.  I suppose the skill I should have been wondering about the most was Patience.  I expected people to change immediately and to listen to and implement every suggestion I made.  After all, I've been at this Agile thing for over 10 years... I'm a bloody expert!!  It seemed as if no one was listening, but in reality they were.  The people just either weren't ready to change, or had other forces in play that were preventing them from changing.  I simply needed the patience to act like water does - just flow around the rocks in order to get where you need to go.

Flowing water also has a funny way of eroding through rock in order to make it easier to get where it's going.  It just doesn't do it very quickly on a human time-scale.

The lesson to take from this is that we can make huge changes, but there are few circumstances where that can occur quickly.  The Badlands in Alberta and Montana in western Canada and the U.S. were formed by sudden, catastrophic outflows from huge lakes at the end of the melting ice sheets that covered North America about 10,000 years ago.  The Grand Canyon, however, took millions of years to create and occurred as the result of several other long-term geological processes that increased the speed of the Colorado River and thus its ability to erode the rock.

Both processes created roughly the same spectacular results, but there just aren't that many ice dams available holding back vast quantities of glacial meltwater that can quickly carve out new Badlands.  Similarly, there just aren't that many opportunities for Agile Coaches to quickly change the direction of an organization, and when there are, they're sure to have side effects similar to those of the catastrophic floods that created the Badlands.

The much more prevalent type of organization requires changes that occur over a geological time scale, not a human one.  That requires us to have patience.  It requires us to go slow.

It would seem that, like the people on whom I was pushing my views, I wasn't ready to hear or act upon the message.

Hey Universe... I get it now, OK?!

28 September 2011

The Need for Speed... and Why That's Bad!

I've written before about how software development teams are obsessed with going as fast as possible, and how that isn't necessarily a good thing.  That obsession isn't something that magically appeared with Agile methods - it has been around as long as I've been in the business.  I would assume that the mantra of "we want it yesterday, perfect and for free" came from somewhere! :)

I would suggest, though, that the 'need for speed' has become more acute with Agile.

Groups with whom I've worked are typically coming from a world of serial, phase gate processes.  A ton of time is spent defining and analyzing what will be built up front, and the construction phase is only a relatively small portion of the overall effort (at least on the Gantt charts!).

The approach that Agile methods take is to shift the beginning of construction forward, working from the start to verify the assumptions and decisions in the form of a real system.  This is absolutely a good thing, but I routinely find groups who think that ALL of the time that would have been spent analyzing and designing the system can be replaced by a few days in a requirements workshop.  After a couple of days in a week-long workshop people become frustrated and start saying,
Can't we just go start building this now?!
Generally this sentiment comes from people who had not been involved very much in the requirements and analysis phases while using a traditional process.  By the time these people received the specs for the thing they were supposed to build, they were already under schedule pressure and thus felt that any time spent discussing what was to be built was waste.

Using an Agile process is different.  Yes, we absolutely advocate spending much less time up front determining what is to be built.  We don't, however, advocate believing that in a few days the requirements for a complex system that integrates with other complex systems can be fleshed out into user stories that several teams can immediately pull into an iteration!  Jeff Patton suggests that 1 to 2 weeks is sufficient to create plans for 3 to 6 months of work.  Others suggest anything from a few hours to "as long as it takes".

Context, of course, is everything in this case.  One system may be able to make use of a Feature Fake that takes a couple of hours to hack together in order to run an experiment which will drive the real requirements.  For another system, the Feature Fake concept isn't just impractical, it would be illegal!
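
To make the Feature Fake idea a bit more concrete, here's a minimal sketch in plain Java.  The names are entirely hypothetical, and a real experiment would record the interest somewhere more durable than an in-memory list, but the essence really is this small: advertise the feature, record who asked for it, and politely decline to do the real work.

  import java.time.Instant;
  import java.util.ArrayList;
  import java.util.List;

  // Hypothetical Feature Fake: the UI offers "Export to PDF", but instead of
  // building the real exporter we record who asked for it.  That interest data
  // drives the decision about whether the feature is worth building at all.
  public class ExportFeatureFake {

      private final List<String> interestLog = new ArrayList<>();

      public String requestExport(String userId) {
          // The only thing we actually need from the experiment: the signal.
          interestLog.add(userId + " requested PDF export at " + Instant.now());
          return "PDF export is coming soon - thanks for letting us know you need it!";
      }

      public int interestCount() {
          return interestLog.size();
      }
  }

In the right context, a couple of hours of this kind of scaffolding can drive out real requirements; in other contexts, as noted above, it isn't even an option.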

So, in many cases you can't simply jump from many weeks or months of analysis and design to a couple of hours in a workshop and expect to have an adequate level of understanding of what's to be built.  You need to take the time - slow down - at the beginning to create a shared understanding of the business problem to be solved, and to an extent how you're going to solve it.

"HERESY!! Agile is all about going faster!", you say?

Well, no, it's not.  It's about doing enough to be able to build the right thing at the right time.  The definition of enough will vary from domain to domain, system to system, and even team to team.  By virtue of not doing things that you don't need and deferring things that aren't important right now, you will indeed appear to go faster.

It's altogether possible that given the same set of high level requirements, an agile process may take longer to deliver all of them than a serial process.  An agile process, though, would seek to identify the most important subset of those requirements and deliver those as early as possible in order to obtain real-world feedback and incorporate that into the product.

Agile doesn't simply mean "fast".  It also means, "able to change direction quickly".

22 September 2011

Who Should Perform the Sprint Review?

A concerned ScrumMaster came to me recently lamenting the fact that a Product Owner "ran" a sprint review.  My response was, "That isn't necessarily a bad thing."  The ScrumMaster then went on to describe how the PO in question directed everything in the meeting and didn't allow much discussion.  OK, so some context helps clarify things. :)

The question remains, though, is it always a bad thing for the Product Owner or Customer to perform the demo at the end of the iteration or sprint?

The Scrum Guide is certainly clear that the team performs the demo, but I come from the XP world where the Customer is involved on a daily basis (remember the 4th principle of the Agile Manifesto) and is seeing and accepting completed work as near to when it's completed as possible.  When that's the case, the demo is no longer for the benefit of the Product Owner but instead for the stakeholders outside of the team.

When I train & coach teams, I actively encourage them to have the Customer/PO perform the demo.  This, in turn, encourages the behaviour that the Customer is actively engaged with the team and that the work is truly done to the Customer's satisfaction such that they can present it to the stakeholders.  Since the Customer either is from or represents the business, the work completed is presented from that business perspective rather than from the team's technical perspective.  Being able to speak the same language as the stakeholders is a key aspect of this.

Of course, there is no absolute rule here.  I've seen many demos where the Customer sets the stage and the team members do the hands-on part.  For example, early in a system's development a development tool may be used to show that some background process worked because the stories behind that functionality had higher priority than the stories that would allow the system to show the results of the process.  The Customer may not know how to use the tool, but a team member does!

I've also seen demos performed by the team where the Customer is seeing the work for the first time at the demo, and they have gone just fine.  I've seen Customers who are more technical-facing than business-facing micromanage the work that the team is doing.  I've seen Customers who are completely disengaged and don't even show up to the demos, complaining many iterations later that the system doesn't do what they want.

In general, though, if your Customer or Product Owner is business-facing then I have found that work flows much more smoothly when that Customer is actively engaged with the team and is the person performing the demos.

I'm very interested in your thoughts... please comment!

16 September 2011

Technical Excellence in Scrum

Jeff Sutherland recently posted a message on the Scrum Development Yahoo Group regarding Scrum and Technical Debt.  Jeff mentioned why Scrum eschewed technical practices, specifically:
In 1995, Kent Beck asked me for everything on Scrum.  In a famous email he said he wanted to use everything he could and not reinvent the wheel.  The first Scrum team was doing all the XP practices in some form.  However, in 1995 when Ken Schwaber and I started rolling out Scrum to the industry, Ken though [sic] we should focus on the framework as it would lead to more rapid adoption and teams should use the impediment list to bring in the engineering practices as needed.
I applaud Jeff for saying this, and one certainly can't argue with Ken's assertion that avoiding technical practices would lead to more rapid adoption.  I would argue, though, that very few teams use the impediment list as a means for improving their technical practices on an as needed basis.  In my own coaching experience, that number is painfully close to 0.

Jeff goes on to call out the coaching and training community:
At Snowbird this year, Agile leaders from all over the world convened to do a retrospective on 10 Years Agile. The prime directive that was unanimously agree upon by all present was that in the next tens years Agile leaders must Demand Technical Excellence. Failure to do that means you are not an Agile leader. We are sloppy in our coaching and training. If stuff is not done and/or has bugs at the end of the sprint, the team is not showing technical excellence and is not agile. We need to be clear about that.
And:
One of the reasons we have so much technical debt is Agile leaders are not coaching and training well enough.
I suppose that from a results-based view, Jeff is right.  We obviously aren't coaching and training well enough.  However, I don't feel that's universally the case, and I've certainly pushed for technical excellence since I first started telling others about XP in 2001.

I responded to Jeff, venting some of my own frustration in the process.  Here is that response:

Hello Jeff,

I agree 100% with everything you say on this topic except, "...Agile leaders must Demand Technical Excellence. Failure to do that means you are not an Agile leader. We are sloppy in our coaching and training."

If I had a dollar for all of the times I have pointed out practices that were contributing to technical debt, I could take a very nice vacation.  If you add the number of times I've shown practices (through demos, pairing, etc.) that help reduce technical debt, I could extend that vacation significantly.  If you add the number of times I have received the "we don't have time" excuse, despite my pleas to consider all of the issues leading to debt as impediments, I'd be getting into early retirement territory.

I learned "Agile" from XP back in 2000.  Prior to that I did what I could to achieve technical excellence.  The XP technical practices, most importantly TDD, improved my level of excellence considerably, and I've been teaching those practices ever since.  I'm constantly astounded by the number of supposedly competent software developers who don't believe that writing any automated tests for their code is worthwhile, let alone using a practice like TDD.  I have repeatedly shown how TDD results in simpler, more robust code, and yet there is still a huge amount of skepticism about the practice.  Don't get me started on Pair Programming.

I have beaten myself up many times thinking that I'm not coaching well since the people with whom I'm working aren't using these practices, and don't see the value despite their velocity eroding after a half-dozen sprints or so.  I've beaten myself up when these teams don't think it's a big deal when backlog items aren't complete at the end of a sprint, despite my advice to simply finish what can be finished to the DoD and use the Retrospective to figure out why some items weren't completed.  I've beaten myself up when teams don't listen to my advice about ensuring that all of the team members are on the team 100% of their time, and people get pulled away and not all items are completed in a sprint.

Frankly, I'm tired of beating myself up when people don't want to listen, but that's a different issue.

Over the past few weeks I've done a lot of soul searching with respect to my work as a coach, discussing it with other people ranging from my wife to the original XP Coach himself.  I'm by no means perfect, but my conscience is clear when I say that I have NOT been sloppy in my coaching and training.  I have busted my ass trying to get people to adopt the types of practices that will lead them to technical excellence.

I suppose you can lead a person to knowledge, but you can't make him think.

18 August 2011

The Simplest Thing That Could Possibly Work

In April 2011 I attended the Agile Games 2011 Un-conference in Cambridge, MA.  Given its proximity to MIT, I'd like to believe I came away smarter simply through osmosis, but time will tell. :)

On the first day after Luke Hohmann's keynote, I hummed and hawed about which Deep Dive session to attend.  While many of the games sessions sounded interesting, I kept coming back to Adam Sroka's Coding Dojo.  So, I decided to attend and write me some code!

The goal was to show how to use the dojo concept as a practice or teaching technique, and in our case the goal was to build a poker game using true TDD.  To get the groups up and running quickly, Adam suggested that we spend a little time simply creating the basics for the card game War, which would provide the foundation for the rest of the work we would do.

When we started, I witnessed for the umpteenth time something that seems to be really, really hard for software people to abandon... doing the Simplest Thing That Could Possibly Work.

This principle is something that I learned early in my XP days.  It goes hand in hand with TDD and says,
"Don't do any more than is required to make the currently failing test pass."
With the card game dojo, the group with whom I was working immediately started whiteboarding out the entire system with classes and methods, including some way to display the cards.  While that isn't necessarily bad, they were going way too far for what was needed at that particular time.

After about 15 minutes of trying to wrangle these designers, I asked them, "So, what's the first test?"  Crickets.  I asked, "What's the most fundamental thing we need to be able to do in order to implement this game?  What's the first test to start doing that?"  More crickets, followed by a return to the whiteboard.

Moss Collum, however, did get what I was asking and he & I simply sat down at his laptop.  We figured that the most fundamental part of any card game is to be able to compare the value of two hands against each other, which means being able to compare two cards against each other.

So, using ping-pong pairing, Moss wrote the first test which was something like:
eightOfClubsIsGreaterThanTwoOfHearts
I wrote the code to make that test pass, but to do so I only needed to implement the value of the card.  The test had no concept of the card's suit being relevant!  So, once the test passed we refactored just the test name to something like:
eightIsGreaterThanTwo
Moss and I gave out a cheer when our first test was passing, which caught the attention of the folks at the whiteboard.  I wrote another test, and Moss wrote the bare minimum code to make the test pass, then refactored.  He wrote another test, and we invited another group member to sit down and make it pass.  Rinse and repeat.  Over time we built out the game to the point of detecting specific poker hand patterns.
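
For anyone curious just how small those first steps were, here's a rough reconstruction in Java with JUnit.  The dojo's actual code is long gone, so the names here (Card, isGreaterThan) are purely illustrative, but the shape is right: one failing test, then the least code that makes it pass.

  import org.junit.Test;
  import static org.junit.Assert.assertTrue;

  public class CardComparisonTest {

      // The first test, renamed after it passed because suit never came into play.
      @Test
      public void eightIsGreaterThanTwo() {
          assertTrue(new Card(8).isGreaterThan(new Card(2)));
      }
  }

  // The simplest Card that could possibly make the test pass: a value, no suit.
  class Card {
      private final int value;

      Card(int value) {
          this.value = value;
      }

      boolean isGreaterThan(Card other) {
          return this.value > other.value;
      }
  }

Notice what isn't there: no Suit, no Hand, no Deck, no display code.  Each of those appeared later, and only when a failing test demanded it.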

It was a constant struggle, though, to keep people from implementing more functionality than the current test required.  Several times I heard, "But we're going to need this anyway!"  Maybe.  Maybe not.  It's in that speculation that the danger in over-thinking and over-engineering a solution lies.  If you're speculating, some of that speculation is bound to be incorrect.

This struggle isn't limited to coding dojos at a Games conference - I hear it constantly when coaching teams.  After all, they're different - they have all sorts of reasons to write complex code!  Well, their problem domain may be complex, but that doesn't mean that their code has to be as well.

Don't confuse Simple with Doesn't Work in the Real World, either.  If you're writing a quick utility application to crunch some data for yourself, you may not need to worry about scalability.  If you're writing code for telecom switching equipment, you certainly do need to consider a very different list of "-ilities".  However, at the core the code should be as simple as is required to solve the problem at hand.  Test-Driven Development is an excellent way to do this, since the tests focus the developers on writing just enough code to solve that problem.

By sticking to the mantra of Do the Simplest Thing That Could Possibly Work, you will avoid or at least significantly delay the type of code bloat that results in Big Balls of Mud.  You will also avoid spending a ton of time on code that wasn't needed, not to mention the time saved maintaining that code in the future.

Of course, that isn't a problem for you... you have plenty of spare time on your hands, right?

15 August 2011

I'm Not a Resource

A good number of people have written that we should not refer to people as "resources", and there are plenty of reasons why this is so.  However, I continue to hear the term resource applied to human beings on a daily basis.

I've created a video on YouTube called, "I'm not a Resource".  If you agree, please comment either here or on YouTube.  If you would like, send a short (no more than 10 seconds) video clip to daverooneyca (at) gmail (dot) com of you saying, "I'm not a resource.  I'm a person.", and over time I'll create an aggregate video of all the responses.

So, here is "I'm not a Resource":



28 July 2011

They are All Wrong

My wife grew up spending half of every summer living in Algonquin Park.  My father-in-law was a bush pilot for the Ontario Ministry of Natural Resources and was based there for all of August every year, living at the cottage at Smoke Lake.  While she was there, she had the opportunity to live with a whole lotta nature literally right at the doorstep, from chipmunks to moose to black bears.

She was taught that you had to be wary of bears, but other than when you got in between a mother and her cubs, bears were actually more afraid of us than vice versa.  A few years ago while camping with the kids, we were startled by 2 gunshots at around 11:30 PM.  It was a park ranger who had shot a "nuisance bear" that was wandering around the campsite rummaging through the kitchen tents and cookware of people who were too stupid to put away anything that would attract bears to their site.  This adolescent black bear wasn't the huge menacing grizzly that you typically see on television, but rather a relatively small 200 pound creature just trying to find some food and availing itself of the easy pickings left by ignorant campers.

This event sparked my wife to start doing something to prevent a similar occurrence in Algonquin.  She did some research about other parks in Canada and the U.S., and spoke directly with the park Superintendent.  He was actually quite cooperative, and the park management did implement some measures to at least warn people that they need to be more "bear-aware".

She also discovered the work of Minnesota-based biologist Dr. Lynn Rogers.  He has been working with bears since 1967, and you can find videos of Dr. Rogers where he simply walks up to the bears with whom he works and interacts with them as if it were nothing special.  A couple of months ago, while surfing Dr. Rogers' site, my wife discovered that the 20th International Conference on Bear Research and Management was going to be held here in Ottawa and that Dr. Rogers would be attending.  So, she signed up for a 1-day pass and attended.

Dr. Rogers is an advocate of actually providing food for bears that will supplement their natural diet, with the goal of helping to avoid having bears go searching for food around people.  His research has shown that bears will naturally eat foods that are nutritionally the best for them, and given the choice between easy pickings from humans and a supply of berries and nuts they will happily avoid humans and choose the latter.  His program provides supplies of food that aren't as good for the bears as their natural diet, but are good enough that a bear that hasn't had enough of its regular diet will choose the planted food over making contact with humans.

Needless to say, this flies in the face of conventional wisdom - I've grown up with the saying, "A fed bear is a dead bear." - and Dr. Rogers is in the minority with his approach.

At the conference my wife attended a panel discussion on Bear-Human Interaction that was moderated by Dr. David Garshelis of the University of Minnesota.  She said that Dr. Garshelis was quite obviously against Dr. Rogers' approach to the point of being outwardly hostile, as were other panel members.  When she said this my immediate response was,
He must be right, then.
From Bears to Testing Tools

Paul Carvalho wrote an interesting post entitled Quality Centre Must Die on his experiences this past week in vendor training for HP's Quality Centre product.  At the end of the post he describes a discussion with an automation expert about randomizing tests:
[The Test Automation expert] said that what I proposed was not a "best practice" and that everyone, the whole industry, was using the tools in the way that he described how they should be used. 
My response was to simply say "they are all wrong."
My immediate response upon reading Paul's post was,
Paul must be right, then.

25 July 2011

Whither Requirements?

We accomplish what we understand. If we are to accomplish something together, we need to understand it together.
Signature from Ron Jeffries e-mail, circa 2005.

A common issue I've encountered with teams starting to use an Agile approach such as Scrum is that they feel they must abandon all existing methods of requirements gathering and start writing nothing but User Stories.

Requirements for a system can be expressed in a multitude of ways, including but not limited to:
  • Traditional "the system shall" type of system requirements
  • Use Cases
  • User Stories
  • People sitting beside each other discussing what needs to be done
These approaches have some significant differences.  Traditional requirements tend to focus on very specific system functionality without a direct view to the ultimate goals of the people who will be using the system.  Use Cases can also have the same problem if they focus on the "how" as opposed to the "what" of a system, although they do represent an improvement.  They also tend towards verbosity with early expression of detail that can lock in specific implementations.  User Stories explicitly focus on the outcomes from a customer value perspective, and intentionally avoid details until they are required.  Sitting with the person or people for whom you are building a system and just talking and showing the work as it progresses avoids all confusion about the work and the need for any formal documentation.

In each of these cases, it's quite possible to change each point from a weakness to a strength and vice versa!  When you get right down to it, all of these forms of requirements and the process of using them have a single underlying goal.

That goal is the achievement of a SHARED UNDERSTANDING of the work to be done.

As Ron's e-mail signature above suggests, it doesn't matter if you use smoke signals, crystal balls or Vulcan mind melds as long as everyone has the same shared understanding of what must be done!  How you achieve that understanding is up to you, although a collaborative approach is almost always the easiest way to get there.  The closer together the people who need the system and those who implement it work, the higher the probability that a shared understanding will occur.  The more frequent the opportunities for feedback, the higher the probability of a shared understanding.  The more you can defer the expression of details in the requirements until they are really needed, the higher the probability.

So, the form in which the requirements are recorded is a secondary consideration to achieving the shared understanding.  You need to be cognizant, though, of the inherent strengths and weaknesses of each approach.  An unrecorded discussion between two people may be the most efficient and lightweight approach, but if that understanding needs to be shared with a larger group then it falls short.  A detailed requirements document may achieve the goal when it was written, but as time passes the understanding erodes and the requirements that were recorded may become obsolete.

Regardless of the context and the choice of requirements approach, you need to keep the overall goal in mind and use it as a test:
Does everyone have the same understanding of what this work means?
How you get there is up to you!

22 July 2011

Raising the Water Level

A phrase you commonly hear in the Agile world that comes from Toyota is that you want to "lower the water level" in your value stream so that you can "see the rocks" that disrupt flow and, to extend the metaphor, could damage your boat!  By exposing the rocks, you can then deal with them appropriately and thus reduce or eliminate the disruptions to the flow of value.

This is a wonderfully evocative metaphor, since practically everyone has seen a stream or river with water flowing around rocks and can visualize the effects those rocks have.  However, long before I ever heard of any of this Lean or Agile 'stuff', I read a story about how raising the water level was the more appropriate way of dealing with those rocks.

A Little History Lesson
I live in Ottawa, Ontario, which is the capital of Canada.  It wasn't always the capital; in fact, originally it was a rough and tumble lumber town along the Ottawa River, which acted as a highway for transporting logs from many kilometres upstream.  Ottawa is also where the Rideau River flows from the south into the Ottawa River; that river is the general route for the Rideau Canal, which connects Ottawa with the St. Lawrence River and Lake Ontario at Kingston.

The Rideau Canal owes its existence to those pesky Americans who live to the south.  In the 21st century, Canada and the U.S. are close friends, allies and trading partners, but it hasn't always been this way.  During the War of 1812, the threat of invasion from the south or at least a blockade of the St. Lawrence River forced the British to consider an alternative route from Montréal to their naval base at Kingston.  A canal from Ottawa to Kingston along the Rideau and Cataraqui Rivers was approved and construction began in 1826, led by Colonel John By.

Today, a large portion of the land in which Col. By had to work is open farmland but that wasn't the case in the late 1820's.  Work on the canal was at best a very tough grind made worse by malaria-carrying mosquitoes that infected the workers by the thousands.  Another aspect of the local geography that presented challenges to Col. By was the tough granite of the Canadian Shield.  This required blasting in many areas which, since this was decades before the invention of TNT, meant the use of black powder.

When you consider the conditions and the era in which this took place, construction flew along at a quite vigorous pace until 1829 when construction had reached Rideau Lake.  This was a key area since Rideau Lake was the source for the Rideau River that flowed north to Ottawa, and nearby Mud Lake (now Newboro Lake) was the source for the Cataraqui River that flowed south to Kingston.  The original plan was to dig a 1,500 metre (4800 foot) channel between Mud and Rideau Lakes, but the construction workers discovered that they weren't dealing with the mud and gravel they expected but rather with the hard granite bedrock of the Canadian Shield.  Costs began to spiral out of control, and the local contractors even quit the project.  Col. By had a problem and needed a solution quickly.

He examined the area's geography more closely and came up with a novel plan.  Rideau Lake had a narrow section, and By decided to build an extra, unplanned dam there as well as a lock station.  This split Rideau Lake into Upper Rideau and Big Rideau Lakes and raised the water level in Mud Lake by 1.5 metres (almost 5 feet).

Doing so dramatically reduced the amount of excavation required to connect the two watersheds, and construction of the remainder of the canal continued its brisk pace until completion in 1832.

Raising the Water Level with Your Team
We talk quite a bit about Agile being the "art of the possible".  If your team is one of the early adopters of Agile in a company it's quite likely that "possible" in your context may not be what you read in Agile literature or are taught in courses.  I'm a firm believer in lowering the water level to expose the rocks but I also know that in some cases, like Col. By, raising the water level may be the best way to proceed for the time being.

For example, coming from the XP world I advocate for very short iterations, ideally 1-2 weeks.  When you're dealing with just software, I have yet to encounter a situation where a team couldn't produce work with meaningful business value in that time frame.  Indeed, those very short iterations will expose many rocks that can and should be dealt with immediately.

When you start working with physical hardware, though, it simply may not be possible to complete a piece of work within a short iteration.  In fact, it may be quite inefficient to force-fit some physical board or card design work into a fixed-length iteration, or to deliver it incrementally - if a card requires 5 heat sinks, what value does delivering the design of the first 3 provide?

In that case, a longer iteration length or a process such as Kanban where cadence is decoupled from the size of the work items is likely more appropriate.

Another example of raising the water level to deal with the reality of the situation is with a team's Definition of Done.  New teams have a tendency to put a lot more steps or items in their DoD than they actually can complete in an iteration.  This isn't a big problem, and actually is a very good exercise in showing all the work required to deliver something.  After the end of the first iteration, though, many teams realize that their initial DoD was too ambitious and they may need to pare it back somewhat in order to deliver work.  That isn't to say that they shouldn't report and deal with the impediments that prevented them from achieving their DoD, but it may mean raising the water level temporarily until the team or their organization is capable of meeting the more expansive version.

A third example is when a team is dealing with a large legacy code base (to which you respond, "Who isn't?!").  It may be very, very difficult to create microtests and use Test-Driven Development at a low level from the start (although there are plenty of resources available to help teams eventually do that).  In that case, you may choose to raise the water level initially and start doing Acceptance Test-Driven Development instead.  This provides automated test coverage at a higher level, where it may be much easier to create the tests, and thus delivers more overall value than spending the time on the remedial work to clean up the code and make it more amenable to TDD at the microtest level.  Again, though, books such as Working Effectively with Legacy Code will help you to deal with that Big Ball of Mud and introduce low-level TDD, which will improve the overall quality of the system.
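
For a concrete (and admittedly simplified) picture of what "raising the water level" to acceptance-level tests might look like, here is a minimal sketch in Python.  The module invoice_report and the function generate_report are hypothetical stand-ins for whatever high-level entry point your legacy system already exposes; the point is simply to pin down observable, end-to-end behaviour before attempting microtest-level TDD.

```python
import unittest

# Hypothetical legacy entry point -- in a real code base this would be whatever
# high-level function, service or API boundary the system already exposes.
from invoice_report import generate_report


class InvoiceReportAcceptanceTest(unittest.TestCase):
    """Acceptance-level (characterization) tests that pin down current
    behaviour without requiring the legacy internals to be unit-testable yet."""

    def test_report_total_matches_line_items(self):
        # Drive the system through its public seam, not its internals.
        report = generate_report(customer_id=42, month="2011-06")
        self.assertEqual(report.total,
                         sum(line.amount for line in report.lines))

    def test_month_with_no_activity_produces_empty_report(self):
        report = generate_report(customer_id=42, month="2011-01")
        self.assertEqual(report.lines, [])
        self.assertEqual(report.total, 0)


if __name__ == "__main__":
    unittest.main()
```

Tests like these act as a safety net while you gradually break dependencies and introduce lower-level tests, as described in Working Effectively with Legacy Code.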

So, you may need to apply some outside-the-box thinking like that of Col. By in order to deal with the rocks that you encounter in a team's transition to Agile.  As a coach, I constantly advocate for teams to push their limits of "possible" in order to gain improvements in how they work and in the quality of the work they produce.  However, sometimes raising the water level may be an appropriate way to achieve the art of the possible.

14 July 2011

The Fighter Pilot Who Changed Software Development

For those in the Agile Software Development community who have spent any time at all looking into the Lean underpinnings of Agile, the name John Boyd should be familiar.  Boyd was the originator of the OODA - Observe, Orient, Decide, Act - loop that represents a more detailed refinement of Scrum's Inspect and Adapt cycle.  While OODA was created as a strategy for fighting a war, its underlying goal is to provide agility in any situation and to ideally allow a unit to out-think its opponent.

One of Boyd's 'acolytes', Chet Richards, wrote in his book "Certain to Win: The Strategy of John Boyd Applied to Business":
It is not more command and control that we are after. Instead, we seek to decrease the amount of command and control that we need. We do this by replacing coercive command and control methods with spontaneous, self-disciplined cooperation based on low-level initiative, a commonly understood intent, mutual trust, and implicit understanding and communications.
Just as OODA is a much more detailed strategy than Inspect and Adapt, this single paragraph explains both why self-organization is desirable and what is required for it to be effective!

If we dig a bit deeper into Boyd's career, though, there is another very interesting parallel to Agile.  He served briefly in Korea during the last couple of months of that conflict flying F-86 Sabres against Soviet-built MiG-15's. By that point in the war, the American pilots had a 14-1 ratio of kills to losses, and a 10-1 ratio overall during the full 3 years of the conflict.  Boyd was curious as to why this was the case when, on paper at least, the MiG-15 was superior to the F-86.  After the war he revisited this conundrum, and realized that the Sabre pilots enjoyed much better visibility owing to the large bubble canopy, and they could switch from offensive to defensive manoeuvres and back more quickly because the Sabre had fully hydraulic controls.  In other words, the Sabre provided the ability to better observe the situation and the agility to react as required more quickly than the MiG.

Boyd went on to become a student at the Air Force's Fighter Weapons School (FWS), the equivalent of the better-known U.S. Navy Top Gun school.  It should be noted that at the time - the early to mid-1950's - the Air Force was essentially a bomber force operating under the doctrine of massive retaliation against an anticipated attack from the Soviet Union.  Very little effort and even fewer resources were allocated to developing fighter tactics, and indeed the belief was that the fighter battles of previous wars were a quaint throwback and had become obsolete.  Boyd, however, worked in near obscurity at the FWS, writing a paper entitled "A Proposed Plan for Ftr. Vs. Ftr. Training".  In it he advised that pilots needed to think differently about how they flew, concentrating not only on their current manoeuvre but also on the effect that manoeuvre would have on their speed and on how their enemy would react to it.

E-M Theory
This research began to solidify during the early 1960's into Boyd's Energy-Manoeuvrability Theory, which stated that it wasn't just speed or engine power that gave a pilot the advantage over his adversary, but rather his total energy level.  At this time Boyd met Tom Christie, who was able to provide access to the computer time needed to develop and validate the equations that would prove the theory.  Indeed, Boyd "stole" that computer time, since it was charged against other projects using authorization codes that Christie supplied!

In the end, Boyd and Christie developed tables and graphs, normalized so that any aircraft could be compared, based on the simple notion of how quickly a pilot could gain energy when applying full power with the throttle.  On this basis they were able to prove that all U.S. fighter aircraft of the mid-60's were inferior to those of the Soviets, with a few exceptions in some limited parts of the flight envelope.  This, along with the lack of focus on fighter tactics, was being borne out by the dismal performance of U.S. fighters in Vietnam despite the technological superiority they supposedly enjoyed.
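
For the curious, the central quantity in E-M theory is commonly expressed as specific excess power - thrust minus drag, multiplied by velocity and divided by weight - which captures how quickly an aircraft can gain (or lose) energy at a given point in its flight envelope.  A minimal sketch of that arithmetic, using purely illustrative numbers rather than real aircraft data:

```python
def specific_excess_power(thrust_n, drag_n, velocity_ms, weight_n):
    """Specific excess power, P_s = (T - D) * V / W, in metres per second.

    A positive value means the aircraft can climb or accelerate (gain energy)
    at that point in the flight envelope; a negative value means it is
    bleeding energy.
    """
    return (thrust_n - drag_n) * velocity_ms / weight_n


# Illustrative numbers only -- not actual F-86 or MiG-15 data.
print(specific_excess_power(thrust_n=23000, drag_n=15000,
                            velocity_ms=250, weight_n=70000))  # ~28.6 m/s
```

Charting this value across speeds and altitudes for two aircraft is essentially what Boyd and Christie's tables and graphs did, making it plain where one aircraft could out-climb or out-turn the other.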

F-15 Eagle
Based on his experience and calculations, Boyd had been advocating for a small, lightweight, highly manoeuvrable fighter as the next generation to replace the Century Series fighters and the F-4 Phantom.  He was making some progress when fate intervened in the form of the large MiG-25 that appeared at the Domodedovo Air Show in July 1967.  It was reputed to have a top speed of nearly Mach 3, and it was believed that its twin tails made it a highly agile dogfighter.  (One was flown to Japan by a defector in 1976, exposing the true nature of the MiG-25 as a bomber interceptor with limited range and manoeuvrability.)

The fear that the Soviets had a larger, better fighter prompted the U.S. Air Force to build the large, complex and very expensive F-15 Eagle.  Boyd railed against the size and the proposed Mach 3.0 speed of the F-15, arguing that they were unnecessary and would compromise the aircraft's agility and range.

Enter the "Mafia"
While the F-15 project proceeded, Boyd was able to convince a small group of generals that a smaller, lightweight fighter should be explored in case the F-15 project failed.  At the time (1967), the future success of the F-15 was anything but certain, and both the Air Force and Navy were concerned after the F-111 debacle.  So, Boyd and his group were able to obtain funding to perform a design study on the lightweight fighter.  This group became known as the Fighter Mafia, a name coined by Col. Everest Riccioni as an ironic play on the Bomber Mafia of the 1920's led by Gen. Billy Mitchell, who faced similar challenges when the Air Corps didn't believe that bombers were important or necessary.

This group split the funds for the design study between two companies - General Dynamics for the YF-16 and Northrop for the YF-17 (the "Y" indicates a prototype aircraft).  Meanwhile, the F-15 project was becoming hugely expensive, and the already expensive F-14 of Top Gun fame was suffering from poor performance.  The government wanted to rein in the procurement process, and the lightweight fighter study seemed to be a good place to start.  As a result, in 1970 they received funding to proceed with the development of the two prototype aircraft and to perform a fly-off evaluation of them in order to make the decision based on real data as opposed to projections.

During 1974 the two aircraft were evaluated, and competed in a much larger competition for NATO orders.  In early 1975 the F-16 was declared the winner, and has since gone on to achieve considerable success with nearly 4,500 being delivered since full production started in 1976.

The YF-17 didn't fade into obscurity, though.  It was redesigned to Navy standards to be able to operate from aircraft carriers and became the F/A-18 Hornet.  It, too, was intended to complement a much larger, more expensive aircraft - the F-14 Tomcat.  As of 2011, the F/A-18 has replaced not only the F-14, but all other strike aircraft on the U.S. carriers.

Ironically, Boyd had argued against any "gold-plating" of the lightweight fighter in the form of powerful radar or computer systems, but those systems have indeed been applied with great success to both aircraft.  The increasing miniaturization and decreasing cost of electronics allowed those systems to be used without the exorbitant costs of previous generations of fighters.

Parallels
So, beyond Boyd's OODA loop, how does this apply to software product development?  Look at the push for small, agile, low-cost fighters in the context of its time.  The doctrine of the Air Force from its creation as a separate service in 1947 until the Vietnam war in the 1960's was that a massive nuclear strike capability was all that stood between the Soviets and an invasion of western Europe.  (Whether that fear was founded or not is beyond the scope of this post.)  This drove the U.S. to focus on aircraft that could deliver the bulky and heavy nuclear weapons of the time, which meant bigger and faster ruled the day.  Similarly, in a defensive role the focus was on stopping Soviet bombers, which meant large interceptors capable of high speeds and carrying missiles that could destroy a bomber from a great distance away.  It was assumed that the dogfight had been consigned to the history books, and that technology would rule the skies.

It was the hard lessons of the Vietnam War that proved those assumptions to be mostly incorrect, and indeed the Kennedy administration admitted that the nuclear doctrine served to create more conventional regional conflicts rather than prevent them.

Another aspect is that the military-industrial complex in the U.S. became self-serving, with big contracts and big projects resulting in big systems and big aircraft.  All of this eventually forced the need for the lightweight fighter programs that Boyd had advocated.  While the resulting aircraft, the F-16 and F-18, are anything but 'cheap', in the context of the huge F-111, F-14 and F-15 programs of the late 60's they have been unqualified successes.

Now think about the Agile Software Development world:
  • In a world of increasingly large and heavy process, one person and later a small group explore and quantify why agility is more effective
  • These people and their work are dismissed, most often by those with a vested interest in the status quo of large projects with large processes, often in large and very large companies
  • Economic constraints force a second look at their work
  • Their work begins to accumulate acceptance as the assumptions behind large process and large projects become exposed as flawed
  • Their work becomes fully accepted and begins to flourish
  • Others start to add more to their work, despite admonishments to the contrary
  • Those additions act to make the original work more robust and even more accepted
Sound familiar?

Agile as we know it in 2011 has its roots in the 1970's.  However, it was the massive build-up of large processes and supporting tools in the 80's and 90's that led to the "lightweight process mafia" convening at Snowbird in early 2001 in an effort to change how the world built software.  That effort exposed the larger, heavier methods as being the massive retaliation targeted at an enemy that would never come, and the dot-com bust would force the post-Vietnam-style soul-searching that allowed Boyd's ideas to reach the mainstream.

Today, I hear people talk about OODA constantly, and how it should be used to truly drive agility into the core of a team and organization.  Indeed, the notion of being able to process the loop faster than your competition is a key component of the nascent Lean Startup movement.

We should also be thankful that John Boyd's drive to determine why agility was important, to quantify it in real terms and to implement it in actual practice in the face of the conventional wisdom of the time was shared by those who attended the discussions at Snowbird in 2001, and those who have refined the practice of the Agile Values and Principles since.

    13 July 2011

    A New Home for The Survival Guide for New Agile Coaches!

    The Survival Guide for New Agile Coaches now has a new home of its own at AgileCoachSurvivalGuide.com.

    The original posts for that series will remain here on Practical Agility, but all new posts will be made on the new blog.

    So please stop on by, and let your friends know about the new site! :)

    30 June 2011

    A Survival Guide for New Agile Coaches - Are We There Yet?

    I recall driving to Toronto one time with the kids.  It's about a 4-hour trip, and somewhere around 30 seconds after we left my son asked, "Are we there yet?"  Of course, I patiently responded, "No, we have about 4 hours to go!"

    After another hour and several thousand additional queries about our arrival status, I decided to change things up.  When my son asked, "Are we there yet?", I cheerfully responded, "Yes!"  He answered, "No we're not!", to which I not-so-patiently responded, "THEN DON'T ASK AGAIN!!!".  He had nothing to come back with, and I basked in my victory for at least a few minutes.

    Coaching Point

    There are a number of points that can be taken from this story.

    First is the need for releases that are as short as possible.  When you work away for months at a time and can't see the goal towards which you're working, your motivation can erode somewhat.  Imagine climbing a hill shrouded in fog - you just keep climbing and have no idea when you're going to reach the top.  You may eventually become bored with the climb and start back down, when in fact you were only a few metres from the top that you couldn't see!  Release cycles of 24, 18, 12 and even 8 months can have this effect, whereas cycles of 1 day out to 6 months are within "sight" of a team, allowing them to focus better.

    Second, there are situations and/or products that do require longer cycles (at least initially).  Visual management can help communicate your current and projected locations reasonably well.  For example, I now like to use our GPS for any long trips even when the route is well-known and we don't really need a map.  Our GPS also shows Distance Remaining, Distance to Next Waypoint and ETA/Arrival Time.  Everyone in the vehicle can see this information, and as a result the question in the title of this post is now rarely, if ever, asked.  What information could you display for your project or product that could achieve this?
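
    As one possibility - a minimal sketch with made-up numbers, not a recommendation of any particular tool - a team could put a simple "project GPS" on a wallboard that projects an arrival date from the remaining backlog and recent velocity, much like the Distance Remaining and ETA readouts above.

```python
from datetime import date, timedelta

def project_eta(remaining_points, recent_velocities, iteration_days, today=None):
    """Rough release 'ETA': remaining work divided by average recent velocity,
    expressed as a projected calendar date.

    remaining_points: story points (or item count) left in the release backlog
    recent_velocities: points completed in the last few iterations
    iteration_days: calendar length of one iteration
    """
    today = today or date.today()
    average_velocity = sum(recent_velocities) / len(recent_velocities)
    iterations_left = remaining_points / average_velocity
    return today + timedelta(days=iterations_left * iteration_days)

# Illustrative numbers only: 120 points remaining, recent velocities of
# 18, 20 and 22 points, and two-week iterations.
print(project_eta(remaining_points=120, recent_velocities=[18, 20, 22],
                  iteration_days=14))
```

    Like the GPS, the value isn't in the precision of the projection but in the fact that everyone can see it and watch it change as the work progresses.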

    Third, teams that have been using an Agile process for a good length of time can find themselves in a rut, caught in the daily and iteration-length 'grind' of the process.  On several occasions I've heard team members and even entire teams speak of working away from Iteration Planning, through the iteration, performing a demo, holding a retrospective and then getting right back into the next Iteration Planning.  The common theme has been, "We don't have time to catch our breath!"  This can be a symptom of a few issues:
    • No slack built into the iteration to allow for unexpected work or work that was larger than expected
    • No slack built into the iteration to allow team members to 'sharpen their saw' or to spend a small amount of time refreshing their minds
    • This may sound silly, but the use of the term "Sprint" for Iteration - the words we use can and do affect the way we perceive the world.  If a team is constantly "sprinting", they can have the feeling that they're out of breath at a certain point
    The key here is to understand that you can't simply fill every single minute of time within an iteration with work directly from the backlog.  Not only is that unrealistic, it will burn out the people on the team.  The excellent book Slack by Tom DeMarco says,
    Slack is the lubricant required to effect change, it is the degree of freedom that enables reinvention and true effectiveness.
    In other words, giving teams some time when they aren't working on their regular work makes them more effective and efficient.  They may also come up with new ideas and even products!  Regardless, look for ways to allow team members to recharge.  Ask the team for ideas, or suggest some team-building activities or even just some 'free' time off, i.e. not charged to their vacation.
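
    As a back-of-the-envelope illustration of deliberately planning in slack - a minimal sketch with made-up numbers, not a prescription - a team could pull work for only a portion of its measured capacity and leave the remainder unscheduled:

```python
def plan_iteration(recent_velocities, slack_fraction=0.2):
    """Suggest how much backlog work to pull into the next iteration while
    deliberately leaving a slack buffer for the unexpected, for learning and
    for 'sharpening the saw'.

    recent_velocities: points completed in the last few iterations
    slack_fraction: portion of capacity intentionally left unplanned
    """
    average_capacity = sum(recent_velocities) / len(recent_velocities)
    planned_work = average_capacity * (1 - slack_fraction)
    return {"capacity": average_capacity,
            "planned_work": planned_work,
            "slack": average_capacity - planned_work}

# Illustrative numbers only: a team averaging 20 points plans roughly 16
# and keeps roughly 4 points' worth of time as slack.
print(plan_iteration([18, 20, 22]))
```

    The exact percentage matters far less than the team's agreement that the buffer exists and won't simply be filled with more backlog items.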

    Finally, you need to consider what questions are really being asked.  When my kids were asking, "Are we there yet?", the real question was, "When are we going to get there?"  I solved that problem somewhat by using the GPS, which displays that information, although when the kids were younger the concept of 3 hours was as abstract as 3 days or 3 months!

    If a team member says, "We're stuck in a rut... one sprint after another with no time to breathe", what's the real issue?  Do they have any slack?  Not enough slack?  Is there external pressure to do more, requiring communication with stakeholders?  Do they need to go out for a beer after the iteration demo and forget about the retrospective and iteration planning until Monday?  All of the above?  The point is to not necessarily take the question or statement at face value - try to identify the real question or the root cause of the statement.

    While you can simply answer, "No" to the question, "Are we there yet?", that rarely appeases the person or people who ask it whether they're 3, 33 or 63.  Ask yourself what you can do or suggest to either find the real question, make the information required visible or even change the circumstances such that the question never needs to be asked in the first place!

    29 June 2011

    Worth Repeating - XP Bills of Rights

    While doing the electronic equivalent of cleaning the attic yesterday, I stumbled across an internal paper I wrote at a client back in late September 2001 describing Extreme Programming.  While waxing nostalgic, I did notice a section that remains important today and is worth repeating - the Customer and Developer Bills of Rights from XP Explained and XP Installed.  The text below is unedited from 2001 - you can substitute Product Owner for Customer and Team Member for Developer.

    Since communication is a critical aspect of XP, the people involved must know up front
    what they can expect, and what is expected of them. As such, the following are lists of
    "rights" that the Customer and Developers are accorded in XP:

    Customer Bill of Rights
    As the customer, you have the right to:
    • An overall plan, to know what can be accomplished, when, and at what cost;
    • Get the most possible value out of every programming week;
    • See progress in a running system, proven to work by passing repeatable tests that you specify;
    • Change your mind, to substitute functionality, and to change priorities without paying exorbitant costs;
    • Be informed of schedule changes, in time to choose how to reduce scope to restore the original date, even cancel at any time and be left with a useful working system reflecting investment to date.
    Developer Bill of Rights
    As the Developer, you have the right to:
    • Know what is needed, with clear declarations of priority;
    • Produce quality work at all times;
    • Ask for and receive help from peers, superiors, and customers;
    • Make and update your own estimates;
    • Accept your responsibilities instead of having them assigned to you.

    These rights resonated with me in 2001, and I believe we need to revisit them from time to time in order to ensure that the values of Agile are instilled within an organization.

    A Survival Guide for New Agile Coaches - Patience, Persistence... and Ear Plugs

    For the first couple of months after my son was born, we had a heck of a time getting him to sleep in the evening. After a couple of months I realized that I just had to let him scream until he fell asleep. Once he did, he slept like, well, a baby!

    Getting to that realization was tough. He was literally screaming as loud as he could into my ear (I've been tested, and the hearing in that ear is diminished!), and he would wriggle away. I figured that he was ticked about having to go to bed! Regardless, the screaming became a routine, and once that routine was established it wasn't as stressful.

    After a few months the patience and persistence paid off. He screamed less and less, eventually going down to sleep without a fight. 

    Coaching Point 
    When a team is first starting their transition to Agile, you will hear a lot of screaming... mostly in the figurative sense, but sometimes literally. For most of the organizations with whom I've worked, Agile represents a fundamental change to how they think about their organization and their work. Any change that significant will create fear and stress. People will think that they may lose their job. There will be those who lose what they believed to be a prestigious title. Sacred cows may be slain in the name of moving to Agile.

    Be prepared to put the team on your shoulder (figuratively, of course) and let them cry it out. Give them time to learn how to work in short cycles.  Give them the support and time needed to learn test automation.  Give them the support and tools needed to effectively inspect and adapt.  When the team has gripes & complaints and wants to give up, listen patiently and with compassion.

    If that support is available to them, over time their need to "cry it out" will diminish and eventually disappear.