28 July 2011

They are All Wrong

My wife grew up spending half of every summer living in Algonquin Park.  My father-in-law was a bush pilot for the Ontario Ministry of Natural Resources and was based there for all of August every year, living at the cottage at Smoke Lake.  While she was there, she had the opportunity to live with a whole lotta nature literally right at the doorstep, from chipmunks to moose to black bears.

She was taught that you had to be wary of bears, but other than when you got between a mother and her cubs, bears were actually more afraid of us than vice versa.  A few years ago while camping with the kids, we were startled by 2 gunshots at around 11:30 PM.  It was a park ranger who had shot a "nuisance bear" that had been wandering around the campsite rummaging through the kitchen tents and cookware of people who were too stupid to put away anything that would attract bears to their site.  This adolescent black bear wasn't the huge menacing grizzly that you typically see on television, but rather a relatively small 200-pound creature just trying to find some food and availing itself of the easy pickings left by ignorant campers.

This event spurred my wife to do something to prevent a similar occurrence in Algonquin.  She did some research about other parks in Canada and the U.S., and spoke directly with the park Superintendent.  He was actually quite cooperative, and the park management did implement some measures to at least warn people that they needed to be more "bear-aware".

She also discovered the work of Minnesota-based biologist Dr. Lynn Rogers.  He has been working with bears since 1967, and you can find videos in which he simply walks up to the bears with whom he works and interacts with them as if it were nothing special.  A couple of months ago, while surfing Dr. Rogers' site, my wife discovered that the 20th International Conference on Bear Research and Management was going to be held here in Ottawa and that Dr. Rogers would be attending.  So, she signed up for a 1-day pass and went.

Dr. Rogers is an advocate of providing food for bears to supplement their natural diet, with the goal of helping to avoid having bears go searching for food around people.  His research has shown that bears will naturally eat the foods that are nutritionally best for them, and given the choice between easy pickings from humans and a supply of berries and nuts, they will happily avoid humans and choose the latter.  His program provides supplies of food that aren't as good for the bears as their natural diet, but are good enough that a bear that hasn't had enough of its regular diet will choose the planted food over making contact with humans.

Needless to say, this flies in the face of conventional wisdom - I've grown up with the saying, "A fed bear is a dead bear." - and Dr. Rogers is in the minority with his approach.

At the conference my wife attended a panel discussion on Bear-Human Interaction that was moderated by Dr. David Garshelis of the University of Minnesota.  She said that Dr. Garshelis was quite obviously against Dr. Rogers' approach to the point of being outwardly hostile, as were other panel members.  When she said this, my immediate response was,
He must be right, then.
From Bears to Testing Tools

Paul Carvalho wrote an interesting post entitled Quality Centre Must Die on his experiences this past week in vendor training for HP's Quality Centre product.  At the end of the post he describes a discussion with an automation expert about randomizing tests:
[The Test Automation expert] said that what I proposed was not a "best practice" and that everyone, the whole industry, was using the tools in the way that he described how they should be used. 
My response was to simply say "they are all wrong."
My immediate response upon reading Paul's post was,
Paul must be right, then.

25 July 2011

Whither Requirements?

We accomplish what we understand. If we are to accomplish something together, we need to understand it together.
Signature from a Ron Jeffries e-mail, circa 2005.

A common issue I've encountered with teams starting to use an Agile approach such as Scrum is that they feel they must abandon all existing methods of requirements gathering and start writing nothing but User Stories.

Requirements for a system can be expressed in a multitude of ways, including but not limited to:
  • Traditional "the system shall" type of system requirements
  • Use Cases
  • User Stories
  • People sitting beside each other discussing what needs to be done
These approaches have some significant differences.  Traditional requirements tend to focus on very specific system functionality without a direct view to the ultimate goals of the people who will be using the system.  Use Cases can also have the same problem if they focus on the "how" as opposed to the "what" of a system, although they do represent an improvement.  They also tend towards verbosity with early expression of detail that can lock in specific implementations.  User Stories explicitly focus on the outcomes from a customer value perspective, and intentionally avoid details until they are required.  Sitting with the person or people for whom you are building a system and just talking and showing the work as it progresses avoids all confusion about the work and the need for any formal documentation.

In each of these cases, it's quite possible to change each point from a weakness to a strength and vice versa!  When you get right down to it, all of these forms of requirements and the process of using them have a single underlying goal.

That goal is the achievement of a SHARED UNDERSTANDING of the work to be done.

As Ron's e-mail signature above suggests, it doesn't matter if you use smoke signals, crystal balls or Vulcan mind melds as long as everyone has the same shared understanding of what must be done!  How you achieve that understanding is up to you, although a collaborative approach is almost always the easiest way to get there.  The closer together the people who need the system and those who implement it work, the higher the probability that a shared understanding will occur.  The more frequent the opportunities for feedback, the higher the probability of a shared understanding.  The more you can defer the expression of details in the requirements until they are really needed, the higher the probability.

So, the form in which the requirements are recorded is a secondary consideration to achieving the shared understanding.  You need to be cognizant, though, of the inherent strengths and weaknesses of each approach.  An unrecorded discussion between two people may be the most efficient and lightweight approach, but if that understanding needs to be shared with a larger group then it falls short.  A detailed requirements document may achieve the goal when it was written, but as time passes the understanding erodes and the requirements that were recorded may become obsolete.

Regardless of the context and the choice of requirements approach, you need to keep the overall goal in mind and use it as a test:
Does everyone have the same understanding of what this work means?
How you get there is up to you!

22 July 2011

Raising the Water Level

A phrase you commonly hear in the Agile world that comes from Toyota is that you want to "lower the water level" in your value stream so that you can "see the rocks" that disrupt flow and, to extend the metaphor, could damage your boat!  By exposing the rocks, you can then deal with them appropriately and thus reduce or eliminate the disruptions to the flow of value.

This is a wonderfully evocative metaphor, since practically everyone has seen a stream or river with water flowing around rocks and can visualize the effects those rocks have.  However, long before I ever heard of any of this Lean or Agile 'stuff', I read a story about how raising the water level was the more appropriate way of dealing with those rocks.

A Little History Lesson
I live in Ottawa, Ontario, which is the capital of Canada.  It wasn't always the capital; in fact, it was originally a rough-and-tumble lumber town along the Ottawa River, which acted as a highway for transporting logs from many kilometres upstream.  Ottawa is also where the Rideau River flows from the south into the Ottawa River; that river forms the general route of the Rideau Canal, which connects Ottawa with the St. Lawrence River and Lake Ontario at Kingston.

The Rideau Canal owes its existence to those pesky Americans who live to the south.  In the 21st century, Canada and the U.S. are close friends, allies and trading partners, but it hasn't always been this way.  During the War of 1812, the threat of invasion from the south or at least a blockade of the St. Lawrence River forced the British to consider an alternative route from Montréal to their naval base at Kingston.  A canal from Ottawa to Kingston along the Rideau and Cataraqui Rivers was approved and construction began in 1826, led by Colonel John By.

Today, a large portion of the land in which Col. By had to work is open farmland, but that wasn't the case in the late 1820's.  Work on the canal was at best a very tough grind, made worse by malaria-carrying mosquitoes that infected the workers by the thousands.  Another aspect of the local geography that presented challenges to Col. By was the tough granite of the Canadian Shield.  This required blasting in many areas which, since this was decades before the invention of TNT, meant the use of black powder.

When you consider the conditions and the era in which this took place, construction flew along at quite a vigorous pace until 1829, when it reached Rideau Lake.  This was a key area since Rideau Lake was the source for the Rideau River that flowed north to Ottawa, and nearby Mud Lake (now Newboro Lake) was the source for the Cataraqui River that flowed south to Kingston.  The original plan was to dig a 1,500 metre (4800 foot) channel between Mud and Rideau Lakes, but the workers discovered that they weren't dealing with the mud and gravel they expected but rather with the hard granite bedrock of the Canadian Shield.  Costs began to spiral out of control, and the local contractors even quit the project.  Col. By had a problem and needed a solution quickly.

He examined the area's geography more closely and came up with a novel plan.  Rideau Lake had a narrow section, and By decided to build an extra, unplanned dam there as well as a lock station.  This split Rideau Lake into Upper Rideau and Big Rideau Lakes and raised the water level in Mud Lake by 1.5 metres (almost 5 feet).

Doing so dramatically reduced the amount of excavation required to connect the two watersheds, and construction of the remainder of the canal continued at its brisk pace until completion in 1832.

Raising the Water Level with Your Team
We talk quite a bit about Agile being the "art of the possible".  If your team is one of the early adopters of Agile in a company, it's quite likely that "possible" in your context may not be what you read in Agile literature or are taught in courses.  I'm a firm believer in lowering the water level to expose the rocks, but I also know that in some cases, as with Col. By, raising the water level may be the best way to proceed for the time being.

For example, coming from the XP world I advocate for very short iterations, ideally 1-2 weeks.  When you're dealing with just software, I have yet to encounter a situation where a team couldn't produce work with meaningful business value in that time frame.  Indeed, those very short iterations will expose many rocks that can and should be dealt with immediately.

When you start working with physical hardware, though, it simply may not be possible to complete a piece of work within a short iteration.  In fact, it may be quite inefficient to force-fit some physical board or card design work into a fixed-length iteration, or to deliver it incrementally - if a card requires 5 heat sinks, what value does delivering the design of the first 3 provide?

In that case, a longer iteration length or a process such as Kanban where cadence is decoupled from the size of the work items is likely more appropriate.

Another example of raising the water level to deal with the reality of the situation is a team's Definition of Done.  New teams have a tendency to put a lot more steps or items in their DoD than they can actually complete in an iteration.  This isn't a big problem, and is actually a very good exercise in showing all the work required to deliver something.  After the end of the first iteration, though, many teams realize that their initial DoD was too ambitious and they may need to pare it back somewhat in order to deliver work.  That isn't to say that they don't report and deal with the impediments that prevented them from achieving their DoD, but it may mean raising the water level temporarily until the team or their organization is capable of meeting the more expansive version.

A third example is when a team is dealing with a large legacy code base (to which you respond, "Who isn't?!").  It may be very, very difficult to create microtests and use Test-Driven Development at a low level from the start (although there are plenty of resources available to help teams eventually do that).  In that case, you may choose to raise the water level initially and start doing Acceptance Test-Driven Development instead.  This provides automated test coverage at a higher level, where it may be much easier to create the tests, and thus provides more overall value than spending the time on the remedial work to clean up the code and make it more amenable to TDD at the microtest level.  Again, though, books such as Working Effectively with Legacy Code will help you to deal with that Big Ball of Mud and introduce low-level TDD, which will improve the overall quality of the system.
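To make the distinction concrete, here's a minimal sketch of the two levels of test.  It's purely illustrative - the function names and the little order-parsing example are mine, not from any particular project - but it shows the difference between a microtest that pins down one small function and an acceptance-style test that checks an outcome the customer actually cares about:

import unittest

# Illustrative code under test - imagine this is a small corner of a much
# larger legacy code base.
def parse_order_line(line):
    """Parse a 'sku, quantity' line into a (sku, qty) tuple."""
    sku, qty = line.split(",")
    return sku.strip(), int(qty)

def total_items(order_text):
    """Total quantity across all non-blank lines of an order."""
    lines = [l for l in order_text.splitlines() if l.strip()]
    return sum(qty for _, qty in (parse_order_line(l) for l in lines))

class ParseOrderLineMicrotest(unittest.TestCase):
    # A microtest: exercises one tiny unit in isolation.
    def test_parses_sku_and_quantity(self):
        self.assertEqual(parse_order_line("ABC-123, 4"), ("ABC-123", 4))

class OrderTotalAcceptanceTest(unittest.TestCase):
    # An acceptance-style test: exercises the behaviour the customer cares
    # about through the outermost seam available, without touching internals.
    def test_total_matches_what_the_customer_expects(self):
        order = "ABC-123, 4\nXYZ-999, 2\n"
        self.assertEqual(total_items(order), 6)

if __name__ == "__main__":
    unittest.main()

In a real legacy system the acceptance tests would typically drive the application through its UI, API or service layer rather than a single function, but the principle is the same: get a safety net in place at the level where it's cheapest to build, then work your way down.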

So, you may need to apply some outside-the-box thinking like that of Col. By in order to deal with the rocks that you encounter in a team's transition to Agile.  As a coach, I constantly advocate for teams to push their limits of "possible" in order to gain improvements in how they work and in the quality of the work they produce.  However, sometimes raising the water level may be an appropriate way to achieve the art of the possible.

14 July 2011

The Fighter Pilot Who Changed Software Development

For those in the Agile Software Development community who have spent any time at all looking into the Lean underpinnings of Agile, the name John Boyd should be familiar.  Boyd was the originator of the OODA - Observe, Orient, Decide, Act - loop that represents a more detailed refinement of Scrum's Inspect and Adapt cycle.  While OODA was created as a strategy for fighting a war, its underlying goal is to provide agility in any situation and to ideally allow a unit to out-think its opponent.

One of Boyd's 'acolytes', Chet Richards, wrote in his book "Certain to Win: The Strategy of John Boyd, Applied to Business":
It is not more command and control that we are after. Instead, we seek to decrease the amount of command and control that we need. We do this by replacing coercive command and control methods with spontaneous, self-disciplined cooperation based on low-level initiative, a commonly understood intent, mutual trust, and implicit understanding and communications.
Just as OODA is a much more detailed strategy than Inspect and Adapt, this one paragraph explains why self-organization is desirable and what is required for it to be effective!

If we dig a bit deeper into Boyd's career, though, there is another very interesting parallel to Agile.  He served briefly in Korea during the last couple of months of that conflict flying F-86 Sabres against Soviet-built MiG-15's. By that point in the war, the American pilots had a 14-1 ratio of kills to losses, and a 10-1 ratio overall during the full 3 years of the conflict.  Boyd was curious as to why this was the case when, on paper at least, the MiG-15 was superior to the F-86.  After the war he revisited this conundrum, and realized that the Sabre pilots enjoyed much better visibility owing to the large bubble canopy, and they could switch from offensive to defensive manoeuvres and back more quickly because the Sabre had fully hydraulic controls.  In other words, the Sabre provided the ability to better observe the situation and the agility to react as required more quickly than the MiG.

Boyd went on to become a student at the Air Force's Fighter Weapons School (FWS), the equivalent of the better-known U.S. Navy Top Gun school.  It should be noted that at the time - the early to mid-1950's - the Air Force was essentially a bomber force under the doctrine of massive retaliation to an anticipated attack from the Soviet Union.  Very little effort and few resources were allocated to developing fighter tactics, and indeed the belief was that the fighter battles of previous wars were a quaint throwback and had become obsolete.  Boyd, however, worked in near obscurity at the FWS, writing a paper entitled "A Proposed Plan for Ftr. Vs. Ftr. Training".  In it he advised that pilots needed to think differently about how they flew, concentrating not only on their current manoeuvre but also on the effects that manoeuvre would have on their speed and on how their enemy would react to it.

E-M Theory
This research began to solidify during the early 1960's into Boyd's Energy-Manoeuvrability Theory, which stated that it wasn't just speed or engine power that gave a pilot the advantage over his adversary, but rather his total energy level.  At this time, Boyd met Tom Christie, who was able to work with Boyd and provide access to the computer time required to develop and validate the equations needed to prove the theory.  Indeed, Boyd "stole" that computer time, since it was charged to other projects whose authorization codes Christie used!
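For background, the core quantity in E-M theory is what aerodynamics texts call specific excess power - the rate at which an aircraft can gain (or lose) energy per unit of weight.  The post doesn't quote the math, so take this as the standard textbook form rather than Boyd's own notation:

P_s = \frac{(T - D)\,V}{W}

where T is thrust, D is drag, V is airspeed and W is the aircraft's weight.  Plotting P_s across the whole envelope of speed and altitude for two aircraft shows where one can gain energy faster than the other - where it can out-accelerate, out-climb, or sustain a turn that its opponent cannot.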

In the end, Boyd and Christie developed tables and graphs that were normalized across all aircraft and based on the simple notion of knowing how quickly a pilot could gain energy when applying full throttle.  On this basis they were able to show that all U.S. fighter aircraft of the mid-60's were inferior to those of the Soviets, with a few exceptions in limited parts of the flight envelope.  This inferiority, along with the lack of focus on fighter tactics, was being borne out by the dismal performance of U.S. fighters in Vietnam despite the technological superiority they supposedly enjoyed.

F-15 Eagle
Based on his experience and calculations, Boyd had been advocating for a small, lightweight, highly manoeuvrable fighter as the next generation to replace the Century Series fighters and the F-4 Phantom.  He was making some progress when fate intervened in the form of the large MiG-25 that appeared at the Domodedovo Air Show in July 1967.  It was reputed to have a top speed of nearly Mach 3, and it was believed that the twin tails made it a highly agile dogfighter.  (A MiG-25 was flown to Japan by a defector in 1976, and the true nature of the aircraft as a bomber interceptor with limited range and manoeuvrability was exposed.)

The fear that the Soviets had a larger, better fighter prompted the U.S. Air Force to build the large, complex and very expensive F-15 Eagle.  Boyd railed against the size and proposed Mach 3.0 speed of the F-15, arguing that they were unnecessary and would compromise the aircraft's agility and range.

Enter the "Mafia"
While the F-15 project proceeded, Boyd was able to convince a small group of generals that a smaller, lightweight fighter should be explored in case the F-15 project failed.  At the time (1967), the future success of the F-15 was anything but certain, and both the Air Force and Navy were concerned after the F-111 debacle.  So, Boyd and his group were able to obtain funding to perform a design study on the lightweight fighter.  This group became known as the Fighter Mafia, a name coined by Col. Everest Riccioni as an ironic play on the Bomber Mafia of the 1920's led by Gen. Billy Mitchell, who faced similar challenges when the Air Corps didn't believe that bombers were important or necessary.

This group split the funds for the design study between two companies - General Dynamics for the YF-16 and Northrop for the YF-17 (the "Y" indicates a prototype aircraft).  Meanwhile, the F-15 project was becoming hugely expensive, and the already expensive F-14 of Top Gun fame was suffering from poor performance.  The government wanted to rein in the procurement process, and the lightweight fighter study seemed to be a good place to start.  As a result, in 1970 the group received funding to proceed with the development of the two prototype aircraft and to perform a fly-off evaluation so that the decision could be made based on real data rather than projections.

During 1974 the two aircraft were evaluated, and also competed in a much larger competition for NATO orders.  In early 1975 the F-16 was declared the winner, and it has gone on to achieve considerable success, with nearly 4,500 delivered since full production started in 1976.

The YF-17 didn't fade into obscurity, though.  It was redesigned to Navy standards to be able to operate from aircraft carriers and became the F/A-18 Hornet.  It, too, was intended to complement a much larger, more expensive aircraft - the F-14 Tomcat.  As of 2011, the F/A-18 has replaced not only the F-14, but all other strike aircraft on the U.S. carriers.

Ironically, Boyd had argued against any "gold-plating" of the lightweight fighter in the form of powerful radar or computer systems, but those systems have indeed been applied with great success to both aircraft.  The increasing miniaturization and decreasing cost of electronics allowed those systems to be used without the exorbitant costs of previous generations of fighters.

Parallels
So, beyond Boyd's OODA loop, how does this apply to software product development?  Look at the push to use small, agile, low-cost fighters in the context of its time, and the parallels with our own industry become clear.  The doctrine of the Air Force from its creation as a separate service in 1947 until the Vietnam War in the 1960's was that a massive nuclear strike capability was all that stood between the Soviets and an invasion of western Europe.  (Whether that fear was founded or not is beyond the scope of this post.)  This drove the U.S. to focus on aircraft that could deliver the bulky and heavy nuclear weapons of the time, which meant that bigger and faster ruled the day.  Similarly, in a defensive role the focus was on stopping the Soviet bombers, which meant large interceptors capable of high speeds and carrying missiles that could destroy a bomber from a great distance away.  It was assumed that the dogfight had been consigned to the history books, and that technology would rule the skies.

It was the hard lessons of the Vietnam War that proved those assumptions to be mostly incorrect, and indeed the Kennedy administration admitted that the nuclear doctrine served to create more conventional regional conflicts rather than prevent them.

Another aspect is that the military-industrial complex in the U.S. became self-serving, with big contracts and big projects resulting in big systems and big aircraft.  All of this eventually forced the need for the lightweight fighter programs that Boyd had advocated.  While the resulting aircraft, the F-16 and F-18, are anything but 'cheap', in the context of the huge F-111, F-14 and F-15 programs of the late 60's they have been unqualified successes.

Now think about the Agile Software Development world:
  • In a world of increasingly large and heavy process, one person and later a small group explore and quantify why agility is more effective
  • These people and their work are dismissed, most often by those with a vested interest in the status quo of large projects with large processes, often in large and very large companies
  • Economic constraints force a second look at their work
  • Their work begins to accumulate acceptance as the assumptions behind large process and large projects become exposed as flawed
  • Their work becomes fully accepted and begins to flourish
  • Others start to add more to their work, despite admonishments to the contrary
  • Those additions act to make the original work more robust and even more accepted
Sound familiar?

Agile as we know it in 2011 has its roots in the 1970's.  However, it was the massive build-up of large processes and supporting tools in the 80's and 90's that led to the "lightweight process mafia" convening at Snowbird in early 2001 in an effort to change how the world built software.  That effort exposed the larger, heavier methods as being the massive retaliation targeted at an enemy that would never come, and the dot-com bust would force the post-Vietnam-style soul-searching that allowed Boyd's ideas to reach the mainstream.

Today, I hear people talk about OODA constantly, and how it should be used to truly drive agility into the core of a team and organization.  Indeed, the notion of being able to process the loop faster than your competition is a key component of the nascent Lean Startup movement.

We should also be thankful that John Boyd's drive to determine why agility was important, to quantify it in real terms and to implement it in actual practice in the face of the conventional wisdom of the time was shared by those who attended the discussions at Snowbird in 2001, and those who have refined the practice of the Agile Values and Principles since.

13 July 2011

A New Home for The Survival Guide for New Agile Coaches!

The Survival Guide for New Agile Coaches now has a new home of its own at AgileCoachSurvivalGuide.com.

The original posts for that series will remain here on Practical Agility, but all new posts will be made on the new blog.

So please stop on by, and let your friends know about the new site! :)