Key paradoxes in agile software development, which generate its non-linear results

After working in an Agile environment for two years, I realized that Agile's productivity gains are rather paradoxical, even counterintuitive, particularly for traditional project managers. Paradoxes are counterintuitive gems, especially for software teams. They hold significant wisdom, even though they are often easy to express.

In particular, Agile project techniques take advantage of a number of paradoxes. After taking a closer look at the more popular agile methodologies, I think each has its merits. The best possible approach is to use all of them simultaneously: XPishLeanScrumKanban. Ha! Well, kind of.

Each style has its strong points, and because they are complementary enough, you will probably get the most value from combining them. When combining them, however, you risk missing what each one tries to achieve, or how it relates to agile fundamentals, and the end result is a mess. While I work in a scrum environment, it is interesting to think about the potential performance boosts from incorporating elements of the other approaches. Here's what I think is most worthwhile in each approach:


Scrum

Scrum focuses on delivering working software every iteration of 30 days or less. The short interval gives product teams greater flexibility to adjust priorities, while still getting a high-quality product at the end of each sprint (iteration).

From a business point of view, this aspect of Scrum turns a programming project into a “real option”. The project generates more value because of the newfound flexibility. Traditional waterfall projects that plan everything up front straitjacket teams before they really understand the problems they are trying to solve. It’s better to stay flexible and incorporate changes based on what you learn from inspecting the problem domain more closely. This prevents spending months on work which is either not needed or not useful to the end-user. While the other agile processes also promote an iterative, incremental approach, scrum promotes it most clearly in my mind. Basically, this flexibility has a very real value for stakeholders, particularly since product life cycles are shrinking in most industries, including finance.

According to scrum co-creator Ken Schwaber, the effectiveness of scrum depends largely on how clear the team’s “definition of done” is, and how well the team holds itself accountable to it. The “definition of done” includes all criteria for accepting that a team “did” a requirement (user story); only then is the requirement officially “done”. Typically, this covers not only development and testing, but also any work required by the end-user, such as documentation.
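To make the idea concrete, here is a minimal sketch of a “definition of done” treated as an explicit checklist. The criteria names are purely illustrative, not taken from any official scrum material:

```python
# Hypothetical sketch: a "definition of done" as an explicit, shared checklist.
# A story counts as done only when *every* agreed criterion is met.

DEFINITION_OF_DONE = [
    "code reviewed",
    "unit tests pass",
    "integration tests pass",
    "user documentation updated",
]

def is_done(story_status: dict) -> bool:
    """True only when all criteria in the definition of done are satisfied."""
    return all(story_status.get(criterion, False) for criterion in DEFINITION_OF_DONE)

story = {
    "code reviewed": True,
    "unit tests pass": True,
    "integration tests pass": True,
    "user documentation updated": False,
}
print(is_done(story))  # False - documentation is part of "done" too
```

The point of making the checklist explicit is exactly the accountability Schwaber describes: “done” stops being a matter of opinion.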

When following pure scrum, according to the spirit of the scrum guide, scrum co-creator Jeff Sutherland claims productivity goes up about 5x, measured by the amount of extra revenue generated. Combined with other approaches, like XP, it can go up 15-20x, though this depends on a number of factors potentially external to the team, such as sales. The numbers are roughly analogous to the software “function points” generated by the teams tracked in his studies. Function points are a traditional unit of measure of functionality, reduced to a least common denominator, which was used to track the productivity of software developers in the 1980s, thus allowing for a reasonably accurate comparison of scrum output against traditional waterfall approaches.

What I found initially counterintuitive was that planning the long-term future is much less valuable than planning the immediate future in detail. If you spend relatively more time on short-term planning, the long-term horizon will take care of itself. Of course it is good to have a long-term project vision or goal, but beyond that it makes little sense to detail minutely what the fifth month of a project will look like. Instead, you get the biggest gain by knowing in extreme detail what you will complete, and how you will complete the most important tasks, in the first month. You ensure you get great results now, particularly since scrum focuses on delivering working software which can immediately generate business value. Moreover, the flexibility you gain is liberating. You can even estimate its financial value if needed.

Extreme Programming (XP)

In my opinion, XP’s greatest insight is that certain technical practices eliminate the need for technical documentation; instead you get well-designed software in an executable specification, which is itself a deliverable, with very few bugs, assuming you write decent tests. Teams plan and design on a “just-in-time” basis, instead of preparing a big “up-front” design document. This allows you to learn about the problem domain and the technology you are using as you go. XP also stresses managing explicitly by goals instead of tasks: you are extremely focused on being effective, i.e. you don’t spend a single minute working on functionality that isn’t required now. The four key technical practices are refactoring, TDD, pair programming, and continuous integration.
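An “executable specification” is easier to picture with a tiny example. The sketch below is entirely hypothetical (the shipping rule and function names are invented for illustration), but it shows the shape of the idea: each test states a requirement in plain language, and the code either satisfies it or fails visibly:

```python
# Hypothetical executable specification: the tests *are* the requirements.

def shipping_cost(weight_kg: float) -> float:
    """Flat 5.00 up to 1 kg, then 2.00 per extra kg - the rule the tests pin down."""
    base = 5.00
    extra = max(0.0, weight_kg - 1.0) * 2.00
    return base + extra

# Each test name reads like a requirement a stakeholder could review.
def test_light_parcel_pays_flat_rate():
    assert shipping_cost(0.5) == 5.00

def test_heavy_parcel_pays_per_extra_kilo():
    assert shipping_cost(3.0) == 9.00  # 5.00 base + 2 extra kg x 2.00

test_light_parcel_pays_flat_rate()
test_heavy_parcel_pays_per_extra_kilo()
```

Unlike a requirements document, this specification cannot silently drift out of date: if the code stops matching it, the tests fail.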

Back in the late 90s, I remember the 800-page requirements document that a customer gave my team on a railway transit project. We then created a 300-page specifications document, and a page-turning series of design documents about how we were going to create the software, before even writing a line of code. This was pretty much standard practice at the time. Thinking about it now, in the context of what refactoring, TDD (or at least unit and integration testing), and CI do together, it was all mad.

The most important of these practices, in my opinion, is refactoring. Refactoring means constantly improving the structure of the code as you work on it, so that it stays as flexible and easy to work with as the moment you started. The more clearly the code expresses concepts, at least compared to how people think about them, the easier it is to keep working productively with it. A large code base that isn’t refactored tends to accumulate “technical debt”, which is essentially “borrowing forward” time. If this gets out of hand, the software becomes very hard to work with, and developer productivity can sink very low. Moreover, most developers will avoid making changes to unrefactored code, for fear of unexpected side effects. In contrast, if code is well refactored, it is easy to introduce changes, as it is pretty clear what the code is meant to do. It’s the software equivalent of cleaning up your kitchen after you’re done using it.
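Here is a small before/after sketch of what refactoring means in practice (the pricing rules are invented for illustration). Crucially, the behavior is identical; only the structure improves:

```python
# Before: the intent is buried in one dense expression.
def price_before(qty, unit, vip):
    return qty * unit * (0.9 if qty > 10 else 1.0) * (0.95 if vip else 1.0)

# After refactoring: each business rule is named, so the code expresses
# the concepts the way people think about them.
BULK_THRESHOLD = 10
BULK_DISCOUNT = 0.9
VIP_DISCOUNT = 0.95

def price_after(quantity, unit_price, is_vip):
    total = quantity * unit_price
    if quantity > BULK_THRESHOLD:
        total *= BULK_DISCOUNT
    if is_vip:
        total *= VIP_DISCOUNT
    return total
```

A decent test suite is what makes this safe: you can verify that `price_before` and `price_after` agree on every case before deleting the old version.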

I haven’t seen explicit productivity numbers for XP on its own. Scrum combined with XP technical practices can give up to 15x productivity gains, even when teams are globally distributed. While this doesn’t isolate XP’s contribution, it does give some sense of the business value of the approach. XP itself, though, is hard to do. It requires technical skills such as TDD, which is difficult to learn because the knowledge is very context-sensitive; for example, it’s much easier on Web technologies than on large legacy C++ projects. It also requires a willingness to stick to the process, which is quite difficult. The benefit is that you get a massive amount of feedback as you go, which enables you to course-correct continuously.

XP taught me that spending time on refactoring saves time overall. Once a concept is clearly expressed in code, it becomes easier for anyone to maintain it in the future, whether it’s you or someone else. Tangled code wastes everyone’s time, and expresses a lack of clarity of thought about the problem. Moreover, you can supposedly see indicators of most of a team’s dysfunctions (or lack thereof) in the code they maintain, based on how effective they are at refactoring. Well-designed software should be easy for any new developer added to your team to understand. Good software design follows how we think, as people, about the problem we are solving. The compiler, in contrast, doesn’t really care how you structure your code, as long as it can compile it.


Kanban

Moving slightly outside the realm of pure software development, Kanban helps teams self-manage by increasing the visibility of problems. Kanban has its roots in a manufacturing environment, where work is more sequential than it typically is in software development, but there are enough parallels in a software maintenance environment to make using it worthwhile. Kanban makes problems more concrete and visible using “cards” which represent issues, so that the team can move them through various “swim lanes”, or stages of a process. There are no prescribed explicit structures, like scrum’s various meetings; the team decides what it feels is best and acts accordingly.

Personally, I think the most valuable aspect of Kanban is the focus on visibility. Traditionally managed projects hide a lot of complexity behind a big stack of documents, Gantt charts and risk reports. Paradoxically, the more documentation that is produced, the harder it is for any member of the team to actually know what is going on in totality.

Kanban forces the most important elements of a specific project to be constantly visible to everyone, not only the team but also management. It helps focus on the biggest impediments, as removing them tends to provide the largest jumps in output. It feels uncomfortable to face the impediments, but they are typically the bottlenecks on performance. At the same time, Kanban is flexible enough to accommodate any kind of problem. As a result, the visibility it generates helps everyone focus on the most important issues at any given time. The board where the cards are placed visually constrains the amount of work on display. Even more importantly, the work in process is always limited to a small amount, typically around four items. This accommodates the small “working memory” we have as humans: we are only capable of effectively processing a small number of concepts (or problems) at once. The approach also minimizes the effort required to switch between tasks. By keeping the right amount of high-priority items in front of everyone, Kanban prevents wasting time getting lost in unimportant details. It’s immediately clear to everyone involved what the priority is, which makes it easier to actually “do” it.
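The work-in-process limit is mechanical enough to sketch in a few lines of code. This is a toy model (class and card names are invented for illustration), but it captures the essential rule: you cannot start new work until you finish something:

```python
# Minimal sketch of a kanban board with a work-in-process (WIP) limit.

class KanbanBoard:
    def __init__(self, wip_limit: int = 4):
        self.wip_limit = wip_limit
        self.todo, self.doing, self.done = [], [], []

    def start(self, card: str) -> bool:
        """Pull a card into 'doing' only if the WIP limit allows it."""
        if card in self.todo and len(self.doing) < self.wip_limit:
            self.todo.remove(card)
            self.doing.append(card)
            return True
        return False  # limit reached: finish something first

    def finish(self, card: str) -> None:
        self.doing.remove(card)
        self.done.append(card)

board = KanbanBoard(wip_limit=2)
board.todo = ["fix login bug", "write docs", "update deps"]
board.start("fix login bug")
board.start("write docs")
print(board.start("update deps"))  # False - the WIP limit forces focus
```

The `start` method refusing the third card is the whole trick: the constraint, not discipline, keeps attention on finishing the work already in flight.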

My personal experience with kanban is in my kitchen. After reading Jim Benson & Tonianne DeMaria Barry’s Personal Kanban and David Starr’s Agile Practices for Families, I tried implementing Kanban to keep track of what goes on around the house, and what we need to do during a week-long iteration. It…kinda worked. Although it’s good to put everything up on the board, often I don’t feel like going ahead with the next work item. After all, I’m not at work. I want to relax at home. Although I do have to admit, my wife and I got through a major project that had hung over us for years when we initially implemented Kanban. You could say that we have had some success with it.

Kanban speaks to my belief that transparency helps clean up a lot of problems. As long as you can’t see what is going on, anything could be happening, and it’s very difficult to move forward if you don’t know exactly where you are. In theory, Kanban can be especially useful in a highly politicized situation, where people are more concerned with looking good than with making progress. Kanban makes it clear what the most important problems are, and which discussions really need to happen. Once that starts, everyone can focus on the current key issues. Over time, you reduce internal politics and get a massive spike in effectiveness.


Lean

“Remove waste.” That’s the very simple message Lean promotes, even though the methods of waste removal can be quite complex. By getting rid of waste, you can generate much more from the same inputs. Lean tries to separate activities from the goals they address, and to figure out whether the same goals can be achieved in a better way. Traditionally, lean came from manufacturing, but the basic ideas can be transplanted into an office environment, as Ole Dam says, even a software environment. Tom and Mary Poppendieck have trumpeted the benefits of lean in software for a long time.

The most useful counterintuitive insight I’ve found in Lean is the effect of removing a bottleneck. Eli Goldratt, in his fantastic educational novel with a cheesy title, The Goal, points out that removing one bottleneck in a system has a dramatically non-linear effect on the output of the whole system. Goldratt illustrates basic Lean principles while telling the story of Alex Rogo, a frantic manufacturing plant manager who needs to rapidly turn around an ineffective plant. A number of parables are woven throughout the text, such as how the slowest person on a hike (a subsystem) determines the speed of the whole expedition (the system).
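Goldratt’s point can be shown with a toy model (the stage names and throughput numbers below are invented for illustration): a pipeline’s output equals its slowest stage, so improving anything other than the bottleneck changes nothing at all:

```python
# Toy model of a production pipeline: system throughput = slowest stage.

def throughput(stages: dict) -> int:
    """Units per hour the whole system can deliver."""
    return min(stages.values())

stages = {"cutting": 50, "assembly": 12, "painting": 40}
print(throughput(stages))   # 12 - assembly is the bottleneck

stages["painting"] = 80     # speed up a non-bottleneck stage...
print(throughput(stages))   # still 12: zero system-level gain

stages["assembly"] = 30     # relieve the bottleneck instead
print(throughput(stages))   # 30 - the whole system jumps
```

This is the non-linearity in miniature: more than doubling the painting stage yields nothing, while a smaller improvement at the bottleneck more than doubles total output.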

Many of these insights have direct implications for personal life as well. At any given moment, removing your biggest bottleneck should be your only priority; by definition, there can be only one top priority. Every other activity you could undertake will have significantly less impact on your life than addressing this particular problem. Therefore, you will be most effective if you concentrate all of your resources on this top priority: time, money, effort, the soul of your firstborn child. While it may initially feel riskier, you will actually get much more accomplished with fewer resources.

If you want to listen to someone discussing lean software on your next commute or “fun run”, try this episode of Herding Code: BDD and Lean Software Development. Some of the comments by Scott Bellware are intentionally harsh, but I think he manages to make a number of useful points, particularly about Lean in a software engineering context.

Key Takeaways

Regardless of whether you apply this to a software project, or consider the list below pseudo-scientific self-help bulls**t, these are probably the most useful parts of Agile:

  1. Iterative incremental work tends to be more effective than massively detailed plans.
  2. Clean up after yourself continuously.
  3. High visibility helps clear problems quickly.
  4. There is always only one top priority: your biggest bottleneck.

Interestingly enough, their benefits are rather “uncorrelated”: you can reap the gains of all of them simultaneously.

What about you? Have you tried any of these approaches? What do you think is the most worthwhile?
