Last week I attended an interesting course by Roger Sessions on Snowman Architecture. The perishable nature of snowmen under any serious form of pressure fortunately does not apply to his architecture principles, but being an agile fundamentalist I noticed some interesting patterns in the math underlying the Snowman Architecture that are well rooted in agile practices. Understanding these principles gives you facts to feed your gut feeling about these philosophies, and a mathematical argument for why Agile works.
Complexity
“What has philosophy got to do with measuring anything? It’s the mathematicians you have to trust, and they measure the skies like we measure a field.” – Galileo Galilei, Concerning the New Star (1606).
In his book “Facts and Fallacies of Software Engineering” Robert Glass implied that when the functionality of a system increases by 25%, its complexity effectively doubles. So in formula form:

complexity = functionality^3.1

(The exponent follows from solving 1.25^x = 2, which gives x = log 2 / log 1.25 ≈ 3.1.)
This hypothesis is supported by empirical evidence, and it also explains why planning poker that focuses on the complexity of the implementation, rather than on the functionality delivered, is a more accurate estimator of what a team can deliver in a sprint.
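A quick numerical sketch makes the doubling concrete (the helper name is mine; the exponent is the one derived above):

```python
# Glass's law as a formula: complexity = functionality ** 3.1,
# so a 25% increase in functionality should double the complexity.
def complexity(functionality: float, exponent: float = 3.1) -> float:
    return functionality ** exponent

for f in (2, 4, 8):
    ratio = complexity(1.25 * f) / complexity(f)
    print(f, round(complexity(f), 1), round(ratio, 2))
# the ratio is ~2.0 at every size, since 1.25 ** 3.1 is ~2
```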
Basically the smaller you can make the functionality the better, and that is better to the power of 3 for you! Once you start making functionality smaller, you will find that your awesome small functionality needs to talk to other functionalities in order to be useful for an end user. These dependencies are penalized by Roger’s model:
“An outside dependency contributes as much complexity as any other function, but does so independently of the functions.”
In other words, splitting a functionality of, say, 4 points (4^3.1 ≈ 74 complexity points) into two equal separate functions of 2 points each reduces the overall complexity to about 17 complexity points (2 × 2^3.1). This benefit however vanishes when each module has more than 3 connections.
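Reading the quote above as “a module with F functions and D outside dependencies contributes F^3.1 + D^3.1 complexity points” (my reading of the model, not a formula quoted from the course), the trade-off can be tabulated:

```python
# One reading of Sessions' model: a module with F functions and
# D outside dependencies contributes F**3.1 + D**3.1 complexity points.
def module_complexity(functions: int, dependencies: int = 0, e: float = 3.1) -> float:
    return functions ** e + dependencies ** e

print(round(module_complexity(4)))      # one 4-point function: ~74
print(round(2 * module_complexity(2)))  # split in two, no connections: ~17
for d in (1, 2, 3):
    # d outside connections per module eat into the benefit of the split
    print(d, round(2 * module_complexity(2, d)))
# 1 -> 19, 2 -> 34, 3 -> 77: around three connections per module the
# split costs about as much as the unsplit original
```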
An interesting observation that one can derive from this is a mathematical model that helps you find which functions “belong” together. It stands to reason that when functions suffer from technical interfacing, the people building them will equally suffer from the human interfaces. But how do we find which functions “belong” together, and does it matter if we only get it approximately right?
Endless possibilities
“Almost right doesn’t count” – Dr. Taylor; on landing a spacecraft, after a journey of 300 million miles, 50 meters from a spot with adequate sunlight for the solar panels.
Partitioning math is incredibly complex, and the main problem with the separation of functions and interfaces is that it has massive implications if you get it only “just about right”. This is neatly covered by the Bell number (Wiki Bell number).
These numbers grow quite quickly: a set of 2 functions can be split 2 ways, but a set of 3 already has 5 options, a set of 6 has 203, and if your application covers a mere 16 business functions, there are already more than 10 billion ways to create sets, of which only a handful will give that desired low complexity number.
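That growth is easy to verify with the Bell triangle, a standard recurrence for Bell numbers:

```python
# Bell numbers count the ways a set of n elements can be partitioned.
# Bell triangle: each row starts with the last entry of the previous row;
# each next entry is the previous entry plus the one above it.
def bell(n: int) -> int:
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[-1]

print([bell(n) for n in (2, 3, 6)])  # [2, 5, 203]
print(bell(16))                      # 10480142147: over 10 billion partitions
```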
So how can math help us find the optimal set division, the one with the lowest complexity factor?
Equivalence Relations
In order to find business functions that belong together, or at least have so much in common that the number of interfaces would outweigh the functional complexity, we can resort to the set equivalence relation (Wiki Equivalence relation). It is both the strong and the weak point of the Snowman architecture. It provides a genius algorithm for separating a set into the most optimal subsets (and doing so in O(n + k log k) time). The equivalence relation that Sessions proposes is as follows:
Two business functions {a, b} have synergy if, and only if, from a business perspective {a} is not useful without {b} and vice versa.
The weak point is the subjective measurement in the equation: applied at too high a level everything will seem required, and at too low a level it will not return any valuable business results.
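Because synergy as defined here is an equivalence relation (reflexive, symmetric and transitive), grouping the functions into their equivalence classes is mechanical once the pairwise judgments are made. A minimal union-find sketch; the business functions and synergy pairs are made-up examples, not taken from the course:

```python
# Group business functions into equivalence classes under the synergy
# relation, using union-find with path halving.
from collections import defaultdict

functions = ["catalog", "cart", "checkout", "invoicing", "shipping", "returns"]
synergy = [("catalog", "cart"), ("cart", "checkout"),
           ("invoicing", "shipping"), ("shipping", "returns")]

parent = {f: f for f in functions}

def find(x: str) -> str:
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps trees shallow
        x = parent[x]
    return x

for a, b in synergy:
    parent[find(a)] = find(b)          # union the two classes

groups = defaultdict(list)
for f in functions:
    groups[find(f)].append(f)
print(list(groups.values()))
# [['catalog', 'cart', 'checkout'], ['invoicing', 'shipping', 'returns']]
```

Each resulting group is a candidate subsystem with high internal synergy and few outward interfaces.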
In my last project we split a large eCommerce platform into a customer-facing part and an order-handling part. This worked so well that the teams started complaining that the separation had lowered their knowledge of each other’s codebase, since very little functionality required coding on both subsystems.
We effectively had reduced complexity considerably, but we could have taken it one step further. The order-handling system was talking to a lot of other systems in order to get orders fulfilled. From a business perspective we could have separated those as well, reducing complexity even further. In fact, armed with Glass’s Law, we will refactor the application to make it even better than it is today.
Why bother?
Well, polynomially growing problems can’t be solved with linearly growing solutions.
As long as the complexity stays below the solution curve, things will be going fine. Then there is a point in time where the complexity surpasses our ability to solve it. Sure, we can add a team or a new technology, but unless we change the nature of our problem, we are only postponing the inevitable.
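To put some made-up numbers on that picture: assume a team whose capacity grows linearly per sprint, while complexity grows with the functionality built so far to the power of 3.1 (both figures below are purely hypothetical):

```python
# Hypothetical crossover of linear capacity versus polynomial complexity.
capacity_per_sprint = 120        # made-up 'solution points' a team can handle
functionality_per_sprint = 1.5   # made-up function points added each sprint

for sprint in range(1, 9):
    functionality = sprint * functionality_per_sprint
    complexity = functionality ** 3.1          # Glass's law, as above
    capacity = sprint * capacity_per_sprint
    status = "uh-oh" if complexity > capacity else "ok"
    print(sprint, round(complexity), capacity, status)
# with these numbers the complexity curve overtakes the team around sprint 6
```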
This is the root cause of why your user stories should not exceed the sprint boundaries. Scrum forces you to chop the functionality into smaller pieces that keep the team in a phase where linear development power supersedes the complexity of the problem (splitting an 8-point story into two 4-point stories drops the complexity from roughly 8^3.1 ≈ 630 to 2 × 4^3.1 ≈ 147 points). In practice, in almost every case where we saw a team breaking this rule, they would end up at the “uh-oh moment” at some point in the future, the stage where there are no neat solutions any more.
So believe in the math and divide your complexity curve into smaller chunks, where your solution capacity exceeds the problem’s complexity. (As a bonus you get a happy and thriving team.)
Want to learn more about using martial arts in product management? Go order the book from bol.com if you are in the Netherlands or Belgium or sign up to get the international edition.