Improving the quality of software delivery with technology, process and people

Each organization that creates software eventually needs to deliver that software. This is what we call the software delivery process. Typically, software delivery starts the moment a developer has written code locally and wants to publish it. Or, as Martin Fowler puts it: from the developer finishing a feature to getting that feature into production. At Qxperts, we have a more holistic view of software delivery.

Read more →

Mental models: a reflection on AWS outage

In November 2020, AWS had a major outage that started with their Kinesis service and cascaded to several other services. Several articles and analyses of the outage have been published, including the official note from AWS. This blog post reflects on the outage, but rather than focusing on the technical aspects, I will deep dive into the social ones, namely mental models.

Read more →

How to do Planning Poker online with video conferencing?

Due to corona, we have to use video conferencing tools while doing Planning Poker. Which tool is best for collaboratively determining our estimates, so that we are prepared for the next Sprint Planning in Scrum?

Why should you keep doing Planning Poker?

You’d like to keep doing this, because this Refinement practice naturally ensures that:

1. The team’s awareness of what work is expected is raised, and clarification is obtained.
2. Knowledge and tactics are shared.
3. Assumptions become clear.
4. Negotiation with the Product Owner occurs to keep items small AND valuable.

Which Planning Poker tool to use online in video conferencing?

But which Planning Poker tool would you use in Zoom or Teams? I discussed this with my Agile coach colleagues within Xebia. We agree that analog cards trump any online tool. Why?

A. First, analog cards are easily made available to anyone. So, no firewall to jump over, and no licenses needed. Just create them!

B. Secondly, if you let the team create the cards themselves, this naturally creates ownership, and some members will introduce some fun and/or improvements. Especially if any team member can introduce an extra card. Examples I can think of are ‘time for an energizer’ or ‘we try to estimate too soon’.

C. Most importantly, it naturally encourages every team member to turn their camera on, raising opportunities to spot anyone’s non-verbal signals. That’s great!

All this together comes close to mirroring the office situation. And as a bonus, the screen sharer doesn’t need to switch applications all the time.

See my deck of cards, easily made from some thick paper and a fat black marker, for instance.

Any questions? Raise them below!

Your potential next step to increase effectiveness:

Would you like to become the best Scrum Master? Join our Advanced Online class

Would you like to deepen your tactics for great Refinement? Join classes like
Specification by Example by Gojko Adzic. It is the cornerstone of any successful requirements and testing strategy with Agile and Lean processes like Scrum, Extreme Programming, and Kanban. This workshop teaches you how to apply SBE to bridge the communication gap between stakeholders and implementation teams, build quality into software from the start, and design, develop, and deliver systems fit for purpose.

Make your team stronger. Would you like to distribute the testing workload better across the team and become better as a team? Understanding of cross-functional competences and needs will increase while applying Test-Driven Development.

Staying Ahead Of The Competition With Executable Specifications

Any company wants to adapt quickly because of new or changed business ideas, or because of changes in the market. Only by adapting quickly can you stay ahead of the competition. For a company that relies heavily on software built in-house, like Picnic, this means software needs to change rapidly. This is where things get difficult.

In this post, I’ll explain how rapid feedback in the software development process is required to speed up software change, and how executable specifications can shorten this feedback loop. I’ll describe how we apply this methodology at Picnic and what benefits we’re reaping because of it.

Read more →

3 tips for maintainable unit tests

Although a good collection of unit tests makes you feel safe and free to refactor, a bad collection of tests can make you scared to refactor. How so? A single change to application code can cause a cascade of failing tests. Here are some tips for avoiding (or fighting back from) that situation.

Tip 1: Test behaviour, not structure

The behavior of the system is what the business cares about, and it is what you should care about as well from a verification point of view. If requirements change drastically, then changes to the system are expected, including to the tests. The promise of good unit test coverage is that you can refactor with confidence that your tests will catch any regressions in behavior. However, if you are testing the structure of your application rather than the behavior, refactoring will be difficult, since you want to change the structure of your code but your tests are asserting that very structure! Worse, your test suite might not even test the behavior, yet you have confidence in it because of the sheer volume of structural tests.

If you test the behavior of the system from the outside, you are free to change the implementation and your tests remain valid. I am not necessarily talking about integration-style tests, but actual unit tests whose entry point is a natural boundary. At work we have use-case classes that form this natural entry point into any functionality.

So let’s look at an example of structural testing, and see what happens when we try to change the implementation details. As an example, we have a test against a CreatePerson use-case that creates a Person and persists it if it is valid. The initial design takes in an IValidator to determine whether the person is valid.

Notice how we are asserting against a dependency (IValidator) of the use-case (CreatePerson). Our test has structural knowledge of how CreatePerson is implemented. Let’s see what happens when we want to refactor this code…

Your team has been trying to bring in some new practices like Domain-Driven Design. The team discussed it, and the Person class represents an easy place to start learning. You have been tasked with pulling behavior into the Person entity to make it less anemic.

As a first try, you move the validation logic into the Person class.
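A TypeScript sketch of what that move could look like (the rule that a valid person needs a non-empty name is an assumption based on the business rule mentioned later in the text):

```typescript
// Sketch: the validation logic now lives on the Person entity itself,
// so no separate IValidator implementation is needed.
class Person {
  constructor(public name: string) {}

  // Assumed business rule: a person must have a name.
  isValid(): boolean {
    return this.name.trim().length > 0;
  }
}

console.log(new Person("Ada").isValid()); // true
console.log(new Person("").isValid());    // false
```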

Looking at the use-case, we no longer need to inject IValidator. Not only does what we test have to change; we have to change the test completely, because we no longer have a validator to inject as a mock. We have seen the first signs of our tests being fragile.

Let’s try to make our test focus on the behavior we expect instead of relying on the structure of our code.
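A TypeScript sketch of what a behavior-focused test could look like. The original obtained the repository via Given.People; it is inlined here to keep the sketch self-contained:

```typescript
// Behavior-focused test: assert what ends up persisted, not which
// dependency was called along the way.
class Person {
  constructor(public name: string) {}
  isValid(): boolean { return this.name.trim().length > 0; }
}

interface IPersonRepository {
  save(person: Person): void;
}

class InMemoryPersonRepository implements IPersonRepository {
  people: Person[] = [];
  save(person: Person): void { this.people.push(person); }
}

class CreatePerson {
  constructor(private repository: IPersonRepository) {}
  execute(person: Person): void {
    if (person.isValid()) this.repository.save(person);
  }
}

// CreatePerson_WithValidName_PersistsPerson
const people = new InMemoryPersonRepository();
new CreatePerson(people).execute(new Person("Ada"));
console.log(people.people.length); // 1: the behavior we care about
```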

Don’t worry too much about InMemoryPersonRepository people = Given.People; for now, we will come back to it. All you need to know is that InMemoryPersonRepository implements IPersonRepository.

Since we no longer need IValidator and its implementation, we delete those. We also get to delete the test CreatingPerson_WithValidPerson_CallsIsValid, as we now have a better test, CreatePerson_WithValidName_PersistsPerson, that asserts the behavior we care about: the use-case creating and persisting a new person. Yay, less test code, better coverage!

At this point you might be saying “Wait! Unit tests are supposed to test one method, on one class”. No! A unit is whatever you need it to be. I am by no means saying to write no tests for your small implementation details, just make sure you are comfortable deleting them if things change. With our focus on behavior tests we can delete those detailed tests freely and still be covered. In fact, I often just delete such tests after I am done developing the component, as I used TDD only for a fast feedback loop on the design and implementation. Remember that test code is still code that needs maintenance, so the more coverage for less code, the better.

So back to the code. What does our use-case look like now?
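A TypeScript sketch of what the slimmed-down use-case might look like at this point (details beyond the names in the text are assumed):

```typescript
// Sketch: CreatePerson after the refactoring. IValidator is gone; the
// Person entity answers the validity question itself.
class Person {
  constructor(public name: string) {}
  isValid(): boolean { return this.name.trim().length > 0; }
}

interface IPersonRepository {
  save(person: Person): void;
}

class CreatePerson {
  constructor(private repository: IPersonRepository) {}

  execute(person: Person): void {
    if (person.isValid()) {
      this.repository.save(person);
    }
  }
}

// Quick check with an inline repository:
const saved: Person[] = [];
new CreatePerson({ save: (p: Person) => { saved.push(p); } })
  .execute(new Person("Ada"));
console.log(saved.length); // 1
```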

That’s OK. We got rid of a dependency and moved some logic to our Person entity, but we can do better. On reviewing your pull request, someone in the team pointed out something important: you should be aiming to make invalid states unrepresentable. The business doesn’t allow saving a person without a name, so let’s make it impossible to create an invalid Person.
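A TypeScript sketch of that idea; the constructor guard is an assumption, but it matches the stated rule that a person without a name may not be saved:

```typescript
// Sketch: an invalid Person can no longer be constructed, so the use-case
// needs no validity check at all.
class Person {
  readonly name: string;

  constructor(name: string) {
    if (name.trim().length === 0) {
      throw new Error("A person must have a name");
    }
    this.name = name;
  }
}

interface IPersonRepository {
  save(person: Person): void;
}

class CreatePerson {
  constructor(private repository: IPersonRepository) {}

  execute(person: Person): void {
    // Every Person that exists is valid by construction.
    this.repository.save(person);
  }
}

// Constructing an invalid person now fails outright:
let constructionFailed = false;
try {
  new Person("");
} catch {
  constructionFailed = true;
}
console.log(constructionFailed); // true
```

The behavior test from before still constructs a valid Person and asserts it is persisted, so it keeps passing unchanged.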

Look at that! We refactored the implementation without having to update our test. It still passes without any changes.

This was a contrived example to illustrate the point but I hope this tip helps you write more maintainable tests.

Tip 2: Use in-memory dependencies

You have already seen InMemoryPersonRepository, so this tip should take less explaining. The claim is simply that the maintainability of your tests can be increased by using in-memory versions of your dependencies a little more and mocking frameworks a little less.

I find in-memory versions of something like a repository that speaks to a database preferable to mocking frameworks for a few reasons:

  1. They tend to be easier to update than a mocking framework, especially if creation of the mocks is done in every test or fixture
  2. Coupled with some tooling (see next tip) they lead to far easier setup and readability
  3. They are simple to understand
  4. They are a great debugging tool

On the down side, they do take a little time to create.

Let’s take a quick look at what one looks like for our code so far:
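A TypeScript sketch of the in-memory repository. The findByName query is an assumed example method; the point is that the fake implements the same interface as the real repository, backed by a plain array:

```typescript
// In-memory stand-in for the database-backed repository.
class Person {
  constructor(public name: string) {}
}

interface IPersonRepository {
  save(person: Person): void;
  findByName(name: string): Person | undefined;
}

class InMemoryPersonRepository implements IPersonRepository {
  private people: Person[] = [];

  save(person: Person): void {
    this.people.push(person);
  }

  findByName(name: string): Person | undefined {
    return this.people.find((p) => p.name === name);
  }

  // Handy for assertions and debugging.
  get count(): number {
    return this.people.length;
  }
}

const repo = new InMemoryPersonRepository();
repo.save(new Person("Ada"));
console.log(repo.findByName("Ada")?.name); // Ada
```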

Super simple! Put in the work and give it a try; it may not be as sexy as a mocking framework, but it really will help make your test suite more manageable.

Tip 3: Build up test tooling

Test tooling in this context means utility classes that improve the readability and maintainability of your tests. A big part of this is making your tests clear about their setup while still keeping them concise.

Let’s discuss a few helpers you should have in any project…

In-memory dependencies

This was already discussed above. I can’t stress enough how much this improves maintenance and simplifies reasoning about tests.


Builders

Builders can be used as an easy way to set up test data. They are a great way of simultaneously avoiding dozens of different setup methods for your tests and making it clear what the actual setup of your test is, without diving into some setup method that looks like all the other setup methods.

A little trick is to add an implicit conversion to the class you are building up. Also take a look at Fluency for help with creating builders.

A final note on this point: just because I use builders a lot does not mean I throw mocking frameworks out the window completely. I just tend to use mocking frameworks for things I really don’t care about and that really aren’t likely to change. I also tend to use them within other builders rather than directly in tests. This gives way more control over the grammar you use to set up your tests.


I am not sure what else to call these, but it is useful to have a static class that makes access to builders and other types you would use in setup simple. Typically I have Given and A.

This allows me to write some very concise setup code. For example, if I needed to populate my person repository with 3 random people, I could do so like this:
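A TypeScript sketch of the Given/A style and the one-liner setup it enables (the builder’s random default name and the getter-based accessors are assumptions):

```typescript
// Sketch of static accessors for concise test setup.
class Person {
  constructor(public name: string) {}
}

class InMemoryPersonRepository {
  people: Person[] = [];
  save(person: Person): void { this.people.push(person); }
}

class PersonBuilder {
  private name = `Person-${Math.floor(Math.random() * 1000)}`;
  withName(name: string): this { this.name = name; return this; }
  build(): Person { return new Person(this.name); }
}

// Static entry points: each access hands out a fresh instance.
const Given = {
  get People(): InMemoryPersonRepository {
    return new InMemoryPersonRepository();
  },
};
const A = {
  get Person(): PersonBuilder {
    return new PersonBuilder();
  },
};

// Populate the repository with 3 random people:
const people = Given.People;
for (let i = 0; i < 3; i++) {
  people.save(A.Person.build());
}
console.log(people.people.length); // 3
```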

For completeness, the PersonBuilder implementation:
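Again a TypeScript sketch rather than the presumably C# original: the implicit conversion trick mentioned above has no TypeScript equivalent, so an explicit build() is used instead, and the random default name is an assumption:

```typescript
// Sketch of PersonBuilder: sensible random defaults, overridable per test.
class Person {
  constructor(public name: string) {}
}

class PersonBuilder {
  // Assumed default: a random-ish name so each built person differs.
  private name = `Person-${Math.random().toString(36).slice(2, 8)}`;

  withName(name: string): this {
    this.name = name;
    return this;
  }

  build(): Person {
    return new Person(this.name);
  }
}

const person = new PersonBuilder().withName("Ada").build();
console.log(person.name); // Ada
```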

Wrapping up

So those are my 3 tips for making your tests more maintainable. I encourage you to give them a try. Without investing in the maintainability of your tests, they can quickly become a burden rather than a boon. I have seen the practices above improve things not only in my teams; other colleagues have converged on similar learnings with the same positive results. Let me know if you find this helpful, or even if there are any points you strongly disagree with. I would love to discuss in the comments. Happy coding!

If you enjoyed this article, you might like some of my other posts on testing.

I am a specialist at Qxperts. We empower companies to deliver reliable & high-quality software. Any questions? We are here to help!

Threat Modeling – Start using evil personas

Agile teams often use the concept of personas to create more tailored user stories, so could you use evil personas to describe malicious behavior?

Personas are “synthetic biographies of fictitious users of the future product” and “a powerful technique to describe the users and customers of a product in order to make the right product decisions”. The purpose of using personas is to “understand who the beneficiaries of the product are and what goals they pursue”.

In essence, personas help teams understand whether the designed functionality actually fits the end users’ desires. This makes it a powerful approach for also identifying possible risks by introducing malicious users or ‘evil personas’.

Read more →

Mob Programming in COVID-19 Times

What Is Mob Programming?

Simply put, mob programming is about getting together with at least three developers and coding on a single keyboard. At any given time one developer, the ‘Driver’, is actually typing. All other developers take the ‘Navigator’ role. They review, discuss and describe what the Driver should be doing, and the Driver narrates. The roles are swapped frequently to keep everyone fresh and engaged. It’s the ultimate form of collaboration and peer review.

Mobbing During Lock Down

So now you know that mob programming is about live coding together on the same piece of code. But how do you do this when everyone is working remotely during this pandemic? With my current team, we decided to give it a go regardless. There are excellent online collaboration tools available these days, so it must be possible to practice mob programming fully online. We practiced for three days in a row in a mob programming hackathon. Below I’ll describe my experience.

Read more →

Organisational structures to create autonomy: what I’ve learned from my daughter

I’m grateful to learn from my daughter: to see how her brain develops and picks up new concepts, skills and words. Nowadays, I enjoy sitting down and watching her play. As a parent, I also need to help her achieve her autonomy: emotionally, mentally and physically. That is the job, and my wife and I are navigating through it. There is no guide on how to parent, and we discuss what is working and what is not. Sometimes it is just not the time for a new skill; other times our daughter doesn’t develop an interest in a particular activity. Well, we are all different, and that is what makes the world colourful!

Read more →

Security by design? Don’t create a YAPWAV!

Security is about making risks visible and mitigating the impact of possible incidents to an acceptable level. The ‘security by design’ philosophy aims for every application or system to be at an acceptable risk level, all the time.

When starting with a ‘secure by design’ approach, existing security processes are often simply bolted onto the development life-cycle. One of the major pitfalls of this approach is requiring teams to do a YAPWAV. YAPWAV stands for the developer’s hell called: Yet Another Process Without Added Value. A YAPWAV is an activity a team has to do solely to please a stakeholder, without noticeably improving the product they’re building.

A classic example of a YAPWAV is the mandatory risk assessment for each software deployment, just for the purpose of satisfying a documentation process. These kinds of security processes are bound to fail as they add no (visible) value to the product the team is building. In the agile philosophy, every action or activity should contribute to the value of the product. The moment an activity is introduced that doesn’t add visible value, teams will decide it’s not worth the effort and stop doing it.

Read more →

Remote collaborative modelling part 1: Check-in

Collaborative modelling is not only an essential practice in Domain-Driven Design for creating a shared understanding of the domain; I believe it is vital in building sustainable and inclusive quality software. Covid-19 has forced us to move collaborative modelling sessions online, and for almost everyone this is uncharted territory and can be quite overwhelming. In this series of posts, I hope to give people some guidance and heuristics to start doing more collaborative modelling in a remote world, so that we can build more sustainable and inclusive quality software. We start this series with a practice that can make or break a collaborative modelling session: a Deep Democracy check-in.

Read more →