Structured Logging That Makes Everybody Happy

When we run our software, we obviously want to see and understand what is happening and how well it performs. To achieve this, we need observability as a key characteristic of our software. Observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. This definition, borrowed from control theory, implies that metrics, tracing, and logging are key topics to implement in your software system.

Two of these pillars, metrics and tracing, are also of great importance for painting the complete picture. In this blog post, however, I will focus on getting the most benefit from your logging.

Read more →

Improving Security by Influencing Human Behavior

We all know that hardening a system or implementing 2FA does not magically improve the security of an organisation. For a successful implementation of IAM or PKI, a holistic approach is needed, and the same holds for successfully improving security in your organisation. Implementing and improving security demands an approach that covers people, process, and technology.

This blog provides you with a mental model of how to change people's behavior and how to change the culture of an organisation. To change the culture of your organisation, you need to change its structures and lead by example. And there is more to it: why this works in changing the behavior of individual people.

I also highlight material for facilitating a workshop that helps you make the mental models behind people's behavior explicit.

Read more →

Threat modeling without a diagram

Most threat modeling approaches (e.g. STRIDE) assume you have a technical overview like a Data Flow Diagram. An interesting question, therefore, is: can you threat model when no such thing is available? A common situation would be when you are forming an epic, but as an exercise, let's take a legal contract or service level agreement; can you threat model that? Let us find out…

At first sight this might seem a stretch or a weird thing to do, as there are no assets to protect or technical risks to identify, but I will show that you can still get interesting results by tweaking the process and making a translation first.

Read more →

From Build to Run: Pointers on Secure Deployment

Our experience with resources on secure deployment

Have you ever searched for resources on “Secure Software Deployment”? Most of the results revolve around pentesting or putting security tools in your CI/CD pipeline. That is like researching how to improve your cake-baking skills and ending up with manuals for kitchen appliances. We want to address this gap: in this blog, we give you key pointers for a secure deployment.

You definitely want to protect this cake from malicious actors by ‘deploying it securely’ 🙂

So, what should you think of? We'll start with a few aspects that we believe are important when you work on a secure deployment. After that, we will touch upon the areas you need to work on to actually achieve it. Finally, we'll advise where to go from here.

Read more →

Improving the quality of software delivery utilizing technology, process and people

Each organization involved in creating software eventually has a need to deliver that software. This is what we call the software delivery process. Typically, software delivery starts at the moment that a developer has written code locally and wants to publish it. Or, as Martin Fowler puts it: from the developer finishing a feature to getting that feature into production. At Qxperts, we have a more holistic view of software delivery.

Read more →

Mental models: a reflection on AWS outage

In November 2020 AWS had a major outage, which started with their Kinesis service and cascaded across several other services. Several articles and analyses of the outage have been published, including the official note from AWS. This blog post also reflects on the outage, but rather than focusing on the technical aspects, I will dive deep into the social ones, namely mental models.

Read more →

How to do Planning Poker online with video conferencing?

Due to corona, we have to use video conferencing tools while doing Planning Poker. Which tool is best for collaboratively determining our estimates, so that we are prepared for the next Sprint Planning in Scrum?

Why should you keep doing Planning Poker?

You'd like to keep doing this, because this Refinement practice naturally ensures that:

1. The team's awareness of what work is expected will grow, and clarification will be gained.
2. Knowledge and tactics will be shared.
3. Assumptions will become clear.
4. Negotiation with the Product Owner will occur to keep it small AND valuable.

Which Planning Poker tool to use online in video conferencing?

But which Planning Poker tool would you use in Zoom or Teams? I had a discussion with my Agile coach colleagues within Xebia. We agree that analog cards trump any online tool. Why?

A. First, analog cards are easily made available to anyone. So, no firewall to jump over. And no licenses needed. Just create them!

B. Secondly, if you let the team create the cards, this naturally creates ownership, and some members will introduce some fun and/or improvements, especially if any team member can introduce an extra card. Examples I can think of are ‘time for an energizer’ or ‘we try to estimate too soon’.

C. Most importantly, it naturally encourages every team member to turn the camera on, raising opportunities to spot anyone's non-verbal signals. That's great!

Together, this comes close to mirroring the office situation. And as an added advantage, the screen sharer doesn't need to switch applications all the time.

See my deck of cards, easily made from some thick paper and a fat black marker, for instance.

Any questions? Raise them below!

Your potential next step to increase effectiveness:

Would you like to become the best Scrum Master? Join our Advanced Online class

Would you like to deepen your tactics in great Refinement? Join classes like
Specification by Example by Gojko Adzic. It is the cornerstone of any successful requirements and testing strategy with Agile and Lean processes, like Scrum, Extreme Programming, and Kanban. This workshop teaches you how to apply SBE to bridge the communication gap between stakeholders and implementation teams, build quality into software from the start, and design, develop, and deliver systems fit for purpose.

Make your team stronger. Distribute the testing workload better across the team and grow together as a team. Understanding of cross-functional competences and needs will increase while applying Test-Driven Development.

Staying Ahead Of The Competition With Executable Specifications

Any company wants to adapt quickly because of new or changed business ideas, or because of changes in the market. Only by adapting quickly can you stay ahead of the competition. For a company that relies heavily on software built in-house, like Picnic, this means software needs to change rapidly. This is where things get difficult.

In this post, I’ll explain how rapid feedback in the software development process is required to speed up software change, and how executable specifications can reduce this feedback loop. I’ll describe how we apply this methodology at Picnic and what benefits we’re reaping because of it.

Read more →

3 tips for maintainable unit tests

Although having a good collection of unit tests makes you feel safe and free to refactor, a bad collection of tests can make you scared to refactor. How so? A single change to application code can cause a cascade of failing tests. Here are some tips for avoiding (or fighting back from) that situation.

Tip 1: Test behavior, not structure

The behavior of the system is what the business cares about, and it is what you should care about as well from a verification point of view. If requirements change drastically, then changes to the system are expected, including to the tests. The promise of good unit test coverage is that you can refactor with confidence that your tests will catch any regressions in behavior. However, if you are testing the structure of your application rather than the behavior, refactoring will be difficult, since you want to change the structure of your code but your tests are asserting that very structure! Worse, your test suite might not even test the behavior, yet you have confidence in it because of the sheer volume of structural tests.

If you test the behavior of the system from the outside you are free to change implementation and your tests remain valid. I am not necessarily talking about integration style tests but actual unit tests whose entry point is a natural boundary. At work we have use-case classes that form this natural entry-point into any functionality.

So let's look at an example of structural testing, and see what happens when we try to make a change to the implementation details. As an example, we have a test against a CreatePerson use-case that creates a Person and persists it if it is a valid person object. The initial design takes in an IValidator to determine whether the person is valid.
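Sketched out, the starting point might look something like this: a minimal sketch assuming xUnit and Moq, where names like Execute and Save are illustrative rather than from the original code.

```csharp
using Moq;
using Xunit;

public interface IValidator
{
    bool IsValid(Person person);
}

public interface IPersonRepository
{
    void Save(Person person);
}

public class Person
{
    public string Name { get; set; }
}

// The initial design: the use-case delegates validation to an injected IValidator.
public class CreatePerson
{
    private readonly IValidator _validator;
    private readonly IPersonRepository _people;

    public CreatePerson(IValidator validator, IPersonRepository people)
    {
        _validator = validator;
        _people = people;
    }

    public void Execute(string name)
    {
        var person = new Person { Name = name };
        if (_validator.IsValid(person))
            _people.Save(person);
    }
}

public class CreatePersonTests
{
    [Fact]
    public void CreatingPerson_WithValidPerson_CallsIsValid()
    {
        var validator = new Mock<IValidator>();
        validator.Setup(v => v.IsValid(It.IsAny<Person>())).Returns(true);
        var repository = new Mock<IPersonRepository>();
        var useCase = new CreatePerson(validator.Object, repository.Object);

        useCase.Execute("Jane");

        // The assertion targets a dependency, not an observable outcome:
        // the test knows HOW CreatePerson is implemented.
        validator.Verify(v => v.IsValid(It.IsAny<Person>()), Times.Once());
    }
}
```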

Notice how we are asserting against a dependency (IValidator) of the use-case (CreatePerson). Our test has structural knowledge of how CreatePerson is implemented. Let’s see what happens when we want to refactor this code…

Your team has been trying to bring in some new practices like Domain-Driven Design. The team discussed it, and the Person class represents an easy place to start learning. You have been tasked with pulling behavior into the Person entity to make it less anemic.

As a first try, you move the validation logic into the Person class.
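A minimal sketch of that first step, assuming the only business rule is the non-empty name mentioned further down:

```csharp
public class Person
{
    public string Name { get; }

    public Person(string name) => Name = name;

    // Validation logic pulled from the IValidator implementation into the entity.
    public bool IsValid() => !string.IsNullOrWhiteSpace(Name);
}
```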

Looking at the use-case, we no longer need to inject IValidator. Not only is what we test going to change; we are going to have to rewrite the test completely, because we no longer have a validator to inject as a mock. We have seen the first signs of our tests being fragile.

Let's try to make our test focus on the behavior we expect, instead of relying on the structure of our code.
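Here is a sketch of what the behavior-focused test could look like (still assuming xUnit; the All() accessor on the repository is my own illustrative addition):

```csharp
using Xunit;

public class CreatePersonTests
{
    [Fact]
    public void CreatePerson_WithValidName_PersistsPerson()
    {
        InMemoryPersonRepository people = Given.People;
        var useCase = new CreatePerson(people);

        useCase.Execute("Jane");

        // Assert the outcome the business cares about: the person was persisted.
        Assert.Contains(people.All(), p => p.Name == "Jane");
    }
}
```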

Don’t worry too much about InMemoryPersonRepository people = Given.People; for now, we will come back to it. All you need to know is that InMemoryPersonRepository implements IPersonRepository.

Since we no longer need IValidator and its implementation, we delete those. We also get to delete the test CreatingPerson_WithValidPerson_CallsIsValid, as we now have a better test, CreatePerson_WithValidName_PersistsPerson, that asserts the behavior we care about: the use-case creating and persisting a new person. Yay, less test code, better coverage!

At this point you might be saying “Wait! Unit tests are supposed to test one method, on one class”. No! A unit is whatever you need it to be. I am by no means saying write no tests for your small implementation details; just make sure you are comfortable deleting them if things change. With our focus on behavior tests we can delete those detailed tests freely and still be covered. In fact, I often just delete such tests after I am done developing the component, as I used TDD for a fast feedback loop on the design and implementation. Remember that test code is still code that needs maintenance, so the more coverage for less code, the better.

So back to the code. What does our use-case look like now?
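Roughly, something like this sketch:

```csharp
public class CreatePerson
{
    private readonly IPersonRepository _people;

    public CreatePerson(IPersonRepository people) => _people = people;

    public void Execute(string name)
    {
        var person = new Person(name);

        // Validation now lives on the entity itself; no IValidator to inject.
        if (person.IsValid())
            _people.Save(person);
    }
}
```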

That's OK. We got rid of a dependency and moved some logic to our Person entity, but we can do better. On reviewing your pull request, someone in the team pointed out something important: you should be aiming to make invalid states unrepresentable. The business doesn't allow saving a person without a name, so let's make it impossible to create an invalid Person.
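A sketch of that final shape, with the rule enforced in the constructor:

```csharp
using System;

public class Person
{
    public string Name { get; }

    public Person(string name)
    {
        // The business does not allow a person without a name,
        // so an invalid Person can no longer be constructed at all.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A person must have a name.", nameof(name));

        Name = name;
    }
}

// The use-case shrinks accordingly: there is no validation branch left.
public class CreatePerson
{
    private readonly IPersonRepository _people;

    public CreatePerson(IPersonRepository people) => _people = people;

    public void Execute(string name) => _people.Save(new Person(name));
}
```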

Look at that! We refactored the implementation without having to update our test. It still passes without any changes.

This was a contrived example to illustrate the point but I hope this tip helps you write more maintainable tests.

Tip 2: Use in-memory dependencies

You have already seen InMemoryPersonRepository, so this tip needs less explanation. The claim is simply that the maintainability of your tests can be increased by using in-memory versions of your dependencies a little more and mocking frameworks a little less.

I find in-memory versions of something like a repository that speaks to a database preferable to mocking frameworks for a few reasons:

  1. They tend to be easier to update than a mocking framework, especially if the mocks are created in every test or fixture.
  2. Coupled with some tooling (see the next tip), they lead to far easier setup and readability.
  3. They are simple to understand.
  4. They are a great debugging tool.

On the downside, they do take a little time to create.

Let's take a quick look at what one looks like for our code so far:
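A minimal sketch of such a repository (the All() method is my own addition, used by the behavior assertions above):

```csharp
using System.Collections.Generic;

public class InMemoryPersonRepository : IPersonRepository
{
    private readonly List<Person> _people = new List<Person>();

    public void Save(Person person) => _people.Add(person);

    // Handy for behavior assertions and for inspecting state while debugging.
    public IEnumerable<Person> All() => _people;
}
```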

Super simple! Put in the work and give it a try; it may not be as sexy as a mocking framework, but it really will help make your test suite more manageable.

Tip 3: Build up test tooling

Test tooling in this context means utility classes that improve the readability and maintainability of your tests. A big part of this is about making your tests clear about the setup while still keeping it concise.

Let’s discuss a few helpers you should have in any project…

In-memory dependencies

This was already discussed above. I can’t stress enough how much this improves maintenance and simplifies reasoning about tests.

Builders

Builders are an easy way to set up test data. They are a great way of avoiding dozens of different setup methods for your tests, while making it clear what the actual setup of your test is, without diving into some setup method that looks like all the other setup methods.

A little trick is to add an implicit conversion to the class you are building. Also take a look at Fluency for help with creating builders.

A final note on this point. Just because I use builders a lot does not mean I completely throw mocking frameworks out the window. I just tend to use mocking frameworks for things I really don't care about and that really aren't likely to change. I also tend to use them within other builders rather than directly in tests. This gives way more control over the grammar that you use to set up your tests.

Accessors

I'm not sure what else to call these, but it is useful to have a static class that makes access to builders and other types you would use in setup simple. Typically I have Given and A.

This allows me to write some very concise setup code. For example, if I needed to populate my person repository with 3 random people, I could do so like this:
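A sketch of how that could read, with Given and A as hypothetical static accessors, combined with the implicit conversion trick mentioned above:

```csharp
// Hypothetical accessors: Given hands out in-memory dependencies, A hands out builders.
public static class Given
{
    public static InMemoryPersonRepository People => new InMemoryPersonRepository();
}

public static class A
{
    public static PersonBuilder Person => new PersonBuilder();
}

public class ExampleSetup
{
    public void ThreeRandomPeople()
    {
        InMemoryPersonRepository people = Given.People;

        // The implicit conversion on PersonBuilder (shown below) turns
        // each A.Person into a built Person as it is saved.
        people.Save(A.Person);
        people.Save(A.Person);
        people.Save(A.Person);
    }
}
```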

For completeness, the PersonBuilder implementation:
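Again a sketch: the Named method and the random default name are assumptions of mine, but the implicit conversion is the little trick described above.

```csharp
using System;

public class PersonBuilder
{
    // A random default name, so each unconfigured builder yields a distinct person.
    private string _name = "Person-" + Guid.NewGuid().ToString("N").Substring(0, 8);

    public PersonBuilder Named(string name)
    {
        _name = name;
        return this;
    }

    public Person Build() => new Person(_name);

    // The little trick: an implicit conversion to the class being built,
    // so a builder can be used wherever a Person is expected.
    public static implicit operator Person(PersonBuilder builder) => builder.Build();
}
```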

Wrapping up

So those are my 3 tips for making your tests more maintainable. I encourage you to give them a try. Without investing in the maintainability of your tests, they can quickly become a burden rather than a boon. I have seen the practices above improve things not only in my teams; other colleagues have converged on similar learnings with the same positive results. Let me know if you find this helpful, or even if there are any points you strongly disagree with. I would love to discuss in the comments. Happy coding!

If you enjoyed this article you might like some of my others on testing:

I am a specialist at Qxperts. We empower companies to deliver reliable & high-quality software. Any questions? We are here to help! www.qxperts.io