Asynchronous workflow pattern
The asynchronous workflow pattern, also known as the publish-subscribe pattern, is an architecture pattern that is typically used to perform resource-intensive and time-consuming tasks asynchronously. To separate the request from the task itself, we can use a queue where the sender puts messages that another service can pick up.
This pattern is a subset of the CQRS (Command-Query Responsibility Segregation) pattern. CQRS defines a clear separation between a command and a query model [MF-CQRS], while the asynchronous workflow pattern only defines a command model, without caring how the result of a command is read.
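The decoupling described above can be sketched with Python's standard-library queue: the sender enqueues a command message and returns immediately, while a worker picks up and processes tasks in the background. This is a minimal in-process illustration; in a real system the queue would be an external broker (e.g. RabbitMQ or SQS), and the message names below are made up.

```python
import queue
import threading

results = []
task_queue = queue.Queue()

def worker():
    """Pick up command messages and perform the slow task."""
    while True:
        message = task_queue.get()
        if message is None:  # sentinel: stop the worker
            break
        # The resource-intensive work happens here, decoupled from the sender.
        results.append(f"processed:{message}")
        task_queue.task_done()

# The sender only enqueues commands; it does not wait for the result.
threading.Thread(target=worker, daemon=True).start()
for command in ["resize-image-1", "resize-image-2"]:
    task_queue.put(command)

task_queue.join()     # wait until all queued tasks are handled
task_queue.put(None)  # tell the worker to stop
print(results)        # → ['processed:resize-image-1', 'processed:resize-image-2']
```

Note that this is exactly the command-only half of CQRS: the sender never queries the result here; how results are read back is left out of the pattern.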
A simple, effective test automation strategy
In my whitepaper I wrote about different types of testing and what to consider when choosing a test automation strategy. More than a few readers asked how to put this advice into practice, and a good friend inspired me to write another blog post about it.
DevOps teams automate everything. Still, when you’re getting started with test automation, it’s important to ask why you would want to automate tests in the first place. And what is testing, anyway? This article describes what testing is and which parts of that process can be automated. Fortunately, test automation will not eliminate manual testing; it will just make that process more efficient.
Monitoring AWS EKS audit logs with Falco
AWS recently announced the possibility to send control plane logs from their managed Kubernetes service (EKS) to CloudWatch. Amongst those logs are the API server audit events, which provide an important security trail regarding interactions with your EKS cluster.
Sysdig Falco is an open-source CNCF project that is specifically designed to monitor the behavior of containers and applications. Besides monitoring container run-time behavior, it can also inspect the Kubernetes audit events for non-compliant interactions based on a predefined set of rules.
Wouldn’t it be nice if you could automatically monitor your EKS audit events with Falco? In this blog post we will show you how to make this work.
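To give a flavour of what such monitoring looks like, here is a sketch of a Falco rule over Kubernetes audit events. The rule name and condition are illustrative only, and it assumes the `kevt`, `pod` and `kcreate` macros shipped in Falco's default k8s_audit_rules.yaml.

```yaml
- rule: Pod Created In Kube-System
  desc: >
    Illustrative example rule: alert when a pod is created in the
    kube-system namespace, which regular users normally should not do.
  condition: kevt and pod and kcreate and ka.target.namespace = kube-system
  output: Pod created in kube-system (user=%ka.user.name pod=%ka.target.name)
  priority: WARNING
  source: k8s_audit
```

When Falco receives an audit event matching the condition, it emits the alert through its configured outputs (stdout, a file, or a webhook).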
Software Automation Testing Tools series: Cypress vs TestCafé – part one: an introduction
At Xebia we focus on building the right thing the right way. To do so, we need to continuously receive feedback on the quality of our code. As such, a test framework that supports our way of working is paramount to success.
In this article we will have a look at Cypress and TestCafé CLI.
Auto-Scaling on Alibaba Cloud
When you deploy your application on compute instances, on premises or in the cloud, you have to make an educated guess about the utilisation of the resources you provision. If you have an older version of the same application running with proper monitoring, you can base your guesstimate on the current usage of compute nodes. But when this is the first time your application goes to production, you are out of luck. How many users will you have each day? What will users do when they start your application, and how are usage peaks distributed? Do you expect growth in the number of users? How about in the long term? If you have answers to all of these questions, you might be well-equipped to go with your gut feeling and just deploy the application on the number of nodes you came up with. Let’s assume you don’t have all the answers though (which is probably the case). This is where auto-scaling comes in.
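The core idea can be illustrated with a toy scaling policy (the function name and thresholds are hypothetical, not Alibaba Cloud's API): scale out when average CPU utilisation exceeds an upper bound, scale in when it drops below a lower bound, and always stay within configured limits.

```python
def desired_instance_count(current, avg_cpu, *, scale_out_at=70.0,
                           scale_in_at=30.0, minimum=2, maximum=10):
    """Toy threshold-based auto-scaling decision.

    current: number of instances currently running
    avg_cpu: average CPU utilisation across instances, in percent
    """
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # add capacity, capped at maximum
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # remove capacity, floored at minimum
    return current                         # within the comfort zone: no change

print(desired_instance_count(3, 85.0))  # → 4 (scale out)
print(desired_instance_count(3, 10.0))  # → 2 (scale in)
print(desired_instance_count(3, 50.0))  # → 3 (no change)
```

A managed auto-scaling service evaluates rules like this continuously against real metrics, so you no longer have to guess the node count up front.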
Visualise coupling between contexts in Big Picture EventStorming
A Big Picture EventStorming is a type of EventStorming where you get business and IT from an organisation into one room to explore an entire line of business. This way we can find competing goals, ambiguity in the language, and communication boundaries between contexts, and, most importantly, we share knowledge! We end up with a visual overview of our business architecture, onto which we can map our IT systems or do, for instance, a value stream mapping. But we can also map and visualise coupling between contexts in a Big Picture EventStorming. In this blog post, I will share my insights on how I visualise context boundaries in a Big Picture EventStorming.
Retrospectives should be a natural and continuous process
As a Xebian my typical day is spent working on one of the projects we do for our clients. And for those projects that I do together with other Xebians I end every day with a fifteen minute chat; discussing what we have done that day, sharing our observations, learning lessons and adjusting our plans for tomorrow. Personally I love this short feedback cycle: the learning is more deliberate and it allows me to take into account all the details.
Retrospectives provide an opportunity for a team and its individuals to learn and improve. In their simplest form you answer the questions ‘what went well?’ and ‘what do we want to improve?’. This is followed by a number of action points that the team commits to picking up in the next sprint: iteratively becoming better.
However, if your retros consistently result in a long list of improvement and/or action points, you have a serious problem. Either your team is failing to make the necessary improvements, or, more importantly, your internal feedback cycles are way too long.
As developers we spend a lot of time and effort on building pipelines that can securely and automatically promote our code increments to a production environment. We slice our user stories in such a way that small chunks of value are delivered at a consistently high pace. We do this because we value quick feedback cycles on our work: are we really delivering business value?
We have to apply those same quick feedback cycles to how we work together. Nothing is stopping you from holding your own mini-retrospective once a day. Or from having a five minute chat with a colleague as soon as you notice something that can be improved, followed by: improving it. Does the improvement take more time? Slice it as you would with any user story.
Once your retrospectives are no longer just a bi-weekly ritual, a recurring event on our calendars, but a continuous improvement process, you can instead ask the following questions: ‘what went well?’ and ‘what improvements have we made?’.
Heuristics on approaching Example Mapping
While Bruno Boucard, Thomas Pierrain, and I were preparing our DDDEurope 2019 workshop, we discussed how to approach Example Mapping. For the workshop, we were combining EventStorming and Example Mapping to go from problem space to solution space. The way I have been approaching Example Mapping was slightly different than the way Thomas and Bruno did. Mine followed up more on EventStorming: standing in front of a wall, storming examples first with stickies. Bruno and Thomas do it the way described by Matt Wynne from Cucumber: standing in a group around a table, starting with a user story and one rule written on index cards. So we briefly discussed what these differences are and what the trade-offs were. In this post, I will explain the different heuristics on approaching Example Mapping.
Developing for Google Assistant with Dialogflow
You can do version control and CI/CD with Dialogflow. Although it may look like Dialogflow is not created for developers, you can set up a nice developer flow. This makes it possible to scale development to a team of developers. This article will show you the best practices for an effective development process.
When you start off with Dialogflow, you get a user-friendly web interface. You can program phrases that your voice assistant should support. Even though the Dialogflow interface is usable by non-programmers, you are still programming. For software developers it is important to have access to the source code of what they’re programming.