Which DevOps topology is right for me?

Why should I read this?

You’re working in an organisation that wants to explore the benefits of working according to DevOps principles. You’ve heard terms like “platform team” and “SRE”, and you have an idea of what “you build it, you run it” means. These terms, however, have made your exploration of DevOps more complicated, and now you even have to choose how to organise your team(s). This blog provides an overview of the three most commonly applied DevOps topologies and the conditions under which each topology is a good fit for your organisation.

As a reference, Matthew Skelton’s “DevOps topologies” (http://web.devopstopologies.com/) page gives a nice overview of all kinds of organisational topologies. These topologies have been implemented by companies around the world in their quest for agility and operational excellence through DevOps. Although many topologies have been documented, I believe that they are all variants of these three topologies:

1. All teams are product teams. Each team does everything needed to run its software, including the use of any infrastructure components, usually a cloud-based PaaS.

2. Internal platform team(s) and product team(s). Product teams make use of the infrastructure/platform services provided by internal platform team(s). These services can range from infrastructure and “run” services such as monitoring to Continuous Integration and dashboarding tools.

3. Internal platform team(s), product team(s) and Site Reliability Engineering (SRE) team(s). This topology is based on Google’s best practices for running software. Product teams can get an SRE team’s support in running their software if they need it and if their software adheres to the standards defined by the SRE teams. SRE teams can also share on-call responsibility with product teams. The platform team(s) provide the infrastructure/platform services.

The DevOps topology that fits your organisation best depends on your current organisational hierarchy, scale, regulatory requirements and people’s skills. It is also important to recognise that every topology has its pitfalls, which need to be dealt with.

Read more →

Deep dive into Windows Server Containers and Docker – Part 3 – Underlying implementation of Hyper-V Containers

Last April I visited DockerCon 2017, and while many great new things like LinuxKit and the Moby Project were announced, the most appealing announcement for me was John Gossman’s news that Microsoft and Docker have made it possible to run Linux containers natively on Windows hosts, using the same Hyper-V isolation layer as Hyper-V containers. So, time for me to write a blog post about Hyper-V containers and to explain how this Hyper-V container virtualization layer works.
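
To make the difference concrete, here is a minimal sketch of running the same Windows image with the default process isolation versus Hyper-V isolation. This assumes Docker on a Windows Server 2016 or Windows 10 host with the Hyper-V feature enabled; the image name is just an example, and the docker CLI syntax is the same whichever shell you run it from.

```bash
# Default (process) isolation: the container shares the host kernel.
docker run --rm -it --isolation=process microsoft/nanoserver cmd

# Hyper-V isolation: the container gets its own minimal utility VM and kernel.
docker run --rm -it --isolation=hyperv microsoft/nanoserver cmd
```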

Read more →

SPACEMAP – organising a motivational environment

Introducing the SPACEMAP! A ‘map’ to gain insight into motivational problems within an organization or department, so that coaches and managers no longer have to talk about vague motivation problems, but can instead tackle concrete issues such as a lack of autonomy within the teams.

The map consists of generic ‘work factors’ that are required to create a motivational environment.

The SPACEMAP was developed by Xebia coaches.

Read more →

Using Specification by Example / BDD for your refinements

When I join new teams at clients, it often becomes clear that the value of refinements is not always seen. Team members complain that hours are wasted. Refinement sessions shouldn’t be long, draining meetings with endless discussions; instead, they should deliver clear value in the form of requirements that the whole team can work with. How do you shape your refinements so that they add value? Read on to see how BDD / Specification by Example can help you!

Read more →

Kubernetes and on-demand CI builders

Let’s say you’ve got a CI/CD pipeline and you would like to run builds in containers. You could just configure Docker on one of your machines and point your builds there, but why not use something a bit more scalable? Enter Kubernetes, a leading container orchestration platform, which luckily offers several options for using its self-contained pods as on-demand CI builders. Things like sharing sources between containers and networking will be handled for you, so all you’ll have to worry about is specifying the desired image.

In this blog, I’ll explore two of those options in commonly used CI tools, namely GitLab and Jenkins, and explain how to configure the GitLab Runner or the Jenkins Kubernetes plugin to run on-demand CI builders on a Kubernetes (or “K8s”) cluster.
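
To give an idea of what this looks like on the GitLab side, below is a hedged sketch of registering a runner that uses the Kubernetes executor, so every CI job is scheduled as a pod on the cluster. The URL, registration token, namespace and default image are placeholders, and flag names can differ slightly between runner versions.

```bash
# Sketch: register a GitLab Runner that runs each CI job as a pod on the cluster.
# URL, token, namespace and image are placeholders, not real values.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR-TOKEN" \
  --description "k8s-builder" \
  --executor "kubernetes" \
  --kubernetes-namespace "ci-builders" \
  --kubernetes-image "docker:stable"   # default image when a job specifies none
```

Jenkins offers a similar mechanism through the Kubernetes plugin, where a pod template plays the role of the runner configuration above.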

Read more →

Running Kubernetes locally with Docker on Mac OS X

Fast feedback loops are instrumental to gaining confidence in changes and achieving a steady pace of delivery. In many teams, Docker has been an important force behind removing delays in the pipeline to production. Taking control of your environments is a powerful move to make as a scrum team.

Since Kubernetes appears to be becoming the leading orchestration platform, chances are that the containerized applications you and your team are working on will land on a Kubernetes cluster. That is why I was excited to see that in the latest version of Docker it is now possible to run a local Kubernetes cluster. In this blog you will learn how to start a local Kubernetes cluster with the latest Docker version.
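
As a preview, once Kubernetes is enabled in Docker’s preferences, a quick smoke test could look roughly like this. The context name depends on your Docker version (for example docker-for-desktop or docker-desktop), and the nginx workload is just an arbitrary test image.

```bash
# Sketch: verify the local cluster that Docker starts for you.
kubectl config use-context docker-for-desktop   # may be "docker-desktop" in newer versions
kubectl get nodes                               # should show a single Ready node
kubectl run nginx --image=nginx --port=80       # start a throwaway test workload
kubectl get pods                                # watch the pod come up
```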

Read more →

This one crazy DevOps language you should learn (during Advent Of Code)

Random bits I learned about an underappreciated language by having fun during Advent Of Code.

This Friday the 2017 edition of Advent Of Code started, a daily treat of small programming puzzles for the holidays. Kudos to Eric Wastl for creating such a fun competition!

Last year I wanted to add a DevOps theme to my participation, so I chose to write solutions in the most important DevOps language. No, not Go. No, also not Python. Definitely not Java. Number one of course is… Bash: present in all Linux systems, almost all Docker containers, heck, even in Windows now. Gluing together systems for almost 30 years. [More…]
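
To give a feel for why Bash holds up surprisingly well for this kind of puzzle, here is a tiny example, not one of the actual Advent Of Code puzzles: summing a file of numbers using nothing but Bash built-ins.

```bash
#!/usr/bin/env bash
# Toy example (not an actual puzzle): sum the numbers in numbers.txt,
# one per line, with no external tools.
total=0
while read -r n; do
  (( total += n ))
done < numbers.txt
echo "$total"
```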

Refactoring to Microservices – Introducing Docker Swarm

In my [previous blog] I used local images wired together with a docker-compose.yml file. This was an improvement over standalone containers: networking is more robust because code in the images uses service names instead of IP addresses to access services. This time my goal is to introduce Swarm, so I can distribute components over multiple hosts and run more instances if necessary. Next, I’ll describe step one: migrating the single-host docker-compose setup to a multi-host Docker Swarm version. [More].
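
The broad strokes of that migration look something like the sketch below. The node address and the stack name (“shop”) are placeholders, and docker stack deploy expects the compose file to be in version 3 format.

```bash
# Sketch: from single-host docker-compose to a multi-host Swarm.

# On the first host: create the swarm; the command prints a join token.
docker swarm init --advertise-addr 192.168.1.10

# On each additional host: join as a worker using that token.
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on a manager: deploy the existing compose file as a stack.
docker stack deploy --compose-file docker-compose.yml shop
docker service ls    # one entry per service, with replica counts
```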

Refactoring to Microservices – Using Docker Compose

In the previous version of the shop landscape (see tag ‘document_v2’ in this [repository]) the services were started with a shell script. Each service depended on RabbitMQ, so its configuration contained a URL with an IP address that depended on whatever address the host happened to get from its DHCP server. This was brittle, so I decided to introduce docker-compose. Actually, I should say ‘re-introduce’, because my colleague Pavel Goultiaev built a previous version using compose. In this version, I copied and finished his code.
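
As an illustration of what the name-based wiring buys you, a minimal compose file along these lines lets every service reach the broker simply as rabbitmq. The service name “shop”, its image and the RABBITMQ_HOST variable are hypothetical, not taken from the actual repository.

```bash
# Hypothetical minimal docker-compose.yml: services reach the broker by the
# service name "rabbitmq" instead of a DHCP-assigned IP address.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  rabbitmq:
    image: rabbitmq:3-management
  shop:
    image: shop-service:latest          # hypothetical image name
    environment:
      RABBITMQ_HOST: rabbitmq           # resolved on the compose network
    depends_on:
      - rabbitmq
EOF
docker-compose up -d    # both containers share a network, no IP addresses needed
```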

Read more →

This blog is part of my Trying-to-understand-Microservices-Quest; you can find the previous [installment here].

Being An Agile Security Officer: Spread Your Knowledge

This is the fifth and last part of my blog series about Being an Agile Security Officer.

In the previous parts I showed how Security Officers can align with the Agile process and make security a quality attribute that is considered as standard again. Unfortunately, many teams not only need to be made aware of security requirements, but also need technical advice and guidance in designing and implementing them. As an Agile Security Officer you therefore need to act not only as a Stakeholder, but also as a Domain Expert for Security.

Read more →