Frictionless checkouts for GAMMA and KARWEI

Over the years, Xebia has been the driver of Agile software development at Intergamma, known for the GAMMA and KARWEI DIY stores. A year ago, we set out to replace the checkout process for their webshops. The existing checkout was slow and cumbersome to use, and no longer on par with other parts of the website. We knew there was a lot of room for improvement. In order to justify the investment, we needed measurable results quickly. Within a year, we were consistently seeing a significant improvement in conversion rate.

Before starting out to replace the checkout, we discussed the technology stack and general approach. We had prior experience with React and Next.js, but decided against Next.js because of the complexity it adds. A checkout also doesn’t need to be indexed by search engines, which removes the main reason to use Next.js: server-side rendering. We decided to stick with a standard React setup, which enables us to focus and keep things simple. In order to measure our success, we set up A/B testing to directly compare the old and new checkout running in parallel.

Minimum Valuable Product

Our initial goal was to go to production as soon as possible. This meant building a minimal version of the new checkout, without sacrificing quality or usability. We call this the Minimum Valuable Product. Because we were shipping this product to real customers, we couldn’t compromise on quality, usability or performance. However, we could omit features that are rarely used, especially since we controlled who would see the new product. Taking the Minimum Valuable Product approach means you should:

  • Minimize the work required to go live
  • Offer a working, usable product
  • Not compromise your quality standards
  • Measure your results, and adapt

Our approach was twofold: reduce the number of features and keep things simple. In order to get away with delivering fewer features, we set up our cart page to direct a customer to the old or the new checkout based on the contents of their shopping cart. This allowed us to avoid implementing complex parts of the checkout such as in-store delivery, big parcel handling and separate shipments, as well as dealing with logged-in users and multiple languages. We also initially offered only iDEAL and credit card payments. However, the features that we did build were fully functional, including all edge cases. With this approach we were able to go live in just a few months.

Reduced friction, improved performance

Reducing friction for the customer was our main goal for the new checkout. We did this in many ways:

  • Avoiding page reloads by putting everything on a single page rather than three separate pages, and calculating shipping costs on the fly.
  • Changing the order of steps so we avoid asking for information we don’t always need (e.g. a phone number), and explaining why we need it when we do.
  • Making sure the browser can properly autofill the form fields, and offering suggestions for likely mistakes, such as a typo in the email address.
  • Offering a zipcode autocomplete, while still allowing the address to be entered manually.
  • Integrating directly with our payment service provider to avoid navigating to an external payment page.
  • Automatically scrolling the viewport when necessary to reveal form fields.
  • Validating user input as soon as possible and providing actionable error messages.
  • Providing ARIA hints for screen readers and allowing keyboard navigation.
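
One of the friction reducers above — suggesting a fix for a mistyped email domain — can be sketched with a small edit-distance check. This is an illustrative sketch, not Intergamma’s actual implementation; the domain list and distance threshold are assumptions:

```typescript
// Suggest a correction for a likely typo in an email domain.
// The domain list and threshold below are illustrative.
const COMMON_DOMAINS = ["gmail.com", "hotmail.com", "outlook.com", "ziggo.nl", "kpnmail.nl"];

// Classic Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Returns a suggested address, or null when the domain looks fine.
function suggestEmail(email: string): string | null {
  const [local, domain] = email.split("@");
  if (!local || !domain || COMMON_DOMAINS.includes(domain)) return null;
  for (const candidate of COMMON_DOMAINS) {
    if (editDistance(domain.toLowerCase(), candidate) <= 2) {
      return `${local}@${candidate}`;
    }
  }
  return null;
}
```

Showing such a suggestion as a dismissible hint (“Did you mean jan@gmail.com?”) rather than an error keeps the form non-blocking.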

Improved performance was the second goal of the project, because it has a measurable impact on customer satisfaction and, ultimately, conversion. Unfortunately, many parts of the process are hard to optimize, and we didn’t want to touch the back-end too much, so we focused on improving perceived performance instead. This means having the site’s skeleton load as quickly as possible, showing signs of progress while data is loading in the background. The front-end is essentially a static website, served entirely from a Content Delivery Network (CloudFront) for optimal performance. This is known as the App Shell Model promoted by Google.

Component-Driven Development

The webshop we built for Intergamma is white-label: it is used by its two brands in two countries. This means being able to change the site’s visual branding and serve it in multiple languages. To manage complexity, the various parts of the webshop are built as separate web applications, currently including Shopfront, Checkout and MyAccount. To make sure these parts form a consistent whole, we reuse a lot of components between applications. These shared components are built separately as part of a design system, using Storybook for development, collaboration and documentation. The components are published on a private npm registry so they can be installed from a central place. Each component is dynamically themed to match the brand experience and translated based on user preference.
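
As a sketch of what per-brand theming and translation can look like, here is a dependency-free version. The colour values, token names, message keys and locales are placeholders, not Intergamma’s actual design tokens:

```typescript
// Illustrative theme tokens per brand. The hex values and token names
// are placeholders, not the real Intergamma design tokens.
type Brand = "gamma" | "karwei";
type Locale = "nl-NL" | "nl-BE" | "fr-BE";

interface Theme {
  primaryColor: string;
  fontFamily: string;
}

const themes: Record<Brand, Theme> = {
  gamma: { primaryColor: "#004699", fontFamily: "sans-serif" },
  karwei: { primaryColor: "#2b2b2b", fontFamily: "sans-serif" },
};

// Illustrative message catalog; one entry per locale.
const messages: Record<Locale, Record<string, string>> = {
  "nl-NL": { "checkout.pay": "Betalen" },
  "nl-BE": { "checkout.pay": "Betalen" },
  "fr-BE": { "checkout.pay": "Payer" },
};

// A shared component asks for its tokens instead of hard-coding them,
// so the same component renders correctly for both brands.
function resolveTheme(brand: Brand): Theme {
  return themes[brand];
}

// Fall back to the key itself when a translation is missing.
function translate(locale: Locale, key: string): string {
  return messages[locale][key] ?? key;
}
```

In the real design system the theme would typically be provided via React context so components never receive brand-specific props directly.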

One great aspect of Storybook is being able to build components in isolation, so developers can focus on building the user interface, without the need for a fully functional application. First we focus on building individual components, then we group them together to form a greater whole – first sections of the UI, then entire screens. In development we work with fake data to iterate faster and be able to cover every use case. This approach is known as Component-Driven Development.
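
A story file in Storybook’s Component Story Format might look like the sketch below. The AddressForm component, its story names and its args are hypothetical, invented to show how fake data covers each use case; in the real design system the component itself (omitted here) would be imported from the shared package:

```typescript
// Storybook-style stories for a hypothetical AddressForm component.
// Each story captures one state of the UI using fake data, so the
// component can be developed without a running back-end.
const meta = {
  title: "Checkout/AddressForm", // hypothetical story location
};
export default meta;

// The empty initial state of the form.
export const Empty = { args: { zipcode: "", houseNumber: "" } };

// The state after the zipcode autocomplete has filled in the address.
export const Autocompleted = {
  args: { zipcode: "1234 AB", houseNumber: "12", street: "Voorbeeldstraat", city: "Amsterdam" },
};

// A validation failure, with an actionable error message.
export const InvalidZipcode = {
  args: { zipcode: "99", houseNumber: "12", error: "Vul een geldige postcode in" },
};
```

Because every state is an explicit story, edge cases like the validation error stay visible and reviewable instead of being reachable only by clicking through the app.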

Going live

Part of our strategy was to release the new application as quickly as possible, so we could start gathering customer feedback early on. The deployable application shell is fully static – just a bunch of JavaScript files and static assets. This has great benefits: hosting files on AWS S3 is cheap, and files are served by the CloudFront CDN so they load as quickly as possible. We did, however, need to set up Lambda@Edge to dynamically add CORS and CSP headers. Luckily all of this is automated using Terraform from our CI pipeline, so there’s no manual work involved.
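
A Lambda@Edge origin-response handler that injects such headers could look roughly like this. The CSP and CORS values (and the example.com origins) are placeholders; a real policy must list the exact origins the checkout talks to:

```typescript
// Sketch of a Lambda@Edge origin-response handler adding security headers.
// The event shape mirrors CloudFront's: headers are keyed by lowercase
// name and hold arrays of { key, value } pairs.
interface CloudFrontHeaders {
  [name: string]: { key: string; value: string }[];
}

interface OriginResponseEvent {
  Records: { cf: { response: { status: string; headers: CloudFrontHeaders } } }[];
}

export const handler = async (event: OriginResponseEvent) => {
  const response = event.Records[0].cf.response;
  const headers = response.headers;

  // Placeholder CSP: restrict everything to our own origin plus the
  // (hypothetical) API origin the checkout calls.
  headers["content-security-policy"] = [{
    key: "Content-Security-Policy",
    value: "default-src 'self'; connect-src 'self' https://api.example.com",
  }];

  // Placeholder CORS origin for the hypothetical webshop domain.
  headers["access-control-allow-origin"] = [{
    key: "Access-Control-Allow-Origin",
    value: "https://www.example.com",
  }];

  return response;
};
```

Keeping this in Terraform alongside the S3/CloudFront resources means a header change is reviewed and deployed like any other code change.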

We deployed the new application in parallel with the existing one, and used dynamic routing to direct customers to the new or the existing application. This was done based on a set of predefined rules, such as the type of products in the shopping cart and whether or not the customer is logged in. That way we could direct customers with “complicated” shopping carts to the existing checkout, while customers with “simple” shopping carts had a chance of being directed to the new checkout. This allowed us to go live in a very short time, with only a subset of features.
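
The routing decision described above can be sketched as a small rule function. The rule set, field names and 50% rollout default are illustrative, not the actual rules used in production:

```typescript
// Illustrative cart model; field names are hypothetical.
interface CartItem {
  bigParcel: boolean;
  inStoreDeliveryOnly: boolean;
}

interface Cart {
  items: CartItem[];
  customerLoggedIn: boolean;
}

// Carts the MVP cannot handle yet always fall back to the legacy
// checkout; "simple" carts get a chance (rolloutFraction) to land on
// the new one, which also gives us the A/B split for measurement.
function checkoutTarget(
  cart: Cart,
  rolloutFraction = 0.5,
  rand: () => number = Math.random
): "new" | "legacy" {
  const needsLegacy =
    cart.customerLoggedIn ||
    cart.items.some((i) => i.bigParcel || i.inStoreDeliveryOnly);
  if (needsLegacy) return "legacy";
  return rand() < rolloutFraction ? "new" : "legacy";
}
```

Injecting the random source makes the rule deterministic in tests, and the rollout fraction can be raised as confidence in the new checkout grows.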

Automatic database sharding with Alibaba Cloud Table Store

At some point in your application’s lifecycle, there might come a time when you need to start scaling your data storage. If you are storing media files or other blobs that have no relations between them, you can easily add storage capacity to solve the problem. For (semi-)structured data in a database however, scaling is a whole different story. Simply adding database instances is not enough. You will need to reconsider the usage patterns and decide what solution solves the problem you have. If your database is hitting resource limits because it is accessed very frequently, adding an asynchronous read replica might be the way to go. If the size of the data is the issue and lookups become very slow, you might consider sharding your database.
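
To illustrate the idea behind sharding, here is a minimal hash-based shard router. This is a conceptual sketch only; Table Store shards automatically on the partition key, so you would not write this yourself:

```typescript
// Derive a stable shard id from a record's partition key, so a lookup
// goes to exactly one shard instead of scanning all of them.
// Uses a simple FNV-1a hash; the shard count is illustrative.
function shardFor(partitionKey: string, shardCount: number): number {
  let hash = 0x811c9dc5;
  for (const ch of partitionKey) {
    hash ^= ch.charCodeAt(0);
    hash = Math.imul(hash, 0x01000193) >>> 0; // keep it an unsigned 32-bit value
  }
  return hash % shardCount;
}
```

The crucial property is that the mapping is deterministic: the same key always lands on the same shard, while different keys spread out across shards.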

Read more →

Auto-Scaling on Alibaba Cloud

When you deploy your application on compute instances, on-premises or in the cloud, you have to make an educated guess about the utilisation of the resources you provision. If you have an older version of the same application running with proper monitoring, you can base your guesstimate on the current usage of compute nodes. But when this is the first time your application goes to production, you are out of luck. How many users will you have each day? What will users do when they start your application, and how are usage peaks distributed? Do you expect growth in the number of users? How about the long term? If you have answers to all of these questions, you might be well-equipped to go with your gut feeling and just deploy the application on the number of nodes you came up with. Let’s assume you don’t have all the answers though (which is probably the case). This is where auto-scaling comes in.

Read more →

Developing for Google Assistant with Dialogflow

You can do version control and CI/CD with Dialogflow. Although Dialogflow may not look like it was created for developers, you can set up a proper developer workflow. This makes it possible to scale development to a whole team of developers. This article will show you the best practices for an effective development process.

When you start off using Dialogflow, you get a user-friendly web interface in which you can program the phrases your voice assistant should support. Even though the Dialogflow interface is usable by non-programmers, you are still programming, and for a software developer it is important to have access to the source code of what they’re programming.

Read more →

Facilitated discussion as a format for learning and improvement

Sharing knowledge is important to us at Xebia. It’s one of the four core values the company is built on. We share knowledge at our clients and with the community, through meetups and conferences. Every second week we organise the Xebia Knowledge Exchange (XKE), our internal mini-conference, filled with lots of different sessions on all sorts of topics. There is always something interesting to learn!

During a recent XKE we came up with the idea to do a peer conference. The topic of the conference was the consultancy work that we do. We all have our own approach and experience, so there is always a lot we can learn from each other. At the conference we experimented with using K-Cards.

Read more →

Theming in Vue single file components

There are situations where it’s beneficial to build different CSS files for the same web app. An often-seen example is theming an application. When you’re using Vue with its single-file component Webpack loader, you’re in luck: you get a lot of flexibility that makes it straightforward to build such a feature.

Read more →

How to use Azure AD Single sign on with Cypress

The challenge

At my current assignment we recently introduced Azure Active Directory-based single sign-on (SSO). Since we are building a React app, we were able to leverage the react-adal library, and implementing SSO on the front-end was a matter of hours instead of days.

This did, however, pose a challenge for our end-to-end tests. We aim to perform a cycle that is as complete as possible in our end-to-end tests, and decided that a valid JWT and its validation should also be part of that suite. Cypress, our end-to-end testing tool, offers a recipe for testing applications that use single sign-on. Unfortunately, this recipe didn’t provide us with a working solution, mainly because the (react-)adal library utilizes cross-origin iframes for (re)authentication. Cypress also runs the application under test in an iframe, so we cannot leverage the existing iframe detection offered by react-adal.

Read more →

Pub-Sub messaging with AWS SNS and SQS

When setting up a new application or platform, one of the most important things you will need is messaging. As every part of the platform has a certain need for data, either in real time or after the fact, messages from other services within the application boundary need to be processed as efficiently as possible. This inter-service messaging ranges from simple notifications about something that happened in a domain (‘customer X changed his name to Y’) to a queue of outstanding jobs that are to be executed by workers. In any architecture, be it a (distributed) monolith, microliths, microservices or anything in between, messages will be there.
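
The fan-out pattern behind SNS and SQS can be illustrated with a minimal in-memory sketch: one topic, several subscribed queues, and each consumer draining its own queue at its own pace. The class names are illustrative, and a real setup would of course use the AWS SDK:

```typescript
// In-memory stand-in for an SQS queue: FIFO, one consumer per queue.
class Queue {
  private messages: string[] = [];
  enqueue(m: string) { this.messages.push(m); }
  receive(): string | undefined { return this.messages.shift(); }
  get size() { return this.messages.length; }
}

// In-memory stand-in for an SNS topic: publishing copies the message
// to every subscribed queue (fan-out), decoupling publisher from
// consumers.
class Topic {
  private subscribers: Queue[] = [];
  subscribe(q: Queue) { this.subscribers.push(q); }
  publish(m: string) { this.subscribers.forEach((q) => q.enqueue(m)); }
}
```

In the real AWS setup each SQS queue subscribes to the SNS topic, so adding a new consumer is just adding a queue subscription; the publisher never changes.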

Read more →

Why I chose Rust

Why did I choose Rust? Rust’s memory management introduces a steep learning curve. Its ecosystem isn’t as mature as that of some other languages. Yet Rust performs great, comes with some of the best WebAssembly support around, and still manages to be an expressive language. Let’s review these properties in the context of an actual use case.

Read more →