
Testing UI changes in large web applications

10 Aug, 2015

When a web application starts to grow in terms of functionality, number of screens and amount of code, automated testing becomes a necessity. Not only will these tests prevent you from shipping bugs to your users, they will also help you maintain a high development speed, letting you focus on new and better features instead of fixing bugs in existing ones.

However, even with all kinds of unit, integration and end-to-end tests in place, you’ll still end up with a huge blind spot: does your application still look the way it’s supposed to?

Can we test for this as well? (hint: we can).

Breaking the web’s UI is easy

A web application’s look is determined by a myriad of HTML tags and CSS rules, which are often reused in many different combinations. And therein lies the problem: any seemingly innocuous change to markup or CSS can result in a broken layout, misaligned elements or other unintended side effects. A change in the CSS or markup of one screen can cause problems on another.

Additionally, browsers are updated frequently, and every update can fix or introduce CSS and markup bugs. How will you know whether your application still looks good in the latest Firefox or Chrome version, or in the next big browser of the future?

So how do we test this?

The most obvious way to prevent visual regressions in a web application is to manually click through every screen using several browsers on different platforms, looking for problems. While this might work fine at first, it does not scale very well. The number of screens you have to look through will keep growing, steadily increasing the time you need for testing, which in turn slows down development considerably.

Clicking through every screen every time you want to release a new feature is a very tedious process. And because you’ll be looking at the same screens over and over again, you (and possibly your testing colleagues) will start to overlook things.

So this manual process slows down development, is error-prone and, most importantly, is no fun!

Automate all the things?

As a developer, my usual response to a repetitive manual process is to automate it away with some clever scripts or tools. Sadly, that won’t work here either. Currently it’s not possible for a script to determine whether a visual change to a page is good or bad. While we might delegate this task to some revolutionary artificial intelligence in the future, it’s not a solution we can use right now.

What we can do: automate the pieces of the visual testing process where we can, while still having humans determine whether a visual change is intended.

Taking into account our requirements regarding quality and development speed, we’ll be looking for a tool that:

  • minimizes the manual steps in our development workflow
  • makes it easy to create, update, debug and run the tests
  • provides a solid user- and developer/tester experience

Introducing: VisualReview

To address these issues we’re developing a new tool called VisualReview. Its goal is to provide a productive and human-friendly workflow for testing and reviewing your web application’s layout for any regressions. In short, VisualReview allows you to:

  • use a (scripting) environment of your choice to automate screen navigation and take screenshots of selected screens
  • compare these screenshots against previously accepted ones
  • accept or reject any differences between runs in a user-friendly workflow

With these features (and more to come), VisualReview’s primary focus is to provide a great development process and environment for development teams.

How does it work?

VisualReview acts as a server that receives screenshots through a regular HTTP upload. When a screenshot is received, it is compared against a baseline and any differences are stored. After all screenshots have been analyzed, someone from your team (a developer, tester or any other role) opens the server’s analysis page to review the differences and accept or reject them. Every accepted screenshot becomes part of the baseline for future test runs.
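
VisualReview’s comparison logic lives inside the server, but the underlying idea is simple. As a conceptual illustration only (this is not VisualReview’s actual implementation), a naive pixel-by-pixel comparison of two equally sized PNG screenshots could look like this, using the third-party pngjs package:

[code language="javascript"]
// Conceptual illustration only -- this is NOT VisualReview's implementation.
// Assumes the third-party 'pngjs' package (npm install pngjs).
var fs = require('fs');
var PNG = require('pngjs').PNG;

function countDifferingPixels(baselinePath, newPath) {
  var baseline = PNG.sync.read(fs.readFileSync(baselinePath));
  var current = PNG.sync.read(fs.readFileSync(newPath));

  if (baseline.width !== current.width || baseline.height !== current.height) {
    throw new Error('Images have different dimensions');
  }

  var differing = 0;
  // Image data is a flat buffer of RGBA bytes, four bytes per pixel.
  for (var i = 0; i < baseline.data.length; i += 4) {
    if (baseline.data[i] !== current.data[i] ||         // red
        baseline.data[i + 1] !== current.data[i + 1] || // green
        baseline.data[i + 2] !== current.data[i + 2] || // blue
        baseline.data[i + 3] !== current.data[i + 3]) { // alpha
      differing++;
    }
  }
  return differing;
}

console.log(countDifferingPixels('baseline/main.png', 'new/main.png'));
[/code]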

[Diagram: how VisualReview works]

Sending screenshots to VisualReview is typically done from a test script. We already provide an API for Protractor (AngularJS’s browser testing tool, essentially an Angular-friendly wrapper around Selenium), but any environment can use VisualReview, since the upload is a simple HTTP REST call. A great example of this happened during a recent meetup where we presented VisualReview: a couple of attendees created a Node.js client for their own project, and a working version was running before the meetup was over.
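
To give an impression of what such a custom client involves, here is a rough Node.js sketch of a scripted screenshot upload. The endpoint paths and field names below are illustrative assumptions, not the documented API; consult VisualReview’s API documentation for the actual contract.

[code language="javascript"]
// A rough sketch of a custom VisualReview client in Node.js.
// The endpoint paths and field names are illustrative assumptions --
// check VisualReview's API documentation for the real contract.
var fs = require('fs');
var request = require('request'); // third-party HTTP client (npm install request)

var server = 'http://localhost:7000';

// Hypothetical flow: create a run, then attach a screenshot to it.
request.post({
  url: server + '/api/runs',
  json: { projectName: 'deep-thoughts', suiteName: 'smoke' }
}, function (err, response, run) {
  if (err) throw err;
  request.post({
    url: server + '/api/runs/' + run.id + '/screenshots',
    formData: {
      screenshotName: 'main',
      file: fs.createReadStream('screenshots/main.png')
    }
  }, function (err) {
    if (err) throw err;
    console.log('screenshot uploaded');
  });
});
[/code]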

Example workflow

To illustrate how this works in practice, I’ll be using an example web application: a Twitter clone called ‘Deep Thoughts’, where users can post a single-sentence thought, similar to Reddit’s Showerthoughts.
[Screenshot: the ‘Deep Thoughts’ example site]

Deep Thoughts is an Angular application, so I’ll be using Angular’s browser testing tool Protractor to test for visual changes. Protractor does not support sending screenshots to VisualReview by default, so we’ll add visualreview-protractor as a dependency to the Protractor test suite. After adding some additional Protractor configuration (a sketch of it follows the test script below) and making sure the VisualReview server is running, we’re ready to run the test script. It could look like this:

[code language="javascript"]
var vr = browser.params.visualreview;

describe('the deep thoughts app', function() {
  it('should show the homepage', function() {
    browser.get('http://localhost:8000/#/thoughts');
    vr.takeScreenshot('main');
  });
  […]
});
[/code]
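
The additional Protractor configuration could look roughly like the sketch below, based on visualreview-protractor’s documentation; the exact option and hook names may differ between versions.

[code language="javascript"]
// Sketch of the extra Protractor configuration for visualreview-protractor.
// Option and hook names are based on the library's documentation and may
// differ between versions.
var VisualReview = require('visualreview-protractor');

var vr = new VisualReview({
  hostname: 'localhost', // where the VisualReview server is running
  port: 7000
});

exports.config = {
  specs: ['deep-thoughts-spec.js'],

  // Expose the client to test scripts as browser.params.visualreview.
  params: {
    visualreview: vr
  },

  // Create a VisualReview run before the tests start...
  beforeLaunch: function () {
    return vr.initRun('deep-thoughts', 'smoke');
  },

  // ...and finalize it (reporting the run's outcome) when they are done.
  afterLaunch: function (exitCode) {
    return vr.cleanup(exitCode);
  }
};
[/code]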

With all pieces in place, we can now run the Protractor script:

protractor my-protractor-config.js

When all tests have been executed, the test script ends with the following message:

VisualReview-protractor: test finished. Your results can be viewed at: http://localhost:7000/#/1/1/2/rp

Opening the link in a browser brings up VisualReview’s screenshot analysis tool.

[Screenshot: the VisualReview analysis screen]

For this example we’ve already created a baseline of images, so this screen highlights the differences between the baseline and the new screenshot. As you can see, the left and right sides of the submit button are highlighted in red: it seems that someone has changed the button’s width. Using keyboard or mouse navigation, I can view both the new screenshot and the baseline to inspect these differences.

Now I can decide whether or not I’m going to accept this change using the top menu.

[Screenshot: accepting or rejecting a screenshot in VisualReview]

If I accept this change, the screenshot replaces the baseline image. If I reject it, the baseline remains as it is and the screenshot is marked as a ‘rejection’. Other team members can then use the filter option to look at all rejected screenshots, which allows for better cooperation within the team.

[Screenshot: the VisualReview filter menu]

Open source

VisualReview is an open source project hosted on GitHub. We recently released our first stable version and are very interested in your feedback. Try out the latest release or run it from an example project, and let us know what you think!

 
