
Cypress – Dealing with flaky tests

02 Dec, 2016

Test automation is all about feedback: feedback that gives you quality updates about the features your team has built. A continuously green build is always the goal, because it should give you the confidence you need to go to production. Unfortunately, I’m more used to a “traffic light build”, a build that passes and fails intermittently, mainly because of flaky tests. In my opinion, that is one of the worst things about end-to-end testing.

Why on earth do we still put software into production when we can’t trust our test automation?! Well, that’s because we retry the build a couple of times until we have a lucky hit: the build is green and we’re ready to go to production. Although this is a solution, it still feels like a pretty silly thing to do.

Another option, which has my preference, is refactoring those tests until they actually work. The annoying part is that you don’t know what’s going on, and most of the time the failures are impossible to reproduce. Trying to reproduce them costs a lot of time spent debugging, analysing and, above all, guessing where the problem is.

What if we could actually see what’s going on?

A while ago I tried out Cypress.io, a new hero in the web application testing world. Cypress reduces the effort and complexity of writing and debugging integration tests. Cypress uses Chromium instead of PhantomJS / Selenium WebDriver, which gives you the opportunity to debug and run your tests at the same time.

Let’s give it a try. I copied some of our flaky tests from our current Protractor setup and ran them in exactly the same way, only with Cypress instead. Now the amazing part: because the tests and the application run in the same browser instance, Cypress presented me with some very useful logging. This helped me identify the following issues in my existing Protractor setup:

  • The tests clicked on elements that weren’t yet available in the DOM (sketched below this list).
  • The tests caused API calls to be aborted in order to complete the next command (e.g. clicking on an element).
  • API calls and clicks on elements were clearly influencing each other.
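To make the first issue concrete, here is a minimal sketch of the kind of Protractor test that behaves this way; the page, selectors and API are hypothetical, not taken from our actual suite. It clicks and asserts straight away, without waiting for the API call that renders the element.

```javascript
// Hypothetical Protractor/Jasmine test illustrating the flaky pattern:
// nothing waits for the API call that renders the user rows.
describe('user overview', function () {
  it('opens the details of the first user', function () {
    browser.get('/users');

    // If the API call has not finished yet, this row is not in the DOM
    // and the click fails intermittently.
    element.all(by.css('.user-row')).first().click();

    expect(element(by.css('.user-details')).isDisplayed()).toBe(true);
  });
});
```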

Assertion on an element failed while the API was still loading its data:
[screenshot]

Assertion on the element after we explicitly wait for the API call:
[screenshot]

The test is written in JavaScript, including some Cypress syntax:
[screenshot]

So in our case, APIs (both mocked and real ones) were the main problem with our existing test setup. Cypress automatically waits for the page and its JavaScript to finish loading, and it gives you the option to wait explicitly for a specific API call. It was actually quite straightforward to turn a flaky test into a green test again.
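As an illustration, here is a minimal sketch of such an explicit wait, using the Cypress syntax of that time (cy.server() plus an aliased cy.route(), then cy.wait()); the endpoint, alias and selector are hypothetical:

```javascript
// Hypothetical Cypress test: wait for a specific API call before asserting.
describe('user overview', function () {
  it('shows the users once the API call has finished', function () {
    cy.server();                                   // start observing XHRs
    cy.route('GET', '/api/users').as('getUsers');  // alias the call we care about

    cy.visit('/users');
    cy.wait('@getUsers');                          // explicitly wait for the API call

    // Cypress retries this assertion until it passes or times out.
    cy.get('.user-row').should('have.length.greaterThan', 0);
  });
});
```

Because the wait is tied to the request itself rather than to a fixed sleep, the test no longer depends on how fast the API happens to respond.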

Want to play with Cypress and experience it for yourself? Join our Test Masters Meetup on Cypress on December 15th, in Hilversum, the Netherlands.

Qxperts. We empower companies to deliver reliable & high-quality software. Any questions? We are here to help! www.qxperts.io
