Modern web applications often struggle to render API data as quickly as possible and to batch work for UI rendering. With React in particular, we sometimes need to check how a third-party library or a new feature affects our application's load and rendering time.

How it started

When we published our web application/dashboard to production about two years ago, we weren't concerned about loading speed or rendering times, because the application was small and not very complex. But over time, as we shipped feature after feature, rendering time grew. That's the usual course for almost any application's development, but the key point is that we started measuring performance metrics only after the app had become significantly slow.

By the time it gets to that point, you have to refactor a lot of components on the UI side, or investigate API response times. In our case we simply blocked new feature development for a few sprints and proceeded with code refactoring and load/render time measurement, slowly improving the application's speed and load time.

BUT! Here is the key thing: we should have been doing this on every feature release, when the code improvements could have been made in small chunks.

Website metrics during CI

When we realized we had to spend a lot of time refactoring most of the application's components, we decided to track application performance metrics, fix the issues, and then watch how the numbers change with each new feature. So we integrated headless Google Chrome into our automated test process and started keeping performance metrics in our time-series database, to track changes over time and jump on refactoring as soon as the app starts to slow down.

import puppeteer from 'puppeteer';
import devices from 'puppeteer/DeviceDescriptors';

const iPad = devices['iPad'];
const iPhone = devices['iPhone 6'];

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Getting Desktop metrics
await page.setViewport({ width: 1366, height: 768 });
await page.goto(url);
const metricsDesktop = await page.metrics();

// Getting Tablet metrics
await page.emulate(iPad);
await page.reload();
const metricsTablet = await page.metrics();

// Getting Mobile metrics
await page.emulate(iPhone);
await page.reload();
const metricsMobile = await page.metrics();

await browser.close();
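The object returned by page.metrics() is a flat map of counters and timings (ScriptDuration, TaskDuration, JSHeapUsedSize, and so on). A minimal sketch of turning one snapshot into a time-series data point — the point shape and field selection here are our assumptions, not a Puppeteer API:

```javascript
// Flatten a page.metrics() snapshot into a tagged data point
// suitable for writing to a time-series database.
function toDataPoint(metrics, device, timestamp = Date.now()) {
  return {
    measurement: 'page_metrics',
    tags: { device }, // 'desktop', 'tablet', 'mobile'
    fields: {
      scriptDuration: metrics.ScriptDuration, // seconds spent executing JS
      taskDuration: metrics.TaskDuration,     // total main-thread task time
      jsHeapUsedSize: metrics.JSHeapUsedSize, // bytes of used JS heap
    },
    timestamp,
  };
}
```

Called as `toDataPoint(metricsDesktop, 'desktop')` for each device run, it gives one comparable record per build.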

This process runs every time we push code to the master branch and the final app tests start. If the performance metrics are extremely bad, our CI system rejects the new release, so we know there is a significant load/rendering issue.
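The gate itself can be a small check in the test stage that collects failures and exits non-zero when any metric crosses its limit. A sketch, with entirely made-up limit values that would need tuning for a real app and CI hardware:

```javascript
// Illustrative limits for fields returned by page.metrics().
const LIMITS = {
  ScriptDuration: 3,                 // seconds
  TaskDuration: 6,                   // seconds
  JSHeapUsedSize: 120 * 1024 * 1024, // bytes
};

// Returns a list of human-readable failures; empty means the release may proceed.
function checkMetrics(metrics, limits = LIMITS) {
  const failures = [];
  for (const [name, limit] of Object.entries(limits)) {
    if (metrics[name] > limit) {
      failures.push(`${name}: ${metrics[name]} exceeds limit ${limit}`);
    }
  }
  return failures;
}

// In a CI script: process.exit(checkMetrics(metricsDesktop).length ? 1 : 0);
```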

Automating results capturing

As part of our project we started collecting website metrics from all over the Web and building an internal ranking of average load times for React and Angular apps. This helps us provide an API-based CI integration with always-available data about web app metrics, and keep track of performance for each new feature release.

Generally, every CI platform can be integrated with Google Puppeteer and use headless Chrome as part of server-side browser testing. But keep in mind that the overall performance metrics for your web app can be completely different on a CI platform and on your laptop. CI platforms usually run on shared server space serving hundreds of tests, where each test runs inside a container with no more than 512 MB of RAM and 1 CPU. This means all website metrics are relative to whatever else is going on on that CI server, BUT the overall process shouldn't be that different. In our case we set up minimum and maximum thresholds, which works in most cases, though sometimes we get test failures without any actual performance downgrade: it turns out our CI server is simply overloaded and the metrics go up.
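One way to soften the effect of a noisy shared CI server is to compare each measurement against a rolling baseline of recent runs rather than an absolute number, failing only when the metric exceeds the recent median by some factor. This is a sketch of the idea, not our exact implementation; the tolerance factor is an assumption:

```javascript
// Median of a list of numeric samples.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Flag a regression only when the current value exceeds the median of
// recent runs by the given factor, absorbing one-off CI-load spikes.
function isRegression(current, recentRuns, factor = 1.5) {
  if (recentRuns.length === 0) return false; // nothing to compare against yet
  return current > median(recentRuns) * factor;
}
```

With recent TaskDuration samples of [2, 2, 3] seconds, a run of 10 seconds is flagged while 2.5 seconds is not.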


Congrats, you got to this point!! Now we know why it is important to track website metrics, including full load time and app rendering. It helps keep users happy and your code growing more optimized.

Clap 👏 for this article and share your experience with fully automated tests!