RUM tooling has no magical diagnostic powers – that's down to the analyst!

Browser “real user” performance data has been around for several years – Gomez led the way with its BRUM/Actual XF product, launched in 2006/7.

More recently, the availability of W3C browser-based navigation timing metrics has led many vendors to jostle for a place in the market – Neustar, SOASTA, and more recently still Keynote and New Relic (not to mention Google and others) all have their wares in the shop window.

“Real user” or “end user” experience monitoring (I will use “RUM”) certainly has a lot to bring to the party. Particularly when used at high sample rates on intensively instrumented sites, RUM brings an important dimension to performance monitoring offered neither by exclusively infrastructure-based tooling (a long list, from CA Introscope to AppDynamics) nor by synthetic external “active” monitoring (e.g. Site Confidence, plus a cast of thousands).

Although RUM is powerful, it is important to keep in mind that, in the absence of detailed “root cause” data from object-level and/or infrastructure metrics, interpretation of RUM data has a large inferential element.

For example, RUM can give a good sense of page or transaction performance to end users by location. This is particularly useful to owners of sites with a highly distributed customer base. A RUM dashboard can give a useful “rule of thumb” indication (in real-time) not only of how well customers are being served, but also, by inference, the value added by any acceleration technologies used, typically, in this case, Content Delivery Networks (CDNs). That weasel word again – “inference”.
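The per-location view described above can be sketched in a few lines. This is a minimal, illustrative aggregation only – the beacon field names ("location", "load_ms") are hypothetical, and real RUM payloads vary by vendor:

```python
# Minimal sketch: aggregate RUM page-load samples by location and report
# median and (approximate) 95th percentile. Field names are assumptions.
from statistics import median, quantiles

def summarise_by_location(beacons):
    """Group page-load samples by location; report count, median and p95."""
    by_loc = {}
    for b in beacons:
        by_loc.setdefault(b["location"], []).append(b["load_ms"])
    summary = {}
    for loc, samples in by_loc.items():
        # statistics.quantiles needs at least two samples; fall back otherwise
        p95 = quantiles(samples, n=20)[-1] if len(samples) > 1 else samples[0]
        summary[loc] = {
            "n": len(samples),
            "median_ms": median(samples),
            "p95_ms": p95,
        }
    return summary
```

Note that the sample count ("n") travels with each summary – as the examples below show, a low-traffic location is a finding in itself, not something to discard.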

Provided that, as analysts, we keep the limitations of any metrics presented to us squarely in mind, we are in good shape. However, it is fatally easy to fall into an “absolutist” elephant trap. Two examples illustrate this fairly well (one from a recent vendor webinar, another from my own client practice).

The vendor example came from a demonstration by a major performance tooling vendor of their new RUM product. It seemed to tick all the usual boxes in terms of functionality, but the relevance to this discussion came with the map display. The presenter identified poor performance in a particular area, hovered over it, and then dismissed it on the basis of “low traffic”. He may well have been correct, but it may not be as simple as that… which brings me to my second example.

I was working with a major retailer in southern Europe some years ago to define and implement a performance monitoring strategy. As part of this, I was seeking to understand the key geographies to include in their test matrix (this was before the days of RUM). They were very clear about one thing – “We don’t need to worry about Valencia”. Reason – “We don’t get significant digital business from there”. This seemed odd at the time given the strength and prominence of the brand. It became clearer when we subsequently discovered a systemic issue with a tertiary ISP with a large customer base in that area. In other words, poor digital uptake was due to poor service (slow page response times to end users), not some cultural inclination to prefer bricks and mortar (or whatever).

So the bottom line is – be prepared to be counterfactual. Always question the data presented – don’t just take it at face value. High-level business judgements can be made about performance and revenue across the customer base. These hypothetical “benchmarks” can then be compared with ongoing real data, and any disparities flagged for further investigation using standard “best practice” approaches: for example, tracking patterns of performance over the business cycle, and running confirmatory tests (particularly in low-traffic areas and times) using synthetic monitoring with the appropriate browser agent.
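The “benchmark versus observed” comparison can be made mechanical. The sketch below is illustrative only – the tolerance value and region names are assumptions, and in practice the benchmarks would come from the business judgements described above:

```python
# Sketch: compare hypothetical business benchmarks with observed RUM data
# and flag disparities for follow-up. Threshold is an illustrative choice.

def flag_disparities(expected, observed, tolerance=0.25):
    """Return (region, reason) pairs where the observed metric deviates
    from the benchmark by more than `tolerance` (as a fraction), or where
    there is no data at all - itself a signal worth investigating."""
    flags = []
    for region, benchmark in expected.items():
        actual = observed.get(region)
        if actual is None:
            flags.append((region, "no data"))
            continue
        deviation = (actual - benchmark) / benchmark
        if abs(deviation) > tolerance:
            flags.append((region, f"{deviation:+.0%} vs benchmark"))
    return flags
```

Note that “no data” is deliberately a flag rather than a silent skip – the Valencia example above is precisely the case where absence of traffic was the symptom.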

Clearly, nobody is going to spend a lot of time chasing their tail to isolate a potential delivery issue affecting a handful of users in the Nevada desert or the arctic tundra – but if those users represent the potential digital revenue from Las Vegas, then it is worth looking into a bit more closely.

  • Set performance KPIs
  • Inform them with knowledge/projection of performance from known use cases
  • Identify and record differences from expected behaviour (revenue, traffic, browser/device mix)
  • Flag patterns and trends
  • Support RUM data with targeted “active” external synthetic end user monitoring.
    • Bear in mind that you have no customers in ISP datacentres
  • Isolate root cause issues
    • Third party (combined external ISP and end user testing)
    • Delivery infrastructure using APM tooling
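The “flag patterns and trends” step above can be sketched as a simple trailing-baseline check: compare the latest period’s median load time against the median of the preceding periods. Window size and threshold here are illustrative assumptions, not recommendations:

```python
# Sketch: flag a trend break by comparing the latest daily median against
# a trailing baseline. The 7-day window and 20% threshold are illustrative.
from statistics import median

def trend_flag(daily_medians_ms, window=7, threshold=1.2):
    """Return True if the latest value exceeds `threshold` x the median
    of the preceding `window` values; False if there is too little data."""
    if len(daily_medians_ms) <= window:
        return False
    baseline = median(daily_medians_ms[-(window + 1):-1])
    return daily_medians_ms[-1] > threshold * baseline
```

Anything this flags would then go through the root-cause steps in the list above (third-party testing, APM on the delivery infrastructure).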

Learn from scientific practice – create explanatory hypotheses, validate them using specific tests.

Above all – don’t rely on a passive acceptance of what a particular test type is telling you – you’re better than that!

Happy testing.

