Introduction to FEO – Tooling Pt 2

This post continues the introductory survey, concluding the examination of tooling approaches.

Other titles are:

    • FEO – reports of its death have been much exaggerated – 22 February 2016  [also published in APM Digest]
    • Introduction to FEO – Tooling Part 1 – 1 March 2016
    • Introduction to FEO – Tooling Part 2 – 8 March 2016
    • Introduction to FEO – Operations – Process – 15 March 2016
    • Introduction to FEO – Granular Analysis Part 1 – 22 March 2016
    • Introduction to FEO – Granular Analysis Part 2 – 29 March 2016
    • Introduction to FEO – Granular Analysis Part 3 – 5 April 2016
    • Introduction to FEO – Edge Case management – 12 April 2016
    • Introduction to FEO – Summary, bibliography for further study – 12 April 2016

Tooling Part 1 considered synthetic (otherwise known as active) monitoring of PC-based sites – examining data from replicated ‘heartbeat’ external tests run under known conditions. Now let’s consider the complementary monitoring of actual visitor traffic, and aspects of mobile device monitoring.

Passive monitoring, variously known as Real User Monitoring [RUM], End User Monitoring [EUM], or User Experience Monitoring [UEM], is based on the performance analysis of actual visitors to a website. This is achieved by the (manual or, more usually, automatic) introduction of small JavaScript components to the webpage. These typically record and return (by means of a beacon) the response values for the page, based on standard W3C Navigation Timing metrics – DOM ready time, page onload time, etc. It is worth noting in passing that these are not supported by all browsers – notably older versions of Safari, among others. However, the proportion of user traffic on unsupported versions of non-Safari browsers will probably be fairly negligible today, at least for core international markets.
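The beacon pattern described above can be sketched as follows. This is a minimal, illustrative example, not any vendor's actual agent code: the timing field names follow the W3C `performance.timing` interface, while the collector endpoint is hypothetical.

```javascript
// Minimal sketch of the RUM beacon pattern. Derives headline metrics
// from a W3C Navigation Timing record (field names match the legacy
// performance.timing API).
function extractNavTimings(t) {
  return {
    ttfb: t.responseStart - t.navigationStart,              // time to first byte
    domReady: t.domContentLoadedEventEnd - t.navigationStart,
    onload: t.loadEventEnd - t.navigationStart,
  };
}

// In a browser, a RUM script would post the metrics to a (hypothetical)
// collector endpoint via navigator.sendBeacon:
function sendRumBeacon(endpoint) {
  if (typeof navigator === 'undefined' || !navigator.sendBeacon) return false;
  const metrics = extractNavTimings(performance.timing);
  return navigator.sendBeacon(endpoint, JSON.stringify(metrics));
}
```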


A RUM dashboard, showing near real-time performance by geography, device, etc.  [AppDynamics]

Modern RUM tooling increasingly captures some information at object level as well (or can be modified to do so). A useful capability, available in some tools, is the ability to introduce custom end points. If supported, these can be coordinated with appropriately modified synthetic tests (as discussed in blog 2), providing the ability to read across between active and passive test results.
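Object-level capture of the kind mentioned above typically draws on the W3C Resource Timing API. The sketch below is an assumption about how such data might be summarised, not any specific tool's implementation; the entries mimic `PerformanceResourceTiming` records.

```javascript
// Illustrative summary of object-level (Resource Timing) data: group
// resource entries by initiatorType and total their size and duration.
function summarizeResources(entries) {
  const summary = {};
  for (const e of entries) {
    const type = e.initiatorType || 'other';
    if (!summary[type]) summary[type] = { count: 0, bytes: 0, totalMs: 0 };
    summary[type].count += 1;
    summary[type].bytes += e.transferSize || 0;
    summary[type].totalMs += e.duration || 0;
  }
  return summary;
}

// In a browser this would be fed from the real API:
//   summarizeResources(performance.getEntriesByType('resource'));
```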

A further useful capability in some RUM tools is Event Timing. Event Timing involves the placing of flags to bracket and record specific user-invoked events (for example, the invocation of a call to a Payment Service Provider as part of an eCommerce purchase).

The ability to report on transaction timings (as opposed to single page or page group performance) is particularly useful, although relatively rarely supported. When present, this extends the ability to monetise performance, that is, to understand the association between page response to end users and business-relevant metrics such as order size or transaction abandonment.

Creating such performance:revenue decay curves (for different categories of user) – together with an understanding of performance relative to key competitors – enables decision support regarding optimal site performance, ie avoiding under- or over-investment in performance.

Another approach to monetisation is to use the events database analytics extensions offered by some APM Vendors. Examples include New Relic Insights and AppDynamics Analytics. These types of offering certainly provide powerful visibility, through the ability to perform multiparameter SQL-like interrogations of rich business and application big-data sets. To obtain maximal value, such products should ideally support relational joins – to, for example, compare conversion rates between transaction speed ‘buckets’. It is worth delving into the support (immediate or planned) from a given Vendor for the detailed outputs that will underpin business decision support in this important area.
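The speed-bucket analysis described above can be illustrated with a few lines of code. This is a generic sketch of the idea, not a vendor query language; the session data shape is an assumption.

```javascript
// Illustrative monetisation analysis: bucket sessions by page response
// time and compare conversion rates per bucket.
function conversionBySpeedBucket(sessions, bucketMs) {
  const buckets = new Map();
  for (const s of sessions) {
    const key = Math.floor(s.responseMs / bucketMs) * bucketMs; // 0, 1000, 2000...
    const b = buckets.get(key) || { sessions: 0, conversions: 0 };
    b.sessions += 1;
    if (s.converted) b.conversions += 1;
    buckets.set(key, b);
  }
  return [...buckets.entries()]
    .sort((a, b) => a[0] - b[0])
    .map(([bucketStart, b]) => ({
      bucketStart,
      conversionRate: b.conversions / b.sessions,
    }));
}
```

Plotting conversion rate against bucket start time yields exactly the performance:revenue decay curve discussed earlier.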

A key question:

  • What is the optimum performance goal balancing investment vs revenue return?


Monetisation: Revenue bearing transaction performance vs revenue [AppDynamics]


Monetisation: Key page response vs transaction abandonment rate [dynaTrace]

Mobile device monitoring

Effective monitoring and analysis of application delivery to mobile devices is crucial, given the predominance of mobile users on many sites. Tooling categories are outlined below, together with their use cases. It is likely that a combination of tools will be required.

A core distinction is between emulated and real device testing. Emulation testing has the advantage of convenience and the ability to rapidly test delivery across a wide variety of device types. It also uses a consistent, powerful PC-based platform. This can be useful depending on the precise nature of the testing undertaken. Emulation consists of ‘spoofing’ the browser user agent string such that the request is presented to the target site as coming from a mobile device. Given that it is important to replicate (a range of) realistic user conditions to gain an understanding of actual performance in the field, the most useful tools will permit comparison across a variety of connection types and bandwidths – ‘hardwired’; Wi-Fi; and public carrier network.

Many tools (eg WebPageTest, browser dev tools) only offer hardwired connectivity, throttled to provide a range of connection speeds. This can be appropriate during ‘deep dive’ analysis. It is however insufficient for monitoring comparisons.


Emulation testing – ‘device’ selection [Chrome Developer Tools]

Real device monitoring

Testing from real mobile devices has a number of advantages. Access to the GUI for script recording (as, for example, in Perfecto Mobile) enables visual end point recording. Transactions may be recorded, not only for websites but also native mobile applications. A further advantage of testing from real devices is the enhanced control over, and understanding of, the performance influence of device characteristics. The performance delivered to a given device is likely to be influenced by system constraints. These may be inherent (eg processor and memory capacity, Operating System version) or dynamic (battery state, memory and CPU utilisation, etc). In addition, user behaviour and environmental factors can have a significant influence – everything from applications running in the background, to the number of browser tabs open, or even the ambient temperature.


Testing from real devices – device selection [Perfecto Mobile]

It’s that control word again – the more accurate your modelling of particular test conditions (particularly edge states), the more accurate and relevant your interpretation will become.

Native mobile application analysis & monitoring

Two approaches are possible here. For monitoring/visitor analysis, the most widely used approach (and that adopted by APM tooling) is Software Development Kit (SDK) based measurement. The application is instrumented by introducing libraries to the code via the SDK. The degree of visibility can usually be extended by introducing multiple timing points, eg for the various user interactions across a logical transaction. Errors are reported, usually together with some crash data.

All the major Vendors support both Android and iOS. Choices for other OSs (RIM, Windows Mobile) are much more limited due to their relatively small market share. Among the Gartner ‘Magic Quadrant’ APM vendors I believe that only New Relic have any support in this area, via Titanium (cross platform) – at the time of writing, anyway.

SDK instrumentation options [New Relic]

Other tools exist for point analysis of native apps. AT&T’s Application Resource Optimiser (ARO) utility is a useful (open source) example. This screens applications against best practice in 25 areas, based on initial download size and network interactions (pcap analysis) via a VPN probe.


AT&T ARO – rules based best practice analysis (25 parameters) for native mobile apps

APM based external monitoring

Most modern APM tools will offer both synthetic and passive external monitoring to support ‘end to end’ visibility of user transactions. Although it is possible to integrate ‘foreign’ external monitoring into an APM backend, this is unlikely to repay the effort and maintenance overhead. The key advantage of using the APM vendor’s own end user monitoring is that the data collected is automatically integrated with the core APM tool. The great strength of APM is the ability to provide a holistic view of performance. The various metrics are correlated, thus supporting a logical drilldown from a particular end user transaction to root cause, whether application code or infrastructure based.

It is important to understand any limitations of the RUM and active test capabilities offered, both to assist in accurate interpretation and to make provision for supplementary tooling to support deep dive FEO analytics.

Ultimately, the strength of an APM tool lies in its ability to monitor over time against defined KPIs and Health Rules, to understand performance trends and issues as they occur, and to rapidly isolate the root cause of such issues.

These are powerful benefits. They do not, however, extend well to detailed client-side analysis against best practice ‘performance by design’ principles. Such analysis is best undertaken as a standalone exercise, using independent tools designed specifically with it in mind. The key use case for APM is to support an understanding of ‘now’ in both absolute & relative terms, and to support rapid issue isolation/resolution when problems occur.


Front-End : Back-End correlation – RUM timings, link (highlighted) to relevant backend data

I will cover some aspects of standalone optimisation in blog 5 of this series [Granular analysis].

