Introduction to FEO – Tooling Pt 1

This is Blog 2 of my ‘Introduction to Front End Optimisation’ [FEO] series. The series is designed to provide an overview of effective best practice in external monitoring and Front End Optimisation for newcomers to the field, particularly non-technical business managers. Many advanced reference works exist for those wishing to develop a specialism in this field; some are listed in Blog 7 – summary & bibliography.

Other titles are:

    • FEO – reports of its death have been much exaggerated [also published in APM Digest] – 22 February 2016
    • Introduction to FEO – Tooling Part 1 – 1 March 2016
    • Introduction to FEO – Tooling Part 2 – 8 March 2016
    • Introduction to FEO – Operations – Process – 15 March 2016
    • Introduction to FEO – Granular Analysis Part 1 – 22 March 2016
    • Introduction to FEO – Granular Analysis Part 2 – 29 March 2016
    • Introduction to FEO – Granular Analysis Part 3 – 5 April 2016
    • Introduction to FEO – Edge Case management – 12 April 2016
    • Introduction to FEO – Summary, bibliography for further study – 12 April 2016

This is the second in my nine-post blog series for newcomers to Front End Optimisation and analysis. APM tooling certainly has its place here, particularly for integrated, ongoing monitoring. However, it is probably useful to think of FEO as an extension activity, undertaken separately from the core KPI tracking and issue resolution supported by APM. I will reference APM tooling in the context of the various categories considered. To keep the size manageable, I will split the tooling consideration into two posts: introduction and synthetic testing (this one); and RUM, including mobile [Tooling Part 2].

Let’s start with a summary of available tool types (split across the two parts), and then a structured FEO process. I am assuming an operations-centric rather than developer-centric approach. Certainly, the most robust way of ensuring client-side performance efficiency is to bake it in from inception, using established ‘Performance by Design’ principles and cutting-edge techniques. However, as “I wouldn’t have started from here” is not exactly a productive recommendation in most cases, let’s set the scene for approaches to understanding and optimising the performance of existing web applications.

So, tooling. Any insights gained will start with the tools used. The choice will depend upon the technical characteristics of the target (eg ‘traditional’ HTTP website, Single Page Application, WebApp, native mobile app), and the primary objective of the test phase [the spectrum from (ongoing) monitoring through to (point) analysis].

Note: I will use examples drawn from many tools to illustrate particular points. These do not necessarily represent overall endorsement of the specific tools. Any decision should be made given a broad consideration of your individual needs and circumstances.

The first hurdle is gaining appropriate visibility. However, it must be noted that any tool will produce data; the key is effective interpretation of the results. This is largely a function of knowledge and control of the test conditions.

A good place to start in tool selection is to stand back from the data and understand the primary design goal of the particular class of tool. As examples, consider two widely used tools, each superficially relevant yet neither appropriate to FEO work.

The first is Google Analytics. This powerful, mass-market product will certainly generate some performance (page response) data. However, the tool is primarily designed for behavioural web analytics. The information it provides can be extremely useful for defining analysis targets, both in terms of key transaction flows and specific cases, eg top-ranked SEO destination pages with high bounce rates. It is of limited use for FEO analysis for a number of detailed reasons, but mainly because the reported performance figures are averaged from a tiny sample of the total traffic, and granular component response data is absent.
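To illustrate the sampling point: with the analytics.js snippet of the period, the site speed sample can be raised above its 1% default, but the output remains a sparse, averaged sample rather than per-component timings. A minimal sketch (the property ID is a placeholder):

    // Raise GA's site speed sampling above the 1% default.
    // GA still caps the volume of timing hits it will process per day.
    ga('create', 'UA-XXXXXX-1', 'auto', { siteSpeedSampleRate: 10 });
    ga('send', 'pageview');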

The second is Sauce Labs. This is more of a niche vendor than GA, but its product is certainly a fine one of its type. Sauce Labs offers comparative cross-browser and device testing, both emulated and real-device. However, all testing originates in the US, introducing high and unpredictable latency. This tooling is excellent for functional testing, which is what it is designed to do; different choices are required for effective FEO support.

So, what are the relevant categories of front end test tooling? The following does not seek to provide a blow-by-blow comparison of the multiplicity of competitors in each category – in any case, the best choice for you will be determined by your own specific circumstances. Rather, it is a high level category guide. As a general rule of thumb, examples from each category will ideally be used in combination to provide a broad insight into end user performance status and Front End Optimisation. Modern APM tools increasingly tick many of these boxes, although some of the more arcane (but useful) details are yet to appear.

As we will see when considering process, FEO practice in Operations essentially consists of two aspects. One is understanding the outturn performance to external end points (usually end users). This is achieved through monitoring, that is, obtaining an objective understanding of transaction, page, or page component response, either from replicate tests in known conditions or from site visitors over time.

Monitoring provides information relative to patterns of response of the target site or application, both absolute and relative to key competitors or other comparators.

The other aspect is Analysis of the various components delivered to the end user device.  These components fall into three categories: static, dynamic, or logic (JavaScript code). Data for detailed analysis may be obtained as a by-product of monitoring, or from single or multiple point ‘snapshot’ tests. Component analysis will be covered in a subsequent post.

Tools for monitoring of external performance fall into two distinct types: active or passive.

Active (also called Synthetic) monitoring involves replicate testing from known external locations. The data captured essentially reports the network interactions between the test node and the target site. Typical objectives are:

  1. Understanding the availability of the target site
  2. Understanding site response/patterns in consistent test conditions – for example to determine long term trends, the effect of visitor traffic load, performance in low traffic periods, or objective comparison with competitor (or other comparator) sites
  3. Understanding response/patterns of individual page components. These can be variations in the response of the various elements of the object delivery chain – DNS resolution, initial connection, first byte (ie the dwell time between the connection handshake and the commencement of data transfer over the connection – a measure of infrastructure latency), and content delivery time. Alternatively, the objective may be to understand the variation in total response time of a specific element, for example 3rd party content (useful for Service Level Agreement management).
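These delivery chain phases map directly onto the W3C Navigation Timing API, which is also what most synthetic agents report against. A minimal browser console sketch of the derivation (Level 1 field names):

    // Delivery chain phases derived from Navigation Timing (Level 1)
    var t = performance.timing;
    console.table({
      dns:       t.domainLookupEnd - t.domainLookupStart, // DNS resolution
      connect:   t.connectEnd - t.connectStart,           // TCP/TLS handshake
      firstByte: t.responseStart - t.requestStart,        // infrastructure latency ('dwell time')
      content:   t.responseEnd - t.responseStart,         // content delivery
      onload:    t.loadEventStart - t.navigationStart     // the traditional page load marker
    });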

Increasingly, modern APM tools offer a synthetic monitoring option. These tend to be useful in the APM context – ie holistic, ongoing performance understanding – but more limited in terms of control of test conditions and of the more granular aspects of FEO point analysis, such as Single Point Of Failure (SPOF) testing of third party content.

In brief, key aspects of such tooling for FEO analysis are:

  • Range of external locations – geography and type
    • eg Tier 1 ISP/LINX test locations; end user locations; private peer (ie specific known test source)
    • PC and mobile (the latter increasingly important)
  • Control of connection conditions – hardwired vs wireless; connection bandwidth
  • Ease & sophistication of transaction scripting – introducing cookies, filtering content, coping with dynamic content (popups etc)
  • Control of recorded page load end point

As a rule of thumb, the more control the better. However, a good compromise position is to take whatever is on offer from the APM vendor – provided you are clear as to exactly what is being captured – and supplement this with a ‘full fat’ tool that is more analysis-centric. WebPageTest is a popular, open source choice (a scripted sketch follows below), though beware variable test node environments if using the public network.
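By way of illustration, WebPageTest’s scripting language supports this sort of control directly. A minimal sketch (parameters are tab-separated; the hostnames and cookie value are placeholders) that injects session state and black-holes a third party host for SPOF testing:

    logData	0
    setCookie	http://www.example.com	session=abc123
    setDnsName	ads.example.com	blackhole.webpagetest.org
    logData	1
    navigate	http://www.example.com/checkout

Here logData 0/1 excludes the setup steps from the recorded results, setCookie injects a cookie before navigation, and setDnsName routes the third party host to WPT’s blackhole server so that its failure mode can be observed – the SPOF case referenced earlier.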


Synthetic testing – custom end user peer clusters – note the flexibility in terms of geography and connection speed (dynaTrace Synthetic ‘Last Mile’ PC testing)

A final word on page load end points. ‘Traditional’ synthetic tools (such as Gomez/dynaTrace Synthetic in the example above) relied on the page onload navigation marker. It really is essential to define an end point more closely based on end user experience – ie browser fill time. With older tools this needs to be done by introducing a flag to the page: either existing content, such as an image appearing at the base of the page (at a given screen resolution), or such content introduced at the appropriate point. This marker can then be recorded by modifying the test script.

Note that, given the dynamic nature of many sites, attempting to time to a particular visual component can be a short lived gambit. Introducing your own marker, assuming that you have access to the code, is a more robust intervention.
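A minimal sketch of such an introduced marker, using the W3C User Timing API (the mark name is arbitrary): placed immediately after the final piece of above-the-fold content, it gives the test script a stable flag that survives content changes.

    <!-- placed directly after the final above-the-fold element -->
    <script>
      // Record a named timestamp for the synthetic test script (or RUM) to read back
      if (window.performance && performance.mark) {
        performance.mark('viewport_filled');
      }
    </script>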

Some modern tools (eg AppDynamics APM) have introduced this as a standard feature, and it is likely that competitors will follow suit. Relying on the onload marker will produce results that bear no meaningful relationship to end user experience, particularly on sites with high affiliate content loads.

Modifications of standard testing to meet the requirements of – or manage misleading results from – specific cases (eg server push, Single Page Applications) will be covered in a subsequent post.
