Introduction to FEO – Granular Analysis Part 1

This post, part of an introductory series on Front End Optimisation practice, considers detailed analysis of client-side components. It is a relatively high-level treatment; a list of sources for more detailed study will be provided in the final (summary) post.

Other titles in this blog series are:

    • FEO – reports of its death have been much exaggerated – 22 February 2016  [also published in APM Digest]
    • Introduction to FEO – Tooling Part 1 – 1 March 2016
    • Introduction to FEO – Tooling Part 2 – 8 March 2016
    • Introduction to FEO – Operations – Process – 15 March 2016
    • Introduction to FEO – Granular Analysis Part 1 – 22 March 2016
    • Introduction to FEO – Granular Analysis Part 2 – 29 March 2016
    • Introduction to FEO – Granular Analysis Part 3 – 5 April 2016
    • Introduction to FEO – Edge Case management – 12 April 2016
    • Introduction to FEO – Summary, bibliography for further study – 12 April 2016

Granular FEO analysis

In earlier posts, I gave an overview of the types of tooling available for use as part of a Front End Optimisation effort, and sketched out a suggested process for effective results in this area.

Having understood the external performance characteristics of the application, in both ‘clean room’ and, more particularly, in a variety of end user monitoring conditions, we now approach the core of Front End Optimisation. Monitoring will give a variety of ‘whats’, but only detailed granular analysis will provide the ‘whys’ necessary for effective intervention.

The initial monitoring activity should have provided a good understanding of how your site/application performs across a range of demand conditions. In addition, regardless of the absolute speed of response, comparison with the performance of competitor and other sites should indicate how well visitor expectations are being met, and suggest initial goals for improvement.

Before plunging into hand-to-hand combat with the various client-side components of your site, it is worth taking time to ensure that whoever is charged with the analysis knows the site in detail. How is it put together? What are the key constraints – business model, regulation, 3rd party inclusions, legacy components – it’s a long list… Whilst being prepared to challenge assumptions, it is good to know what the ‘givens’ are and what is amenable to modification. This provides a good basis for detailed analysis. The team at Intechnica typically adopt a structured approach, as outlined below, bearing in mind that the focus of investigation will differ depending on what is found during the early stages.

As these posts are aimed at the ‘intelligent but uninformed’ rather than leading-edge experts, it is also worth ensuring that you are aware of the core principles. These are well covered in a number of published texts, although things move quickly, so the older the book, the more caution is required. A short suggested reading list is provided in the ‘Summary’ post at the end of this blog series.

In summary, a logical standard flow for the analysis phase could be as follows:

  • Rules-based screening
  • Anomaly investigation
  • Component-level analysis
  • Network-based investigation
  • Recommendations & ongoing testing

The above applies to all investigations, although tooling will differ depending on the nature of the target application. We take a similar approach to all PC-based applications. Analysis of delivery to mobile devices, whether web, webapp, or native mobile applications, benefits from some additional approaches, and these are also summarised below.

Taking the various stages in turn:

  • Rules-based screening:


Flippantly, traditional rules-based tools have the advantage of speed, and the disadvantage of everything else! Not quite true of course, but it is necessary to interpret results with caution for a number of reasons, including:

  • developments in technology and associated best practice (eg adoption of HTTP/2 makes image spriting – formerly a standard recommendation – an antipattern)
  • limitations of practical interpretation/priority (eg rules based on the percentage gains from compression can flag changes that are small in absolute terms)
  • plain errors (eg rules which interpret CDN usage as ‘not using our CDN’)

Perhaps for a combination of these reasons, the number of free screening tools is rapidly diminishing – YSlow, the (excellent) SmushIt image optimisation tool, and dynaTrace AJAX Edition have all been deprecated over the last year or so. PageSpeed Insights from Google is a ‘best in class’ survivor; it is incorporated within a number of other tools, and provides speed and usability recommendations for both mobile and PC.

So the message is – rules-based screening is a good method for rapidly getting an overall picture of areas for site optimisation, but a) use recent tools and b) interpret judiciously.

In general, the developer tools provided by the browser vendors are an excellent resource for detailed analysis. Access to PageSpeed Insights via the Chrome developer tools is illustrated (highlighted) below.

[Figure: Automated (rules-based) analysis – Google PageSpeed Insights]

Rules-based screening should provide an insight into the key areas for attention. It is particularly valuable for screening multiple components (eg cache settings) that would otherwise be time-intensive to inspect by hand.
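
Where many pages or templates need to be screened, the same checks can be scripted against the PageSpeed Insights web API rather than run one URL at a time in a browser. The sketch below is a minimal illustration in Python; it assumes the v2 runPagespeed endpoint, the requests library and hypothetical example URLs, and the response field names may differ between API versions, so treat it as a starting point rather than a finished tool.

```python
# Minimal sketch: batch screening of URLs with Google PageSpeed Insights.
# Assumes the v2 runPagespeed endpoint and the 'requests' library; response
# field names may differ between API versions.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v2/runPagespeed"

def screen(url, strategy="desktop", api_key=None):
    """Return the PageSpeed score and any flagged rules for a single URL."""
    params = {"url": url, "strategy": strategy}
    if api_key:                      # an (optional) key raises the request quota
        params["key"] = api_key
    resp = requests.get(PSI_ENDPOINT, params=params, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    score = data.get("ruleGroups", {}).get("SPEED", {}).get("score")
    rules = data.get("formattedResults", {}).get("ruleResults", {})
    # Keep only the rules reported as having a non-zero impact.
    flagged = {name: rule.get("ruleImpact", 0)
               for name, rule in rules.items() if rule.get("ruleImpact", 0) > 0}
    return score, flagged

if __name__ == "__main__":
    # Hypothetical pages - substitute the templates you actually need to screen.
    for page in ["https://www.example.com/", "https://www.example.com/checkout"]:
        score, flagged = screen(page, strategy="mobile")
        print(page, "- speed score:", score)
        for name, impact in sorted(flagged.items(), key=lambda kv: -kv[1]):
            print("   ", name, round(impact, 2))
```

The impact figures returned are relative weightings rather than absolute savings, so the caveats above about interpretation still apply.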

  • Anomaly investigation – slow vs fast vs median

The next logical step is to investigate the underlying root cause of anomalies highlighted in the preliminary monitoring phase. Average traces are useless (for all but, possibly, long-term trend identification), so it will be necessary to identify outliers and other anomalies from scattergrams of raw data and seek to associate them with underlying causes. Prior to detailed ‘drilldown’, consider possible high-level effects.

Common amongst these are traffic (compare with data from RUM or web analytics), poor resilience to mobile bandwidth limitations, and the impact of delivery infrastructure resource contention – from background batch jobs or crossover effects in multitenant providers.

[Figure: Multitenant platform effects – base HTML object response during Black Friday peak trade weekend 2015 (reference site in blue)]
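
As a concrete illustration of working from raw data rather than averages, the sketch below plots individual page-load samples as a scattergram and flags outliers with a simple interquartile-range rule. The CSV file and column names are assumptions standing in for whatever raw export your RUM or synthetic monitoring product actually provides.

```python
# Sketch: scattergram of raw page-load samples with simple IQR-based outlier flagging.
# 'samples.csv' and its column names are illustrative assumptions - adapt them to the
# raw export from your own RUM or synthetic monitoring tool.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("samples.csv", parse_dates=["timestamp"])

q1, q3 = df["load_time_ms"].quantile([0.25, 0.75])
upper = q3 + 1.5 * (q3 - q1)                 # conventional outlier threshold
df["outlier"] = df["load_time_ms"] > upper
print(f"{df['outlier'].sum()} of {len(df)} samples above {upper:.0f} ms")

# Plot every sample rather than an average; highlight outliers for drilldown.
normal, slow = df[~df["outlier"]], df[df["outlier"]]
plt.scatter(normal["timestamp"], normal["load_time_ms"], s=8, label="normal")
plt.scatter(slow["timestamp"], slow["load_time_ms"], s=16, color="red", label="outlier")
plt.axhline(upper, linestyle="--", color="grey")
plt.ylabel("Page load time (ms)")
plt.legend()
plt.show()
```

The flagged samples are only a starting point; the value comes from correlating them with traffic, geography, device type and the other high-level effects noted above.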

The amount of detail available will obviously depend upon the tooling used for the initial monitoring, although recurrent effects, if identified, should enable focused repeat testing with other, more analysis-focused products such as WebPageTest.
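
Such focused repeat testing can itself be scripted. The sketch below submits a test through the public WebPageTest API and polls for the result; the endpoint names, parameters, test location and response fields shown are assumptions, so check the current WebPageTest API documentation and substitute your own API key.

```python
# Sketch: submitting a focused repeat test via the WebPageTest API and polling
# for the result. Endpoint names, parameters and response fields are assumptions -
# check the current WebPageTest API documentation and use your own API key.
import time
import requests

WPT = "https://www.webpagetest.org"
API_KEY = "YOUR_API_KEY"                       # hypothetical placeholder

def run_test(url, runs=3, location="Dulles:Chrome"):
    """Submit a test and wait for the median first-view results."""
    submit = requests.get(f"{WPT}/runtest.php",
                          params={"url": url, "k": API_KEY, "runs": runs,
                                  "location": location, "f": "json"})
    submit.raise_for_status()
    test = submit.json()["data"]
    while True:
        result = requests.get(test["jsonUrl"]).json()
        if result.get("statusCode") == 200:    # 1xx codes mean queued or still running
            return result["data"]
        time.sleep(10)

if __name__ == "__main__":
    data = run_test("https://www.example.com/")
    first_view = data["median"]["firstView"]
    print("Load time:", first_view.get("loadTime"), "ms,",
          "TTFB:", first_view.get("TTFB"), "ms")
```

Running several scripted repeats at the times when anomalies recur makes the comparison with 'good' periods far more systematic than ad hoc manual tests.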

A few notes:

  • Statistical analysis of individual components is powerful – compare the maximum, minimum and dispersion of individual components (DNS time, connect time, etc.) between median and outlier responses. Progressively remove specific content (eg 3rd party tags) and compare the effect. A sketch of this component-level comparison follows these notes.

[Figure: Visual progress charts with and without 3rd party affiliates (WebPageTest)]

[Figure: Daily traffic patterns to a major UK eCommerce site (Google Analytics)]

[Figure: Intraday analysis – peak vs low traffic]

  • Beware distortion – particularly if page load endpoints have been insufficiently well defined (see earlier posts). Waterfall charts should always be inspected to detect ‘gotchas’ such as below-the-fold asynchronous content or server push connections. Caution needs to be exercised in the interpretation of short responses as well as long ones. Compare payloads – these are often affected by variable implementation of server-side compression, or by content failure.

APM Baseline data may be useful here – although baseline management deserves a post to itself!
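
To make the component-level comparison in the first note concrete, the sketch below splits a set of raw measurements into a median band and outliers, then compares the spread of each timing component across the two groups. The CSV file and its column names are hypothetical; adapt them to whatever per-test export your tooling produces.

```python
# Sketch: compare timing components (DNS, connect, TTFB, ...) between median-band
# and outlier responses. 'tests.csv' and its columns are illustrative assumptions.
import pandas as pd

df = pd.read_csv("tests.csv")
components = ["dns_ms", "connect_ms", "ttfb_ms", "load_time_ms"]

# Classify each sample against the overall load-time distribution.
q1, q3 = df["load_time_ms"].quantile([0.25, 0.75])
upper = q3 + 1.5 * (q3 - q1)
df["band"] = "median"
df.loc[df["load_time_ms"] > upper, "band"] = "outlier"

# Minimum, median, maximum and dispersion of each component, split by band.
summary = df.groupby("band")[components].agg(["min", "median", "max", "std"])
print(summary.round(1))

# A component whose spread grows sharply in the outlier band (eg connect time)
# indicates where the detailed drilldown should focus.
```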

Further consideration will be given to detailed analysis in the next post [Granular Analysis Part 2].
