Introduction to FEO – granular analysis Pt 3

This post, part of an introductory series to Front End Optimisation practice, considers detailed analysis of clientside components. It is a relatively high level treatment. A list of sources for more detailed study will be provided in the final (summary) post.

Other titles in this blog series are:

  • FEO – reports of its death have been much exaggerated [also published in APM Digest] – 22 February 2016
  • Introduction to FEO – Tooling Part 1 – 1 March 2016
  • Introduction to FEO – Tooling Part 2 – 8 March 2016
  • Introduction to FEO – Operations – Process – 15 March 2016
  • Introduction to FEO – Granular Analysis Part 1 – 22 March 2016
  • Introduction to FEO – Granular analysis Part 2 – 29 March 2016
  • Introduction to FEO – Granular analysis Part 3
  • Introduction to FEO – Edge Case management
  • Introduction to FEO – Summary, bibliography for further study

This final post on granular analysis, as applied to Front End Optimisation, briefly considers the increasingly important area of performance to mobile devices.

  • Mobile device analysis

The high and increasing proportion of traffic from mobile device users makes careful consideration of the mobile end user experience a key part of most current FEO efforts.

Investigation typically uses a combination of emulation-based analysis (including browser developer tools) – together with rules-based screening (e.g. PageSpeed Insights, discussed earlier in this series) – and real device testing.

The key advantage of testing from ‘real’ mobile devices, as opposed to PC-based testing with a spoofed user-agent string, is that the interrelationship between device metrics and application performance can be examined. As discussed in the ‘tools’ post, ensuring good, known control conditions – whether of connectivity (bandwidth, public carrier SIM, or WiFi) or of device environment – is crucial to effective interpretation of results.
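To illustrate the limitation of the spoofed-UA approach, the sketch below builds a request carrying a mobile User-Agent header. The UA string and URL are illustrative assumptions, not a specific device’s canonical values; the point is that the server returns its ‘mobile’ view while the test still runs on PC hardware.

```python
import urllib.request

# Illustrative mobile user-agent string (an assumption for this sketch,
# not any particular device's canonical UA).
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0; Nexus 5) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/48.0 Mobile Safari/537.36")

def build_spoofed_request(url: str) -> urllib.request.Request:
    """Build a request that 'looks like' a phone to the server.

    The server may serve its mobile variant, but device CPU, memory,
    radio and battery behaviour are NOT exercised - which is exactly
    the gap that real-device testing fills.
    """
    return urllib.request.Request(url, headers={"User-Agent": MOBILE_UA})

req = build_spoofed_request("https://www.example.com/")  # hypothetical URL
print(req.get_header("User-agent"))
```

Note that nothing is actually fetched here; the request object alone shows how little of the real device environment a spoofed-UA test controls.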

Most ‘cross device’ tools are designed for functional (or in some cases load) testing rather than performance testing per se. This limits their value. The choices are between:

  • Limiting investigation to browser dev tools
  • Building/running an in house device lab with access to presentation layer timings and device system metrics
  • Using a commercial tool – these are thin on the ground, but Perfecto Mobile is worth a look
  • Using the real device testing offered by vendors such as TestPlant (eggOn) or Keynote.

Four approaches to understanding the performance of native applications are possible:

  • Consider Perfecto Mobile’s combination of visual endpoint and core metric testing (www.perfectomobile.com)
  • Instrument the application code using a Software Development Kit [SDK] (this is the approach adopted by the APM vendors). Typically stronger on end user visibility than on control of test conditions or range of device metrics. Inclusion of crash analytics can be useful.
  • Use a PCAP approach – analysing the initial download size and ongoing network traffic between the user device and origin. This is the approach taken by the AT&T ARO tool (https://developer.att.com/application-resource-optimizer)
  • Build your own in-house device lab. This is potentially more problematic than it may appear, for many reasons. This presentation, by Destiny Montague and Lara Swanson of Etsy, given at the 2014 Velocity conference, provides a good overview from a corporate team that has successfully embraced this approach:
    • Part 1: https://www.youtube.com/watch?v=QOatJD_3bTM
    • Part 2: https://www.youtube.com/watch?v=YBn_bQrdVRI
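The first question a PCAP-based review (such as one run through the AT&T ARO tool) asks is where the download weight is going. The sketch below summarises downstream bytes per host; the packet records are hand-built illustrations standing in for a real capture (which would come from, e.g., tcpdump on the device and a parsing library), and all hostnames and sizes are assumptions.

```python
from collections import defaultdict

# Hand-built stand-in for parsed capture records:
# (direction, host, payload bytes). All values are illustrative.
packets = [
    ("down", "origin.example.com",      48_000),
    ("down", "cdn.example.net",        210_000),
    ("up",   "origin.example.com",       1_200),
    ("down", "ads.thirdparty.example",  95_000),
]

def bytes_by_host(records):
    """Total downstream payload per host, heaviest first - the
    'where is the weight going?' view of a capture."""
    totals = defaultdict(int)
    for direction, host, size in records:
        if direction == "down":
            totals[host] += size
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

summary = bytes_by_host(packets)
for host, size in summary.items():
    print(f"{host:28s} {size / 1024:7.1f} KiB")
```

In practice the same per-host view quickly surfaces third-party weight – note the advertising host in the illustrative output – which ties back to the third-party impact point below.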

Whichever approach is chosen, having defined your control conditions within the constraints of the selected tool, key aspects include:

  • Timeline – understand the interrelationship between the various delivery components (JavaScript processing, image handling, etc.) and CPU utilisation
  • System metrics – when delivering both cached and uncached content. These include:
    • CPU – O/S (Kernel), User, Total
    • Memory – Free, Total
    • Battery state
    • Signal strength
  • Crash analytics
  • Impact of third party content
  • Association of issues with delivery infrastructure/core application performance. This coordination is effectively provided by many modern APM tools.
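On Android (and Linux generally), the kernel/user/total CPU split listed above can be derived by differencing the counters in /proc/stat between two samples – for example, read over `adb shell cat /proc/stat` in a home-built device lab. The sketch below uses two hard-coded illustrative samples rather than a live device.

```python
def parse_cpu_line(line):
    """Parse a /proc/stat 'cpu' line.

    Field order: user nice system idle iowait irq softirq steal ...
    Returns (user, kernel, idle, total) jiffy counts.
    """
    fields = [int(x) for x in line.split()[1:]]
    user = fields[0] + fields[1]   # user + nice
    kernel = fields[2]             # system
    idle = fields[3] + fields[4]   # idle + iowait
    return user, kernel, idle, sum(fields)

# Two illustrative samples, as if read ~1 second apart during a test
# transaction. The numbers are assumptions for demonstration.
before = "cpu  4705 150 1120 16250 520 20 40 0 0 0"
after_ = "cpu  4810 155 1180 16820 525 20 42 0 0 0"

u0, k0, i0, t0 = parse_cpu_line(before)
u1, k1, i1, t1 = parse_cpu_line(after_)
dt = t1 - t0  # total jiffies elapsed across all states

user_pct = 100 * (u1 - u0) / dt
kernel_pct = 100 * (k1 - k0) / dt
total_pct = 100 * (dt - (i1 - i0)) / dt  # everything that wasn't idle
print(f"user {user_pct:.1f}%  kernel {kernel_pct:.1f}%  total {total_pct:.1f}%")
```

Sampling this during a scripted transaction, and correlating the spikes with the delivery timeline, is essentially what the commercial tools automate.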

 


CPU utilisation trace during test transaction – Android device (Perfecto Mobile)

The next post in this series considers monitoring approaches to a number of edge case conditions.
