In his post “Mobile support in Performance Test tools – What does it really offer?”, Ian Molyneaux makes some useful (and thought-provoking) points. As always, however, slightly different criteria apply when considering native mobile application performance “in the wild” – that is, understanding and assuring performance to end users on an ongoing basis.
Performance load testing is essential for capacity planning and, however it is approached, forms an important part of pre-release testing. Like the proverbial Chicago voter, testing “early and often” pays many dividends in reducing the overall cost of issue remediation.
Slightly different criteria apply, however, when it comes to ongoing monitoring.
For a start, end-user experience is where push comes to shove in terms of revenue generation (or other beneficial business outcomes). Whether or not an individual visit results in a shopping-basket transaction, the risk/opportunity to the brand is there. Poor brand perception = loss of loyalty, whether the interaction is mobile, fixed-wire or bricks-and-mortar.
Also, the levers are longer with mobile, particularly with regard to connectivity. Whereas the hard-wired world is served by dozens of tertiary ISPs (OK, the basic knitting is provided by fewer, but even so…), in the UK four wireless carriers serve the entire roaming mobile population. The smallest of these (Hutchison 3G / Three) has a subscriber base of some 8 million users, whilst the others serve in excess of 20 million subscribers each – and bear in mind that mobile penetration exceeds 100%; in other words, there are more subscribers than inhabitants. What are the implications? If you have a systemic issue around delivery to a particular carrier, you are trashing your brand and potential revenue to/from as many as 30 million users, so it’s well worth
a) being aware, and
b) spending some time with your ISP to determine the cause.
Ian is of course correct that you cannot do much about transients, but the image below shows that more persistent issues can, and do, arise.
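To make “being aware” concrete: one simple way to surface a persistent, carrier-specific delivery problem is to segment real-user timings by carrier and flag any carrier whose median response time is far adrift of the overall median. A minimal sketch – carrier names, figures and the threshold factor are all invented for illustration, not real measurements:

```python
from statistics import median

# Hypothetical real-user timing samples (seconds), tagged by carrier.
samples = [
    ("CarrierA", 1.2), ("CarrierA", 1.4), ("CarrierA", 1.1),
    ("CarrierB", 1.3), ("CarrierB", 1.2), ("CarrierB", 1.5),
    ("CarrierC", 3.8), ("CarrierC", 4.1), ("CarrierC", 3.6),
    ("CarrierD", 1.4), ("CarrierD", 1.3), ("CarrierD", 1.2),
]

def flag_slow_carriers(samples, factor=2.0):
    """Flag carriers whose median response time exceeds the overall
    median by the given factor - a crude signal of a systemic,
    carrier-specific delivery issue rather than a transient blip."""
    overall = median(t for _, t in samples)
    by_carrier = {}
    for carrier, t in samples:
        by_carrier.setdefault(carrier, []).append(t)
    return [c for c, ts in by_carrier.items()
            if median(ts) > factor * overall]

print(flag_slow_carriers(samples))  # -> ['CarrierC']
```

In practice the comparison would run over rolling windows of monitoring data, so that a carrier staying flagged across many windows distinguishes a genuine systemic issue from a transient.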
Ian’s parallel between native mobile applications and traditional ‘fat client’ apps is well made. However, there are some fundamental differences of key importance to performance monitoring. The fat client model is similar, agreed – but think through the differences. A traditional fat client application, typically deployed across a corporate intranet or WAN, has few variables – client PC specification and network provision may vary somewhat, but are usually standardised down to a few “trickle-down” variations.
The world of mobile delivery is fundamentally different – a kaleidoscope of operating system variants, device capabilities, form factors and so on. And that is before the inherent variation around even an individual user – battery state, signal strength and user behaviour (impacting free memory), to name but three.
So what are the implications? These factors cannot be controlled. It is therefore of key importance to understand them, inasmuch as they impact the performance of your application for real users across your actual demand cycle, geographic reach and so on.
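As an illustration of what “understanding them” might mean in practice, each real-user measurement can be recorded together with the uncontrollable context surrounding it, so that results can later be sliced by OS version, device, carrier or battery state. A minimal sketch – the field names and values are hypothetical, not a real monitoring schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class RumSample:
    """One real-user measurement plus the context that cannot be
    controlled, only recorded. All field names are illustrative."""
    operation: str        # e.g. "checkout_submit"
    duration_ms: int      # how long the operation took for this user
    os_version: str       # OS variant, one axis of the "kaleidoscope"
    device_model: str
    carrier: str
    signal_bars: int      # 0-4, as reported by the device
    battery_pct: int      # battery state at time of measurement

# A single hypothetical sample, ready to be shipped to a collector.
sample = RumSample("checkout_submit", 840, "Android 14",
                   "ExampleDevice X1", "CarrierA", 2, 35)
print(asdict(sample))
```

The point is not the schema itself but that every timing arrives with its context attached, so a slow cohort (a particular OS release, a particular carrier, low-battery devices) can be isolated rather than averaged away.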
Tooling is increasingly becoming available, and established performance skills can be called on. Effective use of the results obtained enables the design and development of new applications to be informed by the reality of real-world usage constraints – and interventions, when necessary, can be implemented before brand damage occurs.
A structured approach to objective, action-centric management follows, in which pre-launch “performance by design” and post-launch feedback and tuning complement one another, minimising business risk and maximising cost-effective business outcomes.