HeadSpin

Originally published at headspin.io

How to Identify Client-Side Performance Bottlenecks

NimbleDroid, recently acquired by HeadSpin, helps mobile developers identify performance bottlenecks and improve user experience in mobile apps. Co-Founder and CEO Junfeng Yang, now Chief Scientist at HeadSpin, built NimbleDroid because he’s passionate about helping engineers build software faster and improving the quality of the mobile tooling ecosystem.

But how exactly does identifying client-side performance bottlenecks help your business, and why is app performance so crucial?

Whether for native or hybrid apps, it’s well-documented that any time users experience even minor delays when interacting with apps, they’re unhappy. For Amazon, 100 milliseconds of added latency translates to a 1% loss in sales, and, for Google, 500 milliseconds of added latency results in a 20% decline in user requests.

At HeadSpin, we’ve found that latencies between 250 and 500 milliseconds are noticeable to users. At Nimble, we’ve listened to our customers and identified three reasons why understanding performance is complex:

  1. The frequency with which developers make code changes. By the time you realize there's a performance impact, many commits have already happened.

  2. The compound impact of performance regressions. A single 300-millisecond slowdown might be hard to detect, but after three consecutive releases that each add 300 milliseconds, it becomes quite noticeable.

  3. The wide use of third-party libraries and SDKs. Because such code is external, there's poor visibility into how it impacts overall app performance.

To add to the complexity, Android and iOS have entirely different tooling and ecosystems, each bringing its own set of challenges. Because many organizations don't even have a dedicated performance team, let alone people trained to assess performance, understanding performance becomes a question of engineering resources, hours, and dollars.

Nimble's solution provides visibility into, and control over, the performance of a given mobile app during the development stage. By integrating with CI, we can continuously monitor every build and alert developers as soon as regressions are detected, making it much easier to backtrack to previous builds and understand which commit caused the issue at hand.

And, because the Nimble product also provides fine-grained diagnostics to help developers pinpoint the parts of their code causing the slowdown, developers know exactly where to target improvements. The idea is to shift left and identify issues earlier in the cycle.
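As a rough illustration of what such a CI gate can look like, here is a minimal Kotlin sketch that compares per-flow timings of the current build against the previous one and fails the build when a flow slows down beyond a tolerance. The flow names, numbers, and function are hypothetical, not Nimble's API.

```kotlin
// Hypothetical CI regression gate: compare per-flow timings of the current
// build against the previous build and fail when a flow regresses beyond a
// tolerance. Flow names and numbers are illustrative, not Nimble's API.

data class FlowTiming(val flow: String, val millis: Long)

fun findRegressions(
    previous: List<FlowTiming>,
    current: List<FlowTiming>,
    toleranceMillis: Long = 100
): List<String> {
    val baseline = previous.associate { it.flow to it.millis }
    return current.mapNotNull { timing ->
        val before = baseline[timing.flow] ?: return@mapNotNull null
        val delta = timing.millis - before
        if (delta > toleranceMillis)
            "${timing.flow}: ${before}ms -> ${timing.millis}ms (+${delta}ms)"
        else null
    }
}

fun main() {
    val previous = listOf(FlowTiming("cold_start", 1200), FlowTiming("open_search", 450))
    val current = listOf(FlowTiming("cold_start", 1550), FlowTiming("open_search", 460))

    val regressions = findRegressions(previous, current)
    if (regressions.isNotEmpty()) {
        regressions.forEach(::println)
        // A non-zero exit code fails the CI step and alerts the team.
        kotlin.system.exitProcess(1)
    }
    println("No regressions beyond tolerance.")
}
```

A non-zero exit code is usually all a CI system needs to block the merge, which is what makes the "alert as soon as a regression lands" workflow practical.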

Below are some use cases of Nimble.

  • Developers use Nimble, via CI integration, to check the performance numbers of their latest build.

  • Customers use the product as a release criterion by setting acceptable performance numbers for typical UI interactions and then checking new builds against those numbers (see the sketch after this list).

  • Teams can rerun any customer-reported issues through Nimble and examine them further.
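For the release-criteria use case, the absolute check might look like the following minimal Kotlin sketch; the interaction names and millisecond budgets are invented for illustration and are not Nimble's configuration format.

```kotlin
// Hypothetical release gate: assert that each typical UI interaction stays
// under an agreed budget. Interaction names and budgets are illustrative.

// Budgets in milliseconds per user interaction.
val budgets = mapOf(
    "cold_start" to 2000L,
    "open_product_page" to 800L,
    "add_to_cart" to 500L
)

fun checkReleaseCriteria(measured: Map<String, Long>): List<String> =
    budgets.mapNotNull { (interaction, budget) ->
        val actual = measured[interaction]
            ?: return@mapNotNull "$interaction: no measurement"
        if (actual > budget) "$interaction: ${actual}ms exceeds ${budget}ms budget" else null
    }

fun main() {
    val measured = mapOf("cold_start" to 2300L, "open_product_page" to 640L, "add_to_cart" to 470L)
    checkReleaseCriteria(measured).forEach(::println)  // prints any violated budgets
}
```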

When it comes to performance, Nimble makes it simple to examine why code might have regressed.

Specific reasons could include:

  • Operations on the UI thread that block or hang the UI (see the sketch after this list)

  • Hung-wait methods, or methods on the UI thread waiting for a background thread to finish

  • Methods that run in the background, but for longer than 100 milliseconds
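The usual fix for the first and third cases is to move long-running work off the UI thread. Below is a minimal Kotlin sketch assuming an Android activity and the androidx lifecycle coroutine scope; ProductListActivity and loadProducts() are hypothetical names standing in for any screen and any slow call.

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

class ProductListActivity : AppCompatActivity() {

    // Hypothetical slow call (network or disk) that would otherwise block the UI thread.
    private fun loadProducts(): List<String> = TODO("fetch from network or database")

    private fun showProducts(products: List<String>) { /* update the RecyclerView, etc. */ }

    fun refresh() {
        // Launch on the main dispatcher, but push the slow work to an IO thread,
        // so the UI thread never waits on loadProducts() itself.
        lifecycleScope.launch {
            val products = withContext(Dispatchers.IO) { loadProducts() }
            showProducts(products)  // back on the UI thread
        }
    }
}
```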

Because Nimble detects performance issues by comparing current and previous builds, developers can use that intel to rewrite specific code as needed, fixing the issue.

Another reason our customers love Nimble is that our numbers are reliable, run over run. We know that to be true because we use real, physical devices in a farm to install, run, and profile the app.

While typical device farms have many different devices to track how apps perform across devices, HeadSpin prioritizes build-over-build performance testing, made possible by the fact that we're testing on the same devices, configured in the same way, sitting on the same network pipe in the same location, every time.

This way, when a new build is submitted and the numbers differ, there are only two possible causes:

  • Network traffic took a different amount of time

  • There were changes made to the code, which the HeadSpin platform can identify

In the case of network traffic, we can actually examine the call stack and identify whether the issue was network- or CPU-related, pinpointing it further. And Nimble can help you understand whether slowdowns are caused by your code or somebody else’s.
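To make the idea concrete, here is a small, purely illustrative Kotlin sketch of that kind of attribution: given call-stack samples tagged by what the thread was doing, it totals the time spent waiting on the network versus doing CPU work. The data model is invented for this example and is not the HeadSpin API.

```kotlin
// Hypothetical sketch: attribute profiled time to network waits vs. CPU work,
// based on sampled call-stack intervals. The data model is illustrative only.

enum class Category { NETWORK_WAIT, CPU, OTHER }

data class StackSample(val topFrame: String, val category: Category, val millis: Long)

fun attributeTime(samples: List<StackSample>): Map<Category, Long> =
    samples.groupBy { it.category }
        .mapValues { (_, group) -> group.sumOf { it.millis } }

fun main() {
    val samples = listOf(
        StackSample("OkHttpClient.execute", Category.NETWORK_WAIT, 420),
        StackSample("JsonParser.parse", Category.CPU, 180),
        StackSample("ImageDecoder.decode", Category.CPU, 90)
    )
    // Prints e.g. {NETWORK_WAIT=420, CPU=270}: the slowdown here is mostly network time.
    println(attributeTime(samples))
}
```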

Nimble enables our customers to accurately monitor and profile every critical user flow of iOS and Android apps, and now web apps as well. We integrate seamlessly into your existing workflow, meaning you can insert it into CI and run with it.

