How Companies Actually Test Their Mobile Apps

Everyone who has ever been involved in the development of a mobile app knows the painstaking procedure of taking a mobile device and pushing and prodding the app through every single button and element.

Meticulousness is crucial to ensure that all newly developed features work and that nothing has been broken in the process.

The question hanging in the air: how does the testing process for mobile apps differ between companies of different sizes? With worldwide experience working with both small and large mobile companies, we can gain insight into best practices as well as common inefficiencies in current testing methods.

One strong correlation that was discovered is between the size of a company and its typical test setup. As such, we can map this correlation by grouping the common characteristics with the corresponding company size.

Small companies with fewer than 10 people: how do they test?

Small companies tend to be chaotic, often leaving end-user testing to the end, as the final “leftover” task before shipping. Test plans that list user scenarios are uncommon in these small companies.

Testing end-user scenarios is very time-consuming, so these tests are kept as lightweight as possible and may not cover enough use cases. The result is regret: users complain, app ratings fall, the app gets rejected, and revenue is lost.

Small companies rely heavily on feedback and bug reports from their alpha users. It takes a long time for these users to discover bugs, and they won’t necessarily find them all.

Depending on this small base of users to test often results in long email threads for each bug, and a constant need for clarification.

Instead of generating excitement about the new app, your alpha users are sending screenshots of issues and answering developer questions about what exact action resulted in the error.

After a great deal of frustration, several long nights, delays, and some very stressed founders, the new release can finally be submitted (or re-submitted).

Building the next app or updating/improving the current app means that major testing is off the agenda for the next 2-4 weeks… and then the entire cycle begins again.

For these small startups, the only automation approach that seems to be common is the implementation of unit tests.

Developers find it easier to automate unit tests than to automate UI testing. For such a company, an excellent solution would be to find an outsourcing software testing consultant right from the start.
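To illustrate why unit tests are the first thing small teams automate, here is a minimal sketch using Python's standard unittest module. The `format_price` function and its values are hypothetical stand-ins for app business logic; the point is that no device or UI is needed, so tests like these run in milliseconds on any machine.

```python
import unittest

# Hypothetical business-logic function from a mobile app: format a price
# (stored in cents) for display. Testing it requires no device or UI.
def format_price(cents):
    if cents < 0:
        raise ValueError("price cannot be negative")
    return f"${cents // 100}.{cents % 100:02d}"

class FormatPriceTest(unittest.TestCase):
    def test_whole_dollars(self):
        self.assertEqual(format_price(500), "$5.00")

    def test_cents_are_zero_padded(self):
        self.assertEqual(format_price(1205), "$12.05")

    def test_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            format_price(-1)

# Run the tests programmatically so the sketch works in any context.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FormatPriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Automating an equivalent check through the UI would require a device or simulator, a driver framework, and far more maintenance, which is exactly why UI automation is postponed.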

Such a partner can be consulted whenever questions come up, and hiring a dedicated team of testers for a limited period is much cheaper than maintaining one in-house, which is often simply unaffordable for a company that counts every penny.

Growing companies: 11 to 50 employees

As the company grows, the testing process becomes somewhat more formalized but no less time-consuming. Discussions about testing come up from time to time in a company of this size, but they rarely produce real results.

Often, such companies have a test plan that lists all the scenarios (where such a plan came from is another matter).

In reality, working through every detail of such a plan takes an enormous amount of time. Companies try to focus on core scenarios but still end up spending hours on each device, OS version, user interface variant, and so on.

As soon as the company is a little larger, makes revenue, or receives funding, it’s common to see the creation of an official QA position.

Companies tend to better understand the importance of having adequate testing procedures in place. Many employ multiple QA staff and actively seek better testing processes and tools.

The QA person often stocks a variety of devices with different OS versions that are shared amongst the team.

The QA person usually also supports the website QA. In parallel to augmenting the QA staff, mobile engineers often work with testing frameworks such as UI Automation. It is common to set up Continuous Integration to further enhance their process. Some companies also use third-party solutions to automate their builds.

For a company of this size, keeping up with all possible devices and OS versions is inefficient and expensive. In many cases, the company depends primarily on simulators/emulators for coverage.

However, this exposes the application to device-specific bugs and crashes. Any experienced developer will tell you they have seen numerous apps, or app features, that worked perfectly on a simulator but not on a particular real device.

Companies with more than 50 employees

Larger companies tend to have a dedicated QA Department. QA is typically organized in one of two ways:

The first approach puts a developer directly in charge of QA. A senior developer makes their own decisions about which testing tools, frameworks, and procedures to use, much as in smaller companies.

If the company has more than a hundred employees, additional “tool” engineers help extend these frameworks and build new tools for the testing process. This in turn helps establish the testing procedures and guidelines for the company. However, company-specific tools often require a considerable amount of resources and can result in a stagnant process that is resistant to much-needed change.

The second approach is “QA driven,” where app builds are handed over to dedicated QA staff, who then take care of the (very often) manual testing.

The QA department might get support from an offshore center, or they might outsource parts of their testing to a company that offers support in manual testing. This approach merely finds cheaper resources to cover bad processes.

Overall Outlook

An inordinate amount of chaotic, manual testing is executed across companies of all sizes in pursuit of the best app quality and user experience. Relying only on manual testing is cumbersome and error-prone. The increasing trend of device fragmentation only makes the job harder for mobile developers and their QA colleagues.

In today’s fast-moving market, most companies need a new, fit-for-purpose mobile testing strategy, one that ensures shorter iterations between testing and development, increases test coverage, and reduces the risk of app failures and bad reviews.

If bugs can be found, replicated, and fixed faster in the development cycle, it’s a win across the board. Stressful last-minute “device poking” should not be the default testing methodology, no matter the size of your company.

The benefits of automated testing should be available to startups too, to drive innovation and raise the standard of apps. Automation benefits multiply quickly with a “write once, run on multiple devices” approach.
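The “write once, run on multiple devices” idea can be sketched as follows. This is a toy illustration, not any real framework's API: the device list, the `login_screen_fits` check, and the runner are all hypothetical, but real cross-device frameworks (Appium-based ones, for example) follow the same pattern of one test definition executed against many configurations.

```python
# Hypothetical device matrix; a real framework would target actual
# devices or emulators rather than plain dictionaries.
DEVICES = [
    {"name": "Pixel 7", "os": "Android 14", "width": 412},
    {"name": "Galaxy S9", "os": "Android 10", "width": 360},
    {"name": "iPhone 15", "os": "iOS 17", "width": 393},
]

def login_screen_fits(device):
    """Toy UI check: a 320px-wide login form must fit the screen."""
    return device["width"] >= 320

def run_on_all_devices(test, devices):
    """Run one test function across every device; return a simple report."""
    return {d["name"]: test(d) for d in devices}

report = run_on_all_devices(login_screen_fits, DEVICES)
for name, passed in report.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

The test is written once; coverage grows by adding entries to the device list, not by writing new tests, which is where the multiplying effect comes from.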

This is not to say that automation should completely replace manual testing, of course. In order to fully replicate user experience regarding the “look and feel” of an app, it will always be crucial to take a phone or tablet in hand and “play around.” Automation can’t cover 100% of user scenarios.

Relying solely on manual device testing is not efficient or economical. Automation of a good portion of testing can effectively test app builds for functionality on real devices while giving the app developer/QA staff a comprehensive test report within minutes.

This test report will provide focus and clarity to any manual testing still needed.

Mobile developer tools and frameworks are in their infancy. Roughly 30% of app-development resources are spent on QA – it’s worth making the process more efficient.

Teams with a mix of automation and manual skills are much more effective than a large, purely manual testing team.

Leveraging state-of-the-art tools and techniques allows companies of any size to introduce new features more confidently, ultimately winning the approval and support of app users.
