By Jean Ann Harrison
Mobile apps are becoming more prevalent in our daily lives. A recent study published on Mashable by digital lifestyle reporter Andrea Smith highlighted just how much we as a society are addicted to our mobile apps, going so far as to say, “some people confess to using over 50 apps a day.”
We see people using them everywhere: waiting in line, walking down the street, even attending a sporting event. In fact, 82 percent of the survey respondents in Smith’s article believed they couldn’t be without their mobile apps for longer than one day. If this sample reflects society, then it is important that mobile apps work correctly, work consistently, and meet user needs.
In addition to society’s reliance on apps, more are being created every day. Finding people who can not only test these apps but also know how to test them on various types of devices is becoming a challenge. This article describes some configuration tests for testers to consider.
A Weekend Testing Americas session I facilitated on configuration testing of mobile devices used Facebook as a native app, and it brought to light a variety of consistency concerns. We had an array of devices: Apple iPhones and Android phones, Apple iPads and Android tablets. Despite running the same mobile app, testers experienced radical differences across devices and operating systems. The result: testers had an eye-opening experience and broadened their perspective for when they went back to test their own mobile apps.
Observations from the weekend testing session included the sorting of Newsfeed posts, which appeared different depending on the device used. The Facebook app showed information based on the size of the device’s displayable area, and the tablet displayed more information than the phone. Default display and functional settings varied across all three configurations (tablet, desktop/laptop, and phone), including the Friends lists, refresh settings, and timestamps. Search functionality also behaved differently on the tablet versus the laptop browser app.
Based on these observations, it is clear that knowing how to do something on one device configuration doesn’t necessarily mean you will know how to perform the same function on another. This especially concerns users who frequently switch between configurations.
To catch consistency issues, consider comparing what appears on the screen of one device with the output of the same action on a different device. Even among Android phones, the physical size of the viewing area differs. When designing your test cases, consider not only the general screen real estate available to the mobile app but also how the app’s appearance differs across various device sizes.
For example, does the Facebook native app fill the screen on both a four-inch and a 5.5-inch Android phone? Now the question arises: how would you automate testing for such differences? Should you even automate? Such a test may not be worth automating, especially if that part of the code is not updated between releases. Not all tests should be automated, and with mobile apps becoming more critical for companies to produce, testing projects should be carefully planned. Assessing when to invest in automation for your mobile tests is crucial.
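As a rough sketch of how such planning can start, the snippet below enumerates a device-configuration matrix for display tests. The platforms, screen sizes, and orientations are illustrative assumptions, not a complete supported-device list; the point is that the matrix makes the size of the testing job explicit before any automation decision is made.

```python
from itertools import product

# Illustrative configuration dimensions; a real project would draw these
# from its own supported-device list.
PLATFORMS = ["iOS", "Android"]
FORM_FACTORS = ["phone", "tablet"]
SCREEN_SIZES_IN = [4.0, 5.5, 7.0, 9.7]
ORIENTATIONS = ["portrait", "landscape"]

def build_config_matrix():
    """Enumerate every device configuration a display test should cover."""
    return [
        {"platform": p, "form_factor": f, "screen_in": s, "orientation": o}
        for p, f, s, o in product(PLATFORMS, FORM_FACTORS,
                                  SCREEN_SIZES_IN, ORIENTATIONS)
    ]

matrix = build_config_matrix()
# 2 platforms x 2 form factors x 4 sizes x 2 orientations = 32 configurations
print(len(matrix))
```

Even this toy matrix yields 32 configurations, which helps frame the question above: a layout check that rarely changes between releases may be cheaper to run manually on a sampled subset than to automate across all 32.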
Have you ever compared the Facebook app on a tablet to the mobile app on your phone? Even when both configurations share the same operating system, they are almost completely different apps, or versions of code, with radical variations in the display. So how would you plan your testing around one mobile app? Factor in the different device configurations. Does rotating the device change any viewable functionality? You might conduct these tests only once in a release, but they should be run at some point.
Icons in the Facebook mobile app display differently depending on the configuration. Include test considerations for ease of use and for transitioning from one configuration to another. What constitutes ease of use? Who determines the definition? Ideally, these factors should be determined before design and coding begin. Remember, as a tester, you need clearly defined requirements or a clear understanding of how your app is used on each configuration. Without that, the lack of a seamless experience can have a destructive impact on a company’s market reputation.
Ease of Use
Learnability is another factor. Do your users typically switch from one configuration to the next? Tests covering the visual and functional transition between configurations should be considered as part of a release. With some mobile phone apps differing from their tablet counterparts, is the transition comfortable for the user? Testing for comfort or ease of use is a subjective call. Mobile testers need to know who their users are and how they interact with the app. This is where sales, marketing, and other customer-facing team members can share experiences and user stories.
As we’ve come to use so many mobile apps, personal biases and expectations have built up in our minds. We have different expectations for display, usage, timing of feedback, and functionality. Users who conduct their daily activities mostly on laptops and desktops do not typically use mobile apps the same way as someone who has no access to a laptop or desktop. Their usage bias is completely different; therefore ease of use has a different meaning. Testing should account for different expectations based on the configuration and its usage.
Network connectivity while using the mobile app is yet another configuration test consideration. A tablet, for example, is generally used in fixed locations such as an armchair in front of the television or a favorite coffee shop. Once connectivity is established, there is little fluctuation because the device isn’t moving. This is not necessarily true of a mobile phone. How often are you walking, or riding in a moving vehicle, while using the Internet? If your app requires Internet connectivity, add appropriate tests based on the configuration.
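One way to capture this distinction is to map each form factor to the connectivity scenarios worth exercising. The sketch below does this in plain Python; the scenario names are hypothetical labels for test conditions, not any tool’s API, and the lists are assumptions you would tailor to your own users.

```python
# Hypothetical mapping of form factors to connectivity test scenarios.
# Phones add mobility-related cases that tablets rarely encounter.
NETWORK_SCENARIOS = {
    "tablet": ["stable_wifi", "wifi_dropout"],
    "phone": ["stable_wifi", "wifi_dropout", "cellular_handoff",
              "in_transit_signal_loss", "airplane_mode_toggle"],
}

def scenarios_for(form_factor):
    """Return the connectivity scenarios to run for a given form factor."""
    # Fall back to the one scenario every connected app must handle.
    return NETWORK_SCENARIOS.get(form_factor, ["stable_wifi"])

for ff in ("tablet", "phone"):
    print(ff, scenarios_for(ff))
```

Keeping the mapping explicit makes the test plan reviewable: a stakeholder who knows the users can look at the phone list and say whether, for instance, in-transit signal loss actually matters for this app.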
How many different types of tests exist specifically for mobile devices and mobile apps? This article offers introductory considerations for the general functionality, usability, and appearance of different configurations depending on the mobile device. Remember that not all tests apply to all configurations. Definitions of usability must be carefully quantified in the requirements; the usability of the app may depend on the specific market of customers expected to use it. Work closely with your stakeholders to learn as much as you can about the users and customers and understand their perspective.
Finally, continue to practice testing on mobile devices and mobile apps. The more time you spend testing mobile apps, the more inspiration you gain, and the better your mental model becomes, for what kinds of tests to implement. Performance, notification, and network communication tests all apply, along with general functional and behavioral tests. Understanding that there are more types of tests beyond GUI functionality is critical to planning mobile testing projects. SW
Jean Ann Harrison has worked in the software testing field for more than 14 years, including eight years testing mobile software on various devices: medical devices, city police ticket generators, phones, tablets, and other proprietary devices.
Dec 2014, Software Magazine