About the Blog

This blog is purely an academic platform designed to cross-fertilize ideas: a knowledge-sharing medium for students, lecturers and professionals in the field of information technology, with user experience (UX) in information systems and information and knowledge management in software engineering as core areas of interest.

Posts are exclusively minor academic works by the author, while published academic papers are linked to their archive through one of the links provided on this blog.
Have a wonderful academic blogging moment!

Thursday, 21 June 2012

Game Evaluation and Usability Testing Issues




As a game evaluator, you are required to evaluate a game on an iPad. Please select one of the evaluation methods available and explain the steps involved in the evaluation. Explain why you chose that method.
Methods for evaluating a game on an iPad can be understood against the general background of evaluating mobile games, since the iPad is a mobile device.
As a game evaluator, I would suggest, and would myself use, Playability Heuristics for mobile game evaluation.
The reason for this choice is that Playability Heuristics offers a more encompassing model than other evaluation methods: it covers game usability, mobility and gameplay (Korhonen & Koivisto, 2006).
To achieve a good game experience, the evaluation process centres mainly on the user interface, which must be convenient, reliable and usable enough to keep the player concentrating on the game.
Playability Heuristics is an integrated approach derived from general usability methods and heuristic evaluation, tailored to the key peculiarities and special characteristics of game applications. The development of a Playability Heuristic evaluation for a game is discussed below:
a. Defining the aspect to be evaluated: Any of the three major areas of concern could be evaluated on its own; however, their inter-relationship in determining how easy the game is to play makes a holistic evaluation more advisable.
b. Defining the mobility characteristics: Features that support smooth use of mobile devices are listed, for example showing incoming-call alerts or unread-message notifications even while the user is busy playing a game.
The characteristics of the mobile device itself, such as screen size (to ensure easy navigation), audio capabilities, processing power and battery limitations, are evaluated using a standard rubric.
c. Defining the initial Playability Heuristics: This entails analysing the mobile device (here, the iPad) and its context of use as they affect the tasks to be performed and the type of device used. Heuristics are then derived from this context analysis, guided by a review of Nielsen's heuristics and game design guidelines.
Examples of initial heuristics stated for the evaluation are:
K1: Don't waste the player's time; K2: Prepare for interruptions; etc.
Incorporating the game usability heuristics, examples such as the following will be added:
H1: Audio-visual representation supports the game; H2: Screen layout is efficient and visually pleasing; etc.
Gameplay heuristics are also added, for example: G1: The game provides clear goals or supports player-created goals; G2: The player sees the progress in the game and can compare the results.
The heuristics designed are used as hypotheses that are then tested through game testing with, at minimum, a game designer, a usability engineer and a game player as evaluators. The responses are then statistically analysed to determine whether the evaluated mobile device is suitable for games.
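
To make this last step concrete, below is a minimal sketch (in Python) of how the evaluators' responses could be aggregated. The 0-4 severity scale, the specific ratings and the mapping of heuristic prefixes to modules are illustrative assumptions, not data or code from Korhonen and Koivisto (2006).

```python
from collections import defaultdict
from statistics import mean

# Hypothetical findings: (evaluator, heuristic code, severity on an assumed 0-4 scale).
# Prefixes follow the examples above: K = initial/mobility, H = game usability, G = gameplay.
findings = [
    ("game designer",      "K1", 3), ("game designer",      "H2", 2), ("game designer",      "G1", 1),
    ("usability engineer", "K1", 4), ("usability engineer", "H1", 2), ("usability engineer", "G2", 3),
    ("game player",        "K2", 2), ("game player",        "H2", 3), ("game player",        "G1", 2),
]

MODULES = {"K": "Mobility/context", "H": "Game usability", "G": "Gameplay"}

def summarise(findings):
    """Average the severity ratings per heuristic and per module."""
    by_heuristic, by_module = defaultdict(list), defaultdict(list)
    for _evaluator, code, severity in findings:
        by_heuristic[code].append(severity)
        by_module[MODULES[code[0]]].append(severity)
    return (
        {code: mean(ratings) for code, ratings in sorted(by_heuristic.items())},
        {module: mean(ratings) for module, ratings in by_module.items()},
    )

heuristic_scores, module_scores = summarise(findings)
print("Mean severity per heuristic:", heuristic_scores)
print("Mean severity per module:   ", module_scores)
```

Modules with the highest mean severity would then be the first candidates for redesign before the game is judged playable on the device.
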
3. Current Issues and Challenges of Usability Testing Methods
Nelson (2001) defined usability as one of the standard concepts in designing and developing information systems, describing it fundamentally as ease of use (as cited in Alshamari & Mayhew, 2009). This definition forms the basis, and is further extended by the ISO definition, which ties usability to specific goals (effectiveness, efficiency and user satisfaction) that users of an information system intend to achieve.
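
As a rough illustration of how these three ISO goals might be quantified from test-session data, here is a minimal sketch; the session fields, the 1-5 satisfaction scale and the time-based efficiency formula are assumptions for illustration rather than prescriptions from the cited sources.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    """One participant's attempt at a single task (hypothetical fields)."""
    completed: bool        # did the participant finish the task?
    time_seconds: float    # time spent on the task
    satisfaction: float    # post-task rating on an assumed 1-5 scale

sessions = [
    Session(True, 95.0, 4.0),
    Session(True, 140.0, 3.5),
    Session(False, 210.0, 2.0),
    Session(True, 110.0, 4.5),
]

# Effectiveness: proportion of task attempts completed successfully.
effectiveness = sum(s.completed for s in sessions) / len(sessions)

# Efficiency (one simple time-based formulation): completed tasks per minute of effort.
efficiency = sum(s.completed for s in sessions) / (sum(s.time_seconds for s in sessions) / 60)

# Satisfaction: mean of the post-task ratings.
satisfaction = mean(s.satisfaction for s in sessions)

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency:    {efficiency:.2f} completed tasks per minute")
print(f"Satisfaction:  {satisfaction:.1f} / 5")
```
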
Usability testing, otherwise called usability evaluation, comprises the standard procedures and methods used to test and ensure that the software developed meets the standard usability goals. Examples of these evaluation methods are heuristic evaluation, guideline review, consistency inspection, cognitive walkthrough, metaphors of human thinking (MOT), and formal usability inspection (Shneiderman & Plaisant, 2010).
As posited by Alshamari and Mayhew (2009), the current issues in usability testing are the factors that affect usability testing and its results. Examples of these issues are usability measures, the evaluator's role, users, tasks, the usability problem report, the test environment, and so on. These issues are explained further below:
a. Usability measures and problem analysis: Before any usability test is conducted, the experts involved must know which measures and tests will be used, especially so that they accord with the three major ISO goals of efficiency, effectiveness and user satisfaction.
Hornbaek (2006) asserted that the difficulty of choosing a method for measuring a system's usability, its elements and the appropriateness of the chosen method has been responsible for recorded weaknesses in measuring usability, and went on to suggest dimensions of metrics to be used.
It is also noteworthy that a usability problem must be identified before it is judged, the guiding rule being that any issue that prevents users from completing a task is a usability issue (Alshamari & Mayhew, 2009).
b. Evaluator's role: This is a sensitive issue in usability testing, because the experts employed to perform the evaluating role tend to differ in how many usability problems they detect, and may even be inefficient in the problem-detection exercise.
c. Users: When a user-based assessment approach is used as a usability testing method, the number of users to involve in the evaluation becomes an issue. Alshamari and Mayhew (2009), referring to many previous studies, showed variation in the numbers suggested: five, three and nine users have all been proposed, with the emphasis that the choice of users must depend on their level of system experience (see the discovery-curve sketch after this list).
d. Tasks: The tasks used in the usability testing must be related, representative tasks, since the choice of tasks directly influences the usability evaluation.
e. Test environment: The inconsistency between the controlled test laboratory and real-life use is also an issue in usability testing. Cost and inherent doubts about generalising such experimental results are among the reasons why lab testing is not favoured by some experts in HCI.
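
On the question of how many users to involve (item c above), the trade-off is often reasoned about with the classic problem-discovery model, in which the expected proportion of problems found by n users is 1 - (1 - λ)^n. The sketch below uses the often-quoted average discovery rate of about 0.31, which is an assumption here and, as the text notes, varies with the users' level of system experience.

```python
def problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n users under the
    1 - (1 - lambda)^n discovery model; the default rate is an assumed average."""
    return 1 - (1 - discovery_rate) ** n_users

for n in (3, 5, 9):  # the sample sizes mentioned above
    print(f"{n} users -> about {problems_found(n):.0%} of problems expected to be found")
```

With these assumptions, three users already surface roughly two thirds of the problems, which is why such small numbers are so often suggested, although a lower discovery rate (for example, with inexperienced users or a complex system) would push the required sample size up.
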
In conclusion, realistic system functionality, sufficient effort put into task descriptions, well-represented usability problem reports and stated problem priorities are suggested ways to improve the interplay between usability evaluation and the design stage (Hornbaek & Stage, 2006).


References

Alshamari, M., & Mayhew, P. (2009). Technical review: Current issues of usability testing. IETE Technical Review, 26(6), 402-406. Retrieved from http://www.tr.ietejournals.org/article.asp?issn=0256-4602;year=2009;volume=26;issue=6;spage=402;epage=406;aulast=Alshamari
Hornbaek, K., & Stage, J. (2006). The interplay between usability evaluation and user interface design. International Journal of Human-Computer Interaction, 21(2), 117-123.
Hornbaek, K. (2006). Current practice in measuring usability: Challenges to usability studies and research. International Journal of Human-Computer Studies, 64, 79-102.
Korhonen, H., & Koivisto, E. (2006). Playability heuristics for mobile games. Proceedings of MobileHCI '06, Helsinki, Finland. Retrieved from http://research.nokia.com/files/p9-Korhonen%20-%20Authors%20Version.pdf
Shneiderman, B., & Plaisant, C. (2010). Designing the user interface: Strategies for effective human-computer interaction (5th ed.). Pearson. ISBN-13: 978-0-321-60148-3.



