CI6310 Usability Test Report
To enable anonymous marking, please do not display your name on any cover sheet or header.
Module Learning Outcomes assessed in this piece of coursework
The learning outcomes assessed in this piece of coursework are:
· Evaluate the quality of users’ experience
· Research user needs and the implications of technology for work practice
· Analyse users and their activities, and carry forward lessons learned
· Design input modalities, output media and interactive content to appeal to an audience
· Reflect upon design practice and discuss the strengths and weaknesses of alternative techniques
The coursework is also an opportunity for you to develop as individuals, and for employment, though this is not explicitly assessed.
· Act as independent, self-guided problem-solvers and learners
· Perceive opportunities for User Experience Design to achieve organisational objectives
· Develop communication and collaboration skills
· Work in an ethical, social and security-conscious manner
This Assignment Brief and Assessment Criteria will be discussed within a formally timetabled class.
The Usability Test Report should:
i) usability test a real-world, data-intensive desktop or cloud-hosted application accessed from a large-screened device. This helps to ensure that the issues you address are sufficiently complex. You may choose from the following themes:
1. systems development tools (diagrammers, modellers, programming environments; database configuration tools);
2. retail auction sites and marketplaces;
3. network management tools (modelling, monitoring, analysing);
4. games and digital media tools (image editors, renderers; games development tools, tools for UX design);
5. project management tools;
6. fantasy sports websites.
Examples include: www.prezi.com; GIMP 2.0; www.basecamp.com; www.mindmeister.com; www.aptana.com; fantasy.premierleague.com.
I SUGGEST YOU AVOID: craigslist, lingscars, and other examples that are bizarre or whose brand is ‘old school’ or ‘poor ease of use’; also avoid TV remote controls, cash point machines, and other common textbook cases.
ii) apply the CIF standard method and report format presented in the lectures. This method defines what counts as “sound, ‘B’ grade” practical work and reporting. Top grades are achieved when you adapt parts of the CIF method to more fully address the unique characteristics of the issues at hand, for example, to assess user experience, not just usability;
iii) test between 3 and 5 participants. Say, four: two participants representing each of two personas, attempting 5 tasks each. This is usually sufficient to test against a range of user personas and tasks, and to identify a range of usability issues, but it depends upon the research questions you need to answer. If your results are boring, recruit different participants and set them different tasks, until you discover something interesting.
iv) ask your friends and family to participate in the usability test, or play ‘participant’ for each other during workshop hours – no need to approach strangers!
v) scope your test to answer specific research questions, by studying selected users and tasks – you cannot test everything and get ‘the answer’. If the usability test session lasts more than 1 hour, then you might want to reduce the scope.
Suggested Structure: Target Word Length 5,000 words
Please follow the structure suggested for the usability test report. The CIF standard defines the core of this report format, to help ensure complete reporting and easy access to information, which is necessary for readers to reach agreement about the results. The suggested structure also maps onto the marking scheme. The details of each section, however, may be adapted to meet the needs of your unique problem and the way you need to address it.
1. Introduction and Background
1.1. Recent Developments and Trends (approx. 1–2 sides)
What real world changes in business or society set the context for this usability test? Why are these trends important and of interest to many people?
Which system / software /website are you going to test? What organisation ‘owns’ the software?
What are the business goals of the software?
How does usability and user experience enable these business goals?
Why is it timely to evaluate usability and user experience now?
Remember to back up any claims with facts and figures (evidence) from credible, cited sources.
1.2. The Existing User Interface (approx. 1-2 sides)
Describe the current interaction - walk through a task and illustrate the flow of interaction with linked screen shots annotated with user actions.
What *kinds* of usability issue do you expect users will encounter with this interface? Give an example of each kind of issue (no need to be exhaustive yet – just illustrate possible issues, and the results will speak for themselves).
2. Aims (approx. ½ side)
Statement of Aim: the over-arching purpose is, at least, ‘…to evaluate and enhance…’
List of Objectives (Deliverables): the tangible, intermediate ‘products’ that will be produced by the process that finally outputs a Usability Test Report, for example: … method … raw data … data analysis …
Table of Research Questions
State the ‘questions about actual usability and user experience’ that your study will answer (aka Problem Statements). Give reasons for focussing upon these research questions.
3. Method (approx. 2–3 sides)
3.1. Method overview (approx. ½ side)
You are expected to conduct a CIF-based usability test for this coursework. Describe the kind of method that CIF is, and explain why you need to apply it: for example, how rich and representative is the data set obtained by a usability test, and why is observation preferred to subjective reports?
To answer most research questions about usability, a ‘one-shot’ experimental design is sufficient (‘How usable is…?’). Some research questions require a comparison, e.g. ‘Is website A more usable than website B?’ ‘Which is preferred?’ Research questions about learnability may need a repeated measures design (before vs after).
Characterise the target user group and the sample of participants that you need to recruit to answer your research questions. Profile the actual participants in a table. How and why were participants recruited and selected? Give a reason for the decisions you made. In which respects is the sample not fully representative of the target user group? Why is this sample, nevertheless, representative enough?
Identify and outline the tasks you set participants. How and why were these tasks selected and defined? How were task instruction sheets laid out and why? Give a reason for the decisions you made, and argue for their importance and ‘realism’.
Identify and define the UX criteria you will be using in the evaluation, and state how each criterion will be measured. If you are going beyond standard CIF metrics, state the additional evaluation criterion (‘construct’), its indicator (directly observable correlate), and explain your decisions. Support any claims with evidence, and cite credible sources.
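To make the measurement plan concrete, the standard CIF metric categories (effectiveness, efficiency, satisfaction) can be computed from raw session data. The sketch below is illustrative only: the session records, function names, and SUS responses are hypothetical, and your own report should define metrics that match your research questions.

```python
# Minimal sketch: computing the three standard CIF metric categories
# from hypothetical raw session data. All names and values are illustrative.

from statistics import mean

# One record per participant-task attempt (hypothetical data).
sessions = [
    {"participant": "P1", "task": "T1", "completed": True,  "time_s": 142, "errors": 1},
    {"participant": "P1", "task": "T2", "completed": False, "time_s": 300, "errors": 4},
    {"participant": "P2", "task": "T1", "completed": True,  "time_s": 98,  "errors": 0},
    {"participant": "P2", "task": "T2", "completed": True,  "time_s": 210, "errors": 2},
]

def effectiveness(records):
    """Completion rate: proportion of attempts completed successfully."""
    return sum(r["completed"] for r in records) / len(records)

def efficiency(records):
    """Mean time on task (seconds), counting successful attempts only."""
    return mean(r["time_s"] for r in records if r["completed"])

def sus_score(item_responses):
    """System Usability Scale: ten 1-5 Likert responses -> 0-100 score.
    Odd-numbered items contribute (response - 1); even items (5 - response)."""
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(item_responses))
    return total * 2.5

print(f"Completion rate: {effectiveness(sessions):.0%}")   # 75%
print(f"Mean time on task: {efficiency(sessions):.0f} s")  # 150 s
print(f"SUS (one participant): {sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1])}")  # 85.0
```

Reporting the per-participant figures alongside these aggregates, as CIF requires, lets readers judge variability across your small sample rather than trusting a single mean.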