Evaluating Interface Designs
SE3830 - Jay Urbain
Credits: Ben Shneiderman, Catherine Plaisant, Roger J. Chapman
How do we evaluate an interface?
This graph depicts the relationships between 2367 people
Example: find a doctor (try Google first)
– lumbar disc herniation milwaukee
– Lumbar Endoscopic Discectomy MCW
Expectations:
1. Finds an MCW doc appropriate for user needs
2. Effectively communicates quality care – most skilled, latest tech, & they care about you
3. Effectively communicates corporate values – I would think this would be quality of care!
4. Clear user flows and “calls to action”
5. Supports power users versus browser types
6. Spotlights and links support key objectives and site goals
7. Overall appropriate tone and brand strategy
8. Easy to scan and read, maximized page density
An evaluation plan depends on:
– stage of design
– novelty of project
– diversity of expected users
– criticality of the interface
– costs of product, and finances available for testing
– time available
– experience of the design and evaluation team
Risks of skipping evaluation:
– Competition
– Failed contracts
– Liability
A range of evaluation methods can identify problems during the lifecycle of an interface.
Shneiderman's Eight Golden Rules:
1. Strive for consistency
– Consistent sequences, commands, and terminology.
2. Cater to universal usability
– Enable frequent users to use shortcuts.
3. Offer informative feedback
– For every operator action, there should be some system feedback.
4. Design dialogs to yield closure
– Sequences of actions should be organized into groups with a beginning, middle, and end.
5. Prevent errors
6. Permit easy reversal of actions
7. Support internal locus of control
8. Reduce short-term memory load
– Keep displays simple, consolidate multiple-page displays, reduce window-motion frequency, and allow sufficient training time.
Nielsen's Usability Heuristics:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
Android UI Design Guidelines
Apple Human Interface Guidelines
nceptual/AppleHIGuidelines/XHIGIntro/XHIGIntro.html
iOS Human Interface Guidelines
eptual/mobilehig/Introduction/Introduction.html
Windows User Experience Interaction Guidelines
UI Design with Qt
Usability testing and usability laboratories emerged in the early 1980s.
Usability testing speeds up projects and can produce significant cost savings.
A typical usability lab has two areas:
– One for the participants to do their work
– Another, separated by a half-silvered mirror, for testers and observers
– Test UI hypotheses, find errors, and validate theories
– Refine user interfaces rapidly
Participants should represent the intended user communities, with attention to:
– domain knowledge, computing background, task experience, motivation, education, and level of ability with the natural language used
Videotaping participants performing tasks is often valuable for showing designers or managers the problems that users encounter.
Variations on usability testing:
– Paper mockups
– Discount usability testing
– Competitive usability testing
– Universal usability testing
– Field tests and portable labs
– Remote usability testing
– Can-you-break-this tests
Written user surveys are an inexpensive and generally acceptable companion for usability tests and expert reviews.
Keys to successful surveys:
– Clear goals in advance
– Development of focused items that help attain the goals
Survey questions can be organized around the Object-Action Interface model (previous lecture) of interface design.
Users could be asked for their subjective impressions about specific aspects of the interface, such as the representation of:
– task domain objects and actions
– syntax of inputs and design of displays
Surveys might also ascertain:
– users' background (age, gender, origins, education, income)
– experience with computers (specific applications or software packages, length of time, depth of knowledge)
– job responsibilities (decision-making influence, managerial roles, motivation)
– personality style (introvert vs. extrovert, risk-taking vs. risk-averse, early vs. late adopter, systematic vs. opportunistic)
– reasons for not using an interface (inadequate services, too complex, too slow)
– familiarity with features (printing, macros, shortcuts, tutorials)
– the state of their feelings after using an interface (confused vs. clear, frustrated vs. in control, bored vs. excited)
Online surveys avoid the cost and effort needed for distribution and collection of paper forms.
Many people prefer to answer a brief survey displayed on a screen, instead of filling in and returning a printed form,
– although there is a potential bias in the sample.
Survey drawbacks:
– Getting people to do them (pay them!)
– Time required to put a good survey together
– Normalizing results across users (pay them to get enough users)
– Biased results based on who is willing to fill out the survey (people who need money ;-)
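When a survey does run online, tallying the results is straightforward. A minimal sketch, assuming 1–7 Likert-style items; the question names and responses below are invented for illustration:

```python
# Hypothetical sketch: summarizing Likert-scale survey responses.
# Question names and the 1-7 scale are assumptions, not from the lecture.
from statistics import mean

def summarize(responses):
    """Average each question's 1-7 Likert ratings across respondents."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

answers = [
    {"easy_to_learn": 6, "feels_in_control": 5},
    {"easy_to_learn": 7, "feels_in_control": 4},
    {"easy_to_learn": 5, "feels_in_control": 3},
]
print(summarize(answers))  # {'easy_to_learn': 6.0, 'feels_in_control': 4.0}
```

Averages like these make it easy to compare user groups or interface versions, which paper forms make tedious.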
Acceptance tests: for large projects, the customer or manager sets objective, measurable goals for evaluating system performance (qualities).
If the completed product fails to meet these acceptance criteria, the system must be reworked until success is demonstrated.
Rather than vague criteria such as "user friendly," explicit and measurable criteria for the user interface can be established:
– Time to learn specific functions
– Speed of task performance
– Rate of errors by users
– Human retention of commands over time
– Subjective user satisfaction
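Measurable criteria only help if logged results are actually checked against them. A minimal sketch, assuming each task record is a (seconds-taken, had-error) pair; the thresholds are invented for illustration:

```python
# Hypothetical sketch: checking logged task results against measurable
# acceptance criteria. Threshold values are assumptions for illustration.
def evaluate(tasks, max_mean_seconds=30.0, max_error_rate=0.05):
    """Return (passed, mean_time, error_rate) for a list of
    (seconds_taken, had_error) task records."""
    times = [t for t, _ in tasks]
    errors = sum(1 for _, e in tasks if e)
    mean_time = sum(times) / len(times)
    error_rate = errors / len(tasks)
    passed = mean_time <= max_mean_seconds and error_rate <= max_error_rate
    return passed, mean_time, error_rate

trials = [(25.0, False), (31.0, False), (28.0, False), (24.0, False)]
print(evaluate(trials))  # (True, 27.0, 0.0)
```

A failed check signals that the interface must be reworked before acceptance.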
Different criteria may be set for different user communities.
Once acceptance testing is complete, there may be a period of field testing before release.
Successful active use requires constant attention from dedicated managers, user-services personnel, and maintenance staff.
Perfection is not attainable, but percentage improvements are possible.
– Interviews with individual users can be productive because the interviewer can pursue specific issues of concern.
– Group discussions are valuable to ascertain the universality of comments.
Continuous user-performance data logging:
– The software architecture should make it easy for system managers to collect data about:
– the patterns of system usage
– speed of user performance
– rate of errors
– frequency of requests for online assistance
– A major benefit is guidance to system maintainers in optimizing performance and reducing costs for all participants.
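The logging idea above can be sketched as a small in-memory recorder. The event names and storage scheme are assumptions for illustration; a real system would persist events and anonymize users:

```python
# Hypothetical sketch of continuous user-performance data logging.
# Event names and the in-memory list are assumptions for illustration.
from collections import Counter
import time

class UsageLog:
    def __init__(self):
        self.events = []  # (timestamp, command) pairs

    def record(self, command):
        self.events.append((time.time(), command))

    def pattern_of_usage(self):
        """Frequency of each command -- the 'patterns of system usage'."""
        return Counter(cmd for _, cmd in self.events)

    def rate(self, command):
        """Share of all events that were this command (e.g. 'error', 'help')."""
        return self.pattern_of_usage()[command] / len(self.events)

log = UsageLog()
for cmd in ["open", "save", "error", "help", "save"]:
    log.record(cmd)
print(log.rate("error"))  # 0.2
```

Rates of "error" and "help" events map directly to the error-rate and online-assistance measures above.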
Online or telephone consultants:
– Many users feel reassured if they know there is human assistance available.
– On some network systems, the consultants can monitor the user's computer and see the same displays that the user sees.
Online suggestion box or e-mail trouble reporting:
– Electronic mail to the maintainers or designers.
– For some users, writing a letter may be seen as requiring too much effort.
Discussion groups and newsgroups:
– Permit postings of open messages and questions
– Some are independent, e.g., America Online and Yahoo!
– Topic lists
– Sometimes moderated
– Social systems
– Comments and suggestions should be encouraged.
The scientific approach to a controlled experiment in human-computer interaction might comprise these tasks:
– Deal with a practical problem and consider the theoretical framework (research)
– State a lucid and testable hypothesis
– Identify a small number of independent variables that are to be manipulated
– Carefully choose the dependent variables that will be measured
– Select subjects and assign subjects to groups
– Control for biasing factors (non-representative sample of subjects or selection of tasks, inconsistent testing procedures)
– Apply statistical methods to data analysis
– Resolve the practical problem, refine the theory, and give advice to future researchers
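The "apply statistical methods" step can be as simple as a two-sample t statistic on task-completion times. A sketch using only the standard library; the data values are invented, and a full analysis would also report degrees of freedom and a p-value:

```python
# Hypothetical sketch: comparing task-completion times between two
# interface variants with Welch's t statistic. Data values are invented.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

old_ui = [10.0, 12.0, 11.0, 13.0, 14.0]  # seconds per task
new_ui = [15.0, 17.0, 16.0, 18.0, 19.0]
print(welch_t(old_ui, new_ui))  # -5.0
```

A large-magnitude t value suggests the difference between groups is unlikely to be chance, supporting or refuting the stated hypothesis.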
Controlled experiments can help fine-tune the human-computer interface of actively used systems.
Performance could be compared with the control group.
Key measures include speed of performance, subjective satisfaction, error rates, and user retention over time.
Evaluation methods include expert reviews, usability tests, surveys, and acceptance tests.
After release, continue performance evaluations using interviews, observation, surveys, or by logging user performance.
usability.
disabilities.
Ongoing feedback can be gathered through interviews, discussion groups, user meetings, etc.