Lecture 10a:
Evaluation and Testing
What You Will Learn Today
- Perform a design critique of a site.
- Apply criteria to evaluate a site's effectiveness.
- Use a scientific evaluation process.
- Create a usability test plan to design and perform a
usability test.
- Evaluate the site development process.
- Use site validation and management
tools.
Site Critique
- A site critique is an evaluation of a site based on principles and
criteria such as those we have studied.
- The purpose of a site critique is to help the designer(s) (e.g. your
classmate) to improve the site.
- How successful does the site appear to be in meeting its goals
and satisfying the needs of its target users?
- Begin with an objective description of the site, including goals
and target users, content and format.
- Select aspects of the site to critique (see below).
- In each aspect of the site, attempt to balance positives and
negatives.
- A balanced critique is less likely to be biased or closed-minded.
- Write about as many positive observations as negative observations.
- Try to think of at least one positive and one negative for each aspect.
- State positives before negatives.
- Provide evidence for any judgmental claims you make about the site.
- Summarize with a conclusion that gives an overall judgment
and recommendations for improvement.
- What aspects of your site do you want to evaluate?
- What questions do you have about these aspects of your site?
- Design decisions where you are unsure of which design alternative to
choose
- How the user will actually use the site, e.g. what the user will
read/look at, what links he/she will click, in what order
- User success, satisfaction and approval
- User problems, complaints and recommended changes
Empirical Evaluation Process
- Questions: What would you like to know about your design?
- List important questions that your users or an expert might be able to
help you answer.
- Hypotheses: What would you guess the users might think about your
design?
- Give possible answers to the questions.
- Test Design: What evidence would help to answer the questions?
- Design a test to collect data and confirm or deny the hypotheses.
- e.g. a critique by an expert, observations of users, interviews with
users, site access statistics recorded by the web server (see the
log-analysis sketch after this list)
- Observations: What did the user say or do?
- Give non-judgmental, objective statements of fact.
- Findings: What was the user thinking/feeling? What did the user
expect/want/try to do?
- Generalize from observations to make evidence-based judgments that
answer the questions.
- Look for trends rather than relying on a single user's experiences with
or opinions of your site.
- Revisions: How should the site be redesigned? What findings support
this?
- List recommended site changes based on each of the findings.
- No site is perfectly usable, and a usability test that finds no problems
is probably a poor test.
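- If the web server's access log is available, simple command-line tools
can summarize which pages users actually requested. A minimal sketch,
assuming an Apache-style common log format, where the requested path is
the seventh field of each line:
  # list the most frequently requested pages in the access log
  awk '{ print $7 }' access.log | sort | uniq -c | sort -rn | head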
Usability Test Plan
- First, write your answers to the first three steps of the evaluation
process above (questions, hypotheses, test design).
- How many people are necessary to test your site?
- The biggest issues can usually be detected with just a few participants.
(For your project you need 3.)
- How authentic are your test subjects?
- For each subject, record relevant demographic characteristics.
- e.g. name, age, student/faculty/staff, years of computer/Internet
experience, English abilities, knowledge of the content area
- Do they resemble your target audience in the characteristics listed in
your proposal?
- How authentic do they need to be to get accurate data?
- What materials and facilities will be used for your test?
- Will you use paper sketches of your web pages, or an early
HTML-based version of your site?
- Rough prototypes are better than a fully functional version
because they are easier to change early in the process.
- Users are also less likely to criticize an electronic version because
they know it would be a lot of work to change.
- How much time will the test take?
- People are busy and will not want to give you much of their time (e.g.
15-20 minutes).
- Why should they help you? How will they benefit?
- How will the test be conducted?
- How will you introduce the test to the user?
- What will you tell the user to do?
- How will you observe and record the results?
- It is important to write down what you will say so the test subjects
will get consistent instructions.
- What questions will you ask the user at the end?
- Paper-based questionnaires should be short and objective
(demographics, yes/no, agree/disagree); a web-form version is
sketched after the example questions below.
- Most people can't be bothered to write long paragraphs to evaluate web
sites.
- Interview questions can be more detailed and open-ended. (What
did you think of x? Why did you do y?)
- Example questions (Sklar p. 199):
- Did you find the information you needed?
- Was it easy or difficult to access the information you needed?
- Was the web site visually attractive?
- Was the content easy to read?
- Was the web site easy to navigate?
- Did you think the information was presented correctly?
- Did the information have enough depth?
- What areas of the web site did you like the best/least? Why?
- Would you recommend the web site to others?
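- Questions like these could also be administered as a short web form
instead of on paper. A minimal sketch (wording adapted from the list
above; the form's action URL and field names are illustrative, not part
of any real site):
  <form action="/cgi-bin/survey" method="post">
    <p>Did you find the information you needed?
      <input type="radio" name="found" value="yes"> Yes
      <input type="radio" name="found" value="no"> No
    </p>
    <p>The web site was easy to navigate.
      <select name="navigation">
        <option>Strongly agree</option>
        <option>Agree</option>
        <option>Disagree</option>
        <option>Strongly disagree</option>
      </select>
    </p>
    <p><input type="submit" value="Send answers"></p>
  </form>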
Development Process Evaluation
- How did I originally plan to develop the site? (See development
process and development tasks)
- What development steps did I imagine, and how long were they
expected to take?
- How did my actual process differ from my plans?
- What took more time than expected, and what took less?
- How did I spend my time productively and efficiently, and how did I
waste time?
- What problems did I encounter?
- What software tools and techniques did I learn?
- What people, books and other learning resources did I consult?
- What did I learn about the web development process?
- How would I do things differently if I had to do it again?
Site Validation and Management Tools
- There are many validation tools on the Internet.
- They can check that your HTML and CSS are correct, display broken
links, display accessibility problems like missing alt attributes and
noframes tags, etc.
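- For instance, an accessibility checker would flag the first image
below and accept the second (the file name and alt text are
illustrative):
  <!-- flagged: no alt attribute, so screen readers have nothing to announce -->
  <img src="logo.gif">
  <!-- accepted: the alt text describes the image for non-visual users -->
  <img src="logo.gif" alt="Acme Corporation logo">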
- HTML Tidy is
a powerful utility to convert non-standard HTML to XHTML.
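- Assuming Tidy is installed as a command-line tool, a typical
invocation might look like this (file names are illustrative):
  # quietly convert page.html to XHTML, writing the result to a new file
  tidy -q -asxhtml -o page-clean.html page.html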
- Many web site programs allow you to search and replace a string in
multiple files.
- In UNIX (caution: make a backup first):
  perl -pi -e 's/string1/string2/g' filenames
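- For example, to update an outdated hostname in every HTML file in the
current directory while letting Perl keep backup copies of the
originals (the hostname is illustrative):
  # rewrite each *.html file in place, saving the original as *.html.bak
  perl -pi.bak -e 's/www\.oldhost\.edu/www.newhost.edu/g' *.html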