When conducting website usability testing for government entities, there are specific compliance standards and guidelines to follow. Government guidelines recommend performance testing, in which real participants work through realistic task scenarios, so that the data collected reflects participants' success rates, speed of performance, and satisfaction rather than a broad set of usability heuristics, as in an 'inspection evaluation.' Government guidelines also call for an iterative approach to evaluating a website: once the first round of test results is delivered, developers make the appropriate changes and then run a second test. Studies show that the more this cycle is repeated, the better the website becomes. Listed below are a few examples of basic government guidelines applicable to usability testing for government websites.
Website Usability Testing
- Usability Test Participants: Ask usability test participants for comments either during or after a usability test. This gives developers valuable feedback when making changes to a website.
- Evaluate Possible Changes: Evaluate websites before and after making changes to them. This helps developers determine whether the changes actually improved usability.
- Usability Magnitude Estimation Measure: Prioritize the fixing of usability issues using the Usability Magnitude Estimation measure, in which participants judge how difficult or easy a task will be before attempting it. Each task is placed into one of four categories, which determines which issues to fix first.
- Prioritize the Changes: Further prioritize usability issues by using frequency and severity data to determine what else needs changing. The most severe issues should always be fixed first.
- Select a Strategic Number of Testers: Use the right number of participants when conducting usability evaluations. Using too few may leave important problems undetected, while using too many wastes valuable resources.
- Understand the Evaluator Effect: Be careful of the 'evaluator effect' when conducting inspection evaluations. The 'evaluator effect' occurs when multiple evaluators evaluating the same website detect vastly different sets of problems. No one evaluator is likely to detect the majority of the severe problems that will be detected collectively by all evaluators.
- Usability Testing Software: Apply automatic evaluation methods when conducting initial evaluations of a website. An automatic evaluation method is one in which software is used to evaluate a website. Because such tools rely on generalized rules rather than real user performance, it is important to combine their results with a variety of other factors and opinions when determining the correct changes.
- Correct Obvious Issues First: Use caution when conducting cognitive walkthroughs. This method is employed to help resolve obvious problems before conducting extensive performance tests, but it often flags far more potential problems than actually exist.
- Calculate Severity Ratings: Use severity ratings with caution, because research shows that even highly experienced usability specialists cannot agree on which issues will have the greatest impact on usability.
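The severity-and-frequency prioritization described above can be sketched in a few lines of code. This is an illustrative sketch only, not an official government formula; the issue records and the 1–4 severity scale below are hypothetical examples.

```python
# Illustrative sketch: ordering usability issues by severity, then frequency.
# The issue data and the 1-4 severity scale are hypothetical examples.

def prioritize(issues):
    """Sort issues so the most severe come first; among issues of equal
    severity, the ones affecting the most users come first."""
    return sorted(issues, key=lambda i: (-i["severity"], -i["frequency"]))

issues = [
    {"name": "unclear form label", "severity": 2, "frequency": 0.9},
    {"name": "broken site search", "severity": 4, "frequency": 0.6},
    {"name": "slow page load",     "severity": 3, "frequency": 0.4},
]

for issue in prioritize(issues):
    print(f'{issue["name"]}: severity {issue["severity"]}, '
          f'affects {issue["frequency"]:.0%} of users')
```

Sorting on severity first reflects the guideline that the most severe issues are always fixed first; frequency only breaks ties.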
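Choosing the right number of participants is often reasoned about with Nielsen and Landauer's problem-discovery model, in which each participant is assumed to uncover a given problem with some independent probability p. The sketch below uses the commonly cited average estimate p ≈ 0.31; the actual value varies by study and by site, so treat the numbers as illustrative.

```python
# Sketch of Nielsen & Landauer's problem-discovery model: the expected
# share of usability problems found by n participants, assuming each
# participant uncovers a given problem with independent probability p.
# p = 0.31 is a commonly cited average estimate, not a fixed constant.

def coverage(n_participants, p=0.31):
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 10):
    print(f"{n:2d} participants -> ~{coverage(n):.0%} of problems found")
```

Under this model, coverage rises steeply at first and then flattens out, which is why small iterative rounds of testing (test, fix, retest) tend to find more problems per participant than one large study.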
At Magic Logix, we develop websites to meet a defined set of user requirements, setting usability goals and expectations both before and during development to ensure the finished website meets client expectations.