User-Centered Evaluation (UCE)

User-centered [[evaluation]] is understood as evaluation conducted with methods suited for the [[framework]] of user-centered [[design]] as it is described in ISO 13407 (ISO, 1999). Within the framework of user-centered design, it is typically focused on evaluating the [[usability]] (ISO, 1994) of the [[system]], possibly along with additional evaluations of the users' subjective experiences including factors like enjoyment and trust (Brandtzæg, Følstad, & Heim, 2002; Egger, 2002; Jordan, 2001; Ljungblad, Skog, & Holmquist, 2002). User-centered evaluation may also focus on the user experience of a company or [[service]] through all channels of communication.<ref>What is User-Centered Evaluation (UCE)? [https://www.researchgate.net/publication/277311024_Challenges_in_Conducting_User-Centered_Evaluations_of_Mobile_Services Asbjørn Følstad, Odd-Wiking Rahlff]</ref>
  
  
User-Centered Evaluation (UCE) is defined as an empirical evaluation obtained by assessing user performance and user attitudes toward a system, and by gathering subjective user [[feedback]] on effectiveness and satisfaction, [[quality]] of work, support and training costs, or user health and well-being.<ref>Definition of User-Centered Evaluation (UCE). This definition is partly based on the definition of human-centered design in ISO [[guideline]] 13407: ‘human-centered design processes for interactive systems’ (ISO, 1999)</ref>
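
The definition above does not prescribe a particular measurement instrument for subjective feedback. As an illustrative sketch only, assuming the evaluators quantify satisfaction with the widely used System Usability Scale (SUS) questionnaire, one participant's score could be computed as follows (the responses and function name are hypothetical):

<syntaxhighlight lang="python">
# Illustrative sketch: scoring subjective user feedback with the
# System Usability Scale (SUS), one common questionnaire for this purpose.
# The ten responses below are hypothetical; each item is answered on a
# 1 (strongly disagree) to 5 (strongly agree) scale.

def sus_score(responses):
    """Compute a 0-100 SUS score from ten item responses (each 1-5)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:          # odd-numbered items (1, 3, 5, ...): positively worded
            total += r - 1
        else:                   # even-numbered items (2, 4, 6, ...): negatively worded
            total += 5 - r
    return total * 2.5

# One participant's (made-up) answers to the ten SUS items.
participant = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
print(sus_score(participant))   # 85.0 for this example
</syntaxhighlight>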
  
The term user-centered evaluation refers to evaluating the utility and [[value]] of [[software]] to the intended end-users. While usability (ease of use of software) is certainly a necessary condition, it is not sufficient.
  
  
'''[[Goals]] of User-Centered Evaluation (UCE)'''<ref>Goals of User-Centered Evaluation (UCE) [https://pdfs.semanticscholar.org/cce5/20b94df2bd65d867f72bffa6e4b4b40f9577.pdf Lex Van Velsen et al.]</ref><br />
User-centered evaluation (UCE) can serve three goals: supporting decisions, detecting problems and verifying the quality of a [[product]] (De Jong & Schellens, 1997). These functions make UCE a valuable tool for developers of all kinds of systems, because they can justify their efforts, improve upon a system or help developers to decide which version of a system to release. In the end, this may lead to higher adoption of the system, more ease of use and a more pleasant user experience.
  
 
'''Formative User-Centered Evaluation'''<ref>What is Formative User-Centered Evaluation? [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.1612&rep=rep1&type=pdf Joseph L. Gabbard, Deborah Hix, J. Edward Swan II]</ref><br />
Formative user-centered evaluation is an empirical, observational evaluation method that ensures usability of interactive systems by including users early and continually throughout user interaction development. The method relies heavily on usage context (for example, user task, user motivation, and so on) as well as a solid understanding of human-[[computer]] interaction (and in the case of VEs, human-VE interaction). Therefore, a usability specialist generally proctors formative user-centered evaluations. Formative evaluation aims to iteratively and quantifiably assess and improve a user interaction design. Figure 2 shows the steps of a typical formative evaluation cycle. The cycle begins with development of user task scenarios, which are specifically designed to exploit and explore all identified task, information, and work flows. Note that user task scenarios derive from results of the user task analysis. Moreover, these scenarios should provide adequate coverage of tasks as well as accurate sequencing of tasks identified during the user task analysis. Representative users perform these tasks as evaluators collect [[data]]. These data are then analyzed to identify user interaction components or features that both support and detract from user task performance. These observations are in turn used to suggest user interaction design changes as well as formative evaluation scenario and observation (re)design.

Note that in the formative evaluation [[process]] both qualitative and quantitative data are collected from representative users during their performance of task scenarios. Developers often have the false impression that usability evaluation has no “real” process and no “real” data. To the contrary, experienced usability evaluators collect large volumes of both qualitative data and quantitative data. Qualitative data are typically in the form of critical incidents, which occur while a user performs task scenarios. A critical incident is an event that has a significant effect, either positive or negative, on user task performance or user satisfaction with the interface. Events that affect user performance or satisfaction therefore have an [[impact]] on usability. Typically, a critical incident is a problem encountered by a user (such as an error, being unable to complete a task scenario, or user confusion) that noticeably affects task flow or task performance. Quantitative data are generally related, for example, to how long it takes and the number of errors committed while a user performs task scenarios. These data are then compared to appropriate [[baseline]] [[metrics]]. Quantitative data generally indicate that a problem has occurred; qualitative data indicate where (and sometimes why) it occurred.
  
 
[[File:UCE1.png|300px|Formative User-Centered Evaluation]]<br />
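
The comparison of quantitative data against baseline metrics described above can be summarized in a short, purely hypothetical sketch: the task names, baseline values, observations and incidents below are invented for illustration and are not taken from the cited study.

<syntaxhighlight lang="python">
# Illustrative sketch of comparing quantitative formative-evaluation data
# against baseline metrics. All task names, baselines, and observations
# below are hypothetical examples.

# Baseline metrics per task scenario: (max completion time in seconds, max errors)
baselines = {
    "create document": (60.0, 1),
    "share document": (45.0, 0),
}

# One participant's observed performance per task scenario.
observations = {
    "create document": {"time": 92.5, "errors": 3},
    "share document": {"time": 40.0, "errors": 0},
}

# Qualitative data: critical incidents noted by the evaluator.
critical_incidents = {
    "create document": ["confused by the 'save as template' dialog"],
    "share document": [],
}

for task, (max_time, max_errors) in baselines.items():
    obs = observations[task]
    if obs["time"] > max_time or obs["errors"] > max_errors:
        # Quantitative data indicate *that* a problem occurred ...
        print(f"{task}: exceeded baseline "
              f"({obs['time']}s vs {max_time}s, {obs['errors']} vs {max_errors} errors)")
        # ... while the critical incidents suggest *where* and *why*.
        for incident in critical_incidents[task]:
            print(f"  critical incident: {incident}")
</syntaxhighlight>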
 
===See Also===
*[[User-Centered Design (UCD)]]
*[[Human-Centered Design (HCD)]]
*[[Usability]]
*[[User Experience Design (UX)]]
*[[User Interface Design (UI)]]
*[[User Interface]]
*[[User Acceptance Testing (UAT)]]
*[[User Datagram Protocol (UDP)]]
*[[Adaptive Web Design (AWD)]]
*[[Progressive Enhancement (PE)]]
*[[Responsive Web Design (RWD)]]

===References===
<references/>

===Further Reading===
*User centered evaluation of interactive data visualization forms for document [[management]] systems [https://ac.els-cdn.com/S2351978915006708/1-s2.0-S2351978915006708-main.pdf?_tid=8cff219e-7fd7-47a3-904e-24e5d47b037f&acdnat=1522846260_be439c84b0579e44148dee4c112b9165 Antje Heinicke, Chen Liao, Katrin Walbaum et al.]
*User-Centered Evaluation [[Methodology]] for Interactive Visualizations [http://www.dis.uniroma1.it/beliv08/pospap/oconnel.pdf NIST]
*User-centered evaluation of adaptive and adaptable systems: a literature review [https://pdfs.semanticscholar.org/cce5/20b94df2bd65d867f72bffa6e4b4b40f9577.pdf University of Twente]