Title: Search result list evaluation versus document evaluation: similarities and differences
Author(s): Iris Xie (School of Information Studies, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA); Edward Benoit III (School of Information Studies, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA)
Citation: Iris Xie, Edward Benoit III, (2013) "Search result list evaluation versus document evaluation: similarities and differences", Journal of Documentation, Vol. 69 Iss: 1, pp. 49-80
Keywords: Comparison, Document evaluation, Evaluation activities, Evaluation criteria, Evaluation elements, Evaluation time, Information retrieval, Relevance criteria, Search result list evaluation, Searching
Article type: Research paper
DOI: 10.1108/00220411311295324
Publisher: Emerald Group Publishing Limited
Acknowledgements: The authors thank the University of Wisconsin-Milwaukee Research Growth Initiative program for generously funding the project, Tim Blomquist and Marilyn Antkowiak for their assistance with data collection, and Huan Zhang for her assistance with data analysis. The authors would also like to thank the anonymous reviewers for their constructive comments.
Abstract:
Purpose – The purpose of this study is to compare the evaluation of search result lists and documents, in particular evaluation criteria, elements, associations between criteria and elements, pre/post and evaluation activities, and the time spent on evaluation.
Design/methodology/approach – The study analyzed data collected from 31 general users through pre-questionnaires, think-aloud protocols and logs, and post-questionnaires. Types of evaluation criteria, elements, associations between criteria and elements, evaluation activities and their associated pre/post activities, and time were analyzed based on open coding.
Findings – The study identifies the similarities and differences between list and document evaluation by analyzing 21 evaluation criteria applied, 13 evaluation elements examined, pre/post and evaluation activities performed, and time spent. In addition, the authors explored the time spent evaluating lists and documents for different types of tasks.
Research limitations/implications – This study helps researchers understand the nature of list and document evaluation. Additionally, it connects the elements that participants examined to the criteria they applied, and further reveals problems associated with the lack of integration between list and document evaluation. The findings suggest that more elements, especially at the list level, be made available to support users in applying their evaluation criteria. Integrating list and document evaluation, and integrating pre-evaluation, evaluation, and post-evaluation activities in interface design, is essential for effective evaluation.
Originality/value – This study fills a gap in current research regarding the comparison of list and document evaluation.