Iris Xie, Edward Benoit III (2013), "Search result list evaluation versus document evaluation: similarities and differences", Journal of Documentation, Vol. 69 No. 1, pp. 49-80
Abstract: Purpose – The purpose of this study is to compare the evaluation of search result lists and documents, in particular the evaluation criteria, elements, associations between criteria and elements, pre-, post- and evaluation activities, and the time spent on evaluation.
Design/methodology/approach – The study analyzed data collected from 31 general users through pre-questionnaires, think-aloud protocols and logs, and post-questionnaires. Types of evaluation criteria, elements, associations between criteria and elements, evaluation activities and their associated pre/post activities, and time spent were analyzed using open coding.
Findings – The study identifies similarities and differences between list and document evaluation by analyzing 21 evaluation criteria applied, 13 evaluation elements examined, pre-, post- and evaluation activities performed, and time spent. The authors also explored the time spent evaluating lists and documents for different types of tasks.
Research limitations/implications – This study helps researchers understand the nature of list and document evaluation. Additionally, it connects the elements that participants examined to the criteria they applied, and further reveals problems associated with the lack of integration between list and document evaluation. The findings suggest that more elements, especially at the list level, be made available to support users in applying their evaluation criteria. Integrating list and document evaluation, as well as pre-evaluation, evaluation, and post-evaluation activities, into interface design is essential for effective evaluation.
Originality/value – This study fills a gap in current research concerning the comparison of list and document evaluation.