Norming a VALUE rubric to assess graduate information literacy skills

Authors

  • David J. Turbow, Outcomes Assessment Coordinator, University of St. Augustine for Health Sciences, 700 Windy Point Drive, San Marcos, CA 92069
  • Julie Evener, MLIS, Director of Library Services, University of St. Augustine for Health Sciences, 1 University Boulevard, St. Augustine, FL 32086

DOI:

https://doi.org/10.5195/jmla.2016.13

Keywords:

Information Literacy, Calibration, Educational Measurement, Education, Graduate, Interdepartmental Relations, Cooperative Behavior

Abstract

Objective: The study evaluated whether a modified version of the information literacy Valid Assessment of Learning in Undergraduate Education (VALUE) rubric would be useful for assessing the information literacy skills of graduate health sciences students.

Methods: Through facilitated calibration workshops, an interdepartmental six-person team of librarians and faculty engaged in guided discussion about the meaning of the rubric criteria. They applied the rubric to score student work for a peer-review essay assignment in the "Information Literacy for Evidence-Based Practice" course. To determine inter-rater reliability, the raters participated in a follow-up exercise in which they independently applied the rubric to ten samples of work from a research project in the doctor of physical therapy program: the patient case report assignment.

Results: For the peer-review essay, a high level of consistency in scoring was achieved in the second workshop, with statistically significant intra-class correlation coefficients above 0.8 for three criteria: "Determine the extent of evidence needed," "Use evidence effectively to accomplish a specific purpose," and "Access the needed evidence." Participants concurred that the essay prompt and rubric criteria adequately discriminated the quality of student work for the peer-review essay assignment. When raters independently scored the patient case report assignment, inter-rater agreement was low and not statistically significant for all rubric criteria (kappa = −0.16, p>0.05, to kappa = 0.12, p>0.05).

Conclusions: While the peer-review essay assignment lent itself well to rubric calibration, scorers had a difficult time with the patient case report. Lack of familiarity among some raters with the specifics of the patient case report assignment and subject matter might have accounted for low inter-rater reliability. When norming, it is important to hold conversations about search strategies and expectations of performance. Overall, the authors found the rubric to be appropriate for assessing information literacy skills of graduate health sciences students.
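The Results above summarize agreement with intra-class correlation coefficients and kappa statistics. As an illustration only, the short Python sketch below shows one common way pairwise inter-rater agreement can be computed from rubric scores using Cohen's kappa (via scikit-learn's cohen_kappa_score); the rater names and scores are hypothetical and do not reproduce the study's data or its exact statistical procedure.

    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical rubric scores (0-4 scale) for ten work samples from three raters.
    scores = {
        "rater_a": [3, 2, 4, 1, 3, 2, 2, 4, 3, 1],
        "rater_b": [3, 3, 4, 2, 2, 2, 1, 4, 3, 2],
        "rater_c": [2, 2, 3, 1, 3, 3, 2, 4, 2, 1],
    }

    # Cohen's kappa for every pair of raters on a single rubric criterion.
    for (name_1, ratings_1), (name_2, ratings_2) in combinations(scores.items(), 2):
        kappa = cohen_kappa_score(ratings_1, ratings_2)
        print(f"{name_1} vs {name_2}: kappa = {kappa:.2f}")

Multi-rater designs like the one described in the abstract are often summarized instead with an intra-class correlation coefficient or Fleiss' kappa rather than pairwise Cohen's kappa.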


Published

2016-09-12


Section

Surveys and Studies