
The assessment presents students with a series of 12 tasks, each of which consists of (a) a description of a situation in which information must be handled using digital technology and (b) simulated digital tools. The assessment system records student interactions with the simulated tools and automatically scores students' responses, along with solution process metrics that, through field studies and other empirical research, have been shown to be construct relevant.
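
To make the format concrete, the sketch below shows one way an assessment system of this kind might log interactions with simulated tools and derive simple process metrics (time on task, number of sources consulted). This is a minimal illustration in Python; all class and field names are invented and do not reflect the actual iSkills implementation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class InteractionEvent:
        timestamp: float   # seconds since task start
        tool: str          # e.g., "search", "email", "file_manager"
        action: str        # e.g., "query", "open_result"
        payload: str = ""  # query text, document id, etc.

    @dataclass
    class TaskSession:
        task_id: str
        events: List[InteractionEvent] = field(default_factory=list)

        def log(self, event: InteractionEvent) -> None:
            self.events.append(event)

        # Process metrics of the kind that field studies could test
        # for construct relevance.
        def time_on_task(self) -> float:
            return self.events[-1].timestamp if self.events else 0.0

        def sources_consulted(self) -> int:
            return sum(1 for e in self.events if e.action == "open_result")

    session = TaskSession(task_id="task-01")
    session.log(InteractionEvent(2.5, "search", "query", "renewable energy"))
    session.log(InteractionEvent(8.1, "search", "open_result", "doc-42"))
    print(session.time_on_task(), session.sources_consulted())  # 8.1 1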

The scenarios cover a range of workplace, academic, and personal uses of information in the context of technology. Overall, iSkills provides evidence of many key dimensions of DIL skill, including the six dimensions featured in our proposed definition of DIL (define, access, evaluate, manage, integrate, and create; see Table 3), plus an added dimension of communicating about and with digital information. The extent to which this richer assessment format yields enhanced reliability and validity as compared to other assessment formats is discussed in the following section.

A review of published reliability and validity evidence for the information literacy assessments discussed previously appears in Table 6. In the following sections, we summarize the available evidence with respect to reliability and various types of validity: content, construct, concurrent, and predictive validity. Note that across the assessments reviewed, the amount of available validity evidence varies considerably, as does the level of detail reported.


When such figures were not indicated by the study authors, we note this. The studies reviewed (see Table 6) include Cameron et al.; Gross and Latham; Radcliff et al.; Radcliff and Salem; Hill et al.; Mery et al.; Detlor et al.; Lym, Grossman, Yannotta, and Talih; O'Connor et al.; Ivanitskaya; Ivanitskaya et al.; Ratcliff et al.; Clark and Catts; Haber and Stoddart (n.d.); Davis and Cleere; Klein et al.; Zahner; the OECD; Katz; Katz et al.; Katz and Elliot; Ali and Katz; Beile et al.; Hignite et al.; Snow and Katz; Tannenbaum and Katz; and Wallace and Jefferson.





Reliability evidence reflects the extent to which a test consistently measures the same construct.



One form of evidence is internal consistency reliability, which assesses the extent to which items within a test yield consistent results. Across all assessments reviewed, the majority of reliability estimates for the total test (coefficient alpha) range from …
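
For reference, coefficient alpha is straightforward to compute from an examinees-by-items score matrix. A minimal Python sketch with invented data (not scores from any of the reviewed instruments):

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Coefficient alpha for an (n_examinees, n_items) score matrix."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)      # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Five examinees, four dichotomously scored items (hypothetical)
    X = np.array([[1, 1, 1, 0],
                  [1, 0, 1, 1],
                  [0, 0, 1, 0],
                  [1, 1, 1, 1],
                  [0, 0, 0, 0]])
    print(round(cronbach_alpha(X), 2))  # 0.79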


The reliability of subscores was also examined. Subscores are generally less reliable than estimates for the total test, although SAILS reports subscore reliabilities exceeding … The iSkills assessment does not provide subscore reliabilities, citing results from factor analyses suggesting a unidimensional construct (Katz, Attali, et al.).

Of course, even subscores that achieve acceptable levels of reliability might not provide information about students' knowledge and skills beyond what the total test score provides (cf. …).
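
Continuing the sketch above (reusing cronbach_alpha and X), subscore reliabilities can be estimated by applying the same computation to each subscale's item columns. The subscale assignments here are hypothetical; note how the shorter subscales yield lower alphas than the total test, which is one reason subscores are generally less reliable.

    # Hypothetical mapping of item columns to subscales
    subscales = {"access": [0, 1], "evaluate": [2, 3]}

    for name, cols in subscales.items():
        print(name, round(cronbach_alpha(X[:, cols]), 2))
    # access 0.8
    # evaluate 0.57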

Content validity evidence involves examining the correspondence, or alignment, between assessment items and the subject matter domain being assessed. Content validity is often evaluated by asking subject matter experts or test users to judge the alignment between test content and curriculum or standards; this type of expert review was conducted as part of the validation efforts for a number of the assessments.
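
Expert alignment judgments of this kind are often summarized numerically, for example as the proportion of raters who judge each item aligned with its target standard (an item-level content validity index). A small hypothetical sketch; the 0.75 retention threshold is illustrative, not one used by any of the reviewed programs:

    # Rows = items, columns = expert raters;
    # 1 = judged aligned with the target standard, 0 = not aligned
    ratings = [
        [1, 1, 1, 0],  # item 1: 3 of 4 experts judge it aligned
        [1, 1, 1, 1],  # item 2: unanimous
        [0, 1, 0, 0],  # item 3: weak alignment
    ]

    for i, row in enumerate(ratings, start=1):
        cvi = sum(row) / len(row)
        flag = "keep" if cvi >= 0.75 else "revise or eliminate"
        print(f"item {i}: alignment index = {cvi:.2f} ({flag})")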


For example, for assessments aligned to information literacy standards, university librarians have judged the alignment of test items to those standards, with test developers using the data to revise or eliminate items; this approach was used for the ILT (Cameron et al.). Further evidence of content validity for iSkills is provided by Ali and Katz, who examined the relevance of DIL skills to business contexts by surveying a sample of human resource consultants, over half of whom rated the elements as important or essential for new hires. Further, a survey of business school faculty revealed that, despite the perceived importance of DIL skills, few of these skills were reported as frequently or always taught.

Overall, the developers of the assessments reviewed devoted considerable effort to validating the content of their items, in terms of either alignment with academic standards or correspondence with the skills required for success in the workforce. Construct validity concerns the extent to which a measure assesses the intended underlying theoretical construct. Few of the assessments we reviewed had published evidence of construct validity in the sense of examining the internal structure of the assessment.

The construct validity of the iSkills assessment was examined using a confirmatory factor analysis (Katz, Attali, et al.). Taken together, the limited construct validity evidence suggests that the DIL construct is unidimensional; this could help to explain the low subscore reliability estimates for assessments that report them. Another form of validity evidence involves inspecting the relationship between assessment performance and performance on other measures given at the same or a similar time (i.e., concurrent validity evidence).
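
Although the published analysis was a confirmatory factor analysis, a quick first check on dimensionality can be made from the eigenvalues of the inter-item correlation matrix: a first eigenvalue that dwarfs the rest is consistent with a single dominant factor. A sketch with simulated data (not the actual item data):

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulate 200 examinees x 8 items driven by one latent ability
    ability = rng.normal(size=(200, 1))
    items = 0.7 * ability + 0.5 * rng.normal(size=(200, 8))

    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # descending order
    print(np.round(eigvals, 2))
    # A large first eigenvalue relative to the rest suggests
    # a unidimensional construct.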

Convergent evidence comes from measures expected to be related (either positively or negatively) to the assessment, whereas discriminant evidence comes from measures expected not to be related to it. The pattern of convergent and discriminant relationships aids interpretation of assessment performance by painting a picture of the knowledge and skills that tend to be related, and unrelated, to the target construct.

In some cases, a theory might predict the expected relationships; in other cases, the inspection of concurrent measures might be more exploratory, such as when the intended construct has not been extensively studied. Studies have typically found a positive relationship between DIL assessments and several general academic ability measures; for example, researchers have reported correlations in the … range.
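
In code, a concurrent validity check of this kind reduces to a correlation matrix between the DIL score and the comparison measures; what matters is the pattern: substantial correlations with convergent measures and near-zero correlations with discriminant ones. A sketch with simulated data:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 150
    dil = rng.normal(size=n)                         # DIL assessment score
    gpa = 0.5 * dil + rng.normal(scale=0.9, size=n)  # convergent measure
    unrelated = rng.normal(size=n)                   # discriminant measure

    for name, other in [("GPA (convergent)", gpa),
                        ("unrelated trait (discriminant)", unrelated)]:
        r = np.corrcoef(dil, other)[0, 1]
        print(f"DIL vs {name}: r = {r:.2f}")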


In general, DIL assessments show moderate to high correlations with standardized cognitive assessments. Results for year in school generally show increased performance on DIL assessments with increasing school experience (Cameron et al.). Because SAILS includes questions that measure information skills in both print and digital contexts, it is not clear from these data whether the reported improvements are due specifically to the development of DIL skills or to more general competency with library research methods. The RRSA shows significantly higher performance for graduate students as compared to both seniors and sophomores (Ratcliff et al.).
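
Cross-sectional differences by year in school, like those reported above, are typically tested with a one-way ANOVA followed by pairwise comparisons. A minimal sketch with invented score distributions:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    sophomores = rng.normal(60, 10, size=50)
    seniors = rng.normal(65, 10, size=50)
    graduates = rng.normal(72, 10, size=50)

    f, p = stats.f_oneway(sophomores, seniors, graduates)
    print(f"F = {f:.2f}, p = {p:.4f}")  # small p: means differ by group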


The activities were created based on the iSkills digital literacy framework. Interestingly, simply performing these activities … Confidence ratings have also been examined for the ILT (Cameron et al.). Other validation efforts have examined correlations among DIL assessments and other measures designed to assess the same or similar constructs. This correlation, based on a sample of students who had taken both tests, improved to … In all of these cases, the correlations suggest that existing assessments tap similar knowledge and skills related to DIL.

A more extensive study compared the performance of several samples of undergraduates on iSkills and on an information literacy rubric in use at the university (Katz, Elliot, et al.). The relations between iSkills scores and other academic ability measures largely replicated those found in previously cited studies. Overall, then, while iSkills and other DIL assessments appear to measure a similar construct, these constrained instruments do not completely correspond to the range of information literacy experiences that students encounter in their coursework. An assessment may be used for placement, either to place students out of an otherwise required course or to identify students who should take a remedial course to further develop their skills.

In either case, validity evidence should be collected showing that performance on the assessment is in some way predictive of performance in the course to be taken or skipped. If it can be shown that performance on the assessment taken before a course is related to students' performance in the course (e.g., course grades), this supports its use for placement. Although the SAT is a selection exam rather than a placement exam, a similar validity logic holds: a typical predictive validity study for the SAT follows a cohort of students who took the SAT into their first year at college. A study was conducted using iSkills as a predictor of performance in an analytical business writing course at a college in California.
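
A predictive validity analysis of this kind can be sketched as a regression of the course outcome on the earlier assessment score. Below, a hypothetical logistic regression of earning an A on a pre-course DIL score; the data are simulated and do not come from the California study:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 300
    pretest = rng.normal(size=n)  # standardized pre-course DIL score
    # Simulate: higher pre-test scores raise the chance of earning an A
    p_a = 1 / (1 + np.exp(-(0.9 * pretest - 0.5)))
    earned_a = rng.binomial(1, p_a)

    model = LogisticRegression().fit(pretest.reshape(-1, 1), earned_a)
    print("slope:", round(float(model.coef_[0][0]), 2))  # positive = predictive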

This degree of prediction corresponds to students who did well on iSkills being approximately six times more likely to earn an A in the class than students who performed poorly on iSkills. This study provides validity evidence for the use of iSkills as a prescreener for a business writing course; for example, students who perform poorly might be asked to take a remedial course covering the basics of conducting a focused search of the literature and communicating conclusions reached through synthesis and evaluation of the identified information.
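
The "approximately six times more likely" figure is a relative likelihood that can be computed from a 2x2 table of pre-test performance against course grade. The counts below are invented solely to reproduce a ratio of that magnitude:

    # Hypothetical counts:    [earned A, did not earn A]
    high_iskills = [30, 20]   # high scorers: 30 of 50 earned an A
    low_iskills = [5, 45]     # low scorers:   5 of 50 earned an A

    p_a_high = high_iskills[0] / sum(high_iskills)  # 0.60
    p_a_low = low_iskills[0] / sum(low_iskills)     # 0.10
    print(f"relative likelihood: {p_a_high / p_a_low:.1f}x")  # 6.0x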

Thus, while the iSkills assessment shows some evidence of predictive validity in business writing contexts, the predictive validity of other DIL assessments has been understudied and perhaps underreported in the literature. Across the assessments included in the current review, several conclusions may be drawn. First, assessments of DIL appear to capture the construct as defined by academic standards, workforce requirements, or other theoretical conceptions, despite some variation in what is measured across these various instruments.

Second, it appears that performance on DIL assessments is moderately to highly correlated with other measures of this construct, as well as with more general measures of academic success, such as GPA or course grades.


Because DIL skills are associated with academic success, and because these skills appear to develop over the course of students' undergraduate and graduate training, it is important to obtain valid measures of these skills for use in evaluating the extent to which students possess DIL proficiency across various institutional contexts. The preceding review provides an overview of currently available measures that institutions could choose to implement; in the following section, we turn to a discussion of important characteristics of and considerations for the design of a new DIL assessment, which could serve as a measure of SLOs.

Often, a primary driver of the choice among alternative assessments is the extent to which a given instrument aligns with the intended construct; this is particularly important given the evolving nature of conceptions of DIL. Developers appear to take one of two approaches to operationalizing the construct of DIL for assessment development. The first involves identifying a particular framework or standard that aligns with the developers' definition of the construct and then writing items aligned to those objectives or components of the standard.
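
Under this first approach, the operationalization can be made explicit as a test blueprint mapping each item to a component of the chosen framework, which makes coverage gaps mechanically visible. A schematic sketch; the component names follow the six-dimension definition cited earlier, and the item IDs are invented:

    # Blueprint: framework component -> item IDs written against it
    blueprint = {
        "define":    ["q01", "q02"],
        "access":    ["q03", "q04", "q05"],
        "evaluate":  ["q06"],
        "manage":    [],  # gap: no items cover this component yet
        "integrate": ["q07", "q08"],
        "create":    ["q09"],
    }

    for component, items in blueprint.items():
        status = f"{len(items)} item(s)" if items else "NO COVERAGE"
        print(f"{component:<10} {status}")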