
CSU-Wide Library Assessment Toolkit

This Toolkit contains resources to help you with library assessment needs in three areas: Information Literacy, Library Collections, and Space Usage.


Why Are You Doing Assessment?

As you begin an assessment project, there are some core questions to ask yourself and your team. Defining the ultimate purpose and goals of your project or assessment program will help you organize your methodologies, rubrics, and expected outcomes more effectively.

Key questions:

A.   Are you doing assessment primarily to improve the library's operational effectiveness, focusing on internal functions and operations? If so, your methodologies should identify expected assessment outcomes that will support more effective management decisions affecting budgets, staffing, resource allocation, or equipment upgrades.

B.    Are you focused instead on the students themselves, measuring the impact of library services, spaces, or collections against defined rubrics of their success, satisfaction, engagement, and/or measurable outcomes such as improved GPA? In this case, your methodologies would put students at the center of the assessment effort to see how your library impacts their scholastic and student-life achievements.

C.    Are you doing assessment for an external or third-party need, such as university administration, with the hope of using solid data to show the library's value to the university? A common presidential request is to show how the library impacts student success, retention, or graduation rates. Methods of assessment here have to seek evidential data that syncs with university norms and expectations, so working closely with offices of institutional effectiveness or research may prove critical.

D.   Are you responding to other macro-level purposes, such as, in the CSU's case, Chancellor's Office or state mandates, or requirements from an accreditation agency such as WASC?

Each of these purposes for doing assessment (there may be others) will influence your objectives, data capture methods, rubrics, overall methodology, and partners.

The key questions to ask your team first, before you even begin designing a methodology, include but are not limited to:

a.     What are you trying to measure/assess?

b.     Why are you doing these measurements and assessments?

c.     What is the desired outcome of the assessment effort?

d.     What user population, service, or resource will be measured?

e.     What is a valid sample size?

f.      By what means will they be measured?

g.     How will data be captured?

h.     How will you account or screen for variables and identify unforeseen influences or skews in your retrieved data?

i.      How will you analyze your data rigorously and measure outcomes?

j.      How do you intend to test the assumptions you draw from your data analysis?

k.     Who will see/analyze/use these data? And for what purposes?

l.      How will you then present your results?

m.   How might these assessment data influence strategic or tactical changes in your operations?

n.     How will you review your methodologies to retool and improve future assessment?

o.     Will you publish the results of your assessment effort(s)?
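For question (e), a valid sample size is usually estimated from your target population, desired confidence level, and acceptable margin of error. As a minimal sketch (not part of the Toolkit itself), the calculation below uses Cochran's formula with a finite-population correction; the default values (95% confidence, ±5% margin, maximum variability p = 0.5) are common survey conventions, not CSU requirements.

```python
import math

def required_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Estimate survey sample size via Cochran's formula.

    population -- size of the group being surveyed (e.g., enrolled students)
    margin     -- acceptable margin of error (0.05 = +/-5%)
    z          -- z-score for the confidence level (1.96 = 95%)
    p          -- assumed response proportion (0.5 maximizes variability)
    """
    # Cochran's formula for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite-population correction scales n0 down for smaller populations
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Example: a campus of 20,000 students at 95% confidence, +/-5% margin
print(required_sample_size(20000))  # 377
```

Note that this estimate assumes simple random sampling; stratified or convenience samples, and expected non-response rates, would change the number you actually need to recruit.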