There’s been plenty of talk around the Library 2.0 theme on the idea of evaluation or assessment. At Information Wants to be Free, Meredith Farkas says what she wanted to see come out of Library 2.0 was a greater focus on assessment. I certainly want to see libraries have a greater focus on assessment, too, and I want to see them publishing about it. (Particularly public libraries. We just don’t publish enough.)
Why aren’t we (libraries in general) publishing about the success (or failure) of our 2.0 projects? Why is there virtually no data to be found that quantifies some of the outcomes of 2.0 projects? We’ve been on this 2.0 bandwagon long enough for studies and assessments and evaluations to have been undertaken. For a movement that’s intrinsically tied up with quick publishing channels like blogs and wikis, it seems strange that there is a real dearth of published studies on 2.0 projects. Why is that?
Walt Crawford had this to say in a recent post on his two blog survey books:
Maybe there’s a clear desire not to know how library blogs are doing in the real world, other than a few cherry-picked examples. I’d like to think that’s not the case. It would be unprofessional to tell people about how wonderful library blogs are, and encourage them to create such blogs, without giving them honest and broad-ranging information on what’s actually happening with such blogs.
I’d like to think that’s not the case, too. But I wonder. I wonder a few things:
- Is the lack of publishing indicative of a lack of success? (And a fear of talking about it?)
- Is the lack of publishing indicative of a perceived lack of success, a perception that might be formed because we’re not collecting the right data? (e.g., how are we measuring ROI? Do we just count comments on blog posts? Or do we look at exit links, time spent on the page, holds on titles blogged about, impact on online resource usage stats…? I certainly hope all of these metrics and more are informing libraries’ evaluations of their blogs, because if we’re relying on comments alone to measure user engagement, we’re not seeing the full picture.)
- Is the lack of publishing indicative of a lack of evaluation? (And if so, why aren’t we evaluating? Because we don’t know how? Because we don’t have time? Because we don’t want to know?)
- Or, is it just that we’re not publishing about our evaluations?
I’ve got a blogging project in the pipeline at mpow. It’s germinating quite slowly, because I want it well planned: a well-planned implementation, but also a well-planned, multi-faceted evaluation. If it works, I want to know about it, and I want us to be able to reflect on what we did and make links to what worked. If it doesn’t work, I want to know about it just as much (if not more), because I want to be able to reflect on what we did, look for ways we could improve, and ultimately, pull the pin if that’s what we need to do.
Blogs (and all things shiny and 2.0) are just great. They’re fun for staff to work on, and they have huge potential to engage our users. But none of us has time to run services that don’t work. If we don’t evaluate, we have no way of knowing whether our services are working at all.
We know that “because we always did it that way” is not a good reason to keep doing the things we’ve always done, whether they work or not. But neither should a failure to evaluate be the reason we keep on keeping on with our 2.0 services.
If you have evaluated your 2.0 service, publish about it! And if you have published, I’d love to see some links.