About a week ago I did something I’ve rarely done before. I said a “No” in response to a request for information.
The request was legitimate, and not at all unusual. My counterpart at another university was gathering Phonathon performance numbers from various universities, and she wanted to see ours. Initially I agreed, and so she sent me a detailed questionnaire. In the end, though, my response was, "Thanks, but no thanks."
Normally my bias is to be as helpful and “sharing” as possible. Why the brick wall all of a sudden? Well, I had asked the person distributing the questionnaire if the participants would be getting a summary of responses; I didn’t need to know who the participants were, I just wanted the results. The answer was sorry, but no — they hadn’t arranged permissions with other participants prior to gathering their information. She could provide her own institution’s numbers, but no one else’s.
I hate to be a party pooper, but count me out.*
I'm picking on this recent and innocent requester only as an example of what I find is becoming an issue. It's not just that we are sometimes asked to fill out questionnaires. It's that every day, you and I and everyone else are asked to chime in on one more "quick survey". The listservs are full of them. A lot of inter-university surveying goes on, especially at this time of year. How much of it is of any value to the programs that offer up the information, or of lasting value to the ones who receive it? Not much, I'd say.
“What was your participation rate this year?” That’s a common one on the listservs. I never respond to those queries because immediately all I have are more questions: How do you calculate participation? What’s the composition of the population you called? And so on. It’s not worth starting the discussion, because invariably the asker wants quick and dirty answers.
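To make the ambiguity concrete, here is a minimal sketch (in Python, with entirely invented numbers) of how the same donor count yields different "participation rates" depending on which population serves as the denominator:

```python
# Hypothetical illustration: identical donor counts, different "rates".
# All figures below are invented for the example.

def participation_rate(donors: int, population: int) -> float:
    """Donors divided by the chosen base population."""
    return donors / population

donors = 1_200
addressable_alumni = 20_000   # e.g. alumni with valid contact info
all_living_alumni = 30_000    # e.g. every living alum on record

print(f"{participation_rate(donors, addressable_alumni):.1%}")  # 6.0%
print(f"{participation_rate(donors, all_living_alumni):.1%}")   # 4.0%
```

Two programs could truthfully report 6% and 4% while performing identically, which is why a bare number on a listserv tells you very little.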
In place of these informal, one-off surveys, even the carefully thought-out ones, I would prefer to see more effort put into proper benchmarking. First, everyone who takes the trouble to provide their numbers ought to receive the benefit of a set of results (anonymous if need be). And second, one-off questionnaires tend to gloss over important differences between institutions, programs, and appeals, leading to invalid observations and comparisons.
The discipline imposed by benchmarking forces everyone to agree on exact definitions of terms and the exact methods of calculating the metrics — otherwise the results are not comparable. Even apparently self-evident terms such as “alumnus”, “donor”, or “acquisition” are devilishly hard to standardize across institutions.
The standardization of data makes benchmarking hard, and slow to get done. No wonder some prominent analytics vendors sell program benchmarking as a service. So I get it: Sometimes we just want a rough answer, often only to satisfy someone higher up. That doesn't make it right.
We can fill out each other's questionnaires and respond to quickie listserv surveys all day, but at some point we need to conserve our limited time for something more useful. Share? Yes, but share smarter. From now on, your first question should be about how the results will be distributed to participants. Not even full-on benchmarking, just sharing back. If there's no plan, then do yourself a favour and decline.
* P.S.: In this case, the requester went back to the participants and arranged to share the information with everyone, and I relented.