We know that participation in surveys is correlated with giving, and that responses to some questions are more strongly correlated with giving than others. Today I delve into which question topics are the most predictive.
If you’ve seen earlier posts, you’ll know that I’ve been working with two survey response data sets from alumni engagement surveys: one for a relatively small, primarily undergraduate university, and one for a larger university that grants both undergraduate degrees and graduate degrees across a number of faculties. These were two very different exercises: one was aimed at benchmarking engagement against peer universities, the other was simply a snapshot of alumni engagement with no benchmarking component. But both were wide-ranging surveys, aimed at all living alumni (or a very large representative sample), and each contained almost a hundred questions.
Evaluating the predictive power of every single question is a tedious task, and the work isn’t necessarily going to be rewarded at the end of the day. If your response set is small in comparison with your total number of alumni, the significance of most variables you create will be lost in the wash. More testing is better than less, but if you’re pressed for time, or you doubt the value of investing it, you have three other options.
- First, you can create a variable for simple participation in the survey: a 1 for alumni who participated, a 0 for alumni who were invited but did not participate, and a neutral middle value (0.5) for alumni who were not invited to participate (but might have if they had been invited).
- Second, if it’s a true engagement survey that results in some kind of calculated score for each individual, you can use that score as the predictor variable. Again, alumni who were invited but did not participate can be coded zero; alumni who were not invited can be assigned the average score of those who did respond. (A coding sketch for these first two options follows this list.)
- Or third, you can zero in on the category of questions that beats all others for predictive value: ANYTHING to do with the subject of giving a gift to your school.
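For the first two options, here is a minimal coding sketch in pandas. The file name and the `invited`, `responded`, and `engagement_score` columns are all my inventions; substitute whatever your own extract looks like.

```python
import numpy as np
import pandas as pd

# Hypothetical extract: one row per living alum, with the survey
# invitation list and results already merged in.
alumni = pd.read_csv("alumni.csv")  # assumed columns: invited, responded, engagement_score

# Option 1: simple participation flag.
# 1 = responded, 0 = invited but did not respond, 0.5 = never invited.
alumni["participation"] = np.where(
    alumni["responded"], 1.0,
    np.where(alumni["invited"], 0.0, 0.5),
)

# Option 2: the engagement score itself as the predictor.
# Invited non-respondents get zero; the uninvited get the respondent average.
respondent_mean = alumni.loc[alumni["responded"], "engagement_score"].mean()
alumni["score_predictor"] = np.where(
    alumni["responded"], alumni["engagement_score"],
    np.where(alumni["invited"], 0.0, respondent_mean),
)
```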
I suggest you pick the third option. In today’s post I’ll show you how giving-related questions outperformed all other types of questions in the two survey response sets I’ve worked with, and why they are superior to an overall engagement score.
Question by question
In the first engagement survey I dealt with, the benchmarking study, every question carried equal weight in the final calculation of an engagement score. The core engagement questions were Likert scale statements, i.e. an assertion such as “I am proud to tell people I graduated from University XYZ,” followed by a series of choices between “strongly disagree” and “strongly agree.” Statements of this type are typically worded so that the lower values can be interpreted as negative (in terms of engagement), and higher values positive. So a respondent’s overall score is simply the average of all of his or her scale answers.
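As a rough illustration of that calculation, here is a pandas sketch. The file name, the q-prefixed column naming, and the exact answer wordings are all assumptions; the point is simply mapping the agreement scale to numbers and averaging across each row.

```python
import pandas as pd

# Hypothetical respondent-level extract: one column per scale statement,
# holding the text of the answer chosen.
responses = pd.read_csv("survey_responses.csv")
scale_cols = [c for c in responses.columns if c.startswith("q")]  # assumed naming

# Map the five agreement levels onto 1..5, so low = negative, high = positive.
likert = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}
numeric = responses[scale_cols].apply(lambda col: col.str.lower().map(likert))

# A respondent's overall score is simply the average of their scale answers.
responses["engagement_score"] = numeric.mean(axis=1)  # NaNs (skipped items) ignored
```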
But we know that questions don’t carry equal power to predict giving. The score itself will be highly correlated with giving, but there is much to be learned by testing its constituent parts.
First I’ll show you what I learned from a question-by-question analysis of one of these surveys, and then generalize to broader question topics.
The scale statements you’ll find in an engagement survey cover the whole range of an alum’s interaction with their alma mater: their academic and extracurricular experience as a student, their experience as an alumnus or alumna, their attitudes toward volunteering, attending events, and being involved generally, their perceptions of the school’s reputation, and, yes, their feelings about giving back.
Out of 96 scale statements in the first survey, I found that responses to 16 statements were more strongly correlated with Giving than the overall score itself was. The remaining 80 statements had correlations weaker than the overall score correlation.
What did those 16 questions have in common? Every single one of them related in some way to giving to alma mater. Clearly, if you really want to get at the predictive meat of a survey like this, focus on the giving-related questions.
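A sketch of that question-by-question screening, building on the previous snippet. The `lifetime_giving` column and the +1 inside the log (to keep zero-dollar alumni defined) are my assumptions:

```python
import numpy as np

# The natural log tames the skew in Lifetime Giving.
responses["ln_giving"] = np.log(responses["lifetime_giving"] + 1)

# Pearson r for every statement, and for the overall score, against log giving.
per_question_r = numeric.corrwith(responses["ln_giving"])
score_r = responses["engagement_score"].corr(responses["ln_giving"])

print(f"overall score: r = {score_r:.3f}")
print(per_question_r[per_question_r > score_r].sort_values(ascending=False))
```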
A question-by-question analysis probably isn’t necessary if the survey is well-designed. Questions that tend to be highly correlated with each other should be grouped together in themes, as they were with this survey. I was able to average responses across related questions and check those averages against giving (a sketch of this theme-level check follows the list below):
- Pearson’s r for the strength of correlation between overall engagement score and Lifetime Giving (or, rather, the natural log of LT Giving) for this data set was 0.231.
- Question categories with correlations below that level included student experience (both academic and extracurricular), awareness of and pride in the school’s reputation, and awareness of opportunities to get involved as an alumni volunteer, or stated likelihood of getting involved.
- Categories of questions with correlations above that level, in some cases significantly higher than the general score correlation, included: awareness of the school’s needs and priorities (0.244), awareness of the impact that support will have on the school (0.346), likelihood that an alum will donate (0.408), and finally, the degree to which an alum actually does support (or claims to support) the school (0.502).
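Continuing the same sketch, with an entirely invented grouping of statement columns into themes:

```python
# Hypothetical grouping of statement columns into the survey's themes.
themes = {
    "student_experience": ["q01", "q02", "q03"],
    "reputation":         ["q04", "q05", "q06"],
    "volunteering":       ["q07", "q08"],
    "giving":             ["q09", "q10", "q11"],
}

# Average each respondent's answers within a theme, then check against giving.
for theme, cols in themes.items():
    r = numeric[cols].mean(axis=1).corr(responses["ln_giving"])
    print(f"{theme:20s} r = {r:.3f}")
```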
Independent, or just independent enough?
It may not be surprising that attitudes regarding the act of giving, or one’s intention to give, are highly predictive. In fact, for a while I was concerned that some of these questions were too closely related to Giving to use as predictors. I mean, if someone agrees with the statement, “I support University XYZ to the best of my ability,” then they’re a donor, right? Where’s the insight there?
The truth is, respondents are quite unreliable when it comes to reporting whether and how often they support the school with donations. When I checked responses to those questions against actual giving, I discovered that quite a few people who claimed to be regular donors in fact were not (about one-third of respondents), and vice versa: quite a few regular donors claimed not to be what they were (again, about one-third). That seemed puzzling at first. In hindsight, though, I wonder if there was some signaling of intention going on: were some non-donors aspiring to donor-hood, and some donors signaling that they intended to lapse?
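Here is one quick way to run that reliability check, again with hypothetical column names. A cross-tab of claimed versus actual donor status makes the mismatch visible at a glance; the idea that statement q09 is the “I support University XYZ…” item, and that agreement (4 or 5) stands in for a claimed-donor flag, is my assumption.

```python
import pandas as pd

# Claimed status from the survey vs. actual status from the gift records.
claims_donor = numeric["q09"] >= 4                 # hypothetical statement column
is_donor = responses["lifetime_giving"] > 0

# Row-normalized cross-tab: what share of claimed donors actually gave?
print(pd.crosstab(claims_donor, is_donor, normalize="index"))
```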
The answer is probably not that simple. The bottom line, though, is that you should just go ahead and use these as predictors. Sure, they are closely related to Giving — but they are definitely not Giving itself!
Reinforcement of the theme
I haven’t said much about the second engagement survey, the non-benchmarking one. This time there was no overall engagement score, but helpfully, the responses were again gathered into themes. Even better, each theme was given an overall “index” score: the average of an individual’s responses to the questions within that theme.
Comparing Pearson’s r values for the strength of correlation between each theme index and Lifetime Giving told the same story: the Donor theme average, which is all about feelings toward giving to the university, was far more predictive of giving than any other part of this survey.
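Under the same assumptions as before (invented column names, log of giving plus one), ranking those pre-computed theme indexes takes only a few lines:

```python
import numpy as np
import pandas as pd

# Second survey: theme indexes arrive pre-computed, one column each.
survey2 = pd.read_csv("survey2.csv")
index_cols = ["student_index", "reputation_index", "volunteer_index", "donor_index"]

ln_giving = np.log(survey2["lifetime_giving"] + 1)
print(survey2[index_cols].corrwith(ln_giving).sort_values(ascending=False))
```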
Conclusion
You should keep tabs on what the alumni office and other departments are getting up to in the way of really big surveys (smaller, special-purpose surveys will not be of great value to you). First, you’ll want to ensure they don’t make the mistake of wasting their time on anonymous surveys: responses you can’t link back to individual alumni records are of little use for predictive modelling. Second, you might want to suggest a question or two that can inject some juice into your predictive model.
The foregoing discussion makes it pretty clear that any question having to do with making a gift is going to be predictive. If the survey designers are shy about asking direct questions about giving intentions, that’s okay: a question that nibbles at the margins of the subject, such as awareness of the school’s current needs and priorities, will also work. As well, suggest placing giving-related questions near the end of the survey, so that anyone put off by them will already have answered everything else rather than abandoning part-way through.
If the survey is already done (maybe done years ago), your colleagues in the alumni office are probably sitting on a very intimidating mountain of data. I don’t envy them: interpreting that data in order to set priorities for programming can’t be easy. Fortunately for us fundraising types, mining the data intelligently is not difficult, once we know what to look for.