CoolData blog

11 May 2015

A new way to look at alumni web survey data

Filed under: Alumni, Surveying, Vendors — kevinmacdonell @ 7:38 pm

Guest post by Peter B. Wylie, with John Sammis


Click to download the PDF file of this discussion paper: A New Way to Look at Survey Data


Web-based surveys of alumni are useful for all sorts of reasons. If you go to the extra trouble of doing some analysis — or push your survey vendor to supply it — you can uncover insights that add huge value to your investment in surveying.


This discussion paper by Peter B. Wylie and John Sammis demonstrates a few of the insights that emerge by matching up survey data with some of the plentiful data you have on alums who respond to your survey, as well as those who don’t.


Neither alumni survey vendors nor their higher education clients are doing much work in this area. But as Peter writes, “None of us in advancement can do too much of this kind of analysis.”


Download: A New Way to Look at Survey Data


18 November 2010

Survey says … beware, beware!

Filed under: Alumni, skeptics, Surveying — kevinmacdonell @ 4:45 pm

I love survey data. But sometimes we get confused about what it’s really telling us. I don’t claim to be an expert on surveying, but today I want to talk about one of the main ways I think we’re led astray. In brief: Surveys would seem to give us facts, or “the truth”. They don’t. Surveys reveal attitudes.

In higher education, surveying is of prime importance for benchmarking constituent engagement: it identifies programmatic areas that are underperforming, as well as areas that are doing well and where making changes therefore entails risk. Making intelligent, data-driven decisions in these areas can strengthen programming, enhance engagement, and ultimately increase giving. And there’s no doubt that the act of responding to a survey, the engagement score that might result, and the responses to individual questions or groups of questions are all predictive of giving. I have found this myself in my own predictive modeling at two universities.

But let’s not get carried away. Survey data can be a valuable source of predictor variables, but it’s a huge leap from making that admission to saying that survey data trumps everything.

I know of at least one vendor working in the survey world who does make that leap. This vendor believes surveying is THE singular best way to predict giving, and that survey responses have it all over the regular practice of predictive modeling using variables mined from a database. Such “archival” data provides “mere correlates” of engagement. Survey data provides the real goods.

I see the allure. Why would we put any stock in some weak correlation between the presence of an email address and giving, when we can just ask them how they feel about giving to XYZ University?

Well.

I have incorporated survey data in my own models, data that came from two wide-ranging, professionally designed, Likert-type surveys of alumni engagement. Survey data is great because it’s fresh, independent of giving, and revealing of attitudes. It is also extremely biased in favour of highly engaged alumni, and it is completely disconnected from reality when it comes to gathering facts as opposed to attitudinal data.

Let me demonstrate the unreliability of survey data with regard to facts. Here are a few examples of statements and responses (one non-Likert), gathered from surveys of two institutions; a sketch of how you might run this kind of cross-check yourself follows the examples:

  • “I try to donate every year” — 946 individuals answered “agree” or “strongly agree” — but 12.3% of those 946 had no lifetime giving.
  • “I support XYZ University regularly” — 1,001 individuals answered “agree” or “strongly agree” — but 18.7% of them had no lifetime giving.
  • “Have you ever made a charitable gift to XYZ University (Y/N)?” — 1,690 individuals said “Yes” — but 8.1% of them had no lifetime giving.
  • “I support XYZ University to the best of my capacity” — 1,498 individuals answered “agree” or “strongly agree” — but 39.6% of them had no lifetime giving!

And, even stranger:

  • “I try to donate every year” — 1,371 answered “disagree” or “strongly disagree” — but 27.7% of those respondents were in fact donors!
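If you want to run this kind of cross-check yourself, here is a minimal pandas sketch, assuming survey responses have already been matched to database giving records; the file and column names are hypothetical:

```python
import pandas as pd

# Survey responses matched to database giving records (hypothetical file).
df = pd.read_csv("survey_with_giving.csv")

# Respondents who claim to be regular donors...
claims_to_give = df["q_try_donate_every_year"].isin(["agree", "strongly agree"])

# ...versus what the database actually says.
is_donor = df["lifetime_giving"] > 0

# Row percentages: what share of each claim group has no lifetime giving?
print(pd.crosstab(claims_to_give, is_donor, normalize="index"))
```

The row for claimed donors shows the fraction of them with no lifetime giving at all — the disconnect described above.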

Frankly, if I asked survey-takers how many children they have, I wouldn’t trust the answers.

This disconnect from reality actually works in my favour when I am creating predictive models, because I have some assurance that the responses to these questions are not just a proxy for ‘giving’, but rather something far more complicated that has to do with attitudes, not facts. But in no model I’ve created has survey data (even carefully selected survey data strongly correlated with giving) EVER been more predictive than the types of data most commonly used in predictive models — notably age/class year, the presence or absence of certain contact information, marital status, employment information, and so on.

For the purposes of identifying weaknesses or strengths in constituent engagement, survey data is king. For predicting giving in its various forms, survey data and engagement scores are just more variables to test and work into the model — nothing more, nothing less — and certainly not something magical or superior to the data that institutions already have in their databases waiting to be mined. I respect the work that people are doing to investigate causation in connection with giving. But when they criticize the work of data miners as “merely” dealing in correlation, that is where I have a problem.

24 June 2010

Big alumni survey? Save time and mine the data that matters

Filed under: Alumni, Predictor variables, Surveying — kevinmacdonell @ 6:00 am


We know that participation in surveys is correlated with giving, and that responses to some questions are more strongly correlated with giving than others. Today I delve into which question topics are the most predictive.

If you’ve seen earlier posts, you’ll know that I’ve been working with two survey response data sets related to alumni engagement surveys — one for a relatively small, primarily undergraduate university, and one for a larger university that grants graduate degrees in a number of faculties as well as undergraduate degrees. These were two very different exercises: one was aimed at benchmarking engagement against peer universities; the other was simply a snapshot of alumni engagement without a benchmarking component. But both were wide-ranging surveys, aimed at all living alumni (or a very large representative sample), and each contained almost a hundred questions.

Evaluating the predictive power of every single question is a tedious task — work that isn’t necessarily going to be rewarded at the end of the day. If your response set is small in comparison with your total number of alumni, most variables you create are going to have their significance lost in the wash. More testing is better than less, but if you’re pressed for time, or you doubt the value of investing the time, you have three other options.

  • First, you can create a variable for simple participation in the survey: a 1 for alumni who participated, a 0 for alumni who were invited but did not participate, and a neutral middle value (0.5) for alumni who were never invited (but might have participated had they been).
  • Second, if it’s a true engagement survey that results in some kind of calculated score for each individual, you can use that score as the predictor variable. Again, alumni who were invited but did not participate can be coded zero; alumni who were not invited can be assigned the average score of those who did respond. (A rough coding sketch of these first two options follows this list.)
  • Or third, you can zero in on the category of questions which beats all others for predictive value: ANYTHING to do with the subject of giving a gift to your school.
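Here is a minimal pandas sketch of how the first two options might be coded; the flag and score column names (“invited”, “responded”, “engagement_score”) are assumptions for illustration:

```python
import pandas as pd

alumni = pd.read_csv("alumni_survey.csv")  # hypothetical matched file

# Option 1: simple participation.
# 1 = responded, 0 = invited but did not respond, 0.5 = never invited.
alumni["participation"] = 0.5
alumni.loc[(alumni["invited"] == 1) & (alumni["responded"] == 0), "participation"] = 0.0
alumni.loc[alumni["responded"] == 1, "participation"] = 1.0

# Option 2: the engagement score itself as the predictor.
# Invited non-respondents get zero; the never-invited get the respondent mean.
mean_score = alumni.loc[alumni["responded"] == 1, "engagement_score"].mean()
alumni["score_predictor"] = alumni["engagement_score"]
alumni.loc[(alumni["invited"] == 1) & (alumni["responded"] == 0), "score_predictor"] = 0.0
alumni.loc[alumni["invited"] == 0, "score_predictor"] = mean_score
```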

I suggest you pick the third option. In today’s post I’ll show you how giving-related questions outperformed all other types of questions in the two survey response sets I’ve worked with, and why they are superior to an overall engagement score.

Question by question

In the first engagement survey I dealt with, the benchmarking study, every question carried equal weight in the final calculation of an engagement score. The core engagement questions were Likert scale statements, i.e. an assertion such as “I am proud to tell people I graduated from University XYZ,” followed by a series of choices between “strongly disagree” and “strongly agree.” Statements of this type are typically worded so that the lower values can be interpreted as negative (in terms of engagement), and higher values positive. So a respondent’s overall score is simply the average of all of his or her scale answers.
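In code terms the scoring is nothing more exotic than a row-wise mean; the five-point mapping and the “q_” column-name convention here are assumptions for illustration:

```python
import pandas as pd

# Map Likert labels to a 1-5 scale (assumed wording).
scale = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

responses = pd.read_csv("survey_responses.csv")  # hypothetical file
likert_cols = [c for c in responses.columns if c.startswith("q_")]

# A respondent's engagement score is simply the mean of their scale answers;
# unanswered questions (NaN) are skipped by mean() automatically.
responses["engagement_score"] = responses[likert_cols].replace(scale).mean(axis=1)
```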

But we know that questions don’t carry equal power to predict giving. The score itself will be highly correlated with giving, but there is much to be learned by testing its constituent parts.

First I’ll show you what I learned from a question-by-question analysis of one of these surveys, and then generalize to broader question topics.

The scale statements you’ll find in an engagement survey cover the whole range of an alum’s interaction with their alma mater: their academic and extracurricular experience as a student, their experience as an alumnus or alumna, their attitudes toward volunteering, attending events, and being involved generally, their perceptions of the school’s reputation — and, yes, their feelings about giving back.

Out of 96 scale statements in the first survey, I found that responses to 16 statements had stronger correlations with Giving than the overall score itself did. The remaining 80 statements had correlations weaker than the overall score’s.

What did those 16 questions have in common? Every single one of them related in some way to giving to alma mater. Clearly, if you really want to get at the predictive meat of a survey like this, focus on the giving-related questions.
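A minimal sketch of that question-by-question pass, assuming responses already matched to giving records and numeric (1-5) question columns with hypothetical names, might look like this:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("survey_with_giving.csv")       # hypothetical matched file
log_giving = np.log(df["lifetime_giving"] + 1)   # log transform; +1 keeps non-donors defined

question_cols = [c for c in df.columns if c.startswith("q_")]
score_r = df["engagement_score"].corr(log_giving)  # Pearson's r by default

# Which individual questions beat the overall score?
question_r = {q: df[q].corr(log_giving) for q in question_cols}
stronger = {q: r for q, r in question_r.items() if r > score_r}
print(f"{len(stronger)} of {len(question_cols)} questions outperform the overall score")
```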

A question-by-question analysis probably isn’t necessary if the survey is well designed. Questions that tend to be highly correlated with each other should be grouped together in themes, as they were with this survey. I was able to average responses across related questions and check those averages against giving (a sketch of this check follows the list below):

  • Pearson’s r for the strength of correlation between overall engagement score and Lifetime Giving (or, rather, the natural log of LT Giving) for this data set was 0.231.
  • Question categories that had correlations below that level included student experience (both academic and extracurricular), awareness of and pride in the school’s reputation, and awareness of opportunities to become involved as an alumni volunteer, or likelihood to get involved.
  • Categories of questions with correlations above that level, in some cases significantly higher than the general score correlation, included: awareness of the school’s needs and priorities (0.244), awareness of the impact that support will have on the school (0.346), likelihood that an alum will donate (0.408), and finally, the degree to which an alum actually does support (or claims to support) the school (0.502).
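As a rough illustration of that category-level check, here is how one might average a theme’s questions and correlate the average with the log of Lifetime Giving; the grouping of questions into themes below is hypothetical:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("survey_with_giving.csv")      # hypothetical matched file
log_giving = np.log(df["lifetime_giving"] + 1)  # natural log of LT Giving

# Hypothetical grouping of related questions into themes.
themes = {
    "student_experience": ["q_academic_exp", "q_extracurricular_exp"],
    "awareness_of_needs": ["q_know_priorities", "q_know_impact"],
    "support_of_school": ["q_support_best_capacity", "q_donate_every_year"],
}

# Average each theme's questions, then correlate with giving.
for name, cols in themes.items():
    theme_avg = df[cols].mean(axis=1)
    print(f"{name}: r = {theme_avg.corr(log_giving):.3f}")
```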

Independent, or just independent enough?

It may not be surprising that attitudes regarding the act of giving, or one’s intention to give, are highly predictive. In fact, for a while I was concerned that some of these questions were too closely related to Giving to use as predictors. I mean, if someone agrees with the statement, “I support University XYZ to the best of my ability,” then they’re a donor, right? Where’s the insight there?

The truth is, respondents are quite unreliable when it comes to reporting on whether and how often they support the school with donations. When I checked responses to those questions against actual giving, I discovered that quite a few people who claimed to be regular donors in fact were not (about one-third of respondents). And vice versa: quite a few regular donors claimed not to be donors at all (again, about one-third). That seemed really puzzling. But in hindsight, I wonder if there was some signaling of intention going on: some non-donors aspiring to donor-hood, and some donors signaling that they intended to lapse.

The answer is probably not that simple. The bottom line, though, is that you should just go ahead and use these as predictors. Sure, they are closely related to Giving — but they are definitely not Giving itself!

Reinforcement of the theme

I haven’t said much about the second engagement survey, the non-benchmarking one.  This time there was no overall engagement score, but helpfully, the responses were again gathered into themes. Even better, each theme was given an overall score “index,” an average of an individual’s responses to questions within that theme.

The following table shows Pearson’s r values for the strength of correlation between each theme and Lifetime Giving.

Clearly, the Donor theme average, which is all about feelings towards giving to the university, is going to be far more predictive of giving than any other part of this survey.

Conclusion

You should keep tabs on what the alumni office and other departments are getting up to in the way of really big surveys (smaller, special-purpose surveys will not be of great value to you). First of all, you’ll want to ensure they don’t make the mistake of wasting their time on anonymous surveys. Second, you might want to suggest a question or two that can inject some juice into your predictive model.

The foregoing discussion makes it pretty clear that any question having to do with making a gift is going to be predictive. If the survey designers are shy about asking direct questions about giving intentions, that’s okay. A question that nibbles at the margins of the subject will also work, such as awareness of the school’s current needs and priorities. As well, put giving-related questions at the end of the survey, so that anyone put off by them will already have answered everything else before abandoning.

If the survey is already done (maybe years-ago done), your colleagues in the alumni office are probably sitting on a very intimidating mountain of data. I don’t envy them. Interpreting this data in order to set priorities for programming can’t be easy. Fortunately for us fundraising types, mining the data intelligently is not difficult — once we know what to look for.
