CoolData blog

31 January 2017

Are we missing too many alumni with web surveys? (Part 2)

Filed under: Alumni, John Sammis, Peter Wylie, Surveying — kevinmacdonell @ 6:22 am

Guest post by Peter B. Wylie, with John Sammis

 

Download a printable PDF version of this paper: Are We Missing Too Many Alumni P2.

 

It seems everyone we know, no matter how young or old, has an email address or uses Facebook. So we might assume that nowadays online surveys will reliably deliver a representative sampling of a school’s alumni population.

 
 

In this guest post, Peter Wylie and John Sammis demonstrate that alumni available and willing to be polled online differ from non-online constituents in potentially significant ways. Although current practice tends towards online-only surveying, the evidence suggests this probably skews the conclusions we can draw about our constituencies, with key differences that go well beyond just age.

 
 

(This is “part 2” of an earlier piece. To download the first paper, click here: Are We Missing Too Many Alumni With Web Surveys?)

 
 

Again, the link for Part 2:  Are We Missing Too Many Alumni P2.

 
 


11 May 2015

A new way to look at alumni web survey data

Filed under: Alumni, Surveying, Vendors — kevinmacdonell @ 7:38 pm

Guest post by Peter B. Wylie, with John Sammis

 

Click to download the PDF file of this discussion paper: A New Way to Look at Survey Data

 

Web-based surveys of alumni are useful for all sorts of reasons. If you go to the extra trouble of doing some analysis — or push your survey vendor to supply it — you can derive useful insights that could add huge value to your investment in surveying.

 

This discussion paper by Peter B. Wylie and John Sammis demonstrates a few of the insights that emerge by matching up survey data with some of the plentiful data you have on alums who respond to your survey, as well as those who don’t.

 

Neither alumni survey vendors nor their higher education clients are doing much work in this area. But as Peter writes, “None of us in advancement can do too much of this kind of analysis.”

 

Download: A New Way to Look at Survey Data

 

 

28 March 2012

Are we missing too many alumni with web surveys?

Filed under: Alumni, John Sammis, Peter Wylie, Surveying, Vendors — kevinmacdonell @ 8:04 am

Guest post by Peter B. Wylie and John Sammis

(Download a printer-friendly PDF version here: Web Surveys Wylie-Sammis)

With the advent of the internet and its exponential growth over the last decade and a half, web surveys have gained a strong foothold in society in general, and in higher education advancement in particular. We're not experts on surveys, and certainly not on web surveys. But let's assume you (or the vendor you use to do the survey) e-mail either a random sample of your alumni, or your entire universe of alumni, and invite them to go to a website and fill out a survey. If you do this, you will run into the problem of a poor response rate. If you're lucky, maybe 30% of the people you e-mailed will respond, even if you vigorously follow up with non-responders, encouraging them to please fill the thing out.

This is a problem. There will always be the lingering question of whether or not the non-responders are fundamentally different from the responders with respect to what you’re surveying them about. For example, will responders:

  • Give you a far more positive view of their alma mater than the non-responders would have?
  • Tell you they really like new programs the school is offering, programs the non-responders may really dislike, or like a lot less than the responders?
  • Offer suggestions for changes in how alumni should be approached — changes that non-responders would not offer or actively discourage?

To test whether these kinds of questions are worth answering, you (or your vendor) could do some checking to see if your responders:

  • Are older or younger than your non-responders. (Looking at year of graduation for both groups would be a good way to do this.)
  • Have a higher or lower median lifetime giving than your non-responders.
  • Attend more or fewer events after they graduate than your non-responders.
  • Are more or less likely than your non-responders to be members of a dues paying alumni association.

It is our impression that most schools that conduct alumni web surveys don’t do this sort of checking. In their reports they may discuss what their response rates are, but few offer an analysis of how the responders are different from the non-responders.

Again, we’re talking about impressions here, not carefully researched facts. But that’s not our concern in this paper. Our concern here is that web surveys (done in schools where potential responders are contacted only by e-mail) are highly unlikely to be representative of the entire universe of alums — even if the response rate for these surveys is always one hundred percent. Why? Because our evidence shows that alumni who have an e-mail address listed with their schools are markedly different (in terms of two important variables) from alumni who do not have an e-mail address listed: Age and giving.

To make our case, we’ll offer some data from four higher education institutions spread out across North America; two are private, and two are public. Let’s start with the distribution of e-mail addresses listed in each school by class year decile. You can see these data in Tables 1-4 and Figures 1-4. We’ll go through Table 1 and Figure 1 (School A) in some detail to make sure we’re being clear.

Take a look at Table 1. You'll see that the alumni in School A have been divided into ten roughly equal-sized groups, where Decile 1 represents the oldest group and Decile 10 represents the youngest. The table shows a very large age range. The youngest alums in Decile 1 graduated in 1958. (Most of you reading this paper had not yet been born in that year.) The alums in Decile 10 (unless some of them went back to school late in life) are all twenty-somethings.

Table 1: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School A

 

Now look at Figure 1. It shows the percentage of alums by class year decile who have an e-mail address listed in the school’s database. Later on in the paper we’ll discuss what we think are some of the implications of a chart like this. Here we just want to be sure you understand what the chart is conveying. For example, 43.0% of alums who graduated between 1926 and 1958 (Decile 1) have an e-mail listed in the school’s database. How about Decile 9, alums who graduated between 2001 and 2005? If you came up with 86.5%, we’ve been clear.
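
For readers who want to try this kind of breakdown on their own data, here is a minimal sketch in Python (pandas). It is not the authors' code; the file name and the column names ("class_year", and "email", blank when no address is on file) are assumptions.

    import pandas as pd

    alumni = pd.read_csv("alumni.csv")  # hypothetical extract from the database

    # Ten roughly equal-sized groups by class year; Decile 1 = oldest, Decile 10 = youngest.
    # If many alumni share a class year, pd.qcut may need duplicates="drop".
    alumni["cy_decile"] = pd.qcut(alumni["class_year"], 10, labels=list(range(1, 11)))
    alumni["has_email"] = alumni["email"].notna()

    # Table 1 analogue: count, median, minimum and maximum class year per decile.
    print(alumni.groupby("cy_decile")["class_year"].agg(["count", "median", "min", "max"]))

    # Figure 1 analogue: percentage of alums with an e-mail address listed, by decile.
    print(alumni.groupby("cy_decile")["has_email"].mean().mul(100).round(1))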

Go ahead and browse through Tables 2-4 and Figures 2-4. After you’ve done that, we’ll tell you what we think is one of the implications of what you’ve seen so far.

Table 2: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School B

 

Table 3: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School C

 

 

 

Table 4: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School D

 

 

The most significant implication we can draw from what we’ve shown you so far is this: If any of these four schools were to conduct a web survey by only contacting alums with an e-mail address, they would simply not reach large numbers of alums whose opinions they are probably interested in gathering. Some specifics:

  • School A: They would miss huge numbers of older alums who graduated in 1974 and earlier. By rough count over 40% of these folks would not be reached. That’s a lot of senior folks who are still alive and kicking and probably have pronounced views about a number of issues contained in the survey.
  • School B: A look at Figure 2 tells us that even considering doing a web survey for School B is probably not a great idea. Fewer than 20% of their alums who graduated in 1998 or earlier have an e-mail address listed in their database.

Another way of expressing this implication is that each school (regardless of its response rate) would largely be tapping the opinions of younger alums, not older or even middle-aged alums. If that's what a school really wants to do, okay. But we strongly suspect that's not what it wants to do.

Now let’s look at something else that concerns us about doing web surveys if potential respondents are only contacted by e-mail: Giving. Figures 5-8 show the percentage of alums who have given $100 or more lifetime by e-mail address/no-email address across class year deciles.

As we did with Figure 1, let’s go over Figure 5 to make sure it’s clear. For example, in decile 1 (oldest alums) 87% of alumni with an e-mail address have given $100 or more lifetime to the school. Alums in the same decile who do not have an e-mail address? 71% of these alums have given $100 lifetime or more to the school.  How about decile 10, the youngest group? What are the corresponding percentages of giving for those alums with and without an e-mail address? If you came up with 14% versus 6%, we’ve been clear.
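
As a further sketch only, the Figure 5 style comparison extends the code shown earlier with one more assumed column, "lifetime_giving", and a single grouped percentage:

    # Percentage of alums who have given $100 or more lifetime, split by whether an
    # e-mail address is listed, within each class year decile (Figure 5 analogue).
    alumni["gave_100_plus"] = alumni["lifetime_giving"] >= 100

    pct_givers = (
        alumni.groupby(["cy_decile", "has_email"])["gave_100_plus"]
              .mean()
              .mul(100)
              .round(1)
              .unstack("has_email")  # rows = deciles, columns = e-mail vs. no e-mail
    )
    print(pct_givers)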

Take a look at Figures 6-8, for schools B, C and D. Then we’ll tell you the second implication we see in all these data.

The overall impression we get from these four figures is clear: Alumni who do not have an e-mail address listed give considerably less money to their schools than do alumni with an e-mail address listed. This difference can be particularly pronounced among older alums.

Some Conclusions

The title of this piece is: "Are We Missing Too Many Alumni with Web Surveys?" Based on the data we've looked at, we think the answer to this question has to be a "yes." It can't be a good thing that many web surveys never reach so many older alums who have no e-mail address on file, nor the many alums without an e-mail address who haven't given as much, on average, as those with one.

On the other hand, we want to stress that web surveys can provide a huge amount of valuable information from the alums who are reached and do respond. Even if the coverage of the whole alumni universe is incomplete, the thousands of alums who take the time to fill out these surveys can’t be ignored.

Here’s an example. We got to reading through the hundreds and hundreds of written comments from a recent alumni survey. We haven’t included any of the comments here, but my (Peter’s) reaction to the comments was visceral. Wading through all the typos, and misspellings, and fractured syntax, I found myself cheering these folks on:

  •  “Good for you.”
  • “Damn right.”
  • “Couldn’t have said it better myself.”
  • “I wish the advancement and alumni people at my college could read these.”

In total, these comments added up to almost 50,000 words of text, the length of a short novel. And they were a lot more interesting than the words in too many of the novels I read.

As always, we welcome your comments.

18 November 2010

Survey says … beware, beware!

Filed under: Alumni, skeptics, Surveying — kevinmacdonell @ 4:45 pm

I love survey data. But sometimes we get confused about what it’s really telling us. I don’t claim to be an expert on surveying, but today I want to talk about one of the main ways I think we’re led astray. In brief: Surveys would seem to give us facts, or “the truth”. They don’t. Surveys reveal attitudes.

In higher education, surveying is of prime importance for benchmarking constituent engagement in order to identify programmatic areas that are underperforming, as well as areas that are doing well and for which change therefore entails risk. Making intelligent, data-driven decisions in these areas can strengthen programming, enhance engagement, and ultimately increase giving. And there's no doubt that the act of responding to a survey, the engagement score that might result, and the responses to individual questions or groups of questions are all predictive of giving. I have found this myself in my own predictive modeling at two universities.

But let’s not get carried away. Survey data can be a valuable source of predictor variables, but it’s a huge leap from making that admission to saying that survey data trumps everything.

I know of at least one vendor working in the survey world who does make that leap. This vendor believes surveying is THE single best way to predict giving, and that survey responses have it all over the regular practice of predictive modeling using variables mined from a database. Such "archival" data provides "mere correlates" of engagement; survey data provides the real goods.

I see the allure. Why would we put any stock in some weak correlation between the presence of an email address and giving, when we can just ask alumni how they feel about giving to XYZ University?

Well.

I have incorporated survey data in my own models, data that came from two wide-ranging, professionally-designed, Likert-type surveys of alumni engagement. Survey data is great because it’s fresh, independent of giving, and revealing of attitudes. It is also extremely biased in favour of highly-engaged alumni, and is completely disconnected from reality when it comes to gathering facts as opposed to attitudinal data.

Let me demonstrate the unreliability of survey data with regard to facts. Here are a few examples of statements and responses (one non-Likert), gathered from surveys of two institutions:

  • “I try to donate every year” — 946 individuals answered “agree” or “strongly agree” — but 12.3% of those 946 had no lifetime giving.
  • “I support XYZ University regularly” — 1,001 individuals answered “agree” or “strongly agree” — but 18.7% of them had no lifetime giving.
  • “Have you ever made a charitable gift to XYZ University (Y/N)?” — 1,690 individuals said “Yes” — but 8.1% of them had no lifetime giving.
  • “I support XYZ University to the best of my capacity” — 1,498 individuals answered “agree” or “strongly agree” — but 39.6% of them had no lifetime giving!

And, even stranger:

  • “I try to donate every year” — 1,371 answered “disagree” or “strongly disagree” — but 27.7% of those respondents were in fact donors!

Frankly, if I asked survey-takers how many children they have, I wouldn’t trust the answers.
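
Anyone who wants to run this sort of reality check can do it with a simple merge of the survey file against the giving file. A rough sketch in Python follows; every column and file name ("alum_id", "q_donate_every_year", "lifetime_giving") is a hypothetical stand-in, not a reference to the actual surveys discussed here.

    import pandas as pd

    survey = pd.read_csv("survey_responses.csv")   # hypothetical survey extract
    giving = pd.read_csv("lifetime_giving.csv")    # hypothetical giving extract
    merged = survey.merge(giving, on="alum_id", how="left")
    merged["lifetime_giving"] = merged["lifetime_giving"].fillna(0)

    # "I try to donate every year": who agrees, yet has no lifetime giving?
    agrees = merged["q_donate_every_year"].isin(["Agree", "Strongly agree"])
    pct_agree_nongivers = (merged.loc[agrees, "lifetime_giving"] == 0).mean() * 100

    # And the reverse: who disagrees, yet is in fact a donor?
    disagrees = merged["q_donate_every_year"].isin(["Disagree", "Strongly disagree"])
    pct_disagree_donors = (merged.loc[disagrees, "lifetime_giving"] > 0).mean() * 100

    print(f"{agrees.sum()} agreed; {pct_agree_nongivers:.1f}% of them have no lifetime giving")
    print(f"{disagrees.sum()} disagreed; {pct_disagree_donors:.1f}% of them are in fact donors")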

This disconnect from reality actually works in my favour when I am creating predictive models, because I have some assurance that the responses to these questions are not just a proxy for 'giving', but rather a far more complicated thing that has to do with attitudes, not facts. But in no model I've created has survey data (even carefully-selected survey data strongly correlated with giving) EVER been more predictive than the types of data most commonly used in predictive models — notably age/class year, the presence/absence of certain contact information, marital status, employment information, and so on.

For the purposes of identifying weaknesses or strengths in constituent engagement, survey data is king. For predicting giving in its various forms, survey data and engagement scores are just more variables to test and work into the model — nothing more, nothing less — and certainly not something magical or superior to the data that institutions already have in their databases waiting to be mined. I respect the work that people are doing to investigate causation in connection with giving. But when they criticize the work of data miners as “merely” dealing in correlation, well that I have a problem with.

24 June 2010

Big alumni survey? Save time and mine the data that matters

Filed under: Alumni, Predictor variables, Surveying — kevinmacdonell @ 6:00 am


We know that participation in surveys is correlated with giving, and that responses to some questions are more strongly correlated with giving than others. Today I delve into which question topics are the most predictive.

If you’ve seen earlier posts, you’ll know that I’ve been working with two survey response data sets related to alumni engagement surveys — one for a relatively small, primarily undergraduate university, and one for a larger university that grants graduate degrees in a number of faculties as well as undergraduate degrees. These were two very different exercises: One was aimed at benchmarking engagement against other peer universities, the other was simply a snapshot of alumni engagement without a benchmarking component. But both were wide-ranging surveys, aimed at all living alumni (or a very large representative sample of alumni), and each contained almost a hundred questions.

Evaluating the predictive power of every single question is a tedious task — work that isn’t necessarily going to be rewarded at the end of the day. If your response set is small in comparison with your total number of alumni, most variables you create are going to have their significance lost in the wash. More testing is better than less, but if you’re pressed for time, or you doubt the value of investing the time, you have three other options.

  • First, you can create a variable for simple participation in the survey: a 1 for alumni who participated, a 0 for alumni who were invited but did not participate, and a neutral middle value (0.5) for alumni who were not invited to participate (but who might have participated had they been invited).
  • Second, if it’s a true engagement survey that results in some kind of calculated score for each individual, you can use that score as the predictor variable. Again, alumni who were invited but did not participate can be coded zero; alumni who were not invited can be assigned the average score of those who did respond.
  • Or third, you can zero in on the category of questions which beats all others for predictive value: ANYTHING to do with the subject of giving a gift to your school.

I suggest you pick the third option. In today’s post I’ll show you how giving-related questions outperformed all other types of questions in the two survey response sets I’ve worked with, and why they are superior to an overall engagement score.
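
Here is a minimal sketch, in Python (pandas), of what the three options above might look like in practice. None of it comes from the post itself; the file name and the column names ("invited", "responded", "engagement_score", and giving-related questions prefixed "q_giving_") are assumptions.

    import pandas as pd

    alumni = pd.read_csv("alumni_with_survey.csv")  # hypothetical merged file

    # Option 1: participation flag. 1 = responded, 0 = invited but silent,
    # 0.5 (neutral) for alumni who were never invited.
    alumni["survey_participation"] = 0.5
    alumni.loc[alumni["invited"] & ~alumni["responded"], "survey_participation"] = 0.0
    alumni.loc[alumni["responded"], "survey_participation"] = 1.0

    # Option 2: engagement score. Invited non-respondents get zero; the uninvited
    # get the respondent average so they sit in the middle.
    mean_score = alumni.loc[alumni["responded"], "engagement_score"].mean()
    alumni["engagement_var"] = alumni["engagement_score"]
    alumni.loc[alumni["invited"] & ~alumni["responded"], "engagement_var"] = 0.0
    alumni.loc[~alumni["invited"], "engagement_var"] = mean_score

    # Option 3 (the one recommended here): average only the giving-related questions.
    giving_qs = [c for c in alumni.columns if c.startswith("q_giving_")]
    alumni["giving_theme_avg"] = alumni[giving_qs].mean(axis=1)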

Question by question

In the first engagement survey I dealt with, the benchmarking study, every question carried equal weight in the final calculation of an engagement score. The core engagement questions were Likert scale statements, i.e. an assertion such as “I am proud to tell people I graduated from University XYZ,” followed by a series of choices between “strongly disagree” and “strongly agree.” Statements of this type are typically worded so that the lower values can be interpreted as negative (in terms of engagement), and higher values positive. So a respondent’s overall score is simply the average of all of his or her scale answers.

But we know that questions don’t carry equal power to predict giving. The score itself will be highly correlated with giving, but there is much to be learned by testing its constituent parts.

First I’ll show you what I learned from a question-by-question analysis of one of these surveys, and then generalize to broader question topics.

The scale statements you'll find in an engagement survey cover the whole range of an alum's interaction with their alma mater: their academic and extracurricular experience as a student, their experience as an alumnus or alumna, their attitudes toward volunteering, attending events, and being involved generally, their perceptions of the school's reputation — and, yes, their feelings about giving back.

Out of 96 scale statements in the first survey, I found that responses to 16 statements had stronger correlations with Giving than the overall score did. The remaining 80 statements had correlations weaker than the overall score's.

What did those 16 questions have in common? Every single one of them related in some way to giving to alma mater. Clearly, if you really want to get at the predictive meat of a survey like this, focus on the giving-related questions.

A question-by-question analysis probably isn’t necessary if the survey is well-designed. Questions that tend to be highly correlated with each other should be grouped together in themes, as they were with this survey. I was able to average responses across related questions and check those averages against giving:

  • Pearson’s r for the strength of correlation between overall engagement score and Lifetime Giving (or, rather, the natural log of LT Giving) for this data set was 0.231.
  • Question categories that had correlations below that level included student experience (both academic and extracurricular), awareness of and pride in the school’s reputation, and awareness of opportunities to become involved as an alumni volunteer, or likelihood to get involved.
  • Categories of questions with correlations above that level, in some cases significantly higher than the general score correlation, included: awareness of the school’s needs and priorities (0.244), awareness of the impact that support will have on the school (0.346), likelihood that an alum will donate (0.408), and finally, the degree to which an alum actually does support (or claims to support) the school (0.502).
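
A sketch of this question-by-question check, for anyone who wants to repeat it on their own response set: correlate each scale question, and the overall score, with the natural log of lifetime giving. The column names are assumptions (numeric 1-to-5 codes in columns prefixed "q_"), and the +1 inside the log is an assumed convenience so that zero-dollar alumni remain defined.

    import numpy as np
    import pandas as pd

    resp = pd.read_csv("survey_with_giving.csv")   # hypothetical respondents-only file
    resp["log_lt_giving"] = np.log(resp["lt_giving"] + 1)

    # Benchmark: correlation of the overall engagement score with log giving.
    benchmark = resp["engagement_score"].corr(resp["log_lt_giving"])

    # Pearson's r for every individual scale question against log giving.
    question_cols = [c for c in resp.columns if c.startswith("q_")]
    corrs = {q: resp[q].corr(resp["log_lt_giving"]) for q in question_cols}
    stronger = [q for q, r in corrs.items() if r > benchmark]

    print(f"Overall score r = {benchmark:.3f}")
    print(f"{len(stronger)} of {len(question_cols)} questions beat the overall score")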

Independent, or just independent enough?

It may not be surprising that attitudes regarding the act of giving, or one’s intention to give, are highly predictive. In fact, for a while I was concerned that some of these questions were too closely related to Giving to use as predictors. I mean, if someone agrees with the statement, “I support University XYZ to the best of my ability,” then they’re a donor, right? Where’s the insight there?

The truth is, respondents are quite unreliable when it comes to reporting on whether and how often they support the school with donations. When I checked responses to those questions against actual giving, I discovered that quite a few people who claimed to be regular donors in fact were not (about one-third of respondents). And vice-versa: quite a few regular donors claimed not to be what they were (again, about one-third). That seemed really puzzling. But in hindsight, I wonder if there was some signaling of intention going on here: Some non-donors were aspiring to donor-hood, and some donors were signaling that they intended to lapse?

The answer is probably not that simple. The bottom line, though, is that you should just go ahead and use these as predictors. Sure, they are closely related to Giving — but they are definitely not Giving itself!

Reinforcement of the theme

I haven’t said much about the second engagement survey, the non-benchmarking one.  This time there was no overall engagement score, but helpfully, the responses were again gathered into themes. Even better, each theme was given an overall score “index,” an average of an individual’s responses to questions within that theme.

The following table shows Pearson’s r values for the strength of correlation between each theme and Lifetime Giving.

Clearly, the Donor theme average, which is all about feelings towards giving to the university, is going to be far more predictive of giving than any other part of this survey.

Conclusion

You should keep tabs on what the alumni office and other departments are getting up to in the way of really big surveys (smaller, special-purpose surveys will not be of great value to you). First of all, you’ll want to ensure they don’t make the mistake of wasting their time on anonymous surveys. Second, you might want to suggest a question or two that can inject some juice into your predictive model.

The foregoing discussion makes it pretty clear that any question having to do with making a gift is going to be predictive. If the survey designers are shy about asking direct questions about giving intentions, that's okay. A question that nibbles at the margins of the subject will also work, such as awareness of the school's current needs and priorities. As well, suggest that giving-related questions go near the end of the survey, so they don't prompt anyone to abandon it part-way through.

If the survey is already done (maybe done years ago), your colleagues in the alumni office are probably sitting on a very intimidating mountain of data. I don't envy them. Interpreting this data in order to set priorities for programming can't be easy. Fortunately for us fundraising types, mining the data intelligently is not difficult — once we know what to look for.

1 June 2010

Asking the Income Question

Filed under: Predictor variables, Surveying — kevinmacdonell @ 7:03 am


My mind is on alumni surveys — what works, what doesn’t, and what responses are most useful for Development models. Not long ago I was handling a data set from a survey of alumni engagement and deriving predictor variables from it to use in my models. Today I’m doing the same thing with a similar survey from another university.

Similar, but not the same: One university avoided asking alumni about their level of household income; the other went for it. Who was smarter? I have to admit, I’m not really sure.

The Income Question seems like dangerous territory to me. I don’t know about Americans, but my impression is that Canadians are reluctant to divulge. The taxman gets to know, and the census-taker, but that’s it — it’s no one else’s business. People will either skip the question or, worse, abandon the survey. An additional risk for alumni surveys, which may have many goals related to alumni programming, is that it feeds cynicism about what contact from alma mater is really about.

In the data set before me now, a respectable 63.5% of survey participants chose to answer the question, a rate that is lower than for other, more innocuous questions but still compares favourably. The designers of this survey observed some common best practices in handling the question: it was left to the very last (with the easy-to-answer questions loaded toward the front), and it was worded to make clear that answering was optional.

But why was the question asked at all? Was it a hunt for prospects? Was it feeding into a research study? I don’t know yet. But my concern today is whether the responses are predictive of giving.

The act of answering the income question itself does not seem to be indicative of likelihood (positive or negative) of being a donor; the responders and non-responders are donors in equal proportions. However, alumni donors who skipped the question have average lifetime giving that is twice as high as the responder donor group; their median giving is also higher, so it’s not just a few very generous donors who skipped the question who are distorting the picture.

That seems a little strange: donors AND non-donors are equally OK with the question, but the non-responders are the bigger donors?

Then, when I look at what people actually answered for income levels, it’s a whole different story again. The survey offered the respondent six income ranges to choose from — and lifetime giving increases dramatically with each increase in income. Both average and median lifetime giving shoot upward, and so does participation: At the lowest income level, more than 75% have never given; at the highest two levels (where the number of respondents is limited — 47 and 12), only about 20% have never given.
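
Both of the checks just described boil down to a couple of group-bys. A sketch, with hypothetical column names ("income_bracket" holding one of the six ranges, blank when skipped, and "lt_giving" for lifetime giving):

    import pandas as pd

    alumni = pd.read_csv("survey_with_giving.csv")   # hypothetical extract
    answered = alumni["income_bracket"].notna()

    # Does answering the income question, by itself, say anything? Compare donor
    # rates, and average/median lifetime giving among donors, for each group.
    for label, grp in alumni.groupby(answered.map({True: "answered", False: "skipped"})):
        donors = grp[grp["lt_giving"] > 0]
        print(label, f"donor rate {len(donors) / len(grp):.1%},",
              f"avg giving among donors {donors['lt_giving'].mean():,.0f},",
              f"median {donors['lt_giving'].median():,.0f}")

    # What respondents answered: participation and giving by income bracket.
    by_bracket = alumni[answered].groupby("income_bracket")["lt_giving"].agg(
        n="count",
        pct_never_given=lambda s: (s == 0).mean() * 100,
        avg_giving="mean",
        median_giving="median",
    )
    print(by_bracket)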

So while answering the question is not indicative of anything, what you answer appears to have a connection to what you give. That makes intuitive sense, but does this information provide us with something useful and predictive? I’m not sure yet — I have yet to introduce the variable into a model — but already I know that there will be plenty of interaction (or overlap) with other variables. For example, the lower-income alumni tend to be young and female, and the higher-income alumni older and male.

In other words, it seems likely that the income question data will be partly or completely eclipsed by gender, age and other variables. My internal jury is still out. But for now, I'd advise that, should you have any input into the design of an upcoming alumni survey, you think twice before putting the income question at the top of any list of must-haves.

