CoolData blog

11 May 2015

A new way to look at alumni web survey data

Filed under: Alumni, Surveying, Vendors — kevinmacdonell @ 7:38 pm

Guest post by Peter B. Wylie, with John Sammis

 

Click to download the PDF file of this discussion paper: A New Way to Look at Survey Data

 

Web-based surveys of alumni are useful for all sorts of reasons. If you go to the extra trouble of doing some analysis — or push your survey vendor to supply it — you can derive useful insights that could add huge value to your investment in surveying.

 

This discussion paper by Peter B. Wylie and John Sammis demonstrates a few of the insights that emerge by matching up survey data with some of the plentiful data you have on alums who respond to your survey, as well as those who don’t.

 

Neither alumni survey vendors nor their higher education clients are doing much work in this area. But as Peter writes, “None of us in advancement can do too much of this kind of analysis.”

 

Download: A New Way to Look at Survey Data

 

 


22 January 2013

Sticking a pin in acquisition mail bloat

Filed under: Alumni, Annual Giving, Vendors — kevinmacdonell @ 6:45 am

I recently read a question on a listserv that prompted me to respond. A university in the US was planning to solicit about 25,000 of its current non-donor alumni. The question was: How best to filter a non-donor base of 140,000 in order to arrive at the 25,000 names of those most likely to become donors? This university had only ever solicited donors in the past, so this was new territory for them. (How those alumni became donors in the first place was not explained.)

One responder to the question suggested narrowing down the pool by recent class years, reunion class years, or something similar, also using any ratings if they were available, and then doing an Nth-record select on the remaining records to get to 25,000. Selecting every Nth record is one way to pick an approximately random sample. If you aren't able to make this selection yourself, the responder suggested, your mail house vendor should be able to.
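For clarity, here is what an Nth-record select amounts to in practice. This is a minimal sketch in Python with pandas, assuming a hypothetical CSV export of the already-filtered non-donor pool; the file name and target size are placeholders:

```python
import pandas as pd

# Hypothetical export of the already-filtered non-donor pool, one row per alum.
pool = pd.read_csv("nondonor_pool.csv")

target = 25000
# Take every Nth record to get an approximately random sample of about 25,000.
n = max(len(pool) // target, 1)
sample = pool.iloc[::n].head(target)

sample.to_csv("acquisition_sample.csv", index=False)
```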

This answer was fine, up until the “Nth selection” part. I also had reservations about putting the vendor in control of prospect selection. So here are some thoughts on the topic of acquisition mailings.

Doing a random selection assumes that all non-donor alumni are alike, or at least that we aren’t able to make distinctions. Neither assumption would be true. Although they haven’t given yet, some alumni feel closer affinity to your school than others, and you should have some of these affinity-related cues stored in your database. This suggests that a more selective approach will perform better than a random sample.

Not long ago, I isolated all our alumni who converted from never-donor to donor at any time in the past two years. (Two years instead of just one, in order to boost the numbers a bit.) Then I compared this group with the universe of all the never-donors who had failed to convert, based on a number of attributes that might indicate affinity. Some of my findings included:

  • “Converters” were more likely than “non-converters” to have an email in the database.
  • They were more likely to have answered the phone in our Phonathon (even though the answer was ‘no pledge’).
  • They were more likely to have employment information (job title or employer name) in the database.
  • They were more likely to have attended an event since graduating.

Using these and other factors, I created a score which was used to select which non-donor alumni would be included in our acquisition mailing. I’ve been monitoring the results, and although new donors do tend to be the alumni with higher scores, frankly we’ve had poor results via mail solicitation, so evaluation is difficult. This in itself is not unusual: New-donor acquisition is very much a Phonathon phenomenon for us — in our phone results, the effectiveness of the score is much more evident.
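To make that concrete, here is a minimal sketch of this kind of scoring in Python with pandas. The file and column names are hypothetical, and a real model would weight the cues (by observed conversion rates, or with a regression) rather than counting them equally, but the idea is the same:

```python
import pandas as pd

# Hypothetical extract of never-donor alumni with simple 0/1 affinity cues.
alums = pd.read_csv("never_donors.csv")

cues = ["has_email", "answered_phonathon", "has_employment", "attended_event"]

# Simplest possible score: one point per affinity cue present.
alums["score"] = alums[cues].sum(axis=1)

# Mail the 25,000 highest-scoring never-donors instead of a random 25,000.
mail_list = alums.sort_values("score", ascending=False).head(25000)
mail_list.to_csv("acquisition_mailing.csv", index=False)
```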

Poor results or not, it’s still better than random, and whenever you can improve on random, you can reduce the size of a mailing. Acquisition mailings in general are way too big, simply because they’re often just random — they have to cast a wide net. Unfortunately your mail house is unlikely to encourage you to get more focused and save money.

Universities contract with vendors for their expertise and efficiency in dealing with large mailings, including cleaning the address data and handling the logistics that many small Annual Fund offices just aren’t equipped to deal with. A good mail house is a valuable ally and source of direct-marketing expertise. But acquisition presents a conflict for vendors, who make their money on volume. Annual Fund offices should be open to advice from their vendor, but they would do well to develop their own expertise in prospect selection, and make drastic cuts to the bloat in their mailings.

Donors may need to be acquired at a loss, no question. It’s about lifetime value, after all. But if the cumulative cost of that annual appeal exceeds the lifetime value of your newly-acquired donor, then the price is too high.
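As a back-of-the-envelope illustration of that break-even test, with entirely made-up numbers:

```python
# Entirely hypothetical numbers, just to illustrate the break-even logic.
pieces_mailed = 25000
cost_per_piece = 0.75            # printing, postage, and mail house fees
new_donors = 250                 # a 1% acquisition rate
lifetime_value_per_donor = 60.0  # expected net lifetime value of a new donor

cost_per_new_donor = pieces_mailed * cost_per_piece / new_donors  # = $75
print(cost_per_new_donor > lifetime_value_per_donor)  # True: the price is too high
```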

6 June 2012

How you measure alumni engagement is up to you

Filed under: Alumni, Best practices, Vendors — kevinmacdonell @ 8:02 am

There’s been some back-and-forth on one of the listservs about the “correct” way to measure and score alumni engagement. One vendor, which claims to specialize in rigor, has been pressing for an emphasis on scientific rigor. That emphasis is misplaced.

No doubt there are sophisticated ways of measuring engagement that I know nothing about, but the question I can’t get beyond is, how do you define “engagement”? How do you make it measurable so that one method applies everywhere? I think that’s a challenging proposition, one that limits any claim to “correctness” of method. This is the main reason that I avoid writing about measuring engagement — it sounds analytical, but inevitably it rests on some messy, intuitive assumptions.

The closest I’ve ever seen anyone come is Engagement Analysis Inc., a firm based here in Canada. They have a carefully chosen set of engagement-related survey questions which are held constant from school to school. The questions are grouped in various categories or “drivers” of engagement according to how closely related (statistically) the responses tend to be to each other. Although I have issues with alumni surveys and the dangers involved in interpreting the results, I found EA’s approach fascinating in terms of gathering and comparing data on alumni attitudes.

(Disclaimer: My former employer was once a client of this firm’s but I have no other association with them. Other vendors do similar and very fine work, of course. I can think of a few, but haven’t actually worked with them, so I will not offer an opinion.)

Some vendors may make claims of being scientific or analytically correct, but the only requirements for quantifying engagement are that it be reasonable and, if you are benchmarking against other schools, consistent from school to school. If you do want to benchmark against other schools, engage a vendor to do it right, because it’s not easily done.

But if you want to benchmark against yourself (that is, over time), don’t be intimidated by anyone telling you your method isn’t good enough. Just do your own thing. Survey if you like, but call first upon the real, measurable activities that your alumni participate in. There is no single right way, so find out what others have done. One institution will give more weight to reunion attendance than to showing up for a pub night, while another will weigh all event attendance equally. Another will ditch event attendance altogether in favour of volunteer activity, or some other indicator.

Can anyone say definitively that any of these approaches are wrong? I don’t think so — they may be just right for the school doing the measuring. Many schools (mine included) assign fairly arbitrary weights to engagement indicators based on intuition and experience. I can’t find fault with that, simply because “engagement” is not a quantity. It’s not directly measurable, so we have to use proxies which ARE measurable. Other schools measure the degree of association (correlation) between certain activities and alumni giving, and base their weights on that, which is smart. But it’s all the same to me in the end, because ‘giving’ is just another proxy for the freely interpretable quality of “engagement.”
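For schools that want to try the correlation-based weighting, a minimal sketch in Python with pandas follows. The file and column names are hypothetical, and lifetime giving here is, as noted, just another proxy:

```python
import pandas as pd

# Hypothetical extract: one row per alum, 0/1 engagement indicators plus lifetime giving.
alums = pd.read_csv("alumni_engagement.csv")
indicators = ["attended_reunion", "attended_pub_night", "volunteered", "mentored_student"]

# Weight each indicator by its correlation with lifetime giving (negatives floored at zero).
weights = alums[indicators].corrwith(alums["lifetime_giving"]).clip(lower=0)

# Engagement score: weighted sum of the indicators each alum shows.
alums["engagement_score"] = (alums[indicators] * weights).sum(axis=1)
```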

Think of devising a “love score” to rank people’s marriages in terms of the strength of the pair bond. A hundred analysts would head off in a hundred different directions at Step 1: Defining “love”. That doesn’t mean the exercise is useless or uninteresting, it just means that certain claims have to be taken with a grain of salt.

We all have plenty of leeway to choose the proxies that work for us, and I’ve seen a number of good examples from various schools. I can’t say one is better than another. If you do a good job measuring the proxies from one year to the next, you should be able to learn something from the relative rises and falls in engagement scores over time and from comparisons between different groups of alumni.

Are there more rigorous approaches? Yes, probably. Should that stop you from doing your own thing? Never!

28 March 2012

Are we missing too many alumni with web surveys?

Filed under: Alumni, John Sammis, Peter Wylie, Surveying, Vendors — kevinmacdonell @ 8:04 am

Guest post by Peter B. Wylie and John Sammis

(Download a printer-friendly PDF version here: Web Surveys Wylie-Sammis)

With the advent of the internet and its exponential growth over the last decade and a half, web surveys have gained a strong foothold in society in general, and in higher education advancement in particular. We’re not experts on surveys, and certainly not on web surveys. However, let’s assume you (or the vendor you use to do the survey) e-mail either a random sample of your alumni or your entire universe of alumni and invite them to go to a website and fill out a survey. If you do this, you will encounter the problem of a poor response rate. If you’re lucky, maybe 30% of the people you e-mailed will respond, even if you vigorously follow up with non-responders, encouraging them to please fill the thing out.

This is a problem. There will always be the lingering question of whether or not the non-responders are fundamentally different from the responders with respect to what you’re surveying them about. For example, will responders:

  • Give you a far more positive view of their alma mater than the non-responders would have?
  • Tell you they really like new programs the school is offering, programs the non-responders may really dislike, or like a lot less than the responders?
  • Offer suggestions for changes in how alumni should be approached — changes that non-responders would not offer or actively discourage?

To test whether these kinds of questions are worth answering, you (or your vendor) could do some checking to see if your responders:

  • Are older or younger than your non-responders. (Looking at year of graduation for both groups would be a good way to do this.)
  • Have a higher or lower median lifetime giving than your non-responders.
  • Attend more or fewer events after they graduate than your non-responders.
  • Are more or less likely than your non-responders to be members of a dues paying alumni association.

It is our impression that most schools that conduct alumni web surveys don’t do this sort of checking. In their reports they may discuss what their response rates are, but few offer an analysis of how the responders are different from the non-responders.
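None of this checking requires anything elaborate. Here is a minimal sketch of the comparisons listed above in Python with pandas, assuming a hypothetical extract with one row per invited alum, a 0/1 “responded” flag, and the relevant columns:

```python
import pandas as pd

# Hypothetical extract: one row per alum invited to the survey.
alums = pd.read_csv("survey_invitees.csv")
grouped = alums.groupby("responded")  # 0 = non-responder, 1 = responder

print(grouped["class_year"].median())       # older or younger than non-responders?
print(grouped["lifetime_giving"].median())  # higher or lower median lifetime giving?
print(grouped["events_attended"].mean())    # more or fewer events since graduating?
print(grouped["is_dues_member"].mean())     # share who are dues-paying association members
```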

Again, we’re talking about impressions here, not carefully researched facts. But that’s not our concern in this paper. Our concern here is that web surveys (done at schools where potential responders are contacted only by e-mail) are highly unlikely to be representative of the entire universe of alums — even if the response rate for these surveys were one hundred percent. Why? Because our evidence shows that alumni who have an e-mail address listed with their schools are markedly different from alumni who do not, in terms of two important variables: age and giving.

To make our case, we’ll offer some data from four higher education institutions spread out across North America; two are private, and two are public. Let’s start with the distribution of e-mail addresses listed in each school by class year decile. You can see these data in Tables 1-4 and Figures 1-4. We’ll go through Table 1 and Figure 1 (School A) in some detail to make sure we’re being clear.

Take a look at Table 1. You’ll see that the alumni in School A have been divided up into ten roughly equal-size groups, where Decile 1 represents the oldest group and Decile 10 the youngest. The table shows a very large age range. The youngest alums in Decile 1 graduated in 1958. (Most of you reading this paper had not yet been born in that year.) The alums in Decile 10 (unless some of them went back to school late in life) are all twenty-somethings.

Table 1: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School A

 

Now look at Figure 1. It shows the percentage of alums by class year decile who have an e-mail address listed in the school’s database. Later on in the paper we’ll discuss what we think are some of the implications of a chart like this. Here we just want to be sure you understand what the chart is conveying. For example, 43.0% of alums who graduated between 1926 and 1958 (Decile 1) have an e-mail listed in the school’s database. How about Decile 9, alums who graduated between 2001 and 2005? If you came up with 86.5%, we’ve been clear.
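If you want to produce a Table 1 and Figure 1 of your own, a minimal sketch in Python with pandas follows; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical extract: one row per alum, with class year and a 0/1 e-mail flag.
alums = pd.read_csv("alumni.csv")

# Ten roughly equal-size groups; ranking first keeps ties in class year from breaking the bins.
alums["decile"] = pd.qcut(alums["class_year"].rank(method="first"), 10, labels=range(1, 11))

# Table 1-style summary: count, median, minimum, and maximum class year per decile.
print(alums.groupby("decile")["class_year"].agg(["count", "median", "min", "max"]))

# Figure 1-style numbers: percentage of alums in each decile with an e-mail address listed.
print(alums.groupby("decile")["has_email"].mean() * 100)
```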

Go ahead and browse through Tables 2-4 and Figures 2-4. After you’ve done that, we’ll tell you what we think is one of the implications of what you’ve seen so far.

Table 2: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School B

 

Table 3: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School C

 

 

 

Table 4: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles for School D

 

 

The most significant implication we can draw from what we’ve shown you so far is this: If any of these four schools were to conduct a web survey by only contacting alums with an e-mail address, they would simply not reach large numbers of alums whose opinions they are probably interested in gathering. Some specifics:

  • School A: They would miss huge numbers of older alums who graduated in 1974 and earlier. By rough count, over 40% of these folks would not be reached. That’s a lot of senior folks who are still alive and kicking and probably have pronounced views about a number of issues contained in the survey.
  • School B: A look at Figure 2 tells us that even considering doing a web survey for School B is probably not a great idea. Fewer than 20% of their alums who graduated in 1998 or earlier have an e-mail address listed in their database.

Another way of expressing this implication is that each school (regardless of what their response rates were) would largely be tapping the opinions of younger alums, not older or even middle-aged alums. If that’s what a school really wants to do, okay. But we strongly suspect that’s not what it wants to do.

Now let’s look at something else that concerns us about doing web surveys when potential respondents are contacted only by e-mail: giving. Figures 5-8 show, for each class year decile, the percentage of alums who have given $100 or more lifetime, split by whether or not they have an e-mail address listed.

As we did with Figure 1, let’s go over Figure 5 to make sure it’s clear. For example, in decile 1 (oldest alums) 87% of alumni with an e-mail address have given $100 or more lifetime to the school. Alums in the same decile who do not have an e-mail address? 71% of these alums have given $100 lifetime or more to the school.  How about decile 10, the youngest group? What are the corresponding percentages of giving for those alums with and without an e-mail address? If you came up with 14% versus 6%, we’ve been clear.
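The numbers behind Figures 5 through 8 come from a simple cross-tabulation. Continuing the hypothetical extract from the earlier sketch, and assuming it also carries a lifetime giving column:

```python
import pandas as pd

# Same hypothetical extract as above, assumed to include a lifetime_giving column.
alums = pd.read_csv("alumni.csv")
alums["decile"] = pd.qcut(alums["class_year"].rank(method="first"), 10, labels=range(1, 11))

# Percentage giving $100 or more lifetime, split by class year decile and e-mail status.
alums["gave_100_plus"] = alums["lifetime_giving"] >= 100
pct = (alums.groupby(["decile", "has_email"])["gave_100_plus"].mean() * 100).unstack()
print(pct)  # one row per decile; columns for alums without and with an e-mail address
```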

Take a look at Figures 6-8, for schools B, C and D. Then we’ll tell you the second implication we see in all these data.

The overall impression we get from these four figures is clear: Alumni who do not have an e-mail address listed give considerably less money to their schools than do alumni with an e-mail address listed. This difference can be particularly pronounced among older alums.

Some Conclusions

The title of this piece is: “Are We Missing Too Many Alumni with Web Surveys?” Based on the data we’ve looked at, we think the answer has to be “yes.” It can’t be a good thing that so many web surveys never reach the many older alums who lack an e-mail address, nor the alums without an e-mail address who have given less, on average, than those with one.

On the other hand, we want to stress that web surveys can provide a huge amount of valuable information from the alums who are reached and do respond. Even if the coverage of the whole alumni universe is incomplete, the thousands of alums who take the time to fill out these surveys can’t be ignored.

Here’s an example. We got to reading through the hundreds and hundreds of written comments from a recent alumni survey. We haven’t included any of the comments here, but my (Peter’s) reaction to the comments was visceral. Wading through all the typos, and misspellings, and fractured syntax, I found myself cheering these folks on:

  •  “Good for you.”
  • “Damn right.”
  • “Couldn’t have said it better myself.”
  • “I wish the advancement and alumni people at my college could read these.”

In total, these comments added up to almost 50,000 words of text, the length of a short novel. And they were a lot more interesting than the words in too many of the novels I read.

As always, we welcome your comments.

23 September 2011

Who needs analytics vendors?

Filed under: External data, Vendors — kevinmacdonell @ 6:08 am

I’ve written a guest post for Andrew Urban’s blog, Return on Mission. Andrew is the author of a great little book called “The Nonprofit Buyer,” which is subtitled: “Strategies for Success from a Nonprofit Technology Sales Veteran.” It’s all about helping nonprofits make better choices when it comes to dealing with vendors of technology products and services. You can find out more on Andrew’s blog.

I’m pleased he’s asked me to contribute to Return on Mission, where I write on a topic I haven’t addressed on my own blog. Readers of CoolData know that my focus is the in-house analytics capability of nonprofits and higher-education institutions.  So what do I think about analytics and analytic services purchased from vendors?

Well, if you want to find out, you’ll have to follow the link: Knowledgeable Purchasers — 4 Easy Rules
