CoolData blog

3 May 2010

The tough job of bringing in new alumni donors

Filed under: Alumni, Donor acquisition, John Sammis, Peter Wylie — kevinmacdonell @ 8:48 am

Guest post by Peter Wylie and John Sammis

(Click here: Donor acquisition – Wylie and Sammis – 2 May 2010 – to download a Microsoft Word version of this paper.)

Most alumni have never given a cent to their alma maters. “Whoa!” you may be saying, “What’s your evidence, fellows? That’s hard to swallow.”

We would agree. It’s not a pretty picture, but it’s an accurate one. For some documentation you can read “Benchmarking Lifetime Giving in Higher Education”. Sadly, the bottom line is this: In North America the lifetime hard credit alumni participation of at least half of our higher education institutions is less than 50%. If you look at only private institutions, the view is better. Public institutions? Better to not even peek out the window.

We do have a bit of optimism to offer in this paper, but we’ll start off by laying some cards on the table:

  • We’re specialists in data analysis. If we’re not careful, Abraham Maslow’s oft-quoted dictum can apply to us: “If your only tool is a hammer, every problem starts looking like a nail.” We don’t have all the answers on this complex issue. In fact, we believe that institutional leadership (from your president and board of trustees) is what’s most important in getting more alums involved in giving. Data driven decision making (the underpinning of all our work) is only part of the solution.
  • Donor acquisition is hard. If you don’t believe that, talk to anyone who runs the annual fund for a large state university. Ask them about their success rates with calling and mailing to never-givers. They will emit sighs of frustration and exasperation. They will tell you about the depressing pledge rates from the thousands and thousands of letters and postcards they send out. They will tell you about the enervating effect of wrong numbers and hang-ups on their student callers. They will tell you it isn’t easy. And they’re right; it isn’t.
  • RFM won’t help. (RFM stands for “Recency of Giving,” “Frequency of Giving,” and “Monetary Value of Giving.” It’s a term that came out of the private sector world of direct marketing over 40 years ago.) Applying that concept to our world of higher education advancement, you would call and mail to alums who’ve given recently, often, and a lot. Great idea. But if we’re focused on non-donors … call it a hunch … that’s probably not going to work out too well.

So … what’s the optimism we can offer? First, we’ve had some success with building predictive models for donor acquisition. They’re not great models, but, as John likes to say, “They’re a heck of a lot better than throwing darts.” In the not too distant future we plan to write something up on how we do that.

But for now we’d like to show you some very limited data from three schools — data that may shed just a little light on who among your non-giving alums are going to be a bit easier than others to attract into the giving fold. Again, nothing we show you here is cause for jumping up and down and dancing on the table. Far from it. But we do think it’s intriguing, and we hope it encourages folks like you to share these ideas with your colleagues and supervisors.

Here’s what we’ll be talking about:

  • The schools
  • The data we collected from the schools
  • Some results
  • A makeshift score that you might test out at your own school

The Schools

One of the schools is a northeastern private institution; the other two are southeastern public institutions, one medium-sized, the other quite small.

The Data We Collected from the Schools

The most important aspect of the data we got from each school is lifetime giving (for the exact same group of alums) collected at two points in time. With one school (A), the time interval we looked at stretched out over five years. For the other two (B and C), the interval was just a year. However, with all three schools we were able to clearly identify alums who had converted from non-donor to donor status over the time interval.

We collected a lot of other information from each school, but the data we’ll focus on in this piece include:

  • Preferred year of graduation
  • Home Phone Listed (Yes/No)
  • Business Phone Listed (Yes/No)
  • Email Address Listed (Yes/No)

Some Results

The result that we paid most attention to in this study is that a greater percentage of new donors came from the ranks of recent grads than from “older” grads. To arrive at this result we:

  • Divided all alums into four roughly equal-size groups. If you look at Chart 1, you’ll see that these groups consisted of the oldest 25% of alums, who graduated in 1976 and earlier; the next oldest 25%, who graduated between 1977 and 1990; and so on.
  • For each class year quartile we computed the percentage of those alums who became new donors over the time interval we looked at.
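The two steps above can be sketched in a few lines of Python. This is purely illustrative — the record layout and numbers are our own invention, not the authors’ actual files:

```python
from statistics import quantiles

def quartile_conversion_rates(alums):
    """alums: list of (class_year, converted) pairs for previous non-donors."""
    years = sorted(year for year, _ in alums)
    q1, q2, q3 = quantiles(years, n=4)  # cut points for four roughly equal groups
    counts = [[0, 0], [0, 0], [0, 0], [0, 0]]  # [new donors, total] per quartile
    for year, converted in alums:
        idx = 0 if year <= q1 else 1 if year <= q2 else 2 if year <= q3 else 3
        counts[idx][0] += int(converted)
        counts[idx][1] += 1
    # conversion rate per quartile, oldest (0) to youngest (3)
    return [c[0] / c[1] if c[1] else 0.0 for c in counts]
```

The same quartile cut could of course be done in Excel or any stats package; the point is only that the calculation is simple.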

Notice in Chart 1 that, as the graduation years of the alums in School A become more recent, their likelihood of becoming a new donor goes up. In the oldest quartile (1976 and earlier), the conversion rate is 1.2%; it is 1.5% for those graduating between 1977 and 1990, 3% for those graduating between 1991 and 1997, and 7.5% for alums graduating in 1998 or later. You’ll see a similar (but less pronounced) pattern in Charts 2 and 3 for Schools B and C.

At this point you may be saying, “Hold on a second. There are more non-donors in the more recent class year quartiles than in the older class year quartiles, right?”

“Right.”

“So maybe those conversion rates are misleading. Maybe if you just looked at the conversion rates of previous non-donors by class year quartiles, those percentages would flatten out?”

Good question. Take a look at Charts 1a, 2a, and 3a below.

Clearly the pool of non-donors diminishes the longer alums have been out of school. So let’s recompute the conversion rates for each of the three schools based solely on previous non-donors. Does that make a difference? Take a look at Charts 1b, 2b, and 3b.

It does make some difference. But, without getting any more carried away with the arithmetic here, the message is clear: many more new donors are coming from the more recent alums than from the ones who graduated a good while back.

Now let’s look at the three other variables we chose for this study:

  • Home Phone Listed (Yes/No)
  • Business Phone Listed (Yes/No)
  • Email Address Listed (Yes/No)

Specifically, we wanted to know if previous non-donors with a home phone listed were more likely to convert than previous non-donors without a home phone listed. And we wanted to know the same thing for business phone listed and for email address listed.

The overall answer is “yes”; the detailed answers are contained in Charts 4-6. For the sake of clarity, let’s go through Chart 4 together. It shows that:

  • In School A, 5.8% of previous non-donors with a home phone listed converted; 3.7% without a home phone listed converted.
  • In School B, 3.7% of previous non-donors with a home phone listed converted; 1.2% without a home phone listed converted.
  • In School C, 1.0% of previous non-donors with a home phone listed converted; 0.4% without a home phone listed converted.

Looking at Charts 5 and 6 you can see a similar pattern of differences for whether or not a business phone or an email address was listed.
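This kind of yes/no split is easy to compute yourself. Here is a minimal sketch, again with an invented record layout (a list of dicts with a boolean contact field and a “converted” flag), not anyone’s real database:

```python
def conversion_by_flag(records, flag):
    """records: dicts with a boolean contact field and a 'converted' key.
    Returns (rate with the field listed, rate without it)."""
    listed, unlisted = [0, 0], [0, 0]  # [new donors, total]
    for r in records:
        group = listed if r[flag] else unlisted
        group[0] += int(r["converted"])
        group[1] += 1
    rate = lambda g: g[0] / g[1] if g[1] else 0.0
    return rate(listed), rate(unlisted)
```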

What comes across from all these charts is that the variables we’ve chosen to look at in this study (year of graduation, home phone, email, and business phone) don’t show big differences between previous non-donors who converted and previous non-donors who did not convert. They show small differences. There’s no getting around that.

What’s encouraging (at least we think so) is that these differences are consistent across the three schools. And since the schools are quite different from one another, we expect that the same kind of differences are likely to hold true at many other schools.

Let’s assume you’re willing to give us the benefit of the doubt on that. Let’s further assume you’d like to check out our proposition at your own school.

A Makeshift Score That You Might Test at Your Own School

Here’s what we did for the data we’ve shown you for each of the three schools:

We created four 0/1 variables for all alums who were non-donors at the first point in time:

  • Youngest Class Year Quartile – alums who were in this group were assigned a 1; all others were assigned a 0.
  • Home Phone Listed — alums who had a home phone listed in the database were assigned a 1; all others were assigned a 0.
  • Business Phone Listed — alums who had a business phone listed in the database were assigned a 1; all others were assigned a 0.
  • Email Listed — alums who had an email address listed in the database were assigned a 1; all others were assigned a 0.

For each alum who was a non-donor at the first point in time, we created a very simple score by adding each of the above variables together. Here’s the formula we used:

SCORE = Youngest Class Year Quartile (0/1) + Home Phone Listed (0/1) + Business Phone Listed (0/1) + Email Listed (0/1)

An alum with a Score of 0 was not in the Youngest Class Year Quartile, did not have a home phone listed, did not have a business phone listed, and did not have an email address listed. An alum with a Score of 1 met only one of these criteria, but not the other three, and so on, up to an alum with a Score of 4, who met all the criteria.
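If you want to try this at your own school, the score really is just a sum of four 0/1 flags. A sketch (the field names here are illustrative, not actual database columns):

```python
def makeshift_score(alum):
    """alum: dict of the four yes/no fields; returns an integer score, 0-4."""
    return (int(alum["youngest_quartile"])
            + int(alum["home_phone_listed"])
            + int(alum["business_phone_listed"])
            + int(alum["email_listed"]))
```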

Charts 7-9 show the relationship of the Score to new donor conversion. We’d like you to browse through them. After you do that, we have a few concluding comments.

Some final thoughts:

  1. We think the charts are interesting because they show that using just a little information from an alumni database can point to folks who are far more likely to convert than other folks. Obviously, the score we created here (and suggest you try out at your own school) is very simple. Far more accurate scores can be developed using more advanced statistical techniques and the vast amount of information that’s included in almost all alumni databases.
  2. If you’ve taken the trouble to read this far, we’re, of course, pleased. We believe so fundamentally in data driven decision making that it brightens our day whenever someone at least entertains our ideas. But the problem may be with all the decision makers and opinion influencers out there who are not reading this piece and who would be, at best, bored by it. These are vice presidents and directors and bloggers and vendors who seem unwilling to make a commitment to the use of internal alumni database information — information that could save millions and millions of dollars on appeals (both mail and calling) to alums who are very unlikely to ever become donors.
  3. If you agree with us on point 2, the question becomes, “What can we do to change their minds, to get their attention?” First of all, we strongly encourage you to suppress the urge to grab them by the scruff of the neck and scold them. That won’t work. (Would that it did.) What we suggest is patience combined with persistence. New ideas and ways of doing things take a long time to take hold in institutions. How long has the idea been around of converting print-based medical records to electronic form so they can be quickly shared among physicians and others who must make life-altering decisions on the spot every day? If memory serves, it’s been a while. But don’t give up making the case and pushing politely but assertively. They’ll come around. We’re (all of us) a benevolent juggernaut whose opinions will eventually prevail.

5 Comments »

  1. Peter,

    These are interesting results, but the correlations seem random and perhaps misleading. (Note that my perspective is that of a data manager, not a statistician.)

    It seems to me that for the next 50 years or so we will by definition have more email addresses for younger alumni than older classes, since many schools will have email addresses for all new grads, whereas many older classes graduated before email even existed. I also wonder whether there’s a difference between email addresses provided to alumni by the schools (e.g., “email for life” addresses that the alumni may not even use) and “real” email addresses.

    I also wonder about the method by which email addresses and home phones were acquired. Does it matter (and can you tell) whether that data was volunteered by the alumni or added through a data append service?

    Business phones seem most likely to be correlated with actual engagement since they were more likely to have been provided by the alumni rather than the Registrar’s office or an append service.

    Robert

    Comment by Robert Weiner — 3 May 2010 @ 11:57 am

    • Robert, thanks for this comment. I’m interested to hear what the authors say, but in the meantime, here is my take on it.

      True: the method by which we acquire predictor data does matter. But I do not see how we go astray in using it – as long as we know our own data, and test for correlation without making any assumptions. Yes, there IS a difference in email addresses automatically assigned to alumni and the addresses they themselves provide. (In my own analysis, I leave out ’email for life’, because I can, and because I know it’s not predictive of anything.)

      For the rest, the proof is in the correlation. If there is a correlation, the data is predictive and therefore useful. I’m not sure how a correlation can be either ‘random’ or ‘misleading’; it either exists or it doesn’t, with a specific direction and degree of strength. Although it may be quite weak, when its effect is added together with that of other correlated predictors, it will have predictive power. (An aside: Very weak predictors should perhaps not be used except in a regression model, to prevent over-weighting of trivial variables.)
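(Concretely, the test I mean here is just Pearson’s r between a 0/1 predictor and a 0/1 giving flag. A bare-bones sketch, with made-up series rather than anyone’s real data — and note both series must vary, or the denominator is zero:)

```python
def pearson(xs, ys):
    """Pearson's r between two equal-length numeric series (both must vary)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```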

      Everyone’s data is “layered” to one degree or another. A very few institutions will be quite specific about coding the source of information and it will be possible to separate out voluntary from involuntary disclosure. More data is more grist for testing for correlation with ‘giving’. We expect that voluntarily-provided data will be much more predictive – but we must test.

      However, data miners should NOT shun predictors whose provenance is uncertain. It’s more common sense than statistics … if the vast majority of your alumni have an email address, then that variable probably isn’t going to be terribly useful as a predictor. If everyone from the Class of 2000 and younger has an email address, and no one else does, same deal – the variable is just a proxy for age. And hypothetically: If everyone with an email is a donor, and everyone without is not, then your institution receives donations exclusively online and email is not a predictor!

      But it’s more likely that an institution will have emails for alumni of ALL ages, and ALL donor levels. What if one does not know where those emails came from? Not a problem, I say: If those emails were acquired without the involvement of the alums, the correlation with giving will be weak. If they provided their addresses voluntarily, the correlation is likely to be highly significant. And if it’s a mix of the two scenarios (most likely), then the correlation will be mid-range – but still useful!

      The bottom line is: know your data – but recognize a valid correlation when it presents itself.

      Kevin

      Comment by kevinmacdonell — 6 May 2010 @ 8:59 am

  2. great piece, and timely!!

    Comment by Martha Valerio Lauria — 5 May 2010 @ 3:11 pm

  3. I enjoyed this article, thanks for sharing.

    This was sort of touched on in the first comment, but I’m curious about the phone numbers and email addresses: did you consider the date the info was added, and whether it may have been the result of something like filling out a pledge form or responding to a gift solicitation? I assume this would skew your results. Interested to hear your thoughts on this.

    Thanks again for sharing,

    John

    Comment by John — 11 May 2010 @ 2:12 pm

    • John: If the data set were rich (with a date-stamp for every email and phone update), and if it were easy to draw a neat line between voluntary and involuntary data prior to drawing variables for a model, then it would be interesting and useful to do so. Alas, the data is often not available, or too much trouble to obtain, or too time-consuming to parse in this way. Those of us who begin our models with 50 or more potential independent variables usually don’t have the time to work them over that hard, and I’m not sure it would be worth it in the end. Are our predictors all truly independent? Far from it. But they are partially so, and for practical purposes that might have to do. Are there key variables that we should at least try to make the distinction for? Maybe … and I’d like to hear suggestions from anyone for which ones would be most appropriate to study.

      Comment by kevinmacdonell — 11 May 2010 @ 7:40 pm

