CoolData blog

3 October 2016

Grad class size: predictive of giving, but a reality check, too


The idea came up in a conversation recently: Certain decades, it seems, produced graduates who have reduced levels of alumni engagement and lower participation rates in the Annual Fund. Can we hope they will start giving when they get older, like the alumni who came before them? Or is this depressed engagement a product of their student experience — a more or less permanent condition that will keep them from ever volunteering or giving?


The answer is not perfectly clear, but what I have found with a bit of analysis can only add to the concern we all have about the end of “business as usual.”


For almost all universities, enrolments have risen dramatically over the decades since the end of the Second World War. As undergraduate class sizes ballooned, metrics such as the student-professor ratio emerged as important indicators of quality of education. It occurred to me to calculate the size of each grad-year cohort and include it as a variable in predictive models. For a student who graduated in 1930, that figure could be 500. For someone who graduated in 1995, it might be 3,000. (If you do this, remember not to exclude now-deceased alumni in your count.) A rough generalization about the conditions under which a person received their degree, to be sure, but it was easy to query the database for this, and easy to test.


I pulled lifetime giving for 130,000 living alumni and log-transformed it before checking for a correlation with the size of graduating class. (The transformation being log of “lifetime giving plus 1.”) It turned out that lifetime giving has a strong inverse correlation with the size of an alum’s grad class, for that alum’s most recent degree. (r = -0.338)
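If you want to try the same check, the transformation and the correlation take only a few lines in Python. This is just a sketch with made-up numbers (my own analysis was done in my stats package, on the full alumni file):

```python
import numpy as np
import pandas as pd

# Hypothetical alumni records: lifetime giving, and the size of the
# graduating class for each alum's most recent degree.
alumni = pd.DataFrame({
    "lifetime_giving": [0, 25, 0, 500, 10000, 50, 0, 1200],
    "grad_class_size": [3000, 2500, 3200, 800, 500, 1500, 3100, 900],
})

# Log-transform giving as log("lifetime giving plus 1") to tame the skew.
alumni["lg_giving"] = np.log10(alumni["lifetime_giving"] + 1)

# Pearson correlation between transformed giving and grad-class size.
r = alumni["lg_giving"].corr(alumni["grad_class_size"])
print(round(r, 3))
```

With real data, the sign of `r` is the thing to watch: a negative value is what suggests that alumni from bigger classes give less over their lifetimes.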


This is not surprising. The larger the graduating class, the younger the alum. Nothing is as strongly correlated with lifetime giving as age, so much of the effect I was seeing was probably due to age. (The Pearson correlation of LTG and age was 0.395.)


Indeed, in a multiple linear regression of lifetime giving (log-transformed) on age, adding “grad-class size” as a predictor variable does not improve model fit. The two predictors are not independent of each other: for age and grad-class size, r = -0.828!
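You can see the mechanics of this collinearity problem with simulated data. In the sketch below, the coefficients are invented so that grad-class size is almost a linear function of age, mimicking the strong negative correlation described above; adding the collinear predictor then leaves adjusted R squared essentially unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated data: class size is nearly a linear function of age (collinear),
# and giving is driven mainly by age. All coefficients are made up.
age = rng.uniform(22, 90, n)
class_size = 3500 - 35 * age + rng.normal(0, 200, n)
lg_giving = 0.04 * age + rng.normal(0, 1, n)

def adj_r2(X, y):
    """Ordinary least squares via lstsq; returns adjusted R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    k = X1.shape[1] - 1  # number of predictors, excluding the intercept
    return 1 - (1 - r2) * (len(y) - 1) / (len(y) - k - 1)

fit_age = adj_r2(age.reshape(-1, 1), lg_giving)
fit_both = adj_r2(np.column_stack([age, class_size]), lg_giving)
print(round(fit_age, 3), round(fit_both, 3))   # nearly identical
print(round(float(np.corrcoef(age, class_size)[0, 1]), 3))   # strongly negative
```

The point of the sketch: when two predictors carry nearly the same information, the second one adds almost nothing to model fit, no matter how well it correlates with the outcome on its own.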


I wasn’t ready to give up on the idea, though. I considered my own graduation from university, and all the convocations I had attended in the past as an Advancement employee or a family member of a graduate. The room (or arena, as the case may be) was full of grads from a whole host of degree programs, most of whom had never met each other or attended any class in common. Enrolment growth has been far from even across faculties (or colleges or schools); the student experience in terms of class size and one-on-one access to professors probably differs greatly from program to program. At most universities, Arts or Science faculties have exploded in size, while Medicine or Law have probably not.


With that in mind, I calculated grad-class size differently, counting the size of each alum’s graduating cohort at the faculty (college) level. The correlation of this more granular count of grads with lifetime giving was not as negative (r = -0.283), but at the same time, it was less tied to age.


This time, when I created a regression of lifetime giving on age and then added grad-class size at the faculty level, both predictors were significant. Grad-class size gave a good boost to adjusted R squared.


I seemed to be on to something, so I pushed it farther. Knowing that an undergrad’s experience is very different from that of a graduate student, I added “Number of Degrees” as a variable after age, and before grad-class size. All three predictors were significant and all led to improvements in model fit.


Still on the trail of how class size might affect student experience, and alumni affinity and giving thereafter, I got more specific in my query, counting the number of graduates in each alum’s year of graduation and degree program. This variable was even less conflated with age, but despite that, it failed to provide any additional explanation for the variation in lifetime giving. There may be other forms of counts that are more predictive, but the best I found was size of grad class at the faculty/college level.


If I were asked to speculate about the underlying cause, the narrative I’d come up with is that enrolments grew dramatically not only because there were more young people, but because universities in North America were attracting students who increasingly felt that a university degree was a rite of passage required for success in the job market. The relationship of student to university was changing, from that of a close-knit club of scholars, many of whom felt immensely grateful for the opportunity, to a much larger, less cohesive population with a more transactional view of their relationship with alma mater.


That attitude (“I paid x dollars for my piece of paper and so our business here is done”), and not so much the increasing numbers of students they shared the lecture halls with, could account for drops in philanthropic support. What that means for Annual Fund is that we can’t bank on the likelihood that a majority of alumni will become nostalgic when they reach the magic age of 50 or 60 and open their wallets as a consequence. Everything’s different now.


I don’t imagine this is news to anyone who’s been paying attention. But it’s interesting to see how this reality is reflected in the data. And it’s in the data that we will be able to find the alumni for whom university was not just a transaction. Our task today is not just to identify that valuable minority, but to understand them, communicate with them intelligently, connect with their interests and passions, and engage them in meaningful interactions with the institution.


14 June 2011

Special variables for predicting young alumni giving

Filed under: Alumni, Predictor variables — kevinmacdonell @ 8:03 am

Do your newest alumni have email accounts or some other type of university account that remained active after they graduated? If data on active and inactive accounts is available to you, I suggest you have a look at it, because it can be indicative of affinity with the institution. Having an active account itself may not indicate affinity — but keeping it active might. The variables require some special handling in order to detect the association, though, which is what I’ll describe today.

What kind of data am I talking about? “Email for life” is a common example: Alumni either get to keep the email address they had when they were students, or are given alumni addresses. Normally it’s not a true email account with storage and so on, but rather an email forwarding service which allows graduates to keep a single, permanent address even as they move and change jobs. (It used to be a great idea, but uptake for email-for-life is not high these days — Gmail and other services serve the purpose. But some alumni like to have their address reflect the school they attended. An email address with a prominent college name looks better on a resume than @hotmail, after all.)

Besides email, an account might also allow former students to access an online community, retrieve academic transcripts, or update their personal information. Whatever the purpose of the account, the pattern is the same: Practically everyone in the most recent graduating class has an active account, but the proportion of accounts that are used and remain active declines steeply with every passing year.

I have a data set here as an example. This chart shows the percentage of alumni whose university account is still active, broken down by the year of graduation.

Overall, 30% of alumni who graduated in 2000 or later have an account coded as “still active.” You might expect that alumni who kept an active account would be more likely to be donors, but that’s not the case: The “inactive” alumni are the better donors, with a participation rate more than twice the rate of the “active” alumni.

Why is that? Look again at the chart above. From a high of 98.3% for the class of 2011 (near-total penetration), the ‘active’ percentage rapidly declines to under 3% for alumni who are ten years out. In its current form, the “active account” variable is merely a proxy for “year of graduation.” Older graduates (even only slightly older) are more likely to have a history of giving — therefore the alumni with lapsed accounts appear to be better donors.

I was reluctant to abandon this variable before seeing if I could transform it somehow. What if I gave each alum a score based on how many years since graduation his or her account has stayed active? In my stats software (Data Desk), I created a new derived variable (let’s call it ‘Active Score’), which was calculated this way:

  • If the account is active, then ‘Active Score’ is equal to 2011 minus Class Year.
  • Otherwise, ‘Active Score’ is equal to zero.
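In pandas, the same derivation might look like the snippet below (hypothetical rows; I built mine in Data Desk):

```python
import pandas as pd

CURRENT_YEAR = 2011  # the year of the analysis

# Hypothetical alumni records: class year and whether the account is active.
alumni = pd.DataFrame({
    "class_year":     [2011, 2010, 2008, 2004, 2000],
    "account_active": [True, True, True, False, True],
})

# Active Score: years the account has stayed active since graduation;
# zero if the account has lapsed (or if the alum just graduated).
alumni["active_score"] = (
    (CURRENT_YEAR - alumni["class_year"]).where(alumni["account_active"], 0)
)
print(alumni)
```

For these rows the scores come out as 0, 1, 3, 0 and 11: the 2004 grad scores zero because the account lapsed, while the 2000 grad scores 11 for eleven years of persistence.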

Therefore, this year’s grads will be zero whether their account is active or not. Given that nearly 100% of them have an active account, one can assume that no actual choice was being exercised — their account is merely active by default. It’s the persistence of activity that matters here. In my data set, ‘Active Score’ ranges from zero to 11 years. As I had hoped, the score seems to have an association with participation in the Annual Fund. At the high end, alumni with the longest-active accounts had a very high participation rate (for young alums) of 25%:

My new variable could still be a proxy for Class Year — only an alum who graduated more than 10 years ago could have a score of 11 — but my early testing indicates this variable offers some explanatory value even when I control for the number of years since graduation. It makes sense to me that having a still-active account at eight years out is more meaningful than a still-active account at five years out, which in turn is more meaningful than an active account held by someone who just graduated and may not even know they still have any account.

A final note: This post follows a previous one called Young alumni are a whole different animal, which was about building models dedicated to predicting young-alumni giving. Variables such as the one I’m talking about today are useful only for models built to predict giving by young alumni, not all alumni. It makes no sense to create a score for alumni who never had an account at all. None of them will be active and, because they are older, an active account will be a negative predictor in any model built on an all-ages sample of alumni.

9 June 2011

Young alumni are a whole different animal

Filed under: Alumni, Annual Giving, Model building, Predictor variables — kevinmacdonell @ 12:23 pm

My Phonathon program hires about thirty students a year. These are mature, reliable employees whom I’d recommend to any prospective future employer. They’re also, well, young. When I was in university, many of them hadn’t even been born.

So, yeah, they’re different from me. They’re different in terms of girth, taste in music and facility with pop-culture references. And they’re different in the data.

Grads who are just beginning their careers as alumni will lack most of the engagement-related attributes we usually rely on for predictive models: event attendance, volunteer activity, employment updates, a business phone. Therefore, variables that relate to their recent student experience are likely to loom larger for them than for their older counterparts. At the same time, recent grads tend to have a richer variety of data in their records, as database usage has increased across the enterprise through the years.

These two differences mark young alumni as a distinct population: One, differences in the distribution of variables that all alumni share, and two, the existence of variables that only younger alumni can have.

It makes me wonder why I’m still lumping young alumni in with older alumni in my predictive models. You might recall that a while ago I was bragging about how well my Phonathon model worked to predict propensity to give in response to phone solicitation. I also mentioned that, unfortunately, the model under-performed in predicting acquisition of young donors.

Okay, it didn’t under-perform — it failed. I concluded that young alumni need their own, separate model.

Where do we draw the line for “young alumni”? One possibility is that we go with our program’s definition of young alums — for me, that’s anyone who has earned a degree in any of the past three years and is under 35. Others might use graduates of the last decade.

This might be fine, but keep in mind that the training sample in a predictive model doesn’t have to follow the strict definition of the population that the appeal is targeting. We need a critical mass of donors in our sample population in order to train the model, so we might be more successful if we drew a larger, more loosely-defined sample. Our sample will include some alumni who are slightly older than the alumni who will get the “young alum” appeal — that’s okay, because they’re in the sample for only one reason: training the model.

However you draw the line, the distinction rests on the answer to this question: Is the data that describes one group different from the data that describes another? They may all be alumni, but can they also be thought of as separate populations, in terms of the data that was collected on them?

If you audit the data in certain tables, you might be able to find an “information bump”. That’s what I call the approximate year in which an institution started collecting and storing a lot more information on incoming students. In the data I’m familiar with, that bump has occurred in the last 10 to 15 years.

One of the most noticeable areas where data recording has increased is in personal information. Nowadays you can find Social Security Number (or in Canada, Social Insurance Number), religion, ethnicity, next-of-kin information, citizenship, driver’s license status, even eye and hair colour. Auditing these fields will tell you when data collection was ramped up, but probably won’t yield many useful predictors as they don’t have much to do with engagement. Certain types of personal information may also be off limits to you.

Investigate personal information if you can, but be sure to look around for other, more relevant data. Some examples:

  • Whether they lived in residence — If you don’t have direct access to this, the answer might be lurking in the alum’s past address data.
  • Athletics involvement — Count of activities, or a yes/no indicator.
  • Club and society activities — Count of activities, or a yes/no indicator.
  • Greek society membership — Yes/no.
  • Whether they were transfer students or received all of their degree credits from your institution
  • Whether they were employed on campus while a student
  • Whether they were recipients of awards, prizes, scholarships or bursaries
  • Whether they signed up for Email for Life, or otherwise kept their university email address or other university login active — In my data, more than 98% of the most recent grad class has an active university login. That drops to about 84% for the grad class of 2010, then 38% for 2009. The percentages continue to fall gradually from there. This attrition effect might hide the fact that retaining a student login past graduation is a strong indicator of affinity. I will write more on this topic in a future post.
  • Online community membership or activity

Oh, and don’t ignore the usual variables, such as marital status! In any conventional predictive model I’ve ever worked on, having a marital status of “single” in the database was a strong negative predictor of giving. But when I reduced my sample to graduates from the past ten years who were no older than 35, I was surprised to see that predictor turn into a strong positive. Although married alumni were still more likely to give, the “singles” were right behind them — and far ahead of the alumni for whom the marital status was missing. In my new model, I will use both “married” and “single” as predictors. Although the marrieds are more likely to be donors, there are relatively few of them; being coded single in our database could well prove to be a leading predictor of giving. (You will need to know, of course, why some alums are coded and others not. I’m still investigating.)

When September rolls around, I’ll be another three months older, and there’s nothing I can do about that. At least I’ll know my hard-working callers will be well-focused, talking to the recent grads who are most ready to make their very first gift to the Annual Fund.

5 April 2011

Validation after the fact

Filed under: Model building, Phonathon, regression, Validation — kevinmacdonell @ 8:11 am

Validation against a holdout sample allows us to pick the best model for predicting a behaviour of interest. (See Thoughts on model validation.) But I also like to do what I call “validation after the fact.” At the end of a fundraising period, I want to see how people who expressed that behaviour broke down by the score they’d been given.

This isn’t really validation, but if you create some charts from the results, it’s the best way to make the case to others that predictive modeling works. More importantly, doing so may provide insights into your data that will lead to improvements in your models in their next iterations.

This may be most applicable in Annual Fund, where the prospects solicited for a gift may come from a wide range of scores, allowing room for comparison. But my general rule is to compare each score level by ratios, not counts. For example, if I wanted to compare Phonathon prospects by propensity score, I would compare the percentage (ratio) of each score group contacted who made a pledge or gift, not the number of prospects who did so. Why? Because if I actually used the scores in solicitation, higher-scoring prospects would have received more solicitation attempts on average. I want results to show differences among scores, not among levels of intensity of solicitation.
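The by-ratio comparison is a one-line groupby if your call results are in a data frame. The numbers below are hypothetical, just to show the shape of the calculation:

```python
import pandas as pd

# Hypothetical call results: one row per contacted prospect, with the
# prospect's propensity score and whether the call produced a pledge.
calls = pd.DataFrame({
    "score":   [9, 9, 9, 5, 5, 5, 5, 2, 2, 2],
    "pledged": [1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
})

# Compare score levels by participation rate (a ratio), not raw pledge counts.
rates = calls.groupby("score")["pledged"].mean()
print(rates)
```

Taking the mean of a 0/1 pledge flag within each score group gives exactly the percentage contacted who gave, which is the comparison that stays fair even when high scorers got more call attempts.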

So when the calling season ended recently, I evaluated my Phonathon model’s performance, but I didn’t study the one model in isolation: I compared it with a model that I initially rejected last year.

It sounds like I’m second-guessing myself. Didn’t I pick the very best model at the time? Yes, but … I would expect my chosen model to do the best job overall, but perhaps not for certain subgroups — donor types, degree types, or new grads. Each of these strata might have been better described by an alternative model. A year of actual results of fundraising gives me what I didn’t have last year: the largest validation sample possible.

My after-the-fact comparison was between a binary logistic regression model which I had rejected, and the multiple linear regression model which I actually used in Phonathon segmentation. As it turned out, the multiple linear regression model did prove the winner in most scenarios, which was reassuring. I will spare you numerous comparison charts, but I will show you one comparison where the rejected model emerged as superior.

Eight percent of never-donor new grads who were contacted made a pledge. (My definition of a new grad was any alum who graduated in 2008, 2009, or 2010.) The two charts below show how these first-time donors broke down by how they were scored in each model. Due to sparse data, I have left out score level 10.

Have a look, and then read what I’ve got to say.

Neither model did a fantastic job, but I think you’d agree that predicting participation for new grads who have never given before is not the easiest thing to do. In general, I am pleased to see that the higher end of the score spectrum delivered slightly higher rates of participation. I might not have been able to ask for more.

The charts appear similar at first glance, but look at the scale of the Y-axis: In the multiple linear regression model, the highest-scoring group (9, in this case) had a participation rate of only 12%, and strangely, the 6th decile had about the same rate. In the binary logistic regression model, however, the top scoring group reached above 16% participation, and no one else could touch them. The number of contacted new grads who scored 9 is roughly equal between the models, so it’s not a result based on relatively sparse data. The BLR model just did a better job.

There is something significantly different about either new grads, or about never-donors whom we wish to acquire as donors, or both. In fact, I think it’s both. Recall that I left the 10s out of the charts due to sparse data — very few young alumni can aspire to rank up there with older alumni using common measures of affinity. As well, when the dependent variable is Lifetime Giving, as opposed to a binary donor/nondonor state, young alumni are nearly invisible to the model, as they are almost by definition non-donors or at most fledgling donors.

My next logical step is a model dedicated solely to predicting acquisition among younger alumni. But my general point here is that digging up old alternative models and slicing up the pool of solicited prospects for patterns “after the fact” can lead to new insights and improvements.

12 May 2010

Donor acquisition: From ‘giving history’ to ‘giving future’

Filed under: Annual Giving, Donor acquisition, John Sammis, Peter Wylie, Phonathon — kevinmacdonell @ 8:18 am

I hope you’ve had a chance to read “The tough job of bringing in new alumni donors” by Peter Wylie and John Sammis. What did you think of it? I’m sure the most common reaction is “That’s very interesting.” There’s a big gap, though, between reaction and action. I want to talk about converting this knowledge into action.

The subject of that guest post is a lot more than just “interesting” for me. I’ve recently changed jobs, moving from prospect research for major gifts at a university with 30,000 living alumni to running an annual fund phonathon program at a university with more than three times as many alumni. For the first time, I will be responsible not only for mining the data, but helping to design a program that will make use of the results.

Like most institutions, my new employer invests heavily in acquiring new donors. Calling and mailing to never-donors yields a return on investment that may be non-existent in the short term and difficult to quantify in the (future) long term.

Yet it must be done. ROI is important, but if you write off whole segments based only on ROI in the current year, ignoring long-term value, your pool of donors will shrink through attrition. Broadening the base of new donors costs money — an investment we hope to recoup with interest when new donors renew in future years. (See “The Habit of Giving”, by Jonathan Meer, on the subject of a donor tendency to renew. Also see “Donor Lifetime Value”, by Dan Allenby, from his Annual Giving Exchange blog, for a brief discussion of the importance of estimating donor lifetime value vs. costs of continuing to solicit. I also owe thanks to Chuck McClenon at The University of Texas at Austin for helping me understand the dangers of focusing on short-term ROI.)

I have argued that in phonathon we ought to give highest priority to propensity to give (i.e., from our predictive model), and stop using giving history (LYBUNTs, etc.) to determine calling priority. (Previous post: Rethinking calling priority in your phonathon campaign.) The results of our predictive model will give us the ROI side of the equation. But I’m growing increasingly convinced that propensity to give must be balanced against that other dimension: Lifetime value.

Dan Allenby says, “Donor lifetime value means predicting the sum of donors’ gifts over the course of their lives,” and later cautions: “This kind of calculation is complicated and imperfect.” This is so true. I certainly haven’t figured out an answer yet. I presume it will involve delving into our past data to find the average number of years a new donor continues to give, and what the average yearly renewal gift is, to establish a minimum lifetime value figure.
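That minimum figure is just arithmetic once you have the two averages from your own historical data. The numbers below are invented placeholders, not estimates from any real file:

```python
# Back-of-envelope floor on donor lifetime value, per the approach sketched
# above. Both averages would be estimated from past giving data; the figures
# here are invented placeholders.
avg_years_giving = 4.0    # average number of years a new donor continues to give
avg_annual_gift = 60.0    # average yearly renewal gift
min_lifetime_value = avg_years_giving * avg_annual_gift
print(min_lifetime_value)  # 240.0
```

It is a floor, not a forecast: it ignores upgrades, major-gift potential and planned giving, all of which push true lifetime value higher.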

And I suspect this as well: The age of the donor at conversion is going to figure prominently. In this life there are three things that are inescapable: death, taxes, and a call from the Annual Fund! The number of years a donor has left to give is a function of age. We can assume, then, that early conversion is more desirable than late conversion. (Not to mention death-bed conversion.)

The discussion by Wylie and Sammis (to return to that) really seals the deal. Not only do younger alumni have more time left to give, so to speak, but Wylie/Sammis have clearly demonstrated that younger alumni are probably also more likely than older alumni to convert.

If you’re already using predictive modeling in your program, think about the implications. Year after year, the biggest factor in my giving models is age (i.e., class year). Older alumni tend to score higher, especially if my dependent variable is ‘lifetime giving’ going back to the beginning of time. This flies in the face of the evidence, provided by Wylie and Sammis, that non-donor alumni are less and less likely to convert the older they get.

We need a counterbalance to raw propensity-to-give scores in dealing with non-donors. What’s the answer? One possibility is a propensity-to-convert model that doesn’t undervalue young alumni so much. Another might be a matrix, with propensity-to-give scores on one axis, and some measure of lifetime value (factoring in age) on the other axis — the goal being to concentrate activity on the high-propensity AND high-lifetime value prospects, and work outwards from there.

I don’t know. Today all I know is that in order to broaden the base of your donor pool and maximize returns over the long term, you must call non-donors, and you must call the non-donors with the most potential. That means embracing younger alumni — despite what your model tells you to do.

3 May 2010

The tough job of bringing in new alumni donors

Filed under: Alumni, Donor acquisition, John Sammis, Peter Wylie — kevinmacdonell @ 8:48 am

Guest post by Peter Wylie and John Sammis

(Click here: Donor acquisition – Wylie and Sammis – 2 May 2010 – to download Microsoft Word version of this paper.)

Most alumni have never given a cent to their alma maters. “Whoa!” you may be saying, “What’s your evidence, fellows? That’s hard to swallow.”

We would agree. It’s not a pretty picture, but it’s an accurate one. For some documentation you can read “Benchmarking Lifetime Giving in Higher Education”. Sadly, the bottom line is this: In North America the lifetime hard credit alumni participation of at least half of our higher education institutions is less than 50%. If you look at only private institutions, the view is better. Public institutions? Better to not even peek out the window.

We do have a bit of optimism to offer in this paper, but we’ll start off by laying some cards on the table:

  • We’re specialists in data analysis. If we’re not careful, Abraham Maslow’s oft-quoted dictum can apply to us: “If your only tool is a hammer, every problem starts looking like a nail.” We don’t have all the answers on this complex issue. In fact, we believe that institutional leadership (from your president and board of trustees) is what’s most important in getting more alums involved in giving. Data-driven decision making (the underpinning of all our work) is only part of the solution.
  • Donor acquisition is hard. If you don’t believe that, talk to anyone who runs the annual fund for a large state university. Ask them about their success rates with calling and mailing to never-givers. They will emit sighs of frustration and exasperation. They will tell you about the depressing pledge rates from the thousands and thousands of letters and postcards they send out. They will tell you about the enervating effect of wrong numbers and hang-ups on their student callers. They will tell you it isn’t easy. And they’re right; it isn’t.
  • RFM won’t help. (RFM stands for “Recency of Giving,” “Frequency of Giving,” and “Monetary Value of Giving.” It’s a term that came out of the private sector world of direct marketing over 40 years ago.) Applying that concept to our world of higher education advancement, you would call and mail to alums who’ve given recently, often, and a lot. Great idea. But if we’re focused on non-donors … call it a hunch … that’s probably not going to work out too well.

So … what’s the optimism we can offer? First, we’ve had some success with building predictive models for donor acquisition. They’re not great models, but, as John likes to say, “They’re a heck of a lot better than throwing darts.” In the not too distant future we plan to write something up on how we do that.

But for now we’d like to show you some very limited data from three schools — data that may shed just a little light on who among your non-giving alums are going to be a bit easier than others to attract into the giving fold. Again, nothing we show you here is cause for jumping up and down and dancing on the table. Far from it. But we do think it’s intriguing, and we hope it encourages folks like you to share these ideas with your colleagues and supervisors.

Here’s what we’ll be talking about:

  • The schools
  • The data we collected from the schools
  • Some results
  • A makeshift score that you might test out at your own school

The Schools

One of the schools is a northeastern private institution; the other two are southeastern public institutions, one medium-sized, the other quite small.

The data we collected from the schools

The most important aspect of the data we got from each school is lifetime giving (for the exact same group of alums) collected at two points in time. With one school (A), the time interval we looked at stretched out over five years. For the other two (B and C), the interval was just a year. However, with all three schools we were able to clearly identify alums who had converted from non-donor to donor status over the time interval.

We collected a lot of other information from each school, but the data we’ll focus on in this piece include:

  • Preferred year of graduation
  • Home Phone Listed (Yes/No)
  • Business Phone Listed (Yes/No)
  • Email Address Listed (Yes/No)

Some Results

The result that we paid most attention to in this study is that a greater percentage of new donors came from the ranks of recent grads than from “older” grads. To arrive at this result we:

  • Divided all alums into one of four roughly equal-sized groups. If you look at Chart 1, you’ll see that these groups consisted of the oldest 25% of alums who graduated in 1976 and earlier, the next oldest 25% of alums who graduated between the years 1977 and 1990, and so on.
  • For each class year quartile we computed the percentage of those alums who became new donors over the time interval we looked at.
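The quartile split and the conversion rates are easy to reproduce with pandas. The alumni file below is simulated, with a conversion probability deliberately rigged to rise for recent grads, roughly the pattern the charts show:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000

# Simulated alumni file: class year, plus a converted-to-donor flag whose
# probability rises for more recent grads (all figures invented).
class_year = rng.integers(1950, 2010, n)
p_convert = np.clip((class_year - 1950) / 60 * 0.08, 0, 1)
converted = rng.random(n) < p_convert
alumni = pd.DataFrame({"class_year": class_year, "converted": converted})

# Split into four roughly equal class-year quartiles, then compute the
# percentage of each quartile that became new donors over the interval.
alumni["quartile"] = pd.qcut(
    alumni["class_year"], 4, labels=["oldest", "2nd", "3rd", "newest"]
)
rates = alumni.groupby("quartile", observed=True)["converted"].mean() * 100
print(rates.round(1))
```

With real data you would, of course, restrict the denominator as needed (for instance, to previous non-donors only, as discussed below the charts).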

Notice in Chart 1 that, as the graduation years of the alums in School A become more recent, their likelihood of becoming a new donor goes up. In the oldest quartile (1976 and earlier), the conversion rate is 1.2%; it is 1.5% for those graduating between 1977 and 1990, 3% for those graduating between 1991 and 1997, and 7.5% for alums graduating in 1998 or later. You’ll see a similar (but less pronounced) pattern in Charts 2 and 3 for Schools B and C.

At this point you may be saying, “Hold on a second. There are more non-donors in the more recent class year quartiles than in the older class year quartiles, right?”


“So maybe those conversion rates are misleading. Maybe if you just looked at the conversion rates of previous non-donors by class year quartiles, those percentages would flatten out?”

Good question. Take a look at Charts 1a, 2a, and 3a below.

Clearly the pool of non-donors diminishes the longer alums have been out of school. So let’s recompute the conversion rates for each of the three schools based solely on previous non-donors. Does that make a difference? Take a look at Charts 1b, 2b, and 3b.

It does make some difference. But, without getting any more carried away with the arithmetic here, the message is clear: many more new donors are coming from the more recent alums than from the ones who graduated a good while back.

Now let’s look at the three other variables we chose for this study:

  • Home Phone Listed (Yes/No)
  • Business Phone Listed (Yes/No)
  • Email Address Listed (Yes/No)

Specifically, we wanted to know if previous non-donors with a home phone listed were more likely to convert than previous non-donors without a home phone listed. And we wanted to know the same thing for business phone listed and for email address listed.

The overall answer is “yes”; the detailed answers are contained in Charts 4-6. For the sake of clarity, let’s go through Chart 4 together. It shows that:

  • In School A, 5.8% of previous non-donors with a home phone listed converted; 3.7% without a home phone listed converted.
  • In School B, 3.7% of previous non-donors with a home phone listed converted; 1.2% without a home phone listed converted.
  • In School C, 1.0% of previous non-donors with a home phone listed converted; 0.4% without a home phone listed converted.
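The comparison behind Chart 4 is a simple split of previous non-donors on one yes/no flag. Here is a sketch, again with an invented column name (`home_phone_listed`) and invented rows rather than any school's real figures:

```python
import pandas as pd

# Hypothetical previous non-donors: did they have a home phone listed,
# and did they convert to donor status over the interval?
nondonors = pd.DataFrame({
    "home_phone_listed": [1, 1, 0, 0, 1, 0, 1, 0],
    "converted":         [1, 0, 0, 0, 1, 0, 0, 0],
})

# Conversion rate among previous non-donors, split by the yes/no flag.
rates = nondonors.groupby("home_phone_listed")["converted"].mean()
print(rates)
```

Swapping in a business-phone or email flag gives the comparisons shown in Charts 5 and 6.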

Looking at Charts 5 and 6 you can see a similar pattern of differences for whether or not a business phone or an email address was listed.

What comes across from all these charts is that the variables we’ve chosen to look at in this study (year of graduation, home phone, email, and business phone) don’t show big differences between previous non-donors who converted and previous non-donors who did not convert. They show small differences. There’s no getting around that.

What’s encouraging (at least we think so) is that these differences are consistent across the three schools. And since the schools are quite different from one another, we expect that the same kind of differences are likely to hold true at many other schools.

Let’s assume you’re willing to give us the benefit of the doubt on that. Let’s further assume you’d like to check out our proposition at your own school.

A Makeshift Score That You Might Test at Your Own School

Here’s what we did for the data we’ve shown you for each of the three schools:

We created four 0/1 variables for all alums who were non-donors at the first point in time:

  • Youngest Class Year Quartile – alums who were in this group were assigned a 1; all others were assigned a 0.
  • Home Phone Listed — alums who had a home phone listed in the database were assigned a 1; all others were assigned a 0.
  • Business Phone Listed — alums who had a business phone listed in the database were assigned a 1; all others were assigned a 0.
  • Email Listed — alums who had an email address listed in the database were assigned a 1; all others were assigned a 0.

For each alum who was a non-donor at the first point in time, we created a very simple score by adding each of the above variables together. Here’s the formula we used:

SCORE = Youngest Class Year Quartile (0/1) + Home Phone Listed (0/1) + Business Phone Listed (0/1) + Email Listed (0/1)

An alum with a Score of 0 was not in the Youngest Class Year Quartile, did not have a home phone listed, did not have a business phone listed, and did not have an email address listed. An alum with a Score of 1 met only one of these criteria, but not the other three, and so on up to an alum with a Score of 4, who met all the criteria.
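The makeshift score is easy to reproduce. In this sketch the column names and the quartile cutoff are illustrative stand-ins for whatever your own database holds:

```python
import pandas as pd

# Hypothetical previous non-donors with 0/1 indicator columns
# (field names invented for illustration).
nondonors = pd.DataFrame({
    "class_year":     [1972, 1999, 2001, 1985],
    "home_phone":     [1, 0, 1, 1],
    "business_phone": [0, 0, 1, 0],
    "email":          [0, 1, 1, 0],
})

# Flag membership in the youngest class-year quartile.
cutoff = nondonors["class_year"].quantile(0.75)
nondonors["youngest_quartile"] = (nondonors["class_year"] > cutoff).astype(int)

# SCORE = youngest quartile + home phone + business phone + email (0 to 4).
nondonors["score"] = (
    nondonors["youngest_quartile"]
    + nondonors["home_phone"]
    + nondonors["business_phone"]
    + nondonors["email"]
)
print(nondonors[["class_year", "score"]])
```

Once each non-donor has a score, charting conversion rate by score level gives you your own version of the charts below.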

Charts 7-9 show the relationship of the Score to new donor conversion. We’d like you to browse through them. After you do that, we have a few concluding comments.

Some final thoughts:

  1. We think the charts are interesting because they show that using just a little information from an alumni database can point to folks who are far more likely to convert than other folks. Obviously, the score we created here (and suggest you try out at your own school) is very simple. Far more accurate scores can be developed using more advanced statistical techniques and the vast amount of information that’s included in almost all alumni databases.
  2. If you’ve taken the trouble to read this far, we’re, of course, pleased. We believe so fundamentally in data-driven decision making that it brightens our day whenever someone at least entertains our ideas. But the problem may be with all the decision makers and opinion influencers out there who are not reading this piece and who would be, at best, bored by it. These are vice presidents and directors and bloggers and vendors who seem unwilling to make a commitment to the use of internal alumni database information — information that could save millions and millions of dollars on appeals (both mail and calling) to alums who are very unlikely to ever become donors.
  3. If you agree with us on point 2, the question becomes, “What can we do to change their minds, to get their attention?” First of all, we strongly encourage you to suppress the urge to grab them by the scruff of the neck and scold them. That won’t work. (Would that it did.) What we suggest is patience combined with persistence. New ideas and ways of doing things take a long time to take hold in institutions. Consider how long the idea of converting print-based medical records to electronic form, so they can be quickly shared among physicians and others who must make life-altering decisions on the spot, has been around. If memory serves, it’s been a while. But don’t give up making the case and pushing politely but assertively. They’ll come around. We’re (all of us) a benevolent juggernaut whose opinions will eventually prevail.
