Guest post by Peter B. Wylie, with John Sammis
Eight years ago I wrote a piece called “Sports, Fund Raising, and the 80/20 Rule”. It had to do with how most alumni giving in higher education comes from a very small group of former students. Nobody was shocked or awed by the article. The sotto voce response seemed to be, “Thanks, Pete. We got that. Tell us something we don’t know.” That’s okay. It’s like my jokes. A lot of ’em don’t get more than a polite laugh; some get stone silence.
Anyway, time passed and I started working closely with John Sammis. Just about every week we’d look at a new alumni database, and over and over, we’d see the same thing. The top one percent of alumni givers had donated more than the other ninety-nine percent.
Finally, I decided to take a closer look at the lifetime giving data from seven schools that I thought covered a wide spectrum of higher education institutions in North America. Once again, I saw this huge lopsided phenomenon where a small, small group of alums were accounting for a whopping portion of the giving in each school. That’s when I went ahead and put this piece together.
What makes this one any different from the previous piece? For one thing, I think it gives you a more granular look at the lopsidedness, sort of like Google Maps allows you to really focus in on the names of tiny streets in a huge city. But more importantly, for this one I asked several people in advancement whose opinions I respect to comment on the data. After I show you that data, I’ll summarize some of what they had to say, and I’ll add in some thoughts of my own. After that, if you have a chance, I’d love to hear what you think. (Commenting on this blog has been turned off, but feel free to send an email to firstname.lastname@example.org.)
I mentioned above that I looked at data from seven schools. After some agonizing, I decided I would end up putting you to sleep if I showed you all seven. So I chopped it down to four. Believe me, four is enough to make the point.
Here’s how I’ve laid out the data:
To make sure all this is clear, let's go through the data for School A. Take a look at Table 1. It shows the lifetime giving for all alumni donors at the school, divided into ten equal-size groups called deciles. Notice that the alums in decile 10 account for over 95% of that giving; conversely, the alums in decile 1 account for just two tenths of one percent.
Table 1: Amount and Percentage of Total Lifetime Giving in School A for all Alumni by Giving Decile
Moving on to Table 2, here we're looking at only the top decile of alumni givers, divided into one-percent groups. What jumps out from this table is that the top one percent of all givers account for more than 80% of alumni lifetime giving. That's five times as much as the remaining 99% of alumni givers combined.
Table 2: Amount and Percentage of Total Lifetime Giving at School A for Top Ten Percent of Alumni Donors
If that's not lopsided enough for you, let's look at Table 3, where the top one percent of alumni givers is divided up into what I've called milliles, that is, tenth-of-a-percent groups. And lo and behold, the top one tenth of one percent of alumni donors account for more than 60% of alumni lifetime giving. Figure 1 shows the same information a bit more dramatically than the table does.
Table 3: Amount and Percentage of Total Lifetime Giving at School A for Top One Percent of Alumni Donors
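If you'd like to reproduce this kind of breakdown on your own data, here's a minimal sketch in Python. The donor figures are made up purely for illustration, and it assumes the donor count divides evenly into ten groups:

```python
# Sketch: split donors into ten equal-size groups (deciles) by lifetime
# giving, then compute each decile's share of total giving.
# The sample figures below are invented for illustration only.

def decile_shares(gifts, n_groups=10):
    """Return each group's percentage share of total giving,
    from lowest givers (group 1) to highest (group 10)."""
    ranked = sorted(gifts)              # ascending: decile 1 first
    total = sum(ranked)
    size = len(ranked) // n_groups      # assumes count divides evenly
    shares = []
    for i in range(n_groups):
        chunk = ranked[i * size:(i + 1) * size]
        shares.append(100.0 * sum(chunk) / total)
    return shares

# Toy data: 100 donors with a heavily skewed distribution
gifts = [10] * 90 + [1000] * 9 + [100000]
print(decile_shares(gifts))
```

With a distribution skewed like the ones in these tables, the last number printed (decile 10's share) dwarfs all the others.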
What I’d recommend is that you go through the same kinds of tables and charts laid out below for Schools B, C, and D. Go as fast or as slowly as you’d like. Being somewhat impatient, I would focus on Figures 2-4. I think that’s where the real punch in these data resides.
Table 4: Amount and Percentage of Total Lifetime Giving in School B for all Alumni by Giving Decile
Table 5: Amount and Percentage of Total Lifetime Giving at School B for Top Ten Percent of Alumni Donors
Table 6: Amount and Percentage of Total Lifetime Giving at School B for Top One Percent of Alumni Donors
Table 7: Amount and Percentage of Total Lifetime Giving in School C for all Alumni by Giving Decile
Table 8: Amount and Percentage of Total Lifetime Giving at School C for Top Ten Percent of Alumni Donors
Table 9: Amount and Percentage of Total Lifetime Giving at School C for Top One Percent of Alumni Donors
Table 10: Amount and Percentage of Total Lifetime Giving in School D for all Alumni by Giving Decile
Table 11: Amount and Percentage of Total Lifetime Giving at School D for Top Ten Percent of Alumni Donors
Table 12: Amount and Percentage of Total Lifetime Giving at School D for Top One Percent of Alumni Donors
When I boil down to its essence what you’ve just looked at for these three schools, here’s what I see:
What Some People in Advancement Have to Say About All This
Over the years I’ve gotten to know a number of thoughtful/idea-oriented folks in advancement. I asked several of them to comment on the data you’ve just seen. To protect the feelings of the people I didn’t ask, I’ll keep the commenters anonymous. They know who they are, and they know how much I appreciate their input.
Here are a few of the many helpful observations they made:
Most of the big money in campaigns and other advancement efforts does not come from alumni. I’m a bit embarrassed to admit that I had forgotten this fact. CASE puts out plenty of literature that confirms this. It is “friends” who carry the big load in higher education fundraising. At least two of the commenters pointed out that we could look at that fact as a sad commentary on the hundreds and hundreds of thousands of alums who give little or nothing to their alma maters. However, both felt it was better to look at these meager givers as an untapped resource that we have to do a better job of reaching.
The data we see here reflect the distribution of wealth in society. The commenter said, “There simply are very few people who have large amounts of disposable wealth and a whole lot of hard working folks who are just trying to participate in making a difference.” I like this comment; it jibes with my sense of the reality out there.
“It is easier (and more comfortable) to work with donors rather than prospective donors.” The commenter went on to say: “The wealthier the constituency the more you can get away with this approach because you have enough people who can make mega-gifts and that enables you to avoid building the middle of the gift pyramid.” This is very consistent with what some other commenters had to say about donors in the middle of the pyramid — donors who don’t get enough attention from the major giving folks in advancement.
Most people in advancement ARE aware of the lopsidedness. All of the commenters said they felt people in advancement were well aware of the lopsided phenomenon, perhaps not to the level of granularity displayed in this piece. But well aware, nonetheless.
What you see in this piece underestimates the skew because it doesn't include non-givers. I was hoping that none of the commenters would bring this up, because I had not (and still have not) come up with a clear, simple way to convey the point. But let me try an example. Look at Figure 4. It shows that one tenth of one percent of alumni givers account for over 48% of total alumni giving. Now imagine that half of the solicitable alumni in this school have given nothing at all. If we double the base to include all alums, not just alum givers, what happens to that top group? It's no longer one tenth of one percent of the base; it's now one twentieth of one percent, yet it still accounts for the same 48% of the giving. If you're still confused, let's ask someone else reading this thing to explain it. I'm spinning my wheels.
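The commenter's arithmetic can be sketched in a few lines. The counts here are hypothetical round numbers, chosen only to show how the base changes:

```python
# Sketch of the commenter's point, with made-up round numbers:
# the top donors are a fixed set of people, but the base they are
# measured against grows when non-givers are included.

donors = 20000                      # alumni who have given (hypothetical)
top_group = donors // 1000          # top 0.1% of donors = 20 people
non_givers = 20000                  # imagine an equal number of non-givers

all_alumni = donors + non_givers    # base doubles to 40,000
share_of_donors = 100.0 * top_group / donors      # share of donor base
share_of_alumni = 100.0 * top_group / all_alumni  # share of all alumni

print(share_of_donors, share_of_alumni)  # 0.1 0.05
```

Same twenty people, same dollars given, but as a share of all alumni they shrink from one tenth to one twentieth of one percent, so the skew looks even more extreme.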
One More Thought from Me
But here's a thought that I've had for a long time. When I look at the incredible skewness that we see in the top one percent of alumni donors, I say, "WHY?!" Is the difference between the top millile and the bottom millile of that top one percent simply a function of capacity to give? Maybe it is, but I'd like to know. And then I say, call me crazy, LET'S FIND OUT! Not with some online survey. That won't cut it. Let's hire a first-rate survey research team to go out and interview these folks (we're not talking a lot of people here). Would that cost some money to go out and get these answers? Yes, and it would be worth every penny of it. The potential funding sources I've talked to yawn at the idea. But I'll certainly never let go of it.
As always, let us know what you think.
There's been some back-and-forth on one of the listservs about the "correct" way to measure and score alumni engagement. One vendor, which claims to specialize in rigor, is pressing for an emphasis on scientific rigor. That emphasis is misplaced.
No doubt there are sophisticated ways of measuring engagement that I know nothing about, but the question I can’t get beyond is, how do you define “engagement”? How do you make it measurable so that one method applies everywhere? I think that’s a challenging proposition, one that limits any claim to “correctness” of method. This is the main reason that I avoid writing about measuring engagement — it sounds analytical, but inevitably it rests on some messy, intuitive assumptions.
The closest I’ve ever seen anyone come is Engagement Analysis Inc., a firm based here in Canada. They have a carefully chosen set of engagement-related survey questions which are held constant from school to school. The questions are grouped in various categories or “drivers” of engagement according to how closely related (statistically) the responses tend to be to each other. Although I have issues with alumni surveys and the dangers involved in interpreting the results, I found EA’s approach fascinating in terms of gathering and comparing data on alumni attitudes.
(Disclaimer: My former employer was once a client of this firm’s but I have no other association with them. Other vendors do similar and very fine work, of course. I can think of a few, but haven’t actually worked with them, so I will not offer an opinion.)
Some vendors may claim that their method is scientific or analytically correct, but the only requirements for quantifying engagement are that it be reasonable and, if you are benchmarking against other schools, consistent from school to school. If you do want to benchmark against other schools, engage a vendor; it's not easily done on your own.
But if you want to benchmark against yourself (that is, over time), don’t be intimidated by anyone telling you your method isn’t good enough. Just do your own thing. Survey if you like, but call first upon the real, measurable activities that your alumni participate in. There is no single right way, so find out what others have done. One institution will give more weight to reunion attendance than to showing up for a pub night, while another will weigh all event attendance equally. Another will ditch event attendance altogether in favour of volunteer activity, or some other indicator.
Can anyone say definitively that any of these approaches are wrong? I don’t think so — they may be just right for the school doing the measuring. Many schools (mine included) assign fairly arbitrary weights to engagement indicators based on intuition and experience. I can’t find fault with that, simply because “engagement” is not a quantity. It’s not directly measurable, so we have to use proxies which ARE measurable. Other schools measure the degree of association (correlation) between certain activities and alumni giving, and base their weights on that, which is smart. But it’s all the same to me in the end, because ‘giving’ is just another proxy for the freely interpretable quality of “engagement.”
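To make the arbitrariness concrete, here's a minimal sketch of a weighted engagement score. The indicators and weights are entirely hypothetical; the point is that any reasonable weighting, applied consistently from year to year, can serve:

```python
# Sketch of a simple weighted engagement score. Indicators and weights
# are hypothetical -- one school might weight reunions heavily and pub
# nights lightly, another might weight all events equally. What matters
# is applying the same scheme consistently over time.

WEIGHTS = {
    "reunion_attendance": 3,
    "pub_night": 1,
    "volunteer_activity": 2,
}

def engagement_score(alum):
    """Sum of each indicator's count times its weight."""
    return sum(WEIGHTS[k] * alum.get(k, 0) for k in WEIGHTS)

alum = {"reunion_attendance": 1, "pub_night": 4, "volunteer_activity": 0}
print(engagement_score(alum))  # 3*1 + 1*4 + 2*0 = 7
```

Swap in different weights and the rankings shift, which is exactly why no one scheme can claim to be the "correct" one.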
Think of devising a “love score” to rank people’s marriages in terms of the strength of the pair bond. A hundred analysts would head off in a hundred different directions at Step 1: Defining “love”. That doesn’t mean the exercise is useless or uninteresting, it just means that certain claims have to be taken with a grain of salt.
We all have plenty of leeway to choose the proxies that work for us, and I've seen a number of good examples from various schools. I can't say one is better than another. If you do a good job measuring the proxies from one year to the next, you should be able to learn something from the relative rises and falls in engagement scores over time and across different groups of alumni.
Are there more rigorous approaches? Yes, probably. Should that stop you from doing your own thing? Never!
Some of the best predictors in my models are related to the presence or absence of phone numbers and addresses. For example, the presence of a business phone is usually a highly significant predictor of giving. As well, a count of either phone or address updates present in the database is also highly correlated with giving.
Some people have difficulty accepting this as useful information. The most common objection I hear is that such updates can easily come from research and data appends, and are therefore not signals of affinity at all. And that would be true: Any data that exists solely because you bought it or looked it up doesn’t tell you how someone feels about your institution. (Aside from the fact that you had to go looking for them in the first place — which I’ve observed is negatively correlated with giving.)
Sometimes this objection comes from someone who is just learning data mining. Then I know I’m dealing with someone who’s perceptive. They obviously get it, to some degree — they understand there’s potentially a problem.
I’m less impressed when I hear it from knowledgeable people, who say they avoid contact information in their variable selection altogether. I think that’s a shame, and a signal that they aren’t willing to put in the work to a) understand the data they’re working with, or b) take steps to counteract the perceived taint in the data.
If you took the trouble to understand your data (and why wouldn't you?), you'd find out soon enough whether the variables are usable.
An in-house predictive modeler will simply know what the case is, or will take the trouble to find out. A vendor hired to do the work may or may not bother — I don’t know. As far as my own models are concerned, I know that addresses and phone numbers come to us via a mix of voluntary and involuntary means: Via Phonathon, forms on the website, records research, and so on.
I’ve found that a simple count of all historical address updates for each alum is positively correlated with giving. But a line plot of the relationship between number of address updates and average lifetime giving suggests there’s more going on under the surface. Average lifetime giving goes up sharply for the first half-dozen or so updates, and then falls away just as sharply. This might indicate a couple of opposing forces: Alumni who keep us informed of their locations are more likely to be donors, but alumni who are perpetually lost and need to be found via research are less likely to be donors.
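The rise-then-fall pattern is easy to check on your own data. Here's a sketch that computes average lifetime giving by number of address updates, using invented records; in practice you'd plot the result as a line chart:

```python
# Sketch: average lifetime giving by number of address updates.
# The records below are invented to mimic the rise-then-fall shape
# described above: giving climbs for the first several updates,
# then drops off for alumni who are perpetually lost and re-found.

from collections import defaultdict

# (number_of_address_updates, lifetime_giving) -- hypothetical data
records = [(0, 50), (1, 200), (2, 400), (2, 600), (6, 900),
           (12, 20), (15, 0)]

totals = defaultdict(lambda: [0.0, 0])   # update count -> [sum, n]
for n_updates, giving in records:
    totals[n_updates][0] += giving
    totals[n_updates][1] += 1

for n_updates in sorted(totals):
    s, count = totals[n_updates]
    print(n_updates, s / count)          # average giving per update count
```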
If you’re lucky, your database not only has a field in which to record the source of updates, but your records office is making good use of it. Our database happens to have almost 40 different codes for the source, applied to some 300,000 changes of address and/or phone number. Not surprisingly, some of these are not in regular use — some account for fewer than one-tenth of one percent of updates, and will have no significance in a model on their own.
For the most common source types, though, an analysis of their association with giving is very interesting. Some codes are positively correlated with giving, some negatively. In most cases, a variable is positive or negative depending on whether the update was triggered by the alum (positive), or by the institution (negative). On the other hand, address updates that come to us via Phonathon are negatively correlated with giving, possibly because by-mail donors tend not to need a phone call — if ‘giving’ were restricted to phone solicitation only, perhaps the association might flip toward the positive. Other variables that I thought should be positive were actually flat. But it’s all interesting stuff.
For every source code, a line plot of average LT giving and number of updates is useful, because the relationship is rarely linear. The relationship might be positive up to a point, then drop off sharply, or maybe the reverse will be true. Knowing this will suggest ways to re-express the variable. I’ve found that alumni who have a single update based on the National Change of Address database have given more than alumni who have no NCOA updates. However, average giving plummets for every additional NCOA update. If we have to keep going out there to find you, it probably means you don’t want to be found!
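Re-expression can be as simple as replacing the raw count with a pair of indicator variables that match the observed shape. This is a hypothetical sketch based on the NCOA pattern just described; the thresholds are illustrative, not prescriptive:

```python
# Sketch of re-expressing a non-linear variable: instead of feeding the
# raw NCOA update count into a model, split it into 0/1 indicators that
# match the observed pattern (exactly one update = good sign, repeated
# updates = bad sign). Thresholds here are illustrative only.

def ncoa_features(n_ncoa_updates):
    """Return (exactly_one, more_than_one) 0/1 indicators."""
    return (1 if n_ncoa_updates == 1 else 0,
            1 if n_ncoa_updates > 1 else 0)

print(ncoa_features(0))  # (0, 0) -- never updated via NCOA
print(ncoa_features(1))  # (1, 0) -- the "good" single update
print(ncoa_features(4))  # (0, 1) -- repeatedly lost and re-found
```

A regression can then assign a positive weight to the first indicator and a negative weight to the second, something a single linear term for the raw count could never capture.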
Classifying contact updates by source is more work, of course, and it won’t always pay off. But it’s worth exploring if your goal is to produce better, more accurate models.
Here is my attempt at compiling an exhaustive list of every predictor variable I have ever tested in the environment of my data – 85 of them! Not every variable is listed separately – some are grouped together by type or source. In some cases I’ve indicated whether the variable is an indicator variable (0/1) or a continuous variable, as necessary. A few variables are peculiar to the institution where I created my models. Variables that came from external sources are marked with an asterisk.
Some of these predictors were never used in a model because they were eclipsed by other, related variables that had stronger correlation with the dependent variable. Others (such as gender) proved problematic and were left out of my models for specific reasons. And some were tested and found not to be predictive at all. (A final model may contain only 15 to 20 good predictor variables.) Still, I include them all here, because any one of them might add value to models you build for your own data.
Also note: A number of predictors, listed at the end, are based on giving history. These are NOT to be used when your predicted value is ‘giving’. These variables were used in other models, such as Planned Giving potential and likelihood to attend events.
Every year I discover new data points hiding in our database. Many other variables are out there, but often the data exists only for our youngest alumni. Someday, I’m sure, this additional data will yield cool new predictors. For ideas on other variables to look for in your data (including non-university data), refer to the list that begins on page 138 of Joshua Birkholz’s book, “Fundraising Analytics.”
University offices record all kinds of things in their databases simply in order to run their own processes: mailing the alumni magazine, ticketing for events, coding mailing preferences and on and on. Finding novel predictors for your models requires talking to colleagues in your department (and around campus) about the database screens they use, and the things they track. Exploring these avenues can be rewarding and rather social as well!
Here are a few variables I’ve tested which might be lesser-known than the ones I’ve written about earlier. These aren’t likely to appear near the top of your list of variables that are most highly correlated with giving, but it certainly won’t hurt to throw some into a regression analysis. Some variables will be more or less valuable depending on what you’re trying to predict. Some of these are negative predictors; that’s hardly a bad thing, as negative predictors will help to further differentiate the prospect pool, allowing your best prospects to stand out from the crowd.
Here we go:
Does your institution have a records researcher? When mail is returned as undeliverable to the alumni office, this person is busy coding alumni as “lost”, which marks them for later research. These codes may persist in your database after the alum is found, or they might be replaced with another code. In either case, I’ve found that alumni who allow themselves to become lost are less likely to give. A great negative predictor.
Does your alumni magazine have a “green delivery” option? Some alumni opt to access their magazine exclusively by electronic means, as a PDF download perhaps. Mailing preferences are tracked in your database, and often any sort of stated preference is a predictor.
You may already be using ‘number of phonathon refusals’ as a variable, but does your calling program record the reasons for refusal? “Financial reasons” might be a negative predictor, but not all reasons have to be negative. I’ve found that alumni who refuse because they want to handle the donation on their own (for example, mail a cheque when and if they feel like it) are excellent donors. They’re just rather phone-averse.
What about cross-references? We record family relationships among alumni – even grandparent/grandchild and in-laws. I’ve found ‘number of cross-references’ to be a significant predictor.
Alumni who want to be excluded from affinity programs (credit cards, insurance etc.) may be coded in your database so they do not receive unwanted mailings for those products. A negative predictor.
There might be a weird variable or two lurking in people’s names. For certain models, I’ve found that having a first or middle name that consists of a single initial is a positive predictor. This is somewhat correlated with age, but even after adding ‘class year’ to my regression, this variable will still improve the fit of the model. As well, Peter Wylie has written about the character length of an entire name (Prefix, First, Middle, Last, Suffix) being a predictor. Try it.
A year or so ago, I figured out how to query the database to easily retrieve the number of address updates for each alum. This only works when your records personnel create a new address record every time, instead of replacing the previous record. If an alum keeps their alma mater informed of their whereabouts, they’re probably more engaged – and more likely to give (and attend events). Ditto for number of phone updates and number of employment updates.
The previous idea is related to “class notes” for the alumni magazine. Some universities enter alumni submissions into their database so they can run their notes as a report. We don’t, but I wish we did, because I know ‘number of notes’ would be a predictor.
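The name-based variables mentioned a couple of items back can be sketched in a few lines. The field layout is hypothetical; adapt it to however your database stores name parts:

```python
# Sketch of two name-based variables: an indicator for a single-initial
# first or middle name, and the character length of the full name
# (prefix + first + middle + last + suffix). Field layout is
# hypothetical -- adjust to your own database's name parts.

def name_variables(prefix, first, middle, last, suffix):
    single_initial = int(len(first) == 1 or len(middle) == 1)
    full_length = len(prefix + first + middle + last + suffix)
    return single_initial, full_length

print(name_variables("Mr", "J", "Edgar", "Hoover", ""))  # (1, 14)
```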
This might be the tip of the iceberg. Think of all the other great sources of variables that result from normal daily processes (gift processing data, online social networking data, automated call centre data, survey data …), have those conversations with your colleagues, and figure out how to get your hands on those variables for testing.