CoolData blog

13 April 2014

Optimizing lost alumni research, with a twist

Filed under: Alumni, Best practices, engagement, External data, Tableau — kevinmacdonell @ 9:47 am

There are data-driven ways to get the biggest bang for your buck from the mundane activity of finding lost alumni. I’m going to share some ideas on optimizing for impact (which should all sound like basic common sense), and then I’m going to show you a cool data way to boost your success as you search for lost alumni and donors (the “twist”). If lost alumni is not a burning issue for your school, you still might find the cool stuff interesting, so I encourage you to skip down the page.

I’ve never given a great deal of thought to how a university’s alumni records office goes about finding lost alumni. I’ve simply assumed that having a low lost rate is a good thing. More addressable (or otherwise contactable) alumni is good: More opportunities to reengage and, one hopes, attract a gift. So every time I’ve seen a huge stack of returned alumni magazine covers, I’ve thought, well, it’s not fun, but what can you do. Mark the addresses as invalid, and then research the list. Work your way through the pile. First-in, first-out. And then on to the next raft of returned mail.

But is this really a wise use of resources? John Smith graduates in 1983, never gives a dime, never shows up for a reunion … is there likely to be any return on the investment of time to track him down? Probably not. Yet we keep hammering away at it.

All this effort is evident in my predictive models. Whenever I have a variable that is a count of ‘number of address updates’, I find it is correlated with giving — but only up to a point. Beyond a certain number of address updates, the correlation turns sharply negative. The reason is that while highly engaged alumni are conscientious about keeping alma mater informed of their whereabouts, alumni who are completely unengaged are perpetually lost. The ones who are permanently unreachable get researched the most and are submitted for data appends the most. Again and again a new address is entered into the database. It’s often incorrect — we got the wrong John Smith — so the mail comes back undeliverable, and the cycle begins again.
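The pattern described above is easy to check in your own data. Here is a minimal sketch (the column names and the data are assumptions, for illustration only) that computes donor rate by number of address updates — in real data you would look for the rate climbing and then falling off past some threshold:

```python
import pandas as pd

# Synthetic illustration: each row is an alum, with a count of address
# updates and a donor flag. Column names are assumptions.
df = pd.DataFrame({
    "addr_updates": [0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 5, 6],
    "is_donor":     [0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0],
})

# Donor rate at each update count; a real dataset would show the
# correlation turning negative beyond a certain number of updates.
donor_rate = df.groupby("addr_updates")["is_donor"].mean()
```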

Consider that at any time there could be many thousands of lost alumni. It’s a never-ending task. Every day people in your database pull up stakes and move without informing you. Some of those people are important to your mission. Others, like Mr. Smith from the Class of 1983, are not. You should be investing in regular address cleanups for all records, but when it comes down to sleuthing for individuals, which is expensive, I think you’d agree that those John Smiths should never come ahead of keeping in touch with your loyal donors. I’m afraid that sometimes they do — a byproduct, perhaps, of people working in silos, pursuing goals (e.g., low lost rates) that may be laudable in a narrow context but are not sufficiently aligned with the overall mission.

Here’s the common sense advice for optimizing research: ‘First-in, first-out’ is the wrong approach. Records research should always be pulling from the top of the pile, searching for the lost constituents who are deemed most valuable to your mission. Defining “most valuable” is a consultative exercise that must take Records staff out of the back office and face-to-face with fundraisers, alumni officers and others. It’s not done in isolation. Think “integration”.

The first step, then, is consultation. After that, all the answers you need are in the data. Depending on your tools and resources, you will end up with some combination of querying, reporting and predictive modelling to deliver the best research lists possible, preferably on a daily basis. The simplest approach is to develop a database query or report that produces the following lists in whatever hierarchical order emerges from consultation. Research begins with List 1 and does not proceed to List 2 until everyone on List 1 has been found. An example hierarchy might look like this:

  1. Major gift and planned giving prospects: No major gift prospect under active management should be lost (and that’s not limited to alumni). Records staff MUST review their lists and research results with Prospect Research and/or Prospect Management to ensure integrity of the data, share research resources, and alert gift officers to potentially significant events.
  2. Major gift donors (who are no longer prospects): Likewise, these folks should be 100% contactable. In this case, Records needs to work with Donor Relations.
  3. Planned Giving expectancies: I’m not knowledgeable about Planned Giving, but it seems to me that a change of address for an expectancy could signal a significant event that your Planned Giving staff ought to know about. A piece of returned mail might be a good reason to reach out and reestablish contact.
  4. Annual Giving Leadership prospects and donors: The number of individuals is getting larger … but these lists should be reviewed with Annual Fund staff.
  5. Annual Fund donors who gave in the past year.
  6. Annual Fund donors who gave in the year previous.
  7. All other Annual Fund donors, past five or 10 years.
  8. Recent alumni volunteers (with no giving).
  9. Recent event attendees (reunions, etc.) — again, who aren’t already represented in a previous category.
  10. Young alumni with highest scores from predictive models for propensity to give (or similar).
  11. All other non-donor alumni, ranked by predictive model score.
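A hierarchy like this can be expressed as a simple tiering query or script. Below is a minimal sketch in Python with pandas (the column names, flags, and data are assumptions — your real extract will differ) that assigns each lost record to a tier and sorts the research queue accordingly:

```python
import pandas as pd

# Hypothetical extract of lost-alumni records; column names are assumptions.
lost = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "mg_prospect": [False, True, False, False],
    "mg_donor": [False, False, True, False],
    "pg_expectancy": [False, False, False, False],
    "last_gift_fy": [2014, 2010, 2013, None],
})

def tier(row, current_fy=2014):
    # Order mirrors the hierarchy above; extend with the remaining categories.
    if row["mg_prospect"]:
        return 1
    if row["mg_donor"]:
        return 2
    if row["pg_expectancy"]:
        return 3
    if row["last_gift_fy"] == current_fy:
        return 5
    if row["last_gift_fy"] == current_fy - 1:
        return 6
    return 11  # everyone else, to be ranked by model score

lost["tier"] = lost.apply(tier, axis=1)
research_queue = lost.sort_values("tier")
```

Research then simply works the queue from the top; nobody touches a tier-11 record while a tier-1 record is still lost.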

Endless variations are possible. Although I see potential for controversy here, as everyone will feel they need priority consideration, I urge you not to shrink from a little lively discussion — it’s all good. It may be that in the early days of your optimization effort, Annual Fund is neglected while you clean up your major gift and planned giving prospect/donor lists. But in time, those high-value lists will become much more manageable — maybe a handful of names a week — and everyone will be well-served.

There’s a bit of “Do as I say, not as I do” going on here. In my shop, we are still evolving towards becoming data-driven in Records. Not long ago I created a prototype report in Tableau that roughly approximates the hierarchy above. Every morning, a data set is refreshed automatically that feeds these lists, one tab for each list, and the reports are available to Records via Tableau Server and a browser.

That’s all fine, but we are not quite there yet. The manager of the Records team said to me recently, “Kevin, can’t we collapse all these lists into a single report, and have the names ranked in order by some sort of calculated score?” (I have to say, I feel a warm glow when I hear talk like that.) Yes — that’s what we want. A hierarchy like the one above suggests exclusive categories, but a weighted score would allow for a more sophisticated ranking. For example, a young but loyal Annual Fund donor who is also a current volunteer might have a high enough score to outrank a major gift prospect who has no such track record of engagement — maybe properly so. Propensity scores could also play a much bigger role.
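Collapsing the categories into one calculated score might look something like this minimal sketch — the weights, flag names, and sample record are all assumptions, to be set in consultation with stakeholders rather than taken from me:

```python
# Hypothetical weights; in practice these come out of the consultative
# exercise, not from one person's gut.
weights = {
    "mg_prospect": 50,
    "pg_expectancy": 40,
    "gave_last_fy": 20,
    "volunteer": 15,
    "propensity_score": 10,  # 0-1 model output, scaled by its weight
}

def research_score(record: dict) -> float:
    # Sum each weighted input; missing flags count as zero.
    return sum(weights[k] * float(record.get(k, 0)) for k in weights)

# A loyal young donor-volunteer with a strong propensity score...
alum = {"gave_last_fy": 1, "volunteer": 1, "propensity_score": 0.8}
score = research_score(alum)  # 20 + 15 + 8
```

One ranked list, refreshed daily, replaces eleven separate tabs.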

However it shakes out, records research will no longer start the day by picking up where the previous day’s work left off. It will be a new list every morning, based on the actual value of the record to the institution.

And now for the twist …

Some alumni might not be addressable, but they are not totally lost if you have other information such as an email address. If they are opening your email newsletters, invitations and solicitations, then you might be able to determine their approximate geographic location via the IP address given to them by their internet service provider.

That sounds like a lot of technical work, but it doesn’t have to be. Your broadcast email platform might be collecting this information for you. For example, MailChimp has been geolocating email accounts since at least 2010. The intention is to give clients the ability to segment mailings by geographic location or time zone. You can use it to clue you in to where in the world someone lives when they’ve dropped off your radar.

(Yes, yes, I know you could just email them to ask them to update their contact info. But the name of this blog is CoolData, not ObviousData.)

What MailChimp does is append latitude and longitude coordinates to each email record in your account. Not everyone will have coordinates: At minimum, an alum has to have interacted with your emails in order for the data to be collected. As well, ISP-provided data may not be very accurate. This is not the same as identifying exactly where someone lives (which would be fraught with privacy issues), but it should put the individual in the right city or state.

In the data I’m looking at, about half of alumni with an email address also have geolocation data. You can download this data, merge it with your records for alumni who have no current valid address, and then the fun begins.

I mentioned Tableau earlier. If you’ve got lat-long coordinates, visualizing your data on a map is a snap. Have a look at the dashboard below. I won’t go into detail about how it was produced, except to say that it took only an hour or so. First I queried the database for all our alumni who don’t have a valid preferred address in the database. For this example, I pulled ID, sum of total giving, Planned Giving status (i.e., current expectancy or no), and the city, province/state and country of the alum’s most recent valid address. Then I joined the latitude and longitude data from MailChimp, using the ID as the common key.
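The join itself is a one-liner if you work in pandas rather than directly in Tableau. This sketch uses made-up column names and records (the real MailChimp export format and your database fields will differ) to show the left join on ID:

```python
import pandas as pd

# Hypothetical extract: lost alumni from the database.
lost = pd.DataFrame({
    "id": [101, 102, 103],
    "lifetime_giving": [5000.0, 0.0, 250.0],
    "last_city": ["Toronto", "Halifax", "Vancouver"],
})

# Hypothetical geolocation export keyed on the same ID.
geo = pd.DataFrame({
    "id": [101, 103],
    "latitude": [40.71, 25.76],
    "longitude": [-74.01, -80.19],
})

# Left join keeps every lost alum; those never geolocated get NaN coordinates.
mapped = lost.merge(geo, on="id", how="left")
has_geo = mapped.dropna(subset=["latitude", "longitude"])
```

The `has_geo` frame is what you would feed to Tableau for mapping; the rows that fall out have no email interaction data and stay in the ordinary research queue.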

The result was a smallish data file (less than 1,000 records), which I fed into Tableau. Here’s the result, scrubbed of individual personal information — click on the image to get a readable size.

[Image: map_alums — Tableau dashboard map of lost alumni]

The options at top right are filters that enable the user to focus on the individuals of greatest interest. I’ve used Giving and Planned Giving status, but you can include anything — major gift prospect status, age, propensity score — whatever. If I hover my cursor over any dot on the map, a tooltip pops up containing information about the alum at that location, including the city and province/state of the last place they lived. I can also zoom in on any portion of the map. When I take a closer look at a certain tropical area, I see one dot for a person who used to live in Toronto and one for a former Vancouverite, and one of these is a past donor. Likewise, many of the alumni scattered across Africa and Asia last lived in various parts of eastern Canada.

These four people are former Canadians who are now apparently living in a US city — at least according to their ISP. I’ve blanked out most of the info in the tooltip:

[Image: manhattan — zoomed map view with tooltip detail]

If desired, I could also load the email address into the tooltip and turn it into a mailto link: The user could simply click on the link to send a personal message to the alum.

(What about people who check email while travelling? According to MailChimp, location data is not updated unless it’s clear that a person is consistently checking their email over an extended period of time — so vacations or business trips shouldn’t be a factor.)

Clearly this is more dynamic and interesting for research than working from a list or spreadsheet. If I were a records researcher, I would have some fun filtering down on the biggest donors and using the location to guide my search. Having a clue where they live now should shorten the time it takes to decide that a hit is a real match, and should also improve the number of correct addresses. As well, because a person has to actually open an email in order to register their IP with the email platform, they are also sending a small signal of engagement. The fact they’re engaging with our email is assurance that going to the trouble to research their address and other details such as employment is not a waste of time.

This is a work in progress. My example is based on some manual work — querying the database, downloading MailChimp data, and merging the files. Ideally we would automate this process using the vendor’s API and scheduled data refreshes in Tableau Server. I can also see applications beyond searching for lost alumni. What about people who have moved but whose former address is still valid, so the mail isn’t getting returned? This is one way to proactively identify alumni and donors who have moved.

MailChimp offers more than just geolocation. There’s also a nifty engagement score, based on unsubscribes, opens and click-throughs. Stay tuned for more on this — it’s fascinating stuff.

16 July 2013

Alumni engagement scoring vs. predictive modelling

Filed under: Alumni, engagement, predictive modeling — kevinmacdonell @ 8:06 am

Alumni engagement scoring has an undeniable appeal. What could be simpler? Just add up how many events an alum has attended, add more points for volunteering, add more points for supporting the Annual Fund, and maybe some points for other factors that seem related to engagement, and there you have your score. If you want to get more sophisticated, you can try weighting each score input, but generally engagement scoring doesn’t involve any advanced statistics and is easily grasped.

Not so with predictive modelling, which does involve advanced stats and isn’t nearly as intuitive; often it’s not possible to really say how an input variable is related to the outcome. It’s tempting, too, to think of an engagement score as being a predictor of giving and therefore a good replacement for modelling. Actually, it should be predictive — if it isn’t, your score is not measuring the right things — but an engagement score is not the same thing as a predictive model score. They are different tools for different jobs.

Not only are engagement scoring schemes different from predictive models, their simplicity is deceptive. Engagement scoring is incomplete without some plan for acting on observed trends with targeted programming. This implies the ability to establish causal drivers of engagement, which is a tricky thing.

Measuring, acting, and measuring again is a sequence of events — not a one-time thing. In fact, engagement scoring is like checking the temperature at regular intervals over a long period of time, looking for up and down trends not just for the group as a whole but via comparisons of important subgroups defined by age, sex, class year, college, degree program, geography or other divisions. This requires discipline: taking measurements in exactly the same way every year (or quarter, or what-have-you). If the score is fed by a survey component, you must survey constantly and consistently.

Predictive models and engagement scores have some surface similarities. They share variables in common, the output of both is a numerical score applied to every individual, and both require database work and math in order to calculate them. Beyond that, however, they are built in different ways and for different purposes. To summarize:

  • Predictive models are collections of potentially dozens of database variables weighted according to strength of correlation with a well-defined behaviour one is trying to predict (e.g., making a gift), in order to rank individuals by likelihood to engage in that behaviour. Both Alumni Relations and Development can benefit from the use of predictive models.
  • Engagement scores are collections of a very few selectively-chosen database variables, either not weighted or weighted according to common sense and intuition, in order to roughly quantify the quality of “engagement”, however one wishes to define that term, for each individual. The purpose is to allow comparison of groups (faculties, age bands, geographical regions, etc.) with each other. Comparisons may be made at one point in time, but it is more useful to compare relative changes over time. The main user of scores is Alumni Relations, in order to identify segments requiring targeted programming, for example, and to assess the impact of programming on targeted segments over time.

Let’s explore key differences in more depth:

The purpose of modelling is prediction, for ranking or segmentation. The purpose of engagement scoring is comparison.

Predictive modelling scores are not usually included in reports. Used immediately in decision making, they may never be seen by more than one or two people. Engagement scores are included in reports and dashboards, and influence decision-making over a long span of time.

The target variable of a predictive model is quantifiable (e.g., giving, measurable in dollars). In engagement scoring, there is no target variable, only an output — a construct called “engagement”, which itself is not directly measurable.

Potential input variables for predictive models are numerous (100+) and vary from model to model. Input variables for engagement scores are limited to a handful of easily measured attributes (giving, event attendance, volunteering) which must remain consistent over time.

Variables for predictive models are chosen primarily using statistical methods (correlation) and only secondarily using judgment and “common sense.” For example, if the presence of a business phone number is highly correlated with being a donor, it may be included in the model. For engagement scores, variables are chosen by consensus of stakeholders, primarily according to subjective standards. For example, event attendance and giving would probably be deemed by the committee to indicate engagement, and would therefore be included in the score. Advanced statistics rarely come into play. (For more thoughts on this, read How you measure alumni engagement is up to you.)

In predictive models, giving and variables related to the activity of giving are usually excluded as variables (if ‘giving’ is what we are trying to predict). Using any aspect of the target variable as an input is bad practice in predictive modelling and is carefully avoided. You wouldn’t, for example, use attendance at a donor recognition event to predict likelihood to give. In engagement scoring, though, giving history is usually a key input, as it is common sense to believe that being a donor is an indication of engagement. (It might be excluded or reported separately if the aim is to demonstrate the causal link between engagement indicators and giving.)

Modelling variables are weighted using multiple linear regression or another statistical method that calculates the relative influence of each variable while simultaneously controlling for the influence of all other variables in the model. Engagement score variables are usually weighted according to gut feel. For example, coming to campus for Homecoming seems to carry more weight than showing up for a pub night in one’s own city, therefore we give it more weight.
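To make the contrast concrete, here is a minimal synthetic sketch of the statistical approach — the variables, effect sizes, and data are all fabricated for illustration. A least-squares fit estimates each variable’s weight from the data, controlling for the others, rather than having a committee assign points:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Three hypothetical binary inputs: event attendance, volunteering,
# presence of a business phone number.
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Synthetic "giving" outcome with known true weights plus noise.
true_w = np.array([1.5, 0.8, 0.3])
y = X @ true_w + rng.normal(0, 0.5, size=n)

# Multiple linear regression via least squares (intercept prepended).
X1 = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
data_driven_weights = coefs[1:]  # learned from the data, not assigned by gut
```

The recovered weights sit close to the true ones, and each coefficient already accounts for the overlap between variables — something a hand-assigned point scheme cannot do.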

The quality of a predictive model is testable, first against a validation data set, and later against actual results. But there is no right or wrong way to estimate engagement, therefore the quality of scores cannot be evaluated conclusively.

The variables in a predictive model have complex relationships with each other that are difficult or impossible to explain except very generally. Usually there is no reason to explain a model in detail. The components in an engagement score, on the other hand, have plausible (although not verifiable) connections to engagement. For example, volunteering is indicative of engagement, while Name Prefix is irrelevant.

Predictive models are built for a single, time-limited purpose and then thrown away. They evolve iteratively and are ever-changing. On the other hand, once established, the method for calculating an engagement score must not change if comparisons are to be made over time. Consistency is key.

Which is all to say: alumni engagement scoring is not predictive modelling. (And neither is RFM analysis.) Only predictive modelling is predictive modelling.

18 April 2013

A response to ‘What do we do about Phonathon?’

I had a thoughtful response to my blog post from earlier this week (What do we do about Phonathon?) from Paul Fleming, Database Manager at Walnut Hill School for the Arts in Natick, Massachusetts, about half an hour from downtown Boston. With Paul’s permission, I will quote from his email, and then offer my comments afterward:

I just wanted to share with you some of my experiences with Phonathon. I am the database manager of a 5-person Development department at a wonderful boarding high school called the Walnut Hill School for the Arts. Since we are a very small office, I have also been able to take on the role of the organizer of our Phonathon. It’s only been natural for me to combine the two to find analysis about the worth of this event, and I’m happy to say, for our own school, this event is amazingly worthwhile.

First of all, as far as cost vs. gain, this is one of the cheapest appeals we have. Our Phonathon callers are volunteer students who are making calls either because they have a strong interest in helping their school, or they want to be fed pizza instead of dining hall food (pizza: our biggest expense). This year we called 4 nights in the fall and 4 nights in the spring. So while it is an amazing source of stress during that week, there aren’t a ton of man-hours put into this event other than that. We still mail letters to a large portion of our alumni base a few times a year. Many of these alumni are long-shots who would not give in response to a mass appeal, but our team feels that the importance of the touch point outweighs the short-term inefficiencies that are inherent in this type of outreach.

Secondly, I have taken the time to prioritize each of the people who are selected to receive phone calls. As you stated in your article, I use things like recency and frequency of gifts, as well as other factors such as event participation or whether we have other details about their personal life (job info, etc). We do call a great deal of lapsed or nondonors, but if we find ourselves spread too thin, we make sure to use our time appropriately to maximize effectiveness with the time we have. Our school has roughly 4,400 living alumni, and we graduate about 100 wonderful, talented students a year. This season we were able to attempt phone calls to about 1,200 alumni in our 4 nights of calling. The higher-priority people received up to 3 phone calls, and the lower-priority people received just 1-2.

Lastly, I was lucky enough to start working at my job in a year in which there was no Phonathon. This gave me an amazing opportunity to test the idea that our missing donors would give through other avenues if they had no other way to do so. We did a great deal of mass appeals, indirect appeals (alumni magazine and e-newsletters), and as many personalized emails and phone calls as we could handle in our 5-person team. Here are the most basic of our findings:

In FY11 (our only non-Phonathon year), 12% of our donors were repeat donors. We reached about 11% participation, our lowest ever. In FY12 (the year Phonathon returned):

  • 27% of our donors were new/recovered donors, a 14% increase from the previous year.
  • We reached 14% overall alumni participation.
  • Of the 27% of donors who were considered new/recovered, 44% gave through Phonathon.
  • The total amount of donors we had gained from FY11 to FY12 was about the same number of people who gave through the Phonathon.
  • In FY13 (still in progress, so we’ll see how this actually plays out), 35% of the previously-recovered donors who gave again gave in response to less work-intensive mass mailing appeals, showing that some of these Phonathon donors can, in fact, be converted and (hopefully) cultivated long-term.

In general, I think your article was right on point. Large universities with a for-pay, ongoing Phonathon program should take a look and see whether their efforts should be spent elsewhere. I just wanted to share with you my successes here and the ways in which our school has been able to maintain a legitimate, cost-effective way to increase our participation rate and maintain the quality of our alumni database.

Paul’s description of his program reminds me there are plenty of institutions out there who don’t have big, automated, and data-intensive calling programs gobbling up money. What really gets my attention is that Walnut Hill uses alumni affinity factors (event attendance, employment info) to prioritize calling to get the job done on a tight schedule and with a minimum of expense. This small-scale data mining effort is an example for the rest of us who have a lot of inefficiency in our programs due to a lack of focus.

The first predictive models I ever created were for a relatively small university Phonathon that was run with printed prospect cards and manual dialing — a very successful program, I might add. For those of you at smaller institutions wondering if data mining is possible only with massive databases, the answer is NO.

And finally, how wonderful it is that Walnut Hill can quantify exactly what Phonathon contributes in terms of new donors, and new donors who convert to mail-responsive renewals.

Bravo!

15 April 2013

What do we do about Phonathon?

Filed under: Alumni, Annual Giving, Phonathon — kevinmacdonell @ 5:41 am

I love Phonathon. I love what it does, and I love the data it produces. But sad to say, Phonathon may be the sick old man of fundraising. In fact some have taken its pulse and declared it dead.

A few weeks ago, a Director of Annual Giving named Audra Vaz posted this question to a listserv: “I’m writing to see if any institutions out there have transitioned away from their Phonathon program. If so, how did it affect your Annual Giving program?”

A number of people immediately came to the defence of Phonathon with assurances of the long-term value of calling programs. The responses went something like this: Get rid of Phonathon?? It’s a great point of connection between an institution and its alumni, particularly its younger alumni. It’s the best tool for donor acquisition. It’s a great way to update contact and employment information. Don’t do it!

Audra wasn’t satisfied. “As currently run, it’s expensive and ineffective,” she wrote of her program at Florida Atlantic University in Boca Raton. “It takes up 30% of my budget, brings in less than 2% of Annual Fund donations and only has a 20% ROI. I could use that money for building societies, personal solicitations, and direct mail which is much more effective for us. In a difficult budget year, I cannot be nostalgic and continue to justify the bleed for a program that most institutions do yet hardly any makes money off of. Seems like a bad business model to me.”

I can’t disagree with Audra. Anyone following fundraising listservs knows that, in general, contact rates and productivity are declining year after year. And out of the contacts it does manage to make, Phonathon generates scads of pledges that are never fulfilled, entailing the additional cost of reminder mailings and write-offs. There are those who say that Phonathon should be viewed as an investment and not an expense. I have been inclined to that view myself. The problem is that yes, it IS an expense, and not a small one. If Phonathons create value in all the other ways that the defenders say they do, then where are the numbers to prove it? Where’s the ROI? Audra had numbers; the defenders did not. At strategic planning time, numbers talk louder than opinions.

When I contacted Audra recently to get permission to use her name, she told me she has opted to keep her Phonathon program for now, but will market its services to other university divisions to turn it into a revenue generator (athletics and arts ticket sales, admissions welcome calls, invitations to events, and alumni membership renewals). That sounds like a good idea. I can think of a number of additional ways to keep Phonathon alive and relevant, but since this is a data-related blog I will focus on just two.

1. Stop calling everybody!

At many institutions, Phonathon is used as a mass-contact tool for indiscriminately soliciting anyone the Annual Fund believes might have a pulse. This approach is becoming less and less sustainable. The same question is asked repeatedly on the listservs: “How many times, on average, do you attempt to call alumni non-donors before you retire their call sheet?” And then people give their one-size-fits-all answers: five times, seven times, whatever times per record. Given how graduating classes have increased in size for most institutions, I am not surprised to read that some programs are stretched too thin to call very deeply. As one person wrote recently: “Because of time and resources constraints, we’re lucky to get two attempts in with nondonor/long lapsed alumni.”

I just don’t get it.

We know that people who have attended events are more likely to pick up the phone. We know that alumni who have shared their job title with us are more likely to pick up the phone. We know that alumni who have given us their email address are more likely to pick up the phone. So why in 2013 are schools still expending the same amount of energy on each of their prospective donors as if they were all exactly alike? They are NOT all alike, and these schools are wasting time and money.

If you’ve got automated calling software, you should be adding up the number of times you’ve successfully reached individual alumni over the years (regardless of the call result), and use that data to build predictive models for likelihood to answer the phone. If you don’t have that historical data, you should at least consider an engagement-based scoring system to focus your efforts on alumni who have demonstrated some of the usual signs of affinity: coming to events, sharing contact and employment information, having other family members who are alumni, volunteering, responding to surveys and so on.
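For shops without historical contact data, the engagement-based fallback is simple enough to sketch in a few lines. Everything here is an assumption — the affinity factors, the point values, and the sample records are placeholders to be tuned against your own results:

```python
# Hypothetical affinity factors and point values; adjust to your data.
AFFINITY_POINTS = {
    "attended_event": 3,
    "volunteer": 3,
    "has_job_title": 2,
    "has_email": 2,
    "family_alumni": 2,
    "answered_survey": 1,
}

def call_priority(alum: dict) -> int:
    # Sum the points for every affinity signal present on the record.
    return sum(pts for flag, pts in AFFINITY_POINTS.items() if alum.get(flag))

prospects = [
    {"id": 1, "attended_event": True, "has_email": True},
    {"id": 2},
    {"id": 3, "volunteer": True, "has_job_title": True, "has_email": True},
]

# Call the highest-affinity prospects first; retire the bottom sooner.
call_order = sorted(prospects, key=call_priority, reverse=True)
```

The number of call attempts can then vary with the score — five or more attempts at the top of the list, one or two at the bottom — instead of a one-size-fits-all rule.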

A phone contact propensity score (and related models such as donor acquisition likelihood) will allow you to make cuts to your program when and if the time comes. You can feel more confident that you’re trimming the bottom, cutting away the least productive slice of your program.

2. Think outside Phonathon!

Your phone program is a data generation machine, granting you a wide window view on the behaviours of your alumni and donors. I’m not talking just about address updates, as valuable as those are. You know how many times they’ve picked up the phone when they see your ID come up on the display, and you might also know how long they’ve spent on the phone with your student callers. This is not trivial information nor is it of interest only to Phonathon managers.

Relate this behavioural data to other desired behaviours: Are your current big donors characterized by picking up more often? Do your Planned Giving expectancies tend to have longer conversations on average? What about volunteering, mentoring, and other activities? Phone contact history is real, affinity-related data, delivered fresh to you daily, lifting the curtain on who likes you.

(When I say real data, I mean REAL. This is a record of what individuals have actually DONE, not what they’ve stated as a preference in a survey. This data doesn’t lie.)

A few closing thoughts. …

I said earlier that Phonathon has been used (or misused) as a mass-contact tool. Software and automation enable a hired team of students to make a staggering number of phone calls in a very short time. The bulk of long-lapsed and never-donors are approached by phone rather than mail: The cost of a single call attempt seems negligible, so Phonathon managers spread their acquisition efforts as thinly as possible, trying to turn over every last stone.

There’s something to be said about having adequate volume in order to generate new donors, but here’s the problem: The phone is no longer a mass-contact medium. In fact it’s well on its way to becoming a niche medium, handled by a whole new type of device. Some people answer the phone and respond positively to being approached that way, and for that reason phone will be important for as long as there are phones. But the masses are no longer answering.

These days some fundraisers think of email as their new mass-contact medium of choice. Again they must be thinking in terms of cost, since it hardly matters whether you’re sending 1,000 emails or 100,000 emails. And again they’re mistaken in thinking that email is practically free — they’re just not counting the full cost to the institution of the practice of spamming people.

The truth is, there is no reliable mass-contact medium anymore. If email (or phone, or social media) is a great fundraising channel, it’s not because it’s a seemingly cheap way to reach out to thousands of people. It’s a great fundraising channel when, and only when, it reaches out to the right people at the right time.

  1. Alumni and donors are not all the same. They are not defined by their age, address or other demographic groupings. They are individual human beings.
  2. They have preferred channels for communicating and giving.
  3. These preferences are revealed only through observation of past behaviours. Not through self-reporting, not through classification by age or donor status, not by any other indirect means.
  4. We cannot know the real preferences of everyone in our database. Therefore, we model on observed past behaviours to make intelligent guesses about the preferences we don’t already know.
  5. Our models are an improvement on current practice, but they are imperfect. All models are wrong; we will make them better. And we will keep Phonathon healthy and productive.

21 March 2013

The lopsided nature of alumni giving

Filed under: Alumni, Major Giving, Peter Wylie — Tags: , , , — kevinmacdonell @ 6:06 am

Guest post by Peter B. Wylie

(Printer-friendly PDF download of this post available here: Lopsided Nature of Alum Giving – Wylie)

Eight years ago I wrote a piece called “Sports, Fund Raising, and the 80/20 Rule”. It had to do with how most alumni giving in higher education comes from a very small group of former students. Nobody was shocked or awed by the article. The sotto voce response seemed to be, “Thanks, Pete. We got that. Tell us something we don’t know.” That’s okay. It’s like my jokes. A lot of ‘em don’t get more than a polite laugh; some get stone silence.

Anyway, time passed and I started working closely with John Sammis. Just about every week we’d look at a new alumni database, and over and over, we’d see the same thing. The top one percent of alumni givers had donated more than the other ninety-nine percent.

Finally, I decided to take a closer look at the lifetime giving data from seven schools that I thought covered a wide spectrum of higher education institutions in North America. Once again, I saw this huge lopsided phenomenon where a small, small group of alums were accounting for a whopping portion of the giving in each school. That’s when I went ahead and put this piece together.

What makes this one any different from the previous piece? For one thing, I think it gives you a more granular look at the lopsidedness, sort of like Google Maps allows you to really focus in on the names of tiny streets in a huge city. But more importantly, for this one I asked several people in advancement whose opinions I respect to comment on the data. After I show you that data, I’ll summarize some of what they had to say, and I’ll add in some thoughts of my own. After that, if you have a chance, I’d love to hear what you think. (Commenting on this blog has been turned off, but feel free to send an email to kevin.macdonell@gmail.com.)

The Data

I mentioned above that I looked at data from seven schools. After some agonizing, I decided I would end up putting you to sleep if I showed you all seven. So I chopped it down to four. Believe me, four is enough to make the point.

Here’s how I’ve laid out the data:

  • For each of the four schools I ranked only the alumni givers (no other constituencies) into deciles (10 groups), centiles (100 groups), and milliles (1,000 groups), by total lifetime hard credit giving. (There is actually no such word as “milliles” in English; I have borrowed from the French.)
  • In the first table in each set I’ve included all the givers. In the second table I’ve included only the top ten percent of givers. And in the third table I’ve included only the top one percent of givers. (The chart following the third table graphically conveys some of the information included in the third table.)

To make sure all this is clear, let’s go through the data for School A. Take a look at Table 1. It shows the lifetime giving for all alumni donors at the school, divided into ten equal-size groups called deciles. Notice that the alums in decile 10 account for over 95% of that giving. Conversely, the alums in decile 1 account for two tenths of one percent of the giving.

Table 1: Amount and Percentage of Total Lifetime Giving in School A for all Alumni by Giving Decile

[Table 1 image]

Moving on to Table 2. Here we’re looking at only the top decile of alumni givers divided into one percent groups. What jumps out from this table is that the top one percent of all givers account for more than 80% of alumni lifetime giving. That’s five times as much as the remaining 99% of alumni givers.

Table 2: Amount and Percentage of Total Lifetime Giving at School A for Top Ten Percent of Alumni Donors

[Table 2 image]

If that’s not lopsided enough for you, let’s look at Table 3, where the top one percent of alumni givers is divided up into what I’ve called milliles, that is, one-tenth-of-one-percent groups. And lo and behold, the top one tenth of one percent of alumni donors account for more than 60% of alumni lifetime giving. Figure 1 shows the same information in a bit more dramatic way than the table does.

Table 3: Amount and Percentage of Total Lifetime Giving at School A for Top One Percent of Alumni Donors

[Table 3 image]

[Figure 1 image]

What I’d recommend is that you go through the same kinds of tables and charts laid out below for Schools B, C, and D. Go as fast or as slowly as you’d like. Being somewhat impatient, I would focus on Figures 2-4. I think that’s where the real punch in these data resides.

Table 4: Amount and Percentage of Total Lifetime Giving in School B for all Alumni by Giving Decile

[Table 4 image]

Table 5: Amount and Percentage of Total Lifetime Giving at School B for Top Ten Percent of Alumni Donors

[Table 5 image]

Table 6: Amount and Percentage of Total Lifetime Giving at School B for Top One Percent of Alumni Donors

[Table 6 image]

[Figure 2 image]

Table 7: Amount and Percentage of Total Lifetime Giving in School C for all Alumni by Giving Decile

[Table 7 image]

Table 8: Amount and Percentage of Total Lifetime Giving at School C for Top Ten Percent of Alumni Donors

[Table 8 image]

Table 9: Amount and Percentage of Total Lifetime Giving at School C for Top One Percent of Alumni Donors

[Table 9 image]

[Figure 3 image]

Table 10: Amount and Percentage of Total Lifetime Giving in School D for all Alumni by Giving Decile

[Table 10 image]

Table 11: Amount and Percentage of Total Lifetime Giving at School D for Top Ten Percent of Alumni Donors

[Table 11 image]

Table 12: Amount and Percentage of Total Lifetime Giving at School D for Top One Percent of Alumni Donors

[Table 12 image]

[Figure 4 image]

When I boil down to its essence what you’ve just looked at for these three schools, here’s what I see:

  • In School B over half of the total giving is accounted for by three tenths of one percent of the givers.
  • In School C we have pretty much the same situation as we have in School B.
  • In School D over 60% of the total giving is accounted for by two tenths of one percent of the givers.

What Some People in Advancement have to Say about All This

Over the years I’ve gotten to know a number of thoughtful/idea-oriented folks in advancement. I asked several of them to comment on the data you’ve just seen. To protect the feelings of the people I didn’t ask, I’ll keep the commenters anonymous. They know who they are, and they know how much I appreciate their input.

Here are a few of the many helpful observations they made:

Most of the big money in campaigns and other advancement efforts does not come from alumni. I’m a bit embarrassed to admit that I had forgotten this fact. CASE puts out plenty of literature that confirms this. It is “friends” who carry the big load in higher education fundraising. At least two of the commenters pointed out that we could look at that fact as a sad commentary on the hundreds and hundreds of thousands of alums who give little or nothing to their alma maters. However, both felt it was better to look at these meager givers as an untapped resource that we have to do a better job of reaching.

The data we see here reflect the distribution of wealth in society. The commenter said, “There simply are very few people who have large amounts of disposable wealth and a whole lot of hard working folks who are just trying to participate in making a difference.” I like this comment; it jibes with my sense of the reality out there.

“It is easier (and more comfortable) to work with donors rather than prospective donors.” The commenter went on to say: “The wealthier the constituency the more you can get away with this approach because you have enough people who can make mega-gifts and that enables you to avoid building the middle of the gift pyramid.” This is very consistent with what some other commenters had to say about donors in the middle of the pyramid — donors who don’t get enough attention from the major giving folks in advancement.

Most people in advancement ARE aware of the lopsidedness. All of the commenters said they felt people in advancement were well aware of the lopsided phenomenon, perhaps not to the level of granularity displayed in this piece. But well aware, nonetheless.

What you see in this piece underestimates the skew because it doesn’t include non-givers. I was hoping that none of the commenters would bring up this fact because I had not (and still have not) come up with a clear, simple way to convey what the commenter had pointed out. But let’s see if I can give you an example. Look at Figure 4. It shows that one tenth of one percent of alumni givers account for over 48% of total alumni giving. However, let’s imagine that half of the solicitable alumni in this school have given nothing at all. Okay, if we now double the base to include all alums, not just alum givers, then what happens to the percentage size of that top one tenth of one percent of givers? It’s no longer one tenth of one percent; it’s now one twentieth of one percent. If you’re confused, let’s ask someone else reading this thing to explain it. I’m spinning my wheels.
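Let me try the arithmetic one more way, with hypothetical counts (the numbers below are invented for illustration, not taken from any of the four schools):

```python
# The commenter's point, in arithmetic (hypothetical counts): if the top group
# is 0.1% of *givers*, and givers are only half the solicitable base, then the
# same people are 0.05% of *all* alumni -- the skew is even more extreme.
givers = 100_000
top_group = int(givers * 0.001)          # top 0.1% of givers = 100 people
all_alumni = givers * 2                  # assume half of alumni never gave

share_of_givers = 100 * top_group / givers
share_of_alumni = 100 * top_group / all_alumni
print(share_of_givers, share_of_alumni)  # 0.1 0.05
```

Same hundred people, same dollars; only the denominator changes.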

One More Thought from Me

But here’s a thought that I’ve had for a long time. When I look at the incredible skewness that we see in the top one percent of alumni donors, I say, “WHY?!” Is the difference between the top millile and the bottom millile in that top one percent simply a function of capacity to give? Maybe it is, but I’d like to know. And then I say, call me crazy, LET’S FIND OUT! Not with some online survey. That won’t cut it. Let’s hire a first-rate survey research team to go out and interview these folks (we’re not talking a lot of people here). Would it cost some money to go out and get these answers? Yes, and it would be worth every penny. The potential funding sources I’ve talked to yawn at the idea. But I’ll certainly never let go of it.

As always, let us know what you think.

22 January 2013

Sticking a pin in acquisition mail bloat

Filed under: Alumni, Annual Giving, Vendors — Tags: , , — kevinmacdonell @ 6:45 am

I recently read a question on a listserv that prompted me to respond. A university in the US was planning to solicit about 25,000 of its current non-donor alumni. The question was: How best to filter a non-donor base of 140,000 in order to arrive at the 25,000 names of those most likely to become donors? This university had only ever solicited donors in the past, so this was new territory for them. (How those alumni became donors in the first place was not explained.)

One responder to the question suggested narrowing down the pool by recent class years, reunion class years, or something similar, also using any ratings if they were available, and then doing an Nth-record select on the remaining records to get to 25,000. Selecting every Nth record is one way to pick an approximately random sample. If you aren’t able to make this selection, the responder suggested, then your mail house vendor should be able to.
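For reference, an Nth-record select is trivial to do yourself; here is a sketch (the names are invented placeholders):

```python
# Sketch of the "Nth-record select" mentioned above: to draw roughly 25,000
# names from a filtered pool, keep every Nth record.
def nth_record_select(records, target):
    n = max(1, len(records) // target)   # step size between kept records
    return records[::n][:target]         # slice by step, trim to target

pool = [f"alum_{i}" for i in range(140_000)]
sample = nth_record_select(pool, 25_000)
print(len(sample))  # 25000
```

If the pool is sorted (by class year, say), step-sampling still spreads the picks evenly across it, which is part of its appeal as an approximately random draw.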

This answer was fine, up until the “Nth selection” part. I also had reservations about putting the vendor in control of prospect selection. So here are some thoughts on the topic of acquisition mailings.

Doing a random selection assumes that all non-donor alumni are alike, or at least that we aren’t able to make distinctions. Neither assumption would be true. Although they haven’t given yet, some alumni feel closer affinity to your school than others, and you should have some of these affinity-related cues stored in your database. This suggests that a more selective approach will perform better than a random sample.

Not long ago, I isolated all our alumni who converted from never-donor to donor at any time in the past two years. (Two years instead of just one, in order to boost the numbers a bit.) Then I compared this group with the universe of all the never-donors who had failed to convert, based on a number of attributes that might indicate affinity. Some of my findings included:

  • “Converters” were more likely than “non-converters” to have an email address in the database.
  • They were more likely to have answered the phone in our Phonathon (even though the answer was ‘no pledge’).
  • They were more likely to have employment information (job title or employer name) in the database.
  • They were more likely to have attended an event since graduating.

Using these and other factors, I created a score which was used to select which non-donor alumni would be included in our acquisition mailing. I’ve been monitoring the results, and although new donors do tend to be the alumni with higher scores, frankly we’ve had poor results via mail solicitation, so evaluation is difficult. This in itself is not unusual: New-donor acquisition is very much a Phonathon phenomenon for us — in our phone results, the effectiveness of the score is much more evident.
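One simple way to combine cues like these into a selection score is an additive count, sketched below with invented records and equal weights (a real model would weight each factor by its observed relationship to conversion):

```python
# Hedged sketch of an additive affinity score: one point per affinity cue
# present, then rank the non-donor pool and take the top slice.
def affinity_score(rec):
    cues = ("has_email", "answered_phone", "has_employment", "attended_event")
    return sum(1 for c in cues if rec.get(c))

nondonors = [
    {"id": 1, "has_email": True, "attended_event": True},
    {"id": 2, "answered_phone": True},
    {"id": 3},
]
ranked = sorted(nondonors, key=affinity_score, reverse=True)
print([r["id"] for r in ranked])  # [1, 2, 3]
```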

Poor results or not, it’s still better than random, and whenever you can improve on random, you can reduce the size of a mailing. Acquisition mailings in general are way too big, simply because they’re often just random — they have to cast a wide net. Unfortunately your mail house is unlikely to encourage you to get more focused and save money.

Universities contract with vendors for their expertise and efficiency in dealing with large mailings, including cleaning the address data and handling the logistics that many small Annual Fund offices just aren’t equipped to deal with. A good mail house is a valuable ally and source of direct-marketing expertise. But acquisition presents a conflict for vendors, who make their money on volume. Annual Fund offices should be open to advice from their vendor, but they would do well to develop their own expertise in prospect selection, and make drastic cuts to the bloat in their mailings.

Donors may need to be acquired at a loss, no question. It’s about lifetime value, after all. But if the cumulative cost of that annual appeal exceeds the lifetime value of your newly-acquired donor, then the price is too high.
