CoolData blog

9 September 2015

Prospect Research, ten years from now

Guest post by Peter B. Wylie

 

(This opinion piece was originally posted to the PRSPT-L Listserv.)

 

As many of you know, Dave Robertson decided to talk about the future of prospect research in New Orleans via a folk song. It was great. The guy’s got good pipes and plays both the harmonica and the guitar really well.

 

Anyway, here’s what I wrote him back in March. It goes on a bit. So if you get bored, just dump it.

 

It was stiflingly hot on that morning in early September of 1967 when my buddy Bunzy sat down in the amphitheater of Cornell’s med school in Manhattan. He and the rest of his first-year classmates were chattering away when an older gentleman shuffled through a side door out to a podium. The room fell silent as the guy peered over his reading glasses out onto a sea of mostly male faces: “I’ll be straight with you, folks. Fifty percent of what we teach you over the next four years will be wrong. The problem is, we don’t know which fifty percent.”

 

I’ve often thought about the wisdom embedded in those words. The old doc was right. It is very hard for any of us to predict what anything will be like twenty years hence. Nate Silver, both in his book “The Signal and the Noise” and on his immensely popular website, underlines how bad highly regarded experts in most fields are at making even short-range predictions.

 

So when Dave Robertson asked me to jot down some ideas about how prospect research will look a decade or more from now, I wanted to say, “Dave, I’ll be happy to give it a shot. But I’ll probably be as off the mark as the futurists in the early eighties. Remember those dudes? They gave us no hint whatsoever of how soon something called the internet would arrive and vastly transform our lives.”

 

With that caveat, I’d like to talk about two topics. The first has to do with something I’m pretty sure will happen. The second has to do with something sprinkled more with hope than certainty.

 

On to the first. I am increasingly convinced prospect researchers a decade or more from now will have far more information about prospects than they currently have. Frankly, I’m not enthusiastic about that possibility. Why? Privacy. Take my situation as I write down my thoughts for Dave. I’m on the island of Vieques, Puerto Rico with Linda to celebrate our fortieth wedding anniversary. We’ve been here almost two weeks. Any doggedly persistent law enforcement investigator could find out the following:

 

  • What flights we took to get here
  • What we paid for the tickets
  • The cost of each meal we paid for with a credit card
  • What ebooks I purchased while here
  • What shows I watched on Netflix
  • How many miles we’ve driven in our rental jeep, and where
  • How happy we seemed with each other while in the field of the many security cameras, even in this rustic setting

 

You get the idea. Right now, I’m gonna assume that the vast majority of prospect researchers have no access to such information. More importantly, I assume their ethical compasses would steer them far away from even wanting to acquire such information.

 

But that’s today. March 2, 2015. How about ten years from now? Or 15 years from now, assuming I’m still able to make fog on a mirror? As it becomes easier and easier to amalgamate data about old Pete, I think all that info will be much easier to access by people willing to purchase it. That includes people who do prospect research. And if those researchers do get access to such data, it will help them enormously in finding out if I’m really the right fit for the mission of their fundraising institution. I guess that’s okay. But at my core, I don’t like the fact that they’ll be able to peek so closely into who I am and what I want in the days I have left on this wacky planet. I just don’t.

 

On to the second thing. Anybody who’s worked in prospect research even a little knows that the vast majority of the money raised by their organization comes from a small group of donors. If you look at university alumni databases, it’s not at all unusual to find that one tenth of one percent of the alums have given almost half of the total current lifetime dollars. I think that situation needs to change. I think these institutions must find ways to get more involvement from the many folks who really like them and who have the wherewithal to give them big gifts.

 

So … how will the prospect researchers of the future play a key role in helping fundraising organizations (be they universities or general nonprofits) do a far better job of identifying and cultivating donors who have the resources and inclination to pitch in on the major giving front? I think/hope it’s gonna be in the way campaigns are run.

 

Right now, here’s what seems to happen. A campaign is launched with the help of a campaign consultant. A strategy is worked out whereby both the consultants and major gift officers spread out and talk to past major givers and basically say, “Hey, you all were really nice and generous to us in the last campaign. We’re enormously grateful for that. We truly are. But this time around we could use even more of your generosity. So … What can we put you down for?”

 

This is a gross oversimplification of what happens in campaigns. And it’s coming from a guy who doesn’t do campaign consulting. Still, I don’t think I’m too far off the mark. To change this pattern, I think prospect researchers will have to be more assertive with the captains of these campaigns: the consultants, the VPs, the executives, all of whom talk so authoritatively about how things should be done and who can simultaneously be as full of crap as a Christmas goose.

 

These prospect researchers are going to have to put their feet down on the accelerator of data driven decision-making. In effect, they’ll need to say:

 

“We now have pretty damn accurate info on how wealthy a whole bunch of our younger donors are. And we have good analytics in place to ascertain which of them are most likely to step it up soon … IF we strategize how to nurture them over the long run. Right now, we’re going after the low hanging fruit that is comprised of tried and true donors. We gotta stop just doing that. Otherwise, we’re leaving way too much money on the table.”

 

All that I’ve been saying in this second part is not new. Not at all. Perhaps what may be a little new is what I have hinted at but not come right out and proclaimed. In the future we’ll need more prospect researchers to stand up and be outspoken to the campaign movers and shakers. To tell these big shots, politely and respectfully, that they need to start paying attention to the data. And do it in such a way that they get listened to.

 

That’s asking a lot of folks whose nature is often quiet, shy, and introverted. I get that. But some of them are not. Perhaps they are not as brazen as James Carville is/was. But we need more folks like them who will stand up and say, “It’s the data, stupid!” Without yelling and without saying “stupid.”

5 May 2015

Predictive modelling for the nonprofit organization

Filed under: Non-university settings, Why predictive modeling? — kevinmacdonell @ 6:15 pm

 

Predictive modelling uses data to focus an organization’s limited resources of time and money where they will earn the best return. People who work at nonprofits can probably relate to the “limited resources” part of that statement. But is it a given that predictive analytics is possible or necessary for any organization?

 

This week, I’m in Kingston, Ontario to speak at the conference of the Association of Fundraising Professionals, Southeastern Ontario Chapter (AFP SEO). As usual I will be talking about how fundraisers can use data. Given the range of organizations represented at this conference, I’m considering questions that a small nonprofit might need to answer before jumping in. They boil down to two concerns, “when” and “what”:

 

When is the tipping point at which it makes sense to employ predictive modelling? And how is that tipping point defined — dollars raised, number of donors, size of database, or what?

 

What kind of data do we need to collect in order to do predictive modelling? How much should we be willing to spend to gather that data? What type of model should we build?

 

These sound like fundamental questions, yet I’ve rarely had to consider them. In higher education advancement, the questions are answered already.

 

In the first case, most universities are already over the tipping point. Even relatively small institutions have more non-donor alumni than they can solicit all at once via mail and phone — it’s just too expensive and it takes too much time. Prioritization is always necessary. Not all universities are using predictive modelling, but all could certainly benefit from doing so.

 

Regarding the second question — what data to collect — alumni databases are typically rich in the types of data useful for gauging affinity and propensity to give. Knowing everyone’s age is a huge advantage, for example. Even if the Advancement office doesn’t have ages for everyone, at least it has class year, which is usually a good proxy for age. Universities don’t always do a great job of tracking key engagement factors (event attendance, volunteering, and so on), but I’ve been fortunate to have enough of this existing data to build robust models.

 

The situation is different for nonprofits, including small organizations that may not have real databases. (That situation was the topic I wrote about in my previous post: When does a small nonprofit need a database?) One can’t simply assume that predictive modelling is worth the trouble, nor can one assume that the data is available or worth investing in.

 

Fortunately the first question isn’t hard to answer, and I’ve already hinted at it. The tipping point occurs when the size of your constituency is so large that you cannot afford to reach out to all of them simultaneously. Your constituency may consist of any combination of past donors, volunteers, clients of your services, ticket buyers and subscribers, event attendees — anyone who has a reason to be in your database due to some connection with your organization.

 

Here’s an extreme example from the non-alumni charity world. Last year’s ALS Ice-Bucket Challenge already seems like a long time ago (which is the way of any social media-driven frenzy), but the real challenge is now squarely on the shoulders of ALS charities. Their constituency has grown by millions of new donors, but there is no guarantee that this windfall will translate into an elevated level of donor support in the long run. It’s a massive donor-retention problem: Most new donors will not give again, but retaining even a fraction could lead to a sizeable echo of giving. It always makes sense to ask recent donors to give again, but I think it would be incredibly wasteful to attempt reaching out to 2.5 million one-time donors. The organization needs to reach out to the right donors. I have no special insight into what ALS charities are doing, but this scenario screams “predictive modelling” to me. (I’ve written about it here: Your nonprofit’s real ice bucket challenge.)

 

Few of us will ever face anything quite like the ice-bucket windfall, because it’s nearly unique, but smaller versions of this dilemma abound. Let’s say your theatre company has a database with 20,000 records in it — people who have purchased subscriptions over the years, plus single-ticket buyers, plus all your donors (current and long-lapsed). You plan to run a two-week phone campaign for donations, but there’s no way you can reach everyone with a phone number in that limited time. You need a way to rank your constituents by likelihood to give, in order to maximize your return.
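If you’ve never done this before, the mechanics of a first-pass ranking can be as simple as a handful of lines. Here’s a minimal sketch in Python with pandas — every file and column name is hypothetical, and the 1/0 components and their equal weights are placeholders you would refine against your own giving history:

```python
import pandas as pd

# Hypothetical constituent file with ~20,000 records; all column
# names here are illustrative, not a real schema.
df = pd.read_csv("constituents.csv")

# A crude additive score built from behaviours already in the database.
df["score"] = (
    (df["gifts_last_5_years"] > 0).astype(int)       # recent donor
    + (df["years_subscribed"] >= 3).astype(int)      # loyal subscriber
    + (df["single_tickets_bought"] > 0).astype(int)  # ticket buyer
    + df["has_phone"].astype(int)                    # reachable at all
)

# Call from the top of the ranked list until the two weeks run out.
call_list = df.sort_values("score", ascending=False)
print(call_list[["constituent_id", "score"]].head(10))
```

Even a blunt score like this beats calling in alphabetical order, because every component is a behaviour with a plausible connection to giving.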

 

(About five years ago, I built a model using data from a symphony orchestra’s database. Among other things, I found that certain combinations of concert series subscriptions were associated with higher levels of giving. So: you don’t need a university alumni database to do this work!)

 

It works with smaller numbers, too. Let’s say your college has 1,000 alumni living in Toronto, and you want to invite them all to an event. Your budget allows a mail piece to be sent to just 250, however. If you have a predictive model for likelihood to attend an event, you can send mail to only the best prospective attendees, and perhaps email the rest.

 

In a reverse scenario, if your charity has 500 donors and you’re fully capable of contacting and visiting them all as often as you like, then there’s no business need for predictive modelling. I would also note that modelling is harder to do with small data sets, entailing problems such as overfitting. But that’s a technical issue; it’s enough to know that modelling is something to consider only at the point when resources won’t cover the need to engage with your whole constituency.

 

Now for the second question: What data do you need?

 

My first suggestion is that you look to the data you already have. Going back to the example of the symphony orchestra: The data I used actually came from two different systems — one for donor management, the other for ticketing and concert series subscriptions. The key was that donors and concert attendees were each identified with a unique ID that spanned both databases. This allowed me to discover that people who favoured the great Classical composers were better donors than those who liked the “pops” concerts — but that people who attended both were the best donors of all! If the orchestra intended to identify a pool of prospects for leadership gifts, this would be one piece of the ranking score that would help them do it.
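For the curious, the join itself is straightforward once both systems share an ID. Here’s a sketch of that kind of merge-and-compare in Python with pandas, with assumed file and column names standing in for the real exports:

```python
import pandas as pd

# Exports from two systems, joined on a shared constituent ID
# (file and column names are assumptions, not the real schemas).
donors = pd.read_csv("donor_system.csv")       # constituent_id, lifetime_giving
tickets = pd.read_csv("ticketing_system.csv")  # constituent_id, classical_sub, pops_sub

merged = donors.merge(tickets, on="constituent_id", how="inner")

# Label each person by the combination of series subscribed to.
def series_group(row):
    if row["classical_sub"] and row["pops_sub"]:
        return "both"
    if row["classical_sub"]:
        return "classical"
    if row["pops_sub"]:
        return "pops"
    return "neither"

merged["series"] = merged.apply(series_group, axis=1)

# Compare median lifetime giving across the four groups.
print(merged.groupby("series")["lifetime_giving"].agg(["count", "median"]))
```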

 

So: Explore your existing data. And while you’re doing so, don’t assume that messy, old, or incomplete data is not useable. It’s usually worth a look.

 

What about collecting new data? This can be an expensive proposition, and I think it would be risky to gather data just so you can build predictive models. There is no guarantee that what you’re spending time and money to gather is actually correlated with giving or other behaviours. My suggestion would be to gather data that serves operational purposes as well as analytical ones. A good example might be event attendance. If your organization holds a lot of events, you’ll want to keep statistics on attendance and how effective each event was. If you can find ways to record which individuals were at the event (donors, volunteers, community members), you will get this information, plus you will get a valuable input for your models.

 

Surveying is another way organizations can collect useful data for analysis while also serving other purposes. It’s one way to find out how old donors are — a key piece of information. Just be sure that your surveys are not anonymous! In my experience, people are not turned off by non-anonymous surveys so long as you’re not asking deeply personal questions. Offering a chance to win a prize for completing the survey can help.

 

Data you might gather on individuals falls into two general categories: Behaviours and attributes.

 

Behaviours are any type of action people take that might indicate affinity with your organization. Giving is obviously the big one, but other good examples would be event attendance or volunteering, or any type of interaction with your organization.

 

Attributes are just characteristics that prospects happen to have. This includes gender, where a person lives, age, wealth information, and so on.

 

Of the two types, behavioural factors are always the more powerful. You can never go wrong by looking at what people actually do. As the saying has it, people give of their time, talent, and treasure. Focus on those interactions first.

 

People also give of something else that is increasingly valuable: Their attention. If your organization makes use of a broadcast email platform, find out if it tracks opens and click-throughs — not just at the aggregate level, but at the individual level. Some platforms even assign a score to each email address that indicates the level of engagement with your emails. If you run phone campaigns, keep track of who answers the call. The world is so full of distractions, these periods of time when you have someone’s full attention are themselves gifts — and they are directly associated with likelihood to give financially.
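If your platform exports activity at the individual level but doesn’t score it for you, rolling your own score is not hard. A sketch, assuming a hypothetical per-send log with one row per recipient per email (the weighting of clicks over opens is my own arbitrary choice):

```python
import pandas as pd

# Hypothetical export from a broadcast email platform: one row per
# recipient per send, with 1/0 flags for opened and clicked.
log = pd.read_csv("email_log.csv")  # columns: email, sent, opened, clicked

# Roll up to one engagement score per address; clicks weighted more
# heavily than opens, both scaled by the number of emails sent.
engagement = log.groupby("email").agg(
    sends=("sent", "sum"),
    opens=("opened", "sum"),
    clicks=("clicked", "sum"),
)
engagement["score"] = (engagement["opens"] + 2 * engagement["clicks"]) / engagement["sends"]

print(engagement.sort_values("score", ascending=False).head())
```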

 

Attributes are trickier. They can lead you astray with correlations that look real, but aren’t. Age is always a good thing to have, but gender is only sometimes useful. And I would never purchase external data (census and demographic data, for example) for predictive modelling alone. Aggregate data at the ZIP or postal code level is useful for a lot of things, but is not the strongest candidate for a model input. The correlations with giving to your organization will be weak, especially in comparison with the behavioural data you have on individuals.

 

What type of model does it make sense for a nonprofit to try to build first? Any modelling project starts with a clear statement of the business need. Perhaps you want to identify which ticket buyers will convert to donors, or which long-lapsed donors are most likely to respond positively to a phone call, or who among your past clients is most likely to be interested in becoming a volunteer.

 

Whatever it is, the key thing is that you have plenty of historical examples of the behaviour you want to predict. You want to have a big, fat target to aim for. If you want to predict likelihood to attend an event and your database contains 30,000 addressable records, you can be quite successful if 1,000 of those records have some history of attending events — but your model will be a flop if you’ve only got 50. The reason is that you’re trying to identify the behaviours and characteristics that typify the “event attendee,” and then go looking in your “non-attendee” group for those people who share those behaviours and characteristics. The better they fit the profile, the more likely they are to respond to an event invitation. Fifty people is probably not enough to define what is “typical.”
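A quick census of your target is worth doing before anything else. Something along these lines (the column name is illustrative, and the 500 cutoff is a rough rule of thumb rather than a statistical law):

```python
import pandas as pd

df = pd.read_csv("constituents.csv")
positives = int((df["ever_attended_event"] == 1).sum())

print(f"{positives} of {len(df)} records show the target behaviour")
if positives < 500:  # rough rule of thumb, not a hard cutoff
    print("Warning: this target may be too small for a reliable first model.")
```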

 

So for your first foray into modelling, I would avoid trying to hit very small targets. Major giving and planned giving propensity tend to fall into that category. I know why people choose to start there — because it implies high return on investment — but you would be wise to resist.

 

At this point, someone who’s done some reading may start to obsess about which highly advanced technique to use. But if you’re new to hands-on work, I strongly suggest using a simple method that requires you to study each variable individually, in relation to the outcome you’re trying to model. The best starting point is to get familiar with comparing groups (attendees vs. non-attendees, donors vs. non-donors, etc.) using means and medians, preferably with the aid of a stats software package. (Peter Wylie’s book, Data Mining for Fundraisers, has this covered.) From there, learn a bit more about exploring associations and correlations between variables by looking at scatterplots and using the Pearson product-moment correlation. That will set you up well for learning to do multiple linear regression, if you choose to take it that far.
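That progression — group comparisons, then correlations, then regression — translates directly into a few lines of Python, if that happens to be your tool. A sketch with invented column names; any stats package will do the same job:

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("alumni.csv")  # file and column names are illustrative

# Step 1: compare groups one variable at a time using means and medians.
print(df.groupby("is_donor")["events_attended"].agg(["mean", "median"]))

# Step 2: check the association between two numeric variables with the
# Pearson product-moment correlation (and eyeball a scatterplot, too).
r, p = stats.pearsonr(df["events_attended"], df["lifetime_giving"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Step 3: once the individual variables make sense, combine them
# in a multiple linear regression.
X = sm.add_constant(df[["events_attended", "years_since_grad", "has_email"]])
model = sm.OLS(df["lifetime_giving"], X).fit()
print(model.summary())
```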

 

In sum: Predictive modelling isn’t for everyone, but you don’t need Big Data or a degree in statistics to get some benefit from it. Start small, and build from there.

 

25 August 2014

Your nonprofit’s real ice bucket challenge

It was only a matter of time. Over the weekend, a longtime friend dumped a bucket of ice water over his head and posted the video to Facebook. He challenged three friends — me included — to take the Ice Bucket Challenge in support of ALS research. I passed on the cold shower, but this morning I did make a gift to ALS Canada, a cause I wouldn’t have supported had it not been for my friend Paul and the brilliant campaign he participated in.*

Universities and other charities are, of course, watching closely and asking themselves how they can replicate this phenomenon. Fine … I am skeptical that central planning and a modest budget can give birth to such a massive juggernaut of socially-responsible contagion … but I wish them luck.

While we can admire our colleagues’ amazing work and good fortune, I am not sure we should envy them. In the coming year, ALS charities will be facing a huge donor-retention issue. Imagine gaining between 1.5 and 2 million new donors in the span of a few months. Now, I have no knowledge of what ALS fundraisers really intend to do with their hordes of newly-acquired donors. Maybe retention is not a goal. But it is a sure thing that the world will move on to some other craze. Retaining a tiny fraction of these donors could make the difference between the ice bucket challenge being just a one-time, non-repeatable anomaly and turning it into a foundation for long-term support that permanently changes the game for ALS research.

Perhaps the ice bucket challenge can be turned into an annual event that becomes as established as the walks, runs and other participatory events that other medical-research charities have. Who knows.

What is certain is that the majority of new donors will not give again. Equally certain is that it would be irresponsibly wasteful for charities to spread their retention budget equally over all new donors.

Which brings me to predictive modeling. Some portion of new donors WILL give again. Maybe something about the challenge touched them more deeply than the temporary fun of the ice bucket dare. Maybe they learned something about the disease. Maybe they know someone affected by ALS. There is no direct way to know. But I would be willing to bet that higher levels of engagement can be found in patterns in the data.

What factors might be predictors of longer-term engagement? It is not possible to say without some analysis, but sources of information might include:

  • How the donor arrived at the site prior to making a gift (following a link from another website, following a link via a social media platform, using a search engine).
  • How the donor became aware of the challenge (this is a question on some giving pages).
  • Whether they consented to future communications: Mail, email, or both.
  • Whether the donor continued on to a page on the website beyond the thank you page. (Did they start watching an ALS-related video and if so, how much of it did they watch?)
  • Whether the donor clicked on a social media button to share the news of their gift, and where they shared it.

Shreds of ambiguous clues scattered here and there, admittedly, but that is what a good predictive model detects and amplifies. If it were up to me, I would also have asked on the giving page whether the donor had done the ice bucket thing. A year from now, my friend Paul is going to clearly remember the shock of pouring ice water over his head, plus the positive response he got on Facebook, and this will bring to mind his gift and the need to give again. My choosing not to do so might be associated with a lower level of commitment, and thus a lower likelihood of renewing. Just a theory.**
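To be concrete about how those shreds get amplified: I have no idea what, if anything, the ALS charities are modelling, but a sketch of the general idea — a logistic regression over 1/0 features derived from clues like the ones above, with hypothetical file, column, and feature names throughout — might look like this once enough time has passed to know who gave again:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical file of ice-bucket-era donors: 1/0 features derived
# from the clues listed above, plus a gave_again outcome flag.
df = pd.read_csv("new_donors.csv")
features = ["came_via_search", "opted_in_email", "watched_video",
            "shared_on_social", "did_the_challenge"]

model = LogisticRegression()
model.fit(df[features], df["gave_again"])

# Score everyone, then focus the retention budget on the top of the list.
df["p_second_gift"] = model.predict_proba(df[features])[:, 1]
print(df.sort_values("p_second_gift", ascending=False).head())
```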

Data-informed segmentation aimed at getting a second gift from newly-acquired donors is not quite as sexy as being an internet meme. However, unlike riding the uncontrollable wave of a social media sensation, retention is something that charities might actually be able to plan for.

* I would like to see this phenomenon raise all boats for medical charities, therefore I also gave to Doctors Without Borders Canada and the Molly Appeal for Medical Research. Check them out.

** Update: I am told that actually, this question IS asked. I didn’t see it on the Canadian site, but maybe I just missed it. Great!

POSTSCRIPT

I was quoted on this topic in a story in the September 4th online edition of the Chronicle of Philanthropy. Link (subscribers only): After Windfall, ALS Group Grapples With 2.4-Million Donor Dilemma

18 February 2014

Save our planet

Filed under: Annual Giving, Why predictive modeling? — kevinmacdonell @ 9:09 pm

You’ve seen those little signs — they’re in every hotel room these days. “Dear Guest,” they say, “Bed sheets that are washed daily in thousands of hotels around the world use millions of gallons of water and a lot of detergent.” The card then goes on to urge you to give some indication that you don’t want your bedding or towels taken away to be laundered.

Presumably millions of small gestures by hotel guests have by now added up to a staggering amount of savings in water, energy and detergent.

It reminds me of what predictive analytics does for a mass-contact area of operation such as Annual Giving. If we all trimmed down the number of acquisition contacts we make — expending the same amount of effort but only on the people with the highest propensity to give, or likelihood to pick up the phone, or greatest chance of opening our email or what-have-you — we’d be doing our bit to collectively conserve a whole lot of human energy, and not a few trees.

With many advancement leaders questioning whether they can continue to justify an expensive Phonathon program that is losing more ground every year, getting serious about focusing resources might just be the saviour of a key acquisition program, to boot.

18 April 2013

A response to ‘What do we do about Phonathon?’

I had a thoughtful response to my blog post from earlier this week (What do we do about Phonathon?) from Paul Fleming, Database Manager at Walnut Hill School for the Arts in Natick, Massachusetts, about half an hour from downtown Boston. With Paul’s permission, I will quote from his email, and then offer my comments afterward:

I just wanted to share with you some of my experiences with Phonathon. I am the database manager of a 5-person Development department at a wonderful boarding high school called the Walnut Hill School for the Arts. Since we are a very small office, I have also been able to take on the role of the organizer of our Phonathon. It’s only been natural for me to combine the two to find analysis about the worth of this event, and I’m happy to say, for our own school, this event is amazingly worthwhile.

First of all, as far as cost vs. gain, this is one of the cheapest appeals we have. Our Phonathon callers are volunteer students who are making calls either because they have a strong interest in helping their school, or they want to be fed pizza instead of dining hall food (pizza: our biggest expense). This year we called 4 nights in the fall and 4 nights in the spring. So while it is an amazing source of stress during that week, there aren’t a ton of man-hours put into this event other than that. We still mail letters to a large portion of our alumni base a few times a year. Many of these alumni are long-shots who would not give in response to a mass appeal, but our team feels that the importance of the touch point outweighs the short-term inefficiencies that are inherent in this type of outreach.

Secondly, I have taken the time to prioritize each of the people who are selected to receive phone calls. As you stated in your article, I use things like recency and frequency of gifts, as well as other factors such as event participation or whether we have other details about their personal life (job info, etc). We do call a great deal of lapsed or nondonors, but if we find ourselves spread too thin, we make sure to use our time appropriately to maximize effectiveness with the time we have. Our school has roughly 4,400 living alumni, and we graduate about 100 wonderful, talented students a year. This season we were able to attempt phone calls to about 1,200 alumni in our 4 nights of calling. The higher-priority people received up to 3 phone calls, and the lower-priority people received just 1-2.

Lastly, I was lucky enough to start working at my job in a year in which there was no Phonathon. This gave me an amazing opportunity to test the idea that our missing donors would give through other avenues if they had no other way to do so. We did a great deal of mass appeals, indirect appeals (alumni magazine and e-newsletters), and as many personalized emails and phone calls as we could handle in our 5-person team. Here are the most basic of our findings:

In FY11 (our only non-Phonathon year), 12% of our donors were repeat donors. We reached about 11% participation, our lowest ever. In FY12 (the year Phonathon returned):

  • 27% of our donors were new/recovered donors, a 14% increase from the previous year.
  • We reached 14% overall alumni participation.
  • Of the 27% of donors who were considered new/recovered, 44% gave through Phonathon.
  • The total number of donors we gained from FY11 to FY12 was about the same as the number of people who gave through the Phonathon.
  • In FY13 (still in progress, so we’ll see how this actually plays out), 35% of the previously-recovered donors who gave again gave in response to less work-intensive mass mailing appeals, showing that some of these Phonathon donors can, in fact, be converted and (hopefully) cultivated long-term.

In general, I think your article was right on point. Large universities with a for-pay, ongoing Phonathon program should take a look and see whether their efforts should be spent elsewhere. I just wanted to share with you my successes here and the ways in which our school has been able to maintain a legitimate, cost-effective way to increase our participation rate and maintain the quality of our alumni database.

Paul’s description of his program reminds me there are plenty of institutions out there who don’t have big, automated, and data-intensive calling programs gobbling up money. What really gets my attention is that Walnut Hill uses alumni affinity factors (event attendance, employment info) to prioritize calling to get the job done on a tight schedule and with a minimum of expense. This small-scale data mining effort is an example for the rest of us who have a lot of inefficiency in our programs due to a lack of focus.

The first predictive models I ever created were for a relatively small university Phonathon that was run with printed prospect cards and manual dialing — a very successful program, I might add. For those of you at smaller institutions wondering if data mining is possible only with massive databases, the answer is NO.

And finally, how wonderful it is that Walnut Hill can quantify exactly what Phonathon contributes in terms of new donors, and new donors who convert to mail-responsive renewals.

Bravo!

13 November 2012

Making a case for modeling

Guest post by Peter Wylie and John Sammis

(Click here to download post as a print-friendly PDF: Making a Case for Modeling – Wylie Sammis)

Before you wade too far into this piece, let’s be sure we’re talking to the right person. Here are some assumptions we’re making about you:

  • You work in higher education advancement and are interested in analytics. However, you’re not a sophisticated stats person who throws around terms like regression and cluster analysis and neural networks.
  • You’re convinced that your alumni database (we’ll leave “parents” and “friends” for a future paper) holds a great deal of information that can be used to pick out the best folks to appeal to — whether by mail, email, phone, or face-to-face visits.
  • Your boss and your boss’s bosses are, at best, less convinced than you are about this notion. At worst, they have no real grasp of what analytics (data mining and predictive modeling) are. And they may seem particularly susceptible to sales pitches from vendors offering expensive products and services for using your data – products and services you feel might cause more problems than they will solve.
  • You’d like to find a way to bring these “boss” folks around to your way of thinking, or at least move them in the “right” direction.

If we’ve made some accurate assumptions here, great. If we haven’t, we’d still like you to keep reading. But if you want to slip out the back of the seminar room, not to worry. We’ve done it ourselves more times than you can count.

Okay, here’s something you can try:

1. Divide the alums at your school into ten roughly equal-size groups (deciles) by class year. Table 1 is an example from a medium-sized four-year college.

Table 1: Class Years and Counts for Ten Roughly Equal Size Groups (Deciles) of Alumni at School A

2. Create a very simple score:

EMAIL LISTED(1/0) + HOME PHONE LISTED(1/0)

This score can assume three values: “0”, “1”, or “2.” A “0” means the alum has neither an email nor a home phone listed in the database. A “1” means the alum has either an email listed in the database or a home phone listed in the database, but not both. A “2” means the alum has both an email and a home phone listed in the database.

3. Create a table that contains the percentage of alums who have contributed at least $1,000 lifetime to your school for each score level for each class year decile. Table 2 is an example of such a table for School A.

Table 2: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School A

 

4. Create a three-dimensional chart that conveys the same information contained in the table. Figure 1 is an example of such a chart for School A.
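(If you’d rather script steps 1 through 3 than point-and-click them, here is a minimal sketch in Python with pandas — the file and column names are invented, and any stats package can produce the same table:)

```python
import pandas as pd

df = pd.read_csv("alumni.csv")  # illustrative column names throughout

# Step 1: ten roughly equal-size class-year deciles (1 = oldest alums).
df["decile"] = pd.qcut(df["class_year"], 10, labels=range(1, 11))

# Step 2: the simple 0/1/2 score.
df["score"] = df["email_listed"].astype(int) + df["home_phone_listed"].astype(int)

# Step 3: percentage in each decile/score cell who have given
# at least $1,000 lifetime.
df["gave_1k"] = (df["lifetime_giving"] >= 1000).astype(int)
table = df.pivot_table(index="decile", columns="score",
                       values="gave_1k", aggfunc="mean") * 100
print(table.round(0))
```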

In the rest of this piece we’ll be showing tables and charts from seven other very diverse schools that look quite similar to the ones you’ve just seen. At the end, we’ll step back and talk about the importance of what emerges from these charts. We’ll also offer advice on how to explain your own tables and charts to colleagues and bosses.

If you think the above table and chart are clear, go ahead and start browsing through what we’ve laid out for the other seven schools. However, if you’re not completely sure you understand the table and the chart, see if the following hypothetical questions and answers help:

Question: “Okay, I’m looking at Table 2 where it shows 53% for alums in Decile 1 who have a score of 2. Could you just clarify what that means?”

Answer: “That means that 53% of the oldest alums at the school who have both a home phone and an email listed in the database have given at least $1,000 lifetime to the school.”

Question: “Then … if I look to the far left in that same row where it shows 29% … that means that 29% of the oldest alums at the school who have neither a home phone nor an email listed in the database have given at least $1,000 lifetime to the school?”

Answer: “Exactly.”

Question: “So those older alums who have a score of 2 are way better givers than those older alums who have a score of 0?”

Answer: “That’s how we see it.”

Question: “I notice that in the younger deciles, regardless of the score, there are a lot of 0 percentages or very low percentages. What’s going on there?”

Answer: “Two things. One, most younger alums don’t have the wherewithal to make big gifts. They need years, sometimes many years, to get their financial legs under them. The second thing? Over the last seven years or so, we’ve looked at the lifetime giving rates of hundreds and hundreds of four-year higher education institutions. The news is not good. In many of them, well over half of the solicitable alums have never given their alma maters a penny.”

Question: “So, maybe for my school, it might be good to lower that giving amount to something like ‘has given at least $500 lifetime’ rather than $1,000 lifetime?”

Answer: “Absolutely. There’s nothing sacrosanct about the thousand-dollar level that we chose for this piece. You can certainly lower the amount, but you can also raise the amount. In fact, if you told us you were going to try several different amounts, we’d say, ‘Fantastic!’”

Okay, let’s go ahead and have you browse through the rest of the tables and charts for the seven schools we mentioned earlier. Then you can compare your thoughts on what you’ve seen with what we think is going on here.

(Note: After looking at a few of the tables and charts, you may find yourself saying, “Okay, guys. Think I got the idea here.” If so, go ahead and fast forward to our comments.)

Table 3: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School B

 

Table 4: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School C

Table 5: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School D

Table 6: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School E

Table 7: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School F

Table 8: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School G

Table 9: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School H

Definitely a lot of tables and charts. Here’s what we see in them:

  • We’ve gone through the material you’ve just seen many times. Our eyes have always been drawn to the charts; we use the tables for back-up. Even though we’re data geeks, we almost always find charts more compelling than tables. That is most certainly the case here.
  • We find the patterns in the charts across the seven schools remarkably similar. (We could have included examples from scores of other schools. The patterns would have looked the same.)
  • The schools differ markedly in terms of giving levels. For example, the alums in School C are clearly quite generous in contrast to the alums in School F. (Compare Figure 3 with Figure 6.)
  • We’ve never seen an exception to one of the obvious patterns we see in these data: The longer alums have been out of school, the more money they have given to their school.
  • The “time out of school” pattern notwithstanding, we continue to be struck by the huge differences in giving levels (especially among older alums) across the levels of a very simple score. School G is a prime example. Look at Figure 7 and look at Table 8. Any way you look at these data, it’s obvious that alums who have even a score of “1” (either a home phone listed or an email listed, but not both) are far better givers than alums who have neither listed.

Now we’d like to deal with an often-advanced argument against what you see here. It’s not at all uncommon for us to hear skeptics say: “Well, of course alumni on whom we have more personal information are going to be better givers. In fact we often get that information when they make a gift. You could even say that amount of giving and amount of personal information are pretty much the same thing.”

We disagree for at least two reasons:

Amount of personal information and giving in any alumni database are never the same thing. If you have doubts about our assertion, the best way to dispel those doubts is to look in your own alumni database. Create the same simple score we have for this piece. Then look at the percentage of alums for each of the three levels of the score. You will find plenty of alums who have a score of 0 who have given you something, and you will find plenty of alums with a score of 2 who have given you nothing at all.

We have yet to encounter a school where the IT folks can definitively say how an email address or a home phone number got into the database for every alum. Why is that the case? Because email addresses and home phone numbers find their way into alumni databases in a variety of ways. Yes, sometimes they are provided by the alum when he or she makes a gift. But there are other ways. To name a few:

  • Alums (givers or not) can provide that information when they respond to surveys or requests for information to update directories.
  • There are forms that alums fill out when they attend a school-sponsored event that ask for this kind of information.
  • There are vendors who supply this kind of information.

Now here’s the kicker. Your reactions to everything you’ve seen in this piece are critical. If you’re going to go to a skeptical boss to try to make a case for scouring your alumni database for new candidates for major giving, we think you need to have several reactions to what we’ve laid out here:

1. “WOW!” Not, “Oh, that’s interesting.” It’s gotta be, “WOW!” Trust us on this one.

2. You have to be champing at the bit to create the same kinds of tables and charts that you’ve seen here for your own data.

3. You have to look at Table 2 (that we’ve recreated below) and imagine it represents your own data.

Table 2: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School A

Then you have to start saying things like:

“Okay, I’m looking at the third class year decile. These are alums who graduated between 1977 and 1983. Twenty-five percent of them with a score of 2 have given us at least $1,000 lifetime. But what about the 75% who haven’t yet reached that level? Aren’t they going to be much better bets for bigger giving than the 94% of those with a score of 0 who haven’t yet reached the $1,000 level?”

“A score that goes from 0 to 2? Really? What about a much more sophisticated score that’s based on lots more information than just email listed and home phone listed? Wouldn’t it make sense to build a score like that and look at the giving levels for that more sophisticated score across the class year deciles?”

If your reactions have been similar to the ones we’ve just presented, you’re probably getting very close to trying to make your case to the higher-ups. Of course, how you make that case will depend on who you’ll be talking to, who you are, and situational factors that you’re aware of and we’re not. But here are a few general suggestions:

Your first step should be making up the charts and figures for your own data. Maybe you have the skills to do this on your own. If not, find a technical person to do it for you. In addition to having the right skills, this person should think the project is cool and shouldn’t take forever to finish it.

Choose the right person to show our stuff and your stuff to. More and more we’re hearing people in advancement say, “We just got a new VP who really believes in analytics. We think she may be really receptive to this kind of approach.” Obviously, that’s the kind of person you want to approach. If you have a stodgy boss in between you and that VP, find a way around your boss. There are lots of ways to do that.

Do what mystery writers do: use the weapon of surprise. Whoever the boss is that you go to, we’d recommend that you show them this piece first. After you know they’ve read it, ask them what they thought of it. If they say anything remotely similar to “I wonder what our data looks like,” you say, “Funny you should ask.”

Whatever your reactions to this piece have been, we’d love to hear them.

