CoolData blog

28 July 2010

Applying predictive modeling to Phonathon segmentation

Filed under: Annual Giving, Donor acquisition, Phonathon, Segmentation — kevinmacdonell @ 8:05 pm

Segmenting the prospect pool is where the rubber hits the road in modeling for the phone program. Too bad there’s no road map!

When I was a prospect researcher working in Major Gifts and doing a little predictive modeling on the side, I was innocent of the intricacies of Annual Giving. I produced the models, and waited for the magic to happen. Hey, I wondered, how hard can it be? I tell you who’s most likely to give, and you get to work on that. Done.

Today, I’m responsible for running a phonathon program. Now I’M the guy who’s supposed to apply predictive scores to the Annual Fund, without messing everything up. What looked simple from the outside now looks like Rubik’s Cube. And I never solved Rubik’s Cube. (Had the book, but not the patience.)

In the same way, I have found that books and other resources on phonathon management are just no help when it comes to propensity-based segmentation. There seem to be no readily available prototypes.

So let’s change that. Today I’m going to share a summary of the segmentation we hope to implement this fall. It’s not as detailed as our actual plan, but it should give you enough information to inform the development of your own plan. I’ve had to tear the Rubik’s Cube apart, as I used to do as a kid, and reassemble it from scratch.

Click on this link to open a Word doc: “Phone segments”. Each row in this document is a separate segment. The segments are grouped into “blocks,” according to how many call attempts will be applied to each segment. Notice that the first column is “Decile Score”. That’s right, the first level of selection is going to be the propensity-of-giving predictive score I created for Phonathon.

It might seem that shifting away from “traditional” segmentation is as easy as that, but in fact making this change took a lot of hard thinking and consultation with others who have experience with phonathon. (*** See credits at end.)

Why was it so hard? Read on!

The first thing we have to ask ourselves is, why do we segment at all? The primary goals, as I see them, are:

  1. PRIORITIZATION: Focusing attention and resources on the best prospects
  2. MESSAGING: Customizing appeals to natural groupings of alumni

There are other reasons, including being able to track performance of various groups over time, but these are the most important at this planning stage.

By “prioritization,” I mean that alumni with the greatest propensity to give should be given special consideration in order to maximize response. Alumni who are most likely to give should:

  • receive the most care and attention with regard to when they are asked (early in the calling season, soonest after mail drops, etc.);
  • be assigned to the best and most experienced callers, and
  • have higher call-attempt limits than other, lower-propensity alumni.

The other goal, “messaging,” is simple enough to understand: We tailor our message to alumni based on what sort of group they are in. Alumni fall into a few groups based on their past donor history (LYBUNT, SYBUNT, never donor), which largely determines our solicitation goal for them (Leadership, Renewal, Acquisition). Alumni are also segmented by faculty, a common practice when alumni are believed to feel greater affinity for their faculty than they do the university as a whole. There may also be special segments created for other characteristics (young alumni, for example), or for special fundraising projects that need to be treated separately.

The “message” goal is often placed at the centre of phonathon segmentation — at the expense of optimizing treatment of the best prospects, in my view. In many programs, a rigid structure of calling by faculty prevails, exhausting one message-defined pool (e.g. “Law, Donors”) before moving on to the next. There are benefits to working one homogeneous calling pool at a time — callers can more quickly become familiar with the message (and objections to it) if it stays consistent through the night. However, overall gains in the program might be realized by taking a more propensity-driven approach.

Predictive modeling for propensity to give is the “new thing” that allows us to bring prioritization to the fore. Traditionally, propensity to give has been determined mainly by previous giving history, which rests on a reasonable assumption: alumni who have given recently are most likely to give again. This approach works for donors, but is not helpful for segmenting the non-donor pool for acquisition. Predictive modeling is a marked improvement over giving history alone for segmenting donors as well: a never-donor who has a high likelihood of giving is far more valuable to the institution than a donor who is very unlikely to renew. Only predictive modeling gives us the insight we need to decide who is the better prospect.

The issue: Layers of complexity

We need to somehow incorporate the scores from the predictive model into segmentation. But simply creating an additional level of segmentation will create an unreasonable amount of complexity: “Score decile 10, Law, donors”, “Score decile 9, Medicine, non-donors”, etc. etc. The number of segments would become unmanageable and many of them would be too small, especially when additionally broken up by time zone.

I considered keeping the traditional segments (Faculty and donor status) and simply ordering the individual prospects within each segment using a very granular score. This would require us to make a judgment call about when to drop a segment and move on to the next one. The risk is that, in leaving it to our judgment, we will either drop the segment too early, leaving money on the table, or call too deep into it before moving on. Calling alumni with a decile score of 7 before making at least one attempt on ALL the 10s runs counter to the goal of prioritizing the best prospects.

So, what should we do?

The proposed strategy will draw a distinction between Prioritization and Messaging. Calling segments will be based on a combination of Propensity Score and Donor Status. More of the work involved in the “message” component (based on Faculty and past giving designations) will be managed at the point of the call, via the automated calling system and the caller him/herself.

The intention is to move messaging out of segmentation and into a combination of our automated dialing system’s conditional scripting features and the judgment of the caller. The callers will continue to be shown specific degree information, with customized scripts based on this information. The main difference from the caller’s point of view is that he or she will be speaking with alumni of numerous degree types on any given night, instead of just one or two.

Our system offers the ability to compose scripts that contain conditional statements, so that the message the caller presents changes on the fly in response to the particulars of the prospect being called (e.g. degree and faculty, designation of last gift, and so on). This feature works automatically and requires no effort from callers, except to the extent that there are more talking points to absorb simultaneously.

The caller’s prospect information screen offers data on a prospect’s past giving. When historical gift designations disagree with a prospect’s faculty, the caller will need to shift gears slightly and ask whether the prospect wishes to renew giving to that designation, rather than to the default (faculty of preferred degree).
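
To make this concrete, here is a rough Python sketch of the kind of branching involved. Our calling system has its own conditional-scripting syntax, so this is not its actual code, and the field names (faculty, last_gift_designation) are placeholders I have made up for illustration:

    def opening_ask(prospect):
        designation = prospect.get("last_gift_designation")
        if designation and designation != prospect["faculty"]:
            # Past giving went to a designation other than the preferred faculty:
            # offer to renew the gift to that designation instead of the default ask.
            return ("Last year you supported " + designation +
                    ". Would you like to renew your gift to that area?")
        return ("Would you consider a gift in support of the Faculty of " +
                prospect["faculty"] + "?")

    print(opening_ask({"faculty": "Law", "last_gift_designation": "Athletics"}))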

Shifting this aspect from segmentation to the point of the call is intended to remove a layer of complexity from segmentation, thereby making room for propensity to give. See?

‘Faculty’ will be removed as a primary concern in segmentation, by collapsing all the specific faculties into two overarching groups: Undergraduate degrees and graduate/professional degrees. This grouping preserves one of the fundamental differences between prospects (their stage of life while a student) while preventing the creation of an excessive number of tiny segments.

Have a look at the “Phone segments” document again. The general hierarchy for segmentation will be Score Decile (ten possible levels), then Donor Status (two levels, Donor and Non-Donor), then Graduate-Professional/Undergraduate (two levels). Therefore the number of possible segments is 10 x 2 x 2 = 40. In practice there will be more than 40, but this number will be manageable. As well, although we will call as many prospects as we possibly can, it is not imperative that we call the very lowest deciles, where the probability of finding donors is extremely low. Having leftover segments at the end of the winter term is likely, but not cause for concern.
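
For illustration, here is a minimal Python sketch of that hierarchy in action. The field names and labels are stand-ins, not our actual database fields:

    def assign_segment(decile, is_donor, is_grad_professional):
        # Score Decile (1-10) x Donor Status (2) x Degree Level (2) = 40 possible segments.
        donor_status = "Donor" if is_donor else "Non-Donor"
        degree_level = "Grad/Professional" if is_grad_professional else "Undergraduate"
        return "Decile {:02d} | {} | {}".format(decile, donor_status, degree_level)

    # A decile-9 never-donor with an undergraduate degree:
    print(assign_segment(9, is_donor=False, is_grad_professional=False))
    # -> Decile 09 | Non-Donor | Undergraduate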

This is only a general structure — some segments may be split or collapsed depending on how large or small they are. As well, I will break out other segments for New Grads and any special projects we are running this year. And call attempt limits may require rejigging throughout the season, based on actual response.

New Grad calling is limited to Acquisition, for the purpose of caller training and familiarization. I prefer Renewal calling to be handled by experienced callers, so new-grad renewals are included in other segments.

Other issues that may require attention in your segmentation include double-alumni households (do both spouses receive a call, and if so, when?), and creating another segment to capture alumni who did not receive a propensity score because they were not contactable at time of model creation.

Potential issues

Calling pools are mixed with regard to faculty, so the message will vary from call to call. Callers won’t know from the outset who they will be speaking with (Law, Medicine, etc.), and will require multiple scripts to cover multiple prospect types. Training and familiarization with the job will take longer.

The changes will require a little more attentiveness on the part of call centre employees. The script will auto-populate with the alum’s preferred faculty. However, the caller must be prepared to modify the conversation on the fly based on other information available, such as the designation of the last (or largest) gift. The downside is that callers may take more time to become proficient. On the other hand, the need to pay attention to context may help to keep callers more engaged with their work, as opposed to mindlessly reading the same script over and over all night.

Another potential issue is that some faculties are at a disadvantage because they have fewer high-scoring alumni. The extent to which this might be a problem can only be determined by looking at the data to see how each faculty’s alumni are distributed by score decile. Some redistribution among segments may be necessary if any one faculty is found to be at a severe disadvantage. Note that it cuts both ways, though: In the traditional segmentation, entire faculties were probably placed at a disadvantage because they had lower priority — based on nothing more than the need to order them in some kind of sequence.

As I say, this is all new, and untested. How large or small the proposed segments will be remains undetermined. How well the segmentation will work is not known. I am interested to hear how others have dealt with the issue of applying their predictive models to phonathon.

*** CREDITS: I owe thanks to Chris Steeves, Development Officer for the Faculty of Management at Dalhousie University, and a former Annual Giving Officer responsible for Phonathon, for his enthusiastic support and for certain ideas (such as collapsing faculties into ‘graduate’ and ‘undergraduate’ categories) which are a key part of this segmentation plan. Also, thanks to Marni Tuttle, Associate Director, Advancement Services at Dalhousie University, for her ideas and thorough review of this post in its various drafts.

12 May 2010

Donor acquisition: From ‘giving history’ to ‘giving future’

Filed under: Annual Giving, Donor acquisition, John Sammis, Peter Wylie, Phonathon — kevinmacdonell @ 8:18 am

I hope you’ve had a chance to read “The tough job of bringing in new alumni donors” by Peter Wylie and John Sammis. What did you think of it? I’m sure the most common reaction is “That’s very interesting.” There’s a big gap, though, between reaction and action. I want to talk about converting this knowledge into action.

The subject of that guest post is a lot more than just “interesting” for me. I’ve recently changed jobs, moving from prospect research for major gifts at a university with 30,000 living alumni to running an annual fund phonathon program at a university with more than three times as many alumni. For the first time, I will be responsible not only for mining the data, but helping to design a program that will make use of the results.

Like most institutions, my new employer invests heavily in acquiring new donors. Calling and mailing to never-donors yields a return on investment that may be non-existent in the short term and difficult to quantify over the long term.

Yet it must be done. ROI is important, but if you write off whole segments based only on ROI in the current year, ignoring long-term value, your pool of donors will shrink through attrition. Broadening the base of new donors costs money — an investment we hope to recoup with interest when new donors renew in future years. (See “The Habit of Giving”, by Jonathan Meer, on the subject of a donor tendency to renew. Also see “Donor Lifetime Value”, by Dan Allenby, from his Annual Giving Exchange blog, for a brief discussion of the importance of estimating donor lifetime value vs. costs of continuing to solicit. I also owe thanks to Chuck McClenon at The University of Texas at Austin for helping me understand the dangers of focusing on short-term ROI.)

I have argued that in phonathon we ought to give highest priority to propensity to give (i.e., from our predictive model), and stop using giving history (LYBUNTs, etc.) to determine calling priority. (Previous post: Rethinking calling priority in your phonathon campaign.) The results of our predictive model will give us the ROI side of the equation. But I’m growing increasingly convinced that propensity to give must be balanced against that other dimension: Lifetime value.

Dan Allenby says, “Donor lifetime value means predicting the sum of donors’ gifts over the course of their lives,” and later cautions: “This kind of calculation is complicated and imperfect.” This is so true. I certainly haven’t figured out an answer yet. I presume it will involve delving into our past data to find the average number of years a new donor continues to give, and what the average yearly renewal gift is, to establish a minimum lifetime value figure.
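
As a back-of-the-envelope illustration of that minimum figure, here is a tiny Python sketch; the numbers are invented, not ours:

    # Back-of-the-envelope minimum lifetime value. All figures are hypothetical.
    avg_years_giving = 4.0    # average number of years a new donor keeps giving
    avg_renewal_gift = 75.00  # average yearly renewal gift
    avg_first_gift = 50.00    # average first gift at conversion

    min_lifetime_value = avg_first_gift + (avg_years_giving - 1) * avg_renewal_gift
    print("Rough minimum lifetime value: ${:,.2f}".format(min_lifetime_value))
    # -> Rough minimum lifetime value: $275.00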

And I suspect this as well: The age of the donor at conversion is going to figure prominently. In this life there are three things that are inescapable: death, taxes, and a call from the Annual Fund! The number of years a donor has left to give is a function of age. We can assume, then, that early conversion is more desirable than late conversion. (Not to mention death-bed conversion.)

The discussion by Wylie and Sammis (to return to that) really seals the deal. Not only do younger alumni have more time left to give, so to speak, but Wylie/Sammis have clearly demonstrated that younger alumni are probably also more likely than older alumni to convert.

If you’re already using predictive modeling in your program, think about the implications. Year after year, the biggest factor in my giving models is age (i.e., class year). Older alumni tend to score higher, especially if my dependent variable is ‘lifetime giving’ going back to the beginning of time. This flies in the face of the evidence, provided by Wylie and Sammis, that non-donor alumni are less and less likely to convert the older they get.

We need a counterbalance to raw propensity-to-give scores in dealing with non-donors. What’s the answer? One possibility is a propensity-to-convert model that doesn’t undervalue young alumni so much. Another might be a matrix, with propensity-to-give scores on one axis, and some measure of lifetime value (factoring in age) on the other axis — the goal being to concentrate activity on the high-propensity AND high-lifetime value prospects, and work outwards from there.
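
Sketched in Python, the matrix idea might look something like this; the cut-offs are arbitrary placeholders, meant only to show the shape of the thing:

    def priority_cell(propensity_decile, expected_giving_years):
        # Two axes: propensity to give, and a crude age-based lifetime-value proxy.
        # The thresholds (8 and 30) are invented for illustration.
        propensity = "high" if propensity_decile >= 8 else "low"
        lifetime = "high" if expected_giving_years >= 30 else "low"
        return propensity + " propensity / " + lifetime + " lifetime value"

    print(priority_cell(9, 45))  # the corner to work first
    print(priority_cell(9, 10))  # then work outward from there
    print(priority_cell(3, 45))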

I don’t know. Today all I know is that in order to broaden the base of your donor pool and maximize returns over the long term, you must call non-donors, and you must call the non-donors with the most potential. That means embracing younger alumni — despite what your model tells you to do.

3 May 2010

The tough job of bringing in new alumni donors

Filed under: Alumni, Donor acquisition, John Sammis, Peter Wylie — kevinmacdonell @ 8:48 am

Guest post by Peter Wylie and John Sammis

(Click here: Donor acquisition – Wylie and Sammis – 2 May 2010 – to download a Microsoft Word version of this paper.)

Most alumni have never given a cent to their alma maters. “Whoa!” you may be saying, “What’s your evidence, fellows? That’s hard to swallow.”

We would agree. It’s not a pretty picture, but it’s an accurate one. For some documentation you can read “Benchmarking Lifetime Giving in Higher Education”. Sadly, the bottom line is this: In North America the lifetime hard credit alumni participation of at least half of our higher education institutions is less than 50%. If you look at only private institutions, the view is better. Public institutions? Better to not even peek out the window.

We do have a bit of optimism to offer in this paper, but we’ll start off by laying some cards on the table:

  • We’re specialists in data analysis. If we’re not careful, Abraham Maslow’s oft-quoted dictum can apply to us: “If your only tool is a hammer, every problem starts looking like a nail.” We don’t have all the answers on this complex issue. In fact, we believe that institutional leadership (from your president and board of trustees) is what’s most important in getting more alums involved in giving. Data-driven decision making (the underpinning of all our work) is only part of the solution.
  • Donor acquisition is hard. If you don’t believe that, talk to anyone who runs the annual fund for a large state university. Ask them about their success rates with calling and mailing to never-givers. They will emit sighs of frustration and exasperation. They will tell you about the depressing pledge rates from the thousands and thousands of letters and postcards they send out. They will tell you about the enervating effect of wrong numbers and hang-ups on their student callers. They will tell you it isn’t easy. And they’re right; it isn’t.
  • RFM won’t help. (RFM stands for “Recency of Giving,” “Frequency of Giving,” and “Monetary Value of Giving.” It’s a term that came out of the private sector world of direct marketing over 40 years ago.) Applying that concept to our world of higher education advancement, you would call and mail to alums who’ve given recently, often, and a lot. Great idea. But if we’re focused on non-donors … call it a hunch … that’s probably not going to work out too well.

So … what’s the optimism we can offer? First, we’ve had some success with building predictive models for donor acquisition. They’re not great models, but, as John likes to say, “They’re a heck of a lot better than throwing darts.” In the not too distant future we plan to write something up on how we do that.

But for now we’d like to show you some very limited data from three schools — data that may shed just a little light on who among your non-giving alums are going to be a bit easier than others to attract into the giving fold. Again, nothing we show you here is cause for jumping up and down and dancing on the table. Far from it. But we do think it’s intriguing, and we hope it encourages folks like you to share these ideas with your colleagues and supervisors.

Here’s what we’ll be talking about:

  • The schools
  • The data we collected from the schools
  • Some results
  • A makeshift score that you might test out at your own school

The Schools

One of the schools is a northeastern private institution; the other two are southeastern public institutions, one medium-sized, the other quite small.

The Data We Collected from the Schools

The most important aspect of the data we got from each school is lifetime giving (for the exact same group of alums) collected at two points in time. With one school (A), the time interval we looked at stretched out over five years. For the other two (B and C), the interval was just a year. However, with all three schools we were able to clearly identify alums who had converted from non-donor to donor status over the time interval.

We collected a lot of other information from each school, but the data we’ll focus on in this piece include:

  • Preferred year of graduation
  • Home Phone Listed (Yes/No)
  • Business Phone Listed (Yes/No)
  • Email Address Listed (Yes/No)

Some Results

The result that we paid most attention to in this study is that a greater percentage of new donors came from the ranks of recent grads than from “older” grads. To arrive at this result we:

  • Divided all alums into one of four roughly equal-sized groups. If you look at Chart 1, you’ll see that these groups consisted of the oldest 25% of alums who graduated in 1976 and earlier, the next oldest 25% of alums who graduated between the years 1977 and 1990, and so on.
  • For each class year quartile we computed the percentage of those alums who became new donors over the time interval we looked at.

Notice in Chart 1 that, as the graduation years of the alums in School A become more recent, their likelihood of becoming a new donor goes up. In the oldest quartile (1976 and earlier), the conversion rate is 1.2%; it is 1.5% for those graduating between 1977 and 1990, 3% for those graduating between 1991 and 1997, and 7.5% for alums graduating in 1998 or later. You’ll see a similar (but less pronounced) pattern in Charts 2 and 3 for Schools B and C.

At this point you may be saying, “Hold on a second. There are more non-donors in the more recent class year quartiles than in the older class year quartiles, right?”

“Right.”

“So maybe those conversion rates are misleading. Maybe if you just looked at the conversion rates of previous non-donors by class year quartiles, those percentages would flatten out?”

Good question. Take a look at Charts 1a, 2a, and 3a below.

Clearly the pool of non-donors diminishes the longer alums have been out of school. So let’s recompute the conversion rates for each of the three schools based solely on previous non-donors. Does that make a difference? Take a look at Charts 1b, 2b, and 3b.

It does make some difference. But, without getting any more carried away with the arithmetic here, the message is clear: many more new donors are coming from the more recent alums than from the ones who graduated a good while back.
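
If you’d like to try this kind of tabulation on your own data, here is one way it might be done in Python with pandas. The column names are hypothetical, and the handful of rows is made up purely to show the mechanics:

    import pandas as pd

    # Toy data; the column names and rows are hypothetical.
    alums = pd.DataFrame({
        "class_year":  [1970, 1985, 1995, 2005, 1978, 1999, 2008, 1965],
        "donor_time1": [0, 0, 0, 0, 1, 0, 0, 0],  # donor at the first point in time?
        "donor_time2": [0, 0, 1, 1, 1, 0, 1, 0],  # donor at the second point in time?
    })

    # Split class years into four roughly equal-sized quartiles.
    alums["quartile"] = pd.qcut(alums["class_year"], 4,
                                labels=["Oldest", "Second", "Third", "Youngest"])

    # Conversion rate by quartile, restricted to previous non-donors.
    non_donors = alums[alums["donor_time1"] == 0].copy()
    non_donors["converted"] = non_donors["donor_time2"] == 1
    print(non_donors.groupby("quartile", observed=False)["converted"].mean())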

Now let’s look at the three other variables we chose for this study:

  • Home Phone Listed (Yes/No)
  • Business Phone Listed (Yes/No)
  • Email Address Listed (Yes/No)

Specifically, we wanted to know if previous non-donors with a home phone listed were more likely to convert than previous non-donors without a home phone listed. And we wanted to know the same thing for business phone listed and for email address listed.

The overall answer is “yes;” the detailed answers are contained in Charts 4-6. For the sake of clarity, let’s go through Chart 4 together.  It shows that:

  • In School A, 5.8% of previous non-donors with a home phone listed converted; 3.7% without a home phone listed converted.
  • In School B, 3.7% of previous non-donors with a home phone listed converted; 1.2% without a home phone listed converted.
  • In School C, 1.0% of previous non-donors with a home phone listed converted; 0.4% without a home phone listed converted.

Looking at Charts 5 and 6 you can see a similar pattern of differences for whether or not a business phone or an email address was listed.
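
The same toy-data approach works for these yes/no splits. A self-contained sketch (again, with hypothetical column names and made-up rows):

    import pandas as pd

    # Previous non-donors only, with a 0/1 home-phone flag and whether each
    # converted over the interval.
    non_donors = pd.DataFrame({
        "home_phone_listed": [1, 1, 1, 0, 0, 0, 1, 0],
        "converted":         [1, 0, 1, 0, 0, 1, 0, 0],
    })
    print(non_donors.groupby("home_phone_listed")["converted"].mean())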

What comes across from all these charts is that the variables we’ve chosen to look at in this study (year of graduation, home phone, email, and business phone) don’t show big differences between previous non-donors who converted and previous non-donors who did not convert. They show small differences. There’s no getting around that.

What’s encouraging (at least we think so) is that these differences are consistent across the three schools. And since the schools are quite different from one another, we expect that the same kind of differences are likely to hold true at many other schools.

Let’s assume you’re willing to give us the benefit of the doubt on that. Let’s further assume you’d like to check out our proposition at your own school.

A Makeshift Score That You Might Test at Your Own School

Here’s what we did for the data we’ve shown you for each of the three schools:

We created four 0/1 variables for all alums who were non-donors at the first point in time:

  • Youngest Class Year Quartile – alums who were in this group were assigned a 1; all others were assigned a 0.
  • Home Phone Listed — alums who had a home phone listed in the database were assigned a 1; all others were assigned a 0.
  • Business Phone Listed — alums who had a business phone listed in the database were assigned a 1; all others were assigned a 0.
  • Email Listed — alums who had an email address listed in the database were assigned a 1; all others were assigned a 0.

For each alum who was a non-donor at the first point in time, we created a very simple score by adding each of the above variables together. Here’s the formula we used:

SCORE = Youngest Class Year Quartile (0/1) + Home Phone Listed (0/1) + Business Phone Listed (0/1) + Email Listed (0/1)

An alum with a Score of 0 was not in the Youngest Class Year Quartile, did not have a home phone listed, did not have a business phone listed, and did not have an email address listed. An alum with a Score of 1 met only one of these criteria, but not the other three, and so on up to an alum with a Score of 4, who met all the criteria.
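
Computing the score is trivial. Here is a small Python sketch; the field names are placeholders for whatever your database export actually calls them:

    def makeshift_score(alum):
        # Sum of four 0/1 indicators; field names are hypothetical placeholders.
        return (int(alum["youngest_class_year_quartile"]) +
                int(alum["home_phone_listed"]) +
                int(alum["business_phone_listed"]) +
                int(alum["email_listed"]))

    example = {"youngest_class_year_quartile": True,
               "home_phone_listed": True,
               "business_phone_listed": False,
               "email_listed": True}
    print(makeshift_score(example))  # -> 3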

Charts 7-9 show the relationship of the Score to new donor conversion. We’d like you to browse through them. After you do that, we have a few concluding comments.

Some final thoughts:

  1. We think the charts are interesting because they show that using just a little information from an alumni database can point to folks who are far more likely to convert than other folks. Obviously, the score we created here (and suggest you try out at your own school) is very simple. Far more accurate scores can be developed using more advanced statistical techniques and the vast amount of information that’s included in almost all alumni databases.
  2. If you’ve taken the trouble to read this far, we’re, of course, pleased. We believe so fundamentally in data driven decision making that it brightens our day whenever someone at least entertains our ideas. But the problem may be with all the decision makers and opinion influencers out there who are not reading this piece and who would be, at best, bored by it. These are vice presidents and directors and bloggers and vendors who seem unwilling to make a commitment to the use of internal alumni database information — information that could save millions and millions of dollars on appeals (both mail and calling) to alums who are very unlikely to ever become donors.
  3. If you agree with us on point 2, the question becomes, “What can we do to change their minds, to get their attention?” First of all, we strongly encourage you to suppress the urge to grab them by the scruff of the neck and scold them. That won’t work. (Would that it did.) What we suggest is patience combined with persistence. New ideas and ways of doing things take a long time to take hold in institutions. How long has the idea been around of converting print-based medical records to electronic form, so they can be quickly shared among physicians and others who must make life-altering decisions on the spot every day? If memory serves, it’s been a while. But don’t give up making the case and pushing politely but assertively. They’ll come around. We’re (all of us) a benevolent juggernaut whose opinions will eventually prevail.