CoolData blog

28 July 2010

Applying predictive modeling to Phonathon segmentation

Filed under: Annual Giving, Donor acquisition, Phonathon, Segmentation — kevinmacdonell @ 8:05 pm

Segmenting the prospect pool is where the rubber hits the road in modeling for the phone program. Too bad there’s no road map!

When I was a prospect researcher working in Major Gifts and doing a little predictive modeling on the side, I was innocent of the intricacies of Annual Giving. I produced the models, and waited for the magic to happen. Hey, I wondered, how hard can it be? I tell you who’s most likely to give, and you get to work on that. Done.

Today, I’m responsible for running a phonathon program. Now I’M the guy who’s supposed to apply predictive scores to the Annual Fund, without messing everything up. What looked simple from the outside now looks like Rubik’s Cube. And I never solved Rubik’s Cube. (Had the book, but not the patience.)

In the same way, I have found that books and other resources on phonathon management are just no help when it comes to propensity-based segmentation. There seem to be no readily-available prototypes.

So let’s change that. Today I’m going to share a summary of the segmentation we hope to implement this fall. It’s not as detailed as our actual plan, but it should be enough to inform the development of your own plan. I’ve had to tear the Rubik’s Cube apart, as I used to do as a kid, and reassemble it from scratch.

Click on this link to open a Word doc: “Phone segments”. Each row in this document is a separate segment. The segments are grouped into “blocks,” according to how many call attempts will be applied to each segment. Notice that the first column is “Decile Score”. That’s right, the first level of selection is going to be the propensity-of-giving predictive score I created for Phonathon.

It might seem that shifting away from “traditional” segmentation is as easy as that, but in fact making this change took a lot of hard thinking and consultation with others who have experience with phonathon. (*** See credits at end.)

Why was it so hard? Read on!

The first thing we have to ask ourselves is, why do we segment at all? The primary goals, as I see them, are:

  1. PRIORITIZATION: Focusing attention and resources on the best prospects
  2. MESSAGING: Customizing appeals to natural groupings of alumni

There are other reasons, including being able to track performance of various groups over time, but these are the most important at this planning stage.

By “prioritization,” I mean that alumni with the greatest propensity to give should be given special consideration in order to maximize response. Alumni who are most likely to give should:

  • receive the most care and attention with regard to when they are asked (early in the calling season, soonest after mail drops, etc.);
  • be assigned to the best and most experienced callers; and
  • have higher call-attempt limits than other, lower-propensity alumni.

The other goal, “messaging,” is simple enough to understand: We tailor our message to alumni based on what sort of group they are in. Alumni fall into a few groups based on their past donor history (LYBUNT, SYBUNT, never donor), which largely determines our solicitation goal for them (Leadership, Renewal, Acquisition). Alumni are also segmented by faculty, a common practice when alumni are believed to feel greater affinity for their faculty than for the university as a whole. There may also be special segments created for other characteristics (young alumni, for example), or for special fundraising projects that need to be treated separately.
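For readers who like to see the mechanics spelled out, here is a minimal Python sketch of that donor-history grouping. The fiscal-year logic and field values are assumptions for illustration, not our actual business rules; LYBUNT (“last year but unfortunately not this”) and SYBUNT (“some year but unfortunately not this”) follow common annual-fund usage, and how each group maps to a solicitation goal will vary by institution.

```python
# Minimal sketch of grouping alumni by giving history. The fiscal-year
# logic is a stand-in; substitute your institution's own rules.

CURRENT_FY = 2011  # hypothetical current fiscal year

def donor_group(last_gift_fy):
    """Classify an alum by past giving; None means no gift on record."""
    if last_gift_fy is None:
        return "Never donor"   # solicitation goal: Acquisition
    if last_gift_fy == CURRENT_FY - 1:
        return "LYBUNT"        # gave last year, not yet this year
    return "SYBUNT"            # gave in some prior year, not yet this year

print(donor_group(2010))  # LYBUNT
print(donor_group(2006))  # SYBUNT
print(donor_group(None))  # Never donor
```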

The “message” goal is often placed at the centre of phonathon segmentation — at the expense of optimizing treatment of the best prospects, in my view. In many programs, a rigid structure of calling by faculty commonly prevails, exhausting one message-defined pool (e.g. “Law, Donors”) before moving on to the next. There are benefits to working one homogeneous calling pool at a time — callers can more quickly become familiar with the message (and objections to it) if it stays consistent through the night. However, overall gains in the program might be realized by taking a more propensity-driven approach.

Predictive modeling for propensity to give is the “new thing” that allows us to bring prioritization to the fore. Traditionally, propensity to give has been determined mainly by previous giving history, based on the reasonable assumption that alumni who have given recently are most likely to give again. This approach works for donors, but is not helpful for segmenting the non-donor pool for acquisition. Predictive modeling is a marked improvement over giving history alone for segmenting donors as well: a never-donor who has a high likelihood of giving is far more valuable to the institution than a donor who is very unlikely to renew. Only predictive modeling can give us insight into the unknown, allowing us to decide who is the better prospect.

The issue: Layers of complexity

We need to somehow incorporate the scores from the predictive model into segmentation. But simply creating an additional level of segmentation will create an unreasonable amount of complexity: “Score decile 10, Law, donors”, “Score decile 9, Medicine, non-donors”, etc. etc. The number of segments would become unmanageable and many of them would be too small, especially when additionally broken up by time zone.

I considered keeping the traditional segments (Faculty and donor status) and simply ordering the individual prospects within each segment using a very granular score. This would require us to make a judgment call about when to drop a segment and move on to the next one. The risk is that, left to our judgment, we will either drop the segment too early, leaving money on the table, or call too deep into it before moving on. Calling alumni with a decile score of 7 before at least one attempt has been made on ALL the 10s runs counter to the goal of prioritizing the best prospects.

So, what should we do?

The proposed new strategy going forward will draw a distinction between Prioritization and Messaging. Calling segments will be based on a combination of Propensity Score and Donor Status. More of the work involved in the “message” component (based on Faculty and past giving designations) will be managed at the point of the call, via the automated calling system and the caller him/herself.

The intention is to move messaging out of segmentation and into a combination of our automated dialing system’s conditional scripting features and the judgment of the caller. The callers will continue to be shown specific degree information, with customized scripts based on this information. The main difference from the caller’s point of view is that he or she will be speaking with alumni of numerous degree types on any given night, instead of just one or two.

Our system offers the ability to compose scripts that contain conditional statements, so that the message the caller presents changes on the fly in response to the particulars of the prospect being called (e.g. degree and faculty, designation of last gift, and so on). This feature works automatically and requires no effort from callers, except to the extent that there are more talking points to absorb simultaneously.
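Our vendor’s scripting language is proprietary, so the following is only a rough Python sketch of the kind of branching involved. The field names and wording are invented for illustration and do not reflect our actual scripts or database fields.

```python
# Rough sketch of conditional script logic. All field names and wording
# are hypothetical; our dialing system handles this internally.

def build_ask(prospect: dict) -> str:
    """Return a talking point tailored to the prospect's record."""
    faculty = prospect.get("preferred_faculty", "the university")
    last_designation = prospect.get("last_gift_designation")

    if last_designation and last_designation != faculty:
        # Past giving went somewhere other than the preferred faculty:
        # offer to renew to that designation rather than the default.
        return (f"Last year you generously supported {last_designation}. "
                f"Would you consider renewing that gift this year?")
    if prospect.get("donor_status") == "Donor":
        return (f"Would you consider renewing your support for the "
                f"Faculty of {faculty}?")
    # Acquisition ask for never-donors.
    return f"Would you consider making your first gift to the Faculty of {faculty}?"

print(build_ask({"preferred_faculty": "Law",
                 "last_gift_designation": "Athletics",
                 "donor_status": "Donor"}))
```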

The caller’s prospect information screen offers data on a prospect’s past giving. When historical gift designations disagree with a prospect’s faculty, the caller will need to shift gears slightly and ask the prospect if he or she wishes to renew their giving to that designation, rather than the default (faculty of preferred degree).

Shifting this aspect from segmentation to the point of the call is intended to remove a layer of complexity from segmentation, thereby making room for propensity to give. See?

‘Faculty’ will be removed as a primary concern in segmentation, by collapsing all the specific faculties into two overarching groups: Undergraduate degrees and graduate/professional degrees. This grouping preserves one of the fundamental differences between prospects (their stage of life while a student) while preventing the creation of an excessive number of tiny segments.

Have a look at the linked “Phone segments” document again. The general hierarchy for segmentation will be Score Decile (ten possible levels), then Donor Status (two levels, Donor and Non-Donor), then Graduate-Professional/Undergraduate (two levels). Therefore the number of possible segments is 10 x 2 x 2 = 40. In practice there will be more than 40, but this number will be manageable. As well, although we will call as many prospects as we possibly can, it is not imperative that we call the very lowest deciles, where the probability of finding donors is extremely low. Having leftover segments at the end of the winter term is likely, but not cause for concern.
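To make the structure concrete, here is a rough sketch in Python (pandas) of how the 10 x 2 x 2 segment codes could be assigned and ordered for calling. The column names and codes are assumptions for illustration, not our actual database fields.

```python
import pandas as pd

# Hypothetical modeling extract: one row per callable alum.
alumni = pd.DataFrame({
    "id":           [1, 2, 3, 4],
    "decile":       [10, 7, 10, 3],          # propensity score decile, 1-10
    "donor_status": ["Donor", "Non-Donor", "Non-Donor", "Donor"],
    "degree_level": ["UG", "GradProf", "UG", "GradProf"],
})

# Build a segment code: decile, then donor status, then degree grouping.
alumni["segment"] = (
    "D" + alumni["decile"].astype(str).str.zfill(2)
    + "_" + alumni["donor_status"].str.replace("-", "", regex=False)
    + "_" + alumni["degree_level"]
)

# Calling priority: highest deciles first; within a decile, donors first.
alumni = alumni.sort_values(["decile", "donor_status"],
                            ascending=[False, True])
print(alumni[["id", "segment"]])
```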

This is only a general structure — some segments may be split or collapsed depending on how large or small they are. As well, I will break out other segments for New Grads and any special projects we are running this year. And call attempt limits may require rejigging throughout the season, based on actual response.

New Grad calling is limited to Acquisition, for the purpose of caller training and familiarization. I prefer Renewal calling to be handled by experienced callers, therefore new-grad renewals are included in other segments.

Other issues that may require attention in your segmentation include double-alumni households (do both spouses receive a call, and if so, when?), and creating another segment to capture alumni who did not receive a propensity score because they were not contactable at time of model creation.

Potential issues

Calling pools are mixed with regard to faculty, so the message will vary from call to call. Callers won’t know from the outset who they will be speaking with (Law, Medicine, etc.), and will require multiple scripts to cover multiple prospect types. Training and familiarization with the job will take longer.

The changes will require a little more attentiveness on the part of call centre employees. The script will auto-populate with the alum’s preferred faculty, but the caller must be prepared to modify the conversation on the fly based on other information available, i.e. the designation of the last (or largest) gift. The downside is that callers may take more time to become proficient. However, the need to pay attention to context may help keep callers more engaged with their work, as opposed to mindlessly reading the same script over and over all night.

Another potential issue is that some faculties are at a disadvantage because they have fewer high-scoring alumni. The extent to which this might be a problem can only be determined by looking at the data to see how each faculty’s alumni are distributed by score decile. Some redistribution among segments may be necessary if any one faculty is found to be at a severe disadvantage. Note that it cuts both ways, though: In the traditional segmentation, entire faculties were probably placed at a disadvantage because they had lower priority — based on nothing more than the need to order them in some kind of sequence.
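One quick way to check is a cross-tabulation of faculty by score decile. Here is a sketch in Python (pandas); the file and column names are assumptions for illustration.

```python
import pandas as pd

alumni = pd.read_csv("alumni_scores.csv")   # hypothetical scored extract

# Fraction of each faculty's alumni falling in each score decile.
dist = pd.crosstab(alumni["faculty"], alumni["decile"], normalize="index")

# A faculty with most of its weight in deciles 1-4 would rarely be called
# under the new scheme and might need some manual redistribution.
print(dist.round(2))
```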

As I say, this is all new, and untested. How large or small the proposed segments will be remains undetermined. How well the segmentation will work is not known. I am interested to hear how others have dealt with the issue of applying their predictive models to phonathon.

*** CREDITS: I owe thanks to Chris Steeves, Development Officer for the Faculty of Management at Dalhousie University, and a former Annual Giving Officer responsible for Phonathon, for his enthusiastic support and for certain ideas (such as collapsing faculties into ‘graduate’ and ‘undergraduate’ categories) which are a key part of this segmentation plan. Also, thanks to Marni Tuttle, Associate Director, Advancement Services at Dalhousie University, for her ideas and thorough review of this post in its various drafts.

14 January 2010

Building your ‘event attendance likelihood’ model

Filed under: Event attendance, Model building — kevinmacdonell @ 12:20 pm

Photo courtesy of Alumnae Association of Mount Holyoke College (Creative Commons licence)

Your model’s predicted value doesn’t always have to be ‘giving’. Once you’ve discovered the power of predictive modeling for your fundraising efforts, you can direct that power into other Advancement functions.

How about alumni event attendance?

I’ve had great success with this new model, which scores all of our alumni according to how likely they are to attend an event.  I’ll show you what we use it for, and then I’ll bounce a cool idea off you for your thoughts.

If you’ve read some earlier posts, you will already know that event attendance is highly correlated with giving (for our institution – but probably yours as well). Event attendance is an excellent predictor of giving, but it works the other way too: giving is a predictor of propensity to attend events.

We can say this because when we build our models we’re concerned only with correlation, not causation. It would be incorrect for me to say that attending events causes an alum to give, or vice-versa. I don’t know enough to make a statement either way. It could be that both behaviours spring from other influences. It’s enough for our purposes to say that they’re linked in a meaningful way.

To create an event attendance likelihood model you need at least a few years of actual attendance data. I was lucky – I had Homecoming data going back to 1999, as well as a few years of data for alumni receptions across the country. (Gathering this data pays off in many ways besides predictive modeling. See my earlier post, Why you should capture alumni event attendance in your database.)

I gave a lot of thought to whether I should consider Homecoming and off-campus receptions separately. Clearly they are not the same thing, and perhaps should not have been weighted equally. However, for the sake of simplicity, I regarded all events as the same when I calculated my predicted value (‘number of events attended’). As long as an alumnus/na had to RSVP for the event AND showed up, they got a point for that event.
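If you are assembling the predicted value yourself, the counting step is simple. Here is a sketch in Python (pandas); the file and column names are assumptions, not our actual tables.

```python
import pandas as pd

attendance = pd.read_csv("event_attendance.csv")  # one row per alum per event

# Count only events that required an RSVP and where the alum showed up.
attended = attendance[(attendance["rsvp_required"] == 1) &
                      (attendance["attended"] == 1)]

events_attended = (attended.groupby("alum_id")["event_id"]
                   .nunique()
                   .rename("events_attended"))
print(events_attended.head())
```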

Another consideration is opportunity. To validly count off-campus events, ALL alumni should have at least had the option to attend an event. It is true that there are many cities where we have yet to host an event. However, I reasoned, we’ve hosted events in many of the towns and cities where the majority of our alumni live (or can reasonably travel to). Therefore I chose to include receptions along with Homecoming. Was I wrong? Not sure!

(Events I chose to leave out were of the exclusive, invite-only type. Because not all alumni were given the opportunity to attend, those events are not suitable to use in this model.)

You create a new model whenever you change the predicted value. Whether you use Peter Wylie’s simple-score method or multiple regression to create your model, when you make “number of events attended” your predicted value, your resulting score set will help to rank all alumni by how likely they are to show up to your event.
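If you go the regression route, the setup looks roughly like this in Python. The predictor variables listed are examples only — your own model would use whatever variables prove predictive in your data — and Peter Wylie’s simple-score method would replace the regression step entirely.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

alumni = pd.read_csv("alumni_modeling_file.csv")  # hypothetical extract

# Predicted value: number of events attended.
y = alumni["events_attended"]

# Example predictors only; substitute your own.
X = alumni[["lifetime_giving_log", "years_since_grad",
            "has_email", "has_home_phone", "distance_to_campus_km"]]

model = LinearRegression().fit(X, y)
alumni["raw_score"] = model.predict(X)

# Bin raw scores into deciles, 10 = most likely to attend.
# (Ranking first avoids problems with tied score values.)
alumni["event_decile"] = pd.qcut(alumni["raw_score"].rank(method="first"),
                                 10, labels=list(range(1, 11)))
```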

Here’s how we use those scores.


Let’s say the Alumni Office wants to send out invitations for Homecoming or for a reception in a city somewhere. Email is a no-brainer. It’s cheap and fast, and alumni of all ages seem very receptive to hearing from us that way.

Naturally we still mail out paper invitations, but for various reasons (cost being supreme), we have to be more selective. Some criteria we use for selecting who will get a mailing are included in the list below. The criteria are adjusted to be more or less restrictive, depending on what our target for mail pieces is.

  • Lifetime household and business giving $x and up
  • Member of donor recognition group in a recent year
  • Has a Planned Giving commitment
  • Identified as an ‘involved’ young alumnus/na
  • Attended Homecoming once in past ‘x’ years
  • Attended a previous event in region

The problem with these criteria is that many alumni (particularly young alumni) who might attend our event aren’t donors and have never attended an event before. If the goal is attracting new faces to your event, you need some way to separate the ‘willing’ from the disinterested masses, and give them the extra attention they deserve.

This is where predictive modeling shines. I’ll have more to say about building this model later.

Now I want to bounce a cool idea off you. Let’s say you’ve created your model, scored all your alumni, and have since then put on several large events. Those events have generated actual attendance data. Let’s say you use this attendance data to work out the ‘percentage attended’ for each score level. Would that not provide you with a rough estimate of projected attendance for any given invitation list in the future? With incremental adjustments over time, and perhaps for different event types, would this be a valid tool your event planners could use?

I want to know!

An example. Let’s say you have an event coming up in Los Angeles, and your invitation list for that city includes 200 alumni who have a score of 10 in the Event Likelihood Model. You know from past events that 20% of alumni with that score will show up. Therefore you expect to see about 40 of them in Los Angeles. You add in 12% for the next level, 8% for the next level, and so on, and sum it all up to get your total projected attendance.
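In code, the arithmetic is just a weighted sum. Here is a sketch in Python; the counts and rates for deciles 9 and 8 are invented to round out the example above.

```python
# Historical show-up rates by score level (made up beyond the 20% above).
attend_rate = {10: 0.20, 9: 0.12, 8: 0.08}   # ...and so on down the scale

# Invitees per score level on the Los Angeles list (9 and 8 are invented).
invitees = {10: 200, 9: 350, 8: 500}

projected = sum(n * attend_rate.get(score, 0.0)
                for score, n in invitees.items())
print(f"Projected attendance: {projected:.0f}")
# 200*0.20 + 350*0.12 + 500*0.08 = 40 + 42 + 40 = 122
```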

Valid? Not valid?
