CoolData blog

21 June 2011

How many times to keep calling?

Guest post by Peter Wylie and John Sammis

(Click to download a printer-friendly .PDF version here: NUMBER OF ATTEMPTS 050411)

Since Kevin MacDonell took over the phonathon at Dalhousie University, he and I have had a number of discussions about the call center and how it works. I’ve learned a lot from these discussions, especially because Kevin often raises intriguing questions about how data analysis can make for a more efficient and productive calling process.

One of the questions he’s concerned with is the number of call attempts it’s worth making to a given alum. That is, he’s asking, “How many attempts should my callers make before they ‘make contact’ with an alum and either get a pledge or some other voice-to-voice response – or they give up and stop calling?”

Last January Kevin was able to gather some calling data from several schools that may, among other things, offer the beginnings of a methodology for answering this question. What we’d like to do in this piece is walk you through a technique we’ve tried, and we’d like to ask you to send us some reactions to what we’ve done.

Here’s what we’ll cover:

  1. How we decided whether contact was made (or not) with 41,801 alums who were recently called by the school we used for this exercise.
  2. Our comments on the percentage of contacts made and the pledge money raised for each of eight categories of attempts: 1, 2, 3, 4, 5, 6, 7, and 8 or more.
  3. How we built an experimental predictive model for the likelihood of making contact with a given alum.
  4. How we used that model to see when it might (and might not) make sense to keep calling an alum.

Deciding Whether Contact Was Made

John Sammis and I do tons of analyses on alumni databases, but we’re nowhere near as familiar with call center data as Kevin is. So I asked him to take a look at the table you see below that shows the result of the last call made to almost 42,000 alums. Then I asked, “Kevin, which of these results would you classify as contact made?”

Table 1: Frequency Percentage Distribution for Results of Last Call Made to 41,801 Alums

He said he’d go with these categories:

  • SPEC PLDG (i.e., Specified Pledge)
  • UNSP PLDG (i.e., Unspecified Pledge)
  • No Pledge
  • Already Pledged
  • Do Not Call
  • Remove from List

Kevin’s reasoning was that, with each of these categories, there was a final “voice to voice” discussion between the caller and the alum. Sometimes this discussion had a pretty negative conclusion. If the alum says “do not call” or “remove from list” (1.13% and 0.10% respectively), that’s not great. “No pledge” (29.72%) and “unspecified pledge” (4.15%) are not so hot either, but at least they leave the door open for the future. “Already pledged” (1.06%)? What can you say to that one? “And which decade was that, sir?”

Lame humor aside, the point is that Kevin feels (and I agree) that, for this school, these categories meet the criterion of “contact made.” The others do not.
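Kevin’s classification boils down to a simple lookup. Here is a minimal sketch in Python; the result codes are spelled out from the categories named above, and the exact codes in your own calling system will differ:

```python
# Sketch: flagging "contact made" from the result code of the last call.
# These codes are spelled out from the categories named in the post;
# your own calling system's codes will differ.

CONTACT_MADE = {
    "SPEC PLDG",          # specified pledge
    "UNSP PLDG",          # unspecified pledge
    "NO PLEDGE",          # spoke voice-to-voice, but declined
    "ALREADY PLEDGED",
    "DO NOT CALL",
    "REMOVE FROM LIST",
}

def contact_made(last_call_result: str) -> bool:
    """True if the last call ended in a voice-to-voice exchange."""
    return last_call_result.strip().upper() in CONTACT_MADE
```

Anything not in the set — answering machines, busy signals, wrong numbers — counts as no contact.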

Our Comments on Percentage Contact Made and Pledge Money Raised for Each of Eight Categories of Attempts

Let’s go back to the title of this piece: “How Many Times to Keep Calling?” Maybe the simplest way to decide this question is to look at the contact rate as well as the pledge rate by attempt. Why not? So that’s what we did. You can see the results in Tables 2 and 3 and Figures 1 and 2.

Table 2: Number of Contacts Made and Percentage Contact Made For Each of Eight Categories of Attempts

Table 3: Total pledge dollars and mean pledge dollars received for each of eight categories of attempts

We’ve taken a hard look at both these tables and figures, and we’ve concluded that they don’t really offer helpful guidelines for deciding when to stop calling at this school. Why? We don’t see a definitive number of attempts where it would make sense to stop. To get specific, let’s go over the attempts:

  • 1st attempt: This attempt clearly yielded the most alums contacted (6,023) and the most dollars pledged ($79,316). However, stopping here would make little sense, if only because this attempt yielded just a third of the $230,526 that would eventually be raised.
  • 2nd attempt: Should we stop here? Well, $49,385 was raised, and the contact rate has now jumped from about 50% to over 60%. We’d say keep going.
  • 3rd attempt: How about here? Over $30,000 raised and the contact rate has jumped even a bit higher. We’re not stopping.
  • 4th attempt: Here things start to go downhill a bit. The contact rate has fallen to about 43% and the total pledges raised have fallen below $20,000. However, if we stop here, we’ll be leaving more money on the table.
  • 5th attempt through 8 or more attempts: What can we say? Clearly the contact rates are not great for these attempts; they never get above the 40% level. Still, money for pledges continues to come in – over $50,000.
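The attempt-by-attempt summary behind Tables 2 and 3 can be sketched as a simple aggregation. This is illustrative only — the field names and sample records are invented, not the school’s actual data:

```python
# Sketch: contact rate and pledge dollars by attempt bucket (1-7, with 8
# standing in for "8 or more"), computed from one record per alum.
# Field names and sample records are invented for illustration.
from collections import defaultdict

def by_attempt_bucket(records):
    """Aggregate contact rate and pledge dollars per attempt bucket."""
    stats = defaultdict(lambda: {"n": 0, "contacts": 0, "dollars": 0.0})
    for r in records:
        bucket = min(r["attempts"], 8)      # pool 8+ attempts together
        s = stats[bucket]
        s["n"] += 1
        s["contacts"] += int(r["contacted"])
        s["dollars"] += r["pledge"]
    return {
        b: {"contact_rate": s["contacts"] / s["n"], "dollars": s["dollars"]}
        for b, s in sorted(stats.items())
    }

records = [
    {"attempts": 1, "contacted": True,  "pledge": 50.0},
    {"attempts": 1, "contacted": False, "pledge": 0.0},
    {"attempts": 2, "contacted": True,  "pledge": 25.0},
    {"attempts": 9, "contacted": False, "pledge": 0.0},
]
summary = by_attempt_bucket(records)
```

Run over the full call file, this yields exactly the two quantities discussed above: contact rate and total pledge dollars for each of the eight attempt categories.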

Even before we looked at the attempts data, we were convinced that the right question was not: “How many call attempts should be made before callers stop?” The right question was: “How many call attempts should be made with what alums?” In other words, with some alums it makes sense to keep calling until you reach them and have a chance to ask for a pledge. With others, that’s not a good strategy. In fact, it’s a waste of time and energy and money.

So, how do you identify those alums who should be called a lot and those who shouldn’t?

How We Built an Experimental Predictive Model for the Likelihood of Making Contact with a Given Alum

This was Kevin’s idea. Being a strong believer in data-driven decision making, he firmly believed it would be possible to build a predictive model for making contact with alums. The trick would be finding the right predictors.

Now we’re at a point in the paper where, if we’re not careful, we risk confusing you more than enlightening you. The concept of model building is simple. The problem is that constructing a model can get very technical; that’s where the confusing stuff creeps in.

So we’ll stay away from the technical side of the process and just try to cover the high points. For each of the 41,801 alumni included in this study we amassed data on the following variables:

  • Email (whether or not the alum had an email address listed in the database)
  • Lifetime hard credit dollars given to the school
  • Preferred class year
  • Year of last gift made over the phone (if one was ever made)
  • Marital status missing (whether or not the marital status field was blank for the alum)
  • Event Attendance (whether or not the alum had ever attended an event since graduation)

With these variables we used a technique called multiple regression to combine the variables into a score that could be used to predict an alum’s likelihood of being contacted by a caller. Because multiple regression is hard to get one’s arms around, we won’t try to explain that part of what we did. We’ll just ask you to trust us that it worked pretty well.
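The authors don’t show the regression itself, so here, purely as an illustration of the idea, is a one-predictor linear-probability sketch: ordinary least squares regressing contact (0/1) on lifetime giving. The data points are invented, and the actual model combined all six variables:

```python
# Illustration only: a one-predictor linear probability model, regressing
# contact (1) / no contact (0) on lifetime giving. The model in the post
# combined six predictors; these data points are invented.

def fit_simple_ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

giving  = [0, 0, 50, 120, 300, 500]      # lifetime hard-credit dollars
contact = [0, 1, 0, 1, 1, 1]             # was contact ever made?
a, b = fit_simple_ols(giving, contact)
predicted = a + b * 200                  # likelihood score at $200 lifetime
```

With several predictors the same least-squares machinery produces one combined score per alum, which is what gets collapsed into deciles below.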

What we will do is show you the relationship between three of the above variables and whether or not contact was made with an alum. This will give you a sense of why we included them as predictors in the model.

We’ll start with lifetime giving. Table 4 and Figure 3 show that as lifetime giving goes up, the likelihood of making contact with an alum also goes up. Notice that callers are more than twice as likely to make contact with alums who have given $120 or more lifetime (75.4%) than they are to make contact with alums whose lifetime giving is zero (34.9%).

Table 4: Number of Contacts Made and Percentage Contact Made for Three Levels of Lifetime Giving

How about Preferred Class Year? The relationship between this variable and contact rate is a bit complicated. You’ll see in Table 5 that we’ve divided class year into ten roughly equal size groups called “deciles.” The first decile includes alums whose preferred class year goes from 1964 to 1978. The second decile includes alums whose preferred class year goes from 1979 to 1985. The tenth decile includes alums whose preferred class year goes from 2008 to 2010.

A look at Figure 4 shows that contact rate is highest with the older alums and then gradually falls off as the class years get more recent. However, the rate rises a bit with the most recent alums. Without going into boring and confusing detail, we can tell you that we’re able to use this curvilinear relationship in building our model.


Table 5: Percentage Contact Made by Class Year Decile

The third variable we’ll look at is Event Attendance. Table 6 and Figure 5 show that, although relatively few alums (2,211) attended an event versus those who did not (35,590), the contact rate was considerably higher for the event attenders than the non-attenders: 58.3% versus 41.4%.

Table 6: Percentage Contact Made by Event Attendance

The predictive model we built generated a very granular score for each of the 41,801 alums in the study. To make it easier to see how these scores looked and worked, we collapsed the alums into ten roughly equal size groups (called deciles) based on the scores. The higher the decile the better the scores. (These deciles are, of course, different from the deciles we talked about for Preferred Class Year.)
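The decile collapse described above can be sketched as rank-order binning. A minimal version, with invented scores:

```python
# Sketch: collapsing granular model scores into ten roughly equal-size
# decile groups by rank order (1 = lowest scores, 10 = highest), as the
# post does for both class year and the contact-likelihood score.

def assign_deciles(scores):
    """Return a decile from 1 to 10 for each score, by rank order."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    deciles = [0] * n
    for rank, i in enumerate(order):
        deciles[i] = min(10, rank * 10 // n + 1)
    return deciles

scores = [0.12, 0.95, 0.40, 0.77, 0.05, 0.63, 0.31, 0.88, 0.51, 0.22]
deciles = assign_deciles(scores)
```

With 41,801 alums the ten groups come out roughly equal in size, which is all the method requires.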

Shortly we’ll talk about how we used these decile scores as a possible method for deciding when to stop calling. But first, let’s look at how these scores are related to both contact rate and pledging. Table 7 and Figure 6 deal with contact rate.

Table 7: Number of Contacts Made and Percentage Contact Made, by Contact Score Decile

Clearly, there is a strong relationship between the scores and whether contact was made. Maybe the most striking aspect of these displays is the contrast between contact rate for alums in the 10th decile and that for those in the first decile: 79.9% versus 19.2%. In practical terms, this means that, over time in this school, your callers are going to make contact with only one in every five alums in the first decile. But in the 10th decile? They should make contact with four in every five alums.

How about pledge rates?  We didn’t build this model to predict pledge rates. However, look at Table 8 and Figure 7. Notice the striking differences between the lower and upper deciles in terms of total dollars pledged. For example, we can compare the total pledge dollars received for the bottom 20% of alums called (deciles 1 and 2) and the top 20% of alums called (deciles 9 and 10): about $2,700 versus almost $200,000.

Table 8: Total Pledge Dollars and Mean Pledge Dollars Received by Contact Score Decile

How We Used the Model to See When It Might (And Might Not) Make Sense to Keep Calling an Alum

In this section we have a lot of tables and figures for you to look at. Specifically, you’ll see:

  • Both the number of contacts made and the contact rate by decile score level for each of the first six attempts. (We decided to cut things off at the sixth attempt for reasons we think you’ll find obvious.)
  • A table that shows the total pledge dollars raised for each attempt by decile score level.

Looked at from one perspective, there is a huge amount of information to absorb in all this. Looked at from another perspective, we believe there are a few obvious facts that emerge.

Go ahead and browse through the tables and figures for each of the six attempts. After you finish doing that, we’ll tell you what we see.

The First Attempt

Table 9: Number of Contacts Made and Percentage Contact Made, by Contact Score Decile for the First Attempt

The Second Attempt

Table 10: Number of Contacts Made and Percentage Contact Made, by Contact Score Decile for the Second Attempt

The Third Attempt

Table 11: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Third Attempt

The Fourth Attempt

Table 12: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Fourth Attempt

The Fifth Attempt

Table 13: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Fifth Attempt

The Sixth Attempt

Table 14: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Sixth Attempt

This is what we see:

  • For each of the six attempts, the contact rate increases as the score decile increases. There are some bumps and inconsistencies along the way (see Figure 10, for example), but this is clearly the overall pattern for each of the attempts.
  • For all the attempts, the contact rate for the lowest 20% of scores (deciles 1 and 2) is always substantially lower than the contact rate for the highest 20% of scores (deciles 9 and 10).
  • Once we reach the sixth attempt, the contact rates fall off dramatically for all but the tenth decile.

Now take a look at Table 15, which shows the total pledge money raised for each attempt (including the seventh attempt and eight or more attempts) by score decile. You can also look at Table 16, which shows the same information but with the amounts exceeding $1,000 highlighted in red.

Table 15: Total Pledge Dollars Raised In Each Attempt by Contact Score Decile

Table 16: Total Pledge Dollars Raised In Each Attempt by Contact Score Decile with Pledge Amounts Greater Than $1,000 Highlighted In Red

We could talk about these two tables in some detail, but we’d rather just say, “Wow!”

Some Concluding Remarks

We began this paper by saying that we wanted to introduce what might be the beginnings of a methodology for answering the question: “How many attempts should my callers make before they ‘make contact’ with an alum and either get a pledge or some other voice-to-voice response – or they give up and stop calling?”

We also said we’d like to walk you through a technique we’ve tried, and we’d like to ask you to send us some reactions to what we’ve done. So, if you’re willing, we’d really appreciate your getting back to us with some feedback on what we’ve done here.

Specifically, you might tell us how much you agree or disagree with these assertions:

  • There is no across-the-board number of attempts that you should apply in your program, or even to any segment in your program; the number of attempts you make to reach an alum very much depends on who that alum is.
  • There are some alums who should be called and called because you will eventually reach them and (probably) receive a pledge from them. There are other alums who should be called once, or not at all.
  • If the school we used in this paper is at all representative of other schools that do calling, then all across North America huge amounts of time and money are being wasted trying to reach alums with whom contact will never be made and from whom no pledges will ever be raised.
  • Anyone who is at a high level of decision making regarding the annual fund (whether inside the institution or a vendor) should be leading the charge for the kind of data analysis shown in this paper. If they’re not, someone needs to have a polite little chat with them.

We look forward to getting your comments. (Comment below, or email Kevin MacDonell at


16 November 2010

Keep the phones ringing – but not all of them

Filed under: Annual Giving, Phonathon, Segmentation — kevinmacdonell @ 11:32 am

How many times should you call the prospects in a Phonathon pool before giving up on that group? Five? Ten? 50? If you segment based on propensity to give, looking at your call results will give you the right answer. If you segment by other criteria, there is no right answer: You’ll make too many AND too few call attempts to those prospects — simultaneously.

Bear with me and I will try to explain.

This past summer, I proposed a new way to approach Phonathon segmentation. My top-level sort would be the propensity-to-give scores I came up with from my Phonathon-specific predictive model. I didn’t completely do away with the more “traditional” segmentation criteria (e.g., faculty and past giving status), but they had lower priority. (See Applying predictive modeling to Phonathon segmentation, 28 July 2010.)

So, how’s that working? We’re just past the middle of the term, so obviously no final verdict is in, but so far the results are looking favourable for the model-driven approach. I’ll have more to say about that in the coming weeks. Today I want to zero in on one particularly interesting aspect of Phonathon: The efficacy of multiple call attempts, and the crucial role predictive scores play in choosing the optimal amount of effort to expend.

In my segmentation plan, alumni with the highest scores get called first and most often. This is important. Ask any phonathon manager and they’ll tell you the biggest challenge right now is getting people to answer their phones. Regardless of a prospect’s score, between 60% and 65% of calls are going to answering machines. We have some talented fundraisers in the room; odds are good that if one of them can get you on the phone, you’re going to give! But they have to get you on the phone first, and that is proving incredibly challenging.

Given that barrier, it makes sense that we would want to call our best and most likely givers early and often. Many will never answer their phones, but the hope is that enough of them will answer to make many repeated attempts worth the time and expense.

Have a look at the chart below. This shows the number of “Yes” pledges (i.e., with specified dollar amounts) that have come in on the first, second, third, and so on up to the ninth call attempt. Each coloured bar represents prospects with a certain decile score from the predictive model (with 10 being the highest decile). So far only one person has picked up the phone the ninth time we called him or her, and made a pledge — and that person, no surprise, is in the highest decile.

Pledge numbers drop dramatically as the number of call attempts increases, even for the 10s — but a quick glance shows that the 10s consistently give twice as often as the next decile down. Much below decile 8, and calling more than a handful of times seems to be a lost cause.

Let me anticipate an objection and say, yes, I know: We have spent far more time on the 9s and 10s than we have on the lower score deciles; therefore, the lower-scoring alumni are showing up with fewer call attempts and fewer pledges. Let’s look at it a different way, then. Let’s include the “Nos” as well as the “Yeses”. Then we can see what percentage of decisions went in our favour at each call-attempt stage and each score decile. (For simplicity’s sake I will leave out the “maybes” and other results that are not really a decision.)
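The yes-percentage-by-attempt calculation can be sketched as a pair of counters. The decision tuples here are invented for illustration; the real data comes from the call results file:

```python
# Sketch: share of decisions that were a "Yes", by call-attempt number,
# counting only Yes/No outcomes (maybes excluded, as in the post).
# The decision tuples are invented for illustration.
from collections import Counter

def yes_rate_by_attempt(decisions):
    """decisions: iterable of (attempt_number, 'yes' or 'no') pairs."""
    totals, yeses = Counter(), Counter()
    for attempt, outcome in decisions:
        totals[attempt] += 1
        yeses[attempt] += outcome == "yes"
    return {a: yeses[a] / totals[a] for a in sorted(totals)}

decisions = [
    (1, "yes"), (1, "no"), (1, "no"), (1, "no"),
    (2, "yes"), (2, "no"),
    (3, "yes"), (3, "yes"), (3, "no"),
]
rates = yes_rate_by_attempt(decisions)
```

Grouping the pairs by score decile as well (a dict keyed on `(decile, attempt)`) gives the breakdown shown in the charts that follow.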

To start off, here’s a summary of how all Yes AND No prospects to date break down by number of call attempts.

The number of decisions falls off sharply with each additional attempt — by about half, in fact. With each additional call attempt, it is that much less likely that you’ll get the prospect on the line. That’s the nature of the beast.

That’s the bad news, but here’s something interesting. The percentage of “Yes” responses starts relatively low with the prospects who answer on the first attempt, but seems to go UP slightly with the number of attempts it takes to get a decision! See this chart:

I wouldn’t have expected to see that, but it does go to show that multiple callbacks can pay off. Not for all prospects, though! I will show in a moment that the steady or increasing pledge percentage is due to the activity of a select group of prospects.

Before we continue: I wouldn’t pay too much attention to the bars for the seventh attempt and higher. These represent small numbers of prospects, and I don’t trust the percentages. Therefore, for the sake of simplicity, let’s focus on prospects who made decisions at attempts numbered 1 through 6 only. For the same reason, let’s focus on score deciles 6 and up, because while 161 prospects at score level 6 have made decisions so far, only five prospects at score level 5 have done so — that’s not enough data to make valid conclusions.

Okay … I know the next chart looks confusing, but stay with me: This one really brings it all together. Have a look, then read my discussion below.

Start over on the left-hand side, with the group of bars that shows how people who made a decision on the first call attempt break down by decile score. The 10s outshine everyone else, while alumni with scores of 6 through 9 are all neck-and-neck with regard to percentage of Yes decisions.

Now move right, to the next group of bars which represent people who made a decision at call attempt number two. The 10s are holding their own, and some of the other score levels are improving their pledge percentages. After three call attempts, though, the lower score levels mostly fall away, while the 10s keep improving as a percentage of decisions made. The numbers at the 6th attempt are small — only 32 Score 10s said yes on the 6th call (out of 61 decisions) — but the trend is pretty encouraging, no?

Does it not seem worthwhile to keep calling our highest scorers for a while yet? With more calling and more data, it should become clear when a reasonable cutoff for each score decile has been reached, but we have a ways to go yet.

How do you currently decide when a calling pool is exhausted? When the contact rate falls below some level that you consider acceptable? When the dollars per employee hour are too low? Or simply when the calling room seems a little too quiet?

Well, in my calling room we are going to have some rather quiet nights in the coming weeks. Contact rates do drop rapidly as pools are called repeatedly. But I know now that pledge rates for the highest-scoring alumni are good enough to justify a little bit of boredom on the part of callers, because nightly totals remain respectable as long as we focus on the best prospects.

The bottom line: Keep on calling, but only if you’re calling the right people.

28 July 2010

Applying predictive modeling to Phonathon segmentation

Filed under: Annual Giving, Donor acquisition, Phonathon, Segmentation — kevinmacdonell @ 8:05 pm

Segmenting the prospect pool is where the rubber hits the road in modeling for the phone program. Too bad there’s no road map!

When I was a prospect researcher working in Major Gifts and doing a little predictive modeling on the side, I was innocent of the intricacies of Annual Giving. I produced the models, and waited for the magic to happen. Hey, I wondered, how hard can it be? I tell you who’s most likely to give, and you get to work on that. Done.

Today, I’m responsible for running a phonathon program. Now I’M the guy who’s supposed to apply predictive scores to the Annual Fund, without messing everything up. What looked simple from the outside now looks like Rubik’s Cube. And I never solved Rubik’s Cube. (Had the book, but not the patience.)

In the same way, I have found that books and other resources on phonathon management are just no help when it comes to propensity-based segmentation. There seem to be no readily-available prototypes.

So let’s change that. Today I’m going to share a summary of the segmentation we hope to implement this fall. It’s not as detailed as our actual plan, but should give enough information to inform the development of your own plan. I’ve had to tear the Rubik’s Cube apart, as I used to do as a kid, and reassemble from scratch.

Click on this link to open a Word doc: “Phone segments”. Each row in this document is a separate segment. The segments are grouped into “blocks,” according to how many call attempts will be applied to each segment. Notice that the first column is “Decile Score”. That’s right, the first level of selection is going to be the propensity-to-give predictive score I created for Phonathon.

It would seem that shifting from “traditional” segmentation is just as easy as that, but in fact making this change took a lot of hard thinking and consultation with others who have experience with phonathon. (*** See credits at end.)

Why was it so hard? Read on!

The first thing we have to ask ourselves is, why do we segment at all? The primary goals, as I see them, are:

  1. PRIORITIZATION: Focusing attention and resources on the best prospects
  2. MESSAGING: Customizing appeals to natural groupings of alumni

There are other reasons, including being able to track performance of various groups over time, but these are the most important at this planning stage.

By “prioritization,” I mean that alumni with the greatest propensity to give should be given special consideration in order to maximize response. Alumni who are most likely to give should:

  • receive the most care and attention with regard to when they are asked (early in the calling season, soonest after mail drops, etc.);
  • be assigned to the best and most experienced callers, and
  • have higher call-attempt limits than other, lower-propensity alumni.

The other goal, “messaging,” is simple enough to understand: We tailor our message to alumni based on what sort of group they are in. Alumni fall into a few groups based on their past donor history (LYBUNT, SYBUNT, never donor), which largely determines our solicitation goal for them (Leadership, Renewal, Acquisition). Alumni are also segmented by faculty, a common practice when alumni are believed to feel greater affinity for their faculty than they do the university as a whole. There may also be special segments created for other characteristics (young alumni, for example), or for special fundraising projects that need to be treated separately.

The “message” goal is often placed at the centre of phonathon segmentation — at the expense of optimizing treatment of the best prospects, in my view. In many programs, a rigid structure of calling by faculty commonly prevails, exhausting one message-defined pool (e.g., “Law, Donors”) before moving on to the next. There are benefits to working one homogeneous calling pool at a time — callers can more quickly become familiar with the message (and objections to it) if it stays consistent through the night. However, overall gains in the program might be realized by taking a more propensity-driven approach.

Predictive modeling for propensity to give is the “new thing” that allows us to bring prioritization to the fore. Traditionally, propensity to give has been determined mainly by previous giving history, which is based on reasonable assumptions: Alumni who have given recently are most likely to give again. This approach works for donors, but is not helpful for segmenting the non-donor pool for acquisition. Predictive modeling is a marked improvement over giving history alone for segmenting donors as well: a never-donor who has a high likelihood of giving is far more valuable to the institution than a donor who is very unlikely to renew. Only predictive modeling can give us the insight into the unknown to allow us to decide who is the better prospect.

The issue: Layers of complexity

We need to somehow incorporate the scores from the predictive model into segmentation. But simply creating an additional level of segmentation will create an unreasonable amount of complexity: “Score decile 10, Law, donors”, “Score decile 9, Medicine, non-donors”, etc. etc. The number of segments would become unmanageable and many of them would be too small, especially when additionally broken up by time zone.

I considered keeping the traditional segments (Faculty and donor status) and simply ordering the individual prospects within each segment using a very granular score. This would require us to make a judgment call about when we should drop a segment and move on to the next one. The risk is that, in leaving it to our judgment, we will either drop the segment too early, leaving money on the table, or call too deep into the segment before moving on. Calling alumni with a decile score of 7 before making at least one attempt on ALL the 10s runs counter to the goal of prioritizing the best prospects.

So, what should we do?

The proposed new strategy going forward will draw a distinction between Prioritization and Messaging. Calling segments will be based on a combination of Propensity Score and Donor Status. More of the work involved in the “message” component (based on Faculty and past giving designations) will be managed at the point of the call, via the automated calling system and the caller him/herself.

The intention is to move messaging out of segmentation and into a combination of our automated dialing system’s conditional scripting features and the judgment of the caller. The callers will continue to be shown specific degree information, with customized scripts based on this information. The main difference from the caller’s point of view is that he or she will be speaking with alumni of numerous degree types on any given night, instead of just one or two.

Our system offers the ability to compose scripts that contain conditional statements, so that the message the caller presents changes on the fly in response to the particulars of the prospect being called (e.g., degree and faculty, designation of last gift, and so on). This feature works automatically and requires no effort from callers, except to the extent that there are more talking points to absorb simultaneously.

The caller’s prospect information screen offers data on a prospect’s past giving. When historical gift designations disagree with a prospect’s faculty, the caller will need to shift gears slightly and ask the prospect if he or she wishes to renew his or her giving to that designation, rather than the default (faculty of preferred degree).

Shifting this aspect from segmentation to the point of the call is intended to remove a layer of complexity from segmentation, thereby making room for propensity to give. See?

‘Faculty’ will be removed as a primary concern in segmentation, by collapsing all the specific faculties into two overarching groups: Undergraduate degrees and graduate/professional degrees. This grouping preserves one of the fundamental differences between prospects (their stage of life while a student) while preventing the creation of an excessive number of tiny segments.

Have a look at the “Phone segments” document again. The general hierarchy for segmentation will be Score Decile (ten possible levels), then Donor Status (two levels, Donor and Non-Donor), then Graduate-Professional/Undergraduate (two levels). Therefore the number of possible segments is 10 x 2 x 2 = 40. In practice there will be more than 40, but this number will be manageable. As well, although we will call as many prospects as we possibly can, it is not imperative that we call the very lowest deciles, where the probability of finding donors is extremely low. Having leftover segments at the end of the winter term is likely, but not cause for concern.
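The 10 x 2 x 2 hierarchy can be enumerated directly. The label format here is illustrative, not an actual segment code:

```python
# Sketch: enumerating the calling segments from the hierarchy described
# above -- Score Decile (10 levels) x Donor Status (2) x degree group (2)
# -- ordered so the highest deciles are called first. The label format
# is illustrative, not an actual segment code.
from itertools import product

deciles = range(10, 0, -1)                    # decile 10 gets called first
statuses = ("Donor", "Non-Donor")
degree_groups = ("Grad-Professional", "Undergrad")

segments = [
    f"D{d:02d} / {status} / {group}"
    for d, status, group in product(deciles, statuses, degree_groups)
]
```

Because the deciles are generated in descending order, simply working down this list honours the prioritization goal: every decile-10 segment is exhausted before any decile-9 segment begins.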

This is only a general structure — some segments may be split or collapsed depending on how large or small they are. As well, I will break out other segments for New Grads and any special projects we are running this year. And call attempt limits may require rejigging throughout the season, based on actual response.

New Grad calling is limited to Acquisition, for the purpose of caller training and familiarization. I prefer Renewal calling to be handled by experienced callers, therefore new-grad renewals are included in other segments.

Other issues that may require attention in your segmentation include double-alumni households (do both spouses receive a call, and if so, when?), and creating another segment to capture alumni who did not receive a propensity score because they were not contactable at time of model creation.

Potential issues

Calling pools are mixed with regard to faculty, so the message will vary from call to call. Callers won’t know from the outset who they will be speaking with (Law, Medicine, etc.), and will require multiple scripts to cover multiple prospect types. Training and familiarization with the job will take longer.

The changes will require a little more attentiveness on the part of call centre employees. The script will auto-populate the alum’s preferred faculty. However, the caller must be prepared to modify the conversation on the fly based on other information available, i.e. designation of last (or largest) gift. The downside is that callers may take more time to become proficient. However, the need to pay attention to context may help to keep callers more engaged with their work, as opposed to mindlessly reading the same script over and over all night.

Another potential issue is that some faculties are at a disadvantage because they have fewer high-scoring alumni. The extent to which this might be a problem can only be determined by looking at the data to see how each faculty’s alumni are distributed by score decile. Some redistribution among segments may be necessary if any one faculty is found to be at a severe disadvantage. Note that it cuts both ways, though: In the traditional segmentation, entire faculties were probably placed at a disadvantage because they had lower priority — based on nothing more than the need to order them in some kind of sequence.

As I say, this is all new, and untested. How large or small the proposed segments will be remains undetermined. How well the segmentation will work is not known. I am interested to hear how others have dealt with the issue of applying their predictive models to phonathon.

*** CREDITS: I owe thanks to Chris Steeves, Development Officer for the Faculty of Management at Dalhousie University, and a former Annual Giving Officer responsible for Phonathon, for his enthusiastic support and for certain ideas (such as collapsing faculties into ‘graduate’ and ‘undergraduate’ categories) which are a key part of this segmentation plan. Also, thanks to Marni Tuttle, Associate Director, Advancement Services at Dalhousie University, for her ideas and thorough review of this post in its various drafts.
