CoolData blog

30 April 2013

Final thoughts on Phonathon donor acquisition

No, this is not the last time I’ll write about Phonathon, but after today I promise to give it a rest and talk about something else. I just wanted to round out my post on the waste I see happening in donor acquisition via phone programs with some recent findings of mine. Your mileage may vary, or “YMMV” as they say on the listservs, so as usual don’t just accept what I say. I suggest questions that you might ask of your own data — nothing more.

I’ve been doing a thorough analysis of our acquisition efforts this past year. (The technical term for this is a WTHH analysis … as in “What The Heck Happened??”) I found that getting high phone contact rates seemed to be linked with making a sufficient number of call attempts per prospect. For us, any fewer than three attempts per prospect is too few to acquire new donors in any great number. In general, contact rates improve with call attempt numbers above three, and after that, the more the better.

“Whoa!”, I hear you protest. “Didn’t you just say in your first post that it makes no sense to have a set number of call attempts for all prospects?”

You’re right — I did. It doesn’t make sense to have a limit. But it might make sense to have a minimum.

To get anything from an acquisition segment, more calling is better. However, by “call more” I don’t mean call more people. I mean make more calls per prospect. The RIGHT prospects. Call the right people, and eventually many or most of them will pick up the phone. Call the wrong people, and you can ring them up 20, 30, 50 times and you won’t make a dent. That’s why I think there’s no reason to set a maximum number of call attempts. If you’re calling the right people, then just keep calling.

What’s new here is that three attempts looks like a solid minimum. This is higher than what I see some people reporting on the listservs, and well beyond the capacity of many programs as they are currently run — the ones that call every single person with a phone number in the database. To attain the required amount of per-prospect effort, those schools would have to increase phone capacity (more students, more nights), or load fewer prospects. The latter option is the only one that makes sense.

Reducing the number of people we call in order to acquire new donors means using a predictive model, or at least some basic data mining and scoring, to figure out who is most likely to pick up the phone. I’ve built models that do that for two years now, and after evaluating their performance I can say that they work okay. Not super fantastic, but okay. I can live with okay … in the past five years our program has made close to one million call attempts. Even a marginal improvement in focus at that scale of activity makes a significant difference.

You don’t need to hack your acquisition segment in half today. I’m not saying that. To get new donors you still need lots and lots of prospects. Maybe someday you’ll be calling only a fraction of the people you once did, but there’s no reason you can’t take a gradual approach to getting more focused in the meantime. Trim things down a bit in the first year, evaluate the results, and fold what you learned into trimming a bit more the next year.

18 April 2013

A response to ‘What do we do about Phonathon?’

I had a thoughtful response to my blog post from earlier this week (What do we do about Phonathon?) from Paul Fleming, Database Manager at Walnut Hill School for the Arts in Natick, Massachusetts, about half an hour from downtown Boston. With Paul’s permission, I will quote from his email, and then offer my comments afterward:

I just wanted to share with you some of my experiences with Phonathon. I am the database manager of a 5-person Development department at a wonderful boarding high school called the Walnut Hill School for the Arts. Since we are a very small office, I have also been able to take on the role of the organizer of our Phonathon. It’s only been natural for me to combine the two roles and analyze the worth of this event, and I’m happy to say, for our own school, this event is amazingly worthwhile.

First of all, as far as cost vs. gain, this is one of the cheapest appeals we have. Our Phonathon callers are volunteer students who are making calls either because they have a strong interest in helping their school, or they want to be fed pizza instead of dining hall food (pizza: our biggest expense). This year we called 4 nights in the fall and 4 nights in the spring. So while it is an amazing source of stress during that week, there aren’t a ton of man-hours put into this event other than that. We still mail letters to a large portion of our alumni base a few times a year. Many of these alumni are long-shots who would not give in response to a mass appeal, but our team feels that the importance of the touch point outweighs the short-term inefficiencies that are inherent in this type of outreach.

Secondly, I have taken the time to prioritize each of the people who are selected to receive phone calls. As you stated in your article, I use things like recency and frequency of gifts, as well as other factors such as event participation or whether we have other details about their personal life (job info, etc). We do call a great deal of lapsed or nondonors, but if we find ourselves spread too thin, we make sure to use our time appropriately to maximize effectiveness with the time we have. Our school has roughly 4,400 living alumni, and we graduate about 100 wonderful, talented students a year. This season we were able to attempt phone calls to about 1,200 alumni in our 4 nights of calling. The higher-priority people received up to 3 phone calls, and the lower-priority people received just 1-2.

Lastly, I was lucky enough to start working at my job in a year in which there was no Phonathon. This gave me an amazing opportunity to test the idea that our missing donors would give through other avenues if they had no other way to do so. We did a great deal of mass appeals, indirect appeals (alumni magazine and e-newsletters), and as many personalized emails and phone calls as we could handle in our 5-person team. Here are the most basic of our findings:

In FY11 (our only non-Phonathon year), 12% of our donors were repeat donors. We reached about 11% participation, our lowest ever. In FY12 (the year Phonathon returned):

  • 27% of our donors were new/recovered donors, a 14% increase from the previous year.
  • We reached 14% overall alumni participation.
  • Of the 27% of donors who were considered new/recovered, 44% gave through Phonathon.
  • The total number of donors we gained from FY11 to FY12 was about the same as the number of people who gave through the Phonathon.
  • In FY13 (still in progress, so we’ll see how this actually plays out), 35% of the previously-recovered donors who gave again gave in response to less work-intensive mass mailing appeals, showing that some of these Phonathon donors can, in fact, be converted and (hopefully) cultivated long-term.

In general, I think your article was right on point. Large universities with a for-pay, ongoing Phonathon program should take a look and see whether their efforts should be spent elsewhere. I just wanted to share with you my successes here and the ways in which our school has been able to maintain a legitimate, cost-effective way to increase our participation rate and maintain the quality of our alumni database.

Paul’s description of his program reminds me there are plenty of institutions out there that don’t have big, automated, and data-intensive calling programs gobbling up money. What really gets my attention is that Walnut Hill uses alumni affinity factors (event attendance, employment info) to prioritize calling to get the job done on a tight schedule and with a minimum of expense. This small-scale data mining effort is an example for the rest of us who have a lot of inefficiency in our programs due to a lack of focus.

The first predictive models I ever created were for a relatively small university Phonathon that was run with printed prospect cards and manual dialing — a very successful program, I might add. For those of you at smaller institutions wondering if data mining is possible only with massive databases, the answer is NO.

And finally, how wonderful it is that Walnut Hill can quantify exactly what Phonathon contributes in terms of new donors, and new donors who convert to mail-responsive renewals.

Bravo!

15 April 2013

What do we do about Phonathon?

Filed under: Alumni, Annual Giving, Phonathon — kevinmacdonell @ 5:41 am

I love Phonathon. I love what it does, and I love the data it produces. But sad to say, Phonathon may be the sick old man of fundraising. In fact some have taken its pulse and declared it dead.

A few weeks ago, a Director of Annual Giving named Audra Vaz posted this question to a listserv: “I’m writing to see if any institutions out there have transitioned away from their Phonathon program. If so, how did it affect your Annual Giving program?”

A number of people immediately came to the defence of Phonathon with assurances of the long-term value of calling programs. The responses went something like this: Get rid of Phonathon?? It’s a great point of connection between an institution and its alumni, particularly its younger alumni. It’s the best tool for donor acquisition. It’s a great way to update contact and employment information. Don’t do it!

Audra wasn’t satisfied. “As currently run, it’s expensive and ineffective,” she wrote of her program at Florida Atlantic University in Boca Raton. “It takes up 30% of my budget, brings in less than 2% of Annual Fund donations and only has a 20% ROI. I could use that money for building societies, personal solicitations, and direct mail which is much more effective for us. In a difficult budget year, I cannot be nostalgic and continue to justify the bleed for a program that most institutions do yet hardly any makes money off of. Seems like a bad business model to me.”

I can’t disagree with Audra. Anyone following fundraising listservs knows that, in general, contact rates and productivity are declining year after year. And out of the contacts it does manage to make, Phonathon generates scads of pledges that are never fulfilled, entailing the additional cost of reminder mailings and write-offs. There are those who say that Phonathon should be viewed as an investment and not an expense. I have been inclined to that view myself. The problem is that yes, it IS an expense, and not a small one. If Phonathons create value in all the other ways that the defenders say they do, then where are the numbers to prove it? Where’s the ROI? Audra had numbers; the defenders did not. At strategic planning time, numbers talk louder than opinions.

When I contacted Audra recently to get permission to use her name, she told me she has opted to keep her Phonathon program for now, but will market its services to other university divisions to turn it into a revenue generator (athletics and arts ticket sales, admissions welcome calls, invitations to events, and alumni membership renewals). That sounds like a good idea. I can think of a number of additional ways to keep Phonathon alive and relevant, but since this is a data-related blog I will focus on just two.

1. Stop calling everybody!

At many institutions, Phonathon is used as a mass-contact tool for indiscriminately soliciting anyone the Annual Fund believes might have a pulse. This approach is becoming less and less sustainable. The same question is asked repeatedly on the listservs: “How many times, on average, do you attempt to call alumni non-donors before you retire their call sheet?” And then people give their one-size-fits-all answers: five times, seven times, whatever times per record. Given how graduating classes have increased in size for most institutions, I am not surprised to read that some programs are stretched too thin to call very deeply. As one person wrote recently: “Because of time and resources constraints, we’re lucky to get two attempts in with nondonor/long lapsed alumni.”

I just don’t get it.

We know that people who have attended events are more likely to pick up the phone. We know that alumni who have shared their job title with us are more likely to pick up the phone. We know that alumni who have given us their email address are more likely to pick up the phone. So why in 2013 are schools still expending the same amount of energy on each of their prospective donors as if they were all exactly alike? They are NOT all alike, and these schools are wasting time and money.

If you’ve got automated calling software, you should be adding up the number of times you’ve successfully reached individual alumni over the years (regardless of the call result), and use that data to build predictive models for likelihood to answer the phone. If you don’t have that historical data, you should at least consider an engagement-based scoring system to focus your efforts on alumni who have demonstrated some of the usual signs of affinity: coming to events, sharing contact and employment information, having other family members who are alumni, volunteering, responding to surveys and so on.
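
If you don’t have software for it, an engagement-based score can live in a simple script or even a spreadsheet. Here is a minimal sketch in Python with pandas, assuming a hypothetical prospect file with one row per alum and 0/1 affinity flags; every file name, column name, and weight below is a placeholder to adapt to your own data:

```python
import pandas as pd

# Hypothetical prospect file: one row per alum, 0/1 affinity flags.
prospects = pd.read_csv("prospects.csv")

# Weights are illustrative guesses, not gospel -- tune them against
# your own call history if you have it.
score_weights = {
    "attended_event": 2,       # has attended an event since graduation
    "has_employment_info": 1,  # shared a job title or employer
    "has_email": 1,            # gave us an email address
    "family_alumni": 1,        # other family members are alumni
    "has_volunteered": 2,      # volunteering is a strong affinity signal
    "answered_survey": 1,      # responded to a survey
}

# The score is just a weighted sum of the flags.
prospects["engagement_score"] = sum(
    prospects[col].fillna(0).astype(int) * weight
    for col, weight in score_weights.items()
)

# Call the high scorers deeply; load fewer of the low scorers.
prospects = prospects.sort_values("engagement_score", ascending=False)
print(prospects["engagement_score"].value_counts().sort_index())
```

If you have even one year of call history, you can check which flags actually line up with answered calls and adjust the weights accordingly.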

A phone contact propensity score (and related models such as donor acquisition likelihood) will allow you to make cuts to your program when and if the time comes. You can feel more confident that you’re trimming the bottom, cutting away the least productive slice of your program.

2. Think outside Phonathon!

Your phone program is a data generation machine, giving you a wide window on the behaviours of your alumni and donors. I’m not talking just about address updates, as valuable as those are. You know how many times they’ve picked up the phone when they see your ID come up on the display, and you might also know how long they’ve spent on the phone with your student callers. This is not trivial information, nor is it of interest only to Phonathon managers.

Relate this behavioural data to other desired behaviours: Are your current big donors characterized by picking up more often? Do your Planned Giving expectancies tend to have longer conversations on average? What about volunteering, mentoring, and other activities? Phone contact history is real, affinity-related data, delivered fresh to you daily, lifting the curtain on who likes you.

(When I say real data, I mean REAL. This is a record of what individuals have actually DONE, not what they’ve stated as a preference in a survey. This data doesn’t lie.)
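
To make that concrete, here is a rough sketch of the first question above — do big donors pick up more often? — in Python with pandas. The file and column names are hypothetical stand-ins for whatever your calling software and donor database export:

```python
import pandas as pd

# Hypothetical per-alum file: total attempts, total pickups, and
# lifetime giving pulled from the calling software and the database.
df = pd.read_csv("phone_history.csv")  # columns assumed: alum_id, attempts, pickups, lifetime_giving

df["pickup_rate"] = df["pickups"] / df["attempts"].clip(lower=1)
df["big_donor"] = df["lifetime_giving"] >= 10000  # threshold is illustrative

# Compare pickup behaviour for big donors versus everyone else.
print(df.groupby("big_donor")["pickup_rate"].describe())
```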

A few closing thoughts …

I said earlier that Phonathon has been used (or misused) as a mass-contact tool. Software and automation enable a hired team of students to make a staggering number of phone calls in a very short time. The bulk of long-lapsed and never-donors are approached by phone rather than mail: the cost of a single call attempt seems negligible, so Phonathon managers spread their acquisition efforts as thinly as possible, trying to turn over every last stone.

There’s something to be said for having adequate volume in order to generate new donors, but here’s the problem: the phone is no longer a mass-contact medium. In fact it’s well on its way to becoming a niche medium, handled by a whole new type of device. Some people answer the phone and respond positively to being approached that way, and for that reason phone will be important for as long as there are phones. But the masses are no longer answering.

These days some fundraisers think of email as their new mass-contact medium of choice. Again they must be thinking in terms of cost, since it hardly matters whether you’re sending 1,000 emails or 100,000 emails. And again they’re mistaken in thinking that email is practically free — they’re just not counting the full cost to the institution of the practice of spamming people.

The truth is, there is no reliable mass-contact medium anymore. If email (or phone, or social media) is a great fundraising channel, it’s not because it’s a seemingly cheap way to reach out to thousands of people. It’s a great fundraising channel when, and only when, it reaches out to the right people at the right time. To sum up:

  1. Alumni and donors are not all the same. They are not defined by their age, address or other demographic groupings. They are individual human beings.
  2. They have preferred channels for communicating and giving.
  3. These preferences are revealed only through observation of past behaviours. Not through self-reporting, not through classification by age or donor status, not by any other indirect means.
  4. We cannot know the real preferences of everyone in our database. Therefore, we model on observed past behaviours to make intelligent guesses about the preferences we don’t already know.
  5. Our models are an improvement on current practice, but they are imperfect. All models are wrong; we will make them better. And we will keep Phonathon healthy and productive.

3 October 2011

Data 1, Gut Instinct 0

Filed under: Annual Giving, Best practices, Phonathon — kevinmacdonell @ 8:30 am

Sometimes I employ a practice in our Phonathon program simply because my gut says it’s gotta work. Some things just seem so obvious that it doesn’t seem worthwhile testing them to prove my intuition is valid. And like a lot of people who work in Annual Giving, I like to flatter myself that I can make a non-engaged alum give just by making shrewd tweaks to the program.

It turns out that I am quite wrong. I am thinking about a practice that seems to be part of the Phonathon gospel of best practices. I firmly believed in it, and I got serious about using it this fall. As the song says, though, it ain’t necessarily so.

When possible, I am pairing up student callers with alumni whose degree is in the same faculty of study. If I have business students in the calling room, for example, I’ll assign them alumni with degrees associated with the Faculty of Management. A grad with a BSc majoring in chemistry, meanwhile, will get a call from a student majoring in one of the sciences, rather than arts or business. It’s not perfect: There are too many degree programs, current and historic, for me to get any more specific than the overall faculty, but at least it increases the chance that student and alum will have something in common to talk about.

It’s easy to see why this ought to work. When speaking with young alumni, callers are somewhat more likely to have had certain professors or classes in common, and their interests may be aligned — for example, the alum might be able to provide the student with a glimpse into the job market that awaits. With older alumni, the callers might at least know the campus and buildings that alumni of the past inhabited just as they do today. If alumni feel so inclined, the conversation might even lead to a discussion about life and career.

These would be meaningful conversations, the kind of connection we hope to achieve on the phone. Just that much, even without a gift (this year), would be a desirable result.

On the other hand … if faculty pairings really lead to longer, better-quality conversations, would we not expect that faculty-paired conversations would, on average, result in more gifts than non-paired conversations? In the long run, is that not our goal? If it makes no difference who asks whom, then why complicate things?

First let me say that I embarked on this analysis fully expecting that the data would demonstrate the effectiveness of faculty-paired conversations. I might be a data guy, but I am not unbiased! I really hoped that my intervention would actually produce results. Allow me to admit that I was quite disappointed by what I found.

Here’s what I did.

Last year, I did not employ faculty pairings. We made caller assignments based on prospects’ donor status (LYBUNT, SYBUNT, etc.), but not faculty. I don’t know how our automated software distributes prospects to callers, but I am comfortable saying that, with regard to the faculty of preferred degree, the distribution to callers was random. This more or less random assignment by faculty allowed me to compare “paired” conversations with “unpaired” conversations, to see whether one was better than the other with regard to length of call, participation rate, and average pledge.

I dug into the database of our automated calling application and I pulled a big file with every single call result for last year. The file included the caller’s ID, the length of the call in seconds, the last result (Yes Pledge, No Pledge, No Answer, Answering Machine, etc. etc.), and the pledge amount (if applicable).

Then I removed all the records that did not result in an actual conversation. If the caller didn’t get to speak to the prospect, faculty pairing is irrelevant. I kept any record that ended in Yes Pledge (specified-dollar pledge or credit card gift), Maybe (unspecified pledge), No Pledge, or a request to be put on the Do Not Call list.

I added two more columns (variables) of data: The faculty of the caller’s area of study, and the faculty of the prospect’s preferred degree. Because not all of our dozen or so faculties are represented in our calling room, I then removed all the records for which no pairing was possible. For example, because I employed no Law or Medicine students, 100% of our Law and Medicine alumni would end up on the “non-paired” side, which would skew the results.

As well, I excluded calls with call lengths of five seconds or less. It is doubtful callers would have had enough time to identify themselves in less than five seconds — therefore those calls do not qualify as conversations.

In the end, my data file for analysis contained the results of 6,500 conversations for which a pairing was possible. Each prospect record, each conversation, could have one of two states: ‘Paired’ or ‘Unpaired’. About 1,500 conversations (almost 22%) were paired, as assigned at random by the calling software.

I then compared the Paired and Unpaired groups by talk time (length of call in seconds), participation, and size of pledge.
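
For anyone who wants to replicate this, the whole preparation and comparison can be expressed in a few lines of pandas. This is only a sketch of the steps I described, with hypothetical file and column names standing in for whatever your calling software exports:

```python
import pandas as pd

# Hypothetical export of every call result for the year.
calls = pd.read_csv("call_results.csv")
# columns assumed: caller_id, call_length_secs, last_result,
#                  pledge_amount, caller_faculty, prospect_faculty

# 1. Keep only records that ended in an actual conversation.
conversations = ["Yes Pledge", "Maybe", "No Pledge", "Do Not Call"]
df = calls[calls["last_result"].isin(conversations)]

# 2. Drop prospects from faculties with no student caller (no pairing
#    possible) and calls of five seconds or less.
caller_faculties = df["caller_faculty"].dropna().unique()
df = df[df["prospect_faculty"].isin(caller_faculties)]
df = df[df["call_length_secs"] > 5].copy()

# 3. Flag paired conversations and compare the two groups.
df["paired"] = df["caller_faculty"] == df["prospect_faculty"]
df["yes_pledge"] = df["last_result"] == "Yes Pledge"

print(df.groupby("paired").agg(
    conversations=("last_result", "size"),
    mean_talk_secs=("call_length_secs", "mean"),
    participation=("yes_pledge", "mean"),
    median_pledge=("pledge_amount", "median"),  # NaNs (non-donors) are skipped
))
```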

1. Talk time

Better rapport-building on the phone implies longer, chattier calls. According to the table, “paired” calls are indeed longer on average, but not by much. A few seconds maybe.

2. Participation rate

The donor rates you see here are affected by all the exclusions, especially that of some key faculties. However, it’s the comparison we’re interested in, and the results are counter-intuitive. Non-paired conversations resulted in a slightly higher rate of participation (number of Yes Pledges divided by total conversations).

3. Average and median pledge

This table is also affected by the exclusion of a lot of alumni who tend to make larger pledges. Again, though, the point is that there is very little difference between the groups in terms of the amount given per Yes pledge.

The differences between the groups are not significant. Think about the range of values your callers get for common performance metrics (pledge rate, credit card rate, talk time, and so on). There are huge differences! If you want to move the yardsticks in your program, hire mature, friendly, chatty students who love your school and want to talk about it. Train them well. Keep them happy. Reward them appropriately. Retain them year over year so they develop confidence. These are the interventions that matter. Whom they are assigned to call doesn’t matter nearly as much.

Over and above that, pay attention to what matters even more than caller skills: The varying level of engagement of individual alumni. Call alumni who will answer the phone. Call alumni who will give a gift. Stop fussing over the small stuff.

You know what, though? Even faced with this evidence, I will probably continue to pair up students and alumni by faculty. First of all, the callers love it. They say they’re having better conversations, and I’m inclined to believe them. It’s not technically difficult to match up by faculty, so why not? As well, there might be nuances that I overlooked in my study of last year’s data. Maybe the faculty pairings are too broad. (Anytime you find economics in the same faculty as physics, you have to wonder how some people define Science. A discussion for someone else’s blog, perhaps.)

But my study has cast doubt on the usefulness of going to any great length to target alumni by faculty. For example, should I try hard to recruit a student caller from Law or Medicine to maximize on alumni from those faculties? Probably not.

Finally, I caution readers not to interpret my results as being generally applicable. I’m not saying that faculty pairing as a best practice is invalid. You need to determine for yourself whether a practice is part of your core strategy, or just a refinement, or completely useless. As I opined in my previous post (Are we too focused on trivia?), I suspect a lot of Annual Fund professionals aren’t making these distinctions.

The answers are in the data. Go find them.

16 August 2011

Phonathon pledges and time on the call: Another look

Filed under: Annual Giving, John Sammis, Peter Wylie, Phonathon — kevinmacdonell @ 11:54 am

by Peter B. Wylie, John Sammis, and Kevin MacDonell

(Download a printer-friendly PDF version here: Time on call and pledges P2)

Back in January of this year, the three of us posted a paper based on calling data from one higher education institution (Time on the call and how much the alum pledges). You can go back and take a look at the paper, but its essence is the strong relationship we saw between time spent on a call to an alum, and whether or not that alum made a pledge and how big the pledge was.

We weren’t bowled over by these findings, but we were certainly intrigued by them. In this paper we’ve got some more data to show you — data that provides “corroborative testimony” for that relationship between calling time and pledging. And we’ve got something a little bit extra for you, too.

We’ll start by tipping our hand just a little. We looked at calling time (in seconds) only for those alums with whom contact was made, and the result of the last call was labeled either “NO PLEDGE” or “SPECIFIED PLEDGE.”

Tables 1 – 3 show the calling time in seconds for the three schools (X, Y, and Z) that we looked at. Notice that we divided the alums called at each school into ten groups (called deciles) of approximately equal size.
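
If you’d like to build the same kind of decile split on your own calling data, here is a minimal sketch in Python with pandas; the file and column names are hypothetical. Ranking the talk times before cutting guarantees ten groups of approximately equal size even when many calls share the same length:

```python
import pandas as pd

# Hypothetical export: one row per contacted alum whose last call
# ended in NO PLEDGE or SPECIFIED PLEDGE.
df = pd.read_csv("school_x_calls.csv")  # columns assumed: alum_id, talk_secs, pledge_amount

# Rank first so tied talk times don't produce duplicate bin edges.
df["decile"] = pd.qcut(df["talk_secs"].rank(method="first"),
                       10, labels=range(1, 11))

# Median, minimum, and maximum talk time by decile, as in Tables 1-3.
print(df.groupby("decile")["talk_secs"].agg(["median", "min", "max"]))
```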

Table 1: Median Talk Time, Minimum Talk Time, and Maximum Talk Time by Decile for All Alums in School X Who Either Made a Specified Pledge or No Pledge

Table 2: Median Talk Time, Minimum Talk Time, and Maximum Talk Time by Decile for All Alums in School Y Who Either Made a Specified Pledge or No Pledge

Table 3: Median Talk Time, Minimum Talk Time, and Maximum Talk Time by Decile for All Alums in School Z Who Either Made a Specified Pledge or No Pledge

These three tables convey a lot of information that we think is worth looking through carefully. On the other hand, sometimes it’s just easier to look at a quick summary. And that’s what you’ll see in Table 4 and Figure 4; both show the median talk time (in minutes, not seconds) by decile for the three schools.

Table 4: Median Talk Time (in Minutes) by Decile for All Three Schools

There’s not much difference among the schools in terms of how much time their callers spent on the phone with alums. Schools X and Y look very similar; School Z callers appear to have been just a bit “chattier.”

Now let’s look at the pledge money that was received from alums in the three schools by our time-on-the-call deciles. It’s laid out for you in Tables 5-7 and Figures 5-7.

Table 5: Total Pledge Dollars and Mean Pledge Dollars by Talk Time Decile for All Alums in School X Who Either Made a Specified Pledge or No Pledge

Table 6: Total Pledge Dollars and Mean Pledge Dollars by Talk Time Decile for All Alums in School Y Who Either Made a Specified Pledge or No Pledge

Table 7: Total Pledge Dollars and Mean Pledge Dollars by Talk Time Decile for All Alums in School Z Who Either Made a Specified Pledge or No Pledge

These data are not tough to summarize. There is an obvious and strong relationship between time spent on the call with alums and how much the alums pledged. If someone pressed us for specifics, we’d say, “Look at the total pledge money received for deciles 1-3 (the bottom 30%) versus deciles 8-10 (the top 30%) for each school.”

Here they are:

  • School X: $6,850 versus $164,485 (24 times as much)
  • School Y: $25,032 versus $93,355 (3.7 times as much)
  • School Z: $3,554 versus $220,860 (62 times as much)

So far we’ve confirmed some of the findings from our January paper. But what about the extra we promised?

You’ll recall that the alums we looked at in this study were ones who had (on the last call made to them) either agreed to make a pledge, or who had told the caller they would not make a pledge.

Take a look at Tables 8-10 and Figures 8-10. They show the percentage of alums at each decile who chose either option.

Table 8: Percentage of No Pledges versus Specified Pledges by Talk Time Decile for School X

Table 9: Percentage of No Pledges versus Specified Pledges by Talk Time Decile for School Y

Table 10: Percentage of No Pledges versus Specified Pledges by Talk Time Decile for School Z

As is often the case with data analysis, we sort of happened upon what you’ve just seen in these tables and charts. We were looking at outcomes that were related to call length. We didn’t plan to look only at alums who either said they’d give a pledge or, “Nope, can’t help you out.” The thought just occurred to us as we were looking at lots of different possibilities. But look at what popped out. It almost appears as if we fudged the data. But we didn’t.

Some Concluding Thoughts

Here are three:

  1. We’ve now looked at call time data from four quite different higher education institutions. At this point, it would take a mountain of evidence from other schools to dissuade us from this notion: “The longer student callers talk to the alums they are soliciting, the more likely those callers are to obtain bigger and bigger pledges.”
  2. We are far from ready to tell call center managers: “Tell your callers to try to keep the alum on the phone as long as they can. If they do that, both your pledge rates and pledge amounts will go up dramatically.” It would be nice if things were that simple, but, of course, they are not. Some alums are quite willing to give a healthy pledge, and the last thing they want to do is yak on and on with a kid who went to a place they graduated from when people used rotary phones. Some callers are naturally chatty and engaging, as are some alums. Others are not. Human beings are complicated creatures and they vary enormously. One-size-fits-all advice is almost always unhelpful for dealing with them.
  3. That said, we do think this relationship between time on the call and pledge rate/pledge amount is worth a lot more investigation. A good example: not long ago, Kevin (a call center manager himself) said:

“I’m always interested in identifying ways to predict which of those people who’ve never given us anything before will finally make a pledge. I’m going to start looking at the talk time of lifetime non-givers from last year who ended up making a pledge this year. I bet the talk time for those who converted will be a lot longer than for those who didn’t.”

Great idea. Let’s hear some more from you all.
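
For readers who want to try the same comparison Kevin describes, a bare-bones version might look like this in pandas. The file and column names are hypothetical; “converted” simply flags lifetime non-givers who made their first pledge this year:

```python
import pandas as pd

# Hypothetical file: one row per lifetime non-giver called last year.
# talk_secs = total talk time last year; converted = 1 if they made
# their first-ever pledge this year, else 0.
df = pd.read_csv("nongiver_talk_time.csv")

# Did the converters spend more time on the phone with callers?
print(df.groupby("converted")["talk_secs"].agg(["count", "median", "mean"]))
```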

21 June 2011

How many times to keep calling?

Guest post by Peter Wylie and John Sammis

(Click to download a printer-friendly .PDF version here: NUMBER OF ATTEMPTS 050411)

Since Kevin MacDonell took over the phonathon at Dalhousie University, he and I have had a number of discussions about the call center and how it works. I’ve learned a lot from these discussions, especially because Kevin often raises intriguing questions about how data analysis can make for a more efficient and productive calling process.

One of the questions he’s concerned with is the number of call attempts it’s worth making to a given alum. That is, he’s asking, “How many attempts should my callers make before they ‘make contact’ with an alum and either get a pledge or some other voice-to-voice response – or they give up and stop calling?”

Last January Kevin was able to gather some calling data from several schools that may, among other things, offer the beginnings of a methodology for answering this question. What we’d like to do in this piece is walk you through a technique we’ve tried, and we’d like to ask you to send us some reactions to what we’ve done.

Here’s what we’ll cover:

  1. How we decided whether contact was made (or not) with 41,801 alums who were recently called by the school we used for this exercise.
  2. Our comments on the percentage of contacts made and the pledge money raised for each of eight categories of attempts: 1, 2, 3, 4, 5, 6, 7, and 8 or more.
  3. How we built an experimental predictive model for the likelihood of making contact with a given alum.
  4. How we used that model to see when it might (and might not) make sense to keep calling an alum.

Deciding Whether Contact Was Made

John Sammis and I do tons of analyses on alumni databases, but we’re nowhere near as familiar with call center data as Kevin is. So I asked him to take a look at the table you see below that shows the result of the last call made to almost 42,000 alums. Then I asked, “Kevin, which of these results would you classify as contact made?”

Table 1: Frequency Percentage Distribution for Results of Last Call Made to 41,801 Alums

He said he’d go with these categories:

  • ALREADY PLEDGED
  • NO PLEDGE
  • NO SOLICIT
  • REMOVE LIST
  • SPEC PLDG (i.e., Specified Pledge)
  • UNSP PLDG (i.e., Unspecified Pledge)
  • DO NOT CALL

Kevin’s reasoning was that, with each of these categories, there was a final “voice to voice” discussion between the caller and the alum. Sometimes this discussion had a pretty negative conclusion. If the alum says “do not call” or “remove from list” (1.13% and 0.10% respectively), that’s not great. “No pledge” (29.72%) and “unspecified pledge” (4.15%) are not so hot either, but at least they leave the door open for the future. “Already pledged” (1.06%)? What can you say to that one? “And which decade was that, sir?”

Lame humor aside, the point is that Kevin feels (and I agree), that, for this school, these categories meet the criterion of “contact made.” The others do not.
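
In code, the classification is nothing more than a lookup against that list of result codes. Here is a tiny Python sketch using the categories above (the exact code strings will vary by calling software):

```python
# Result codes from Table 1 that count as "contact made" for this school.
CONTACT_RESULTS = {
    "ALREADY PLEDGED", "NO PLEDGE", "NO SOLICIT", "REMOVE LIST",
    "SPEC PLDG", "UNSP PLDG", "DO NOT CALL",
}

def contact_made(last_result: str) -> bool:
    """True if the last call ended voice-to-voice with the alum."""
    return last_result.strip().upper() in CONTACT_RESULTS
```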

Our Comments on Percentage Contact Made and Pledge Money Raised for Each of Eight Categories of Attempts

Let’s go back to the title of this piece: “How Many Times to Keep Calling?” Maybe the simplest way to decide this question is to look at the contact rate as well as the pledge rate by attempt. Why not? So that’s what we did. You can see the results in Table 2 and Figure 1 and Table 3 and Figure 2.

Table 2: Number of Contacts Made and Percentage Contact Made For Each of Eight Categories of Attempts

Table 3: Total pledge dollars and mean pledge dollars received for each of eight categories of attempts

We’ve taken a hard look at both these tables and figures, and we’ve concluded that they don’t really offer helpful guidelines for deciding when to stop calling at this school. Why? We don’t see a definitive number of attempts where it would make sense to stop. To get specific, let’s go over the attempts:

  • 1st attempt: This attempt clearly yielded the most alums contacted (6,023) and the most dollars pledged ($79,316). However, stopping here would make little sense if only for the fact that the attempt yielded only a third of the $230,526 that would eventually be raised.
  • 2nd attempt: Should we stop here? Well, $49,385 was raised, and the contact rate has now jumped from about 50% to over 60%. We’d say keep going.
  • 3rd attempt: How about here? Over $30,000 raised and the contact rate has jumped even a bit higher. We’re not stopping.
  • 4th attempt: Here things start to go downhill a bit. The contact rate has fallen to about 43% and the total pledges raised have fallen below $20,000. However, if we stop here, we’ll be leaving more money on the table.
  • 5th attempt through 8 or more attempts: What can we say? Clearly the contact rates are not great for these attempts; they never get above the 40% level. Still, money for pledges continues to come in – over $50,000.

Even before we looked at the attempts data, we were convinced that the right question was not: “How many call attempts should be made before callers stop?” The right question was: “How many call attempts should be made with what alums?” In other words, with some alums it makes sense to keep calling until you reach them and have a chance to ask for a pledge. With others, that’s not a good strategy. In fact, it’s a waste of time and energy and money.

So, how do you identify those alums who should be called a lot and those who shouldn’t?

How We Built an Experimental Predictive Model for the Likelihood of Making Contact with a Given Alum

This was Kevin’s idea. Being a strong believer in data-driven decision making, he firmly believed it would be possible to build a predictive model for making contact with alums. The trick would be finding the right predictors.

Now we’re at a point in the paper where, if we’re not careful, we risk confusing you more than enlightening you. The concept of model building is simple. The problem is that constructing a model can get very technical; that’s where the confusing stuff creeps in.

So we’ll stay away from the technical side of the process and just try to cover the high points. For each of the 41,801 alumni included in this study we amassed data on the following variables:

  • Email (whether or not the alum had an email address listed in the database)
  • Lifetime hard credit dollars given to the school
  • Preferred class year
  • Year of last gift made over the phone (if one was ever made)
  • Marital status missing (whether the alum had no marital code whatsoever in the marital status field)
  • Event Attendance (whether or not the alum had ever attended an event since graduation)

With these variables we used a technique called multiple regression to combine the variables into a score that could be used to predict an alum’s likelihood of being contacted by a caller. Because multiple regression is hard to get one’s arms around, we won’t try to explain that part of what we did. We’ll just ask you to trust us that it worked pretty well.
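
For the curious, here is roughly what that step looks like in Python with scikit-learn. This is a sketch under stated assumptions, not the authors’ actual code: the column names are hypothetical, and we use ordinary least-squares multiple regression on a 0/1 contact-made flag because that is the technique described above (logistic regression is a common alternative for a binary outcome):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("alums.csv")  # hypothetical per-alum file

predictors = [
    "has_email",             # email address listed in the database (0/1)
    "lifetime_giving",       # lifetime hard-credit dollars
    "class_year",            # preferred class year
    "last_phone_gift_year",  # year of last gift made by phone (0 if none)
    "marital_missing",       # no marital code whatsoever (0/1)
    "attended_event",        # ever attended an event since graduation (0/1)
]

X = df[predictors].fillna(0)
y = df["contact_made"]  # 1 if contact was ever made with the alum

# Multiple regression combines the predictors into a single score.
model = LinearRegression().fit(X, y)
df["contact_score"] = model.predict(X)  # likelihood-of-contact score
```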

What we will do is show you the relationship between three of the above variables and whether or not contact was made with an alum. This will give you a sense of why we included them as predictors in the model.

We’ll start with lifetime giving. Table 4 and Figure 3 show that as lifetime giving goes up, the likelihood of making contact with an alum also goes up. Notice that callers are more than twice as likely to make contact with alums who have given $120 or more lifetime (75.4%) than they are to make contact with alums whose lifetime giving is zero (34.9%).

Table 4: Number of Contacts Made and Percentage Contact Made for Three Levels of Lifetime Giving

How about Preferred Class Year? The relationship between this variable and contact rate is a bit complicated. You’ll see in Table 5 that we’ve divided class year into ten roughly equal size groups called “deciles.” The first decile includes alums whose preferred class year goes from 1964 to 1978. The second decile includes alums whose preferred class year goes from 1979 to 1985. The tenth decile includes alums whose preferred class year goes from 2008 to 2010.

A look at Figure 4 shows that contact rate is highest with the older alums and then gradually falls off as the class years get more recent. However, the rate rises a bit with the most recent alums. Without going into boring and confusing detail, we can tell you that we’re able to use this less than straight line relationship in building our model.

Table 5: Percentage Contact Made by Class Year Decile

The third variable we’ll look at is Event Attendance. Table 6 and Figure 5 show that, although relatively few alums (2,211) attended an event versus those who did not (35,590), the contact rate was considerably higher for the event attenders than the non-attenders: 58.3% versus 41.4%.

Table 6: Percentage Contact Made by Event Attendance

The predictive model we built generated a very granular score for each of the 41,801 alums in the study. To make it easier to see how these scores looked and worked, we collapsed the alums into ten roughly equal size groups (called deciles) based on the scores. The higher the decile the better the scores. (These deciles are, of course, different from the deciles we talked about for Preferred Class Year.)
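
Continuing the sketch from above (where `df` gained a `contact_score` column), collapsing the raw scores into deciles and summarizing by decile takes only a couple of lines; the `pledge_amount` column is again a hypothetical field assumed to be in the file:

```python
import pandas as pd  # continues the regression sketch above

# Rank before cutting so tied scores don't produce duplicate bin edges.
df["score_decile"] = pd.qcut(df["contact_score"].rank(method="first"),
                             10, labels=range(1, 11))

# Contact rate and pledge dollars by decile, as in Tables 7 and 8.
print(df.groupby("score_decile").agg(
    contacts=("contact_made", "sum"),
    contact_rate=("contact_made", "mean"),
    total_pledged=("pledge_amount", "sum"),
))
```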

Shortly we’ll talk about how we used these decile scores as a possible method for deciding when to stop calling. But first, let’s look at how these scores are related to both contact rate and pledging. Table 7 and Figure 6 deal with contact rate.

Table 7: Number of Contacts Made and Percentage Contact Made, by Contact Score Decile

Clearly, there is a strong relationship between the scores and whether contact was made. Maybe the most striking aspect of these displays is the contrast between contact rate for alums in the 10th decile and that for those in the first decile: 79.9% versus 19.2%. In practical terms, this means that, over time in this school, your callers are going to make contact with only one in every five alums in the first decile. But in the 10th decile? They should make contact with four in every five alums.

How about pledge rates? We didn’t build this model to predict pledge rates. However, look at Table 8 and Figure 7. Notice the striking differences between the lower and upper deciles in terms of total dollars pledged. For example, we can compare the total pledge dollars received for the bottom 20% of alums called (deciles 1 and 2) and the top 20% of alums called (deciles 9 and 10): about $2,700 versus almost $200,000.

Table 8: Total Pledge Dollars and Mean Pledge Dollars Received by Contact Score Decile

How We Used the Model to See When It Might (And Might Not) Make Sense to Keep Calling an Alum

In this section we have a lot of tables and figures for you to look at. Specifically, you’ll see:

  • Both the number of contacts made and the contact rate by decile score level for each of the first six attempts. (We decided to cut things off at the sixth attempt for reasons we think you’ll find obvious.)
  • A table that shows the total pledge dollars raised for each attempt by decile score level.

Looked at from one perspective, there is a huge amount of information to absorb in all this. Looked at from another perspective, we believe there are a few obvious facts that emerge.

Go ahead and browse through the tables and figures for each of the six attempts. After you finish doing that, we’ll tell you what we see.

The First Attempt

Table 9: Number of Contacts Made and Percentage Contact Made, by Contact Score Decile for the First Attempt

The Second Attempt

Table 10: Number of Contacts Made and Percentage Contact Made, by Contact Score Decile for the Second Attempt

The Third Attempt

Table 11: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Third Attempt

The Fourth Attempt

Table 12: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Fourth Attempt

The Fifth Attempt

Table 13: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Fifth Attempt

The Sixth Attempt

Table 14: Number of Contacts Made and Percentage Contact Made by Contact Score Decile for the Sixth Attempt

This is what we see:

  • For each of the six attempts, the contact rate increases as the score decile increases. There are some bumps and inconsistencies along the way (see Figure 10, for example), but this is clearly the overall pattern for each of the attempts.
  • For all the attempts, the contact rate for the lowest 20% of scores (deciles 1 and 2) is always substantially lower than the contact rate for the highest 20% of scores (deciles 9 and 10).
  • Once we reach the sixth attempt, the contact rates fall off dramatically for all but the tenth decile.

Now take a look at Table 15 that shows the total pledge money raised for each attempt (including the seventh attempt and eight or more attempts) by score decile. You can also look at Table 16 which shows the same information but with the amounts exceeding $1,000 highlighted in red.

Table 15: Total Pledge Dollars Raised In Each Attempt by Contact Score Decile

Table 16: Total Pledge Dollars Raised In Each Attempt by Contact Score Decile with Pledge Amounts Greater Than $1,000 Highlighted In Red

We could talk about these two tables in some detail, but we’d rather just say, “Wow!”

Some Concluding Remarks

We began this paper by saying that we wanted to introduce what might be the beginnings of a methodology for answering the question: “How many attempts should my callers make before they ‘make contact’ with an alum and either get a pledge or some other voice-to-voice response – or they give up and stop calling?”

We also said we’d like to walk you through a technique we’ve tried and to ask for your reactions. So, if you’re willing, we’d really appreciate your getting back to us with some feedback on what we’ve done here.

Specifically, you might tell us how much you agree or disagree with these assertions:

  • There is no across-the-board number of attempts that you should apply in your program, or even to any segment in your program; the number of attempts you make to reach an alum very much depends on who that alum is.
  • There are some alums who should be called and called because you will eventually reach them and (probably) receive a pledge from them. There are other alums who should be called once, or not at all.
  • If the school we used in this paper is at all representative of other schools that do calling, then all across North America huge amounts of time and money are wasted trying to reach alums with whom contact will never be made and from whom no pledges will ever be raised.
  • Anyone who is at a high level of decision making regarding the annual fund (whether inside the institution or a vendor) should be leading the charge for the kind of data analysis shown in this paper. If they’re not, someone needs to have a polite little chat with them.

We look forward to getting your comments. (Comment below, or email Kevin MacDonell at kevin.macdonell@gmail.com.)
