CoolData blog

18 February 2014

Save our planet

Filed under: Annual Giving, Why predictive modeling? — kevinmacdonell @ 9:09 pm

You’ve seen those little signs — they’re in every hotel room these days. “Dear Guest,” they say, “Bed sheets that are washed daily in thousands of hotels around the world use millions of gallons of water and a lot of detergent.” The card then goes on to urge you to give some indication that you don’t want your bedding or towels taken away to be laundered.

Presumably millions of small gestures by hotel guests have by now added up to a staggering amount of savings in water, energy and detergent.

It reminds me of what predictive analytics does for a mass-contact area of operation such as Annual Giving. If we all trimmed down the amount of acquisition contacts we make — expending the same amount of effort but only on the people with highest propensity to give, or likelihood to pick up the phone, or greatest chance of opening our email or what-have-you — we’d be doing our bit to collectively conserve a whole lot of human energy, and not a few trees.

With many advancement leaders questioning whether they can continue to justify an expensive Phonathon program that is losing more ground every year, getting serious about focusing resources might just be the saviour of a key acquisition program, to boot.

30 April 2013

Final thoughts on Phonathon donor acquisition

No, this is not the last time I’ll write about Phonathon, but after today I promise to give it a rest and talk about something else. I just wanted to round out my post on the waste I see happening in donor acquisition via phone programs with some recent findings of mine. Your mileage may vary, or “YMMV” as they say on the listservs, so as usual don’t just accept what I say. I suggest questions that you might ask of your own data — nothing more.

I’ve been doing a thorough analysis of our acquisition efforts this past year. (The technical term for this is a WTHH analysis … as in “What The Heck Happened??”) I found that getting high phone contact rates seemed to be linked with making a sufficient number of call attempts per prospect. For us, any fewer than three attempts per prospect is too few to acquire new donors in any great number. In general, contact rates improve with call attempt numbers above three, and after that, the more the better.
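As a sketch of the kind of tabulation behind this finding, here is one way to compute contact rate by number of call attempts. The record fields (`attempts`, `contacted`) are hypothetical stand-ins, not the actual layout of my data:

```python
from collections import defaultdict

def contact_rate_by_attempts(prospects):
    """Group prospects by total call attempts made to them, and compute
    the share of each group that was ever reached (the contact rate)."""
    totals = defaultdict(lambda: [0, 0])  # attempts -> [contacted, called]
    for p in prospects:
        bucket = totals[p["attempts"]]
        bucket[1] += 1
        if p["contacted"]:
            bucket[0] += 1
    return {a: c / n for a, (c, n) in sorted(totals.items())}

# Toy data illustrating the pattern: rates improve above three attempts.
prospects = (
    [{"attempts": 2, "contacted": i < 2} for i in range(10)] +
    [{"attempts": 4, "contacted": i < 5} for i in range(10)]
)
rates = contact_rate_by_attempts(prospects)
```

Run against your own call-result file, a table like this makes it easy to see where the contact-rate curve flattens out for your program.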

“Whoa!”, I hear you protest. “Didn’t you just say in your first post that it makes no sense to have a set number of call attempts for all prospects?”

You’re right — I did. It doesn’t make sense to have a limit. But it might make sense to have a minimum.

To get anything from an acquisition segment, more calling is better. However, by “call more” I don’t mean call more people. I mean make more calls per prospect. The RIGHT prospects. Call the right people, and eventually many or most of them will pick up the phone. Call the wrong people, and you can ring them up 20, 30, 50 times and you won’t make a dent. That’s why I think there’s no reason to set a maximum number of call attempts. If you’re calling the right people, then just keep calling.

What’s new here is that three attempts looks like a solid minimum. This is higher than what I see some people reporting on the listservs, and well beyond the capacity of many programs as they are currently run — the ones that call every single person with a phone number in the database. To attain the required amount of per-prospect effort, those schools would have to increase phone capacity (more students, more nights), or load fewer prospects. The latter option is the only one that makes sense.

Reducing the number of people we’re trying to reach to acquire as new donors means using a predictive model or at least some basic data mining and scoring to figure out who is most likely to pick up the phone. I’ve built models that do that for two years now, and after evaluating their performance I can say that they work okay. Not super fantastic, but okay. I can live with okay … in the past five years our program has made close to one million call attempts. Even a marginal improvement in focus at that scale of activity makes a significant difference.

You don’t need to hack your acquisition segment in half today. I’m not saying that. To get new donors you still need lots and lots of prospects. Maybe someday you’ll be calling only a fraction of the people you once did, but there’s no reason you can’t take a gradual approach to getting more focused in the meantime. Trim things down a bit in the first year, evaluate the results, and fold what you learned into trimming a bit more the next year.

18 April 2013

A response to ‘What do we do about Phonathon?’

I had a thoughtful response to my blog post from earlier this week (What do we do about Phonathon?) from Paul Fleming, Database Manager at Walnut Hill School for the Arts in Natick, Massachusetts, about half an hour from downtown Boston. With Paul’s permission, I will quote from his email, and then offer my comments afterward:

I just wanted to share with you some of my experiences with Phonathon. I am the database manager of a 5-person Development department at a wonderful boarding high school called the Walnut Hill School for the Arts. Since we are a very small office, I have also been able to take on the role of the organizer of our Phonathon. It’s only been natural for me to combine the two to find analysis about the worth of this event, and I’m happy to say, for our own school, this event is amazingly worthwhile.

First of all, as far as cost vs. gain, this is one of the cheapest appeals we have. Our Phonathon callers are volunteer students who are making calls either because they have a strong interest in helping their school, or they want to be fed pizza instead of dining hall food (pizza: our biggest expense). This year we called 4 nights in the fall and 4 nights in the spring. So while it is an amazing source of stress during that week, there aren’t a ton of man-hours put into this event other than that. We still mail letters to a large portion of our alumni base a few times a year. Many of these alumni are long-shots who would not give in response to a mass appeal, but our team feels that the importance of the touch point outweighs the short-term inefficiencies that are inherent in this type of outreach.

Secondly, I have taken the time to prioritize each of the people who are selected to receive phone calls. As you stated in your article, I use things like recency and frequency of gifts, as well as other factors such as event participation or whether we have other details about their personal life (job info, etc). We do call a great deal of lapsed or nondonors, but if we find ourselves spread too thin, we make sure to use our time appropriately to maximize effectiveness with the time we have. Our school has roughly 4,400 living alumni, and we graduate about 100 wonderful, talented students a year. This season we were able to attempt phone calls to about 1,200 alumni in our 4 nights of calling. The higher-priority people received up to 3 phone calls, and the lower-priority people received just 1-2.

Lastly, I was lucky enough to start working at my job in a year in which there was no Phonathon. This gave me an amazing opportunity to test the idea that our missing donors would give through other avenues if they had no other way to do so. We did a great deal of mass appeals, indirect appeals (alumni magazine and e-newsletters), and as many personalized emails and phone calls as we could handle in our 5-person team. Here are the most basic of our findings:

In FY11 (our only non-Phonathon year), 12% of our donors were repeat donors. We reached about 11% participation, our lowest ever. In FY12 (the year Phonathon returned):

  • 27% of our donors were new/recovered donors, a 14% increase from the previous year.
  • We reached 14% overall alumni participation.
  • Of the 27% of donors who were considered new/recovered, 44% gave through Phonathon.
  • The total number of donors we had gained from FY11 to FY12 was about the same number of people who gave through the Phonathon.
  • In FY13 (still in progress, so we’ll see how this actually plays out), 35% of the previously-recovered donors who gave again gave in response to less work-intensive mass mailing appeals, showing that some of these Phonathon donors can, in fact, be converted and (hopefully) cultivated long-term.

In general, I think your article was right on point. Large universities with a for-pay, ongoing Phonathon program should take a look and see whether their efforts should be spent elsewhere. I just wanted to share with you my successes here and the ways in which our school has been able to maintain a legitimate, cost-effective way to increase our participation rate and maintain the quality of our alumni database.

Paul’s description of his program reminds me there are plenty of institutions out there who don’t have big, automated, and data-intensive calling programs gobbling up money. What really gets my attention is that Walnut Hill uses alumni affinity factors (event attendance, employment info) to prioritize calling to get the job done on a tight schedule and with a minimum of expense. This small-scale data mining effort is an example for the rest of us who have a lot of inefficiency in our programs due to a lack of focus.

The first predictive models I ever created were for a relatively small university Phonathon that was run with printed prospect cards and manual dialing — a very successful program, I might add. For those of you at smaller institutions wondering if data mining is possible only with massive databases, the answer is NO.

And finally, how wonderful it is that Walnut Hill can quantify exactly what Phonathon contributes in terms of new donors, and new donors who convert to mail-responsive renewals.

Bravo!

22 January 2013

Sticking a pin in acquisition mail bloat

Filed under: Alumni, Annual Giving, Vendors — kevinmacdonell @ 6:45 am

I recently read a question on a listserv that prompted me to respond. A university in the US was planning to solicit about 25,000 of its current non-donor alumni. The question was: How best to filter a non-donor base of 140,000 in order to arrive at the 25,000 names of those most likely to become donors? This university had only ever solicited donors in the past, so this was new territory for them. (How those alumni became donors in the first place was not explained.)

One responder to the question suggested narrowing down the pool by recent class years, reunion class years, or something similar, using any ratings if they were available, and then doing an Nth-record select on the remaining records to get to 25,000. Selecting every Nth record is one way to pick an approximately random sample. If you aren’t able to make this selection, the responder suggested, your mail house vendor should be able to.
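For the curious, an Nth-record select can be sketched in a few lines of Python. The function name and the toy pool here are illustrative only:

```python
def nth_record_select(records, target_n):
    """Pick approximately target_n records by taking every Nth one,
    where N is the pool size divided by the target size."""
    if target_n <= 0 or not records:
        return []
    step = max(1, len(records) // target_n)
    return records[::step][:target_n]

# e.g. narrowing a pool of 140,000 record IDs down to 25,000
pool = list(range(140_000))
sample = nth_record_select(pool, 25_000)
```

Note that this is only approximately random: it assumes the records aren’t sorted in some way that correlates with giving likelihood, which is exactly the assumption I question below.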

This answer was fine, up until the “Nth selection” part. I also had reservations about putting the vendor in control of prospect selection. So here are some thoughts on the topic of acquisition mailings.

Doing a random selection assumes that all non-donor alumni are alike, or at least that we aren’t able to make distinctions. Neither assumption would be true. Although they haven’t given yet, some alumni feel closer affinity to your school than others, and you should have some of these affinity-related cues stored in your database. This suggests that a more selective approach will perform better than a random sample.

Not long ago, I isolated all our alumni who converted from never-donor to donor at any time in the past two years. (Two years instead of just one, in order to boost the numbers a bit.) Then I compared this group with the universe of all the never-donors who had failed to convert, based on a number of attributes that might indicate affinity. Some of my findings included:

  • “Converters” were more likely than “non-converters” to have an email in the database.
  • They were more likely to have answered the phone in our Phonathon (even though the answer was ‘no pledge’)
  • They were more likely to have employment information (job title or employer name) in the database.
  • They were more likely to have attended an event since graduating.

Using these and other factors, I created a score which was used to select which non-donor alumni would be included in our acquisition mailing. I’ve been monitoring the results, and although new donors do tend to be the alumni with higher scores, frankly we’ve had poor results via mail solicitation, so evaluation is difficult. This in itself is not unusual: New-donor acquisition is very much a Phonathon phenomenon for us — in our phone results, the effectiveness of the score is much more evident.
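One way such a score might be assembled is as a simple additive tally of binary affinity flags. The field names and weights below are purely illustrative — my actual score came from a proper predictive model, not this toy:

```python
def affinity_score(alum):
    """Toy additive score built from binary affinity cues.
    Weights are illustrative, not the author's actual model."""
    weights = {
        "has_email": 1,
        "answered_phonathon": 2,   # even a 'no pledge' answer counts
        "has_employment_info": 1,
        "attended_event": 2,
    }
    return sum(w for flag, w in weights.items() if alum.get(flag))

alum = {"has_email": True, "attended_event": True}
score = affinity_score(alum)
```

Even a crude tally like this will rank never-donors better than a random draw, which is all an acquisition mailing needs to get smaller and cheaper.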

Poor results or not, it’s still better than random, and whenever you can improve on random, you can reduce the size of a mailing. Acquisition mailings in general are way too big, simply because they’re often just random — they have to cast a wide net. Unfortunately your mail house is unlikely to encourage you to get more focused and save money.

Universities contract with vendors for their expertise and efficiency in dealing with large mailings, including cleaning the address data and handling the logistics that many small Annual Fund offices just aren’t equipped to deal with. A good mail house is a valuable ally and source of direct-marketing expertise. But acquisition presents a conflict for vendors, who make their money on volume. Annual Fund offices should be open to advice from their vendor, but they would do well to develop their own expertise in prospect selection, and make drastic cuts to the bloat in their mailings.

Donors may need to be acquired at a loss, no question. It’s about lifetime value, after all. But if the cumulative cost of that annual appeal exceeds the lifetime value of your newly-acquired donor, then the price is too high.

16 August 2011

Phonathon pledges and time on the call: Another look

Filed under: Annual Giving, John Sammis, Peter Wylie, Phonathon — kevinmacdonell @ 11:54 am

by Peter B. Wylie, John Sammis, and Kevin MacDonell

(Download a printer-friendly PDF version here: Time on call and pledges P2)

Back in January of this year, the three of us posted a paper based on calling data from one higher education institution (Time on the call and how much the alum pledges). You can go back and take a look at the paper, but its essence is the strong relationship we saw between time spent on a call to an alum, and whether or not that alum made a pledge and how big the pledge was.

We weren’t bowled over by these findings, but we were certainly intrigued by them. In this paper we’ve got some more data to show you — data that provides “corroborative testimony” for that relationship between calling time and pledging. And we’ve got something a little bit extra for you, too.

We’ll start by tipping our hand just a little. We looked at calling time (in seconds) only for those alums with whom contact was made, and the result of the last call was labeled either “NO PLEDGE” or “SPECIFIED PLEDGE.”

Tables 1-3 show the calling time in seconds for the three schools (X, Y, and Z) that we looked at. Notice that we divided the alums called at each school into ten groups (called deciles) of approximately equal size.
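Dividing a called pool into ten approximately equal groups is a simple sort-and-slice; here is a minimal sketch:

```python
def into_deciles(values):
    """Sort values and split them into ten groups of approximately
    equal size (group sizes differ by at most one)."""
    ordered = sorted(values)
    n = len(ordered)
    deciles = []
    for i in range(10):
        start = i * n // 10
        stop = (i + 1) * n // 10
        deciles.append(ordered[start:stop])
    return deciles

talk_times = list(range(95))  # stand-in for 95 call durations in seconds
groups = into_deciles(talk_times)
```

Decile 1 holds the shortest calls and decile 10 the longest, which is the ordering used in the tables that follow.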

Table 1: Median Talk Time, Minimum Talk Time, and Maximum Talk Time by Decile for All Alums in School X Who Either Made a Specified Pledge or No Pledge

Table 2: Median Talk Time, Minimum Talk Time, and Maximum Talk Time by Decile for All Alums in School Y Who Either Made a Specified Pledge or No Pledge

Table 3: Median Talk Time, Minimum Talk Time, and Maximum Talk Time by Decile for All Alums in School Z Who Either Made a Specified Pledge or No Pledge

These three tables convey a lot of information that we think is worth looking through carefully. On the other hand, sometimes it’s just easier to look at a quick summary. And that’s what you’ll see in Table 4 and Figure 4; both show the median talk time (in minutes, not seconds) by decile for the three schools.

Table 4: Median Talk Time (in Minutes) by Decile for All Three Schools

There’s not much difference among the schools in terms of how much time their callers spent on the phone with alums. Schools X and Y look very similar; School Z callers appear to have been just a bit “chattier.”

Now let’s look at the pledge money that was received from alums in the three schools by our time on the call deciles. It’s laid out for you in Tables 5-7 and Figures 5-7.

Table 5: Total Pledge Dollars and Mean Pledge Dollars by Talk Time Decile for All Alums in School X Who Either Made a Specified Pledge or No Pledge

Table 6: Total Pledge Dollars and Mean Pledge Dollars by Talk Time Decile for All Alums in School Y Who Either Made a Specified Pledge or No Pledge

Table 7: Total Pledge Dollars and Mean Pledge Dollars by Talk Time Decile for All Alums in School Z Who Either Made a Specified Pledge or No Pledge

These data are not tough to summarize. There is an obvious and strong relationship between time spent on the call with alums and how much the alums pledged. If someone pressed us for specifics, we’d say, “Look at the total pledge money received for deciles 1-3 (the bottom 30%) versus deciles 8-10 (the top 30%) for each school.”

Here they are:

  • School X: $6,850 versus $164,485 (24 times as much)
  • School Y: $25,032 versus $93,355 (3.7 times as much)
  • School Z: $3,554 versus $220,860 (62 times as much)

So far we’ve confirmed some of the findings from our January paper. But what about the extra we promised?

You’ll recall that the alums we looked at in this study were ones who had (on the last call made to them) either agreed to make a pledge, or who had told the caller they would not make a pledge.

Take a look at Tables 8-10 and Figures 8-10. They show the percentage of alums at each decile who chose either option.

Table 8: Percentage of No Pledges versus Specified Pledges by Talk Time Decile for School X

Table 9: Percentage of No Pledges versus Specified Pledges by Talk Time Decile for School Y

Table 10: Percentage of No Pledges versus Specified Pledges by Talk Time Decile for School Z

As is often the case with data analysis, we sort of happened upon what you’ve just seen in these tables and charts. We were looking at outcomes that were related to call length. We didn’t plan to look only at alums who either said they’d give a pledge or, “Nope, can’t help you out.” The thought just occurred to us as we were looking at lots of different possibilities. But look at what popped out. It almost appears as if we fudged the data. But we didn’t.

Some Concluding Thoughts

Here are three:

  1. We’ve now looked at call time data from four quite different higher education institutions. At this point, it would take a mountain of evidence from other schools to dissuade us from this notion: “The longer student callers talk to the alums they are soliciting, the more likely those callers are to obtain bigger and bigger pledges.”
  2. We are far from ready to tell call center managers: “Tell your callers to try to keep the alum on the phone as long as they can. If they do that, both your pledge rates and pledge amounts will go up dramatically.” It would be nice if things were that simple, but, of course, they are not. Some alums are quite willing to give a healthy pledge, and the last thing they want to do is yak on and on with a kid who went to a place they graduated from when people used rotary phones. Some callers are naturally chatty and engaging, as are some alums. Others are not. Human beings are complicated creatures and they vary enormously. One-size-fits-all advice is almost always unhelpful for dealing with them.
  3. That said, we do think this relationship between time on the call and pledge rate/pledge amount is worth a lot more investigation. A good example: not long ago, Kevin (a call center manager himself) said:

“I’m always interested in identifying ways to predict which of those people who’ve never given us anything before will finally make a pledge. I’m going to start looking at the talk time of lifetime non-givers from last year who ended up making a pledge this year. I bet the talk time for those who converted will be a lot longer than for those who didn’t.”

Great idea. Let’s hear some more from you all.

16 November 2010

Keep the phones ringing – but not all of them

Filed under: Annual Giving, Phonathon, Segmentation — kevinmacdonell @ 11:32 am

How many times should you call the prospects in a Phonathon pool before giving up on that group? Five? Ten? 50? If you segment based on propensity to give, looking at your call results will give you the right answer. If you segment by other criteria, there is no right answer: You’ll make too many AND too few call attempts to those prospects — simultaneously.

Bear with me and I will try to explain.

This past summer, I proposed a new way to approach Phonathon segmentation. My top-level sort would be the propensity-to-give scores I came up with from my Phonathon-specific predictive model. I didn’t completely do away with the more “traditional” segmentation criteria (e.g., faculty and past giving status), but they had lower priority. (See Applying predictive modeling to Phonathon segmentation, 28 July 2010.)

So, how’s that working? We’re just past the middle of the term, so obviously no final verdict is in, but so far the results are looking favourable for the model-driven approach. I’ll have more to say about that in the coming weeks. Today I want to zero in on one particularly interesting aspect of Phonathon: The efficacy of multiple call attempts, and the crucial role predictive scores play in choosing the optimal amount of effort to expend.

In my segmentation plan, alumni with the highest scores get called first and most often. This is important. Ask any phonathon manager and they’ll tell you the biggest challenge right now is getting people to answer their phones. Regardless of a prospect’s score, between 60% and 65% of calls are going to answering machines. We have some talented fundraisers in the room; odds are good that if one of them can get you on the phone, you’re going to give! But they have to get you on the phone first, and that is proving incredibly challenging.

Given that barrier, it makes sense that we would want to call our best and most likely givers early and often. Many will never answer their phones, but the hope is that enough of them will answer to make many repeated attempts worth the time and expense.

Have a look at the chart below. This shows the number of “Yes” pledges (i.e., with specified dollar amounts) that have come in on the first, second, third … up to the ninth call attempt. Each coloured bar represents prospects with a certain decile score from the predictive model (with 10 being the highest decile). So far only one person has picked up the phone the ninth time we called him or her, and made a pledge — and that person, no surprise, is in the highest decile.

Pledge numbers drop dramatically as the number of call attempts increases, even for the 10s — but a quick glance shows that the 10s consistently give twice as often as the next decile down. Below decile 8, calling more than a handful of times seems to be a lost cause.

Let me anticipate an objection and say, yes, I know: We have spent far more time on the 9s and 10s than we have the lower score deciles, therefore the lower scoring alumni are showing up with fewer call attempts and fewer pledges. Let’s look at it a different way, then. Let’s include the “Nos” as well as the “Yeses”. Then we can see what percentage of decisions went in our favour at each call-attempt stage and each score decile. (For simplicity’s sake I will leave out the “maybes” and other results that are not really a decision.)
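The tabulation just described (percentage of Yes decisions at each call-attempt stage and score decile, with maybes filtered out) can be sketched like this; the record fields are hypothetical:

```python
from collections import defaultdict

def yes_rate_by_attempt_and_decile(decisions):
    """For each (call attempt, score decile) cell, compute the share of
    decisions that were Yes pledges. Assumes 'maybes' and other
    non-decisions have already been filtered out."""
    cells = defaultdict(lambda: [0, 0])  # (attempt, decile) -> [yes, total]
    for d in decisions:
        cell = cells[(d["attempt"], d["decile"])]
        cell[1] += 1
        if d["result"] == "yes":
            cell[0] += 1
    return {k: y / n for k, (y, n) in cells.items()}

# Toy data: three decisions from decile-10 prospects
decisions = [
    {"attempt": 1, "decile": 10, "result": "yes"},
    {"attempt": 1, "decile": 10, "result": "no"},
    {"attempt": 2, "decile": 10, "result": "yes"},
]
rates = yes_rate_by_attempt_and_decile(decisions)
```

Because both Yes and No decisions are in the denominator, this view corrects for the fact that high scorers simply get called more often.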

To start off, here’s a summary of how all Yes AND No prospects to date break down by number of call attempts.

The number of decisions falls off sharply with each additional attempt — by about half, in fact. With each additional call attempt, it is that much less likely that you’ll get the prospect on the line. That’s the nature of the beast.

That’s the bad news, but here’s something interesting. The percentage of “Yes” responses starts relatively low with the prospects who answer on the first attempt, but seems to go UP slightly with the number of attempts it takes to get a decision! See this chart:

I wouldn’t have expected to see that, but it does go to show that multiple callbacks can pay off. Not for all prospects, though! I will show in a moment that the steady or increasing pledge percentage is due to the activity of a select group of prospects.

Before we continue: I wouldn’t pay too much attention to the bars for the seventh attempt and higher. These represent small numbers of prospects, and I don’t trust the percentages. Therefore for the sake of simplicity, let’s focus on prospects who made decisions at attempts numbered 1 through 6 only. For the same reason, let’s focus on score deciles 6 and up, because while 161 prospects at score level 6 have made decisions so far, only five prospects at score level 5 have done so — that’s not enough data to make valid conclusions.

Okay … I know the next chart looks confusing, but stay with me: This one really brings it all together. Have a look, then read my discussion below.

Start over on the left-hand side, with the group of bars that shows how people who made a decision on the first call attempt break down by decile score. The 10s outshine everyone else, while everyone with a score of 6 through 9 are all neck-and-neck with regards to percentage of Yes decisions.

Now move right, to the next group of bars which represent people who made a decision at call attempt number two. The 10s are holding their own, and some of the other score levels are improving their pledge percentages. After three call attempts, though, the lower score levels mostly fall away, while the 10s keep improving as a percentage of decisions made. The numbers at the 6th attempt are small — only 32 Score 10s said yes on the 6th call (out of 61 decisions) — but the trend is pretty encouraging, no?

Does it not seem worthwhile to keep calling our highest scorers for a while yet? With more calling and more data, it should become clear when a reasonable cutoff for each score decile has been reached, but we have a ways to go yet.

How do you currently decide when a calling pool is exhausted? When the contact rate falls below some level that you consider acceptable? When the dollars per employee hour are too low? Or simply when the calling room seems a little too quiet?

Well, in my calling room we are going to have some rather quiet nights in the coming weeks. Contact rates do drop rapidly as pools are called repeatedly. But I know now that pledge rates for the highest-scoring alumni are good enough to justify a little bit of boredom on the part of callers, because nightly totals remain respectable as long as we focus on the best prospects.

The bottom line: Keep on calling, but only if you’re calling the right people.
