CoolData blog

31 August 2016

Phonathon call attempt limits: A reading roundup

Filed under: Annual Giving, Best practices, Phonathon — kevinmacdonell @ 2:49 pm

 

As September arrives, Annual Fund programs everywhere are gearing up for mailing and calling. Managers of phone programs are seeking advice on how best to proceed, and inevitably that includes asking about the optimal number of call attempts to make for each alum.

 

How many calls is too many? What’s ideal? Should it differ for LYBUNTs and SYBUNTs?

 

In my opinion, these are the wrong questions.

 

If your aim is to get someone on the phone, more calling is better. However, by “call more” I don’t mean call more people. I mean make more calls per prospect. The RIGHT prospects. Call the right people, and eventually many or most of them will pick up the phone. Call the wrong people, and you can ring them up 20, 30, 50 times and you won’t make a dent. That’s why I think there’s no reason to set a maximum number of call attempts. If you’re calling the right people, then just keep calling.

 

For Phonathon programs that are expensive or time-consuming (and potentially under threat of being cut), and for shops with some ability to make data-informed decisions, it doesn’t make sense to apply across-the-board limits. Much better to use predictive modeling to determine who’s most likely to pick up the phone, and to focus resources on those people.

 

Here are a number of pieces I’ve written or co-written on this topic:

 

Keep the phones ringing – but not all of them

 

Call attempt limits? You need propensity scores

 

How many times to keep calling?

 

Answering questions about “How many times to keep calling”

 

Final thoughts on Phonathon donor acquisition

 

30 May 2016

Donor volatility: testing years of non-giving as a predictor for the next big gift

Filed under: Annual Giving, Coolness — kevinmacdonell @ 5:02 am

Guest post by Jessica Kostuck, Data Analyst, Annual Giving, Queen’s University

 

During my first few weeks on the job, my AD set me up on several calls with colleagues in similar, data-driven roles at universities across the country. One such call was with Kevin MacDonell, keeper of CoolData, with whom I had a delightfully geeked-out conversation about predictive modeling. We ran the gamut of weird and wonderful data points, ending on the concept of donor volatility.

 

When a lapsed high-end donor has no discernible annual giving pattern, is it possible to use their years of non-giving to predict and influence their next big gift?

 

Our goal for our Annual Giving program was to identify these “volatile” donors (lapsed high-end donors with an erratic giving history), and reactivate (ideally, upgrade) them, through a targeted solicitation with an aggressive ask string.

 

(For more on volatility, see Odd but true findings? Upgrading annual donors are “erratic” and “volatile”, which describes findings that suggest the best prospects for a big upgrade in giving are those who are “erratic”, i.e. have prior giving but are not loyal, every-year donors, and “volatile”, i.e. are inconsistent about the amounts they give.)

 

I did some stock market research (see footnote), decided on a minimum value for the entry point into our volatility matrix ($500), and, together with Senior Programmer Analyst Kim Wilkinson, got cracking on writing a program to identify volatile donors.

 

[Image: clip of the SQL used to flag volatile donors]

 

 

Our ideal volatile donors had given ≥ $500 at least twice in the last 10 years, without any consecutive (“stable”) periods. Year over year, our ideal volatile donor would act in one of three ways: increase their giving by at least 60%, decrease their giving by at least 60%, or not give at all. Given the capacity level displayed by these volatile donors, we replaced years of very low-end giving (<$99) with null values (“throwaway gifts”).

 

We had strict conditions for what would remove a donor from our table. If a donor had two years of consecutive giving within a ±60% differential from their previous highest giving point (v_value), we considered this a natural (or, at least, for this test, not sufficiently irregular) fluctuation in giving, and they were removed from the table. If the donor had two consecutive years of low-end (but not null) giving ($99-$499), this was considered a deliberate decrease, and they, too, were removed. Conversely, if a donor had two consecutive years of greatly increased giving, this was considered a deliberate increase, and they were also removed.

 

At any point, a donor could be admitted, or readmitted, into our volatility matrix by establishing, or re-establishing, a v_value and a subsequent valid volatility point.
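For the curious, the core of this logic can be sketched in a few lines of Python. This is a loose simplification, not our actual program: it checks swings year over year (as the walkthrough below does), and it skips the low-end and deliberate-increase removal rules entirely.

ENTRY_MIN = 500    # minimum gift that establishes a v_value
THROWAWAY = 99     # gifts under this amount are treated as null
SWING = 0.60       # a year-over-year change of 60%+ is a valid volatile point

def is_volatile(annual_giving):
    """annual_giving: one total per fiscal year, oldest first; None = no gift.
    Returns True if the donor ends the series inside the volatility matrix."""
    gifts = [amt if amt is not None and amt >= THROWAWAY else None
             for amt in annual_giving]
    v_value = None        # highest giving point since (re)entry
    in_matrix = False
    stable_years = 0
    prev = None
    for curr in gifts:
        if v_value is None:
            # Not in the matrix: watch for a new entry point ($500+)
            if curr is not None and curr >= ENTRY_MIN:
                v_value, stable_years = curr, 0
        else:
            # A skipped year, or a 60%+ swing either way, is a valid
            # volatility point; anything else counts as a "stable" year
            volatile_point = (curr is None or prev is None
                              or abs(curr - prev) / prev >= SWING)
            if volatile_point:
                in_matrix, stable_years = True, 0
                v_value = max(v_value, curr or 0)
            else:
                stable_years += 1
                if stable_years >= 2:
                    # Two consecutive stable years: removed from the matrix
                    v_value, in_matrix = None, False
        prev = curr
    return in_matrix

# Donor 2 from the walkthrough below: $5,000, a valid -80% point, then two
# marginal decreases in a row (the $450 figure is invented for illustration)
print(is_volatile([5000, 1000, 500, 450]))   # -> False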

 

The difference between a lapsed donor and a volatile donor

 

Below is a sample pool of donors we examined.

 

[Image: fiscal-year giving history for the three sample donors]

 

Donor 1 is volatile all the way through, with greatly varying levels of giving, culminating in two years of non-giving. Donor 1 is currently volatile, and thus enters our test group.

 

Donor 2 is volatile for two years – FY07-08 and FY08-09 (v_value of $5,000 in FY07-08, followed by a valid volatile point in FY08-09 with a decrease of 80%) – but is then removed from the table in FY09-10 with only a 50% decrease in giving. They do not establish a new v_value, even though their FY09-10 giving meets the minimum giving threshold for this test, because of their consecutive, only marginally decreased giving in FY10-11. This excludes Donor 2 from our test.

 

Donor 3 enters our volatility matrix in FY04-05, leaves in FY07-08, reenters in FY10-11, and maintains volatility to current day, and, thus, enters into our test solicitation.

 

While all three of these donors are lapsed, and are all SYBUNTs, only Donor 1 and Donor 3 are, by our definition, volatile.

 

Solicitation strategy and results

 

We now had a pool of constituents who were at least two years lapsed in giving, all of whom had a history of inconsistent, but not insubstantial, contributions to the university. In an email solicitation, we presented constituents with both upgrade language and an aggressive ask matrix, beginning at a minimum of 60% above their highest-ever v_value, regardless of where they were in the ebb and flow of their volatility cycle. Again, the goal of this test was to (1) identify donors with high capacity, (2) whose giving to the university was erratic in frequency and loyalty, and (3) encourage these donors to reactivate at greater than their previously established high-end giving.
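To make the ask arithmetic concrete, here is a minimal sketch. The 60% floor is the one described above; the three-tier ask string and the rounding to the nearest $25 are invented for illustration and were not necessarily the shape of our actual matrix.

def ask_string(highest_v_value, tiers=3, step=0.25):
    """Ascending asks starting at 60% above the donor's highest v_value."""
    base = highest_v_value * 1.60          # the +60% floor described above
    asks = [base * (1 + step * i) for i in range(tiers)]
    return [int(round(a / 25) * 25) for a in asks]   # round to nearest $25

print(ask_string(5000))   # -> [8000, 10000, 12000]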

 

In our results analysis, we broadened our examination to include any gifts received from our testing pool within the subsequent four weeks, not just gifts linked to this particular solicitation code, in order to verify the legitimacy of tagging these donors as volatile – that is, as having a higher-than-average probability of reactivating at a high-end giving level.

 

An important part of our analysis included comparing our testing pool to a control pool, pairing each of our volatile donors with a non-volatile twin who shared as many points of fiscal and biographic information as was possible.
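Conceptually, the twin-matching can be sketched like this. The feature set and the simple unscaled distance are illustrative assumptions; a real pass would standardize each feature first, since dollars and class years sit on very different scales.

def nearest_twin(test_donor, control_pool, features):
    """Return the control record closest to the test donor on the
    given features (squared-difference distance, unscaled)."""
    def dist(a, b):
        return sum((a[f] - b[f]) ** 2 for f in features)
    return min(control_pool, key=lambda c: dist(test_donor, c))

test = {"id": "V-1", "lifetime_giving": 12000, "class_year": 1988}
controls = [
    {"id": "C-1", "lifetime_giving": 11500, "class_year": 1990},
    {"id": "C-2", "lifetime_giving": 900,   "class_year": 2009},
]
print(nearest_twin(test, controls, ["lifetime_giving", "class_year"])["id"])
# -> C-1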

 

Within the four-week time frame, our test group had about a 7% activity rate, whereas our control group had an activity rate of about 5% (average for the institution during this timeframe). Within our volatility test group, 50% of donors gave an amount that would plot a valid point on our volatility matrix.

 

Conclusion and next steps

 

Through our experiment, we sought to identify volatile donors, and test if we could trigger a reactivation in giving, ideally at, or greater than, their highest level on record.

 

Since not all of the donors within our test group made their gifts through the coded solicitation with the volatile ask matrix, we cannot say for certain whether being presented with language and ask amounts that reflected their erratic giving behavior is what prompted a gift – volatile or otherwise. However, we do feel confident that we’re onto something when it comes to identifying and predicting the behavior of a particular, valuable set of donors to our institution.

 

Our above-average response rate (both versus the control group and institution-wide) supports our “theory of volatility”, insofar as it suggests that volatile donors are a real pool with shared behaviors within our donor population. We plan to re-run this test at the same time next year, continuing our search for a pattern within the instability.

 

Were we able to gather definitive results that will define and shape future annual giving strategy? Not exactly. But as far as data goes, this was definitely cool.

 

Jessica Kostuck is the Data Analyst, Annual Giving at Queen’s University in Kingston, Ontario. She can be reached at jessica.kostuck@queensu.ca.

 

————-

1. Varadi, David. “Volatility Differentials: High/Low Volatility versus Close/Close Volatility (HVL-CCV).” CSS Analytics. 29 Mar. 2011. Web. Winter 2015.

18 April 2013

A response to ‘What do we do about Phonathon?’

I had a thoughtful response to my blog post from earlier this week (What do we do about Phonathon?) from Paul Fleming, Database Manager at Walnut Hill School for the Arts in Natick, Massachusetts, about half an hour from downtown Boston. With Paul’s permission, I will quote from his email, and then offer my comments afterward:

I just wanted to share with you some of my experiences with Phonathon. I am the database manager of a 5-person Development department at a wonderful boarding high school called the Walnut Hill School for the Arts. Since we are a very small office, I have also been able to take on the role of the organizer of our Phonathon. It’s only been natural for me to combine the two to find analysis about the worth of this event, and I’m happy to say, for our own school, this event is amazingly worthwhile.

First of all, as far as cost vs. gain, this is one of the cheapest appeals we have. Our Phonathon callers are volunteer students who are making calls either because they have a strong interest in helping their school, or they want to be fed pizza instead of dining hall food (pizza: our biggest expense). This year we called 4 nights in the fall and 4 nights in the spring. So while it is an amazing source of stress during that week, there aren’t a ton of man-hours put into this event other than that. We still mail letters to a large portion of our alumni base a few times a year. Many of these alumni are long-shots who would not give in response to a mass appeal, but our team feels that the importance of the touch point outweighs the short-term inefficiencies that are inherent in this type of outreach.

Secondly, I have taken the time to prioritize each of the people who are selected to receive phone calls. As you stated in your article, I use things like recency and frequency of gifts, as well as other factors such as event participation or whether we have other details about their personal life (job info, etc). We do call a great deal of lapsed or nondonors, but if we find ourselves spread too thin, we make sure to use our time appropriately to maximize effectiveness with the time we have. Our school has roughly 4,400 living alumni, and we graduate about 100 wonderful, talented students a year. This season we were able to attempt phone calls to about 1,200 alumni in our 4 nights of calling. The higher-priority people received up to 3 phone calls, and the lower-priority people received just 1-2.

Lastly, I was lucky enough to start working at my job in a year in which there was no Phonathon. This gave me an amazing opportunity to test the idea that our missing donors would give through other avenues if they had no other way to do so. We did a great deal of mass appeals, indirect appeals (alumni magazine and e-newsletters), and as many personalized emails and phone calls as we could handle in our 5-person team. Here are the most basic of our findings:

In FY11 (our only non-Phonathon year), 12% of our donors were repeat donors. We reached about 11% participation, our lowest ever. In FY12 (the year Phonathon returned):

  • 27% of our donors were new/recovered donors, a 14% increase from the previous year.
  • We reached 14% overall alumni participation.
  • Of the 27% of donors who were considered new/recovered, 44% gave through Phonathon.
  • The total number of donors we gained from FY11 to FY12 was about the same as the number of people who gave through the Phonathon.
  • In FY13 (still in progress, so we’ll see how this actually plays out), 35% of the previously-recovered donors who gave again did so in response to less work-intensive mass mailing appeals, showing that some of these Phonathon donors can, in fact, be converted and (hopefully) cultivated long-term.

In general, I think your article was right on point. Large universities with a for-pay, ongoing Phonathon program should take a look and see whether their efforts should be spent elsewhere. I just wanted to share with you my successes here and the ways in which our school has been able to maintain a legitimate, cost-effective way to increase our participation rate and maintain the quality of our alumni database.

Paul’s description of his program reminds me there are plenty of institutions out there who don’t have big, automated, and data-intensive calling programs gobbling up money. What really gets my attention is that Walnut Hill uses alumni affinity factors (event attendance, employment info) to prioritize calling to get the job done on a tight schedule and with a minimum of expense. This small-scale data mining effort is an example for the rest of us who have a lot of inefficiency in our programs due to a lack of focus.

The first predictive models I ever created were for a relatively small university Phonathon that was run with printed prospect cards and manual dialing — a very successful program, I might add. For those of you at smaller institutions wondering if data mining is possible only with massive databases, the answer is NO.

And finally, how wonderful it is that Walnut Hill can quantify exactly what Phonathon contributes in terms of new donors, and new donors who convert to mail-responsive renewals.

Bravo!

15 April 2013

What do we do about Phonathon?

Filed under: Alumni, Annual Giving, Phonathon — kevinmacdonell @ 5:41 am

I love Phonathon. I love what it does, and I love the data it produces. But sad to say, Phonathon may be the sick old man of fundraising. In fact some have taken its pulse and declared it dead.

A few weeks ago, a Director of Annual Giving named Audra Vaz posted this question to a listserv: “I’m writing to see if any institutions out there have transitioned away from their Phonathon program. If so, how did it affect your Annual Giving program?”

A number of people immediately came to the defence of Phonathon with assurances of the long-term value of calling programs. The responses went something like this: Get rid of Phonathon?? It’s a great point of connection between an institution and its alumni, particularly its younger alumni. It’s the best tool for donor acquisition. It’s a great way to update contact and employment information. Don’t do it!

Audra wasn’t satisfied. “As currently run, it’s expensive and ineffective,” she wrote of her program at Florida Atlantic University in Boca Raton. “It takes up 30% of my budget, brings in less than 2% of Annual Fund donations and only has a 20% ROI. I could use that money for building societies, personal solicitations, and direct mail, which is much more effective for us. In a difficult budget year, I cannot be nostalgic and continue to justify the bleed for a program that most institutions do yet hardly any make money off of. Seems like a bad business model to me.”

I can’t disagree with Audra. Anyone following fundraising listservs knows that, in general, contact rates and productivity are declining year after year. And out of the contacts it does manage to make, Phonathon generates scads of pledges that are never fulfilled, entailing the additional cost of reminder mailings and write-offs. There are those who say that Phonathon should be viewed as an investment and not an expense. I have been inclined to that view myself. The problem is that yes, it IS an expense, and not a small one. If Phonathons create value in all the other ways that the defenders say they do, then where are the numbers to prove it? Where’s the ROI? Audra had numbers; the defenders did not. At strategic planning time, numbers talk louder than opinions.

When I contacted Audra recently to get permission to use her name, she told me she has opted to keep her Phonathon program for now, but will market its services to other university divisions to turn it into a revenue generator (athletics and arts ticket sales, admissions welcome calls, invitations to events, and alumni membership renewals). That sounds like a good idea. I can think of a number of additional ways to keep Phonathon alive and relevant, but since this is a data-related blog I will focus on just two.

1. Stop calling everybody!

At many institutions, Phonathon is used as a mass-contact tool for indiscriminately soliciting anyone the Annual Fund believes might have a pulse. This approach is becoming less and less sustainable. The same question is asked repeatedly on the listservs: “How many times, on average, do you attempt to call alumni non-donors before you retire their call sheet?” And then people give their one-size-fits-all answers: five times, seven times, whatever times per record. Given how graduating classes have increased in size for most institutions, I am not surprised to read that some programs are stretched too thin to call very deeply. As one person wrote recently: “Because of time and resources constraints, we’re lucky to get two attempts in with nondonor/long lapsed alumni.”

I just don’t get it.

We know that people who have attended events are more likely to pick up the phone. We know that alumni who have shared their job title with us are more likely to pick up the phone. We know that alumni who have given us their email address are more likely to pick up the phone. So why in 2013 are schools still expending the same amount of energy on each of their prospective donors as if they were all exactly alike? They are NOT all alike, and these schools are wasting time and money.

If you’ve got automated calling software, you should be adding up the number of times you’ve successfully reached individual alumni over the years (regardless of the call result), and using that data to build predictive models for likelihood to answer the phone. If you don’t have that historical data, you should at least consider an engagement-based scoring system to focus your efforts on alumni who have demonstrated some of the usual signs of affinity: coming to events, sharing contact and employment information, having other family members who are alumni, volunteering, responding to surveys, and so on.
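A bare-bones version of such an engagement-based score might look like the following. The affinity signals are the ones listed above, but the point weights are invented placeholders; ideally you would derive the weights from your own historical call results rather than picking them by hand.

# Simple additive affinity score; weights are illustrative only
AFFINITY_WEIGHTS = {
    "attended_event": 3,
    "volunteered": 3,
    "has_employment_info": 2,   # shared a job title or employer
    "answered_survey": 2,
    "has_email": 2,
    "family_alumni": 1,         # other family members are alumni
}

def engagement_score(alum):
    """alum: dict of True/False affinity flags. Higher = call deeper."""
    return sum(pts for flag, pts in AFFINITY_WEIGHTS.items() if alum.get(flag))

# Sort the calling pool so the most engaged records get the most attempts
pool = [
    {"id": "A-1", "attended_event": True, "has_email": True},
    {"id": "A-2", "family_alumni": True},
    {"id": "A-3"},
]
pool.sort(key=engagement_score, reverse=True)
print([p["id"] for p in pool])   # -> ['A-1', 'A-2', 'A-3']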

A phone contact propensity score (and related models such as donor acquisition likelihood) will allow you to make cuts to your program when and if the time comes. You can feel more confident that you’re trimming the bottom, cutting away the least productive slice of your program.

2. Think outside Phonathon!

Your phone program is a data generation machine, granting you a wide window on the behaviours of your alumni and donors. I’m not talking just about address updates, as valuable as those are. You know how many times they’ve picked up the phone when they see your ID come up on the display, and you might also know how long they’ve spent on the phone with your student callers. This is not trivial information, nor is it of interest only to Phonathon managers.

Relate this behavioural data to other desired behaviours: Are your current big donors characterized by picking up more often? Do your Planned Giving expectancies tend to have longer conversations on average? What about volunteering, mentoring, and other activities? Phone contact history is real, affinity-related data, delivered fresh to you daily, lifting the curtain on who likes you.

(When I say real data, I mean REAL. This is a record of what individuals have actually DONE, not what they’ve stated as a preference in a survey. This data doesn’t lie.)
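If your call attempts live in a table, these comparisons take only a few lines. The column names and segments below are hypothetical, and the sketch assumes one row per call attempt.

import pandas as pd

# One row per call attempt (toy data; real columns will differ)
calls = pd.DataFrame({
    "segment":      ["major", "major", "annual", "annual", "annual"],
    "answered":     [1, 1, 0, 1, 0],
    "talk_seconds": [310, 95, 0, 40, 0],
})

# Do big donors pick up more often? Do they talk longer on average?
by_segment = calls.groupby("segment").agg(
    pickup_rate=("answered", "mean"),
    avg_talk_seconds=("talk_seconds", "mean"),
)
print(by_segment)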

A few closing thoughts…

I said earlier that Phonathon has been used (or misused) as a mass-contact tool. Software and automation enable a hired team of students to make a staggering number of phone calls in a very short time. The bulk of long-lapsed and never-donors are approached by phone rather than mail: the cost of a single call attempt seems negligible, so Phonathon managers spread their acquisition efforts as thinly as possible, trying to turn over every last stone.

There’s something to be said about having adequate volume in order to generate new donors, but here’s the problem: The phone is no longer a mass-contact medium. In fact it’s well on its way to becoming a niche medium, handled by a whole new type of device. Some people answer the phone and respond positively to being approached that way, and for that reason phone will be important for as long as there are phones. But the masses are no longer answering.

These days some fundraisers think of email as their new mass-contact medium of choice. Again they must be thinking in terms of cost, since it hardly matters whether you’re sending 1,000 emails or 100,000 emails. And again they’re mistaken in thinking that email is practically free — they’re just not counting the full cost to the institution of the practice of spamming people.

The truth is, there is no reliable mass-contact medium anymore. If email (or phone, or social media) is a great fundraising channel, it’s not because it’s a seemingly cheap way to reach out to thousands of people. It’s a great fundraising channel when, and only when, it reaches out to the right people at the right time.

  1. Alumni and donors are not all the same. They are not defined by their age, address or other demographic groupings. They are individual human beings.
  2. They have preferred channels for communicating and giving.
  3. These preferences are revealed only through observation of past behaviours. Not through self-reporting, not through classification by age or donor status, not by any other indirect means.
  4. We cannot know the real preferences of everyone in our database. Therefore, we model on observed past behaviours to make intelligent guesses about the preferences we don’t already know.
  5. Our models are an improvement on current practice, but they are imperfect. All models are wrong; we will make them better. And we will keep Phonathon healthy and productive.
