CoolData blog

18 February 2014

Save our planet

Filed under: Annual Giving, Why predictive modeling? — kevinmacdonell @ 9:09 pm

You’ve seen those little signs — they’re in every hotel room these days. “Dear Guest,” they say, “Bed sheets that are washed daily in thousands of hotels around the world use millions of gallons of water and a lot of detergent.” The card then goes on to urge you to give some indication that you don’t want your bedding or towels taken away to be laundered.

Presumably millions of small gestures by hotel guests have by now added up to a staggering amount of savings in water, energy and detergent.

It reminds me of what predictive analytics does for a mass-contact area of operation such as Annual Giving. If we all trimmed down the amount of acquisition contacts we make — expending the same amount of effort but only on the people with highest propensity to give, or likelihood to pick up the phone, or greatest chance of opening our email or what-have-you — we’d be doing our bit to collectively conserve a whole lot of human energy, and not a few trees.

With many advancement leaders questioning whether they can continue to justify an expensive Phonathon program that is losing more ground every year, getting serious about focusing resources might just be the saviour of a key acquisition program, to boot.

30 April 2013

Final thoughts on Phonathon donor acquisition

No, this is not the last time I’ll write about Phonathon, but after today I promise to give it a rest and talk about something else. I just wanted to round out my post on the waste I see happening in donor acquisition via phone programs with some recent findings of mine. Your mileage may vary, or “YMMV” as they say on the listservs, so as usual don’t just accept what I say. I suggest questions that you might ask of your own data — nothing more.

I’ve been doing a thorough analysis of our acquisition efforts this past year. (The technical term for this is a WTHH analysis … as in “What The Heck Happened??”) I found that getting high phone contact rates seemed to be linked with making a sufficient number of call attempts per prospect. For us, any fewer than three attempts per prospect is too few to acquire new donors in any great number. In general, contact rates improve with call attempt numbers above three, and after that, the more the better.

“Whoa!”, I hear you protest. “Didn’t you just say in your first post that it makes no sense to have a set number of call attempts for all prospects?”

You’re right — I did. It doesn’t make sense to have a limit. But it might make sense to have a minimum.

To get anything from an acquisition segment, more calling is better. However, by “call more” I don’t mean call more people. I mean make more calls per prospect. The RIGHT prospects. Call the right people, and eventually many or most of them will pick up the phone. Call the wrong people, and you can ring them up 20, 30, 50 times and you won’t make a dent. That’s why I think there’s no reason to set a maximum number of call attempts. If you’re calling the right people, then just keep calling.

What’s new here is that three attempts looks like a solid minimum. This is higher than what I see some people reporting on the listservs, and well beyond the capacity of many programs as they are currently run — the ones that call every single person with a phone number in the database. To attain the required amount of per-prospect effort, those schools would have to increase phone capacity (more students, more nights), or load fewer prospects. The latter option is the only one that makes sense.

Reducing the number of people we’re trying to reach to acquire as new donors means using a predictive model or at least some basic data mining and scoring to figure out who is most likely to pick up the phone. I’ve built models that do that for two years now, and after evaluating their performance I can say that they work okay. Not super fantastic, but okay. I can live with okay … in the past five years our program has made close to one million call attempts. Even a marginal improvement in focus at that scale of activity makes a significant difference.

You don’t need to hack your acquisition segment in half today. I’m not saying that. To get new donors you still need lots and lots of prospects. Maybe someday you’ll be calling only a fraction of the people you once did, but there’s no reason you can’t take a gradual approach to getting more focused in the meantime. Trim things down a bit in the first year, evaluate the results, and fold what you learned into trimming a bit more the next year.

18 April 2013

A response to ‘What do we do about Phonathon?’

I had a thoughtful response to my blog post from earlier this week (What do we do about Phonathon?) from Paul Fleming, Database Manager at Walnut Hill School for the Arts in Natick, Massachusetts, about half an hour from downtown Boston. With Paul’s permission, I will quote from his email, and then offer my comments afterward:

I just wanted to share with you some of my experiences with Phonathon. I am the database manager of a 5-person Development department at a wonderful boarding high school called the Walnut Hill School for the Arts. Since we are a very small office, I have also been able to take on the role of the organizer of our Phonathon. It’s only been natural for me to combine the two to find analysis about the worth of this event, and I’m happy to say, for our own school, this event is amazingly worthwhile.

First of all, as far as cost vs. gain, this is one of the cheapest appeals we have. Our Phonathon callers are volunteer students who are making calls either because they have a strong interest in helping their school, or they want to be fed pizza instead of dining hall food (pizza: our biggest expense). This year we called 4 nights in the fall and 4 nights in the spring. So while it is an amazing source of stress during that week, there aren’t a ton of man-hours put into this event other than that. We still mail letters to a large portion of our alumni base a few times a year. Many of these alumni are long-shots who would not give in response to a mass appeal, but our team feels that the importance of the touch point outweighs the short-term inefficiencies that are inherent in this type of outreach.

Secondly, I have taken the time to prioritize each of the people who are selected to receive phone calls. As you stated in your article, I use things like recency and frequency of gifts, as well as other factors such as event participation or whether we have other details about their personal life (job info, etc). We do call a great deal of lapsed or nondonors, but if we find ourselves spread too thin, we make sure to use our time appropriately to maximize effectiveness with the time we have. Our school has roughly 4,400 living alumni, and we graduate about 100 wonderful, talented students a year. This season we were able to attempt phone calls to about 1,200 alumni in our 4 nights of calling. The higher-priority people received up to 3 phone calls, and the lower-priority people received just 1-2.

Lastly, I was lucky enough to start working at my job in a year in which there was no Phonathon. This gave me an amazing opportunity to test the idea that our missing donors would give through other avenues if they had no other way to do so. We did a great deal of mass appeals, indirect appeals (alumni magazine and e-newsletters), and as many personalized emails and phone calls as we could handle in our 5-person team. Here are the most basic of our findings:

In FY11 (our only non-Phonathon year), 12% of our donors were repeat donors. We reached about 11% participation, our lowest ever. In FY12 (the year Phonathon returned):

  • 27% of our donors were new/recovered donors, a 14% increase from the previous year.
  • We reached 14% overall alumni participation.
  • Of the 27% of donors who were considered new/recovered, 44% gave through Phonathon.
  • The total amount of donors we had gained from FY11 to FY12 was about the same number of people who gave through the Phonathon.
  • In FY13 (still in progress, so we’ll see how this actually plays out), 35% of the previously-recovered donors who gave again gave in response to less work-intensive mass mailing appeals, showing that some of these Phonathon donors can, in fact, be converted and (hopefully) cultivated long-term.

In general, I think your article was right on point. Large universities with a for-pay, ongoing Phonathon program should take a look and see whether their efforts should be spent elsewhere. I just wanted to share with you my successes here and the ways in which our school has been able to maintain a legitimate, cost-effective way to increase our participation rate and maintain the quality of our alumni database.

Paul’s description of his program reminds me there are plenty of institutions out there who don’t have big, automated, and data-intensive calling programs gobbling up money. What really gets my attention is that Walnut Hill uses alumni affinity factors (event attendance, employment info) to prioritize calling to get the job done on a tight schedule and with a minimum of expense. This small-scale data mining effort is an example for the rest of us who have a lot of inefficiency in our programs due to a lack of focus.

The first predictive models I ever created were for a relatively small university Phonathon that was run with printed prospect cards and manual dialing — a very successful program, I might add. For those of you at smaller institutions wondering if data mining is possible only with massive databases, the answer is NO.

And finally, how wonderful it is that Walnut Hill can quantify exactly what Phonathon contributes in terms of new donors, and new donors who convert to mail-responsive renewals.

Bravo!

15 April 2013

What do we do about Phonathon?

Filed under: Alumni, Annual Giving, Phonathon — kevinmacdonell @ 5:41 am

I love Phonathon. I love what it does, and I love the data it produces. But sad to say, Phonathon may be the sick old man of fundraising. In fact some have taken its pulse and declared it dead.

A few weeks ago, a Director of Annual Giving named Audra Vaz posted this question to a listserv: “I’m writing to see if any institutions out there have transitioned away from their Phonathon program. If so, how did it affect your Annual Giving program?”

A number of people immediately came to the defence of Phonathon with assurances of the long-term value of calling programs. The responses went something like this: Get rid of Phonathon?? It’s a great point of connection between an institution and its alumni, particularly its younger alumni. It’s the best tool for donor acquisition. It’s a great way to update contact and employment information. Don’t do it!

Audra wasn’t satisfied. “As currently run, it’s expensive and ineffective,” she wrote of her program at Florida Atlantic University in Boca Raton. “It takes up 30% of my budget, brings in less than 2% of Annual Fund donations and only has a 20% ROI. I could use that money for building societies, personal solicitations, and direct mail which is much more effective for us. In a difficult budget year, I cannot be nostalgic and continue to justify the bleed for a program that most institutions do yet hardly any makes money off of. Seems like a bad business model to me.”

I can’t disagree with Audra. Anyone following fundraising listservs knows that, in general, contact rates and productivity are declining year after year. And out of the contacts it does manage to make, Phonathon generates scads of pledges that are never fulfilled, entailing the additional cost of reminder mailings and write-offs. There are those who say that Phonathon should be viewed as an investment and not an expense. I have been inclined to that view myself. The problem is that yes, it IS an expense, and not a small one. If Phonathons create value in all the other ways that the defenders say they do, then where are the numbers to prove it? Where’s the ROI? Audra had numbers; the defenders did not. At strategic planning time, numbers talk louder than opinions.

When I contacted Audra recently to get permission to use her name, she told me she has opted to keep her Phonathon program for now, but will market its services to other university divisions to turn it into a revenue generator (athletics and arts ticket sales, admissions welcome calls, invitations to events, and alumni membership renewals). That sounds like a good idea. I can think of a number of additional ways to keep Phonathon alive and relevant, but since this is a data-related blog I will focus on just two.

1. Stop calling everybody!

At many institutions, Phonathon is used as a mass-contact tool for indiscriminately soliciting anyone the Annual Fund believes might have a pulse. This approach is becoming less and less sustainable. The same question is asked repeatedly on the listservs: “How many times, on average, do you attempt to call alumni non-donors before you retire their call sheet?” And then people give their one-size-fits-all answers: five times, seven times, whatever times per record. Given how graduating classes have increased in size for most institutions, I am not surprised to read that some programs are stretched too thin to call very deeply. As one person wrote recently: “Because of time and resources constraints, we’re lucky to get two attempts in with nondonor/long lapsed alumni.”

I just don’t get it.

We know that people who have attended events are more likely to pick up the phone. We know that alumni who have shared their job title with us are more likely to pick up the phone. We know that alumni who have given us their email address are more likely to pick up the phone. So why in 2013 are schools still expending the same amount of energy on each of their prospective donors as if they were all exactly alike? They are NOT all alike, and these schools are wasting time and money.

If you’ve got automated calling software, you should be adding up the number of times you’ve successfully reached individual alumni over the years (regardless of the call result), and use that data to build predictive models for likelihood to answer the phone. If you don’t have that historical data, you should at least consider an engagement-based scoring system to focus your efforts on alumni who have demonstrated some of the usual signs of affinity: coming to events, sharing contact and employment information, having other family members who are alumni, volunteering, responding to surveys and so on.
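A scoring system like the one just described can be sketched in a few lines. This is a minimal illustration only: the field names and weights below are hypothetical, not drawn from the post, and a real implementation would calibrate weights against your own contact-rate history.

```python
# A minimal sketch of an engagement-based phone score. Field names and
# weights are illustrative assumptions, not taken from any real system.

def engagement_score(alum):
    """Sum simple affinity signals into a phone-contact priority score."""
    score = 0
    if alum.get("attended_event"):      score += 2  # came to an event
    if alum.get("has_employment_info"): score += 2  # shared job title/employer
    if alum.get("has_email"):           score += 1  # gave us an email address
    if alum.get("family_alumni"):       score += 1  # other alumni in the family
    if alum.get("volunteered"):         score += 2
    if alum.get("answered_survey"):     score += 1
    return score

# Rank the calling pool so the highest-affinity prospects get the most attempts
pool = [
    {"id": 1, "attended_event": True, "has_email": True},
    {"id": 2, "has_employment_info": True},
    {"id": 3},
]
ranked = sorted(pool, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # highest-scoring alumni first
```

The point of ranking rather than simply filtering is that it lets you cut from the bottom gradually, year over year, as the next section suggests.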

A phone contact propensity score (and related models such as donor acquisition likelihood) will allow you to make cuts to your program when and if the time comes. You can feel more confident that you’re trimming the bottom, cutting away the least productive slice of your program.

2. Think outside Phonathon!

Your phone program is a data generation machine, granting you a wide window view on the behaviours of your alumni and donors. I’m not talking just about address updates, as valuable as those are. You know how many times they’ve picked up the phone when they see your ID come up on the display, and you might also know how long they’ve spent on the phone with your student callers. This is not trivial information nor is it of interest only to Phonathon managers.

Relate this behavioural data to other desired behaviours: Are your current big donors characterized by picking up more often? Do your Planned Giving expectancies tend to have longer conversations on average? What about volunteering, mentoring, and other activities? Phone contact history is real, affinity-related data, delivered fresh to you daily, lifting the curtain on who likes you.

(When I say real data, I mean REAL. This is a record of what individuals have actually DONE, not what they’ve stated as a preference in a survey. This data doesn’t lie.)

A few closing thoughts. …

I said earlier that Phonathon has been used (or misused) as a mass-contact tool. Software and automation enable a hired team of students to make a staggering number of phone calls in a very short time. The bulk of long-lapsed and never-donors are approached by phone rather than mail: The cost of a single call attempt seems negligible, so Phonathon managers spread their acquisition efforts as thinly as possible, trying to turn over every last stone.

There’s something to be said about having adequate volume in order to generate new donors, but here’s the problem: The phone is no longer a mass-contact medium. In fact it’s well on its way to becoming a niche medium, handled by a whole new type of device. Some people answer the phone and respond positively to being approached that way, and for that reason phone will be important for as long as there are phones. But the masses are no longer answering.

These days some fundraisers think of email as their new mass-contact medium of choice. Again they must be thinking in terms of cost, since it hardly matters whether you’re sending 1,000 emails or 100,000 emails. And again they’re mistaken in thinking that email is practically free — they’re just not counting the full cost to the institution of the practice of spamming people.

The truth is, there is no reliable mass-contact medium anymore. If email (or phone, or social media) is a great fundraising channel, it’s not because it’s a seemingly cheap way to reach out to thousands of people. It’s a great fundraising channel when, and only when, it reaches out to the right people at the right time.

  1. Alumni and donors are not all the same. They are not defined by their age, address or other demographic groupings. They are individual human beings.
  2. They have preferred channels for communicating and giving.
  3. These preferences are revealed only through observation of past behaviours. Not through self-reporting, not through classification by age or donor status, not by any other indirect means.
  4. We cannot know the real preferences of everyone in our database. Therefore, we model on observed past behaviours to make intelligent guesses about the preferences we don’t already know.
  5. Our models are an improvement on current practice, but they are imperfect. All models are wrong; we will make them better. And we will keep Phonathon healthy and productive.

22 January 2013

Sticking a pin in acquisition mail bloat

Filed under: Alumni, Annual Giving, Vendors — kevinmacdonell @ 6:45 am

I recently read a question on a listserv that prompted me to respond. A university in the US was planning to solicit about 25,000 of its current non-donor alumni. The question was: How best to filter a non-donor base of 140,000 in order to arrive at the 25,000 names of those most likely to become donors? This university had only ever solicited donors in the past, so this was new territory for them. (How those alumni became donors in the first place was not explained.)

One responder to the question suggested narrowing down the pool by recent class years, reunion class years, or something similar, also using any ratings if they were available, and then doing an Nth-record select on the remaining records to get to 25,000. Selecting every Nth record is one way to pick an approximately random sample. If you aren’t able to make this selection, the responder suggested, then your mail house vendor should be able to.

This answer was fine, up until the “Nth selection” part. I also had reservations about putting the vendor in control of prospect selection. So here are some thoughts on the topic of acquisition mailings.

Doing a random selection assumes that all non-donor alumni are alike, or at least that we aren’t able to make distinctions. Neither assumption would be true. Although they haven’t given yet, some alumni feel closer affinity to your school than others, and you should have some of these affinity-related cues stored in your database. This suggests that a more selective approach will perform better than a random sample.

Not long ago, I isolated all our alumni who converted from never-donor to donor at any time in the past two years. (Two years instead of just one, in order to boost the numbers a bit.) Then I compared this group with the universe of all the never-donors who had failed to convert, based on a number of attributes that might indicate affinity. Some of my findings included:

  • “Converters” were more likely than “non-converters” to have an email in the database.
  • They were more likely to have answered the phone in our Phonathon (even though the answer was ‘no pledge’)
  • They were more likely to have employment information (job title or employer name) in the database.
  • They were more likely to have attended an event since graduating.
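A group comparison like this is straightforward to compute: for each affinity attribute, take the share of each group that has the attribute flagged. The sketch below uses invented records and hypothetical field names purely to show the shape of the calculation.

```python
# A sketch of the converter vs. non-converter comparison. The records and
# field names below are invented for illustration.

def rate(group, attr):
    """Share of records in the group with the attribute flagged True."""
    return sum(1 for a in group if a.get(attr)) / len(group)

converters = [
    {"has_email": True,  "answered_phone": True,  "attended_event": True},
    {"has_email": True,  "answered_phone": False, "attended_event": False},
]
non_converters = [
    {"has_email": False, "answered_phone": False, "attended_event": False},
    {"has_email": True,  "answered_phone": False, "attended_event": False},
]

# Attributes where converters clearly out-rate non-converters are candidates
# for inclusion in an acquisition score.
for attr in ("has_email", "answered_phone", "attended_event"):
    print(attr, rate(converters, attr), rate(non_converters, attr))
```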

Using these and other factors, I created a score which was used to select which non-donor alumni would be included in our acquisition mailing. I’ve been monitoring the results, and although new donors do tend to be the alumni with higher scores, frankly we’ve had poor results via mail solicitation, so evaluation is difficult. This in itself is not unusual: New-donor acquisition is very much a Phonathon phenomenon for us — in our phone results, the effectiveness of the score is much more evident.

Poor results or not, it’s still better than random, and whenever you can improve on random, you can reduce the size of a mailing. Acquisition mailings in general are way too big, simply because they’re often just random — they have to cast a wide net. Unfortunately your mail house is unlikely to encourage you to get more focused and save money.

Universities contract with vendors for their expertise and efficiency in dealing with large mailings, including cleaning the address data and handling the logistics that many small Annual Fund offices just aren’t equipped to deal with. A good mail house is a valuable ally and source of direct-marketing expertise. But acquisition presents a conflict for vendors, who make their money on volume. Annual Fund offices should be open to advice from their vendor, but they would do well to develop their own expertise in prospect selection, and make drastic cuts to the bloat in their mailings.

Donors may need to be acquired at a loss, no question. It’s about lifetime value, after all. But if the cumulative cost of that annual appeal exceeds the lifetime value of your newly-acquired donor, then the price is too high.

13 September 2012

Odd but true findings? Upgrading annual donors are “erratic” and “volatile”

Filed under: Annual Giving, Prospect identification, RFM — kevinmacdonell @ 8:26 am

In Annual Fund, Leadership giving typically starts at gifts of $1,000 (at least in Canada it does). For most schools, these donors make up a minority of all donors, but a majority of annual revenue. They are important in their own right, and for delivering prospects to Major Giving. Not surprising, then, that elevating donors from entry-level giving to the upper tiers of the Annual Fund is a common preoccupation.

It has certainly been mine. I’ve spent considerable time studying where Leadership donors come from, in terms of how past behaviours potentially signal a readiness to enter a new level of support. Some of what I’ve learned seems like common sense. Other findings strike me as a little weird, yet plausible. I’d like to share some of the weird insights with you today. Although they’re based on data from a single school, I think they’re interesting enough to merit your trying a similar study of donor behaviour.

First, some things I learned which you probably won’t find startling:

  • New Leadership donors tend not to come out of nowhere. They have giving histories.
  • Their previous giving is usually recent, and consists of more than one or two years of giving.
  • Usually those gifts are of a certain size. Many donors giving at the $1,000 level for the first time gave at least $500 the previous year. Some gave less than that, but $500 seems to be an important threshold.

In short, it’s all about the upgrade: Find the donors who are ready to move up, and you’re good to go. But who are those donors? How do you identify them?

It would be reasonable to suggest that you should focus on your most loyal donors, and that RFM scoring might be the way to go. I certainly thought so. Everyone wants high retention rates and loyal donors. Just like high-end donors, people who give every year are probably your program’s bread and butter. They have high lifetime value, they probably give at the same time of year (often December), and they are in tune with your consistent yearly routine of mailings and phone calls. Just the sort of donor who will have a high RFM score. So what’s the problem?

The problem was described at a Blackbaud annual fund benchmarking session I attended this past spring: Take a hard look at your donor data, they said, and you’ll probably discover that the longer a donor has given at a certain level, the less likely she is to move up. She may be loyal, but if she plateaued years ago at $100 or $500 per year, she’s not going to respond to your invitation to join the President’s Circle, or whatever you call it.

Working with this idea that donor loyalty can equate to donor inertia, I looked for evidence of an opposite trait I started calling “momentum.” I defined it as an upward trajectory in giving year over year, hopefully aimed at the Leadership level. I pulled a whole lot of data: The giving totals for each of the past seven years for every Annual Fund donor. I tried various methods for characterizing the pattern of each donor’s contributions over time. I wanted to calculate a single number that represented the slope and direction of each donor’s path: Trending sharply up, or somewhat up, staying level, trending somewhat down, or sharply down.

I worked with that concept for a while. A long while. I think people got sick of me talking about “momentum.”

After many attempts, I had to give up. The formulas I used just didn’t seem to give me anything useful to sum up the variety of patterns out there. So I tried studying some giving scenarios, based on whether or not a donor gave in a given year. As you might imagine, the number of possible likely scenarios quickly approached the level of absurdity. I actually wrote this sentence: “What % of donors with no giving Y1-Y4, but gave in Y5 and did not give in Y6 upgraded from Y5 to Y7?” It was at that point that my brain seized up. I cracked a beer and said to hell with that.

I tried something new. For each donor, I converted their yearly giving totals into a flag that indicated whether they had giving in a particular year or not: Y for yes, N for no. Imagine an Excel file with seven columns full of Ys and Ns, going on for thousands of rows, one row per donor. Then I concatenated the first six columns of Y/Ns. A donor who gave every year ended up with the string “YYYYYY”. A donor who gave every second year looked like “YNYNYN” — and so on.

I called these strings “donor signatures” — sort of a fingerprint of their giving patterns over six years. Unlike a fingerprint, though, these signatures were not unique to the individual. The 15,000 donors in my data file fit into just 64 signatures.

A-ha, now I was getting somewhere. I had set aside the final year of giving data — year seven — which I could use to determine whether a donor had upgraded, downgraded or stayed the same. All I had to do was take those 64 categories of donors and rank them by the percentage of donors who had upgraded in the final year. Then I could just eyeball the sorted signatures and see if I could detect any patterns in the signatures that most often led to the upgrading behaviours I was looking for. (This is much easier done in stats software than in Excel, by the way.)
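The signature-building and ranking steps above can be sketched in a few lines of stats-friendly code. The giving totals below are invented, and the Y/N concatenation is simply the method described in the post, not the author’s actual script.

```python
from collections import defaultdict

# A sketch of the "donor signature" approach: concatenate six years of
# gave/didn't-give flags, then rank signatures by the share of donors who
# upgraded in year seven. The donor records here are invented examples.

donors = [
    # (yearly giving totals Y1-Y6, upgraded_in_Y7)
    ([50, 0, 50, 0, 50, 0],    True),   # every-other-year donor
    ([50, 50, 50, 50, 50, 50], False),  # loyal, plateaued donor
    ([0, 0, 50, 0, 50, 50],    True),
    ([50, 50, 50, 50, 50, 50], False),
]

groups = defaultdict(lambda: [0, 0])  # signature -> [donor count, upgrader count]
for giving, upgraded in donors:
    sig = "".join("Y" if amt > 0 else "N" for amt in giving)
    groups[sig][0] += 1
    groups[sig][1] += int(upgraded)

# Sort signatures by upgrade rate, highest first, then eyeball for patterns
ranked = sorted(groups.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
for sig, (n, ups) in ranked:
    print(sig, f"{ups}/{n} upgraded")
```

With six Y/N positions there are at most 2^6 = 64 possible signatures, which is exactly why the post’s 15,000 donors collapsed into just 64 groups.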

All of the following observations are based on the giving patterns of donors who gave in the final two years, which allowed me to compare whether they upgraded or not. This cut out many possible scenarios (e.g., donors who didn’t give in one of those two years), but it was a good starting point.

I confirmed that the more years a donor has given, the more likely they are to be retained. BUT:

  • The more previous years a donor has given consecutively, the LESS likely they are to upgrade if they give again.
  • A donor is markedly more likely to upgrade from the prior year if they have lapsed at least one year prior to giving again.
  • Specifically, they are most likely to upgrade if they have one, two or three years with giving in the previous five. More than that, and they are becoming more loyal, and therefore less likely to upgrade.
  • Donors who give every other year, or who have skipped up to two years at a time, are most likely to upgrade from last year to the current year.

I told you it was counter-intuitive. If it was just all obvious common sense, we wouldn’t need data analysis. Here’s more odd stuff:

  • In general, the same qualities that make a donor more likely to upgrade also make a donor upgrade by a higher amount.
  • By far, the highest-value upgrader is a last-year donor who lapsed the previous year but had three years of giving in the previous five.
  • The next-highest donor signatures all show combinations of repeated giving and lapsing.
  • As a general rule, the highest-value upgraders have about an equal number of years as a donor and as a non-donor.

The conclusion? Upgrade potential can be a strangely elusive quality. From this analysis it appears that being a frequent donor (three or four years out of the past six) is a positive, but only if those years are broken up by the odd non-giving year. In other words, the upgrading donor is also something of an erratic donor.

I thought that was a pretty nifty phenomenon to bring to light. I decided to augment it by trying another, similar approach. Instead of flagging the simple fact of having given or not given in a particular year, this time I flagged whether a donor had upgraded from one year to the next.

Again I worked with seven fiscal years of giving data. I was interested in the final year – year seven – setting that as the “result” of the previous six years of giving behaviour. I was interested only in people who gave that year, AND who had some previous giving in years 1 to 6. The result set consisted of “Gave same or less” or “Upgrade”, and if upgrade, the average dollar upgrade.

The flags were a little more complicated than Y/N. I used ‘U’ to denote an upgrade from the year previous, ‘S’ to denote giving at the same level as the year previous, ‘D’ for a downgrade, and ‘O’ (for “Other”) if no comparison was possible (i.e., one or both years had no giving). Each signature had five characters instead of six, since it’s not possible to assign a code to the first year (no previous year of giving in the data to compare with).
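The flagging rule just described translates directly into code. This sketch assumes a yearly total of zero means no giving that year; the input totals are invented.

```python
# A sketch of the upgrade/same/downgrade signature. Each flag compares a
# year to the one before it, so seven years of totals yield a five-character
# signature covering years 2-6 (year 7 is held out as the "result" year).
# Assumes a total of zero means no giving that year.

def usdo_signature(totals):
    """Flag years 2-6 as U/S/D vs. the prior year, or O if either year is zero."""
    flags = []
    for prev, curr in zip(totals[:5], totals[1:6]):
        if prev == 0 or curr == 0:
            flags.append("O")   # no comparison possible
        elif curr > prev:
            flags.append("U")   # upgrade
        elif curr < prev:
            flags.append("D")   # downgrade
        else:
            flags.append("S")   # same amount
    return "".join(flags)

print(usdo_signature([25, 50, 50, 0, 75, 100, 150]))  # -> "USOOU"
```

With four possible flags in five positions there are 4^5 = 1,024 possible signatures, of which 521 actually occurred in the data described below.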

This time there were 521 signatures, which made interpretation much more difficult. Many signatures had fewer than five donors, and only a dozen or so contained more than 100 donors. But when I counted the number of upgrades, downgrades and “sames” that a donor had in the previous five years, and then looked at how they behaved in the final year, some clear patterns did emerge:

  • Donors who upgraded two or more times in the past were most likely to upgrade again in the current year, and the size of their upgrade was larger, than donors who upgraded fewer times, or never upgraded. Upgrade likelihood was highest if the donor had upgraded at least four times in the previous five years.
  • Donors who gave the same amount every year were the least likely to upgrade — this is the phenomenon people were talking about at the benchmarking meeting I mentioned earlier. Donors who never gave the same amount from one year to the next, or did so only once, had higher median upgrade amounts.
  • And finally, the number of downgrades … this paints a strongly counter-intuitive picture. The more previous downgrades a donor had, the more likely they were to upgrade in the current year!

In other words, along with being erratic, donors who upgrade also have the characteristic that I started to call volatility.

I wondered what the optimum mix of upgrades and downgrades might be, so I created a variable called “Upgrades minus Downgrades”, which calculated the difference only for donors who had at least one upgrade or downgrade. The variable ranged from -4 (lots of downgrades) to +5 (lots of upgrades). What I discovered is that it’s not a balance that is important, but that a donor be at one extreme or the other. The more extreme the imbalance, the more likely an upgrade will occur, and the larger it will be, on average.
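The “Upgrades minus Downgrades” variable is a simple count over the five-character U/S/D/O signature described earlier. A minimal sketch, with invented example signatures:

```python
# A sketch of the "Upgrades minus Downgrades" variable, computed from a
# five-character U/S/D/O signature. Donors with no upgrades or downgrades
# at all are excluded, matching the post's description.

def up_minus_down(sig):
    """Net upgrades over the five flagged years; None if no U or D at all."""
    ups, downs = sig.count("U"), sig.count("D")
    if ups == 0 and downs == 0:
        return None  # donor never changed amount (or gaps prevented comparison)
    return ups - downs

print(up_minus_down("UUUDU"))  # strongly positive: a likely upgrader
print(up_minus_down("DDDDO"))  # strongly negative: also a likely upgrader
print(up_minus_down("SSOSS"))  # excluded: gave the same amount every time
```

The finding in the post is that donors at either extreme of this scale upgrade more often than donors near zero, which is what makes the variable interesting despite its simplicity.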

ERRATIC and VOLATILE … two qualities you’ve probably never ascribed to your most generous donors. But there it is: Your best prospects for an ambitious ask (perhaps a face-to-face one) might be the ones who are inconsistent about the amounts they give, and who don’t care to give every year.

By all means continue to use RFM to identify the core of your top supporters, but be aware that this approach will not isolate the kind of rogue donors I’m talking about. You can use donor signatures, as I have, to explore the extent to which this phenomenon prevails in your own donor database. From there, you might wish to capture these behaviours as input variables for a proper predictive model.

At worst, you’ll be soliciting donors who will never become loyal, and who may not have lifetime values that are as attractive as our less flashy, but more dependable, loyal donors. On the other hand, if you put a bigger ask in front of them and they go for it, they may eventually enter the realm of major giving. And then it will all be worth it.
