CoolData blog

9 March 2010

Predicting who will increase their pledge

Filed under: Annual Giving, Model building, Predictive scores — kevinmacdonell @ 10:12 am

Whether you’ve got an alumni database or strictly a donor database, as a fundraiser you face the same unknown: Which donors in your database are most likely to give at higher levels? Who should you focus your time and attention on? Who is ready to be asked to give more? Maybe much more?

Unless you’re really digging into your database, the answer will remain shrouded in darkness. For example, if your primary means of segmenting your phonathon prospects is still giving history (e.g., LYBUNTs, SYBUNTs, whatever-BUNTs), then I’m telling you that you’re not learning anything you don’t already know.

Predictive analytics is all about revealing patterns and relationships that we could not otherwise have known, and a predictive model is the only tool capable of distinguishing between the $20-a-year donor who might be ready to give $100, and the $20-a-year donor who isn’t. Why? Because a predictive model is based on variables that are wholly or partially independent of giving, but correlated with giving.

Last year I created a predictive model for our Phonathon program, which ranked every one of our living alumni by their propensity-to-give score and sorted them into deciles. The idea was that the top decile (i.e., the top 10% of scores) would be expected to have the highest rates of participation and the highest average pledges. As our calling season winds down, I can confirm that both are overwhelmingly true. (This outcome has long since ceased to be a surprise, I should add.)
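Sorting scored records into deciles is mechanical once the model has produced a score for everyone. A minimal sketch, using made-up scores for ten alumni (in practice the scores come out of the fitted model, and the file would have tens of thousands of rows):

```python
# Hypothetical propensity-to-give scores for ten alumni.
scores = [0.05, 0.12, 0.33, 0.41, 0.52, 0.60, 0.71, 0.80, 0.88, 0.95]

# Rank everyone by score, then cut the ranking into ten equal groups.
# Decile 10 = top 10% of scores; decile 1 = bottom 10%.
order = sorted(range(len(scores)), key=lambda i: scores[i])
rank = {i: r for r, i in enumerate(order)}  # 0 = lowest score
deciles = [rank[i] * 10 // len(scores) + 1 for i in range(len(scores))]

for s, d in zip(scores, deciles):
    print(f"score={s:.2f} -> decile {d}")
```

Ties and uneven record counts would need a tie-breaking rule (a stats package's quantile binning handles this for you), but the principle is just rank-and-slice.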

But here’s something: Can our Phonathon model predict which donors are ready to boost their pledges?

Recently I pulled some data to identify which alumni had given in the Phonathon program in 2008-09 and then went on to make a bigger pledge this year. Here is how they break down by score decile:

  • Donors with a score from 1 to 7 account for about 19% of “increasers.”
  • Donors who score an 8 or 9 account for another 30%.
  • Donors with a score of 10 account for more than HALF of all increasers.
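Tallying that breakdown from your own data is a one-pass count. A sketch, using hypothetical decile labels for 100 increasers chosen to mirror the proportions above (your actual labels would come from joining the increaser list to the score file):

```python
from collections import Counter

# Hypothetical decile label for each of 100 "increasers" -- counts
# invented to mirror the proportions reported in the post.
increaser_deciles = ([10] * 51 + [9] * 16 + [8] * 14 +
                     [7] * 6 + [6] * 5 + [5] * 3 +
                     [4] * 2 + [3] * 1 + [2] * 1 + [1] * 1)

counts = Counter(increaser_deciles)
total = len(increaser_deciles)

# Collapse deciles into the three bands discussed above.
bands = {"1-7": range(1, 8), "8-9": range(8, 10), "10": [10]}
shares = {name: sum(counts[d] for d in band) / total
          for name, band in bands.items()}
print(shares)
```

With these illustrative counts the shares come out to 19%, 30%, and 51% for the three bands.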

So there’s yet another reason, if you needed it, for getting focused on the high-scorers in your program: They’re the ones most likely to increase their pledge year over year.

The implications are exciting. Could this knowledge not inform the way you construct formulas for target asks? I think so. At least you would have a sense of the relative probability that an increased ask will be successful. (For donors with lower scores, you might want to simply maintain them at their current giving level.)

I haven’t done this, but it would not be hard to build a model that specifically predicts propensity to increase. This would be especially valuable for non-university nonprofits whose databases are made up exclusively of donors. I can think of a number of ways to define the dependent variable for such a model, but a good start would be to go back a few years in order to identify a critical mass of donors who have exhibited the behaviour, and train the model on those.
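Defining that dependent variable is the crux. One hedged sketch of the labeling step: flag donors who gave in two consecutive fiscal years and gave more in the second. The field names and the rule for new donors are my assumptions, not a prescription:

```python
# Hypothetical donor records; "fy2008" and "fy2009" are invented
# field names for giving totals in consecutive fiscal years.
donors = [
    {"id": 1, "fy2008": 20, "fy2009": 100},  # increaser
    {"id": 2, "fy2008": 20, "fy2009": 20},   # held steady
    {"id": 3, "fy2008": 50, "fy2009": 0},    # lapsed
    {"id": 4, "fy2008": 0,  "fy2009": 25},   # new donor -- not an increaser here
]

def is_increaser(d):
    """1 if the donor gave in both years and gave more in the second year."""
    return int(d["fy2008"] > 0 and d["fy2009"] > d["fy2008"])

labels = {d["id"]: is_increaser(d) for d in donors}
```

The 0/1 labels then become the dependent variable for a regression model, trained on the same kinds of predictors as any propensity-to-give model. Going back several years, as suggested above, is what gets you enough positive cases to train on.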


  1. Were the scores based on the same variables as the overall phonathon scores or different variables?

    Comment by Diane Webber-Thrush — 9 March 2010 @ 11:20 am

    • The very same variables, the very same scores. The model that seems to do a good job predicting “increasers” is the same model that produced the phonathon scores.

      Comment by kevinmacdonell — 9 March 2010 @ 11:52 am

  2. […] will make a larger pledge in the current year. (I’ve already written about this: “Predicting who will increase their pledge“.) I’m wondering if I tagged the known Increasers with an indicator variable, could I […]

    Pingback by More and more and more models « CoolData blog — 24 March 2010 @ 12:30 pm

  3. What guidelines do you suggest for the minimum number of records needed (for instance, number of gifts at X amount) for a linear regression model to effectively predict likelihood of giving at a specified amount? Let’s assume good quality data.

    Comment by Randy Bunney — 26 March 2012 @ 8:57 am

