CoolData blog

25 August 2014

Your nonprofit’s real ice bucket challenge

It was only a matter of time. Over the weekend, a longtime friend dumped a bucket of ice water over his head and posted the video to Facebook. He challenged three friends — me included — to take the Ice Bucket Challenge in support of ALS research. I passed on the cold shower, but this morning I did make a gift to ALS Canada, a cause I wouldn’t have supported had it not been for my friend Paul and the brilliant campaign he participated in.*

Universities and other charities are, of course, watching closely and asking themselves how they can replicate this phenomenon. Fine … I am skeptical that central planning and a modest budget can give birth to such a massive juggernaut of socially-responsible contagion … but I wish them luck.

While we can admire our colleagues’ amazing work and good fortune, I am not sure we should envy them. In the coming year, ALS charities will be facing a huge donor-retention issue. Imagine gaining between 1.5 and 2 million new donors in the span of a few months. Now, I have no knowledge of what ALS fundraisers really intend to do with their hordes of newly-acquired donors. Maybe retention is not a goal. But it is a sure thing that the world will move on to some other craze. Retaining a tiny fraction of these donors could make the difference between the ice bucket challenge being just a one-time, non-repeatable anomaly and turning it into a foundation for long-term support that permanently changes the game for ALS research.

Perhaps the ice bucket challenge can be turned into an annual event that becomes as established as the walks, runs and other participatory events that other medical-research charities have. Who knows.

What is certain is that the majority of new donors will not give again. Just as certain is that it would be irresponsibly wasteful for charities to spread their retention budget equally over all new donors.

Which brings me to predictive modeling. Some portion of new donors WILL give again. Maybe something about the challenge touched them more deeply than the temporary fun of the ice bucket dare. Maybe they learned something about the disease. Maybe they know someone affected by ALS. There is no direct way to know. But I would be willing to bet that higher levels of engagement can be found in patterns in the data.

What factors might be predictors of longer-term engagement? It is not possible to say without some analysis, but sources of information might include:

  • How the donor arrived at the site prior to making a gift (following a link from another website, following a link via a social media platform, using a search engine).
  • How the donor became aware of the challenge (this is a question on some giving pages).
  • Whether they consented to future communications: Mail, email, or both.
  • Whether the donor continued on to a page on the website beyond the thank you page. (Did they start watching an ALS-related video and if so, how much of it did they watch?)
  • Whether the donor clicked a social media button to share the news of their gift, and where they shared it.

Shreds of ambiguous clues scattered here and there, admittedly, but that is what a good predictive model detects and amplifies. If it were up to me, I would also have asked on the giving page whether the donor had done the ice bucket thing. A year from now, my friend Paul is going to clearly remember the shock of pouring ice water over his head, plus the positive response he got on Facebook, and this will bring to mind his gift and the need to give again. My choosing not to do so might be associated with a lower level of commitment, and thus a lower likelihood of renewing. Just a theory.**
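For readers who want to experiment with this idea, here is a minimal sketch of how those signals might be turned into a second-gift score. Python with pandas and scikit-learn is assumed, and every name in it (the file name, columns such as referral_source or gave_again) is hypothetical; it illustrates the general approach, not any charity's actual data or workflow.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical extract: one row per newly-acquired donor, with the engagement
# signals listed above plus a flag for whether a second gift ever arrived.
donors = pd.read_csv("new_donors.csv")

# One-hot encode the categorical signals; numeric ones pass through unchanged.
X = pd.get_dummies(
    donors[["referral_source", "heard_via", "consent_mail", "consent_email",
            "video_seconds_watched", "shared_gift", "did_challenge"]],
    drop_first=True,
)
y = donors["gave_again"]  # 1 if the donor made a second gift, 0 otherwise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank holdout donors by their predicted probability of giving again.
donors.loc[X_test.index, "second_gift_score"] = model.predict_proba(X_test)[:, 1]
```

Donors at the top of that ranking would be the natural place to concentrate a retention budget that cannot cover everyone.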

Data-informed segmentation aimed at getting a second gift from newly-acquired donors is not quite as sexy as being an internet meme. However, unlike riding the uncontrollable wave of a social media sensation, retention is something that charities might actually be able to plan for.

* I would like to see this phenomenon raise all boats for medical charities, therefore I also gave to Doctors Without Borders Canada and the Molly Appeal for Medical Research. Check them out.

** Update: I am told that actually, this question IS asked. I didn’t see it on the Canadian site, but maybe I just missed it. Great!

26 May 2011

Lift charts, for fun and profit

Filed under: Uncategorized — kevinmacdonell @ 5:18 am

I like simple charts that convey ideas with maximum impact. The “lift chart,” well-known in direct marketing, is one of these. I recently found a good description of lift charts in the newly-published third edition of “Data Mining Techniques” by Gordon Linoff and Michael Berry. (Another good discussion can be found here.) I will show how you might use this tool to demonstrate the performance of a predictive model, and follow that with some ideas for even better uses.

Let’s say you’ve created a model which assigns scores to your constituents, ranking them by their likelihood to give, and that you’ve used it to segment your appeal — you more or less started soliciting at the top of the list and worked your way down, leaving your least-likely prospects till the end. I say “more or less,” because given other factors involved in segmentation, you may not have strictly ordered your appeal by propensity score. It doesn’t really matter.

So now the fiscal year is out, and you want to compare results obtained by using the propensity score against some sort of alternative way that you could have proceeded. The logical alternative is using no model at all. In fact, it’s more than that: the alternative is soliciting your prospects in perfectly random order.

The table below shows what that scenario looks like, using a Phonathon model as an example. The first column is the percentage of the entire prospect pool contacted so far. The second column is the cumulative number of Yes pledges received, expressed as a percentage of all the Yes pledges for the year.

In this “no model” scenario, once you’ve solicited 10% of your prospects (first column of the table), you’ve gotten 10% of all your gifts (second column of the table). At 20% of your prospects, you’ve gotten 20% of your gifts (cumulative). And so on, until calling all prospects yields 100% of all the gifts and pledges received.

I know it seems silly, but let’s chart it. The x-axis is the percentage of prospects who were attempted at least once, and the y-axis is the cumulative percentage of all gifts and pledges that came in by phone. The chart of expected results from random solicitation is as exciting as it sounds (i.e., not very). It’s a perfectly straight line, because just as in the table above, every percentage of the prospect pool yielded exactly the same percentage of the total number of gifts and pledges.

It’s a hypothetical (and artificial) scenario; the chart you create will look exactly the same as mine, regardless of the model.

Good so far?

The next step is to add another line to the chart, one that represents the results from the real solicitation, i.e. as guided by the predictive model scores. Our chart is created in Excel, so that is where we will prepare the underlying data. The first two columns of the table below are the deciles. What I’ve done is rank everyone who was contacted at least once by their raw score and chop that list up into deciles in my stats software. The top 10% of prospects, the ones who were called first, are in the top decile (10). And of course, each decile contains roughly the same number of prospects.

The remaining three columns show how I calculate the final result: The cumulative percentage of all Yes pledges that correspond with each decile. For example, by the time we reach Decile number 5, we have called 60% of all prospects (“5” is the sixth row down), and received 1,354 Yes pledges, which is 76.5% of all the Yes pledges received during the year.
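If you would rather build that decile table in code than in Excel or a stats package, here is a minimal sketch in Python with pandas. The file and column names (phonathon_results.csv, score, pledged_yes) are placeholders; it simply mirrors the ranking-and-chopping step described above.

```python
import pandas as pd

# Hypothetical extract: one row per prospect contacted at least once.
# 'score' is the raw model score; 'pledged_yes' is 1 for a Yes pledge, 0 otherwise.
calls = pd.read_csv("phonathon_results.csv")

# Decile 10 holds the top-scoring 10% of prospects (called first); decile 1 the bottom 10%.
calls["decile"] = pd.qcut(calls["score"].rank(method="first"), 10, labels=range(1, 11))

cum_yes = (
    calls.groupby("decile", observed=True)["pledged_yes"].sum()
    .sort_index(ascending=False)   # start at decile 10 and work downward
    .cumsum()                      # running total of Yes pledges
)
cum_yes_pct = 100 * cum_yes / calls["pledged_yes"].sum()  # cumulative % of all Yes pledges
```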

Now we have enough information to complete the table that we started at the beginning. The first column in the table below contains the values for the x-axis, and the other two columns are the y-axis values for our two lines — ‘called at random’ for the cumulative percentage of all Yes pledges in our hypothetical random calling (which we’ve seen already), and ‘called by score’ for the cumulative percentage of all Yes pledges in our actual score-driven calling:

The first data point will be at 0% on the x-axis, where of course both lines touch. They touch again at 100%, where calling ALL prospects returns 100% of the Yes pledges, regardless of the order in which prospects were called. The lift chart, created from the table, looks like this:

At the 10% mark, about twice as many of the pledges came in for scored prospects as for prospects contacted at random. That difference, a factor of 2.18, is called “lift”. (Thus, “lift chart.”) When we penetrated to 20% of the prospect sample, we continued to get twice the yield of the random line (lift = 2.07). Therefore the point of maximum lift is somewhere between the first two deciles. After that, the line begins to flatten, and the relative advantage of scoring vs. random calling begins to diminish.
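For anyone recreating the chart and the lift figures outside of Excel, the sketch above can be carried one step further; matplotlib is assumed, and the shape of the curve will of course depend on your own model rather than the numbers quoted here.

```python
import matplotlib.pyplot as plt

pct_called = list(range(0, 101, 10))        # x-axis: 0%, 10%, ..., 100% of prospects called
called_at_random = pct_called               # the straight hypothetical baseline
called_by_score = [0] + list(cum_yes_pct)   # cumulative % of Yes pledges, from the decile table

plt.plot(pct_called, called_at_random, linestyle="--", label="called at random")
plt.plot(pct_called, called_by_score, label="called by score")
plt.xlabel("% of prospects called")
plt.ylabel("cumulative % of Yes pledges")
plt.legend()
plt.title("Lift chart")
plt.show()

# Lift at each depth is the score-driven capture divided by the random baseline;
# the first value corresponds to the 10% mark, the second to the 20% mark, and so on.
lift = [by_score / baseline
        for by_score, baseline in zip(called_by_score[1:], called_at_random[1:])]
```

In the example described in this post, the first two of those ratios are the 2.18 and 2.07 quoted above.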

Every once in a while, the question comes up about how to measure the positive effect of employing predictive modeling in a fundraising appeal. I don’t think this question can be answered definitively, but a lift chart is a good thing to have on hand. It’s not difficult to create, and easy to explain to someone else.

On the other hand, the message it conveys is nothing more profound than, “Using the predictive model worked better than soliciting prospects at random.” Although that might be just the thing your boss needs to see with her own eyes, that’s not a very exciting conclusion for those of us who make and use predictive models.

I think lift charts can be used for more than just that. In my example I used only one model, comparing its performance against using no model at all — it would be far more enlightening (and realistic) to compare the performance against at least one alternative model.

What we would like to see in a successful model is a “called by score” line that shoots upward at a steep angle and begins flattening only after reaching a high percentage of our goal. That would indicate that our “Yes” pledgers are concentrated in the upper scoring levels of our model. Does the chart above show a good model, or a mediocre one? Every application is different, and it’s hard to say what constitutes “good.” Without a second model to compare, it’s anyone’s call.

As well, my example demonstrates an “after-the-fact” analysis. A lift chart to compare two models before deployment would be most helpful, using the results of your holdout sample to judge which model produces the best lift curve.
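As a rough sketch of what that before-deployment comparison might look like in code (the holdout DataFrame and the column names score_a, score_b and pledged_yes are hypothetical):

```python
import pandas as pd

def capture_curve(holdout: pd.DataFrame, score_col: str) -> pd.Series:
    """Cumulative % of Yes pledges captured when calling in descending score order."""
    ordered = holdout.sort_values(score_col, ascending=False).reset_index(drop=True)
    return 100 * ordered["pledged_yes"].cumsum() / ordered["pledged_yes"].sum()

# curve_a = capture_curve(holdout, "score_a")   # candidate model A
# curve_b = capture_curve(holdout, "score_b")   # candidate model B
# Plotted against the % of the holdout called, the curve that climbs faster
# belongs to the model that concentrates Yes pledges higher up the list.
```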

And finally, the Phonathon lift chart shows results for both renewed donors and never-donors. It would be far more interesting to see those groups charted separately, with at least two competing models available for comparison in each chart.

28 December 2010

How you can make a difference, right now, where you are, and online

Filed under: Uncategorized — kevinmacdonell @ 12:23 pm

It’s been a remarkable year for my wife and me, and for CoolData. This blog, just over a year old, has been a point of connection for me and others sharing thoughts about new data-oriented tools for nonprofit fundraising. It’s been a lot of fun.

I have something to ask you.

Please consider making one final donation before the clock runs out on December 31. I know there’s a cause out there that holds meaning for you, and for which your online gift will make a much-needed difference.

If you don’t already have a specific charity in mind, I urge you to give a gift to the CanAssist African Relief Trust. CanAssist is a registered charity that assists impoverished communities by providing funding for small, sustainable, capital projects related to health, education, water and sanitation so as to improve the quality of life and health and raise the standard of living.

On the CanAssist webpage, scroll down a little bit and click on the CanadaHelps.org button to donate online right now. In minutes you’ll have your charitable tax receipt for 2010 (good for both Canadian AND American donors) emailed to you. It doesn’t get any easier.

Since its inception in April 2008, this small, local charity has transferred $112,500 to projects in East Africa. In addition to its other current partner projects, CanAssist African Relief has a couple of new projects they are developing that involve a rural clinic in northwest Uganda and an adult literacy centre in Kenya.

Through the generosity of a donor, who has covered all of the charity’s administrative costs, 100% of your donation goes to support one of CanAssist’s projects.

Why this charity? It’s a personal connection. My sister-in-law Suzanne and her partner Virginia live in Kingston, Ontario, where two local doctors founded CanAssist. In February Sue and Ginny will be doing some volunteer work with a school and women’s group in Kenya on the shores of Lake Victoria. There is a research station there where they can live for two weeks or so while they see what can be done.

Sue and Ginny say they’re really impressed by the work CanAssist does, supporting the development goals of people in the communities they work in. That’s good enough for me.

To find out more about CanAssist and to make your donation, visit their website.

If CanAssist doesn’t interest you, then take a moment to consider what charity YOU have a personal connection with, besides the one you work for. I think it’s exciting that technology allows any charity, large or small, to connect with people who are in a giving frame of mind in these key final hours of the year. Somewhere out there is a charity just for you, whether it’s doing work in your neighbourhood or in a distant village you will never see.

Choose wisely, but do choose. And give generously!

Happy New Year to you.

19 August 2010

CoolData in the summer – 3

Filed under: Uncategorized — kevinmacdonell @ 9:29 am

Back in December I wrote about scholarship/bursary recipients and alumni giving — are alumni who received financial assistance as students more likely to give? This was a side-topic in my presentation on “using survey data in models” at the APRA Data Analytics Symposium in July. It came up in connection with surveys because the university I used to work for did not have historical data on who received assistance or awards. In retrospect, knowing that survey respondents are unreliable reporters of fact, I should have been skeptical of the finding that 43% of alumni actually received a scholarship or bursary. However, although it does not necessarily provide factual information, survey data CAN reveal an aspect of attitude that might be correlated with giving. I say “might be” — you need to read the original post for the answer to my question. Follow the link above!

During the month of August I will be posting new material slightly less often, but calling attention to previously-posted material that had lower readership because the blog was still very new.

17 August 2010

Introducing the “Guide to CoolData”

Filed under: Uncategorized — kevinmacdonell @ 7:32 am

A blog is great for communicating ideas, but it is a very loose collection of ideas. They come at you every week, in bite-sized pieces. That’s how blogs work. But sometimes you need to know how all these disparate pieces fit together. (And they DO fit together.) Guide to CoolData is a new page that gathers my blog posts into a more coherent and logical order. Not all of my posts are there yet, but this will become more complete over time and, I hope, make CoolData more useful as a reference. In future, you can find the Guide in the “Pages” menu on the right-hand side.

3 August 2010

CoolData in the summer – 2

Filed under: Uncategorized — kevinmacdonell @ 7:55 am

Does your data include a lot of Canadians? One of my earliest posts showed you how to create an indicator variable to distinguish rural dwellers from people who live in an urban area or small town. See Rural vs. urban postal codes. I have sometimes found a significant difference in giving levels between the two groups.
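If you want a quick way to create that indicator variable, here is a minimal sketch in Python with pandas. It relies on the Canada Post convention that a zero in the second position of the postal code marks a rural area; the file and column names are hypothetical.

```python
import pandas as pd

alumni = pd.read_csv("alumni.csv")  # hypothetical extract with a 'postal_code' column

# Rural Canadian postal codes have a zero as the second character.
# Flag rural dwellers with 1; urban and small-town dwellers (and blanks) get 0.
alumni["is_rural"] = (
    alumni["postal_code"].str.strip().str.upper().str[1].eq("0").astype(int)
)
```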

During the month of August I will be posting new material slightly less often, but calling attention to previously-posted material that had low readership because the blog was still very new.
