CoolData blog

1 February 2012

Where’s your institution on the Culture of Analytics Ladder?

Filed under: Fun, predictive analytics, Why predictive modeling? — kevinmacdonell @ 2:21 pm

I’m lying on the couch with a bad head cold, and there’s a mix of snow and rain in the forecast. Time to curl up with my laptop and a cup of tea. I’ve got a question for you!

Not long ago I asked you to give me examples of institutions you’re aware of that are shining examples of institution-wide data-driven decision making. I was grateful for the responses, but no single institution was named twice. A few people offered an opinion about how their own organizations measure up, which I found interesting.

So let’s explore that a bit more with a quick and anonymous poll: Where do you think your non-profit organization or institution fits on the Culture of Analytics Ladder? (That’s CoAL for short … but no need to be so formal. I totally made this up under the influence of cold medication.) Don’t overthink it. Just pick whatever stage you feel your org or institution occupies.

The categories may seem a bit vague. If it’s any help, by “analysis” or “analytics” I am referring to the process of sifting through large quantities of data in search of patterns that lead to insights, primarily about your constituents. I am NOT referring to reporting. In fact I want you to ignore a lot of the day-to-day processes that involve data but are not really “analysis,” including: data entry, gift accounting, appeal segmentation, reporting on historical results, preparation of financials, and so on.

I am thinking more along the lines of modelling for the prediction of behaviours (which group of constituents are most likely to engage in such-and-such a behaviour?), prediction of future results (i.e., forecasting), open-ended exploration of constituent data in search of “clusters”, and any other variety of data work that would be called on to make a decision about what to do in the future, as opposed to documenting what happened in the past. I am uncertain whether A/B split testing fits my definition of analysis, but let’s be generous and say that it does.

A couple of other pointers:

  • If you work for, say, a large university advancement department and aren’t sure whether analytics is used in other departments such as student admissions or recruitment, then answer just for your department. Same thing if you work for a regional office of a large non-profit and aren’t sure about the big picture.
  • If you have little or no in-house expertise, but occasionally hire a vendor to produce predictive modelling scores, then you might answer “6” — but only if those scores are actually being well used.

Here we go.


17 October 2011

À la recherche du alumni engagement perdu

Filed under: Alumni, Why predictive modeling? — kevinmacdonell @ 8:56 am

Have you read any Proust? His voluminous novel contains many unsentimental thoughts about friendship and love. Among them is the idea that the opposite of love is not hate. The opposite of love is indifference.

The constituents in your database who are not engaged are not the ones who write nasty letters. They’re not the ones who give you a big thumbs-down on your survey. They’re not the ones who criticize the food at your gala dinner. They’re not the ones who tell your phone campaign callers never to call again.

Nope. Your non-engaged constituents are the ones you never hear from. The ones who chuck out your mailings unopened. The ones who ignore the invitation to participate in a survey. The ones who have never attended an event. The ones who never answer the phone.

If your school or organization is typical, a good-sized portion of your database falls into this category. There’s money in identifying who is truly not engaged, and therefore not worth wasting resources on.

The ones who are moved to criticize you, the ones who have opinions about you, the ones who want to be contacted only a certain way — ah, they’re different.

The future belongs to those who can tell the difference.

6 October 2011

The emerging role of the (fundraising) analyst

Filed under: Data, skeptics, Why predictive modeling? — kevinmacdonell @ 12:44 pm

Effective fundraisers tell stories. When we communicate with prospective donors, we do well to evoke feelings and emotions, and go light on the facts. We may attempt to persuade with numbers and charts, but that will never work as well as one true and powerful story, conveyed in word and image.

But what about the stories we tell to ourselves? Humans need narratives to make sense of the world, but our inborn urge to order events as “this happened, then that happened” leads us into all kinds of unsupported or erroneous assumptions, often related to causation.

How many times have you heard assertions such as, “The way to reach young alumni donors is online, because that’s where they spend all their time”? Or, “We shouldn’t ask young alumni to give more than $20, because they have big student loans to pay.” Or, “There’s no need to look beyond loyal donors to find the best prospects for Planned Giving.” Or, “We should stop calling people for donations, because focus groups say they don’t like to get those calls.”

Such mini-narratives are all around us and they beguile us into believing them. Who knows whether they’re true or not? They might make intuitive sense, or they’re told to us by people with experience. Experts tell us stories like this. National donor surveys and reports on philanthropic trends tell stories, too. And we act on them, not because we know they’re true, but because we believe them.

Strictly speaking, none of them can be “true” in the sense that they apply everywhere and at all times. Making assertions about causation in connection with complex human behaviours such as philanthropy is suspect right from the start. Even when there is some truth, whose truth is it? Trend-watchers and experts who know nothing about your donors are going to lead you astray with their suppositions.

I’m reminded of the scene in the movie Moneyball, now playing in theatres, in which one grizzled baseball scout says a certain player must lack confidence “because his girlfriend is ugly.” We can hope that most received wisdom about philanthropy is not as prodigiously stupid, but the logic should be familiar. Billy Beane, general manager of the Oakland A’s, needed a new way of doing things, and so do we.

The antidote to being led astray is learning what’s actually true about your own donors and your own constituency. It’s a new world, folks: We’ve got the tools and the smarts to put any assertion to the test, in the environment of our own data. The age of basing decisions on fact instead of supposition has arrived.

No doubt some feel threatened by that. I imagine a time when something like observation-driven, experimental medicine started to break on the scene. Doctors treating mental illness by knocking holes in people’s skulls to let out the bad spirits must have resisted the tide. The witch-doctors, and the baseball scouts obsessed with ugly girlfriends, may have had a lot of experience, but does anyone miss them?

The role of the analyst is not to shut down our natural, story-telling selves. No. The role of the analyst is to treat every story as a hypothesis. Not in order to explode it necessarily, but to inject validity, context, and relevance. The role of the analyst, in short, is to help us tell better and better stories.


This blog post is part of the Analytics Blogarama, in which bloggers writing on all aspects of the field offer their views on “The Emerging Role of the Analyst.” Follow the link (hosted by SmartData Collective) to read other viewpoints.

3 February 2011

Let’s do some scary work

Filed under: Training / Professional Development, Why predictive modeling? — kevinmacdonell @ 7:53 am

When I’m about to begin work on a new predictive model, I get a little scared. When I agree to speak to an audience with more brains and experience than I have (i.e., most any audience), I get a little scared. Heck, when I’m about to click the ‘Publish’ button on a new blog post, I’m a little scared.

But here’s what I’ve learned from working on unproven models, meeting and talking to smart people, and writing about stuff: When we feel a little bit scared, it means we’re probably onto something. It’s a signal to press on, not duck for cover.

Think about advanced data work. Here’s the thing: It’s totally optional. In fundraising or marketing or business, you can bull your way through without it. Can a fundraising program be successful without predictive modeling? Yes, it can.

I’ve heard it said that success in Annual Fund is based on making many tiny, incremental improvements over time. A gain of half a percent here, half a percent there will add up to substantial progress. These are little tweaks to familiar variables: the form and content of an appeal letter, the shape and size of an envelope, the choice of which Phonathon attempt to leave a message on, et cetera.

We fundraisers are, I think, cautious and conservative. We are very receptive to the idea that continuous progress is possible without having to learn anything new. I’m cautious and conservative myself, so I get it.

I just don’t find it very interesting.

We can probably do A/B testing forever, and we should. But at some point there will be a limit on returns. When steady application of the tired/tried-and-true fails to result in a year-over-year gain, we need to stop blaming external factors such as the economy or major world disasters (significant though they may be) and get serious about how we focus our efforts.

Judging from the questions I see on some of the listservs, fundraisers are being challenged to stretch. Some are responding by exploring boldly, most are just rearranging the known elements. Hardly ever does someone suggest doing serious work with data.

Working with predictive models is as iterative a process as any of the traditional stuff, once you’ve gotten started in that direction. Your models and predictions will get more focused year after year with incremental improvements in data collecting and analysis.

But getting started is not an increment. Getting started is something new, and starting something new is a little scary. Do it.

27 January 2011

RFM: Not a substitute for predictive modeling

Filed under: RFM, Why predictive modeling? — kevinmacdonell @ 9:06 am

Recency, Frequency, Monetary value. The calculation of scores based on these three transactional variables has a lot of sway over the minds of fundraisers, and I just don’t understand why.

It’s one of those concepts that never seems to go away. Everyone wants a simple way to whip up an RFM score. Yet anyone who can do a good job of RFM is probably capable of doing real predictive modeling. The persistence of RFM seems to rest on some misconceptions, which I want to address today.

First, people are under the impression that RFM is cutting-edge. It isn’t. In his book, “Fundraising Analytics: Using Data to Guide Strategy,” Joshua Birkholz points out that RFM is “one of the most common measures of customer value in corporate America.” It’s been around a long time. That alone doesn’t mean it’s invalid — it just isn’t anything new, even in fundraising.

Second, it’s often misconstrued as a predictive tool, and therefore the best way to segment a prospect pool. It’s not. As Josh makes clear in his book, RFM has always been a measure of customer value. It does not take personal affinities into account, nor any non-purchasing activities, he writes.

Note the language. RFM is borrowed from the for-profit world: retail and sales. Again, this doesn’t discredit it, but it does make it inappropriate as the sole or primary tool for prediction. Because it’s purely transactional in nature, all RFM can tell you is that donors will become donors. It CAN’T tell you which non-donors are most likely to be acquired as new donors. The RFM score for a non-donor is always ZERO.

It also can’t tell you which lapsed donors are most likely to be reactivated, or which donors are most likely to be upgraded to higher levels of giving. In the eyes of RFM, one person who gave $50 last year is exactly the same as any other person who gave $50 last year. They’re NOT.

Third, we’re often told that RFM is easy to do. RFM is easy to explain and understand. It’s not necessarily easy to do. Recency and Monetary Value are straightforward, but Frequency requires a number of steps and calculations, and you’re probably not going to do it in Excel. Josh himself says it’s easy to do, but the demonstration in his book requires SPSS. If you’re using a statistics software package such as SPSS and you’ve mastered Frequency, then true predictive modeling is easily within your grasp. Almost all the variables I use in my models are simpler to derive than Frequency.
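To make the point concrete, here is a minimal sketch of RFM computation in plain Python. The gift records and the Frequency definition (distinct giving years divided by years since first gift, one common “consistency” treatment) are my own illustrative assumptions, not Birkholz’s exact recipe; the point is that Frequency takes several steps while Recency and Monetary value fall out in one line each.

```python
from datetime import date

# Hypothetical gift records: (donor_id, gift_date, amount)
gifts = [
    ("A", date(2010, 11, 3), 50.0),
    ("A", date(2009, 12, 1), 40.0),
    ("B", date(2006, 5, 20), 500.0),
    ("C", date(2010, 6, 15), 50.0),
]

def rfm(gifts, as_of):
    """Return {donor_id: (recency_days, frequency, monetary)}.

    Frequency = distinct giving years / years since first gift --
    an illustrative assumption, one of several common treatments.
    """
    by_donor = {}
    for donor, d, amt in gifts:
        by_donor.setdefault(donor, []).append((d, amt))
    out = {}
    for donor, rows in by_donor.items():
        dates = [d for d, _ in rows]
        recency = (as_of - max(dates)).days          # days since last gift
        span_years = max(as_of.year - min(dates).year, 1)
        frequency = len({d.year for d in dates}) / span_years
        monetary = sum(amt for _, amt in rows)
        out[donor] = (recency, frequency, monetary)
    return out

scores = rfm(gifts, as_of=date(2011, 1, 1))
```

Notice that a non-donor never appears in `gifts` at all, so their RFM score is undefined — effectively zero, exactly the limitation described above.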

Is RFM useless? No, but we need to learn not to pick up a hammer when what we really need is a saw. RFM is for ranking existing donors according to their value to your organization, based on their past history of giving. Predictive modeling is for predicting (who knew?), and answering the three hard questions I listed above (acquisition, reactivation, upgrade potential).

You could, in fact, use both. Your predictive model might identify the top 10% of your donor constituency who are most likely to renew, while your RFM score set will inform you who in that top 10% have the highest potential value to your organization. A matrix with affinity (predictive model) on one axis and value on the other (RFM) would make a powerful tool for, say, segmenting an Annual Giving donor pool for Leadership giving potential. Just focus on the quadrant of donors who have high scores for both affinity and value.
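That affinity-by-value matrix can be sketched in a few lines. The quadrant names and the 75th-percentile cutoff here are purely illustrative assumptions; the inputs would come from your predictive model (affinity percentile) and your RFM score set (value percentile).

```python
def quadrant(affinity_pct, value_pct, cutoff=0.75):
    """Place a donor in an affinity x value quadrant.

    affinity_pct: percentile rank (0-1) from a predictive model.
    value_pct:    percentile rank (0-1) from RFM.
    cutoff and quadrant labels are illustrative, not standard terms.
    """
    hi_affinity = affinity_pct >= cutoff
    hi_value = value_pct >= cutoff
    if hi_affinity and hi_value:
        return "leadership prospects"   # focus Leadership asks here
    if hi_affinity:
        return "likely but low-value"   # renewal-level asks
    if hi_value:
        return "valuable but at risk"   # stewardship attention
    return "low priority"
```

In practice you would compute the two percentile ranks over the whole donor pool, then segment the Annual Giving file on the quadrant label.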

If you want to use RFM in that way (that is, properly), then fill your boots. I recommend Josh Birkholz’s book, because he lays it out very clearly.

The real danger in RFM is that it can become an excuse for not collecting more and better data.

For any institution of higher education, the idea that RFM is the bee’s knees is patently untrue. Institutions with alumni have a trove of variables that are informative about ALL of their constituency, not just those who happen to be donors. Expand that to arts-based nonprofits, and you’ll find member-based constituencies and the very same opportunities to model in a donor/non-donor environment. Neither of these types of institutions should be encouraged to rely exclusively on RFM.

For the rest, who don’t yet have such data on their constituency but could collect it, the idea that pure donor transaction data is all you need cuts off the chance of doing the work now — getting more sophisticated about collecting and organizing data — that will pay off in the years ahead.

28 March 2010

Get data-focused, or else?

Filed under: skeptics, Why predictive modeling? — kevinmacdonell @ 9:34 pm

I can see a day when data mining will no longer be optional. It will be something all nonprofits have to do – standard practice, part of our responsibility to donors and to the causes they support.

Promoting the smart use of data in fundraising faces some barriers in skills and priorities and culture, but sooner or later all nonprofits will have to work harder at leveraging the power in their databases. They might do it in-house or find the expertise elsewhere. But it will be the norm.

Later this morning I’ll be speaking to a room of fundraising professionals about data mining. A few in the room will have had some experience with data mining. Some won’t. And others will have a database that is in such rough shape that they’re not ready for it.

I’ll be keeping the tone light, and I’ll focus on what’s happening (or not happening) in my own workplace. I won’t presume to tell any of the hard-working people in the room what they ought to be doing. I won’t say that organizations that fail to make quality data collection and analysis a priority are guilty of negligence.

But I might think it.

If you don’t have a process in place to determine that a gift received this year came from someone who was also a donor last year (that is, you allow duplicate donor records to proliferate), you’re disconnected from who your real supporters are, and you’re wasting money. If you conduct surveys but do it anonymously, you’re throwing away the possibility of insight, and wasting money. If you host events but fail to track attendance in your database, you’re choosing to remain in the dark about where tomorrow’s gifts will come from, and you’re wasting money. If you segment prospect pools based solely on past giving, you exhaust existing best donors without breaking any new ground, and your unfocused approach wastes money.

Whose money? Donors’ money. Wasting donor dollars is no longer acceptable. I think donors will only get better at figuring out which charities are allowing fundraising costs to get out of control, which ones are diverting too much cash from their stated goals.

Nope, I won’t say it, but I might think it: Nonprofits that do not learn to use data will have data used against them.

