CoolData blog

3 December 2009

Predictive models can’t do everything. (Duh.)

Filed under: Planned Giving, skeptics — kevinmacdonell @ 11:23 am

In the haystack of 8,000-plus individuals, where would you look first?


The purpose of predictive modeling in fundraising can be misunderstood. Some think it can do anything. Others are skeptics.


A discussion on the PROSPECT-DMM listserv some time ago about the use of predictive modeling for identifying Planned Giving prospects brought some of these views to the fore. One poster wrote that s/he was all for predictive modeling, but added, “unfortunately, modeling alone just cannot tell you everything you need and want to know.” The commentator cautioned readers that poor models can lead to poor fundraising strategies, and that we need to pay attention to “trends, experience, insights and perhaps even some common sense assumptions.”


(I am not identifying the poster, as s/he may not be getting a fair shake here.)


There was much in this person’s post that made sense. Some of us work in institutions where non-specialists overestimate the power of a model to “pick winners”. Their view of modeling requires a corrective. But it struck me as odd that these comments were directed at readers of PROSPECT-DMM, many of whom are experienced data analysts with a good sense of the limits and proper uses of their work.


This was my response:


With due respect, and conceding the core truth of these observations, I do find the DM list an interesting choice of venue for these warnings. Are there really that many of us who think that “models alone” can lead us to success? If so, then we need to heed these warnings.


But I doubt it.


My own institution is relatively small, yet our pool of potential Planned Giving prospects is huge relative to the two staff people who work in planned giving. Who among that undifferentiated pool of 8,000 people (just counting alumni) should they approach first?


What trend, observed by a consultant in some distant city, and derived from data in aggregate, do we choose to apply to our alumni, with whom WE have the relationship and a history of interaction, as recorded in our database?


Yes, intelligence from officers in the field trumps a probability model every time. But what field officer, or staff of field officers, has met with 8,000 alumni?


Absolutely, “modeling alone just cannot tell you everything you need and want to know.” But I don’t think I’ve ever heard anyone say that it does. What modeling can do, better than anything else, is make that first cut of most promising names, which must then be sifted, qualified, researched, visited, and strategized around.
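
To make that “first cut” concrete, here is a minimal sketch of the kind of ranking I mean. The file name, column names, and cutoff are purely hypothetical; the point is only that a model score lets you sort a pool of thousands and hand a manageable list to the gift officers.

    import pandas as pd

    # Hypothetical extract: one row per alumnus, with a planned-giving
    # likelihood score already produced by whatever model you have built.
    alumni = pd.read_csv("alumni_pg_scores.csv")  # columns: id, name, pg_score

    # The "first cut": rank the whole pool by score and keep the top slice
    # for qualification, research and, eventually, a personal visit.
    shortlist = alumni.sort_values("pg_score", ascending=False).head(200)
    shortlist.to_csv("pg_prospect_shortlist.csv", index=False)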


I think we should be especially wary of “common sense assumptions,” at least the ones that we can test in the environment of our own data. In discussions leading up to the creation of our planned-giving model, I heard a number of these assumptions, which purportedly came from experts in the planned giving field: Just pick your long-time donors, and that’s your prospect pool; RFM (recency, frequency, monetary value) is all you need; etc. By looking at the hard data in our database, I was able to demonstrate that, had we applied any of this well-meant advice faithfully from the outset, we would have failed to identify two-thirds of the PG commitments in place today.
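
For anyone who wants to run the same kind of check on their own data, here is a rough sketch of how one of those assumptions can be tested against the commitments already on file. The column names and the ten-year threshold are hypothetical; the logic is simply to apply the rule and then count what share of known commitments it would have caught.

    import pandas as pd

    # Hypothetical extract: one row per alumnus, flagging existing planned-giving
    # commitments and carrying a simple giving-history field.
    alumni = pd.read_csv("alumni_giving_history.csv")
    # columns: id, years_of_giving, has_pg_commitment (0 or 1)

    # The "common sense" rule: long-time donors are your prospect pool.
    rule_pool = alumni[alumni["years_of_giving"] >= 10]

    # How many of the commitments we actually hold would that rule have found?
    caught = rule_pool["has_pg_commitment"].sum()
    total = alumni["has_pg_commitment"].sum()
    print(f"The rule would have identified {caught} of {total} known commitments "
          f"({caught / total:.0%}).")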


Some assumptions come with a high price tag!


I don’t disagree with [commentator] in the main; we may differ only in emphasis. My view is this: Where the numbers and the stats leave off, the art of the personal begins – that’s the proper order. It is true that no model can, as s/he puts it well, “capture that emotional and motivating element.” But no amount of experience, insight, relationships or knowledge of trends will allow two gift officers to make a personal connection with 8,000 people.


My models offer better than a fighting chance that the person my PGO is calling on will desire that personal connection. After that, I step aside and the slide rule is put away. The rest is all about human connection, all about Art.



