CoolData blog

18 February 2014

Save our planet

Filed under: Annual Giving, Why predictive modeling? — kevinmacdonell @ 9:09 pm

You’ve seen those little signs — they’re in every hotel room these days. “Dear Guest,” they say, “Bed sheets that are washed daily in thousands of hotels around the world use millions of gallons of water and a lot of detergent.” The card then goes on to urge you to give some indication that you don’t want your bedding or towels taken away to be laundered.

Presumably millions of small gestures by hotel guests have by now added up to a staggering amount of savings in water, energy and detergent.

It reminds me of what predictive analytics does for a mass-contact area of operation such as Annual Giving. If we all trimmed the number of acquisition contacts we make — expending the same amount of effort, but only on the people with the highest propensity to give, or likelihood to pick up the phone, or greatest chance of opening our email, or what-have-you — we’d be doing our bit to collectively conserve a whole lot of human energy, and not a few trees.

With many advancement leaders questioning whether they can continue to justify an expensive Phonathon program that is losing more ground every year, getting serious about focusing resources might just be the saviour of a key acquisition program, to boot.

18 April 2013

A response to ‘What do we do about Phonathon?’

I had a thoughtful response to my blog post from earlier this week (What do we do about Phonathon?) from Paul Fleming, Database Manager at Walnut Hill School for the Arts in Natick, Massachusetts, about half an hour from downtown Boston. With Paul’s permission, I will quote from his email, and then offer my comments afterward:

I just wanted to share with you some of my experiences with Phonathon. I am the database manager of a 5-person Development department at a wonderful boarding high school called the Walnut Hill School for the Arts. Since we are a very small office, I have also been able to take on the role of the organizer of our Phonathon. It’s only been natural for me to combine the two roles and analyze the worth of this event, and I’m happy to say that, for our own school, this event is amazingly worthwhile.

First of all, as far as cost vs. gain, this is one of the cheapest appeals we have. Our Phonathon callers are volunteer students who are making calls either because they have a strong interest in helping their school, or they want to be fed pizza instead of dining hall food (pizza: our biggest expense). This year we called 4 nights in the fall and 4 nights in the spring. So while it is an amazing source of stress during that week, there aren’t a ton of man-hours put into this event other than that. We still mail letters to a large portion of our alumni base a few times a year. Many of these alumni are long-shots who would not give in response to a mass appeal, but our team feels that the importance of the touch point outweighs the short-term inefficiencies that are inherent in this type of outreach.

Secondly, I have taken the time to prioritize each of the people who are selected to receive phone calls. As you stated in your article, I use things like recency and frequency of gifts, as well as other factors such as event participation or whether we have other details about their personal life (job info, etc). We do call a great deal of lapsed or nondonors, but if we find ourselves spread too thin, we make sure to use our time appropriately to maximize effectiveness with the time we have. Our school has roughly 4,400 living alumni, and we graduate about 100 wonderful, talented students a year. This season we were able to attempt phone calls to about 1,200 alumni in our 4 nights of calling. The higher-priority people received up to 3 phone calls, and the lower-priority people received just 1-2.

Lastly, I was lucky enough to start working at my job in a year in which there was no Phonathon. This gave me an amazing opportunity to test the idea that our missing donors would give through other avenues if they had no other way to do so. We did a great deal of mass appeals, indirect appeals (alumni magazine and e-newsletters), and as many personalized emails and phone calls as we could handle in our 5-person team. Here are the most basic of our findings:

In FY11 (our only non-Phonathon year), 12% of our donors were repeat donors. We reached about 11% participation, our lowest ever. In FY12 (the year Phonathon returned):

  • 27% of our donors were new/recovered donors, a 14% increase from the previous year.
  • We reached 14% overall alumni participation.
  • Of the 27% of donors who were considered new/recovered, 44% gave through Phonathon.
  • The total amount of donors we had gained from FY11 to FY12 was about the same number of people who gave through the Phonathon.
  • In FY13 (still in progress, so we’ll see how this actually plays out), 35% of the previously-recovered donors who gave again did so in response to less work-intensive mass mailing appeals, showing that some of these Phonathon donors can, in fact, be converted and (hopefully) cultivated long-term.

In general, I think your article was right on point. Large universities with a for-pay, ongoing Phonathon program should take a look and see whether their efforts should be spent elsewhere. I just wanted to share with you my successes here and the ways in which our school has been able to maintain a legitimate, cost-effective way to increase our participation rate and maintain the quality of our alumni database.

Paul’s description of his program reminds me that there are plenty of institutions out there that don’t have big, automated, data-intensive calling programs gobbling up money. What really gets my attention is that Walnut Hill uses alumni affinity factors (event attendance, employment info) to prioritize calling, getting the job done on a tight schedule and with a minimum of expense. This small-scale data mining effort is an example for the rest of us who have a lot of inefficiency in our programs due to a lack of focus.
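For anyone wanting to try something similar, here is a minimal sketch, in Python with pandas, of how recency, frequency, and affinity flags might be rolled into a call-priority score. The file name, column names, and weights are mine, purely for illustration; they are not Walnut Hill’s actual data or method.

```python
import pandas as pd

# Hypothetical extract of callable alumni -- column names are illustrative only.
alumni = pd.read_csv("callable_alumni.csv")

CURRENT_FY = 2013  # placeholder fiscal year

# Simple flags for recency, frequency, and affinity; the weights below are
# arbitrary starting points and should be tuned against your own results.
alumni["recent_donor"] = (alumni["last_gift_fy"] >= CURRENT_FY - 2).astype(int)
alumni["frequent_donor"] = (alumni["gifts_last_5_fy"] >= 3).astype(int)
alumni["event_attendee"] = alumni["has_event_attendance"].astype(int)
alumni["job_info_on_file"] = alumni["has_employment_info"].astype(int)

alumni["call_priority"] = (
    2 * alumni["recent_donor"]
    + alumni["frequent_donor"]
    + alumni["event_attendee"]
    + alumni["job_info_on_file"]
)

# Sort so the highest-priority alumni are attempted first (and most often).
call_list = alumni.sort_values("call_priority", ascending=False)
print(call_list[["alum_id", "call_priority"]].head(20))
```

Even a crude score like this is enough to decide which alumni get three call attempts and which get only one.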

The first predictive models I ever created were for a relatively small university Phonathon that was run with printed prospect cards and manual dialing — a very successful program, I might add. For those of you at smaller institutions wondering if data mining is possible only with massive databases, the answer is NO.

And finally, how wonderful it is that Walnut Hill can quantify exactly what Phonathon contributes in terms of new donors, and new donors who convert to mail-responsive renewals.

Bravo!

13 November 2012

Making a case for modeling

Guest post by Peter Wylie and John Sammis

(Click here to download post as a print-friendly PDF: Making a Case for Modeling – Wylie Sammis)

Before you wade too far into this piece, let’s be sure we’re talking to the right person. Here are some assumptions we’re making about you:

  • You work in higher education advancement and are interested in analytics. However, you’re not a sophisticated stats person who throws around terms like regression and cluster analysis and neural networks.
  • You’re convinced that your alumni database (we’ll leave “parents” and “friends” for a future paper) holds a great deal of information that can be used to pick out the best folks to appeal to — whether by mail, email, phone, or face-to-face visits.
  • Your boss and your boss’s bosses are, at best, less convinced than you are about this notion. At worst, they have no real grasp of what analytics (data mining and predictive modeling) are. And they may seem particularly susceptible to sales pitches from vendors offering expensive products and services for using your data – products and services you feel might cause more problems than they will solve.
  • You’d like to find a way to bring these “boss” folks around to your way of thinking, or at least move them in the “right” direction.

If we’ve made some accurate assumptions here, great. If we haven’t, we’d still like you to keep reading. But if you want to slip out the back of the seminar room, not to worry. We’ve done it ourselves more times than you can count.

Okay, here’s something you can try:

1. Divide the alums at your school into ten roughly equal-size groups (deciles) by class year. Table 1 is an example from a medium-sized four-year college.

Table 1: Class Years and Counts for Ten Roughly Equal Size Groups (Deciles) of Alumni at School A

2. Create a very simple score:

EMAIL LISTED(1/0) + HOME PHONE LISTED(1/0)

This score can assume three values: “0”, “1”, or “2.” A “0” means the alum has neither an email nor a home phone listed in the database. A “1” means the alum has either an email or a home phone listed in the database, but not both. A “2” means the alum has both an email and a home phone listed in the database.

3. Create a table that contains the percentage of alums who have contributed at least $1,000 lifetime to your school, for each score level within each class year decile. Table 2 is an example of such a table for School A.

Table 2: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School A


4. Create a three-dimensional chart that conveys the same information contained in the table. Figure 1 is an example of such a chart for School A.
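If you want to reproduce the table and chart for your own data, here is a minimal sketch in Python with pandas. It assumes an alumni extract with class_year, email, home_phone, and lifetime_giving columns; the file name and column names are ours for illustration, not anything prescribed by the method itself.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical alumni extract -- adjust the file and column names to your own database.
alums = pd.read_csv("alumni_extract.csv")

# Step 1: ten roughly equal-size class year groups (deciles), decile 1 = oldest alums.
# Ranking first avoids binning problems when many alums share the same class year.
alums["cy_decile"] = pd.qcut(
    alums["class_year"].rank(method="first"), 10, labels=list(range(1, 11))
)

# Step 2: the simple score -- EMAIL LISTED (1/0) + HOME PHONE LISTED (1/0).
alums["score"] = (
    alums["email"].notna().astype(int) + alums["home_phone"].notna().astype(int)
)

# Step 3: percentage of alums at each score level, within each decile,
# who have given at least $1,000 lifetime.
alums["gave_1k"] = alums["lifetime_giving"] >= 1000
table = (
    alums.pivot_table(index="cy_decile", columns="score", values="gave_1k", aggfunc="mean")
    .mul(100)
    .round(1)
)
print(table)

# Step 4: chart the same information (grouped bars stand in for the 3-D chart).
table.plot(kind="bar", xlabel="Class year decile", ylabel="% with $1,000+ lifetime giving")
plt.show()
```

A grouped bar chart carries the same information as the three-dimensional chart; use whichever your audience reads more easily.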

In the rest of this piece we’ll be showing tables and charts from seven other very diverse schools that look quite similar to the ones you’ve just seen. At the end, we’ll step back and talk about the importance of what emerges from these charts. We’ll also offer advice on how to explain your own tables and charts to colleagues and bosses.

If you think the above table and chart are clear, go ahead and start browsing through what we’ve laid out for the other seven schools. However, if you’re not completely sure you understand the table and the chart, see if the following hypothetical questions and answers help:

Question: “Okay, I’m looking at Table 2 where it shows 53% for alums in Decile 1 who have a score of 2. Could you just clarify what that means?”

Answer. “That means that 53% of the oldest alums at the school who have both a home phone and an email listed in the database have given at least $1,000 lifetime to the school.”

Question. “Then … that means if I look to the far left in that same row where it shows 29% … that means that 29% of the oldest alums at the school who have neither a home phone nor an email listed in the database have given at least $1,000 lifetime to the school?”

Answer. “Exactly.”

Question. “So those older alums who have a score of 2 are way better givers than those older alums who have a score of 0?”

Answer. “That’s how we see it.”

Question. “I notice that in the younger deciles, regardless of the score, there are a lot of 0 percentages or very low percentages. What’s going on there?”

Answer. “Two things. One, most younger alums don’t have the wherewithal to make big gifts. They need years, sometimes many years, to get their financial legs under them. The second thing? Over the last seven years or so, we’ve looked at the lifetime giving rates of hundreds and hundreds of four-year higher education institutions. The news is not good. In many of them, well over half of the solicitable alums have never given their alma maters a penny.”

Question. “So, maybe for my school, it might be good to lower that giving amount to something like ‘has given at least $500 lifetime’ rather than $1,000 lifetime?”

Answer. Absolutely. There’s nothing sacrosanct about the thousand dollar level that we chose for this piece. You can certainly lower the amount, but you can also raise the amount. In fact, if you told us you were going to try several different amounts, we’d say, “Fantastic!”

Okay, let’s go ahead and have you browse through the rest of the tables and charts for the seven schools we mentioned earlier. Then you can compare your thoughts on what you’ve seen with what we think is going on here.

(Note: After looking at a few of the tables and charts, you may find yourself saying, “Okay, guys. Think I got the idea here.” If so, go ahead and fast forward to our comments.)

Table 3: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School B


Table 4: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School C

Table 5: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School D

Table 6: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School E

Table 7: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School F

Table 8: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School G

Table 9: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School H

Definitely a lot of tables and charts. Here’s what we see in them:

  • We’ve gone through the material you’ve just seen many times. Our eyes have always been drawn to the charts; we use the tables for back-up. Even though we’re data geeks, we almost always find charts more compelling than tables. That is most certainly the case here.
  • We find the patterns in the charts across the seven schools remarkably similar. (We could have included examples from scores of other schools. The patterns would have looked the same.)
  • The schools differ markedly in terms of giving levels. For example, the alums in School C are clearly quite generous in contrast to the alums in School F. (Compare Figure 3 with Figure 6.)
  • We’ve never seen an exception to one of the obvious patterns we see in these data: The longer alums have been out of school, the more money they have given to their school.
  • The “time out of school” pattern notwithstanding, we continue to be taken by the huge differences in giving levels (especially among older alums) across the levels of a very simple score. School G is a prime example. Look at Figure 7 and look at Table 8. Any way you look at these data, it’s obvious that alums who have even a score of “1” (either a home phone listed or an email listed, but not both) are far better givers than alums who have neither listed.

Now we’d like to deal with an often advanced argument against what you see here. It’s not at all uncommon for us to hear skeptics say: “Well, of course alumni on whom we have more personal information are going to be better givers. In fact we often get that information when they make a gift. You could even say that amount of giving and amount of personal information are pretty much the same thing.”

We disagree for at least two reasons:

Amount of personal information and giving in any alumni database are never the same thing. If you have doubts about our assertion, the best way to dispel those doubts is to look in your own alumni database. Create the same simple score we have for this piece. Then look at the percentage of alums for each of the three levels of the score. You will find plenty of alums who have a score of 0 who have given you something, and you will find plenty of alums with a score of 2 who have given you nothing at all.
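If you want to run that check, a simple cross-tabulation of the score against “has ever given” will do it. Again, this is only a sketch with made-up file and column names:

```python
import pandas as pd

# Hypothetical extract, as in the earlier sketch; column names are illustrative.
alums = pd.read_csv("alumni_extract.csv")

alums["score"] = (
    alums["email"].notna().astype(int) + alums["home_phone"].notna().astype(int)
)
alums["ever_gave"] = alums["lifetime_giving"] > 0

# Percent of alums at each score level who have (True) and have not (False) ever given.
# If contact info and giving really were "the same thing," every score-0 alum would sit
# in the False column and every score-2 alum in the True column. In practice you will
# see plenty of exceptions in both directions.
print(
    pd.crosstab(alums["score"], alums["ever_gave"], normalize="index").mul(100).round(1)
)
```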

We have yet to encounter a school where the IT folks can definitively say how an email address or a home phone number got into the database for every alum. Why is that the case? Because email addresses and home phone numbers find their way into an alumni database in a variety of ways. Yes, sometimes they are provided by the alum when he or she makes a gift. But there are other ways. To name a few:

  • Alums (givers or not) can provide that information when they respond to surveys or requests for information to update directories.
  • There are forms that alums fill out when they attend a school-sponsored event that ask for this kind of information.
  • There are vendors who supply this kind of information.

Now here’s the kicker. Your reactions to everything you’ve seen in this piece are critical. If you’re going to go to a skeptical boss to try to make a case for scouring your alumni database for new candidates for major giving, we think you need to have several reactions to what we’ve laid out here:

1. “WOW!” Not, “Oh, that’s interesting.” It’s gotta be, “WOW!” Trust us on this one.

2. You have to be champing at the bit to create the same kinds of tables and charts that you’ve seen here for your own data.

3. You have to look at Table 2 (that we’ve recreated below) and imagine it represents your own data.

Table 2: Percentage of Alumni at Each Simple Score Level at Each Class Year Decile Who Have Contributed at Least $1,000 Lifetime to School A

Then you have to start saying things like:

“Okay, I’m looking at the third class year decile. These are alums who graduated between 1977 and 1983. Twenty-five percent of them with a score of 2 have given us at least $1,000 lifetime. But what about the 75% who haven’t yet reached that level? Aren’t they going to be much better bets for bigger giving than the 94% of those with a score of 0 who haven’t yet reached the $1,000 level?”

“A score that goes from 0 to 2? Really? What about a much more sophisticated score that’s based on lots more information than just email listed and home phone listed? Wouldn’t it make sense to build a score like that and look at the giving levels for that more sophisticated score across the class year deciles?”

If your reactions have been similar to the ones we’ve just presented, you’re probably getting very close to trying to make your case to the higher-ups. Of course, how you make that case will depend on who you’ll be talking to, who you are, and situational factors that you’re aware of and we’re not. But here are a few general suggestions:

Your first step should be making up the tables and charts for your own data. Maybe you have the skills to do this on your own. If not, find a technical person to do it for you. In addition to having the right skills, this person should think the project is cool and shouldn’t take forever to finish it.

Choose the right person to show our stuff and your stuff to. More and more we’re hearing people in advancement say, “We just got a new VP who really believes in analytics. We think she may be really receptive to this kind of approach.” Obviously, that’s the kind of person you want to approach. If you have a stodgy boss in between you and that VP, find a way around your boss. There are lots of ways to do that.

Do what mystery writers do: use the weapon of surprise. Whichever boss you go to, we’d recommend showing them this piece first. After you know they’ve read it, ask them what they thought of it. If they say anything remotely similar to “I wonder what our data looks like,” you say, “Funny you should ask.”

Whatever your reactions to this piece have been, we’d love to hear them.

1 February 2012

Where’s your institution on the Culture of Analytics Ladder?

Filed under: Fun, predictive analytics, Why predictive modeling? — kevinmacdonell @ 2:21 pm

I’m lying on the couch with a bad head cold, and there’s a mix of snow and rain in the forecast. Time to curl up with my laptop and a cup of tea. I’ve got a question for you!

Not long ago I asked you to give me examples of institutions you’re aware of that are shining examples of institution-wide data-driven decision making. I was grateful for the responses, but no single institution was named twice. A few people offered an opinion about how their own organizations stack up, which I found interesting.

So let’s explore that a bit more with a quick and anonymous poll: Where do you think your non-profit organization or institution fits on the Culture of Analytics Ladder? (That’s CoAL for short … but no need to be so formal. I totally made this up under the influence of cold medication.) Don’t overthink it. Just pick whatever stage you feel your org or institution occupies.

The categories may seem a bit vague. If it’s any help, by “analysis” or “analytics” I am referring to the process of sifting through large quantities of data in search of patterns that lead to insights, primarily about your constituents. I am NOT referring to reporting. In fact I want you to ignore a lot of the day-to-day processes that involve data but are not really “analysis,” including: data entry, gift accounting, appeal segmentation, reporting on historical results, preparation of financials, and so on.

I am thinking more along the lines of modelling for the prediction of behaviours (which group of constituents is most likely to engage in such-and-so a behaviour?), prediction of future results (i.e., forecasting), open-ended exploration of constituent data in search of “clusters”, and any other variety of data work that would be called on to make a decision about what to do in the future, as opposed to documenting what happened in the past. I am uncertain whether A/B split testing fits my definition of analysis, but let’s be generous and say that it does.

A couple of other pointers:

  • If you work for, say, a large university advancement department and aren’t sure whether analytics is used in other departments such as student admissions or recruitment, then answer just for your department. Same thing if you work for a regional office of a large non-profit and aren’t sure about the big picture.
  • If you have little or no in-house expertise, but occasionally hire a vendor to produce predictive modelling scores, then you might answer “6” — but only if those scores are actually being well used.

Here we go.

17 October 2011

À la recherche du alumni engagement perdu

Filed under: Alumni, Why predictive modeling? — kevinmacdonell @ 8:56 am

Have you read any Proust? His voluminous novel contains many unsentimental thoughts about friendship and love. Among them is the idea that the opposite of love is not hate. The opposite of love is indifference.

The constituents in your database who are not engaged are not the ones who write nasty letters. They’re not the ones who give you a big thumbs-down on your survey. They’re not the ones who criticize the food at your gala dinner. They’re not the ones who tell your phone campaign callers never to call again.

Nope. Your non-engaged constituents are the ones you never hear from. The ones who chuck out your mailings unopened. The ones who ignore the invitation to participate in a survey. The ones who have never attended an event. The ones who never answer the phone.

If your school or organization is typical, a good-sized portion of your database falls into this category. There’s money in identifying who is truly not engaged, and therefore not worth wasting resources on.

The ones who are moved to criticize you, the ones who have opinions about you, the ones who want to be contacted only a certain way — ah, they’re different.

The future belongs to those who can tell the difference.

6 October 2011

The emerging role of the (fundraising) analyst

Filed under: Data, skeptics, Why predictive modeling? — kevinmacdonell @ 12:44 pm

Effective fundraisers tell stories. When we communicate with prospective donors, we do well to evoke feelings and emotions, and go light on the facts. We may attempt to persuade with numbers and charts, but that will never work as well as one true and powerful story, conveyed in word and image.

But what about the stories we tell to ourselves? Humans need narratives to make sense of the world, but our inborn urge to order events as “this happened, then that happened” leads us into all kinds of unsupported or erroneous assumptions, often related to causation.

How many times have you heard assertions such as, “The way to reach young alumni donors is online, because that’s where they spend all their time”? Or, “We shouldn’t ask young alumni to give more than $20, because they have big student loans to pay.” Or, “There’s no need to look beyond loyal donors to find the best prospects for Planned Giving.” Or, “We should stop calling people for donations, because focus groups say they don’t like to get those calls.”

Such mini-narratives are all around us and they beguile us into believing them. Who knows whether they’re true or not? They might make intuitive sense, or they’re told to us by people with experience. Experts tell us stories like this. National donor surveys and reports on philanthropic trends tell stories, too. And we act on them, not because we know they’re true, but because we believe them.

Strictly speaking, none of them can be “true” in the sense that they apply everywhere and at all times. Making assertions about causation in connection with complex human behaviours such as philanthropy is suspect right from the start. Even when there is some truth, whose truth is it? Trend-watchers and experts who know nothing about your donors are going to lead you astray with their suppositions.

I’m reminded of the scene in the movie Moneyball, now playing in theatres, in which one grizzled baseball scout says a certain player must lack confidence “because his girlfriend is ugly.” We can hope that most received wisdom about philanthropy is not as prodigiously stupid, but the logic should be familiar. Billy Beane, general manager of the Oakland A’s, needed a new way of doing things, and so do we.

The antidote to being led astray is learning what’s actually true about your own donors and your own constituency. It’s a new world, folks: We’ve got the tools and the smarts to put any assertion to the test, in the environment of our own data. The age of basing decisions on fact instead of supposition has arrived.

No doubt some feel threatened by that. I imagine a time when something like observation-driven, experimental medicine started to break onto the scene. Doctors treating mental illness by knocking holes in people’s skulls to let out the bad spirits must have resisted the tide. The witch-doctors, and the baseball scouts obsessed with ugly girlfriends, may have had a lot of experience, but does anyone miss them?

The role of the analyst is not to shut down our natural, story-telling selves. No. The role of the analyst is to treat every story as a hypothesis. Not in order to explode it necessarily, but to inject validity, context, and relevance. The role of the analyst, in short, is to help us tell better and better stories.

+++++++++++++++++++++++++++++++++++++++++++++

This blog post is part of the Analytics Blogarama, in which bloggers writing on all aspects of the field offer their views on “The Emerging Role of the Analyst.” Follow the link (hosted by SmartData Collective) to read other viewpoints.
