CoolData blog

30 May 2016

Donor volatility: testing years of non-giving as a predictor for the next big gift

Filed under: Annual Giving, Coolness — kevinmacdonell @ 5:02 am

Guest post by Jessica Kostuck, Data Analyst, Annual Giving, Queen’s University

 

During my first few weeks on the job, my AD set me up on several calls with colleagues in similar, data-driven roles, at universities across the country. One such call was with Kevin MacDonell, keeper of CoolData, with whom I had a delightfully geeked-out conversation about predictive modeling. We ran the gamut of weird and wonderful data points, ending on the concept of donor volatility.

 

When a lapsed high-end donor has no discernible annual giving pattern, is it possible to use their years of non-giving to predict and influence their next big gift?

 

Our goal for our Annual Giving program was to identify these “volatile” donors (lapsed high-end donors with an erratic giving history), and reactivate (ideally, upgrade) them, through a targeted solicitation with an aggressive ask string.

 

(For more on volatility, see Odd but true findings? Upgrading annual donors are “erratic” and “volatile”, which describes findings that suggest the best prospects for a big upgrade in giving are those who are “erratic”, i.e. have prior giving but are not loyal, every-year donors, and “volatile”, i.e. are inconsistent about the amounts they give.)

 

I did some stock market research (see footnote), decided on a minimum value for the entry point into our volatility matrix ($500), and, together with Senior Programmer Analyst Kim Wilkinson, got cracking on writing a program to identify volatile donors.

 

[Image: SQL clip from the volatility-identification program]

 

 

Our ideal volatile donors had given ≥ $500 at least twice in the last 10 years, without any consecutive (“stable”) periods. Year over year, our ideal volatile donor would act in one of three ways: increase their giving by at least 60%, decrease their giving by at least 60%, or not give at all. Given the capacity level displayed by these volatile donors, we replaced years of very low-end giving (<$99) with null values (“throwaway gifts”).

 

We had strict conditions for what would remove a donor from our table. If a donor had two years of consecutive giving within a ±60% differential from their previous highest giving point (v_value), we considered this a natural (or, at least, for this test, not sufficiently irregular) fluctuation in giving, and they were removed from the table. If the donor had two consecutive years of low-end (but not null) giving ($99-$499), this was considered a deliberate decrease, and they, too, were removed. Conversely, if a donor had two consecutive years of greatly increased giving, this was considered a deliberate increase, and they were also removed.
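
The admission and removal rules above can be sketched in code. The following is a simplified illustration, not the actual Queen's program (which was written in SQL): a gift of at least $500 establishes or re-establishes a v_value, a null year or a ±60% swing counts as a valid volatility point, and two consecutive "stable" years remove the donor from the table. The function name and the handling of low-end gifts are our own simplifying assumptions.

```python
THRESHOLD = 500   # minimum gift that establishes a v_value
THROWAWAY = 99    # gifts under $99 are treated as null ("throwaway gifts")
SWING = 0.60      # minimum year-over-year change that counts as volatile

def is_volatile(history):
    """Classify a donor's yearly giving (oldest year first) as volatile.

    Simplified sketch: a >= $500 gift (re)admits a donor by setting a
    v_value; each later year is a valid volatility point if the donor
    gave nothing (or a throwaway amount) or swung at least 60% against
    the v_value; two consecutive "stable" years remove the donor.
    """
    v_value = None     # highest qualifying gift on record
    has_point = False  # at least one valid volatility point since admission
    stable_run = 0     # consecutive years inside the +/-60% band
    for amount in history:
        amt = amount if amount is not None and amount >= THROWAWAY else None
        if v_value is None:
            # Not (or no longer) in the matrix: a qualifying gift readmits.
            if amt is not None and amt >= THRESHOLD:
                v_value, has_point, stable_run = amt, False, 0
            continue
        if amt is None or abs(amt - v_value) / v_value >= SWING:
            has_point = True   # null year or big swing: volatility point
            stable_run = 0
        else:
            stable_run += 1
            if stable_run >= 2:   # two stable years: removed from the table
                v_value, has_point = None, False
                continue
        if amt is not None and amt > v_value:
            v_value = amt         # v_value tracks the highest giving point
    return v_value is not None and has_point
```

For example, a donor giving $5,000, then nothing, then $500, then nothing for two years would classify as volatile, while a donor giving $1,000 three years running would be removed after the second stable year.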

 

At any point, a donor could be admitted, or readmitted into our volatility matrix, by establishing, or re-establishing, a v_value and subsequent valid volatility point.

 

The difference between a lapsed donor and a volatile donor

 

Below is a sample pool of donors we examined.

 

[Image: table of sample donor giving histories, Donors 1 through 3]

 

Donor 1 is volatile all the way through, with greatly varying levels of giving, culminating in two years of non-giving. Donor 1 is currently volatile, and thus enters our test group.

 

Donor 2 is volatile for two years, FY07-08 and FY08-09 (a v_value of $5,000 in FY07-08, followed by a valid volatility point in FY08-09 with a decrease of 80%), but is then removed from the table in FY09-10 with a decrease of only 50%. They do not establish a new v_value, even though their FY09-10 giving meets the minimum threshold for this test, because of their consecutive, only marginally decreased giving in FY10-11. This excludes Donor 2 from our test.

 

Donor 3 enters our volatility matrix in FY04-05, leaves in FY07-08, reenters in FY10-11, and maintains volatility to current day, and, thus, enters into our test solicitation.

 

While all three of these donors are lapsed, and all are SYBUNTs (donors who gave in Some Years But Unfortunately Not This year), only Donor 1 and Donor 3 are, by our definition, volatile.

 

Solicitation strategy and results

 

We now had a pool of constituents who were at least two years lapsed in giving, who all had a history of inconsistent, but not unsubstantial, contributions to the university. In an email solicitation, we presented constituents with both upgrade language and an aggressive ask matrix, beginning at a minimum of +60% of their highest ever v_value, regardless of where they were in the ebb and flow of their volatility cycle. Again, the goal of this test was to (1) identify donors with high capacity (2) whose giving to the university was erratic in frequency and loyalty and (3) encourage these donors to reactivate at greater than their previously-established high-end giving.
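
The ask matrix described above can be sketched as a short function. The +60% floor mirrors the volatility threshold from the post; the number of rungs in the string and the rounding rule are our own assumptions for illustration.

```python
def ask_string(v_value, rungs=3, bump=0.60):
    """Build an ask string starting at +60% of the donor's highest v_value.

    Each further rung applies the same proportional bump; amounts are
    rounded to the nearest $10 (an assumed house style, not from the post).
    """
    asks, amount = [], v_value
    for _ in range(rungs):
        amount *= 1 + bump
        asks.append(int(round(amount, -1)))
    return asks
```

A donor whose highest v_value was $500 would therefore see asks of $800, $1,280, and $2,050.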

 

In our results analysis, we broadened our examination to include any gifts received from our testing pool within the subsequent four weeks, not just gifts linked to this particular solicitation code, to verify the legitimacy of tagging these donors as volatile – that is, having a higher-than-average probability to reactivate at a high-end giving level.

 

An important part of our analysis included comparing our testing pool to a control pool, pairing each of our volatile donors with a non-volatile twin who shared as many points of fiscal and biographic information as was possible.
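
A minimal greedy version of that twin-matching step might look like the following. The feature names used here (class year, faculty, giving band) are hypothetical examples, not fields from the actual Queen's database, and a production version would need tie-breaking and sampling without replacement.

```python
def nearest_twin(volatile_donor, non_volatile_pool, features):
    """Pick the non-volatile donor sharing the most attribute values
    with a given volatile donor (simple count of exact matches)."""
    def overlap(candidate):
        return sum(volatile_donor[f] == candidate[f] for f in features)
    return max(non_volatile_pool, key=overlap)

donor = {"class_year": 1990, "faculty": "Arts", "band": "$500-$999"}
pool = [
    {"class_year": 1990, "faculty": "Arts", "band": "$100-$499"},
    {"class_year": 1985, "faculty": "Science", "band": "$500-$999"},
]
twin = nearest_twin(donor, pool, ["class_year", "faculty", "band"])
```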

 

Within the four-week time frame, our test group had about a 7% activity rate, whereas our control group had an activity rate of about 5% (average for the institution during this timeframe). Within our volatility test group, 50% of donors gave an amount that would plot a valid point on our volatility matrix.
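
One way to gauge whether a 7% versus 5% gap is more than noise is a two-proportion z-test. The pool sizes below are hypothetical, since the post does not report them; with 1,000 donors per group, the gap would fall just short of the conventional 1.96 cutoff for significance at the 5% level.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two response rates, using the pooled
    standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical pool sizes -- the post does not report them.
z = two_proportion_z(70, 1000, 50, 1000)   # 7% test vs. 5% control
```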

 

Conclusion and next steps

 

Through our experiment, we sought to identify volatile donors, and test if we could trigger a reactivation in giving, ideally at, or greater than, their highest level on record.

 

Since not all of the donors within our test group made their gifts to the coded solicitation with the volatile ask matrix, we cannot tell whether being presented with language and ask amounts that reflected their elusive giving behavior prompted a gift, volatile or otherwise. However, we do feel confident that we're onto something when it comes to identifying and predicting the behavior of a particular, valuable set of donors to our institution.

 

Our above-average response rate (both versus the control group, and institution-wide) supports our “theory of volatility”, insofar as volatile donors are an identifiable pool with shared behaviors within our donor population. We plan to re-run this test at the same time next year, continuing our search to find a pattern within the instability.

 

Were we able to gather definitive results that will define and shape future annual giving strategy? Not exactly. But as far as data goes, this was definitely cool.

 

Jessica Kostuck is the Data Analyst, Annual Giving at Queen’s University in Kingston, Ontario. She can be reached at jessica.kostuck@queensu.ca.

 

————-

1. Varadi, David. “Volatility Differentials: High/Low Volatility versus Close/Close Volatility (HVL-CCV).” CSS Analytics. 29 Mar. 2011. Web. Winter 2015.

1 February 2016

Regular-season passing yardage and the NFL playoffs

Filed under: Analytics, Fun, John Sammis, Off on a tangent, Peter Wylie — kevinmacdonell @ 7:37 pm

Guest post by Peter B. Wylie, with John Sammis

 

How much is regular-season passing yardage related to success in the NFL playoffs? (Click link to download .PDF: Passing yardage in the NFL.)

 

Peter was really interested in finding out how strong the relationship might be between an NFL team’s passing during the regular season and its performance in the playoffs. There’s been plenty of talk about this relationship, but he wanted to see for himself.

 

A bit of a departure for CoolData, but still all about data and analysis … hope you enjoy!

 

9 January 2016

“Score!” now available in e-book formats

Filed under: Score!, Uncategorized — kevinmacdonell @ 10:39 am


 

I’m pleased to note that “Score! Data-Driven Success for Your Advancement Team” is now available in e-book formats. “Score!”, by Peter B. Wylie and Kevin MacDonell, is published by CASE, the Council for Advancement and Support of Education. To order your copy, click here to enter the CASE book store, and select EPUB or Mobi/Kindle.

 

“Score!” has been out and selling well as a print publication for some time now. But print isn’t for everyone these days, and we’re glad our work has been chosen as one of a handful of publications to get the electronic treatment — a new initiative for CASE books.

 

If you’re not familiar with the book already, please click on the blue cover to the right for links to reviews!

 

3 January 2016

CoolData (the book) beta testers needed

 

UPDATE (Jan 5): 16 people have responded to my call for volunteers, so I am going to close this off now. I have been in touch with each person who has emailed me, and I will be making a final selection within a few days. Thank you to everyone who considered taking a crack at it.

 

Interested in being a guinea pig for my new handbook on predictive modelling? I’m looking for someone (two or three people, max) to read and work through the draft of “CoolData” (the book), to help me make it better.

 

What’s it about? This long subtitle says it all: “A how-to guide for predictive modelling for higher education advancement and nonprofits using multiple linear regression in Data Desk.”

 

The ideal beta tester is someone who:

 

  • has read or heard about predictive modelling and understands what it’s for, but has never done it and is keen to learn. (Statistical concepts are introduced only when and if they are needed – no prior stats knowledge is required. I’m looking for beginners, but beginners who aren’t afraid of a challenge.);
  • tends to learn independently, particularly using books and manuals to work through examples, either in addition to training or completely on one’s own;
  • does not have an IT background but has some IT support at his or her organization, and would not be afraid to learn a little SQL in order to query a database him- or herself, and
  • has a copy of Data Desk, or intends to purchase Data Desk. (Available for PC or Mac).

 

It’s not terribly important that you work in the higher ed or nonprofit world — any type of data will do — but the book is strictly about multiple linear regression and the stats software Data Desk. The methods outlined in the book can be extended to any software package (multiple linear regression is the same everywhere), but because the prescribed steps refer specifically to Data Desk, I need someone to actually go through the motions in that specific package.

 

Think of a cookbook full of recipes, and how each must be tested in real kitchens before the book can go to press. Are all the needed ingredients listed? Has the method been clearly described? Are there steps that don’t make sense? I want to know where a reader is likely to get lost so that I can fix those sections. In other words, this is about more than just zapping typos.

 

I might be asking a lot. You or your organization will be expected to invest some money (for the software, sales of which I do not benefit from, by the way) and your time (in working through some 200 pages).

 

As a return on your investment, however, you should expect to learn how to build a predictive model. You will receive a printed copy of the current draft (electronic versions are not available yet), along with a sample data file to work through the exercises. You will also receive a free copy of the final published version, with an acknowledgement of your work.

 

One unusual aspect of the book is that a large chunk of it is devoted to learning how to extract data from a database (using SQL), as well as cleaning it and preparing the data for analysis. This is in recognition of the fact that data preparation accounts for the majority of time spent on any analysis project. It is not mandatory that you learn to write queries in SQL yourself, but simply knowing which aspects of data preparation can be dealt with at the database query level can speed your work considerably. I’ve tried to keep the sections about data extraction as non-technical as possible, and augmented with clear examples.

 

For a sense of the flavour of the book, I suggest you read these excerpts carefully: Exploring associations between variables and Testing associations between two categorical variables.

 

Contact me at kevin.macdonell@gmail.com and tell me why you’re interested in taking part.

 

 

 

9 October 2015

Ready for a sobering look at your last five years of alumni giving?

Guest post by Peter B. Wylie and John Sammis

  

Download this discussion paper here: Sobering Look at last 5 fiscal years of alumni giving

 

My good friends Wylie and Sammis are at it again, digging into the data to ask some hard questions.

 

This time, their analysis shines a light on a concerning fact about higher education fundraising: A small group of donors from the past are responsible for the lion’s share of recent giving.

 

My first reaction on reading this paper was, well, that looks about right. A school’s best current donors have probably been donors for quite some time, and alumni participation is in decline all over North America. So?

 

The “so” is that we talk about new donor acquisition, but are we really investing in it? Do we have any clue who’s going to replace those donors from the past and address the fact that our fundraising programs are leaky boats taking on water? Is there a future in focusing nearly exclusively on current loyal donors? (Answer: Sure, if loyal donors are immortal.)

 

A good start would be for you to get a handle on the situation at your institution by looking at your data as Wylie and Sammis have done for the schools in their discussion paper. Download it here: Sobering Look at last 5 fiscal years of alumni giving.

 

9 September 2015

Prospect Research, ten years from now

Guest post by Peter B. Wylie

 

(This opinion piece was originally posted to the PRSPT-L Listserv.)

 

As many of you know, Dave Robertson decided to talk about the future of prospect research in New Orleans via a folk song. It was great. The guy’s got good pipes and plays both the harmonica and the guitar really well.

 

Anyway, here’s what I wrote him back in March. It goes on a bit. So if you get bored, just dump it.

 

It was stiflingly hot on that morning in early September of 1967 when my buddy Bunzy sat down in the amphitheater of Cornell’s med school in Manhattan. He and the rest of his first year classmates were chattering away when an older gentleman scuffled through a side door out to a podium. The room fell silent as the guy peered over his reading glasses out onto a sea of mostly male faces: “I’ll be straight with you, folks. Fifty percent of what we teach you over the next four years will be wrong. The problem is, we don’t know which fifty percent.”

 

I’ve often thought about the wisdom embedded in those words. The old doc was right. It is very hard for any of us to predict what anything will be like twenty years hence. Nate Silver in both his book “The Signal and the Noise” and on his immensely popular website underlines how bad highly regarded experts in most fields are at making even short range predictions.

 

So when Dave Robertson asked me to jot down some ideas about how prospect research will look a decade or more from now, I wanted to say, “Dave, I’ll be happy to give it a shot. But I’ll probably be as off the mark as the futurists in the early eighties. Remember those dudes? They gave us no hint whatsoever of how soon something called the internet would arrive and vastly transform our lives.”

 

With that caveat, I’d like to talk about two topics. The first has to do with something I’m pretty sure will happen. The second has to do with something sprinkled more with hope than certainty.

 

On to the first. I am increasingly convinced prospect researchers a decade or more from now will have far more information about prospects than they currently have. Frankly, I’m not enthusiastic about that possibility. Why? Privacy. Take my situation as I write down my thoughts for Dave. I’m on the island of Vieques, Puerto Rico with Linda to celebrate our fortieth wedding anniversary. We’ve been here almost two weeks. Any doggedly persistent law enforcement investigator could find out the following:

 

  • What flights we took to get here
  • What we paid for the tickets
  • The cost of each meal we paid for with a credit card
  • What ebooks I purchased while here
  • What shows I watched on Netflix
  • How many miles we’ve driven and where with our rental jeep
  • How happy we seemed with each other while in the field of the many security cameras, even in this rustic setting

 

You get the idea. Right now, I’m gonna assume that the vast majority of prospect researchers have no access to such information. More importantly, I assume their ethical compasses would steer them far away from even wanting to acquire such information.

 

But that’s today. March 2, 2015. How about ten years from now? Or 15 years from now, assuming I’m still able to make fog on a mirror? As it becomes easier and easier to amalgamate data about old Pete, I think all that info will be much easier to access by people willing to purchase it. That includes people who do prospect research. And if those researchers do get access to such data, it will help them enormously in finding out if I’m really the right fit for the mission of their fundraising institution. I guess that’s okay. But at my core, I don’t like the fact that they’ll be able to peek so closely into who I am and what I want in the days I have left on this wacky planet. I just don’t.

 

On to the second thing. Anybody who’s worked in prospect research even a little knows that the vast majority of the money raised by their organization comes from a small group of donors. If you look at university alumni databases, it’s not at all unusual to find that one tenth of one percent of the alums have given almost a half of the total current lifetime dollars. I think that situation needs to change. I think these institutions must find ways to get more involvement from the many folks who really like them and who have the wherewithal to give them big gifts.

 

So … how will the prospect researchers of the future play a key role in helping fundraising organizations (be they universities or general nonprofits) do a far better job of identifying and cultivating donors who have the resources and inclination to pitch in on the major giving front? I think/hope it’s gonna be in the way campaigns are run.

 

Right now, here’s what seems to happen. A campaign is launched with the help of a campaign consultant. A strategy is worked out whereby both the consultants and major gift officers spread out and talk to past major givers and basically say, “Hey, you all were really nice and generous to us in the last campaign. We’re enormously grateful for that. We truly are. But this time around we could use even more of your generosity. So … What can we put you down for?”

 

This is a gross oversimplification of what happens in campaigns. And it’s coming from a guy who doesn’t do campaign consulting. Still, I don’t think I’m too far off the mark. To change this pattern I think prospect researchers will have to be more assertive with the captains of these campaigns: The consultants, the VPs, the executives, all of whom talk so authoritatively about how things should be done and who can simultaneously be as full of crap as a Christmas goose.

 

These prospect researchers are going to have to put their feet down on the accelerator of data driven decision-making. In effect, they’ll need to say:

 

“We now have pretty damn accurate info on how wealthy a whole bunch of our younger donors are. And we have good analytics in place to ascertain which of them are most likely to step it up soon … IF we strategize how to nurture them over the long run. Right now, we’re going after the low hanging fruit that is comprised of tried and true donors. We gotta stop just doing that. Otherwise, we’re leaving way too much money on the table.”

 

All that I’ve been saying in this second part is not new. Not at all. Perhaps what may be a little new is what I have hinted at but not come right out and proclaimed. In the future we’ll need more prospect researchers to stand up and be outspoken to the campaign movers and shakers. To tell these big shots, politely and respectfully, that they need to start paying attention to the data. And do it in such a way that they get listened to.

 

That’s asking a lot of folks whose nature is often quiet, shy, and introverted. I get that. But some of them are not. Perhaps they are not as brazen as James Carville is/was. But we need more folks like them who will stand up and say, “It’s the data, stupid!” Without yelling and without saying “stupid.”
