CoolData blog

9 October 2015

Ready for a sobering look at your last five years of alumni giving?

Guest post by Peter B. Wylie and John Sammis


Download this discussion paper here: Sobering Look at last 5 fiscal years of alumni giving


My good friends Wylie and Sammis are at it again, digging into the data to ask some hard questions.


This time, their analysis shines a light on a concerning fact about higher education fundraising: A small group of donors from the past is responsible for the lion’s share of recent giving.


My first reaction on reading this paper was, well, that looks about right. A school’s best current donors have probably been donors for quite some time, and alumni participation is in decline all over North America. So?


The “so” is that we talk about new donor acquisition, but are we really investing in it? Do we have any clue who’s going to replace those donors from the past, given that our fundraising programs are leaky boats taking on water? Is there a future in focusing nearly exclusively on current loyal donors? (Answer: Sure, if loyal donors are immortal.)


A good start would be for you to get a handle on the situation at your institution by looking at your data as Wylie and Sammis have done for the schools in their discussion paper. Download it here: Sobering Look at last 5 fiscal years of alumni giving.


9 September 2015

Prospect Research, ten years from now

Guest post by Peter B. Wylie


(This opinion piece was originally posted to the PRSPT-L Listserv.)


As many of you know, Dave Robertson decided to talk about the future of prospect research in New Orleans via a folk song. It was great. The guy’s got good pipes and plays both the harmonica and the guitar really well.


Anyway, here’s what I wrote him back in March. It goes on a bit. So if you get bored, just dump it.


It was stiflingly hot on that morning in early September of 1967 when my buddy Bunzy sat down in the amphitheater of Cornell’s med school in Manhattan. He and the rest of his first-year classmates were chattering away when an older gentleman shuffled through a side door out to a podium. The room fell silent as the guy peered over his reading glasses out onto a sea of mostly male faces: “I’ll be straight with you, folks. Fifty percent of what we teach you over the next four years will be wrong. The problem is, we don’t know which fifty percent.”


I’ve often thought about the wisdom embedded in those words. The old doc was right. It is very hard for any of us to predict what anything will be like twenty years hence. Nate Silver, in both his book “The Signal and the Noise” and on his immensely popular website, underlines how bad highly regarded experts in most fields are at making even short-range predictions.


So when Dave Robertson asked me to jot down some ideas about how prospect research will look a decade or more from now, I wanted to say, “Dave, I’ll be happy to give it a shot. But I’ll probably be as off the mark as the futurists in the early eighties. Remember those dudes? They gave us no hint whatsoever of how soon something called the internet would arrive and vastly transform our lives.”


With that caveat, I’d like to talk about two topics. The first has to do with something I’m pretty sure will happen. The second has to do with something sprinkled more with hope than certainty.


On to the first. I am increasingly convinced that prospect researchers a decade or more from now will have far more information about prospects than they currently have. Frankly, I’m not enthusiastic about that possibility. Why? Privacy. Take my situation as I write down my thoughts for Dave. I’m on the island of Vieques, Puerto Rico, with Linda to celebrate our fortieth wedding anniversary. We’ve been here almost two weeks. Any doggedly persistent law enforcement investigator could find out the following:


  • What flights we took to get here
  • What we paid for the tickets
  • The cost of each meal we paid for with a credit card
  • What ebooks I purchased while here
  • What shows I watched on Netflix
  • How many miles we’ve driven and where with our rental jeep
  • How happy we seemed with each other while in the field of the many security cameras, even in this rustic setting


You get the idea. Right now, I’m gonna assume that the vast majority of prospect researchers have no access to such information. More importantly, I assume their ethical compasses would steer them far away from even wanting to acquire such information.


But that’s today: March 2, 2015. How about ten years from now? Or fifteen, assuming I’m still able to make fog on a mirror? As it becomes easier and easier to amalgamate data about old Pete, I think all that info will be much easier to access for anyone willing to purchase it. That includes people who do prospect research. And if those researchers do get access to such data, it will help them enormously in finding out whether I’m really the right fit for the mission of their fundraising institution. I guess that’s okay. But at my core, I don’t like the fact that they’ll be able to peek so closely into who I am and what I want in the days I have left on this wacky planet. I just don’t.


On to the second thing. Anybody who’s worked in prospect research even a little knows that the vast majority of the money raised by their organization comes from a small group of donors. If you look at university alumni databases, it’s not at all unusual to find that one tenth of one percent of the alums have given almost half of the total lifetime dollars. I think that situation needs to change. I think these institutions must find ways to get more involvement from the many folks who really like them and who have the wherewithal to give them big gifts.


So … how will the prospect researchers of the future play a key role in helping fundraising organizations (be they universities or general nonprofits) do a far better job of identifying and cultivating donors who have the resources and inclination to pitch in on the major giving front? I think/hope it’s gonna be in the way campaigns are run.


Right now, here’s what seems to happen. A campaign is launched with the help of a campaign consultant. A strategy is worked out whereby both the consultants and major gift officers spread out and talk to past major givers and basically say, “Hey, you all were really nice and generous to us in the last campaign. We’re enormously grateful for that. We truly are. But this time around we could use even more of your generosity. So … What can we put you down for?”


This is a gross oversimplification of what happens in campaigns. And it’s coming from a guy who doesn’t do campaign consulting. Still, I don’t think I’m too far off the mark. To change this pattern, I think prospect researchers will have to be more assertive with the captains of these campaigns: The consultants, the VPs, the executives, all of whom talk so authoritatively about how things should be done and who can simultaneously be as full of crap as a Christmas goose.


These prospect researchers are going to have to put their feet down on the accelerator of data-driven decision-making. In effect, they’ll need to say:


“We now have pretty damn accurate info on how wealthy a whole bunch of our younger donors are. And we have good analytics in place to ascertain which of them are most likely to step it up soon … IF we strategize how to nurture them over the long run. Right now, we’re going after the low-hanging fruit: the tried-and-true donors. We gotta stop doing just that. Otherwise, we’re leaving way too much money on the table.”


All that I’ve been saying in this second part is not new. Not at all. Perhaps what may be a little new is what I have hinted at but not come right out and proclaimed. In the future we’ll need more prospect researchers to stand up and be outspoken to the campaign movers and shakers. To tell these big shots, politely and respectfully, that they need to start paying attention to the data. And do it in such a way that they get listened to.


That’s asking a lot of folks whose nature is often quiet, shy, and introverted. I get that. But some of them are not. Perhaps they’re not as brazen as James Carville, but we need more folks like that who will stand up and say, “It’s the data, stupid!” Without yelling, and without saying “stupid.”

13 November 2014

How to measure the rate of increasing giving for major donors

Filed under: John Sammis, Major Giving, Peter Wylie, RFM — kevinmacdonell @ 12:35 pm

Not long ago, this question came up on the Prospect-DMM list, generating some discussion: How do you measure the rate of increasing giving for donors, i.e. their “velocity”? Can this be used to find significant donors who are poised to give more? This question got Peter Wylie thinking, and he came up with a simple way to calculate an index that is a variation on the concept of “recency” — like the ‘R’ in an RFM score, only much better.

This index should let you see that two donors whose lifetime giving is the same can differ markedly in terms of the recency of their giving. That will help you decide how to go after donors who are really on a roll.
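Peter’s paper spells out his exact formulation. Purely to illustrate the general idea, here is a minimal sketch in Python (pandas) of one way such an index could work; the gift table and its column names are hypothetical, and the five-year window and the ratio are my assumptions, not Peter’s formula.

```python
import pandas as pd

# Hypothetical gift-level data: one row per gift.
gifts = pd.DataFrame({
    "donor_id":    [1, 1, 1, 2, 2, 2],
    "fiscal_year": [2010, 2013, 2014, 2008, 2009, 2014],
    "amount":      [100, 200, 700, 700, 200, 100],
})

lifetime = gifts.groupby("donor_id")["amount"].sum()
recent = (gifts[gifts["fiscal_year"] >= 2012]          # last few fiscal years
          .groupby("donor_id")["amount"].sum()
          .reindex(lifetime.index, fill_value=0))      # donors with no recent gifts get 0

# Illustrative index: share of lifetime dollars given recently.
velocity = (recent / lifetime).rename("velocity_index")
print(velocity)
# Donor 1: 900/1000 = 0.9 (on a roll); donor 2: 100/1000 = 0.1 (fading).
```

Both donors above have identical lifetime totals, but the ratio cleanly separates the donor who is accelerating from the one who peaked years ago, which is exactly the distinction a plain ‘R’ score blurs.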

You can download a printer-friendly PDF of Peter’s discussion paper here: An Index of Increasing Giving for Major Donors


6 October 2014

Don’t worry, just do it

People trying to learn how to do predictive modelling on the job often need only one thing to get them to the next stage: Some reassurance that what they are doing is valid.

Peter Wylie and I are each just back home, having presented at the fall conference of the Illinois chapter of the Association of Professional Researchers for Advancement (APRA-IL), hosted at Loyola University Chicago. (See photos, below!) Following an entertaining and fascinating look at the current and future state of predictive analytics presented by Josh Birkholz of Bentz Whaley Flessner, Peter and I gave a live demo of working with real data in Data Desk, with the assistance of Rush University Medical Center. We also drew names to give away a few copies of our book, Score! Data-Driven Success for Your Advancement Team.

We were impressed by the variety and quality of questions from attendees, in particular those having to do with stumbling blocks and barriers to progress. It was nice to be able to reassure people that when it comes to predictive modelling, some things aren’t worth worrying about.

Messy data, for example. Some databases, particularly those maintained by non-higher-ed nonprofits, have data integrity issues such as duplicate records. It would be a shame, we said, if data analysis were pushed to the back burner just because of a lack of purity in the data. Yes, work on improving data integrity — but don’t assume that you cannot derive valuable insights right now from your messy data.

And then the practice of predictive modelling itself … Oh, there is so much advice out there on the net, some of it highly technical and involving a hundred different advanced techniques. Anyone trying to learn on their own can get stymied, endlessly questioning whether what they’re doing is okay.

For them, our advice was this: In our field, you create value by ranking constituents according to their likelihood to engage in a behaviour of interest (giving, usually), which guides the spending of scarce resources where they will do the most good. You can accomplish this without the use of complex algorithms or arcane math. In fact, simpler models are often better models.

The workhorse tool for this task is multiple linear regression. A very good stand-in for regression is building a simple score using the techniques outlined in Peter’s book, Data Mining for Fundraisers. Sticking to the basics will work very well. Fussing with technical issues and striving for a high degree of accuracy are distractions that the beginner need not be overly concerned with.
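To make the “simple score” idea concrete, here is a minimal sketch of an additive score built from a handful of binary predictors. This is an illustration, not the exact procedure from Peter’s book; the field names are hypothetical, and in practice you would first verify that each attribute is actually correlated with giving.

```python
import pandas as pd

# Hypothetical constituent data; 1 = attribute present, 0 = absent.
alums = pd.DataFrame({
    "id":             [101, 102, 103],
    "has_email":      [1, 0, 1],
    "has_home_phone": [1, 1, 0],
    "event_attendee": [1, 0, 0],
    "business_phone": [1, 0, 1],
})

predictors = ["has_email", "has_home_phone", "event_attendee", "business_phone"]

# A simple additive score: one point per positively correlated attribute.
alums["score"] = alums[predictors].sum(axis=1)

# Rank constituents so scarce resources go to the highest scorers first.
print(alums.sort_values("score", ascending=False))
```

No algorithm more arcane than addition, yet the resulting ranking is usually a big improvement over picking prospects at random.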

If your shop’s current practice is to pick prospects or other targets by throwing darts, then even the crudest model will be an improvement. In many situations, simply performing better than random will be enough to create value. The bottom line: Just do it. Worry about perfection some other day.

If the decisions are high-stakes, if the model will be relied on to guide the deployment of scarce resources, then insert another step in the process. Go ahead and build the model, but don’t use it. Allow enough time of “business as usual” to elapse. Then, gather fresh examples of people who converted to donors, agreed to a bequest, or made a large gift — whatever the behaviour is you’ve tried to predict — and check their scores (a sketch of this check follows the list below):

  • If these new stars cluster toward the high end of the score range, wonderful. You can go ahead and start using the model.
  • If the result is mixed and sort of random-looking, then examine where the model failed. Reexamine each predictor you used. Is the historical data in the predictor correlated with the new behaviour? If it isn’t, then the correlation you observed while building the model may have been spurious and led you astray, and the predictor should be excluded. As well, think hard about whether the outcome variable in your model is properly defined: That is, are you targeting the right behaviour? If you are trying to find good prospects for Planned Giving, for example, your outcome variable should focus on that, and not on lifetime giving.
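A minimal sketch of that score check, assuming you saved each constituent’s score when the model was built and later flagged those who went on to exhibit the target behaviour (all names and numbers are hypothetical):

```python
import pandas as pd

# Hypothetical data: scores saved at model-build time, plus a flag for
# constituents who exhibited the target behaviour afterward.
df = pd.DataFrame({
    "score":           [0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99],
    "converted_later": [0, 0, 0, 0, 1, 0, 1, 1, 1, 1],
})

# Bucket scores into quintiles and see where the new stars landed.
df["score_quintile"] = pd.qcut(df["score"], 5, labels=[1, 2, 3, 4, 5])
rates = df.groupby("score_quintile", observed=True)["converted_later"].mean()
print(rates)
# Conversion rates climbing toward the top quintile suggest the model is
# usable; a flat or random-looking pattern means reexamining the predictors.
```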

“Don’t worry, just do it” sounds like motivational advice, but it’s more than that. The fact is, there is only so much model validation you can do at the time you create the model. Sure, you can hold out a generous number of cases as a validation sample to test your scores with. But experience will show you that your scores will always pass the validation test just fine — and yet the model may still be worthless.

A holdout sample of data that is contemporaneous with that used to train the model is not the same as real results in the future. A better way to go might be to just use all your data to train the model (no holdout sample), which will result in a better model anyway, especially if you’re trying to predict something relatively uncommon like Planned Giving potential. Then, sit tight and observe how it does in production, or how it would have done in production if it had been deployed.
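For contrast, here is roughly what the contemporaneous holdout check looks like, sketched with scikit-learn on synthetic data (the tooling is my assumption; the post doesn’t prescribe any). The point is that a healthy holdout score here tells you little about how the model will rank next year’s donors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # hypothetical predictors
y = X @ np.array([0.5, 0.3, 0.0, 0.0]) + rng.normal(size=1000)

# Contemporaneous holdout: drawn from the same period as the training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))                  # R^2 on the holdout

# A respectable holdout R^2 is easy to achieve; only out-of-time results
# (scores checked against behaviour that happened later) prove the model.
```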

  1. Observe, learn, tweak, and repeat. Errors are hard to avoid, but they can be discovered.
  2. Trust the process, but verify the results. What you’re doing is probably fine. If it isn’t, you’ll get a chance to find out.
  3. Don’t sweat the small stuff. Make a difference now by sticking to basics and thinking of the big picture. You can continue to delve and explore technical refinements and new methods, if that’s where your interest and aptitude take you. Data analysis and predictive modelling are huge subjects — start where you are, where you can make a difference.

* A heartfelt thank you to APRA-IL and all who made our visit such a pleasure, especially Sabine Schuller (The Rotary Foundation), Katie Ingrao and Viviana Ramirez (Rush University Medical Center), Leigh Peterson Visaya (Loyola University Chicago), Beth Witherspoon (Elmhurst College), and Rodney P. Young, Jr. (DePaul University), who took the photos you see below. (See also: APRA IL Fall Conference Datapalooza.)


19 August 2014

Score! … As pictured by you

Filed under: Book, Peter Wylie, Score! — kevinmacdonell @ 7:25 pm

Left to right: Elisa Shoenberger, Leigh Petersen Visaya, Rebekah O’Brien, and Alison Rane in Chicago.

During the long stretch of time that Peter Wylie and I were writing our book, Score! Data-Driven Success for Your Advancement Team, there were days when I thought that even if we managed to get the thing done, it might not be that great. There were just so many pieces that needed to fit together somehow … I guess neither of us wanted to let the other down, so we plugged on despite doubts and delays, and then, somehow, it got finished.

Whew, I thought. Washed my hands of that! I expected I would walk away from it, move on to other projects, and be glad that I had my early mornings and weekends back.

That’s not what happened.

These few months later, my eye will still be caught now and then by the striking, colourful cover of the book sitting on my desk. It draws me to pick it up and flip through it — even re-read bits. I find myself thinking, “Hey, I like this.”

Of course, who cares, right? I am not the reader. However, whatever I might think about Score!, it has been even more gratifying for Peter and me to hear from folks who seem to like it as much as we do. How fun it has been to see that bright cover popping up in photos and on social media every once in a while.

I’ve collected a few of those photos and tweets here, along with some other images related to the book. Feel free to post your own “Score selfies” on Twitter using the hashtag #scorethebook. Or if you’re not into Twitter, send me a photo at



Jennifer Cunningham, Senior Director, Metrics+Marketing for the Office of Alumni Affairs, Cornell University. @jenlynham

Click here to order your copy of Score! from the CASE Bookstore.

While we would like for you to buy it, we would LOVE for you to read it and put it to work in your shop. Your buying it earns us each enough money to buy a cup of coffee. Your READING it furthers the reach and impact of ideas and concepts that fascinate us and which we love to share.

7 June 2014

A fresh look at RFM scoring

Filed under: Annual Giving, John Sammis, Peter Wylie, RFM — kevinmacdonell @ 7:08 pm

Guest post by Peter B. Wylie and John Sammis

Back in February and March, Kevin MacDonell published a couple of posts about RFM for this blog (Automate RFM scoring of your donors with this Python script and An all-SQL way to automate RFM scoring). If you’ve read these, you know Kevin was talking about a quick way to amass the data you need to compute measures of RECENCY, FREQUENCY, and MONETARY AMOUNT for a particular set of donors over the last five fiscal years.
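If you haven’t read those posts, the roll-up they automate looks roughly like this minimal pandas sketch; the gift table and column names are hypothetical (Kevin’s originals work against the database with a Python script and straight SQL, respectively).

```python
import pandas as pd

# Hypothetical gift table; in practice, pulled from your database.
gifts = pd.DataFrame({
    "donor_id":    [1, 1, 2, 2, 2, 3],
    "fiscal_year": [2010, 2013, 2009, 2011, 2013, 2012],
    "amount":      [50, 75, 100, 25, 500, 40],
})

# Restrict to the last five fiscal years, as in Kevin's posts.
window = gifts[gifts["fiscal_year"].between(2009, 2013)]

rfm = window.groupby("donor_id").agg(
    recency=("fiscal_year", "max"),        # most recent year of giving
    frequency=("fiscal_year", "nunique"),  # number of years with a gift
    monetary=("amount", "sum"),            # total dollars in the window
)
print(rfm)
```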

But how useful, really, is RFM? This short paper highlights some key issues with RFM scoring, but ends on a positive note. Rather than chucking it out the window, we suggest a new twist that goes beyond RFM to something potentially much more useful.

Download the PDF here: Why We Are Not in Love With RFM
