I recently read a question on a listserv that prompted me to respond. A university in the US was planning to solicit about 25,000 of its current non-donor alumni. The question was: How best to filter a non-donor base of 140,000 in order to arrive at the 25,000 names of those most likely to become donors? This university had only ever solicited donors in the past, so this was new territory for them. (How those alumni became donors in the first place was not explained.)
One responder to the question suggested narrowing down the pool by recent class years, reunion class years, or something similar, also using any ratings that were available, and then doing an Nth-record select on the remaining records to get to 25,000. Selecting every Nth record is one way to pick an approximately random sample. If you aren’t able to make this selection yourself, the responder suggested, your mail house vendor should be able to.
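For readers who haven’t done an Nth-record select before, here is a minimal sketch in Python of how one might work. The file name, column layout, and target size are assumptions for illustration; the idea is simply to keep every Nth row of the filtered pool so that roughly the desired number of records remain.

```python
# A minimal sketch (not the responder's actual procedure) of an Nth-record
# select, assuming the filtered non-donor pool sits in a CSV file.
# The file name and target size are illustrative assumptions.
import csv

def nth_record_select(rows, target_size):
    """Keep roughly every Nth row so that about target_size rows remain."""
    n = max(1, len(rows) // target_size)
    return rows[::n][:target_size]

with open("filtered_nondonors.csv", newline="") as f:
    pool = list(csv.DictReader(f))

selection = nth_record_select(pool, 25000)
print(f"Selected {len(selection)} of {len(pool)} records")
```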
This answer was fine, up until the “Nth selection” part. I also had reservations about putting the vendor in control of prospect selection. So here are some thoughts on the topic of acquisition mailings.
Doing a random selection assumes that all non-donor alumni are alike, or at least that we aren’t able to make distinctions among them. Neither assumption is true. Although they haven’t given yet, some alumni feel a closer affinity to your school than others, and you should have some of these affinity-related cues stored in your database. This suggests that a more selective approach will perform better than a random sample.
Not long ago, I isolated all our alumni who converted from never-donor to donor at any time in the past two years. (Two years instead of just one, in order to boost the numbers a bit.) Then I compared this group with the universe of all the never-donors who had failed to convert, based on a number of attributes that might indicate affinity. Some of my findings included:
Using these and other factors, I created a score which was used to select which non-donor alumni would be included in our acquisition mailing. I’ve been monitoring the results, and although new donors do tend to be the alumni with higher scores, frankly we’ve had poor results via mail solicitation, so evaluation is difficult. This in itself is not unusual: New-donor acquisition is very much a Phonathon phenomenon for us — in our phone results, the effectiveness of the score is much more evident.
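To make the idea concrete, here is a hedged sketch of how such a score might be assembled from yes/no cues in the database. The attribute names and weights below are purely illustrative assumptions, not the factors or weights I actually used.

```python
# A purely illustrative sketch of an additive affinity score. The attribute
# names and weights are assumptions for demonstration only.
ILLUSTRATIVE_WEIGHTS = {
    "has_email": 1,
    "has_home_phone": 1,
    "attended_event": 2,
    "is_volunteer": 3,
}

def affinity_score(alum):
    """Sum the weights of the affinity cues present for one alumni record."""
    return sum(w for attr, w in ILLUSTRATIVE_WEIGHTS.items() if alum.get(attr))

def select_for_mailing(never_donors, n=25000):
    """Rank never-donors by score and keep the top n for the acquisition mailing."""
    return sorted(never_donors, key=affinity_score, reverse=True)[:n]
```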
Poor results or not, it’s still better than random, and whenever you can improve on random, you can reduce the size of a mailing. Acquisition mailings in general are way too big, simply because they’re often just random — they have to cast a wide net. Unfortunately your mail house is unlikely to encourage you to get more focused and save money.
Universities contract with vendors for their expertise and efficiency in dealing with large mailings, including cleaning the address data and handling the logistics that many small Annual Fund offices just aren’t equipped to deal with. A good mail house is a valuable ally and source of direct-marketing expertise. But acquisition presents a conflict for vendors, who make their money on volume. Annual Fund offices should be open to advice from their vendor, but they would do well to develop their own expertise in prospect selection, and make drastic cuts to the bloat in their mailings.
Donors may need to be acquired at a loss, no question. It’s about lifetime value, after all. But if the cumulative cost of that annual appeal exceeds the lifetime value of your newly-acquired donor, then the price is too high.
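As a back-of-envelope check, the arithmetic might look like the sketch below. Every number in it is made up for illustration; the point is only that the cumulative cost per acquired donor has to be weighed against what that donor can reasonably be expected to give over a lifetime.

```python
# Illustrative numbers only -- none of these figures come from the post.
cost_per_piece = 1.50          # assumed printing + postage per mailed piece
pieces_mailed = 25000
new_donors_acquired = 250      # assumed response to the appeal

cost_per_new_donor = cost_per_piece * pieces_mailed / new_donors_acquired

expected_lifetime_value = 120.00  # assumed net lifetime giving of a new donor

if cost_per_new_donor > expected_lifetime_value:
    print("Acquisition at this volume costs more than the donor is worth over a lifetime.")
else:
    print("Acquiring at a loss now can still pay off over the donor's lifetime.")
```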
There’s been some back-and-forth on one of the listservs about the “correct” way to measure and score alumni engagement. One vendor, which claims to specialize in rigor, has been pressing for an emphasis on scientific rigor. That emphasis is misplaced.
No doubt there are sophisticated ways of measuring engagement that I know nothing about, but the question I can’t get beyond is, how do you define “engagement”? How do you make it measurable so that one method applies everywhere? I think that’s a challenging proposition, one that limits any claim to “correctness” of method. This is the main reason that I avoid writing about measuring engagement — it sounds analytical, but inevitably it rests on some messy, intuitive assumptions.
The closest I’ve ever seen anyone come is Engagement Analysis Inc., a firm based here in Canada. They have a carefully chosen set of engagement-related survey questions which are held constant from school to school. The questions are grouped in various categories or “drivers” of engagement according to how closely related (statistically) the responses tend to be to each other. Although I have issues with alumni surveys and the dangers involved in interpreting the results, I found EA’s approach fascinating in terms of gathering and comparing data on alumni attitudes.
(Disclaimer: My former employer was once a client of this firm’s but I have no other association with them. Other vendors do similar and very fine work, of course. I can think of a few, but haven’t actually worked with them, so I will not offer an opinion.)
Some vendors may claim to be scientific or analytically correct, but the only requirements for quantifying engagement are that it be reasonable and, if you are benchmarking against other schools, consistent from school to school. In general, if you want to benchmark against other schools and do it right, engage a vendor, because it’s not easily done.
But if you want to benchmark against yourself (that is, over time), don’t be intimidated by anyone telling you your method isn’t good enough. Just do your own thing. Survey if you like, but call first upon the real, measurable activities that your alumni participate in. There is no single right way, so find out what others have done. One institution will give more weight to reunion attendance than to showing up for a pub night, while another will weigh all event attendance equally. Another will ditch event attendance altogether in favour of volunteer activity, or some other indicator.
Can anyone say definitively that any of these approaches are wrong? I don’t think so — they may be just right for the school doing the measuring. Many schools (mine included) assign fairly arbitrary weights to engagement indicators based on intuition and experience. I can’t find fault with that, simply because “engagement” is not a quantity. It’s not directly measurable, so we have to use proxies which ARE measurable. Other schools measure the degree of association (correlation) between certain activities and alumni giving, and base their weights on that, which is smart. But it’s all the same to me in the end, because ‘giving’ is just another proxy for the freely interpretable quality of “engagement.”
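For schools taking the correlation route, a hedged sketch of the idea follows. The indicators, the tiny sample data, and the decision to floor weights at zero are all my own assumptions for illustration; the only point is that each indicator’s weight is derived from its observed association with giving rather than assigned by intuition.

```python
# A minimal sketch, assuming Python 3.10+ and a small, made-up table of alumni
# with 0/1 engagement indicators plus a lifetime-giving figure. Indicator names
# and data are illustrative only.
from statistics import correlation

alumni = [
    {"reunion": 1, "pub_night": 0, "volunteer": 1, "giving": 500.0},
    {"reunion": 0, "pub_night": 1, "volunteer": 0, "giving": 0.0},
    {"reunion": 1, "pub_night": 1, "volunteer": 0, "giving": 50.0},
    {"reunion": 0, "pub_night": 0, "volunteer": 1, "giving": 100.0},
]

indicators = ["reunion", "pub_night", "volunteer"]
giving = [a["giving"] for a in alumni]

# Weight each indicator by its correlation with giving (floored at zero here,
# an arbitrary choice to keep negative associations from subtracting points).
weights = {}
for ind in indicators:
    values = [a[ind] for a in alumni]
    weights[ind] = max(0.0, correlation(values, giving))

def engagement_score(alum):
    """Correlation-weighted sum of the indicators present for one record."""
    return sum(weights[ind] * alum[ind] for ind in indicators)
```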
Think of devising a “love score” to rank people’s marriages in terms of the strength of the pair bond. A hundred analysts would head off in a hundred different directions at Step 1: Defining “love”. That doesn’t mean the exercise is useless or uninteresting, it just means that certain claims have to be taken with a grain of salt.
We all have plenty of leeway to choose the proxies that work for us, and I’ve seen a number of good examples from various schools. I can’t say one is better than another. If you do a good job measuring the proxies from one year to the next, you should be able to learn something from the relative rises and falls in engagement scores over time, and from comparisons between different groups of alumni.
Are there more rigorous approaches? Yes, probably. Should that stop you from doing your own thing? Never!
I’ve written a guest post for Andrew Urban’s blog, Return on Mission. Andrew is the author of a great little book called “The Nonprofit Buyer,” which is subtitled: “Strategies for Success from a Nonprofit Technology Sales Veteran.” It’s all about helping nonprofits make better choices when it comes to dealing with vendors of technology products and services. You can find out more on Andrew’s blog.
I’m pleased he’s asked me to contribute to Return on Mission, where I write on a topic I haven’t addressed on my own blog. Readers of CoolData know that my focus is the in-house analytics capability of nonprofits and higher-education institutions. So what do I think about analytics and analytic services purchased from vendors?
Well, if you want to find out, you’ll have to follow the link: Knowledgeable Purchasers — 4 Easy Rules