CoolData blog

3 October 2011

Data 1, Gut Instinct 0

Filed under: Annual Giving, Best practices, Phonathon — kevinmacdonell @ 8:30 am

Sometimes I employ a practice in our Phonathon program simply because my gut says it’s gotta work. Some things just seem so obvious that it doesn’t seem worthwhile testing them to prove my intuition is valid. And like a lot of people who work in Annual Giving, I like to flatter myself that I can make a non-engaged alum give just by making shrewd tweaks to the program.

It turns out that I am quite wrong. I am thinking about a practice that seems to be part of the Phonathon gospel of best practices. I firmly believed in it, and I got serious about using it this fall. As the song says, though, it ain’t necessarily so.

When possible, I am pairing up student callers with alumni whose degree is in the same faculty of study. If I have business students in the calling room, for example, I’ll assign them alumni with degrees associated with the Faculty of Management. A grad with a BSc majoring in chemistry, meanwhile, will get a call from a student majoring in one of the sciences, rather than arts or business. It’s not perfect: There are too many degree programs, current and historic, for me to get any more specific than the overall faculty, but at least it increases the chance that student and alum will have something in common to talk about.

It’s easy to see why this ought to work. When speaking with young alumni, callers are somewhat more likely to have had certain professors or classes in common, and their interests may be aligned — for example, the alum might be able to provide the student with a glimpse into the job market that awaits. With older alumni, the callers might at least know the campus and buildings that alumni of the past inhabited just as they do today. If alumni feel so inclined, the conversation might even lead to a discussion about life and career.

These would be meaningful conversations, the kind of connection we hope to achieve on the phone. Just that much, even without a gift (this year), would be a desirable result.

On the other hand … if faculty pairings really lead to longer, better-quality conversations, would we not expect that faculty-paired conversations would, on average, result in more gifts than non-paired conversations? In the long run, is that not our goal? If it makes no difference who asks whom, then why complicate things?

First let me say that I embarked on this analysis fully expecting that the data would demonstrate the effectiveness of faculty-paired conversations. I might be a data guy, but I am not unbiased! I really hoped that my intervention would actually produce results. Allow me to admit that I was quite disappointed by what I found.

Here’s what I did.

Last year, I did not employ faculty pairings. We made caller assignments based on prospects’ donor status (LYBUNT, SYBUNT, etc.), but not faculty. I don’t know how our automated software distributes prospects to callers, but I am comfortable saying that, with regard to the faculty of preferred degree, the distribution to callers was random. This more or less random assignment by faculty allowed me to compare “paired” conversations with “unpaired” conversations, to see whether one was better than the other with regard to length of call, participation rate, and average pledge.

I dug into the database of our automated calling application and I pulled a big file with every single call result for last year. The file included the caller’s ID, the length of the call in seconds, the last result (Yes Pledge, No Pledge, No Answer, Answering Machine, etc. etc.), and the pledge amount (if applicable).

Then I removed all the records that did not result in an actual conversation. If the caller didn’t get to speak to the prospect, faculty pairing is irrelevant. I kept any record that ended in Yes Pledge (specified-dollar pledge or credit card gift), Maybe (unspecified pledge), No Pledge, or a request to be put on the Do Not Call list.
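For anyone who wants to try the same thing on their own export, here is a minimal sketch in pandas. The file name, the column labels (caller_id, prospect_id, talk_seconds, last_result, pledge_amount) and the result codes are all placeholders; every calling platform names these things differently.

```python
import pandas as pd

# Raw export of every call result for the year (hypothetical file and column names).
calls = pd.read_csv("call_results_last_year.csv")

# Keep only the results that represent an actual conversation with the prospect.
# Adjust these codes to whatever your calling software actually records.
conversation_results = {"Yes Pledge", "Maybe", "No Pledge", "Do Not Call"}
convos = calls[calls["last_result"].isin(conversation_results)].copy()
```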

I added two more columns (variables) of data: the faculty of the caller’s area of study, and the faculty of the prospect’s preferred degree. Because not all of our dozen or so faculties are represented in our calling room, I then removed all the records for which no pairing was possible. For example, because I employed no Law or Medicine students, 100% of our Law and Medicine alumni would end up on the “non-paired” side, which would skew the results.

As well, I excluded calls with call lengths of five seconds or less. It is doubtful callers would have had enough time to identify themselves in less than five seconds — therefore those calls do not qualify as conversations.
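Continuing that sketch, the two faculty columns come from lookup tables I’m assuming you can build from your caller and alumni records, and the exclusions described above are just two more filters:

```python
# Attach each caller's faculty and each prospect's faculty of preferred degree.
# caller_faculties and prospect_faculties are hypothetical lookup tables with
# columns (caller_id, caller_faculty) and (prospect_id, prospect_faculty).
convos = convos.merge(caller_faculties, on="caller_id")
convos = convos.merge(prospect_faculties, on="prospect_id")

# Drop prospects whose faculty has no student caller at all (e.g. Law, Medicine),
# since a pairing was never possible for them.
represented = set(convos["caller_faculty"].unique())
convos = convos[convos["prospect_faculty"].isin(represented)]

# Drop calls of five seconds or less; too short to count as a conversation.
convos = convos[convos["talk_seconds"] > 5]
```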

In the end, my data file for analysis contained the results of 6,500 conversations for which a pairing was possible. Each prospect record, each conversation, could have one of two states: ‘Paired’ or ‘Unpaired’. About 1,500 conversations (almost 22%) were paired, as assigned at random by the calling software.
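In the hypothetical frame above, flagging each conversation comes down to a single comparison:

```python
# Each conversation is either 'Paired' (caller and prospect share a faculty) or 'Unpaired'.
convos["pairing"] = (convos["caller_faculty"] == convos["prospect_faculty"]).map(
    {True: "Paired", False: "Unpaired"}
)
print(convos["pairing"].value_counts())
```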

I then compared the Paired and Unpaired groups by talk time (length of call in seconds), participation, and size of pledge.
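The comparison itself is one group-by, again using the made-up column names from the sketches above:

```python
# A gift is a 'Yes Pledge'; pledge_amount is assumed to be blank (NaN) for every other
# result, so the pledge mean and median below are effectively per Yes Pledge.
convos["gave"] = convos["last_result"] == "Yes Pledge"

summary = convos.groupby("pairing").agg(
    conversations=("talk_seconds", "size"),
    avg_talk_seconds=("talk_seconds", "mean"),
    participation_rate=("gave", "mean"),  # Yes Pledges divided by total conversations
    avg_pledge=("pledge_amount", "mean"),
    median_pledge=("pledge_amount", "median"),
)
print(summary.round(2))
```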

1. Talk time

Better rapport-building on the phone implies longer, chattier calls. According to the table, “paired” calls are indeed longer on average, but not by much. A few seconds maybe.

2. Participation rate

The donor rates you see here are affected by all the exclusions, especially that of some key faculties. However, it’s the comparison we’re interested in, and the results are counter-intuitive. Non-paired conversations resulted in a slightly higher rate of participation (number of Yes Pledges divided by total conversations).

3. Average and median pledge

This table is also affected by the exclusion of a lot of alumni who tend to make larger pledges. Again, though, the point is that there is very little difference between the groups in terms of the amount given per Yes pledge.
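Whether gaps this small amount to anything can be checked formally. One option is a chi-squared test of independence on the counts of gifts and non-gifts in each group; a minimal sketch with scipy, still using the hypothetical convos frame from above:

```python
from scipy.stats import chi2_contingency

# 2x2 table of pairing (Paired / Unpaired) against outcome (gift or no gift).
table = pd.crosstab(convos["pairing"], convos["gave"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
```

A large p-value says the observed difference is consistent with chance; if any expected cell count is very small, Fisher’s exact test is the usual fallback.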

The differences between the groups are not significant. Think about the range of values your callers get for common performance metrics (pledge rate, credit card rate, talk time, and so on). There are huge differences! If you want to move the yardsticks in your program, hire mature, friendly, chatty students who love your school and want to talk about it. Train them well. Keep them happy. Reward them appropriately. Retain them year over year so they develop confidence. These are the interventions that matter. Whom they are assigned to call doesn’t matter nearly as much.

Over and above that, pay attention to what matters even more than caller skills: The varying level of engagement of individual alumni. Call alumni who will answer the phone. Call alumni who will give a gift. Stop fussing over the small stuff.

You know what, though? Even faced with this evidence, I will probably continue to pair up students and alumni by faculty. First of all, the callers love it. They say they’re having better conversations, and I’m inclined to believe them. It’s not technically difficult to match up by faculty, so why not? As well, there might be nuances that I overlooked in my study of last year’s data. Maybe the faculty pairings are too broad. (Anytime you find economics in the same faculty as physics, you have to wonder how some people define Science. A discussion for someone else’s blog, perhaps.)

But my study has cast doubt on the usefulness of going to any great length to target alumni by faculty. For example, should I try hard to recruit a student caller from Law or Medicine in order to get the most out of alumni from those faculties? Probably not.

Finally, I caution readers not to interpret my results as being generally applicable. I’m not saying that faculty pairing as a best practice is invalid. You need to determine for yourself whether a practice is part of your core strategy, or just a refinement, or completely useless. As I opined in my previous post (Are we too focused on trivia?), I suspect a lot of Annual Fund professionals aren’t making these distinctions.

The answers are in the data. Go find them.

8 Comments

  1. Interesting. Did you run this through a chi-squared test? What were your findings?

    Comment by Adam Gluntz — 3 October 2011 @ 2:54 pm

    • I don’t have my data file in front of me now, but I can tell you that, where appropriate, I did look at chi-squared, and the verdict was that observed differences between the groups were not significant.

      Comment by kevinmacdonell — 3 October 2011 @ 3:00 pm

  2. Did you look at the age of the alum (how many years removed from the campus)? I would think that if the caller and alum have some common points of reference, then they would be more likely to connect. If the whole campus and all of the faculty have changed, then they would have less to talk about, at least where the school is concerned.

    It may not mean more giving but it might be a point of engagement for younger alum to keep them involved in campus life and move them up the giving ladder. And also a way to recapture disconnected young alum.

    Older alum would also have broader interests and so ‘like’ pairings may be less important?

    Comment by artem1s — 4 October 2011 @ 9:53 am

    • The age of the alum might or might not make a difference — I didn’t look at groups of differing age, and I don’t plan to, either. What I really want to see is other people putting their suppositions to the test by analyzing their own data. If anyone thinks age makes a difference with regard to faculty pairings, then study it, and prove me wrong. I would like nothing better! I don’t want to be shown to be right — I want to encourage people to understand their constituencies and their programs better by seeking facts in the data instead of relying on ‘common sense’, gut instinct, tradition, or what-have-you.

      Comment by kevinmacdonell — 4 October 2011 @ 10:55 am

  3. I have found that you have to think about how engaged the students are and how much they identify with their faculty during school, to judge whether having a student from the same area will make sense.

    For instance, we’ve found that pairing Law alumni with a law student is very effective. We nearly tripled phonathon pledges that way this last year, and the donors and callers loved it. This seems to be because it is a smaller faculty, the career stream after graduation is fairly narrow (most become lawyers), and there is strong engagement in the program. The law student knows how to talk to lawyers, get past the secretaries, etc., and enjoys it. For the alum, talking to a current law student brings back good memories – because the memories of law school are already strong. And the student can sincerely ask for career advice, talk about their thesis, or give updates on the alum’s favourite profs.

    Other faculties where this would also work well are Engineering and Business. They have a similar culture that breeds strong affinity with that faculty, and so it makes sense to put a student from the faculty on the phone with the alums.

    Comment by Mira — 5 October 2011 @ 3:42 pm

    • Mira, I’m glad to see that faculty pairing can work in certain scenarios, and that it works for you. I just have one quibble with your comment. You say, “They have a similar culture that breeds strong affinity with that faculty, and so it makes sense to put a student from the faculty on the phone with the alums.” It makes intuitive sense, it seems like common sense — but my point is, when the results are all in, does it still make sense? Is your belief that it works, and your anecdotal evidence that it works, backed up by evidence that it works? Because the answer is going to be there in the numbers, right? From what you say about Law, you’ve got the numbers, so I’m happy to hear that.

      I should back off a little and add that I may be asking too much of people who are running their programs manually. (I don’t know what your setup is.) The data is simply not available in the same way as it is with an automated program. Manual programs may simply have to run according to industry best practices, rules of thumb, and intuition. These guides may not always produce great results, but managers of manual programs can only do so much.

      If there’s anything I would like to convey to people who work in Annual Giving, though, it’s that we really have to let go of “yeah, that makes sense” and of depending so much on speculation about what causes people to give, and instead study historical data to distinguish between the things that are associated with giving and the things that don’t make much difference.

      Comment by kevinmacdonell — 5 October 2011 @ 4:28 pm

  4. Thanks for the reply, Kevin. I agree about the need to put more emphasis on historical data and on distinguishing what actually makes a difference. One thing that is for sure in the law case is that something is working well. Whether it is that the alums like being called by a student from their faculty, or that the student is more motivated and confident in doing their job, it worked. It’s important, though, to figure out what that factor is before going to the trouble of assigning calls this way across the board.

    We are an automated shop here, but still, it has only been about three years since we started taking data mining and its possibilities seriously. So some of the data from the past isn’t set up correctly for us to really be able to compare.

    I think that as we draw more attention to this need, more of the resources and administrative practices will fall into place to make data mining and analytics more accurate and manageable.

    Comment by Mira — 5 October 2011 @ 4:43 pm

  5. Hi Kevin – interesting post! I started out being quite granular, but saw no perceivable difference when I had the chance to analyse it, and so stopped being so rigid about assigning callers. Now we do it on an Arts, Sciences, Medics split – so arts students talk to arts alumni, sciences to sciences, etc. There has been no perceivable downturn in results from this looser matching, and it’s simpler to run. We also split donors the same way.

    What we did find when it was tighter was that there was certainly a big difference in propensity to give among alumni from different departments, and that if a caller was getting predominantly alumni from a low-propensity department (such as Philosophy for us) because that was what they were studying, we were actually restricting their chances of getting gifts. The only exception for us was medics, because they got shirty when not talking to a medical student, and actually complained about it!

    Comment by Adrian Salmon (@adriansalmon) — 12 October 2011 @ 7:46 am

