CoolData blog

7 June 2014

A fresh look at RFM scoring

Filed under: Annual Giving, John Sammis, Peter Wylie, RFM — kevinmacdonell @ 7:08 pm

Guest post by Peter B. Wylie and John Sammis

Back in February and March, Kevin MacDonell published a couple of posts about RFM for this blog (Automate RFM scoring of your donors with this Python script and An all-SQL way to automate RFM scoring). If you’ve read these, you know Kevin was talking about a quick way to amass the data you need to compute measures of RECENCY, FREQUENCY, and MONETARY AMOUNT for a particular set of donors over the last five fiscal years.

But how useful, really, is RFM? This short paper highlights some key issues with RFM scoring, but ends on a positive note. Rather than chucking it out the window, we suggest a new twist that goes beyond RFM to something potentially much more useful.

Download the PDF here: Why We Are Not in Love With RFM

31 May 2014

Presenting at a conference: Why the pain is totally worth it

One morning some years ago, when I was a prospect researcher, I was sitting at my desk when I felt a stab of pain in my back. I’d never had serious back pain before, but this felt like a very strong muscle spasm, low down and to one side. I stood up and stretched a bit, hoping it would go away. It got worse — a lot worse.

I stepped out into the hallway, rigid with pain. Down the hall, standing by the photocopier waiting for her job to finish, was Bernardine. She had a perceptive eye for stuff, especially medical stuff. She glanced in my direction and said, “Kidney stone.”

An hour later I was lying on a hospital gurney getting a Toradol injection and waiting for an X-ray. It was indeed a kidney stone, and not a small one.

This post is not about my kidney stone. But it is a little bit about Bernardine. Like I said, she knew stuff. She diagnosed my condition from 40 feet away, and she was also the first person to suggest that I should present at a conference.

At that time, there were few notions that struck terror in my heart like the idea of talking in front of a roomful of people. I thought she was nuts. ME? No! I’d rather have another kidney stone.

But Bernardine had also given me my first copy of Peter Wylie’s little blue book, “Data Mining for Fundraisers.” With that, and the subsequent training I had in data mining, I was hooked — and she knew it. Eventually, my absorption with the topic and my enthusiasm to talk about it triumphed over my doubts. I had something I really wanted to tell people about, and the fear was something I needed to manage. Which I did.

To date I’ve done maybe nine or ten conference presentations. I am not a seasoned presenter, nor has public speaking become one of my strengths. But I do know this: Presenting stuff to my counterparts at other institutions has proven one of the best ways to understand what it is I’m doing. These were the few times I got to step back and grasp not only the “how” of my work, but the “why”.

This is why I recommend it to you. The effort of explaining a project you’ve worked on to a roomful of people you’re meeting for the first time HAS to force some deeper reflection than you’re used to. Never moving beyond the company of your co-workers means you’re always swimming in the same waters of unspoken assumptions. Creating a presentation forces you to step outside the fishbowl, to see things from the perspective of someone you don’t know. That’s powerful.

Yes, preparing a presentation is a lot of work, if you care about it enough. But presenting can change your relationship with your job and career, and through that it can change your life. It changed mine. Blogging also changed my life, and I think a lot more people should be blogging too. (A post for another day.) Speaking and writing have rewarded me with an interesting career and professional friendships with people far and wide. These opportunities are not for the exceptional few; they are open to everyone.

I mentioned earlier that Bernardine introduced me to Peter Wylie's book. Back then I could never have predicted that one day he and I would co-author another book. But there it is. It gave me great pleasure to give credit to Bernardine in the acknowledgements; I put a copy in the mail to her just this week. (I also give credit to my former boss, Iain. He was the one who drove me to the hospital on the day of the kidney stone. That's not why he's in the acknowledgements, FYI.)

Back to presenting … Peter and I co-presented a workshop on data mining for prospect researchers at the APRA-Canada conference in Toronto in 2010. I’m very much looking forward to co-presenting with him again this coming October in Chicago. (APRA-Illinois Data Analytics Fall Conference … Josh Birkholz will also present, so I encourage you to consider attending.)

Today, playing the role of a Bernardine, I am thinking of who I ought to encourage to present at a conference. I have at least one person in mind, who has worked long and hard on a project that I know people will want to hear about. I also know that the very idea would make her vomit on her keyboard.

But I’ve been there, and I know she will be just fine.

29 May 2014

Nate Silver on age-guessing from first names

Filed under: CoolData — kevinmacdonell @ 3:22 pm

Friend and colleague Greg Pemberton (@GregPemberton) pointed me to this interesting post on the FiveThirtyEight blog: How to Tell Someone’s Age When All You Know Is Her Name. Wow, I thought … that rings a bell! I wrote a blog post on exactly that topic: How to infer age, when all you have is a name. That was nearly four years ago, and I’ve written a couple more posts on the subject since then.

I’m not suggesting that there’s any borrowing going on. The idea is hardly rocket science and has undoubtedly occurred to many people independently long before I got my noggin around it. So why am I posting this?

Ah.

I am a fan of Nate Silver and his blog. I devoured his book, “The Signal and the Noise,” shortly after it came out. And last year I dragged my butt out of bed in the early morning after an awesome conference reception with multiple open bars to hear him deliver a keynote address. So I was very interested to read his post, co-authored with Allison McCann.

And yes, I may also have been interested in posting a comment in response, with links to CoolData. I am a blogger, after all. So I carefully prepared my comment, and hit ‘Go’. What happened then? A Facebook fail!

[Screenshot: FiveThirtyEight's comment form, demanding a Facebook login]

Really, Nate? I need a Facebook account to post a comment? I shut down my Facebook account years ago, for all sorts of reasons, and I don’t plan to go back. (Maybe I shouldn’t criticize. People can’t leave comments on CoolData at all. But Facebook??)

My comment needs a home. Why not right here? Thank you for reading.

I use these age/name/sex patterns to infer likely age in our university database work. We already know the name, gender and age for most people, so we can calculate mean and median ages for all combinations of name and sex, and apply those to any new records that are lacking this data (such as prospective donors). This is helpful, as ‘age’ is strongly correlated with both the likelihood of making a donation and the size of the gift. Gender can be an important factor … A number of first names have “flipped gender” over the decades, so a given name tends to belong either to a relatively old man or a relatively young woman. Examples I know of include Ainslie, Isadore, Sydney, Shelly, and Brooke.
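
As a rough illustration of how that lookup might work in pandas — a minimal sketch with invented data and hypothetical column names:

import pandas as pd

# Invented data: records where name, sex and age are all known
known = pd.DataFrame({'first_name': ['Gladys', 'Gladys', 'Brooke', 'Brooke'],
                      'sex': ['F', 'F', 'F', 'F'],
                      'age': [78, 83, 29, 24]})

# Median age for every name/sex combination in the database
lookup = known.groupby(['first_name', 'sex'])['age'].median().reset_index()
lookup.columns = ['first_name', 'sex', 'inferred_age']

# Records missing age pick up the inferred value via a merge on name and sex
unknown = pd.DataFrame({'first_name': ['Gladys', 'Brooke'], 'sex': ['F', 'F']})
print(pd.merge(unknown, lookup, how='left', on=['first_name', 'sex']))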

I have written about this a few times:

How to infer age, when all you have is a name

New twists on inferring age from first name

Putting an age-guessing trick to the test

 

POSTSCRIPT

On re-reading the FiveThirtyEight post, I was struck by this passage, which I didn’t notice earlier: “There are quite a lot of websites devoted to tracking the popularity of American baby names over time. … But we haven’t seen anyone ask the age of living Americans with a given name.”

Oh really. … Let me Google that for you.

 

POST-POSTSCRIPT

When I first posted the “Let me Google that for you” link, CoolData was at the top of the results. It has since been crowded out by FiveThirtyEight and others. Such are the benefits of a large web presence and the resources to optimize search results.

 

POST-POST-POSTSCRIPT

One thing was puzzling me … In my stats, I have seen a lot of people clicking on the link to FiveThirtyEight from my blog, but I also noticed that almost 200 people (to date) have come to CoolData from FiveThirtyEight. I couldn’t figure out how — there was no link to CoolData that I could find. Well, I’ve found it. CoolData is referenced in the first footnote below the FiveThirtyEight post on age-guessing from names. One has to click the plus sign in the circle (before the comments) to see the footnotes. So — thanks, Nate!


16 May 2014

Find out “who knows who” in your database with SQL and Python

Filed under: Fun, Prospect identification, Python, SQL — kevinmacdonell @ 12:30 pm

Figuring out connections is a big part of major gift fundraising. A development office might have a great list of prospective donors in hand, but those prospects will languish untouched if the right people who can make introductions are never found. While cultivating relationships with well-connected people is clearly important, research also plays a role in suggesting possible go-betweens. And to the extent that research involves sifting through data to find connections, it’s possible to automate some of the work.

Wouldn’t it be great to have software that asks you to name a prospective donor, and then spits out a list of people in your database who are most likely to be associates of that person? Sure it would!

Social media applications do a version of this, suggesting “people you may know,” based on shared connections and attributes such as schools attended, professional group memberships, employment information, and stated interests. I first encountered a higher-ed advancement application of this concept via Erich Preisendorfer of the University of New Hampshire. A while back, Erich explained his approach in a guest post for CoolData. (Read it here: Finding connections to your major gift prospects in your data.) I was recently in touch with Erich and he reports that the tool is still in use by prospect researchers at UNH and is still turning up valuable connections.

I tried to build something similar using our reporting tools, but it was a slow, lumbering beast, and not very user-friendly for the people most likely to be interested in using it — prospect researchers, mainly. I never gave up on the idea of building something better, and finally on a quiet weekend a few months ago, I found two or three hours to mess around with a Python script. Given the ID of a person, this script makes multiple queries of the database and joins the results together to produce a ranked list of other people who might be associated with the input ID.

When the script is run, it prompts the user for an ID, then an Oracle database password. I’ll get into more detail later, but here is what the output looks like in the Python shell of IDLE, the development tool that I used to write the script, when I use my own ID as the one to find connections for. (I have blanked out the ID and password.)

[Screenshot: the script's output in the IDLE Python shell, with the ID and password blanked out]

The script goes through these steps:

  1. It takes the ID provided by the user and queries the database to display the name associated with that ID (that would be me, in this example).
  2. It checks to see what events I attended, and produces a list (in memory) of all the IDs of constituents who attended those same events.
  3. It checks to see what activity codes I have attached to my record — varsity sports teams, student clubs and so on. I have none, but if I did, the script would return a list of all the people who shared the same activity codes. Again, this is saved in memory for use later on.
  4. The same routine is followed for employers (past and present), all class years, degrees, referring high school, cities extracted from mailing addresses (again, all past and present addresses), age, and database cross-references.
  5. All of these lists are combined into a single “object” in memory. The number of instances of each distinct ID is counted up. Every ID retrieved is a “hit”. The idea is, the more hits for any one retrieved ID, the more likely that ID belongs to someone who knows me (or knows the person who we are searching on).
  6. Because the resulting list is huge (more than 31,000 records just for me — and I have very little data attached to my record), the script asks the user to limit by a minimum number of hits. I’ve entered 3 here.
  7. The result is a file with 47 IDs. The script then does a final query on the database to retrieve the names and other details for these people, and saves the results as a .csv file that can be opened in Excel.

Here are the top few rows of the output. I have removed some columns for simplicity, and blanked out some personal information. The count_sum column is the “hit count”, and the list is sorted in descending order by count_sum. The person being searched on is always at the top of the list. This is not an especially impressive example, because I am not an alumnus. My list of likely connections is made up of co-workers and people from my hometown who happen to be the same age as me. The woman in the third row of the data was someone I had a crush on in junior high school but haven’t seen in 27 years … If I were a prospective donor, this search would be a bust. Clearly this method requires quality alumni data in order to be useful.

[Screenshot: top rows of the results file, sorted in descending order by count_sum]

That’s how it functions. We will dig into the actual code shortly. But first, let me say that my primary aim is to share this with you as an idea. I can’t help you with installing Python or debugging or any of that. All I can do is show you my code and hope you can modify it to make it work for you. If you think this is cool and useful but don’t want to learn any code, I suggest you team up with someone from IT or your Advancement Services team, or a student or prof in the Computing Science department.

As I said, I’m using Python and a code library for handling data called pandas. (For some background, see my earlier posts: Getting bitten by Python and Automate RFM scoring of your donors with this Python script.) The great thing about pandas is that you can load data into in-memory structures called DataFrames, which are like data tables in that you can perform joins and sort, rank, de-dupe, add calculated columns, and so on.
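
If you have never used pandas, here is a toy illustration (invented data) of the append-and-count pattern the script relies on:

import pandas as pd

a = pd.DataFrame({'PIDM': [1, 2, 2, 3]})   # e.g., people who attended my events
b = pd.DataFrame({'PIDM': [2, 3]})         # e.g., people who share my degree

combined = pd.concat([a, b])               # append one list of IDs to the other
combined['count'] = 1
hits = combined.groupby('PIDM')['count'].sum()   # hits per PIDM: 1 -> 1, 2 -> 3, 3 -> 2
print(hits)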

A few closing comments before I show you the code:

  • Erich at UNH goes the extra step of matching on date range along with employer. I think a shared corporate connection is worth a point, but being at the same place of employment at the same time is obviously worth a lot more. If you have other ideas for things to match on, I’d love to hear them.
  • Those of you who aren’t into Python but know SQL can probably make an all-SQL version of this. I just like Python.
  • In the code, I have inserted comments to help guide you through. Everything to the right of a “#” is a comment.
  • The script includes multiple individual queries. You may wish to lump them together using UNION ALL, appending all the results sets together right in the body of the SQL. Up to you. (A rough sketch follows this list.)
  • The queries that pull records based on shared events, degrees and so on all have the same basic structure: First, a WITH clause gathers the codes for the events (or degrees or cities) that the target ID has attached to his or her record, then a SELECT statement finds all the other IDs who have those same codes attached to their records.
  • In order to break the SQL up into multiple lines, I have enclosed the string between groups of three single quotes. You’ll see that whenever I have to insert a variable into the SQL (such as ‘target_pidm’), I have to use what looks like a confusing jumble of single and double quotes — but the ordering is very precise. (The sketch after this list shows a bind-variable alternative that avoids the jumble.)
  • I was going to simplify the SQL for you, but didn’t see the point. One way or the other, you will still have to modify it. So it’s much the same as what I actually use. You’ll also notice that I tend not to use aliases for tables and views, which makes the queries text-heavy. Sorry.
  • One limitation of this tool is that it will run only on a machine that has a Python installation. For that reason I haven’t been able to simply give it to someone (in Prospect Research, say) and have them run it themselves. The next step might be bundling it as a stand-alone executable file (an .exe) that could be shared with others, but I haven’t gotten that to work yet.
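
On those two points, here is a rough, untested sketch of what consolidating the first two queries with UNION ALL might look like. It also passes the PIDM as a cx_Oracle bind variable (:pidm) instead of pasting it into the string, which avoids the quote-jumble entirely. (The connection and target_pidm variables are the same ones defined in the script below; adapt the views to your own schema.)

SQL = '''
WITH events AS (
SELECT BANINST1.AA_EVENTS.EVNT_CRN
FROM BANINST1.AA_EVENTS
WHERE BANINST1.AA_EVENTS.PIDM = :pidm ),
activities AS (
SELECT BANINST1.AA_ACTIVITY_AND_LEADERS.ACTP_CODE
FROM BANINST1.AA_ACTIVITY_AND_LEADERS
WHERE BANINST1.AA_ACTIVITY_AND_LEADERS.PIDM = :pidm )
SELECT BANINST1.AA_EVENTS.PIDM
FROM events, BANINST1.AA_EVENTS
WHERE events.EVNT_CRN = BANINST1.AA_EVENTS.EVNT_CRN (+)
UNION ALL
SELECT BANINST1.AA_ACTIVITY_AND_LEADERS.PIDM
FROM activities, BANINST1.AA_ACTIVITY_AND_LEADERS
WHERE activities.ACTP_CODE = BANINST1.AA_ACTIVITY_AND_LEADERS.ACTP_CODE (+)'''

cursor = connection.cursor()
cursor.execute(SQL, {'pidm': int(target_pidm)})   # int() in case pandas hands back a numpy type
hits = pd.DataFrame(cursor.fetchall(), columns=['PIDM'])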

AND, one final, final comment: This is probably the last piece on Python and database querying I’m going to write for a while. In future posts I will tone down the technical stuff and get back to talking about data and predictive modelling.

 

# Import required code libraries

import pandas as pd
import numpy as np
import pandas.io.sql as psql
import cx_Oracle, os


# User prompts, and connect to Oracle database

target = raw_input('Enter ID to find connectors for: ')

user = raw_input('Enter username for database: ')

password = raw_input('Enter password for database: ')

connectstr = user + '/' + password + '@database:0000'

connection = cx_Oracle.connect(connectstr)


# Query to get the database PIDM and name for the entered ID.

SQL = '''
SELECT
BANINST1.AA_ENTITY.PIDM,
BANINST1.AA_ENTITY.ID,
BANINST1.AA_ENTITY.FIRST_NAME,
BANINST1.AA_ENTITY.MI,
BANINST1.AA_ENTITY.LAST_NAME
FROM BANINST1.AA_ENTITY
WHERE
(BANINST1.AA_ENTITY.CHANGE_IND IS NULL
AND BANINST1.AA_ENTITY.ID=''' + "'" + target + "'" + ''')'''

df = psql.frame_query(SQL, con=connection)

name = df['FIRST_NAME'][0] + ' ' + df['LAST_NAME'][0]

print('Searching for connections to: ' + name)

target_pidm = max(df['PIDM'])    # PIDM, not ID, is the primary key in my database.


# Find events

SQL = '''
WITH events AS (
SELECT
BANINST1.AA_EVENTS.EVNT_CRN
FROM BANINST1.AA_EVENTS
WHERE BANINST1.AA_EVENTS.PIDM = ''' + str(target_pidm) + ''')
SELECT
events.EVNT_CRN,
BANINST1.AA_EVENTS.PIDM
FROM
events,
BANINST1.AA_EVENTS
WHERE
events.EVNT_CRN = BANINST1.AA_EVENTS.EVNT_CRN (+)'''

events = psql.frame_query(SQL, con=connection)

print("Found " + str(len(events)) + " people who attended the same events.")


# Find activities

SQL = '''
WITH activities AS (
SELECT
BANINST1.AA_ACTIVITY_AND_LEADERS.ACTP_CODE
FROM
BANINST1.AA_ACTIVITY_AND_LEADERS
WHERE
BANINST1.AA_ACTIVITY_AND_LEADERS.PIDM = ''' + str(target_pidm) + ''')
SELECT
activities.ACTP_CODE,
BANINST1.AA_ACTIVITY_AND_LEADERS.PIDM
FROM
activities,
BANINST1.AA_ACTIVITY_AND_LEADERS
WHERE
activities.ACTP_CODE = BANINST1.AA_ACTIVITY_AND_LEADERS.ACTP_CODE (+)'''

activities = psql.frame_query(SQL, con=connection)

print("Found " + str(len(activities)) + " people with the same Activity codes.")


# Find employers

SQL = '''
WITH employers AS (
SELECT
BANINST1.AA_ALL_EMPLOYMENT.EMPR_NAME
FROM
BANINST1.AA_ALL_EMPLOYMENT
WHERE
BANINST1.AA_ALL_EMPLOYMENT.PIDM = ''' + str(target_pidm) + ''')
SELECT
employers.EMPR_NAME,
BANINST1.AA_ALL_EMPLOYMENT.PIDM
FROM
employers,
BANINST1.AA_ALL_EMPLOYMENT
WHERE
employers.EMPR_NAME = BANINST1.AA_ALL_EMPLOYMENT.EMPR_NAME (+)'''

employers = psql.frame_query(SQL, con=connection)

print("Found " + str(len(employers)) + " people with the same Employers.")


# Find class years

SQL = '''
WITH classes AS (
SELECT
BANINST1.AA_DEGREE.ACYR_CODE
FROM
BANINST1.AA_DEGREE
WHERE
BANINST1.AA_DEGREE.PIDM = ''' + str(target_pidm) + ''' )
SELECT
classes.ACYR_CODE,
BANINST1.AA_DEGREE.PIDM
FROM
classes,
BANINST1.AA_DEGREE
WHERE
classes.ACYR_CODE = BANINST1.AA_DEGREE.ACYR_CODE (+)'''

classes = psql.frame_query(SQL, con=connection)

print("Found " + str(len(classes)) + " people with the same class year(s).")


# Find degrees

SQL = '''
WITH degrees AS (
SELECT
BANINST1.AA_DEGREE.DEGC_CODE
FROM
BANINST1.AA_DEGREE
WHERE
BANINST1.AA_DEGREE.PIDM = ''' + str(target_pidm) + '''
)
SELECT
degrees.DEGC_CODE,
BANINST1.AA_DEGREE.PIDM
FROM
degrees,
BANINST1.AA_DEGREE
WHERE
degrees.DEGC_CODE = BANINST1.AA_DEGREE.DEGC_CODE (+)'''

degrees = psql.frame_query(SQL, con=connection)

print("Found " + str(len(degrees)) + " people with the same degree(s).")


# Find high school

SQL = '''
WITH high_school AS (
SELECT
BANINST1.AA_HIGH_SCHOOL.HIGH_SCHOOL_CODE
FROM
BANINST1.AA_HIGH_SCHOOL
WHERE
BANINST1.AA_HIGH_SCHOOL.PIDM = ''' + str(target_pidm) + '''
)
SELECT
high_school.HIGH_SCHOOL_CODE,
BANINST1.AA_HIGH_SCHOOL.PIDM
FROM
high_school,
BANINST1.AA_HIGH_SCHOOL
WHERE
high_school.HIGH_SCHOOL_CODE = BANINST1.AA_HIGH_SCHOOL.HIGH_SCHOOL_CODE (+)'''

hs = psql.frame_query(SQL, con=connection)

print("Found " + str(len(hs)) + " people from the same high school.")


# Find cities

SQL = '''
WITH cities AS (
SELECT
SATURN.SPRADDR.SPRADDR_CITY
FROM
SATURN.SPRADDR
WHERE
SATURN.SPRADDR.SPRADDR_PIDM = ''' + str(target_pidm) + '''
)
SELECT DISTINCT
cities.SPRADDR_CITY,
SATURN.SPRADDR.SPRADDR_PIDM AS PIDM
FROM
cities,
SATURN.SPRADDR
WHERE
cities.SPRADDR_CITY = SATURN.SPRADDR.SPRADDR_CITY (+)'''

cities = psql.frame_query(SQL, con=connection)

print("Found " + str(len(cities)) + " people with addresses in the same city/cities.")


# Find age

SQL = '''
WITH age AS (
SELECT
BANINST1.AA_ENTITY.AGE
FROM
BANINST1.AA_ENTITY
WHERE
BANINST1.AA_ENTITY.PIDM = ''' + str(target_pidm) + '''
)
SELECT
age.AGE,
BANINST1.AA_ENTITY.PIDM
FROM
age,
BANINST1.AA_ENTITY
WHERE
age.AGE = BANINST1.AA_ENTITY.AGE (+)'''

age = psql.frame_query(SQL, con=connection)

print("Found " + str(len(age)) + " people of the same age.")


# Find cross-references

SQL = '''
WITH xref AS (
SELECT
ALUMNI.APRXREF.APRXREF_XREF_PIDM
FROM
ALUMNI.APRXREF
WHERE
ALUMNI.APRXREF.APRXREF_PIDM = ''' + str(target_pidm) + '''
)
SELECT
ALUMNI.APRXREF.APRXREF_XREF_PIDM,
ALUMNI.APRXREF.APRXREF_PIDM AS PIDM
FROM
xref,
ALUMNI.APRXREF
WHERE
xref.APRXREF_XREF_PIDM = ALUMNI.APRXREF.APRXREF_XREF_PIDM (+)'''

xref = psql.frame_query(SQL, con=connection)

print("Found " + str(len(xref)) + " people who share the same cross-references.")


# Concatenate all results

print("Joining results.")

results = pd.concat([events, activities, employers, classes, degrees, hs, cities, age, xref])

# Remove unneeded columns

results = results.drop(['ACTP_CODE', 'ACYR_CODE', 'AGE', 'DEGC_CODE', 'EMPR_NAME', 'EVNT_CRN', 'HIGH_SCHOOL_CODE', 'SPRADDR_CITY', 'APRXREF_XREF_PIDM'], axis=1)

print("Found " + str(len(results)) + " total results.")


## Count up the number of times each PIDM appears in the results. (A few steps here.)

# Add a column called 'count' and populate every row with a '1'

results['count'] = 1

# Group by PIDM and sum on 'count' to get 'hits per PIDM'.

results = results.join(results.groupby(results['PIDM'])['count'].sum(), on=['PIDM'], rsuffix='_sum')

# De-dupe rows in order to get one result per PIDM.

results = results.drop_duplicates()

# Sort results in descending order by the number of hits.

results.sort(columns='count_sum', axis=0, ascending=False, inplace=True)

# The file is potentially huge, so prompt the user to limit it by hit count.

cutoff = raw_input('How many hits would you like to limit by? ')

# Keep rows with minimum number of hits.

results = results[results['count_sum'] >= int(cutoff)]

# Get rid of the 'count' column - not needed.

results = results.drop(['count'], axis=1)

print("Found " + str(len(results)) + " unique results, with at least " + cutoff + " hits.")


## Join names to results

print('Preparing results and saving.')

SQL = '''
SELECT
BANINST1.AA_ENTITY.PIDM,
BANINST1.AA_ENTITY.ID,
BANINST1.AA_ENTITY.NAME_PREFIX,
BANINST1.AA_ENTITY.FIRST_NAME,
BANINST1.AA_ENTITY.MI,
BANINST1.AA_ENTITY.LAST_NAME,
BANINST1.AA_ENTITY.NAME_SUFFIX,
BANINST1.AA_ENTITY.AGE,
BANINST1.AA_ENTITY.DEAD_IND,
BANINST1.AA_DONOR_TYPE_HIGHEST.DONR_CODE
FROM
BANINST1.AA_ENTITY,
BANINST1.AA_DONOR_TYPE_HIGHEST
WHERE
BANINST1.AA_ENTITY.PIDM = BANINST1.AA_DONOR_TYPE_HIGHEST.PIDM (+)
AND BANINST1.AA_ENTITY.CHANGE_IND IS NULL
'''

results = pd.merge(results, psql.frame_query(SQL, con=connection), how='left', on=['PIDM'], sort=False)

results.to_csv(target + ' - ' + name + '.csv')

print('All done!')

connection.close()

 

22 April 2014

Score! ships tomorrow

Filed under: Book, Score! — kevinmacdonell @ 7:29 pm

The printer delivered early, and a copy of Score! showed up at CASE headquarters in Washington DC this afternoon.

(Doug Goldenberg-Hart, CASE's Director, Editorial Projects, sent this photo to prove it.)

To everyone who put in an advance order, your copy will be available to ship tomorrow (Wednesday).

Peter Wylie and I sincerely hope you enjoy it.

Click here to order.


13 April 2014

Optimizing lost alumni research, with a twist

Filed under: Alumni, Best practices, engagement, External data, Tableau — Tags: , , , , — kevinmacdonell @ 9:47 am

There are data-driven ways to get the biggest bang for your buck from the mundane activity of finding lost alumni. I’m going to share some ideas on optimizing for impact (which should all sound like basic common sense), and then I’m going to show you a cool data way to boost your success as you search for lost alumni and donors (the “twist”). If lost alumni is not a burning issue for your school, you still might find the cool stuff interesting, so I encourage you to skip down the page.

I’ve never given a great deal of thought to how a university’s alumni records office goes about finding lost alumni. I’ve simply assumed that having a low lost rate is a good thing. More addressable (or otherwise contactable) alumni is good: More opportunities to reengage and, one hopes, attract a gift. So every time I’ve seen a huge stack of returned alumni magazine covers, I’ve thought, well, it’s not fun, but what can you do. Mark the addresses as invalid, and then research the list. Work your way through the pile. First-in, first-out. And then on to the next raft of returned mail.

But is this really a wise use of resources? John Smith graduates in 1983, never gives a dime, never shows up for a reunion … is there likely to be any return on the investment of time to track him down? Probably not. Yet we keep hammering away at it.

All this effort is evident in my predictive models. Whenever I have a variable that is a count of ‘number of address updates’, I find it is correlated with giving — but only up to a point. Beyond a certain number of address updates, the correlation turns sharply negative. The reason is that while highly engaged alumni are conscientious about keeping alma mater informed of their whereabouts, alumni who are completely unengaged are perpetually lost. The ones who are permanently unreachable get researched the most and are submitted for data appends the most. Again and again a new address is entered into the database. It’s often incorrect — we got the wrong John Smith — so the mail comes back undeliverable, and the cycle begins again.

Consider that at any time there could be many thousands of lost alumni. It’s a never-ending task. Every day people in your database pull up stakes and move without informing you. Some of those people are important to your mission. Others, like Mr. Smith from the Class of 1983, are not. You should be investing in regular address cleanups for all records, but when it comes down to sleuthing for individuals, which is expensive, I think you’d agree that those John Smiths should never come ahead of keeping in touch with your loyal donors. I’m afraid that sometimes they do — a byproduct, perhaps, of people working in silos, pursuing goals (e.g., low lost rates) that may be laudable in a narrow context but are not sufficiently aligned with the overall mission.

Here’s the common sense advice for optimizing research: ‘First-in, first-out’ is the wrong approach. Records research should always be pulling from the top of the pile, searching for the lost constituents who are deemed most valuable to your mission. Defining “most valuable” is a consultative exercise that must take Records staff out of the back office and face-to-face with fundraisers, alumni officers and others. It’s not done in isolation. Think “integration”.

The first step, then, is consultation. After that, all the answers you need are in the data. Depending on your tools and resources, you will end up with some combination of querying, reporting and predictive modelling to deliver the best research lists possible, preferably on a daily basis. The simplest approach is to develop a database query or report that produces the following lists in whatever hierarchical order emerges from consultation. Research begins with List 1 and does not proceed to List 2 until everyone on List 1 has been found. An example hierarchy might look like this (a rough query sketch follows the list):

  1. Major gift and planned giving prospects: No major gift prospect under active management should be lost (and that’s not limited to alumni). Records staff MUST review their lists and research results with Prospect Research and/or Prospect Management to ensure integrity of the data, share research resources, and alert gift officers to potentially significant events.
  2. Major gift donors (who are no longer prospects): Likewise, these folks should be 100% contactable. In this case, Records needs to work with Donor Relations.
  3. Planned Giving expectancies: I’m not knowledgeable about Planned Giving, but it seems to me that a change of address for an expectancy could signal a significant event that your Planned Giving staff ought to know about. A piece of returned mail might be a good reason to reach out and reestablish contact.
  4. Annual Giving Leadership prospects and donors: The number of individuals is getting larger … but these lists should be reviewed with Annual Fund staff.
  5. Annual Fund donors who gave in the past year.
  6. Annual Fund donors who gave in the year previous.
  7. All other Annual Fund donors, past five or 10 years.
  8. Recent alumni volunteers (with no giving).
  9. Recent event attendees (reunions, etc.) — again, who aren’t already represented in a previous category.
  10. Young alumni with highest scores from predictive models for propensity to give (or similar).
  11. All other non-donor alumni, ranked by predictive model score.
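
Here is a rough sketch, in generic SQL, of how a few of these tiers might be expressed in a single query. Every table and column name (lost_constituents, prospect_status, and so on) is a hypothetical stand-in for whatever your own schema uses:

SELECT
id,
name,
CASE
WHEN prospect_status IN ('MG', 'PG') THEN 1    -- active major gift / planned giving prospects
WHEN lifetime_giving >= 25000 THEN 2           -- major gift donors no longer under management
WHEN pg_expectancy_ind = 'Y' THEN 3            -- planned giving expectancies
WHEN last_gift_fiscal_year >= 2013 THEN 4      -- recent Annual Fund donors
ELSE 9                                         -- everyone else
END AS research_priority
FROM lost_constituents
ORDER BY research_priority, propensity_score DESC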

Endless variations are possible. Although I see potential for controversy here, as everyone will feel they need priority consideration, I urge you not to shrink from a little lively discussion — it’s all good. It may be that in the early days of your optimization effort, Annual Fund is neglected while you clean up your major gift and planned giving prospect/donor lists. But in time, those high-value lists will become much more manageable — maybe a handful of names a week — and everyone will be well-served.

There’s a bit of “Do as I say, not as I do” going on here. In my shop, we are still evolving towards becoming data-driven in Records. Not long ago I created a prototype report in Tableau that roughly approximates the hierarchy above. Every morning, a data set is refreshed automatically that feeds these lists, one tab for each list, and the reports are available to Records via Tableau Server and a browser.

That’s all fine, but we are not quite there yet. The manager of the Records team said to me recently, “Kevin, can’t we collapse all these lists into a single report, and have the names ranked in order by some sort of calculated score?” (I have to say, I feel a warm glow when I hear talk like that.) Yes — that’s what we want. A hierarchy like the one above suggests exclusive categories, but a weighted score would allow for a more sophisticated ranking. For example, a young but loyal Annual Fund donor who is also a current volunteer might have a high enough score to outrank a major gift prospect who has no such track record of engagement — maybe properly so. Propensity scores could also play a much bigger role.
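
A crude sketch of what that calculated score might look like in pandas. Every input column and weight here is invented for illustration; the real ones would come out of the same consultation described above:

import pandas as pd

# One row per lost record; all column names are hypothetical stand-ins
records = pd.read_csv('lost_records.csv')

# Weighted score: the weights (10, 6, 4, 3) are pure guesses for illustration
records['research_score'] = (
    10 * records['is_mg_prospect'] +        # active major gift prospect (0/1)
    6 * records['years_of_giving'] +        # loyalty: count of fiscal years with a gift
    4 * records['is_current_volunteer'] +   # current volunteer (0/1)
    3 * records['propensity_decile'])       # predictive model score, 1 to 10

# Research works from the top of this list down
records.sort(columns='research_score', ascending=False, inplace=True)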

However it shakes out, records research will no longer start the day by picking up where the previous day’s work left off. It will be a new list every morning, based on the actual value of the record to the institution.

And now for the twist …

Some alumni might not be addressable, but they are not totally lost if you have other information such as an email address. If they are opening your email newsletters, invitations and solicitations, then you might be able to determine their approximate geographic location via the IP address given to them by their internet service provider.

That sounds like a lot of technical work, but it doesn’t have to be. Your broadcast email platform might be collecting this information for you. For example, MailChimp has been geolocating email accounts since at least 2010. The intention is to give clients the ability to segment mailings by geographic location or time zone. You can use it to clue you in to where in the world someone lives when they’ve dropped off your radar.

(Yes, yes, I know you could just email them to ask them to update their contact info. But the name of this blog is CoolData, not ObviousData.)

What MailChimp does is append latitude and longitude coordinates to each email record in your account. Not everyone will have coordinates: At minimum, an alum has to have interacted with your emails in order for the data to be collected. As well, ISP-provided data may not be very accurate. This is not the same as identifying exactly where someone lives (which would be fraught with privacy issues), but it should put the individual in the right city or state.

In the data I’m looking at, about half of alumni with an email address also have geolocation data. You can download this data, merge it with your records for alumni who have no current valid address, and then the fun begins.

I mentioned Tableau earlier. If you’ve got lat-long coordinates, visualizing your data on a map is a snap. Have a look at the dashboard below. I won’t go into detail about how it was produced, except to say that it took only an hour or so. First I queried the database for all our alumni who don’t have a valid preferred address in the database. For this example, I pulled ID, sum of total giving, Planned Giving status (i.e., current expectancy or no), and the city, province/state and country of the alum’s most recent valid address. Then I joined the latitude and longitude data from MailChimp, using the ID as the common key.
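
Here is a minimal sketch of that merge step in pandas; the file and column names are stand-ins for whatever your database query and MailChimp export actually produce:

import pandas as pd

# Lost alumni from the database query: ID, giving, PG status, last valid city/province/country
lost = pd.read_csv('lost_alumni.csv')

# MailChimp export with appended coordinates (assumed column names)
geo = pd.read_csv('mailchimp_geo.csv')[['ID', 'LATITUDE', 'LONGITUDE']]

# Keep only the lost alumni for whom MailChimp has location data
mapped = pd.merge(lost, geo, how='inner', on='ID')

mapped.to_csv('lost_alumni_mapped.csv', index=False)    # this file feeds the Tableau dashboard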

The result was a smallish data file (less than 1,000 records), which I fed into Tableau. Here’s the result, scrubbed of individual personal information — click on the image to get a readable size.

[Dashboard: lost alumni plotted on a world map in Tableau]

The options at top right are filters that enable the user to focus on the individuals of greatest interest. I’ve used Giving and Planned Giving status, but you can include anything — major gift prospect status, age, propensity score — whatever. If I hover my cursor over any dot on the map, a tooltip pops up containing information about the alum at that location, including the city and province/state of the last place they lived. I can also zoom in on any portion of the map. When I take a closer look at a certain tropical area, I see one dot for a person who used to live in Toronto and one for a former Vancouverite, and one of these is a past donor. Likewise, many of the alumni scattered across Africa and Asia last lived in various parts of eastern Canada.

These four people are former Canadians who are now apparently living in a US city — at least according to their ISP. I’ve blanked out most of the info in the tooltip:

[Screenshot: map tooltip, with most details blanked out]

If desired, I could also load the email address into the tooltip and turn it into a mailto link: The user could simply click on the link to send a personal message to the alum.

(What about people who check email while travelling? According to MailChimp, location data is not updated unless it’s clear that a person is consistently checking their email over an extended period of time — so vacations or business trips shouldn’t be a factor.)

Clearly this is more dynamic and interesting for research than working from a list or spreadsheet. If I were a records researcher, I would have some fun filtering down on the biggest donors and using the location to guide my search. Having a clue where they live now should shorten the time it takes to decide that a hit is a real match, and should also improve the number of correct addresses. As well, because a person has to actually open an email in order to register their IP with the email platform, they are also sending a small signal of engagement. The fact they’re engaging with our email is assurance that going to the trouble to research their address and other details such as employment is not a waste of time.

This is a work in progress. My example is based on some manual work — querying the database, downloading MailChimp data, and merging the files. Ideally we would automate this process using the vendor’s API and scheduled data refreshes in Tableau Server. I can also see applications beyond searching for lost alumni. What about people who have moved but whose former address is still valid, so the mail isn’t getting returned? This is one way to proactively identify alumni and donors who have moved.

MailChimp offers more than just geolocation. There’s also a nifty engagement score, based on unsubscribes, opens and click-throughs. Stay tuned for more on this — it’s fascinating stuff.
