CoolData blog

13 April 2014

Optimizing lost alumni research, with a twist

Filed under: Alumni, Best practices, engagement, External data, Tableau — kevinmacdonell @ 9:47 am

There are data-driven ways to get the biggest bang for your buck from the mundane activity of finding lost alumni. I’m going to share some ideas on optimizing for impact (which should all sound like basic common sense), and then I’m going to show you a cool data way to boost your success as you search for lost alumni and donors (the “twist”). If lost alumni is not a burning issue for your school, you still might find the cool stuff interesting, so I encourage you to skip down the page.

I’ve never given a great deal of thought to how a university’s alumni records office goes about finding lost alumni. I’ve simply assumed that having a low lost rate is a good thing. More addressable (or otherwise contactable) alumni are good: more opportunities to reengage and, one hopes, attract a gift. So every time I’ve seen a huge stack of returned alumni magazine covers, I’ve thought, well, it’s not fun, but what can you do. Mark the addresses as invalid, and then research the list. Work your way through the pile. First-in, first-out. And then on to the next raft of returned mail.

But is this really a wise use of resources? John Smith graduates in 1983, never gives a dime, never shows up for a reunion … is there likely to be any return on the investment of time to track him down? Probably not. Yet we keep hammering away at it.

All this effort is evident in my predictive models. Whenever I have a variable that is a count of ‘number of address updates’, I find it is correlated with giving — but only up to a point. Beyond a certain number of address updates, the correlation turns sharply negative. The reason is that while highly engaged alumni are conscientious about keeping alma mater informed of their whereabouts, alumni who are completely unengaged are perpetually lost. The ones who are permanently unreachable get researched the most and are submitted for data appends the most. Again and again a new address is entered into the database. It’s often incorrect — we got the wrong John Smith — so the mail comes back undeliverable, and the cycle begins again.

Consider that at any time there could be many thousands of lost alumni. It’s a never-ending task. Every day people in your database pull up stakes and move without informing you. Some of those people are important to your mission. Others, like Mr. Smith from the Class of 1983, are not. You should be investing in regular address cleanups for all records, but when it comes down to sleuthing for individuals, which is expensive, I think you’d agree that those John Smiths should never come ahead of keeping in touch with your loyal donors. I’m afraid that sometimes they do — a byproduct, perhaps, of people working in silos, pursuing goals (e.g., low lost rates) that may be laudable in a narrow context but are not sufficiently aligned with the overall mission.

Here’s the common sense advice for optimizing research: ‘First-in, first-out’ is the wrong approach. Records research should always be pulling from the top of the pile, searching for the lost constituents who are deemed most valuable to your mission. Defining “most valuable” is a consultative exercise that must take Records staff out of the back office and face-to-face with fundraisers, alumni officers and others. It’s not done in isolation. Think “integration”.

The first step, then, is consultation. After that, all the answers you need are in the data. Depending on your tools and resources, you will end up with some combination of querying, reporting and predictive modelling to deliver the best research lists possible, preferably on a daily basis. The simplest approach is to develop a database query or report that produces the following lists in whatever hierarchical order emerges from consultation. Research begins with List 1 and does not proceed to List 2 until everyone on List 1 has been found. An example hierarchy might look like this:

  1. Major gift and planned giving prospects: No major gift prospect under active management should be lost (and that’s not limited to alumni). Records staff MUST review their lists and research results with Prospect Research and/or Prospect Management to ensure integrity of the data, share research resources, and alert gift officers to potentially significant events.
  2. Major gift donors (who are no longer prospects): Likewise, these folks should be 100% contactable. In this case, Records needs to work with Donor Relations.
  3. Planned Giving expectancies: I’m not knowledgeable about Planned Giving, but it seems to me that a change of address for an expectancy could signal a significant event that your Planned Giving staff ought to know about. A piece of returned mail might be a good reason to reach out and reestablish contact.
  4. Annual Giving Leadership prospects and donors: The number of individuals is getting larger … but these lists should be reviewed with Annual Fund staff.
  5. Annual Fund donors who gave in the past year.
  6. Annual Fund donors who gave in the year previous.
  7. All other Annual Fund donors, past five or 10 years.
  8. Recent alumni volunteers (with no giving).
  9. Recent event attendees (reunions, etc.) — again, who aren’t already represented in a previous category.
  10. Young alumni with highest scores from predictive models for propensity to give (or similar).
  11. All other non-donor alumni, ranked by predictive model score.
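
To make this concrete, here is a minimal sketch of how a hierarchy like this could be turned into a daily ranked list. It’s written in Python with pandas, but the same logic could live in a database query or report; every file and field name below is a hypothetical stand-in for whatever your own export provides.

    import pandas as pd

    def research_priority(row, current_fy=2014):
        """Return the research list (1 = highest priority) for one lost constituent."""
        if row["is_mg_prospect"]:                    # 1. managed major gift / PG prospects
            return 1
        if row["is_mg_donor"]:                       # 2. major gift donors no longer under management
            return 2
        if row["is_pg_expectancy"]:                  # 3. planned giving expectancies
            return 3
        if row["is_leadership_annual"]:              # 4. leadership annual giving prospects and donors
            return 4
        if row["last_gift_fy"] == current_fy - 1:    # 5. Annual Fund donors, past year
            return 5
        if row["last_gift_fy"] == current_fy - 2:    # 6. Annual Fund donors, year previous
            return 6
        if row["last_gift_fy"] >= current_fy - 10:   # 7. all other donors, past 10 years
            return 7
        if row["is_recent_volunteer"]:               # 8. recent volunteers with no giving
            return 8
        if row["is_recent_attendee"]:                # 9. recent event attendees
            return 9
        return 10                                    # 10-11. everyone else, ranked by model score below

    lost = pd.read_csv("lost_alumni.csv")            # one row per lost constituent
    lost["priority"] = lost.apply(research_priority, axis=1)
    daily_list = lost.sort_values(["priority", "propensity_score"],
                                  ascending=[True, False])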

Endless variations are possible. Although I see potential for controversy here, as everyone will feel they need priority consideration, I urge you not to shrink from a little lively discussion — it’s all good. It may be that in the early days of your optimization effort, Annual Fund is neglected while you clean up your major gift and planned giving prospect/donor lists. But in time, those high-value lists will become much more manageable — maybe a handful of names a week — and everyone will be well-served.

There’s a bit of “Do as I say, not as I do” going on here. In my shop, we are still evolving towards becoming data-driven in Records. Not long ago I created a prototype report in Tableau that roughly approximates the hierarchy above. Every morning, a data set refreshes automatically to feed these lists (one tab for each list), and the reports are available to Records via Tableau Server and a browser.

That’s all fine, but we are not quite there yet. The manager of the Records team said to me recently, “Kevin, can’t we collapse all these lists into a single report, and have the names ranked in order by some sort of calculated score?” (I have to say, I feel a warm glow when I hear talk like that.) Yes — that’s what we want. A hierarchy like the one above suggests exclusive categories, but a weighted score would allow for a more sophisticated ranking. For example, a young but loyal Annual Fund donor who is also a current volunteer might have a high enough score to outrank a major gift prospect who has no such track record of engagement — maybe properly so. Propensity scores could also play a much bigger role.
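
As a rough illustration of what such a calculated score might look like, here is a sketch. The weights, flags and scaling are all invented for the example; the real values would come out of the consultation described earlier, not from one person’s guesswork.

    import pandas as pd

    # Invented weights: the real values would be negotiated with fundraisers,
    # alumni officers and Records, not picked by one person.
    WEIGHTS = {
        "is_mg_prospect": 50,
        "is_pg_expectancy": 40,
        "is_leadership_annual": 30,
        "gave_last_year": 25,
        "is_recent_volunteer": 15,
        "is_recent_attendee": 10,
    }

    def research_score(row):
        score = sum(w for flag, w in WEIGHTS.items() if row.get(flag))
        score += 20 * row.get("propensity_score", 0)             # model score on a 0-1 scale
        score += min(row.get("lifetime_giving", 0) / 1000, 30)   # cap the influence of past giving
        return score

    lost = pd.read_csv("lost_alumni.csv")
    lost["research_score"] = lost.apply(research_score, axis=1)
    daily_list = lost.sort_values("research_score", ascending=False)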

However it shakes out, records research will no longer start the day by picking up where the previous day’s work left off. It will be a new list every morning, based on the actual value of the record to the institution.

And now for the twist …

Some alumni might not be addressable, but they are not totally lost if you have other information such as an email address. If they are opening your email newsletters, invitations and solicitations, then you might be able to determine their approximate geographic location via the IP address given to them by their internet service provider.

That sounds like a lot of technical work, but it doesn’t have to be. Your broadcast email platform might be collecting this information for you. For example, MailChimp has been geolocating email accounts since at least 2010. The intention is to give clients the ability to segment mailings by geographic location or time zone. You can use it to clue you in to where in the world someone lives when they’ve dropped off your radar.

(Yes, yes, I know you could just email them to ask them to update their contact info. But the name of this blog is CoolData, not ObviousData.)

What MailChimp does is append latitude and longitude coordinates to each email record in your account. Not everyone will have coordinates: At minimum, an alum has to have interacted with your emails in order for the data to be collected. As well, ISP-provided data may not be very accurate. This is not the same as identifying exactly where someone lives (which would be fraught with privacy issues), but it should put the individual in the right city or state.

In the data I’m looking at, about half of alumni with an email address also have geolocation data. You can download this data, merge it with your records for alumni who have no current valid address, and then the fun begins.

I mentioned Tableau earlier. If you’ve got lat-long coordinates, visualizing your data on a map is a snap. Have a look at the dashboard below. I won’t go into detail about how it was produced, except to say that it took only an hour or so. First I queried the database for all our alumni who don’t have a valid preferred address in the database. For this example, I pulled ID, sum of total giving, Planned Giving status (i.e., current expectancy or no), and the city, province/state and country of the alum’s most recent valid address. Then I joined the latitude and longitude data from MailChimp, using the ID as the common key.
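
For anyone curious about the mechanics, the join itself is only a few lines of Python with pandas. The file and column names are hypothetical stand-ins for whatever your database export and MailChimp download actually contain:

    import pandas as pd

    # Lost-address alumni pulled from the database.
    lost = pd.read_csv("lost_alumni.csv")      # id, total_giving, pg_status,
                                               # last_city, last_state, last_country

    # Geolocation export from MailChimp, keyed on the same constituent ID.
    geo = pd.read_csv("mailchimp_geo.csv")     # id, latitude, longitude

    # Keep only the lost alumni for whom coordinates exist, then hand the file to Tableau.
    mapped = lost.merge(geo, on="id", how="inner")
    mapped.to_csv("lost_alumni_mapped.csv", index=False)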

The result was a smallish data file (less than 1,000 records), which I fed into Tableau. Here’s the result, scrubbed of individual personal information — click on the image to get a readable size.

[Image: Tableau dashboard mapping lost alumni by their geolocated email coordinates]

The options at top right are filters that enable the user to focus on the individuals of greatest interest. I’ve used Giving and Planned Giving status, but you can include anything — major gift prospect status, age, propensity score — whatever. If I hover my cursor over any dot on the map, a tooltip pops up containing information about the alum at that location, including the city and province/state of the last place they lived. I can also zoom in on any portion of the map. When I take a closer look at a certain tropical area, I see one dot for a person who used to live in Toronto and one for a former Vancouverite, and one of these is a past donor. Likewise, many of the alumni scattered across Africa and Asia last lived in various parts of eastern Canada.

These four people are former Canadians who are now apparently living in a US city — at least according to their ISP. I’ve blanked out most of the info in the tooltip:

[Image: map detail with a tooltip for four alumni geolocated to a US city]

If desired, I could also load the email address into the tooltip and turn it into a mailto link: The user could simply click on the link to send a personal message to the alum.

(What about people who check email while travelling? According to MailChimp, location data is not updated unless it’s clear that a person is consistently checking their email over an extended period of time — so vacations or business trips shouldn’t be a factor.)

Clearly this is more dynamic and interesting for research than working from a list or spreadsheet. If I were a records researcher, I would have some fun filtering down on the biggest donors and using the location to guide my search. Having a clue where they live now should shorten the time it takes to decide that a hit is a real match, and should also improve the proportion of correct addresses. As well, because a person has to actually open an email in order to register their IP with the email platform, they are also sending a small signal of engagement. The fact that they’re engaging with our email is assurance that going to the trouble of researching their address and other details, such as employment, is not a waste of time.

This is a work in progress. My example is based on some manual work — querying the database, downloading MailChimp data, and merging the files. Ideally we would automate this process using the vendor’s API and scheduled data refreshes in Tableau Server. I can also see applications beyond searching for lost alumni. What about people who have moved but whose former address is still valid, so the mail isn’t getting returned? This is one way to proactively identify alumni and donors who have moved.
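
Here is a sketch of what that automated pull might look like. I haven’t built this yet, and the endpoint path, authentication style and field names are assumptions based on my reading of MailChimp’s API documentation, so check them against the current docs before relying on any of it:

    import requests

    API_KEY = "your-mailchimp-api-key"   # the data centre is the suffix after the dash
    LIST_ID = "your-list-id"
    DC = API_KEY.split("-")[-1]
    BASE = f"https://{DC}.api.mailchimp.com/3.0"   # assumed endpoint layout; confirm in the docs

    members, offset = [], 0
    while True:
        resp = requests.get(f"{BASE}/lists/{LIST_ID}/members",
                            auth=("anystring", API_KEY),
                            params={"count": 1000, "offset": offset})
        resp.raise_for_status()
        batch = resp.json().get("members", [])
        if not batch:
            break
        for m in batch:
            loc = m.get("location") or {}
            members.append({"email": m["email_address"],
                            "latitude": loc.get("latitude"),
                            "longitude": loc.get("longitude")})
        offset += len(batch)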

MailChimp offers more than just geolocation. There’s also a nifty engagement score, based on unsubscribes, opens and click-throughs. Stay tuned for more on this — it’s fascinating stuff.


13 December 2011

Finding connections to your major gift prospects in your data

Guest post by Erich Preisendorfer, Associate Director, Business Intelligence, Advancement Services, University of New Hampshire

(Thanks to Erich for this guest post, which touches on something a lot of prospect researchers are interested in: mapping relationships to prospects in their database. In fact, this work is more exciting than that, because it helps people find connections they may not have known about, via database queries and a simple scoring system. Is your Advancement Services department working on something like this? Why not ask them? — Kevin.)

Data miners often explore sets of data to find meaningful patterns that can then be modeled for prediction, in the hope of helping meet their organization’s end goal(s). However, there may be times when the behavior you want to predict is not captured in your database at all. Such a situation recently came up for my Advancement organization.

Our prospecting team recently started a program wrapped around peer recommendations: A prospect recommends new suspects to us based on the prospect’s interactions with the suspects. The question then became, what can we provide to the prospect to help get them thinking about potential suspects?

We currently do not have any type of data which would allow us to say, “Yes, this is what a relationship looks like,” outside of family relationships. We had to find a different way to identify potential acquaintances. I looked back at my own relationships to determine how I know the people I know. My friends and acquaintances largely come from some basic areas: school, work, places I’ve gone, etc.

Translating my own experience of relationships into the usable data we have, I saw three key areas where relationships may exist: work history, education history, and extracurricular activities, including one-time events. Fortunately, I was able to pinpoint our constituents’ time in each of these areas to help isolate meaningful shared experiences among constituents. Our work and extracurricular history includes to/from dates, and we have loads of educational history data with specific dates. Using this data, I am able to generate potential relationships starting from a single prospect.

Prospect Profile (generated by entering a single prospect’s ID):

  • John Adams
  • Widget Factory, Employee 01/05/1971 – 06/16/1996
  • Student Activities: Football, Student Senate 09/1965-05/1966
  • Bachelor of Arts, Botany 1966

Potential Relationships (each item below is a separate query, using the Prospect Profile results):

  • Those employed by the Widget Factory who started before John ended, and ended after John began.
  • Those students who participated in Football and had a class year within +/-3 years of John.
  • Those students in Student Senate at the same time as John, similar to the Widget Factory example.
  • Those students who were in the same class year as John.
  • Those students who share John’s major.

Currently, since I have no way of proving the value of one point of contact over another, each row returned in the potential relationships earns the constituent one point. Since my database stores historical records, I may get more than one row per constituent in any one category if they match more than one of John’s associated records – say they participated in Student Senate and played Football. This is great, because I want to give those particular constituents two points, since they have more than one touch point in common with John.
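
To make the mechanics concrete, here is a minimal sketch of two of these queries and the one-point-per-row scoring, written in Python with pandas. All file and column names are hypothetical; the same logic could just as easily live in SQL or a reporting tool.

    import pandas as pd

    # Hypothetical exports: one row per constituent per record, with dates where available.
    emp = pd.read_csv("employment.csv", parse_dates=["start", "end"])  # constituent_id, employer, start, end
    act = pd.read_csv("activities.csv")                                # constituent_id, activity
    deg = pd.read_csv("degrees.csv")                                   # constituent_id, class_year, major

    # John Adams's profile, pulled by his ID.
    john_start, john_end, john_year = pd.Timestamp("1971-01-05"), pd.Timestamp("1996-06-16"), 1966

    # Query 1: Widget Factory employees whose dates overlap John's
    # (started before John ended AND ended after John began).
    q1 = emp[(emp["employer"] == "Widget Factory")
             & (emp["start"] < john_end) & (emp["end"] > john_start)]

    # Query 2: Football participants with a class year within +/- 3 years of John's.
    q2 = act.merge(deg, on="constituent_id")
    q2 = q2[(q2["activity"] == "Football") & ((q2["class_year"] - john_year).abs() <= 3)]

    # One point per matching row: constituents sharing several touch points accumulate
    # several points, producing the ranked list of potential relationships.
    points = pd.concat([q1, q2])["constituent_id"].value_counts()
    print(points.head(25))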

I end up with a ranked list of constituents who share potential relationship contacts with my main prospect. These lists give our prospect researchers a starting point for putting together a solid list of high-capacity constituents a single person may have some sort of relationship with, and thus greater insight into potential giving.

As of now, the report is in its infancy but looks to have high potential. As we develop the concept, there are multiple data points whose further exploration could make the tool work better. As prospects use the lists to identify people they know, we can deconstruct their choices to determine what is more likely to be a real relationship. Should shared employment be weighted higher than shared class year? Should Football rank higher than Student Senate? I would guess yes, but I currently do not have supporting data to make that decision.

Another interesting concept, raised at the recent DRIVE 2011 conference, would be: "How are these two prospects potentially related through a third constituent?" The result could mean the difference between two separate, forced conversations and a single meeting with three prospects, shared over nostalgia, drinks and, hopefully, money in the door!

Erich Preisendorfer is Associate Director, Business Intelligence, working in Advancement Services at the University of New Hampshire.

13 April 2010

Exploring your database relationships with NodeXL

Filed under: Coolness, Relationship mapping — kevinmacdonell @ 7:16 am

Guest Post by Jason Boley, Associate Director of Prospect Management and Tracking, Purdue University

NodeXL is a powerful social network analysis tool designed as a plug-in to Excel 2007. It was built by a team of programmers at Microsoft Research and is now maintained by the Connected Action Consulting Group. In a nutshell, the tool produces network graphs from a list of network data, and includes built-in tools for getting information from Twitter, Flickr, YouTube, etc. (Link to NodeXL site.)

But can it be useful for fundraising too?

If you look at NodeXL from a fundraising perspective you immediately see the relevance. NodeXL is designed as a tool for discovering deep connections in your social networks. This in itself is pretty useful, especially if your fundraising department is heavily involved in social media (and it probably will be at some point): the tool helps you discover who is having an impact on your social reputation. But let's put that aside. Even the most basic of fundraising offices are constantly looking to find new connections to and between donors. Who knows whom, and who can provide me with an introduction to Mr. X? Depending on the size and complexity of your database (and your staff), these connections are often easy to miss. Sure, we may note that Bob and Jim sit on the same board or work at the same company, but going a layer deeper than that is often problematic.

With that in mind, I began to explore NodeXL.

There are a couple of things that are appealing about NodeXL. First, it's free. It downloads as a zip file and operates essentially as a macro plug-in for Excel 2007, but this is unlike any macro you've ever seen: the visual capabilities are stunning. Second, it is straightforward. While it does have some extremely powerful advanced features, it's relatively easy to get started.

The only information that NodeXL accepts is called an edge list. An edge list is essentially an Excel list with two columns: in column A you have one value, and in column B you have a second value that is related to the value in column A. So a sample edge list might look as follows:

Bob        Jerry
Jerry      Tom
Tom       Bob
Alice      Jerry

These two columns of data are all that NodeXL accepts and needs to do network analysis. The rest of the work, calculating connections between these relationships, is done for you.
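
If your relationship data lives in a database export rather than a hand-typed list, building the edge list takes only a few lines of scripting. A minimal sketch in Python with pandas, with hypothetical file and column names:

    import pandas as pd

    # Hypothetical export: one row per board membership, linking a person to an organization.
    memberships = pd.read_csv("board_memberships.csv")   # person_name, org_name

    # NodeXL needs nothing more than two columns, one row per relationship (edge).
    edges = memberships[["person_name", "org_name"]].drop_duplicates()
    edges.to_excel("edge_list.xlsx", index=False, header=False)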

I rarely think small, so my first inclination was to see if we could visualize all of the recorded board memberships in our Advance database. The first step was to create my edge list. This was very simple and took the following format:

I_Jason_Boley   O_Jason_Boley_Foundation
I_Bob_Smith     O_Bob_Smith_Foundation
I_Joe_Klein     O_Gates_Foundation

I think you get the picture. The I and O prefixes are there to facilitate sorting (if you wish to color-code the graph later, you can easily separate the organizations from the individuals). As you might expect, this query returned a couple of thousand connections, but I was pleased at how efficiently NodeXL handled them. The resulting graph (Harel-Koren Fast Multiscale layout) was visually stunning.

[Image: network graph of board memberships, Harel-Koren Fast Multiscale layout]

Stunning? Yes! Useful? I wasn’t sure yet. I knew that I could isolate an individual within this huge graph. The highlighted green line above, for example, is one of our major donors.

I decided to leave this dataset behind for a bit and move to more of a discovery project. We were fortunate to have completed a full database screening a few months ago, so I turned to the data we received from our vendor, which includes a table of people in our database and the connections the vendor discovered to foundations (based on GuideStar data). The edge list I created from this data contained 4,872 relationships. The resulting graph (Fruchterman-Reingold layout) was a huge blob.

[Image: dense graph of constituent-foundation connections, Fruchterman-Reingold layout]

You can click around on a graph like this to isolate individual connections (or edges) and try to do some discovery that way, but in a graph as dense as this it is basically impossible. The other way to do discovery work is to pose the question from a certain perspective. For instance, I highlighted in the edge list two closely related foundations. I then used the subgroup option in NodeXL to select all of the vertices up to four layers deep: in other words, not just the individuals connected directly to the organizations, but relationships a few layers removed. In the graph below you can see the highlighted connections standing out from the larger mass.

[Image: graph with the selected subgroup of connections highlighted]
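
NodeXL does this subgroup selection inside the spreadsheet, but the same idea (select every vertex within four steps of a chosen foundation) can be expressed in a few lines of Python with the open-source networkx package. A sketch, with an invented foundation name and file:

    import networkx as nx
    import pandas as pd

    # Edge list from the screening data: constituent-to-foundation pairs (hypothetical file).
    edges = pd.read_csv("foundation_edges.csv")            # source, target
    G = nx.from_pandas_edgelist(edges, "source", "target")

    # Select every vertex within four steps of the foundation of interest.
    subgraph = nx.ego_graph(G, "Example_Family_Foundation", radius=4)
    print(subgraph.number_of_nodes(), "vertices selected")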

I then extracted these selected vertices into a brand new NodeXL sheet. After adding some labels, you can reduce the useful information to just a few connections.

[Image: extracted subgraph reduced to a few labeled connections]

What can we tell from this graph? It is likely that the connection between Al Winner and Steven Cook was already known and well documented. But the real advantage here is seeing the tangential connections we may have missed. Steven Cook has a potential relationship with Charles Talor. Indeed, in our database Steven Cook is a well known alumnus, while Charles Talor is someone we would like to know, so there is potential for establishing a relationship. The same holds true for potential connections to James Jones and Jean Smith. While some of these connections may be known, the advantages to visualizing them are great. Potential connections jump from the page.

What I have discussed only scratches the surface of this advanced tool. I have very little knowledge of network graphing beyond what I've read online, yet with NodeXL I have been able to produce some simple but powerful graphs. Relationships as they are defined in fundraising databases are very different from those that can be extracted from complicated social networking data, and yet there are similarities. As I delve deeper into this tool, it is forcing me to rethink how relationships are defined in our database, to think about existing data that might be useful in a visual format, and to think about a future in which we will certainly want to supplement our traditional alumni data with information from social networks.

(Jason Boley is the Associate Director of Prospect Management and Tracking at Purdue where he manages prospect information and focuses on data analytics. Jason has over ten years of experience working with fundraising databases and has presented nationally on the topics of database reporting and prospect management.)


12 April 2010

New way to look at words

Filed under: Coolness, Data visualization, Free stuff, Text — kevinmacdonell @ 8:13 am

Word clouds aren’t new, but there’s a new online app for creating them that is worth checking out. Tagxedo lets you build word clouds with some versatile tools for shaping their appearance, and then easily save the result as a .jpg or .png.

This comes to me via a post on the LoveStats blog, where Annie Pettit has posted a couple of her own creations – one based on the text of her resume, and one on all the words in her blog.

I wrote about word clouds back in December (Quick and easy visuals of large text files), and the well-known and very cool tool known as Wordle, the creation of Jonathan Feinberg. Tagxedo does the same thing but works a little differently. Powered by Microsoft’s Silverlight browser plug-in, Tagxedo offers a nifty interface for importing your text (or URL), finely controlling your word choice, and playing with the font, colour, theme and layout of your cloud, including being able to choose a shape. The choice of shapes is rather limited – hearts, stars, rectangles and ovals, mostly. Here’s a star-shaped word cloud based on the 150 most common words on this blog:

[Image: star-shaped word cloud of the 150 most common words on this blog]

My interest in word clouds is related to visualization of data – in this context, conveying the gist of a mass of text by giving prominence to the most common significant words. For example, last year I used Wordles to visualize tens of thousands of words entered as free-text comments in a survey of alumni. It’s no substitute for real analysis, but it does make a cool presentation slide!
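
If you would rather script it than use a web app, the open-source Python wordcloud package produces much the same result. A minimal sketch, with a hypothetical input file:

    from wordcloud import WordCloud

    # Tens of thousands of words of free-text survey comments in one file.
    with open("survey_comments.txt", encoding="utf-8") as f:
        text = f.read()

    cloud = WordCloud(width=1200, height=800, max_words=150,
                      background_color="white").generate(text)
    cloud.to_file("survey_cloud.png")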

NOTE: Check in tomorrow for Jason Boley’s amazing work with NodeXL for visualizing prospect connections in your data.

8 April 2010

Coming soon: Fun with NodeXL

Filed under: Coolness, Relationship mapping — kevinmacdonell @ 7:53 am

What is this? Find out next week.

Relationship mapping is a side-interest of mine, one which I’ve only rarely been able to apply in my work. In major-gifts research, I’ve used software to map connections between prospects via the corporate and non-profit boards they sit on together, to create visualizations that are similar in appearance to Muckety, TheyRule and other online apps. These visualizations were half exploratory (searching for the best point of entry to a prospect) and half illustrative (as in showing likely points of connection in a briefing memo).

(WellStar Foundation’s David Broussard reviews a number of relationship mapping applications here. He represents one of the products he reviews, Market Visual, but seems to have a genuine interest in keeping up with everything that’s out there.)

Naturally I was intrigued by NodeXL, an Excel add-on which makes it a snap to visually portray the relationships between and among hundreds or thousands of entities at once. Unlike the products David reviews, NodeXL works exclusively with the data you feed into it.

It’s cool, it’s free, it’s easy to use, and it’s been on my list of blog post ideas for months. I have played with it a bit, creating a visualization of the cross-references in our database. The result looked like a giant knot of black spaghetti. I thought that was cool, but doubted anyone else could be convinced of its coolness. I didn’t find an application for it at work.

I think Jason Boley at Purdue has, though. (Twitter: soam) He’s worked with it a lot more and taken it far beyond what I was able to do. His explorations will be of interest to fundraising shops, so a little while ago I asked if he would contribute a guest post on the topic, and he has obliged. Read it here.


20 January 2010

Another take on Google’s Motion Charts

Filed under: Coolness, Data visualization, Free stuff — kevinmacdonell @ 9:09 am

Late last year I posted a tutorial on creating Google motion charts with your data. These very cool charts work with your time-series data, stored in Google Docs, to create an animation with the power to convey a lot of information in an easily understandable form.

But what about private data? You may not want to rely on Google’s ability to password-protect your data, or the privacy provisions you work with may prohibit posting data to an outside server.

Here’s another way to take advantage of motion charts. I was put onto this by Trevor Skillen, President and CEO of Metasoft, in Vancouver BC, whose company is working on incorporating motion charts into their well-known FoundationSearch product.

This version uses stored code to manipulate your data locally, rather than pulling it from Google Docs.

The advantages are clear:

  • Your data is stored locally and the code is executed locally, in the browser – nothing is sent to Google.
  • You gain precise control over the appearance – you can hide options that the user doesn’t need to see.
  • The example code provided by Google is fairly easy to modify without requiring programming or scripting skills.

Trevor directed me to Google’s ‘playground’ where one can get a quick feel for the technology without much tech effort.

There is a downside: a good deal of manual coding is required if you want to put a chart together using your own data. This limits you to fairly simple charts – unless you’re capable of writing the additional code that will allow the chart to get data from a file or table.

