CoolData blog

28 June 2015

Data mining in the archives

Filed under: Data, Predictor variables — kevinmacdonell @ 6:24 pm

 

When I was a student, I worked in a university archives to earn a little money. I spent many hours penciling consecutive index numbers onto acid-free paper folders, on the ultra-quiet top floor of the library. It was as dull a job as one can imagine.

 

Today’s post is not about that kind of archive. I’m talking about database archive views, also called snapshots. They’re useful for reporting and business intelligence, but they can also play a role in predictive modelling.

 

What is an archive view?

 

Think of a basic stat such as “number of living alumni”. This number changes constantly as new alumni join the fold and others are identified as deceased. A straightforward query will tell you how many living alumni there are, but that number will be out of date tomorrow. What if someone asks you how many living alumni you had a year ago? Then it’s necessary to take grad dates and death dates into account in order to generate an estimate. Or, you look the number up in previously-reported statistics.

 

A database archive view makes such reporting relatively easy by preserving the exact status of a record at regular points in time. The ideal archive is a materialized view in a data warehouse. On a given schedule (yearly, quarterly, or even monthly), an automated process adds fresh rows to an archive table that keeps getting longer and longer. You’re likely reliant on central IT services to set it up.
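The snapshot process described above can be sketched in a few lines of pandas. This is a toy illustration, not the author's actual warehouse job; the column names ("deceased", "address_valid", "email") are assumptions made up for the example.

```python
import pandas as pd

def take_snapshot(constituents, archive, snapshot_date):
    """Append one snapshot of living alumni to the archive table.

    Column names here are illustrative assumptions, not any
    particular system's schema. Deceased records are excluded,
    and each surviving row gets 0/1 indicator columns.
    """
    living = constituents[constituents["deceased"] == 0]
    snap = pd.DataFrame({
        "id": living["id"].to_numpy(),
        "archive_date": snapshot_date,
        "address_ind": (living["address_valid"] == 1).astype(int).to_numpy(),
        "email_ind": living["email"].notna().astype(int).to_numpy(),
    })
    return pd.concat([archive, snap], ignore_index=True)

# Toy live table: A00003 is deceased, so it is left out of the snapshot.
people = pd.DataFrame({
    "id": ["A00001", "A00002", "A00003"],
    "deceased": [0, 0, 1],
    "address_valid": [1, 0, 1],
    "email": ["a@example.org", None, None],
})
archive = take_snapshot(people, pd.DataFrame(), "2015-04-01")
```

Run on a schedule, each call appends one more dated layer of rows, which is all the archive table is.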

 

“Number of living alumni” is an important denominator for such key ratios as the percentage of alumni for whom you have contact information (mail, phone, email) and participation rates (the proportion of alumni who give). Every gift is entered as an individual transaction record with a specific date, which enables reporting on historical giving activity. This tends not to be true of contact information. Even though mailing addresses may be added one after another, without overwriting older addresses, the key piece of information is whether the address is coded ‘valid’ or ‘invalid’. This changes all the time, and your database may not preserve a history of those changes. Contact information records may have “To” and “From” dates associated with them, but your query will need to do a lot of relative-date calculations to determine if someone was both alive and had a valid address for any given point in time in the past.

 

An archive table obviates the need for this complex logic, and ours looks like the example below. There’s the unique ID of each individual, the archive date, and a series of binary indicator variables — ‘1’ for “yes, this data is present” or ‘0’ for “this data is absent”.

 

archive

 

Here we see three individuals and how their data has changed over three months in 2015. This is sorted by ID rather than by the order in which the records were added to the archive, so that you can see the journey each person has taken in that time:

 

  • A00001 had no valid email in the database in February and March, but we obtained it in time for the April 1 snapshot.
  • A00002 had no contact information at all until just before March, when a phone append supplied us with a new number. The number proved to be invalid, however, and when we coded it as such in the database, the indicator reverted to zero.
  • A00003 appeared in our data in February and March, but that person was coded deceased in the database before April 1, and was excluded from the April snapshot.

 

That last bullet point is important. Once someone has died, continuing to include them as a row in the archive every month would be a waste of resources. In your reporting software, a simple count of records by archive date will give you the number of living alumni. A simple count of ‘Address Indicator’ will give you the number of alumni with valid addresses. Dividing the number of valid addresses by the number of living alumni (and multiplying by 100) will give you the percentage of living alumni that are addressable for that month. (Reporting software such as Tableau will make very quick work of all this.)
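The counting and dividing described above is a one-liner in most reporting tools; here is the same arithmetic sketched in pandas on invented data, just to make the logic concrete:

```python
import pandas as pd

# Toy archive: one row per living constituent per snapshot date.
archive = pd.DataFrame({
    "archive_date": ["2015-02-01"] * 3 + ["2015-03-01"] * 3 + ["2015-04-01"] * 2,
    "id": ["A00001", "A00002", "A00003"] * 2 + ["A00001", "A00002"],
    "address_ind": [1, 0, 1, 1, 0, 1, 1, 0],
})

monthly = archive.groupby("archive_date").agg(
    living=("id", "size"),                # row count = living alumni that month
    with_address=("address_ind", "sum"),  # count of valid addresses
)
monthly["pct_addressable"] = 100 * monthly["with_address"] / monthly["living"]
```

April has two living alumni and one valid address, so its addressable rate comes out to 50 percent.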

 

Because an archive view preserves changing statuses over time at the level of the individual constituent, it can be used for reporting trends along any slice you choose (age bracket, geography, school, etc.), and can play a role in staff activity/performance reporting and alumni engagement scoring.

 

But enough about archive views themselves. Let’s talk about using them for predictive modelling.

 

In the archive example above, you see a bunch of 0/1 indicator variables. Indicator variables are common in predictive modelling. For example, “Mailing address present” can have one of two states: Present or not present. It’s binary. A frequency breakdown of my data at this point in time looks like this (in Data Desk):

 

freq1

 

About 78% of living alumni have a valid address in the database today — the records with an address indicator of ‘1’. As you might expect, alumni with a good address are more likely to have given than alumni without, and they have much higher lifetime giving on average. In the models I build to predict likelihood to give (and give at higher levels), I almost always make use of this association between contact information and giving.

 

But what about using the archive view data instead? The ‘Address Indicator’ variable breakdown above shows me the current situation, but the archive view adds depth by going back in time. Our own archive has been taking monthly snapshots since December of last year — seven distinct points in time. Summing on “Address Indicator” for each ID shows that large numbers of alumni have either never had a valid address during that time (0 out of 7 months), or always did (7 out of 7). The rest had a change of status during the period, and therefore fall between 0 and 7:

 

freq2

 

A few hundred alumni (387) had a valid address in one out of seven months, 143 had a valid address in two out of seven — and so on. Our archive is still very young; only about 1% of alumni have a count that is not 0 or 7. A year from now, we can expect to see far more constituents populating the middle ground.
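The per-person sum across snapshots is a simple groupby. A minimal sketch with three invented IDs over three months rather than seven:

```python
import pandas as pd

# Toy archive spanning three snapshots instead of seven.
archive = pd.DataFrame({
    "id": ["A1", "A2", "A3"] * 3,
    "archive_date": ["2015-02-01"] * 3 + ["2015-03-01"] * 3 + ["2015-04-01"] * 3,
    "address_ind": [1, 0, 0,   1, 0, 1,   1, 0, 1],
})

# Months-with-valid-address per constituent: A1 all 3, A2 none, A3 two.
address_count = archive.groupby("id")["address_ind"].sum()
```

With seven snapshots the result ranges from 0 to 7, which is exactly the count variable discussed in the text.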

 

What is most interesting to me is an apparent relationship between “number of months with valid address” (x-axis) and average lifetime giving (y-axis), even with the relative scarcity of data:

 

chart1

 

My real question, of course, is whether these summed, continuous indicators really make much of a difference in a model over simply using the more familiar binary variables. The answer is “not yet — but someday.” As I noted earlier, only about 1% of living alumni have changed status in the past seven months, so even though this relationship seems linear, the numbers aren’t there to influence the strength of correlation. The Pearson correlation for “Address Indicator” (0/1) and “Lifetime Giving” is 0.186, which is identical to the Pearson correlation for “Address Count” (0 to 7) and “Lifetime Giving.” For all other variables except one, the archive counts have only very slightly higher correlations with Lifetime Giving than the straight indicator variables. (Email is slightly lower.)

 

It’s early days yet. All I can say is that there is potential. Have a look at this pair of regression analyses, both using Lifetime Giving (log-transformed) as the dependent variable. (Click on image for larger view.) In the window on the left, all the independent variables are the regular binary indicator variables. On the right, the independent variables are counts from our archive view. The difference in R-squared from one model to the other is very slight, but headed in the right direction: From 12.7% to 13.0%.

 

regressions
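The side-by-side comparison was done in Data Desk, but the idea can be sketched with plain NumPy least squares: fit the same dependent variable once on binary indicators and once on the archive counts, and compare R-squared. Everything here is simulated; the coefficients and noise level are arbitrary assumptions.

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit with an intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(7)
n = 2000
addr_count = rng.integers(0, 8, n).astype(float)   # 0..7 snapshot months
email_count = rng.integers(0, 8, n).astype(float)
log_giving = 0.3 * addr_count + 0.1 * email_count + rng.normal(0, 2, n)

# Model 1: familiar binary indicators. Model 2: archive counts.
X_binary = np.column_stack([addr_count > 0, email_count > 0]).astype(float)
X_count = np.column_stack([addr_count, email_count])
r2_binary = r_squared(X_binary, log_giving)
r2_count = r_squared(X_count, log_giving)
```

In this simulation the underlying relationship really is linear in the counts, so the count model wins by more than the 0.3 points seen in the post; with real data dominated by 0s and 7s, the gap shrinks accordingly.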

 

Looking back on my student days, I cannot deny that I enjoyed even the quiet, dull hours spent in the university archives. Fortunately, though, and due in no small part to cool data like this, my work since then has been a lot more interesting. Stay tuned for more from our archives.

 

11 May 2015

A new way to look at alumni web survey data

Filed under: Alumni, Surveying, Vendors — kevinmacdonell @ 7:38 pm

Guest post by Peter B. Wylie, with John Sammis

 

Click to download the PDF file of this discussion paper: A New Way to Look at Survey Data

 

Web-based surveys of alumni are useful for all sorts of reasons. If you go to the extra trouble of doing some analysis — or push your survey vendor to supply it — you can derive useful insights that could add huge value to your investment in surveying.

 

This discussion paper by Peter B. Wylie and John Sammis demonstrates a few of the insights that emerge by matching up survey data with some of the plentiful data you have on alums who respond to your survey, as well as those who don’t.

 

Neither alumni survey vendors nor their higher education clients are doing much work in this area. But as Peter writes, “None of us in advancement can do too much of this kind of analysis.”

 

Download: A New Way to Look at Survey Data

 

 

21 March 2013

The lopsided nature of alumni giving

Filed under: Alumni, Major Giving, Peter Wylie — kevinmacdonell @ 6:06 am

Guest post by Peter B. Wylie

(Printer-friendly PDF download of this post available here: Lopsided Nature of Alum Giving – Wylie)

Eight years ago I wrote a piece called “Sports, Fund Raising, and the 80/20 Rule”. It had to do with how most alumni giving in higher education comes from a very small group of former students. Nobody was shocked or awed by the article. The sotto voce response seemed to be, “Thanks, Pete. We got that. Tell us something we don’t know.” That’s okay. It’s like my jokes. A lot of ’em don’t get more than a polite laugh; some get stone silence.

Anyway, time passed and I started working closely with John Sammis. Just about every week we’d look at a new alumni database, and over and over, we’d see the same thing. The top one percent of alumni givers had donated more than the other ninety-nine percent.

Finally, I decided to take a closer look at the lifetime giving data from seven schools that I thought covered a wide spectrum of higher education institutions in North America. Once again, I saw this huge lopsided phenomenon where a small, small group of alums were accounting for a whopping portion of the giving in each school. That’s when I went ahead and put this piece together.

What makes this one any different from the previous piece? For one thing, I think it gives you a more granular look at the lopsidedness, sort of like Google Maps allows you to really focus in on the names of tiny streets in a huge city. But more importantly, for this one I asked several people in advancement whose opinions I respect to comment on the data. After I show you that data, I’ll summarize some of what they had to say, and I’ll add in some thoughts of my own. After that, if you have a chance, I’d love to hear what you think. (Commenting on this blog has been turned off, but feel free to send an email to kevin.macdonell@gmail.com.)

The Data

I mentioned above that I looked at data from seven schools. After some agonizing, I decided I would end up putting you to sleep if I showed you all seven. So I chopped it down to four. Believe me, four is enough to make the point.

Here’s how I’ve laid out the data:

  • For each of the four schools I ranked only the alumni givers (no other constituencies) into deciles (10 groups), centiles (100 groups), and milliles (1,000 groups), by total lifetime hard credit giving. (There is actually no such word as “milliles” in English; I have borrowed from the French.)
  • In the first table in each set I’ve included all the givers. In the second table I’ve included only the top ten percent of givers. And in the third table I’ve included only the top one percent of givers. (The chart following the third table graphically conveys some of the information included in the third table.)

To make sure all this is clear, let’s go through the data for School A. Take a look at Table 1. It shows the lifetime giving for all alumni donors at the school divided into ten equal size groups called deciles. Notice that the alums in decile 10 account for over 95% of that giving. Conversely, the alums in decile 1 account for two tenths of one percent of the giving.

Table 1: Amount and Percentage of Total Lifetime Giving in School A for all Alumni by Giving Decile

table1
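The decile ranking behind Table 1 is easy to reproduce on your own giving data. A minimal pandas sketch, using a simulated heavy-tailed giving distribution in place of School A's real figures:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Simulated lifetime giving; a lognormal is heavy-tailed, though real
# alumni giving is typically even more skewed than this.
donors = pd.DataFrame({"lifetime_giving": rng.lognormal(3.0, 2.0, size=1000)})

# Rank first so ties can't break qcut, then cut into 10 equal-size groups.
donors["decile"] = pd.qcut(
    donors["lifetime_giving"].rank(method="first"), 10, labels=list(range(1, 11))
)
share = donors.groupby("decile", observed=True)["lifetime_giving"].sum()
share_pct = 100 * share / share.sum()
```

The same recipe with 100 or 1,000 groups gives the centile and millile tables; on simulated data like this, decile 10's share already dwarfs decile 1's.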

Moving on to Table 2. Here we’re looking at only the top decile of alumni givers divided into one percent groups. What jumps out from this table is that the top one percent of all givers account for more than 80% of alumni lifetime giving. That’s five times as much as the remaining 99% of alumni givers.

Table 2: Amount and Percentage of Total Lifetime Giving at School A for Top Ten Percent of Alumni Donors

table2

If that’s not lopsided enough for you, let’s look at Table 3 where the top one percent of alumni givers is divided up into what I’ve called milliles. That is, tenth of a percent groups. And lo and behold, the top one tenth of one percent of alumni donors account for more than 60% of alumni lifetime giving. Figure 1 shows the same information in a bit more dramatic way than does the table.

Table 3: Amount and Percentage of Total Lifetime Giving at School A for Top One Percent of Alumni Donors

table3

figure1

What I’d recommend is that you go through the same kinds of tables and charts laid out below for Schools B, C, and D. Go as fast or as slowly as you’d like. Being somewhat impatient, I would focus on Figures 2-4. I think that’s where the real punch in these data resides.

Table 4: Amount and Percentage of Total Lifetime Giving in School B for all Alumni by Giving Decile

table4

Table 5: Amount and Percentage of Total Lifetime Giving at School B for Top Ten Percent of Alumni Donors

table5

Table 6: Amount and Percentage of Total Lifetime Giving at School B for Top One Percent of Alumni Donors

table6

figure2

Table 7: Amount and Percentage of Total Lifetime Giving in School C for all Alumni by Giving Decile

table7

Table 8: Amount and Percentage of Total Lifetime Giving at School C for Top Ten Percent of Alumni Donors

table8

Table 9: Amount and Percentage of Total Lifetime Giving at School C for Top One Percent of Alumni Donors

table9

figure3

Table 10: Amount and Percentage of Total Lifetime Giving in School D for all Alumni by Giving Decile

table10

Table 11: Amount and Percentage of Total Lifetime Giving at School D for Top Ten Percent of Alumni Donors

table11

Table 12: Amount and Percentage of Total Lifetime Giving at School D for Top One Percent of Alumni Donors

table12

figure4

When I boil down to its essence what you’ve just looked at for these three schools, here’s what I see:

  • In School B over half of the total giving is accounted for by three tenths of one percent of the givers.
  • In School C we have pretty much the same situation as we have in School B.
  • In School D over 60% of the total giving is accounted for by two tenths of one percent of the givers.

What Some People in Advancement have to Say about All This

Over the years I’ve gotten to know a number of thoughtful/idea-oriented folks in advancement. I asked several of them to comment on the data you’ve just seen. To protect the feelings of the people I didn’t ask, I’ll keep the commenters anonymous. They know who they are, and they know how much I appreciate their input.

Here are a few of the many helpful observations they made:

Most of the big money in campaigns and other advancement efforts does not come from alumni. I’m a bit embarrassed to admit that I had forgotten this fact. CASE puts out plenty of literature that confirms this. It is “friends” who carry the big load in higher education fundraising. At least two of the commenters pointed out that we could look at that fact as a sad commentary on the hundreds and hundreds of thousands of alums who give little or nothing to their alma maters. However, both felt it was better to look at these meager givers as an untapped resource that we have to do a better job of reaching.

The data we see here reflect the distribution of wealth in society. The commenter said, “There simply are very few people who have large amounts of disposable wealth and a whole lot of hard working folks who are just trying to participate in making a difference.” I like this comment; it jibes with my sense of the reality out there.

“It is easier (and more comfortable) to work with donors rather than prospective donors.” The commenter went on to say: “The wealthier the constituency the more you can get away with this approach because you have enough people who can make mega-gifts and that enables you to avoid building the middle of the gift pyramid.” This is very consistent with what some other commenters had to say about donors in the middle of the pyramid — donors who don’t get enough attention from the major giving folks in advancement.

Most people in advancement ARE aware of the lopsidedness. All of the commenters said they felt people in advancement were well aware of the lopsided phenomenon, perhaps not to the level of granularity displayed in this piece. But well aware, nonetheless.

What you see in this piece underestimates the skew because it doesn’t include non-givers. I was hoping that none of the commenters would bring up this fact because I had not (and still have not) come up with a clear, simple way to convey what the commenter had pointed out. But let’s see if I can give you an example. Look at Figure 4. It shows that one tenth of one percent of alumni givers account for over 48% of total alumni giving. However, let’s imagine that half of the solicitable alumni in this school have given nothing at all. Okay, if we now double the base to include all alums, not just alum givers, then what happens to the percentage size of that top one tenth of one percent of givers? It’s no longer one tenth of one percent; it’s now one twentieth of one percent. If you’re confused, let’s ask someone else reading this thing to explain it. I’m spinning my wheels.

One More Thought from Me

But here’s a thought that I’ve had for a long time. When I look at the incredible skewness that we see in the top one percent of alumni donors, I say, “WHY?!” Is the difference among the top millile and the bottom millile in that top one percent simply a function of capacity to give? Maybe it is, but I’d like to know. And then I say, call me crazy, LET’S FIND OUT! Not with some online survey. That won’t cut it. Let’s hire a first rate survey research team to go out and interview these folks (we’re not talking a lot of people here). Would that cost some money to go out and get these answers? Yes, and it would be worth every penny of it. The potential funding sources I’ve talked to yawn at the idea. But I’ll certainly never let go of it.

As always, let us know what you think.

6 June 2012

How you measure alumni engagement is up to you

Filed under: Alumni, Best practices, Vendors — kevinmacdonell @ 8:02 am

There’s been some back-and-forth on one of the listservs about the “correct” way to measure and score alumni engagement. One vendor, which claims to specialize in rigor, has been pressing for an emphasis on scientific rigor. That emphasis is misplaced.

No doubt there are sophisticated ways of measuring engagement that I know nothing about, but the question I can’t get beyond is, how do you define “engagement”? How do you make it measurable so that one method applies everywhere? I think that’s a challenging proposition, one that limits any claim to “correctness” of method. This is the main reason that I avoid writing about measuring engagement — it sounds analytical, but inevitably it rests on some messy, intuitive assumptions.

The closest I’ve ever seen anyone come is Engagement Analysis Inc., a firm based here in Canada. They have a carefully chosen set of engagement-related survey questions which are held constant from school to school. The questions are grouped in various categories or “drivers” of engagement according to how closely related (statistically) the responses tend to be to each other. Although I have issues with alumni surveys and the dangers involved in interpreting the results, I found EA’s approach fascinating in terms of gathering and comparing data on alumni attitudes.

(Disclaimer: My former employer was once a client of this firm’s but I have no other association with them. Other vendors do similar and very fine work, of course. I can think of a few, but haven’t actually worked with them, so I will not offer an opinion.)

Some vendors may make claims of being scientific or analytically correct, but the only requirement of quantifying engagement is that it be reasonable, and (if you are benchmarking against other schools) consistent from school to school. In general, if you want to benchmark against other schools, engage a vendor to do it right, because it’s not easily done.

But if you want to benchmark against yourself (that is, over time), don’t be intimidated by anyone telling you your method isn’t good enough. Just do your own thing. Survey if you like, but call first upon the real, measurable activities that your alumni participate in. There is no single right way, so find out what others have done. One institution will give more weight to reunion attendance than to showing up for a pub night, while another will weigh all event attendance equally. Another will ditch event attendance altogether in favour of volunteer activity, or some other indicator.

Can anyone say definitively that any of these approaches are wrong? I don’t think so — they may be just right for the school doing the measuring. Many schools (mine included) assign fairly arbitrary weights to engagement indicators based on intuition and experience. I can’t find fault with that, simply because “engagement” is not a quantity. It’s not directly measurable, so we have to use proxies which ARE measurable. Other schools measure the degree of association (correlation) between certain activities and alumni giving, and base their weights on that, which is smart. But it’s all the same to me in the end, because ‘giving’ is just another proxy for the freely interpretable quality of “engagement.”
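A weighted proxy score of the kind described above amounts to a handful of indicator columns and a dictionary of weights. In this sketch, every column name and every weight is a hypothetical example of the "intuition and experience" approach, not a standard:

```python
import pandas as pd

# Hypothetical weights chosen by judgment, as the post describes.
# One school might triple-weight reunions; another might not.
weights = {"reunion": 3, "event": 1, "volunteer": 2, "email_valid": 1}

alumni = pd.DataFrame({
    "id": ["A1", "A2", "A3"],
    "reunion":     [1, 0, 0],
    "event":       [1, 1, 0],
    "volunteer":   [0, 1, 0],
    "email_valid": [1, 0, 1],
})

# Engagement score = weighted sum of the measurable proxies.
alumni["engagement"] = sum(alumni[col] * w for col, w in weights.items())
```

The value isn't in the specific weights but in applying the same weights consistently year over year, so that rises and falls in the score mean something.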

Think of devising a “love score” to rank people’s marriages in terms of the strength of the pair bond. A hundred analysts would head off in a hundred different directions at Step 1: Defining “love”. That doesn’t mean the exercise is useless or uninteresting, it just means that certain claims have to be taken with a grain of salt.

We all have plenty of leeway to choose the proxies that work for us, and I’ve seen a number of good examples from various schools. I can’t say one is better than another. If you do a good job measuring the proxies from one year to the next, you should be able to learn something from the relative rises and falls in engagement scores over time and compared between different groups of alumni.

Are there more rigorous approaches? Yes, probably. Should that stop you from doing your own thing? Never!

16 January 2012

Address updates and affinity: Consider the source

Filed under: Correlation, Predictor variables, skeptics — kevinmacdonell @ 1:03 pm

Some of the best predictors in my models are related to the presence or absence of phone numbers and addresses. For example, the presence of a business phone is usually a highly significant predictor of giving. As well, a count of either phone or address updates present in the database is also highly correlated with giving.

Some people have difficulty accepting this as useful information. The most common objection I hear is that such updates can easily come from research and data appends, and are therefore not signals of affinity at all. And that would be true: Any data that exists solely because you bought it or looked it up doesn’t tell you how someone feels about your institution. (Aside from the fact that you had to go looking for them in the first place — which I’ve observed is negatively correlated with giving.)

Sometimes this objection comes from someone who is just learning data mining. Then I know I’m dealing with someone who’s perceptive. They obviously get it, to some degree — they understand there’s potentially a problem.

I’m less impressed when I hear it from knowledgeable people, who say they avoid contact information in their variable selection altogether. I think that’s a shame, and a signal that they aren’t willing to put in the work to a) understand the data they’re working with, or b) take steps to counteract the perceived taint in the data.

If you took the trouble to understand your data (and why wouldn’t you), you’d find out soon enough if the variables are useable:

  • If the majority of phone numbers or business addresses or what-have-you are present in the database only because they came off donors’ cheques, then you’re right in not using it to predict giving. It’s not independent of giving and will harm your model. The telltale sign might be a correlation with the target variable that exceeds correlations for all your other variables.
  • If the information could have come to you any number of ways (with gift transactions being only one of them), then use with caution. That is, be alert if the correlation looks too good to be true. This is the most likely scenario, which I will discuss in detail shortly.
  • If the information could only have come from data appends or research, then you’ve got nothing much to worry about: The correlation with giving will be so weak that the variable probably won’t make it into your model at all. Or it may be a negative predictor, highlighting the people who allowed themselves to become lost in the first place. An exception to the “don’t worry” policy would be if research is conducted mainly to find past donors who have become lost — then there might be a strong correlation that will lead you astray.

An in-house predictive modeler will simply know what the case is, or will take the trouble to find out. A vendor hired to do the work may or may not bother — I don’t know. As far as my own models are concerned, I know that addresses and phone numbers come to us via a mix of voluntary and involuntary means: Via Phonathon, forms on the website, records research, and so on.

I’ve found that a simple count of all historical address updates for each alum is positively correlated with giving. But a line plot of the relationship between number of address updates and average lifetime giving suggests there’s more going on under the surface. Average lifetime giving goes up sharply for the first half-dozen or so updates, and then falls away just as sharply. This might indicate a couple of opposing forces: Alumni who keep us informed of their locations are more likely to be donors, but alumni who are perpetually lost and need to be found via research are less likely to be donors.
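The line plot described above is just a group-wise mean of lifetime giving at each update count. A toy sketch with invented numbers shaped to mimic the rise-then-fall pattern:

```python
import pandas as pd

# Invented data: average giving climbs over the first few address updates,
# then collapses for the perpetually-lost who rack up many updates.
df = pd.DataFrame({
    "n_addr_updates":  [0,  0,  1,  1,  2,   2,   6,   6,   10, 10],
    "lifetime_giving": [10, 30, 50, 70, 90, 110, 200, 240, 20, 40],
})

# Average lifetime giving at each update count: the values to plot.
curve = df.groupby("n_addr_updates")["lifetime_giving"].mean()
```

A non-monotonic curve like this is the tip-off that a single linear term won't capture the variable, and that re-expressing it (capping, binning, or splitting by source) is worth trying.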

If you’re lucky, your database not only has a field in which to record the source of updates, but your records office is making good use of it. Our database happens to have almost 40 different codes for the source, applied to some 300,000 changes of address and/or phone number. Not surprisingly, some of these are not in regular use — some account for fewer than one-tenth of one percent of updates, and will have no significance in a model on their own.

For the most common source types, though, an analysis of their association with giving is very interesting. Some codes are positively correlated with giving, some negatively. In most cases, a variable is positive or negative depending on whether the update was triggered by the alum (positive), or by the institution (negative). On the other hand, address updates that come to us via Phonathon are negatively correlated with giving, possibly because by-mail donors tend not to need a phone call — if ‘giving’ were restricted to phone solicitation only, perhaps the association might flip toward the positive. Other variables that I thought should be positive were actually flat. But it’s all interesting stuff.

For every source code, a line plot of average LT giving and number of updates is useful, because the relationship is rarely linear. The relationship might be positive up to a point, then drop off sharply, or maybe the reverse will be true. Knowing this will suggest ways to re-express the variable. I’ve found that alumni who have a single update based on the National Change of Address database have given more than alumni who have no NCOA updates. However, average giving plummets for every additional NCOA update. If we have to keep going out there to find you, it probably means you don’t want to be found!

Classifying contact updates by source is more work, of course, and it won’t always pay off. But it’s worth exploring if your goal is to produce better, more accurate models.

23 April 2010

The big list: 85 predictor variables for alumni models

Filed under: Model building, Predictor variables, regression — kevinmacdonell @ 10:06 am

Here is my attempt at compiling an exhaustive list of every predictor variable I have ever tested in the environment of my data – 85 of them! Not every variable is listed separately – some are grouped together by type or source. In some cases I’ve indicated whether the variable is an indicator variable (0/1) or a continuous variable, as necessary. A few variables are peculiar to the institution where I created my models. Variables that came from external sources are marked with an asterisk.

Some of these predictors were never used in a model because they were eclipsed by other, related variables that had stronger correlation with the dependent variable. Others (such as gender) proved problematic and were left out of my models for specific reasons. And some were tested and found not to be predictive at all. (A final model may contain only 15 to 20 good predictor variables.) Still, I include them all here, because any one of them might add value to models you build for your own data.

Also note: A number of predictors, listed at the end, are based on giving history. These are NOT to be used when your predicted value is ‘giving’. These variables were used in other models, such as Planned Giving potential and likelihood to attend events.

  • Class year
  • Earned a degree / Did not earn a degree
  • Number of degrees earned
  • Faculty is Education
  • Faculty is Business
  • Faculty is Arts
  • Faculty is Science
  • Spouse name present
  • Spouse is an alum
  • Spouse has giving (0/1)
  • Spouse lifetime giving (continuous)
  • Student activities present (0/1), e.g. athletics
  • Number of student activities (continuous)
  • Religion present (0/1)
  • Religion is Roman Catholic (0/1)
  • Number of refusals to pledge
  • Refusal reason ‘will handle donation ourselves’
  • Requested to be excluded from affinity programs
  • Requested to be excluded from phone solicitation
  • Preferred address type is ‘Business’
  • Seasonal address present
  • Number of address updates
  • Address is in U.S.A.
  • Address is international
  • Province is Nova Scotia [also tested variables for other provinces]
  • Postal code is rural
  • Postal code is urban
  • Variables based on specific PSYTE cluster codes*
  • Has ‘Found’ code (i.e. records researcher has had to locate alum marked lost)
  • Prefers to read alumni magazine online (‘Green’ option)
  • Home phone number present
  • Business phone number present
  • Mobile phone number present
  • Seasonal phone number present
  • Number of phone updates
  • Home phone number is on Canada’s National Do Not Call Registry*
  • Email present
  • Number of email updates
  • Gender
  • Female-widowed
  • Female-married
  • Marital status ‘married’
  • Marital status ‘single’
  • Marital status ‘widow’
  • Marital status ‘divorced’
  • Marital status – other
  • Name prefix is “Dr.”
  • Name prefix is “Rev.” (or other religious)
  • Name prefix is Hon., Justice, or similar
  • Length of entire name
  • Nickname present
  • First name is single initial
  • Middle name is single initial
  • Suffix present
  • Cross-references present (0/1)
  • Number of cross-references (continuous)
  • Has attended Homecoming (0/1)
  • Number of Homecomings attended (continuous)
  • Number of President’s Receptions attended
  • Position (i.e. job title) present
  • Employer present
  • Number of employment updates
  • Employment status present
  • Employment status is ‘retired’
  • ID number begins with ‘F’ (faculty)
  • Registered as a member of the alumni online community
  • Participated in Alumni Engagement Benchmarking Survey* (0/1)
  • Engagement Survey score (continuous)*
  • [Numerous variables created from specific Engagement survey questions, including the following specific ones]
  • Lived primarily in residence while a student [survey]
  • Received a scholarship or bursary [survey]
  • Number of children under 18 [survey]
  • Enjoys speaking with student callers for Phonathon [survey]
  • Likely to attend Homecoming [survey]
  • Likely to attend an event in their area [survey]
  • Holds degrees from other universities [survey]
  • Number of close family members who are also alumni [survey]
  • Span of giving (last year of giving minus first year of giving)
  • Frequency of giving (gifts per year during span of giving)
  • Number of years in which gifts were made
  • Lifetime giving
  • Number of gifts
  • Recency: Gave in past year
  • Recency: Gave at least once in past two years
  • Recency: Gave at least once in past three years

Every year I discover new data points hiding in our database. Many other variables are out there, but often the data exists only for our youngest alumni. Someday, I’m sure, this additional data will yield cool new predictors. For ideas on other variables to look for in your data (including non-university data), refer to the list that begins on page 138 of Joshua Birkholz’s book, “Fundraising Analytics.”
