
Alumni engagement scoring has an undeniable appeal. What could be simpler? Just add up how many events an alum has attended, add more points for volunteering, add more points for supporting the Annual Fund, and maybe some points for other factors that seem related to engagement, and there you have your score. If you want to get more sophisticated, you can try weighting each score input, but generally engagement scoring doesn’t involve any advanced statistics and is easily grasped.
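If it helps to see the arithmetic, here is a minimal sketch of that kind of additive score. The point values and column names (events_attended, is_volunteer, annual_fund_gifts) are made up for illustration; your own inputs and weights would be whatever your team settles on.

```python
# A minimal sketch of a simple additive engagement score.
# All point values and column names are hypothetical examples.
import pandas as pd

alums = pd.DataFrame({
    "alum_id": [101, 102, 103],
    "events_attended": [4, 0, 1],      # lifetime event attendance
    "is_volunteer": [1, 0, 1],         # ever volunteered (1/0)
    "annual_fund_gifts": [3, 0, 0],    # number of Annual Fund gifts
})

# One point per event, five points for volunteering, two points per gift.
alums["engagement_score"] = (
    alums["events_attended"] * 1
    + alums["is_volunteer"] * 5
    + alums["annual_fund_gifts"] * 2
)

print(alums[["alum_id", "engagement_score"]])
```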
Not so with predictive modelling, which does involve advanced stats and isn’t nearly as intuitive; often it’s not possible to really say how an input variable is related to the outcome. It’s tempting, too, to think of an engagement score as being a predictor of giving and therefore a good replacement for modelling. Actually, it should be predictive — if it isn’t, your score is not measuring the right things — but an engagement score is not the same thing as a predictive model score. They are different tools for different jobs.
Not only are engagement scoring schemes different from predictive models, their simplicity is deceptive. Engagement scoring is incomplete without some plan for acting on observed trends with targeted programming. This implies the ability to establish causal drivers of engagement, which is a tricky thing.
Score, observe the trend, act, then score again: that’s a sequence of events, not a one-time thing. In fact, engagement scoring is like checking the temperature at regular intervals over a long period of time, looking for up and down trends not just for the group as a whole but also for important subgroups defined by age, sex, class year, college, degree program, geography or other divisions. This requires discipline: taking measurements in exactly the same way every year (or quarter, or what-have-you). If the score is fed by a survey component, you must survey constantly and consistently.
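To make the temperature-taking idea concrete, here is a rough sketch of tabulating an average score by measurement period and by one subgroup (decade of graduation). The column names are assumptions; the point is simply that the same calculation gets repeated, unchanged, every period.

```python
# Sketch: average engagement score by measurement year and graduation decade.
# Assumes a table of scores calculated the same way every year.
import pandas as pd

scores = pd.DataFrame({
    "alum_id":      [101, 102, 101, 102],
    "measure_year": [2011, 2011, 2012, 2012],
    "class_year":   [1985, 2004, 1985, 2004],
    "score":        [12, 3, 14, 5],
})

scores["grad_decade"] = (scores["class_year"] // 10) * 10
trend = scores.pivot_table(index="measure_year",
                           columns="grad_decade",
                           values="score",
                           aggfunc="mean")
print(trend)  # rows = measurement periods, columns = subgroups
```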
Predictive models and engagement scores have some surface similarities. They share some variables, the output of both is a numerical score applied to every individual, and both require database work and math to calculate. Beyond that, however, they are built in different ways and for different purposes. Let’s explore the key differences in more depth:
The purpose of modelling is prediction, for ranking or segmentation. The purpose of engagement scoring is comparison.
Predictive modelling scores are not usually included in reports. Used immediately in decision making, they may never be seen by more than one or two people. Engagement scores are included in reports and dashboards, and influence decision-making over a long span of time.
The target variable of a predictive model is quantifiable (e.g., giving, measurable in dollars). In engagement scoring there is no target variable, only an output: a construct called “engagement,” which is not itself directly measurable.
Potential input variables for predictive models are numerous (100+) and vary from model to model. Input variables for engagement scores are limited to a handful of easily measured attributes (giving, event attendance, volunteering) which must remain consistent over time.
Variables for predictive models are chosen primarily using statistical methods (correlation) and only secondarily using judgment and “common sense.” For example, if the presence of a business phone number is highly correlated with being a donor, it may be included in the model. For engagement scores, variables are chosen by consensus of stakeholders, primarily according to subjective standards. For example, event attendance and giving would probably be deemed by the committee to indicate engagement, and would therefore be included in the score. Advanced statistics rarely come into play. (For more thoughts on this, read “How you measure alumni engagement is up to you.”)
In predictive models, giving and variables related to the activity of giving are usually excluded as variables (if ‘giving’ is what we are trying to predict). Using any aspect of the target variable as an input is bad practice in predictive modelling and is carefully avoided. You wouldn’t, for example, use attendance at a donor recognition event to predict likelihood to give. In engagement scoring, though, giving history is usually a key input, as it is common sense to believe that being a donor is an indication of engagement. (It might be excluded or reported separately if the aim is to demonstrate the causal link between engagement indicators and giving.)
Modelling variables are weighted using multiple linear regression or another statistical method that calculates the relative influence of each variable while simultaneously controlling for the influence of all the other variables in the model. Engagement score variables are usually weighted according to gut feel. For example, coming to campus for Homecoming seems a stronger signal than showing up for a pub night in one’s own city, therefore we give it more weight. (A sketch of the statistical approach appears below.)
The quality of a predictive model is testable, first against a validation data set, and later against actual results. But there is no right or wrong way to estimate engagement, therefore the quality of scores cannot be evaluated conclusively.
The variables in a predictive model have complex relationships with each other that are difficult or impossible to explain except very generally. Usually there is no reason to explain a model in detail. The components in an engagement score, on the other hand, have plausible (although not verifiable) connections to engagement. For example, volunteering is indicative of engagement, while Name Prefix is irrelevant.
Predictive models are built for a single, time-limited purpose and then thrown away. They evolve iteratively and are ever-changing. On the other hand, once established, the method for calculating an engagement score must not change if comparisons are to be made over time. Consistency is key.
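To ground a few of the statistical points above (screening candidate variables by correlation, letting a regression assign the weights, and checking the model against a holdout sample), here is a minimal sketch. The file name, column names, and the use of scikit-learn are assumptions for illustration only; any stats package can do the same job.

```python
# Sketch: correlation screening, regression-derived weights, holdout validation.
# File name, column names, and the 0.1 threshold are illustrative assumptions.
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("alums.csv")          # hypothetical extract from the database
candidates = ["has_business_phone", "event_count", "is_volunteer", "years_out"]
target = "lifetime_giving"

# 1. Screen candidates: keep those with a non-trivial correlation to giving.
corr = df[candidates].corrwith(df[target])
keep = corr[corr.abs() > 0.1].index.tolist()

# 2. Weight the survivors with multiple linear regression,
#    holding out 25% of records for validation.
X_train, X_test, y_train, y_test = train_test_split(
    df[keep], df[target], test_size=0.25, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(dict(zip(keep, np.round(model.coef_, 3))))   # the model-derived weights

# 3. Check the model against the holdout set before trusting the scores.
print("Holdout R-squared:", round(model.score(X_test, y_test), 3))
```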
Which is all to say: alumni engagement scoring is not predictive modelling. (And neither is RFM analysis.) Only predictive modelling is predictive modelling.
There’s been some back-and-forth on one of the listservs about the “correct” way to measure and score alumni engagement. One vendor, claiming to specialize in scientific rigor, has been pressing for more of it. That emphasis is misplaced.
No doubt there are sophisticated ways of measuring engagement that I know nothing about, but the question I can’t get beyond is, how do you define “engagement”? How do you make it measurable so that one method applies everywhere? I think that’s a challenging proposition, one that limits any claim to “correctness” of method. This is the main reason that I avoid writing about measuring engagement — it sounds analytical, but inevitably it rests on some messy, intuitive assumptions.
The closest I’ve ever seen anyone come is Engagement Analysis Inc., a firm based here in Canada. They have a carefully chosen set of engagement-related survey questions which are held constant from school to school. The questions are grouped in various categories or “drivers” of engagement according to how closely related (statistically) the responses tend to be to each other. Although I have issues with alumni surveys and the dangers involved in interpreting the results, I found EA’s approach fascinating in terms of gathering and comparing data on alumni attitudes.
(Disclaimer: My former employer was once a client of this firm’s but I have no other association with them. Other vendors do similar and very fine work, of course. I can think of a few, but haven’t actually worked with them, so I will not offer an opinion.)
Some vendors may make claims of being scientific or analytically correct, but the only requirement in quantifying engagement is that it be reasonable and (if you are benchmarking against other schools) consistent from school to school. In general, if you want to benchmark against other institutions, engage a vendor to do it right, because it’s not easily done.
But if you want to benchmark against yourself (that is, over time), don’t be intimidated by anyone telling you your method isn’t good enough. Just do your own thing. Survey if you like, but call first upon the real, measurable activities that your alumni participate in. There is no single right way, so find out what others have done. One institution will give more weight to reunion attendance than to showing up for a pub night, while another will weigh all event attendance equally. Another will ditch event attendance altogether in favour of volunteer activity, or some other indicator.
Can anyone say definitively that any of these approaches is wrong? I don’t think so — they may be just right for the school doing the measuring. Many schools (mine included) assign fairly arbitrary weights to engagement indicators based on intuition and experience. I can’t find fault with that, simply because “engagement” is not a quantity. It’s not directly measurable, so we have to use proxies which ARE measurable. Other schools measure the degree of association (correlation) between certain activities and alumni giving and base their weights on that, which is smart. But it’s all the same to me in the end, because “giving” is just another proxy for the freely interpretable quality of “engagement.”
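For the curious, here is a bare-bones sketch of that correlation-based weighting idea: correlate each measurable activity with giving, then scale the correlations into weights. The column names are hypothetical, and this is only one reasonable choice among many, not the “correct” way.

```python
# Sketch: derive engagement-score weights from each indicator's
# correlation with giving, then scale them to sum to 1.
# Column names are hypothetical.
import pandas as pd

df = pd.read_csv("alums.csv")
indicators = ["event_count", "is_volunteer", "reunion_count"]

corr = df[indicators].corrwith(df["is_donor"]).clip(lower=0)
weights = corr / corr.sum()
df["engagement_score"] = (df[indicators] * weights).sum(axis=1)
print(weights)
```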
Think of devising a “love score” to rank people’s marriages in terms of the strength of the pair bond. A hundred analysts would head off in a hundred different directions at Step 1: Defining “love”. That doesn’t mean the exercise is useless or uninteresting; it just means that certain claims have to be taken with a grain of salt.
We all have plenty of leeway to choose the proxies that work for us, and I’ve seen a number of good examples from various schools. I can’t say one is better than another. If you do a good job measuring the proxies from one year to the next, you should be able to learn something from the relative rises and falls in engagement scores over time and across different groups of alumni.
Are there more rigorous approaches? Yes, probably. Should that stop you from doing your own thing? Never!
Have you read any Proust? His voluminous novel contains many unsentimental thoughts about friendship and love. Among them is the idea that the opposite of love is not hate. The opposite of love is indifference.
The constituents in your database who are not engaged are not the ones who write nasty letters. They’re not the ones who give you a big thumbs-down on your survey. They’re not the ones who criticize the food at your gala dinner. They’re not the ones who tell your phone campaign callers never to call again.
Nope. Your non-engaged constituents are the ones you never hear from. The ones who chuck out your mailings unopened. The ones who ignore the invitation to participate in a survey. The ones who have never attended an event. The ones who never answer the phone.
If your school or organization is typical, a good-sized portion of your database falls into this category. There’s money in identifying who is truly not engaged, and therefore not worth wasting resources on.
The ones who are moved to criticize you, the ones who have opinions about you, the ones who want to be contacted only a certain way — ah, they’re different.
The future belongs to those who can tell the difference.
(Download a printer-friendly PDF version of this paper: Online behaviour of alums)
For a number of years John Sammis and I have been pushing colleges and universities to examine the data they (or their vendors) collect for alums who are members of their online communities. For example, we encourage them to look at very basic things like:
Why do we think they should be recording and examining these kinds of data? Because (based on some limited but compelling evidence) we think such data are related to how much and how often alums give to their alma maters, as well as how engaged they are with these institutions (e.g., reunion attendance, volunteering, etc.). To ignore such data means leaving money on the table and losing a chance to spot alums who are truly interested in the school, even if they’ll never become major givers.
Frankly the response to our entreaties has been less than heartening:
But we’re nothing if not persistent. So what we’ve done here is put together some data from a four-year higher education institution that has a pretty active online community. Granted, it’s only one school, but the data show a pronounced relationship between number of website visits and several different measures of alumni engagement and alumni giving.
We have to believe this school is not a glaring exception among the thousands of schools out there that have online communities. Our hope is that you’ll read through what we have to show and tell and conclude, “What the heck. Why don’t we take a similar look at our own data and see what we can see. Can’t hurt.”
Nope. Can’t hurt, and it might help – might help a lot.
A View of the Overall Distribution of Website Visits and the Distribution of Visits by Class Year
Table 1 shows that almost exactly two thirds of the alums have never visited the school’s website as an identifiable member of the school’s online community. The remaining third are roughly evenly divided among four categories: one visit; two to three visits; four to seven visits; and eight or more visits.
Table 1: Frequency and Percentage Distribution of Website Visits for More Than 40,000 Alums
As soon as we saw this distribution, we were quite sure it would vary a great deal depending on how long people had been out of school. To confirm that hunch, we divided all alums into ten roughly equal-sized groups (i.e., deciles) by class year.
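If you want to try the same split on your own data, a rough sketch (assuming a simple one-row-per-alum table with a class_year column; the file and column names are our assumptions) might look like this:

```python
# Sketch: split alums into ten roughly equal-sized groups by class year.
# Assumes a DataFrame with one row per alum and a class_year column.
import pandas as pd

alums = pd.read_csv("alums.csv")

# Ranking first breaks ties so the ten groups come out roughly equal in size.
alums["cy_decile"] = pd.qcut(alums["class_year"].rank(method="first"),
                             10, labels=False) + 1

summary = alums.groupby("cy_decile")["class_year"].agg(["count", "median", "min", "max"])
print(summary)   # mirrors the layout of Table 2
```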
Table 2: Count, Median Class Year, and Minimum and Maximum Class Years for All Alums Divided into Deciles
As you can see in Table 2, there are some very senior people in this alumni universe, and there are some very junior people. For example, the majority of folks in Decile 10 (CY 2006 – CY 2010) are probably in their 20s. What about Decile 1 (CY 1926 – CY 1958)? It’s a safe bet that these folks are all over 70, and we may have at least one centenarian in the group (which we think is pretty cool).
If you look at Table 3, you can see the percentage distribution of website visits for each Class Year Decile. However, the problem with that table (and most tables that have lots of information in them) is that (unless you’re a data geek like we are) it’s not something you want to spend a lot of time looking at. You’d probably prefer to look at a chart, a graphic display of the data. So what we’ve done here (and throughout this paper) is display the data graphically for the folks in Decile 1, Decile 5, and Decile 10 – very senior people, middle-aged people, and very young people.
Table 3: Percentage of Website Visits by Class Year Decile
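For anyone replicating this, here is a sketch of how a table like Table 3 could be produced; the same pattern, with a different second column, yields Tables 4 through 7. The column names (web_visits, cy_decile) and the file name are our assumptions, carrying on from the previous sketch.

```python
# Sketch: percentage distribution of website visits within each class year decile.
# Assumes the cy_decile column from the previous sketch plus a web_visits count.
import pandas as pd

alums = pd.read_csv("alums_with_deciles.csv")   # hypothetical file
bins = [-1, 0, 1, 3, 7, float("inf")]
labels = ["0", "1", "2-3", "4-7", "8+"]
alums["visit_cat"] = pd.cut(alums["web_visits"], bins=bins, labels=labels)

table3 = pd.crosstab(alums["cy_decile"], alums["visit_cat"], normalize="index") * 100
print(table3.round(1))

# Swapping the second variable for ever_volunteered, attended_reunion, or a
# giving flag gives the same kind of breakdown shown in Tables 4 through 7.
```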
Clearly our hunch was right. The distribution of website visits is highly related to how long people have been out of school:
The Relationship between Number of Website Visits and Alumni Engagement
If you work in higher education advancement, you probably hear the term “alumni engagement” mentioned several times a week. It’s something lots and lots of folks are concerned about. And plenty of these folks are finding more and more ways to operationally define the term.
Here we’ve taken a very simple approach. We’ve looked at whether or not an alum had ever volunteered for the institution and whether or not an alum had ever attended a reunion.
Volunteering
Table 4 and Figures 4 to 6 show the relationship between number of website visits and volunteering. Just to be clear on what we’re laying out here, let’s go through some of the details of Table 4.
We’ll use Class Year Decile 1 (alums who graduated between 1926 and 1958) as an example. Look at the alums in this decile who have never visited the website: only 17.1% of them have ever volunteered. On the other hand, 42.9% of alums who have visited the website 8 or more times have volunteered. If you look at Figure 4, of course, you’ll see the same information depicted graphically.
Table 4: Percentage of Alums by Number of Website Visits for All Deciles Who Ever Volunteered
There are two facts that stick out for us in Table 4 and Figures 4 to 6:
Reunion Attendance
If you look through Table 5 and Figures 7 to 9, you’ll see a relationship between number of website visits and reunion attendance that’s very similar to what you saw between number of website visits and volunteering. The one exception would be the youngest group of alums – those in Decile 10 who graduated between 2006 and 2010. These alums are simply too young to have attended a five-year reunion. (Although it would appear that several of them found a way to make it back to school anyway – good for them.)
Table 5: Percentage of Alums by Number of Website Visits for All Deciles Who Ever Attended a Reunion
The Relationship between Number of Website Visits and Giving
There is no question that advancement offices are interested in alumni engagement. But if we’re realistic, we have to admit they tend to view engagement as mainly a step in the direction of one day becoming a donor. So let’s take a look at how number of website visits is related to alumni giving at this school.
We’ve created two sets of tables and figures to allow you to get a clear look at all this:
Browse through all this material. After you’ve done that, we’ll tell you what we see.
Table 6: Percentage of Alums by Number of Website Visits for All Deciles Who Have Given Anything in the Last Two Fiscal Years
Table 7: Percentage of Alums by Number of Website Visits for All Deciles Who Have Given $10,000 or More Lifetime
Clearly, there is a lot of information contained in these tables and charts. But if we stand back from all of it, the picture becomes clear. Regardless of how long alums have been out of school, those who have visited the website are better recent givers, and better major givers, than those who have not.
For example, let’s focus on alums who graduated before 1958 (Decile 1). Those who have visited the website at least 8 times are almost twice as likely to have given in the last two fiscal years as those who have never visited the site (75% versus 41.6%). If we look at giving of $10,000 or more lifetime for this same Decile, the difference is even more striking: 42.9% versus 12.5%.
Let’s jump down to Decile 10, the “youngsters” who graduated between 2006 and 2010. Understandably, almost none of these alums have given $10,000 or more lifetime. But look at Figure 12. For this group the relationship between number of website visits and giving over the last two fiscal years is striking:
Where to Go from Here
Clearly, there is a strong relationship between this simple web metric (number of website visits) and alumni engagement and alumni giving at this particular school. It’s reasonable to assume that the same sort of relationship holds true for other schools. If you agree with that assumption, then we think it’s more than worth your while to take a look at similar data at your own institution.
At this point you might decide:
“Look guys, this is all very interesting, but we simply don’t have the time, resources, nor staff to do that. Maybe sometime in the future, when things are less hectic around here, we’ll take your advice. But not now.”
As much as we love this sort of analysis, we totally get a decision like that. We may be specialists, but we talk to enough people in advancement every week to realize you have a lot more on your minds than data mining and predictive modeling.
On the other hand, you might conclude that what we’ve done here is something you’d like to try to replicate, or improve on. If so, here’s what we’d recommend:
1. Find Out What Kind of Data Is Available
Depending on how your shop is set up, this may take some persistence and digging. If it were us, we’d be trying to find out:
In all probability, you’ll be dealing with a vendor (either directly or through your IT folks) to get answers to these questions. Expect some pushback. A dialogue that goes like this would not be unusual:
YOU: Can I get the number of e-mails and e-newsletters that each of our alums has opened since the school has been sending out that kind of stuff?
VENDOR: We can certainly give you the number of e-mails and number of e-newsletters that were opened on each date that one was sent out.
YOU: That’s great, but that’s not what I’m looking for. I need to know, on a record-by-record basis, which alums opened the e-communication, and I need a total count of openings for each alum.
VENDOR: That’ll take some doing.
YOU: But you can do it?
VENDOR: I suppose.
YOU: Terrific!
2. Ask Your Technical Folks to Get The Data Into Analyzable Form.
What does “analyzable form” mean? To us that just means getting the data into spreadsheet format (probably Excel) where the first field will probably be the unique ID number you use to keep track of all the alums (and other constituents) in your fundraising database. For starters, we’d recommend something very simple. For example:
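Purely as a hypothetical illustration of what we mean by “very simple,” such a file might look like the one sketched below (the column names are ours, not a prescription): a unique ID, plus one column of counts per online behaviour you can get your hands on.

```python
# Sketch of the kind of simple, one-row-per-alum file we have in mind.
# Column names are hypothetical; the key is a unique ID plus one
# column of counts per online behaviour.
import pandas as pd

online = pd.DataFrame({
    "alum_id":       [101, 102, 103],
    "web_visits":    [8, 0, 2],
    "emails_opened": [15, 1, 0],
    "enews_opened":  [6, 0, 3],
})

# Writing .xlsx requires the openpyxl package; a plain CSV works just as well.
online.to_excel("online_behaviour.xlsx", index=False)
```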
In our opinion, this kind of file should be very simple to build. In our experience, however, that is often not the case. (Why? How much time you got?)
Our frustrations with this sort of problem notwithstanding, keep pushing for the file. Be polite. Be diplomatic. And, above all, be persistent.
3. Do Some Simple Analyses with The Data.
There are any number of ways to analyze your data. Our bias would be to have you import the Excel file into a stats software package, and then do the analysis. (You can do it in Excel, but it’s a lot harder than if you use something like SPSS or Data Desk [our preference].)
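If the person doing the analysis happens to work in Python rather than SPSS or Data Desk, a rough sketch of this step might look like the following. The file and column names are assumptions, carrying on from the sketch in step 2.

```python
# Sketch: import the Excel file, merge it with a giving flag, and
# replicate the kind of breakdown shown in Tables 6 and 7.
# File names and column names are assumptions.
import pandas as pd

online = pd.read_excel("online_behaviour.xlsx")    # the file from step 2
giving = pd.read_csv("giving_flags.csv")           # alum_id, gave_last_two_fy (1/0)
df = online.merge(giving, on="alum_id", how="left").fillna({"gave_last_two_fy": 0})

df["visit_cat"] = pd.cut(df["web_visits"], [-1, 0, 1, 3, 7, float("inf")],
                         labels=["0", "1", "2-3", "4-7", "8+"])

pct_gave = df.groupby("visit_cat", observed=True)["gave_last_two_fy"].mean() * 100
print(pct_gave.round(1))   # % who gave in the last two fiscal years, by visit category
```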
If you can’t do this yourself, we’d recommend that you find someone on your team or on your campus to do it for you. The right person, when you ask them if they can roughly replicate the tables and charts included in this paper, will say something like, “Sure,” “No problem,” “Piece of cake,” etc. If they don’t, keep looking.
4. Share The Results With Colleagues You Think Would Find Them Interesting.
Sharing your results with colleagues should be stimulating and enjoyable. You know the folks you work with and have probably already got some in mind. But here are a few suggestions:
When you can, let us know how things go.