One day earlier this fall on my walk to the office, I passed a young woman bundled up in toque and sweater, sitting in a fold-up chair at an intersection. She was holding a clipboard, and as I passed by, I heard a click from somewhere in that bundle. She was counting. Whether it was cars going through the intersection, or whether I myself had just been counted, I don’t know. I could have asked her, but I knew what she was doing: She was collecting data.
All those clicks might be used by a local business or charity looking for the best location and time to solicit passersby, or they might find their way into GIS and statistical analysis and be used by city planners working on traffic control issues. Locating business franchises, planning for urban disasters, optimizing emergency services — all sorts of activities are based on the mundane activity of counting.
This week I’m thinking about a different type of click: the reams of data that flow from Phonathon. If your institution is fortunate enough to have a call centre that is automated, you may be sitting on a wealth of data that never makes it into the institutional database. (Thus, “hidden”.) In our program, only a few things are loaded into Banner from CampusCall: Address updates, employment updates, any requested contact restrictions, and the pledges themselves. The rest stays behind in the Oracle database that runs the calling software, and I am only now pulling out some interesting bits which I intend to analyze over the coming days.
Call centre data is not just about the Phonathon program. Gathered from many thousands of interactions across a broad swath of your constituency, this data contains clues that will potentially inform any model you create, including giving by mail, Planned Giving, even major gifts.
What data am I looking for? So far, here’s what I have, plus some early intuition about what it might tell me.
- ID: Naturally, I’ll need prospect IDs in order to match my data up, both across calling projects and in my predictive models themselves.
- Last result code: The last call result coded by the student caller (No Pledge, Answering Machine, etc.). There are many codes, and I will discuss those in more detail in a future post.
- Day call: People who tell us they’d rather be called back during the day (at the office, in many cases) are probably statistically different from the rest.
- Number of attempts: This is the number of times a prospect was called before we finally reached them or gave up. I suspect high call attempt numbers are associated with lower affinity, although that remains to be seen. It’s probably more specific than that — high attempt numbers make a person a relatively poor phone prospect, but may cause them to score better in a mail-solicitation model.
- Refusal reason: The reason given by the prospect for not making a pledge, usually chosen by the Phonathon employee from a drop-down menu of the most common responses. Refusal reasons are not always well-tracked, but they’re potentially useful for designing strategies aimed at overcoming objections. I’ve observed in the past that certain refusal reasons are actually predictive of giving (by mail).
- Talk time: The length of the call, in seconds. People who pledge are on the phone longer, of course, but not every long call results in a pledge. I think of longer calls as a sign of successful rapport-building.
There are other important types of information: Address and employment updates, method of payment and so on — but these are all coded in our database and I do not need to extract them from the Phonathon software. My focus today is on hidden data — the data that gets left behind.
In CampusCall, prospects are loaded into giant batches called “projects”. Usually there is only one project per term, but multiple projects can be run at once. Each one is like its own separate database. I have data for ten projects conducted from 2007 to the present. I had to extract data for each project separately, and then match all the records up by ID in order to create one huge file of historical calling results. The total number of records in all the extracts was 189,927; when matched up they represent 56,216 unique IDs. Yum!
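The stacking-and-matching step can be sketched in Python with pandas. This is only an illustration, not the actual extraction: the DataFrames, column names, and project labels below are made-up stand-ins for the real per-project extracts pulled from the CampusCall Oracle database.

```python
# Consolidate per-project Phonathon extracts into one table keyed by prospect ID.
# The data and column names here are hypothetical, for illustration only.
import pandas as pd

# Two tiny mock "project extracts" standing in for the ten real ones
fall_2007 = pd.DataFrame({"id": [101, 102, 103],
                          "attempts": [2, 5, 1],
                          "talk_time": [90, 0, 40]})
fall_2008 = pd.DataFrame({"id": [102, 104],
                          "attempts": [3, 7],
                          "talk_time": [120, 0]})

extracts = {"2007F": fall_2007, "2008F": fall_2008}

frames = []
for project, df in extracts.items():
    df = df.copy()
    df["project"] = project          # remember which project each row came from
    frames.append(df)

# Stack everything: one row per (prospect, project) pairing
all_calls = pd.concat(frames, ignore_index=True)

total_records = len(all_calls)            # analogous to the 189,927 records
unique_ids = all_calls["id"].nunique()    # analogous to the 56,216 unique IDs
print(total_records, unique_ids)
```

Keeping a project column on every row preserves the option of building either per-project or per-prospect variables later on.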
Where I go from here will be discussed in future posts. I need to put some thought into the variables I will create. For example, will I simply add up all call attempts into a single variable called “Attempts”, or should I calculate an average number of attempts, keeping in mind that some prospects were called in some projects and not others?
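Both candidate versions of an “Attempts” variable fall out of a single grouped summary. Again, a hedged sketch on made-up data: the total rewards (or penalizes) prospects who appeared in many projects, while the per-project average is more comparable across prospects who were called in different numbers of projects.

```python
# Two candidate "Attempts" variables: a raw total across all projects, and an
# average per project. The call records below are invented for illustration.
import pandas as pd

calls = pd.DataFrame({
    "id":       [101, 101, 101, 102, 103],
    "project":  ["2007F", "2008F", "2009F", "2008F", "2009F"],
    "attempts": [2, 4, 6, 3, 8],
})

per_prospect = calls.groupby("id")["attempts"].agg(
    total_attempts="sum",    # one number per prospect, inflated by project count
    mean_attempts="mean",    # adjusts for how many projects a prospect was in
)
print(per_prospect)
```

Which version earns a place in the model is an empirical question; there is nothing stopping you from testing both.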
Until I figure these things out, here’s a final thought for today. If your job is handling data, then it’s also your job to understand where that data comes from and how it is gathered. Stick your nose into other people’s business from time to time, and get involved in the establishment of new processes that will pay off in good data down the road. Go to the person who runs your Phonathon and ask him or her if refusal reasons are being tracked. (In an automated system, it’s not that hard.) If you ARE the person running the Phonathon, make sure your callers are trained to select the right code for the right result.
In other words, it all starts with that young person bundled against the cold: The point at which data is collected. What happens here determines whether the data is good, usable, reliable. Without this person and her clicker, not much else is possible.
P.S. If you’re interested in analyzing your call centre data, have a read of this white paper by Peter Wylie: What Makes a Call Successful.