CoolData blog

3 October 2017

Our “data-informed decision making” journey

Filed under: Analytics, Business Intelligence, Dalhousie University — kevinmacdonell @ 6:34 pm


Building a business intelligence and analytics program can take years, and the move toward data-informed decision making is a cultural evolution that might never be complete. In my previous blog post, I talked about what advancement BI looks like in its ideal state (Analytics as an organizing principle). Today I want to talk about the messy reality.


Looking back at our own journey at Dalhousie University, I realize that we didn’t pursue the most direct and well-lit path, but we did learn a lot along the way. Eight years ago or so, we had very limited capability for supporting decisions with data. We still haven’t “arrived” — there is plenty more to do — but our progress is worth looking back on. It’s this progress I’ve been recounting for audiences across the country lately; it seems everyone is attempting to plan their own journey, or at least compare notes.


Here I’ll recount a few of the steps that got us to where we are today, starting with some of the obvious ingredients for a successful BI program — quality data, good software tools, and so on — and then talk about some of the perhaps less obvious influences that were essential for driving us forward.


First of all, DATA: Years ago, the general perception in our office was that our data was in bad shape. Our coverage rate for contact and employment information was believed to be low, and the accuracy of the data was frequently called into question, based largely on errors spotted in lists. But aside from anecdotes, we really had no objective picture.


We developed reports to get a handle on coverage rates, along with the ability to automatically archive those data points so we could track progress and gaps over time. With our alumni constituency alone growing by several thousand individuals a year, we stepped away from imagining that we should aim to find every lost alum, and instead used a score to prioritize who to trace first. More importantly, the purely clerical “Alumni Records” team was reinvented as the “Constituent Data Integrity” team, tasked with going beyond data entry to developing and acting on data integrity audits, leading a large, cross-functional Data Integrity group to discuss integrity issues, and working much more closely with Prospect Research and Alumni Engagement to provide better support. We have also worked with frontline staff to encourage them to think of Advancement data as something they “own” and benefit from directly, with a responsibility to feed information and intelligence back to records and research staff.
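
To make that less abstract, here is a minimal sketch of the kind of coverage query involved, in Postgres-style SQL. The table and column names are hypothetical, not our actual schema, and the details will vary with your database:

  SELECT c.constituency_code,
         COUNT(DISTINCT c.constituent_id) AS alumni,
         COUNT(DISTINCT e.constituent_id) AS with_valid_email,
         ROUND(100.0 * COUNT(DISTINCT e.constituent_id)
                     / COUNT(DISTINCT c.constituent_id), 1) AS email_coverage_pct
    FROM constituent c
    LEFT JOIN email_address e
           ON e.constituent_id = c.constituent_id
          AND e.status = 'Valid'           -- only count usable addresses
   GROUP BY c.constituency_code
   ORDER BY email_coverage_pct;

Writing the same results to a date-stamped snapshot table on a schedule is what turns a point-in-time report like this into a trend line you can follow over the years.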


We also made a concerted effort to establish written definitions for fundraising terminology and drafted a standardized set of counting rules, agreed to and approved by our leadership. Embedded in a single, core reporting view from which all reports and analyses are derived, these rules enforce a single version of the truth across all reports and dashboards. This work is not complete, but well advanced, and starting with Development data was a good idea.
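
As an illustration only (our actual view is far more involved, and these object names are invented), a core reporting view might embed a counting rule like “hard credit only, no writeoffs” this way:

  CREATE OR REPLACE VIEW v_countable_giving AS
  SELECT g.gift_id,
         g.donor_id,
         g.gift_date,
         g.fund_code,
         g.amount
    FROM gift g
   WHERE g.credit_type = 'Hard'                 -- counting rule: hard credit only
     AND g.gift_type IN ('Outright gift', 'Pledge payment')
     AND COALESCE(g.writeoff_flag, 'N') = 'N';  -- counting rule: exclude writeoffs

Because every report and dashboard selects from the view rather than from the raw gift table, a change to a counting rule is made in one place and flows through to everything built on top of it.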


A second enabler was the development of a three-year strategic plan for Advancement Services, something we’d never had. The plan charted a way forward for the team and became the foundation for much-needed investments in personnel. Without a doubt, the most important element in a successful BI program is people — hiring well has been the biggest driver of momentum for us — but resources won’t be made available in the absence of a plan and a roadmap for the future. Our plan did not necessarily lay out everything we intended to do with reporting, BI, and analytics — we didn’t know what the ideal team and technology would look like — but we were able to clearly articulate our gaps and what we needed to help us bridge those gaps.


Developing the plan required a commitment to change. This was a big step, because our team had gotten adept at concealing its gaps with manual workarounds. For example, whenever leadership and deans needed an update on fundraising progress against campaign priorities, someone would take the raw data home each night and crunch it manually in Excel. People were getting the information they wanted, so why change? But in fact we had zero agility, and reporting was never going to grow beyond the basics. The fact that our AVP of Development felt forced to author his own reports should have been a wake-up call.


With qualified people making smart decisions, we invested considerable time in adopting Tableau as a reporting tool to bridge a years-long period of uncertainty about a centrally-supported BI tool. Over the years we evolved from senior staff being served pots of raw data and having to fend for themselves in Excel, to having our standard Development reporting automated in Tableau, with progress being made in reporting for other units. Along the way, we hired a BI Analyst to perform more ad hoc analytical and predictive modelling work. Currently we are hiring two additional BI analysts, each with a more specialized role.


Greater demand for more sophisticated reports, dashboards, and analyses meant a greater need for complex transformations of our raw transactional data. We therefore put some emphasis on hiring people who knew SQL or could learn it. My colleague Darrell Rhodenizer puts it this way: Being able to use reporting tools such as Tableau Desktop or Cognos Reporting is one thing, but being able to directly speak the language of our database enables us to use all sorts of tricks to better shape our data for the reporting environment. Other departments that have not invested in the ability to look under the hood seem to be at a disadvantage.


As a result, our team has taken over from central IT the primary responsibility for modelling our data — that is, assembling our database tables into complex data structures to serve reporting and analysis. This works well for Advancement, which at most universities is far down the list of departments in terms of central IT support, and often has frequently-changing needs as priorities shift and campaigns roll through.


It’s gone beyond just learning SQL. Darrell, as our Associate Director, Advancement Systems & Reporting, has developed a new ETL tool which has accelerated our progress and promises to change the game for years to come. Our unit’s data is extracted nightly from the university’s centrally-managed data warehouse, multiple transformations are applied to it, and the results are written back to the same warehouse. The transformed data remains under Advancement’s full control, yet it is available to all the same users through whatever tool they already have, and data model changes can be made with agility and minimal disruption to the business.
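
I won’t try to describe Darrell’s tool itself, but the extract-transform-write-back pattern it follows is common enough to sketch. One nightly step might look something like the following Postgres-style statement, with hypothetical table names:

  -- Rebuild a reporting-friendly donor summary from raw gift data
  -- and write it back to the warehouse for any reporting tool to use.
  DROP TABLE IF EXISTS adv_donor_summary;

  CREATE TABLE adv_donor_summary AS
  SELECT g.donor_id,
         MIN(g.gift_date) AS first_gift_date,
         MAX(g.gift_date) AS last_gift_date,
         SUM(g.amount)    AS lifetime_hard_credit,
         COUNT(DISTINCT EXTRACT(YEAR FROM g.gift_date)) AS distinct_years_of_giving
    FROM gift g
   WHERE g.credit_type = 'Hard'
   GROUP BY g.donor_id;

The essential point is that the transformed tables land in the same warehouse as the raw data, so users keep whatever tool they already have and simply see better-shaped tables.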


One final enabler: Outside Advancement, a new attitude to working cooperatively across departments and a new appreciation of data as an institutional asset has led to development of a data governance model and policies for opening up access to data. Before, if data was shared at all, it was done haphazardly and insecurely via Excel files. Today, we have a process for responsible use of data across the institution.


These elements of progress — technology, tools, people, skills — had a combined effect that was more than additive. We achieved an increasing momentum over the years, such that newer staff members struggle to imagine how bad things used to be in “them days.”


These and other factors were important enablers of change. Without some of them, we could not have made the improvements we did. However, they were not sufficient in themselves to drive change. I suspect we are too often prone to falsely equate analytics competence with a piece of software, or an employee with a certain title, or a team, when really it’s none of those things. We would not have hired key people, and we would not have sought out and effectively deployed new tools, had there not been forces driving us in that direction.


Internally, we faced increased demand from Advancement leadership for information and insight. The closing of a comprehensive campaign was very revealing of our gaps in reporting and analysis — and the eventual ramp-up to another campaign spurs us to ensure that we are ready.


As well, for some years now a new culture of strategic planning has taken hold, with the development and adoption of an Advancement Balanced Scorecard. This plan for the whole department has had a focusing and integrative effect — everyone sees how functions fit together, and how their own job supports the mission. As great as that is for Development or Marketing or Alumni Engagement, it’s been essential for Operations. We now have a vision for what priorities we will need to support into the future, and a chunk of that support consists of data, information, reporting, dashboards, analyses, and other analytical products — not to mention the development of KPIs directly tied to measuring Advancement’s progress against the goals and objectives of the Balanced Scorecard itself. To date, high-level strategic planning has been the most significant “focusing” factor for our BI work.


You may have noticed that these and other internal drivers of change all come from the top, whereas the “enablers” tended to rely on initiative from lower down in the organization. Again, without both, not much would have happened.


But some drivers of a culture of analytics aren’t coming from the organization itself at all. We’re growing increasingly aware of external drivers. There are some new realities out there, and the organizations that position their data teams to address these new realities will have a better chance of succeeding.


First, alumni and donors have a different relationship with institutions than they once did, and their expectations are different. Alumni populations are growing, the number of donors is decreasing, and traditional engagement methods are less effective. Friend-raising and “one size fits all” approaches to engagement are increasingly seen as unsustainable wastes of resources. University leaders are questioning the very purpose and value of typical alumni relations activities.


According to current wisdom, engaged alumni are seeking meaningful interactions that make a difference, especially interactions with students in the form of advice, mentorship, or career development. If they have anything to do with the institution itself, it’s less about nostalgia for student life than about being part of the university’s role in society and community. Barbecues and pub nights hold little appeal for truly engaged alumni who believe in your brand of higher education (or your cause), and believe in the power of your students to change the world for the better. They want to be part of the mission.


Donors, too, are looking for meaningful engagement. Through their giving they want to accomplish things in the world. If they’re giving to your institution, it is because they feel your institution is uniquely qualified to carry out the change they’re seeking. Society’s needs, not the institution’s needs, are of greatest importance to this donor. They are not interested in “giving back.” Instead of giving TO institutions, they give THROUGH institutions.


This is partly borne out in what many organizations in our sector are seeing in their Annual Fund: for years now, donor numbers have been trending down while average gift size has been going up. Donors are being more strategic with their giving, pooling resources and being more deliberate with their dollars.


These global shifts are not new, but I don’t think their real impact on the sector has yet been fully realized. Certainly for many of us, our strategies are not keeping pace. Analytics is going to be increasingly important for responding to these global shifts. A few examples follow …


In order to move away from one-size-fits-all messages and programs, and evolve toward more targeted, relevant opportunities to engage, we need to understand how engaged each individual is right now. So we, along with many other institutions, have developed a means to measure alumni engagement. Every alumnus and alumna has a score that reflects where they are on the engagement spectrum, just as we know where they are on the donor spectrum. With those two pieces of information we can invest more time and money developing opportunities aimed at the upper tier of engaged individuals, where the investment will have the most impact. (See: Why we measure engagement.) We need to engage with them on their own level, not ours, via relevant events and volunteerism. What information, programs, and services do they need, and which connect with their interests and talents?
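
Our actual formula is not the point here, but any engagement score of this kind reduces to a weighted sum of measurable indicators. A deliberately simplified sketch, with invented tables and arbitrary weights:

  SELECT a.alum_id,
           5 * COALESCE(v.volunteer_roles, 0)
         + 3 * COALESCE(ev.events_attended, 0)
         + 2 * CASE WHEN a.email_valid = 'Y' THEN 1 ELSE 0 END
         + 1 * CASE WHEN a.mail_opt_in = 'Y' THEN 1 ELSE 0 END AS engagement_score
    FROM alumni a
    LEFT JOIN (SELECT alum_id, COUNT(*) AS volunteer_roles
                 FROM volunteer_activity GROUP BY alum_id) v  ON v.alum_id = a.alum_id
    LEFT JOIN (SELECT alum_id, COUNT(*) AS events_attended
                 FROM event_attendance GROUP BY alum_id) ev ON ev.alum_id = a.alum_id;

The weights here are pulled out of the air for illustration; how a school arrives at its weights is a separate question, one I return to in an older post further down this page.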


In place of “one size fits all,” engaged alumni need more fulfilling experiences such as guest lecturing, student recruiting, mentorship, and career development and networking for students and new grads. Engagement measurement, then, is really a tool that enables alumni relations to better align itself with the mission of Advancement and the university.


Second, we aspire to understand our constituents not just based on their degree or by how much they’ve given, but through their interests and values — data we are just starting to bring together from a variety of sources in order to inform more intelligent segmentation of alumni and donors.


Third, we are doing what we can to measure the impact of programming and events. We might report that we had 100 events that attracted 10,000 attendees, but why stop there? We should also be able to say that we moved, say, 2,000 people to the next level of engagement, or that this or that event inspired 50 people to give. According to research conducted by the Education Advisory Board, a consulting firm, alumni relations does the poorest job of any office on campus in providing hard data on its real contribution to the university’s mission. Too many offices are stuck on tracking activities instead of results and outcomes.
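
Counting outcomes rather than activities does not require anything exotic; mostly it means joining attendance to what happened afterward. A rough sketch, with hypothetical tables and Postgres-style date arithmetic, counting attendees at each event who gave within a year of attending:

  SELECT ea.event_id,
         COUNT(DISTINCT ea.alum_id) AS attendees,
         COUNT(DISTINCT g.donor_id) AS attendees_giving_within_a_year
    FROM event_attendance ea
    LEFT JOIN gift g
           ON g.donor_id = ea.alum_id
          AND g.gift_date >  ea.event_date
          AND g.gift_date <= ea.event_date + INTERVAL '1 year'
   GROUP BY ea.event_id;

A count like this does not prove the event caused the giving, but it shifts the reporting from activity (we held an event) toward outcome (what changed afterward).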


Wonderful as these examples sound, and as far as we’ve come, we haven’t done everything right. There are areas where I wish we had made more progress, and things I discovered along the way that I wish I’d thought of earlier.


We’ve never had a long-range plan for the BI/analytics team. Yes, BI was a component of our three-year strategic plan and we have yearly operational plans, but there was no overarching vision of what the team would finally look like, along the lines of the three-tiered structure I outlined in my previous post. Our growth has been organic, addressing the gaps as we saw them from year to year. Perhaps that’s the right way to grow, especially as employees themselves grow and discover new strengths, but I think in a perfect world we might have had an idea of what the ideal future state would look like.


More fundamentally, a major all-at-once investment in rapid growth absolutely requires a plan. The way we did it, each new person who came on board had to be somewhat self-sufficient in provisioning themselves with data to analyze, transforming it, and so on. That’s not the way it is now: as we evolve, positions are becoming more specialized.


Second, in hindsight I would have given more thought to how data-informed decisions are actually made. I mentioned earlier that the Balanced Scorecard exercise for Advancement has provided a main focus for BI, but I can see that it’s not enough. There has to be a framework for prioritizing and directing data-informed decision making below the level addressed by the Scorecard. (I wrote about this in my previous post.) I could have spent time earlier on the structure and processes needed to make that happen.


A third thing I wish we had devoted more brainpower to was tackling self-serve list generation. Automating the generation of contact lists for event invitations, solicitations, and so on is surprisingly challenging for a whole host of reasons, and this has prevented us from putting that ability into the hands of users. Had we cracked that nut early on in the journey, we would have freed resources for more interesting work. And more generally, “self-serve” is a cultural shift which takes a lot of time, training, and reinforcement. Even if we had developed a good tool for users to pull their own ready-to-use lists, it would have taken a long time to get people to use it (regardless of what they might say about the idea of it). If you’re considering a big push for self-serve, I would warn you that the payoff will come years, not months, from now.


Data-informed decision making in general is a cultural shift; it’s not just a series of technical problems to be solved. Nothing will happen without the technology, to be sure, but the technology enables — it does not drive. You can invest heavily in a BI team and software and still not achieve a state of making decisions informed by data.


When I talk about how poorly we did some years ago, that’s not intended as a critique of the people doing the work at that time. Everyone always did the best they could with what they had to work with. In the same way, when I speak with folks from other universities who are struggling with how to make progress in this area, it’s not a lack of will or even skill that I detect: It’s more a lack of clarity about the way forward. It’s rarely obvious how to pull the pieces and people together, but with progress comes momentum. I wish you luck on your own journey!



20 February 2013

The ‘analytic’ investment

Filed under: Analytics, Data — kevinmacdonell @ 10:49 am

Everyone’s talking about predictive analytics, Big Data, yadda yadda. The good news is, many institutions and organizations in our sector are indeed making investments in analytics and inching towards becoming data-driven. I have to wonder, though, how much of current investment is based on hype, and how much is going to fall away when data is no longer a hot thing.

Becoming a data-driven organization is a journey, not a destination. Forward progress is not inevitable, and it is possible for an office, a department or an institution to slip backward on the path, even when it seems they’ve “arrived”. In order for analytics to mature from a cutting-edge “nice-to-have” into a regular part of operations, the enterprise needs to be aware of its returns to the bottom line.

In my view, current investments in analytics are often made for reasons that are well-intentioned but vague: It seems to be the right thing to do these days … we see others doing it, so we feel we need to as well … we have an agenda for innovation and this fits the bill … and so on. I’m glad to see the investment, but not every promising innovation gets to stick around. Demonstrating the ability to improve the bottom line — either through savings or by identifying new sources of revenue — will carry the day in the long run.

As I write this, I hear the jangle of railway bells at the level crossing in the early-morning dark outside my hotel room on the city’s downtown waterfront. I’m in Seattle today to attend the DRIVE 2013 conference, hosted by the University of Washington. I’ll be speaking on this topic — the “analytic” investment — later today. I have to admit to having struggled with making the session relevant for this group. For one, they don’t need convincing that making the investment is worth it. And second, if they think that I and my employer have figured out how to calculate the return on investment for analytics programs, they may be in for a disappointment. We have not.

In fact, when it comes right down to it, I like to spend my day working on cool things, interesting problems that face our department, and not so much on stuff that sounds like accounting (“ROI”). I’m betting many of the attendees of my session feel the same way. So I’ll be asking them to stop thinking about how they can get their managers, directors and vice presidents to understand the language of data and analytics. They’ll be far more successful if they try to speak the language their bosses respond to: Return on investment.

I may be a little short on answers for you, but I do have some pretty good questions.

30 November 2012

Analytics conferences: Two problems, two antidotes

A significant issue for gaining data-related skills is finding the right method of sharing knowledge. No doubt conferences are part of the answer. They attract a lot of people with an interest in analytics, whose full-time job is currently non-analytical. That’s great. But I’m afraid that a lot of these people assume that attending a conference is about passively absorbing knowledge doled out by expert speakers. If that’s what you think, then you’re wasting your money, or somebody’s money.

There are two problems here. One is the passive-absorption thing. The other is a certain attitude towards the “expert”. Today I want to describe both problems, and prescribe a couple of conferences related to data and analytics which offer antidotes.

Problem One: “Just Tell Me What To Do”

You know the answer already: Knowledge can’t be passively absorbed. It is created, built up inside you, through engagement with an other (a teacher, a mentor, a book, whatever). We don’t get good ideas from other people like we catch a cold. We actively recognize an idea as good and re-create it for ourselves. This is work, and work creates friction — this is why good ideas don’t spread as quickly as mere viral entertainment, which passes through our hands quickly and leaves us unchanged. Sure, this can be exciting or pleasant work, but it requires active involvement. That’s pretty much true for anything you’d call education.

Antidote One: DRIVE

Ever wish you could attend a live TED event? Well, the DRIVE conference (Feb. 20-21 in Seattle — click for details) captures a bit of that flavour: Ideas are front and centre, not professions. Let me explain … Many or most conferences are of the “birds of a feather” variety — fundraisers talking to fundraisers, analysts talking to analysts, researchers talking to researchers, IT talking to IT. The DRIVE conference (which I have written about recently) is a diverse mix of people from all of those fields, but adds in speakers from whole other professional universes, such as a developmental molecular biologist and a major-league baseball scout.

Cool, right? But if you’re going to attend, then do the work: Listen and take notes, re-read your notes later, talk to people outside your own area of expertise, write and reflect during the plane ride home, spin off tangential ideas. Dream. Better: dream with a pencil and paper at the ready.

Problem Two: “You’re the Expert, So Teach Me Already”

People may assume the person at the podium is an expert: that the presenter has something the audience doesn’t, and that if it isn’t magically communicated in those 90 minutes, the session hasn’t lived up to its billing. Naturally, those people are going to leave dissatisfied, because that’s not how communicating about analytics works. If you’re setting up an artificial “me/expert” divide every time you sit down, you’re impeding your ability to be engaged as a conference participant.

Antidote Two: APRA Analytics Symposium

Every year, the Association of Professional Researchers for Advancement runs its Data Analytics Symposium in concert with its international conference. (This year it’s Aug 7-8 in Baltimore.) The Symposium is a great learning opportunity for all sorts of reasons, and yes, you’ll get to hear and meet experts in the field. One thing I really like about the Symposium is the  case-study “blitz” that offers the opportunity for colleagues to describe projects they are working on at their institutions. Presenters have just 20 or so minutes to present a project of their choice and take a few questions. Some experienced presenters have done these, but it’s also a super opportunity for people who have some analytics experience but are novice presenters. It’s a way to break through that artificial barrier without having to be up there for 90 minutes. If you have an idea, or would just like more information on the case studies, get in touch with me at kevin.macdonell@gmail.com, or with conference chair Audrey Geoffroy: ageoffroy@uff.ufl.edu. Slots are limited, so you must act quickly.

I present at conferences, but I assure you, I have never referred to myself as an “expert”. When I write a blog post, it’s just me sweating through a problem nearly in real time. If sometimes I sound like I knew my way through the terrain all along, you should know that my knowledge of the lay of the land came long after the first draft. I like to think the outlook of a beginner or an avid amateur might be an advantage when it comes to taking readers through an idea or analysis. It’s a voyage of discovery, not a to-do list. Experts have written for this blog, but they’re good because although they know their way around, every new topic or study or analysis is like starting out anew, even for them. The mind goes blank for a bit while one ponders the best way to explore the data — some of the most interesting explorations begin in confusion and uncertainty. When Peter Wylie calls me about an idea he has for a blog post, he doesn’t say, “Yeah, let’s pull out Regression Trick #47. You know the one. I’ll find some data to fit.” No — it’s always something fresh, and his deep curiosity is always evident.

So whichever way you’re facing when you’re in that conference room, remember that we are all on this road together. We’re at different places on the road, but we’re all traveling in the same direction.

6 June 2012

How you measure alumni engagement is up to you

Filed under: Alumni, Best practices, Vendors — kevinmacdonell @ 8:02 am

There’s been some back-and-forth on one of the listservs about the “correct” way to measure and score alumni engagement. One vendor, which claims to specialize in rigor, is pressing for an emphasis on scientific rigor. The emphasis is misplaced.

No doubt there are sophisticated ways of measuring engagement that I know nothing about, but the question I can’t get beyond is, how do you define “engagement”? How do you make it measurable so that one method applies everywhere? I think that’s a challenging proposition, one that limits any claim to “correctness” of method. This is the main reason that I avoid writing about measuring engagement — it sounds analytical, but inevitably it rests on some messy, intuitive assumptions.

The closest I’ve ever seen anyone come is Engagement Analysis Inc., a firm based here in Canada. They have a carefully chosen set of engagement-related survey questions which are held constant from school to school. The questions are grouped in various categories or “drivers” of engagement according to how closely related (statistically) the responses tend to be to each other. Although I have issues with alumni surveys and the dangers involved in interpreting the results, I found EA’s approach fascinating in terms of gathering and comparing data on alumni attitudes.

(Disclaimer: My former employer was once a client of this firm’s but I have no other association with them. Other vendors do similar and very fine work, of course. I can think of a few, but haven’t actually worked with them, so I will not offer an opinion.)

Some vendors may claim to be scientific or analytically correct, but the only requirement for quantifying engagement is that it be reasonable and, if you are benchmarking against other schools, consistent from school to school. If you do want to benchmark against other schools, engage a vendor to do it right, because it’s not easily done.

But if you want to benchmark against yourself (that is, over time), don’t be intimidated by anyone telling you your method isn’t good enough. Just do your own thing. Survey if you like, but call first upon the real, measurable activities that your alumni participate in. There is no single right way, so find out what others have done. One institution will give more weight to reunion attendance than to showing up for a pub night, while another will weigh all event attendance equally. Another will ditch event attendance altogether in favour of volunteer activity, or some other indicator.

Can anyone say definitively that any of these approaches are wrong? I don’t think so — they may be just right for the school doing the measuring. Many schools (mine included) assign fairly arbitrary weights to engagement indicators based on intuition and experience. I can’t find fault with that, simply because “engagement” is not a quantity. It’s not directly measurable, so we have to use proxies which ARE measurable. Other schools measure the degree of association (correlation) between certain activities and alumni giving, and base their weights on that, which is smart. But it’s all the same to me in the end, because ‘giving’ is just another proxy for the freely interpretable quality of “engagement.”
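
For schools that take the correlation route, the calculation itself is not exotic. Oracle and Postgres, for example, both have a built-in CORR aggregate; given a hypothetical one-row-per-alum summary table, the raw material for weights can be pulled like this:

  SELECT CORR(s.events_attended, s.lifetime_giving) AS r_event_attendance,
         CORR(s.volunteer_roles, s.lifetime_giving) AS r_volunteering,
         CORR(s.mentor_flag,     s.lifetime_giving) AS r_mentoring   -- mentor_flag stored as 0/1
    FROM alum_summary s;

The resulting coefficients can then be scaled into weights. That is smarter than pure intuition, but for the reasons above it is no more “correct” in any absolute sense, since giving is itself just another proxy.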

Think of devising a “love score” to rank people’s marriages in terms of the strength of the pair bond. A hundred analysts would head off in a hundred different directions at Step 1: Defining “love”. That doesn’t mean the exercise is useless or uninteresting, it just means that certain claims have to be taken with a grain of salt.

We all have plenty of leeway to choose the proxies that work for us, and I’ve seen a number of good examples from various schools. I can’t say one is better than another. If you do a good job measuring the proxies from one year to the next, you should be able to learn something from the relative rises and falls in engagement scores over time and from comparisons between different groups of alumni.

Are there more rigorous approaches? Yes, probably. Should that stop you from doing your own thing? Never!

26 April 2012

For agile data mining, start with the basics

Filed under: Analytics, Pitfalls, Training / Professional Development — kevinmacdonell @ 8:56 am

Lately I’ve been telling people that one of the big hurdles to implementing predictive analytics in higher education advancement is the “project mentality.” We too often think of each data mining initiative as a project, something with a beginning and end. We’d be far better off to think in terms of “process” — something iterative, always improving, and never-ending. We also need to think of it as a process with a fairly tight cycle: Deploy it, let it work for a bit, then quickly evaluate, and tweak, or scrap it completely and start over. The whole cycle works over the course of weeks, not months or years.

Here’s how it sometimes goes wrong, in five steps:

  1. Someone has the bright idea to launch a “major donor predictive modelling project.” Fantastic! A committee is struck. They put their heads together and agree on a list of variables that they believe are most likely to be predictive of major giving.
  2. They submit a request to their information management people, or whomever toils in extracting stuff from the database. Emails and phone calls fly back and forth over what EXACTLY THE HECK the data mining team is looking for.
  3. Finally, a massive Excel file is delivered, a thing the likes of which would never exist in nature — like the unstable, man-made elements on the nether fringes of the Periodic Table. More meetings are held to come to agreement about what to do about multiple duplicate rows in the data, and what to do about empty cells. The committee thinks maybe the IT people need to fix the file. Ummm — no!
  4. Half of the data mining team then spends considerable time in pursuit of a data file that gleams in its cleanliness and perfection. The other half is no longer sure what the goal of the project was.
  5. Somehow, a model is created and the records are scored by the one team member left standing. Unfortunately, a year has passed and the person for whom the model was built has left for a new job in California. Her replacement refers to the model as “astrology.”

Allow me a few observations that follow from these five stages:

  1. Successful models are rarely produced by committee, and variables cannot be pre-selected by popular agreement and intuition — although certainly experience is a valuable source of clues.
  2. Submitting requests to someone else for data, having to define exactly what it is you want, and then waiting for the request to be fulfilled — all of that is DEATH to creative data exploration.
  3. A massive, one-time, all-or-nothing data suction job is probably not the ideal starting point. Neither is handling an Excel file with 200,000 rows and a hundred columns.
  4. Perfect data is not a realistic goal, and is not a prerequisite for fruitful data mining.
  5. A year is too long. The cycle has to be much, much tighter than that.

And finally, here are some concrete steps, based on the observations, again point-for-point:

  1. If you’re interested in data mining, try going it alone. Ask for help when you need it, but you’ll make faster progress if you explore on your own or in a team of no more than two or three like-minded people. Don’t tell anyone you’re launching a “project,” and don’t promise deliverables unless you know what you’re doing.
  2. Learn how to build simple queries to pull data from your database. Get IT to set you up. Figure out how to pull a file of IDs along with the sum of all their hard-credit giving (see the sketch following this list). Then, pull that AND something else — anything else. Email address, class year, marital status, whatever. Practice, and get comfortable with how your data is stored and how to limit it to what you want.
  3. Look into stats software, and learn some of the most common stats terms. Read up on correlation in particular. Build larger files for analysis in the stats software rather than in Excel. Read, read, read. Play, play, play.
  4. Think in terms of pattern detection, and don’t get hung up on the validity of individual data points.
  5. If you’ve done steps 1 to 4, you have the foundations in place for being an agile data miner.
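
To illustrate step 2 above, here is the sketch referred to in that list. The table and column names are invented; yours will depend on how your database is structured:

  SELECT c.constituent_id,
         c.class_year,
         c.email_address,
         COALESCE(SUM(g.amount), 0) AS total_hard_credit_giving
    FROM constituent c
    LEFT JOIN gift g
           ON g.constituent_id = c.constituent_id
          AND g.credit_type = 'Hard'
   GROUP BY c.constituent_id, c.class_year, c.email_address;

Once a query like this feels routine, adding a dozen more columns for a modelling file is just more of the same.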

Mind you, it could take considerable time — months, maybe even years — to get really comfortable with the basics, especially if data mining is a sideline to your “real” job. But success and agility do depend on being able to work independently, being able to snag data on a whim, being able to understand a bit of what is going on in your software, having the freedom to play and explore, and losing notions about data that come from the business analysis and reporting side. In other words, the basics.

19 March 2012

Symposium on Data Analytics is a must-attend

If you’re interested in working with data for the benefit of a non-profit organization or for institutional advancement in education, then you must make room in your calendar for the APRA Symposium on Data Analytics.

Kate Chamberlin of Memorial Sloan-Kettering Cancer Center recently posted the listserv message below which I am quoting in its entirety, with her blessing. Kate is Chair of this year’s Symposium, being held this summer in Minneapolis. I’ve attended a few of these symposiums (and presented at one), and I can tell you that they’re great. This is a conference where you can really learn, and meet the people who are doing cool stuff with data for their institutions and organizations.

Of particular interest are the Case Study sessions, which are brief (20-minute) presentations of analytics projects that your colleagues at other institutions have carried out. If you’ve worked on such a project, consider sharing! Contact information is included below.

Here’s Kate’s message:

Hello everyone!

Many of you may have noticed the fifth annual APRA Symposium on Data Analytics is definitely happening again this summer in conjunction with APRA’s International Conference in Minneapolis!  The dates are Wednesday and Thursday, August 1st and 2nd — some additional information is available here: http://www.aprahome.org/p/cm/ld/fid=72.

We don’t have the full schedule yet, but hopefully will within a week or so.  In the meantime, let me give you some preliminary details:

Wednesday morning the conference will open with a keynote from Rob Scott at MIT, who was instrumental in founding the Symposium, and has a bird’s-eye view of the history of analytics in fundraising, from the perspective of research, IT, front-line fundraising, and fundraising management.  Thursday morning, we will have the opportunity to join the larger conference to hear Penelope Burke, President of Cygnus Applied Research Inc., on Donor-Centered Fundraising.  http://www.aprahome.org/p/cm/ld/&fid=73

The fundamental track is intended as a two day introduction to analytics in fundraising, with the goal of giving participants a solid road map to approach their first project.  Topics will include: Various Variables: Data Preparation and Management for Successful Analytics, Walkthrough: Understanding the Problem and the Resources, Key Questions in Project Management, and Implementation.  Presenters will include Chuck McClenon at the University of Texas, James Cheng at Dana Farber Cancer Institute, Audrey Geoffroy at the University of Florida, and myself.  In addition, six short case studies from a variety of nonprofits will be presented in the fundamental track.

In the intermediate/advanced track, we will continue the focus on case study with nine short project presentations.  We will also have a presentation from Jeff Shuck of Event 360, who applies predictive modeling and segmentation to fundraising events and peer-to-peer fundraising programs.  Marianne Pelletier of Cornell University and Josh Birkholz of Bentz Whaley Flessner will present on constituent engagement.  Chuck McClenon of the University of Texas will lead a panel of practitioners to discuss the intricacies of collaborating with development IT.

Finally, we will have our usual faculty/committee panel to close the Symposium.  We will be asking our faculty, committee members, and a few guests to tell us about the one best idea they’ve heard recently in the area of development analytics, and follow up with a free-wheeling conversation including these ideas and any and all questions from the floor.

Last year we experimented with a case study format that gave us the opportunity to hear many of our colleagues present on projects they are working on at their institutions.  As you see above, with a few tweaks, we are continuing to set aside some time for case study this year.  If you’re planning to attend, I’m hoping some of you might have a project you’d be interested in presenting?  You will have 20 minutes to present a project of your choice and take a few questions.  Emma Hinke at Johns Hopkins has kindly agreed to handle the logistics of case studies for me, so if you have an idea, or would just like more information on the case studies, please be in touch with Emma at ehinke2@jhu.edu.  If we have a great flood of ideas, we may not be able to pack them all in, but wouldn’t that be a great problem to have?  Please send us your thoughts, and if we can’t manage them all this year, we’ll start a list for next year.

I do hope you will consider joining us — it’s the variety of attendees that makes the Symposium great.  I’ll let you know when we have the full schedule up on the Symposium web site.

Many thanks,

Kate Chamberlin
Chair, APRA Symposium on Data Analytics
Campaign Strategic Research Director, Memorial Sloan-Kettering Cancer Center

