CoolData blog

23 February 2014

Really swell SQL: Why you must know PIVOT and WITH

Filed under: Data, SQL — kevinmacdonell @ 2:37 pm

Some data manipulation needs are so common, and their solutions so elusive, that when I find an answer, I just have to share. Today I will show you two problems and their solutions, wrapped up into one.

The first problem: Given a database query pulling any kind of transactional data (involving dollars, say), how do you go about aggregating the data in more than one way, within the confines of a single query, so that the various aggregations appear on one row of data?

The second problem: Given a database query that returns rows that are duplicates (by constituent ID, for example) due to some category that appears multiple times per ID (category code, gift year or whatever), how do you get rid of those duplicate rows without losing any of the data? In other words, how do you pivot the data so that the categories are added as columns rather than rows, while preserving the structure of all the other data that you don’t want to pivot?

That’s as clear as I can describe it … believe me, if you work with data, you encounter these situations all the time. An illustration might be helpful. Here is some donor gift data, limited to just Donor ID, Fiscal Year Code, and Giving Total, which is the sum of Gift Amount by ID and fiscal year. (For the sake of simplicity, I am using a code for fiscal year instead of extracting the fiscal year from actual gift dates, and I am limiting the data to three fiscal years of donation history.)

[fig1: sample gift data with Donor ID, Fiscal Year Code, and Giving Total; some IDs appear on multiple rows]

This is fine, but notice that some IDs appear on multiple rows. What if we want only one row of data per donor? And what if we want an overall giving total for the donor in one column, and then yearly totals for the three fiscal years in our data — one column and total per year? Like this:

[fig2: the desired result: one row per donor, with an overall giving total plus one column per fiscal year]

Today I’ll show you a couple of techniques to combine different types of aggregations into a single, short SQL statement that isn’t difficult to read. (All of this works in Oracle 11g — I can’t speak to other systems’ implementations of SQL.) My primary need for writing such a query is for embedding custom SQL in a Tableau data source for the purpose of reporting. I don’t really require fancy SQL to render this precise example in Tableau. This is just a very simple example which you can adapt to more complex situations.

The SQL statement that produced our first set of results looks like this. Again for simplicity, let’s assume we require only one table, called GIFTS, so there are no joins to consider:

SELECT GIFTS.ID,
 SUM ( GIFTS.GIFT_AMT ) AS GIVING_TOTAL,
 GIFTS.FISC_YR_CODE

 FROM
 GIFTS

 WHERE 
 GIFTS.FISC_YR_CODE IN ('FY11', 'FY12', 'FY13')

 GROUP BY
 GIFTS.ID,
 GIFTS.FISC_YR_CODE

We end up with one type of aggregation: The SUM of all gifts, grouped by ID and Fiscal Year. But there is no way to include a SUM by ID only, because if we ask for FISC_YR_CODE in the SELECT, we have to include it in the GROUP BY as well. We will deal with that issue a little later on.

First let’s deal with the duplicate IDs. Each donor ID appears once for every fiscal year out of the three that the donor made a gift. We want to pivot the data, so that the rows become columns instead. Introducing … the PIVOT operator! Have a look at the following SQL:

SELECT *

 FROM (

  SELECT GIFTS.ID,
  GIFTS.GIFT_AMT,
  GIFTS.FISC_YR_CODE

  FROM
  GIFTS

  WHERE 
  GIFTS.FISC_YR_CODE IN ('FY11', 'FY12', 'FY13')

 )

PIVOT (
  SUM ( GIFT_AMT ) AS year FOR ( FISC_YR_CODE )
  IN ( 'FY11' AS FY11, 'FY12' AS FY12, 'FY13' AS FY13 )
  )

The inner SELECT statement gets all the data and hands it to the PIVOT operator, and the outer SELECT asks for all the resulting columns from that pivot. The three categories we want to appear as columns (that is, each fiscal year) are specified with IN, and the word “year” is appended to each column name. (This text can be anything you want.) I have added aliases (using AS) in order to prevent the single quote marks, which are required, from appearing in the results.

The content of the “cells” will be the SUM of gift amounts for each ID and fiscal year. The result will look like this:

[fig3: the pivoted result: one row per ID, with a summed giving column for each of the three fiscal years]

Useful, eh? PIVOT does have some limitations, which I will discuss later. Let’s press on. We’re definitely getting close to realizing our goal. Now we just need to have one other aggregation (total giving by donor) appear on the same line. The problem, noted earlier, is that to SUM on giving by ID only, we are forced to leave out all other columns, in this case the fiscal year code. The SQL for giving by donor looks like this:

SELECT GIFTS.ID,
 SUM ( GIFTS.GIFT_AMT ) AS GIVING_TOTAL

 FROM
 GIFTS

 WHERE 
 GIFTS.FISC_YR_CODE IN ('FY11', 'FY12', 'FY13')

 GROUP BY
 GIFTS.ID

Clearly we need not one but two queries, the first to get total giving by donor, and the second to get giving by ID and year (pivoted), and then join the two result sets as if they were tables. Normally this would call for creating temporary tables in the database to store the results for further querying, but data analysts are not DBAs: We may not have sufficient permissions to perform this operation on the database. (As far as I am aware, by default Tableau disallows table creation, perhaps to reassure database admins?)

Fortunately there is a solution. It’s not well-known — my fat Oracle 11g SQL reference book doesn’t even mention it — so you’ll need to do some online searches to find out more. Introducing … the WITH clause!

It works like this. Each WITH clause contains a SELECT statement that returns a result set that behaves like a temporary table, and which can be referenced in other SELECT statements. This example uses two clauses, aliased as ‘total_giving’ and ‘yearly_giving’. A final SELECT statement joins the two as if they were tables.

WITH
 total_giving AS
 (

SELECT GIFTS.ID,
 SUM ( GIFTS.GIFT_AMT ) AS GIVING_TOTAL

 FROM
 GIFTS

 WHERE 
 GIFTS.FISC_YR_CODE IN ('FY11', 'FY12', 'FY13')

 GROUP BY
 GIFTS.ID

 ),

 yearly_giving AS
 (

SELECT *

 FROM (

  SELECT GIFTS.ID,
  GIFTS.GIFT_AMT,
  GIFTS.FISC_YR_CODE

  FROM
  GIFTS

  WHERE 
  GIFTS.FISC_YR_CODE IN ('FY11', 'FY12', 'FY13')

 )

PIVOT (
  SUM ( GIFT_AMT ) AS year FOR (FISC_YR_CODE)
  IN ('FY11' AS FY11, 'FY12' AS FY12, 'FY13' AS FY13)
  )
 )

SELECT
 total_giving.ID,
 total_giving.GIVING_TOTAL,
 yearly_giving.FY11_YEAR,
 yearly_giving.FY12_YEAR,
 yearly_giving.FY13_YEAR

 FROM
 total_giving,
 yearly_giving

 WHERE
 total_giving.ID = yearly_giving.ID

You can accomplish what I’ve done in this example without using the WITH clause, but as an SQL statement gets more complex, it gets harder to read. As far as I know, you can use as many WITH clauses as you like, allowing you to build complex queries while preserving the neat organization and therefore readability of the SQL statement. Readability is important when you must modify or debug the statement weeks later. Oracle error messages are maddeningly uninformative!

I encourage you to research WITH and PIVOT on your own, but here are a few pointers:

  • Type ‘WITH’ only once. Multiple WITH clauses are separated by a comma. (No comma following the last clause.)
  • A WITH clause can reference any previously-defined WITH clause as if it were an already-existing table. Very useful. (See the short sketch after this list.)
  • When using PIVOT, notice that we must specify all the categories in the pivot (fiscal year code, in this case). Unfortunately, PIVOT is not capable of dynamically pivoting whatever data it happens to find. This is just the way SQL works. If you want to pivot data without knowing in advance what the categories are, you may have to use a true programming language. I use Python, but that is not always an option.
  • You can use any aggregation you like in the PIVOT — COUNT, SUM, MAX, etc.
  • Don’t include the schema and table names in the identifiers named in PIVOT, or you will get an error.
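
To illustrate that second pointer, here is a minimal sketch, separate from the giving example above, in which the second WITH clause queries the first as if it were an already-existing table:

WITH
 fy13_gifts AS
 (
 SELECT GIFTS.ID,
  GIFTS.GIFT_AMT

  FROM
  GIFTS

  WHERE
  GIFTS.FISC_YR_CODE = 'FY13'
 ),

 -- This clause reads from fy13_gifts just as it would from a real table
 fy13_totals AS
 (
 SELECT ID,
  SUM ( GIFT_AMT ) AS FY13_TOTAL

  FROM
  fy13_gifts

  GROUP BY
  ID
 )

SELECT * FROM fy13_totals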

I hope this has been of some help. The nuts and bolts of pulling data for analysis may not be the sexiest part of a data analyst’s job, but brushing up on these skills (whatever your database system is) will pay off in the long run.

30 July 2013

Getting bitten by Python

When I was first learning to build predictive models, preparing the data was part of the adventure. In time, though, many operations on the data became standard instead of exploratory. Eventually they became simply repetitive and tedious. When any task becomes repetitive, I think of ways to automate it. Given that data prep makes up 80 percent of the work of building a model (according to some authors), the benefits of automation are obvious.

I can think of only two ways to replicate the manual operations you need to perform on a large data set to make it ready for modelling: Use software specially designed for the task, or code your own data-handling scripts. I am lazy and drawn to software solutions that make hard things easy, and I’m not a programmer. Yet I have veered away from a ready-made software solution to pursue an interest in the scripting language called Python, and in particular the Python code library called pandas, written specifically for working with data.

Maybe it’s because Python is open-source and free, or because it is powerful, or because it is flexible and widely adaptable to multiple uses on the job. I don’t know. But for the past few months I’ve been obsessed with learning to use it, and that’s what I’d like to talk about today.

I’m guessing very few CoolData readers have experience writing scripts for handling data. I know some people who do most of their stats work in the R language or manipulate data in Excel using VBA. But the majority of readers probably consider themselves severely allergic to coding of any kind. I concede that it isn’t for everyone, but look: Just as we don’t need to be professional statisticians to use statistical tools to create value for the business, we don’t need to be computer scientists to write useful scripts that can free up large chunks of time we now spend on routine tasks that bore us.

(If you work with someone in IT or Advancement Services who pulls and reshapes your data for you, they might be especially interested in the idea of learning how to automate your requests. They might also be familiar with Python already.)

I should say here that my aim is not to automate predictive modelling itself. There are Python modules for modelling, too, from the venerable classics such as regression to the latest advanced techniques. But I’m not so much interested in them, not yet at least. Building predictive models is best done hands-on, guided by a human modeler’s expertise and domain knowledge. My main interest is in eliminating a big chunk of the standard rote work so that I can apply the freshest version of myself to the more interesting and creative elements of data exploration and model creation.

So what is Python (and more specifically, pandas) good for?

  • A script or program can execute a series of database queries and join the results in exactly the way you want, allowing you to build very complex structures and incorporate custom aggregations that might be harder to do using your existing querying/reporting tools. For example, let’s say you want to build a file of donors and include columns for date of first and last gift, amount of highest gift, total cash gifts for the past five fiscal years, and percentage of total giving devoted to student financial assistance. Unless IT has built some advanced views for you from the base tables in your database, many of these variables will require applying some calculations to the raw transactional data. I could certainly build a query to get the results for this modest example, but it would involve a few sub-queries and calculated fields. Multiply that by a hundred and you’ve got an idea of how complex a query you’d have to build to deliver a modelling-ready data set. In fact it may be technically impossible, or at least difficult, to build such a single massive query. In Python, however, you can build your data file up in an orderly series of steps. Adding, removing or editing those steps is not a big deal.
  • Python also makes it simple to read data from .csv and Excel files, and merge it painlessly with the data you’ve extracted from your database. This is important to me because not all of my modelling data comes from our database. I’ve got eight years of call centre results by alumni ID, wealth-related census data by Canadian postal code, capacity data by American ZIP code, and other standalone data sets. Adding these variables to my file used to be a tedious, manual process. In Python, left-joining 20 columns of census data to a file of 100,000 alumni records using Postal Code as the join key takes a single line of code and executes faster than a knight can say “Ni!” (Inside Python joke.) A short sketch of this join, along with a couple of the operations in the next two bullets, follows this list.
  • Many other common operations also take only one or two lines of code, including conversion of categorical variables to 0/1 dummy variables, performing transformations and mathematical operations on variables, filling in or imputing missing data with constants or calculated values, pivoting data, and creating new variables from existing ones via concatenation (for strings) or math (for numbers).
  • With a script, you can also iterate over the rows of a data file and perform different operations based on conditional statements.
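
To make a few of these concrete, here is a minimal sketch in pandas. The file names and column labels (Postal_Code, Sex, Age, Lifetime_Giving) are hypothetical stand-ins rather than my actual data, but the operations are the ones described above: a left-join on postal code, dummy variables, imputation, and a conditionally-derived variable.

import pandas as pd
import numpy as np

# Hypothetical file and column names, for illustration only.
alumni = pd.read_csv('alumni_extract.csv')           # one row per alumni ID
census = pd.read_csv('census_by_postal_code.csv')    # ~20 census variables per postal code

# Left-join the census columns onto the alumni file, using Postal_Code as the key.
alumni = pd.merge(alumni, census, how='left', on='Postal_Code')

# Convert a categorical variable to 0/1 dummy columns.
alumni = pd.concat([alumni, pd.get_dummies(alumni['Sex'], prefix='Sex')], axis=1)

# Impute missing ages with the median, and derive a new 0/1 variable conditionally.
alumni['Age'] = alumni['Age'].fillna(alumni['Age'].median())
alumni['Is_Donor'] = np.where(alumni['Lifetime_Giving'] > 0, 1, 0)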

I’m not going to provide a Python tutorial today (although I’m tempted to do so in the future), but here is a sample line of code from a script, with a description of what it does. This doesn’t give you enough information to do anything useful, but you’ll at least see how compact and powerful the language is.

Skipping some necessary preliminaries, let’s say you’ve just used Python to query your Oracle database to read into memory a data set containing the variables ID, Constituent Category, Sex, and Age for all living constituent persons. (An operation that itself takes little more than two or three lines of code.) Obviously it depends on your database and code structure, but let’s say “Constituent Category” includes codes for such categories as Alumnus/na (ALUM), Non-degreed alumni (ALND), Parent (PRNT), Friend (FRND), Faculty (FCTY), Staff (STAF), and so on. And let’s further assume that a constituent can belong to multiple categories. Most people will have only one code, but it’s possible that a person can simultaneously be an alum, a parent, and a faculty member.
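
As an aside, here is a minimal sketch of what that read step might look like. It assumes the cx_Oracle driver, and the connection details, table and column names are hypothetical placeholders:

import cx_Oracle
import pandas as pd

# Hypothetical connection string and query, for illustration only.
conn = cx_Oracle.connect('username', 'password', 'dbhost:1521/advancement')

query = """
    SELECT ID, CON_CODE, SEX, AGE
    FROM CONSTITUENTS
    WHERE LIVING_IND = 'Y'
    """

# Read the result set straight into a pandas DataFrame.
df = pd.read_sql(query, conn)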

In our script, the data is read into a structure called a DataFrame (a tool provided by the pandas code library). This should sound familiar to users of R in particular. For the rest of us, a DataFrame is very much like a database table, with named columns and numbered (“indexed”) rows. Had we pasted the data into Excel instead, it might look like this:

[pivot1: the data as read, with columns for ID, Con_Code, Sex and Age; IDs with more than one constituent code appear on multiple rows]

Right away we see that William and Janet are represented by multiple rows because they have multiple constituent codes. This won’t work for predictive modelling, which requires that we have just one row per individual – otherwise certain individuals would carry more weight in the model than they should. You could say that multiple records for Janet mean that 60-year-old females are over-represented in the data. We could delete the extra rows, but we don’t want to do that because we’d be throwing away important information that is almost certainly informative of our modelling target, e.g. likelihood to make a donation.

In order to keep this information while avoiding duplicate IDs, we need to pivot the data so that each category of Constituent Code (ALUM, PRNT, etc.) becomes its own column. The result we want would look like this in Excel:

[pivot2: the pivoted data: one row per ID, with a 0/1 column for each constituent code]

The Con_Code column is gone, and replaced with a series of columns, each a former category of Con_Code. In each column is either a 0 or 1, a “dummy variable” indicating whether an individual belongs to that constituency or not.

Getting the data from the first state to the final state requires just three lines of code in Python/pandas:

df = pd.merge(df, pd.crosstab(df.ID, df.Con_Code), how='left', left_on='ID', right_index=True)

df = df.drop(['Con_Code'], axis=1)

df = df.drop_duplicates()

This snippet of code may look invitingly simple or simply terrifying – it depends on your background. Whatever – it doesn’t matter, because my point is only that these three lines are very short, requiring very little typing, yet they elegantly handle a common data prep task that I have spent many hours performing manually.

Here’s a brief summary of what each line does:

Line 1: There’s a lot going on here … First, “df” is just the name of the DataFrame object. I could have called it anything. On the right-hand side, you see “pd” (which is shorthand for pandas, the module of code that is doing the work), then “crosstab” (a function that performs the actual pivot). In the parentheses after pd.crosstab, we have specified the two columns to use in the pivot: df.ID is the data we want for the rows, and df.Con_Code is the column of categories that we want to expand into as many columns as there are categories. You don’t have to know in advance how many categories exist in your data, or what they are – Python just does it.

The pd.crosstab call creates a new table containing only ID and all the new columns. That entity (or “object”) is just sitting out there, invisible, in your computer’s memory. We need to join it back to our original data set so that it is reunited with Age, Sex and whatever other stuff you’ve got. That’s what “pd.merge” does. Again, “pd” is just referencing the pandas module that is providing the “merge” function. The operation is called “merge,” but it’s much the same thing as an SQL-type join, familiar to anyone who queries a database. The merge takes two inputs, our original DataFrame (“df”), and the result from the crosstab operation that I described above. The argument called “how” specifies that we want to perform the equivalent of a left-join. A couple of other optional arguments explicitly tell Python which column to use as a join key (‘ID’).

The crosstab operation is enclosed within the merge operation. I could have separated these into multiple lines, which would have been less confusing, but my point is not to teach Python but to demonstrate how much you can accomplish with a trivial amount of typing. (Or copying-and-pasting, which works too!)
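
If the nesting makes Line 1 hard to read, it can be unpacked into two steps. Here is a sketch; the behaviour is identical to the one-liner above.

import pandas as pd  # 'df' is assumed to already hold the data read from the database

# Step 1: pivot Con_Code into one column per category, one row per ID.
# Each cell holds a count, which in this data works out to 0 or 1.
code_columns = pd.crosstab(df.ID, df.Con_Code)

# Step 2: left-join those new columns back onto the original DataFrame,
# matching the ID column on the left to the crosstab's row index on the right.
df = pd.merge(df, code_columns, how='left', left_on='ID', right_index=True)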

We’re not quite done, though. Our merged data is still full of duplicate IDs, because the Con_Code column is still present in our original data.

Line 2 deletes (“drops”) the entire column named Con_Code, and reassigns the altered DataFrame to “df” – essentially, replacing the original df with the new df created by the drop operation.

Now that Con_Code is gone, the “extra” rows are not just duplicates by ID, they are perfect duplicates across the entire row – there is nothing left to make two rows with the same ID unique. We are ready for the final step …

Line 3 deletes (or “drops”) every row that is a duplicate of a previous row.

Having accomplished this, another couple of lines near the end of the script (not shown) will write the data row by row into a new .csv file, which you can then import into your stats package of choice. If you had two dozen different constituent codes in your data, your new file will be wider by two dozen columns … all in the blink of an eye, without any need for Excel or any manual manipulation of the data.
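
For the curious, that final write step is usually a single call. A sketch, with a hypothetical file name:

# Continuing from the steps above: write the finished DataFrame to a new .csv file.
# 'modelling_data.csv' is a placeholder name; index=False leaves out the row index.
df.to_csv('modelling_data.csv', index=False)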

Excel is perfectly capable of pivoting data like we see in the example, but for working with very large data sets and seamlessly merging the pivoted data back into the larger data file, I can’t think of a better tool than Python/pandas. The bigger the data set gets, the more reason there is to stop working with it in tools that go to the extra work of DISPLAYING it. I suppose one of the beauties of Excel is that you can see the data as you are working on it. In fact, as I slowly built up my script, I repeatedly opened the .csv file in Excel just to have that visual inspection of the data to see that I was doing the right thing. But I inevitably reached the point at which the file was just too large for Excel to function smoothly. At 120,000 rows and 185 columns in a 90MB file, it was hardly Big Data – Excel could open the file no problem – but it was large enough that I wouldn’t want to do much filtering or messing with formulas.

On a quick first read, the code in the example above may seem impenetrable to a non-programmer (like me), but you don’t need to memorize a lot of functions and methods to write scripts in Python. Combing the Web for examples of what you want to do, using a lot of cut-and-paste, perhaps referring to a good book now and again – that’s all it takes, really.

That said, it does require time and patience. It took me many hours to cobble together my first script. I re-ran it a hundred times before I tracked down all the errors I made. I think it was worth it, though – every working piece of code is a step in the direction of saving untold hours. A script that works for one task often does not require much modification to work for another. (This cartoon says it all: Geeks and repetitive tasks.)

Beyond data preparation for predictive modelling, there are a number of directions I would like to go with Python, some of which I’ve made progress on already:

  • Merging data from multiple sources into data extract files for use in Tableau … With version 8.0 of the software comes the new Tableau API for building .tde files in Python. This was actually my first experiment with Python scripting. Using the TDE module and a combination of database queries and pandas DataFrames, you can achieve a high degree of automation for refreshing the most complex data sets behind your views and dashboards.
  • Exploring other modelling techniques besides my regular mainstay (regression) … I’ve long been intrigued by stuff such as neural networks, Random Forest, and so on, but I’ve been held back by a lack of time as well as some doubt that these techniques offer a significant improvement over what I’m doing now. Python gives ready access to many of these methods, allowing me to indulge my casual interest without investing a great deal of time. I am not a fan of the idea of automated modelling – the analyst should grasp what is going on in that black box. But I don’t see any harm in some quick-and-dirty experimentation, which could lead to solutions for problems I’m not even thinking of yet.
  • Taking advantage of APIs …. I’d like to try tapping into whatever social networking sites offer in the way of interfaces, and also programmatically access web services such as geocoding via Google.
  • Working with data sets that are too large for high-level applications such as Excel … I recently tried playing with two days’ worth of downloaded geocoded Twitter data. That’s MILLIONS of rows. You aren’t going to be using Excel for that.

I hope I’ve been able to transfer to you some of my enthusiasm for the power and potential of Python. I guess now you’ll be wondering how to get started. That’s not an easy question to answer. I could tell you how to download and install Python and an IDE (an integrated development environment, a user interface in which you write, run, and debug your scripts), but beyond that, so much depends on what you want to do. Python has been extended in a great many directions – pandas for data analysis being just one of them.

However, it wouldn’t hurt to get a feel for how “core Python” works – that is, the central code base of the language along with its data types, object types, and basic operations such as “for” loops. Even before you bother installing anything, go to Codecademy.com and try a couple of the simple tutorials there.
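
For a sense of what that looks like, here is the kind of three-minute example such a tutorial starts with: a list, a “for” loop and an “if” test. (The numbers are made up.)

# Sum only the positive amounts in a list.
gift_amounts = [100, 25, 0, 50]
total = 0
for amount in gift_amounts:
    if amount > 0:
        total = total + amount
print(total)  # prints 175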

For specific questions Google is your friend, but if you want a reference that covers all the basics in more or less plain English, I like “Learning Python” (4th Edition, but I see there’s a 5th Edition now) by Mark Lutz, published by O’Reilly. Another O’Reilly book, “Python for Data Analysis,” by Wes McKinney, describes how to crunch data with pandas and other related code libraries. (McKinney is the main author of the pandas library.)

I think readers new to programming (like me) will feel some frustration while learning to write their first scripts using any one book or resource. The Lutz book might seem too fine-grained in its survey of the basics for some readers, and McKinney is somewhat terse when offering examples of how various methods work. The problem is not with the books themselves – they’re wonderful. Python is used in web interfaces, robotics, database programming, gaming, financial markets, GIS, scientific programming, and probably every academic discipline that uses data, so core texts are perforce very general and abstract. (Think of grammar books for spoken languages.) It’s up to coders themselves to combine the basic building blocks in creative and powerful ways.

That said, after many, many hours spent hopping back and forth between these books, plus online tutorials and Python discussion forums – and just messing around on my own – I have figured out a few useful ways to accomplish some of the more common data preparation tasks that are specific to predictive modelling. Someday I would be happy to share – and, as always, to learn from the experience of others.

20 February 2013

The ‘analytic’ investment

Filed under: Analytics, Data — kevinmacdonell @ 10:49 am

Everyone’s talking about predictive analytics, Big Data, yadda yadda. The good news is, many institutions and organizations in our sector are indeed making investments in analytics and inching towards becoming data-driven. I have to wonder, though, how much of current investment is based on hype, and how much is going to fall away when data is no longer a hot thing.

Becoming a data-driven organization is a journey, not a destination. Forward progress is not inevitable, and it is possible for an office, a department or an institution to slip backward on the path, even when it seems they’ve “arrived”. In order for analytics to mature from a cutting-edge “nice-to-have” into a regular part of operations, the enterprise needs to be aware of its returns to the bottom line.

In my view, current investments in analytics are often done for reasons that are well-intentioned but vague: It seems to be the right thing to do these days … we see others doing it, so we feel we need to as well … we have an agenda for innovation and this fits the bill … and so on. I’m glad to see the investment, but not every promising innovation gets to stick around. Demonstrating ability to generate revenue — either through savings or through identifying new sources of revenue — will carry the day in the long run.

As I write this, I hear the jangle of railway bells at the level crossing in the early-morning dark outside my hotel room on the city’s downtown waterfront. I’m in Seattle today to attend the DRIVE 2013 conference, hosted by the University of Washington. I’ll be speaking on this topic — the “analytic” investment — later today. I have to admit to having struggled with making the session relevant for this group. For one, they don’t need convincing that making the investment is worth it. And second, if they think that I and my employer have figured out how to calculate the return on investment for analytics programs, they may be in for a disappointment. We have not.

In fact, when it comes right down to it, I like to spend my day working on cool things, interesting problems that face our department, and not so much on stuff that sounds like accounting (“ROI”). I’m betting many of the attendees of my session feel the same way. So I’ll be asking them to stop thinking about how they can get their managers, directors and vice presidents to understand the language of data and analytics. They’ll be far more successful if they try to speak the language their bosses respond to: Return on investment.

I may be a little short on answers for you, but I do have some pretty good questions.

18 October 2012

It’s your turn to DRIVE!

It’s been a full year since I attended the first DRIVE Conference in Seattle, and I’m pleased to let you know (if you don’t already) that a second one is on the way. DRIVE 2013 takes place February 20-21 at the Bell Harbor International Conference Center in Seattle, Washington, and is hosted by the University of Washington. Registration is now open!

I’ll be making the trip to DRIVE 2013, and I think you should, too. I’m there to speak, but I expect to get a whole lot more out of it than I give.

DRIVE stands for those most awesome and beautiful words “Data, Reporting, Information and Visualization Exchange.” It’s a gathering-place for the growing community of non-profit IT/data people seeking to bring new ideas and efficient processes and systems to their organizations. Whether you’re just joining the non-profit ranks or you’ve been in the sector a while, this is the place to explore the latest ideas in analytics, modeling, data, reporting, information and visualization with people who are of like mind but come from all sorts of different backgrounds.

It’s this diversity that really injects value into the “exchange” part of DRIVE: You’ll meet some fascinating people who will help you see data-driven performance through a whole new lens.

Especially this year … wow. There are fundraisers and report-writers and data miners – all great. But a developmental molecular biologist? And a major-league baseball scout? Yes!

On top of that, there’s an opportunity to sign up for some on-the-spot mentoring (either as a mentor or mentee) which will allow you to have a focused conversation on a topic of interest that goes beyond the merely social aspect of a conference. Check that out on the conference website.

A few speaker highlights:

DR. JOHN J. MEDINA, a developmental molecular biologist, has a lifelong fascination with how the mind reacts to and organizes information. He is the author of the New York Times bestseller “Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School” — a provocative book that takes on the way our schools and work environments are designed.

ASHUTOSH NANDESHWAR, Associate Director of Analytics at the University of Michigan, will talk about how we can tackle the three biggest problems in fundraising using data science.

KARL R. HABERL, of Principal BI, will be presenting on the merits of powerful visualization. His presentation will introduce you to three innovative ‘compound charting techniques’ that provide new levels of insight to analysts and their audiences.

ANDREW PERCIVAL, an advanced scout with Major League Baseball’s Seattle Mariners, will be speaking about the use of data in the game of baseball. Come hear how an MLB scout turns massive data sets into information used by coaches and front-office personnel.

Oh yeah – and me, and a whole lot more. For more information on the other speakers and topics lined up so far, visit the DRIVE 2013 website.

24 April 2012

Data I want to play with

Filed under: Data, Fun — kevinmacdonell @ 5:23 am

Guest post by Marianne M. Pelletier, Director of Advancement Research and Data Support, Cornell University

In my present job, I deal with a whole lot of data – over 2,000 fields of data on gifts, names, addresses, relationships, segmenting codes, dates, attributes, interests, contacts, you name it. Yet getting to play in this playground as a donor modeler only leaves me lusting for other kinds of data to play with, so much that my hobbies often lead me to places where data lives so I can fool with it. This short article is my wish list, whether or not I’ll ever get to mine any of it.

Horse Races are tracked to the umpteenth degree by handicappers. Buy a copy of the Daily Racing Form and you’ll see more statistics presented than you can read in a week. DRF also has a web page where you can download even more statistics – tracking the horses’ pedigree generations back in time and the jockey’s entire career, ride by ride. So what do I do? I spend some Sundays diligently typing key statistics into a homemade database, along with the race results, to see if I can find the regression formula that would make me more money than just following the program picks. The answer? So far, on maiden sprints on dirt, the horse that had the fastest workout is most likely to win. For every other kind of race, I’m still wishing to buy the data in a format I can manipulate instead of having to type it.

Speaking of gambling, I’d give my remaining eye tooth to play in Harrah’s data. Harrah’s is an incredibly good marketing firm, from offering me a free weekend at their new casino in some remote place to being the only game in town that offers $10 craps all weekend long. Imagine if you will getting to download affinity player card data and tracking where a person wanders in the casino – how many mix slots with table play? How many are single game players? What if the casino moved the buffet closer to Keno? What’s the best game to put right inside the valet parking entrance? Do the longer, red craps tables make one bet more or lose more? Or play longer? What is the average time for a player at a blackjack table? What if she’s drinking alcohol? What if she’s an awards card member? What if the player is male? What if the dealer is the same gender as the player? I’d be a kid in a candy store to get a contract to work data like that.

On the other side of the coin, what is the effect of parking availability on local business? Wouldn’t it be fun to figure out the dependent variable on that? Ithaca recently changed its parking rates from the first hour free to charging for every hour. Was it that or the longstanding recession that caused local businesses to disappear? Or is the turnover normal? Would I have to study when the students are in town vs. when they are gone? Would local businesses share their profit numbers with me?

And then there’s the whole thing about the best time of year to go to Disney World. I’d want to offer Disney a study of some kind (like, which ride should go next to the Small World ride?) in order to get data on when I’m most likely to enjoy good weather, a maximum number of rides open, and the fewest number of screaming children and strollers under my feet.

And speaking of flying somewhere, I’d love for Delta to hire me to study when people want to fly somewhere. All that Expedia/Travelocity search data – does anyone use it? After all, what if airlines could arrange that people in Boston can fly midmorning but people in New York can fly at night? What if there were one extra flight at 11:00 am from somewhere that would double an airline’s traffic because of the ripple effect? I’d love to be the one who discovers that.

Lastly, who can resist wishing to forecast forex? The currency exchange market is very likely very well tested by experts, but not by me. What if I could predict the day of week and time of day that the Euro drifts off against the dollar? I’d place my bet once a week and then go off to the casino. Or Disney. Or shopping. Oh, bother! It all looks like there’s data teeming everywhere, everywhere, and I’m only going to live so long.

13 December 2011

Finding connections to your major gift prospects in your data

Guest post by Erich Preisendorfer, Associate Director, Business Intelligence, Advancement Services, University of New Hampshire

(Thanks to Erich for this guest post, which touches on something a lot of prospect researchers are interested in: mapping relationships to prospects in their database. Actually, this work is more exciting than that, because it actually helps people find connections they may not have known about, via database queries and a simple scoring system. Is your Advancement Services department working on something like this? Why not ask them? — Kevin.)

Data miners often have an objective of exploring sets of data to find meaningful patterns that can then be modeled for prediction, hopefully helping to meet their organization’s end goals. However, there may be a time when the end behavior is not inherent in your database. Such a situation recently came up for my Advancement organization.

Our prospecting team recently started a program wrapped around peer recommendations: A prospect recommends new suspects to us based on the prospect’s interactions with the suspects. The question then became, what can we provide to the prospect to help get them thinking about potential suspects?

We currently do not have any type of data which would allow us to say, “Yes, this is what a relationship looks like,” outside of family relationships. We had to find a different way to identify potential acquaintances. I looked back at my own relationships to determine how I know the people I know. My friends and acquaintances largely come from some basic areas: school, work, places I’ve gone, etc.

Transforming my experience with relationships into what we have for useable data, I saw three key areas where relationships may exist: work history, education history, and extracurricular activities including one-time events. Fortunately, I was able to pinpoint our constituents’ time in each of the above areas to help isolate meaningful, shared experiences amongst constituents. Our work and extracurricular history includes to/from dates, and we have loads of educational history data that includes specific dates. Using this data, I am able to come up with potential relationships from a single prospect.

Prospect Profile (generated by entering a single prospect’s ID):

  • John Adams
  • Widget Factory, Employee 01/05/1971 – 06/16/1996
  • Student Activities: Football, Student Senate 09/1965-05/1966
  • Bachelor of Arts, Botany 1966

Potential Relationships (each item below is a separate query, using the Prospect Profile results):

  • Those employed by the Widget Factory who started before John ended, and ended after John began.
  • Those students who participated in Football and had a class year within +/-3 years of John.
  • Those students in Student Senate at the same time as John, similar to the Widget Factory example.
  • Those students who were in the same class year as John.
  • Those students who share John’s major.

Currently, since I have no way of proving the value of one point of contact over another, each row returned in the potential relationships earns the constituent one point. Since my database stores historical records, I may get more than one row per constituent in any one category if they match more than one of John’s associated records – say they participated in Student Senate and played Football. This is great, because I want to give those particular constituents two points, since they have more than one touch point in common with John.

I end up with a ranked list of constituents who share potential relationship contacts with my main prospect. The relationship lists provide our prospect researchers a starting point in putting together a solid list of high capacity constituents a single person may have some sort of relationship with, thus a greater insight into potential giving.
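
To make the mechanics concrete, here is a minimal sketch of how two of those queries might be stacked and scored in SQL. The table and column names are hypothetical placeholders, not our actual schema; the point is the pattern: each sub-query contributes one row per shared touch point, and the outer query counts rows per constituent.

SELECT ID, COUNT(*) AS SCORE
 FROM (
  -- Overlapping employment at the Widget Factory
  SELECT E.CONSTITUENT_ID AS ID
   FROM EMPLOYMENT E
   WHERE E.EMPLOYER = 'Widget Factory'
   AND E.START_DATE < DATE '1996-06-16'
   AND E.END_DATE > DATE '1971-01-05'
  UNION ALL
  -- Football, with a class year within three years of John's (1966)
  SELECT A.CONSTITUENT_ID
   FROM ACTIVITIES A
   WHERE A.ACTIVITY = 'Football'
   AND A.CLASS_YEAR BETWEEN 1963 AND 1969
 )
 GROUP BY ID
 ORDER BY SCORE DESC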

As of now, the report is in its infancy but looks to have high potential. As we grow the concept, there are multiple data points where further exploration could result in a higher level of functioning. As prospects use the lists to identify people they know, we can then deconstruct those choices to determine what is more likely a relationship. Should shared employment be ranked higher than shared class year? Should Football rank higher than Student Senate? I would guess yes, but I currently do not have supporting data to make that decision.

Another interesting concept, raised at the recent DRIVE 2011 conference, would be: “How are these two prospects potentially related by a third constituent?”  The result could mean the difference between two separate, forced conversations and one single conversation with three prospects shared over nostalgic conversations, drinks and, hopefully, money in the door!

Erich Preisendorfer is Associate Director, Business Intelligence, working in Advancement Services at the University of New Hampshire.
