CoolData blog

22 April 2010

Introducing the nonprofit data collective

Filed under: Non-university settings — kevinmacdonell @ 8:20 am

Yesterday I gave a conference presentation to a group of fundraisers, all but one of whom work for non-university nonprofits. Many have databases that are small, do not capture the right kinds of information to develop a model, or are unfit in any number of ways. But this group seemed highly attentive to what I was talking about, understood the concepts, and a few were eager to improve the quality of their data – and from there get into data mining someday.

The questions were all spot-on. One person asked how many database records one needed as a minimum for predictive modeling. I don’t know if there’s a pat answer for that one, but in any case I think my answer was discouraging. If you’re below a certain size threshold, you may not have any need for modeling at all. But the fact is, if you want to model mass behaviour, you need a lot of data.

So here’s a thought. What if a bunch of small- to mid-sized charities were to somehow get together and agree to submit their data to a centralized database? Before you object, hear me out.

Each charity would fund part of the salary of a database administrator and a data-entry person, in proportion to its share of the donor base in the data pool. The first benefit is that data would be entered and stored according to strict quality-control guidelines. Any time a charity required an address file for a particular mailing, selected according to certain criteria, they’d just ask for it. The charity could focus on delivering its mission, not mucking around with a database it doesn’t fully know how to use.

The next benefit is scale. The records of donors to charities with related missions can be pooled for the purpose of building stronger predictive models than any one charity could hope to do on its own. Certain costs, such as list acquisition, could be shared and the benefits apportioned out. Some cross-promotion between causes could also occur, if charities found that to have a net benefit.
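To make the pooling idea concrete, here is a minimal sketch in plain Python (all field names and sample records are invented for illustration; a real pool would follow agreed data-entry standards). It shows how records from several charities could live in one table, tagged by source organization so each charity can still pull its own mailing segments, while the combined data supports statistics no single small charity has the volume to estimate well:

```python
# Hypothetical donor records from three small charities.
charity_a = [
    {"donor_id": 1, "gave_last_year": True,  "event_attendee": True},
    {"donor_id": 2, "gave_last_year": False, "event_attendee": False},
]
charity_b = [
    {"donor_id": 1, "gave_last_year": True,  "event_attendee": False},
    {"donor_id": 2, "gave_last_year": True,  "event_attendee": True},
]
charity_c = [
    {"donor_id": 1, "gave_last_year": False, "event_attendee": True},
]

# Pool the records, tagging each with its source organization so any
# one charity can still extract only its own donors.
pool = []
for org, records in [("A", charity_a), ("B", charity_b), ("C", charity_c)]:
    for rec in records:
        pool.append({**rec, "org": org})

# One charity's mailing file is just a filter on the pooled table.
mailing_b = [r for r in pool if r["org"] == "B"]

# A pooled statistic: giving rate among event attendees across all orgs.
attendees = [r for r in pool if r["event_attendee"]]
rate = sum(r["gave_last_year"] for r in attendees) / len(attendees)
print(f"{len(pool)} pooled records; attendee giving rate: {rate:.0%}")
```

A real shared model would of course need far more records and variables than this toy table, but the structure is the point: one standardized pool, one `org` tag, and each charity’s data remains separable on demand.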

Maybe charities would not choose to cede control of their data. Maybe there are donor privacy concerns that I’m overlooking. It’s just an idea. My knowledge of the nonprofit sector outside of universities is limited – does anyone know of an example of this idea in use today?

P.S. (18 Feb 2011): This post by Jim Berigan on the Step by Step Fundraising blog is a step in the right direction: 5 Reasons You Should Collaborate with Another Non-profit in 2011.

9 Comments »

  1. A couple of issues I’d face: finding budget for anything, but particularly for something as relatively foreign and unproven in the smaller non-profit world as this, though I’ve been advocating the concept and potential of data analysis, mining, and modeling.

    Also, I’d guess many non-profits have invested in donor database software or an online subscription, plus training, tech support, etc. Moving to a new home is a significant step for a small non-profit, though reducing staff time spent extracting lists and reports is enticing.

    And yes, there would be considerable questions about the security of donor records and use of their information.

    On the other hand, collecting anonymous data from many organizations for modeling purposes is very appealing, and might give us data-geek wannabes a productive outlet!

    Thanks for your work on this blog!

    Comment by Dwight Downs — 22 April 2010 @ 10:55 am

    • I know, there are hurdles, for sure. I’m also proposing that the information-sharing NOT be anonymous. Organizations would not have unfettered access to each other’s data, but the database administrator certainly would. It would be a hard sell. But as charities large and small start feeling the pressure to manage their data better, it might not be as hard a sell in a few years’ time.

      Comment by kevinmacdonell — 22 April 2010 @ 2:11 pm

      • I suspect you’re right. I, for one, will be keeping an eye on it as a future option!

        Comment by Dwight Downs — 23 April 2010 @ 10:50 am

  2. You would really have to think this through quite a bit, but I would not approach this as an independent project. I think you would need the backing of a group like the Center on Philanthropy, and perhaps try to fund it through a grant.

    I think one thing you may be overlooking is that the value of modeling for an institution is seeing how /your/ donors act. Each institution will have different variables, and the results will be unique to it. I would worry that the differing missions of the organizations would skew the overall results.

    Comment by Jason Boley — 23 April 2010 @ 8:09 pm

    • Hope I didn’t give the impression that this was a project I actually intend to undertake! This was just a trial-balloon idea, released to the atmosphere – either to be shot down as unworkable, or to land somewhere and prove useful. Yes, I agree, this would require support across a broad base.

      Your second point I also (mostly) agree with. By suggesting that organizations pool their data in order to mine it, I am contradicting myself – I’ve said over and over that each organization’s data set is unique. However, I wonder if some data mining capability (imperfect though it would be) is better than none at all (the situation for a nonprofit with an inadequate database). That said, the real advantage to pooling might not be data mining, but just having decent data support at reasonable cost.

      Comment by kevinmacdonell — 24 April 2010 @ 8:37 am

  3. I’ve often thought that something like this would be a boon to smaller non-profits. Basically, they’d just be outsourcing their data and reporting needs, just as they might outsource ticketing, or printing, or accounting, or mass e-mailing, or any of the myriad activities that an organization might not be big enough to justify dedicating an entire full-time, in-house department to. Let them focus on their core competencies and farm out the rest.

    The ability to quickly benchmark and to create large-scale models, etc., would just be an added bonus (and something that not every organization using the service would necessarily have to opt in to).

    – Jeff

    Comment by Jeff Jetton — 26 April 2010 @ 12:03 pm

  4. One problem I see would be guaranteeing the consistency of data maintained over an extended period of time. I have been the ED of exactly this type of organization. These organizations could certainly use this type of service, as well as other shared services, but they usually cannot make the leap because their own staff fluctuates too much to guarantee a commitment to any long-term project. And because they understand this phenomenon well, they do not trust that other organizations in the same boat will be able to keep up their end of the bargain. They know that this is exactly the type of thing that staff is likely to throw out the window if pressed for time, and that it will be the first thing the board looks at cutting during a deficit. It is one of those things that they all recognize they need and would benefit from, but they know they don’t already have it because the institution has already demonstrated that it isn’t willing to expend meager resources to get it.

    Comment by artem1s — 27 April 2010 @ 9:44 am

  5. While there are a number of problems – I prefer to think of them as challenges – it’s a good idea, provided it has the structure and quality control mentioned above.

    I put together a similar short piece a while back on this subject that you can have a look at here:

    http://www.supportingadvancement.com/reporting/data_sharing/data_sharing.htm

    Comment by Brian Dowling — 19 May 2010 @ 12:14 am

    • Wow, everything I thought about, you had already said, and in much more depth. Thanks for sharing. When people begin having the same idea independently, that can only mean one thing: The time has arrived.

      Comment by kevinmacdonell — 19 May 2010 @ 6:18 am

