

Evaluating the (nonprofit) Evaluators: Part II

The following post, written by Catalogue for Philanthropy President and Editor Barbara Harman, was published in the Huffington Post on Tuesday, February 11th. It is the second post in a three-part series on the “evaluation problem.” Part I of this series can be found here.

Part II: The Evaluators

There is a new kid on the block that makes it possible to check reviews of some 11,000 charities. Charity Checker (sponsored by the Tampa Bay Times and the Center for Investigative Reporting) is designed to streamline the process of evaluation by aggregating reviews from the top reviewing entities into one easy-to-use tool. But what exactly is being aggregated, and what are the reviewers reviewing?

Charity Navigator is perhaps the best known of Charity Checker’s sources, and it currently examines the finances of about 7,000 of the more than 1.5 million US charities. One of the early groups to focus attention on the ratio of administrative to program expenses, it last year signed, along with Guidestar and the BBB Wise Giving Alliance, a “pledge to end the overhead myth” — in other words, to stop treating overhead as the chief charity culprit. This amounted to an admission that a strict ratio of administrative to program expenses, so widely hailed as the sign of a charity’s cost-effectiveness, was not the holy grail, and that, indeed, nonprofits needed to invest in themselves if they were going to thrive. Charity Navigator (CN) continues to look at the ratio, but it also focuses more broadly on financial transparency as a key indicator and has begun to look at what it calls “results reporting” — how effective a charity is at reporting its outcomes and impact in an evidence-based manner. CN admits that too few organizations are in a position to measure impact this way and that the process is a “developmental” one, so “CN 3.0,” as it is called, is not yet here.

Guidestar, another group that powers Charity Checker, is not actually a watchdog site at all. It is, according to Guidestar itself, a “comprehensive” information source. It awards logos (bronze, silver, gold) to participants for completing their profiles on the Guidestar Exchange at different levels. To a certain extent one can see why a gold rating, for example, might mean something important: filling out the profile demands that an organization reflect on itself, gather information and write about itself, and commit significant time and energy to the process. In other words, full participation registers meaningful organizational capacity. The information, assuming it is accurately reported, can also give a serious investor a lot to consider. But it is important to remember that Guidestar isn’t actually rating the charities or their programs. It assigns its logos based on a charity’s level of participation and leaves the analysis to the reader.

Great Nonprofits, the third source of Charity Checker’s information, is the Yelp or Zagat of the charity world: it invites readers (and encourages nonprofits to invite supporters) to rate charities the way you or I might rate a restaurant or doctor or retailer. Perhaps predictably, the reviews of about 12,000 charities vary widely in their usefulness. Some are incredibly thoughtful, others are … not; many reviewers simply don’t have the deep information they need to offer an informed opinion. There is something to be said for hearing what volunteers, staff, donors and clients have to say about nonprofits, but how do these ratings stack up against the serious due diligence that we are always urged to perform before we make a donation?

BBB’s Wise Giving Alliance, which covers approximately 1,500 national charities (I was unable to find the exact number on its site), focuses on four key measures: good governance, financial accountability, truthfulness and transparency (“willingness to disclose basic information to the public”). These are all important standards, but I wonder how many donors who see the BBB seal on a charity’s site are aware that, in its own words, it does “not seek to evaluate the quality and content” of a charity’s performance and effectiveness. The standards are all “best practices” in the field, and the very process of seeking to meet them will, at a minimum, educate a charity about how it should govern itself and provide relevant information to the public about its operations. But there is no evaluation here of programmatic quality.

Charity Checker combines into one accessible site the ratings of four well-known organizations, making it easy for busy donors to find everything in one place — though it is important to remember that it covers a relatively small fraction of the more than 1.5 million US charities, and likely very few of the community-based nonprofits that operate in your hometown.

In any case, it would be a mistake to think that the review process, because it is aggregated, is necessarily comprehensive. None of the sites that power Charity Checker assesses the needs a charity exists to meet, the programs it has created to meet those needs, or the effectiveness of that work. None of them claims to do this, either, but the whole business of awarding stars and badges and seals, and then of aggregating them, creates the illusion of comprehensiveness for a public eager for hard answers about where to give — and short on time to conduct its own research.

So what would it take to make the evaluation process really valuable, and how might it work in communities around the nation?

Stay tuned for Part III.
