What is a ‘Fair’ Performance Standard for Nonprofits? The Art of Philanthropic Due Diligence (Part 3)

Posted August 17, 2010 08:55 AM by Al Mueller


None of us wants to be judged for not doing what we never tried to do. But it happens all the time. One nonprofit I recently critiqued responded with exactly that complaint: “You can’t say we have failed to become partially self-sustaining when we have not made that an explicit goal for the last ten years.” The complaint was justified. I had to modify my critique to read: if a donor wants a self-sustaining model, this organization has not developed one in the last ten years.

So how do we create a ‘fair’ performance standard for measuring nonprofit outcomes? In the world of business investments, analysts can run the numbers and get a clear record of expenses, revenue, and profit. In the nonprofit world, measuring performance is more elusive. There is no absolute standard that applies equally to organizations operating in different program and geographic areas. The only fair approach is comparing the relative performance of organizations in the same sectors. 

A recent study by Hope Consulting found that 85% of funders want to fund effective organizations, yet only 3% of all funders evaluate the relative performance of grantees. That is a significant disparity. It is the proverbial difference between good intentions and strategic giving. It is also the reason my business isn’t booming (evaluating the relative performance of nonprofits in select geographic and programmatic areas happens to be my specialty).

How does one check the “relative performance” of a nonprofit? It takes proper data collection, some experience, and some time. For example, I just compared the performance of a group of nonprofits serving the homeless population in Fort Worth, TX. I gathered the numbers on how many distinct clients were served. I identified the types of services offered at each organization. I found out how many meals were served. I added up the number of clients placed in paying jobs, and I counted how many homeless individuals now lived in independent housing. With these statistics on both the organizations’ outputs (program activities) and outcomes (lasting impact), I could analyze their relative performance.

A couple of homeless service providers boasted greater outputs: the most meals served and the most overnight stays. A couple of other service providers boasted the best outcomes: the most job placements and the most program graduates in independent housing. Those numbers gave me the relative performance of homeless service providers in Fort Worth. All that remained was a personal decision about which type of results was most meaningful. In this particular case, our clients most valued the outcome of drug-free, employed individuals who found long-term housing arrangements. We concurred.
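None of this comparison requires heavy machinery. Here is a minimal sketch in Python of the kind of side-by-side tally involved; the provider names and figures are entirely made up for illustration, not data from the Fort Worth analysis.

```python
# Minimal sketch of a relative-performance comparison.
# All organizations and numbers below are hypothetical, for illustration only.

providers = {
    "Provider A": {"meals_served": 95_000, "overnight_stays": 41_000,
                   "job_placements": 35, "independent_housing": 22},
    "Provider B": {"meals_served": 60_000, "overnight_stays": 28_000,
                   "job_placements": 110, "independent_housing": 74},
    "Provider C": {"meals_served": 120_000, "overnight_stays": 52_000,
                   "job_placements": 18, "independent_housing": 9},
}

outputs = ["meals_served", "overnight_stays"]          # program activities
outcomes = ["job_placements", "independent_housing"]   # lasting impact

def leader(metric):
    """Return the provider with the highest value for a given metric."""
    return max(providers, key=lambda name: providers[name][metric])

for metric in outputs + outcomes:
    kind = "output" if metric in outputs else "outcome"
    print(f"{metric} ({kind}): {leader(metric)}")
```

The tally only ranks organizations; the last step, deciding whether outputs or outcomes matter more, is the donor’s own value judgment.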

Without analyzing “relative performance” in similar sectors, nonprofits can fall victim to unfair absolute standards. For example, Charity Navigator has spent years rating all nonprofits on the percentage of money spent on Administrative & Fundraising costs versus Program costs. Its absolute standard, which downgrades a nonprofit for putting 32% instead of 20% of its annual budget in the former category, is unfair (to its credit, the standard is being changed next year). A nonprofit that does weekly case management with hundreds of struggling families will need more personnel than an organization that acquires and distributes medical equipment for the elderly. More staff leads to higher administrative costs for the additional office space, equipment, benefits, and insurance required. It is inappropriate to judge both organizations by some universal standard.

The inequity of absolute standards can take many other forms. A common pitfall is the “most bang for the buck” principle. For example, a school outreach program may “reach” 70,000 children for only $280,000. That is $4 per person. Should a donor choose that kind of efficient performance over a holistic, residential urban program for 18- to 30-year-olds that provides housing, tutoring, college tuition, counseling, and financial education at a price of $35,000 per person per year? The same $280,000 could allow a donor to reach either 70,000 children or 8 guys. Wouldn’t you go for the “most bang for the buck”? The fact is that comparing the performance of these organizations is unfair. One goes deep with each individual for three years, while the other goes wide, with two hours of contact per child.
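For clarity, the cost-per-person arithmetic in that comparison is nothing more than the following; the dollar figures are the ones quoted in the example above.

```python
# Cost-per-beneficiary arithmetic from the example above (illustrative only).
budget = 280_000                       # donor's total gift in dollars

outreach_cost_per_child = 4            # school outreach: $4 per child "reached"
residential_cost_per_person = 35_000   # residential program: per person, per year

children_reached = budget // outreach_cost_per_child         # 70,000 children
residents_supported = budget // residential_cost_per_person  # 8 individuals

print(f"Outreach: {children_reached:,} children; residential: {residents_supported} individuals")
```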

At the end of the day, we must compare the outputs and outcomes of organizations operating in similar areas with similar programs. Throw the unfair absolute standards out the window. We must push that figure far above the current 3% of donors who evaluate “relative performance.” If that happened, some nonprofits would lose all their funding once the public saw that they have good intentions but not great performance compared to their peers. Good. Other nonprofits would be given the chance to stand out from the rest. If charitable giving choices reflected the relative performance of organizations rather than their marketing or networking success, philanthropists would be not only fair but also more effective.


Tags: Charity Evaluation, Charity Evaluations, Due Diligence, Nonprofit Evaluation, Performance Standard
