Development & Education

Discussion: How Do You Know a Good Charity When You See It?

This story is a part of a series

Tracking Charity

Reporter Amy Costello is leading an online discussion about evaluating charities with Iqbal Dhaliwal (MIT's Poverty Action Lab), Dayna Brown (CDA Collaborative Learning Projects) and Holden Karnofsky (GiveWell).

When people find out that I reported from Africa for many years and am now producing a series called Tracking Charity, they frequently ask me this: "Which charities do you think are doing really good work on the ground overseas?"

Honestly, I have trouble answering.

Certainly, many charities are doing good work, but even after all my years covering conflict, food crises, HIV/AIDS, and refugees, I still find it difficult to define effective aid. How should one measure success? Should all charities keep overhead low, or can high expenses be justified if they allow a charity to hire the best people? Even if an aid program improves lives in the short term, might it create a culture of dependency in the long run?

On Thursday, August 8, we're giving you the chance to discuss these and related questions with people who have devoted their careers to answering them. I'll be moderating the conversation and will be joined by:

Iqbal Dhaliwal, an economist who grew up in Delhi, is director of policy at MIT's Abdul Latif Jameel Poverty Action Lab. When people ask him where to donate money, he advises, "Don't just think process, but think of the final impact that you are interested in."

Dayna Brown is director of the Listening Program at CDA Collaborative Learning Projects, a nonprofit in Cambridge, Massachusetts. She is co-author of Time To Listen: Hearing People on the Receiving End of International Aid.

Holden Karnofsky is co-founder of GiveWell, a nonprofit that conducts cost-benefit analyses of charities "to help donors decide where to give." A graduate of Harvard University, he previously worked in the hedge fund industry.

Our discussion will take place in the section below where you can leave your questions and comments. You can follow the discussion as it evolves by subscribing to the comment thread by RSS, or by clicking "Subscribe via email" at the bottom of the discussion box.

Editor's Note: Here is an archived version of this discussion, which took place on an older version of our website. Feel free to continue the conversation in the live comment section below.



    amy_costello

    Good morning to our panelists, Iqbal, Dayna and Holden! Thank you for joining today's discussion. Panelists, please introduce yourselves and answer this question by hitting "Reply": What's the biggest misunderstanding out there about what makes aid effective?


    Iqbal Dhaliwal (in reply to amy_costello)

    Thanks Amy and PRI for hosting this discussion on a very important topic. The Jameel Poverty Action Lab (J-PAL) started as a center in MIT’s economics department with a mission to promote evidence-informed development policy. Our network of 87 professors from 42 universities has ongoing or completed evaluations of over 400 programs in about 40 countries. I lead the policy group at J-PAL, which works to make research results more accessible to implementers, policymakers, donors, and civil society, and partners with them to scale up programs and policies that are found to be effective.

    These underlying programs are designed and implemented by NGOs, foundations, governments, and the private sector, with varying degrees of input from policymakers, researchers, local communities, donors, and other stakeholders. Using rigorous impact evaluations, our researchers working in the field try to understand which policies and programs work, which do not, and why, generating original evidence that implementing organizations and donors can use to make decisions based on hard evidence rather than on instinct, ideology, or selective anecdotes from the community.

    To answer your question about the common “misunderstanding about what makes aid effective”: many organizations trying to create social change or improve people's lives believe that if their programs are based on strong logic or a sound theory of change, then, combined with a motivated organization, they will be effective. In reality, we need to be careful and really test our assumptions about what works, and why, in different contexts. There is often an under-appreciation of the complexity of the constraints that affect an individual, household, community, or economy, and of the administrative capacity of the implementing organizations (i.e., their ability to effectively deliver and monitor the program).

    For instance, a charity, its donors, and even many parents in the local community may believe that children are not learning in school because textbooks are unaffordable, so the charity provides free textbooks directly to the children, expecting this to increase learning. The charity then measures its success in terms of the number of textbooks delivered, the donors see pictures of smiling children with books in their hands, and a few parents offer nice anecdotes about how their child, who never had a book before, now comes home and reads. Very often, no attempt is made to rigorously measure the one outcome that really matters and that motivated the entire program – the learning levels of the children.

    In a few cases, reading tests are administered to the students receiving the books at the beginning and end of the school year, and the increase in these test scores is presented as proof that the program improved learning outcomes. In other cases, comparisons are made between schools that received the program and neighboring-district schools that did not. These commonly used methods ignore the learning that would have happened during the year anyway, and the impact of other factors or programs that may have run concurrently (e.g., a new, motivated teacher may be driving the before-after improvement, or greater scrutiny by administrators in schools that got the free textbooks may decrease teacher absenteeism and drive the inter-district difference).
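    The bias in a simple before-after comparison can be made concrete with a small simulation. All numbers here are illustrative assumptions, not data from any real evaluation: suppose every student gains 10 test points over a year regardless of the program, and textbooks add a true 2 points. A before-after estimate credits the program with the whole year's gain; a randomized control group absorbs the background trend and isolates the program's effect.

```python
import random

random.seed(0)

N = 1000       # students per group (hypothetical)
TREND = 10.0   # points every student gains in a year anyway
EFFECT = 2.0   # true additional gain from free textbooks
NOISE = 5.0    # idiosyncratic variation in year-end scores

def year_end_score(baseline, treated):
    """Year-end score = baseline + secular trend (+ program effect) + noise."""
    gain = TREND + (EFFECT if treated else 0.0)
    return baseline + gain + random.gauss(0, NOISE)

# Randomly assign students to treatment (textbooks) or control.
baseline = [random.gauss(50, 10) for _ in range(2 * N)]
random.shuffle(baseline)
treat, control = baseline[:N], baseline[N:]

treat_end = [year_end_score(b, True) for b in treat]
control_end = [year_end_score(b, False) for b in control]

mean = lambda xs: sum(xs) / len(xs)

# Naive before-after estimate: wrongly credits the program
# with the trend too, so it lands near TREND + EFFECT = 12.
before_after = mean(treat_end) - mean(treat)

# Randomized comparison: the control group absorbs the trend,
# so the estimate lands near the true EFFECT = 2.
rct_estimate = mean(treat_end) - mean(control_end)

print(f"before-after estimate: {before_after:.1f}")
print(f"randomized estimate:   {rct_estimate:.1f}")
```

    The before-after number looks roughly six times larger than the program's true effect, which is exactly the kind of inflation Iqbal warns about when the year's natural learning is attributed to the intervention.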

    This is why it is so important not just to rely on an untested theory of change, however sound it may seem (far less on the instincts, ideology, or inertia of the implementer, the donor, the researcher, or even the beneficiary), but also to rigorously measure the aggregate impact of a program on all the beneficiaries, on the outcome we are really interested in. Adding rigorous evidence to the mix of factors used to make funding and program decisions can thus increase the impact of development spending.