Blog: Shifting power and advancing justice through retrospective logic

We, Michelle Garred and Matteah Spencer Reppart, would like to spark a conversation about Outcome Harvesting (OH) and social justice. OH is a unique evaluation approach because it uses retrospective logic, identifying outcomes after they emerge, in response to complexity. Following Wilson-Grau, OH practitioners consistently treat complexity as a lens for understanding social change within contexts that are too dynamically multi-causal to permit advance prediction of an intervention’s outcomes. Complexity theory also helps us to strategize systems change, including the challenge of dismantling systemic discrimination along lines of ethnicity, nationality and other human differences. OH is well suited for evaluating justice-promotion initiatives because of its foundational understanding of complex systems.

At the same time, we also see OH practitioners approaching retrospective logic from a different justice-focused angle, as a way of respecting the worldviews of local people. Most people don’t think of social change in the mechanistic, linear way that mainstream evaluation requires, but rather as an organic and emergent process. The OH practice of retrospective logic can provide a breath of cultural fresh air, because the assumptions underlying mainstream evaluation are not culture-neutral. Rather, excessively linear thinking is an expression of European and North American Enlightenment-era rationalism, spread around the world through (neo)colonialism. When OH displaces linear predictive planning, it shifts power by honoring diverse ways of working, thinking and knowing. If that cultural shift is reinforced by equitable relational dynamics within the evaluation process, the transformative effect is even stronger.

In an example from the USA, these benefits were evident during a 2020 evaluation in Grand Rapids, Michigan. LINC UP is a nonprofit organization that pursues racial equity by building economic and community political power for people of color. Matteah, with OH coaching from Michelle, facilitated an evaluation of their work, which identified 37 significant outcomes relating to housing policies and community power (e.g., police reform, civic engagement, and human rights). These findings helped LINC to recognize and validate their contributions toward systems change. The results were used to focus LINC’s strategic plan and formalize their advocacy efforts, doubling down on seeking systemic transformation and coalition building for police reform, public safety, civic engagement, and housing, all areas shaped by structural racism. Considering that LINC engaged this work outside of grant-funded deliverables, they experienced OH as liberating: it was responsive to their changing context and emergent strategy, let them fully demonstrate their influence, and freed them to shift away from focusing solely on predicted programmatic outcomes.

At the global level, Michelle is contributing to a noticeable trend toward using OH in work with faith-inspired actors, who tend to be much more inclined toward emergence than linear predictive planning. Harvesting outcomes through narrative storytelling, a central cultural practice among many faith communities, increases their power when engaging with secular funders and partners. Indeed, across sectors we increasingly hear global colleagues saying that they choose OH in large part for its resonance with non-Western worldviews. We believe this to be an expression of the global movement toward “decolonizing” evaluation (while noting some controversy around the term). It is up to the OH community to interpret what these trends mean. We are eager to hear your perspectives. Are social justice values becoming an important rationale for using OH, alongside and complementary to the complexity of our contexts?

Note: We thank Min Ma and Susan Putnins of MXM Research for their thought partnership in developing these OH ideas.

Blog: Not everything that counts can be counted, part 1: Visualizing Outcome Harvesting Data Effectively

By Chris Allan and Atalie Pestalozzi

You’ve probably seen them: those reports using Outcome Harvesting with beautiful graphs of the outcomes, like these two examples from recent reports.

[Chart: Most Frequently Reported Outcomes by Program Graduates]

How useful are these graphs, given that Outcome Harvesting is primarily a qualitative method?

Counting outcomes by type reveals patterns, which can be helpful for understanding where the program has been successful, which groups have been influenced the most, which areas are missing, and so on. And this can help construct narratives about what went right and what needs more attention. So, when nuanced narratives about these patterns explain the graphs and the work, then we have some solid evidence to go on.
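To make the mechanics concrete, here is a minimal sketch in Python of the kind of tally that sits behind such graphs. The outcome records and category labels are invented for illustration, not drawn from any real harvest.

```python
# A minimal sketch (invented data) of the tally behind these graphs: each
# harvested outcome is tagged with a category, and categories are counted.
from collections import Counter

# Hypothetical outcome records: (short description, category)
outcomes = [
    ("City adopts inclusive housing ordinance",  "Policy change"),
    ("Residents testify at council hearings",    "Civic engagement"),
    ("Coalition forms around police reform",     "Relationships"),
    ("Local media reframe public-safety debate", "Discourse"),
    ("Residents testify at budget hearings",     "Civic engagement"),
]

counts = Counter(category for _, category in outcomes)
for category, n in counts.most_common():
    # Crude text bar chart; in practice this would feed a plotting library.
    print(f"{category:<18} {'█' * n} ({n})")
```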

But when we present this counting as we would quantitative results, there’s a risk that people will read more into them than is actually there. Here are some of the pitfalls:

  1. Not all outcomes are the same. There are vast differences in significance across a set of outcomes. A recent evaluation we did produced one outcome where a major government policy changed, and another where an African community-based organization received a small grant. The first we ranked as “high significance” due to its wider influence, the second “low,” but we counted both in the numbers of outcomes. A graph of all outcomes treats them with the same weight (see the sketch after this list).

  2. Readers understandably want to see a statistical analysis. Tallying up the number of outcomes says little about program influence. It is common to present outcomes in a graph like the one below.

     [Chart: percentage of outcomes by category]

     It implies that there were more than twice as many outcomes in changed discourse as in changed practices (51% vs. 20%). Does this mean the program was twice as effective at changing discourse as at changing practices, as the total outcomes in each category would suggest? As a pattern, yes; but as a statistical measure of effectiveness, the data just does not support that kind of conclusion.

  3. Sometimes you lump, sometimes you split. Evaluation teams sometimes disagree about which changes to lump together and which to report separately – the classic “lumpers” and “splitters.” In a recent foundation evaluation, one grantee reported dozens of related outcomes for which the foundation’s contribution was minimal. Had we counted each individual outcome, the total for the whole program would have grown by about 25%, vastly overstating the grantee’s contribution to one small aspect of the program. By grouping that large number of related outcomes into a couple of outcome statements, we represented that grantee’s contribution more accurately and portrayed what the program had done more realistically.
  4. Counting doesn’t capture the story. One of the most valuable things about Outcome Harvesting – and one we hear regularly from clients – is how outcome statements help tell the story of what happened. Somewhat like a mini case study, it generally takes only a couple of typical outcome statements to show how change was achieved in each category. If you turn outcomes into a number, you risk losing the story.
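As a concrete illustration of pitfalls 1 and 2, here is a minimal sketch in Python. The outcome set and the three-point significance scale are invented for illustration; the point is that a raw tally and a significance-weighted tally of the same harvest can tell quite different stories, and neither is a statistical measure of effectiveness.

```python
# A hedged sketch of pitfalls 1 and 2: the same (invented) outcome set,
# counted raw vs. weighted by an assumed significance score (high=3, low=1).
from collections import defaultdict

# Hypothetical outcomes: (category, significance weight)
outcomes = [
    ("Policy change", 3),  # e.g. a major government policy changed
    ("Funding",       1),  # e.g. a CBO received a small grant
    ("Funding",       1),
    ("Discourse",     2),
]

raw = defaultdict(int)
weighted = defaultdict(int)
for category, weight in outcomes:
    raw[category] += 1
    weighted[category] += weight

total = sum(raw.values())
for category in raw:
    share = raw[category] / total  # a pattern to interpret, not effectiveness
    print(f"{category:<14} raw: {raw[category]} ({share:.0%})  "
          f"weighted: {weighted[category]}")
```

In raw counts the small grants dominate; once significance is weighed in, the single policy change matters at least as much. Either way, the numbers describe patterns that still need the narrative behind them.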

So what’s a harvester to do? See our next blog on how we address these issues.