Earlier this month I travelled to Emory University in Atlanta to attend the annual Imagining America (IA) Conference. A consortium of over 100 colleges and universities, IA’s mission is to “create democratic spaces to foster and advance publicly engaged scholarship that draws on arts, humanities, and design.” Specifically, IA is focused on the herculean task of bringing a greater sense of democracy to the mandarin system of higher education. Its target membership is those who self-identify as publicly engaged artists, designers, scholars, and/or community activists.
I have to say I love this group. They tend to view mainstream issues from slightly off-center perspectives. IA was founded fifteen years ago by my friend David Scobey, who was then director of the University of Michigan’s Arts of Citizenship Program, with his colleague Julie Ellison. The consortium publishes an online journal, hosts several conferences annually, and oversees a number of important and timely collaborative research and action initiatives—including the Full Participation research project.
One enlightening conference session I attended was Saturday morning’s “How Do We Know? Defining Community Impact Metrics in Partnership.” The session was organized by an IA working group formed in 2010 to document, measure and assess the impact of public scholarship and campus-community partnerships. Its members identified the five core values of collaboration, reciprocity, generativity (sic), rigor and practicability as guiding principles of meaningful assessment for community engagement and public scholarship. The goal of any assessment, according to this group, should be “to contribute to both transformative outcomes (e.g. improved campus-community partnerships, impact in relation to defined civic, social and academic goals) and, just as significantly, to transformational processes.”
This all sounds good. But what the heck does it mean in practice?
Session leaders presented several methods for assessing community engagement to a full room on Saturday. The method I found most intriguing was “Outcome Harvesting,” presented by Lisa Yun Lee, the director of the School of Art & Art History at the University of Illinois at Chicago (and a 1999 Duke Ph.D. recipient in German Studies). After brief presentations, workshop participants were invited to split into working groups, each led by one of the presenters. I followed Professor Lee to learn more.
Developed a decade ago by Ricardo Wilson-Grau and his colleagues, Outcome Harvesting is widely used by development and social change programs around the world, according to a summary report published by the Ford Foundation. In this method, individuals or organizations are change agents whose interventions influence the outcome of a project, successfully or unsuccessfully. Rather than measuring actual results against predetermined and idealized goals and objectives set in advance of project completion, Outcome Harvesters, according to Wilson-Grau and Britt, “work backward”—like forensic scientists—to determine and measure the process of change that occurs as the result of specific interventions.
OUTCOME: a change in the behavior, relationships, actions, activities, policies, or practices of an individual, group, community, organization or institution.
OUTCOME HARVESTING: the identification, formulation, analysis, and interpretation of outcomes to answer useable questions.
*Wilson-Grau, Ricardo & Heather Britt. “Outcome Harvesting.” MENA Office: Ford Foundation, May 2012: 1.
Professor Lee asked everyone in her group to think about a program or initiative to evaluate using this method. She handed us a worksheet with the following questions to guide our assessment:
- How do the social actors that you influence change through the assistance of your project?
- What is the unique contribution [of] your site/project in moving social actors towards action?
- Under what conditions is your site/project successful in fulfilling the goal of moving social actors toward action for positive social change?
How? Why? Under what conditions? These are a few of the key questions that Wilson-Grau and Britt use to measure not only desired and/or intended outcomes but also those that were not originally predicted or expected from the planned intervention.
Outcome Harvesting, according to Wilson-Grau and Britt, measures the process of transformation by taking into account that planned interventions may change, leaving room for adaptation during implementation. These methodological adjustments are not necessarily a bad thing, so long as the outcome remains in line with the change agent’s mission and goals for the project.
A frustrating aspect of conference sessions is sometimes their saving grace: they are short. We didn’t have enough time to apply this method of assessment to real-world situations. Outcome Harvesting holds promise, however, as an effective way to measure specific outcomes, especially in complex projects where the relationship between cause and effect is not immediately clear.
Claims made in this method of evaluation must be verified by independent and knowledgeable people, Lee instructed. These credible “verifiers” must go on record stating the degree to which they concur with both the contributions claimed and the outcomes described in the evaluation report.
This method is useful for those of us who oversee complex projects and initiatives that develop and change over time. It is difficult to anticipate the full range of outcomes that might be realized once a planned intervention is complete. This method allows us to “harvest” the actual as well as the desired outcomes of our work.