
The Community Psychologist

Volume 47 Number 3 
Summer 2014

Community Ideas

Edited by Gina Cardazone

Next Generation Evaluation

Evaluation is a core competency for community psychologists and is increasingly valued by practitioners, grantmakers, and community partners. Often, evaluation is framed in terms of accountability and is associated with specific grants or projects. However, community psychologists know that evaluation is about much more than accountability. Appropriately deployed, it can enable ongoing organizational learning and more effective community action.

In November 2013, the Stanford Social Innovation Review (SSIR) and the Foundation Strategy Group (FSG) co-sponsored a conference titled Next Generation Evaluation: Embracing Complexity, Connectivity, and Change. The conference focused on three approaches to evaluation that were deemed to be game-changers: Developmental Evaluation, Shared Measurement, and Big Data.

As community psychologists, we are often called upon to conduct evaluations. The good news is that the values and trends identified as essential to next generation evaluation are largely compatible with the values of community psychology. In a learning brief created prior to the conference (Gopalakrishnan, Preskill, & Lu, 2013), FSG listed six characteristics of next generation evaluation:

(1) Evaluation of whole systems vs. individual programs and projects

(2) Shorter cycles and more real-time feedback using alternative formats vs. a fixed plan with end-of-year reports

(3) Newer, innovative, often digital data collection vs. exclusively traditional data collection methods

(4) Shared responsibility for learning across multiple organizations vs. evaluation required by one foundation for one grantee

(5) Sophisticated data visualization and infographics vs. traditional data reporting

(6) Everyone collecting and using data vs. only the evaluator collecting data

It is important to note that, although the characteristics of next generation evaluation are presented in contrast with those of traditional evaluation, the two are viewed as complementary rather than competing approaches. At the same time, many of these characteristics clearly align with the principles and practices of community psychologists.

The focus on evaluation of whole systems is directly related to community psychology’s long-held focus on systems change (Foster-Fishman, Nowell, & Yang, 2007). The emphasis on short-term cycles and real-time feedback is compatible with the principles and practices of community practitioners who believe that evaluation should be a tool for organizational learning, rather than simply a mechanism to assess outcomes and determine funding allocation (Fetterman, 2002). The shared responsibility for learning across multiple organizations is clearly connected to community psychology’s longstanding tradition of working with coalitions and collaboratives (Wolff, 2001), and both this and the imperative for everyone to collect data are compatible with community psychology’s emphasis on empowerment (Fetterman, 2002; Perkins & Zimmerman, 1995). 

While the focus on innovative digital data collection and sophisticated data visualization may not at first glance appear related to the principles of community psychology, these technologies can enable more democratic participation in data collection and interpretation. Community psychologists highly value participatory research methods and the dissemination and practical use of research results, and both can be aided by judicious use of emerging technologies. The brief lists the proliferation of digital infrastructure as one of the major trends driving the transformation of evaluation practices, along with philanthropic innovations and changes in stakeholder interactions.
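
As a small illustration of how accessible these tools have become, here is a minimal Python sketch; the program names and figures are invented, and the use of matplotlib is my assumption, not something named in the brief. It turns a handful of evaluation numbers into a chart an organization could share with community partners:

    # Minimal sketch: a simple visual summary of evaluation data.
    # Program names and values are invented for illustration;
    # matplotlib is assumed to be installed.
    import matplotlib.pyplot as plt

    programs = ["Mentoring", "Tutoring", "Outreach"]
    participants = [45, 72, 30]

    fig, ax = plt.subplots()
    ax.bar(programs, participants)
    ax.set_ylabel("Participants served")
    ax.set_title("Quarterly participation by program")
    fig.savefig("participation.png")  # an image file that is easy to share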

The brief provides summaries and examples of the three "game-changing" approaches that were the focus of the conference. Here I'll provide a quick overview of each, though I refer you to the brief and conference materials, as well as other sources (see resource list), for more information.

Developmental evaluation was first described by the ever-prolific Michael Quinn Patton (2011). In short, it is an approach that emphasizes real-time feedback, learning, and adaptation, both in programs and in evaluation methods. It is presented in contrast to pre-planned evaluations that follow a linear pathway based on a logic model. It is especially useful for evaluating innovative programs in which the theory of change is non-linear or hasn't been fully articulated, and in which activities or targeted outcomes may change over time (Gamble, 2008). The brief provides an example of an evaluation of a college prep program that was dynamic and adaptive, which was necessary not only because of the experimental nature of the program but also because of ongoing changes in the partnerships and environment influencing it.

Shared Measurement refers to cases in which multiple organizations share data, collectively determine common indicators and outcomes, and learn from one another. It is presented as a complement to program- or organization-level evaluations, which are still necessary. The brief provides an example of a collaborative effort organized by a federal agency, an alliance of Arizona-based nonprofits, and a community foundation. In partnership with 15 nonprofit organizations, they determined indicators at multiple levels, including indicators shared by the entire cohort and a common pool of customizable indicators.
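
To make the idea of multi-level indicators concrete, here is a minimal sketch in Python; the organization and indicator names are hypothetical and are not drawn from the Arizona cohort described in the brief. It combines a cohort-wide core of shared indicators with a common pool of optional ones that each organization can select from:

    # Hypothetical sketch of shared measurement: a cohort-wide core of
    # indicators plus a common pool of customizable ones.
    # All organization and indicator names are invented for illustration.

    SHARED_INDICATORS = {"clients_served", "volunteer_hours"}
    CUSTOMIZABLE_POOL = {"meals_delivered", "job_placements", "housing_nights"}

    def indicators_for(org_choices):
        """Return one organization's full indicator set: the shared
        core plus any valid picks from the common pool."""
        invalid = org_choices - CUSTOMIZABLE_POOL
        if invalid:
            raise ValueError("Not in the common pool: %s" % invalid)
        return SHARED_INDICATORS | org_choices

    def cohort_totals(submissions):
        """Aggregate the shared indicators across organizations,
        supporting cohort-level learning alongside per-org results."""
        totals = {name: 0 for name in SHARED_INDICATORS}
        for data in submissions.values():
            for name in SHARED_INDICATORS:
                totals[name] += data.get(name, 0)
        return totals

    print(indicators_for({"meals_delivered"}))
    print(cohort_totals({
        "org_a": {"clients_served": 120, "volunteer_hours": 300},
        "org_b": {"clients_served": 80, "volunteer_hours": 150,
                  "job_placements": 12},
    }))

The design mirrors the structure described above: every organization reports the shared core, while the customizable pool keeps organization-level flexibility without fragmenting the cohort's common language.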

Big Data is a term whose popularity has grown rapidly over the last couple of years. It attempts to capture perceived changes in data resulting from developments in digital infrastructure, characterized by the three "V's": volume, variety, and velocity. The ability to gain rapid access to huge amounts of data, from cell phone records to sensors tracking climate information, has been compounded by improvements in the ability to store and manipulate such vast quantities of data. Although there are dangers in overreliance on easily obtained digital data, there is no doubt that the big data revolution has led to new opportunities in business, science, and the social sector. The brief uses the example of a UN-sponsored program that draws on data from multiple sources, such as social media and bank records, to find information that can help vulnerable populations.
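
As a toy illustration of two of those "V's", variety and velocity, the following Python sketch (with invented data and field names, not taken from the UN program above) processes a stream of differently shaped records as they arrive and keeps a running tally per source; real big-data pipelines perform this kind of incremental summarization at vastly larger scale:

    # Toy sketch: handling variety (mixed record shapes) and velocity
    # (summarizing records as they arrive). Data invented for illustration.
    from collections import Counter

    def summarize_stream(records):
        """Incrementally tally records by their 'source' field,
        tolerating records with differing shapes."""
        counts = Counter()
        for record in records:
            counts[record.get("source", "unknown")] += 1
        return counts

    stream = [
        {"source": "social_media", "text": "..."},
        {"source": "bank_records", "amount": 42.0},
        {"source": "social_media", "text": "..."},
        {"region": "west"},  # record with no 'source' field
    ]
    print(summarize_stream(stream))
    # Counter({'social_media': 2, 'bank_records': 1, 'unknown': 1})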

Some aspects of these approaches have been in use by community psychologists for years, whether or not we've used the same terminology (e.g., developmental evaluation), while others, particularly those relying on recent changes in digital infrastructure, may still be very new to researchers and practitioners. In any case, community psychologists should be at the forefront of innovation in evaluation, ensuring that these developments are put to maximum use in increasing empowerment, improving individual quality of life, and strengthening communities.

References

Fetterman, D. M. (2002). 2001 invited address: Empowerment evaluation: Building communities of practice and a culture of learning. American Journal of Community Psychology, 30(1), 89-102.

Foster-Fishman, P. G., Nowell, B., & Yang, H. (2007). Putting the system back into systems change: A framework for understanding and changing organizational and community systems. American Journal of Community Psychology, 39(3-4), 197-215.

Gamble, J. A. (2008). A developmental evaluation primer. Montréal: J.W. McConnell Family Foundation.

Gopalakrishnan, S., Preskill, H., & Lu, S. J. (2013). Next generation evaluation: Embracing complexity, connectivity, and change - A learning brief. Retrieved from http://www.fsg.org/nextgenerationevaluation.aspx

Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.

Perkins, D. D., & Zimmerman, M. A. (1995). Empowerment theory, research, and application. American Journal of Community Psychology, 23(5), 569-579.

Wolff, T. (2001). A practitioner's guide to successful coalitions. American Journal of Community Psychology, 29(2), 173-191.

Resources on Next Generation Evaluation

Next Generation Evaluation Conference Archive: http://www.fsg.org/nextgenerationevaluation.aspx

Developmental Evaluation: http://betterevaluation.org/plan/approach/developmental_evaluation and http://aea365.org/blog/michael-quinn-patton-on-developmental-evaluation-applying-complexity-concepts-to-enhance-innovation-and-use/

Shared Measurement: Kania, J., & Kramer, M. (2011). Collective impact. Stanford Social Innovation Review, 9(1), 36-41. Retrieved from http://www.ssireview.org/articles/entry/collective_impact and Breakthroughs in Shared Measurement documents at http://www.fsg.org/tabid/191/ArticleId/87/Default.aspx?srpush=true

Big Data: http://www.fsg.org/KnowledgeExchange/Blogs/StrategicEvaluation/PostID/440.aspx and http://aea365.org/blog/stephanie-fuentes-on-harnessing-big-data-in-higher-education-evaluators-as-data-scientists/



