You Don't Need Perfect Conditions: Reflections on Evaluating Systems Change

Written by: Alexandria Sedar, The Convive Collective

Over the last decade, I've seen systems change become an increasingly prominent part of the evaluation and programming landscape. There have long been practitioners and champions for a systems approach, but lately, it seems like more funders want to understand it, more grantees are trying to do it, and more evaluators are being asked to measure it. And yet, a common thread runs through many of my conversations: systems change can be hard to see, hard to measure, and hard to communicate.

Part of what makes it hard is that we often lack a shared language for describing where a system is and where it's headed. And once we get our heads around what systems change might look like, we face the next challenge: how do we translate that understanding into something useful to different actors and parts of the system? How do we ensure evaluation insights and feedback actually feed back into the work?

These aren't new questions, but a recent webinar gave me a lot to chew on. Clare Nolan, Anna Saltzman, and other talented members of Engage R+D presented "Evaluating Systems Change Midstream: Practical Tools for Real Conditions" for the American Evaluation Association's Systems in Evaluation Topical Interest Group. They shared lessons from their evaluation of the Early Childhood Governing and Finance Project (ECGFP), a multi-state philanthropic initiative to strengthen how states govern and fund early learning. I went down the rabbit hole from there, reading the companion evaluation report and exploring the Roadmap for Change tool that came out of it, and was left with a lot to think about. What I found most useful wasn't a new methodology. It was a reminder that good evaluation can be designed for real conditions, not ideal ones.


A Familiar Challenge

The evaluation challenge Engage R+D walked into is one many of us would recognize. Eight state and territorial grantees: state agencies, nonprofits, and public-private partnerships, each starting from a different place, working in different political and economic contexts, and pursuing different combinations of strategies. No unified theory of change. No shared baseline. And a funder who needed to understand progress without overburdening the people doing the work.

What's worth examining is not whether they solved these challenges, but how they worked with them. A few things in particular have stayed with me.

There was an implicit theory of change all along; it just needed to be made visible. Rather than imposing a framework from the outside, Engage R+D worked alongside grantees to surface shared priorities and themes. They used a participatory exercise, a mad libs-inspired format where grantees completed sentences about their vision, strategies, and what success looked like, to draw out the mental models and hopes about the system they were operating in. As a facilitation nerd, I love this. It's a low-stakes, playful way to get at something quite deep: what people believe about how change happens and what change means to them. And it worked; common threads emerged across very different contexts.

Grantees used different “tactics” that clustered around key actions. Even though specifics varied, there was a shared goal and theme underneath. For example, in the financing domain, some grantees were mapping existing resources, others were expanding investment, and others were focused on making sure resources reached the right people equitably. The “how” was different, but all grantees were working towards reforming financing. This kind of pattern recognition is one way to build shared language without requiring everyone to work the same way.

Evaluation insights served actors across the system. This is perhaps the thing I appreciated most. The Roadmap for Change that emerged from this evaluation is designed, in the document's own framing, as "a tool for planning and strategic collaboration across agencies." Not a report card, but a way to "support honest reflection on strengths, challenges, and opportunities without judgment." Structured around four areas (People, Money, Progress, and Context), it walks cross-agency teams through stages of development from "Early" to "Transforming," with the goal of building shared understanding of where a system is and where it's headed.

What struck me most is how it reorients the question. Instead of asking "did this work?", it asks "where are we, and what would it look like to move forward?" That's a subtle but meaningful shift, and one that feels applicable well beyond early childhood governance.


The Bigger Picture

The conditions Engage R+D navigated aren't unique to early childhood; they show up across sectors, initiatives, and funding contexts. And the Roadmap they produced serves as an example of what evaluation can produce when it's designed with practitioners in mind. The tool is built on the premise that every system starts somewhere and that being at an "early" stage isn't failure; it's a foundation. Progress happens incrementally, and the most important thing is to have an honest starting point and a shared direction.

That feels right to me, not just for early learning systems, but for any of us trying to understand and communicate change in complex systems. We don't need perfect conditions. We need a place to begin, and people willing to look honestly at where we are.

Alexandria (Dria) Sedar is Systems Change Measurement Lead at Convive Collective. The webinar, "Evaluating Systems Change Midstream: Practical Tools for Real Conditions," was hosted by the American Evaluation Association's Systems in Evaluation Topical Interest Group (SETIG).

