I’ve recently shifted from junior high to high school ministry, which has fostered a season of evaluation for me. I’m asking all sorts of questions about vision, values, programs, environments, structure, and leadership. Lots of what questions, why questions, and how questions are filling my thoughts and my journals. It brought me back to a college course on curriculum and program development, where the prof unpacked two models for evaluating ministry programs: Stake and Eisner. Both are based on the work of 20th-century educators, and their basic principles translate well into the world of youth ministry; they have been helpful for me in the evaluation process.
Based on one of educator Robert Stake’s models for educational evaluation–minus the technical jargon like “antecedents” and “empirical contingency”–this approach asks four questions drawn from the matrix below. The questions revolve around the relationships where the boxes connect:
1. Did we do what we planned? Did our plans actually happen? For instance, when planning a mid-week program, you may have an outlined schedule for the evening (announcements, video, worship, teaching, discussion, etc.). Whether those plans actually happen in real life is another story. This question begins the evaluation process by making simple observations about the program. We had an idea of what this event/discussion/program would be like; did it happen the way we thought it would?
2. Did our activities connect with our outcomes? Was this a good or poor plan for accomplishing what we wanted to accomplish? If you planned on having a 10-minute small group discussion, but had a desired outcome of students fully understanding the doctrine of the atonement, that plan won’t meet the outcome. However, if you planned on having a 45-minute discussion in order to foster some initial thinking about the atonement–or to build relational equity between a discussion leader and students–then the plan makes a bit more sense. The evaluation begins to get a bit deeper here; this is an evaluation of the planning process, asking if strategies and ideas are truly leading to the outcomes we desire.
3. Did our planned outcomes match our actual outcomes? Did we hit what we aimed for? This is the evaluation question most often asked, but it cannot be accurately evaluated without the first two. If you don’t know what your intended plan or outcomes were, it’s rather difficult to know if you accomplished them. Not hitting what you aimed for doesn’t always mean the activity or plan was a total bust; you may have been aiming for a deep theological discussion, but ended up with a student’s surprising authenticity about a hidden struggle. This question helps you process whether the planned outcomes were realistic, and whether the activity needs to evolve in the future.
4. Why did it happen the way it happened? Did our activity cause the actual outcome, or were there other factors involved? Maybe the plan and activities logically led to the outcomes. Maybe your plan for a water event was fantastic, but the abnormally cold and rainy weather altered your plans. Maybe your last-minute and sub-par planning for a Sunday morning talk turned into a significant spiritual awakening for a student. We’ve all had the experience where the Holy Spirit used our moment of weakness to reveal His transformative power, or where our seemingly perfect plans didn’t quite go so perfectly.
These are more objective and concrete evaluation questions–the Eisner model I’ll unpack tomorrow deals more with the subjective and abstract–and are useful tools in evaluating some of the basic programmatic and structure questions in ministry. You can read more of Robert Stake’s work here, though it’s very academic in nature.
How do you evaluate your program/curriculum/ministry/life? What are some of the questions you’re asking?