From Theory to Practice: Applying Relational Systems Evaluation to Evaluation Capacity Building
Authors: Su-Ching Lin (Graduate Institute of Education, National Changhua University of Education), Fu-Nan Su (Taichung Municipal Li-jen Junior High School)
Vol. & No.: Vol. 69, No. 1
Date: March 2024
Pages: 99-130
DOI: https://doi.org/10.6209/JORIES.202403_69(1).0004
Abstract:
Program evaluation involves the systematic collection of information about a program to improve its quality (Rossi et al., 2004). Program evaluators may be program stakeholders rather than neutral third parties; this is particularly the case when educators are asked to design and implement curriculum reforms. Under Taiwan’s 12-year compulsory education curriculum, teachers are expected to design, implement, and evaluate the curriculum (the program) to ensure its quality. This study used relational systems evaluation (RSE) as a model for evaluative capacity building (ECB) to equip teachers in different fields to evaluate interdisciplinary programs. Unlike a traditional evaluation model, which treats the program as fixed and simply to be implemented, RSE holds that a program and its implementation inform and shape each other so that the program’s objectives are achieved from the ground up (Bollinger, 2021). RSE is a cyclic process in which the local program context is evaluated in light of the relationship of the program’s components to the whole. The changes to be effected, the scope of the evaluation, and the evaluative competencies to be developed are laid out in partnership with stakeholders using the systems evaluation protocol (SEP). SEP, RSE, and ECB conceive of educators as both program practitioners and program evaluators, who must therefore have strong evaluative knowledge and skills (Trochim & Urban, 2021).
RSE is derived from theoretical approaches including evolutionary evaluation (EE), relational developmental systems theory (RSDT), and systems thinking (ST). In EE, all programs are viewed as cycles, and evaluation is integral to each cycle; in evaluating a system, evaluators must select an evaluation model that fits the current stage of the program’s cycle (Trochim & Urban, 2021). In RSDT, a program’s environment drives development and change, and evaluators must consider whether the program requires adjustment to its local environment or context (Bornstein, 2006). Finally, in ST, evaluators must consider the dynamic relationship between the parts of the program and the whole when implementing changes. Moreover, evaluators must establish program boundaries to determine the scope of the evaluation, identify the causes of strengths and weaknesses, and determine the critical outcomes to be measured. In RSE, all of these processes build evaluative competencies through a collaborative SEP process (Chauveron et al., 2021; Urban, Archibald et al., 2021).
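To make the EE principle of matching the evaluation to the program’s lifecycle stage concrete, a minimal sketch follows. The phase names and the phase-to-approach mapping below are illustrative assumptions loosely adapted from the evolutionary evaluation literature, not definitions taken from this study.

```python
# Toy sketch of the EE principle that the chosen evaluation approach should
# match the program's current lifecycle phase. The phase names and the
# mapping are illustrative assumptions, not taken from this study.
PHASE_TO_APPROACH = {
    "initiation":    "process/implementation evaluation",
    "development":   "outcome evaluation",
    "stability":     "comparative-effectiveness evaluation",
    "dissemination": "generalizability evaluation across contexts",
}

def select_evaluation_approach(phase: str) -> str:
    """Return an evaluation approach suited to the program's lifecycle phase."""
    if phase not in PHASE_TO_APPROACH:
        raise ValueError(f"Unknown program phase: {phase!r}")
    return PHASE_TO_APPROACH[phase]

print(select_evaluation_approach("development"))  # -> outcome evaluation
```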
ECB is a dynamic and intentional process for improving individual evaluation motivation, knowledge, skills, and attitudes; strengthening a team’s ability to conduct evaluations; continually optimizing evaluation quality; and making evaluation routine (Buckley et al., 2015; Clinton, 2014; Labin et al., 2012). This study analyzed three evaluative domains: evaluative thinking (ET), cultural competency (CC), and evaluative capacity (EC). Buckley et al. (2015) observe that ET, which requires continual analysis and rethinking, is the core of ECB; without ET, reviews stagnate, fail to capture essential qualities, and lead to inappropriate or incorrect corrections. CC refers to the position and attitude of the evaluator toward the culture of the organization; it recognizes that every decision made during the evaluation process reflects the evaluator’s values, from the statement of the evaluation’s purpose to the collection and analysis of data and the interpretation and application of the results (Centers for Disease Control and Prevention [CDC], 2014; Frierson et al., 2002). EC refers to the evaluator’s basic competence in identifying stakeholders; establishing the intention, purpose, problem, and scope of the evaluation; selecting or developing assessment tools; collecting and analyzing evaluation data; and using evaluation results to improve a program (Frierson et al., 2002).
Although several studies have discussed the implementation of ECB, empirical research on this topic is limited (Wandersman, 2014). Moreover, only two empirical studies have applied the RSE model to ECB (Chauveron et al., 2021; Urban, Linver et al., 2021); both indicated that RSE is an effective model for enhancing participants’ ET and EC. However, no studies applying RSE have been conducted in Taiwan. This study had two primary goals: (1) to implement an RSE-ECB model to improve reforms of Taiwan’s 12-year compulsory education system, and (2) to translate the theoretical insights from these models into practical steps and programs through a series of workshops and self-directed learning modules. Through this process, the study expands the literature on ECB by exploring its application to the unique cultural landscape of Taiwan’s compulsory education reform.
In ECB, the evaluation skills individuals acquire must be implemented in their workplace with the support of leaders in their organization (Chauveron et al., 2021; Labin et al., 2012); for teachers, these leaders are their principals or department heads. Therefore, this study recruited eight teachers in the disciplines of science and technology, the arts, and the humanities from one rural and one urban junior high school, along with the principals of both schools. The study enabled these stakeholders to collaborate in employing ECB in the design, implementation, and assessment of a program of curriculum reform grounded in RSE theories.
The secondary goals of this study were to (1) analyze participant responses to determine the effectiveness of the ECB program; (2) analyze changes in participants’ ET, CC, and EC; (3) analyze participants’ application of RSE perspectives in developing school-wide curriculum evaluation measures; and (4) analyze participants’ views on the RSE model. The study adopted a mixed-methods design, collecting ET, CC, and EC scale scores together with data from ECB workshop satisfaction questionnaires, feedback sheets, e-portfolio professional growth files, and interview records. The participants provided positive feedback about the ECB workshops, and their ET, CC, and EC scores improved significantly. The school-wide curriculum evaluation measures they developed reflected RSE perspectives and the unique demands of each school’s curriculum. According to the participants, compared with the traditional linear evaluation model, which incorporated feedback only at the end of the school year, the RSE model reflected the essential objectives of the curriculum and enabled real-time feedback for program revisions. Furthermore, the RSE model helped practitioners determine whether the curriculum met the needs of students and the community. In short, the RSE model provided a useful and comprehensible curriculum evaluation method for teachers and principals, although its implementation was not without challenges.
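As an illustration of the quantitative strand of such a mixed-methods design, the sketch below shows one conventional way to test whether pre/post scale scores improved significantly. The score values are fabricated placeholders, and the choice of a paired-samples t-test is an assumption for illustration, not the authors’ reported analysis.

```python
# Minimal sketch: paired pre/post comparison of an evaluative-competency
# scale (e.g., ET), one score pair per participant (n = 8). All values
# are hypothetical placeholders, not data from the study.
from scipy import stats

pre  = [3.1, 2.8, 3.4, 3.0, 2.6, 3.2, 2.9, 3.3]  # pre-workshop mean scores
post = [3.8, 3.5, 3.9, 3.6, 3.2, 4.0, 3.4, 3.7]  # post-workshop mean scores

# Paired-samples t-test: did scores change significantly from pre to post?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```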
Keywords: cultural competency, evaluative thinking, evaluative capacity, evaluative capacity building, relational systems evaluation
References:
- 林素卿、葉順宜(2014)。檔案評量於國中英語教學應用之個案研究。教育科學研究期刊,59(2),111-139。https://doi.org/10.6209/JORIES.2014.59(2).05【Lin, S.-C., & Yeh, S.-I. (2014). Applying a portfolio assessment of English teaching in a junior high school: A case study. Journal of Research in Education Sciences, 59(2), 111-139. https://doi.org/10.6209/JORIES.2014.59(2).05】
- American Evaluation Association. (2011). American Evaluation Association statement on cultural competence in evaluation. https://www.eval.org/Community/Volunteer/Statement-on-Cultural-Competence-in-Evaluation
- Archibald, T. (2021). The role of evaluative thinking in the teaching of evaluation. Canadian Journal of Program Evaluation, 35(3), 310-319.
- Bollinger, R. (2021). A view from the outside: Reflections on relational systems evaluation. New Directions for Evaluation, 169, 117-123. https://doi.org/10.1002/ev.20450
- Bornstein, M. H. (2006). Parenting science and practice. In K. A. Renninger, I. E. Sigel, & W. Damon (Eds.), Handbook of child psychology: Child psychology in practice (Vol. 4, 6th ed., pp. 893-949). Wiley.
- Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. American Journal of Evaluation, 36(3), 375-388. https://doi.org/10.1177/1098214015581706
- Buckley, J., Hargraves, M., & Moorman, L. (2021). The relational nature of evaluation capacity building: Lessons from facilitated evaluation partnerships. New Directions for Evaluation, 169, 47-64. https://doi.org/10.1002/ev.20445
- Casillas, W. D., & Trochim, W. M. (2015). A systems approach to culturally responsive evaluation practice. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 29-48). Information Age.
- Centers for Disease Control and Prevention. (2014). Practical strategies for culturally competent evaluation. https://www.cdc.gov/dhdsp/docs/cultural_competence_guide.pdf
- Central Vancouver Island Multicultural Society. (2015). Cultural competence self-assessment checklist. https://www.cvims.org/resources/cultural-competency/
- Chauveron, L. M., Urban, J. B., Samtani, S., Cox, M., Moorman, L., Hargraves, M., Buckley, J., & Linver, M. R. (2021). Promoting evaluation in youth character development through enhanced evaluation capacity building: Empirical findings from the PACE project. New Directions for Evaluation, 169, 79-95. https://doi.org/10.1002/ev.20447
- Clinton, J. (2014). The true impact of evaluation: Motivation for ECB. American Journal of Evaluation, 35(1), 120-127. https://doi.org/10.1177/1098214013499602
- Frierson, H. T., Hood, S., & Hughes, G. B. (2002). A guide to conducting culturally responsive evaluations. https://www.nsf.gov/pubs/2002/nsf02057/nsf02057_5.pdf
- Guskey, T. R. (2000). Evaluating professional development. Corwin Press.
- Hargraves, M., Buckley, J., Urban, J. B., Linver, M. R., Chauveron, L. M., Samtani, S., Archibald, T., & Moorman, L. (2021). Resonance, stickiness, and the value propositions of evaluation capacity building: Key takeaways and future directions. New Directions for Evaluation, 169, 97-116. https://doi.org/10.1002/ev.20442
- Hopson, R. (2009). Reclaiming knowledge at the margins: Culturally responsive evaluation in the current evaluation moment. In K. E. Ryan & J. B. Cousins (Eds.), The Sage international handbook of educational evaluation (pp. 429-446). Sage.
- Labin, S. N., Duffy, J. L., Meyers, D. C., Wandersman, A., & Lesesne, C. A. (2012). A research synthesis of the evaluation capacity building literature. American Journal of Evaluation, 33(3), 307-338. https://doi.org/10.1177/1098214011434608
- McIntosh, J. S., Buckley, J., & Archibald, T. (2020). Refining and measuring the construct of evaluative thinking: An exploratory factor analysis of the evaluative thinking inventory. Journal of MultiDisciplinary Evaluation, 16(34), 104-117. https://doi.org/10.56645/jmde.v16i34.591
- Morariu, J. (2012). Evaluation capacity building: Examples and lessons from the field. http://www.pointk.org/client_docs/tear_sheet_ecb-innovation_network.pdf
- Patton, M. Q. (2005). In conversation: Michael Quinn Patton. Interview with Lisa Waldick, from the International Development Research Centre. http://www.idrc.ca/en/ev-30442-201-1-DO_TOPIC.html
- Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443-459. https://doi.org/10.1177/1098214008324182
- Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Sage.
- Schön, D. (1983). The reflective practitioner: How professionals think in action. Basic Books.
- Schwandt, T. A. (2008). Educating for intelligent belief in evaluation. American Journal of Evaluation, 29, 139-150. https://doi.org/10.1177/1098214008316889
- Suarez-Balcazar, Y., & Taylor-Ritzler, T. (2014). Moving from science to practice in evaluation capacity building. American Journal of Evaluation, 35(1), 95-99. https://doi.org/10.1177/1098214013499440
- Taylor-Ritzler, T., Suarez-Balcazar, Y., Garcia-Iriarte, E., Henry, D. B., & Balcazar, F. E. (2013). Understanding and measuring evaluation capacity: A model and instrument validation study. American Journal of Evaluation, 34(2), 190-206. https://doi.org/10.1177/1098214012471421
- Thomas, V. G., & Parsons, B. A. (2017). Culturally responsive evaluation meets systems-oriented evaluation. American Journal of Evaluation, 38(1), 7-28. https://doi.org/10.1177/1098214016644069
- Trochim, W. M., & Urban, J. B. (2021). Theoretical foundations and philosophical orientation of relational systems evaluation. New Directions for Evaluation, 169, 19-30. https://doi.org/10.1002/ev.20449
- Urban, J. B., & Trochim, W. M. (2009). The role of evaluation in research-practice integration: Working toward the “Golden Spike”. American Journal of Evaluation, 30(4), 538-553. https://doi.org/10.1177/1098214009348327
- Urban, J. B., Archibald, T., Hargraves, M., Buckley, J., Hebbard, C., Linver, M. R., & Trochim, W. M. (2021). Introduction to relational systems evaluation. New Directions for Evaluation, 169, 11-18. https://doi.org/10.1002/ev.20444
- Urban, J. B., Hargraves, M., Buckley, J., Archibald, T., Hebbard, C., & Trochim, W. M. (2021). The systems evaluation protocol for evaluation planning. New Directions for Evaluation, 169, 31-45. https://doi.org/10.1002/ev.20443
- Urban, J. B., Linver, M. R., Chauveron, L. M., Archibald, T., Hargraves, M., & Buckley, J. (2021). Applying the systems evaluation protocol in the real world: Six case studies. New Directions for Evaluation, 169, 65-77. https://doi.org/10.1002/ev.20448
- Vo, A. T., & Archibald, T. (2018). New directions for evaluative thinking. New Directions for Evaluation, 158, 139-147. https://doi.org/10.1002/ev.20317
- Wandersman, A. (2014). Getting to outcomes: An evaluation capacity building example of rationale, science, and practice. American Journal of Evaluation, 35(1), 100-106. https://doi.org/10.1177/1098214013500705
- Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-33. https://doi.org/10.1016/S1098-2140(99)80178-7