
Evaluability Assessment: A Practical Approach PDF

230 pages · 1989 · 18.787 MB · English

Preview Evaluability Assessment: A Practical Approach

Evaluability Assessment

Evaluation in Education and Human Services series

Editors:
George F. Madaus, Boston College, Chestnut Hill, Massachusetts, U.S.A.
Daniel L. Stufflebeam, Western Michigan University, Kalamazoo, Michigan, U.S.A.

Previously published books in the series:

1. Kelleghan, T., Madaus, G., and Airasian, P.: THE EFFECTS OF STANDARDIZED TESTING
2. Madaus, G. (editor): THE COURTS, VALIDITY, AND MINIMUM COMPETENCY TESTING
3. Brinkerhoff, R., Brethower, D., Hluchyj, T., and Nowakowski, J.: PROGRAM EVALUATION, SOURCEBOOK/CASEBOOK
4. Brinkerhoff, R., Brethower, D., Hluchyj, T., and Nowakowski, J.: PROGRAM EVALUATION, SOURCEBOOK
5. Brinkerhoff, R., Brethower, D., Hluchyj, T., and Nowakowski, J.: PROGRAM EVALUATION, DESIGN MANUAL
6. Madaus, G., Scriven, M., and Stufflebeam, D.: EVALUATION MODELS: VIEWPOINTS ON EDUCATIONAL AND HUMAN SERVICES EVALUATION
7. Hambleton, R., and Swaminathan, H.: ITEM RESPONSE THEORY
8. Stufflebeam, D., and Shinkfield, A.: SYSTEMATIC EVALUATION
9. Nowakowski, J.: HANDBOOK OF EDUCATIONAL VARIABLES: A GUIDE TO EVALUATION
10. Stufflebeam, D.: CONDUCTING EDUCATIONAL NEEDS ASSESSMENTS
11. Cooley, W. and Bickel, W.: DECISION-ORIENTED EDUCATIONAL RESEARCH
12. Gable, R.: INSTRUMENT DEVELOPMENT IN THE AFFECTIVE DOMAIN
13. Sirotnik, K. and Oakes, J.: CRITICAL PERSPECTIVES ON THE ORGANIZATION AND IMPROVEMENT OF SCHOOLING
14. Wick, J.: SCHOOL-BASED EVALUATION: A GUIDE FOR BOARD MEMBERS, SUPERINTENDENTS, PRINCIPALS, DEPARTMENT HEADS, AND TEACHERS
15. Worthen, B. and White, K.: EVALUATING EDUCATIONAL AND SOCIAL PROGRAMS
16. McArthur, D.: ALTERNATIVE APPROACHES TO THE ASSESSMENT OF ACHIEVEMENT
17. May, L., Moore, C., and Zammit, S.: EVALUATING BUSINESS AND INDUSTRY TRAINING
18. Abrahamson, S.: EVALUATION OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS
19. Glasman, N. and Nevo, D.: EVALUATION IN DECISION MAKING: THE CASE OF SCHOOL ADMINISTRATION
20. Gephart, W. and Ayers, J.: TEACHER EDUCATION EVALUATION
21. Madaus, G. and Stufflebeam, D.: EDUCATIONAL EVALUATION: CLASSIC WORKS OF RALPH W. TYLER
22. Gifford, B.: TEST POLICY AND THE POLITICS OF OPPORTUNITY ALLOCATION: THE WORKPLACE AND THE LAW
23. Gifford, B.: TEST POLICY AND TEST PERFORMANCE
24. Mertens, D.: CREATIVE IDEAS FOR TEACHING EVALUATION
25. Osterlind, S.: CONSTRUCTING TEST ITEMS

Evaluability Assessment: A Practical Approach
M. F. Smith
with preface by George Mayeske
Springer Science+Business Media, LLC

Library of Congress Cataloging-in-Publication Data
Smith, M. F.
Evaluability assessment.
(Evaluation in education and human services)
Bibliography: p.
1. Evaluation research (Social action programs) I. Title. II. Series.
H62.S586 1989 361.2'072 89-15433
ISBN 978-90-481-5782-2
ISBN 978-94-015-7827-1 (eBook)
DOI 10.1007/978-94-015-7827-1

Copyright © 1989 by Springer Science+Business Media New York. Originally published by Kluwer Academic Publishers in 1989. Softcover reprint of the hardcover 1st edition 1989. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher, Springer Science+Business Media, LLC.

Contents

Foreword ix
Preface (by George W. Mayeske) xiii
1 Introduction 1
   Definition of Terms 4
   Organization of Remaining Chapters and Appendices 6
2 Evaluability Assessment: Overview of Process 11
   Definition 11
   Historical Perspective 13
   Expected Outcomes 15
   Implementation Steps 26
3 Determine Purpose, Secure Commitment, and Identify Work Group Members 31
   Role of the EA Task Team 33
   Team Characteristics and Dynamics 33
   Observations from the CES Studies 36
4 Define Boundaries of Program to be Studied 39
   What is a Program? 39
5 Identify and Analyze Program Documents 45
6 Develop/Clarify Program Theory 49
   Program Theory and Causality 49
   Theory and Models 50
   Model Format 53
   Constructing Theory Models 61
7 Identify and Interview Stakeholders 81
   Definition of a Stakeholder 82
   Rationale for Stakeholder Input 83
   Identify Stakeholders 85
   Prepare Interview Questions 88
   Select Interviewer(s) 90
   Conduct Interviews 92
   Insure that Data are Valid and Useful 93
8 Describe Stakeholder Perceptions of Program 99
   Analyze and Summarize Stakeholder Interviews 99
   Examples from CES Case Studies 104
   Computerized Coding 107
9 Identify Stakeholder Needs, Concerns, and Differences in Perceptions 109
   Is there Agreement in General about the Overall Intent of the Program? 109
   Is there Agreement about who is to be Served by the Program? 111
   Is there Agreement about Support Activities and/or Resources? 112
   Define Stakeholder Evaluation Needs 113
10 Determine Plausibility of Program Model 115
   Are Overall Goals Well Defined? 116
   Are Components/Activities Well Defined? 116
   Are Components/Activities Sufficient? 117
   Are Resources Adequate? 118
11 Draw Conclusions and Make Recommendations 125
   Should Conclusions be Drawn and Recommendations Made? 125
   Who Should Draw Conclusions and Make Recommendations? 126
   What Should Conclusions and Recommendations be Made About? 127
   How can Conclusions and Recommendations be Justified? 127
   Evaluability Assessment Data 128
   Potential Validity Threats 130
   Guidelines for Conclusions and Recommendations 133
12 Plan Specific Steps for Utilization of EA Data 135
   1. Decide to Evaluate 137
   2. Decide to Change the Program 139
   3. Decide to take no Further Program Action 140
   4. Decide to Stop the Program 141
   5. Do not Decide; Ignore the EA 141
13 Observations about the Process 143
   EA is based on an Underlying Assumption of Rationality, and the World is not so Rational 144
   The EA Process Emphasizes Means Rather than Ends 145
   The Development of Multiple Models of an Ongoing Program is Hard to do and Harder still to Justify 147
   There are many Perceptions of "Reality" and Program Implementors and EA Implementors often Confuse Them 147
   Evaluators may Lose (the Appearance of) Program Objectivity in an EA 149
   Other Observations 149
   In Conclusion 151

Appendix One: The Cooperative Extension System 153
Appendix Two: Evaluability Assessment of the 4-H Youth Program, Maryland Cooperative Extension Service (M. F. Smith) 155
Appendix Three: Evaluability Assessment of the Local Government Officials Program, Illinois Cooperative Extension Service (Charles D. Clark and M. F. Smith) 193
Appendix Four: Evaluability Assessment of the Master Gardener Program, California Cooperative Extension (Dennis Pittenger, James J. Grieshop, and M. F. Smith) 203

References 209
Index 217

Foreword

My interest in and appreciation for program evaluation began in the early 1970s when conducting a curriculum development research project at the University of Florida's P. K. Yonge Laboratory School.
This interest was sparked when it became apparent that testing the success of an education program required more skills than just statistics and research methods. After pursuing additional formal schooling, I embarked on a career featuring educational program evaluation as its central thrust--as a private consultant, later in a university health sciences center involving seven academic colleges, and then in the Cooperative Extension Services of Florida and Maryland. Adding evaluability assessment (EA) to the performance of evaluations, to program development, and to teaching about evaluation has been a significant development for me personally, and, I hope, for those who have been participants with me in each endeavor.

This book grew out of many of these experiences and involved numerous colleagues who made significant contributions. First among these is Dr. George Mayeske, Program Evaluation Specialist, Extension Service, U.S. Department of Agriculture, Washington, D.C., who (1) initiated the project (which took us to all the case-study sites) with approval for the federal portion of the funds after the two of us had worked together at the University of Florida's Winter Institute on Evaluating Cooperative Extension Programs; (2) teamed with me in site visits to the states where the process was field tested; and (3) listened attentively to my continuous "out loud" thinking and challenged me to dig deeper. Quite often his challenges were in the form of silence, which would force me to question myself. I sometimes thought George already knew everything I was struggling to piece together and was just letting me learn by the discovery method! I am deeply grateful to George. This book would not have happened without his interest in EA and his initiation of the project on which it is primarily based.

Next are all the diligent professionals at the field-test sites who were willing to take a risk with us on a project in which--at the beginning--we could not explain exactly what had to be done or what was in it for them. I am still amazed that they took on the tasks and worked so hard. Their efforts, their reactions, and their constant challenges of the "theory" furthered the process development. Some of the people who come to mind most readily are Dr. Charley Clark, Program Evaluation Specialist in Cooperative Extension and leader of the EA project in Illinois, who contributed beyond what was required to get that case study completed and the report written. Charley became interested in the process as well as the Illinois product. He contributed unselfishly of his time and his ideas, both through the Illinois EA and later in a six-month sabbatical with me at Maryland. Dr. Doris Smith, Assistant Director of Cooperative Extension, University of California, was the impetus for the selection of the second EA implemented in this project and contributed mightily to its success. Dr. Dennis Pittenger at UC Riverside and Dr. James Grieshop at UC Davis made sure all the tasks were performed, kept the CA master gardener EA on schedule, and wrote the final report. Their insight about the process and their diligence in getting it completed are appreciated.

So many others contributed that it is impossible to mention each one's special effort, e.g., Dr. Don Steinbach, Project Supervisor, Wildlife and Fisheries, Texas Agricultural Extension Service, Texas A&M University, and all the others involved in the aquaculture study;
Dr. Lynda Harriman, Assistant Director, Oklahoma Cooperative Extension Service, Stillwater, and all the Resource Management faculty who participated in the home-based business EA; and the three county faculty in Maryland who worked through the 4-H study: Robert Shirley, Carroll County; Ava Baker, Baltimore County; and Hope Jackson, Howard County.

Special appreciation is extended to Dr. Dick Weismiller, Department of Agronomy; Dr. Nan Booth, Community Resource Development; Dr. Bill Magette, Department of Agricultural Engineering; Ms. Ruth Miller, Home Economics Agent in Calvert County; and Mr. Reg Traband, Agricultural Agent in Harford County, all at the University of Maryland; and Dr. Margaret Ordonez, formerly Department of Home Economics, University of Maryland, now at the University of Rhode Island. These six individuals and I carried out the water resources EA. Without any reservation, they have been the best team with which I have ever worked. They trusted me and each other, worked hard--often doing tasks to which they were not at first committed--and have since become spokespersons for the product we produced and the EA process.

Dr. Craig Oliver, Director of Maryland Cooperative Extension Service and Associate Vice Chancellor for Agricultural Affairs, University of Maryland System, has been supportive of the two EAs done in Maryland (4-H youth and water programs) AND supportive of my time spent in developing the process both here and in the other four states. I could not have proceeded without this support.

Dr. Michael Quinn Patton, author of many books on evaluation, Past President of the American Evaluation Association, and Leader of Extension programs in the Caribbean for the University of Minnesota, critiqued a previous draft of this book. His suggestions were very helpful. Dr. Leonard Rutman, Partner in the Price Waterhouse Management Consultants firm in Canada and author of many books on evaluation--one on evaluability assessment--provided the last review. He made many constructive and very useful comments. This book is much better because of Rutman's and Patton's reviews. I appreciate their help, and their encouragement to proceed was reassuring.

I am also grateful to my students, who provide an always-changing source of stimulation. Greg Corman fits in this category; he also assisted with the layout of most of the models in this book.
