Description
Table of Contents
- Preface
- Intended Audience
- Scope
- Need for Program Evaluation
- Handbook Organization
- Acknowledgments
- The Editors
- The Contributors
- PART ONE Evaluation Planning and Design
- The Chapters
- CHAPTER ONE Planning and Designing Useful Evaluations
- Matching the Evaluation Approach to Information Needs
- Supporting Causal Inferences
- Planning a Responsive and Useful Evaluation
- Using Evaluation Information
- Glossary
- References
- CHAPTER TWO Analyzing and Engaging Stakeholders
- Understanding Who Is a Stakeholder—Especially a Key Stakeholder
- Identifying and Working with Primary Intended Users
- Using Stakeholder Identification and Analysis Techniques
- Dealing with Power Differentials
- Determining the Evaluation’s Purpose and Goals
- Engaging Stakeholders
- Meeting the Challenges of Turbulent and Uncertain Environments
- Conclusion
- References
- CHAPTER THREE Using Logic Models
- What Is a Logic Model?
- The Utility of Logic Models
- Theory-Driven Evaluation
- Building the Logic Model
- Conclusion
- References
- CHAPTER FOUR Exploratory Evaluation
- Evaluability Assessment Assesses a Program’s Readiness for Evaluation
- Rapid Feedback Evaluation Produces Tested Evaluation Designs
- Evaluation Synthesis Summarizes What Is Known About Program Performance
- Small-Sample Studies May Be Useful in Vetting Performance Measures
- Selecting an Exploratory Evaluation Approach
- Conclusion
- References
- CHAPTER FIVE Performance Measurement
- Performance Measurement and Program Evaluation
- Measurement Systems
- Identifying, Operationalizing, and Assessing Performance Measures
- Converting Performance Data to Information
- Presenting and Analyzing Performance Data
- Current Challenges to Performance Measurement
- Conclusion: The Outlook
- References
- CHAPTER SIX Comparison Group Designs
- Introduction to Causal Theory for Impact Evaluation
- Comparison Group Designs
- Conclusion
- References
- Further Reading
- CHAPTER SEVEN Randomized Controlled Trials
- History of RCTs
- Why Randomize?
- Trial Design
- Conclusion
- References
- CHAPTER EIGHT Conducting Case Studies
- What Are Case Studies?
- Designing Case Studies
- Conducting Case Studies
- Analyzing the Data
- Preparing the Report
- Avoiding Common Pitfalls
- Conclusion
- References
- CHAPTER NINE Recruitment and Retention of Study Participants
- Planning for Recruitment and Retention
- Institutional Review Boards and the Office of Management and Budget
- Recruitment and Retention Staffing
- Implementing Recruitment and Retention
- Monitoring Recruitment and Retention Progress
- Cultural Considerations
- Conclusion
- References
- CHAPTER TEN Designing, Managing, and Analyzing Multisite Evaluations
- Defining the Multisite Evaluation
- Advantages and Disadvantages of Multisite Evaluations
- Multisite Approaches and Designs
- Strategies for Multisite Data Collection
- Assessing Multisite Interventions
- Monitoring Multisite Implementation
- Quality Control in MSEs
- Data Management
- Quantitative Analysis Strategies
- Qualitative Analysis Strategies
- Telling the Story
- Final Tips for the MSE Evaluator
- References
- CHAPTER ELEVEN Evaluating Community Change Programs
- Defining Community Change Interventions
- Challenges
- Guidance for Evaluators and Practitioners
- Conclusion
- References
- Further Reading
- CHAPTER TWELVE Culturally Responsive Evaluation
- What Is CRE?
- Pioneers in the Foundations of CRE
- From CRE Theory to CRE Practice
- Case Applications of CRE Theory and Practice
- Implications for the Profession
- Conclusion
- Notes
- References
- PART TWO Practical Data Collection Procedures
- The Chapters
- Other Data Collection Considerations
- CHAPTER THIRTEEN Using Agency Records
- Potential Problems and Their Alleviation
- Data Quality Control Processes
- Other Suggestions for Obtaining Data from Agency Records
- Conclusion
- Note
- References
- CHAPTER FOURTEEN Using Surveys
- Planning the Survey
- Select the Sample
- Design the Survey Instrument
- Collect Data from Respondents
- Prepare Data for Analysis
- Present Survey Findings
- Conclusion
- References
- CHAPTER FIFTEEN Role Playing
- What Is Role Playing?
- Diversity of Uses
- Sampling
- Data Collection Instruments
- Recruiting, Selecting, and Training Role Players
- Implementing Role Playing
- Practical Problems (and Solutions)
- Statistical Analysis
- Expanding Applications for Role Playing
- Ethical and Legal Issues
- Limitations of Role Playing
- Conclusion
- Notes
- References
- CHAPTER SIXTEEN Using Ratings by Trained Observers
- Uses for Trained Observer Ratings
- Is a Trained Observer Method Appropriate for Your Needs?
- What You Will Need to Start
- Decisions About Ratings and Sampling
- Examples of Trained Observer Programs
- Presenting Findings for Trained Observations
- Quality Control
- Using Technology or Paper?
- Benefits of the Trained Observer Approach
- Conclusion
- Notes
- References
- CHAPTER SEVENTEEN Collecting Data in the Field
- Objectives of Field Studies
- Design Issues
- Field Visit Protocol
- Data Maintenance and Analysis
- Conclusion
- References
- Further Reading
- CHAPTER EIGHTEEN Using the Internet
- Using the Internet for Literature Reviews
- Conducting Surveys on the Internet
- Putting Your Program Evaluation on the Web
- References
- Further Reading
- CHAPTER NINETEEN Conducting Semi-Structured Interviews
- Disadvantages and Advantages of SSIs
- Designing and Conducting SSIs
- Polishing Interview Techniques
- Analyzing and Reporting SSIs
- References
- CHAPTER TWENTY Focus Group Interviewing
- Examples of Focus Group Use
- Characteristics of Focus Group Interviews
- Responsibilities
- Planning
- Developing Questions
- Recruiting
- Moderating
- Analysis
- Addressing Challenges in Focus Group Interviews
- Conclusion
- Reference
- Further Reading
- CHAPTER TWENTY-ONE Using Stories in Evaluation
- How Stories Enrich Evaluations
- A Definition of an Evaluation Story
- How Stories Can Be Used in Evaluation Studies
- An Overview of Critical Steps
- Strategies of Expert Storytellers: Presenting the Story Effectively
- Challenges in Using Stories and How to Manage Them
- A Final Thought
- Conclusion
- References
- Further Reading
- PART THREE Data Analysis
- The Chapters
- CHAPTER TWENTY-TWO Qualitative Data Analysis
- Types of Evaluation and Analytic Purpose
- Application
- Application
- Application
- Application
- Framing Analytic Choices
- Program Evaluation Standards and Quality Criteria for QDA
- Conclusion
- References
- CHAPTER TWENTY-THREE Using Statistics in Evaluation
- Descriptive Statistics: Simple Measures Used in Evaluations
- Inferential Statistics: From Samples to Populations
- Selecting Appropriate Statistics
- Reporting Statistics Appropriately
- Reporting Statistical Results to High-Level Public Officials
- Conclusion
- Appendix 23A: An Application of the Chi-Square Statistic Calculated with SPSS
- Appendix 23B: An Application of the t Test
- References
- Further Reading
- Textbooks
- Special Topics
- Statistical Software
- CHAPTER TWENTY-FOUR Cost-Effectiveness and Cost-Benefit Analysis
- Step 1: Set the Framework for the Analysis
- Step 2: Decide Whose Costs and Benefits Should Be Recognized
- Step 3: Identify and Categorize Costs and Benefits
- Step 4: Project Costs and Benefits Over the Life of the Program, If Applicable
- Step 5: Monetize (Put a Dollar Value on) Costs
- Costs to the Private Sector
- Costs to Participants and Volunteers
- Step 6: Quantify (for CEA) and Monetize (for CBA) Benefits
- Step 7: Discount Costs and Benefits to Obtain Present Values
- Step 8: Compute Cost-Effectiveness Ratio (for CEA) or Net Present Value (for CBA)
- Step 9: Perform Sensitivity Analysis
- Step 10: Make a Recommendation
- Conclusion
- Notes
- References
- CHAPTER TWENTY-FIVE Meta-Analyses, Systematic Reviews, and Evaluation Syntheses
- Why Be Conscientious in Reviewing Studies of Intervention Effects?
- How Are the Best Approaches to Systematic Reviews Employed?
- What Resources Can Be Employed to Do the Job Well?
- To What End? Value Added and Usefulness
- Conclusion
- Note
- References
- PART FOUR Use of Evaluation
- The Chapters
- CHAPTER TWENTY-SIX Pitfalls in Evaluations
- Pitfalls Before Data Collection Begins
- Pitfalls During Data Collection
- Pitfalls After Data Collection
- Conclusion
- Note
- References
- CHAPTER TWENTY-SEVEN Providing Recommendations, Suggestions, and Options for Improvement
- But First, an Important Distinction
- When to Make Recommendations
- Hallmarks of Effective Recommendations
- General Strategies for Developing Recommendations
- Reference
- CHAPTER TWENTY-EIGHT Writing for Impact
- The Message
- The Audience
- The Medium
- Conclusion
- Reference
- CHAPTER TWENTY-NINE Contracting for Evaluation Products and Services
- Creating a Feasible, Approved Concept Plan
- Developing a Well-Defined Request for Proposal
- Selecting a Well-Qualified Evaluation Contractor
- Constructively Monitoring Interim Progress
- Assuring Product Quality and Usefulness
- Conclusion
- Reference
- CHAPTER THIRTY Use of Evaluation in Government
- Use of Evaluation in Government
- Political and Bureaucratic Challenges Affecting Use of Evaluation
- Overcoming Political and Bureaucratic Challenges
- Conclusion
- References
- CHAPTER THIRTY-ONE Evaluation Challenges, Issues, and Trends
- Challenge 1: Controlling the Quality of the Evaluation Process
- Challenge 2: Selecting and Training Evaluators
- Challenge 3: Maintaining Standards and Ethics
- Challenge 4: Using Evaluation Findings to Improve Programs
- The Relationship Between Performance Monitoring and Evaluation
- Trends in Program Evaluation
- Final Thoughts
- References
- NAME INDEX
- SUBJECT INDEX
- EULA