Handbook of Practical Program Evaluation

Author: Kathryn Newcomer

Publisher: Wiley Professional Development (P&T)

Format: Page Fidelity

Print ISBN: 9781119171386

Edition: 4

Publication year: 2015

Price: ISK 9,490

Description

Table of Contents

  • Handbook of Practical Program Evaluation
  • Contents
  • Figures, Tables, and Exhibits
  • Figures
  • Tables
  • Exhibits
  • Preface
  • Intended Audience
  • Scope
  • Need for Program Evaluation
  • Handbook Organization
  • Acknowledgments
  • The Editors
  • The Contributors
  • PART ONE Evaluation Planning and Design
  • The Chapters
  • CHAPTER ONE PLANNING AND DESIGNING USEFUL EVALUATIONS
  • Matching the Evaluation Approach to Information Needs
  • Select Programs to Evaluate
  • Select the Type of Evaluation
  • Identify Contextual Elements That May Affect Evaluation Conduct and Use
  • Produce the Methodological Rigor Needed to Support Credible Findings
  • Choose Appropriate Measures
  • Choose Reliable Ways to Obtain the Chosen Measures
  • Supporting Causal Inferences
  • Internal Validity
  • Generalizability
  • Statistical Conclusion Validity
  • Reporting
  • Planning a Responsive and Useful Evaluation
  • Planning Evaluation Processes
  • Data Collection
  • Data Analysis
  • Using Evaluation Information
  • Glossary
  • References
  • CHAPTER TWO ANALYZING AND ENGAGING STAKEHOLDERS
  • Understanding Who Is a Stakeholder—Especially a Key Stakeholder
  • Identifying and Working with Primary Intended Users
  • 1. Develop Facilitation Skills
  • 2. Find and Train Evaluation Information Users
  • 3. Find Tipping Point Connectors
  • 4. Facilitate High-Quality Interactions
  • 5. Nurture Interest in Evaluation
  • 6. Demonstrate Cultural Sensitivity and Competence
  • 7. Anticipate Turnover of Intended Users
  • Using Stakeholder Identification and Analysis Techniques
  • Conducting Basic Stakeholder Identification and Analysis
  • Choosing Evaluation Stakeholder Analysis Participants
  • Creating a Purpose Network Diagram
  • Dealing with Power Differentials
  • Power Versus Interest Grid
  • Stakeholder Influence Diagram
  • Bases of Power–Directions of Interest Diagram
  • Determining the Evaluation's Purpose and Goals
  • Engaging Stakeholders
  • Meeting the Challenges of Turbulent and Uncertain Environments
  • Conclusion
  • References
  • CHAPTER THREE USING LOGIC MODELS
  • What Is a Logic Model?
  • The Utility of Logic Models
  • Theory-Driven Evaluation
  • Building the Logic Model
  • Stage 1: Collecting the Relevant Information
  • Stage 2: Clearly Defining the Problem and Its Context
  • Stage 3: Defining the Elements of the Program in a Table: Early Sense Making
  • Stage 4: Drawing the Logic Model to Reveal the Program's Theory of Change
  • Stage 5: Verifying the Program Logic Model with Stakeholders
  • Conclusion
  • References
  • CHAPTER FOUR EXPLORATORY EVALUATION
  • Evaluability Assessment Assesses a Program's Readiness for Evaluation
  • The Evaluability Assessment Process
  • Issues, Problems, and Potential Solutions
  • Significance
  • Rapid Feedback Evaluation Produces Tested Evaluation Designs
  • The Rapid Feedback Evaluation Process
  • Issues, Problems, and Potential Solutions
  • Significance
  • Evaluation Synthesis Summarizes What Is Known About Program Performance
  • Small-Sample Studies May Be Useful in Vetting Performance Measures
  • Selecting an Exploratory Evaluation Approach
  • Conclusion
  • References
  • CHAPTER FIVE PERFORMANCE MEASUREMENT: Monitoring Program Outcomes
  • Performance Measurement and Program Evaluation
  • Measurement Systems
  • Outcomes and Other Types of Performance Measures
  • Identifying, Operationalizing, and Assessing Performance Measures
  • Data Sources
  • Criteria for Good Performance Measures
  • Quality Assurance
  • Converting Performance Data to Information
  • Trends Over Time
  • Actual Performance Versus Targets
  • Comparisons Among Units
  • Other Breakouts
  • External Benchmarking
  • Presenting and Analyzing Performance Data
  • Current Challenges to Performance Measurement
  • Using Performance Data to Improve Performance
  • Implementing Performance Measures in Networked Environments
  • Conclusion: The Outlook
  • References
  • CHAPTER SIX COMPARISON GROUP DESIGNS
  • Introduction to Causal Theory for Impact Evaluation
  • Comparison Group Designs
  • 1. Naïve Design
  • 2. Basic Value-Added Design: Regression Adjusted for a Preprogram Measure
  • 3. Regression-Adjusted Covariate Design
  • 4. Value-Added Design Adjusted for Additional Covariates
  • 5. Interrupted Time-Series Designs
  • 6. Fixed-Effect Designs for Longitudinal Evaluations
  • 7. Matching Designs
  • 8. Regression Discontinuity Designs
  • Conclusion
  • References
  • CHAPTER SEVEN RANDOMIZED CONTROLLED TRIALS
  • History of RCTs
  • Why Randomize?
  • Trial Design
  • Biased Allocation and Secure Allocation
  • Contamination and Cluster Randomization
  • Ascertainment and Blinded Follow-Up
  • Crossover and Intention to Treat
  • Attrition
  • Resentful Demoralization: Preference Designs
  • Waiting List and Stepped Wedge Designs
  • Design Issues in Cluster Randomized Trials
  • Sample Size Issues
  • Increased Power for Very Little Cost
  • Analytical Issues
  • Generalizability or External Validity
  • Quality of Randomized Trials
  • Barriers to the Wider Use of RCTs
  • Conclusion
  • References
  • CHAPTER EIGHT CONDUCTING CASE STUDIES
  • What Are Case Studies?
  • Designing Case Studies
  • Defining Research Questions
  • Determining the Unit of Analysis
  • Choosing Single-Case or Multiple-Case Designs
  • Selecting Cases or Sites
  • Conducting Case Studies
  • Preparation
  • Data Collection Strategies
  • Analyzing the Data
  • Preparing the Report
  • Avoiding Common Pitfalls
  • Conclusion
  • References
  • CHAPTER NINE RECRUITMENT AND RETENTION OF STUDY PARTICIPANTS
  • Planning for Recruitment and Retention
  • The Importance of Early Planning
  • Defining the Target Population
  • Participant Motivation and Data Collection Design
  • Pretesting
  • Institutional Review Boards and the Office of Management and Budget
  • Recruitment and Retention Staffing
  • Staff Background
  • Interpersonal Qualities
  • Communication Skills
  • Training and Supervision
  • Implementing Recruitment and Retention
  • Modes of Contact for Recruitment and Retention
  • Recruitment and Retention Efforts in a Health Care Setting
  • Gaining Participant Cooperation
  • Retention-Specific Considerations
  • Monitoring Recruitment and Retention Progress
  • Monitoring Multiple Recruitment Strategies
  • Monitoring Recruitment and Retention of Subpopulations
  • Cultural Considerations
  • Conclusion
  • References
  • CHAPTER TEN DESIGNING, MANAGING, AND ANALYZING MULTISITE EVALUATIONS
  • Defining the Multisite Evaluation
  • Advantages and Disadvantages of Multisite Evaluations
  • Multisite Approaches and Designs
  • Laying the Foundation for an MSE
  • Determining the MSE Design
  • Sampling Sites
  • Strategies for Multisite Data Collection
  • Collecting Common Versus Specific Site Data
  • Developing a Common Protocol
  • Maximizing Existing Data
  • Developing a Common Data Collection Tool
  • Assessing Multisite Interventions
  • Monitoring Fidelity
  • Assessing Common Ingredients
  • Studying Implementation
  • Measuring Program Participation
  • Assessing Comparison as Well as Treatment Sites
  • Monitoring Multisite Implementation
  • Design Features to Monitor
  • Monitoring Methods
  • Quality Control in MSEs
  • Selecting and Hiring Data Collectors
  • Common Training and Booster Sessions
  • Readiness of Interviewers
  • Communication, Supervision, and Ongoing Review
  • Data Management
  • Computerizing and Managing Qualitative Data
  • Computerizing and Managing Quantitative Data
  • IDs and Confidentiality
  • Quantitative Analysis Strategies
  • Challenges and Strategies
  • Overall Analysis Plan
  • Qualitative Analysis Strategies
  • Telling the Story
  • Final Tips for the MSE Evaluator
  • References
  • CHAPTER ELEVEN EVALUATING COMMUNITY CHANGE PROGRAMS
  • Defining Community Change Interventions
  • Challenges
  • Guidance for Evaluators and Practitioners
  • 1. Define a Comprehensive, Parsimonious Set of Metrics Through Which to Assess Program Performance
  • 2. Select the Right Unit of Analysis
  • 3. Assess How “Stable” or Mobile the Unit of Analysis Is
  • 4. Determine the Right Time Period for Evaluation
  • 5. Inventory What Data Are Available and What Original Data Collection Is Necessary
  • 6. Support the Creation and Management of a Data System
  • 7. Choose the Most Appropriate Evaluation Method(s)
  • Conclusion
  • References
  • CHAPTER TWELVE CULTURALLY RESPONSIVE EVALUATION: Theory, Practice, and Future Implications
  • What Is CRE?
  • Pioneers in the Foundations of CRE
  • From CRE Theory to CRE Practice
  • Preparing for the Evaluation
  • Engaging Stakeholders
  • Identifying the Purpose and Intent of the Evaluation
  • Framing the Right Questions
  • Designing the Evaluation
  • Selecting and Adapting Instrumentation
  • Collecting the Data
  • Analyzing the Data
  • Disseminating and Using the Results
  • Case Applications of CRE Theory and Practice
  • Implications for the Profession
  • Validity, Rigor, and CRE
  • Responsibility as a Core Principle of CRE
  • Conclusion
  • Notes
  • References
  • PART TWO Practical Data Collection Procedures
  • The Chapters
  • Other Data Collection Considerations
  • CHAPTER THIRTEEN USING AGENCY RECORDS
  • Potential Problems and Their Alleviation
  • 1. Missing or Incomplete Data
  • 2. Concerns with Data Accuracy
  • 3. Data Available Only in Overly Aggregated Form
  • 4. Unknown, Different, or Changing Definitions of Data Elements
  • 5. Data Need to Be Linked Across Programs and Agencies
  • 6. Confidentiality and Privacy Considerations
  • Data Quality Control Processes
  • Data Checks for Reasonableness
  • Staffing Considerations
  • Other Suggestions for Obtaining Data from Agency Records
  • Conclusion
  • References
  • CHAPTER FOURTEEN USING SURVEYS
  • Planning the Survey
  • Establish Evaluation Questions
  • Determine Whether a Survey Is Necessary and Feasible
  • Determine the Population of Interest
  • Decide on the Analysis Plan
  • Decide on a Plan for Collecting the Data
  • Identify Who Will Conduct the Survey
  • Decide on the Timing of the Data Collection
  • Select the Sample
  • Design the Survey Instrument
  • Consider the Target Respondents
  • Get a Foot in the Door
  • Craft Good Questions
  • Pretest
  • Collect Data from Respondents
  • Mail Surveys
  • Web Surveys
  • In-Person Surveys
  • Telephone Surveys
  • Train Interviewers
  • Employ Quality Control
  • Response Rates
  • Prepare Data for Analysis
  • Present Survey Findings
  • Conclusion
  • References
  • CHAPTER FIFTEEN ROLE PLAYING
  • What Is Role Playing?
  • Diversity of Uses
  • Evaluation
  • Monitoring
  • Enforcement
  • Sampling
  • Representativeness
  • Sample Size
  • Selecting the Sample
  • Data Collection Instruments
  • Determining Which Elements of Role Playing to Document
  • Data Collection Forms
  • Recruiting, Selecting, and Training Role Players
  • Determining Key Characteristics for Role Players
  • Recruiting and Selecting Role Players
  • Training Role Players
  • Implementing Role Playing
  • Management and Quality Control
  • Cost Considerations
  • Practical Problems (and Solutions)
  • Role-Player Attrition
  • Detection
  • Design Efficiencies
  • Statistical Analysis
  • Measuring Differences in Treatment
  • Tests of Statistical Significance
  • Systematic Versus Random Differences in Treatment
  • Expanding Applications for Role Playing
  • Innovative Applications for Role Playing
  • Ethical and Legal Issues
  • Limitations of Role Playing
  • Conclusion
  • References
  • CHAPTER SIXTEEN USING RATINGS BY TRAINED OBSERVERS
  • Uses for Trained Observer Ratings
  • Is a Trained Observer Method Appropriate for Your Needs?
  • What Do You Want to Know?
  • Will Your Findings Require Subsequent Action?
  • What Do You Want to Do with the Information?
  • What You Will Need to Start
  • Decisions About Ratings and Sampling
  • Examples of Trained Observer Programs
  • Volunteers as Trained Observers
  • Employees as Trained Observers
  • Outsiders Running Trained Observer Programs
  • Observing and Rating Interactions
  • Presenting Findings for Trained Observations
  • Quality Control
  • Using Technology or Paper?
  • Benefits of the Trained Observer Approach
  • Lower Costs
  • The Only Direct Way
  • Conclusion
  • References
  • CHAPTER SEVENTEEN COLLECTING DATA IN THE FIELD
  • Objectives of Field Studies
  • Program Management Fieldwork Model
  • Program Evaluation Fieldwork Model
  • Design Issues
  • Frameworks for Guiding Data Collection
  • Site Selection and Staffing
  • Basis for Site Selection
  • Types and Scope of Instruments
  • Field Visit Protocol
  • Previsit Preparations
  • On-Site Procedures
  • Data Maintenance and Analysis
  • Conclusion
  • References
  • Further Reading
  • CHAPTER EIGHTEEN USING THE INTERNET
  • Using the Internet for Literature Reviews
  • The Campbell and Cochrane Collaborations
  • Google, Bing, and Yahoo!
  • Google Scholar
  • ProQuest, PAIS, and ArticlesPlus
  • WorldCat
  • PolicyFile
  • CRS and GAO Reports
  • Government Publications
  • Public Policy Research Institutes
  • Conducting Surveys on the Internet
  • Getting Started: Drafting Questions
  • Validating Respondent Representation
  • Using Unique Aspects of Online Survey Design
  • Outsourcing Online Survey Research
  • Contacting Respondents
  • Putting Your Program Evaluation on the Web
  • References
  • Further Reading
  • CHAPTER NINETEEN CONDUCTING SEMI-STRUCTURED INTERVIEWS
  • Disadvantages and Advantages of SSIs
  • Designing and Conducting SSIs
  • Selecting Respondents and Arranging Interviews
  • Drafting Questions and the Interview Guide
  • Starting the Interview
  • Polishing Interview Techniques
  • Analyzing and Reporting SSIs
  • References
  • CHAPTER TWENTY FOCUS GROUP INTERVIEWING
  • Examples of Focus Group Use
  • To Assess Needs and Assets
  • To Design an Intervention
  • To Evaluate Policy Options
  • To Pilot-Test Data Collection Instruments
  • To Understand Quantitative Findings
  • To Monitor and Evaluate Agency Operation
  • Characteristics of Focus Group Interviews
  • The Questions Are Focused
  • There Is No Push for Agreement or Consensus
  • The Environment Is Permissive and Nonthreatening
  • The Participants Are Homogeneous
  • The Group Size Is Reasonable
  • Patterns and Trends Are Examined Across Groups
  • The Group Is Guided by a Skillful Moderator
  • The Analysis Fits the Study
  • Responsibilities
  • Planning
  • First Steps
  • Sampling and Number of Groups
  • Developing Questions
  • Developing the Questioning Route
  • Examples of Questioning Routes
  • Recruiting
  • The Recruiting Procedure
  • Finding a Pool of Participants
  • Getting People to Attend—Incentives
  • Consider Your Recruiting Assets
  • Moderating
  • Moderator Skills
  • Analysis
  • Use a Systematic Analysis Process
  • Try the Classic Analysis Strategy: Long Tables, Scissors, and Colored Marking Pens
  • Addressing Challenges in Focus Group Interviews
  • Conclusion
  • Reference
  • CHAPTER TWENTY-ONE USING STORIES IN EVALUATION
  • How Stories Enrich Evaluations
  • They Help Us Understand
  • They Help Us Share What We Learned
  • A Definition of an Evaluation Story
  • How Stories Can Be Used in Evaluation Studies
  • An Overview of Critical Steps
  • Decide on the Evaluation Question or the Topic
  • Decide How You Will Use Stories in the Evaluation
  • Decide on a Sampling Strategy
  • Select a Method for Gathering Stories
  • Develop Questions to Elicit Stories and Guide the Storytellers
  • Decide How You Will Capture the Stories
  • Collect the Stories
  • Decide How to Present the Stories
  • Analyze the Stories
  • Verify the Stories You Will Use in Your Reports
  • Decide on the Level of Confidentiality
  • Describe Representativeness
  • Deal with the Concept of Truth
  • Document Your Strategy
  • Strategies of Expert Storytellers: Presenting the Story Effectively
  • 1. Stories Are About a Person, Not an Organization
  • 2. Stories Have a Hero, an Obstacle, a Struggle, and a Resolution
  • 3. Set the Stage for the Story
  • 4. The Story Unfolds
  • 5. Emotions Are Described
  • 6. Dialogue Adds Richness
  • 7. Suspense and Surprise Add Interest
  • 8. Key Message Is Revealed
  • Challenges in Using Stories and How to Manage Them
  • A Final Thought
  • Conclusion
  • References
  • PART THREE Data Analysis
  • The Chapters
  • CHAPTER TWENTY-TWO QUALITATIVE DATA ANALYSIS
  • Types of Evaluation and Analytic Purpose
  • Coding Data
  • Overview of Qualitative Analytic Methods
  • Enumerative Methods
  • Application
  • When These Methods Are Appropriate
  • Descriptive Methods
  • Application
  • When These Methods Are Appropriate
  • Hermeneutic Methods
  • Application
  • When These Methods Are Appropriate
  • Explanatory Methods
  • Application
  • When These Methods Are Appropriate
  • Framing Analytic Choices
  • How Can Software Help?
  • Who Does the Analysis?
  • High-Quality Qualitative Data Analysis
  • Program Evaluation Standards and Quality Criteria for QDA
  • Conclusion
  • References
  • CHAPTER TWENTY-THREE USING STATISTICS IN EVALUATION
  • Descriptive Statistics: Simple Measures Used in Evaluations
  • Univariate Statistics
  • Bivariate Statistics
  • Inferential Statistics: From Samples to Populations
  • Sampling Tips
  • Statistical Hypothesis Testing
  • Selecting a Statistical Confidence Level
  • Using a Confidence Interval to Convey Results
  • Testing Statistical Significance for Nominal- and Ordinal-Level Variables: The Chi-Square Test
  • Testing Statistical Significance of Difference of Means: The t Test
  • Regression Analysis
  • Introduction to the Multiple Regression Model
  • Tips on Pulling It All Together: Practical Significance
  • Selecting Appropriate Statistics
  • Selecting Techniques to Sort Measures or Units
  • Other Factors Affecting Selection of Statistical Techniques
  • Reporting Statistics Appropriately
  • Reporting Statistical Results to High-Level Public Officials
  • Conclusion
  • Appendix 23A: An Application of the Chi-Square Statistic Calculated with SPSS
  • Appendix 23B: An Application of the t Test
  • References
  • CHAPTER TWENTY-FOUR COST-EFFECTIVENESS AND COST-BENEFIT ANALYSIS
  • Step 1: Set the Framework for the Analysis
  • The Status Quo
  • Timing
  • Step 2: Decide Whose Costs and Benefits Should Be Recognized
  • Step 3: Identify and Categorize Costs and Benefits
  • Step 4: Project Cost and Benefits Over the Life of the Program, If Applicable
  • Step 5: Monetizing (Putting a Dollar Value on) Costs
  • Step 6: Quantify (for CEA) and Monetize (for CBA) Benefits
  • Quantifying Benefits (for CEA)
  • Monetizing Benefits (for CBA)
  • Chain Reaction Problem
  • Step 7: Discount Costs and Benefits to Obtain Present Values
  • Step 8: Compute Cost-Effectiveness Ratio (for CEA) or Net Present Value (for CBA)
  • Compute Cost-Effectiveness Ratio (for CEA)
  • Calculate Net Present Value (for CBA)
  • Step 9: Perform Sensitivity Analysis
  • Step 10: Make a Recommendation
  • Conclusion
  • Notes
  • References
  • CHAPTER TWENTY-FIVE META-ANALYSES, SYSTEMATIC REVIEWS, AND EVALUATION SYNTHESES
  • Why Be Conscientious in Reviewing Studies of Intervention Effects?
  • Multiple Evaluations Versus a Single Evaluation
  • Identifying High-Quality Evidence
  • Going Beyond the Flaws in Conventional Literature Reviews
  • How Are the Best Approaches to Systematic Reviews Employed at Their Best?
  • Practical Advice: Read or Take a Course
  • Practical Advice: Contribute to a Meta-Analysis, Systematic Review, or Evaluation Synthesis
  • Producing a Meta-Analysis, Systematic Review, or Evaluation Synthesis
  • What Resources Can Be Employed to Do the Job Well?
  • Independent International and Domestic Resources
  • Government Organizations and Government-Sponsored Entities
  • Technical Resources
  • Resources and Issues for the Future: Scenarios
  • To What End? Value Added and Usefulness
  • Value Added: Surprises
  • Academic Disciplines, the Policy Sector, and Dependence on Systematic Reviews
  • By-Products
  • Conclusion
  • References
  • PART FOUR Use of Evaluation
  • The Chapters
  • CHAPTER TWENTY-SIX PITFALLS IN EVALUATIONS
  • Pitfalls Before Data Collection Begins
  • Pitfall 1: Failure to Assess Whether the Program Is Evaluable
  • Pitfall 2: Starting Data Collection Too Early in the Life of a Program
  • Pitfall 3: Failure to Secure Input from Program Managers and Other Stakeholders on Appropriate Evalu
  • Pitfall 4: Failure to Clarify Program Managers' Expectations About What Can Be Learned from the Evaluation
  • Pitfall 5: Failure to Pretest Data Collection Instruments Appropriately
  • Pitfall 6: Use of Inadequate Indicators of Program Effects
  • Pitfall 7: Inadequately Training Data Collectors
  • Pitfalls During Data Collection
  • Pitfall 8: Failure to Identify and Adjust for Changes in Data Collection Procedures That Occur During Data Collection
  • Pitfall 9: Collecting Too Many Data and Not Allowing Adequate Time for Analysis of the Data Collected
  • Pitfall 10: Inappropriate Conceptualization or Implementation of the Intervention
  • Pitfall 11: Beginning Observation When Conditions (Target Behaviors) Are at an Extreme Level or Not
  • Pitfall 12: Inappropriate Involvement of Program Providers in Data Collection
  • Pitfall 13: Overly Intrusive Data Collection Procedures That Change Behaviors of Program Staff or Participants
  • Pitfall 14: Failure to Account for Drop-Off in Sample Size Due to Attrition
  • Pitfall 15: Failure to Draw a Representative Sample of Program Participants
  • Pitfall 16: Insufficient Number of Callbacks to Boost Response Rates
  • Pitfall 17: Failure to Account for Natural Maturation Among Program Participants
  • Pitfall 18: Failure to Provide a Comparison Group
  • Pitfall 19: Failure to Take into Account Key Contextual Factors (Out of the Control of Program Staff)
  • Pitfall 20: Failure to Take into Account the Degree of Difficulty of Helping Program Participants
  • Pitfalls After Data Collection
  • Pitfall 21: Overemphasis on Statistical Significance and Underemphasis on Practical Significance of Findings
  • Pitfall 22: Focusing on Only the Overall (Average) Results with Inadequate Attention to Disaggregated Results
  • Pitfall 23: Generalizing Beyond the Confines of the Sample or the Limits of the Program Sites Included
  • Pitfall 24: Failure to Acknowledge the Effects of Multiple Program Components
  • Pitfall 25: Failure to Submit Preliminary Findings to Key Program Staff for Reality Testing
  • Pitfall 26: Failure to Adequately Support Conclusions with Specific Data
  • Pitfall 27: Poor Presentation of Evaluation Findings
  • Conclusion
  • References
  • CHAPTER TWENTY-SEVEN PROVIDING RECOMMENDATIONS, SUGGESTIONS, AND OPTIONS FOR IMPROVEMENT
  • But First, an Important Distinction
  • When to Make Recommendations
  • Aiming for Acceptance and Appreciation
  • Choosing Between Recommendations and Suggestions
  • Hallmarks of Effective Recommendations
  • Compliance Reviews
  • Other Evaluations
  • General Strategies for Developing Recommendations
  • Brainstorm
  • Vet Ideas Up the Chain of Command and into the World of Stakeholders
  • Start with the Findings
  • Think Outside the Box
  • Consider the Problem of Financing the Recommendations
  • Narrow the List and Provide Options
  • Take Ownership of the Recommendations
  • Reference
  • CHAPTER TWENTY-EIGHT WRITING FOR IMPACT
  • The Message
  • The Mom Test
  • Findings
  • Options and Recommendations
  • Methodology
  • The Audience
  • Thought Leaders
  • Other Interested Persons
  • The Medium
  • The Six Basic Formats
  • Writing Style and Layout
  • Conclusion
  • Reference
  • CHAPTER TWENTY-NINE CONTRACTING FOR EVALUATION PRODUCTS AND SERVICES
  • Creating a Feasible, Approved Concept Plan
  • Key Elements of a Concept Plan
  • Shaping a Feasible Concept Plan
  • Developing a Well-Defined Request for Proposal
  • Determining RFP Content
  • Writing an RFP
  • Selecting a Well-Qualified Evaluation Contractor
  • Reviewing Proposals
  • Selecting the Evaluation Contractor
  • Constructively Monitoring Interim Progress
  • Reviewing Progress Reports and Invoices
  • Monitoring Process
  • Checking the Mandate During the Evaluation
  • Assuring Product Quality and Usefulness
  • Conclusion
  • Reference
  • Further Reading
  • CHAPTER THIRTY USE OF EVALUATION IN GOVERNMENT: The Politics of Evaluation
  • Use of Evaluation in Government
  • Political and Bureaucratic Challenges Affecting Use of Evaluation
  • Overcoming Political and Bureaucratic Challenges
  • Redesigning Agency Management Systems to Focus on Results
  • Creating Incentives for Higher Program Performance
  • Developing Agreement on Key National, State, or Community Indicators
  • Developing Performance Partnerships
  • Conclusion
  • References
  • CHAPTER THIRTY-ONE EVALUATION CHALLENGES, ISSUES, AND TRENDS
  • Challenge 1: Controlling the Quality of the Evaluation Process
  • Challenge 2: Selecting and Training Evaluators
  • Challenge 3: Maintaining Standards and Ethics
  • Challenge 4: Using Evaluation Findings to Improve Programs
  • The Relationship Between Performance Monitoring and Evaluation
  • Trends in Program Evaluation
  • Information Technology
  • Big Data
  • Data Visualization
  • Complex Adaptive Systems
  • Evaluation Mandates
  • Demand for Rigorous Evidence
  • Final Thoughts
  • References
  • Name Index
  • Subject Index
  • EULA

Additional information


E-book to own
