Description
Table of Contents
- Cover
- Half Title
- Series Page
- Title Page
- Copyright Page
- Table of Contents
- Preface to the Second Edition
- Preface
- Audience
- Teaching strategy
- How to use this book
- Installing the rethinking R package
- Acknowledgments
- Chapter 1. The Golem of Prague
- 1.1. Statistical golems
- 1.2. Statistical rethinking
- 1.3. Tools for golem engineering
- 1.4. Summary
- Chapter 2. Small Worlds and Large Worlds
- 2.1. The garden of forking data
- 2.2. Building a model
- 2.3. Components of the model
- 2.4. Making the model go
- 2.5. Summary
- 2.6. Practice
- Chapter 3. Sampling the Imaginary
- 3.1. Sampling from a grid-approximate posterior
- 3.2. Sampling to summarize
- 3.3. Sampling to simulate prediction
- 3.4. Summary
- 3.5. Practice
- Chapter 4. Geocentric Models
- 4.1. Why normal distributions are normal
- 4.2. A language for describing models
- 4.3. Gaussian model of height
- 4.4. Linear prediction
- 4.5. Curves from lines
- 4.6. Summary
- 4.7. Practice
- Chapter 5. The Many Variables & The Spurious Waffles
- 5.1. Spurious association
- 5.2. Masked relationship
- 5.3. Categorical variables
- 5.4. Summary
- 5.5. Practice
- Chapter 6. The Haunted DAG & The Causal Terror
- 6.1. Multicollinearity
- 6.2. Post-treatment bias
- 6.3. Collider bias
- 6.4. Confronting confounding
- 6.5. Summary
- 6.6. Practice
- Chapter 7. Ulysses’ Compass
- 7.1. The problem with parameters
- 7.2. Entropy and accuracy
- 7.3. Golem taming: regularization
- 7.4. Predicting predictive accuracy
- 7.5. Model comparison
- 7.6. Summary
- 7.7. Practice
- Chapter 8. Conditional Manatees
- 8.1. Building an interaction
- 8.2. Symmetry of interactions
- 8.3. Continuous interactions
- 8.4. Summary
- 8.5. Practice
- Chapter 9. Markov Chain Monte Carlo
- 9.1. Good King Markov and his island kingdom
- 9.2. Metropolis algorithms
- 9.3. Hamiltonian Monte Carlo
- 9.4. Easy HMC: ulam
- 9.5. Care and feeding of your Markov chain
- 9.6. Summary
- 9.7. Practice
- Chapter 10. Big Entropy and the Generalized Linear Model
- 10.1. Maximum entropy
- 10.2. Generalized linear models
- 10.3. Maximum entropy priors
- 10.4. Summary
- Chapter 11. God Spiked the Integers
- 11.1. Binomial regression
- 11.2. Poisson regression
- 11.3. Multinomial and categorical models
- 11.4. Summary
- 11.5. Practice
- Chapter 12. Monsters and Mixtures
- 12.1. Over-dispersed counts
- 12.2. Zero-inflated outcomes
- 12.3. Ordered categorical outcomes
- 12.4. Ordered categorical predictors
- 12.5. Summary
- 12.6. Practice
- Chapter 13. Models With Memory
- 13.1. Example: Multilevel tadpoles
- 13.2. Varying effects and the underfitting/overfitting trade-off
- 13.3. More than one type of cluster
- 13.4. Divergent transitions and non-centered priors
- 13.5. Multilevel posterior predictions
- 13.6. Summary
- 13.7. Practice
- Chapter 14. Adventures in Covariance
- 14.1. Varying slopes by construction
- 14.2. Advanced varying slopes
- 14.3. Instruments and causal designs
- 14.4. Social relations as correlated varying effects
- 14.5. Continuous categories and the Gaussian process
- 14.6. Summary
- 14.7. Practice
- Chapter 15. Missing Data and Other Opportunities
- 15.1. Measurement error
- 15.2. Missing data
- 15.3. Categorical errors and discrete absences
- 15.4. Summary
- 15.5. Practice
- Chapter 16. Generalized Linear Madness
- 16.1. Geometric people
- 16.2. Hidden minds and observed behavior
- 16.3. Ordinary differential nut cracking
- 16.4. Population dynamics
- 16.5. Summary
- 16.6. Practice
- Chapter 17. Horoscopes
- Endnotes
- Bibliography
- Citation index
- Topic index