The R Book

Author: Michael J. Crawley

Publisher: Wiley Global Research (STMS)

Format: Page Fidelity

Print ISBN: 9780470973929

Edition: 2

Publication year: 2012

Price: 11.090 kr.


Table of Contents

  • The R Book
  • Preface
  • 1 Getting Started
  • 1.1 How to use this book
  • 1.1.1 Beginner in both computing and statistics
  • 1.1.2 Student needing help with project work
  • 1.1.3 Done some R and some statistics, but keen to learn more of both
  • 1.1.4 Done regression and ANOVA, but want to learn more advanced statistical modelling
  • 1.1.5 Experienced in statistics, but a beginner in R
  • 1.1.6 Experienced in computing, but a beginner in R
  • 1.1.7 Familiar with statistics and computing, but need a friendly reference manual
  • 1.2 Installing R
  • 1.3 Running R
  • 1.4 The Comprehensive R Archive Network
  • 1.4.1 Manuals
  • 1.4.2 Frequently asked questions
  • 1.4.3 Contributed documentation
  • 1.5 Getting help in R
  • 1.5.1 Worked examples of functions
  • 1.5.2 Demonstrations of R functions
  • 1.6 Packages in R
  • 1.6.1 Contents of packages
  • 1.6.2 Installing packages
  • 1.7 Command line versus scripts
  • 1.8 Data editor
  • 1.9 Changing the look of the R screen
  • 1.10 Good housekeeping
  • 1.11 Linking to other computer languages
  • 2 Essentials of the R Language
  • 2.1 Calculations
  • 2.1.1 Complex numbers in R
  • 2.1.2 Rounding
  • 2.1.3 Arithmetic
  • 2.1.4 Modulo and integer quotients
  • 2.1.5 Variable names and assignment
  • 2.1.6 Operators
  • 2.1.7 Integers
  • 2.1.8 Factors
  • 2.2 Logical operations
  • 2.2.1 TRUE and T with FALSE and F
  • 2.2.2 Testing for equality with real numbers
  • 2.2.3 Equality of floating point numbers using all.equal
  • 2.2.4 Summarizing differences between objects using all.equal
  • 2.2.5 Evaluation of combinations of TRUE and FALSE
  • 2.2.6 Logical arithmetic
  • 2.3 Generating sequences
  • 2.3.1 Generating repeats
  • 2.3.2 Generating factor levels
  • 2.4 Membership: Testing and coercing in R
  • 2.5 Missing values, infinity and things that are not numbers
  • 2.5.1 Missing values: NA
  • 2.6 Vectors and subscripts
  • 2.6.1 Extracting elements of a vector using subscripts
  • 2.6.2 Classes of vector
  • 2.6.3 Naming elements within vectors
  • 2.6.4 Working with logical subscripts
  • 2.7 Vector functions
  • 2.7.1 Obtaining tables of means using tapply
  • 2.7.2 The aggregate function for grouped summary statistics
  • 2.7.3 Parallel minima and maxima: pmin and pmax
  • 2.7.4 Summary information from vectors by groups
  • 2.7.5 Addresses within vectors
  • 2.7.6 Finding closest values
  • 2.7.7 Sorting, ranking and ordering
  • 2.7.8 Understanding the difference between unique and duplicated
  • 2.7.9 Looking for runs of numbers within vectors
  • 2.7.10 Sets: union, intersect and setdiff
  • 2.8 Matrices and arrays
  • 2.8.1 Matrices
  • 2.8.2 Naming the rows and columns of matrices
  • 2.8.3 Calculations on rows or columns of the matrix
  • 2.8.4 Adding rows and columns to the matrix
  • 2.8.5 The sweep function
  • 2.8.6 Applying functions with apply, sapply and lapply
  • 2.8.7 Using the max.col function
  • 2.8.8 Restructuring a multi-dimensional array using aperm
  • 2.9 Random numbers, sampling and shuffling
  • 2.9.1 The sample function
  • 2.10 Loops and repeats
  • 2.10.1 Creating the binary representation of a number
  • 2.10.2 Loop avoidance
  • 2.10.3 The slowness of loops
  • 2.10.4 Do not ‘grow’ data sets by concatenation or recursive function calls
  • 2.10.5 Loops for producing time series
  • 2.11 Lists
  • 2.11.1 Lists and lapply
  • 2.11.2 Manipulating and saving lists
  • 2.12 Text, character strings and pattern matching
  • 2.12.1 Pasting character strings together
  • 2.12.2 Extracting parts of strings
  • 2.12.3 Counting things within strings
  • 2.12.4 Upper- and lower-case text
  • 2.12.5 The match function and relational databases
  • 2.12.6 Pattern matching
  • 2.12.7 Dot . as the ‘anything’ character
  • 2.12.8 Substituting text within character strings
  • 2.12.9 Locations of a pattern within a vector using regexpr
  • 2.12.10 Using %in% and which
  • 2.12.11 More on pattern matching
  • 2.12.12 Perl regular expressions
  • 2.12.13 Stripping patterned text out of complex strings
  • 2.13 Dates and times in R
  • 2.13.1 Reading time data from files
  • 2.13.2 The strptime function
  • 2.13.3 The difftime function
  • 2.13.4 Calculations with dates and times
  • 2.13.5 The difftime and as.difftime functions
  • 2.13.6 Generating sequences of dates
  • 2.13.7 Calculating time differences between the rows of a dataframe
  • 2.13.8 Regression using dates and times
  • 2.13.9 Summary of dates and times in R
  • 2.14 Environments
  • 2.14.1 Using with rather than attach
  • 2.14.2 Using attach in this book
  • 2.15 Writing R functions
  • 2.15.1 Arithmetic mean of a single sample
  • 2.15.2 Median of a single sample
  • 2.15.3 Geometric mean
  • 2.15.4 Harmonic mean
  • 2.15.5 Variance
  • 2.15.6 Degrees of freedom
  • 2.15.7 Variance ratio test
  • 2.15.8 Using variance
  • 2.15.9 Deparsing: A graphics function for error bars
  • 2.15.10 The switch function
  • 2.15.11 The evaluation environment of a function
  • 2.15.12 Scope
  • 2.15.13 Optional arguments
  • 2.15.14 Variable numbers of arguments (…)
  • 2.15.15 Returning values from a function
  • 2.15.16 Anonymous functions
  • 2.15.17 Flexible handling of arguments to functions
  • 2.15.18 Structure of an object: str
  • 2.16 Writing from R to file
  • 2.16.1 Saving your work
  • 2.16.2 Saving history
  • 2.16.3 Saving graphics
  • 2.16.4 Saving data produced within R to disc
  • 2.16.5 Pasting into an Excel spreadsheet
  • 2.16.6 Writing an Excel readable file from R
  • 2.17 Programming tips
  • 3 Data Input
  • 3.1 Data input from the keyboard
  • 3.2 Data input from files
  • 3.2.1 The working directory
  • 3.2.2 Data input using read.table
  • 3.2.3 Common errors when using read.table
  • 3.2.4 Separators and decimal points
  • 3.2.5 Data input directly from the web
  • 3.3 Input from files using scan
  • 3.3.1 Reading a dataframe with scan
  • 3.3.2 Input from more complex file structures using scan
  • 3.4 Reading data from a file using readLines
  • 3.4.1 Input a dataframe using readLines
  • 3.4.2 Reading non-standard files using readLines
  • 3.5 Warnings when you attach the dataframe
  • 3.6 Masking
  • 3.7 Input and output formats
  • 3.8 Checking files from the command line
  • 3.9 Reading dates and times from files
  • 3.10 Built-in data files
  • 3.11 File paths
  • 3.12 Connections
  • 3.13 Reading data from an external database
  • 3.13.1 Creating the DSN for your computer
  • 3.13.2 Setting up R to read from the database
  • 4 Dataframes
  • 4.1 Subscripts and indices
  • 4.2 Selecting rows from the dataframe at random
  • 4.3 Sorting dataframes
  • 4.4 Using logical conditions to select rows from the dataframe
  • 4.5 Omitting rows containing missing values, NA
  • 4.5.1 Replacing NAs with zeros
  • 4.6 Using order and !duplicated to eliminate pseudoreplication
  • 4.7 Complex ordering with mixed directions
  • 4.8 A dataframe with row names instead of row numbers
  • 4.9 Creating a dataframe from another kind of object
  • 4.10 Eliminating duplicate rows from a dataframe
  • 4.11 Dates in dataframes
  • 4.12 Using the match function in dataframes
  • 4.13 Merging two dataframes
  • 4.14 Adding margins to a dataframe
  • 4.15 Summarizing the contents of dataframes
  • 5 Graphics
  • 5.1 Plots with two variables
  • 5.2 Plotting with two continuous explanatory variables: Scatterplots
  • 5.2.1 Plotting symbols: pch
  • 5.2.2 Colour for symbols in plots
  • 5.2.3 Adding text to scatterplots
  • 5.2.4 Identifying individuals in scatterplots
  • 5.2.5 Using a third variable to label a scatterplot
  • 5.2.6 Joining the dots
  • 5.2.7 Plotting stepped lines
  • 5.3 Adding other shapes to a plot
  • 5.3.1 Placing items on a plot with the cursor, using the locator function
  • 5.3.2 Drawing more complex shapes with polygon
  • 5.4 Drawing mathematical functions
  • 5.4.1 Adding smooth parametric curves to a scatterplot
  • 5.4.2 Fitting non-parametric curves through a scatterplot
  • 5.5 Shape and size of the graphics window
  • 5.6 Plotting with a categorical explanatory variable
  • 5.6.1 Boxplots with notches to indicate significant differences
  • 5.6.2 Barplots with error bars
  • 5.6.3 Plots for multiple comparisons
  • 5.6.4 Using colour palettes with categorical explanatory variables
  • 5.7 Plots for single samples
  • 5.7.1 Histograms and bar charts
  • 5.7.2 Histograms
  • 5.7.3 Histograms of integers
  • 5.7.4 Overlaying histograms with smooth density functions
  • 5.7.5 Density estimation for continuous variables
  • 5.7.6 Index plots
  • 5.7.7 Time series plots
  • 5.7.8 Pie charts
  • 5.7.9 The stripchart function
  • 5.7.10 A plot to test for normality
  • 5.8 Plots with multiple variables
  • 5.8.1 The pairs function
  • 5.8.2 The coplot function
  • 5.8.3 Interaction plots
  • 5.9 Special plots
  • 5.9.1 Design plots
  • 5.9.2 Bubble plots
  • 5.9.3 Plots with many identical values
  • 5.10 Saving graphics to file
  • 5.11 Summary
  • 6 Tables
  • 6.1 Tables of counts
  • 6.2 Summary tables
  • 6.3 Expanding a table into a dataframe
  • 6.4 Converting from a dataframe to a table
  • 6.5 Calculating tables of proportions with prop.table
  • 6.6 The scale function
  • 6.7 The expand.grid function
  • 6.8 The model.matrix function
  • 6.9 Comparing table and tabulate
  • 7 Mathematics
  • 7.1 Mathematical functions
  • 7.1.1 Logarithmic functions
  • 7.1.2 Trigonometric functions
  • 7.1.3 Power laws
  • 7.1.4 Polynomial functions
  • 7.1.5 Gamma function
  • 7.1.6 Asymptotic functions
  • 7.1.7 Parameter estimation in asymptotic functions
  • 7.1.8 Sigmoid (S-shaped) functions
  • 7.1.9 Biexponential model
  • 7.1.10 Transformations of the response and explanatory variables
  • 7.2 Probability functions
  • 7.3 Continuous probability distributions
  • 7.3.1 Normal distribution
  • 7.3.2 The central limit theorem
  • 7.3.3 Maximum likelihood with the normal distribution
  • 7.3.4 Generating random numbers with exact mean and standard deviation
  • 7.3.5 Comparing data with a normal distribution
  • 7.3.6 Other distributions used in hypothesis testing
  • 7.3.7 The chi-squared distribution
  • 7.3.8 Fisher’s F distribution
  • 7.3.9 Student’s t distribution
  • 7.3.10 The gamma distribution
  • 7.3.11 The exponential distribution
  • 7.3.12 The beta distribution
  • 7.3.13 The Cauchy distribution
  • 7.3.14 The lognormal distribution
  • 7.3.15 The logistic distribution
  • 7.3.16 The log-logistic distribution
  • 7.3.17 The Weibull distribution
  • 7.3.18 Multivariate normal distribution
  • 7.3.19 The uniform distribution
  • 7.3.20 Plotting empirical cumulative distribution functions
  • 7.4 Discrete probability distributions
  • 7.4.1 The Bernoulli distribution
  • 7.4.2 The binomial distribution
  • 7.4.3 The geometric distribution
  • 7.4.4 The hypergeometric distribution
  • 7.4.5 The multinomial distribution
  • 7.4.6 The Poisson distribution
  • 7.4.7 The negative binomial distribution
  • 7.4.8 The Wilcoxon rank-sum statistic
  • 7.5 Matrix algebra
  • 7.5.1 Matrix multiplication
  • 7.5.2 Diagonals of matrices
  • 7.5.3 Determinant
  • 7.5.4 Inverse of a matrix
  • 7.5.5 Eigenvalues and eigenvectors
  • 7.5.6 Matrices in statistical models
  • 7.5.7 Statistical models in matrix notation
  • 7.6 Solving systems of linear equations using matrices
  • 7.7 Calculus
  • 7.7.1 Derivatives
  • 7.7.2 Integrals
  • 7.7.3 Differential equations
  • 8 Classical Tests
  • 8.1 Single samples
  • 8.1.1 Data summary
  • 8.1.2 Plots for testing normality
  • 8.1.3 Testing for normality
  • 8.1.4 An example of single-sample data
  • 8.2 Bootstrap in hypothesis testing
  • 8.3 Skew and kurtosis
  • 8.3.1 Skew
  • 8.3.2 Kurtosis
  • 8.4 Two samples
  • 8.4.1 Comparing two variances
  • 8.4.2 Comparing two means
  • 8.4.3 Student’s t test
  • 8.4.4 Wilcoxon rank-sum test
  • 8.5 Tests on paired samples
  • 8.6 The sign test
  • 8.7 Binomial test to compare two proportions
  • 8.8 Chi-squared contingency tables
  • 8.8.1 Pearson’s chi-squared
  • 8.8.2 G test of contingency
  • 8.8.3 Unequal probabilities in the null hypothesis
  • 8.8.4 Chi-squared tests on table objects
  • 8.8.5 Contingency tables with small expected frequencies: Fisher’s exact test
  • 8.9 Correlation and covariance
  • 8.9.1 Data dredging
  • 8.9.2 Partial correlation
  • 8.9.3 Correlation and the variance of differences between variables
  • 8.9.4 Scale-dependent correlations
  • 8.10 Kolmogorov–Smirnov test
  • 8.11 Power analysis
  • 8.12 Bootstrap
  • 9 Statistical Modelling
  • 9.1 First things first
  • 9.2 Maximum likelihood
  • 9.3 The principle of parsimony (Occam’s razor)
  • 9.4 Types of statistical model
  • 9.5 Steps involved in model simplification
  • 9.5.1 Caveats
  • 9.5.2 Order of deletion
  • 9.6 Model formulae in R
  • 9.6.1 Interactions between explanatory variables
  • 9.6.2 Creating formula objects
  • 9.7 Multiple error terms
  • 9.8 The intercept as parameter 1
  • 9.9 The update function in model simplification
  • 9.10 Model formulae for regression
  • 9.11 Box–Cox transformations
  • 9.12 Model criticism
  • 9.13 Model checking
  • 9.13.1 Heteroscedasticity
  • 9.13.2 Non-normality of errors
  • 9.14 Influence
  • 9.15 Summary of statistical models in R
  • 9.16 Optional arguments in model-fitting functions
  • 9.16.1 Subsets
  • 9.16.2 Weights
  • 9.16.3 Missing values
  • 9.16.4 Offsets
  • 9.16.5 Dataframes containing the same variable names
  • 9.17 Akaike’s information criterion
  • 9.17.1 AIC as a measure of the fit of a model
  • 9.18 Leverage
  • 9.19 Misspecified model
  • 9.20 Model checking in R
  • 9.21 Extracting information from model objects
  • 9.21.1 Extracting information by name
  • 9.21.2 Extracting information by list subscripts
  • 9.21.3 Extracting components of the model using $
  • 9.21.4 Using lists with models
  • 9.22 The summary tables for continuous and categorical explanatory variables
  • 9.23 Contrasts
  • 9.23.1 Contrast coefficients
  • 9.23.2 An example of contrasts in R
  • 9.23.3 A priori contrasts
  • 9.24 Model simplification by stepwise deletion
  • 9.25 Comparison of the three kinds of contrasts
  • 9.25.1 Treatment contrasts
  • 9.25.2 Helmert contrasts
  • 9.25.3 Sum contrasts
  • 9.26 Aliasing
  • 9.27 Orthogonal polynomial contrasts: contr.poly
  • 9.28 Summary of statistical modelling
  • 10 Regression
  • 10.1 Linear regression
  • 10.1.1 The famous five in R
  • 10.1.2 Corrected sums of squares and sums of products
  • 10.1.3 Degree of scatter
  • 10.1.4 Analysis of variance in regression: SSY = SSR + SSE
  • 10.1.5 Unreliability estimates for the parameters
  • 10.1.6 Prediction using the fitted model
  • 10.1.7 Model checking
  • 10.2 Polynomial approximations to elementary functions
  • 10.3 Polynomial regression
  • 10.4 Fitting a mechanistic model to data
  • 10.5 Linear regression after transformation
  • 10.6 Prediction following regression
  • 10.7 Testing for lack of fit in a regression
  • 10.8 Bootstrap with regression
  • 10.9 Jackknife with regression
  • 10.10 Jackknife after bootstrap
  • 10.11 Serial correlation in the residuals
  • 10.12 Piecewise regression
  • 10.13 Multiple regression
  • 10.13.1 The multiple regression model
  • 10.13.2 Common problems arising in multiple regression
  • 11 Analysis of Variance
  • 11.1 One-way ANOVA
  • 11.1.1 Calculations in one-way ANOVA
  • 11.1.2 Assumptions of ANOVA
  • 11.1.3 A worked example of one-way ANOVA
  • 11.1.4 Effect sizes
  • 11.1.5 Plots for interpreting one-way ANOVA
  • 11.2 Factorial experiments
  • 11.3 Pseudoreplication: Nested designs and split plots
  • 11.3.1 Split-plot experiments
  • 11.3.2 Mixed-effects models
  • 11.3.3 Fixed effect or random effect?
  • 11.3.4 Removing the pseudoreplication
  • 11.3.5 Derived variable analysis
  • 11.4 Variance components analysis
  • 11.5 Effect sizes in ANOVA: aov or lm?
  • 11.6 Multiple comparisons
  • 11.7 Multivariate analysis of variance
  • 12 Analysis of Covariance
  • 12.1 Analysis of covariance in R
  • 12.2 ANCOVA and experimental design
  • 12.3 ANCOVA with two factors and one continuous covariate
  • 12.4 Contrasts and the parameters of ANCOVA models
  • 12.5 Order matters in summary.aov
  • 13 Generalized Linear Models
  • 13.1 Error structure
  • 13.2 Linear predictor
  • 13.3 Link function
  • 13.3.1 Canonical link functions
  • 13.4 Proportion data and binomial errors
  • 13.5 Count data and Poisson errors
  • 13.6 Deviance: Measuring the goodness of fit of a GLM
  • 13.7 Quasi-likelihood
  • 13.8 The quasi family of models
  • 13.9 Generalized additive models
  • 13.10 Offsets
  • 13.11 Residuals
  • 13.11.1 Misspecified error structure
  • 13.11.2 Misspecified link function
  • 13.12 Overdispersion
  • 13.13 Bootstrapping a GLM
  • 13.14 Binomial GLM with ordered categorical variables
  • 14 Count Data
  • 14.1 A regression with Poisson errors
  • 14.2 Analysis of deviance with count data
  • 14.3 Analysis of covariance with count data
  • 14.4 Frequency distributions
  • 14.5 Overdispersion in log-linear models
  • 14.6 Negative binomial errors
  • 15 Count Data in Tables
  • 15.1 A two-class table of counts
  • 15.2 Sample size for count data
  • 15.3 A four-class table of counts
  • 15.4 Two-by-two contingency tables
  • 15.5 Using log-linear models for simple contingency tables
  • 15.6 The danger of contingency tables
  • 15.7 Quasi-Poisson and negative binomial models compared
  • 15.8 A contingency table of intermediate complexity
  • 15.9 Schoener’s lizards: A complex contingency table
  • 15.10 Plot methods for contingency tables
  • 15.11 Graphics for count data: Spine plots and spinograms
  • 16 Proportion Data
  • 16.1 Analyses of data on one and two proportions
  • 16.2 Count data on proportions
  • 16.3 Odds
  • 16.4 Overdispersion and hypothesis testing
  • 16.5 Applications
  • 16.5.1 Logistic regression with binomial errors
  • 16.5.2 Estimating LD50 and LD90 from bioassay data
  • 16.5.3 Proportion data with categorical explanatory variables
  • 16.6 Averaging proportions
  • 16.7 Summary of modelling with proportion count data
  • 16.8 Analysis of covariance with binomial data
  • 16.9 Converting complex contingency tables to proportions
  • 16.9.1 Analysing Schoener’s lizards as proportion data
  • 17 Binary Response Variables
  • 17.1 Incidence functions
  • 17.2 Graphical tests of the fit of the logistic to data
  • 17.3 ANCOVA with a binary response variable
  • 17.4 Binary response with pseudoreplication
  • 18 Generalized Additive Models
  • 18.1 Non-parametric smoothers
  • 18.2 Generalized additive models
  • 18.2.1 Technical aspects
  • 18.3 An example with strongly humped data
  • 18.4 Generalized additive models with binary data
  • 18.5 Three-dimensional graphic output from gam
  • 19 Mixed-Effects Models
  • 19.1 Replication and pseudoreplication
  • 19.2 The lme and lmer functions
  • 19.2.1 lme
  • 19.2.2 lmer
  • 19.3 Best linear unbiased predictors
  • 19.4 Designed experiments with different spatial scales: Split plots
  • 19.5 Hierarchical sampling and variance components analysis
  • 19.6 Mixed-effects models with temporal pseudoreplication
  • 19.7 Time series analysis in mixed-effects models
  • 19.8 Random effects in designed experiments
  • 19.9 Regression in mixed-effects models
  • 19.10 Generalized linear mixed models
  • 19.10.1 Hierarchically structured count data
  • 20 Non-Linear Regression
  • 20.1 Comparing Michaelis–Menten and asymptotic exponential
  • 20.2 Generalized additive models
  • 20.3 Grouped data for non-linear estimation
  • 20.4 Non-linear time series models (temporal pseudoreplication)
  • 20.5 Self-starting functions
  • 20.5.1 Self-starting Michaelis–Menten model
  • 20.5.2 Self-starting asymptotic exponential model
  • 20.5.3 Self-starting logistic
  • 20.5.4 Self-starting four-parameter logistic
  • 20.5.5 Self-starting Weibull growth function
  • 20.5.6 Self-starting first-order compartment function
  • 20.6 Bootstrapping a family of non-linear regressions
  • 21 Meta-Analysis
  • 21.1 Effect size
  • 21.2 Weights
  • 21.3 Fixed versus random effects
  • 21.3.1 Fixed-effect meta-analysis of scaled differences
  • 21.3.2 Random effects with a scaled mean difference
  • 21.4 Random-effects meta-analysis of binary data
  • 22 Bayesian Statistics
  • 22.1 Background
  • 22.2 A continuous response variable
  • 22.3 Normal prior and normal likelihood
  • 22.4 Priors
  • 22.4.1 Conjugate priors
  • 22.5 Bayesian statistics for realistically complicated models
  • 22.6 Practical considerations
  • 22.7 Writing BUGS models
  • 22.8 Packages in R for carrying out Bayesian analysis
  • 22.9 Installing JAGS on your computer
  • 22.10 Running JAGS in R
  • 22.11 MCMC for a simple linear regression
  • 22.12 MCMC for a model with temporal pseudoreplication
  • 22.13 MCMC for a model with binomial errors
  • 23 Tree Models
  • 23.1 Background
  • 23.2 Regression trees
  • 23.3 Using rpart to fit tree models
  • 23.4 Tree models as regressions
  • 23.5 Model simplification
  • 23.6 Classification trees with categorical explanatory variables
  • 23.7 Classification trees for replicated data
  • 23.8 Testing for the existence of humps
  • 24 Time Series Analysis
  • 24.1 Nicholson’s blowflies
  • 24.2 Moving average
  • 24.3 Seasonal data
  • 24.3.1 Pattern in the monthly means
  • 24.4 Built-in time series functions
  • 24.5 Decompositions
  • 24.6 Testing for a trend in the time series
  • 24.7 Spectral analysis
  • 24.8 Multiple time series
  • 24.9 Simulated time series
  • 24.10 Time series models
  • 25 Multivariate Statistics
  • 25.1 Principal components analysis
  • 25.2 Factor analysis
  • 25.3 Cluster analysis
  • 25.3.1 Partitioning
  • 25.3.2 Taxonomic use of kmeans
  • 25.4 Hierarchical cluster analysis
  • 25.5 Discriminant analysis
  • 25.6 Neural networks
  • 26 Spatial Statistics
  • 26.1 Point processes
  • 26.1.1 Random points in a circle
  • 26.2 Nearest neighbours
  • 26.2.1 Tessellation
  • 26.3 Tests for spatial randomness
  • 26.3.1 Ripley’s K
  • 26.3.2 Quadrat-based methods
  • 26.3.3 Aggregated pattern and quadrat count data
  • 26.3.4 Counting things on maps
  • 26.4 Packages for spatial statistics
  • 26.4.1 The spatstat package
  • 26.4.2 The spdep package
  • 26.4.3 Polygon lists
  • 26.5 Geostatistical data
  • 26.6 Regression models with spatially correlated errors: Generalized least squares
  • 26.7 Creating a dot-distribution map from a relational database
  • 27 Survival Analysis
  • 27.1 A Monte Carlo experiment
  • 27.2 Background
  • 27.3 The survivor function
  • 27.4 The density function
  • 27.5 The hazard function
  • 27.6 The exponential distribution
  • 27.6.1 Density function
  • 27.6.2 Survivor function
  • 27.6.3 Hazard function
  • 27.7 Kaplan–Meier survival distributions
  • 27.8 Age-specific hazard models
  • 27.9 Survival analysis in R
  • 27.9.1 Parametric models
  • 27.9.2 Cox proportional hazards model
  • 27.9.3 Cox’s proportional hazard or a parametric model?
  • 27.10 Parametric analysis
  • 27.11 Cox’s proportional hazards
  • 27.12 Models with censoring
  • 27.12.1 Parametric models
  • 27.12.2 Comparing coxph and survreg survival analysis
  • 28 Simulation Models
  • 28.1 Temporal dynamics: Chaotic dynamics in population size
  • 28.1.1 Investigating the route to chaos
  • 28.2 Temporal and spatial dynamics: A simulated random walk in two dimensions
  • 28.3 Spatial simulation models
  • 28.3.1 Metapopulation dynamics
  • 28.3.2 Coexistence resulting from spatially explicit (local) density dependence
  • 28.4 Pattern generation resulting from dynamic interactions
  • 29 Changing the Look of Graphics
  • 29.1 Graphs for publication
  • 29.2 Colour
  • 29.2.1 Palettes for groups of colours
  • 29.2.2 The RColorBrewer package
  • 29.2.3 Coloured plotting symbols with contrasting margins
  • 29.2.4 Colour in legends
  • 29.2.5 Background colours
  • 29.2.6 Foreground colours
  • 29.2.7 Different colours and font styles for different parts of the graph
  • 29.2.8 Full control of colours in plots
  • 29.3 Cross-hatching
  • 29.4 Grey scale
  • 29.5 Coloured convex hulls and other polygons
  • 29.6 Logarithmic axes
  • 29.7 Different font families for text
  • 29.8 Mathematical and other symbols on plots
  • 29.9 Phase planes
  • 29.10 Fat arrows
  • 29.11 Three-dimensional plots
  • 29.12 Complex 3D plots with wireframe
  • 29.13 An alphabetical tour of the graphics parameters
  • 29.13.1 Text justification, adj
  • 29.13.2 Annotation of graphs, ann
  • 29.13.3 Delay moving on to the next in a series of plots, ask
  • 29.13.4 Control over the axes, axis
  • 29.13.5 Background colour for plots, bg
  • 29.13.6 Boxes around plots, bty
  • 29.13.7 Size of plotting symbols using the character expansion function, cex
  • 29.13.8 Changing the shape of the plotting region, plt
  • 29.13.9 Locating multiple graphs in non-standard layouts using fig
  • 29.13.10 Two graphs with a common x scale but different y scales using fig
  • 29.13.11 The layout function
  • 29.13.12 Creating and controlling multiple screens on a single device
  • 29.13.13 Orientation of numbers on the tick marks, las
  • 29.13.14 Shapes for the ends and joins of lines, lend and ljoin
  • 29.13.15 Line types, lty
  • 29.13.16 Line widths, lwd
  • 29.13.17 Several graphs on the same page, mfrow and mfcol
  • 29.13.18 Margins around the plotting area, mar
  • 29.13.19 Plotting more than one graph on the same axes, new
  • 29.13.20 Two graphs on the same plot with different scales for their y axes
  • 29.13.21 Outer margins, oma
  • 29.13.22 Packing graphs closer together
  • 29.13.23 Square plotting region, pty
  • 29.13.24 Character rotation, srt
  • 29.13.25 Rotating the axis labels
  • 29.13.26 Tick marks on the axes
  • 29.13.27 Axis styles
  • 29.14 Trellis graphics
  • 29.14.1 Panel box-and-whisker plots
  • 29.14.2 Panel scatterplots
  • 29.14.3 Panel barplots
  • 29.14.4 Panels for conditioning plots
  • 29.14.5 Panel histograms
  • 29.14.6 Effect sizes
  • 29.14.7 More panel functions
  • References and Further Reading
  • Index


