Introduction to Probability and Statistics for Engineers and Scientists

Author: Sheldon M. Ross

Publisher: Elsevier S & T

Format: Page Fidelity

Print ISBN: 9780128243466

Edition: 6

Publication year: 2021

Price: 12,290 ISK


Table of contents

  • Introduction to Probability and Statistics for Engineers and Scientists
  • Copyright
  • Contents
  • Preface
  • Organization and coverage
  • Acknowledgments
  • 1 Introduction to statistics
  • 1.1 Introduction
  • 1.2 Data collection and descriptive statistics
  • 1.3 Inferential statistics and probability models
  • 1.4 Populations and samples
  • 1.5 A brief history of statistics
  • Problems
  • 2 Descriptive statistics
  • 2.1 Introduction
  • 2.2 Describing data sets
  • 2.2.1 Frequency tables and graphs
  • 2.2.2 Relative frequency tables and graphs
  • 2.2.3 Grouped data, histograms, ogives, and stem and leaf plots
  • 2.3 Summarizing data sets
  • 2.3.1 Sample mean, sample median, and sample mode
  • Germ-Free Mice
  • Conventional Mice
  • 2.3.2 Sample variance and sample standard deviation
  • An algebraic identity
  • 2.3.3 Sample percentiles and box plots
  • 2.4 Chebyshev’s inequality
  • Chebyshev’s inequality
  • The one-sided Chebyshev inequality
  • 2.5 Normal data sets
  • The empirical rule
  • 2.6 Paired data sets and the sample correlation coefficient
  • Properties of r
  • 2.7 The Lorenz curve and Gini index
  • 2.8 Using R
  • Problems
  • 3 Elements of probability
  • 3.1 Introduction
  • 3.2 Sample space and events
  • 3.3 Venn diagrams and the algebra of events
  • 3.4 Axioms of probability
  • 3.5 Sample spaces having equally likely outcomes
  • Basic principle of counting
  • Proof of the Basic Principle
  • Notation and terminology
  • 3.6 Conditional probability
  • 3.7 Bayes’ formula
  • 3.8 Independent events
  • Problems
  • 4 Random variables and expectation
  • 4.1 Random variables
  • 4.2 Types of random variables
  • 4.3 Jointly distributed random variables
  • 4.3.1 Independent random variables
  • 4.3.2 Conditional distributions
  • 4.4 Expectation
  • Remarks
  • 4.5 Properties of the expected value
  • 4.5.1 Expected value of sums of random variables
  • 4.6 Variance
  • Remark
  • Remark
  • 4.7 Covariance and variance of sums of random variables
  • 4.8 Moment generating functions
  • 4.9 Chebyshev’s inequality and the weak law of large numbers
  • Problems
  • 5 Special random variables
  • 5.1 The Bernoulli and binomial random variables
  • 5.1.1 Using R to calculate binomial probabilities
  • 5.2 The Poisson random variable
  • 5.2.1 Using R to calculate Poisson probabilities
  • 5.3 The hypergeometric random variable
  • 5.4 The uniform random variable
  • 5.5 Normal random variables
  • 5.6 Exponential random variables
  • 5.6.1 The Poisson process
  • 5.6.2 The Pareto distribution
  • 5.7 The gamma distribution
  • 5.8 Distributions arising from the normal
  • 5.8.1 The chi-square distribution
  • 5.8.1.1 The relation between chi-square and gamma random variables
  • 5.8.2 The t-distribution
  • 5.8.3 The F-distribution
  • 5.9 The logistic distribution
  • 5.10 Distributions in R
  • Problems
  • 6 Distributions of sampling statistics
  • 6.1 Introduction
  • 6.2 The sample mean
  • 6.3 The central limit theorem
  • 6.3.1 Approximate distribution of the sample mean
  • 6.3.2 How large a sample is needed?
  • 6.4 The sample variance
  • 6.5 Sampling distributions from a normal population
  • 6.5.1 Distribution of the sample mean
  • 6.5.2 Joint distribution of X̄ and S²
  • 6.6 Sampling from a finite population
  • Remark
  • Problems
  • 7 Parameter estimation
  • 7.1 Introduction
  • 7.2 Maximum likelihood estimators
  • 7.2.1 Estimating life distributions
  • 7.3 Interval estimates
  • Remark
  • 7.3.1 Confidence interval for a normal mean when the variance is unknown
  • Remarks
  • 7.3.2 Prediction intervals
  • 7.3.3 Confidence intervals for the variance of a normal distribution
  • 7.4 Estimating the difference in means of two normal populations
  • Remark
  • 7.5 Approximate confidence interval for the mean of a Bernoulli random variable
  • Remark
  • 7.6 Confidence interval of the mean of the exponential distribution
  • 7.7 Evaluating a point estimator
  • 7.8 The Bayes estimator
  • Remark
  • Remark: On choosing a normal prior
  • Problems
  • 8 Hypothesis testing
  • 8.1 Introduction
  • 8.2 Significance levels
  • 8.3 Tests concerning the mean of a normal population
  • 8.3.1 Case of known variance
  • Remark
  • 8.3.1.1 One-sided tests
  • Remark
  • Remarks
  • 8.3.2 Case of unknown variance: the t-test
  • 8.4 Testing the equality of means of two normal populations
  • 8.4.1 Case of known variances
  • 8.4.2 Case of unknown variances
  • 8.4.3 Case of unknown and unequal variances
  • 8.4.4 The paired t-test
  • 8.5 Hypothesis tests concerning the variance of a normal population
  • 8.5.1 Testing for the equality of variances of two normal populations
  • 8.6 Hypothesis tests in Bernoulli populations
  • 8.6.1 Testing the equality of parameters in two Bernoulli populations
  • 8.7 Tests concerning the mean of a Poisson distribution
  • 8.7.1 Testing the relationship between two Poisson parameters
  • Problems
  • 9 Regression
  • 9.1 Introduction
  • 9.2 Least squares estimators of the regression parameters
  • 9.3 Distribution of the estimators
  • Remarks
  • Notation
  • Computational identity for SSR
  • 9.4 Statistical inferences about the regression parameters
  • 9.4.1 Inferences concerning β
  • Hypothesis test of H0: β = 0
  • Confidence interval for β
  • Remark
  • 9.4.1.1 Regression to the mean
  • 9.4.2 Inferences concerning α
  • Confidence interval estimator of α
  • 9.4.3 Inferences concerning the mean response α + βx0
  • Confidence interval estimator of α + βx0
  • 9.4.4 Prediction interval of a future response
  • Prediction interval for a response at the input level x0
  • Remarks
  • 9.4.5 Summary of distributional results
  • 9.5 The coefficient of determination and the sample correlation coefficient
  • 9.6 Analysis of residuals: assessing the model
  • 9.7 Transforming to linearity
  • 9.8 Weighted least squares
  • Remarks
  • Remarks
  • 9.9 Polynomial regression
  • Remark
  • 9.10 Multiple linear regression
  • Remark
  • Remark
  • 9.10.1 Predicting future responses
  • Confidence interval estimate of E[Y|x] = ∑_{i=0}^{k} x_i β_i (x_0 ≡ 1)
  • Prediction Interval for Y(x)
  • 9.10.2 Dummy variables for categorical data
  • 9.11 Logistic regression models for binary output data
  • Problems
  • 10 Analysis of variance
  • 10.1 Introduction
  • 10.2 An overview
  • 10.3 One-way analysis of variance
  • The sum of squares identity
  • 10.3.1 Using R to do the computations
  • 10.3.2 Multiple comparisons of sample means
  • 10.3.3 One-way analysis of variance with unequal sample sizes
  • Remark
  • 10.4 Two-factor analysis of variance: introduction and parameter estimation
  • 10.5 Two-factor analysis of variance: testing hypotheses
  • 10.6 Two-way analysis of variance with interaction
  • Problems
  • 11 Goodness of fit tests and categorical data analysis
  • 11.1 Introduction
  • 11.2 Goodness of fit tests when all parameters are specified
  • Remarks
  • 11.2.1 Determining the critical region by simulation
  • Remarks
  • 11.3 Goodness of fit tests when some parameters are unspecified
  • 11.4 Tests of independence in contingency tables
  • 11.5 Tests of independence in contingency tables having fixed marginal totals
  • 11.6 The Kolmogorov-Smirnov goodness of fit test for continuous data
  • Problems
  • 12 Nonparametric hypothesis tests
  • 12.1 Introduction
  • 12.2 The sign test
  • 12.3 The signed rank test
  • 12.4 The two-sample problem
  • 12.4.1 Testing the equality of multiple probability distributions
  • 12.5 The runs test for randomness
  • Problems
  • 13 Quality control
  • 13.1 Introduction
  • 13.2 Control charts for average values: the X̄ control chart
  • Remarks
  • 13.2.1 Case of unknown μ and σ
  • Technical remark
  • Remarks
  • 13.3 S-control charts
  • 13.4 Control charts for the fraction defective
  • Remark
  • 13.5 Control charts for number of defects
  • 13.6 Other control charts for detecting changes in the population mean
  • 13.6.1 Moving-average control charts
  • 13.6.2 Exponentially weighted moving-average control charts
  • 13.6.3 Cumulative sum control charts
  • Problems
  • 14 Life testing
  • 14.1 Introduction
  • 14.2 Hazard rate functions
  • Remark on terminology
  • 14.3 The exponential distribution in life testing
  • 14.3.1 Simultaneous testing – stopping at the rth failure
  • Remark
  • 14.3.2 Sequential testing
  • 14.3.3 Simultaneous testing – stopping by a fixed time
  • Remark
  • 14.3.4 The Bayesian approach
  • Remark
  • 14.4 A two-sample problem
  • 14.5 The Weibull distribution in life testing
  • 14.5.1 Parameter estimation by least squares
  • Remarks
  • Problems
  • 15 Simulation, bootstrap statistical methods, and permutation tests
  • 15.1 Introduction
  • 15.2 Random numbers
  • 15.2.1 The Monte Carlo simulation approach
  • 15.3 The bootstrap method
  • 15.4 Permutation tests
  • 15.4.1 Normal approximations in permutation tests
  • 15.4.2 Two-sample permutation tests
  • 15.5 Generating discrete random variables
  • 15.6 Generating continuous random variables
  • 15.6.1 Generating a normal random variable
  • 15.7 Determining the number of simulation runs in a Monte Carlo study
  • Problems
  • 16 Machine learning and big data
  • 16.1 Introduction
  • 16.2 Late flight probabilities
  • 16.3 The naive Bayes approach
  • 16.3.1 A variation of naive Bayes approach
  • 16.4 Distance-based estimators. The k-nearest neighbors rule
  • 16.4.1 A distance-weighted method
  • 16.4.2 Component-weighted distances
  • 16.5 Assessing the approaches
  • 16.6 When characterizing vectors are quantitative
  • 16.6.1 Nearest neighbor rules
  • 16.6.2 Logistic regression
  • 16.7 Choosing the best probability: a bandit problem
  • Remarks
  • Problems
  • Appendix of Tables
  • Index
  • Back Cover

Additional information

Select product: eBook to own