
Applied Statistics



Theory and Problem Solutions with R
1st edition

by: Dieter Rasch, Rob Verdooren, Jürgen Pilz

86,99 €

Publisher: Wiley
Format: EPUB
Published: 14.08.2019
ISBN/EAN: 9781119551546
Language: English
Number of pages: 512

DRM-protected eBook; to read it you need e.g. Adobe Digital Editions and an Adobe ID.

Description

<p><b>Instructs readers on how to use methods of statistics and experimental design with R software</b></p> <p>Applied statistics covers both the theory and the application of modern statistical and mathematical modelling techniques to applied problems in industry, public services, commerce, and research. It proceeds from a strong theoretical background but is practically oriented, developing the reader's ability to tackle new and non-standard problems confidently. Taking a practical approach, this user-friendly guide teaches readers how to use methods of statistics and experimental design without going deep into the theory.</p> <p><i>Applied Statistics: Theory and Problem Solutions with R</i> includes chapters that cover sampling procedures with the R package, analysis of variance, point estimation, and more. It builds on the theoretical background of Rasch and Schott's <i>Mathematical Statistics</i>, taking the lessons learned there a step further by adding instructions on how to employ the methods using R. It also contains two important chapters on topics not treated in that theoretical background: Generalised Linear Models and Spatial Statistics.</p> <ul> <li>Offers a practical rather than theoretical approach to the subject of applied statistics</li> <li>Provides a pre-experimental as well as post-experimental approach to applied statistics</li> <li>Features classroom-tested material</li> <li>Applicable to a wide range of people working in experimental design and all empirical sciences</li> <li>Includes 300 different procedures with R and examples with R programs for the analysis and for determining minimal experimental sizes</li> </ul> <p><i>Applied Statistics: Theory and Problem Solutions with R</i> will appeal to experimenters, statisticians, mathematicians, and all scientists using statistical procedures in the natural sciences, medicine, and psychology, amongst others.</p>
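<p>To give a flavour of the pre-experimental, R-based approach the description refers to, here is a minimal sketch (not taken from the book itself) of determining a minimal experimental size with base R's <code>power.t.test</code>; the effect size of half a standard deviation and the 90% power target are illustrative assumptions:</p>

```r
# Minimal sample size per group for a two-sample t-test:
# detect a mean difference of delta = 1 when sd = 2
# (standardised effect 0.5) with 90% power at alpha = 0.05.
# power.t.test is part of the base R stats package.
res <- power.t.test(delta = 1, sd = 2, sig.level = 0.05,
                    power = 0.9, type = "two.sample",
                    alternative = "two.sided")
n_per_group <- ceiling(res$n)  # round up to whole observations
print(n_per_group)
```

<p>Leaving exactly one of the arguments <code>n</code>, <code>delta</code>, <code>sd</code>, <code>sig.level</code>, or <code>power</code> unspecified tells <code>power.t.test</code> which quantity to solve for, so the same call can also compute achieved power for a given <code>n</code>.</p>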
<p>Preface xi</p> <p><b>1 The </b><b>R-Package, Sampling Procedures, and Random Variables 1</b></p> <p>1.1 Introduction 1</p> <p>1.2 The Statistical Software Package R 1</p> <p>1.3 Sampling Procedures and Random Variables 4</p> <p>References 10</p> <p><b>2 Point Estimation </b><b>11</b></p> <p>2.1 Introduction 11</p> <p>2.2 Estimating Location Parameters 12</p> <p>2.2.1 Maximum Likelihood Estimation of Location Parameters 17</p> <p>2.2.2 Estimating Expectations from Censored Samples and Truncated Distributions 20</p> <p>2.2.3 Estimating Location Parameters of Finite Populations 23</p> <p>2.3 Estimating Scale Parameters 24</p> <p>2.4 Estimating Higher Moments 27</p> <p>2.5 Contingency Tables 29</p> <p>2.5.1 Models of Two-Dimensional Contingency Tables 29</p> <p>2.5.1.1 Model I 29</p> <p>2.5.1.2 Model II 29</p> <p>2.5.1.3 Model III 30</p> <p>2.5.2 Association Coefficients for 2 ×2 Tables 30</p> <p>References 38</p> <p><b>3 Testing Hypotheses – One- and Two-Sample Problems </b><b>39</b></p> <p>3.1 Introduction 39</p> <p>3.2 The One-Sample Problem 41</p> <p>3.2.1 Tests on an Expectation 41</p> <p>3.2.1.1 Testing the Hypothesis on the Expectation of a Normal Distribution with Known Variance 41</p> <p>3.2.1.2 Testing the Hypothesis on the Expectation of a Normal Distribution with Unknown Variance 47</p> <p>3.2.2 Test on the Median 51</p> <p>3.2.3 Test on the Variance of a Normal Distribution 54</p> <p>3.2.4 Test on a Probability 56</p> <p>3.2.5 Paired Comparisons 57</p> <p>3.2.6 Sequential Tests 59</p> <p>3.3 The Two-Sample Problem 63</p> <p>3.3.1 Tests on Two Expectations 63</p> <p>3.3.1.1 The Two-Sample <i>t</i>-Test 63</p> <p>3.3.1.2 The Welch Test 66</p> <p>3.3.1.3 The Wilcoxon Rank Sum Test 70</p> <p>3.3.1.4 Definition of Robustness and Results of Comparing Tests by Simulation 72</p> <p>3.3.1.5 Sequential Two-Sample Tests 74</p> <p>3.3.2 Test on Two Medians 76</p> <p>3.3.2.1 Rationale 77</p> <p>3.3.3 Test on Two Probabilities 78</p> <p>3.3.4 Tests on Two Variances 
79</p> <p>References 81</p> <p><b>4 Confidence Estimations – One- and Two-Sample Problems </b><b>83</b></p> <p>4.1 Introduction 83</p> <p>4.2 The One-Sample Case 84</p> <p>4.2.1 A Confidence Interval for the Expectation of a Normal Distribution 84</p> <p>4.2.2 A Confidence Interval for the Variance of a Normal Distribution 91</p> <p>4.2.3 A Confidence Interval for a Probability 93</p> <p>4.3 The Two-Sample Case 96</p> <p>4.3.1 A Confidence Interval for the Difference of Two Expectations – Equal Variances 96</p> <p>4.3.2 A Confidence Interval for the Difference of Two Expectations – Unequal Variances 98</p> <p>4.3.3 A Confidence Interval for the Difference of Two Probabilities 100</p> <p>References 104</p> <p><b>5 Analysis of Variance (ANOVA) – Fixed Effects Models </b><b>105</b></p> <p>5.1 Introduction 105</p> <p>5.1.1 Remarks about Program Packages 106</p> <p>5.2 Planning the Size of an Experiment 106</p> <p>5.3 One-Way Analysis of Variance 108</p> <p>5.3.1 Analysing Observations 109</p> <p>5.3.2 Determination of the Size of an Experiment 112</p> <p>5.4 Two-Way Analysis of Variance 115</p> <p>5.4.1 Cross-Classification (<i>A</i>× <i>B</i>) 115</p> <p>5.4.1.1 Parameter Estimation 117</p> <p>5.4.1.2 Testing Hypotheses 119</p> <p>5.4.2 Nested Classification (<i>A</i><i>≻B</i>) 131</p> <p>5.5 Three-Way Classification 134</p> <p>5.5.1 Complete Cross-Classification (<i>A</i>×<i>B </i>×<i>C</i>) 135</p> <p>5.5.2 Nested Classification (<i>C </i><i>≺B</i><i>≺A</i>) 144</p> <p>5.5.3 Mixed Classifications 147</p> <p>5.5.3.1 Cross-Classification between Two Factors where One of Them Is Sub-Ordinated to a Third Factor ((<i>B</i><i>≺A</i>)x<i>C</i>) 148</p> <p>5.5.3.2 Cross-Classification of Two Factors, in which a Third Factor is Nested (<i>C</i><i>≺</i>(<i>A</i>× <i>B</i>)) 153</p> <p>References 157</p> <p><b>6 Analysis of Variance –Models with Random Effects </b><b>159</b></p> <p>6.1 Introduction 159</p> <p>6.2 One-Way Classification 159</p> <p>6.2.1 Estimation of the 
Variance Components 160</p> <p>6.2.1.1 ANOVA Method 160</p> <p>6.2.1.2 Maximum Likelihood Method 164</p> <p>6.2.1.3 <i>REML </i>– Estimation 166</p> <p>6.2.2 Tests of Hypotheses and Confidence Intervals 169</p> <p>6.2.3 Expectation and Variances of the ANOVA Estimators 174</p> <p>6.3 Two-Way Classification 176</p> <p>6.3.1 Two-Way Cross Classification 176</p> <p>6.3.2 Two-Way Nested Classification 182</p> <p>6.4 Three-Way Classification 186</p> <p>6.4.1 Three-Way Cross-Classification with Equal Sub-Class Numbers 186</p> <p>6.4.2 Three-Way Nested Classification 192</p> <p>6.4.3 Three-Way Mixed Classifications 195</p> <p>6.4.3.1 Cross-Classification Between Two Factors Where One of Them is Sub-Ordinated to a Third Factor ((<i>B</i><i>≺A</i>)×<i>C</i>) 195</p> <p>6.4.3.2 Cross-Classification of Two Factors in Which a Third Factor is Nested (<i>C</i><i>≺</i>(<i>A</i>×<i>B</i>)) 197</p> <p>References 199</p> <p><b>7 Analysis of Variance –Mixed Models </b><b>201</b></p> <p>7.1 Introduction 201</p> <p>7.2 Two-Way Classification 201</p> <p>7.2.1 Balanced Two-Way Cross-Classification 201</p> <p>7.2.2 Two-Way Nested Classification 214</p> <p>7.3 Three-Way Layout 223</p> <p>7.3.1 Three-Way Analysis of Variance – Cross-Classification <i>A </i>× <i>B </i>× <i>C </i>223</p> <p>7.3.2 Three-Way Analysis of Variance – Nested Classification <i>A</i><i>≻B</i><i>≻C </i>230</p> <p>7.3.2.1 Three-Way Analysis of Variance – Nested Classification – Model III – Balanced Case 230</p> <p>7.3.2.2 Three-Way Analysis of Variance – Nested Classification – Model IV – Balanced Case 232</p> <p>7.3.2.3 Three-Way Analysis of Variance – Nested Classification – Model V – Balanced Case 234</p> <p>7.3.2.4 Three-Way Analysis of Variance – Nested Classification – Model VI – Balanced Case 236</p> <p>7.3.2.5 Three-Way Analysis of Variance – Nested Classification – Model VII – Balanced Case 237</p> <p>7.3.2.6 Three-Way Analysis of Variance – Nested Classification – Model VIII – Balanced Case 238</p> <p>7.3.3 
Three-Way Analysis of Variance – Mixed Classification – (<i>A</i>× <i>B</i>)<i>≻C </i>239</p> <p>7.3.3.1 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i>× <i>B</i>)<i>≻C </i>Model III 239</p> <p>7.3.3.2 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i>× <i>B</i>)<i>≻C </i>Model IV 242</p> <p>7.3.3.3 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i>× <i>B</i>)<i>≻C </i>Model V 243</p> <p>7.3.3.4 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i>× <i>B</i>)<i>≻C </i>Model VI 245</p> <p>7.3.4 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i><i>≻B</i>) ×<i>C </i>247</p> <p>7.3.4.1 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i><i>≻B</i>) ×<i>C </i>Model III 247</p> <p>7.3.4.2 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i><i>≻B</i>) ×<i>C </i>Model IV 249</p> <p>7.3.4.3 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i><i>≻B</i>) ×<i>C </i>Model V 251</p> <p>7.3.4.4 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i><i>≻B</i>) ×<i>C </i>Model VI 253</p> <p>7.3.4.5 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i><i>≻B</i>) ×<i>C </i>model VII 254</p> <p>7.3.4.6 Three-Way Analysis of Variance – Mixed Classification – (<i>A</i><i>≻B</i>) ×<i>C </i>Model VIII 255</p> <p>References 256</p> <p><b>8 Regression Analysis </b><b>257</b></p> <p>8.1 Introduction 257</p> <p>8.2 Regression with Non-Random Regressors – Model I of Regression 262</p> <p>8.2.1 Linear and Quasilinear Regression 262</p> <p>8.2.1.1 Parameter Estimation 263</p> <p>8.2.1.2 Confidence Intervals and Hypotheses Testing 274</p> <p>8.2.2 Intrinsically Non-Linear Regression 282</p> <p>8.2.2.1 The Asymptotic Distribution of the Least Squares Estimators 283</p> <p>8.2.2.2 The Michaelis–Menten Regression 285</p> <p>8.2.2.3 Exponential Regression 290</p> <p>8.2.2.4 The Logistic Regression 298</p> <p>8.2.2.5 The Bertalanffy Function 306</p> <p>8.2.2.6 The 
Gompertz Function 312</p> <p>8.2.3 Optimal Experimental Designs 315</p> <p>8.2.3.1 Simple Linear and Quasilinear Regression 316</p> <p>8.2.3.2 Intrinsically Non-linear Regression 317</p> <p>8.2.3.3 The Michaelis-Menten Regression 319</p> <p>8.2.3.4 Exponential Regression 319</p> <p>8.2.3.5 The Logistic Regression 320</p> <p>8.2.3.6 The Bertalanffy Function 321</p> <p>8.2.3.7 The Gompertz Function 321</p> <p>8.3 Models with Random Regressors 322</p> <p>8.3.1 The Simple Linear Case 322</p> <p>8.3.2 The Multiple Linear Case and the Quasilinear Case 330</p> <p>8.3.2.1 Hypotheses Testing - General 333</p> <p>8.3.2.2 Confidence Estimation 333</p> <p>8.3.3 The Allometric Model 334</p> <p>8.3.4 Experimental Designs 335</p> <p>References 335</p> <p><b>9 Analysis of Covariance (ANCOVA) </b><b>339</b></p> <p>9.1 Introduction 339</p> <p>9.2 Completely Randomised Design with Covariate 340</p> <p>9.2.1 Balanced Completely Randomised Design 340</p> <p>9.2.2 Unbalanced Completely Randomised Design 350</p> <p>9.3 Randomised Complete Block Design with Covariate 358</p> <p>9.4 Concluding Remarks 365</p> <p>References 366</p> <p><b>10 Multiple Decision Problems </b><b>367</b></p> <p>10.1 Introduction 367</p> <p>10.2 Selection Procedures 367</p> <p>10.2.1 The Indifference Zone Formulation for Selecting Expectations 368</p> <p>10.2.1.1 Indifference Zone Selection, 𝜎<sup>2</sup> Known 368</p> <p>10.2.1.2 Indifference Zone Selection, 𝜎<sup>2</sup> Unknown 371</p> <p>10.3 The Subset Selection Procedure for Expectations 371</p> <p>10.4 Optimal Combination of the Indifference Zone and the Subset Selection Procedure 372</p> <p>10.5 Selection of the Normal Distribution with the Smallest Variance 375</p> <p>10.6 Multiple Comparisons 375</p> <p>10.6.1 The Solution of MC Problem 10.1 377</p> <p>10.6.1.1 The <i>F</i>-test for MC Problem 10.1 377</p> <p>10.6.1.2 Scheffé’s Method for MC Problem 10.1 378</p> <p>10.6.1.3 Bonferroni’s Method for MC Problem 10.1 379</p> <p>10.6.1.4 Tukey’s Method for MC 
Problem 10.1 for <i>n<sub>i</sub> </i>= <i>n </i>382</p> <p>10.6.1.5 Generalised Tukey’s Method for MC Problem 10.1 for <i>n<sub>i</sub> </i>≠<i>n </i>383</p> <p>10.6.2 The Solution of MC Problem 10.2 – the Multiple t-Test 384</p> <p>10.6.3 The Solution of MC Problem 10.3 – Pairwise and Simultaneous Comparisons with a Control 385</p> <p>10.6.3.1 Pairwise Comparisons – The Multiple t-Test 385</p> <p>10.6.3.2 Simultaneous Comparisons –The Dunnett Method 387</p> <p>References 390</p> <p><b>11 Generalised Linear Models </b><b>393</b></p> <p>11.1 Introduction 393</p> <p>11.2 Exponential Families of Distributions 394</p> <p>11.3 Generalised Linear Models – An Overview 396</p> <p>11.4 Analysis – Fitting a GLM – The Linear Case 398</p> <p>11.5 Binary Logistic Regression 399</p> <p>11.5.1 Analysis 400</p> <p>11.5.2 Overdispersion 408</p> <p>11.6 Poisson Regression 411</p> <p>11.6.1 Analysis 411</p> <p>11.6.2 Overdispersion 417</p> <p>11.7 The Gamma Regression 417</p> <p>11.8 GLM for Gamma Regression 418</p> <p>11.9 GLM for the Multinomial Distribution 425</p> <p>References 428</p> <p><b>12 Spatial Statistics </b><b>429</b></p> <p>12.1 Introduction 429</p> <p>12.2 Geostatistics 431</p> <p>12.2.1 Semi-variogram Function 432</p> <p>12.2.2 Semi-variogram Parameter Estimation 439</p> <p>12.2.3 Kriging 440</p> <p>12.2.4 <i>Trans</i>-Gaussian Kriging 446</p> <p>12.3 Special Problems and Outlook 450</p> <p>12.3.1 Generalised Linear Models in Geostatistics 450</p> <p>12.3.2 Copula Based Geostatistical Prediction 451</p> <p>References 451</p> <p>Appendix A List of Problems 455</p> <p>Appendix B Symbolism 483</p> <p>Appendix C Abbreviations 485</p> <p>Appendix D Probability and Density Functions 487</p> <p>Index 489</p>
<p><b>DIETER RASCH, P<small>H</small>D,</b> is scientific advisor at the Center for Design of Experiments at the University of Natural Resources and Life Sciences, Vienna, Austria. He is also an elected member of the International Statistical Institute (ISI) and the Institute of Mathematical Statistics (IMS). <p><b>ROB VERDOOREN, P<small>H</small>D,</b> is a Consultant Statistician at Danone Nutricia Research, Utrecht, The Netherlands. <p><b>JÜRGEN PILZ, P<small>H</small>D,</b> is the Head of the Department of Applied Statistics at AAU Klagenfurt, Austria. He is also an elected member of the International Statistical Institute (ISI) and the Institute of Mathematical Statistics (IMS).

You may also be interested in these products:

Modeling Uncertainty
by: Moshe Dror, Pierre L'Ecuyer, Ferenc Szidarovszky
PDF ebook
236,81 €
Level Crossing Methods in Stochastic Models
by: Percy H. Brill
PDF ebook
203,29 €
Continuous Bivariate Distributions
by: N. Balakrishnan, Chin Diew Lai
PDF ebook
128,39 €