
Discovering Statistics Using SPSS PDF

854 pages · 2011 · 16.9 MB · English

Preview Discovering Statistics Using SPSS

ANDY FIELD
DISCOVERING STATISTICS USING SPSS
THIRD EDITION
(and sex and drugs and rock ’n’ roll)

© Andy Field 2009
First edition published 2000
Second edition published 2005

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted in any form, or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

SAGE Publications Ltd, 1 Oliver’s Yard, 55 City Road, London EC1Y 1SP
SAGE Publications Inc., 2455 Teller Road, Thousand Oaks, California 91320
SAGE Publications India Pvt Ltd, B 1/I 1 Mohan Cooperative Industrial Area, Mathura Road, New Delhi 110 044
SAGE Publications Asia-Pacific Pte Ltd, 33 Pekin Street #02-01, Far East Square, Singapore 048763

Library of Congress Control Number: 2008930166
British Library Cataloguing in Publication data: a catalogue record for this book is available from the British Library
ISBN 978-1-84787-906-6
ISBN 978-1-84787-907-3

Typeset by C&M Digitals (P) Ltd, Chennai, India
Printed by Oriental Press, Dubai
Printed on paper from sustainable resources

CONTENTS

Preface
How to use this book
Acknowledgements
Dedication
Symbols used in this book
Some maths revision

1 Why is my evil lecturer forcing me to learn statistics?
  1.1. What will this chapter tell me?
  1.2. What the hell am I doing here? I don’t belong here
    1.2.1. The research process
  1.3. Initial observation: finding something that needs explaining
  1.4. Generating theories and testing them
  1.5. Data collection 1: what to measure
    1.5.1. Variables
    1.5.2. Measurement error
    1.5.3. Validity and reliability
  1.6. Data collection 2: how to measure
    1.6.1. Correlational research methods
    1.6.2. Experimental research methods
    1.6.3. Randomization
  1.7. Analysing data
    1.7.1. Frequency distributions
    1.7.2. The centre of a distribution
    1.7.3. The dispersion in a distribution
    1.7.4. Using a frequency distribution to go beyond the data
    1.7.5. Fitting statistical models to the data
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s stats quiz
  Further reading
  Interesting real research

2 Everything you ever wanted to know about statistics (well, sort of)
  2.1. What will this chapter tell me?
  2.2. Building statistical models
  2.3. Populations and samples
  2.4. Simple statistical models
    2.4.1. The mean: a very simple statistical model
    2.4.2. Assessing the fit of the mean: sums of squares, variance and standard deviations
    2.4.3. Expressing the mean as a model
  2.5. Going beyond the data
    2.5.1. The standard error
    2.5.2. Confidence intervals
  2.6. Using statistical models to test research questions
    2.6.1. Test statistics
    2.6.2. One- and two-tailed tests
    2.6.3. Type I and Type II errors
    2.6.4. Effect sizes
    2.6.5. Statistical power
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s stats quiz
  Further reading
  Interesting real research

3 The SPSS environment
  3.1. What will this chapter tell me?
  3.2. Versions of SPSS
  3.3. Getting started
  3.4. The data editor
    3.4.1. Entering data into the data editor
    3.4.2. The ‘Variable View’
    3.4.3. Missing values
  3.5. The SPSS viewer
  3.6. The SPSS SmartViewer
  3.7. The syntax window
  3.8. Saving files
  3.9. Retrieving a file
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorials

4 Exploring data with graphs
  4.1. What will this chapter tell me?
  4.2. The art of presenting data
    4.2.1. What makes a good graph?
    4.2.2. Lies, damned lies, and … erm … graphs
  4.3. The SPSS Chart Builder
  4.4. Histograms: a good way to spot obvious problems
  4.5. Boxplots (box–whisker diagrams)
  4.6. Graphing means: bar charts and error bars
    4.6.1. Simple bar charts for independent means
    4.6.2. Clustered bar charts for independent means
    4.6.3. Simple bar charts for related means
    4.6.4. Clustered bar charts for related means
    4.6.5. Clustered bar charts for ‘mixed’ designs
  4.7. Line charts
  4.8. Graphing relationships: the scatterplot
    4.8.1. Simple scatterplot
    4.8.2. Grouped scatterplot
    4.8.3. Simple and grouped 3-D scatterplots
    4.8.4. Matrix scatterplot
    4.8.5. Simple dot plot or density plot
    4.8.6. Drop-line graph
  4.9. Editing graphs
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorial
  Interesting real research

5 Exploring assumptions
  5.1. What will this chapter tell me?
  5.2. What are assumptions?
  5.3. Assumptions of parametric data
  5.4. The assumption of normality
    5.4.1. Oh no, it’s that pesky frequency distribution again: checking normality visually
    5.4.2. Quantifying normality with numbers
    5.4.3. Exploring groups of data
  5.5. Testing whether a distribution is normal
    5.5.1. Doing the Kolmogorov–Smirnov test on SPSS
    5.5.2. Output from the explore procedure
    5.5.3. Reporting the K–S test
  5.6. Testing for homogeneity of variance
    5.6.1. Levene’s test
    5.6.2. Reporting Levene’s test
  5.7. Correcting problems in the data
    5.7.1. Dealing with outliers
    5.7.2. Dealing with non-normality and unequal variances
    5.7.3. Transforming the data using SPSS
    5.7.4. When it all goes horribly wrong
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Online tutorial
  Further reading

6 Correlation
  6.1. What will this chapter tell me?
  6.2. Looking at relationships
  6.3. How do we measure relationships?
    6.3.1. A detour into the murky world of covariance
    6.3.2. Standardization and the correlation coefficient
    6.3.3. The significance of the correlation coefficient
    6.3.4. Confidence intervals for r
    6.3.5. A word of warning about interpretation: causality
  6.4. Data entry for correlation analysis using SPSS
  6.5. Bivariate correlation
    6.5.1. General procedure for running correlations on SPSS
    6.5.2. Pearson’s correlation coefficient
    6.5.3. Spearman’s correlation coefficient
    6.5.4. Kendall’s tau (non-parametric)
    6.5.5. Biserial and point–biserial correlations
  6.6. Partial correlation
    6.6.1. The theory behind part and partial correlation
    6.6.2. Partial correlation using SPSS
    6.6.3. Semi-partial (or part) correlations
  6.7. Comparing correlations
    6.7.1. Comparing independent rs
    6.7.2. Comparing dependent rs
  6.8. Calculating the effect size
  6.9. How to report correlation coefficients
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorial
  Interesting real research

7 Regression
  7.1. What will this chapter tell me?
  7.2. An introduction to regression
    7.2.1. Some important information about straight lines
    7.2.2. The method of least squares
    7.2.3. Assessing the goodness of fit: sums of squares, R and R²
    7.2.4. Assessing individual predictors
  7.3. Doing simple regression on SPSS
  7.4. Interpreting a simple regression
    7.4.1. Overall fit of the model
    7.4.2. Model parameters
    7.4.3. Using the model
  7.5. Multiple regression: the basics
    7.5.1. An example of a multiple regression model
    7.5.2. Sums of squares, R and R²
    7.5.3. Methods of regression
  7.6. How accurate is my regression model?
    7.6.1. Assessing the regression model I: diagnostics
    7.6.2. Assessing the regression model II: generalization
  7.7. How to do multiple regression using SPSS
    7.7.1. Some things to think about before the analysis
    7.7.2. Main options
    7.7.3. Statistics
    7.7.4. Regression plots
    7.7.5. Saving regression diagnostics
    7.7.6. Further options
  7.8. Interpreting multiple regression
    7.8.1. Descriptives
    7.8.2. Summary of model
    7.8.3. Model parameters
    7.8.4. Excluded variables
    7.8.5. Assessing the assumption of no multicollinearity
    7.8.6. Casewise diagnostics
    7.8.7. Checking assumptions
  7.9. What if I violate an assumption?
  7.10. How to report multiple regression
  7.11. Categorical predictors and multiple regression
    7.11.1. Dummy coding
    7.11.2. SPSS output for dummy variables
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorial
  Interesting real research

8 Logistic regression
  8.1. What will this chapter tell me?
  8.2. Background to logistic regression
  8.3. What are the principles behind logistic regression?
    8.3.1. Assessing the model: the log-likelihood statistic
    8.3.2. Assessing the model: R and R²
    8.3.3. Assessing the contribution of predictors: the Wald statistic
    8.3.4. The odds ratio: Exp(B)
    8.3.5. Methods of logistic regression
  8.4. Assumptions and things that can go wrong
    8.4.1. Assumptions
    8.4.2. Incomplete information from the predictors
    8.4.3. Complete separation
    8.4.4. Overdispersion
  8.5. Binary logistic regression: an example that will make you feel eel
    8.5.1. The main analysis
    8.5.2. Method of regression
    8.5.3. Categorical predictors
    8.5.4. Obtaining residuals
    8.5.5. Further options
  8.6. Interpreting logistic regression
    8.6.1. The initial model
    8.6.2. Step 1: intervention
    8.6.3. Listing predicted probabilities
    8.6.4. Interpreting residuals
    8.6.5. Calculating the effect size
  8.7. How to report logistic regression
  8.8. Testing assumptions: another example
    8.8.1. Testing for linearity of the logit
    8.8.2. Testing for multicollinearity
  8.9. Predicting several categories: multinomial logistic regression
    8.9.1. Running multinomial logistic regression in SPSS
    8.9.2. Statistics
    8.9.3. Other options
    8.9.4. Interpreting the multinomial logistic regression output
    8.9.5. Reporting the results
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorial
  Interesting real research

9 Comparing two means
  9.1. What will this chapter tell me?
  9.2. Looking at differences
    9.2.1. A problem with error bar graphs of repeated-measures designs
    9.2.2. Step 1: calculate the mean for each participant
    9.2.3. Step 2: calculate the grand mean
    9.2.4. Step 3: calculate the adjustment factor
    9.2.5. Step 4: create adjusted values for each variable
  9.3. The t-test
    9.3.1. Rationale for the t-test
    9.3.2. Assumptions of the t-test
  9.4. The dependent t-test
    9.4.1. Sampling distributions and the standard error
    9.4.2. The dependent t-test equation explained
    9.4.3. The dependent t-test and the assumption of normality
    9.4.4. Dependent t-tests using SPSS
    9.4.5. Output from the dependent t-test
    9.4.6. Calculating the effect size
    9.4.7. Reporting the dependent t-test
  9.5. The independent t-test
    9.5.1. The independent t-test equation explained
    9.5.2. The independent t-test using SPSS
    9.5.3. Output from the independent t-test
    9.5.4. Calculating the effect size
    9.5.5. Reporting the independent t-test
  9.6. Between groups or repeated measures?
  9.7. The t-test as a general linear model
  9.8. What if my data are not normally distributed?
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s task
  Further reading
  Online tutorial
  Interesting real research

10 Comparing several means: ANOVA (GLM 1)
  10.1. What will this chapter tell me?
  10.2. The theory behind ANOVA
    10.2.1. Inflated error rates
    10.2.2. Interpreting F
    10.2.3. ANOVA as regression
    10.2.4. Logic of the F-ratio
    10.2.5. Total sum of squares (SST)
    10.2.6. Model sum of squares (SSM)
    10.2.7. Residual sum of squares (SSR)
    10.2.8. Mean squares
    10.2.9. The F-ratio
    10.2.10. Assumptions of ANOVA
    10.2.11. Planned contrasts
    10.2.12. Post hoc procedures
  10.3. Running one-way ANOVA on SPSS
    10.3.1. Planned comparisons using SPSS
    10.3.2. Post hoc tests in SPSS
    10.3.3. Options
  10.4. Output from one-way ANOVA
    10.4.1. Output for the main analysis
    10.4.2. Output for planned comparisons
    10.4.3. Output for post hoc tests
  10.5. Calculating the effect size
  10.6. Reporting results from one-way independent ANOVA
  10.7. Violations of assumptions in one-way independent ANOVA
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorials
  Interesting real research

11 Analysis of covariance, ANCOVA (GLM 2)
  11.1. What will this chapter tell me?
  11.2. What is ANCOVA?
  11.3. Assumptions and issues in ANCOVA
    11.3.1. Independence of the covariate and treatment effect
    11.3.2. Homogeneity of regression slopes
  11.4. Conducting ANCOVA on SPSS
    11.4.1. Inputting data
    11.4.2. Initial considerations: testing the independence of the independent variable and covariate
    11.4.3. The main analysis
    11.4.4. Contrasts and other options
  11.5. Interpreting the output from ANCOVA
    11.5.1. What happens when the covariate is excluded?
    11.5.2. The main analysis
    11.5.3. Contrasts
    11.5.4. Interpreting the covariate
  11.6. ANCOVA run as a multiple regression
  11.7. Testing the assumption of homogeneity of regression slopes
  11.8. Calculating the effect size
  11.9. Reporting results
  11.10. What to do when assumptions are violated in ANCOVA
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorials
  Interesting real research

12 Factorial ANOVA (GLM 3)
  12.1. What will this chapter tell me?
  12.2. Theory of factorial ANOVA (between-groups)
    12.2.1. Factorial designs
    12.2.2. An example with two independent variables
    12.2.3. Total sums of squares (SST)
    12.2.4. The model sum of squares (SSM)
    12.2.5. The residual sum of squares (SSR)
    12.2.6. The F-ratios
  12.3. Factorial ANOVA using SPSS
    12.3.1. Entering the data and accessing the main dialog box
    12.3.2. Graphing interactions
    12.3.3. Contrasts
    12.3.4. Post hoc tests
    12.3.5. Options
  12.4. Output from factorial ANOVA
    12.4.1. Output for the preliminary analysis
    12.4.2. Levene’s test
    12.4.3. The main ANOVA table
    12.4.4. Contrasts
    12.4.5. Simple effects analysis
    12.4.6. Post hoc analysis
  12.5. Interpreting interaction graphs
  12.6. Calculating effect sizes
  12.7. Reporting the results of two-way ANOVA
  12.8. Factorial ANOVA as regression
  12.9. What to do when assumptions are violated in factorial ANOVA
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorials
  Interesting real research

13 Repeated-measures designs (GLM 4)
  13.1. What will this chapter tell me?
  13.2. Introduction to repeated-measures designs
    13.2.1. The assumption of sphericity
    13.2.2. How is sphericity measured?
    13.2.3. Assessing the severity of departures from sphericity
    13.2.4. What is the effect of violating the assumption of sphericity?
    13.2.5. What do you do if you violate sphericity?
  13.3. Theory of one-way repeated-measures ANOVA
    13.3.1. The total sum of squares (SST)
    13.3.2. The within-participant sum of squares (SSW)
    13.3.3. The model sum of squares (SSM)
    13.3.4. The residual sum of squares (SSR)
    13.3.5. The mean squares
    13.3.6. The F-ratio
    13.3.7. The between-participant sum of squares
  13.4. One-way repeated-measures ANOVA using SPSS
    13.4.1. The main analysis
    13.4.2. Defining contrasts for repeated-measures
    13.4.3. Post hoc tests and additional options
  13.5. Output for one-way repeated-measures ANOVA
    13.5.1. Descriptives and other diagnostics
    13.5.2. Assessing and correcting for sphericity: Mauchly’s test
    13.5.3. The main ANOVA
    13.5.4. Contrasts
    13.5.5. Post hoc tests
  13.6. Effect sizes for repeated-measures ANOVA
  13.7. Reporting one-way repeated-measures ANOVA
  13.8. Repeated-measures with several independent variables
    13.8.1. The main analysis
    13.8.2. Contrasts
    13.8.3. Simple effects analysis
    13.8.4. Graphing interactions
    13.8.5. Other options
  13.9. Output for factorial repeated-measures ANOVA
    13.9.1. Descriptives and main analysis
    13.9.2. The effect of drink
    13.9.3. The effect of imagery
    13.9.4. The interaction effect (drink × imagery)
    13.9.5. Contrasts for repeated-measures variables
  13.10. Effect sizes for factorial repeated-measures ANOVA
  13.11. Reporting the results from factorial repeated-measures ANOVA
  13.12. What to do when assumptions are violated in repeated-measures ANOVA
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorials
  Interesting real research

14 Mixed design ANOVA (GLM 5)
  14.1. What will this chapter tell me?
  14.2. Mixed designs
  14.3. What do men and women look for in a partner?
  14.4. Mixed ANOVA on SPSS
    14.4.1. The main analysis
    14.4.2. Other options
  14.5. Output for mixed factorial ANOVA: main analysis
    14.5.1. The main effect of gender
    14.5.2. The main effect of looks
    14.5.3. The main effect of charisma
    14.5.4. The interaction between gender and looks
    14.5.5. The interaction between gender and charisma
    14.5.6. The interaction between attractiveness and charisma
    14.5.7. The interaction between looks, charisma and gender
    14.5.8. Conclusions
  14.6. Calculating effect sizes
  14.7. Reporting the results of mixed ANOVA
  14.8. What to do when assumptions are violated in mixed ANOVA
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorials
  Interesting real research

15 Non-parametric tests
  15.1. What will this chapter tell me?
  15.2. When to use non-parametric tests
  15.3. Comparing two independent conditions: the Wilcoxon rank-sum test and Mann–Whitney test
    15.3.1. Theory
    15.3.2. Inputting data and provisional analysis
    15.3.3. Running the analysis
    15.3.4. Output from the Mann–Whitney test
    15.3.5. Calculating an effect size
    15.3.6. Writing the results
  15.4. Comparing two related conditions: the Wilcoxon signed-rank test
    15.4.1. Theory of the Wilcoxon signed-rank test
    15.4.2. Running the analysis
    15.4.3. Output for the ecstasy group
    15.4.4. Output for the alcohol group
    15.4.5. Calculating an effect size
    15.4.6. Writing the results
  15.5. Differences between several independent groups: the Kruskal–Wallis test
    15.5.1. Theory of the Kruskal–Wallis test
    15.5.2. Inputting data and provisional analysis
    15.5.3. Doing the Kruskal–Wallis test on SPSS
    15.5.4. Output from the Kruskal–Wallis test
    15.5.5. Post hoc tests for the Kruskal–Wallis test
    15.5.6. Testing for trends: the Jonckheere–Terpstra test
    15.5.7. Calculating an effect size
    15.5.8. Writing and interpreting the results
  15.6. Differences between several related groups: Friedman’s ANOVA
    15.6.1. Theory of Friedman’s ANOVA
    15.6.2. Inputting data and provisional analysis
    15.6.3. Doing Friedman’s ANOVA on SPSS
    15.6.4. Output from Friedman’s ANOVA
    15.6.5. Post hoc tests for Friedman’s ANOVA
    15.6.6. Calculating an effect size
    15.6.7. Writing and interpreting the results
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorial
  Interesting real research

16 Multivariate analysis of variance (MANOVA)
  16.1. What will this chapter tell me?
  16.2. When to use MANOVA
  16.3. Introduction: similarities and differences to ANOVA
    16.3.1. Words of warning
    16.3.2. The example for this chapter
  16.4. Theory of MANOVA
    16.4.1. Introduction to matrices
    16.4.2. Some important matrices and their functions
    16.4.3. Calculating MANOVA by hand: a worked example
    16.4.4. Principle of the MANOVA test statistic
  16.5. Practical issues when conducting MANOVA
    16.5.1. Assumptions and how to check them
    16.5.2. Choosing a test statistic
    16.5.3. Follow-up analysis
  16.6. MANOVA on SPSS
    16.6.1. The main analysis
    16.6.2. Multiple comparisons in MANOVA
    16.6.3. Additional options
  16.7. Output from MANOVA
    16.7.1. Preliminary analysis and testing assumptions
    16.7.2. MANOVA test statistics
    16.7.3. Univariate test statistics
    16.7.4. SSCP matrices
    16.7.5. Contrasts
  16.8. Reporting results from MANOVA
  16.9. Following up MANOVA with discriminant analysis
  16.10. Output from the discriminant analysis
  16.11. Reporting results from discriminant analysis
  16.12. Some final remarks
    16.12.1. The final interpretation
    16.12.2. Univariate ANOVA or discriminant analysis?
  16.13. What to do when assumptions are violated in MANOVA
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorials
  Interesting real research

17 Exploratory factor analysis
  17.1. What will this chapter tell me?
  17.2. When to use factor analysis
  17.3. Factors
    17.3.1. Graphical representation of factors
    17.3.2. Mathematical representation of factors
    17.3.3. Factor scores
  17.4. Discovering factors
    17.4.1. Choosing a method
    17.4.2. Communality
    17.4.3. Factor analysis vs. principal component analysis
    17.4.4. Theory behind principal component analysis
    17.4.5. Factor extraction: eigenvalues and the scree plot
    17.4.6. Improving interpretation: factor rotation
  17.5. Research example
    17.5.1. Before you begin
  17.6. Running the analysis
    17.6.1. Factor extraction on SPSS
    17.6.2. Rotation
    17.6.3. Scores
    17.6.4. Options
  17.7. Interpreting output from SPSS
    17.7.1. Preliminary analysis
    17.7.2. Factor extraction
    17.7.3. Factor rotation
    17.7.4. Factor scores
    17.7.5. Summary
  17.8. How to report factor analysis
  17.9. Reliability analysis
    17.9.1. Measures of reliability
    17.9.2. Interpreting Cronbach’s α (some cautionary tales …)
    17.9.3. Reliability analysis on SPSS
    17.9.4. Interpreting the output
  17.10. How to report reliability analysis
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorial
  Interesting real research

18 Categorical data
  18.1. What will this chapter tell me?
  18.2. Analysing categorical data
  18.3. Theory of analysing categorical data
    18.3.1. Pearson’s chi-square test
    18.3.2. Fisher’s exact test
    18.3.3. The likelihood ratio
    18.3.4. Yates’ correction
  18.4. Assumptions of the chi-square test
  18.5. Doing chi-square on SPSS
    18.5.1. Entering data: raw scores
    18.5.2. Entering data: weight cases
    18.5.3. Running the analysis
    18.5.4. Output for the chi-square test
    18.5.5. Breaking down a significant chi-square test with standardized residuals
    18.5.6. Calculating an effect size
    18.5.7. Reporting the results of chi-square
  18.6. Several categorical variables: loglinear analysis
    18.6.1. Chi-square as regression
    18.6.2. Loglinear analysis
  18.7. Assumptions in loglinear analysis
  18.8. Loglinear analysis using SPSS
    18.8.1. Initial considerations
    18.8.2. The loglinear analysis
  18.9. Output from loglinear analysis
  18.10. Following up loglinear analysis
  18.11. Effect sizes in loglinear analysis
  18.12. Reporting the results of loglinear analysis
  What have I discovered about statistics?
  Key terms that I’ve discovered
  Smart Alex’s tasks
  Further reading
  Online tutorial
  Interesting real research

19 Multilevel linear models
  19.1. What will this chapter tell me?
  19.2. Hierarchical data
    19.2.1. The intraclass correlation
    19.2.2. Benefits of multilevel models
  19.3. Theory of multilevel linear models
