DOCUMENT RESUME

ED 384 679                                TM 024 001

AUTHOR       Powers, Donald E.; O'Neill, Kathleen
TITLE        Inexperienced and Anxious Computer Users: Coping with a Computer-Administered Test of Academic Skills. The Praxis Series: Professional Assessments for Beginning Teachers.
INSTITUTION  Educational Testing Service, Princeton, N.J.
REPORT NO    ETS-RR-92-75
PUB DATE     Oct 92
NOTE         53p.
PUB TYPE     Reports - Research/Technical (143) -- Tests/Evaluation Instruments (160)
EDRS PRICE   MF01/PC03 Plus Postage.
DESCRIPTORS  *Beginning Teachers; *College Students; Computer Anxiety; *Computer Assisted Testing; Computer Attitudes; Computer Literacy; *Experience; Higher Education; *Teacher Evaluation; Test Construction; Testing Problems
IDENTIFIERS  *Familiarization; *Praxis Series Academic Skills

ABSTRACT
The objective of this study was to assess the degree to which the mode of administration of the computer-based Academic Skills Assessments of the "Praxis Series: Professional Assessments for Beginning Teachers" contributes to performance differences among test takers. Inexperienced or anxious computer users (Praxis pilot sample of 446 college students and graduates and study sample of 145) were recruited to take the assessments. The degree to which test design and test familiarization procedures effectively minimized variation due to comfort and familiarity with computers was examined from three perspectives: (1) the extent to which the availability of a personal, information-providing test center supervisor influenced test performances, beyond the help provided by a computerized test familiarization tutorial; (2) the effect of within-test practice on later performance on a subsequent section of the test; and (3) the relationship of computer-based test performance to attitudes toward computers and experience in using them. The conclusion was that performance on the tests is not unduly affected by computer administration. Eleven tables present study results. Two appendixes, with five more tables, present pilot and study samples' responses to computer attitude scale items and supplemental information. (Contains 38 references.) (Author/SLD)

Reproductions supplied by EDRS are the best that can be made from the original document.

The Praxis Series: Professional Assessments for Beginning Teachers(TM)

Inexperienced and Anxious Computer Users: Coping with a Computer-Administered Test of Academic Skills

Donald E. Powers
Kathleen O'Neill

October 1992

RR-92-75

Educational Testing Service

Copyright © 1992 by Educational Testing Service. All rights reserved. EDUCATIONAL TESTING SERVICE, ETS, and the ETS logo are registered trademarks of Educational Testing Service. THE PRAXIS SERIES: PROFESSIONAL ASSESSMENTS FOR BEGINNING TEACHERS and its design logo are trademarks of Educational Testing Service.

Inexperienced and Anxious Computer Users: Coping with a Computer-Administered Test of Academic Skills

Donald E.
Powers, Kathleen O'Neill

October 1992

Acknowledgments

The authors extend sincere thanks to the following people for their significant contributions to this study:

Nancy Cole, for suggesting a study design that incorporated supervisor help as a factor
Laura Jerry, for processing and analyzing the data
Charlie Lewis, for advising us on data analysis
Brenda Loyd, for permission to use the Computer Attitude Scale
Paul Ramsey, for initiating the study
Clyde Reese, for advising us on the design of the study
Lora Vogel, for researching and identifying information about measuring attitudes toward computers
Lou Woodruff and Charlene Canny, for recruiting study participants and arranging the data collection
Ruth Yoder, for painstakingly producing the tables and text of the report
Craig Mills, Phil Oltman, and Paul Ramsey, for providing helpful reviews of an earlier draft of the report.

Abstract

The objective of this study was to assess the degree to which the mode of administration of the computer-based Academic Skills Assessments of The Praxis Series: Professional Assessments for Beginning Teachers(TM) contributes to performance differences among test takers. To make this determination, inexperienced or anxious computer users were recruited to take the assessments. The degree to which test design and test familiarization procedures effectively minimized variation due to comfort and familiarity with computers was examined from three perspectives:

1. the extent to which the availability of a personal, information-providing test center supervisor influenced test performances, beyond the help provided by a computerized test familiarization tutorial,
2. the effect of within-test practice on later performance on a subsequent section of the test, and
3. the relationship of computer-based test performance to attitudes toward computers
and experience in using them. The conclusion was that performance on the tests is not unduly affected by computer administration.

Inexperienced and Anxious Computer Users: Coping with a Computer-Administered Test of Academic Skills

Computer-based testing affords a number of measurement opportunities that are not possible, or at least not easily implemented, with paper-and-pencil technology. Among the prospects that have to some degree already been realized are new question formats, alternative models of measurement (e.g., adaptive testing), improvements in test administration, immediate feedback to test takers, and more efficient assessment in terms of the kind and amount of information that can be gathered in a given period of time (Educational Testing Service, 1992; Green, 1988; Wise & Plake, 1990). In the future, as computer technologies continue to develop, even more novel and intelligent uses of computers are likely to arise (Bunderson, Inouye, & Olsen, 1989).

Despite the many potential advantages of computer-based testing, there is, as with any innovation, a need to address the possibility of unwanted side effects, including any inadvertent change in the meaning of test scores. Green (1988) has discussed a number of differences between computer-based and conventional tests that may affect the interpretation of test scores obtained in each of these modes.

Arguably the most serious of the potential unintended consequences of computer-based testing is the introduction of construct-irrelevant factors that may disadvantage some groups of test takers. Inequity may arise in the context of computer-based assessment to the extent that test taking involves procedures with which every test taker is not equally comfortable or facile (U.S. Congress, Office of Technology Assessment, 1992).
Of relevance here is that, according to one estimate (Weil, Rosen, & Sears, 1987), "a sizable minority" (perhaps as many as one of every three adults) suffers aversive reactions to computer-related technology, ranging from "mild discomfort" to "severe debilitation." There is also ample evidence to suggest that not everyone coming through the U.S. educational system has had equal access to computers or is equally skilled in the use of them (Martinez & Mead, 1988). Furthermore, attitudes toward computers and competence in their use may be related to both gender and ethnicity (Dambrot, Watkins-Malek, Silling, Marchall, & Garver, 1985; Martinez & Mead, 1988; Wilder, Mackie, & Cooper, 1985), although these differences may depend on exactly how computers are used (Lockheed, 1985).

One approach to equity involves examining the equivalence of computerized and paper-and-pencil versions of a test.¹ In accordance with APA guidelines for computer-based tests (American Psychological Association, 1986), several studies have in fact investigated the equivalence of scores from automated and conventional paper-and-pencil test versions. Mazzeo and Harvey's (1988) summary of these efforts suggests that under some conditions computer-based tests are more or less equivalent to their paper-and-pencil counterparts. A very thorough recent comparison of computer-based and paper-and-pencil versions of the GRE General Test, for example (Schaeffer, Reese, Steffen, McKinley, & Mills, in press), showed an extremely tight equivalence between scores based on the two versions. However, under other circumstances, for example when speed is a factor in performance, scores from automated and conventional tests do not appear to be directly comparable.
While applicable generally, the concept of equivalence also pertains to specific subgroups of examinees, including perhaps those whose attitudes toward or experiences with computers are not commensurate with those of test takers in general. There has been some conjecture (e.g., Wise, Harvey, & Plake, 1989) about the involvement of certain individual differences in test performance differentials resulting from the mode of test administration. Although relatively few studies have examined the role of such traits as attitudes and experience as moderators of test performance on computer-based and paper-and-pencil tests, there have been some exceptions.

¹It should be noted that, though relevant, the concept of equivalence is not entirely germane to the computer-based Academic Skills Assessments of The Praxis Series: Professional Assessments for Beginning Teachers(TM), the test studied here, as this test has no comparable paper-and-pencil counterpart. In contrast, equivalence has been a central issue in converting the existing GRE General Test, for example, to a linear computerized test consisting of the same items used on the paper-and-pencil version of the test.

Lee, Moreno, and Sympson (1986) hypothesized that anxiety levels may have run higher for the computer-based test than for the paper-and-pencil test used in their study, thus explaining the higher average scores on the paper-and-pencil test. The difference was explained mainly by the lower performance of test takers who had no previous experience with computers; among those having had at least some previous experience, there was no significant difference in performance between modes of administration. In a related study, Lee (1986) found that for a computerized test of arithmetic reasoning, much of the variation (62%) was explained by performance on a paper-and-pencil version of the test. A significant, though small, portion of variance (2-3%) was explained by previous computer experience (computer courses, use of word processing, computerized games, and jobs requiring computer use). Wise, Barnes, Harvey, and Plake (1989), on the other hand, found that neither feelings of anxiety nor lack of experience had any effect on performance for a computer-based achievement test for students in an introductory statistics course.

The amount of training that may be required to ensure that test performance reflects the construct of interest rather than the mode of testing is, of course, a central issue here. On the basis of her study, Lee (1986) concluded that "...minimal work with computers may be sufficient to prepare a person for computerized testing" (p. 732). Johnson and White (1980) also found that little test familiarization (one hour of experience on the computer) was needed to reduce the disadvantage to elderly examinees taking a computerized version of the Wonderlic Personnel Inventory. Weil, Rosen, and Sears (1987), among others, have developed programs that appear effective in reducing "computer phobia" and, presumably, reducing test mode effects. Some, e.g., Loyd and Gressard (1984a), have found that computer experience is positively related to attitudes about computers. Greater experience apparently does not automatically translate to less apprehension, however (Marcoulides, 1990), and may in fact exacerbate computer anxiety in some instances (Rosen, Sears, & Weil, 1987).

The major concern of the study reported here was whether or not the design of the computer-based Academic Skills Assessments of The Praxis Series: Professional Assessments for Beginning Teachers(TM) and the test familiarization provided for these assessments were sufficient to ensure that test performances accurately reflect the skills being measured.
This concern is much the same as for paper-and-pencil tests -- that performance on a test should not be unduly influenced by familiarity with the procedures required to take it. For computer-based tests, these procedures may include not only those needed for paper-and-pencil tests, e.g., when to make informed guesses and how to use time efficiently, but possibly other ones also, for example those having to do with various aspects of computer use (how to scroll, how to use a mouse, etc.). Our aim, therefore, was to implement a primary standard for educational and psychological testing -- to present evidence that "...a test does not depend heavily on extraneous constructs" (p. 15) (AERA, APA, NCME, 1985), in this case familiarity, comfort, and experience with computers. Although computer experience and attitudes toward computers are extraneous to the measurement of the constructs of interest here, i.e., the basic academic skills required of beginning teachers, they are not entirely irrelevant to the practice of teaching itself. If, as has