Developing an Engineering Design Process Assessment Using Think-Aloud Interviews

MELTEM ALEMDAR1 [email protected]
JEREMY A. LINGLE1 [email protected]
STEFANIE A. WIND2 [email protected]
ROXANNE A. MOORE1 [email protected]

1The Center for Education Integrating Science, Mathematics, and Computing (CEISMC), College of Sciences, Georgia Institute of Technology, 817 West Peachtree St., Suite 300, Atlanta, GA, 30303, USA
2College of Education, University of Alabama, Box 870231, Tuscaloosa, AL, 35487, USA

Abstract: Early exposure to engineering has been found to help students in their decision-making regarding engineering education and career pathways. To that end, an NSF-funded project is underway that focuses on the development of an engineering curriculum for students in grades six through nine. The Engineering Design Process (EDP) frames this curriculum. The current study presents the validation methods and results of a multiple-choice assessment created to measure students' understanding of the EDP. The use of Think-Aloud Interviews, along with the application and analysis of qualitative coding schemes, to systematically gather evidence about the psychometric quality of the assessment is described. Findings from this study support the validity of the EDP assessment through evidence of alignment between the intended skills and the skills elicited in the student interviews.

Keywords: Engineering Design Process; Assessment; Validation; Cognitive Interviews

1. Introduction

The engineering education community and leaders in the field of technology education have identified the important role K-12 engineering education plays in the success of postsecondary engineering education [1]. In particular, early exposure to engineering can help students make informed decisions about engineering as a career path. In the United States, the role of K-12 engineering education continues to be of national interest [2]. Through a National Science Foundation (NSF) funded project, Georgia Institute of Technology partnered with a public school district to bring an engineering curriculum to students in grades six through nine. In this project, middle school students explore Science, Technology, Engineering, and Mathematics (STEM) Innovation and Design in engineering technology courses. To guide instruction related to engineering design, the curriculum uses the Engineering Design Process (EDP) as a sequential and/or iterative process. A variety of EDP models have been used as guiding frameworks for engineering curricula, and these models vary in terminology, order, and sequence [3] (e.g., see [4]). To contribute to the vital conversation surrounding the development of valid assessments for engineering education [5], the current study describes efforts to develop a valid and reliable assessment to inform the development and implementation of an engineering curriculum. Evidence-Centered Design (ECD) [6] is used as a framework for assessment design. Using a mixed-methods approach, quantitative and qualitative techniques are combined to explore student responses to a multiple-choice engineering design assessment, both as evidence to strengthen the validity argument for the instrument and to guide revisions to individual items. The quantitative component focuses on exploring the psychometric characteristics of the engineering design assessment using Item Response Theory.
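The specific IRT model is not detailed in this section, but the related mixed-methods work cited later [20] applies Rasch measurement theory; as a point of reference (and as an assumption about the exact model used), the dichotomous Rasch model expresses the probability that student n answers item i correctly as

\[
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)},
\]

where \(\theta_n\) denotes the student's achievement estimate and \(\delta_i\) the item difficulty; these are the same kinds of quantities later used to group students and items for the cognitive interviews.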
The qualitative component, which is the major focus of the current study, includes the use of Think-Aloud Interviews (TAIs) to gather evidence about student conceptions of engineering concepts and their rationale for selecting answer choices. TAIs are useful for identifying the cognitive processes and knowledge structures in which students engage as they complete a test [7]. Additionally, the qualitative data were used to explore student responses to the assessment items with respect to students' cognitive processes and their perceptions of item difficulty drivers. This study illustrates the use of TAIs as a systematic method for gathering evidence about the psychometric quality of an EDP assessment. It then presents empirical results from TAIs that indicate the cognitive processes that students employed as they responded to items on an engineering assessment. Lastly, the TAI results are summarized in terms of the ways students defined and used engineering design concepts.

The major purpose of this study is to explore student responses to multiple-choice (MC) engineering assessment items in order to gain a more complete understanding of student conceptions of engineering design and to inform revisions to existing assessment items and the development of new ones. Following the methodology described by Hamilton, Nussbaum, and Snow [8] and DeBoer, Lee, and Husic [9], concurrent think-aloud interviews and retrospective probes were used to gather evidence about student conceptions of engineering and their rationale for selecting answer choices. This study is guided by two major research questions:

1. Do the piloted engineering design process items elicit evidence of the intended cognitive processes?
2. What item features contribute to the perceived difficulty of the piloted engineering design process assessment items?

This study contributes to the field of engineering education in several ways. First, it provides validation information regarding an assessment of the Engineering Design Process among middle school students. Second, the study provides an illustration of, and guidance toward, a rigorous, systematic approach to validating an assessment instrument through the use of Think-Aloud Interviews.

2. Theoretical Framework

The Committee on Developing Assessments of Science Proficiency in K-12 issued a set of recommendations for the design of assessments aligned with the Next Generation Science Standards [10] that reflect an emphasis on the integration of practices, crosscutting concepts, and disciplinary core ideas in science education. The committee called for the use of frameworks for assessment design that "provide a methodological and systematic approach to designing assessment tasks" (p. 52). The final recommendations emphasize the role of evidence as a key aspect of assessment design frameworks that is needed in order to "support the validity argument for an assessment's intended interpretive use and to ensure equity and fairness" (p. 81). Following this recommendation, the theoretical framework for this study draws upon principles from Evidence-Centered Design (ECD). Recognizing that assessment is an evidentiary reasoning process, it is important to use a systematic process when designing an assessment. ECD is a framework for assessment design that focuses on the role of evidence in developing assessment tasks and contexts that elicit a particular construct, the intended inferences from assessment scores, and the nature of the evidence that supports those inferences [10].
This process starts by defining, as accurately as possible, particular aspects of a content domain, that is, the ways in which students are expected to know and understand the content. Examples from the current study include the use of the EDP as a cognitive model (see Figure 1). Additionally, the claims that one wants to be able to make about student knowledge play a critical role in defining the purpose of the assessment [11]. This study focuses on the evidence model component of ECD, in which empirical evidence is examined to support the interpretation of responses to assessment tasks as indicators of student achievement in terms of a construct [12] [13].

2.1 Using Think-Aloud Interviews in Evidence-Centered Assessment Design

Think-Aloud Interviews (TAIs) are recommended for developing a cognitive model of task performance as a method for gathering validity evidence to support the interpretation and use of an assessment [14]. As described by Leighton [15], developing a cognitive model of task performance is a necessary step because "this model is the type that researchers develop to confirm empirically that students are employing the expected knowledge and skills on the items being developed" (p. 8). Development of a cognitive model was important in this study because the cognitive processes related to the EDP have not been tested or described in the literature. In terms of the evidence model component of ECD, the cognitive processes that students employ while completing assessment tasks, and the degree to which these processes reflect the intended construct, are critically important. Ferrara et al. [16] refer to this type of evidence as item construct validity evidence. One approach to gathering such validity evidence is through the use of cognitive labs, or think-aloud interviews, during which students either concurrently or retrospectively describe the cognitive processes they employed when responding to assessment tasks [17]. TAIs are therefore useful for identifying the cognitive processes and knowledge structures that students use when performing a task. During the TAIs, students are directed to freely "think aloud" as they respond to an item, which provides researchers with information about the cognitive processing performed while responding to the item.

3. Methods

Data for this study were collected using an EDP assessment that includes 18 multiple-choice (MC) items with distractors constructed to reflect common student misconceptions about different stages of the EDP. One example of such a misconception is the perception of the design process as linear, rather than as an iterative process that requires revisiting prior design decisions and evaluating alternative solutions. Another is students' tendency to ignore design constraints and requirements in favor of a personally preferred solution [18]. The items were developed based on pre-existing engineering assessment items [19] and subject-area expert review. Further, the items were aligned to one or more stages in a conceptual model of the EDP used in the curriculum; this conceptual model is illustrated and defined in Figure 1. Each stage was measured by at least two items: four stages of the EDP were measured by four items each, and one stage, Problem Understanding, was by far the most commonly appearing stage and was expected to be elicited by eight of the assessment items. The instrument was pilot-tested in January 2014, and the post-test and cognitive interviews were conducted in May 2014.
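To make this item-to-stage alignment concrete, the brief sketch below tallies how many items target each EDP stage and flags any stage measured by fewer than two items. The mapping shown is hypothetical (only Item 5's alignment to Concept Evaluation and Prototyping is taken from the results reported later); the full test blueprint is not reproduced in this paper.

```python
from collections import Counter

# Hypothetical item-to-stage alignment for illustration only; items may
# target more than one stage of the EDP, as in the actual blueprint.
item_alignment = {
    1: ["Problem Definition"],
    2: ["Problem Understanding"],
    3: ["Problem Understanding", "Conceptual Design"],
    4: ["Testing", "Iteration"],
    5: ["Concept Evaluation", "Prototyping"],  # alignment reported for Item 5
    # ... remaining items omitted
}

# Count the number of items targeting each stage and flag under-represented stages.
stage_counts = Counter(stage for stages in item_alignment.values() for stage in stages)
for stage, n_items in sorted(stage_counts.items()):
    note = "" if n_items >= 2 else "  <- measured by fewer than two items"
    print(f"{stage}: {n_items} item(s){note}")
```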
A cognitive interview protocol was adapted from protocols described in previous TAI studies [9]. This procedure used the EDP MC items as stimuli. Each interview began with the researcher modeling "thinking aloud" while answering an example item. The student then read each item and chose a response while verbalizing their thinking. Following the "think aloud" portion of the interview, the interviewer used a semi-structured protocol to ask the student to elaborate on their understanding of the item, the strategies and sources of knowledge used to select a correct response, and their rationale for eliminating distractors. Hamilton, Nussbaum, and Snow [8] stated that this type of interview procedure, combined with multiple-choice items, allows researchers to discover student reasoning processes and strategies for responding to MC items, the sources of knowledge applied to MC items, and differences in reasoning and strategies between successful and unsuccessful students.

The semi-structured interview protocol was pilot-tested for validity purposes. The pilot test helped the research team identify weaknesses in the interview design and allowed the researchers to make necessary revisions and estimate time requirements prior to the implementation of the study. The primary changes made as a result of the pilot test were to shorten the introductory statements describing the interview process and to remove several probes that were determined to be redundant.

3.1 Case Selection

The pre-test scores of students enrolled in the engineering classrooms (see [20]) were used to construct a stratified sample for the cognitive interviews. This was accomplished by placing each student into one of three performance-level groups of approximately equal size (low, medium, or high) based on their pre-test achievement estimates. Similarly, items were categorized according to their difficulty (easy, moderate, or difficult) based on student pre-test performance estimates. A total of six item sets were created so that each set included an easy, a moderate, and a difficult item. Students from each of the performance-level categories were purposefully selected for interviews for each item set, ensuring that students from all achievement levels provided data for items of all difficulty levels.

3.2 Participants

Participants in the qualitative component of the EDP assessment development study included a sample of 44 students enrolled in a public middle school (four students declined to participate in the interviews), with approximately equal numbers of students from the sixth, seventh, and eighth grades. Prior to the interviews, all students had participated in the semester-long engineering curriculum and therefore had sufficient familiarity with engineering vocabulary to participate in the interviews. All students who participated in the interviews were proficient in English. Interviews were conducted in English by a group of eight educational researchers who practiced protocol administration prior to conducting interviews, and each interview lasted approximately 20 minutes. In order to keep the interviews within this 20-minute period, only three EDP assessment items were included per interview.

3.3 Data Analysis

Prior to conducting interviews, a preliminary coding framework was developed based on the framework described by Kaliski et al. [14].
Specifically, codes within four major categories were specified: (Category A) cognitive processing, (Category B) difficulty drivers, (Category C) test-taking behaviors, and (Category D) miscellaneous. This coding framework provides a systematic method for categorizing student responses that aligns with the conceptual model of engineering included in the curriculum. Using the framework, four trained researchers independently coded the verbal reports using NVivo® software. Codes were modified following initial analyses to better reflect the scope of responses. This study focuses on Category A and Category B.

The cognitive processes that students employed as they responded to the items were captured by the codes in Category A. This category included five codes. First, the code "Factual Recall" was used when students recalled specific facts or definitions to answer a question rather than using a specific cognitive skill (e.g., Student response: "It mostly sounds like the right definition for it."). Second, the code "Engineering Design Process" was used when a student's response indicated consideration of one or more of the EDP stages; the EDP stages are defined operationally in Figure 1. If the student made reference to the EDP, or if engineering reasoning was demonstrated without a clear indication of a specific stage, the code "Evidence of Intended Skills" was used. Examples of such statements made by the participants included:

"You could go through the design process in your head and think about what your final design must include and then that has to be your goal. That would be the answer choice if you're using the engineering."
"Because we were designing problems to design solutions to fix problems."
"Because you're going to research through the process, but that's not really your goal – to research. Because you need to know that to actually fix the problem. You can't just go to the end and think about problems similar to it."
"Because engineers have to figure something out, like figure out how to improve stuff."

Third, the "Guessing" code was used when students indicated that they did not have sufficient knowledge to determine the answer. Fourth, student responses were coded for "Process of Elimination" if they used a strategy to eliminate answer choices. Fifth, student responses were coded as "Background Characteristic" if students referenced personal experiences or background characteristics when choosing an answer, such as referencing a family member who is an engineer.

The codes in Category B, Difficulty Drivers, describe item features that students indicated as increasing or decreasing the difficulty of an item but that were not directly related to the content of the item. First, "Item Length" was coded because it was hypothesized that longer item stems or answer choices increase the difficulty of an item. Second, comments about additional material included with the item, such as graphics or charts, that made the item more difficult were coded as "Stimulus Material"; this code was also used when the stimulus material served to clarify the item, and thus made it easier. Third, "Degree of Familiarity" was used if students indicated a lack of previous exposure to the information, usually scenario-based (e.g., lack of familiarity with airplane travel). This code appeared rarely in the interviews (see Table 2). In instances where a student indicated a lack of familiarity, interviewers were trained to probe to determine whether this lack of familiarity affected the student's understanding of the scenario.
Fourth, "Quality of Distractors" was applied when students stated that multiple response options appeared plausible or that some distractors were easy to eliminate. Fifth, the "Vocabulary" code was used when the meaning of a word was not known. Sixth, student statements that indicated a misunderstanding of any part of the item were coded as "Misunderstanding." For the frequency of occurrence of each code, see Table 2.

Although not examined in this study, coding Categories C and D are described here for completeness. Category C focused on student test-taking strategies, including "Process of Elimination," "Rereading" or re-stating portions of the item, misreading words in a way that affected choice selection ("Misread"), changing an answer choice after making a selection ("Change Answer"), and using clues within the item to select or eliminate a response option ("Scaffolding within the Item"). Following Kaliski et al. [14], the "Process of Elimination" code is included in both Category A and Category C because this action can function as both a cognitive process and a test-taking strategy. Category D comprised miscellaneous codes, including indications of correct or incorrect responses ("Correct Response" or "Incorrect Response"), any apparent difficulty with the think-aloud process ("Difficulty thinking aloud"), required prompts from the researcher ("Researcher Prompt"), and student comments indicating that the stimulus material (text, graphics, or charts) was irrelevant ("Stimulus Material Irrelevance").

3.4 Coding Process

The coding was completed in three rounds. First, in order to establish a common understanding of the codes and to address potential definition refinements, four researchers used the initial framework to code the transcripts related to three assessment items. If evidence of a code appeared at all in the transcript for an assessment item, the code was applied once for that item. Following the first round of coding, the researchers met and refined the initial definitions. As a result, the initial code definitions were expanded to include comments related to both item stems and answer choices (initially the code verbiage focused on answer choices), the definition of "Background Characteristic" was revised to exclude students' experiences of the EDP through their engineering curriculum, and a new code was created to capture references to the EDP that were not clearly indicative of a specific stage ("Evidence of Intended Skills"). In the second round of coding, the refined codes were applied using the same whole-item process, with a code applied once per item if evidence of it was apparent. The purpose of this whole-item coding was to explore the frequency of codes per item in order to identify items that warranted further exploration. The third and final round involved more traditional qualitative coding in which a code could be applied multiple times per item. Coding in this manner provides a more in-depth understanding of the areas of primary interest, namely student cognitive strategies, implicit and explicit use of the EDP, difficulty drivers, and misconceptions in student understanding of the EDP.

4. Results

Before examining the cognitive interview transcripts in depth, counts of correct and incorrect responses were calculated for each EDP multiple-choice item.
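As a minimal sketch of how such per-item tallies (correct and incorrect response counts, together with Round 2 whole-item code frequencies) can be computed, consider the following; the record structure and field names are hypothetical and not taken from the study's data files.

```python
from collections import defaultdict, Counter

# Hypothetical interview records; in Round 2, each code was applied at most
# once per item per interview (whole-item coding).
interviews = [
    {"item": 5, "correct": False, "codes": ["Evidence of Intended Skills", "Vocabulary"]},
    {"item": 5, "correct": True, "codes": ["Concept Evaluation", "Quality of Distractors"]},
    {"item": 1, "correct": True, "codes": ["Engineering Design Process"]},
    # ... remaining interview records omitted
]

score_counts = defaultdict(Counter)  # item -> tallies of correct/incorrect responses
code_counts = defaultdict(Counter)   # item -> frequency of each Round 2 code

for record in interviews:
    score_counts[record["item"]]["correct" if record["correct"] else "incorrect"] += 1
    code_counts[record["item"]].update(record["codes"])

for item in sorted(code_counts):
    print(item, dict(score_counts[item]), dict(code_counts[item]))
```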
Findings indicated that the number of correct responses corresponded to the achievement-level groups assigned based on pre-test performance, with more incorrect responses appearing among "low performing" group members and fewer incorrect responses appearing among "high performing" group members. In this section, results from the qualitative analysis of the cognitive interviews are summarized in terms of the guiding research questions for this study. A discussion of conclusions from these findings follows.

Research Question 1: Do the piloted engineering design process items elicit evidence of the intended cognitive processes?

The degree to which an assessment item elicited EDP knowledge as intended was made apparent through the analysis shown in Table 1. This table presents a summary of results from Round 2 of the qualitative analysis, in which codes appear once per item per interview. The table presents the frequency of transcripts with identified intended skills related to the previously described cognitive processes (including references to the EDP) and the difficulty drivers. This summary reveals that all assessment items elicited at least one intended skill related to the EDP, and all items except Item 1 were associated with at least one difficulty driver.

To provide an example of the examination of alignment to intended skills, consider Item 5 in Table 2, for which six cognitive interviews were conducted. This item was designed to elicit the Concept Evaluation and Prototyping stages of the EDP. Examination of the codes indicated that references to Concept Evaluation were made in one interview, and references to Prototyping were also made in one interview. Five students made general reference to the EDP, although they did not specifically identify a stage. For example, one student responded: "An engineer fixes a lot of things and it can be like a bunch of stuff. It can be ways to improve the safety of cars." In this statement, the student seemed to be describing the role of engineering as improving products, suggesting at least some understanding of the EDP, but with too little detail to allow identification of a specific stage. Considering these findings, the researchers examined the specific sections of the relevant transcripts coded during Round 3 and determined that students were inconsistently interpreting the EDP stage presented in the scenario. This discovery resulted in refinements to both the vocabulary used and the scenario for this item.

Research Question 2: What item features contribute to the perceived difficulty of the piloted engineering design process assessment items?

The researchers also examined the difficulty drivers that were coded for each item. Evidence of perceived difficulty drivers was typically identified during the semi-structured, retrospective-probe portion of the TAI, when the researchers asked students why an item was easy or difficult. The difficulty drivers identified are listed in the rightmost column of Table 2. To continue with Item 5 as an example, the item-level coding results revealed that three students found the vocabulary challenging, and five students found the quality of the distractors to be an issue.
Upon review of the specific coded sections of the transcripts for this item, the researchers discovered that language inconsistencies between the scenario and the response options caused some confusion (the scenario used "requirements" while the response options used "criteria"); this was corrected in subsequent iterations of the item.

5. Conclusion

Findings from this study suggest that Think-Aloud Interviews, combined with a coding framework based on the EDP, are a valuable method for gathering evidence about the psychometric quality of this EDP assessment. Overall, results suggested that the EDP assessment items were generally eliciting the intended skills. Specifically, findings from the qualitative analyses indicated alignment between the intended and observed EDP skills for the new assessment items examined in this study. This finding provides item construct validity evidence for the EDP assessment. Second, certain EDP skills (e.g., Conceptual Design) were less frequently observed than others (e.g., Concept Evaluation). This variation in how frequently certain stages of the EDP were elicited is hypothesized to result from the emphasis placed on these stages in the engineering curriculum, which focuses more on evaluating specific designs and less on the initial development of those designs. This curricular emphasis likely affects students' ability to identify characteristics of more familiar stages. Third, there were several features that students perceived as contributing to the difficulty of the items, primarily related to difficulty choosing between multiple response options they perceived as correct. Results for several items indicated potential issues related to item clarity and vocabulary; these issues typically required only simple refinements once identified. However, some vocabulary issues identified in this study also contributed to the formative evaluation of the curriculum. For example, student confusion over terms central to understanding the EDP, such as "constraints" and "iterations," informed curricular adjustments and related teacher professional development.

References

1. Hailey, C., et al., National Center for Engineering and Technology Education. The Technology Teacher, 2005. 64(5): p. 23-26.
2. Carr, R.L., L.D. Bennett, and J. Strobel, Engineering in the K-12 STEM Standards of the 50 U.S. States: An Analysis of Presence and Extent. Journal of Engineering Education, 2012. 101(3): p. 1-26.
3. Atman, C.J., et al., Engineering Design Education, in Handbook of Engineering Education Research, A. Johri and B.M. Olds, Editors. 2014, Cambridge University Press: New York, NY. p. 201-226.
4. Bailey, R. and Z. Szabo, Assessing Engineering Design Process. International Journal of Engineering Education, 2005. 22(3): p. 508-518.
5. Douglas, K.A. and S. Purzer, Validity: Meaning and Relevancy in Assessment for Engineering Education Research. Journal of Engineering Education, 2015. 104(2): p. 108-118.
6. Almond, R.G., L.S. Steinberg, and R.J. Mislevy, A Four-Process Architecture for Assessment Delivery, with Connections to Assessment Design. Journal of Technology, Learning, and Assessment, 2002. 1(5).
7. Ercikan, K., et al., Application of think aloud protocols for examining and confirming sources of differential item functioning identified by expert reviews. Educational Measurement: Issues and Practice, 2010. 29(2): p. 24-35.
8. Hamilton, L.S., E.M. Nussbaum, and R.E. Snow, Interview procedures for validating science assessments. Applied Measurement in Education, 1997. 10(2): p. 181-200.
9. DeBoer, G., H.-S. Lee, and F. Husic, Assessing integrated understanding of science, in Coherent Science Education: Implications for Curriculum, Instruction, and Policy, Y. Kali, M.C. Linn, and J.E. Roseman, Editors. 2008, Teachers College Press, Columbia University: New York, NY. p. 153-182.
10. National Research Council (NRC), Developing Assessments for the Next Generation Science Standards, Committee on Developing Assessments of Science Proficiency in K-12, J.W. Pellegrino, et al., Editors. 2014: Washington, DC.
11. Pellegrino, J.W., Assessment of science learning: Living in interesting times. Journal of Research in Science Teaching, 2012. 49(6): p. 832-841.
12. Mislevy, R.J. and G.D. Haertel, Implications of evidence-centered design for educational assessment. Educational Measurement: Issues and Practice, 2006. 25: p. 6-20.
13. Snow, E., et al., Large-Scale Technical Report: Leveraging Evidence-Centered Design in Large-Scale Test Development. 2010, SRI International: Menlo Park, CA.
14. Kaliski, P., et al., Using Think Aloud Interviews in Evidence-centered Assessment Design for the AP World History Exam, in American Educational Research Association. 2011: New Orleans, LA.
15. Leighton, J.P., Avoiding misconception, misuse, and missed opportunities: The collection of verbal reports in educational achievement testing. Educational Measurement: Issues and Practice, 2004. 23(4): p. 6-15.
16. Ferrara, S., et al., Examining test score validity by examining item construct validity: Preliminary analysis of evidence of the alignment of targeted and observed content, skills, and cognitive processes in a middle school science assessment, in American Educational Research Association. 2004: San Diego, CA.
17. Ericsson, K.A. and H.A. Simon, Verbal reports as data. Psychological Review, 1980. 87(1): p. 215-251.
18. Atman, C.J., et al., Engineering Design Processes: A Comparison of Students and Expert Practitioners. Journal of Engineering Education, 2007. 96(4): p. 359-379.
19. Moore, T.J. and S.S. Guzey, EngrTEAMS engineering content assessment for grades 4-8. Engineering to Transform the Education of Analysis, Measurement, and Science through a Targeted Mathematics-Science Partnership. 2013, National Science Foundation: MN.
20. Wind, S.A., et al., Developing an engineering design process assessment using mixed methods: An illustration with Rasch measurement theory and cognitive interviews, in American Educational Research Association. 2015: Chicago, IL.

Appendix

Table 1. Summary of observations about EDP as a cognitive model and Difficulty Drivers

Problem definition

Type of response: Explicit reference to problem definition as a method for clarifying the appropriate next steps in an engineering design challenge.
Example student response: "I used defining the problem…. If you didn't understand the problem up here, then you couldn't really answer this down here because you would be confused."

Type of response: Explicit reference to problem definition as a defined step in the EDP that they learned in class.
Example student response: "(I used) defining (the problem) because our teachers usually use the word define it, they tell us what to do or they have a lot of paper and we read the paper and it help us define the problem."

Type of response: Implicit reference to problem definition by identifying or focusing on the specific needs of a customer/client.
Example student response: "The reason why I chose C is because your main goal is to be able to allow dogs to have enough air to fly safely for eight hours and be sound proof enough that passengers cannot hear barking dogs. You want to be able to meet these and keep them and solve the ABC Airline problems. That's what C is saying to design a new container that solves the Airlines problems."
Problem understanding

Type of response: Implicit reference to problem understanding by focusing on identifying and understanding requirements and constraints for an engineering design problem.
Example student response: "You need to review the requirements and restraints (constraints) of the problem you are solving. Like, they want the dogs to have enough air to fly for eight hours, they want to be sound proof, and it needs... and the requirements need to be, um, the size and how much it costs, and it can't be poisonous to dogs. That's what it is saying, those are the requirements and problems you are solving."

Type of response: Implicit reference to problem understanding by focusing specifically on the requirements and constraints in terms of the customer/client.
Example student response: "Because you want to see, you want to find soundproof materials so that the customers can be happy on their trip."

Type of response: Implicit reference to problem understanding by focusing on the functions that the designed solution must carry out.
Example student response: "It's not all about the cost and materials. Instead it's about what it needs to do and stuff."

Type of response: Implicit reference to problem understanding by focusing on engineering requirements that affect customer needs.
Example student response: "Since they are so close together, you need to try and make sure you have soundproof materials that are good to make sure they …. here because they're so close together."

Type of response: Focus on relative ordering of problem understanding within the EDP.
Example student response: "You don't conduct research on things related to the problem (first), you want to think about what the problem is."

Type of response: Focus on the importance of problem understanding within the EDP.
Example student response: "If you just make random changes to see if the problem goes away, then you're not really considering the fact that there's a problem at all, because you don't know where the problem is ... and so if you don't know where it is, then how can you know if you're fixing the problem that's in the game, instead of just ... you know, making random changes."

Table 1, continued.

Conceptual design

Type of response: Implicit reference to conceptual design where students described the relative ordering of brainstorming or ideation within the engineering design process.
Example student response: "You can't start building a new game until you brainstorm a game into your head, until you know what it is."

Type of response: Description of conceptual design (brainstorming) as an essential process of engineering design that is used for generating ideas.
Example student response: "You know how you first want to build a catapult, but you don't know what the design you want to do is, so first you'd have to brainstorm possible designs."

Concept evaluation

Type of response: Students indicated use of concept evaluation when they described the importance of specific customer needs or criteria when considering the quality of a solution.
Example student response: "You're not just focusing on soundproof materials because you've got the other things to work on… he wants you to build something good, but you don't need to focus on the strongest thing because you need all the things right here, all the requirements."

Prototyping

Type of response: Students described the use of prototypes as a part of iteration.
Example student response: "You shouldn't build a full-scale. You should do a little mini one and test is out to see if it would work."
Type of response: Students described the use of prototypes as a method for understanding potential solutions.
Example student response: "(Create) a prototype or building a simple drawing of it so you could get a simple base idea about what you are going to do without adding all the extras to it yet."

Testing

Type of response: Students explicitly referenced the concept of testing as an essential method for several aspects of successful engineering design.
Example student response: "He should test it more and see what the problem would be. If he documents it, he will get the answer for why it messes up."

Type of response: Students described testing as a method for diagnosing problems with a design in order to inform iteration.
Example student response: "Because if the game stops working at level 3, then that means something isn't going right, so he would have to carefully test it ... in order to know what's not working, and how to solve the problem, and like when he makes the results ... when he checks the results then it'll be easier for him to look over them without him getting messed up, or losing where he stopped at."

Type of response: Students described testing as a method for comparing potential solutions.
Example student response: "You have to test it to see if it will work, and she has to test her different versions of the device. Of each material."

Type of response: Students described testing as a method for verifying a solution.
Example student response: "Just because they say it can clean a hundred carts in thirty-five minutes don't actually mean that it can, so she needs to test it to see."

Iteration

Type of response: Students noted examples from personal or class experiences with iterating on a design.
Example student response: "It's like when we made a prototype of a cradle design for the catapult. Ours wouldn't throw the ball into the safe zone. So we changed up the design but still kept it the same a little bit and it started working."

Type of response: Students referred to iteration as a method for ensuring adherence to design requirements if an original design was unsuccessful.
Example student response: "If you keep your original design and you begin the game and no one makes it, you could end up having a bad game and you wouldn't be able to come back into the carnival."

Type of response: Students described the concept of improving upon previous designs, rather than starting over, within the context of a design challenge.
Example student response: "I would keep running it and running it and make changes and see would that help it and if it does I would stick to that instead of trying to do the process over. I would iterate the process I already have and just keep doing it until it works and if it doesn't work at a certain time, then I'll start over."

Table 2, continued. Difficulty Drivers

Length

Type of response: Student indicates that the length of item stimulus material makes an item difficult.
Example student response: "I almost picked that because it was too many words and it got confusing."

Type of response: Student indicates that the length of the item stem makes it difficult.

Type of response: Student indicates that the length of an answer choice makes it difficult.

Stimulus material (graphics, charts, etc.)

Type of response: Student indicates that the stimulus for an item (e.g., graphics, charts, etc.) makes the item difficult.
Example student response: "I didn't get out what this little thing is, or what it's supposed to be."

Type of response: Student indicates that the length of the stimulus makes the item difficult.

Degree of familiarity

Type of response: Students have not had an opportunity to learn the content, making the item less familiar.
Example student response: "I never heard (of this). I don't know how they do the shopping carts. I didn't know they use this much money to do this."

Type of response: Students indicate that the item context is unfamiliar.
Quality of distractors

Type of response: Student indicates that some distractors were easy to eliminate.
Example student response: "A and D were sort of kind of alike, so they sort of confused me."

Type of response: Student indicates that two or more distractors appear to be plausible options.

Item vocabulary

Type of response: Student indicates that the item was difficult as a result of vocabulary (in the stimulus, item stem, or answer choices).
Example student response: "I'm not going to say D because I don't really know what that second word is."

Misunderstanding

Type of response: Student does not understand the item stem.
Example student response: "I was confused with C because I really didn't understand it."

Type of response: Student does not understand an answer choice.