Evaluating Online Media Literacy in Higher Education: Validity and Reliability of the Digital Online Media Literacy Assessment (DOMLA)

Tom Hallaq, Kansas State University
Journal of Media Literacy Education 8(1), 62-84 (2016). Available online at www.jmle.org. Published by the National Association for Media Literacy Education.

Abstract

While new technology continues to develop and become increasingly affordable, and students have increased access to digital media, one might wonder if requiring such technology in the classroom is akin to throwing the car keys to a teenager who has not completed a driver's education course. The purpose of this study was to develop a valid and reliable quantitative survey providing accurate data about the digital online media literacy of university-level students in order to better understand how digital online media can and should be used within a teaching/learning environment at a university. This study identifies core constructs of media literacy as recognized by noted researchers, including ethical awareness, media access, media awareness, media evaluation, and media production. Because today's traditional higher education students are familiar with media technology and expect these tools to be incorporated in the classroom, the digital divide that was once defined by socio-economic status may be shifting instead to divide generations. While this study is confined to the creation of the instrument, the survey is ultimately intended to measure digital media literacy levels in both university students and faculty, to determine whether differences exist between those two groups, and to better understand how digital media can and should be used within a teaching/learning environment at a university. Using a 12-step process, the study resulted in a 50-item instrument allowing a quantitative measurement of digital online media literacy. Results repeatedly showed a reliable instrument when viewed as a whole, with individual constructs indicating varying degrees of reliability on their own. The instrument was found to be reliable, with an overall coefficient of .919.

Keywords: literacy, technology, survey, digital media, online media, ethical awareness, media access, media evaluation, media production, university, higher education, quantitative validation

Several years ago, some universities began requiring students to pack a laptop computer along with their laundry bag, calculator, and other college essentials (Russell 2004, 1). While new technology continues to become increasingly affordable and students have increased access to electronic media through this technology, one might wonder if requiring such technology in the classroom is similar to throwing the car keys to a teenager who has not yet completed a driver's education course. Media literacy skills "help… people to use media intelligently, to discriminate and evaluate media content, to critically dissect media forms, to investigate media effects and uses, and to construct alternative media" (Kellner & Share 2005, 372) and include the ability to "access, analyze, evaluate and produce communication in a variety of media forms" (Aufderheide 1993, 1). Media literacy has been a part of education for more than forty years in most developed countries (Heins & Cho 2003; Thoman & Jolls 2004). In the U.S., however, it took until 2002 for every state to incorporate media literacy initiatives into its educational plans (Heins & Cho 2003).
Today, however, educators recognize the need for media literacy education to continue into higher education (Bordac 2009; Christ 2004; Mihailidis & Hiebert 2005). College faculty may simply expect that their students have already acquired sufficient skills in using computers, the Internet, and social media. The lack of media literacy education in higher education may be due in part to the communication gap between what Prensky (2001) refers to as digital natives and digital immigrants. Nearly all educators, especially those in higher education, fall into the category of digital immigrants and "speak" with an "accent" when it comes to digital technology, whereas most students are identified as digital natives, coming to higher education already "speaking" the language of digital technology fluently. Prensky later recognized digital technology as '"the right stuff' to be teaching our kids today to prepare them for the future" (Prensky 2012, 2). These apparent contradictions illustrate the need for students to learn how to put their fluency in technology navigation to use.

Purpose of the Study

In this paper, I describe the process used to create a reliable and valid scale for measuring the online media literacy competencies of undergraduate students. The intent of the Digital Online Media Literacy Assessment (DOMLA) is to collect quantitative data that will aid in identifying the digital online media literacy levels of university students. The survey instrument was developed based on media literacy literature from leading researchers in the field, input from content experts, and reference to other similar instruments such as Britain's NIACE survey, Canada's ICT assessment, and surveys used by Hargittai (2005) and Literat (2014). While this paper is confined to a description of the creation of the instrument, ultimately the survey is intended to measure digital media literacy levels in university students and faculty, to determine if differences in media literacy exist between those groups, and to better understand how digital media can and should be used within a teaching/learning environment at a university.

Media Literacy in Higher Education

Recent research on media literacy education in the U.S. has begun to include a look at higher education. Bordac (2009) indicates that not only do faculty expect high levels of media literacy on the part of their students, but students themselves also expect institutions to support such knowledge. Quoting an academic technology specialist from the University of Findlay in Ohio, which features more than 130 technology-enhanced classrooms, Hayes (2010) states, "Quite frankly, students today just expect us to have this kind of technology." Bordac identified four primary characteristics that faculty associate with media literacy: (a) formal application, (b) theoretical analysis, (c) contextual analysis, and (d) communication (2009, 3). Additionally, specific media literacy skills identified by the faculty members interviewed include production of various media products (i.e., videos, blogs, websites, etc.), analysis of other media products, and the ability to carry on informed discourse about these products, in addition to effective writing skills. Using qualitative research methods, Bordac suggested that the data collected through her research indicate the existence of a general core of media literacy skills spanning various learning disciplines (Bordac 2009, 3).
This finding is indicated by the cross-identification from both Humanities and Social Sciences faculty, who identified skills categorized as applied, theoretical, contextual, or communication skills. Scholars in journalism, mass media, and communication have identified valued outcomes for media literacy education in higher education. Christ makes it clear that providing an accurate picture of media literacy learning outcomes in higher education requires both a precise definition of media literacy and well-developed standards and competencies to be achieved by the student (Christ 2004, 92). Mihailidis and Hiebert (2005) claim it is important that students understand how the media work and influence their audiences. One area providing strong media literacy teaching is the journalism and mass communication curriculum, where students are trained to become significant producers of media content. In developing standards, Christ identified two organizations that may offer valuable guidelines for assuring such education: the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC) and the National Communication Association (NCA) (Christ 2004, 93). While the ACEJMC standards focus more on the accreditation of professional mass communication programs within higher education, and the NCA standards are geared more toward the K-12 educational system, they provide a good starting point for developing media literacy standards for higher education. Christ (2004) also argues that media literacy education could potentially challenge the work of higher education's professional schools, contrasting these schools' focus on the practitioner against media literacy's focus on the citizen. Citizens and practitioners alike benefit from media awareness, media access, and media evaluation, as well as ethical awareness and media production.

Literature Review

While the importance of a media literate population has been well documented, measurement of media literacy competencies is still a new area of inquiry. A number of media literacy measures involve qualitative, open-ended responses from subjects, which are both time-consuming and difficult to code (Literat 2014). The use of digital technology is commonly measured. Aldridge, Tuckett, and Lamb review the results of an annual survey from Britain's National Institute of Adult Continuing Education (NIACE) and Office of Communications (Ofcom). Though the survey is not from the U.S., many of the trends and results hold interest for those investigating media literacy concerns. The study investigated the use of technology between 2004 and 2008, measuring differences by gender, age, and social status. Results of the 2008 NIACE survey appear to support many of the suppositions about technology use, namely that younger adults tend to use newer technologies more and in more varied ways. The study found that only very small minorities of adults use the Internet for uploading media (just 25 percent had ever uploaded content online, and only 15 percent showed an interest in doing so if they had the skills). This result was linked not only to missing skills but also to desire. A strong correlation between social class and access to newer technologies was also found.
Interestingly, the authors report "relatively few differences in the proportion of adults accessing mobile phones and CD and DVD players until the age band 65-74" (Aldridge, Tuckett & Lamb 2008, 12), where a significant decrease was observed. The study also noted, "The gap between social classes is increasing" (Aldridge, Tuckett & Lamb 2008, 16).

Communication scholars have investigated digital skills. Hargittai (2005) reviews the methods of her survey on web-based digital literacy levels. She notes that digital literacy is difficult to assess through survey questions alone. While observing a significant body of research focusing on computer skills, including Internet skills, Hargittai (2005) notes that most of this literature is based on individuals' perceptions of their own level of skill rather than any actual measurement or observation of such skills. Basing much of her survey content on questions found in the General Social Survey (GSS) administered by the National Opinion Research Center (NORC), Hargittai's (2005) research measured the ability and speed of respondents' success in finding specific information online. Performance was evaluated on the basis of both effectiveness and efficiency. Additionally, Hargittai's (2005) respondents were presented survey questions designed to measure Internet-related knowledge. Through her analysis, Hargittai's study suggests that "understanding the various computer- and Internet-related terms is positively correlated with users' ability to find content online" (Hargittai 2005). Hargittai makes it clear that the "mere existence" (Hargittai 2005, 376) of online content does not guarantee the ability to navigate the web, thus potentially limiting the benefits users gain from the Internet. She recommends the use of publicly available data from instruments such as the GSS in order to incorporate such measures into large-scale national databases.

As a way to measure new media literacies (NMLs), Literat's survey included a total of 60 questions, five items for each of the 12 skills identified by Jenkins et al. (2006). Items investigated both technology-related and non-technology-related behaviors. All questions were randomized in an effort to maximize the validity of the data. This investigation provided greatly needed information for the media education literature, which had not previously addressed the specific correlations drawn out in this study. More specifically, the researchers in this case interpret "the relationship between media use and media literacy [as] a circular one, involving a virtuous feedback loop…" (Literat 2014, 22). Literat's research claims to support a connection between multimedia creation and new media literacies, finding that respondents with higher NMLs showed a propensity for multimedia creation, with the gap between frequent and infrequent "digital creators" described as "extremely significant" (2014, 21). Also discovered was a significant difference in overall NML skills between bloggers and non-bloggers, with bloggers showing much higher scores in appropriation and networking skills. The researchers also confirmed a connection "between new media literacies and civic engagement, which is emerging as a critical application of NML educational initiatives" (21).

Mihailidis reviews his own research exploring what students in higher education actually take away from courses in media literacy.
His study uses a sample of 239 undergraduate students, incorporating a pre-post/control quasi-experiment (Mihailidis 2009). Students involved in the research were enrolled in either an open-enrollment media literacy course available through the university's journalism school (experimental group) or a course in the College of Education (control group). Mihailidis' multiyear study attempted to discover whether media literacy education prepares students to be engaged citizens of their communities. While the study showed an increase in comprehension, evaluation, and analysis of media messages, it also revealed that students failed to gain an essential understanding of media's role in a democratic society. In fact, Mihailidis states, "teaching media criticism alone can be potentially harmful to students" (Mihailidis 2009, 3). Mihailidis uses this opportunity to outline his recommended plan for media literacy education in higher education classrooms. The author calls for the attainment of "critical skills" on the part of students, followed by transferring these skills to "qualitative learning outcomes" (Mihailidis 2009, 8), including understanding, awareness, and empowerment regarding media's social influence. His five-step plan requires (a) establishing connections between critical skills and critical understanding, (b) critical thinking, not negative thinking, (c) inclusion of "good" media, (d) setting parameters for the classroom, and (e) teaching through a civic lens. More than answering questions, Mihailidis' study inspires further questions for researchers interested in investigating the outcomes of media education in the university setting. In fact, the author identifies the key outcome of post-secondary media education for students as "the ability to transfer their classroom performance into critical thought" (Mihailidis 2009, 11). Once this occurs, he claims, the benefits of media literacy will become evident.

Methodology

The purpose of this study was to develop a valid and reliable quantitative survey providing accurate data about the digital online media literacy of university-level students in order to better understand how digital online media can and should be used within a teaching/learning environment at a university. The literature on media literacy is vast, extensive, primarily qualitative in nature, and includes discussion of traditional media (i.e., broadcasting and print). The focus for this study, however, was on digital online media literacy in higher education.

Construct and function identification. A set of five constructs was identified as a result of commonalities found in literature authored by media literacy content experts. Constructs are the basic principles found to be common throughout the literature and throughout strong media literacy education programs across the country. The constructs identified for this study were: media awareness (MAw), media access (MAc), ethical awareness (EA), media evaluation (ME), and media production (MP). These constructs aided in focusing the development of questions in the instrument by more clearly defining the concept of media literacy. Buckingham (2007, 44) defines several of these constructs, explaining:

Access… includes the skills and competencies needed to locate media content, using the available technologies and associated software. …Understand includes the ability to decode or interpret media, for example, through an awareness of formal and generic conventions, design features and rhetorical devices. It also involves knowledge of production processes, and of patterns of ownership and institutional control, and an ability to critique media, for example, in terms of the accuracy or reliability of their representations of the real world. Finally, create involves the ability to use the media to produce and communicate one's own messages, whether for purposes of self-expression or in order to influence or interact with others.
Other authors likewise define the remaining constructs utilized in this study. As far back as 1993, in a report of the National Leadership Conference on Media Literacy, Aufderheide (1) summarized the different perspectives of media literacy practitioners:

Just as there are a variety of emphases within the media literacy movement, there are different strategies and processes to achieve them. Some educators may focus their energies on analysis--perhaps studying the creation and reception of a television program like The Cosby Show, and thus its significance for a multicultural but racially divided society. Others may emphasize acquiring production skills--for instance, the ability to produce a radio or television documentary or an interactive display on one's own neighborhood. Some may use media literacy as a vehicle to understand the economic infrastructure of mass media, as a key element in the social construction of public knowledge. Others may use it primarily as a method to study and express the unique aesthetic properties of a particular medium.

Nearly 20 years later, in an assessment of digital literacy instruments, Covello (2010, 4-5) explained:

The emergence of Web 2.0, or online social media applications, introduces the additional dimensions of comprehending authorship, privacy and plagiarism … – a mixture of Information Literacy, Technology Literacy, creativity and ethics. …these competences inform objectives and measures of functional, cognitive and ethical proficiencies.

Table 1 illustrates the alignment of the media constructs used in this research with constructs identified by other media literacy research. As the table indicates, nearly all authors recognize the need for evaluation and production skills among a media literate population. The ethical awareness construct was derived from various terms suggested by authors that, grouped together, indicate the authors' sense of ethics regarding the media. Some original terminology from those authors includes verbiage such as "critical thinking, problem solving, and decision making" (Covello & Lei 2010, 10), "media violence and sex, advertising and persuasion" (Rockler-Gladen 2007, 1), and "media contain values" (Worsnop 2004, 3).

A set of six functions was also identified and aligned with each construct in an effort to specify details about user interaction with online media within a consistent framework. The functions were developed and defined by this researcher to serve primarily as a structure for developing survey questions, ensuring that a consistent number and scope of questions would be created for each construct. The functions are commerce and finance, creative expression, education, entertainment, information, and social interaction. Using a 12-step process, a 50-item instrument was created (see Appendix A), allowing a quantitative measurement of digital online media literacy.
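As a rough illustration of how the construct/function framework structures item development, the sketch below enumerates the five-by-six matrix in Python. It is not the author's code; the assumption of four draft items per intersection is simply one reading consistent with the 120-question pool reported below, and is marked as such in the comments.

```python
# Illustrative sketch (not the author's code): the construct x function matrix
# that framed question development for the DOMLA.

CONSTRUCTS = [
    "media awareness (MAw)",
    "media access (MAc)",
    "ethical awareness (EA)",
    "media evaluation (ME)",
    "media production (MP)",
]

FUNCTIONS = [
    "commerce and finance",
    "creative expression",
    "education",
    "entertainment",
    "information",
    "social interaction",
]

# Every construct/function intersection received a small set of draft items.
matrix = [(c, f) for c in CONSTRUCTS for f in FUNCTIONS]
print(len(matrix))  # 30 intersections

# The paper reports four to six draft items per intersection and an initial
# pool of 120 questions; four per intersection is one consistent reading
# (an assumption for illustration only).
ITEMS_PER_INTERSECTION = 4
print(len(matrix) * ITEMS_PER_INTERSECTION)  # 120 draft questions
```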
This paper describes the 12 steps used to develop the instrument: (1) identifying constructs and functions; (2) validating constructs and functions through subject-matter experts (SMEs); (3) developing survey questions; (4) determining face and content validity of the survey questions and rating questions through SMEs; (5) instrument formatting and layout; (6) instrument validation; (7) validating the instrument by focus groups; (8) revising the instrument for validity; (9) pilot testing the beta version of the instrument for reliability; (10) calculating reliability; (11) revising the instrument based on reliability testing; and (12) pilot testing at Time 2 for reliability. These steps are described in the sections that follow.

Table 1 (Media Literacy Constructs Identified by Literature Review) maps each of the five constructs used in this research (media awareness, media access, ethical awareness, media evaluation, and media production) against the sources in which it appears: Alliance for a Media Literate America (2007); Hobbs (2010); Aufderheide (1993); Buckingham (2007); Center for Media Literacy (2011); Covello & Lei (2010); Fedorov (2003); Hobbs (2007); Kaiser Family Foundation (2003); Kellner and Share (2005); Livingstone (2003); Martin and Grudziecki (2006); National Association for Media Literacy Education (NAMLE) (2009); Rockler-Gladen (2007); Thoman & Jolls (2003); Ward-Barnes (2010); and Worsnop (n.d.).

Validating constructs and functions. More than 120 subject-matter experts (SMEs) were identified through the media literacy literature, personal colleagues of the researcher, authorship of various publications, and leadership of media literacy organizations. Of these, 85 SMEs were contacted via email and asked to participate in a Delphi rating of the proposed constructs and functions. Six individuals responded. Upon their agreement, these six SMEs were asked to rate, on a scale of 1 to 5 (1 being low, 5 being high), the strength of value of each of the digital online media constructs and functions based on their knowledge of digital online media literacy principles and concepts. Averaging the scores from the subject-matter experts yielded a rating between 1 and 5 for each construct and function.

Developing survey questions. Using the constructs and functions, I developed four to six items for each construct/function intersection to produce a total of 120 questions. Each question was given a reference number on the original master list for ease of tracking. The second contact with SMEs involved grouping question lists by function. A list of 45-50 questions was sent to each of five different SMEs across the country. SMEs were asked to review and rate each question on a 1-4 scale as to its fit within the construct/function matrix: 1 = no fit, 2 = somewhat fits, 3 = good fit, 4 = perfect fit. Staggering the questions among SMEs ensured that each question was reviewed by two SMEs. After the SMEs reviewed their lists, questions identified as having a low fit with the constructs and functions were discarded. Others were discarded later, after passing through other procedures including pilot testing.

Determining face and content validity of the survey questions. Face validity for the DOMLA was determined through feedback from a number of sources, including SMEs from the field of media literacy research, student respondents from focus groups, and additional comments made by survey respondents (some respondents wrote comments on the survey papers). During this step, two sets of SMEs were referenced: one drawn from the pool of media literacy researchers and the other from a body of researchers studying survey development who were found on campus and referred by advising faculty. Media literacy SMEs were used to validate the constructs and functions and to offer feedback on the initial questions in the early stages of development. Survey development SMEs critiqued the format of question items, assuring that items were structured in a manner conducive to accurate responses. Score combinations from the pair of SMEs reviewing each question were coded to provide a score based on the SME ratings. Score combinations for each question were categorized with a 1 and 1, 1 and 2, or 2 and 2 being low; a 3 and 3, 3 and 4, or 4 and 4 being high; and any combination of a low and a high score being coded as moderate. All questions scoring in the low range were eliminated (14 in total); moderate-range questions were tagged for re-evaluation but remained in the survey, and high-scoring questions also remained in the survey.
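A minimal sketch of this pair-coding rule, written in Python purely for illustration (the author reports the coding scheme but no code), might look like the following; the function name and the threshold logic are assumptions consistent with the categories described above.

```python
# Illustrative sketch of the SME pair-rating rule described above:
# ratings 1-2 count as a low rating, 3-4 as a high rating; both low -> "low",
# both high -> "high", one of each -> "moderate". Low-coded questions were
# eliminated; moderate ones were tagged for re-evaluation but retained.

def code_sme_pair(rating_a: int, rating_b: int) -> str:
    """Combine two SME fit ratings (1-4) into low/moderate/high."""
    def is_high(rating: int) -> bool:
        return rating >= 3
    if is_high(rating_a) and is_high(rating_b):
        return "high"
    if not is_high(rating_a) and not is_high(rating_b):
        return "low"
    return "moderate"

# Example: a question rated 2 by one SME and 4 by the other is kept but tagged.
print(code_sme_pair(2, 4))  # "moderate"
print(code_sme_pair(1, 2))  # "low" -> eliminated
```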
Instrument formatting and layout. Self-report survey items for the DOMLA were designed as statements allowing the respondent to indicate his or her level of agreement with statements addressing attitudes, abilities, or comfort with completing specific online tasks. This format was determined to be most effective because respondents may have the ability to perform a task but choose not to do it. An example might be a respondent who is capable of communicating with friends through a social network such as Facebook but does not want to post personal information online.

When determining the best medium through which to administer the survey instrument, two common formats were considered: electronic, via the World Wide Web (WWW), and paper-and-pencil (PP). Citing Beach (1989), Pettit notes an increase in random response errors within computer-based surveys. Pettit also addresses item nonresponse, supporting Webster and Compeau's (1996) finding that nonresponse was higher in computer-based survey models. Extreme-responding errors were also found to be higher in computer-based surveys (Pettit 2002). As Pettit reviewed the social-desirability aspect of survey responses, two studies (Martin & Nagao 1989; Kiesler & Sproull 1986) based on the Marlowe-Crowne Social Desirability Scale (MCSDS) indicated a preference for PP surveys over an email version. Furthermore, PP response formats continue to be an accepted form among respondents (Pettit 2002, 52).

Hargittai incorporated a unique design in her development of a web-oriented PP digital literacy survey. She included three bogus survey items as well as an "attentiveness question" designed to assure that respondents were paying attention to the survey in general. This design element appears to have particular value within an instrument with many response items. In Hargittai's case, the question added to the reliability of survey responses by identifying the small percentage of respondents marking the incorrect response and eliminating those questionnaires from the final results.
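The screening step this design enables can be sketched in a few lines of Python; the data file, column names, and expected answers below are hypothetical placeholders rather than actual DOMLA items (the real items appear in Appendix A of the original article).

```python
# Illustrative sketch: dropping respondents who miss an attentiveness check.
# File name, column names, and expected answers are hypothetical.
import pandas as pd

responses = pd.read_csv("domla_pilot.csv")  # hypothetical data file

ATTENTIVENESS_KEYS = {
    "att_check_1": 4,  # e.g., an item instructing respondents to mark "Agree"
    "att_check_2": 2,
}

attentive = responses
for item, expected in ATTENTIVENESS_KEYS.items():
    attentive = attentive[attentive[item] == expected]

# Also drop incomplete surveys, mirroring the cleaning described in this paper.
complete_and_attentive = attentive.dropna()
print(len(responses), "collected ->", len(complete_and_attentive), "retained")
```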
The DOMLA incorporates elements of Hargittai's design by embedding "attentiveness questions" among the items to help the researcher identify respondents who may be answering randomly. Further, in order to minimize any errors associated with a potential lack of computer literacy in this population, the DOMLA survey also uses the PP format. As Pett et al. emphasize, simply creating survey items in a list is not sufficient instrument design. Other considerations must include the format, layout, and wording of the instrument. When using a print format, several criteria should be considered, specifically those that ease handling and readability of the instrument as well as clarity and organizational style.

Though Likert (1932) originally used a five-point format for responses, more recent work by Pett et al. recommends five to seven scale steps and suggests that even-numbered scales force respondents to either agree or disagree to some extent with the given statement. Further, these authors observe that "the tendency among respondents was to avoid negative numbers in favor of positive ones" (Pett 2003, 43), thus suggesting the use of positive integers in the response scale. The DOMLA is designed to be administered in a PP format for several reasons: (a) previous research, as well as focus group feedback during this research, reflects a greater response rate for PP instruments; (b) to accommodate those who may not feel comfortable navigating a computer; and (c) in response to research by Pettit showing that PP instruments reduced random-response, nonresponse, and extreme-responding errors, in addition to being preferred overall by respondents. Much of Pettit's research was supported by focus-group feedback in this study.

As Appendix A shows, the DOMLA was printed in a landscape layout to accommodate spacing between text groupings (specifically the Likert scaling items) while keeping the font at a reasonable size and providing clarity for respondents. Each question was placed in a table cell with a border on each side, clearly dividing each question from the next. The Likert scale response options were printed at the top of each page to remind respondents of the scale and shaded in light grey to set this section apart from the rest of the page. A spacer was also placed midway down the page simply to divide the page visually for respondents.

Instrument validation. After coding the results from the original SMEs who validated the constructs, an evaluation rubric was provided to each expert via email, allowing for documented feedback about individual questions and numerical comparison of responses. Following the example of Wasser et al. (2001), the developed questions were randomized. A second set of SMEs was asked about the fit of each question within the previously validated construct and function categories where it had been placed. Constructs with fewer than two questions assigned were reconsidered and further questions developed. Having validated the constructs and functions through face validity and used content validity to assure the development of cogent questions, criterion and construct validity were then addressed as the instrument itself began to take form.

Validating the instrument through focus groups. Two separate classes were used as focus groups.
The first was a group of approximately 18 students in a rhetorical studies class, while the second was a class of approximately 13 students in English composition. Because group feedback was the primary concern at this point in the study, no further demographic data were collected, in an effort to maintain the anonymity of the groups. Early versions of the instrument were administered to a focus group as soon as the questions and structure were validated. These classes were selected for focus group discussion in an attempt to provide as much diversity as possible, since both are required of all students at the university. Each class was presented a separate list of 55 questions. Each list included a minimum of two questions from each construct and function plus two "attentiveness" questions. For each list, the questions were numbered sequentially, with a secondary reference number for use by the researcher. The reference number referred back to the original list of 120 questions and was printed in a smaller font (8 point) so as to be less obvious to respondents. Focus group students were asked to individually complete the survey (these responses were not used for statistical analysis) and to review both the instructions and the individual questions for clarity of wording, form layout, response format, and readability.

Revising the instrument for validity. Responding to feedback received from focus groups and SMEs, the instrument was revised to increase validity. Each focus group was asked about the clarity of question wording and format, their understanding of the individual questions, and how they felt about the layout of the instrument, including font size, placement of various elements (i.e., response checkboxes, item numbers, etc.), coloring of the print and pages, and so forth. Although only a few participants in the focus groups initially suggested changes to the instrument, most participants offered agreement when a suggestion was made. Edits to the instrument were largely based on the comments that received substantial agreement from the group. Suggested changes from SMEs were weighed more heavily because of the SMEs' expertise as researchers. Comments from SMEs came in the form of a modified Delphi evaluation. Therefore, a single suggestion from one SME was more likely to be implemented than a suggestion from a single student.

Pilot testing the beta version of the instrument for reliability. For the initial pilot test of the DOMLA, a convenience sample of undergraduate students enrolled during summer sessions at a university in the Intermountain West region of the U.S. was used. Students were enrolled in a variety of courses, including rhetorical communications, statistics, biology, nursing, and others. These 93 students were used to pilot test the instrument because of their availability to the researcher, based on the agreement of individual instructors to accommodate the research.

Calculating reliability. After gathering results from the first pilot test, a number of statistical tests were conducted in an effort to identify the most accurate analysis. The initial pilot test resulted in 93 total responses. Before analyzing the data, incomplete or unreliable surveys were eliminated from the pool. Eliminated surveys included those where some questions were left unanswered, multiple responses were given, or incorrect responses were given to the "attentiveness" questions. The "attentiveness" questions were placed randomly in the survey in an attempt to identify respondents who may not have been paying attention, intentionally marked random or incorrect answers, or may have misunderstood the instructions. Once incomplete surveys were deleted, 89 responses remained.

Using SPSS, statistical testing included split-half reliability, Cronbach's alpha, and factor analysis of the data from the first pilot test. One of the main advantages of Cronbach's alpha as a tool for measuring the reliability of an instrument is that it eliminates the variability introduced by randomly assigning items to groups that are not unique. Instead, Cronbach's alpha provides an average of all split-half possibilities, thus producing a more stable coefficient.
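For readers unfamiliar with the statistic, a compact sketch of Cronbach's alpha is shown below. This is a generic implementation of the standard formula, not the SPSS procedure the author used, and the function, variable names, and toy data are illustrative.

```python
# Illustrative implementation of Cronbach's alpha:
#   alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score)),
# where k is the number of items. The paper characterizes alpha as an
# average over all split-half possibilities.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = survey items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example with fabricated toy data (5 respondents, 3 items):
toy = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [1, 2, 2]])
print(round(cronbach_alpha(toy), 3))
```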
Early analyses using Cronbach's alpha simply compared the various construct questions, comparing the reliability of each construct against the others. The second analysis eliminated items that appeared weak in the first analysis, based on the SPSS output showing alpha "if item is deleted." Finally, if factor analysis indicated that reliability would increase by eliminating a particular item, that question was removed from the survey. For the next set of analyses, the sets of items within each construct were equalized, in an attempt to identify sets of questions for each construct that balance in quantity with the other constructs, so that the reliability of each construct could be compared with the others. Additional Cronbach's alpha analyses investigated reliability when questions were grouped by function rather than by the intended constructs. The next attempt at scoring the functions eliminated items whose removal produced a potentially higher score.

To re-analyze the question pairings for construct and function, individual pairs of questions were analyzed using Cronbach's alpha and again compared against each other to determine reliability. The goal of this procedure was to identify both strong and weak question pairings and to clarify which pairs of questions actually matched well together. This test attempted to detect which questions needed further review while keeping those question pairings that worked well together. For question sets with more than two items, the weakest items were marked and the reliability test repeated until only the strongest two items of the set remained.
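The "alpha if item deleted" screening reported by SPSS can be sketched as a simple loop. The code below continues the earlier sketch (it reuses the cronbach_alpha helper and the toy data defined there) and is, again, only an illustration of the general technique rather than the author's actual procedure.

```python
# Illustrative sketch: recompute Cronbach's alpha with each item removed in
# turn, mirroring the SPSS "alpha if item deleted" column used to flag weak
# items. Assumes cronbach_alpha() and toy from the earlier sketch.
import numpy as np

def alpha_if_deleted(items: np.ndarray) -> list[float]:
    """Return alpha recomputed with each column (item) dropped."""
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]

# An item is a candidate for removal when dropping it raises alpha above the
# alpha of the full item set.
full_alpha = cronbach_alpha(toy)
for j, a in enumerate(alpha_if_deleted(toy)):
    flag = "consider dropping" if a > full_alpha else "keep"
    print(f"item {j}: alpha if deleted = {a:.3f} ({flag})")
```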
Revising the instrument based on reliability. For the second version of the pilot test instrument, the same formatting was followed as used for the focus groups, with a few minor alterations. While some of these alterations were made simply to enable the researcher to differentiate the new version of the instrument from the previous one, others were based on earlier suggestions from the focus groups. Alterations included changing the question text to a serif font and the response text to a sans serif font (the opposite of the original version). The reference number was also changed from white text on a grey background to black text on a grey background; although this element is intended for use only by the researcher, it became easier to read in photocopies.

Revisions for question wording and pairings from SMEs. Upon confirming the weaknesses in several of the questions from the first pilot test, it was determined that the questions with low reliability required additional review for content and clarity. An additional set of five subject-matter experts was recruited for their expertise in research and survey design to review questions. These SMEs were sent a list of 14 to 16 questions, including the identified construct and function as well as definitions for each, and were asked to review the questions for clarity of wording as well as appropriate pairing with a similar question fitting the same construct/function match. The researcher then met with each SME individually to review each question pairing. Through this process, some survey questions received minor wording changes while others were altered more substantially.

Pilot testing 2 for reliability. For the second pilot test, a larger sample was used. Once again, participants were drawn from several undergraduate university courses, including rhetorical communications, statistics, biology, and nursing. In a small number of cases, respondents self-identified as having previously participated in pilot test 1 (as students in a previous class) and were therefore excused from repeating their responses. Again, no detailed demographic data were collected because of the pilot-testing approach used at this stage in the development of the instrument.

A total of 321 responses were gathered in this second pilot test. Of these, 45 were incomplete, an additional 15 respondents answered the first attentiveness question incorrectly, and five more answered the second attentiveness question incorrectly. Surveys with incorrect responses to attentiveness questions were eliminated from the results, leaving 254 complete responses, or 301 with some degree of response once those with incorrect attentiveness responses were excluded. Although some authors suggest the need for specific analysis of the number of indicators when determining sample size (Westland 2010), a target of 300 responses was judged sufficient for this study because the DOMLA focuses on the five identified media literacy constructs. Each construct was addressed by ten items (two functions per construct), resulting in 3,000 data points per construct at 300 responses. While the target of 300 responses was met only when incomplete questionnaires were included, the 2,540 data points still provided sufficient data to move forward with the analysis.

Calculating reliability. As with the results from the initial pilot test, these results were put through a number of different analyses to determine reliability. All analyses used Cronbach's alpha. Previously identified reversed items were confirmed and adjusted accordingly. The first analysis was listwise, using all items and only fully complete surveys (254). A second analysis was done pairwise, using all responses to each item; this analysis drew on the full set of 301 surveys in order to make the data set as complete as possible.
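The listwise/pairwise distinction can be made concrete with a short sketch. Computing alpha from a covariance matrix makes both approaches easy to express; this is a generic illustration with a hypothetical data file, not the author's SPSS workflow.

```python
# Illustrative sketch: Cronbach's alpha from a covariance matrix, computed
# listwise (complete surveys only) and pairwise (all available responses).
import numpy as np
import pandas as pd

def alpha_from_cov(cov: np.ndarray) -> float:
    k = cov.shape[0]
    return (k / (k - 1)) * (1 - np.trace(cov) / cov.sum())

responses = pd.read_csv("domla_pilot2.csv")  # hypothetical data file

# Listwise: keep only fully complete surveys (254 in the second pilot test).
listwise_alpha = alpha_from_cov(responses.dropna().cov().to_numpy())

# Pairwise: pandas computes each item-pair covariance from all respondents
# who answered both items, so partially complete surveys still contribute.
pairwise_alpha = alpha_from_cov(responses.cov().to_numpy())

print(f"listwise alpha = {listwise_alpha:.3f}, pairwise alpha = {pairwise_alpha:.3f}")
```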
Using listwise analysis, a second calculation was made by eliminating the weakest items within a construct/function intersection whenever more than three items fell within that intersection, leaving two items per pairing, or a total of 50 items. This number was chosen in light of the goal of including two items per construct/function pairing in the final instrument, an effort to develop sufficient items to create a valid and reliable instrument while keeping the length of the survey reasonably manageable.

As Table 2 shows, the ethical awareness (EAw) construct was the weakest in the first analysis (.655 listwise, .649 pairwise). Further analysis was applied in an attempt to strengthen this construct. Since all other constructs scored above .70, this analysis looked only at the EAw construct, testing various combinations of items in an attempt to achieve a score above .70. In this analysis, the weakest item was number 77, under the social interaction (Soc) function; therefore, the various combinations substituted items within this same pairing. The third analysis provided the second-highest score; however, it also eliminated low-scoring items, resulting in a total of only nine items, with a single item in the Soc function, and thus a lopsided overall result (Table 2). The next analysis of the EAw construct went one step further and produced an even higher score, but an additional item was eliminated, leaving only eight items in the set.
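A search like the one described for the EAw construct can be scripted as a small loop over candidate item combinations. The sketch below reuses the alpha_from_cov helper and the responses frame from the previous sketch; the item names are hypothetical, and the code is only meant to illustrate the kind of item-substitution procedure reported here.

```python
# Illustrative sketch: substitute items within the EAw / social interaction
# pairing and recompute alpha for the construct each time, flagging
# combinations that reach the .70 threshold. Item names are hypothetical;
# assumes alpha_from_cov() and responses from the earlier sketch.
from itertools import combinations

EAW_FIXED = ["eaw_com_1", "eaw_com_2", "eaw_edu_1", "eaw_edu_2"]   # hypothetical
EAW_SOC_CANDIDATES = ["eaw_soc_74", "eaw_soc_77", "eaw_soc_81"]    # hypothetical

results = {}
for pair in combinations(EAW_SOC_CANDIDATES, 2):
    items = EAW_FIXED + list(pair)
    results[pair] = alpha_from_cov(responses[items].dropna().cov().to_numpy())

for pair, a in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    status = "meets .70" if a >= 0.70 else "below .70"
    print(pair, f"alpha = {a:.3f} ({status})")
```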
