New Horizons in Adult Education and Human Resource Development
Volume 21, Number 1/2, Winter/Spring 2007

PERSPECTIVES ON RESEARCH

Survey Nonresponse Bias in Social Science Research

Thomas G. Reio, Jr., PhD
Educational Leadership, Foundations, and Human Resource Education
University of Louisville

Surveys continue to be one of the primary research methods in social science research, as they have been useful for exploring subjects ranging from attitudes and intentions to motivations and behaviors, to name but a few. Notwithstanding, response rates in survey research continue to decline (Rogelberg, Conway, Sederburg, Spitzmüller, Aziz, & Knight, 2003) despite the development of more systematic procedures to optimize study participation, such as Dillman's Tailored Design Method (Dillman, 2007). Web-based surveys have drawn researcher attention, too, because they seem to be a relatively fast and inexpensive means of conducting research, yet they are often plagued by even lower response rates than traditional surveys (Sheehan & Hoy, 1999; Simsek & Veiga, 2001).

According to Rogelberg et al. (2003), low response rates in survey research can be problematic for three important reasons. First, lower response rates mean fewer participants, which reduces statistical power and prevents the use of certain statistical procedures. Second, low response rates can reduce the perceived credibility of the study's findings: when faced with unfavorable results, survey sponsors often cite the low response rate as "the issue" instead of considering plausible alternatives. Third, low response rates can generate biased samples in which study participants are systematically different from nonrespondents. Each of these issues can unnecessarily limit the generalizability of the findings to the research population.

In this perspective, I explore the third issue, nonresponse bias, because it is probably the most serious of these concerns (Rogelberg et al., 2003) and because of its practical implications for the application of social science research in fields like adult education and human resource development (Bartlett, Reio, & Bartlett, in press). By understanding how survey respondents compare with nonrespondents on key demographic and research variables, researchers can generalize their findings more accurately and confidently.

Rogelberg and Luong (1998) state that "Nonresponse bias occurs when the individuals responding to a survey differ from nonrespondents on variables relevant to the survey topic" (p. 60). Quantitatively speaking, nonresponse bias is defined as:

    nonresponse bias = NR × (X̄_res − X̄_non)

where NR is the proportion of nonrespondents in the sample, X̄_res is the mean of the respondents, and X̄_non is the mean of the nonrespondents.

I must point out that nonresponse in survey research does not necessarily mean that there is nonresponse bias. Strangely enough, a sample may show no nonresponse bias even though its response rate is very low. For example, at an organizational training center where morale may be dismal, the mean of respondents' morale might be 4.0 with a 10% response rate; yet the mean of nonrespondents, if collected, could be 4.0 as well. Thus, there is no evidence of nonresponse bias. However, if the difference between the two group means is substantial, nonresponse bias is evident even when the response rate is very high (Rogelberg & Luong, 1998). Each finding obviously would have implications for appropriately interpreting and generalizing the results to the research population.
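To make the formula and the two scenarios above concrete, here is a minimal sketch in Python; the function name is mine, and the second, high-response-rate case uses invented numbers purely for illustration.

```python
def nonresponse_bias(nr_proportion, mean_respondents, mean_nonrespondents):
    # Rogelberg and Luong (1998): bias = NR × (X̄_res − X̄_non)
    return nr_proportion * (mean_respondents - mean_nonrespondents)

# Training-center example from the text: a 10% response rate gives NR = 0.90,
# but identical group means leave the bias estimate at zero.
print(nonresponse_bias(0.90, 4.0, 4.0))  # 0.0

# Contrasting hypothetical case: a high 80% response rate (NR = 0.20) with a
# substantial gap between group means still yields a nonzero bias estimate.
print(nonresponse_bias(0.20, 4.0, 2.5))  # ≈ 0.3
```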
Survey nonresponse is typically classified into four discrete categories: inaccessibility, inability, carelessness, and active noncompliance (Rogelberg et al., 2003). Inaccessibility means that the respondent never received the survey, whereas inability means that the respondent was not able to respond due to illness or the like. Carelessness means the respondent misplaced the survey, whereas active noncompliance means the respondent consciously decided not to participate in the study.

Active noncompliance in particular has received the most research attention. It tends to occur because the prospective participant is not interested in the topic; assessing interest levels and then testing for possible relationships between interest and the research variable responses may therefore provide valuable insight into the extent to which interest is associated with the variable in question. In addition, research has demonstrated only limited empirical support for personality (e.g., responsible, intellectual) and sociodemographic (e.g., age, gender, socioeconomic status) variables as reasons for noncompliance. Interestingly, McFarlane, Olmstead, Murphy, and Hill (2007) discovered that male responders tended to be early responders, but possible gender bias was reduced through repeated mailings.
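One way the interest check described above might be run is a simple correlation between self-reported interest and scores on the research variable. The sketch below assumes Python with SciPy; the variable names and data are invented purely for illustration.

```python
from scipy import stats

# Hypothetical data: each respondent's self-reported interest in the survey
# topic (1-5) alongside his or her score on the research variable.
interest     = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
research_var = [4.2, 3.8, 4.0, 3.1, 4.5, 2.9, 3.7, 3.3, 4.4, 3.9]

# A Pearson correlation tests whether interest is associated with the
# variable in question; a strong association suggests that uninterested
# nonrespondents might have answered differently had they participated.
r, p = stats.pearsonr(interest, research_var)
print(f"r = {r:.2f}, p = {p:.4f}")
```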
I would like to introduce the notion that active noncompliance may also occur for other reasons, such as fear of being exposed in situations marked by an asymmetry of power, or when the prospective respondent could be easily identified because of his or her ethnicity or job title. Further, some may object to the perceived undue influence that surveys have on news coverage, politics, and public policy and thus choose not to participate; exit polls at polling sites during elections, for example, have received increased scrutiny in recent years. It is intriguing that very little research has addressed this part of the active noncompliance issue systematically. In general, the most salient reasons for noncompliance have been lack of interest and lower education levels (Rogelberg & Luong, 1998).

Thus, researchers need to acknowledge these bias possibilities in their study designs by developing strategies to optimize response rates and thereby reduce the likelihood of nonresponse bias. Although a full treatment is beyond the scope of this article, Dillman's (2007) Tailored Design Method has particular promise for improving survey response rates. In essence, through a systematic effort of survey personalization, pre-notification, timed mailings, and follow-ups, response rates for studies using Dillman's method can be as high as 85% for mailed surveys and 55% for web-based surveys; telephone survey response rates closely mirror mail response rates when the method is employed. Response rates can be increased further with token incentives, but incentives may introduce another source of bias because those induced to participate may differ significantly from nonrespondents on the research variable.

A number of strategies can be used to detect the presence of nonresponse bias after study completion. Rogelberg et al. (2003) describe four approaches: archival, follow-up, wave, and intention.

The archival approach consists of comparing respondents and the population on the research variables. For example, personnel records linked to respondents can be compared directly with those of nonrespondents. Archival data tend to be demographic, however, and thus have limited utility.

The follow-up approach involves contacting a random sample of nonrespondents by phone and having them complete a shortened version of the survey. Respondents' and nonrespondents' scores are subsequently compared to test for possible differences; however, this method may be subject to socially desirable responding.

The wave approach compares the scores of those who met the survey completion deadline against those of respondents who were late completers, on the assumption that late responders are more like nonrespondents than on-time respondents are. The major limitation of this approach is that actual nonrespondents are not studied.

Finally, the intention approach entails asking potential respondents about their intention to complete the survey and then surveying them anyway. Researchers then compare the scores of those who intended to participate with the scores of those who did not. As with the wave approach, actual nonrespondents are not examined.
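As a rough sketch of how the wave approach's comparison might be carried out, the Python example below runs an independent-samples t-test between on-time and late completers; the data are invented for illustration, and the same two-group pattern would serve the follow-up and intention approaches as well.

```python
from scipy import stats

# Hypothetical scores on the survey's focal variable, split by return wave:
# responses received by the deadline versus those arriving only after
# follow-up mailings.
on_time = [4.1, 3.9, 4.3, 4.0, 3.8, 4.2, 4.4, 3.7]
late    = [3.2, 3.5, 3.0, 3.6, 3.1, 3.4]

# Under the wave approach's assumption that late responders resemble
# nonrespondents, a significant mean difference flags possible bias.
t, p = stats.ttest_ind(on_time, late)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A nonsignificant difference is only preliminary reassurance, of course, since actual nonrespondents still go unexamined.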
The issue of nonresponse is an insidious one. Researchers and practitioners alike must be aware that failing to examine survey results for possible nonresponse bias limits their generalizability to the research population. The problem is that much of the published social science literature, to its possible detriment, overlooks the topic. The fields of adult education and human resource development certainly can benefit from carefully addressing this issue in published research.

Besides taking appropriate steps to facilitate optimal response rates, we can minimize the likelihood of nonresponse bias by specifically testing for it in a number of meaningful ways (Rogelberg et al., 2003). Although the archival, follow-up, wave, and intention approaches have limitations, they can at the very least provide a preliminary sense of the data's representativeness. If testing for nonresponse bias is not possible, the researcher should at least acknowledge the possibility of nonresponse bias. Productively addressing possible nonresponse bias in survey research can only serve to increase its utility and credibility.

References

Bartlett, J. E., II, Reio, T. G., Jr., & Bartlett, M. E. (in press). Analysis of nonresponse bias in survey research for business. Delta Pi Epsilon.

Dillman, D. A. (2007). Mail and internet surveys: The tailored design approach (2nd ed.). Hoboken, NJ: Wiley & Sons.

McFarlane, E., Olmstead, M. G., Murphy, J., & Hill, C. A. (2007). Nonresponse bias in a mail survey of physicians. Evaluation & the Health Professions, 30, 170-185.

Rogelberg, S. G., Conway, J. M., Sederburg, M. E., Spitzmüller, C., Aziz, S., & Knight, W. E. (2003). Profiling active and passive nonrespondents to an organizational survey. Journal of Applied Psychology, 88, 1104-1114.

Rogelberg, S. G., & Luong, A. (1998). Nonresponse to mailed surveys: A review and guide. Current Directions in Psychological Science, 7, 60-65.

Sheehan, K. B., & Hoy, M. G. (1999). Flaming, complaining, abstaining: How online users respond to privacy concerns. Journal of Advertising, 28, 37-51.

Simsek, Z., & Veiga, J. F. (2001). A primer on internet organizational surveys. Organizational Research Methods, 4, 218-235.

Dr. Reio teaches human resource education, adult development, and educational psychology at the University of Louisville, Louisville, KY. He is an active researcher in the areas of workplace motivation and socialization and how they are linked to learning-related outcomes. His current foci are curiosity, risk taking, and research methods. As an interdisciplinary researcher, he has published widely in developmental and educational psychology journals, as well as in adult education and human resource development journals. He has authored or co-authored a number of recent articles concerning research methods and their appropriate use.
