Paper and Pencil vs. Computerized Experiments (case 5457)
[I][edited from support email][/I]
I have a methodological question for you. I am using MediaLab to replicate a study originally conducted with pencil and paper. Is it known whether computer administration of an instrument affects the results of the research?
more resources on web surveys
Here are some additional readings that have proven helpful in my own work with web-based surveys:
Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web survey design and administration. [I]Public Opinion Quarterly, 65[/I], 230–253.
Dillman, D. A., & Smyth, J. D. (2007). Design effects in the transition to Web-based surveys. [I]American Journal of Preventive Medicine, 32[/I], S90–S95.
Dillman, D. A., Reips, U.-D., & Matzat, U. (2010). Advice in surveying the general public over the Internet. [I]International Journal of Internet Science, 5[/I], 1–4.
Ganassali, S. (2008). The influence of the design of Web survey questionnaires on the quality of responses. [I]Survey Research Methods, 2[/I], 21–32.
Gosling, S. D., Vazire, S., Srivastava, S., & John, O. P. (2004). Should we trust Web-based studies? A comparative analysis of six preconceptions about Internet questionnaires. [I]American Psychologist, 59[/I], 93–104.
Sax, L. J., Gilmartin, S. K., & Bryant, A. N. (2003). Assessing response rates and nonresponse bias in web and paper surveys. [I]Research in Higher Education, 44[/I], 409–432.
non-web-based computer-adaptive and paper-and-pencil test equivalence
Below, I append several references to studies of the equivalence of (non-web-based) computer-adaptive tests and paper-and-pencil tests. Most of these concern tests with correct and incorrect responses, rather than subjective self-reports. The general conclusion seems to be that individual characteristics predict mode-of-assessment effects better than item characteristics do (i.e., most tests are not any easier or more difficult when administered by computer, whether adaptively or non-adaptively).
Clausing, C., & Schmitt, D. (1990). [I]Paper versus CRT: Are reading rate and comprehension affected?[/I] Paper available from ERIC. (ERIC Document Reproduction Service No. ED323924)
Gould, J. D., Alfaro, L., Finn, R., Haupt, B., & Minuto, A. (1987). Reading from CRT displays can be as fast as reading from paper. [I]Human Factors, 29[/I], 497–517.
Lee, J., Moreno, K. E., & Sympson, J. B. (1986). The effects of mode of test administration on test performance. [I]Educational and Psychological Measurement, 46[/I], 467–473.
Lunz, M. E., & Bergstrom, B. A. (1994). An empirical study of computerized adaptive test administration conditions. [I]Journal of Educational Measurement, 31[/I], 251–263.
Mason, B. J., Patry, M., & Bernstein, D. J. (2001). An examination of the equivalence between non-adaptive computer-based and traditional testing. [I]Journal of Educational Computing Research, 24[/I], 29–39.
Mazzeo, J., & Harvey, A. (1988). [I]The equivalence of scores from automated and conventional educational and psychological tests[/I] (College Board Rep. No. 88-8). New York: College Entrance Examination Board.
Olsen, J. B., Maynes, D. D., Slawson, D., & Ho, K. (1989). Comparison of paper-administered, computer-administered and computerized adaptive achievement tests. [I]Journal of Educational Computing Research, 5[/I], 311–326.
Pomplun, M., & Custer, M. (2005). The score comparability of paper-and-pencil and computer K-3 reading tests. [I]Journal of Educational Computing Research, 32[/I], 153–166.
Pomplun, M., Custer, M., & Ritchie, T. D. (2006). Factors in paper-and-pencil and computer reading score differences at the primary grades. [I]Educational Assessment, 11[/I], 127–143.
Zandvliet, D., & Farragher, P. (1997). A comparison of computer administered and written tests. [I]Journal of Research on Computers in Education, 29[/I], 423–438.