DEVELOPMENT OF HYBRID COMPUTERIZED ADAPTIVE TESTING SYSTEM BASED ON DIFFERENT ITEM SELECTION AND ITEM EXPOSURE CONTROL METHODS
Main Article Content
Abstract
The purposes of this study were to compare the efficiency of ability estimation between two item selection methods, Maximum Fisher Information (MFI) and Maximum Priority Index (MPI), and between two item exposure control methods, the Sympson-Hetter procedure (SH) and the progressive-restricted standard error (PR-SE) procedure. Responses of 1,000 simulated examinees to an item pool of 600 items were generated, with the probability of a correct answer modeled by the three-parameter logistic model. The termination criterion was a standard error of ability estimation less than or equal to 0.3. Bias, mean squared error (MSE), and the Pearson product-moment correlation between estimated and true abilities were used to compare the efficiency of the methods.
The results indicate that the Maximum Priority Index (MPI) combined with the progressive-restricted standard error (PR-SE) procedure yielded the highest efficiency and accuracy under the termination criterion of a standard error of estimation less than or equal to 0.3.
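The simulation design described above (a 600-item pool under the three-parameter logistic model, item selection by maximum information, an SE-based stopping rule, and evaluation by bias, MSE, and Pearson correlation) can be sketched in code. The sketch below is a minimal illustration, not the authors' implementation: the item-parameter distributions, the examinee sample size, the EAP ability estimator, and the maximum test length are all assumptions for the example, and no exposure control (SH or PR-SE) is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3PL item pool and examinee sample (illustrative sizes and
# distributions, not the study's calibrated parameters):
# a = discrimination, b = difficulty, c = pseudo-guessing.
N_ITEMS, N_EXAMINEES, SE_STOP, MAX_LEN = 600, 100, 0.3, 60
a = rng.uniform(0.5, 2.0, N_ITEMS)
b = rng.normal(0.0, 1.0, N_ITEMS)
c = rng.uniform(0.10, 0.25, N_ITEMS)

def p3pl(theta, a, b, c):
    """Probability of a correct response under the three-parameter logistic model."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b, c):
    """Item information at ability theta under the 3PL model."""
    p = p3pl(theta, a, b, c)
    return a**2 * ((1.0 - p) / p) * ((p - c) / (1.0 - c))**2

# EAP ability estimation over a theta grid with a standard-normal prior.
GRID = np.linspace(-4.0, 4.0, 81)
PRIOR = np.exp(-GRID**2 / 2.0)

def eap(used, resp):
    """Return (EAP estimate, posterior SD) given administered items and responses."""
    post = PRIOR.copy()
    for j, u in zip(used, resp):
        p = p3pl(GRID, a[j], b[j], c[j])
        post *= np.where(u, p, 1.0 - p)
    post /= post.sum()
    theta_hat = float((GRID * post).sum())
    se = float(np.sqrt(((GRID - theta_hat) ** 2 * post).sum()))
    return theta_hat, se

true_theta = rng.normal(0.0, 1.0, N_EXAMINEES)
est = np.empty(N_EXAMINEES)
for i, th in enumerate(true_theta):
    used, resp = [], []
    theta_hat, se = 0.0, np.inf
    # Administer items until SE(theta) <= 0.3 (the study's stopping rule)
    # or a practical maximum test length is reached.
    while se > SE_STOP and len(used) < MAX_LEN:
        info = fisher_info(theta_hat, a, b, c)
        info[used] = -np.inf          # MFI: most informative *unused* item
        j = int(np.argmax(info))
        used.append(j)
        resp.append(bool(rng.random() < p3pl(th, a[j], b[j], c[j])))
        theta_hat, se = eap(used, resp)
    est[i] = theta_hat

# Evaluation criteria used in the study: bias, MSE, Pearson correlation.
bias = float((est - true_theta).mean())
mse = float(((est - true_theta) ** 2).mean())
r = float(np.corrcoef(est, true_theta)[0, 1])
print(f"bias={bias:.3f}  MSE={mse:.3f}  r={r:.3f}")
```

Replacing the MFI line with a priority-weighted index, or accepting each selected item with an exposure-controlled probability, would turn this skeleton into the MPI and SH/PR-SE variants compared in the study.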
Article Details
I and my co-author(s) certify that this article has not been published and is not under consideration for publication in any other journal or published source. We accept the journal's rules for manuscript consideration and agree that the editors have the right to review the manuscript and make recommendations as appropriate. We hereby transfer the rights to the published article to the Panyapiwat Institute of Management. If any claim of copyright infringement arises from the text or graphics appearing in the article, I and my co-author(s) accept sole responsibility.
References
Cheng, Y. & Chang, H. H. (2009). The maximum priority index method for severely constrained item selection in computerized adaptive testing. British Journal of Mathematical and Statistical Psychology, 62, 369-383.
Čisar, S. M., Radosav, D., Markoski, B., Pinter, R. & Čisar, P. (2010). Computer Adaptive Testing of Student Knowledge. Acta Polytechnica Hungarica, 7(4), 139-152.
Davis, L. L. & Dodd, B. G. (2005). Strategies for controlling item exposure in computerized adaptive testing with partial credit model (PEM Research Report No. 05-01). Austin, TX: Pearson Educational Measurement.
Deng, H., Ansley, T. & Chang, H. H. (2010). Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing. Journal of Educational Measurement, 47(2), 202–226.
Georgiadou, E., Triantafillou, E. & Economides, A. A. (2007). A Review of Item Exposure Control Strategies for Computerized Adaptive Testing Developed from 1983 to 2005. The Journal of Technology, Learning, and Assessment, 5(8), 4-37.
Hambleton, R. K. & Xing, D. (2006). Optimal and nonoptimal computer-based test designs for making pass-fail decisions. Applied Measurement in Education, 19, 221-239.
Jodoin, M. G., Zenisky, A. & Hambleton, R. K. (2006). Comparison of the psychometric properties of several computer-based test designs for credentialing exams with multiple purposes. Applied Measurement in Education, 19, 203-220.
Kanjanawasee, S. (2012). Modern Test Theories (4th ed.). Bangkok: Chulalongkorn University Publishing.
Kanjanawasee, S. (2013). Classical Test Theory (7th ed.). Bangkok: Chulalongkorn University Publishing.
Koedsri, A. (2014). Efficiency of two item selection methods in computerized adaptive testing for the testlet response model: A comparison between the Monte Carlo CAT method and the Constraint-Weighted a-stratification method. Doctor of Philosophy Program in Educational Measurement and Evaluation, Chulalongkorn University.
Leroux, A. J., Lopez, M., Hembry, I. & Dodd, B. G. (2013). A Comparison of Exposure Control Procedures in CATs Using the 3PL Model. Educational and Psychological Measurement, 73(5), 857-874.
McClarty, K. L., Sperling, R. A. & Dodd, B. G. (2006). A variant of the progressive-restricted item exposure control procedure in computerized adaptive testing systems based on the 3PL and partial credit models. Paper presented at the annual meeting of the American Educational Research Association, April 7-11, 2006, San Francisco, CA.
Patsula, L. N. (1999). A comparison of computerized-adaptive testing and multi-stage testing. Unpublished doctoral dissertation, University of Massachusetts, Amherst.
Promsit, P. (2012). A Study of Efficiency of Multidimensional Computerized Adaptive Testing. Doctor of Philosophy Thesis in Educational Measurement and Evaluation, Graduate School, Khon Kaen University.
Sympson, J. B. & Hetter, R. D. (1985). Controlling item-exposure rates in computerized adaptive testing. Proceedings of the 27th annual meeting of the Military Testing Association (pp. 973-977). San Diego, CA: Navy Personnel Research and Development Center.
Wang, S., Lin, H., Chang, H. & Douglas, J. (2016). Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design. Journal of Educational Measurement, 53(1), 45-62.