Estimation of Item Parameter Indices of NECO Mathematics Multiple Choice Test Items among Nigerian Students

Authors

  • Jimoh Kasali, Obafemi Awolowo University, Ile-Ife
  • Adediwura Alaba Adeyemi, Obafemi Awolowo University, Ile-Ife

DOI:

https://doi.org/10.21580/jieed.v2i1.10187

Keywords:

item response parameters, item response theory, one-parameter logistic, two-parameter logistic, three-parameter logistic, item discrimination, item difficulty, guessing parameter

Abstract

The study estimated the difficulty, discrimination, and vulnerability to guessing of the 2016 National Examination Council (NECO) Mathematics multiple-choice test items. The study employed an ex-post-facto design with a sample of 276,338 examinees. The research instruments were the Optical Mark Record sheets for the NECO June/July 2016 Mathematics objective items. The testees' responses were scored dichotomously, and the data were calibrated with the three-parameter logistic (3PL) model. The results showed that most items in the 2016 NECO Mathematics test were good, as their difficulty parameter estimates fell within the acceptable range of -2 to +2. The results also indicated that only 21.7% of the items had very good discriminating power, while the majority had poor discriminating power; that is, most items were not effective in discriminating between examinees with the required ability and those who lacked it. Finally, the results revealed that the items were not vulnerable to guessing (86.7% of the items were good in terms of the guessing parameter). The study concluded that the 2016 NECO Mathematics test items were good with respect to the difficulty and guessing parameter indices.
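
For readers unfamiliar with the calibration model, a minimal sketch of the three-parameter logistic (3PL) item characteristic curve is given below. Under the 3PL model, the probability of a correct response is P(θ) = c + (1 − c) / (1 + e^(−a(θ − b))), where b is the item difficulty, a is the discrimination, and c is the pseudo-guessing parameter. The item parameter values in the sketch are hypothetical illustrations, not estimates from the NECO data; only the -2 to +2 difficulty range is taken from the study.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL item characteristic curve: probability that an examinee of
    ability theta answers correctly, given discrimination a,
    difficulty b, and pseudo-guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical calibrated estimates for three items (illustrative values,
# not taken from the NECO data).
items = [
    {"a": 1.2, "b": -0.5, "c": 0.18},  # discriminating, moderately easy
    {"a": 0.3, "b": 2.6,  "c": 0.12},  # flat curve, very hard
    {"a": 0.9, "b": 0.4,  "c": 0.40},  # high guessing asymptote
]

for i, it in enumerate(items, start=1):
    ok_b = -2.0 <= it["b"] <= 2.0      # difficulty range used in the study
    p_avg = p_3pl(0.0, it["a"], it["b"], it["c"])  # P(correct) at theta = 0
    print(f"Item {i}: b={it['b']:+.1f} "
          f"({'within -2..+2' if ok_b else 'outside -2..+2'}), "
          f"P(theta=0)={p_avg:.2f}")
```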

Author Biography

Jimoh Kasali, Obafemi Awolowo University, Ile-Ife

Department of Educational Foundations and Counseling

Published

30-03-2022

How to Cite

Kasali, J., & Adeyemi, A. A. (2022). Estimation of Item Parameter Indices of NECO Mathematics Multiple Choice Test Items among Nigerian Students. Journal of Integrated Elementary Education, 2(1), 43–54. https://doi.org/10.21580/jieed.v2i1.10187

Issue

Vol. 2 No. 1 (2022)

Section

Articles
