Estimation of Item Parameter Indices of NECO Mathematics Multiple Choice Test Items among Nigerian Students

Jimoh Kasali  -  Obafemi Awolowo University, Ile-Ife, Nigeria
Adediwura Alaba Adeyemi*  -  Obafemi Awolowo University, Ile-Ife, Nigeria

(*) Corresponding Author

The study estimated the difficulty, discrimination, and vulnerability-to-guessing indices of the 2016 National Examination Council (NECO) Mathematics multiple-choice test items. The study employed an ex post facto design with a sample of 276,338 examinees. The research instruments were the Optical Mark Record sheets for the NECO June/July 2016 Mathematics objective items. Testees' responses were scored dichotomously, and the data were calibrated using the four-parameter logistic (4PL) model. The results showed that most items in the 2016 NECO Mathematics test were good, as their difficulty parameters fell within the acceptable range of -2 to +2. The results also indicated that only 21.7% of the items had very good discriminating power, while the majority had poor discriminating power, implying that most items did not effectively discriminate between examinees who possessed the required ability and those who lacked it. Finally, the results revealed that the test items were not vulnerable to guessing (86.7% of the items were good in terms of the guessing parameter). The study concluded that the 2016 NECO Mathematics test items were good with respect to the difficulty and guessing parameter indices.
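
For readers unfamiliar with the model, the four-parameter logistic (4PL) model gives the probability that an examinee with ability θ answers item i correctly as

P_i(θ) = c_i + (d_i − c_i) / (1 + exp(−a_i(θ − b_i)))

where a_i is the discrimination, b_i the difficulty, c_i the lower asymptote (guessing), and d_i the upper asymptote parameter. The sketch below is not the authors' code; it is a minimal Python illustration of how calibrated 4PL parameters might be screened against cutoffs like those discussed in the abstract, assuming the difficulty range of -2 to +2 reported in the study, Baker's (2001) commonly cited threshold of a ≥ 1.35 for high discrimination, and an assumed guessing cutoff of c ≤ 0.35.

import numpy as np

def icc_4pl(theta, a, b, c, d=1.0):
    # Probability of a correct response under the 4PL model.
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

def screen_item(a, b, c):
    # Classify one item against illustrative cutoffs:
    # - difficulty acceptable within -2 to +2 (as in the study),
    # - a >= 1.35 taken as "high" discrimination (Baker, 2001),
    # - c <= 0.35 taken as "not prone to guessing" (assumed cutoff).
    return {
        "difficulty_ok": -2.0 <= b <= 2.0,
        "high_discrimination": a >= 1.35,
        "guessing_ok": c <= 0.35,
    }

# Example: a moderately hard item with weak discrimination.
print(round(icc_4pl(theta=0.0, a=0.80, b=1.20, c=0.15), 3))  # ~0.385
print(screen_item(a=0.80, b=1.20, c=0.15))
# {'difficulty_ok': True, 'high_discrimination': False, 'guessing_ok': True}

In practice the parameters a, b, c, and d would first be estimated from the dichotomous response matrix with IRT software (e.g., the R mirt package); the screening step shown here is only the interpretation stage.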

Keywords: Item response parameters; item response theory; one-parameter logistic; two-parameter logistic; three-parameter logistic; item discrimination; item difficulty; guessing parameter


Open Access Copyright (c) 2022 Jimoh Kasali, Adediwura Alaba Adeyemi
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
