The main purpose of this paper is to present theoretical approaches and methods for optimizing assessment in the accreditation of public health specialists. The results presented include a model of multistage adaptive measurement and two methods for reliability and validity analysis that support fair, well-justified accreditation decisions and meet the requirements of high-stakes testing procedures. Assessment optimization aims to minimize testing time while increasing the reliability and validity of the resulting data. For this purpose, a special measurement model based on multistage adaptive testing is proposed. Using this model in assessment design makes it possible to combine the advantages of traditional adaptive testing and linear testing while minimizing their disadvantages, so the model is recommended as the dominant one for accreditation assessment. To increase validity, an approach based on Structural Equation Modeling (SEM) is proposed. SEM makes it possible to analyze the significance of relations between observed and latent variables that can be interpreted as causal effects, and to construct a model of those relations. An example model of the causal relations between disciplines, latent variables (competencies), and factors is given; it helps to improve the construct and content validity of the measurement instrument used in public health accreditation. The proposed method for estimating reliability in multistage measurements is innovative: it has a branching structure, because the reliability of a multistage measurement depends not only on the reliability of the separate stages but also on the correlations between them. The presented approaches increase the validity and reliability of decisions in the assessment of public health specialists and in other spheres of accreditation assessment.
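The dependence of multistage reliability on both stage reliabilities and inter-stage correlations can be illustrated with a minimal sketch. The code below uses the classical stratified-alpha composite, not the paper's own branching method, which is not reproduced here; the function name and its inputs are illustrative assumptions. The inter-stage correlations enter through the variance of the total score, which contains the covariances between stages.

```python
import numpy as np

def stratified_alpha(stage_scores, stage_alphas):
    """Composite reliability of a multistage measurement (stratified alpha).

    stage_scores: (n_examinees, n_stages) array of stage total scores.
    stage_alphas: reliability estimate for each stage.
    The total-score variance includes inter-stage covariances, so higher
    correlations between stages raise the composite reliability.
    """
    stage_scores = np.asarray(stage_scores, dtype=float)
    stage_alphas = np.asarray(stage_alphas, dtype=float)
    stage_var = stage_scores.var(axis=0, ddof=1)       # per-stage variances
    total_var = stage_scores.sum(axis=1).var(ddof=1)   # includes covariances
    return 1.0 - np.sum(stage_var * (1.0 - stage_alphas)) / total_var
```

For example, two perfectly correlated stages with reliability 0.8 each yield a composite reliability of 0.9, higher than either stage alone, because the inter-stage covariance inflates the total-score variance.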