RESEARCH PAPER
Development and Validation of the Middle Grades Computer Science Concept Inventory (MG-CSCI) Assessment
 
Affiliations
1. North Carolina State University, USA
2. University of Florida, USA
 
Online publication date: 2020-02-14
 
 
 
Corresponding author: Eric Wiebe, North Carolina State University
 
 
EURASIA J. Math. Sci. Tech. Ed. 2020;16(5):em1841
 
ABSTRACT
The increasing interest in computer science (CS) and CS-integrated STEM teaching and learning has created a need for assessment instruments that can be used to evaluate the efficacy of innovative instructional approaches to K-12 CS education. However, there is a lack of validated assessment tools aligned to core CS concepts for younger students. This paper reports on the development and validation of a CS concept assessment for middle grades (ages 11-13) students. A total of 27 multiple-choice items were developed, guided by focal knowledge, skills, and abilities associated with the concepts of variables, loops, conditionals, and algorithms. These items were presented in the form of block-based programming code and administered to 457 middle grades students during a week-long computational modeling intervention. A combination of classical test theory and item response theory approaches was used to validate the assessment. Twenty-four of the 27 items were found to be valid and reliable measures of CS conceptual understanding. The results also suggested that the assessment can be used as a pre- and post-test to investigate students' learning gains. This work fills an important gap by providing a key resource for researchers and practitioners interested in assessing middle grades students' CS conceptual understanding.
eISSN:1305-8223
ISSN:1305-8215