Measurement of an Early Science and Mathematics Standard Instrument: Performance Assessment and Psychometric Setting Using the ZPD Concept
Objective: The purpose of this study is to validate an instrument that assesses the Early Science and Mathematics Standard in the domain of science and technology. Measurement standards for children in the early years include performance in thinking skills. The design combines a theoretical framework representing the Early Science and Mathematics Standard with the Zone of Proximal Development (ZPD). Method/Analysis: We developed a performance assessment standard and a scoring set (rubric) for measuring children's responses. Participants were 30 two-year-old children and 30 three-year-old children drawn from several child-care centres. Twenty-one items were assessed, covering scientific attitude, scientific skills, investigation of the nature of life, pre-number experience, concepts of number, shapes and space, and construction. Mathematical models such as the Rasch model have provided useful representations of social science problems, coordinating data with the requirements of a useful definition of measurement. Findings: The study applies Many-Facet Rasch Measurement (MFRM), an extension of the Rasch model, together with the children's measurement standards to examine the validity and reliability of scores on the performance rating scale. Infit statistics of .76 (two-year-olds) and 1.46 (three-year-olds) for higher responses support the validity of the scores. For reliability, person reliability is .99, rater reliability .97, domain reliability 1.00, and item reliability .96 for the overall scoring measure. Application/Improvement: This paper provides educators and researchers with a useful, valid, and reliable tool to facilitate measurement in the early childhood years.
Keywords: Early Science and Mathematics, Domain of Science and Technology, Performance Assessment Standard, Mathematical Model, MFRM
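The MFRM analysis described in the abstract models the log-odds of each rating category as person ability minus item difficulty minus rater severity minus a category threshold, and the reported infit statistics are information-weighted mean-squares of the residuals. The sketch below illustrates these two ideas only; the function names, the rating-scale parameterisation, and all parameter values are illustrative assumptions, not the authors' estimation code.

```python
import math

def mfrm_category_probs(theta, beta, alpha, tau):
    """Category probabilities under a many-facet Rasch rating-scale model.

    theta: person ability, beta: item difficulty, alpha: rater severity,
    tau: list of step thresholds (K thresholds -> K+1 categories 0..K).
    Cumulative logit for category k is the sum of (theta - beta - alpha - tau_h).
    """
    logits = [0.0]
    cum = 0.0
    for t in tau:
        cum += theta - beta - alpha - t
        logits.append(cum)
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def infit_mnsq(observed, expected, variances):
    """Infit mean-square: sum of squared residuals over sum of model variances.

    Values near 1.0 indicate model-data fit, matching how the abstract's
    infit statistics (.76 and 1.46) are conventionally read.
    """
    numerator = sum((o - e) ** 2 for o, e in zip(observed, expected))
    return numerator / sum(variances)
```

Raising `theta` (or lowering the rater severity `alpha`) shifts probability mass toward higher rating categories, which is how a severity facet separates a harsh rater from a genuinely low-performing child.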
- Scott-Little, C., Kagan, S. L., & Frelow, V. S. (2003a). Creating the conditions for success with early standards: Results from a national study of state-level standards for children's learning prior to kindergarten. Early Childhood Research & Practice, 5(2). Retrieved August 4, 2014, from http://ecrp.illinois.edu/v5n2/little.html
- Scott-Little, C., Kagan, S. L., & Frelow, V. S. (2003b). Standards for preschool children's learning and development: Who has standards, how were they developed, and how are they used? Greensboro, NC: SERVE. Retrieved August 4, 2014, from http://www.serve.org/FileLibraryDetails.aspx?id=78
- Visser, L., Ruiter, S. A. J., van der Meulen, B. F., Ruijssenaars, W. A. J. J. M., & Timmerman, M. E. (2012). A Review of Standardized Developmental Assessment Instruments for Young Children and Their Applicability for Children With Special Needs. Journal of Cognitive Education and Psychology, 11(2), 102-127.
- Madhabi, B. (1999). Validation of Scores/Measures from a K-2 Developmental Assessment in Mathematics. Educational and Psychological Measurement, 59(4), 694-715.
- Brown, R. (1989). Testing and thoughtfulness. Educational Leadership, 7, 31-33.
- Harrington, H. L., Meisels, S. J., McMahon, P., Dichtelmiller, M. L., & Jablon, J. R. (1997). Observing, documenting, and assessing learning: The work sampling system handbook for teacher educators. Ann Arbor, MI: Rebus.
- Meisels, S. J. (1993). Remaking classroom assessment with the Work Sampling System. Young Children, 55, 16-19.
- Wortham (2012). Assessment in early childhood education (6th ed.). Upper Saddle River, NJ.
- Kagan, S. L., & Britto, P. R. (2005). Going Global with Indicators of Child Development. Final Report to UNICEF. New York: United Nations Children's Fund.
- Huot, B., & Neal, M. (2006). Writing assessment: A technohistory. In C. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 417-432). New York: Guilford Press.
- White, E. (1985). Teaching and assessing writing. San Francisco: Jossey-Bass.
- Shermis, M. D., Burstein, J., & Leacock, C. (2006). Applications of computers in assessment and analysis of writing. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research. New York: Guilford Publications.
- Knoch, U. (2009). Diagnostic writing assessment: The development and validation of a rating scale. Frankfurt, Germany: Lang.
- Hills, T. W. (1993). Reaching potentials through appropriate assessment. In S. Bredekamp & T. Rosegrant (Eds.), Reaching potentials: Appropriate curriculum and assessment for young children (pp. 43-64). Washington, DC: National Association for the Education of Young Children.
- Schweinhart, L. J. (1993). The High/Scope Child Observation Record study. Educational and Psychological Measurement, 53, 445-454.
- Scott-Little, C., Lesko, J., Martella, J., & Milburn, P. (2007). Early learning standards: Results from a national survey to document trends in state-level policies and practices. Retrieved August 4, 2014, from http://ecrp.uiuc.edu/v9n1/Little.html
- Tudge, J. R. H. (1992). Processes and consequences of peer collaboration: A Vygotskian analysis. Child Development, 63, 1365.
- Morrison, G. S. (2011). Early Childhood Education Today (12th ed.). Upper Saddle River, NJ.
- Eckes, T. (2008). Rater types in writing performance assessments: A classification approach to rater variability. Language Testing, 25, 155-185.
- Lumley, T. (2005). Assessing second language writing: The rater’s perspective. Frankfurt, Germany: Lang.
- Wolfe, E. W. (1997). The relationship between essay reading style and scoring proficiency in a psychometric scoring system. Assessing Writing, 4, 83-106.
- McNamara, T. F. (1996). Measuring Second Language Performance. London: Longman.
- Eckes, T. (2005). Examining rater effects in TestDaF writing and speaking performance assessments: A many-facet Rasch analysis. Language Assessment Quarterly, 2, 197-221.
- Weigle, S.C. (2002). Assessing writing. Cambridge, UK: Cambridge University Press.
- Linacre, J.M. (1994). Sample Size and item calibration (or person measure) stability. Rasch Measurement Transactions, 11, 546-547.
- Engelhard, G. (1992). The measurement of writing ability with a many-facet Rasch model. Applied Measurement in Education, 5, 171-191.
- Smith, A. V., & Kulikowich, J. M. (2004). An application of generalizability theory and many-facet Rasch measurement using a complex problem-solving skills assessment. Educational and Psychological Measurement, 64, 617-639.
- Lunz, M.E., Wright, B.D., & Linacre, J.M. (1990). Measuring the impact of judge severity on examination scores. Applied Measurement in Education, 3, 331-345.
- Lunz, M. E., Stahl, J. A., & Wright, B. D. (1996). The invariance of judge severity calibration. In G. Engelhard & M. Wilson (Eds.), Objective measurement: Theory into practice (Vol. 3, pp. 99-112). Norwood, NJ: Ablex.
- Krechevsky, M. (1991). Project Spectrum: An innovative assessment alternative. Educational Leadership, 49(6), 43-48.
- Camp, R. (1993). The place of portfolios in our changing views of writing assessment. In R. E. Bennett & W. C. Ward (Eds.), Construction versus choice in cognitive measurement: Issues in constructed response, performance testing, and portfolio assessment (pp. 183-212). Hillsdale, NJ: Lawrence Erlbaum Associates.
- Ahmad Zamri bin Khairani & Nordin bin Abd. Razak (2015). Modeling a Multiple Choice Mathematics Test with the Rasch Model. Indian Journal of Science and Technology, 8(12), 1-6.
- Lee Jun-Woo, Jeong Tchae-Won, & Yang Chun-Ho (2016). Proposed Skill Assessment Models for College Admissions to the Golf Departments in Korea: An Application of the Rasch Partial Credit Model. Indian Journal of Science and Technology, 9(41), 1-8.