
Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#782 – Year 3 ASK/FOSS Efficacy Study
Ellen Osmundson, Yunyun Dai, and Joan Herman

Summary
In this interim report, CRESST researchers examine the effects of several different science curricula on teaching and student learning. Using randomly assigned treatment and control groups of 3rd and 4th grade teachers, the study provides important lessons for the upcoming 4th year study.

#781 – Evaluation of Seeds of Science/Roots of Reading: Effective Tools For Developing Literacy Through Science in the Early Grades-Light Energy Unit
Pete Goldschmidt, Hyekyung Jung

Summary
This evaluation focuses on the Seeds of Science/Roots of Reading: Effective Tools for Developing Literacy through Science in the Early Grades (Seeds/Roots) model of science-literacy integration. Quantitative results indicate that the Seeds/Roots intervention resulted in statistically and substantively higher student performance in science content, vocabulary, and writing. Qualitative results indicate that teachers overwhelmingly found the Seeds/Roots unit usable, effective, and engaging.

#780 – Aligning Instruction and Assessment with Game and Simulation Design
Richard Wainess, Alan Koenig, Deirdre Kerr

Summary
Effective design of training-related games requires alignment between content and game play. In this study, researchers created 1) a Game Play Model comprising the key components of a game and 2) a Player Interaction Framework defining how players interact with information in a game. They analyzed 34 games (24 popular commercial video games and 10 commercial video games used by the military). Results of the analyses indicate that while the two game types were similar in the amount of instruction devoted to introducing the various components of the Game Play Model, the delivery mechanisms (the Player Interaction Framework) differed in some key areas. In particular, the military games did not provide enough direct instruction and relied too much on the player to actively seek out information.

#779 – When to Exit ELL Students: Monitoring Success and Failure in Mainstream Classrooms After ELLs’ Reclassification
Jinok Kim, Joan L. Herman

Summary
This study evaluated the validity of ELL reclassification policies in existing assessment systems. Using statewide individual-level data, we examined the subsequent academic success of reclassified ELLs in mainstream classrooms. We found that ELL students tended to make a smooth transition upon reclassification and kept pace in mainstream classrooms, indicating that existing reclassification decisions were, in general, supportive of ELL students' subsequent learning, although this conclusion must be tempered by substantial variation in subsequent learning. Our findings also suggest that protracted ELL status due to overly stringent ELP criteria may be detrimental to ELLs' learning in mainstream classrooms.

#778 – An Evidence Centered Design for Learning and Assessment in the Digital World
John T. Behrens, Robert J. Mislevy, Kristen E. DiCerbo, Roy Levy

Summary
The digital revolution has created a vast space of interconnected information, communication, and interaction. Functioning effectively in this environment requires so-called 21st century skills such as technological fluency, complex problem solving, and the ability to work effectively with others. Unfortunately, traditional assessment models and methods are inadequate for evaluating or guiding learning in our digital world. This report argues that the framework of evidence-centered assessment design (ECD) supports the design and implementation of assessments that are up to the challenge. We outline the essential ECD structure and discuss how the digital world impacts each phase of assessment design and delivery. The ideas presented in the report are illustrated with examples from our ongoing experiences with the Cisco Networking Academy. We have used this approach to guide our work for more than 10 years and ultimately seek to fundamentally change the way networking skills are taught and assessed throughout the world, including the delivery of 100 million exams in over 160 countries and innovative simulation-based curricular and assessment tools.

#777 – Preparing Students for the 21st Century: Exploring the Effect of Afterschool Participation on Students’ Collaboration Skills, Oral Communication Skills, and Self-Efficacy
Denise Huang, Seth Leon, Cheri Hodson, Deborah La Torre, Nora Obregon, Gwendelyn Rivera

Summary
This study addressed key questions about LA's BEST afterschool students' self-efficacy, collaboration, and communication skills. We compared student perceptions of their own 21st century skills to external outcome measures including the California Standardized Test (CST), attendance, and teacher ratings. We found a substantial relationship between student self-efficacy and students' oral communication and collaboration skills. However, we did not find that higher attendance in LA's BEST led to higher self-efficacy, though further investigation is needed. We also found that LA's BEST students evaluated their own abilities in ways consistent with the CST outcomes and teacher ratings. Moreover, the high-attendance group showed significantly better alignment with teacher ratings than the lower attendance groups on self-efficacy, oral communication skills, and collaboration skills.

#776 – A Bayesian Network Approach to Modeling Learning Progressions and Task Performance
Patti West, Daisy Wise Rutstein, Robert J. Mislevy, Junhui Liu, Younyoung Choi, Roy Levy, Aaron Crawford, Kristen E. DiCerbo, Kristina Chappel and John T. Behrens

Summary
A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco Networking Academy LPs and tasks designed to obtain evidence in their terms. We briefly discuss challenges in the development of LPs, and then move to challenges with the implementation of Bayesian networks, including selection of the method, issues of model fit and confirmation, and grain size. We conclude with a discussion of the challenges we face in ongoing work.

To cite from this report, please use the following as your APA reference:

West, P., Rutstein, D. W., Mislevy, R. J., Liu, J., Choi, Y., Levy, R., … Behrens, J. T. (2010). A Bayesian network approach to modeling learning progressions and task performance (CRESST Report 776). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#775 – Automated Assessment of Complex Task Performance in Games and Simulations
Markus R. Iseli, Alan D. Koenig, John J. Lee, and Richard Wainess

Summary
Assessment of complex task performance is crucial to evaluating personnel in critical job functions such as Navy damage control operations aboard ships. Games and simulations can be instrumental in this process, as they can present a broad range of complex scenarios without involving harm to people or property. However, automatic performance assessment of complex tasks is challenging, because it involves modeling and understanding how experts think when presented with a series of observed in-game actions. Human expert scoring can also be limiting, as it depends on subjective observations of players' in-game performance, which in turn are used to interpret players' mastery of key associated cognitive constructs. We introduce a computational framework that incorporates automatic performance assessment of complex tasks or action sequences, as well as the modeling of real-world, simulated, or cognitive processes, by representing player actions, simulation states and events, conditional simulation state transitions, and cognitive construct dependencies in a dynamic Bayesian network. This approach combines a state-space model with the probabilistic framework of Bayesian statistics, allowing us to draw probabilistic inferences about a player's decision-making abilities. We then present a comparison of human expert scoring and dynamic Bayesian network scoring. The computational framework presented in this report can help reduce or eliminate the need for human raters and decrease scoring time, potentially reducing costs; it can also facilitate the efficient aggregation, standardization, and reporting of scores.
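The scoring idea in this report can be sketched at a toy scale: a dynamic Bayesian network reduced to a single binary "mastery" state, updated by forward filtering as in-game actions are observed. All probabilities and the action sequence below are invented for illustration; the report's actual network models many states, events, and cognitive constructs.

```python
# Toy dynamic-Bayesian-network scorer: track P(mastery) over time as
# correct/incorrect in-game actions are observed. Probabilities are
# hypothetical, not taken from the CRESST study.

P_CORRECT_IF_MASTER = 0.85   # P(correct action | mastery)
P_CORRECT_IF_NOT = 0.30      # P(correct action | no mastery)
P_STAY = 0.95                # P(mastery persists between time steps)
P_GAIN = 0.10                # P(non-master acquires mastery in a step)

def filter_mastery(actions, prior=0.5):
    """Forward filtering: P(mastery | actions so far) after each step."""
    belief = prior
    trajectory = []
    for correct in actions:
        # Transition step: mastery can persist or be newly acquired.
        belief = belief * P_STAY + (1 - belief) * P_GAIN
        # Observation step: Bayes' rule on the observed action.
        like_m = P_CORRECT_IF_MASTER if correct else 1 - P_CORRECT_IF_MASTER
        like_n = P_CORRECT_IF_NOT if correct else 1 - P_CORRECT_IF_NOT
        belief = belief * like_m / (belief * like_m + (1 - belief) * like_n)
        trajectory.append(belief)
    return trajectory

# A hypothetical action sequence: four correct actions, one error.
beliefs = filter_mastery([True, True, False, True, True])
```

The belief rises after correct actions and dips after the error, which is the basic mechanism that lets such a network stand in for a human rater's running judgment of mastery.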

To cite from this report, please use the following as your APA reference:

Iseli, M. R., Koenig, A. D., Lee, J. J., & Wainess, R. (2010). Automated assessment of complex task performance in games and simulations (CRESST Report 775). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#774 – Developing High-quality Assessments That Align With Instructional Video Games
Terry P. Vendlinski, Girlie C. Delacruz, Rebecca E. Buschang, Gregory K. W. K. Chung, Eva L. Baker

Summary
This report investigates the technical quality of an instructional game used to measure student skill in adding rational numbers. The authors discuss the knowledge and item specifications, an initial version of the instructional game based on these specifications, and the technical quality of the assessment. They found that the assessment had high internal consistency and test-retest reliability (Cronbach's alpha of 0.90 to 0.94). Alignment between the instructional game and assessment items is also discussed.
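As a point of reference, Cronbach's alpha (the internal-consistency statistic the report cites) is computed from per-item scores; a minimal sketch, with made-up data rather than data from the study:

```python
# Cronbach's alpha from an examinee-by-item score matrix.
# The data below are invented for illustration.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """scores: one row per examinee, one column per item."""
    k = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-item assessment taken by six examinees (1 = correct).
data = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(data), 2))  # prints 0.62
```

Values in the 0.90-0.94 range reported above indicate items that covary far more strongly than in this small fabricated example.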

#773 – Validity Evidence for Games as Assessment Environments
Girlie C. Delacruz, Gregory K. W. K. Chung, & Eva L. Baker

Summary
This study provides empirical evidence of a highly specific use of games in education—the assessment of the learner. Linear regressions were used to examine the predictive and convergent validity of a math game as assessment of mathematical understanding. Results indicate that prior knowledge significantly predicts game performance. Results also indicate that game performance significantly predicts posttest scores, even when controlling for prior knowledge. These results provide evidence that game performance taps into mathematical understanding.
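The validity analysis described above, game performance predicting posttest scores while controlling for prior knowledge, can be sketched with ordinary least squares. The data and variable names below are hypothetical, not from the study.

```python
# OLS regression of posttest score on game performance, controlling for
# a pretest (prior knowledge). Data are invented for illustration.

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for i in range(k):                      # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        s = sum(A[i][j] * coef[j] for j in range(i + 1, k))
        coef[i] = (b[i] - s) / A[i][i]
    return coef

# Columns: intercept, pretest (prior knowledge), game score.
X = [[1, 3, 5], [1, 4, 6], [1, 5, 9], [1, 6, 8], [1, 7, 12], [1, 8, 11]]
y = [11.5, 14.0, 19.5, 19.0, 26.0, 25.5]
intercept, b_pre, b_game = ols(X, y)
print(f"game coefficient controlling for pretest: {b_game:.2f}")
```

A nonzero coefficient on game score with the pretest already in the model is the pattern the study reports: game performance carries information about posttest outcomes beyond prior knowledge alone.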

To cite from this report, please use the following as your APA reference:

Delacruz, G. C., Chung, G. K. W. K., & Baker, E. L. (2010). Validity evidence for games as assessment environments (CRESST Report 773). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).