Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.
Jordan Rickles, Jia Wang, and Joan Herman
With funding from the Bill and Melinda Gates Foundation, CRESST conducted a multi-year evaluation of a major school reform project at Alain Leroy Locke High School, historically one of California’s lowest performing secondary schools. Beginning in 2007, Locke High School transitioned into a set of smaller Green Dot Charter High Schools, subsequently referred to as Green Dot Locke (GDL) in this supplemental report. This report extends the previous report, which tracked the first and second cohorts of 9th graders who entered GDL in fall 2007 and 2008, respectively, through the 2010-11 school year, by following the second cohort of students to graduation. The CRESST evaluation, employing a rigorous quasi-experimental design with propensity score matching, found statistically significant, positive effects of the GDL transformation, including improved achievement, school persistence, graduation, and completion of college preparatory courses.
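The abstract's propensity score matching can be illustrated with a minimal sketch. This is not the report's actual procedure: it assumes propensity scores (the estimated probability of GDL attendance given baseline covariates) have already been fitted by a separate logistic model, and shows only the greedy one-to-one nearest-neighbor pairing step with a caliper. All student IDs and score values are fabricated for illustration.

```python
# Minimal sketch: greedy 1:1 nearest-neighbor matching on precomputed
# propensity scores. Treated units are processed in score order; each is
# paired with the closest unused control within `caliper`, else dropped.

def match_nearest(treated, control, caliper=0.05):
    """Return (treated_id, control_id) pairs; each control used at most once."""
    pairs = []
    available = dict(control)  # id -> propensity score
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        best = min(available.items(),
                   key=lambda kv: abs(kv[1] - t_score),
                   default=None)
        if best is not None and abs(best[1] - t_score) <= caliper:
            pairs.append((t_id, best[0]))
            del available[best[0]]  # a control can serve only one match
    return pairs

# Hypothetical scores: t3 has no control within the caliper and is dropped.
treated = {"t1": 0.62, "t2": 0.35, "t3": 0.80}
control = {"c1": 0.60, "c2": 0.33, "c3": 0.95, "c4": 0.64}
pairs = match_nearest(treated, control)
```

Effects are then estimated by comparing outcomes across the matched pairs only, which is what makes the comparison quasi-experimental rather than a raw school-to-school contrast.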
Joan Herman, Jia Wang, Christine Ong, Rolf Straubhaar, Jon Schweig, and Vivian Hsu
In the fall of 2007, Alain Leroy Locke High School, historically one of California’s lowest performing secondary schools, underwent a transformation. After a history of extremely low academic performance, student unrest, and even violence, the school was transitioned by the nonprofit charter organization Green Dot Public Schools into a set of smaller charter academies, in partnership with the Los Angeles Unified School District (LAUSD). With a grant from the Bill and Melinda Gates Foundation, the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) was charged with monitoring the progress and effects of Green Dot Public Schools’ Locke transformation from 2007 to the present. Previous annual reports have presented findings related to the academic performance of Green Dot Locke (GDL) students. The primary focus of the current report is to use both quantitative data (including teachers’ value-added data based on state test scores) and qualitative data (interviews with 13 teachers and four administrators across GDL academies) to explore potential teacher factors influencing students’ academic progress since the transformation, focusing in particular on teacher recruitment/selection, retention, and support.
Joan Herman and Robert Linn
Two consortia, the Smarter Balanced Assessment Consortium (Smarter Balanced) and the Partnership for Assessment of Readiness for College and Careers (PARCC), are currently developing comprehensive, technology-based assessment systems to measure students’ attainment of the Common Core State Standards (CCSS). The consequences of the consortia assessments, slated for full operation in the 2014/15 school year, will be significant. The assessments themselves and their results will send powerful signals to schools about the meaning of the CCSS and what students know and are able to do. If history is a guide, educators will align curriculum and teaching to what is tested, and what is not assessed will largely be ignored. Those interested in promoting students’ deeper learning and development of 21st century skills thus have a large stake in trying to ensure that consortium assessments represent these goals.
Funded by the William and Flora Hewlett Foundation, UCLA’s National Center for Research on Evaluation, Standards, and Student Testing (CRESST) is monitoring the extent to which the two consortia’s assessment development efforts are likely to produce tests that measure and support goals for deeper learning. This report summarizes CRESST findings thus far, describing the evidence-centered design framework guiding assessment development for both Smarter Balanced and PARCC as well as each consortium’s plans for system development and validation. This report also provides an initial evaluation of the status of deeper learning represented in both consortia’s plans.
Study results indicate that PARCC and Smarter Balanced summative assessments are likely to represent important goals for deeper learning, particularly those related to mastering and being able to apply core academic content and cognitive strategies related to complex thinking, communication, and problem solving. At the same time, the report points to the technical, fiscal, and political challenges that the consortia face in bringing their plans to fruition.
Rebecca E. Buschang
This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e., experimental condition). A pretest of content knowledge was administered, and then the participants in both conditions watched two hours of online videos about natural selection and attended different types of professional development sessions lasting four hours. Significant differences between conditions in favor of the experimental condition were found on participant identification of critical elements of student understanding of natural selection and content knowledge related to natural selection. Results suggest that short-term professional development sessions focused on evaluating student errors and understanding can be effective at focusing a participant’s evaluation of student work on particularly important elements of student understanding. Results have implications for understanding the types of knowledge necessary to effectively evaluate student work and for the design of professional development.
Deborah La Torre Matrundola, Sandy Chang, and Joan Herman
The purpose of these case studies was to examine the ways technology and professional development supported the use of the SimScientists assessment systems. Qualitative research methodology was used to provide narrative descriptions of six classes implementing simulation-based assessments on one of two topics, Ecosystems or Atoms and Molecules. Results revealed both strengths and weaknesses concerning technology support for the assessments, as well as technology and professional development support for the teachers. Furthermore, recommendations are provided concerning potential improvements to the assessments, reflection activities, and professional development.
Rebecca E. Buschang, Gregory K.W.K. Chung, Girlie C. Delacruz, and Eva L. Baker
The purpose of this study was to validate inferences about scores from one task designed to measure subject matter knowledge and three tasks designed to measure aspects of pedagogical content knowledge. Evidence for the validity of inferences was based on two expectations. First, if tasks were sensitive to expertise, we would find group differences. Second, tasks that measured similar types of knowledge would correlate strongly, and tasks that measured different types of knowledge would correlate weakly. We recruited and assessed four groups of participants: 46 experienced algebra teachers (2+ years of experience), 17 novice algebra teachers (0-2 years of experience), 10 teaching experts, and 13 subject matter experts. Results indicate that one task differentiated among levels of expertise and measured several aspects of knowledge needed to teach algebra. Results also highlight that future studies should use a combination of tasks to accurately measure different aspects of teacher knowledge.
Deirdre Kerr and Gregory K.W.K. Chung
Though video games are commonly considered to hold great potential as learning environments, their effectiveness as a teaching tool has yet to be determined. One reason for this is that researchers often run into the problem of multicollinearity among prior knowledge, in-game performance, and posttest scores, making it difficult to determine how much learning is attributable to the game. This study uses tests for mediation effects to determine the true relationship between in-game performance and posttest performance, finding that, in this case, in-game performance is a perfect mediator of the effect of prior knowledge on posttest score.
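The mediation logic described here can be sketched with a Baron & Kenny-style decomposition; this is an illustrative version, not the report's exact analysis, and the data are fabricated. The total effect of prior knowledge on posttest (path c) splits into an indirect path through in-game performance (a × b) and a direct path (c'); full mediation corresponds to c' shrinking to zero once the mediator is controlled.

```python
# Hedged sketch of a mediation check: prior knowledge -> in-game
# performance (a), in-game performance -> posttest controlling for prior
# (b), with residual direct effect c'. In OLS, c = c' + a*b exactly.

def slope(x, y):
    """Simple OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def two_predictor_slopes(x1, x2, y):
    """OLS slopes of y on x1 and x2 via the centered normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Fabricated data where the posttest depends on prior knowledge ONLY
# through in-game performance (perfect mediation).
prior = [1, 2, 3, 4, 5, 6]
ingame = [2, 3, 5, 6, 8, 9]
post = [3 * g + 1 for g in ingame]

c = slope(prior, post)                         # total effect
a = slope(prior, ingame)                       # path a
b, c_prime = two_predictor_slopes(ingame, prior, post)  # path b, direct c'
```

With these data, c' is zero and the entire total effect flows through the mediator, which is the pattern the abstract reports for in-game performance.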
Jinok Kim and Joan L. Herman
In English language learners’ (ELLs) reclassification, the tension between assuring sufficient English language proficiency (ELP) in mainstream classrooms and avoiding potential negative consequences of protracted ELL status creates an essential dilemma. The present study focused on ELL students who were reclassified around the time they finished elementary school (specifically, students reclassified at Grades 4, 5, or 6) and attempted to examine whether the reclassification decisions used for these students are valid and supportive of their subsequent learning. In doing so, this paper also explores methods that allow for drawing sound inferences on student learning subsequent to reclassification. Recent advances in growth modeling are drawn upon to make comparisons in subsequent learning more meaningful. The study found that although there is evidence that reclassified ELLs tend to continue to catch up to their non-ELL peers after reclassification, the magnitudes may be very modest in vertical scale values over the grades and insufficient to attain proficiency. The study also found that there was no evidence of former ELLs falling behind in academic growth after reclassification, either relative to their non-ELL peers or in terms of absolute academic proficiency levels.
Rebecca E. Buschang, Deirdre Kerr, and Gregory K.W.K. Chung
Appropriately designed technology-based learning environments such as video games can be used to give immediate and individualized feedback to students. However, little is known about the design and use of feedback in instructional video games. This study investigated how feedback used in a mathematics video game about fractions impacted student actions in the game. Results indicated the type of feedback did not significantly affect student actions. Process data were also analyzed to identify specific student errors as well as opportunities to provide feedback for future versions of the game. Results of this study suggest that process data are a unique feature of technology-based learning environments that can be used to analyze errors and create targeted feedback for students.
Deirdre S. Kerr and Gregory K. W. K. Chung
Commercial video games undergo usability studies to determine the degree to which the player is able to learn, control, and understand the game. Usability studies allow game designers to improve their games before they are released to the public. If usability studies could be expanded to include information about the presentation of the instructional content, they could help improve educational video games. In this study, cluster analysis was used to identify usability information from the log files of an educational video game called Save Patch. Cluster analysis was able to pinpoint specific levels in the game that could be improved, as well as identify specific components of the level design under which certain errors were likely to occur, culminating in specific recommendations to improve the game in ways likely to increase learning.
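The kind of clustering described here can be sketched with plain k-means over per-player error features extracted from log files. The abstract does not specify the clustering algorithm or features, so everything below is an assumption for illustration: each point is a hypothetical player summarized by two made-up error counts, and initial centroids are fixed so the run is deterministic.

```python
# Illustrative k-means over fabricated log-file features, e.g.
# (count of fraction-addition errors, count of unit-placement errors)
# per player. Distinct clusters would point to distinct error patterns
# worth addressing in specific levels.

def kmeans(points, centroids, iters=10):
    """Plain k-means with fixed initial centroids (deterministic)."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # Update step: move each centroid to its cluster mean.
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else cen
                     for cl, cen in zip(clusters, centroids)]
    return centroids, clusters

# Two fabricated player groups: low-error and high-error profiles.
points = [(0, 1), (1, 0), (1, 1), (8, 9), (9, 8), (9, 9)]
cents, clusters = kmeans(points, [(0, 0), (10, 10)])
```

In a real analysis the features would be engineered from logged in-game actions and the number of clusters chosen empirically; the payoff is the same as in the abstract, namely mapping clusters back to the levels and design components where the errors concentrate.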