
Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#377 – Engaging Teachers in Assessment of Their Students' Narrative Writing: Impact on Teachers' Knowledge and Practice
Maryl Gearhart, Shelby A. Wolf, Bette Burkey, and Andrea K. Whittaker

Summary
In the past two decades, the ways in which writing has been taught and assessed have shifted from a focus on final products to an emphasis on writing as a process. In this latest report from the Writing What You Read (WWYR) project, CRESST researchers Maryl Gearhart, Shelby Wolf, Bette Burkey, and Andrea Whittaker summarize the impact of the WWYR program, designed to enhance elementary teachers' competencies in narrative writing assessment. This comprehensive report details the project's history, the design and implementation of WWYR, and the research methods used to gain insight into teachers' knowledge and practice. Numerous examples of the WWYR workshop materials, including the narrative rubric used to guide teachers' practice in narrative assessment, are provided. One of the findings discussed in the report is that the assessments were not typically implemented as recommended. Teachers perceived the in-service program as imposed rather than collaboratively designed. As a result, though teachers in the study were able to see productive possibilities for action and change in their methods of assessment, there were differences among the teachers in the pattern of their changes in understanding and practice. "[O]ur story is not a happily-ever-after tale," conclude the researchers, "but a tale of real research with classroom teachers. A central point in Writing What You Read is to take what you learn from literature and carry it into your own writing. As teachers and researchers, we will take what we have learned from this experience and carry it into our future classrooms and projects, reshaping and learning along the way."

#352 – Collaborative Group Versus Individual Assessment in Mathematics: Group Processes and Outcomes
Noreen Webb

Summary
Several states, such as Connecticut and California, are attempting to incorporate group assessment into their large-scale testing programs. One intention of such efforts is to use scores from group assessments as indicators of individual performance. However, a key technical question for such assessments is, "To what extent do scores on a group assessment actually represent individual performance or knowledge?" This study by UCLA professor and CRESST researcher Noreen Webb sheds some light on this substantial technical question. Webb gave two seventh-grade classes an initial mathematics test as a group assessment, in which exchanging information and assistance was common. Several weeks later, she administered a nearly identical individual test to the same students, with no assistance permitted. The results showed that some students' performance dropped significantly from the group assessment to the individual test. These students apparently depended on the resources of the group to get correct answers, and when those resources were not available during the individual test, many of them were unable to solve the problems. Webb concluded: "Scores from a group assessment may not be valid indicators of some students' individual competence. Furthermore, achievement scores from group assessment contexts provide little information about group functioning." Webb's study suggests that states or school districts that intend to assign individual scores based on group assessments may want to seriously rethink their plans.

#387 – Specifications for the Design of Problem-Solving Assessments in Science
Brenda Sugrue

Summary
In Specifications for the Design of Problem-Solving Assessments in Science, CRESST researcher Brenda Sugrue draws on the CRESST performance assessment model to develop a new set of test specifications for science. Sugrue recommends that designers follow a straightforward approach for developing alternative science assessments. "Carry out an analysis of the subject matter content to be assessed," says Sugrue, "identifying key concepts, principles, and procedures that are embodied in the content." She adds that much of this analysis already exists in state frameworks or in the national science standards. Multiple-choice, open-ended, or hands-on science tasks can then be created or adapted to measure individual constructs, such as concepts and principles, and the links between them. In addition to measuring content-related constructs, Sugrue's model advocates measuring metacognitive and motivational constructs in the context of the content. This permits more specific identification of the sources of students' poor performance: students may perform poorly because of deficiencies in content knowledge, deficiencies in constructs such as planning and monitoring, maladaptive perceptions of self and task, or some combination of these. The more specific the diagnosis of the source of poor performance, the more specific the instructional interventions to improve performance can be. Sugrue's model includes specifications for task design, task development, and task scoring, all linked to specific components of problem-solving ability. An upcoming CRESST report will discuss the results of a study designed to evaluate the effectiveness of the model for attributing variance in performance to particular components of problem solving and particular formats for measuring them.

#824 – Evaluation of Green Dot’s Locke Transformation Project: From the Perspective of Teachers and Administrators
Joan Herman, Jia Wang, Christine Ong, Rolf Straubhaar, Jon Schweig, and Vivian Hsu

Summary

In the fall of 2007, Alain Leroy Locke High School, historically one of California’s lowest performing secondary schools, underwent a transformation. After years of extremely low academic performance, student unrest, and even violence, the school was transitioned by the nonprofit charter organization Green Dot Public Schools into a set of smaller charter academies, in partnership with the Los Angeles Unified School District (LAUSD). With a grant from the Bill and Melinda Gates Foundation, the National Center for Research on Evaluation, Standards and Student Testing (CRESST) was charged with monitoring the progress and effects of Green Dot Public Schools’ Locke transformation from 2007 to the present. Previous annual reports have presented findings related to the academic performance of Green Dot Locke (GDL) students. The primary focus of this report is to use both quantitative data (including teachers’ value-added data based on state test scores) and qualitative data (interviews with 13 teachers and four administrators across GDL academies) to explore potential teacher factors influencing students’ academic progress since the transformation, focusing particularly on teacher recruitment/selection, retention, and support.


#762 – Moving to the Next Generation of Standards for Science: Building on Recent Practices
Joan L. Herman

Summary
In this report, Joan Herman, director of the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), recommends that the new generation of science standards be based on lessons learned from current practice and on recent examples of standards-development methodology. In support of this, recent, promising efforts to develop standards in science and other areas are described, including the National Assessment of Educational Progress (NAEP) 2009 Science Assessment Framework, the Advanced Placement Redesign, and the Common Core State Standards Initiative (CCSSI). Drawing on these key documents, the report discusses promising practices for a national effort to better define science standards. Lastly, the report reviews validation issues, including the evidence one would want to collect to demonstrate that national science standards are achieving their intended purposes.


To cite from this report, please use the following as your APA reference:

Herman, J. L. (2009). Moving to the next generation of standards for science: Building on recent practices (CRESST Report 762). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#473 – Principals' Views of Mathematics Standards, Frameworks, and Assessment in a Context of Reform
Maryl Gearhart

Summary
The purpose of this study was to gather information on principals' views regarding standards, frameworks, and assessment in mathematics. Based on surveys completed by 96 principals from 35 public school districts in Greater Los Angeles - each principal a past participant in events sponsored by the UCLA Principals' Center - our findings reflect the views of principals interested in improving educational practice.

With regard to standards and frameworks, the findings indicate that the principals' schools were not currently building mathematics programs closely on existing standards and frameworks; however, these principals were prepared to support the future implementation of state and/or district mathematics standards in their schools, and they requested resources and assistance with implementation. The principals disagreed on the need for standards at the school level. With regard to testing, the principals were concerned that parents and students may not understand the results of norm-referenced tests and that norm-referenced tests are not aligned with their instructional programs in mathematics. The principals were likely to favor performance-based measures for program evaluation and reporting and for guiding instruction, and they requested resources and assistance for building teacher capacity with new assessments. However, a large minority of the principals favored the use of both forms of mathematics testing, and some principals favored norm-referenced testing. Thus, although these principals represented administrators engaged in school improvement, they differed in their views regarding accountability testing.

The findings suggest that resolution among the views of administrators lies in the design of mathematics standards that embrace a breadth of knowledge and skill, together with the design of a coherent, standards-based assessment system that integrates multiple measures.

#526 – Learning to Write in Urban Elementary and Middle Schools: An Investigation of Teachers' Written Feedback on Student Compositions
Lindsay Clare Matsumura, Rosa Valdés, and G. Genevieve Patthey-Chavez

Summary
In writing instruction, feedback from teachers provides a critical opportunity for students to revise their work and improve as writers. Contexts in which students routinely receive feedback on their work include peer reviews and teacher-student conferences. For many teachers, however, written comments on student papers remain a significant method of response. Despite the importance of teacher responses to student work in facilitating the learning process, little research has examined the relationship between teacher feedback on early drafts of student work and the quality of students' subsequent drafts. Even less research has examined the nature of teachers' written feedback to students in K-12 settings. This study investigates the nature of teachers' written responses to student writing and the relationship of those responses to the quality of subsequent student work in urban elementary and middle schools. Most of the 22 teachers whose classes supplied the study's corpus of student writings (N = 114) gave their students some written feedback, and most students incorporated that feedback into their subsequent drafts. Teachers tended to focus most on standardizing their students' written output, with measurable success. Student papers received little feedback about content or organization, and these qualities generally did not change over successive drafts.

#759 – Evaluation of the WebPlay Arts Education Program: Findings from the 2006–07 School Year
Noelle Griffin, Jinok Kim, Youngsoon So, and Vivian Hsu

Summary
This report presents results from the second year of CRESST’s three-year evaluation of the WebPlay program, an online-enhanced arts education program for K–12 students. The evaluation took place during the three-year implementation of the program in Grades 3 and 5 in California schools; this report focuses on results from the second year of implementation, 2006–07. Results show that WebPlay participation was significantly related to positive educational engagement and attitudes. On California Standards Test (CST) English Language Arts (ELA) scores, there was no overall WebPlay effect, but a significant difference was found for limited English proficiency (LEP) students. The results suggest that a well-designed, theater-based education program can improve student engagement and may have academic benefits in language arts content, particularly for students who are struggling with English proficiency.

#450 – Assessment of Transfer in a Bilingual Cooperative Learning Curriculum
Margaret H. Szymanski and Richard P. Durán

Summary
Although existing standardized language proficiency tests can provide reliable information on students' language arts skills, they fail to provide information on how students develop those skills. In this study of third- and fourth-grade bilingual classrooms, CRESST researchers sought to better understand the link between curriculum and language development. Investigating the implementation of a bilingual adaptation of the Cooperative Integrated Reading and Composition curriculum, the researchers analyzed differences in students' pre- and posttest performance, focusing on changes in English performance during the school year. The researchers found that increased performance could be attributed to classroom discussions of question-answering strategies consistent with the curriculum model. "Our results," said Richard Durán and Margaret Szymanski, "suggest the value of studying how assessments of bilingual students' literacy skills might be tied to students' awareness of performance standards."

#395 – How Does My Teacher Know What I Know?
Kathryn Davinroy, Carribeth Bliem, and Vicky Mayfield

Summary
Regardless of the type of assessment used in the classroom, students continue to hold the same traditional understandings of assessment, suggests a new CRESST study. "...students believe that assessment activities are often aimed at measuring their handwriting, punctuation, [and] expression when reading out loud," say researchers Kathryn Davinroy, Carribeth Bliem, and Vicky Mayfield. The third-grade students involved in the study had been exposed to performance assessments in reading and mathematics for over a year, yet their concepts of assessment did not shift significantly. The authors found that this limited framework applied to multiple topics. When asked what a math test looks like, for example, students still referenced the timed math-fact test. "It has a hundred problems on it," said one student, "and you have to get as many problems as you can down in five minutes." Since students are typically the last to be exposed to changes in assessment, the data tend to confirm that students' attitudes toward assessment may also be the last to change. "Our findings," conclude the authors, "about student perceptions regarding reading, mathematics, and assessment support contentions that reform takes time if perceptions and understandings are going to change significantly."