Reports
Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST Report 723.
#382 – Analysis of Cognitive Demand in Selected Alternative Science Assessments
Gail Baxter, Robert Glaser, and Kalyani Raghavan
CSE Report 382, 1994
Summary
Working with pilot science assessments in California and Connecticut, the researchers in Analysis of Cognitive Demand in Selected Alternative Science Assessments focused on the cognitive activity measured by performance assessment tasks. Of special interest was the degree to which the tasks accurately measured differences in student performance. "We focused," wrote Baxter, Glaser, and Raghavan, "on the extent to which: (a) tasks allowed students the opportunity to engage in higher order thinking skills and (b) scoring systems reflected differential performance of students with respect to the nature of cognitive activity in which they engaged." Data came from three types of science assessment tasks (exploratory investigation, conceptual integration, and component identification), each varying with respect to grade level, prior knowledge, stage of development, and purpose. Analyses of the data yielded several important recommendations for assessment tasks and scoring. In general, wrote the authors, "tasks should: (a) be procedurally open-ended affording students an opportunity to display their understanding; (b) draw on subject matter knowledge as opposed to knowledge of generally similar facts; and (c) be cognitively rich enough to require thinking." The authors concluded that scoring systems should: (a) link score criteria to task expectations; (b) be sensitive to the meaningful use of knowledge; and (c) capture the [learning] process the students engage in.
#822 – The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection
Rebecca E. Buschang
CRESST Report 822, October 2012
Summary
This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., the control condition) or (b) evaluating student errors and understanding in writing samples (i.e., the experimental condition). After a pretest of content knowledge, participants in both conditions watched two hours of online videos about natural selection and attended four-hour professional development sessions that differed by condition. Participants in the experimental condition significantly outperformed those in the control condition both in identifying critical elements of student understanding of natural selection and in content knowledge related to natural selection. The results suggest that short-term professional development sessions focused on evaluating student errors and understanding can be effective at directing a participant's evaluation of student work toward particularly important elements of student understanding. The results have implications for understanding the types of knowledge needed to evaluate student work effectively and for the design of professional development.
#741 – From Evidence to Action: A Seamless Process in Formative Assessment?
Margaret Heritage, Jinok Kim, Terry P. Vendlinski, and Joan L. Herman
CRESST Report 741, 2008
Summary
Based on the results of a generalizability study (G study) of measures of teacher knowledge for teaching mathematics developed at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at the University of California, Los Angeles, this report provides evidence that teachers are better at drawing reasonable inferences about student levels of understanding from assessment information than they are at deciding on the next instructional steps. We discuss the implications of the results for effective formative assessment and end with considerations of how teachers can be supported in knowing what to teach next.
#533 – An Analysis of Notebook Writing in Elementary Science Classrooms
Gail P. Baxter, Kristin M. Bass, and Robert Glaser
CSE Report 533, 2000
Summary
Journal or notebook writing is viewed as a critical aspect of science teaching and learning because of its potential to inform the former and assess the latter (National Research Council, 1996; Shepardson & Britsch, 1997). Nevertheless, little is known about the relationship between the contents of student science notebooks and the classroom contexts in which they are used. This study examines the use of notebooks in three fifth-grade classrooms during a unit on electric circuits. Our purpose is to ascertain the extent to which notebooks might serve as a tool for monitoring teaching and learning. Analyses of classroom contexts indicated that teachers promoted notebook writing through explicit instructions and prompts, provided frequent opportunities for students to write, and attended to student documentation of the procedural aspects of the investigations. Consistent with these classroom observations, students' science notebooks contained records of teacher-dictated purposes and procedures and student-generated observations for each investigation. Other significant aspects of student performance and observed classroom practice were not documented in the notebooks; these included records of problem-solving strategies, discussions of task-related concepts, and references to variations in problem solutions across student groups. Implications of notebooks as a tool for monitoring science instruction and assessing student learning are discussed.
#561 – Stability of School Building Accountability Scores and Gains
Robert L. Linn and Carolyn Haug
CSE Report 561, 2002
Summary
A number of states have school building accountability systems that rely on comparisons of achievement from one year to the next. Improvement of the performance of schools is judged by changes in the achievement of successive groups of students. Year-to-year changes in scores for successive groups of students have a great deal of volatility. The uncertainty in the scores is the result of measurement and sampling error and nonpersistent factors that affect scores in one year but not the next. The level of uncertainty was investigated using fourth-grade reading results for 4 years of administration of the Colorado Student Assessment Program. It was found that the year-to-year changes are quite unstable, resulting in a near-zero correlation of the school gains from Years 1 to 2 with those from Years 3 to 4. Some suggestions for minimizing volatility in change indices for schools are provided.
#427 – Final Report of Experimental Studies on Motivation and NAEP Test Performance
Harold O'Neil, Jr., Brenda Sugrue, Jamal Abedi, Eva Baker, and Shari Golan
CSE Report 427, 1997
Summary
Educators and policy makers have expressed concern that students have little motivation to perform well on the National Assessment of Educational Progress (NAEP) because there are no consequences for student or school performance. In this study, CRESST and University of Southern California researchers investigated the effects of student motivation on performance on the 1990 NAEP math test. The researchers compared 8th-grade and 12th-grade students' performance under different motivation conditions, including financial awards, competition with other students, personal accomplishment, and a certificate of accomplishment (for 12th-grade students only).
The researchers found few significant differences in performance, the exception being 8th-grade students who received a financial award based on the number of questions answered correctly.
"The 8th-grade findings," concluded the authors, "suggest that we may be underestimating the achievement of at least some students when we use scores from low-stakes tests as indicators of achievement."
The researchers added that, while it is impractical to offer students monetary incentives for NAEP performance, other ways of rewarding students should be investigated.
#761 – Using Classroom Artifacts to Measure the Efficacy of Professional Development
Yael Silk, David Silver, Stephanie Amerian, Claire Nishimura, and Christy Kim Boscardin
CRESST Report 761, 2009
Summary
This report describes a classroom artifact measure and presents early findings from an efficacy study of WestEd's Reading Apprenticeship (RA) professional development program. The professional development is designed to teach high school teachers how to integrate subject-specific literacy instruction into their regular curricula. The current RA study is notable in that it is the first to include random assignment in its design. The National Center for Research on Evaluation, Standards, and Student Testing (CRESST) designed a teacher assignment instrument to address the question of whether treatment teachers demonstrate greater integration of literacy into their instructional practice than control teachers. Early findings based on preliminary data from participating history teachers indicate that treatment teachers outperformed control teachers on 6 of 11 rubric dimensions. These dimensions address opportunities for reading in the assignment, the strategies in place to support successful reading, teacher support for reading engagement, and student feedback. Data collection will conclude at the end of the 2008-2009 school year, followed by a final report.
#809 – Relationships between Teacher Knowledge, Assessment Practice, and Learning: Chicken, Egg, or Omelet?
Joan Herman, Ellen Osmundson, Yunyun Dai, Cathy Ringstaff, and Mike Timms
CRESST Report 809, November 2011
Summary
Drawing from a large efficacy study in upper elementary science, this report had three purposes: first, to examine the quality of teachers' content-pedagogical knowledge in upper elementary science; second, to analyze the relationship between teacher knowledge and assessment practice; and third, to study the relationship between teacher knowledge, assessment practice, and student learning. Based on data from 39 teachers, CRESST researchers found that students whose teachers frequently analyzed and provided feedback on student work had higher achievement than students whose teachers spent less time on such activities. The findings support other research indicating the power of well-implemented formative assessment to improve learning.
#365 – Dilemmas and Issues in Implementing Classroom-Based Assessments for Literacy
Elfrieda H. Hiebert and Kathryn Davinroy
CSE Report 365, 1993
Summary
Researchers in this study invited third-grade teachers from an urban school district to collaborate in a classroom-based literacy assessment project. The study focused on a series of literacy workshops designed to adapt a long-standing perspective on curriculum, instruction, and assessment to classroom-based assessment. Early outcomes from observations and transcriptions of the workshops indicated that teachers struggled with a variety of issues, including the task of embedding assessments, such as running records and written summaries, into their instructional programs. When the assessments showed teachers that some students were not reading beyond a rudimentary level, teachers wondered how to share this information with students and parents. Despite these challenges, one of the schools moved quickly to implement the assessments and use the information they provided.
#762 – Moving to the Next Generation of Standards for Science: Building on Recent Practices
Joan L. Herman
CRESST Report 762, October 2009
Summary
In this report, Joan Herman, director of the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), recommends that the new generation of science standards be based on lessons learned from current practice and on recent examples of standards-development methodology. In support of this, the report describes recent, promising efforts to develop standards in science and other areas, including the National Assessment of Educational Progress (NAEP) 2009 Science Assessment Framework, the Advanced Placement Redesign, and the Common Core State Standards Initiative (CCSSI). Drawing on these key documents, it discusses promising practices for a national effort to better define science standards. Lastly, the report reviews validation issues, including the evidence one would want to collect to demonstrate that national science standards are achieving their intended purposes.
To cite from this report, please use the following as your APA reference:
Herman, J. L. (2009). Moving to the next generation of standards for science: Building on recent practices (CRESST Report 762). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).