
Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#627 – The Effects of Teacher Discourse on Student Behavior and Learning in Peer-Directed Groups
Noreen Webb, Kariane M. Nemer, Nicole Kersting, Marsha Ing, and Jeffrey Forrest

Summary
Previous research on small-group collaboration identifies several behaviors that significantly predict student learning. These reports focus on student behavior to understand why, for example, large numbers of students are unsuccessful in obtaining explanations or applying the help they receive, but they leave unexplored the role that teachers play in influencing small-group interaction. We examined the impact of teacher discourse on the behavior and achievement of students in the context of a semester-long program of cooperative learning in four middle school mathematics classrooms. We conclude that student behavior largely mirrored the discourse modeled by, and the expectations communicated by, teachers. Teachers tended to give unlabeled calculations, procedures, or answers instead of labeled explanations. Teachers often instructed using a recitation approach in which they assumed primary responsibility for solving the problem, having students only provide answers to discrete steps. Finally, teachers rarely encouraged students to verbalize their thinking or to ask questions. Students adopting the role of help-giver showed behavior very similar to that of the teacher: doing most of the work, providing mostly low-level help, and infrequently monitoring other students' level of understanding. The relatively passive behavior of students needing help corresponded to expectations communicated by the teacher about the learner as a fairly passive recipient of the teacher's transmitted knowledge. Finally, we confirmed previous analyses showing that the level of help received from the student or teacher, and the level of student follow-up behavior after receiving help, significantly predicted student learning outcomes.

#350 – The Vermont Portfolio Assessment Program: Interim Report on Implementation and Impact, 1991-1992 School Year
Daniel Koretz, Brian Stecher, Edward Deibert

Summary
Vermont is the first state to make portfolios the backbone of a statewide assessment system. Daniel Koretz, Brian Stecher, and Edward Deibert, the authors of this CRESST/RAND report, have been evaluating the Vermont portfolio program for almost two years. The researchers found that support for the Vermont portfolio program, despite tremendous demands on teacher time, is widespread. "Perhaps the most telling sign of support for the Vermont portfolio program," write the authors, "is that [even in the pilot year] the portfolio program had already been extended beyond the grades targeted by the state." An interesting instructional phenomenon was that over 80% of the surveyed teachers in the Vermont study indicated that they had changed their opinion of students' mathematical abilities based upon their students' portfolio work. In many cases, teachers noted that students did not perform as well on the portfolio tasks as on previous classroom work. This finding, supported by other performance assessment research, suggests that portfolios may give teachers another assessment tool that appears to broaden their understanding of student achievement.

#591 – The Los Angeles Annenberg Metropolitan Project: Evaluation Findings
Joan Herman and Eva Baker

Summary
In the latter part of the 1990s, education in California was caught in a whirlwind of change. Schools scrambled to find enough teachers and enough classroom space to fulfill state-mandated class-size reduction requirements. The voters eliminated bilingual education, leaving schools with no specific classroom tool for teaching English-language learners. Schools were required to administer a new standardized test each spring to Grades 2 through 11 that was not aligned to classroom work and yet carried great weight for both students and educators.

Amidst this upheaval, a major new school-reform initiative was trying to make headway in Los Angeles County. From 1994 through 2000, the Los Angeles Annenberg Metropolitan Project, or LAAMP, was one of 18 major school improvement initiatives across the country to be funded by the $1.1 billion Annenberg Challenge. Its centerpiece was a new educational structure known as the School Family, which brought together teachers, administrators, and parents from high schools and their feeder middle schools and elementary schools, plus others with an interest in education. LAAMP organizers hoped the School Families would create a stable learning environment for students by encouraging coordination among schools and between grade levels.

Today, the Annenberg Challenge has drawn to a close. A final report released in June called the national effort a partial success. The report credited the program with strengthening urban, rural, and arts education and with raising the quality of teaching. The report also found that school-reform programs must learn to deal with rapid leadership turnover, changes in direction, and other setbacks. And it found that the grant money, while generous, frequently was spread too thin over too many schools.

The national findings parallel conclusions drawn about the 6-year Los Angeles program, which received $53 million from the Annenberg Challenge in December 1994. LAAMP commissioned a group of education researchers from UCLA and USC to evaluate the local project. Known as the Los Angeles Consortium for Evaluation, or LACE, the researchers found that LAAMP accomplished some of what it set out to do. But for a variety of reasons, it did not attain its ultimate goal of improving student performance.

The tumultuous period of California history that suctioned off time, energy, and financial resources from schools and the people working in them bears much of the blame. Researchers found other explanations. Among them were:
• School Family teams of teachers, administrators, and parents needed more time than was anticipated to develop the group process skills necessary for success and spent much of their time trying to learn how to collaborate instead of instituting change.
• The teams needed time to learn about and understand the concepts of results- or standards-based school reform and to develop the skills required to analyze available data and use them in the planning process.
• There were insufficient resources to ensure adequate support for teachers attempting to implement programs devised by the School Families. There also was no mechanism for extending the reforms to teachers not directly involved in the reform project.

LACE also acknowledged that the research methodologies it used to evaluate LAAMP, although the best that were available, might not have presented a full and accurate picture of the effects the program had on its participating schools. In addition, researchers suggested the need for more sensitive gauges of student accomplishment that measure the actual curriculum taught. The primary measurement used—California’s Stanford 9 test—may not have been the best tool for detecting the effects of specific changes in teaching and learning.

Overall, the researchers found that the LAAMP reform can claim many achievements that have benefited K-12 education in Los Angeles County, including:
• Creation of the School Family concept, which in many cases was responsible for productive changes that could not have been realized by a single school working alone.
• Strengthening of schools’ acceptance of accountability, their focus on performance, and their capacity for self-evaluation, especially in regard to accessing and using student-achievement data.
• Creation of valuable teacher professional development activities and access to new instructional programs, which were especially helpful for the many new and uncredentialed teachers who were hired to fulfill class-size reduction requirements.
• Encouragement of parental involvement in the schools and in children’s learning at home, which had demonstrable effects on student performance.
• Demonstration of the potential of stable learning communities for curing many of the ills facing urban schools.

Looking at test scores, LACE researchers saw improvement at LAAMP schools over the 3-year period from 1997-1998 to 2000-2001. However, there was no statistically significant difference between LAAMP schools and non-LAAMP schools with regard to student performance on the state’s Stanford 9 standardized test.

Researchers also found no indication that LAAMP had a wide impact on classroom practices. In other words, its core school-reform principles had not yet permeated participating schools. However, researchers saw signs that LAAMP initiatives were starting to move into the classroom in the later years of the program, after so much time and energy were spent initially on developing the School Family structure.

When Walter Annenberg issued his challenge in December 1993 by giving what at the time was the largest gift ever dedicated to improving public education, he called it a “crusade for the betterment of our country.” Nine years later, that crusade has made a difference. The public schools “in most major cities are still not doing the job they must,” the June report said, but they are “better today than they were a decade ago and teachers are better equipped to help children overcome obstacles and achieve higher standards.”

#791 – Evaluation of the Enhanced Assessment Grants (EAGs) SimScientists Program: Site Visit Findings
Joan Herman, Yunyun Dai, Aye Mon Htut, Marcela Martinez, and Nichole Rivera

Summary
This evaluation report addresses the implementation, utility, and feasibility of simulation-based assessments for middle school science classrooms, with particular attention to the use of accommodations available in the program. The SimScientists program includes embedded, formative assessments; reflection activities designed to deepen student understanding of key ideas and processes; and benchmark assessments to gauge student learning at the end of the unit. While teachers and students alike were very positive about their experiences with SimScientists, the evaluation team offered several recommendations for improvement, in particular recommendations for refining the program's embedded and benchmark assessments and for increasing the feasibility of the reflection activities.

#394 – Effects of Introducing Classroom Performance Assessments on Student Learning
Lorrie Shepard, Roberta Flexer, Elfrieda Hiebert, Scott Marion, Vicky Mayfield, and Timothy Weston

Summary
A new CRESST study says that introducing performance assessments into the classroom does not automatically yield achievement improvements for students. "Results in reading showed no change or improvement attributable to the [performance assessment] project," write researchers in Effects of Introducing Classroom Performance Assessments on Student Learning. Additionally, the authors found only small performance gains in mathematics. However, they did find significant qualitative changes in mathematics classrooms that provide cause for optimism. "We noted qualitative changes in students' answers to math problems which suggest that at least in some project classrooms whole groups of students were having opportunities to develop their mathematical understandings that had not occurred previously."

#517 – The Role of Classroom Assessment in Teaching and Learning
Lorrie Shepard

Summary
Historically, because of their technical requirements, educational tests of any importance were seen as the province of statisticians and not that of teachers or subject matter specialists. Researchers conceptualizing effective teaching did not assign a significant role to assessment as part of the learning process. The past three volumes of the Handbook of Research on Teaching, for example, did not include a chapter on classroom assessment nor even its traditional counterpart, tests and measurement. Achievement tests were addressed in previous handbooks but only as outcome measures in studies of teaching behaviors. In traditional educational measurement courses, preservice teachers learned about domain specifications, item formats, and methods for estimating reliability and validity. Few connections were made in subject matter methods courses to suggest ways that testing might be used instructionally. Subsequent surveys of teaching practice showed that teachers had little use for statistical procedures and mostly devised end-of-unit tests aimed at measuring declarative knowledge of terms, facts, rules, and principles (Fleming & Chambers, 1983).

The purpose of this chapter is to develop a framework for understanding a reformed view of assessment, where assessment plays an integral role in teaching and learning. If assessment is to be used in classrooms to help students learn, it must be transformed in two fundamental ways. First, the content and character of assessments must be significantly improved. Second, the gathering and use of assessment information and insights must become a part of the ongoing learning process. The model I propose is consistent with current assessment reforms being advanced across many disciplines (e.g., International Reading Association/National Council of Teachers of English Joint Task Force on Assessment, 1994; National Council for the Social Studies, 1991; National Council of Teachers of Mathematics, 1995; National Research Council, 1996). It is also consistent with the general argument that assessment content and formats should more directly embody thinking and reasoning abilities that are the ultimate goals of learning (Frederiksen & Collins, 1989; Resnick & Resnick, 1992). Unlike much of the discussion, however, my emphasis is not on external accountability assessments as indirect mechanisms for reforming instructional practice; instead, I consider directly how classroom assessment practices should be transformed to illuminate and enhance the learning process. I acknowledge, though, that for changes to occur at the classroom level, they must be supported and not impeded by external assessments.

#495 – Tensions Between Competing Pedagogical and Accountability Commitments for Exemplary Teachers of Mathematics in Kentucky
Hilda Borko and Rebekah Elliott

Summary
This paper presents a focused case study of Ann and Kay, a team of exemplary elementary teachers, as they worked to modify their mathematics instruction to be consistent with the goals of the Kentucky Education Reform Act (KERA) and Kentucky Instructional Results Information System (KIRIS, its innovative high-stakes assessment system). At the time of our work with Ann and Kay, the mathematics component of KIRIS included three types of measures: open response items, multiple choice items, and mathematics portfolios (in a research and development phase), which together assessed students' understanding of concepts and procedures, as well as their ability to use this understanding to solve problems in other disciplines and real life.

Ann and Kay's efforts to guide students' creation of mathematics portfolios and prepare them for the open response item format focused on increased attention to problem solving, mathematical communication, and connections to real world situations. They often found themselves faced with tensions and struggles as they attempted to put policy into practice without compromising their pedagogical goals and beliefs.

In this case study, we discuss how they worked with these tensions to create a successful reform-based mathematics program in their 4-5 classroom.

#671 – Overview of the Instructional Quality Assessment
Brian Junker, Yanna Weisberg, Lindsay Clare Matsumura, Amy Crosson, Mikyung Kim Wolf, Allison Levison, and Lauren Resnick

Summary
Educators, policy-makers, and researchers need to be able to assess the efficacy of specific interventions in schools and school districts. While student achievement is unquestionably the bottom line, it is essential to open up the educational process so that each major factor influencing student achievement can be examined; indeed, as a proverb often quoted in industrial quality control goes, “That which cannot be measured, cannot be improved.” Instructional practice is certainly a central factor: if student achievement is not improving, is it because instructional practice is not changing, or because changes in instructional practice are not affecting achievement? A tool is needed to provide snapshots of instructional practice itself, before and after implementing new professional development or other interventions, and at other regular intervals to help monitor and focus efforts to improve instructional practice. In this paper we review our research program building and piloting the Instructional Quality Assessment (IQA), a formal toolkit for rating instructional quality based primarily on classroom observation and student assignments. In the first part of the paper we review the need for, and some other efforts to provide, direct assessments of instructional practice. In the second part of the paper we briefly summarize the development of the IQA in reading comprehension and in mathematics at the elementary school level. In the third part of the paper we report on a large pilot study of the IQA, conducted in Spring 2003 in two moderately large urban school districts. We conclude with some ideas about future work and future directions for the IQA.

#418 – Assessment and Instruction in the Science Classroom
Gail P. Baxter, Anastasia D. Elder, and Robert Glaser

Summary
Findings from this study of fifth grade students provided further evidence that critical differences exist between students who think and reason well with their knowledge and those who do not. In the study, students received six mystery boxes and were asked to identify the contents by making the components into circuits. The research team found that students who displayed consistently high levels of learning and understanding were able to describe a comprehensive plan for an experiment. Further, these same students demonstrated an efficient approach to problem-solving which included the use of scientific principles. In contrast, lower-performing students invoked a trial-and-error strategy of "hook something up and see what happens" to guide their experiments.

Only 20% of the students performed at high levels, suggesting that even low ability students could complete a problem without understanding the processes or principles involved. The researchers concluded that "Strategies for how to represent problems must be taught as well as strategies for how to solve problems." They suggest that teachers use performance assessments, such as this science experiment, to integrate instruction, assessment, and high levels of student learning.

#661 – Upgrading America’s Use of Information to Improve Student Performance
Margaret Heritage, John Lee, Eva Chen, and Debbie LaTorre

Summary
This report presents a description of a web-based decision support tool, the Quality School Portfolio (QSP), developed at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at UCLA; a discussion of the professional development provided to support the implementation of QSP; findings from an evaluation research study of that implementation; and recommendations for the next generation of QSP.