Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#762 – Moving to the Next Generation of Standards for Science: Building on Recent Practices
Joan L. Herman

Summary
In this report, Joan Herman, director of the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), recommends that the new generation of science standards be based on lessons learned from current practice and on recent examples of standards-development methodology. In support of this, recent, promising efforts to develop standards in science and other areas are described, including the National Assessment of Educational Progress (NAEP) 2009 Science Assessment Framework, the Advanced Placement Redesign, and the Common Core State Standards Initiative (CCSSI). Drawing on these key documents, the report discusses promising practices for a national effort to better define science standards. Lastly, the report reviews validation issues, including the evidence one would want to collect to demonstrate that national science standards are achieving their intended purposes.


To cite from this report, please use the following as your APA reference:

Herman, J. L. (2009). Moving to the next generation of standards for science: Building on recent practices (CRESST Report 762). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#611 – An Evidentiary Framework for Operationalizing Academic Language for Broad Application to K-12 Education: A Design Document
Alison L. Bailey and Frances A. Butler

Summary
With the No Child Left Behind Act (2001), all states are required to assess the English language development (ELD) of English language learners (ELLs) beginning in the 2002-2003 school year. Existing ELD assessments do not, however, capture the prerequisite language proficiency needed for mainstream classroom participation and for taking content-area assessments in English, making their assessment of ELD incomplete. English language assessments are needed that go beyond the general, social language of existing ELD tests to capture academic language proficiency (ALP) as well, thereby covering the full spectrum of English language ability needed in a school setting. This crucial testing need has provided impetus for examining the construct of academic language (AL) in depth and considering its role in assessment, instruction, and teacher professional development. This document provides an approach to developing an evidentiary framework for operationalizing ALP for broad K-12 educational applications in these three key areas. Following the National Research Council (2002) call for evidence-based educational research, we assembled a wide array of data from a variety of sources to inform our effort. We propose the integration of analyses of national content standards (the National Science Education Standards of the National Research Council), state content standards (California, Florida, New York, and Texas), English as a Second Language (ESL) standards, the language demands of standardized achievement tests, teacher expectations of language comprehension and production across grades, and the language students actually encounter in school through input such as teacher oral language, textbooks, and other print materials. The initial product will be a framework for applying ALP to test specifications, including prototype tasks that language test developers can use in their work in the K-12 arena.
Long-range plans include the development of guidelines for curriculum development and teacher professional development that will help assure that all students, English-only and ELLs alike, receive the necessary English language exposure and instruction to allow them to succeed in education in the United States.

#763 – The Effects of POWERSOURCE Intervention on Student Understanding of Basic Mathematical Principles
Julia Phelan, Kilchan Choi, Terry Vendlinski, Eva L. Baker, Joan L. Herman

Summary
This report describes results from field-testing of POWERSOURCE formative assessment alongside professional development and instructional resources. Researchers at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) employed a randomized, controlled design to address the following question: Does the use of POWERSOURCE strategies improve 6th-grade student performance on assessments of key mathematical ideas relative to the performance of a comparison group? Sixth-grade teachers were recruited from 7 districts and 25 middle schools. A total of 49 POWERSOURCE and 36 comparison group teachers and their students (2,338 POWERSOURCE, 1,753 comparison group students) were included in the study analyses. All students took a pretest of prerequisite knowledge and, at the end of the study year, a transfer measure of tasks drawn from international tests. Students in the POWERSOURCE group used sets of formative assessment tasks, and POWERSOURCE teachers had exposure to professional development and instructional resources. Results indicated that students with higher pretest scores tended to benefit more from the treatment than students with lower pretest scores. In addition, students in the POWERSOURCE group significantly outperformed comparison group students on distributive property items, and the effect was larger as pretest scores increased. Results, limitations, and future directions are discussed.


To cite from this report, please use the following as your APA reference:

Phelan, J., Choi, K., Vendlinski, T., Baker, E. L., & Herman, J. L. (2009). The effects of POWERSOURCE intervention on student understanding of basic mathematical principles (CRESST Report 763). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#770 – Capturing Quality in Formative Assessment Practice: Measurement Challenges
Joan L. Herman, Ellen Osmundson, & David Silver

Summary
This study examines measures of formative assessment practice using data from a study of the implementation and effects of adding curriculum-embedded measures to a hands-on science program for upper elementary school students. The authors present a unifying conception for measuring critical elements of formative assessment practice, illustrate common measures for doing so, and investigate the relationships among scores on these measures. Findings raise important issues regarding both the challenge of obtaining valid measures of teachers’ assessment practice and the uneven quality of current teacher practice.


To cite from this report, please use the following as your APA reference:

Herman, J. L., Osmundson, E., & Silver, D. (2010). Capturing quality in formative assessment practice: Measurement challenges (CRESST Report 770). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#592 – An Evaluation of Creative Learning Communities in Classrooms: A Two-Year Study of the Implementation of School Reform
Ann M. Mastergeorge, Ingrid Roberson, Felipe Martinez, Lance Evans, and Andrew Johnson

Summary
The Creative Learning Communities (CLC) grants program, part of the Disney Learning Partnership, is a philanthropic initiative to assist participating elementary schools involved in school reform in instituting collaborative and creative learning environments. This evaluation of the first years of the grants program by the UCLA Center for the Study of Evaluation (CSE) and the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) presents findings from data collected over a 2-year period from 32 CLC schools. The report highlights the overall implementation process of the CLC grants, trends in school changes from one year to the next, and a case study component of 8 schools, which includes an analysis of teacher interviews and classroom observations. Interim conclusions and recommendations for improving implementation of the CLC grants program are presented.

#677 – A Multi-Method and Multi-Source Approach for Studying Fidelity of Implementation
Maria Araceli Ruiz-Primo

Summary
Even the best program in education will fail to have its intended impact if its essential elements are not implemented properly. Degree of implementation is therefore critical for drawing valid conclusions about program outcomes (e.g., Scheirer & Rezmovic, 1983). Especially important is information on the fidelity with which a program is implemented. Fidelity of implementation (FOI) has been defined as the determination of how closely a program is implemented according to its original design, or as intended (e.g., Dobson & Shaw, 1988; Dusenbury, Brannigan, Falco, & Hansen, 2003; Witt & Elliot, 1985). Unfortunately, empirical evidence on the effect of FOI on program success is limited. Many evaluation studies do not collect data on FOI, and even fewer examine its impact on program outcomes (Dane & Schneider, 1998; Dusenbury et al., 2003; Lillehoj, Griffin, & Spoth, 2004). Furthermore, studies of FOI differ considerably in their approaches (Dane & Schneider, 1998; Dusenbury et al., 2003; Huntley, 2004; Lillehoj, Griffin, & Spoth, 2004); there is no set of methods and procedures that is universally known and used as standard in the study of FOI. Although the characteristics of each program determine what has to be measured during implementation, there are commonalities across types of programs and, therefore, some general strategies that can be developed. This paper addresses FOI at three levels: general, conceptual, and applied. The first section provides a short review of the literature on the main issues of FOI. The second section proposes a conceptual approach for studying FOI in the context of inquiry-based science curricula. The third section describes a series of studies, currently in progress, in which this conceptual approach is being used.

#366 – Teachers' Ideas and Practices About Assessment and Instruction
Hilda Borko, Maurene Flory, and Kate Cumbo

Summary
Participants in this study were part of a year-long intervention designed to help teachers develop performance assessments in reading and mathematics. Seeking to evaluate teachers' knowledge, beliefs, and practices around assessment and instruction, the researchers also studied the changes that occurred in teachers during the first semester of the intervention program. Findings from the study indicated that the performance assessment development and implementation process gave teachers a better understanding of, and new insights into, students' thinking and learning than they had when relying exclusively on more traditional forms of assessment. However, it was not clear to what extent teachers changed their instructional programs to take advantage of their newly gained insights. Based on their observations so far, the researchers are confident that as the program continues, more extensive changes will occur.

#701 – Studying the Sensitivity of Inferences to Possible Unmeasured Confounding Variables in Multisite Evaluations
Michael Seltzer, Jinok Kim, and Ken Frank

Summary
In multisite evaluation studies, questions of primary interest often focus on whether particular facets of implementation or other aspects of classroom or school environments are critical to a program’s success. However, the differences with which teachers implement programs can depend on an array of factors, including differences in their training and experience, in the prior preparation of their students, and in the degree of support they receive from school administrators. A crucially important implication is that in studying connections between various aspects of implementation and the effectiveness of programs, we need to be alert to factors that may be confounded with differences in implementation. Despite our best efforts to anticipate and measure possible confounding variables, teachers who differ in the quality and frequency with which they implement various program practices, use particular program materials, and the like may differ in important ways that have not been measured, giving rise to possible hidden bias. In this paper, we extend Frank’s (2000) work on assessing the impact of omitted confounding variables on coefficients of interest in regression settings to applications of hierarchical models (HMs) in multisite settings in which interest centers on testing whether certain aspects of implementation are critical to a program’s success. We provide a detailed illustrative example using data from a study focusing on the effects of reform-minded instructional practices in mathematics (Gearhart et al., 1999; Saxe et al., 1999).
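The core logic behind this kind of sensitivity analysis can be illustrated with a minimal sketch. This is a hypothetical simplification (the function names and the equal-components assumption are ours, and it ignores covariates and the multilevel structure the report actually addresses): it asks how strongly an unmeasured confounder would have to correlate with both an implementation measure and the outcome to pull an observed effect below statistical significance.

```python
import math

def partial_r(r_xy, r_xc, r_yc):
    """Correlation of x and y after adjusting for a confounder c
    (standard first-order partial correlation formula)."""
    return (r_xy - r_xc * r_yc) / math.sqrt((1 - r_xc**2) * (1 - r_yc**2))

def impact_threshold(r_xy, n, t_crit=1.96):
    """Impact (the product r_xc * r_yc, with the two components assumed
    equal) that an unmeasured confounder would need in order to reduce
    r_xy to the critical correlation for significance, in a simple
    bivariate setting with n observations."""
    r_crit = t_crit / math.sqrt(n - 2 + t_crit**2)
    return (r_xy - r_crit) / (1 - r_crit)

# Hypothetical example: an observed implementation-outcome correlation
# of .30 across n = 200 classrooms.
threshold = impact_threshold(0.30, 200)
r_component = math.sqrt(threshold)  # each confounder correlation, if equal
```

Under these simplifying assumptions, a confounder would need correlations of roughly `sqrt(threshold)` with both the implementation measure and the outcome to render the effect non-significant; the paper extends this reasoning to coefficients in hierarchical models.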

#702 – English Language Learners and Math Achievement: A Study of Opportunity to Learn and Language Accommodation
Jamal Abedi, Mary Courtney, Seth Leon, Jenny Kao, and Tarek Azzam

Summary
This study investigated the interactive effects among students’ opportunity to learn (OTL) in the classroom, two language-related testing accommodations, and the language proficiency of English language learner (ELL) and other students, and how these variables impact mathematics performance. Hierarchical linear modeling was employed to investigate three class-level components of OTL, two language accommodations, and ELL status. The three class-level components of OTL were: (1) student report of content coverage; (2) teacher content knowledge; and (3) class prior math ability (as determined by an average of students’ Grade 7 math scores). A total of 2,321 Grade 8 students were administered one of three versions of an algebra test: a standard version with no accommodation, a dual-language (English and Spanish) version, or a linguistically modified version. These students’ teachers were administered a teacher content knowledge measure. Additionally, 369 of these students were observed for one class period for student-teacher interactions. Students’ scores from the prior year’s state mathematics and reading achievement tests, along with other background information, were also collected.

Results indicated that all three class-level components of OTL were significantly related to math performance, after controlling for prior math ability at the individual student level. Class prior math ability had the strongest effect on math performance. Results also indicated that teacher content knowledge had a significant differential effect on the math performance of students grouped by a quick reading proficiency measure, but not by students’ ELL status or by their reading achievement test percentile ranking. Results also indicated that the two language accommodations did not impact students’ math performance. Additionally, results suggested that, in general, ELL students reported less content coverage than their non-ELL peers, and they were in classes of overall lower math ability than their non-ELL peers.

While it is understandable why a student’s performance in seventh grade strongly determines the content she or he receives in eighth grade, there is some evidence in this study that students of lower language proficiency can learn algebra and demonstrate algebra knowledge and skills when they are provided with sufficient content and skills delivered by proficient math instructors in a classroom of students who are proficient in math.

#406 – Teachers' and Students' Roles in Large-Scale Portfolio Assessment: Providing Evidence of Competency With the Purposes and Processes of Writing
Maryl Gearhart and Shelby Wolf

Summary
From 1992 to 1994, the California Department of Education and the Center for Performance Assessment of the Educational Testing Service were engaged in the development of a new standards-based portfolio component for the California Learning Assessment System (CLAS). Based on interviews with four teachers from different school settings, the researchers sought answers to the following questions: How did teachers participating in trials of the program understand the CLAS Portfolio Assessment Program, and how did they use the dimensions of learning to guide their language arts curriculum and assessment practices? How did their students understand the dimensions of learning, and how did they use the dimensions to guide their portfolio choices? What implications do the findings have for large-scale portfolio assessment?

The CRESST researchers found that teachers' curricula varied, providing students with quite different opportunities to learn about the dimensions of learning measured by the portfolios; teachers also varied in their approaches to documenting students' writing, providing students with different opportunities to demonstrate their competencies through portfolio choices. Findings suggest a need to balance the vision of student choice as a desirable goal for students against what is needed to ensure that portfolio raters are provided appropriate evidence of student performance.