
Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#712 – The Afterschool Hours: Examining the Relationship between Afterschool Staff-Based Social Capital and Student Engagement in LA's BEST
Denise Huang, Alison Coordt, Deborah La Torre, Seth Leon, Judy Miyoshi, Patricia Perez, and Cynthia Peterson

Summary
The relationship between afterschool staff and students plays an important role in encouraging students to persist in school. The primary goal of this study was to examine the connection between perceptions of staff-student relationships and the educational values, future aspirations, and engagement of LA’s BEST students. To this end, we developed a set of research questions to examine the association between strong staff-student relationships—characterized by mutual trust, bonding, and support—and student variables such as academic engagement and future aspirations. To address these questions, staff and student surveys were developed and piloted by the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) and then widely administered to both afterschool staff and students. Descriptive statistics were computed for the survey data, and hierarchical linear model (HLM) analyses and structural equation models were fitted to examine the relationships among variables. Afterschool programs have become much more than childcare providers for working parents or safe havens within violent communities. They have grown into powerful learning centers with lasting and far-reaching effects on students. These programs possess an asset that gives them the ability and opportunity to influence students to develop a belief system that will ultimately shape their academic and social futures—that asset is social capital.

#711 – Does Teacher Professional Development Affect Content and Pedagogical Knowledge: How Much and for How Long?
Pete Goldschmidt, Geoffrey Phelps

Summary
We examine the impact of teacher professional development on knowledge growth and subsequent knowledge retention. Specifically, we use English Language Arts teacher content and pedagogy assessments to determine whether the California Professional Development Institutes significantly improve teacher content knowledge and whether teachers retain that knowledge six months after the institutes are completed. The results indicate that teachers vary significantly in pre-institute knowledge across the four assessed domains and demonstrate significant knowledge growth, but retain only about one half of what was gained during the institute. Further, pre-existing knowledge gaps are not systematically reduced, and teacher perceptions of institute quality are related to neither knowledge growth nor knowledge retention.

#710 – Drawing Sound Inferences Concerning the Effects of Treatment on Dispersion in Outcomes: Bringing to Light Individual Differences in Response to Treatment
Jinok Kim, Michael Seltzer

Summary
Individual differences in response to a given treatment have been of longstanding interest in education. While many evaluation studies focus on average treatment effects (i.e., the effects of treatments on the levels of outcomes of interest), this paper additionally considers estimating the effects of treatments on the dispersion in outcomes. Differences in dispersion can, under certain circumstances, signal individual differences in response to a given treatment, thereby helping us identify factors that magnify or dampen treatment effects and that might otherwise go unnoticed. Much of this paper focuses on quasi-experiments in nested settings, which are commonly encountered in multi-site evaluation studies. In such settings, studying differences in dispersion as well as in means (e.g., differences in levels of outcomes for treatment and control group students) entails jointly modeling mean and dispersion structures in a hierarchical modeling (HM) framework. This paper shows how a well-elaborated dispersion structure based on substantive theories mitigates the problem of confounding by cluster characteristics, while a well-elaborated mean structure helps avoid confounding by individual characteristics, with regard to inferences concerning dispersion. We illustrate these ideas with analyses of data from a study of the effectiveness of two innovative instructional programs relative to traditional instruction in elementary mathematics classrooms. We employ a fully Bayesian approach and discuss its advantages in modeling dispersion. We further discuss possible extensions of the methodology to other evaluation settings, including longitudinal evaluation settings.
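The central idea here, modeling the effect of treatment on dispersion and not just on the mean, can be illustrated outside the hierarchical setting. The sketch below is a simplified, hypothetical single-site version, not the report's fully Bayesian hierarchical model: it fits a normal model by maximum likelihood in which both the mean and the log standard deviation of the outcome are linear in a treatment indicator, so the coefficient on the log-SD term captures the treatment effect on dispersion. All numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated single-site data (hypothetical values): treatment raises the
# mean of the outcome and shrinks its dispersion.
n = 2000
t = rng.integers(0, 2, n)                      # treatment indicator
y = rng.normal(50 + 5 * t, np.exp(1.5 - 0.3 * t))

def neg_log_lik(params):
    """Normal likelihood with mean and log-SD both linear in treatment."""
    b0, b1, g0, g1 = params
    mu = b0 + b1 * t                           # mean structure
    log_sd = g0 + g1 * t                       # dispersion structure
    return np.sum(0.5 * ((y - mu) / np.exp(log_sd)) ** 2 + log_sd)

start = [y.mean(), 0.0, np.log(y.std()), 0.0]
fit = minimize(neg_log_lik, x0=start, method="BFGS")
b0, b1, g0, g1 = fit.x
# b1 estimates the treatment effect on the mean; a negative g1 indicates
# the treated group is less dispersed than the control group.
```

A joint mean-dispersion model like this, once given school-level random effects and priors, is the kind of structure the report elaborates in its fully Bayesian analysis.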

#709 – Mathematics and Science Academy: Year 6 Final Evaluation Report
Ellen Osmundson, Joan Herman

Summary
This is an evaluation report for Year 6 of the Math and Science Academy (MSA), an initiative of the Los Alamos National Laboratory. A brief overview of the project, including its goals and framework, is presented first, followed by a description of the methods used for the evaluation. Next, findings from the Year 6 evaluation are described, including program impact on students and teachers. The report concludes with recommendations for future years of the program.

#708 – Causal Inference in Multilevel Settings in which Selection Processes Vary Across Schools
Junyeop Kim, Michael Seltzer

Summary
In this report we focus on the use of propensity score methodology in multisite studies of the effects of educational programs and practices in which both treatment and control conditions are enacted within each of the schools in a sample, and the assignment to treatment is not random. A key challenge in applying propensity score methodology in such settings is that the process by which students wind up in treatment or control conditions may differ substantially from school to school. To help capture differences in selection processes across schools, and achieve balance on key covariates between treatment and control students in each school, we propose the use of multilevel logistic regression models for propensity score estimation in which intercepts and slopes are treated as varying across schools. Through analyses of the data from the Early Academic Outreach Program (EAOP), we compare the performance of this approach with other possible strategies for estimating propensity scores (e.g., single-level logistic regression models; multilevel logistic regression models with intercepts treated as random and slopes treated as fixed). Furthermore, we draw attention to how the failure to achieve balance within each school can result in misleading inferences concerning the extent to which the effect of a treatment varies across schools, and concerning factors (e.g., differences in implementation across schools) that might dampen or magnify the effects of a treatment.
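The modeling choice at issue, letting both the intercept and the slope of the selection model vary by school, can be illustrated with a small sketch. The code below is a hypothetical, fully unpooled stand-in for the multilevel formulation: instead of treating school intercepts and slopes as random effects, it fits a separate logistic regression in each school. All data and names are invented for illustration; an actual analysis along these lines would fit a single multilevel logistic regression with random intercepts and slopes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical data: five schools whose selection-into-treatment process
# differs in both its baseline rate (intercept) and covariate slope.
n_schools, n_students = 5, 400
intercepts = rng.normal(0.0, 0.8, n_schools)
slopes = rng.uniform(0.5, 1.5, n_schools)      # selection strength varies

schools = []
for j in range(n_schools):
    x = rng.normal(0.0, 1.0, n_students)       # e.g., prior achievement
    p_true = 1.0 / (1.0 + np.exp(-(intercepts[j] + slopes[j] * x)))
    z = rng.binomial(1, p_true)                # observed treatment status
    schools.append((j, x, z))

# Fully unpooled stand-in for the multilevel model: a separate logistic
# regression per school, so each school gets its own intercept and slope.
propensity = {}
for j, x, z in schools:
    model = LogisticRegression().fit(x.reshape(-1, 1), z)
    propensity[j] = model.predict_proba(x.reshape(-1, 1))[:, 1]

# Within each school, treated students should have a higher average
# estimated propensity than controls; balance checks would then compare
# covariates across groups after matching or stratifying on these scores.
```

Relative to this unpooled sketch, the multilevel approach partially pools intercept and slope estimates across schools, which stabilizes the selection model in schools with few students while still capturing school-to-school differences in the selection process.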

#707 – Using Artifacts to Characterize Reform-Oriented Instruction: The Scoop Notebook and Rating Guide
Hilda Borko, Brian Stecher, Karin Kuffner

Summary
This document includes the final data collection and scoring tools created by the “Scoop” project, a five-year project funded through the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), to develop an alternative approach for characterizing classroom practice. The goal of the project was to use artifacts and related materials to represent classroom practice well enough that a person unfamiliar with the teacher or the lessons can make valid judgments about selected features of practice solely on the basis of those materials. The artifacts and other materials were collected in a binder called the Scoop Notebook. Thus, the project sought to answer the question, “Can accurate judgments about reform-oriented instructional practice be made based on the classroom artifacts and teacher reflections assembled in the Scoop Notebook?” This document describes the Scoop Notebook and the rating guides, gives instructions for assembling the materials and explaining the process to teachers, and discusses two potential uses of the Scoop Notebook—as a tool to characterize classroom practice or as a tool for teacher professional development. The appendices present the final versions of the Scoop Notebook and rating guide for both mathematics and science.

#706 – Moving to the Next Generation System Design: Integrating Cognition, Assessment, and Learning
Eva L. Baker

Summary
This paper describes the relationships between research on learning and its application in assessment models and operational systems. These have been topics of research at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) for more than 20 years and form a significant part of the intellectual foundation of our present research Center, supported by the Institute of Education Sciences. This description serves as the context for the presentation of CRESST efforts in building the POWERSOURCE© assessment system as described in subsequent papers delivered at Session N2 of the 2006 annual meeting of the National Council on Measurement in Education.

#705 – Using Artifacts to Describe Instruction: Lessons Learned from Studying Reform-Oriented Instruction in Middle School Mathematics and Science
Brian Stecher, Hilda Borko, Karin L. Kuffner, Felipe Martinez, Suzanne C. Arnold, Dionne Barnes, Laura Creighton and Mary Lou Gilbert

Summary
Accurate descriptions of instructional practice are important both for research on “what works” in education and for professional development efforts aimed at improving practice. This report describes a project to develop procedures for characterizing classroom practices in mathematics and science on the basis of collected classroom artifacts. A data collection tool called the “Scoop Notebook” was used to gather classroom artifacts (e.g., lesson plans, instructional materials, student work) and teacher reflections. Scoring guides were developed for rating the Notebooks (and observed classroom behaviors) along ten dimensions of reform-oriented practice in mathematics and science. Field studies were conducted in middle school science and mathematics classrooms to collect information about the reliability, validity, and feasibility of the Scoop Notebook as a measure of classroom practice. The studies yielded positive results, indicating that the Scoop Notebooks and associated scoring guides have promise for providing accurate representations of selected aspects of classroom practice. The report summarizes these results and discusses lessons learned about artifact collection and scoring procedures.

#704 – Developing Expertise With Classroom Assessment in K-12 Science: Learning to Interpret Student Work
Interim Findings From a 2-Year Study

Maryl Gearhart, Sam Nagashima, Jennifer Pfotenhauer, Shaunna Clark, Cheryl Schwab, Terry Vendlinski, Ellen Osmundson, Joan Herman, Diana J. Bernbaum

Summary
This article reports findings on growth in three science teachers’ expertise with interpretation of student work over 1 year of participation in a program designed to strengthen classroom assessment in the middle grades. Using a framework for classroom assessment expertise, we analyzed patterns of teacher learning, as well as the roles of the professional program and of the quality of the assessments provided with teachers’ instructional materials.

#703 – The Nature and Impact of Teachers’ Formative Assessment Practices
Joan L. Herman, Ellen Osmundson, Carlos Ayala, Stephen Schneider, Mike Timms

Summary
Theory and research suggest the critical role that formative assessment can play in student learning. The use of assessment in guiding instruction has long been advocated: Through the assessment of students’ needs and the monitoring of student progress, learning sequences can be appropriately designed, instruction adjusted during the course of learning, and programs refined to be more effective in promoting student learning goals. In more modern pedagogical conceptions, assessment moves from an information source on which to base action to part and parcel of the teaching and learning process. The following study provides food for thought about the research methods needed to study teachers’ assessment practices and the complexity of assessing their effects on student learning. On the one hand, our study suggests that effective formative assessment is a highly interactive endeavor, involving the orchestration of multiple dimensions of practice, and demands sophisticated qualitative methods for study. On the other hand, detecting and understanding learning effects in small samples, even with the availability of comparison groups, poses difficulties, to say the least.