
Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#742 – Exploring Data Use and School Performance in an Urban Public School District
Joan L. Herman, Kyo Yamashiro, Sloane Lefkowitz, Lee Ann Trusela

Summary
This study examined the relationship between data use and achievement at 13 urban Title I schools. Using multiple methods, including test scores, district surveys, school transformation plans, and four case study site visits, the researchers found wide variation in the use of data to inform instruction and planning. In some cases, schools were overwhelmed by the amount of data or were not convinced that alternating test score data from two different tests provided dependable information. The researchers did not find a substantial link between data use and achievement, which may reflect the small sample size or differences in implementation across schools. Teachers and principals identified important needs, including more timely data delivery, individual rather than group data reports, and better training in assessment and data analysis.

#741 – From Evidence to Action: A Seamless Process in Formative Assessment?
Margaret Heritage, Jinok Kim, Terry P. Vendlinski, Joan L. Herman

Summary
Based on the results of a generalizability study (G study) of measures of teacher knowledge for teaching mathematics developed at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at the University of California, Los Angeles, this report provides evidence that teachers are better at drawing reasonable inferences about student levels of understanding from assessment information than they are at deciding the next instructional steps. We discuss the implications of these results for effective formative assessment and end with considerations of how teachers can be supported to know what to teach next.
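
For readers unfamiliar with generalizability theory, the coefficient a G study estimates typically takes the following standard form (this is the general G-theory formula, not a design specific to this report):

```latex
% Standard generalizability (G) coefficient for relative decisions:
% universe-score variance over universe-score plus relative-error variance.
% (Requires amsmath.)
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta}
```

Here \(\sigma^2_p\) is the variance attributable to the persons being measured (in this study, teachers), and \(\sigma^2_\delta\) pools the relative error variance contributed by measurement facets such as tasks and raters.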

#740 – Formative Assessment and the Improvement of Middle School Science Learning: The Role of Teacher Accuracy
Joan L. Herman, Kilchan Choi

Summary
This article articulates a framework for examining the quality of formative assessment practice and provides empirical evidence in support of one of its components. Set in middle school science, the study examines the accuracy of teachers' judgments of students' understanding and the relationship of that accuracy to middle school students' learning. Analyses within and between teachers show a consistent, positive relationship between teacher accuracy and student learning. The results lend support to the power of assessment in improving student learning and also suggest some potential challenges in assuring quality formative assessment practice.

#739 – Improving Formative Assessment Practice with Educational Information Technology
Terry P. Vendlinski, David Niemi, Jia Wang, Sara Monempour

Summary
This report describes a web-based assessment design tool, the Assessment Design and Delivery System (ADDS), that provides teachers with both a structure and the resources required to develop and use quality assessments. The tool is applicable across subject domains. The heart of the ADDS is an assessment design workspace that allows teachers to decide the attributes of an assessment, as well as the context and the type of responses students will generate, as part of their design process. Although the tool is flexible enough to allow these steps to be done in any order (or skipped entirely), our goal was to streamline and scaffold the process for teachers by organizing all the materials in one place and providing resources they could use or reuse to create assessments for their students. The tool allows teachers to deliver the assessments either online or on paper. Initial results from our first teacher study suggest that teachers who used the tool developed assessments that were more cognitively demanding of students and that addressed the "big ideas" of a domain rather than disassociated facts.
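
As a purely hypothetical sketch (the report does not publish the ADDS data model, and every name below is invented for illustration), the kind of record such a design workspace captures might look like:

```python
from dataclasses import dataclass, field

# Hypothetical illustration only -- not the actual ADDS schema.
@dataclass
class AssessmentDesign:
    big_idea: str                    # the "big idea" the assessment targets
    context: str                     # scenario or prompt context
    response_type: str               # e.g., "short answer", "multiple choice"
    delivery: str = "online"         # "online" or "paper"
    items: list[str] = field(default_factory=list)

design = AssessmentDesign(
    big_idea="distributive property",
    context="sharing supplies among groups",
    response_type="short answer",
)
design.items.append("Explain why 3 x (n + 2) equals 3n + 6.")
```

The point of such a structure is the one the report emphasizes: the attributes, context, and response type are explicit fields a teacher can fill in any order, and a completed design can be reused across classes or delivery modes.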

#738 – Providing Validity Evidence to Improve the Assessment of English Language Learners
Mikyung Kim Wolf, Joan L. Herman, Jinok Kim, Jamal Abedi, Seth Leon, Noelle Griffin, Patina L. Bachman, Sandy M. Chang, Tim Farnsworth, Hyekyung Jung, Julie Nollner, Hye Won Shin

Summary
This research project addresses the validity of assessments used to measure the performance of English language learners (ELLs), such as those mandated by the No Child Left Behind Act of 2001 (NCLB, 2002). The goals of the research are to help educators understand and improve ELL performance by investigating the validity of their current assessments, and to provide states with much-needed guidance for improving the validity of their English language proficiency (ELP) and academic achievement assessments for ELL students. The research has three phases. In the first, exploratory phase, the researchers analyze existing data and documents to understand the nature and validity of states' current practices and their priority needs, identifying key validity issues and formulating research areas where further investigation is needed. In the second phase, the researchers will deepen their analysis of the areas identified in Phase I. In the third phase, they will develop specific guidelines on which states may base their ELL assessment policy and practice. The present report focuses on the Phase I research activities and results, and discusses preliminary implications and recommendations for improving ELL assessment systems.

#737 – Recommendations for Assessing English Language Learners: English Language Proficiency Measures and Accommodation Uses
Mikyung Kim Wolf, Joan L. Herman, Lyle F. Bachman, Alison L. Bailey, Noelle Griffin

Summary
The No Child Left Behind Act of 2001 (NCLB, 2002) has had a great impact on states' policies for assessing English language learner (ELL) students. The legislation requires states to develop or adopt sound assessments that validly measure ELL students' English language proficiency, as well as their content knowledge and skills. While states have moved rapidly to meet these requirements, they face challenges in validating their current assessment and accountability systems for ELL students, partly due to a lack of resources. Considering the significant role of assessment in guiding decisions about organizations and individuals, validity is a paramount concern. In light of this, we reviewed the current literature and policy on ELL assessment to inform practitioners of the key issues to consider in their validation process. Drawing on this review of literature and practice, we developed a set of guidelines and recommendations that practitioners can use as a resource to improve their ELL assessment systems. The present report, the last component of the series, provides recommendations for state policy and practice in assessing ELL students and discusses areas for future research and development.

#736 – Assessment Portfolios as Opportunities for Teacher Learning
Maryl Gearhart, Ellen Osmundson

Summary
This report is an analysis of the role of assessment portfolios in teacher learning. Over 18 months, 19 experienced science teachers worked in grade-level teams to design, implement, and evaluate assessments to track student learning throughout a curriculum unit, supported by semi-structured tasks and resources in assessment portfolios. Teachers had the opportunity to complete three assessment portfolios for two or three curriculum units. Evidence of teacher learning included (a) changes over time in the contents of 10 teachers' portfolios spanning Grades 1–9 and (b) the full cohort's self-reported learning in surveys and focus groups. Findings revealed that the participating teachers developed greater understanding of assessment planning, quality assessments and scoring guides, strategies for analyzing student understanding, and the use of evidence to guide instruction. Evidence of broad impact on teacher learning was balanced by evidence of uneven growth, particularly with more advanced assessment concepts such as reliability and fairness, as well as curriculum-specific methods for developing and using assessments and scoring guides. The findings point to a need for further research on ways to balance general approaches to professional development with content-specific strategies that deepen teacher skill and knowledge.

#735 – Templates and Objects in Authoring Problem-Solving Assessments
Terry P. Vendlinski, Eva L. Baker, David Niemi

Summary
Assessing whether students can both re-present a corpus of learned knowledge and apply that knowledge to solve problems is key to assessing student understanding. This notion, in turn, shapes our thinking about what we assess, how we author such assessments, and how we interpret assessment results. The diffusion of technology into venues of learning offers new opportunities for student assessment. Specifically, computer-based simulations seem to provide sufficiently rich environments, and the tools necessary, to allow us to infer accurately how well a student's individual mental model of the world can accommodate, integrate, and exploit concepts from a domain of interest. In this paper, we first identify the characteristics of simulations that our experience suggests are necessary to make them appropriate for pedagogical and assessment purposes. Next, we discuss the models and frameworks (templates) we have used to ensure these characteristics are considered. Finally, we describe two computerized instantiations (objects) of these frameworks and the implications for the follow-on design of simulations.
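
In software terms (an illustrative analogy only, not code from the report), the template/object distinction maps naturally onto a reusable class and its instances: one frame, many concrete assessments.

```python
# Illustrative analogy only: a "template" as a class, assessment "objects"
# as distinct instantiations of that template.
class ProblemSolvingTemplate:
    """Reusable frame for a simulation-based assessment task."""

    def __init__(self, domain: str, scenario: str, target_concept: str):
        self.domain = domain
        self.scenario = scenario
        self.target_concept = target_concept

    def render(self) -> str:
        return f"[{self.domain}] {self.scenario} (assesses: {self.target_concept})"

# Two objects built from the same template, differing only in content.
chem = ProblemSolvingTemplate(
    "chemistry", "balance a reaction in a virtual lab", "conservation of mass")
econ = ProblemSolvingTemplate(
    "economics", "set prices in a simulated market", "supply and demand")
print(chem.render())
print(econ.render())
```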

#734 – Using Data and Big Ideas: Teaching Distribution as an Instance of Repeated Addition
Terry P. Vendlinski, Keith E. Howard, Bryan C. Hemberg, Laura Vinyard, Annabel Martel, Elizabeth Kyriacou, Jennifer Casper, Yourim Chai, Julia C. Phelan, Eva L. Baker

Summary
The inability of students to become proficient in algebra seems to be widespread in American schools. One reason often cited for this is that instruction seldom builds on prior knowledge. Research suggests that teacher effectiveness is the most critical controllable variable in improving student achievement. This report details a process of formative assessment and professional development (called PowerSource©) designed to improve teacher effectiveness and student achievement. We describe the process we used to develop a model of distribution over addition and subtraction, one of three big ideas developed during the year, and the interactions we had with teachers about teaching distribution in various ways. As a consequence of these interactions, we were able to test which approach had the greatest effect on student learning: teaching distribution using the notion of multiplication as repeated addition (a concept students had learned previously), using array or area models, or teaching it procedurally. We found not only that the repeated addition model was less likely to create certain student misconceptions, but also that students taught with it were more likely to correctly answer questions involving distribution than their counterparts taught using either of the other methods. Teachers subsequently reported that they preferred teaching distribution as an instance of repeated addition to teaching it using the other available methods.
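
To illustrate the idea (a worked example of the general approach, not an item from the report's materials): treating multiplication as repeated addition lets the distributive property fall out of regrouping, rather than being memorized as a rule.

```latex
% Distribution recovered from multiplication as repeated addition.
% (Requires amsmath.)
\begin{align*}
3(x + 4) &= (x + 4) + (x + 4) + (x + 4) \\
         &= (x + x + x) + (4 + 4 + 4) \\
         &= 3x + 12
\end{align*}
```

By contrast, an array or area model would represent the same product as a rectangle of width \(x + 4\) and height 3, split into regions of area \(3x\) and \(3 \cdot 4\), while a procedural treatment simply states the rule \(a(b + c) = ab + ac\).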

#733 – Testing One Premise of Scientific Inquiry in Science Classrooms: A Study That Examines Students' Scientific Explanations
Maria Araceli Ruiz-Primo, Min Li, Shin-Ping Tsai, Julie Schneider

Summary
In this study we analyze the quality of students' written scientific explanations in eight inquiry-based middle school science classrooms and explore the link between the quality of those explanations and student performance. We analyzed explanations based on three components: a claim, evidence to support it, and reasoning that justifies the link between the claim and the evidence. Quality of explanations was linked with students' performance on different types of assessments focusing on the content of the science unit studied. To identify critical features related to high-quality explanations, we also analyzed the characteristics of the instructional prompts that teachers used. Results indicated that: (a) students' written explanations can be reliably scored with the proposed approach; (b) the instructional practice of constructing explanations has not been widely implemented, despite its significance in the context of inquiry-based science instruction; (c) overall, a low percentage of students (18%) provided explanations with all three expected components, and the majority (40%) of the "explanations" found were presented as claims without any supporting data or reasoning; and (d) the correlations between the quality of students' explanations and their performance, all positive but varying in magnitude by assessment type, indicate that engaging students in the construction of high-quality explanations might be related to higher levels of student performance. The opportunities to construct explanations, however, seem to be limited. We also report some general characteristics of instructional prompts associated with higher quality written explanations.