Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#772 – What Probably Works in Alternative Assessment
Eva Baker

Summary
This report provides an overview of what was known about alternative assessment when the article was written in 1991. Topics include beliefs about assessment reform, an overview of alternative assessment and the associated research base, evidence of assessment impact, and critical features of alternative assessment. The author notes that in the short term, alternative assessment will generate negative news about student learning and will require massive support to succeed as a reform strategy.

#771 – A Conceptual Framework for Assessing Performance in Games and Simulations
Alan D. Koenig, John J. Lee, Markus Iseli, & Richard Wainess

Summary
The military’s need for high-fidelity games and simulations is substantial, as these environments can be valuable for demonstrating the essential knowledge, skills, and abilities required in complex tasks. However, assessing performance in these settings can be difficult—particularly in non-linear simulations where more than one pathway to success or failure may exist. The challenge lies not in capturing the raw data arising from game-play, but in interpreting what a player’s actions and decisions mean in the broader context of cognitive readiness for a particular job function or task. The aim of our current research is to develop a conceptual framework for assessing complex behaviors in non-linear, 3-D computer-based simulation environments. Central to this framework is the incorporation of both a domain ontology (which depicts the key constructs and relationships that comprise the domain being simulated) and one or more Bayesian networks (which catalog the probabilities of various sequences of actions related to the constructs in the ontology). For the current research, the domain is damage control related to firefighting onboard naval ships, and the two key constructs being assessed are situation awareness and decision-making. A 3-D, computer-based simulation depicting the interior of a naval ship has been developed. Assuming the role of a damage control investigator, the player is tasked with identifying, addressing, and reporting on a variety of potential, imminent, and existing fires and fire hazards. Using a dynamic Bayesian network, all actions and decisions related to situation awareness, communications, and decision-making are evaluated and recorded in real time, and are used for both formative and summative assessments of performance. Using this conceptual framework, our goal is to provide a generic model of assessment that can be incorporated into both new and pre-existing computer-based simulations that depict cognitively complex scenarios.
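
To make the scoring idea concrete, the sketch below shows in miniature the kind of belief update a Bayesian network performs as player actions arrive. The constructs, actions, and probabilities are illustrative assumptions, not the report's actual model; a full dynamic Bayesian network would also model transitions between time slices.

```python
# Minimal sketch of Bayesian scoring of simulation game-play.
# All constructs, actions, and probabilities are hypothetical; a full
# dynamic Bayesian network would also model time-slice transitions.

# Prior belief that the player is proficient in each assessed construct.
belief = {"situation_awareness": 0.5, "decision_making": 0.5}

# P(action | proficient) and P(action | novice) for a few illustrative
# in-game events; a real model would catalog many more action sequences.
evidence_model = {
    "reported_fire_hazard":   (0.85, 0.30),
    "ignored_smoke":          (0.10, 0.55),
    "closed_hatch_correctly": (0.75, 0.25),
}

def update(construct, action):
    """Bayes-rule update of the proficiency belief after one observed action."""
    p_prof, p_nov = evidence_model[action]
    prior = belief[construct]
    belief[construct] = p_prof * prior / (p_prof * prior + p_nov * (1 - prior))

# One short, hypothetical play session, scored in real time.
for construct, action in [
    ("situation_awareness", "reported_fire_hazard"),
    ("situation_awareness", "ignored_smoke"),
    ("decision_making", "closed_hatch_correctly"),
]:
    update(construct, action)
    print(construct, round(belief[construct], 3))
```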

#770 – Capturing Quality in Formative Assessment Practice: Measurement Challenges
Joan L. Herman, Ellen Osmundson, & David Silver

Summary
This study examines measures of formative assessment practice using data from a study of the implementation and effects of adding curriculum-embedded measures to a hands-on science program for upper elementary school students. The authors present a unifying conception for measuring critical elements of formative assessment practice, illustrate common measures for doing so, and investigate the relationships among and between scores on these measures. Findings raise important issues with regard to both the challenge of obtaining valid measures of teachers’ assessment practice and the uneven quality of current teacher practice.
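
As a toy illustration of that last step, comparing scores across measures of the same practice, the following sketch computes pairwise correlations among three hypothetical measures. The measure names and data are simulated, not the study's instruments or results.

```python
# Illustrative correlation analysis across hypothetical measures of
# formative assessment practice; data are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 40

# Three simulated measures of the same underlying practice.
artifact_rating = rng.normal(3.0, 0.5, n_teachers)                 # scored classroom artifacts
observation_rating = 0.6 * artifact_rating + rng.normal(1.2, 0.4, n_teachers)
self_report = rng.normal(3.2, 0.6, n_teachers)                     # weakly related self-report

scores = np.vstack([artifact_rating, observation_rating, self_report])
print(np.corrcoef(scores).round(2))  # pairwise correlations among the measures
```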


To cite from this report, please use the following as your APA reference:

Herman, J. L., Osmundson, E., & Silver, D. (2010). Capturing quality in formative assessment practice: Measurement challenges (CRESST Report 770). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#769 – Examining Practices of Staff Recruitment and Retention in Four High-Functioning Afterschool Programs: Extended Study from the National Afterschool Partnership Report
Denise Huang, Jamie Cho, Hannah H. Nam, Deborah La Torre, Christine Oh, Aletha Harven, Lindsay Perez Huber, Zena Rudo, Sarah Caverly

Summary
This study describes how staff qualifications, decisions on staffing procedures, and professional development opportunities support the recruitment and retention of quality staff members. Four high-functioning programs were identified. Qualitative procedures and instruments were designed to capture staff and parents' emic perspectives about relationships and professional development. Study findings revealed that staff across all four afterschool programs consistently reported intrinsic reasons for working in their program. Interview data implied that program incentives such as a career ladder and an ascending pay scale were not enticing enough to recruit or retain staff. Decisions to stay with a program tended to be altruistic in nature, such as the desire to provide academic, social, or emotional support for the students. At these four programs, the motivation for staff to stay could be the organized environments, clear program structures, open communication, clear program goals, consistent expectations, positive relationships, and program climates that foster staff efficacy in "making a difference" in their students' lives. Thus, strategies that enhance staff efficacy, such as empowering staff in decision-making and providing professional development opportunities to build professional skills, could help programs recruit and retain quality staff members.


To cite from this report, please use the following as your APA reference:

Huang, D., Cho, J., Nam, H. H., La Torre, D., Oh, C., Harven, A., … Caverly, S. (2009). Examining practices of staff recruitment and retention in four high-functioning afterschool programs (CRESST Report 769). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#768 – What Works? Common Practices in High Functioning Afterschool Programs Across the Nation in Math, Reading, Science, Arts, Technology, and Homework—A Study by the National Partnership
Denise Huang, Jamie Cho, Sima Mostafavi, Hannah H. Nam

Summary
In an effort to identify and incorporate exemplary practices into existing and future afterschool programs, the U.S. Department of Education commissioned a large-scale evaluation of the 21st Century Community Learning Centers (CCLC) program. The purpose of this evaluation project was to develop resources and professional development that address issues relating to the establishment and sustainability of afterschool programs. Fifty-three high-functioning programs, representative of eight regional divisions of the nation and including rural and urban as well as community-based and school district-related programs, were identified using rigorous methods. Exemplary practices in program organization, program structure, and especially content delivery were studied. The findings were synthesized into the Afterschool Toolkit, which was made available to programs nationwide via the World Wide Web. Professional development was conducted consistently and extensively throughout the nation.


To cite from this report, please use the following as your APA reference:

Huang, D., Cho, J., Mostafavi, S., Nam, H. H., Oh, C., Harven, A., & Leon, S. (2009). What works? Common practices in high functioning afterschool programs across the nation in math, reading, science, arts, technology, and homework—A study by the National Partnership. The afterschool program assessment guide (CRESST Report 768). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#767 – Measuring Opportunity to Learn and Academic Language Exposure for English Language Learners in Elementary Science Classrooms
José Felipe Martínez, Alison L. Bailey, Deirdre Kerr, Becky H. Huang, & Stacey Beauregard

Summary
The present study piloted a survey-based measure of Opportunity to Learn (OTL) and Academic Language Exposure (ALE) in fourth-grade science classrooms, seeking to distinguish teacher practices with ELL (English language learner) and non-ELL students. In the survey, participating teachers reported on their instructional practices and the context of their science classrooms. A small subsample was also observed teaching a lesson in their classrooms on two occasions. The pilot data were used to investigate basic psychometric properties of the survey, specifically (a) the dimensions underlying the survey items, in particular whether OTL and ALE are distinct or overlapping dimensions of science instruction, and (b) the match between information reported by teachers in the survey and that collected by classroom observers. Qualitative analyses of observations and teachers' open-ended survey responses informed the interpretation of the quantitative results and provided useful insights for refining the survey instrument to better capture the classroom experiences of ELL students.
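
The dimensionality question, whether the OTL and ALE items reflect one construct or two, is the kind of question an exploratory factor analysis can probe. The sketch below is a hypothetical illustration on simulated item responses, not the study's data or analysis.

```python
# Hypothetical check of whether survey items separate into two dimensions
# (e.g., OTL vs. ALE). Items and data are simulated, not the study's.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_teachers = 200

# Two latent dimensions, three items intended to tap each one.
otl, ale = rng.normal(size=(2, n_teachers))
items = np.column_stack(
    [otl + rng.normal(0, 0.5, n_teachers) for _ in range(3)]
    + [ale + rng.normal(0, 0.5, n_teachers) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(fa.components_.round(2))  # distinct loading blocks suggest two dimensions
```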


To cite from this report, please use the following as your APA reference:

Martínez, J. F., Bailey, A. L., Kerr, D., Huang, B. H., & Beauregard, S. (2010). Measuring opportunity to learn and academic language exposure for English language learners in elementary science classrooms (CRESST Report 767). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#766 – Examining the Effectiveness and Validity of Glossary and Read-Aloud Accommodations for English Language Learners in a Math Assessment
Mikyung Kim Wolf, Jinok Kim, Jenny C. Kao, Nichole M. Rivera

Summary
Glossaries and the reading aloud of test items are often listed as allowed accommodations in many states' policies for ELL students taking large-scale mathematics assessments. However, little empirical research has been conducted on the effects of these two accommodations on ELL students' test performance, and no research is available on how students actually use the accommodations they are given. The present study employed a randomized experimental design and a think-aloud procedure to examine the effects of the two accommodations. A total of 605 ELL and non-ELL students from two states participated in the experimental component, and a subset of 68 ELL students participated in the think-aloud component of the study. Results showed no significant effect of the glossary accommodation and mixed effects of the read-aloud accommodation on ELL students' performance: read-aloud had a significant effect for the ELL sample in one state but not in the other. Significant interaction effects between students' prior content knowledge and accommodations were found, suggesting that the accommodations were effective for students who had acquired the relevant content knowledge. The think-aloud analysis showed that students did not actively use the provided glossary, indicating a lack of familiarity with the accommodation. Implications for the effective use of accommodations and future research agendas are discussed.
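
The reported interaction, an accommodation effect that depends on prior content knowledge, corresponds to a treatment-by-pretest interaction term in a regression model. The sketch below simulates such data and fits the model; all variable names and numbers are made up for illustration.

```python
# Hypothetical illustration of an accommodation-by-prior-knowledge
# interaction; data are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
prior = rng.normal(0, 1, n)           # prior content knowledge (standardized)
accommodated = rng.integers(0, 2, n)  # 1 = received the read-aloud accommodation

# Generate scores with a small main effect and a positive interaction.
score = 50 + 5 * prior + 1.0 * accommodated + 2.0 * accommodated * prior \
        + rng.normal(0, 4, n)

df = pd.DataFrame({"score": score, "prior": prior, "accommodated": accommodated})
fit = smf.ols("score ~ prior * accommodated", data=df).fit()
print(fit.params.round(2))  # prior:accommodated is the interaction term
```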


To cite from this report, please use the following as your APA reference:

Wolf, M. K., Kim, J., Kao, J. C., & Rivera, N. M. (2009). Examining the effectiveness and validity of glossary and read-aloud accommodations for English language learners in a math assessment (CRESST Report 766). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#765 – Connecting Policy to Practice: Accommodations in States' Large-Scale Math Assessments for English Language Learners
Mikyung Kim Wolf, Noelle Griffin, Jenny C. Kao, Sandy M. Chang, Nichole M. Rivera

Summary
Accommodations have been widely used as a way of increasing the validity of content assessments for ELL students. However, concerns have also arisen regarding the validity of accommodation use, as well as accessibility and fairness. While many states have developed ELL-specific accommodation policies and guidelines, little research has been available on how these policies are carried out in practice. The present study investigated two states' accommodation policies for their respective large-scale Grade 8 math assessments and conducted a case study examining teachers' understanding of the policies and use of accommodations in their schools. Results indicated wide variation in how the policies are applied in practice, which raises validity concerns about providing accommodations and interpreting accommodated test results. Based on the findings, implications and recommendations for the appropriate use of accommodations are offered.


To cite from this report, please use the following as your APA reference:

Wolf, M. K., Griffin, N., Kao, J. C., Chang, S. M., & Rivera, N. M. (2009). Connecting policy to practice: Accommodations in states' large-scale math assessments for English language learners (CRESST Report 765). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#764 – A Three-State Study of English Learner Progress
Jinok Kim, Joan L. Herman

Summary
In this three-state study, the authors estimate the magnitudes of achievement gaps between EL students and their non-EL peers while avoiding typical pitfalls of cross-sectional studies. The authors further compare the observed achievement gaps across three dimensions (content areas, grades, and states) and report patterns of EL and non-EL achievement gaps within and across states. The study findings suggest that linguistic barriers and long-term EL designation may contribute to the observed achievement gaps, and that differences in the stringency of state reclassification criteria may influence the reported size of the EL and non-EL achievement gaps between states.
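
For readers unfamiliar with how such gaps are typically quantified, the sketch below computes a gap as a standardized mean difference on simulated scores. The metric and all values are assumptions for illustration, not the study's estimator or results.

```python
# Illustrative gap computation on simulated scale scores; the study's
# actual estimator and data are not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
el_scores = rng.normal(640, 40, 500)        # hypothetical EL scale scores
non_el_scores = rng.normal(665, 40, 2000)   # hypothetical non-EL scale scores

# Pooled SD here is the simple (unweighted) average of the two variances.
pooled_sd = np.sqrt((el_scores.var(ddof=1) + non_el_scores.var(ddof=1)) / 2)
gap = (non_el_scores.mean() - el_scores.mean()) / pooled_sd
print(round(gap, 2))  # gap in pooled standard deviation units
```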


To cite from this report, please use the following as your APA reference:

Kim, J., & Herman, J. L. (2009). A three-state study of English learner progress (CRESST Report 764). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

#763 – The Effects of POWERSOURCE Intervention on Student Understanding of Basic Mathematical Principles
Julia Phelan, Kilchan Choi, Terry Vendlinski, Eva L. Baker, Joan L. Herman

Summary
This report describes results from field-testing of POWERSOURCE formative assessment alongside professional development and instructional resources. Researchers at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) employed a randomized, controlled design to address the following question: Does the use of POWERSOURCE strategies improve 6th-grade student performance on assessments of the key mathematical ideas relative to the performance of a comparison group? Sixth-grade teachers were recruited from 7 districts and 25 middle schools. A total of 49 POWERSOURCE and 36 comparison-group teachers and their students (2,338 POWERSOURCE and 1,753 comparison-group students) were included in the study analyses. All students took a pretest of prerequisite knowledge and, at the end of the study year, a transfer measure of tasks drawn from international tests. Students in the POWERSOURCE group used sets of formative assessment tasks, and POWERSOURCE teachers received professional development and instructional resources. Results indicated that students with higher pretest scores tended to benefit more from the treatment than students with lower pretest scores. In addition, students in the POWERSOURCE group significantly outperformed control-group students on distributive property items, and the effect was larger as pretest scores increased. Results, limitations, and future directions are discussed.
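
The finding that the benefit grew with pretest scores is characteristic of interaction models: with a treatment-by-pretest interaction, the estimated treatment effect is a linear function of the pretest. A tiny illustration with made-up coefficients:

```python
# With an interaction model, the treatment effect varies with the pretest:
#   effect(pretest) = main_effect + interaction * pretest
# The coefficients below are invented for illustration only.
main_effect = 1.5   # hypothetical effect at the mean pretest score (centered at 0)
interaction = 0.8   # hypothetical change in effect per SD of pretest

for pretest_sd in (-1, 0, 1):
    effect = main_effect + interaction * pretest_sd
    print(f"pretest {pretest_sd:+d} SD -> estimated effect {effect:.1f}")
```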


To cite from this report, please use the following as your APA reference:

Phelan, J., Choi, K., Vendlinski, T., Baker, E. L., & Herman, J. L. (2009). The effects of POWERSOURCE intervention on student understanding of basic mathematical principles (CRESST Report 763). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).