Reports
Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.
#702 – English Language Learners and Math Achievement: A Study of Opportunity to Learn and Language Accommodation
Jamal Abedi, Mary Courtney, Seth Leon, Jenny Kao, and Tarek Azzam
CSE Report 702, 2006
Summary
This study investigated how students’ opportunity to learn (OTL) in the classroom, two language-related testing accommodations, and English language learner (ELL) status interact to affect mathematics performance. Hierarchical linear modeling was employed to investigate three class-level components of OTL, two language accommodations, and ELL status. The three class-level components of OTL were: (1) student report of content coverage; (2) teacher content knowledge; and (3) class prior math ability (the average of students’ Grade 7 math scores). A total of 2,321 Grade 8 students were administered one of three versions of an algebra test: a standard version with no accommodation, a dual-language (English and Spanish) version, or a linguistically modified version. These students’ teachers were administered a teacher content knowledge measure. Additionally, 369 of the students were observed for one class period to record student-teacher interactions. Students’ scores from the prior year’s state mathematics and reading achievement tests, along with other background information, were also collected.
Results indicated that all three class-level components of OTL were significantly related to math performance, after controlling for prior math ability at the individual student level. Class prior math ability had the strongest effect on math performance. Results also indicated that teacher content knowledge had a significant differential effect on the math performance of students grouped by a quick reading proficiency measure, but not by students’ ELL status or by their reading achievement test percentile ranking. Results also indicated that the two language accommodations did not impact students’ math performance. Additionally, results suggested that, in general, ELL students reported less content coverage than their non-ELL peers, and they were in classes of overall lower math ability than their non-ELL peers.
While it is understandable why a student’s performance in seventh grade strongly determines the content she or he receives in eighth grade, there is some evidence in this study that students of lower language proficiency can learn algebra and demonstrate algebra knowledge and skills when they are provided with sufficient content and skills delivered by proficient math instructors in a classroom of students who are proficient in math.
#701 – Studying the Sensitivity of Inferences to Possible Unmeasured Confounding Variables in Multisite Evaluations
Michael Seltzer, Jinok Kim, and Ken Frank
CSE Report 701, 2006
Summary
In multisite evaluation studies, questions of primary interest often focus on whether particular facets of implementation or other aspects of classroom or school environments are critical to a program’s success. However, differences in how teachers implement programs can depend on an array of factors, including differences in their training and experience, in the prior preparation of their students, and in the degree of support they receive from school administrators. A crucial implication is that in studying connections between various aspects of implementation and the effectiveness of programs, we need to be alert to factors that may be confounded with differences in implementation. Despite our best efforts to anticipate and measure possible confounding variables, teachers who differ in the quality and frequency with which they implement various program practices, use particular program materials, and the like, may differ in important ways that have not been measured, giving rise to possible hidden bias. In this paper, we extend Frank’s (2000) work on assessing the impact of omitted confounding variables on coefficients of interest in regression settings to applications of hierarchical models (HMs) in multisite settings in which interest centers on testing whether certain aspects of implementation are critical to a program’s success. We provide a detailed illustrative example using data from a study of the effects of reform-minded instructional practices in mathematics (Gearhart et al., 1999; Saxe et al., 1999).
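The sensitivity logic being extended here can be illustrated, in the simple correlation/regression case, by Frank’s (2000) impact threshold for a confounding variable (ITCV). The sketch below uses hypothetical numbers and does not reproduce the report’s hierarchical-model extension, which is considerably more involved:

```python
import math

def adjusted_r(r_xy, r_xcv, r_ycv):
    """Correlation between predictor x and outcome y after partialling out
    a confounder cv, given cv's correlations with x and with y."""
    return (r_xy - r_xcv * r_ycv) / math.sqrt((1 - r_xcv**2) * (1 - r_ycv**2))

def itcv(r_xy, r_crit):
    """Impact threshold of a confounding variable: the product
    r_xcv * r_ycv (assuming r_xcv == r_ycv) that an omitted confounder
    would need in order to drive the observed r_xy down to r_crit."""
    return (r_xy - r_crit) / (1 - r_crit)

# Hypothetical values: an observed implementation-outcome correlation of .40
# and a critical value of .15 for significance at the study's sample size.
print(round(itcv(0.40, 0.15), 3))          # impact needed to invalidate the inference
print(round(adjusted_r(0.40, 0.5, 0.5), 3))  # r_xy after a confounder correlated .5 with both
```

A large ITCV relative to plausible confounder correlations suggests the inference is robust; a small one signals that even a modest unmeasured variable could overturn it.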
#700 – Consequences and Validity of Performance Assessment for English Language Learners: Conceptualizing & Developing Teachers’ Expertise in Academic Language
Zenaida Aguirre-Munoz, Jae Eun Parks, Aprile Benner, Anastasia Amabisca, Christy Kim Boscardin
CSE Report 700, 2006
Summary
The purpose of this report is to provide the theoretical rationale for the approach to academic language adopted to meet the research goals of the second phase of this project, and to report results from the pilot training program developed to create the conditions under which varying levels of direct instruction in academic language occur. The challenge was to find an approach to the instruction of academic language that would serve a dual purpose. The first purpose was to build teachers' understanding of the key components of academic language in order to improve their instructional decision making. The second was to provide teachers with tools for giving ELLs direct instruction in academic language, thereby supporting their English language development. After careful review of the literature, we found that the functional linguistic approach to language development best met these goals. We developed training modules on writing instruction based on the functional linguistic approach, as it has the strongest potential for providing explicit instruction to support ELL students' writing development. Overall, teachers responded positively to the functional linguistic approach and were optimistic about its potential for improving ELL writing development. Responses to the pre- and post-institute surveys revealed that teachers felt better prepared to evaluate student writing from a functional linguistic perspective and to develop instructional plans that targeted specific learning needs.
#699 – Characterizing Trainees in the Cognitive Phase using the Human Performance Knowledge Mapping Tool (HPKMT) and Microgenetic Analysis
Girlie C. Delacruz, Gregory K. W. K. Chung, and William L. Bewley
CSE Report 699, 2006
Summary
Models of skill acquisition suggest that learners go through three phases: (1) the cognitive phase, when instruction is most effective, errors are frequent, and performance is inconsistent; (2) the associative phase, when the learner begins to integrate the parts of the process or domain as a whole and errors are gradually eliminated; and (3) the autonomous phase, when the process becomes more automatic and less moderated by cognition, and there is less interference from outside distractors. In this paper, we examine the use of the CRESST Human Performance Knowledge Mapping Tool (HPKMT) to characterize learners in the cognitive phase, using Marine Corps 2nd Lieutenants going through entry-level marksmanship training. The capability to characterize learners may direct the level of instruction or practice they are given. The HPKMT is designed to measure a learner's knowledge of a domain: learners express their understanding by graphically depicting the relations among concepts. Further, the microgenetic analysis methodology provides a finer-grained picture of the learning process by using repeated observations throughout the period of change, yielding a detailed analysis of how and when change occurs. By measuring Marines' knowledge of marksmanship during classroom training, dry-fire practice, live-fire practice, and after qualification, we obtain observations of their performance on the HPKMT at key stages of their learning. Our results suggest that the HPKMT can identify four types of learners in the cognitive phase: (1) growing, (2) declining, (3) stable, and (4) inconsistent. The HPKMT is also sensitive to instruction: mean expert content scores on the knowledge maps differ significantly between successive days of instruction.
#698 – Celebrating 20 Years of Research on Educational Assessment: Proceedings of the 2005 CRESST Conference
Anne Lewis
CSE Report 698, 2006
Summary
The 2005 CRESST conference marked the 20th year of work on critically important accountability topics by the UCLA institution, "a tremendous accomplishment for a research center," according to Aimee Dorr, dean of the Graduate School of Education and Information Studies. In her welcoming remarks, Dean Dorr described why CRESST has achieved such longevity. The center is "independent, very lively, grounded in practice, and very forward looking, with many top accountability experts from around the nation," said Dorr, "interested in new technologies and helping to shape the future of education." She also noted that although it was "good fortune" for the center's senior partner to be located at UCLA, "it is a partnership throughout the country, and one that enriches us here as the partners do on the national scene."
The CRESST anniversary was an opportunity for the conference program to focus on achievements in the use of assessment to improve student learning. The two-day gathering described many of the lessons learned from a century of testing. The discussions also featured the newest CRESST initiative, POWERSOURCE, supported by a $10 million grant from the Institute of Education Sciences at the U.S. Department of Education to develop formative mathematics assessments in the middle grades to improve student performance and learning.
#697 – The Power of Big Ideas in Mathematics Education: Development and Pilot Testing of POWERSOURCE Assessments
David Niemi, Julia Vallone, and Terry Vendlinski
CSE Report 697, 2006
Summary
The characteristics of expert knowledge (interconnectedness, understanding, and ability to transfer) are inextricably linked, a point that is critically important for educators and constitutes a major theme of this paper. In this paper we explore how an analysis of the architecture of expert knowledge can inform the development of assessments to help teachers move students toward greater expertise in mathematics, and we present examples of such assessments. We also review student responses and preliminary results from pilot tests of assessments administered in sixth-grade classes in a large urban school district. Our preliminary analyses suggest that an assessment strategy based on the structure of mathematical knowledge can reveal deficiencies in student understanding of and ability to apply fundamental concepts of pre-algebra, and has the potential to help teachers remediate those deficiencies.
#696 – Measuring Teachers' Mathematical Knowledge
Margaret Heritage and Terry Vendlinski
CSE Report 696, 2006
Summary
Teachers' knowledge of mathematics is pivotal to their capacity to provide effective mathematics instruction and to their ability to assess student learning (Ball, Hill, & Bass, 2005; Ma, 1999; Schifter, 1999). The National Council of Teachers of Mathematics (NCTM, 2000) makes it clear that teachers need knowledge of the whole domain as well as knowledge about the important ideas that are central to their grade level. POWERSOURCE is expected, through professional development and job aids, to influence teachers' pedagogical content knowledge and assessment practices. To gauge such effects we have developed teacher measures that focus on three key mathematical principles that are central to POWERSOURCE: the distributive property, solving equations, and rational number equivalence.
#695 – A Distance Learning Testbed
William L. Bewley, Gregory K. W. K. Chung, Jin-Ok Kim, John J. Lee, and Farzad Saadat
CSE Report 695, 2006
Summary
Because of the great promise of distance learning for delivering cost-effective instruction, there is great interest in determining whether it actually is effective and, more interestingly, in determining what variables of design and implementation make it more or less effective. Unfortunately, much of the research has been based on simple comparisons of distance learning to the "traditional" method of instruction rather than on examination of the variables influencing the effectiveness of distance learning. In addition to not manipulating or controlling important independent variables, the dependent measures used in such studies are often inappropriate, ranging from the obviously inadequate (e.g., the "smile test"), to standardized tests that have known psychometric properties but are not aligned with course objectives, to homegrown measures that appear to be aligned with instructional objectives but are of unknown reliability and validity. We have addressed the problem of limitations in dependent measures with research on measures of student achievement based on families of cognitive demands, and we have developed assessment models for these families that can be used to design assessments across a variety of subject matters. We have also developed computer-based assessment tools implementing the models, including tools for data collection, scoring, analysis and reporting, assessment authoring, and knowledge acquisition and representation. With support from the Office of Naval Research we have developed a distance learning testbed to apply these models and tools to distance learning research and evaluation. This paper describes our summer 2004 testbed implementation and presents three examples of research conducted in the testbed on methods for assessing human performance via distance learning technologies.
#694 – Learning Complex Cognitive Skills With an Interactive Job Aid
Terry P. J. F. Vendlinski, Allen Munro, Quentin A. Pizzini, William L. Bewley, Gregory K. W. K. Chung, Gale Stuart, and Girlie C. Delacruz
CSE Report 694, 2006
Summary
Engineering Duty Officers (EDOs) in the U.S. Navy manage large development and procurement processes. Their initial training is provided in a six-week EDO Basic Course at Port Hueneme, California. The students, who have advanced degrees in one or more engineering disciplines, must learn to make complex decisions that incorporate the uncertainty of future events and to convincingly present their acquisition recommendations to national leaders. Expected value theory provides one framework for making such complex decisions: students compute an expected value for each alternative choice by summing the utilities of all the potential consequences of that choice, weighting each utility by the estimated likelihood of the corresponding outcome. Using the iRides simulation-training system, we developed a software application that provides a simple interface for examining and presenting the expected values of choices in an EDO context. To support the use of this software tool, the EDO Decision Aid, in the context of the class, representations of specific alternative solutions to a class problem were developed and presented to student teams working on that problem. The EDO Decision Aid was designed to record student actions, including the selection of alternatives, the setting of utility values and estimated probabilities, and the setting of decision thresholds.
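The expected value computation described above can be sketched in a few lines; the alternatives, probabilities, and utilities below are hypothetical illustrations, not materials from the EDO course or the Decision Aid:

```python
def expected_value(outcomes):
    """Expected value of a choice: the sum of each potential consequence's
    utility, weighted by its estimated probability of occurring."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# Hypothetical acquisition decision: each alternative is a list of
# (estimated probability, utility) pairs for its possible consequences.
buy_now = [(0.7, 100.0), (0.3, -50.0)]  # likely success, but costly failure
wait    = [(0.9, 60.0), (0.1, 0.0)]     # safer, lower payoff

print(expected_value(buy_now))  # approx. 55.0
print(expected_value(wait))     # approx. 54.0
```

Under these numbers the riskier alternative edges out the safer one, illustrating how the decision can hinge on small changes to the estimated probabilities, which is exactly what the Decision Aid lets students explore.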
#693 – Linking Assessment and Instruction Using Ontologies
Gregory K. W. K. Chung, Girlie C. Delacruz, Gary B. Dionne, William L. Bewley
CSE Report 693, 2006
Summary
In this study we report on a test of a method that uses ontologies to individualize instruction by directly linking assessment results to the delivery of relevant content. Our sample comprised 2nd Lieutenants undergoing entry-level training on rifle marksmanship. The approach appears feasible and promising: the Bayesian network appeared to be successful in identifying knowledge gaps, and relevant, targeted content was served to Marines. Learning appeared to occur at a faster rate over time for Marines who received targeted instruction than for Marines in a control group. Implications are discussed.
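The report does not reproduce the network itself. As a minimal illustration of how a Bayesian update can flag a knowledge gap from item responses, consider a single-concept sketch; all parameters are hypothetical, in the spirit of Bayesian knowledge tracing rather than the study's actual model:

```python
def posterior_knows(prior, slip, guess, correct):
    """Posterior probability the learner knows a concept after one item
    response, via Bayes' rule: `slip` is P(wrong | knows) and `guess`
    is P(right | doesn't know)."""
    if correct:
        num = prior * (1 - slip)
        den = num + (1 - prior) * guess
    else:
        num = prior * slip
        den = num + (1 - prior) * (1 - guess)
    return num / den

# Hypothetical marksmanship concept: 50% prior belief, slip=0.1, guess=0.2.
p = 0.5
for correct in [False, False, True]:  # two misses, then a hit
    p = posterior_knows(p, slip=0.1, guess=0.2, correct=correct)
# A low posterior flags a knowledge gap, triggering delivery of the
# content linked to that concept in the ontology.
print(round(p, 3))
```

In a full network, evidence also propagates to related concepts through the ontology's links, so one response can sharpen beliefs about several knowledge components at once.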

