
Reports

Please note that CRESST reports were called "CSE Reports" or "CSE Technical Reports" prior to CRESST report 723.

#831 – Automatic Short Essay Scoring Using Natural Language Processing to Extract Semantic Information in the Form of Propositions
Deirdre Kerr, Hamid Mousavi, and Markus R. Iseli

Summary

The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time-consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques are inappropriate for scoring the content of an essay because they rely either on grammatical measures of quality or on machine learning techniques, neither of which identifies statements of meaning (propositions) in the text. In this report, we introduce a novel technique that uses domain-independent, deep natural language processing to automatically extract meaning from student essays in the form of propositions and to match the extracted propositions to the expected response. The empirical results indicate that our technique accurately extracts propositions from student short essays, reaching moderate agreement with human rater scores.
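The summary describes a two-step pipeline: extract propositions (statements of meaning) from a student essay, then match them against the propositions expected in a correct response. The report's actual deep NLP pipeline is not detailed in this summary; the sketch below stands in a simple dependency-parse subject-verb-object extractor (using spaCy) as a hypothetical illustration of the general idea, not the authors' method. The example essay, expected propositions, and function names are all invented for illustration.

```python
# Hypothetical sketch of proposition extraction and matching.
# NOTE: This is NOT the CRESST pipeline from report #831; it substitutes a
# basic spaCy dependency-parse (subject, verb, object) extractor to show
# the overall extract-then-match structure the summary describes.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed


def extract_propositions(text):
    """Extract rough (subject, verb, object) lemma triples as proxy propositions."""
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ == "dobj"]
            for s in subjects:
                for o in objects:
                    triples.append((s.lemma_.lower(), token.lemma_.lower(), o.lemma_.lower()))
    return triples


def match_score(student_text, expected_propositions):
    """Score an essay as the fraction of expected propositions it contains."""
    found = set(extract_propositions(student_text))
    hits = sum(1 for p in expected_propositions if p in found)
    return hits / len(expected_propositions) if expected_propositions else 0.0


if __name__ == "__main__":
    # Invented expected response and student essay for demonstration only.
    expected = [("plant", "absorb", "water"), ("root", "anchor", "plant")]
    essay = "Plants absorb water through their roots. The roots anchor the plant."
    print(extract_propositions(essay))   # [('plant', 'absorb', 'water'), ('root', 'anchor', 'plant')]
    print(match_score(essay, expected))  # 1.0
```

A real content-scoring system would need far richer handling (passives, clausal complements, negation, coreference, and paraphrase-tolerant matching); the point of the sketch is only the separation between proposition extraction and matching against an expected response.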

