Abstract
An important, yet largely unstudied, problem in student data analysis is to detect misconceptions from students' responses to open-response questions. Misconception detection enables instructors to deliver more targeted feedback on the misconceptions exhibited by many students in their class, thus improving the quality of instruction. In this paper, we propose a new natural language processing (NLP) framework to detect the common misconceptions among students' textual responses to open-response, short-answer questions. We introduce a probabilistic model for students' textual responses involving misconceptions and experimentally validate it on a real-world student-response dataset. Preliminary experimental results show that our framework excels at classifying whether a response exhibits one or more misconceptions. More importantly, it can also automatically detect the common misconceptions exhibited across responses from multiple students to multiple questions; this is especially important at large scale, since instructors will no longer need to manually specify all possible misconceptions that students might exhibit.
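The abstract does not detail the probabilistic model itself. As a purely hypothetical sketch (not the paper's method), the snippet below illustrates the general idea of surfacing common misconceptions without a pre-specified list: grouping lexically similar incorrect short-answer responses, so that a large cluster hints at a shared misconception. All function names and the toy data are invented for illustration; a real system would use a learned probabilistic model rather than bag-of-words cosine similarity.

```python
# Hypothetical illustration: cluster incorrect responses by lexical
# similarity to surface candidate misconception groups.
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words vector over lowercase whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_responses(responses, threshold=0.5):
    """Greedy single-pass clustering: each response joins the first
    cluster whose representative (first member) it resembles."""
    clusters = []
    for r in responses:
        v = vectorize(r)
        for c in clusters:
            if cosine(v, vectorize(c[0])) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

# Toy incorrect responses to "Why do seasons occur?"
responses = [
    "because the earth is closer to the sun in summer",
    "the earth moves closer to the sun during summer",
    "the tilt of the moon blocks sunlight in winter",
]
clusters = cluster_responses(responses)
# The two distance-based answers land in one cluster, hinting at a
# shared "distance from the sun causes seasons" misconception.
```

In this toy run the two "closer to the sun" responses group together while the third forms its own cluster; an instructor would then inspect each cluster rather than enumerate misconceptions up front.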
Original language | English (US) |
---|---|
Title of host publication | L@S 2017 - Proceedings of the 4th (2017) ACM Conference on Learning at Scale |
Publisher | Association for Computing Machinery, Inc |
Pages | 245-248 |
Number of pages | 4 |
ISBN (Electronic) | 9781450344500 |
State | Published - Apr 12 2017 |
Event | 4th Annual ACM Conference on Learning at Scale, L@S 2017 - Cambridge, United States |
Duration | Apr 20 2017 → Apr 21 2017 |
Keywords
- Learning analytics
- Misconception detection
- Natural language processing
ASJC Scopus subject areas
- Computer Networks and Communications
- Education
- Software
- Computer Science Applications