TY - GEN
T1 - Dealbreaker: A Nonlinear Latent Variable Model for Educational Data
T2 - 33rd International Conference on Machine Learning, ICML 2016
AU - Lan, Andrew
AU - Goldstein, Tom
AU - Baraniuk, Richard
AU - Studer, Christoph
PY - 2016
Y1 - 2016
AB - Statistical models of student responses on assessment questions, such as those in homeworks and exams, enable educators and computer-based personalized learning systems to gain insights into students' knowledge using machine learning. Popular student-response models, including the Rasch model and item response theory models, represent the probability of a student answering a question correctly using an affine function of latent factors. While such models can accurately predict student responses, their ability to interpret the underlying knowledge structure (which is certainly nonlinear) is limited. In response, we develop a new, nonlinear latent variable model that we call the dealbreaker model, in which a student's success probability is determined by their weakest concept mastery. We develop efficient parameter inference algorithms for this model using novel methods for nonconvex optimization. We show that the dealbreaker model achieves comparable or better prediction performance as compared to affine models with real-world educational datasets. We further demonstrate that the parameters learned by the dealbreaker model are interpretable: they provide key insights into which concepts are critical (i.e., the "dealbreaker") to answering a question correctly. We conclude by reporting preliminary results for a movie-rating dataset, which illustrate the broader applicability of the dealbreaker model.
UR - http://www.scopus.com/inward/record.url?scp=84997693785&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84997693785&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84997693785
T3 - 33rd International Conference on Machine Learning, ICML 2016
SP - 438
EP - 447
BT - 33rd International Conference on Machine Learning, ICML 2016
A2 - Balcan, Maria Florina
A2 - Weinberger, Kilian Q.
PB - International Machine Learning Society (IMLS)
Y2 - 19 June 2016 through 24 June 2016
ER -