TY - GEN
T1 - qDKT
T2 - 13th International Conference on Educational Data Mining, EDM 2020
AU - Sonkar, Shashank
AU - Waters, Andrew E.
AU - Lan, Andrew S.
AU - Grimaldi, Phillip J.
AU - Baraniuk, Richard G.
N1 - Funding Information:
This work was supported by NSF grants CCF-1911094, IIS-1838177, IIS-1730574, DRL-1631556, IUSE-1842378, NSF-1937134; ONR grants N00014-18-12571 and N00014-17-1-2551; AFOSR grant FA9550-18-1-0478; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
Publisher Copyright:
© 2020 Proceedings of the 13th International Conference on Educational Data Mining, EDM 2020. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner’s acquisition of skills over time by examining the learner’s performance on questions related to those skills. A practical limitation in most existing KT models is that all questions nested under a particular skill are treated as equivalent observations of a learner’s ability, which is an inaccurate assumption in real-world educational scenarios. To overcome this limitation, we introduce qDKT, a variant of DKT that models every learner’s success probability on individual questions over time. qDKT incorporates graph Laplacian regularization to smooth predictions under each skill, which is particularly useful when the number of questions in the dataset is large. qDKT also uses an initialization scheme inspired by the fastText algorithm, which has found great success in a variety of language modeling tasks. Our experiments on several real-world datasets show that qDKT achieves state-of-the-art performance in predicting learner outcomes. Thus, qDKT can serve as a simple, yet tough-to-beat, baseline for new question-centric KT models.
AB - Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner’s acquisition of skills over time by examining the learner’s performance on questions related to those skills. A practical limitation in most existing KT models is that all questions nested under a particular skill are treated as equivalent observations of a learner’s ability, which is an inaccurate assumption in real-world educational scenarios. To overcome this limitation, we introduce qDKT, a variant of DKT that models every learner’s success probability on individual questions over time. qDKT incorporates graph Laplacian regularization to smooth predictions under each skill, which is particularly useful when the number of questions in the dataset is large. qDKT also uses an initialization scheme inspired by the fastText algorithm, which has found great success in a variety of language modeling tasks. Our experiments on several real-world datasets show that qDKT achieves state-of-the-art performance in predicting learner outcomes. Thus, qDKT can serve as a simple, yet tough-to-beat, baseline for new question-centric KT models.
UR - http://www.scopus.com/inward/record.url?scp=85174805109&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85174805109&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85174805109
T3 - Proceedings of the 13th International Conference on Educational Data Mining, EDM 2020
SP - 677
EP - 681
BT - Proceedings of the 13th International Conference on Educational Data Mining, EDM 2020
A2 - Rafferty, Anna N.
A2 - Whitehill, Jacob
A2 - Romero, Cristobal
A2 - Cavalli-Sforza, Violetta
PB - International Educational Data Mining Society
Y2 - 10 July 2020 through 13 July 2020
ER -