Test-size reduction for concept estimation

Divyanshu Vats, Christoph Studer, Andrew S. Lan, Lawrence Carin, Richard G. Baraniuk

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Consider a large database of questions that assess the knowledge of learners on a range of different concepts. In this paper, we study the problem of maximizing the estimation accuracy of each learner’s knowledge about a concept while minimizing the number of questions each learner must answer. We refer to this problem as test-size reduction (TeSR). Using the SPARse Factor Analysis (SPARFA) framework, we propose two novel TeSR algorithms. The first algorithm is nonadaptive and uses graded responses from a prior set of learners. This algorithm is appropriate when the instructor has access to only the learners’ responses after all questions have been solved. The second algorithm adaptively selects the “next best question” for each learner based on their graded responses to date. We demonstrate the efficacy of our TeSR methods using synthetic and educational data.
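The paper's second algorithm adaptively picks the "next best question" from each learner's graded responses so far. As a rough illustration of this style of adaptive testing (not the paper's SPARFA-based method), the sketch below uses a simple one-parameter logistic (Rasch) response model: it re-estimates a single ability parameter by maximum likelihood after each response, then selects the unanswered question with the highest Fisher information. The difficulty values and response data are made up for the example.

```python
import math

def p_correct(ability, difficulty):
    # Rasch (1PL) response model: probability of a correct answer
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses, difficulties, steps=50, lr=0.5):
    # Crude maximum-likelihood estimate of a scalar ability
    # parameter via gradient ascent on the log-likelihood.
    theta = 0.0
    for _ in range(steps):
        grad = sum(y - p_correct(theta, difficulties[q]) for q, y in responses)
        theta += lr * grad
    return theta

def next_best_question(theta, difficulties, answered):
    # Pick the unanswered question with maximal Fisher information;
    # for the 1PL model, I(theta) = p * (1 - p).
    best_q, best_info = None, -1.0
    for q, d in enumerate(difficulties):
        if q in answered:
            continue
        p = p_correct(theta, d)
        info = p * (1.0 - p)
        if info > best_info:
            best_q, best_info = q, info
    return best_q

# Hypothetical question bank and graded responses (question index, 0/1 grade)
difficulties = [-2.0, -0.5, 0.0, 0.8, 2.5]
responses = [(0, 1), (4, 0)]
theta = estimate_ability(responses, difficulties)
q = next_best_question(theta, difficulties, answered={0, 4})
```

Under this model, the most informative question is the one whose difficulty is closest to the current ability estimate, which is why adaptive tests can reach a target estimation accuracy with far fewer questions than a fixed test.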

Original language: English (US)
Title of host publication: Proceedings of the 6th International Conference on Educational Data Mining, EDM 2013
Editors: Sidney K. D'Mello, Rafael A. Calvo, Andrew Olney
Publisher: International Educational Data Mining Society
ISBN (Electronic): 9780983952527
State: Published - Jan 1 2013
Event: 6th International Conference on Educational Data Mining, EDM 2013 - Memphis, United States
Duration: Jul 6 2013 - Jul 9 2013


Other: 6th International Conference on Educational Data Mining, EDM 2013
Country/Territory: United States


Keywords

  • Adaptive and non-adaptive testing
  • Learning analytics
  • Maximum likelihood estimation
  • Sparse factor analysis

ASJC Scopus subject areas

  • Computer Science Applications
  • Information Systems


