Khoi Nguyen Tran, Jey Han Lau, et al.
EDM 2018
As education becomes increasingly digitized and intelligent tutoring systems gain commercial prominence, scalable assessment generation becomes a critical requirement for improving learning outcomes. Assessments provide a way to measure learners’ level of understanding and the difficulties they face, and to personalize their learning. Prior efforts in different areas have each addressed separate parts of this problem. This paper is a first effort to bring together techniques from diverse areas such as knowledge representation and reasoning, machine learning, inference on graphs, and pedagogy to generate assessments automatically and at scale. Specifically, we address the problem of Multiple Choice Question (MCQ) generation for vocabulary learning assessments, catered specifically to young learners (YL). We evaluate the efficacy of our approach by asking human annotators to rate the relevance of the questions generated by the system. We also compare our approach against a baseline model and find that the MCQs generated by our system are substantially more usable than those of the baseline.
Anjali Singh, Ruhi Sharma Mittal, et al.
AAAI 2019
Michael Desmond, Honglei Guo, et al.
IBM J. Res. Dev.
Rosario Uceda-Sosa, Nandana Mihindukulasooriya, et al.
CODS-COMAD 2022