TY - GEN
T1 - MultiQG-TI
T2 - 18th Workshop on Innovative Use of NLP for Building Educational Applications, BEA 2023
AU - Wang, Zichao
AU - Baraniuk, Richard G.
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
AB - We study the new problem of automatic question generation (QG) from multi-modal sources containing images and texts, significantly expanding the scope of most of the existing work that focuses exclusively on QG from only textual sources. We propose a simple solution for our new problem, called MultiQG-TI, which enables a text-only question generator to process visual input in addition to textual input. Specifically, we leverage an image-to-text model and an optical character recognition model to obtain the textual description of the image and extract any text in the image, respectively, and then feed them together with the input texts to the question generator. We only fine-tune the question generator while keeping the other components fixed. On the challenging ScienceQA dataset, we demonstrate that MultiQG-TI significantly outperforms ChatGPT with few-shot prompting, despite having a hundred times fewer trainable parameters. Additional analyses empirically confirm the necessity of both visual and textual signals for QG and show the impact of various modeling choices. Code is available at https://rb.gy/020tw.
UR - http://www.scopus.com/inward/record.url?scp=85174499228&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85174499228&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85174499228
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 682
EP - 691
BT - BEA 2023 - 18th Workshop on Innovative Use of NLP for Building Educational Applications, Proceedings of the Workshop
A2 - Kochmar, Ekaterina
A2 - Burstein, Jill
A2 - Horbach, Andrea
A2 - Laarmann-Quante, Ronja
A2 - Madnani, Nitin
A2 - Tack, Anais
A2 - Yaneva, Victoria
A2 - Yuan, Zheng
A2 - Zesch, Torsten
PB - Association for Computational Linguistics (ACL)
Y2 - 13 July 2023
ER -