TY - JOUR
T1 - Results of the BioASQ tasks of the question answering lab at CLEF 2015
AU - Balikas, Georgios
AU - Kosmopoulos, Aris
AU - Krithara, Anastasia
AU - Paliouras, Georgios
AU - Kakadiaris, Ioannis
N1 - Funding Information:
The third edition of BioASQ is supported by a conference grant from the NIH/NLM (number 1R13LM012214-01) and sponsored by the companies Viseo and Atypon.
PY - 2015
Y1 - 2015
N2 - The goal of the BioASQ challenge is to push research towards highly precise biomedical information access systems. We aim to promote systems and approaches that are able to deal with the whole diversity of the Web, especially for, but not restricted to, the context of biomedicine. The third challenge consisted of two tasks: semantic indexing and question answering. 59 systems from 18 different teams participated in the semantic indexing task (Task 3a). The question answering task was further subdivided into two phases. 24 systems from 9 different teams participated in the annotation phase (Task 3b-phase A), while 26 systems from 10 different teams participated in the answer generation phase (Task 3b-phase B). Overall, the best systems were able to outperform the strong baselines provided by the organizers. In this paper, we present the data used during the challenge as well as the technologies used by the participants.
AB - The goal of the BioASQ challenge is to push research towards highly precise biomedical information access systems. We aim to promote systems and approaches that are able to deal with the whole diversity of the Web, especially for, but not restricted to, the context of biomedicine. The third challenge consisted of two tasks: semantic indexing and question answering. 59 systems from 18 different teams participated in the semantic indexing task (Task 3a). The question answering task was further subdivided into two phases. 24 systems from 9 different teams participated in the annotation phase (Task 3b-phase A), while 26 systems from 10 different teams participated in the answer generation phase (Task 3b-phase B). Overall, the best systems were able to outperform the strong baselines provided by the organizers. In this paper, we present the data used during the challenge as well as the technologies used by the participants.
UR - http://www.scopus.com/inward/record.url?scp=84982862078&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84982862078&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:84982862078
VL - 1391
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
SN - 1613-0073
T2 - 16th Conference and Labs of the Evaluation Forum, CLEF 2015
Y2 - 8 September 2015 through 11 September 2015
ER -