The website is still under testing.
JEC-QA is a question-answering dataset collected from the National Judicial Examination of China. It contains 26,365 multiple-choice questions in total, covering both single-answer and multiple-answer questions. The task is to predict the correct answers to each question using the question itself and relevant legal articles. Doing well on JEC-QA therefore requires both retrieving the relevant articles and reasoning over them to answer.
For more details, please refer to the paper *JEC-QA: A Legal-Domain Question Answering Dataset*.
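As a concrete illustration, below is a minimal, hypothetical loader sketch. It assumes the questions are distributed as JSON Lines with `statement`, `option_list`, and `answer` fields; these field names are an assumption, so check the actual download for the exact schema.

```python
import json

def load_questions(path):
    """Load JEC-QA-style questions from a JSON Lines file.

    Assumed (hypothetical) schema per line -- verify against the
    actual release before relying on these field names:
      "statement"   : the question text
      "option_list" : dict mapping option labels ("A".."D") to option text
      "answer"      : list of correct option labels, e.g. ["A", "C"]
    """
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```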
- To use JEC-QA, please visit this website.
- To evaluate your models on the test set, please submit them to CodaLab.
- To view the leaderboard, please visit this website.
- To be included in the leaderboard, please visit this website.
We have implemented several baselines for the evaluation of JEC-QA. The experimental settings can be found in the paper, and the implementation is available here. Note that the numbers reported here may differ from those in the paper, as we are re-running all experiments for submission to CodaLab.
Accuracy (%) on KD-Questions (knowledge-driven), CA-Questions (case-analysis), and all questions:

| Model | KD-Questions (Single) | KD-Questions (All) | CA-Questions (Single) | CA-Questions (All) | All (Single) | All (All) |
|---|---|---|---|---|---|---|
| BiDAF | 39.24 | 21.55 | 39.23 | 23.83 | 38.71 | 21.84 |
| HAF | 36.54 | 21.70 | 43.55 | 24.16 | 41.60 | 21.78 |
| CoMatch | 23.33 | 23.33 | 23.33 | 23.33 | 23.33 | 23.33 |
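As we read the paper's setup, the Single columns report accuracy on single-answer questions only, while the All columns score every question with an all-or-nothing exact match over the predicted option set (the official evaluation script is authoritative if it differs). A minimal sketch of that metric:

```python
def exact_match_accuracy(predictions, golds):
    """Fraction of questions whose predicted option set equals the gold
    option set exactly; no partial credit, so missing one option of a
    multi-answer question scores zero for that question."""
    correct = sum(1 for pred, gold in zip(predictions, golds)
                  if set(pred) == set(gold))
    return correct / len(golds)

# One exact match out of two questions -> 0.5
assert exact_match_accuracy([["A"], ["B", "C"]], [["A"], ["B", "D"]]) == 0.5
```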
Please ask questions by opening issues on GitHub.
If you use JEC-QA, please cite it with the following BibTeX entry:
    @inproceedings{zhong2019jec,
      title={JEC-QA: A Legal-Domain Question Answering Dataset},
      author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong},
      booktitle={Proceedings of AAAI},
      year={2020},
    }