ProQA: Structural Prompt-based Pre-training for Unified Question Answering
Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4230–4243, Seattle, United States. Association for Computational Linguistics.

Anthology ID: 2022.naacl-main.313
Volume: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month: July
Year: 2022
Address: Seattle, United States
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 4230–4243
DOI: 10.18653/v1/2022.naacl-main.313
Bibkey: zhong-etal-2022-proqa
Code: zhongwanjun/proqa
Data: DREAM, DROP, MCTest, NarrativeQA, NewsQA, PAQ, Quoref, RACE

Abstract: Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA work mostly focuses on specific question types, knowledge domains, or reasoning skills. This specialization hinders systems from modeling commonalities between tasks and from generalizing to wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks through a single model. ProQA takes a unified structural prompt as the bridge and improves QA-centric ability through structural prompt-based pre-training. Through a structurally designed prompt-based input schema, ProQA concurrently models knowledge generalization across all QA tasks while retaining knowledge customization for each specific QA task. Furthermore, ProQA is pre-trained on a large-scale synthesized corpus formatted with structural prompts, which equips the model with commonly required QA ability. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance in full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.
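To make the idea of a "structurally designed prompt-based input schema" concrete, the sketch below assembles a single model input from discrete key-value slots. This is a hypothetical illustration only: the slot names (`[Format]`, `[Task]`, `[Question]`, `[Passage]`), the `build_structural_prompt` helper, and the example values are assumptions for exposition, not the paper's exact schema.

```python
# Hypothetical sketch of a structural prompt input in the spirit of ProQA.
# Each QA example is serialized into named slots so one model can share
# knowledge across tasks (common slots) while customizing per task
# (task-identifying slots). Slot names here are illustrative assumptions.

def build_structural_prompt(task_format, task_name, question, passage):
    """Concatenate key-value slots into a single flat input string."""
    slots = [
        ("[Format]", task_format),   # answer format, e.g. extractive
        ("[Task]", task_name),       # dataset / task identifier
        ("[Question]", question),
        ("[Passage]", passage),
    ]
    return " ".join(f"{key} {value}" for key, value in slots)

prompt = build_structural_prompt(
    task_format="extractive",
    task_name="NewsQA",
    question="Where was the conference held?",
    passage="The conference took place in Seattle, United States.",
)
print(prompt)
```

Under this kind of schema, every benchmark shares the same slot layout, so a single encoder-decoder model sees structurally uniform inputs across otherwise heterogeneous QA tasks.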