
Three Papers from the Lab Accepted at AAAI 2024, a Top Conference in Artificial Intelligence

Date: December 12, 2023 | Category: News

Papers by PhD student Bobo Li and master's students Li Zheng and Yuyang Chai have been accepted at AAAI 2024, a top conference in artificial intelligence.

https://aaai.org/aaai-conference/

AAAI 2024 received more than 12,000 submissions and accepted 2,342 papers, for an acceptance rate of 23.75%. The titles and abstracts of our accepted papers are given below.


(1)Reverse Multi-Choice Dialogue Commonsense Inference with Graph-of-Thought

With the proliferation of dialogic data across the Internet, the Dialogue Commonsense Multi-choice Question Answering (DC-MCQ) task has emerged in response to the challenge of comprehending user queries and intentions. Although prevailing methodologies exhibit effectiveness in addressing single-choice questions, they encounter difficulties in handling multi-choice queries due to the heightened intricacy and informational density. In this paper, inspired by the human cognitive process of progressively excluding options, we propose a three-step Reverse Exclusion Graph-of-Thought (ReX-GoT) framework, comprising Option Exclusion, Error Analysis, and Combine Information. Specifically, ReX-GoT mimics human reasoning by gradually excluding irrelevant options and learning the reasons for option errors, so as to choose the optimal path of the GoT and ultimately infer the correct answer. By progressively integrating intricate clues, our method effectively reduces the difficulty of multi-choice reasoning and provides a novel solution for DC-MCQ. Extensive experiments on the CICERO and CICERO_v2 datasets validate the significant improvement of our approach on the DC-MCQ task. In the zero-shot setting, our model outperforms the best baseline by 17.67% in F1 score on the multi-choice task. Most strikingly, our GPT-3.5-based ReX-GoT framework achieves a remarkable 39.44% increase in F1 score.
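
For readers curious how such a reverse-exclusion loop might look in practice, below is a minimal, hypothetical sketch of the three steps (Option Exclusion, Error Analysis, Combine Information) driven by a generic text-in/text-out LLM wrapper `llm`. The function name, prompts, and string matching are illustrative assumptions, not the paper's implementation.

```python
from typing import Callable, List

def rex_got_answer(
    llm: Callable[[str], str],   # any text-in/text-out LLM wrapper (assumption)
    dialogue: str,
    question: str,
    options: List[str],
) -> List[str]:
    """Hypothetical sketch of the three ReX-GoT steps for a multi-choice question."""
    remaining = list(options)

    # Step 1: Option Exclusion -- ask the model which options are clearly irrelevant.
    exclusion = llm(
        f"Dialogue:\n{dialogue}\nQuestion: {question}\nOptions:\n"
        + "\n".join(remaining)
        + "\nList the options that are clearly irrelevant, one per line."
    )
    excluded = [opt for opt in remaining if opt in exclusion]

    # Step 2: Error Analysis -- collect reasons why each excluded option is wrong.
    reasons = []
    for opt in excluded:
        reasons.append(llm(
            f"Dialogue:\n{dialogue}\nQuestion: {question}\n"
            f"Briefly explain why this option is wrong: {opt}"
        ))
        remaining.remove(opt)

    # Step 3: Combine Information -- merge the accumulated clues and pick the answers.
    answer = llm(
        f"Dialogue:\n{dialogue}\nQuestion: {question}\nRemaining options:\n"
        + "\n".join(remaining)
        + "\nClues from excluded options:\n"
        + "\n".join(reasons)
        + "\nReturn every remaining option that answers the question, one per line."
    )
    return [opt for opt in remaining if opt in answer]
```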


(2)Harnessing Holistic Discourse Features and Triadic Interaction for Sentiment Quadruple Extraction in Dialogues

Dialogue Aspect-based Sentiment Quadruple (DiaASQ) is a newly emergent task aiming to extract the sentiment quadruple (i.e., targets, aspects, opinions, and sentiments) from conversations. While showing promising performance, the prior DiaASQ approach unfortunately falls short on two key cruxes of the task: insufficient modeling of discourse features and a lack of explicit interaction modeling for quadruple extraction, which hinders further improvement. To this end, we introduce a novel framework that not only capitalizes on comprehensive discourse feature modeling, but also captures the intrinsic interactions for optimal quadruple extraction. On the one hand, drawing upon multiple discourse features, our approach constructs a token-level heterogeneous graph and enhances token interactions through a heterogeneous attention network. On the other hand, we propose a novel triadic scorer that strengthens weak token relations within a quadruple, thereby enhancing the cohesion of quadruple extraction. Experimental results on the DiaASQ benchmark show that our model significantly outperforms existing baselines on both the English and Chinese datasets.
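
As a rough illustration of the triadic-scorer idea, the toy PyTorch module below assigns a compatibility score to a (target, aspect, opinion) token triple from their hidden states. The hidden size, MLP layout, and interface are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TriadicScorer(nn.Module):
    """Toy triadic scorer: maps a (target, aspect, opinion) token triple
    to a single compatibility score (illustrative sketch only)."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, h_target, h_aspect, h_opinion):
        # Each input has shape (batch, hidden); concatenating the three role
        # representations lets the MLP judge how well they cohere as one quadruple.
        triple = torch.cat([h_target, h_aspect, h_opinion], dim=-1)
        return self.mlp(triple).squeeze(-1)

# Example: score four candidate triples from random 768-dim token states.
scorer = TriadicScorer(hidden=768)
h = torch.randn(4, 768)
print(scorer(h, h, h).shape)  # torch.Size([4])
```

In this sketch, a high score would mark a candidate triple as cohesive enough to be assembled, together with its sentiment, into a final quadruple.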


(3)Compositional Generalization for Multi-label Text Classification: A Data-Augmentation Approach

Multi-label text classification identifies multiple labels associated with a given input text. This task has wide applications in various fields, such as sentiment analysis, article subject identification, and movie genre classification. Despite significant advancements, whether existing multi-label classification models can generalize to novel and seldom-encountered complex concepts, which are compositions of elementary ones, remains underexplored. This study presents the first investigation into this overlooked aspect. By creating a unique data split across three benchmarks, we probe the compositional generalization ability of existing multi-label text classification models. Our findings reveal that these models consistently struggle to generalize to compositional concepts infrequently encountered during training, leading to inferior classification performance on texts with new label compositions. To address this problem, we introduce a data augmentation method that leverages two innovative generation models designed to enhance the models' capacity to generalize compositionally. Encouragingly, our experiments demonstrate that our data augmentation approach significantly improves the performance of classification models on our benchmarks crafted to assess compositional generalization capabilities. Moreover, both generation models outperform other baseline methods typically employed in augmentation, resulting in a more substantial improvement in classification performance.
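
The compositional data split can be pictured as holding out certain label combinations entirely from training, so a classifier is tested on label compositions it has never seen together. The toy function below sketches this idea; the paper's actual split protocol across the three benchmarks may differ.

```python
import random
from typing import FrozenSet, List, Tuple

Example = Tuple[str, FrozenSet[str]]  # (text, set of labels)

def compositional_split(
    examples: List[Example], heldout_ratio: float = 0.2, seed: int = 0
) -> Tuple[List[Example], List[Example]]:
    """Reserve some multi-label *combinations* entirely for the test set,
    so a classifier must generalize to unseen label compositions (toy sketch)."""
    rng = random.Random(seed)
    combos = sorted({labels for _, labels in examples if len(labels) > 1}, key=sorted)
    k = max(1, int(heldout_ratio * len(combos)))
    heldout = set(rng.sample(combos, k))
    train = [(text, labels) for text, labels in examples if labels not in heldout]
    test = [(text, labels) for text, labels in examples if labels in heldout]
    return train, test

# Example: one of the two multi-label compositions is held out entirely for testing.
data = [
    ("great camera, poor battery", frozenset({"camera", "battery"})),
    ("love the screen", frozenset({"screen"})),
    ("screen is dim and battery drains fast", frozenset({"battery", "screen"})),
]
train, test = compositional_split(data, heldout_ratio=0.5)
```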


