Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots
Jia-Chen Gu; Tianda Li; Quan Liu; Zhen-Hua Ling; Zhiming Su; Si Wei; Xiaodan Zhu

Abstract
In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots. A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker change information, an important and intrinsic property of multi-turn dialogues. Furthermore, a speaker-aware disentanglement strategy is proposed to handle entangled dialogues: it selects a small number of the most important utterances as the filtered context, according to the speaker information they contain. Finally, domain adaptation is performed to incorporate in-domain knowledge into the pre-trained language model. Experiments on five public datasets show that our proposed model outperforms existing models on all metrics by large margins and achieves new state-of-the-art performance for multi-turn response selection.
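The abstract describes injecting speaker change information into BERT's input representation. Below is a minimal, hedged sketch of that idea using the HuggingFace `transformers` library: a learned per-token speaker embedding is added to the word embeddings of the concatenated context and response before BERT encodes them, and a matching score is produced from the `[CLS]` representation. The class and tensor names (`SpeakerAwareBert`, `speaker_ids`, `num_speakers`) are illustrative assumptions, not the authors' released implementation; the paper's disentanglement strategy and domain adaptation are separate steps not shown here.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class SpeakerAwareBert(nn.Module):
    """BERT with an additive per-token speaker embedding for response selection (sketch)."""

    def __init__(self, model_name="bert-base-uncased", num_speakers=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # One learned vector per speaker role, added on top of the word embeddings.
        self.speaker_embeddings = nn.Embedding(num_speakers, hidden)
        self.classifier = nn.Linear(hidden, 1)  # matching score for (context, response)

    def forward(self, input_ids, attention_mask, token_type_ids, speaker_ids):
        # Look up plain word embeddings and inject speaker information additively;
        # BERT then adds its own position and segment embeddings internally.
        word_embeds = self.bert.embeddings.word_embeddings(input_ids)
        inputs_embeds = word_embeds + self.speaker_embeddings(speaker_ids)
        outputs = self.bert(inputs_embeds=inputs_embeds,
                            attention_mask=attention_mask,
                            token_type_ids=token_type_ids)
        cls_repr = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        return self.classifier(cls_repr).squeeze(-1)  # higher = better response match


# Example usage (dummy shapes): speaker_ids marks which speaker produced each token,
# e.g. 0/1 alternating across context utterances, so the encoder sees speaker changes.
# model = SpeakerAwareBert()
# score = model(input_ids, attention_mask, token_type_ids, speaker_ids)
```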
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| conversational-response-selection-on-douban-1 | SA-BERT | MAP: 0.619, MRR: 0.659, P@1: 0.496, R10@1: 0.313, R10@2: 0.481, R10@5: 0.847 |
| conversational-response-selection-on-e | SA-BERT | R10@1: 0.704, R10@2: 0.879, R10@5: 0.985 |
| conversational-response-selection-on-rrs | SA-BERT+BERT-FP | MAP: 0.701, MRR: 0.715, P@1: 0.555, R10@1: 0.497, R10@2: 0.685, R10@5: 0.931 |
| conversational-response-selection-on-rrs-1 | SA-BERT+BERT-FP | NDCG@3: 0.674, NDCG@5: 0.753 |
| conversational-response-selection-on-ubuntu-1 | SA-BERT | R10@1: 0.855, R10@2: 0.928, R10@5: 0.983, R2@1: 0.965 |
| conversational-response-selection-on-ubuntu-3 | SA-BERT | Accuracy: 60.42 |