Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems
Zhenpeng Su; Xing Wu; Wei Zhou; Guangyuan Ma; Songlin Hu

Abstract
Dialogue response selection aims to select an appropriate response from several candidates based on a given user and system utterance history. Most existing works focus on post-training and fine-tuning tailored for cross-encoders. However, there are no post-training methods tailored for dense encoders in dialogue response selection. We argue that when a language model such as BERT is employed as a dense encoder, it encodes the dialogue context and the response separately, making it difficult to align the two representations. Thus, we propose Dial-MAE (Dialogue Contextual Masking Auto-Encoder), a straightforward yet effective post-training technique tailored for dense encoders in dialogue response selection. Dial-MAE uses an asymmetric encoder-decoder architecture to compress the dialogue semantics into dense vectors, which achieves better alignment between the features of the dialogue context and the response. Our experiments demonstrate that Dial-MAE is highly effective, achieving state-of-the-art performance on two commonly evaluated benchmarks.
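To make the asymmetric encoder-decoder idea concrete, below is a minimal PyTorch sketch of this style of post-training, not the authors' released implementation. A full BERT encoder compresses the masked dialogue context into a single [CLS] vector, and a deliberately shallow decoder must reconstruct the masked response conditioned on that vector, which pushes context and response features into alignment. The model name, the single decoder layer, the masking setup, and the class name `DialMAEForPostTraining` are all assumptions for illustration.

```python
# Minimal sketch of contextual-MAE-style post-training for a dense
# dialogue encoder (assumptions noted above; not the paper's code).
import torch
import torch.nn as nn
from transformers import BertModel
from transformers.models.bert.modeling_bert import BertLayer


class DialMAEForPostTraining(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased", n_decoder_layers: int = 1):
        super().__init__()
        # Deep encoder: full BERT stack that produces the dense context vector.
        self.encoder = BertModel.from_pretrained(encoder_name)
        cfg = self.encoder.config
        # Shallow ("weak") decoder: assumed 1 transformer layer, so the
        # reconstruction burden falls on the dense vector, not the decoder.
        self.decoder = nn.ModuleList([BertLayer(cfg) for _ in range(n_decoder_layers)])
        self.mlm_head = nn.Linear(cfg.hidden_size, cfg.vocab_size)
        self.loss_fct = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, ctx_ids, ctx_mask, resp_ids, resp_mask, resp_labels):
        # 1) Encode the masked dialogue context; keep only the [CLS] vector.
        cls_vec = self.encoder(
            input_ids=ctx_ids, attention_mask=ctx_mask
        ).last_hidden_state[:, :1, :]                              # [B, 1, H]
        # 2) Embed the (heavily masked) response tokens.
        resp_emb = self.encoder.embeddings(input_ids=resp_ids)     # [B, T, H]
        # 3) Prepend the context vector: the decoder can only recover the
        #    response by reading the compressed context representation.
        hidden = torch.cat([cls_vec, resp_emb], dim=1)             # [B, 1+T, H]
        attn = torch.cat([torch.ones_like(resp_mask[:, :1]), resp_mask], dim=1)
        ext_attn = (1.0 - attn[:, None, None, :].float()) * torch.finfo(torch.float32).min
        for layer in self.decoder:
            hidden = layer(hidden, attention_mask=ext_attn)[0]
        # 4) MLM loss over masked response positions (-100 elsewhere).
        logits = self.mlm_head(hidden[:, 1:, :])
        return self.loss_fct(logits.view(-1, logits.size(-1)), resp_labels.view(-1))


if __name__ == "__main__":
    model = DialMAEForPostTraining()
    B, Tc, Tr = 2, 16, 8
    ctx = torch.randint(0, 30522, (B, Tc))
    resp = torch.randint(0, 30522, (B, Tr))
    labels = torch.full((B, Tr), -100)
    labels[:, 3] = resp[:, 3]  # pretend one response position was masked
    loss = model(ctx, torch.ones(B, Tc, dtype=torch.long),
                 resp, torch.ones(B, Tr, dtype=torch.long), labels)
    print(loss.item())
```

After post-training, only the encoder is kept: context and response are embedded separately and matched by vector similarity, as in standard dense retrieval.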
Benchmarks
| Benchmark | Methodology | R10@1 | R10@2 | R10@5 |
|---|---|---|---|---|
| conversational-response-selection-on-e | Dial-MAE | 0.930 | 0.977 | 0.997 |
| conversational-response-selection-on-ubuntu-1 | Dial-MAE | 0.918 | 0.964 | 0.993 |
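The R10@k metric used above means: given 10 candidate responses per context (one of them correct), count the fraction of contexts where the correct response is ranked within the model's top k. Below is a short, self-contained sketch of that computation; the random scores and the helper name `recall_at_k` are illustrative, not from the paper.

```python
# Sketch of the R10@k evaluation metric (illustrative, not the paper's code).
import numpy as np

def recall_at_k(scores: np.ndarray, correct_idx: np.ndarray, k: int) -> float:
    """scores: [N, 10] model scores per candidate; correct_idx: [N] gold index."""
    topk = np.argsort(-scores, axis=1)[:, :k]           # top-k candidate indices
    hits = (topk == correct_idx[:, None]).any(axis=1)   # gold found in top-k?
    return float(hits.mean())

# Example: 2 contexts, 10 candidates each, gold response at index 0.
rng = np.random.default_rng(0)
scores = rng.normal(size=(2, 10))
scores[:, 0] += 3.0                                     # make gold score highest
print(recall_at_k(scores, np.zeros(2, dtype=int), k=1)) # -> 1.0
```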