---
language: en
license: mit
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- task-oriented-dialogues
- dialog-flow
datasets:
- sergioburdisso/dialog2flow-dataset
- Salesforce/dialogstudio
pipeline_tag: sentence-similarity
base_model:
- aws-ai/dse-bert-base
widget:
- source_sentence: your phone please
  sentences:
  - please get their phone number
  - okay can i get your phone number please to make that booking
  - okay can i please get your id number
  output:
  - label: '0'
    score: 0.9
  - label: '1'
    score: 0.85
  - label: '2'
    score: 0.27
---

# **Dialog2Flow single target model** (DSE-base)

This is a variation of the **D2F$_{single}$** model introduced in the paper ["Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction"](https://arxiv.org/abs/2410.18481), published at the EMNLP 2024 main conference.
This version uses DSE-base as the backbone model, which yields improved performance compared to the vanilla version using BERT-base as the backbone (results reported in Appendix C of the paper).

Implementation-wise, this is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["your phone please", "okay may i have your telephone number please"]

model = SentenceTransformer('sergioburdisso/dialog2flow-single-dse-base')
embeddings = model.encode(sentences)
print(embeddings)
```
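
To rank candidate utterances against a query by similarity, in the spirit of the widget example above, you can compare the embeddings with cosine similarity, e.g. via `sentence_transformers.util.cos_sim`. A minimal sketch (the candidate sentences are taken from the widget metadata above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sergioburdisso/dialog2flow-single-dse-base')

query = "your phone please"
candidates = [
    "please get their phone number",
    "okay can i get your phone number please to make that booking",
    "okay can i please get your id number",
]

# Embed the query and the candidates, then score candidates by cosine similarity
query_embedding = model.encode(query)
candidate_embeddings = model.encode(candidates)
scores = util.cos_sim(query_embedding, candidate_embeddings)[0]

for candidate, score in zip(candidates, scores):
    print(f"{score:.2f}\t{candidate}")
```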

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['your phone please', 'okay may i have your telephone number please']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sergioburdisso/dialog2flow-single-dse-base')
model = AutoModel.from_pretrained('sergioburdisso/dialog2flow-single-dse-base')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
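
As before, the resulting embeddings can be compared with cosine similarity to check that the two paraphrased utterances land close together (continuing from the variables defined above):

```python
import torch.nn.functional as F

# Cosine similarity between the two utterance embeddings computed above
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.3f}")
```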

## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 363506 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`spretrainer.losses.LabeledContrastiveLoss.LabeledContrastiveLoss`

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 49478 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`spretrainer.losses.LabeledContrastiveLoss.LabeledContrastiveLoss`
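
Judging by its name and the paper's description, `LabeledContrastiveLoss` implements a supervised (soft-)contrastive objective that pulls together utterances sharing the same dialog-act label. As a rough illustration only, not the actual `spretrainer` implementation, a plain supervised contrastive loss over a labeled batch could look like the hypothetical sketch below (the function name and `temperature` value are our own choices):

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings, labels, temperature=0.05):
    """Pull same-label utterances together; all other in-batch pairs act as negatives."""
    z = F.normalize(embeddings, dim=1)               # unit-normalize: dot products = cosine similarities
    sim = z @ z.T / temperature                      # (batch, batch) similarity matrix
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-similarity
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim.log_softmax(dim=1)
    pos_log_prob = log_prob.masked_fill(~positives, 0.0)  # keep only positive-pair terms
    anchors = positives.any(dim=1)                   # anchors with at least one in-batch positive
    loss = -pos_log_prob[anchors].sum(1) / positives[anchors].sum(1)
    return loss.mean()
```

Note that this hard-label version treats all same-label pairs equally; the soft variant proposed in the paper additionally weights pairs by the semantic similarity of their action labels.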

Parameters of the fit()-Method:
```
{
    "epochs": 15,
    "evaluation_steps": 164,
    "evaluator": [
        "spretrainer.evaluation.FewShotClassificationEvaluator.FewShotClassificationEvaluator"
    ],
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 3e-06
    },
    "scheduler": "WarmupLinear",
    "warmup_steps": 100,
    "weight_decay": 0.01
}
```


## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
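
Note the `max_seq_length` of 64: inputs longer than 64 tokens are truncated, which is generally sufficient for single dialog utterances. If you need to encode longer inputs, the limit can be raised after loading via the standard sentence-transformers attribute (up to the BERT backbone's 512-position limit):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sergioburdisso/dialog2flow-single-dse-base')
print(model.max_seq_length)  # 64

# Raise the truncation limit for longer inputs
model.max_seq_length = 128
```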

## Citation

If you found the paper and/or this repository useful, please consider citing our work :)

EMNLP paper: [here](https://aclanthology.org/2024.emnlp-main.310/).

```bibtex
@inproceedings{burdisso-etal-2024-dialog2flow,
    title = "{D}ialog2{F}low: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction",
    author = "Burdisso, Sergio and
      Madikeri, Srikanth and
      Motlicek, Petr",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.310",
    pages = "5421--5440",
}
```

## License

Copyright (c) 2024 [Idiap Research Institute](https://www.idiap.ch/).
MIT License.