---
language:
- en
- ar
- zh
- nl
- fr
- de
- hi
- id
- it
- ja
- pt
- ru
- es
- vi
- multilingual
license: apache-2.0
datasets:
- unicamp-dl/mmarco
widget:
- text: Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache.
    Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern.
    So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch
    Einrückungen strukturiert.
---

# doc2query/msmarco-14langs-mt5-base-v1


This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It was trained on all 14 languages of the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO), i.e. you can input a passage in any of these 14 languages and it will generate a query in the same language.

It can be used for:
- **Document expansion**: For each paragraph, you generate 20-40 queries and index the paragraph together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini. A minimal expansion sketch is shown directly below this list.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) and the [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) show how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. A sketch of this pair generation follows the expansion example below.
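
The expansion step itself is small once the model is loaded. The following is a minimal sketch, reusing the sampling-based generation from the Usage section below; the `expand` helper, the `num_queries` default, and the choice to concatenate queries into one indexed field are illustrative assumptions, not part of this repository:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-14langs-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def expand(paragraph, num_queries=20):
    input_ids = tokenizer.encode(paragraph, return_tensors='pt')
    with torch.no_grad():
        # Sample diverse queries for the paragraph (same settings as in Usage below)
        outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=num_queries
        )
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # Concatenate paragraph and generated queries into one field; this
    # expanded text is what gets indexed in the BM25 engine.
    return paragraph + " " + " ".join(queries)
```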
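
For training-data generation, the same generation loop can emit (query, text) pairs. This is only a sketch of the idea, reusing `tokenizer` and `model` from the snippet above; the `passages` input and the TSV output format are assumptions, see the GPL example on SBERT.net for the full pipeline:

```python
import csv

def generate_pairs(passages, queries_per_passage=3, out_path='pairs.tsv'):
    # Write one (query, passage) pair per row; these pairs can serve as
    # training data for a dense embedding model.
    with open(out_path, 'w', newline='', encoding='utf8') as f:
        writer = csv.writer(f, delimiter='\t')
        for passage in passages:
            input_ids = tokenizer.encode(passage, return_tensors='pt')
            with torch.no_grad():
                outputs = model.generate(
                    input_ids=input_ids,
                    max_length=64,
                    do_sample=True,
                    top_p=0.95,
                    top_k=10,
                    num_return_sequences=queries_per_passage
                )
            for output in outputs:
                query = tokenizer.decode(output, skip_special_tokens=True)
                writer.writerow([query, passage])
```
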
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-14langs-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."


def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')


create_queries(text)
```


**Note:** `model.generate()` is non-deterministic for top_p / top_k sampling. It produces different queries each time you run it.
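
If reproducible sampling is needed, the random state can be fixed before calling `model.generate()`; this is plain PyTorch seeding, and the seed value is arbitrary:

```python
import torch

# Fix the RNG so repeated runs produce the same sampled queries
torch.manual_seed(42)
```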


## Training
This model was trained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 525k training steps on all 14 languages of the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO). For the training script, see the `train_script.py` in this repository.

The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
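
To mirror this at inference time, inputs can be truncated to the same length when encoding; the 320/64 limits below come from the training setup just described:

```python
# Truncate inputs to the training length of 320 word pieces and cap
# generation at 64 word pieces, matching the training configuration
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(input_ids=input_ids, max_length=64, num_beams=5, early_stopping=True)
```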

This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).