---
language:
- en

tags:
- text2text-generation

widget:
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
  example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
  example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
  example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
  example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
  example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
  example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
  example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learned one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
  example_title: "Premise and hypothesis"

datasets:
- Open-Orca/SlimOrca-Dedup
- GAIR/lima
- nomic-ai/gpt4all-j-prompt-generations
- HuggingFaceH4/ultrachat_200k
- ZenMoore/RoleBench
- WizardLM/WizardLM_evol_instruct_V2_196
- c-s-ale/alpaca-gpt4-data
- THUDM/AgentInstruct

license: mit
---
# Model Card of instructionRoberta-base for Bertology
A minimalistic instruction model built on an already well-analysed, pretrained encoder, RoBERTa. This lets us study the [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) of instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
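
A quick way to start such an analysis is to pull the warm-started RoBERTa encoder out of the seq2seq model and ask it for its self-attention weights and hidden states. The snippet below is only a minimal sketch, not part of the linked analysis notebook; the prompt and the inspected tensors are arbitrary illustrative choices.

```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

model_name = "Bachstelze/instructionRoberta-base"
model = EncoderDecoderModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# run only the instruction-tuned RoBERTa encoder and request all attention maps
inputs = tokenizer(
    "Please answer the following question. What is the boiling point of Nitrogen?",
    return_tensors="pt",
)
with torch.no_grad():
    encoder_outputs = model.encoder(**inputs, output_attentions=True)

# one attention tensor per layer, each of shape (batch, heads, seq_len, seq_len)
print(len(encoder_outputs.attentions), encoder_outputs.attentions[0].shape)

# the last hidden states are the fine-tuned embeddings that can be compared
# against the original roberta-base encoder
print(encoder_outputs.last_hidden_state.shape)
```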

The training code is released in the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
We used the Hugging Face API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
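
The actual training script lives in the repository above; the snippet below is only a rough sketch of what such a warm-start looks like with the Hugging Face API, following the linked blog post: encoder and decoder are both initialised from `roberta-base`, and the special tokens needed for generation are wired up.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# initialise encoder and decoder from the pretrained roberta-base checkpoint;
# the decoder's cross-attention weights are newly initialised and only learned
# during instruction tuning
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# tell the generation config which special tokens to use
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.encoder.vocab_size
```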

## Run the model with a longer output

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# load the fine-tuned seq2seq model and corresponding tokenizer
model_name = "Bachstelze/instructionRoberta-base"
model = EncoderDecoderModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# encode the instruction and generate up to 200 new tokens
input_text = "Write a poem about love, peace and pancake."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0]))
```

## Training parameters

- base model: "roberta-base"
- trained for 1 epoch
- batch size of 16
- 20,000 warm-up steps
- learning rate of 0.0001
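
For orientation only, these hyperparameters roughly correspond to a `Seq2SeqTrainingArguments` configuration like the one below; the output directory is a placeholder, and any remaining arguments of the actual training script can be found in the repository.

```python
from transformers import Seq2SeqTrainingArguments

# hypothetical mirror of the listed hyperparameters, not the original script
training_args = Seq2SeqTrainingArguments(
    output_dir="./instructionRoberta-base",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    warmup_steps=20000,
    learning_rate=1e-4,
)
```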

## Purpose of instructionRoberta-base

InstructionBERT is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.