---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---

Download the model:

```python
# Set the path where the model will be saved
from pathlib import Path

models_path = Path.home().joinpath('Question_Generation_model', 'UTeMGPT')
models_path.mkdir(parents=True, exist_ok=True)

# Download the model from the Hugging Face Hub
from huggingface_hub import snapshot_download

my_model = snapshot_download(
    repo_id = "KLimaLima/finetuned-Question-Generation-mistral-7b-instruct",
    local_dir = models_path,
)
```

To load the model that has been downloaded:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # Choose any! We auto support RoPE Scaling internally!
dtype = None           # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True    # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = my_model,
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
```

This model uses the Alpaca prompt format, as shown below:

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

instruction = 'Write an inquisitive question about a specific text span in a given sentence such that the answer is not in the text.'
sentence = "I want to bake a cake during my free time. I need to know the ingredients that need to be used."

inputs = tokenizer(
    [
        alpaca_prompt.format(
            instruction,
            sentence,
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")
```

To generate output:

```python
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```

# Uploaded model

- **Developed by:** KLimaLima
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
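
Note that `model.generate` echoes the full prompt back in its output, so the decoded string contains the instruction and input as well as the generated question. A minimal sketch for keeping only the text after the `### Response:` marker (the `get_response` helper is hypothetical, not part of this card):

```python
# Hypothetical helper: split the decoded output on the Alpaca
# "### Response:" marker and keep only the generated text.
def get_response(decoded: str) -> str:
    response = decoded.split("### Response:")[-1]
    # Strip the EOS token that Mistral appends, if present.
    return response.replace("</s>", "").strip()

generated_question = get_response(tokenizer.batch_decode(outputs)[0])
print(generated_question)
```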
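
For interactive use, you can also stream tokens to stdout as they are generated instead of waiting for the full sequence. A sketch using `transformers.TextStreamer`, assuming the `model`, `tokenizer`, and `inputs` defined above:

```python
from transformers import TextStreamer

# Print decoded tokens as they are produced; skip_prompt=True
# suppresses the echoed Alpaca prompt.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64, use_cache = True)
```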