Model: BAI_LLM_FinArg

  • Developed by: varadsrivastava
  • License: apache-2.0
  • Base Model: unsloth/llama-3-8b-Instruct-bnb-4bit

For proper inference, first install Unsloth:

!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

Loading the fine-tuned model and the tokenizer for inference

import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "varadsrivastava/BAI_LLM_FinArg",
    max_seq_length = 20,
    dtype = torch.bfloat16,
    load_in_4bit = True,
)

Using FastLanguageModel for fast inference

FastLanguageModel.for_inference(model)

Prompt template:

"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{instruction}<|eot_id|><|start_header_id|>user<|end_header_id|>

Sentence: {row['text']}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Class: {row['label']}<|eot_id|>"""
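For inference, the template above can be filled in up to the `Class:` cue so the model completes the label itself. A minimal sketch of a prompt-building helper is below; the system instruction shown is a hypothetical placeholder, since the exact instruction used during fine-tuning is not stated in this card.

```python
def build_prompt(instruction, text):
    # Mirrors the Llama-3 chat template above, stopping at "Class:"
    # so the assistant turn is completed by the model.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{instruction}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"Sentence: {text}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
        "Class:"
    )

# Hypothetical instruction; substitute the one used for your task.
prompt = build_prompt(
    "Classify the following financial argument sentence.",
    "Revenue grew 12% year over year, supporting the bullish outlook.",
)
print(prompt)
```

The resulting string can then be tokenized and passed to `model.generate(...)` with a small `max_new_tokens` budget, and the completion after `Class:` read off as the predicted label.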

NOTE: This model was trained 2x faster using Unsloth and Hugging Face's TRL library.

