base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

# Model: BAI_LLM_FinArg

- **Developed by:** varadsrivastava
- **License:** apache-2.0
- **Base model:** unsloth/llama-3-8b-Instruct-bnb-4bit
# For proper inference, please install the pinned Unsloth release:

```shell
pip install "unsloth[colab-new] @ git+https://GitHub.com/unslothai/unsloth.git@April-Llama-3-2024"
```

### Loading the fine-tuned model and the tokenizer for inference

```python
import torch  # required for the dtype argument below
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "varadsrivastava/BAI_LLM_FinArg",
    max_seq_length = 20,
    dtype = torch.bfloat16,
    load_in_4bit = True,
)
```

### Using FastLanguageModel for fast inference

```python
FastLanguageModel.for_inference(model)  # switches the model into Unsloth's optimized inference mode
```

# Prompt template:

```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{instruction}<|eot_id|><|start_header_id|>user<|end_header_id|>
Sentence: {row['text']}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Class: {row['label']}<|eot_id|>
```

NOTE: This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
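
As a minimal sketch, the prompt template above can be filled in for a single sentence before calling `model.generate`. The helper name `build_prompt`, the system instruction, and the example sentence here are hypothetical placeholders, not values from the training data; only the special-token layout follows the template on this card. The assistant turn is left open so the model completes the class label.

```python
# Hypothetical helper: fills the card's prompt template for one sentence.
# The instruction and sentence below are illustrative placeholders.
def build_prompt(instruction: str, text: str) -> str:
    """Fill the Llama-3 chat template from the card, leaving the
    assistant turn open so the model completes the class label."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{instruction}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"Sentence: {text}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
        "Class:"
    )

prompt = build_prompt(
    "Classify the argumentative role of the sentence.",  # hypothetical instruction
    "Revenue grew 12% year over year.",                  # hypothetical input
)

# With model and tokenizer loaded as shown above:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# outputs = model.generate(**inputs, max_new_tokens=8)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```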