alibidaran committed
Commit 78cda43 · verified · 1 Parent(s): e15a211

Update README.md

Files changed (1):
  1. README.md +27 -4
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
-base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
+base_model:
+- alibidaran/LLAMA3-instructive_reasoning
 tags:
 - text-generation-inference
 - transformers
@@ -9,14 +10,36 @@ tags:
 license: apache-2.0
 language:
 - en
+datasets:
+- adarshxs/Therapy-Alpaca
 ---
 
 # Uploaded model
 
 - **Developed by:** alibidaran
 - **License:** apache-2.0
-- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
+- **Finetuned from model:** alibidaran/LLAMA3-instructive_reasoning
 
-This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+This model is fine-tuned with the GRPO algorithm to produce reasoning responses for mental health counseling applications.
+The following notebook illustrates how to design the reward models used to train this model with the GRPO algorithm:
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+https://www.kaggle.com/code/alibidaran/reasoning-consueling
+
+### Direct usage
+```python
+messages = [
+    {"role": "system",
+     "content": system_prompt},
+    {"role": "user", "content": "I want to cut down on drinking alcohol, but when I am with my friends I need to drink. What should I do?"},
+]
+inputs = tokenizer.apply_chat_template(
+    messages,
+    tokenize=True,
+    add_generation_prompt=True,  # must add for generation
+    return_tensors="pt",
+).to("cuda")
+
+from transformers import TextStreamer
+text_streamer = TextStreamer(tokenizer, skip_prompt=True)
+_ = model.generate(input_ids=inputs, streamer=text_streamer, max_new_tokens=1024,
+                   use_cache=True, temperature=0.7, min_p=0.9)
+```
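The direct-usage snippet added above assumes that `model`, `tokenizer`, and `system_prompt` are already defined. A minimal loading sketch under those assumptions; the repo id placeholder and the system prompt text are illustrative, not taken from the card:

```python
# Sketch only: one way to define the model, tokenizer, and system_prompt used
# in the direct-usage snippet. MODEL_ID is a placeholder (the exact repo id is
# not stated in this diff) and the system prompt is illustrative, not the one
# used during training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "alibidaran/<this-repo>"  # replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # or pass a BitsAndBytesConfig for 4-bit loading
    device_map="cuda",
)

system_prompt = (
    "You are a supportive counseling assistant. Reason through the user's "
    "situation step by step before offering advice."
)
```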
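The card defers the actual reward-model design to the linked Kaggle notebook. As a rough orientation only, a minimal GRPO sketch with TRL's `GRPOTrainer`, assuming a simple format-based reward function and an Alpaca-style `instruction` column in the Therapy-Alpaca dataset (both are assumptions, not details from the card):

```python
# Sketch only: GRPO fine-tuning with TRL's GRPOTrainer.
# Assumptions (not taken from the card): the reward simply checks that the
# completion wraps its reasoning in <think>...</think> tags, and the dataset
# exposes an Alpaca-style "instruction" column. The actual reward models are
# defined in the linked Kaggle notebook.
import re
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("adarshxs/Therapy-Alpaca", split="train")
dataset = dataset.map(lambda ex: {"prompt": ex["instruction"]})  # GRPOTrainer expects a "prompt" column

def format_reward(completions, **kwargs):
    """Reward 1.0 when the completion contains an explicit reasoning block."""
    pattern = re.compile(r"<think>.+?</think>", re.DOTALL)
    return [1.0 if pattern.search(c) else 0.0 for c in completions]

training_args = GRPOConfig(
    output_dir="llama3-reasoning-grpo",
    num_generations=4,            # completions sampled per prompt for the group baseline
    max_completion_length=512,
    learning_rate=5e-6,
)

trainer = GRPOTrainer(
    model="alibidaran/LLAMA3-instructive_reasoning",  # the base model named in the card
    reward_funcs=[format_reward],
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```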