Pinkstack committed
Commit 902f0a4 · verified · 1 Parent(s): cd5e4a5

Update README.md

Files changed (1): README.md +11 -6

README.md CHANGED
😁:```Hi Fijik!```

🤖:```Hello! What's up? How may I help?```
![Fijik 1.0 6B banner](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/Dub2iaHaWhxfMC_ZGBYtc.png)
# What is it
Fijik is a **6 billion** parameter, dense, 56-layer transformer LLM based on Llama 3.2. Specifically, it was merged using Mergekit to be twice as large as Llama 3.2 3B.
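For intuition, here is a minimal sketch of the depth-doubling idea in plain `transformers`/PyTorch. It is **not** the authors' actual Mergekit recipe (the card does not state the layer arrangement, and Mergekit passthrough merges often stack layer ranges rather than interleaving them); the base-model repo id is an assumption.

```python
import copy

import torch.nn as nn
from transformers import AutoModelForCausalLM

# Llama 3.2 3B has 28 decoder layers; duplicating each one gives the
# dense 56-layer, ~6B-parameter shape described above.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

doubled = nn.ModuleList()
for layer in base.model.layers:
    doubled.append(layer)                 # original block
    doubled.append(copy.deepcopy(layer))  # duplicated copy of the block

base.model.layers = doubled
base.config.num_hidden_layers = len(doubled)  # 56

# Keep per-layer KV-cache bookkeeping consistent after re-stacking.
for idx, layer in enumerate(doubled):
    layer.self_attn.layer_idx = idx
```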
 
After merging, we fine-tuned it on a custom dataset mix built for this model to improve its performance further.
- **Step 1 (fine-tuning via Unsloth):** SFT on an estimated 20 million tokens.
- **Step 2 (fine-tuning via Unsloth):** DPO for 2 epochs for even better instruction following (a minimal sketch of this step follows below).
After these two steps, we got a powerful model that has fewer parameters than Llama 3.1 8B yet performs just as well, if not better. Note that unlike our other recent models, it is not a thinking model, yet it can still reason quite well. Our theory behind this model is that a smaller but deeper model can outperform for its size.
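A minimal sketch of the DPO step using Hugging Face's TRL library (named at the bottom of this card). The preference dataset and the `beta` hyperparameter here are placeholders, not the authors' actual mix:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the SFT checkpoint produced in step 1 (see "Finetuned from
# model" below).
model = AutoModelForCausalLM.from_pretrained("Pinkstack/Fijik-6b-v1")
tokenizer = AutoTokenizer.from_pretrained("Pinkstack/Fijik-6b-v1")

# Placeholder preference data with "chosen"/"rejected" columns; the card's
# actual dataset mix is not public.
prefs = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="fijik-dpo",
    num_train_epochs=2,  # the card states DPO ran for 2 epochs
    beta=0.1,            # illustrative preference-strength hyperparameter
)
DPOTrainer(model=model, args=args, train_dataset=prefs,
           processing_class=tokenizer).train()
```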
 
Meta states that Llama 3.2 was pre-trained on up to 9 trillion high-quality tokens, with a knowledge cutoff of December 2023. This model supports up to **131072** input tokens and can generate up to **8192** tokens.

# What should Fijik be used for?
Fijik 1.0 6B is, by design, meant to be a production-ready, general-use, high-performance model that is also small enough to run at high token throughput while minimising performance loss.
- We made an effort to ensure the model is safe while keeping it usable. It is also sensitive to system prompts (in a good way: it adheres to them well), so it is very customisable. We did not include any information about the model's identity in our fine-tuning data; it knows it is a Large Language Model (LLM), but it does not know it is Fijik unless you say so in the system prompt (see the inference sketch after this list).
- Due to the model's large context, it can be used for RAG, but like any other LLM, you should be aware that it *may* hallucinate.
- Our fine-tuning data included quite a few creative-writing examples, so the model is fairly good at creative writing.
- Coding and math: in our SFT and DPO fine-tuning data we put effort into improving coding and step-by-step math performance; it is not perfect, but no LLM is.
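A minimal inference sketch showing the system-prompt identity behaviour and the generation cap, assuming the published checkpoint ships a chat template; the repo id `Pinkstack/Fijik-1.0-6b` is a guess, so substitute the real one:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Pinkstack/Fijik-1.0-6b"  # hypothetical repo id; substitute the real one
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [
    # The model only identifies as Fijik if the system prompt says so.
    {"role": "system", "content": "You are Fijik, a helpful assistant."},
    {"role": "user", "content": "Hi Fijik!"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Up to 131072 input tokens are supported; generation is capped at 8192.
output = model.generate(input_ids, max_new_tokens=8192)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```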
# Examples

# Limitations
This model is not uncensored, yet it may still produce erotic outputs. You are solely responsible for the outputs of the model.
Like any other LLM, users and hosts alike should be aware that AI language models can hallucinate and produce inaccurate, dangerous, or even completely nonsensical outputs. Everything the model says may seem accurate, but for important tasks, always double-check its responses against credible sources.

# Notices

- **Developed by:** Pinkstack
- **License:** Llama 3.2 community license
- **Finetuned from model:** Pinkstack/Fijik-6b-v1 (SFT)

This Llama model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.