Davidsv committed
Commit a04fe33 · verified · 1 parent: e9c8fba

Update README.md

Files changed (1):
  1. README.md +22 -33

README.md CHANGED
@@ -1,4 +1,5 @@
  ---
+ license: apache-2.0
  base_model:
  - teknium/OpenHermes-2.5-Mistral-7B
  - NousResearch/Nous-Hermes-2-Mistral-7B-DPO
@@ -6,18 +7,32 @@ tags:
  - merge
  - mergekit
  - lazymergekit
- - teknium/OpenHermes-2.5-Mistral-7B
- - NousResearch/Nous-Hermes-2-Mistral-7B-DPO
+ - mistral
+ - hermes
+ - dpo
  ---
-
  # SUONG-2

- SUONG-2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+ This is a merge of two leading Hermes models created using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), combining OpenHermes's robust capabilities with Nous-Hermes-DPO's refined instruction following.
+
+ ## About Me
+ I'm David Soeiro-Vuong, a third-year Computer Science student working as an apprentice at TW3 Partners, a company specialized in Generative AI. Passionate about artificial intelligence and language model optimization, I focus on creating efficient model merges that balance performance and capabilities.
+
+ 🔗 [Connect with me on LinkedIn](https://www.linkedin.com/in/david-soeiro-vuong-a28b582ba/)
+
+ ## Merge Details
+ ### Merge Method
+ This model uses SLERP (Spherical Linear Interpolation) with carefully tuned parameters:
+ - Progressive attention layer fusion patterns
+ - Balanced MLP layer transitions
+ - bfloat16 format for efficient memory usage
+ - Full layer utilization for maximum capability retention
+
+ ### Models Merged
  * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
  * [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)

- ## 🧩 Configuration
-
+ ### Configuration
  ```yaml
  slices:
  - sources:
@@ -34,30 +49,4 @@ parameters:
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
- dtype: bfloat16
- ```
-
- ## 💻 Usage
-
- ```python
- !pip install -qU transformers accelerate
-
- from transformers import AutoTokenizer
- import transformers
- import torch
-
- model = "Davidsv/SUONG-2"
- messages = [{"role": "user", "content": "What is a large language model?"}]
-
- tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
-
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
- print(outputs[0]["generated_text"])
- ```
+ dtype: bfloat16
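
The SLERP method named in the updated Merge Details section can be illustrated with a minimal sketch. This is a generic spherical-linear-interpolation routine over weight vectors, not mergekit's actual implementation; the function name, the NumPy formulation, and the linear fallback for near-parallel vectors are assumptions for illustration.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values move
    along the great-circle arc between the two directions.
    """
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    # Angle between the two (normalized) vectors.
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    omega = np.arccos(dot)
    if np.abs(np.sin(omega)) < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1
```

In a SLERP merge, `t` near 0 keeps the result close to the first model's tensors and `t` near 1 close to the second's; the `value: [1, 0.5, 0.7, 0.3, 0]` lists in the configuration vary `t` across depth so attention and MLP tensors blend differently at different layers.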