Alsebay committed · verified · Commit 2f581e8 · 1 parent: 6b53eb1

Update README.md

Files changed (1): README.md (+10 −6)
README.md CHANGED
@@ -6,17 +6,21 @@ language:
  ## Leaderboard
  |Rank|Name|Parameters|Context Length|Tag|Note|
  |:---:|---|:---:|:---:|:---:|---|
- |💎1|[HyouKan Series](https://huggingface.co/Alsebay/HyouKan-3x7B)|3x7B|<span style="color:cyan">8K</span> - <span style="color:red">32K</span>|<span style="color:#40C5F0">Neutral</span>|<span style="color:red">ATTENTION: DON'T USE THE GGUF VERSION, SINCE IT HAS SOME BUGS (VARYING BY VERSION).</span> All-round roleplay model. Understands Character Cards well and has good logic. The first version has an 8K context length.|
- |🏆2|[SunnyRain](https://huggingface.co/Alsebay/SunnyRain-2x10.7B)|2x10.7B|<span style="color:green">4K</span>|<span style="color:#F53A85">Lewd</span>|To be honest, it performs about the same as HyouKan in roleplay, just with some strange behaviors.|
- |✌3|[RainyMotip](https://huggingface.co/Alsebay/RainyMotip-2x7B)|2x7B|<span style="color:red">32K</span>|<span style="color:#40C5F0">Neutral</span>|Good enough model, OK in roleplay.|
- |4|[Nutopia](https://huggingface.co/Alsebay/Nutopia-7B)|7B|<span style="color:red">32K</span>|<span style="color:#F2EC4E">Not for Roleplay</span>|I don't think this works for roleplay, but it is good at problem-solving.|
- |5|[TripedalChiken](https://huggingface.co/Alsebay/TripedalChiken)|2x7B|<span style="color:red">32K</span>|<span style="color:#F2EC4E">Not for Roleplay</span>|Good at problem-solving, but not at roleplay, I think.|
+ |💎1|[Narumashi-RT](https://huggingface.co/Alsebay/Narumashi-RT-11B-test)|11B|<span style="color:green">4K</span>|<span style="color:#F53A85">Lewd</span>|Good for roleplay, although it is LLaMA-2 based. Thanks, Sao10k :) Can handle some (limited) TSF content.|
+ |🏆2|[NaruMoE](https://huggingface.co/Alsebay/NaruMOE-v1-3x7B)|3x7B|<span style="color:cyan">8K</span> - <span style="color:red">32K</span>|<span style="color:#40C5F0">Neutral</span>|Average model; it could only handle a limited amount of the extra content I wanted.|
+ |✌3|[NarumashiRTS](https://huggingface.co/Alsebay/NarumashiRTS-V2)|7B|<span style="color:cyan">8K</span> - <span style="color:red">32K</span>|<span style="color:#40C5F0">Neutral</span>|Based on Kunoichi-7B, so it is good enough. Knows the extra content. Not lewd, and will sometimes skip lewd content.|
+ |4|[HyouKan Series](https://huggingface.co/Alsebay/HyouKan-3x7B)|3x7B|<span style="color:cyan">8K</span> - <span style="color:red">32K</span>|<span style="color:#40C5F0">Neutral</span>|<span style="color:red">ATTENTION: DON'T USE THE GGUF VERSION, SINCE IT HAS SOME BUGS (VARYING BY VERSION).</span> All-round roleplay model. Understands Character Cards well and has good logic. The first version has an 8K context length.|
+ |5|[SunnyRain](https://huggingface.co/Alsebay/SunnyRain-2x10.7B)|2x10.7B|<span style="color:green">4K</span>|<span style="color:#F53A85">Lewd</span>|To be honest, it performs about the same as HyouKan in roleplay, just with some strange behaviors.|
+ |6|[RainyMotip](https://huggingface.co/Alsebay/RainyMotip-2x7B)|2x7B|<span style="color:red">32K</span>|<span style="color:#40C5F0">Neutral</span>|Good enough model, OK in roleplay.|
+ |7|[Nutopia](https://huggingface.co/Alsebay/Nutopia-7B)|7B|<span style="color:red">32K</span>|<span style="color:#F2EC4E">Not for Roleplay</span>|I don't think this works for roleplay, but it is good at problem-solving.|
+ |8|[TripedalChiken](https://huggingface.co/Alsebay/TripedalChiken)|2x7B|<span style="color:red">32K</span>|<span style="color:#F2EC4E">Not for Roleplay</span>|Good at problem-solving, but not at roleplay, I think.|
 
  ## Note:
  - <span style="color:#F53A85">Lewd</span>: performs NSFW content well. Some lewd words will appear in normal content if your Character Card has NSFW information.
  - <span style="color:#40C5F0">Neutral</span>: performs SFW content well and can handle NSFW content (perhaps to a limited extent). Lewd words appear less often in chat/roleplay than with <span style="color:#F53A85">Lewd</span>.
  - <span style="color:#F2EC4E">Not for Roleplay</span>: models with this tag seem not to understand Character Cards well, but their logic is very good.
+ - **RT**: Rough-Translation dataset, which could lead to worse performance than the original model.
+ - **CN**: pretrained on a Chinese dataset; it may not understand extra content in English. (I can't find any good English version.)
 
  # Some experience:
  - Context length heavily affects memory usage. Say I have a 16 GB VRAM card; I can run the models in two ways using Text-Generation-WebUI:
  1. Inference: download the original model and apply the args ``--load-in-4bit --use_double_quant``. I can run every model in the leaderboard this way. The bigger the parameter count, the slower tokens generate (e.g., a 7B model runs at ~15 tokens/s, while a 3x7B model only manages ~4-5 tokens/s).
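
The 4-bit trick above works because weight memory scales with parameter count times bits per weight. A minimal back-of-the-envelope sketch of that estimate (the helper name is my own, and the formula ignores KV-cache, activations, and loader overhead, so real usage is higher):

```python
def weight_vram_gb(params_billions: float, bits: int) -> float:
    """Rough VRAM needed for model weights alone: parameters * (bits / 8) bytes."""
    # params_billions * 1e9 params * (bits / 8) bytes, converted back to GB.
    return params_billions * bits / 8

# A 3x7B MoE stores ~21B parameters. In fp16 the weights alone need ~42 GB
# (far over a 16 GB card), but at 4-bit they drop to ~10.5 GB, which is why
# --load-in-4bit makes these models fit.
print(weight_vram_gb(21, 16))  # fp16
print(weight_vram_gb(21, 4))   # 4-bit
```

This also matches the speed observation: a 3x7B MoE reads roughly three times the weight bytes of a 7B model per token, so tokens/s drops accordingly.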