Update README.md
README.md CHANGED

@@ -130,64 +130,6 @@ Thank you to all my generous patrons and donaters!

 # Original model card: rewoo's Planner 7B

-
-<!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
-</div>
-<div style="display: flex; justify-content: space-between; width: 100%;">
-<div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
-</div>
-<div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
-</div>
-</div>
-<!-- header end -->
-
-# rewoo's Planner 7B GGML
-
-These files are fp16 pytorch format model files for [rewoo's Planner 7B](https://huggingface.co/rewoo/planner_7B).
-
-They are the result of merging the LoRA adapter at the above repo with the base LLaMa 7B model.
-
-## Repositories available
-
-* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Planner-7B-GPTQ)
-* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Planner-7B-GGML)
-* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Planner-7B-fp16)
-
-<!-- footer start -->
-## Discord
-
-For further support, and discussions on these models and AI in general, join us at:
-
-[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
-
-## Thanks, and how to contribute.
-
-Thanks to the [chirper.ai](https://chirper.ai) team!
-
-I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
-
-If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
-
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
-
-* Patreon: https://patreon.com/TheBlokeAI
-* Ko-Fi: https://ko-fi.com/TheBlokeAI
-
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
-
-**Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
-
-Thank you to all my generous patrons and donaters!
-
-<!-- footer end -->
-
-# Original model card: rewoo's Planner 7B
-
-
 Alpaca Lora adapter weight fine-tuned on following instruction dataset.

 https://huggingface.co/datasets/rewoo/planner_instruction_tuning_2k/blob/main/README.md
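The deleted section noted that these fp16 files were produced by merging the LoRA adapter at rewoo/planner_7B into the base LLaMa 7B model. For anyone wanting to reproduce that kind of merge, below is a minimal sketch using `peft` and `transformers`; the base-model id `huggyllama/llama-7b` and the output directory name are illustrative assumptions, not details from the card.

```python
# Sketch: fold a LoRA adapter into base LLaMA 7B and save merged fp16 weights.
# Assumes `torch`, `transformers`, and `peft` are installed. The base model id
# is an assumption -- substitute whatever path holds your LLaMA 7B weights.
import torch
from peft import PeftModel
from transformers import AutoTokenizer, LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # assumed base weights; not specified in the card
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "rewoo/planner_7B")  # adapter repo from the card
model = model.merge_and_unload()  # bake the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model.save_pretrained("Planner-7B-fp16")  # writes fp16 pytorch-format files
tokenizer.save_pretrained("Planner-7B-fp16")
```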
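The surviving card text says the adapter is an Alpaca-LoRA weight fine-tuned on the linked planner instruction dataset. As a quick, hypothetical smoke test of the merged fp16 repo listed above, something like the following should work; the Alpaca-style prompt template is an assumption based on that description, not a documented format.

```python
# Sketch: generate from the merged fp16 model (repo id from the card's
# "Repositories available" list). Requires `torch`, `transformers`, and
# `accelerate` (for device_map="auto").
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TheBloke/Planner-7B-fp16")
model = LlamaForCausalLM.from_pretrained(
    "TheBloke/Planner-7B-fp16",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Assumed Alpaca-style prompt; the card does not spell out the exact template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nPlan the steps needed to find the capital of the "
    "country that hosted the 2008 Olympics.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```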