Update README.md
datasets:
- Sweaterdog/MindCraft-LLM-tuning
---

# Uploaded models

- **Developed by:** Sweaterdog
- **License:** apache-2.0

While models that aren't fine-tuned to play Minecraft *can* play it, most are slow, inaccurate, and not as smart; fine-tuning expands the model's reasoning, conversation examples, and command (tool) usage.

# What kind of Dataset was used?

I'm deeming the first generation of this model Hermesv1; future generations will be named ***"Andy"***, after the MindCraft plugin's default character. It was trained for reasoning using examples of in-game "vision" as well as examples of spatial reasoning, and to expand its thinking I also added puzzle examples where the model breaks the process down step by step to reach the goal.

# Why choose Qwen2.5 for the base model?

While testing to find the best local LLM for playing Minecraft, I came across two: Gemma 2 and Qwen2.5. These two were by far the best at playing Minecraft before fine-tuning, and I knew that once tuned they would become even better.

# Will there ever be vision fine tuning?

Yes! MindCraft will have vision support for VLMs *(vision language models)*. Most likely the model will be Qwen2-VL-7B or Llama3.2-11B-Vision, since they are relatively new; and yes, I am still holding out hope for Llama3.2.

# How to Use

To use this model, first download the GGUF file of the version you want (either the Qwen or the Llama model) along with the Modelfile. Once you have both, edit the Modelfile so that its model path points to the GGUF you downloaded. Here is a simple guide if needed for the rest:
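As a sketch of that Modelfile edit (this assumes the model is loaded through Ollama; the filename and temperature below are placeholders, not values shipped with this repo):

```
# Ollama Modelfile — a minimal sketch, not the exact file from this repo.
# Replace the path below with the location of your downloaded GGUF.
FROM ./MindCraft-LLM-qwen.gguf

# Optional sampling parameter (placeholder value).
PARAMETER temperature 0.7
```

After editing, the model can be registered and started with `ollama create <model-name> -f Modelfile` followed by `ollama run <model-name>`.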

12. Enjoy having a model play Minecraft with you; hopefully it is smarter than the regular Gemini models!

**WARNING:** The new v3 generation of models sucks! That is because those models were also trained for building *(coding)* and often do not use commands. I recommend still using the v2 generation; the Llama version is in the [deprecated models folder](https://huggingface.co/Sweaterdog/MindCraft-LLM-tuning/tree/main/deprecated-models).

For anybody wondering about the context length: the Qwen version has a 64,000-token context window, and the Llama version has a 128,000-token context window. *(**NOTE:** any model can support a longer context, but these are the values supported in training.)*
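If the model is run through Ollama, which defaults to a much shorter context, the trained window can be requested in the Modelfile via the real `num_ctx` parameter — a sketch, with the GGUF path as a placeholder:

```
# Minimal sketch: raise the serving context window to the Qwen version's
# trained limit of 64,000 tokens (use 128000 for the Llama version).
FROM ./MindCraft-LLM-qwen.gguf
PARAMETER num_ctx 64000
```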