Update README.md

README.md CHANGED
@@ -1,116 +1,42 @@
---
base_model:
- unsloth/Qwen2.5-7B-bnb-4bit
- unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- llama3
- trl
license: apache-2.0
language:
- en
datasets:
- Sweaterdog/
---

# Uploaded models

- **Developed by:** Sweaterdog
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-bnb-4bit

The MindCraft LLM tuning CSV file, which can be tweaked as needed, can be found here: [MindCraft-LLM](https://huggingface.co/datasets/Sweaterdog/

#

This

# Why a new model?

While models that aren't fine-tuned for Minecraft *can* technically play it, most are slow, inaccurate, and not as smart; fine-tuning expands their reasoning, conversation examples, and command (tool) usage.

# What kind of dataset was used?

I'm deeming the first generation of this model Hermesv1; future generations will be named ***"Andy"***, after the actual MindCraft plugin's default character. It was trained for reasoning using examples of in-game "vision" as well as examples of spatial reasoning, and to expand its thinking I also added puzzle examples where the model broke the process down step by step to reach the goal.

# Why choose Qwen2.5 for the base model?

During testing to find the best local LLM for playing Minecraft, I came across two: Gemma 2 and Qwen2.5. These two were by far the best at playing Minecraft before fine-tuning, and I knew that once tuned, they would become even better.

# If Gemma 2 and Qwen2.5 are the best before fine-tuning, why include Llama 3.2, especially the lower-intelligence 3B-parameter version?

That is a great question. Since Llama 3.2 3B has a low parameter count, it isn't very smart and doesn't play Minecraft well without fine-tuning, but it is a lot smaller than the other models, which matters for people with less powerful computers, and the hope is that once tuned it will become much better at Minecraft.

#

Well, you see, I do not have the most powerful computer, and Unsloth, the tool I'm using for fine-tuning, has a Google Colab set up, so I am waiting for GPU time to tune the models. They will be released as soon as possible, I promise.

#

Yes! MindCraft will have vision support for VLMs *(vision language models)*. Most likely the model will be Qwen2-VL-7B or Llama-3.2-11B-Vision, since they are relatively new. And yes, I am still holding out hope for Llama 3.2.

# How to Use

To use this model, first download the GGUF file of the version you want (either the Qwen or the Llama model) along with the Modelfile. Once you have both, change the model path in the Modelfile to point at your downloaded model. Here is a simple guide for the rest:

#

1. Download the .gguf model you want. For this example it is in the standard Windows "Downloads" folder
7. Wait until it finishes

8. In the CMD window, type "ollama run Hermes1" (replace the name with whatever you called it)

9. (Optional, needed for versions after the 11/15/24 update) If you downloaded a model that was tuned from Qwen and kept "qwen" in the model name, go into the file "prompter.js" and remove the qwen section; if the name you chose doesn't include "qwen", you can skip this step
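
As an illustration of the Modelfile edit mentioned above, here is a minimal sketch; the file path and model filename are hypothetical examples, not the actual files from this repo:

```
# Point FROM at wherever you saved the downloaded GGUF (hypothetical path)
FROM C:\Users\you\Downloads\Andy-v2-qwen.Q4_K_M.gguf
PARAMETER temperature 0.7
```

With a Modelfile like this, `ollama create Hermes1 -f Modelfile` builds the model that step 8 then runs.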

# How to fine-tune a Gemini model

1. Download the CSV for [MindCraft-LLM-tuning](https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning)

2. Open sheets.google.com and upload the CSV file

3. Go to [API keys and Services](https://aistudio.google.com/app/apikey), then click "New Tuned Model" in the left popup bar

4. Press "Import" and select the CSV file you uploaded to Google Sheets

5. Rename the model to whatever you want, and set the training settings: epochs, learning rate, and batch size

6. Change the model to either Gemini-1.0-pro or Gemini-1.5-flash. **NOTE** Gemini 1.0 Pro will be deprecated on February 15, 2025, meaning the model WILL BE deleted!

7. Hit tune and wait.

8. After the model finishes training, hit "Add API access" and select the Google project you'd like to connect it to

9. Copy the model ID and paste it into the Gemini.json file in MindCraft, then name the model whatever you want

10. (Optional) Test the model by pressing "Use in chat" and asking for basic actions, such as "Grapevine_eater: Come here!", and check the output; if it is not to your liking, train the model again with different settings

11. (Optional) Since the rates for Gemini models are limited (if you do not have billing enabled), I recommend making a launch.bat file in the MindCraft folder, so the program restarts itself instead of crashing and needing a manual start every time the rate limit is reached. Here is the code I use in launch.bat:

```
@echo off
setlocal enabledelayedexpansion

:loop
rem Run MindCraft; control returns here when it exits (e.g. on a rate limit)
node main.js
rem Wait 10 seconds before restarting
timeout /t 10 /nobreak

echo Restarting...
goto loop
```

12. Enjoy having a model play Minecraft with you; hopefully it is smarter than the regular Gemini models!
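
For anyone running MindCraft on macOS or Linux, here is a rough POSIX-shell sketch of the same restart loop as launch.bat; it assumes the same `node main.js` entry point, and the function name is my own:

```sh
#!/bin/sh
# Restart the given command each time it exits, pausing between runs,
# mirroring launch.bat's loop. Usage: run_forever "node main.js" 10
run_forever() {
  cmd=$1
  delay=$2
  while true; do
    $cmd
    echo "Restarting in ${delay} seconds..."
    sleep "$delay"
  done
}
```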

#

**WARNING** The new v3 generation of models suck! That is because they were also trained for building *(coding)* and often do not use commands. I recommend still using the v2 generation; the LLaMa version is in the [deprecated models folder](https://huggingface.co/Sweaterdog/MindCraft-LLM-tuning/tree/main/deprecated-models).

#

For anybody wondering about the context length: the Qwen version has a 64,000-token context window, and the Llama version has a 128,000-token context window. *(***NOTE*** Any model can support a longer context, but these are the values supported in training)*
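
A related note for Ollama users: the context window a GGUF actually runs with is set in the Modelfile rather than by these training limits. A sketch of the parameter; the value here is an arbitrary example (larger windows use more memory):

```
PARAMETER num_ctx 32768
```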

#

I wanted to include the Google Colab link in case you want to see how to train models via CSV, or use my dataset to train your own model, with your own settings, on a different base model: [Google Colab](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS#scrollTo=2eSvM9zX_2d3)

#

**UPDATE** The Qwen and Llama models are out, with the expanded dataset! I have found the Llama models to be incredibly dumb, but changing the Modelfile may provide better results. With the Qwen version of Andy (the Q4_K_M quant), crafting a wooden pickaxe took 2 minutes, and collecting stone after that took 5 minutes.

#

This qwen2 and llama3.2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
---
base_model:
- unsloth/Qwen2.5-7B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
datasets:
- Sweaterdog/Andy-v3.5-Beta
---

# Uploaded models

- **Developed by:** Sweaterdog
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-bnb-4bit

The MindCraft LLM tuning CSV file, which can be tweaked as needed, can be found here: [MindCraft-LLM](https://huggingface.co/datasets/Sweaterdog/Andy-v3.5-Beta)

# This is a very, very early-access beta model

This model is NOT a final version; it is a test of how capable models can be with a small dataset. This dataset is also a test of how much smaller models can be improved by extremely high-quality examples that are as close to real-world scenarios as possible.

This small dataset finally allows the model to code and to store history, though of course the crux of this dataset is in the playing part.

The storing-memory parts are real examples from in-game interactions.

The coding examples are artificial and were generated by GPT-o1, with the instruction to include reasoning and thinking in the comments of the code.

The playing examples are artificial and were written by me, a human, using prompts focused on points where some models fail, such as mining.

This model should not be taken as a reflection of how well smaller models can play Minecraft. If it performs well, and better than Andy-v2-qwen, then yay! If not, I wasn't expecting it to be better (and neither should you!)

You are totally allowed to test the beta model.

I hope this model performs well for you!

*BTW, if you want to download this model, I suggest using [this tool](https://huggingface.co/spaces/ggml-org/gguf-my-repo) to make a quantization of it. I would have done it during tuning, but I ran out of GPU time on Google Colab*