Update README.md
README.md CHANGED

@@ -7,6 +7,8 @@ tags:
 - base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct
 - lora
 - transformers
+- roblox
+- luau
 datasets:
 - darwinkernelpanic/luau_corpus_axolotl
 pipeline_tag: text-generation
@@ -100,18 +102,28 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-
+The model was fine-tuned on the Roblox/luau_corpus dataset, which was converted so that the "prompt" column is replaced by "text" for compatibility reasons.
+It was fine-tuned for improved knowledge of and performance on Luau code (Roblox's Lua dialect, see [luau.org](https://luau.org)), which should improve code quality for Luau and Roblox projects.
 
 ## Intended uses & limitations
 
-
+This model is intended for use within applications that use the Luau programming language, including but not limited to:
+- Roblox projects
+- Standalone Luau projects (e.g. Lune)
+
+It may have limitations for projects that:
+- Use other languages
+- Use Lua (as opposed to Luau)
+- Are not programming related
 
 ## Training and evaluation data
 
-
+N/A
 
 ## Training procedure
 
+Trained on 1x RTX 6000 Ada
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training: