Update README.md
README.md CHANGED
@@ -57,14 +57,10 @@ Along with GRaPE Mini, a 7B MoE (Mixture of Experts) model based on OLMoE will
 ## Capabilities of GRaPE Mini
 
 GRaPE was trained to be a coding assistant and to excel in STEM topics. The model *may* falter on historical or other factual information due to its low parameter count.
-A demo of a website it generated for itself can be found [here](GRaPE_Mini_Beta.html).
+A demo of a website it generated for itself can be found [here](https://huggingface.co/Sweaterdog/GRaPE-Mini-Beta/blob/main/GRaPE_Mini_Beta.html).
 
 ***
 
-## 🚀 Benchmarks
-
-**BENCHMARKS COMING SOON**
-
 ## 🧠 Model Philosophy: The Art of the Finetune
 
 While GRaPE Mini is not trained "from-scratch" (i.e., from random weights), it represents an extensive and highly curated instruction-tuning process. A base model possesses linguistic structure but lacks the ability to follow instructions, reason, or converse. The true "creation" of an assistant like GRaPE lies in the meticulous selection, blending, and application of high-quality datasets. This finetuning process is what transforms a raw linguistic engine into a capable and helpful agent.
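For context on what an instruction-tuning pass of this kind looks like in code, here is a minimal sketch using Hugging Face's TRL library. The base checkpoint, dataset, and output directory are illustrative assumptions, not GRaPE Mini's actual training recipe.

```python
# Illustrative only: a minimal supervised finetuning (SFT) sketch with Hugging Face TRL.
# The checkpoint, dataset, and output directory are placeholders, not GRaPE's recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any instruction dataset in the standard chat "messages" format works here.
dataset = load_dataset("HuggingFaceH4/no_robots", split="train")

trainer = SFTTrainer(
    model="allenai/OLMoE-1B-7B-0924",  # assumed stand-in base model, not GRaPE's actual base
    train_dataset=dataset,
    args=SFTConfig(output_dir="grape-mini-sft"),
)
trainer.train()
```

In practice, the "art" the philosophy section describes lives in what goes into `train_dataset` — the selection and blending of curated instruction sources — rather than in the trainer call itself.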