Initial upload of fine‑tuned Gemma + custom tokenizer
README.md
CHANGED
@@ -4,7 +4,9 @@ The following is a model trained by [...suspense...] that is meant to:
 - be a really good, approximately Bayesian in-context learner;
 - fit a data generation process
 - be calibrated over distributions of possible outputs w.r.t. a population or epistemic uncertainty
-It is initialized from
+It is initialized from `google/gemma-3-12b-pt`.
+
+This model/repo is a work in progress - expect updates.
 
 Loading model example:
 ```
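The loading example itself is cut off in this hunk, and the fine-tuned repo's id is not shown. A minimal sketch of what such an example typically looks like with the `transformers` auto classes, using a placeholder repo id (the actual id, and the exact model class the checkpoint needs, are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def load(repo_id: str):
    """Load a fine-tuned checkpoint and its custom tokenizer.

    `repo_id` is a placeholder for the repo this commit uploads to;
    the diff does not name it. AutoModelForCausalLM is a common
    choice for text-only Gemma checkpoints, but the multimodal
    Gemma 3 variants may require a different auto class.
    """
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
    return tokenizer, model
```

Calling `load("<your-repo-id>")` downloads both the weights and the custom tokenizer files uploaded in this commit.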