# Lemrd

A Gemma 4 31B dense fine-tune by [Lethean Network](https://lthn.ai/lemrd).

EUPL-1.2 · Apache 2.0 base · [lthn.ai/lemrd](https://lthn.ai/lemrd)

## Use

### MLX

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("lthn/lemrd", revision="4bit")
response = generate(model, tokenizer, prompt="Hello", max_tokens=200)
```
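Instruction-tuned Gemma checkpoints respond best when the prompt is routed through the chat template first. A minimal sketch, assuming the tokenizer on this branch ships a standard chat template (the `messages` payload is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("lthn/lemrd", revision="4bit")

# Wrap the raw prompt in the model's chat template before generating.
messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```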

### Ollama

```bash
# Coming soon
```

### HF Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("lthn/lemrd", revision="bf16-hf")
tokenizer = AutoTokenizer.from_pretrained("lthn/lemrd", revision="bf16-hf")
```
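A hedged end-to-end generation sketch, repeating the load step so it runs standalone; it assumes the `bf16-hf` branch carries the usual Gemma chat template, and `max_new_tokens=200` is an arbitrary illustrative value:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("lthn/lemrd", revision="bf16-hf")
tokenizer = AutoTokenizer.from_pretrained("lthn/lemrd", revision="bf16-hf")

messages = [{"role": "user", "content": "Hello"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the tokens generated after the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```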

## Branches

### MLX

| Branch | Size |
|--------|------|
| `bf16` | 57G |
| `8bit` | 30G |
| `6bit` | 23G |
| `5bit` | 20G |
| `4bit` | 16G |
| `mxfp8` | 30G |
| `mxfp4` | 15G |
| `nvfp4` | 16G |
### GGUF

| Branch | Size |
|--------|------|
| `bf16-gguf` | Coming soon |
| `8bit-gguf` | Coming soon |
| `6bit-gguf` | Coming soon |
| `5bit-gguf` | Coming soon |
| `4bit-gguf` | Coming soon |

### HF Transformers

| Branch | Size |
|--------|------|
| `bf16-hf` | Coming soon |
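Each branch is an independent revision of this repo, so you can pull only the variant you need. A minimal sketch with `huggingface_hub`; swap in any branch name from the tables above as the `revision`:

```python
from huggingface_hub import snapshot_download

# Fetch a single quantisation branch (here the 16G MLX 4-bit build)
# into the local Hugging Face cache; returns the local path.
path = snapshot_download(repo_id="lthn/lemrd", revision="4bit")
print(path)
```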
## Base

[google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it)

## More

- [lthn.ai/lemrd](https://lthn.ai/lemrd)
- [Lethean Network](https://lthn.ai)
- [GitHub](https://github.com/dappcore)

## Licence

- Training data and adapter: [EUPL-1.2](https://joinup.ec.europa.eu/collection/eupl/eupl-text-eupl-12)
- Base model: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)