## Overview

LIMI is an agentic model fine‑tuned from [GLM‑4.5](https://huggingface.co/zai-org/GLM-4.5) (355B) using compact, high‑quality data to emphasize:

- Targeted capabilities: tool use, multi‑turn correction, spec compliance
- Long‑context trajectories with tokenizer‑filtered samples (≤128k tokens)
- OpenAI‑style `messages` with optional function/tool calls

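A request in this format looks like the following minimal sketch. Field names follow the OpenAI chat convention; the `get_weather` tool and its schema are illustrative examples, not part of this model card:

```python
import json

# Illustrative OpenAI-style chat request with an optional tool definition.
# The tool name and schema ("get_weather") are made-up examples.
request = {
    "messages": [
        {"role": "system", "content": "You are a helpful agent."},
        {"role": "user", "content": "What's the weather in Berlin?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request, indent=2))
```

When the model decides to call a tool, the serving layer returns a tool-call message that your code executes before appending the result as a `tool`-role turn.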
## Model Details

- Base model: `zai-org/GLM-4.5`
- Context: up to 128k tokens (training budget)
- Training framework: slime
- Training data: curated conversations from [GAIR/LIMI](https://huggingface.co/datasets/GAIR/LIMI)

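The 128k training budget implies a tokenizer-based length filter at data-preparation time. A minimal sketch of that filtering step, with the tokenizer stubbed out by a whitespace split for illustration (a real pipeline would use the GLM‑4.5 tokenizer):

```python
# Sketch of length filtering against a 128k-token training budget.
# `tokenize` is a placeholder stand-in, not the actual GLM-4.5 tokenizer.
MAX_TOKENS = 128_000

def tokenize(text: str) -> list[str]:
    return text.split()  # placeholder for a real subword tokenizer

def within_budget(conversation: list[dict], max_tokens: int = MAX_TOKENS) -> bool:
    """Return True if the whole conversation fits the token budget."""
    total = sum(len(tokenize(turn["content"])) for turn in conversation)
    return total <= max_tokens

samples = [
    [{"role": "user", "content": "short prompt"}],
    [{"role": "user", "content": "word " * 200_000}],  # over budget
]
kept = [s for s in samples if within_budget(s)]
print(len(kept))  # only the short sample survives the filter
```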
## Key Results
```bibtex
@article{LIMI2025,
  title   = {Less is More for Agency},
  author  = {LIMI Authors},
  year    = {2025},
  journal = {arXiv preprint arXiv:2502.03387}
}
```