* **Dataset**: [`cheapresearch/CheapResearch-DS-33k`](https://huggingface.co/datasets/cheapresearch/CheapResearch-DS-33k)
* **Primary Use**: Fast, low-cost **DeepResearch** agent runs (browsing, multi-step reasoning, source-grounded answers)

### Intended Use

* Browser-based local research assistant (via **Alibaba-NLP/DeepResearch**)

* **Primary dataset**: [`cheapresearch/CheapResearch-DS-33k`](https://huggingface.co/datasets/cheapresearch/CheapResearch-DS-33k)

## Inference with Alibaba-NLP/DeepResearch (Recommended)

This model is intended to be used **directly** with the DeepResearch repo.
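Pointing a DeepResearch run at this model is a matter of configuration. A minimal sketch, assuming you are inside a clone of the repo; the `MODEL_PATH` value is from this card, and any other variables the runner expects are documented in the repo itself:

```shell
# Configuration fragment for a DeepResearch run (inside a clone of
# https://github.com/Alibaba-NLP/DeepResearch).
# MODEL_PATH is taken from this card; the full set of variables the
# inference script reads is defined in the repo's own README.
export MODEL_PATH=cheapresearch/CheapResearch-4B-Thinking
```

The Hub id can be swapped for a local directory if you have downloaded or quantized the weights yourself.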
* **Single 12–16GB GPU** is enough for the 4B model in FP16; FP8/INT4 quantization fits in less VRAM. If you quantize, the summary model can run locally as well.
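The 12–16GB figure is easy to sanity-check with back-of-the-envelope arithmetic on raw weight storage (a sketch; `PARAMS` approximates the 4B parameter count, and activations plus KV cache add real overhead on top of the weights):

```python
# Rough weight-memory estimate for a ~4B-parameter model.
# Actual usage is higher: activations, KV cache, and framework
# overhead come on top of the raw weights.
PARAMS = 4e9  # approximate parameter count

def weight_gb(bits_per_param: float) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"FP16: {weight_gb(16):.1f} GB")  # 8.0 GB, fits a 12-16 GB card
print(f"FP8:  {weight_gb(8):.1f} GB")   # 4.0 GB
print(f"INT4: {weight_gb(4):.1f} GB")   # 2.0 GB
```

At FP16 the weights alone take about 8 GB, which is why a single 12–16GB card suffices once the remaining headroom absorbs activations and KV cache.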

## Evaluation

<img src='hle.png' width='500'>
<img src='simpleqa.png' width='500'>

## Acknowledgements
## Changelog

* **v1.0.0 (2025-10-04)** — First public release (33k distillation, DeepResearch-ready)

### Model Card Metadata (Hugging Face)