sumo43 committed · Commit d0f0f98 · verified · 1 Parent(s): eafdbfd

Update README.md

Files changed (1):
  1. README.md (+4 -9)
README.md CHANGED
@@ -22,11 +22,10 @@ datasets:
  * **Dataset**: [`cheapresearch/CheapResearch-DS-33k`](https://huggingface.co/datasets/cheapresearch/CheapResearch-DS-33k)
  * **Primary Use**: Fast, low-cost **DeepResearch** agent runs (browsing, multi-step reasoning, source-grounded answers)

- ### Intended Use
-
- * Browser-based local research assistant (via **Alibaba-NLP/DeepResearch**)
- * Low-latency DR on modest GPUs/CPUs
+ ## Evaluation

+ <img src='hle.png' width='500'>
+ <img src='simpleqa.png' width='500'>

  ## Training Data

@@ -58,10 +57,7 @@ MODEL_PATH=cheapresearch/CheapResearch-4B-Thinking

  * **Single 12–16GB GPU** is enough for 4B FP16; FP8/INT4 quantization allows smaller VRAM. If you quantize, the summary model can be local as well.

- ## Evaluation

- <img src='hle.png' width='500'>
- <img src='simpleqa.png' width='500'>

  ## Acknowledgements

@@ -132,6 +128,5 @@ model-index:
  ---
  ```

- ---

- If you share your **actual model ID** and any concrete eval numbers or training hyperparams, I’ll slot them in and tighten the “Training Procedure” and “Evaluation” sections for you.
+
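
For context on the hardware note kept in the diff above ("Single 12–16GB GPU is enough for 4B FP16; FP8/INT4 quantization allows smaller VRAM"), here is a minimal loading sketch. It assumes the checkpoint exposes the standard `transformers` causal-LM interface; the memory figures and the bitsandbytes 4-bit path are illustrative assumptions, not details from the model card.

```python
# Minimal sketch (not from the model card): load the 4B checkpoint in FP16,
# which takes roughly 8 GB of weights and leaves KV-cache headroom on a
# 12-16 GB GPU. The 4-bit path below is an assumed alternative for smaller cards.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cheapresearch/CheapResearch-4B-Thinking"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16 weights: fits a single 12-16 GB GPU
    device_map="auto",          # requires `accelerate`
)

# Optional: 4-bit quantization via bitsandbytes for GPUs with less VRAM.
# from transformers import BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     model_id,
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#     device_map="auto",
# )
```

Once quantized, the same host can also run the summary model locally, which is what the card's VRAM note suggests.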