Add library_name and pipeline_tag metadata

#1 by nielsr - opened
Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -1,7 +1,9 @@
 ---
-license: apache-2.0
 base_model:
 - Qwen/Qwen3-32B
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
 
 [![GitHub](https://img.shields.io/badge/GitHub-Astra-blue?logo=github)](https://github.com/LianjiaTech/astra)
@@ -14,6 +16,8 @@ base_model:
 
 The **ASTRA-32B-Thinking-v1** model is derived from [**Qwen3-32B**](https://huggingface.co/Qwen/Qwen3-32B) and specifically optimized for multi-step, tool-augmented tasks, with enhanced agentic capabilities in complex tool use and structured reasoning.
 
+The model was introduced in the paper [ASTRA: Automated Synthesis of agentic Trajectories and Reinforcement Arenas](https://huggingface.co/papers/2601.21558).
+
 We also provide a **14B** variant [**ASTRA-14B-Thinking-v1**](https://huggingface.co/Emperorizzis/ASTRAL-14B-Thinking-v1).
 
 
@@ -69,4 +73,4 @@ We then conduct **multi-turn**, **tool-integrated** Reinforcement Learning with
 }
 ```
 
-> **Note**: Although the model was trained with bf16 precision, verl saves checkpoints in float32 by default, and we did not change this setting.
+> **Note**: Although the model was trained with bf16 precision, verl saves checkpoints in float32 by default, and we did not change this setting.
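For reference, applying this change in full leaves the README front matter as the following YAML block (reconstructed from the hunks in this PR):

```yaml
---
base_model:
- Qwen/Qwen3-32B
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
```

`library_name` tells the Hub which library's loading snippet to render on the model page, and `pipeline_tag` controls which task the model is indexed under in search and widgets.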