Improve model card and add metadata

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +45 -0
README.md CHANGED
@@ -1,5 +1,50 @@
+ ---
+ pipeline_tag: text-generation
+ library_name: transformers
+ base_model: Qwen/Qwen2.5-Coder-32B-Instruct
+ tags:
+ - agent
+ - terminal
+ - code
+ ---
+
+ # TerminalTraj-32B
+
  This is the 32B model for the paper [Large-Scale Terminal Agentic Trajectory Generation from Dockerized Environments](https://huggingface.co/papers/2602.01244).
 
+ **TerminalTraj** is a scalable pipeline that (i) filters high-quality repositories to construct Dockerized execution environments, (ii) generates Docker-aligned task instances, and (iii) synthesizes agent trajectories with executable validation code.
+
+ TerminalTraj-32B is trained on 50,733 verified terminal trajectories using the Qwen2.5-Coder backbone. It achieves state-of-the-art performance among models under 100B parameters on both **TerminalBench (TB) 1.0** and **TB 2.0**, approaching the performance of much larger models.
+
+ ## Resources
+ - **Paper:** [Large-Scale Terminal Agentic Trajectory Generation from Dockerized Environments](https://huggingface.co/papers/2602.01244)
+ - **Repository:** [multimodal-art-projection/TerminalTraj](https://github.com/multimodal-art-projection/TerminalTraj)
+ - **Dataset:** [m-a-p/TerminalTraj](https://huggingface.co/datasets/m-a-p/TerminalTraj)
+
+ ## Sample Usage
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "m-a-p/TerminalTraj-32B"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto"
+ )
+
+ # Example: ask the model to produce a script for a terminal task
+ prompt = "Write a Python script to check the current system load."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ outputs = model.generate(**inputs, max_new_tokens=200)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
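The snippet above passes a raw prompt string, but instruction-tuned Qwen2.5 checkpoints are trained on ChatML-style conversations, which `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)` renders automatically. A minimal sketch of what that format looks like (the `to_chatml` helper is illustrative only; in practice prefer the tokenizer's own chat template):

```python
def to_chatml(messages):
    """Render a message list in ChatML, the conversation format used by Qwen chat models."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open an assistant turn so generation continues from here
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful terminal agent."},
    {"role": "user", "content": "Write a Python script to check the current system load."},
]
prompt = to_chatml(messages)
```

The resulting `prompt` string can then be tokenized and passed to `model.generate` as in the snippet above.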
+
  ## Citation
 
  **BibTeX:**