nielsr (HF Staff) committed
Commit 8c68903 · verified · 1 Parent(s): 400a949

Update pipeline tag, add library name and GitHub link, fix example import


This PR improves the model card for the Ariadne model by:
- Correcting the `pipeline_tag` from `reinforcement-learning` to `image-text-to-text`, accurately reflecting the model's multimodal input and text generation capabilities.
- Adding `library_name: transformers` to the metadata, which enables the automated "how to use" widget on the Hugging Face Hub, as the model demonstrates compatibility with the `transformers` library.
- Including a direct link to the GitHub repository (`https://github.com/Minghe-Shen/Ariadne`) in the model card for easier access to the associated code and project details.
- Adding `import torch` to the sample usage code snippet, which is necessary for the snippet to execute correctly due to its use of `torch_dtype` and `torch.cuda.is_available()`.
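
The need for the `import torch` fix can be illustrated with a minimal sketch of the dtype/device selection that snippets of this kind perform (the exact dtype choices below are assumptions for illustration, since the card's snippet appears only partially in this diff):

```python
import torch  # without this line, the torch references below raise NameError


def pick_dtype_and_device():
    """Mirror the common pattern: bfloat16 on GPU, float32 on CPU."""
    if torch.cuda.is_available():
        return torch.bfloat16, "cuda"
    return torch.float32, "cpu"


dtype, device = pick_dtype_and_device()
print(dtype, device)
```

The selected dtype would then typically be passed as `torch_dtype=dtype` to `from_pretrained`; both `torch.bfloat16` and `torch.cuda.is_available()` resolve only after `import torch`, which is why the snippet fails without it.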

Files changed (1)
  1. README.md +10 -6
README.md CHANGED

@@ -1,10 +1,11 @@
 ---
-license: mit
-language:
-- en
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
-pipeline_tag: reinforcement-learning
+language:
+- en
+license: mit
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
 # 🧠 Ariadne
@@ -12,10 +13,12 @@ pipeline_tag: reinforcement-learning
 This is the official model checkpoint for the paper:
 **[Ariadne: A Controllable Framework for Probing and Extending VLM Reasoning Boundaries](https://arxiv.org/abs/2511.00710)**
 
+Code: https://github.com/Minghe-Shen/Ariadne
+
 ### 🔬 Example
 
 ```python
-
+import torch  # Added for torch.bfloat16 and torch.cuda.is_available()
 from transformers import AutoModelForImageTextToText, AutoProcessor
 
 MODEL_ID = "..."  # path
@@ -62,4 +65,5 @@ input_len = inputs["input_ids"].shape[1]
 gen_ids = sequences[0, input_len:]
 resp_text = processor.tokenizer.decode(
     gen_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
-).strip()
+).strip()
+```