Add `library_name` and sample usage to model card

#1
by nielsr (HF Staff) · opened
Files changed (1)
  1. README.md (+47 -2)
README.md CHANGED
@@ -1,8 +1,9 @@
 ---
-license: apache-2.0
 language:
 - en
+license: apache-2.0
 pipeline_tag: text-generation
+library_name: transformers
 tags:
 - agent
 - communication
@@ -15,4 +16,48 @@ Cache-to-Cache (C2C) enables Large Language Models to communicate directly throu
 
 Please visit our [GitHub repo](https://github.com/thu-nics/C2C) for more information.
 
-Project page: [https://fuvty.github.io/C2C_Project_Page/](https://fuvty.github.io/C2C_Project_Page/)
+Project page: [https://fuvty.github.io/C2C_Project_Page/](https://fuvty.github.io/C2C_Project_Page/)
+
+## Sample Usage
+
+Minimal example to load published C2C weights from the Hugging Face collection and run the provided inference script:
+
+```python
+import torch
+from huggingface_hub import snapshot_download
+from script.playground.inference_example import load_rosetta_model, run_inference_example
+from transformers import AutoTokenizer  # Added for clarity as tokenizer.apply_chat_template is used
+
+checkpoint_dir = snapshot_download(
+    repo_id="nics-efc/C2C_Fuser",
+    allow_patterns=["qwen3_0.6b+qwen2.5_0.5b_Fuser/*"],
+)
+
+model_config = {
+    "rosetta_config": {
+        "base_model": "Qwen/Qwen3-0.6B",
+        "teacher_model": "Qwen/Qwen2.5-0.5B-Instruct",
+        "checkpoints_dir": f"{checkpoint_dir}/qwen3_0.6b+qwen2.5_0.5b_Fuser/final",
+    }
+}
+
+rosetta_model, tokenizer = load_rosetta_model(model_config, eval_config={}, device=torch.device("cuda"))
+device = rosetta_model.device
+
+prompt = [{"role": "user", "content": "Say hello in one short sentence."}]
+input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True, enable_thinking=False)
+inputs = tokenizer(input_text, return_tensors="pt").to(device)
+
+instruction_index = torch.tensor([1, 0], dtype=torch.long).repeat(inputs['input_ids'].shape[1] - 1, 1).unsqueeze(0).to(device)
+label_index = torch.tensor([-1, 0], dtype=torch.long).repeat(1, 1).unsqueeze(0).to(device)
+kv_cache_index = [instruction_index, label_index]
+
+with torch.no_grad():
+    sampling_params = {
+        'do_sample': False,
+        'max_new_tokens': 256
+    }
+    outputs = rosetta_model.generate(**inputs, kv_cache_index=kv_cache_index, **sampling_params)
+    output_text = tokenizer.decode(outputs[0, instruction_index.shape[1] + 1:], skip_special_tokens=True)
+print(f"C2C output text: {output_text}")
+```
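
A note for reviewers: the `kv_cache_index` construction in the added sample is dense. The standalone sketch below (plain PyTorch, no C2C dependencies, hypothetical prompt length) only illustrates the tensor shapes those two lines produce; the interpretation of the `[1, 0]` and `[-1, 0]` pairs as per-position routing entries is an assumption based on the C2C repo, not a documented API.

```python
import torch

seq_len = 12  # stands in for inputs['input_ids'].shape[1]

# One [1, 0] pair for every prompt position except the last -> shape (1, seq_len - 1, 2)
instruction_index = torch.tensor([1, 0], dtype=torch.long).repeat(seq_len - 1, 1).unsqueeze(0)

# A single [-1, 0] pair for the final position -> shape (1, 1, 2)
label_index = torch.tensor([-1, 0], dtype=torch.long).repeat(1, 1).unsqueeze(0)

print(tuple(instruction_index.shape))  # (1, 11, 2)
print(tuple(label_index.shape))        # (1, 1, 2)
```

Note that the decode step in the sample then skips `instruction_index.shape[1] + 1` tokens, i.e. exactly the prompt length, so only newly generated tokens are printed.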