Improve model card: add pipeline tag, library name, and GitHub link; update license

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +15 -7
README.md CHANGED
@@ -1,16 +1,24 @@
 ---
-license: other
-license_name: '-'
-license_link: https://ai.meta.com/llama/license
+license: apache-2.0
+library_name: transformers
+pipeline_tag: question-answering
 ---
 
+# MedCEG: Reinforcing Verifiable Medical Reasoning with Critical Evidence Graph
+
+This repository contains the MedCEG model, presented in the paper [MedCEG: Reinforcing Verifiable Medical Reasoning with Critical Evidence Graph](https://huggingface.co/papers/2512.13510).
+
+**MedCEG** is a framework that augments medical language models with clinically valid reasoning pathways. It explicitly supervises the reasoning process through a **Critical Evidence Graph (CEG)**, ensuring verifiable and logical medical deductions.
+
+For code and more details, see the [GitHub repository](https://github.com/LinjieMu/MedCEG).
+
 This code demonstrates how to generate responses using MedCEG.
 ```python
 import transformers
 import torch
 
 # 1. Load Model & Tokenizer
-model_id = "XXX/MedCEG"
+model_id = "LinjieMu/MedCEG"
 tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
 model = transformers.AutoModelForCausalLM.from_pretrained(
     model_id,
@@ -20,13 +28,13 @@ model = transformers.AutoModelForCausalLM.from_pretrained(
 
 # 2. Define Input
 question = "A 78-year-old Caucasian woman presented with..."
-suffix = "\nPut your final answer in \\boxed{}."
+suffix = "\nPut your final answer in \\boxed{}."
 messages = [{"role": "user", "content": question + suffix}]
 
 # 3. Generate
 input_ids = tokenizer.apply_chat_template(
-    messages,
-    add_generation_prompt=True,
+    messages,
+    add_generation_prompt=True,
     return_tensors="pt"
 ).to(model.device)
 
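The README's snippet stops after building `input_ids`, and its prompt suffix tells MedCEG to place the final answer in `\boxed{}`. A small helper can recover that answer from the decoded generation. This is a sketch, not code from the MedCEG repository; the function name and the sample output string below are hypothetical.

```python
import re

def extract_boxed_answer(text):
    """Return the contents of the last \\boxed{...} in a generation, or None.

    Sketch helper (not part of the MedCEG repository): the prompt suffix in
    the README asks the model to wrap its final answer in \\boxed{}.
    """
    matches = re.findall(r"\\boxed\{([^}]*)\}", text)
    # Take the last occurrence, in case intermediate reasoning also uses one.
    return matches[-1] if matches else None

# Hypothetical decoded output, standing in for tokenizer.decode(...) of a real run.
sample_output = "Step-by-step reasoning about the case... \\boxed{Giant cell arteritis}"
print(extract_boxed_answer(sample_output))  # prints: Giant cell arteritis
```

In practice the text passed to the helper would come from decoding `model.generate(...)` output for the `input_ids` built above.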
41