Commit d77c649 by HaoyuHuang2 · verified · 1 parent: 03196d2

Update README.md

Files changed (1): README.md (+32 −32)
README.md CHANGED
@@ -1,33 +1,33 @@
  ---
  license: mit
  ---
 
  # GraphJudger
 
- This is the LoRA weights for the paper [Can LLMs be Good Graph Judger for Knowledge Graph Construction?](https://arxiv.org/abs/2411.17388).
+ This is the LoRA weights for the paper [Can LLMs be Good Graph Judge for Knowledge Graph Construction?](https://arxiv.org/abs/2411.17388).
 
  ## Example
 
  Loading:
 
  ```python
  BASE_MODEL = "models/llama-2-7b-hf"
  LORA_WEIGHTS = "models/llama2-7b-lora-genwiki-context/"
 
  model = LlamaForCausalLM.from_pretrained(
      BASE_MODEL,
      load_in_8bit=False
  ).half().cuda()
  pipeline = transformers.pipeline(
      "text-generation",
      model=model,
      tokenizer=tokenizer,
      torch_dtype=torch.float16,
      device=device
  )
  pipeline.model = PeftModel.from_pretrained(
      model,
      LORA_WEIGHTS,
      torch_dtype=torch.float16,
  ).half().cuda()
  ```
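
The README's snippet leaves `tokenizer` and `device` undefined and omits its imports. A self-contained sketch of the loading step, assuming the repository paths given in the README, a single CUDA device, and the standard `transformers`/`peft` APIs (the `load_graphjudger` wrapper name is hypothetical, not part of the repository):

```python
def load_graphjudger(base_model: str = "models/llama-2-7b-hf",
                     lora_weights: str = "models/llama2-7b-lora-genwiki-context/"):
    """Load the base Llama model, apply the GraphJudger LoRA adapter,
    and wrap the result in a text-generation pipeline.

    Assumes `torch`, `transformers`, and `peft` are installed and a
    CUDA device is available; paths default to those in the README.
    """
    import torch
    import transformers
    from transformers import AutoTokenizer, LlamaForCausalLM
    from peft import PeftModel

    # Tokenizer was referenced but never created in the README snippet.
    tokenizer = AutoTokenizer.from_pretrained(base_model)

    # Load the fp16 base model on the GPU, then merge in the LoRA adapter.
    model = LlamaForCausalLM.from_pretrained(base_model).half().cuda()
    model = PeftModel.from_pretrained(model, lora_weights,
                                      torch_dtype=torch.float16)

    # device=0 pins the pipeline to the first CUDA device.
    return transformers.pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        torch_dtype=torch.float16,
        device=0,
    )
```

Calling `load_graphjudger()` returns a pipeline that can be invoked directly, e.g. `pipe("Is this triple correct? ...")`, though the prompt format expected by the adapter is defined by the paper's code, not shown here.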