REILX committed on
Commit fbe80d5 · verified · 1 Parent(s): ee714db

Update README.md

Files changed (1)
  1. README.md +39 -0
README.md CHANGED
@@ -36,6 +36,45 @@ Total Training Duration: 69h18m17s
 }
 ```
 
+### Sample inference code
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+torch.random.manual_seed(0)
+model_id = "/home/intelisu/models/weight/phi3/Phi-3-medium-128k-instruct/"
+model = AutoModelForCausalLM.from_pretrained(
+    model_id,
+    device_map="cuda",
+    torch_dtype="auto",
+    trust_remote_code=True,
+)
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+messages = [
+    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
+    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
+    {"role": "user", "content": "Write a python code to train llm model by lora and sft?"},
+]
+
+pipe = pipeline(
+    "text-generation",
+    model=model,
+    tokenizer=tokenizer,
+)
+
+generation_args = {
+    "max_new_tokens": 4096,
+    "return_full_text": False,
+    "temperature": 0.0,
+    "do_sample": False,
+}
+
+output = pipe(messages, **generation_args)
+print(output[0]["generated_text"])
+```
+
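As a side note on the `messages` list above: when the pipeline runs, the tokenizer's chat template flattens those role/content dicts into a single prompt string. A minimal, self-contained sketch of that flattening is shown below; the `<|user|>`/`<|assistant|>`/`<|end|>` markers are assumptions based on Phi-3's chat format, and in real code you would call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` rather than this toy helper.

```python
def to_prompt(messages):
    # Toy stand-in for tokenizer.apply_chat_template: the marker tokens
    # below are assumed from Phi-3's chat format, not taken from this repo.
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    # Trailing assistant marker cues the model to generate its reply.
    parts.append("<|assistant|>\n")
    return "".join(parts)

demo = [{"role": "user", "content": "Hello"}]
print(to_prompt(demo))
```

This only illustrates why the pipeline can accept a list of dicts instead of a raw string; the actual template (and its special tokens) lives in the model's tokenizer config.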
 ### Training hyperparameters
 
 The following hyperparameters were used during training: