amithsourya committed
Commit 2b6bb51 · verified · 1 Parent(s): ff13b83

Update README.md

Files changed (1):
  1. README.md +25 -1
README.md CHANGED
@@ -26,8 +26,32 @@ Generate 4GL Scripts from english prompts
  <!-- Provide the basic links for the model. -->
 
  - **Repository:** https://huggingface.co/amithsourya/Script-Generate-4GL-V1.0/blob/main/adapter_model.safetensors
- - **Demo:** https://colab.research.google.com/drive/1wdveoznPyiKrb-kZ4gTRRMpq82Qih8Z_?usp=sharing#scrollTo=N40paBum-cc7
+ - **Demo:**
+ ```python
+ from huggingface_hub import notebook_login
+ notebook_login()
 
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+ from peft import PeftModel, PeftConfig
+
+ # Resolve the base model from the adapter's config, then apply the LoRA adapter
+ lora_path = "amithsourya/Script-Generate-4GL-V1.0"
+ peft_config = PeftConfig.from_pretrained(lora_path)
+
+ base_model = AutoModelForCausalLM.from_pretrained(
+     peft_config.base_model_name_or_path,
+     device_map="auto",
+     torch_dtype="auto"
+ )
+
+ model = PeftModel.from_pretrained(base_model, lora_path)
+ tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
+
+ # The model is already dispatched by device_map, so the pipeline reuses it as-is
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+ prompt = "invoke a BO for read"
+ outputs = pipe(prompt, max_new_tokens=256)
+
+ print(outputs[0]["generated_text"])
+ ```
 
  ## Environmental Impact
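One note on the snippet added by this commit: a `text-generation` pipeline returns the prompt echoed back followed by the completion, so `print(outputs[0]["generated_text"])` includes the English prompt itself. As a minimal sketch (the helper name and the mocked output below are illustrative, not part of the commit), the generated script can be isolated like this:

```python
def extract_completion(prompt: str, generated_text: str) -> str:
    """Strip the echoed prompt from a text-generation pipeline output.

    By default, transformers text-generation pipelines return the prompt
    followed by the completion; this keeps only the completion.
    """
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text

# Mocked pipeline output, shaped like the snippet's `outputs` (illustrative only)
outputs = [{"generated_text": "invoke a BO for read\n<generated 4GL script>"}]
script = extract_completion("invoke a BO for read", outputs[0]["generated_text"])
print(script)
```

Alternatively, passing `return_full_text=False` to the pipeline call makes it return only the completion.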