Rahulwale12 committed on
Commit 24a1705 · verified · 1 Parent(s): ea63f75

Update README.md

Files changed (1)
  1. README.md +23 -20
README.md CHANGED
@@ -38,25 +38,6 @@ Guided message tasks for natural turn-taking
 
 (Exact dataset details are kept abstract to maintain clarity while ensuring transparency.)
 
- ## 🚀 How to Use GanLLM
-
- You can use GanLLM easily with the Transformers library:
-
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
-
- # Load tokenizer and model
- tokenizer = AutoTokenizer.from_pretrained("Rahulwale12/ganllm")
- model = AutoModelForCausalLM.from_pretrained("Rahulwale12/ganllm", device_map="auto")
-
- # Create text-generation pipeline
- generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
-
- # Example prompt
- prompt = "### Instruction:\nPersona: I live in Delhi and love cricket.\nDialogue so far: Do you follow IPL?\n\n### Response:\n"
-
- output = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.7, top_p=0.9)
- print(output[0]["generated_text"])
- ```
 
 ## ⚡ Intended Use
 
@@ -84,4 +65,26 @@ This model is released for research and personal use only. For any commercial ap
 
 ## 🙌 Acknowledgements
 
- Developed and maintained by Rahul Wale – AI Developer.
+ Developed and maintained by Rahul Wale – AI Developer.
+
+ ## 🚀 How to Use GanLLM
+
+ You can use GanLLM easily with the Transformers library:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+
+ # Load tokenizer and model
+ tokenizer = AutoTokenizer.from_pretrained("Rahulwale12/ganllm")
+ model = AutoModelForCausalLM.from_pretrained("Rahulwale12/ganllm", device_map="auto")
+
+ # Create text-generation pipeline (device placement is already handled by device_map="auto",
+ # so no explicit device argument is passed)
+ generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
+
+ # Example prompt
+ prompt = "### Instruction:\nPersona: I live in Delhi and love cricket.\nDialogue so far: Do you follow IPL?\n\n### Response:\n"
+
+ output = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.7, top_p=0.9)
+ print(output[0]["generated_text"])
+ ```
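The "### Instruction: / ### Response:" prompt format shown in the example can be assembled and the model's reply separated from the echoed prompt with a small helper. This is a minimal sketch: the helper names (`build_prompt`, `extract_response`) are illustrative and not part of the model card, which only documents the raw prompt string.

```python
def build_prompt(persona: str, dialogue: str) -> str:
    # Assemble the instruction-style template the README uses for GanLLM
    return (
        "### Instruction:\n"
        f"Persona: {persona}\n"
        f"Dialogue so far: {dialogue}\n\n"
        "### Response:\n"
    )


def extract_response(generated_text: str, prompt: str) -> str:
    # text-generation pipelines return the prompt followed by the completion;
    # strip the prompt so only the model's reply remains
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].strip()
    return generated_text.strip()


prompt = build_prompt("I live in Delhi and love cricket.", "Do you follow IPL?")
print(prompt)
```

The output of `build_prompt` here is byte-for-byte the same string the README hard-codes, so the helper can replace the literal without changing model behavior.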