Sandhanapandiyan committed on
Commit 5b4888b · verified · 1 parent: 6870c53

Update README.md

Files changed (1):
  1. README.md +39 -4
README.md CHANGED
@@ -17,13 +17,13 @@ tags: []
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
+- **Developed by:** Sandhanapandiyan
+- **Funded by [optional]:**
 - **Shared by [optional]:** [More Information Needed]
 - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** [More Information Needed]
 - **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Finetuned from model [optional]:** microsoft/phi
 
 ### Model Sources [optional]
 
@@ -39,7 +39,42 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 ### Direct Use
 
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+
+# Load the model and tokenizer
+model_path = "/content/drive/MyDrive/sandhanapandiyan/Responce Generator"
+tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.float16)
+
+# Generate a response
+def generate_response(user_query, sql_result, max_tokens=150):
+    prompt = f"User: {user_query}\nSQL Result: {sql_result}\nAssistant:"
+    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+    with torch.no_grad():
+        outputs = model.generate(
+            **inputs,
+            max_new_tokens=max_tokens,
+            do_sample=True,
+            temperature=0.7,
+            top_p=0.9,
+            eos_token_id=tokenizer.eos_token_id,
+        )
+
+    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+    # Extract only the assistant's response
+    return generated_text.split("Assistant:")[-1].strip()
+
+# Example usage
+user_query = "list all the employee"
+sql_result = "Emily Watson"
+response = generate_response(user_query, sql_result)
+
+print("🔍 Generated Response:")
+print(response)
+
 
 [More Information Needed]
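The prompt-building and response-extraction logic from the committed snippet can be sketched without loading the model. This is a minimal standalone illustration: the helper names (`build_prompt`, `extract_response`) and the decoded text below are hypothetical stand-ins for what `tokenizer.decode` would return, not part of the committed code.

```python
# Hypothetical helpers mirroring the snippet's prompt format and its
# split-on-"Assistant:" extraction step; no model is loaded here.
def build_prompt(user_query, sql_result):
    return f"User: {user_query}\nSQL Result: {sql_result}\nAssistant:"

def extract_response(generated_text):
    # Causal LMs echo the prompt, so keep only what follows "Assistant:"
    return generated_text.split("Assistant:")[-1].strip()

prompt = build_prompt("list all the employee", "Emily Watson")
# Stand-in for a decoded model output: the prompt echoed back plus a completion
decoded = prompt + " The employee on record is Emily Watson."
print(extract_response(decoded))  # → The employee on record is Emily Watson.
```

Splitting on the final `"Assistant:"` marker keeps the extraction correct even if the user's query itself happens to contain that string earlier in the prompt.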