Text Generation · Transformers · Safetensors · English · gpt2 · text-generation-inference
Gerson Fabian Buenahora Ormaza committed (verified) · Commit 0e0be5a · 1 Parent(s): e6695af

Update README.md

Files changed (1)
  1. README.md +7 -8
README.md CHANGED
@@ -49,19 +49,18 @@ To use the model:
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-model_name = "BueormLLC/RAGPT-2"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(model_name)
+tokenizer = AutoTokenizer.from_pretrained("BueormLLC/RAGPT")
+model = AutoModelForCausalLM.from_pretrained("BueormLLC/RAGPT")
 
-# Prepare input
-context = "Your context here"
-question = "Your question here"
-input_text = f"Contexto: {context}\nPregunta: {question}\nRespuesta:"
+context = "Mount Everest is the highest mountain in the world, with a height of 8,848 meters."
+question = "What is the height of Mount Everest?"
+input_text = f"Context: {context}\nquestion: {question}\nanswer:"
 
-# Generate answer
 input_ids = tokenizer.encode(input_text, return_tensors="pt")
 output = model.generate(input_ids, max_length=150, num_return_sequences=1)
 answer = tokenizer.decode(output[0], skip_special_tokens=True)
+
+print(f"Respuesta generada: {answer}")
 ```
 
 ## Limitations
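
The commit replaces the placeholder prompt with a concrete `Context:` / `question:` / `answer:` template. As a minimal sketch of that template (the `build_prompt` helper name is my own, not from the README; running the full snippet also requires downloading the `BueormLLC/RAGPT` checkpoint):

```python
def build_prompt(context: str, question: str) -> str:
    # Mirrors the f-string the updated README uses to format RAG-style inputs.
    return f"Context: {context}\nquestion: {question}\nanswer:"

prompt = build_prompt(
    "Mount Everest is the highest mountain in the world, with a height of 8,848 meters.",
    "What is the height of Mount Everest?",
)
print(prompt)
```

The model then completes the text after `answer:`, so the decoded output contains the prompt followed by the generated answer.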