Training dataset: Salesforce/wikitext
How to use prithivMLmods/Gpt2-Wikitext-9180 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="prithivMLmods/Gpt2-Wikitext-9180")
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Gpt2-Wikitext-9180")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Gpt2-Wikitext-9180")
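With either interface loaded, generation is a single call. A minimal usage sketch (the prompt and sampling settings below are illustrative, not taken from the model card):
# Generate with the pipeline
print(pipe("Once upon a time,", max_new_tokens=50, do_sample=True)[0]["generated_text"])
# Or generate with the directly loaded model and tokenizer
inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))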
How to use prithivMLmods/Gpt2-Wikitext-9180 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "prithivMLmods/Gpt2-Wikitext-9180"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "prithivMLmods/Gpt2-Wikitext-9180",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
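Because vLLM exposes an OpenAI-compatible API, the same endpoint can also be called from Python. A minimal sketch, assuming the official openai client is installed and the server above is running on localhost:8000 (the api_key value is a placeholder; vLLM only checks it if the server was started with an API key):
# Query the vLLM server via the OpenAI-compatible completions API
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key
completion = client.completions.create(
    model="prithivMLmods/Gpt2-Wikitext-9180",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)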
How to use prithivMLmods/Gpt2-Wikitext-9180 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "prithivMLmods/Gpt2-Wikitext-9180" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "prithivMLmods/Gpt2-Wikitext-9180",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or launch the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "prithivMLmods/Gpt2-Wikitext-9180" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "prithivMLmods/Gpt2-Wikitext-9180",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
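The SGLang endpoint accepts the same OpenAI-style completion payload from any HTTP client. A minimal sketch using Python's requests library, assuming the server above is listening on localhost:30000:
# Send a completion request to the SGLang server over plain HTTP
import requests
payload = {
    "model": "prithivMLmods/Gpt2-Wikitext-9180",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
response = requests.post("http://localhost:30000/v1/completions", json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["text"])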
How to use prithivMLmods/Gpt2-Wikitext-9180 with Docker Model Runner:
docker model run hf.co/prithivMLmods/Gpt2-Wikitext-9180
Gpt2-Wikitext-9180 is a Transformer-based language model fine-tuned from GPT-2 on a large English corpus (WikiText) using self-supervised learning. It was trained on raw, unlabeled text: inputs and labels are derived automatically from the text itself, with the model learning to predict the next word in a sequence. No manual annotation was involved, which allows the model to leverage a vast amount of publicly available data.
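In practice, the next-word objective simply treats the token sequence itself as the labels; the model shifts them by one position internally so that each position learns to predict the following token. A minimal sketch of this causal language-modeling loss with the Hugging Face API (the example sentence is arbitrary):
# Illustration of the self-supervised next-token objective
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Gpt2-Wikitext-9180")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Gpt2-Wikitext-9180")
batch = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
# Passing the input ids as labels makes the model compute the next-token
# cross-entropy; the one-position shift happens inside the forward pass.
outputs = model(**batch, labels=batch["input_ids"])
print(outputs.loss)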
pip install transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load the fine-tuned GPT-2 model and tokenizer
model_name = "prithivMLmods/Gpt2-Wikitext-9180"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
# Set the model to evaluation mode
model.eval()
def generate_text(prompt, max_length=100, temperature=0.8, top_k=50):
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(
        input_ids,
        max_length=max_length,
        temperature=temperature,
        top_k=top_k,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True
    )
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    return generated_text
# Example prompt
prompt = "Once upon a time"
generated_text = generate_text(prompt, max_length=68)
# Print the generated text
print(generated_text)
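Sampling-based generation is nondeterministic, so repeated calls return different text. If reproducible output is needed, the random seed can be fixed before generating; a small sketch using the transformers helper (the seed value is arbitrary):
# Fix the random seed so repeated runs produce the same sample
from transformers import set_seed
set_seed(42)
print(generate_text("Once upon a time", max_length=68))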