See https://github.com/VJyzCELERY/GPT2-StoryGenerator for the model source. To use the model:

```python
import torch
import tiktoken
from src.model import Config, GPT
from src.inference import GPTInfer
from huggingface_hub import hf_hub_download

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Download the checkpoint from the Hub
model_path = hf_hub_download(
    repo_id="VJyzCELERY/GPT2-GutenbergStoryGenerator",
    filename="GPT2-GutenbergStoryGenerator.pt"
)

# The checkpoint stores the model config under 'config' and the weights under 'model'
checkpoint = torch.load(model_path, weights_only=False)
model = GPT(config=checkpoint['config'])
model.load_state_dict(checkpoint['model'])
model = model.to(device)
model.eval()

token_encoder = tiktoken.get_encoding('gpt2')
generator = GPTInfer(model, token_encoder, device)
```
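The snippet above relies on the checkpoint being a plain dict with a `'config'` entry and a `'model'` entry holding the state dict. A minimal, self-contained sketch of that save/load pattern, using a toy `nn.Linear` as a stand-in for the `GPT` class (the toy module and its config keys are illustrative, not the model's real config):

```python
import torch
import torch.nn as nn

# Toy stand-in for the GPT model; the real checkpoint pairs the model's
# Config object with its state_dict in the same two-key layout.
toy = nn.Linear(4, 2)
checkpoint = {"config": {"n_in": 4, "n_out": 2}, "model": toy.state_dict()}
torch.save(checkpoint, "toy_checkpoint.pt")

# weights_only=False is needed because the checkpoint contains a non-tensor
# config object, not just raw weights.
loaded = torch.load("toy_checkpoint.pt", weights_only=False)
restored = nn.Linear(loaded["config"]["n_in"], loaded["config"]["n_out"])
restored.load_state_dict(loaded["model"])
restored.eval()  # switch to inference mode before generating
```

After `load_state_dict`, the restored module's parameters are bit-identical to the saved ones, which is exactly what the real loading code depends on.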