Instructions for using DarwinAnim8or/NoSleepPromptGen with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use DarwinAnim8or/NoSleepPromptGen with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DarwinAnim8or/NoSleepPromptGen")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DarwinAnim8or/NoSleepPromptGen")
model = AutoModelForCausalLM.from_pretrained("DarwinAnim8or/NoSleepPromptGen")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use DarwinAnim8or/NoSleepPromptGen with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DarwinAnim8or/NoSleepPromptGen"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DarwinAnim8or/NoSleepPromptGen",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/DarwinAnim8or/NoSleepPromptGen
```
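Both the vLLM call above and the analogous SGLang call below are plain OpenAI-style completion requests, so they can also be issued from Python's standard library with no extra dependencies. This is a sketch that only builds the request (the URL and payload mirror the curl example); uncomment the last lines to send it against a running server:

```python
import json
import urllib.request

# Same payload as the curl example above
payload = {
    "model": "DarwinAnim8or/NoSleepPromptGen",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running, send the request and print the completion:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

For the SGLang server started below, only the base URL changes (port 30000 instead of 8000).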
- SGLang
How to use DarwinAnim8or/NoSleepPromptGen with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DarwinAnim8or/NoSleepPromptGen" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DarwinAnim8or/NoSleepPromptGen",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DarwinAnim8or/NoSleepPromptGen" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DarwinAnim8or/NoSleepPromptGen",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use DarwinAnim8or/NoSleepPromptGen with Docker Model Runner:
```shell
docker model run hf.co/DarwinAnim8or/NoSleepPromptGen
```
"NoSleep" Writing Prompt Generator
A fine-tuned version of GPT-2 for generating writing prompts for the GPT-NoSleep-355m model.
You can use the Space linked from this model page to try the model, then use the NoSleep model in tandem to generate stories!
Training Procedure
This was trained on the 'reddit-nosleep-posts' dataset, using the Happy Transformer library on Google Colab. The model was trained for X epochs with a learning rate of 1e-2.
Biases & Limitations
This likely contains the same biases and limitations as the original GPT2 that it is based on, and additionally heavy biases from the dataset. It likely will generate offensive output.
Intended Use
This model is meant for fun, nothing else.
Sample Use
```python
from happytransformer import HappyGeneration, GENSettings

# Load the fine-tuned generator
happy_gen = HappyGeneration("GPT2", "DarwinAnim8or/NoSleepPromptGen")

# Top-k sampling settings for short, varied prompts
args_top_k = GENSettings(no_repeat_ngram_size=1, do_sample=True, top_k=80,
                         temperature=0.4, max_length=25, early_stopping=True)

result = happy_gen.generate_text("[WP] \"", args=args_top_k)
print(result.text)
```
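Because the generator emits prompts in the dataset's `[WP] "..."` format, a small cleanup step is handy before passing the text to the companion GPT-NoSleep-355m story model. The helper below is a hypothetical sketch, not part of the model's API:

```python
def clean_prompt(raw: str) -> str:
    """Strip the [WP] tag and surrounding quotes from a generated prompt."""
    text = raw.strip()
    if text.startswith("[WP]"):  # dataset-style writing-prompt tag
        text = text[len("[WP]"):].strip()
    return text.strip('"').strip()

print(clean_prompt('[WP] "I work the night shift at a gas station."'))
# → I work the night shift at a gas station.
```

The cleaned string can then be used directly as the opening seed for the story model.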