---
language: en
license: mit
tags:
- text-generation
- transformers
- safetensors
base_model: distilbert/distilgpt2
parameters: 81912576
---

# Text2GPT 🤖

Text2GPT is a lightweight text generation model fine-tuned from [DistilGPT2](https://huggingface.co/distilbert/distilgpt2). With 81.9M parameters, it is designed for efficient, coherent text generation; it builds on the Hugging Face `transformers` library and ships Safetensors weights for secure model loading. Ideal for creative writing, text completion, and more! 🚀

---

## Features ✨

- Generates human-like text with minimal input 📝
- Supports Safetensors for safe and efficient loading 🔒
- Fine-tuned for low-resource environments ⚡
- Compatible with Hugging Face `transformers` and vLLM 🚀

## Installation 🛠️

Install the required dependencies:

```bash
pip install transformers torch safetensors
```

## Usage 🎮

### Loading the Model with Transformers

Use the Hugging Face `transformers` library to load the model and generate text:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
model_name = "kulia-moon/Text2GPT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Using Pipeline for Simplicity

For quick text generation:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="kulia-moon/Text2GPT")
print(pipe("My name is Julien and I like to", max_length=30, do_sample=True)[0]["generated_text"])
```

### vLLM Deployment for Scalability

Deploy with vLLM for high-throughput inference. The container's entrypoint starts an OpenAI-compatible server on port 8000, so no separate serve step is needed:

```bash
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 --ipc=host \
  vllm/vllm-openai:latest \
  --model kulia-moon/Text2GPT
```

Once the server is up, query it (the prompt here is illustrative):

```bash
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "kulia-moon/Text2GPT", "prompt": "Once upon a time", "max_tokens": 50}'
```

## Widget Examples 🖱️

Try these prompts on the [model page](https://huggingface.co/kulia-moon/Text2GPT):

- "Once upon a time" ⏳
- "My name is Julien and I like to" 😊
- "Paris is an amazing place to visit," 🗼
- "I like traveling by train because" 🚂

**Example Output**:

- **Input**: "Once upon a time"
- **Output**: "Once upon a time, a curious AI roamed the digital realm, crafting tales of wonder."

## Model Details 📊

- **Architecture**: DistilGPT2-based, 6 layers, 81.9M parameters
- **Base Model**: [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2)
- **Safetensors**: Supported, 81,912,576 parameters (non-sharded, non-quantized)
- **Intended Use**: Text generation, creative writing, dialogue completion
- **Limitations**: May produce biased or repetitive outputs; not optimized for sensitive tasks (a hedged sampling sketch in Generation Tips below shows one way to reduce repetition)

## Evaluation Report 📈

Evaluation metrics are under development. Preliminary tests suggest performance comparable to DistilGPT2 (perplexity ~21.1 on WikiText-103); a sketch of one way to measure this appears in Measuring Perplexity below. Contributions for detailed metrics are welcome via [discussions](https://huggingface.co/kulia-moon/Text2GPT/discussions)! 🙌

## Requirements ⚙️

- Python 3.8+
- `transformers>=4.30.0`
- `torch>=2.0.0`
- `safetensors>=0.4.0`

## License 📜

This model is licensed under the [MIT License](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md).
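## Generation Tips 🔁

The Limitations note above mentions repetitive outputs. As a minimal sketch (the sampling values below are illustrative starting points, not settings tuned or published for this model), standard `transformers` sampling arguments such as `temperature`, `top_p`, `repetition_penalty`, and `no_repeat_ngram_size` can help reduce repetition:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "kulia-moon/Text2GPT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# use_safetensors=True loads the Safetensors weights mentioned in Features
model = AutoModelForCausalLM.from_pretrained(model_name, use_safetensors=True)

inputs = tokenizer("I like traveling by train because", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,          # length of the continuation, excluding the prompt
    do_sample=True,             # sample instead of greedy decoding
    temperature=0.8,            # <1.0 sharpens the distribution slightly
    top_p=0.9,                  # nucleus sampling over the top 90% probability mass
    repetition_penalty=1.2,     # penalize tokens that have already appeared
    no_repeat_ngram_size=3,     # forbid repeating any 3-gram verbatim
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizers define no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

All of these are standard `generate()` arguments in recent `transformers` releases; the specific values are a starting point to experiment with, not a recommendation from the model authors.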
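## Measuring Perplexity (Sketch) 🧪

The Evaluation Report above cites a preliminary WikiText-103 perplexity of ~21.1. The script below is a hedged sketch of one common way to estimate such a number, following the sliding-window recipe from the `transformers` documentation. It assumes the `datasets` package is installed (`pip install datasets`); the `stride` value is illustrative, and the full test split can take a while to process on CPU:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "kulia-moon/Text2GPT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Concatenate the test split into one long token stream
test = load_dataset("wikitext", "wikitext-103-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_length = model.config.n_positions  # 1024 for (Distil)GPT-2
stride = 512                           # overlap between windows; illustrative
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end = 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # tokens actually scored in this window
    input_ids = encodings.input_ids[:, begin:end]
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # exclude context-only tokens from the loss
    with torch.no_grad():
        nlls.append(model(input_ids, labels=target_ids).loss)
    prev_end = end
    if end == seq_len:
        break

print(f"Perplexity: {torch.exp(torch.stack(nlls).mean()):.2f}")
```

Averaging the per-window losses weights windows rather than individual tokens, a common approximation for a quick estimate; exact numbers will depend on the stride and tokenization choices.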
## Community & Support 💬

Join the conversation or seek help at:

- [Hugging Face Discussions](https://huggingface.co/kulia-moon/Text2GPT/discussions)
- [Model Page](https://huggingface.co/kulia-moon/Text2GPT)

Contributions and feedback are welcome! 🌟