Instructions for using archit11/GPT2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use archit11/GPT2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="archit11/GPT2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("archit11/GPT2")
model = AutoModelForCausalLM.from_pretrained("archit11/GPT2")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use archit11/GPT2 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "archit11/GPT2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "archit11/GPT2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/archit11/GPT2
```
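The curl call to the vLLM server can also be issued from Python. A minimal sketch using only the standard library; it assumes a vLLM server is already running on localhost:8000, and mirrors the payload of the curl example (the helper names here are hypothetical, not part of vLLM):

```python
import json
import urllib.request

VLLM_URL = "http://localhost:8000/v1/completions"  # default `vllm serve` port

def build_completion_request(prompt, model="archit11/GPT2",
                             max_tokens=512, temperature=0.5):
    """Build the OpenAI-compatible /v1/completions payload as UTF-8 JSON bytes."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(payload).encode("utf-8")

def complete(prompt):
    """POST the payload to a running vLLM server and return the completion text."""
    req = urllib.request.Request(
        VLLM_URL,
        data=build_completion_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

# With the server running:
#   print(complete("Once upon a time,"))
```

Because the server exposes an OpenAI-compatible API, the official `openai` client can also be pointed at the same URL instead of hand-rolling requests.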
- SGLang
How to use archit11/GPT2 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "archit11/GPT2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "archit11/GPT2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "archit11/GPT2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "archit11/GPT2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use archit11/GPT2 with Docker Model Runner:
```shell
docker model run hf.co/archit11/GPT2
```
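The vLLM and SGLang endpoints above both follow the OpenAI completions response format, so their JSON replies can be parsed the same way. A minimal standard-library sketch over a hand-written sample response (the id and completion text below are made up; the field names follow the OpenAI completions schema, which both servers advertise):

```python
import json

# Hand-written sample shaped like an OpenAI-compatible /v1/completions reply.
sample = json.dumps({
    "id": "cmpl-123",
    "object": "text_completion",
    "model": "archit11/GPT2",
    "choices": [
        {"index": 0,
         "text": " there was a tiny language model.",
         "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 8, "total_tokens": 13},
})

def extract_text(raw):
    """Pull the first completion string out of a completions response body."""
    return json.loads(raw)["choices"][0]["text"]

print(extract_text(sample))
# → " there was a tiny language model." (the string we put in the sample)
```

The same `choices[0].text` access works on the bytes returned by either server's `/v1/completions` endpoint.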
This project is an open-source AI-powered translation system designed to make communication easier between English and Malawian local languages. It uses modern machine learning and natural language processing (NLP) models to translate text and speech accurately across languages spoken in Malawi.

---

Supported Languages

The system currently supports:
- Chichewa (Nyanja)
- Chitumbuka
- Chiyao
- Chilomwe
- Chisena
- Chitonga

Additional languages and dialects can be added as data becomes available.

Project Goals
- Break language barriers in Malawi through accessible AI tools.
- Support communication in education, health, agriculture, and government.
- Preserve and promote Malawian indigenous languages in digital technology.
- Provide open datasets and models for researchers and developers.

Features
- Text translation: English ↔ local languages
- Speech recognition: convert spoken language to text
- Text-to-speech: speak translated text naturally
- Chat integration: support for WhatsApp and web interfaces
- Offline capability: small models for mobile and rural use
#1 opened 7 months ago by Ezek3121