# remiai3 – Universal AI Project Pack (9 CPU/GPU-Compatible Demos)

Each project runs on CPU-only or GPU with the same dependencies. All use Apache 2.0 / MIT licensed models.
## Quick start (any project)

```bash
# 1) Create env
python -m venv .venv && source .venv/bin/activate  # Windows: .venv\Scripts\activate
# 2) Install deps
pip install -r requirements.txt
# 3) Run
python main.py --help
```
Tip: If you have a GPU and CUDA installed, PyTorch will use it automatically. If not, everything runs on CPU (slower, but it works).
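The automatic GPU/CPU fallback mentioned above can be sketched as a small helper. This is an illustrative snippet (the `pick_device` name is ours, not part of the project code); it degrades gracefully even when PyTorch is not installed:

```python
def pick_device() -> str:
    """Return "cuda" if a CUDA-capable GPU is usable, else "cpu"."""
    try:
        import torch  # local import so the helper works even without torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

print(pick_device())
```

Passing the returned string as the `device` argument to a model or pipeline keeps the same script portable across machines.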
## Projects

- Sentiment Analysis – `distilbert-base-uncased-finetuned-sst-2-english` (Apache-2.0)
- Named Entity Recognition (NER) – `dslim/bert-base-NER` (Apache-2.0)
- Text Summarization – `sshleifer/distilbart-cnn-12-6` (Apache-2.0)
- Keyword Extraction (Embeddings) – `sentence-transformers/all-MiniLM-L6-v2` (Apache-2.0)
- Simple Chatbot – `microsoft/DialoGPT-small` (MIT)
- Image Classification – `torchvision.models.mobilenet_v2` (Apache-2.0)
- OCR – `easyocr` (Apache-2.0)
- Speech-to-Text – `openai/whisper-tiny` (MIT)
- Image Captioning – `nlpconnect/vit-gpt2-image-captioning` (MIT)
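As a taste of what each demo looks like, here is a minimal sketch of the sentiment-analysis project using the Hugging Face `transformers` pipeline API (an assumption about how the demos are wired up; the model id comes from the list above, and the `analyze` helper is ours). The model is downloaded on first call:

```python
def analyze(texts, model_id="distilbert-base-uncased-finetuned-sst-2-english"):
    """Classify a list of strings as POSITIVE/NEGATIVE with scores."""
    from transformers import pipeline  # heavy import kept local
    clf = pipeline("sentiment-analysis", model=model_id)
    return clf(texts)

if __name__ == "__main__":
    print(analyze(["This demo works on CPU too!"]))
```

The other projects follow the same shape: load a small permissively licensed model, then call it on your input.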
## Universal requirements

Each project folder has its own `requirements.txt`, identical across all projects. If you need CPU-only PyTorch wheels, install the default first (`pip install torch`); it will fetch the right build automatically. If you already have a CUDA wheel installed, the scripts will use it.
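If the default wheel pulls in CUDA libraries you don't want, PyTorch also publishes an explicit CPU-only package index. A sketch of that install (the index URL is PyTorch's documented CPU wheel index, not something specific to this project):

```shell
# Force CPU-only PyTorch wheels (smaller download, no CUDA libs)
pip install torch --index-url https://download.pytorch.org/whl/cpu
```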
## System notes

- FFmpeg is needed for Whisper STT (`brew install ffmpeg`, `choco install ffmpeg`, or use your package manager).
- TTS downloads models on first run to `~/.local/share/tts` (Linux/macOS) or `%APPDATA%\tts` (Windows).
- All code defaults to English; instructions for changing languages are in the code comments.
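Since a missing FFmpeg binary only surfaces as a confusing error deep inside Whisper, a quick preflight check can help. A small sketch (the `ffmpeg_available` helper is ours, not part of the project):

```python
import shutil

def ffmpeg_available() -> bool:
    # Whisper decodes audio by invoking the ffmpeg binary found on PATH
    return shutil.which("ffmpeg") is not None

if not ffmpeg_available():
    print("FFmpeg not found - install it before running the Speech-to-Text demo.")
```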