winter.sci.dev
enzoescipy
0 followers · 2 following
https://www.winter-sci-dev.com/about/
AI & ML interests
None yet
Recent Activity
reacted to sagar007's post with 🔥 18 days ago
I built a Multimodal Vision-Language Model using Gemma-270M + CLIP! Just finished training my multimodal model on the full LLaVA-Instruct-150K dataset (157K samples) and wanted to share the results!

What I Built: A vision-language model that can understand images and answer questions about them, combining:
- Google Gemma-3-270M (language)
- OpenAI CLIP ViT-Large/14 (vision)
- LoRA fine-tuning for efficiency

Training Stats:
- 157,712 training samples (full LLaVA dataset)
- 3 epochs on an A100 40GB
- ~9 hours training time
- Final loss: 1.333 training / 1.430 validation
- Only 18.6M trainable params (3.4% of 539M total)

https://huggingface.co/sagar007/multigemma

Benchmark Results:
- VQA Accuracy: 53.8%
- Works great for: animal detection, room identification, scene understanding

**Try it yourself:**
- Model: https://huggingface.co/sagar007/multigemma
- Demo: https://huggingface.co/spaces/sagar007/Multimodal-Gemma
- GitHub: https://github.com/sagar431/multimodal-gemma-270m

Built with PyTorch Lightning + MLflow for experiment tracking. Full MLOps pipeline with CI/CD! Would love to hear your feedback!

#multimodal #gemma #clip #llava #vision-language #pytorch
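The small trainable fraction (18.6M of 539M params, ~3.4%) is a direct consequence of LoRA's low-rank factorization: each adapted linear layer trains only two small factor matrices while the full weight stays frozen. A minimal sketch of that parameter arithmetic (the layer sizes and rank below are illustrative, not the actual Gemma-270M or the post's LoRA config):

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by a LoRA adapter on a d_in x d_out linear layer.

    LoRA freezes the full weight W (d_out * d_in params) and instead trains
    two low-rank factors: A (rank x d_in) and B (d_out x rank), so the
    effective update is B @ A.
    """
    return rank * (d_in + d_out)


def full_param_count(d_in: int, d_out: int) -> int:
    """Parameters of the frozen full-rank weight matrix."""
    return d_in * d_out


# Hypothetical layer size and rank, chosen only for illustration:
d_in = d_out = 1024
rank = 16
lora = lora_param_count(d_in, d_out, rank)
full = full_param_count(d_in, d_out)
print(f"LoRA params: {lora} ({100 * lora / full:.1f}% of the frozen {full})")
# → LoRA params: 32768 (3.1% of the frozen 1048576)
```

At rank 16 the adapter trains about 3% of the layer's parameters, the same order of magnitude as the 3.4% reported in the post; lower ranks shrink the fraction further.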
updated a model about 1 month ago: enzoescipy/llama-embed-nemotron-8b-model2vec-pca3
published a model about 1 month ago: enzoescipy/llama-embed-nemotron-8b-model2vec-pca3
Organizations
enzoescipy's Spaces (1)
pinned
Sleeping
Finesse Benchmark - Long Context Embedder Leaderboard
🔬