Complete ML project with training, advanced inference, interpretability and production deployment
Explore how the model makes decisions through attention visualizations and SHAP analysis.
Analyze a text to see how the model's attention mechanism focuses on different words and phrases. The visualization shows, for each layer and attention head, how strongly every token attends to the other tokens in the input.
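The attention weights being visualized come from scaled dot-product attention, softmax(QKᵀ/√d_k)V. A minimal NumPy sketch with toy dimensions and random matrices (all names here are illustrative, not the project's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_k = 5, 8                               # 5 tokens, 8-dim heads (toy sizes)
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))
output, weights = scaled_dot_product_attention(Q, K, V)
# weights[i, j] is how much token i attends to token j;
# these are the values an attention heatmap plots.
```

In the real model, one such weight matrix exists per head per layer; the visualization aggregates or selects among them.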
SHAP (SHapley Additive exPlanations) provides detailed feature-importance analysis. A word's SHAP value measures how much that word pushes the prediction toward the positive or the negative class, so the visualization shows which words contribute most to the model's decision.
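SHAP values are Shapley values from cooperative game theory: each word's value is its average marginal contribution over all subsets of the other words. A self-contained toy sketch (the word scores and `model` are hypothetical stand-ins for the fine-tuned DistilBERT):

```python
from itertools import combinations
from math import factorial

# Hypothetical per-word scores; the real "model" is a DistilBERT forward pass.
WORD_SCORES = {"great": 2.0, "boring": -3.0, "acting": 0.5}

def model(words):
    """Toy additive sentiment score for a set of present words."""
    return sum(WORD_SCORES[w] for w in words)

def shapley_values(features, f):
    """Exact Shapley values: weighted average of each feature's
    marginal contribution f(S + {i}) - f(S) over all subsets S."""
    n = len(features)
    phi = {}
    for i in features:
        others = [x for x in features if x != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (f(set(S) | {i}) - f(set(S)))
        phi[i] = total
    return phi

phi = shapley_values(list(WORD_SCORES), model)
# Efficiency property: the values sum to f(all words) - f(no words),
# so every bit of the prediction is attributed to some word.
```

The exact computation is exponential in the number of features; the SHAP library approximates it efficiently for real models.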
Pipeline:
- IMDB dataset: 50K reviews
- Tokenization: DistilBERT tokenizer
- Model: fine-tuned DistilBERT
- Inference: FastAPI
- Interactive UI: React/JS
This project demonstrates a complete sentiment-analysis implementation using Transformers, from training through to production deployment.
Accuracy: 74%
Latency: ~100ms
Throughput: 1000+ req/s
Horizontal scaling
Load balancing
Auto-restart
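One common way to get worker-level scaling and automatic restarts for a FastAPI app is gunicorn managing uvicorn workers; a sketch (worker count and module path `app:app` are assumptions for this project):

```shell
# 4 uvicorn workers behind gunicorn; gunicorn respawns any worker that
# dies, and --max-requests recycles workers periodically to limit leaks.
gunicorn app:app \
  --worker-class uvicorn.workers.UvicornWorker \
  --workers 4 \
  --bind 0.0.0.0:8000 \
  --max-requests 1000 \
  --timeout 30
```

Horizontal scaling and load balancing across machines would then sit in front of this, e.g. via a reverse proxy or container orchestrator.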