---
license: mit
language:
- es
- en
tags:
- cisco
- networking
- packet-tracer
- lora
- mistral
base_model: mistralai/Mistral-7B-Instruct-v0.3
---
|
|
|
|
|
# TechMind Pro v9 ULTIMATE |
|
|
|
|
|
🚀 An AI assistant specialized in Cisco networking and Packet Tracer
|
|
|
|
|
## 📊 Metrics

- **Accuracy:** 93% (verified)
- **Dataset:** 1,191 unique examples
- **Base Model:** Mistral-7B-Instruct-v0.3
- **Fine-tuning:** LoRA (r=64, alpha=128)
|
|
|
|
|
## 🎯 Features

- ✅ Step-by-step Cisco configurations
- ✅ Guided troubleshooting
- ✅ Packet Tracer integration
- ✅ Support for OSPF, BGP, VLANs, ACLs, and more
- ✅ Responses in Spanish and English
|
|
|
|
|
## 💻 Usage
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the base model in 8-bit (requires bitsandbytes)
base_model = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Load the LoRA adapter
model = PeftModel.from_pretrained(model, "Delta0723/techmind-pro-v9")

# Inference
prompt = "<s>[INST] ¿Cómo configuro IP 192.168.1.1 en GigabitEthernet0/0? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
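If you prefer not to hand-build the `[INST]` wrapper shown above, a small helper (hypothetical, not part of this repo) keeps the Mistral-Instruct prompt format in one place:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral-Instruct [INST] format
    used in the usage example above."""
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("¿Cómo configuro OSPF en el área 0?")
print(prompt)
```

Recent versions of `transformers` can also produce this format for you via `tokenizer.apply_chat_template`, which reads the chat template bundled with the Mistral tokenizer.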
|
|
|
|
|
## 🌐 Online Demo
|
|
|
|
|
- **Landing:** https://techmind-landing.vercel.app |
|
|
- **GitHub:** https://github.com/Moreno360/techmind-landing |
|
|
|
|
|
## 📚 Use Cases

1. **CCNA/CCNP students:** Quickly generate configurations
2. **Instructors:** Example material for classes
3. **Professionals:** Fast troubleshooting
4. **Packet Tracer:** Step-by-step guides
|
|
|
|
|
## 🎓 Training

- **Method:** LoRA (Low-Rank Adaptation)
- **Hardware:** RunPod RTX A6000
- **Duration:** ~6 hours
- **Framework:** Hugging Face Transformers + PEFT
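The adapter hyperparameters reported in the metrics (r=64, alpha=128) correspond to a PEFT configuration roughly like the sketch below. The `target_modules`, `lora_dropout`, and `bias` values are assumptions for illustration; the card does not list them.

```python
from peft import LoraConfig

# Sketch matching the reported hyperparameters (r=64, alpha=128).
# target_modules, lora_dropout, and bias are assumed values,
# not taken from this model card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```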
|
|
|
|
|
## 📝 License

MIT License - free to use
|
|
|
|
|
## 👤 Author

Created by [Delta0723](https://github.com/Moreno360)
|
|
|
|
|
## 🔗 Links |
|
|
|
|
|
- [Demo Web](https://techmind-landing.vercel.app) |
|
|
- [GitHub Repo](https://github.com/Moreno360/techmind-landing) |
|
|
- [Documentation](https://github.com/Moreno360/techmind-landing#readme)
|
|
|