# TorchForge - Windows Installation & Usage Guide
A complete guide to setting up and running TorchForge on a Windows machine.
## Prerequisites
### System Requirements
- Windows 10/11 (64-bit)
- Python 3.8 or higher
- 8GB RAM minimum (16GB recommended)
- 10GB free disk space
- Git for Windows
### Optional for GPU Support
- NVIDIA GPU with CUDA 11.8 or higher
- NVIDIA CUDA Toolkit
- cuDNN library
## Installation Steps
### 1. Install Python
Download and install Python from [python.org](https://www.python.org/downloads/)
```powershell
# Verify installation
python --version
pip --version
```
### 2. Install Git
Download and install Git from [git-scm.com](https://git-scm.com/download/win)
```powershell
# Verify installation
git --version
```
### 3. Clone TorchForge Repository
```powershell
# Open PowerShell or Command Prompt
cd C:\Users\YourUsername\Projects
# Clone repository
git clone https://github.com/anilprasad/torchforge.git
cd torchforge
```
### 4. Create Virtual Environment
```powershell
# Create virtual environment
python -m venv venv
# Activate virtual environment
.\venv\Scripts\activate
# If activation is blocked by script execution policy, first run:
#   Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
# You should see (venv) in your prompt
```
### 5. Install TorchForge
```powershell
# Install in development mode
pip install -e .
# Or install specific extras
pip install -e ".[all]"
# Verify installation
python -c "import torchforge; print(torchforge.__version__)"
```
## Running Examples
### Basic Example
```powershell
# Navigate to examples directory
cd examples
# Run comprehensive examples
python comprehensive_examples.py
```
Expected output:
```
==========================================================
TorchForge - Comprehensive Examples
Author: Anil Prasad
==========================================================
Example 1: Basic Classification
...
✓ Example 1 completed successfully!
```
### Custom Model Example
Create a file `my_model.py`:
```python
import torch
import torch.nn as nn
from torchforge import ForgeModel, ForgeConfig

# Define your PyTorch model
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 64)
        self.fc2 = nn.Linear(64, 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        return self.fc2(x)

# Create TorchForge configuration
config = ForgeConfig(
    model_name="my_custom_model",
    version="1.0.0",
    enable_monitoring=True,
    enable_governance=True
)

# Wrap with TorchForge
model = ForgeModel(MyModel(), config=config)

# Use the model
x = torch.randn(32, 10)
output = model(x)
print(f"Output shape: {output.shape}")

# Get metrics
metrics = model.get_metrics_summary()
print(f"Metrics: {metrics}")
```
Run it:
```powershell
python my_model.py
```
## Running Tests
```powershell
# Install test dependencies
pip install pytest pytest-cov
# Run all tests
pytest tests/ -v
# Run with coverage
pytest tests/ --cov=torchforge --cov-report=html
# View coverage report
start htmlcov\index.html
```
## Docker Deployment on Windows
### 1. Install Docker Desktop
Download from [docker.com](https://www.docker.com/products/docker-desktop)
### 2. Build Docker Image
```powershell
# Build image
docker build -t torchforge:1.0.0 .
# Verify image
docker images | findstr torchforge
```
### 3. Run Container
```powershell
# Run container
docker run -p 8000:8000 torchforge:1.0.0
# Run with volume mounts
docker run -p 8000:8000 `
  -v ${PWD}\models:/app/models `
  -v ${PWD}\logs:/app/logs `
  torchforge:1.0.0
```
### 4. Run with Docker Compose
```powershell
# Start services
docker-compose up -d
# Check status
docker-compose ps
# View logs
docker-compose logs -f
# Stop services
docker-compose down
```
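The Compose commands above assume a `docker-compose.yml` at the repository root. If your checkout does not include one, a minimal sketch might look like the following (the service name, port, and volume paths are illustrative assumptions, not the project's shipped configuration):

```yaml
services:
  torchforge:
    image: torchforge:1.0.0
    ports:
      - "8000:8000"
    volumes:
      - ./models:/app/models
      - ./logs:/app/logs
    restart: unless-stopped
```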
## Cloud Deployment
### AWS Deployment
```python
from torchforge import ForgeModel, ForgeConfig
from torchforge.cloud import AWSDeployer

# Create model (MyModel as defined in the custom model example above)
config = ForgeConfig(model_name="my_model", version="1.0.0")
model = ForgeModel(MyModel(), config=config)

# Deploy to AWS SageMaker
deployer = AWSDeployer(model)
endpoint = deployer.deploy_sagemaker(
    instance_type="ml.m5.large",
    endpoint_name="torchforge-prod"
)
print(f"Model deployed: {endpoint.url}")
```
### Azure Deployment
```python
from torchforge.cloud import AzureDeployer

deployer = AzureDeployer(model)
service = deployer.deploy_aks(
    cluster_name="ml-cluster",
    cpu_cores=4,
    memory_gb=16
)
```
### GCP Deployment
```python
from torchforge.cloud import GCPDeployer

deployer = GCPDeployer(model)
endpoint = deployer.deploy_vertex(
    machine_type="n1-standard-4",
    accelerator_type="NVIDIA_TESLA_T4"
)
```
## Common Issues & Solutions
### Issue: ModuleNotFoundError
**Solution:**
```powershell
# Ensure virtual environment is activated
.\venv\Scripts\activate
# Reinstall TorchForge
pip install -e .
```
### Issue: CUDA Not Available
**Solution:**
```powershell
# Install PyTorch with CUDA support
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```
### Issue: Permission Denied
**Solution:**
```powershell
# Run PowerShell as Administrator
# Or add current user to docker-users group
net localgroup docker-users "$env:USERDOMAIN\$env:USERNAME" /ADD
```
### Issue: Port Already in Use
**Solution:**
```powershell
# Find process using port 8000
netstat -ano | findstr :8000
# Kill process (replace PID)
taskkill /PID <PID> /F
```
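Before killing a process, you can also confirm from Python whether the port is actually free. A small standard-library sketch, independent of TorchForge:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. something is listening on that port
        return s.connect_ex((host, port)) == 0

if port_in_use(8000):
    print("Port 8000 is taken - free it or pick another port")
else:
    print("Port 8000 is free")
```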
## Performance Optimization
### Enable GPU Support
```python
import torch

# Check CUDA availability
if torch.cuda.is_available():
    device = torch.device("cuda")
    model = model.to(device)
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA not available, using CPU")
```
### Memory Optimization
```python
# Enable memory optimization
config.optimization.memory_optimization = True
# Enable quantization
config.optimization.quantization = "int8"
```
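For intuition about what `"int8"` quantization does: float values are mapped to 8-bit integers via a scale and zero point (affine quantization), trading a bounded amount of precision for a 4x smaller representation than float32. A minimal pure-Python sketch of the scheme, independent of TorchForge's actual implementation:

```python
def quantize_int8(values):
    """Affine-quantize a list of floats to int8.
    Returns (quantized_ints, scale, zero_point)."""
    lo, hi = min(values), max(values)
    qmin, qmax = -128, 127
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate floats."""
    return [(qi - zero_point) * scale for qi in q]

vals = [-1.0, 0.0, 0.5, 2.0]
q, s, zp = quantize_int8(vals)
approx = dequantize(q, s, zp)
# Round-trip error is bounded by one quantization step (the scale)
```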
## Development Workflow
### 1. Setup Development Environment
```powershell
# Install dev dependencies
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
```
### 2. Run Code Formatters
```powershell
# Format code with black
black torchforge/
# Sort imports
isort torchforge/
# Check style
flake8 torchforge/
```
### 3. Type Checking
```powershell
# Run mypy
mypy torchforge/
```
## Monitoring in Production
### View Metrics
```python
# Get metrics summary
metrics = model.get_metrics_summary()
print(f"Total Inferences: {metrics['inference_count']}")
print(f"Mean Latency: {metrics['latency_mean_ms']:.2f}ms")
print(f"P95 Latency: {metrics['latency_p95_ms']:.2f}ms")
```
### Export Compliance Report
```python
from torchforge.governance import ComplianceChecker
checker = ComplianceChecker()
report = checker.assess_model(model)
# Export reports
report.export_json("compliance_report.json")
report.export_pdf("compliance_report.pdf")
```
## Support & Resources
- **GitHub Issues**: https://github.com/anilprasad/torchforge/issues
- **Documentation**: https://torchforge.readthedocs.io
- **LinkedIn**: [Anil Prasad](https://www.linkedin.com/in/anilsprasad/)
- **Email**: anilprasad@example.com
## Next Steps
1. Try the comprehensive examples
2. Build your own model with TorchForge
3. Deploy to production
4. Check compliance and governance
5. Monitor in real-time
6. Contribute to the project!
---
**Built with ❤️ by Anil Prasad**