---
library_name: ultralytics
tags:
- yolov11
- object-detection
- instance-segmentation
- computer-vision
- deep-learning
- port-detection
license: agpl-3.0
---

# Port Model

This is a custom-trained YOLOv11 segmentation model for port detection.

## Model Details

- **Model Type**: YOLOv11 Instance Segmentation
- **Framework**: Ultralytics YOLOv11
- **Task**: Instance Segmentation
- **Classes**: 2
- **Input Size**: 1408x1408
- **Dataset**: Custom Port Dataset

## Classes

- Class 0: Port-capped
- Class 1: Port-Empty

## Model Configuration

```json
{
  "model_type": "yolov11-seg",
  "task": "image-segmentation",
  "framework": "ultralytics",
  "num_classes": 2,
  "id2label": {
    "0": "Port-capped",
    "1": "Port-Empty"
  },
  "input_size": 1408,
  "confidence_threshold": 0.25,
  "iou_threshold": 0.45
}
```

### Training Configuration

- **Epochs**: 100
- **Batch Size**: 16
- **Optimizer**: AdamW
- **Dataset**: Custom Port Dataset

## Usage

### Using Ultralytics (Local Inference)

```python
from ultralytics import YOLO

# Load model
model = YOLO('model.pt')

# Run inference
results = model('image.jpg', conf=0.25, iou=0.45)

# Process results
for result in results:
    masks = result.masks  # Segmentation masks (None if nothing detected)
    boxes = result.boxes  # Bounding boxes

    # Get class names
    for box in boxes:
        class_id = int(box.cls)
        class_name = {0: "Port-capped", 1: "Port-Empty"}[class_id]
        confidence = float(box.conf)
        print(f"Detected: {class_name} ({confidence:.2f})")

    # Visualize
    result.show()
```

### Using Hugging Face Inference API

```python
import requests
import json

API_URL = "https://router.huggingface.co/models/Sunix2026/Port-model"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def query(filename):
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()

# Run inference
output = query("image.jpg")
print(json.dumps(output, indent=2))
```

### Using the Python Client

```python
from yolov11_hf_inference import YOLOv11HFInference

# Initialize client
client = YOLOv11HFInference(
    model_url="Sunix2026/Port-model",
    access_token="YOUR_HF_TOKEN"
)

# Run inference
result = client.predict_from_path("image.jpg")

if result["success"]:
    predictions = result["predictions"]
    # Map class IDs to names
    id2label = {"0": "Port-capped", "1": "Port-Empty"}
    for pred in predictions:
        class_name = id2label.get(str(pred.get('label', '')), 'Unknown')
        confidence = pred.get('score', 0)
        print(f"Found: {class_name} ({confidence:.2%})")
else:
    print(f"Error: {result['error']}")
```

## Inference Settings

| Setting | Value |
|---------|-------|
| Confidence Threshold | 0.25 |
| IoU Threshold | 0.45 |
| Input Resolution | 1408x1408 |

## Applications

This model can be used for:

- Port detection and classification
- Automated quality control
- Manufacturing inspection
- Inventory management

## Limitations

- Model is trained specifically for port detection
- Performance may vary with different lighting conditions
- Best results with images similar to training data

## License

AGPL-3.0

## Citation

If you use this model, please cite:

```bibtex
@misc{Port-model,
  author = {Sunix2026},
  title = {Port Model},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Sunix2026/Port-model}}
}
```
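For inventory-style use cases, the per-prediction loop in the Python-client example can be condensed into a small tally helper. This is a sketch, not part of the model's tooling: the helper name `count_ports` is hypothetical, and it assumes predictions arrive as dicts with `label` and `score` keys, matching the shape used in the client example above.

```python
# Hypothetical post-processing helper (not part of this repo's client):
# tally detections per class, dropping predictions below a score threshold.
# Assumes each prediction is a dict with "label" and "score" keys, as in
# the client example above; adjust the keys if your client differs.
from collections import Counter

ID2LABEL = {"0": "Port-capped", "1": "Port-Empty"}

def count_ports(predictions, min_score=0.25):
    """Count detections per class, ignoring low-confidence predictions."""
    counts = Counter()
    for pred in predictions:
        if pred.get("score", 0) >= min_score:
            label = ID2LABEL.get(str(pred.get("label", "")), "Unknown")
            counts[label] += 1
    return dict(counts)

# Example with mock predictions
preds = [
    {"label": 0, "score": 0.91},
    {"label": 1, "score": 0.78},
    {"label": 0, "score": 0.12},  # below threshold, dropped
]
print(count_ports(preds))  # {'Port-capped': 1, 'Port-Empty': 1}
```

The 0.25 default mirrors the confidence threshold in the model configuration above; raise it if you need stricter counts.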