---
library_name: ultralytics
tags:
- yolo
- yolo11
- instance-segmentation
- robotics
- so101
pipeline_tag: image-segmentation
model-index:
- name: SO101 Nexus Segmentation
results:
- task:
type: instance-segmentation
---
# SO101 segmentation model
This is an instance-segmentation model for images of the [SO101 robot arm](https://github.com/TheRobotStudio/SO-ARM100), fine-tuned from YOLO11s.
![SO101 robot arm with its parts (grippers, base, etc.) labelled by the segmentation model](image.png)
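The fine-tuning script itself isn't included here, but with Ultralytics it would look roughly like the sketch below. The dataset YAML name `so101-seg.yaml`, the epoch count, and the image size are assumptions for illustration, not the actual training settings:

```python
from pathlib import Path

# "so101-seg.yaml" is a hypothetical dataset file pointing at the
# synthetic SO101 images; only kick off training if it exists.
if Path("so101-seg.yaml").exists():
    from ultralytics import YOLO

    # Start from the pretrained YOLO11s segmentation checkpoint.
    model = YOLO("yolo11s-seg.pt")
    model.train(data="so101-seg.yaml", epochs=100, imgsz=640)
```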
## Sample code
Here's some sample code that runs the model on a video and writes a side-by-side comparison (original on the left, predicted masks on the right):
```python
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("weights/best.pt")

cap = cv2.VideoCapture("test_video.mp4")
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)

# Output video is twice as wide: original frame next to the masks.
fourcc = cv2.VideoWriter_fourcc(*"avc1")
out = cv2.VideoWriter("comparison_output.mp4", fourcc, fps, (w * 2, h))

print("Generating side-by-side video...")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    results = model(frame)

    # Draw the predicted masks and labels on a black background.
    left_side = frame
    black_bg = np.zeros_like(frame)
    right_side = results[0].plot(img=black_bg, boxes=False, labels=True)

    combined_frame = np.hstack((left_side, right_side))
    out.write(combined_frame)

cap.release()
out.release()
print("Done! Check comparison_output.mp4")
```
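If you want the raw masks rather than the rendered overlay, they can be pulled out of the results object. A minimal sketch (the weights path matches the video example above; `frame.jpg` is a stand-in for any image of the arm, and the snippet is guarded so it is a no-op if the weights haven't been downloaded):

```python
import os
import numpy as np

def mask_area(mask):
    """Pixel area of one binary instance mask."""
    return int(np.count_nonzero(mask))

# Only run inference if the weights are present (same path as above).
if os.path.exists("weights/best.pt"):
    from ultralytics import YOLO

    model = YOLO("weights/best.pt")
    r = model("frame.jpg")[0]
    if r.masks is not None:
        masks = r.masks.data.cpu().numpy()               # (N, H, W) binary masks
        classes = r.boxes.cls.cpu().numpy().astype(int)  # class id per instance
        for mask, cls in zip(masks, classes):
            print(f"{r.names[cls]}: {mask_area(mask)} px")
```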
Disclaimer: I vibe-coded most of the code here, since it was one-time-use code that I don't expect to publish anywhere. The synthetic training images were generated with https://github.com/johnsutor/so101-nexus.