7,349
import torch
from mmcv.ops import convex_iou, points_in_polygons
from mmdet.core.bbox.assigners.assign_result import AssignResult
from mmdet.core.bbox.assigners.base_assigner import BaseAssigner

from ..builder import ROTATED_BBOX_ASSIGNERS

The provided code snippet includes necessary dependencies for implementing the `convex_overlaps` function. Write a Python function `def convex_overlaps(gt_rbboxes, points)` to solve the following problem:

Compute overlaps between polygons and points.

Args:
    gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8).
    points (torch.Tensor): Points to be assigned, shape (n, 18).

Returns:
    overlaps (torch.Tensor): Overlaps between k gt_bboxes and n bboxes, shape (k, n).

Here is the function:

def convex_overlaps(gt_rbboxes, points):
    """Compute overlaps between polygons and points.

    Args:
        gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8).
        points (torch.Tensor): Points to be assigned, shape (n, 18).

    Returns:
        overlaps (torch.Tensor): Overlaps between k gt_bboxes and n bboxes,
            shape (k, n).
    """
    if gt_rbboxes.shape[0] == 0:
        return gt_rbboxes.new_zeros((0, points.shape[0]))
    overlaps = convex_iou(points, gt_rbboxes)
    return overlaps
Compute overlaps between polygons and points. Args: gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). points (torch.Tensor): Points to be assigned, shape(n, 18). Returns: overlaps (torch.Tensor): Overlaps between k gt_bboxes and n bboxes, shape(k, n).
7,350
import torch
from mmcv.ops import convex_iou, points_in_polygons
from mmdet.core.bbox.assigners.assign_result import AssignResult
from mmdet.core.bbox.assigners.base_assigner import BaseAssigner

from ..builder import ROTATED_BBOX_ASSIGNERS

The provided code snippet includes necessary dependencies for implementing the `get_horizontal_bboxes` function. Write a Python function `def get_horizontal_bboxes(gt_rbboxes)` to solve the following problem:

Get horizontal bboxes from polygons.

Args:
    gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8).

Returns:
    gt_rect_bboxes (torch.Tensor): The horizontal bboxes, shape (k, 4).

Here is the function:

def get_horizontal_bboxes(gt_rbboxes):
    """Get horizontal bboxes from polygons.

    Args:
        gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8).

    Returns:
        gt_rect_bboxes (torch.Tensor): The horizontal bboxes, shape (k, 4).
    """
    gt_xs, gt_ys = gt_rbboxes[:, 0::2], gt_rbboxes[:, 1::2]
    gt_xmin, _ = gt_xs.min(1)
    gt_ymin, _ = gt_ys.min(1)
    gt_xmax, _ = gt_xs.max(1)
    gt_ymax, _ = gt_ys.max(1)
    gt_rect_bboxes = torch.cat([
        gt_xmin[:, None], gt_ymin[:, None],
        gt_xmax[:, None], gt_ymax[:, None]
    ], dim=1)
    return gt_rect_bboxes
Get horizontal bboxes from polygons. Args: gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). Returns: gt_rect_bboxes (torch.Tensor): The horizontal bboxes, shape (k, 4).
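The min/max reduction above is framework-agnostic; a minimal NumPy sketch of the same polygon-to-horizontal-box conversion (the helper name `poly2hbb_np` is hypothetical, not part of mmrotate):

```python
import numpy as np

def poly2hbb_np(polys):
    # polys: (k, 8) corner coordinates [x1, y1, ..., x4, y4]
    xs, ys = polys[:, 0::2], polys[:, 1::2]
    # per-polygon min/max gives the enclosing horizontal box (xmin, ymin, xmax, ymax)
    return np.stack([xs.min(1), ys.min(1), xs.max(1), ys.max(1)], axis=1)

polys = np.array([[1., 2., 5., 2., 5., 4., 1., 4.]])
print(poly2hbb_np(polys))  # [[1. 2. 5. 4.]]
```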
7,351
import torch
from mmcv.ops import convex_iou, points_in_polygons
from mmdet.core.bbox.assigners.assign_result import AssignResult
from mmdet.core.bbox.assigners.base_assigner import BaseAssigner

from ..builder import ROTATED_BBOX_ASSIGNERS

The provided code snippet includes necessary dependencies for implementing the `AspectRatio` function. Write a Python function `def AspectRatio(gt_rbboxes)` to solve the following problem:

Compute the aspect ratio of all gts.

Args:
    gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8).

Returns:
    ratios (torch.Tensor): The aspect ratio of gt_rbboxes, shape (k, 1).

Here is the function:

def AspectRatio(gt_rbboxes):
    """Compute the aspect ratio of all gts.

    Args:
        gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8).

    Returns:
        ratios (torch.Tensor): The aspect ratio of gt_rbboxes, shape (k, 1).
    """
    pt1, pt2, pt3, pt4 = gt_rbboxes[..., :8].chunk(4, 1)
    edge1 = torch.sqrt(
        torch.pow(pt1[..., 0] - pt2[..., 0], 2) +
        torch.pow(pt1[..., 1] - pt2[..., 1], 2))
    edge2 = torch.sqrt(
        torch.pow(pt2[..., 0] - pt3[..., 0], 2) +
        torch.pow(pt2[..., 1] - pt3[..., 1], 2))
    edges = torch.stack([edge1, edge2], dim=1)
    width, _ = torch.max(edges, 1)
    height, _ = torch.min(edges, 1)
    ratios = (width / height)
    return ratios
Compute the aspect ratio of all gts. Args: gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). Returns: ratios (torch.Tensor): The aspect ratio of gt_rbboxes, shape (k, 1).
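The computation is just two adjacent-edge lengths and their ratio; a NumPy restatement for checking the arithmetic by hand (function name `aspect_ratio_np` is an assumption, not mmrotate API):

```python
import numpy as np

def aspect_ratio_np(polys):
    # polys: (k, 8); distances between adjacent corners give the two side lengths
    pts = polys.reshape(-1, 4, 2)
    edge1 = np.linalg.norm(pts[:, 0] - pts[:, 1], axis=1)
    edge2 = np.linalg.norm(pts[:, 1] - pts[:, 2], axis=1)
    edges = np.stack([edge1, edge2], axis=1)
    # long side over short side, matching AspectRatio's max/min
    return edges.max(1) / edges.min(1)

# 4x2 axis-aligned rectangle -> aspect ratio 2
rect = np.array([[0., 0., 4., 0., 4., 2., 0., 2.]])
print(aspect_ratio_np(rect))  # [2.]
```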
7,352
from mmcv.ops import box_iou_rotated

from .builder import ROTATED_IOU_CALCULATORS

The provided code snippet includes necessary dependencies for implementing the `rbbox_overlaps` function. Write a Python function `def rbbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False)` to solve the following problem:

Calculate overlap between two sets of bboxes.

Args:
    bboxes1 (torch.Tensor): shape (B, m, 5) in <cx, cy, w, h, a> format or empty.
    bboxes2 (torch.Tensor): shape (B, n, 5) in <cx, cy, w, h, a> format or empty.
    mode (str): "iou" (intersection over union), "iof" (intersection over
        foreground) or "giou" (generalized intersection over union).
        Default "iou".
    is_aligned (bool, optional): If True, then m and n must be equal.
        Default False.

Returns:
    Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,)

Here is the function:

def rbbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False):
    """Calculate overlap between two sets of bboxes.

    Args:
        bboxes1 (torch.Tensor): shape (B, m, 5) in <cx, cy, w, h, a> format
            or empty.
        bboxes2 (torch.Tensor): shape (B, n, 5) in <cx, cy, w, h, a> format
            or empty.
        mode (str): "iou" (intersection over union), "iof" (intersection over
            foreground) or "giou" (generalized intersection over union).
            Default "iou".
        is_aligned (bool, optional): If True, then m and n must be equal.
            Default False.

    Returns:
        Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,)
    """
    assert mode in ['iou', 'iof']
    # Either the boxes are empty or the length of the boxes' last dimension is 5
    assert (bboxes1.size(-1) == 5 or bboxes1.size(0) == 0)
    assert (bboxes2.size(-1) == 5 or bboxes2.size(0) == 0)
    rows = bboxes1.size(0)
    cols = bboxes2.size(0)
    if is_aligned:
        assert rows == cols

    if rows * cols == 0:
        return bboxes1.new(rows, 1) if is_aligned else bboxes1.new(rows, cols)

    # resolve `rbbox_overlaps` abnormal when input rbbox is too small.
    clamped_bboxes1 = bboxes1.detach().clone()
    clamped_bboxes2 = bboxes2.detach().clone()
    clamped_bboxes1[:, 2:4].clamp_(min=1e-3)
    clamped_bboxes2[:, 2:4].clamp_(min=1e-3)

    return box_iou_rotated(clamped_bboxes1, clamped_bboxes2, mode, is_aligned)
Calculate overlap between two set of bboxes. Args: bboxes1 (torch.Tensor): shape (B, m, 5) in <cx, cy, w, h, a> format or empty. bboxes2 (torch.Tensor): shape (B, n, 5) in <cx, cy, w, h, a> format or empty. mode (str): "iou" (intersection over union), "iof" (intersection over foreground) or "giou" (generalized intersection over union). Default "iou". is_aligned (bool, optional): If True, then m and n must be equal. Default False. Returns: Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,)
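The rotated IoU itself comes from mmcv's `box_iou_rotated`; the part worth noting above is the numerical-stability workaround, which clamps near-degenerate widths/heights on a copy before the kernel runs. A NumPy sketch of just that clamping step (the helper name `clamp_degenerate` is hypothetical):

```python
import numpy as np

def clamp_degenerate(bboxes, min_size=1e-3):
    # bboxes: (n, 5) in (cx, cy, w, h, a); work on a copy so the
    # caller's array is untouched, mirroring detach().clone() above
    out = bboxes.copy()
    out[:, 2:4] = np.clip(out[:, 2:4], min_size, None)
    return out

boxes = np.array([[10., 10., 0., 5., 0.3]])  # zero width would break the IoU kernel
clamped = clamp_degenerate(boxes)
print(clamped[0, 2], boxes[0, 2])  # 0.001 0.0
```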
7,353
from mmcv.utils import build_from_cfg

from mmdet.core.bbox.iou_calculators.builder import IOU_CALCULATORS

ROTATED_IOU_CALCULATORS = IOU_CALCULATORS

The provided code snippet includes necessary dependencies for implementing the `build_iou_calculator` function. Write a Python function `def build_iou_calculator(cfg, default_args=None)` to solve the following problem:

Builder of IoU calculator.

Here is the function:

def build_iou_calculator(cfg, default_args=None):
    """Builder of IoU calculator."""
    return build_from_cfg(cfg, ROTATED_IOU_CALCULATORS, default_args)
Builder of IoU calculator.
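This and the three builders below all delegate to mmcv's `build_from_cfg`, which looks up `cfg['type']` in a registry and instantiates it with the remaining keys. A toy self-contained sketch of that pattern (the registry and class here are illustrative stand-ins, not mmcv itself):

```python
# Minimal registry/build sketch mimicking mmcv's build_from_cfg (toy version)
REGISTRY = {}

def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls

def build_from_cfg_toy(cfg, registry, default_args=None):
    cfg = dict(cfg)
    obj_cls = registry[cfg.pop('type')]  # 'type' selects the class
    if default_args:
        for k, v in default_args.items():
            cfg.setdefault(k, v)       # cfg keys win over defaults
    return obj_cls(**cfg)

@register
class RBboxOverlaps2D:
    def __init__(self, scale=1.0):
        self.scale = scale

calc = build_from_cfg_toy(dict(type='RBboxOverlaps2D'), REGISTRY, dict(scale=2.0))
print(type(calc).__name__, calc.scale)  # RBboxOverlaps2D 2.0
```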
7,354
from mmcv.utils import build_from_cfg

from mmdet.core.bbox.builder import BBOX_ASSIGNERS, BBOX_CODERS, BBOX_SAMPLERS

ROTATED_BBOX_ASSIGNERS = BBOX_ASSIGNERS

The provided code snippet includes necessary dependencies for implementing the `build_assigner` function. Write a Python function `def build_assigner(cfg, **default_args)` to solve the following problem:

Builder of box assigner.

Here is the function:

def build_assigner(cfg, **default_args):
    """Builder of box assigner."""
    return build_from_cfg(cfg, ROTATED_BBOX_ASSIGNERS, default_args)
Builder of box assigner.
7,355
from mmcv.utils import build_from_cfg

from mmdet.core.bbox.builder import BBOX_ASSIGNERS, BBOX_CODERS, BBOX_SAMPLERS

ROTATED_BBOX_SAMPLERS = BBOX_SAMPLERS

The provided code snippet includes necessary dependencies for implementing the `build_sampler` function. Write a Python function `def build_sampler(cfg, **default_args)` to solve the following problem:

Builder of box sampler.

Here is the function:

def build_sampler(cfg, **default_args):
    """Builder of box sampler."""
    return build_from_cfg(cfg, ROTATED_BBOX_SAMPLERS, default_args)
Builder of box sampler.
7,356
from mmcv.utils import build_from_cfg

from mmdet.core.bbox.builder import BBOX_ASSIGNERS, BBOX_CODERS, BBOX_SAMPLERS

ROTATED_BBOX_CODERS = BBOX_CODERS

The provided code snippet includes necessary dependencies for implementing the `build_bbox_coder` function. Write a Python function `def build_bbox_coder(cfg, **default_args)` to solve the following problem:

Builder of box coder.

Here is the function:

def build_bbox_coder(cfg, **default_args):
    """Builder of box coder."""
    return build_from_cfg(cfg, ROTATED_BBOX_CODERS, default_args)
Builder of box coder.
7,357
The provided code snippet includes necessary dependencies for implementing the `rotated_anchor_inside_flags` function. Write a Python function `def rotated_anchor_inside_flags(flat_anchors, valid_flags, img_shape, allowed_border=0)` to solve the following problem:

Check whether the rotated anchors are inside the border.

Args:
    flat_anchors (torch.Tensor): Flatten anchors, shape (n, 5).
    valid_flags (torch.Tensor): An existing valid flags of anchors.
    img_shape (tuple(int)): Shape of current image.
    allowed_border (int, optional): The border to allow the valid anchor.
        Defaults to 0.

Returns:
    torch.Tensor: Flags indicating whether the anchors are inside a valid range.

Here is the function:

def rotated_anchor_inside_flags(flat_anchors,
                                valid_flags,
                                img_shape,
                                allowed_border=0):
    """Check whether the rotated anchors are inside the border.

    Args:
        flat_anchors (torch.Tensor): Flatten anchors, shape (n, 5).
        valid_flags (torch.Tensor): An existing valid flags of anchors.
        img_shape (tuple(int)): Shape of current image.
        allowed_border (int, optional): The border to allow the valid anchor.
            Defaults to 0.

    Returns:
        torch.Tensor: Flags indicating whether the anchors are inside a
            valid range.
    """
    img_h, img_w = img_shape[:2]
    if allowed_border >= 0:
        cx, cy = (flat_anchors[:, i] for i in range(2))
        inside_flags = \
            valid_flags & \
            (cx >= -allowed_border) & \
            (cy >= -allowed_border) & \
            (cx < img_w + allowed_border) & \
            (cy < img_h + allowed_border)
    else:
        inside_flags = valid_flags
    return inside_flags
Check whether the rotated anchors are inside the border. Args: flat_anchors (torch.Tensor): Flatten anchors, shape (n, 5). valid_flags (torch.Tensor): An existing valid flags of anchors. img_shape (tuple(int)): Shape of current image. allowed_border (int, optional): The border to allow the valid anchor. Defaults to 0. Returns: torch.Tensor: Flags indicating whether the anchors are inside a valid range.
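Unlike the horizontal-anchor version, only the anchor *center* (cx, cy) is tested against the border. The same predicate in NumPy, for hand-checking (the helper name `inside_flags_np` is an assumption):

```python
import numpy as np

def inside_flags_np(centers, valid, img_shape, allowed_border=0):
    # centers: (n, 2) anchor centers (cx, cy); valid: (n,) bool flags
    img_h, img_w = img_shape[:2]
    if allowed_border < 0:
        return valid  # negative border disables the check, as above
    cx, cy = centers[:, 0], centers[:, 1]
    return (valid
            & (cx >= -allowed_border) & (cy >= -allowed_border)
            & (cx < img_w + allowed_border) & (cy < img_h + allowed_border))

centers = np.array([[5., 5.], [-2., 5.], [99., 50.]])
valid = np.array([True, True, True])
# image is 60 high, 80 wide: only the first center lies inside
print(inside_flags_np(centers, valid, (60, 80)))  # [ True False False]
```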
7,358
from mmcv.utils import build_from_cfg

from mmdet.core.anchor.builder import ANCHOR_GENERATORS

ROTATED_ANCHOR_GENERATORS = ANCHOR_GENERATORS

def build_prior_generator(cfg, default_args=None):
    return build_from_cfg(cfg, ROTATED_ANCHOR_GENERATORS, default_args)
null
7,359
import numpy as np
import torch
from mmcv.ops import nms, nms_rotated

def translate_bboxes(bboxes, offset):
    """Translate bboxes according to their shape.

    If the bbox shape is (n, 5), the bboxes are regarded as horizontal bboxes
    and in (x, y, x, y, score) format. If the bbox shape is (n, 6), the bboxes
    are regarded as rotated bboxes and in (x, y, w, h, theta, score) format.

    Args:
        bboxes (np.ndarray): The bboxes need to be translated. Its shape can
            only be (n, 5) and (n, 6).
        offset (np.ndarray): The offset to translate with shape being (2, ).

    Returns:
        np.ndarray: Translated bboxes.
    """
    if bboxes.shape[1] == 5:
        bboxes[:, :4] = bboxes[:, :4] + np.tile(offset, 2)
    elif bboxes.shape[1] == 6:
        bboxes[:, :2] = bboxes[:, :2] + offset
    else:
        raise TypeError('Require the shape of `bboxes` to be (n, 5) or (n, 6),'
                        f' but get `bboxes` with shape being {bboxes.shape}.')
    return bboxes

def map_masks(masks, offset, new_shape):
    """Map masks to the huge image.

    Args:
        masks (list[np.ndarray]): masks need to be mapped.
        offset (np.ndarray): The offset to translate with shape being (2, ).
        new_shape (tuple): A tuple of the huge image's width and height.

    Returns:
        list[np.ndarray]: Mapped masks.
    """
    if not masks:
        return masks
    new_width, new_height = new_shape
    x_start, y_start = offset
    mapped = []
    for mask in masks:
        ori_height, ori_width = mask.shape[:2]
        x_end = x_start + ori_width
        if x_end > new_width:
            ori_width -= x_end - new_width
            x_end = new_width
        y_end = y_start + ori_height
        if y_end > new_height:
            ori_height -= y_end - new_height
            y_end = new_height
        extended_mask = np.zeros((new_height, new_width), dtype=bool)
        extended_mask[y_start:y_end,
                      x_start:x_end] = mask[:ori_height, :ori_width]
        mapped.append(extended_mask)
    return mapped

The provided code snippet includes necessary dependencies for implementing the `merge_results` function. Write a Python function `def merge_results(results, offsets, img_shape, iou_thr=0.1, device='cpu')` to solve the following problem:

Merge patch results via nms.

Args:
    results (list[np.ndarray] | list[tuple]): A list of patches results.
    offsets (np.ndarray): Positions of the left top points of patches.
    img_shape (tuple): A tuple of the huge image's width and height.
    iou_thr (float): The IoU threshold of NMS.
    device (str): The device to call nms.

Returns:
    list[np.ndarray]: Detection results after merging.

Here is the function:

def merge_results(results, offsets, img_shape, iou_thr=0.1, device='cpu'):
    """Merge patch results via nms.

    Args:
        results (list[np.ndarray] | list[tuple]): A list of patches results.
        offsets (np.ndarray): Positions of the left top points of patches.
        img_shape (tuple): A tuple of the huge image's width and height.
        iou_thr (float): The IoU threshold of NMS.
        device (str): The device to call nms.

    Returns:
        list[np.ndarray]: Detection results after merging.
    """
    assert len(results) == offsets.shape[0], 'The `results` should have the ' \
        'same length with `offsets`.'
    with_mask = isinstance(results[0], tuple)
    num_patches = len(results)
    num_classes = len(results[0][0]) if with_mask else len(results[0])
    merged_bboxes = []
    merged_masks = []
    for cls in range(num_classes):
        if with_mask:
            dets_per_cls = [results[i][0][cls] for i in range(num_patches)]
            masks_per_cls = [results[i][1][cls] for i in range(num_patches)]
        else:
            dets_per_cls = [results[i][cls] for i in range(num_patches)]
            masks_per_cls = None
        dets_per_cls = [
            translate_bboxes(dets_per_cls[i], offsets[i])
            for i in range(num_patches)
        ]
        dets_per_cls = np.concatenate(dets_per_cls, axis=0)
        if with_mask:
            masks_placeholder = []
            for i, masks in enumerate(masks_per_cls):
                translated = map_masks(masks, offsets[i], img_shape)
                masks_placeholder.extend(translated)
            masks_per_cls = masks_placeholder
        if dets_per_cls.size == 0:
            merged_bboxes.append(dets_per_cls)
            if with_mask:
                merged_masks.append(masks_per_cls)
        else:
            dets_per_cls = torch.from_numpy(dets_per_cls).to(device)
            nms_func = nms if dets_per_cls.size(1) == 5 else nms_rotated
            nms_dets, keeps = nms_func(dets_per_cls[:, :-1],
                                       dets_per_cls[:, -1], iou_thr)
            merged_bboxes.append(nms_dets.cpu().numpy())
            if with_mask:
                keeps = keeps.cpu().numpy()
                merged_masks.append([masks_per_cls[i] for i in keeps])
    if with_mask:
        return merged_bboxes, merged_masks
    else:
        return merged_bboxes
Merge patch results via nms. Args: results (list[np.ndarray] | list[tuple]): A list of patches results. offsets (np.ndarray): Positions of the left top points of patches. img_shape (tuple): A tuple of the huge image's width and height. iou_thr (float): The IoU threshold of NMS. device (str): The device to call nms. Returns: list[np.ndarray]: Detection results after merging.
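Note that `translate_bboxes` above shifts horizontal boxes on all four corner coordinates but rotated boxes only on the center; the arithmetic, isolated in NumPy:

```python
import numpy as np

# Horizontal (x1, y1, x2, y2, score) boxes: add the offset to both corners.
bboxes = np.array([[0., 0., 2., 2., 0.9]])
offset = np.array([10, 20])
bboxes[:, :4] = bboxes[:, :4] + np.tile(offset, 2)  # tile -> [10, 20, 10, 20]
print(bboxes)  # [[10.  20.  12.  22.   0.9]]

# Rotated (cx, cy, w, h, theta, score) boxes: only the center moves;
# width, height and angle are translation-invariant.
rboxes = np.array([[1., 1., 4., 2., 0.5, 0.8]])
rboxes[:, :2] = rboxes[:, :2] + offset
print(rboxes[0, :2])  # [11. 21.]
```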
7,360
from itertools import product
from math import ceil

import numpy as np

The provided code snippet includes necessary dependencies for implementing the `get_multiscale_patch` function. Write a Python function `def get_multiscale_patch(sizes, steps, ratios)` to solve the following problem:

Get multiscale patch sizes and steps.

Args:
    sizes (list): A list of patch sizes.
    steps (list): A list of steps to slide patches.
    ratios (list): Multiscale ratios. Divide each size and step by these
        ratios to generate patches in new scales.

Returns:
    new_sizes (list): A list of multiscale patch sizes.
    new_steps (list): A list of steps corresponding to new_sizes.

Here is the function:

def get_multiscale_patch(sizes, steps, ratios):
    """Get multiscale patch sizes and steps.

    Args:
        sizes (list): A list of patch sizes.
        steps (list): A list of steps to slide patches.
        ratios (list): Multiscale ratios. Divide each size and step by these
            ratios to generate patches in new scales.

    Returns:
        new_sizes (list): A list of multiscale patch sizes.
        new_steps (list): A list of steps corresponding to new_sizes.
    """
    assert len(sizes) == len(steps), 'The length of `sizes` and `steps` ' \
        'should be the same.'
    new_sizes, new_steps = [], []
    size_steps = list(zip(sizes, steps))
    for (size, step), ratio in product(size_steps, ratios):
        new_sizes.append(int(size / ratio))
        new_steps.append(int(step / ratio))
    return new_sizes, new_steps
Get multiscale patch sizes and steps. Args: sizes (list): A list of patch sizes. steps (list): A list of steps to slide patches. ratios (list): Multiscale ratios. Divide each size and step by these ratios to generate patches in new scales. Returns: new_sizes (list): A list of multiscale patch sizes. new_steps (list): A list of steps corresponding to new_sizes. Here is the function:
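The function is pure Python, so its behavior is easy to verify directly; the logic is reproduced here verbatim so the example runs standalone:

```python
from itertools import product

def get_multiscale_patch(sizes, steps, ratios):
    # same logic as the function above, reproduced to run standalone
    new_sizes, new_steps = [], []
    size_steps = list(zip(sizes, steps))
    for (size, step), ratio in product(size_steps, ratios):
        new_sizes.append(int(size / ratio))
        new_steps.append(int(step / ratio))
    return new_sizes, new_steps

# ratio 0.5 doubles the patch size (zoom out); ratio 1.0 keeps the original
print(get_multiscale_patch([1024], [512], ratios=[0.5, 1.0]))
# ([2048, 1024], [1024, 512])
```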
7,361
from itertools import product
from math import ceil

import numpy as np

The provided code snippet includes necessary dependencies for implementing the `slide_window` function. Write a Python function `def slide_window(width, height, sizes, steps, img_rate_thr=0.6)` to solve the following problem:

Slide windows in images and get window position.

Args:
    width (int): The width of the image.
    height (int): The height of the image.
    sizes (list): List of window's sizes.
    steps (list): List of window's steps.
    img_rate_thr (float): Threshold of window area divided by image area.

Returns:
    np.ndarray: Information of valid windows.

Here is the function:

def slide_window(width, height, sizes, steps, img_rate_thr=0.6):
    """Slide windows in images and get window position.

    Args:
        width (int): The width of the image.
        height (int): The height of the image.
        sizes (list): List of window's sizes.
        steps (list): List of window's steps.
        img_rate_thr (float): Threshold of window area divided by image area.

    Returns:
        np.ndarray: Information of valid windows.
    """
    assert 1 >= img_rate_thr >= 0, 'The `img_rate_thr` should lie in 0~1'
    windows = []
    # Sliding windows.
    for size, step in zip(sizes, steps):
        assert size > step, 'Size should be larger than step'
        x_num = 1 if width <= size else ceil((width - size) / step + 1)
        x_start = [step * i for i in range(x_num)]
        if len(x_start) > 1 and x_start[-1] + size > width:
            x_start[-1] = width - size
        y_num = 1 if height <= size else ceil((height - size) / step + 1)
        y_start = [step * i for i in range(y_num)]
        if len(y_start) > 1 and y_start[-1] + size > height:
            y_start[-1] = height - size
        start = np.array(list(product(x_start, y_start)), dtype=np.int64)
        windows.append(np.concatenate([start, start + size], axis=1))
    windows = np.concatenate(windows, axis=0)

    # Calculate the rate of image part in each window.
    img_in_wins = windows.copy()
    img_in_wins[:, 0::2] = np.clip(img_in_wins[:, 0::2], 0, width)
    img_in_wins[:, 1::2] = np.clip(img_in_wins[:, 1::2], 0, height)
    img_areas = (img_in_wins[:, 2] - img_in_wins[:, 0]) * \
        (img_in_wins[:, 3] - img_in_wins[:, 1])
    win_areas = (windows[:, 2] - windows[:, 0]) * \
        (windows[:, 3] - windows[:, 1])
    img_rates = img_areas / win_areas
    if not (img_rates >= img_rate_thr).any():
        img_rates[img_rates == img_rates.max()] = 1
    return windows[img_rates >= img_rate_thr]
Slide windows in images and get window position. Args: width (int): The width of the image. height (int): The height of the image. sizes (list): List of window's sizes. steps (list): List of window's steps. img_rate_thr (float): Threshold of window area divided by image area. Returns: np.ndarray: Information of valid windows.
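The core of the window placement is the one-axis start computation: evenly spaced starts, with the last window pulled back flush to the image border. A distilled sketch of that step (the helper `window_starts` is an extraction for illustration, not mmrotate API):

```python
from math import ceil

def window_starts(length, size, step):
    # distilled from slide_window: start offsets along one axis,
    # with the last window pulled back so it ends exactly at the border
    num = 1 if length <= size else ceil((length - size) / step + 1)
    starts = [step * i for i in range(num)]
    if len(starts) > 1 and starts[-1] + size > length:
        starts[-1] = length - size
    return starts

print(window_starts(1000, 512, 256))  # [0, 256, 488]
print(window_starts(400, 512, 256))   # [0]  (image smaller than the window)
```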
7,362
import cv2
import matplotlib.pyplot as plt
import mmcv
import numpy as np
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon
from mmdet.core.visualization import palette_val
from mmdet.core.visualization.image import draw_labels, draw_masks

from mmrotate.core.visualization.palette import get_palette

EPS = 1e-2

def _get_adaptive_scales(areas, min_area=800, max_area=30000):
    """Get adaptive scales according to areas.

    The scale range is [0.5, 1.0]. When the area is less than ``min_area``,
    the scale is 0.5; when the area is larger than ``max_area``, the scale
    is 1.0.

    Args:
        areas (ndarray): The areas of bboxes or masks with the shape of (n, ).
        min_area (int): Lower bound areas for adaptive scales. Default: 800.
        max_area (int): Upper bound areas for adaptive scales. Default: 30000.

    Returns:
        ndarray: The adaptive scales with the shape of (n, ).
    """
    scales = 0.5 + (areas - min_area) / (max_area - min_area)
    scales = np.clip(scales, 0.5, 1.0)
    return scales

def draw_rbboxes(ax, bboxes, color='g', alpha=0.8, thickness=2):
    """Draw oriented bounding boxes on the axes.

    Args:
        ax (matplotlib.Axes): The input axes.
        bboxes (ndarray): The input bounding boxes with the shape of (n, 5).
        color (list[tuple] | matplotlib.color): the colors for each bounding
            boxes.
        alpha (float): Transparency of bounding boxes. Default: 0.8.
        thickness (int): Thickness of lines. Default: 2.

    Returns:
        matplotlib.Axes: The result axes.
    """
    polygons = []
    for i, bbox in enumerate(bboxes):
        xc, yc, w, h, ag = bbox[:5]
        wx, wy = w / 2 * np.cos(ag), w / 2 * np.sin(ag)
        hx, hy = -h / 2 * np.sin(ag), h / 2 * np.cos(ag)
        p1 = (xc - wx - hx, yc - wy - hy)
        p2 = (xc + wx - hx, yc + wy - hy)
        p3 = (xc + wx + hx, yc + wy + hy)
        p4 = (xc - wx + hx, yc - wy + hy)
        poly = np.int0(np.array([p1, p2, p3, p4]))
        polygons.append(Polygon(poly))
    p = PatchCollection(
        polygons,
        facecolor='none',
        edgecolors=color,
        linewidths=thickness,
        alpha=alpha)
    ax.add_collection(p)
    return ax

def get_palette(palette, num_classes):
    """Get palette from various inputs.

    Args:
        palette (list[tuple] | str | tuple | :obj:`Color`): palette inputs.
        num_classes (int): the number of classes.

    Returns:
        list[tuple[int]]: A list of color tuples.
    """
    assert isinstance(num_classes, int)
    if isinstance(palette, list):
        dataset_palette = palette
    elif isinstance(palette, tuple):
        dataset_palette = [palette] * num_classes
    elif palette == 'random' or palette is None:
        state = np.random.get_state()
        # random color
        np.random.seed(42)
        palette = np.random.randint(0, 256, size=(num_classes, 3))
        np.random.set_state(state)
        dataset_palette = [tuple(c) for c in palette]
    elif palette == 'dota':
        from mmrotate.datasets import DOTADataset
        dataset_palette = DOTADataset.PALETTE
    elif palette == 'sar':
        from mmrotate.datasets import SARDataset
        dataset_palette = SARDataset.PALETTE
    elif palette == 'hrsc':
        from mmrotate.datasets import HRSCDataset
        dataset_palette = HRSCDataset.PALETTE
    elif palette == 'hrsc_classwise':
        from mmrotate.datasets import HRSCDataset
        dataset_palette = HRSCDataset.CLASSWISE_PALETTE
    elif mmcv.is_str(palette):
        dataset_palette = [mmcv.color_val(palette)[::-1]] * num_classes
    else:
        raise TypeError(f'Invalid type for palette: {type(palette)}')
    assert len(dataset_palette) >= num_classes, \
        'The length of palette should not be less than `num_classes`.'
    return dataset_palette

The provided code snippet includes necessary dependencies for implementing the `imshow_det_rbboxes` function. Write a Python function `def imshow_det_rbboxes(img, bboxes=None, labels=None, segms=None, class_names=None, score_thr=0, bbox_color='green', text_color='green', mask_color=None, thickness=2, font_size=13, win_name='', show=True, wait_time=0, out_file=None)` to solve the following problem:

Draw bboxes and class labels (with scores) on an image.

Args:
    img (str | ndarray): The image to be displayed.
    bboxes (ndarray): Bounding boxes (with scores), shaped (n, 5) or (n, 6).
    labels (ndarray): Labels of bboxes.
    segms (ndarray | None): Masks, shaped (n, h, w) or None.
    class_names (list[str]): Names of each class.
    score_thr (float): Minimum score of bboxes to be shown. Default: 0.
    bbox_color (list[tuple] | tuple | str | None): Colors of bbox lines.
        If a single color is given, it will be applied to all classes.
        The tuple of color should be in RGB order. Default: 'green'.
    text_color (list[tuple] | tuple | str | None): Colors of texts.
        If a single color is given, it will be applied to all classes.
        The tuple of color should be in RGB order. Default: 'green'.
    mask_color (list[tuple] | tuple | str | None, optional): Colors of masks.
        If a single color is given, it will be applied to all classes.
        The tuple of color should be in RGB order. Default: None.
    thickness (int): Thickness of lines. Default: 2.
    font_size (int): Font size of texts. Default: 13.
    show (bool): Whether to show the image. Default: True.
    win_name (str): The window name. Default: ''.
    wait_time (float): Value of waitKey param. Default: 0.
    out_file (str, optional): The filename to write the image. Default: None.

Returns:
    ndarray: The image with bboxes drawn on it.

Here is the function:

def imshow_det_rbboxes(img,
                       bboxes=None,
                       labels=None,
                       segms=None,
                       class_names=None,
                       score_thr=0,
                       bbox_color='green',
                       text_color='green',
                       mask_color=None,
                       thickness=2,
                       font_size=13,
                       win_name='',
                       show=True,
                       wait_time=0,
                       out_file=None):
    """Draw bboxes and class labels (with scores) on an image.

    Args:
        img (str | ndarray): The image to be displayed.
        bboxes (ndarray): Bounding boxes (with scores), shaped (n, 5) or
            (n, 6).
        labels (ndarray): Labels of bboxes.
        segms (ndarray | None): Masks, shaped (n, h, w) or None.
        class_names (list[str]): Names of each class.
        score_thr (float): Minimum score of bboxes to be shown. Default: 0.
        bbox_color (list[tuple] | tuple | str | None): Colors of bbox lines.
            If a single color is given, it will be applied to all classes.
            The tuple of color should be in RGB order. Default: 'green'.
        text_color (list[tuple] | tuple | str | None): Colors of texts.
            If a single color is given, it will be applied to all classes.
            The tuple of color should be in RGB order. Default: 'green'.
        mask_color (list[tuple] | tuple | str | None, optional): Colors of
            masks. If a single color is given, it will be applied to all
            classes. The tuple of color should be in RGB order. Default: None.
        thickness (int): Thickness of lines. Default: 2.
        font_size (int): Font size of texts. Default: 13.
        show (bool): Whether to show the image. Default: True.
        win_name (str): The window name. Default: ''.
        wait_time (float): Value of waitKey param. Default: 0.
        out_file (str, optional): The filename to write the image.
            Default: None.

    Returns:
        ndarray: The image with bboxes drawn on it.
    """
    assert bboxes is None or bboxes.ndim == 2, \
        f' bboxes ndim should be 2, but its ndim is {bboxes.ndim}.'
    assert labels.ndim == 1, \
        f' labels ndim should be 1, but its ndim is {labels.ndim}.'
    assert bboxes is None or bboxes.shape[1] == 5 or bboxes.shape[1] == 6, \
        f' bboxes.shape[1] should be 5 or 6, but its {bboxes.shape[1]}.'
    assert bboxes is None or bboxes.shape[0] <= labels.shape[0], \
        'labels.shape[0] should not be less than bboxes.shape[0].'
    assert segms is None or segms.shape[0] == labels.shape[0], \
        'segms.shape[0] and labels.shape[0] should have the same length.'
    assert segms is not None or bboxes is not None, \
        'segms and bboxes should not be None at the same time.'

    img = mmcv.imread(img).astype(np.uint8)

    if score_thr > 0:
        assert bboxes is not None and bboxes.shape[1] == 6
        scores = bboxes[:, -1]
        inds = scores > score_thr
        bboxes = bboxes[inds, :]
        labels = labels[inds]
        if segms is not None:
            segms = segms[inds, ...]

    img = mmcv.bgr2rgb(img)
    width, height = img.shape[1], img.shape[0]
    img = np.ascontiguousarray(img)

    fig = plt.figure(win_name, frameon=False)
    plt.title(win_name)
    canvas = fig.canvas
    dpi = fig.get_dpi()
    # add a small EPS to avoid precision lost due to matplotlib's truncation
    # (https://github.com/matplotlib/matplotlib/issues/15363)
    fig.set_size_inches((width + EPS) / dpi, (height + EPS) / dpi)

    # remove white edges by set subplot margin
    plt.subplots_adjust(left=0, right=1, bottom=0, top=1)
    ax = plt.gca()
    ax.axis('off')

    max_label = int(max(labels) if len(labels) > 0 else 0)
    text_palette = palette_val(get_palette(text_color, max_label + 1))
    text_colors = [text_palette[label] for label in labels]

    num_bboxes = 0
    if bboxes is not None:
        num_bboxes = bboxes.shape[0]
        bbox_palette = palette_val(get_palette(bbox_color, max_label + 1))
        colors = [bbox_palette[label] for label in labels[:num_bboxes]]
        draw_rbboxes(ax, bboxes, colors, alpha=0.8, thickness=thickness)

        horizontal_alignment = 'left'
        positions = bboxes[:, :2].astype(np.int32) + thickness
        areas = bboxes[:, 2] * bboxes[:, 3]
        scales = _get_adaptive_scales(areas)
        scores = bboxes[:, 5] if bboxes.shape[1] == 6 else None
        draw_labels(
            ax,
            labels[:num_bboxes],
            positions,
            scores=scores,
            class_names=class_names,
            color=text_colors,
            font_size=font_size,
            scales=scales,
            horizontal_alignment=horizontal_alignment)

    if segms is not None:
        mask_palette = get_palette(mask_color, max_label + 1)
        colors = [mask_palette[label] for label in labels]
        colors = np.array(colors, dtype=np.uint8)
        draw_masks(ax, img, segms, colors, with_edge=True)

        if num_bboxes < segms.shape[0]:
            segms = segms[num_bboxes:]
            horizontal_alignment = 'center'
            areas = []
            positions = []
            for mask in segms:
                _, _, stats, centroids = cv2.connectedComponentsWithStats(
                    mask.astype(np.uint8), connectivity=8)
                largest_id = np.argmax(stats[1:, -1]) + 1
                positions.append(centroids[largest_id])
                areas.append(stats[largest_id, -1])
            areas = np.stack(areas, axis=0)
            scales = _get_adaptive_scales(areas)
            draw_labels(
                ax,
                labels[num_bboxes:],
                positions,
                class_names=class_names,
                color=text_colors,
                font_size=font_size,
                scales=scales,
                horizontal_alignment=horizontal_alignment)

    plt.imshow(img)

    stream, _ = canvas.print_to_buffer()
    buffer = np.frombuffer(stream, dtype='uint8')
    img_rgba = buffer.reshape(height, width, 4)
    rgb, alpha = np.split(img_rgba, [3], axis=2)
    img = rgb.astype('uint8')
    img = mmcv.rgb2bgr(img)

    if show:
        # We do not use cv2 for display because in some cases, opencv will
        # conflict with Qt, it will output a warning: Current thread
        # is not the object's thread. You can refer to
        # https://github.com/opencv/opencv-python/issues/46 for details
        if wait_time == 0:
            plt.show()
        else:
            plt.show(block=False)
            plt.pause(wait_time)
    if out_file is not None:
        mmcv.imwrite(img, out_file)
    plt.close()
    return img
Draw bboxes and class labels (with scores) on an image. Args: img (str | ndarray): The image to be displayed. bboxes (ndarray): Bounding boxes (with scores), shaped (n, 5) or (n, 6). labels (ndarray): Labels of bboxes. segms (ndarray | None): Masks, shaped (n,h,w) or None. class_names (list[str]): Names of each classes. score_thr (float): Minimum score of bboxes to be shown. Default: 0. bbox_color (list[tuple] | tuple | str | None): Colors of bbox lines. If a single color is given, it will be applied to all classes. The tuple of color should be in RGB order. Default: 'green'. text_color (list[tuple] | tuple | str | None): Colors of texts. If a single color is given, it will be applied to all classes. The tuple of color should be in RGB order. Default: 'green'. mask_color (list[tuple] | tuple | str | None, optional): Colors of masks. If a single color is given, it will be applied to all classes. The tuple of color should be in RGB order. Default: None. thickness (int): Thickness of lines. Default: 2. font_size (int): Font size of texts. Default: 13. show (bool): Whether to show the image. Default: True. win_name (str): The window name. Default: ''. wait_time (float): Value of waitKey param. Default: 0. out_file (str, optional): The filename to write the image. Default: None. Returns: ndarray: The image with bboxes drawn on it.
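Inside `draw_rbboxes`, each (cx, cy, w, h, angle) box is expanded into four corners via the rotation trigonometry. A NumPy sketch of that corner computation, checked on the axis-aligned case angle = 0 (the function name `rbbox_corners` is an assumption for illustration):

```python
import numpy as np

def rbbox_corners(bbox):
    # (cx, cy, w, h, angle in radians) -> 4 corners, same order as draw_rbboxes
    xc, yc, w, h, ag = bbox
    wx, wy = w / 2 * np.cos(ag), w / 2 * np.sin(ag)   # half-width vector
    hx, hy = -h / 2 * np.sin(ag), h / 2 * np.cos(ag)  # half-height vector
    return np.array([(xc - wx - hx, yc - wy - hy),
                     (xc + wx - hx, yc + wy - hy),
                     (xc + wx + hx, yc + wy + hy),
                     (xc - wx + hx, yc - wy + hy)])

# 4x2 box centered at the origin, no rotation
print(rbbox_corners((0., 0., 4., 2., 0.)))
# [[-2. -1.] [ 2. -1.] [ 2.  1.] [-2.  1.]]
```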
7,363
from multiprocessing import get_context import numpy as np import torch from mmcv.ops import box_iou_rotated from mmcv.utils import print_log from mmdet.core import average_precision from terminaltables import AsciiTable def tpfp_default(det_bboxes, gt_bboxes, gt_bboxes_ignore=None, iou_thr=0.5, area_ranges=None): """Check if detected bboxes are true positive or false positive. Args: det_bboxes (ndarray): Detected bboxes of this image, of shape (m, 6). gt_bboxes (ndarray): GT bboxes of this image, of shape (n, 5). gt_bboxes_ignore (ndarray): Ignored gt bboxes of this image, of shape (k, 5). Default: None iou_thr (float): IoU threshold to be considered as matched. Default: 0.5. area_ranges (list[tuple] | None): Range of bbox areas to be evaluated, in the format [(min1, max1), (min2, max2), ...]. Default: None. Returns: tuple[np.ndarray]: (tp, fp) whose elements are 0 and 1. The shape of each array is (num_scales, m). """ # an indicator of ignored gts det_bboxes = np.array(det_bboxes) gt_ignore_inds = np.concatenate( (np.zeros(gt_bboxes.shape[0], dtype=bool), np.ones(gt_bboxes_ignore.shape[0], dtype=bool))) # stack gt_bboxes and gt_bboxes_ignore for convenience gt_bboxes = np.vstack((gt_bboxes, gt_bboxes_ignore)) num_dets = det_bboxes.shape[0] num_gts = gt_bboxes.shape[0] if area_ranges is None: area_ranges = [(None, None)] num_scales = len(area_ranges) # tp and fp are of shape (num_scales, num_gts), each row is tp or fp of # a certain scale tp = np.zeros((num_scales, num_dets), dtype=np.float32) fp = np.zeros((num_scales, num_dets), dtype=np.float32) # if there is no gt bboxes in this image, then all det bboxes # within area range are false positives if gt_bboxes.shape[0] == 0: if area_ranges == [(None, None)]: fp[...] 
= 1 else: raise NotImplementedError return tp, fp ious = box_iou_rotated( torch.from_numpy(det_bboxes).float(), torch.from_numpy(gt_bboxes).float()).numpy() # for each det, the max iou with all gts ious_max = ious.max(axis=1) # for each det, which gt overlaps most with it ious_argmax = ious.argmax(axis=1) # sort all dets in descending order by scores sort_inds = np.argsort(-det_bboxes[:, -1]) for k, (min_area, max_area) in enumerate(area_ranges): gt_covered = np.zeros(num_gts, dtype=bool) # if no area range is specified, gt_area_ignore is all False if min_area is None: gt_area_ignore = np.zeros_like(gt_ignore_inds, dtype=bool) else: raise NotImplementedError for i in sort_inds: if ious_max[i] >= iou_thr: matched_gt = ious_argmax[i] if not (gt_ignore_inds[matched_gt] or gt_area_ignore[matched_gt]): if not gt_covered[matched_gt]: gt_covered[matched_gt] = True tp[k, i] = 1 else: fp[k, i] = 1 # otherwise ignore this detected bbox, tp = 0, fp = 0 elif min_area is None: fp[k, i] = 1 else: bbox = det_bboxes[i, :5] area = bbox[2] * bbox[3] if area >= min_area and area < max_area: fp[k, i] = 1 return tp, fp def get_cls_results(det_results, annotations, class_id): """Get det results and gt information of a certain class. Args: det_results (list[list]): Same as `eval_map()`. annotations (list[dict]): Same as `eval_map()`. class_id (int): ID of a specific class. 
Returns: tuple[list[np.ndarray]]: detected bboxes, gt bboxes, ignored gt bboxes """ cls_dets = [img_res[class_id] for img_res in det_results] cls_gts = [] cls_gts_ignore = [] for ann in annotations: gt_inds = ann['labels'] == class_id cls_gts.append(ann['bboxes'][gt_inds, :]) if ann.get('labels_ignore', None) is not None: ignore_inds = ann['labels_ignore'] == class_id cls_gts_ignore.append(ann['bboxes_ignore'][ignore_inds, :]) else: cls_gts_ignore.append(torch.zeros((0, 5), dtype=torch.float64)) return cls_dets, cls_gts, cls_gts_ignore def print_map_summary(mean_ap, results, dataset=None, scale_ranges=None, logger=None): """Print mAP and results of each class. A table will be printed to show the gts/dets/recall/AP of each class and the mAP. Args: mean_ap (float): Calculated from `eval_map()`. results (list[dict]): Calculated from `eval_map()`. dataset (list[str] | str | None): Dataset name or dataset classes. scale_ranges (list[tuple] | None): Range of scales to be evaluated. logger (logging.Logger | str | None): The way to print the mAP summary. See `mmcv.utils.print_log()` for details. Default: None. 
""" if logger == 'silent': return if isinstance(results[0]['ap'], np.ndarray): num_scales = len(results[0]['ap']) else: num_scales = 1 if scale_ranges is not None: assert len(scale_ranges) == num_scales num_classes = len(results) recalls = np.zeros((num_scales, num_classes), dtype=np.float32) aps = np.zeros((num_scales, num_classes), dtype=np.float32) num_gts = np.zeros((num_scales, num_classes), dtype=int) for i, cls_result in enumerate(results): if cls_result['recall'].size > 0: recalls[:, i] = np.array(cls_result['recall'], ndmin=2)[:, -1] aps[:, i] = cls_result['ap'] num_gts[:, i] = cls_result['num_gts'] if dataset is None: label_names = [str(i) for i in range(num_classes)] else: label_names = dataset if not isinstance(mean_ap, list): mean_ap = [mean_ap] header = ['class', 'gts', 'dets', 'recall', 'ap'] for i in range(num_scales): if scale_ranges is not None: print_log(f'Scale range {scale_ranges[i]}', logger=logger) table_data = [header] for j in range(num_classes): row_data = [ label_names[j], num_gts[i, j], results[j]['num_dets'], f'{recalls[i, j]:.3f}', f'{aps[i, j]:.3f}' ] table_data.append(row_data) table_data.append(['mAP', '', '', '', f'{mean_ap[i]:.3f}']) table = AsciiTable(table_data) table.inner_footing_row_border = True print_log('\n' + table.table, logger=logger) The provided code snippet includes necessary dependencies for implementing the `eval_rbbox_map` function. Write a Python function `def eval_rbbox_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, use_07_metric=True, dataset=None, logger=None, nproc=4)` to solve the following problem: Evaluate mAP of a rotated dataset. Args: det_results (list[list]): [[cls1_det, cls2_det, ...], ...]. The outer list indicates images, and the inner list indicates per-class detected bboxes. annotations (list[dict]): Ground truth annotations where each item of the list indicates an image. 
Keys of annotations are: - `bboxes`: numpy array of shape (n, 5) - `labels`: numpy array of shape (n, ) - `bboxes_ignore` (optional): numpy array of shape (k, 5) - `labels_ignore` (optional): numpy array of shape (k, ) scale_ranges (list[tuple] | None): Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), ...]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None. iou_thr (float): IoU threshold to be considered as matched. Default: 0.5. use_07_metric (bool): Whether to use the voc07 metric. dataset (list[str] | str | None): Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. "voc07", "imagenet_det", etc. Default: None. logger (logging.Logger | str | None): The way to print the mAP summary. See `mmcv.utils.print_log()` for details. Default: None. nproc (int): Processes used for computing TP and FP. Default: 4. Returns: tuple: (mAP, [dict, dict, ...]) Here is the function: def eval_rbbox_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, use_07_metric=True, dataset=None, logger=None, nproc=4): """Evaluate mAP of a rotated dataset. Args: det_results (list[list]): [[cls1_det, cls2_det, ...], ...]. The outer list indicates images, and the inner list indicates per-class detected bboxes. annotations (list[dict]): Ground truth annotations where each item of the list indicates an image. Keys of annotations are: - `bboxes`: numpy array of shape (n, 5) - `labels`: numpy array of shape (n, ) - `bboxes_ignore` (optional): numpy array of shape (k, 5) - `labels_ignore` (optional): numpy array of shape (k, ) scale_ranges (list[tuple] | None): Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), ...]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None. iou_thr (float): IoU threshold to be considered as matched. Default: 0.5. use_07_metric (bool): Whether to use the voc07 metric. 
dataset (list[str] | str | None): Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. "voc07", "imagenet_det", etc. Default: None. logger (logging.Logger | str | None): The way to print the mAP summary. See `mmcv.utils.print_log()` for details. Default: None. nproc (int): Processes used for computing TP and FP. Default: 4. Returns: tuple: (mAP, [dict, dict, ...]) """ assert len(det_results) == len(annotations) num_imgs = len(det_results) num_scales = len(scale_ranges) if scale_ranges is not None else 1 num_classes = len(det_results[0]) # positive class num area_ranges = ([(rg[0]**2, rg[1]**2) for rg in scale_ranges] if scale_ranges is not None else None) pool = get_context('spawn').Pool(nproc) eval_results = [] for i in range(num_classes): # get gt and det bboxes of this class cls_dets, cls_gts, cls_gts_ignore = get_cls_results( det_results, annotations, i) # compute tp and fp for each image with multiple processes tpfp = pool.starmap( tpfp_default, zip(cls_dets, cls_gts, cls_gts_ignore, [iou_thr for _ in range(num_imgs)], [area_ranges for _ in range(num_imgs)])) tp, fp = tuple(zip(*tpfp)) # calculate gt number of each scale # ignored gts or gts beyond the specific scale are not counted num_gts = np.zeros(num_scales, dtype=int) for _, bbox in enumerate(cls_gts): if area_ranges is None: num_gts[0] += bbox.shape[0] else: gt_areas = bbox[:, 2] * bbox[:, 3] for k, (min_area, max_area) in enumerate(area_ranges): num_gts[k] += np.sum((gt_areas >= min_area) & (gt_areas < max_area)) # sort all det bboxes by score, also sort tp and fp cls_dets = np.vstack(cls_dets) num_dets = cls_dets.shape[0] sort_inds = np.argsort(-cls_dets[:, -1]) tp = np.hstack(tp)[:, sort_inds] fp = np.hstack(fp)[:, sort_inds] # calculate recall and precision with tp and fp tp = np.cumsum(tp, axis=1) fp = np.cumsum(fp, axis=1) eps = np.finfo(np.float32).eps recalls = tp / np.maximum(num_gts[:, np.newaxis], eps) precisions = tp / np.maximum((tp + fp), 
eps) # calculate AP if scale_ranges is None: recalls = recalls[0, :] precisions = precisions[0, :] num_gts = num_gts.item() mode = 'area' if not use_07_metric else '11points' ap = average_precision(recalls, precisions, mode) eval_results.append({ 'num_gts': num_gts, 'num_dets': num_dets, 'recall': recalls, 'precision': precisions, 'ap': ap }) pool.close() if scale_ranges is not None: # shape (num_classes, num_scales) all_ap = np.vstack([cls_result['ap'] for cls_result in eval_results]) all_num_gts = np.vstack( [cls_result['num_gts'] for cls_result in eval_results]) mean_ap = [] for i in range(num_scales): if np.any(all_num_gts[:, i] > 0): mean_ap.append(all_ap[all_num_gts[:, i] > 0, i].mean()) else: mean_ap.append(0.0) else: aps = [] for cls_result in eval_results: if cls_result['num_gts'] > 0: aps.append(cls_result['ap']) mean_ap = np.array(aps).mean().item() if aps else 0.0 print_map_summary( mean_ap, eval_results, dataset, area_ranges, logger=logger) return mean_ap, eval_results
Evaluate mAP of a rotated dataset. Args: det_results (list[list]): [[cls1_det, cls2_det, ...], ...]. The outer list indicates images, and the inner list indicates per-class detected bboxes. annotations (list[dict]): Ground truth annotations where each item of the list indicates an image. Keys of annotations are: - `bboxes`: numpy array of shape (n, 5) - `labels`: numpy array of shape (n, ) - `bboxes_ignore` (optional): numpy array of shape (k, 5) - `labels_ignore` (optional): numpy array of shape (k, ) scale_ranges (list[tuple] | None): Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), ...]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None. iou_thr (float): IoU threshold to be considered as matched. Default: 0.5. use_07_metric (bool): Whether to use the voc07 metric. dataset (list[str] | str | None): Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. "voc07", "imagenet_det", etc. Default: None. logger (logging.Logger | str | None): The way to print the mAP summary. See `mmcv.utils.print_log()` for details. Default: None. nproc (int): Processes used for computing TP and FP. Default: 4. Returns: tuple: (mAP, [dict, dict, ...])
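The core bookkeeping inside `eval_rbbox_map` is the cumulative-sum step: tp/fp flags (already sorted by score) are accumulated, then normalized into recall and precision curves. A small self-contained numpy illustration with invented flags:

```python
import numpy as np

# Invented tp/fp flags for 4 detections at one scale, sorted by score.
tp = np.array([[1, 1, 0, 1]], dtype=np.float32)  # (num_scales, num_dets)
fp = np.array([[0, 0, 1, 0]], dtype=np.float32)
num_gts = np.array([5])                          # gt count per scale

eps = np.finfo(np.float32).eps
tp_cum = np.cumsum(tp, axis=1)
fp_cum = np.cumsum(fp, axis=1)
recalls = tp_cum / np.maximum(num_gts[:, np.newaxis], eps)
precisions = tp_cum / np.maximum(tp_cum + fp_cum, eps)
```

The curves are monotone in detections: each new true positive raises recall, each false positive lowers precision, and the AP routine then integrates precision over recall.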
7,364
import torch
from mmcv.ops import nms_rotated
The provided code snippet includes necessary dependencies for implementing the `multiclass_nms_rotated` function. Write a Python function `def multiclass_nms_rotated(multi_bboxes, multi_scores, score_thr, nms, max_num=-1, score_factors=None, return_inds=False)` to solve the following problem:
NMS for multi-class bboxes.

Args:
    multi_bboxes (torch.Tensor): shape (n, #class*5) or (n, 5)
    multi_scores (torch.Tensor): shape (n, #class), where the last column
        contains scores of the background class, but this will be ignored.
    score_thr (float): bbox threshold, bboxes with scores lower than it
        will not be considered.
    nms (dict): Config of NMS.
    max_num (int, optional): if there are more than max_num bboxes after
        NMS, only top max_num will be kept. Default to -1.
    score_factors (Tensor, optional): The factors multiplied to scores
        before applying NMS. Default to None.
    return_inds (bool, optional): Whether to return the indices of kept
        bboxes. Default to False.

Returns:
    tuple (dets, labels, indices (optional)): tensors of shape (k, 5), \
        (k), and (k). Dets are boxes with scores. Labels are 0-based.
Here is the function:
def multiclass_nms_rotated(multi_bboxes,
                           multi_scores,
                           score_thr,
                           nms,
                           max_num=-1,
                           score_factors=None,
                           return_inds=False):
    """NMS for multi-class bboxes.

    Args:
        multi_bboxes (torch.Tensor): shape (n, #class*5) or (n, 5)
        multi_scores (torch.Tensor): shape (n, #class), where the last column
            contains scores of the background class, but this will be ignored.
        score_thr (float): bbox threshold, bboxes with scores lower than it
            will not be considered.
        nms (dict): Config of NMS.
        max_num (int, optional): if there are more than max_num bboxes after
            NMS, only top max_num will be kept. Default to -1.
        score_factors (Tensor, optional): The factors multiplied to scores
            before applying NMS. Default to None.
        return_inds (bool, optional): Whether to return the indices of kept
            bboxes. Default to False.
Returns: tuple (dets, labels, indices (optional)): tensors of shape (k, 5), \ (k), and (k). Dets are boxes with scores. Labels are 0-based. """ num_classes = multi_scores.size(1) - 1 # exclude background category if multi_bboxes.shape[1] > 5: bboxes = multi_bboxes.view(multi_scores.size(0), -1, 5) else: bboxes = multi_bboxes[:, None].expand( multi_scores.size(0), num_classes, 5) scores = multi_scores[:, :-1] labels = torch.arange(num_classes, dtype=torch.long, device=scores.device) labels = labels.view(1, -1).expand_as(scores) bboxes = bboxes.reshape(-1, 5) scores = scores.reshape(-1) labels = labels.reshape(-1) # remove low scoring boxes valid_mask = scores > score_thr if score_factors is not None: # expand the shape to match original shape of score score_factors = score_factors.view(-1, 1).expand( multi_scores.size(0), num_classes) score_factors = score_factors.reshape(-1) scores = scores * score_factors inds = valid_mask.nonzero(as_tuple=False).squeeze(1) bboxes, scores, labels = bboxes[inds], scores[inds], labels[inds] if bboxes.numel() == 0: dets = torch.cat([bboxes, scores[:, None]], -1) if return_inds: return dets, labels, inds else: return dets, labels # Strictly, the maximum coordinates of the rotating box (x,y,w,h,a) # should be calculated by polygon coordinates. # But the conversion from rbbox to polygon will slow down the speed. 
# So we use max(x,y) + max(w,h) as max coordinate # which is larger than polygon max coordinate # max(x1, y1, x2, y2,x3, y3, x4, y4) max_coordinate = bboxes[:, :2].max() + bboxes[:, 2:4].max() offsets = labels.to(bboxes) * (max_coordinate + 1) if bboxes.size(-1) == 5: bboxes_for_nms = bboxes.clone() bboxes_for_nms[:, :2] = bboxes_for_nms[:, :2] + offsets[:, None] else: bboxes_for_nms = bboxes + offsets[:, None] _, keep = nms_rotated(bboxes_for_nms, scores, nms.iou_thr) if max_num > 0: keep = keep[:max_num] bboxes = bboxes[keep] scores = scores[keep] labels = labels[keep] if return_inds: return torch.cat([bboxes, scores[:, None]], 1), labels, keep else: return torch.cat([bboxes, scores[:, None]], 1), labels
NMS for multi-class bboxes.

Args:
    multi_bboxes (torch.Tensor): shape (n, #class*5) or (n, 5)
    multi_scores (torch.Tensor): shape (n, #class), where the last column
        contains scores of the background class, but this will be ignored.
    score_thr (float): bbox threshold, bboxes with scores lower than it
        will not be considered.
    nms (dict): Config of NMS.
    max_num (int, optional): if there are more than max_num bboxes after
        NMS, only top max_num will be kept. Default to -1.
    score_factors (Tensor, optional): The factors multiplied to scores
        before applying NMS. Default to None.
    return_inds (bool, optional): Whether to return the indices of kept
        bboxes. Default to False.

Returns:
    tuple (dets, labels, indices (optional)): tensors of shape (k, 5), \
        (k), and (k). Dets are boxes with scores. Labels are 0-based.
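The per-class offset trick in the function above is worth isolating: adding a class-dependent shift larger than any coordinate guarantees that boxes of different classes can never overlap, so a single batched NMS call behaves like per-class NMS. A toy numpy sketch with fabricated boxes in the (cx, cy, w, h, angle) layout:

```python
import numpy as np

# Two heavily overlapping rotated boxes of different classes (made up).
bboxes = np.array([[10., 10., 4., 4., 0.],
                   [11., 10., 4., 4., 0.]])
labels = np.array([0, 1])

# max(x, y) + max(w, h): an upper bound on any box coordinate.
max_coordinate = bboxes[:, :2].max() + bboxes[:, 2:4].max()
offsets = labels * (max_coordinate + 1)
bboxes_for_nms = bboxes.copy()
bboxes_for_nms[:, :2] += offsets[:, None]  # classes now occupy disjoint regions
```

After the shift, the class-1 box is translated far away from the class-0 box, so rotated NMS can no longer suppress one with the other.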
7,365
import torch
from mmcv.ops import nms_rotated
The provided code snippet includes necessary dependencies for implementing the `aug_multiclass_nms_rotated` function. Write a Python function `def aug_multiclass_nms_rotated(merged_bboxes, merged_labels, score_thr, nms, max_num, classes)` to solve the following problem:
NMS for aug multi-class bboxes.

Args:
    merged_bboxes (torch.Tensor): shape (n, 6), merged bboxes with scores
        in the last column.
    merged_labels (torch.Tensor): shape (n, ), 0-based labels of the
        merged bboxes.
    score_thr (float): bbox threshold, bboxes with scores lower than it
        will not be considered.
    nms (dict): Config of NMS.
    max_num (int): if there are more than max_num bboxes after NMS,
        only top max_num will be kept.
    classes (int): number of classes.

Returns:
    tuple (dets, labels): tensors of shape (k, 6) and (k). Dets are
        boxes with scores. Labels are 0-based.
Here is the function:
def aug_multiclass_nms_rotated(merged_bboxes, merged_labels, score_thr, nms,
                               max_num, classes):
    """NMS for aug multi-class bboxes.

    Args:
        merged_bboxes (torch.Tensor): shape (n, 6), merged bboxes with scores
            in the last column.
        merged_labels (torch.Tensor): shape (n, ), 0-based labels of the
            merged bboxes.
        score_thr (float): bbox threshold, bboxes with scores lower than it
            will not be considered.
        nms (dict): Config of NMS.
        max_num (int): if there are more than max_num bboxes after NMS,
            only top max_num will be kept.
        classes (int): number of classes.

    Returns:
        tuple (dets, labels): tensors of shape (k, 6) and (k). Dets are
            boxes with scores. Labels are 0-based.
""" bboxes, labels = [], [] for cls in range(classes): cls_bboxes = merged_bboxes[merged_labels == cls] inds = cls_bboxes[:, -1] > score_thr if len(inds) == 0: continue cur_bboxes = cls_bboxes[inds, :] cls_dets, _ = nms_rotated(cur_bboxes[:, :5], cur_bboxes[:, -1], nms.iou_thr) cls_labels = merged_bboxes.new_full((cls_dets.shape[0], ), cls, dtype=torch.long) if cls_dets.size()[0] == 0: continue bboxes.append(cls_dets) labels.append(cls_labels) if bboxes: bboxes = torch.cat(bboxes) labels = torch.cat(labels) if bboxes.shape[0] > max_num: _, _inds = bboxes[:, -1].sort(descending=True) _inds = _inds[:max_num] bboxes = bboxes[_inds] labels = labels[_inds] else: bboxes = merged_bboxes.new_zeros((0, merged_bboxes.size(-1))) labels = merged_bboxes.new_zeros((0, ), dtype=torch.long) return bboxes, labels
NMS for aug multi-class bboxes.

Args:
    merged_bboxes (torch.Tensor): shape (n, 6), merged bboxes with scores
        in the last column.
    merged_labels (torch.Tensor): shape (n, ), 0-based labels of the
        merged bboxes.
    score_thr (float): bbox threshold, bboxes with scores lower than it
        will not be considered.
    nms (dict): Config of NMS.
    max_num (int): if there are more than max_num bboxes after NMS,
        only top max_num will be kept.
    classes (int): number of classes.

Returns:
    tuple (dets, labels): tensors of shape (k, 6) and (k). Dets are
        boxes with scores. Labels are 0-based.
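After the per-class results are concatenated, the function truncates to the `max_num` highest-scoring detections via a descending sort on the score column. The same step in numpy, with invented detections:

```python
import numpy as np

# Invented merged detections: (cx, cy, w, h, angle, score).
dets = np.array([[0., 0., 1., 1., 0., 0.2],
                 [0., 0., 1., 1., 0., 0.9],
                 [0., 0., 1., 1., 0., 0.5]])
max_num = 2

order = np.argsort(-dets[:, -1])[:max_num]  # descending by score, truncate
dets = dets[order]
```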
7,366
import copy import platform from mmcv.utils import build_from_cfg from mmdet.datasets import DATASETS, PIPELINES from mmdet.datasets.builder import _concat_dataset ROTATED_DATASETS = DATASETS def build_dataset(cfg, default_args=None): from mmdet.datasets.dataset_wrappers import (ClassBalancedDataset, ConcatDataset, MultiImageMixDataset, RepeatDataset) if isinstance(cfg, (list, tuple)): dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) elif cfg['type'] == 'ConcatDataset': dataset = ConcatDataset( [build_dataset(c, default_args) for c in cfg['datasets']], cfg.get('separate_eval', True)) elif cfg['type'] == 'RepeatDataset': dataset = RepeatDataset( build_dataset(cfg['dataset'], default_args), cfg['times']) elif cfg['type'] == 'ClassBalancedDataset': dataset = ClassBalancedDataset( build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) elif cfg['type'] == 'MultiImageMixDataset': cp_cfg = copy.deepcopy(cfg) cp_cfg['dataset'] = build_dataset(cp_cfg['dataset']) cp_cfg.pop('type') dataset = MultiImageMixDataset(**cp_cfg) elif isinstance(cfg.get('ann_file'), (list, tuple)): dataset = _concat_dataset(cfg, default_args) else: dataset = build_from_cfg(cfg, ROTATED_DATASETS, default_args) return dataset
null
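`build_dataset` ultimately defers to `build_from_cfg`, which looks up the config's `type` string in a registry and instantiates the matching class with the remaining keys. A hypothetical miniature of that pattern (the `ToyDataset` class and `train.txt` path are invented for illustration, not part of mmrotate):

```python
# Minimal registry/build pattern, sketching what build_from_cfg does.
REGISTRY = {}

def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls

@register
class ToyDataset:
    def __init__(self, ann_file):
        self.ann_file = ann_file

def build_from_cfg(cfg):
    cfg = dict(cfg)                   # don't mutate the caller's config
    cls = REGISTRY[cfg.pop('type')]
    return cls(**cfg)                 # remaining keys become constructor kwargs

ds = build_from_cfg(dict(type='ToyDataset', ann_file='train.txt'))
```

The wrapper cases in `build_dataset` (`ConcatDataset`, `RepeatDataset`, ...) recurse on their inner configs before falling through to this leaf-level construction.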
7,367
import glob
import os
import os.path as osp
import re
import tempfile
import time
import warnings
import zipfile
from collections import defaultdict
from functools import partial
import mmcv
import numpy as np
import torch
from mmcv.ops import nms_rotated
from mmdet.datasets.custom import CustomDataset
from mmrotate.core import eval_rbbox_map, obb2poly_np, poly2obb_np
from .builder import ROTATED_DATASETS
The provided code snippet includes necessary dependencies for implementing the `_merge_func` function. Write a Python function `def _merge_func(info, CLASSES, iou_thr)` to solve the following problem:
Merging patch bboxes into full image.

Args:
    info (tuple): (img_id, label_dets), where label_dets is a list of
        per-patch detection arrays whose first column is the class label.
    CLASSES (list): Label category.
    iou_thr (float): Threshold of IoU.
Here is the function:
def _merge_func(info, CLASSES, iou_thr):
    """Merging patch bboxes into full image.

    Args:
        info (tuple): (img_id, label_dets), where label_dets is a list of
            per-patch detection arrays whose first column is the class label.
        CLASSES (list): Label category.
        iou_thr (float): Threshold of IoU.
    """
    img_id, label_dets = info
    label_dets = np.concatenate(label_dets, axis=0)
    labels, dets = label_dets[:, 0], label_dets[:, 1:]
    big_img_results = []
    for i in range(len(CLASSES)):
        if len(dets[labels == i]) == 0:
            big_img_results.append(dets[labels == i])
        else:
            try:
                cls_dets = torch.from_numpy(dets[labels == i]).cuda()
            except:  # noqa: E722
                cls_dets = torch.from_numpy(dets[labels == i])
            nms_dets, keep_inds = nms_rotated(cls_dets[:, :5],
                                              cls_dets[:, -1], iou_thr)
            big_img_results.append(nms_dets.cpu().numpy())
    return img_id, big_img_results
Merging patch bboxes into full image.

Args:
    info (tuple): (img_id, label_dets), where label_dets is a list of
        per-patch detection arrays whose first column is the class label.
    CLASSES (list): Label category.
    iou_thr (float): Threshold of IoU.
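The merge assumes each patch contributes rows of the form `[label, *bbox, score]`; the first column is peeled off to bucket detections per class before rotated NMS. A numpy sketch with fabricated rows:

```python
import numpy as np

# Per-patch arrays: [label, cx, cy, w, h, angle, score] (made-up values).
patch1 = np.array([[0., 10., 10., 4., 4., 0., 0.9]])
patch2 = np.array([[1., 20., 20., 4., 4., 0., 0.8]])

label_dets = np.concatenate([patch1, patch2], axis=0)
labels, dets = label_dets[:, 0], label_dets[:, 1:]
per_class = [dets[labels == i] for i in range(2)]  # one bucket per class
```

Each bucket then goes through `nms_rotated` independently, so overlapping boxes of different classes never suppress each other.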
7,368
import torch
import torch.nn as nn
from mmdet.models.losses.utils import weighted_loss
from mmrotate.core import GaussianMixture, gt2gaussian
from ..builder import ROTATED_LOSSES

def kld_single2single(g1, g2):
    """Compute Kullback-Leibler Divergence.

    Args:
        g1 (:obj:`GaussianMixture`): Gaussian distribution 1.
        g2 (tuple[torch.Tensor]): Gaussian distribution 2, as (mu, var).

    Returns:
        torch.Tensor: Kullback-Leibler Divergence.
    """
    p_mu = g1.mu
    p_var = g1.var
    assert p_mu.dim() == 3 and p_mu.size()[1] == 1
    assert p_var.dim() == 4 and p_var.size()[1] == 1
    p_mu = p_mu.squeeze(1)
    p_var = p_var.squeeze(1)
    t_mu, t_var = g2
    delta = (p_mu - t_mu).unsqueeze(-1)
    t_inv = torch.inverse(t_var)
    term1 = delta.transpose(-1, -2).matmul(t_inv).matmul(delta).squeeze(-1)
    term2 = torch.diagonal(
        t_inv.matmul(p_var), dim1=-2, dim2=-1).sum(dim=-1, keepdim=True) + \
        torch.log(torch.det(t_var) / torch.det(p_var)).reshape(-1, 1)
    return 0.5 * (term1 + term2) - 1
The provided code snippet includes necessary dependencies for implementing the `kld_loss` function. Write a Python function `def kld_loss(pred, target, eps=1e-6)` to solve the following problem:
Kullback-Leibler Divergence loss.

Args:
    pred (torch.Tensor): Convexes with shape (N, 9, 2).
    target (torch.Tensor): Polygons with shape (N, 4, 2).
    eps (float): Defaults to 1e-6.

Returns:
    torch.Tensor: Kullback-Leibler Divergence loss.
Here is the function:
def kld_loss(pred, target, eps=1e-6):
    """Kullback-Leibler Divergence loss.

    Args:
        pred (torch.Tensor): Convexes with shape (N, 9, 2).
        target (torch.Tensor): Polygons with shape (N, 4, 2).
        eps (float): Defaults to 1e-6.

    Returns:
        torch.Tensor: Kullback-Leibler Divergence loss.
    """
    pred = pred.reshape(-1, 9, 2)
    target = target.reshape(-1, 4, 2)
    assert pred.size()[0] == target.size()[0] and target.numel() > 0
    gmm = GaussianMixture(n_components=1, requires_grad=True)
    gmm.fit(pred)
    kld = kld_single2single(gmm, gt2gaussian(target))
    kl_agg = kld.clamp(min=eps)
    loss = 1 - 1 / (2 + torch.sqrt(kl_agg))
    return loss
Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Convexes with shape (N, 9, 2). target (torch.Tensor): Polygons with shape (N, 4, 2). eps (float): Defaults to 1e-6. Returns: torch.Tensor: Kullback-Leibler Divergence loss.
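The closed form in `kld_single2single` is the standard KL divergence between two d-dimensional Gaussians, with the `- 1` constant being `- d/2` for d = 2. A plain numpy re-derivation can sanity-check it: the divergence of a Gaussian from itself is zero, and a unit mean shift under identity covariance gives exactly 1/2.

```python
import numpy as np

def kl_gauss(mu_p, var_p, mu_t, var_t):
    """KL(N(mu_p, var_p) || N(mu_t, var_t)) for d-dimensional Gaussians."""
    d = mu_p.shape[0]
    t_inv = np.linalg.inv(var_t)
    delta = (mu_p - mu_t)[:, None]
    term1 = float(delta.T @ t_inv @ delta)
    term2 = np.trace(t_inv @ var_p) + np.log(
        np.linalg.det(var_t) / np.linalg.det(var_p))
    return 0.5 * (term1 + term2) - d / 2

mu, var = np.zeros(2), np.eye(2)
kl_same = kl_gauss(mu, var, mu, var)                   # identical -> 0
kl_shift = kl_gauss(np.array([1., 0.]), var, mu, var)  # unit mean shift
```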
7,369
from copy import deepcopy import torch from mmdet.models.losses.utils import weighted_loss from torch import nn from ..builder import ROTATED_LOSSES The provided code snippet includes necessary dependencies for implementing the `xy_wh_r_2_xy_sigma` function. Write a Python function `def xy_wh_r_2_xy_sigma(xywhr)` to solve the following problem: Convert oriented bounding box to 2-D Gaussian distribution. Args: xywhr (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2). Here is the function: def xy_wh_r_2_xy_sigma(xywhr): """Convert oriented bounding box to 2-D Gaussian distribution. Args: xywhr (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2). """ _shape = xywhr.shape assert _shape[-1] == 5 xy = xywhr[..., :2] wh = xywhr[..., 2:4].clamp(min=1e-7, max=1e7).reshape(-1, 2) r = xywhr[..., 4] cos_r = torch.cos(r) sin_r = torch.sin(r) R = torch.stack((cos_r, -sin_r, sin_r, cos_r), dim=-1).reshape(-1, 2, 2) S = 0.5 * torch.diag_embed(wh) sigma = R.bmm(S.square()).bmm(R.permute(0, 2, 1)).reshape(_shape[:-1] + (2, 2)) return xy, sigma
Convert oriented bounding box to 2-D Gaussian distribution. Args: xywhr (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2).
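The covariance built above is `R (S^2) R^T` with `S = 0.5 * diag(w, h)`; with zero rotation this collapses to `diag((w/2)^2, (h/2)^2)`, which is easy to verify numerically:

```python
import numpy as np

w, h, r = 4.0, 2.0, 0.0  # toy axis-aligned box (made-up values)
R = np.array([[np.cos(r), -np.sin(r)],
              [np.sin(r), np.cos(r)]])
S = 0.5 * np.diag([w, h])
sigma = R @ (S @ S) @ R.T  # same per-box construction as in the function
```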
7,370
from copy import deepcopy import torch from mmdet.models.losses.utils import weighted_loss from torch import nn from ..builder import ROTATED_LOSSES The provided code snippet includes necessary dependencies for implementing the `xy_stddev_pearson_2_xy_sigma` function. Write a Python function `def xy_stddev_pearson_2_xy_sigma(xy_stddev_pearson)` to solve the following problem: Convert oriented bounding box from the Pearson coordinate system to 2-D Gaussian distribution. Args: xy_stddev_pearson (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2). Here is the function: def xy_stddev_pearson_2_xy_sigma(xy_stddev_pearson): """Convert oriented bounding box from the Pearson coordinate system to 2-D Gaussian distribution. Args: xy_stddev_pearson (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2). """ _shape = xy_stddev_pearson.shape assert _shape[-1] == 5 xy = xy_stddev_pearson[..., :2] stddev = xy_stddev_pearson[..., 2:4] pearson = xy_stddev_pearson[..., 4].clamp(min=1e-7 - 1, max=1 - 1e-7) covar = pearson * stddev.prod(dim=-1) var = stddev.square() sigma = torch.stack((var[..., 0], covar, covar, var[..., 1]), dim=-1).reshape(_shape[:-1] + (2, 2)) return xy, sigma
Convert oriented bounding box from the Pearson coordinate system to 2-D Gaussian distribution. Args: xy_stddev_pearson (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2).
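In this parameterization the off-diagonal covariance is the Pearson correlation times both standard deviations; clamping `|rho| < 1` (as the function does, with 1e-7 margins) keeps the resulting matrix positive definite. A quick numeric check with invented values:

```python
import numpy as np

sx, sy, rho = 2.0, 3.0, 0.5          # stddevs and correlation (made up)
covar = rho * sx * sy
sigma = np.array([[sx**2, covar],
                  [covar, sy**2]])   # same layout the function stacks
eigvals = np.linalg.eigvalsh(sigma)
```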
7,371
from copy import deepcopy import torch from mmdet.models.losses.utils import weighted_loss from torch import nn from ..builder import ROTATED_LOSSES def postprocess(distance, fun='log1p', tau=1.0): """Convert distance to loss. Args: distance (torch.Tensor) fun (str, optional): The function applied to distance. Defaults to 'log1p'. tau (float, optional): Defaults to 1.0. Returns: loss (torch.Tensor) """ if fun == 'log1p': distance = torch.log1p(distance) elif fun == 'sqrt': distance = torch.sqrt(distance.clamp(1e-7)) elif fun == 'none': pass else: raise ValueError(f'Invalid non-linear function {fun}') if tau >= 1.0: return 1 - 1 / (tau + distance) else: return distance The provided code snippet includes necessary dependencies for implementing the `gwd_loss` function. Write a Python function `def gwd_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, normalize=True)` to solve the following problem: Gaussian Wasserstein distance loss. Derivation and simplification: Given any positive-definite symmetrical 2*2 matrix Z: :math:`Tr(Z^{1/2}) = λ_1^{1/2} + λ_2^{1/2}` where :math:`λ_1` and :math:`λ_2` are the eigen values of Z Meanwhile we have: :math:`Tr(Z) = λ_1 + λ_2` :math:`det(Z) = λ_1 * λ_2` Combination with following formula: :math:`(λ_1^{1/2}+λ_2^{1/2})^2 = λ_1+λ_2+2 *(λ_1 * λ_2)^{1/2}` Yield: :math:`Tr(Z^{1/2}) = (Tr(Z) + 2 * (det(Z))^{1/2})^{1/2}` For gwd loss the frustrating coupling part is: :math:`Tr((Σ_p^{1/2} * Σ_t * Σp^{1/2})^{1/2})` Assuming :math:`Z = Σ_p^{1/2} * Σ_t * Σ_p^{1/2}` then: :math:`Tr(Z) = Tr(Σ_p^{1/2} * Σ_t * Σ_p^{1/2}) = Tr(Σ_p^{1/2} * Σ_p^{1/2} * Σ_t) = Tr(Σ_p * Σ_t)` :math:`det(Z) = det(Σ_p^{1/2} * Σ_t * Σ_p^{1/2}) = det(Σ_p^{1/2}) * det(Σ_t) * det(Σ_p^{1/2}) = det(Σ_p * Σ_t)` and thus we can rewrite the coupling part as: :math:`Tr(Z^{1/2}) = (Tr(Z) + 2 * (det(Z))^{1/2})^{1/2}` :math:`Tr((Σ_p^{1/2} * Σ_t * Σ_p^{1/2})^{1/2}) = (Tr(Σ_p * Σ_t) + 2 * (det(Σ_p * Σ_t))^{1/2})^{1/2}` Args: pred (torch.Tensor): Predicted bboxes. 
target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. normalize (bool): Whether to normalize the distance. Defaults to True. Returns: loss (torch.Tensor) Here is the function: def gwd_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, normalize=True): """Gaussian Wasserstein distance loss. Derivation and simplification: Given any positive-definite symmetrical 2*2 matrix Z: :math:`Tr(Z^{1/2}) = λ_1^{1/2} + λ_2^{1/2}` where :math:`λ_1` and :math:`λ_2` are the eigen values of Z Meanwhile we have: :math:`Tr(Z) = λ_1 + λ_2` :math:`det(Z) = λ_1 * λ_2` Combination with following formula: :math:`(λ_1^{1/2}+λ_2^{1/2})^2 = λ_1+λ_2+2 *(λ_1 * λ_2)^{1/2}` Yield: :math:`Tr(Z^{1/2}) = (Tr(Z) + 2 * (det(Z))^{1/2})^{1/2}` For gwd loss the frustrating coupling part is: :math:`Tr((Σ_p^{1/2} * Σ_t * Σp^{1/2})^{1/2})` Assuming :math:`Z = Σ_p^{1/2} * Σ_t * Σ_p^{1/2}` then: :math:`Tr(Z) = Tr(Σ_p^{1/2} * Σ_t * Σ_p^{1/2}) = Tr(Σ_p^{1/2} * Σ_p^{1/2} * Σ_t) = Tr(Σ_p * Σ_t)` :math:`det(Z) = det(Σ_p^{1/2} * Σ_t * Σ_p^{1/2}) = det(Σ_p^{1/2}) * det(Σ_t) * det(Σ_p^{1/2}) = det(Σ_p * Σ_t)` and thus we can rewrite the coupling part as: :math:`Tr(Z^{1/2}) = (Tr(Z) + 2 * (det(Z))^{1/2})^{1/2}` :math:`Tr((Σ_p^{1/2} * Σ_t * Σ_p^{1/2})^{1/2}) = (Tr(Σ_p * Σ_t) + 2 * (det(Σ_p * Σ_t))^{1/2})^{1/2}` Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. normalize (bool): Whether to normalize the distance. Defaults to True. 
Returns: loss (torch.Tensor) """ xy_p, Sigma_p = pred xy_t, Sigma_t = target xy_distance = (xy_p - xy_t).square().sum(dim=-1) whr_distance = Sigma_p.diagonal(dim1=-2, dim2=-1).sum(dim=-1) whr_distance = whr_distance + Sigma_t.diagonal( dim1=-2, dim2=-1).sum(dim=-1) _t_tr = (Sigma_p.bmm(Sigma_t)).diagonal(dim1=-2, dim2=-1).sum(dim=-1) _t_det_sqrt = (Sigma_p.det() * Sigma_t.det()).clamp(1e-7).sqrt() whr_distance = whr_distance + (-2) * ( (_t_tr + 2 * _t_det_sqrt).clamp(1e-7).sqrt()) distance = (xy_distance + alpha * alpha * whr_distance).clamp(1e-7).sqrt() if normalize: scale = 2 * ( _t_det_sqrt.clamp(1e-7).sqrt().clamp(1e-7).sqrt()).clamp(1e-7) distance = distance / scale return postprocess(distance, fun=fun, tau=tau)
Gaussian Wasserstein distance loss. Derivation and simplification: Given any positive-definite symmetrical 2*2 matrix Z: :math:`Tr(Z^{1/2}) = λ_1^{1/2} + λ_2^{1/2}` where :math:`λ_1` and :math:`λ_2` are the eigen values of Z Meanwhile we have: :math:`Tr(Z) = λ_1 + λ_2` :math:`det(Z) = λ_1 * λ_2` Combination with following formula: :math:`(λ_1^{1/2}+λ_2^{1/2})^2 = λ_1+λ_2+2 *(λ_1 * λ_2)^{1/2}` Yield: :math:`Tr(Z^{1/2}) = (Tr(Z) + 2 * (det(Z))^{1/2})^{1/2}` For gwd loss the frustrating coupling part is: :math:`Tr((Σ_p^{1/2} * Σ_t * Σp^{1/2})^{1/2})` Assuming :math:`Z = Σ_p^{1/2} * Σ_t * Σ_p^{1/2}` then: :math:`Tr(Z) = Tr(Σ_p^{1/2} * Σ_t * Σ_p^{1/2}) = Tr(Σ_p^{1/2} * Σ_p^{1/2} * Σ_t) = Tr(Σ_p * Σ_t)` :math:`det(Z) = det(Σ_p^{1/2} * Σ_t * Σ_p^{1/2}) = det(Σ_p^{1/2}) * det(Σ_t) * det(Σ_p^{1/2}) = det(Σ_p * Σ_t)` and thus we can rewrite the coupling part as: :math:`Tr(Z^{1/2}) = (Tr(Z) + 2 * (det(Z))^{1/2})^{1/2}` :math:`Tr((Σ_p^{1/2} * Σ_t * Σ_p^{1/2})^{1/2}) = (Tr(Σ_p * Σ_t) + 2 * (det(Σ_p * Σ_t))^{1/2})^{1/2}` Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. normalize (bool): Whether to normalize the distance. Defaults to True. Returns: loss (torch.Tensor)
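The trace identity used in the derivation above, :math:`Tr(Z^{1/2}) = (Tr(Z) + 2 * (det(Z))^{1/2})^{1/2}`, can be checked numerically without torch. The sketch below uses plain Python; `tr_sqrt_spd2` is an illustrative helper name, not part of mmrotate.

```python
import math

def tr_sqrt_spd2(a, b, c, d):
    # Tr(Z^{1/2}) for the 2x2 SPD matrix Z = [[a, b], [c, d]],
    # via the closed form (Tr(Z) + 2 * sqrt(det(Z)))^{1/2}
    tr = a + d
    det = a * d - b * c
    return math.sqrt(tr + 2.0 * math.sqrt(det))

# cross-check against the eigenvalue definition sqrt(l1) + sqrt(l2)
a, b, c, d = 2.0, 1.0, 1.0, 2.0  # eigenvalues are 3 and 1
disc = math.sqrt((a + d) ** 2 - 4.0 * (a * d - b * c))
l1, l2 = ((a + d) + disc) / 2.0, ((a + d) - disc) / 2.0
assert abs(tr_sqrt_spd2(a, b, c, d) - (math.sqrt(l1) + math.sqrt(l2))) < 1e-12
```

This simplification is what lets gwd_loss avoid an explicit matrix square root.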
7,372
from copy import deepcopy import torch from mmdet.models.losses.utils import weighted_loss from torch import nn from ..builder import ROTATED_LOSSES def postprocess(distance, fun='log1p', tau=1.0): """Convert distance to loss. Args: distance (torch.Tensor) fun (str, optional): The function applied to distance. Defaults to 'log1p'. tau (float, optional): Defaults to 1.0. Returns: loss (torch.Tensor) """ if fun == 'log1p': distance = torch.log1p(distance) elif fun == 'sqrt': distance = torch.sqrt(distance.clamp(1e-7)) elif fun == 'none': pass else: raise ValueError(f'Invalid non-linear function {fun}') if tau >= 1.0: return 1 - 1 / (tau + distance) else: return distance @weighted_loss def kld_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True): """Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True.
Returns: loss (torch.Tensor) """ xy_p, Sigma_p = pred xy_t, Sigma_t = target _shape = xy_p.shape xy_p = xy_p.reshape(-1, 2) xy_t = xy_t.reshape(-1, 2) Sigma_p = Sigma_p.reshape(-1, 2, 2) Sigma_t = Sigma_t.reshape(-1, 2, 2) Sigma_p_inv = torch.stack((Sigma_p[..., 1, 1], -Sigma_p[..., 0, 1], -Sigma_p[..., 1, 0], Sigma_p[..., 0, 0]), dim=-1).reshape(-1, 2, 2) Sigma_p_inv = Sigma_p_inv / Sigma_p.det().unsqueeze(-1).unsqueeze(-1) dxy = (xy_p - xy_t).unsqueeze(-1) xy_distance = 0.5 * dxy.permute(0, 2, 1).bmm(Sigma_p_inv).bmm(dxy).view(-1) whr_distance = 0.5 * Sigma_p_inv.bmm(Sigma_t).diagonal( dim1=-2, dim2=-1).sum(dim=-1) Sigma_p_det_log = Sigma_p.det().log() Sigma_t_det_log = Sigma_t.det().log() whr_distance = whr_distance + 0.5 * (Sigma_p_det_log - Sigma_t_det_log) whr_distance = whr_distance - 1 distance = (xy_distance / (alpha * alpha) + whr_distance) if sqrt: distance = distance.clamp(1e-7).sqrt() distance = distance.reshape(_shape[:-1]) return postprocess(distance, fun=fun, tau=tau) The provided code snippet includes necessary dependencies for implementing the `jd_loss` function. Write a Python function `def jd_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True)` to solve the following problem: Symmetrical Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor) Here is the function: def jd_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True): """Symmetrical Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. 
Defaults to True. Returns: loss (torch.Tensor) """ jd = kld_loss( pred, target, fun='none', tau=0, alpha=alpha, sqrt=False, reduction='none') jd = jd + kld_loss( target, pred, fun='none', tau=0, alpha=alpha, sqrt=False, reduction='none') jd = jd * 0.5 if sqrt: jd = jd.clamp(1e-7).sqrt() return postprocess(jd, fun=fun, tau=tau)
Symmetrical Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor)
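For intuition, the symmetrisation can be written out for 1-D Gaussians, where the KL divergence has a simple closed form. This is a standalone plain-Python sketch; `kl_1d` and `jd_1d` are illustrative names, not mmrotate APIs.

```python
import math

def kl_1d(mu_p, var_p, mu_t, var_t):
    # KL(N(mu_p, var_p) || N(mu_t, var_t)) for 1-D Gaussians
    return 0.5 * (math.log(var_t / var_p)
                  + (var_p + (mu_p - mu_t) ** 2) / var_t - 1.0)

def jd_1d(mu_p, var_p, mu_t, var_t):
    # symmetrised form used by jd_loss: 0.5 * (KL(p||t) + KL(t||p))
    return 0.5 * (kl_1d(mu_p, var_p, mu_t, var_t)
                  + kl_1d(mu_t, var_t, mu_p, var_p))

# KL itself is asymmetric; the averaged form is symmetric in (p, t)
assert kl_1d(0.0, 1.0, 1.0, 2.0) != kl_1d(1.0, 2.0, 0.0, 1.0)
assert abs(jd_1d(0.0, 1.0, 1.0, 2.0) - jd_1d(1.0, 2.0, 0.0, 1.0)) < 1e-12
```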
7,373
from copy import deepcopy import torch from mmdet.models.losses.utils import weighted_loss from torch import nn from ..builder import ROTATED_LOSSES def postprocess(distance, fun='log1p', tau=1.0): """Convert distance to loss. Args: distance (torch.Tensor) fun (str, optional): The function applied to distance. Defaults to 'log1p'. tau (float, optional): Defaults to 1.0. Returns: loss (torch.Tensor) """ if fun == 'log1p': distance = torch.log1p(distance) elif fun == 'sqrt': distance = torch.sqrt(distance.clamp(1e-7)) elif fun == 'none': pass else: raise ValueError(f'Invalid non-linear function {fun}') if tau >= 1.0: return 1 - 1 / (tau + distance) else: return distance @weighted_loss def kld_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True): """Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True.
Returns: loss (torch.Tensor) """ xy_p, Sigma_p = pred xy_t, Sigma_t = target _shape = xy_p.shape xy_p = xy_p.reshape(-1, 2) xy_t = xy_t.reshape(-1, 2) Sigma_p = Sigma_p.reshape(-1, 2, 2) Sigma_t = Sigma_t.reshape(-1, 2, 2) Sigma_p_inv = torch.stack((Sigma_p[..., 1, 1], -Sigma_p[..., 0, 1], -Sigma_p[..., 1, 0], Sigma_p[..., 0, 0]), dim=-1).reshape(-1, 2, 2) Sigma_p_inv = Sigma_p_inv / Sigma_p.det().unsqueeze(-1).unsqueeze(-1) dxy = (xy_p - xy_t).unsqueeze(-1) xy_distance = 0.5 * dxy.permute(0, 2, 1).bmm(Sigma_p_inv).bmm(dxy).view(-1) whr_distance = 0.5 * Sigma_p_inv.bmm(Sigma_t).diagonal( dim1=-2, dim2=-1).sum(dim=-1) Sigma_p_det_log = Sigma_p.det().log() Sigma_t_det_log = Sigma_t.det().log() whr_distance = whr_distance + 0.5 * (Sigma_p_det_log - Sigma_t_det_log) whr_distance = whr_distance - 1 distance = (xy_distance / (alpha * alpha) + whr_distance) if sqrt: distance = distance.clamp(1e-7).sqrt() distance = distance.reshape(_shape[:-1]) return postprocess(distance, fun=fun, tau=tau) The provided code snippet includes necessary dependencies for implementing the `kld_symmax_loss` function. Write a Python function `def kld_symmax_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True)` to solve the following problem: Symmetrical Max Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor) Here is the function: def kld_symmax_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True): """Symmetrical Max Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. 
sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor) """ kld_pt = kld_loss( pred, target, fun='none', tau=0, alpha=alpha, sqrt=sqrt, reduction='none') kld_tp = kld_loss( target, pred, fun='none', tau=0, alpha=alpha, sqrt=sqrt, reduction='none') kld_symmax = torch.max(kld_pt, kld_tp) return postprocess(kld_symmax, fun=fun, tau=tau)
Symmetrical Max Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor)
7,374
from copy import deepcopy import torch from mmdet.models.losses.utils import weighted_loss from torch import nn from ..builder import ROTATED_LOSSES def postprocess(distance, fun='log1p', tau=1.0): """Convert distance to loss. Args: distance (torch.Tensor) fun (str, optional): The function applied to distance. Defaults to 'log1p'. tau (float, optional): Defaults to 1.0. Returns: loss (torch.Tensor) """ if fun == 'log1p': distance = torch.log1p(distance) elif fun == 'sqrt': distance = torch.sqrt(distance.clamp(1e-7)) elif fun == 'none': pass else: raise ValueError(f'Invalid non-linear function {fun}') if tau >= 1.0: return 1 - 1 / (tau + distance) else: return distance @weighted_loss def kld_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True): """Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True.
Returns: loss (torch.Tensor) """ xy_p, Sigma_p = pred xy_t, Sigma_t = target _shape = xy_p.shape xy_p = xy_p.reshape(-1, 2) xy_t = xy_t.reshape(-1, 2) Sigma_p = Sigma_p.reshape(-1, 2, 2) Sigma_t = Sigma_t.reshape(-1, 2, 2) Sigma_p_inv = torch.stack((Sigma_p[..., 1, 1], -Sigma_p[..., 0, 1], -Sigma_p[..., 1, 0], Sigma_p[..., 0, 0]), dim=-1).reshape(-1, 2, 2) Sigma_p_inv = Sigma_p_inv / Sigma_p.det().unsqueeze(-1).unsqueeze(-1) dxy = (xy_p - xy_t).unsqueeze(-1) xy_distance = 0.5 * dxy.permute(0, 2, 1).bmm(Sigma_p_inv).bmm(dxy).view(-1) whr_distance = 0.5 * Sigma_p_inv.bmm(Sigma_t).diagonal( dim1=-2, dim2=-1).sum(dim=-1) Sigma_p_det_log = Sigma_p.det().log() Sigma_t_det_log = Sigma_t.det().log() whr_distance = whr_distance + 0.5 * (Sigma_p_det_log - Sigma_t_det_log) whr_distance = whr_distance - 1 distance = (xy_distance / (alpha * alpha) + whr_distance) if sqrt: distance = distance.clamp(1e-7).sqrt() distance = distance.reshape(_shape[:-1]) return postprocess(distance, fun=fun, tau=tau) The provided code snippet includes necessary dependencies for implementing the `kld_symmin_loss` function. Write a Python function `def kld_symmin_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True)` to solve the following problem: Symmetrical Min Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor) Here is the function: def kld_symmin_loss(pred, target, fun='log1p', tau=1.0, alpha=1.0, sqrt=True): """Symmetrical Min Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. 
sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor) """ kld_pt = kld_loss( pred, target, fun='none', tau=0, alpha=alpha, sqrt=sqrt, reduction='none') kld_tp = kld_loss( target, pred, fun='none', tau=0, alpha=alpha, sqrt=sqrt, reduction='none') kld_symmin = torch.min(kld_pt, kld_tp) return postprocess(kld_symmin, fun=fun, tau=tau)
Symmetrical Min Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. alpha (float): Defaults to 1.0. sqrt (bool): Whether to sqrt the distance. Defaults to True. Returns: loss (torch.Tensor)
7,375
import torch import torch.nn as nn from mmcv.ops import convex_giou from torch.autograd import Function from torch.autograd.function import once_differentiable from ..builder import ROTATED_LOSSES The provided code snippet includes necessary dependencies for implementing the `AspectRatio` function. Write a Python function `def AspectRatio(gt_rbboxes)` to solve the following problem: Compute the aspect ratio of all gts. Args: gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). Returns: ratios (torch.Tensor): The aspect ratio of gt_rbboxes, shape (k, 1). Here is the function: def AspectRatio(gt_rbboxes): """Compute the aspect ratio of all gts. Args: gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). Returns: ratios (torch.Tensor): The aspect ratio of gt_rbboxes, shape (k, 1). """ pt1, pt2, pt3, pt4 = gt_rbboxes[..., :8].chunk(4, 1) edge1 = torch.sqrt( torch.pow(pt1[..., 0] - pt2[..., 0], 2) + torch.pow(pt1[..., 1] - pt2[..., 1], 2)) edge2 = torch.sqrt( torch.pow(pt2[..., 0] - pt3[..., 0], 2) + torch.pow(pt2[..., 1] - pt3[..., 1], 2)) edges = torch.stack([edge1, edge2], dim=1) width, _ = torch.max(edges, 1) height, _ = torch.min(edges, 1) ratios = (width / height) return ratios
Compute the aspect ratio of all gts. Args: gt_rbboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). Returns: ratios (torch.Tensor): The aspect ratio of gt_rbboxes, shape (k, 1).
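The same edge-based computation can be traced by hand for a single polygon in plain Python (no torch); `aspect_ratio` below is an illustrative re-implementation for one box, not an mmrotate API.

```python
import math

def aspect_ratio(poly):
    # poly is [x1, y1, x2, y2, x3, y3, x4, y4]; edge1 and edge2 mirror
    # the two adjacent side lengths used by AspectRatio above
    pts = [(poly[i], poly[i + 1]) for i in range(0, 8, 2)]
    edge1 = math.dist(pts[0], pts[1])
    edge2 = math.dist(pts[1], pts[2])
    return max(edge1, edge2) / min(edge1, edge2)

# a 4 x 2 axis-aligned rectangle has aspect ratio 2
assert abs(aspect_ratio([0, 0, 4, 0, 4, 2, 0, 2]) - 2.0) < 1e-9
```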
7,376
import warnings import torch import torch.nn as nn from mmdet.models.losses.utils import weighted_loss from ..builder import ROTATED_LOSSES try: from mmcv.ops import diff_iou_rotated_2d except ImportError: diff_iou_rotated_2d = None The provided code snippet includes necessary dependencies for implementing the `rotated_iou_loss` function. Write a Python function `def rotated_iou_loss(pred, target, linear=False, mode='log', eps=1e-6)` to solve the following problem: Rotated IoU loss. Computing the IoU loss between a set of predicted rbboxes and target rbboxes. The loss is calculated as negative log of IoU. Args: pred (torch.Tensor): Predicted bboxes of format (x, y, h, w, angle), shape (n, 5). target (torch.Tensor): Corresponding gt bboxes, shape (n, 5). linear (bool, optional): If True, use linear scale of loss instead of log scale. Default: False. mode (str): Loss scaling mode, including "linear", "square", and "log". Default: 'log' eps (float): Eps to avoid log(0). Return: torch.Tensor: Loss tensor. Here is the function: def rotated_iou_loss(pred, target, linear=False, mode='log', eps=1e-6): """Rotated IoU loss. Computing the IoU loss between a set of predicted rbboxes and target rbboxes. The loss is calculated as negative log of IoU. Args: pred (torch.Tensor): Predicted bboxes of format (x, y, h, w, angle), shape (n, 5). target (torch.Tensor): Corresponding gt bboxes, shape (n, 5). linear (bool, optional): If True, use linear scale of loss instead of log scale. Default: False. mode (str): Loss scaling mode, including "linear", "square", and "log". Default: 'log' eps (float): Eps to avoid log(0). Return: torch.Tensor: Loss tensor.
""" assert mode in ['linear', 'square', 'log'] if linear: mode = 'linear' warnings.warn( 'DeprecationWarning: Setting "linear=True" in ' 'poly_iou_loss is deprecated, please use "mode=`linear`" ' 'instead.') if diff_iou_rotated_2d is None: raise ImportError('Please install mmcv-full >= 1.5.0.') ious = diff_iou_rotated_2d(pred.unsqueeze(0), target.unsqueeze(0)) ious = ious.squeeze(0).clamp(min=eps) if mode == 'linear': loss = 1 - ious elif mode == 'square': loss = 1 - ious**2 elif mode == 'log': loss = -ious.log() else: raise NotImplementedError return loss
Rotated IoU loss. Computing the IoU loss between a set of predicted rbboxes and target rbboxes. The loss is calculated as negative log of IoU. Args: pred (torch.Tensor): Predicted bboxes of format (x, y, h, w, angle), shape (n, 5). target (torch.Tensor): Corresponding gt bboxes, shape (n, 5). linear (bool, optional): If True, use linear scale of loss instead of log scale. Default: False. mode (str): Loss scaling mode, including "linear", "square", and "log". Default: 'log' eps (float): Eps to avoid log(0). Return: torch.Tensor: Loss tensor.
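The three scaling modes only differ in how an IoU value is mapped to a loss. The scalar sketch below mirrors that mapping in plain Python (`iou_to_loss` is an illustrative name; the rotated-IoU computation itself still needs mmcv's diff_iou_rotated_2d).

```python
import math

def iou_to_loss(iou, mode='log', eps=1e-6):
    # mirrors the mode branch of rotated_iou_loss for a single IoU value
    iou = max(iou, eps)
    if mode == 'linear':
        return 1.0 - iou
    if mode == 'square':
        return 1.0 - iou ** 2
    if mode == 'log':
        return -math.log(iou)
    raise ValueError(f'Unknown mode {mode}')

# perfect overlap gives zero loss in every mode
for m in ('linear', 'square', 'log'):
    assert abs(iou_to_loss(1.0, m)) < 1e-9
```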
7,377
import torch import torch.nn as nn from mmcv.ops import points_in_polygons from ..builder import ROTATED_LOSSES def spatial_border_loss(pts, gt_bboxes): """The loss is used to penalize the learning points out of the assigned ground truth boxes (polygon by default). Args: pts (torch.Tensor): point sets with shape (N, 9*2). gt_bboxes (torch.Tensor): gt_bboxes with polygon form with shape(N, 8) Returns: loss (torch.Tensor) """ num_gts, num_pointsets = gt_bboxes.size(0), pts.size(0) num_point = int(pts.size(1) / 2.0) loss = pts.new_zeros([0]) if num_gts > 0: inside_flag_list = [] for i in range(num_point): pt = pts[:, (2 * i):(2 * i + 2)].reshape(num_pointsets, 2).contiguous() inside_pt_flag = points_in_polygons(pt, gt_bboxes) inside_pt_flag = torch.diag(inside_pt_flag) inside_flag_list.append(inside_pt_flag) inside_flag = torch.stack(inside_flag_list, dim=1) pts = pts.reshape(-1, num_point, 2) out_border_pts = pts[torch.where(inside_flag == 0)] if out_border_pts.size(0) > 0: corr_gt_boxes = gt_bboxes[torch.where(inside_flag == 0)[0]] corr_gt_boxes_center_x = (corr_gt_boxes[:, 0] + corr_gt_boxes[:, 4]) / 2.0 corr_gt_boxes_center_y = (corr_gt_boxes[:, 1] + corr_gt_boxes[:, 5]) / 2.0 corr_gt_boxes_center = torch.stack( [corr_gt_boxes_center_x, corr_gt_boxes_center_y], dim=1) distance_out_pts = 0.2 * (( (out_border_pts - corr_gt_boxes_center)**2).sum(dim=1).sqrt()) loss = distance_out_pts.sum() / out_border_pts.size(0) return loss The provided code snippet includes necessary dependencies for implementing the `weighted_spatial_border_loss` function. Write a Python function `def weighted_spatial_border_loss(pts, gt_bboxes, weight, avg_factor=None)` to solve the following problem: Weighted spatial border loss. Args: pts (torch.Tensor): point sets with shape (N, 9*2).
gt_bboxes (torch.Tensor): gt_bboxes with polygon form with shape(N, 8) weight (torch.Tensor): weights for point sets with shape (N) Returns: loss (torch.Tensor) Here is the function: def weighted_spatial_border_loss(pts, gt_bboxes, weight, avg_factor=None): """Weighted spatial border loss. Args: pts (torch.Tensor): point sets with shape (N, 9*2). gt_bboxes (torch.Tensor): gt_bboxes with polygon form with shape(N, 8) weight (torch.Tensor): weights for point sets with shape (N) Returns: loss (torch.Tensor) """ weight = weight.unsqueeze(dim=1).repeat(1, 4) assert weight.dim() == 2 if avg_factor is None: avg_factor = torch.sum(weight > 0).float().item() / 4 + 1e-6 loss = spatial_border_loss(pts, gt_bboxes) return torch.sum(loss)[None] / avg_factor
Weighted spatial border loss. Args: pts (torch.Tensor): point sets with shape (N, 9*2). gt_bboxes (torch.Tensor): gt_bboxes with polygon form with shape(N, 8) weight (torch.Tensor): weights for point sets with shape (N) Returns: loss (torch.Tensor)
7,378
import torch from mmdet.models.losses.utils import weighted_loss from torch import nn from ..builder import ROTATED_LOSSES def xy_wh_r_2_xy_sigma(xywhr): """Convert oriented bounding box to 2-D Gaussian distribution. Args: xywhr (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2). """ _shape = xywhr.shape assert _shape[-1] == 5 xy = xywhr[..., :2] wh = xywhr[..., 2:4].clamp(min=1e-7, max=1e7).reshape(-1, 2) r = xywhr[..., 4] cos_r = torch.cos(r) sin_r = torch.sin(r) R = torch.stack((cos_r, -sin_r, sin_r, cos_r), dim=-1).reshape(-1, 2, 2) S = 0.5 * torch.diag_embed(wh) sigma = R.bmm(S.square()).bmm(R.permute(0, 2, 1)).reshape(_shape[:-1] + (2, 2)) return xy, sigma The provided code snippet includes necessary dependencies for implementing the `kfiou_loss` function. Write a Python function `def kfiou_loss(pred, target, pred_decode=None, targets_decode=None, fun=None, beta=1.0 / 9.0, eps=1e-6)` to solve the following problem: Kalman filter IoU loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. pred_decode (torch.Tensor): Predicted decode bboxes. targets_decode (torch.Tensor): Corresponding gt decode bboxes. fun (str): The function applied to distance. Defaults to None. beta (float): Defaults to 1.0/9.0. eps (float): Defaults to 1e-6. Returns: loss (torch.Tensor) Here is the function: def kfiou_loss(pred, target, pred_decode=None, targets_decode=None, fun=None, beta=1.0 / 9.0, eps=1e-6): """Kalman filter IoU loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. pred_decode (torch.Tensor): Predicted decode bboxes. targets_decode (torch.Tensor): Corresponding gt decode bboxes. fun (str): The function applied to distance. Defaults to None. beta (float): Defaults to 1.0/9.0. eps (float): Defaults to 1e-6. 
Returns: loss (torch.Tensor) """ xy_p = pred[:, :2] xy_t = target[:, :2] _, Sigma_p = xy_wh_r_2_xy_sigma(pred_decode) _, Sigma_t = xy_wh_r_2_xy_sigma(targets_decode) # Smooth-L1 norm diff = torch.abs(xy_p - xy_t) xy_loss = torch.where(diff < beta, 0.5 * diff * diff / beta, diff - 0.5 * beta).sum(dim=-1) Vb_p = 4 * Sigma_p.det().sqrt() Vb_t = 4 * Sigma_t.det().sqrt() K = Sigma_p.bmm((Sigma_p + Sigma_t).inverse()) Sigma = Sigma_p - K.bmm(Sigma_p) Vb = 4 * Sigma.det().sqrt() Vb = torch.where(torch.isnan(Vb), torch.full_like(Vb, 0), Vb) KFIoU = Vb / (Vb_p + Vb_t - Vb + eps) if fun == 'ln': kf_loss = -torch.log(KFIoU + eps) elif fun == 'exp': kf_loss = torch.exp(1 - KFIoU) - 1 else: kf_loss = 1 - KFIoU loss = (xy_loss + kf_loss).clamp(0) return loss
Kalman filter IoU loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. pred_decode (torch.Tensor): Predicted decode bboxes. targets_decode (torch.Tensor): Corresponding gt decode bboxes. fun (str): The function applied to distance. Defaults to None. beta (float): Defaults to 1.0/9.0. eps (float): Defaults to 1e-6. Returns: loss (torch.Tensor)
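A notable property of this formulation: for two identical Gaussians the fused volume term caps KFIoU at 1/3 rather than 1. The plain-Python sketch below checks this for zero-mean diagonal covariances (`kfiou_diag` is an illustrative helper, not an mmrotate API).

```python
import math

def kfiou_diag(var_px, var_py, var_tx, var_ty):
    # mirrors the volume computation of kfiou_loss for diagonal
    # covariances diag(var_px, var_py) and diag(var_tx, var_ty)
    vb_p = 4.0 * math.sqrt(var_px * var_py)
    vb_t = 4.0 * math.sqrt(var_tx * var_ty)
    # K = Sigma_p (Sigma_p + Sigma_t)^-1 stays diagonal here
    fx = var_px * (1.0 - var_px / (var_px + var_tx))
    fy = var_py * (1.0 - var_py / (var_py + var_ty))
    vb = 4.0 * math.sqrt(fx * fy)
    return vb / (vb_p + vb_t - vb)

# identical Gaussians: KFIoU peaks at 1/3, not 1
assert abs(kfiou_diag(2.0, 1.0, 2.0, 1.0) - 1.0 / 3.0) < 1e-9
```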
7,379
from copy import deepcopy import torch from torch import nn from ..builder import ROTATED_LOSSES The provided code snippet includes necessary dependencies for implementing the `xy_wh_r_2_xy_sigma` function. Write a Python function `def xy_wh_r_2_xy_sigma(xywhr)` to solve the following problem: Convert oriented bounding box to 2-D Gaussian distribution. Args: xywhr (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2). Here is the function: def xy_wh_r_2_xy_sigma(xywhr): """Convert oriented bounding box to 2-D Gaussian distribution. Args: xywhr (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2). """ _shape = xywhr.shape assert _shape[-1] == 5 xy = xywhr[..., :2] wh = xywhr[..., 2:4].clamp(min=1e-7, max=1e7).reshape(-1, 2) r = xywhr[..., 4] cos_r = torch.cos(r) sin_r = torch.sin(r) R = torch.stack((cos_r, -sin_r, sin_r, cos_r), dim=-1).reshape(-1, 2, 2) S = 0.5 * torch.diag_embed(wh) sigma = R.bmm(S.square()).bmm(R.permute(0, 2, 1)).reshape(_shape[:-1] + (2, 2)) return xy, sigma
Convert oriented bounding box to 2-D Gaussian distribution. Args: xywhr (torch.Tensor): rbboxes with shape (N, 5). Returns: xy (torch.Tensor): center point of 2-D Gaussian distribution with shape (N, 2). sigma (torch.Tensor): covariance matrix of 2-D Gaussian distribution with shape (N, 2, 2).
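For a 2x2 matrix the product R diag((w/2)^2, (h/2)^2) R^T can be written out in closed form, which makes the conversion easy to sanity-check without torch. `box_to_sigma` below is an illustrative plain-Python mirror of the function above.

```python
import math

def box_to_sigma(w, h, r):
    # Sigma = R diag((w/2)^2, (h/2)^2) R^T, expanded for the 2x2 case
    c, s = math.cos(r), math.sin(r)
    a, b = (0.5 * w) ** 2, (0.5 * h) ** 2
    return [[c * c * a + s * s * b, c * s * (a - b)],
            [c * s * (a - b), s * s * a + c * c * b]]

# axis-aligned box (r = 0): Sigma is diag((w/2)^2, (h/2)^2)
sig = box_to_sigma(4.0, 2.0, 0.0)
assert abs(sig[0][0] - 4.0) < 1e-9
assert abs(sig[1][1] - 1.0) < 1e-9
assert abs(sig[0][1]) < 1e-9
```

Rotating the box only rotates the covariance; its determinant (w * h / 4)^2 is invariant in r.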
7,380
from copy import deepcopy import torch from torch import nn from ..builder import ROTATED_LOSSES The provided code snippet includes necessary dependencies for implementing the `gwd_loss` function. Write a Python function `def gwd_loss(pred, target, fun='sqrt', tau=2.0)` to solve the following problem: Gaussian Wasserstein distance loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'sqrt'. tau (float): Defaults to 2.0. Returns: loss (torch.Tensor) Here is the function: def gwd_loss(pred, target, fun='sqrt', tau=2.0): """Gaussian Wasserstein distance loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'sqrt'. tau (float): Defaults to 2.0. Returns: loss (torch.Tensor) """ mu_p, sigma_p = pred mu_t, sigma_t = target xy_distance = (mu_p - mu_t).square().sum(dim=-1) whr_distance = sigma_p.diagonal(dim1=-2, dim2=-1).sum(dim=-1) whr_distance = whr_distance + sigma_t.diagonal( dim1=-2, dim2=-1).sum(dim=-1) _t_tr = (sigma_p.bmm(sigma_t)).diagonal(dim1=-2, dim2=-1).sum(dim=-1) _t_det_sqrt = (sigma_p.det() * sigma_t.det()).clamp(0).sqrt() whr_distance += (-2) * (_t_tr + 2 * _t_det_sqrt).clamp(0).sqrt() dis = xy_distance + whr_distance gwd_dis = dis.clamp(min=1e-6) if fun == 'sqrt': loss = 1 - 1 / (tau + torch.sqrt(gwd_dis)) elif fun == 'log1p': loss = 1 - 1 / (tau + torch.log1p(gwd_dis)) else: scale = 2 * (_t_det_sqrt.sqrt().sqrt()).clamp(1e-7) loss = torch.log1p(torch.sqrt(gwd_dis) / scale) return loss
Gaussian Wasserstein distance loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'sqrt'. tau (float): Defaults to 2.0. Returns: loss (torch.Tensor)
7,381
from copy import deepcopy import torch from torch import nn from ..builder import ROTATED_LOSSES The provided code snippet includes necessary dependencies for implementing the `bcd_loss` function. Write a Python function `def bcd_loss(pred, target, fun='log1p', tau=1.0)` to solve the following problem: Bhattacharyya distance loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. Returns: loss (torch.Tensor) Here is the function: def bcd_loss(pred, target, fun='log1p', tau=1.0): """Bhattacharyya distance loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. Returns: loss (torch.Tensor) """ mu_p, sigma_p = pred mu_t, sigma_t = target mu_p = mu_p.reshape(-1, 2) mu_t = mu_t.reshape(-1, 2) sigma_p = sigma_p.reshape(-1, 2, 2) sigma_t = sigma_t.reshape(-1, 2, 2) delta = (mu_p - mu_t).unsqueeze(-1) sigma = 0.5 * (sigma_p + sigma_t) sigma_inv = torch.inverse(sigma) term1 = torch.log( torch.det(sigma) / (torch.sqrt(torch.det(sigma_t.matmul(sigma_p))))).reshape(-1, 1) term2 = delta.transpose(-1, -2).matmul(sigma_inv).matmul(delta).squeeze(-1) dis = 0.5 * term1 + 0.125 * term2 bcd_dis = dis.clamp(min=1e-6) if fun == 'sqrt': loss = 1 - 1 / (tau + torch.sqrt(bcd_dis)) elif fun == 'log1p': loss = 1 - 1 / (tau + torch.log1p(bcd_dis)) else: loss = 1 - 1 / (tau + bcd_dis) return loss
Bhattacharyya distance loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. Returns: loss (torch.Tensor)
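In one dimension (or for diagonal covariances, per axis) the two terms of bcd_loss reduce to the classic Bhattacharyya distance between univariate Gaussians. A plain-Python sketch with the illustrative name `bd_1d`:

```python
import math

def bd_1d(mu_p, var_p, mu_t, var_t):
    # 1-D Bhattacharyya distance; var_m plays the role of
    # sigma = 0.5 * (sigma_p + sigma_t) in bcd_loss above
    var_m = 0.5 * (var_p + var_t)
    term1 = 0.5 * math.log(var_m / math.sqrt(var_p * var_t))
    term2 = 0.125 * (mu_p - mu_t) ** 2 / var_m
    return term1 + term2

# zero for identical distributions, and symmetric in (p, t)
assert abs(bd_1d(0.0, 1.0, 0.0, 1.0)) < 1e-12
assert abs(bd_1d(0.0, 1.0, 2.0, 3.0) - bd_1d(2.0, 3.0, 0.0, 1.0)) < 1e-12
```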
7,382
from copy import deepcopy import torch from torch import nn from ..builder import ROTATED_LOSSES The provided code snippet includes necessary dependencies for implementing the `kld_loss` function. Write a Python function `def kld_loss(pred, target, fun='log1p', tau=1.0)` to solve the following problem: Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. Returns: loss (torch.Tensor) Here is the function: def kld_loss(pred, target, fun='log1p', tau=1.0): """Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. Returns: loss (torch.Tensor) """ mu_p, sigma_p = pred mu_t, sigma_t = target mu_p = mu_p.reshape(-1, 2) mu_t = mu_t.reshape(-1, 2) sigma_p = sigma_p.reshape(-1, 2, 2) sigma_t = sigma_t.reshape(-1, 2, 2) delta = (mu_p - mu_t).unsqueeze(-1) sigma_t_inv = torch.inverse(sigma_t) term1 = delta.transpose(-1, -2).matmul(sigma_t_inv).matmul(delta).squeeze(-1) term2 = torch.diagonal( sigma_t_inv.matmul(sigma_p), dim1=-2, dim2=-1).sum(dim=-1, keepdim=True) + \ torch.log(torch.det(sigma_t) / torch.det(sigma_p)).reshape(-1, 1) dis = term1 + term2 - 2 kl_dis = dis.clamp(min=1e-6) if fun == 'sqrt': kl_loss = 1 - 1 / (tau + torch.sqrt(kl_dis)) else: kl_loss = 1 - 1 / (tau + torch.log1p(kl_dis)) return kl_loss
Kullback-Leibler Divergence loss. Args: pred (torch.Tensor): Predicted bboxes. target (torch.Tensor): Corresponding gt bboxes. fun (str): The function applied to distance. Defaults to 'log1p'. tau (float): Defaults to 1.0. Returns: loss (torch.Tensor)
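For diagonal covariances the `dis` term above has a short closed form, which makes it easy to verify that identical Gaussians give zero distance and that the measure is asymmetric. Plain-Python sketch (`kld_dis_diag` is an illustrative name; note `dis` equals twice the usual KL divergence):

```python
import math

def kld_dis_diag(mu_p, var_p, mu_t, var_t):
    # term1 + term2 - 2 of kld_loss for diagonal 2x2 covariances;
    # mu_* are (x, y) means, var_* are (x, y) variances
    term1 = sum((mp - mt) ** 2 / vt
                for mp, mt, vt in zip(mu_p, mu_t, var_t))
    trace = sum(vp / vt for vp, vt in zip(var_p, var_t))
    logdet = math.log((var_t[0] * var_t[1]) / (var_p[0] * var_p[1]))
    return term1 + trace + logdet - 2.0

# identical Gaussians -> zero; swapping p and t changes the value
assert abs(kld_dis_diag((0, 0), (1, 1), (0, 0), (1, 1))) < 1e-12
assert kld_dis_diag((0, 0), (4, 4), (0, 0), (1, 1)) != \
    kld_dis_diag((0, 0), (1, 1), (0, 0), (4, 4))
```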
7,383
import torch.nn as nn
import torch.nn.functional as F
from mmdet.models import weight_reduce_loss
from ..builder import ROTATED_LOSSES

The provided code snippet includes necessary dependencies for implementing the `smooth_focal_loss` function. Write a Python function `def smooth_focal_loss(pred, target, weight=None, gamma=2.0, alpha=0.25, reduction='mean', avg_factor=None)` to solve the following problem: Smooth Focal Loss proposed in Circular Smooth Label (CSL). Args: pred (torch.Tensor): The prediction. target (torch.Tensor): The learning label of the prediction. weight (torch.Tensor, optional): The weight of loss for each prediction. Defaults to None. gamma (float, optional): The gamma for calculating the modulating factor. Defaults to 2.0. alpha (float, optional): A balanced form for Focal Loss. Defaults to 0.25. reduction (str, optional): The reduction method used to override the original reduction method of the loss. Options are "none", "mean" and "sum". avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. Returns: torch.Tensor: The calculated loss

Here is the function:

def smooth_focal_loss(pred,
                      target,
                      weight=None,
                      gamma=2.0,
                      alpha=0.25,
                      reduction='mean',
                      avg_factor=None):
    """Smooth Focal Loss proposed in Circular Smooth Label (CSL).

    Args:
        pred (torch.Tensor): The prediction.
        target (torch.Tensor): The learning label of the prediction.
        weight (torch.Tensor, optional): The weight of loss for each
            prediction. Defaults to None.
        gamma (float, optional): The gamma for calculating the modulating
            factor. Defaults to 2.0.
        alpha (float, optional): A balanced form for Focal Loss.
            Defaults to 0.25.
        reduction (str, optional): The reduction method used to override the
            original reduction method of the loss. Options are "none",
            "mean" and "sum".
        avg_factor (int, optional): Average factor that is used to average
            the loss. Defaults to None.

    Returns:
        torch.Tensor: The calculated loss
    """
    pred_sigmoid = pred.sigmoid()
    target = target.type_as(pred)
    pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
    focal_weight = (alpha * target + (1 - alpha) *
                    (1 - target)) * pt.pow(gamma)
    loss = F.binary_cross_entropy_with_logits(
        pred, target, reduction='none') * focal_weight
    if weight is not None:
        if weight.shape != loss.shape:
            if weight.size(0) == loss.size(0):
                # For most cases, weight is of shape (num_priors, ),
                # which means it does not have the second axis num_class
                weight = weight.view(-1, 1)
            else:
                # Sometimes, weight per anchor per class is also needed. e.g.
                # in FSAF. But it may be flattened of shape
                # (num_priors x num_class, ), while loss is still of shape
                # (num_priors, num_class).
                assert weight.numel() == loss.numel()
                weight = weight.view(loss.size(0), -1)
        assert weight.ndim == loss.ndim
    loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
    return loss
Smooth Focal Loss proposed in Circular Smooth Label (CSL). Args: pred (torch.Tensor): The prediction. target (torch.Tensor): The learning label of the prediction. weight (torch.Tensor, optional): The weight of loss for each prediction. Defaults to None. gamma (float, optional): The gamma for calculating the modulating factor. Defaults to 2.0. alpha (float, optional): A balanced form for Focal Loss. Defaults to 0.25. reduction (str, optional): The reduction method used to override the original reduction method of the loss. Options are "none", "mean" and "sum". avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. Returns: torch.Tensor: The calculated loss
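The focal weighting used above can be checked with a torch-free scalar sketch (the function name `smooth_focal_scalar` is an assumption for illustration, not part of the library):

```python
import math

def smooth_focal_scalar(logit, target, gamma=2.0, alpha=0.25):
    # Scalar version of the focal-weighted BCE computed in the record above.
    p = 1.0 / (1.0 + math.exp(-logit))           # sigmoid(pred)
    pt = (1 - p) * target + p * (1 - target)     # probability of the wrong class
    focal_w = (alpha * target + (1 - alpha) * (1 - target)) * pt ** gamma
    bce = -(target * math.log(p) + (1 - target) * math.log(1 - p))
    return bce * focal_w
```

A confident correct prediction (large positive logit with `target=1`) receives a near-zero weight, which is the point of the modulating factor.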
7,384
import math
import numpy as np
import torch
import torch.nn as nn
from mmcv.cnn import ConvModule
from mmcv.ops import DeformConv2d, chamfer_distance, min_area_polygons
from mmcv.runner import force_fp32
from mmdet.core import images_to_levels, multi_apply, unmap
from mmdet.core.anchor.point_generator import MlvlPointGenerator
from mmdet.core.utils import select_single_mlvl
from mmdet.models.dense_heads.base_dense_head import BaseDenseHead
from mmrotate.core import (build_assigner, build_sampler,
                           multiclass_nms_rotated, obb2poly, poly2obb)
from ..builder import ROTATED_HEADS, build_loss
from .utils import levels_to_images

The provided code snippet includes necessary dependencies for implementing the `ChamferDistance2D` function. Write a Python function `def ChamferDistance2D(point_set_1, point_set_2, distance_weight=0.05, eps=1e-12)` to solve the following problem: Compute the Chamfer distance between two point sets. Args: point_set_1 (torch.tensor): point set 1 with shape (N_pointsets, N_points, 2) point_set_2 (torch.tensor): point set 2 with shape (N_pointsets, N_points, 2) Returns: dist (torch.tensor): chamfer distance between two point sets with shape (N_pointsets,)

Here is the function:

def ChamferDistance2D(point_set_1,
                      point_set_2,
                      distance_weight=0.05,
                      eps=1e-12):
    """Compute the Chamfer distance between two point sets.

    Args:
        point_set_1 (torch.tensor): point set 1 with shape
            (N_pointsets, N_points, 2)
        point_set_2 (torch.tensor): point set 2 with shape
            (N_pointsets, N_points, 2)

    Returns:
        dist (torch.tensor): chamfer distance between two point sets
            with shape (N_pointsets,)
    """
    assert point_set_1.dim() == point_set_2.dim()
    assert point_set_1.shape[-1] == point_set_2.shape[-1]
    assert point_set_1.dim() <= 3
    dist1, dist2, _, _ = chamfer_distance(point_set_1, point_set_2)
    dist1 = torch.sqrt(torch.clamp(dist1, eps))
    dist2 = torch.sqrt(torch.clamp(dist2, eps))
    dist = distance_weight * (dist1.mean(-1) + dist2.mean(-1)) / 2.0
    return dist
Compute the Chamfer distance between two point sets. Args: point_set_1 (torch.tensor): point set 1 with shape (N_pointsets, N_points, 2) point_set_2 (torch.tensor): point set 2 with shape (N_pointsets, N_points, 2) Returns: dist (torch.tensor): chamfer distance between two point sets with shape (N_pointsets,)
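The same symmetric Chamfer distance can be sketched in pure Python with an O(n²) nearest-neighbour search (a torch-free illustration, not the mmcv kernel; `chamfer_2d` is an assumed name):

```python
import math

def chamfer_2d(set_a, set_b, distance_weight=0.05):
    # For each point, find the distance to its nearest neighbour in the other
    # set; average both directions and combine them as the tensor version does.
    def nn_dists(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]
    d1 = nn_dists(set_a, set_b)
    d2 = nn_dists(set_b, set_a)
    return distance_weight * (sum(d1) / len(d1) + sum(d2) / len(d2)) / 2.0
```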
7,385
import torch
from mmcv.ops import convex_iou

The provided code snippet includes necessary dependencies for implementing the `points_center_pts` function. Write a Python function `def points_center_pts(RPoints, y_first=True)` to solve the following problem: Compute center point of Pointsets. Args: RPoints (torch.Tensor): the lists of Pointsets, shape (k, 18). y_first (bool, optional): if True, the sequence of Pointsets is (y, x). Returns: center_pts (torch.Tensor): the mean center coordinates of Pointsets, shape (k, 2).

Here is the function:

def points_center_pts(RPoints, y_first=True):
    """Compute center point of Pointsets.

    Args:
        RPoints (torch.Tensor): the lists of Pointsets, shape (k, 18).
        y_first (bool, optional): if True, the sequence of Pointsets is
            (y, x).

    Returns:
        center_pts (torch.Tensor): the mean center coordinates of Pointsets,
            shape (k, 2).
    """
    RPoints = RPoints.reshape(-1, 9, 2)
    if y_first:
        pts_dy = RPoints[:, :, 0::2]
        pts_dx = RPoints[:, :, 1::2]
    else:
        pts_dx = RPoints[:, :, 0::2]
        pts_dy = RPoints[:, :, 1::2]
    pts_dy_mean = pts_dy.mean(dim=1, keepdim=True).reshape(-1, 1)
    pts_dx_mean = pts_dx.mean(dim=1, keepdim=True).reshape(-1, 1)
    center_pts = torch.cat([pts_dx_mean, pts_dy_mean], dim=1).reshape(-1, 2)
    return center_pts
Compute center point of Pointsets. Args: RPoints (torch.Tensor): the lists of Pointsets, shape (k, 18). y_first (bool, optional): if True, the sequence of Pointsets is (y, x). Returns: center_pts (torch.Tensor): the mean center coordinates of Pointsets, shape (k, 2).
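For a single point set, the centring above is just a coordinate-wise mean of nine (y, x) or (x, y) pairs. A torch-free sketch for one flat 18-number point set (`points_center` is an assumed name):

```python
def points_center(pointset, y_first=True):
    # pointset: flat sequence of 18 numbers (9 points x 2 coordinates).
    pairs = [pointset[i:i + 2] for i in range(0, len(pointset), 2)]
    if y_first:
        ys = [p[0] for p in pairs]
        xs = [p[1] for p in pairs]
    else:
        xs = [p[0] for p in pairs]
        ys = [p[1] for p in pairs]
    # Return the (x, y) mean centre, matching the output layout above.
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```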
7,386
import torch
from mmcv.ops import convex_iou

The provided code snippet includes necessary dependencies for implementing the `convex_overlaps` function. Write a Python function `def convex_overlaps(gt_bboxes, points)` to solve the following problem: Compute overlaps between polygons and points. Args: gt_bboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). points (torch.Tensor): Points to be assigned, shape (n, 18). Returns: overlaps (torch.Tensor): Overlaps between k gt_bboxes and n bboxes, shape (k, n).

Here is the function:

def convex_overlaps(gt_bboxes, points):
    """Compute overlaps between polygons and points.

    Args:
        gt_bboxes (torch.Tensor): Groundtruth polygons, shape (k, 8).
        points (torch.Tensor): Points to be assigned, shape (n, 18).

    Returns:
        overlaps (torch.Tensor): Overlaps between k gt_bboxes and n bboxes,
            shape (k, n).
    """
    overlaps = convex_iou(points, gt_bboxes)
    overlaps = overlaps.transpose(1, 0)
    return overlaps
Compute overlaps between polygons and points. Args: gt_bboxes (torch.Tensor): Groundtruth polygons, shape (k, 8). points (torch.Tensor): Points to be assigned, shape (n, 18). Returns: overlaps (torch.Tensor): Overlaps between k gt_bboxes and n bboxes, shape (k, n).
7,387
import torch
from mmcv.ops import convex_iou

The provided code snippet includes necessary dependencies for implementing the `levels_to_images` function. Write a Python function `def levels_to_images(mlvl_tensor, flatten=False)` to solve the following problem: Concat multi-level feature maps by image. [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] Convert the shape of each element in mlvl_tensor from (N, C, H, W) to (N, H*W, C), then split the element to N elements with shape (H*W, C), and concat elements in same image of all level along first dimension. Args: mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from corresponding level. Each element is of shape (N, C, H, W) flatten (bool, optional): if shape of mlvl_tensor is (N, C, H, W) set False, if shape of mlvl_tensor is (N, H, W, C) set True. Returns: list[torch.Tensor]: A list that contains N tensors and each tensor is of shape (num_elements, C)

Here is the function:

def levels_to_images(mlvl_tensor, flatten=False):
    """Concat multi-level feature maps by image.

    [feature_level0, feature_level1...] -> [feature_image0, feature_image1...]
    Convert the shape of each element in mlvl_tensor from (N, C, H, W) to
    (N, H*W, C), then split the element to N elements with shape (H*W, C),
    and concat elements in same image of all level along first dimension.

    Args:
        mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from
            corresponding level. Each element is of shape (N, C, H, W)
        flatten (bool, optional): if shape of mlvl_tensor is (N, C, H, W)
            set False, if shape of mlvl_tensor is (N, H, W, C) set True.

    Returns:
        list[torch.Tensor]: A list that contains N tensors and each tensor
            is of shape (num_elements, C)
    """
    batch_size = mlvl_tensor[0].size(0)
    batch_list = [[] for _ in range(batch_size)]
    if flatten:
        channels = mlvl_tensor[0].size(-1)
    else:
        channels = mlvl_tensor[0].size(1)
    for t in mlvl_tensor:
        if not flatten:
            t = t.permute(0, 2, 3, 1)
        t = t.view(batch_size, -1, channels).contiguous()
        for img in range(batch_size):
            batch_list[img].append(t[img])
    return [torch.cat(item, 0) for item in batch_list]
Concat multi-level feature maps by image. [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] Convert the shape of each element in mlvl_tensor from (N, C, H, W) to (N, H*W , C), then split the element to N elements with shape (H*W, C), and concat elements in same image of all level along first dimension. Args: mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from corresponding level. Each element is of shape (N, C, H, W) flatten (bool, optional): if shape of mlvl_tensor is (N, C, H, W) set False, if shape of mlvl_tensor is (N, H, W, C) set True. Returns: list[torch.Tensor]: A list that contains N tensors and each tensor is of shape (num_elements, C)
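The (N, C, H, W) → per-image (H*W, C) bookkeeping can be mimicked with nested lists (a stdlib-only sketch of the same permute-and-regroup logic; `levels_to_images_py` is an assumed name):

```python
def levels_to_images_py(mlvl):
    # mlvl: list of levels, each a nested list shaped [N][C][H][W].
    batch_size = len(mlvl[0])
    batch_list = [[] for _ in range(batch_size)]
    for level in mlvl:
        for img in range(batch_size):
            feat = level[img]                      # [C][H][W]
            c, h, w = len(feat), len(feat[0]), len(feat[0][0])
            # permute (C, H, W) -> (H*W, C), row-major over spatial positions
            for y in range(h):
                for x in range(w):
                    batch_list[img].append(
                        [feat[ch][y][x] for ch in range(c)])
    return batch_list
```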
7,388
import torch
from mmcv.ops import convex_iou

The provided code snippet includes necessary dependencies for implementing the `get_num_level_anchors_inside` function. Write a Python function `def get_num_level_anchors_inside(num_level_anchors, inside_flags)` to solve the following problem: Get number of every level anchors inside. Args: num_level_anchors (List[int]): List of number of every level's anchors. inside_flags (torch.Tensor): Flags of all anchors. Returns: List[int]: List of number of inside anchors.

Here is the function:

def get_num_level_anchors_inside(num_level_anchors, inside_flags):
    """Get number of every level anchors inside.

    Args:
        num_level_anchors (List[int]): List of number of every level's
            anchors.
        inside_flags (torch.Tensor): Flags of all anchors.

    Returns:
        List[int]: List of number of inside anchors.
    """
    split_inside_flags = torch.split(inside_flags, num_level_anchors)
    num_level_anchors_inside = [
        int(flags.sum()) for flags in split_inside_flags
    ]
    return num_level_anchors_inside
Get number of every level anchors inside. Args: num_level_anchors (List[int]): List of number of every level's anchors. inside_flags (torch.Tensor): Flags of all anchors. Returns: List[int]: List of number of inside anchors.
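The `torch.split` + `flags.sum()` pattern above reduces to splitting a flat flag list level by level and counting the set flags. A stdlib-only sketch (assumed name `num_level_anchors_inside`):

```python
def num_level_anchors_inside(num_level_anchors, inside_flags):
    # Walk the flat flag list in per-level chunks and count True entries.
    counts, start = [], 0
    for n in num_level_anchors:
        counts.append(sum(inside_flags[start:start + n]))
        start += n
    return counts
```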
7,389
import warnings
import e2cnn.nn as enn
import torch.nn as nn
import torch.utils.checkpoint as cp
from mmcv.runner import BaseModule
from torch.nn.modules.batchnorm import _BatchNorm
from ..builder import ROTATED_BACKBONES
from ..utils import (build_enn_divide_feature, build_enn_norm_layer,
                     build_enn_trivial_feature, ennAvgPool, ennConv,
                     ennMaxPool, ennReLU, ennTrivialConv)

class BasicBlock(enn.EquivariantModule):
    """BasicBlock for ReResNet.

    Args:
        in_channels (int): Input channels of this block.
        out_channels (int): Output channels of this block.
        expansion (int): The ratio of ``out_channels/mid_channels`` where
            ``mid_channels`` is the output channels of conv1. This is a
            reserved argument in BasicBlock and should always be 1.
            Default: 1.
        stride (int): stride of the block. Default: 1
        dilation (int): dilation of convolution. Default: 1
        downsample (nn.Module): downsample operation on identity branch.
            Default: None.
        style (str): `pytorch` or `caffe`. It is unused and reserved for
            unified API with Bottleneck.
        with_cp (bool): Use checkpoint or not. Using checkpoint will save
            some memory while slowing down the training speed.
        conv_cfg (dict): dictionary to construct and config conv layer.
            Default: None
        norm_cfg (dict): dictionary to construct and config norm layer.
            Default: dict(type='BN')
        init_cfg (dict or list[dict], optional): Initialization config dict.
    """

    def __init__(self,
                 in_channels,
                 out_channels,
                 expansion=1,
                 stride=1,
                 dilation=1,
                 downsample=None,
                 style='pytorch',
                 with_cp=False,
                 conv_cfg=None,
                 norm_cfg=dict(type='BN'),
                 init_cfg=None):
        super(BasicBlock, self).__init__()
        self.in_type = build_enn_divide_feature(in_channels)
        self.out_type = build_enn_divide_feature(out_channels)
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.expansion = expansion
        assert self.expansion == 1
        assert out_channels % expansion == 0
        self.mid_channels = out_channels // expansion
        self.stride = stride
        self.dilation = dilation
        self.style = style
        self.with_cp = with_cp
        self.conv_cfg = conv_cfg
        self.norm_cfg = norm_cfg

        self.norm1_name, norm1 = build_enn_norm_layer(
            self.mid_channels, postfix=1)
        self.norm2_name, norm2 = build_enn_norm_layer(out_channels, postfix=2)

        self.conv1 = ennConv(
            in_channels,
            self.mid_channels,
            3,
            stride=stride,
            padding=dilation,
            dilation=dilation,
            bias=False)
        self.add_module(self.norm1_name, norm1)
        self.relu1 = ennReLU(self.mid_channels)
        self.conv2 = ennConv(
            self.mid_channels, out_channels, 3, padding=1, bias=False)
        self.add_module(self.norm2_name, norm2)
        self.relu2 = ennReLU(out_channels)
        self.downsample = downsample

    @property
    def norm1(self):
        """Get the normalization layer."""
        return getattr(self, self.norm1_name)

    @property
    def norm2(self):
        """Get the normalization layer."""
        return getattr(self, self.norm2_name)

    def forward(self, x):
        """Forward function of BasicBlock."""

        def _inner_forward(x):
            identity = x
            out = self.conv1(x)
            out = self.norm1(out)
            out = self.relu1(out)
            out = self.conv2(out)
            out = self.norm2(out)
            if self.downsample is not None:
                identity = self.downsample(x)
            out += identity
            return out

        if self.with_cp and x.requires_grad:
            out = cp.checkpoint(_inner_forward, x)
        else:
            out = _inner_forward(x)
        out = self.relu2(out)
        return out

    def evaluate_output_shape(self, input_shape):
        """Evaluate output shape."""
        assert len(input_shape) == 4
        assert input_shape[1] == self.in_type.size
        if self.downsample is not None:
            return self.downsample.evaluate_output_shape(input_shape)
        else:
            return input_shape

class Bottleneck(enn.EquivariantModule):
    """Bottleneck block for ReResNet.

    Args:
        in_channels (int): Input channels of this block.
        out_channels (int): Output channels of this block.
        expansion (int): The ratio of ``out_channels/mid_channels`` where
            ``mid_channels`` is the input/output channels of conv2.
            Default: 4.
        stride (int): stride of the block. Default: 1
        dilation (int): dilation of convolution. Default: 1
        downsample (nn.Module): downsample operation on identity branch.
            Default: None.
        style (str): ``"pytorch"`` or ``"caffe"``. If set to "pytorch", the
            stride-two layer is the 3x3 conv layer, otherwise the stride-two
            layer is the first 1x1 conv layer. Default: "pytorch".
        with_cp (bool): Use checkpoint or not. Using checkpoint will save
            some memory while slowing down the training speed.
        conv_cfg (dict): dictionary to construct and config conv layer.
            Default: None
        norm_cfg (dict): dictionary to construct and config norm layer.
            Default: dict(type='BN')
    """

    def __init__(self,
                 in_channels,
                 out_channels,
                 expansion=4,
                 stride=1,
                 dilation=1,
                 downsample=None,
                 style='pytorch',
                 with_cp=False,
                 conv_cfg=None,
                 norm_cfg=dict(type='BN'),
                 init_cfg=None):
        super(Bottleneck, self).__init__()
        assert style in ['pytorch', 'caffe']

        self.in_type = build_enn_divide_feature(in_channels)
        self.out_type = build_enn_divide_feature(out_channels)
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.expansion = expansion
        assert out_channels % expansion == 0
        self.mid_channels = out_channels // expansion
        self.stride = stride
        self.dilation = dilation
        self.style = style
        self.with_cp = with_cp
        self.conv_cfg = conv_cfg
        self.norm_cfg = norm_cfg

        if self.style == 'pytorch':
            self.conv1_stride = 1
            self.conv2_stride = stride
        else:
            self.conv1_stride = stride
            self.conv2_stride = 1

        self.norm1_name, norm1 = build_enn_norm_layer(
            self.mid_channels, postfix=1)
        self.norm2_name, norm2 = build_enn_norm_layer(
            self.mid_channels, postfix=2)
        self.norm3_name, norm3 = build_enn_norm_layer(out_channels, postfix=3)

        self.conv1 = ennConv(
            in_channels,
            self.mid_channels,
            kernel_size=1,
            stride=self.conv1_stride,
            bias=False)
        self.add_module(self.norm1_name, norm1)
        self.relu1 = ennReLU(self.mid_channels)
        self.conv2 = ennConv(
            self.mid_channels,
            self.mid_channels,
            kernel_size=3,
            stride=self.conv2_stride,
            padding=dilation,
            dilation=dilation,
            bias=False)
        self.add_module(self.norm2_name, norm2)
        self.relu2 = ennReLU(self.mid_channels)
        self.conv3 = ennConv(
            self.mid_channels, out_channels, kernel_size=1, bias=False)
        self.add_module(self.norm3_name, norm3)
        self.relu3 = ennReLU(out_channels)

        self.downsample = downsample

    @property
    def norm1(self):
        """Get the normalization layer."""
        return getattr(self, self.norm1_name)

    @property
    def norm2(self):
        """Get the normalization layer."""
        return getattr(self, self.norm2_name)

    @property
    def norm3(self):
        """Get the normalization layer."""
        return getattr(self, self.norm3_name)

    def forward(self, x):
        """Forward function of Bottleneck."""

        def _inner_forward(x):
            identity = x
            out = self.conv1(x)
            out = self.norm1(out)
            out = self.relu1(out)
            out = self.conv2(out)
            out = self.norm2(out)
            out = self.relu2(out)
            out = self.conv3(out)
            out = self.norm3(out)
            if self.downsample is not None:
                identity = self.downsample(x)
            out += identity
            return out

        if self.with_cp and x.requires_grad:
            out = cp.checkpoint(_inner_forward, x)
        else:
            out = _inner_forward(x)
        out = self.relu3(out)
        return out

    def evaluate_output_shape(self, input_shape):
        """Evaluate output shape."""
        assert len(input_shape) == 4
        assert input_shape[1] == self.in_type.size
        if self.downsample is not None:
            return self.downsample.evaluate_output_shape(input_shape)
        else:
            return input_shape

The provided code snippet includes necessary dependencies for implementing the `get_expansion` function. Write a Python function `def get_expansion(block, expansion=None)` to solve the following problem: Get the expansion of a residual block. The block expansion will be obtained by the following order: 1. If ``expansion`` is given, just return it. 2. If ``block`` has the attribute ``expansion``, then return ``block.expansion``. 3. Return the default value according to the block type: 1 for ``BasicBlock`` and 4 for ``Bottleneck``. Args: block (class): The block class. expansion (int | None): The given expansion ratio. Returns: int: The expansion of the block.

Here is the function:

def get_expansion(block, expansion=None):
    """Get the expansion of a residual block.

    The block expansion will be obtained by the following order:

    1. If ``expansion`` is given, just return it.
    2. If ``block`` has the attribute ``expansion``, then return
       ``block.expansion``.
    3. Return the default value according to the block type:
       1 for ``BasicBlock`` and 4 for ``Bottleneck``.

    Args:
        block (class): The block class.
        expansion (int | None): The given expansion ratio.

    Returns:
        int: The expansion of the block.
    """
    if isinstance(expansion, int):
        assert expansion > 0
    elif expansion is None:
        if hasattr(block, 'expansion'):
            expansion = block.expansion
        elif issubclass(block, BasicBlock):
            expansion = 1
        elif issubclass(block, Bottleneck):
            expansion = 4
        else:
            raise TypeError(f'expansion is not specified for {block.__name__}')
    else:
        raise TypeError('expansion must be an integer or None')
    return expansion
Get the expansion of a residual block. The block expansion will be obtained by the following order: 1. If ``expansion`` is given, just return it. 2. If ``block`` has the attribute ``expansion``, then return ``block.expansion``. 3. Return the default value according to the block type: 1 for ``BasicBlock`` and 4 for ``Bottleneck``. Args: block (class): The block class. expansion (int | None): The given expansion ratio. Returns: int: The expansion of the block.
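The resolution order can be exercised without the real block classes; this stdlib-only sketch keeps steps 1 and 2 and drops the `BasicBlock`/`Bottleneck` fallbacks, which need those classes (`DemoBlock` is a hypothetical stand-in):

```python
def get_expansion(block, expansion=None):
    # Step 1: an explicit integer wins; step 2: fall back to the class attr.
    if isinstance(expansion, int):
        assert expansion > 0
    elif expansion is None:
        if hasattr(block, 'expansion'):
            expansion = block.expansion
        else:
            raise TypeError(f'expansion is not specified for {block.__name__}')
    else:
        raise TypeError('expansion must be an integer or None')
    return expansion

class DemoBlock:
    expansion = 4
```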
7,390
import warnings
from mmdet.models.builder import MODELS

ROTATED_BACKBONES = MODELS

The provided code snippet includes necessary dependencies for implementing the `build_backbone` function. Write a Python function `def build_backbone(cfg)` to solve the following problem: Build backbone.

Here is the function:

def build_backbone(cfg):
    """Build backbone."""
    return ROTATED_BACKBONES.build(cfg)
Build backbone.
7,391
import warnings
from mmdet.models.builder import MODELS

ROTATED_NECKS = MODELS

The provided code snippet includes necessary dependencies for implementing the `build_neck` function. Write a Python function `def build_neck(cfg)` to solve the following problem: Build neck.

Here is the function:

def build_neck(cfg):
    """Build neck."""
    return ROTATED_NECKS.build(cfg)
Build neck.
7,392
import warnings
from mmdet.models.builder import MODELS

ROTATED_ROI_EXTRACTORS = MODELS

The provided code snippet includes necessary dependencies for implementing the `build_roi_extractor` function. Write a Python function `def build_roi_extractor(cfg)` to solve the following problem: Build roi extractor.

Here is the function:

def build_roi_extractor(cfg):
    """Build roi extractor."""
    return ROTATED_ROI_EXTRACTORS.build(cfg)
Build roi extractor.
7,393
import warnings
from mmdet.models.builder import MODELS

ROTATED_SHARED_HEADS = MODELS

The provided code snippet includes necessary dependencies for implementing the `build_shared_head` function. Write a Python function `def build_shared_head(cfg)` to solve the following problem: Build shared head.

Here is the function:

def build_shared_head(cfg):
    """Build shared head."""
    return ROTATED_SHARED_HEADS.build(cfg)
Build shared head.
7,394
import warnings
from mmdet.models.builder import MODELS

ROTATED_HEADS = MODELS

The provided code snippet includes necessary dependencies for implementing the `build_head` function. Write a Python function `def build_head(cfg)` to solve the following problem: Build head.

Here is the function:

def build_head(cfg):
    """Build head."""
    return ROTATED_HEADS.build(cfg)
Build head.
7,395
import warnings
from mmdet.models.builder import MODELS

ROTATED_LOSSES = MODELS

The provided code snippet includes necessary dependencies for implementing the `build_loss` function. Write a Python function `def build_loss(cfg)` to solve the following problem: Build loss.

Here is the function:

def build_loss(cfg):
    """Build loss."""
    return ROTATED_LOSSES.build(cfg)
Build loss.
7,396
import warnings
from mmdet.models.builder import MODELS

ROTATED_DETECTORS = MODELS

The provided code snippet includes necessary dependencies for implementing the `build_detector` function. Write a Python function `def build_detector(cfg, train_cfg=None, test_cfg=None)` to solve the following problem: Build detector.

Here is the function:

def build_detector(cfg, train_cfg=None, test_cfg=None):
    """Build detector."""
    if train_cfg is not None or test_cfg is not None:
        warnings.warn(
            'train_cfg and test_cfg is deprecated, '
            'please specify them in model', UserWarning)
    assert cfg.get('train_cfg') is None or train_cfg is None, \
        'train_cfg specified in both outer field and model field '
    assert cfg.get('test_cfg') is None or test_cfg is None, \
        'test_cfg specified in both outer field and model field '
    return ROTATED_DETECTORS.build(
        cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
Build detector.
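All of the builder records above follow the same registry pattern: a config dict with a `type` key selects a registered class, and the remaining keys become constructor arguments. A minimal stand-in registry (illustrative only, not the real mmcv `Registry`; `TinyBackbone` is a hypothetical class):

```python
class Registry:
    """Toy stand-in for the mmdet MODELS registry used above."""

    def __init__(self):
        self._module_dict = {}

    def register(self, cls):
        # Register the class under its own name so cfg['type'] can find it.
        self._module_dict[cls.__name__] = cls
        return cls

    def build(self, cfg, default_args=None):
        cfg = dict(cfg)
        if default_args is not None:
            # Mirror how build_detector passes train_cfg/test_cfg through.
            for k, v in default_args.items():
                cfg.setdefault(k, v)
        return self._module_dict[cfg.pop('type')](**cfg)

ROTATED_BACKBONES = Registry()

@ROTATED_BACKBONES.register
class TinyBackbone:
    def __init__(self, depth=18):
        self.depth = depth

def build_backbone(cfg):
    """Build backbone."""
    return ROTATED_BACKBONES.build(cfg)
```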
7,397
import e2cnn.nn as enn
from e2cnn import gspaces

gspace = gspaces.Rot2dOnR2(N=N)

The provided code snippet includes necessary dependencies for implementing the `build_enn_feature` function. Write a Python function `def build_enn_feature(planes)` to solve the following problem: build an enn regular feature map with the specified number of channels.

Here is the function:

def build_enn_feature(planes):
    """build an enn regular feature map with the specified number of
    channels."""
    return enn.FieldType(gspace, planes * [gspace.regular_repr])
build an enn regular feature map with the specified number of channels.
7,398
import e2cnn.nn as enn
from e2cnn import gspaces

def build_enn_divide_feature(planes):
    """build an enn regular feature map with the specified number of
    channels divided by N."""
    assert gspace.fibergroup.order() > 0
    N = gspace.fibergroup.order()
    planes = planes / N
    planes = int(planes)
    return enn.FieldType(gspace, [gspace.regular_repr] * planes)

The provided code snippet includes necessary dependencies for implementing the `build_enn_norm_layer` function. Write a Python function `def build_enn_norm_layer(num_features, postfix='')` to solve the following problem: build an enn normalization layer.

Here is the function:

def build_enn_norm_layer(num_features, postfix=''):
    """build an enn normalization layer."""
    in_type = build_enn_divide_feature(num_features)
    return 'bn' + str(postfix), enn.InnerBatchNorm(in_type)
build an enn normalization layer.
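The `build_enn_divide_feature` helper shown in the prompt divides the channel count by the group order: each regular representation of a cyclic group C_N occupies N channels. A stdlib sketch of that bookkeeping (`regular_field_multiplicity` is an assumed name, and N=8 below is an assumed group order for illustration):

```python
def regular_field_multiplicity(planes, N=8):
    # `planes` channels hold planes / N regular fields of a C_N group.
    assert N > 0
    return int(planes / N)
```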
7,399
import e2cnn.nn as enn
from e2cnn import gspaces

def build_enn_divide_feature(planes):
    """build an enn regular feature map with the specified number of
    channels divided by N."""
    assert gspace.fibergroup.order() > 0
    N = gspace.fibergroup.order()
    planes = planes / N
    planes = int(planes)
    return enn.FieldType(gspace, [gspace.regular_repr] * planes)

The provided code snippet includes necessary dependencies for implementing the `ennConv` function. Write a Python function `def ennConv(inplanes, outplanes, kernel_size=3, stride=1, padding=0, groups=1, bias=False, dilation=1)` to solve the following problem: enn convolution. Args: in_channels (List[int]): Number of input channels per scale. out_channels (int): Number of output channels (used at each scale). kernel_size (int, optional): The size of kernel. stride (int, optional): Stride of the convolution. Default: 1. padding (int or tuple): Zero-padding added to both sides of the input. Default: 0. groups (int): Number of blocked connections from input channels to output channels. Default: 1. bias (bool): If True, adds a learnable bias to the output. Default: False. dilation (int or tuple): Spacing between kernel elements. Default: 1.

Here is the function:

def ennConv(inplanes,
            outplanes,
            kernel_size=3,
            stride=1,
            padding=0,
            groups=1,
            bias=False,
            dilation=1):
    """enn convolution.

    Args:
        in_channels (List[int]): Number of input channels per scale.
        out_channels (int): Number of output channels (used at each scale).
        kernel_size (int, optional): The size of kernel.
        stride (int, optional): Stride of the convolution. Default: 1.
        padding (int or tuple): Zero-padding added to both sides of the
            input. Default: 0.
        groups (int): Number of blocked connections from input channels to
            output channels. Default: 1.
        bias (bool): If True, adds a learnable bias to the output.
            Default: False.
        dilation (int or tuple): Spacing between kernel elements. Default: 1.
    """
    in_type = build_enn_divide_feature(inplanes)
    out_type = build_enn_divide_feature(outplanes)
    return enn.R2Conv(
        in_type,
        out_type,
        kernel_size,
        stride=stride,
        padding=padding,
        groups=groups,
        bias=bias,
        dilation=dilation,
        sigma=None,
        frequencies_cutoff=lambda r: 3 * r,
    )
enn convolution. Args: in_channels (List[int]): Number of input channels per scale. out_channels (int): Number of output channels (used at each scale). kernel_size (int, optional): The size of kernel. stride (int, optional): Stride of the convolution. Default: 1. padding (int or tuple): Zero-padding added to both sides of the input. Default: 0. groups (int): Number of blocked connections from input channels to output channels. Default: 1. bias (bool): If True, adds a learnable bias to the output. Default: False. dilation (int or tuple): Spacing between kernel elements. Default: 1.
7,400
import e2cnn.nn as enn
from e2cnn import gspaces

def build_enn_divide_feature(planes):
    """build an enn regular feature map with the specified number of
    channels divided by N."""
    assert gspace.fibergroup.order() > 0
    N = gspace.fibergroup.order()
    planes = planes / N
    planes = int(planes)
    return enn.FieldType(gspace, [gspace.regular_repr] * planes)

def build_enn_trivial_feature(planes):
    """build an enn trivial feature map with the specified number of
    channels."""
    return enn.FieldType(gspace, planes * [gspace.trivial_repr])

The provided code snippet includes necessary dependencies for implementing the `ennTrivialConv` function. Write a Python function `def ennTrivialConv(inplanes, outplanes, kernel_size=3, stride=1, padding=0, groups=1, bias=False, dilation=1)` to solve the following problem: enn convolution with trivial input feature. Args: in_channels (List[int]): Number of input channels per scale. out_channels (int): Number of output channels (used at each scale). kernel_size (int, optional): The size of kernel. stride (int, optional): Stride of the convolution. Default: 1. padding (int or tuple): Zero-padding added to both sides of the input. Default: 0. groups (int): Number of blocked connections from input channels to output channels. Default: 1. bias (bool): If True, adds a learnable bias to the output. Default: False. dilation (int or tuple): Spacing between kernel elements. Default: 1.

Here is the function:

def ennTrivialConv(inplanes,
                   outplanes,
                   kernel_size=3,
                   stride=1,
                   padding=0,
                   groups=1,
                   bias=False,
                   dilation=1):
    """enn convolution with trivial input feature.

    Args:
        in_channels (List[int]): Number of input channels per scale.
        out_channels (int): Number of output channels (used at each scale).
        kernel_size (int, optional): The size of kernel.
        stride (int, optional): Stride of the convolution. Default: 1.
        padding (int or tuple): Zero-padding added to both sides of the
            input. Default: 0.
        groups (int): Number of blocked connections from input channels to
            output channels. Default: 1.
        bias (bool): If True, adds a learnable bias to the output.
            Default: False.
        dilation (int or tuple): Spacing between kernel elements. Default: 1.
    """
    in_type = build_enn_trivial_feature(inplanes)
    out_type = build_enn_divide_feature(outplanes)
    return enn.R2Conv(
        in_type,
        out_type,
        kernel_size,
        stride=stride,
        padding=padding,
        groups=groups,
        bias=bias,
        dilation=dilation,
        sigma=None,
        frequencies_cutoff=lambda r: 3 * r,
    )
enn convolution with trivial input feature. Args: in_channels (List[int]): Number of input channels per scale. out_channels (int): Number of output channels (used at each scale). kernel_size (int, optional): The size of kernel. stride (int, optional): Stride of the convolution. Default: 1. padding (int or tuple): Zero-padding added to both sides of the input. Default: 0. groups (int): Number of blocked connections from input channels to output channels. Default: 1. bias (bool): If True, adds a learnable bias to the output. Default: False. dilation (int or tuple): Spacing between kernel elements. Default: 1.
7,401
import e2cnn.nn as enn
from e2cnn import gspaces

def build_enn_divide_feature(planes):
    """build an enn regular feature map with the specified number of
    channels divided by N."""
    assert gspace.fibergroup.order() > 0
    N = gspace.fibergroup.order()
    planes = planes / N
    planes = int(planes)
    return enn.FieldType(gspace, [gspace.regular_repr] * planes)

The provided code snippet includes necessary dependencies for implementing the `ennReLU` function. Write a Python function `def ennReLU(inplanes)` to solve the following problem: enn ReLU.

Here is the function:

def ennReLU(inplanes):
    """enn ReLU."""
    in_type = build_enn_divide_feature(inplanes)
    return enn.ReLU(in_type, inplace=False)
enn ReLU.
7,402
import e2cnn.nn as enn
from e2cnn import gspaces

def build_enn_divide_feature(planes):
    """build an enn regular feature map with the specified number of
    channels divided by N."""
    assert gspace.fibergroup.order() > 0
    N = gspace.fibergroup.order()
    planes = planes / N
    planes = int(planes)
    return enn.FieldType(gspace, [gspace.regular_repr] * planes)

The provided code snippet includes necessary dependencies for implementing the `ennAvgPool` function. Write a Python function `def ennAvgPool(inplanes, kernel_size=1, stride=None, padding=0, ceil_mode=False)` to solve the following problem: enn Average Pooling. Args: inplanes (int): The number of input channel. kernel_size (int, optional): The size of kernel. stride (int, optional): Stride of the convolution. Default: 1. padding (int or tuple): Zero-padding added to both sides of the input. Default: 0. ceil_mode (bool, optional): if True, keep information in the corner of feature map.

Here is the function:

def ennAvgPool(inplanes,
               kernel_size=1,
               stride=None,
               padding=0,
               ceil_mode=False):
    """enn Average Pooling.

    Args:
        inplanes (int): The number of input channel.
        kernel_size (int, optional): The size of kernel.
        stride (int, optional): Stride of the convolution. Default: 1.
        padding (int or tuple): Zero-padding added to both sides of the
            input. Default: 0.
        ceil_mode (bool, optional): if True, keep information in the corner
            of feature map.
    """
    in_type = build_enn_divide_feature(inplanes)
    return enn.PointwiseAvgPool(
        in_type,
        kernel_size,
        stride=stride,
        padding=padding,
        ceil_mode=ceil_mode)
enn Average Pooling. Args: inplanes (int): The number of input channel. kernel_size (int, optional): The size of kernel. stride (int, optional): Stride of the convolution. Default: 1. padding (int or tuple): Zero-padding added to both sides of the input. Default: 0. ceil_mode (bool, optional): if True, keep information in the corner of feature map.
7,403
import e2cnn.nn as enn from e2cnn import gspaces def build_enn_divide_feature(planes): """build a enn regular feature map with the specified number of channels divided by N.""" assert gspace.fibergroup.order() > 0 N = gspace.fibergroup.order() planes = planes / N planes = int(planes) return enn.FieldType(gspace, [gspace.regular_repr] * planes) The provided code snippet includes necessary dependencies for implementing the `ennMaxPool` function. Write a Python function `def ennMaxPool(inplanes, kernel_size, stride=1, padding=0)` to solve the following problem: enn Max Pooling. Here is the function: def ennMaxPool(inplanes, kernel_size, stride=1, padding=0): """enn Max Pooling.""" in_type = build_enn_divide_feature(inplanes) return enn.PointwiseMaxPool( in_type, kernel_size=kernel_size, stride=stride, padding=padding)
enn Max Pooling.
7,404
import e2cnn.nn as enn from e2cnn import gspaces def build_enn_divide_feature(planes): """build a enn regular feature map with the specified number of channels divided by N.""" assert gspace.fibergroup.order() > 0 N = gspace.fibergroup.order() planes = planes / N planes = int(planes) return enn.FieldType(gspace, [gspace.regular_repr] * planes) The provided code snippet includes necessary dependencies for implementing the `ennInterpolate` function. Write a Python function `def ennInterpolate(inplanes, scale_factor, mode='nearest', align_corners=False)` to solve the following problem: enn Interpolate. Here is the function: def ennInterpolate(inplanes, scale_factor, mode='nearest', align_corners=False): """enn Interpolate.""" in_type = build_enn_divide_feature(inplanes) return enn.R2Upsampling( in_type, scale_factor, mode=mode, align_corners=align_corners)
enn Interpolate.
7,405
import os import platform import warnings import cv2 import torch.multiprocessing as mp The provided code snippet includes necessary dependencies for implementing the `setup_multi_processes` function. Write a Python function `def setup_multi_processes(cfg)` to solve the following problem: Setup multi-processing environment variables. Here is the function: def setup_multi_processes(cfg): """Setup multi-processing environment variables.""" # set multi-process start method as `fork` to speed up the training if platform.system() != 'Windows': mp_start_method = cfg.get('mp_start_method', 'fork') current_method = mp.get_start_method(allow_none=True) if current_method is not None and current_method != mp_start_method: warnings.warn( f'Multi-processing start method `{mp_start_method}` is ' f'different from the previous setting `{current_method}`.' f'It will be force set to `{mp_start_method}`. You can change ' f'this behavior by changing `mp_start_method` in your config.') mp.set_start_method(mp_start_method, force=True) # disable opencv multithreading to avoid system being overloaded opencv_num_threads = cfg.get('opencv_num_threads', 0) cv2.setNumThreads(opencv_num_threads) # setup OMP threads # This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa workers_per_gpu = cfg.data.get('workers_per_gpu', 1) if 'train_dataloader' in cfg.data: workers_per_gpu = \ max(cfg.data.train_dataloader.get('workers_per_gpu', 1), workers_per_gpu) if 'OMP_NUM_THREADS' not in os.environ and workers_per_gpu > 1: omp_num_threads = 1 warnings.warn( f'Setting OMP_NUM_THREADS environment variable for each process ' f'to be {omp_num_threads} in default, to avoid your system being ' f'overloaded, please further tune the variable for optimal ' f'performance in your application as needed.') os.environ['OMP_NUM_THREADS'] = str(omp_num_threads) # setup MKL threads if 'MKL_NUM_THREADS' not in os.environ and workers_per_gpu > 1: mkl_num_threads = 1 
warnings.warn( f'Setting MKL_NUM_THREADS environment variable for each process ' f'to be {mkl_num_threads} in default, to avoid your system being ' f'overloaded, please further tune the variable for optimal ' f'performance in your application as needed.') os.environ['MKL_NUM_THREADS'] = str(mkl_num_threads)
Setup multi-processing environment variables.
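The thread-capping part of the function above boils down to one rule: set a default only when several dataloader workers are in use and the variable is not already set. A stdlib-only sketch of that pattern (the helper name `set_thread_defaults` and the injectable `env` dict are illustrative, not part of mmrotate):

```python
import os

def set_thread_defaults(workers_per_gpu, env=None):
    """Cap OMP/MKL threads to 1 by default when workers_per_gpu > 1,
    without overriding values the user has already exported."""
    env = os.environ if env is None else env
    if workers_per_gpu > 1:
        env.setdefault('OMP_NUM_THREADS', '1')
        env.setdefault('MKL_NUM_THREADS', '1')
    return env

# An injected dict stands in for os.environ so the demo has no side effects.
print(set_thread_defaults(4, {}))  # {'OMP_NUM_THREADS': '1', 'MKL_NUM_THREADS': '1'}
```

`setdefault` is what preserves user-exported values: an existing `OMP_NUM_THREADS` is never overwritten, matching the `not in os.environ` guards above.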
7,406
from mmcv.utils import collect_env as collect_basic_env from mmcv.utils import get_git_hash import mmrotate The provided code snippet includes necessary dependencies for implementing the `collect_env` function. Write a Python function `def collect_env()` to solve the following problem: Collect environment information. Here is the function: def collect_env(): """Collect environment information.""" env_info = collect_basic_env() env_info['MMRotate'] = ( mmrotate.__version__ + '+' + get_git_hash(digits=7)) return env_info
Collect environment information.
7,407
import torch from mmcv.parallel import MMDataParallel, MMDistributedDataParallel dp_factory = {'cuda': MMDataParallel, 'cpu': MMDataParallel} The provided code snippet includes necessary dependencies for implementing the `build_dp` function. Write a Python function `def build_dp(model, device='cuda', dim=0, *args, **kwargs)` to solve the following problem: build DataParallel module by device type. if device is cuda, return a MMDataParallel model; if device is mlu, return a MLUDataParallel model. Args: model (:class:`nn.Module`): model to be parallelized. device (str): device type, cuda, cpu or mlu. Defaults to cuda. dim (int): Dimension used to scatter the data. Defaults to 0. Returns: nn.Module: the model to be parallelized. Here is the function: def build_dp(model, device='cuda', dim=0, *args, **kwargs): """build DataParallel module by device type. if device is cuda, return a MMDataParallel model; if device is mlu, return a MLUDataParallel model. Args: model (:class:`nn.Module`): model to be parallelized. device (str): device type, cuda, cpu or mlu. Defaults to cuda. dim (int): Dimension used to scatter the data. Defaults to 0. Returns: nn.Module: the model to be parallelized. """ if device == 'npu': from mmcv.device.npu import NPUDataParallel dp_factory['npu'] = NPUDataParallel torch.npu.set_device(kwargs['device_ids'][0]) torch.npu.set_compile_mode(jit_compile=False) model = model.npu() elif device == 'cuda': model = model.cuda(kwargs['device_ids'][0]) return dp_factory[device](model, dim=dim, *args, **kwargs)
build DataParallel module by device type. if device is cuda, return a MMDataParallel model; if device is npu, return an NPUDataParallel model. Args: model (:class:`nn.Module`): model to be parallelized. device (str): device type, cuda, cpu or npu. Defaults to cuda. dim (int): Dimension used to scatter the data. Defaults to 0. Returns: nn.Module: the model to be parallelized.
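The `dp_factory` dict above is a plain dispatch table: the device string selects the wrapper class, and device-specific wrappers are registered lazily at call time. A dependency-free sketch of the same shape (the wrapper classes and `build_wrapper` are hypothetical stand-ins for `MMDataParallel`/`NPUDataParallel`, not real mmcv classes):

```python
class FakeDataParallel:
    """Stand-in for MMDataParallel (illustrative only)."""

    def __init__(self, model, dim=0):
        self.model = model
        self.dim = dim

class FakeNPUDataParallel(FakeDataParallel):
    """Stand-in for NPUDataParallel, registered lazily as in build_dp."""

wrapper_factory = {'cuda': FakeDataParallel, 'cpu': FakeDataParallel}

def build_wrapper(model, device='cuda', dim=0):
    if device == 'npu':
        # Lazy registration mirrors the deferred import inside build_dp.
        wrapper_factory['npu'] = FakeNPUDataParallel
    return wrapper_factory[device](model, dim=dim)

w = build_wrapper('my_model', device='npu')
print(type(w).__name__)  # FakeNPUDataParallel
```

The lazy registration matters in the real code because `mmcv.device.npu` can only be imported on machines with NPU support installed.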
7,408
import torch from mmcv.parallel import MMDataParallel, MMDistributedDataParallel ddp_factory = {'cuda': MMDistributedDataParallel} The provided code snippet includes necessary dependencies for implementing the `build_ddp` function. Write a Python function `def build_ddp(model, device='cuda', *args, **kwargs)` to solve the following problem: Build DistributedDataParallel module by device type. If device is cuda, return a MMDistributedDataParallel model; if device is mlu, return a MLUDistributedDataParallel model. Args: model (:class:`nn.Module`): module to be parallelized. device (str): device type, mlu or cuda. Returns: :class:`nn.Module`: the module to be parallelized References: .. [1] https://pytorch.org/docs/stable/generated/torch.nn.parallel. DistributedDataParallel.html Here is the function: def build_ddp(model, device='cuda', *args, **kwargs): """Build DistributedDataParallel module by device type. If device is cuda, return a MMDistributedDataParallel model; if device is mlu, return a MLUDistributedDataParallel model. Args: model (:class:`nn.Module`): module to be parallelized. device (str): device type, mlu or cuda. Returns: :class:`nn.Module`: the module to be parallelized References: .. [1] https://pytorch.org/docs/stable/generated/torch.nn.parallel. DistributedDataParallel.html """ assert device in ['cuda', 'npu'], 'Only available for cuda or npu devices.' if device == 'npu': from mmcv.device.npu import NPUDistributedDataParallel torch.npu.set_compile_mode(jit_compile=False) ddp_factory['npu'] = NPUDistributedDataParallel model = model.npu() elif device == 'cuda': model = model.cuda() return ddp_factory[device](model, *args, **kwargs)
Build DistributedDataParallel module by device type. If device is cuda, return a MMDistributedDataParallel model; if device is npu, return an NPUDistributedDataParallel model. Args: model (:class:`nn.Module`): module to be parallelized. device (str): device type, npu or cuda. Returns: :class:`nn.Module`: the module to be parallelized References: .. [1] https://pytorch.org/docs/stable/generated/torch.nn.parallel. DistributedDataParallel.html
7,409
import torch
from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
def is_npu_available():
    """Returns a bool indicating if NPU is currently available."""
    return hasattr(torch, 'npu') and torch.npu.is_available()
The provided code snippet includes necessary dependencies for implementing the `get_device` function. Write a Python function `def get_device()` to solve the following problem: Returns an available device: npu, cuda or cpu. Here is the function: def get_device():
    """Returns an available device: npu, cuda or cpu."""
    is_device_available = {
        'npu': is_npu_available(),
        'cuda': torch.cuda.is_available(),
    }
    device_list = [k for k, v in is_device_available.items() if v]
    return device_list[0] if len(device_list) >= 1 else 'cpu'
Returns an available device: npu, cuda or cpu.
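`get_device` reduces to picking the first truthy entry of an availability mapping and falling back to `'cpu'`. Since the real version needs torch, here is a torch-free sketch where availability is passed in explicitly (the name `pick_device` is illustrative):

```python
def pick_device(is_device_available):
    """Return the first available device key, else 'cpu'.

    `is_device_available` maps device names to booleans, in priority
    order (dicts preserve insertion order in Python 3.7+).
    """
    device_list = [k for k, v in is_device_available.items() if v]
    return device_list[0] if device_list else 'cpu'

print(pick_device({'npu': False, 'cuda': True}))  # cuda
```

Because the dict is built with `'npu'` first, an NPU machine that also has CUDA reports `'npu'`; the insertion order encodes the priority.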
7,410
import glob import os.path as osp import warnings The provided code snippet includes necessary dependencies for implementing the `find_latest_checkpoint` function. Write a Python function `def find_latest_checkpoint(path, suffix='pth')` to solve the following problem: Find the latest checkpoint from the working directory. Args: path(str): The path to find checkpoints. suffix(str): File extension. Defaults to pth. Returns: latest_path(str | None): File path of the latest checkpoint. References: .. [1] https://github.com/microsoft/SoftTeacher /blob/main/ssod/utils/patch.py Here is the function: def find_latest_checkpoint(path, suffix='pth'): """Find the latest checkpoint from the working directory. Args: path(str): The path to find checkpoints. suffix(str): File extension. Defaults to pth. Returns: latest_path(str | None): File path of the latest checkpoint. References: .. [1] https://github.com/microsoft/SoftTeacher /blob/main/ssod/utils/patch.py """ if not osp.exists(path): warnings.warn('The path of checkpoints does not exist.') return None if osp.exists(osp.join(path, f'latest.{suffix}')): return osp.join(path, f'latest.{suffix}') checkpoints = glob.glob(osp.join(path, f'*.{suffix}')) if len(checkpoints) == 0: warnings.warn('There are no checkpoints in the path.') return None latest = -1 latest_path = None for checkpoint in checkpoints: count = int(osp.basename(checkpoint).split('_')[-1].split('.')[0]) if count > latest: latest = count latest_path = checkpoint return latest_path
Find the latest checkpoint from the working directory. Args: path(str): The path to find checkpoints. suffix(str): File extension. Defaults to pth. Returns: latest_path(str | None): File path of the latest checkpoint. References: .. [1] https://github.com/microsoft/SoftTeacher /blob/main/ssod/utils/patch.py
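Because the function only uses `glob` and `os.path`, its behaviour is easy to demo against a throwaway directory: `latest.pth` wins outright, otherwise the checkpoint with the highest trailing epoch number does. A condensed, warning-free copy of the logic above:

```python
import glob
import os.path as osp
import tempfile

def latest_checkpoint(path, suffix='pth'):
    # Condensed find_latest_checkpoint: prefer latest.pth, else the
    # checkpoint whose filename ends in the largest number.
    if osp.exists(osp.join(path, f'latest.{suffix}')):
        return osp.join(path, f'latest.{suffix}')
    latest, latest_path = -1, None
    for ckpt in glob.glob(osp.join(path, f'*.{suffix}')):
        count = int(osp.basename(ckpt).split('_')[-1].split('.')[0])
        if count > latest:
            latest, latest_path = count, ckpt
    return latest_path

with tempfile.TemporaryDirectory() as work_dir:
    for name in ('epoch_1.pth', 'epoch_12.pth', 'epoch_3.pth'):
        open(osp.join(work_dir, name), 'w').close()
    picked = osp.basename(latest_checkpoint(work_dir))

print(picked)  # epoch_12.pth
```

Note the `int(...)` conversion: a plain lexicographic sort would rank `epoch_3.pth` above `epoch_12.pth`.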
7,411
import copy
import warnings
from mmcv import ConfigDict
def compat_runner_args(cfg):
    if 'runner' not in cfg:
        cfg.runner = ConfigDict({
            'type': 'EpochBasedRunner',
            'max_epochs': cfg.total_epochs
        })
        warnings.warn(
            'config is now expected to have a `runner` section, '
            'please set `runner` in your config.', UserWarning)
    else:
        if 'total_epochs' in cfg:
            assert cfg.total_epochs == cfg.runner.max_epochs
    return cfg
def compat_imgs_per_gpu(cfg):
    cfg = copy.deepcopy(cfg)
    if 'imgs_per_gpu' in cfg.data:
        warnings.warn('"imgs_per_gpu" is deprecated in MMDet V2.0. '
                      'Please use "samples_per_gpu" instead')
        if 'samples_per_gpu' in cfg.data:
            warnings.warn(
                f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and '
                f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"'
                f'={cfg.data.imgs_per_gpu} is used in this experiments')
        else:
            warnings.warn('Automatically set "samples_per_gpu"="imgs_per_gpu"='
                          f'{cfg.data.imgs_per_gpu} in this experiments')
        cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu
    return cfg
def compat_loader_args(cfg):
    """Deprecated sample_per_gpu in cfg.data."""
    cfg = copy.deepcopy(cfg)
    if 'train_dataloader' not in cfg.data:
        cfg.data['train_dataloader'] = ConfigDict()
    if 'val_dataloader' not in cfg.data:
        cfg.data['val_dataloader'] = ConfigDict()
    if 'test_dataloader' not in cfg.data:
        cfg.data['test_dataloader'] = ConfigDict()
    # special process for train_dataloader
    if 'samples_per_gpu' in cfg.data:
        samples_per_gpu = cfg.data.pop('samples_per_gpu')
        assert 'samples_per_gpu' not in \
            cfg.data.train_dataloader, ('`samples_per_gpu` are set '
                                        'in `data` field and '
                                        '`data.train_dataloader` '
                                        'at the same time. '
                                        'Please only set it in '
                                        '`data.train_dataloader`.')
        cfg.data.train_dataloader['samples_per_gpu'] = samples_per_gpu
    if 'persistent_workers' in cfg.data:
        persistent_workers = cfg.data.pop('persistent_workers')
        assert 'persistent_workers' not in \
            cfg.data.train_dataloader, ('`persistent_workers` are set '
                                        'in `data` field and '
                                        '`data.train_dataloader` '
                                        'at the same time. '
                                        'Please only set it in '
                                        '`data.train_dataloader`.')
        cfg.data.train_dataloader['persistent_workers'] = persistent_workers
    if 'workers_per_gpu' in cfg.data:
        workers_per_gpu = cfg.data.pop('workers_per_gpu')
        cfg.data.train_dataloader['workers_per_gpu'] = workers_per_gpu
        cfg.data.val_dataloader['workers_per_gpu'] = workers_per_gpu
        cfg.data.test_dataloader['workers_per_gpu'] = workers_per_gpu
    # special process for val_dataloader
    if 'samples_per_gpu' in cfg.data.val:
        # keep default value of `sample_per_gpu` is 1
        assert 'samples_per_gpu' not in \
            cfg.data.val_dataloader, ('`samples_per_gpu` are set '
                                      'in `data.val` field and '
                                      '`data.val_dataloader` at '
                                      'the same time. '
                                      'Please only set it in '
                                      '`data.val_dataloader`.')
        cfg.data.val_dataloader['samples_per_gpu'] = \
            cfg.data.val.pop('samples_per_gpu')
    # special process for test_dataloader
    # in case the test dataset is concatenated
    if isinstance(cfg.data.test, dict):
        if 'samples_per_gpu' in cfg.data.test:
            assert 'samples_per_gpu' not in \
                cfg.data.test_dataloader, ('`samples_per_gpu` are set '
                                           'in `data.test` field and '
                                           '`data.test_dataloader` '
                                           'at the same time. '
                                           'Please only set it in '
                                           '`data.test_dataloader`.')
            cfg.data.test_dataloader['samples_per_gpu'] = \
                cfg.data.test.pop('samples_per_gpu')
    elif isinstance(cfg.data.test, list):
        for ds_cfg in cfg.data.test:
            if 'samples_per_gpu' in ds_cfg:
                assert 'samples_per_gpu' not in \
                    cfg.data.test_dataloader, ('`samples_per_gpu` are set '
                                               'in `data.test` field and '
                                               '`data.test_dataloader` at '
                                               'the same time. '
                                               'Please only set it in '
                                               '`data.test_dataloader`.')
        samples_per_gpu = max(
            [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test])
        cfg.data.test_dataloader['samples_per_gpu'] = samples_per_gpu
    return cfg
The provided code snippet includes necessary dependencies for implementing the `compat_cfg` function. Write a Python function `def compat_cfg(cfg)` to solve the following problem: This function modifies some fields to keep the config compatible. For example, it moves some arguments which will be deprecated to the correct fields. Here is the function: def compat_cfg(cfg):
    """This function modifies some fields to keep the config compatible.

    For example, it moves some arguments which will be deprecated to the
    correct fields.
    """
    cfg = copy.deepcopy(cfg)
    cfg = compat_imgs_per_gpu(cfg)
    cfg = compat_loader_args(cfg)
    cfg = compat_runner_args(cfg)
    return cfg
This function modifies some fields to keep the config compatible. For example, it moves some arguments which will be deprecated to the correct fields.
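The individual compat steps all follow one pattern: deep-copy the config, then migrate a deprecated key into its new home. A plain-dict sketch of the `imgs_per_gpu` → `samples_per_gpu` step (the real code works on mmcv `ConfigDict` objects and also emits deprecation warnings; the helper name here is illustrative):

```python
import copy

def migrate_imgs_per_gpu(cfg):
    """Rename deprecated data.imgs_per_gpu to data.samples_per_gpu.

    As in compat_imgs_per_gpu above, an explicit imgs_per_gpu wins even
    when samples_per_gpu was also set."""
    cfg = copy.deepcopy(cfg)  # never mutate the caller's config
    if 'imgs_per_gpu' in cfg['data']:
        cfg['data']['samples_per_gpu'] = cfg['data']['imgs_per_gpu']
    return cfg

old_style = {'data': {'imgs_per_gpu': 2, 'workers_per_gpu': 4}}
print(migrate_imgs_per_gpu(old_style)['data']['samples_per_gpu'])  # 2
```

The deep copy is the key design choice: callers can hold on to the original config (e.g. to dump it verbatim into the work dir) without the compat shims mutating it underneath them.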
7,412
import logging from mmcv.utils import get_logger The provided code snippet includes necessary dependencies for implementing the `get_root_logger` function. Write a Python function `def get_root_logger(log_file=None, log_level=logging.INFO)` to solve the following problem: Get root logger. Args: log_file (str, optional): File path of log. Defaults to None. log_level (int, optional): The level of logger. Defaults to logging.INFO. Returns: :obj:`logging.Logger`: The obtained logger Here is the function: def get_root_logger(log_file=None, log_level=logging.INFO): """Get root logger. Args: log_file (str, optional): File path of log. Defaults to None. log_level (int, optional): The level of logger. Defaults to logging.INFO. Returns: :obj:`logging.Logger`: The obtained logger """ logger = get_logger( name='mmrotate', log_file=log_file, log_level=log_level) return logger
Get root logger. Args: log_file (str, optional): File path of log. Defaults to None. log_level (int, optional): The level of logger. Defaults to logging.INFO. Returns: :obj:`logging.Logger`: The obtained logger
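Under the hood, mmcv's `get_logger` builds on the stdlib convention of one process-wide logger per name, so repeated calls hand back the same object. A stdlib-only sketch (the file handler is omitted and the helper name is illustrative):

```python
import logging

def get_named_logger(name='mmrotate', log_level=logging.INFO):
    # logging.getLogger caches by name, so every caller shares one logger.
    logger = logging.getLogger(name)
    logger.setLevel(log_level)
    return logger

logger = get_named_logger(log_level=logging.DEBUG)
print(logger is get_named_logger())  # True: same cached instance
```

This caching is why modules all over a codebase can call `get_root_logger()` and still write to one coherent log.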
7,413
from argparse import ArgumentParser from mmdet.apis import inference_detector, init_detector, show_result_pyplot import mmrotate def parse_args(): parser = ArgumentParser() parser.add_argument('img', help='Image file') parser.add_argument('config', help='Config file') parser.add_argument('checkpoint', help='Checkpoint file') parser.add_argument('--out-file', default=None, help='Path to output file') parser.add_argument( '--device', default='cuda:0', help='Device used for inference') parser.add_argument( '--palette', default='dota', choices=['dota', 'sar', 'hrsc', 'hrsc_classwise', 'random'], help='Color palette used for visualization') parser.add_argument( '--score-thr', type=float, default=0.3, help='bbox score threshold') args = parser.parse_args() return args
null
7,414
from argparse import ArgumentParser from mmdet.apis import init_detector, show_result_pyplot from mmrotate.apis import inference_detector_by_patches def parse_args(): parser = ArgumentParser() parser.add_argument('img', help='Image file') parser.add_argument('config', help='Config file') parser.add_argument('checkpoint', help='Checkpoint file') parser.add_argument( '--patch_sizes', type=int, nargs='+', default=[1024], help='The sizes of patches') parser.add_argument( '--patch_steps', type=int, nargs='+', default=[824], help='The steps between two patches') parser.add_argument( '--img_ratios', type=float, nargs='+', default=[1.0], help='Image resizing ratios for multi-scale detecting') parser.add_argument( '--merge_iou_thr', type=float, default=0.1, help='IoU threshould for merging results') parser.add_argument( '--device', default='cuda:0', help='Device used for inference') parser.add_argument( '--palette', default='dota', choices=['dota', 'sar', 'hrsc', 'hrsc_classwise', 'random'], help='Color palette used for visualization') parser.add_argument( '--score-thr', type=float, default=0.3, help='bbox score threshold') args = parser.parse_args() return args
null
7,415
import argparse
import copy
import os
import os.path as osp
import time
import warnings
import mmcv
import torch
import torch.distributed as dist
from mmcv import Config, DictAction
from mmcv.runner import get_dist_info, init_dist
from mmcv.utils import get_git_hash
from mmdet import __version__
from mmdet.apis import init_random_seed, set_random_seed
from mmrotate.apis import train_detector
from mmrotate.datasets import build_dataset
from mmrotate.models import build_detector
from mmrotate.utils import (collect_env, get_device, get_root_logger,
                            setup_multi_processes)
def parse_args():
    parser = argparse.ArgumentParser(description='Train a detector')
    parser.add_argument('config', help='train config file path')
    parser.add_argument('--work-dir', help='the dir to save logs and models')
    parser.add_argument(
        '--resume-from', help='the checkpoint file to resume from')
    parser.add_argument(
        '--auto-resume',
        action='store_true',
        help='resume from the latest checkpoint automatically')
    parser.add_argument(
        '--no-validate',
        action='store_true',
        help='whether not to evaluate the checkpoint during training')
    group_gpus = parser.add_mutually_exclusive_group()
    group_gpus.add_argument(
        '--gpus',
        type=int,
        help='number of gpus to use '
        '(only applicable to non-distributed training)')
    group_gpus.add_argument(
        '--gpu-ids',
        type=int,
        nargs='+',
        help='ids of gpus to use '
        '(only applicable to non-distributed training)')
    parser.add_argument('--seed', type=int, default=None, help='random seed')
    parser.add_argument(
        '--diff-seed',
        action='store_true',
        help='Whether or not set different seeds for different ranks')
    parser.add_argument(
        '--deterministic',
        action='store_true',
        help='whether to set deterministic options for CUDNN backend.')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file. If the value to '
        'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
        'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
        'Note that the quotation marks are necessary and that no white space '
        'is allowed.')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()
    if 'LOCAL_RANK' not in os.environ:
        os.environ['LOCAL_RANK'] = str(args.local_rank)
    return args
null
7,416
import argparse import numpy as np import torch from mmcv import Config, DictAction from mmrotate.models import build_detector def parse_args(): parser = argparse.ArgumentParser(description='Train a detector') parser.add_argument('config', help='train config file path') parser.add_argument( '--shape', type=int, nargs='+', default=[1024, 1024], help='input image size') parser.add_argument( '--cfg-options', nargs='+', action=DictAction, help='override some settings in the used config, the key-value pair ' 'in xxx=yyy format will be merged into config file. If the value to ' 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' 'Note that the quotation marks are necessary and that no white space ' 'is allowed.') parser.add_argument( '--size-divisor', type=int, default=32, help='Pad the input image, the minimum size that is divisible ' 'by size_divisor, -1 means do not pad the image.') args = parser.parse_args() return args
null
7,417
import argparse import os import matplotlib.pyplot as plt import mmcv import numpy as np import torch from matplotlib.ticker import MultipleLocator from mmcv import Config, DictAction from mmcv.ops import nms_rotated from mmdet.datasets import build_dataset from mmrotate.core.bbox import rbbox_overlaps def parse_args(): parser = argparse.ArgumentParser( description='Generate confusion matrix from detection results') parser.add_argument('config', help='test config file path') parser.add_argument( 'prediction_path', help='prediction path where test .pkl result') parser.add_argument( 'save_dir', help='directory where confusion matrix will be saved') parser.add_argument( '--show', action='store_true', help='show confusion matrix') parser.add_argument( '--color-theme', default='plasma', help='theme of the matrix color map') parser.add_argument( '--score-thr', type=float, default=0.3, help='score threshold to filter detection bboxes') parser.add_argument( '--tp-iou-thr', type=float, default=0.5, help='IoU threshold to be considered as matched') parser.add_argument( '--nms-iou-thr', type=float, default=None, help='nms IoU threshold, only applied when users want to change the' 'nms IoU threshold.') parser.add_argument( '--cfg-options', nargs='+', action=DictAction, help='override some settings in the used config, the key-value pair ' 'in xxx=yyy format will be merged into config file. If the value to ' 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' 'Note that the quotation marks are necessary and that no white space ' 'is allowed.') args = parser.parse_args() return args
null
7,418
import argparse import os import matplotlib.pyplot as plt import mmcv import numpy as np import torch from matplotlib.ticker import MultipleLocator from mmcv import Config, DictAction from mmcv.ops import nms_rotated from mmdet.datasets import build_dataset from mmrotate.core.bbox import rbbox_overlaps def analyze_per_img_dets(confusion_matrix, gt_bboxes, gt_labels, result, score_thr=0, tp_iou_thr=0.5, nms_iou_thr=None): """Analyze detection results on each image. Args: confusion_matrix (ndarray): The confusion matrix, has shape (num_classes + 1, num_classes + 1). gt_bboxes (ndarray): Ground truth bboxes, has shape (num_gt, 4). gt_labels (ndarray): Ground truth labels, has shape (num_gt). result (ndarray): Detection results, has shape (num_classes, num_bboxes, 5). score_thr (float): Score threshold to filter bboxes. Default: 0. tp_iou_thr (float): IoU threshold to be considered as matched. Default: 0.5. nms_iou_thr (float|optional): nms IoU threshold, the detection results have done nms in the detector, only applied when users want to change the nms IoU threshold. Default: None. 
""" true_positives = np.zeros_like(gt_labels) gt_bboxes = torch.from_numpy(gt_bboxes).float() for det_label, det_bboxes in enumerate(result): det_bboxes = torch.from_numpy(det_bboxes).float() if nms_iou_thr: det_bboxes, _ = nms_rotated( det_bboxes[:, :5], det_bboxes[:, -1], nms_iou_thr, score_threshold=score_thr) ious = rbbox_overlaps(det_bboxes[:, :5], gt_bboxes) for i, det_bbox in enumerate(det_bboxes): score = det_bbox[5] det_match = 0 if score >= score_thr: for j, gt_label in enumerate(gt_labels): if ious[i, j] >= tp_iou_thr: det_match += 1 if gt_label == det_label: true_positives[j] += 1 # TP confusion_matrix[gt_label, det_label] += 1 if det_match == 0: # BG FP confusion_matrix[-1, det_label] += 1 for num_tp, gt_label in zip(true_positives, gt_labels): if num_tp == 0: # FN confusion_matrix[gt_label, -1] += 1 The provided code snippet includes necessary dependencies for implementing the `calculate_confusion_matrix` function. Write a Python function `def calculate_confusion_matrix(dataset, results, score_thr=0, nms_iou_thr=None, tp_iou_thr=0.5)` to solve the following problem: Calculate the confusion matrix. Args: dataset (Dataset): Test or val dataset. results (list[ndarray]): A list of detection results in each image. score_thr (float|optional): Score threshold to filter bboxes. Default: 0. nms_iou_thr (float|optional): nms IoU threshold, the detection results have done nms in the detector, only applied when users want to change the nms IoU threshold. Default: None. tp_iou_thr (float|optional): IoU threshold to be considered as matched. Default: 0.5. Here is the function: def calculate_confusion_matrix(dataset, results, score_thr=0, nms_iou_thr=None, tp_iou_thr=0.5): """Calculate the confusion matrix. Args: dataset (Dataset): Test or val dataset. results (list[ndarray]): A list of detection results in each image. score_thr (float|optional): Score threshold to filter bboxes. Default: 0. 
nms_iou_thr (float|optional): nms IoU threshold, the detection results have done nms in the detector, only applied when users want to change the nms IoU threshold. Default: None. tp_iou_thr (float|optional): IoU threshold to be considered as matched. Default: 0.5. """ num_classes = len(dataset.CLASSES) confusion_matrix = np.zeros(shape=[num_classes + 1, num_classes + 1]) assert len(dataset) == len(results) prog_bar = mmcv.ProgressBar(len(results)) for idx, per_img_res in enumerate(results): if isinstance(per_img_res, tuple): res_bboxes, _ = per_img_res else: res_bboxes = per_img_res ann = dataset.get_ann_info(idx) gt_bboxes = ann['bboxes'] labels = ann['labels'] analyze_per_img_dets(confusion_matrix, gt_bboxes, labels, res_bboxes, score_thr, tp_iou_thr, nms_iou_thr) prog_bar.update() return confusion_matrix
Calculate the confusion matrix. Args: dataset (Dataset): Test or val dataset. results (list[ndarray]): A list of detection results in each image. score_thr (float|optional): Score threshold to filter bboxes. Default: 0. nms_iou_thr (float|optional): nms IoU threshold, the detection results have done nms in the detector, only applied when users want to change the nms IoU threshold. Default: None. tp_iou_thr (float|optional): IoU threshold to be considered as matched. Default: 0.5.
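The bookkeeping in `analyze_per_img_dets` is independent of how overlaps are computed, so it can be illustrated without torch by swapping `rbbox_overlaps` for a plain axis-aligned IoU. A pure-Python sketch (boxes are `(x1, y1, x2, y2)` here, not rotated; both helper names are illustrative, not mmrotate APIs):

```python
def iou_xyxy(a, b):
    # Axis-aligned IoU, standing in for rbbox_overlaps in this sketch.
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def update_confusion(matrix, gts, dets, tp_iou_thr=0.5):
    """Fill a (C+1)x(C+1) matrix for one image; the extra row/column is
    background. gts and dets are lists of (label, box) pairs."""
    true_positives = [0] * len(gts)
    for det_label, det_box in dets:
        det_match = 0
        for j, (gt_label, gt_box) in enumerate(gts):
            if iou_xyxy(det_box, gt_box) >= tp_iou_thr:
                det_match += 1
                if gt_label == det_label:
                    true_positives[j] += 1  # TP
                matrix[gt_label][det_label] += 1
        if det_match == 0:
            matrix[-1][det_label] += 1  # background FP
    for tp, (gt_label, _) in zip(true_positives, gts):
        if tp == 0:
            matrix[gt_label][-1] += 1  # FN: ground truth missed
    return matrix

m = [[0] * 3 for _ in range(3)]  # 2 classes + background
update_confusion(m,
                 gts=[(0, (0, 0, 10, 10))],
                 dets=[(0, (1, 1, 10, 10)), (1, (50, 50, 60, 60))])
print(m)  # [[1, 0, 0], [0, 0, 0], [0, 1, 0]]
```

Note that, as in the original, a detection overlapping a ground truth of a *different* class still increments the off-diagonal cell `matrix[gt_label][det_label]`; only a same-class match counts toward suppressing the false negative.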
7,419
import argparse import os import matplotlib.pyplot as plt import mmcv import numpy as np import torch from matplotlib.ticker import MultipleLocator from mmcv import Config, DictAction from mmcv.ops import nms_rotated from mmdet.datasets import build_dataset from mmrotate.core.bbox import rbbox_overlaps The provided code snippet includes necessary dependencies for implementing the `plot_confusion_matrix` function. Write a Python function `def plot_confusion_matrix(confusion_matrix, labels, save_dir=None, show=True, title='Normalized Confusion Matrix', color_theme='plasma')` to solve the following problem: Draw confusion matrix with matplotlib. Args: confusion_matrix (ndarray): The confusion matrix. labels (list[str]): List of class names. save_dir (str|optional): If set, save the confusion matrix plot to the given path. Default: None. show (bool): Whether to show the plot. Default: True. title (str): Title of the plot. Default: `Normalized Confusion Matrix`. color_theme (str): Theme of the matrix color map. Default: `plasma`. Here is the function: def plot_confusion_matrix(confusion_matrix, labels, save_dir=None, show=True, title='Normalized Confusion Matrix', color_theme='plasma'): """Draw confusion matrix with matplotlib. Args: confusion_matrix (ndarray): The confusion matrix. labels (list[str]): List of class names. save_dir (str|optional): If set, save the confusion matrix plot to the given path. Default: None. show (bool): Whether to show the plot. Default: True. title (str): Title of the plot. Default: `Normalized Confusion Matrix`. color_theme (str): Theme of the matrix color map. Default: `plasma`. 
""" # normalize the confusion matrix per_label_sums = confusion_matrix.sum(axis=1)[:, np.newaxis] confusion_matrix = \ confusion_matrix.astype(np.float32) / per_label_sums * 100 num_classes = len(labels) fig, ax = plt.subplots( figsize=(0.5 * num_classes, 0.5 * num_classes * 0.8), dpi=180) cmap = plt.get_cmap(color_theme) im = ax.imshow(confusion_matrix, cmap=cmap) plt.colorbar(mappable=im, ax=ax) title_font = {'weight': 'bold', 'size': 12} ax.set_title(title, fontdict=title_font) label_font = {'size': 10} plt.ylabel('Ground Truth Label', fontdict=label_font) plt.xlabel('Prediction Label', fontdict=label_font) # draw locator xmajor_locator = MultipleLocator(1) xminor_locator = MultipleLocator(0.5) ax.xaxis.set_major_locator(xmajor_locator) ax.xaxis.set_minor_locator(xminor_locator) ymajor_locator = MultipleLocator(1) yminor_locator = MultipleLocator(0.5) ax.yaxis.set_major_locator(ymajor_locator) ax.yaxis.set_minor_locator(yminor_locator) # draw grid ax.grid(True, which='minor', linestyle='-') # draw label ax.set_xticks(np.arange(num_classes)) ax.set_yticks(np.arange(num_classes)) ax.set_xticklabels(labels) ax.set_yticklabels(labels) ax.tick_params( axis='x', bottom=False, top=True, labelbottom=False, labeltop=True) plt.setp( ax.get_xticklabels(), rotation=45, ha='left', rotation_mode='anchor') # draw confution matrix value for i in range(num_classes): for j in range(num_classes): ax.text( j, i, '{}%'.format( int(confusion_matrix[ i, j]) if not np.isnan(confusion_matrix[i, j]) else -1), ha='center', va='center', color='w', size=7) ax.set_ylim(len(confusion_matrix) - 0.5, -0.5) # matplotlib>3.1.1 fig.tight_layout() if save_dir is not None: plt.savefig( os.path.join(save_dir, 'confusion_matrix.png'), format='png') if show: plt.show()
Draw confusion matrix with matplotlib. Args: confusion_matrix (ndarray): The confusion matrix. labels (list[str]): List of class names. save_dir (str|optional): If set, save the confusion matrix plot to the given path. Default: None. show (bool): Whether to show the plot. Default: True. title (str): Title of the plot. Default: `Normalized Confusion Matrix`. color_theme (str): Theme of the matrix color map. Default: `plasma`.
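The row-normalization step at the top of the function can be checked in isolation. A minimal sketch, with an illustrative helper name (`normalize_confusion_matrix` is ours, not part of the tool):

```python
import numpy as np

def normalize_confusion_matrix(cm):
    # Divide each row by its ground-truth count and scale to percent,
    # mirroring the per_label_sums normalization in plot_confusion_matrix.
    per_label_sums = cm.sum(axis=1)[:, np.newaxis]
    return cm.astype(np.float32) / per_label_sums * 100

cm = np.array([[8, 2], [1, 9]])
norm = normalize_confusion_matrix(cm)  # each row now sums to 100
```

After normalization, the diagonal entries read directly as per-class recall percentages, which is what the plotted cell labels show.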
7,420
import argparse import json from collections import defaultdict import matplotlib.pyplot as plt import numpy as np The provided code snippet includes necessary dependencies for implementing the `cal_train_time` function. Write a Python function `def cal_train_time(log_dicts, args)` to solve the following problem: calculate the training time. Here is the function: def cal_train_time(log_dicts, args): """calculate the training time.""" for i, log_dict in enumerate(log_dicts): print(f'{"-" * 5}Analyze train time of {args.json_logs[i]}{"-" * 5}') all_times = [] for epoch in log_dict.keys(): if args.include_outliers: all_times.append(log_dict[epoch]['time']) else: all_times.append(log_dict[epoch]['time'][1:]) all_times = np.array(all_times) epoch_ave_time = all_times.mean(-1) slowest_epoch = epoch_ave_time.argmax() fastest_epoch = epoch_ave_time.argmin() std_over_epoch = epoch_ave_time.std() print(f'slowest epoch {slowest_epoch + 1}, ' f'average time is {epoch_ave_time[slowest_epoch]:.4f}') print(f'fastest epoch {fastest_epoch + 1}, ' f'average time is {epoch_ave_time[fastest_epoch]:.4f}') print(f'time std over epochs is {std_over_epoch:.4f}') print(f'average iter time: {np.mean(all_times):.4f} s/iter') print()
calculate the training time.
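The arithmetic in `cal_train_time` (per-epoch mean, slowest/fastest epoch, overall mean) can be sketched without the log machinery; `summarize_epoch_times` is an illustrative name and uses 0-based epoch indices:

```python
import numpy as np

def summarize_epoch_times(all_times):
    # all_times: one list of per-iteration times per epoch.
    all_times = np.asarray(all_times)
    epoch_ave_time = all_times.mean(-1)
    return {
        'slowest_epoch': int(epoch_ave_time.argmax()),
        'fastest_epoch': int(epoch_ave_time.argmin()),
        'std_over_epoch': float(epoch_ave_time.std()),
        'average_iter_time': float(all_times.mean()),
    }

stats = summarize_epoch_times([[0.50, 0.60], [0.30, 0.40]])
```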
7,421
import argparse import json from collections import defaultdict import matplotlib.pyplot as plt import numpy as np try: import seaborn as sns except ImportError: sns = None The provided code snippet includes necessary dependencies for implementing the `plot_curve` function. Write a Python function `def plot_curve(log_dicts, args)` to solve the following problem: Plot curve. Here is the function: def plot_curve(log_dicts, args): """Plot curve.""" if args.backend is not None: plt.switch_backend(args.backend) if sns is None: raise ImportError('Please run "pip install seaborn" ' 'to install seaborn first.') sns.set_style(args.style) # if legend is None, use (unknown)_{key} as legend legend = args.legend if legend is None: legend = [] for json_log in args.json_logs: for metric in args.keys: legend.append(f'{json_log}_{metric}') assert len(legend) == (len(args.json_logs) * len(args.keys)) metrics = args.keys num_metrics = len(metrics) for i, log_dict in enumerate(log_dicts): epochs = list(log_dict.keys()) for j, metric in enumerate(metrics): print(f'plot curve of {args.json_logs[i]}, metric is {metric}') if metric not in log_dict[epochs[0]]: raise KeyError( f'{args.json_logs[i]} does not contain metric {metric}') if 'mAP' in metric: xs = np.arange(1, max(epochs) + 1) ys = [] for epoch in epochs: ys += log_dict[epoch][metric] ax = plt.gca() ax.set_xticks(xs) plt.xlabel('epoch') plt.plot(xs, ys, label=legend[i * num_metrics + j], marker='o') else: xs = [] ys = [] num_iters_per_epoch = log_dict[epochs[0]]['iter'][-2] for epoch in epochs: iters = log_dict[epoch]['iter'] if log_dict[epoch]['mode'][-1] == 'val': iters = iters[:-1] xs.append( np.array(iters) + (epoch - 1) * num_iters_per_epoch) ys.append(np.array(log_dict[epoch][metric][:len(iters)])) xs = np.concatenate(xs) ys = np.concatenate(ys) plt.xlabel('iter') plt.plot( xs, ys, label=legend[i * num_metrics + j], linewidth=0.5) plt.legend() if args.title is not None: plt.title(args.title) if args.out is None: plt.show() else: print(f'save curve to: {args.out}')
plt.savefig(args.out) plt.cla()
Plot curve.
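The x-axis bookkeeping for non-mAP metrics — shifting each epoch's local iteration numbers onto one global axis — is easy to get wrong, so here is a standalone sketch (`global_iters` is an illustrative name):

```python
import numpy as np

def global_iters(num_iters_per_epoch, epoch_iters):
    # Shift each epoch's local iteration indices onto one global axis,
    # as plot_curve does via `iters + (epoch - 1) * num_iters_per_epoch`.
    xs = []
    for epoch, iters in enumerate(epoch_iters, start=1):
        xs.append(np.array(iters) + (epoch - 1) * num_iters_per_epoch)
    return np.concatenate(xs)

xs = global_iters(100, [[50, 100], [50, 100]])
```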
7,422
import argparse import json from collections import defaultdict import matplotlib.pyplot as plt import numpy as np def add_plot_parser(subparsers): """Add plot parser.""" parser_plt = subparsers.add_parser( 'plot_curve', help='parser for plotting curves') parser_plt.add_argument( 'json_logs', type=str, nargs='+', help='path of train log in json format') parser_plt.add_argument( '--keys', type=str, nargs='+', default=['bbox_mAP'], help='the metric that you want to plot') parser_plt.add_argument('--title', type=str, help='title of figure') parser_plt.add_argument( '--legend', type=str, nargs='+', default=None, help='legend of each plot') parser_plt.add_argument( '--backend', type=str, default=None, help='backend of plt') parser_plt.add_argument( '--style', type=str, default='dark', help='style of plt') parser_plt.add_argument('--out', type=str, default=None) def add_time_parser(subparsers): """Add time parser.""" parser_time = subparsers.add_parser( 'cal_train_time', help='parser for computing the average time per training iteration') parser_time.add_argument( 'json_logs', type=str, nargs='+', help='path of train log in json format') parser_time.add_argument( '--include-outliers', action='store_true', help='include the first value of every epoch when computing ' 'the average time') The provided code snippet includes necessary dependencies for implementing the `parse_args` function. Write a Python function `def parse_args()` to solve the following problem: Parse parameters. Here is the function: def parse_args(): """Parse parameters.""" parser = argparse.ArgumentParser(description='Analyze Json Log') # currently only support plot curve and calculate average train time subparsers = parser.add_subparsers(dest='task', help='task parser') add_plot_parser(subparsers) add_time_parser(subparsers) args = parser.parse_args() return args
Parse parameters.
7,423
import argparse import json from collections import defaultdict import matplotlib.pyplot as plt import numpy as np The provided code snippet includes necessary dependencies for implementing the `load_json_logs` function. Write a Python function `def load_json_logs(json_logs)` to solve the following problem: Load and convert json_logs to log_dict, key is epoch, value is a sub dict keys of sub dict is different metrics, e.g. memory, bbox_mAP value of sub dict is a list of corresponding values of all iterations. Args: json_logs (str): json file of logs. Returns: dict: dict of logs. Here is the function: def load_json_logs(json_logs): """Load and convert json_logs to log_dict, key is epoch, value is a sub dict keys of sub dict is different metrics, e.g. memory, bbox_mAP value of sub dict is a list of corresponding values of all iterations. Args: json_logs (str): json file of logs. Returns: dict: dict of logs. """ log_dicts = [{} for _ in json_logs] for json_log, log_dict in zip(json_logs, log_dicts): with open(json_log, 'r') as log_file: for line in log_file: log = json.loads(line.strip()) # skip lines without `epoch` field if 'epoch' not in log: continue epoch = log.pop('epoch') if epoch not in log_dict: log_dict[epoch] = defaultdict(list) for k, v in log.items(): log_dict[epoch][k].append(v) return log_dicts
Load and convert json_logs to log_dict, key is epoch, value is a sub dict keys of sub dict is different metrics, e.g. memory, bbox_mAP value of sub dict is a list of corresponding values of all iterations. Args: json_logs (str): json file of logs. Returns: dict: dict of logs.
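A minimal sketch of the per-file grouping logic, with the open file replaced by a list of raw JSON strings (`parse_log_lines` is an illustrative name):

```python
import json
from collections import defaultdict

def parse_log_lines(lines):
    # Group metric values by epoch the way load_json_logs does for a
    # single log file; `lines` stands in for the file object.
    log_dict = {}
    for line in lines:
        log = json.loads(line.strip())
        if 'epoch' not in log:  # skip lines without `epoch` field
            continue
        epoch = log.pop('epoch')
        if epoch not in log_dict:
            log_dict[epoch] = defaultdict(list)
        for k, v in log.items():
            log_dict[epoch][k].append(v)
    return log_dict

logs = parse_log_lines([
    '{"env_info": "cuda"}',
    '{"epoch": 1, "iter": 50, "loss": 0.9}',
    '{"epoch": 1, "iter": 100, "loss": 0.7}',
])
```

The first record carries no `epoch` key and is dropped, matching how the tool skips environment-info lines in the log.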
7,424
import argparse import copy import os import time import torch from mmcv import Config, DictAction from mmcv.cnn import fuse_conv_bn from mmcv.parallel import MMDistributedDataParallel from mmcv.runner import init_dist, load_checkpoint, wrap_fp16_model from mmdet.datasets import build_dataloader, replace_ImageToTensor from mmrotate.datasets import build_dataset from mmrotate.models import build_detector def parse_args(): parser = argparse.ArgumentParser(description='mmrotate benchmark a model') parser.add_argument('config', help='test config file path') parser.add_argument('checkpoint', help='checkpoint file') parser.add_argument( '--repeat-num', type=int, default=1, help='number of repeat times of measurement for averaging the results') parser.add_argument( '--max-iter', type=int, default=2000, help='num of max iter') parser.add_argument( '--log-interval', type=int, default=50, help='interval of logging') parser.add_argument( '--fuse-conv-bn', action='store_true', help='Whether to fuse conv and bn, this will slightly increase' 'the inference speed') parser.add_argument( '--use-fp16', action='store_true', help='Whether to use fp16 to inference') parser.add_argument( '--cfg-options', nargs='+', action=DictAction, help='override some settings in the used config, the key-value pair ' 'in xxx=yyy format will be merged into config file. If the value to ' 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' 'Note that the quotation marks are necessary and that no white space ' 'is allowed.') parser.add_argument( '--launcher', choices=['none', 'pytorch', 'slurm', 'mpi'], default='none', help='job launcher') parser.add_argument('--local_rank', type=int, default=0) args = parser.parse_args() if 'LOCAL_RANK' not in os.environ: os.environ['LOCAL_RANK'] = str(args.local_rank) return args
null
7,425
import argparse import copy import os import time import torch from mmcv import Config, DictAction from mmcv.cnn import fuse_conv_bn from mmcv.parallel import MMDistributedDataParallel from mmcv.runner import init_dist, load_checkpoint, wrap_fp16_model from mmdet.datasets import build_dataloader, replace_ImageToTensor from mmrotate.datasets import build_dataset from mmrotate.models import build_detector def measure_inference_speed(cfg, checkpoint, max_iter, log_interval, is_fuse_conv_bn, use_fp16): """Inference speed statistics. Args: cfg (object): Test config object. checkpoint (str): Checkpoint file path. max_iter (int): Num of max iter. log_interval (int): Interval of logging. is_fuse_conv_bn (bool): Whether to fuse conv and bn, this will slightly increase the inference speed use_fp16 (bool): Whether to use fp16 to inference. Returns: fps (float): Average speed of inference (fps). """ # set cudnn_benchmark if cfg.get('cudnn_benchmark', False): torch.backends.cudnn.benchmark = True cfg.model.pretrained = None cfg.data.test.test_mode = True # build the dataloader samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1) if samples_per_gpu > 1: # Replace 'ImageToTensor' to 'DefaultFormatBundle' cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline) dataset = build_dataset(cfg.data.test) data_loader = build_dataloader( dataset, samples_per_gpu=1, # Because multiple processes will occupy additional CPU resources, # FPS statistics will be more unstable when workers_per_gpu is not 0. # It is reasonable to set workers_per_gpu to 0. 
workers_per_gpu=0, dist=True, shuffle=False) # build the model and load checkpoint cfg.model.train_cfg = None model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg')) if use_fp16: wrap_fp16_model(model) load_checkpoint(model, checkpoint, map_location='cpu') if is_fuse_conv_bn: model = fuse_conv_bn(model) model = MMDistributedDataParallel( model.cuda(), device_ids=[torch.cuda.current_device()], broadcast_buffers=False) model.eval() if use_fp16: model.half() # the first several iterations may be very slow so skip them num_warmup = 5 pure_inf_time = 0 fps = 0 # benchmark with 2000 images and take the average for i, data in enumerate(data_loader): torch.cuda.synchronize() start_time = time.perf_counter() with torch.no_grad(): model(return_loss=False, rescale=True, **data) torch.cuda.synchronize() elapsed = time.perf_counter() - start_time if i >= num_warmup: pure_inf_time += elapsed if (i + 1) % log_interval == 0: fps = (i + 1 - num_warmup) / pure_inf_time print( f'Done image [{i + 1:<3}/ {max_iter}], ' f'fps: {fps:.1f} img / s, ' f'times per image: {1000 / fps:.1f} ms / img', flush=True) if (i + 1) == max_iter: fps = (i + 1 - num_warmup) / pure_inf_time print( f'Overall fps: {fps:.1f} img / s, ' f'times per image: {1000 / fps:.1f} ms / img', flush=True) break return fps The provided code snippet includes necessary dependencies for implementing the `repeat_measure_inference_speed` function. Write a Python function `def repeat_measure_inference_speed(cfg, checkpoint, max_iter, log_interval, is_fuse_conv_bn, use_fp16, repeat_num=1)` to solve the following problem: Repeat to inference several times and take the average. Args: cfg (object): Test config object. checkpoint (str): Checkpoint file path. max_iter (int): Num of max iter. log_interval (int): Interval of logging. is_fuse_conv_bn (bool): Whether to fuse conv and bn, this will slightly increase the inference speed use_fp16 (bool): Whether to use fp16 to inference.
repeat_num (int): Number of repeat times of measurement for averaging the results. Returns: fps (float or list(float)): Inference speed(fps) or list of inference speed(fps) for repeating measurements. Here is the function: def repeat_measure_inference_speed(cfg, checkpoint, max_iter, log_interval, is_fuse_conv_bn, use_fp16, repeat_num=1): """Repeat to inference several times and take the average. Args: cfg (object): Test config object. checkpoint (str): Checkpoint file path. max_iter (int): Num of max iter. log_interval (int): Interval of logging. is_fuse_conv_bn (bool): Whether to fuse conv and bn, this will slightly increase the inference speed use_fp16 (bool): Whether to use fp16 to inference. repeat_num (int): Number of repeat times of measurement for averaging the results. Returns: fps (float or list(float)): Inference speed(fps) or list of inference speed(fps) for repeating measurements. """ assert repeat_num >= 1 fps_list = [] for _ in range(repeat_num): # use a fresh copy of the config for each measurement run cp_cfg = copy.deepcopy(cfg) fps_list.append( measure_inference_speed(cp_cfg, checkpoint, max_iter, log_interval, is_fuse_conv_bn, use_fp16)) if repeat_num > 1: fps_list_ = [round(fps, 1) for fps in fps_list] times_per_image_list_ = [round(1000 / fps, 1) for fps in fps_list] mean_fps_ = sum(fps_list_) / len(fps_list_) mean_times_per_image_ = sum(times_per_image_list_) / len( times_per_image_list_) print( f'Overall fps: {fps_list_}[{mean_fps_:.1f}] img / s, ' f'times per image: ' f'{times_per_image_list_}[{mean_times_per_image_:.1f}] ms / img', flush=True) return fps_list return fps_list[0]
Repeat to inference several times and take the average. Args: cfg (object): Test config object. checkpoint (str): Checkpoint file path. max_iter (int): Num of max iter. log_interval (int): Interval of logging. is_fuse_conv_bn (bool): Whether to fuse conv and bn, this will slightly increase the inference speed use_fp16 (bool): Whether to use fp16 to inference. repeat_num (int): Number of repeat times of measurement for averaging the results. Returns: fps (float or list(float)): Inference speed(fps) or list of inference speed(fps) for repeating measurements.
7,426
import argparse import subprocess import torch The provided code snippet includes necessary dependencies for implementing the `parse_args` function. Write a Python function `def parse_args()` to solve the following problem: Parse parameters. Here is the function: def parse_args(): """Parse parameters.""" parser = argparse.ArgumentParser( description='Process a checkpoint to be published') parser.add_argument('in_file', help='input checkpoint filename') parser.add_argument('out_file', help='output checkpoint filename') args = parser.parse_args() return args
Parse parameters.
7,427
import argparse import subprocess import torch The provided code snippet includes necessary dependencies for implementing the `process_checkpoint` function. Write a Python function `def process_checkpoint(in_file, out_file)` to solve the following problem: Only inference related parameters are retained. Args: in_file (str): Filename of input checkpoint. out_file (str): Filename of output checkpoint. Here is the function: def process_checkpoint(in_file, out_file): """Only inference related parameters are retained. Args: in_file (str): Filename of input checkpoint. out_file (str): Filename of output checkpoint. """ checkpoint = torch.load(in_file, map_location='cpu') # remove optimizer for smaller file size if 'optimizer' in checkpoint: del checkpoint['optimizer'] # if it is necessary to remove some sensitive data in checkpoint['meta'], # add the code here. if torch.__version__ >= '1.6': torch.save(checkpoint, out_file, _use_new_zipfile_serialization=False) else: torch.save(checkpoint, out_file) sha = subprocess.check_output(['sha256sum', out_file]).decode() if out_file.endswith('.pth'): out_file_name = out_file[:-4] else: out_file_name = out_file final_file = out_file_name + f'-{sha[:8]}.pth' subprocess.Popen(['mv', out_file, final_file])
Only inference related parameters are retained. Args: in_file (str): Filename of input checkpoint. out_file (str): Filename of output checkpoint.
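The pruning step can be sketched with plain dicts (torch checkpoints behave the same way for this purpose); `strip_training_state` is an illustrative name, not part of the tool:

```python
def strip_training_state(checkpoint):
    # Drop the optimizer state for a smaller published file, as
    # process_checkpoint does; the input dict is left untouched.
    published = dict(checkpoint)
    published.pop('optimizer', None)
    return published

ckpt = {'state_dict': {'w': 0.1}, 'optimizer': {'lr': 0.01}, 'meta': {}}
published = strip_training_state(ckpt)
```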
7,428
import argparse import codecs import datetime import itertools import json import logging import os import os.path as osp import time from functools import partial, reduce from math import ceil from multiprocessing import Manager, Pool import cv2 import numpy as np from PIL import Image def add_parser(parser): """Add arguments.""" parser.add_argument( '--base-json', type=str, default=None, help='json config file for split images') parser.add_argument( '--nproc', type=int, default=10, help='the procession number') # argument for loading data parser.add_argument( '--img-dirs', nargs='+', type=str, default=None, help='images dirs, must give a value') parser.add_argument( '--ann-dirs', nargs='+', type=str, default=None, help='annotations dirs, optional') # argument for splitting image parser.add_argument( '--sizes', nargs='+', type=int, default=[1024], help='the sizes of sliding windows') parser.add_argument( '--gaps', nargs='+', type=int, default=[512], help='the steps of sliding windows') parser.add_argument( '--rates', nargs='+', type=float, default=[1.], help='same as DOTA devkit rate, but only change windows size') parser.add_argument( '--img-rate-thr', type=float, default=0.6, help='the minimal rate of image in window and window') parser.add_argument( '--iof-thr', type=float, default=0.7, help='the minimal iof between an object and a window') parser.add_argument( '--no-padding', action='store_true', help='not padding patches in regular size') parser.add_argument( '--padding-value', nargs='+', type=int, default=[0], help='padding value, 1 or channel number') # argument for saving parser.add_argument( '--save-dir', type=str, default='.', help='to save pkl and split images') parser.add_argument( '--save-ext', type=str, default='.png', help='the extension of saving images') The provided code snippet includes necessary dependencies for implementing the `parse_args` function. Write a Python function `def parse_args()` to solve the following problem: Parse arguments.
Here is the function: def parse_args(): """Parse arguments.""" parser = argparse.ArgumentParser(description='Splitting images') add_parser(parser) args = parser.parse_args() if args.base_json is not None: with open(args.base_json, 'r') as f: prior_config = json.load(f) for action in parser._actions: if action.dest not in prior_config or \ not hasattr(action, 'default'): continue action.default = prior_config[action.dest] args = parser.parse_args() # assert arguments assert args.img_dirs is not None, "argument img_dirs can't be None" assert args.ann_dirs is None or len(args.ann_dirs) == len(args.img_dirs) assert len(args.sizes) == len(args.gaps) assert len(args.sizes) == 1 or len(args.rates) == 1 assert args.save_ext in ['.png', '.jpg', '.bmp', '.tif'] assert args.iof_thr >= 0 and args.iof_thr < 1 assert not osp.exists(args.save_dir), \ f'{osp.join(args.save_dir)} already exists' return args
Parse arguments.
7,429
import argparse import codecs import datetime import itertools import json import logging import os import os.path as osp import time from functools import partial, reduce from math import ceil from multiprocessing import Manager, Pool import cv2 import numpy as np from PIL import Image def get_sliding_window(info, sizes, gaps, img_rate_thr): """Get sliding windows. Args: info (dict): Dict of image's width and height. sizes (list): List of window's sizes. gaps (list): List of window's gaps. img_rate_thr (float): Threshold of window area divided by image area. Returns: list[np.array]: Information of valid windows. """ eps = 0.01 windows = [] width, height = info['width'], info['height'] for size, gap in zip(sizes, gaps): assert size > gap, f'invalid size gap pair [{size} {gap}]' step = size - gap x_num = 1 if width <= size else ceil((width - size) / step + 1) x_start = [step * i for i in range(x_num)] if len(x_start) > 1 and x_start[-1] + size > width: x_start[-1] = width - size y_num = 1 if height <= size else ceil((height - size) / step + 1) y_start = [step * i for i in range(y_num)] if len(y_start) > 1 and y_start[-1] + size > height: y_start[-1] = height - size start = np.array( list(itertools.product(x_start, y_start)), dtype=np.int64) stop = start + size windows.append(np.concatenate([start, stop], axis=1)) windows = np.concatenate(windows, axis=0) img_in_wins = windows.copy() img_in_wins[:, 0::2] = np.clip(img_in_wins[:, 0::2], 0, width) img_in_wins[:, 1::2] = np.clip(img_in_wins[:, 1::2], 0, height) img_areas = (img_in_wins[:, 2] - img_in_wins[:, 0]) * \ (img_in_wins[:, 3] - img_in_wins[:, 1]) win_areas = (windows[:, 2] - windows[:, 0]) * \ (windows[:, 3] - windows[:, 1]) img_rates = img_areas / win_areas if not (img_rates > img_rate_thr).any(): max_rate = img_rates.max() img_rates[abs(img_rates - max_rate) < eps] = 1 return windows[img_rates > img_rate_thr] def get_window_obj(info, windows, iof_thr): """ Args: info (dict): Dict of bbox annotations.
windows (np.array): information of sliding windows. iof_thr (float): Threshold of overlaps between bbox and window. Returns: list[dict]: List of bbox annotations of every window. """ bboxes = info['ann']['bboxes'] iofs = bbox_overlaps_iof(bboxes, windows) window_anns = [] for i in range(windows.shape[0]): win_iofs = iofs[:, i] pos_inds = np.nonzero(win_iofs >= iof_thr)[0].tolist() win_ann = dict() for k, v in info['ann'].items(): try: win_ann[k] = v[pos_inds] except TypeError: win_ann[k] = [v[i] for i in pos_inds] win_ann['trunc'] = win_iofs[pos_inds] < 1 window_anns.append(win_ann) return window_anns def crop_and_save_img(info, windows, window_anns, img_dir, no_padding, padding_value, save_dir, anno_dir, img_ext): """ Args: info (dict): Image's information. windows (np.array): information of sliding windows. window_anns (list[dict]): List of bbox annotations of every window. img_dir (str): Path of images. no_padding (bool): If True, no padding. padding_value (tuple[int|float]): Padding value. save_dir (str): Save filename. anno_dir (str): Annotation filename. img_ext (str): Picture suffix. Returns: list[dict]: Information of paths. 
""" img = cv2.imread(osp.join(img_dir, info['filename'])) patch_infos = [] for i in range(windows.shape[0]): patch_info = dict() for k, v in info.items(): if k not in ['id', 'fileanme', 'width', 'height', 'ann']: patch_info[k] = v window = windows[i] x_start, y_start, x_stop, y_stop = window.tolist() patch_info['x_start'] = x_start patch_info['y_start'] = y_start patch_info['id'] = \ info['id'] + '__' + str(x_stop - x_start) + \ '__' + str(x_start) + '___' + str(y_start) patch_info['ori_id'] = info['id'] ann = window_anns[i] ann['bboxes'] = translate(ann['bboxes'], -x_start, -y_start) patch_info['ann'] = ann patch = img[y_start:y_stop, x_start:x_stop] if not no_padding: height = y_stop - y_start width = x_stop - x_start if height > patch.shape[0] or width > patch.shape[1]: padding_patch = np.empty((height, width, patch.shape[-1]), dtype=np.uint8) if not isinstance(padding_value, (int, float)): assert len(padding_value) == patch.shape[-1] padding_patch[...] = padding_value padding_patch[:patch.shape[0], :patch.shape[1], ...] = patch patch = padding_patch patch_info['height'] = patch.shape[0] patch_info['width'] = patch.shape[1] cv2.imwrite(osp.join(save_dir, patch_info['id'] + img_ext), patch) patch_info['filename'] = patch_info['id'] + img_ext patch_infos.append(patch_info) bboxes_num = patch_info['ann']['bboxes'].shape[0] outdir = os.path.join(anno_dir, patch_info['id'] + '.txt') with codecs.open(outdir, 'w', 'utf-8') as f_out: if bboxes_num == 0: pass else: for idx in range(bboxes_num): obj = patch_info['ann'] outline = ' '.join(list(map(str, obj['bboxes'][idx]))) diffs = str( obj['diffs'][idx]) if not obj['trunc'][idx] else '2' outline = outline + ' ' + obj['labels'][idx] + ' ' + diffs f_out.write(outline + '\n') return patch_infos The provided code snippet includes necessary dependencies for implementing the `single_split` function. 
Write a Python function `def single_split(arguments, sizes, gaps, img_rate_thr, iof_thr, no_padding, padding_value, save_dir, anno_dir, img_ext, lock, prog, total, logger)` to solve the following problem: Args: arguments (object): Parameters. sizes (list): List of window's sizes. gaps (list): List of window's gaps. img_rate_thr (float): Threshold of window area divided by image area. iof_thr (float): Threshold of overlaps between bbox and window. no_padding (bool): If True, no padding. padding_value (tuple[int|float]): Padding value. save_dir (str): Save filename. anno_dir (str): Annotation filename. img_ext (str): Picture suffix. lock (object): Lock of Manager. prog (object): Progress of Manager. total (object): Length of infos. logger (object): Logger. Returns: list[dict]: Information of paths. Here is the function: def single_split(arguments, sizes, gaps, img_rate_thr, iof_thr, no_padding, padding_value, save_dir, anno_dir, img_ext, lock, prog, total, logger): """ Args: arguments (object): Parameters. sizes (list): List of window's sizes. gaps (list): List of window's gaps. img_rate_thr (float): Threshold of window area divided by image area. iof_thr (float): Threshold of overlaps between bbox and window. no_padding (bool): If True, no padding. padding_value (tuple[int|float]): Padding value. save_dir (str): Save filename. anno_dir (str): Annotation filename. img_ext (str): Picture suffix. lock (object): Lock of Manager. prog (object): Progress of Manager. total (object): Length of infos. logger (object): Logger. Returns: list[dict]: Information of paths. 
""" info, img_dir = arguments windows = get_sliding_window(info, sizes, gaps, img_rate_thr) window_anns = get_window_obj(info, windows, iof_thr) patch_infos = crop_and_save_img(info, windows, window_anns, img_dir, no_padding, padding_value, save_dir, anno_dir, img_ext) assert patch_infos lock.acquire() prog.value += 1 msg = f'({prog.value / total:3.1%} {prog.value}:{total})' msg += ' - ' + f"Filename: {info['filename']}" msg += ' - ' + f"width: {info['width']:<5d}" msg += ' - ' + f"height: {info['height']:<5d}" msg += ' - ' + f"Objects: {len(info['ann']['bboxes']):<5d}" msg += ' - ' + f'Patches: {len(patch_infos)}' logger.info(msg) lock.release() return patch_infos
Args: arguments (object): Parameters. sizes (list): List of window's sizes. gaps (list): List of window's gaps. img_rate_thr (float): Threshold of window area divided by image area. iof_thr (float): Threshold of overlaps between bbox and window. no_padding (bool): If True, no padding. padding_value (tuple[int|float]): Padding value. save_dir (str): Save filename. anno_dir (str): Annotation filename. img_ext (str): Picture suffix. lock (object): Lock of Manager. prog (object): Progress of Manager. total (object): Length of infos. logger (object): Logger. Returns: list[dict]: Information of paths.
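The one-dimensional window placement used by `get_sliding_window` (and hence by `single_split`) can be sketched on its own: windows advance by a fixed stride of `size - gap`, and the last window is clamped so it stays inside the image. `window_starts` is an illustrative name:

```python
from math import ceil

def window_starts(length, size, gap):
    # Start offsets along one axis, matching the x_start/y_start
    # computation in get_sliding_window.
    step = size - gap
    num = 1 if length <= size else ceil((length - size) / step + 1)
    starts = [step * i for i in range(num)]
    if len(starts) > 1 and starts[-1] + size > length:
        starts[-1] = length - size  # clamp the last window
    return starts
```

For a 2048-wide image with 1024-pixel windows and a 512-pixel gap this yields starts at 0, 512, and 1024, i.e. 50% overlap between neighbouring patches.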
7,430
import argparse import codecs import datetime import itertools import json import logging import os import os.path as osp import time from functools import partial, reduce from math import ceil from multiprocessing import Manager, Pool import cv2 import numpy as np from PIL import Image The provided code snippet includes necessary dependencies for implementing the `setup_logger` function. Write a Python function `def setup_logger(log_path)` to solve the following problem: Setup logger. Args: log_path (str): Path of log. Returns: object: Logger. Here is the function: def setup_logger(log_path): """Setup logger. Args: log_path (str): Path of log. Returns: object: Logger. """ logger = logging.getLogger('img split') formatter = logging.Formatter('%(asctime)s - %(message)s') now = datetime.datetime.now().strftime('%Y%m%d_%H%M%S') log_path = osp.join(log_path, now + '.log') handlers = [logging.StreamHandler(), logging.FileHandler(log_path, 'w')] for handler in handlers: handler.setFormatter(formatter) handler.setLevel(logging.INFO) logger.addHandler(handler) logger.setLevel(logging.INFO) return logger
Setup logger. Args: log_path (str): Path of log. Returns: object: Logger.
7,431
import argparse import codecs import datetime import itertools import json import logging import os import os.path as osp import time from functools import partial, reduce from math import ceil from multiprocessing import Manager, Pool import cv2 import numpy as np from PIL import Image def _load_dota_single(imgfile, img_dir, ann_dir): """Load DOTA's single image. Args: imgfile (str): Filename of single image. img_dir (str): Path of images. ann_dir (str): Path of annotations. Returns: dict: Content of single image. """ img_id, ext = osp.splitext(imgfile) if ext not in ['.jpg', '.JPG', '.png', '.tif', '.bmp']: return None imgpath = osp.join(img_dir, imgfile) size = Image.open(imgpath).size txtfile = None if ann_dir is None else osp.join(ann_dir, img_id + '.txt') content = _load_dota_txt(txtfile) content.update( dict(width=size[0], height=size[1], filename=imgfile, id=img_id)) return content The provided code snippet includes necessary dependencies for implementing the `load_dota` function. Write a Python function `def load_dota(img_dir, ann_dir=None, nproc=10)` to solve the following problem: Load DOTA dataset. Args: img_dir (str): Path of images. ann_dir (str): Path of annotations. nproc (int): number of processes. Returns: list: Dataset's contents. Here is the function: def load_dota(img_dir, ann_dir=None, nproc=10): """Load DOTA dataset. Args: img_dir (str): Path of images. ann_dir (str): Path of annotations. nproc (int): number of processes. Returns: list: Dataset's contents. """ assert osp.isdir(img_dir), f'The {img_dir} is not an existing dir!' assert ann_dir is None or osp.isdir( ann_dir), f'The {ann_dir} is not an existing dir!' 
print('Starting loading DOTA dataset information.') start_time = time.time() _load_func = partial(_load_dota_single, img_dir=img_dir, ann_dir=ann_dir) if nproc > 1: pool = Pool(nproc) contents = pool.map(_load_func, os.listdir(img_dir)) pool.close() else: contents = list(map(_load_func, os.listdir(img_dir))) contents = [c for c in contents if c is not None] end_time = time.time() print(f'Finishing loading DOTA, get {len(contents)} images,', f'using {end_time - start_time:.3f}s.') return contents
Load DOTA dataset. Args: img_dir (str): Path of images. ann_dir (str): Path of annotations. nproc (int): number of processes. Returns: list: Dataset's contents.
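The loading pattern here — bind fixed arguments with `partial`, map the loader over filenames (serially or through a `Pool`), then drop skipped entries — can be sketched with a toy loader. `_load_single` and `load_all` are illustrative names standing in for `_load_dota_single` and `load_dota`:

```python
from functools import partial
from multiprocessing import Pool

def _load_single(name, suffix):
    # Stand-in for _load_dota_single: skip files with the wrong
    # extension by returning None, else return a small record.
    if not name.endswith(suffix):
        return None
    return {'id': name[:-len(suffix)]}

def load_all(names, suffix, nproc=1):
    # Same shape as load_dota: map, then filter out None entries.
    func = partial(_load_single, suffix=suffix)
    if nproc > 1:
        with Pool(nproc) as pool:
            contents = pool.map(func, names)
    else:
        contents = list(map(func, names))
    return [c for c in contents if c is not None]

records = load_all(['a.png', 'notes.txt', 'b.png'], '.png')
```

`partial` is what lets a multi-argument loader be used with `Pool.map`, which only passes one positional argument per item.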
7,432
import argparse import os from collections.abc import Sequence from pathlib import Path import mmcv from mmcv import Config, DictAction from mmdet.datasets.builder import build_dataset from mmrotate.core.visualization import imshow_det_rbboxes def parse_args(): parser = argparse.ArgumentParser(description='Browse a dataset') parser.add_argument('config', help='train config file path') parser.add_argument( '--skip-type', type=str, nargs='+', default=['DefaultFormatBundle', 'Normalize', 'Collect'], help='skip some useless pipeline') parser.add_argument( '--output-dir', default=None, type=str, help='If there is no display interface, you can save it') parser.add_argument('--not-show', default=False, action='store_true') parser.add_argument( '--show-interval', type=float, default=2, help='the interval of show (s)') parser.add_argument( '--cfg-options', nargs='+', action=DictAction, help='override some settings in the used config, the key-value pair ' 'in xxx=yyy format will be merged into config file. If the value to ' 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' 'Note that the quotation marks are necessary and that no white space ' 'is allowed.') args = parser.parse_args() return args
null
7,433
import argparse import os from collections.abc import Sequence from pathlib import Path import mmcv from mmcv import Config, DictAction from mmdet.datasets.builder import build_dataset from mmrotate.core.visualization import imshow_det_rbboxes The provided code snippet includes necessary dependencies for implementing the `retrieve_data_cfg` function. Write a Python function `def retrieve_data_cfg(config_path, skip_type, cfg_options)` to solve the following problem: Retrieve the dataset config file. Args: config_path (str): Path of the config file. skip_type (list[str]): List of the useless pipeline to skip. cfg_options (dict): dict of configs to merge from. Here is the function: def retrieve_data_cfg(config_path, skip_type, cfg_options): """Retrieve the dataset config file. Args: config_path (str): Path of the config file. skip_type (list[str]): List of the useless pipeline to skip. cfg_options (dict): dict of configs to merge from. """ def skip_pipeline_steps(config): config['pipeline'] = [ x for x in config.pipeline if x['type'] not in skip_type ] cfg = Config.fromfile(config_path) if cfg_options is not None: cfg.merge_from_dict(cfg_options) train_data_cfg = cfg.data.train while 'dataset' in train_data_cfg and train_data_cfg[ 'type'] != 'MultiImageMixDataset': train_data_cfg = train_data_cfg['dataset'] if isinstance(train_data_cfg, Sequence): [skip_pipeline_steps(c) for c in train_data_cfg] else: skip_pipeline_steps(train_data_cfg) return cfg
Retrieve the dataset config file. Args: config_path (str): Path of the config file. skip_type (list[str]): List of the useless pipeline to skip. cfg_options (dict): dict of configs to merge from.
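`retrieve_data_cfg` descends through nested `dataset` keys and strips pipeline entries whose `type` is listed in `skip_type`. The filtering step can be sketched with plain dicts (the real code operates on an mmcv `Config`, which also supports attribute access):

```python
def skip_pipeline_steps(config, skip_type):
    # Keep only the pipeline entries whose 'type' is not in skip_type
    config['pipeline'] = [
        x for x in config['pipeline'] if x['type'] not in skip_type
    ]
    return config

cfg = {'pipeline': [{'type': 'LoadImageFromFile'},
                    {'type': 'Normalize'},
                    {'type': 'Collect'}]}
skip_pipeline_steps(cfg, ['Normalize', 'Collect'])
```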
7,434
import argparse import warnings from mmcv import Config, DictAction The provided code snippet includes necessary dependencies for implementing the `parse_args` function. Write a Python function `def parse_args()` to solve the following problem: Parse arguments. Here is the function: def parse_args(): """Parse arguments.""" parser = argparse.ArgumentParser(description='Print the whole config') parser.add_argument('config', help='config file path') parser.add_argument( '--options', nargs='+', action=DictAction, help='override some settings in the used config, the key-value pair ' 'in xxx=yyy format will be merged into config file (deprecate), ' 'change to --cfg-options instead.') parser.add_argument( '--cfg-options', nargs='+', action=DictAction, help='override some settings in the used config, the key-value pair ' 'in xxx=yyy format will be merged into config file. If the value to ' 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" ' 'Note that the quotation marks are necessary and that no white space ' 'is allowed.') args = parser.parse_args() if args.options and args.cfg_options: raise ValueError( '--options and --cfg-options cannot be both ' 'specified, --options is deprecated in favor of --cfg-options') if args.options: warnings.warn('--options is deprecated in favor of --cfg-options') args.cfg_options = args.options return args
Parse arguments.
7,435
from argparse import ArgumentParser, Namespace from pathlib import Path from tempfile import TemporaryDirectory import mmcv from model_archiver.model_packaging import package_model from model_archiver.model_packaging_utils import ModelExportUtils The provided code snippet includes necessary dependencies for implementing the `mmrotate2torchserve` function. Write a Python function `def mmrotate2torchserve( config_file: str, checkpoint_file: str, output_folder: str, model_name: str, model_version: str = '1.0', force: bool = False, )` to solve the following problem: Converts MMRotate model (config + checkpoint) to TorchServe `.mar`. Args: config_file: In MMRotate config format. The contents vary for each task repository. checkpoint_file: In MMRotate checkpoint format. The contents vary for each task repository. output_folder: Folder where `{model_name}.mar` will be created. The file created will be in TorchServe archive format. model_name: If not None, used for naming the `{model_name}.mar` file that will be created under `output_folder`. If None, `{Path(checkpoint_file).stem}` will be used. model_version: Model's version. force: If True, if there is an existing `{model_name}.mar` file under `output_folder` it will be overwritten. Here is the function: def mmrotate2torchserve( config_file: str, checkpoint_file: str, output_folder: str, model_name: str, model_version: str = '1.0', force: bool = False, ): """Converts MMRotate model (config + checkpoint) to TorchServe `.mar`. Args: config_file: In MMRotate config format. The contents vary for each task repository. checkpoint_file: In MMRotate checkpoint format. The contents vary for each task repository. output_folder: Folder where `{model_name}.mar` will be created. The file created will be in TorchServe archive format. model_name: If not None, used for naming the `{model_name}.mar` file that will be created under `output_folder`. If None, `{Path(checkpoint_file).stem}` will be used. model_version: Model's version. force: If True, if there is an existing `{model_name}.mar` file under `output_folder` it will be overwritten.
""" mmcv.mkdir_or_exist(output_folder) config = mmcv.Config.fromfile(config_file) with TemporaryDirectory() as tmpdir: config.dump(f'{tmpdir}/config.py') args = Namespace( **{ 'model_file': f'{tmpdir}/config.py', 'serialized_file': checkpoint_file, 'handler': f'{Path(__file__).parent}/mmrotate_handler.py', 'model_name': model_name or Path(checkpoint_file).stem, 'version': model_version, 'export_path': output_folder, 'force': force, 'requirements_file': None, 'extra_files': None, 'runtime': 'python', 'archive_format': 'default' }) manifest = ModelExportUtils.generate_manifest_json(args) package_model(args, manifest)
Converts MMRotate model (config + checkpoint) to TorchServe `.mar`. Args: config_file: In MMRotate config format. The contents vary for each task repository. checkpoint_file: In MMRotate checkpoint format. The contents vary for each task repository. output_folder: Folder where `{model_name}.mar` will be created. The file created will be in TorchServe archive format. model_name: If not None, used for naming the `{model_name}.mar` file that will be created under `output_folder`. If None, `{Path(checkpoint_file).stem}` will be used. model_version: Model's version. force: If True, if there is an existing `{model_name}.mar` file under `output_folder` it will be overwritten.
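When `model_name` is None, the `.mar` archive is named after the checkpoint's stem via `Path(checkpoint_file).stem`. That default can be checked with `pathlib` alone (the checkpoint path below is made up for illustration):

```python
from pathlib import Path

def default_model_name(model_name, checkpoint_file):
    # Mirrors `'model_name': model_name or Path(checkpoint_file).stem`
    # from the converter above.
    return model_name or Path(checkpoint_file).stem
```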
7,436
from argparse import ArgumentParser, Namespace from pathlib import Path from tempfile import TemporaryDirectory import mmcv def parse_args(): parser = ArgumentParser( description='Convert MMRotate models to TorchServe `.mar` format.') parser.add_argument('config', type=str, help='config file path') parser.add_argument('checkpoint', type=str, help='checkpoint file path') parser.add_argument( '--output-folder', type=str, required=True, help='Folder where `{model_name}.mar` will be created.') parser.add_argument( '--model-name', type=str, default=None, help='If not None, used for naming the `{model_name}.mar`' 'file that will be created under `output_folder`.' 'If None, `{Path(checkpoint_file).stem}` will be used.') parser.add_argument( '--model-version', type=str, default='1.0', help='Number used for versioning.') parser.add_argument( '-f', '--force', action='store_true', help='overwrite the existing `{model_name}.mar`') args = parser.parse_args() return args
null
7,437
import os import gc import time import base64 from contextlib import asynccontextmanager from typing import List, Literal, Union, Tuple, Optional import torch import uvicorn from fastapi import FastAPI, HTTPException from fastapi.middleware.cors import CORSMiddleware from loguru import logger from pydantic import BaseModel, Field from sse_starlette.sse import EventSourceResponse from transformers import AutoModelForCausalLM, LlamaTokenizer, PreTrainedModel, PreTrainedTokenizer, \ TextIteratorStreamer from PIL import Image from io import BytesIO torch.cuda.empty_cache() The provided code snippet includes necessary dependencies for implementing the `lifespan` function. Write a Python function `async def lifespan(app: FastAPI)` to solve the following problem: An asynchronous context manager for managing the lifecycle of the FastAPI app. It ensures that GPU memory is cleared after the app's lifecycle ends, which is essential for efficient resource management in GPU environments. Here is the function: async def lifespan(app: FastAPI): """ An asynchronous context manager for managing the lifecycle of the FastAPI app. It ensures that GPU memory is cleared after the app's lifecycle ends, which is essential for efficient resource management in GPU environments. """ yield if torch.cuda.is_available(): torch.cuda.empty_cache() torch.cuda.ipc_collect()
An asynchronous context manager for managing the lifecycle of the FastAPI app. It ensures that GPU memory is cleared after the app's lifecycle ends, which is essential for efficient resource management in GPU environments.
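The lifespan hook follows the usual `asynccontextmanager` shape: code before the `yield` runs at startup, code after it runs at shutdown. A minimal sketch with the GPU cache clearing replaced by a recorded flag:

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def lifespan(app):
    events.append('startup')
    yield                      # the app serves requests while suspended here
    events.append('cleanup')   # stands in for torch.cuda.empty_cache()

async def run_app():
    async with lifespan(app=None):
        events.append('serving')

asyncio.run(run_app())
```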
7,438
import os import gc import time import base64 from contextlib import asynccontextmanager from typing import List, Literal, Union, Tuple, Optional import torch import uvicorn from fastapi import FastAPI, HTTPException from fastapi.middleware.cors import CORSMiddleware from loguru import logger from pydantic import BaseModel, Field from sse_starlette.sse import EventSourceResponse from transformers import AutoModelForCausalLM, LlamaTokenizer, PreTrainedModel, PreTrainedTokenizer, \ TextIteratorStreamer from PIL import Image from io import BytesIO class ModelCard(BaseModel): """ A Pydantic model representing a model card, which provides metadata about a machine learning model. It includes fields like model ID, owner, and creation time. """ id: str object: str = "model" created: int = Field(default_factory=lambda: int(time.time())) owned_by: str = "owner" root: Optional[str] = None parent: Optional[str] = None permission: Optional[list] = None class ModelList(BaseModel): object: str = "list" data: List[ModelCard] = [] The provided code snippet includes necessary dependencies for implementing the `list_models` function. Write a Python function `async def list_models()` to solve the following problem: An endpoint to list available models. It returns a list of model cards. This is useful for clients to query and understand what models are available for use. Here is the function: async def list_models(): """ An endpoint to list available models. It returns a list of model cards. This is useful for clients to query and understand what models are available for use. """ model_card = ModelCard(id="cogvlm-chat-17b") # can be replaced by your model id like cogagent-chat-18b return ModelList(data=[model_card])
An endpoint to list available models. It returns a list of model cards. This is useful for clients to query and understand what models are available for use.
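`ModelCard` fills `created` with `default_factory=lambda: int(time.time())` so each instance gets its own timestamp. The same per-instance default works with a stdlib dataclass when pydantic is unavailable; this trimmed-down card is only a sketch:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelCard:
    id: str
    object: str = 'model'
    # default_factory runs at instantiation time, not at class definition time
    created: int = field(default_factory=lambda: int(time.time()))
    owned_by: str = 'owner'
    root: Optional[str] = None

card = ModelCard(id='cogvlm-chat-17b')
```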
7,439
import os import gc import time import base64 from contextlib import asynccontextmanager from typing import List, Literal, Union, Tuple, Optional import torch import uvicorn from fastapi import FastAPI, HTTPException from fastapi.middleware.cors import CORSMiddleware from loguru import logger from pydantic import BaseModel, Field from sse_starlette.sse import EventSourceResponse from transformers import AutoModelForCausalLM, LlamaTokenizer, PreTrainedModel, PreTrainedTokenizer, \ TextIteratorStreamer from PIL import Image from io import BytesIO class ChatMessageResponse(BaseModel): role: Literal["assistant"] content: str = None name: Optional[str] = None class ChatCompletionRequest(BaseModel): model: str messages: List[ChatMessageInput] temperature: Optional[float] = 0.8 top_p: Optional[float] = 0.8 max_tokens: Optional[int] = None stream: Optional[bool] = False # Additional parameters repetition_penalty: Optional[float] = 1.0 class ChatCompletionResponseChoice(BaseModel): index: int message: ChatMessageResponse class UsageInfo(BaseModel): prompt_tokens: int = 0 total_tokens: int = 0 completion_tokens: Optional[int] = 0 class ChatCompletionResponse(BaseModel): model: str object: Literal["chat.completion", "chat.completion.chunk"] choices: List[Union[ChatCompletionResponseChoice, ChatCompletionResponseStreamChoice]] created: Optional[int] = Field(default_factory=lambda: int(time.time())) usage: Optional[UsageInfo] = None async def predict(model_id: str, params: dict): """ Handle streaming predictions. It continuously generates responses for a given input stream. This is particularly useful for real-time, continuous interactions with the model. 
""" global model, tokenizer choice_data = ChatCompletionResponseStreamChoice( index=0, delta=DeltaMessage(role="assistant"), finish_reason=None ) chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object="chat.completion.chunk") yield "{}".format(chunk.model_dump_json(exclude_unset=True)) previous_text = "" for new_response in generate_stream_cogvlm(model, tokenizer, params): decoded_unicode = new_response["text"] delta_text = decoded_unicode[len(previous_text):] previous_text = decoded_unicode delta = DeltaMessage( content=delta_text, role="assistant", ) choice_data = ChatCompletionResponseStreamChoice( index=0, delta=delta, ) chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object="chat.completion.chunk") yield "{}".format(chunk.model_dump_json(exclude_unset=True)) choice_data = ChatCompletionResponseStreamChoice( index=0, delta=DeltaMessage(), ) chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object="chat.completion.chunk") yield "{}".format(chunk.model_dump_json(exclude_unset=True)) def generate_cogvlm(model: PreTrainedModel, tokenizer: PreTrainedTokenizer, params: dict): """ Generates a response using the CogVLM model. It processes the chat history and image data, if any, and then invokes the model to generate a response. 
""" for response in generate_stream_cogvlm(model, tokenizer, params): pass return response async def create_chat_completion(request: ChatCompletionRequest): global model, tokenizer if len(request.messages) < 1 or request.messages[-1].role == "assistant": raise HTTPException(status_code=400, detail="Invalid request") gen_params = dict( messages=request.messages, temperature=request.temperature, top_p=request.top_p, max_tokens=request.max_tokens or 1024, echo=False, stream=request.stream, ) if request.stream: generate = predict(request.model, gen_params) return EventSourceResponse(generate, media_type="text/event-stream") response = generate_cogvlm(model, tokenizer, gen_params) usage = UsageInfo() message = ChatMessageResponse( role="assistant", content=response["text"], ) logger.debug(f"==== message ====\n{message}") choice_data = ChatCompletionResponseChoice( index=0, message=message, ) task_usage = UsageInfo.model_validate(response["usage"]) for usage_key, usage_value in task_usage.model_dump().items(): setattr(usage, usage_key, getattr(usage, usage_key) + usage_value) return ChatCompletionResponse(model=request.model, choices=[choice_data], object="chat.completion", usage=usage)
null
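`predict` streams by decoding the full text generated so far and emitting only the new suffix, `decoded_unicode[len(previous_text):]`. That delta bookkeeping can be exercised in isolation with a hypothetical helper:

```python
def to_deltas(snapshots):
    # Each snapshot is the full decoded text so far; emit only the new
    # suffix, exactly as `predict` does with previous_text.
    previous = ''
    deltas = []
    for text in snapshots:
        deltas.append(text[len(previous):])
        previous = text
    return deltas

chunks = to_deltas(['The', 'The image', 'The image shows'])
```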
7,440
import requests import json import base64 def create_chat_completion(model, messages, temperature=0.8, max_tokens=2048, top_p=0.8, use_stream=False): """ This function sends a request to the chat API to generate a response based on the given messages. Args: model (str): The name of the model to use for generating the response. messages (list): A list of message dictionaries representing the conversation history. temperature (float): Controls randomness in response generation. Higher values lead to more random responses. max_tokens (int): The maximum length of the generated response. top_p (float): Controls diversity of response by filtering less likely options. use_stream (bool): Determines whether to use a streaming response or a single response. The function constructs a JSON payload with the specified parameters and sends a POST request to the API. It then handles the response, either as a stream (for ongoing responses) or a single message. """ data = { "model": model, "messages": messages, "stream": use_stream, "max_tokens": max_tokens, "temperature": temperature, "top_p": top_p, } response = requests.post(f"{base_url}/v1/chat/completions", json=data, stream=use_stream) if response.status_code == 200: if use_stream: # handle the streaming response for line in response.iter_lines(): if line: decoded_line = line.decode('utf-8')[6:] try: response_json = json.loads(decoded_line) content = response_json.get("choices", [{}])[0].get("delta", {}).get("content", "") print(content) except: print("Special Token:", decoded_line) else: # handle the non-streaming response decoded_line = response.json() content = decoded_line.get("choices", [{}])[0].get("message", {}).get("content", "") print(content) else: print("Error:", response.status_code) return None def encode_image(image_path): """ Encodes an image file into a base64 string. Args: image_path (str): The path to the image file. This function opens the specified image file, reads its content, and encodes it into a base64 string.
The base64 encoding is used to send images over HTTP as text. """ with open(image_path, "rb") as image_file: return base64.b64encode(image_file.read()).decode("utf-8") The provided code snippet includes necessary dependencies for implementing the `simple_image_chat` function. Write a Python function `def simple_image_chat(use_stream=True, img_path=None)` to solve the following problem: Facilitates a simple chat interaction involving an image. Args: use_stream (bool): Specifies whether to use streaming for chat responses. img_path (str): Path to the image file to be included in the chat. This function encodes the specified image and constructs a predefined conversation involving the image. It then calls `create_chat_completion` to generate a response from the model. The conversation includes asking about the content of the image and a follow-up question. Here is the function: def simple_image_chat(use_stream=True, img_path=None): """ Facilitates a simple chat interaction involving an image. Args: use_stream (bool): Specifies whether to use streaming for chat responses. img_path (str): Path to the image file to be included in the chat. This function encodes the specified image and constructs a predefined conversation involving the image. It then calls `create_chat_completion` to generate a response from the model. The conversation includes asking about the content of the image and a follow-up question. """ img_url = f"data:image/jpeg;base64,{encode_image(img_path)}" messages = [ { "role": "user", "content": [ { "type": "text", "text": "What’s in this image?", }, { "type": "image_url", "image_url": { "url": img_url }, }, ], }, { "role": "assistant", "content": "The image displays a wooden boardwalk extending through a vibrant green grassy wetland. The sky is partly cloudy with soft, wispy clouds, indicating nice weather. 
Vegetation is seen on either side of the boardwalk, and trees are present in the background, suggesting that this area might be a natural reserve or park designed for ecological preservation and outdoor recreation. The boardwalk allows visitors to explore the area without disturbing the natural habitat.", }, { "role": "user", "content": "Do you think this is a spring or winter photo?" }, ] create_chat_completion("cogvlm-chat-17b", messages=messages, use_stream=use_stream)
Facilitates a simple chat interaction involving an image. Args: use_stream (bool): Specifies whether to use streaming for chat responses. img_path (str): Path to the image file to be included in the chat. This function encodes the specified image and constructs a predefined conversation involving the image. It then calls `create_chat_completion` to generate a response from the model. The conversation includes asking about the content of the image and a follow-up question.
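The client embeds the image as a `data:image/jpeg;base64,...` URL built from the raw file bytes. A small helper (hypothetical name) shows the round trip without touching disk:

```python
import base64

def to_data_url(raw: bytes, mime: str = 'image/jpeg') -> str:
    # Same shape as f"data:image/jpeg;base64,{encode_image(img_path)}" above
    payload = base64.b64encode(raw).decode('utf-8')
    return f'data:{mime};base64,{payload}'

url = to_data_url(b'\xff\xd8\xff')  # JPEG magic bytes as a stand-in payload
```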
7,441
import argparse import gradio as gr import os, sys from PIL import Image import torch import time from sat.model.mixins import CachedAutoregressiveMixin from sat.mpu import get_model_parallel_world_size from sat.model import AutoModel from utils.utils import chat, llama2_tokenizer, llama2_text_processor_inference, get_image_processor, parse_response from utils.models import CogAgentModel, CogVLMModel model = image_processor = text_processor_infer = None from sat.quantization.kernels import quantize def load_model(args): model, model_args = AutoModel.from_pretrained( args.from_pretrained, args=argparse.Namespace( deepspeed=None, local_rank=0, rank=0, world_size=world_size, model_parallel_size=world_size, mode='inference', fp16=args.fp16, bf16=args.bf16, skip_init=True, use_gpu_initialization=True if (torch.cuda.is_available() and args.quant is None) else False, device='cpu' if args.quant else 'cuda'), overwrite_args={'model_parallel_size': world_size} if world_size != 1 else {} ) model = model.eval() assert world_size == get_model_parallel_world_size(), "world size must equal model parallel size for cli_demo!" language_processor_version = model_args.text_processor_version if 'text_processor_version' in model_args else args.version tokenizer = llama2_tokenizer(args.local_tokenizer, signal_type=language_processor_version) image_processor = get_image_processor(model_args.eva_args["image_size"][0]) cross_image_processor = get_image_processor(model_args.cross_image_pix) if "cross_image_pix" in model_args else None if args.quant: quantize(model, args.quant) if torch.cuda.is_available(): model = model.cuda() model.add_mixin('auto-regressive', CachedAutoregressiveMixin()) text_processor_infer = llama2_text_processor_inference(tokenizer, args.max_length, model.image_length) return model, image_processor, cross_image_processor, text_processor_infer
null
7,442
import gradio as gr import os, sys from typing import List, Tuple from PIL import Image import torch import time from sat.model.mixins import CachedAutoregressiveMixin from sat.mpu import get_model_parallel_world_size from sat.model import AutoModel from utils.utils import chat, llama2_tokenizer, llama2_text_processor_inference, get_image_processor, parse_response from utils.models import CogAgentModel, CogVLMModel model = image_processor = text_processor_infer = None is_grounding = False def process_image_without_resize(image_prompt): image = Image.open(image_prompt) # print(f"height:{image.height}, width:{image.width}") timestamp = int(time.time()) file_ext = os.path.splitext(image_prompt)[1] filename_grounding = f"examples/{timestamp}_grounding{file_ext}" return image, filename_grounding from sat.quantization.kernels import quantize def chat(image_path, model, text_processor, img_processor, query: str, history: List[Tuple[str, str]] = None, cross_img_processor=None, image: Image = None, max_length: int = 4096, top_p=0.95, top_k=5, temperature=0.95, repetition_penalty=1.0, invalid_slices=[], no_prompt=False, args=None ): if image is None: assert image_path is not None if not history: history = [] if no_prompt: query = '' prompt = text_processor.history_to_prompt(query, history) (torch_image, pil_img, cross_image) = process_image(image_path, img_processor, cross_img_processor, image) if torch_image is not None: for k in torch_image: if type(torch_image[k]) is torch.Tensor and torch_image[k].dtype is not torch.int and torch_image[k].dtype is not torch.long: torch_image[k] = torch_image[k].to(torch.bfloat16 if args.bf16 else torch.float16) if type(torch_image[k]) is torch.Tensor: torch_image[k] = torch_image[k].to(next(model.parameters()).device) if cross_image is not None: for k in cross_image: if type(cross_image[k]) is torch.Tensor and cross_image[k].dtype is not torch.int and cross_image[k].dtype is not torch.long: cross_image[k] = cross_image[k].to(torch.bfloat16 if args.bf16 else
torch.float16) if type(cross_image[k]) is torch.Tensor: cross_image[k] = cross_image[k].to(next(model.parameters()).device) inputs_dic = text_processor(prompt) for k in inputs_dic: if type(inputs_dic[k]) is torch.Tensor and inputs_dic[k].dtype is not torch.int and inputs_dic[k].dtype is not torch.long: inputs_dic[k] = inputs_dic[k].to(torch.bfloat16 if args.bf16 else torch.float16) if type(inputs_dic[k]) is torch.Tensor: inputs_dic[k] = inputs_dic[k].to(next(model.parameters()).device) input_ids = inputs_dic['input_ids'].to(model.parameters().__next__().device)[0] if max_length-len(input_ids) <= 1: response = "The prompt exceeds the context length limit, please try again." return response, history, (torch_image, pil_img) seq = torch.cat( [input_ids, torch.tensor([-1]*(max_length-len(input_ids)), device=input_ids.device)], dim=0 ) strategy = BaseStrategy(temperature=temperature, top_p=top_p, top_k=top_k, end_tokens=[text_processor.tokenizer.eos_token_id], invalid_slices=invalid_slices, repetition_penalty=repetition_penalty) # use beam search to get a better result # strategy = BeamSearchStrategy(temperature=temperature, top_p=top_p, top_k=top_k, end_tokens=[text_processor.tokenizer.eos_token_id], # num_beams=5, consider_end=True, repetition_penalty=repetition_penalty) get_func = text_processor.get_func(input_ids, **inputs_dic) if hasattr(text_processor, 'get_func') else get_masks_and_position_ids_default img_inputs = {'vision_'+k: v for k, v in torch_image.items()} if cross_image is not None: img_inputs = {**img_inputs, **{'cross_'+k:v for k,v in cross_image.items()}} inputs_dic.pop('input_ids') inputs = {**img_inputs, **inputs_dic} if args.stream_chat: filling_stream = stream_filling_sequence( model, seq, batch_size=1, get_masks_and_position_ids=get_func, strategy=strategy, **inputs ) if get_model_parallel_rank() == 0: if 'chinese' in args and not args.chinese: print("Model: ", end='') else: print("模型:", end='') offset = 
len(text_processor.tokenizer.decode(input_ids)) for tokens, mems in filling_stream: torch.cuda.empty_cache() tmp_response = text_processor.tokenizer.decode(tokens[0]) if tmp_response[-1] != "�": if get_model_parallel_rank() == 0: tmp_response_offseted = tmp_response[offset:] if hasattr(text_processor, 'process_response'): tmp_response_offseted = text_processor.process_response(tmp_response_offseted) print(tmp_response_offseted, end='', flush=True) offset = len(tmp_response) if get_model_parallel_rank() == 0: print() output = strategy.finalize(tokens, mems)[0] response = text_processor.tokenizer.decode(output[0]) else: output = filling_sequence( model, seq, batch_size=1, get_masks_and_position_ids=get_func, strategy=strategy, **inputs )[0] # drop memory # --------------- # port from inference_glm.py, more general than chat mode # clip -1s and fill back generated things into seq if type(output) is not list: output_list = output.tolist() else: output_list = output response = text_processor.tokenizer.decode(output_list[0]) # print('original:', response) if hasattr(text_processor, 'process_response'): response = text_processor.process_response(response) response = response.split(text_processor.sep)[-1].strip() if get_model_parallel_rank() == 0: from utils.utils.grounding_parser import parse_response parse_response(pil_img, response) history = history + [(query, response)] return response, history, (torch_image, pil_img, cross_image) def post( input_text, temperature, top_p, top_k, image_prompt, result_previous, hidden_image, state ): result_text = [(ele[0], ele[1]) for ele in result_previous] for i in range(len(result_text)-1, -1, -1): if result_text[i][0] == "" or result_text[i][0] == None: del result_text[i] print(f"history {result_text}") global model, image_processor, cross_image_processor, text_processor_infer, is_grounding try: with torch.no_grad(): pil_img, image_path_grounding = process_image_without_resize(image_prompt) response, _, cache_image = chat( 
image_path="", model=model, text_processor=text_processor_infer, img_processor=image_processor, query=input_text, history=result_text, cross_img_processor=cross_image_processor, image=pil_img, max_length=2048, top_p=top_p, temperature=temperature, top_k=top_k, invalid_slices=text_processor_infer.invalid_slices if hasattr(text_processor_infer, "invalid_slices") else [], no_prompt=False, args=state['args'] ) except Exception as e: print("error message", e) result_text.append((input_text, 'Timeout! Please wait a few minutes and retry.')) return "", result_text, hidden_image answer = response if is_grounding: parse_response(pil_img, answer, image_path_grounding) new_answer = answer.replace(input_text, "") result_text.append((input_text, new_answer)) result_text.append((None, (image_path_grounding,))) else: result_text.append((input_text, answer)) print(result_text) print('finished') return "", result_text, hidden_image
null
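`chat` guards against overlong prompts (`max_length - len(input_ids) <= 1`) and then pads the sequence with `-1` placeholders up to `max_length` before generation. The same bookkeeping can be sketched with plain lists instead of torch tensors (the helper name is made up):

```python
def build_seq(input_ids, max_length):
    # Mirrors the guard and -1 padding in `chat`: reject prompts that
    # leave no room for generation, otherwise pad with -1 placeholders.
    if max_length - len(input_ids) <= 1:
        return None  # "The prompt exceeds the context length limit"
    return input_ids + [-1] * (max_length - len(input_ids))

seq = build_seq([1, 2, 3], 6)
```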
7,443
import gradio as gr import os, sys from PIL import Image import torch import time from sat.model.mixins import CachedAutoregressiveMixin from sat.mpu import get_model_parallel_world_size from sat.model import AutoModel from utils.utils import chat, llama2_tokenizer, llama2_text_processor_inference, get_image_processor, parse_response from utils.models import CogAgentModel, CogVLMModel default_chatbox = [("", "Hi, What do you want to know about this image?")] from sat.quantization.kernels import quantize def clear_fn(value): return "", default_chatbox, None
null
7,444
import gradio as gr import os, sys from PIL import Image import torch import time from sat.model.mixins import CachedAutoregressiveMixin from sat.mpu import get_model_parallel_world_size from sat.model import AutoModel from utils.utils import chat, llama2_tokenizer, llama2_text_processor_inference, get_image_processor, parse_response from utils.models import CogAgentModel, CogVLMModel default_chatbox = [("", "Hi, What do you want to know about this image?")] from sat.quantization.kernels import quantize def clear_fn2(value): return default_chatbox
null
7,445
import base64 from io import BytesIO from PIL import Image The provided code snippet includes necessary dependencies for implementing the `images_are_same` function. Write a Python function `def images_are_same(img1: Image, img2: Image) -> bool` to solve the following problem: Compare two PIL images. Here is the function: def images_are_same(img1: Image, img2: Image) -> bool: """ Compare two PIL images. """ if img1.size != img2.size or img1.mode != img2.mode: return False return list(img1.getdata()) == list(img2.getdata())
Compare two PIL images.
7,446
import base64 from io import BytesIO from PIL import Image The provided code snippet includes necessary dependencies for implementing the `encode_file_to_base64` function. Write a Python function `def encode_file_to_base64(file)` to solve the following problem: Convert a file to base64. Here is the function: def encode_file_to_base64(file): """ Convert a file to base64. """ buffer = BytesIO() buffer.write(file.read()) return base64.b64encode(buffer.getvalue()).decode()
Convert a file to base64.
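`encode_file_to_base64` buffers the file through `BytesIO` before encoding, so any file-like object works. The helper can be exercised with an in-memory round trip:

```python
import base64
from io import BytesIO

def encode_file_to_base64(file):
    # Same as the helper above: buffer the file-like object, then encode
    buffer = BytesIO()
    buffer.write(file.read())
    return base64.b64encode(buffer.getvalue()).decode()

encoded = encode_file_to_base64(BytesIO(b'hello'))
```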
7,447
import requests
import re
import streamlit as st
from dataclasses import dataclass
from enum import auto, Enum
from PIL.Image import Image
from PIL import ImageDraw
from streamlit.delta_generator import DeltaGenerator


class Role(Enum):
    """
    CogVLM | CogAgent only has 2 roles: USER, ASSISTANT

    Represents the roles in a conversation, specifically for CogVLM and CogAgent applications.
    There are two roles available:
    - USER: The user of the system, typically the one asking questions or initiating conversation.
    - ASSISTANT: The system or AI assistant responding to the user's queries.

    Methods:
        get_message(self):
            Retrieves a Streamlit chat message component based on the role.
            For the USER role, it returns a chat message with the name "user" and user avatar.
            For the ASSISTANT role, it returns a chat message with the name "assistant" and assistant avatar.
    """
    USER = auto()
    ASSISTANT = auto()

    def get_message(self):
        match self.value:
            case Role.USER.value:
                return st.chat_message(name="user", avatar="user")
            case Role.ASSISTANT.value:
                return st.chat_message(name="assistant", avatar="assistant")
            case _:
                st.error(f'Unexpected role: {self}')


@dataclass
class Conversation:
    """
    Represents a single conversation turn within a dialogue.

    Attributes:
        role (Role): The role of the speaker in the conversation (USER or ASSISTANT).
        content (str): The textual content of the conversation turn.
        image (Image, optional): An optional image associated with the conversation turn.
        content_show (str, optional): The content to be displayed in the WebUI. This may differ
            from `content` if translation or other processing is applied.
        translate (bool, optional): Whether to translate the content of the conversation turn.

    Methods:
        __str__(self) -> str:
            Returns a string representation of the conversation turn, including the role and content.
        show(self, placeholder: DeltaGenerator | None = None) -> str:
            Displays the conversation turn in the WebUI. If `placeholder` is provided, the content
            is shown in the specified Streamlit container. Otherwise, it uses the message style
            determined by the role.
    """
    role: Role = Role.USER
    content: str = ""
    image: Image | None = None
    content_show: str | None = None
    translate: bool = False

    def __str__(self) -> str:
        print(self.role, self.content)
        match self.role:
            case Role.USER | Role.ASSISTANT:
                return f'{self.role}\n{self.content}'

    def show(self, placeholder: DeltaGenerator | None = None) -> str:
        """
        Show in markdown format.
        """
        if placeholder:
            message = placeholder
        else:
            message = self.role.get_message()

        # for Chinese WebUI show
        if self.role == Role.USER:
            if self.translate:
                self.content = translate_baidu(self.content_show, source_lan="zh", target_lan="en")
                if self.content == "error":
                    self.content_show = "Please Enter your Baidu Translation API Key in function translate_baidu()"
            else:
                self.content = self.content_show
        if self.role == Role.ASSISTANT:
            if self.translate:
                self.content_show = translate_baidu(self.content, source_lan="en", target_lan="zh")
            else:
                self.content_show = self.content
        self.content_show = self.content_show.replace('\n', ' \n')
        message.markdown(self.content_show)
        if self.image:
            message.image(self.image)
The provided code snippet includes necessary dependencies for implementing the `preprocess_text` function. Write a Python function `def preprocess_text(history: list[Conversation], ) -> str` to solve the following problem: Prepares the conversation history for processing by concatenating the content of each turn. Args: history (list[Conversation]): The conversation history, a list of Conversation objects. Returns: str: A single string that concatenates the content of each conversation turn, followed by the ASSISTANT role indicator. This string is suitable for use as input to a text generation model. Here is the function: def preprocess_text(history: list[Conversation], ) -> str:
    """
    Prepares the conversation history for processing by concatenating the content of each turn.

    Args:
        history (list[Conversation]): The conversation history, a list of Conversation objects.

    Returns:
        str: A single string that concatenates the content of each conversation turn,
        followed by the ASSISTANT role indicator. This string is suitable for use as
        input to a text generation model.
    """
    prompt = ""
    for conversation in history:
        prompt += f'{conversation}'
    prompt += f'{Role.ASSISTANT}\n'
    return prompt
Prepares the conversation history for processing by concatenating the content of each turn.

Args:
    history (list[Conversation]): The conversation history, a list of Conversation objects.

Returns:
    str: A single string that concatenates the content of each conversation turn, followed by
    the ASSISTANT role indicator. This string is suitable for use as input to a text generation model.
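The prompt-assembly behaviour can be exercised without Streamlit by using a stripped-down stand-in for `Conversation` (only `role`, `content`, and `__str__` are kept; the image and translation machinery is irrelevant to `preprocess_text`). This is a simplification for illustration, not the full class:

```python
from dataclasses import dataclass
from enum import auto, Enum


class Role(Enum):
    USER = auto()
    ASSISTANT = auto()


# Minimal stand-in for the full Conversation dataclass: just enough to
# reproduce the string form that preprocess_text concatenates.
@dataclass
class Conversation:
    role: Role = Role.USER
    content: str = ""

    def __str__(self) -> str:
        return f'{self.role}\n{self.content}'


def preprocess_text(history: list) -> str:
    prompt = ""
    for conversation in history:
        prompt += f'{conversation}'
    prompt += f'{Role.ASSISTANT}\n'
    return prompt


history = [
    Conversation(Role.USER, "Describe this image."),
    Conversation(Role.ASSISTANT, "A cat sleeping on a sofa."),
    Conversation(Role.USER, "What color is the cat?"),
]
print(preprocess_text(history))
# Role.USER
# Describe this image.Role.ASSISTANT
# A cat sleeping on a sofa.Role.USER
# What color is the cat?Role.ASSISTANT
```

Note that there is no separator between turns beyond the role headers themselves, and the string always ends with the ASSISTANT marker, cueing the model to produce the next reply.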
7,448
import requests
import re
import streamlit as st
from dataclasses import dataclass
from enum import auto, Enum
from PIL.Image import Image
from PIL import ImageDraw
from streamlit.delta_generator import DeltaGenerator
The provided code snippet includes necessary dependencies for implementing the `postprocess_text` function. Write a Python function `def postprocess_text(template: str, text: str) -> str` to solve the following problem: Post-processes the generated text by incorporating it into a given template. Args: template (str): A template string containing a placeholder for the generated text. text (str): The generated text to be incorporated into the template. Returns: str: The template with the generated text replacing the placeholder. Here is the function: def postprocess_text(template: str, text: str) -> str:
    """
    Post-processes the generated text by incorporating it into a given template.

    Args:
        template (str): A template string containing a placeholder for the generated text.
        text (str): The generated text to be incorporated into the template.

    Returns:
        str: The template with the generated text replacing the placeholder.
    """
    quoted_text = f'"{text.strip()}"'
    return template.replace("<TASK>", quoted_text).strip() if template != "" else text.strip()
Post-processes the generated text by incorporating it into a given template.

Args:
    template (str): A template string containing a placeholder for the generated text.
    text (str): The generated text to be incorporated into the template.

Returns:
    str: The template with the generated text replacing the placeholder.
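Two illustrative calls to `postprocess_text`. The example template below is hypothetical, chosen only to show how the quoted text is spliced into the `<TASK>` placeholder:

```python
def postprocess_text(template: str, text: str) -> str:
    """Insert the generated text, quoted, into the template's <TASK> slot."""
    quoted_text = f'"{text.strip()}"'
    return template.replace("<TASK>", quoted_text).strip() if template != "" else text.strip()


# With a template, the model output is stripped, quoted, and substituted.
template = "What steps are needed to do <TASK> on the current screen?"
print(postprocess_text(template, "  book a flight to Beijing "))
# What steps are needed to do "book a flight to Beijing" on the current screen?

# With an empty template, the output is returned as-is (just stripped).
print(postprocess_text("", "  hello "))  # hello
```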