| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def filtrate_objects(self, obj_list):
"""
Discard objects which are not in self.classes (or its similar classes)
:param obj_list: list
:return: list
"""
type_whitelist = self.classes
if self.mode == 'TRAIN' and cfg.INCLUDE_SIMILAR_TYPE:
type_whitelist ... |
Discard objects which are not in self.classes (or its similar classes)
:param obj_list: list
:return: list
| filtrate_objects | python | sshaoshuai/PointRCNN | lib/datasets/kitti_rcnn_dataset.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/datasets/kitti_rcnn_dataset.py | MIT |
def get_valid_flag(pts_rect, pts_img, pts_rect_depth, img_shape):
"""
Valid point should be in the image (and in the PC_AREA_SCOPE)
:param pts_rect:
:param pts_img:
:param pts_rect_depth:
:param img_shape:
:return:
"""
val_flag_1 = np.logical_and(p... |
Valid point should be in the image (and in the PC_AREA_SCOPE)
:param pts_rect:
:param pts_img:
:param pts_rect_depth:
:param img_shape:
:return:
| get_valid_flag | python | sshaoshuai/PointRCNN | lib/datasets/kitti_rcnn_dataset.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/datasets/kitti_rcnn_dataset.py | MIT |
def aug_roi_by_noise(self, roi_info):
"""
    Add noise to the original RoI to get aug_box3d
:param roi_info:
:return:
"""
roi_box3d, gt_box3d = roi_info['roi_box3d'], roi_info['gt_box3d']
original_iou = roi_info['iou3d']
temp_iou = cnt = 0
pos_thresh = mi... |
Add noise to the original RoI to get aug_box3d
:param roi_info:
:return:
| aug_roi_by_noise | python | sshaoshuai/PointRCNN | lib/datasets/kitti_rcnn_dataset.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/datasets/kitti_rcnn_dataset.py | MIT |
def random_aug_box3d(box3d):
"""
:param box3d: (7) [x, y, z, h, w, l, ry]
random shift, scale, orientation
"""
if cfg.RCNN.REG_AUG_METHOD == 'single':
pos_shift = (np.random.rand(3) - 0.5) # [-0.5 ~ 0.5]
hwl_scale = (np.random.rand(3) - 0.5) / (0.5 / 0.15... |
:param box3d: (7) [x, y, z, h, w, l, ry]
random shift, scale, orientation
| random_aug_box3d | python | sshaoshuai/PointRCNN | lib/datasets/kitti_rcnn_dataset.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/datasets/kitti_rcnn_dataset.py | MIT |
def distance_based_proposal(self, scores, proposals, order):
"""
    propose RoIs in two areas based on the distance
:param scores: (N)
:param proposals: (N, 7)
:param order: (N)
"""
nms_range_list = [0, 40.0, 80.0]
pre_tot_top_n = cfg[self.mode].RPN_PRE_NMS_T... |
propose RoIs in two areas based on the distance
:param scores: (N)
:param proposals: (N, 7)
:param order: (N)
| distance_based_proposal | python | sshaoshuai/PointRCNN | lib/rpn/proposal_layer.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/rpn/proposal_layer.py | MIT |
def score_based_proposal(self, scores, proposals, order):
"""
    propose RoIs based on the score
:param scores: (N)
:param proposals: (N, 7)
:param order: (N)
"""
# sort by score
scores_ordered = scores[order]
proposals_ordered = propo... |
propose RoIs based on the score
:param scores: (N)
:param proposals: (N, 7)
:param order: (N)
| score_based_proposal | python | sshaoshuai/PointRCNN | lib/rpn/proposal_layer.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/rpn/proposal_layer.py | MIT |
def random_aug_box3d(box3d):
"""
:param box3d: (7) [x, y, z, h, w, l, ry]
random shift, scale, orientation
"""
if cfg.RCNN.REG_AUG_METHOD == 'single':
pos_shift = (torch.rand(3, device=box3d.device) - 0.5) # [-0.5 ~ 0.5]
hwl_scale = (torch.rand(3, device=... |
:param box3d: (7) [x, y, z, h, w, l, ry]
random shift, scale, orientation
| random_aug_box3d | python | sshaoshuai/PointRCNN | lib/rpn/proposal_target_layer.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/rpn/proposal_target_layer.py | MIT |
def data_augmentation(self, pts, rois, gt_of_rois):
"""
:param pts: (B, M, 512, 3)
    :param rois: (B, M, 7)
:param gt_of_rois: (B, M, 7)
:return:
"""
batch_size, boxes_num = pts.shape[0], pts.shape[1]
# rotation augmentation
angles = (torch.rand((ba... |
:param pts: (B, M, 512, 3)
:param rois: (B, M, 7)
:param gt_of_rois: (B, M, 7)
:return:
| data_augmentation | python | sshaoshuai/PointRCNN | lib/rpn/proposal_target_layer.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/rpn/proposal_target_layer.py | MIT |
def decode_bbox_target(roi_box3d, pred_reg, loc_scope, loc_bin_size, num_head_bin, anchor_size,
get_xz_fine=True, get_y_by_bin=False, loc_y_scope=0.5, loc_y_bin_size=0.25, get_ry_fine=False):
"""
:param roi_box3d: (N, 7)
:param pred_reg: (N, C)
:param loc_scope:
:param loc_bin... |
:param roi_box3d: (N, 7)
:param pred_reg: (N, C)
:param loc_scope:
:param loc_bin_size:
:param num_head_bin:
:param anchor_size:
:param get_xz_fine:
:param get_y_by_bin:
:param loc_y_scope:
:param loc_y_bin_size:
:param get_ry_fine:
:return:
| decode_bbox_target | python | sshaoshuai/PointRCNN | lib/utils/bbox_transform.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/bbox_transform.py | MIT |
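The bin-based encoding behind `decode_bbox_target` splits each localization range into fixed-size bins and predicts a bin classification plus a per-bin residual. A minimal 1D sketch of the decoding step (the function name and flat array layout here are illustrative, not the repo's exact tensor interface, which packs all coordinates into `pred_reg`):

```python
import numpy as np

def decode_1d_bin(bin_logits, residuals, loc_scope, loc_bin_size):
    """Decode one bin-based coordinate: take the argmax bin, then add the
    residual predicted for that bin. Residuals are normalized by half the
    bin size, as in PointRCNN-style encodings."""
    bin_id = int(np.argmax(bin_logits))
    bin_center = bin_id * loc_bin_size + loc_bin_size / 2 - loc_scope
    return bin_center + residuals[bin_id] * (loc_bin_size / 2)

# 4 bins covering [-2, 2) with bin size 1.0; bin 2 wins -> center 0.5
logits = np.array([0.1, 0.2, 3.0, 0.0])
res = np.array([0.0, 0.0, 0.4, 0.0])  # residual 0.4 * (1.0 / 2) = 0.2
x = decode_1d_bin(logits, res, loc_scope=2.0, loc_bin_size=1.0)
```

The same decode is applied independently to x and z (and optionally y and the heading angle, when `get_y_by_bin` / head bins are enabled).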
def corners3d_to_img_boxes(self, corners3d):
"""
:param corners3d: (N, 8, 3) corners in rect coordinate
:return: boxes: (None, 4) [x1, y1, x2, y2] in rgb coordinate
:return: boxes_corner: (None, 8) [xi, yi] in rgb coordinate
"""
sample_num = corners3d.shape[0]
cor... |
:param corners3d: (N, 8, 3) corners in rect coordinate
:return: boxes: (None, 4) [x1, y1, x2, y2] in rgb coordinate
:return: boxes_corner: (None, 8) [xi, yi] in rgb coordinate
| corners3d_to_img_boxes | python | sshaoshuai/PointRCNN | lib/utils/calibration.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/calibration.py | MIT |
def camera_dis_to_rect(self, u, v, d):
"""
    Only valid u, v, d can be processed: u, v must lie within the image bounds; reprojection error is about 0.02
:param u: (N)
:param v: (N)
:param d: (N), the distance between camera and 3d points, d^2 = x^2 + y^2 + z^2
:return:
... |
Only valid u, v, d can be processed: u, v must lie within the image bounds; reprojection error is about 0.02
:param u: (N)
:param v: (N)
:param d: (N), the distance between camera and 3d points, d^2 = x^2 + y^2 + z^2
:return:
| camera_dis_to_rect | python | sshaoshuai/PointRCNN | lib/utils/calibration.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/calibration.py | MIT |
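The geometry in `camera_dis_to_rect` follows from the pinhole model: with normalized image-plane coordinates xn = (u - cu)/fu and yn = (v - cv)/fv, the constraint d² = x² + y² + z² gives z = d / sqrt(xn² + yn² + 1). A hedged sketch with explicit intrinsics (the repo reads these from its calibration object; the parameter names here are illustrative):

```python
import numpy as np

def camera_dis_to_rect_sketch(u, v, d, fu, fv, cu, cv):
    """Recover rect-camera 3D points from pixel coords (u, v) and the
    Euclidean camera distance d, where d^2 = x^2 + y^2 + z^2."""
    xn = (u - cu) / fu              # normalized image-plane coordinates
    yn = (v - cv) / fv
    z = d / np.sqrt(xn ** 2 + yn ** 2 + 1.0)
    return np.stack([xn * z, yn * z, z], axis=-1)

pt = camera_dis_to_rect_sketch(np.array([700.0]), np.array([400.0]),
                               np.array([10.0]), 700.0, 700.0, 600.0, 180.0)
```

By construction the returned point sits exactly at distance d from the camera center.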
def dist_to_plane(plane, points):
"""
Calculates the signed distance from a 3D plane to each point in a list of points
:param plane: (a, b, c, d)
:param points: (N, 3)
:return: (N), signed distance of each point to the plane
"""
a, b, c, d = plane
points = np.array(points)
x = point... |
Calculates the signed distance from a 3D plane to each point in a list of points
:param plane: (a, b, c, d)
:param points: (N, 3)
:return: (N), signed distance of each point to the plane
| dist_to_plane | python | sshaoshuai/PointRCNN | lib/utils/kitti_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/kitti_utils.py | MIT |
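The truncated helper above evaluates the plane equation a·x + b·y + c·z + d per point. A minimal NumPy version, assuming the plane normal (a, b, c) is unit length so the value is a true signed distance:

```python
import numpy as np

def dist_to_plane(plane, points):
    """Signed distance a*x + b*y + c*z + d for each (x, y, z) point;
    positive on the side the normal (a, b, c) points toward."""
    a, b, c, d = plane
    points = np.asarray(points)
    return points @ np.array([a, b, c]) + d

# toy plane z = 0: distances are just the z coordinates
dists = dist_to_plane((0.0, 0.0, 1.0, 0.0), [[1.0, 2.0, 3.0], [0.0, 0.0, -2.0]])
```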
def rotate_pc_along_y(pc, rot_angle):
"""
params pc: (N, 3+C), (N, 3) is in the rectified camera coordinate
params rot_angle: rad scalar
Output pc: updated pc with XYZ rotated
"""
cosval = np.cos(rot_angle)
sinval = np.sin(rot_angle)
rotmat = np.array([[cosval, -sinval], [sinval, cosval]... |
params pc: (N, 3+C), (N, 3) is in the rectified camera coordinate
params rot_angle: rad scalar
Output pc: updated pc with XYZ rotated
| rotate_pc_along_y | python | sshaoshuai/PointRCNN | lib/utils/kitti_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/kitti_utils.py | MIT |
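A runnable reconstruction of the truncated helper above: the visible `rotmat` is a 2×2 rotation applied to the (x, z) columns, leaving y and any extra channels untouched. Copying the input rather than mutating in place is my choice, not necessarily the repo's:

```python
import numpy as np

def rotate_pc_along_y(pc, rot_angle):
    """Rotate the XZ columns of pc (N, 3+C) about the camera y axis by
    rot_angle radians; all other columns pass through unchanged."""
    c, s = np.cos(rot_angle), np.sin(rot_angle)
    rotmat = np.array([[c, -s], [s, c]])
    pc = np.asarray(pc, dtype=float).copy()
    pc[:, [0, 2]] = pc[:, [0, 2]] @ rotmat.T
    return pc

# rotating (x=1, z=0) by pi/2 sends it to (x=0, z=1); y is untouched
out = rotate_pc_along_y(np.array([[1.0, 5.0, 0.0]]), np.pi / 2)
```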
def rotate_pc_along_y_torch(pc, rot_angle):
"""
:param pc: (N, 512, 3 + C)
:param rot_angle: (N)
:return:
TODO: merge with rotate_pc_along_y_torch in bbox_transform.py
"""
cosa = torch.cos(rot_angle).view(-1, 1) # (N, 1)
sina = torch.sin(rot_angle).view(-1, 1) # (N, 1)
raw_1 = tor... |
:param pc: (N, 512, 3 + C)
:param rot_angle: (N)
:return:
TODO: merge with rotate_pc_along_y_torch in bbox_transform.py
| rotate_pc_along_y_torch | python | sshaoshuai/PointRCNN | lib/utils/kitti_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/kitti_utils.py | MIT |
def in_hull(p, hull):
"""
:param p: (N, K) test points
:param hull: (M, K) M corners of a box
:return (N) bool
"""
try:
if not isinstance(hull, Delaunay):
hull = Delaunay(hull)
flag = hull.find_simplex(p) >= 0
except scipy.spatial.qhull.QhullError:
print('... |
:param p: (N, K) test points
:param hull: (M, K) M corners of a box
:return (N) bool
| in_hull | python | sshaoshuai/PointRCNN | lib/utils/kitti_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/kitti_utils.py | MIT |
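`in_hull` relies on SciPy's `Delaunay.find_simplex`, which returns -1 for points outside the triangulated hull. A self-contained 2D sketch (the repo applies the same idea to 3D box corners, and falls back to all-False on a `QhullError`):

```python
import numpy as np
from scipy.spatial import Delaunay

def in_hull(p, hull_points):
    """Boolean flag per test point: inside the convex hull of hull_points
    iff find_simplex locates a containing simplex (i.e. returns >= 0)."""
    hull = Delaunay(hull_points)
    return hull.find_simplex(p) >= 0

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
flags = in_hull(np.array([[0.5, 0.5], [2.0, 2.0]]), square)
```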
def get_iou3d(corners3d, query_corners3d, need_bev=False):
"""
:param corners3d: (N, 8, 3) in rect coords
:param query_corners3d: (M, 8, 3)
:return:
"""
from shapely.geometry import Polygon
A, B = corners3d, query_corners3d
N, M = A.shape[0], B.shape[0]
iou3d = np.zeros((N, M), d... |
:param corners3d: (N, 8, 3) in rect coords
:param query_corners3d: (M, 8, 3)
:return:
| get_iou3d | python | sshaoshuai/PointRCNN | lib/utils/kitti_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/kitti_utils.py | MIT |
def forward(self, input, target):
"""
:param input: (N), logit
:param target: (N), {0, 1}
:return:
"""
input = torch.sigmoid(input.view(-1))
target = target.float().view(-1)
mask = (target != self.ignore_target).float()
return 1.0 - (torch.min(inpu... |
:param input: (N), logit
:param target: (N), {0, 1}
:return:
| forward | python | sshaoshuai/PointRCNN | lib/utils/loss_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/loss_utils.py | MIT |
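The truncated `forward` above reads as a soft-IoU (dice-style) loss: elementwise min of (sigmoid(logit), target) over elementwise max, with ignored labels masked out and the denominator clamped at 1. A NumPy sketch of that interpretation (hedged, since the tail of the expression is cut off in the snippet):

```python
import numpy as np

def soft_iou_loss(logits, target, ignore_target=-1):
    """1 - sum(min(p, t) * mask) / max(sum(max(p, t) * mask), 1),
    where p = sigmoid(logits) and mask zeroes out ignored labels."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    t = np.asarray(target, dtype=float)
    mask = (t != ignore_target).astype(float)
    inter = (np.minimum(p, t) * mask).sum()
    union = max((np.maximum(p, t) * mask).sum(), 1.0)
    return 1.0 - inter / union
```

Perfectly confident, correct predictions drive the loss toward 0; a maximally uncertain prediction on a positive label costs 0.5 here.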
def __init__(self, gamma=2.0, alpha=0.25):
"""Constructor.
Args:
gamma: exponent of the modulating factor (1 - p_t) ^ gamma.
alpha: optional alpha weighting factor to balance positives vs negatives.
all_zero_negative: bool. if True, will treat all zero as background.
... | Constructor.
Args:
gamma: exponent of the modulating factor (1 - p_t) ^ gamma.
alpha: optional alpha weighting factor to balance positives vs negatives.
all_zero_negative: bool. if True, will treat all zero as background.
else, will treat first label as background... | __init__ | python | sshaoshuai/PointRCNN | lib/utils/loss_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/loss_utils.py | MIT |
def forward(self,
prediction_tensor,
target_tensor,
weights):
"""Compute loss function.
Args:
prediction_tensor: A float tensor of shape [batch_size, num_anchors,
num_classes] representing the predicted logits for each class
... | Compute loss function.
Args:
prediction_tensor: A float tensor of shape [batch_size, num_anchors,
num_classes] representing the predicted logits for each class
target_tensor: A float tensor of shape [batch_size, num_anchors,
num_classes] representing one-hot ... | forward | python | sshaoshuai/PointRCNN | lib/utils/loss_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/loss_utils.py | MIT |
def get_reg_loss(pred_reg, reg_label, loc_scope, loc_bin_size, num_head_bin, anchor_size,
get_xz_fine=True, get_y_by_bin=False, loc_y_scope=0.5, loc_y_bin_size=0.25, get_ry_fine=False):
"""
    Bin-based 3D bounding box regression loss. See https://arxiv.org/abs/1812.04244 for more details.
... |
Bin-based 3D bounding box regression loss. See https://arxiv.org/abs/1812.04244 for more details.
:param pred_reg: (N, C)
:param reg_label: (N, 7) [dx, dy, dz, h, w, l, ry]
:param loc_scope: constant
:param loc_bin_size: constant
:param num_head_bin: constant
:param anchor_size: (N, ... | get_reg_loss | python | sshaoshuai/PointRCNN | lib/utils/loss_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/loss_utils.py | MIT |
def generate_corners3d(self):
"""
generate corners3d representation for this object
:return corners_3d: (8, 3) corners of box3d in camera coord
"""
l, h, w = self.l, self.h, self.w
x_corners = [l / 2, l / 2, -l / 2, -l / 2, l / 2, l / 2, -l / 2, -l / 2]
y_corners ... |
generate corners3d representation for this object
:return corners_3d: (8, 3) corners of box3d in camera coord
| generate_corners3d | python | sshaoshuai/PointRCNN | lib/utils/object3d.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/object3d.py | MIT |
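The truncated corner lists above follow the standard KITTI box layout: the box is anchored at its bottom center, y points down (so the top face sits at -h), and the 8 corners are rotated by the heading ry about the y axis. A self-contained sketch (free function instead of the repo's `Object3d` method):

```python
import numpy as np

def generate_corners3d(pos, h, w, l, ry):
    """(8, 3) corners of a KITTI box in camera coords: bottom center at pos,
    length l along x, height h along -y, width w along z, heading ry."""
    x = np.array([l / 2, l / 2, -l / 2, -l / 2, l / 2, l / 2, -l / 2, -l / 2])
    y = np.array([0, 0, 0, 0, -h, -h, -h, -h], dtype=float)
    z = np.array([w / 2, -w / 2, -w / 2, w / 2, w / 2, -w / 2, -w / 2, w / 2])
    R = np.array([[np.cos(ry), 0, np.sin(ry)],
                  [0, 1, 0],
                  [-np.sin(ry), 0, np.cos(ry)]])
    return (R @ np.vstack([x, y, z])).T + np.asarray(pos)

c = generate_corners3d([0.0, 0.0, 10.0], h=1.5, w=1.6, l=4.0, ry=0.0)
```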
def to_bev_box2d(self, oblique=True, voxel_size=0.1):
"""
:param bev_shape: (2) for bev shape (h, w), => (y_max, x_max) in image
:param voxel_size: float, 0.1m
:param oblique:
:return: box2d (4, 2)/ (4) in image coordinate
"""
if oblique:
corners3d = s... |
:param bev_shape: (2) for bev shape (h, w), => (y_max, x_max) in image
:param voxel_size: float, 0.1m
:param oblique:
:return: box2d (4, 2)/ (4) in image coordinate
| to_bev_box2d | python | sshaoshuai/PointRCNN | lib/utils/object3d.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/object3d.py | MIT |
def nms_gpu(boxes, scores, thresh):
"""
:param boxes: (N, 5) [x1, y1, x2, y2, ry]
:param scores: (N)
:param thresh:
:return:
"""
# areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
keep = torch.LongTensor(boxes.size(0))
... |
:param boxes: (N, 5) [x1, y1, x2, y2, ry]
:param scores: (N)
:param thresh:
:return:
| nms_gpu | python | sshaoshuai/PointRCNN | lib/utils/iou3d/iou3d_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/iou3d/iou3d_utils.py | MIT |
def nms_normal_gpu(boxes, scores, thresh):
"""
:param boxes: (N, 5) [x1, y1, x2, y2, ry]
:param scores: (N)
:param thresh:
:return:
"""
# areas = (x2 - x1) * (y2 - y1)
order = scores.sort(0, descending=True)[1]
boxes = boxes[order].contiguous()
keep = torch.LongTensor(boxes.siz... |
:param boxes: (N, 5) [x1, y1, x2, y2, ry]
:param scores: (N)
:param thresh:
:return:
| nms_normal_gpu | python | sshaoshuai/PointRCNN | lib/utils/iou3d/iou3d_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/iou3d/iou3d_utils.py | MIT |
def roipool3d_gpu(pts, pts_feature, boxes3d, pool_extra_width, sampled_pt_num=512):
"""
:param pts: (B, N, 3)
:param pts_feature: (B, N, C)
:param boxes3d: (B, M, 7)
:param pool_extra_width: float
:param sampled_pt_num: int
:return:
pooled_features: (B, M, 512, 3 + C)
pooled_... |
:param pts: (B, N, 3)
:param pts_feature: (B, N, C)
:param boxes3d: (B, M, 7)
:param pool_extra_width: float
:param sampled_pt_num: int
:return:
pooled_features: (B, M, 512, 3 + C)
pooled_empty_flag: (B, M)
| roipool3d_gpu | python | sshaoshuai/PointRCNN | lib/utils/roipool3d/roipool3d_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/roipool3d/roipool3d_utils.py | MIT |
def pts_in_boxes3d_cpu(pts, boxes3d):
"""
:param pts: (N, 3) in rect-camera coords
:param boxes3d: (M, 7)
:return: boxes_pts_mask_list: (M), list with [(N), (N), ..]
"""
if not pts.is_cuda:
pts = pts.float().contiguous()
boxes3d = boxes3d.float().contiguous()
pts_flag = t... |
:param pts: (N, 3) in rect-camera coords
:param boxes3d: (M, 7)
:return: boxes_pts_mask_list: (M), list with [(N), (N), ..]
| pts_in_boxes3d_cpu | python | sshaoshuai/PointRCNN | lib/utils/roipool3d/roipool3d_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/roipool3d/roipool3d_utils.py | MIT |
def roipool_pc_cpu(pts, pts_feature, boxes3d, sampled_pt_num):
"""
:param pts: (N, 3)
:param pts_feature: (N, C)
:param boxes3d: (M, 7)
:param sampled_pt_num: int
:return:
"""
pts = pts.cpu().float().contiguous()
pts_feature = pts_feature.cpu().float().contiguous()
boxes3d = boxe... |
:param pts: (N, 3)
:param pts_feature: (N, C)
:param boxes3d: (M, 7)
:param sampled_pt_num: int
:return:
| roipool_pc_cpu | python | sshaoshuai/PointRCNN | lib/utils/roipool3d/roipool3d_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/roipool3d/roipool3d_utils.py | MIT |
def roipool3d_cpu(boxes3d, pts, pts_feature, pts_extra_input, pool_extra_width, sampled_pt_num=512,
canonical_transform=True):
"""
:param boxes3d: (N, 7)
:param pts: (N, 3)
:param pts_feature: (N, C)
:param pts_extra_input: (N, C2)
:param pool_extra_width: constant
:param s... |
:param boxes3d: (N, 7)
:param pts: (N, 3)
:param pts_feature: (N, C)
:param pts_extra_input: (N, C2)
:param pool_extra_width: constant
:param sampled_pt_num: constant
:return:
| roipool3d_cpu | python | sshaoshuai/PointRCNN | lib/utils/roipool3d/roipool3d_utils.py | https://github.com/sshaoshuai/PointRCNN/blob/master/lib/utils/roipool3d/roipool3d_utils.py | MIT |
def get_valid_flag(pts_rect, pts_img, pts_rect_depth, img_shape):
"""
Valid point should be in the image (and in the PC_AREA_SCOPE)
:param pts_rect:
:param pts_img:
:param pts_rect_depth:
:param img_shape:
:return:
"""
val_flag_1 = np.logical_and(p... |
Valid point should be in the image (and in the PC_AREA_SCOPE)
:param pts_rect:
:param pts_img:
:param pts_rect_depth:
:param img_shape:
:return:
| get_valid_flag | python | sshaoshuai/PointRCNN | tools/generate_aug_scene.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/generate_aug_scene.py | MIT |
def calculate_iou_partly(gt_annos, dt_annos, metric, num_parts=50):
"""fast iou algorithm. this function can be used independently to
do result analysis. Must be used in CAMERA coordinate system.
Args:
gt_annos: dict, must from get_label_annos() in kitti_common.py
dt_annos: dict, must from g... | fast iou algorithm. this function can be used independently to
do result analysis. Must be used in CAMERA coordinate system.
Args:
gt_annos: dict, must from get_label_annos() in kitti_common.py
dt_annos: dict, must from get_label_annos() in kitti_common.py
metric: eval type. 0: bbox, 1: ... | calculate_iou_partly | python | sshaoshuai/PointRCNN | tools/kitti_object_eval_python/eval.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/kitti_object_eval_python/eval.py | MIT |
def eval_class(gt_annos,
dt_annos,
current_classes,
difficultys,
metric,
min_overlaps,
compute_aos=False,
num_parts=50):
"""Kitti eval. support 2d/bev/3d/aos eval. support 0.5:0.05:0.95 coco AP.
Args:
... | Kitti eval. support 2d/bev/3d/aos eval. support 0.5:0.05:0.95 coco AP.
Args:
gt_annos: dict, must from get_label_annos() in kitti_common.py
dt_annos: dict, must from get_label_annos() in kitti_common.py
current_classes: list of int, 0: car, 1: pedestrian, 2: cyclist
difficultys: list... | eval_class | python | sshaoshuai/PointRCNN | tools/kitti_object_eval_python/eval.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/kitti_object_eval_python/eval.py | MIT |
def area(boxes, add1=False):
"""Computes area of boxes.
Args:
boxes: Numpy array with shape [N, 4] holding N boxes
Returns:
a numpy array with shape [N*1] representing box areas
"""
if add1:
return (boxes[:, 2] - boxes[:, 0] + 1.0) * (
boxes[:, 3] - boxes[:, 1] ... | Computes area of boxes.
Args:
boxes: Numpy array with shape [N, 4] holding N boxes
Returns:
a numpy array with shape [N*1] representing box areas
| area | python | sshaoshuai/PointRCNN | tools/kitti_object_eval_python/kitti_common.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/kitti_object_eval_python/kitti_common.py | MIT |
def intersection(boxes1, boxes2, add1=False):
"""Compute pairwise intersection areas between boxes.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes
boxes2: a numpy array with shape [M, 4] holding M boxes
Returns:
a numpy array with shape [N*M] representing pairwise in... | Compute pairwise intersection areas between boxes.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes
boxes2: a numpy array with shape [M, 4] holding M boxes
Returns:
a numpy array with shape [N*M] representing pairwise intersection area
| intersection | python | sshaoshuai/PointRCNN | tools/kitti_object_eval_python/kitti_common.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/kitti_object_eval_python/kitti_common.py | MIT |
def iou(boxes1, boxes2, add1=False):
"""Computes pairwise intersection-over-union between box collections.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
        boxes2: a numpy array with shape [M, 4] holding M boxes.
Returns:
a numpy array with shape [N, M] representing p... | Computes pairwise intersection-over-union between box collections.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
boxes2: a numpy array with shape [M, 4] holding M boxes.
Returns:
a numpy array with shape [N, M] representing pairwise iou scores.
| iou | python | sshaoshuai/PointRCNN | tools/kitti_object_eval_python/kitti_common.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/kitti_object_eval_python/kitti_common.py | MIT |
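The three kitti_common helpers above (`area`, `intersection`, `iou`) compose into the usual pairwise axis-aligned IoU. A broadcasting sketch without the `add1` pixel-convention flag:

```python
import numpy as np

def area(boxes):
    """[N] areas of [N, 4] boxes in [x1, y1, x2, y2] form."""
    return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])

def intersection(boxes1, boxes2):
    """[N, M] pairwise intersection areas via broadcasting."""
    x1 = np.maximum(boxes1[:, None, 0], boxes2[None, :, 0])
    y1 = np.maximum(boxes1[:, None, 1], boxes2[None, :, 1])
    x2 = np.minimum(boxes1[:, None, 2], boxes2[None, :, 2])
    y2 = np.minimum(boxes1[:, None, 3], boxes2[None, :, 3])
    return np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)

def iou(boxes1, boxes2):
    """[N, M] pairwise IoU: intersection over (sum of areas - intersection)."""
    inter = intersection(boxes1, boxes2)
    union = area(boxes1)[:, None] + area(boxes2)[None, :] - inter
    return inter / np.maximum(union, 1e-12)

a = np.array([[0.0, 0.0, 2.0, 2.0]])
b = np.array([[1.0, 1.0, 3.0, 3.0], [0.0, 0.0, 2.0, 2.0]])
pairwise = iou(a, b)
```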
def rotate_iou_gpu_eval(boxes, query_boxes, criterion=-1, device_id=0):
"""rotated box iou running in gpu. 500x faster than cpu version
(take 5ms in one example with numba.cuda code).
convert from [this project](
https://github.com/hongzhenwang/RRPN-revise/tree/master/lib/rotation).
Args:
... | rotated box iou running in gpu. 500x faster than cpu version
(take 5ms in one example with numba.cuda code).
convert from [this project](
https://github.com/hongzhenwang/RRPN-revise/tree/master/lib/rotation).
Args:
boxes (float tensor: [N, 5]): rbboxes. format: centers, dims,
... | rotate_iou_gpu_eval | python | sshaoshuai/PointRCNN | tools/kitti_object_eval_python/rotate_iou.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/kitti_object_eval_python/rotate_iou.py | MIT |
def split_bn_bias(layer_groups):
"Split the layers in `layer_groups` into batchnorm (`bn_types`) and non-batchnorm groups."
split_groups = []
for l in layer_groups:
l1, l2 = [], []
for c in l.children():
if isinstance(c, bn_types):
l2.append(c)
else:
... | Split the layers in `layer_groups` into batchnorm (`bn_types`) and non-batchnorm groups. | split_bn_bias | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def get_master(layer_groups, flat_master: bool = False):
"Return two lists, one for the model parameters in FP16 and one for the master parameters in FP32."
split_groups = split_bn_bias(layer_groups)
model_params = [[param for param in lg.parameters() if param.requires_grad] for lg in split_groups]
if f... | Return two lists, one for the model parameters in FP16 and one for the master parameters in FP32. | get_master | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def model_g2master_g(model_params, master_params, flat_master: bool = False) -> None:
"Copy the `model_params` gradients to `master_params` for the optimizer step."
if flat_master:
for model_group, master_group in zip(model_params, master_params):
if len(master_group) != 0:
m... | Copy the `model_params` gradients to `master_params` for the optimizer step. | model_g2master_g | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def listify(p=None, q=None):
"Make `p` listy and the same length as `q`."
if p is None:
p = []
elif isinstance(p, str):
p = [p]
elif not isinstance(p, Iterable):
p = [p]
n = q if type(q) == int else len(p) if q is None else len(q)
if len(p) == 1: p = p * n
assert len(... | Make `p` listy and the same length as `q`. | listify | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
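The `listify` snippet above is cut off at its length assertion; a runnable completion of the same logic (following the fastai convention the file borrows: an int `q` gives the target length directly, any other `q` contributes `len(q)`):

```python
from collections.abc import Iterable

def listify(p=None, q=None):
    """Make p a list matching the length implied by q; a length-1 p is
    broadcast to that length, and a mismatch raises an AssertionError."""
    if p is None:
        p = []
    elif isinstance(p, str) or not isinstance(p, Iterable):
        p = [p]
    n = q if isinstance(q, int) else len(p) if q is None else len(q)
    if len(p) == 1:
        p = p * n
    assert len(p) == n, f"List len mismatch ({len(p)} vs {n})"
    return list(p)
```

This is what lets the optimizer wrapper accept either a single learning rate or one per layer group.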
def trainable_params(m: nn.Module):
"Return list of trainable params in `m`."
res = filter(lambda p: p.requires_grad, m.parameters())
return res | Return list of trainable params in `m`. | trainable_params | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def create(cls, opt_func, lr,
layer_groups, **kwargs):
"Create an `optim.Optimizer` from `opt_func` with `lr`. Set lr on `layer_groups`."
split_groups = split_bn_bias(layer_groups)
opt = opt_func([{'params': trainable_params(l), 'lr': 0} for l in split_groups])
opt = cls(o... | Create an `optim.Optimizer` from `opt_func` with `lr`. Set lr on `layer_groups`. | create | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def new(self, layer_groups):
"Create a new `OptimWrapper` from `self` with another `layer_groups` but the same hyper-parameters."
opt_func = getattr(self, 'opt_func', self.opt.__class__)
split_groups = split_bn_bias(layer_groups)
opt = opt_func([{'params': trainable_params(l), 'lr': 0} f... | Create a new `OptimWrapper` from `self` with another `layer_groups` but the same hyper-parameters. | new | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def step(self) -> None:
"Set weight decay and step optimizer."
# weight decay outside of optimizer step (AdamW)
if self.true_wd:
for lr, wd, pg1, pg2 in zip(self._lr, self._wd, self.opt.param_groups[::2], self.opt.param_groups[1::2]):
for p in pg1['params']:
... | Set weight decay and step optimizer. | step | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def clear(self):
"Reset the state of the inner optimizer."
sd = self.state_dict()
sd['state'] = {}
self.load_state_dict(sd) | Reset the state of the inner optimizer. | clear | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def beta(self, val: float) -> None:
"Set beta (or alpha as makes sense for given optimizer)."
if val is None: return
if 'betas' in self.opt_keys:
self.set_val('betas', (self._mom, listify(val, self._beta)))
elif 'alpha' in self.opt_keys:
self.set_val('alpha', list... | Set beta (or alpha as makes sense for given optimizer). | beta | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def read_defaults(self) -> None:
"Read the values inside the optimizer for the hyper-parameters."
self._beta = None
if 'lr' in self.opt_keys: self._lr = self.read_val('lr')
if 'momentum' in self.opt_keys: self._mom = self.read_val('momentum')
if 'alpha' in self.opt_keys: self._be... | Read the values inside the optimizer for the hyper-parameters. | read_defaults | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def set_val(self, key: str, val, bn_groups: bool = True):
"Set `val` inside the optimizer dictionary at `key`."
if is_tuple(val): val = [(v1, v2) for v1, v2 in zip(*val)]
for v, pg1, pg2 in zip(val, self.opt.param_groups[::2], self.opt.param_groups[1::2]):
pg1[key] = v
if... | Set `val` inside the optimizer dictionary at `key`. | set_val | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def read_val(self, key: str):
"Read a hyperparameter `key` in the optimizer dictionary."
val = [pg[key] for pg in self.opt.param_groups[::2]]
if is_tuple(val[0]): val = [o[0] for o in val], [o[1] for o in val]
return val | Read a hyperparameter `key` in the optimizer dictionary. | read_val | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def create(cls, opt_func, lr,
layer_groups, model, flat_master=False, loss_scale=512.0, **kwargs):
"Create an `optim.Optimizer` from `opt_func` with `lr`. Set lr on `layer_groups`."
opt = OptimWrapper.create(opt_func, lr, layer_groups, **kwargs)
opt.model_params, opt.master_params... | Create an `optim.Optimizer` from `opt_func` with `lr`. Set lr on `layer_groups`. | create | python | sshaoshuai/PointRCNN | tools/train_utils/fastai_optim.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/fastai_optim.py | MIT |
def annealing_cos(start, end, pct):
# print(pct, start, end)
"Cosine anneal from `start` to `end` as pct goes from 0.0 to 1.0."
cos_out = np.cos(np.pi * pct) + 1
return end + (start - end) / 2 * cos_out | Cosine anneal from `start` to `end` as pct goes from 0.0 to 1.0. | annealing_cos | python | sshaoshuai/PointRCNN | tools/train_utils/learning_schedules_fastai.py | https://github.com/sshaoshuai/PointRCNN/blob/master/tools/train_utils/learning_schedules_fastai.py | MIT |
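The `annealing_cos` snippet above is complete; a runnable copy with a quick check of its behavior, since the formula is easy to misremember: at pct = 0 it returns `start`, at pct = 1 it returns `end`, and at pct = 0.5 it is exactly their average.

```python
import numpy as np

def annealing_cos(start, end, pct):
    """Cosine anneal from start to end as pct goes from 0.0 to 1.0."""
    cos_out = np.cos(np.pi * pct) + 1
    return end + (start - end) / 2 * cos_out

vals = [annealing_cos(1.0, 0.0, p) for p in (0.0, 0.5, 1.0)]
```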
def extend_body_states(
self,
extend_body_pos: torch.Tensor,
extend_body_parent_ids: list[int],
):
"""
This function is for appending the link states to the robot state. For example, the H1 robot doesn't have hands
and a head in its robot state. However, we are still ... |
This function is for appending the link states to the robot state. For example, the H1 robot doesn't have hands
and a head in its robot state. However, we are still interested in computing its error and considering these as
important key points. Thus, we will use this function to add the head a... | extend_body_states | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/body_state.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/body_state.py | Apache-2.0 |
def get_observations(self) -> torch.Tensor:
"""Gets policy observations for each environment based on the mode."""
if self._mode.is_distill_mode():
return self.get_student_observations()
return self.get_teacher_observations() | Gets policy observations for each environment based on the mode. | get_observations | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/environment_wrapper.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/environment_wrapper.py | Apache-2.0 |
def _init_empty_frames(self, frame: Frame):
"""Initialize empty frame buffers to store trajectory data for all environments.
Creates zero-filled tensors/arrays sized to hold the maximum possible number of frames
and environments, matching the data types and shapes of the input frame.
""... | Initialize empty frame buffers to store trajectory data for all environments.
Creates zero-filled tensors/arrays sized to hold the maximum possible number of frames
and environments, matching the data types and shapes of the input frame.
| _init_empty_frames | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def add_frame(self, frame: Frame):
"""Add a frame to each trajectory in the episode.
Args:
frame (Frame): Frame containing trajectory data for all environments at this timestep
"""
# Initialize frame buffers if this is the first frame being added
if len(self._frames)... | Add a frame to each trajectory in the episode.
Args:
frame (Frame): Frame containing trajectory data for all environments at this timestep
| add_frame | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def complete(self):
"""Aggregate frames into episode data more efficiently.
Instead of splitting data environment by environment, we can use tensor operations
to split all environments at once, significantly reducing loop overhead.
"""
num_envs = self.max_frames_per_env.shape[0]... | Aggregate frames into episode data more efficiently.
Instead of splitting data environment by environment, we can use tensor operations
to split all environments at once, significantly reducing loop overhead.
| complete | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def filter(self, ids: list[int]) -> Episode:
"""Filter episode data to only include specified environment indices."""
# Create new empty episode to store filtered data
filtered = Episode(self.max_frames_per_env)
# Iterate through all attributes of this episode
for attr, data in ... | Filter episode data to only include specified environment indices. | filter | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def trim(self, terminated_frame: torch.Tensor, end_id: int):
"""Helper method to cut data based on terminated frame.
This function creates a new Episode object with truncated data. For each environment,
it keeps only the frames up to the termination point specified in terminated_frame.
... | Helper method to cut data based on terminated frame.
This function creates a new Episode object with truncated data. For each environment,
it keeps only the frames up to the termination point specified in terminated_frame.
It then further filters to keep only environments up to end_id.
... | trim | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def update(
self,
episode: Episode,
episode_gt: Episode,
success_ids: list,
):
"""Update and compute metrics for trajectories from all simulation instances in one episode."""
self.num_motions += episode.num_envs
# First, compute metrics on trajectories from al... | Update and compute metrics for trajectories from all simulation instances in one episode. | update | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _compute_link_metrics(
self,
body_pos: list[torch.Tensor],
body_pos_gt: list[torch.Tensor],
storage: dict[str, dict[str, list[float]]],
):
"""Compute metrics of trajectories and save them by their means and number of elements (as weights)."""
# compute_metrics_lit... | Compute metrics of trajectories and save them by their means and number of elements (as weights). | _compute_link_metrics | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _compute_joint_metrics(
self,
episode: Episode,
episode_gt: Episode,
frame_weights: torch.Tensor,
storage: dict[str, dict[str, list[float]]],
):
"""Compute metrics of trajectories and save them by their means and number of elements (as weights)."""
self._c... | Compute metrics of trajectories and save them by their means and number of elements (as weights). | _compute_joint_metrics | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def compute_joint_tracking_error(
joint_pos: torch.Tensor, joint_pos_gt: torch.Tensor, frame_weights: torch.Tensor, num_envs: int
) -> float:
"""Compute weighted mean absolute joint position error across environments.
For each environment:
1. Take absolute differ... | Compute weighted mean absolute joint position error across environments.
For each environment:
1. Take absolute difference between predicted and ground truth joint positions
2. Weight the differences by frame_weights to normalize across varying trajectory lengths
3. Take... | compute_joint_tracking_error | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
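The three steps listed in the docstring above can be sketched as a weighted mean absolute error. The source row truncates the actual body, so this is a hedged reconstruction consistent with the described steps, not the repo's verbatim implementation; the reduction order over joints and frames is an assumption.

```python
import torch


def compute_joint_tracking_error(joint_pos, joint_pos_gt, frame_weights, num_envs):
    # Step 1: absolute difference between predicted and ground-truth joint
    # positions, averaged over the joint dimension to get one error per frame.
    abs_err = (joint_pos - joint_pos_gt).abs().mean(dim=-1)
    # Step 2: weight per-frame errors so trajectories of different lengths
    # contribute comparably. Step 3 (assumed): normalize by environment count.
    return (abs_err * frame_weights).sum().item() / num_envs
```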
def compute_height_error(
root_pos: torch.Tensor, root_pos_gt: torch.Tensor, frame_weights: torch.Tensor, num_envs: int
) -> float:
"""Compute weighted mean absolute height error across environments.
For each environment:
1. Takes absolute difference between pred... | Compute weighted mean absolute height error across environments.
For each environment:
1. Takes absolute difference between predicted and ground truth root z-coordinates
2. Weights the differences by frame_weights to normalize across varying trajectory lengths
3. Takes m... | compute_height_error | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def compute_vel_error(
vel: torch.Tensor,
rot: torch.Tensor,
vel_gt: torch.Tensor,
rot_gt: torch.Tensor,
frame_weights: torch.Tensor,
num_envs: int,
) -> float:
"""Compute weighted mean velocity tracking error across environment... | Compute weighted mean velocity tracking error across environments.
For each environment:
1. Convert velocities to local frame using inverse rotation
2. Take L2 norm of difference between predicted and ground truth local velocities
3. Weight by frame_weights and average a... | compute_vel_error | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _compute_root_rot_tracking_error(
self,
root_rot: torch.Tensor,
root_rot_gt: torch.Tensor,
frame_weights: torch.Tensor,
num_envs: int,
storage: dict[str, dict[str, list[float]]],
):
"""Compute root rotation tracking error.
Args:
root_r... | Compute root rotation tracking error.
Args:
root_rot: Root rotation quaternions
root_rot_gt: Ground truth root rotation quaternions
Returns:
dict: Dictionary containing roll, pitch, yaw errors
| _compute_root_rot_tracking_error | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def compute_rot_tracking_error(
quat1: torch.Tensor, quat2: torch.Tensor, frame_weights: torch.Tensor, num_envs: int
) -> tuple[float, float, float]:
"""Compute weighted mean rotation tracking error across environments.
For each environment:
1. Compute quaternion... | Compute weighted mean rotation tracking error across environments.
For each environment:
1. Compute quaternion difference between predicted and ground truth rotations
2. Convert difference quaternion to Euler angles (roll, pitch, yaw)
3. Take absolute value of angles and... | compute_rot_tracking_error | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _record_metrics(self, name: str, mean: float, weight: int, storage: dict[str, dict[str, list[float]]]):
"""Record metrics by their means and number of elements (as weights)."""
if name not in storage:
storage[name] = {"means": [], "weights": []}
storage[name]["means"].append(mean... | Record metrics by their means and number of elements (as weights). | _record_metrics | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def conclude(self):
"""At the end of the evaluation, computes the metrics over all tasks."""
self.all_metrics = {
key: np.average(value["means"], weights=value["weights"])
for key, value in self._all_metrics_by_episode.items()
}
self.success_metrics = {
... | At the end of the evaluation, computes the metrics over all tasks. | conclude | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
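The `_record_metrics` and `conclude` rows above are the two halves of a running weighted average: episode means are stored alongside their sample counts, and the final value is a weight-aware average. A self-contained sketch of that pattern (standalone signatures are simplified relative to the class methods):

```python
import numpy as np


def record_metric(name, mean, weight, storage):
    # Append each episode's mean together with its weight (sample count).
    if name not in storage:
        storage[name] = {"means": [], "weights": []}
    storage[name]["means"].append(mean)
    storage[name]["weights"].append(weight)


def conclude(storage):
    # Final metric is the weighted average over all recorded episodes,
    # mirroring the np.average(means, weights=weights) call in the row above.
    return {
        key: np.average(value["means"], weights=value["weights"])
        for key, value in storage.items()
    }
```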
def save(self, directory: str):
"""Saves metrics to a time-stamped json file in ``directory``.
Args:
directory (str): Directory to store the file in.
"""
file_dir = Path(directory)
file_dir.mkdir(parents=True, exist_ok=True)
timestamp = time.strftime("%Y%m%d... | Saves metrics to a time-stamped json file in ``directory``.
Args:
directory (str): Directory to store the file in.
| save | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def __init__(
self,
env_wrapper: EnvironmentWrapper,
metrics_path: str | None = None,
):
"""Initializes the evaluator.
Args:
env_wrapper (EnvironmentWrapper): The environment that the evaluation is taking place.
metrics_path (str | None, optional): Th... | Initializes the evaluator.
Args:
env_wrapper (EnvironmentWrapper): The environment that the evaluation is taking place.
metrics_path (str | None, optional): The directory that the metrics will be saved to. Defaults to None.
| __init__ | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def collect(self, dones: torch.Tensor, info: dict) -> bool:
"""Collects data from a step and updates internal states.
Args:
dones (torch.Tensor): environments that are terminated (failed) or truncated (timed out).
info (dict): Extra information collected from a step.
Re... | Collects data from a step and updates internal states.
Args:
dones (torch.Tensor): environments that are terminated (failed) or truncated (timed out).
info (dict): Extra information collected from a step.
Returns:
bool: Whether all current reference motions are eval... | collect | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _collect_step_data(self, newly_terminated: torch.Tensor, info: dict):
"""Collects data after each step.
Args:
newly_terminated(torch.Tensor(bool)): Newly terminated env
info (dict): Extra information collected from a step.
"""
state_data = info["data"]["stat... | Collects data after each step.
Args:
newly_terminated(torch.Tensor(bool)): Newly terminated env
info (dict): Extra information collected from a step.
| _collect_step_data | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _build_frame(
self, data: dict, mask: torch.Tensor, num_envs: int, upper_joint_ids: list, lower_joint_ids: list
) -> Frame:
"""Builds a frame from the data and mask.
Args:
data (dict): Dictionary containing trajectory data including body positions, joint positions, etc.
... | Builds a frame from the data and mask.
Args:
data (dict): Dictionary containing trajectory data including body positions, joint positions, etc.
mask (torch.Tensor): Boolean mask array indicating which bodies to include in masked data.
num_envs (int): Number of environments.
... | _build_frame | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _update_failure_metrics(self, newly_terminated: torch.Tensor, info: dict):
"""Updates failure metrics based on termination conditions."""
start_id = self._ref_motion_start_id
end_id = min(self._ref_motion_start_id + self._num_envs, self._num_unique_ref_motions)
counted_envs = end_id ... | Updates failure metrics based on termination conditions. | _update_failure_metrics | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _reset_data_buffer(self):
"""Resets data buffer for new episodes."""
self._terminated[:] = False
self._pbar.update(1)
self._pbar.refresh()
self._ref_motion_frames = self._ref_motion_mgr.get_motion_num_steps()
self._episode = Episode(max_frames_per_env=self._ref_motion... | Resets data buffer for new episodes. | _reset_data_buffer | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def _update_status_bar(self):
"""Updates status bar in the console to display current progress and selected metrics."""
update_str = (
f"Terminated: {self._terminated.sum().item()} | max frames: {self._ref_motion_frames.max()} | steps"
f" {self._curr_steps} | Start: {self._ref_m... | Updates status bar in the console to display current progress and selected metrics. | _update_status_bar | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def conclude(self):
"""Concludes evaluation by computing, printing and optionally saving metrics."""
self._pbar.close()
self._metrics.conclude()
self._metrics.print()
if self._metrics_path:
self._metrics.save(self._metrics_path) | Concludes evaluation by computing, printing and optionally saving metrics. | conclude | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def forward_motion_samples(self):
"""Steps forward in the list of reference motions.
All simulated environments must be reset following this function call.
"""
self._ref_motion_start_id += self._num_envs
self._ref_motion_mgr.load_motions(random_sample=False, start_idx=self._ref_... | Steps forward in the list of reference motions.
All simulated environments must be reset following this function call.
| forward_motion_samples | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/evaluator.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/evaluator.py | Apache-2.0 |
def create_mask_element_names(body_names: list[str], joint_names: list[str]):
"""Get a name for each element of the mask."""
body_names = [name + "_local_pos_" for name in body_names]
joint_names = [name + "_joint_pos" for name in joint_names]
root_reference_names = [
"root_linear_velocity_x",
... | Get a name for each element of the mask. | create_mask_element_names | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/mask.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/mask.py | Apache-2.0 |
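The visible part of the row above shows the naming scheme; a runnable sketch follows. The full list of root reference components is truncated in the source, so only the one visible entry is included and the rest is left elided.

```python
def create_mask_element_names(body_names, joint_names):
    # One human-readable name per mask element: body targets first,
    # then joint targets, then the root reference components.
    body_names = [name + "_local_pos_" for name in body_names]
    joint_names = [name + "_joint_pos" for name in joint_names]
    root_reference_names = [
        "root_linear_velocity_x",
        # ... remaining root components are elided in the source row
    ]
    return body_names + joint_names + root_reference_names
```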
def create_mask(
num_envs: int,
mask_element_names: list[str],
mask_modes: dict[str, dict[str, list[str]]],
enable_sparsity_randomization: bool,
device: torch.device,
) -> torch.Tensor:
"""
Create a mask where all enabled states are set to 1.
This mask can be used directly or multiplied ... |
Create a mask where all enabled states are set to 1.
This mask can be used directly or multiplied with 0.5 and then be used as the probability of
a state being enabled.
Args:
mask_element_names: The name corresponding to every element in the mask.
mask_modes: A nested dictionary config... | create_mask | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/mask.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/mask.py | Apache-2.0 |
def scale_transform(x: torch.Tensor, lower: torch.Tensor, upper: torch.Tensor) -> torch.Tensor:
"""Normalizes a given input tensor to a range of [-1, 1].
.. note::
It uses pytorch broadcasting functionality to deal with batched input.
Args:
x: Input tensor of shape (N, dims).
lower... | Normalizes a given input tensor to a range of [-1, 1].
.. note::
It uses pytorch broadcasting functionality to deal with batched input.
Args:
x: Input tensor of shape (N, dims).
lower: The minimum value of the tensor. Shape is (N, dims) or (dims,).
upper: The maximum value of t... | scale_transform | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def unscale_transform(x: torch.Tensor, lower: torch.Tensor, upper: torch.Tensor) -> torch.Tensor:
"""De-normalizes a given input tensor from range of [-1, 1] to (lower, upper).
.. note::
It uses pytorch broadcasting functionality to deal with batched input.
Args:
x: Input tensor of shape (... | De-normalizes a given input tensor from range of [-1, 1] to (lower, upper).
.. note::
It uses pytorch broadcasting functionality to deal with batched input.
Args:
x: Input tensor of shape (N, dims).
lower: The minimum value of the tensor. Shape is (N, dims) or (dims,).
upper: T... | unscale_transform | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
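The `scale_transform`/`unscale_transform` pair above maps between a bounded range and [-1, 1]; the docstrings fully determine the affine maps, so the following is a minimal sketch that should round-trip (bodies reconstructed from the docstrings, not copied from the repo):

```python
import torch


def scale_transform(x, lower, upper):
    # Map [lower, upper] -> [-1, 1]; broadcasting handles batched input.
    offset = (lower + upper) * 0.5
    return 2.0 * (x - offset) / (upper - lower)


def unscale_transform(x, lower, upper):
    # Inverse map [-1, 1] -> [lower, upper].
    offset = (lower + upper) * 0.5
    return x * (upper - lower) * 0.5 + offset
```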
def wrap_to_pi(angles: torch.Tensor) -> torch.Tensor:
r"""Wraps input angles (in radians) to the range :math:`[-\pi, \pi]`.
This function wraps angles in radians to the range :math:`[-\pi, \pi]`, such that
:math:`\pi` maps to :math:`\pi`, and :math:`-\pi` maps to :math:`-\pi`. In general,
odd positive ... | Wraps input angles (in radians) to the range :math:`[-\pi, \pi]`.
This function wraps angles in radians to the range :math:`[-\pi, \pi]`, such that
:math:`\pi` maps to :math:`\pi`, and :math:`-\pi` maps to :math:`-\pi`. In general,
odd positive multiples of :math:`\pi` are mapped to :math:`\pi`, and odd ne... | wrap_to_pi | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
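The wrapping behavior the docstring specifies (odd positive multiples of pi map to +pi rather than -pi) needs one special case on top of a plain modulo; a sketch under that reading:

```python
import torch


def wrap_to_pi(angles):
    # Shift into [0, 2*pi), then shift back to [-pi, pi]. The torch.where
    # keeps +pi at +pi: a plain modulo would send it to -pi.
    wrapped = (angles + torch.pi) % (2 * torch.pi)
    return torch.where(
        (wrapped == 0) & (angles > 0),
        torch.full_like(wrapped, torch.pi),
        wrapped - torch.pi,
    )
```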
def copysign(mag: float, other: torch.Tensor) -> torch.Tensor:
"""Create a new floating-point tensor with the magnitude of input and the sign of other, element-wise.
Note:
The implementation follows from `torch.copysign`. The function allows a scalar magnitude.
Args:
mag: The magnitude sca... | Create a new floating-point tensor with the magnitude of input and the sign of other, element-wise.
Note:
The implementation follows from `torch.copysign`. The function allows a scalar magnitude.
Args:
mag: The magnitude scalar.
other: The tensor containing values whose signbits are ap... | copysign | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def matrix_from_quat(quaternions: torch.Tensor) -> torch.Tensor:
"""Convert rotations given as quaternions to rotation matrices.
Args:
quaternions: The quaternion orientation in (w, x, y, z). Shape is (..., 4).
Returns:
Rotation matrices. The shape is (..., 3, 3).
Reference:
h... | Convert rotations given as quaternions to rotation matrices.
Args:
quaternions: The quaternion orientation in (w, x, y, z). Shape is (..., 4).
Returns:
Rotation matrices. The shape is (..., 3, 3).
Reference:
https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transfo... | matrix_from_quat | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def convert_quat(quat: torch.Tensor | np.ndarray, to: Literal["xyzw", "wxyz"] = "xyzw") -> torch.Tensor | np.ndarray:
"""Converts quaternion from one convention to another.
The convention to convert TO is specified as an optional argument. If to == 'xyzw',
then the input is in 'wxyz' format, and vice-versa... | Converts quaternion from one convention to another.
The convention to convert TO is specified as an optional argument. If to == 'xyzw',
then the input is in 'wxyz' format, and vice-versa.
Args:
quat: The quaternion of shape (..., 4).
to: Convention to convert quaternion to. Defaults to "x... | convert_quat | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def quat_conjugate(q: torch.Tensor) -> torch.Tensor:
"""Computes the conjugate of a quaternion.
Args:
q: The quaternion orientation in (w, x, y, z). Shape is (..., 4).
Returns:
The conjugate quaternion in (w, x, y, z). Shape is (..., 4).
"""
shape = q.shape
q = q.reshape(-1, 4)... | Computes the conjugate of a quaternion.
Args:
q: The quaternion orientation in (w, x, y, z). Shape is (..., 4).
Returns:
The conjugate quaternion in (w, x, y, z). Shape is (..., 4).
| quat_conjugate | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
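With the (w, x, y, z) layout documented above, the conjugate just negates the vector part; a minimal batched sketch:

```python
import torch


def quat_conjugate(q):
    # Keep the scalar part w, negate the vector part (x, y, z).
    return torch.cat([q[..., 0:1], -q[..., 1:]], dim=-1)
```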
def quat_from_euler_xyz(roll: torch.Tensor, pitch: torch.Tensor, yaw: torch.Tensor) -> torch.Tensor:
"""Convert rotations given as Euler angles in radians to Quaternions.
Note:
The euler angles are assumed in XYZ convention.
Args:
roll: Rotation around x-axis (in radians). Shape is (N,).
... | Convert rotations given as Euler angles in radians to Quaternions.
Note:
The euler angles are assumed in XYZ convention.
Args:
roll: Rotation around x-axis (in radians). Shape is (N,).
pitch: Rotation around y-axis (in radians). Shape is (N,).
yaw: Rotation around z-axis (in ra... | quat_from_euler_xyz | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
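For the XYZ convention named in the docstring, the standard half-angle product formulas apply; a sketch (reconstructed from the documented convention, not copied from the repo):

```python
import torch


def quat_from_euler_xyz(roll, pitch, yaw):
    # XYZ Euler angles (radians) -> quaternion in (w, x, y, z).
    cr, sr = torch.cos(roll * 0.5), torch.sin(roll * 0.5)
    cp, sp = torch.cos(pitch * 0.5), torch.sin(pitch * 0.5)
    cy, sy = torch.cos(yaw * 0.5), torch.sin(yaw * 0.5)
    qw = cr * cp * cy + sr * sp * sy
    qx = sr * cp * cy - cr * sp * sy
    qy = cr * sp * cy + sr * cp * sy
    qz = cr * cp * sy - sr * sp * cy
    return torch.stack([qw, qx, qy, qz], dim=-1)
```

A pure yaw of pi/2 should give (cos(pi/4), 0, 0, sin(pi/4)).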
def _sqrt_positive_part(x: torch.Tensor) -> torch.Tensor:
"""Returns torch.sqrt(torch.max(0, x)) but with a zero sub-gradient where x is 0.
Reference:
https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/rotation_conversions.py#L91-L99
"""
ret = torch.zeros_like(x)
p... | Returns torch.sqrt(torch.max(0, x)) but with a zero sub-gradient where x is 0.
Reference:
https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/rotation_conversions.py#L91-L99
| _sqrt_positive_part | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def quat_from_matrix(matrix: torch.Tensor) -> torch.Tensor:
"""Convert rotations given as rotation matrices to quaternions.
Args:
matrix: The rotation matrices. Shape is (..., 3, 3).
Returns:
The quaternion in (w, x, y, z). Shape is (..., 4).
Reference:
https://github.com/face... | Convert rotations given as rotation matrices to quaternions.
Args:
matrix: The rotation matrices. Shape is (..., 3, 3).
Returns:
The quaternion in (w, x, y, z). Shape is (..., 4).
Reference:
https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/transforms/rotation_conv... | quat_from_matrix | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def _axis_angle_rotation(axis: Literal["X", "Y", "Z"], angle: torch.Tensor) -> torch.Tensor:
"""Return the rotation matrices for one of the rotations about an axis of which Euler angles describe,
for each value of the angle given.
Args:
axis: Axis label "X", "Y", or "Z".
angle: Euler angles... | Return the rotation matrices for one of the rotations about an axis of which Euler angles describe,
for each value of the angle given.
Args:
axis: Axis label "X", "Y", or "Z".
angle: Euler angles in radians of any shape.
Returns:
Rotation matrices. Shape is (..., 3, 3).
Refere... | _axis_angle_rotation | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def matrix_from_euler(euler_angles: torch.Tensor, convention: str) -> torch.Tensor:
"""
Convert rotations given as Euler angles in radians to rotation matrices.
Args:
euler_angles: Euler angles in radians. Shape is (..., 3).
convention: Convention string of three uppercase letters from {"X"... |
Convert rotations given as Euler angles in radians to rotation matrices.
Args:
euler_angles: Euler angles in radians. Shape is (..., 3).
convention: Convention string of three uppercase letters from {"X", "Y", and "Z"}.
For example, "XYZ" means that the rotations should be applied ... | matrix_from_euler | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def euler_xyz_from_quat(quat: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Convert rotations given as quaternions to Euler angles in radians.
Note:
The euler angles are assumed in XYZ convention.
Args:
quat: The quaternion orientation in (w, x, y, z). Shape is (N, 4... | Convert rotations given as quaternions to Euler angles in radians.
Note:
The euler angles are assumed in XYZ convention.
Args:
quat: The quaternion orientation in (w, x, y, z). Shape is (N, 4).
Returns:
A tuple containing roll-pitch-yaw. Each element is a tensor of shape (N,).
... | euler_xyz_from_quat | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def quat_mul(q1: torch.Tensor, q2: torch.Tensor) -> torch.Tensor:
"""Multiply two quaternions together.
Args:
q1: The first quaternion in (w, x, y, z). Shape is (..., 4).
q2: The second quaternion in (w, x, y, z). Shape is (..., 4).
Returns:
The product of the two quaternions in (w... | Multiply two quaternions together.
Args:
q1: The first quaternion in (w, x, y, z). Shape is (..., 4).
q2: The second quaternion in (w, x, y, z). Shape is (..., 4).
Returns:
The product of the two quaternions in (w, x, y, z). Shape is (..., 4).
Raises:
ValueError: Input sha... | quat_mul | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
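The Hamilton product for the (w, x, y, z) layout documented above can be sketched as follows (shape validation from the docstring's ValueError clause is omitted for brevity):

```python
import torch


def quat_mul(q1, q2):
    # Hamilton product of two quaternions in (w, x, y, z).
    w1, x1, y1, z1 = q1.unbind(-1)
    w2, x2, y2, z2 = q2.unbind(-1)
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2
    z = w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2
    return torch.stack([w, x, y, z], dim=-1)
```

A quick sanity check is the basis relation i * j = k.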
def quat_box_minus(q1: torch.Tensor, q2: torch.Tensor) -> torch.Tensor:
"""The box-minus operator (quaternion difference) between two quaternions.
Args:
q1: The first quaternion in (w, x, y, z). Shape is (N, 4).
q2: The second quaternion in (w, x, y, z). Shape is (N, 4).
Returns:
T... | The box-minus operator (quaternion difference) between two quaternions.
Args:
q1: The first quaternion in (w, x, y, z). Shape is (N, 4).
q2: The second quaternion in (w, x, y, z). Shape is (N, 4).
Returns:
The difference between the two quaternions. Shape is (N, 3).
| quat_box_minus | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def yaw_quat(quat: torch.Tensor) -> torch.Tensor:
"""Extract the yaw component of a quaternion.
Args:
quat: The orientation in (w, x, y, z). Shape is (..., 4)
Returns:
A quaternion with only yaw component.
"""
shape = quat.shape
quat_yaw = quat.clone().view(-1, 4)
qw = quat... | Extract the yaw component of a quaternion.
Args:
quat: The orientation in (w, x, y, z). Shape is (..., 4)
Returns:
A quaternion with only yaw component.
| yaw_quat | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
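One common way to extract the yaw component the docstring describes is to recover the heading angle from the quaternion and rebuild a pure-yaw quaternion; this sketch assumes that approach rather than the repo's exact body:

```python
import torch


def yaw_quat(quat):
    # Recover yaw from (w, x, y, z), then build (cos(yaw/2), 0, 0, sin(yaw/2)).
    qw, qx, qy, qz = quat.unbind(-1)
    yaw = torch.atan2(2.0 * (qw * qz + qx * qy), 1.0 - 2.0 * (qy * qy + qz * qz))
    out = torch.zeros_like(quat)
    out[..., 0] = torch.cos(yaw * 0.5)
    out[..., 3] = torch.sin(yaw * 0.5)
    return out
```

A quaternion that is already pure yaw should pass through unchanged.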
def quat_apply(quat: torch.Tensor, vec: torch.Tensor) -> torch.Tensor:
"""Apply a quaternion rotation to a vector.
Args:
quat: The quaternion in (w, x, y, z). Shape is (..., 4).
vec: The vector in (x, y, z). Shape is (..., 3).
Returns:
The rotated vector in (x, y, z). Shape is (...... | Apply a quaternion rotation to a vector.
Args:
quat: The quaternion in (w, x, y, z). Shape is (..., 4).
vec: The vector in (x, y, z). Shape is (..., 3).
Returns:
The rotated vector in (x, y, z). Shape is (..., 3).
| quat_apply | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def quat_apply_yaw(quat: torch.Tensor, vec: torch.Tensor) -> torch.Tensor:
"""Rotate a vector only around the yaw-direction.
Args:
quat: The orientation in (w, x, y, z). Shape is (N, 4).
vec: The vector in (x, y, z). Shape is (N, 3).
Returns:
The rotated vector in (x, y, z). Shape ... | Rotate a vector only around the yaw-direction.
Args:
quat: The orientation in (w, x, y, z). Shape is (N, 4).
vec: The vector in (x, y, z). Shape is (N, 3).
Returns:
The rotated vector in (x, y, z). Shape is (N, 3).
| quat_apply_yaw | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def quat_rotate(q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
"""Rotate a vector by a quaternion along the last dimension of q and v.
Args:
q: The quaternion in (w, x, y, z). Shape is (..., 4).
v: The vector in (x, y, z). Shape is (..., 3).
Returns:
The rotated vector in (x, y... | Rotate a vector by a quaternion along the last dimension of q and v.
Args:
q: The quaternion in (w, x, y, z). Shape is (..., 4).
v: The vector in (x, y, z). Shape is (..., 3).
Returns:
The rotated vector in (x, y, z). Shape is (..., 3).
| quat_rotate | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
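Rotation of a vector by a unit quaternion has a compact cross-product form, v' = v + 2w(u x v) + 2u x (u x v) with u the vector part; a sketch using that identity:

```python
import torch


def quat_rotate(q, v):
    # q in (w, x, y, z), v in (x, y, z); broadcasts over leading dims.
    w = q[..., 0:1]
    u = q[..., 1:]
    uv = torch.cross(u, v, dim=-1)
    return v + 2.0 * (w * uv + torch.cross(u, uv, dim=-1))
```

Rotating the x-axis by 90 degrees about z should give the y-axis.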
def quat_rotate_inverse(q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
"""Rotate a vector by the inverse of a quaternion along the last dimension of q and v.
Args:
q: The quaternion in (w, x, y, z). Shape is (..., 4).
v: The vector in (x, y, z). Shape is (..., 3).
Returns:
The ... | Rotate a vector by the inverse of a quaternion along the last dimension of q and v.
Args:
q: The quaternion in (w, x, y, z). Shape is (..., 4).
v: The vector in (x, y, z). Shape is (..., 3).
Returns:
The rotated vector in (x, y, z). Shape is (..., 3).
| quat_rotate_inverse | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
def quat_from_angle_axis(angle: torch.Tensor, axis: torch.Tensor) -> torch.Tensor:
"""Convert rotations given as angle-axis to quaternions.
Args:
angle: The angle turned anti-clockwise in radians around the vector's direction. Shape is (N,).
axis: The axis of rotation. Shape is (N, 3).
Ret... | Convert rotations given as angle-axis to quaternions.
Args:
angle: The angle turned anti-clockwise in radians around the vector's direction. Shape is (N,).
axis: The axis of rotation. Shape is (N, 3).
Returns:
The quaternion in (w, x, y, z). Shape is (N, 4).
| quat_from_angle_axis | python | NVlabs/HOVER | neural_wbc/core/neural_wbc/core/math_utils.py | https://github.com/NVlabs/HOVER/blob/master/neural_wbc/core/neural_wbc/core/math_utils.py | Apache-2.0 |
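The angle-axis conversion above follows directly from the half-angle definition; a sketch that normalizes the axis defensively (whether the repo normalizes internally is an assumption):

```python
import torch


def quat_from_angle_axis(angle, axis):
    # angle: (N,) radians; axis: (N, 3) -> quaternion (N, 4) in (w, x, y, z).
    axis = axis / axis.norm(dim=-1, keepdim=True)
    half = (angle * 0.5).unsqueeze(-1)
    return torch.cat([torch.cos(half), torch.sin(half) * axis], dim=-1)
```

A rotation of pi about z should yield approximately (0, 0, 0, 1).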