id | prompt | docstring |
|---|---|---|
14,331 | import numpy as np
import trimesh
from trimesh.proximity import closest_point
from .mesh_eval import compute_similarity_transform
The provided code snippet includes necessary dependencies for implementing the `keypoint_accel_error` function. Write a Python function `def keypoint_accel_error(gt, pred, mask=None)` to solve the following problem:
Computes acceleration error: Note that for each frame that is not visible, three entries in the acceleration error should be zero'd out. Args: gt (Nx14x3). pred (Nx14x3). mask (N). Returns: error_accel (N-2).
Here is the function:
def keypoint_accel_error(gt, pred, mask=None):
"""Computes acceleration error:
Note that for each frame that is not visible, three entries in the
acceleration error should be zero'd out.
Args:
gt (Nx14x3).
pred (Nx14x3).
mask (N).
Returns:
error_accel (N-2).
"""
# (N-2)x14x3
accel_gt = gt[:-2] - 2 * gt[1:-1] + gt[2:]
accel_pred = pred[:-2] - 2 * pred[1:-1] + pred[2:]
normed = np.linalg.norm(accel_pred - accel_gt, axis=2)
if mask is None:
new_vis = np.ones(len(normed), dtype=bool)
else:
invis = np.logical_not(mask)
invis1 = np.roll(invis, -1)
invis2 = np.roll(invis, -2)
new_invis = np.logical_or(invis, np.logical_or(invis1, invis2))[:-2]
new_vis = np.logical_not(new_invis)
return np.mean(normed[new_vis], axis=1) | Computes acceleration error: Note that for each frame that is not visible, three entries in the acceleration error should be zero'd out. Args: gt (Nx14x3). pred (Nx14x3). mask (N). Returns: error_accel (N-2). |
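The core operation above is the discrete second difference `gt[:-2] - 2 * gt[1:-1] + gt[2:]`. As a quick sanity check (synthetic data, not from the source), positions sampled from a quadratic trajectory have a constant second difference:

```python
import numpy as np

# x(t) = t^2 sampled at integer steps; its discrete acceleration is constant 2.
t = np.arange(6, dtype=float)
x = t ** 2
accel = x[:-2] - 2 * x[1:-1] + x[2:]
print(accel)  # → [2. 2. 2. 2.]
```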
14,332 | import numpy as np
import trimesh
from trimesh.proximity import closest_point
from .mesh_eval import compute_similarity_transform
def compute_similarity_transform(source_points,
target_points,
return_tform=False):
"""Computes a similarity transform (sR, t) that best maps a set of 3D points
source_points (N x 3) onto a set of 3D points target_points, where R is a
3x3 rotation matrix, t a 3x1 translation and s a scale, and returns the
transformed 3D points source_points_hat (N x 3), i.e. solves the orthogonal
Procrustes problem.
Notes:
Points number: N
Args:
source_points (np.ndarray([N, 3])): Source point set.
target_points (np.ndarray([N, 3])): Target point set.
return_tform (bool): Whether to return the transform.
Returns:
source_points_hat (np.ndarray([N, 3])): Transformed source point set.
transform (dict): Returned if return_tform is True, with keys
'rotation': R, 'scale': s, 'translation': t.
"""
assert target_points.shape[0] == source_points.shape[0]
assert target_points.shape[1] == 3 and source_points.shape[1] == 3
source_points = source_points.T
target_points = target_points.T
# 1. Remove mean.
mu1 = source_points.mean(axis=1, keepdims=True)
mu2 = target_points.mean(axis=1, keepdims=True)
X1 = source_points - mu1
X2 = target_points - mu2
# 2. Compute variance of X1 used for scale.
var1 = np.sum(X1**2)
# 3. The outer product of X1 and X2.
K = X1.dot(X2.T)
# 4. Solution that Maximizes trace(R'K) is R=U*V', where U, V are
# singular vectors of K.
U, _, Vh = np.linalg.svd(K)
V = Vh.T
# Construct Z that fixes the orientation of R to get det(R)=1.
Z = np.eye(U.shape[0])
Z[-1, -1] *= np.sign(np.linalg.det(U.dot(V.T)))
# Construct R.
R = V.dot(Z.dot(U.T))
# 5. Recover scale.
scale = np.trace(R.dot(K)) / var1
# 6. Recover translation.
t = mu2 - scale * (R.dot(mu1))
# 7. Transform the source points:
source_points_hat = scale * R.dot(source_points) + t
source_points_hat = source_points_hat.T
if return_tform:
return source_points_hat, {
'rotation': R,
'scale': scale,
'translation': t
}
return source_points_hat
The provided code snippet includes necessary dependencies for implementing the `vertice_pve` function. Write a Python function `def vertice_pve(pred_verts, target_verts, alignment='none')` to solve the following problem:
Computes per vertex error (PVE). Args: verts_gt (N x verts_num x 3). verts_pred (N x verts_num x 3). alignment (str, optional): method to align the prediction with the groundtruth. Supported options are: - ``'none'``: no alignment will be applied - ``'scale'``: align in the least-square sense in scale - ``'procrustes'``: align in the least-square sense in scale, rotation and translation. Returns: error_verts.
Here is the function:
def vertice_pve(pred_verts, target_verts, alignment='none'):
"""Computes per vertex error (PVE).
Args:
pred_verts (N x verts_num x 3).
target_verts (N x verts_num x 3).
alignment (str, optional): method to align the prediction with the
groundtruth. Supported options are:
- ``'none'``: no alignment will be applied
- ``'scale'``: align in the least-square sense in scale
- ``'procrustes'``: align in the least-square sense in scale,
rotation and translation.
Returns:
error_verts.
"""
assert len(pred_verts) == len(target_verts)
if alignment == 'none':
pass
elif alignment == 'procrustes':
pred_verts = np.stack([
compute_similarity_transform(pred_i, gt_i)
for pred_i, gt_i in zip(pred_verts, target_verts)
])
elif alignment == 'scale':
pred_dot_pred = np.einsum('nkc,nkc->n', pred_verts, pred_verts)
pred_dot_gt = np.einsum('nkc,nkc->n', pred_verts, target_verts)
scale_factor = pred_dot_gt / pred_dot_pred
pred_verts = pred_verts * scale_factor[:, None, None]
else:
raise ValueError(f'Invalid value for alignment: {alignment}')
error = np.linalg.norm(pred_verts - target_verts, ord=2, axis=-1).mean()
return error | Computes per vertex error (PVE). Args: verts_gt (N x verts_num x 3). verts_pred (N x verts_num x 3). alignment (str, optional): method to align the prediction with the groundtruth. Supported options are: - ``'none'``: no alignment will be applied - ``'scale'``: align in the least-square sense in scale - ``'procrustes'``: align in the least-square sense in scale, rotation and translation. Returns: error_verts. |
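The `'scale'` branch computes, per frame, the least-squares scale s = ⟨pred, gt⟩ / ⟨pred, pred⟩. A small synthetic check (shapes assumed, data made up) that a globally mis-scaled prediction aligns back to zero error:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.standard_normal((4, 10, 3))
pred = 2.0 * target  # prediction is the ground truth, doubled

# Per-frame least-squares scale, as in the 'scale' alignment branch.
pred_dot_pred = np.einsum('nkc,nkc->n', pred, pred)
pred_dot_gt = np.einsum('nkc,nkc->n', pred, target)
scale_factor = pred_dot_gt / pred_dot_pred  # 0.5 for every frame
aligned = pred * scale_factor[:, None, None]
error = np.linalg.norm(aligned - target, ord=2, axis=-1).mean()
print(error)  # → 0.0
```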
14,333 | import numpy as np
import trimesh
from trimesh.proximity import closest_point
from .mesh_eval import compute_similarity_transform
def compute_similarity_transform(source_points,
target_points,
return_tform=False):
"""Computes a similarity transform (sR, t) that best maps a set of 3D points
source_points (N x 3) onto a set of 3D points target_points, where R is a
3x3 rotation matrix, t a 3x1 translation and s a scale, and returns the
transformed 3D points source_points_hat (N x 3), i.e. solves the orthogonal
Procrustes problem.
Notes:
Points number: N
Args:
source_points (np.ndarray([N, 3])): Source point set.
target_points (np.ndarray([N, 3])): Target point set.
return_tform (bool): Whether to return the transform.
Returns:
source_points_hat (np.ndarray([N, 3])): Transformed source point set.
transform (dict): Returned if return_tform is True, with keys
'rotation': R, 'scale': s, 'translation': t.
"""
assert target_points.shape[0] == source_points.shape[0]
assert target_points.shape[1] == 3 and source_points.shape[1] == 3
source_points = source_points.T
target_points = target_points.T
# 1. Remove mean.
mu1 = source_points.mean(axis=1, keepdims=True)
mu2 = target_points.mean(axis=1, keepdims=True)
X1 = source_points - mu1
X2 = target_points - mu2
# 2. Compute variance of X1 used for scale.
var1 = np.sum(X1**2)
# 3. The outer product of X1 and X2.
K = X1.dot(X2.T)
# 4. Solution that Maximizes trace(R'K) is R=U*V', where U, V are
# singular vectors of K.
U, _, Vh = np.linalg.svd(K)
V = Vh.T
# Construct Z that fixes the orientation of R to get det(R)=1.
Z = np.eye(U.shape[0])
Z[-1, -1] *= np.sign(np.linalg.det(U.dot(V.T)))
# Construct R.
R = V.dot(Z.dot(U.T))
# 5. Recover scale.
scale = np.trace(R.dot(K)) / var1
# 6. Recover translation.
t = mu2 - scale * (R.dot(mu1))
# 7. Transform the source points:
source_points_hat = scale * R.dot(source_points) + t
source_points_hat = source_points_hat.T
if return_tform:
return source_points_hat, {
'rotation': R,
'scale': scale,
'translation': t
}
return source_points_hat
The provided code snippet includes necessary dependencies for implementing the `keypoint_3d_pck` function. Write a Python function `def keypoint_3d_pck(pred, gt, mask, alignment='none', threshold=150.)` to solve the following problem:
Calculate the Percentage of Correct Keypoints (3DPCK) w. or w/o rigid alignment. Paper ref: `Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision' 3DV'2017. <https://arxiv.org/pdf/1611.09813>`__ . Note: - batch_size: N - num_keypoints: K - keypoint_dims: C Args: pred (np.ndarray[N, K, C]): Predicted keypoint location. gt (np.ndarray[N, K, C]): Groundtruth keypoint location. mask (np.ndarray[N, K]): Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation. alignment (str, optional): method to align the prediction with the groundtruth. Supported options are: - ``'none'``: no alignment will be applied - ``'scale'``: align in the least-square sense in scale - ``'procrustes'``: align in the least-square sense in scale, rotation and translation. threshold: If L2 distance between the prediction and the groundtruth is less than threshold, the predicted result is considered as correct. Default: 150 (mm). Returns: pck: percentage of correct keypoints.
Here is the function:
def keypoint_3d_pck(pred, gt, mask, alignment='none', threshold=150.):
"""Calculate the Percentage of Correct Keypoints (3DPCK) w. or w/o rigid
alignment.
Paper ref: `Monocular 3D Human Pose Estimation In The Wild Using Improved
CNN Supervision' 3DV'2017. <https://arxiv.org/pdf/1611.09813>`__ .
Note:
- batch_size: N
- num_keypoints: K
- keypoint_dims: C
Args:
pred (np.ndarray[N, K, C]): Predicted keypoint location.
gt (np.ndarray[N, K, C]): Groundtruth keypoint location.
mask (np.ndarray[N, K]): Visibility of the target. False for invisible
joints, and True for visible. Invisible joints will be ignored for
accuracy calculation.
alignment (str, optional): method to align the prediction with the
groundtruth. Supported options are:
- ``'none'``: no alignment will be applied
- ``'scale'``: align in the least-square sense in scale
- ``'procrustes'``: align in the least-square sense in scale,
rotation and translation.
threshold: If the L2 distance between the prediction and the groundtruth
is less than threshold, the predicted result is considered correct.
Default: 150 (mm).
Returns:
pck: percentage of correct keypoints.
"""
assert mask.any()
if alignment == 'none':
pass
elif alignment == 'procrustes':
pred = np.stack([
compute_similarity_transform(pred_i, gt_i)
for pred_i, gt_i in zip(pred, gt)
])
elif alignment == 'scale':
pred_dot_pred = np.einsum('nkc,nkc->n', pred, pred)
pred_dot_gt = np.einsum('nkc,nkc->n', pred, gt)
scale_factor = pred_dot_gt / pred_dot_pred
pred = pred * scale_factor[:, None, None]
else:
raise ValueError(f'Invalid value for alignment: {alignment}')
error = np.linalg.norm(pred - gt, ord=2, axis=-1)
pck = (error < threshold).astype(np.float32)[mask].mean() * 100
return pck | Calculate the Percentage of Correct Keypoints (3DPCK) w. or w/o rigid alignment. Paper ref: `Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision' 3DV'2017. <https://arxiv.org/pdf/1611.09813>`__ . Note: - batch_size: N - num_keypoints: K - keypoint_dims: C Args: pred (np.ndarray[N, K, C]): Predicted keypoint location. gt (np.ndarray[N, K, C]): Groundtruth keypoint location. mask (np.ndarray[N, K]): Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation. alignment (str, optional): method to align the prediction with the groundtruth. Supported options are: - ``'none'``: no alignment will be applied - ``'scale'``: align in the least-square sense in scale - ``'procrustes'``: align in the least-square sense in scale, rotation and translation. threshold: If L2 distance between the prediction and the groundtruth is less then threshold, the predicted result is considered as correct. Default: 150 (mm). Returns: pck: percentage of correct keypoints. |
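Skipping the alignment branches, the PCK itself reduces to thresholding per-joint L2 errors and averaging over visible joints only. A toy example with made-up keypoints:

```python
import numpy as np

gt = np.zeros((1, 4, 3))
pred = np.zeros((1, 4, 3))
pred[0, 0, 0] = 100.   # 100 mm off: counts as correct (< 150)
pred[0, 1, 0] = 200.   # 200 mm off: counts as incorrect
pred[0, 2, 0] = 999.   # masked out below, so ignored entirely
mask = np.array([[True, True, False, True]])

error = np.linalg.norm(pred - gt, ord=2, axis=-1)
pck = (error < 150.).astype(np.float32)[mask].mean() * 100
print(pck)  # 2 of the 3 visible joints are correct → ~66.67
```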
14,334 | import numpy as np
import trimesh
from trimesh.proximity import closest_point
from .mesh_eval import compute_similarity_transform
def compute_similarity_transform(source_points,
target_points,
return_tform=False):
"""Computes a similarity transform (sR, t) that best maps a set of 3D points
source_points (N x 3) onto a set of 3D points target_points, where R is a
3x3 rotation matrix, t a 3x1 translation and s a scale, and returns the
transformed 3D points source_points_hat (N x 3), i.e. solves the orthogonal
Procrustes problem.
Notes:
Points number: N
Args:
source_points (np.ndarray([N, 3])): Source point set.
target_points (np.ndarray([N, 3])): Target point set.
return_tform (bool): Whether to return the transform.
Returns:
source_points_hat (np.ndarray([N, 3])): Transformed source point set.
transform (dict): Returned if return_tform is True, with keys
'rotation': R, 'scale': s, 'translation': t.
"""
assert target_points.shape[0] == source_points.shape[0]
assert target_points.shape[1] == 3 and source_points.shape[1] == 3
source_points = source_points.T
target_points = target_points.T
# 1. Remove mean.
mu1 = source_points.mean(axis=1, keepdims=True)
mu2 = target_points.mean(axis=1, keepdims=True)
X1 = source_points - mu1
X2 = target_points - mu2
# 2. Compute variance of X1 used for scale.
var1 = np.sum(X1**2)
# 3. The outer product of X1 and X2.
K = X1.dot(X2.T)
# 4. Solution that Maximizes trace(R'K) is R=U*V', where U, V are
# singular vectors of K.
U, _, Vh = np.linalg.svd(K)
V = Vh.T
# Construct Z that fixes the orientation of R to get det(R)=1.
Z = np.eye(U.shape[0])
Z[-1, -1] *= np.sign(np.linalg.det(U.dot(V.T)))
# Construct R.
R = V.dot(Z.dot(U.T))
# 5. Recover scale.
scale = np.trace(R.dot(K)) / var1
# 6. Recover translation.
t = mu2 - scale * (R.dot(mu1))
# 7. Transform the source points:
source_points_hat = scale * R.dot(source_points) + t
source_points_hat = source_points_hat.T
if return_tform:
return source_points_hat, {
'rotation': R,
'scale': scale,
'translation': t
}
return source_points_hat
The provided code snippet includes necessary dependencies for implementing the `keypoint_3d_auc` function. Write a Python function `def keypoint_3d_auc(pred, gt, mask, alignment='none')` to solve the following problem:
Calculate the Area Under the Curve (3DAUC) computed for a range of 3DPCK thresholds. Paper ref: `Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision' 3DV'2017. <https://arxiv.org/pdf/1611.09813>`__ . This implementation is derived from mpii_compute_3d_pck.m, which is provided as part of the MPI-INF-3DHP test data release. Note: batch_size: N num_keypoints: K keypoint_dims: C Args: pred (np.ndarray[N, K, C]): Predicted keypoint location. gt (np.ndarray[N, K, C]): Groundtruth keypoint location. mask (np.ndarray[N, K]): Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation. alignment (str, optional): method to align the prediction with the groundtruth. Supported options are: - ``'none'``: no alignment will be applied - ``'scale'``: align in the least-square sense in scale - ``'procrustes'``: align in the least-square sense in scale, rotation and translation. Returns: auc: AUC computed for a range of 3DPCK thresholds.
Here is the function:
def keypoint_3d_auc(pred, gt, mask, alignment='none'):
"""Calculate the Area Under the Curve (3DAUC) computed for a range of 3DPCK
thresholds.
Paper ref: `Monocular 3D Human Pose Estimation In The Wild Using Improved
CNN Supervision' 3DV'2017. <https://arxiv.org/pdf/1611.09813>`__ .
This implementation is derived from mpii_compute_3d_pck.m, which is
provided as part of the MPI-INF-3DHP test data release.
Note:
batch_size: N
num_keypoints: K
keypoint_dims: C
Args:
pred (np.ndarray[N, K, C]): Predicted keypoint location.
gt (np.ndarray[N, K, C]): Groundtruth keypoint location.
mask (np.ndarray[N, K]): Visibility of the target. False for invisible
joints, and True for visible. Invisible joints will be ignored for
accuracy calculation.
alignment (str, optional): method to align the prediction with the
groundtruth. Supported options are:
- ``'none'``: no alignment will be applied
- ``'scale'``: align in the least-square sense in scale
- ``'procrustes'``: align in the least-square sense in scale,
rotation and translation.
Returns:
auc: AUC computed for a range of 3DPCK thresholds.
"""
assert mask.any()
if alignment == 'none':
pass
elif alignment == 'procrustes':
pred = np.stack([
compute_similarity_transform(pred_i, gt_i)
for pred_i, gt_i in zip(pred, gt)
])
elif alignment == 'scale':
pred_dot_pred = np.einsum('nkc,nkc->n', pred, pred)
pred_dot_gt = np.einsum('nkc,nkc->n', pred, gt)
scale_factor = pred_dot_gt / pred_dot_pred
pred = pred * scale_factor[:, None, None]
else:
raise ValueError(f'Invalid value for alignment: {alignment}')
error = np.linalg.norm(pred - gt, ord=2, axis=-1)
thresholds = np.linspace(0., 150, 31)
pck_values = np.zeros(len(thresholds))
for i in range(len(thresholds)):
pck_values[i] = (error < thresholds[i]).astype(np.float32)[mask].mean()
auc = pck_values.mean() * 100
return auc | Calculate the Area Under the Curve (3DAUC) computed for a range of 3DPCK thresholds. Paper ref: `Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision' 3DV'2017. <https://arxiv.org/pdf/1611.09813>`__ . This implementation is derived from mpii_compute_3d_pck.m, which is provided as part of the MPI-INF-3DHP test data release. Note: batch_size: N num_keypoints: K keypoint_dims: C Args: pred (np.ndarray[N, K, C]): Predicted keypoint location. gt (np.ndarray[N, K, C]): Groundtruth keypoint location. mask (np.ndarray[N, K]): Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation. alignment (str, optional): method to align the prediction with the groundtruth. Supported options are: - ``'none'``: no alignment will be applied - ``'scale'``: align in the least-square sense in scale - ``'procrustes'``: align in the least-square sense in scale, rotation and translation. Returns: auc: AUC computed for a range of 3DPCK thresholds. |
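The AUC is simply the mean PCK over 31 evenly spaced thresholds from 0 to 150 mm. For a single visible joint with a constant 75 mm error (synthetic value), only the 15 thresholds strictly above 75 count it as correct:

```python
import numpy as np

error = np.array([[75.0]])             # one frame, one joint, 75 mm off
mask = np.array([[True]])
thresholds = np.linspace(0., 150, 31)  # 0, 5, 10, ..., 150
pck_values = np.array([(error < t).astype(np.float32)[mask].mean()
                       for t in thresholds])
auc = pck_values.mean() * 100
print(auc)  # 15 of 31 thresholds pass → ~48.39
```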
14,335 | import numpy as np
import trimesh
from trimesh.proximity import closest_point
from .mesh_eval import compute_similarity_transform
def compute_similarity_transform(source_points,
target_points,
return_tform=False):
"""Computes a similarity transform (sR, t) that best maps a set of 3D points
source_points (N x 3) onto a set of 3D points target_points, where R is a
3x3 rotation matrix, t a 3x1 translation and s a scale, and returns the
transformed 3D points source_points_hat (N x 3), i.e. solves the orthogonal
Procrustes problem.
Notes:
Points number: N
Args:
source_points (np.ndarray([N, 3])): Source point set.
target_points (np.ndarray([N, 3])): Target point set.
return_tform (bool): Whether to return the transform.
Returns:
source_points_hat (np.ndarray([N, 3])): Transformed source point set.
transform (dict): Returned if return_tform is True, with keys
'rotation': R, 'scale': s, 'translation': t.
"""
assert target_points.shape[0] == source_points.shape[0]
assert target_points.shape[1] == 3 and source_points.shape[1] == 3
source_points = source_points.T
target_points = target_points.T
# 1. Remove mean.
mu1 = source_points.mean(axis=1, keepdims=True)
mu2 = target_points.mean(axis=1, keepdims=True)
X1 = source_points - mu1
X2 = target_points - mu2
# 2. Compute variance of X1 used for scale.
var1 = np.sum(X1**2)
# 3. The outer product of X1 and X2.
K = X1.dot(X2.T)
# 4. Solution that Maximizes trace(R'K) is R=U*V', where U, V are
# singular vectors of K.
U, _, Vh = np.linalg.svd(K)
V = Vh.T
# Construct Z that fixes the orientation of R to get det(R)=1.
Z = np.eye(U.shape[0])
Z[-1, -1] *= np.sign(np.linalg.det(U.dot(V.T)))
# Construct R.
R = V.dot(Z.dot(U.T))
# 5. Recover scale.
scale = np.trace(R.dot(K)) / var1
# 6. Recover translation.
t = mu2 - scale * (R.dot(mu1))
# 7. Transform the source points:
source_points_hat = scale * R.dot(source_points) + t
source_points_hat = source_points_hat.T
if return_tform:
return source_points_hat, {
'rotation': R,
'scale': scale,
'translation': t
}
return source_points_hat
The provided code snippet includes necessary dependencies for implementing the `fg_vertices_to_mesh_distance` function. Write a Python function `def fg_vertices_to_mesh_distance(groundtruth_vertices, grundtruth_landmark_points, predicted_mesh_vertices, predicted_mesh_faces, predicted_mesh_landmark_points)` to solve the following problem:
This script computes the reconstruction error between an input mesh and a ground truth mesh. Args: groundtruth_vertices (np.ndarray[N,3]): Ground truth vertices. grundtruth_landmark_points (np.ndarray[7,3]): Ground truth annotations. predicted_mesh_vertices (np.ndarray[M,3]): Predicted vertices. predicted_mesh_faces (np.ndarray[K,3]): Vertex indices composing the predicted mesh. predicted_mesh_landmark_points (np.ndarray[7,3]): Predicted points. Return: distance: Mean point to mesh distance. The grundtruth_landmark_points and predicted_mesh_landmark_points have to contain points in the following order: (1) right eye outer corner, (2) right eye inner corner, (3) left eye inner corner, (4) left eye outer corner, (5) nose bottom, (6) right mouth corner, (7) left mouth corner.
Here is the function:
def fg_vertices_to_mesh_distance(groundtruth_vertices,
grundtruth_landmark_points,
predicted_mesh_vertices, predicted_mesh_faces,
predicted_mesh_landmark_points):
"""Computes the reconstruction error between a predicted mesh and a
ground truth mesh.
Args:
groundtruth_vertices (np.ndarray[N,3]): Ground truth vertices.
grundtruth_landmark_points (np.ndarray[7,3]): Ground truth annotations.
predicted_mesh_vertices (np.ndarray[M,3]): Predicted vertices.
predicted_mesh_faces (np.ndarray[K,3]): Vertex indices
composing the predicted mesh.
predicted_mesh_landmark_points (np.ndarray[7,3]): Predicted points.
Return:
distance: Mean point to mesh distance.
The grundtruth_landmark_points and predicted_mesh_landmark_points have to
contain points in the following order:
(1) right eye outer corner, (2) right eye inner corner,
(3) left eye inner corner, (4) left eye outer corner,
(5) nose bottom, (6) right mouth corner, (7) left mouth corner.
"""
# Do procrustes based on the 7 points:
_, tform = compute_similarity_transform(
predicted_mesh_landmark_points,
grundtruth_landmark_points,
return_tform=True)
# Use tform to transform all vertices.
predicted_mesh_vertices_aligned = (
tform['scale'] * tform['rotation'].dot(predicted_mesh_vertices.T) +
tform['translation']).T
# Compute the mask: A circular area around the center of the face.
nose_bottom = np.array(grundtruth_landmark_points[4])
nose_bridge = (np.array(grundtruth_landmark_points[1]) + np.array(
grundtruth_landmark_points[2])) / 2 # between the inner eye corners
face_centre = nose_bottom + 0.3 * (nose_bridge - nose_bottom)
# Compute the radius for the face mask:
outer_eye_dist = np.linalg.norm(
np.array(grundtruth_landmark_points[0]) -
np.array(grundtruth_landmark_points[3]))
nose_dist = np.linalg.norm(nose_bridge - nose_bottom)
mask_radius = 1.2 * (outer_eye_dist + nose_dist) / 2
# Find all the vertex indices in mask area.
vertex_indices_mask = []
# vertex indices in the source mesh (the ground truth scan)
points_on_groundtruth_scan_to_measure_from = []
for vertex_idx, vertex in enumerate(groundtruth_vertices):
dist = np.linalg.norm(
vertex - face_centre
) # We use Euclidean distance for the mask area for now.
if dist <= mask_radius:
vertex_indices_mask.append(vertex_idx)
points_on_groundtruth_scan_to_measure_from.append(vertex)
assert len(vertex_indices_mask) == len(
points_on_groundtruth_scan_to_measure_from)
# Calculate the distance to the surface of the predicted mesh.
predicted_mesh = trimesh.Trimesh(predicted_mesh_vertices_aligned,
predicted_mesh_faces)
_, distance, _ = closest_point(predicted_mesh,
points_on_groundtruth_scan_to_measure_from)
return distance.mean() | This script computes the reconstruction error between an input mesh and a ground truth mesh. Args: groundtruth_vertices (np.ndarray[N,3]): Ground truth vertices. grundtruth_landmark_points (np.ndarray[7,3]): Ground truth annotations. predicted_mesh_vertices (np.ndarray[M,3]): Predicted vertices. predicted_mesh_faces (np.ndarray[K,3]): Vertex indices composing the predicted mesh. predicted_mesh_landmark_points (np.ndarray[7,3]): Predicted points. Return: distance: Mean point to mesh distance. The grundtruth_landmark_points and predicted_mesh_landmark_points have to contain points in the following order: (1) right eye outer corner, (2) right eye inner corner, (3) left eye inner corner, (4) left eye outer corner, (5) nose bottom, (6) right mouth corner, (7) left mouth corner. |
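The trimesh-dependent distance query aside, the landmark-driven mask geometry can be checked with numpy alone. The landmark coordinates below are invented, but follow the documented order (eye corners, nose bottom, mouth corners):

```python
import numpy as np

lms = np.array([
    [-40., 30., 0.],   # (1) right eye outer corner
    [-15., 30., 0.],   # (2) right eye inner corner
    [15., 30., 0.],    # (3) left eye inner corner
    [40., 30., 0.],    # (4) left eye outer corner
    [0., -10., 0.],    # (5) nose bottom
    [-20., -35., 0.],  # (6) right mouth corner
    [20., -35., 0.],   # (7) left mouth corner
])
nose_bottom = lms[4]
nose_bridge = (lms[1] + lms[2]) / 2            # between the inner eye corners
face_centre = nose_bottom + 0.3 * (nose_bridge - nose_bottom)
outer_eye_dist = np.linalg.norm(lms[0] - lms[3])
nose_dist = np.linalg.norm(nose_bridge - nose_bottom)
mask_radius = 1.2 * (outer_eye_dist + nose_dist) / 2
print(face_centre, mask_radius)  # centre ≈ (0, 2, 0), radius 72
```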
14,336 | from mmcv.utils import Registry
POST_PROCESSING = Registry('post_processing')
The provided code snippet includes necessary dependencies for implementing the `build_post_processing` function. Write a Python function `def build_post_processing(cfg)` to solve the following problem:
Build post processing function.
Here is the function:
def build_post_processing(cfg):
"""Build post processing function."""
return POST_PROCESSING.build(cfg) | Build post processing function. |
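`POST_PROCESSING.build(cfg)` follows mmcv's config-driven registry pattern: `cfg['type']` names a registered class and the remaining keys become constructor arguments. A minimal pure-Python mimic (a sketch, not mmcv's actual implementation; `Gaus1dFilter` is a stand-in class name):

```python
class Registry:
    """Toy stand-in for mmcv.utils.Registry."""

    def __init__(self, name):
        self.name = name
        self._module_dict = {}

    def register_module(self, cls):
        self._module_dict[cls.__name__] = cls
        return cls

    def build(self, cfg):
        cfg = dict(cfg)  # don't mutate the caller's config
        return self._module_dict[cfg.pop('type')](**cfg)


POST_PROCESSING = Registry('post_processing')


@POST_PROCESSING.register_module
class Gaus1dFilter:
    def __init__(self, window_size=11):
        self.window_size = window_size


smoother = POST_PROCESSING.build(dict(type='Gaus1dFilter', window_size=5))
print(type(smoother).__name__, smoother.window_size)  # → Gaus1dFilter 5
```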
14,337 | import math
import warnings
import numpy as np
import torch
from ..builder import POST_PROCESSING
def smoothing_factor(t_e, cutoff):
r = 2 * math.pi * cutoff * t_e
return r / (r + 1) | null |
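This is the per-sample smoothing coefficient of a one-euro-style low-pass filter: the factor approaches 1 (trust the new sample more) as the cutoff frequency or the elapsed time grows. A quick numeric check at an assumed 30 fps:

```python
import math

def smoothing_factor(t_e, cutoff):
    r = 2 * math.pi * cutoff * t_e
    return r / (r + 1)

t_e = 1 / 30  # elapsed time between frames at 30 fps
low = smoothing_factor(t_e, 1.0)    # low cutoff → heavy smoothing
high = smoothing_factor(t_e, 10.0)  # high cutoff → light smoothing
print(round(low, 3), round(high, 3))  # → 0.173 0.677
```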
14,338 | import math
import warnings
import numpy as np
import torch
from ..builder import POST_PROCESSING
def exponential_smoothing(a, x, x_prev):
return a * x + (1 - a) * x_prev | null |
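`exponential_smoothing` is a plain linear interpolation between the new sample `x` and the previous estimate `x_prev`: `a = 1` passes the sample through, `a = 0` freezes the estimate. For example:

```python
def exponential_smoothing(a, x, x_prev):
    return a * x + (1 - a) * x_prev

print(exponential_smoothing(0.25, 4.0, 0.0))  # → 1.0
print(exponential_smoothing(1.0, 7.0, 3.0))   # → 7.0
```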
14,339 | import copy
import math
from typing import Optional
import numpy as np
import torch
import torch.nn.functional as F
from mmcv.runner import load_checkpoint
from torch import Tensor, nn
from mmhuman3d.utils.transforms import (
aa_to_rotmat,
rot6d_to_rotmat,
rotmat_to_aa,
rotmat_to_rot6d,
)
from ..builder import POST_PROCESSING
def _get_clones(module, N):
return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) | null |
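The deep copy matters because `nn.ModuleList([module] * N)` would make all N layers share one set of parameters. The same distinction shown with plain Python objects (no torch needed):

```python
import copy

template = {'weight': [1.0, 2.0]}

shared = [template] * 3                               # aliases, not copies
clones = [copy.deepcopy(template) for _ in range(3)]  # independent copies

shared[0]['weight'][0] = 99.0
print(shared[1]['weight'][0])  # → 99.0 (mutation leaks across aliases)
print(clones[1]['weight'][0])  # → 1.0  (each clone owns its state)
```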
14,340 | import copy
import math
from typing import Optional
import numpy as np
import torch
import torch.nn.functional as F
from mmcv.runner import load_checkpoint
from torch import Tensor, nn
from mmhuman3d.utils.transforms import (
aa_to_rotmat,
rot6d_to_rotmat,
rotmat_to_aa,
rotmat_to_rot6d,
)
from ..builder import POST_PROCESSING
class DeciWatchTransformer(nn.Module):
def __init__(self,
input_nc,
encoder_hidden_dim=512,
decoder_hidden_dim=512,
nhead=8,
num_encoder_layers=6,
num_decoder_layers=6,
dim_feedforward=2048,
dropout=0.1,
activation='relu',
pre_norm=False):
super(DeciWatchTransformer, self).__init__()
self.joints_dim = input_nc
# bring in semantic (5 frames) temporal information into tokens
self.decoder_embed = nn.Conv1d(
self.joints_dim,
decoder_hidden_dim,
kernel_size=5,
stride=1,
padding=2)
self.encoder_embed = nn.Linear(self.joints_dim, encoder_hidden_dim)
encoder_layer = DeciWatchTransformerEncoderLayer(
encoder_hidden_dim, nhead, dim_feedforward, dropout, activation,
pre_norm)
encoder_norm = nn.LayerNorm(encoder_hidden_dim) if pre_norm else None
self.encoder = DeciWatchTransformerEncoder(encoder_layer,
num_encoder_layers,
encoder_norm)
decoder_layer = DeciWatchTransformerDecoderLayer(
decoder_hidden_dim, nhead, dim_feedforward, dropout, activation,
pre_norm)
decoder_norm = nn.LayerNorm(decoder_hidden_dim)
self.decoder = DeciWatchTransformerDecoder(decoder_layer,
num_decoder_layers,
decoder_norm)
self.decoder_joints_embed = nn.Linear(decoder_hidden_dim,
self.joints_dim)
self.encoder_joints_embed = nn.Linear(encoder_hidden_dim,
self.joints_dim)
# reset parameters
for p in self.parameters():
if p.dim() > 1:
nn.init.xavier_uniform_(p)
self.encoder_hidden_dim = encoder_hidden_dim
self.decoder_hidden_dim = decoder_hidden_dim
self.nhead = nhead
def _generate_square_subsequent_mask(self, sz):
mask = torch.triu(torch.ones(sz, sz), 1)
mask = mask.masked_fill(mask == 1, float('-inf'))
return mask
def interpolate_embedding(self, input, rate):
tmp = input.clone()
seq_len = input.shape[0]
indice = torch.arange(seq_len, dtype=int).to(self.device)
chunk = torch.div(indice, rate).type(torch.long)
remain = indice % rate
prev = tmp[chunk * rate]
next = torch.cat([tmp[(chunk[:-1] + 1) * rate], tmp[-1].unsqueeze(0)],
dim=0)
interpolate = (prev / rate * (rate - remain.view(-1, 1, 1))) + (
next / rate * remain.view(-1, 1, 1))
return interpolate
def forward(self, input_seq, encoder_mask, encoder_pos_embed,
input_seq_interp, decoder_mask, decoder_pos_embed,
sample_interval, device):
self.device = device
# flatten NxCxL to LxNxC
bs, c, _ = input_seq.shape
input_seq = input_seq.permute(2, 0, 1)
input_seq_interp = input_seq_interp.permute(2, 0, 1)
input = input_seq.clone()
# mask on all sequences:
trans_src = self.encoder_embed(input_seq)
mem = self.encode(trans_src, encoder_mask, encoder_pos_embed)
reco = self.encoder_joints_embed(mem) + input
interp = self.interpolate_embedding(reco, sample_interval)
center = interp.clone()
trans_tgt = self.decoder_embed(interp.permute(1, 2,
0)).permute(2, 0, 1)
output = self.decode(mem, encoder_mask, encoder_pos_embed, trans_tgt,
decoder_mask, decoder_pos_embed)
joints = self.decoder_joints_embed(output) + center
return joints, reco
def encode(self, src, src_mask, pos_embed):
mask = torch.eye(src.shape[0]).bool().to(src.device)
memory = self.encoder(
src, mask=mask, src_key_padding_mask=src_mask, pos=pos_embed)
return memory
def decode(self, memory, memory_mask, memory_pos, tgt, tgt_mask, tgt_pos):
hs = self.decoder(
tgt,
memory,
tgt_key_padding_mask=tgt_mask,
memory_key_padding_mask=memory_mask,
pos=memory_pos,
query_pos=tgt_pos)
return hs
def build_model(args):
return DeciWatchTransformer(
input_nc=args['input_dim'],
decoder_hidden_dim=args['decoder_hidden_dim'],
encoder_hidden_dim=args['encoder_hidden_dim'],
dropout=args['dropout'],
nhead=args['nheads'],
dim_feedforward=args['dim_feedforward'],
num_encoder_layers=args['enc_layers'],
num_decoder_layers=args['dec_layers'],
activation=args['activation'],
pre_norm=args['pre_norm'],
) | null |
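The `interpolate_embedding` method above linearly interpolates every frame from the two nearest frames whose index is a multiple of `sample_interval`. A 1-D numpy simplification of the same index arithmetic (the original operates on `(seq, batch, dim)` torch tensors):

```python
import numpy as np

def interpolate(seq, rate):
    """Linear interpolation from frames at indices divisible by rate."""
    idx = np.arange(len(seq))
    chunk = idx // rate   # index of the preceding sampled frame
    remain = idx % rate   # offset within the current interval
    prev = seq[chunk * rate]
    nxt = np.concatenate([seq[(chunk[:-1] + 1) * rate], seq[-1:]])
    return prev * (rate - remain) / rate + nxt * remain / rate

# Only indices 0, 3 and 6 hold trusted values; the rest are placeholders.
seq = np.array([0., -1., -1., 3., -1., -1., 6.])
print(interpolate(seq, 3))  # → [0. 1. 2. 3. 4. 5. 6.]
```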
14,341 | import copy
import math
from typing import Optional
import numpy as np
import torch
import torch.nn.functional as F
from mmcv.runner import load_checkpoint
from torch import Tensor, nn
from mmhuman3d.utils.transforms import (
aa_to_rotmat,
rot6d_to_rotmat,
rotmat_to_aa,
rotmat_to_rot6d,
)
from ..builder import POST_PROCESSING
The provided code snippet includes necessary dependencies for implementing the `_get_activation_fn` function. Write a Python function `def _get_activation_fn(activation)` to solve the following problem:
Return an activation function given a string.
Here is the function:
def _get_activation_fn(activation):
"""Return an activation function given a string."""
if activation == 'relu':
return F.relu
if activation == 'gelu':
return F.gelu
if activation == 'glu':
return F.glu
if activation == 'leaky_relu':
return F.leaky_relu
    raise RuntimeError(f'activation should be relu/gelu/glu/leaky_relu, not {activation}.') | Return an activation function given a string. |
14,342 | import warnings
from typing import Iterable, List, Optional, Tuple, Union
import numpy as np
import torch
from mmhuman3d.utils.transforms import ee_to_rotmat, rotmat_to_ee
The provided code snippet includes necessary dependencies for implementing the `convert_K_4x4_to_3x3` function. Write a Python function `def convert_K_4x4_to_3x3( K: Union[torch.Tensor, np.ndarray], is_perspective: bool = True) -> Union[torch.Tensor, np.ndarray]` to solve the following problem:
Convert opencv 4x4 intrinsic matrix to 3x3. Args: K (Union[torch.Tensor, np.ndarray]): Input 4x4 intrinsic matrix, left mm defined. for perspective: [[fx, 0, px, 0], [0, fy, py, 0], [0, 0, 0, 1], [0, 0, 1, 0]] for orthographics: [[fx, 0, 0, px], [0, fy, 0, py], [0, 0, 1, 0], [0, 0, 0, 1]] is_perspective (bool, optional): whether is perspective projection. Defaults to True. Raises: TypeError: type K should be `Tensor` or `array`. ValueError: Shape is not (batch, 3, 3) or (3, 3). Returns: Union[torch.Tensor, np.ndarray]: Output 3x3 intrinsic matrix, left mm defined. [[fx, 0, px], [0, fy, py], [0, 0, 1]]
Here is the function:
def convert_K_4x4_to_3x3(
K: Union[torch.Tensor, np.ndarray],
is_perspective: bool = True) -> Union[torch.Tensor, np.ndarray]:
"""Convert opencv 4x4 intrinsic matrix to 3x3.
Args:
K (Union[torch.Tensor, np.ndarray]):
Input 4x4 intrinsic matrix, left mm defined.
for perspective:
[[fx, 0, px, 0],
[0, fy, py, 0],
[0, 0, 0, 1],
[0, 0, 1, 0]]
for orthographics:
[[fx, 0, 0, px],
[0, fy, 0, py],
[0, 0, 1, 0],
[0, 0, 0, 1]]
is_perspective (bool, optional): whether is perspective projection.
Defaults to True.
Raises:
TypeError: type K should be `Tensor` or `array`.
ValueError: Shape is not (batch, 3, 3) or (3, 3).
Returns:
Union[torch.Tensor, np.ndarray]:
Output 3x3 intrinsic matrix, left mm defined.
[[fx, 0, px],
[0, fy, py],
[0, 0, 1]]
"""
if isinstance(K, torch.Tensor):
K = K.clone()
elif isinstance(K, np.ndarray):
K = K.copy()
else:
raise TypeError('K should be `torch.Tensor` or `numpy.ndarray`, '
f'type(K): {type(K)}.')
if K.shape[-2:] == (3, 3):
warnings.warn(
f'shape of K is already {K.shape}, skipping conversion.')
return K
use_numpy = True if isinstance(K, np.ndarray) else False
if K.ndim == 2:
K = K[None].reshape(-1, 4, 4)
elif K.ndim == 3:
K = K.reshape(-1, 4, 4)
else:
raise ValueError(f'Wrong ndim of K: {K.ndim}')
if use_numpy:
K_ = np.eye(3, 3)[None].repeat(K.shape[0], 0)
else:
K_ = torch.eye(3, 3)[None].repeat(K.shape[0], 1, 1)
if is_perspective:
K_[:, :2, :3] = K[:, :2, :3]
else:
K_[:, :2, :2] = K[:, :2, :2]
K_[:, :2, 2:3] = K[:, :2, 3:4]
return K_ | Convert opencv 4x4 intrinsic matrix to 3x3. Args: K (Union[torch.Tensor, np.ndarray]): Input 4x4 intrinsic matrix, left mm defined. for perspective: [[fx, 0, px, 0], [0, fy, py, 0], [0, 0, 0, 1], [0, 0, 1, 0]] for orthographics: [[fx, 0, 0, px], [0, fy, 0, py], [0, 0, 1, 0], [0, 0, 0, 1]] is_perspective (bool, optional): whether is perspective projection. Defaults to True. Raises: TypeError: type K should be `Tensor` or `array`. ValueError: Shape is not (batch, 3, 3) or (3, 3). Returns: Union[torch.Tensor, np.ndarray]: Output 3x3 intrinsic matrix, left mm defined. [[fx, 0, px], [0, fy, py], [0, 0, 1]] |
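The perspective branch above just copies the top 2x3 block of the 4x4 matrix into a 3x3 identity; a small numpy-only check (the intrinsic values below are made up for illustration):

```python
import numpy as np

# Hypothetical opencv-style 4x4 perspective intrinsic
fx, fy, px, py = 500.0, 500.0, 320.0, 240.0
K44 = np.array([[fx, 0, px, 0],
                [0, fy, py, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]])

# Mirrors the perspective branch: copy the top 2x3 block into an identity
K33 = np.eye(3)
K33[:2, :3] = K44[:2, :3]
print(K33[0])  # [500.   0. 320.]
```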
14,343 | from typing import Tuple, Union
import numpy as np
import torch
from .convert_convention import convert_camera_matrix
def convert_camera_matrix(
K: Optional[Union[torch.Tensor, np.ndarray]] = None,
R: Optional[Union[torch.Tensor, np.ndarray]] = None,
T: Optional[Union[torch.Tensor, np.ndarray]] = None,
is_perspective: bool = True,
convention_src: str = 'opencv',
convention_dst: str = 'pytorch3d',
in_ndc_src: bool = True,
in_ndc_dst: bool = True,
resolution_src: Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]] = None,
resolution_dst: Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]] = None,
camera_conventions: dict = CAMERA_CONVENTIONS,
) -> Tuple[Union[torch.Tensor, np.ndarray], Union[torch.Tensor, np.ndarray],
Union[torch.Tensor, np.ndarray]]:
"""Convert the intrinsic matrix K and extrinsic matrix [R|T] from source
convention to destination convention.
Args:
K (Union[torch.Tensor, np.ndarray]): Intrinsic matrix,
shape should be (batch_size, 4, 4) or (batch_size, 3, 3).
Will be ignored if None.
R (Optional[Union[torch.Tensor, np.ndarray]], optional):
Extrinsic rotation matrix. Shape should be (batch_size, 3, 3).
Will be identity if None.
Defaults to None.
T (Optional[Union[torch.Tensor, np.ndarray]], optional):
Extrinsic translation matrix. Shape should be (batch_size, 3).
Will be zeros if None.
Defaults to None.
is_perspective (bool, optional): whether is perspective projection.
Defaults to True.
_____________________________________________________________________
# Camera dependent args
convention_src (str, optional): convention of source camera,
convention_dst (str, optional): convention of destination camera,
We define the convention of cameras by the order of right, front and
up.
E.g., the first one is pyrender and its convention should be
'+x+z+y'. '+' could be ignored.
The second one is opencv and its convention should be '+x-z-y'.
The third one is pytorch3d and its convention should be '-xzy'.
opengl(pyrender)        opencv              pytorch3d
       y                   z                     y
       |                  /                      |
       |                 /                       |
       |_______x        /________x     x________ |
      /                |                        /
     /                 |                       /
  z /                y |                    z /
in_ndc_src (bool, optional): Whether the source camera is defined
in ndc.
Defaults to True.
in_ndc_dst (bool, optional): Whether the destination camera is defined
in ndc.
Defaults to True.
In camera_conventions, we define these args as:
1) `left_mm_ex` means the extrinsic matrix [`R`|`T`] is left
matrix multiplication defined.
2) `left_mm_in` means the intrinsic matrix `K` is left
matrix multiplication defined.
3) `view_to_world` means the extrinsic matrix [`R`|`T`] is defined
as view to world.
resolution_src (Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]], optional):
Source camera image size of (height, width).
Required if defined in screen.
Will be square if int.
Shape should be (2,) if `array` or `tensor`.
Defaults to None.
resolution_dst (Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]], optional):
Destination camera image size of (height, width).
Required if defined in screen.
Will be square if int.
Shape should be (2,) if `array` or `tensor`.
Defaults to None.
camera_conventions: (dict, optional): `dict` containing
pre-defined camera convention information.
Defaults to CAMERA_CONVENTIONS.
Raises:
TypeError: K, R, T should all be `torch.Tensor` or `np.ndarray`.
Returns:
Tuple[Union[torch.Tensor, None], Union[torch.Tensor, None],
Union[torch.Tensor, None]]:
Converted K, R, T matrix of `tensor`.
"""
convention_dst = convention_dst.lower()
convention_src = convention_src.lower()
assert convention_dst in CAMERA_CONVENTIONS
assert convention_src in CAMERA_CONVENTIONS
left_mm_ex_src = CAMERA_CONVENTIONS[convention_src].get(
'left_mm_extrinsic', True)
view_to_world_src = CAMERA_CONVENTIONS[convention_src].get(
'view_to_world', False)
left_mm_in_src = CAMERA_CONVENTIONS[convention_src].get(
'left_mm_intrinsic', False)
left_mm_ex_dst = CAMERA_CONVENTIONS[convention_dst].get(
'left_mm_extrinsic', True)
view_to_world_dst = CAMERA_CONVENTIONS[convention_dst].get(
'view_to_world', False)
left_mm_in_dst = CAMERA_CONVENTIONS[convention_dst].get(
'left_mm_intrinsic', False)
sign_src, axis_src = enc_camera_convention(convention_src,
camera_conventions)
sign_dst, axis_dst = enc_camera_convention(convention_dst,
camera_conventions)
sign = torch.Tensor(sign_dst) / torch.Tensor(sign_src)
type_ = []
for x in [K, R, T]:
if x is not None:
type_.append(type(x))
if len(type_) > 0:
if not all(x == type_[0] for x in type_):
raise TypeError('Input type should be the same.')
use_numpy = False
if np.ndarray in type_:
use_numpy = True
# convert raw matrix to tensor
if isinstance(K, np.ndarray):
new_K = torch.Tensor(K)
elif K is None:
new_K = None
elif isinstance(K, torch.Tensor):
new_K = K.clone()
else:
raise TypeError(
f'K should be `torch.Tensor` or `numpy.ndarray`, type(K): '
f'{type(K)}')
if isinstance(R, np.ndarray):
new_R = torch.Tensor(R).view(-1, 3, 3)
elif R is None:
new_R = torch.eye(3, 3)[None]
elif isinstance(R, torch.Tensor):
new_R = R.clone().view(-1, 3, 3)
else:
raise TypeError(
f'R should be `torch.Tensor` or `numpy.ndarray`, type(R): '
f'{type(R)}')
if isinstance(T, np.ndarray):
new_T = torch.Tensor(T).view(-1, 3)
elif T is None:
new_T = torch.zeros(1, 3)
elif isinstance(T, torch.Tensor):
new_T = T.clone().view(-1, 3)
else:
raise TypeError(
f'T should be `torch.Tensor` or `numpy.ndarray`, type(T): '
f'{type(T)}')
if axis_dst != axis_src:
new_R = ee_to_rotmat(
rotmat_to_ee(new_R, convention=axis_src), convention=axis_dst)
# convert extrinsic to world_to_view
if view_to_world_src is True:
new_R, new_T = convert_world_view(new_R, new_T)
# right mm to left mm
if (not left_mm_ex_src) and left_mm_ex_dst:
new_R *= sign.to(new_R.device)
new_R = new_R.permute(0, 2, 1)
# left mm to right mm
elif left_mm_ex_src and (not left_mm_ex_dst):
new_R = new_R.permute(0, 2, 1)
new_R *= sign.to(new_R.device)
# right_mm to right mm
elif (not left_mm_ex_dst) and (not left_mm_ex_src):
new_R *= sign.to(new_R.device)
# left mm to left mm
elif left_mm_ex_src and left_mm_ex_dst:
new_R *= sign.view(3, 1).to(new_R.device)
new_T *= sign.to(new_T.device)
# convert extrinsic to as definition
if view_to_world_dst is True:
new_R, new_T = convert_world_view(new_R, new_T)
# in ndc or in screen
if in_ndc_dst is False and in_ndc_src is True:
assert resolution_dst is not None, \
'dst in screen, should specify resolution_dst.'
if in_ndc_src is False and in_ndc_dst is True:
assert resolution_src is not None, \
'src in screen, should specify resolution_src.'
if resolution_dst is None:
resolution_dst = 2.0
if resolution_src is None:
resolution_src = 2.0
if new_K is not None:
if left_mm_in_src is False and left_mm_in_dst is True:
new_K = new_K.permute(0, 2, 1)
if new_K.shape[-2:] == (3, 3):
new_K = convert_K_3x3_to_4x4(new_K, is_perspective)
# src in ndc, dst in screen
if in_ndc_src is True and (in_ndc_dst is False):
new_K = convert_ndc_to_screen(
K=new_K,
is_perspective=is_perspective,
sign=sign.to(new_K.device),
resolution=resolution_dst)
# src in screen, dst in ndc
elif in_ndc_src is False and in_ndc_dst is True:
new_K = convert_screen_to_ndc(
K=new_K,
is_perspective=is_perspective,
sign=sign.to(new_K.device),
resolution=resolution_src)
# src in ndc, dst in ndc
elif in_ndc_src is True and in_ndc_dst is True:
if is_perspective:
new_K[:, 0, 2] *= sign[0].to(new_K.device)
new_K[:, 1, 2] *= sign[1].to(new_K.device)
else:
new_K[:, 0, 3] *= sign[0].to(new_K.device)
new_K[:, 1, 3] *= sign[1].to(new_K.device)
# src in screen, dst in screen
else:
pass
if left_mm_in_src is True and left_mm_in_dst is False:
new_K = new_K.permute(0, 2, 1)
num_batch = max(new_K.shape[0], new_R.shape[0], new_T.shape[0])
if new_K.shape[0] == 1:
new_K = new_K.repeat(num_batch, 1, 1)
if new_R.shape[0] == 1:
new_R = new_R.repeat(num_batch, 1, 1)
if new_T.shape[0] == 1:
new_T = new_T.repeat(num_batch, 1)
if use_numpy:
if isinstance(new_K, torch.Tensor):
new_K = new_K.cpu().numpy()
if isinstance(new_R, torch.Tensor):
new_R = new_R.cpu().numpy()
if isinstance(new_T, torch.Tensor):
new_T = new_T.cpu().numpy()
return new_K, new_R, new_T
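The `convert_world_view` helper used above is not shown in this snippet; a common definition simply inverts the rigid transform, which can be sketched in numpy as follows (an assumption about the helper, not its verbatim implementation):

```python
import numpy as np

def invert_world_view(R, T):
    """Invert a rigid transform: world->view [R|T] becomes view->world."""
    R_inv = R.T        # the inverse of a rotation matrix is its transpose
    T_inv = -R.T @ T   # translation is rotated back and negated
    return R_inv, T_inv

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])  # 90-degree rotation about z
T = np.array([1.0, 2.0, 3.0])
R_inv, T_inv = invert_world_view(R, T)
# Composing the transform with its inverse recovers the identity
print(np.allclose(R @ R_inv, np.eye(3)), np.allclose(R @ T_inv + T, 0))  # True True
```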
The provided code snippet includes necessary dependencies for implementing the `convert_weakperspective_to_perspective` function. Write a Python function `def convert_weakperspective_to_perspective( K: Union[torch.Tensor, np.ndarray], zmean: Union[torch.Tensor, np.ndarray, int, float], resolution: Union[int, Tuple[int, int], torch.Tensor, np.ndarray] = None, in_ndc: bool = False, convention: str = 'opencv') -> Union[torch.Tensor, np.ndarray]` to solve the following problem:
Convert a weak-perspective intrinsic matrix to a perspective intrinsic matrix. Args: K (Union[torch.Tensor, np.ndarray]): input weak-perspective intrinsic matrix, shape should be (batch, 4, 4) or (batch, 3, 3). zmean (Union[torch.Tensor, np.ndarray, int, float]): zmean for object. shape should be (batch, ) or singleton number. resolution (Union[int, Tuple[int, int], torch.Tensor, np.ndarray], optional): (height, width) of image. Defaults to None. in_ndc (bool, optional): whether defined in ndc. Defaults to False. convention (str, optional): camera convention. Defaults to 'opencv'. Returns: Union[torch.Tensor, np.ndarray]: output perspective intrinsic matrix, shape is (batch, 4, 4)
Here is the function:
def convert_weakperspective_to_perspective(
K: Union[torch.Tensor, np.ndarray],
zmean: Union[torch.Tensor, np.ndarray, int, float],
resolution: Union[int, Tuple[int, int], torch.Tensor,
np.ndarray] = None,
in_ndc: bool = False,
convention: str = 'opencv') -> Union[torch.Tensor, np.ndarray]:
"""Convert perspective to weakperspective intrinsic matrix.
Args:
K (Union[torch.Tensor, np.ndarray]): input intrinsic matrix, shape
should be (batch, 4, 4) or (batch, 3, 3).
zmean (Union[torch.Tensor, np.ndarray, int, float]): zmean for object.
shape should be (batch, ) or singleton number.
resolution (Union[int, Tuple[int, int], torch.Tensor, np.ndarray],
optional): (height, width) of image. Defaults to None.
in_ndc (bool, optional): whether defined in ndc. Defaults to False.
convention (str, optional): camera convention. Defaults to 'opencv'.
Returns:
Union[torch.Tensor, np.ndarray]: output perspective intrinsic matrix,
shape is (batch, 4, 4)
"""
if K.ndim == 2:
K = K[None]
if isinstance(zmean, np.ndarray):
zmean = torch.Tensor(zmean)
elif isinstance(zmean, (float, int)):
zmean = torch.Tensor([zmean])
zmean = zmean.view(-1)
_N = max(K.shape[0], zmean.shape[0])
s1 = K[:, 0, 0]
s2 = K[:, 1, 1]
c1 = K[:, 0, 3]
c2 = K[:, 1, 3]
new_K = torch.zeros(_N, 4, 4)
new_K[:, 0, 0] = zmean * s1
new_K[:, 1, 1] = zmean * s2
new_K[:, 0, 2] = c1
new_K[:, 1, 2] = c2
new_K[:, 2, 3] = 1
new_K[:, 3, 2] = 1
new_K, _, _ = convert_camera_matrix(
K=new_K,
convention_src=convention,
convention_dst='pytorch3d',
is_perspective=True,
in_ndc_src=in_ndc,
in_ndc_dst=True,
resolution_src=resolution)
return new_K | Convert a weak-perspective intrinsic matrix to a perspective intrinsic matrix. Args: K (Union[torch.Tensor, np.ndarray]): input weak-perspective intrinsic matrix, shape should be (batch, 4, 4) or (batch, 3, 3). zmean (Union[torch.Tensor, np.ndarray, int, float]): zmean for object. shape should be (batch, ) or singleton number. resolution (Union[int, Tuple[int, int], torch.Tensor, np.ndarray], optional): (height, width) of image. Defaults to None. in_ndc (bool, optional): whether defined in ndc. Defaults to False. convention (str, optional): camera convention. Defaults to 'opencv'. Returns: Union[torch.Tensor, np.ndarray]: output perspective intrinsic matrix, shape is (batch, 4, 4)
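The assembly of the perspective matrix above (focal terms scaled by `zmean`, principal-point terms copied from the orthographic layout, projective rows set) can be checked with plain numpy; the scales, centers and `zmean` below are made-up values:

```python
import numpy as np

s1, s2, c1, c2, zmean = 2.0, 2.0, 0.1, -0.1, 5.0

# Mirrors the construction above: focal terms are scaled by the mean depth,
# principal-point terms are copied, and rows (2,3)/(3,2) mark perspective
K = np.zeros((4, 4))
K[0, 0] = zmean * s1
K[1, 1] = zmean * s2
K[0, 2] = c1
K[1, 2] = c2
K[2, 3] = 1.0
K[3, 2] = 1.0
print(K[0, 0], K[1, 1], K[3, 2])  # 10.0 10.0 1.0
```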
14,344 | import numpy as np
from mmhuman3d.utils.transforms import aa_to_rotmat, rotmat_to_aa
def aa_to_rotmat(
axis_angle: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""
Convert axis_angle to rotation matrices.
Args:
axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3).
"""
if axis_angle.shape[-1] != 3:
raise ValueError(
f'Invalid input axis angles shape f{axis_angle.shape}.')
t = Compose([axis_angle_to_matrix])
return t(axis_angle)
def rotmat_to_aa(
matrix: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation matrixs to axis angles.
Args:
matrix (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3, 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
if matrix.shape[-1] != 3 or matrix.shape[-2] != 3:
raise ValueError(f'Invalid rotation matrix shape f{matrix.shape}.')
t = Compose([matrix_to_quaternion, quaternion_to_axis_angle])
return t(matrix)
The provided code snippet includes necessary dependencies for implementing the `transform_to_camera_frame` function. Write a Python function `def transform_to_camera_frame(global_orient, transl, pelvis, extrinsic)` to solve the following problem:
Transform body model parameters to camera frame. Args: global_orient (numpy.ndarray): shape (3, ). Only global_orient and transl need to be updated in the rigid transformation transl (numpy.ndarray): shape (3, ). pelvis (numpy.ndarray): shape (3, ). 3D joint location of pelvis This is necessary to eliminate the offset from SMPL canonical space origin to pelvis, because the global orient is conducted around the pelvis, not the canonical space origin extrinsic (numpy.ndarray): shape (4, 4). Transformation matrix from world frame to camera frame Returns: (new_global_orient, new_transl) new_global_orient: transformed global orient new_transl: transformed transl
Here is the function:
def transform_to_camera_frame(global_orient, transl, pelvis, extrinsic):
"""Transform body model parameters to camera frame.
Args:
global_orient (numpy.ndarray): shape (3, ). Only global_orient and
transl need to be updated in the rigid transformation
transl (numpy.ndarray): shape (3, ).
pelvis (numpy.ndarray): shape (3, ). 3D joint location of pelvis
This is necessary to eliminate the offset from SMPL
canonical space origin to pelvis, because the global orient
is conducted around the pelvis, not the canonical space origin
extrinsic (numpy.ndarray): shape (4, 4). Transformation matrix
from world frame to camera frame
Returns:
(new_global_orient, new_transl)
new_global_orient: transformed global orient
new_transl: transformed transl
"""
# take out the small offset from smpl origin to pelvis
transl_offset = pelvis - transl
T_p2w = np.eye(4)
T_p2w[:3, 3] = transl_offset
# camera extrinsic: transformation from world frame to camera frame
T_w2c = extrinsic
# smpl transformation: from vertex frame to world frame
T_v2p = np.eye(4)
global_orient_mat = aa_to_rotmat(global_orient)
T_v2p[:3, :3] = global_orient_mat
T_v2p[:3, 3] = transl
# compute combined transformation from vertex to world
T_v2w = T_p2w @ T_v2p
# compute transformation from vertex to camera
T_v2c = T_w2c @ T_v2w
# decompose vertex to camera transformation
# np: new pelvis frame
# T_v2c = T_np2c x T_v2np
T_np2c = T_p2w
T_v2np = np.linalg.inv(T_np2c) @ T_v2c
# decompose into new global orient and new transl
new_global_orient_mat = T_v2np[:3, :3]
new_global_orient = rotmat_to_aa(new_global_orient_mat)
new_transl = T_v2np[:3, 3]
return new_global_orient, new_transl | Transform body model parameters to camera frame. Args: global_orient (numpy.ndarray): shape (3, ). Only global_orient and transl need to be updated in the rigid transformation transl (numpy.ndarray): shape (3, ). pelvis (numpy.ndarray): shape (3, ). 3D joint location of pelvis This is necessary to eliminate the offset from SMPL canonical space origin to pelvis, because the global orient is conducted around the pelvis, not the canonical space origin extrinsic (numpy.ndarray): shape (4, 4). Transformation matrix from world frame to camera frame Returns: (new_global_orient, new_transl) new_global_orient: transformed global orient new_transl: transformed transl
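The transform chain above can be checked numerically with numpy by assuming a zero global orient (so its rotation matrix is the identity) and a pure-translation extrinsic; all values below are made up:

```python
import numpy as np

transl = np.array([0.0, 0.0, 1.0])
pelvis = np.array([0.0, 0.2, 1.0])     # small offset from SMPL origin to pelvis
extrinsic = np.eye(4)
extrinsic[:3, 3] = [0.0, 0.0, 3.0]     # camera frame = world frame shifted 3 in z

# Same chain as the function above, with the global-orient rotation = identity
T_p2w = np.eye(4)
T_p2w[:3, 3] = pelvis - transl
T_v2p = np.eye(4)
T_v2p[:3, 3] = transl
T_v2c = extrinsic @ (T_p2w @ T_v2p)
T_v2np = np.linalg.inv(T_p2w) @ T_v2c
new_transl = T_v2np[:3, 3]
print(new_transl)  # [0. 0. 4.]
```

With identity rotations the pelvis offset cancels and the new translation is simply the old one pushed into the camera frame.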
14,345 | import numpy as np
from mmhuman3d.utils.transforms import aa_to_rotmat, rotmat_to_aa
def aa_to_rotmat(
axis_angle: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""
Convert axis_angle to rotation matrices.
Args:
axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3).
"""
if axis_angle.shape[-1] != 3:
raise ValueError(
f'Invalid input axis angles shape f{axis_angle.shape}.')
t = Compose([axis_angle_to_matrix])
return t(axis_angle)
def rotmat_to_aa(
matrix: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation matrixs to axis angles.
Args:
matrix (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3, 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
if matrix.shape[-1] != 3 or matrix.shape[-2] != 3:
raise ValueError(f'Invalid rotation matrix shape f{matrix.shape}.')
t = Compose([matrix_to_quaternion, quaternion_to_axis_angle])
return t(matrix)
The provided code snippet includes necessary dependencies for implementing the `batch_transform_to_camera_frame` function. Write a Python function `def batch_transform_to_camera_frame(global_orient, transl, pelvis, extrinsic)` to solve the following problem:
Transform body model parameters to camera frame by batch. Args: global_orient (np.ndarray): shape (N, 3). Only global_orient and transl need to be updated in the rigid transformation transl (np.ndarray): shape (N, 3). pelvis (np.ndarray): shape (N, 3). 3D joint location of pelvis This is necessary to eliminate the offset from SMPL canonical space origin to pelvis, because the global orient is conducted around the pelvis, not the canonical space origin extrinsic (np.ndarray): shape (4, 4). Transformation matrix from world frame to camera frame Returns: (new_global_orient, new_transl) new_global_orient: transformed global orient new_transl: transformed transl
Here is the function:
def batch_transform_to_camera_frame(global_orient, transl, pelvis, extrinsic):
"""Transform body model parameters to camera frame by batch.
Args:
global_orient (np.ndarray): shape (N, 3). Only global_orient and
transl need to be updated in the rigid transformation
transl (np.ndarray): shape (N, 3).
pelvis (np.ndarray): shape (N, 3). 3D joint location of pelvis
This is necessary to eliminate the offset from SMPL
canonical space origin to pelvis, because the global orient
is conducted around the pelvis, not the canonical space origin
extrinsic (np.ndarray): shape (4, 4). Transformation matrix
from world frame to camera frame
Returns:
(new_global_orient, new_transl)
new_global_orient: transformed global orient
new_transl: transformed transl
"""
N = len(global_orient)
assert global_orient.shape == (N, 3)
assert transl.shape == (N, 3)
assert pelvis.shape == (N, 3)
# take out the small offset from smpl origin to pelvis
transl_offset = pelvis - transl
T_p2w = np.eye(4).reshape(1, 4, 4).repeat(N, axis=0)
T_p2w[:, :3, 3] = transl_offset
# camera extrinsic: transformation from world frame to camera frame
T_w2c = extrinsic
# smpl transformation: from vertex frame to world frame
T_v2p = np.eye(4).reshape(1, 4, 4).repeat(N, axis=0)
global_orient_mat = aa_to_rotmat(global_orient)
T_v2p[:, :3, :3] = global_orient_mat
T_v2p[:, :3, 3] = transl
# compute combined transformation from vertex to world
T_v2w = T_p2w @ T_v2p
# compute transformation from vertex to camera
T_v2c = T_w2c @ T_v2w
# decompose vertex to camera transformation
# np: new pelvis frame
# T_v2c = T_np2c x T_v2np
T_np2c = T_p2w
T_v2np = np.linalg.inv(T_np2c) @ T_v2c
# decompose into new global orient and new transl
new_global_orient_mat = T_v2np[:, :3, :3]
new_global_orient = rotmat_to_aa(new_global_orient_mat)
new_transl = T_v2np[:, :3, 3]
assert new_global_orient.shape == (N, 3)
assert new_transl.shape == (N, 3)
return new_global_orient, new_transl | Transform body model parameters to camera frame by batch. Args: global_orient (np.ndarray): shape (N, 3). Only global_orient and transl need to be updated in the rigid transformation transl (np.ndarray): shape (N, 3). pelvis (np.ndarray): shape (N, 3). 3D joint location of pelvis This is necessary to eliminate the offset from SMPL canonical space origin to pelvis, because the global orient is conducted around the pelvis, not the canonical space origin extrinsic (np.ndarray): shape (4, 4). Transformation matrix from world frame to camera frame Returns: (new_global_orient, new_transl) new_global_orient: transformed global orient new_transl: transformed transl
14,346 | from typing import Optional
import numpy as np
import torch
from smplx import SMPL as _SMPL
from smplx.lbs import (
batch_rigid_transform,
blend_shapes,
transform_mat,
vertices2joints,
)
from mmhuman3d.core.conventions.keypoints_mapping import (
convert_kps,
get_keypoint_num,
)
from mmhuman3d.core.conventions.segmentation import body_segmentation
from mmhuman3d.models.utils import batch_inverse_kinematics_transform
from mmhuman3d.utils.transforms import quat_to_rotmat
def to_tensor(array, dtype=torch.float32):
if 'torch.tensor' not in str(type(array)):
return torch.tensor(array, dtype=dtype)
return array  # fall through: the input is already a tensor, return it unchanged
14,347 | from typing import Optional
import numpy as np
import torch
from smplx import SMPL as _SMPL
from smplx.lbs import (
batch_rigid_transform,
blend_shapes,
transform_mat,
vertices2joints,
)
from mmhuman3d.core.conventions.keypoints_mapping import (
convert_kps,
get_keypoint_num,
)
from mmhuman3d.core.conventions.segmentation import body_segmentation
from mmhuman3d.models.utils import batch_inverse_kinematics_transform
from mmhuman3d.utils.transforms import quat_to_rotmat
def to_np(array, dtype=np.float32):
if 'scipy.sparse' in str(type(array)):
array = array.todense()
return np.array(array, dtype=dtype) | null |
14,348 | import torch
import torch.nn as nn
from .utils import weighted_loss
The provided code snippet includes necessary dependencies for implementing the `smooth_l1_loss` function. Write a Python function `def smooth_l1_loss(pred, target, beta=1.0)` to solve the following problem:
Smooth L1 loss. Args: pred (torch.Tensor): The prediction. target (torch.Tensor): The learning target of the prediction. beta (float, optional): The threshold in the piecewise function. Defaults to 1.0. Returns: torch.Tensor: Calculated loss
Here is the function:
def smooth_l1_loss(pred, target, beta=1.0):
"""Smooth L1 loss.
Args:
pred (torch.Tensor): The prediction.
target (torch.Tensor): The learning target of the prediction.
beta (float, optional): The threshold in the piecewise function.
Defaults to 1.0.
Returns:
torch.Tensor: Calculated loss
"""
assert beta > 0
assert pred.size() == target.size() and target.numel() > 0
diff = torch.abs(pred - target)
loss = torch.where(diff < beta, 0.5 * diff * diff / beta,
diff - 0.5 * beta)
return loss | Smooth L1 loss. Args: pred (torch.Tensor): The prediction. target (torch.Tensor): The learning target of the prediction. beta (float, optional): The threshold in the piecewise function. Defaults to 1.0. Returns: torch.Tensor: Calculated loss |
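The piecewise form above (quadratic below `beta`, linear above) is easy to verify with a torch-free numpy sketch; the `smooth_l1` helper is hypothetical:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # Quadratic for |diff| < beta, linear otherwise, matching torch.where above
    diff = np.abs(pred - target)
    return np.where(diff < beta, 0.5 * diff * diff / beta, diff - 0.5 * beta)

out = smooth_l1(np.array([0.0, 0.5, 3.0]), np.zeros(3))
# per-element values: 0.0, 0.125, 2.5
print(out)
```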
14,349 | import torch
import torch.nn as nn
from .utils import weighted_loss
The provided code snippet includes necessary dependencies for implementing the `l1_loss` function. Write a Python function `def l1_loss(pred, target)` to solve the following problem:
L1 loss. Args: pred (torch.Tensor): The prediction. target (torch.Tensor): The learning target of the prediction. Returns: torch.Tensor: Calculated loss
Here is the function:
def l1_loss(pred, target):
"""L1 loss.
Args:
pred (torch.Tensor): The prediction.
target (torch.Tensor): The learning target of the prediction.
Returns:
torch.Tensor: Calculated loss
"""
assert pred.size() == target.size() and target.numel() > 0
loss = torch.abs(pred - target)
return loss | L1 loss. Args: pred (torch.Tensor): The prediction. target (torch.Tensor): The learning target of the prediction. Returns: torch.Tensor: Calculated loss |
14,350 | import functools
import torch
import torch.nn.functional as F
def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
"""Apply element-wise weight and reduce loss.
Args:
loss (Tensor): Element-wise loss.
weight (Tensor): Element-wise weights.
reduction (str): Same as built-in losses of PyTorch.
avg_factor (float): Average factor when computing the mean of losses.
Returns:
Tensor: Processed loss values.
"""
# if weight is specified, apply element-wise weight
if weight is not None:
loss = loss * weight
# if avg_factor is not specified, just reduce the loss
if avg_factor is None:
loss = reduce_loss(loss, reduction)
else:
# if reduction is mean, then average the loss by avg_factor
if reduction == 'mean':
loss = loss.sum() / avg_factor
# if reduction is 'none', then do nothing, otherwise raise an error
elif reduction != 'none':
raise ValueError('avg_factor can not be used with reduction="sum"')
return loss
The provided code snippet includes necessary dependencies for implementing the `weighted_loss` function. Write a Python function `def weighted_loss(loss_func)` to solve the following problem:
Create a weighted version of a given loss function. To use this decorator, the loss function must have the signature like `loss_func(pred, target, **kwargs)`. The function only needs to compute element-wise loss without any reduction. This decorator will add weight and reduction arguments to the function. The decorated function will have the signature like `loss_func(pred, target, weight=None, reduction='mean', avg_factor=None, **kwargs)`. :Example: >>> import torch >>> @weighted_loss >>> def l1_loss(pred, target): >>> return (pred - target).abs() >>> pred = torch.Tensor([0, 2, 3]) >>> target = torch.Tensor([1, 1, 1]) >>> weight = torch.Tensor([1, 0, 1]) >>> l1_loss(pred, target) tensor(1.3333) >>> l1_loss(pred, target, weight) tensor(1.) >>> l1_loss(pred, target, reduction='none') tensor([1., 1., 2.]) >>> l1_loss(pred, target, weight, avg_factor=2) tensor(1.5000)
Here is the function:
def weighted_loss(loss_func):
"""Create a weighted version of a given loss function.
To use this decorator, the loss function must have the signature like
`loss_func(pred, target, **kwargs)`. The function only needs to compute
element-wise loss without any reduction. This decorator will add weight
and reduction arguments to the function. The decorated function will have
the signature like `loss_func(pred, target, weight=None, reduction='mean',
avg_factor=None, **kwargs)`.
:Example:
>>> import torch
>>> @weighted_loss
>>> def l1_loss(pred, target):
>>> return (pred - target).abs()
>>> pred = torch.Tensor([0, 2, 3])
>>> target = torch.Tensor([1, 1, 1])
>>> weight = torch.Tensor([1, 0, 1])
>>> l1_loss(pred, target)
tensor(1.3333)
>>> l1_loss(pred, target, weight)
tensor(1.)
>>> l1_loss(pred, target, reduction='none')
tensor([1., 1., 2.])
>>> l1_loss(pred, target, weight, avg_factor=2)
tensor(1.5000)
"""
@functools.wraps(loss_func)
def wrapper(pred,
target,
weight=None,
reduction='mean',
avg_factor=None,
**kwargs):
# get element-wise loss
loss = loss_func(pred, target, **kwargs)
loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
return loss
return wrapper | Create a weighted version of a given loss function. To use this decorator, the loss function must have the signature like `loss_func(pred, target, **kwargs)`. The function only needs to compute element-wise loss without any reduction. This decorator will add weight and reduction arguments to the function. The decorated function will have the signature like `loss_func(pred, target, weight=None, reduction='mean', avg_factor=None, **kwargs)`. :Example: >>> import torch >>> @weighted_loss >>> def l1_loss(pred, target): >>> return (pred - target).abs() >>> pred = torch.Tensor([0, 2, 3]) >>> target = torch.Tensor([1, 1, 1]) >>> weight = torch.Tensor([1, 0, 1]) >>> l1_loss(pred, target) tensor(1.3333) >>> l1_loss(pred, target, weight) tensor(1.) >>> l1_loss(pred, target, reduction='none') tensor([1., 1., 2.]) >>> l1_loss(pred, target, weight, avg_factor=2) tensor(1.5000) |
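The decorator pattern above can be reproduced with numpy instead of torch; the `weighted` and `l1` names below are hypothetical stand-ins, and the expected values match the docstring example:

```python
import functools
import numpy as np

def weighted(loss_func):
    # Sketch of the decorator: wrap an element-wise loss with
    # optional weighting and mean reduction (optionally by avg_factor)
    @functools.wraps(loss_func)
    def wrapper(pred, target, weight=None, avg_factor=None, **kwargs):
        loss = loss_func(pred, target, **kwargs)
        if weight is not None:
            loss = loss * weight
        denom = avg_factor if avg_factor is not None else loss.size
        return loss.sum() / denom
    return wrapper

@weighted
def l1(pred, target):
    return np.abs(pred - target)

pred, target = np.array([0.0, 2.0, 3.0]), np.ones(3)
print(l1(pred, target))                                      # ~1.3333
print(l1(pred, target, weight=np.array([1.0, 0.0, 1.0])))    # 1.0
print(l1(pred, target, weight=np.array([1.0, 0.0, 1.0]), avg_factor=2))  # 1.5
```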
14,351 | import functools
import torch
import torch.nn.functional as F
The provided code snippet includes necessary dependencies for implementing the `convert_to_one_hot` function. Write a Python function `def convert_to_one_hot(targets: torch.Tensor, classes) -> torch.Tensor` to solve the following problem:
This function converts target class indices to one-hot vectors, given the number of classes. Args: targets (Tensor): The ground truth label of the prediction with shape (N, 1) classes (int): the number of classes. Returns: Tensor: Processed loss values.
Here is the function:
def convert_to_one_hot(targets: torch.Tensor, classes) -> torch.Tensor:
"""This function converts target class indices to one-hot vectors, given
the number of classes.
Args:
targets (Tensor): The ground truth label of the prediction
with shape (N, 1)
classes (int): the number of classes.
Returns:
Tensor: Processed loss values.
"""
assert (torch.max(targets).item() <
classes), 'Class Index must be less than number of classes'
one_hot_targets = torch.zeros((targets.shape[0], classes),
dtype=torch.long,
device=targets.device)
one_hot_targets.scatter_(1, targets.long(), 1)
return one_hot_targets | This function converts target class indices to one-hot vectors, given the number of classes. Args: targets (Tensor): The ground truth label of the prediction with shape (N, 1) classes (int): the number of classes. Returns: Tensor: Processed loss values. |
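A numpy sketch of the same scatter-style one-hot conversion (the `one_hot` helper below is hypothetical):

```python
import numpy as np

def one_hot(targets, classes):
    """targets: (N, 1) integer class indices -> (N, classes) one-hot matrix."""
    assert targets.max() < classes, 'Class Index must be less than number of classes'
    out = np.zeros((targets.shape[0], classes), dtype=np.int64)
    out[np.arange(targets.shape[0]), targets[:, 0]] = 1  # scatter ones per row
    return out

print(one_hot(np.array([[0], [2]]), 3))
# rows: [1 0 0] and [0 0 1]
```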
14,352 | import torch
import torch.nn as nn
import torch.nn.functional as F
from .utils import weighted_loss
def gmof(x, sigma):
"""Geman-McClure error function."""
x_squared = x**2
sigma_squared = sigma**2
return (sigma_squared * x_squared) / (sigma_squared + x_squared)
def mse_loss(pred, target):
"""Warpper of mse loss."""
return F.mse_loss(pred, target, reduction='none')
The provided code snippet includes necessary dependencies for implementing the `mse_loss_with_gmof` function. Write a Python function `def mse_loss_with_gmof(pred, target, sigma)` to solve the following problem:
Extended MSE Loss with GMOF.
Here is the function:
def mse_loss_with_gmof(pred, target, sigma):
"""Extended MSE Loss with GMOF."""
loss = F.mse_loss(pred, target, reduction='none')
loss = gmof(loss, sigma)
return loss | Extended MSE Loss with GMOF. |
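The Geman-McClure function is what makes this loss robust: small residuals behave quadratically, while large residuals saturate below `sigma**2`, bounding each outlier's contribution. A minimal check of the helper:

```python
import torch

def gmof(x, sigma):
    # Geman-McClure robust error, identical to the helper above
    x_squared = x ** 2
    sigma_squared = sigma ** 2
    return (sigma_squared * x_squared) / (sigma_squared + x_squared)

x = torch.tensor([0.0, 1.0, 100.0])
out = gmof(x, sigma=1.0)  # quadratic near 0, saturates toward sigma**2 = 1.0
```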
14,353 | from typing import Optional, Union
import torch
import torch.distributed as dist
import torch.nn.functional as F
from mmcv.runner import get_dist_info
from torch.nn.modules.loss import _Loss
from .utils import weighted_loss
The provided code snippet includes necessary dependencies for implementing the `bmc_loss_md` function. Write a Python function `def bmc_loss_md(pred: torch.Tensor, target: torch.Tensor, noise_var: torch.Tensor, all_gather: bool, loss_mse_weight: float, loss_debias_weight: float) -> torch.Tensor` to solve the following problem:
Args: pred (torch.Tensor): The prediction. Shape should be (N, L). target (torch.Tensor): The learning target of the prediction. noise_var (torch.Tensor): Noise var of ground truth distribution. all_gather (bool): Whether gather tensors across all sub-processes. Only used in DDP training scheme. loss_mse_weight (float, optional): The weight of the mse term. loss_debias_weight (float, optional): The weight of the debiased term. Returns: torch.Tensor: The calculated loss
Here is the function:
def bmc_loss_md(pred: torch.Tensor, target: torch.Tensor,
noise_var: torch.Tensor, all_gather: bool,
loss_mse_weight: float,
loss_debias_weight: float) -> torch.Tensor:
"""
Args:
pred (torch.Tensor): The prediction. Shape should be (N, L).
target (torch.Tensor): The learning target of the prediction.
noise_var (torch.Tensor): Noise var of ground truth distribution.
all_gather (bool): Whether gather tensors across all sub-processes.
Only used in DDP training scheme.
loss_mse_weight (float, optional): The weight of the mse term.
loss_debias_weight (float, optional): The weight of the debiased term.
Returns:
torch.Tensor: The calculated loss
"""
N = pred.shape[0]
L = pred.shape[1]
device = pred.device
loss_mse = F.mse_loss(pred, target, reduction='none').sum(-1)
loss_mse = loss_mse / noise_var
if all_gather:
rank, world_size = get_dist_info()
bs, length = target.shape
all_bs = [torch.zeros(1).to(device) for _ in range(world_size)]
dist.all_gather(all_bs, torch.Tensor([bs]).to(device))
all_bs_int = [int(v.item()) for v in all_bs]
max_bs_int = max(all_bs_int)
target_padding = torch.zeros(max_bs_int, length).to(device)
target_padding[:bs] = target
all_tensor = []
for _ in range(world_size):
all_tensor.append(torch.zeros(max_bs_int, length).type_as(target))
dist.all_gather(all_tensor, target_padding)
# remove padding
for i in range(world_size):
all_tensor[i] = all_tensor[i][:all_bs_int[i]]
target = torch.cat(all_tensor, dim=0)
# Debias term
target = target.unsqueeze(0).repeat(N, 1, 1)
pred = pred.unsqueeze(1).expand_as(target)
debias_term = F.mse_loss(pred, target, reduction='none').sum(-1)
debias_term = -0.5 * debias_term / noise_var
loss_debias = torch.logsumexp(debias_term, dim=1).squeeze(-1)
loss = loss_mse * loss_mse_weight + loss_debias * loss_debias_weight
# recover loss scale of mse_loss
loss = loss / L * noise_var.detach()
return loss | Args: pred (torch.Tensor): The prediction. Shape should be (N, L). target (torch.Tensor): The learning target of the prediction. noise_var (torch.Tensor): Noise var of ground truth distribution. all_gather (bool): Whether gather tensors across all sub-processes. Only used in DDP training scheme. loss_mse_weight (float, optional): The weight of the mse term. loss_debias_weight (float, optional): The weight of the debiased term. Returns: torch.Tensor: The calculated loss |
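For intuition, here is a single-process sketch of the same loss: the DDP `all_gather` branch is dropped, so the debias log-sum-exp runs over the local batch only; everything else follows the function above:

```python
import torch
import torch.nn.functional as F

def bmc_loss_md_local(pred, target, noise_var,
                      loss_mse_weight=1.0, loss_debias_weight=1.0):
    """Single-process sketch of bmc_loss_md (all_gather branch omitted)."""
    N, L = pred.shape
    loss_mse = F.mse_loss(pred, target, reduction='none').sum(-1) / noise_var
    # debias term: log-sum-exp over every target in the (local) batch
    target_all = target.unsqueeze(0).repeat(N, 1, 1)    # (N, N, L)
    pred_all = pred.unsqueeze(1).expand_as(target_all)  # (N, N, L)
    debias = F.mse_loss(pred_all, target_all, reduction='none').sum(-1)
    loss_debias = torch.logsumexp(-0.5 * debias / noise_var, dim=1)
    loss = loss_mse * loss_mse_weight + loss_debias * loss_debias_weight
    # recover the scale of a plain MSE loss
    return loss / L * noise_var.detach()

pred = torch.randn(4, 3)
target = torch.randn(4, 3)
loss = bmc_loss_md_local(pred, target, noise_var=torch.tensor(1.0))
```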
14,354 | from mmcv.utils import Registry
from .balanced_mse_loss import BMCLossMD
from .cross_entropy_loss import CrossEntropyLoss
from .gan_loss import GANLoss
from .mse_loss import KeypointMSELoss, MSELoss
from .prior_loss import (
CameraPriorLoss,
JointPriorLoss,
LimbLengthLoss,
MaxMixturePrior,
PoseRegLoss,
ShapePriorLoss,
ShapeThresholdPriorLoss,
SmoothJointLoss,
SmoothPelvisLoss,
SmoothTranslationLoss,
)
from .rotaion_distance_loss import RotationDistance
from .smooth_l1_loss import L1Loss, SmoothL1Loss
LOSSES = Registry('losses')
LOSSES.register_module(name='GANLoss', module=GANLoss)
LOSSES.register_module(name='MSELoss', module=MSELoss)
LOSSES.register_module(name='KeypointMSELoss', module=KeypointMSELoss)
LOSSES.register_module(name='ShapePriorLoss', module=ShapePriorLoss)
LOSSES.register_module(name='PoseRegLoss', module=PoseRegLoss)
LOSSES.register_module(name='LimbLengthLoss', module=LimbLengthLoss)
LOSSES.register_module(name='JointPriorLoss', module=JointPriorLoss)
LOSSES.register_module(name='SmoothJointLoss', module=SmoothJointLoss)
LOSSES.register_module(name='SmoothPelvisLoss', module=SmoothPelvisLoss)
LOSSES.register_module(
name='SmoothTranslationLoss', module=SmoothTranslationLoss)
LOSSES.register_module(
name='ShapeThresholdPriorLoss', module=ShapeThresholdPriorLoss)
LOSSES.register_module(name='CameraPriorLoss', module=CameraPriorLoss)
LOSSES.register_module(name='MaxMixturePrior', module=MaxMixturePrior)
LOSSES.register_module(name='L1Loss', module=L1Loss)
LOSSES.register_module(name='SmoothL1Loss', module=SmoothL1Loss)
LOSSES.register_module(name='CrossEntropyLoss', module=CrossEntropyLoss)
LOSSES.register_module(name='RotationDistance', module=RotationDistance)
LOSSES.register_module(name='BMCLossMD', module=BMCLossMD)
The provided code snippet includes necessary dependencies for implementing the `build_loss` function. Write a Python function `def build_loss(cfg)` to solve the following problem:
Build loss.
Here is the function:
def build_loss(cfg):
"""Build loss."""
if cfg is None:
return None
return LOSSES.build(cfg) | Build loss. |
14,355 | import torch
import torch.nn as nn
import torch.nn.functional as F
from .utils import weight_reduce_loss
def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
"""Apply element-wise weight and reduce loss.
Args:
loss (Tensor): Element-wise loss.
weight (Tensor): Element-wise weights.
reduction (str): Same as built-in losses of PyTorch.
avg_factor (float): Average factor when computing the mean of losses.
Returns:
Tensor: Processed loss values.
"""
# if weight is specified, apply element-wise weight
if weight is not None:
loss = loss * weight
# if avg_factor is not specified, just reduce the loss
if avg_factor is None:
loss = reduce_loss(loss, reduction)
else:
# if reduction is mean, then average the loss by avg_factor
if reduction == 'mean':
loss = loss.sum() / avg_factor
# if reduction is 'none', then do nothing, otherwise raise an error
elif reduction != 'none':
raise ValueError('avg_factor can not be used with reduction="sum"')
return loss
The provided code snippet includes necessary dependencies for implementing the `cross_entropy` function. Write a Python function `def cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None, class_weight=None, ignore_index=-100)` to solve the following problem:
Calculate the CrossEntropy loss. Args: pred (torch.Tensor): The prediction with shape (N, C), C is the number of classes. label (torch.Tensor): The learning label of the prediction. weight (torch.Tensor, optional): Sample-wise loss weight. reduction (str, optional): The method used to reduce the loss. avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. class_weight (list[float], optional): The weight for each class. ignore_index (int | None): The label index to be ignored. If None, it will be set to default value. Default: -100. Returns: torch.Tensor: The calculated loss
Here is the function:
def cross_entropy(pred,
label,
weight=None,
reduction='mean',
avg_factor=None,
class_weight=None,
ignore_index=-100):
"""Calculate the CrossEntropy loss.
Args:
pred (torch.Tensor): The prediction with shape (N, C), C is the number
of classes.
label (torch.Tensor): The learning label of the prediction.
weight (torch.Tensor, optional): Sample-wise loss weight.
reduction (str, optional): The method used to reduce the loss.
avg_factor (int, optional): Average factor that is used to average
the loss. Defaults to None.
class_weight (list[float], optional): The weight for each class.
ignore_index (int | None): The label index to be ignored.
If None, it will be set to default value. Default: -100.
Returns:
torch.Tensor: The calculated loss
"""
# The default value of ignore_index is the same as F.cross_entropy
ignore_index = -100 if ignore_index is None else ignore_index
# element-wise losses
loss = F.cross_entropy(
pred,
label,
weight=class_weight,
reduction='none',
ignore_index=ignore_index)
# apply weights and do the reduction
if weight is not None:
weight = weight.float()
loss = weight_reduce_loss(
loss, weight=weight, reduction=reduction, avg_factor=avg_factor)
return loss | Calculate the CrossEntropy loss. Args: pred (torch.Tensor): The prediction with shape (N, C), C is the number of classes. label (torch.Tensor): The learning label of the prediction. weight (torch.Tensor, optional): Sample-wise loss weight. reduction (str, optional): The method used to reduce the loss. avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. class_weight (list[float], optional): The weight for each class. ignore_index (int | None): The label index to be ignored. If None, it will be set to default value. Default: -100. Returns: torch.Tensor: The calculated loss |
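A quick sanity check of the weighting path: with `weight=None` and `reduction='mean'`, the wrapper reduces to plain `F.cross_entropy`, and sample weights simply scale the element-wise losses before reduction:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
pred = torch.randn(5, 3)
label = torch.tensor([0, 2, 1, 1, 0])
weight = torch.tensor([1.0, 1.0, 0.0, 1.0, 1.0])

# element-wise losses, exactly as computed inside the wrapper
loss = F.cross_entropy(pred, label, reduction='none')

# weight=None, reduction='mean' is just the plain mean
plain_mean = loss.mean()

# sample weights with an avg_factor: weighted sum divided by the factor
weighted = (loss * weight).sum() / weight.sum()
```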
14,356 | import torch
import torch.nn as nn
import torch.nn.functional as F
from .utils import weight_reduce_loss
def _expand_onehot_labels(labels, label_weights, label_channels, ignore_index):
"""Expand onehot labels to match the size of prediction."""
bin_labels = labels.new_full((labels.size(0), label_channels), 0)
valid_mask = (labels >= 0) & (labels != ignore_index)
inds = torch.nonzero(
valid_mask & (labels < label_channels), as_tuple=False)
if inds.numel() > 0:
bin_labels[inds, labels[inds]] = 1
valid_mask = valid_mask.view(-1, 1).expand(labels.size(0),
label_channels).float()
if label_weights is None:
bin_label_weights = valid_mask
else:
bin_label_weights = label_weights.view(-1, 1).repeat(1, label_channels)
bin_label_weights *= valid_mask
return bin_labels, bin_label_weights
def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
"""Apply element-wise weight and reduce loss.
Args:
loss (Tensor): Element-wise loss.
weight (Tensor): Element-wise weights.
reduction (str): Same as built-in losses of PyTorch.
avg_factor (float): Average factor when computing the mean of losses.
Returns:
Tensor: Processed loss values.
"""
# if weight is specified, apply element-wise weight
if weight is not None:
loss = loss * weight
# if avg_factor is not specified, just reduce the loss
if avg_factor is None:
loss = reduce_loss(loss, reduction)
else:
# if reduction is mean, then average the loss by avg_factor
if reduction == 'mean':
loss = loss.sum() / avg_factor
# if reduction is 'none', then do nothing, otherwise raise an error
elif reduction != 'none':
raise ValueError('avg_factor can not be used with reduction="sum"')
return loss
The provided code snippet includes necessary dependencies for implementing the `binary_cross_entropy` function. Write a Python function `def binary_cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None, class_weight=None, ignore_index=-100)` to solve the following problem:
Calculate the binary CrossEntropy loss. Args: pred (torch.Tensor): The prediction with shape (N, 1). label (torch.Tensor): The learning label of the prediction. weight (torch.Tensor, optional): Sample-wise loss weight. reduction (str, optional): The method used to reduce the loss. Options are "none", "mean" and "sum". avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. class_weight (list[float], optional): The weight for each class. ignore_index (int | None): The label index to be ignored. If None, it will be set to default value. Default: -100. Returns: torch.Tensor: The calculated loss.
Here is the function:
def binary_cross_entropy(pred,
label,
weight=None,
reduction='mean',
avg_factor=None,
class_weight=None,
ignore_index=-100):
"""Calculate the binary CrossEntropy loss.
Args:
pred (torch.Tensor): The prediction with shape (N, 1).
label (torch.Tensor): The learning label of the prediction.
weight (torch.Tensor, optional): Sample-wise loss weight.
reduction (str, optional): The method used to reduce the loss.
Options are "none", "mean" and "sum".
avg_factor (int, optional): Average factor that is used to average
the loss. Defaults to None.
class_weight (list[float], optional): The weight for each class.
ignore_index (int | None): The label index to be ignored.
If None, it will be set to default value. Default: -100.
Returns:
torch.Tensor: The calculated loss.
"""
# The default value of ignore_index is the same as F.cross_entropy
ignore_index = -100 if ignore_index is None else ignore_index
if pred.dim() != label.dim():
label, weight = _expand_onehot_labels(label, weight, pred.size(-1),
ignore_index)
# weighted element-wise losses
if weight is not None:
weight = weight.float()
loss = F.binary_cross_entropy_with_logits(
pred, label.float(), pos_weight=class_weight, reduction='none')
# do the reduction for the weighted loss
loss = weight_reduce_loss(
loss, weight, reduction=reduction, avg_factor=avg_factor)
return loss | Calculate the binary CrossEntropy loss. Args: pred (torch.Tensor): The prediction with shape (N, 1). label (torch.Tensor): The learning label of the prediction. weight (torch.Tensor, optional): Sample-wise loss weight. reduction (str, optional): The method used to reduce the loss. Options are "none", "mean" and "sum". avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. class_weight (list[float], optional): The weight for each class. ignore_index (int | None): The label index to be ignored. If None, it will be set to default value. Default: -100. Returns: torch.Tensor: The calculated loss. |
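The `_expand_onehot_labels` helper is where ignored and negative labels get zero weight; a standalone check (helper restated with the same logic):

```python
import torch

def _expand_onehot_labels(labels, label_weights, label_channels, ignore_index):
    # same logic as above: negative and ignore_index rows get zero weight
    bin_labels = labels.new_full((labels.size(0), label_channels), 0)
    valid_mask = (labels >= 0) & (labels != ignore_index)
    inds = torch.nonzero(valid_mask & (labels < label_channels),
                         as_tuple=False)
    if inds.numel() > 0:
        bin_labels[inds, labels[inds]] = 1
    valid_mask = valid_mask.view(-1, 1).expand(labels.size(0),
                                               label_channels).float()
    if label_weights is None:
        return bin_labels, valid_mask
    w = label_weights.view(-1, 1).repeat(1, label_channels) * valid_mask
    return bin_labels, w

labels = torch.tensor([0, 2, -1, 1])  # -1 marks an invalid sample
onehot, weights = _expand_onehot_labels(labels, None, 3, ignore_index=-100)
```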
14,357 | import torch
import torch.nn as nn
import torch.nn.functional as F
from .utils import weight_reduce_loss
The provided code snippet includes necessary dependencies for implementing the `mask_cross_entropy` function. Write a Python function `def mask_cross_entropy(pred, target, label, reduction='mean', avg_factor=None, class_weight=None, ignore_index=None)` to solve the following problem:
Calculate the CrossEntropy loss for masks. Args: pred (torch.Tensor): The prediction with shape (N, C, *), C is the number of classes. The trailing * indicates arbitrary shape. target (torch.Tensor): The learning label of the prediction. label (torch.Tensor): ``label`` indicates the class label of the mask corresponding object. This will be used to select the mask in the of the class which the object belongs to when the mask prediction if not class-agnostic. reduction (str, optional): The method used to reduce the loss. Options are "none", "mean" and "sum". avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. class_weight (list[float], optional): The weight for each class. ignore_index (None): Placeholder, to be consistent with other loss. Default: None. Returns: torch.Tensor: The calculated loss Example: >>> N, C = 3, 11 >>> H, W = 2, 2 >>> pred = torch.randn(N, C, H, W) * 1000 >>> target = torch.rand(N, H, W) >>> label = torch.randint(0, C, size=(N,)) >>> reduction = 'mean' >>> avg_factor = None >>> class_weights = None >>> loss = mask_cross_entropy(pred, target, label, reduction, >>> avg_factor, class_weights) >>> assert loss.shape == (1,)
Here is the function:
def mask_cross_entropy(pred,
target,
label,
reduction='mean',
avg_factor=None,
class_weight=None,
ignore_index=None):
"""Calculate the CrossEntropy loss for masks.
Args:
pred (torch.Tensor): The prediction with shape (N, C, *), C is the
number of classes. The trailing * indicates arbitrary shape.
target (torch.Tensor): The learning label of the prediction.
label (torch.Tensor): ``label`` indicates the class label of the mask
corresponding object. This will be used to select the mask in the
of the class which the object belongs to when the mask prediction
if not class-agnostic.
reduction (str, optional): The method used to reduce the loss.
Options are "none", "mean" and "sum".
avg_factor (int, optional): Average factor that is used to average
the loss. Defaults to None.
class_weight (list[float], optional): The weight for each class.
ignore_index (None): Placeholder, to be consistent with other loss.
Default: None.
Returns:
torch.Tensor: The calculated loss
Example:
>>> N, C = 3, 11
>>> H, W = 2, 2
>>> pred = torch.randn(N, C, H, W) * 1000
>>> target = torch.rand(N, H, W)
>>> label = torch.randint(0, C, size=(N,))
>>> reduction = 'mean'
>>> avg_factor = None
>>> class_weights = None
>>> loss = mask_cross_entropy(pred, target, label, reduction,
>>> avg_factor, class_weights)
>>> assert loss.shape == (1,)
"""
assert ignore_index is None, 'BCE loss does not support ignore_index'
# TODO: handle these two reserved arguments
assert reduction == 'mean' and avg_factor is None
num_rois = pred.size()[0]
inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device)
pred_slice = pred[inds, label].squeeze(1)
return F.binary_cross_entropy_with_logits(
pred_slice, target, weight=class_weight, reduction='mean')[None] | Calculate the CrossEntropy loss for masks. Args: pred (torch.Tensor): The prediction with shape (N, C, *), C is the number of classes. The trailing * indicates arbitrary shape. target (torch.Tensor): The learning label of the prediction. label (torch.Tensor): ``label`` indicates the class label of the mask corresponding object. This will be used to select the mask in the of the class which the object belongs to when the mask prediction if not class-agnostic. reduction (str, optional): The method used to reduce the loss. Options are "none", "mean" and "sum". avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. class_weight (list[float], optional): The weight for each class. ignore_index (None): Placeholder, to be consistent with other loss. Default: None. Returns: torch.Tensor: The calculated loss Example: >>> N, C = 3, 11 >>> H, W = 2, 2 >>> pred = torch.randn(N, C, H, W) * 1000 >>> target = torch.rand(N, H, W) >>> label = torch.randint(0, C, size=(N,)) >>> reduction = 'mean' >>> avg_factor = None >>> class_weights = None >>> loss = mask_cross_entropy(pred, target, label, reduction, >>> avg_factor, class_weights) >>> assert loss.shape == (1,) |
14,358 | import torch
import torch.nn as nn
The provided code snippet includes necessary dependencies for implementing the `rotation_distance_loss` function. Write a Python function `def rotation_distance_loss(pred, target, epsilon)` to solve the following problem:
Wrapper of rotation distance loss.
Here is the function:
def rotation_distance_loss(pred, target, epsilon):
"""Warpper of rotation distance loss."""
tr = torch.einsum(
'bij,bij->b',
[pred.view(-1, 3, 3), target.view(-1, 3, 3)])
theta = (tr - 1) * 0.5
loss = torch.acos(torch.clamp(theta, -1 + epsilon, 1 - epsilon))
    return loss | Wrapper of rotation distance loss.
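The loss is the geodesic angle between two rotation matrices (via the trace of their product), and the `epsilon` clamp keeps `acos` finite at the boundaries. A standalone check, with the function restated verbatim:

```python
import torch

def rotation_distance_loss(pred, target, epsilon):
    # identical to the function above
    tr = torch.einsum('bij,bij->b',
                      [pred.view(-1, 3, 3), target.view(-1, 3, 3)])
    theta = (tr - 1) * 0.5
    return torch.acos(torch.clamp(theta, -1 + epsilon, 1 - epsilon))

eye = torch.eye(3).unsqueeze(0)
# identical rotations -> angle ~ 0 (up to the clamp epsilon)
loss_same = rotation_distance_loss(eye, eye, epsilon=1e-7)

# a 180-degree rotation about z vs the identity -> angle ~ pi
rot_z = torch.diag(torch.tensor([-1.0, -1.0, 1.0])).unsqueeze(0)
loss_pi = rotation_distance_loss(rot_z, eye, epsilon=1e-7)
```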
14,359 | from mmcv.utils import Registry
from .temporal_encoder import TemporalGRUEncoder
NECKS = Registry('necks')
NECKS.register_module(name='TemporalGRUEncoder', module=TemporalGRUEncoder)
The provided code snippet includes necessary dependencies for implementing the `build_neck` function. Write a Python function `def build_neck(cfg)` to solve the following problem:
Build neck.
Here is the function:
def build_neck(cfg):
"""Build neck."""
if cfg is None:
return None
return NECKS.build(cfg) | Build neck. |
14,360 | from mmcv.utils import Registry
from .pose_discriminator import (
FullPoseDiscriminator,
PoseDiscriminator,
ShapeDiscriminator,
SMPLDiscriminator,
)
DISCRIMINATORS = Registry('discriminators')
DISCRIMINATORS.register_module(
name='ShapeDiscriminator', module=ShapeDiscriminator)
DISCRIMINATORS.register_module(
name='PoseDiscriminator', module=PoseDiscriminator)
DISCRIMINATORS.register_module(
name='FullPoseDiscriminator', module=FullPoseDiscriminator)
DISCRIMINATORS.register_module(
name='SMPLDiscriminator', module=SMPLDiscriminator)
The provided code snippet includes necessary dependencies for implementing the `build_discriminator` function. Write a Python function `def build_discriminator(cfg)` to solve the following problem:
Build discriminator.
Here is the function:
def build_discriminator(cfg):
"""Build discriminator."""
if cfg is None:
return None
return DISCRIMINATORS.build(cfg) | Build discriminator. |
14,361 | from mmcv.utils import Registry
from .smplify import SMPLify
from .smplifyx import SMPLifyX
REGISTRANTS = Registry('registrants')
REGISTRANTS.register_module(name='SMPLify', module=SMPLify)
REGISTRANTS.register_module(name='SMPLifyX', module=SMPLifyX)
The provided code snippet includes necessary dependencies for implementing the `build_registrant` function. Write a Python function `def build_registrant(cfg)` to solve the following problem:
Build registrant.
Here is the function:
def build_registrant(cfg):
"""Build registrant."""
if cfg is None:
return None
return REGISTRANTS.build(cfg) | Build registrant. |
14,362 | import numpy as np
import torch
import torch.cuda.comm
import torch.nn as nn
from mmcv.runner.base_module import BaseModule
from torch.nn import functional as F
from mmhuman3d.core.conventions.keypoints_mapping import get_flip_pairs
The provided code snippet includes necessary dependencies for implementing the `norm_heatmap` function. Write a Python function `def norm_heatmap(norm_type, heatmap)` to solve the following problem:
Normalize heatmap. Args: norm_type (str): type of normalization. Currently only 'softmax' is supported heatmap (torch.Tensor): model output heatmap with shape (Bx29xF^2) where F^2 refers to number of squared feature channels F Returns: heatmap (torch.Tensor): normalized heatmap according to specified type with shape (Bx29xF^2)
Here is the function:
def norm_heatmap(norm_type, heatmap):
"""Normalize heatmap.
Args:
norm_type (str):
type of normalization. Currently only 'softmax' is supported
heatmap (torch.Tensor):
model output heatmap with shape (Bx29xF^2) where F^2 refers to
number of squared feature channels F
Returns:
heatmap (torch.Tensor):
normalized heatmap according to specified type with
shape (Bx29xF^2)
"""
# Input tensor shape: [N,C,...]
shape = heatmap.shape
if norm_type == 'softmax':
heatmap = heatmap.reshape(*shape[:2], -1)
# global soft max
heatmap = F.softmax(heatmap, 2)
return heatmap.reshape(*shape)
else:
raise NotImplementedError | Normalize heatmap. Args: norm_type (str): type of normalization. Currently only 'softmax' is supported heatmap (torch.Tensor): model output heatmap with shape (Bx29xF^2) where F^2 refers to number of squared feature channels F Returns: heatmap (torch.Tensor): normalized heatmap according to specified type with shape (Bx29xF^2) |
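After softmax normalization every channel becomes a spatial probability distribution, which a quick check confirms (function restated verbatim):

```python
import torch
import torch.nn.functional as F

def norm_heatmap(norm_type, heatmap):
    # same logic as the function above
    shape = heatmap.shape
    if norm_type == 'softmax':
        heatmap = heatmap.reshape(*shape[:2], -1)
        heatmap = F.softmax(heatmap, 2)
        return heatmap.reshape(*shape)
    raise NotImplementedError

hm = torch.randn(2, 29, 8, 8)
out = norm_heatmap('softmax', hm)
# every channel now sums to 1 over its spatial locations
```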
14,363 | import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.runner.base_module import BaseModule
from torch.nn.modules.utils import _pair
from mmhuman3d.utils.geometry import rot6d_to_rotmat
The provided code snippet includes necessary dependencies for implementing the `interpolate` function. Write a Python function `def interpolate(feat, uv)` to solve the following problem:
Args: feat (torch.Tensor): [B, C, H, W] image features uv (torch.Tensor): [B, 2, N] uv coordinates in the image plane, range [-1, 1] Returns: samples[:, :, :, 0] (torch.Tensor): [B, C, N] image features at the uv coordinates
Here is the function:
def interpolate(feat, uv):
"""
Args:
feat (torch.Tensor): [B, C, H, W] image features
uv (torch.Tensor): [B, 2, N] uv coordinates
in the image plane, range [-1, 1]
Returns:
samples[:, :, :, 0] (torch.Tensor):
[B, C, N] image features at the uv coordinates
"""
if uv.shape[-1] != 2:
uv = uv.transpose(1, 2) # [B, N, 2]
uv = uv.unsqueeze(2) # [B, N, 1, 2]
    # NOTE: for newer PyTorch, training results seem to be degraded
    # due to an implementation difference in F.grid_sample; for older
    # versions, simply remove the align_corners argument.
if int(torch.__version__.split('.')[1]) < 4:
samples = torch.nn.functional.grid_sample(feat, uv) # [B, C, N, 1]
else:
samples = torch.nn.functional.grid_sample(
feat, uv, align_corners=True) # [B, C, N, 1]
return samples[:, :, :, 0] # [B, C, N] | Args: feat (torch.Tensor): [B, C, H, W] image features uv (torch.Tensor): [B, 2, N] uv coordinates in the image plane, range [-1, 1] Returns: samples[:, :, :, 0] (torch.Tensor): [B, C, N] image features at the uv coordinates |
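A condensed version for illustration (the PyTorch version check is dropped and `align_corners=True` is assumed). With `align_corners=True`, uv = (0, 0) samples the image center, which for a 2x2 feature map is the mean of the four corners:

```python
import torch

def interpolate(feat, uv):
    """Condensed sampler matching the function above."""
    if uv.shape[-1] != 2:
        uv = uv.transpose(1, 2)          # [B, N, 2]
    uv = uv.unsqueeze(2)                 # [B, N, 1, 2]
    samples = torch.nn.functional.grid_sample(feat, uv, align_corners=True)
    return samples[:, :, :, 0]           # [B, C, N]

feat = torch.tensor([[[[1.0, 2.0],
                       [3.0, 4.0]]]])   # [1, 1, 2, 2]
uv = torch.zeros(1, 2, 1)               # image center in [-1, 1] coords
out = interpolate(feat, uv)             # bilinear average of the corners
```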
14,364 | import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.runner.base_module import BaseModule
from torch.nn.modules.utils import _pair
from mmhuman3d.utils.geometry import rot6d_to_rotmat
def _softmax(tensor, temperature, dim=-1):
return F.softmax(tensor * temperature, dim=dim)
The provided code snippet includes necessary dependencies for implementing the `softargmax2d` function. Write a Python function `def softargmax2d( heatmaps, temperature=None, normalize_keypoints=True, )` to solve the following problem:
Softargmax layer for heatmaps.
Here is the function:
def softargmax2d(
heatmaps,
temperature=None,
normalize_keypoints=True,
):
"""Softargmax layer for heatmaps."""
dtype, device = heatmaps.dtype, heatmaps.device
if temperature is None:
temperature = torch.tensor(1.0, dtype=dtype, device=device)
batch_size, num_channels, height, width = heatmaps.shape
x = torch.arange(
0, width, device=device,
dtype=dtype).reshape(1, 1, 1, width).expand(batch_size, -1, height, -1)
y = torch.arange(
0, height, device=device,
dtype=dtype).reshape(1, 1, height, 1).expand(batch_size, -1, -1, width)
# Should be Bx2xHxW
points = torch.cat([x, y], dim=1)
normalized_heatmap = _softmax(
heatmaps.reshape(batch_size, num_channels, -1),
temperature=temperature.reshape(1, -1, 1),
dim=-1)
# Should be BxJx2
keypoints = (
normalized_heatmap.reshape(batch_size, -1, 1, height * width) *
points.reshape(batch_size, 1, 2, -1)).sum(dim=-1)
if normalize_keypoints:
# Normalize keypoints to [-1, 1]
keypoints[:, :, 0] = (keypoints[:, :, 0] / (width - 1) * 2 - 1)
keypoints[:, :, 1] = (keypoints[:, :, 1] / (height - 1) * 2 - 1)
return keypoints, normalized_heatmap.reshape(batch_size, -1, height, width) | Softargmax layer for heatmaps. |
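A condensed reimplementation showing the expected-coordinate computation (separable sums over rows and columns replace the explicit coordinate grid); a sharp peak recovers its normalized location:

```python
import torch

def soft_argmax2d(heatmaps, temperature=1.0):
    """Condensed soft-argmax matching the layer above, with keypoints
    normalized to [-1, 1]."""
    b, c, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.reshape(b, c, -1) * temperature, dim=-1)
    probs = probs.reshape(b, c, h, w)
    xs = torch.arange(w, dtype=heatmaps.dtype)
    ys = torch.arange(h, dtype=heatmaps.dtype)
    x = (probs.sum(dim=2) * xs).sum(dim=-1)  # expected x per channel
    y = (probs.sum(dim=3) * ys).sum(dim=-1)  # expected y per channel
    x = x / (w - 1) * 2 - 1
    y = y / (h - 1) * 2 - 1
    return torch.stack([x, y], dim=-1)       # [B, C, 2]

hm = torch.full((1, 1, 4, 5), -50.0)
hm[0, 0, 1, 3] = 50.0                        # sharp peak at (x=3, y=1)
kp = soft_argmax2d(hm)                       # ~ (0.5, -1/3) after normalizing
```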
14,365 | import math
import numpy as np
import scipy
import torch
import torch.cuda.comm
import torch.nn as nn
from mmcv.runner.base_module import BaseModule
from torch.nn import functional as F
from mmhuman3d.core.conventions.keypoints_mapping.flame import (
FLAME_73_KEYPOINTS,
)
from mmhuman3d.core.conventions.keypoints_mapping.mano import (
MANO_RIGHT_REORDER_KEYPOINTS,
)
from mmhuman3d.core.conventions.keypoints_mapping.openpose import (
OPENPOSE_25_KEYPOINTS,
)
from mmhuman3d.core.conventions.keypoints_mapping.spin_smplx import (
SPIN_SMPLX_KEYPOINTS,
)
from mmhuman3d.models.body_models.smpl import SMPL
from mmhuman3d.models.heads.bert.modeling_bert import (
BertConfig,
BertIntermediate,
BertOutput,
BertPreTrainedModel,
BertSelfOutput,
)
from mmhuman3d.models.utils.SMPLX import get_partial_smpl
from mmhuman3d.utils.camera_utils import homo_vector
from mmhuman3d.utils.geometry import (
compute_twist_rotation,
projection,
rot6d_to_rotmat,
rotation_matrix_to_angle_axis,
)
from mmhuman3d.utils.keypoint_utils import transform_kps2d
from mmhuman3d.utils.transforms import aa_to_rotmat
def get_att_block(config_path: str,
img_feature_dim=2048,
output_feat_dim=512,
hidden_feat_dim=1024,
num_attention_heads=4,
num_hidden_layers=1):
"""Get attention block."""
config_class = BertConfig
config = config_class.from_pretrained(config_path)
interm_size_scale = 2
config.output_attentions = False
config.img_feature_dim = img_feature_dim
config.hidden_size = hidden_feat_dim
config.intermediate_size = int(config.hidden_size * interm_size_scale)
config.num_hidden_layers = num_hidden_layers
config.num_attention_heads = num_attention_heads
config.max_position_embeddings = 900
# init a transformer encoder and append it to a list
assert config.hidden_size % config.num_attention_heads == 0
att_model = EncoderBlock(config=config)
return att_model
The provided code snippet includes necessary dependencies for implementing the `get_attention_modules` function. Write a Python function `def get_attention_modules(config_path: str, module_keys: list, img_feature_dim: dict, hidden_feat_dim: int, n_iter: int, num_attention_heads: int = 1)` to solve the following problem:
Get attention modules. Args: config_path (str): Attention config path. module_keys (list): Model name. img_feature_dim (dict): Image feature dimension. hidden_feat_dim (int): Attention feature dimension. n_iter (int): Number of iterations. num_attention_heads (int, optional): Defaults to 1. Returns: Attention modules
Here is the function:
def get_attention_modules(config_path: str,
module_keys: list,
img_feature_dim: dict,
hidden_feat_dim: int,
n_iter: int,
num_attention_heads: int = 1):
"""Get attention modules.
Args:
config_path (str): Attention config path.
module_keys (list): Model name.
img_feature_dim (dict): Image feature dimension.
hidden_feat_dim (int): Attention feature dimension.
n_iter (int): Number of iterations.
num_attention_heads (int, optional): Defaults to 1.
Returns:
Attention modules
"""
align_attention = nn.ModuleDict()
for k in module_keys:
align_attention[k] = nn.ModuleList()
for i in range(n_iter):
align_attention[k].append(
get_att_block(
config_path,
img_feature_dim=img_feature_dim[k][i],
hidden_feat_dim=hidden_feat_dim,
num_attention_heads=num_attention_heads))
return align_attention | Get attention modules. Args: config_path (str): Attention config path. module_keys (list): Model name. img_feature_dim (dict): Image feature dimension. hidden_feat_dim (int): Attention feature dimension. n_iter (int): Number of iterations. num_attention_heads (int, optional): Defaults to 1. Returns: Attention modules |
14,366 | from mmcv.utils import Registry
from .cliff_head import CliffHead
from .expose_head import ExPoseBodyHead, ExPoseFaceHead, ExPoseHandHead
from .hmr_head import HMRHead
from .hybrik_head import HybrIKHead
from .pare_head import PareHead
from .pymafx_head import PyMAFXHead, Regressor
HEADS = Registry('heads')
HEADS.register_module(name='HybrIKHead', module=HybrIKHead)
HEADS.register_module(name='HMRHead', module=HMRHead)
HEADS.register_module(name='PareHead', module=PareHead)
HEADS.register_module(name='ExPoseBodyHead', module=ExPoseBodyHead)
HEADS.register_module(name='ExPoseHandHead', module=ExPoseHandHead)
HEADS.register_module(name='ExPoseFaceHead', module=ExPoseFaceHead)
HEADS.register_module(name='CliffHead', module=CliffHead)
HEADS.register_module(name='PyMAFXHead', module=PyMAFXHead)
HEADS.register_module(name='Regressor', module=Regressor)
The provided code snippet includes necessary dependencies for implementing the `build_head` function. Write a Python function `def build_head(cfg)` to solve the following problem:
Build head.
Here is the function:
def build_head(cfg):
"""Build head."""
if cfg is None:
return None
return HEADS.build(cfg) | Build head. |
14,367 | import json
import math
import sys
from io import open
import torch
from torch import nn
from .modeling_utils import PretrainedConfig, PreTrainedModel
The provided code snippet includes necessary dependencies for implementing the `gelu` function. Write a Python function `def gelu(x)` to solve the following problem:
Implementation of the gelu activation function. For information: OpenAI GPT's gelu is slightly different (and gives slightly different results): 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * ( x + 0.044715 * torch.pow(x, 3)))) Also see https://arxiv.org/abs/1606.08415
Here is the function:
def gelu(x):
"""Implementation of the gelu activation function. For information: OpenAI
GPT's gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (
x + 0.044715 * torch.pow(x, 3))))
Also see https://arxiv.org/abs/1606.08415
"""
return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) | Implementation of the gelu activation function. For information: OpenAI GPT's gelu is slightly different (and gives slightly different results): 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * ( x + 0.044715 * torch.pow(x, 3)))) Also see https://arxiv.org/abs/1606.08415 |
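The exact erf-based GELU above can be checked against the tanh approximation quoted in its docstring. A standalone sketch (`gelu_tanh` is an illustrative name for the approximation):

```python
import math
import torch

def gelu(x):
    # Exact GELU via the Gaussian error function, as in the snippet above.
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # OpenAI GPT's tanh approximation from the docstring above.
    return 0.5 * x * (1 + torch.tanh(
        math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))

x = torch.linspace(-3, 3, 101)
max_gap = (gelu(x) - gelu_tanh(x)).abs().max().item()
print(max_gap)  # the two variants agree closely on this range
```

The two curves differ only slightly, which is why the approximation is often used interchangeably with the exact form.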
14,368 | import json
import math
import sys
from io import open
import torch
from torch import nn
from .modeling_utils import PretrainedConfig, PreTrainedModel
def swish(x):
return x * torch.sigmoid(x) | null |
14,369 | from abc import ABCMeta, abstractmethod
from typing import Optional, Tuple, Union
import torch
import torch.nn.functional as F
import mmhuman3d.core.visualization.visualize_smpl as visualize_smpl
from mmhuman3d.core.conventions.keypoints_mapping import get_keypoint_idx
from mmhuman3d.models.utils import FitsDict
from mmhuman3d.utils.geometry import (
batch_rodrigues,
estimate_translation,
project_points,
rotation_matrix_to_angle_axis,
)
from ..backbones.builder import build_backbone
from ..body_models.builder import build_body_model
from ..discriminators.builder import build_discriminator
from ..heads.builder import build_head
from ..losses.builder import build_loss
from ..necks.builder import build_neck
from ..registrants.builder import build_registrant
from .base_architecture import BaseArchitecture
The provided code snippet includes necessary dependencies for implementing the `set_requires_grad` function. Write a Python function `def set_requires_grad(nets, requires_grad=False)` to solve the following problem:
Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
Here is the function:
def set_requires_grad(nets, requires_grad=False):
"""Set requies_grad for all the networks.
Args:
nets (nn.Module | list[nn.Module]): A list of networks or a single
network.
requires_grad (bool): Whether the networks require gradients or not
"""
if not isinstance(nets, list):
nets = [nets]
for net in nets:
if net is not None:
for param in net.parameters():
param.requires_grad = requires_grad | Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
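A typical use of this helper is freezing a discriminator while another network is updated. A minimal sketch (`disc` is an illustrative module, not from the source):

```python
import torch
from torch import nn

def set_requires_grad(nets, requires_grad=False):
    # Same helper as above: accepts one module or a list of modules.
    if not isinstance(nets, list):
        nets = [nets]
    for net in nets:
        if net is not None:
            for param in net.parameters():
                param.requires_grad = requires_grad

# Freeze all parameters of a small network.
disc = nn.Linear(4, 1)
set_requires_grad(disc, False)
print(all(not p.requires_grad for p in disc.parameters()))  # True
```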
14,370 | from abc import ABCMeta
from typing import Optional, Union
import torch
import torch.nn as nn
from mmhuman3d.core.conventions.keypoints_mapping.flame import (
FLAME_73_KEYPOINTS,
)
from mmhuman3d.core.conventions.keypoints_mapping.mano import (
MANO_RIGHT_REORDER_KEYPOINTS,
)
from mmhuman3d.models.body_models.smplx import GenderedSMPLX
from mmhuman3d.models.heads.pymafx_head import (
IUV_predict_layer,
MAF_Extractor,
Mesh_Sampler,
get_attention_modules,
)
from ..backbones.builder import build_backbone
from ..heads.builder import build_head
from .base_architecture import BaseArchitecture
def get_fusion_modules(module_keys, ma_feat_dim, grid_feat_dim, n_iter,
out_feat_len):
feat_fusion = nn.ModuleDict()
for k in module_keys:
feat_fusion[k] = nn.ModuleList()
for i in range(n_iter):
feat_fusion[k].append(
nn.Linear(grid_feat_dim + ma_feat_dim[k], out_feat_len[k]))
return feat_fusion | null |
14,371 | from abc import ABCMeta, abstractmethod
from typing import Optional, Tuple, Union
import torch
import torch.nn.functional as F
import mmhuman3d.core.visualization.visualize_smpl as visualize_smpl
from mmhuman3d.core.conventions.keypoints_mapping import get_keypoint_idx
from mmhuman3d.models.utils import FitsDict
from mmhuman3d.utils.geometry import (
batch_rodrigues,
cam_crop2full,
estimate_translation,
perspective_projection,
project_points,
rotation_matrix_to_angle_axis,
)
from ..backbones.builder import build_backbone
from ..body_models.builder import build_body_model
from ..discriminators.builder import build_discriminator
from ..heads.builder import build_head
from ..losses.builder import build_loss
from ..necks.builder import build_neck
from ..registrants.builder import build_registrant
from .base_architecture import BaseArchitecture
The provided code snippet includes necessary dependencies for implementing the `set_requires_grad` function. Write a Python function `def set_requires_grad(nets, requires_grad=False)` to solve the following problem:
Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
Here is the function:
def set_requires_grad(nets, requires_grad=False):
"""Set requies_grad for all the networks.
Args:
nets (nn.Module | list[nn.Module]): A list of networks or a single
network.
requires_grad (bool): Whether the networks require gradients or not
"""
if not isinstance(nets, list):
nets = [nets]
for net in nets:
if net is not None:
for param in net.parameters():
param.requires_grad = requires_grad | Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
14,372 | from abc import ABCMeta, abstractmethod
from typing import Optional, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmhuman3d.core.conventions.keypoints_mapping import (
get_keypoint_idx,
get_keypoint_idxs_by_part,
)
from mmhuman3d.utils.geometry import (
batch_rodrigues,
weak_perspective_projection,
)
from ..backbones.builder import build_backbone
from ..body_models.builder import build_body_model
from ..heads.builder import build_head
from ..losses.builder import build_loss
from ..necks.builder import build_neck
from ..utils import (
SMPLXFaceCropFunc,
SMPLXFaceMergeFunc,
SMPLXHandCropFunc,
SMPLXHandMergeFunc,
)
from .base_architecture import BaseArchitecture
The provided code snippet includes necessary dependencies for implementing the `set_requires_grad` function. Write a Python function `def set_requires_grad(nets, requires_grad=False)` to solve the following problem:
Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
Here is the function:
def set_requires_grad(nets, requires_grad=False):
"""Set requies_grad for all the networks.
Args:
nets (nn.Module | list[nn.Module]): A list of networks or a single
network.
requires_grad (bool): Whether the networks require gradients or not
"""
if not isinstance(nets, list):
nets = [nets]
for net in nets:
if net is not None:
for param in net.parameters():
param.requires_grad = requires_grad | Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
14,373 | from abc import ABCMeta, abstractmethod
from typing import Optional, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmhuman3d.core.conventions.keypoints_mapping import (
get_keypoint_idx,
get_keypoint_idxs_by_part,
)
from mmhuman3d.utils.geometry import (
batch_rodrigues,
weak_perspective_projection,
)
from ..backbones.builder import build_backbone
from ..body_models.builder import build_body_model
from ..heads.builder import build_head
from ..losses.builder import build_loss
from ..necks.builder import build_neck
from ..utils import (
SMPLXFaceCropFunc,
SMPLXFaceMergeFunc,
SMPLXHandCropFunc,
SMPLXHandMergeFunc,
)
from .base_architecture import BaseArchitecture
def batch_rodrigues(theta):
"""Convert axis-angle representation to rotation matrix.
Args:
theta: size = [B, 3]
Returns:
Rotation matrix corresponding to the axis-angle input -- size = [B, 3, 3]
"""
l1norm = torch.norm(theta + 1e-8, p=2, dim=1)
angle = torch.unsqueeze(l1norm, -1)
normalized = torch.div(theta, angle)
angle = angle * 0.5
v_cos = torch.cos(angle)
v_sin = torch.sin(angle)
quat = torch.cat([v_cos, v_sin * normalized], dim=1)
return quat_to_rotmat(quat)
The provided code snippet includes necessary dependencies for implementing the `pose2rotmat` function. Write a Python function `def pose2rotmat(pred_pose)` to solve the following problem:
aa2rotmat.
Here is the function:
def pose2rotmat(pred_pose):
"""aa2rotmat."""
if len(pred_pose.shape) == 3:
num_joints = pred_pose.shape[1]
pred_pose = batch_rodrigues(pred_pose.view(-1, 3)).view(
-1, num_joints, 3, 3)
return pred_pose | aa2rotmat. |
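The axis-angle-to-rotation-matrix conversion behind `batch_rodrigues` can be sketched directly with the Rodrigues formula (the quaternion route via `quat_to_rotmat` is not shown in this file, so this stand-in, `aa_to_rotmat`, is an illustrative equivalent, not the source implementation):

```python
import math
import torch

def aa_to_rotmat(theta):
    # Rodrigues formula: R = I + sin(a) * K + (1 - cos(a)) * K @ K,
    # where K is the skew-symmetric matrix of the rotation axis.
    angle = torch.norm(theta + 1e-8, p=2, dim=1, keepdim=True)  # (B, 1)
    axis = theta / angle                                        # (B, 3)
    K = torch.zeros(theta.shape[0], 3, 3)
    K[:, 0, 1], K[:, 0, 2] = -axis[:, 2], axis[:, 1]
    K[:, 1, 0], K[:, 1, 2] = axis[:, 2], -axis[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -axis[:, 1], axis[:, 0]
    eye = torch.eye(3).expand_as(K)
    s = torch.sin(angle)[..., None]
    c = torch.cos(angle)[..., None]
    return eye + s * K + (1 - c) * torch.bmm(K, K)

# 90 degrees about z maps the x-axis onto the y-axis.
R = aa_to_rotmat(torch.tensor([[0.0, 0.0, math.pi / 2]]))
v = R[0] @ torch.tensor([1.0, 0.0, 0.0])  # approximately [0, 1, 0]
```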
14,374 | from mmcv.cnn import MODELS as MMCV_MODELS
from mmcv.utils import Registry
from .cliff_mesh_estimator import CliffImageBodyModelEstimator
from .expressive_mesh_estimator import SMPLXImageBodyModelEstimator
from .hybrik import HybrIK_trainer
from .mesh_estimator import ImageBodyModelEstimator, VideoBodyModelEstimator
from .pymafx import PyMAFX
def build_from_cfg(cfg, registry, default_args=None):
if cfg is None:
return None
return MMCV_MODELS.build_func(cfg, registry, default_args) | null |
14,375 | from abc import ABCMeta
import torch
from mmhuman3d.data.datasets.pipelines.hybrik_transforms import heatmap2coord
from mmhuman3d.utils.transforms import rotmat_to_quat
from ..backbones.builder import build_backbone
from ..body_models.builder import build_body_model
from ..heads.builder import build_head
from ..losses.builder import build_loss
from ..necks.builder import build_neck
from .base_architecture import BaseArchitecture
The provided code snippet includes necessary dependencies for implementing the `set_requires_grad` function. Write a Python function `def set_requires_grad(nets, requires_grad=False)` to solve the following problem:
Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
Here is the function:
def set_requires_grad(nets, requires_grad=False):
"""Set requies_grad for all the networks.
Args:
nets (nn.Module | list[nn.Module]): A list of networks or a single
network.
requires_grad (bool): Whether the networks require gradients or not
"""
if not isinstance(nets, list):
nets = [nets]
for net in nets:
if net is not None:
for param in net.parameters():
param.requires_grad = requires_grad | Set requires_grad for all the networks. Args: nets (nn.Module | list[nn.Module]): A list of networks or a single network. requires_grad (bool): Whether the networks require gradients or not
14,376 | import os
from typing import List
import numpy as np
import torch
import torch.nn.functional as F
from smplx.utils import find_joint_kin_chain
from mmhuman3d.core.conventions.keypoints_mapping import (
get_keypoint_idx,
get_keypoint_idxs_by_part,
)
from mmhuman3d.utils.geometry import weak_perspective_projection
def points_to_bbox(points, bbox_scale_factor: float = 1.0):
"""Get scaled bounding box from keypoints 2D."""
min_coords, _ = torch.min(points, dim=1)
xmin, ymin = min_coords[:, 0], min_coords[:, 1]
max_coords, _ = torch.max(points, dim=1)
xmax, ymax = max_coords[:, 0], max_coords[:, 1]
center = torch.stack([xmax + xmin, ymax + ymin], dim=-1) * 0.5
width = (xmax - xmin)
height = (ymax - ymin)
# Convert the bounding box to a square box
size = torch.max(width, height) * bbox_scale_factor
return center, size
The provided code snippet includes necessary dependencies for implementing the `get_crop_info` function. Write a Python function `def get_crop_info(points, img_metas, scale_factor: float = 1.0, crop_size: int = 256)` to solve the following problem:
Get the transformation of points on the cropped image to the points on the original image.
Here is the function:
def get_crop_info(points,
img_metas,
scale_factor: float = 1.0,
crop_size: int = 256):
"""Get the transformation of points on the cropped image to the points on
the original image."""
device = points.device
dtype = points.dtype
batch_size = points.shape[0]
# Get the image to crop transformations and bounding box sizes
crop_transforms = []
img_bbox_sizes = []
for img_meta in img_metas:
crop_transforms.append(img_meta['crop_transform'])
img_bbox_sizes.append(img_meta['scale'].max())
img_bbox_sizes = torch.tensor(img_bbox_sizes, dtype=dtype, device=device)
crop_transforms = torch.tensor(crop_transforms, dtype=dtype, device=device)
crop_transforms = torch.cat([
crop_transforms,
torch.tensor([0.0, 0.0, 1.0], dtype=dtype, device=device).expand(
[batch_size, 1, 3])
],
dim=1)
inv_crop_transforms = torch.inverse(crop_transforms)
# center on the cropped body image
center_body_crop, bbox_size = points_to_bbox(
points, bbox_scale_factor=scale_factor)
orig_bbox_size = bbox_size / crop_size * img_bbox_sizes
# Compute the center of the crop in the original image
center = (
torch.einsum('bij,bj->bi',
[inv_crop_transforms[:, :2, :2], center_body_crop]) +
inv_crop_transforms[:, :2, 2])
return {
'center': center.reshape(-1, 2),
'orig_bbox_size': orig_bbox_size,
# 'bbox_size': bbox_size.reshape(-1),
'inv_crop_transforms': inv_crop_transforms,
# 'center_body_crop': 2 * center_body_crop / (crop_size-1) - 1,
} | Get the transformation of points on the cropped image to the points on the original image. |
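The bounding-box step used by `get_crop_info` can be checked in isolation. A condensed sketch of `points_to_bbox` with a hand-computed example:

```python
import torch

def points_to_bbox(points, bbox_scale_factor=1.0):
    # Same logic as the helper above: a scaled square bbox around a
    # batch of 2D keypoints with shape (B, N, 2).
    min_coords, _ = torch.min(points, dim=1)
    xmin, ymin = min_coords[:, 0], min_coords[:, 1]
    max_coords, _ = torch.max(points, dim=1)
    xmax, ymax = max_coords[:, 0], max_coords[:, 1]
    center = torch.stack([xmax + xmin, ymax + ymin], dim=-1) * 0.5
    size = torch.max(xmax - xmin, ymax - ymin) * bbox_scale_factor
    return center, size

pts = torch.tensor([[[0.0, 0.0], [4.0, 2.0]]])  # one sample, two keypoints
center, size = points_to_bbox(pts, bbox_scale_factor=1.2)
print(center, size)  # center (2, 1); size = max(4, 2) * 1.2 = 4.8
```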
14,377 | import os
from typing import List
import numpy as np
import torch
import torch.nn.functional as F
from smplx.utils import find_joint_kin_chain
from mmhuman3d.core.conventions.keypoints_mapping import (
get_keypoint_idx,
get_keypoint_idxs_by_part,
)
from mmhuman3d.utils.geometry import weak_perspective_projection
The provided code snippet includes necessary dependencies for implementing the `concat_images` function. Write a Python function `def concat_images(images: List[torch.Tensor])` to solve the following problem:
Concat images of different size.
Here is the function:
def concat_images(images: List[torch.Tensor]):
"""Concat images of different size."""
sizes = [img.shape[1:] for img in images]
H, W = [max(s) for s in zip(*sizes)]
batch_size = len(images)
batched_shape = (batch_size, images[0].shape[0], H, W)
batched = torch.zeros(
batched_shape, device=images[0].device, dtype=images[0].dtype)
for ii, img in enumerate(images):
shape = img.shape
batched[ii, :shape[0], :shape[1], :shape[2]] = img
return batched | Concat images of different size. |
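Zero-padding to the largest height and width is easiest to see with two differently sized images. A self-contained sketch of the same behavior:

```python
import torch

def concat_images(images):
    """Concat images of different size by zero-padding CxHxW tensors."""
    sizes = [img.shape[1:] for img in images]
    H, W = [max(s) for s in zip(*sizes)]
    batched = torch.zeros(len(images), images[0].shape[0], H, W,
                          dtype=images[0].dtype)
    for ii, img in enumerate(images):
        c, h, w = img.shape
        batched[ii, :c, :h, :w] = img
    return batched

imgs = [torch.ones(3, 4, 6), torch.ones(3, 5, 2)]
out = concat_images(imgs)
print(out.shape)  # torch.Size([2, 3, 5, 6])
```

Each image keeps its own content in the top-left corner; the rest stays zero.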
14,378 | import os
from typing import List
import numpy as np
import torch
import torch.nn.functional as F
from smplx.utils import find_joint_kin_chain
from mmhuman3d.core.conventions.keypoints_mapping import (
get_keypoint_idx,
get_keypoint_idxs_by_part,
)
from mmhuman3d.utils.geometry import weak_perspective_projection
The provided code snippet includes necessary dependencies for implementing the `flip_rotmat` function. Write a Python function `def flip_rotmat(pose_rotmat)` to solve the following problem:
Flip function. Flip rotmat.
Here is the function:
def flip_rotmat(pose_rotmat):
"""Flip function.
Flip rotmat.
"""
rot_mats = pose_rotmat.reshape(-1, 9).clone()
rot_mats[:, [1, 2, 3, 6]] *= -1
return rot_mats.view_as(pose_rotmat) | Flip function. Flip rotmat. |
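Since the flip negates a fixed set of entries, applying it twice recovers the input, and the identity (whose negated entries are all zero) is a fixed point. A quick sketch:

```python
import torch

def flip_rotmat(pose_rotmat):
    # Same as above: mirror rotation matrices for a left/right flip.
    rot_mats = pose_rotmat.reshape(-1, 9).clone()
    rot_mats[:, [1, 2, 3, 6]] *= -1
    return rot_mats.view_as(pose_rotmat)

R = torch.eye(3).expand(2, 3, 3).clone()
flipped = flip_rotmat(R)  # identity is unchanged by the flip
```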
14,379 | import os
from typing import List
import numpy as np
import torch
import torch.nn.functional as F
from smplx.utils import find_joint_kin_chain
from mmhuman3d.core.conventions.keypoints_mapping import (
get_keypoint_idx,
get_keypoint_idxs_by_part,
)
from mmhuman3d.utils.geometry import weak_perspective_projection
The provided code snippet includes necessary dependencies for implementing the `find_joint_global_rotation` function. Write a Python function `def find_joint_global_rotation(kin_chain, root_pose, body_pose)` to solve the following problem:
Computes the absolute rotation of a joint from the kinematic chain.
Here is the function:
def find_joint_global_rotation(kin_chain, root_pose, body_pose):
"""Computes the absolute rotation of a joint from the kinematic chain."""
# Create a single vector with all the poses
parents_pose = torch.cat([root_pose, body_pose], dim=1)[:, kin_chain]
output_pose = parents_pose[:, 0]
for idx in range(1, parents_pose.shape[1]):
output_pose = torch.bmm(parents_pose[:, idx], output_pose)
return output_pose | Computes the absolute rotation of a joint from the kinematic chain. |
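The chain accumulation can be verified with planar rotations, where composed angles simply add. This sketch feeds a two-joint chain `[1, 0]` (joint, then root) of z-rotations; `rot_z` is an illustrative helper:

```python
import math
import torch

def rot_z(a):
    # Rotation by angle a about the z-axis.
    c, s = math.cos(a), math.sin(a)
    return torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def find_joint_global_rotation(kin_chain, root_pose, body_pose):
    """Computes the absolute rotation of a joint from the kinematic chain."""
    parents_pose = torch.cat([root_pose, body_pose], dim=1)[:, kin_chain]
    output_pose = parents_pose[:, 0]
    for idx in range(1, parents_pose.shape[1]):
        output_pose = torch.bmm(parents_pose[:, idx], output_pose)
    return output_pose

root = rot_z(math.pi / 6)[None, None]           # (1, 1, 3, 3)
body = torch.stack([rot_z(math.pi / 3)])[None]  # (1, 1, 3, 3)
# Chain [1, 0]: the body joint's rotation composed onto the root's.
R = find_joint_global_rotation([1, 0], root, body)
# R should equal a rotation by pi/6 + pi/3 = pi/2 about z.
```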
14,380 | import os
from typing import List
import numpy as np
import torch
import torch.nn.functional as F
from smplx.utils import find_joint_kin_chain
from mmhuman3d.core.conventions.keypoints_mapping import (
get_keypoint_idx,
get_keypoint_idxs_by_part,
)
from mmhuman3d.utils.geometry import weak_perspective_projection
The provided code snippet includes necessary dependencies for implementing the `get_partial_smpl` function. Write a Python function `def get_partial_smpl(partial_mesh_path: str = 'data/partial_mesh/')` to solve the following problem:
Get partial mesh of SMPL. Returns: part_vert_faces
Here is the function:
def get_partial_smpl(partial_mesh_path: str = 'data/partial_mesh/'):
"""Get partial mesh of SMPL.
Returns:
part_vert_faces
"""
part_vert_faces = {}
for part in [
'lhand', 'rhand', 'face', 'arm', 'forearm', 'larm', 'rarm',
'lwrist', 'rwrist'
]:
part_vid_fname = os.path.join(partial_mesh_path,
f'smpl_{part}_vids.npz')
if os.path.exists(part_vid_fname):
part_vids = np.load(part_vid_fname)
part_vert_faces[part] = {
'vids': part_vids['vids'],
'faces': part_vids['faces']
}
else:
raise FileNotFoundError(f'{part_vid_fname} does not exist!')
return part_vert_faces | Get partial mesh of SMPL. Returns: part_vert_faces |
14,381 | from __future__ import absolute_import, division, print_function
import torch
from mmhuman3d.utils.transforms import aa_to_rotmat
def batch_get_pelvis_orient_svd(rel_pose_skeleton, rel_rest_pose, parents,
children, dtype):
"""Get pelvis orientation svd for batch data.
Args:
rel_pose_skeleton (torch.tensor):
Locations of root-normalized pose skeleton with shape (Bx29x3)
rel_rest_pose (torch.tensor):
Locations of rest/ template pose with shape (Bx29x3)
parents (List[int]): list of indexes of kinematic parents with len 29
children (List[int]): list of indexes of kinematic children with len 29
dtype (torch.dtype, optional):
Data type of the created tensors, the default is torch.float32
Returns:
rot_mat (torch.tensor):
Rotation matrix of pelvis with shape (Bx3x3)
"""
pelvis_child = [int(children[0])]
for i in range(1, parents.shape[0]):
if parents[i] == 0 and i not in pelvis_child:
pelvis_child.append(i)
rest_mat = []
target_mat = []
for child in pelvis_child:
rest_mat.append(rel_rest_pose[:, child].clone())
target_mat.append(rel_pose_skeleton[:, child].clone())
rest_mat = torch.cat(rest_mat, dim=2)
target_mat = torch.cat(target_mat, dim=2)
S = rest_mat.bmm(target_mat.transpose(1, 2))
mask_zero = S.sum(dim=(1, 2))
S_non_zero = S[mask_zero != 0].reshape(-1, 3, 3)
U, _, V = torch.svd(S_non_zero)
rot_mat = torch.zeros_like(S)
rot_mat[mask_zero == 0] = torch.eye(3, device=S.device)
rot_mat_non_zero = torch.bmm(V, U.transpose(1, 2))
rot_mat[mask_zero != 0] = rot_mat_non_zero
assert torch.sum(torch.isnan(rot_mat)) == 0, ('rot_mat', rot_mat)
return rot_mat
def batch_get_pelvis_orient(rel_pose_skeleton, rel_rest_pose, parents,
children, dtype):
"""Get pelvis orientation for batch data.
Args:
rel_pose_skeleton (torch.tensor):
Locations of root-normalized pose skeleton with shape (Bx29x3)
rel_rest_pose (torch.tensor):
Locations of rest/ template pose with shape (Bx29x3)
parents (List[int]): list of indexes of kinematic parents with len 29
children (List[int]): list of indexes of kinematic children with len 29
dtype (torch.dtype, optional):
Data type of the created tensors, the default is torch.float32
Returns:
rot_mat (torch.tensor):
Rotation matrix of pelvis with shape (Bx3x3)
"""
batch_size = rel_pose_skeleton.shape[0]
device = rel_pose_skeleton.device
assert children[0] == 3
pelvis_child = [int(children[0])]
for i in range(1, parents.shape[0]):
if parents[i] == 0 and i not in pelvis_child:
pelvis_child.append(i)
spine_final_loc = rel_pose_skeleton[:, int(children[0])].clone()
spine_rest_loc = rel_rest_pose[:, int(children[0])].clone()
# spine_norm = torch.norm(spine_final_loc, dim=1, keepdim=True)
# spine_norm = spine_final_loc / (spine_norm + 1e-8)
# rot_mat_spine = vectors2rotmat(spine_rest_loc, spine_final_loc, dtype)
# (B, 1, 1)
vec_final_norm = torch.norm(spine_final_loc, dim=1, keepdim=True)
vec_rest_norm = torch.norm(spine_rest_loc, dim=1, keepdim=True)
spine_norm = spine_final_loc / (vec_final_norm + 1e-8)
# (B, 3, 1)
axis = torch.cross(spine_rest_loc, spine_final_loc, dim=1)
axis_norm = torch.norm(axis, dim=1, keepdim=True)
axis = axis / (axis_norm + 1e-8)
angle = torch.arccos(
torch.sum(spine_rest_loc * spine_final_loc, dim=1, keepdim=True) /
(vec_rest_norm * vec_final_norm + 1e-8))
axis_angle = (angle * axis).squeeze()
# aa to rotmat
rot_mat_spine = aa_to_rotmat(axis_angle)
assert torch.sum(torch.isnan(rot_mat_spine)) == 0, ('rot_mat_spine',
rot_mat_spine)
center_final_loc = 0
center_rest_loc = 0
for child in pelvis_child:
if child == int(children[0]):
continue
center_final_loc = center_final_loc + rel_pose_skeleton[:,
child].clone()
center_rest_loc = center_rest_loc + rel_rest_pose[:, child].clone()
center_final_loc = center_final_loc / (len(pelvis_child) - 1)
center_rest_loc = center_rest_loc / (len(pelvis_child) - 1)
center_rest_loc = torch.matmul(rot_mat_spine, center_rest_loc)
center_final_loc = center_final_loc - torch.sum(
center_final_loc * spine_norm, dim=1, keepdim=True) * spine_norm
center_rest_loc = center_rest_loc - torch.sum(
center_rest_loc * spine_norm, dim=1, keepdim=True) * spine_norm
center_final_loc_norm = torch.norm(center_final_loc, dim=1, keepdim=True)
center_rest_loc_norm = torch.norm(center_rest_loc, dim=1, keepdim=True)
# (B, 3, 1)
axis = torch.cross(center_rest_loc, center_final_loc, dim=1)
axis_norm = torch.norm(axis, dim=1, keepdim=True)
# (B, 1, 1)
cos = torch.sum(
center_rest_loc * center_final_loc, dim=1, keepdim=True) / (
center_rest_loc_norm * center_final_loc_norm + 1e-8)
sin = axis_norm / (center_rest_loc_norm * center_final_loc_norm + 1e-8)
assert torch.sum(torch.isnan(cos)) == 0, ('cos', cos)
assert torch.sum(torch.isnan(sin)) == 0, ('sin', sin)
# (B, 3, 1)
axis = axis / (axis_norm + 1e-8)
# Convert location revolve to rot_mat by rodrigues
# (B, 1, 1)
rx, ry, rz = torch.split(axis, 1, dim=1)
zeros = torch.zeros((batch_size, 1, 1), dtype=dtype, device=device)
K = torch.cat([zeros, -rz, ry, rz, zeros, -rx, -ry, rx, zeros], dim=1) \
.view((batch_size, 3, 3))
ident = torch.eye(3, dtype=dtype, device=device).unsqueeze(dim=0)
rot_mat_center = ident + sin * K + (1 - cos) * torch.bmm(K, K)
rot_mat = torch.matmul(rot_mat_center, rot_mat_spine)
return rot_mat
def batch_get_3children_orient_svd(rel_pose_skeleton, rel_rest_pose,
rot_mat_chain_parent, children_list, dtype):
"""Get pelvis orientation for batch data.
Args:
rel_pose_skeleton (torch.tensor):
Locations of root-normalized pose skeleton with shape (Bx29x3)
rel_rest_pose (torch.tensor):
Locations of rest/ template pose with shape (Bx29x3)
rot_mat_chain_parents (torch.tensor):
parent's rotation matrix with shape (Bx3x3)
children (List[int]): list of indexes of kinematic children with len 29
dtype (torch.dtype, optional):
Data type of the created tensors, the default is torch.float32
Returns:
rot_mat (torch.tensor):
Child's rotation matrix with shape (Bx3x3)
"""
rest_mat = []
target_mat = []
for c, child in enumerate(children_list):
if isinstance(rel_pose_skeleton, list):
target = rel_pose_skeleton[c].clone()
template = rel_rest_pose[c].clone()
else:
target = rel_pose_skeleton[:, child].clone()
template = rel_rest_pose[:, child].clone()
target = torch.matmul(rot_mat_chain_parent.transpose(1, 2), target)
target_mat.append(target)
rest_mat.append(template)
rest_mat = torch.cat(rest_mat, dim=2)
target_mat = torch.cat(target_mat, dim=2)
S = rest_mat.bmm(target_mat.transpose(1, 2))
U, _, V = torch.svd(S)
rot_mat = torch.bmm(V, U.transpose(1, 2))
assert torch.sum(torch.isnan(rot_mat)) == 0, ('3children rot_mat', rot_mat)
return rot_mat
The provided code snippet includes necessary dependencies for implementing the `batch_inverse_kinematics_transform` function. Write a Python function `def batch_inverse_kinematics_transform(pose_skeleton, global_orient, phis, rest_pose, children, parents, dtype=torch.float32, train=False, leaf_thetas=None)` to solve the following problem:
Applies inverse kinematics transform to joints in a batch. Args: pose_skeleton (torch.tensor): Locations of estimated pose skeleton with shape (Bx29x3) global_orient (torch.tensor|none): Tensor of global rotation matrices with shape (Bx1x3x3) phis (torch.tensor): Rotation on bone axis parameters with shape (Bx23x2) rest_pose (torch.tensor): Locations of rest (Template) pose with shape (Bx29x3) children (List[int]): list of indexes of kinematic children with len 29 parents (List[int]): list of indexes of kinematic parents with len 29 dtype (torch.dtype, optional): Data type of the created tensors. Default: torch.float32 train (bool): Store True in train mode. Default: False leaf_thetas (torch.tensor, optional): Rotation matrices for 5 leaf joints (Bx5x3x3). Default: None Returns: rot_mats (torch.tensor): Rotation matrices of all joints with shape (Bx29x3x3) rotate_rest_pose (torch.tensor): Locations of rotated rest/ template pose with shape (Bx29x3)
Here is the function:
def batch_inverse_kinematics_transform(pose_skeleton,
global_orient,
phis,
rest_pose,
children,
parents,
dtype=torch.float32,
train=False,
leaf_thetas=None):
"""Applies inverse kinematics transform to joints in a batch.
Args:
pose_skeleton (torch.tensor):
Locations of estimated pose skeleton with shape (Bx29x3)
global_orient (torch.tensor|none):
Tensor of global rotation matrices with shape (Bx1x3x3)
phis (torch.tensor):
Rotation on bone axis parameters with shape (Bx23x2)
rest_pose (torch.tensor):
Locations of rest (Template) pose with shape (Bx29x3)
children (List[int]): list of indexes of kinematic children with len 29
parents (List[int]): list of indexes of kinematic parents with len 29
dtype (torch.dtype, optional):
Data type of the created tensors. Default: torch.float32
train (bool):
Store True in train mode. Default: False
leaf_thetas (torch.tensor, optional):
Rotation matrices for 5 leaf joints (Bx5x3x3). Default: None
Returns:
rot_mats (torch.tensor):
Rotation matrices of all joints with shape (Bx29x3x3)
rotate_rest_pose (torch.tensor):
Locations of rotated rest/ template pose with shape (Bx29x3)
"""
batch_size = pose_skeleton.shape[0]
device = pose_skeleton.device
rel_rest_pose = rest_pose.clone()
# vec_t_k = t_k - t_pa(k)
rel_rest_pose[:, 1:] -= rest_pose[:, parents[1:]].clone()
rel_rest_pose = torch.unsqueeze(rel_rest_pose, dim=-1)
# rotate the T pose
rotate_rest_pose = torch.zeros_like(rel_rest_pose)
# set up the root
rotate_rest_pose[:, 0] = rel_rest_pose[:, 0]
rel_pose_skeleton = torch.unsqueeze(pose_skeleton.clone(), dim=-1).detach()
rel_pose_skeleton[:, 1:] -= rel_pose_skeleton[:, parents[1:]].clone()
rel_pose_skeleton[:, 0] = rel_rest_pose[:, 0]
# the predicted final pose
final_pose_skeleton = torch.unsqueeze(pose_skeleton.clone(), dim=-1)
if train:
final_pose_skeleton[:, 1:] -= \
final_pose_skeleton[:, parents[1:]].clone()
final_pose_skeleton[:, 0] = rel_rest_pose[:, 0]
else:
final_pose_skeleton += \
rel_rest_pose[:, 0:1] - final_pose_skeleton[:, 0:1]
assert phis.dim() == 3
phis = phis / (torch.norm(phis, dim=2, keepdim=True) + 1e-8)
if train:
global_orient_mat = batch_get_pelvis_orient(rel_pose_skeleton.clone(),
rel_rest_pose.clone(),
parents, children, dtype)
else:
global_orient_mat = batch_get_pelvis_orient_svd(
rel_pose_skeleton.clone(), rel_rest_pose.clone(), parents,
children, dtype)
rot_mat_chain = [global_orient_mat]
rot_mat_local = [global_orient_mat]
# leaf nodes rot_mats
if leaf_thetas is not None:
leaf_cnt = 0
leaf_rot_mats = leaf_thetas.view([batch_size, 5, 3, 3])
for i in range(1, parents.shape[0]):
if children[i] == -1:
# leaf nodes
if leaf_thetas is not None:
rot_mat = leaf_rot_mats[:, leaf_cnt, :, :]
leaf_cnt += 1
rotate_rest_pose[:, i] = rotate_rest_pose[:, parents[
i]] + torch.matmul(rot_mat_chain[parents[i]],
rel_rest_pose[:, i])
rot_mat_chain.append(
torch.matmul(rot_mat_chain[parents[i]], rot_mat))
rot_mat_local.append(rot_mat)
elif children[i] == -3:
# three children
rotate_rest_pose[:, i] = rotate_rest_pose[:, parents[i]] + \
torch.matmul(rot_mat_chain[parents[i]], rel_rest_pose[:, i])
spine_child = []
for c in range(1, parents.shape[0]):
if parents[c] == i and c not in spine_child:
spine_child.append(c)
children_final_loc = []
children_rest_loc = []
for c in spine_child:
temp = final_pose_skeleton[:, c] - rotate_rest_pose[:, i]
children_final_loc.append(temp)
children_rest_loc.append(rel_rest_pose[:, c].clone())
rot_mat = batch_get_3children_orient_svd(children_final_loc,
children_rest_loc,
rot_mat_chain[parents[i]],
spine_child, dtype)
rot_mat_chain.append(
torch.matmul(rot_mat_chain[parents[i]], rot_mat))
rot_mat_local.append(rot_mat)
else:
# Naive Hybrik
if train:
# i: the index of k-th joint
child_rest_loc = rel_rest_pose[:, i]
child_final_loc = final_pose_skeleton[:, i]
# q_pa(k) = q_pa^2(k) + R_pa(k)(t_pa(k) - t_pa^2(k))
rotate_rest_pose[:, i] = rotate_rest_pose[:, parents[i]] + \
torch.matmul(rot_mat_chain[parents[i]], rel_rest_pose[:, i])
# Adaptive HybrIK
if not train:
# children[i]: the index of k-th joint
child_rest_loc = rel_rest_pose[:, children[i]]
child_final_loc = final_pose_skeleton[:, children[
i]] - rotate_rest_pose[:, i]
orig_vec = rel_pose_skeleton[:, children[i]]
template_vec = rel_rest_pose[:, children[i]]
norm_t = torch.norm(template_vec, dim=1, keepdim=True)
orig_vec = orig_vec * norm_t / torch.norm(
orig_vec, dim=1, keepdim=True)
diff = torch.norm(
child_final_loc - orig_vec, dim=1, keepdim=True)
big_diff_idx = torch.where(diff > 15 / 1000)[0]
child_final_loc[big_diff_idx] = orig_vec[big_diff_idx]
# train: vec_p_k = R_pa(k).T * (p_k - p_pa(k))
# test: vec_p_k = R_pa(k).T * (p_k - q_pa(k))
child_final_loc = torch.matmul(
rot_mat_chain[parents[i]].transpose(1, 2), child_final_loc)
# (B, 1, 1)
child_final_norm = torch.norm(child_final_loc, dim=1, keepdim=True)
child_rest_norm = torch.norm(child_rest_loc, dim=1, keepdim=True)
# vec_n
axis = torch.cross(child_rest_loc, child_final_loc, dim=1)
axis_norm = torch.norm(axis, dim=1, keepdim=True)
# (B, 1, 1)
cos = torch.sum(
child_rest_loc * child_final_loc, dim=1, keepdim=True) / (
child_rest_norm * child_final_norm + 1e-8)
sin = axis_norm / (child_rest_norm * child_final_norm + 1e-8)
# (B, 3, 1)
axis = axis / (axis_norm + 1e-8)
# Convert location revolve to rot_mat by rodrigues
# (B, 1, 1)
rx, ry, rz = torch.split(axis, 1, dim=1)
zeros = torch.zeros((batch_size, 1, 1), dtype=dtype, device=device)
K = torch.cat([zeros, -rz, ry, rz, zeros, -rx, -ry, rx, zeros],
dim=1).view((batch_size, 3, 3))
ident = torch.eye(3, dtype=dtype, device=device).unsqueeze(dim=0)
rot_mat_loc = ident + sin * K + (1 - cos) * torch.bmm(K, K)
# Convert spin to rot_mat
# (B, 3, 1)
spin_axis = child_rest_loc / child_rest_norm
# (B, 1, 1)
rx, ry, rz = torch.split(spin_axis, 1, dim=1)
zeros = torch.zeros((batch_size, 1, 1), dtype=dtype, device=device)
K = torch.cat([zeros, -rz, ry, rz, zeros, -rx, -ry, rx, zeros],
dim=1).view((batch_size, 3, 3))
ident = torch.eye(3, dtype=dtype, device=device).unsqueeze(dim=0)
# (B, 1, 1)
cos, sin = torch.split(phis[:, i - 1], 1, dim=1)
cos = torch.unsqueeze(cos, dim=2)
sin = torch.unsqueeze(sin, dim=2)
rot_mat_spin = ident + sin * K + (1 - cos) * torch.bmm(K, K)
rot_mat = torch.matmul(rot_mat_loc, rot_mat_spin)
rot_mat_chain.append(
torch.matmul(rot_mat_chain[parents[i]], rot_mat))
rot_mat_local.append(rot_mat)
# (B, K + 1, 3, 3)
rot_mats = torch.stack(rot_mat_local, dim=1)
    return rot_mats, rotate_rest_pose.squeeze(-1) | Applies inverse kinematics transform to joints in a batch. Args: pose_skeleton (torch.tensor): Locations of estimated pose skeleton with shape (Bx29x3) global_orient (torch.tensor|none): Tensor of global rotation matrices with shape (Bx1x3x3) phis (torch.tensor): Rotation on bone axis parameters with shape (Bx23x2) rest_pose (torch.tensor): Locations of rest (Template) pose with shape (Bx29x3) children (List[int]): list of indexes of kinematic children with len 29 parents (List[int]): list of indexes of kinematic parents with len 29 dtype (torch.dtype, optional): Data type of the created tensors. Default: torch.float32 train (bool): Set to True in train mode. Default: False leaf_thetas (torch.tensor, optional): Rotation matrices for 5 leaf joints (Bx5x3x3). Default: None Returns: rot_mats (torch.tensor): Rotation matrices of all joints with shape (Bx29x3x3) rotate_rest_pose (torch.tensor): Locations of rotated rest/template pose with shape (Bx29x3)
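The `rot_mat_loc`/`rot_mat_spin` construction in the code above is Rodrigues' rotation formula, R = I + sin·K + (1 − cos)·K², with K the skew-symmetric matrix of the unit axis. A minimal single-sample numpy sketch of that step (the `rodrigues` helper name is ours, not part of the source):

```python
import numpy as np

def rodrigues(axis, cos, sin):
    """Rotation matrix from a unit axis and the cos/sin of the angle,
    mirroring the K-matrix construction above: R = I + sin*K + (1-cos)*K@K."""
    rx, ry, rz = axis
    K = np.array([[0.0, -rz, ry],
                  [rz, 0.0, -rx],
                  [-ry, rx, 0.0]])
    return np.eye(3) + sin * K + (1.0 - cos) * (K @ K)

# Rotating the x-axis by 90 degrees about z should give the y-axis.
R = rodrigues(np.array([0.0, 0.0, 1.0]), np.cos(np.pi / 2), np.sin(np.pi / 2))
v = R @ np.array([1.0, 0.0, 0.0])
```

The batched code does the same thing with `torch.bmm`, feeding in cos/sin computed from dot and cross products instead of an explicit angle.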
14,382 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `aa_to_quat` function. Write a Python function `def aa_to_quat( axis_angle: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert axis_angle to quaternions. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
Here is the function:
def aa_to_quat(
axis_angle: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""
Convert axis_angle to quaternions.
Args:
axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
"""
if axis_angle.shape[-1] != 3:
        raise ValueError(f'Invalid input axis angles {axis_angle.shape}.')
t = Compose([axis_angle_to_quaternion])
return t(axis_angle) | Convert axis_angle to quaternions. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4). |
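The underlying `axis_angle_to_quaternion` step follows the half-angle formula q = (cos(θ/2), sin(θ/2)·n), where θ = ‖axis_angle‖ and n is the unit axis. A single-vector numpy sketch of that convention (`aa_to_quat_np` is a hypothetical helper, not the library function):

```python
import numpy as np

def aa_to_quat_np(axis_angle):
    """Axis-angle vector (angle * unit axis) -> quaternion (w, x, y, z)."""
    angle = np.linalg.norm(axis_angle)
    if angle < 1e-8:                       # near-zero rotation: identity quaternion
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = axis_angle / angle
    half = angle / 2.0
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

# A 180-degree rotation about y should give (approximately) (0, 0, 1, 0).
q = aa_to_quat_np(np.array([0.0, np.pi, 0.0]))
```

Library implementations additionally use a Taylor expansion of sin(θ/2)/θ near θ = 0 for numerical stability; the branch above is a simplification.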
14,383 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `rotmat_to_quat` function. Write a Python function `def rotmat_to_quat( matrix: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert rotation matrices to quaternions. Args: matrix (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3, 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
Here is the function:
def rotmat_to_quat(
matrix: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation matrixs to quaternions.
Args:
matrix (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3, 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
"""
if matrix.shape[-1] != 3 or matrix.shape[-2] != 3:
        raise ValueError(f'Invalid rotation matrix shape {matrix.shape}.')
t = Compose([matrix_to_quaternion])
    return t(matrix) | Convert rotation matrices to quaternions. Args: matrix (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3, 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
14,384 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `rotmat_to_rot6d` function. Write a Python function `def rotmat_to_rot6d( matrix: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert rotation matrices to rotation 6d representations. Args: matrix (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3, 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def rotmat_to_rot6d(
matrix: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation matrixs to rotation 6d representations.
Args:
matrix (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3, 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if matrix.shape[-1] != 3 or matrix.shape[-2] != 3:
        raise ValueError(f'Invalid rotation matrix shape {matrix.shape}.')
t = Compose([matrix_to_rotation_6d])
    return t(matrix) | Convert rotation matrices to rotation 6d representations. Args: matrix (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3, 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
14,385 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `quat_to_aa` function. Write a Python function `def quat_to_aa( quaternions: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert quaternions to axis angles. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
Here is the function:
def quat_to_aa(
quaternions: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert quaternions to axis angles.
Args:
quaternions (Union[torch.Tensor, numpy.ndarray]): input shape
            should be (..., 4). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
if quaternions.shape[-1] != 4:
        raise ValueError(f'Invalid input quaternions {quaternions.shape}.')
t = Compose([quaternion_to_axis_angle])
    return t(quaternions) | Convert quaternions to axis angles. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
14,386 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `quat_to_rotmat` function. Write a Python function `def quat_to_rotmat( quaternions: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert quaternions to rotation matrices. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3).
Here is the function:
def quat_to_rotmat(
quaternions: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert quaternions to rotation matrixs.
Args:
quaternions (Union[torch.Tensor, numpy.ndarray]): input shape
            should be (..., 4). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3).
"""
if quaternions.shape[-1] != 4:
raise ValueError(
            f'Invalid input quaternions shape {quaternions.shape}.')
t = Compose([quaternion_to_matrix])
    return t(quaternions) | Convert quaternions to rotation matrices. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3).
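The quaternion-to-matrix step uses the standard closed form built from products of the quaternion components. A single-quaternion numpy sketch of that formula (`quat_to_rotmat_np` is a hypothetical helper, not the library function; (w, x, y, z) ordering assumed):

```python
import numpy as np

def quat_to_rotmat_np(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix (standard formula)."""
    w, x, y, z = q / np.linalg.norm(q)     # normalize defensively
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# 90-degree rotation about z: q = (cos 45, 0, 0, sin 45); maps x-axis to y-axis.
R = quat_to_rotmat_np(np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]))
```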
14,387 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `rot6d_to_rotmat` function. Write a Python function `def rot6d_to_rotmat( rotation_6d: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert rotation 6d representations to rotation matrices. Args: rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def rot6d_to_rotmat(
rotation_6d: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation 6d representations to rotation matrixs.
Args:
rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 6). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if rotation_6d.shape[-1] != 6:
        raise ValueError(f'Invalid input rotation_6d {rotation_6d.shape}.')
t = Compose([rotation_6d_to_matrix])
    return t(rotation_6d) | Convert rotation 6d representations to rotation matrices. Args: rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
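The `rotation_6d_to_matrix` step recovers a rotation matrix by Gram-Schmidt orthonormalization of the two 3-vectors packed in the 6d code, as in Zhou et al. [1]. A single-vector numpy sketch, assuming the row convention in which the 6d code is the first two rows of the matrix (`rot6d_to_rotmat_np` is our name for the sketch):

```python
import numpy as np

def rot6d_to_rotmat_np(d6):
    """6d rotation code -> 3x3 rotation matrix via Gram-Schmidt (Zhou et al.)."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)           # first row: normalize a1
    a2_orth = a2 - np.dot(b1, a2) * b1     # remove the b1 component from a2
    b2 = a2_orth / np.linalg.norm(a2_orth)
    b3 = np.cross(b1, b2)                  # third row completes a right-handed frame
    return np.stack([b1, b2, b3])

# The 6d code taken from the identity's first two rows maps back to the identity.
R = rot6d_to_rotmat_np(np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))
```

Because only directions survive the normalization, non-orthogonal inputs such as (2, 0, 0, 1, 1, 0) are also projected onto a valid rotation, which is what makes this representation convenient for regression.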
14,388 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `aa_to_ee` function. Write a Python function `def aa_to_ee(axis_angle: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert axis angles to euler angles. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
Here is the function:
def aa_to_ee(axis_angle: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]:
"""Convert axis angles to euler angle.
Args:
axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
convention (str, optional): Convention string of three letters
from {“x”, “y”, and “z”}. Defaults to 'xyz'.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
if axis_angle.shape[-1] != 3:
raise ValueError(
            f'Invalid input axis_angle shape {axis_angle.shape}.')
t = Compose([axis_angle_to_matrix, matrix_to_euler_angles])
    return t(axis_angle, convention) | Convert axis angles to euler angles. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
14,389 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `aa_to_rot6d` function. Write a Python function `def aa_to_rot6d( axis_angle: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert axis angles to rotation 6d representations. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def aa_to_rot6d(
axis_angle: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert axis angles to rotation 6d representations.
Args:
axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if axis_angle.shape[-1] != 3:
        raise ValueError(f'Invalid input axis_angle {axis_angle.shape}.')
t = Compose([axis_angle_to_matrix, matrix_to_rotation_6d])
return t(axis_angle) | Convert axis angles to rotation 6d representations. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035 |
14,390 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `ee_to_aa` function. Write a Python function `def ee_to_aa(euler_angle: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert euler angles to axis angles. Args: euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
Here is the function:
def ee_to_aa(euler_angle: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]:
"""Convert euler angles to axis angles.
Args:
euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
convention (str, optional): Convention string of three letters
from {“x”, “y”, and “z”}. Defaults to 'xyz'.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
if euler_angle.shape[-1] != 3:
        raise ValueError(f'Invalid input euler_angle {euler_angle.shape}.')
t = Compose([
euler_angles_to_matrix, matrix_to_quaternion, quaternion_to_axis_angle
])
return t(euler_angle, convention) | Convert euler angles to axis angles. Args: euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). |
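The `euler_angles_to_matrix` step common to these euler-angle converters chains one single-axis rotation per letter of the convention string. A per-vector numpy sketch of that chaining (`euler_to_matrix` is our name; the extrinsic/intrinsic interpretation of the product order is a library convention we do not pin down here):

```python
import numpy as np

def euler_to_matrix(euler, convention='xyz'):
    """Compose one single-axis rotation per letter of `convention`, in order."""
    def single(axis, a):
        c, s = np.cos(a), np.sin(a)
        if axis == 'x':
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        if axis == 'y':
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    R = np.eye(3)
    for axis, angle in zip(convention, euler):
        R = R @ single(axis, angle)
    return R

# A single-axis case is convention-independent: 90 degrees about z maps x to y.
R = euler_to_matrix([0.0, 0.0, np.pi / 2], 'xyz')
```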
14,391 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `ee_to_quat` function. Write a Python function `def ee_to_quat(euler_angle: Union[torch.Tensor, numpy.ndarray], convention='xyz') -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert euler angles to quaternions. Args: euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
Here is the function:
def ee_to_quat(euler_angle: Union[torch.Tensor, numpy.ndarray],
convention='xyz') -> Union[torch.Tensor, numpy.ndarray]:
"""Convert euler angles to quaternions.
Args:
euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
convention (str, optional): Convention string of three letters
from {“x”, “y”, and “z”}. Defaults to 'xyz'.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
"""
if euler_angle.shape[-1] != 3:
        raise ValueError(f'Invalid input euler_angle {euler_angle.shape}.')
t = Compose([euler_angles_to_matrix, matrix_to_quaternion])
return t(euler_angle, convention) | Convert euler angles to quaternions. Args: euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4). |
14,392 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `ee_to_rot6d` function. Write a Python function `def ee_to_rot6d(euler_angle: Union[torch.Tensor, numpy.ndarray], convention='xyz') -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert euler angles to rotation 6d representation. Args: euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def ee_to_rot6d(euler_angle: Union[torch.Tensor, numpy.ndarray],
convention='xyz') -> Union[torch.Tensor, numpy.ndarray]:
"""Convert euler angles to rotation 6d representation.
Args:
euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
convention (str, optional): Convention string of three letters
from {“x”, “y”, and “z”}. Defaults to 'xyz'.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if euler_angle.shape[-1] != 3:
raise ValueError(f'Invalid input euler_angle shape {euler_angle.shape}.')
t = Compose([euler_angles_to_matrix, matrix_to_rotation_6d])
return t(euler_angle, convention) | Convert euler angles to rotation 6d representation. Args: euler_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 3). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035 |
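For intuition: the 6D representation of Zhou et al. is simply the first two rows of the rotation matrix, flattened, so the pipeline is euler angles → matrix → first two rows. A self-contained numpy sketch for a single 'xyz' rotation (one common convention; the library's convention handling is more general than this):

```python
import numpy as np

def euler_xyz_to_matrix(angles):
    """Rotation matrix for x-y-z euler angles in radians (one convention)."""
    x, y, z = angles
    cx, sx = np.cos(x), np.sin(x)
    cy, sy = np.cos(y), np.sin(y)
    cz, sz = np.cos(z), np.sin(z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def matrix_to_rot6d(R):
    # The 6D representation is the first two rows of R, flattened.
    return R[:2, :].reshape(6)

# Zero angles -> identity matrix -> 6D vector [1, 0, 0, 0, 1, 0].
rot6d = matrix_to_rot6d(euler_xyz_to_matrix([0.0, 0.0, 0.0]))
```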
14,393 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `quat_to_ee` function. Write a Python function `def quat_to_ee(quaternions: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert quaternions to euler angles. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
Here is the function:
def quat_to_ee(quaternions: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]:
"""Convert quaternions to euler angles.
Args:
quaternions (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 4). ndim of input is unlimited.
convention (str, optional): Convention string of three letters
from {“x”, “y”, and “z”}. Defaults to 'xyz'.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
if quaternions.shape[-1] != 4:
raise ValueError(f'Invalid input quaternions shape {quaternions.shape}.')
t = Compose([quaternion_to_matrix, matrix_to_euler_angles])
return t(quaternions, convention) | Convert quaternions to euler angles. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. convention (str, optional): Convention string of three letters from {“x”, “y”, and “z”}. Defaults to 'xyz'. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). |
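Both stages of the composition are standard: a unit quaternion in (w, x, y, z) order maps to a rotation matrix by the usual formula, and euler angles are then read off the matrix. A hedged numpy sketch of the first stage only (batching and conventions are handled by the library):

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion in (w, x, y, z) order."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w), 2 * (x * z + y * w)],
        [2 * (x * y + z * w), 1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w), 2 * (y * z + x * w), 1 - 2 * (x * x + y * y)],
    ])

# Quaternion for a 90 degree rotation about the z axis.
half = np.pi / 4
q_z90 = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
R = quat_to_matrix(q_z90)
```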
14,394 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `quat_to_rot6d` function. Write a Python function `def quat_to_rot6d( quaternions: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert quaternions to rotation 6d representations. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def quat_to_rot6d(
quaternions: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert quaternions to rotation 6d representations.
Args:
quaternions (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 4). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if quaternions.shape[-1] != 4:
raise ValueError(f'Invalid input quaternions shape {quaternions.shape}.')
t = Compose([quaternion_to_matrix, matrix_to_rotation_6d])
return t(quaternions) | Convert quaternions to rotation 6d representations. Args: quaternions (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 4). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 6). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035 |
14,395 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `rot6d_to_aa` function. Write a Python function `def rot6d_to_aa( rotation_6d: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert rotation 6d representations to axis angles. Args: rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def rot6d_to_aa(
rotation_6d: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation 6d representations to axis angles.
Args:
rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 6). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if rotation_6d.shape[-1] != 6:
raise ValueError(f'Invalid input rotation_6d shape {rotation_6d.shape}.')
t = Compose([
rotation_6d_to_matrix, matrix_to_quaternion, quaternion_to_axis_angle
])
return t(rotation_6d) | Convert rotation 6d representations to axis angles. Args: rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035 |
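The key step, `rotation_6d_to_matrix`, recovers a full rotation by Gram-Schmidt orthonormalization of the two 3-vectors, which is what makes the 6D parameterization continuous (Zhou et al.). A self-contained numpy sketch, assuming the row convention that pytorch3d uses:

```python
import numpy as np

def rot6d_to_matrix(d6):
    """Gram-Schmidt a 6-vector (two raw rows) back into a rotation matrix."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - (b1 @ a2) * b1          # remove the component along b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)             # third row completes the orthonormal basis
    return np.stack([b1, b2, b3])

# Even a non-unit, non-orthogonal input snaps to a valid rotation:
R = rot6d_to_matrix(np.array([2.0, 0.0, 0.0, 0.3, 1.0, 0.0]))
```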
14,396 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `rot6d_to_ee` function. Write a Python function `def rot6d_to_ee(rotation_6d: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert rotation 6d representations to euler angles. Args: rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def rot6d_to_ee(rotation_6d: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz') -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation 6d representations to euler angles.
Args:
rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 6). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if rotation_6d.shape[-1] != 6:
raise ValueError(f'Invalid input rotation_6d shape {rotation_6d.shape}.')
t = Compose([rotation_6d_to_matrix, matrix_to_euler_angles])
return t(rotation_6d, convention) | Convert rotation 6d representations to euler angles. Args: rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035 |
14,397 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
The provided code snippet includes necessary dependencies for implementing the `rot6d_to_quat` function. Write a Python function `def rot6d_to_quat( rotation_6d: Union[torch.Tensor, numpy.ndarray] ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert rotation 6d representations to quaternions. Args: rotation (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035
Here is the function:
def rot6d_to_quat(
rotation_6d: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation 6d representations to quaternions.
Args:
rotation_6d (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 6). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4).
[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
On the Continuity of Rotation Representations in Neural Networks.
IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Retrieved from http://arxiv.org/abs/1812.07035
"""
if rotation_6d.shape[-1] != 6:
raise ValueError(
f'Invalid input rotation_6d shape {rotation_6d.shape}.')
t = Compose([rotation_6d_to_matrix, matrix_to_quaternion])
return t(rotation_6d) | Convert rotation 6d representations to quaternions. Args: rotation (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 6). ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 4). [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035 |
14,398 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
TRANSFORMATION_AA_TO_SJA = torch.Tensor([
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 00, 'left_hip',
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 01, 'right_hip',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 02, 'spine1',
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 03, 'left_knee',
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 04, 'right_knee',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 05, 'spine2',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 06, 'left_ankle',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 07, 'right_ankle',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 08, 'spine3',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 09, 'left_foot',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 10, 'right_foot',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 11, 'neck',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 12, 'left_collar',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 13, 'right_collar',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 14, 'head',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 15, 'left_shoulder',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 16, 'right_shoulder',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 17, 'left_elbow',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 18, 'right_elbow',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 19, 'left_wrist',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 20, 'right_wrist',
])
TRANSFORMATION_SJA_TO_AA = \
torch.inverse(TRANSFORMATION_AA_TO_SJA)
The provided code snippet includes necessary dependencies for implementing the `aa_to_sja` function. Write a Python function `def aa_to_sja( axis_angle: Union[torch.Tensor, numpy.ndarray], R_t: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_AA_TO_SJA, R_t_inv: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_SJA_TO_AA ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert axis-angles to standard joint angles. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3), ndim of input is unlimited. R_t (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from original axis-angle coordinate system to standard joint angle coordinate system, ndim of input is unlimited. R_t_inv (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from standard joint angle coordinate system to original axis-angle coordinate system, ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
Here is the function:
def aa_to_sja(
axis_angle: Union[torch.Tensor, numpy.ndarray],
R_t: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_AA_TO_SJA,
R_t_inv: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_SJA_TO_AA
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert axis-angles to standard joint angles.
Args:
axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 21, 3), ndim of input is unlimited.
R_t (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 21, 3, 3). Transformation matrices from
original axis-angle coordinate system to
standard joint angle coordinate system,
ndim of input is unlimited.
R_t_inv (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 21, 3, 3). Transformation matrices from
standard joint angle coordinate system to
original axis-angle coordinate system,
ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
def _aa_to_sja(aa, R_t, R_t_inv):
R_aa = axis_angle_to_matrix(aa)
R_sja = R_t @ R_aa @ R_t_inv
sja = matrix_to_euler_angles(R_sja, convention='XYZ')
return sja
if axis_angle.shape[-2:] != (21, 3):
raise ValueError(
f'Invalid input axis angles shape {axis_angle.shape}.')
if R_t.shape[-3:] != (21, 3, 3):
raise ValueError(f'Invalid input R_t shape {R_t.shape}.')
if R_t_inv.shape[-3:] != (21, 3, 3):
raise ValueError(f'Invalid input R_t_inv shape {R_t_inv.shape}.')
t = Compose([_aa_to_sja])
return t(axis_angle, R_t=R_t, R_t_inv=R_t_inv) | Convert axis-angles to standard joint angles. Args: axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3), ndim of input is unlimited. R_t (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from original axis-angle coordinate system to standard joint angle coordinate system, ndim of input is unlimited. R_t_inv (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from standard joint angle coordinate system to original axis-angle coordinate system, ndim of input is unlimited. Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). |
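The core of `_aa_to_sja` is a change of basis: each joint rotation is conjugated by its per-joint transform, R_sja = R_t @ R_aa @ R_t_inv. A small numpy check of the idea, using the left_hip row of the table above; conjugation changes the coordinate frame but preserves the rotation angle (the trace):

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# left_hip entry of TRANSFORMATION_AA_TO_SJA
R_t = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, -1.0, 0.0]])
R_aa = rot_x(0.3)
R_sja = R_t @ R_aa @ np.linalg.inv(R_t)

# trace(R) = 1 + 2*cos(theta) is invariant under conjugation
angle_preserved = np.isclose(np.trace(R_sja), np.trace(R_aa))
```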
14,399 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
class Compose:
def __init__(self, transforms: list):
"""Composes several transforms together. This transform does not
support torchscript.
Args:
transforms (list): (list of transform functions)
"""
self.transforms = transforms
def __call__(self,
rotation: Union[torch.Tensor, numpy.ndarray],
convention: str = 'xyz',
**kwargs):
convention = convention.lower()
if not (set(convention) == set('xyz') and len(convention) == 3):
raise ValueError(f'Invalid convention {convention}.')
if isinstance(rotation, numpy.ndarray):
data_type = 'numpy'
rotation = torch.FloatTensor(rotation)
elif isinstance(rotation, torch.Tensor):
data_type = 'tensor'
else:
raise TypeError(
'Type of rotation should be torch.Tensor or numpy.ndarray')
for t in self.transforms:
if 'convention' in t.__code__.co_varnames:
rotation = t(rotation, convention.upper(), **kwargs)
else:
rotation = t(rotation, **kwargs)
if data_type == 'numpy':
rotation = rotation.detach().cpu().numpy()
return rotation
TRANSFORMATION_AA_TO_SJA = torch.Tensor([
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 00, 'left_hip',
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 01, 'right_hip',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 02, 'spine1',
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 03, 'left_knee',
[[1, 0, 0], [0, 0, 1], [0, -1, 0]], # 04, 'right_knee',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 05, 'spine2',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 06, 'left_ankle',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 07, 'right_ankle',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 08, 'spine3',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 09, 'left_foot',
[[1, 0, 0], [0, 1, 0], [0, 0, 1]], # 10, 'right_foot',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 11, 'neck',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 12, 'left_collar',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 13, 'right_collar',
[[1, 0, 0], [0, 0, -1], [0, 1, 0]], # 14, 'head',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 15, 'left_shoulder',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 16, 'right_shoulder',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 17, 'left_elbow',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 18, 'right_elbow',
[[0, 0, -1], [0, 1, 0], [1, 0, 0]], # 19, 'left_wrist',
[[0, 0, 1], [0, 1, 0], [-1, 0, 0]], # 20, 'right_wrist',
])
TRANSFORMATION_SJA_TO_AA = \
torch.inverse(TRANSFORMATION_AA_TO_SJA)
The provided code snippet includes necessary dependencies for implementing the `sja_to_aa` function. Write a Python function `def sja_to_aa( sja: Union[torch.Tensor, numpy.ndarray], R_t: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_AA_TO_SJA, R_t_inv: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_SJA_TO_AA ) -> Union[torch.Tensor, numpy.ndarray]` to solve the following problem:
Convert standard joint angles to axis angles. Args: sja (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3). ndim of input is unlimited. R_t (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from original axis-angle coordinate system to standard joint angle coordinate system R_t_inv (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from standard joint angle coordinate system to original axis-angle coordinate system Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
Here is the function:
def sja_to_aa(
sja: Union[torch.Tensor, numpy.ndarray],
R_t: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_AA_TO_SJA,
R_t_inv: Union[torch.Tensor, numpy.ndarray] = TRANSFORMATION_SJA_TO_AA
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert standard joint angles to axis angles.
Args:
sja (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 21, 3). ndim of input is unlimited.
R_t (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 21, 3, 3). Transformation matrices from
original axis-angle coordinate system to
standard joint angle coordinate system
R_t_inv (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 21, 3, 3). Transformation matrices from
standard joint angle coordinate system to
original axis-angle coordinate system
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
def _sja_to_aa(sja, R_t, R_t_inv):
R_sja = euler_angles_to_matrix(sja, convention='XYZ')
R_aa = R_t_inv @ R_sja @ R_t
aa = quaternion_to_axis_angle(matrix_to_quaternion(R_aa))
return aa
if sja.shape[-2:] != (21, 3):
raise ValueError(f'Invalid input sja shape {sja.shape}.')
if R_t.shape[-3:] != (21, 3, 3):
raise ValueError(f'Invalid input R_t shape {R_t.shape}.')
if R_t_inv.shape[-3:] != (21, 3, 3):
raise ValueError(f'Invalid input R_t_inv shape {R_t_inv.shape}.')
t = Compose([_sja_to_aa])
return t(sja, R_t=R_t, R_t_inv=R_t_inv) | Convert standard joint angles to axis angles. Args: sja (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3). ndim of input is unlimited. R_t (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from original axis-angle coordinate system to standard joint angle coordinate system R_t_inv (Union[torch.Tensor, numpy.ndarray]): input shape should be (..., 21, 3, 3). Transformation matrices from standard joint angle coordinate system to original axis-angle coordinate system Returns: Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3). |
14,400 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
The provided code snippet includes necessary dependencies for implementing the `make_homegeneous_rotmat_batch` function. Write a Python function `def make_homegeneous_rotmat_batch(input: torch.Tensor) -> torch.Tensor` to solve the following problem:
Appends a row of [0,0,0,1] to a batch size x 3 x 4 Tensor. Parameters ---------- :param input: A tensor of dimensions batch size x 3 x 4 :return: A tensor batch size x 4 x 4 (appended with 0,0,0,1)
Here is the function:
def make_homegeneous_rotmat_batch(input: torch.Tensor) -> torch.Tensor:
"""Appends a row of [0,0,0,1] to a batch size x 3 x 4 Tensor.
Parameters
----------
:param input: A tensor of dimensions batch size x 3 x 4
:return: A tensor batch size x 4 x 4 (appended with 0,0,0,1)
"""
batch_size = input.shape[0]
row_append = torch.tensor([0.0, 0.0, 0.0, 1.0], dtype=torch.float)
row_append.requires_grad = False
padded_tensor = torch.cat(
[input, row_append.view(1, 1, 4).repeat(batch_size, 1, 1)], dim=1)
return padded_tensor | Appends a row of [0,0,0,1] to a batch size x 3 x 4 Tensor. Parameters ---------- :param input: A tensor of dimensions batch size x 3 x 4 :return: A tensor batch size x 4 x 4 (appended with 0,0,0,1) |
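The padding turns each 3 x 4 [R|t] matrix in the batch into a 4 x 4 homogeneous transform, so transforms can then be chained by plain matrix multiplication. An equivalent numpy sketch of the same operation:

```python
import numpy as np

# A batch of two 3 x 4 [R|t] matrices: identity rotation, zero translation.
batch = np.zeros((2, 3, 4))
batch[:, :3, :3] = np.eye(3)

# Append the constant row [0, 0, 0, 1] to every matrix in the batch.
row = np.array([0.0, 0.0, 0.0, 1.0])
padded = np.concatenate([batch, np.broadcast_to(row, (2, 1, 4))], axis=1)
```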
14,401 | from typing import Union
import numpy
import torch
from mmhuman3d.core.conventions.joints_mapping.standard_joint_angles import (
TRANSFORMATION_AA_TO_SJA,
TRANSFORMATION_SJA_TO_AA,
)
from .logger import get_root_logger
The provided code snippet includes necessary dependencies for implementing the `make_homegeneous_rotmat` function. Write a Python function `def make_homegeneous_rotmat(input: torch.Tensor) -> torch.Tensor` to solve the following problem:
Appends a row of [0,0,0,1] to a 3 x 4 Tensor. Parameters ---------- :param input: A tensor of dimensions 3 x 4 :return: A tensor 4 x 4 (appended with 0,0,0,1)
Here is the function:
def make_homegeneous_rotmat(input: torch.Tensor) -> torch.Tensor:
"""Appends a row of [0,0,0,1] to a 3 x 4 Tensor.
Parameters
----------
:param input: A tensor of dimensions 3 x 4
:return: A tensor 4 x 4 (appended with 0,0,0,1)
"""
row_append = torch.tensor([0.0, 0.0, 0.0, 1.0], dtype=torch.float)
row_append.requires_grad = False
padded_tensor = torch.cat([input, row_append.view(1, 4)], dim=0)  # append as the last row
return padded_tensor | Appends a row of [0,0,0,1] to a 3 x 4 Tensor. Parameters ---------- :param input: A tensor of dimensions 3 x 4 :return: A tensor 4 x 4 (appended with 0,0,0,1) |
14,402 | import copy
import os
from typing import Iterable, Optional, Union
import numpy as np
import torch
from pytorch3d.renderer.cameras import CamerasBase
from mmhuman3d.core.cameras import build_cameras
from mmhuman3d.core.conventions.cameras.convert_convention import (
convert_camera_matrix,
convert_world_view,
)
from mmhuman3d.core.conventions.cameras.convert_projection import \
convert_perspective_to_weakperspective
from mmhuman3d.models.body_models.builder import build_body_model
from mmhuman3d.utils.transforms import aa_to_rotmat, rotmat_to_aa
def convert_camera_matrix(
K: Optional[Union[torch.Tensor, np.ndarray]] = None,
R: Optional[Union[torch.Tensor, np.ndarray]] = None,
T: Optional[Union[torch.Tensor, np.ndarray]] = None,
is_perspective: bool = True,
convention_src: str = 'opencv',
convention_dst: str = 'pytorch3d',
in_ndc_src: bool = True,
in_ndc_dst: bool = True,
resolution_src: Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]] = None,
resolution_dst: Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]] = None,
camera_conventions: dict = CAMERA_CONVENTIONS,
) -> Tuple[Union[torch.Tensor, np.ndarray], Union[torch.Tensor, np.ndarray],
Union[torch.Tensor, np.ndarray]]:
"""Convert the intrinsic matrix K and extrinsic matrix [R|T] from source
convention to destination convention.
Args:
K (Union[torch.Tensor, np.ndarray]): Intrinsic matrix,
shape should be (batch_size, 4, 4) or (batch_size, 3, 3).
Will be ignored if None.
R (Optional[Union[torch.Tensor, np.ndarray]], optional):
Extrinsic rotation matrix. Shape should be (batch_size, 3, 3).
Will be identity if None.
Defaults to None.
T (Optional[Union[torch.Tensor, np.ndarray]], optional):
Extrinsic translation matrix. Shape should be (batch_size, 3).
Will be zeros if None.
Defaults to None.
is_perspective (bool, optional): whether is perspective projection.
Defaults to True.
# Camera dependent args
convention_src (str, optional): convention of source camera,
convention_dst (str, optional): convention of destination camera,
We define the convention of cameras by the order of right, front and
up.
E.g., the first one is pyrender and its convention should be
'+x+z+y'. '+' could be ignored.
The second one is opencv and its convention should be '+x-z-y'.
The third one is pytorch3d and its convention should be '-xzy'.
opengl(pyrender) opencv pytorch3d
y z y
| / |
| / |
|_______x /________x x________ |
/ | /
/ | /
z / y | z /
in_ndc_src (bool, optional): Whether is the source camera defined
in ndc.
Defaults to True.
in_ndc_dst (bool, optional): Whether is the destination camera defined
in ndc.
Defaults to True.
in camera_convention, we define these args as:
1). `left_mm_ex` means extrinsic matrix [`R`| `T`] is left matrix
multiplication defined.
2). `left_mm_in` means intrinsic matrix `K` is left
matrix multiplication defined.
3) `view_to_world` means extrinsic matrix [`R`| `T`] is defined
as view to world.
resolution_src (Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]], optional):
Source camera image size of (height, width).
Required if defined in screen.
Will be square if int.
Shape should be (2,) if `array` or `tensor`.
Defaults to None.
resolution_dst (Optional[Union[int, Tuple[int, int], torch.Tensor,
np.ndarray]], optional):
Destination camera image size of (height, width).
Required if defined in screen.
Will be square if int.
Shape should be (2,) if `array` or `tensor`.
Defaults to None.
camera_conventions: (dict, optional): `dict` containing
pre-defined camera convention information.
Defaults to CAMERA_CONVENTIONS.
Raises:
TypeError: K, R, T should all be `torch.Tensor` or `np.ndarray`.
Returns:
Tuple[Union[torch.Tensor, None], Union[torch.Tensor, None],
Union[torch.Tensor, None]]:
Converted K, R, T matrix of `tensor`.
"""
convention_dst = convention_dst.lower()
convention_src = convention_src.lower()
assert convention_dst in CAMERA_CONVENTIONS
assert convention_src in CAMERA_CONVENTIONS
left_mm_ex_src = CAMERA_CONVENTIONS[convention_src].get(
'left_mm_extrinsic', True)
view_to_world_src = CAMERA_CONVENTIONS[convention_src].get(
'view_to_world', False)
left_mm_in_src = CAMERA_CONVENTIONS[convention_src].get(
'left_mm_intrinsic', False)
left_mm_ex_dst = CAMERA_CONVENTIONS[convention_dst].get(
'left_mm_extrinsic', True)
view_to_world_dst = CAMERA_CONVENTIONS[convention_dst].get(
'view_to_world', False)
left_mm_in_dst = CAMERA_CONVENTIONS[convention_dst].get(
'left_mm_intrinsic', False)
sign_src, axis_src = enc_camera_convention(convention_src,
camera_conventions)
sign_dst, axis_dst = enc_camera_convention(convention_dst,
camera_conventions)
sign = torch.Tensor(sign_dst) / torch.Tensor(sign_src)
type_ = []
for x in [K, R, T]:
if x is not None:
type_.append(type(x))
if len(type_) > 0:
if not all(x == type_[0] for x in type_):
raise TypeError('Input type should be the same.')
use_numpy = False
if np.ndarray in type_:
use_numpy = True
# convert raw matrix to tensor
if isinstance(K, np.ndarray):
new_K = torch.Tensor(K)
elif K is None:
new_K = None
elif isinstance(K, torch.Tensor):
new_K = K.clone()
else:
raise TypeError(
f'K should be `torch.Tensor` or `numpy.ndarray`, type(K): '
f'{type(K)}')
if isinstance(R, np.ndarray):
new_R = torch.Tensor(R).view(-1, 3, 3)
elif R is None:
new_R = torch.eye(3, 3)[None]
elif isinstance(R, torch.Tensor):
new_R = R.clone().view(-1, 3, 3)
else:
raise TypeError(
f'R should be `torch.Tensor` or `numpy.ndarray`, type(R): '
f'{type(R)}')
if isinstance(T, np.ndarray):
new_T = torch.Tensor(T).view(-1, 3)
elif T is None:
new_T = torch.zeros(1, 3)
elif isinstance(T, torch.Tensor):
new_T = T.clone().view(-1, 3)
else:
raise TypeError(
f'T should be `torch.Tensor` or `numpy.ndarray`, type(T): '
f'{type(T)}')
if axis_dst != axis_src:
new_R = ee_to_rotmat(
rotmat_to_ee(new_R, convention=axis_src), convention=axis_dst)
# convert extrinsic to world_to_view
if view_to_world_src is True:
new_R, new_T = convert_world_view(new_R, new_T)
# right mm to left mm
if (not left_mm_ex_src) and left_mm_ex_dst:
new_R *= sign.to(new_R.device)
new_R = new_R.permute(0, 2, 1)
# left mm to right mm
elif left_mm_ex_src and (not left_mm_ex_dst):
new_R = new_R.permute(0, 2, 1)
new_R *= sign.to(new_R.device)
# right_mm to right mm
elif (not left_mm_ex_dst) and (not left_mm_ex_src):
new_R *= sign.to(new_R.device)
# left mm to left mm
elif left_mm_ex_src and left_mm_ex_dst:
new_R *= sign.view(3, 1).to(new_R.device)
new_T *= sign.to(new_T.device)
# convert extrinsic to as definition
if view_to_world_dst is True:
new_R, new_T = convert_world_view(new_R, new_T)
# in ndc or in screen
if in_ndc_dst is False and in_ndc_src is True:
assert resolution_dst is not None, \
'dst in screen, should specify resolution_dst.'
if in_ndc_src is False and in_ndc_dst is True:
assert resolution_src is not None, \
'src in screen, should specify resolution_src.'
if resolution_dst is None:
resolution_dst = 2.0
if resolution_src is None:
resolution_src = 2.0
if new_K is not None:
if left_mm_in_src is False and left_mm_in_dst is True:
new_K = new_K.permute(0, 2, 1)
if new_K.shape[-2:] == (3, 3):
new_K = convert_K_3x3_to_4x4(new_K, is_perspective)
# src in ndc, dst in screen
if in_ndc_src is True and (in_ndc_dst is False):
new_K = convert_ndc_to_screen(
K=new_K,
is_perspective=is_perspective,
sign=sign.to(new_K.device),
resolution=resolution_dst)
# src in screen, dst in ndc
elif in_ndc_src is False and in_ndc_dst is True:
new_K = convert_screen_to_ndc(
K=new_K,
is_perspective=is_perspective,
sign=sign.to(new_K.device),
resolution=resolution_src)
# src in ndc, dst in ndc
elif in_ndc_src is True and in_ndc_dst is True:
if is_perspective:
new_K[:, 0, 2] *= sign[0].to(new_K.device)
new_K[:, 1, 2] *= sign[1].to(new_K.device)
else:
new_K[:, 0, 3] *= sign[0].to(new_K.device)
new_K[:, 1, 3] *= sign[1].to(new_K.device)
# src in screen, dst in screen
else:
pass
if left_mm_in_src is True and left_mm_in_dst is False:
new_K = new_K.permute(0, 2, 1)
num_batch = max(new_K.shape[0], new_R.shape[0], new_T.shape[0])
if new_K.shape[0] == 1:
new_K = new_K.repeat(num_batch, 1, 1)
if new_R.shape[0] == 1:
new_R = new_R.repeat(num_batch, 1, 1)
if new_T.shape[0] == 1:
new_T = new_T.repeat(num_batch, 1)
if use_numpy:
if isinstance(new_K, torch.Tensor):
new_K = new_K.cpu().numpy()
if isinstance(new_R, torch.Tensor):
new_R = new_R.cpu().numpy()
if isinstance(new_T, torch.Tensor):
new_T = new_T.cpu().numpy()
return new_K, new_R, new_T
def convert_world_view(
R: Union[torch.Tensor, np.ndarray], T: Union[torch.Tensor, np.ndarray]
) -> Tuple[Union[torch.Tensor, np.ndarray], Union[torch.Tensor, np.ndarray]]:
"""Convert between view_to_world and world_to_view defined extrinsic
matrix.
Args:
R (Union[torch.Tensor, np.ndarray]): extrinsic rotation matrix.
shape should be (batch, 3, 4)
T (Union[torch.Tensor, np.ndarray]): extrinsic translation matrix.
Raises:
TypeError: R and T should be of the same type.
Returns:
Tuple[Union[torch.Tensor, np.ndarray], Union[torch.Tensor,
np.ndarray]]: output R, T.
"""
if not (type(R) is type(T)):
raise TypeError(
f'R: {type(R)}, T: {type(T)} should have the same type.')
if isinstance(R, torch.Tensor):
R = R.clone()
T = T.clone()
R = R.permute(0, 2, 1)
T = -(R @ T.view(-1, 3, 1)).view(-1, 3)
elif isinstance(R, np.ndarray):
R = R.copy()
T = T.copy()
R = R.transpose(0, 2, 1)
T = -(R @ T.reshape(-1, 3, 1)).reshape(-1, 3)
else:
raise TypeError(f'R: {type(R)}, T: {type(T)} should be torch.Tensor '
f'or numpy.ndarray.')
return R, T
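The inversion in `convert_world_view` is an involution: applying it twice recovers the original extrinsics. A small NumPy sketch (names are my own) demonstrating the round trip:

```python
import numpy as np

def convert_world_view_np(R, T):
    # NumPy version of the inversion above: R' = R^T, T' = -R^T @ T.
    R_new = np.transpose(R, (0, 2, 1))
    T_new = -(R_new @ T.reshape(-1, 3, 1)).reshape(-1, 3)
    return R_new, T_new

# 90-degree rotation about z, translation along x
R = np.array([[[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]])
T = np.array([[1.0, 0.0, 0.0]])
R2, T2 = convert_world_view_np(*convert_world_view_np(R, T))
```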
def convert_perspective_to_weakperspective(
K: Union[torch.Tensor, np.ndarray],
zmean: Union[torch.Tensor, np.ndarray, float, int],
resolution: Union[int, Tuple[int, int], torch.Tensor,
np.ndarray] = None,
in_ndc: bool = False,
convention: str = 'opencv') -> Union[torch.Tensor, np.ndarray]:
"""Convert perspective to weakperspective intrinsic matrix.
Args:
K (Union[torch.Tensor, np.ndarray]): input intrinsic matrix, shape
should be (batch, 4, 4) or (batch, 3, 3).
zmean (Union[torch.Tensor, np.ndarray, int, float]): zmean for object.
shape should be (batch, ) or singleton number.
resolution (Union[int, Tuple[int, int], torch.Tensor, np.ndarray],
optional): (height, width) of image. Defaults to None.
in_ndc (bool, optional): whether defined in ndc. Defaults to False.
convention (str, optional): camera convention. Defaults to 'opencv'.
Returns:
Union[torch.Tensor, np.ndarray]: output weakperspective pred_cam,
shape is (batch, 4)
"""
assert K is not None, 'K is required.'
K, _, _ = convert_camera_matrix(
K=K,
convention_src=convention,
convention_dst='pytorch3d',
is_perspective=True,
in_ndc_src=in_ndc,
in_ndc_dst=True,
resolution_src=resolution)
if isinstance(zmean, np.ndarray):
zmean = torch.Tensor(zmean)
elif isinstance(zmean, (float, int)):
zmean = torch.Tensor([zmean])
zmean = zmean.view(-1)
num_frame = max(zmean.shape[0], K.shape[0])
new_K = torch.eye(4, 4)[None].repeat(num_frame, 1, 1)
fx = K[:, 0, 0]
fy = K[:, 1, 1]
cx = K[:, 0, 2]
cy = K[:, 1, 2]
new_K[:, 0, 0] = fx / zmean
new_K[:, 1, 1] = fy / zmean
new_K[:, 0, 3] = cx
new_K[:, 1, 3] = cy
return new_K
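For a single camera, the conversion above reduces to dividing the focal lengths by the object's mean depth and carrying the principal point over. A hedged NumPy sketch (assuming `K` is a 3 x 3 perspective intrinsic; the helper name is illustrative):

```python
import numpy as np

def perspective_to_weakperspective_np(K, zmean):
    # Scale the focal lengths by the mean depth; the principal point
    # moves to the translation slots of the 4 x 4 weak-perspective K.
    new_K = np.eye(4)
    new_K[0, 0] = K[0, 0] / zmean
    new_K[1, 1] = K[1, 1] / zmean
    new_K[0, 3] = K[0, 2]
    new_K[1, 3] = K[1, 2]
    return new_K

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
W = perspective_to_weakperspective_np(K, zmean=5.0)
```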
def build_body_model(cfg):
"""Build body_models."""
if cfg is None:
return None
return BODY_MODELS.build(cfg)
def aa_to_rotmat(
axis_angle: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""
Convert axis_angle to rotation matrixs.
Args:
axis_angle (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3, 3).
"""
if axis_angle.shape[-1] != 3:
raise ValueError(
f'Invalid input axis angles shape f{axis_angle.shape}.')
t = Compose([axis_angle_to_matrix])
return t(axis_angle)
def rotmat_to_aa(
matrix: Union[torch.Tensor, numpy.ndarray]
) -> Union[torch.Tensor, numpy.ndarray]:
"""Convert rotation matrixs to axis angles.
Args:
matrix (Union[torch.Tensor, numpy.ndarray]): input shape
should be (..., 3, 3). ndim of input is unlimited.
Returns:
Union[torch.Tensor, numpy.ndarray]: shape would be (..., 3).
"""
if matrix.shape[-1] != 3 or matrix.shape[-2] != 3:
raise ValueError(f'Invalid rotation matrix shape f{matrix.shape}.')
t = Compose([matrix_to_quaternion, quaternion_to_axis_angle])
return t(matrix)
The provided code snippet includes necessary dependencies for implementing the `convert_smpl_from_opencv_calibration` function. Write a Python function `def convert_smpl_from_opencv_calibration( R: Union[np.ndarray, torch.Tensor], T: Union[np.ndarray, torch.Tensor], K: Optional[Union[np.ndarray, torch.Tensor]] = None, resolution: Optional[Union[Iterable[int], int]] = None, verts: Optional[Union[np.ndarray, torch.Tensor]] = None, poses: Optional[Union[np.ndarray, torch.Tensor]] = None, transl: Optional[Union[np.ndarray, torch.Tensor]] = None, model_path: Optional[str] = None, betas: Optional[Union[np.ndarray, torch.Tensor]] = None, model_type: Optional[str] = 'smpl', gender: Optional[str] = 'neutral')` to solve the following problem:
Convert opencv calibration smpl poses&transl parameters to model based poses&transl or verts. Args: R (Union[np.ndarray, torch.Tensor]): (frame, 3, 3) T (Union[np.ndarray, torch.Tensor]): [(frame, 3) K (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 3, 3) or (frame, 4, 4). Defaults to None. resolution (Optional[Union[Iterable[int], int]], optional): (height, width). Defaults to None. verts (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, num_verts, 3). Defaults to None. poses (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 72/165). Defaults to None. transl (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 3). Defaults to None. model_path (Optional[str], optional): model path. Defaults to None. betas (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 10). Defaults to None. model_type (Optional[str], optional): choose in 'smpl' or 'smplx'. Defaults to 'smpl'. gender (Optional[str], optional): choose in 'male', 'female', 'neutral'. Defaults to 'neutral'. Raises: ValueError: wrong input poses or transl. Returns: Tuple[torch.Tensor]: Return converted poses, transl, pred_cam or verts, pred_cam.
Here is the function:
def convert_smpl_from_opencv_calibration(
R: Union[np.ndarray, torch.Tensor],
T: Union[np.ndarray, torch.Tensor],
K: Optional[Union[np.ndarray, torch.Tensor]] = None,
resolution: Optional[Union[Iterable[int], int]] = None,
verts: Optional[Union[np.ndarray, torch.Tensor]] = None,
poses: Optional[Union[np.ndarray, torch.Tensor]] = None,
transl: Optional[Union[np.ndarray, torch.Tensor]] = None,
model_path: Optional[str] = None,
betas: Optional[Union[np.ndarray, torch.Tensor]] = None,
model_type: Optional[str] = 'smpl',
gender: Optional[str] = 'neutral'):
"""Convert opencv calibration smpl poses&transl parameters to model based
poses&transl or verts.
Args:
R (Union[np.ndarray, torch.Tensor]): (frame, 3, 3)
T (Union[np.ndarray, torch.Tensor]): (frame, 3)
K (Optional[Union[np.ndarray, torch.Tensor]], optional):
(frame, 3, 3) or (frame, 4, 4). Defaults to None.
resolution (Optional[Union[Iterable[int], int]], optional):
(height, width). Defaults to None.
verts (Optional[Union[np.ndarray, torch.Tensor]], optional):
(frame, num_verts, 3). Defaults to None.
poses (Optional[Union[np.ndarray, torch.Tensor]], optional):
(frame, 72/165). Defaults to None.
transl (Optional[Union[np.ndarray, torch.Tensor]], optional):
(frame, 3). Defaults to None.
model_path (Optional[str], optional): model path.
Defaults to None.
betas (Optional[Union[np.ndarray, torch.Tensor]], optional):
(frame, 10). Defaults to None.
model_type (Optional[str], optional): choose in 'smpl' or 'smplx'.
Defaults to 'smpl'.
gender (Optional[str], optional): choose in 'male', 'female',
'neutral'.
Defaults to 'neutral'.
Raises:
ValueError: wrong input poses or transl.
Returns:
Tuple[torch.Tensor]: Return converted poses, transl, pred_cam
or verts, pred_cam.
"""
R_, T_ = convert_world_view(R, T)
RT = torch.eye(4, 4)[None]
RT[:, :3, :3] = R_
RT[:, :3, 3] = T_
if verts is not None:
poses = None
betas = None
transl = None
else:
assert poses is not None
assert transl is not None
if isinstance(poses, dict):
poses = copy.deepcopy(poses)
for k in poses:
if isinstance(poses[k], np.ndarray):
poses[k] = torch.Tensor(poses[k])
elif isinstance(poses, np.ndarray):
poses = torch.Tensor(poses)
elif isinstance(poses, torch.Tensor):
poses = poses.clone()
else:
raise ValueError(f'Wrong data type of poses: {type(poses)}.')
if isinstance(transl, np.ndarray):
transl = torch.Tensor(transl)
elif isinstance(transl, torch.Tensor):
transl = transl.clone()
else:
raise ValueError('Should pass valid `transl`.')
transl = transl.view(-1, 3)
if isinstance(betas, np.ndarray):
betas = torch.Tensor(betas)
elif isinstance(betas, torch.Tensor):
betas = betas.clone()
body_model = build_body_model(
dict(
type=model_type,
model_path=os.path.join(model_path, model_type),
gender=gender,
model_type=model_type))
if isinstance(poses, dict):
poses.update({'transl': transl, 'betas': betas})
else:
if isinstance(poses, np.ndarray):
poses = torch.tensor(poses)
poses = body_model.tensor2dict(
full_pose=poses, transl=transl, betas=betas)
model_output = body_model(**poses)
verts = model_output['vertices']
global_orient = poses['global_orient']
global_orient = rotmat_to_aa(R_ @ aa_to_rotmat(global_orient))
poses['global_orient'] = global_orient
poses['transl'] = None
verts_rotated = model_output['vertices']
rotated_pose = body_model.dict2tensor(poses)
verts_converted = verts.clone().view(-1, 3)
verts_converted = RT @ torch.cat(
[verts_converted,
torch.ones(verts_converted.shape[0], 1)], dim=-1).unsqueeze(-1)
verts_converted = verts_converted.squeeze(-1)
verts_converted = verts_converted[:, :3] / verts_converted[:, 3:]
verts_converted = verts_converted.view(verts.shape[0], -1, 3)
num_frame = verts_converted.shape[0]
if poses is not None:
transl = torch.mean(verts_converted - verts_rotated, dim=1)
orig_cam = None
if K is not None:
zmean = torch.mean(verts_converted, dim=1)[:, 2]
K, _, _ = convert_camera_matrix(
K,
is_perspective=True,
convention_dst='opencv',
convention_src='opencv',
in_ndc_dst=True,
in_ndc_src=False,
resolution_src=resolution)
K = K.repeat(num_frame, 1, 1)
orig_cam = convert_perspective_to_weakperspective(
K=K, zmean=zmean, in_ndc=True, resolution=resolution)
if poses is not None:
orig_cam[:, 0, 3] += transl[:, 0]
orig_cam[:, 1, 3] += transl[:, 1]
if poses is not None:
return rotated_pose, orig_cam
else:
return verts_converted, orig_cam | Convert opencv calibration smpl poses&transl parameters to model based poses&transl or verts. Args: R (Union[np.ndarray, torch.Tensor]): (frame, 3, 3) T (Union[np.ndarray, torch.Tensor]): [(frame, 3) K (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 3, 3) or (frame, 4, 4). Defaults to None. resolution (Optional[Union[Iterable[int], int]], optional): (height, width). Defaults to None. verts (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, num_verts, 3). Defaults to None. poses (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 72/165). Defaults to None. transl (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 3). Defaults to None. model_path (Optional[str], optional): model path. Defaults to None. betas (Optional[Union[np.ndarray, torch.Tensor]], optional): (frame, 10). Defaults to None. model_type (Optional[str], optional): choose in 'smpl' or 'smplx'. Defaults to 'smpl'. gender (Optional[str], optional): choose in 'male', 'female', 'neutral'. Defaults to 'neutral'. Raises: ValueError: wrong input poses or transl. Returns: Tuple[torch.Tensor]: Return converted poses, transl, pred_cam or verts, pred_cam. |
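The vertex conversion in the function above augments each vertex with a homogeneous coordinate, multiplies by the 4 x 4 `RT`, and divides back. A compact NumPy sketch of that pattern (helper name is mine):

```python
import numpy as np

def apply_rt(verts, RT):
    # Transform (N, 3) points with a 4 x 4 matrix via homogeneous
    # coordinates, then divide out the w component.
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)
    out = homo @ RT.T
    return out[:, :3] / out[:, 3:4]

RT = np.eye(4)
RT[:3, 3] = [1.0, 2.0, 3.0]  # pure translation
moved = apply_rt(np.zeros((2, 3)), RT)
```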
14,403 | import copy
import os
from typing import Iterable, Optional, Union
import numpy as np
import torch
from pytorch3d.renderer.cameras import CamerasBase
from mmhuman3d.core.cameras import build_cameras
from mmhuman3d.core.conventions.cameras.convert_convention import (
convert_camera_matrix,
convert_world_view,
)
from mmhuman3d.core.conventions.cameras.convert_projection import \
convert_perspective_to_weakperspective
from mmhuman3d.models.body_models.builder import build_body_model
from mmhuman3d.utils.transforms import aa_to_rotmat, rotmat_to_aa
The provided code snippet includes necessary dependencies for implementing the `project_points` function. Write a Python function `def project_points(points3d: Union[np.ndarray, torch.Tensor], cameras: CamerasBase = None, resolution: Iterable[int] = None, K: Union[torch.Tensor, np.ndarray] = None, R: Union[torch.Tensor, np.ndarray] = None, T: Union[torch.Tensor, np.ndarray] = None, convention: str = 'opencv', in_ndc: bool = False) -> Union[torch.Tensor, np.ndarray]` to solve the following problem:
Project 3d points to image. Args: points3d (Union[np.ndarray, torch.Tensor]): shape could be (..., 3). cameras (CamerasBase): pytorch3d cameras or mmhuman3d cameras. resolution (Iterable[int]): (height, width) for rectangle or width for square. K (Union[torch.Tensor, np.ndarray], optional): intrinsic matrix. Defaults to None. R (Union[torch.Tensor, np.ndarray], optional): rotation matrix. Defaults to None. T (Union[torch.Tensor, np.ndarray], optional): translation matrix. Defaults to None. convention (str, optional): camera convention. Defaults to 'opencv'. in_ndc (bool, optional): whether in NDC. Defaults to False. Returns: Union[torch.Tensor, np.ndarray]: transformed points of shape (..., 2).
Here is the function:
def project_points(points3d: Union[np.ndarray, torch.Tensor],
cameras: CamerasBase = None,
resolution: Iterable[int] = None,
K: Union[torch.Tensor, np.ndarray] = None,
R: Union[torch.Tensor, np.ndarray] = None,
T: Union[torch.Tensor, np.ndarray] = None,
convention: str = 'opencv',
in_ndc: bool = False) -> Union[torch.Tensor, np.ndarray]:
"""Project 3d points to image.
Args:
points3d (Union[np.ndarray, torch.Tensor]): shape could be (..., 3).
cameras (CamerasBase): pytorch3d cameras or mmhuman3d cameras.
resolution (Iterable[int]): (height, width) for rectangle or width for
square.
K (Union[torch.Tensor, np.ndarray], optional): intrinsic matrix.
Defaults to None.
R (Union[torch.Tensor, np.ndarray], optional): rotation matrix.
Defaults to None.
T (Union[torch.Tensor, np.ndarray], optional): translation matrix.
Defaults to None.
convention (str, optional): camera convention. Defaults to 'opencv'.
in_ndc (bool, optional): whether in NDC. Defaults to False.
Returns:
Union[torch.Tensor, np.ndarray]: transformed points of shape (..., 2).
"""
if cameras is None:
cameras = build_cameras(
dict(
type='perspective',
convention=convention,
in_ndc=in_ndc,
resolution=resolution,
K=K,
R=R,
T=T))
if cameras.get_image_size() is not None:
image_size = cameras.get_image_size()
else:
image_size = resolution
if isinstance(points3d, np.ndarray):
points3d = torch.Tensor(points3d[..., :3]).to(cameras.device)
points2d = cameras.transform_points_screen(
points3d, image_size=image_size).cpu().numpy()
elif isinstance(points3d, torch.Tensor):
points3d = points3d[..., :3].to(cameras.device)
points2d = cameras.transform_points_screen(
points3d, image_size=image_size)
return points2d | Project 3d points to image. Args: points3d (Union[np.ndarray, torch.Tensor]): shape could be (..., 3). cameras (CamerasBase): pytorch3d cameras or mmhuman3d cameras. resolution (Iterable[int]): (height, width) for rectangle or width for square. K (Union[torch.Tensor, np.ndarray], optional): intrinsic matrix. Defaults to None. R (Union[torch.Tensor, np.ndarray], optional): rotation matrix. Defaults to None. T (Union[torch.Tensor, np.ndarray], optional): translation matrix. Defaults to None. convention (str, optional): camera convention. Defaults to 'opencv'. in_ndc (bool, optional): whether in NDC. Defaults to False. Returns: Union[torch.Tensor, np.ndarray]: transformed points of shape (..., 2). |
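Under the hood, `transform_points_screen` performs a pinhole projection. A minimal NumPy sketch of the OpenCV-style version for a single camera (not the PyTorch3D code path; names are illustrative):

```python
import numpy as np

def project_points_np(points3d, K, R, T):
    # Bring points into camera space, perspective-divide by depth,
    # then map through the intrinsics (fx, fy, cx, cy).
    cam = points3d @ R.T + T
    uv = cam[:, :2] / cam[:, 2:3]
    return uv * np.array([K[0, 0], K[1, 1]]) + np.array([K[0, 2], K[1, 2]])

K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
uv = project_points_np(pts, K, np.eye(3), np.zeros(3))
```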
14,404 | import copy
import os
from typing import Iterable, Optional, Union
import numpy as np
import torch
from pytorch3d.renderer.cameras import CamerasBase
from mmhuman3d.core.cameras import build_cameras
from mmhuman3d.core.conventions.cameras.convert_convention import (
convert_camera_matrix,
convert_world_view,
)
from mmhuman3d.core.conventions.cameras.convert_projection import \
convert_perspective_to_weakperspective
from mmhuman3d.models.body_models.builder import build_body_model
from mmhuman3d.utils.transforms import aa_to_rotmat, rotmat_to_aa
The provided code snippet includes necessary dependencies for implementing the `homo_vector` function. Write a Python function `def homo_vector(vector)` to solve the following problem:
vector: B x N x C h_vector: B x N x (C + 1)
Here is the function:
def homo_vector(vector):
"""
vector: B x N x C
h_vector: B x N x (C + 1)
"""
batch_size, n_pts = vector.shape[:2]
h_vector = torch.cat(
[vector, torch.ones((batch_size, n_pts, 1)).to(vector)], dim=-1)
return h_vector | vector: B x N x C h_vector: B x N x (C + 1) |
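The same homogeneous augmentation in NumPy, written shape-agnostically over the last axis (function name is my own):

```python
import numpy as np

def homo_vector_np(vector):
    # Append a ones channel: (B, N, C) -> (B, N, C + 1).
    ones = np.ones(vector.shape[:-1] + (1,), dtype=vector.dtype)
    return np.concatenate([vector, ones], axis=-1)

h = homo_vector_np(np.zeros((2, 5, 3)))
```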
14,405 | from mmcv.utils import collect_env as collect_base_env
from mmcv.utils import get_git_hash
import mmhuman3d
The provided code snippet includes necessary dependencies for implementing the `collect_env` function. Write a Python function `def collect_env()` to solve the following problem:
Collect the information of the running environments.
Here is the function:
def collect_env():
"""Collect the information of the running environments."""
env_info = collect_base_env()
env_info['MMHuman3d'] = mmhuman3d.__version__ + '+' + get_git_hash()[:7]
return env_info | Collect the information of the running environments. |
14,406 | import warnings
from typing import List, Optional, Union
import torch
from pytorch3d.io import IO
from pytorch3d.io import load_objs_as_meshes as _load_objs_as_meshes
from pytorch3d.io import save_obj
from pytorch3d.renderer import TexturesUV, TexturesVertex
from pytorch3d.structures import (
Meshes,
Pointclouds,
join_meshes_as_batch,
join_meshes_as_scene,
padded_to_list,
)
from .path_utils import prepare_output_path
The provided code snippet includes necessary dependencies for implementing the `join_batch_meshes_as_scene` function. Write a Python function `def join_batch_meshes_as_scene( meshes: List[Meshes], include_textures: bool = True, ) -> Meshes` to solve the following problem:
Join `meshes` as a scene each batch. Only for Pytorch3D `meshes`. The Meshes must share the same batch size, and topology could be different. They must all be on the same device. If `include_textures` is true, the textures should be the same type, all be None is not accepted. If `include_textures` is False, textures are ignored. The return meshes will have no textures. Args: meshes (List[Meshes]): A `list` of `Meshes` with the same batches. Required. include_textures: (bool) whether to try to join the textures. Returns: New Meshes which has join different Meshes by each batch.
Here is the function:
def join_batch_meshes_as_scene(
meshes: List[Meshes],
include_textures: bool = True,
) -> Meshes:
"""Join `meshes` as a scene each batch. Only for Pytorch3D `meshes`. The
Meshes must share the same batch size, and topology could be different.
They must all be on the same device. If `include_textures` is true, the
textures should be the same type, all be None is not accepted. If
`include_textures` is False, textures are ignored. The return meshes will
have no textures.
Args:
meshes (List[Meshes]): A `list` of `Meshes` with the same batches.
Required.
include_textures: (bool) whether to try to join the textures.
Returns:
New Meshes which has join different Meshes by each batch.
"""
for mesh in meshes:
mesh._verts_list = padded_to_list(mesh.verts_padded(),
mesh.num_verts_per_mesh().tolist())
num_scene_size = len(meshes)
num_batch_size = len(meshes[0])
for i in range(num_scene_size):
assert len(meshes[i]) == num_batch_size, \
'Please make sure that the Meshes all have the same batch size.'
meshes_all = []
for j in range(num_batch_size):
meshes_batch = []
for i in range(num_scene_size):
meshes_batch.append(meshes[i][j])
meshes_all.append(join_meshes_as_scene(meshes_batch, include_textures))
meshes_final = join_meshes_as_batch(meshes_all, include_textures)
return meshes_final | Join `meshes` as a scene each batch. Only for Pytorch3D `meshes`. The Meshes must share the same batch size, and topology could be different. They must all be on the same device. If `include_textures` is true, the textures should be the same type, all be None is not accepted. If `include_textures` is False, textures are ignored. The return meshes will have no textures. Args: meshes (List[Meshes]): A `list` of `Meshes` with the same batches. Required. include_textures: (bool) whether to try to join the textures. Returns: New Meshes which has join different Meshes by each batch. |
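Joining meshes into one scene requires re-indexing faces: each mesh's face indices must be shifted by the number of vertices that come before it. A small NumPy sketch of that bookkeeping (not the PyTorch3D implementation; names are illustrative):

```python
import numpy as np

def join_meshes_np(verts_list, faces_list):
    # Concatenate vertices and shift each face block by the count of
    # vertices preceding it, so indices stay valid in the joined mesh.
    offsets = np.cumsum([0] + [len(v) for v in verts_list[:-1]])
    verts = np.concatenate(verts_list, axis=0)
    faces = np.concatenate(
        [f + o for f, o in zip(faces_list, offsets)], axis=0)
    return verts, faces

tri = np.zeros((3, 3))
verts, faces = join_meshes_np([tri, tri], [np.array([[0, 1, 2]])] * 2)
```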
14,407 | import warnings
from typing import List, Optional, Union
import torch
from pytorch3d.io import IO
from pytorch3d.io import load_objs_as_meshes as _load_objs_as_meshes
from pytorch3d.io import save_obj
from pytorch3d.renderer import TexturesUV, TexturesVertex
from pytorch3d.structures import (
Meshes,
Pointclouds,
join_meshes_as_batch,
join_meshes_as_scene,
padded_to_list,
)
from .path_utils import prepare_output_path
The provided code snippet includes necessary dependencies for implementing the `mesh_to_pointcloud_vc` function. Write a Python function `def mesh_to_pointcloud_vc( meshes: Meshes, include_textures: bool = True, alpha: float = 1.0, ) -> Pointclouds` to solve the following problem:
Convert PyTorch3D vertex color `Meshes` to `PointClouds`. Args: meshes (Meshes): input meshes. include_textures (bool, optional): Whether include colors. Require the texture of input meshes is vertex color. Defaults to True. alpha (float, optional): transparency. Defaults to 1.0. Returns: Pointclouds: output pointclouds.
Here is the function:
def mesh_to_pointcloud_vc(
meshes: Meshes,
include_textures: bool = True,
alpha: float = 1.0,
) -> Pointclouds:
"""Convert PyTorch3D vertex color `Meshes` to `PointClouds`.
Args:
meshes (Meshes): input meshes.
include_textures (bool, optional): Whether include colors.
Require the texture of input meshes is vertex color.
Defaults to True.
alpha (float, optional): transparency.
Defaults to 1.0.
Returns:
Pointclouds: output pointclouds.
"""
assert isinstance(
meshes.textures,
TexturesVertex), 'textures of input meshes should be `TexturesVertex`'
vertices = meshes.verts_padded()
if include_textures:
verts_rgb = meshes.textures.verts_features_padded()
verts_rgba = torch.cat(
[verts_rgb,
torch.ones_like(verts_rgb)[..., 0:1] * alpha], dim=-1)
else:
verts_rgba = None
pointclouds = Pointclouds(points=vertices, features=verts_rgba)
return pointclouds | Convert PyTorch3D vertex color `Meshes` to `PointClouds`. Args: meshes (Meshes): input meshes. include_textures (bool, optional): Whether include colors. Require the texture of input meshes is vertex color. Defaults to True. alpha (float, optional): transparency. Defaults to 1.0. Returns: Pointclouds: output pointclouds. |
14,408 | import warnings
from typing import List, Optional, Union
import torch
from pytorch3d.io import IO
from pytorch3d.io import load_objs_as_meshes as _load_objs_as_meshes
from pytorch3d.io import save_obj
from pytorch3d.renderer import TexturesUV, TexturesVertex
from pytorch3d.structures import (
Meshes,
Pointclouds,
join_meshes_as_batch,
join_meshes_as_scene,
padded_to_list,
)
from .path_utils import prepare_output_path
The provided code snippet includes necessary dependencies for implementing the `texture_uv2vc` function. Write a Python function `def texture_uv2vc(meshes: Meshes) -> Meshes` to solve the following problem:
Convert a Pytorch3D meshes's textures from TexturesUV to TexturesVertex. Args: meshes (Meshes): input Meshes. Returns: Meshes: converted Meshes.
Here is the function:
def texture_uv2vc(meshes: Meshes) -> Meshes:
"""Convert a Pytorch3D meshes's textures from TexturesUV to TexturesVertex.
Args:
meshes (Meshes): input Meshes.
Returns:
Meshes: converted Meshes.
"""
assert isinstance(meshes.textures, TexturesUV)
device = meshes.device
vert_uv = meshes.textures.verts_uvs_padded()
batch_size = vert_uv.shape[0]
verts_features = []
num_verts = meshes.verts_padded().shape[1]
for index in range(batch_size):
face_uv = vert_uv[index][meshes.textures.faces_uvs_padded()
[index].view(-1)]
img = meshes.textures._maps_padded[index]
width, height, _ = img.shape
face_uv = face_uv * torch.Tensor([width - 1, height - 1
]).long().to(device)
face_uv[:, 0] = torch.clip(face_uv[:, 0], 0, width - 1)
face_uv[:, 1] = torch.clip(face_uv[:, 1], 0, height - 1)
face_uv = face_uv.long()
faces = meshes.faces_padded()
verts_rgb = torch.zeros(1, num_verts, 3).to(device)
verts_rgb[:, faces.view(-1)] = img[height - 1 - face_uv[:, 1],
face_uv[:, 0]]
verts_features.append(verts_rgb)
verts_features = torch.cat(verts_features)
meshes = meshes.clone()
meshes.textures = TexturesVertex(verts_features)
return meshes | Convert a Pytorch3D meshes's textures from TexturesUV to TexturesVertex. Args: meshes (Meshes): input Meshes. Returns: Meshes: converted Meshes. |
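The core of the UV-to-vertex-color conversion is a nearest-pixel texture lookup with the v axis measured from the bottom of the image. A hedged NumPy sketch of just that lookup (helper name is mine; the source uses truncation via `.long()` rather than rounding):

```python
import numpy as np

def sample_uv_nearest(uv, image):
    # Nearest-pixel lookup for uv in [0, 1], flipping v so that
    # uv = (0, 0) samples the bottom-left texel.
    h, w = image.shape[:2]
    x = np.clip(np.rint(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    y = np.clip(np.rint(uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return image[h - 1 - y, x]

tex = np.arange(4).reshape(2, 2)  # [[0, 1], [2, 3]]
colors = sample_uv_nearest(np.array([[0.0, 0.0], [1.0, 1.0]]), tex)
```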
14,409 | import warnings
from typing import List, Optional, Union
import torch
from pytorch3d.io import IO
from pytorch3d.io import load_objs_as_meshes as _load_objs_as_meshes
from pytorch3d.io import save_obj
from pytorch3d.renderer import TexturesUV, TexturesVertex
from pytorch3d.structures import (
Meshes,
Pointclouds,
join_meshes_as_batch,
join_meshes_as_scene,
padded_to_list,
)
from .path_utils import prepare_output_path
def load_objs_as_meshes(files: List[str],
device: Optional[Union[torch.device, str]] = None,
load_textures: bool = True,
**kwargs) -> Meshes:
if not isinstance(files, list):
files = [files]
return _load_objs_as_meshes(
files=files, device=device, load_textures=load_textures, **kwargs) | null |
14,410 | import warnings
from typing import List, Optional, Union
import torch
from pytorch3d.io import IO
from pytorch3d.io import load_objs_as_meshes as _load_objs_as_meshes
from pytorch3d.io import save_obj
from pytorch3d.renderer import TexturesUV, TexturesVertex
from pytorch3d.structures import (
Meshes,
Pointclouds,
join_meshes_as_batch,
join_meshes_as_scene,
padded_to_list,
)
from .path_utils import prepare_output_path
def load_plys_as_meshes(
files: List[str],
device: Optional[Union[torch.device, str]] = None,
load_textures: bool = True,
) -> Meshes:
writer = IO()
meshes = []
if not isinstance(files, list):
files = [files]
for idx in range(len(files)):
assert files[idx].endswith('.ply'), 'Please input .ply files.'
mesh = writer.load_mesh(
path=files[idx], include_textures=load_textures, device=device)
meshes.append(mesh)
meshes = join_meshes_as_batch(meshes, include_textures=load_textures)
return meshes | null |
14,411 | import colorsys
import os
from collections import defaultdict
from contextlib import contextmanager
from functools import partial
from pathlib import Path
import mmcv
import numpy as np
from mmcv import Timer
from scipy import interpolate
from mmhuman3d.core.post_processing import build_post_processing
The provided code snippet includes necessary dependencies for implementing the `prepare_frames` function. Write a Python function `def prepare_frames(input_path=None)` to solve the following problem:
Prepare frames from input_path. Args: input_path (str, optional): Defaults to None. Raises: ValueError: check the input path. Returns: List[np.ndarray]: prepared frames
Here is the function:
def prepare_frames(input_path=None):
"""Prepare frames from input_path.
Args:
input_path (str, optional): Defaults to None.
Raises:
ValueError: check the input path.
Returns:
List[np.ndarray]: prepared frames
"""
if Path(input_path).is_file():
img_list = [mmcv.imread(input_path)]
if img_list[0] is None:
video = mmcv.VideoReader(input_path)
assert video.opened, f'Failed to load file {input_path}'
img_list = list(video)
elif Path(input_path).is_dir():
# input_type = 'folder'
file_list = [
os.path.join(input_path, fn) for fn in os.listdir(input_path)
if fn.lower().endswith(('.png', '.jpg'))
]
file_list.sort()
img_list = [mmcv.imread(img_path) for img_path in file_list]
assert len(img_list), f'Failed to load image from {input_path}'
else:
raise ValueError('Input path should be an file or folder.'
f' Got invalid input path: {input_path}')
return img_list | Prepare frames from input_path. Args: input_path (str, optional): Defaults to None. Raises: ValueError: check the input path. Returns: List[np.ndarray]: prepared frames |
14,412 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
import warnings
from pathlib import Path
from typing import Iterable, List, Literal, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import (
    Existence,
    check_input_path,
    check_path_existence,
    check_path_suffix,
    prepare_output_path,
)
def pad_for_libx264(image_array):
"""Pad zeros if width or height of image_array is not divisible by 2.
    Otherwise you will get:
\"[libx264 @ 0x1b1d560] width not divisible by 2 \"
Args:
image_array (np.ndarray):
Image or images load by cv2.imread().
Possible shapes:
1. [height, width]
2. [height, width, channels]
3. [images, height, width]
4. [images, height, width, channels]
Returns:
np.ndarray:
A image with both edges divisible by 2.
"""
if image_array.ndim == 2 or \
(image_array.ndim == 3 and image_array.shape[2] == 3):
hei_index = 0
wid_index = 1
elif image_array.ndim == 4 or \
(image_array.ndim == 3 and image_array.shape[2] != 3):
hei_index = 1
wid_index = 2
else:
return image_array
hei_pad = image_array.shape[hei_index] % 2
wid_pad = image_array.shape[wid_index] % 2
if hei_pad + wid_pad > 0:
pad_width = []
for dim_index in range(image_array.ndim):
if dim_index == hei_index:
pad_width.append((0, hei_pad))
elif dim_index == wid_index:
pad_width.append((0, wid_pad))
else:
pad_width.append((0, 0))
values = 0
image_array = \
np.pad(image_array,
pad_width,
mode='constant', constant_values=values)
return image_array
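The padding rule in `pad_for_libx264` is tiny: every odd dimension grows by one pixel so libx264 accepts the frame size. A sketch of just the size computation (the `pad_to_even` helper is ours):

```python
def pad_to_even(height, width):
    # libx264 rejects odd frame dimensions, so one row/column of zeros
    # is appended to any odd edge; even edges are left untouched.
    return height + height % 2, width + width % 2
```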
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
Check the suffix of `output_path`. If folder, should be [] or [''].
            If it could be either a folder or a file, use [suffixes..., ''].
            Defaults to [].
        tag (str, optional): The `string` tag to specify the output type.
            Defaults to 'output file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
The provided code snippet includes necessary dependencies for implementing the `array_to_video` function. Write a Python function `def array_to_video( image_array: np.ndarray, output_path: str, fps: Union[int, float] = 30, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False, ) -> None` to solve the following problem:
Convert an array to a video directly, gif not supported. Args: image_array (np.ndarray): shape should be (f * h * w * 3). output_path (str): output video file path. fps (Union[int, float], optional): fps. Defaults to 30. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of the output video. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check output path. TypeError: check input array. Returns: None.
Here is the function:
def array_to_video(
image_array: np.ndarray,
output_path: str,
fps: Union[int, float] = 30,
resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None,
disable_log: bool = False,
) -> None:
"""Convert an array to a video directly, gif not supported.
Args:
image_array (np.ndarray): shape should be (f * h * w * 3).
output_path (str): output video file path.
        fps (Union[int, float], optional): fps. Defaults to 30.
        resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
            optional): (height, width) of the output video.
            Defaults to None.
        disable_log (bool, optional): whether to suppress the ffmpeg
            command info.
Defaults to False.
Raises:
FileNotFoundError: check output path.
TypeError: check input array.
Returns:
None.
"""
if not isinstance(image_array, np.ndarray):
raise TypeError('Input should be np.ndarray.')
assert image_array.ndim == 4
assert image_array.shape[-1] == 3
prepare_output_path(
output_path,
allowed_suffix=['.mp4'],
tag='output video',
path_type='file',
overwrite=True)
if resolution:
height, width = resolution
width += width % 2
height += height % 2
else:
image_array = pad_for_libx264(image_array)
height, width = image_array.shape[1], image_array.shape[2]
command = [
'ffmpeg',
'-y', # (optional) overwrite output file if it exists
'-f',
'rawvideo',
'-s',
f'{int(width)}x{int(height)}', # size of one frame
'-pix_fmt',
'bgr24',
'-r',
f'{fps}', # frames per second
'-loglevel',
'error',
'-threads',
'4',
'-i',
'-', # The input comes from a pipe
'-vcodec',
'libx264',
'-an', # Tells FFMPEG not to expect any audio
output_path,
]
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
process = subprocess.Popen(
command,
stdin=subprocess.PIPE,
stderr=subprocess.PIPE,
)
if process.stdin is None or process.stderr is None:
raise BrokenPipeError('No buffer received.')
index = 0
while True:
if index >= image_array.shape[0]:
break
process.stdin.write(image_array[index].tobytes())
index += 1
process.stdin.close()
process.stderr.close()
    process.wait() | Convert an array to a video directly, gif not supported. Args: image_array (np.ndarray): shape should be (f * h * w * 3). output_path (str): output video file path. fps (Union[int, float], optional): fps. Defaults to 30. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of the output video. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check output path. TypeError: check input array. Returns: None. |
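The ffmpeg invocation in `array_to_video` can be assembled separately from the process that runs it, which makes the argument list easy to unit-test. A sketch under that assumption (the `rawvideo_ffmpeg_cmd` helper is ours):

```python
def rawvideo_ffmpeg_cmd(width, height, fps, output_path):
    # Mirrors array_to_video's command: raw BGR frames arrive on stdin
    # ('-i -') and are encoded with libx264, with audio disabled ('-an').
    return [
        'ffmpeg', '-y',
        '-f', 'rawvideo',
        '-s', f'{int(width)}x{int(height)}',
        '-pix_fmt', 'bgr24',
        '-r', f'{fps}',
        '-loglevel', 'error',
        '-threads', '4',
        '-i', '-',
        '-vcodec', 'libx264',
        '-an',
        output_path,
    ]
```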
14,413 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
import warnings
from pathlib import Path
from typing import Iterable, List, Literal, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import (
    Existence,
    check_input_path,
    check_path_existence,
    check_path_suffix,
    prepare_output_path,
)
class vid_info_reader(object):
def __init__(self, input_path) -> None:
        """Get video information from a video, mimicked from ffmpeg-python.
        https://github.com/kkroening/ffmpeg-python.
        Args:
            input_path (str): video file path.
Raises:
FileNotFoundError: check the input path.
Returns:
None.
"""
check_input_path(
input_path,
allowed_suffix=['.mp4', '.gif', '.png', '.jpg', '.jpeg'],
tag='input file',
path_type='file')
cmd = [
'ffprobe', '-show_format', '-show_streams', '-of', 'json',
input_path
]
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, _ = process.communicate()
probe = json.loads(out.decode('utf-8'))
video_stream = next((stream for stream in probe['streams']
if stream['codec_type'] == 'video'), None)
if video_stream is None:
print('No video stream found', file=sys.stderr)
sys.exit(1)
self.video_stream = video_stream
def __getitem__(
self,
key: Literal['index', 'codec_name', 'codec_long_name', 'profile',
'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width',
'coded_height', 'has_b_frames', 'pix_fmt', 'level',
'chroma_location', 'refs', 'is_avc', 'nal_length_size',
'r_frame_rate', 'avg_frame_rate', 'time_base',
'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames',
'disposition', 'tags']):
"""Key (str): select in ['index', 'codec_name', 'codec_long_name',
'profile', 'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width', 'coded_height',
'has_b_frames', 'pix_fmt', 'level', 'chroma_location', 'refs',
'is_avc', 'nal_length_size', 'r_frame_rate', 'avg_frame_rate',
'time_base', 'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames', 'disposition',
'tags']"""
return self.video_stream[key]
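`vid_info_reader` selects the video stream from ffprobe's JSON output with plain dict filtering; a self-contained sketch of that selection (the `first_video_stream` helper is ours):

```python
import json


def first_video_stream(probe_json):
    # Parse ffprobe's JSON output and return the first stream whose
    # codec_type is 'video', or None when no video stream exists.
    probe = json.loads(probe_json)
    return next((stream for stream in probe['streams']
                 if stream['codec_type'] == 'video'), None)
```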
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
Check the suffix of `output_path`. If folder, should be [] or [''].
            If it could be either a folder or a file, use [suffixes..., ''].
            Defaults to [].
        tag (str, optional): The `string` tag to specify the output type.
            Defaults to 'output file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
Check the suffix of `input_path`. If folder, should be [] or [''].
            If it could be either a folder or a file, use [suffixes..., ''].
            Defaults to [].
        tag (str, optional): The `string` tag to specify the input type.
            Defaults to 'input file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
            Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
FileNotFoundError: file does not exists or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `video_to_gif` function. Write a Python function `def video_to_gif( input_path: str, output_path: str, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, fps: Union[float, int] = 15, disable_log: bool = False, ) -> None` to solve the following problem:
Convert a video to a gif file. Args: input_path (str): video file path. output_path (str): gif file path. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of the output video. Defaults to None. fps (Union[float, int], optional): frames per second. Defaults to 15. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None.
Here is the function:
def video_to_gif(
input_path: str,
output_path: str,
resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None,
fps: Union[float, int] = 15,
disable_log: bool = False,
) -> None:
"""Convert a video to a gif file.
Args:
input_path (str): video file path.
output_path (str): gif file path.
resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
optional): (height, width) of the output video.
Defaults to None.
fps (Union[float, int], optional): frames per second. Defaults to 15.
        disable_log (bool, optional): whether to suppress the ffmpeg
            command info.
Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None.
"""
check_input_path(
input_path,
allowed_suffix=['.mp4'],
tag='input video',
path_type='file')
prepare_output_path(
output_path,
allowed_suffix=['.gif'],
tag='output gif',
path_type='file',
overwrite=True)
info = vid_info_reader(input_path)
duration = info['duration']
if resolution:
height, width = resolution
else:
width, height = int(info['width']), int(info['height'])
command = [
'ffmpeg', '-r',
str(info['r_frame_rate']), '-i', input_path, '-r', f'{fps}', '-s',
f'{width}x{height}', '-loglevel', 'error', '-t', f'{duration}',
'-threads', '4', '-y', output_path
]
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
    subprocess.call(command) | Convert a video to a gif file. Args: input_path (str): video file path. output_path (str): gif file path. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of the output video. Defaults to None. fps (Union[float, int], optional): frames per second. Defaults to 15. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None. |
14,414 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
import warnings
from pathlib import Path
from typing import Iterable, List, Literal, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import (
    Existence,
    check_input_path,
    check_path_existence,
    check_path_suffix,
    prepare_output_path,
)
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
Check the suffix of `output_path`. If folder, should be [] or [''].
            If it could be either a folder or a file, use [suffixes..., ''].
            Defaults to [].
        tag (str, optional): The `string` tag to specify the output type.
            Defaults to 'output file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
Check the suffix of `input_path`. If folder, should be [] or [''].
            If it could be either a folder or a file, use [suffixes..., ''].
            Defaults to [].
        tag (str, optional): The `string` tag to specify the input type.
            Defaults to 'input file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
            Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
FileNotFoundError: file does not exists or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `images_to_gif` function. Write a Python function `def images_to_gif( input_folder: str, output_path: str, remove_raw_file: bool = False, img_format: str = '%06d.png', fps: int = 15, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, start: int = 0, end: Optional[int] = None, disable_log: bool = False, ) -> None` to solve the following problem:
Convert a series of images to a gif, similar to images_to_video, but with more suitable parameters. Args: input_folder (str): input image folder. output_path (str): output gif file path. remove_raw_file (bool, optional): whether to remove raw images. Defaults to False. img_format (str, optional): format to name the images. Defaults to '%06d.png'. fps (int, optional): output video fps. Defaults to 15. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. start (int, optional): start frame index. Inclusive. If < 0, will be converted to frame_index range in [0, frame_num]. Defaults to 0. end (int, optional): end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None
Here is the function:
def images_to_gif(
input_folder: str,
output_path: str,
remove_raw_file: bool = False,
img_format: str = '%06d.png',
fps: int = 15,
resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None,
start: int = 0,
end: Optional[int] = None,
disable_log: bool = False,
) -> None:
    """Convert a series of images to a gif, similar to images_to_video, but
    with more suitable parameters.
Args:
input_folder (str): input image folder.
output_path (str): output gif file path.
        remove_raw_file (bool, optional): whether to remove raw images.
Defaults to False.
img_format (str, optional): format to name the images.
Defaults to '%06d.png'.
fps (int, optional): output video fps. Defaults to 15.
resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
optional): (height, width) of output. Defaults to None.
start (int, optional): start frame index. Inclusive.
If < 0, will be converted to frame_index range in [0, frame_num].
Defaults to 0.
end (int, optional): end frame index. Exclusive.
Could be positive int or negative int or None.
If None, all frames from start till the last frame are included.
Defaults to None.
        disable_log (bool, optional): whether to suppress the ffmpeg
            command info.
Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None
"""
input_folderinfo = Path(input_folder)
check_input_path(
input_folder,
allowed_suffix=[],
tag='input image folder',
path_type='dir')
prepare_output_path(
output_path,
allowed_suffix=['.gif'],
tag='output gif',
path_type='file',
overwrite=True)
num_frames = len(os.listdir(input_folder))
start = (min(start, num_frames - 1) + num_frames) % num_frames
end = (min(end, num_frames - 1) +
num_frames) % num_frames if end is not None else num_frames
temp_input_folder = None
if img_format is None:
file_list = []
temp_input_folder = os.path.join(input_folderinfo.parent,
input_folderinfo.name + '_temp')
os.makedirs(temp_input_folder, exist_ok=True)
pngs = glob.glob(os.path.join(input_folder, '*.png'))
ext = 'png'
if pngs:
ext = 'png'
file_list.extend(pngs)
jpgs = glob.glob(os.path.join(input_folder, '*.jpg'))
if jpgs:
ext = 'jpg'
file_list.extend(jpgs)
file_list.sort()
for index, file_name in enumerate(file_list):
shutil.copy(
file_name,
os.path.join(temp_input_folder, '%06d.%s' % (index + 1, ext)))
input_folder = temp_input_folder
img_format = '%06d.' + ext
command = [
'ffmpeg',
'-y',
'-threads',
'4',
'-start_number',
f'{start}',
'-r',
f'{fps}',
'-i',
f'{input_folder}/{img_format}',
'-frames:v',
f'{end - start}',
'-loglevel',
'error',
'-v',
'error',
output_path,
]
if resolution:
height, width = resolution
command.insert(1, '-s')
command.insert(2, '%dx%d' % (width, height))
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command)
if remove_raw_file:
shutil.rmtree(input_folder)
if temp_input_folder is not None:
        shutil.rmtree(temp_input_folder) | Convert a series of images to a gif, similar to images_to_video, but with more suitable parameters. Args: input_folder (str): input image folder. output_path (str): output gif file path. remove_raw_file (bool, optional): whether to remove raw images. Defaults to False. img_format (str, optional): format to name the images. Defaults to '%06d.png'. fps (int, optional): output video fps. Defaults to 15. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. start (int, optional): start frame index. Inclusive. If < 0, will be converted to frame_index range in [0, frame_num]. Defaults to 0. end (int, optional): end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None |
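`images_to_gif` normalizes its `start`/`end` arguments by clamping to the last frame and wrapping negative indices. A sketch of just that arithmetic (the `normalize_frame_range` helper is ours):

```python
def normalize_frame_range(start, end, num_frames):
    # Clamp and wrap indices the way images_to_gif does: negative
    # values count from the end, and both ends are clamped to the
    # last valid frame index; end=None means "through the last frame".
    start = (min(start, num_frames - 1) + num_frames) % num_frames
    end = ((min(end, num_frames - 1) + num_frames) % num_frames
           if end is not None else num_frames)
    return start, end
```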
14,415 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
import warnings
from pathlib import Path
from typing import Iterable, List, Literal, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import (
    Existence,
    check_input_path,
    check_path_existence,
    check_path_suffix,
    prepare_output_path,
)
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
Check the suffix of `output_path`. If folder, should be [] or [''].
            If it could be either a folder or a file, use [suffixes..., ''].
            Defaults to [].
        tag (str, optional): The `string` tag to specify the output type.
            Defaults to 'output file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
Check the suffix of `input_path`. If folder, should be [] or [''].
            If it could be either a folder or a file, use [suffixes..., ''].
            Defaults to [].
        tag (str, optional): The `string` tag to specify the input type.
            Defaults to 'input file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
            Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
FileNotFoundError: file does not exists or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `gif_to_video` function. Write a Python function `def gif_to_video(input_path: str, output_path: str, fps: int = 30, remove_raw_file: bool = False, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False) -> None` to solve the following problem:
Convert a gif file to a video. Args: input_path (str): input gif file path. output_path (str): output video file path. fps (int, optional): fps. Defaults to 30. remove_raw_file (bool, optional): whether to remove the original input file. Defaults to False. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None
Here is the function:
def gif_to_video(input_path: str,
output_path: str,
fps: int = 30,
remove_raw_file: bool = False,
resolution: Optional[Union[Tuple[int, int],
Tuple[float, float]]] = None,
disable_log: bool = False) -> None:
"""Convert a gif file to a video.
Args:
input_path (str): input gif file path.
output_path (str): output video file path.
fps (int, optional): fps. Defaults to 30.
remove_raw_file (bool, optional): whether to remove the original input file.
Defaults to False.
resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
optional): (height, width) of output. Defaults to None.
disable_log (bool, optional): whether to suppress the ffmpeg command
info. Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None
"""
check_input_path(
input_path, allowed_suffix=['.gif'], tag='input gif', path_type='file')
prepare_output_path(
output_path,
allowed_suffix=['.mp4'],
tag='output video',
path_type='file',
overwrite=True)
command = [
'ffmpeg', '-i', input_path, '-r', f'{fps}', '-loglevel', 'error', '-y',
output_path, '-threads', '4'
]
if resolution:
height, width = resolution
command.insert(3, '-s')
command.insert(4, '%dx%d' % (width, height))
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command)
if remove_raw_file:
subprocess.call(['rm', '-f', input_path]) | Convert a gif file to a video. Args: input_path (str): input gif file path. output_path (str): output video file path. fps (int, optional): fps. Defaults to 30. remove_raw_file (bool, optional): whether remove original input file. Defaults to False. down_sample_scale (Union[int, float], optional): down sample scale. Defaults to 1. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether close the ffmepg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None |
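As a quick sanity check on the command assembly above, the sketch below rebuilds the argument list the same way (file names and resolution are made-up example values) and shows where the optional `-s WxH` pair lands, right after the input file:

```python
# Sketch mirroring gif_to_video's command assembly; it only builds the
# list and never invokes ffmpeg.
def build_gif_to_video_command(input_path, output_path, fps=30, resolution=None):
    command = [
        'ffmpeg', '-i', input_path, '-r', f'{fps}', '-loglevel', 'error',
        '-y', output_path, '-threads', '4'
    ]
    if resolution:
        height, width = resolution  # note the (height, width) order
        command.insert(3, '-s')
        command.insert(4, '%dx%d' % (width, height))
    return command

cmd = build_gif_to_video_command('in.gif', 'out.mp4', fps=24, resolution=(480, 640))
```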
14,416 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
from pathlib import Path
from typing import Iterable, List, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import check_input_path, prepare_output_path
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
Check the suffix of `output_path`. If folder, should be [] or [''].
If it could be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'output file'.
path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
Check the suffix of `input_path`. If folder, should be [] or [''].
If it could be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'input file'.
path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
FileNotFoundError: file does not exist or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `gif_to_images` function. Write a Python function `def gif_to_images(input_path: str, output_folder: str, fps: int = 30, img_format: str = '%06d.png', resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False) -> None` to solve the following problem:
Convert a gif file to a folder of images. Args: input_path (str): input gif file path. output_folder (str): output folder to save the images. fps (int, optional): fps. Defaults to 30. img_format (str, optional): output image name format. Defaults to '%06d.png'. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None
Here is the function:
def gif_to_images(input_path: str,
output_folder: str,
fps: int = 30,
img_format: str = '%06d.png',
resolution: Optional[Union[Tuple[int, int],
Tuple[float, float]]] = None,
disable_log: bool = False) -> None:
"""Convert a gif file to a folder of images.
Args:
input_path (str): input gif file path.
output_folder (str): output folder to save the images.
fps (int, optional): fps. Defaults to 30.
img_format (str, optional): output image name format.
Defaults to '%06d.png'.
resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
optional): (height, width) of output.
Defaults to None.
disable_log (bool, optional): whether to suppress the ffmpeg command info.
Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None
"""
check_input_path(
input_path, allowed_suffix=['.gif'], tag='input gif', path_type='file')
prepare_output_path(
output_folder,
allowed_suffix=[],
tag='output image folder',
path_type='dir',
overwrite=True)
command = [
'ffmpeg', '-r', f'{fps}', '-i', input_path, '-loglevel', 'error', '-f',
'image2', '-v', 'error', '-threads', '4', '-y', '-start_number', '0',
f'{output_folder}/{img_format}'
]
if resolution:
height, width = resolution
command.insert(3, '-s')
command.insert(4, '%dx%d' % (width, height))
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command) | Convert a gif file to a folder of images. Args: input_path (str): input gif file path. output_folder (str): output folder to save the images. fps (int, optional): fps. Defaults to 30. img_format (str, optional): output image name format. Defaults to '%06d.png'. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether close the ffmepg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None |
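Given `img_format='%06d.png'` and `-start_number 0`, the frames ffmpeg writes are zero-padded and numbered from 000000. A small sketch of the resulting names (the folder name below is illustrative):

```python
# Sketch of the frame filenames gif_to_images asks ffmpeg to produce;
# it only formats names, no ffmpeg involved.
def expected_frame_names(output_folder, img_format='%06d.png', num_frames=3):
    # frames start at 0 because of '-start_number', '0'
    return [f'{output_folder}/{img_format % i}' for i in range(num_frames)]

names = expected_frame_names('frames', num_frames=3)
```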
14,417 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
from pathlib import Path
from typing import Iterable, List, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import check_input_path, prepare_output_path
class vid_info_reader(object):
def __init__(self, input_path) -> None:
"""Get video information from a video, mimicking ffmpeg-python.
https://github.com/kkroening/ffmpeg-python.
Args:
input_path (str): video file path.
Raises:
FileNotFoundError: check the input path.
Returns:
None.
"""
check_input_path(
input_path,
allowed_suffix=['.mp4', '.gif', '.png', '.jpg', '.jpeg'],
tag='input file',
path_type='file')
cmd = [
'ffprobe', '-show_format', '-show_streams', '-of', 'json',
input_path
]
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, _ = process.communicate()
probe = json.loads(out.decode('utf-8'))
video_stream = next((stream for stream in probe['streams']
if stream['codec_type'] == 'video'), None)
if video_stream is None:
print('No video stream found', file=sys.stderr)
sys.exit(1)
self.video_stream = video_stream
def __getitem__(
self,
key: Literal['index', 'codec_name', 'codec_long_name', 'profile',
'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width',
'coded_height', 'has_b_frames', 'pix_fmt', 'level',
'chroma_location', 'refs', 'is_avc', 'nal_length_size',
'r_frame_rate', 'avg_frame_rate', 'time_base',
'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames',
'disposition', 'tags']):
"""Key (str): select in ['index', 'codec_name', 'codec_long_name',
'profile', 'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width', 'coded_height',
'has_b_frames', 'pix_fmt', 'level', 'chroma_location', 'refs',
'is_avc', 'nal_length_size', 'r_frame_rate', 'avg_frame_rate',
'time_base', 'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames', 'disposition',
'tags']"""
return self.video_stream[key]
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
Check the suffix of `output_path`. If folder, should be [] or [''].
If it could be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'output file'.
path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
Check the suffix of `input_path`. If folder, should be [] or [''].
If it could be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'input file'.
path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
FileNotFoundError: file does not exist or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `crop_video` function. Write a Python function `def crop_video( input_path: str, output_path: str, box: Optional[Union[List[int], Tuple[int, int, int, int]]] = None, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False, ) -> None` to solve the following problem:
Spatially crop a video or gif file. Args: input_path (str): input video or gif file path. output_path (str): output video or gif file path. box (Iterable[int], optional): [x, y] of the crop region's top-left corner, followed by its width and height. Defaults to the full frame. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None
Here is the function:
def crop_video(
input_path: str,
output_path: str,
box: Optional[Union[List[int], Tuple[int, int, int, int]]] = None,
resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None,
disable_log: bool = False,
) -> None:
"""Spatially or temporally crop a video or gif file.
Args:
input_path (str): input video or gif file path.
output_path (str): output video or gif file path.
box (Iterable[int], optional): [x, y of the crop region left.
corner and width and height]. Defaults to [0, 0, 100, 100].
resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
optional): (height, width) of output. Defaults to None.
disable_log (bool, optional): whether close the ffmepg command info.
Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None'-start_number', f'{start}',
"""
check_input_path(
input_path,
allowed_suffix=['.gif', '.mp4'],
tag='input video',
path_type='file')
prepare_output_path(
output_path,
allowed_suffix=['.gif', '.mp4'],
tag='output video',
path_type='file',
overwrite=True)
info = vid_info_reader(input_path)
width, height = int(info['width']), int(info['height'])
if box is None:
box = [0, 0, width, height]
assert len(box) == 4
x, y, w, h = box
assert (w > 0 and h > 0)
command = [
'ffmpeg', '-i', input_path, '-vcodec', 'libx264', '-vf',
'crop=%d:%d:%d:%d' % (w, h, x, y), '-loglevel', 'error', '-y',
output_path
]
if resolution:
height, width = resolution
width += width % 2
height += height % 2
command.insert(-1, '-s')
command.insert(-1, '%dx%d' % (width, height))
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command) | Spatially or temporally crop a video or gif file. Args: input_path (str): input video or gif file path. output_path (str): output video or gif file path. box (Iterable[int], optional): [x, y of the crop region left. corner and width and height]. Defaults to [0, 0, 100, 100]. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether close the ffmepg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None'-start_number', f'{start}', |
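Two details of the function above are easy to get wrong: ffmpeg's `crop` filter takes `w:h:x:y` (not `x:y:w:h`), and libx264 rejects odd frame sizes, hence the `% 2` rounding. A sketch (hypothetical helper, illustrative values) isolating just that logic:

```python
# Hypothetical helper mirroring crop_video's filter string and its
# even-dimension rounding; no ffmpeg call is made.
def crop_filter_and_size(box, resolution=None):
    x, y, w, h = box
    vf = 'crop=%d:%d:%d:%d' % (w, h, x, y)  # ffmpeg argument order: w:h:x:y
    size = None
    if resolution:
        height, width = resolution
        width += width % 2    # round odd widths up to even for libx264
        height += height % 2  # same for heights
        size = '%dx%d' % (width, height)
    return vf, size

vf, size = crop_filter_and_size(box=(10, 20, 300, 200), resolution=(511, 333))
```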
14,418 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
from pathlib import Path
from typing import Iterable, List, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import check_input_path, prepare_output_path
class vid_info_reader(object):
def __init__(self, input_path) -> None:
"""Get video information from a video, mimicking ffmpeg-python.
https://github.com/kkroening/ffmpeg-python.
Args:
input_path (str): video file path.
Raises:
FileNotFoundError: check the input path.
Returns:
None.
"""
check_input_path(
input_path,
allowed_suffix=['.mp4', '.gif', '.png', '.jpg', '.jpeg'],
tag='input file',
path_type='file')
cmd = [
'ffprobe', '-show_format', '-show_streams', '-of', 'json',
input_path
]
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, _ = process.communicate()
probe = json.loads(out.decode('utf-8'))
video_stream = next((stream for stream in probe['streams']
if stream['codec_type'] == 'video'), None)
if video_stream is None:
print('No video stream found', file=sys.stderr)
sys.exit(1)
self.video_stream = video_stream
def __getitem__(
self,
key: Literal['index', 'codec_name', 'codec_long_name', 'profile',
'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width',
'coded_height', 'has_b_frames', 'pix_fmt', 'level',
'chroma_location', 'refs', 'is_avc', 'nal_length_size',
'r_frame_rate', 'avg_frame_rate', 'time_base',
'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames',
'disposition', 'tags']):
"""Key (str): select in ['index', 'codec_name', 'codec_long_name',
'profile', 'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width', 'coded_height',
'has_b_frames', 'pix_fmt', 'level', 'chroma_location', 'refs',
'is_avc', 'nal_length_size', 'r_frame_rate', 'avg_frame_rate',
'time_base', 'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames', 'disposition',
'tags']"""
return self.video_stream[key]
The provided code snippet includes necessary dependencies for implementing the `slice_video` function. Write a Python function `def slice_video(input_path: str, output_path: str, start: int = 0, end: Optional[int] = None, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False) -> None` to solve the following problem:
Temporally crop a video/gif into another video/gif. Args: input_path (str): input video or gif file path. output_path (str): output video or gif file path. start (int, optional): start frame index. Defaults to 0. end (int, optional): end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: NoReturn
Here is the function:
def slice_video(input_path: str,
output_path: str,
start: int = 0,
end: Optional[int] = None,
resolution: Optional[Union[Tuple[int, int],
Tuple[float, float]]] = None,
disable_log: bool = False) -> None:
"""Temporally crop a video/gif into another video/gif.
Args:
input_path (str): input video or gif file path.
output_path (str): output video or gif file path.
start (int, optional): start frame index. Defaults to 0.
end (int, optional): end frame index. Exclusive.
Could be positive int or negative int or None.
If None, all frames from start till the last frame are included.
Defaults to None.
resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
optional): (height, width) of output. Defaults to None.
disable_log (bool, optional): whether to suppress the ffmpeg command info.
Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
NoReturn
"""
info = vid_info_reader(input_path)
num_frames = int(info['nb_frames'])
start = (min(start, num_frames - 1) + num_frames) % num_frames
end = (min(end, num_frames - 1) +
num_frames) % num_frames if end is not None else num_frames
command = [
'ffmpeg', '-y', '-i', input_path, '-filter_complex',
f'[0]trim=start_frame={start}:end_frame={end}[v0]', '-map', '[v0]',
'-loglevel', 'error', '-vcodec', 'libx264', output_path
]
if resolution:
height, width = resolution
width += width % 2
height += height % 2
command.insert(1, '-s')
command.insert(2, '%dx%d' % (width, height))
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command) | Temporally crop a video/gif into another video/gif. Args: input_path (str): input video or gif file path. output_path (str): output video of gif file path. start (int, optional): start frame index. Defaults to 0. end (int, optional): end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to None. disable_log (bool, optional): whether close the ffmepg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: NoReturn |
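The start/end arithmetic above wraps negative frame indices Python-style and clamps them to the last frame. Isolated as a sketch (values are illustrative):

```python
# Sketch of slice_video's frame-index normalization.
def normalize_range(start, end, num_frames):
    # clamp to the last frame, then wrap negatives via modulo
    start = (min(start, num_frames - 1) + num_frames) % num_frames
    end = ((min(end, num_frames - 1) + num_frames) % num_frames
           if end is not None else num_frames)
    return start, end

r1 = normalize_range(0, None, 100)   # full clip
r2 = normalize_range(-10, -1, 100)   # last ten frames, Python-style indices
```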
14,419 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
from pathlib import Path
from typing import Iterable, List, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import check_input_path, prepare_output_path
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
Check the suffix of `output_path`. If folder, should be [] or [''].
If it could be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'output file'.
path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
Check the suffix of `input_path`. If folder, should be [] or [''].
If it could be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'input file'.
path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
FileNotFoundError: file does not exist or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `spatial_concat_video` function. Write a Python function `def spatial_concat_video(input_path_list: List[str], output_path: str, array: List[int] = [1, 1], direction: Literal['h', 'w'] = 'h', resolution: Union[Tuple[int, int], List[int], List[float], Tuple[float, float]] = (512, 512), remove_raw_files: bool = False, padding: int = 0, disable_log: bool = False) -> None` to solve the following problem:
Spatially concat some videos as an array video. Args: input_path_list (list): input video or gif file list. output_path (str): output video or gif file path. array (List[int], optional): line number and column number of the video array. Defaults to [1, 1]. direction (str, optional): choose in 'h' or 'v', representing horizontal and vertical layout respectively. Defaults to 'h'. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to (512, 512). remove_raw_files (bool, optional): whether to remove raw images. Defaults to False. padding (int, optional): width of pixels between videos. Defaults to 0. disable_log (bool, optional): whether to suppress the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None
Here is the function:
def spatial_concat_video(input_path_list: List[str],
output_path: str,
array: List[int] = [1, 1],
direction: Literal['h', 'w'] = 'h',
resolution: Union[Tuple[int,
int], List[int], List[float],
Tuple[float, float]] = (512, 512),
remove_raw_files: bool = False,
padding: int = 0,
disable_log: bool = False) -> None:
"""Spatially concat some videos as an array video.
Args:
input_path_list (list): input video or gif file list.
output_path (str): output video or gif file path.
array (List[int], optional): line number and column number of
the video array. Defaults to [1, 1].
direction (str, optional): choose in 'h' or 'v', representing
horizontal and vertical layout respectively.
Defaults to 'h'.
resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
optional): (height, width) of output.
Defaults to (512, 512).
remove_raw_files (bool, optional): whether to remove raw images.
Defaults to False.
padding (int, optional): width of pixels between videos.
Defaults to 0.
disable_log (bool, optional): whether to suppress the ffmpeg command
info. Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None
"""
lowercase = string.ascii_lowercase
assert len(array) == 2
assert (array[0] * array[1]) >= len(input_path_list)
for path in input_path_list:
check_input_path(
path,
allowed_suffix=['.gif', '.mp4'],
tag='input video',
path_type='file')
prepare_output_path(
output_path,
allowed_suffix=['.gif', '.mp4'],
tag='output video',
path_type='file',
overwrite=True)
command = ['ffmpeg']
height, width = resolution
scale_command = []
for index, vid_file in enumerate(input_path_list):
command.append('-i')
command.append(vid_file)
scale_command.append(
'[%d:v]scale=%d:%d:force_original_aspect_ratio=0[v%d];' %
(index, width, height, index))
scale_command = ' '.join(scale_command)
pad_command = '[v%d]pad=%d:%d[%s];' % (0, width * array[1] + padding *
(array[1] - 1),
height * array[0] + padding *
(array[0] - 1), lowercase[0])
for index in range(1, len(input_path_list)):
if direction == 'h':
pad_width = index % array[1] * (width + padding)
pad_height = index // array[1] * (height + padding)
else:
pad_width = index % array[0] * (width + padding)
pad_height = index // array[0] * (height + padding)
pad_command += '[%s][v%d]overlay=%d:%d' % (lowercase[index - 1], index,
pad_width, pad_height)
if index != len(input_path_list) - 1:
pad_command += '[%s];' % lowercase[index]
command += [
'-filter_complex',
'%s%s' % (scale_command, pad_command), '-loglevel', 'error', '-y',
output_path
]
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command)
if remove_raw_files:
command = ['rm', '-f'] + input_path_list
subprocess.call(command) | Spatially concat some videos as an array video. Args: input_path_list (list): input video or gif file list. output_path (str): output video or gif file path. array (List[int], optional): line number and column number of the video array]. Defaults to [1, 1]. direction (str, optional): [choose in 'h' or 'v', represent horizontal and vertical separately]. Defaults to 'h'. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of output. Defaults to (512, 512). remove_raw_files (bool, optional): whether remove raw images. Defaults to False. padding (int, optional): width of pixels between videos. Defaults to 0. disable_log (bool, optional): whether close the ffmepg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None |
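The `-filter_complex` string above scales every input, pads the first stream to the full grid size, then chains `overlay` filters to place each remaining stream. A sketch of the same assembly for a horizontal 1x2 grid (sizes are illustrative; no ffmpeg is invoked):

```python
import string

# Sketch of spatial_concat_video's filter_complex construction for
# array=[rows, cols]; it only builds the string.
def concat_filter(n, array, width, height, padding=0):
    lowercase = string.ascii_lowercase
    scale = ' '.join(
        '[%d:v]scale=%d:%d:force_original_aspect_ratio=0[v%d];'
        % (i, width, height, i) for i in range(n))
    # the first stream is padded up to the full grid size
    pad = '[v0]pad=%d:%d[%s];' % (
        width * array[1] + padding * (array[1] - 1),
        height * array[0] + padding * (array[0] - 1), lowercase[0])
    for i in range(1, n):
        x = i % array[1] * (width + padding)    # column offset
        y = i // array[1] * (height + padding)  # row offset
        pad += '[%s][v%d]overlay=%d:%d' % (lowercase[i - 1], i, x, y)
        if i != n - 1:
            pad += '[%s];' % lowercase[i]       # label all but the last overlay
    return scale + pad

f = concat_filter(2, [1, 2], 200, 100)
```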
14,420 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
from pathlib import Path
from typing import Iterable, List, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import check_input_path, prepare_output_path
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
            Check the suffix of `output_path`. If a folder, should be [] or [''].
            If it can be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'output file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
            Check the suffix of `input_path`. If a folder, should be [] or [''].
            If it can be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
        tag (str, optional): The `string` tag to specify the input type.
            Defaults to 'input file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
            Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
        FileNotFoundError: file does not exist or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `temporal_concat_video` function. Write a Python function `def temporal_concat_video(input_path_list: List[str], output_path: str, resolution: Union[Tuple[int, int], Tuple[float, float]] = (512, 512), remove_raw_files: bool = False, disable_log: bool = False) -> None` to solve the following problem:
Concatenate videos or gifs into a temporal sequence and save the result as a new video or gif file. Args: input_path_list (List[str]): list of input video paths. output_path (str): output video file path. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of the output. Defaults to (512, 512). remove_raw_files (bool, optional): whether to remove the input videos. Defaults to False. disable_log (bool, optional): whether to disable the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None.
Here is the function:
def temporal_concat_video(input_path_list: List[str],
output_path: str,
resolution: Union[Tuple[int, int],
Tuple[float, float]] = (512, 512),
remove_raw_files: bool = False,
disable_log: bool = False) -> None:
"""Concat no matter videos or gifs into a temporal sequence, and save as a
new video or gif file.
Args:
input_path_list (List[str]): list of input video paths.
output_path (str): output video file path.
        resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]],
            optional): (height, width) of the output.
            Defaults to (512, 512).
        remove_raw_files (bool, optional): whether to remove the input videos.
            Defaults to False.
        disable_log (bool, optional): whether to disable the ffmpeg command
            info. Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None.
"""
for path in input_path_list:
check_input_path(
path,
allowed_suffix=['.gif', '.mp4'],
tag='input video',
path_type='file')
prepare_output_path(
output_path,
allowed_suffix=['.gif', '.mp4'],
tag='output video',
path_type='file',
overwrite=True)
height, width = resolution
command = ['ffmpeg']
concat_command = []
scale_command = []
for index, vid_file in enumerate(input_path_list):
command.append('-i')
command.append(vid_file)
scale_command.append(
'[%d:v]scale=%d:%d:force_original_aspect_ratio=0[v%d];' %
(index, width, height, index))
concat_command.append('[v%d]' % index)
concat_command = ''.join(concat_command)
scale_command = ''.join(scale_command)
command += [
'-filter_complex',
'%s%sconcat=n=%d:v=1:a=0[v]' %
(scale_command, concat_command, len(input_path_list)), '-loglevel',
'error', '-map', '[v]', '-c:v', 'libx264', '-y', output_path
]
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command)
if remove_raw_files:
command = ['rm'] + input_path_list
subprocess.call(command) | Concatenate videos or gifs into a temporal sequence and save the result as a new video or gif file. Args: input_path_list (List[str]): list of input video paths. output_path (str): output video file path. resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional): (height, width) of the output. Defaults to (512, 512). remove_raw_files (bool, optional): whether to remove the input videos. Defaults to False. disable_log (bool, optional): whether to disable the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None.
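The scale-then-concat filtergraph assembled by `temporal_concat_video` can be reproduced with a few lines of stdlib Python. `build_concat_filter` below is a hypothetical helper that mirrors the string construction above without invoking ffmpeg.

```python
def build_concat_filter(n_inputs, width, height):
    # Scale every input to a common size, label the results [v0], [v1], ...,
    # then feed all of them into ffmpeg's concat filter.
    scale = ''.join(
        '[%d:v]scale=%d:%d:force_original_aspect_ratio=0[v%d];' %
        (i, width, height, i) for i in range(n_inputs))
    streams = ''.join('[v%d]' % i for i in range(n_inputs))
    return '%s%sconcat=n=%d:v=1:a=0[v]' % (scale, streams, n_inputs)

print(build_concat_filter(2, 512, 512))
# [0:v]scale=512:512:force_original_aspect_ratio=0[v0];[1:v]scale=512:512:force_original_aspect_ratio=0[v1];[v0][v1]concat=n=2:v=1:a=0[v]
```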
14,421 | import glob
import json
import os
import shutil
import string
import subprocess
import sys
from pathlib import Path
from typing import Iterable, List, Optional, Tuple, Union
import numpy as np
from mmhuman3d.utils.path_utils import check_input_path, prepare_output_path
class vid_info_reader(object):
def __init__(self, input_path) -> None:
"""Get video information from video, mimiced from ffmpeg-python.
https://github.com/kkroening/ffmpeg-python.
Args:
vid_file ([str]): video file path.
Raises:
FileNotFoundError: check the input path.
Returns:
None.
"""
check_input_path(
input_path,
allowed_suffix=['.mp4', '.gif', '.png', '.jpg', '.jpeg'],
tag='input file',
path_type='file')
cmd = [
'ffprobe', '-show_format', '-show_streams', '-of', 'json',
input_path
]
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, _ = process.communicate()
probe = json.loads(out.decode('utf-8'))
video_stream = next((stream for stream in probe['streams']
if stream['codec_type'] == 'video'), None)
if video_stream is None:
print('No video stream found', file=sys.stderr)
sys.exit(1)
self.video_stream = video_stream
def __getitem__(
self,
key: Literal['index', 'codec_name', 'codec_long_name', 'profile',
'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width',
'coded_height', 'has_b_frames', 'pix_fmt', 'level',
'chroma_location', 'refs', 'is_avc', 'nal_length_size',
'r_frame_rate', 'avg_frame_rate', 'time_base',
'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames',
'disposition', 'tags']):
"""Key (str): select in ['index', 'codec_name', 'codec_long_name',
'profile', 'codec_type', 'codec_time_base', 'codec_tag_string',
'codec_tag', 'width', 'height', 'coded_width', 'coded_height',
'has_b_frames', 'pix_fmt', 'level', 'chroma_location', 'refs',
'is_avc', 'nal_length_size', 'r_frame_rate', 'avg_frame_rate',
'time_base', 'start_pts', 'start_time', 'duration_ts', 'duration',
'bit_rate', 'bits_per_raw_sample', 'nb_frames', 'disposition',
'tags']"""
return self.video_stream[key]
def prepare_output_path(output_path: str,
allowed_suffix: List[str] = [],
tag: str = 'output file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
overwrite: bool = True) -> None:
"""Check output folder or file.
Args:
output_path (str): could be folder or file.
allowed_suffix (List[str], optional):
            Check the suffix of `output_path`. If a folder, should be [] or [''].
            If it can be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
tag (str, optional): The `string` tag to specify the output type.
Defaults to 'output file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
overwrite (bool, optional):
Whether overwrite the existing file or folder.
Defaults to True.
Raises:
FileNotFoundError: suffix does not match.
FileExistsError: file or folder already exists and `overwrite` is
False.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(output_path, path_type=path_type)
if exist_result == Existence.MissingParent:
warnings.warn(
f'The parent folder of {tag} does not exist: {output_path},' +
f' will make dir {Path(output_path).parent.absolute().__str__()}')
os.makedirs(
Path(output_path).parent.absolute().__str__(), exist_ok=True)
elif exist_result == Existence.DirectoryNotExist:
os.mkdir(output_path)
print(f'Making directory {output_path} for saving results.')
elif exist_result == Existence.FileNotExist:
suffix_matched = \
check_path_suffix(output_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}: '
f'{output_path}.')
elif exist_result == Existence.FileExist:
if not overwrite:
raise FileExistsError(
f'{output_path} exists (set overwrite = True to overwrite).')
else:
print(f'Overwriting {output_path}.')
elif exist_result == Existence.DirectoryExistEmpty:
pass
elif exist_result == Existence.DirectoryExistNotEmpty:
if not overwrite:
raise FileExistsError(
f'{output_path} is not empty (set overwrite = '
'True to overwrite the files).')
else:
print(f'Overwriting {output_path} and its files.')
else:
raise FileNotFoundError(f'No Existence type for {output_path}.')
def check_input_path(
input_path: str,
allowed_suffix: List[str] = [],
tag: str = 'input file',
path_type: Literal['file', 'dir', 'auto'] = 'auto',
):
"""Check input folder or file.
Args:
input_path (str): input folder or file path.
allowed_suffix (List[str], optional):
            Check the suffix of `input_path`. If a folder, should be [] or [''].
            If it can be either a folder or a file, should be [suffixes..., ''].
Defaults to [].
        tag (str, optional): The `string` tag to specify the input type.
            Defaults to 'input file'.
        path_type (Literal['file', 'dir', 'auto'], optional):
            Choose `file` for file and `dir` for folder.
Choose `auto` if allowed to be both.
Defaults to 'auto'.
Raises:
        FileNotFoundError: file does not exist or suffix does not match.
Returns:
None
"""
if path_type.lower() == 'dir':
allowed_suffix = []
exist_result = check_path_existence(input_path, path_type=path_type)
if exist_result in [
Existence.FileExist, Existence.DirectoryExistEmpty,
Existence.DirectoryExistNotEmpty
]:
suffix_matched = \
check_path_suffix(input_path, allowed_suffix=allowed_suffix)
if not suffix_matched:
raise FileNotFoundError(
f'The {tag} should be {", ".join(allowed_suffix)}:' +
f'{input_path}.')
else:
raise FileNotFoundError(f'The {tag} does not exist: {input_path}.')
The provided code snippet includes necessary dependencies for implementing the `compress_video` function. Write a Python function `def compress_video(input_path: str, output_path: str, compress_rate: int = 1, down_sample_scale: Union[float, int] = 1, fps: int = 30, disable_log: bool = False) -> None` to solve the following problem:
Compress a video file. Args: input_path (str): input video file path. output_path (str): output video file path. compress_rate (int, optional): compress rate; influences the bit rate. Defaults to 1. down_sample_scale (Union[float, int], optional): spatial down sample scale. Defaults to 1. fps (int, optional): Frames per second. Defaults to 30. disable_log (bool, optional): whether to disable the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None.
Here is the function:
def compress_video(input_path: str,
output_path: str,
compress_rate: int = 1,
down_sample_scale: Union[float, int] = 1,
fps: int = 30,
disable_log: bool = False) -> None:
"""Compress a video file.
Args:
input_path (str): input video file path.
output_path (str): output video file path.
        compress_rate (int, optional): compress rate; influences the bit rate.
Defaults to 1.
down_sample_scale (Union[float, int], optional): spatial down sample
scale. Defaults to 1.
fps (int, optional): Frames per second. Defaults to 30.
        disable_log (bool, optional): whether to disable the ffmpeg command
            info. Defaults to False.
Raises:
FileNotFoundError: check the input path.
FileNotFoundError: check the output path.
Returns:
None.
"""
input_pathinfo = Path(input_path)
check_input_path(
input_path,
allowed_suffix=['.gif', '.mp4'],
tag='input video',
path_type='file')
prepare_output_path(
output_path,
allowed_suffix=['.gif', '.mp4'],
tag='output video',
path_type='file',
overwrite=True)
info = vid_info_reader(input_path)
width = int(info['width'])
height = int(info['height'])
bit_rate = int(info['bit_rate'])
duration = float(info['duration'])
if (output_path == input_path) or (not output_path):
temp_outpath = os.path.join(
os.path.abspath(input_pathinfo.parent),
'temp_file' + input_pathinfo.suffix)
else:
temp_outpath = output_path
new_width = int(width / down_sample_scale)
new_width += new_width % 2
new_height = int(height / down_sample_scale)
new_height += new_height % 2
command = [
'ffmpeg', '-y', '-r',
str(info['r_frame_rate']), '-i', input_path, '-loglevel', 'error',
'-b:v', f'{bit_rate / (compress_rate * down_sample_scale)}', '-r',
f'{fps}', '-t', f'{duration}', '-s',
'%dx%d' % (new_width, new_height), temp_outpath
]
if not disable_log:
print(f'Running \"{" ".join(command)}\"')
subprocess.call(command)
if (output_path == input_path) or (not output_path):
subprocess.call(['mv', '-f', temp_outpath, input_path]) | Compress a video file. Args: input_path (str): input video file path. output_path (str): output video file path. compress_rate (int, optional): compress rate; influences the bit rate. Defaults to 1. down_sample_scale (Union[float, int], optional): spatial down sample scale. Defaults to 1. fps (int, optional): Frames per second. Defaults to 30. disable_log (bool, optional): whether to disable the ffmpeg command info. Defaults to False. Raises: FileNotFoundError: check the input path. FileNotFoundError: check the output path. Returns: None.
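The resolution and bit-rate arithmetic inside `compress_video` is worth isolating: dimensions are down-sampled and bumped to even values (odd frame sizes are rejected by common encoders such as libx264 with yuv420p), and the bit rate is divided by both factors. `compression_params` below is an illustrative stdlib-only sketch, not part of the library.

```python
def compression_params(width, height, bit_rate, compress_rate, down_sample_scale):
    # Truncate the down-sampled size, then bump odd sizes up to the next
    # even value, mirroring the `new_width += new_width % 2` trick above.
    new_w = int(width / down_sample_scale)
    new_w += new_w % 2
    new_h = int(height / down_sample_scale)
    new_h += new_h % 2
    # Bit rate shrinks with both the compression rate and the spatial scale.
    new_bit_rate = bit_rate / (compress_rate * down_sample_scale)
    return new_w, new_h, new_bit_rate

print(compression_params(1921, 1081, 8_000_000, 2, 2))  # (960, 540, 2000000.0)
```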
14,422 | from collections import OrderedDict
import torch.distributed as dist
from mmcv.runner import OptimizerHook
from torch._utils import (
_flatten_dense_tensors,
_take_tensors,
_unflatten_dense_tensors,
)
def _allreduce_coalesced(tensors, world_size, bucket_size_mb=-1):
if bucket_size_mb > 0:
bucket_size_bytes = bucket_size_mb * 1024 * 1024
buckets = _take_tensors(tensors, bucket_size_bytes)
else:
buckets = OrderedDict()
for tensor in tensors:
tp = tensor.type()
if tp not in buckets:
buckets[tp] = []
buckets[tp].append(tensor)
buckets = buckets.values()
for bucket in buckets:
flat_tensors = _flatten_dense_tensors(bucket)
dist.all_reduce(flat_tensors)
flat_tensors.div_(world_size)
for tensor, synced in zip(
bucket, _unflatten_dense_tensors(flat_tensors, bucket)):
tensor.copy_(synced)
def allreduce_grads(params, coalesce=True, bucket_size_mb=-1):
grads = [
param.grad.data for param in params
if param.requires_grad and param.grad is not None
]
world_size = dist.get_world_size()
if coalesce:
_allreduce_coalesced(grads, world_size, bucket_size_mb)
else:
for tensor in grads:
dist.all_reduce(tensor.div_(world_size)) | null |
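The dtype bucketing in `_allreduce_coalesced` (the `bucket_size_mb <= 0` branch) is just ordered grouping, so that each flatten/all-reduce pass touches a single tensor type. A stdlib-only sketch, with `bucket_by_type` as a hypothetical stand-in and plain numbers in place of tensors:

```python
from collections import OrderedDict

def bucket_by_type(items, type_of):
    # Group items by their type key, preserving first-seen order,
    # mirroring the OrderedDict grouping in _allreduce_coalesced.
    buckets = OrderedDict()
    for item in items:
        buckets.setdefault(type_of(item), []).append(item)
    return list(buckets.values())

grads = [1.0, 2, 3.0, 4]
print(bucket_by_type(grads, lambda x: type(x).__name__))  # [[1.0, 3.0], [2, 4]]
```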
14,423 | from functools import partial
import torch
def multi_apply(func, *args, **kwargs):
pfunc = partial(func, **kwargs) if kwargs else func
map_results = map(pfunc, *args)
return tuple(map(list, zip(*map_results))) | null |
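`multi_apply` is pure stdlib, so its map-then-transpose behavior can be demonstrated directly. The logic below is the same; `scale_and_sum` is a made-up example function returning two values per item:

```python
from functools import partial

def multi_apply(func, *args, **kwargs):
    # Bind keyword arguments once, map over the positional lists in lockstep,
    # then transpose the per-item tuples into per-field lists.
    pfunc = partial(func, **kwargs) if kwargs else func
    return tuple(map(list, zip(*map(pfunc, *args))))

def scale_and_sum(x, y, factor=1):
    return x * factor, x + y

scales, sums = multi_apply(scale_and_sum, [1, 2, 3], [10, 20, 30], factor=2)
print(scales)  # [2, 4, 6]
print(sums)    # [11, 22, 33]
```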
14,424 | from functools import partial
import torch
def torch_to_numpy(x):
assert isinstance(x, torch.Tensor)
return x.detach().cpu().numpy() | null |
14,425 | from typing import Optional, Tuple, Union
import numpy as np
import torch
from mmhuman3d.core.conventions.keypoints_mapping import KEYPOINTS_FACTORY
from mmhuman3d.core.conventions.keypoints_mapping.human_data import (
HUMAN_DATA_LIMBS_INDEX,
HUMAN_DATA_PALETTE,
)
The provided code snippet includes necessary dependencies for implementing the `transform_kps2d` function. Write a Python function `def transform_kps2d(kps2d: torch.Tensor, transf, img_res=224)` to solve the following problem:
Process gt 2D keypoints and apply transforms.
Here is the function:
def transform_kps2d(kps2d: torch.Tensor, transf, img_res=224):
"""Process gt 2D keypoints and apply transforms."""
bs, n_kps = kps2d.shape[:2]
kps_pad = torch.cat([kps2d, torch.ones((bs, n_kps, 1)).to(kps2d)], dim=-1)
kps_new = torch.bmm(transf, kps_pad.transpose(1, 2))
kps_new = kps_new.transpose(1, 2)
kps_new[:, :, :-1] = 2. * kps_new[:, :, :-1] / img_res - 1.
return kps_new[:, :, :2] | Process gt 2D keypoints and apply transforms. |
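For a single keypoint the batched matrix multiply in `transform_kps2d` reduces to one affine application in homogeneous coordinates followed by normalization of pixel coordinates to [-1, 1]. A stdlib sketch with a hypothetical `transform_kp2d` helper, assuming a 2x3 affine matrix:

```python
def transform_kp2d(kp, affine, img_res=224):
    # Pad (x, y) to homogeneous coords implicitly, apply the 2x3 affine
    # row by row, then map pixels into the normalized [-1, 1] range.
    x, y = kp
    tx = affine[0][0] * x + affine[0][1] * y + affine[0][2]
    ty = affine[1][0] * x + affine[1][1] * y + affine[1][2]
    return (2.0 * tx / img_res - 1.0, 2.0 * ty / img_res - 1.0)

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(transform_kp2d((112.0, 112.0), identity))  # (0.0, 0.0), the image center
```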
14,426 | import numpy as np
import torch
from einops.einops import rearrange
from torch.nn import functional as F
The provided code snippet includes necessary dependencies for implementing the `rot6d_to_rotmat` function. Write a Python function `def rot6d_to_rotmat(x)` to solve the following problem:
Convert 6D rotation representation to 3x3 rotation matrix. Based on Zhou et al., "On the Continuity of Rotation Representations in Neural Networks", CVPR 2019 Input: (B,6) Batch of 6-D rotation representations Output: (B,3,3) Batch of corresponding rotation matrices
Here is the function:
def rot6d_to_rotmat(x):
"""Convert 6D rotation representation to 3x3 rotation matrix.
Based on Zhou et al., "On the Continuity of Rotation
Representations in Neural Networks", CVPR 2019
Input:
(B,6) Batch of 6-D rotation representations
Output:
(B,3,3) Batch of corresponding rotation matrices
"""
if x.shape[-1] == 6:
batch_size = x.shape[0]
if len(x.shape) == 3:
num = x.shape[1]
x = rearrange(x, 'b n d -> (b n) d', d=6)
else:
num = 1
x = rearrange(x, 'b (k l) -> b k l', k=3, l=2)
# x = x.view(-1,3,2)
a1 = x[:, :, 0]
a2 = x[:, :, 1]
b1 = F.normalize(a1)
b2 = F.normalize(a2 -
torch.einsum('bi,bi->b', b1, a2).unsqueeze(-1) * b1)
b3 = torch.cross(b1, b2, dim=-1)
mat = torch.stack((b1, b2, b3), dim=-1)
if num > 1:
mat = rearrange(
mat, '(b n) h w-> b n h w', b=batch_size, n=num, h=3, w=3)
else:
if isinstance(x, torch.Tensor):
x = x.view(-1, 3, 2)
elif isinstance(x, np.ndarray):
x = x.reshape(-1, 3, 2)
a1 = x[:, :, 0]
a2 = x[:, :, 1]
b1 = F.normalize(a1)
b2 = F.normalize(a2 -
torch.einsum('bi,bi->b', b1, a2).unsqueeze(-1) * b1)
        b3 = torch.cross(b1, b2, dim=-1)
mat = torch.stack((b1, b2, b3), dim=-1)
return mat | Convert 6D rotation representation to 3x3 rotation matrix. Based on Zhou et al., "On the Continuity of Rotation Representations in Neural Networks", CVPR 2019 Input: (B,6) Batch of 6-D rotation representations Output: (B,3,3) Batch of corresponding rotation matrices |
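The 6D-to-rotation conversion is Gram-Schmidt orthonormalization of two 3D vectors. Below is a dependency-free single-sample sketch (`rot6d_to_rotmat_single` is an illustrative name) that follows the same row-major (3, 2) reshape and orthonormalization as the batched code:

```python
import math

def rot6d_to_rotmat_single(x6):
    # Row-major (3, 2) reshape of the 6-vector: column a1 = x6[0::2],
    # column a2 = x6[1::2], matching rearrange(x, 'b (k l) -> b k l').
    a1, a2 = x6[0::2], x6[1::2]
    norm1 = math.sqrt(sum(c * c for c in a1))
    b1 = [c / norm1 for c in a1]
    proj = sum(u * v for u, v in zip(b1, a2))          # Gram-Schmidt projection
    r2 = [c - proj * b for c, b in zip(a2, b1)]
    norm2 = math.sqrt(sum(c * c for c in r2))
    b2 = [c / norm2 for c in r2]
    b3 = [b1[1] * b2[2] - b1[2] * b2[1],               # b3 = b1 x b2
          b1[2] * b2[0] - b1[0] * b2[2],
          b1[0] * b2[1] - b1[1] * b2[0]]
    # Stack b1, b2, b3 as the columns of the rotation matrix.
    return [[b1[i], b2[i], b3[i]] for i in range(3)]

R = rot6d_to_rotmat_single([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
# R is orthonormal with determinant +1 by construction.
```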
14,427 | import numpy as np
import torch
from einops.einops import rearrange
from torch.nn import functional as F
def quaternion_to_angle_axis(quaternion: torch.Tensor) -> torch.Tensor:
"""
This function is borrowed from https://github.com/kornia/kornia
Convert quaternion vector to angle axis of rotation.
Adapted from ceres C++ library: ceres-solver/include/ceres/rotation.h
Args:
quaternion (torch.Tensor): tensor with quaternions.
Return:
torch.Tensor: tensor with angle axis of rotation.
Shape:
- Input: :math:`(*, 4)` where `*` means, any number of dimensions
- Output: :math:`(*, 3)`
Example:
>>> quaternion = torch.rand(2, 4) # Nx4
>>> angle_axis = tgm.quaternion_to_angle_axis(quaternion) # Nx3
"""
if not torch.is_tensor(quaternion):
raise TypeError('Input type is not a torch.Tensor. Got {}'.format(
type(quaternion)))
if not quaternion.shape[-1] == 4:
raise ValueError(
'Input must be a tensor of shape Nx4 or 4. Got {}'.format(
quaternion.shape))
# unpack input and compute conversion
q1: torch.Tensor = quaternion[..., 1]
q2: torch.Tensor = quaternion[..., 2]
q3: torch.Tensor = quaternion[..., 3]
sin_squared_theta: torch.Tensor = q1 * q1 + q2 * q2 + q3 * q3
sin_theta: torch.Tensor = torch.sqrt(sin_squared_theta)
cos_theta: torch.Tensor = quaternion[..., 0]
two_theta: torch.Tensor = 2.0 * torch.where(
cos_theta < 0.0, torch.atan2(-sin_theta, -cos_theta),
torch.atan2(sin_theta, cos_theta))
k_pos: torch.Tensor = two_theta / sin_theta
k_neg: torch.Tensor = 2.0 * torch.ones_like(sin_theta)
k: torch.Tensor = torch.where(sin_squared_theta > 0.0, k_pos, k_neg)
angle_axis: torch.Tensor = torch.zeros_like(quaternion)[..., :3]
angle_axis[..., 0] += q1 * k
angle_axis[..., 1] += q2 * k
angle_axis[..., 2] += q3 * k
return angle_axis
def rotation_matrix_to_quaternion(rotation_matrix, eps=1e-6):
"""
This function is borrowed from https://github.com/kornia/kornia
Convert 3x4 rotation matrix to 4d quaternion vector
This algorithm is based on algorithm described in
https://github.com/KieranWynn/pyquaternion/blob/master/pyquaternion/quaternion.py#L201
Args:
rotation_matrix (Tensor): the rotation matrix to convert.
Return:
Tensor: the rotation in quaternion
Shape:
- Input: :math:`(N, 3, 4)`
- Output: :math:`(N, 4)`
Example:
>>> input = torch.rand(4, 3, 4) # Nx3x4
>>> output = tgm.rotation_matrix_to_quaternion(input) # Nx4
"""
if not torch.is_tensor(rotation_matrix):
raise TypeError('Input type is not a torch.Tensor. Got {}'.format(
type(rotation_matrix)))
if len(rotation_matrix.shape) > 3:
raise ValueError(
'Input size must be a three dimensional tensor. Got {}'.format(
rotation_matrix.shape))
# if not rotation_matrix.shape[-2:] == (3, 4):
# raise ValueError(
# 'Input size must be a N x 3 x 4 tensor. Got {}'.format(
# rotation_matrix.shape))
rmat_t = torch.transpose(rotation_matrix, 1, 2)
mask_d2 = rmat_t[:, 2, 2] < eps
mask_d0_d1 = rmat_t[:, 0, 0] > rmat_t[:, 1, 1]
mask_d0_nd1 = rmat_t[:, 0, 0] < -rmat_t[:, 1, 1]
t0 = 1 + rmat_t[:, 0, 0] - rmat_t[:, 1, 1] - rmat_t[:, 2, 2]
q0 = torch.stack([
rmat_t[:, 1, 2] - rmat_t[:, 2, 1], t0,
rmat_t[:, 0, 1] + rmat_t[:, 1, 0], rmat_t[:, 2, 0] + rmat_t[:, 0, 2]
], -1)
t0_rep = t0.repeat(4, 1).t()
t1 = 1 - rmat_t[:, 0, 0] + rmat_t[:, 1, 1] - rmat_t[:, 2, 2]
q1 = torch.stack([
rmat_t[:, 2, 0] - rmat_t[:, 0, 2], rmat_t[:, 0, 1] + rmat_t[:, 1, 0],
t1, rmat_t[:, 1, 2] + rmat_t[:, 2, 1]
], -1)
t1_rep = t1.repeat(4, 1).t()
t2 = 1 - rmat_t[:, 0, 0] - rmat_t[:, 1, 1] + rmat_t[:, 2, 2]
q2 = torch.stack([
rmat_t[:, 0, 1] - rmat_t[:, 1, 0], rmat_t[:, 2, 0] + rmat_t[:, 0, 2],
rmat_t[:, 1, 2] + rmat_t[:, 2, 1], t2
], -1)
t2_rep = t2.repeat(4, 1).t()
t3 = 1 + rmat_t[:, 0, 0] + rmat_t[:, 1, 1] + rmat_t[:, 2, 2]
q3 = torch.stack([
t3, rmat_t[:, 1, 2] - rmat_t[:, 2, 1],
rmat_t[:, 2, 0] - rmat_t[:, 0, 2], rmat_t[:, 0, 1] - rmat_t[:, 1, 0]
], -1)
t3_rep = t3.repeat(4, 1).t()
mask_c0 = mask_d2 * mask_d0_d1
mask_c1 = mask_d2 * ~mask_d0_d1
mask_c2 = ~mask_d2 * mask_d0_nd1
mask_c3 = ~mask_d2 * ~mask_d0_nd1
mask_c0 = mask_c0.view(-1, 1).type_as(q0)
mask_c1 = mask_c1.view(-1, 1).type_as(q1)
mask_c2 = mask_c2.view(-1, 1).type_as(q2)
mask_c3 = mask_c3.view(-1, 1).type_as(q3)
q = q0 * mask_c0 + q1 * mask_c1 + q2 * mask_c2 + q3 * mask_c3
q /= torch.sqrt(t0_rep * mask_c0 + t1_rep * mask_c1 + # noqa
t2_rep * mask_c2 + t3_rep * mask_c3) # noqa
q *= 0.5
return q
The provided code snippet includes necessary dependencies for implementing the `rotation_matrix_to_angle_axis` function. Write a Python function `def rotation_matrix_to_angle_axis(rotation_matrix)` to solve the following problem:
This function is borrowed from https://github.com/kornia/kornia Convert 3x4 rotation matrix to Rodrigues vector Args: rotation_matrix (Tensor): rotation matrix. Returns: Tensor: Rodrigues vector transformation. Shape: - Input: :math:`(N, 3, 4)` - Output: :math:`(N, 3)` Example: >>> input = torch.rand(2, 3, 4) # Nx3x4 >>> output = tgm.rotation_matrix_to_angle_axis(input) # Nx3
Here is the function:
def rotation_matrix_to_angle_axis(rotation_matrix):
"""
This function is borrowed from https://github.com/kornia/kornia
Convert 3x4 rotation matrix to Rodrigues vector
Args:
rotation_matrix (Tensor): rotation matrix.
Returns:
Tensor: Rodrigues vector transformation.
Shape:
- Input: :math:`(N, 3, 4)`
- Output: :math:`(N, 3)`
Example:
>>> input = torch.rand(2, 3, 4) # Nx3x4
>>> output = tgm.rotation_matrix_to_angle_axis(input) # Nx3
"""
if rotation_matrix.shape[1:] == (3, 3):
rot_mat = rotation_matrix.reshape(-1, 3, 3)
hom = torch.tensor([0, 0, 1],
dtype=torch.float32,
device=rotation_matrix.device)
hom = hom.reshape(1, 3, 1).expand(rot_mat.shape[0], -1, -1)
rotation_matrix = torch.cat([rot_mat, hom], dim=-1)
quaternion = rotation_matrix_to_quaternion(rotation_matrix)
aa = quaternion_to_angle_axis(quaternion)
aa[torch.isnan(aa)] = 0.0
return aa | This function is borrowed from https://github.com/kornia/kornia Convert 3x4 rotation matrix to Rodrigues vector Args: rotation_matrix (Tensor): rotation matrix. Returns: Tensor: Rodrigues vector transformation. Shape: - Input: :math:`(N, 3, 4)` - Output: :math:`(N, 3)` Example: >>> input = torch.rand(2, 3, 4) # Nx3x4 >>> output = tgm.rotation_matrix_to_angle_axis(input) # Nx3 |
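Away from the theta = 0 and theta = pi singularities that motivate the quaternion route above, the angle-axis vector can also be read directly off the trace and the skew-symmetric part of R. A pure-Python sanity-check sketch, not the library's method:

```python
import math

def rotmat_to_angle_axis_simple(R):
    # trace(R) = 1 + 2*cos(theta); the axis comes from the skew part
    # (R - R^T) / (2*sin(theta)). Valid only away from theta = 0 and pi.
    cos_theta = (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    s = 2.0 * math.sin(theta)
    axis = ((R[2][1] - R[1][2]) / s,
            (R[0][2] - R[2][0]) / s,
            (R[1][0] - R[0][1]) / s)
    return tuple(a * theta for a in axis)

t = math.pi / 3
Rz = [[math.cos(t), -math.sin(t), 0.0],
      [math.sin(t),  math.cos(t), 0.0],
      [0.0, 0.0, 1.0]]
print(rotmat_to_angle_axis_simple(Rz))  # approximately (0, 0, pi/3)
```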
14,428 | import numpy as np
import torch
from einops.einops import rearrange
from torch.nn import functional as F
def estimate_translation_np(S,
joints_2d,
joints_conf,
focal_length=5000,
img_size=224):
"""Find camera translation that brings 3D joints S closest to 2D the
corresponding joints_2d.
Input:
S: (25, 3) 3D joint locations
joints: (25, 3) 2D joint locations and confidence
Returns:
(3,) camera translation vector
"""
num_joints = S.shape[0]
# focal length
f = np.array([focal_length, focal_length])
# optical center
center = np.array([img_size / 2., img_size / 2.])
# transformations
Z = np.reshape(np.tile(S[:, 2], (2, 1)).T, -1)
XY = np.reshape(S[:, 0:2], -1)
OO = np.tile(center, num_joints)
F = np.tile(f, num_joints)
weight2 = np.reshape(np.tile(np.sqrt(joints_conf), (2, 1)).T, -1)
# least squares
Q = np.array([
F * np.tile(np.array([1, 0]), num_joints),
F * np.tile(np.array([0, 1]), num_joints),
OO - np.reshape(joints_2d, -1)
]).T
c = (np.reshape(joints_2d, -1) - OO) * Z - F * XY
# weighted least squares
W = np.diagflat(weight2)
Q = np.dot(W, Q)
c = np.dot(W, c)
# square matrix
A = np.dot(Q.T, Q)
b = np.dot(Q.T, c)
# solution
trans = np.linalg.solve(A, b)
return trans
The provided code snippet includes necessary dependencies for implementing the `estimate_translation` function. Write a Python function `def estimate_translation(S, joints_2d, focal_length=5000., img_size=224.)` to solve the following problem:
Find the camera translation that brings the 3D joints S closest to the corresponding 2D joints joints_2d. Input: S: (B, 49, 3) 3D joint locations joints: (B, 49, 3) 2D joint locations and confidence Returns: (B, 3) camera translation vectors
Here is the function:
def estimate_translation(S, joints_2d, focal_length=5000., img_size=224.):
"""Find camera translation that brings 3D joints S closest to 2D the
corresponding joints_2d.
Input:
S: (B, 49, 3) 3D joint locations
joints: (B, 49, 3) 2D joint locations and confidence
Returns:
(B, 3) camera translation vectors
"""
device = S.device
# Use only joints 25:49 (GT joints)
S = S[:, 25:, :].cpu().numpy()
joints_2d = joints_2d[:, 25:, :].cpu().numpy()
joints_conf = joints_2d[:, :, -1]
joints_2d = joints_2d[:, :, :-1]
trans = np.zeros((S.shape[0], 3), dtype=np.float32)
# Find the translation for each example in the batch
for i in range(S.shape[0]):
S_i = S[i]
joints_i = joints_2d[i]
conf_i = joints_conf[i]
trans[i] = estimate_translation_np(
S_i,
joints_i,
conf_i,
focal_length=focal_length,
img_size=img_size)
return torch.from_numpy(trans).to(device) | Find the camera translation that brings the 3D joints S closest to the corresponding 2D joints joints_2d. Input: S: (B, 49, 3) 3D joint locations joints: (B, 49, 3) 2D joint locations and confidence Returns: (B, 3) camera translation vectors
14,429 | import numpy as np
import torch
from einops.einops import rearrange
from torch.nn import functional as F
def perspective_projection(points, rotation, translation, focal_length,
camera_center):
"""This function computes the perspective projection of a set of points.
Input:
points (bs, N, 3): 3D points
rotation (bs, 3, 3): Camera rotation
translation (bs, 3): Camera translation
focal_length (bs,) or scalar: Focal length
camera_center (bs, 2): Camera center
"""
batch_size = points.shape[0]
K = torch.zeros([batch_size, 3, 3], device=points.device)
K[:, 0, 0] = focal_length
K[:, 1, 1] = focal_length
K[:, 2, 2] = 1.
K[:, :-1, -1] = camera_center
# Transform points
points = torch.einsum('bij,bkj->bki', rotation, points)
points = points + translation.unsqueeze(1)
# Apply perspective distortion
projected_points = points / points[:, :, -1].unsqueeze(-1)
# Apply camera intrinsics
projected_points = torch.einsum('bij,bkj->bki', K, projected_points)
return projected_points[:, :, :-1]
The provided code snippet includes necessary dependencies for implementing the `project_points` function. Write a Python function `def project_points(points_3d, camera, focal_length, img_res)` to solve the following problem:
Perform orthographic projection of 3D points using the camera parameters, return projected 2D points in image plane. Notes: batch size: B point number: N Args: points_3d (Tensor([B, N, 3])): 3D points. camera (Tensor([B, 3])): camera parameters with the 3 channel as (scale, translation_x, translation_y) Returns: points_2d (Tensor([B, N, 2])): projected 2D points in image space.
Here is the function:
def project_points(points_3d, camera, focal_length, img_res):
"""Perform orthographic projection of 3D points using the camera
parameters, return projected 2D points in image plane.
Notes:
batch size: B
point number: N
Args:
points_3d (Tensor([B, N, 3])): 3D points.
camera (Tensor([B, 3])): camera parameters with the
3 channel as (scale, translation_x, translation_y)
Returns:
points_2d (Tensor([B, N, 2])): projected 2D points
in image space.
"""
batch_size = points_3d.shape[0]
device = points_3d.device
cam_t = torch.stack([
camera[:, 1], camera[:, 2], 2 * focal_length /
(img_res * camera[:, 0] + 1e-9)
],
dim=-1)
camera_center = camera.new_zeros([batch_size, 2])
rot_t = torch.eye(
3, device=device,
dtype=points_3d.dtype).unsqueeze(0).expand(batch_size, -1, -1)
keypoints_2d = perspective_projection(
points_3d,
rotation=rot_t,
translation=cam_t,
focal_length=focal_length,
camera_center=camera_center)
return keypoints_2d | Perform orthographic projection of 3D points using the camera parameters, return projected 2D points in image plane. Notes: batch size: B point number: N Args: points_3d (Tensor([B, N, 3])): 3D points. camera (Tensor([B, 3])): camera parameters with the 3 channel as (scale, translation_x, translation_y) Returns: points_2d (Tensor([B, N, 2])): projected 2D points in image space. |
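For one point with identity rotation, the perspective projection used by `project_points` collapses to: translate into camera coordinates, divide by depth, apply intrinsics. A stdlib sketch with a hypothetical `perspective_project_point` helper:

```python
def perspective_project_point(p, translation, focal_length, center):
    # Camera coordinates, then perspective divide, then the pinhole
    # intrinsics u = f*x/z + cx, v = f*y/z + cy.
    x, y, z = (pi + ti for pi, ti in zip(p, translation))
    return (focal_length * x / z + center[0],
            focal_length * y / z + center[1])

u, v = perspective_project_point((1.0, 2.0, 0.0), (0.0, 0.0, 4.0),
                                 100.0, (0.0, 0.0))
print(u, v)  # 25.0 50.0
```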
14,430 | import numpy as np
import torch
from einops.einops import rearrange
from torch.nn import functional as F
The provided code snippet includes necessary dependencies for implementing the `weak_perspective_projection` function. Write a Python function `def weak_perspective_projection(points, scale, translation)` to solve the following problem:
This function computes the weak perspective projection of a set of points. Input: points (bs, N, 3): 3D points scale (bs,1): scalar translation (bs, 2): point 2D translation
Here is the function:
def weak_perspective_projection(points, scale, translation):
"""This function computes the weak perspective projection of a set of
points.
Input:
points (bs, N, 3): 3D points
scale (bs,1): scalar
translation (bs, 2): point 2D translation
"""
projected_points = scale.view(-1, 1, 1) * (
points[:, :, :2] + translation.view(-1, 1, 2))
return projected_points | This function computes the weak perspective projection of a set of points. Input: points (bs, N, 3): 3D points scale (bs,1): scalar translation (bs, 2): point 2D translation |
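For a single point, the weak-perspective model drops the depth coordinate entirely and applies a uniform scale after a 2D shift. A minimal stdlib sketch mirroring the formula above (`weak_perspective_project` is an illustrative name):

```python
def weak_perspective_project(point3d, scale, tx, ty):
    # Depth is discarded; the point is shifted in the image plane and
    # scaled, exactly as scale * (points[..., :2] + translation) above.
    x, y, _ = point3d
    return (scale * (x + tx), scale * (y + ty))

print(weak_perspective_project((1.0, 2.0, 5.0), 0.5, 0.1, -0.2))  # ≈ (0.55, 0.9)
```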