| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def crop(img: Tensor, top: int, left: int, height: int, width: int) -> Tensor:
"""Crop the given image at specified location and output size.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
If image size is smaller than ... | Crop the given image at specified location and output size.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
If the image size is smaller than the output size along any edge, the image is padded with 0 and then cropped.
Args:
... | crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |
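The `crop` semantics described in the row above (the window may extend past the image, in which case the missing region is zero-padded) can be sketched in NumPy. `crop_sketch` is a hypothetical helper for illustration, not the paddlevision implementation:

```python
import numpy as np

def crop_sketch(img, top, left, height, width):
    """Crop img[..., H, W]; regions outside the image bounds are zero-padded."""
    H, W = img.shape[-2:]
    out = np.zeros(img.shape[:-2] + (height, width), dtype=img.dtype)
    # Intersection of the requested window with the image.
    y0, y1 = max(top, 0), min(top + height, H)
    x0, x1 = max(left, 0), min(left + width, W)
    if y0 < y1 and x0 < x1:
        out[..., y0 - top:y1 - top, x0 - left:x1 - left] = img[..., y0:y1, x0:x1]
    return out
```

A window at `top=-1` simply leaves the first output row zeroed rather than raising an error.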
def center_crop(img: Tensor, output_size: List[int]) -> Tensor:
"""Crops the given image at the center.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
If image size is smaller than output size along any edge, image is p... | Crops the given image at the center.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
If the image size is smaller than the output size along any edge, the image is padded with 0 and then center cropped.
Args:
img (PIL Image... | center_crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |
def resized_crop(
img: Tensor,
top: int,
left: int,
height: int,
width: int,
size: List[int],
interpolation: InterpolationMode=InterpolationMode.BILINEAR) -> Tensor:
"""Crop the given image and resize it to desired size.
If the image is paddle Tensor, it i... | Crop the given image and resize it to desired size.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
Args:
img (PIL Image or Tensor): Image to be cropped. (0,0) denotes the top left corner of the image.
top (i... | resized_crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |
def hflip(img):
"""Horizontally flip the given image.
Args:
img (PIL Image or Tensor): Image to be flipped. If img
is a Tensor, it is expected to be in [..., H, W] format,
where ... means it can have an arbitrary number of leading
dimensions.
Returns:
PIL ... | Horizontally flip the given image.
Args:
img (PIL Image or Tensor): Image to be flipped. If img
is a Tensor, it is expected to be in [..., H, W] format,
where ... means it can have an arbitrary number of leading
dimensions.
Returns:
PIL Image or Tensor: Horiz... | hflip | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |
def get_params(img: Tensor, scale: List[float],
ratio: List[float]) -> Tuple[int, int, int, int]:
"""Get parameters for ``crop`` for a random sized crop.
Args:
img (PIL Image or Tensor): Input image.
scale (list): range of scale of the origin size cropped
... | Get parameters for ``crop`` for a random sized crop.
Args:
img (PIL Image or Tensor): Input image.
scale (list): range of scale of the origin size cropped
ratio (list): range of aspect ratio of the origin aspect ratio cropped
Returns:
tuple: params (i, j... | get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/transforms.py | Apache-2.0 |
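The scale/ratio sampling that `get_params` describes can be sketched as follows; this is a simplified version (hypothetical names, plain `random`, a central-square fallback) rather than the exact paddlevision code:

```python
import math
import random

def sample_crop_params(height, width, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3)):
    """Sample (i, j, h, w) for a random sized crop of an (height, width) image."""
    area = height * width
    for _ in range(10):
        target_area = area * random.uniform(*scale)
        # Sample aspect ratio in log space so it is symmetric around 1.
        log_ratio = (math.log(ratio[0]), math.log(ratio[1]))
        aspect = math.exp(random.uniform(*log_ratio))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            i = random.randint(0, height - h)
            j = random.randint(0, width - w)
            return i, j, h, w
    # Fallback: a central square crop of the shorter side.
    s = min(height, width)
    return (height - s) // 2, (width - s) // 2, s, s
```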
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be cropped and resized.
Returns:
PIL Image or Tensor: Randomly cropped and resized image.
"""
i, j, h, w = self.get_params(img, self.scale, self.ratio)
return F.resized_crop(img... |
Args:
img (PIL Image or Tensor): Image to be cropped and resized.
Returns:
PIL Image or Tensor: Randomly cropped and resized image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be flipped.
Returns:
PIL Image or Tensor: Randomly flipped image.
"""
if random.random() < self.p:
return F.hflip(img)
return img |
Args:
img (PIL Image or Tensor): Image to be flipped.
Returns:
PIL Image or Tensor: Randomly flipped image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/transforms.py | Apache-2.0 |
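The random horizontal flip above reduces to one coin toss and one reversal along the width axis. A minimal NumPy sketch (hypothetical `random_hflip`, assuming a [..., H, W] array):

```python
import random
import numpy as np

def random_hflip(img, p=0.5):
    """Flip a [..., H, W] array along its last (width) axis with probability p."""
    if random.random() < p:
        return img[..., ::-1]
    return img
```

Setting `p=1.0` or `p=0.0` makes the behavior deterministic, which is handy in tests.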
def __init__(self,
config,
model_info: dict={},
data_info: dict={},
perf_info: dict={},
resource_info: dict={},
**kwargs):
"""
Construct PaddleInferBenchmark Class to format logs.
args:
... |
Construct PaddleInferBenchmark Class to format logs.
args:
config(paddle.inference.Config): paddle inference config
model_info(dict): basic model info
{'model_name': 'resnet50'
'precision': 'fp32'}
data_info(dict): input data info
... | __init__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/benchmark_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/benchmark_utils.py | Apache-2.0 |
def parse_config(self, config) -> dict:
"""
parse paddle predictor config
args:
config(paddle.inference.Config): paddle inference config
return:
config_status(dict): dict style config info
"""
if isinstance(config, paddle_infer.Config):
... |
parse paddle predictor config
args:
config(paddle.inference.Config): paddle inference config
return:
config_status(dict): dict style config info
| parse_config | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/benchmark_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/benchmark_utils.py | Apache-2.0 |
def report(self, identifier=None):
"""
print log report
args:
identifier(string): identify log
"""
if identifier:
identifier = f"[{identifier}]"
else:
identifier = ""
self.logger.info("\n")
self.logger.info(
... |
print log report
args:
identifier(string): identify log
| report | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/benchmark_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/benchmark_utils.py | Apache-2.0 |
def predict(self, image_list, threshold=0.5, repeats=1, add_timer=True):
'''
Args:
image_list (list): list of image
threshold (float): threshold of the predicted boxes' scores
repeats (int): repeat number for prediction
add_timer (bool): whether add timer during ... |
Args:
image_list (list): list of image
threshold (float): threshold of the predicted boxes' scores
repeats (int): repeat number for prediction
add_timer (bool): whether add timer during prediction
Returns:
results (dict): include 'boxes': np.ndarray:... | predict | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | Apache-2.0 |
def create_inputs(imgs, im_info):
"""Generate inputs for different model types
Args:
imgs (list(numpy)): list of images (np.ndarray)
im_info (list(dict)): list of image info
Returns:
inputs (dict): input of model
"""
inputs = {}
inputs['image'] = np.stack(imgs, axis=0)
... | Generate inputs for different model types
Args:
imgs (list(numpy)): list of images (np.ndarray)
im_info (list(dict)): list of image info
Returns:
inputs (dict): input of model
| create_inputs | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | Apache-2.0 |
def load_predictor(model_dir,
run_mode='paddle',
batch_size=1,
device='CPU',
min_subgraph_size=3,
use_dynamic_shape=False,
trt_min_shape=1,
trt_max_shape=1280,
trt_opt_... | set AnalysisConfig, generate AnalysisPredictor
Args:
model_dir (str): root path of __model__ and __params__
device (str): Choose the device you want to run, it can be: CPU/GPU/XPU, default is CPU
run_mode (str): mode of running(paddle/trt_fp32/trt_fp16/trt_int8)
use_dynamic_shape (bo... | load_predictor | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | Apache-2.0 |
def get_test_images(infer_dir, infer_img):
"""
Get image path list in TEST mode
"""
assert infer_img is not None or infer_dir is not None, \
"--infer_img or --infer_dir should be set"
assert infer_img is None or os.path.isfile(infer_img), \
"{} is not a file".format(infer_img)
... |
Get image path list in TEST mode
| get_test_images | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/infer.py | Apache-2.0 |
def setup_logger(name="ppdet", output=None):
"""
Initialize logger and set its verbosity level to INFO.
Args:
output (str): a file name or a directory to save log. If None, will not save log file.
If ends with ".txt" or ".log", assumed to be a file name.
Otherwise, logs will ... |
Initialize logger and set its verbosity level to INFO.
Args:
output (str): a file name or a directory to save log. If None, will not save log file.
If ends with ".txt" or ".log", assumed to be a file name.
Otherwise, logs will be saved to `output/log.txt`.
name (str): th... | setup_logger | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/logger.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/logger.py | Apache-2.0 |
def get_max_preds(self, heatmaps):
"""get predictions from score maps
Args:
heatmaps: numpy.ndarray([batch_size, num_joints, height, width])
Returns:
preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords
maxvals: numpy.ndarray([batch_size, num_... | get predictions from score maps
Args:
heatmaps: numpy.ndarray([batch_size, num_joints, height, width])
Returns:
preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords
maxvals: numpy.ndarray([batch_size, num_joints, 2]), the maximum confidence of the key... | get_max_preds | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/postprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/postprocess.py | Apache-2.0 |
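The `get_max_preds` contract above (argmax over each [H, W] heatmap, split into x/y coordinates, confidences kept separately) can be sketched with a flattened argmax. `get_max_preds_sketch` is an illustrative reimplementation, not the deploy code:

```python
import numpy as np

def get_max_preds_sketch(heatmaps):
    """heatmaps: [N, K, H, W] -> preds [N, K, 2] as (x, y), maxvals [N, K, 1]."""
    n, k, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, k, -1)
    idx = flat.argmax(axis=2)
    maxvals = flat.max(axis=2, keepdims=True)
    # Column index is x, row index is y.
    preds = np.stack([idx % w, idx // w], axis=2).astype(np.float64)
    preds *= maxvals > 0  # mask out joints with non-positive confidence
    return preds, maxvals
```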
def dark_postprocess(self, hm, coords, kernelsize):
"""
refer to https://github.com/ilovepose/DarkPose/lib/core/inference.py
"""
hm = self.gaussian_blur(hm, kernelsize)
hm = np.maximum(hm, 1e-10)
hm = np.log(hm)
for n in range(coords.shape[0]):
for p ... |
refer to https://github.com/ilovepose/DarkPose/lib/core/inference.py
| dark_postprocess | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/postprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/postprocess.py | Apache-2.0 |
def get_final_preds(self, heatmaps, center, scale, kernelsize=3):
"""The highest heat-value location with a quarter offset in the
direction from the highest response to the second highest response.
Args:
heatmaps (numpy.ndarray): The predicted heatmaps
center (numpy.ndarr... | The highest heat-value location with a quarter offset in the
direction from the highest response to the second highest response.
Args:
heatmaps (numpy.ndarray): The predicted heatmaps
center (numpy.ndarray): The boxes center
scale (numpy.ndarray): The scale factor
... | get_final_preds | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/postprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/postprocess.py | Apache-2.0 |
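The quarter-offset rule in `get_final_preds` nudges each integer peak a quarter pixel toward the larger neighboring response. A sketch of that single refinement step for one joint (hypothetical `quarter_offset`, simplified from the full postprocess):

```python
import numpy as np

def quarter_offset(heatmap, px, py):
    """Shift an integer peak (px, py) a quarter pixel toward the larger neighbor."""
    h, w = heatmap.shape
    if 1 < px < w - 1 and 1 < py < h - 1:
        dx = np.sign(heatmap[py, px + 1] - heatmap[py, px - 1])
        dy = np.sign(heatmap[py + 1, px] - heatmap[py - 1, px])
        return px + 0.25 * dx, py + 0.25 * dy
    return float(px), float(py)
```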
def decode_image(im_file, im_info):
"""read rgb image
Args:
im_file (str|np.ndarray): input can be image path or np.ndarray
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
"""
if isinstance(... | read rgb image
Args:
im_file (str|np.ndarray): input can be image path or np.ndarray
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
| decode_image | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def __call__(self, im, im_info):
"""
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
"""
assert len(self.target_... |
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
| __call__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def generate_scale(self, im):
"""
Args:
im (np.ndarray): image (np.ndarray)
Returns:
im_scale_x: the resize ratio of X
im_scale_y: the resize ratio of Y
"""
origin_shape = im.shape[:2]
im_c = im.shape[2]
if self.keep_ratio:
... |
Args:
im (np.ndarray): image (np.ndarray)
Returns:
im_scale_x: the resize ratio of X
im_scale_y: the resize ratio of Y
| generate_scale | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def __call__(self, im, im_info):
"""
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
"""
im = im.astype(np.float... |
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
| __call__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def __call__(self, im, im_info):
"""
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
"""
im = im.transpose((2, 0... |
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
| __call__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def __call__(self, im, im_info):
"""
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
"""
coarsest_stride = self.... |
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
| __call__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def __call__(self, im, im_info):
"""
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
"""
img = cv2.cvtColor(im, ... |
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
Returns:
im (np.ndarray): processed image (np.ndarray)
im_info (dict): info of processed image
| __call__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def get_affine_transform(center,
input_size,
rot,
output_size,
shift=(0., 0.),
inv=False):
"""Get the affine transform matrix, given the center/scale/rot/output_size.
Args:
cente... | Get the affine transform matrix, given the center/scale/rot/output_size.
Args:
center (np.ndarray[2, ]): Center of the bounding box (x, y).
scale (np.ndarray[2, ]): Scale of the bounding box
wrt [width, height].
rot (float): Rotation angle (degree).
output_size (np.ndarr... | get_affine_transform | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def get_warp_matrix(theta, size_input, size_dst, size_target):
"""This code is based on
https://github.com/open-mmlab/mmpose/blob/master/mmpose/core/post_processing/post_transforms.py
Calculate the transformation matrix under the constraint of unbiased.
Paper ref: Huang et al. The Devil is in ... | This code is based on
https://github.com/open-mmlab/mmpose/blob/master/mmpose/core/post_processing/post_transforms.py
Calculate the transformation matrix under the constraint of unbiased.
Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased
Data Processing for Human Pose ... | get_warp_matrix | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def rotate_point(pt, angle_rad):
"""Rotate a point by an angle.
Args:
pt (list[float]): 2 dimensional point to be rotated
angle_rad (float): rotation angle by radian
Returns:
list[float]: Rotated point.
"""
assert len(pt) == 2
sn, cs = np.sin(angle_rad), np.cos(angle_ra... | Rotate a point by an angle.
Args:
pt (list[float]): 2 dimensional point to be rotated
angle_rad (float): rotation angle by radian
Returns:
list[float]: Rotated point.
| rotate_point | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
def _get_3rd_point(a, b):
"""To calculate the affine matrix, three pairs of points are required. This
function is used to get the 3rd point, given 2D points a & b.
The 3rd point is defined by rotating vector `a - b` by 90 degrees
anticlockwise, using b as the rotation center.
Args:
a (np.n... | To calculate the affine matrix, three pairs of points are required. This
function is used to get the 3rd point, given 2D points a & b.
The 3rd point is defined by rotating vector `a - b` by 90 degrees
anticlockwise, using b as the rotation center.
Args:
a (np.ndarray): point(x,y)
b (np... | _get_3rd_point | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/preprocess.py | Apache-2.0 |
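The `_get_3rd_point` rule above (rotate the vector `a - b` by 90 degrees about `b`) has a closed form: negate and swap the vector's components. A sketch under those assumptions:

```python
import numpy as np

def get_3rd_point_sketch(a, b):
    """Rotate the vector a - b by 90 degrees about b to get the third point."""
    d = a - b
    return b + np.array([-d[1], d[0]], dtype=d.dtype)
```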
def get_current_memory_mb():
"""
Obtain the memory usage of the CPU and GPU while the program is running.
Note that this function itself is time-consuming.
"""
import pynvml
import psutil
import GPUtil
gpu_id = int(os.environ.get('CUDA_VISIBLE_DEVICES', 0))
pid... |
Obtain the memory usage of the CPU and GPU while the program is running.
Note that this function itself is time-consuming.
| get_current_memory_mb | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/deploy/utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/deploy/utils.py | Apache-2.0 |
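The original `get_current_memory_mb` uses pynvml/psutil/GPUtil. The CPU side alone can be sketched with only the standard library `resource` module (Unix-only; this is an alternative, not the deploy implementation):

```python
import resource
import sys

def get_cpu_mem_mb():
    """Peak resident set size of the current process, in MB."""
    # ru_maxrss is reported in KB on Linux and in bytes on macOS.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / (1024.0 ** 2) if sys.platform == 'darwin' else rss / 1024.0
```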
def _get_save_image_name(self, output_dir, image_path):
"""
Get save image name from source image path.
"""
if not os.path.exists(output_dir):
os.makedirs(output_dir)
image_name = os.path.split(image_path)[-1]
name, ext = os.path.splitext(image_name)
r... |
Get save image name from source image path.
| _get_save_image_name | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/core/trainer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/core/trainer.py | Apache-2.0 |
def get_categories(metric_type, anno_file=None, arch=None):
"""
Get class id to category id map and category id
to category name map from annotation file.
Args:
metric_type (str): metric type, currently support 'coco'.
anno_file (str): annotation file path
"""
if arch == 'keypoi... |
Get class id to category id map and category id
to category name map from annotation file.
Args:
metric_type (str): metric type, currently support 'coco'.
anno_file (str): annotation file path
| get_categories | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | Apache-2.0 |
def _mot_category(category='pedestrian'):
"""
Get class id to category id map and category id
to category name map of mot dataset
"""
label_map = {category: 0}
label_map = sorted(label_map.items(), key=lambda x: x[1])
cats = [l[0] for l in label_map]
clsid2catid = {i: i for i in range(l... |
Get class id to category id map and category id
to category name map of mot dataset
| _mot_category | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | Apache-2.0 |
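The `_mot_category` body shown above builds two small maps from a single label. A self-contained sketch of that logic (hypothetical `mot_category_maps`):

```python
def mot_category_maps(category='pedestrian'):
    """Build class-id -> category-id and category-id -> name maps for one class."""
    label_map = {category: 0}
    cats = [name for name, _ in sorted(label_map.items(), key=lambda x: x[1])]
    clsid2catid = {i: i for i in range(len(cats))}
    catid2name = {i: name for i, name in enumerate(cats)}
    return clsid2catid, catid2name
```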
def _coco17_category():
"""
Get class id to category id map and category id
to category name map of COCO2017 dataset
"""
clsid2catid = {
1: 1,
2: 2,
3: 3,
4: 4,
5: 5,
6: 6,
7: 7,
8: 8,
9: 9,
10: 10,
11: 11,
... |
Get class id to category id map and category id
to category name map of COCO2017 dataset
| _coco17_category | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | Apache-2.0 |
def _dota_category():
"""
Get class id to category id map and category id
to category name map of dota dataset
"""
catid2name = {
0: 'background',
1: 'plane',
2: 'baseball-diamond',
3: 'bridge',
4: 'ground-track-field',
5: 'small-vehicle',
6: '... |
Get class id to category id map and category id
to category name map of dota dataset
| _dota_category | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/category.py | Apache-2.0 |
def __getitem__(self, idx):
"""Prepare sample for training given the index."""
records = copy.deepcopy(self.db[idx])
records['image'] = cv2.imread(records['image_file'], cv2.IMREAD_COLOR |
cv2.IMREAD_IGNORE_ORIENTATION)
records['image'] = cv2.cvtColo... | Prepare sample for training given the index. | __getitem__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/keypoint_coco.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/keypoint_coco.py | Apache-2.0 |
def policy_v0():
"""Autoaugment policy that was used in AutoAugment Detection Paper."""
# Each tuple is an augmentation operation of the form
# (operation, probability, magnitude). Each element in policy is a
# sub-policy that will be applied sequentially on the image.
policy = [
[('Translat... | Autoaugment policy that was used in AutoAugment Detection Paper. | policy_v0 | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def policy_v1():
"""Autoaugment policy that was used in AutoAugment Detection Paper."""
# Each tuple is an augmentation operation of the form
# (operation, probability, magnitude). Each element in policy is a
# sub-policy that will be applied sequentially on the image.
policy = [
[('Translat... | Autoaugment policy that was used in AutoAugment Detection Paper. | policy_v1 | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def policy_v2():
"""Additional policy that performs well on object detection."""
# Each tuple is an augmentation operation of the form
# (operation, probability, magnitude). Each element in policy is a
# sub-policy that will be applied sequentially on the image.
policy = [
[('Color', 0.0, 6)... | Additional policy that performs well on object detection. | policy_v2 | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def policy_v3():
"""Additional policy that performs well on object detection."""
# Each tuple is an augmentation operation of the form
# (operation, probability, magnitude). Each element in policy is a
# sub-policy that will be applied sequentially on the image.
policy = [
[('Posterize', 0.... | Additional policy that performs well on object detection. | policy_v3 | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def blend(image1, image2, factor):
"""Blend image1 and image2 using 'factor'.
Factor can be above 0.0. A value of 0.0 means only image1 is used.
A value of 1.0 means only image2 is used. A value between 0.0 and
1.0 means we linearly interpolate the pixel values between the two
images. A va... | Blend image1 and image2 using 'factor'.
Factor can be above 0.0. A value of 0.0 means only image1 is used.
A value of 1.0 means only image2 is used. A value between 0.0 and
1.0 means we linearly interpolate the pixel values between the two
images. A value greater than 1.0 "extrapolates" the di... | blend | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
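The `blend` semantics above (factor 0 keeps image1, factor 1 keeps image2, values in between interpolate, values above 1 extrapolate, with the result clipped to valid pixel range) can be sketched in NumPy:

```python
import numpy as np

def blend_sketch(image1, image2, factor):
    """factor 0.0 -> image1, 1.0 -> image2; other values interpolate/extrapolate."""
    if factor == 0.0:
        return image1
    if factor == 1.0:
        return image2
    diff = image2.astype(np.float32) - image1.astype(np.float32)
    out = image1.astype(np.float32) + factor * diff
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```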
def cutout(image, pad_size, replace=0):
"""Apply cutout (https://arxiv.org/abs/1708.04552) to image.
This operation applies a (2*pad_size x 2*pad_size) mask of zeros to
a random location within `img`. The pixel values filled in will be of the
value `replace`. The location where the mask will be applied ... | Apply cutout (https://arxiv.org/abs/1708.04552) to image.
This operation applies a (2*pad_size x 2*pad_size) mask of zeros to
a random location within `img`. The pixel values filled in will be of the
value `replace`. The location where the mask will be applied is randomly
chosen uniformly over the whole... | cutout | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
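The cutout operation above amounts to choosing a random center, clipping the `(2*pad_size x 2*pad_size)` window to the image, and filling it with `replace`. A NumPy sketch (hypothetical `cutout_sketch`, using a NumPy `Generator` instead of the original TensorFlow ops):

```python
import numpy as np

def cutout_sketch(image, pad_size, replace=0, rng=None):
    """Fill a (2*pad_size x 2*pad_size) patch at a random center with replace."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    # Clip the patch to the image bounds.
    y0, y1 = max(cy - pad_size, 0), min(cy + pad_size, h)
    x0, x1 = max(cx - pad_size, 0), min(cx + pad_size, w)
    out = image.copy()
    out[y0:y1, x0:x1, ...] = replace
    return out
```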
def rotate(image, degrees, replace):
"""Rotates the image by degrees either clockwise or counterclockwise.
Args:
image: An image Tensor of type uint8.
degrees: Float, a scalar angle in degrees to rotate all images by. If
degrees is positive, the image will be rotated clockwise; otherw... | Rotates the image by degrees either clockwise or counterclockwise.
Args:
image: An image Tensor of type uint8.
degrees: Float, a scalar angle in degrees to rotate all images by. If
degrees is positive the image will be rotated clockwise otherwise it will
be rotated countercl... | rotate | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def random_shift_bbox(image,
bbox,
pixel_scaling,
replace,
new_min_bbox_coords=None):
"""Move the bbox and the image content to a slightly new random location.
Args:
image: 3D uint8 Tensor.
bbox: 1D Tensor t... | Move the bbox and the image content to a slightly new random location.
Args:
image: 3D uint8 Tensor.
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
The potential values for the new mi... | random_shift_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def mask_and_add_image(min_y_, min_x_, max_y_, max_x_, mask,
content_tensor, image_):
"""Applies mask to bbox region in image then adds content_tensor to it."""
mask = np.pad(mask, [[min_y_, (image_height - 1) - max_y_],
[min_x_, (image_width - 1) ... | Applies mask to bbox region in image then adds content_tensor to it. | mask_and_add_image | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _clip_bbox(min_y, min_x, max_y, max_x):
"""Clip bounding box coordinates between 0 and 1.
Args:
min_y: Normalized bbox coordinate of type float between 0 and 1.
min_x: Normalized bbox coordinate of type float between 0 and 1.
max_y: Normalized bbox coordinate of type float between 0... | Clip bounding box coordinates between 0 and 1.
Args:
min_y: Normalized bbox coordinate of type float between 0 and 1.
min_x: Normalized bbox coordinate of type float between 0 and 1.
max_y: Normalized bbox coordinate of type float between 0 and 1.
max_x: Normalized bbox coordinate o... | _clip_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _check_bbox_area(min_y, min_x, max_y, max_x, delta=0.05):
"""Adjusts bbox coordinates to make sure the area is > 0.
Args:
min_y: Normalized bbox coordinate of type float between 0 and 1.
min_x: Normalized bbox coordinate of type float between 0 and 1.
max_y: Normalized bbox coordina... | Adjusts bbox coordinates to make sure the area is > 0.
Args:
min_y: Normalized bbox coordinate of type float between 0 and 1.
min_x: Normalized bbox coordinate of type float between 0 and 1.
max_y: Normalized bbox coordinate of type float between 0 and 1.
max_x: Normalized bbox coor... | _check_bbox_area | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _apply_bbox_augmentation(image, bbox, augmentation_func, *args):
"""Applies augmentation_func to the subsection of image indicated by bbox.
Args:
image: 3D uint8 Tensor.
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized... | Applies augmentation_func to the subsection of image indicated by bbox.
Args:
image: 3D uint8 Tensor.
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
augmentation_func: Augmentation functi... | _apply_bbox_augmentation | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _concat_bbox(bbox, bboxes):
"""Helper function that concates bbox to bboxes along the first dimension."""
# Note if all elements in bboxes are -1 (_INVALID_BOX), then this means
# we discard bboxes and start the bboxes Tensor with the current bbox.
bboxes_sum_check = np.sum(bboxes)
bbox = np.ex... | Helper function that concatenates bbox to bboxes along the first dimension. | _concat_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _apply_bbox_augmentation_wrapper(image, bbox, new_bboxes, prob,
augmentation_func, func_changes_bbox,
*args):
"""Applies _apply_bbox_augmentation with probability prob.
Args:
image: 3D uint8 Tensor.
bbox: 1D Tensor th... | Applies _apply_bbox_augmentation with probability prob.
Args:
image: 3D uint8 Tensor.
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
new_bboxes: 2D Tensor that is a list of the bboxes in ... | _apply_bbox_augmentation_wrapper | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _apply_multi_bbox_augmentation(image, bboxes, prob, aug_func,
func_changes_bbox, *args):
"""Applies aug_func to the image for each bbox in bboxes.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
... | Applies aug_func to the image for each bbox in bboxes.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
has 4 elements (min_y, min_x, max_y, max_x) of type float.
prob: Float that is the probability of applying aug_func to a sp... | _apply_multi_bbox_augmentation | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _apply_multi_bbox_augmentation_wrapper(image, bboxes, prob, aug_func,
func_changes_bbox, *args):
"""Checks to be sure num bboxes > 0 before calling inner function."""
num_bboxes = len(bboxes)
new_image = deepcopy(image)
new_bboxes = deepcopy(bboxes)
if ... | Checks to be sure num bboxes > 0 before calling inner function. | _apply_multi_bbox_augmentation_wrapper | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def rotate_only_bboxes(image, bboxes, prob, degrees, replace):
"""Apply rotate to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(
image, bboxes, prob, rotate, func_changes_bbox,... | Apply rotate to each bbox in the image with probability prob. | rotate_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def shear_x_only_bboxes(image, bboxes, prob, level, replace):
"""Apply shear_x to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(
image, bboxes, prob, shear_x, func_changes_bbox... | Apply shear_x to each bbox in the image with probability prob. | shear_x_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def shear_y_only_bboxes(image, bboxes, prob, level, replace):
"""Apply shear_y to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(
image, bboxes, prob, shear_y, func_changes_bbox... | Apply shear_y to each bbox in the image with probability prob. | shear_y_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def translate_x_only_bboxes(image, bboxes, prob, pixels, replace):
"""Apply translate_x to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(
image, bboxes, prob, translate_x, func... | Apply translate_x to each bbox in the image with probability prob. | translate_x_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def translate_y_only_bboxes(image, bboxes, prob, pixels, replace):
"""Apply translate_y to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(
image, bboxes, prob, translate_y, func... | Apply translate_y to each bbox in the image with probability prob. | translate_y_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def flip_only_bboxes(image, bboxes, prob):
"""Apply flip_lr to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(image, bboxes, prob,
np.f... | Apply flip_lr to each bbox in the image with probability prob. | flip_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def solarize_only_bboxes(image, bboxes, prob, threshold):
"""Apply solarize to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(
image, bboxes, prob, solarize, func_changes_bbox, ... | Apply solarize to each bbox in the image with probability prob. | solarize_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def equalize_only_bboxes(image, bboxes, prob):
"""Apply equalize to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(image, bboxes, prob,
... | Apply equalize to each bbox in the image with probability prob. | equalize_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def cutout_only_bboxes(image, bboxes, prob, pad_size, replace):
"""Apply cutout to each bbox in the image with probability prob."""
func_changes_bbox = False
prob = _scale_bbox_only_op_probability(prob)
return _apply_multi_bbox_augmentation_wrapper(
image, bboxes, prob, cutout, func_changes_bbox... | Apply cutout to each bbox in the image with probability prob. | cutout_only_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _rotate_bbox(bbox, image_height, image_width, degrees):
"""Rotates the bbox coordinated by degrees.
Args:
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
image_height: Int, height of the i... | Rotates the bbox coordinates by degrees.
Args:
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
image_height: Int, height of the image.
image_width: Int, width of the image.
degree... | _rotate_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def translate_x(image, pixels, replace):
"""Equivalent of PIL Translate in X dimension."""
image = Image.fromarray(wrap(image))
image = image.transform(image.size, Image.AFFINE, (1, 0, pixels, 0, 1, 0))
return unwrap(np.array(image), replace) | Equivalent of PIL Translate in X dimension. | translate_x | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def translate_y(image, pixels, replace):
"""Equivalent of PIL Translate in Y dimension."""
image = Image.fromarray(wrap(image))
image = image.transform(image.size, Image.AFFINE, (1, 0, 0, 0, 1, pixels))
return unwrap(np.array(image), replace) | Equivalent of PIL Translate in Y dimension. | translate_y | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _shift_bbox(bbox, image_height, image_width, pixels, shift_horizontal):
"""Shifts the bbox coordinates by pixels.
Args:
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
image_height: Int, h... | Shifts the bbox coordinates by pixels.
Args:
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
image_height: Int, height of the image.
image_width: Int, width of the image.
pixels: A... | _shift_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def translate_bbox(image, bboxes, pixels, replace, shift_horizontal):
"""Equivalent of PIL Translate in X/Y dimension that shifts image and bbox.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
has 4 elements (min_y, min_x, max_y,... | Equivalent of PIL Translate in X/Y dimension that shifts image and bbox.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
has 4 elements (min_y, min_x, max_y, max_x) of type float with values
between [0, 1].
pixels:... | translate_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def shear_x(image, level, replace):
"""Equivalent of PIL Shearing in X dimension."""
# Shear parallel to x axis is a projective transform
# with a matrix form of:
# [1 level
# 0 1].
image = Image.fromarray(wrap(image))
image = image.transform(image.size, Image.AFFINE, (1, level, 0, ... | Equivalent of PIL Shearing in X dimension. | shear_x | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def shear_y(image, level, replace):
"""Equivalent of PIL Shearing in Y dimension."""
# Shear parallel to y axis is a projective transform
# with a matrix form of:
# [1 0
# level 1].
image = Image.fromarray(wrap(image))
image = image.transform(image.size, Image.AFFINE, (1, 0, 0, leve... | Equivalent of PIL Shearing in Y dimension. | shear_y | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _shear_bbox(bbox, image_height, image_width, level, shear_horizontal):
"""Shifts the bbox according to how the image was sheared.
Args:
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
imag... | Shifts the bbox according to how the image was sheared.
Args:
bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
of type float that represents the normalized coordinates between 0 and 1.
image_height: Int, height of the image.
image_width: Int, width of the image.... | _shear_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def shear_with_bboxes(image, bboxes, level, replace, shear_horizontal):
"""Applies Shear Transformation to the image and shifts the bboxes.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
has 4 elements (min_y, min_x, max_y, max_x... | Applies Shear Transformation to the image and shifts the bboxes.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
has 4 elements (min_y, min_x, max_y, max_x) of type float with values
between [0, 1].
level: Float. H... | shear_with_bboxes | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def autocontrast(image):
"""Implements Autocontrast function from PIL.
Args:
image: A 3D uint8 tensor.
Returns:
The image after it has had autocontrast applied to it and will be of type
uint8.
"""
def scale_channel(image):
"""Scale the 2D image using the autocontra... | Implements Autocontrast function from PIL.
Args:
image: A 3D uint8 tensor.
Returns:
The image after it has had autocontrast applied to it and will be of type
uint8.
| autocontrast | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def scale_channel(image):
"""Scale the 2D image using the autocontrast rule."""
# A possibly cheaper version can be done using cumsum/unique_with_counts
# over the histogram values, rather than iterating over the entire image.
# to compute mins and maxes.
lo = float(np.min(image)... | Scale the 2D image using the autocontrast rule. | scale_channel | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def equalize(image):
"""Implements Equalize function from PIL using."""
def scale_channel(im, c):
"""Scale the data in the channel to implement equalize."""
im = im[:, :, c].astype(np.int32)
# Compute the histogram of the image channel.
histo, _ = np.histogram(im, range=[0, 255]... | Implements Equalize function from PIL. | equalize | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def scale_channel(im, c):
"""Scale the data in the channel to implement equalize."""
im = im[:, :, c].astype(np.int32)
# Compute the histogram of the image channel.
histo, _ = np.histogram(im, range=[0, 255], bins=256)
# For the purposes of computing the step, filter out the non... | Scale the data in the channel to implement equalize. | scale_channel | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
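The per-channel rule sketched above (histogram, step excluding the last nonzero bin, shifted cumulative lookup table) can be mirrored compactly in NumPy. This is a sketch following the standard AutoAugment/PIL equalize recipe, not the repo's exact code:

```python
import numpy as np

def equalize_channel(im):
    """Histogram-equalize one uint8 channel, PIL-style (sketch)."""
    histo = np.bincount(im.ravel(), minlength=256)
    nonzero = histo[histo != 0]
    # The step excludes the last nonzero bin, as in the snippet above.
    step = (int(histo.sum()) - int(nonzero[-1])) // 255
    if step == 0:
        return im  # (near-)constant channel: nothing to equalize
    lut = (np.cumsum(histo) + step // 2) // step
    lut = np.concatenate([[0], lut[:-1]])  # shift, prepending zero
    return np.clip(lut, 0, 255).astype(np.uint8)[im]
```

A perfectly uniform channel is a fixed point of this mapping, and a constant channel is returned unchanged via the `step == 0` branch.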
def wrap(image):
"""Returns 'image' with an extra channel set to all 1s."""
shape = image.shape
extended_channel = 255 * np.ones([shape[0], shape[1], 1], image.dtype)
extended = np.concatenate([image, extended_channel], 2).astype(image.dtype)
return extended | Returns 'image' with an extra channel set to all 1s. | wrap | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def unwrap(image, replace):
"""Unwraps an image produced by wrap.
Where there is a 0 in the last channel for every spatial position,
the rest of the three channels in that spatial dimension are grayed
(set to 128). Operations like translate and shear on a wrapped
Tensor will leave 0s in empty lo... | Unwraps an image produced by wrap.
Where there is a 0 in the last channel for every spatial position,
the rest of the three channels in that spatial dimension are grayed
(set to 128). Operations like translate and shear on a wrapped
Tensor will leave 0s in empty locations. Some transformations lo... | unwrap | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
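The wrap/transform/unwrap pipeline these two records describe can be illustrated with a pure NumPy shift (the real code uses PIL affine transforms; here a rightward translation with `0 <= pixels < width` is assumed):

```python
import numpy as np

def wrap(image):
    """Append an all-255 validity channel."""
    ones = np.full(image.shape[:2] + (1,), 255, image.dtype)
    return np.concatenate([image, ones], axis=2)

def unwrap(image, replace):
    """Replace pixels whose validity channel was zeroed; drop channel."""
    rgb, valid = image[..., :-1], image[..., -1:]
    return np.where(valid == 0, np.uint8(replace), rgb)

def translate_x(image, pixels, replace=128):
    # Shift right by `pixels`; the vacated columns are all-zero in the
    # wrapped tensor, so unwrap fills them with `replace`.
    wrapped = wrap(image)
    shifted = np.zeros_like(wrapped)
    shifted[:, pixels:] = wrapped[:, :wrapped.shape[1] - pixels]
    return unwrap(shifted, replace)
```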
def _cutout_inside_bbox(image, bbox, pad_fraction):
"""Generates cutout mask and the mean pixel value of the bbox.
First a location is randomly chosen within the image as the center where the
cutout mask will be applied. Note this can be towards the boundaries of the
image, so the full cutout mask may ... | Generates cutout mask and the mean pixel value of the bbox.
First a location is randomly chosen within the image as the center where the
cutout mask will be applied. Note this can be towards the boundaries of the
image, so the full cutout mask may not be applied.
Args:
image: 3D uint8 Tensor.
... | _cutout_inside_bbox | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def bbox_cutout(image, bboxes, pad_fraction, replace_with_mean):
"""Applies cutout to the image according to bbox information.
This is a cutout variant that uses bbox information to make more informed
decisions on where to place the cutout mask.
Args:
image: 3D uint8 Tensor.
bboxes: 2... | Applies cutout to the image according to bbox information.
This is a cutout variant that uses bbox information to make more informed
decisions on where to place the cutout mask.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
... | bbox_cutout | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def apply_bbox_cutout(image, bboxes, pad_fraction):
"""Applies cutout to a single bounding box within image."""
# Choose a single bounding box to apply cutout to.
random_index = np.random.randint(0, bboxes.shape[0], dtype=np.int32)
# Select the corresponding bbox and apply cutout.
... | Applies cutout to a single bounding box within image. | apply_bbox_cutout | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _randomly_negate_tensor(tensor):
"""With 50% prob turn the tensor negative."""
should_flip = np.floor(np.random.rand() + 0.5) >= 1
final_tensor = tensor if should_flip else -tensor
return final_tensor | With 50% prob turn the tensor negative. | _randomly_negate_tensor | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _shrink_level_to_arg(level):
"""Converts level to ratio by which we shrink the image content."""
if level == 0:
return (1.0, ) # if level is zero, do not shrink the image
# Maximum shrinking ratio is 2.9.
level = 2. / (_MAX_LEVEL / level) + 0.9
return (level, ) | Converts level to ratio by which we shrink the image content. | _shrink_level_to_arg | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def bbox_wrapper(func):
"""Adds a bboxes function argument to func and returns unchanged bboxes."""
def wrapper(images, bboxes, *args, **kwargs):
return (func(images, *args, **kwargs), bboxes)
return wrapper | Adds a bboxes function argument to func and returns unchanged bboxes. | bbox_wrapper | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
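The decorator above adapts image-only ops to the common `(image, bboxes)` interface; a small usage sketch (the `brighten` op is hypothetical):

```python
def bbox_wrapper(func):
    """Adapt an image-only op to the (image, bboxes) interface."""
    def wrapper(images, bboxes, *args, **kwargs):
        # bboxes pass through untouched.
        return func(images, *args, **kwargs), bboxes
    return wrapper

# Hypothetical image-only op lifted to the common signature:
brighten = bbox_wrapper(lambda img, delta: img + delta)
```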
def _parse_policy_info(name, prob, level, replace_value, augmentation_hparams):
"""Return the function that corresponds to `name` and update `level` param."""
func = NAME_TO_FUNC[name]
args = level_to_arg(augmentation_hparams)[name](level)
# Check to see if prob is passed into function. This is used fo... | Return the function that corresponds to `name` and update `level` param. | _parse_policy_info | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def _apply_func_with_prob(func, image, args, prob, bboxes):
"""Apply `func` to image w/ `args` as input with probability `prob`."""
assert isinstance(args, tuple)
assert 'bboxes' == inspect.getfullargspec(func)[0][1]
# If prob is a function argument, then this randomness is being handled
# inside t... | Apply `func` to image w/ `args` as input with probability `prob`. | _apply_func_with_prob | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def select_and_apply_random_policy(policies, image, bboxes):
"""Select a random policy from `policies` and apply it to `image`."""
policy_to_select = np.random.randint(0, len(policies), dtype=np.int32)
# policy_to_select = 6 # for test
for (i, policy) in enumerate(policies):
if i == policy_to_se... | Select a random policy from `policies` and apply it to `image`. | select_and_apply_random_policy | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
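The selection logic reduces to "draw one sub-policy uniformly and apply only that one"; a minimal sketch (the loop-and-compare form above is equivalent to a single random choice):

```python
import random

def select_and_apply_random_policy(policies, image, bboxes):
    """Pick one sub-policy uniformly at random and apply only it."""
    policy = random.choice(policies)
    return policy(image, bboxes)

# Two toy sub-policies acting on an integer "image":
policies = [lambda im, bb: (im + 1, bb), lambda im, bb: (im + 2, bb)]
```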
def build_and_apply_nas_policy(policies, image, bboxes, augmentation_hparams):
"""Build a policy from the given policies passed in and apply to image.
Args:
policies: list of lists of tuples in the form `(func, prob, level)`, `func`
is a string name of the augmentation function, `prob` is t... | Build a policy from the given policies passed in and apply to image.
Args:
policies: list of lists of tuples in the form `(func, prob, level)`, `func`
is a string name of the augmentation function, `prob` is the probability
of applying the `func` operation, `level` is the input argu... | build_and_apply_nas_policy | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def distort_image_with_autoaugment(image, bboxes, augmentation_name):
"""Applies the AutoAugment policy to `image` and `bboxes`.
Args:
image: `Tensor` of shape [height, width, 3] representing an image.
bboxes: `Tensor` of shape [N, 4] representing ground truth boxes that are
normali... | Applies the AutoAugment policy to `image` and `bboxes`.
Args:
image: `Tensor` of shape [height, width, 3] representing an image.
bboxes: `Tensor` of shape [N, 4] representing ground truth boxes that are
normalized between [0, 1].
augmentation_name: The name of the AutoAugment po... | distort_image_with_autoaugment | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/autoaugment_utils.py | Apache-2.0 |
def __call__(self, sample, context=None):
""" Process a sample.
Args:
sample (dict): a dict of sample, eg: {'image':xx, 'label': xxx}
context (dict): info about this sample processing
Returns:
result (dict): a processed sample
"""
if isinstance... | Process a sample.
Args:
sample (dict): a dict of sample, eg: {'image':xx, 'label': xxx}
context (dict): info about this sample processing
Returns:
result (dict): a processed sample
| __call__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | Apache-2.0 |
def apply(self, sample, context=None):
""" load image if 'im_file' field is not empty but 'image' is"""
if 'image' not in sample:
with open(sample['im_file'], 'rb') as f:
sample['image'] = f.read()
sample.pop('im_file')
im = sample['image']
data =... | Load image if the 'im_file' field is set but 'image' is not. | apply | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | Apache-2.0
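The loading step in the record above (populate `sample['image']` from `sample['im_file']` and drop the path) can be reproduced standalone. The decode step that would follow is only noted in a comment, since it depends on cv2 being available.

```python
def load_image_bytes(sample):
    """Sketch of the loading step: if 'image' is missing, read the raw bytes
    from 'im_file' into sample['image'] and remove the path key."""
    if "image" not in sample:
        with open(sample["im_file"], "rb") as f:
            sample["image"] = f.read()
        sample.pop("im_file")
    # A real pipeline would now decode the bytes (e.g. with cv2.imdecode)
    # into an H x W x 3 array; that step is omitted here.
    return sample
```

If `'image'` is already present the sample passes through unchanged, which is exactly the `if 'image' not in sample` guard above.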
def __init__(self,
mean=[0.485, 0.456, 0.406],
std=[1, 1, 1],
is_scale=True):
"""
Args:
mean (list): the pixel mean
std (list): the pixel standard deviation
"""
super(NormalizeImage, self).__init__()
self.mean = mea... |
Args:
mean (list): the pixel mean
std (list): the pixel standard deviation
| __init__ | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | Apache-2.0 |
def apply(self, sample, context=None):
"""Normalize the image.
Operators:
1.(optional) Scale the image to [0,1]
2. Subtract the mean from each pixel and divide by the std
"""
im = sample['image']
im = im.astype(np.float32, copy=False)
mean = np.array(self.mean... | Normalize the image.
Operators:
1.(optional) Scale the image to [0,1]
2. Subtract the mean from each pixel and divide by the std
| apply | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/dataset/transform/operators.py | Apache-2.0 |
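The two normalization steps above (optional scale to [0,1], then per-channel mean/std) can be sketched with NumPy. The default mean/std values are the common ImageNet statistics, used here as an assumption; the record's own defaults are `std=[1, 1, 1]`.

```python
import numpy as np

def normalize_image(im, mean=(0.485, 0.456, 0.406),
                    std=(0.229, 0.224, 0.225), is_scale=True):
    """Sketch of NormalizeImage.apply for an H x W x C image:
    optionally scale to [0, 1], then subtract the per-channel mean
    and divide by the per-channel std."""
    im = im.astype(np.float32, copy=False)
    # Reshape to (1, 1, C) so the stats broadcast over H and W.
    mean = np.array(mean, dtype=np.float32)[np.newaxis, np.newaxis, :]
    std = np.array(std, dtype=np.float32)[np.newaxis, np.newaxis, :]
    if is_scale:
        im = im / 255.0
    im -= mean
    im /= std
    return im
```

The `(1, 1, C)` reshape is the same trick as the record's `np.newaxis` indexing: it lets a length-3 channel vector broadcast against any image height and width.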
def get_infer_results(outs, catid, bias=0):
"""
Get result at the stage of inference.
The output format is a dictionary containing the bbox or mask result.
For example, bbox result is a list and each element contains
image_id, category_id, bbox and score.
"""
if outs is None or len(outs) == 0:
... |
Get result at the stage of inference.
The output format is a dictionary containing the bbox or mask result.
For example, bbox result is a list and each element contains
image_id, category_id, bbox and score.
| get_infer_results | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/metrics/coco_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/metrics/coco_utils.py | Apache-2.0 |
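The bbox result format described above (one dict per detection with `image_id`, `category_id`, `bbox`, `score`) can be sketched as below. The function name and argument layout are hypothetical; boxes are converted from corner form to the COCO `[x, y, w, h]` form, with `bias` added to width/height as in the record's signature.

```python
def bbox_results(image_id, bboxes, scores, labels, catid, bias=0):
    """Pack detections into COCO-style result dicts.

    catid maps the model's label index to the dataset category id;
    bboxes are (x1, y1, x2, y2) corners.
    """
    results = []
    for (x1, y1, x2, y2), score, label in zip(bboxes, scores, labels):
        results.append({
            "image_id": image_id,
            "category_id": catid[int(label)],
            "bbox": [x1, y1, x2 - x1 + bias, y2 - y1 + bias],
            "score": float(score),
        })
    return results
```

A list of such dicts is exactly what `json.dump` writes to `bbox.json` for the COCO evaluator.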
def cocoapi_eval(jsonfile,
style,
coco_gt=None,
anno_file=None,
max_dets=(100, 300, 1000),
classwise=False,
sigmas=None,
use_area=True):
"""
Args:
jsonfile (str): Evaluation json file, ... |
Args:
jsonfile (str): Evaluation json file, eg: bbox.json, mask.json.
style (str): COCOeval style, can be `bbox`, `segm`, `proposal`, `keypoints` and `keypoints_crowd`.
coco_gt (str): Whether to load COCOAPI through anno_file,
eg: coco_gt = COCO(anno_file)
anno_fi... | cocoapi_eval | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/metrics/coco_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/metrics/coco_utils.py | Apache-2.0 |
def json_eval_results(metric, json_directory, dataset):
"""
cocoapi eval with already existing proposal.json, bbox.json or mask.json
"""
assert metric == 'COCO'
anno_file = dataset.get_anno()
json_file_list = ['proposal.json', 'bbox.json', 'mask.json']
if json_directory:
assert os.path... |
cocoapi eval with already existing proposal.json, bbox.json or mask.json
| json_eval_results | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/metrics/coco_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/metrics/coco_utils.py | Apache-2.0 |
def jaccard_overlap(pred, gt, is_bbox_normalized=False):
"""
Calculate the Jaccard overlap (IoU) ratio between two bounding boxes
"""
if pred[0] >= gt[2] or pred[2] <= gt[0] or \
pred[1] >= gt[3] or pred[3] <= gt[1]:
return 0.
inter_xmin = max(pred[0], gt[0])
inter_ymin = max(pred[1], gt[1])... |
Calculate the Jaccard overlap (IoU) ratio between two bounding boxes
| jaccard_overlap | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/metrics/map_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/metrics/map_utils.py | Apache-2.0 |
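The code cell above is truncated mid-computation; a complete version of the same IoU calculation, including the early exit for disjoint boxes, looks like this (boxes are `[xmin, ymin, xmax, ymax]`):

```python
def jaccard_overlap(pred, gt):
    """IoU of two [xmin, ymin, xmax, ymax] boxes."""
    # Boxes that do not overlap at all have a ratio of 0.
    if pred[0] >= gt[2] or pred[2] <= gt[0] or \
            pred[1] >= gt[3] or pred[3] <= gt[1]:
        return 0.0
    inter_xmin = max(pred[0], gt[0])
    inter_ymin = max(pred[1], gt[1])
    inter_xmax = min(pred[2], gt[2])
    inter_ymax = min(pred[3], gt[3])
    inter = (inter_xmax - inter_xmin) * (inter_ymax - inter_ymin)
    area_pred = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    # Union = sum of areas minus the intersection counted twice.
    return inter / (area_pred + area_gt - inter)
```

For example, unit boxes at (0,0)-(2,2) and (1,1)-(3,3) intersect in a 1x1 square, giving IoU 1/7.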
def update(self, bbox, score, label, gt_box, gt_label, difficult=None):
"""
Update metric statistics from the given prediction and ground
truth information.
"""
if difficult is None:
difficult = np.zeros_like(gt_label)
# record class gt count
for gtl, diff i... |
Update metric statistics from the given prediction and ground
truth information.
| update | python | PaddlePaddle/models | tutorials/pp-series/HRNet-Keypoint/lib/metrics/map_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/pp-series/HRNet-Keypoint/lib/metrics/map_utils.py | Apache-2.0 |
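The per-class matching step inside `update` (greedily match each prediction, highest score first, to an unused ground-truth box above an IoU threshold) can be sketched as below. `match_predictions` is a hypothetical helper, and the IoU function is passed in so the sketch stays self-contained.

```python
def match_predictions(preds, gts, iou_fn, overlap_thresh=0.5):
    """Sketch of the tp/fp bookkeeping in mAP evaluation.

    preds: list of (score, box); gts: list of boxes.
    Returns (score, is_true_positive) pairs: a prediction is a true positive
    if it matches an unused ground-truth box with IoU >= overlap_thresh.
    """
    used = [False] * len(gts)
    tp_fp = []
    for score, box in sorted(preds, key=lambda p: -p[0]):
        best_iou, best_idx = 0.0, -1
        for i, gt_box in enumerate(gts):
            iou = iou_fn(box, gt_box)
            if iou > best_iou and not used[i]:
                best_iou, best_idx = iou, i
        if best_idx >= 0 and best_iou >= overlap_thresh:
            used[best_idx] = True       # each gt can be matched at most once
            tp_fp.append((score, True))
        else:
            tp_fp.append((score, False))
    return tp_fp
```

Marking each ground-truth box as used is what makes a duplicate detection of the same object count as a false positive, which is the behavior standard mAP requires.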