| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def abbrev(name):
    """Get the abbreviation of label name:

    'take (an object) from (a person)' -> 'take ... from ...'
    """
    while name.find('(') != -1:
        st, ed = name.find('('), name.find(')')
        name = name[:st] + '...' + name[ed + 1:]
    return name

| abbrev | python | open-mmlab/mmaction2 | demo/demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_spatiotemporal_det.py | Apache-2.0 |
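Since `abbrev` is given in full above, its parenthesis-stripping loop can be exercised directly:

```python
def abbrev(name):
    """Replace each parenthesized span in a label with '...'."""
    while name.find('(') != -1:
        st, ed = name.find('('), name.find(')')
        name = name[:st] + '...' + name[ed + 1:]
    return name

print(abbrev('take (an object) from (a person)'))  # take ... from ...
print(abbrev('stand'))  # no parentheses, unchanged: stand
```

Each iteration removes the leftmost `(...)` span, so nested or multiple spans are handled one at a time until no `(` remains.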
def pack_result(human_detection, result, img_h, img_w):
    """Short summary.

    Args:
        human_detection (np.ndarray): Human detection result.
        result (type): The predicted label of each human proposal.
        img_h (int): The image height.
        img_w (int): The image width.

    Returns:
        tuple: Tuple of human proposal, label name and label score.
    """
    ...

| pack_result | python | open-mmlab/mmaction2 | demo/demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_spatiotemporal_det.py | Apache-2.0 |
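The body of `pack_result` is elided above. A minimal sketch consistent with the docstring — assuming, purely for illustration, that `human_detection` holds normalized `(x1, y1, x2, y2)` boxes and `result` holds per-proposal `(label, score)` pairs — could look like:

```python
import numpy as np

def pack_result_sketch(human_detection, result, img_h, img_w):
    """Sketch only: scale normalized boxes to pixels and pair each box
    with its (label, score) predictions, best score first.
    The input formats are assumptions, not the original code."""
    if result is None:
        return None
    boxes = np.asarray(human_detection, dtype=float).copy()
    boxes[:, 0::2] *= img_w  # x coordinates -> pixels
    boxes[:, 1::2] *= img_h  # y coordinates -> pixels
    packed = []
    for box, preds in zip(boxes, result):
        preds = sorted(preds, key=lambda p: -p[1])  # sort by score, descending
        packed.append((box, [p[0] for p in preds], [p[1] for p in preds]))
    return packed

out = pack_result_sketch(np.array([[0.1, 0.2, 0.5, 0.9]]),
                         [[('stand', 0.7), ('walk', 0.9)]],
                         img_h=100, img_w=200)
print(out[0][1])  # labels sorted by score: ['walk', 'stand']
```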
def visualize(frames, annotations, plate=plate_blue, max_num=5):
    """Visualize frames with predicted annotations.

    Args:
        frames (list[np.ndarray]): Frames for visualization, note that
            len(frames) % len(annotations) should be 0.
        annotations (list[list[tuple]]): The predicted results.
        plate (str): The plate used for visualization. Default: plate_blu...

| visualize | python | open-mmlab/mmaction2 | demo/demo_spatiotemporal_det_onnx.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_spatiotemporal_det_onnx.py | Apache-2.0 |
def load_label_map(file_path):
    """Load Label Map.

    Args:
        file_path (str): The file path of label map.

    Returns:
        dict: The label map (int -> label name).
    """
    lines = open(file_path).readlines()
    lines = [x.strip().split(': ') for x in lines]
    return {int(x[0]): x[1] for x in line...

| load_label_map | python | open-mmlab/mmaction2 | demo/demo_spatiotemporal_det_onnx.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_spatiotemporal_det_onnx.py | Apache-2.0 |
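The return line above is cut off mid-comprehension. A self-contained sketch of the same `id: name` parsing, with the file I/O factored out so it runs standalone:

```python
def load_label_map_sketch(lines):
    """Parse 'id: name' lines into {id: name}.

    Takes an iterable of lines instead of a file path so the parsing
    logic can be tested without touching disk.
    """
    pairs = [ln.strip().split(': ') for ln in lines if ln.strip()]
    return {int(k): v for k, v in pairs}

labels = load_label_map_sketch(['1: bend/bow (at the waist)', '3: crawl'])
print(labels)  # {1: 'bend/bow (at the waist)', 3: 'crawl'}
```

Note that splitting on `': '` (colon plus space) keeps colons inside label names intact only if they are not followed by a space; the AVA label map format uses exactly one `': '` separator per line.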
def abbrev(name):
    """Get the abbreviation of label name:

    'take (an object) from (a person)' -> 'take ... from ...'
    """
    while name.find('(') != -1:
        st, ed = name.find('('), name.find(')')
        name = name[:st] + '...' + name[ed + 1:]
    return name

| abbrev | python | open-mmlab/mmaction2 | demo/demo_spatiotemporal_det_onnx.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_spatiotemporal_det_onnx.py | Apache-2.0 |
def pack_result(human_detection, result, img_h, img_w):
    """Short summary.

    Args:
        human_detection (np.ndarray): Human detection result.
        result (type): The predicted label of each human proposal.
        img_h (int): The image height.
        img_w (int): The image width.

    Returns:
        tuple: Tuple of human proposal, label name and label score.
    """
    ...

| pack_result | python | open-mmlab/mmaction2 | demo/demo_spatiotemporal_det_onnx.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_spatiotemporal_det_onnx.py | Apache-2.0 |
def visualize(args,
              frames,
              annotations,
              pose_data_samples,
              action_result,
              plate=PLATEBLUE,
              max_num=5):
    """Visualize frames with predicted annotations.

    Args:
        frames (list[np.ndarray]): Frames for visualization, note that
            len(frames) % len(annotations) should be 0.
        annotations (list[list[tuple]]): The predicted spatio-temporal
            detection results.
        pose_data_samples (list[lis...

| visualize | python | open-mmlab/mmaction2 | demo/demo_video_structuralize.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_video_structuralize.py | Apache-2.0 |
def load_label_map(file_path):
    """Load Label Map.

    Args:
        file_path (str): The file path of label map.

    Returns:
        dict: The label map (int -> label name).
    """
    lines = open(file_path).readlines()
    lines = [x.strip().split(': ') for x in lines]
    return {int(x[0]): x[1] for x in lin...

| load_label_map | python | open-mmlab/mmaction2 | demo/demo_video_structuralize.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_video_structuralize.py | Apache-2.0 |
def abbrev(name):
    """Get the abbreviation of label name:

    'take (an object) from (a person)' -> 'take ... from ...'
    """
    while name.find('(') != -1:
        st, ed = name.find('('), name.find(')')
        name = name[:st] + '...' + name[ed + 1:]
    return name

| abbrev | python | open-mmlab/mmaction2 | demo/demo_video_structuralize.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_video_structuralize.py | Apache-2.0 |
def pack_result(human_detection, result, img_h, img_w):
    """Short summary.

    Args:
        human_detection (np.ndarray): Human detection result.
        result (type): The predicted label of each human proposal.
        img_h (int): The image height.
        img_w (int): The image width.

    Returns:
        tuple: Tuple of human proposal, label name and label score.
    """
    ...

| pack_result | python | open-mmlab/mmaction2 | demo/demo_video_structuralize.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/demo_video_structuralize.py | Apache-2.0 |
def add_frames(self, idx, frames, processed_frames):
    """Add the clip and corresponding id.

    Args:
        idx (int): the current index of the clip.
        frames (list[ndarray]): list of images in "BGR" format.
        processed_frames (list[ndarray]): list of resized and normalized
            images in "BGR" format.
    """
    ...

| add_frames | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def get_model_inputs(self, device):
    """Convert preprocessed images to MMAction2 STDet model inputs."""
    cur_frames = [self.processed_frames[idx] for idx in self.frames_inds]
    input_array = np.stack(cur_frames).transpose((3, 0, 1, 2))[np.newaxis]
    input_tensor = torch.from_numpy(input_array)...

| get_model_inputs | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
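The `np.stack(...).transpose((3, 0, 1, 2))[np.newaxis]` line converts T frames of shape (H, W, C) into the (1, C, T, H, W) layout that video models consume. The shape bookkeeping can be checked in isolation with NumPy alone:

```python
import numpy as np

# T=8 dummy frames of height 4, width 6, 3 channels (H, W, C)
frames = [np.zeros((4, 6, 3), dtype=np.float32) for _ in range(8)]

stacked = np.stack(frames)                            # (T, H, W, C) = (8, 4, 6, 3)
inputs = stacked.transpose((3, 0, 1, 2))[np.newaxis]  # (1, C, T, H, W)
print(inputs.shape)  # (1, 3, 8, 4, 6)
```

The leading `np.newaxis` adds the batch dimension of size 1, since the webcam demo runs inference one clip at a time.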
def _do_detect(self, image):
    """Get human bboxes with shape [n, 4].

    The format of bboxes is (xmin, ymin, xmax, ymax) in pixels.
    """

| _do_detect | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def _do_detect(self, image):
    """Get bboxes in shape [n, 4] and values in pixels."""
    det_data_sample = inference_detector(self.model, image)
    pred_instance = det_data_sample.pred_instances.cpu().numpy()
    # We only keep human detection bboxes with score larger
    # than `det_score_thr` a...

| _do_detect | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def predict(self, task):
    """Spatio-temporal Action Detection model inference."""
    # No need to do inference if no one is in the keyframe
    if len(task.stdet_bboxes) == 0:
        return task
    with torch.no_grad():
        result = self.model(**task.get_model_inputs(self.device))
        s...

| predict | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def read_fn(self):
    """Main function for read thread.

    Contains three steps:
    1) Read and preprocess (resize + norm) frames from source.
    2) Create task by frames from previous step and buffer.
    3) Put task into read queue.
    """
    was_read = True
    start_time = ti...

| read_fn | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def display_fn(self):
    """Main function for display thread.

    Read data from display queue and display predictions.
    """
    start_time = time.time()
    while not self.stopped:
        # get the state of the read thread
        with self.read_id_lock:
            read_id = self...

| display_fn | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def __next__(self):
    """Get data from read queue.

    This function is part of the main thread.
    """
    if self.read_queue.qsize() == 0:
        time.sleep(0.02)
        return not self.stopped, None
    was_read, task = self.read_queue.get()
    if not was_read:
        # I...

| __next__ | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def start(self):
    """Start read thread and display thread."""
    self.read_thread = threading.Thread(
        target=self.read_fn, args=(), name='VidRead-Thread', daemon=True)
    self.read_thread.start()
    self.display_thread = threading.Thread(
        target=self.display_fn,
        ...

| start | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def clean(self):
    """Close all threads and release all resources."""
    self.stopped = True
    self.read_lock.acquire()
    self.cap.release()
    self.read_lock.release()
    self.output_lock.acquire()
    cv2.destroyAllWindows()
    if self.video_writer:
        self.video_wri...

| clean | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def display(self, task):
    """Add the visualized task to the display queue.

    Args:
        task (TaskInfo object): task object that contains the necessary
            information for prediction visualization.
    """
    with self.display_lock:
        self.display_queue[task.id] = (True, t...

| display | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def get_output_video_writer(self, path):
    """Return a video writer object.

    Args:
        path (str): path to the output video file.
    """
    return cv2.VideoWriter(
        filename=path,
        fourcc=cv2.VideoWriter_fourcc(*'mp4v'),
        fps=float(self.output_fps),
        ...

| get_output_video_writer | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def draw_predictions(self, task):
    """Visualize stdet predictions on raw frames."""
    # read bboxes from task
    bboxes = task.display_bboxes.cpu().numpy()
    # draw predictions and update task
    keyframe_idx = len(task.frames) // 2
    draw_range = [
        keyframe_idx - task.cl...

| draw_predictions | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def draw_clip_range(self, frames, preds, bboxes, draw_range):
    """Draw a range of frames with the same bboxes and predictions."""
    # no predictions to be drawn
    if bboxes is None or len(bboxes) == 0:
        return frames
    # draw frames in `draw_range`
    left_frames = frames[:draw_...

| draw_clip_range | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def abbrev(name):
    """Get the abbreviation of label name:

    'take (an object) from (a person)' -> 'take ... from ...'
    """
    while name.find('(') != -1:
        st, ed = name.find('('), name.find(')')
        name = name[:st] + '...' + name[ed + 1:]
    return name

| abbrev | python | open-mmlab/mmaction2 | demo/webcam_demo_spatiotemporal_det.py | https://github.com/open-mmlab/mmaction2/blob/master/demo/webcam_demo_spatiotemporal_det.py | Apache-2.0 |
def parse_version_info(version_str: str):
    """Parse a version string into a tuple.

    Args:
        version_str (str): The version string.

    Returns:
        tuple[int or str]: The version info, e.g., "1.3.0" is parsed into
            (1, 3, 0), and "2.0.0rc1" is parsed into (2, 0, 0, 'rc1').
    """
    versio...

| parse_version_info | python | open-mmlab/mmaction2 | mmaction/version.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/version.py | Apache-2.0 |
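The body of `parse_version_info` is cut off above. A sketch that reproduces the documented behavior (a regex-based approach, not necessarily the original implementation):

```python
import re

def parse_version_info_sketch(version_str):
    """'1.3.0' -> (1, 3, 0); '2.0.0rc1' -> (2, 0, 0, 'rc1')."""
    info = []
    for part in version_str.split('.'):
        # split a numeric prefix from an optional suffix like 'rc1'
        m = re.fullmatch(r'(\d+)([a-z].*)?', part)
        info.append(int(m.group(1)))
        if m.group(2):
            info.append(m.group(2))
    return tuple(info)

print(parse_version_info_sketch('1.3.0'))     # (1, 3, 0)
print(parse_version_info_sketch('2.0.0rc1'))  # (2, 0, 0, 'rc1')
```

Keeping the numeric components as `int` makes tuples compare correctly (`(1, 10, 0) > (1, 9, 0)`), which string comparison would get wrong.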
def init_recognizer(config: Union[str, Path, mmengine.Config],
checkpoint: Optional[str] = None,
device: Union[str, torch.device] = 'cuda:0') -> nn.Module:
"""Initialize a recognizer from config file.
Args:
config (str or :obj:`Path` or :obj:`mmengine.Config`): C... | Initialize a recognizer from config file.
Args:
config (str or :obj:`Path` or :obj:`mmengine.Config`): Config file
path, :obj:`Path` or the config object.
checkpoint (str, optional): Checkpoint path/url. If set to None,
the model will not load any weights. Defaults to None.
... | init_recognizer | python | open-mmlab/mmaction2 | mmaction/apis/inference.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inference.py | Apache-2.0 |
def inference_recognizer(model: nn.Module,
video: Union[str, dict],
test_pipeline: Optional[Compose] = None
) -> ActionDataSample:
"""Inference a video with the recognizer.
Args:
model (nn.Module): The loaded recognizer.
... | Inference a video with the recognizer.
Args:
model (nn.Module): The loaded recognizer.
video (Union[str, dict]): The video file path or the results
dictionary (the input of pipeline).
test_pipeline (:obj:`Compose`, optional): The test pipeline.
If not specified, the ... | inference_recognizer | python | open-mmlab/mmaction2 | mmaction/apis/inference.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inference.py | Apache-2.0 |
def inference_skeleton(model: nn.Module,
pose_results: List[dict],
img_shape: Tuple[int],
test_pipeline: Optional[Compose] = None
) -> ActionDataSample:
"""Inference a pose results with the skeleton recognizer.
Args:
... | Inference a pose results with the skeleton recognizer.
Args:
model (nn.Module): The loaded recognizer.
pose_results (List[dict]): The pose estimation results dictionary
(the results of `pose_inference`)
img_shape (Tuple[int]): The original image shape used for inference
... | inference_skeleton | python | open-mmlab/mmaction2 | mmaction/apis/inference.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inference.py | Apache-2.0 |
def detection_inference(det_config: Union[str, Path, mmengine.Config,
nn.Module],
det_checkpoint: str,
frame_paths: List[str],
det_score_thr: float = 0.9,
det_cat_id: int = 0,
... | Detect human boxes given frame paths.
Args:
det_config (Union[str, :obj:`Path`, :obj:`mmengine.Config`,
:obj:`torch.nn.Module`]):
Det config file path or Detection model object. It can be
a :obj:`Path`, a config object, or a module object.
det_checkpoint: Checkpo... | detection_inference | python | open-mmlab/mmaction2 | mmaction/apis/inference.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inference.py | Apache-2.0 |
def pose_inference(pose_config: Union[str, Path, mmengine.Config, nn.Module],
pose_checkpoint: str,
frame_paths: List[str],
det_results: List[np.ndarray],
device: Union[str, torch.device] = 'cuda:0') -> tuple:
"""Perform Top-Down pose estim... | Perform Top-Down pose estimation.
Args:
pose_config (Union[str, :obj:`Path`, :obj:`mmengine.Config`,
:obj:`torch.nn.Module`]): Pose config file path or
pose model object. It can be a :obj:`Path`, a config object,
or a module object.
pose_checkpoint: Checkpoint pa... | pose_inference | python | open-mmlab/mmaction2 | mmaction/apis/inference.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inference.py | Apache-2.0 |
def __call__(self,
inputs: InputsType,
return_datasamples: bool = False,
batch_size: int = 1,
return_vis: bool = False,
show: bool = False,
wait_time: int = 0,
draw_pred: bool = True,
... | Call the inferencer.
Args:
inputs (InputsType): Inputs for the inferencer.
return_datasamples (bool): Whether to return results as
:obj:`BaseDataElement`. Defaults to False.
batch_size (int): Inference batch size. Defaults to 1.
show (bool): Wheth... | __call__ | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/actionrecog_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/actionrecog_inferencer.py | Apache-2.0 |
def _inputs_to_list(self, inputs: InputsType) -> list:
    """Preprocess the inputs to a list. The main difference from the
    mmengine version is that we don't list a directory, because the
    input could be a frame folder.

    Preprocess inputs to a list according to its type:

    - list or tuple: return inputs
    - str: return a list containing the string. The st...

| _inputs_to_list | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/actionrecog_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/actionrecog_inferencer.py | Apache-2.0 |
def visualize(
self,
inputs: InputsType,
preds: PredType,
return_vis: bool = False,
show: bool = False,
wait_time: int = 0,
draw_pred: bool = True,
fps: int = 30,
out_type: str = 'video',
target_resolution: Optional[Tuple[int]] = None,
... | Visualize predictions.
Args:
inputs (List[Union[str, np.ndarray]]): Inputs for the inferencer.
preds (List[Dict]): Predictions of the model.
return_vis (bool): Whether to return the visualization result.
Defaults to False.
show (bool): Whether to ... | visualize | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/actionrecog_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/actionrecog_inferencer.py | Apache-2.0 |
def postprocess(
self,
preds: PredType,
visualization: Optional[List[np.ndarray]] = None,
return_datasample: bool = False,
print_result: bool = False,
pred_out_file: str = '',
) -> Union[ResType, Tuple[ResType, np.ndarray]]:
"""Process the predictions and visu... | Process the predictions and visualization results from ``forward``
and ``visualize``.
This method should be responsible for the following tasks:
1. Convert datasamples into a json-serializable dict if needed.
2. Pack the predictions and visualization results and return them.
3.... | postprocess | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/actionrecog_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/actionrecog_inferencer.py | Apache-2.0 |
def pred2dict(self, data_sample: ActionDataSample) -> Dict:
"""Extract elements necessary to represent a prediction into a
dictionary. It's better to contain only basic data elements such as
strings and numbers in order to guarantee it's json-serializable.
Args:
data_sample ... | Extract elements necessary to represent a prediction into a
dictionary. It's better to contain only basic data elements such as
strings and numbers in order to guarantee it's json-serializable.
Args:
data_sample (ActionDataSample): The data sample to be converted.
Returns:
... | pred2dict | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/actionrecog_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/actionrecog_inferencer.py | Apache-2.0 |
def forward(self, inputs: InputType, batch_size: int,
**forward_kwargs) -> PredType:
"""Forward the inputs to the model.
Args:
inputs (InputsType): The inputs to be forwarded.
batch_size (int): Batch size. Defaults to 1.
Returns:
Dict: The pr... | Forward the inputs to the model.
Args:
inputs (InputsType): The inputs to be forwarded.
batch_size (int): Batch size. Defaults to 1.
Returns:
Dict: The prediction results. Possibly with keys "rec".
| forward | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/mmaction2_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/mmaction2_inferencer.py | Apache-2.0 |
def visualize(self, inputs: InputsType, preds: PredType,
**kwargs) -> List[np.ndarray]:
"""Visualize predictions.
Args:
inputs (List[Union[str, np.ndarray]]): Inputs for the inferencer.
preds (List[Dict]): Predictions of the model.
show (bool): Whet... | Visualize predictions.
Args:
inputs (List[Union[str, np.ndarray]]): Inputs for the inferencer.
preds (List[Dict]): Predictions of the model.
show (bool): Whether to display the image in a popup window.
Defaults to False.
wait_time (float): The int... | visualize | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/mmaction2_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/mmaction2_inferencer.py | Apache-2.0 |
def __call__(
self,
inputs: InputsType,
batch_size: int = 1,
**kwargs,
) -> dict:
"""Call the inferencer.
Args:
inputs (InputsType): Inputs for the inferencer. It can be a path
to image / image directory, or an array, or a list of these.
... | Call the inferencer.
Args:
inputs (InputsType): Inputs for the inferencer. It can be a path
to image / image directory, or an array, or a list of these.
return_datasamples (bool): Whether to return results as
:obj:`BaseDataElement`. Defaults to False.
... | __call__ | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/mmaction2_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/mmaction2_inferencer.py | Apache-2.0 |
def _inputs_to_list(self, inputs: InputsType) -> list:
    """Preprocess the inputs to a list. The main difference from the
    mmengine version is that we don't list a directory, because the
    input could be a frame folder.

    Preprocess inputs to a list according to its type:

    - list or tuple: return inputs
    - str: return a list containing the string. The st...

| _inputs_to_list | python | open-mmlab/mmaction2 | mmaction/apis/inferencers/mmaction2_inferencer.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inferencers/mmaction2_inferencer.py | Apache-2.0 |
def load_data_list(self) -> List[dict]:
"""Load annotation file to get video information."""
exists(self.ann_file)
data_list = []
anno_database = mmengine.load(self.ann_file)
for video_name in anno_database:
video_info = anno_database[video_name]
feature_p... | Load annotation file to get video information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/activitynet_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/activitynet_dataset.py | Apache-2.0 |
def load_data_list(self) -> List[Dict]:
"""Load annotation file to get audio information."""
check_file_exist(self.ann_file)
data_list = []
with open(self.ann_file, 'r') as fin:
for line in fin:
line_split = line.strip().split()
video_info = {}... | Load annotation file to get audio information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/audio_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/audio_dataset.py | Apache-2.0 |
def parse_img_record(self, img_records: List[dict]) -> tuple:
    """Merge image records of the same entity at the same time.

    Args:
        img_records (List[dict]): List of img_records (lines in AVA
            annotations).

    Returns:
        Tuple(list): A tuple consisting of lists of bboxes, action labels
            and entity_ids.
    """
    ...

| parse_img_record | python | open-mmlab/mmaction2 | mmaction/datasets/ava_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/ava_dataset.py | Apache-2.0 |
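The merging step the docstring describes — collapsing per-line AVA records into one record per entity — can be sketched with a `defaultdict`. The record keys below (`entity_id`, `entity_box`, `label`) are assumptions for illustration, not the dataset's actual field names:

```python
from collections import defaultdict

def merge_by_entity(img_records):
    """Sketch: group per-line records by entity_id, collecting all action
    labels for each entity and keeping one bbox per entity."""
    by_entity = defaultdict(lambda: {'bbox': None, 'labels': set()})
    for rec in img_records:  # assumed keys: entity_id, entity_box, label
        ent = by_entity[rec['entity_id']]
        ent['bbox'] = rec['entity_box']
        ent['labels'].add(rec['label'])
    bboxes = [v['bbox'] for v in by_entity.values()]
    labels = [sorted(v['labels']) for v in by_entity.values()]
    entity_ids = list(by_entity.keys())
    return bboxes, labels, entity_ids

records = [
    {'entity_id': 0, 'entity_box': [10, 10, 50, 80], 'label': 11},
    {'entity_id': 0, 'entity_box': [10, 10, 50, 80], 'label': 79},
    {'entity_id': 1, 'entity_box': [60, 12, 90, 85], 'label': 11},
]
bboxes, labels, entity_ids = merge_by_entity(records)
print(labels)  # [[11, 79], [11]] -- entity 0 has two action labels
```

AVA is multi-label per person, which is why one entity accumulates a set of labels rather than a single class.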
def parse_img_record(self, img_records: List[dict]) -> tuple:
    """Merge image records of the same entity at the same time.

    Args:
        img_records (List[dict]): List of img_records (lines in AVA
            annotations).

    Returns:
        Tuple(list): A tuple consisting of lists of bboxes, action labels
            and entity_ids.
    """
    ...

| parse_img_record | python | open-mmlab/mmaction2 | mmaction/datasets/ava_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/ava_dataset.py | Apache-2.0 |
def load_data_list(self) -> List[dict]:
"""Load annotation file to get video information."""
exists(self.ann_file)
data_list = []
with open(self.ann_file) as f:
anno_database = f.readlines()
for item in anno_database:
first_part, query_sentence = item.str... | Load annotation file to get video information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/charades_sta_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/charades_sta_dataset.py | Apache-2.0 |
def load_data_list(self) -> List[Dict]:
"""Load annotation file to get video information."""
exists(self.ann_file)
data_list = []
with open(self.ann_file) as f:
data_lines = json.load(f)
for data in data_lines:
answers = data['answer']
... | Load annotation file to get video information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/msrvtt_datasets.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/msrvtt_datasets.py | Apache-2.0 |
def load_data_list(self) -> List[Dict]:
"""Load annotation file to get video information."""
exists(self.ann_file)
data_list = []
with open(self.ann_file) as f:
data_lines = json.load(f)
for data in data_lines:
data_item = dict(
... | Load annotation file to get video information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/msrvtt_datasets.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/msrvtt_datasets.py | Apache-2.0 |
def load_data_list(self) -> List[Dict]:
"""Load annotation file to get video information."""
exists(self.ann_file)
data_list = []
with open(self.ann_file) as f:
data_lines = json.load(f)
video_idx = 0
text_idx = 0
for data in data_lines:
... | Load annotation file to get video information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/msrvtt_datasets.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/msrvtt_datasets.py | Apache-2.0 |
def load_data_list(self) -> List[Dict]:
    """Load annotation file to get skeleton information."""
    assert self.ann_file.endswith('.pkl')
    mmengine.exists(self.ann_file)
    data_list = mmengine.load(self.ann_file)
    if self.split is not None:
        split, annos = data_list['split'],...

| load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/pose_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/pose_dataset.py | Apache-2.0 |
def load_data_list(self) -> List[dict]:
"""Load annotation file to get video information."""
exists(self.ann_file)
data_list = []
fin = list_from_file(self.ann_file)
for line in fin:
line_split = line.strip().split()
video_info = {}
idx = 0
... | Load annotation file to get video information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/rawframe_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/rawframe_dataset.py | Apache-2.0 |
def get_type(transform: Union[dict, Callable]) -> str:
    """Get the type of the transform."""
    if isinstance(transform, dict) and 'type' in transform:
        return transform['type']
    elif callable(transform):
        return transform.__repr__().split('(')[0]
    else:
        raise TypeError

| get_type | python | open-mmlab/mmaction2 | mmaction/datasets/repeat_aug_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/repeat_aug_dataset.py | Apache-2.0 |
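`get_type` is shown in full, so both branches can be demonstrated: a config-style dict and a callable transform. The `DecordInit` class below is a stand-in defined only for this example:

```python
from typing import Callable, Union

def get_type(transform: Union[dict, Callable]) -> str:
    """Get the type name of a transform (dict config or callable)."""
    if isinstance(transform, dict) and 'type' in transform:
        return transform['type']
    elif callable(transform):
        # fall back to the repr, e.g. 'DecordInit(io_backend=disk)' -> 'DecordInit'
        return transform.__repr__().split('(')[0]
    else:
        raise TypeError

class DecordInit:  # stand-in transform, defined for this example only
    def __call__(self, results):
        return results
    def __repr__(self):
        return 'DecordInit(io_backend=disk)'

print(get_type(dict(type='DecordInit')))  # DecordInit
print(get_type(DecordInit()))             # DecordInit
```

The repr-based branch relies on transforms following the common `Name(arg=..., ...)` repr convention; a callable with a different repr would return that repr unchanged.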
def prepare_data(self, idx) -> List[dict]:
"""Get data processed by ``self.pipeline``.
Reduce the video loading and decompressing.
Args:
idx (int): The index of ``data_info``.
Returns:
List[dict]: A list of length num_repeats.
"""
transforms = sel... | Get data processed by ``self.pipeline``.
Reduce the video loading and decompressing.
Args:
idx (int): The index of ``data_info``.
Returns:
List[dict]: A list of length num_repeats.
| prepare_data | python | open-mmlab/mmaction2 | mmaction/datasets/repeat_aug_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/repeat_aug_dataset.py | Apache-2.0 |
def load_data_list(self) -> List[dict]:
    """Load annotation file to get video information."""
    exists(self.ann_file)
    data_list = []
    fin = list_from_file(self.ann_file)
    for line in fin:
        line_split = line.strip().split(self.delimiter)
        if self.multi_class:
            ...

| load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/video_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/video_dataset.py | Apache-2.0 |
def load_data_list(self) -> List[Dict]:
"""Load annotation file to get video information."""
exists(self.ann_file)
data_list = []
with open(self.ann_file) as f:
video_dict = json.load(f)
for filename, texts in video_dict.items():
filename = osp.jo... | Load annotation file to get video information. | load_data_list | python | open-mmlab/mmaction2 | mmaction/datasets/video_text_dataset.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/video_text_dataset.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""The transform function of :class:`PackActionInputs`.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
"""
packed_results = dict()
if self.collect_keys is not None:
p... | The transform function of :class:`PackActionInputs`.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/formatting.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/formatting.py | Apache-2.0 |
def transform(self, results):
"""Method to pack the input data.
Args:
results (dict): Result dict from the data pipeline.
Returns:
dict:
- 'inputs' (obj:`torch.Tensor`): The forward data of models.
- 'data_samples' (obj:`DetDataSample`): The ann... | Method to pack the input data.
Args:
results (dict): Result dict from the data pipeline.
Returns:
dict:
- 'inputs' (obj:`torch.Tensor`): The forward data of models.
- 'data_samples' (obj:`DetDataSample`): The annotation info of the
sampl... | transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/formatting.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/formatting.py | Apache-2.0 |
def transform(self, results):
"""Performs the Transpose formatting.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
for key in self.keys:
results[key] = results[key].transpose(self.order)
... | Performs the Transpose formatting.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/formatting.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/formatting.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Performs the FormatShape formatting.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
if not isinstance(results['imgs'], np.ndarray):
results['... | Performs the FormatShape formatting.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/formatting.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/formatting.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Performs the FormatShape formatting.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
audios = results['audios']
# clip x sample x freq -> clip x c... | Performs the FormatShape formatting.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/formatting.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/formatting.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""The transform function of :class:`FormatGCNInput`.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
"""
keypoint = results['keypoint']
if 'keypoint_score' in results:
... | The transform function of :class:`FormatGCNInput`.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/formatting.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/formatting.py | Apache-2.0 |
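Skeleton models behind `FormatGCNInput` expect a fixed person axis, so clips with fewer tracked people must be padded. A hedged sketch of that padding step, assuming a `(M, T, V, C)` keypoint layout and an illustrative `num_person` parameter:

```python
# Sketch of person-axis padding for GCN input:
# zero-pad (or truncate) the person dimension to a fixed size.
import numpy as np

def pad_persons(keypoint: np.ndarray, num_person: int = 2) -> np.ndarray:
    """keypoint: (M, T, V, C) -> (num_person, T, V, C)."""
    m, t, v, c = keypoint.shape
    if m < num_person:
        # missing persons become all-zero skeletons
        pad = np.zeros((num_person - m, t, v, c), dtype=keypoint.dtype)
        keypoint = np.concatenate([keypoint, pad], axis=0)
    return keypoint[:num_person]

kp = np.ones((1, 16, 17, 2), dtype=np.float32)  # one person, 16 frames
print(pad_persons(kp).shape)  # (2, 16, 17, 2)
```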
def transform(self, results: dict) -> dict:
"""Functions to load image.
Args:
results (dict): Result dict from :obj:``mmcv.BaseDataset``.
Returns:
dict: The dict contains loaded image and meta information.
"""
filename = results['img_path']
try:... | Functions to load image.
Args:
results (dict): Result dict from :obj:``mmcv.BaseDataset``.
Returns:
dict: The dict contains loaded image and meta information.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Convert the label dictionary to 3 tensors: "label", "mask" and
"category_mask".
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
if not self.hvu_initialized:
... | Convert the label dictionary to 3 tensors: "label", "mask" and
"category_mask".
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def _get_train_clips(self, num_frames: int,
ori_clip_len: float) -> np.array:
"""Get clip offsets in train mode.
It will calculate the average interval for selected frames,
and randomly shift them within offsets between [0, avg_interval].
If the total number of ... | Get clip offsets in train mode.
It will calculate the average interval for selected frames,
and randomly shift them within offsets between [0, avg_interval].
        If the total number of frames is smaller than the number of clips or
        the original clip length, it will return all-zero indices.
Args:
... | _get_train_clips | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
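The averaged-interval strategy described in the docstring above can be sketched in a few lines: base offsets spaced by the average interval, each jittered by a random shift in [0, avg_interval). This is an illustrative reconstruction, not the repo's exact code, and `train_clip_offsets` is a hypothetical name.

```python
# Hedged sketch of train-mode clip offsets with an average interval
# and per-clip random jitter.
import numpy as np

def train_clip_offsets(num_frames, ori_clip_len, num_clips, rng=np.random):
    avg_interval = (num_frames - ori_clip_len + 1) // num_clips
    if avg_interval > 0:
        base = np.arange(num_clips) * avg_interval
        # random shift inside each interval
        return base + rng.randint(avg_interval, size=num_clips)
    # too few frames: fall back to all-zero offsets, as the docstring notes
    return np.zeros(num_clips, dtype=np.int64)

offs = train_clip_offsets(250, ori_clip_len=32, num_clips=4)
```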
def _get_test_clips(self, num_frames: int,
ori_clip_len: float) -> np.array:
"""Get clip offsets in test mode.
If the total number of frames is
not enough, it will return all zero indices.
Args:
num_frames (int): Total number of frame in the video.
... | Get clip offsets in test mode.
If the total number of frames is
not enough, it will return all zero indices.
Args:
num_frames (int): Total number of frame in the video.
ori_clip_len (float): length of original sample clip.
Returns:
np.ndarray: Sampl... | _get_test_clips | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def _sample_clips(self, num_frames: int, ori_clip_len: float) -> np.array:
"""Choose clip offsets for the video in a given mode.
Args:
num_frames (int): Total number of frame in the video.
Returns:
np.ndarray: Sampled frame indices.
"""
if self.test_mode... | Choose clip offsets for the video in a given mode.
Args:
num_frames (int): Total number of frame in the video.
Returns:
np.ndarray: Sampled frame indices.
| _sample_clips | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def _get_ori_clip_len(self, fps_scale_ratio: float) -> float:
"""calculate length of clip segment for different strategy.
Args:
fps_scale_ratio (float): Scale ratio to adjust fps.
"""
if self.target_fps is not None:
# align test sample strategy with `PySlowFast` ... | calculate length of clip segment for different strategy.
Args:
fps_scale_ratio (float): Scale ratio to adjust fps.
| _get_ori_clip_len | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
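The fps-alignment idea above can be stated directly: when a target fps is set, the source clip span is stretched by `fps_scale_ratio = video_fps / target_fps`, so the sampled window covers the same wall-clock duration regardless of the source frame rate. A minimal sketch (illustrative function name):

```python
# Hedged sketch of the fps-aligned original clip length.
def ori_clip_len(clip_len, frame_interval, fps_scale_ratio=None):
    if fps_scale_ratio is not None:
        # target_fps path: stretch the window by the fps ratio
        return clip_len * frame_interval * fps_scale_ratio
    # plain frame-count path
    return clip_len * frame_interval

# 32 frames at stride 2 from a 60 fps video, targeting 30 fps
print(ori_clip_len(32, 2, fps_scale_ratio=60 / 30))  # 128.0
```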
def transform(self, results: dict) -> dict:
"""Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
total_frames = results['total_frames']
# if can't get fps, same ... | Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def _get_sample_clips(self, num_frames: int) -> np.ndarray:
"""To sample an n-frame clip from the video. UniformSample basically
divides the video into n segments of equal length and randomly samples
one frame from each segment. When the duration of video frames is
shorter than the desir... | To sample an n-frame clip from the video. UniformSample basically
divides the video into n segments of equal length and randomly samples
one frame from each segment. When the duration of video frames is
shorter than the desired length of the target clip, this approach will
duplicate the ... | _get_sample_clips | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
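The segment-and-pick strategy the UniformSample docstring describes is easy to sketch: split the video into `clip_len` equal segments and take one frame from each. The version below picks segment midpoints (a deterministic, test-mode-style variant under assumed behavior; training would draw a random offset inside each segment instead).

```python
# Hedged sketch of uniform segment sampling.
import numpy as np

def uniform_sample(num_frames: int, clip_len: int) -> np.ndarray:
    """Return `clip_len` frame indices evenly covering the video."""
    seg_size = num_frames / clip_len
    # midpoint of each of the clip_len segments
    inds = np.floor((np.arange(clip_len) + 0.5) * seg_size).astype(np.int64)
    # short videos: indices repeat, effectively duplicating frames
    return np.clip(inds, 0, num_frames - 1)

print(uniform_sample(100, 8))
print(uniform_sample(3, 8))  # shorter than clip_len -> duplicated frames
```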
def transform(self, results: Dict) -> Dict:
"""Perform the Uniform Sampling.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
"""
num_frames = results['total_frames']
inds = self._get_sample_clips(num_frames)
start_... | Perform the Uniform Sampling.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
total_frames = results['total_frames']
start_index = results['start_index']
... | Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def _get_train_clips(self, num_frames: int) -> np.array:
"""Get clip offsets by dense sample strategy in train mode.
It will calculate a sample position and sample interval and set
start index 0 when sample_pos == 1 or randomly choose from
[0, sample_pos - 1]. Then it will shift the sta... | Get clip offsets by dense sample strategy in train mode.
It will calculate a sample position and sample interval and set
start index 0 when sample_pos == 1 or randomly choose from
[0, sample_pos - 1]. Then it will shift the start index by each
base offset.
Args:
num... | _get_train_clips | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def _get_test_clips(self, num_frames: int) -> np.array:
"""Get clip offsets by dense sample strategy in test mode.
It will calculate a sample position and sample interval and evenly
sample several start indexes as start positions between
[0, sample_position-1]. Then it will shift each s... | Get clip offsets by dense sample strategy in test mode.
It will calculate a sample position and sample interval and evenly
sample several start indexes as start positions between
[0, sample_position-1]. Then it will shift each start index by the
base offsets.
Args:
... | _get_test_clips | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def _sample_clips(self, num_frames: int) -> np.array:
"""Choose clip offsets for the video in a given mode.
Args:
num_frames (int): Total number of frame in the video.
Returns:
np.ndarray: Sampled frame indices.
"""
if self.test_mode:
clip_of... | Choose clip offsets for the video in a given mode.
Args:
num_frames (int): Total number of frame in the video.
Returns:
np.ndarray: Sampled frame indices.
| _sample_clips | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
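The dense test-mode strategy described above spreads clip start positions evenly over `[0, sample_position - 1]` and shifts each by fixed base offsets. A hedged sketch under assumed parameter names (`sample_range`, `num_sample_positions` are illustrative):

```python
# Hedged sketch of dense sampling in test mode.
import numpy as np

def dense_test_clips(num_frames, sample_range=64, num_clips=1,
                     num_sample_positions=10):
    # highest valid start so the sample_range window fits
    sample_position = max(1, 1 + num_frames - sample_range)
    interval = sample_range // num_clips
    base_offsets = np.arange(num_clips) * interval
    # evenly spaced start positions over [0, sample_position - 1]
    starts = np.linspace(0, sample_position - 1,
                         num=num_sample_positions, dtype=np.int64)
    return np.concatenate([(base_offsets + s) % num_frames for s in starts])

offs = dense_test_clips(300, sample_range=64, num_clips=1)
```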
def transform(self, results: dict) -> dict:
"""Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
total_frames = results['total_frames']
clip_offsets = self._sam... | Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
fps = results['fps']
timestamp = results['timestamp']
timestamp_start ... | Perform the SampleFrames loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the PyAV initialization.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
try:
import av
except ImportError:
raise ImportError('P... | Perform the PyAV initialization.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the PyAV decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
container = results['video_reader']
imgs = list()
if self.multi_thread:
... | Perform the PyAV decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the PIMS initialization.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
try:
import pims
except ImportError:
raise ImportError(... | Perform the PIMS initialization.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the PIMS decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
container = results['video_reader']
if results['frame_inds'].ndim != 1:
... | Perform the PIMS decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the PyAV motion vector decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
container = results['video_reader']
imgs = list()
if self.mult... | Perform the PyAV motion vector decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Perform the Decord initialization.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
"""
container = self._get_video_reader(results['filename'])
results['total_frames'] = len(con... | Perform the Decord initialization.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Perform the Decord decoding.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
"""
container = results['video_reader']
if results['frame_inds'].ndim != 1:
results['f... | Perform the Decord decoding.
Args:
results (dict): The result dict.
Returns:
dict: The result dict.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: dict) -> dict:
"""Perform the OpenCV initialization.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
if self.io_backend == 'disk':
new_path = results['filename'... | Perform the OpenCV initialization.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: dict) -> dict:
"""Perform the OpenCV decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
container = results['video_reader']
imgs = list()
if results... | Perform the OpenCV decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: dict) -> dict:
"""Perform the ``RawFrameDecode`` to pick frames given indices.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
mmcv.use_backend(self.decoding_backend)
... | Perform the ``RawFrameDecode`` to pick frames given indices.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the ``RawFrameDecode`` to pick frames given indices.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
modality = results['modality']
array = results['ar... | Perform the ``RawFrameDecode`` to pick frames given indices.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the ``ImageDecode`` to load image given the file path.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
mmcv.use_backend(self.decoding_backend)
filename... | Perform the ``ImageDecode`` to load image given the file path.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Perform the numpy loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
if osp.exists(results['audio_path']):
feature_map = np.load(results... | Perform the numpy loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the building of pseudo clips.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
# the input should be one single image
assert len(results['imgs']) == 1
... | Perform the building of pseudo clips.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Perform the ``AudioFeatureSelector`` to pick audio feature clips.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
audio = results['audios']
frame_... | Perform the ``AudioFeatureSelector`` to pick audio feature clips.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the LoadLocalizationFeature loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
data_path = results['feature_path']
raw_feature = np.loadtxt(
... | Perform the LoadLocalizationFeature loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the GenerateLocalizationLabels loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
video_frame = results['duration_frame']
video_second = results['... | Perform the GenerateLocalizationLabels loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results):
"""Perform the LoadProposals loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
video_name = results['video_name']
proposal_path = osp.join(self.pgm_proposal... | Perform the LoadProposals loading.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/loading.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/loading.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Perform the pose decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
required_keys = ['total_frames', 'frame_inds', 'keypoint']
for k in req... | Perform the pose decoding.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/pose_transforms.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/pose_transforms.py | Apache-2.0 |
def generate_a_heatmap(self, arr: np.ndarray, centers: np.ndarray,
max_values: np.ndarray) -> None:
"""Generate pseudo heatmap for one keypoint in one frame.
Args:
arr (np.ndarray): The array to store the generated heatmaps.
Shape: img_h * img_w.
... | Generate pseudo heatmap for one keypoint in one frame.
Args:
arr (np.ndarray): The array to store the generated heatmaps.
Shape: img_h * img_w.
centers (np.ndarray): The coordinates of corresponding keypoints
(of multiple persons). Shape: M * 2.
... | generate_a_heatmap | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/pose_transforms.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/pose_transforms.py | Apache-2.0 |
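The per-keypoint heatmap generation described above amounts to placing a 2D Gaussian `exp(-d^2 / (2 * sigma^2))` at each keypoint center, scaling by that keypoint's confidence, and max-pooling across persons. A hedged, vectorized sketch (function name and `sigma` default are illustrative):

```python
# Hedged sketch of Gaussian pseudo-heatmap generation for one keypoint
# across multiple persons in one frame.
import numpy as np

def gaussian_heatmap(img_h, img_w, centers, max_values, sigma=0.6):
    arr = np.zeros((img_h, img_w), dtype=np.float32)
    y = np.arange(img_h, dtype=np.float32)[:, None]
    x = np.arange(img_w, dtype=np.float32)[None, :]
    for (cx, cy), val in zip(centers, max_values):
        # 2D Gaussian centered at (cx, cy), scaled by confidence
        patch = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        arr = np.maximum(arr, patch * val)  # keep the strongest response
    return arr

hm = gaussian_heatmap(8, 8, centers=[(2.0, 3.0)], max_values=[1.0])
```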
def generate_a_limb_heatmap(self, arr: np.ndarray, starts: np.ndarray,
ends: np.ndarray, start_values: np.ndarray,
end_values: np.ndarray) -> None:
"""Generate pseudo heatmap for one limb in one frame.
Args:
arr (np.ndarray): T... | Generate pseudo heatmap for one limb in one frame.
Args:
arr (np.ndarray): The array to store the generated heatmaps.
Shape: img_h * img_w.
starts (np.ndarray): The coordinates of one keypoint in the
corresponding limbs. Shape: M * 2.
ends (np... | generate_a_limb_heatmap | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/pose_transforms.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/pose_transforms.py | Apache-2.0 |
def generate_heatmap(self, arr: np.ndarray, kps: np.ndarray,
max_values: np.ndarray) -> None:
"""Generate pseudo heatmap for all keypoints and limbs in one frame (if
needed).
Args:
arr (np.ndarray): The array to store the generated heatmaps.
... | Generate pseudo heatmap for all keypoints and limbs in one frame (if
needed).
Args:
arr (np.ndarray): The array to store the generated heatmaps.
Shape: V * img_h * img_w.
kps (np.ndarray): The coordinates of keypoints in this frame.
Shape: M * V *... | generate_heatmap | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/pose_transforms.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/pose_transforms.py | Apache-2.0 |
def gen_an_aug(self, results: Dict) -> np.ndarray:
"""Generate pseudo heatmaps for all frames.
Args:
results (dict): The dictionary that contains all info of a sample.
Returns:
np.ndarray: The generated pseudo heatmaps.
"""
all_kps = results['keypoint']... | Generate pseudo heatmaps for all frames.
Args:
results (dict): The dictionary that contains all info of a sample.
Returns:
np.ndarray: The generated pseudo heatmaps.
| gen_an_aug | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/pose_transforms.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/pose_transforms.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Generate pseudo heatmaps based on joint coordinates and confidence.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
heatmap = self.gen_an_aug(results)
... | Generate pseudo heatmaps based on joint coordinates and confidence.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/pose_transforms.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/pose_transforms.py | Apache-2.0 |
def transform(self, results: Dict) -> Dict:
"""Convert the coordinates of keypoints to make it more compact.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
"""
img_shape = results['img_shape']
h, ... | Convert the coordinates of keypoints to make it more compact.
Args:
results (dict): The resulting dict to be modified and passed
to the next transform in pipeline.
| transform | python | open-mmlab/mmaction2 | mmaction/datasets/transforms/pose_transforms.py | https://github.com/open-mmlab/mmaction2/blob/master/mmaction/datasets/transforms/pose_transforms.py | Apache-2.0 |
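The compaction idea in the entry above can be sketched as: find the bounding box of all keypoints, pad it, clamp it to the image, then shift coordinates so the padded box becomes the new (smaller) frame. This is an illustrative reconstruction with assumed names, not the transform's actual implementation.

```python
# Hedged sketch of keypoint compaction: crop to the padded keypoint bbox.
import numpy as np

def compact_pose(keypoint, img_shape, padding=0.25):
    """keypoint: (..., 2) in (x, y); returns shifted kps and new shape."""
    h, w = img_shape
    kp = keypoint.reshape(-1, 2)
    min_x, min_y = kp.min(axis=0)
    max_x, max_y = kp.max(axis=0)
    # pad the bbox by a fraction of its size, clamped to the image
    pad_x = (max_x - min_x) * padding
    pad_y = (max_y - min_y) * padding
    min_x = max(0.0, min_x - pad_x)
    min_y = max(0.0, min_y - pad_y)
    max_x = min(float(w), max_x + pad_x)
    max_y = min(float(h), max_y + pad_y)
    # shift coordinates into the cropped frame
    new_kp = keypoint - np.array([min_x, min_y])
    new_shape = (int(max_y - min_y), int(max_x - min_x))
    return new_kp, new_shape

kps = np.array([[[40.0, 60.0], [80.0, 120.0]]])  # one person, two joints
new_kps, shape = compact_pose(kps, img_shape=(480, 640))
```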