| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward_with_coords(
self, coords_input: torch.Tensor, image_size: Tuple[int, int]
) -> torch.Tensor:
"""Positionally encode points that are not normalized to [0,1]."""
coords = coords_input.clone()
coords[:, :, 0] = coords[:, :, 0] / image_size[1]
coords[:, :, 1] = coord... | Positionally encode points that are not normalized to [0,1]. | forward_with_coords | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/position_encoding.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/position_encoding.py | Apache-2.0 |
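The visible lines of `forward_with_coords` divide x coordinates by the image width (`image_size[1]`) and y coordinates by the height before encoding. A framework-free sketch of just that normalization step (plain Python lists rather than the original torch tensors):

```python
def normalize_coords(coords, image_size):
    """Map absolute (x, y) pixel coordinates into [0, 1] given (H, W)."""
    h, w = image_size
    return [(x / w, y / h) for (x, y) in coords]
```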
def _build_sam_heads(self):
"""Build SAM-style prompt encoder and mask decoder."""
self.sam_prompt_embed_dim = self.hidden_dim
self.sam_image_embedding_size = self.image_size // self.backbone_stride
# build PromptEncoder and MaskDecoder from SAM
# (their hyperparameters like `ma... | Build SAM-style prompt encoder and mask decoder. | _build_sam_heads | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def _forward_sam_heads(
self,
backbone_features,
point_inputs=None,
mask_inputs=None,
high_res_features=None,
multimask_output=False,
):
"""
Forward SAM prompt encoders and mask heads.
Inputs:
- backbone_features: image features of [B,... |
Forward SAM prompt encoders and mask heads.
Inputs:
- backbone_features: image features of [B, C, H, W] shape
- point_inputs: a dictionary with "point_coords" and "point_labels", where
1) "point_coords" has [B, P, 2] shape and float32 dtype and contains the
absol... | _forward_sam_heads | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def _use_mask_as_output(self, backbone_features, high_res_features, mask_inputs):
"""
        Directly turn binary `mask_inputs` into output mask logits without using SAM.
(same input and output shapes as in _forward_sam_heads above).
"""
# Use -10/+10 as logits for neg/pos pixels (ver... |
        Directly turn binary `mask_inputs` into output mask logits without using SAM.
(same input and output shapes as in _forward_sam_heads above).
| _use_mask_as_output | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def forward_image(self, img_batch: torch.Tensor):
"""Get the image feature on the input batch."""
backbone_out = self.image_encoder(img_batch)
if self.use_high_res_features_in_sam:
# precompute projected level 0 and level 1 features in SAM decoder
# to avoid running it ag... | Get the image feature on the input batch. | forward_image | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def _prepare_memory_conditioned_features(
self,
frame_idx,
is_init_cond_frame,
current_vision_feats,
current_vision_pos_embeds,
feat_sizes,
output_dict,
num_frames,
track_in_reverse=False, # tracking in reverse time order (for demo usage)
):
... | Fuse the current frame's visual feature map with previous memory. | _prepare_memory_conditioned_features | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def _encode_new_memory(
self,
current_vision_feats,
feat_sizes,
pred_masks_high_res,
is_mask_from_pts,
):
"""Encode the current image and its prediction into a memory feature."""
B = current_vision_feats[-1].size(1) # batch size on this frame
C = self... | Encode the current image and its prediction into a memory feature. | _encode_new_memory | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def _use_multimask(self, is_init_cond_frame, point_inputs):
"""Whether to use multimask output in the SAM head."""
num_pts = 0 if point_inputs is None else point_inputs["point_labels"].size(1)
multimask_output = (
self.multimask_output_in_sam
and (is_init_cond_frame or se... | Whether to use multimask output in the SAM head. | _use_multimask | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def _apply_non_overlapping_constraints(self, pred_masks):
"""
Apply non-overlapping constraints to the object scores in pred_masks. Here we
keep only the highest scoring object at each spatial location in pred_masks.
"""
batch_size = pred_masks.size(0)
if batch_size == 1:... |
Apply non-overlapping constraints to the object scores in pred_masks. Here we
keep only the highest scoring object at each spatial location in pred_masks.
| _apply_non_overlapping_constraints | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_base.py | Apache-2.0 |
def select_closest_cond_frames(frame_idx, cond_frame_outputs, max_cond_frame_num):
"""
Select up to `max_cond_frame_num` conditioning frames from `cond_frame_outputs`
that are temporally closest to the current frame at `frame_idx`. Here, we take
- a) the closest conditioning frame before `frame_idx` (if... |
Select up to `max_cond_frame_num` conditioning frames from `cond_frame_outputs`
that are temporally closest to the current frame at `frame_idx`. Here, we take
- a) the closest conditioning frame before `frame_idx` (if any);
- b) the closest conditioning frame after `frame_idx` (if any);
- c) any ot... | select_closest_cond_frames | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_utils.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_utils.py | Apache-2.0 |
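The selection rule (closest conditioning frame before `frame_idx`, closest at or after it, then fill remaining slots with the temporally nearest others) can be sketched in plain Python. This is an illustrative reimplementation, not the repo's exact code; keys are frame indices, values are the cached outputs:

```python
def select_closest_cond_frames(frame_idx, cond_frame_outputs, max_cond_frame_num):
    if max_cond_frame_num == -1 or len(cond_frame_outputs) <= max_cond_frame_num:
        return dict(cond_frame_outputs), {}
    selected = {}
    # a) closest conditioning frame before frame_idx, if any
    idx_before = max((t for t in cond_frame_outputs if t < frame_idx), default=None)
    if idx_before is not None:
        selected[idx_before] = cond_frame_outputs[idx_before]
    # b) closest conditioning frame at or after frame_idx, if any
    idx_after = min((t for t in cond_frame_outputs if t >= frame_idx), default=None)
    if idx_after is not None:
        selected[idx_after] = cond_frame_outputs[idx_after]
    # c) fill the remaining slots with the temporally closest other frames
    rest = sorted(
        (t for t in cond_frame_outputs if t not in selected),
        key=lambda t: abs(t - frame_idx),
    )
    for t in rest[: max_cond_frame_num - len(selected)]:
        selected[t] = cond_frame_outputs[t]
    unselected = {t: v for t, v in cond_frame_outputs.items() if t not in selected}
    return selected, unselected
```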
def get_1d_sine_pe(pos_inds, dim, temperature=10000):
"""
Get 1D sine positional embedding as in the original Transformer paper.
"""
pe_dim = dim // 2
dim_t = torch.arange(pe_dim, dtype=torch.float32, device=pos_inds.device)
dim_t = temperature ** (2 * (dim_t // 2) / pe_dim)
pos_embed = pos... |
Get 1D sine positional embedding as in the original Transformer paper.
| get_1d_sine_pe | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_utils.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_utils.py | Apache-2.0 |
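The truncated body follows the standard Transformer recipe: frequencies `temperature ** (2 * (i // 2) / pe_dim)` with sine on one half of the channels and cosine on the other. A NumPy sketch consistent with the visible lines (the exact concatenation order in the repo is an assumption):

```python
import numpy as np

def get_1d_sine_pe(pos_inds, dim, temperature=10000.0):
    pe_dim = dim // 2
    dim_t = np.arange(pe_dim, dtype=np.float64)
    dim_t = temperature ** (2 * (dim_t // 2) / pe_dim)
    pos = np.asarray(pos_inds, dtype=np.float64)[..., None] / dim_t
    return np.concatenate([np.sin(pos), np.cos(pos)], axis=-1)
```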
def get_activation_fn(activation):
"""Return an activation function given a string"""
if activation == "relu":
return F.relu
if activation == "gelu":
return F.gelu
if activation == "glu":
return F.glu
    raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.") | Return an activation function given a string | get_activation_fn | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam2_utils.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam2_utils.py | Apache-2.0 |
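This row is one of the few shown in full. A table-driven variant of the same string-to-function lookup (an illustrative sketch with plain scalar callables standing in for `torch.nn.functional`; the name `get_activation` and the entries are hypothetical):

```python
def get_activation(activation):
    """Return an activation callable given a string name."""
    table = {
        "relu": lambda x: max(0.0, x),
        "identity": lambda x: x,  # stand-in for activations not sketched here
    }
    try:
        return table[activation]
    except KeyError:
        raise RuntimeError(
            f"activation should be one of {sorted(table)}, not {activation}."
        ) from None
```

A dict keeps the supported names and the error message in sync automatically, which the chain of `if` statements above does by hand.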
def __init__(
self,
position_encoding: nn.Module,
d_model: int,
backbone_channel_list: List[int],
kernel_size: int = 1,
stride: int = 1,
padding: int = 0,
fpn_interp_model: str = "bilinear",
fuse_type: str = "sum",
fpn_top_down_levels: Opti... | Initialize the neck
:param trunk: the backbone
:param position_encoding: the positional encoding to use
:param d_model: the dimension of the model
:param neck_norm: the normalization to use
| __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/backbones/image_encoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/backbones/image_encoder.py | Apache-2.0 |
def window_partition(x, window_size):
"""
Partition into non-overlapping windows with padding if needed.
Args:
x (tensor): input tokens with [B, H, W, C].
window_size (int): window size.
Returns:
windows: windows after partition with [B * num_windows, window_size, window_size, C]... |
Partition into non-overlapping windows with padding if needed.
Args:
x (tensor): input tokens with [B, H, W, C].
window_size (int): window size.
Returns:
windows: windows after partition with [B * num_windows, window_size, window_size, C].
(Hp, Wp): padded height and width b... | window_partition | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/backbones/utils.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/backbones/utils.py | Apache-2.0 |
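The `window_partition` docstring fully specifies the contract: pad H and W up to multiples of `window_size`, then reshape into `[B * num_windows, window_size, window_size, C]` windows. A NumPy sketch of that contract (the original operates on torch tensors):

```python
import numpy as np

def window_partition(x, window_size):
    # x: [B, H, W, C] token map
    B, H, W, C = x.shape
    pad_h = (window_size - H % window_size) % window_size
    pad_w = (window_size - W % window_size) % window_size
    x = np.pad(x, ((0, 0), (0, pad_h), (0, pad_w), (0, 0)))
    Hp, Wp = H + pad_h, W + pad_w
    x = x.reshape(B, Hp // window_size, window_size, Wp // window_size, window_size, C)
    windows = x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)
    return windows, (Hp, Wp)
```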
def window_unpartition(windows, window_size, pad_hw, hw):
"""
    Window unpartition into the original sequence, removing padding.
    Args:
        windows (tensor): input tokens with [B * num_windows, window_size, window_size, C].
window_size (int): window size.
pad_hw (Tuple): padded height and width (... |
    Window unpartition into the original sequence, removing padding.
    Args:
        windows (tensor): input tokens with [B * num_windows, window_size, window_size, C].
window_size (int): window size.
pad_hw (Tuple): padded height and width (Hp, Wp).
hw (Tuple): original height and width (H, W) bef... | window_unpartition | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/backbones/utils.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/backbones/utils.py | Apache-2.0 |
def __init__(
self,
kernel_size: Tuple[int, ...] = (7, 7),
stride: Tuple[int, ...] = (4, 4),
padding: Tuple[int, ...] = (3, 3),
in_chans: int = 3,
embed_dim: int = 768,
):
"""
Args:
kernel_size (Tuple): kernel size of the projection layer.
... |
Args:
kernel_size (Tuple): kernel size of the projection layer.
stride (Tuple): stride of the projection layer.
padding (Tuple): padding size of the projection layer.
in_chans (int): Number of input image channels.
        embed_dim (int): P... | __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/backbones/utils.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/backbones/utils.py | Apache-2.0 |
def __init__(
self,
*,
transformer_dim: int,
transformer: nn.Module,
num_multimask_outputs: int = 3,
activation: Type[nn.Module] = nn.GELU,
iou_head_depth: int = 3,
iou_head_hidden_dim: int = 256,
use_high_res_features: bool = False,
iou_pr... |
Predicts masks given an image and prompt embeddings, using a
transformer architecture.
Arguments:
transformer_dim (int): the channel dimension of the transformer
transformer (nn.Module): the transformer used to predict masks
num_multimask_outputs (int): the number... | __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | Apache-2.0 |
def forward(
self,
image_embeddings: torch.Tensor,
image_pe: torch.Tensor,
sparse_prompt_embeddings: torch.Tensor,
dense_prompt_embeddings: torch.Tensor,
multimask_output: bool,
repeat_image: bool,
high_res_features: Optional[List[torch.Tensor]] = None,
... |
Predict masks given image and prompt embeddings.
Arguments:
image_embeddings (torch.Tensor): the embeddings from the image encoder
image_pe (torch.Tensor): positional encoding with the shape of image_embeddings
sparse_prompt_embeddings (torch.Tensor): the embeddings of th... | forward | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | Apache-2.0 |
def predict_masks(
self,
image_embeddings: torch.Tensor,
image_pe: torch.Tensor,
sparse_prompt_embeddings: torch.Tensor,
dense_prompt_embeddings: torch.Tensor,
repeat_image: bool,
high_res_features: Optional[List[torch.Tensor]] = None,
) -> Tuple[torch.Tensor,... | Predicts masks. See 'forward' for more details. | predict_masks | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | Apache-2.0 |
def _get_stability_scores(self, mask_logits):
"""
Compute stability scores of the mask logits based on the IoU between upper and
lower thresholds, similar to https://github.com/fairinternal/onevision/pull/568.
"""
mask_logits = mask_logits.flatten(-2)
stability_delta = se... |
Compute stability scores of the mask logits based on the IoU between upper and
lower thresholds, similar to https://github.com/fairinternal/onevision/pull/568.
| _get_stability_scores | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | Apache-2.0 |
def _dynamic_multimask_via_stability(self, all_mask_logits, all_iou_scores):
"""
When outputting a single mask, if the stability score from the current single-mask
output (based on output token 0) falls below a threshold, we instead select from
multi-mask outputs (based on output token 1... |
When outputting a single mask, if the stability score from the current single-mask
output (based on output token 0) falls below a threshold, we instead select from
multi-mask outputs (based on output token 1~3) the mask with the highest predicted
IoU score. This is intended to ensure a ... | _dynamic_multimask_via_stability | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/mask_decoder.py | Apache-2.0 |
def __init__(
self,
embed_dim: int,
image_embedding_size: Tuple[int, int],
input_image_size: Tuple[int, int],
mask_in_chans: int,
activation: Type[nn.Module] = nn.GELU,
) -> None:
"""
Encodes prompts for input to SAM's mask decoder.
Arguments:... |
Encodes prompts for input to SAM's mask decoder.
Arguments:
embed_dim (int): The prompts' embedding dimension
image_embedding_size (tuple(int, int)): The spatial size of the
image embedding, as (H, W).
input_image_size (int): The padded size of the image as in... | __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/prompt_encoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/prompt_encoder.py | Apache-2.0 |
def _get_batch_size(
self,
points: Optional[Tuple[torch.Tensor, torch.Tensor]],
boxes: Optional[torch.Tensor],
masks: Optional[torch.Tensor],
) -> int:
"""
Gets the batch size of the output given the batch size of the input prompts.
"""
if points is no... |
Gets the batch size of the output given the batch size of the input prompts.
| _get_batch_size | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/prompt_encoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/prompt_encoder.py | Apache-2.0 |
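`_get_batch_size` infers the batch size from whichever prompt is present, defaulting to 1 when no prompt is given. A shape-only sketch (here prompts are plain nested lists; in the repo they are torch tensors with the shapes documented in `PromptEncoder.forward`, and the fallback of 1 is an assumption based on the standard SAM implementation):

```python
def get_batch_size(points=None, boxes=None, masks=None):
    # points: (coords, labels) with coords of shape [B, N, 2]
    # boxes:  [B, 4]; masks: [B, 1, H, W]
    if points is not None:
        return len(points[0])
    if boxes is not None:
        return len(boxes)
    if masks is not None:
        return len(masks)
    return 1
```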
def forward(
self,
points: Optional[Tuple[torch.Tensor, torch.Tensor]],
boxes: Optional[torch.Tensor],
masks: Optional[torch.Tensor],
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Embeds different types of prompts, returning both sparse and dense
embeddings.
... |
Embeds different types of prompts, returning both sparse and dense
embeddings.
Arguments:
points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates
and labels to embed.
boxes (torch.Tensor or none): boxes to embed
masks (torch.Tensor or non... | forward | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/prompt_encoder.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/prompt_encoder.py | Apache-2.0 |
def sdp_kernel_context(dropout_p):
"""
Get the context for the attention scaled dot-product kernel. We use Flash Attention
by default, but fall back to all available kernels if Flash Attention fails.
"""
if ALLOW_ALL_KERNELS:
return contextlib.nullcontext()
return torch.backends.cuda.sd... |
Get the context for the attention scaled dot-product kernel. We use Flash Attention
by default, but fall back to all available kernels if Flash Attention fails.
| sdp_kernel_context | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | Apache-2.0 |
def __init__(
self,
depth: int,
embedding_dim: int,
num_heads: int,
mlp_dim: int,
activation: Type[nn.Module] = nn.ReLU,
attention_downsample_rate: int = 2,
) -> None:
"""
A transformer decoder that attends to an input image using
queri... |
A transformer decoder that attends to an input image using
queries whose positional embedding is supplied.
Args:
depth (int): number of layers in the transformer
embedding_dim (int): the channel dimension for the input embeddings
num_heads (int): the number of hea... | __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | Apache-2.0 |
def forward(
self,
image_embedding: Tensor,
image_pe: Tensor,
point_embedding: Tensor,
) -> Tuple[Tensor, Tensor]:
"""
Args:
image_embedding (torch.Tensor): image to attend to. Should be shape
B x embedding_dim x h x w for any h and w.
... |
Args:
image_embedding (torch.Tensor): image to attend to. Should be shape
B x embedding_dim x h x w for any h and w.
image_pe (torch.Tensor): the positional encoding to add to the image. Must
have the same shape as image_embedding.
point_embedding (torch.Te... | forward | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | Apache-2.0 |
def __init__(
self,
embedding_dim: int,
num_heads: int,
mlp_dim: int = 2048,
activation: Type[nn.Module] = nn.ReLU,
attention_downsample_rate: int = 2,
skip_first_layer_pe: bool = False,
) -> None:
"""
A transformer block with four layers: (1) ... |
A transformer block with four layers: (1) self-attention of sparse
inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp
block on sparse inputs, and (4) cross attention of dense inputs to sparse
inputs.
Arguments:
embedding_dim (int): the channel dimen... | __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/sam/transformer.py | Apache-2.0 |
def is_box_near_crop_edge(
boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0
) -> torch.Tensor:
"""Filter masks at the edge of a crop, but not at the edge of the original image."""
crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device)
orig_box... | Filter masks at the edge of a crop, but not at the edge of the original image. | is_box_near_crop_edge | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
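The intent of `is_box_near_crop_edge` is: flag a box when some edge lies within `atol` of the crop boundary but that same edge is not also within `atol` of the original image boundary. A single-box sketch in plain Python (the repo version is batched over a torch tensor of boxes; assuming all boxes are already in original-image coordinates):

```python
def is_box_near_crop_edge(box, crop_box, orig_box, atol=20.0):
    # box, crop_box, orig_box: [x0, y0, x1, y1] in original-image coordinates
    return any(
        abs(b - c) <= atol and abs(b - o) > atol
        for b, c, o in zip(box, crop_box, orig_box)
    )
```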
def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]:
"""
Encodes masks to an uncompressed RLE, in the format expected by
pycoco tools.
"""
# Put in fortran order and flatten h,w
b, h, w = tensor.shape
tensor = tensor.permute(0, 2, 1).flatten(1)
# Compute change indices... |
Encodes masks to an uncompressed RLE, in the format expected by
pycoco tools.
| mask_to_rle_pytorch | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray:
"""Compute a binary mask from an uncompressed RLE."""
h, w = rle["size"]
mask = np.empty(h * w, dtype=bool)
idx = 0
parity = False
for count in rle["counts"]:
mask[idx : idx + count] = parity
idx += count
parity ^= True
... | Compute a binary mask from an uncompressed RLE. | rle_to_mask | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
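`rle_to_mask` is shown nearly in full: counts alternate runs of background and foreground pixels, starting with background. A pure-Python round trip pairing it with the encoder described in `mask_to_rle_pytorch` (lists instead of tensors; uncompressed pycocotools-style counts over an already-flattened mask):

```python
def mask_to_rle(flat_mask):
    """Encode a flat boolean mask as uncompressed RLE counts (background first)."""
    counts, prev, run = [], False, 0
    for v in flat_mask:
        if v == prev:
            run += 1
        else:
            counts.append(run)
            prev, run = v, 1
    counts.append(run)
    return counts

def rle_to_mask(counts):
    """Decode uncompressed RLE counts back into a flat boolean mask."""
    mask, parity = [], False
    for count in counts:
        mask.extend([parity] * count)
        parity = not parity
    return mask
```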
def calculate_stability_score(
masks: torch.Tensor, mask_threshold: float, threshold_offset: float
) -> torch.Tensor:
"""
Computes the stability score for a batch of masks. The stability
score is the IoU between the binary masks obtained by thresholding
the predicted mask logits at high and low valu... |
Computes the stability score for a batch of masks. The stability
score is the IoU between the binary masks obtained by thresholding
the predicted mask logits at high and low values.
| calculate_stability_score | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
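Concretely, the stability score is IoU(logits > threshold + offset, logits > threshold - offset). A list-based sketch (the repo computes this with batched tensor sums; returning 1.0 for an empty union is an assumption):

```python
def calculate_stability_score(mask_logits, mask_threshold, threshold_offset):
    high = [v > mask_threshold + threshold_offset for v in mask_logits]
    low = [v > mask_threshold - threshold_offset for v in mask_logits]
    intersection = sum(h and l for h, l in zip(high, low))
    union = sum(h or l for h, l in zip(high, low))
    return intersection / union if union else 1.0
```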
def build_point_grid(n_per_side: int) -> np.ndarray:
"""Generates a 2D grid of points evenly spaced in [0,1]x[0,1]."""
offset = 1 / (2 * n_per_side)
points_one_side = np.linspace(offset, 1 - offset, n_per_side)
points_x = np.tile(points_one_side[None, :], (n_per_side, 1))
points_y = np.tile(points_o... | Generates a 2D grid of points evenly spaced in [0,1]x[0,1]. | build_point_grid | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
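The visible lines of `build_point_grid` already pin down the construction: n points per side at offsets 1/(2n), tiled into x and y coordinate planes. A NumPy sketch completing the truncated tail (the final stacking order is an assumption):

```python
import numpy as np

def build_point_grid(n_per_side):
    offset = 1 / (2 * n_per_side)
    points_one_side = np.linspace(offset, 1 - offset, n_per_side)
    points_x = np.tile(points_one_side[None, :], (n_per_side, 1))
    points_y = np.tile(points_one_side[:, None], (1, n_per_side))
    return np.stack([points_x, points_y], axis=-1).reshape(-1, 2)
```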
def build_all_layer_point_grids(
n_per_side: int, n_layers: int, scale_per_layer: int
) -> List[np.ndarray]:
"""Generates point grids for all crop layers."""
points_by_layer = []
for i in range(n_layers + 1):
n_points = int(n_per_side / (scale_per_layer**i))
points_by_layer.append(build_... | Generates point grids for all crop layers. | build_all_layer_point_grids | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
def generate_crop_boxes(
im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float
) -> Tuple[List[List[int]], List[int]]:
"""
Generates a list of crop boxes of different sizes. Each layer
has (2**i)**2 boxes for the ith layer.
"""
crop_boxes, layer_idxs = [], []
im_h, im_w = im_size
... |
Generates a list of crop boxes of different sizes. Each layer
has (2**i)**2 boxes for the ith layer.
| generate_crop_boxes | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
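Layer i tiles the image with a 2**i x 2**i grid of overlapping crops, so layer 0 is the full image and layer i contributes (2**i)**2 boxes. A sketch under that reading (the crop-length and overlap formulas here are illustrative assumptions, not necessarily the repo's exact arithmetic):

```python
import math

def generate_crop_boxes(im_size, n_layers, overlap_ratio):
    im_h, im_w = im_size
    crop_boxes, layer_idxs = [[0, 0, im_w, im_h]], [0]  # layer 0: full image
    short_side = min(im_h, im_w)

    def crop_len(orig_len, n_crops, overlap):
        # crop length such that n_crops crops with this overlap cover orig_len
        return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops))

    for i in range(n_layers):
        n_per_side = 2 ** (i + 1)
        overlap = int(overlap_ratio * short_side * (2 / n_per_side))
        cw = crop_len(im_w, n_per_side, overlap)
        ch = crop_len(im_h, n_per_side, overlap)
        for y0 in ((ch - overlap) * k for k in range(n_per_side)):
            for x0 in ((cw - overlap) * k for k in range(n_per_side)):
                crop_boxes.append([x0, y0, min(x0 + cw, im_w), min(y0 + ch, im_h)])
                layer_idxs.append(i + 1)
    return crop_boxes, layer_idxs
```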
def remove_small_regions(
mask: np.ndarray, area_thresh: float, mode: str
) -> Tuple[np.ndarray, bool]:
"""
Removes small disconnected regions and holes in a mask. Returns the
mask and an indicator of if the mask has been modified.
"""
import cv2 # type: ignore
assert mode in ["holes", "is... |
Removes small disconnected regions and holes in a mask. Returns the
mask and an indicator of if the mask has been modified.
| remove_small_regions | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor:
"""
Calculates boxes in XYXY format around masks. Return [0,0,0,0] for
an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4.
"""
# torch.max below raises an error on empty inputs, just skip in this case
if tor... |
Calculates boxes in XYXY format around masks. Return [0,0,0,0] for
an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4.
| batched_mask_to_box | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/amg.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/amg.py | Apache-2.0 |
def mask_to_box(masks: torch.Tensor):
"""
    Compute the bounding box of an input mask.
Inputs:
- masks: [B, 1, H, W] masks, dtype=torch.Tensor
Returns:
- box_coords: [B, 1, 4], contains (x, y) coordinates of top left and bottom right box corners, dtype=torch.Tensor
"""
B, _, h, w = masks.s... |
    Compute the bounding box of an input mask.
Inputs:
- masks: [B, 1, H, W] masks, dtype=torch.Tensor
Returns:
- box_coords: [B, 1, 4], contains (x, y) coordinates of top left and bottom right box corners, dtype=torch.Tensor
| mask_to_box | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/misc.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/misc.py | Apache-2.0 |
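The documented contract of `mask_to_box` — XYXY corners per mask, with [0, 0, 0, 0] for an empty mask — is easy to sketch for a single 2-D mask in NumPy (the repo version is batched over [B, 1, H, W] tensors):

```python
import numpy as np

def mask_to_box(mask):
    # mask: [H, W] boolean array -> [x0, y0, x1, y1], or all zeros if empty
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return [0, 0, 0, 0]
    return [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]
```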
def load_video_frames(
video_path,
image_size,
offload_video_to_cpu,
img_mean=(0.485, 0.456, 0.406),
img_std=(0.229, 0.224, 0.225),
async_loading_frames=False,
compute_device=torch.device("cuda"),
):
"""
Load the video frames from a directory of JPEG files ("<frame_index>.jpg" format... |
Load the video frames from a directory of JPEG files ("<frame_index>.jpg" format).
The frames are resized to image_size x image_size and are loaded to GPU if
`offload_video_to_cpu` is `False` and to CPU if `offload_video_to_cpu` is `True`.
You can load a frame asynchronously by setting `async_loading... | load_video_frames | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/misc.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/misc.py | Apache-2.0 |
def fill_holes_in_mask_scores(mask, max_area):
"""
A post processor to fill small holes in mask scores with area under `max_area`.
"""
# Holes are those connected components in background with area <= self.max_area
# (background regions are those with mask scores <= 0)
assert max_area > 0, "max_... |
A post processor to fill small holes in mask scores with area under `max_area`.
| fill_holes_in_mask_scores | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/misc.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/misc.py | Apache-2.0 |
def concat_points(old_point_inputs, new_points, new_labels):
"""Add new points and labels to previous point inputs (add at the end)."""
if old_point_inputs is None:
points, labels = new_points, new_labels
else:
points = torch.cat([old_point_inputs["point_coords"], new_points], dim=1)
... | Add new points and labels to previous point inputs (add at the end). | concat_points | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/misc.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/misc.py | Apache-2.0 |
def transform_coords(
self, coords: torch.Tensor, normalize=False, orig_hw=None
) -> torch.Tensor:
"""
        Expects a torch tensor with length 2 in the last dimension. The coordinates can be in absolute image or normalized coordinates.
If the coords are in absolute image coordinates, norm... |
        Expects a torch tensor with length 2 in the last dimension. The coordinates can be in absolute image or normalized coordinates.
        If the coords are in absolute image coordinates, normalize should be set to True and the original image size is required.
Returns
Un-normalized coordinates in... | transform_coords | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/transforms.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/transforms.py | Apache-2.0 |
def transform_boxes(
self, boxes: torch.Tensor, normalize=False, orig_hw=None
) -> torch.Tensor:
"""
        Expects a tensor of shape Bx4. The coordinates can be in absolute image or normalized coordinates.
        If the coords are in absolute image coordinates, normalize should be set to True and... |
        Expects a tensor of shape Bx4. The coordinates can be in absolute image or normalized coordinates.
        If the coords are in absolute image coordinates, normalize should be set to True and the original image size is required.
| transform_boxes | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/utils/transforms.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/utils/transforms.py | Apache-2.0 |
def _extract_face(self, img, box, image_size, save_path=None):
"""
        Extract a face from the image.
:param img: A PIL Image.
:param box: Four-element bounding box.
:param image_size: raw image size
:param save_path: Save path for extracted face image. (default: {None})
:return:
... |
        Extract a face from the image.
:param img: A PIL Image.
:param box: Four-element bounding box.
:param image_size: raw image size
:param save_path: Save path for extracted face image. (default: {None})
:return:
| _extract_face | python | FutureUniant/Tailor | app/src/algorithm/video_cut_face/face_analysis.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/video_cut_face/face_analysis.py | Apache-2.0 |
def open_project(cls, tailor_path, **kwargs):
"""
:param tailor_path:
:param kwargs
save: Save specifically refers to saving project image,
which also means opening the project for the first time
project_image_path: When the proj... |
:param tailor_path:
:param kwargs
save: Save specifically refers to saving project image,
which also means opening the project for the first time
project_image_path: When the project has been recorded in project_info table,
... | open_project | python | FutureUniant/Tailor | app/src/project/__init__.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/project/__init__.py | Apache-2.0 |
def __init__(
self,
id=0,
operation_id="",
act_time=Timer.get_timestamp(integer=True, string=False),
parameter=None,
output=None,
video=None,
file=None,
):
"""
:param id:
:param operation_id:
... |
:param id:
:param operation_id:
:param act_time:
:param parameter:
:param output:
:param video:
:param file:
| __init__ | python | FutureUniant/Tailor | app/src/project/model/action.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/project/model/action.py | Apache-2.0 |
def __init__(
self,
id=-1,
name="",
path="",
sort=0,
):
"""
:param id:
:param name:
:param path:
:param sort:
"""
self.id = id
self.name = name
self.path = path
self.sort = so... |
:param id:
:param name:
:param path:
:param sort:
| __init__ | python | FutureUniant/Tailor | app/src/project/model/video.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/project/model/video.py | Apache-2.0 |
def compare_images(image1_path, image2_path):
"""
:param image1_path:
:param image2_path:
:return:
0 : image1 and image2 are normal, but different
1 : image1 and image2 are normal, and same
-1 : image1 is damaged
-2 : image2 is damaged
-3 : ... |
:param image1_path:
:param image2_path:
:return:
0 : image1 and image2 are normal, but different
1 : image1 and image2 are normal, and same
-1 : image1 is damaged
-2 : image2 is damaged
-3 : image1 and image2 are all damaged
| compare_images | python | FutureUniant/Tailor | app/src/utils/imager.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/utils/imager.py | Apache-2.0 |
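The return-code convention of `compare_images` (0 different, 1 same, -1/-2/-3 for damaged inputs) is the interesting part of this row. A sketch of that branching with the image decoding abstracted into a caller-supplied `is_valid` predicate (the real function opens image files from paths; `compare_blobs` and `is_valid` are hypothetical names):

```python
def compare_blobs(blob1, blob2, is_valid):
    ok1, ok2 = is_valid(blob1), is_valid(blob2)
    if not ok1 and not ok2:
        return -3  # both damaged
    if not ok1:
        return -1  # first damaged
    if not ok2:
        return -2  # second damaged
    return 1 if blob1 == blob2 else 0  # 1: same, 0: normal but different
```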
def write_log(self, content, log_level=logging.INFO):
"""
:param content:
The format specification of the content :
type: total stage: current stage: total step: current step: remark
type:
interval: No actual progress, simulate progre... |
:param content:
The format specification of the content :
type: total stage: current stage: total step: current step: remark
type:
interval: No actual progress, simulate progress based on time intervals.
When it... | write_log | python | FutureUniant/Tailor | app/src/utils/logger.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/utils/logger.py | Apache-2.0 |
def register(self, target=None, name=None, force=False):
"""Register a module.
A record will be added to `self._module_dict`, whose key is the class
name or the specified name, and value is the class itself.
It can be used as a decorator or a normal function.
Args:
... | Register a module.
A record will be added to `self._module_dict`, whose key is the class
name or the specified name, and value is the class itself.
It can be used as a decorator or a normal function.
Args:
target (callable| class | None): The target for register.
... | register | python | FutureUniant/Tailor | app/src/utils/register.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/utils/register.py | Apache-2.0 |
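The `register` row above documents the decorator-or-plain-call registry pattern common in ML codebases: a record keyed by class name (or an explicit name) is added to `self._module_dict`. A minimal sketch of that contract; the `Registry` class body, key-collision handling, and `force` semantics here are assumptions, not the Tailor repo's exact code:

```python
class Registry:
    """Minimal name -> object registry usable as a decorator or a plain call."""

    def __init__(self):
        self._module_dict = {}

    def register(self, target=None, name=None, force=False):
        def _do_register(obj):
            key = name if name is not None else obj.__name__
            if key in self._module_dict and not force:
                raise KeyError(f"'{key}' is already registered")
            self._module_dict[key] = obj
            return obj

        if target is None:
            # decorator usage: @REGISTRY.register() or @REGISTRY.register(name=...)
            return _do_register
        # plain-function usage: REGISTRY.register(SomeClass)
        return _do_register(target)


REGISTRY = Registry()


@REGISTRY.register(name="linear")
class LinearModel:
    pass


class MLPModel:
    pass


REGISTRY.register(MLPModel)  # registered under its class name
```

Both call styles end up in the same `_module_dict`, which is what lets config files refer to classes by string name.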
def _create_bindings(self, sequence: Optional[str] = None):
""" set necessary bindings for functionality of widget, will overwrite other bindings """
if sequence is None or sequence == "<Enter>":
self._canvas.bind("<Enter>", self._on_enter)
self._text_label.bind("<Enter>", self._... | set necessary bindings for functionality of widget, will overwrite other bindings | _create_bindings | python | FutureUniant/Tailor | app/tailorwidgets/tailor_menu_bar.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_menu_bar.py | Apache-2.0 |
def _update_font(self):
""" pass font to tkinter widgets with applied font scaling and update grid with workaround """
self._text_label.configure(font=self._apply_font_scaling(self._font))
# Workaround to force grid to be resized when text changes size.
# Otherwise grid will lag and onl... | pass font to tkinter widgets with applied font scaling and update grid with workaround | _update_font | python | FutureUniant/Tailor | app/tailorwidgets/tailor_menu_bar.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_menu_bar.py | Apache-2.0 |
def unbind(self, sequence: str = None, funcid: str = None):
""" called on the tkinter.Label and tkinter.Canvas """
if funcid is not None:
raise ValueError("'funcid' argument can only be None, because there is a bug in" +
" tkinter and its not clear whether the in... | called on the tkinter.Label and tkinter.Canvas | unbind | python | FutureUniant/Tailor | app/tailorwidgets/tailor_menu_bar.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_menu_bar.py | Apache-2.0 |
def __init__(self,
master: any,
values: List,
fg_color: Optional[Union[str, Tuple[str, str]]] = None,
text_color: Optional[Union[str, Tuple[str, str]]] = None,
button_fg_color: Optional[Union[str, Tuple[str, str]]] = None,
... |
:param master:
:param values:
example:
[{
"text": "The first multiple choice question",
"options": [(key: which is for showing, val: which is for app using)]
}...{}]
:param fg_color:
... | __init__ | python | FutureUniant/Tailor | app/tailorwidgets/tailor_multi_radios_dialog.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_multi_radios_dialog.py | Apache-2.0 |
def update_data(self):
""" update the data when values are changes """
for i in self.frame:
if self.checkbox and i[1] == 0:
continue
if self.write:
self.data[i]["value"] = self.frame[i].get()
else:
self.data[i]["value"] ... | update the data when values are changes | update_data | python | FutureUniant/Tailor | app/tailorwidgets/tailor_table.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_table.py | Apache-2.0 |
def edit_row(self, row, value=None, **kwargs):
""" edit all parameters of a single row """
start_idx = 0
if self.checkbox:
start_idx = 1
for i in range(start_idx, self.real_columns):
self.frame[row, i].configure(**kwargs)
self.data[row, i]["args"].upda... | edit all parameters of a single row | edit_row | python | FutureUniant/Tailor | app/tailorwidgets/tailor_table.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_table.py | Apache-2.0 |
def edit_column(self, column, value=None, **kwargs):
""" edit all parameters of a single column """
for i in range(self.rows):
self.frame[i, column].configure(**kwargs)
self.data[i, column]["args"].update(kwargs)
if value:
self.insert(i, column, value)... | edit all parameters of a single column | edit_column | python | FutureUniant/Tailor | app/tailorwidgets/tailor_table.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_table.py | Apache-2.0 |
def insert(self, row, column, value, **kwargs):
""" insert value in a specific block [row, column] """
if self.write:
self.frame[row, column].delete(0, END)
self.frame[row, column].insert(0, value)
self.frame[row, column].configure(**kwargs)
else:
... | insert value in a specific block [row, column] | insert | python | FutureUniant/Tailor | app/tailorwidgets/tailor_table.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_table.py | Apache-2.0 |
def delete(self, row, column, **kwargs):
""" delete a value from a specific block [row, column] """
if self.write:
self.frame[row, column].delete(0, END)
self.frame[row, column].configure(**kwargs)
else:
self.frame[row, column].configure(text="", **kwargs)
... | delete a value from a specific block [row, column] | delete | python | FutureUniant/Tailor | app/tailorwidgets/tailor_table.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_table.py | Apache-2.0 |
def bind(self, sequence: str = None, command: Callable = None, add: str = True):
""" called on the tkinter.Label and tkinter.Canvas """
if not (add == "+" or add is True):
raise ValueError("'add' argument can only be '+' or True to preserve internal callbacks")
self._treeview.bind(se... | called on the tkinter.Label and tkinter.Canvas | bind | python | FutureUniant/Tailor | app/tailorwidgets/tailor_tree_view.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_tree_view.py | Apache-2.0 |
def _detect_color_of_master(self, master_widget=None) -> Union[str, Tuple[str, str]]:
""" detect foreground color of master widget for bg_color and transparent color """
if master_widget is None:
master_widget = self.master
if isinstance(master_widget, (windows.widgets.core_widget_... | detect foreground color of master widget for bg_color and transparent color | _detect_color_of_master | python | FutureUniant/Tailor | app/tailorwidgets/tailor_tree_view.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_tree_view.py | Apache-2.0 |
def _check_font_type(self, font: any):
""" check font type when passed to widget """
if isinstance(font, CTkFont):
return font
elif type(font) == tuple and len(font) == 1:
warnings.warn(f"{type(self).__name__} Warning: font {font} given without size, will be extended wit... | check font type when passed to widget | _check_font_type | python | FutureUniant/Tailor | app/tailorwidgets/tailor_tree_view.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_tree_view.py | Apache-2.0 |
def bind(self, sequence: str = None, command: Callable = None, add: str = True):
""" called on the tkinter.Label and tkinter.Canvas """
if not (add == "+" or add is True):
raise ValueError("'add' argument can only be '+' or True to preserve internal callbacks")
self._treeview.bind(se... | called on the tkinter.Label and tkinter.Canvas | bind | python | FutureUniant/Tailor | app/tailorwidgets/tailor_tree_view_bck.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_tree_view_bck.py | Apache-2.0 |
def _async_raise(tid, exctype):
"""raises the exception, performs cleanup if needed"""
tid = ctypes.c_long(tid)
if not inspect.isclass(exctype):
exctype = type(exctype)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exctype))
if res == 0:
raise ValueError("inv... | raises the exception, performs cleanup if needed | _async_raise | python | FutureUniant/Tailor | app/tailorwidgets/tailor_video_player.py | https://github.com/FutureUniant/Tailor/blob/master/app/tailorwidgets/tailor_video_player.py | Apache-2.0 |
def decode(self, samples, images, sd_version: str, sub_batch_size: int):
"""
sub_batch_size: How many images to decode in a single pass.
See https://github.com/huchenlei/ComfyUI-layerdiffuse/pull/4 for more
context.
"""
sd_version = StableDiffusionVersion(sd_version)
... |
sub_batch_size: How many images to decode in a single pass.
See https://github.com/huchenlei/ComfyUI-layerdiffuse/pull/4 for more
context.
| decode | python | huchenlei/ComfyUI-layerdiffuse | layered_diffusion.py | https://github.com/huchenlei/ComfyUI-layerdiffuse/blob/master/layered_diffusion.py | Apache-2.0 |
def zero_module(module):
"""
Zero out the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().zero_()
return module |
Zero out the parameters of a module and return it.
| zero_module | python | huchenlei/ComfyUI-layerdiffuse | lib_layerdiffusion/models.py | https://github.com/huchenlei/ComfyUI-layerdiffuse/blob/master/lib_layerdiffusion/models.py | Apache-2.0 |
def load_file_from_url(
url: str,
*,
model_dir: str,
progress: bool = True,
file_name: Optional[str] = None,
) -> str:
"""Download a file from `url` into `model_dir`, using the file present if possible.
Returns the path to the downloaded file.
"""
os.makedirs(model_dir, exist_ok=Tru... | Download a file from `url` into `model_dir`, using the file present if possible.
Returns the path to the downloaded file.
| load_file_from_url | python | huchenlei/ComfyUI-layerdiffuse | lib_layerdiffusion/utils.py | https://github.com/huchenlei/ComfyUI-layerdiffuse/blob/master/lib_layerdiffusion/utils.py | Apache-2.0 |
def to_lora_patch_dict(state_dict: dict) -> dict:
""" Convert raw lora state_dict to patch_dict that can be applied on
modelpatcher."""
patch_dict = {}
for k, w in state_dict.items():
model_key, patch_type, weight_index = k.split('::')
if model_key not in patch_dict:
patch_di... | Convert raw lora state_dict to patch_dict that can be applied on
modelpatcher. | to_lora_patch_dict | python | huchenlei/ComfyUI-layerdiffuse | lib_layerdiffusion/utils.py | https://github.com/huchenlei/ComfyUI-layerdiffuse/blob/master/lib_layerdiffusion/utils.py | Apache-2.0 |
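The `to_lora_patch_dict` row splits each raw key on `::` into `(model_key, patch_type, weight_index)` before grouping, but the row truncates before the grouping itself. A sketch of that key-splitting step; the nested output shape below is an assumption, not ComfyUI-layerdiffuse's actual patch structure:

```python
def to_lora_patch_dict(state_dict: dict) -> dict:
    """Group '<model_key>::<patch_type>::<index>' entries by model key."""
    patch_dict = {}
    for k, w in state_dict.items():
        model_key, patch_type, weight_index = k.split("::")
        # Hypothetical grouping: one sub-dict of weights per model key.
        patch_dict.setdefault(model_key, {})[(patch_type, int(weight_index))] = w
    return patch_dict


patches = to_lora_patch_dict({
    "unet.down.0::lora_up::0": [1.0],
    "unet.down.0::lora_down::1": [2.0],
})
```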
def copy(
doc_links_config: Path = typer.Argument(..., help="Path to the doc links config"),
absolute_paths: bool = typer.Option(
False, help="Whether to use absolute paths for the source files"
),
):
"""Copy documentation files as specified in the doc links config.
This function reads the ... | Copy documentation files as specified in the doc links config.
This function reads the doc links configuration file and copies the specified
files from their source locations to the destination paths. It can handle both
relative and absolute paths based on the `absolute_paths` parameter.
Args:
... | copy | python | oumi-ai/oumi | docs/_manage_doclinks.py | https://github.com/oumi-ai/oumi/blob/master/docs/_manage_doclinks.py | Apache-2.0 |
def clean(
doc_links_config: Path = typer.Argument(..., help="Path to the doc links config"),
absolute_paths: bool = typer.Option(
False, help="Whether to use absolute paths for the destination files"
),
):
"""Delete destination files specified in the doc links config.
This function reads t... | Delete destination files specified in the doc links config.
This function reads the doc links configuration file and deletes the specified
destination files. It can handle both relative and absolute paths based on the
`absolute_paths` parameter.
Args:
doc_links_config (Path): Path to the doc l... | clean | python | oumi-ai/oumi | docs/_manage_doclinks.py | https://github.com/oumi-ai/oumi/blob/master/docs/_manage_doclinks.py | Apache-2.0 |
def summarize_module(
module_name: str = typer.Argument(..., help="The name of the module to inspect"),
filter_type: Optional[list[str]] = typer.Option(
None, help="Filter for object types (class, method, attribute, function)"
),
output_file: Optional[str] = typer.Option(
None, help="Fil... | Generate a markdown table of objects defined in a Python module.
Args:
module_name: The name of the module to inspect.
filter_type: Optional filter for object types.
Can be 'class', 'method', 'attribute', 'function', or a list of these.
output_file: Optional file path to save th... | summarize_module | python | oumi-ai/oumi | docs/_summarize_module.py | https://github.com/oumi-ai/oumi/blob/master/docs/_summarize_module.py | Apache-2.0 |
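The `summarize_module` row describes generating a markdown table of a module's objects with optional type filtering. A simplified sketch of that idea: it takes a module object directly (rather than an importable name string) and omits file output, and the column choice is an assumption:

```python
import inspect
import json  # stdlib module used as the demo target


def summarize_module(module, filter_type=None):
    """Return a markdown table of public classes/functions defined in `module`."""
    rows = []
    for name, obj in inspect.getmembers(module):
        if name.startswith("_"):
            continue
        if inspect.isclass(obj):
            kind = "class"
        elif inspect.isfunction(obj):
            kind = "function"
        else:
            continue
        if filter_type and kind not in filter_type:
            continue
        # First docstring line only, matching the summary behavior described above.
        doc = (inspect.getdoc(obj) or "No description available").split("\n")[0]
        rows.append(f"| {name} | {kind} | {doc} |")
    header = "| Name | Type | Description |\n| --- | --- | --- |"
    return "\n".join([header] + rows)


table = summarize_module(json, filter_type=["function"])
```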
def summarize_configs(
config_folder: str = typer.Argument(..., help="The folder containing config files"),
config_class: str = typer.Argument(
..., help="The class to instantiate configs (format: module.ClassName)"
),
output_file: Optional[str] = typer.Option(
None, help="File path to s... | Generate a markdown table summarizing config files in a folder.
Args:
config_folder: The folder containing config files.
config_class: The class to instantiate configs (format: module.ClassName).
output_file: Optional file path to save the generated markdown.
Returns:
A string ... | summarize_configs | python | oumi-ai/oumi | docs/_summarize_module.py | https://github.com/oumi-ai/oumi/blob/master/docs/_summarize_module.py | Apache-2.0 |
def _get_object_docstring(obj, summary: bool = True) -> str:
"""Get the docstring of an object."""
docstring = inspect.getdoc(obj) or "No description available"
if summary:
return docstring.split("\n")[0]
return docstring | Get the docstring of an object. | _get_object_docstring | python | oumi-ai/oumi | docs/_summarize_module.py | https://github.com/oumi-ai/oumi/blob/master/docs/_summarize_module.py | Apache-2.0 |
def _get_object_type(obj) -> str:
"""Get the type of an object."""
if inspect.isclass(obj):
return "class"
elif inspect.isfunction(obj):
return "function"
elif inspect.ismodule(obj):
return "module"
elif isinstance(obj, property):
return "property"
elif inspect.is... | Get the type of an object. | _get_object_type | python | oumi-ai/oumi | docs/_summarize_module.py | https://github.com/oumi-ai/oumi/blob/master/docs/_summarize_module.py | Apache-2.0 |
def _is_child_of(obj, parent_class) -> bool:
"""Check if an object is a child of a parent class."""
return (
inspect.isclass(obj) and issubclass(obj, parent_class) and obj != parent_class
) | Check if an object is a child of a parent class. | _is_child_of | python | oumi-ai/oumi | docs/_summarize_module.py | https://github.com/oumi-ai/oumi/blob/master/docs/_summarize_module.py | Apache-2.0 |
def _is_defined_in_module(obj, module):
"""Check if an object is defined in the given module."""
try:
return inspect.getmodule(obj) == module
except AttributeError:
# Some objects might not have a module, assume they're not defined in the module
return False | Check if an object is defined in the given module. | _is_defined_in_module | python | oumi-ai/oumi | docs/_summarize_module.py | https://github.com/oumi-ai/oumi/blob/master/docs/_summarize_module.py | Apache-2.0 |
def show_logo():
"""Display the Oumi platform logo in a panel."""
logo_text = r"""
____ _ _ __ __ _____
/ __ \| | | | \/ |_ _|
| | | | | | | \ / | | |
| | | | | | | |\/| | | |
| |__| | |__| | | | |_| |_
\____/ \____/|_| |_|_____|"""
tagline = (
"Everything you need to bui... | Display the Oumi platform logo in a panel. | show_logo | python | oumi-ai/oumi | scripts/demo.py | https://github.com/oumi-ai/oumi/blob/master/scripts/demo.py | Apache-2.0 |
def display_yaml_config(config: dict, title: str = "Configuration"):
"""Display a YAML configuration in a panel with syntax highlighting.
Args:
config: The configuration dictionary to display
title: The title for the panel
"""
yaml_str = yaml.dump(config)
console.print(
Pane... | Display a YAML configuration in a panel with syntax highlighting.
Args:
config: The configuration dictionary to display
title: The title for the panel
| display_yaml_config | python | oumi-ai/oumi | scripts/demo.py | https://github.com/oumi-ai/oumi/blob/master/scripts/demo.py | Apache-2.0 |
def run_command(
command: str, capture_output: bool = False
) -> subprocess.CompletedProcess:
"""Run a shell command and return the result.
Args:
command: The command to run
capture_output: Whether to capture the command output
Returns:
The completed process object
"""
... | Run a shell command and return the result.
Args:
command: The command to run
capture_output: Whether to capture the command output
Returns:
The completed process object
| run_command | python | oumi-ai/oumi | scripts/demo.py | https://github.com/oumi-ai/oumi/blob/master/scripts/demo.py | Apache-2.0 |
def select_from_choices(
prompt: str,
choices: list[dict[str, str]],
default: str = "1",
show_descriptions: bool = True,
) -> tuple[str, str]:
"""Display numbered choices and get user selection.
Args:
prompt: The prompt to display to the user
choices: List of choice dictionaries... | Display numbered choices and get user selection.
Args:
prompt: The prompt to display to the user
choices: List of choice dictionaries with name, description (optional), and value,
or dictionary of choice descriptions to values, or list of choices
default: Default choice numb... | select_from_choices | python | oumi-ai/oumi | scripts/demo.py | https://github.com/oumi-ai/oumi/blob/master/scripts/demo.py | Apache-2.0 |
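The `select_from_choices` row describes a numbered interactive menu returning a `(name, value)` tuple. A non-interactive sketch of that flow, with a `selection` argument standing in for user input; the exact rendering and the choice-dict keys (`name`, `description`, `value`) follow the docstring above but are otherwise assumptions:

```python
def select_from_choices(prompt, choices, selection="1", show_descriptions=True):
    """Render a numbered menu and return (name, value) for the chosen entry."""
    lines = [prompt]
    for i, choice in enumerate(choices, start=1):
        desc = ""
        if show_descriptions and choice.get("description"):
            desc = f" - {choice['description']}"
        lines.append(f"  {i}. {choice['name']}{desc}")
    print("\n".join(lines))  # the real function would prompt here
    picked = choices[int(selection) - 1]
    return picked["name"], picked["value"]


name, value = select_from_choices(
    "Pick a model:",
    [{"name": "small", "description": "fast", "value": "s"},
     {"name": "large", "value": "l"}],
    selection="2",
)
```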
def num_bytes_to_str(bytes: Union[int, float]) -> str:
"""Returns a human-readable string for a number of bytes."""
if bytes < 1000:
return f"{bytes} B"
elif bytes < 1e6:
return f"{bytes / 1000:.1f} KB"
elif bytes < 1e9:
return f"{bytes / 1e6:.1f} MB"
else:
return f"{... | Returns a human-readable string for a number of bytes. | num_bytes_to_str | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
def get_seq_len(config: TrainingConfig, model_config: ModelConfig) -> int:
"""Gets the maximum sequence length supported by the model."""
seq_len = model_config.seq_len
if config.model.model_max_length is not None:
seq_len = config.model.model_max_length
return seq_len | Gets the maximum sequence length supported by the model. | get_seq_len | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
def get_standardized_model_config(hf_model_config) -> ModelConfig:
"""Gets a standardized model config given a HF model config.
Each HF config may use different field names for the same property. This function
converts them to a standardized format for use throughout the script.
"""
if isinstance(h... | Gets a standardized model config given a HF model config.
Each HF config may use different field names for the same property. This function
converts them to a standardized format for use throughout the script.
| get_standardized_model_config | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
def get_data_bytes(
config: TrainingConfig, model_config: ModelConfig, bytes_per_unit: int
) -> int:
"""Gets the total number of bytes used by the data batch."""
batch_size = config.training.per_device_train_batch_size
model_max_length = get_seq_len(config, model_config)
return batch_size * model_ma... | Gets the total number of bytes used by the data batch. | get_data_bytes | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
def get_model_bytes(model: torch.nn.Module, bytes_per_unit: int) -> int:
"""Gets the total number of bytes used by the loaded model."""
num_total_params = count_model_parameters(model).all_params
print(f"- Model parameter count: {num_total_params:,}")
return num_total_params * bytes_per_unit | Gets the total number of bytes used by the loaded model. | get_model_bytes | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
def get_optim_bytes(config: TrainingConfig, model_bytes: int) -> int:
"""Gets the total number of bytes used by the optimizer."""
optim = config.training.optimizer
if optim in ["adamw_torch", "adamw_torch_fused"]:
multiplier = 2
elif optim == "adafactor":
multiplier = 0.3
elif optim ... | Gets the total number of bytes used by the optimizer. | get_optim_bytes | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
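The `get_optim_bytes` row estimates optimizer-state memory as a multiple of model weight bytes (AdamW keeps two moment buffers per parameter; Adafactor's factored second moments cost roughly 0.3x). A table-driven sketch of that estimate; only the AdamW and Adafactor multipliers appear in the row above, so the remaining entries and the function signature here are assumptions:

```python
# Rough optimizer-state bytes per byte of model weights.
_OPTIM_STATE_MULTIPLIER = {
    "adamw_torch": 2.0,        # two moment buffers per parameter
    "adamw_torch_fused": 2.0,
    "adafactor": 0.3,          # factored second moments (assumed, from the row above)
    "sgd": 0.0,                # hypothetical: no persistent state without momentum
}


def get_optim_bytes(optimizer_name: str, model_bytes: int) -> int:
    """Estimate optimizer-state memory from model weight bytes."""
    return int(_OPTIM_STATE_MULTIPLIER[optimizer_name] * model_bytes)
```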
def get_gradient_bytes(config: TrainingConfig, model_bytes: int) -> int:
"""Gets the total number of bytes used by gradients."""
print("- The size of the gradient is the same as that for model weights.")
if config.training.gradient_accumulation_steps > 1:
print(
"- If gradient accumulati... | Gets the total number of bytes used by gradients. | get_gradient_bytes | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
def get_activation_bytes(
config: TrainingConfig, model_config: ModelConfig, bytes_per_unit: int
) -> int:
"""Gets the total number of bytes used by activations."""
vocab_size = model_config.vocab_size
num_layers = model_config.num_layers
hidden_dim = model_config.hidden_dim
seq_len = get_seq_le... | Gets the total number of bytes used by activations. | get_activation_bytes | python | oumi-ai/oumi | scripts/memcalc.py | https://github.com/oumi-ai/oumi/blob/master/scripts/memcalc.py | Apache-2.0 |
def main(args):
"""Runs the DataLoader benchmark in distributed mode."""
if is_distributed():
logger.info("Benchmarking in distributed mode...")
init_distributed()
else:
logger.info("Running benchmark in single-process mode.")
#
# Run benchmarks
#
all_results = []
... | Runs the DataLoader benchmark in distributed mode. | main | python | oumi-ai/oumi | scripts/benchmarks/benchmark_dataloader.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_dataloader.py | Apache-2.0 |
def _generate_config_combinations(
variable_params: dict[str, list[Any]],
) -> list[dict[str, Any]]:
"""Generates a list of configs based on a list of variable parameters."""
keys, values = zip(*variable_params.items())
configurations = [dict(zip(keys, v)) for v in itertools.product(*values)]
return... | Generates a list of configs based on a list of variable parameters. | _generate_config_combinations | python | oumi-ai/oumi | scripts/benchmarks/benchmark_dataloader.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_dataloader.py | Apache-2.0 |
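The `_generate_config_combinations` row is a compact Cartesian-product sweep builder and is fully visible apart from the trailing `return`. Reproducing it with a small usage example:

```python
import itertools


def _generate_config_combinations(variable_params):
    """Expand {param: [values]} into one config dict per value combination."""
    keys, values = zip(*variable_params.items())
    return [dict(zip(keys, v)) for v in itertools.product(*values)]


combos = _generate_config_combinations({"batch_size": [1, 2], "num_workers": [0, 4]})
```

With two values per parameter this yields 2 x 2 = 4 configurations, in `itertools.product` order (last key varies fastest).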
def _load_dataset(dataset_fn, *args, **kwargs) -> tuple[float, Any]:
"""Measures the time taken to initialize a dataset using the given dataset function.
Parameters:
dataset_fn (callable): The function used to create the dataset.
*args: Variable length argument list to be passed to the dataset ... | Measures the time taken to initialize a dataset using the given dataset function.
Parameters:
dataset_fn (callable): The function used to create the dataset.
*args: Variable length argument list to be passed to the dataset function.
**kwargs: Arbitrary keyword arguments to be passed to the ... | _load_dataset | python | oumi-ai/oumi | scripts/benchmarks/benchmark_dataloader.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_dataloader.py | Apache-2.0 |
def _benchmark_dataloader_epoch(
dataset,
batch_size: int = 1,
num_dataloader_workers: int = 0,
pin_memory: bool = False,
model_fwd_bwd_ms: float = 0.0,
use_distributed_sampler: bool = False,
max_steps: Optional[int] = None,
**kwargs,
) -> dict[str, Any]:
"""Measures the time taken t... | Measures the time taken to iterate over a DataLoader for one epoch. | _benchmark_dataloader_epoch | python | oumi-ai/oumi | scripts/benchmarks/benchmark_dataloader.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_dataloader.py | Apache-2.0 |
def benchmark_barrier(device_info, num_iterations: int = 1):
"""Benchmarks the time taken to execute a barrier operation."""
start_time = time.perf_counter()
sleep_time_seconds = 1
for _ in range(num_iterations):
barrier()
time.sleep(sleep_time_seconds)
end_time = time.perf_counter()... | Benchmarks the time taken to execute a barrier operation. | benchmark_barrier | python | oumi-ai/oumi | scripts/benchmarks/benchmark_nccl.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_nccl.py | Apache-2.0 |
def benchmark_ddp(device_info, num_iterations: int = 1):
"""Benchmarks the time taken to execute a forward and backward pass with DDP."""
rank = device_info.local_rank
device = f"cuda:{rank}"
model = MLPEncoder().to(device)
ddp_model = DDP(model, device_ids=[rank])
optimizer = torch.optim.SGD(... | Benchmarks the time taken to execute a forward and backward pass with DDP. | benchmark_ddp | python | oumi-ai/oumi | scripts/benchmarks/benchmark_nccl.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_nccl.py | Apache-2.0 |
def benchmark_all_reduce(device_info, num_iterations: int = 1, tensor_size=1000000):
"""Benchmarks the time taken to execute an all-reduce operation."""
tensor = torch.randn(tensor_size).to(device_info.local_rank)
start_time = time.time()
for _ in range(num_iterations):
dist.all_reduce(tensor, ... | Benchmarks the time taken to execute an all-reduce operation. | benchmark_all_reduce | python | oumi-ai/oumi | scripts/benchmarks/benchmark_nccl.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_nccl.py | Apache-2.0 |
def main(args):
"""Main function for the benchmark script."""
if is_distributed():
logger.info("Benchmarking in distributed mode...")
init_distributed()
else:
logger.info("Running benchmark in single-process mode.")
#
# Run tests
#
device_info = get_device_rank_info(... | Main function for the benchmark script. | main | python | oumi-ai/oumi | scripts/benchmarks/benchmark_nccl.py | https://github.com/oumi-ai/oumi/blob/master/scripts/benchmarks/benchmark_nccl.py | Apache-2.0 |
def _load_sft_dataset(
dataset_name: str,
*,
dataset_path: Optional[str],
dataset_subset: Optional[str],
dataset_split: Optional[str],
tokenizer: Optional[BaseTokenizer] = None,
processor_name: Optional[str] = None,
trust_remote_code: bool = False,
dataset_kwargs: Optional[dict[str, ... | Loads a custom SFT dataset with the specified name and subset. | _load_sft_dataset | python | oumi-ai/oumi | scripts/datasets/save_conversations.py | https://github.com/oumi-ai/oumi/blob/master/scripts/datasets/save_conversations.py | Apache-2.0 |
def parse_cli() -> tuple[ParsedArgs, list[str]]:
"""Parses command line arguments and returns the configuration filename."""
parser = argparse.ArgumentParser()
parser.add_argument(
"-c",
"--config",
default=None,
help="Path to the configuration file",
)
parser.add_arg... | Parses command line arguments and returns the configuration filename. | parse_cli | python | oumi-ai/oumi | scripts/datasets/pretokenize/process_dataset.py | https://github.com/oumi-ai/oumi/blob/master/scripts/datasets/pretokenize/process_dataset.py | Apache-2.0 |
def save_conversations_for_dataset(
output_path: str, dataset_name="yahma/alpaca-cleaned", num_samples: int = 10
):
"""Save the conversations to a file.
Args:
output_path (str): The path to save the conversations.
dataset_name (str): The name of the dataset to use.
num_samples (int)... | Save the conversations to a file.
Args:
output_path (str): The path to save the conversations.
dataset_name (str): The name of the dataset to use.
num_samples (int): The number of samples to save.
| save_conversations_for_dataset | python | oumi-ai/oumi | scripts/examples/batch_inference/bulk_infer.py | https://github.com/oumi-ai/oumi/blob/master/scripts/examples/batch_inference/bulk_infer.py | Apache-2.0 |
def compare_predictions(prediction_files: list[str], idx: int):
"""Compares the predictions from different files at a given index.
Args:
prediction_files (list[str]): The paths to the prediction files.
idx (int): The index of the prediction to compare.
"""
for target_file in prediction_... | Compares the predictions from different files at a given index.
Args:
prediction_files (list[str]): The paths to the prediction files.
idx (int): The index of the prediction to compare.
| compare_predictions | python | oumi-ai/oumi | scripts/examples/batch_inference/bulk_infer.py | https://github.com/oumi-ai/oumi/blob/master/scripts/examples/batch_inference/bulk_infer.py | Apache-2.0 |