| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(self, x, timesteps):
"""
Apply the model to an input batch.
:param x: an [N x C x ...] Tensor of inputs.
:param timesteps: a 1-D batch of timesteps.
:return: an [N x K] Tensor of outputs.
"""
emb = self.time_embed(timestep_embedding(timesteps, self.mod... |
Apply the model to an input batch.
:param x: an [N x C x ...] Tensor of inputs.
:param timesteps: a 1-D batch of timesteps.
:return: an [N x K] Tensor of outputs.
| forward | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/openaimodel.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/openaimodel.py | Apache-2.0 |
def mixed_checkpoint(func, inputs: dict, params, flag):
"""
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass. This differs from the original checkpoint function
borrowed from https://github.com/openai/guided-di... |
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass. This differs from the original checkpoint function
borrowed from https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_di... | mixed_checkpoint | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/util.py | Apache-2.0 |
def checkpoint(func, inputs, params, flag):
"""
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass.
:param func: the function to evaluate.
:param inputs: the argument sequence to pass to `func`.
:param pa... |
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass.
:param func: the function to evaluate.
:param inputs: the argument sequence to pass to `func`.
:param params: a sequence of parameters `func` depends on bu... | checkpoint | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/util.py | Apache-2.0 |
def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False, dtype=torch.float32):
"""
Create sinusoidal timestep embeddings.
:param timesteps: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param dim: the dimension of the output.
:pa... |
Create sinusoidal timestep embeddings.
:param timesteps: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param dim: the dimension of the output.
:param max_period: controls the minimum frequency of the embeddings.
:return: an [N x dim] Tensor of pos... | timestep_embedding | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/util.py | Apache-2.0 |
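The sinusoidal scheme described above can be sketched in plain Python (no torch). The half-cosine/half-sine layout and the `max_period` frequency scaling follow the docstring; zero-padding for odd `dim` is an assumption on our part:

```python
import math

def sinusoidal_embedding(t, dim, max_period=10000):
    """Plain-Python sketch of a sinusoidal timestep embedding."""
    half = dim // 2
    # geometric frequency ladder from 1 down to roughly 1/max_period
    freqs = [math.exp(-math.log(max_period) * i / half) for i in range(half)]
    emb = [math.cos(t * f) for f in freqs] + [math.sin(t * f) for f in freqs]
    if dim % 2:  # zero-pad when dim is odd (assumed convention)
        emb.append(0.0)
    return emb
```

At `t = 0` the cosine half is all ones and the sine half all zeros, which makes the layout easy to verify by eye.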
def scale_module(module, scale):
"""
Scale the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().mul_(scale)
return module |
Scale the parameters of a module and return it.
| scale_module | python | THUDM/CogVideo | sat/sgm/modules/diffusionmodules/util.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/diffusionmodules/util.py | Apache-2.0 |
def normal_kl(mean1, logvar1, mean2, logvar2):
"""
source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
Compute the KL divergence between two gaussians.
Shapes are automatically broadcasted, so batches can be compared to
scal... |
source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
Compute the KL divergence between two gaussians.
Shapes are automatically broadcasted, so batches can be compared to
scalars, among other use cases.
| normal_kl | python | THUDM/CogVideo | sat/sgm/modules/distributions/distributions.py | https://github.com/THUDM/CogVideo/blob/master/sat/sgm/modules/distributions/distributions.py | Apache-2.0 |
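The closed form this function computes is the KL divergence between two Gaussians parameterized by mean and log-variance. A scalar sketch using `math` (the original broadcasts tensors, which this deliberately omits):

```python
import math

def normal_kl(mean1, logvar1, mean2, logvar2):
    """KL(p || q) between two univariate Gaussians given means and log-variances."""
    return 0.5 * (
        -1.0
        + logvar2
        - logvar1
        + math.exp(logvar1 - logvar2)
        + (mean1 - mean2) ** 2 * math.exp(-logvar2)
    )
```

Identical distributions give exactly zero, and shifting one mean by 1 with unit variances gives 0.5, both handy sanity checks.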
def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
"""
Returns fp32 state_dict reconstructed from ds checkpoint
Args:
- ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
"""
print(f"Processing zero check... |
Returns fp32 state_dict reconstructed from ds checkpoint
Args:
- ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
| _get_fp32_state_dict_from_zero_checkpoint | python | THUDM/CogVideo | tools/convert_weight_deepspeed2hf.py | https://github.com/THUDM/CogVideo/blob/master/tools/convert_weight_deepspeed2hf.py | Apache-2.0 |
def contiguous(self):
"""
Merge partitioned weights from flat_groups into a single tensor.
"""
end_idx = self.offset + self.partitioned_numel
world_size = len(self.flat_groups)
pad_flat_param_chunks = []
for rank_i in range(world_size):
# for each ran... |
Merge partitioned weights from flat_groups into a single tensor.
| contiguous | python | THUDM/CogVideo | tools/convert_weight_deepspeed2hf.py | https://github.com/THUDM/CogVideo/blob/master/tools/convert_weight_deepspeed2hf.py | Apache-2.0 |
def to_torch_tensor(state_dict, return_empty_tensor=False):
"""
Convert state_dict of GatheredTensor to torch tensor
"""
torch_state_dict = {}
converted_tensors = {}
for name, tensor in state_dict.items():
tensor_id = id(tensor)
if tensor_id in converted_tensors: # shared tensor... |
Convert state_dict of GatheredTensor to torch tensor
| to_torch_tensor | python | THUDM/CogVideo | tools/convert_weight_deepspeed2hf.py | https://github.com/THUDM/CogVideo/blob/master/tools/convert_weight_deepspeed2hf.py | Apache-2.0 |
def convert_zero_checkpoint_to_fp32_state_dict(
checkpoint_dir,
output_dir,
max_shard_size="5GB",
safe_serialization=False,
tag=None,
exclude_frozen_parameters=False,
):
"""
Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
loaded with ``t... |
Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
Args:
- ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, lik... | convert_zero_checkpoint_to_fp32_state_dict | python | THUDM/CogVideo | tools/convert_weight_deepspeed2hf.py | https://github.com/THUDM/CogVideo/blob/master/tools/convert_weight_deepspeed2hf.py | Apache-2.0 |
def setup(self) -> None:
"""Load the model into memory to make running multiple predictions efficient"""
if not os.path.exists(MODEL_CACHE):
download_weights(MODEL_URL, MODEL_CACHE)
# model_id: THUDM/CogVideoX-5b-I2V
self.pipe = CogVideoXImageToVideoPipeline.from_pretrained... | Load the model into memory to make running multiple predictions efficient | setup | python | THUDM/CogVideo | tools/replicate/predict_i2v.py | https://github.com/THUDM/CogVideo/blob/master/tools/replicate/predict_i2v.py | Apache-2.0 |
def predict(
self,
prompt: str = Input(description="Input prompt", default="Starry sky slowly rotating."),
image: Path = Input(description="Input image"),
num_inference_steps: int = Input(
description="Number of denoising steps", ge=1, le=500, default=50
),
gu... | Run a single prediction on the model | predict | python | THUDM/CogVideo | tools/replicate/predict_i2v.py | https://github.com/THUDM/CogVideo/blob/master/tools/replicate/predict_i2v.py | Apache-2.0 |
def profile() -> None:
"""
Prints top N methods, sorted by time.
Equivalent to:
python -m cProfile -o data/profile.txt main.py -n 100
Options:
time, cumulative, line, name, nfl, calls
-----------
ncalls - for the number of calls.
time/tottime - for the total time spent in th... |
Prints top N methods, sorted by time.
Equivalent to:
python -m cProfile -o data/profile.txt main.py -n 100
Options:
time, cumulative, line, name, nfl, calls
-----------
ncalls - for the number of calls.
time/tottime - for the total time spent in the given function
(and excl... | profile | python | TylerYep/torchinfo | profiler.py | https://github.com/TylerYep/torchinfo/blob/master/profiler.py | MIT |
def pytest_addoption(parser: pytest.Parser) -> None:
"""This allows us to check for these params in sys.argv."""
parser.addoption("--overwrite", action="store_true", default=False)
parser.addoption("--no-output", action="store_true", default=False) | This allows us to check for these params in sys.argv. | pytest_addoption | python | TylerYep/torchinfo | tests/conftest.py | https://github.com/TylerYep/torchinfo/blob/master/tests/conftest.py | MIT |
def verify_output(capsys: pytest.CaptureFixture[str], filename: str) -> None:
"""
Utility function to ensure output matches file.
If you are writing new tests, set overwrite_file=True to generate the
new test_output file.
"""
captured, _ = capsys.readouterr()
filepath = Path(filename)
if... |
Utility function to ensure output matches file.
If you are writing new tests, set overwrite_file=True to generate the
new test_output file.
| verify_output | python | TylerYep/torchinfo | tests/conftest.py | https://github.com/TylerYep/torchinfo/blob/master/tests/conftest.py | MIT |
def assert_sum_column_totals_match(output: str, category: ColumnSettings) -> None:
"""Asserts that column totals match the total from the table summary."""
lines = output.replace("=", "").split("\n\n")
header_row = lines[0].strip()
offset = header_row.find(HEADER_TITLES[category])
if offset == -1:
... | Asserts that column totals match the total from the table summary. | assert_sum_column_totals_match | python | TylerYep/torchinfo | tests/conftest.py | https://github.com/TylerYep/torchinfo/blob/master/tests/conftest.py | MIT |
def test_edgecase_input_output_model() -> None:
"""
Test the following two if-clauses
from LayerInfo.calculate_size.extract_tensor: 3
(starts counting from 1) as well as the final return.
"""
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = EdgecaseInputOutputMod... |
Test the following two if-clauses
from LayerInfo.calculate_size.extract_tensor: 3
(starts counting from 1) as well as the final return.
| test_edgecase_input_output_model | python | TylerYep/torchinfo | tests/torchinfo_test.py | https://github.com/TylerYep/torchinfo/blob/master/tests/torchinfo_test.py | MIT |
def set_layer_name_width(
self, summary_list: list[LayerInfo], align_val: int = 5
) -> None:
"""
Set layer name width by taking the longest line length and rounding up to
the nearest multiple of align_val.
"""
max_length = 0
for info in summary_list:
... |
Set layer name width by taking the longest line length and rounding up to
the nearest multiple of align_val.
| set_layer_name_width | python | TylerYep/torchinfo | torchinfo/formatting.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/formatting.py | MIT |
def format_row(self, layer_name: str, row_values: dict[ColumnSettings, str]) -> str:
"""Get the string representation of a single layer of the model."""
info_to_use = [row_values.get(row_type, "") for row_type in self.col_names]
new_line = f"{layer_name:<{self.layer_name_width}} "
for in... | Get the string representation of a single layer of the model. | format_row | python | TylerYep/torchinfo | torchinfo/formatting.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/formatting.py | MIT |
def layer_info_to_row(
self, layer_info: LayerInfo, reached_max_depth: bool, total_params: int
) -> str:
"""Convert layer_info to string representation of a row."""
values_for_row = {
ColumnSettings.KERNEL_SIZE: self.str_(layer_info.kernel_size),
ColumnSettings.GROUPS... | Convert layer_info to string representation of a row. | layer_info_to_row | python | TylerYep/torchinfo | torchinfo/formatting.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/formatting.py | MIT |
def layers_to_str(self, summary_list: list[LayerInfo], total_params: int) -> str:
"""
Print each layer of the model using only current layer info.
Container modules are already dealt with in add_missing_container_layers.
"""
new_str = ""
for layer_info in summary_list:
... |
Print each layer of the model using only current layer info.
Container modules are already dealt with in add_missing_container_layers.
| layers_to_str | python | TylerYep/torchinfo | torchinfo/formatting.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/formatting.py | MIT |
def trainable(self) -> str:
"""
Checks if the module is trainable. Returns:
"True", if all the parameters are trainable (`requires_grad=True`)
"False" if none of the parameters are trainable.
"Partial" if some weights are trainable, but not all.
"--" if no... |
Checks if the module is trainable. Returns:
"True", if all the parameters are trainable (`requires_grad=True`)
"False" if none of the parameters are trainable.
"Partial" if some weights are trainable, but not all.
"--" if no module has no parameters, like Dropout... | trainable | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def calculate_size(
inputs: DETECTED_INPUT_OUTPUT_TYPES | None, batch_dim: int | None
) -> tuple[list[int], int]:
"""
Set input_size or output_size using the model's inputs.
Returns the corrected shape of `inputs` and the size of
a single element in bytes.
"""
... |
Set input_size or output_size using the model's inputs.
Returns the corrected shape of `inputs` and the size of
a single element in bytes.
| calculate_size | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def get_param_count(
module: nn.Module, name: str, param: torch.Tensor
) -> tuple[int, str]:
"""
    Get the number of params, accounting for a pruning mask.
Masked models save parameters with the suffix "_orig" added.
They have a buffer ending with "_mask" which has only 0s and 1s.
... |
    Get the number of params, accounting for a pruning mask.
Masked models save parameters with the suffix "_orig" added.
They have a buffer ending with "_mask" which has only 0s and 1s.
If a mask exists, the sum of 1s in mask is number of params.
| get_param_count | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def calculate_macs(self) -> None:
"""
Set MACs using the module's parameters and layer's output size, which is
used for computing number of operations for Conv layers.
Please note: Returned MACs is the number of MACs for the full tensor,
i.e., taking the batch-dimension into acc... |
Set MACs using the module's parameters and layer's output size, which is
used for computing number of operations for Conv layers.
Please note: Returned MACs is the number of MACs for the full tensor,
i.e., taking the batch-dimension into account.
| calculate_macs | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def check_recursive(self, layer_ids: set[int]) -> None:
"""
        If the current module has already been used, mark it as (recursive).
Must check before adding line to the summary.
"""
if self.layer_id in layer_ids:
self.is_recursive = True |
        If the current module has already been used, mark it as (recursive).
Must check before adding line to the summary.
| check_recursive | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def leftover_params(self) -> int:
"""
Leftover params are the number of params this current layer has that are not
included in the child num_param counts.
"""
return self.num_params - sum(
child.num_params if child.is_leaf_layer else child.leftover_params()
... |
Leftover params are the number of params this current layer has that are not
included in the child num_param counts.
| leftover_params | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def rgetattr(module: nn.Module, attr: str) -> torch.Tensor | None:
"""Get the tensor submodule called attr from module."""
for attr_i in attr.split("."):
if not hasattr(module, attr_i):
return None
module = getattr(module, attr_i)
assert isinstance(module, torch.Tensor) # type: ... | Get the tensor submodule called attr from module. | rgetattr | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def get_children_layers(summary_list: list[LayerInfo], index: int) -> list[LayerInfo]:
"""Fetches all of the children of a given layer."""
num_children = 0
for layer in summary_list[index + 1 :]:
if layer.depth <= summary_list[index].depth:
break
num_children += 1
return summ... | Fetches all of the children of a given layer. | get_children_layers | python | TylerYep/torchinfo | torchinfo/layer_info.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/layer_info.py | MIT |
def to_readable(num: float, units: Units = Units.AUTO) -> tuple[Units, float]:
"""Converts a number to millions, billions, or trillions."""
if units == Units.AUTO:
if num >= 1e12:
return Units.TERABYTES, num / 1e12
if num >= 1e9:
return Units.GIGAB... | Converts a number to millions, billions, or trillions. | to_readable | python | TylerYep/torchinfo | torchinfo/model_statistics.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/model_statistics.py | MIT |
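The truncated branch presumably continues down through gigabytes and megabytes. A self-contained sketch of the auto-scaling idea (this `Units` enum is illustrative, not torchinfo's actual definition):

```python
from enum import Enum

class Units(Enum):
    AUTO = "auto"
    TERABYTES = "T"
    GIGABYTES = "G"
    MEGABYTES = "M"
    NONE = ""

def to_readable(num, units=Units.AUTO):
    """Scale a raw count to T/G/M units, auto-selecting the largest that fits."""
    if units == Units.AUTO:
        if num >= 1e12:
            return Units.TERABYTES, num / 1e12
        if num >= 1e9:
            return Units.GIGABYTES, num / 1e9
        if num >= 1e6:
            return Units.MEGABYTES, num / 1e6
        return Units.NONE, num
    divisor = {Units.TERABYTES: 1e12, Units.GIGABYTES: 1e9,
               Units.MEGABYTES: 1e6, Units.NONE: 1}[units]
    return units, num / divisor
```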
def process_input(
input_data: INPUT_DATA_TYPE | None,
input_size: INPUT_SIZE_TYPE | None,
batch_dim: int | None,
device: torch.device | None,
dtypes: list[torch.dtype] | None = None,
) -> tuple[CORRECTED_INPUT_DATA_TYPE, Any]:
"""Reads sample input data to get the input size."""
x = None
... | Reads sample input data to get the input size. | process_input | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def forward_pass(
model: nn.Module,
x: CORRECTED_INPUT_DATA_TYPE,
batch_dim: int | None,
cache_forward_pass: bool,
device: torch.device | None,
mode: Mode,
**kwargs: Any,
) -> list[LayerInfo]:
"""Perform a forward pass on the model using forward hooks."""
global _cached_forward_pass
... | Perform a forward pass on the model using forward hooks. | forward_pass | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def set_children_layers(summary_list: list[LayerInfo]) -> None:
"""Populates the children and depth_index fields of all LayerInfo."""
idx: dict[int, int] = {}
for i, layer in enumerate(summary_list):
idx[layer.depth] = idx.get(layer.depth, 0) + 1
layer.depth_index = idx[layer.depth]
... | Populates the children and depth_index fields of all LayerInfo. | set_children_layers | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
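The per-depth running counter in the loop above can be isolated as a tiny sketch (the function name is ours):

```python
def depth_indices(depths):
    """Assign a 1-based running index per depth level, in traversal order."""
    seen = {}
    out = []
    for d in depths:
        seen[d] = seen.get(d, 0) + 1
        out.append(seen[d])
    return out
```

Two layers at the same depth get indices 1 and 2 regardless of what sits between them, which is exactly what `depth_index` encodes.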
def add_missing_container_layers(summary_list: list[LayerInfo]) -> None:
"""Finds container modules not in the currently listed hierarchy."""
layer_ids = {layer.layer_id for layer in summary_list}
current_hierarchy: dict[int, LayerInfo] = {}
for idx, layer_info in enumerate(summary_list):
# to k... | Finds container modules not in the currently listed hierarchy. | add_missing_container_layers | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def validate_user_params(
input_data: INPUT_DATA_TYPE | None,
input_size: INPUT_SIZE_TYPE | None,
col_names: tuple[ColumnSettings, ...],
col_width: int,
device: torch.device | None,
dtypes: list[torch.dtype] | None,
verbose: int,
) -> None:
"""Raise exceptions if the user's input is inva... | Raise exceptions if the user's input is invalid. | validate_user_params | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def traverse_input_data(
data: Any, action_fn: Callable[..., Any], aggregate_fn: Callable[..., Any]
) -> Any:
"""
Traverses any type of nested input data. On a tensor, returns the action given by
action_fn, and afterwards aggregates the results using aggregate_fn.
"""
if isinstance(data, torch.T... |
Traverses any type of nested input data. On a tensor, returns the action given by
action_fn, and afterwards aggregates the results using aggregate_fn.
| traverse_input_data | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
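A torch-free sketch of the same recursion, with an `is_leaf` predicate standing in for the `isinstance(data, torch.Tensor)` check the original performs:

```python
def traverse(data, action_fn, aggregate_fn, is_leaf):
    """Apply action_fn at leaves; aggregate_fn wraps each rebuilt container."""
    if is_leaf(data):
        return action_fn(data)
    if isinstance(data, dict):
        return aggregate_fn(
            {k: traverse(v, action_fn, aggregate_fn, is_leaf) for k, v in data.items()}
        )
    if isinstance(data, (list, tuple)):
        return aggregate_fn(
            [traverse(v, action_fn, aggregate_fn, is_leaf) for v in data]
        )
    return data  # unrecognized leaf types pass through unchanged
```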
def set_device(data: Any, device: torch.device | None) -> Any:
"""Sets device for all input types and collections of input types."""
return (
data
if device is None
else traverse_input_data(
data,
action_fn=lambda data: data.to(device, non_blocking=True),
... | Sets device for all input types and collections of input types. | set_device | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def get_device(
model: nn.Module, input_data: INPUT_DATA_TYPE | None
) -> torch.device | None:
"""
If input_data is given, the device should not be changed
(to allow for multi-device models, etc.)
Otherwise gets device of first parameter of model and returns it if it is on cuda,
otherwise retur... |
If input_data is given, the device should not be changed
(to allow for multi-device models, etc.)
Otherwise gets device of first parameter of model and returns it if it is on cuda,
otherwise returns cuda if available or cpu if not.
| get_device | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def get_input_data_sizes(data: Any) -> Any:
"""
Converts input data to an equivalent data structure of torch.Sizes
instead of tensors.
"""
return traverse_input_data(
data, action_fn=lambda data: data.size(), aggregate_fn=type
) |
Converts input data to an equivalent data structure of torch.Sizes
instead of tensors.
| get_input_data_sizes | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def get_total_memory_used(data: CORRECTED_INPUT_DATA_TYPE) -> int:
"""Calculates the total memory of all tensors stored in data."""
result = traverse_input_data(
data,
action_fn=lambda data: sys.getsizeof(
data.untyped_storage()
if hasattr(data, "untyped_storage")
... | Calculates the total memory of all tensors stored in data. | get_total_memory_used | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
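The same idea applied to byte buffers instead of tensor storages keeps the traversal shape but swaps the leaf measurement; purely illustrative:

```python
import sys

def total_memory_used(data):
    """Sum sys.getsizeof over leaf byte buffers in nested containers."""
    if isinstance(data, (bytes, bytearray)):
        return sys.getsizeof(data)
    if isinstance(data, dict):
        return sum(total_memory_used(v) for v in data.values())
    if isinstance(data, (list, tuple)):
        return sum(total_memory_used(v) for v in data)
    return 0  # non-buffer leaves contribute nothing in this sketch
```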
def get_input_tensor(
input_size: CORRECTED_INPUT_SIZE_TYPE,
batch_dim: int | None,
dtypes: list[torch.dtype],
device: torch.device,
) -> list[torch.Tensor]:
"""Get input_tensor with batch size 1 for use in model.forward()"""
x = []
for size, dtype in zip(input_size, dtypes):
input_t... | Get input_tensor with batch size 1 for use in model.forward() | get_input_tensor | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def get_correct_input_sizes(input_size: INPUT_SIZE_TYPE) -> CORRECTED_INPUT_SIZE_TYPE:
"""
Convert input_size to the correct form, which is a list of tuples.
Also handles multiple inputs to the network.
"""
if not isinstance(input_size, (list, tuple)):
raise TypeError(
"Input_siz... |
Convert input_size to the correct form, which is a list of tuples.
Also handles multiple inputs to the network.
| get_correct_input_sizes | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
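The normalization it describes (one bare shape or several shapes, always returned as a list of tuples) can be sketched like this; the exact validation torchinfo performs may differ:

```python
def normalize_input_sizes(input_size):
    """Return input sizes as a list of tuples, accepting one shape or many."""
    if not isinstance(input_size, (list, tuple)):
        raise TypeError("input_size must be a list/tuple of ints, or a sequence of them")
    if all(isinstance(s, int) for s in input_size):
        return [tuple(input_size)]  # a single bare shape like (3, 224, 224)
    return [tuple(s) for s in input_size]  # multiple shapes, one per model input
```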
def pre_hook(module: nn.Module, inputs: Any) -> None:
"""Create a LayerInfo object to aggregate layer information."""
del inputs
info = LayerInfo(var_name, module, curr_depth, parent_info)
info.calculate_num_params()
info.check_recursive(layer_ids)
summary_list.append(inf... | Create a LayerInfo object to aggregate layer information. | pre_hook | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def apply_hooks(
model_name: str,
module: nn.Module,
input_data: CORRECTED_INPUT_DATA_TYPE,
batch_dim: int | None,
) -> tuple[
list[LayerInfo],
dict[int, LayerInfo],
dict[int, tuple[RemovableHandle, RemovableHandle]],
]:
"""
If input_data is provided, recursively adds hooks to all la... |
If input_data is provided, recursively adds hooks to all layers of the model.
Else, fills summary_list with layer info without computing a
forward pass through the network.
| apply_hooks | python | TylerYep/torchinfo | torchinfo/torchinfo.py | https://github.com/TylerYep/torchinfo/blob/master/torchinfo/torchinfo.py | MIT |
def db_credentials(c):
"""Encode db credentials (for github actions)"""
path = str(Path("~", ".auth", "postgres-ploomber.json").expanduser())
creds = Path(path).read_text()
print(base64.b64encode(creds.encode()).decode()) | Encode db credentials (for github actions) | db_credentials | python | ploomber/ploomber | tasks.py | https://github.com/ploomber/ploomber/blob/master/tasks.py | Apache-2.0 |
def fit(product, upstream):
"""Train a model and save it (pickle format)"""
clf = DecisionTreeClassifier()
df = pd.read_csv(str(upstream["join"]))
X = df.drop("target", axis="columns")
y = df["target"]
clf.fit(X, y)
with open(str(product), "wb") as f:
pickle.dump(clf, f) | Train a model and save it (pickle format) | fit | python | ploomber/ploomber | doc/examples/InMemoryDAG.py | https://github.com/ploomber/ploomber/blob/master/doc/examples/InMemoryDAG.py | Apache-2.0 |
def serializer(df, product):
"""Save all data frames as CSVs"""
out = str(product)
# make sure the parent folder exists
Path(out).parent.mkdir(parents=True, exist_ok=True)
df.to_csv(out, index=False) | Save all data frames as CSVs | serializer | python | ploomber/ploomber | doc/examples/InMemoryDAG.py | https://github.com/ploomber/ploomber/blob/master/doc/examples/InMemoryDAG.py | Apache-2.0 |
def add_features(dag):
"""
Given a DAG, adds feature engineering tasks. The DAG must have a task "get"
that returns the input data.
"""
get_task = dag["get"]
output = Path("output")
# instantiate tasks
a_feature_task = PythonCallable(
a_feature,
File(output / "a_feature... |
Given a DAG, adds feature engineering tasks. The DAG must have a task "get"
that returns the input data.
| add_features | python | ploomber/ploomber | doc/examples/InMemoryDAG.py | https://github.com/ploomber/ploomber/blob/master/doc/examples/InMemoryDAG.py | Apache-2.0 |
def make_predict():
"""Instantiate a prediction DAG using a previously trained model"""
dag_pred = DAG()
# this special function adds a task with name "get" that will just forward
# whatever value we pass when calling .build(). You can pass a function
# in the "preprocessor" argument to perform arb... | Instantiate a prediction DAG using a previously trained model | make_predict | python | ploomber/ploomber | doc/examples/InMemoryDAG.py | https://github.com/ploomber/ploomber/blob/master/doc/examples/InMemoryDAG.py | Apache-2.0 |
def diff_strings(a, b):
"""Compute the diff between two strings"""
d = Differ()
if a is None and b is None:
return "[Both a and b are None]"
out = ""
if a is None:
out += "[a is None]\n"
elif b is None:
out += "[a is None]\n"
a = "" if a is None else a
b = "" ... | Compute the diff between two strings | diff_strings | python | ploomber/ploomber | src/ploomber/codediffer.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/codediffer.py | Apache-2.0 |
def is_different(self, a, b, a_params, b_params, extension=None):
"""
Compares code and params to determine if it's changed. Ignores top-keys
        in a_params or b_params if they're not JSON serializable.
Parameters
----------
a : str
Code to compare
b : s... |
Compares code and params to determine if it's changed. Ignores top-keys
        in a_params or b_params if they're not JSON serializable.
Parameters
----------
a : str
Code to compare
b : str
Code to compare
a_params : dict
Params pa... | is_different | python | ploomber/ploomber | src/ploomber/codediffer.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/codediffer.py | Apache-2.0 |
def _get_normalizer(self, extension):
"""Get the normalizer function for a given extension"""
if extension in self.NORMALIZERS:
return self.NORMALIZERS[extension]
else:
return normalize_null | Get the normalizer function for a given extension | _get_normalizer | python | ploomber/ploomber | src/ploomber/codediffer.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/codediffer.py | Apache-2.0 |
def find_entry_point_type(entry_point):
"""
    Step 1: If no ENTRY_POINT is defined and no value is passed, a default
value is used (pipeline.yaml for CLI, recursive lookup for Jupyter client).
If ENTRY_POINT is defined, this simply overrides the default value, but
passing a value overrides the default... |
    Step 1: If no ENTRY_POINT is defined and no value is passed, a default
value is used (pipeline.yaml for CLI, recursive lookup for Jupyter client).
If ENTRY_POINT is defined, this simply overrides the default value, but
passing a value overrides the default value. Once the value is determined.
Step... | find_entry_point_type | python | ploomber/ploomber | src/ploomber/entrypoint.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/entrypoint.py | Apache-2.0 |
def _to_str(self, name=None, file=None, writer_kwargs=None, show_summary=True):
"""
Return the string representation of the collected messages
Parameters
----------
name
Title to show at the end
file
Text stream to use. If None, uses a temporary S... |
Return the string representation of the collected messages
Parameters
----------
name
Title to show at the end
file
Text stream to use. If None, uses a temporary StringIO object
writer_kwargs
Extra keyword arguments passed to the term... | _to_str | python | ploomber/ploomber | src/ploomber/messagecollector.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/messagecollector.py | Apache-2.0 |
def _run_command(path, command):
"""Safely run command in certain path"""
if not Path(path).is_dir():
raise ValueError("{} is not a directory".format(path))
out = subprocess.check_output(
shlex.split(command), cwd=str(path), stderr=subprocess.PIPE
)
s = out.decode("utf-8")
# re... | Safely run command in certain path | _run_command | python | ploomber/ploomber | src/ploomber/repo.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/repo.py | Apache-2.0 |
def is_repo(path):
"""Check if the path is in a git repo"""
if path is None:
return False
if not shutil.which("git"):
return False
out = subprocess.run(
["git", "-C", str(path), "rev-parse"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
repo_exists ... | Check if the path is in a git repo | is_repo | python | ploomber/ploomber | src/ploomber/repo.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/repo.py | Apache-2.0 |
def data_preprocessing(self, values):
"""Create a build report from several tasks"""
# in case the pipeline has no tasks...
elapsed = values.get("Elapsed (s)", [])
total = sum(elapsed)
def compute_pct(elapsed, total):
if not elapsed:
return 0
... | Create a build report from several tasks | data_preprocessing | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
def rows2columns(rows):
"""Convert [{key: value}, {key: value2}] to [{key: [value, value2]}]"""
if not len(rows):
return {}
cols_combinations = set(tuple(sorted(row.columns)) for row in rows)
if len(cols_combinations) > 1:
raise KeyError(
"All rows should have the same colu... | Convert [{key: value}, {key: value2}] to [{key: [value, value2]}] | rows2columns | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
def wrap_table_dict(table_dict, column_width, exclude):
"""Wraps a columns to take at most column_width characters
Parameters
----------
column_width : int, 'auto' or None
Width per column. Splits evenly if 'auto', does not wrap if None
exclude : list
Exclude columns from wrapping (... | Wrap columns to take at most column_width characters each
Parameters
----------
column_width : int, 'auto' or None
Width per column. Splits evenly if 'auto', does not wrap if None
exclude : list
Exclude columns from wrapping (show them in a single line)
| wrap_table_dict | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
def separator_width(header_length, max_value_length):
"""
Calculates the width of the '---' line that separates header from content
"""
n_value_extra = header_length - max_value_length
if n_value_extra >= -2:
return header_length + 2
else:
return max_value_length |
Calculates the width of the '---' line that separates header from content
| separator_width | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
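Since `separator_width` is shown in full, a quick usage sketch makes the branch logic concrete (function copied verbatim from the row above):

```python
def separator_width(header_length, max_value_length):
    """Width of the '---' line that separates header from content."""
    n_value_extra = header_length - max_value_length
    if n_value_extra >= -2:
        # header wins: tabulate pads short content with 2 extra characters
        return header_length + 2
    else:
        return max_value_length


print(separator_width(10, 3))   # 12: header dominates
print(separator_width(4, 20))   # 20: values dominate
print(separator_width(5, 7))    # 7: boundary case (difference is exactly -2)
```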
def width_required_for_column(header, values):
"""
Space needed to display the column in a single line; accounts for the two
extra characters that the tabulate package adds to the header when the
content is too short
"""
values_max = -1 if not values else max(len(str(v)) for v in values)
return... |
Space needed to display the column in a single line; accounts for the two
extra characters that the tabulate package adds to the header when the
content is too short
| width_required_for_column | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
def calculate_wrapping(table_dict, do_not_wrap, width_total):
"""
Determines the column width by keeping some columns unwrapped (show all
rows, including the header in a single line) and distributing the
remaining space evenly. Accounts for the between-column spacing.
"""
# space required to disp... |
Determines the column width by keeping some columns unwrapped (show all
rows, including the header in a single line) and distributing the
remaining space evenly. Accounts for the between-column spacing.
| calculate_wrapping | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
def equal_column_width(n_cols, width_total):
"""
Max column width if splitting width_total equally among n_cols. Note
that before computing the column width, a quantity is subtracted to account
for the spacing required between columns
"""
if not n_cols:
raise ValueError("n_cols must be >0")... |
Max column width if splitting width_total equally among n_cols. Note
that before computing the column width, a quantity is subtracted to account
for the spacing required between columns
| equal_column_width | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
def apply_wrapping(table_dict, wrapper, exclude=None):
"""
Wrap text using a wrapper, excluding columns in exclude
"""
exclude = exclude or []
return dict(
apply_wrapping_to_column(header, values, exclude, wrapper)
for header, values in table_dict.items()
) |
Wrap text using a wrapper, excluding columns in exclude
| apply_wrapping | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
def wrap_elementwise(value, wrapper):
"""Apply wrap if str (elementwise if iterable of str)"""
if isinstance(value, Iterable) and not isinstance(value, str):
return [wrapper.fill(str(v)) for v in value]
else:
return wrapper.fill(str(value)) | Apply wrap if str (elementwise if iterable of str) | wrap_elementwise | python | ploomber/ploomber | src/ploomber/table.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/table.py | Apache-2.0 |
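`wrap_elementwise` is also shown in full; a usage sketch with `textwrap.TextWrapper`, which is the kind of wrapper the table code presumably passes in:

```python
import textwrap
from collections.abc import Iterable


def wrap_elementwise(value, wrapper):
    """Apply wrap if str (elementwise if iterable of str)."""
    if isinstance(value, Iterable) and not isinstance(value, str):
        return [wrapper.fill(str(v)) for v in value]
    else:
        return wrapper.fill(str(value))


wrapper = textwrap.TextWrapper(width=10, break_long_words=True)
print(wrap_elementwise("a" * 15, wrapper))
# aaaaaaaaaa
# aaaaa
print(wrap_elementwise(["short", "b" * 12], wrapper))
# ['short', 'bbbbbbbbbb\nbb']
```

Note the `isinstance(value, str)` guard: without it, a string would be treated as an iterable of characters and wrapped one character per element.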
def assert_no_extra_attributes_in_class(abstract_class, concrete_class, allowed=None):
"""
Ploomber makes heavy use of abstract classes to provide a uniform API for
tasks, products, metadata, etc. When defining abstract classes, the
interpreter refuses to instantiate an object where the concrete class
... |
Ploomber makes heavy use of abstract classes to provide a uniform API for
tasks, products, metadata, etc. When defining abstract classes, the
interpreter refuses to instantiate an object where the concrete class
is missing implementations of abstract methods. However, it does not complain
if the concre... | assert_no_extra_attributes_in_class | python | ploomber/ploomber | src/ploomber/_testing_utils.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/_testing_utils.py | Apache-2.0 |
def _delete_git_repo(path):
"""
If on Windows, we need to change permissions to delete the repo
"""
path_to_repo = Path(path, ".git")
if os.name == "nt" and path_to_repo.exists():
for root, dirs, files in os.walk(path_to_repo):
for dir_ in dirs:
os.chmod(Path(root,... |
If on Windows, we need to change permissions to delete the repo
| _delete_git_repo | python | ploomber/ploomber | src/ploomber/cli/examples.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/examples.py | Apache-2.0 |
def main(use_lock, create_env=None, use_venv=False):
"""
Install project, automatically detecting if it's a conda-based or pip-based
project.
Parameters
----------
use_lock : bool
If True, uses requirements.lock.txt/environment.lock.yml and
requirements.dev.lock.txt/environment.de... |
Install project, automatically detecting if it's a conda-based or pip-based
project.
Parameters
----------
use_lock : bool
If True, uses requirements.lock.txt/environment.lock.yml and
requirements.dev.lock.txt/environment.dev.lock.yml files. If False
uses regular files and cr... | main | python | ploomber/ploomber | src/ploomber/cli/install.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/install.py | Apache-2.0 |
def _get_base_prefix_compat():
"""
Find the pip virtualenv across different Python versions: returns the
base/real prefix, or sys.prefix if there is none.
"""
return (
getattr(sys, "base_prefix", None)
or sys.prefix
or getattr(sys, "real_prefix", None)
) |
Find the pip virtualenv across different Python versions: returns the
base/real prefix, or sys.prefix if there is none.
| _get_base_prefix_compat | python | ploomber/ploomber | src/ploomber/cli/install.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/install.py | Apache-2.0 |
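A common companion check, not shown in the row, compares this value against `sys.prefix` to detect a virtualenv; `in_virtualenv` below is a hypothetical name for that comparison:

```python
import sys


def get_base_prefix_compat():
    """Base/real prefix across Python versions (logic copied from the row)."""
    return (
        getattr(sys, "base_prefix", None)
        or sys.prefix
        or getattr(sys, "real_prefix", None)
    )


def in_virtualenv():
    # inside a venv, sys.prefix points at the environment while the base
    # prefix points at the interpreter the environment was created from
    return get_base_prefix_compat() != sys.prefix
```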
def main_pip(use_lock, create_env=True):
"""
Install pip-based project (uses venv), looks for requirements.txt files
Parameters
----------
use_lock : bool
If True Uses requirements.txt and requirements.dev.lock.txt file... |
Install pip-based project (uses venv), looks for requirements.txt files
Parameters
----------
use_lock : bool
If True, uses requirements.txt and requirements.dev.lock.txt files
create_env : bool
If True, it use... | main_pip | python | ploomber/ploomber | src/ploomber/cli/install.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/install.py | Apache-2.0 |
def main_conda(use_lock, create_env=True):
"""
Install conda-based project, looks for environment.yml files
Parameters
----------
use_lock : bool
If True, uses environment.lock.yml and environment.dev.lock.yml files
create_env : bool
If True, it uses the venv module to create a... |
Install conda-based project, looks for environment.yml files
Parameters
----------
use_lock : bool
If True, uses environment.lock.yml and environment.dev.lock.yml files
create_env : bool
If True, it uses the venv module to create a new virtual environment,
then installs th... | main_conda | python | ploomber/ploomber | src/ploomber/cli/install.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/install.py | Apache-2.0 |
def _is_conda():
"""
Tell whether the code is running in a conda env
"""
conda_path = Path(sys.prefix, "conda-meta")
return (
conda_path.exists()
or os.environ.get("CONDA_PREFIX", False)
or os.environ.get("CONDA_DEFAULT_ENV", False)
) |
Tell whether the code is running in a conda env
| _is_conda | python | ploomber/ploomber | src/ploomber/cli/install.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/install.py | Apache-2.0 |
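A self-contained sketch of the same detection; the only change from the row is an explicit `bool(...)` so the function returns a boolean rather than whichever truthy value short-circuited:

```python
import os
import sys
from pathlib import Path


def is_conda():
    """Tell whether we are running in a conda env (logic from the row above)."""
    conda_path = Path(sys.prefix, "conda-meta")
    # any of the three signals is enough: the conda-meta directory in the
    # prefix, or either of the environment variables conda exports
    return bool(
        conda_path.exists()
        or os.environ.get("CONDA_PREFIX", False)
        or os.environ.get("CONDA_DEFAULT_ENV", False)
    )


print(is_conda())
```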
def _locate_pip_inside_conda(env_name):
"""
Locates pip inside the conda env with a given name
"""
pip = _path_to_pip_in_env_with_name(shutil.which("conda"), env_name)
# this might happen if the environment does not contain python/pip
if not Path(pip).exists():
err = (
f"Cou... |
Locates pip inside the conda env with a given name
| _locate_pip_inside_conda | python | ploomber/ploomber | src/ploomber/cli/install.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/install.py | Apache-2.0 |
def _pip_install(cmdr, pip, lock, requirements=_REQS_TXT):
"""Install and freeze requirements
Parameters
----------
cmdr
Commander instance
pip
Path to pip binary
lock
If true, locks dependencies and stores them in a requirements.lock.txt
"""
cmdr.run(
... | Install and freeze requirements
Parameters
----------
cmdr
Commander instance
pip
Path to pip binary
lock
If true, locks dependencies and stores them in a requirements.lock.txt
| _pip_install | python | ploomber/ploomber | src/ploomber/cli/install.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/install.py | Apache-2.0 |
def cli_endpoint(fn):
"""
Decorator for command line endpoints that execute dags or tasks. It runs
the decorated function, captures exception (if any), sends a colored
traceback to standard error and exits with code 1.
Notes
-----
This will hide the traceback when raising subclasses of
... |
Decorator for command line endpoints that execute dags or tasks. It runs
the decorated function, captures exception (if any), sends a colored
traceback to standard error and exits with code 1.
Notes
-----
This will hide the traceback when raising subclasses of
ploomber.exceptions.BaseExcept... | cli_endpoint | python | ploomber/ploomber | src/ploomber/cli/io.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/io.py | Apache-2.0
def command_endpoint(fn):
"""
Decorator for command line endpoints that only parse dags or tasks but do
not execute them. If it fails, it prints an error message to stderr, then
exits with code 1.
"""
@wraps(fn)
def wrapper(**kwargs):
try:
fn(**kwargs)
# echo... |
Decorator for command line endpoints that only parse dags or tasks but do
not execute them. If it fails, it prints an error message to stderr, then
exits with code 1.
| command_endpoint | python | ploomber/ploomber | src/ploomber/cli/io.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/io.py | Apache-2.0 |
def _call_in_source(dag, method_name, message, kwargs=None, verbose=True):
"""
Execute method on each task.source in dag, passing kwargs
"""
kwargs = kwargs or {}
files = []
results = []
for task in dag.values():
ok_to_inject_task = True
if "priority" in kwargs:
o... |
Execute method on each task.source in dag, passing kwargs
| _call_in_source | python | ploomber/ploomber | src/ploomber/cli/nb.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/nb.py | Apache-2.0 |
def _install_hook(path_to_hook, content, entry_point):
"""
Install a git hook script at the given path
"""
if path_to_hook.exists():
raise RuntimeError(
"hook already exists "
f'at {path_to_hook}. Run: "ploomber nb -u" to uninstall the '
"existing hook and try... |
Install a git hook script at the given path
| _install_hook | python | ploomber/ploomber | src/ploomber/cli/nb.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/nb.py | Apache-2.0 |
def _delete_hook(path):
"""Delete a git hook at the given path"""
if path.exists():
if path.is_file():
path.unlink()
else:
# in the remote case that it's a directory
shutil.rmtree(path)
click.echo(f"Deleted hook located at {path}") | Delete a git hook at the given path | _delete_hook | python | ploomber/ploomber | src/ploomber/cli/nb.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/nb.py | Apache-2.0 |
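A usage sketch of the same logic against a temporary directory (the `click.echo` call is omitted so the snippet has no third-party dependency):

```python
import shutil
import tempfile
from pathlib import Path


def delete_hook(path):
    """Delete a git hook at the given path (mirrors the row above)."""
    if path.exists():
        if path.is_file():
            path.unlink()
        else:
            # in the remote case that it's a directory
            shutil.rmtree(path)


tmp = Path(tempfile.mkdtemp())
hook = tmp / "pre-push"
hook.write_text("#!/bin/sh\nexit 0\n")
delete_hook(hook)
print(hook.exists())  # False
shutil.rmtree(tmp)
```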
def _py_with_single_click_enable():
"""
Writes ~/.jupyterlab/labconfig/default_setting_overrides.json to enable
opening .py files as notebooks with a single click. If the section already
exists, it overrides its value
"""
parent = Path("~/.jupyter", "labconfig").expanduser()
path = parent / "... |
Writes ~/.jupyterlab/labconfig/default_setting_overrides.json to enable
opening .py files as notebooks with a single click. If the section already
exists, it overrides its value
| _py_with_single_click_enable | python | ploomber/ploomber | src/ploomber/cli/nb.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/nb.py | Apache-2.0 |
def _py_with_single_click_disable():
"""
Opens ~/.jupyterlab/labconfig/default_setting_overrides.json and deletes
the value in
['@jupyterlab/docmanager-extension:plugin']['defaultViewers'], if any
"""
parent = Path("~/.jupyter", "labconfig")
target = (parent / "default_setting_overrides.jso... |
Opens ~/.jupyterlab/labconfig/default_setting_overrides.json and deletes
the value in
['@jupyterlab/docmanager-extension:plugin']['defaultViewers'], if any
| _py_with_single_click_disable | python | ploomber/ploomber | src/ploomber/cli/nb.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/nb.py | Apache-2.0 |
def parse_entry_point_value(self):
"""
Returns the entry_point value parsed without calling parse_args();
this is required to find env params to show. If we called parse_args(),
the CLI would stop there and show the available params
"""
index = None
try:
index = ... |
Returns the entry_point value parsed without calling parse_args();
this is required to find env params to show. If we called parse_args(),
the CLI would stop there and show the available params
| parse_entry_point_value | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def add_argument(self, *args, **kwargs):
"""
Add a CLI argument. If called after the context manager, it is
considered part of the dynamic API, if called within the context
manager, the arg is considered part of the static API. If it's
called outside a context manager, and no sta... |
Add a CLI argument. If called after the context manager, it is
considered part of the dynamic API, if called within the context
manager, the arg is considered part of the static API. If it's
called outside a context manager, and no static API has been set,
it raises an error
... | add_argument | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def add_mutually_exclusive_group(self, **kwargs):
"""
Add a mutually exclusive group. It returns a custom class that
correctly stores the arguments in the static or dynamic API
"""
group = CustomMutuallyExclusiveGroup(self, **kwargs)
self._mutually_exclusive_groups.append... |
Add a mutually exclusive group. It returns a custom class that
correctly stores the arguments in the static or dynamic API
| add_mutually_exclusive_group | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def process_factory_dotted_path(self, dotted_path):
"""Parse a factory entry point, returns initialized dag and parsed args"""
entry = load_dotted_path(str(dotted_path), raise_=True)
# add args using the function's signature
required, _ = _add_args_from_callable(self, entry)
# ... | Parse a factory entry point, returns initialized dag and parsed args | process_factory_dotted_path | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def load_from_entry_point_arg(self):
"""
Parses an entry point, adding arguments by extracting them from
the env.
Returns a dag and the parsed args
"""
entry_point = EntryPoint(self.parse_entry_point_value())
dag, args = load_dag_from_entry_point_and_parser(entry... |
Parses an entry point, adding arguments by extracting them from
the env.
Returns a dag and the parsed args
| load_from_entry_point_arg | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def _parse_doc(callable_):
"""
Convert numpydoc docstring to a list of dictionaries
"""
doc = callable_.__doc__
# no docstring
if doc is None:
return {"params": {}, "summary": None}
# try to import numpydoc; if we can't find it, just return the first line
try:
docscrape =... |
Convert numpydoc docstring to a list of dictionaries
| _parse_doc | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def _env_keys_to_override(args, static_args):
"""
Returns a dictionary with all extra cli parameters passed; all these must
be parameters that are part of the env, or params (with no defaults) if
the entry point is a factory function
"""
return {
name: getattr(args, name)
for name in dir(... |
Returns a dictionary with all extra cli parameters passed; all these must
be parameters that are part of the env, or params (with no defaults) if
the entry point is a factory function
| _env_keys_to_override | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def _add_cli_args_from_env_dict_keys(parser, env_dict):
"""
Add one CLI argument to the args parser for each value
defined in an env dict object
"""
# flatten keys from the env dictionary, e.g. {'a': {'b': 1}} is
# converted to {'a--b': 1}. This allows us to add cli args such a... |
Add one CLI argument to the args parser for each value
defined in an env dict object
| _add_cli_args_from_env_dict_keys | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def _parse_signature_from_callable(callable_):
"""
Parse a callable signature, return a dictionary with
{param_key: default_value} and a list of required parameters
"""
sig = inspect.signature(callable_)
required = [k for k, v in sig.parameters.items() if v.default == inspect._empty]
defau... |
Parse a callable signature, return a dictionary with
{param_key: default_value} and a list of required parameters
| _parse_signature_from_callable | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
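The truncated `defau...` line presumably builds the defaults dict; a sketch of the whole signature-parsing idea, using `inspect.Parameter.empty` and hypothetical names:

```python
import inspect


def parse_signature(callable_):
    """Split a callable's params into required names and {name: default}."""
    sig = inspect.signature(callable_)
    # a parameter is required when it has no default value
    required = [k for k, v in sig.parameters.items()
                if v.default is inspect.Parameter.empty]
    defaults = {k: v.default for k, v in sig.parameters.items()
                if v.default is not inspect.Parameter.empty}
    return required, defaults


def make_task(name, retries=3, timeout=None):
    pass


print(parse_signature(make_task))
# (['name'], {'retries': 3, 'timeout': None})
```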
def _add_args_from_callable(parser, callable_):
"""
Modifies an args parser to include parameters from a callable, adding
parameters with default values as optional and parameters with no defaults
as mandatory. Adds descriptions from parsing the callable's docstring
It also adds the description fro... |
Modifies an args parser to include parameters from a callable, adding
parameters with default values as optional and parameters with no defaults
as mandatory. Adds descriptions from parsing the callable's docstring
It also adds the description from the docstring, if any
Returns parsed args: requi... | _add_args_from_callable | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def _process_file_dir_or_glob(parser, dagspec_arg=None):
"""
Process a file entry point (file, directory, or glob-like pattern);
returns the initialized dag and parsed args
Parameters
----------
parser : CustomParser
CLI arg parser
"""
# NOTE: we must use parser.parse_entry_point_value() i... |
Process a file entry point (file, directory, or glob-like pattern);
returns the initialized dag and parsed args
Parameters
----------
parser : CustomParser
CLI arg parser
| _process_file_dir_or_glob | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def load_dag_from_entry_point_and_parser(entry_point, parser, argv):
"""Load DAG from entry point
Parameters
----------
parser : CustomParser
The cli parser object
argv : list
Command line arguments
"""
help_cmd = "--help" in argv or "-h" in argv
# if the file does not... | Load DAG from entry point
Parameters
----------
parser : CustomParser
The cli parser object
argv : list
Command line arguments
| load_dag_from_entry_point_and_parser | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def _configure_logger(args):
"""Configure logger if user passed --log/--log-file args"""
if hasattr(args, "log"):
if args.log is not None:
logging.basicConfig(level=args.log.upper())
if hasattr(args, "log_file"):
if args.log_file is not None:
file_handler = logging.F... | Configure logger if user passed --log/--log-file args | _configure_logger | python | ploomber/ploomber | src/ploomber/cli/parsers.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/parsers.py | Apache-2.0 |
def connection(self):
"""Return a connection, open one if there isn't any"""
# if there isn't an open connection, open one...
if self._connection is None:
self._connection = self.connect_fn(**self.connect_kwargs)
return self._connection | Return a connection, open one if there isn't any | connection | python | ploomber/ploomber | src/ploomber/clients/db.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/clients/db.py | Apache-2.0 |
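The lazy-connection property generalizes beyond databases; a minimal sketch with hypothetical names (`LazyClient`), exercised with `sqlite3`:

```python
import sqlite3


class LazyClient:
    """Minimal sketch of the lazy-connection pattern above."""

    def __init__(self, connect_fn, **connect_kwargs):
        self.connect_fn = connect_fn
        self.connect_kwargs = connect_kwargs
        self._connection = None

    @property
    def connection(self):
        # open a connection on first access, then reuse it
        if self._connection is None:
            self._connection = self.connect_fn(**self.connect_kwargs)
        return self._connection


client = LazyClient(sqlite3.connect, database=":memory:")
conn1 = client.connection
conn2 = client.connection
print(conn1 is conn2)  # True: the same connection is reused
```

Deferring the connection means constructing the client is cheap and never fails; network or file errors only surface when the connection is actually needed.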
def execute(self, code):
"""Execute code with the existing connection"""
cur = self.connection.cursor()
if self.split_source:
for command in code_split(code, token=self.split_source):
cur.execute(command)
else:
cur.execute(code)
self.conn... | Execute code with the existing connection | execute | python | ploomber/ploomber | src/ploomber/clients/db.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/clients/db.py | Apache-2.0 |
def __init__(
self, connect_kwargs, path_to_directory, run_template="bash {{path_to_code}}"
):
"""
path_to_directory: str
A path to save temporary files
connect_kwargs: dict
Parameters to send to the paramiko.SSHClient.connect constructor
"""
... |
path_to_directory: str
A path to save temporary files
connect_kwargs: dict
Parameters to send to the paramiko.SSHClient.connect constructor
| __init__ | python | ploomber/ploomber | src/ploomber/clients/shell.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/clients/shell.py | Apache-2.0 |
def upload(self, local):
"""Upload file or folder from a local path by calling _upload as needed
Parameters
----------
local
Path to local file or folder to upload
"""
if Path(local).is_dir():
for f in glob.iglob(str(Path(local, "**")), recursive=... | Upload file or folder from a local path by calling _upload as needed
Parameters
----------
local
Path to local file or folder to upload
| upload | python | ploomber/ploomber | src/ploomber/clients/storage/abc.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/clients/storage/abc.py | Apache-2.0 |
def _remote_path(self, local):
"""
Given a local path, compute the remote path where the file will be
stored.
1. Obtain the absolute project root (``/path/to/project``)
2. Get the local absolute path (``/path/to/project/out/data.csv``)
3. Compute the relative path (``out... |
Given a local path, compute the remote path where the file will be
stored.
1. Obtain the absolute project root (``/path/to/project``)
2. Get the local absolute path (``/path/to/project/out/data.csv``)
3. Compute the relative path (``out/data.csv``)
4. Prefix the relativ... | _remote_path | python | ploomber/ploomber | src/ploomber/clients/storage/abc.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/clients/storage/abc.py | Apache-2.0 |
def _resolve(path):
"""
Path.resolve() does not work on Windows if the path doesn't exist;
this makes it work
"""
path = Path(path)
return path if path.is_absolute() else Path(".").resolve() / path |
Path.resolve() does not work on Windows if the path doesn't exist;
this makes it work
| _resolve | python | ploomber/ploomber | src/ploomber/clients/storage/util.py | https://github.com/ploomber/ploomber/blob/master/src/ploomber/clients/storage/util.py | Apache-2.0 |
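A usage sketch of the same fallback (function logic mirrored from the row above):

```python
from pathlib import Path


def resolve(path):
    """Make *path* absolute without requiring it to exist (mirrors _resolve)."""
    path = Path(path)
    # Path.resolve() can fail on Windows for nonexistent paths, so anchor
    # relative paths to the resolved current directory instead
    return path if path.is_absolute() else Path(".").resolve() / path


print(resolve("out/data.csv").is_absolute())  # True
print(resolve(Path.cwd()))  # unchanged: already absolute
```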