| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def _normalise(self, batch: Dict) -> Dict:
"""
Create the `text` field, typically from the field `caption`, and remove the `caption` column.
Remove all the unnecessary columns and put them into a JSON dict (`meta` column).
"""
# `datasets.map` requires function to return pure-fun... |
Create the `text` field, typically from the field `caption`, and remove the `caption` column.
Remove all the unnecessary columns and put them into a JSON dict (`meta` column).
| _normalise | python | huggingface/smollm | vision/m4/sourcing/pmd/loader_builder.py | https://github.com/huggingface/smollm/blob/master/vision/m4/sourcing/pmd/loader_builder.py | Apache-2.0 |
def _normalise(self, batch: Dict) -> Dict:
"""Create the `text` field, typically from the field `caption`."""
# `datasets.map` requires mapped functions to be pure, which is not the case here
# https://github.com/huggingface/datasets/pull/4197#issue-1211342558
batch = batch.copy()
... | Create the `text` field, typically from the field `caption`. | _normalise | python | huggingface/smollm | vision/m4/sourcing/pmd/loader_builder.py | https://github.com/huggingface/smollm/blob/master/vision/m4/sourcing/pmd/loader_builder.py | Apache-2.0 |
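The `_normalise` pattern above — copy the batch, move `caption` into `text` — can be sketched without the `datasets` dependency. Field names follow the snippet; the `meta`-column handling is omitted since it is truncated in the row:

```python
from typing import Dict, List

def normalise_batch(batch: Dict[str, List]) -> Dict[str, List]:
    """Rename `caption` to `text` without mutating the caller's batch.

    `datasets.map` expects pure functions, hence the shallow copy
    (see https://github.com/huggingface/datasets/pull/4197).
    """
    batch = batch.copy()                    # keep the caller's dict intact
    batch["text"] = batch.pop("caption")    # pop only affects the copy
    return batch

batch = {"caption": ["a cat", "a dog"], "image_url": ["u1", "u2"]}
out = normalise_batch(batch)
```

The copy-then-pop order matters: popping on the original would make the function impure under `datasets.map` caching.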
def _add_image_or_exception(self, batch: Dict, image_or_exception_iterator: Iterator) -> Dict:
"""Get the images from the iterator and put them in the batch dict.
Remove all the unnecessary columns and put them into a JSON dict (`meta` column).
Add the source info to the batch dict"""
#... | Get the images from the iterator and put them in the batch dict.
Remove all the unnecessary columns and put them into a JSON dict (`meta` column).
Add the source info to the batch dict | _add_image_or_exception | python | huggingface/smollm | vision/m4/sourcing/pmd/loader_builder.py | https://github.com/huggingface/smollm/blob/master/vision/m4/sourcing/pmd/loader_builder.py | Apache-2.0 |
def map_shard(self, shard: Dataset) -> Dataset:
"""
Prepare the `text` fields, and download (or fetch from cache) images.
"""
# Decide which urls we need to query
shard = shard.map(
self._normalise,
batched=True,
remove_columns=shard.column_nam... |
Prepare the `text` fields, and download (or fetch from cache) images.
| map_shard | python | huggingface/smollm | vision/m4/sourcing/pmd/loader_builder.py | https://github.com/huggingface/smollm/blob/master/vision/m4/sourcing/pmd/loader_builder.py | Apache-2.0 |
def _split_to_single_caption(annotations):
"""This function is mainly used in Localized Narratives where a paragraph can contain
multiple relevant captions to a single image. We split the paragraph into multiple
captions and then return each as an individual sample.
"""
extended = []
for annotat... | This function is mainly used in Localized Narratives where a paragraph can contain
multiple relevant captions to a single image. We split the paragraph into multiple
captions and then return each as an individual sample.
| _split_to_single_caption | python | huggingface/smollm | vision/m4/sourcing/pmd/local_loaders/localized_narratives__openimages/localized_narratives__openimages.py | https://github.com/huggingface/smollm/blob/master/vision/m4/sourcing/pmd/local_loaders/localized_narratives__openimages/localized_narratives__openimages.py | Apache-2.0 |
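The splitting idea above can be sketched as follows; the annotation field names and the naive period-based sentence splitter are assumptions, since the real Localized Narratives loader is truncated here:

```python
def split_to_single_captions(annotations):
    """Expand each (image, paragraph) annotation into one sample per sentence."""
    extended = []
    for ann in annotations:
        # naive split on '.'; the real pipeline may use a proper sentence splitter
        captions = [c.strip() for c in ann["caption"].split(".") if c.strip()]
        for caption in captions:
            extended.append({"image_id": ann["image_id"], "caption": caption})
    return extended

samples = split_to_single_captions(
    [{"image_id": 1, "caption": "A man stands. A dog sits."}]
)
```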
def create_database(shard_name: str):
"""
If the database does not exist, create it
TODO: update so that we can take in multiple shards
"""
db_filepath = f"data/extracted_databases/{shard_name}.db"
if os.path.exists(db_filepath):
print("Database already exists")
return
print(... |
If the database does not exist, create it
TODO: update so that we can take in multiple shards
| create_database | python | huggingface/smollm | vision/m4/sourcing/processing/extracting_ngrams/utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/sourcing/processing/extracting_ngrams/utils.py | Apache-2.0 |
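A minimal sketch of the create-if-missing pattern above, using a temp directory in place of `data/extracted_databases` and an assumed `ngrams` schema (the real table layout is not shown in the snippet):

```python
import os
import sqlite3
import tempfile

def create_database(shard_name: str, base_dir: str) -> str:
    """Create the shard database only if it does not exist yet."""
    db_filepath = os.path.join(base_dir, f"{shard_name}.db")
    if os.path.exists(db_filepath):
        print("Database already exists")
        return db_filepath
    conn = sqlite3.connect(db_filepath)
    # assumed schema for illustration
    conn.execute("CREATE TABLE ngrams (ngram TEXT PRIMARY KEY, count INTEGER)")
    conn.commit()
    conn.close()
    return db_filepath

tmp_dir = tempfile.mkdtemp()
path = create_database("shard_00000", tmp_dir)
```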
def webdoc_valid_sample(sample):
"""Check whether a sample is valid.
:param sample: sample to be checked
"""
return (
sample is not None
and isinstance(sample, dict)
and len(list(sample.keys())) > 0
and not sample.get("__bad__", False)
and sample_has_all_files(sa... | Check whether a sample is valid.
:param sample: sample to be checked
| webdoc_valid_sample | python | huggingface/smollm | vision/m4/training/dataset_utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/dataset_utils.py | Apache-2.0 |
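The validity predicate can be sketched as below; `sample_has_all_files` is a hypothetical stand-in, since its real definition is truncated in the row:

```python
def sample_has_all_files(sample, required=("txt", "json")):
    # hypothetical stand-in for the real file-presence check
    return all(key in sample for key in required)

def webdoc_valid_sample(sample):
    """Check whether a sample is valid (assumed `__bad__` marker semantics)."""
    return (
        sample is not None
        and isinstance(sample, dict)
        and len(sample) > 0
        and not sample.get("__bad__", False)
        and sample_has_all_files(sample)
    )

good = {"txt": "hello", "json": "{}"}
bad = {"txt": "hello", "json": "{}", "__bad__": True}
```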
def group_by_keys_interleaved(data, handler=log_and_continue):
"""Return function over iterator that groups key, value pairs into samples."""
current_sample = None
for filesample in data:
try:
assert isinstance(filesample, dict)
fname, value = filesample["fname"], filesample[... | Return function over iterator that groups key, value pairs into samples. | group_by_keys_interleaved | python | huggingface/smollm | vision/m4/training/dataset_utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/dataset_utils.py | Apache-2.0 |
def dump_optim_states(self):
"""dumps basic information about the state of the optimizer"""
print("*** Optim States Dump:")
param_groups_cnt = len(self.vl_optim.param_groups)
# state dict has more than param_groups info, so extract only the param groups
param_group_states = list(self.vl_optim.state... | dumps basic information about the state of the optimizer | dump_optim_states | python | huggingface/smollm | vision/m4/training/debug_utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/debug_utils.py | Apache-2.0 |
def validate_optim_states_are_reset(self):
"""
for a new or fully reset optimizer we expect all-zero `exp_avg` and `exp_avg_sq` state tensors and step=1
"""
param_groups_cnt = len(self.vl_optim.param_groups)
param_group_states = list(self.vl_optim.state.values())[:param_groups_cnt]
for i, stat... |
for a new or fully reset optimizer we expect all-zero `exp_avg` and `exp_avg_sq` state tensors and step=1
| validate_optim_states_are_reset | python | huggingface/smollm | vision/m4/training/debug_utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/debug_utils.py | Apache-2.0 |
def greedy_packing(
input_ids_to_pack: List[List[int]],
images_to_pack: List[List[torch.FloatTensor]],
max_seq_len: int,
max_num_images: int,
image_seq_len: int,
pad_token_id: int,
fake_token_around_image_id: int,
image_token_id: int,
double_breaking_lines_token_ids: List[int],
o... |
Args details:
`images_to_pack` -> # Each tensor is of size (3, im_height, im_width)
`output_input_ids` -> # Each tensor is of size (max_seq_len,)
`output_images` -> # Each tensor is of size (max_num_images, 3, max_sample_height, max_sample_width)
`output_attention_masks` -> # Each tensor is of size... | greedy_packing | python | huggingface/smollm | vision/m4/training/packing.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/packing.py | Apache-2.0 |
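A stripped-down sketch of the greedy-packing idea on token lists only; the real `greedy_packing` also packs images, attention masks, and labels into padded tensors, so this only illustrates the bin-filling logic:

```python
def greedy_pack(sequences, max_seq_len, pad_token_id=0):
    """Concatenate token sequences greedily into packs of at most
    `max_seq_len` tokens, right-padding each pack."""
    packs, current = [], []
    for seq in sequences:
        # start a new pack when the next sequence would overflow this one
        if len(current) + len(seq) > max_seq_len and current:
            packs.append(current)
            current = []
        current = current + seq[:max_seq_len]  # truncate oversized sequences
    if current:
        packs.append(current)
    # right-pad every pack to max_seq_len
    return [p + [pad_token_id] * (max_seq_len - len(p)) for p in packs]

packs = greedy_pack([[1, 2, 3], [4, 5], [6, 7, 8, 9]], max_seq_len=6)
```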
def prepare_result_return(
output_input_ids,
output_images,
output_attention_masks,
output_pixel_attention_masks,
output_num_images,
output_num_text_tokens,
output_labels=[],
):
"""
This function returns the end dictionary at the exit of the dataloader.
Mostly batchify things and... |
This function returns the end dictionary at the exit of the dataloader.
Mostly batchify things and pad accordingly.
| prepare_result_return | python | huggingface/smollm | vision/m4/training/packing.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/packing.py | Apache-2.0 |
def split_pack_and_pad_webdocs(
sample,
tokenizer,
max_seq_len,
image_transform,
max_num_images,
image_seq_len,
max_image_size=384,
vision_encoder_max_image_size=384,
pre_split_scale_up_max=1.0,
pre_split_scale_up_frequency=0.0,
max_num_samples_per_document=10,
prefix_see... |
Return a batch of samples in the format expected by the model which
includes `input_ids`, `pixel_values`, `attention_mask`, `image_attention_mask`,
and `next_image_attention_mask`. The `input_ids` are sampled from the document to
ensure it has `max_seq_len` tokens; otherwise, the shorter documents are p...
def model_name_to_classes(model_name_or_path):
"""returns config_class, model_class for a given model name or path"""
model_name_lowcase = model_name_or_path.lower()
for rx, classes in model_name2classes.items():
if re.search(rx, model_name_lowcase):
return classes
else:
rai... | returns config_class, model_class for a given model name or path | model_name_to_classes | python | huggingface/smollm | vision/m4/training/setup_language_model.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/setup_language_model.py | Apache-2.0 |
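The registry-lookup pattern shared by `model_name_to_classes` and `vision_model_name_to_model` below can be sketched like this; the registry contents are illustrative, not the real m4 mapping:

```python
import re

# illustrative registry: regex over the lowercased model name -> (config, model)
model_name2classes = {
    r"llama": ("LlamaConfig", "LlamaModel"),
    r"mistral": ("MistralConfig", "MistralModel"),
}

def model_name_to_classes(model_name_or_path):
    """First matching pattern wins; unknown names raise."""
    model_name_lowcase = model_name_or_path.lower()
    for rx, classes in model_name2classes.items():
        if re.search(rx, model_name_lowcase):
            return classes
    raise ValueError(f"Unsupported model: {model_name_or_path}")

classes = model_name_to_classes("huggyllama/Llama-7b")
```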
def vision_model_name_to_model(model_name_or_path, model):
"""returns the model if supported, asserts otherwise"""
model_name_lowcase = model_name_or_path.lower()
for rx, lookup in vision_model_name2model.items():
if re.search(rx, model_name_lowcase):
return lookup(model)
else:
... | returns the model if supported, asserts otherwise | vision_model_name_to_model | python | huggingface/smollm | vision/m4/training/setup_vision_model.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/setup_vision_model.py | Apache-2.0 |
def setup_batch_size_related_configs(self):
"""
batch_size-related configs are processed here.
All this work is done here because it requires knowing the value of num_processes
"""
hparams = self.hparams
if hparams.global_batch_size_ramp_up.start is not None:
... |
batch_size-related configs are processed here.
All this work is done here because it requires knowing the value of num_processes
| setup_batch_size_related_configs | python | huggingface/smollm | vision/m4/training/trainer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/trainer.py | Apache-2.0 |
def update_gas_and_gbs(self, grad_acc_size_current, global_batch_size_current):
"""
Update m4, deepspeed and accelerate with the derived global_batch_size and grad_acc_size
"""
self.hparams.grad_acc_size = grad_acc_size_current
self.hparams.global_batch_size = global_batch_size_c... |
Update m4, deepspeed and accelerate with the derived global_batch_size and grad_acc_size
| update_gas_and_gbs | python | huggingface/smollm | vision/m4/training/trainer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/trainer.py | Apache-2.0 |
def _configure_optimizer_and_scheduler(self):
"""defines model optimizer and lr scheduler"""
vl_optim = getattr(torch_optim, self.optim_param.vl_optim)
if issubclass(vl_optim, torch_optim.AdamW):
no_decay = self.optim_param.vl_optim_params.pop("no_decay", [])
weight_deca... | defines model optimizer and lr scheduler | _configure_optimizer_and_scheduler | python | huggingface/smollm | vision/m4/training/trainer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/trainer.py | Apache-2.0 |
def _prepare_register(self):
"""
Prepare model, optimizer and dataloader if necessary.
Register the scheduler for checkpointing.
"""
if isinstance(self.train_loader.dataset, torch.utils.data.IterableDataset):
# `dummy_dataloader`: trick as suggested here: https://disc... |
Prepare model, optimizer and dataloader if necessary.
Register the scheduler for checkpointing.
| _prepare_register | python | huggingface/smollm | vision/m4/training/trainer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/trainer.py | Apache-2.0 |
def gather_metrics(
self,
local_metric_list: List[Dict[str, torch.Tensor]],
placeholder_tensor: torch.Tensor,
reduce_op_list,
ds_name_suffix: str,
) -> List[Dict[str, torch.Tensor]]:
"""
Collating all metrics to gather into ONE call to `torch.distributed.all_g... |
Collating all metrics to gather into ONE call to `torch.distributed.all_gather` instead of doing one per metric x dataset_name.
| gather_metrics | python | huggingface/smollm | vision/m4/training/trainer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/trainer.py | Apache-2.0 |
def format_print_logs(self, dict_logs, keys_known_formats, skip_keys=[]):
"""
compact formatting of the logs with a pre-specified formatter for each log entry, plus a
catch-all for new log entries that were not yet added to keys_known_formats.
the key order is the one that cont...
compact formatting of the logs with a pre-specified formatter for each log entry, plus a
catch-all for new log entries that were not yet added to keys_known_formats.
the key order is the one that controls how the logs are printed (py37+).
even if there is no formatter there is... | format_print_logs | python | huggingface/smollm | vision/m4/training/trainer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/trainer.py | Apache-2.0 |
def format_jsonl_logs(self, dict_logs):
"""
Similar to format_print_logs but for jsonl logs
"""
log = {}
for key in dict_logs:
# We don't want to log the accumulated values
if "_acc" in key:
continue
elif isinstance(dict_logs[ke... |
Similar to format_print_logs but for jsonl logs
| format_jsonl_logs | python | huggingface/smollm | vision/m4/training/trainer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/trainer.py | Apache-2.0 |
def image_splitting(
image,
vision_encoder_max_image_size,
max_image_size,
pre_split_scale_up_max=1.0,
pre_split_scale_up_frequency=0.0,
scale_up_factor=None,
):
"""
Image splitting strategy.
1) If one side of the original image is larger than `max_image_size`, resize it to `max_imag... |
Image splitting strategy.
1) If one side of the original image is larger than `max_image_size`, resize it to `max_image_size` while preserving the aspect ratio.
2) Divide the resulting image into `ceil(height / vision_encoder_max_image_size)` x `ceil(width / vision_encoder_max_image_size)`
sub-images o... | image_splitting | python | huggingface/smollm | vision/m4/training/utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/utils.py | Apache-2.0 |
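The resize-then-tile arithmetic of steps 1) and 2) can be sketched as pure math; the function and parameter names here are illustrative, and the actual pixel resizing/cropping is omitted:

```python
import math

def split_grid(height, width, max_image_size, tile_size):
    """Cap the longest side at `max_image_size` (keeping aspect ratio),
    then count ceil-divided tiles of `tile_size` per side."""
    longest = max(height, width)
    if longest > max_image_size:
        scale = max_image_size / longest
        height, width = round(height * scale), round(width * scale)
    rows = math.ceil(height / tile_size)
    cols = math.ceil(width / tile_size)
    return (height, width), rows, cols

size, rows, cols = split_grid(1000, 600, max_image_size=768, tile_size=384)
```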
def get_tokenizer(
tokenizer_name: str,
tokenizer_add_tokens,
tokenizer_add_special_tokens,
tokenizer_params,
additional_vocab_size,
model_vocab_size=None,
is_fine_tuning=False,
):
"""
We artificially separate `tokenizer_add_tokens` and `tokenizer_add_special_tokens`: the latter is a dictionary ...
We artificially separate `tokenizer_add_tokens` and `tokenizer_add_special_tokens`: the latter is a dictionary whose keys only take into account special tokens (eos, pad, cls, etc.).
On the contrary, `tokenizer_add_tokens` is a list of strings or `AddedToken`.
In practice, we use `tokenizer.add_special_tokens` to add ...
def accelerate_torch_dtype():
"""
derive and return `torch_dtype` to be used in `from_pretrained`, either from the Deepspeed config or,
if Deepspeed isn't used, from the accelerator state
"""
if not is_accelerate_initialized():
return None
accelerator_state = AcceleratorState()
if is_deepspee... |
derive and return `torch_dtype` to be used in `from_pretrained`, either from the Deepspeed config or,
if Deepspeed isn't used, from the accelerator state
| accelerate_torch_dtype | python | huggingface/smollm | vision/m4/training/utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/utils.py | Apache-2.0 |
def deepspeed_zero_init_disabled_context_manager():
"""
returns either a context list containing a context manager that disables zero.Init, or an empty context list
"""
deepspeed_plugin = get_deepspeed_plugin()
if deepspeed_plugin is not None:
return [deepspeed_plugin.zero3_init_context_manager(enab... |
returns either a context list containing a context manager that disables zero.Init, or an empty context list
| deepspeed_zero_init_disabled_context_manager | python | huggingface/smollm | vision/m4/training/utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/utils.py | Apache-2.0 |
def deepspeed_gathered_parameters_context_manager(params, modify=True):
"""
Under zero.Init returns a context manager that will gather the sharded param, otherwise returns an empty list.
If `modify` is `True`, gather the shards and once the context exits update the shards with the
modified data - one wa... |
Under zero.Init returns a context manager that will gather the sharded param, otherwise returns an empty list.
If `modify` is `True`, gather the shards and once the context exits update the shards with the
modified data - one wants that when modifying the gathered param. If one wants to just gather
the... | deepspeed_gathered_parameters_context_manager | python | huggingface/smollm | vision/m4/training/utils.py | https://github.com/huggingface/smollm/blob/master/vision/m4/training/utils.py | Apache-2.0 |
def detect_overflow(var, ctx):
"""
Report whether the tensor contains any `nan` or `inf` entries.
This is useful for detecting overflows/underflows and best to call right after the function that did some math that
modified the tensor in question.
This function contains a few other helper features ... |
Report whether the tensor contains any `nan` or `inf` entries.
This is useful for detecting overflows/underflows and best to call right after the function that did some math that
modified the tensor in question.
This function contains a few other helper features that you can enable and tweak directly... | detect_overflow | python | huggingface/smollm | vision/m4/utils/activation_tracker.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/activation_tracker.py | Apache-2.0 |
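The core nan/inf check of `detect_overflow` can be sketched on plain floats; the original operates on a tensor and has extra tracing features that are truncated here:

```python
import math

def detect_overflow(values, ctx):
    """Report whether any entry is nan or inf, prefixed with a context
    string so the culprit operation can be located."""
    bad = any(math.isnan(v) or math.isinf(v) for v in values)
    if bad:
        print(f"{ctx}: has nan/inf entries")
    return bad

ok = detect_overflow([1.0, 2.0], "after matmul")
overflowed = detect_overflow([1.0, float("inf")], "after exp")
```

Calling it right after the suspect math, as the docstring advises, narrows down where the overflow first appears.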
def check_valid_tokenizer(tokenizer) -> bool:
"""Check if the special tokens were correctly added to the tokenizer,
and if they are not normalized.
"""
tok_class = type(tokenizer).__name__.lower()
if ("idefics" in tok_class) or ("mistral" in tok_class):
assert "<image>" in tokenizer.get_voca... | Check if the special tokens were correctly added to the tokenizer,
and if they are not normalized.
| check_valid_tokenizer | python | huggingface/smollm | vision/m4/utils/check_valid_tokenizer.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/check_valid_tokenizer.py | Apache-2.0 |
def printflock(*args, **kwargs):
"""
This is a wrapper around the built-in Python `print` which calls `flock` before calling
`print` and unlocks it immediately after. This wrapper is useful for when each rank needs to
print a message without getting it interleaved with prints from other ranks.
The l... |
This is a wrapper around the built-in Python `print` which calls `flock` before calling
`print` and unlocks it immediately after. This wrapper is useful for when each rank needs to
print a message without getting it interleaved with prints from other ranks.
The lock file is the file this wrapper is def... | printflock | python | huggingface/smollm | vision/m4/utils/debug.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/debug.py | Apache-2.0 |
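A minimal Unix-only sketch of the flock-guarded print described above, here locking a temporary file rather than the wrapper's own source file:

```python
import fcntl
import tempfile

# lock file shared by all processes; the original locks its own source file
_LOCK_PATH = tempfile.NamedTemporaryFile(delete=False).name

def printflock(*args, **kwargs):
    """Serialize prints across processes with an exclusive flock (Unix only)."""
    with open(_LOCK_PATH, "r") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)
        try:
            print(*args, **kwargs)
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

printflock("rank 0: step done")
```

Each rank holding the lock for the duration of one `print` is what prevents interleaved output lines.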
def _get_default_logging_level():
"""
If M4_VERBOSITY env var is set to one of the valid choices return that as the new default level. If it is
not, fall back to `_default_log_level`
"""
env_level_str = os.getenv("M4_VERBOSITY", None)
if env_level_str:
if env_level_str in log_levels:
... |
If M4_VERBOSITY env var is set to one of the valid choices return that as the new default level. If it is
not, fall back to `_default_log_level`
| _get_default_logging_level | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def get_logger(name: Optional[str] = None) -> logging.Logger:
"""
Return a logger with the specified name.
This function is not supposed to be directly accessed unless you are writing a custom m4 module.
"""
if name is None:
name = _get_library_name()
_configure_library_root_logger()
... |
Return a logger with the specified name.
This function is not supposed to be directly accessed unless you are writing a custom m4 module.
| get_logger | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def disable_default_handler() -> None:
"""Disable the default handler of the HuggingFace M4's root logger."""
_configure_library_root_logger()
assert _default_handler is not None
_get_library_root_logger().removeHandler(_default_handler) | Disable the default handler of the HuggingFace M4's root logger. | disable_default_handler | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def enable_default_handler() -> None:
"""Enable the default handler of the HuggingFace M4's root logger."""
_configure_library_root_logger()
assert _default_handler is not None
_get_library_root_logger().addHandler(_default_handler) | Enable the default handler of the HuggingFace M4's root logger. | enable_default_handler | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def add_handler(handler: logging.Handler) -> None:
"""adds a handler to the HuggingFace M4's root logger."""
_configure_library_root_logger()
assert handler is not None
_get_library_root_logger().addHandler(handler) | adds a handler to the HuggingFace M4's root logger. | add_handler | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def remove_handler(handler: logging.Handler) -> None:
"""removes given handler from the HuggingFace M4's root logger."""
_configure_library_root_logger()
assert handler is not None and handler not in _get_library_root_logger().handlers
_get_library_root_logger().removeHandler(handler) | removes given handler from the HuggingFace M4's root logger. | remove_handler | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def disable_propagation() -> None:
"""
Disable propagation of the library log outputs. Note that log propagation is disabled by default.
"""
_configure_library_root_logger()
_get_library_root_logger().propagate = False |
Disable propagation of the library log outputs. Note that log propagation is disabled by default.
| disable_propagation | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def enable_propagation() -> None:
"""
Enable propagation of the library log outputs. Please disable the HuggingFace M4's default handler to
prevent double logging if the root logger has been configured.
"""
_configure_library_root_logger()
_get_library_root_logger().propagate = True |
Enable propagation of the library log outputs. Please disable the HuggingFace M4's default handler to
prevent double logging if the root logger has been configured.
| enable_propagation | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def enable_explicit_format() -> None:
"""
Enable explicit formatting for every HuggingFace M4's logger. The explicit formatter is as follows:
```
[LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
```
All handlers currently bound to the root logger are affected by this method.
"""
hand... |
Enable explicit formatting for every HuggingFace M4's logger. The explicit formatter is as follows:
```
[LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
```
All handlers currently bound to the root logger are affected by this method.
| enable_explicit_format | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def reset_format() -> None:
"""
Resets the formatting for HuggingFace M4's loggers.
All handlers currently bound to the root logger are affected by this method.
"""
handlers = _get_library_root_logger().handlers
for handler in handlers:
handler.setFormatter(None) |
Resets the formatting for HuggingFace M4's loggers.
All handlers currently bound to the root logger are affected by this method.
| reset_format | python | huggingface/smollm | vision/m4/utils/logging.py | https://github.com/huggingface/smollm/blob/master/vision/m4/utils/logging.py | Apache-2.0 |
def execute_python(code: str):
"""Execute python code in a Jupyter notebook cell and returns any result, stdout, stderr, display_data, and error."""
with open("sandboxid.txt", "r") as f:
sandboxid = f.read()
sandbox = CodeInterpreter.reconnect(sandboxid)
execution = sandbox.notebook.exec_cell(co... | Execute python code in a Jupyter notebook cell and return any result, stdout, stderr, display_data, and error. | execute_python | python | kturung/langgraph_streamlit_codeassistant | main.py | https://github.com/kturung/langgraph_streamlit_codeassistant/blob/master/main.py | MIT |
def send_file_to_user(filepath: str):
"""Send a single file to the user."""
with open("sandboxid.txt", "r") as f:
sandboxid = f.read()
sandbox = CodeInterpreter.reconnect(sandboxid)
remote_file_path = "/home/user/" + filepath
try:
file_in_bytes = sandbox.download_file(remote_file_pat... | Send a single file to the user. | send_file_to_user | python | kturung/langgraph_streamlit_codeassistant | main.py | https://github.com/kturung/langgraph_streamlit_codeassistant/blob/master/main.py | MIT |
def install_npm_dependencies(package_names: str):
"""Installs the given npm dependencies and returns the result of the installation."""
try:
# Split the package_names string into a list of individual package names
package_list = package_names.split()
npm_cmd = "npm.cmd" if platform.syste... | Installs the given npm dependencies and returns the result of the installation. | install_npm_dependencies | python | kturung/langgraph_streamlit_codeassistant | main.py | https://github.com/kturung/langgraph_streamlit_codeassistant/blob/master/main.py | MIT |
def render_react(code: str):
"""Render a react component with the given code and return the render result."""
cwd = os.getcwd()
file_path = os.path.join(cwd, "src", "App.js")
with open(file_path, "w", encoding="utf-8") as f:
f.write(code)
# Determine the appropriate command based on the oper... | Render a react component with the given code and return the render result. | render_react | python | kturung/langgraph_streamlit_codeassistant | main.py | https://github.com/kturung/langgraph_streamlit_codeassistant/blob/master/main.py | MIT |
def maybe_chdir():
"""Detects if DepthMap was installed as a stable-diffusion-webui script, but run without current directory set to
the stable-diffusion-webui root. Changes current directory if needed.
This is to avoid re-downloading models and putting results into a wrong folder."""
try:
file_... | Detects if DepthMap was installed as a stable-diffusion-webui script, but run without current directory set to
the stable-diffusion-webui root. Changes current directory if needed.
This is to avoid re-downloading models and putting results into a wrong folder. | maybe_chdir | python | thygate/stable-diffusion-webui-depthmap-script | main.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/main.py | MIT |
def __init__(
self,
img_size=224,
patch_size=16,
in_chans=3,
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4.0,
qkv_bias=True,
ffn_bias=True,
proj_bias=True,
drop_path_rate=0.0,
drop_path_uniform=False,
in... |
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/dinov2.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/dinov2.py | MIT |
def init_weights_vit_timm(module: nn.Module, name: str = ""):
"""ViT weight initialization, original timm impl (for reproducibility)"""
if isinstance(module, nn.Linear):
trunc_normal_(module.weight, std=0.02)
if module.bias is not None:
nn.init.zeros_(module.bias) | ViT weight initialization, original timm impl (for reproducibility) | init_weights_vit_timm | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/dinov2.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/dinov2.py | MIT |
def vit_giant2(patch_size=16, num_register_tokens=0, **kwargs):
"""
Close to ViT-giant, with embed-dim 1536 and 24 heads => embed-dim per head 64
"""
model = DinoVisionTransformer(
patch_size=patch_size,
embed_dim=1536,
depth=40,
num_heads=24,
mlp_ratio=4,
... |
Close to ViT-giant, with embed-dim 1536 and 24 heads => embed-dim per head 64
| vit_giant2 | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/dinov2.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/dinov2.py | MIT |
def get_attn_bias_and_cat(x_list, branges=None):
"""
this will perform the index select, cat the tensors, and provide the attn_bias from cache
"""
batch_sizes = [b.shape[0] for b in branges] if branges is not None else [x.shape[0] for x in x_list]
all_shapes = tuple((b, x.shape[1]) for b, x in zip(b... |
this will perform the index select, cat the tensors, and provide the attn_bias from cache
| get_attn_bias_and_cat | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/dinov2_layers/block.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/dinov2_layers/block.py | MIT |
def forward_nested(self, x_list: List[Tensor]) -> List[Tensor]:
"""
x_list contains a list of tensors to nest together and run
"""
assert isinstance(self.attn, MemEffAttention)
if self.training and self.sample_drop_ratio > 0.0:
def attn_residual_func(x: Tensor, attn... |
x_list contains a list of tensors to nest together and run
| forward_nested | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/dinov2_layers/block.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/dinov2_layers/block.py | MIT |
def __init__(self, features, activation, bn):
"""Init.
Args:
features (int): number of features
"""
super().__init__()
self.bn = bn
self.groups=1
self.conv1 = nn.Conv2d(features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.... | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/util/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/util/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
"""
out = self.activation(x)
out = self.conv1(out)
if self.bn == True:
out = self.bn1(out)
out = self.activation(o... | Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
| forward | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/util/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/util/blocks.py | MIT |
def __init__(
self,
features,
activation,
deconv=False,
bn=False,
expand=False,
align_corners=True,
size=None
):
"""Init.
Args:
features (int): number of features
"""
super(FeatureFusionBlock, ... | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/util/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/util/blocks.py | MIT |
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_AREA,
):
"""Init.
Args:
width (int): desired output wid... | Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/depth_anything_v2/util/transform.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/depth_anything_v2/util/transform.py | MIT |
def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
"""Rezise the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
"""
shape = list(sample["disparity"].shape)
if ... | Rezise the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
| apply_min_size | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/dataset/transform.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/dataset/transform.py | MIT |
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_AREA,
):
"""Init.
Args:
width (int): desired output wid... | Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/dataset/transform.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/dataset/transform.py | MIT |
def __init__(
self,
img_size=224,
patch_size=16,
in_chans=3,
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4.0,
qkv_bias=True,
ffn_bias=True,
proj_bias=True,
drop_path_rate=0.0,
drop_path_uniform=False,
in... |
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2.py | MIT |
def init_weights_vit_timm(module: nn.Module, name: str = ""):
"""ViT weight initialization, original timm impl (for reproducibility)"""
if isinstance(module, nn.Linear):
trunc_normal_(module.weight, std=0.02)
if module.bias is not None:
nn.init.zeros_(module.bias) | ViT weight initialization, original timm impl (for reproducibility) | init_weights_vit_timm | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2.py | MIT |
def vit_giant2(patch_size=16, num_register_tokens=0, **kwargs):
"""
Close to ViT-giant, with embed-dim 1536 and 24 heads => embed-dim per head 64
"""
model = DinoVisionTransformer(
patch_size=patch_size,
embed_dim=1536,
depth=40,
num_heads=24,
mlp_ratio=4,
... |
Close to ViT-giant, with embed-dim 1536 and 24 heads => embed-dim per head 64
| vit_giant2 | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2.py | MIT |
def get_attn_bias_and_cat(x_list, branges=None):
"""
this will perform the index select, cat the tensors, and provide the attn_bias from cache
"""
batch_sizes = [b.shape[0] for b in branges] if branges is not None else [x.shape[0] for x in x_list]
all_shapes = tuple((b, x.shape[1]) for b, x in zip(b... |
this will perform the index select, cat the tensors, and provide the attn_bias from cache
| get_attn_bias_and_cat | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2_layers/block.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2_layers/block.py | MIT |
def forward_nested(self, x_list: List[Tensor]) -> List[Tensor]:
"""
x_list contains a list of tensors to nest together and run
"""
assert isinstance(self.attn, MemEffAttention)
if self.training and self.sample_drop_ratio > 0.0:
def attn_residual_func(x: Tensor, attn... |
x_list contains a list of tensors to nest together and run
| forward_nested | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2_layers/block.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/dinov2_layers/block.py | MIT |
def __init__(self, features, activation, bn):
"""Init.
Args:
features (int): number of features
"""
super().__init__()
self.bn = bn
self.groups=1
self.conv1 = nn.Conv2d(features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.... | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/util/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/util/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
"""
out = self.activation(x)
out = self.conv1(out)
if self.bn == True:
out = self.bn1(out)
out = self.activation(o... | Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
| forward | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/util/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/util/blocks.py | MIT |
def __init__(
self,
features,
activation,
deconv=False,
bn=False,
expand=False,
align_corners=True,
size=None
):
"""Init.
Args:
features (int): number of features
"""
super(FeatureFusionBlock, ... | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/util/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/util/blocks.py | MIT |
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_AREA,
):
"""Init.
Args:
width (int): desired output wid... | Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | ddepth_anything_v2/metric_depth/depth_anything_v2/util/transform.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/ddepth_anything_v2/metric_depth/depth_anything_v2/util/transform.py | MIT |
def __encode_empty_text(self):
"""
Encode text embedding for empty prompt
"""
prompt = ""
text_inputs = self.tokenizer(
prompt,
padding="do_not_pad",
max_length=self.tokenizer.model_max_length,
truncation=True,
return_te... |
Encode text embedding for empty prompt
| __encode_empty_text | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/marigold_pipeline.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/marigold_pipeline.py | MIT |
def single_infer(
self, rgb_in: torch.Tensor, num_inference_steps: int, show_pbar: bool
) -> torch.Tensor:
"""
Perform an individual depth prediction without ensembling.
Args:
rgb_in (torch.Tensor):
Input RGB image.
num_inference_steps (int):
... |
Perform an individual depth prediction without ensembling.
Args:
rgb_in (torch.Tensor):
Input RGB image.
num_inference_steps (int):
Number of diffusion denoisign steps (DDIM) during inference.
show_pbar (bool):
Display... | single_infer | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/marigold_pipeline.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/marigold_pipeline.py | MIT |
def encode_rgb(self, rgb_in: torch.Tensor) -> torch.Tensor:
"""
Encode RGB image into latent.
Args:
rgb_in (torch.Tensor):
Input RGB image to be encoded.
Returns:
torch.Tensor: Image latent
"""
# encode
h = self.vae.encode... |
Encode RGB image into latent.
Args:
rgb_in (torch.Tensor):
Input RGB image to be encoded.
Returns:
torch.Tensor: Image latent
| encode_rgb | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/marigold_pipeline.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/marigold_pipeline.py | MIT |
def decode_depth(self, depth_latent: torch.Tensor) -> torch.Tensor:
"""
Decode depth latent into depth map.
Args:
depth_latent (torch.Tensor):
Depth latent to be decoded.
Returns:
torch.Tensor: Decoded depth map.
"""
# scale laten... |
Decode depth latent into depth map.
Args:
depth_latent (torch.Tensor):
Depth latent to be decoded.
Returns:
torch.Tensor: Decoded depth map.
| decode_depth | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/marigold_pipeline.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/marigold_pipeline.py | MIT |
def find_batch_size(ensemble_size: int, input_res: int) -> int:
"""
Automatically search for suitable operating batch size.
Args:
ensemble_size (int): Number of predictions to be ensembled
input_res (int): Operating resolution of the input image.
Returns:
int: Operating batch s... |
Automatically search for suitable operating batch size.
Args:
ensemble_size (int): Number of predictions to be ensembled
input_res (int): Operating resolution of the input image.
Returns:
int: Operating batch size
| find_batch_size | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/util/batchsize.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/util/batchsize.py | MIT |
def inter_distances(tensors: torch.Tensor):
"""
To calculate the distance between each two depth maps.
"""
distances = []
for i, j in torch.combinations(torch.arange(tensors.shape[0])):
arr1 = tensors[i : i + 1]
arr2 = tensors[j : j + 1]
distances.append(arr1 - arr2)
dist... |
To calculate the distance between each two depth maps.
| inter_distances | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/util/ensemble.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/util/ensemble.py | MIT |
def ensemble_depths(
input_images: torch.Tensor,
regularizer_strength: float = 0.02,
max_iter: int = 2,
tol: float = 1e-3,
reduction: str = "median",
max_res: int = None,
):
"""
To ensemble multiple affine-invariant depth images (up to scale and shift),
by aligning estimating the... |
To ensemble multiple affine-invariant depth images (up to scale and shift),
by aligning estimating the scale and shift
| ensemble_depths | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/util/ensemble.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/util/ensemble.py | MIT |
def seed_all(seed: int = 0):
"""
Set random seeds of all components.
"""
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed) |
Set random seeds of all components.
| seed_all | python | thygate/stable-diffusion-webui-depthmap-script | dmarigold/marigold/util/seed_all.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmarigold/marigold/util/seed_all.py | MIT |
def load(self, path):
"""Load model from file.
Args:
path (str): file path
"""
parameters = torch.load(path, map_location=torch.device('cpu'))
if "optimizer" in parameters:
parameters = parameters["model"]
self.load_state_dict(parameters) | Load model from file.
Args:
path (str): file path
| load | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/base_model.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/base_model.py | MIT |
def __init__(self, scale_factor, mode, align_corners=False):
"""Init.
Args:
scale_factor (float): scaling
mode (str): interpolation mode
"""
super(Interpolate, self).__init__()
self.interp = nn.functional.interpolate
self.scale_factor = scale_fac... | Init.
Args:
scale_factor (float): scaling
mode (str): interpolation mode
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: interpolated data
"""
x = self.interp(
x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
)
return x | Forward pass.
Args:
x (tensor): input
Returns:
tensor: interpolated data
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def __init__(self, features):
"""Init.
Args:
features (int): number of features
"""
super().__init__()
self.conv1 = nn.Conv2d(
features, features, kernel_size=3, stride=1, padding=1, bias=True
)
self.conv2 = nn.Conv2d(
featur... | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
"""
out = self.relu(x)
out = self.conv1(out)
out = self.relu(out)
out = self.conv2(out)
return out + x | Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def __init__(self, features):
"""Init.
Args:
features (int): number of features
"""
super(FeatureFusionBlock, self).__init__()
self.resConfUnit1 = ResidualConvUnit(features)
self.resConfUnit2 = ResidualConvUnit(features) | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def __init__(self, features, activation, bn):
"""Init.
Args:
features (int): number of features
"""
super().__init__()
self.bn = bn
self.groups=1
self.conv1 = nn.Conv2d(
features, features, kernel_size=3, stride=1, padding=1, bias=True,... | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
"""
out = self.activation(x)
out = self.conv1(out)
if self.bn==True:
out = self.bn1(out)
out = self.activation(out... | Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True, size=None):
"""Init.
Args:
features (int): number of features
"""
super(FeatureFusionBlock_custom, self).__init__()
self.deconv = deconv
self.align_corners = a... | Init.
Args:
features (int): number of features
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/blocks.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/blocks.py | MIT |
def __init__(self, path=None, features=256, non_negative=True):
"""Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, optional): Number of features. Defaults to 256.
backbone (str, optional): Backbone network for encoder. Defaults... | Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, optional): Number of features. Defaults to 256.
backbone (str, optional): Backbone network for encoder. Defaults to resnet50
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/midas_net.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/midas_net.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
"""
layer_1 = self.pretrained.layer1(x)
layer_2 = self.pretrained.layer2(layer_1)
layer_3 = self.pretrained.layer3(layer_2)
layer_... | Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/midas_net.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/midas_net.py | MIT |
def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True,
blocks={'expand': True}):
"""Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, opt... | Init.
Args:
path (str, optional): Path to saved model. Defaults to None.
features (int, optional): Number of features. Defaults to 256.
backbone (str, optional): Backbone network for encoder. Defaults to resnet50
| __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/midas_net_custom.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/midas_net_custom.py | MIT |
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
"""
if self.channels_last==True:
print("self.channels_last = ", self.channels_last)
x.contiguous(memory_format=torch.channels_last)... | Forward pass.
Args:
x (tensor): input data (image)
Returns:
tensor: depth
| forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/midas_net_custom.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/midas_net_custom.py | MIT |
def load_model(device, model_path, model_type="dpt_large_384", optimize=True, height=None, square=False):
"""Load the specified network.
Args:
device (device): the torch device used
model_path (str): path to saved model
model_type (str): the type of the model to be loaded
optimi... | Load the specified network.
Args:
device (device): the torch device used
model_path (str): path to saved model
model_type (str): the type of the model to be loaded
optimize (bool): optimize the model to half-integer on CUDA?
height (int): inference encoder image height
... | load_model | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/model_loader.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/model_loader.py | MIT |
def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
"""Rezise the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
"""
shape = list(sample["disparity"].shape)
if ... | Rezise the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
| apply_min_size | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/transforms.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/transforms.py | MIT |
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_AREA,
):
"""Init.
Args:
width (int): desired output wid... | Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/transforms.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/transforms.py | MIT |
def patch_embed_forward(self, x):
"""
Modification of timm.models.layers.patch_embed.py: PatchEmbed.forward to support arbitrary window sizes.
"""
x = self.proj(x)
if self.flatten:
x = x.flatten(2).transpose(1, 2)
x = self.norm(x)
return x |
Modification of timm.models.layers.patch_embed.py: PatchEmbed.forward to support arbitrary window sizes.
| patch_embed_forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/backbones/beit.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/backbones/beit.py | MIT |
def _get_rel_pos_bias(self, window_size):
"""
Modification of timm.models.beit.py: Attention._get_rel_pos_bias to support arbitrary window sizes.
"""
old_height = 2 * self.window_size[0] - 1
old_width = 2 * self.window_size[1] - 1
new_height = 2 * window_size[0] - 1
new_width = 2 * window_s... |
Modification of timm.models.beit.py: Attention._get_rel_pos_bias to support arbitrary window sizes.
| _get_rel_pos_bias | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/backbones/beit.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/backbones/beit.py | MIT |
def attention_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None):
"""
Modification of timm.models.beit.py: Attention.forward to support arbitrary window sizes.
"""
B, N, C = x.shape
qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias)) if self.q_bias is not Non... |
Modification of timm.models.beit.py: Attention.forward to support arbitrary window sizes.
| attention_forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/backbones/beit.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/backbones/beit.py | MIT |
def block_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None):
"""
Modification of timm.models.beit.py: Block.forward to support arbitrary window sizes.
"""
if hasattr(self, 'drop_path1') and not hasattr(self, 'drop_path'):
self.drop_path = self.drop_path1
if sel... |
Modification of timm.models.beit.py: Block.forward to support arbitrary window sizes.
| block_forward | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/backbones/beit.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/backbones/beit.py | MIT |
def beit_forward_features(self, x):
"""
Modification of timm.models.beit.py: Beit.forward_features to support arbitrary window sizes.
"""
resolution = x.shape[2:]
x = self.patch_embed(x)
x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)
if self.pos_embed is not None:
... |
Modification of timm.models.beit.py: Beit.forward_features to support arbitrary window sizes.
| beit_forward_features | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/backbones/beit.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/backbones/beit.py | MIT |
def stem_b4_transpose(in_chs, out_chs, activation):
"""
Modification of
https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/levit.py: stem_b16
such that ConvTranspose2d is used instead of Conv2d and stem is also reduced to the half.
"""
return nn.Sequential(
Con... |
Modification of
https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/levit.py: stem_b16
such that ConvTranspose2d is used instead of Conv2d and stem is also reduced to the half.
| stem_b4_transpose | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/backbones/levit.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/backbones/levit.py | MIT |
def merge_pre_bn(module, pre_bn_1, pre_bn_2=None):
""" Merge pre BN to reduce inference runtime.
"""
weight = module.weight.data
if module.bias is None:
zeros = torch.zeros(module.out_channels, device=weight.device).type(weight.type())
module.bias = nn.Parameter(zeros)
bias = module.... | Merge pre BN to reduce inference runtime.
| merge_pre_bn | python | thygate/stable-diffusion-webui-depthmap-script | dmidas/backbones/next_vit.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dmidas/backbones/next_vit.py | MIT |
def __init__(self, config, mode, device='cpu', transform=None, **kwargs):
"""
Data loader for depth datasets
Args:
config (dict): Config dictionary. Refer to utils/config.py
mode (str): "train" or "online_eval"
device (str, optional): Device to load the data ... |
Data loader for depth datasets
Args:
config (dict): Config dictionary. Refer to utils/config.py
mode (str): "train" or "online_eval"
device (str, optional): Device to load the data on. Defaults to 'cpu'.
transform (torchvision.transforms, optional): Tran... | __init__ | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/data_mono.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/data_mono.py | MIT |
def repetitive_roundrobin(*iterables):
"""
cycles through iterables but sample wise
first yield first sample from first iterable then first sample from second iterable and so on
then second sample from first iterable then second sample from second iterable and so on
If one iterable is shorter than ... |
cycles through iterables but sample wise
first yield first sample from first iterable then first sample from second iterable and so on
then second sample from first iterable then second sample from second iterable and so on
If one iterable is shorter than the others, it is repeated until all iterables... | repetitive_roundrobin | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/data_mono.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/data_mono.py | MIT |
def get_white_border(rgb_image, value=255, **kwargs) -> CropParams:
"""Crops the white border of the RGB.
Args:
rgb: RGB image, shape (H, W, 3).
Returns:
Crop parameters.
"""
if value == 255:
# assert range of values in rgb image is [0, 255]
assert np.max(rgb_image) ... | Crops the white border of the RGB.
Args:
rgb: RGB image, shape (H, W, 3).
Returns:
Crop parameters.
| get_white_border | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/preprocess.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/preprocess.py | MIT |
def crop_black_or_white_border(rgb_image, *other_images: np.ndarray, tolerance=0.1, cut_off=20, level_diff_threshold=5) -> Tuple[np.ndarray]:
"""Crops the white and black border of the RGB and depth images.
Args:
rgb: RGB image, shape (H, W, 3). This image is used to determine the border.
other... | Crops the white and black border of the RGB and depth images.
Args:
rgb: RGB image, shape (H, W, 3). This image is used to determine the border.
other_images: The other images to crop according to the border of the RGB image.
Returns:
Cropped RGB and other images.
| crop_black_or_white_border | python | thygate/stable-diffusion-webui-depthmap-script | dzoedepth/data/preprocess.py | https://github.com/thygate/stable-diffusion-webui-depthmap-script/blob/master/dzoedepth/data/preprocess.py | MIT |