| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def prepare_model_for_distributed(
model: torch.nn.Module,
config: TrainingConfig,
ddp_find_unused_parameters: Optional[bool] = None,
) -> torch.nn.Module:
"""Wrap the model for distributed training (DDP or FSDP).
Args:
model: The model to be wrapped.
config: The training config.
... | Wrap the model for distributed training (DDP or FSDP).
Args:
model: The model to be wrapped.
config: The training config.
ddp_find_unused_parameters: Whether to traverse the autograd graph from all
tensors contained in the return value of the wrapped module's `forward`
... | prepare_model_for_distributed | python | oumi-ai/oumi | src/oumi/core/distributed.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/distributed.py | Apache-2.0 |
def get_accelerate_env_vars(config: TrainingConfig) -> dict[str, str]:
"""Gets environment vars for FSDP Accelerate corresponding to Oumi training params.
This mimics the environment variables set here:
https://github.com/huggingface/accelerate/blob/bf4572b6ce0a534a9d73537485a0edf1d68144b8/src/accelerate/u... | Gets environment vars for FSDP Accelerate corresponding to Oumi training params.
This mimics the environment variables set here:
https://github.com/huggingface/accelerate/blob/bf4572b6ce0a534a9d73537485a0edf1d68144b8/src/accelerate/utils/launch.py#L260-L285
Note how they lowercase all boolean values, excep... | get_accelerate_env_vars | python | oumi-ai/oumi | src/oumi/core/distributed.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/distributed.py | Apache-2.0 |
def prepare_accelerate_fsdp_run(config: TrainingConfig) -> dict[str, str]:
"""Prepares our FSDP training job to run with the HuggingFace Accelerate library.
This function should be run if we didn't invoke the current training job from the
Accelerate launcher, but still want to use FSDP with Accelerate. The... | Prepares our FSDP training job to run with the HuggingFace Accelerate library.
This function should be run if we didn't invoke the current training job from the
Accelerate launcher, but still want to use FSDP with Accelerate. The motivation for
this is to remove the need for the Accelerate config, centrali... | prepare_accelerate_fsdp_run | python | oumi-ai/oumi | src/oumi/core/distributed.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/distributed.py | Apache-2.0 |
def estimate_dataloader_num_workers(
gpus_per_node: Optional[int] = None, cpu_count: Optional[int] = None
) -> int:
"""Estimates the number of dataloader workers.
    Uses a simple heuristic based on the number of GPUs and CPUs per node.
    Args:
        gpus_per_node: The number of GPUs per node.
    ... | Estimates the number of dataloader workers.
Uses a simple heuristic based on the number of GPUs and CPUs per node.
Args:
    gpus_per_node: The number of GPUs per node.
    cpu_count: The number of CPU cores.
Returns:
The estimated number of dataloader workers (a non-zero positive numb... | estimate_dataloader_num_workers | python | oumi-ai/oumi | src/oumi/core/distributed.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/distributed.py | Apache-2.0 |
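The `estimate_dataloader_num_workers` body is truncated above; a minimal sketch of one plausible heuristic is shown below. The divisor of `2 * gpus_per_node` and the cap of 8 are illustrative assumptions, not Oumi's actual constants.

```python
import os
from typing import Optional


def estimate_dataloader_num_workers(
    gpus_per_node: Optional[int] = None, cpu_count: Optional[int] = None
) -> int:
    """Estimates dataloader workers per process from node resources (sketch)."""
    if cpu_count is None:
        cpu_count = os.cpu_count() or 1
    if gpus_per_node is None or gpus_per_node <= 0:
        gpus_per_node = 1
    # Give each GPU process a share of the cores, keep headroom for the main
    # processes, and clamp so the result is always a positive, sane count.
    return max(1, min(cpu_count // (2 * gpus_per_node), 8))
```

The `max(1, ...)` clamp guarantees the documented "non-zero positive number" return value even on CPU-starved nodes.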
def set_random_seeds(seed: int = 42, set_deterministic: bool = False) -> None:
"""Set random seeds for reproducibility.
Each worker will have a different seed to ensure that each worker
starts with a different random state.
Args:
seed: The seed value to set for random number generators.
... | Set random seeds for reproducibility.
Each worker will have a different seed to ensure that each worker
starts with a different random state.
Args:
seed: The seed value to set for random number generators.
set_deterministic: Whether to set deterministic mode for CUDA operations.
| set_random_seeds | python | oumi-ai/oumi | src/oumi/core/distributed.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/distributed.py | Apache-2.0 |
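A sketch of the per-worker seeding that the `set_random_seeds` docstring describes: the base seed is offset by the worker rank so every process starts from a distinct random state. Reading the rank from the `RANK` environment variable is an assumption about the launcher (torchrun sets it), not necessarily what Oumi does.

```python
import os
import random

import numpy as np
import torch


def set_random_seeds(seed: int = 42, set_deterministic: bool = False) -> None:
    """Seeds all RNGs, offset by rank so each worker gets a distinct stream."""
    rank = int(os.environ.get("RANK", 0))  # populated by torchrun and similar
    worker_seed = seed + rank
    random.seed(worker_seed)
    np.random.seed(worker_seed)
    torch.manual_seed(worker_seed)  # seeds CPU and all CUDA devices
    if set_deterministic:
        # Trade speed for reproducibility in cuDNN kernels.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
```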
def on_save(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Saving callback.
Gets triggered at each saving step to qu... | Saving callback.
Gets triggered at each saving step to quantize trained models
in 1bit precision.
| on_save | python | oumi-ai/oumi | src/oumi/core/callbacks/bitnet_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/bitnet_callback.py | Apache-2.0 |
def __init__(
self,
dtype: torch.dtype,
):
"""Initialize the HfMfuTrainerCallback.
Args:
dtype: The data type of the model.
"""
self._dtype = dtype
self._time_of_second_step: Optional[float] = None
self._flops_at_second_step: Optional[floa... | Initialize the HfMfuTrainerCallback.
Args:
dtype: The data type of the model.
| __init__ | python | oumi-ai/oumi | src/oumi/core/callbacks/hf_mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/hf_mfu_callback.py | Apache-2.0 |
def on_step_begin(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the beginning of each train step."""
... | Event called at the beginning of each train step. | on_step_begin | python | oumi-ai/oumi | src/oumi/core/callbacks/hf_mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/hf_mfu_callback.py | Apache-2.0 |
def on_step_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of each train step.
Note that... | Event called at the end of each train step.
Note that this will be called after all gradient accumulation substeps.
| on_step_end | python | oumi-ai/oumi | src/oumi/core/callbacks/hf_mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/hf_mfu_callback.py | Apache-2.0 |
def on_log(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called after logging the last logs."""
if self._callb... | Event called after logging the last logs. | on_log | python | oumi-ai/oumi | src/oumi/core/callbacks/hf_mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/hf_mfu_callback.py | Apache-2.0 |
def __init__(
self,
dtype: torch.dtype,
num_params: int,
sequence_length: int,
num_layers: Optional[int] = None,
num_attention_heads: Optional[int] = None,
attention_head_size: Optional[int] = None,
add_rematerialization: bool = False,
):
"""In... | Initialize the MfuTrainerCallback.
Args:
dtype: The data type of the model.
num_params: The number of parameters in the model.
sequence_length: The sequence length of the model.
num_layers: The number... | __init__ | python | oumi-ai/oumi | src/oumi/core/callbacks/mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/mfu_callback.py | Apache-2.0 |
def on_step_begin(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the beginning of each train step."""
... | Event called at the beginning of each train step. | on_step_begin | python | oumi-ai/oumi | src/oumi/core/callbacks/mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/mfu_callback.py | Apache-2.0 |
def on_step_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of each train step.
Note that... | Event called at the end of each train step.
Note that this will be called after all gradient accumulation substeps.
| on_step_end | python | oumi-ai/oumi | src/oumi/core/callbacks/mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/mfu_callback.py | Apache-2.0 |
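The MFU callbacks above report Model FLOPs Utilization; a sketch of the core arithmetic using the common ~6N FLOPs-per-token estimate for a forward-plus-backward pass, omitting the attention correction term that `num_layers`/`num_attention_heads`/`attention_head_size` would add. The 312 TFLOP/s figure is the A100 bf16 peak, used here only as an example.

```python
def estimate_mfu(
    num_params: int, tokens_per_second: float, peak_flops_per_second: float
) -> float:
    """MFU = achieved FLOP/s divided by the hardware's peak FLOP/s.

    Uses the ~6*N FLOPs-per-token approximation for an N-parameter
    transformer (2N forward + 4N backward), attention terms omitted.
    """
    achieved_flops_per_second = 6 * num_params * tokens_per_second
    return achieved_flops_per_second / peak_flops_per_second


# e.g. a 7B model at 4000 tokens/s against a ~312 TFLOP/s bf16 peak
mfu = estimate_mfu(7_000_000_000, 4000.0, 312e12)
```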
def on_log(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called after logging the last logs."""
if self._callb... | Event called after logging the last logs. | on_log | python | oumi-ai/oumi | src/oumi/core/callbacks/mfu_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/mfu_callback.py | Apache-2.0 |
def __init__(
self,
metrics: list[str],
):
"""Initializes the NanInfDetectionCallback.
Args:
metrics: The list of metrics to monitor.
"""
self._metrics = copy.deepcopy(metrics) | Initializes the NanInfDetectionCallback.
Args:
metrics: The list of metrics to monitor.
| __init__ | python | oumi-ai/oumi | src/oumi/core/callbacks/nan_inf_detection_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/nan_inf_detection_callback.py | Apache-2.0 |
def on_log(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called after logging the last logs."""
metrics_dict =... | Event called after logging the last logs. | on_log | python | oumi-ai/oumi | src/oumi/core/callbacks/nan_inf_detection_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/nan_inf_detection_callback.py | Apache-2.0 |
def __init__(self, profiler):
"""Initialize the ProfilerStepCallback.
Args:
profiler: PyTorch profiler object.
"""
self._profiler = profiler
self._microstep_function = None | Initialize the ProfilerStepCallback.
Args:
profiler: PyTorch profiler object.
| __init__ | python | oumi-ai/oumi | src/oumi/core/callbacks/profiler_step_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/profiler_step_callback.py | Apache-2.0 |
def on_step_begin(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the beginning of a training step.
I... | Event called at the beginning of a training step.
If using gradient accumulation, one training step might take several inputs.
| on_step_begin | python | oumi-ai/oumi | src/oumi/core/callbacks/profiler_step_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/profiler_step_callback.py | Apache-2.0 |
def on_substep_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of an substep during gradient accum... | Event called at the end of an substep during gradient accumulation. | on_substep_end | python | oumi-ai/oumi | src/oumi/core/callbacks/profiler_step_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/profiler_step_callback.py | Apache-2.0 |
def on_step_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of each train step.
Note that... | Event called at the end of each train step.
Note that this will be called after all gradient accumulation substeps.
| on_step_end | python | oumi-ai/oumi | src/oumi/core/callbacks/profiler_step_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/profiler_step_callback.py | Apache-2.0 |
def __init__(
self,
skip_first_steps: int = 1,
world_process_zero_only: bool = True,
include_timer_metrics: bool = False,
track_gpu_temperature: bool = False,
output_dir: Optional[pathlib.Path] = None,
):
"""Initializes the TelemetryCallback.
Args:
... | Initializes the TelemetryCallback.
Args:
skip_first_steps: The number of initial steps to exclude from stats.
world_process_zero_only: Whether to collect stats on the main process only.
include_timer_metrics: Whether to add timer stats to reported metrics.
Th... | __init__ | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def on_step_begin(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the beginning of a training step.
I... | Event called at the beginning of a training step.
If using gradient accumulation, one training step might take several inputs.
| on_step_begin | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def on_substep_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of a substep during gradient accumu... | Event called at the end of a substep during gradient accumulation. | on_substep_end | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def on_step_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of each train step.
Note that... | Event called at the end of each train step.
Note that this will be called after all gradient accumulation substeps.
| on_step_end | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def on_epoch_begin(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the beginning of an epoch."""
if se... | Event called at the beginning of an epoch. | on_epoch_begin | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def on_epoch_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of an epoch."""
if self._perm... | Event called at the end of an epoch. | on_epoch_end | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def on_log(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called after logging the last logs."""
if self._callb... | Event called after logging the last logs. | on_log | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def on_train_end(
self,
args: Union[transformers.TrainingArguments, TrainingParams],
state: Optional[transformers.TrainerState] = None,
control: Optional[transformers.TrainerControl] = None,
**kwargs,
):
"""Event called at the end of training."""
if self._call... | Event called at the end of training. | on_train_end | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def _callback_disabled(self) -> bool:
"""Check if the callback should be disabled."""
if self._permanently_disabled:
return True
if self._skip_first_steps > 0 and self._step <= self._skip_first_steps:
return True
return False | Check if the callback should be disabled. | _callback_disabled | python | oumi-ai/oumi | src/oumi/core/callbacks/telemetry_callback.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/callbacks/telemetry_callback.py | Apache-2.0 |
def __call__(self, batch) -> dict[str, Any]:
"""Pads to the longest length present in the batch.
Args:
batch: List of batch items.
Returns:
Dict[str, torch.Tensor]: Processed batch.
"""
collation_inputs: dict[str, list[Any]] = collections.defaultdict(lis... | Pads to the longest length present in the batch.
Args:
batch: List of batch items.
Returns:
Dict[str, torch.Tensor]: Processed batch.
| __call__ | python | oumi-ai/oumi | src/oumi/core/collators/text_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/text_collator_with_padding.py | Apache-2.0 |
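A pure-Python sketch of the pad-to-longest collation that `__call__` describes. The key names follow the HF convention, and `label_ignore_index=-100` is the usual PyTorch cross-entropy ignore value; the real collator works on tensors via the tokenizer's padding utilities.

```python
def pad_to_longest(batch, pad_token_id=0, label_ignore_index=-100):
    """Right-pads each example to the longest `input_ids` in the batch."""
    max_len = max(len(item["input_ids"]) for item in batch)
    out = {"input_ids": [], "attention_mask": [], "labels": []}
    for item in batch:
        ids = item["input_ids"]
        n_pad = max_len - len(ids)
        out["input_ids"].append(ids + [pad_token_id] * n_pad)
        out["attention_mask"].append([1] * len(ids) + [0] * n_pad)
        # Padding positions never contribute to the loss.
        out["labels"].append(item.get("labels", ids) + [label_ignore_index] * n_pad)
    return out
```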
def _log_debug_example(
self,
batch: list[dict[str, Any]],
combined_batch: dict[str, Any],
) -> None:
"""Logs a debug example if debug is enabled.
Args:
batch: The original batch of data.
combined_batch: The collated batch after processing.
""... | Logs a debug example if debug is enabled.
Args:
batch: The original batch of data.
combined_batch: The collated batch after processing.
| _log_debug_example | python | oumi-ai/oumi | src/oumi/core/collators/text_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/text_collator_with_padding.py | Apache-2.0 |
def _update_max_lengths_and_log(self, *, max_input_ids_length: int):
"""Updates max length counters.
Also, logs a truncation warning if increment is large enough.
"""
_LOG_REL_INCREMENT = 0.1 # log if max length is up 10%
log_max_lengths: bool = False
if max_input_ids_... | Updates max length counters.
Also, logs a truncation warning if increment is large enough.
| _update_max_lengths_and_log | python | oumi-ai/oumi | src/oumi/core/collators/text_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/text_collator_with_padding.py | Apache-2.0 |
def __init__(
self,
tokenizer: BaseTokenizer,
instruction_prefix: str,
response_prefix: str,
debug: bool = False,
):
"""Custom collator for text LLM training.
Args:
tokenizer: The tokenizer used for encoding the data.
instruction_prefix: The p... | Custom collator for text LLM training.
Args:
tokenizer: The tokenizer used for encoding the data.
instruction_prefix: The prefix marking the beginning of the user instruction.
response_prefix: The prefix marking the beginning of the assistant response.
debug: If True, enables de... | __init__ | python | oumi-ai/oumi | src/oumi/core/collators/text_completions_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/text_completions_collator_with_padding.py | Apache-2.0 |
def __call__(self, batch) -> dict[str, Any]:
"""Pads to the longest length present in the batch.
Args:
batch: List of batch items.
Returns:
Dict[str, torch.Tensor]: Processed batch.
"""
for item in batch:
if _INPUT_IDS_KEY not in item:
... | Pads to the longest length present in the batch.
Args:
batch: List of batch items.
Returns:
Dict[str, torch.Tensor]: Processed batch.
| __call__ | python | oumi-ai/oumi | src/oumi/core/collators/text_completions_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/text_completions_collator_with_padding.py | Apache-2.0 |
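The completions collator supervises only the assistant response. A sketch of the label-masking step: everything up to and including the response-prefix token span is set to the ignore index. The naive subsequence search is an illustrative simplification of how the prefix could be located.

```python
def mask_instruction_tokens(input_ids, response_prefix_ids, ignore_index=-100):
    """Returns labels where only tokens after the response prefix are supervised."""
    labels = list(input_ids)
    n = len(response_prefix_ids)
    for start in range(len(input_ids) - n + 1):
        if input_ids[start : start + n] == response_prefix_ids:
            # Ignore everything up to and including the prefix itself.
            for i in range(start + n):
                labels[i] = ignore_index
            break
    return labels
```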
def _log_debug_example(
self, batch: list[dict[str, Any]], collated_text_inputs: dict[str, Any]
) -> None:
"""Logs an example of the data in each step for debugging purposes.
Args:
batch: The batch of examples to log.
collated_text_inputs: The collated inputs after p... | Logs an example of the data in each step for debugging purposes.
Args:
batch: The batch of examples to log.
collated_text_inputs: The collated inputs after processing.
| _log_debug_example | python | oumi-ai/oumi | src/oumi/core/collators/text_completions_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/text_completions_collator_with_padding.py | Apache-2.0 |
def _to_py(x):
"""Convert tensor-like objects to Python native types."""
if hasattr(x, "tolist"):
return x.tolist()
elif hasattr(x, "item"):
return x.item()
else:
return x | Convert tensor-like objects to Python native types. | _to_py | python | oumi-ai/oumi | src/oumi/core/collators/text_completions_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/text_completions_collator_with_padding.py | Apache-2.0 |
def __init__(
self,
tokenizer: BaseTokenizer,
*,
max_length: Optional[int],
truncation: bool = False,
label_ignore_index: Optional[int] = None,
allow_multi_image_inputs: bool = True,
main_image_feature: str = "pixel_values",
):
"""Custom collat... | Custom collator for multi-modal vision-language training.
Args:
tokenizer: The tokenizer used for encoding the data.
max_length: Padding length.
truncation: Whether to truncate long inputs to `max_length`.
If False, the long inputs are preserved as is even if they exceed
... | __init__ | python | oumi-ai/oumi | src/oumi/core/collators/vision_language_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/vision_language_collator_with_padding.py | Apache-2.0 |
def __call__(self, batch) -> dict[str, Any]:
"""Custom collator for multi-modal vision-language training.
Args:
batch: List of batch items.
Returns:
Dict[str, torch.Tensor]: Processed batch.
"""
# Collate batch prompts
collated_batch = self._text... | Custom collator for multi-modal vision-language training.
Args:
batch: List of batch items.
Returns:
Dict[str, torch.Tensor]: Processed batch.
| __call__ | python | oumi-ai/oumi | src/oumi/core/collators/vision_language_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/vision_language_collator_with_padding.py | Apache-2.0 |
def collate_images(self, images) -> torch.Tensor:
"""Collate images for multi-modal training.
Args:
images: List of images to collate.
Returns:
torch.Tensor: Batch of processed images.
"""
if len(images) == 0:
raise ValueError("No images foun... | Collate images for multi-modal training.
Args:
images: List of images to collate.
Returns:
torch.Tensor: Batch of processed images.
| collate_images | python | oumi-ai/oumi | src/oumi/core/collators/vision_language_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/vision_language_collator_with_padding.py | Apache-2.0 |
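A minimal sketch of `collate_images` for the simple case where every image in the batch shares one `(C, H, W)` shape, so batching reduces to a single `torch.stack`; variable-shape inputs (which the collator's `allow_multi_image_inputs` path handles) would need padding first.

```python
import torch


def collate_images(images: list[torch.Tensor]) -> torch.Tensor:
    """Stacks (C, H, W) image tensors into a (B, C, H, W) batch."""
    if len(images) == 0:
        raise ValueError("No images found in the batch.")
    # torch.stack requires every tensor to have the same shape.
    return torch.stack(images, dim=0)
```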
def __init__(
self,
tokenizer: BaseTokenizer,
processor_name: str,
*,
processor_kwargs: Optional[dict[str, Any]] = None,
max_length: Optional[int] = None,
truncation: bool = False,
truncation_side: str = "right",
label_ignore_index: Optional[int] =... | Initializes the vision-language SFT collator.
Args:
tokenizer: The tokenizer for encoding text. Should match the model's
tokenizer for proper token alignment.
processor_name: Name or path of the processor to use for feature extraction.
This should typica... | __init__ | python | oumi-ai/oumi | src/oumi/core/collators/vision_language_sft_collator.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/vision_language_sft_collator.py | Apache-2.0 |
def __call__(self, batch) -> dict[str, Any]:
"""Process a batch of conversation data into model-ready features.
This method converts serialized conversations into the tensor format expected
by vision-language models. It handles the complete pipeline:
1. Deserializes conversation JSON st... | Process a batch of conversation data into model-ready features.
This method converts serialized conversations into the tensor format expected
by vision-language models. It handles the complete pipeline:
1. Deserializes conversation JSON strings
2. Passes conversations to the feature gen... | __call__ | python | oumi-ai/oumi | src/oumi/core/collators/vision_language_sft_collator.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/vision_language_sft_collator.py | Apache-2.0 |
def _collate_individual_results(
self, results: list[dict[str, Any]]
) -> dict[str, Any]:
"""Collate individually processed results by padding to max dimensions.
Args:
results: List of feature dictionaries from individual conversation
processing
Returns:... | Collate individually processed results by padding to max dimensions.
Args:
results: List of feature dictionaries from individual conversation
processing
Returns:
Collated dictionary with padded tensors
Raises:
ValueError: If results have inc... | _collate_individual_results | python | oumi-ai/oumi | src/oumi/core/collators/vision_language_sft_collator.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/collators/vision_language_sft_collator.py | Apache-2.0 |
def _read_config_without_interpolation(config_path: str) -> str:
"""Reads a configuration file without interpolating variables.
Args:
config_path: The path to the configuration file.
Returns:
str: The stringified configuration.
"""
with open(config_path) as f:
stringified_c... | Reads a configuration file without interpolating variables.
Args:
config_path: The path to the configuration file.
Returns:
str: The stringified configuration.
| _read_config_without_interpolation | python | oumi-ai/oumi | src/oumi/core/configs/base_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/base_config.py | Apache-2.0 |
def from_yaml(
cls: type[T], config_path: Union[str, Path], ignore_interpolation=True
) -> T:
"""Loads a configuration from a YAML file.
Args:
config_path: The path to the YAML file.
ignore_interpolation: If True, then any interpolation variables in the
... | Loads a configuration from a YAML file.
Args:
config_path: The path to the YAML file.
ignore_interpolation: If True, then any interpolation variables in the
configuration file will be escaped.
Returns:
BaseConfig: The merged configuration object.
... | from_yaml | python | oumi-ai/oumi | src/oumi/core/configs/base_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/base_config.py | Apache-2.0 |
def from_str(cls: type[T], config_str: str) -> T:
"""Loads a configuration from a YAML string.
Args:
config_str: The YAML string.
Returns:
BaseConfig: The configuration object.
"""
schema = OmegaConf.structured(cls)
file_config = OmegaConf.create... | Loads a configuration from a YAML string.
Args:
config_str: The YAML string.
Returns:
BaseConfig: The configuration object.
| from_str | python | oumi-ai/oumi | src/oumi/core/configs/base_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/base_config.py | Apache-2.0 |
def from_yaml_and_arg_list(
cls: type[T],
config_path: Optional[str],
arg_list: list[str],
logger: Optional[logging.Logger] = None,
ignore_interpolation=True,
) -> T:
"""Loads a configuration from various sources.
If both YAML and arguments list are provided,... | Loads a configuration from various sources.
If both YAML and arguments list are provided, then
parameters specified in `arg_list` have higher precedence.
Args:
config_path: The path to the YAML file.
arg_list: Command line arguments list.
logger: (optional) ... | from_yaml_and_arg_list | python | oumi-ai/oumi | src/oumi/core/configs/base_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/base_config.py | Apache-2.0 |
def finalize_and_validate(self) -> None:
"""Finalizes and validates the top level params objects."""
for _, attr_value in self:
if isinstance(attr_value, BaseParams):
attr_value.finalize_and_validate()
self.__finalize_and_validate__() | Finalizes and validates the top level params objects. | finalize_and_validate | python | oumi-ai/oumi | src/oumi/core/configs/base_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/base_config.py | Apache-2.0 |
def __finalize_and_validate__(self) -> None:
"""Finalizes and validates the parameters of this object.
This method can be overridden by subclasses to implement custom
validation logic.
In case of validation errors, this method should raise a `ValueError`
or other appropriate ex... | Finalizes and validates the parameters of this object.
This method can be overridden by subclasses to implement custom
validation logic.
In case of validation errors, this method should raise a `ValueError`
or other appropriate exception.
| __finalize_and_validate__ | python | oumi-ai/oumi | src/oumi/core/configs/base_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/base_config.py | Apache-2.0 |
def __iter__(self) -> Iterator[tuple[str, Any]]:
"""Returns an iterator over field names and values.
Note: for an attribute to be a field, it must be declared in the
dataclass definition and have a type annotation.
"""
for param in dataclasses.fields(self):
yield par... | Returns an iterator over field names and values.
Note: for an attribute to be a field, it must be declared in the
dataclass definition and have a type annotation.
| __iter__ | python | oumi-ai/oumi | src/oumi/core/configs/base_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/base_config.py | Apache-2.0 |
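`__iter__` makes any params object unpackable into `(field_name, value)` pairs, which is what lets `finalize_and_validate` walk nested `BaseParams` attributes. A self-contained sketch; `OptimizerParams` is a hypothetical subclass for illustration.

```python
import dataclasses
from typing import Any, Iterator


@dataclasses.dataclass
class BaseParams:
    def __iter__(self) -> Iterator[tuple[str, Any]]:
        """Yields (field_name, value) for each annotated dataclass field."""
        for param in dataclasses.fields(self):
            yield param.name, getattr(self, param.name)


@dataclasses.dataclass
class OptimizerParams(BaseParams):
    lr: float = 1e-3
    weight_decay: float = 0.0
```

Because only annotated dataclass fields are yielded, plain class attributes without type hints are invisible to the iterator, exactly as the docstring notes.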
def messages(self) -> list[Message]:
"""Returns the messages in oumi format.
This will include the judge system prompt, and any few-shot examples.
"""
messages = [Message(content=self.system_prompt, role=Role.SYSTEM)]
return messages + [e.message for e in self.examples] | Returns the messages in oumi format.
This will include the judge system prompt, and any few-shot examples.
| messages | python | oumi-ai/oumi | src/oumi/core/configs/judge_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/judge_config.py | Apache-2.0 |
def load(cls: type, filename: str) -> "JudgeAttribute[T]":
"""Loads the judge attribute config from a file."""
path = Path(filename)
if not path.exists():
raise FileNotFoundError(path)
return cls.model_validate_json(path.read_text()) | Loads the judge attribute config from a file. | load | python | oumi-ai/oumi | src/oumi/core/configs/judge_config.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/judge_config.py | Apache-2.0 |
def find_model_hf_config(
model_name: str,
*,
trust_remote_code: bool,
revision: Optional[str] = None,
**kwargs: dict[str, Any],
):
"""Finds HF model config by model name."""
hf_config, unused_kwargs = transformers.AutoConfig.from_pretrained(
model_name,
trust_remote_code=tru... | Finds HF model config by model name. | find_model_hf_config | python | oumi-ai/oumi | src/oumi/core/configs/internal/supported_models.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/internal/supported_models.py | Apache-2.0 |
def _create_default_vlm_config(
*,
supports_multiple_images: bool = False,
pixel_values_variable_shape: bool = False,
pixel_values_first_dim_action: InternalFeatureFirstDimAction = (
InternalFeatureFirstDimAction.DROP_IF_DUMMY
),
) -> InternalModelConfig:
"""Creates a default configurati... | Creates a default configuration for vision-language models.
This function provides a base configuration that can be used for most VLMs.
It sets up the basic visual features and configurations that VLMs typically need.
Args:
supports_multiple_images: Whether the model can process multiple images in... | _create_default_vlm_config | python | oumi-ai/oumi | src/oumi/core/configs/internal/supported_models.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/internal/supported_models.py | Apache-2.0 |
def _create_molmo_vlm_config() -> InternalModelConfig:
"""Creates a config for Molmo VLM model.
Molmo uses a specific set of features including image masks and input indices
for handling images in the model. The config is set up to handle these
features appropriately.
"""
config = InternalModel... | Creates a config for Molmo VLM model.
Molmo uses a specific set of features including image masks and input indices
for handling images in the model. The config is set up to handle these
features appropriately.
| _create_molmo_vlm_config | python | oumi-ai/oumi | src/oumi/core/configs/internal/supported_models.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/internal/supported_models.py | Apache-2.0 |
def get_all_models_map() -> Mapping[
str, # model type
_ModelTypeInfo,
]:
"""Creates a map of all supported models with their configurations.
This is the central registry of the non-standard models supported by the Oumi
framework. Each entry maps a model type (as defined in the HuggingFace model c... | Creates a map of all supported models with their configurations.
This is the central registry of the non-standard models supported by the Oumi
framework. Each entry maps a model type (as defined in the HuggingFace model config)
to its corresponding configuration and metadata.
Returns:
An immut... | get_all_models_map | python | oumi-ai/oumi | src/oumi/core/configs/internal/supported_models.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/internal/supported_models.py | Apache-2.0 |
def is_custom_model(model_name: str) -> bool:
"""Determines whether the model is a custom model defined in oumi registry."""
result: bool = len(model_name) > 0 and REGISTRY.contains(
name=model_name, type=RegistryType.MODEL
)
return result | Determines whether the model is a custom model defined in oumi registry. | is_custom_model | python | oumi-ai/oumi | src/oumi/core/configs/internal/supported_models.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/internal/supported_models.py | Apache-2.0 |
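`is_custom_model` above is a thin membership check against a global registry. A self-contained sketch of that shape, with a toy registry standing in for Oumi's `REGISTRY` (the registry API here is an assumption for illustration, not Oumi's actual interface):

```python
class RegistrySketch:
    """Minimal stand-in for a model registry; the API shape is an assumption."""

    def __init__(self):
        self._models = {}

    def register(self, name, obj):
        self._models[name.lower()] = obj

    def contains(self, name):
        # Case-insensitive lookup, a common registry convention.
        return name.lower() in self._models


def is_custom_model_sketch(registry, model_name):
    # Mirrors the original: empty names short-circuit, otherwise ask the registry.
    return len(model_name) > 0 and registry.contains(model_name)


registry = RegistrySketch()
registry.register("MyCustomNet", object())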
def find_internal_model_config_using_model_name(
model_name: str, trust_remote_code: bool
) -> Optional[InternalModelConfig]:
"""Finds an internal model config for supported models using model name.
Args:
model_name: The model name, either:
- A HuggingFace model ID (e.g., "meta-llama/Ll... | Finds an internal model config for supported models using model name.
Args:
model_name: The model name, either:
- A HuggingFace model ID (e.g., "meta-llama/Llama-2-7b-hf")
- A local path to a model directory
- A custom model name registered in Oumi
trust_remote_c... | find_internal_model_config_using_model_name | python | oumi-ai/oumi | src/oumi/core/configs/internal/supported_models.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/internal/supported_models.py | Apache-2.0 |
def find_internal_model_config(
model_params: ModelParams,
) -> Optional[InternalModelConfig]:
"""Finds an internal model config for supported models using `ModelParams`.
Args:
model_params: The model parameters.
Returns:
Model config, or `None` if model is not recognized.
"""
... | Finds an internal model config for supported models using `ModelParams`.
Args:
model_params: The model parameters.
Returns:
Model config, or `None` if model is not recognized.
| find_internal_model_config | python | oumi-ai/oumi | src/oumi/core/configs/internal/supported_models.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/internal/supported_models.py | Apache-2.0 |
def __finalize_and_validate__(self) -> None:
"""Finalizes and validates the parameters of this object.
This method can be overridden by subclasses to implement custom
validation logic.
In case of validation errors, this method should raise a `ValueError`
or other appropriate ex... | Finalizes and validates the parameters of this object.
This method can be overridden by subclasses to implement custom
validation logic.
In case of validation errors, this method should raise a `ValueError`
or other appropriate exception.
| __finalize_and_validate__ | python | oumi-ai/oumi | src/oumi/core/configs/params/base_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/base_params.py | Apache-2.0 |
def __iter__(self) -> Iterator[tuple[str, Any]]:
"""Returns an iterator over field names and values.
Note: for an attribute to be a field, it must be declared in the
dataclass definition and have a type annotation.
"""
for param in dataclasses.fields(self):
yield par... | Returns an iterator over field names and values.
Note: for an attribute to be a field, it must be declared in the
dataclass definition and have a type annotation.
| __iter__ | python | oumi-ai/oumi | src/oumi/core/configs/params/base_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/base_params.py | Apache-2.0 |
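The `__iter__` row above turns a dataclass into an iterable of `(name, value)` pairs via `dataclasses.fields`, which, as the docstring notes, only sees attributes declared with a type annotation. A runnable sketch with hypothetical field names:

```python
import dataclasses
from typing import Any, Iterator


@dataclasses.dataclass
class TrainingParamsSketch:
    """Illustrative params dataclass; the field names are hypothetical."""

    learning_rate: float = 1e-4
    batch_size: int = 32

    def __iter__(self) -> Iterator[tuple[str, Any]]:
        # Only attributes declared with a type annotation become fields.
        for field in dataclasses.fields(self):
            yield field.name, getattr(self, field.name)


pairs = dict(TrainingParamsSketch())
```

Because the iterator yields 2-tuples, `dict(params)` gives a name-to-value mapping for free.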
def _finalize_and_validate(self, validated: Optional[set[int]]) -> None:
"""Recursively finalizes and validates the parameters."""
if validated is None:
validated = set()
# If this object has already been validated, return immediately
if id(self) in validated:
re... | Recursively finalizes and validates the parameters. | _finalize_and_validate | python | oumi-ai/oumi | src/oumi/core/configs/params/base_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/base_params.py | Apache-2.0 |
def get_literal_value(self) -> Literal["first_exhausted", "all_exhausted"]:
"""Returns a literal value of the enum."""
if self.value == MixtureStrategy.FIRST_EXHAUSTED:
return "first_exhausted"
elif self.value == MixtureStrategy.ALL_EXHAUSTED:
return "all_exhausted"
... | Returns a literal value of the enum. | get_literal_value | python | oumi-ai/oumi | src/oumi/core/configs/params/data_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/data_params.py | Apache-2.0 |
def get_split(self, split: DatasetSplit) -> DatasetSplitParams:
"""A public getting for individual dataset splits."""
if split == DatasetSplit.TRAIN:
return self.train
elif split == DatasetSplit.TEST:
return self.test
elif split == DatasetSplit.VALIDATION:
... | A public getter for individual dataset splits. | get_split | python | oumi-ai/oumi | src/oumi/core/configs/params/data_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/data_params.py | Apache-2.0
def get_evaluation_backend(self) -> EvaluationBackend:
"""Returns the evaluation backend as an Enum."""
if not self.evaluation_backend:
raise ValueError(
"Missing `evaluation_backend`. When running evaluations, it is "
"necessary to specify the evaluation back... | Returns the evaluation backend as an Enum. | get_evaluation_backend | python | oumi-ai/oumi | src/oumi/core/configs/params/evaluation_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/evaluation_params.py | Apache-2.0 |
def to_torch(self) -> torch_fsdp.ShardingStrategy:
"""Convert the enum to the corresponding torch_fsdp.ShardingStrategy."""
strategy_map = {
ShardingStrategy.FULL_SHARD: torch_fsdp.ShardingStrategy.FULL_SHARD,
ShardingStrategy.SHARD_GRAD_OP: torch_fsdp.ShardingStrategy.SHARD_GRAD... | Convert the enum to the corresponding torch_fsdp.ShardingStrategy. | to_torch | python | oumi-ai/oumi | src/oumi/core/configs/params/fsdp_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/fsdp_params.py | Apache-2.0 |
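The `to_torch` rows above all use the same technique: an explicit dict mapping one enum onto another library's enum. A torch-free sketch of that mapping pattern (both enums here are stand-ins, not the real `torch_fsdp` types):

```python
import enum


class TorchLikeSharding(enum.Enum):
    """Stand-in for torch_fsdp.ShardingStrategy; values here are arbitrary."""

    FULL_SHARD = 1
    SHARD_GRAD_OP = 2


class ShardingStrategySketch(enum.Enum):
    FULL_SHARD = "FULL_SHARD"
    SHARD_GRAD_OP = "SHARD_GRAD_OP"

    def to_torch_like(self) -> TorchLikeSharding:
        # An explicit dict keeps the enum-to-enum mapping easy to audit.
        strategy_map = {
            ShardingStrategySketch.FULL_SHARD: TorchLikeSharding.FULL_SHARD,
            ShardingStrategySketch.SHARD_GRAD_OP: TorchLikeSharding.SHARD_GRAD_OP,
        }
        if self not in strategy_map:
            raise ValueError(f"Unsupported sharding strategy: {self}")
        return strategy_map[self]
```

Keeping the framework-native enum out of config classes means the config layer never imports the heavy dependency directly.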
def to_torch(self) -> torch_fsdp.StateDictType:
"""Converts to the corresponding torch.distributed.fsdp.StateDictType."""
state_dict_map = {
StateDictType.FULL_STATE_DICT: torch_fsdp.StateDictType.FULL_STATE_DICT,
StateDictType.SHARDED_STATE_DICT: (
torch_fsdp.Sta... | Converts to the corresponding torch.distributed.fsdp.StateDictType. | to_torch | python | oumi-ai/oumi | src/oumi/core/configs/params/fsdp_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/fsdp_params.py | Apache-2.0 |
def to_torch(self) -> Optional[torch_fsdp.BackwardPrefetch]:
"""Convert the enum to the corresponding torch_fsdp.BackwardPrefetch."""
map = {
BackwardPrefetch.BACKWARD_PRE: torch_fsdp.BackwardPrefetch.BACKWARD_PRE,
BackwardPrefetch.BACKWARD_POST: torch_fsdp.BackwardPrefetch.BACKW... | Convert the enum to the corresponding torch_fsdp.BackwardPrefetch. | to_torch | python | oumi-ai/oumi | src/oumi/core/configs/params/fsdp_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/fsdp_params.py | Apache-2.0 |
def to_hf_trainer_kwargs(self) -> dict[str, Any]:
"""Converts GrpoParams to TRL's GRPOConfig kwargs."""
result = {}
if len(self.model_init_kwargs) > 0:
result["model_init_kwargs"] = self.model_init_kwargs
if self.max_prompt_length is not None:
result["max_prompt_l... | Converts GrpoParams to TRL's GRPOConfig kwargs. | to_hf_trainer_kwargs | python | oumi-ai/oumi | src/oumi/core/configs/params/grpo_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/grpo_params.py | Apache-2.0 |
def __finalize_and_validate__(self):
"""Finalizes and validates final config params."""
# If the user didn't specify a LoRA adapter, check to see if the dir/repo
# specified by `model_name` contains an adapter, and set `adapter_name` if so.
if self.adapter_model is None:
# Th... | Finalizes and validates final config params. | __finalize_and_validate__ | python | oumi-ai/oumi | src/oumi/core/configs/params/model_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/model_params.py | Apache-2.0 |
def get_literal_value(
self,
) -> Literal[
"default",
"random",
"gaussian",
"eva",
"pissa",
"pissa_niter_[number of iters]",
"loftq",
"olora",
]:
"""Returns a literal value of the enum."""
if self.value not in {
... | Returns a literal value of the enum. | get_literal_value | python | oumi-ai/oumi | src/oumi/core/configs/params/peft_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/peft_params.py | Apache-2.0 |
def to_bits_and_bytes(self) -> BitsAndBytesConfig:
"""Creates a configuration for quantized models via BitsAndBytes.
The resulting configuration uses the instantiated peft parameters.
"""
quantization_config = BitsAndBytesConfig(
load_in_4bit=self.q_lora_bits == 4,
... | Creates a configuration for quantized models via BitsAndBytes.
The resulting configuration uses the instantiated peft parameters.
| to_bits_and_bytes | python | oumi-ai/oumi | src/oumi/core/configs/params/peft_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/peft_params.py | Apache-2.0 |
def _check_attribute_ids(self, attribute_ids: set[str], id: str):
"""Check if the attribute ID is already in the set."""
if id in attribute_ids:
raise ValueError(
f"GeneralSynthesisParams contains duplicate attribute IDs: {id}"
)
attribute_ids.add(id) | Check if the attribute ID is already in the set. | _check_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
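`_check_attribute_ids` above implements duplicate detection with a shared accumulator set: raise on a repeat, otherwise record the ID. A standalone sketch of the same check (function name hypothetical):

```python
def add_unique_id(seen_ids: set, attribute_id: str) -> None:
    """Raise if the ID was already registered, then record it."""
    if attribute_id in seen_ids:
        raise ValueError(f"Duplicate attribute ID: {attribute_id}")
    seen_ids.add(attribute_id)


seen = set()
add_unique_id(seen, "topic")
try:
    add_unique_id(seen, "topic")  # second registration must fail
    raised = False
except ValueError:
    raised = True
```

Threading one `seen` set through every `_check_*_attribute_ids` helper is what lets the class enforce uniqueness across dataset, document, example, permutable, generated, and transformed sources together.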
def _check_dataset_source_attribute_ids(self, all_attribute_ids: set[str]) -> None:
"""Check attribute IDs from dataset sources for uniqueness."""
if self.input_data is None:
return
if len(self.input_data) == 0:
raise ValueError("GeneralSynthesisParams.input_data cannot ... | Check attribute IDs from dataset sources for uniqueness. | _check_dataset_source_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def _check_document_source_attribute_ids(self, all_attribute_ids: set[str]) -> None:
"""Check attribute IDs from document sources for uniqueness."""
if self.input_documents is None:
return
if len(self.input_documents) == 0:
raise ValueError("GeneralSynthesisParams.input_... | Check attribute IDs from document sources for uniqueness. | _check_document_source_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def _check_example_source_attribute_ids(self, all_attribute_ids: set[str]) -> None:
"""Check attribute IDs from example sources for uniqueness."""
if self.input_examples is None:
return
if len(self.input_examples) == 0:
raise ValueError("GeneralSynthesisParams.input_exam... | Check attribute IDs from example sources for uniqueness. | _check_example_source_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def _check_permutable_attribute_ids(self, all_attribute_ids: set[str]) -> None:
"""Check attribute IDs from permutable attributes for uniqueness."""
if self.permutable_attributes is None:
return
if len(self.permutable_attributes) == 0:
raise ValueError(
"... | Check attribute IDs from permutable attributes for uniqueness. | _check_permutable_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def _check_generated_attribute_ids(self, all_attribute_ids: set[str]) -> None:
"""Check attribute IDs from generated attributes for uniqueness."""
if self.generated_attributes is None:
return
if len(self.generated_attributes) == 0:
raise ValueError(
"Gene... | Check attribute IDs from generated attributes for uniqueness. | _check_generated_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def _check_transformed_attribute_ids(self, all_attribute_ids: set[str]) -> None:
"""Check attribute IDs from transformed attributes for uniqueness."""
if self.transformed_attributes is None:
return
if len(self.transformed_attributes) == 0:
raise ValueError(
... | Check attribute IDs from transformed attributes for uniqueness. | _check_transformed_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def _check_combination_sampling_sample_rates(self) -> None:
"""Validate that the combination sample rates are <= 1.0."""
if self.combination_sampling is None:
return
if len(self.combination_sampling) == 0:
raise ValueError(
"GeneralSynthesisParams.combina... | Validate that the combination sample rates are <= 1.0. | _check_combination_sampling_sample_rates | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def _check_passthrough_attribute_ids(self) -> None:
"""Validate that passthrough attributes are non-empty when defined."""
if self.passthrough_attributes is None:
return
if len(self.passthrough_attributes) == 0:
raise ValueError(
"GeneralSynthesisParams.p... | Validate that passthrough attributes are non-empty when defined. | _check_passthrough_attribute_ids | python | oumi-ai/oumi | src/oumi/core/configs/params/synthesis_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/synthesis_params.py | Apache-2.0 |
def to_hf(self):
"""Converts Oumi config to HuggingFace's TrainingArguments."""
save_strategy: str = "no"
if self.save_epoch:
save_strategy = "epoch"
if self.save_steps > 0:
save_strategy = "steps"
dataloader_num_workers = 0
if isinstance(self.dat... | Converts Oumi config to HuggingFace's TrainingArguments. | to_hf | python | oumi-ai/oumi | src/oumi/core/configs/params/training_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/training_params.py | Apache-2.0 |
def _get_hf_report_to(self) -> list[str]:
"""Gets the list of reporting tools enabled for the current instance.
Returns:
list: A list of reporting tools enabled.
Possible values are "wandb", "tensorboard", or "none".
"""
report_to = []
if self.enable_... | Gets the list of reporting tools enabled for the current instance.
Returns:
list: A list of reporting tools enabled.
Possible values are "wandb", "tensorboard", or "none".
| _get_hf_report_to | python | oumi-ai/oumi | src/oumi/core/configs/params/training_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/training_params.py | Apache-2.0 |
def telemetry_dir(self) -> Optional[Path]:
"""Returns the telemetry stats output directory."""
result: Optional[Path] = None
if self.telemetry.telemetry_dir:
result = Path(self.telemetry.telemetry_dir)
if self.output_dir:
output_dir = Path(self.output_dir)
... | Returns the telemetry stats output directory. | telemetry_dir | python | oumi-ai/oumi | src/oumi/core/configs/params/training_params.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/configs/params/training_params.py | Apache-2.0 |
def __init__(
self,
*,
dataset_name: Optional[str] = None,
dataset_path: Optional[str] = None,
split: Optional[str] = None,
tokenizer: Optional[BaseTokenizer] = None,
return_tensors: bool = False,
**kwargs,
) -> None:
"""Initializes a new insta... | Initializes a new instance of the BaseExperimentalDpoDataset class. | __init__ | python | oumi-ai/oumi | src/oumi/core/datasets/base_dpo_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_dpo_dataset.py | Apache-2.0 |
def transform_preference(self, samples: dict) -> dict:
"""Transform the samples to the Oumi format."""
prompt = samples[_PROMPT_KEY]
chosen_chat = samples[_CHOSEN_KEY]
rejected_chat = samples[_REJECTED_KEY]
chosen_chat_response = self._extract_from_chat_format(chosen_chat)
... | Transform the samples to the Oumi format. | transform_preference | python | oumi-ai/oumi | src/oumi/core/datasets/base_dpo_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_dpo_dataset.py | Apache-2.0 |
def _extract_from_chat_format(self, sample: dict) -> str:
"""Extract the last 'assistant' turn in the chat."""
for turn in sample[::-1]:
if turn[_ROLE] == _ASSISTANT:
return turn[_CONTENT]
raise ValueError("No chat turn was found with an 'assistant' role.") | Extract the last 'assistant' turn in the chat. | _extract_from_chat_format | python | oumi-ai/oumi | src/oumi/core/datasets/base_dpo_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_dpo_dataset.py | Apache-2.0 |
def __init__(
self,
*,
dataset_name: Optional[str] = None,
dataset_path: Optional[str] = None,
split: Optional[str] = None,
**kwargs,
) -> None:
"""Initializes a new instance of the BaseExperimentalGrpoDataset class."""
super().__init__(
da... | Initializes a new instance of the BaseExperimentalGrpoDataset class. | __init__ | python | oumi-ai/oumi | src/oumi/core/datasets/base_grpo_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_grpo_dataset.py | Apache-2.0 |
def _transform_grpo_example(self, example: Union[dict, pd.Series]) -> dict:
"""Validate and transform the GRPO sample into Python `dict`."""
for required_key in (_PROMPT_KEY, _COMPLETION_KEY):
if required_key not in example:
raise ValueError(
f"Example doe... | Validate and transform the GRPO sample into Python `dict`. | _transform_grpo_example | python | oumi-ai/oumi | src/oumi/core/datasets/base_grpo_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_grpo_dataset.py | Apache-2.0 |
def transform_conversation(self, sample: Union[dict, pd.Series]) -> Conversation:
"""Converts the input sample to a Conversation.
Args:
sample (Union[dict, pd.Series]): The input example.
Returns:
Conversation: The resulting conversation.
"""
# Contains... | Converts the input sample to a Conversation.
Args:
sample (Union[dict, pd.Series]): The input example.
Returns:
Conversation: The resulting conversation.
| transform_conversation | python | oumi-ai/oumi | src/oumi/core/datasets/base_grpo_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_grpo_dataset.py | Apache-2.0 |
def __init__(
self,
*,
dataset_name: Optional[str] = None,
dataset_path: Optional[str] = None,
subset: Optional[str] = None,
split: Optional[str] = None,
trust_remote_code: bool = False,
stream: bool = True,
**kwargs,
) -> None:
"""Init... | Initializes a new instance of the BaseIterableDataset class. | __init__ | python | oumi-ai/oumi | src/oumi/core/datasets/base_iterable_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_iterable_dataset.py | Apache-2.0 |
def to_hf(self, return_iterable: bool = True) -> datasets.IterableDataset:
"""Converts the dataset to a Hugging Face dataset."""
if not return_iterable:
raise NotImplementedError("Only returning IterableDataset is supported.")
return datasets.IterableDataset.from_generator(self.__ite... | Converts the dataset to a Hugging Face dataset. | to_hf | python | oumi-ai/oumi | src/oumi/core/datasets/base_iterable_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_iterable_dataset.py | Apache-2.0 |
def _load_data(self) -> Iterable[Any]:
"""Loads the dataset from the specified source."""
if self.dataset_path:
result = self._load_local_dataset(self.dataset_path)
else:
result = self._load_hf_hub_dataset()
return result | Loads the dataset from the specified source. | _load_data | python | oumi-ai/oumi | src/oumi/core/datasets/base_iterable_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_iterable_dataset.py | Apache-2.0 |
def _load_hf_hub_dataset(self) -> Iterable[Any]:
"""Loads the dataset from the specified source."""
return datasets.load_dataset(
path=self.dataset_name,
name=self.dataset_subset,
split=self.split,
streaming=self.stream,
trust_remote_code=self.... | Loads the dataset from the specified source. | _load_hf_hub_dataset | python | oumi-ai/oumi | src/oumi/core/datasets/base_iterable_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_iterable_dataset.py | Apache-2.0 |
def __init__(
self,
*,
dataset_name: Optional[str],
dataset_path: Optional[str] = None,
subset: Optional[str] = None,
split: Optional[str] = None,
trust_remote_code: bool = False,
transform_num_workers: Optional[Union[str, int]] = None,
**kwargs,
... | Initializes a new instance of the BaseDataset class. | __init__ | python | oumi-ai/oumi | src/oumi/core/datasets/base_map_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_map_dataset.py | Apache-2.0 |
def __getitem__(self, idx: int) -> dict:
"""Gets the item at the specified index.
Args:
idx (int): The index of the item to retrieve.
Returns:
dict: The item at the specified index.
"""
sample = self.raw(idx)
processed = self.transform(sample)
... | Gets the item at the specified index.
Args:
idx (int): The index of the item to retrieve.
Returns:
dict: The item at the specified index.
| __getitem__ | python | oumi-ai/oumi | src/oumi/core/datasets/base_map_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_map_dataset.py | Apache-2.0 |
def _as_generator_over_shards(
self, shards: list[_ExamplesIndicesRange]
) -> Generator[dict[str, Any], None, None]:
"""Returns a sharded generator for the dataset."""
for shard in shards:
for idx in range(shard.start_index, shard.end_index):
yield self[idx] | Returns a sharded generator for the dataset. | _as_generator_over_shards | python | oumi-ai/oumi | src/oumi/core/datasets/base_map_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_map_dataset.py | Apache-2.0 |
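`_as_generator_over_shards` above lazily yields examples shard by shard from half-open index ranges. A standalone sketch over a plain list (the `ShardRange` dataclass stands in for `_ExamplesIndicesRange`):

```python
from dataclasses import dataclass
from typing import Any, Generator


@dataclass
class ShardRange:
    """Half-open index range [start_index, end_index) for one shard."""

    start_index: int
    end_index: int


def iter_over_shards(data: list, shards: list) -> Generator[Any, None, None]:
    # Yield elements shard by shard without materializing a combined list.
    for shard in shards:
        for idx in range(shard.start_index, shard.end_index):
            yield data[idx]


items = list(iter_over_shards(list(range(10)), [ShardRange(0, 3), ShardRange(7, 10)]))
```

Because the generator indexes lazily, each worker can be handed its own list of ranges and never touches the rest of the dataset.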
def _detect_features_and_estimate_element_size_bytes(
self, samples_iter: Iterable[dict[str, Any]]
) -> _InferredFeatureMap:
"""Returns an estimate of max element size in bytes."""
samples_list = list(samples_iter)
def _dummy_generator():
yield from samples_list
... | Returns an estimate of max element size in bytes. | _detect_features_and_estimate_element_size_bytes | python | oumi-ai/oumi | src/oumi/core/datasets/base_map_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_map_dataset.py | Apache-2.0 |
def _compute_effective_transform_num_workers(self) -> int:
"""Returns an effective number of dataset transform workers.
Guaranteed to be a positive integer (>= 1). 1 if no parallelism is used.
"""
num_proc = None
if self.transform_num_workers is not None:
if isinstan... | Returns an effective number of dataset transform workers.
Guaranteed to be a positive integer (>= 1). 1 if no parallelism is used.
| _compute_effective_transform_num_workers | python | oumi-ai/oumi | src/oumi/core/datasets/base_map_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_map_dataset.py | Apache-2.0 |
def to_hf(
self, return_iterable: bool = False
) -> Union[datasets.Dataset, datasets.IterableDataset]:
"""Converts the dataset to a Hugging Face dataset.
Args:
return_iterable: Whether to return an iterable dataset.
Iterable datasets aren't cached to disk, which ... | Converts the dataset to a Hugging Face dataset.
Args:
return_iterable: Whether to return an iterable dataset.
Iterable datasets aren't cached to disk, which can sometimes be
advantageous. For example, if transformed examples are very large
(e.g., if `... | to_hf | python | oumi-ai/oumi | src/oumi/core/datasets/base_map_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_map_dataset.py | Apache-2.0 |
def _load_data(self) -> pd.DataFrame:
"""Loads the dataset from the specified source.
Returns:
pd.DataFrame: The loaded dataset.
"""
if self.dataset_path:
result = self._load_local_dataset(self.dataset_path)
else:
result = self._load_hf_hub_dataset()
... | Loads the dataset from the specified source.
Returns:
pd.DataFrame: The loaded dataset.
| _load_data | python | oumi-ai/oumi | src/oumi/core/datasets/base_map_dataset.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/core/datasets/base_map_dataset.py | Apache-2.0 |