---
library_name: transformers
base_model: BytArch/source-mini
tags:
- generated_from_trainer
model-index:
- name: source-mini
  results: []
license: mit
datasets:
- BytArch/ub-data-structures-2025-1
- BytArch/ub-networking-dataset-2024-2
- BytArch/ub-web-dev-2025-2
- BytArch/source-mini-id
language:
- en
metrics:
- accuracy
- character
new_version: BytArch/source-mini
pipeline_tag: text-generation
---

# source-mini

This model is a fine-tuned version of [BytArch/source-mini](https://huggingface.co/BytArch/source-mini) on the BytArch/ub-data-structures-2025-1, BytArch/ub-networking-dataset-2024-2, BytArch/ub-web-dev-2025-2, and BytArch/source-mini-id datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (reconstructed as a `TrainingArguments` sketch below):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
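
### Reproducing the training configuration

A minimal sketch of the hyperparameters above expressed as `transformers.TrainingArguments`. The `output_dir` value is hypothetical, and `fp16=True` is an assumption standing in for "Native AMP" (the card does not say whether fp16 or bf16 precision was used); every other value mirrors the list in Training hyperparameters.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="source-mini-finetune",  # hypothetical: not stated in the card
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 8 per device x 2 steps = total batch size 16
    optim="adamw_torch",            # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,  # assumption: one way to enable Native AMP mixed precision
)
```

Passing these arguments to a `Trainer` together with the datasets listed in the metadata would approximate the run described above.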
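
## How to use

A minimal inference sketch, assuming the standard `transformers` text-generation pipeline implied by this card's `pipeline_tag`; the prompt and `max_new_tokens` value are illustrative only, not taken from the card.

```python
from transformers import pipeline

# Load the model and its tokenizer from the Hugging Face Hub.
generator = pipeline("text-generation", model="BytArch/source-mini")

# Illustrative prompt; the card does not specify an expected prompt format.
output = generator("Explain how a singly linked list works.", max_new_tokens=128)
print(output[0]["generated_text"])
```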