---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets:
- codezakh/EFAGen-Llama-3.1-8B-Instruct-Training-Data
library_name: transformers
license: other
pipeline_tag: text-generation
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: llama_factory_output_dir
  results: []
---
[Paper](https://arxiv.org/abs/2504.09763)

Project Page: https://zaidkhan.me/EFAGen
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), trained to generate Executable Functional Abstractions (EFAs) for math problems.

The training data for this model can be found [here](https://huggingface.co/datasets/codezakh/EFAGen-Llama-3.1-8B-Instruct-Training-Data). The model was trained using Llama-Factory, and the data is already in Alpaca instruction-tuning format. The "Instruction" field contains a prompt with instructions defining the EFA protocol and a set of static in-context examples (identical across all rows). The "Response" field contains the code of the EFA.
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
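For reference, these settings correspond roughly to the following Hugging Face `TrainingArguments`. This is only an illustrative mapping (training was actually run through Llama-Factory, and LoRA-specific options are not shown); the Adam betas and epsilon above are the `Trainer` defaults.

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; not the actual Llama-Factory config.
training_args = TrainingArguments(
    output_dir="llama_factory_output_dir",
    per_device_train_batch_size=1,   # train_batch_size
    per_device_eval_batch_size=1,    # eval_batch_size
    gradient_accumulation_steps=2,   # 1 per device x 4 GPUs x 2 steps = effective batch size 8
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision; bf16 may have been used instead on Ampere+ GPUs
)
```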
### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1