---
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
license: apache-2.0
tags:
- llama-factory
- full
- generated_from_trainer
- code-translation
model-index:
- name: ex33_armv8
results: []
pipeline_tag: translation
---
Check out more details here:
- Paper: https://arxiv.org/abs/2506.14606
- Code: https://github.com/ahmedheakl/Guaranteed-Guess
- Project page: https://ahmedheakl.github.io/Guaranteed-Guess/
# ex33_armv8
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) on the anghabench_armv8_O2_p1, anghabench_armv8_O2_p2, and stack_armv8_O2 datasets.
## Model description
This model is part of the Guaranteed Guess (GG) pipeline, which tackles the challenging problem of CISC-to-RISC transpilation. GG combines the power of pre-trained large language models (LLMs) with software testing to generate and validate code translations between instruction set architectures (ISAs). This model is fine-tuned to translate from x86 (CISC) to ARMv8 (RISC).
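As a rough illustration of this generate-and-validate idea (a sketch, not the actual GG implementation; the callables and the candidate count are placeholders), the loop might look like:

```python
from typing import Callable, Optional

def guaranteed_guess(
    x86_asm: str,
    translate: Callable[[str], str],      # LLM call: x86 asm -> candidate ARMv8 asm
    passes_tests: Callable[[str], bool],  # assembles the candidate and runs the unit tests
    num_candidates: int = 4,              # illustrative sampling budget, not from the paper
) -> Optional[str]:
    """Return the first candidate translation that passes validation, else None."""
    for _ in range(num_candidates):
        candidate = translate(x86_asm)
        if passes_tests(candidate):
            return candidate  # validated ("guaranteed") guess
    return None               # no candidate survived testing
```

Passing the model call and the test harness in as callables keeps the sketch self-contained; in the GG pipeline, the validation step is the software test suite that certifies each "guess".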
## Intended uses & limitations
This model is intended for researchers and developers interested in ISA transpilation, particularly CISC-to-RISC translation. It can be used to translate x86 assembly code to ARMv8 assembly code. However, the model's performance may vary depending on the complexity and optimization level of the input code.
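A minimal inference sketch with the `transformers` chat API; the repo id and the prompt wording below are assumptions (the card does not specify a prompt format), based on the Qwen2.5-Coder-Instruct chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/gg-armv8-O2"  # placeholder: replace with the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

x86_asm = "..."  # x86 assembly to translate (e.g. gcc -S -O2 output) goes here

# Assumed prompt format: an instruct-style request carrying the x86 assembly.
messages = [{"role": "user", "content": f"Translate this x86 assembly to ARMv8 assembly:\n{x86_asm}"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```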
## Training and evaluation data
The model was trained and evaluated on the anghabench_armv8_O2_p1, anghabench_armv8_O2_p2, and stack_armv8_O2 datasets. These datasets include code snippets and programs designed to test the model's ability to translate between x86 and ARMv8 architectures.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
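For reference, a rough `transformers.TrainingArguments` equivalent of the list above (a sketch only; the run was launched via LLaMA-Factory, so field names and the output directory are approximations):

```python
from transformers import TrainingArguments

# Sketch of the logged hyperparameters; per-device values are as listed above,
# and the effective batch sizes arise from the 8-GPU multi-GPU setup.
args = TrainingArguments(
    output_dir="ex33_armv8",          # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=12,   # x 8 GPUs x 2 accumulation steps = 192 total
    per_device_eval_batch_size=8,     # x 8 GPUs = 64 total
    gradient_accumulation_steps=2,
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```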
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0