Add pipeline tag, code-translation tag, project page link and improve description
#1 by nielsr (HF Staff) · opened
README.md CHANGED

```diff
@@ -1,19 +1,22 @@
 ---
+base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
 library_name: transformers
 license: apache-2.0
-base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
 tags:
 - llama-factory
 - full
 - generated_from_trainer
+- code-translation
 model-index:
 - name: ex33_armv8
   results: []
+pipeline_tag: translation
 ---
 
 Check out more datails here:
 - Paper: https://arxiv.org/abs/2506.14606
 - Code: https://github.com/ahmedheakl/Guaranteed-Guess
+- Project page: https://ahmedheakl.github.io/Guaranteed-Guess/
 
 # ex33_armv8
 
@@ -21,15 +24,15 @@ This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https:/
 
 ## Model description
 
-
+This model is part of the Guaranteed Guess (GG) pipeline, which tackles the challenging problem of CISC-to-RISC transpilation. GG combines the power of pre-trained large language models (LLMs) with software testing to generate and validate code translations between instruction set architectures (ISAs). This model is fine-tuned to translate from x86 (CISC) to ARMv8 (RISC).
 
 ## Intended uses & limitations
 
-
+This model is intended for researchers and developers interested in ISA transpilation, particularly CISC-to-RISC translation. It can be used to translate x86 assembly code to ARMv8 assembly code. However, the model's performance may vary depending on the complexity and optimization level of the input code.
 
 ## Training and evaluation data
 
-
+The model was trained and evaluated on the anghabench_armv8_O2_p1, the anghabench_armv8_O2_p2 and the stack_armv8_O2 datasets. These datasets include code snippets and programs designed to test the model's ability to translate between x86 and ARMv8 architectures.
 
 ## Training procedure
 
@@ -59,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.50.0
 - Pytorch 2.6.0+cu124
 - Datasets 3.4.1
-- Tokenizers 0.21.0
+- Tokenizers 0.21.0
```