unknown committed on
Commit 3d72cdf
Parent(s): f6ee434
Initial

README.md CHANGED
```diff
@@ -337,4 +337,4 @@ $ cat ./Scripts/Exp/Perf/Fig10.csv
 
 Users can run this experiment in different software environments, but they must ensure that the PyTorch version is compatible with the CUDA version in those environments. The experiment can also be conducted in different hardware environments, but adjustments to the batch size for fine-tuning and inference are necessary based on the available GPU memory. We have fixed the random seed and parameters in the provided scripts to ensure consistent code generation accuracy within the same hardware and software environment. However, if the model is re-fine-tuned under different hardware or software environments, the accuracy of the newly fine-tuned model may exhibit slight variations.
 
-We further conducted code generation tests on a machine with **an Nvidia A100 GPU (80GB memory)** and **CUDA Version == 12.0**. Under the provided Conda virtual environment, the experimental results showed a **25-minute reduction in the time overhead** of the code generation process (Fig. 7), as the previous setup with 8 V100 GPUs
+We further conducted code generation tests on a machine with **an Nvidia A100 GPU (80GB memory)** and **CUDA Version == 12.0**. Under the provided Conda virtual environment, the experimental results showed a **25-minute reduction in the time overhead** of the code generation process (Fig. 7). This reduction is due to the A100 GPU's higher computational efficiency compared to the V100, as well as the additional time cost incurred in the previous setup by synchronization across the 8 V100 GPUs. Notably, **code accuracy remained unchanged** (Fig. 8, Fig. 9, Table 2, Table 3). This confirms that our experiment is adaptable across different hardware and software environments.
```
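The diff above mentions fixing the random seed for reproducibility and adjusting the batch size to the available GPU memory. A minimal sketch of both ideas is shown below; the `scale_batch_size` heuristic and its memory figures are illustrative assumptions, not values taken from the provided scripts, and the real scripts would additionally seed the PyTorch and CUDA generators (`torch.manual_seed`, `torch.cuda.manual_seed_all`):

```python
import random


def set_seed(seed: int = 42) -> None:
    """Fix the Python RNG seed so repeated runs generate the same values.

    In the actual experiment the PyTorch and CUDA generators would be
    seeded the same way to keep code generation accuracy consistent.
    """
    random.seed(seed)


def scale_batch_size(base_batch: int, base_mem_gb: float, avail_mem_gb: float) -> int:
    """Hypothetical heuristic: scale the fine-tuning/inference batch size
    linearly with available GPU memory, never dropping below 1."""
    return max(1, int(base_batch * avail_mem_gb / base_mem_gb))


# Example: a batch size tuned for a 32 GB V100 scaled up for an 80 GB A100.
set_seed(42)
print(scale_batch_size(8, base_mem_gb=32.0, avail_mem_gb=80.0))
```

Any such heuristic is only a starting point: the safe batch size also depends on sequence length and optimizer state, so it should be validated empirically on the target GPU.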