Commit f6ee434 · 1 Parent(s): de6b003
Files changed (1):
  1. README.md +1 -2
README.md CHANGED
@@ -337,5 +337,4 @@ $ cat ./Scripts/Exp/Perf/Fig10.csv
 
 Users can run this experiment in different software environments, but they must ensure that the PyTorch version is compatible with the CUDA version in those environments. The experiment can also be conducted in different hardware environments, but the batch sizes for fine-tuning and inference must be adjusted to the available GPU memory. We have fixed the random seed and parameters in the provided scripts to ensure consistent code generation accuracy within the same hardware and software environment. However, if the model is re-fine-tuned under different hardware or software environments, the accuracy of the newly fine-tuned model may vary slightly.
 
-
-**We further conducted code generation tests on a machine with an Nvidia A100 GPU (80GB memory). With a consistent software environment, the experimental results demonstrated a reduction in the time overhead of the code generation process (Fig. 7), as the previous setup with 8 V100 GPUs incurred higher time overhead due to the need for synchronization between multiple GPUs. However, the code accuracy remained unchanged (Fig. 8, Table. 2, Fig. 9, Table. 3). This confirms that our experiment can also be executed across different hardware environments.**
+We further conducted code generation tests on a machine with **an Nvidia A100 GPU (80 GB memory)** and **CUDA Version == 12.0**. Under the provided Conda virtual environment, the experimental results showed a **25-minute reduction in the time overhead** of the code generation process (Fig. 7), as the previous setup with 8 V100 GPUs incurred greater time costs due to synchronization between the GPUs. Notably, **code accuracy remained unchanged** (Fig. 8, Fig. 9, Table 2, Table 3). This confirms that our experiment is adaptable across different hardware and software environments.
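The reproducibility claim above rests on fixing the random seed before generation. A minimal, dependency-free sketch of that idea is below; `set_seed` and the seed value 42 are illustrative, not taken from the repository's scripts, which would additionally seed PyTorch (e.g. `torch.manual_seed` and `torch.cuda.manual_seed_all`).

```python
import random


def set_seed(seed: int) -> None:
    """Fix the RNG seed so repeated runs draw identical values.

    The actual fine-tuning/inference scripts would also seed PyTorch:
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    (shown as comments to keep this sketch free of GPU dependencies).
    """
    random.seed(seed)


# Two runs with the same seed produce the same sampling decisions,
# which is why generation accuracy stays constant on fixed hardware.
set_seed(42)
first = [random.random() for _ in range(3)]
set_seed(42)
second = [random.random() for _ in range(3)]
assert first == second  # same seed -> identical sequence
```

Re-fine-tuning on different hardware can still drift slightly, as the paragraph notes, because low-level kernels may not be bitwise deterministic even with fixed seeds.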