Commit my-awesome-file to the Hub
2024_02_27_12_57_48.log ADDED (+14 -0)
@@ -0,0 +1,14 @@
+INFO 2024-02-27 12:57:48,438 [<ipython-input-1-c89cddfcf9c3>:<module>:11] Start the script
+INFO 2024-02-27 12:57:48,527 [<ipython-input-1-c89cddfcf9c3>:<module>:17] Current Time: 2024-02-27 12:57:48.446493
+INFO 2024-02-27 12:57:53,700 [<ipython-input-1-c89cddfcf9c3>:<module>:37] Columns names: Index(['Unnamed: 0', 'caption', 'url', 'index', 'nr_words', 'caption_ar'], dtype='object')
+INFO 2024-02-27 12:57:53,708 [<ipython-input-1-c89cddfcf9c3>:<module>:53] The available gpus available is [<torch.cuda.device object at 0x7ff30f849310>]
+INFO 2024-02-27 12:59:10,142 [<ipython-input-2-defe6f08bddf>:<module>:19] The model used on the translation is facebook/nllb-200-distilled-600M
+INFO 2024-02-27 12:59:10,147 [<ipython-input-2-defe6f08bddf>:<module>:23] The model loaded on the following device: cuda:0
+INFO 2024-02-27 12:59:11,519 [<ipython-input-1-c89cddfcf9c3>:__init__:59] The max length of tokenizer is 1024
+INFO 2024-02-27 12:59:11,522 [<ipython-input-4-b7b7277fd95b>:<module>:5] The dataset for training length 10
+INFO 2024-02-27 12:59:40,806 [<ipython-input-5-e1f599860ef7>:<module>:26] Script time 0:01:52.360273
+INFO 2024-02-27 13:01:13,523 [<ipython-input-6-3c691fe33fc8>:<module>:26] Script time 0:03:25.077276
+INFO 2024-02-27 13:04:41,126 [<ipython-input-7-fede7eda85ff>:<module>:11] Start the script
+INFO 2024-02-27 13:04:41,132 [<ipython-input-7-fede7eda85ff>:<module>:17] Current Time: 2024-02-27 13:04:41.131153
+INFO 2024-02-27 13:04:46,330 [<ipython-input-7-fede7eda85ff>:<module>:37] Columns names: Index(['Unnamed: 0', 'caption', 'url', 'index', 'nr_words', 'caption_ar'], dtype='object')
+INFO 2024-02-27 13:04:46,334 [<ipython-input-7-fede7eda85ff>:<module>:54] The available gpus available is [<torch.cuda.device object at 0x7ff300a89bd0>]
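The entries above follow the pattern `LEVEL timestamp [file:func:lineno] message`, which matches Python's stdlib `logging` module with a custom format string. As a minimal sketch (the actual script is not in this commit, and the format string below is an assumption inferred from the log output, not taken from the source):

```python
import logging

# Format inferred from the log entries in 2024_02_27_12_57_48.log:
#   INFO 2024-02-27 12:57:48,438 [<file>:<func>:<lineno>] Start the script
LOG_FORMAT = "%(levelname)s %(asctime)s [%(filename)s:%(funcName)s:%(lineno)d] %(message)s"

logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
logger = logging.getLogger(__name__)

logger.info("Start the script")
# The model name comes from the log itself; actually loading it
# (e.g. with transformers' from_pretrained and moving it to cuda:0)
# is omitted here to keep the sketch self-contained.
logger.info("The model used on the translation is facebook/nllb-200-distilled-600M")
```

Run inside a notebook, `%(filename)s` becomes the cell identifier (e.g. `<ipython-input-1-c89cddfcf9c3>`), which is why those names appear in the log instead of a script path.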