barakplasma committed
Commit f944013 · verified · 1 Parent(s): 84e8ded

Delete logs/convert_float16.log with huggingface_hub

Files changed (1)
  1. logs/convert_float16.log +0 -73
logs/convert_float16.log DELETED
@@ -1,73 +0,0 @@
- [!] HF_TOKEN is not set. If model is public, you can pass --allow-no-token.
- [+] Using existing model dir: /home/ubuntu/translategemma-4b-it
- [+] ARCH: {'model_type': 'gemma3', 'architecture': 'Gemma3ForConditionalGeneration', 'vocab_size': 262208}
- [+] Strategy 1: litert-torch native
- [!] float16 support depends on converter/runtime build.
- [+] Gemma3 builders available: ['build_model_1b', 'build_model_270m']
- [+] Trying litert_torch.generative.examples.gemma3.gemma3.build_model_1b ...
- [!] failed: 'NoneType' object has no attribute 'eval'
- [+] Trying litert_torch.generative.examples.gemma3.gemma3.build_model_270m ...
- [!] failed: 'NoneType' object has no attribute 'eval'
- [!] Strategy 1 did not find a compatible builder
- [+] Strategy 2: ai_edge_torch generic (wrapped logits-only)
- [+] Loading HF model on CPU with dtype=torch.float16 ...
- `torch_dtype` is deprecated! Use `dtype` instead!
-
- Loading weights:   0%|          | 0/883 [00:00<?, ?it/s]
- Loading weights:  15%|███████████████▋ | 131/883 [00:00<00:00, 964.72it/s]
- Loading weights:  26%|███████████████████████████▎ | 228/883 [00:00<00:00, 818.54it/s]
- Loading weights:  35%|█████████████████████████████████████▏ | 310/883 [00:00<00:00, 656.66it/s]
- Loading weights:  43%|█████████████████████████████████████████████▍ | 379/883 [00:00<00:00, 666.47it/s]
- Loading weights:  51%|██████████████████████████████████████████████████████ | 450/883 [00:00<00:00, 679.41it/s]
- Loading weights:  92%|█████████████████████████████████████████████████████████████████████████████████████████████████▊ | 814/883 [00:00<00:00, 1573.63it/s]
- Loading weights: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 883/883 [00:00<00:00, 1167.42it/s]
- (00:00) [START] LiteRT-Torch Convert
- (00:00) [START] LiteRT-Torch Convert > Torch Export: serving_default
- (00:08) [START] LiteRT-Torch Convert > Torch Export: serving_default > ExportedProgram Run Decompositions
- (00:27) [ DONE] LiteRT-Torch Convert > Torch Export: serving_default > ExportedProgram Run Decompositions (+00:18)
- (00:27) [ DONE] LiteRT-Torch Convert > Torch Export: serving_default (+00:27)
- (00:27) [START] LiteRT-Torch Convert > Run FX Passes
- (00:28) [START] LiteRT-Torch Convert > Run FX Passes > ExportedProgram Run Decompositions
- (00:28) [ DONE] LiteRT-Torch Convert > Run FX Passes > ExportedProgram Run Decompositions (+00:00)
- (00:28) [ DONE] LiteRT-Torch Convert > Run FX Passes (+00:01)
- (00:28) [START] LiteRT-Torch Convert > Lower to MLIR: serving_default
- (00:28) [START] LiteRT-Torch Convert > Lower to MLIR: serving_default > ExportedProgram Run Decompositions
- (00:46) [ DONE] LiteRT-Torch Convert > Lower to MLIR: serving_default > ExportedProgram Run Decompositions (+00:18)
- (00:46) [START] LiteRT-Torch Convert > Lower to MLIR: serving_default > ExportedProgram Run Decompositions
- (01:06) [ DONE] LiteRT-Torch Convert > Lower to MLIR: serving_default > ExportedProgram Run Decompositions (+00:19)
- (01:06) [START] LiteRT-Torch Convert > Lower to MLIR: serving_default > Create MLIR Module
- WARNING:jax._src.xla_bridge:An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to cpu.
- (01:30) [ DONE] LiteRT-Torch Convert > Lower to MLIR: serving_default > Create MLIR Module (+00:23)
- (01:30) [ DONE] LiteRT-Torch Convert > Lower to MLIR: serving_default (+01:01)
- (01:30) [START] LiteRT-Torch Convert > Merge MLIR Modules
- (01:30) [ DONE] LiteRT-Torch Convert > Merge MLIR Modules (+00:00)
- (01:30) [START] LiteRT-Torch Convert > Run LiteRT Converter Passes
- loc("__main__.strategy2_generic.<locals>.LogitsOnlyWrapper/transformers.models.gemma3.modeling_gemma3.Gemma3ForConditionalGeneration_model/transformers.models.gemma3.modeling_gemma3.Gemma3Model_model/transformers.models.gemma3.modeling_gemma3.Gemma3TextScaledWordEmbedding_embed_tokens;"("embedding"("/usr/local/lib/python3.12/dist-packages/torch/nn/modules/sparse.py":192:0))): error: failed to legalize operation 'tfl.embedding_lookup' that was explicitly marked illegal: %492 = "tfl.embedding_lookup"(%491, %274) : (tensor<128xi32>, tensor<262208x2560xf16>) -> tensor<128x2560xf16>
- (03:48) [ FAIL] LiteRT-Torch Convert > Run LiteRT Converter Passes
- (03:48) [ FAIL] LiteRT-Torch Convert
- [!] Strategy 2 failed: Failed to run converter passes: /usr/local/lib/python3.12/dist-packages/torch/nn/modules/sparse.py:192:0: error: failed to legalize operation 'tfl.embedding_lookup' that was explicitly marked illegal: %492 = "tfl.embedding_lookup"(%491, %274) : (tensor<128xi32>, tensor<262208x2560xf16>) -> tensor<128x2560xf16>
-
- Traceback (most recent call last):
-   File "/home/ubuntu/convert_translategemma_android.py", line 372, in main
-     tflite_file = strategy2_generic(
-                   ^^^^^^^^^^^^^^^^^^
-   File "/home/ubuntu/convert_translategemma_android.py", line 234, in strategy2_generic
-     edge_model = ai_edge_torch.convert(wrapped, (sample_ids,))
-                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-   File "/usr/local/lib/python3.12/dist-packages/litert_torch/_convert/interface.py", line 323, in convert
-     return Converter().convert(
-            ^^^^^^^^^^^^^^^^^^^^
-   File "/usr/local/lib/python3.12/dist-packages/litert_torch/_convert/interface.py", line 208, in convert
-     converted_model = core.convert_signatures(
-                       ^^^^^^^^^^^^^^^^^^^^^^^^
-   File "/usr/lib/python3.12/contextlib.py", line 81, in inner
-     return func(*args, **kwds)
-            ^^^^^^^^^^^^^^^^^^^
-   File "/usr/local/lib/python3.12/dist-packages/litert_torch/_convert/core.py", line 158, in convert_signatures
-     exporter = litert_converter.exported_programs_to_flatbuffer(
-                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-   File "/usr/local/lib/python3.12/dist-packages/litert_torch/_convert/litert_converter.py", line 189, in exported_programs_to_flatbuffer
-     converter_api_ext.run_convert_to_tfl_passes(
- ValueError: Failed to run converter passes: /usr/local/lib/python3.12/dist-packages/torch/nn/modules/sparse.py:192:0: error: failed to legalize operation 'tfl.embedding_lookup' that was explicitly marked illegal: %492 = "tfl.embedding_lookup"(%491, %274) : (tensor<128xi32>, tensor<262208x2560xf16>) -> tensor<128x2560xf16>
-
- [x] All conversion strategies failed