| repository | issue title | labels | body |
|---|---|---|---|
| tensorflow/tensorflow | TF quantizer breaks pip package | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: source. TF version: git HEAD. Custom code: no. OS: Ubuntu 20.04. Mobile device: N/A. Python: 3.9.16. Bazel: 6.1.0. Compiler: 16.0.6. CUDA/cuDNN, GPU: N/A. Current behavior: the commit added `pywrap_quantize_model.so` to the pip package, but the build fails to set the rpath for `_pywrap_tensorflow_internal.so` in the new .so, leading to a failure when auditwheel attempts to repair the wheel to make it manylinux2014-compatible. Standalone code to reproduce: `python3 -m auditwheel repair --plat manylinux2014_aarch64 /tensorflow_pkg/tensorflow_aarch64-2.15.0-cp311-cp311-linux_aarch64.whl --wheel-dir whl`. Relevant log output: auditwheel traceback through `auditwheel/repair.py`, line 80, in `repair_wheel`, ending in `ValueError: Cannot repair wheel, because required library "_pywrap_tensorflow_internal.so" could not be located`. Also, `ldd .../site-packages/tensorflow/compiler/mlir/quantization/tensorflow/python/pywrap_quantize_model.so` shows `_pywrap_tensorflow_internal.so => not found` (only `libtensorflow_framework.so.2` and system libraries resolve), whereas `ldd .../site-packages/tensorflow/python/saved_model/pywrap_saved_model.so` resolves `_pywrap_tensorflow_internal.so`, `libtensorflow_framework.so.2`, `libtensorflow_cc.so.2`, and `tsl/python/lib/core/libml_dtypes.so.so` from the wheel's own directories. |
| tensorflow/tensorflow | Overflow when running tf.compat.v1.manip.tile | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large elements in the input list. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; input = tf.identity(tf.random.uniform([1, 355, 768], dtype=tf.float32)); multiples = [125091515651, True, 125091515651]; out = tf.compat.v1.manip.tile(input, multiples=multiples, name=None)`. Relevant log output: `{{function_node __wrapped__Tile device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 44407488056105 with 96070284019968, result: -1 [[{{node Tile}}]] [Op:Tile]`. |
| tensorflow/tensorflow | Overflow bug when running tf.image.resize | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large elements in the input list. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; arg_0 = tf.identity(tf.constant(1000000, shape=[577, 700, 3, 1], dtype=tf.float64)); arg_1 = [1610637938, 1250999896764]; out = tf.image.resize(arg_0, arg_1)`. Relevant log output: `{{function_node __wrapped__ResizeBilinear device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 929338090226 with 1164413628, result: -1 [Op:ResizeBilinear]`. |
| tensorflow/tensorflow | Overflow when running tf.compat.v1.linalg.diag | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large elements in the input list. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; diagonal = [[[[1111, 1112], [1121, 1122]], [[1211, 1212], [1221, 1222]]], [[[2111, 2112], [2121, 2122]], [[2211, 2212], [2221, 2222]]]]; out = tf.compat.v1.linalg.diag(diagonal=diagonal, name='diag_part', k=1610637938, padding_value=0, align='RIGHT_LEFT')`. Relevant log output: `{{function_node __wrapped__MatrixDiagV3 device=/job:localhost/replica:0/task:0/device:CPU:0}} Encountered overflow when multiplying 12885103520 with 1610637940, result: -1 [Op:MatrixDiagV3]`. |
| tensorflow/tensorflow | Overflow bug when running tf.compat.v1.manip.tile | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large elements in the input tensor. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; input = tf.identity(tf.random.uniform([1, 355, 768], dtype=tf.float32)); multiples = [125091515651, True, 125091515651]; out = tf.compat.v1.manip.tile(input, multiples=multiples, name=None)`. Relevant log output: `{{function_node __wrapped__Tile device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 44407488056105 with 96070284019968, result: -1 [[{{node Tile}}]] [Op:Tile]`. |
| tensorflow/tensorflow | Overflow when running tf.compat.v1.tile | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large elements in the input list. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; input = tf.identity(tf.random.uniform([370, 1, 1024], dtype=tf.float32)); multiples = [125091515651, 125091515651, 125091515651]; name_tensor = tf.random.uniform([], dtype=tf.int32, maxval=66860669291904); name = tf.Variable(tf.identity(name_tensor)); out = tf.compat.v1.tile(input, multiples=multiples, name=name)`. Relevant log output: `{{function_node __wrapped__Tile device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 46283860790870 with 125091515651, result: -1 [[{{node Tile}}]] [Op:Tile]`. |
| tensorflow/tensorflow | Overflow bug when running tf.raw_ops.ResizeNearestNeighbor | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large list elements. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; image = tf.identity(tf.constant(256, shape=[16, 4, 5, 1], dtype=tf.float16)); size = [1250999896764, 1610637938]; out = tf.raw_ops.ResizeNearestNeighbor(images=image, size=size, align_corners=False, half_pixel_centers=False, name=None)`. Relevant log output: `{{function_node __wrapped__ResizeNearestNeighbor device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 18630618048 with 1610637938, result: -1 [Op:ResizeNearestNeighbor]`. |
| tensorflow/tensorflow | Overflow bug when running tf.raw_ops.Tile | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large elements in the input list. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; input = tf.identity(tf.random.uniform([4, 1, 1, 20], dtype=tf.float32)); multiples = [125091515651, True, 125091515651, 125091515651]; out = tf.raw_ops.Tile(input=input, multiples=multiples, name=None)`. Relevant log output: `{{function_node __wrapped__Tile device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 500366062604 with 125091515651, result: -1 [[{{node Tile}}]] [Op:Tile]`. |
| tensorflow/tensorflow | Overflow bug when running tf.keras.layers.ZeroPadding2D | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large list elements. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; padding = [[125091515651, False], [125091515651, 125091515651]]; layer = tf.keras.layers.ZeroPadding2D(padding=padding); input = tf.identity(tf.random.uniform([3, 300, 300, 192], dtype=tf.float32)); out = layer(input)`. Relevant log output: `{{function_node __wrapped__Pad device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 2501999793529 with 2501999793530, result: -1 [[{{node Pad}}]] [Op:Pad]. Call arguments received by layer 'zero_padding2d' (type ZeroPadding2D): inputs=tf.Tensor(shape=(1, 1, 2, 2), dtype=float32)`. |
| tensorflow/tensorflow | Overflow bug when running tf.linalg.diag on Colab | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to a large integer list element. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; diagonal = [[[[1111, 1112], [1121, 1122]], [[1211, 1212], [1221, 1222]]], [[[2111, 2112], [2121, 2122]], [[2211, 2212], [2221, 2222]]]]; out = tf.linalg.diag(diagonal=diagonal, name=None, k=92233720368, padding_value=0, align='RIGHT_LEFT')`. Relevant log output: `{{function_node __wrapped__MatrixDiagV3 device=/job:localhost/replica:0/task:0/device:CPU:0}} Encountered overflow when multiplying 16315257232 with 2039407154, result: -1 [Op:MatrixDiagV3]`. |
| tensorflow/tensorflow | Overflow bug when running tf.raw_ops.Pad | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to the large list elements. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; input = tf.identity(tf.random.uniform([16, 16, 16, 512], dtype=tf.float32)); paddings = [[125091515651, 125091515651], [125091515651, False], [125091515651, 125091515651], [125091515651, 125091515651]]; out = tf.raw_ops.Pad(input=input, paddings=paddings, name=None)`. Relevant log output: `{{function_node __wrapped__Pad device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 250183031318 with 125091515667, result: -1 [[{{node Pad}}]] [Op:Pad]`. |
| tensorflow/tensorflow | Overflow when running tf.raw_ops.PadV2 | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to the large list elements. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; input = tf.identity(tf.random.uniform([16, 4, 4, 512], dtype=tf.float32)); paddings = [[125091515651, 125091515651], [125091515651, 125091515651], [125091515651, 125091515651], [125091515651, 125091515651]]; out = tf.raw_ops.PadV2(input=input, paddings=paddings, constant_values=0, name=None)`. Relevant log output: `{{function_node __wrapped__PadV2 device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 250183031318 with 250183031306, result: -1 [[{{node PadV2}}]] [Op:PadV2]`. |
| tensorflow/tensorflow | Overflow bug when running tf.tile | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large list elements. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; arg_0 = tf.identity(tf.random.uniform([452, 1, 768], dtype=tf.float32)); arg_1 = [125091515651, 125091515651, 125091515651]; out = tf.tile(arg_0, arg_1)`. Relevant log output: `{{function_node __wrapped__Tile device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 56541365074252 with 125091515651, result: -1 [[{{node Tile}}]] [Op:Tile]`. |
| tensorflow/tensorflow | Overflow bug when running tf.raw_ops.Tile on Colab | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large list elements. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; input = tf.identity(tf.random.uniform([4, 1, 1, 20], dtype=tf.float32)); multiples = [125091515651, True, 125091515651, 125091515651]; out = tf.raw_ops.Tile(input=input, multiples=multiples, name=None)`. Relevant log output: `{{function_node __wrapped__Tile device=/job:localhost/replica:0/task:0/device:CPU:0}} Encountered overflow when multiplying 500366062604 with 125091515651, result: -1 [Op:Tile]`. |
| tensorflow/tensorflow | Overflow when running tf.keras.layers.ZeroPadding2D | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to large list elements. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; padding = [[125091515651, False], [125091515651, 125091515651]]; layer = tf.keras.layers.ZeroPadding2D(padding=padding); input = tf.identity(tf.random.uniform([3, 300, 300, 192], dtype=tf.float32)); out = layer(input)`. Relevant log output: `Exception encountered when calling layer 'zero_padding2d' (type ZeroPadding2D). {{function_node __wrapped__Pad device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 375274547853 with 250183031602, result: -1 [[{{node Pad}}]] [Op:Pad]. Call arguments received by layer 'zero_padding2d' (type ZeroPadding2D): inputs=tf.Tensor(shape=(3, 300, 300, 192), dtype=float32)`. |
| tensorflow/tensorflow | Overflow bug when running tf.keras.layers.ZeroPadding3D | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TF version: 2.13.0. Custom code: yes. OS, mobile device, Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: overflow due to the large integer value. Standalone code to reproduce (wrapped in a try/except that prints the error): `import tensorflow as tf; padding = 1610612736; layer = tf.keras.layers.ZeroPadding3D(padding=padding); input = tf.identity(tf.random.uniform([1, 1, 2, 2, 3], dtype=tf.float32)); out = layer(input)`. Relevant log output: `Exception encountered when calling layer 'zero_padding3d' (type ZeroPadding3D). {{function_node __wrapped__Pad device=/job:localhost/replica:0/task:0/device:GPU:0}} Encountered overflow when multiplying 3221225473 with 3221225474, result: -8070450522584252414 [Op:Pad]. Call arguments received by layer 'zero_padding3d' (type ZeroPadding3D): inputs=tf.Tensor(shape=(1, 1, 2, 2, 3), dtype=float32)`. |
| tensorflow/tensorflow | Integer overflow when running tf.experimental.numpy.identity | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TF version: 2.13.0. Custom code: yes. OS: Ubuntu 22.04.2 LTS (Jammy Jellyfish). Mobile device: no response. Python: 3.10.12 (main, Jun 11 2023). Bazel: no response. GCC: 11.4.0. CUDA/cuDNN: nvidia-cudnn-cu11 8.6.0.163, cudatoolkit 11.8.0 (nvcc release 11.8, V11.8.89). GPU: T4. Current behavior: overflow due to a large integer variable. Standalone code to reproduce (each branch wrapped in a try/except that prints the error): `import tensorflow as tf; n = tf.identity(tf.constant(3046875451)); dtype = tf.uint16;` then on CPU `with tf.device('/CPU'): out = tf.experimental.numpy.identity(n, dtype=dtype)` and on GPU `with tf.device('/GPU:0'): n = tf.cast(tf.identity(n), tf.complex64); out = tf.experimental.numpy.identity(n, dtype=dtype)`. Relevant log output: `{{function_node __wrapped__MatrixDiagV3 device=/job:localhost/replica:0/task:0/device:CPU:0}} Encountered overflow when multiplying 3046875451 with 3046875451, result: -9163294059803098215 [Op:MatrixDiagV3] name: diag`; then `/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/ops.py:1035: ComplexWarning: Casting complex values to real discards the imaginary part`; then `Encountered overflow when multiplying 3046875392 with 3046875392, result: -9163294419334397952 [Op:MatrixDiagV3] name: diag`. |
| tensorflow/tensorflow | AttributeError: module tensorflow.python.pywrap_mlir has no attribute experimental_convert_saved_model_v1 | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TF version: v1.12.1-96406-gfa4d29bfef8 2.14.0-dev20230706. Custom code: no. OS: Ubuntu 20.04. Mobile device: Ubuntu 20.04. Python, Bazel, compiler, CUDA/cuDNN, GPU: no response. Relevant log output: traceback through `.../iree-venv/lib/python3.8/site-packages/iree/tools/tf/scripts/iree_import_tf/main.py` (line 54 in `main`, line 102 in `import_saved_model`, calling `convert_saved_model_v1`), then `.../site-packages/tensorflow/python/compiler/mlir/mlir.py`, line 141, in `convert_saved_model_v1`: `return pywrap_mlir.experimental_convert_saved_model_v1(...)`, raising `AttributeError: module 'tensorflow.python.pywrap_mlir' has no attribute 'experimental_convert_saved_model_v1'`. |
| tensorflow/tensorflow | Training simple audio recognition (TinyML) | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: source. TF version: 2.12.0. Custom code: yes. OS: Windows 11. Python: 3.10.12. Bazel, compiler, CUDA/cuDNN, GPU: no response. Current behavior: errors while training the simple audio recognition model described in the book *TinyML*, using Google Colab, both during dependency installation and during training. (1) Installing dependencies with `pip uninstall -y tensorflow tensorflow_estimator tensorboard; pip install -q tf_estimator_nightly==1.14.0.dev2019072901 tf-nightly-gpu==1.15.0.dev20190729` fails with `ERROR: Could not find a version that satisfies the requirement tf-nightly-gpu==1.15.0.dev20190729 (from versions: 2.12.0)` and `ERROR: No matching distribution found for tf-nightly-gpu==1.15.0.dev20190729`. (2) Running the training script raises a traceback from `/content/tensorflow/tensorflow/examples/speech_commands/train.py`, line 81 (`import input_data`) through `input_data.py`, line 35 (`from tensorflow.contrib.framework.python.ops import audio_ops as contrib_audio`), ending in `ModuleNotFoundError: No module named 'tensorflow.contrib'`. Observation: these errors likely occur because the code in the book targets TensorFlow 1.15 while Colab uses TensorFlow 2.12.0, and TensorFlow 1.15 is no longer supported; not sure how to resolve this, please help me fix this issue. |
| tensorflow/tensorflow | Build failure on aarch64: undeclared identifier memset | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: source. TF version: git HEAD. Custom code: no. OS: Ubuntu 20.04. Mobile device: N/A. Python: 3.9.16. Bazel: 6.1.0. Compiler: 16.0.6. CUDA/cuDNN, GPU: N/A. Current behavior: the build fails since commit. Standalone code to reproduce: `bazel build --config=mkl_aarch64_threadpool --copt=-flax-vector-conversions --test_env=TF_ENABLE_ONEDNN_OPTS=1 --test_env=TF2_BEHAVIOR=1 --define=tf_api_version=2 //tensorflow/tools/pip_package:build_pip_package`. Relevant log output: compilation of `//tensorflow/lite/kernels/internal:optimized_4bit` fails; clang (usr/lib/llvm-16/bin/clang, targeting armv8-a) reports in `tensorflow/lite/kernels/internal/optimized/4bit/neon_fully_connected.cc`: line 284, `error: use of undeclared identifier 'memset'` at `memset(&d, static_cast<uint8_t>(119), sizeof(uint8_t) * size)`; line 313, the same error at `memset(data, 0, sizeof(int8_t) * size)`; line 314, the same error at `memset(input_offsets, 0, sizeof(int32_t) * layout_rows)`. 3 errors generated; target `//tensorflow/tools/pip_package:build_pip_package` failed to build. Elapsed time: 23.741s, critical path: 7.00s; 439 processes (339 internal, 100 local); build did NOT complete successfully. |
tensorflowtensorflow | I need tensorflow 2.2.0 but it has been removed, how to find it? | Bug | I need to install tensorflow 2.2.0. Why? Because this repo requests it, and no matter what I try I can't make it work with newer tensorflow. How can I install tensorflow 2.2.0 on Windows 10 and Python 3.10? The error I am getting, and am not able to fix, is:

```
(venv) g:\nsfw_model>python a.py
2023-08-14 00:33:39.437427: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-14 00:33:40.148302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21643 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090 Ti, pci bus id: 0000:01:00.0, compute capability: 8.6
2023-08-14 00:33:40.149891: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 9603 MB memory: -> device: 1, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:05:00.0, compute capability: 8.6
Traceback (most recent call last):
  File "g:\nsfw_model\a.py", line 13, in <module>
    print(predict.classify(model, "test"))
  File "g:\nsfw_model\nsfw_detector\predict.py", line 67, in classify
    probs = classify_nd(model, images, predict_args)
  File "g:\nsfw_model\nsfw_detector\predict.py", line 77, in classify_nd
    model_preds = model.predict(nd_images, **predict_args)
  File "g:\nsfw_model\venv\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "g:\nsfw_model\venv\lib\site-packages\keras\engine\training.py", line 1997, in predict
    raise ValueError(
ValueError: Unexpected result of `predict_function` (Empty batch_outputs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`.
```

predict.py:

```python
import argparse
import json
from os import listdir
from os.path import isfile, join, exists, isdir, abspath

import numpy as np
import tensorflow as tf
from tensorflow import keras
import tensorflow_hub as hub

IMAGE_DIM = 299   # required/default image dimensionality


def load_images(image_paths, image_size, verbose=True):
    '''
    Function for loading images into numpy arrays for passing to model.predict
    inputs:
        image_paths: list of image paths to load
        image_size: size into which images should be resized
        verbose: show all of the image paths and sizes loaded
    outputs:
        loaded_images: loaded images on which keras model can run predictions
        loaded_image_indexes: paths of images which the function is able to process
    '''
    loaded_images = []
    loaded_image_paths = []

    if isdir(image_paths):
        parent = abspath(image_paths)
        image_paths = [join(parent, f) for f in listdir(image_paths) if isfile(join(parent, f))]
    elif isfile(image_paths):
        image_paths = [image_paths]

    for img_path in image_paths:
        try:
            if verbose:
                print(img_path, "size:", image_size)
            image = keras.preprocessing.image.load_img(img_path, target_size=image_size)
            image = keras.preprocessing.image.img_to_array(image)
            image /= 255
            loaded_images.append(image)
            loaded_image_paths.append(img_path)
        except Exception as ex:
            print("Image Load Failure: ", img_path, ex)

    return np.asarray(loaded_images), loaded_image_paths


def load_model(model_path):
    if model_path is None or not exists(model_path):
        raise ValueError("saved_model_path must be the valid directory of a saved model to load.")

    model = tf.keras.models.load_model(model_path, custom_objects={'KerasLayer': hub.KerasLayer}, compile=False)
    return model


def classify(model, input_paths, image_dim=IMAGE_DIM, predict_args={}):
    """Classify given a model, input paths (could be single string), and image dimensionality.

    Optionally, pass predict_args that will be passed to tf.keras.Model.predict().
    """
    images, image_paths = load_images(input_paths, (image_dim, image_dim))
    probs = classify_nd(model, images, predict_args)
    return dict(zip(image_paths, probs))


def classify_nd(model, nd_images, predict_args={}):
    """Classify given a model, image array (numpy).

    Optionally, pass predict_args that will be passed to tf.keras.Model.predict().
    """
    model_preds = model.predict(nd_images, **predict_args)
    preds = np.argsort(model_preds, axis=1).tolist()

    categories = ['drawings', 'hentai', 'neutral', 'porn', 'sexy']

    probs = []
    for i, single_preds in enumerate(model_preds):
        single_probs = {}
        for j, pred in enumerate(single_preds):
            single_probs[categories[j]] = float(pred)
        probs.append(single_probs)
    return probs


def main(args=None):
    parser = argparse.ArgumentParser(
        description="A script to perform NFSW classification of images",
        epilog="Launch with default model and a test image: "
               "python nsfw_detector/predict.py --saved_model_path mobilenet_v2_140_224 --image_source test.jpg",
        formatter_class=argparse.RawTextHelpFormatter)

    submain = parser.add_argument_group('main execution and evaluation functionality')
    submain.add_argument('--image_source', dest='image_source', type=str, required=True,
                         help='A directory of images or a single image to classify')
    submain.add_argument('--saved_model_path', dest='saved_model_path', type=str, required=True,
                         help='The model to load')
    submain.add_argument('--image_dim', dest='image_dim', type=int, default=IMAGE_DIM,
                         help="The square dimension of the model's input shape")

    if args is not None:
        config = vars(parser.parse_args(args))
    else:
        config = vars(parser.parse_args())

    if config['image_source'] is None or not exists(config['image_source']):
        raise ValueError("image_source must be a valid directory with images or a single image to classify.")

    model = load_model(config['saved_model_path'])
    image_preds = classify(model, config['image_source'], config['image_dim'])
    print(json.dumps(image_preds, indent=2), '\n')


if __name__ == "__main__":
    main()
``` |
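The `ValueError: Unexpected result of predict_function (Empty batch_outputs)` above is what `Model.predict` raises when it is handed a zero-length batch, which can happen here when every image fails to load and `np.asarray([])` comes back empty. A minimal sketch of a guard that could sit between loading and prediction; the helper name `ensure_nonempty_batch` is my own addition, not part of the script or of Keras:

```python
import numpy as np

def ensure_nonempty_batch(images, image_source):
    """Fail fast with a clear message instead of letting Model.predict
    choke on an empty batch when no image could be loaded."""
    if len(images) == 0:
        raise ValueError(
            "No images were loaded from %r; check the paths and file types"
            % (image_source,))
    return images

# Simulating the failure mode: every load failed, so the batch is empty.
try:
    ensure_nonempty_batch(np.asarray([]), "test")
except ValueError as e:
    print(e)
```

A guard like this turns the opaque Keras-internal error into one that names the offending input source.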
tensorflowtensorflow | Abort when running tensorflow.python.eager.remote.connect_to_remote_host | Bug | Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly: No. Source: binary. TensorFlow version: 2.11.0. Custom code: Yes. OS platform and distribution: 22.04. Mobile device: No response. Python version: 3.9. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: nvidia-cudnn-cu11 8.6.0.163, cudatoolkit 11.8.0. GPU model and memory: No response. Current behavior: "nan" string argument. Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
from tensorflow.python.eager import remote

try:
    try:
        with tf.device('cpu'):
            arg_0 = "nan"
            out = remote.connect_to_remote_host(arg_0)
    except Exception as e:
        print("Error:" + str(e))
    try:
        with tf.device('gpu:0'):
            remote.connect_to_remote_host(arg_0)
    except Exception as e:
        print("Error:" + str(e))
except Exception as e:
    print("Error:" + str(e))
```

Relevant log output: 2023-08-13 01:22:37.499369: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. 2023-08-13 01:22:38.459392: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 2023-08-13 01:22:38.480510: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 2023-08-13 01:22:38.480708: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 2023-08-13 01:22:38.481081: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)
to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 08 13 01 22 38 481707 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 08 13 01 22 38 481844 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 08 13 01 22 38 481961 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 08 13 01 22 38 536637 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 08 13 01 22 38 536859 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 08 13 01 22 38 536991 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 08 13 01 22 38 537094 I tensorflow core common runtime gpu gpu device cc 1613 create device job localhost replica 0 task 0 device gpu 0 with 1725 mb memory device 0 name nvidia geforce gtx 1660 ti pci bus i d 0000 01 00 0 compute capability 7 5 2023 08 13 01 22 38 546718 e tensorflow core distribute runtime rpc grpc server lib cc 589 invalid argument could not interpret nan as a host port pair e0813 01 22 38 546961566 1686085 completion queue cc 244 assertion fail queue num item 0 
Aborted |
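The log shows the real failure: `Invalid argument: Could not interpret "nan" as a host:port pair`, after which the gRPC completion queue assertion aborts the process instead of surfacing a Python exception. One way callers can defend against this is to validate the address before handing it to the runtime; the following `parse_host_port` helper is a hypothetical sketch of that check, not part of the TensorFlow API:

```python
def parse_host_port(address):
    """Split an address into (host, port), raising ValueError when the
    string cannot be interpreted as a host:port pair."""
    host, sep, port_str = address.rpartition(":")
    if not sep or not host or not port_str.isdigit():
        raise ValueError(
            "could not interpret %r as a host:port pair" % (address,))
    return host, int(port_str)

# Reject malformed input early instead of letting the runtime abort.
try:
    parse_host_port("nan")
except ValueError as e:
    print("Error:", e)
```

With a pre-check like this the bad argument produces a catchable `ValueError`, matching what the repro script's `except Exception` blocks evidently expected.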
tensorflowtensorflow | BinaryFocalCrossentropy alpha does not work | Bug | Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly: No. Source: binary. TensorFlow version: TF 2.13. Custom code: Yes. OS platform and distribution: Linux Ubuntu 20.04. Mobile device: No response. Python version: 3.10.12. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response. Current behavior: tf.keras.losses.BinaryFocalCrossentropy computes the loss without using the alpha:

```python
import tensorflow as tf

y_true_list = [0, 1, 0, 0]
logits_list = [1.6, 0.51, 2.94, 1.8]
gamma = 2

focal_func1 = tf.keras.losses.BinaryFocalCrossentropy(gamma=gamma, alpha=0.25, from_logits=True)
focal_loss1 = focal_func1(y_true_list, logits_list)
focal_func2 = tf.keras.losses.BinaryFocalCrossentropy(gamma=gamma, alpha=10, from_logits=True)
focal_loss2 = focal_func2(y_true_list, logits_list)
focal_func3 = tf.keras.losses.BinaryFocalCrossentropy(gamma=gamma, alpha=100, from_logits=True)
focal_loss3 = focal_func3(y_true_list, logits_list)

print(focal_loss1)
print(focal_loss2)
print(focal_loss3)
```

The results are tf.Tensor(0.6932789, shape=(), dtype=float32), tf.Tensor(0.6932789, shape=(), dtype=float32), tf.Tensor(0.6932789, shape=(), dtype=float32). focal_loss1 in the code should be 0.51168. Standalone code to reproduce the issue: in the above. Relevant log output: No response. |
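For reference, here is a minimal NumPy sketch of the alpha-balanced binary focal loss (following Lin et al.'s formulation, averaged over examples); this is an independent reimplementation for illustration, not the Keras code, and it makes visible that a working `alpha` must change the result:

```python
import numpy as np

def binary_focal_loss(y_true, logits, gamma=2.0, alpha=0.25):
    """Alpha-balanced binary focal loss, averaged over examples.

    alpha weights the positive class and (1 - alpha) the negative class,
    so the mean loss changes whenever alpha changes the class balance.
    """
    y_true = np.asarray(y_true, dtype=float)
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))  # sigmoid
    p_t = np.where(y_true == 1, p, 1 - p)            # prob. of the true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)
    loss = -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
    return loss.mean()

loss_a = binary_focal_loss([0, 1, 0, 0], [1.6, 0.51, 2.94, 1.8], alpha=0.25)
loss_b = binary_focal_loss([0, 1, 0, 0], [1.6, 0.51, 2.94, 1.8], alpha=0.75)
print(loss_a, loss_b)  # the two values differ, since alpha rebalances the classes
```

Comparing this sketch's output for different `alpha` values against the constant Keras output above is one way to confirm that the weighting is being dropped.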
tensorflowtensorflow | AttributeError: can't set attribute, in plot / @property for example | Bug | Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly: Yes. Source: source. TensorFlow version: TF 2.8. Custom code: No. OS platform and distribution: Windows. Mobile device: NA. Python version: 3.8. Bazel version: NA. GCC/compiler version: -. CUDA/cuDNN version: -. GPU model and memory: Colab notebook. Current behavior: I'm running this TensorFlow example. I want to add some additional models at the end and try to create new data windows as done above. I noticed the example already given, and my new code, require the following line to be run before the `@property` for `example` is created: `w2.example = example_inputs, example_labels`. Otherwise you get a vague error, AttributeError: can't set attribute, even though it looks like this has a setter. This line is found under "3. Plot", and if it is moved to a later section, after the `@property def example` under section 4, this error occurs. Standalone code to reproduce the issue: move `w2.example = example_inputs, example_labels` to under section 4, which should be OK; the "can't set" error occurs. Relevant log output: AttributeError: can't set attribute |
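The behavior can be reproduced without TensorFlow: assigning through a `@property` that defines only a getter raises `AttributeError` ("can't set attribute" on Python 3.8; newer Pythons say "has no setter"). A minimal standalone sketch, using a made-up `Window` class that mimics the tutorial's pattern of a read-only `example` property backed by a cached `_example` attribute:

```python
class Window:
    """Mimics the tutorial's WindowGenerator: 'example' is a read-only
    property that returns a batch cached under '_example'."""

    @property
    def example(self):
        return getattr(self, "_example", None)

w = Window()
try:
    # Assigning through the read-only property fails: there is no setter.
    w.example = ([1, 2], [3])
except AttributeError as e:
    print("AttributeError:", e)

# Writing to the backing attribute (or adding an @example.setter) works.
w._example = ([1, 2], [3])
print(w.example)
```

This also explains the ordering sensitivity in the notebook: before the property exists on the class, `w2.example = ...` simply creates a plain instance attribute; once the class-level property is defined, the same assignment hits the setter-less property and fails.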
tensorflowtensorflow | TensorFlow inference error | Bug | Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly: Yes. Source: source. TensorFlow version: TF 2.8. Custom code: Yes. OS platform and distribution: Linux CentOS. Mobile device: No response. Python version: 3.8. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response. Current behavior: My model can train normally, but there is an error during inference after the training is complete. The model structure code is as follows (layer arguments elided in the original report are left as `...`):

```python
conv_1 = Conv2D(32, (1, 5), (1, 1), name='mode0_conv_1', padding='same')(input_1)
bn_1 = BatchNormalization(name='mode0_bn_1')(conv_1)
out_1 = PReLU(shared_axes=[1, 2])(bn_1)
out_2 = tf.reshape(out_1, ...)
dp_1 = LSTM(...)(out_2)
dp_o1 = Dense(32)(dp_1)
dp_o2 = PReLU(shared_axes=[1, 2])(dp_o1)
ls_o1 = LSTM(...)(dp_o2)
dp_o3 = Dense(2)(ls_o1)
```

The error is as follows: tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error: node model/model0_bn_1/FusedBatchNormV3: scale must have the same number of elements as the channels of x, got 32 and 2 [[node model/model0_bn_1/FusedBatchNormV3]] [Op:__inference_predict_function_3355]. May I ask what causes this and if it can be resolved? Standalone code to reproduce the issue: No. Relevant log output: No response. |
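The error message says the batch-norm scale (gamma) vector has 32 elements while the tensor reaching it at inference has only 2 channels, which typically means the tensor layout fed to the layer differs between training and inference (for example, the channel axis moved after the `tf.reshape`). A small NumPy sketch of the same invariant, independent of TensorFlow and deliberately simplified (no variance/epsilon term):

```python
import numpy as np

def fused_batch_norm_check(x, gamma):
    """Mimic FusedBatchNorm's precondition: gamma must have one element
    per channel (the last axis of x, for channels-last layouts)."""
    channels = x.shape[-1]
    if gamma.shape != (channels,):
        raise ValueError(
            "scale must have the same number of elements as the channels "
            "of x, got %d and %d" % (gamma.size, channels))
    # Simplified normalize-and-scale over all non-channel axes.
    return (x - x.mean(axis=tuple(range(x.ndim - 1)))) * gamma

x = np.random.rand(4, 8, 8, 32)        # 32 channels: matches a 32-element gamma
_ = fused_batch_norm_check(x, np.ones(32))

bad = np.random.rand(4, 8, 8, 2)       # 2 channels vs. a 32-element gamma: fails
try:
    fused_batch_norm_check(bad, np.ones(32))
except ValueError as e:
    print(e)
```

Printing the shape of the tensor entering `bn_1` in both the training and inference paths is usually enough to spot where the channel count changed from 32 to 2.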
tensorflowtensorflow | could not load dynamic library cudart64 110 dll dlerror cudart64 110 dll not find | Bug | issue type bug have you reproduce the bug with tensorflow nightly yes source source tensorflow version tf 2 10 0 custom code yes os platform and distribution window 11 mobile device n a python version 3 10 microsoft store bazel version no response gcc compiler version no response cuda cudnn version cuda 11 2 gpu model and memory rtx 3070 ti 8 gb current behavior I instal cuda 11 2 as recommend for tf 2 10 0 here s the install screenshot at first I think it be a path issue but after restart my pc I be able to access exe file in that folder image if the file be in path why can t tensorflow find they many people say to use miniconda so I do but I get the same result other resolve issue be resolve as the op s be use the wrong version of cuda I check on the website and I can confirm that my version be the require one standalone code to reproduce the issue shell import tensorflow as tf relevant log output shell 2023 07 31 18 56 25 098058 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library cudart64 110 dll dlerror cudart64 110 dll not find 2023 07 31 18 56 25 098226 I tensorflow stream executor cuda cudart stub cc 29 ignore above cudart dlerror if you do not have a gpu set up on your machine 2023 07 31 18 56 26 164080 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library cudart64 110 dll dlerror cudart64 110 dll not find 2023 07 31 18 56 26 164320 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library cublas64 11 dll dlerror cublas64 11 dll not find 2023 07 31 18 56 26 164540 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library cublaslt64 11 dll dlerror cublaslt64 11 dll not find 2023 07 31 18 56 26 164818 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library 
cufft64 10 dll dlerror cufft64 10 dll not find 2023 07 31 18 56 26 368828 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library cusparse64 11 dll dlerror cusparse64 11 dll not find 2023 07 31 18 56 26 369092 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library cudnn64 8 dll dlerror cudnn64 8 dll not find |
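When TensorFlow logs `cudart64_110.dll not found`, the Windows loader could not locate that DLL via its search path; note that on Python 3.8+ the interpreter restricts DLL resolution for extension modules, so directories may need to be registered with `os.add_dll_directory` rather than relying on `PATH` alone. As a diagnostic, the `PATH`-scan part of the search can be checked directly; `find_dll_on_path` below is a hypothetical helper of my own, not a TensorFlow utility:

```python
import os
from pathlib import Path

def find_dll_on_path(dll_name, path=None):
    """Return every directory on PATH that contains dll_name, mimicking
    (part of) the search the Windows loader performs."""
    path = path if path is not None else os.environ.get("PATH", "")
    hits = []
    for entry in path.split(os.pathsep):
        if entry and (Path(entry) / dll_name).is_file():
            hits.append(entry)
    return hits

# An empty result corresponds to the "dlerror: ... not found" message above.
print(find_dll_on_path("cudart64_110.dll"))
```

Running this for each missing DLL (`cudart64_110.dll`, `cublas64_11.dll`, `cudnn64_8.dll`, ...) shows whether the CUDA `bin` directory is actually visible to the process, as opposed to only to the shell that launched it.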
tensorflowtensorflow | Visual Studio 2022 + MinGW64 cannot find source files | Bug | Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly: Yes. Source: source. TensorFlow version: 2.7.0. Custom code: Yes. OS platform and distribution: Windows 11. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: 8.1.0. CUDA/cuDNN version: No response. GPU model and memory: No response. Current behavior: cannot build the complete application. Standalone code to reproduce the issue (the four `#include` targets and some template arguments were lost from the original report and are left as placeholders):

```cpp
#include <...>  // four TensorFlow C++ headers, elided in the original
#include <...>
#include <...>
#include <...>

int main() {
  tensorflow::Scope root = tensorflow::Scope::NewRootScope();
  tensorflow::ClientSession session(root);

  // 5 OHLC input values
  std::vector<float> input_data(5);
  tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 5}));
  auto input_tensor_mapped = input_tensor.tensor<float, 2>();
  for (int i = 0; i < 5; i++) input_tensor_mapped(0, i) = input_data[i];

  tensorflow::GraphDef graph_def;
  tensorflow::ReadBinaryProto(tensorflow::Env::Default(), "path/to/model.pb", &graph_def);
  tensorflow::SessionOptions session_options;
  tensorflow::ClientSession session2(root, session_options);  // the original re-declared 'session' here
  // session.Create(graph_def);  (as written in the original)

  tensorflow::Tensor output_tensor;
  tensorflow::Status run_status =
      session.Run({{"input_tensor_name", input_tensor}}, {"output_tensor_name"}, &output_tensor);
  if (!run_status.ok()) {
    std::cerr << run_status.error_message() << std::endl;
    return 1;
  }
  auto output_tensor_mapped = output_tensor.tensor<float, 2>();  // OHLC
  return 0;
}
```

Relevant log output: e1696 third party eigen3 unsupported eigen cxx11 threadpool ai c user user source repos ai include tensorflow tsl platform threadpool interface h 19 e1696 absl status status h ai c user user
source repos ai include tensorflow cc ops standard op h 19 e1696 tensorflow cc op candidate sample op h ai c user user source repos ai include tensorflow cc ops standard op h 20 e1696 tensorflow cc op control flow op h ai c user user source repos ai include tensorflow cc ops standard op h 22 e1696 tensorflow cc op data flow op h ai c user user source repos ai include tensorflow cc ops standard op h 23 e1696 tensorflow cc op image op h ai c user user source repos ai include tensorflow cc ops standard op h 24 e1696 tensorflow cc ops io op h ai c user user source repos ai include tensorflow cc ops standard op h 25 e1696 tensorflow cc ops linalg ops h ai c user user source repos ai include tensorflow cc ops standard op h 26 e1696 tensorflow cc op log op h ai c user user source repos ai include tensorflow cc ops standard op h 27 e1696 tensorflow cc ops lookup op h ai c user user source repos ai include tensorflow cc ops standard op h 28 e1696 tensorflow cc op math op h ai c user user source repos ai include tensorflow cc ops standard op h 29 e1696 tensorflow cc op nn op h ai c user user source repos ai include tensorflow cc ops standard op h 30 e1696 tensorflow cc op no op h ai c user user source repos ai include tensorflow cc ops standard op h 31 e1696 tensorflow cc op parse op h ai c user user source repos ai include tensorflow cc ops standard op h 32 e1696 tensorflow cc op random op h ai c user user source repos ai include tensorflow cc ops standard op h 33 e1696 tensorflow cc op sparse op h ai c user user source repos ai include tensorflow cc ops standard op h 34 e1696 tensorflow cc op state op h ai c user user source repos ai include tensorflow cc ops standard op h 35 e1696 tensorflow cc op string op h ai c user user source repos ai include tensorflow cc ops standard op h 36 e1696 tensorflow cc op training op h ai c user user source repos ai include tensorflow cc ops standard op h 37 e1696 tensorflow cc ops user op h ai c user user source repos ai include 
tensorflow cc ops standard op h 38 e1696 tensorflow core framework graph pb h ai c user user source repos ai include tensorflow core common runtime graph constructor h 19 e1696 absl string string view h ai c user user source repos ai include tensorflow core framework allocator h 24 e1696 absl type optional h ai c user user source repos ai include tensorflow core framework allocator h 25 e1696 absl base macros h ai c user user source repos ai include tensorflow core framework device base h 23 e1696 absl string string view h ai c user user source repos ai include tensorflow core framework device base h 24 e1696 tensorflow core framework device attribute pb h ai c user user source repos ai include tensorflow core framework device base h 25 e1696 tensorflow core framework full type pb h ai c user user source repos ai include tensorflow core framework full type inference util h 23 e1696 tensorflow core framework full type pb h ai c user user source repos ai include tensorflow core framework full type util h 22 e1696 tensorflow core framework node def pb h ai c user user source repos ai include tensorflow core framework full type util h 23 e1696 tensorflow core framework op def pb h ai c user user source repos ai include tensorflow core framework full type util h 25 e1696 tensorflow core framework graph debug info pb h ai c user user source repos ai include tensorflow core framework function h 25 e1696 absl container flat hash map h ai c user user source repos ai include tensorflow core framework function h 30 e1696 absl type optional h ai c user user source repos ai include tensorflow core framework function h 31 e1696 absl type variant h ai c user user source repos ai include tensorflow core framework function h 32 e1696 tensorflow core framework attr value pb h ai c user user source repos ai include tensorflow core framework function h 33 e1696 tensorflow core framework function pb h ai c user user source repos ai include tensorflow core framework function h 36 e1696 
tensorflow core framework optimize function graph pb h ai c user user source repos ai include tensorflow core framework function h 40 e1696 tensorflow core protobuf config pb h ai c user user source repos ai include tensorflow core framework function h 51 e1696 tensorflow tsl protobuf error code pb h ai c user user source repos ai include tensorflow core framework function h 52 e1696 tensorflow core protobuf remote tensor handle pb h ai c user user source repos ai include tensorflow core framework function h 54 e1696 tensorflow core framework node def pb h ai c user user source repos ai include tensorflow core framework node def builder h 23 e1696 tensorflow core framework op def pb h ai c user user source repos ai include tensorflow core framework node def builder h 26 e1696 tensorflow core framework node def pb h ai c user user source repos ai include tensorflow core framework node def util h 24 e1696 tensorflow core framework op def pb h ai c user user source repos ai include tensorflow core framework node def util h 25 e1696 tensorflow core framework type pb h ai c user user source repos ai include tensorflow core framework node def util h 29 e1696 tensorflow core framework node def pb h ai c user user source repos ai include tensorflow core framework node property h 19 e1696 tensorflow core framework op def pb h ai c user user source repos ai include tensorflow core framework node property h 20 e1696 absl container flat hash map h ai c user user source repos ai include tensorflow core framework op h 25 e1696 tensorflow core framework full type pb h ai c user user source repos ai include tensorflow core framework op h 26 e1696 tensorflow core framework full type pb h ai c user user source repos ai include tensorflow core framework op def builder h 26 e1696 tensorflow core framework op def pb h ai c user user source repos ai include tensorflow core framework op def builder h 27 e1696 tensorflow core framework api def pb h ai c user user source repos ai include 
tensorflow core framework op def util h 24 e1696 tensorflow core framework op def pb h ai c user user source repos ai include tensorflow core framework op def util h 25 e1696 absl time time h ai c user user source repos ai include tensorflow core framework op kernel h 24 e1696 absl type optional h ai c user user source repos ai include tensorflow core framework op kernel h 25 e1696 absl type span h ai c user user source repos ai include tensorflow core framework op kernel h 26 e1696 tensorflow core framework graph pb h ai c user user source repos ai include tensorflow core framework op kernel h 31 e1696 tensorflow core framework kernel def pb h ai c user user source repos ai include tensorflow core framework op kernel h 32 e1696 tensorflow core framework node def pb h ai c user user source repos ai include tensorflow core framework op kernel h 34 e1696 tensorflow core framework tensor shape pb h ai c user user source repos ai include tensorflow core framework op kernel h 44 e1696 tensorflow core framework type pb h ai c user user source repos ai include tensorflow core framework op kernel h 47 e1696 tensorflow core protobuf config pb h ai c user user source repos ai include tensorflow core framework op kernel h 59 e1696 tensorflow core framework registration option h ai c user user source repos ai include tensorflow core framework registration registration h 38 e1696 tensorflow core framework type pb h ai c user user source repos ai include tensorflow core framework resource handle h 24 e1696 third party eigen3 unsupported eigen cxx11 tensor ai c user user source repos ai include tensorflow core framework tensor h 25 e1696 tensorflow core framework type pb h ai c user user source repos ai include tensorflow core framework tensor h 30 e1696 third party eigen3 unsupported eigen cxx11 tensor ai c user user source repos ai include tensorflow core framework tensor shape h 21 e1696 tensorflow core framework type pb h ai c user user source repos ai include tensorflow core 
framework tensor shape h 22 e1696 third party eigen3 unsupported eigen cxx11 tensor ai c user user source repos ai include tensorflow core framework tensor type h 19 e1696 third party eigen3 unsupported eigen cxx11 tensor ai c user user source repos ai include tensorflow core framework type h 23 e1696 tensorflow core framework full type pb h ai c user user source repos ai include tensorflow core framework type h 25 e1696 tensorflow core framework type pb h ai c user user source repos ai include tensorflow core framework type h 28 e1696 absl type optional h ai c user user source repos ai include tensorflow core graph graph h 45 e1696 tensorflow core framework full type pb h ai c user user source repos ai include tensorflow core graph graph h 46 e1696 tensorflow core framework node def pb h ai c user user source repos ai include tensorflow core graph graph h 48 e1696 absl container flat hash map h ai c user user source repos ai include tensorflow core graph graph debug info builder h 24 e1696 absl status status h ai c user user source repos ai include tensorflow core graph graph debug info builder h 25 e1696 absl status statusor h ai c user user source repos ai include tensorflow core graph graph debug info builder h 26 e1696 absl string string view h ai c user user source repos ai include tensorflow core graph graph debug info builder h 27 e1696 absl type span h ai c user user source repos ai include tensorflow core graph graph debug info builder h 28 e1696 tensorflow core framework graph debug info pb h ai c user user source repos ai include tensorflow core graph graph debug info builder h 29 e1696 tensorflow core framework op def pb h ai c user user source repos ai include tensorflow core graph node builder h 22 e1696 absl type span h ai c user user source repos ai include tensorflow core lib gtl array slice h 19 e1696 absl base attribute h ai c user user source repos ai include tensorflow core platform error h 23 e1696 absl strings str join h ai c user user 
source repos ai include tensorflow core platform error h 24 e1696 absl type optional h ai c user user source repos ai include tensorflow core platform threadpool h 22 e1696 tensorflow core protobuf config pb h ai c user user source repos ai include tensorflow core public session option h 22 e1696 absl string match h ai c user user source repos ai include tensorflow core util manage stack trace h 26 e1696 absl strings str cat h ai c user user source repos ai include tensorflow core util manage stack trace h 27 e1696 absl type optional h ai c user user source repos ai include tensorflow core util manage stack trace h 28 e1696 absl string string view h ai c user user source repos ai include tensorflow core util tensor format h 23 e1696 absl string string view h ai c user user source repos ai include tensorflow tsl framework allocator h 25 e1696 absl type optional h ai c user user source repos ai include tensorflow tsl framework allocator h 26 e1696 absl string string view h ai c user user source repos ai include tensorflow tsl framework device type h 22 e1696 eigen core ai c user user source repos ai include tensorflow tsl framework fixedpoint type h 21 e1696 absl container inline vector h ai c user user source repos ai include tensorflow tsl lib gtl inline vector h 19 e1696 third party eigen3 eigen core ai c user user source repos ai include tensorflow tsl platform bfloat16 h 20 e1696 absl string cord h ai c user user source repos ai include tensorflow tsl platform default cord h 22 e1696 absl base log severity h ai c user user source repos ai include tensorflow tsl platform default log h 35 e1696 absl string string view h ai c user user source repos ai include tensorflow tsl platform default log h 36 e1696 absl status statusor h ai c user user source repos ai include tensorflow tsl platform default statusor h 18 e1696 absl functional any invocable h ai c user user source repos ai include tensorflow tsl platform env h 27 e1696 absl base attribute h ai c user user 
MSVC IntelliSense error list, project "ai", all files under C:\Users\user\source\repos\ai\include\tensorflow\tsl\platform\:
errors.h(26): E1696 cannot open source file (header name truncated)
errors.h(27): E1696 cannot open source file "absl/status/status.h"
errors.h(28): E1696 cannot open source file "absl/strings/cord.h"
errors.h(29): E1696 cannot open source file "absl/strings/str_join.h"
float8.h(19): E1696 cannot open source file "include/float8.h"
protobuf.h(30): E1696 cannot open source file "google/protobuf/descriptor.pb.h"
protobuf.h(31): E1696 cannot open source file "google/protobuf/arena.h"
protobuf.h(32): E1696 cannot open source file "google/protobuf/descriptor.h"
protobuf.h(33): E1696 cannot open source file "google/protobuf/dynamic_message.h"
protobuf.h(34): E1696 cannot open source file "google/protobuf/io/coded_stream.h"
protobuf.h(35): E1696 cannot open source file "google/protobuf/io/tokenizer.h"
protobuf.h(36): E1696 cannot open source file "google/protobuf/io/zero_copy_stream.h"
protobuf.h(37): E1696 cannot open source file "google/protobuf/io/zero_copy_stream_impl_lite.h"
protobuf.h(38): E1696 cannot open source file "google/protobuf/map.h"
protobuf.h(39): E1696 cannot open source file "google/protobuf/message.h"
protobuf.h(40): E1696 cannot open source file "google/protobuf/repeated_field.h"
protobuf.h(41): E1696 cannot open source file "google/protobuf/text_format.h"
protobuf.h(42): E1696 cannot open source file "google/protobuf/util/field_comparator.h"
protobuf.h(43): E1696 cannot open source file "google/protobuf/util/json_util.h"
protobuf.h(44): E1696 cannot open source file "google/protobuf/util/message_differencer.h"
protobuf.h(45): E1696 cannot open source file "google/protobuf/util/type_resolver_util.h"
status.h(28): E1696 cannot open source file "absl/base/attributes.h"
status.h(29): E1696 cannot open source file "absl/functional/function_ref.h"
status.h(30): E1696 cannot open source file "absl/status/status.h"
status.h(31): E1696 cannot open source file "absl/strings/cord.h"
status.h(32): E1696 cannot open source file "absl/strings/string_view.h"
status.h(33): E1696 cannot open source file "absl/types/optional.h"
status.h(39): E1696 cannot open source file "tensorflow/tsl/protobuf/error_codes.pb.h"
statusor.h(71): E1696 cannot open source file "absl/base/attributes.h"
statusor.h(72): E1696 cannot open source file "absl/status/statusor.h"
stringpiece.h(29): E1696 cannot open source file "absl/strings/string_view.h"
str_util.h(23): E1696 cannot open source file "absl/strings/str_join.h"
str_util.h(24): E1696 cannot open source file "absl/strings/str_split.h"
threadpool.h(22): E1696 cannot open source file "absl/types/optional.h"
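The E1696 list above indicates code being compiled against TensorFlow's installed headers without the third-party include directories (Abseil, Protobuf, and the float8 headers) on the include path. TensorFlow exposes the compiler flags it was built with through tf.sysconfig; the sketch below is only an illustration — the helper name tf_compile_flags is mine, and the import is guarded so the snippet stays inert where TensorFlow is not installed:

```python
# Sketch: query the include/compile flags needed to build C++ code
# against the installed TensorFlow headers. Guarded import so this
# degrades to an empty list where TensorFlow is absent.
try:
    import tensorflow as tf
except ImportError:
    tf = None

def tf_compile_flags():
    """Return compiler flags (e.g. -I<include dir>) reported by TF, or []."""
    if tf is None:
        return []
    return tf.sysconfig.get_compile_flags()

print(" ".join(tf_compile_flags()))
```

Passing these flags (or at least the -I entries) to the compiler would add the missing absl/, google/protobuf/, and float8 directories to the search path.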
tensorflowtensorflow | fix the code in adam.py | Bug | as requested in the issue, modify var.assign_sub(m alpha math_ops.sqrt(v) coefficient epsilon) to var.assign_sub(m alpha math_ops.sqrt(v) coefficient epsilon)
tensorflowtensorflow | issue | Bug | Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: yes. Source: source. TensorFlow version: TF 2.9. Custom code: yes. OS platform and distribution: Linux Ubuntu. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behavior: xyz. Standalone code to reproduce the issue: xyz. Relevant log output: jkjnk
tensorflowtensorflow | the issue of updating a formula | Bug | L443: var.assign_sub(m alpha math_ops.sqrt(v) coefficient epsilon) should be var.assign_sub(m alpha math_ops.sqrt(v) coefficient epsilon)
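The two reports above both point at the Adam variable update in adam.py, but the operator placement that distinguished the "before" and "after" expressions was lost when the issue text was flattened, so the exact requested change is not recoverable here. For orientation only, a minimal sketch of the textbook Adam step such a line implements — the names (m, v, alpha, epsilon) come from the reports, the bias-correction-folded-into-alpha arrangement mirrors the TensorFlow source, and the disputed coefficient factor is deliberately omitted as unrecoverable:

```python
import math

def adam_step(var, m, v, grad, lr=0.001, beta1=0.9, beta2=0.999,
              epsilon=1e-7, t=1):
    """One textbook Adam update on scalar state (illustrative sketch only)."""
    # first- and second-moment estimates
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # bias-corrected step size, folded into alpha
    alpha = lr * math.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
    # the update whose parenthesization the reports above dispute
    var = var - (m * alpha) / (math.sqrt(v) + epsilon)
    return var, m, v
```

In the TensorFlow source this corresponds to a var.assign_sub(...) on the optimizer's slot variables rather than a pure-Python assignment.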
tensorflowtensorflow | model.fit fails with "cuDNN graph failed to build: UNKNOWN: CUDNN_STATUS_BAD_PARAM" | Bug | Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: yes. Source: source. TensorFlow version: 2.11, 2.12, 2.13. Custom code: yes. OS platform and distribution: Linux Ubuntu 22.04. Mobile device: no response. Python version: 3.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 11.8/8.6, 11.8/8.9.2. GPU model and memory: RTX 3090 Ti, RTX 4090. Current behavior: this is the first time I have experienced such an error message: when I call model.fit, the server stops with the error message below. I tried cuDNN version 8.6 (as on tensorflow.org) and 8.9.2 (latest for CUDA 11.8); both have the problem. How can I solve this issue? Thanks.

Standalone code to reproduce the issue:

```python
gpu_id = "2"  # 0 or 1
import os
os.environ["CUDA_VISIBLE_DEVICES"] = gpu_id

import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers
import tensorflow.keras.models as models

size_y = 256
size_x = 256

# load dataset
dic_path = "seg_dataset/train_dic"
msk_path = "seg_dataset/train_msk"
seed = 1004  # random number in your mind

dic_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1 / 255, validation_split=0.2)
msk_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1 / 255, validation_split=0.2)

dic_train = dic_datagen.flow_from_directory(dic_path, target_size=(size_y, size_x),
                                            class_mode=None, seed=seed, subset="training")
msk_train = msk_datagen.flow_from_directory(msk_path, target_size=(size_y, size_x),
                                            class_mode=None, color_mode="grayscale",
                                            seed=seed, subset="training")
dic_valid = dic_datagen.flow_from_directory(dic_path, target_size=(size_y, size_x),
                                            class_mode=None, seed=seed, subset="validation")
msk_valid = msk_datagen.flow_from_directory(msk_path, target_size=(size_y, size_x),
                                            class_mode=None, color_mode="grayscale",
                                            seed=seed, subset="validation")

train_ds = zip(dic_train, msk_train)
valid_ds = zip(dic_valid, msk_valid)

f = [16, 32, 64, 128, 256]  # number of filters at each level
kernel_size = (3, 3)
padding = "same"
strides = 1

input_layer = layers.Input((size_y, size_x, 1))
p0 = input_layer

# down block 1
x = layers.Conv2D(16, kernel_size, padding=padding, strides=strides, activation="relu")(p0)
c1 = layers.Conv2D(16, kernel_size, padding=padding, strides=strides, activation="relu")(x)
x = layers.MaxPool2D((2, 2), (2, 2))(c1)
# down block 2
x = layers.Conv2D(32, kernel_size, padding=padding, strides=strides, activation="relu")(x)
c2 = layers.Conv2D(32, kernel_size, padding=padding, strides=strides, activation="relu")(x)
x = layers.MaxPool2D((2, 2), (2, 2))(c2)
# down block 3
x = layers.Conv2D(64, kernel_size, padding=padding, strides=strides, activation="relu")(x)
c3 = layers.Conv2D(64, kernel_size, padding=padding, strides=strides, activation="relu")(x)
x = layers.MaxPool2D((2, 2), (2, 2))(c3)
# down block 4
x = layers.Conv2D(128, kernel_size, padding=padding, strides=strides, activation="relu")(x)
c4 = layers.Conv2D(128, kernel_size, padding=padding, strides=strides, activation="relu")(x)
x = layers.MaxPool2D((2, 2), (2, 2))(c4)
# bottleneck
x = layers.Conv2D(256, kernel_size, padding=padding, strides=strides, activation="relu")(x)
x = layers.Conv2D(256, kernel_size, padding=padding, strides=strides, activation="relu")(x)
# up block 1
x = layers.UpSampling2D((2, 2))(x)
concat = layers.Concatenate()([x, c4])
x = layers.Conv2D(128, kernel_size, padding=padding, strides=strides, activation="relu")(concat)
x = layers.Conv2D(128, kernel_size, padding=padding, strides=strides, activation="relu")(x)
# up block 2
x = layers.UpSampling2D((2, 2))(x)
concat = layers.Concatenate()([x, c3])
x = layers.Conv2D(64, kernel_size, padding=padding, strides=strides, activation="relu")(concat)
x = layers.Conv2D(64, kernel_size, padding=padding, strides=strides, activation="relu")(x)
# up block 3
x = layers.UpSampling2D((2, 2))(x)
concat = layers.Concatenate()([x, c2])
x = layers.Conv2D(32, kernel_size, padding=padding, strides=strides, activation="relu")(concat)
x = layers.Conv2D(32, kernel_size, padding=padding, strides=strides, activation="relu")(x)
# up block 4
x = layers.UpSampling2D((2, 2))(x)
concat = layers.Concatenate()([x, c1])
x = layers.Conv2D(16, kernel_size, padding=padding, strides=strides, activation="relu")(concat)
x = layers.Conv2D(16, kernel_size, padding=padding, strides=strides, activation="relu")(x)
# last convolution 1x1
output = layers.Conv2D(1, (1, 1), padding="same", activation="sigmoid")(x)

model = models.Model(input_layer, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

path_checkpoint = "seg_checkpoint"
os.makedirs(path_checkpoint, exist_ok=True)
model_checkpointer = keras.callbacks.ModelCheckpoint(filepath=path_checkpoint,
                                                     save_weights_only=True,
                                                     monitor="val_loss", mode="min",
                                                     save_best_only=True, verbose=1)
# additional callbacks
callbacks = [model_checkpointer,
             keras.callbacks.EarlyStopping(patience=50, monitor="val_loss",
                                           mode="min", verbose=1)]

# train start
epochs = 10
history = model.fit(train_ds,
                    validation_data=valid_ds,
                    validation_steps=15,  # total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch
                    batch_size=16,
                    steps_per_epoch=50,
                    epochs=epochs,
                    callbacks=callbacks)
```

Relevant log output:

2023-07-26 14:46:27.667380: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:8942] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-07-26 14:46:27.667411: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-07-26 14:46:27.667426: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-07-26 14:46:27.671343: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-26 14:46:28.183018: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.
Instructions for updating: The TensorFlow Distributions library has moved to TensorFlow Probability. You should update all references to use tfp.distributions instead of tf.distributions.
WARNING:tensorflow:From /home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01. Instructions for updating: The TensorFlow Distributions library has moved to TensorFlow Probability. You should update all references to use tfp.distributions instead of tf.distributions.
2023-07-26 14:46:28.727203: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero (see more at L344-L355)
2023-07-26 14:46:28.808057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1884] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22168 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090 Ti, pci bus id: 0000:41:00.0, compute capability: 8.6
2023-07-26 14:46:28.809796: I tensorflow/core/common_runtime/direct_session.cc:380] Device mapping: /job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: NVIDIA GeForce RTX 3090 Ti, pci bus id: 0000:41:00.0, compute capability: 8.6
Found 40000 images belonging to 1 classes.
Found 40000 images belonging to 1 classes.
Found 10000 images belonging to 1 classes.
Found 10000 images belonging to 1 classes.
2023-07-26 14:46:30.294450: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1884] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22168 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090 Ti, pci bus id: 0000:41:00.0, compute capability: 8.6
Epoch 1/10
2023-07-26 14:46:31.666051: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:440] Loaded cuDNN version 8600
2023-07-26 14:46:31.674917: W tensorflow/core/framework/op_kernel.cc:1839] OP_REQUIRES failed at conv_ops_fused_impl.h:625 : INTERNAL: cuDNN graph failed to build: UNKNOWN: CUDNN_STATUS_BAD_PARAM in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(4340): 'conv_op_' cudnn backend operation cudnnFinalize failed
Traceback (most recent call last):
  File "/home/bootcamp/train_unet.py", line 161, in <module>
    history = model.fit(
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, …
tensorflow.python.framework.errors_impl.InternalError: Graph execution error:

Detected at node 'model/conv2d/Relu' defined at (most recent call last):
  File "/home/bootcamp/train_unet.py", line 161, in <module>
    history = model.fit(
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
    return fn(*args, **kwargs)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/training.py", line 1783, in fit
    tmp_logs = self.train_function(iterator)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/training.py", line 1377, in train_function
    return step_function(self, iterator)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/training.py", line 1360, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/training.py", line 1349, in run_step
    outputs = model.train_step(data)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/training.py", line 1126, in train_step
    y_pred = self(x, training=True)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/training.py", line 589, in __call__
    return super().__call__(*args, **kwargs)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/base_layer.py", line 1149, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/functional.py", line 515, in call
    return self._run_internal_graph(inputs, training=training, mask=mask)
  File "/home/bootcamp/miniconda3/envs/tf/lib/python3.10/site-packages/keras/src/engine/functional.py", line 672, in _run_internal_graph
    outputs = node.layer(*args, **kwargs)
  …
site package keras src engine training py line 1349 in run step output model train step datum file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1126 in train step y pre self x training true file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 589 in call return super call args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 515 in call return self run internal graph input training training mask mask file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 672 in run internal graph output node layer args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp train unet py line 161 in history model fit file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib 
python3 10 site package keras src engine training py line 1783 in fit tmp log self train function iterator file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1377 in train function return step function self iterator file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1360 in step function output model distribute strategy run run step args datum file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1349 in run step output model train step datum file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1126 in train step y pre self x training true file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 589 in call return super call args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 515 in call return self run internal graph input training training mask mask file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 672 in run internal graph output node layer args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 
10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src layer convolutional base conv py line 321 in call return self activation output file home bootcamp train unet py line 161 in history model fit file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1783 in fit tmp log self train function iterator file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1377 in train function return step function self iterator file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1360 in step function output model distribute strategy run run step args datum file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1349 in run step output model train step datum file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1126 in train step y pre self x training true file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 589 in call return super call args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 
envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 515 in call return self run internal graph input training training mask mask file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 672 in run internal graph output node layer args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src layer convolutional base conv py line 321 in call return self activation output file home bootcamp miniconda3 envs tf lib python3 10 site package keras src activation py line 306 in relu return backend relu file home bootcamp train unet py line 161 in history model fit file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1783 in fit tmp log self train function iterator file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1377 in train function return step function self iterator file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1360 in step function output model distribute strategy run run step args datum file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1349 in run step output 
model train step datum file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 1126 in train step y pre self x training true file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine training py line 589 in call return super call args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 515 in call return self run internal graph input training training mask mask file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine functional py line 672 in run internal graph output node layer args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 65 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src engine base layer py line 1149 in call output call fn input args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src util traceback util py line 96 in error handler return fn args kwargs file home bootcamp miniconda3 envs tf lib python3 10 site package keras src layer convolutional base conv py line 321 in call return self activation output file home bootcamp miniconda3 envs tf lib python3 10 site package keras src activation py line 306 in relu return backend relu file home bootcamp miniconda3 envs tf lib 
python3 10 site package keras src backend py line 5397 in relu x tf nn relu x cudnn graph fail to build unknown cudnn status bad param in tensorflow compiler xla stream executor cuda cuda dnn cc 4340 conv op cudnn backend operation cudnnfinalize fail node model conv2d relu op inference train function 4359 |
tensorflow/tensorflow | oneDNN logs are not printed when TF is built with --config=mkl_aarch64 | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: source. TensorFlow version: 2.13.0. Custom code: yes. OS platform and distribution: Ubuntu 22.04.2 LTS. Python version: 3.10.6. Bazel version: 6.3. GCC version: 11.3.0. Current behavior: I expect oneDNN logs to be printed while running deep-learning models such as ResNet50 when ONEDNN_VERBOSE=1 is exported, but nothing is logged. To reproduce, build TF on an Arm CPU with the following command:
```shell
bazel build --config=mkl_aarch64 //tensorflow/tools/pip_package:build_pip_package
```
Relevant log output: no response.
tensorflow/tensorflow | ValueError: Cannot take the length of shape with unknown rank when using MultiHeadRelativeAttention | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: source. TensorFlow version: 2.13.1. Custom code: yes. Platform / mobile device: Mac M2 Pro. Python version: 3.10.9. Current behavior: when using MultiHeadRelativeAttention from official.nlp.modeling.layers, I hit this error: "ValueError: Cannot take the length of shape with unknown rank". (I'm sorry, I'm not good at English. Thank you.) Standalone code to reproduce:
```python
from official.nlp.modeling.layers import MultiHeadRelativeAttention
import tensorflow as tf

vec = tf.constant(0.1, shape=(4, 3, 3))
layer = MultiHeadRelativeAttention(num_heads=4, key_dim=3)
output = layer(vec, vec, content_attention_bias=0.1, positional_attention_bias=0.1)
```
Relevant log output:
```shell
ValueError: Cannot take the length of shape with unknown rank.
```
tensorflow/tensorflow | Could not load dynamic library 'libcublasLt.so.12'; dlerror: libcublasLt.so.12: cannot open shared object file: No such file or directory | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TensorFlow version: v2.13.0-rc2-7-g1cb1a030a62 (2.13.0). Custom code: no. OS platform and distribution: Linux Ubuntu 23.10. Python version: 3.11. CUDA/cuDNN version: 11.8.0 / 8.9.3.28-1+cuda12.1. GPU model and memory: NVIDIA GeForce GTX 960M, 4096 MiB. Current behavior: running the MobileNet from the Keras included with TensorFlow leads to the following error: "Could not load library libcublasLt.so.12. Error: libcublasLt.so.12: cannot open shared object file: No such file or directory. Aborted (core dumped)" (original German: "Abgebrochen (Speicherabzug geschrieben)"). Standalone code to reproduce:
```shell
python -c "from tensorflow.keras.applications.mobilenet import MobileNet; import numpy as np; m = MobileNet(); m.predict(np.zeros((32, 224, 224, 3)))"
```
tensorflow/tensorflow | tensorflow.keras Model.predict is not thread-safe | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TensorFlow version: 2.13.0. Custom code: yes. OS platform and distribution: Linux CentOS 7.9. Python version: 3.11.4. Current behavior: we execute model.predict from multiple threads, and sometimes the code raises the exception "'Functional' object has no attribute 'predict_function'". While the exception is being raised, executing self.model.predict(x) again in the debug window returns the correct prediction result. Standalone code to reproduce:
```python
def predict(self, x, tf_serving=False, port=8501, model_path="", steps=0):
    if tf_serving:
        return self.predict_tf_serving_grpc(x, port, steps)
    pred = None
    try:
        if not self.model_trained:
            print("trying to load model\n")
            self.load_model(model_path)
            self.model_trained = True
        if steps == 0:
            pred = self.model.predict(x)
        else:
            pred = self.model.predict(x, steps=steps)
    except Exception as ex:
        print(ex)
    return pred
```
Relevant log output: no response.
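Because `Model.predict` lazily builds a cached predict function on first use, concurrent first calls can race. A common workaround (not from the report, and not TensorFlow API) is to serialize access with a lock; the sketch below uses a hypothetical `build_fn` stand-in for model loading so it runs without TensorFlow installed.

```python
import threading

class SafePredictor:
    """Serialize access to a lazily-initialized predict function.

    `build_fn` is a hypothetical placeholder for loading a Keras model;
    the lock ensures the lazy initialization and each prediction happen
    one thread at a time, avoiding the race the report describes.
    """

    def __init__(self, build_fn):
        self._build_fn = build_fn
        self._model = None
        self._lock = threading.Lock()

    def predict(self, x):
        with self._lock:                # one thread at a time
            if self._model is None:     # lazy init happens exactly once
                self._model = self._build_fn()
            return self._model(x)

# usage with a trivial stand-in "model" that doubles its input
p = SafePredictor(lambda: (lambda x: x * 2))
results = []
threads = [threading.Thread(target=lambda: results.append(p.predict(21)))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same pattern applies unchanged if `build_fn` loads a real model; the cost is that predictions no longer run concurrently, which is the trade-off for safety here.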
tensorflow/tensorflow | The legal value range of the alpha parameter in LeakyReLU | Bug | Issue type: documentation bug. Reproduced with tf-nightly: no. Source: source. TensorFlow version: 2.12.0. Custom code: yes. OS platform and distribution: macOS. Python version: 3.9. Current behavior: the documentation describes the LeakyReLU function as having an alpha parameter that is a float greater than or equal to 0, but we find that a value of this parameter less than zero also works. Documentation: "alpha: Float >= 0. Negative slope coefficient. Defaults to 0.3." Standalone code to reproduce:
```python
from tensorflow import keras

def bilstm(num_units=25, input_shape=(10,)):
    # bilstm input layer
    input_tensor = keras.Input(shape=input_shape)
    # bilstm hidden layers
    x = keras.layers.Embedding(input_dim=100, output_dim=10, input_length=8,
                               embeddings_initializer="uniform")(input_tensor)
    x = keras.layers.LeakyReLU(alpha=-0.2044550861511304)(x)
    # bilstm output layer
    output_tensor = x
    model = keras.Model(inputs=input_tensor, outputs=output_tensor)
    return model

if __name__ == "__main__":
    bilstm().summary()
```
Relevant log output:
```shell
Model: "model"
 Layer (type)              Output Shape     Param #
 input_1 (InputLayer)      [(None, 10)]     0
 embedding (Embedding)     (None, 10, 10)   1000
 leaky_re_lu (LeakyReLU)   (None, 10, 10)   0
Total params: 1,000
Trainable params: 1,000
Non-trainable params: 0
```
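The math behind this report is easy to see without Keras: LeakyReLU is `f(x) = x` for `x >= 0` and `alpha * x` otherwise, so nothing in the formula itself forbids a negative slope — it simply flips the sign of negative inputs, which is why only the documentation constrains alpha. A minimal pure-Python illustration (not the Keras implementation):

```python
def leaky_relu(x, alpha=0.3):
    """LeakyReLU as documented: alpha is the slope for negative inputs."""
    return x if x >= 0.0 else alpha * x

# documented, positive alpha: negative inputs keep a small negative slope
assert leaky_relu(-2.0, alpha=0.3) == -0.6
# an undocumented negative alpha still computes -- it just flips the sign,
# which is the behavior the report observed
assert leaky_relu(-2.0, alpha=-0.5) == 1.0
assert leaky_relu(3.0, alpha=-0.5) == 3.0  # positive inputs are unaffected
```

This suggests the documented `alpha >= 0` is a semantic convention (the layer should stay monotonic) rather than a computational requirement, which matches the observed behavior.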
tensorflow/tensorflow | The value range of the dropout and recurrent_dropout parameters of the GRU/LSTM/SimpleRNN functions | Bug | Issue type: documentation bug. Reproduced with tf-nightly: no. Source: source. TensorFlow version: 2.12.0. Custom code: yes. OS platform and distribution: macOS. Python version: 3.9. Current behavior: the documentation describes the dropout and recurrent_dropout parameters of the GRU/LSTM/SimpleRNN functions as floats with a value range from 0 to 1, but the program can still run normally if the number is not between 0 and 1. Documentation: "dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0."; "recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0." Standalone code to reproduce:
```python
from tensorflow import keras

def bilstm(num_units=25, input_shape=(10,)):
    # bilstm input layer
    input_tensor = keras.Input(shape=input_shape)
    # bilstm hidden layers
    x = keras.layers.Embedding(input_dim=985, output_dim=10,
                               input_length=8)(input_tensor)
    x = keras.layers.AlphaDropout(rate=0.1, noise_shape=None)(x)
    y = keras.layers.LSTM(units=num_units, return_sequences=False,
                          recurrent_dropout=0.1)(x)
    x = keras.layers.GRU(units=21, stateful=False,
                         dropout=-0.6477691805726197,
                         recurrent_dropout=-0.6477691805726197)(x)
    # bilstm output layer
    output_tensor = x
    model = keras.Model(inputs=input_tensor, outputs=output_tensor)
    return model

if __name__ == "__main__":
    bilstm().summary()
```
Relevant log output:
```shell
Model: "model"
 Layer (type)                   Output Shape     Param #
 input_1 (InputLayer)           [(None, 10)]     0
 embedding (Embedding)          (None, 10, 10)   9850
 alpha_dropout (AlphaDropout)   (None, 10, 10)   0
 gru (GRU)                      (None, 21)       2079
Total params: 11,929
Trainable params: 11,929
Non-trainable params: 0
```
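The usual reason the documentation restricts the rate to [0, 1) is inverted-dropout scaling: kept units are scaled by 1/(1 - rate), which is undefined at rate = 1 and sign-flipping beyond it. The sketch below illustrates that scaling with the validation a strict implementation would perform; it is a hedged, pure-Python illustration, not what Keras actually does (the report shows Keras accepts out-of-range values silently).

```python
def inverted_dropout_scale(rate):
    """Scale factor applied to kept units under inverted dropout.

    Rejects out-of-range rates up front, which is the check the
    documented range implies but the reported behavior lacks.
    """
    if not 0.0 <= rate < 1.0:
        raise ValueError(f"rate must be in [0, 1), got {rate}")
    return 1.0 / (1.0 - rate)

assert inverted_dropout_scale(0.5) == 2.0  # half the units kept, outputs doubled
try:
    inverted_dropout_scale(1.2)            # a value the docs forbid
    out_of_range_rejected = False
except ValueError:
    out_of_range_rejected = True
assert out_of_range_rejected
```

With a negative rate, 1/(1 - rate) is a shrink factor rather than a blow-up, which is presumably why the out-of-range values in the report "run normally" without crashing.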
tensorflow/tensorflow | Documentation bug about the ActivityRegularization API | Bug | Issue type: documentation bug. Reproduced with tf-nightly: no. Source: source. TensorFlow version: 2.12.0. Custom code: yes. OS platform and distribution: macOS. Python version: 3.9. Current behavior: the documentation describes the l1 and l2 parameters of the ActivityRegularization function as positive floats ("l1: L1 regularization factor (positive float)."; "l2: L2 regularization factor (positive float)."), but we find that they can run with values less than zero. Standalone code to reproduce:
```python
from tensorflow import keras

def gru(num_units=25, input_shape=(10,)):
    # gru input layer
    input_tensor = keras.Input(shape=input_shape)
    # gru hidden layers
    x = keras.layers.Embedding(input_dim=100, output_dim=10,
                               input_length=None)(input_tensor)
    x = keras.layers.GRU(units=32, dropout=0.7338014982069313,
                         return_sequences=True)(x)
    x = keras.layers.ActivityRegularization(l1=-0.616784030867379,
                                            l2=-0.9646777799675004)(x)
    # gru output layer
    output_tensor = keras.layers.Flatten()(
        keras.layers.Dense(units=num_units, activation="relu")(x))
    model = keras.Model(inputs=input_tensor, outputs=output_tensor)
    return model

if __name__ == "__main__":
    gru().summary()
```
Relevant log output:
```shell
Model: "model"
 Layer (type)                                       Output Shape     Param #
 input_1 (InputLayer)                               [(None, 10)]     0
 embedding (Embedding)                              (None, 10, 10)   1000
 gru (GRU)                                          (None, 10, 32)   4224
 activity_regularization (ActivityRegularization)   (None, 10, 32)   0
 dense (Dense)                                      (None, 10, 25)   825
 flatten (Flatten)                                  (None, 250)      0
Total params: 6,049
Trainable params: 6,049
Non-trainable params: 0
```
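Why the docs require positive factors: the activity penalty added to the loss has the form `l1 * sum(|x|) + l2 * sum(x^2)`, so a negative factor turns the penalty into a reward for large activations instead of a deterrent. A pure-Python illustration of that formula (not the Keras implementation):

```python
def activity_penalty(activations, l1=0.0, l2=0.0):
    """Penalty an ActivityRegularization-style layer would add to the loss."""
    return (l1 * sum(abs(a) for a in activations)
            + l2 * sum(a * a for a in activations))

acts = [1.0, -2.0, 3.0]
# positive factor: large activations are penalized (loss goes up)
assert activity_penalty(acts, l1=0.5) == 3.0
# negative factor: the "penalty" is negative, rewarding large activations
assert activity_penalty(acts, l1=-0.5) == -3.0
```

Nothing in this arithmetic fails for negative factors, which matches the report: the layer runs, it just optimizes in the opposite direction from what regularization intends.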
tensorflow/tensorflow | AttributeError: module 'tensorflow_datasets' has no attribute 'load' | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: source. TensorFlow version: 2.10.0. OS platform and distribution: Windows 11. Python version: 3.10.12. Current behavior: I cannot access UCF101 or download it (TensorFlow 2.10, Python 3.11.12, tensorflow-datasets 4.9.2). Standalone code to reproduce:
```python
import tensorflow_datasets
ucf101 = tensorflow_datasets.video.ucf101.Ucf101()
```
Relevant log output:
```shell
AttributeError: module 'tensorflow_datasets' has no attribute 'load'
```
tensorflow/tensorflow | site/en/guide/create_op.md example code has a memory leak | Bug | Issue type: documentation bug. Reproduced with tf-nightly: yes. Source: source. TensorFlow version: 2.6.5. Custom code: yes. OS platform and distribution: Linux CentOS 7.6. Mobile device: Linux CentOS 7.6. Python version: 3.7. Current behavior: in the demo code, the Tensor* output_tensor is allocated but the memory is not freed:
```cpp
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

class ZeroOutOp : public OpKernel {
 public:
  explicit ZeroOutOp(OpKernelConstruction* context) : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    // Grab the input tensor
    const Tensor& input_tensor = context->input(0);
    auto input = input_tensor.flat<int32>();

    // Create an output tensor
    Tensor* output_tensor = NULL;
    OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(),
                                                     &output_tensor));
    auto output_flat = output_tensor->flat<int32>();

    // Set all but the first element of the output tensor to 0.
    const int N = input.size();
    for (int i = 1; i < N; i++) {
      output_flat(i) = 0;
    }

    // Preserve the first input value if possible.
    if (N > 0) output_flat(0) = input(0);
  }
};
```
Standalone steps to reproduce: repeatedly call the ZeroOutOp; you'll see the memory continue to increase. Relevant log output: no response.
tensorflow/tensorflow | assert_shapes does not return anything and cannot be used as a control dependency | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: source. TensorFlow version: 2.14.0-dev20230606. Custom code: yes. OS platform and distribution: Windows 11 x64. Python version: 3.10. Current behavior: calling tf.debugging.assert_shapes does not return anything. As a consequence, unlike other assert operations, trying to use this assert as a control dependency with tf.control_dependencies in graph mode fails with "TypeError: Can not convert a NoneType into a Tensor or Operation". Presumably this would be fixed by simply adding a return before the call to assert_shapes in assert_shapes_v2 in tensorflow/python/ops/check_ops.py, but I'd rather someone familiar with the code judge whether that is fine, or whether there may be a reason why this op is not returned. Standalone code to reproduce:
```python
import tensorflow as tf

@tf.function
def my_func(x):
    with tf.control_dependencies([tf.debugging.assert_shapes([(x, (2,))])]):
        return x * 2

my_func([1, 2])  # TypeError: Can not convert a NoneType into a Tensor or Operation
```
Relevant log output:
```shell
in my_func
    with tf.control_dependencies([tf.debugging.assert_shapes([(x, (2,))])]):
  site-packages/tensorflow/python/framework/ops.py:5359 control_dependencies
    return get_default_graph().control_dependencies(control_inputs)
  site-packages/tensorflow/python/framework/func_graph.py:362 control_dependencies
    return super(FuncGraph, self).control_dependencies(filtered_control_inputs)
  site-packages/tensorflow/python/framework/ops.py:4815 control_dependencies
    c = self.as_graph_element(c)
  site-packages/tensorflow/python/framework/ops.py:3726 as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  site-packages/tensorflow/python/framework/ops.py:3815 _as_graph_element_locked
    (type(obj).__name__, types_str))
TypeError: Can not convert a NoneType into a Tensor or Operation.
```
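The failure mode is easy to see outside TensorFlow: `tf.control_dependencies` must convert each entry to an operation, and a function that falls off its end returns `None`, which cannot be converted. The stand-in below uses hypothetical names (not TF internals) to show why the report's proposed one-line fix — returning the assert op — resolves it.

```python
def as_dependency(candidate):
    """Mimic control_dependencies' conversion step: None is rejected."""
    if candidate is None:
        raise TypeError("Can not convert a NoneType into a Tensor or Operation")
    return candidate

def assert_without_return(ok):
    """Like assert_shapes today: checks, then falls off the end (None)."""
    if not ok:
        raise AssertionError("shape mismatch")

def assert_with_return(ok):
    """With the report's proposed fix: the assert op is returned."""
    if not ok:
        raise AssertionError("shape mismatch")
    return "assert_op"  # stand-in for the op the fixed function would return

try:
    as_dependency(assert_without_return(True))  # check passes, but None leaks
    failed = False
except TypeError:
    failed = True
assert failed
assert as_dependency(assert_with_return(True)) == "assert_op"
```

The sketch only models the control-flow shape of the bug; whether returning the op is safe in the real `check_ops.py` is the judgment the reporter explicitly defers to the maintainers.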
tensorflow/tensorflow | Model runs on CPU when using NNAPI even with NnApiDelegate option setUseNnapiCpu(false) | Bug | Issue type: Bug. Reproduced with tf-nightly: no. Source: binary. TensorFlow version: TensorFlow Lite 2.12.0. Custom code: yes. Mobile device: Google Pixel 7. Current behavior: I am running the OpenAI Whisper tiny model on Android using the TensorFlow Lite NNAPI delegate. It seems to run on the CPU instead of the hardware accelerator on the Google Pixel 7, even though the delegate option useNnapiCpu is set to false. Standalone code to reproduce:
```kotlin
val options = NnApiDelegate.Options()
options.setUseNnapiCpu(false)
```
Relevant log output:
```shell
W Access denied finding property "ro.mediatek.platform"
W Access denied finding property "ro.chipname"
W Access denied finding property "ro.hardware.chipname"
I Created TensorFlow Lite XNNPACK delegate for CPU.
```
tensorflow/tensorflow | UnsatisfiedLinkError: Failed to load native TensorFlow Lite methods | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: binary. TensorFlow version: 0.0.0-nightly-SNAPSHOT. Custom code: no. Mobile device: Android, Samsung Galaxy J5. Current behavior: "Fatal Exception: java.lang.UnsatisfiedLinkError: Failed to load native TensorFlow Lite methods. Check that the correct native libraries are present, and, if using a custom native library, have been properly loaded via System.loadLibrary(): java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol 'register_atfork' referenced by 'libtensorflowlite_jni.so'". This is reproducible on lots of Android devices running Android versions 5/6/7. It happens in the nightly snapshots from probably the last 2 weeks; we did not encounter this in previous nightly snapshots. Standalone code to reproduce:
```kotlin
val nnApiOptions = NnApiDelegate.Options()
nnApiOptions.setUseNnapiCpu(true)
val nnApiDelegate = NnApiDelegate(nnApiOptions)
```
Relevant log output:
```shell
2023-06-28 10:52:35.629 29536-29620 InterpreterApi  I  Didn't load native library: tensorflowlite_jni
2023-06-28 10:52:35.637 29536-29620 InterpreterApi  I  Didn't load native library: tensorflowlite_jni_stable
2023-06-28 10:52:35.638 29536-29620 InterpreterApi  I  Didn't load native library: tensorflowlite_jni_gms_client
```
tensorflow/tensorflow | The relationship between the parameters of Conv2D is unclear | Bug | Issue type: documentation bug. Reproduced with tf-nightly: no. Source: source. TensorFlow version: 2.12.0. Custom code: yes. OS platform and distribution: macOS. Python version: 3.9. Current behavior: "ValueError: strides > 1 not supported in conjunction with dilation_rate > 1. Received: strides=(2, 2) and dilation_rate=(4, 5)". The relationship between these two parameters is not clearly defined in the documentation, so a user may be unaware that strides > 1 is not supported in conjunction with dilation_rate > 1. Standalone code to reproduce:
```python
import tensorflow as tf
from tensorflow.python.keras.layers import Conv2D

input_tensor = tf.random.normal(shape=(1, 32, 32, 3))
x = Conv2D(filters=2, kernel_size=(1, 1), strides=(2, 2), padding="same",
           use_bias=False, dilation_rate=(4, 5))(input_tensor)
print(x.shape)
```
Relevant log output: no response.
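The constraint itself is simple: a Conv layer may stride or dilate, but not both at once. A sketch of the validation rule that produces the reported error (mirroring the message; this is an illustration, not the actual Keras source):

```python
def check_conv_args(strides, dilation_rate):
    """Reject strides > 1 combined with dilation_rate > 1, as Conv2D does."""
    if max(strides) > 1 and max(dilation_rate) > 1:
        raise ValueError(
            f"strides > 1 not supported in conjunction with "
            f"dilation_rate > 1. Received: strides={strides} and "
            f"dilation_rate={dilation_rate}")

check_conv_args((1, 1), (4, 5))   # dilation alone: fine
check_conv_args((2, 2), (1, 1))   # strides alone: fine
try:
    check_conv_args((2, 2), (4, 5))   # the report's combination
    raised = False
except ValueError:
    raised = True
assert raised
```

Documenting this either/or relationship on both parameters, as the rule above makes explicit, is essentially what the report asks for.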
tensorflow/tensorflow | TF 2.13 breaks register_keras_serializable | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: PyPI. TensorFlow version: 2.13.0rc2. Custom code: yes. Current behavior: when building TF Addons for TF 2.13, we're noticing that our ability to register custom Keras objects as serializable has been broken:
```python
@tf.keras.saving.register_keras_serializable("my_package")
class MyDense(tf.keras.layers.Dense):
    def __init__(self, units, **kwargs):
        super().__init__(units, **kwargs)
```
This is shown working in TF 2.12 and breaking in TF 2.13. Relevant log output:
```shell
ValueError: Unknown layer: 'MyDense'. Please ensure you are using a
`keras.utils.custom_object_scope` and that this object is included in the
scope. See the "Registering the custom object" guide for details.
```
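The mechanism behind `register_keras_serializable` is a name-to-class registry consulted at load time; when registration stops taking effect, deserialization falls through to the "Unknown layer" error above. A pure-Python sketch of that mechanism (illustrative only — the names below are hypothetical, not Keras's implementation):

```python
_REGISTRY = {}

def register_serializable(package):
    """Decorator mapping 'package>ClassName' to the class, Keras-style."""
    def wrap(cls):
        _REGISTRY[f"{package}>{cls.__name__}"] = cls
        return cls
    return wrap

def deserialize(name):
    """Look up a registered class by its serialized name."""
    try:
        return _REGISTRY[name]
    except KeyError:
        raise ValueError(
            f"Unknown layer: {name!r}. Please ensure this object is "
            "registered as serializable.") from None

@register_serializable("my_package")
class MyDense:
    pass

assert deserialize("my_package>MyDense") is MyDense
```

In this model, the TF 2.13 regression corresponds to the decorator no longer populating the registry the loader consults, so every lookup ends in the `ValueError` branch.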
tensorflow/tensorflow | "Requested feature_data size 536907080 doesn't match 1960: Feature generation failed" | Bug | Issue type: Bug. Reproduced with tf-nightly: yes. Source: source. TensorFlow version: v2.8. Custom code: yes. Current behavior: Hello together, I'm having a problem with the micro_speech example for Arduino from this repo. When trying to use this example with a newly trained model from the training Jupyter notebook, I always get the same error message: "Requested feature_data size 536907080 doesn't match 1960. Feature generation failed". The only thing I changed in the notebook is the TensorFlow version, because the notebook uses a 1.x version, which is no longer supported by Colab, and I changed it to work with the latest 2.x version. Can anyone help here? Greetings, Patrick. Relevant log output: no response.
tensorflow/tensorflow | ai | Invalid | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04); TensorFlow installed from (source or binary); TensorFlow version (or github SHA if from source). Provide the text output from tflite_convert (copy and paste here). Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to a Colab/Jupyter/any notebook; also, please include a link to a GraphDef or the model if possible. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | some parameter be miss type description | Bug | click to expand issue type documentation bug have you reproduce the bug with tf nightly yes source source tensorflow version tf 2 12 0 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour api lack of type desciption param tf sparse bincount value tf keras layers input tensor tf keras application densenet121 input tensor tf math add n input tf keras applications mobilenetv2 input tensor tf device device name tf keras regularizer get identifi tf metric rootmeansquarederror metric tf expand dim input tf keras application densenet201 input tensor tf signal frame signal tf loss mean square error y true y pre tf loss cosine similarity y true y pre tf keras metric binary accuracy y true y pre tf keras loss logcosh y true y pre tf keras application efficientnetb5 input tensor tf metric binary accuracy y true y pre tf data experimental assert cardinality expect cardinality tf keras metric sparse top k categorical accuracy y true y pre tf loss square hinge y true y pre tf keras util pack x y sample weight x y sample weight tf keras activation softplus x tf nn depth to space input tf gather nd param tf zero like input tf keras application efficientnetb2 input tensor tf keras metric meanabsoluteerror metric tf stack value tf ensure shape x tf roll input tf keras activation gelu x tf linalg set diag input diagonal k tf math bincount arr weight tf concat value tf keras application efficientnetb6 input tensor tf keras loss mean square error y true y pre tf random categorical logit tf keras backend be keras tensor x tf keras activation tanh x tf pad tensor tf keras initializers identity gain tf keras application efficientnetb0 input tensor tf repeat input tf image convert image dtype image tf split value tf keras 
util to categorical y tf scatter nd update tf keras layers concatenate input tf linalg band triangular solve band rhs tf one like input tf nest flatten structure tf keras activation linear x tf linalg tensor diag part input tf image pad to bound box image tf config set visible device device tf size input tf image resize image tf tile input tf keras application vgg16 input tensor tf keras metrics top k categorical accuracy y true y pre tf initializer identity gain tf image stateless random brightness image tf ragged range start limit delta tf reshape tensor tf loss logcosh y true y pre tf keras application resnet50v2 input tensor tf nn moment x tf keras application efficientnetb4 input tensor tf image adjust jpeg quality image tf keras loss sparse categorical crossentropy y true y pre tf control dependency control input tf image grayscale to rgb image tf image rgb to yiq image tf keras application efficientnetb3 input tensor tf ragged boolean mask datum mask tf image random hue image tf image adjust gamma image tf be tensor x tf keras initializers orthogonal gain tf math polyval coeff x tf io serialize tensor tensor tf image stateless random saturation image tf one hot indice tf linalg diag part input padding value tf image adjust saturation image tf boolean mask tensor mask tf transpose a tf image flip up down image tf keras loss binary crossentropy y true y pre tf broadcast to input tf image stateless random crop value tf loss mean absolute percentage error y true y pre tf image stateless random flip leave right image tf image random flip up down image tf keras activation exponential x tf keras application xception input tensor tf identity input tf gather param tf keras application inceptionv3 input tensor tf keras layer mask mask value tf loss kullback leibler divergence y true y pre tf linalg band part input tf keras loss cosine similarity y true y pre tf image random contrast image tf image transpose image tf stop gradient input tf string bytes split input tf 
random stateless parameterized truncate normal mean stddevs minval maxval tf keras loss mean absolute error y true y pre tf image stateless random hue image tf keras application densenet169 input tensor tf keras loss categorical crossentropy y true y pre tf nn embed lookup param tf math reduce variance input tensor tf keras util unpack x y sample weight datum tf nn l2 normalize x tf keras loss categorical hinge y true y pre tf keras application efficientnetb1 input tensor tf keras constraint get identifi tf initializer orthogonal gain tf divide x y tf math top k input tf keras loss kullback leibler divergence y true y pre tf image stateless random jpeg quality image tf keras loss mean absolute percentage error y true y pre tf keras applications efficientnetb7 input tensor tf clip by value t tf type spec from value value tf loss mean squared logarithmic error y true y pre tf tensor scatter nd update tensor indice tf equal x y tf image rgb to grayscale image tf image stateless random contrast image tf image rgb to hsv image tf convert to tensor value tf loss sparse categorical crossentropy y true y pre tf keras activations sigmoid x tf slice input tf image adjust hue image tf math argmax input tf reverse sequence input tf loss categorical crossentropy y true y pre tf keras loss square hinge y true y pre tf squeeze input tf math equal x y tf math divide x y tf unstack value tf keras application mobilenet input tensor tf keras application resnet152v2 input tensor tf keras activation softsign x tf keras application nasnetmobile input tensor tf keras activation swish x tf metric categorical accuracy y true y pre tf keras metric sparse categorical accuracy y true y pre tf metric sparse top k categorical accuracy y true y pre tf loss mean absolute error y true y pre tf loss binary crossentropy y true y pre tf keras application resnet50 input tensor tf image random jpeg quality image min jpeg quality max jpeg quality tf keras activation hard sigmoid x tf image flip leave 
right image tf keras application resnet101v2 input tensor tf nn batch normalization x mean variance offset scale tf math reduce min input tensor tf keras application resnet101 input tensor tf math not equal x y tf image rot90 image tf keras application vgg19 input tensor tf image stateless random flip up down image tf keras application mobilenetv3large input tensor tf keras application resnet152 input tensor tf keras metric categorical accuracy y true y pre tf sparse cross input tf keras loss mean square logarithmic error y true y pre tf image random flip leave right image. Standalone code to reproduce the issue: many parameters are not typed for any kind of value, so the allowed types should be clearly marked in the documentation. Relevant log output: no response.
tensorflowtensorflow | F1 score error on multi-class data | Bug | Click to expand. Issue type: bug. Have you reproduced the bug with tf-nightly? Yes. Source: binary. TensorFlow version: v1.12.1-95639-g08bd7e1a8e5 2.14.0-dev20230618. Custom code: yes. OS platform and distribution: macOS Ventura 13.0.1. Mobile device: no response. Python version: 3.8. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: implementing the F1 score available in the nightly build on multi-class data such as below: model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[tf.keras.metrics.F1Score()]); history = model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels)) triggers the following error: Epoch 1/10 — ValueError traceback (most recent call last): Cell In[8], line 5: 1 model.compile(optimizer='adam', 2 loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), 3 metrics=[tf.keras.metrics.F1Score()]); 5 history = model.fit(train_images, train_labels, epochs=10, 6 validation_data=(test_images, test_labels)). File /opt/homebrew/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py:70, in error_handler(*args, **kwargs): 67 filtered_tb = _process_traceback_frames(e.__traceback__) 68 # To get the full stack trace, call: 69 # tf.debugging.disable_traceback_filtering() 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb. File /var/folders/f5/mkqkf_0d42qcsqc37hd_y0hm0000gn/T/__autograph_generated_fileb8tcgui2.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator): 13 try: 14 do_return = True 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope) 16 except: 17 do_return = False. ValueError: in user code: File /opt/homebrew/lib/python3.8/site-packages/keras/src/engine/training.py, line 1338, in train_function: return step_function(self, iterator); line 1322, in step_function: outputs = model.distribute_strategy.run(run_step, args=(data,)); line 1303, in run_step: outputs = model.train_step(data); line 1085, in train_step: return self.compute_metrics(x, y, y_pred, sample_weight); line 1179, in compute_metrics: self.compiled_metrics.update_state(y, y_pred, sample_weight); File keras/src/engine/compile_utils.py, line 605, in update_state: metric_obj.update_state(y_t, y_p, sample_weight=mask); File keras/src/utils/metrics_utils.py, line 77, in decorated: update_op = update_state_fn(*args, **kwargs); File keras/src/metrics/base_metric.py, line 140, in update_state_fn: return ag_update_state(*args, **kwargs); File keras/src/metrics/f_score_metrics.py, line 176, in update_state: y_true = tf.convert_to_tensor(y_true, dtype=self.dtype). ValueError: Tensor conversion requested dtype float32 for Tensor with dtype uint8. I've tried with multiple multi-class datasets and the same error is returned. The F1 score page says it should work with multi-class data. Is there something I've missed regarding its implementation for multi-class data, such as somewhere to specify the number of classes, or is this a bug? Standalone code to reproduce the issue: here is a Jupyter notebook with some example data from . Relevant log output: no response.
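The error in this report is a dtype mismatch: the metric tries to convert uint8 integer labels to its own float32 dtype. One possible workaround (a sketch only; whether this resolves the reporter's exact case is not confirmed by the source) is to cast the integer class labels to a float32 one-hot encoding before they reach the metric:

```python
import numpy as np

# Hypothetical helper: convert integer class labels (e.g. uint8, as commonly
# loaded from image datasets) to a float32 one-hot matrix, the shape/dtype
# a multi-class F1 metric can consume without a dtype-conversion error.
def to_float_one_hot(labels, num_classes):
    labels = np.asarray(labels)
    one_hot = np.zeros((labels.shape[0], num_classes), dtype=np.float32)
    one_hot[np.arange(labels.shape[0]), labels] = 1.0
    return one_hot

y_uint8 = np.array([0, 2, 1, 2], dtype=np.uint8)   # labels as loaded from disk
y_float = to_float_one_hot(y_uint8, num_classes=3)
print(y_float.dtype)   # float32
print(y_float.shape)   # (4, 3)
```

The same idea applies inside a `tf.data` pipeline via a map step that casts labels before batching.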
tensorflowtensorflow | null | Invalid | System information: Android device information (use "adb shell getprop ro.build.fingerprint" if possible); TensorFlow Lite in Play Services SDK version (found in build.gradle); Google Play Services version (Settings > Apps > Google Play Services > App details). Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to, or attach, code demonstrating the problem. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | typo in docs: transfer_learning.ipynb | Bug | It seems there is a typo in the augmentation section. There is a note as follows (typo): "Note: These layers are active only during training, when you call Model.fit. They are inactive when the model is used in inference mode in Model.evaluate or Model.fit." If I understand correctly, the second reference to Model.fit is not intended or correct and should be changed to Model.call, as follows (correction): "Note: These layers are active only during training, when you call Model.fit. They are inactive when the model is used in inference mode in Model.evaluate or Model.call." I'm willing to help with a pull request if confirmed.
tensorflowtensorflow | test | Invalid | 1. System information: OS platform and distribution (e.g. Linux Ubuntu 16.04); TensorFlow installation (pip package or built from source); TensorFlow library version (if pip package, or github SHA if built from source). 2. Code: provide code to help us reproduce your issue using one of the following options. Option A: reference Colab notebooks — 1) reference TensorFlow model Colab demonstrating how to build your TF model; 2) reference TensorFlow Lite model Colab demonstrating how to convert your TF model to a TF Lite model (with quantization, if used) and run TFLite inference, if possible. You can paste links or attach files by dragging & dropping them below. Provide links to your updated versions of the above two Colab notebooks; provide links to your TensorFlow model and (optionally) TensorFlow Lite model. Option B: paste your code here, or provide a link to a custom end-to-end Colab. Include code to invoke the TFLite converter Python API and the error. 3. Failure after conversion: if the conversion is successful but the generated model is wrong, then state what is wrong — model produces wrong results and/or has lower accuracy; model produces correct results but is slower than expected. 4. (optional) RNN conversion support: if converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title. 5. (optional) Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | mistyped "mixed_bloat16" in the Keras mixed-precision Policy class | Bug | Click to expand. Issue type: bug. Have you reproduced the bug with tf-nightly? No. Source: source. TensorFlow version: master. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: there is a mistype at L195 in the mixed-precision policy check; it should be "bfloat16", not "bloat16": if name in ('mixed_float16', 'mixed_bloat16'): device_compatibility_check.log_device_compatibility_check(name). There is no practical issue with it, because log_device_compatibility_check doesn't support it anyway. Standalone code to reproduce the issue: not required, it's clear from the code. Relevant log output: no response.
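The typo matters because a membership test against a misspelled string can never match, so the branch is dead code. A small sketch of the corrected check (the function name here is illustrative, not the TensorFlow source):

```python
# Sketch of the corrected policy-name check. The original compared against
# the misspelled "mixed_bloat16", which can never equal a real policy name,
# so the bfloat16 branch silently never fired.
def needs_device_compat_check(name):
    return name in ("mixed_float16", "mixed_bfloat16")

print(needs_device_compat_check("mixed_bfloat16"))  # True
print(needs_device_compat_check("mixed_bloat16"))   # False (the old typo)
```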
tensorflowtensorflow | W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: ... | Bug | Click to expand. Issue type: bug. Have you reproduced the bug with tf-nightly? Yes. Source: source. TensorFlow version: 2.8.0. Custom code: no. OS platform and distribution: Ubuntu 22.04. Mobile device: no response. Python version: Python 3.9.16 (packaged by conda-forge, main, Feb 1 2023, 21:39:03). Bazel version: bazel 5.3.2. GCC/compiler version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0. CUDA/cuDNN version: 11.2 / 8 (in conda env). GPU model and memory: laptop RTX 3080. Current behaviour: a bug happened. Standalone code to reproduce the issue: I have TensorFlow 2 installed, and also from the code below I see cuDNN 8 is found. (samurai) mona@ard-gpu-01:~/samurai$ cat cudnn_test.py — import tensorflow as tf; sys_details = tf.sysconfig.get_build_info(); cuda_version = sys_details["cuda_version"]; print(cuda_version); cudnn_version = sys_details["cudnn_version"]; print(cudnn_version); cuda_compute_capability = sys_details["cuda_compute_capabilities"]; print(cuda_compute_capability). (samurai) mona@ard-gpu-01:~/samurai$ python cudnn_test.py — 11.2; 8; ['sm_35', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'compute_80']. However, when I run the following command, I get an error that cuDNN 8 is not found: (samurai) mona@ard-gpu-01:~/samurai$ python train_samurai.py --config configs/samurai/samurai.txt --datadir data/duck --basedir --expname duck_test --gpu 0. namespace config none basedir expname duck test batch size 1024 learning rate 0 0001 epoch 150 step per epoch 2000 gpu 0 tpu none debug false profile false perturb 1 0 raw noise std 0 0 coarse sample 64 linear disparity sample false fine sample 128 fourier frequency 10 direction fourier frequency 4 random encoding offset true fine net width 128 fine net depth 8 coarse net width 128 coarse net depth 6 appearance latent dim 32 diffuse latent dim 24 fix diffuse true camera distribution sphere use fully random camera false random
camera per view 4 min softmax scaler 1 0 max softmax scaler 10 0 camera weight update lr 0 3 camera weight update momentum 0 75 bounding size 0 5 resolution factor 4 advanced loss do 80000 network gradient norm clip 0 1 camera gradient norm clip 1 not learn r false not learn t false not learn f false edge align step 200 num edge align step 50 pretraine camera pose folder none start f optimization 90000 start fourier anneal 0 finish fourier anneal 50000 slow scheduler decay 100000 brdf schedule decay 40000 lambda smoothness 0 01 smoothness bind dividier 200 coarse distortion lambda 0 001 fine distortion lambda 0 normal direction lambda 0 005 mlp normal direction lambda 0 0003 disable posterior scale false disable mask uncertainty true lambda brdf decoder smoothness 0 1 lambda brdf decoder sparsity 0 01 camera lr 0 003 camera lr decay 70 camera regularization 0 1 aim center regularization 10 0 camera rotation lookat learn camera offset true basecolor metallic true skip decomposition false compose on white true rotate object false single env false brdf preintegration path data neural pil brdflut hdr illumination network path data neural pil illumination network datadir datum duck max resolution dimension 400 test holdout 16 dataset samurai load gt pose false canonical pose 0 log step 100 weight epoch 5 validation epoch 5 testset epoch 150 video epoch 50 lrate decay 300 render only false 2023 06 13 15 35 10 002485 I tensorflow stream executor cuda cuda gpu executor cc 936 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 06 13 15 35 10 022702 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudnn so 8 dlerror libcudnn so 8 can not open share object file no such file or directory ld library path usr local lib home mona mvtec halcon 23 05 progress lib x64 linux usr local cuda 11 7 lib64 home mona onnx tensorrt build 2023 06 13 15 35 10 022715 w 
tensorflow core common runtime gpu gpu device cc 1850 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device utilize 0 gpu for train 2023 06 13 15 35 11 092766 I tensorflow core platform cpu feature guard cc 151 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 avx512f fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 70 3 model sequential 12 layer type output shape param mappingnetwork layer 0 den none 128 16512 se mappingnetwork layer 1 den none 128 16512 se mappingnetwork final dense none 768 99072 reshape 1 reshape none 2 3 128 0 total param 132 096 trainable param 132 096 non trainable param 0 model sequential 13 layer type output shape param conditionalnetwork dense1 none 32 192 dense conditionalnetwork densefin none 256 8448 al dense reshape 2 reshape none 2 128 0 total param 8 640 trainable param 8 640 non trainable param 0 find ckpt start training in epoch 0 at step 0 start train home mona anaconda3 envs samurai lib python3 9 site package tensorflow python framework index slice py 444 userwarning convert sparse indexedslice indexedslice indice tensor gradient interpolate bilinear gather bottom right gatherv2 grad reshape 1 0 shape 1024 dtype int32 value tensor gradient interpolate bilinear gather bottom right gatherv2 grad reshape 0 shape 1024 1 dtype float32 dense shape tensor gradient interpolate bilinear gather bottom right gatherv2 grad cast 0 shape 2 dtype int32 to a dense tensor of unknown shape this may consume a large amount of memory warning warn home mona anaconda3 envs samurai lib python3 9 site package tensorflow python framework index slice py 444 userwarning convert sparse indexedslice indexedslice indice tensor 
gradient interpolate bilinear gather bottom leave gatherv2 grad reshape 1 0 shape 1024 dtype int32 value tensor gradient interpolate bilinear gather bottom leave gatherv2 grad reshape 0 shape 1024 1 dtype float32 dense shape tensor gradient interpolate bilinear gather bottom leave gatherv2 grad cast 0 shape 2 dtype int32 to a dense tensor of unknown shape this may consume a large amount of memory warning warn home mona anaconda3 envs samurai lib python3 9 site package tensorflow python framework index slice py 444 userwarning convert sparse indexedslice indexedslice indice tensor gradient interpolate bilinear gather top right gatherv2 grad reshape 1 0 shape 1024 dtype int32 value tensor gradient interpolate bilinear gather top right gatherv2 grad reshape 0 shape 1024 1 dtype float32 dense shape tensor gradient interpolate bilinear gather top right gatherv2 grad cast 0 shape 2 dtype int32 to a dense tensor of unknown shape this may consume a large amount of memory warning warn home mona anaconda3 envs samurai lib python3 9 site package tensorflow python framework index slice py 444 userwarning convert sparse indexedslice indexedslice indice tensor gradient interpolate bilinear gather top leave gatherv2 grad reshape 1 0 shape 1024 dtype int32 value tensor gradient interpolate bilinear gather top leave gatherv2 grad reshape 0 shape 1024 1 dtype float32 dense shape tensor gradient interpolate bilinear gather top leave gatherv2 grad cast 0 shape 2 dtype int32 to a dense tensor of unknown shape this may consume a large amount of memory warning warn 25 2000 eta 42 41 loss 1 8824 loss camera 7 2076 fine loss 1 8019 relevant log output shell samurai mona ard gpu 01 samurai lsb release a lsb version core 11 1 0ubuntu4 noarch security 11 1 0ubuntu4 noarch distributor i d ubuntu description ubuntu 22 04 2 lts release 22 04 codename jammy samurai mona ard gpu 01 samurai uname a linux ard gpu 01 5 19 0 43 generic 44 22 04 1 ubuntu smp preempt dynamic mon may 22 13 39 36 utc 2 
x86 64 x86 64 x86 64 gnu linux samurai mona ard gpu 01 samurai nvidia smi tue jun 13 15 38 44 2023 nvidia smi 530 30 02 driver version 530 30 02 cuda version 12 1 gpu name persistence m bus i d disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m mig m 0 nvidia geforce rtx 3080 l on 00000000 01 00 0 off n a n a 49c p8 17w 90w 102mib 16384mib 21 default n a process gpu gi ci pid type process name gpu memory i d i d usage 0 n a n a 2549 g usr lib xorg xorg 95mib 0 n a n a 2983 g libexec gnome remote desktop daemon 3mib samurai mona ard gpu 01 samurai nvcc version nvcc nvidia r cuda compiler driver copyright c 2005 2022 nvidia corporation build on we d jun 8 16 49 14 pdt 2022 cuda compilation tool release 11 7 v11 7 99 build cuda 11 7 r11 7 compiler 31442593 0 the code be from this repo |
tensorflowtensorflow | documentation bug: the description of padding | Bug | Click to expand. Issue type: documentation bug. Have you reproduced the bug with tf-nightly? No. Source: source. TensorFlow version: tf2.12.0. Custom code: yes. OS platform and distribution: macOS. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: output: ValueError: The 'padding' argument must be a tuple of 2 integers. Received: padding=2. Documentation: padding — int, or tuple of int (length 2), or dictionary. Standalone code to reproduce the issue: input_shape = (2, 2, 3); x = np.arange(np.prod(input_shape)).reshape(input_shape); x = ZeroPadding1D(padding=2)(x); print(x). Relevant log output: no response.
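The docs promise that an int `padding` is accepted as shorthand for a symmetric tuple. A pure-NumPy sketch of that documented behavior (this is an illustration of the semantics, not the Keras implementation; Keras is not required to run it):

```python
import numpy as np

# Sketch of what ZeroPadding1D is documented to do on a
# (batch, steps, features) tensor: an int padding p behaves like (p, p),
# padding zeros at the start and end of the steps axis.
def zero_padding_1d(x, padding):
    if isinstance(padding, int):
        padding = (padding, padding)      # int shorthand the docs describe
    left, right = padding
    return np.pad(x, ((0, 0), (left, right), (0, 0)))

x = np.arange(12).reshape((2, 2, 3))
print(zero_padding_1d(x, 2).shape)        # (2, 6, 3)
print(zero_padding_1d(x, (1, 2)).shape)   # (2, 5, 3)
```

If the layer rejects the int form at runtime, either the validation or the documentation is wrong; the reporter's point is that the two disagree.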
tensorflowtensorflow | image segmenter tflite-support: AttributeError: type object 'SegmentationOptions' has no attribute 'OutputType' | Bug | Click to expand. Issue type: documentation bug. Have you reproduced the bug with tf-nightly? Yes. Source: source. TensorFlow version: tflite-support 0.1.0a1. Custom code: no. OS platform and distribution: macOS Ventura 13.4. Mobile device: no response. Python version: 3.8.7. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: the syntax provided for using the image segmenter does not execute processor.SegmentationOptions in the Python environment. Possible fix: the line segmentation_options = processor.SegmentationOptions(output_type=processor.SegmentationOptions.OutputType.CATEGORY_MASK) should have been segmentation_options = processor.SegmentationOptions(output_type=processor.SegmentationOptions.output_type.CATEGORY_MASK). Standalone code to reproduce the issue: Step 2, "Use the model" (2). Relevant log output: Traceback (most recent call last): File "image_segmenter.py", line 8, in <module>: segmentation_options = processor.SegmentationOptions(output_type=processor.SegmentationOptions.OutputType.CATEGORY_MASK) — AttributeError: type object 'SegmentationOptions' has no attribute 'OutputType'.
tensorflowtensorflow | cannot pick GPU: unavailable | Bug | Click to expand. Issue type: bug. Have you reproduced the bug with tf-nightly? No. Source: source. TensorFlow version: 2.12.0. Custom code: yes. OS platform and distribution: Windows 10 build 19045.0. Mobile device: no response. Python version: Python 3.10.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: CUDA compilation tools, release 12.1, v12.1.105, build cuda_12.1.r12.1/compiler.32688072_0. GPU model and memory: 1070 Ti, 8 GB. Current behaviour: a bug happened. gpus = tf.config.experimental.list_physical_devices('GPU') — this piece of code returns an empty array, when in reality I have two GTX 1070s (description: NVIDIA GeForce GTX 1070 Ti x 2); at least I hope to get one. I'm following all installation steps one by one, verifying paths. I don't know what to do anymore. Standalone code to reproduce the issue: following this step guide "Install TensorFlow", configuration path CUDA verification: os.environ['CUDA_HOME'] = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1". Relevant log output: no response.
tensorflowtensorflow | pip installation: LD_LIBRARY_PATH order | Bug | Click to expand. Issue type: documentation bug. Have you reproduced the bug with tf-nightly? Yes. Source: binary. TensorFlow version: tf 2.12. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: in the official pip install documentation, LD_LIBRARY_PATH is updated via conda activation. Following the instructions on a clean machine, library linking works perfectly. However, if the machine has a native CUDA library set up, TF will load the system libraries ahead of the virtual-environment ones. Suggestion: load the conda paths ahead of the system ones: CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn; print(nvidia.cudnn.__file__)")); echo 'export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/:'$CUDNN_PATH'/lib:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh. Standalone code to reproduce the issue: echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:'$CUDNN_PATH'/lib' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh. Relevant log output: 2023-06-01 15:58:54.130390: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:417] Loaded runtime CuDNN library: 8.1.1 but source was compiled with: 8.6.0. CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from source, make sure the library loaded at runtime is compatible with the version specified during compile configuration. 2023-06-01 15:58:54.131265: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at conv_ops.cc:1068: UNIMPLEMENTED: DNN library is not found.
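The ordering bug above comes from first-match-wins semantics: the dynamic loader walks LD_LIBRARY_PATH left to right and loads the first `libcudnn.so.8` it finds, even if a newer copy sits later in the path. A small Python sketch of that search behavior (a simplified model of the loader, not the loader itself; the directory names are made up):

```python
# Model of LD_LIBRARY_PATH-style resolution: the FIRST directory containing
# the requested library wins, so conda/cudnn paths must be prepended,
# not appended, to shadow an older system copy.
def resolve(library, search_path, dir_contents):
    """dir_contents maps directory -> set of library names it holds."""
    for directory in search_path.split(":"):
        if library in dir_contents.get(directory, set()):
            return directory
    return None

dirs = {
    "/usr/lib": {"libcudnn.so.8"},    # hypothetical old system copy (8.1.1)
    "/conda/lib": {"libcudnn.so.8"},  # hypothetical pip/conda copy (8.6.0)
}

print(resolve("libcudnn.so.8", "/usr/lib:/conda/lib", dirs))  # /usr/lib
print(resolve("libcudnn.so.8", "/conda/lib:/usr/lib", dirs))  # /conda/lib
```

The first call mirrors the broken appended order (system 8.1.1 wins and TF logs the version-mismatch error); the second mirrors the suggested prepended order.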
tensorflowtensorflow | tf.__compiler_version__ causes error with pkg_resources' parse_version: invalid format | Bug | Click to expand. Issue type: bug. Have you reproduced the bug with tf-nightly? Yes. Source: binary. TensorFlow version: 2.14.0-dev20230531. Custom code: no. OS platform and distribution: Linux 3.10.0-1127.el7.x86_64. Mobile device: no response. Python version: 3.11.3. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: CUDA 10.2.89. GPU model and memory: n/a. Current behaviour: the following code in a fresh conda env with tf-nightly installed via pip produces a pkg_resources.extern.packaging.version.InvalidVersion error from parse_version: import tensorflow; from pkg_resources import parse_version; parse_version(tensorflow.__compiler_version__). Desired behavior: a string that matches the format produced by CMake's CMAKE_CXX_COMPILER_VERSION, i.e. is readable by pkg_resources.parse_version. Context: presently I am using this to warn against compiler mismatches at build time of a package that builds with TF libs in CMake, checking against CMAKE_CXX_COMPILER_VERSION. The build process doesn't strike me as relevant here, but I will provide the build files if requested. Standalone code to reproduce the issue: import tensorflow; from pkg_resources import parse_version; parse_version(tensorflow.__compiler_version__). Relevant log output: parse_version(tensorflow.__compiler_version__) — Traceback (most recent call last): File "<stdin>", line 1, in <module>; File /home/rainierbarrett/.conda/envs/tf_test/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/version.py, line 197, in __init__: raise InvalidVersion(f"Invalid version: '{version}'") — pkg_resources.extern.packaging.version.InvalidVersion: Invalid version: 'Ubuntu Clang 16.0.4 (++20230506063001+3c1576cc0c54-1~exp1~20230506063103.85)'
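The root cause is that `__compiler_version__` is a free-form compiler banner, not a PEP 440 version string, so strict parsers reject it. A hedged workaround sketch (stdlib only; `extract_release` is a hypothetical helper, not a TensorFlow or pkg_resources API) extracts the leading numeric release before comparing:

```python
import re

# Workaround sketch: pull the dotted numeric release out of a free-form
# compiler banner such as "Ubuntu Clang 16.0.4 (...)" so it can be compared
# without tripping a strict PEP 440 parser like pkg_resources.parse_version.
def extract_release(compiler_version):
    match = re.search(r"\d+(?:\.\d+)+", compiler_version)
    if match is None:
        return None
    return tuple(int(part) for part in match.group(0).split("."))

print(extract_release("Ubuntu Clang 16.0.4 (++20230506063001+3c1576cc0c54)"))
# (16, 0, 4)
print(extract_release("gcc 10.2.1"))
# (10, 2, 1)
```

Tuples of ints compare element-wise, so `extract_release(a) >= extract_release(b)` gives the mismatch warning the reporter wants without relying on the banner being a valid version string.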
tensorflowtensorflow | mkl_aarch64_threadpool build broken by recent commit | Bug | Click to expand. Issue type: bug. Have you reproduced the bug with tf-nightly? Yes. Source: source. TensorFlow version: git HEAD. Custom code: no. OS platform and distribution: Ubuntu 20.04. Mobile device: n/a. Python version: 3.8.13. Bazel version: 6.1.0. GCC/compiler version: 10.2.1. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Current behaviour: the build has been broken since a recent commit. Standalone code to reproduce the issue: bazel build --config=mkl_aarch64_threadpool --copt="-mtune=generic" --copt="-march=armv8-a" --copt="-O3" --verbose_failures --jobs=100 //tensorflow/tools/pip_package:build_pip_package. Relevant log output: INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (607 packages loaded, 38558 targets configured). INFO: Found 1 target. ERROR: home builder 1 tensorflow build tensorflow 1dnn git tensorflow/core/common_runtime/eager/BUILD:644:22: Compiling tensorflow/core/common_runtime/eager/mkl_eager_op_rewrite.cc failed: (Exit 1): gcc failed: error executing command (from target //tensorflow/core/common_runtime/eager:mkl_eager_op_rewrite) cd home builder cache bazel bazel builder 945690c41481150b9aa58576637dd867 execroot org tensorflow exec env path home builder cache bazelisk download bazelbuild bazel 6 1 0 linux arm64 bin home builder venv39 bin home builder local bin home builder bin usr share module bin usr local bin usr bin usr local sbin usr sbin pwd proc self cwd python bin path home builder venv39 bin python3 python lib path home builder venv39 lib python3 9 site package tf2 behavior 1 usr local bin gcc u fortify source fstack protector wall wunuse but set parameter wno free nonheap object fno omit frame pointer g0 o2 d fortify source 1 dndebug ffunction section fdata section std c 0x md mf bazel out aarch64 opt bin tensorflow core common runtime eager objs mkl eager op rewrite mkl eager op rewrite pic d frandom seed bazel out aarch64 opt bin tensorflow core common runtime eager objs mkl eager op rewrite mkl eager op rewrite pic o
fpic deigen mpl2 only deigen max align byte 64 dhave sys uio h dtf use snappy dllvm on unix 1 dhave backtrace 1 dbacktrace header dltdl shlib ext so dllvm plugin ext so dllvm enable thread 1 dhave deregister frame 1 dhave libpthread 1 dhave pthread getname np 1 dhave pthread h 1 dhave pthread setname np 1 dhave register frame 1 dhave setenv r 1 dhave strerror r 1 dhave sysexit h 1 dhave unistd h 1 d gnu source dhave link h 1 dhave mallinfo 1 dhave sbrk 1 dhave struct stat st mtim tv nsec 1 dllvm native arch aarch64 dllvm native asmparser llvminitializeaarch64asmparser dllvm native asmprinter llvminitializeaarch64asmprint dllvm native disassembler llvminitializeaarch64disassembler dllvm native target llvminitializeaarch64target dllvm native targetinfo llvminitializeaarch64targetinfo dllvm native targetmc llvminitializeaarch64targetmc dllvm native targetmca llvminitializeaarch64targetmca dllvm host triple aarch64 unknown linux gnu dllvm default target triple aarch64 unknown linux gnu dllvm version major 17 dllvm version minor 0 dllvm version patch 0 dllvm version string 17 0 0git d stdc limit macro d stdc constant macro d stdc format macros dblake3 use neon 0 dblake3 no avx2 dblake3 no avx512 dblake3 no sse2 dblake3 no sse41 dno llvm support 0 dcurl staticlib dgrpc are 0 dtensorflow use custom contraction kernel deigen use avx512 gemm kernel 0 deigen altivec use custom pack 0 deigen neon gebp nr 4 dtf enable activity watcher dbazel current repository iquote iquote bazel out aarch64 opt bin iquote external com google absl iquote bazel out aarch64 opt bin external com google absl iquote external farmhash archive iquote bazel out aarch64 opt bin external farmhash archive iquote external nsync iquote bazel out aarch64 opt bin external nsync iquote external com google protobuf iquote bazel out aarch64 opt bin external com google protobuf iquote external gif iquote bazel out aarch64 opt bin external gif iquote external libjpeg turbo iquote bazel out aarch64 opt bin 
external libjpeg turbo iquote external com googlesource code re2 iquote bazel out aarch64 opt bin external com googlesource code re2 iquote external fft2d iquote bazel out aarch64 opt bin external fft2d iquote external highwayhash iquote bazel out aarch64 opt bin external highwayhash iquote external zlib iquote bazel out aarch64 opt bin external zlib iquote external eigen archive iquote bazel out aarch64 opt bin external eigen archive iquote external double conversion iquote bazel out aarch64 opt bin external double conversion iquote external snappy iquote bazel out aarch64 opt bin external snappy iquote external llvm project iquote bazel out aarch64 opt bin external llvm project iquote external curl iquote bazel out aarch64 opt bin external curl iquote external boringssl iquote bazel out aarch64 opt bin external boringssl iquote external jsoncpp git iquote bazel out aarch64 opt bin external jsoncpp git iquote external com github grpc grpc iquote bazel out aarch64 opt bin external com github grpc grpc iquote external upb iquote bazel out aarch64 opt bin external upb iquote external local config cuda iquote bazel out aarch64 opt bin external local config cuda iquote external local config rocm iquote bazel out aarch64 opt bin external local config rocm iquote external local config tensorrt iquote bazel out aarch64 opt bin external local config tensorrt ibazel out aarch64 opt bin external llvm project mlir virtual include builtinattributeinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include builtinattributesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include builtindialectbytecodegen ibazel out aarch64 opt bin external llvm project mlir virtual include builtindialectincgen ibazel out aarch64 opt bin external llvm project mlir virtual include builtinlocationattributesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include builtinopsincgen ibazel out aarch64 opt bin external llvm project mlir 
virtual include builtintypeinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include builtintypesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include callopinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include castopinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include functioninterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include infertypeopinterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include opasminterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include regionkindinterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include sideeffectinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include symbolinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include tensorencodingincgen ibazel out aarch64 opt bin external llvm project mlir virtual include arithbaseincgen ibazel out aarch64 opt bin external llvm project mlir virtual include arithcanonicalizationincgen ibazel out aarch64 opt bin external llvm project mlir virtual include arithopsincgen ibazel out aarch64 opt bin external llvm project mlir virtual include arithopsinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include bytecodeopinterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include inferintrangeinterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include vectorinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include controlflowinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include controlflowopsincgen ibazel out aarch64 opt bin external llvm project mlir virtual include funcincgen ibazel out aarch64 opt bin external llvm project mlir virtual include asmparsertokenkind ibazel 
out aarch64 opt bin external llvm project mlir virtual include quantopsincgen ibazel out aarch64 opt bin external llvm project mlir virtual include looplikeinterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include memoryslotinterfacesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include dialectutilsincgen ibazel out aarch64 opt bin external llvm project mlir virtual include viewlikeinterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include pdlopsincgen ibazel out aarch64 opt bin external llvm project mlir virtual include pdltypesincgen ibazel out aarch64 opt bin external llvm project mlir virtual include pdlinterpopsincgen ibazel out aarch64 opt bin external llvm project mlir virtual include conversionpassincgen ibazel out aarch64 opt bin external llvm project mlir virtual include transformspassincgen ibazel out aarch64 opt bin external llvm project mlir virtual include derivedattributeopinterfaceincgen ibazel out aarch64 opt bin external llvm project mlir virtual include runtimeverifiableopinterfaceincgen ibazel out aarch64 opt bin external local config cuda cuda virtual include cuda header virtual ibazel out aarch64 opt bin external local config tensorrt virtual include tensorrt header isystem external farmhash archive src isystem bazel out aarch64 opt bin external farmhash archive src isystem external nsync public isystem bazel out aarch64 opt bin external nsync public isystem external com google protobuf src isystem bazel out aarch64 opt bin external com google protobuf src isystem external gif isystem bazel out aarch64 opt bin external gif isystem external zlib isystem bazel out aarch64 opt bin external zlib isystem third party eigen3 mkl include isystem bazel out aarch64 opt bin third party eigen3 mkl include isystem external eigen archive isystem bazel out aarch64 opt bin external eigen archive isystem external llvm project llvm include isystem bazel out aarch64 opt bin external 
llvm project llvm include isystem external llvm project mlir include isystem bazel out aarch64 opt bin external llvm project mlir include isystem external curl include isystem bazel out aarch64 opt bin external curl include isystem external boringssl src include isystem bazel out aarch64 opt bin external boringssl src include isystem external jsoncpp git include isystem bazel out aarch64 opt bin external jsoncpp git include isystem external com github grpc grpc include isystem bazel out aarch64 opt bin external com github grpc grpc include isystem external com github grpc grpc src core ext upb generate isystem bazel out aarch64 opt bin external com github grpc grpc src core ext upb generate isystem external com github grpc grpc third party address sorting include isystem bazel out aarch64 opt bin external com github grpc grpc third party address sorting include isystem external local config cuda cuda isystem bazel out aarch64 opt bin external local config cuda cuda isystem external local config cuda cuda cuda include isystem bazel out aarch64 opt bin external local config cuda cuda cuda include isystem external local config rocm rocm isystem bazel out aarch64 opt bin external local config rocm rocm isystem external local config rocm rocm rocm include isystem bazel out aarch64 opt bin external local config rocm rocm rocm include isystem external local config rocm rocm rocm include rocrand isystem bazel out aarch64 opt bin external local config rocm rocm rocm include rocrand isystem external local config rocm rocm rocm include roctracer isystem bazel out aarch64 opt bin external local config rocm rocm rocm include roctracer wno all wno extra wno deprecate wno deprecate declaration wno ignore attribute wno array bound wunuse result werror unused result wswitch werror switch wno error unused but set variable dautoload dynamic kernel mtune generic march armv8 a o3 std c 17 deigen avoid stl array iexternal gemmlowp wno sign compare ftemplate depth 900 fno exception 
dtensorflow use xla 1 dintel mkl ddnnl aarch64 use acl 1 pthread fexception fno canonical system header wno builtin macro redefine d date redact d timestamp redact d time redact c tensorflow core common runtime eager mkl eager op rewrite cc o bazel out aarch64 opt bin tensorflow core common runtime eager objs mkl eager op rewrite mkl eager op rewrite pic o configuration c6a902c52ef1c3172fb41364dfaf0eb8e48392ba4d35c8c961b05ad0c35a1401 execution platform local execution config platform platform in file include from tensorflow core common runtime eager mkl eager op rewrite cc 22 tensorflow core util mkl util h 27 10 fatal error dnnl hpp no such file or directory 27 include dnnl hpp compilation terminate target tensorflow tool pip package build pip package fail to build info elapse time 2193 259s critical path 810 20 info 16688 process 2002 internal 14686 local fail build do not complete successfully |
tensorflowtensorflow | error when converting model to Core ML | Bug | Issue: I have a model that was trained in TensorFlow 2.x. The model works perfectly with the TensorFlow, OpenVINO, and ONNX Runtime formats, but does not get converted to Core ML. Inference is perfect in TensorFlow, but when I try to convert it into the Core ML format I get the following error:

```
InvalidArgumentError                      Traceback (most recent call last)
File .../sagemaker/envs/coreml_env/lib64/python3.8/site-packages/tensorflow/python/framework/importer.py:496, in _import_graph_def_internal(graph_def, input_map, return_elements, validate_colocation_constraints, name, producer_op_list)
    495 try:
--> 496   results = c_api.TF_GraphImportGraphDefWithResults(
    497       graph._c_graph, serialized, options)  # pylint: disable=protected-access
    498   results = c_api_util.ScopedTFImportGraphDefResults(results)

InvalidArgumentError: Input 0 of node model1/fpn/fpn1_bn/AssignNewValue was passed float from model1/fpn/fpn1_bn/FusedBatchNormV3/ReadVariableOp/resource:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[15], line 13
     13 c_model = ct.convert(model, inputs=[ct.TensorType(shape=input_shape, name=input_name)], source="tensorflow")
```

Source code. Here is the source code for loading and converting the model into Core ML format:

```python
import tensorflow as tf
import coremltools as ct

model_pth = "temp/with_weight/model.h5"
model = tf.keras.models.load_model(model_pth)
print(ct.__version__)

input_name = model.inputs[0].name
height = 256
width = 256
input_shape = ct.Shape(
    shape=(
        ct.RangeDim(lower_bound=1, upper_bound=-1),
        ct.RangeDim(lower_bound=height, upper_bound=1024),
        ct.RangeDim(lower_bound=width, upper_bound=1024),
        3,
    )
)
c_model = ct.convert(
    model,
    inputs=[ct.TensorType(shape=input_shape, name=input_name)],
    source="tensorflow",
)
```
tensorflowtensorflow | flowmode | Invalid | Converting model: the model converts successfully. Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

def my_function(x):
    return tf.add(x, 1)

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1, activation="relu"),
    tf.keras.layers.Lambda(my_function),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x=np.array([[1.0], [2.0], [3.0]]), y=np.array([2.0, 3.0, 4.0]), epochs=10)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
converter.inference_input_type = tf.float32
converter.inference_output_type = tf.float32
tflite_model = converter.convert()
with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)
```

This code creates a TensorFlow Lite model and saves it to a file called my_model.tflite. Any other info / logs: there are no other logs or source code that would be helpful to diagnose the problem.
tensorflowtensorflow | calling model.predict in graph mode is not supported when the model instance was constructed with eager mode enabled; please construct your model instance in graph mode or call model.predict with eager mode enabled | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: 2.6.0. Custom code: Yes. OS platform and distribution: No response. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: We are using TensorFlow version 2, where eager mode is enabled by default. While predicting from a trained model we get this error: "Calling model.predict in graph mode is not supported when the model instance was constructed with eager mode enabled. Please construct your model instance in graph mode or call model.predict with eager mode enabled." As a workaround, for the time being we resolved this issue by disabling eager mode and loading the model inside graph mode. We would like to know how we can resolve this issue without disabling eager mode.

Standalone code to reproduce the issue: This is not an issue which can be reproduced, as it happens only for a few requests.

Relevant log output: No response.
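For context, the error in this report comes from a guard pattern: the model remembers which execution mode was active when it was constructed and refuses to run under a different one. A pure-Python sketch of that pattern (hypothetical class and flag names, not the Keras implementation):

```python
# Hypothetical sketch of a mode guard: record the "mode" active at
# construction and raise if predict() runs under a different mode.
class ModeGuard:
    eager = True  # global flag standing in for tf.executing_eagerly()

    def __init__(self):
        self.built_eagerly = ModeGuard.eager  # remember construction mode

    def predict(self):
        if self.built_eagerly and not ModeGuard.eager:
            raise RuntimeError(
                "Calling predict in graph mode is not supported when the "
                "instance was constructed with eager mode enabled."
            )
        return "ok"

model = ModeGuard()      # constructed while "eager"
ModeGuard.eager = False  # later, the caller runs under "graph" mode
try:
    model.predict()
except RuntimeError as e:
    err = str(e)

ModeGuard.eager = True   # matching modes again: predict succeeds
ok = model.predict()
```

This is why the error only appears for some requests: it depends on which mode happens to be active when `predict` is called, not on the model itself.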
tensorflowtensorflow | google.protobuf.message.DecodeError: Error parsing message when using hlo_pb2.HloModuleProto | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: 2.8. Custom code: Yes. OS platform and distribution: Ubuntu 16.04. Mobile device: No response. Python version: 3.8. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: A bug happened. I use hlo_pb2.HloModuleProto().ParseFromString to get the HLO model, but google.protobuf.message.DecodeError: Error parsing message is reported. In addition, I can use tf.compat.v1.GraphDef to load the model. May I ask if there is something wrong with the method I use?

Standalone code to reproduce the issue:

```python
import os
import tensorflow as tf

pb_file_path = os.getcwd()

with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    x = tf.compat.v1.placeholder(tf.float32, name="x")
    y = tf.compat.v1.placeholder(tf.float32, name="y")
    b = tf.Variable(1.0, name="b")
    xy = tf.multiply(x, y)
    op = tf.add(xy, b, name="op_to_store")
    sess.run(tf.compat.v1.global_variables_initializer())
    constant_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["op_to_store"]
    )
    with tf.io.gfile.GFile("model_test.pb", "wb") as f:
        f.write(constant_graph.SerializeToString())

from tensorflow.compiler.xla.service import hlo_pb2

with tf.io.gfile.GFile("tmp.pb", "rb") as f:
    module = hlo_pb2.HloModuleProto()
    module.ParseFromString(f.read())   # raises DecodeError

    module = tf.compat.v1.GraphDef()
    module.ParseFromString(f.read())   # works
```

Relevant log output: No response.
tensorflowtensorflow | tf.meshgrid not working with tf.function | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: No. Source: source. TensorFlow version: 2.4.4. Custom code: Yes. OS platform and distribution: Windows 10. Mobile device: No response. Python version: 3.8. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: When running the code, I would expect it to pass without errors, but I get the following error. When deleting the @tf.function decorator, it works as expected. Traceback (intermediate def_function/func_graph frames trimmed for readability):

```
Traceback (most recent call last):
  File "C:\Users\josef.ondrej\...\scratch_236.py", line 11, in <module>
    print(my_function())
  ...
  File "...\tensorflow\python\framework\func_graph.py", line 977, in wrapper
    raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in user code:

    ...\scratch_236.py:8 my_function  *
        tf.meshgrid(b, a)
    ...\tensorflow\python\ops\array_ops.py:3552 meshgrid
        mult_fact = ones(shapes, output_dtype)
    ...\tensorflow\python\ops\array_ops.py:2804 _constant_if_small
        if np.prod(shape) < 1000:
    ...\numpy\core\fromnumeric.py:3088 prod
        return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out, ...)
    ...\tensorflow\python\framework\ops.py:852 __array__
        raise NotImplementedError(

    NotImplementedError: Cannot convert a symbolic Tensor (meshgrid/Size_1:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
```

Standalone code to reproduce the issue:

```python
import tensorflow as tf

@tf.function
def my_function():
    a = tf.constant(1.0)
    b = tf.constant(1.0)
    tf.meshgrid(b, a)

print(my_function())
```

Relevant log output: No response.
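The failure above bottoms out in TF internally multiplying out a shape with NumPy while tracing, when the shape is still symbolic. As a loose stdlib analogy (not TF's actual code path), multiplying out a shape that contains an unknown dimension raises a TypeError, which is why element-count computations must special-case symbolic shapes:

```python
import math

def static_size(shape):
    """Return the element count for a fully known shape, else None.

    `None` entries stand in for unknown/symbolic dimensions, the way
    TensorFlow prints unknown dims in a TensorShape.
    """
    if any(d is None for d in shape):
        return None  # size is not statically known
    return math.prod(shape)

known = static_size([2, 3, 4])    # fully known: 24 elements
unknown = static_size([None, 3])  # unknown dim: no static size

# Multiplying blindly instead would raise:
try:
    math.prod([None, 3])
except TypeError as e:
    err = str(e)
```

The guarded version returns a sentinel for symbolic shapes instead of letting the multiplication blow up, which is the kind of check later TF versions added around this code path.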
tensorflowtensorflow | when argument batch_size is bool, tf.data.experimental.dense_to_ragged_batch works | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: TF 2.12.0. Custom code: Yes. OS platform and distribution: Win11. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: According to the docs, the argument batch_size should be int64, but in the following snippet, when it is of bool type the API tf.data.experimental.dense_to_ragged_batch also works. If this is due to a type cast in the API, then the documentation should make it clear that the argument batch_size can be of bool type as well; if this is an unexpected type cast, then this issue should be fixed.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

results = {}
batch_size = False
with tf.device("cpu"):
    results["res_cpu"] = tf.data.experimental.dense_to_ragged_batch(batch_size=batch_size)
with tf.device("gpu:0"):
    results["res_gpu"] = tf.data.experimental.dense_to_ragged_batch(batch_size=batch_size)
print(results)
# {'res_cpu': <function _apply_fn at 0x7f0f5974a3b0>, 'res_gpu': <function _apply_fn at 0x7f0f5974a710>}
```

Relevant log output: No response.
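The behavior in this report is consistent with how Python itself treats booleans: bool is a subclass of int, so any API that merely checks for an integer (or converts with int()) silently accepts True/False as 1/0. A minimal stdlib illustration, including the explicit check an API would need in order to reject bools (the validator name is illustrative):

```python
# In Python, bool is a subclass of int, so True/False pass integer checks.
assert isinstance(True, int)
assert issubclass(bool, int)

# Converting shows the values an int-typed parameter actually receives.
assert int(False) == 0
assert int(True) == 1

# An API that wants to reject bools must check for bool explicitly,
# because isinstance(x, int) alone lets them through:
def check_batch_size(batch_size):
    if isinstance(batch_size, bool) or not isinstance(batch_size, int):
        raise TypeError("batch_size must be a non-bool integer")
    return batch_size

assert check_batch_size(8) == 8
try:
    check_batch_size(False)
except TypeError as e:
    err = str(e)
```

So `batch_size=False` most likely reaches the op as the integer 0 rather than being rejected, which matches the "unexpected type cast" reading of the report.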
tensorflowtensorflow | no matching distribution found for tensorflow when installing with pip on macOS | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: 2.3.0. Custom code: Yes. OS platform and distribution: No response. Mobile device: No response. Python version: 3.9. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: ERROR: No matching distribution found for tensorflow==2.3.0

Standalone code to reproduce the issue:

```shell
pip install -U --index-url … tensorflow==2.3.0
```

Relevant log output:

```shell
Same issue as here: "No matching distribution found for TensorFlow when installing with pip on macOS 11" (#47205). The issue was not resolved but was closed; the proposed solution doesn't work, it errors with: ERROR: No matching distribution found for tensorflow==2.3.0
```
tensorflowtensorflow | android download page url not available | Bug | Issue type: Documentation Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: TF 2.8. Custom code: Yes. OS platform and distribution: No response. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: On the front page of the README.md, the Android download link points to a page that is now not available and redirects to another page.

Standalone code to reproduce the issue: Documentation issue, easy to find on the front page of this GitHub project.

Relevant log output: No response.
tensorflowtensorflow | regression: bazel_pip/tensorflow/python/kernel_tests/nn_ops:pooling_ops_3d_test fails | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: git HEAD. Custom code: No. OS platform and distribution: Ubuntu 20.04. Mobile device: N/A. Python version: 3.10. Bazel version: 5.3.0. GCC/compiler version: 10.2.1. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Current behavior: A commit introduced a unit test failure when TF_ENABLE_ONEDNN_OPTS=1.

Standalone code to reproduce the issue (flag and tag-filter signs were flattened in the source and are reconstructed here):

```shell
bazel test --build_tests_only --config=mkl_aarch64_threadpool --copt=-flax-vector-conversions --test_env=TF_ENABLE_ONEDNN_OPTS=1 --test_env=TF2_BEHAVIOR=1 --define=no_tensorflow_py_deps=true --test_lang_filters=py --flaky_test_attempts=3 --test_size_filters=small,medium --test_output=errors --verbose_failures=true --test_keep_going --notest_verbose_timeout_warnings --local_test_jobs=64 --test_tag_filters=-nopip,-no_pip,-oss_serial,-no_oss,-oss_excluded,-v1only,-benchmark-test,-no_aarch64,-no_oss_py38,-no_oss_py39,-no_oss_py310 -k -- //bazel_pip/tensorflow/... -//bazel_pip/tensorflow/compiler/tf2tensorrt/... -//bazel_pip/tensorflow/compiler/xrt/... -//bazel_pip/tensorflow/core/tpu/... -//bazel_pip/tensorflow/go/... -//bazel_pip/tensorflow/java/... -//bazel_pip/tensorflow/python/integration_testing/... -//bazel_pip/tensorflow/tools/toolchains/... -//bazel_pip/tensorflow/lite/... //bazel_pip/tensorflow/python/kernel_tests/nn_ops:atrous_conv2d_test
```

Relevant log output:

```shell
FAIL: testAvgPool3dGradInvalidKsize (__main__.PoolingTest)
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__MklNativeAvgPool3DGrad_device_/job:localhost/replica:0/task:0/device:CPU:0}} Sliding window ksize for dimension 1 was zero. [Op:AvgPool3DGrad]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../bazel_pip/tensorflow/python/kernel_tests/nn_ops/pooling_ops_3d_test.py", line 181, in testAvgPool3dGradInvalidKsize
    with self.assertRaisesRegex(..., "ksize must be positive"):
AssertionError: "ksize must be positive" does not match "{{function_node __wrapped__MklNativeAvgPool3DGrad_device_/job:localhost/replica:0/task:0/device:CPU:0}} Sliding window ksize for dimension 1 was zero. [Op:AvgPool3DGrad]"

Ran 33 tests in 22.409s

FAILED (failures=1, skipped=1)
```
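The test failure above is an error-message mismatch: the oneDNN kernel rejects the zero window size, but with different wording ("Sliding window ksize ... was zero") than the test expects ("ksize must be positive"). A hypothetical pure-Python sketch of the validation involved, with a message that would satisfy both wordings (names are illustrative, not TF's):

```python
def validate_ksize(ksize):
    """Reject non-positive pooling window sizes, mirroring the check
    behind the 'Sliding window ksize ... was zero' kernel error."""
    for dim, k in enumerate(ksize):
        if k <= 0:
            raise ValueError(
                f"Sliding window ksize for dimension {dim} was zero or "
                "negative; ksize must be positive"
            )
    return ksize

assert validate_ksize([1, 2, 2, 2, 1]) == [1, 2, 2, 2, 1]
try:
    validate_ksize([1, 0, 1, 1, 1])  # dimension 1 is zero
except ValueError as e:
    err = str(e)
```

Aligning the kernel's message with the regex the test asserts on (or relaxing the regex) is the usual fix for this kind of regression.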
tensorflowtensorflow | some unit tests failing pip tests | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: git HEAD. Custom code: No. OS platform and distribution: Ubuntu 20.04. Mobile device: N/A. Python version: 3.10. Bazel version: 5.3.0. GCC/compiler version: 10.2.1. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Current behavior: The unit tests bazel_pip/tensorflow/dtensor/python/tests:multi_client_test_2gpus, multi_client_test_cpu, multi_client_test_nccl_2gpus, and multi_client_test_nccl_local_2gpus have been failing since a commit (see step 5, 29731).

Standalone code to reproduce the issue (flag and tag-filter signs were flattened in the source and are reconstructed here):

```shell
bazel test --build_tests_only --config=mkl_aarch64_threadpool --copt=-flax-vector-conversions --test_env=TF_ENABLE_ONEDNN_OPTS=1 --test_env=TF2_BEHAVIOR=1 --define=no_tensorflow_py_deps=true --test_lang_filters=py --flaky_test_attempts=3 --test_size_filters=small,medium --test_output=errors --verbose_failures=true --test_keep_going --notest_verbose_timeout_warnings --local_test_jobs=64 --test_tag_filters=-nopip,-no_pip,-oss_serial,-no_oss,-oss_excluded,-v1only,-benchmark-test,-no_aarch64,-no_oss_py38,-no_oss_py39,-no_oss_py310 -k -- //bazel_pip/tensorflow/... -//bazel_pip/tensorflow/compiler/tf2tensorrt/... -//bazel_pip/tensorflow/compiler/xrt/... -//bazel_pip/tensorflow/core/tpu/... -//bazel_pip/tensorflow/go/... -//bazel_pip/tensorflow/java/... -//bazel_pip/tensorflow/python/integration_testing/... -//bazel_pip/tensorflow/tools/toolchains/... -//bazel_pip/tensorflow/lite/... //bazel_pip/tensorflow/python/kernel_tests/nn_ops:atrous_conv2d_test
```

Relevant log output (all three flaky-test attempts produced identical output; attempts 2 and 3 are omitted):

```shell
FAILED: //bazel_pip/tensorflow/dtensor/python/tests:multi_client_test_2gpus (see .../testlogs/bazel_pip/tensorflow/dtensor/python/tests/multi_client_test_2gpus/test.log, test_attempts/attempt_1.log, test_attempts/attempt_2.log)

==================== Test output for //bazel_pip/tensorflow/dtensor/python/tests:multi_client_test_2gpus:
2023-05-02 23:40:14.254127: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable TF_ENABLE_ONEDNN_OPTS=0.
Traceback (most recent call last):
  File ".../bin/bazel_pip/tensorflow/dtensor/python/tests/multi_client_test_2gpus.runfiles/org_tensorflow/bazel_pip/tensorflow/dtensor/python/tests/multi_client_test.py", line 27, in <module>
    from tensorflow.dtensor.python.tests import multi_client_test_util
ImportError: cannot import name 'multi_client_test_util' from 'tensorflow.dtensor.python.tests' (/workspace/pip_test/venv_clean/lib/python3.10/site-packages/tensorflow/dtensor/python/tests/__init__.py)
```
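The ImportError above means the installed pip package simply does not ship the `multi_client_test_util` module that the test imports. A stdlib way to check, from any environment, whether a module is resolvable on the current path (module names below are illustrative):

```python
import importlib.util

def module_shipped(name):
    """Return True if `name` can be resolved on the current sys.path,
    without actually importing it."""
    return importlib.util.find_spec(name) is not None

# A stdlib module that certainly exists:
assert module_shipped("json")
# A top-level module that does not exist anywhere:
assert not module_shipped("module_that_does_not_exist_12345")
```

Running such a check against the pip-installed `tensorflow` package would confirm whether the failure is a packaging gap (test utilities excluded from the wheel) rather than a bug in the tests themselves.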
tensorflowtensorflow | docs problem: notebook not found | Bug | Hello, I am new to TensorFlow and right now I am reading your guides, particularly the guide about tf.function. There you have a link to the guide for eager execution, but I don't have access to it. Is it my problem, or do you have a broken link? Guide for tf.function (…scrollto=j122xqyg7w6w): broken link, which is in the first block of code: "In TensorFlow 2, eager execution is turned on by default." Upd: there is also a broken link here (…scrollto=ocxx_hvk7p2o): "See the Transformer and Deep Dream tutorials for examples." The first link does not work. Upd 2: another broken link (…scrollto=jed2u_yrbfvb): "To learn more, see the tf.data: Build TensorFlow input pipelines guide."
tensorflowtensorflow | error message of tf.eye is inconsistent with doc | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: TF 2.12.0. Custom code: Yes. OS platform and distribution: Win11. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: According to the doc, the param num_rows should be a non-negative int32 scalar Tensor, but snippet 1 below produces an error message saying num_rows must be positive (i.e. cannot be zero), which is inconsistent with the doc. On the other hand, the param num_rows should not be a bool tensor, but when given a bool, tf.eye works, as snippet 2 shows.

Standalone code to reproduce the issue:

```python
# Snippet 1
import tensorflow as tf

results = {}
try:
    num_rows = -1
    results["res"] = tf.eye(num_rows=num_rows)
except Exception as e:
    results["err"] = "Error: " + str(e)
print(results)
# {'err': 'Error: Arguments num_rows and num_columns must be positive integer values. Received: num_rows=-1, num_columns=-1'}

# Snippet 2
results = {}
try:
    num_rows = True
    results["res"] = tf.eye(num_rows=num_rows)
except Exception as e:
    results["err"] = "Error: " + str(e)
print(results)
# {'res': <tf.Tensor ...>}
```

Relevant log output: No response.
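As a hedged illustration of what validation consistent with the documented contract would look like (pure Python, not TF's implementation): accept zero, reject negatives, and reject bools explicitly, since bool is a subclass of int:

```python
def eye(num_rows):
    """Build an identity matrix as nested lists. Per the documented
    contract: num_rows must be a non-negative int, and bools (which
    would otherwise pass an int check) are rejected."""
    if isinstance(num_rows, bool) or not isinstance(num_rows, int):
        raise TypeError("num_rows must be an int; bools are rejected")
    if num_rows < 0:
        raise ValueError("num_rows must be non-negative")
    return [[1 if i == j else 0 for j in range(num_rows)]
            for i in range(num_rows)]

assert eye(0) == []                    # zero is allowed, per the doc
assert eye(2) == [[1, 0], [0, 1]]
try:
    eye(True)                          # bool sneaks past isinstance(x, int)
except TypeError as e:
    err = str(e)
```

The bool case in snippet 2 of the report is explained the same way: `True` is an int subclass equal to 1, so it passes any plain integer check inside tf.eye.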
tensorflowtensorflow | error message of tf.constant_initializer is not accurate | Bug | Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: TF 2.12.0. Custom code: Yes. OS platform and distribution: Win11. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behavior: When running the snippet below, it throws an exception. It is correct to throw an exception, but I think the error message is not very accurate: the following error message reads more like something for developers to see than something telling the user which input parameter is wrong.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

results = {}
try:
    value_0_tensor = tf.random.uniform([], dtype=tf.float32)
    value_0 = tf.identity(value_0_tensor)
    value = value_0
    arg_class = tf.constant_initializer(value=value)
    arg_input_0 = 3
    arg_input_1 = 1
    arg_input = (arg_input_0, arg_input_1)
    results["res"] = arg_class(arg_input)
except Exception as e:
    results["err"] = "Error: " + str(e)
print(results)
# {'err': 'Error: TypeError: Scalar tensor has no `len()`.\nTraceback (most recent call last):\n\n  File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/ops.py", line 1107, in __len__\n    raise TypeError("Scalar tensor has no `len()`...\n\nTypeError: Scalar tensor has no `len()`.\n'}
```

Relevant log output: No response.
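A hedged sketch of the kind of user-facing re-raise the reporter is asking for: catch the low-level TypeError and name the offending parameter instead of surfacing a bare "no len()" traceback (the function and message here are illustrative, not TF's actual implementation):

```python
import numbers

def make_constant_initializer(value):
    """Validate `value` with a user-facing message: surface which
    argument was at fault instead of an internal 'no len()' error."""
    if isinstance(value, numbers.Number):
        return value
    try:
        return list(value)
    except TypeError as exc:
        raise TypeError(
            "constant_initializer: argument `value` must be a Python "
            f"scalar or list-like, got {type(value).__name__}"
        ) from exc

class ScalarTensor:
    """Non-numeric, non-iterable stand-in for a scalar Tensor."""

assert make_constant_initializer(2.0) == 2.0
assert make_constant_initializer([1, 2]) == [1, 2]
try:
    make_constant_initializer(ScalarTensor())
except TypeError as e:
    err = str(e)
```

Chaining with `raise ... from exc` keeps the developer-oriented traceback available while the top-level message tells the user which parameter to fix.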
tensorflow/tensorflow | Softmax API mismatch on beta param between TF and TFLite | Bug | In TFLite's reference for softmax, a beta parameter is supported and is multiplied with the logits (link, L53). However, in TF the beta seems not to be supported yet (link). Is there any reason why it is only supported in TFLite and not in TF now? More background: I am trying to test the beta parameter of softmax with a model converted from TF to TFLite. So if beta is not supported in TF, how could I insert beta into the TFLite model? Thanks, Jerry.
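Since the TFLite reference multiplies the logits by beta before the exponentiation, the same effect can be obtained in TF by scaling the logits first, e.g. tf.nn.softmax(beta * logits). A NumPy sketch of that equivalence (plain NumPy is used here so the snippet runs without TF):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

logits = np.array([1.0, 2.0, 3.0])
beta = 0.5
# Emulate TFLite's beta by scaling the logits before the softmax.
out = softmax(beta * logits)
print(out)
```

The result is still a valid probability distribution; beta only sharpens (beta > 1) or flattens (beta < 1) it.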
tensorflow/tensorflow | TypeError: unsupported operand type(s) for |: 'collections.OrderedDict' and 'collections.OrderedDict' is raised when training a Sequential model | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: tf-nightly 2.13.0.dev20230427. Custom code: No. OS platform and distribution: macOS. Mobile device: No response. Python version: 3.8. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.
Current behaviour: A bug happens: TypeError: unsupported operand type(s) for |: 'collections.OrderedDict' and 'collections.OrderedDict' is raised when training a Sequential model composed of Dense layers, trained with a NumPy-array input dataset.
Standalone code to reproduce the issue:
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD
    import numpy as np

    x = np.random.rand(10, 4)
    y = np.random.rand(10)
    model = Sequential()
    model.add(Dense(3, input_dim=4))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer=SGD(learning_rate=0.001))
    model.fit(x, y)  # raises TypeError
Relevant log output:
    Traceback (most recent call last):
      File "tf1.py", line 14, in <module>
        model.fit(x, y)
      File "python3.8/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File "python3.8/site-packages/tensorflow/core/function/capture/capture_container.py", line 303, in capture_types
        return self._by_val_tracetype | self._by_ref_tracetype
    TypeError: unsupported operand type(s) for |: 'collections.OrderedDict' and 'collections.OrderedDict'
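The traceback points at `self._by_val_tracetype | self._by_ref_tracetype`. The dict union operator `|` (PEP 584) only exists from Python 3.9 onward, which is consistent with the reported Python 3.8 environment. A version-guarded sketch of the difference (illustrative only, not the TF fix):

```python
import sys
from collections import OrderedDict

d1 = OrderedDict(a=1)
d2 = OrderedDict(b=2)
if sys.version_info >= (3, 9):
    merged = dict(d1 | d2)   # PEP 584 union operator, Python 3.9+
else:
    merged = {**d1, **d2}    # pre-3.9 spelling of the same merge
print(merged)
```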
tensorflow/tensorflow | Unit test failures on ARM CI | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: git HEAD. Custom code: No. OS platform and distribution: Ubuntu 20.04. Mobile device: N/A. Python version: 3.9.16. Bazel version: 5.3.0. GCC/compiler version: 10.2.1. CUDA/cuDNN version: N/A. GPU model and memory: N/A.
Current behaviour: Since the commit, the ARM CI has been showing unit test failures on:
    //bazel_pip/tensorflow/python/compiler/xla:xla_test_gpu
    //bazel_pip/tensorflow/python/data/experimental/kernel_tests:checkpoint_input_pipeline_hook_test
    //bazel_pip/tensorflow/python/distribute:parameter_server_strategy_test_cpu
    //bazel_pip/tensorflow/python/compiler/xla:xla_test_cpu
    //bazel_pip/tensorflow/python/distribute:parameter_server_strategy_test_gpu
    //bazel_pip/tensorflow/core/platform:ram_file_system_test
Standalone code to reproduce the issue:
    bazel --bazelrc=usertools/aarch64.bazelrc test --config=mkl_aarch64_threadpool --copt=-flax-vector-conversions --test_env=TF_ENABLE_ONEDNN_OPTS=1 --test_env=TF2_BEHAVIOR=1 --define=tf_api_version=2 --flaky_test_attempts=3 --test_output=errors --verbose_failures=true --keep_going --jobs=75 --notest_verbose_timeout_warnings --build_tests_only //tensorflow/core/platform:ram_file_system_test //tensorflow/python/distribute:parameter_server_strategy_test_gpu //tensorflow/python/compiler/xla:xla_test_cpu //tensorflow/python/distribute:parameter_server_strategy_test_cpu //tensorflow/python/data/experimental/kernel_tests:checkpoint_input_pipeline_hook_test //tensorflow/python/compiler/xla:xla_test_gpu
Relevant log output: All tests fail with a similar backtrace; this one is from //bazel_pip/tensorflow/core/platform:ram_file_system_test:
    File "/tmpfs/bazel_output/bazel/_bazel_ubuntu/eab0d61a99b6696edb3d2aff87b585e8/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/bazel_pip/tensorflow/core/platform/ram_file_system_test.runfiles/org_tensorflow/bazel_pip/tensorflow/core/platform/ram_file_system_test.py", line 21, in <module>
      from tensorflow.python.estimator import estimator
    File "/tmpfs/bazel_output/bazel/_bazel_ubuntu/eab0d61a99b6696edb3d2aff87b585e8/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/bazel_pip/tensorflow/core/platform/ram_file_system_test.runfiles/org_tensorflow/tensorflow/python/estimator/estimator.py", line 22, in <module>
      from tensorflow_estimator.python.estimator import estimator
    File "/workspace/pip_test/venv_clean/lib/python3.10/site-packages/tensorflow_estimator/__init__.py", line 8, in <module>
      from tensorflow_estimator._api.v1 import estimator
    File "/workspace/pip_test/venv_clean/lib/python3.10/site-packages/tensorflow_estimator/_api/v1/estimator/__init__.py", line 11, in <module>
      from tensorflow_estimator._api.v1.estimator import tpu
    File "/workspace/pip_test/venv_clean/lib/python3.10/site-packages/tensorflow_estimator/_api/v1/estimator/tpu/__init__.py", line 12, in <module>
      from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimator
    File "/workspace/pip_test/venv_clean/lib/python3.10/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 118, in <module>
      to_proto=resource_variable_ops._to_proto_fn,  # pylint: disable=protected-access
    AttributeError: module 'tensorflow.python.ops.resource_variable_ops' has no attribute '_to_proto_fn'
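The failing import chain ends with the estimator package assuming a `to_proto_fn` attribute still exists on the resource-variable-ops module. A minimal sketch of the defensive-access pattern that avoids the hard AttributeError (the class here is a hypothetical stand-in, not TF's module or the actual fix):

```python
class ResourceVariableOps:
    """Hypothetical stand-in for a module whose attribute was removed."""

# getattr with a default degrades gracefully instead of raising
# AttributeError: ... has no attribute 'to_proto_fn'.
to_proto = getattr(ResourceVariableOps, "to_proto_fn", None)
print(to_proto)  # None
```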
tensorflow/tensorflow | Bug feedback | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: No. Source: source. TensorFlow version: tf 2.5. Custom code: Yes. OS platform and distribution: Windows 10 21H2 19044.2604. Mobile device: No response. Python version: 3.7. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: 11.6. GPU model and memory: No response.
Current behaviour: A robustness problem happens when the number of inputs is greater than 1: the tanh function still computes on the inputs without throwing any exception, but in other frameworks like PyTorch I get an exception message immediately.
Standalone code to reproduce the issue:
    import tensorflow as tf
    import torch
    import mindspore as ms

    if __name__ == '__main__':
        try:
            print(tf.tanh(tf.ones([2, 2]), tf.ones([2, 2])))
        except Exception as e:
            print(e)
        try:
            print(torch.tanh(torch.ones(2, 2), torch.ones(2, 2)))
        except Exception as e:
            print(e)
        try:
            print(ms.ops.tanh(ms.ops.ones((2, 2), ms.float32), ms.ops.ones((2, 2), ms.float32)))
        except Exception as e:
            print(e)
Relevant log output:
    tf.Tensor([[0.76159416 0.76159416] ...], shape=(2, 2), dtype=float64)
    tanh() takes 1 positional argument but 2 were given
    too many positional arguments
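The stricter behavior the reporter expects from PyTorch can also be seen in Python's own math.tanh, which takes exactly one argument (shown here as a neutral reference point, not as TF's or PyTorch's implementation):

```python
import math

try:
    math.tanh(1.0, 1.0)  # a second positional argument is rejected
except TypeError as e:
    print(e)  # e.g. "math.tanh() takes exactly one argument (2 given)"

print(math.tanh(1.0))  # the one-argument form works normally
```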
tensorflow/tensorflow | Bug feedback | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: No. Source: binary. TensorFlow version: tf 2.5. Custom code: Yes. OS platform and distribution: Windows 10 21H2 19044.2604. Mobile device: No response. Python version: 3.7. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: 11.6. GPU model and memory: No response.
Current behaviour: A bug happens when the input of Conv2D is inf and in_channels and out_channels are greater than 128: the output is supposed to be inf, but is in fact nan. I get the correct output in PyTorch.
Standalone code to reproduce the issue:
    import numpy as np
    import tensorflow as tf

    data = np.array([np.inf for _ in range(1 * 128 * 6 * 6)])
    data = np.reshape(data, newshape=(1, 128, 6, 6))
    conv = tf.keras.layers.Conv2D(filters=256, kernel_size=3)
    print(conv(data))
Relevant log output:
    tf.Tensor([[[[nan nan nan ... nan nan nan]]]], shape=(1, 126, 4, 256), dtype=float32)
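One plausible explanation (an assumption, not confirmed in the thread): a convolution sums many weight-times-input products, and with an all-inf input any mix of positive and negative weights yields +inf + (-inf), which IEEE-754 defines as nan. A minimal NumPy check:

```python
import numpy as np

# With an all-inf input, a positive and a negative weight in the same
# convolution window produce +inf + (-inf), which is nan, not inf.
with np.errstate(invalid="ignore"):
    accumulated = np.inf * 1.0 + np.inf * (-1.0)
print(accumulated)  # nan
```

So whether the output is inf or nan depends on the signs of the randomly initialized kernel weights, not only on the framework.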
tensorflow/tensorflow | tensorflow.python.framework.errors_impl.NotFoundError: Key conv1/kernel not found in checkpoint | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: 2.12.0. Custom code: Yes. OS platform and distribution: Windows x64. Mobile device: No response. Python version: 3.10. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.
Current behaviour: Hi everyone, I am a novice programmer. I decided to create a neural network for facial recognition and I have encountered this error:
    C:\Users\wefy2\AppData\Local\Programs\Python\Python310\python.exe C:\Users\wefy2\PycharmProjects\CNN-facial-recognition-master1\test_face_id.py
    WARNING:tensorflow: From C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\compat\v2_compat.py:107: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version. Instructions for updating: non-resource variables are not supported in the long term.
    WARNING:tensorflow: From C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\layers\normalization\batch_normalization.py:581: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
    2023-04-11 23:01:13.568545: W tensorflow/c/c_api.cc:300] Operation '{name:'conv_dw_12_bn/gamma/Assign' id:1939 op device:{requested: '', assigned: ''} def:{{{node conv_dw_12_bn/gamma/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](conv_dw_12_bn/gamma, conv_dw_12_bn/gamma/Initializer/ones)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
    2023-04-11 23:01:15.059810: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at save_restore_v2_ops.cc:228 : NOT_FOUND: Key conv1/kernel not found in checkpoint
    WARNING:tensorflow:Restoring an object-based checkpoint using a name-based saver. This may be somewhat fragile, and will re-build the saver. Instead, consider loading object-based checkpoints using tf.train.Checkpoint().
    Traceback (most recent call last):
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\client\session.py", line 1378, in _do_call
        return fn(*args)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\client\session.py", line 1361, in _run_fn
        return self._call_tf_sessionrun(options, feed_dict, fetch_list)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\client\session.py", line 1454, in _call_tf_sessionrun
        return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict)
    tensorflow.python.framework.errors_impl.NotFoundError: Key conv1/kernel not found in checkpoint
        [[{{node save/RestoreV2}}]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 1418, in restore
        sess.run(self.saver_def.restore_op_name)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
        result = self._run(None, fetches, feed_dict, options_ptr)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
        results = self._do_run(handle, final_targets, final_fetches)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\client\session.py", line 1371, in _do_run
        return self._do_call(_run_fn, feeds, fetches, targets, options)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\client\session.py", line 1397, in _do_call
        raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
    tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:

    Detected at node 'save/RestoreV2' defined at (most recent call last):
      File "C:\Users\wefy2\PycharmProjects\CNN-facial-recognition-master1\test_face_id.py", line 66, in <module>
        test_nn.load_network()
      File "C:\Users\wefy2\PycharmProjects\CNN-facial-recognition-master1\test_face_id.py", line 25, in load_network
        saver = tf.train.Saver()
    Node: 'save/RestoreV2'
    Key conv1/kernel not found in checkpoint
        [[node save/RestoreV2]]

    Original stack trace for 'save/RestoreV2':
      File "C:\Users\wefy2\PycharmProjects\CNN-facial-recognition-master1\test_face_id.py", line 66, in <module>
        test_nn.load_network()
      File "C:\Users\wefy2\PycharmProjects\CNN-facial-recognition-master1\test_face_id.py", line 25, in load_network
        saver = tf.train.Saver()
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 934, in __init__
        self.build()
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 946, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 974, in _build
        self.saver_def = self._builder._build_internal(  # pylint: disable=protected-access
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 543, in _build_internal
        restore_op = self._AddRestoreOps(filename_tensor, saveables)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 360, in _AddRestoreOps
        all_tensors = self.bulk_restore(filename_tensor, saveables, preferred_shard)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 611, in bulk_restore
        return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1604, in restore_v2
        _, _, _op, _outputs = _op_def_library._apply_op_helper(
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 795, in _apply_op_helper
        op = g._create_op_internal(op_type_name, inputs, dtypes=None)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\framework\ops.py", line 3814, in _create_op_internal
        ret = Operation(...)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\wefy2\PycharmProjects\CNN-facial-recognition-master1\test_face_id.py", line 66, in <module>
        test_nn.load_network()
      File "C:\Users\wefy2\PycharmProjects\CNN-facial-recognition-master1\test_face_id.py", line 27, in load_network
        saver.restore(self.sess, path)
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 1444, in restore
        self._object_restore_saver = saver_from_object_based_checkpoint(
      File "C:\Users\wefy2\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\training\saver.py", line 1826, in saver_from_object_based_checkpoint
        raise errors.NotFoundError(
    tensorflow.python.framework.errors_impl.NotFoundError: Existing variables not in the checkpoint: conv1/kernel, conv1_bn/beta, conv1_bn/gamma, conv1_bn/moving_mean, conv1_bn/moving_variance, conv_dw_1/depthwise_kernel, conv_dw_10/depthwise_kernel, conv_dw_10_bn/beta, conv_dw_10_bn/gamma, conv_dw_10_bn/moving_mean, conv_dw_10_bn/moving_variance, conv_dw_11/depthwise_kernel, conv_dw_11_bn/beta, conv_dw_11_bn/gamma, conv_dw_11_bn/moving_mean, conv_dw_11_bn/moving_variance, conv_dw_12/depthwise_kernel, conv_dw_12_bn/beta, conv_dw_12_bn/gamma, conv_dw_12_bn/moving_mean, conv_dw_12_bn/moving_variance, conv_dw_13/depthwise_kernel, conv_dw_13_bn/beta, conv_dw_13_bn/gamma, conv_dw_13_bn/moving_mean, conv_dw_13_bn/moving_variance, conv_dw_1_bn/beta, conv_dw_1_bn/gamma, conv_dw_1_bn/moving_mean, conv_dw_1_bn/moving_variance, conv_dw_2/depthwise_kernel, conv_dw_2_bn/beta, conv_dw_2_bn/gamma, conv_dw_2_bn/moving_mean, conv_dw_2_bn/moving_variance, conv_dw_3/depthwise_kernel, conv_dw_3_bn/beta, conv_dw_3_bn/gamma, conv_dw_3_bn/moving_mean, conv_dw_3_bn/moving_variance, conv_dw_4/depthwise_kernel, conv_dw_4_bn/beta, conv_dw_4_bn/gamma, conv_dw_4_bn/moving_mean, conv_dw_4_bn/moving_variance, conv_dw_5/depthwise_kernel, conv_dw_5_bn/beta, conv_dw_5_bn/gamma, conv_dw_5_bn/moving_mean, conv_dw_5_bn/moving_variance, conv_dw_6/depthwise_kernel, conv_dw_6_bn/beta, conv_dw_6_bn/gamma, conv_dw_6_bn/moving_mean, conv_dw_6_bn/moving_variance, conv_dw_7/depthwise_kernel, conv_dw_7_bn/beta, conv_dw_7_bn/gamma, conv_dw_7_bn/moving_mean, conv_dw_7_bn/moving_variance, conv_dw_8/depthwise_kernel, conv_dw_8_bn/beta, conv_dw_8_bn/gamma, conv_dw_8_bn/moving_mean, conv_dw_8_bn/moving_variance, conv_dw_9/depthwise_kernel, conv_dw_9_bn/beta, conv_dw_9_bn/gamma, conv_dw_9_bn/moving_mean, conv_dw_9_bn/moving_variance, conv_pw_1/kernel, conv_pw_10/kernel, conv_pw_10_bn/beta, conv_pw_10_bn/gamma, conv_pw_10_bn/moving_mean, conv_pw_10_bn/moving_variance, conv_pw_11/kernel, conv_pw_11_bn/beta, conv_pw_11_bn/gamma, conv_pw_11_bn/moving_mean, conv_pw_11_bn/moving_variance, conv_pw_12/kernel, conv_pw_12_bn/beta, conv_pw_12_bn/gamma, conv_pw_12_bn/moving_mean, conv_pw_12_bn/moving_variance, conv_pw_13/kernel, conv_pw_13_bn/beta, conv_pw_13_bn/gamma, conv_pw_13_bn/moving_mean, conv_pw_13_bn/moving_variance, conv_pw_1_bn/beta, conv_pw_1_bn/gamma, conv_pw_1_bn/moving_mean, conv_pw_1_bn/moving_variance, conv_pw_2/kernel, conv_pw_2_bn/beta, conv_pw_2_bn/gamma, conv_pw_2_bn/moving_mean, conv_pw_2_bn/moving_variance, conv_pw_3/kernel, conv_pw_3_bn/beta, conv_pw_3_bn/gamma, conv_pw_3_bn/moving_mean, conv_pw_3_bn/moving_variance, conv_pw_4/kernel, conv_pw_4_bn/beta, conv_pw_4_bn/gamma, conv_pw_4_bn/moving_mean, conv_pw_4_bn/moving_variance, conv_pw_5/kernel, conv_pw_5_bn/beta, conv_pw_5_bn/gamma, conv_pw_5_bn/moving_mean, conv_pw_5_bn/moving_variance, conv_pw_6/kernel, conv_pw_6_bn/beta, conv_pw_6_bn/gamma, conv_pw_6_bn/moving_mean, conv_pw_6_bn/moving_variance, conv_pw_7/kernel, conv_pw_7_bn/beta, conv_pw_7_bn/gamma, conv_pw_7_bn/moving_mean, conv_pw_7_bn/moving_variance, conv_pw_8/kernel, conv_pw_8_bn/beta, conv_pw_8_bn/gamma, conv_pw_8_bn/moving_mean, conv_pw_8_bn/moving_variance, conv_pw_9/kernel, conv_pw_9_bn/beta, conv_pw_9_bn/gamma, conv_pw_9_bn/moving_mean, conv_pw_9_bn/moving_variance.
    Variable names when this checkpoint was written which don't exist now: Adam/m/conv2d/bias, Adam/m/conv2d/kernel, Adam/m/conv2d_1/bias, Adam/m/conv2d_1/kernel, Adam/m/conv2d_2/bias, Adam/m/conv2d_2/kernel, Adam/m/conv2d_3/bias, Adam/m/conv2d_3/kernel, Adam/m/dense/bias, Adam/m/dense/kernel, Adam/m/dense_1/bias, Adam/m/dense_1/kernel, Adam/v/conv2d/bias, Adam/v/conv2d/kernel, Adam/v/conv2d_1/bias, Adam/v/conv2d_1/kernel, Adam/v/conv2d_2/bias, Adam/v/conv2d_2/kernel, Adam/v/conv2d_3/bias, Adam/v/conv2d_3/kernel, Adam/v/dense/bias, Adam/v/dense/kernel, Adam/v/dense_1/bias, Adam/v/dense_1/kernel, conv2d/bias, conv2d/kernel, conv2d_1/bias, conv2d_1/kernel, conv2d_2/bias, conv2d_2/kernel, conv2d_3/bias, conv2d_3/kernel, count, iterations, learning_rate.
    (A total of 4 variable name(s) did match.)
    Could not find some variables in the checkpoint (see names above). Saver was attempting to load an object-based checkpoint (saved using tf.train.Checkpoint or tf.keras.Model.save_weights) using variable names. If the checkpoint was written with eager execution enabled, it's possible that variable names have changed (for example missing a '_1' suffix). It's also possible that there are new variables which did not exist when the checkpoint was written. You can construct a Saver(var_list=...) with only the variables which previously existed, and if variable names have changed you may need to make this a dictionary with the old names as keys. If you're using an Estimator, you'll need to return a tf.train.Saver inside a tf.train.Scaffold from your model_fn.

    Process finished with exit code 1

I suspect the problem is that the error occurs when loading the weights of the test_nn model from the saved checkpoint: it reports that some variables in the saved checkpoint do not correspond to variables in the model, either because there are no corresponding stored values for them, or because of a mismatch of TensorFlow library versions.

data_process.py:
    import cv2
    import glob
    import numpy as np

    save_to = 'C:/Users/wefy2/PycharmProjects/CNN-facial-recognition-master1/data/'
    all_faces = [img for img in glob.glob('C:/Users/wefy2/PycharmProjects/CNN-facial-recognition-master1/data/gt_db/s*/*.jpg')]
    faces_x, faces_y = [], []
    face_cascade = cv2.CascadeClassifier('data/haarcascade_frontalface.xml')
    for i, face in enumerate(all_faces):
        image = cv2.imread(face)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 1:
            x, y, w, h = faces[0]
            cropped_img = image[y:y+h, x:x+w]
            faces_x.append(cv2.resize(cropped_img, (128, 128)))
            faces_y.append(int(face.split('_')[2][1]))
        print('Finished', i, 'out of', len(all_faces))
    faces_x, faces_y = np.array(faces_x), np.array(faces_y)
    np.save(save_to + 'x_train', faces_x)
    np.save(save_to + 'y_train', faces_y)

train_face_id.py:
    import numpy as np
    import tensorflow as tf
    import tensorflow_addons as tfa

    # loading data
    faces_x = np.load('data/x_train.npy')
    faces_y = np.load('data/y_train.npy')
    faces_x = tf.expand_dims(faces_x, axis=0)
    faces_y = tf.expand_dims(faces_y, axis=0)
    train_dataset = tf.data.Dataset.from_tensor_slices((faces_x, faces_y))
    print('Faces were loaded successfully')
    print(tf.__version__)

    # construct the fully connected hash layers
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters=64, kernel_size=3, padding='same', activation='relu', input_shape=(128, 128, 3)),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Conv2D(filters=64, kernel_size=3, padding='same', activation='relu', input_shape=(128, 128, 3)),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(128, activation='sigmoid'),
    ])

    # compile the model
    model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
                  loss=tfa.losses.TripletSemiHardLoss(margin=3.0))
    print(model.summary())
    print('Model compiled successfully')

    # train the model
    print('Training has started')
    history = model.fit(train_dataset, epochs=10, verbose=1)

    # save the model
    model.save('models/face_id_model')
    print('Training is finished')

test_face_id.py:
    import numpy as np
    import cv2
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    class FaceID:
        def __init__(self):
            model = tf.keras.Sequential()
            net = tf.keras.applications.MobileNet(input_shape=(128, 128, 3), weights='imagenet', include_top=False)
            model.add(net)
            model.add(tf.keras.layers.GlobalAveragePooling2D())
            self.feature_extractor = model
            self.x_holder = tf.placeholder(shape=[None, 1024], dtype=tf.float32)
            fc_1 = tf.layers.dense(self.x_holder, units=512, activation=tf.nn.relu)
            fc_2 = tf.layers.dense(fc_1, units=128, activation=tf.nn.sigmoid)
            self.face_id = fc_2
            self.sess = None

        def load_network(self, path='models/face_id_model/variables/variables'):
            saver = tf.train.Saver()
            self.sess = tf.Session()
            saver.restore(self.sess, path)

        def get_id(self, imgs):
            imgs = imgs.reshape(-1, 128, 128, 3)
            features = self.feature_extractor.predict(imgs)
            embeddings = self.sess.run(self.face_id, feed_dict={self.x_holder: features})
            return embeddings[0]

    class FaceExtractor:
        def __init__(self, cascade_path='data/haarcascade_frontalface.xml'):
            self.face_cascade = cv2.CascadeClassifier(cascade_path)

        def extract_single_face_from_path(self, img_path):
            image = cv2.imread(img_path)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            faces = self.face_cascade.detectMultiScale(gray, 1.3, 5)
            if len(faces) == 1:
                x, y, w, h = faces[0]
                cropped_img = image[y:y+h, x:x+w]
                return cv2.resize(cropped_img, (128, 128))
            else:
                faces = self.face_cascade.detectMultiScale(gray, 1.3, 10)
                if len(faces) == 1:
                    x, y, w, h = faces[0]
                    cropped_img = image[y:y+h, x:x+w]
                    return cv2.resize(cropped_img, (128, 128))
            return None

        def faces_from_image(self, image):
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            return self.face_cascade.detectMultiScale(gray, 1.3, 5)

    test_nn = FaceID()
    face_ex = FaceExtractor()
    test_nn.load_network()
    ref_face = face_ex.extract_single_face_from_path('ref.jpg')
    ref_face_hash = test_nn.get_id(ref_face)
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        faces = face_ex.faces_from_image(frame)
        for face in faces:
            x, y, w, h = face
            cropped_face = cv2.resize(frame[y:y+h, x:x+w], (128, 128))
            cropped_hash = test_nn.get_id(cropped_face)
            cv2.rectangle(frame, (x, y), (x + w, y + h), 1, 3)
            distance_1 = np.sum(np.power(ref_face_hash - cropped_hash, 2))
            if distance_1 < 3:
                cv2.putText(frame, 'ref', (x, y + h + 30), cv2.FONT_HERSHEY_SIMPLEX, 1, 1, 2, cv2.LINE_AA)
            else:
                cv2.putText(frame, 'nan', (x, y + h + 30), cv2.FONT_HERSHEY_SIMPLEX, 1, 1, 2, cv2.LINE_AA)
        cv2.imshow('my faceid', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            ret, frame = cap.read()
            break
Relevant log output: No response.
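The mismatch itself is easy to see outside TF: the checkpoint stores keys from the training script's architecture (conv2d..., dense..., Adam optimizer slots), while the restore graph built in test_face_id.py asks for MobileNet names such as conv1/kernel. A dict-based analogy of the lookup failure (illustrative keys only, not the real checkpoint format):

```python
# Keys actually present in the checkpoint, named after the training model.
saved = {
    "conv2d/kernel": [0.1],
    "dense/kernel": [0.2],
    "Adam/m/conv2d/kernel": [0.0],
}

# The restore graph asks for a MobileNet variable name instead.
try:
    saved["conv1/kernel"]
except KeyError as e:
    print("not found in checkpoint:", e)
```

This is why the error lists both sets of names: the loader can only match variables whose names are identical in the saved and restoring graphs.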
tensorflow/tensorflow | Linked page is not valid | Bug | Click to expand. Issue type: Documentation bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: 2.8. Custom code: Yes. OS platform and distribution: No response. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.
Current behaviour: The trfl link is not valid.
Standalone code to reproduce the issue: The trfl link is not valid.
Relevant log output: No response.
tensorflow/tensorflow | pip install tensorflow for Mac M1 / Python 3.11: no matching distribution found (nightly builds do) | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: 2.12. Custom code: No. OS platform and distribution: macOS, M2. Mobile device: No response. Python version: 3.11. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.
Current behaviour: Please could you support Mac M1s in a way where we do not have to build from source? tf-nightly works out of the box, but the standard pip tensorflow is currently not supported.
Standalone code to reproduce the issue:
    pip install tensorflow
    ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
    ERROR: No matching distribution found for tensorflow
    pip install tensorflow-macos
    ERROR: Could not find a version that satisfies the requirement tensorflow-macos (from versions: none)
    ERROR: No matching distribution found for tensorflow-macos
    pip install tf-nightly
    Downloading tf_nightly-2.13.0.dev20230406-cp311-cp311-macosx_12_0_arm64.whl (2.1 kB)
Relevant log output: No response.
tensorflow/tensorflow | WSL2: fit function not working, TensorFlow 2.12.0 | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: No. Source: binary. TensorFlow version: v2.12.0-rc1-12-g0db597d0d75 2.12.0. Custom code: No. OS platform and distribution: WSL2 Ubuntu, Windows 10 19044 64-bit. Mobile device: No response. Python version: 3.10.9 / 3.10.10. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: 11.8.0 / 8.6.0.163. GPU model and memory: 1050 Ti, 4 GB.
Current behaviour: Installed everything following the instructions (Windows, WSL2). Python sees my GPU, but the model.fit function does not work.
Standalone code to reproduce the issue:
    import tensorflow as tf
    from tensorflow.python.client import device_lib
    import numpy as np

    print(tf.__version__)
    print(f"Tensor Flow Version: {tf.__version__}")
    print(f"Keras Version: {tf.keras.__version__}")
    gpu = len(tf.config.list_physical_devices('GPU')) > 0
    print("GPU is", "available" if gpu else "NOT AVAILABLE")

    samples = np.array([[u'…', 0], [u'…', 1], [u'…', 0]])   # Russian text strings elided
    tests = np.array([[u'…', 0], [u'…', 1], [u'…', 0], [u'…', 0]])

    train_texts, train_labels, test_texts, test_labels = [], [], [], []
    for sample in samples:
        train_texts.append(sample[0])
        train_labels.append(float(sample[1]))
    for tst in tests:
        test_texts.append(tst[0])
        test_labels.append(float(tst[1]))

    dataset_train, dataset_test = 0, 0
    dataset_train = tf.data.Dataset.from_tensor_slices((train_texts, train_labels))
    dataset_test = tf.data.Dataset.from_tensor_slices((test_texts, test_labels))
    train_dataset, test_dataset = dataset_train, dataset_test
    for text, label in train_dataset.take(2):
        print(text)

    BUFFER_SIZE = 10000
    BATCH_SIZE = 128
    train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
    test_dataset = test_dataset.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)

    VOCAB_SIZE = 20000
    encoder = tf.keras.layers.TextVectorization(standardize='lower', max_tokens=VOCAB_SIZE, encoding='utf-8')
    encoder.adapt(train_dataset.map(lambda text, label: text))

    model = tf.keras.Sequential([
        encoder,
        tf.keras.layers.Embedding(input_dim=len(encoder.get_vocabulary()), output_dim=64,
                                  # Use masking to handle the variable sequence lengths
                                  mask_zero=True),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1)
    ])

    model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  optimizer=tf.keras.optimizers.Adam(1e-4),
                  metrics=['accuracy'])

    history = model.fit(train_dataset, epochs=250, validation_data=test_dataset)

    sample_text = '…'
    predictions = model.predict(np.array([sample_text]))
    print(predictions)
Relevant log output:
    2023-04-06 12:06:58.601576: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2023-04-06 12:07:00.057818: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
    2.12.0
    Tensor Flow Version: 2.12.0
    Keras Version: 2.12.0
    2023-04-06 12:07:01.276945: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:01.462465: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:01.462581: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    GPU is available
    2023-04-06 12:07:01.477422: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:01.477546: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:01.477680: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:06.851452: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:06.852698: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:06.852765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1722] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
    2023-04-06 12:07:06.852885: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support.
    2023-04-06 12:07:06.873685: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 2519 MB memory: device: 0, name: NVIDIA GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1
    2023-04-06 12:07:17.597288: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [92] [[{{node Placeholder/_1}}]]
    tf.Tensor(b'\xd0\xa0\xd0\xbe\xd1\x81\xd1\x81\xd0\xb8\xd1\x8f', shape=(), dtype=string)
    tf.Tensor(b'\xd0\x92\xd1\x87\xd0\xb5\xd1\x80\xd0\xb0 \xd1\x81\xd0\xbc\xd0\xbe\xd1\x82\xd1\x80\xd0\xb5\xd0\xbb \xd0\xb2 \xd0\xba\xd0\xb8\xd0\xbd\xd0\xbe \xd0\xbf\xd0\xbe\xd1\x82\xd1\x80\xd1\x8f\xd1\x81\xd0\xb0\xd1\x8e\xd1\x89\xd0\xb8\xd0\xb9 \xd1\x84\xd0\xb8\xd0\xbb\xd1\x8c\xd0\xbc \xd0\x90\xd0\xba\xd1\x82\xd1\x91\xd1\x80\xd1\x8b \xd0\xb2\xd1\x8b\xd1\x81\xd1\x88\xd0\xb8\xd0\xb5 \xd0\xbd\xd0\xb5\xd0\xb2\xd0\xb5\xd1\x80\xd0\xbe\xd1\x8f\xd1\x82\xd0\xbd\xd1\x8b\xd0\xb5 \xd0\xb4\xd0\xb5\xd0\xba\xd0\xbe\xd1\x80\xd0\xb0\xd1\x86\xd0\xb8\xd0\xb8 \xd0\xb1\xd0\xb5\xd0\xb7\xd1\x83\xd0\xb4\xd0\xb5\xd1\x80\xd0\xb6\xd0\xbd\xd1\x8b\xd0\xb9 \xd0\xb4\xd1\x80\xd0\xb0\xd0\xb9\xd0\xb2 \xd0\xbd\xd0\xb0 \xd0\xbf\xd1\x80\xd0\xbe\xd1\x82\xd1\x8f\xd0\xb6\xd0\xb5\xd0\xbd\xd0\xb8\xd0\xb8 \xd0\xb2\xd1\x81\xd0\xb5\xd0\xb3\xd0\xbe \xd1\x84\xd0\xb8\xd0\xbb\xd1\x8c\xd0\xbc\xd0\xb0 \xd0\x94\xd0\xb0\xd0\xb2\xd0\xbd\xd0\xbe \xd0\xbd\xd0\xb5 \xd0\xb8\xd1\x81\xd0\xbf\xd1\x8b\xd1\x82\xd1\x8b\xd0\xb2\xd0\xb0\xd0\xbb \xd1\x82\xd0\xb0\xd0\xba\xd0\xbe\xd0\xb3\xd0\xbe \xd0\xb2\xd0\xbe\xd1\x81\xd1\x82\xd0\xbe\xd1\x80\xd0\xb3\xd0\xb0 \xd0\xbe\xd1\x82 \xd0\xbf\xd1\x80\xd0\xbe\xd1\x81\xd0\xbc\xd0\xbe\xd1\x82\xd1\x80\xd0\xb0 10 10', shape=(), dtype=string)
    2023-04-06 12:07:18.366287: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [92] [[{{node Placeholder/_1}}]]
    2023-04-06 12:07:18.366750: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype string and shape [92] [[{{node Placeholder/_0}}]]
    2023-04-06 12:07:23.301274: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [92] [[{{node Placeholder/_1}}]]
    2023-04-06 12:07:23.301919: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype string and shape [92] [[{{node Placeholder/_0}}]]
    Epoch 1/250
    2023-04-06 12:07:26.986857: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/ReverseV2_grad/ReverseV2/ReverseV2/axis' with dtype int32 and shape [1] [[{{node gradients/ReverseV2_grad/ReverseV2/ReverseV2/axis}}]]
    2023-04-06 12:07:30.238269: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/ReverseV2_grad/ReverseV2/ReverseV2/axis' with dtype int32 and shape [1] [[{{node gradients/ReverseV2_grad/ReverseV2/ReverseV2/axis}}]]
    2023-04-06 12:07:31.901964: W tensorflow/core/common_runtime/type_inference.cc:339] Type inference failed. This indicates an invalid graph that escaped type checking. Error message: INVALID_ARGUMENT: expected compatible input types, but input 1: type_id: TFT_OPTIONAL args { type_id: TFT_PRODUCT args { type_id: TFT_TENSOR args { type_id: TFT_INT32 } } } is neither a subtype nor a supertype of the combined inputs preceding it: type_id: TFT_OPTIONAL args { type_id: TFT_PRODUCT args { type_id: TFT_TENSOR args { type_id: TFT_FLOAT } } } while inferring type of node 'cond_40/output/_23'
    2023-04-06 12:07:38.520941: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8600
    2023-04-06 12:07:39.881521: I tensorflow/compiler/xla/service/service.cc:169] XLA service 0x1da1c010 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2023-04-06 12:07:39.881632: I tensorflow/compiler/xla/service/service.cc:177]   StreamExecutor device (0): NVIDIA GeForce GTX 1050 Ti, Compute Capability 6.1
    2023-04-06 12:07:39
994222 I tensorflow compiler mlir tensorflow util dump mlir util cc 269 disable mlir crash reproducer set env var mlir crash reproducer directory to enable 2023 04 06 12 07 40 402391 w tensorflow compiler xla service gpu llvm gpu backend gpu backend lib cc 530 can t find libdevice directory cuda dir nvvm libdevice this may result in compilation or runtime failure if the program we try to run use routine from libdevice search for cuda in the follow directory cuda sdk lib usr local cuda 11 8 usr local cuda you can choose the search directory by set xla gpu cuda datum dir in hlomodule s debugoption for most app set the environment variable xla flag xla gpu cuda data dir path to cuda will work 2023 04 06 12 07 40 405914 w tensorflow compiler xla service gpu llvm gpu backend gpu backend lib cc 274 libdevice be require by this hlo module but be not find at libdevice 10 bc 2023 04 06 12 07 40 408337 w tensorflow core framework op kernel cc 1830 op require fail at xla ops cc 362 internal libdevice not find at libdevice 10 bc 2023 04 06 12 07 40 408428 I tensorflow core common runtime executor cc 1197 job localhost replica 0 task 0 device gpu 0 debug info executor start abort this do not indicate an error and you can ignore this message internal libdevice not find at libdevice 10 bc node statefulpartitionedcall 10 2023 04 06 12 07 40 444717 w tensorflow compiler xla service gpu llvm gpu backend gpu backend lib cc 274 libdevice be require by this hlo module but be not find at libdevice 10 bc 2023 04 06 12 07 40 445387 w tensorflow core framework op kernel cc 1830 op require fail at xla ops cc 362 internal libdevice not find at libdevice 10 bc 2023 04 06 12 07 40 475982 w tensorflow compiler xla service gpu llvm gpu backend gpu backend lib cc 274 libdevice be require by this hlo module but be not find at libdevice 10 bc 2023 04 06 12 07 40 476520 w tensorflow core framework op kernel cc 1830 op require fail at xla ops cc 362 internal libdevice not find at libdevice 10 bc 2023 
04 06 12 07 40 509245 w tensorflow compiler xla service gpu llvm gpu backend gpu backend lib cc 274 libdevice be require by this hlo module but be not find at libdevice 10 bc 2023 04 06 12 07 40 509756 w tensorflow core framework op kernel cc 1830 op require fail at xla ops cc 362 internal libdevice not find at libdevice 10 bc traceback most recent call last file home yatebyaeby test py line 168 in history model fit train dataset epoch 250 file home yatebyaeby miniconda3 lib python3 10 site package keras util traceback util py line 70 in error handler raise e with traceback filter tb from none file home yatebyaeby miniconda3 lib python3 10 site package tensorflow python eager execute py line 52 in quick execute tensor pywrap tfe tfe py execute ctx handle device name op name tensorflow python framework error impl internalerror graph execution error detect at node statefulpartitionedcall 10 define at most recent call last file home yatebyaeby test py line 168 in history model fit train dataset epoch 250 file home yatebyaeby miniconda3 lib python3 10 site package keras util traceback util py line 65 in error handler return fn args kwargs file home yatebyaeby miniconda3 lib python3 10 site package kera engine training py line 1685 in fit tmp log self train function iterator file home yatebyaeby miniconda3 lib python3 10 site package kera engine training py line 1284 in train function return step function self iterator file home yatebyaeby miniconda3 lib python3 10 site package kera engine training py line 1268 in step function output model distribute strategy run run step args datum file home yatebyaeby miniconda3 lib python3 10 site package kera engine training py line 1249 in run step output model train step datum file home yatebyaeby miniconda3 lib python3 10 site package kera engine training py line 1054 in train step self optimizer minimize loss self trainable variable tape tape file home yatebyaeby miniconda3 lib python3 10 site package keras optimizers optimizer 
py line 543 in minimize self apply gradient grad and var file home yatebyaeby miniconda3 lib python3 10 site package keras optimizers optimizer py line 1174 in apply gradient return super apply gradient grad and var name name file home yatebyaeby miniconda3 lib python3 10 site package keras optimizers optimizer py line 650 in apply gradient iteration self internal apply gradient grad and var file home yatebyaeby miniconda3 lib python3 10 site package keras optimizers optimizer py line 1200 in internal apply gradient return tf internal distribute interim maybe merge call file home yatebyaeby miniconda3 lib python3 10 site package keras optimizers optimizer py line 1250 in distribute apply gradient fn distribution extend update file home yatebyaeby miniconda3 lib python3 10 site package keras optimizers optimizer py line 1245 in apply grad to update var return self update step xla grad var i d self var key var node statefulpartitionedcall 10 libdevice not find at libdevice 10 bc node statefulpartitionedcall 10 op inference train function 14740 |
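The libdevice warnings in the log above spell out their own documented workaround: point XLA at the CUDA installation via `XLA_FLAGS` before any XLA compilation happens. A minimal sketch (the CUDA path is an assumption taken from the log's own search list; substitute the actual install directory on your machine):

```python
import os

# Set the flag the log itself recommends ("setting the environment variable
# XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work").
# "/usr/local/cuda-11.8" is an assumed path from the directories the log searched.
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/usr/local/cuda-11.8"

# This must be set before TensorFlow compiles any XLA kernels (ideally before
# `import tensorflow`), otherwise the jit-compiled optimizer step above will
# still fail to find libdevice.10.bc.
```

Alternatively, export the variable in the shell before launching the training script.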
tensorflowtensorflow | output tensor data pointer is null before calling Invoke | Bug | click to expand issue type: bug; have you reproduced the bug with tf-nightly: yes; source: source; tensorflow version: tf 2.10; custom code: yes; os platform and distribution: Ubuntu 20.04.4 LTS; mobile device: no response; python version: Python 3.8.10; bazel version: bazel 6.1.1; gcc compiler version: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: Hello, I ran into the following issue: I try to retrieve the data pointer from the output tensor of my TensorFlow Lite model before running Invoke(), and it is null. Is this the expected behavior? I would have assumed that after calling AllocateTensors() the tensor could be used. standalone code to reproduce the issue: I run the following code:

class ModelTFLite {
 public:
  ModelTFLite(const std::string& path);
  ~ModelTFLite() = default;
  bool Invoke();

 private:
  std::unique_ptr<tflite::FlatBufferModel> model;
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
};

ModelTFLite::ModelTFLite(const std::string& path) {
  model = tflite::FlatBufferModel::BuildFromFile(path.c_str());
  assert(model != nullptr);
  tflite::InterpreterBuilder builder(*model, resolver);
  builder(&interpreter);
  assert(interpreter != nullptr);
}

bool ModelTFLite::Invoke() {
  TfLiteStatus status = interpreter->AllocateTensors();
  void* input_tensor = interpreter->input_tensor(0)->data.data;
  void* output_tensor = interpreter->output_tensor(0)->data.data;
  std::cout << "before invoke: inputTensor: " << input_tensor
            << " outputTensor: " << output_tensor << "\n";
  status = interpreter->Invoke();
  input_tensor = interpreter->input_tensor(0)->data.data;
  output_tensor = interpreter->output_tensor(0)->data.data;
  std::cout << "after invoke: inputTensor: " << input_tensor
            << " outputTensor: " << output_tensor << "\n";
  if (status != kTfLiteOk) {
    std::cout << "failed to run invoke: " << status << "\n";
    return false;
  }
  return true;
}

// main
int main(int argc, char* argv[]) {
  if (argc != 2) {
    fprintf(stderr, "testmodel\n");
    return 1;
  }
  const char* filename = argv[1];
  ModelTFLite m(filename);
  m.Invoke();
  return 0;
}

relevant log output:
before invoke: inputTensor: 0x56253a528800 outputTensor: 0
after invoke: inputTensor: 0x56253ab84d00 outputTensor: 0x56253ab84e00 |
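The behavior in the log above (a null output pointer after AllocateTensors() that only becomes valid after Invoke()) is what one would see when a tensor's storage is materialized lazily, for instance for dynamic tensors. The toy class below is a conceptual Python sketch of that allocation pattern, not the actual TFLite implementation; the practical takeaway is to read output data pointers after Invoke(), not before:

```python
# Conceptual sketch (NOT the real TFLite interpreter): an interpreter whose
# output buffer only appears during invoke(), mirroring the log above where
# outputTensor is 0 after AllocateTensors() but non-null after Invoke().
class ToyInterpreter:
    def __init__(self):
        self.input_buffer = [0.0] * 4   # static input: allocated up front
        self.output_buffer = None       # deferred: not allocated yet

    def allocate_tensors(self):
        # Only statically-sized tensors get storage here; the output stays
        # unallocated because its size is only known once the graph has run.
        return self.input_buffer is not None

    def invoke(self):
        # The output buffer is materialized only now.
        self.output_buffer = [x * 2.0 for x in self.input_buffer]
        return True

interp = ToyInterpreter()
interp.allocate_tensors()
print(interp.output_buffer)  # None: mirrors "outputTensor: 0" before Invoke
interp.invoke()
print(interp.output_buffer)  # a real buffer, like the pointer after Invoke
```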
tensorflowtensorflow | problem in the installation documentation: wrong LD_LIBRARY_PATH | Bug | click to expand issue type: documentation bug; have you reproduced the bug with tf-nightly: no; source: binary; tensorflow version: tf v2.12; custom code: no; os platform and distribution: no response; mobile device: no response; python version: no response; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: Bug:

mkdir -p $CONDA_PREFIX/etc/conda/activate.d
CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

Symptom: error in the created $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh. Cause: LD_LIBRARY_PATH cannot be set properly because single quotes are used instead of double quotes, so $CUDNN_PATH is written literally rather than expanded. Solution:

CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib" > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

standalone code to reproduce the issue:

mkdir -p $CONDA_PREFIX/etc/conda/activate.d
CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

relevant log output: no response |
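The root cause described above is standard shell quoting: inside single quotes, $CUDNN_PATH is written out literally, while inside double quotes it is expanded at write time. A small stdlib sketch demonstrating the difference (assumes bash is available; /opt/cudnn is a made-up path for illustration):

```python
# Demonstrates the quoting bug described in the issue: with single quotes the
# shell writes the literal string "$CUDNN_PATH" into env_vars.sh, so the
# variable is undefined when the activation script later runs; with double
# quotes the value is expanded at write time.
import subprocess

def shell(cmd):
    return subprocess.run(["bash", "-c", cmd],
                          capture_output=True, text=True).stdout.strip()

single = shell("CUDNN_PATH=/opt/cudnn; echo 'export LD_LIBRARY_PATH=$CUDNN_PATH/lib'")
double = shell('CUDNN_PATH=/opt/cudnn; echo "export LD_LIBRARY_PATH=$CUDNN_PATH/lib"')

print(single)  # export LD_LIBRARY_PATH=$CUDNN_PATH/lib   (unexpanded -> broken)
print(double)  # export LD_LIBRARY_PATH=/opt/cudnn/lib    (expanded -> works)
```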
tensorflowtensorflow | null pointer dereference in lmhlo_to_cpu_runtime.cc | Bug | click to expand issue type: bug; have you reproduced the bug with tf-nightly: yes; source: source; tensorflow version: tf 2.12; custom code: no; os platform and distribution: no response; mobile device: no response; python version: no response; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: The variable `dict` may be nullptr and is dereferenced on line 149 of tensorflow/compiler/xla/mlir/backends/cpu/transforms/lmhlo_to_cpu_runtime.cc. `dict` is initialized on line 146 and may equal nullptr; it is then dereferenced on line 149. standalone code to reproduce the issue: the bug was found by the Svace static analysis tool. relevant log output: no response |
tensorflowtensorflow | null pointer dereference in conv_ops_fused_int8.cc | Bug | click to expand issue type: bug; have you reproduced the bug with tf-nightly: yes; source: source; tensorflow version: tf 2.12; custom code: no; os platform and distribution: no response; mobile device: no response; python version: no response; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: The pointer `side_input_ptr` is dereferenced and passed as the first argument of a call to std::fmaf in tensorflow/core/kernels/conv_ops_fused_int8.cc. If we are at the 0th iteration of the for loop (line 210) and `side_input_base == nullptr`, then `col == 0` and `side_input_ptr` will also equal nullptr (line 265). After the assignment to `side_input_ptr`, this pointer is dereferenced on line 269. standalone code to reproduce the issue: the bug was found by the Svace static analysis tool. relevant log output: no response |
tensorflowtensorflow | cannot pip install tensorflow-examples | Bug | click to expand issue type: bug; have you reproduced the bug with tf-nightly: yes; source: source; tensorflow version: 2.12.0; custom code: yes; os platform and distribution: no response; mobile device: no response; python version: no response; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: I was trying the TensorFlow segmentation example from here on Google Colab, and I get an error in the first cell with pip install. standalone code to reproduce the issue: (from the Jupyter notebook) pip install git relevant log output:

Looking in indexes: Collecting git Cloning to /tmp/pip-req-build-xe1kper2
Running command git clone --filter=blob:none --quiet /tmp/pip-req-build-xe1kper2
Resolved to commit 5bc9f1ed519146242db5e71f00d9d39d52a308c8
error: subprocess-exited-with-error
python setup.py egg_info did not run successfully. exit code: 1. See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (setup.py): error
error: metadata-generation-failed
Encountered error while generating package metadata. See above for output.
note: This is an issue with the package mentioned above, not pip.

Output of the git version and TF version:
import tensorflow as tf
print(tf.version.GIT_VERSION, tf.version.VERSION)
v2.12.0-rc1-12-g0db597d0d75 2.12.0 |
tensorflowtensorflow | keras model with sparse input fails to process symbolic sparse input after being saved and loaded again | Bug | click to expand issue type: bug; have you reproduced the bug with tf-nightly: yes; source: binary; tensorflow version: tf 2.4; custom code: yes; os platform and distribution: CentOS; mobile device: no response; python version: 3.7; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: As shown in the test code, we followed the official Google wiki to construct a Keras model with sparse input and validated that it can run inference on a SparseTensor. Then we saved this model in SavedModel format and reloaded it, and we expected the loaded KerasLayer to be stitchable into a new Keras model for finetuning. However, we get an error when trying to run the KerasLayer with a symbolic sparse input: TypeError: signature_wrapper(args_0_1, args_0_2, args_0) missing required arguments: args_0, args_0_1, args_0_2. This error occurs in all TF versions from 2.4 to 2.11. standalone code to reproduce the issue:

import tensorflow as tf
import tensorflow_hub as hub

# try Google's official code to construct a model (tf.keras)
x = tf.keras.Input(shape=(1000,), sparse=True)
y = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(x, y)

sparse_data = tf.sparse.SparseTensor(
    indices=[[0, 0], [0, 1], [0, 2], [4, 3], [5, 0], [5, 1]],
    values=[1, 1, 1, 1, 1, 1],
    dense_shape=(6, 1000))
print(sparse_data)
print(model.predict(sparse_data))

model_dir = "./test_model"
model.save(model_dir)
model_1 = hub.KerasLayer(model_dir, trainable=False, signature="serving_default")
inputs = tf.keras.Input(shape=(1000,), sparse=True)
emb = model_1(inputs)

relevant log output: 2023 04 03 11 56 06 381971 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2023 04 03 11 56 09 089355 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2023 04 03 11 56 09 090767 I tensorflow stream executor platform default dso loader cc 49 successfully
open dynamic library libcuda so 1 2023 04 03 11 56 09 100227 I tensorflow core common runtime gpu gpu device cc 1714 find device 0 with property pcibusid 0001 00 00 0 name tesla m60 computecapability 5 2 coreclock 1 1775ghz corecount 16 devicememorysize 7 94gib devicememorybandwidth 149 31gib s 2023 04 03 11 56 09 100265 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2023 04 03 11 56 09 106741 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2023 04 03 11 56 09 106802 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2023 04 03 11 56 09 110060 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2023 04 03 11 56 09 110473 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2023 04 03 11 56 09 113208 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2023 04 03 11 56 09 114508 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2023 04 03 11 56 09 114656 w tensorflow stream executor platform default dso loader cc 60 could not load dynamic library libcudnn so 8 dlerror libcudnn so 8 can not open share object file no such file or directory ld library path export app xtool oracle instantclient 2023 04 03 11 56 09 114678 w tensorflow core common runtime gpu gpu device cc 1751 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device 2023 04 03 11 56 09 115066 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi 
deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 04 03 11 56 09 116084 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2023 04 03 11 56 09 116154 I tensorflow core common runtime gpu gpu device cc 1255 device interconnect streamexecutor with strength 1 edge matrix 2023 04 03 11 56 09 116169 I tensorflow core common runtime gpu gpu device cc 1261 sparsetensor indice tf tensor 0 0 0 1 0 2 4 3 5 0 5 1 shape 6 2 dtype int64 value tf tensor 1 1 1 1 1 1 shape 6 dtype int32 dense shape tf tensor 6 1000 shape 2 dtype int64 2023 04 03 11 56 09 209265 I tensorflow compiler mlir mlir graph optimization pass cc 116 none of the mlir optimization pass be enable register 2 2023 04 03 11 56 09 209956 I tensorflow core platform profile util cpu util cc 112 cpu frequency 2596985000 hz 0 01467296 0 01330906 0 08810526 0 08027062 0 0 0 0 0 0 0 0 0 0 0 0 0 04658424 0 04584154 0 06923811 0 01754887 0 09175565 0 05856525 0 04386244 0 03397611 2023 04 03 11 56 09 322841 w tensorflow python util util cc 348 set be not currently consider sequence but this may change in the future so consider avoid use they traceback most recent call last file test py line 30 in emb video nhfc model input file datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python keras engine base layer py line 952 in call input list file datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python keras engine base layer py line 1091 in functional construction call input input mask args kwargs file datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python keras engine base layer py line 822 in keras tensor symbolic call return self infer 
output signature input args kwargs input mask file datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python keras engine base layer py line 863 in infer output signature output call fn input args kwargs file datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python autograph impl api py line 670 in wrapper raise e ag error metadata to exception e typeerror in user code datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow hub keras layer py 237 call result smart cond smart cond training datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python eager function py 1669 call return self call impl args kwargs datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python eager function py 1685 call impl raise structured err datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python eager function py 1679 call impl cancellation manager datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python eager function py 1756 call with structured signature self structured signature check miss args args kwargs datum src mmsearch modeling build model pcv2 environment development venv lib python3 7 site package tensorflow python eager function py 1780 structure signature check miss args join sort miss argument typeerror signature wrapper args 0 1 args 0 2 args 0 miss require argument args 0 args 0 1 args 0 2 |
tensorflowtensorflow | please update/fix the tutorial on how to automatically add the CUDA/cuDNN path to the environment when activating a conda environment | Bug | click to expand issue type: documentation bug; have you reproduced the bug with tf-nightly: no; source: source; tensorflow version: tf 2.12; custom code: no; os platform and distribution: no response; mobile device: Linux Ubuntu 22.04.2; python version: 3.8; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: Hi guys, first, thanks for the support of TF 2.x and CUDA. I think there is a typo in the tutorial which results in CUDA not being found by TF. In the main tutorial (Linux setup, accessed 2023-03-31) you mention: "For your convenience, it is recommended that you automate it with the following commands. The system paths will be automatically configured when you activate this conda environment." And the code given is:

mkdir -p $CONDA_PREFIX/etc/conda/activate.d
CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

This is somewhat problematic, as it only puts export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib inside the file env_vars.sh, and when I activate the conda environment the CUDA path is not automatically loaded, simply because CUDNN_PATH is not defined there. This then results in "GPU not found", "no GPU", "CUDA cannot be loaded", and so on. Fixing the issue: I believe the fix should be (it may not be universally correct, but it works for me; please help check):

mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

In this case the file env_vars.sh contains:

CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib

Now, finally, every time I activate the conda environment I no longer need to manually set up the environment path for CUDA. I know this is somewhat fundamental, but it could be misleading to higher-level developers, and it can cause that "CUDA cannot find GPU" error without much detailed info, so please consider fixing this in the tutorial webpage (Linux setup). Plus another issue: one needs to update conda before installing CUDA, but if one is using environment modules, where Anaconda is not installed in the user's own environment, then one needs to be root (or another account with write access) to update conda for that specific Anaconda installation. Specifically, say Anaconda is installed in /home/software/globalmodule/apps/binapps/anaconda3/2020.07; if we activate a conda environment and run conda upgrade -n base conda, it returns a "no permission" error, as the software is centrally distributed and users have no write access to the globally installed packages. Imagine many users sharing an HPC system: one has to be the root user (or another user with write access to the folder where Anaconda is installed) to update conda for this version. However, this is not an issue of TensorFlow, of course, as it only happens when using environment modules; I mention it here just in case anyone fails to update conda before installing CUDA, because if the conda update fails, the CUDA install then fails for no apparent reason. Thanks. standalone code to reproduce the issue: note that I have environment modules installed, so this may not happen when only a single, universal conda is installed.

# connect to the server
ssh -X -p 3060
# environment modules: load anaconda
module load apps/binapps/anaconda3/2020.07
# activate (assume the venv "venvpy3.8" was created following the tutorial from the Linux setup page)
conda activate venvpy3.8
# try installing CUDA in the virtual environment as suggested
conda install -c conda-forge cudatoolkit=11.8.0
pip install nvidia-cudnn-cu11==8.6.0.163
# specify the environment path -- supposed to make life easier, but in fact not
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
pip install tensorflow==2.12.*
# verify CPU setup: returns a tensor, but note that no GPU is found
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
# -> no GPU found etc.
# verify GPU setup: should return the physical GPUs if successful
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# -> no GPU found etc.
# the fix: try this instead of the environment-path step above
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

relevant log output: Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at for how to download and setup the required libraries for your platform. Skipping registering GPU devices |
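The reporter's fix boils down to what ends up inside env_vars.sh: CUDNN_PATH must be defined in the script itself so that it exists at activation time, and the LD_LIBRARY_PATH line must be written unexpanded. A stdlib sketch of writing such a script (a temp directory stands in for $CONDA_PREFIX/etc/conda/activate.d; this is an illustration, not the official installer):

```python
# Sketch of the reporter's fix in pure Python: write env_vars.sh so that
# CUDNN_PATH is *defined inside the script* and the $(...) / $VAR text is
# evaluated by the shell each time the environment is activated, not at
# write time. The directory is a stand-in for the real activate.d path.
import os
import tempfile

activate_d = tempfile.mkdtemp()  # stands in for $CONDA_PREFIX/etc/conda/activate.d
script = os.path.join(activate_d, "env_vars.sh")

with open(script, "w") as f:
    # No expansion happens here: the dollar signs are written literally.
    f.write('CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;'
            'print(nvidia.cudnn.__file__)"))\n')
    f.write('export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:'
            '$CONDA_PREFIX/lib/:$CUDNN_PATH/lib\n')

print(open(script).read())
```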
tensorflowtensorflow | tf-trt qat explicit precision: Assertion outScales.size() == 1 failed when converting a QAT TF model | Bug | click to expand issue type: bug; have you reproduced the bug with tf-nightly: no; source: source; tensorflow version: 2.9; custom code: yes; os platform and distribution: Linux Ubuntu 20.04; mobile device: no response; python version: 3.8; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: I am currently working through supporting TF-TRT with explicit quantize/dequantize nodes and am at the final stage of making it work. Here is a summary of the fixes I landed: implemented an explicit converter for FakeQuantWithMinMaxVars (Keras API); supported Conv2D with a tensor input in explicit conversion; transposed the 2nd input to KCRS format to be compatible with the TRT conv layer. The conversion actually runs through, but I get an error when building the engine almost immediately: Internal Error (Assertion outScales.size() == 1 failed.). standalone code to reproduce the issue: I am building the code from a custom TF-TRT branch off of r2.9 with minimal non-breaking changes. I can clean up my changes and open a PR against r2.9 so people can understand the changes better. relevant log output:

W20230329 16:29:04.094142 99658 quantization_ops.cc:309 FakeQuantWithMinMaxVars has narrow_range=true, but for TensorRT conversion narrow_range=false is recommended.
W20230329 16:29:04.138859 99658 quantization_ops.cc:309 FakeQuantWithMinMaxVars has narrow_range=true, but for TensorRT conversion narrow_range=false is recommended.
E20230329 16:29:05.544306 99658 convert_nodes.cc:7274 Using explicit precision: 1
E20230329 16:29:05.544360 99658 convert_nodes.cc:7280 Building CUDA engine
E20230329 16:29:05.545313 99658 convert_nodes.cc:1287 Setting TensorRT network name to TF:2.9.1, TRT:8.5.1, Precision:INT8, Calibration:0, Max-Batch-Size:1, Max-Workspace-Size:4294967296
E20230329 16:29:12.417982 99658 trt_logger.cc:40 DefaultLogger 2: [pointWiseV2Builders.cpp::toExpr::279] Error Code 2: Internal Error (Assertion outScales.size() == 1 failed.)
W20230329 16:29:12.464977 99658 trt_engine_op.cc:1047 TF-TRT Warning: Engine creation for TRTEngineOp_000_000 failed. The native segment will be used instead. Reason: INTERNAL: Failed to build TensorRT engine
W20230329 16:29:12.465102 99658 trt_engine_op.cc:888 TF-TRT Warning: Engine retrieval for input shapes 1 7 12 1 7 15 1 7 12 1 6 5 550400 7 3 1208 1920 1 15 1 1 failed. Running native segment for TRTEngineOp_000_000

Follow-up from this issue; it can possibly help solve both. |
tensorflowtensorflow | test | Bug | click to expand issue type bug have you reproduce the bug with tf nightly yes source source tensorflow version tf2 8 custom code yes os platform and distribution no response mobile device mate python version 3 10 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen standalone code to reproduce the issue shell test relevant log output shell test |
tensorflow/tensorflow | weighted_metrics argument for compile doesn't account for weights | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: yes. Source: source. TensorFlow version: 2.12.0.dev20221213. Custom code: yes. OS platform and distribution: no response. Mobile device: no response. Python version: 3.7.6. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.

Current behaviour: when using `weighted_metrics`, the value that would be expected with no weighting is seen. I expected that when I use `sample_weight` in `fit()` and pass the same function to both `metrics` and `weighted_metrics` in `compile()`, the scores would differ, but they are the same.

Standalone code to reproduce the issue:

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# add channel axis
x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]

# subsample
train_idx = np.random.choice(len(x_train), 1000, replace=False)
test_idx = np.random.choice(len(x_test), 1000, replace=False)
x_train = x_train[train_idx]
x_test = x_test[test_idx]

# convert to 1-hot
y_train = tf.one_hot(y_train[train_idx], 10)
y_test = tf.one_hot(y_test[test_idx], 10)

# define the model
def get_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Input((28, 28, 1)),
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    return model

# first, no weighting
# initialize model
model = get_model()

# compile model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['categorical_crossentropy'],
              weighted_metrics=['categorical_crossentropy'])

# train the model
history = model.fit(x_train, y_train,
                    validation_data=(x_train, y_train),
                    epochs=5, steps_per_epoch=5, verbose=0)
h = history.history
print('No weighting:')
print('final loss', h['loss'][-1])
print('final categorical_crossentropy', h['categorical_crossentropy'][-1])
print('final weighted categorical_crossentropy',
      h['weighted_categorical_crossentropy'][-1])

# add weighting; categorical_crossentropy should now differ from
# weighted_categorical_crossentropy

# define sample weights
sample_weight = np.ones(len(x_train)) * 1.2
val_sample_weight = np.ones(len(x_test)) * 1.2

# initialize model
model = get_model()

# compile model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['categorical_crossentropy'],
              weighted_metrics=['categorical_crossentropy'])

# train the model
history = model.fit(x_train, y_train, sample_weight=sample_weight,
                    validation_data=(x_train, y_train, val_sample_weight),
                    epochs=5, steps_per_epoch=5, verbose=0)
h = history.history
print('\nWith weighting:')
print('final loss', h['loss'][-1])
print('final categorical_crossentropy', h['categorical_crossentropy'][-1])
print('final weighted categorical_crossentropy',
      h['weighted_categorical_crossentropy'][-1])
```

Relevant log output:

```shell
No weighting:
final loss 2.071871042251587
final categorical_crossentropy 2.071871042251587
final weighted_categorical_crossentropy 2.071871042251587

With weighting:
final loss 3.7742838859558105
final categorical_crossentropy 1.8871419429779053
final weighted_categorical_crossentropy 1.8871419429779053
```
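Not part of the report above: a minimal NumPy sketch (standalone, no Keras, with hypothetical values and helper name) of the distinction the reporter expects. Keras-style weighted metrics average per-sample values using the sample weights, so non-uniform weights should shift the metric away from the plain mean; note that a uniform weight such as 1.2 cancels out, because the weighted mean divides by the sum of the weights:

```python
import numpy as np

# Hypothetical per-sample crossentropy values for three samples.
per_sample_ce = np.array([0.5, 1.0, 2.0])

def weighted_metric(values, sample_weight):
    """Keras-style weighted average: sum(w_i * v_i) / sum(w_i)."""
    return float((values * sample_weight).sum() / sample_weight.sum())

# A uniform weight cancels out: the result equals the plain mean.
uniform = np.full(3, 1.2)
print(weighted_metric(per_sample_ce, uniform))

# Non-uniform weights pull the metric toward the heavily weighted sample.
nonuniform = np.array([1.0, 1.0, 4.0])
print(weighted_metric(per_sample_ce, nonuniform))
```

This suggests that with the uniform `sample_weight = ones * 1.2` used in the repro, identical metric values are arithmetically expected; only non-uniform weights would expose a real difference between `metrics` and `weighted_metrics`.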
tensorflow/tensorflow | Issue in tf-docs and its tools with the dataclasses module in Python 3.6 and earlier | Bug | Currently the TensorFlow documentation tools require the dataclasses module. However, the dataclasses module is not included in the standard library in Python 3.6 and earlier, which can cause compatibility issues for users of those versions. L36-L38:

```python
# dataclasses is built in from py 3.7. This version is a backport for py 3.6.
if (sys.version_info.major, sys.version_info.minor) == (3, 6):
    REQUIRED_PKGS.append('dataclasses')
```

But in the nbfmt tool, the notebook_utils.py file, L119-L124:

```python
@dataclasses.dataclass
class CellCopyStats:
  processed_cells: int = 0
  updated_cells: int = 0
  unmatched_target_cells: list[str] = dataclasses.field(default_factory=list)
  unmatched_source_cells: list[str] = dataclasses.field(default_factory=list)
  out_of_order_target_cells: list[str] = dataclasses.field(default_factory=list)
```

Here `list` is used instead of `typing.List`, which will cause "TypeError: 'type' object is not subscriptable". I think `typing.List` should be used to ensure backward compatibility. To avoid these issues, the documentation tools should: 1. Use the `typing.List` class instead of `list` to ensure backward compatibility with older versions of Python. 2. Require the installation of the dataclasses package for users of Python 3.6 and earlier, instead of requiring it for Python 3.6 only, like the following:

```python
import sys
if sys.version_info < (3, 7):
    REQUIRED_PKGS.append('dataclasses')
```

I'm ready to start fixing it in a PR.
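Not part of the report: a minimal, self-contained sketch of the compatible spelling the issue proposes, using `typing.List` as the subscriptable annotation (the built-in `list[str]` only became subscriptable in Python 3.9 via PEP 585, whereas `typing.List[str]` works on 3.7+, and on 3.6 with the dataclasses backport):

```python
import dataclasses
from typing import List

@dataclasses.dataclass
class CellCopyStats:
    # typing.List[str] instead of list[str]: avoids
    # "TypeError: 'type' object is not subscriptable" before Python 3.9.
    processed_cells: int = 0
    updated_cells: int = 0
    unmatched_target_cells: List[str] = dataclasses.field(default_factory=list)

stats = CellCopyStats()
stats.unmatched_target_cells.append("cell-1")
print(stats.processed_cells, stats.unmatched_target_cells)  # 0 ['cell-1']
```

The `default_factory=list` keeps each instance's list independent, which is why the field cannot simply default to `[]`.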
tensorflow/tensorflow | LSTMBlockCell validation error | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: yes. Source: source. TensorFlow version: TF 2.11. Custom code: yes. OS platform and distribution: Linux Debian 10. Mobile device: n/a. Python version: n/a. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: in tensorflow/tensorflow/core/kernels/rnn/lstm_ops.cc, line 441, the error message prints the dims of the wci tensor, but it should print the dims of the wcf tensor. In the current version a strange message can appear: "wcf tensor must be rank 1, but is rank 1". Standalone code to reproduce the issue: it is clear from the source code, no need to reproduce. Relevant log output: no response.
tensorflow/tensorflow | TensorFlow 2.12.0 WSL2 GPU support | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: no. Source: binary. TensorFlow version: 2.12.0. Custom code: no. OS platform and distribution: Windows 11 WSL2. Mobile device: no response. Python version: 3.11. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 11.8 / 8.4.1.50. GPU model and memory: no response. Current behaviour: I have a TensorFlow 2.11 environment in WSL that works fine; it was installed using the conda-forge package. For TensorFlow 2.12 I swapped to using the pip packages for tensorflow, keras, keras-tuner, tensorflow-hub, tensorflow-datasets, and tensorboard. After installation, the conda environment for 2.12 can't find the GPU anymore. Swapping back over to my 2.11 environment, it still finds it fine. Are there new instructions for GPU support in WSL2 as of TensorFlow 2.12.0? Standalone code to reproduce the issue:

```python
gpus = tf.config.experimental.list_physical_devices('GPU')
print('GPUs:', gpus)
```

Relevant log output: no response.
tensorflow/tensorflow | csr | Invalid | Thanks.
tensorflow/tensorflow | tf.raw_ops.Round doesn't have a gradient | Bug | Click to expand. Issue type: documentation bug. Have you reproduced the bug with tf-nightly: yes. Source: source. TensorFlow version: 2.11. Custom code: yes. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: the function tf.raw_ops.Round does not have a gradient; however, the documentation mentions that it does. Despite this inconsistency, the code operates correctly as intended. Standalone code to reproduce the issue:

```python
import os
import tensorflow as tf
import numpy as np

x = tf.Variable([1.5, -1.5], dtype=tf.dtypes.float64)

def rounding(x):
    round_op = tf.raw_ops.Round(x=x)
    return round_op

t1 = rounding(x)
with tf.GradientTape() as tape:
    tape.watch(x)
    t = rounding(x)
gradient = tape.gradient(t, x)
print(gradient)
```

Relevant log output:

```shell
None
```
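Not part of the report: a pure-Python sketch (hypothetical helper name, no TensorFlow required) of why a registered gradient for a rounding op is of little use. A central finite difference of `round()` is exactly zero almost everywhere, because rounding is locally constant away from the half-integer discontinuities:

```python
def round_grad_fd(x, h=1e-6):
    """Central finite-difference approximation of d(round(x))/dx."""
    return (round(x + h) - round(x - h)) / (2 * h)

# Away from the jumps at half-integers, round() is locally constant,
# so the numerical derivative is exactly 0 — which is why a framework
# may register no gradient at all (returning None) or a zero gradient.
print(round_grad_fd(1.3))   # 0.0
print(round_grad_fd(-2.8))  # 0.0
```

Under this view, returning `None` and returning an all-zeros gradient convey the same analytic fact; the documentation bug is only about which of the two the docs promise.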
tensorflow/tensorflow | TensorFlow version is 2.11.1 in the v2.12.0 tag | Bug | Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: yes. Source: source. TensorFlow version: TF 2.12. Custom code: yes. OS platform and distribution: Linux RHEL 8.6. Mobile device: no response. Python version: no response. Bazel version: 5.3. GCC/compiler version: 11.2. CUDA/cuDNN version: 11.4 / 8.3. GPU model and memory: no response. Current behaviour: the version of TF in the v2.12.0 tag is 2.11.1; however, it should have been 2.12.0. Standalone code to reproduce the issue: L49 shows 2.11.1. Relevant log output: no response.
tensorflow/tensorflow | Missing module | Bug | Click to expand. Issue type: performance bug. Have you reproduced the bug with tf-nightly: no. Source: binary. TensorFlow version: code. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: hello, I found an error with a missing module in the file async_comp_test. Standalone code to reproduce the issue: 1. Make a codespace. 2. Find the file async_comp_test.py. 3. Start debugging. Relevant log output:

```shell
No module named 'tensorflow'
  File "/workspace/tensorflow/tensorflow/compiler/tests/async_comp_test.py", line 20, in <module>
    from tensorflow.core.protobuf import config_pb2
ModuleNotFoundError: No module named 'tensorflow'
```