tensorflow/tensorflow
PNG warning
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: 2.15. Custom code: yes. OS platform and distribution: Ubuntu 22.04. Mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: I create a dataset using tf.keras.utils.image_dataset_from_directory. When I fit the model with this dataset, I get a warning about color space. My images have Pillow modes "L" and "RGBA", and the resulting dataset is grayscale. Where is the issue?

Standalone code to reproduce the issue:

```python
train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    dataset_train_dataset,
    validation_split=0.1,  # 90% training, 10% validation
    subset="both",
    image_size=(128, 128),
    interpolation="bicubic",
    batch_size=None,
    color_mode="grayscale",
    shuffle=True,
    seed=123,
)

def preprocessing(image, label):
    # normalize
    image = tf.clip_by_value(tf.cast(image, dtype=tf.float32) / 255.0, 0.0, 1.0)
    return image, label

train_ds = train_ds.map(preprocessing, num_parallel_calls=tf.data.AUTOTUNE)
train_ds = train_ds.shuffle(1024)
train_ds = train_ds.batch(config.batch_size, drop_remainder=True)
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)

model.fit(train_ds, epochs=config.epochs, validation_data=val_ds)
```

Relevant log output:

```
W tensorflow/core/lib/png/png_io.cc:88] PNG warning: iCCP: profile 'icc-profile': 'GRAY': Gray color space not permitted on RGB PNG
```
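A possible workaround, sketched under the assumption that the warning comes from grayscale ICC profiles embedded in the source PNGs: re-save the files without the iCCP chunk using Pillow before building the dataset. The directory name is illustrative.

```python
# Strip embedded ICC profiles from PNGs so libpng stops warning about them.
# Assumes Pillow is installed; `data_dir` is a hypothetical path.
from pathlib import Path
from PIL import Image

data_dir = Path("dataset")
for png in data_dir.rglob("*.png"):
    with Image.open(png) as img:
        img.load()                       # read pixel data before overwriting the file
        img.save(png, icc_profile=None)  # passing None drops the iCCP chunk on save
```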
tensorflow/tensorflow
Inconsistency of LeftShift between eager mode and jit_compile=True
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.14.1. Custom code: yes. OS platform and distribution: Ubuntu 22.04.3 LTS (x86_64). Mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: different outputs of tf.raw_ops.LeftShift between eager mode and op mode are observed.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

class Network(tf.Module):
    def __init__(self):
        super().__init__()

    @tf.function(jit_compile=True)
    def __call__(self, x, y):
        x = tf.raw_ops.LeftShift(y=x, x=y)
        return x

m = Network()
tensor_x = tf.random.uniform([9], minval=0, maxval=255, dtype=tf.int32)
tensor_y = tf.random.uniform([9], minval=0, maxval=255, dtype=tf.int32)
inp = {"x": tensor_x, "y": tensor_y}

with tf.device("CPU:0"):
    tf.config.run_functions_eagerly(True)
    no_op_res = m(**inp)
    tf.config.run_functions_eagerly(False)
with tf.device("CPU:0"):
    op_res = m(**inp)

tf.debugging.assert_near(tf.cast(no_op_res, tf.float64),
                         tf.cast(op_res, tf.float64),
                         atol=0.001, rtol=0.001)
```

Relevant log output:

```
Raised error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data:
b'x and y not equal to tolerance rtol=<tf.Tensor: shape=(), dtype=float64, numpy=0.001>, atol=<tf.Tensor: shape=(), dtype=float64, numpy=0.001>'
b'x (shape=(9,), dtype=float64) = ' -2147483648.0, 0.0, 0.0, 0.0, 0.0, ...
b'y (shape=(9,), dtype=float64) = ' 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
```
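A hedged note on why the two paths can disagree here: shifting a 32-bit integer by 32 or more bits is undefined behavior in C++, so the eager CPU kernel and the XLA-compiled version may legitimately produce different garbage. A minimal sketch that makes both paths agree by masking the shift amount into range (assuming that masking matches the semantics you actually want):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3], dtype=tf.int32)
y = tf.constant([1, 35, 70], dtype=tf.int32)  # 35 and 70 exceed the 31-bit limit

# Reduce every shift count modulo the bit width before shifting.
safe = tf.bitwise.left_shift(x, y % 32)
print(safe)
```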
tensorflow/tensorflow
Inconsistency in results between tf.keras.metrics and tf.keras.losses
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: 2.15. Custom code: yes. OS platform and distribution: Ubuntu 22.04. Mobile device: Ubuntu 22.04. Python version: 3.9.10. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: the current behavior highlights an inconsistency between the outcomes obtained using tf.keras.metrics.LogCoshError and tf.keras.losses.LogCosh. The discrepancy arises when utilizing the sample_weight / weights argument. The objective is to compute a weighted mean during both the training and testing phases.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

true_values = tf.constant([[3.5, 8.2], [1.8, 6.7]])
predicted_values = tf.constant([[5.1, 7.9], [10.4, 3.2]])
weights = tf.constant([0.9, 0.4])

logcosh_metric = tf.keras.metrics.LogCoshError()
logcosh_metric.update_state(true_values, predicted_values)
print("logcosh without weights:", logcosh_metric.result().numpy())

logcosh_metric = tf.keras.metrics.LogCoshError()
logcosh_metric.update_state(true_values, predicted_values, sample_weight=weights)
print("logcosh with weights:", logcosh_metric.result().numpy())

true_labels = tf.constant([[3.5, 8.2], [1.8, 6.7]])
predicted_labels = tf.constant([[5.1, 7.9], [10.4, 3.2]])
weights = tf.constant([0.9, 0.4])

logcosh_loss = tf.keras.losses.LogCosh()
print("logcosh without weights:", logcosh_loss(true_labels, predicted_labels).numpy())
print("logcosh with weights:",
      logcosh_loss(true_labels, predicted_labels, sample_weight=weights).numpy())
```

Relevant log output:

```
logcosh without weights: 2.926441
logcosh with weights: 1.9914919
logcosh without weights: 2.926441
logcosh with weights: 1.2944697
```
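A hedged explanation of the numbers, not asserted in the report: the metric computes a weighted mean (it divides by the sum of the weights), while the loss's default SUM_OVER_BATCH_SIZE reduction divides the weighted sum by the number of samples. Reproducing both printed values from the per-sample log-cosh means:

```python
import numpy as np

# Mean log-cosh per sample for the inputs above: log(cosh(d)) averaged over
# each row's two elements, with d = prediction - truth.
per_sample = np.array([0.495548, 5.357309])
w = np.array([0.9, 0.4])

print((per_sample * w).sum() / w.sum())   # ~1.991474 -> metric (weighted mean)
print((per_sample * w).sum() / len(w))    # ~1.294458 -> loss (sum over batch size)
```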
tensorflow/tensorflow
tensorflow-macos 2.15.0 still requires ml-dtypes ~=0.2.0 instead of ~=0.3.1
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: 2.15.0. Custom code: no. OS platform and distribution: macOS 14.1.2. Mobile device: Mac. Python version: 3.9. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: currently we cannot install the following packages together: tensorflow-macos 2.15.0, tensorflow-metal 1.1.0, jax 0.4.23, jaxlib 0.4.23, orbax-checkpoint 0.4.8.

The conflict is caused by:
    tensorflow-macos 2.15.0 depends on ml-dtypes~=0.2.0
    jax 0.4.23 depends on ml-dtypes>=0.2.0
    jaxlib 0.4.23 depends on ml-dtypes>=0.2.0
    tensorstore 0.1.52 depends on ml-dtypes>=0.3.1
    tensorflow-macos 2.15.0 depends on ml-dtypes~=0.2.0
    jax 0.4.23 depends on ml-dtypes>=0.2.0
    jaxlib 0.4.23 depends on ml-dtypes>=0.2.0
    tensorstore 0.1.51 depends on ml-dtypes>=0.3.1

From the requirements file (L353) it seems that it should require ml-dtypes ~=0.3.1, but if you run `curl -s <PyPI JSON URL> | jq '.info.requires_dist' | grep ml-dtypes` you will get `ml-dtypes~=0.2.0`. Note: it does work without orbax-checkpoint, which is now recommended by Flax for checkpointing.

Standalone code to reproduce the issue:

```yaml
name: debug
channels:
  - defaults
dependencies:
  - python=3.9
  - pip=23.3.1
  - pip:
      - tensorflow-macos==2.15.0
      - tensorflow-metal==1.1.0
      - jax==0.4.23
      - jaxlib==0.4.23
      - orbax-checkpoint==0.4.8
```

Please put this into `environment_mac_m1_debug.yml`, then execute `conda env create -f environment_mac_m1_debug.yml`.

Relevant log output:

```
$ conda env create -f environment_mac_m1_debug.yml
Collecting package metadata (repodata.json): done
Solving environment: done

==> WARNING: A newer version of conda exists. <==
  current version: 23.3.1
  latest version: 23.11.0
Please update conda by running: conda update -n base -c conda-forge conda

Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Installing pip dependencies: / Ran pip subprocess with arguments:
['.../miniforge3/envs/debug/bin/python', '-m', 'pip', 'install', '-U', '-r', '.../requirements.txt', '--exists-action=b']
Pip subprocess output:
Collecting tensorflow-macos==2.15.0 (from -r .../requirements.txt (line 1))
Collecting tensorflow-metal==1.1.0 (from -r .../requirements.txt (line 2))
Collecting jax==0.4.23 (from -r .../requirements.txt (line 3))
Collecting jaxlib==0.4.23 (from -r .../requirements.txt (line 4))
Collecting orbax-checkpoint==0.4.8 (from -r .../requirements.txt (line 5))
Collecting tensorflow-datasets==4.9.3 (from -r .../requirements.txt (line 6))
Collecting ml-dtypes~=0.2.0 (from tensorflow-macos==2.15.0->-r .../requirements.txt (line 1))
Collecting tensorstore (from orbax-checkpoint==0.4.8->-r .../requirements.txt (line 5))
  Using cached tensorstore-0.1.52-cp39-cp39-macosx_11_0_arm64.whl.metadata (3.0 kB)
[... many further "Collecting ..." lines for transitive dependencies omitted ...]
INFO: pip is looking at multiple versions of tensorstore to determine which version is compatible with other requirements. This could take a while.
  Using cached tensorstore-0.1.51-cp39-cp39-macosx_11_0_arm64.whl.metadata (3.0 kB)

The conflict is caused by:
    tensorflow-macos 2.15.0 depends on ml-dtypes~=0.2.0
    jax 0.4.23 depends on ml-dtypes>=0.2.0
    jaxlib 0.4.23 depends on ml-dtypes>=0.2.0
    tensorstore 0.1.52 depends on ml-dtypes>=0.3.1
    tensorflow-macos 2.15.0 depends on ml-dtypes~=0.2.0
    jax 0.4.23 depends on ml-dtypes>=0.2.0
    jaxlib 0.4.23 depends on ml-dtypes>=0.2.0
    tensorstore 0.1.51 depends on ml-dtypes>=0.3.1

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict

Pip subprocess error:
ERROR: Cannot install -r .../requirements.txt (line 1), -r .../requirements.txt (line 3), -r .../requirements.txt (line 4) and orbax-checkpoint because these package versions have conflicting dependencies.
ERROR: ResolutionImpossible: for help visit the "Dealing with dependency conflicts" page

failed
CondaEnvException: Pip failed
```
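A small verification sketch, assuming tensorflow-macos is already installed in the current environment: inspect what the installed wheel's metadata actually declares for ml-dtypes, using only the standard library.

```python
# Print the ml-dtypes requirement declared by the installed tensorflow-macos wheel.
from importlib.metadata import requires

ml_dtypes_reqs = [r for r in requires("tensorflow-macos") if "ml-dtypes" in r]
print(ml_dtypes_reqs)  # per the report, this shows 'ml-dtypes~=0.2.0'
```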
tensorflow/tensorflow
TF 2.15.0.post1 complains about tf.keras.optimizers.Adam not being trackable
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: source. TensorFlow version: v2.15.0-2-g0b15fdfcb3f (2.15.0). Custom code: yes. OS platform and distribution: Linux (Ubuntu 22.04). Mobile device: Linux (Ubuntu 22.04). Python version: 3.11.7. Bazel / GCC version: no response. CUDA/cuDNN version: 12.2.140. GPU model and memory: NVIDIA 3080 with 10240 MB.

Current behavior:

1. Define a model and a Keras Adam optimizer:

```python
import tensorflow as tf
import keras

inputs = keras.Input(shape=(37,))
x = keras.layers.Dense(32, activation="relu")(inputs)
outputs = keras.layers.Dense(5, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
optimizer = tf.keras.optimizers.Adam(0.001)
```

2. Create a checkpoint:

```python
ckpt = tf.train.Checkpoint(step=tf.Variable(1, name="step"),
                           optimizer=optimizer,
                           net=model)
```

3. Output error:

```
ValueError: `Checkpoint` was expecting optimizer to be a trackable object (an object derived from `Trackable`), got ... If you believe this object should be trackable (i.e. it is part of the TensorFlow Python API and manages state), please open an issue.
```

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import keras

inputs = keras.Input(shape=(37,))
x = keras.layers.Dense(32, activation="relu")(inputs)
outputs = keras.layers.Dense(5, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
optimizer = keras.optimizers.Adam(0.001)

ckpt_actor = tf.train.Checkpoint(step=tf.Variable(1, name="step"),
                                 optimizer=optimizer,
                                 net=model)
```

Relevant log output:

```
ValueError                                Traceback (most recent call last)
Cell In[10], line 1
----> 1 ckpt_actor = tf.train.Checkpoint(
      2     step=tf.Variable(1, name="step"),
      3     optimizer=optimizer,
      4     net=model)

File .../site-packages/tensorflow/python/checkpoint/checkpoint.py:2200, in Checkpoint.__init__(self, root, **kwargs)
   2198 if isinstance(converted_v, weakref.ref):
   2199     converted_v = converted_v()
-> 2200 _assert_trackable(converted_v, k)
   2202 if root:
   2203     # Make sure that root doesn't already have dependencies with these names.
   2204     child = trackable_root._lookup_dependency(k)

File .../site-packages/tensorflow/python/checkpoint/checkpoint.py:1548, in _assert_trackable(obj, name)
   1545 def _assert_trackable(obj, name):
   1546     if not isinstance(
   1547         obj, (base.Trackable, def_function.Function)):
-> 1548         raise ValueError(
   1549             f"`Checkpoint` was expecting {name} to be a trackable object (an "
   1550             f"object derived from `Trackable`), got {obj}. If you believe this "
   1551             "object should be trackable (i.e. it is part of the "
   1552             "TensorFlow Python API and manages state), please open an issue.")

ValueError: `Checkpoint` was expecting optimizer to be a trackable object (an object derived from `Trackable`), got ... If you believe this object should be trackable (i.e. it is part of the TensorFlow Python API and manages state), please open an issue.
```
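A possible workaround sketch, not asserted as the fix: the legacy Keras 2 optimizer class derives from `Trackable` and checkpoints without complaint (this assumes the `tf.keras.optimizers.legacy` namespace is available, as it is in Keras 2.x builds).

```python
import tensorflow as tf

# The legacy Adam is a Trackable and can be stored in a tf.train.Checkpoint.
optimizer = tf.keras.optimizers.legacy.Adam(0.001)
ckpt = tf.train.Checkpoint(step=tf.Variable(1, name="step"), optimizer=optimizer)
```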
tensorflow/tensorflow
Custom callback logged values are not accurately reflected in the training progress bar
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: TF 2.14.1. Custom code: yes. OS platform and distribution: Linux (Ubuntu 22.04). Mobile device: no response. Python version: 3.9 / 3.10. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: I implemented a custom learning-rate schedule that inherits from the tf.keras.optimizers.schedules.LearningRateSchedule class. I then created a custom callback to add it to the training logs. However, the values shown in the progress bar do not seem to match the actual values. This goes for the learning rate but also for another changing variable such as the batch number. I expected this to update correctly, but it does not seem to do so.

Standalone code to reproduce the issue: (not included in the dump).

Relevant log output:

```
Epoch 1/10
End of batch 0, lr: 9.375000445288606e-06
 1/32 [>.............................] - ETA: 2:16 - 4s/step - loss: 0.0615 - lr: 9.3750e-06 - batch: 0.0000e+00
End of batch 1, lr: 1.8750000890577212e-05
End of batch 2, lr: 2.8125001335865818e-05
End of batch 3, lr: 3.7500001781154424e-05
End of batch 4, lr: 4.6875000407453626e-05
End of batch 5, lr: 5.6250002671731636e-05
 6/32 [====>.........................] - 0s 12ms/step - loss: 0.0786 - lr: 3.2813e-05 - batch: 2.5000
End of batch 6, lr: 6.562500493600965e-05
End of batch 7, lr: 7.500000356230885e-05
End of batch 8, lr: 8.437500218860805e-05
End of batch 9, lr: 9.375000081490725e-05
End of batch 10, lr: 0.00010312500671716407
End of batch 11, lr: 0.00011250000534346327
12/32 [==========>...................] - 0s 10ms/step - loss: 0.0834 - lr: 6.0938e-05 - batch: 5.5000
End of batch 12, lr: 0.00012187500396976247
End of batch 13, lr: 0.0001312500098720193
End of batch 14, lr: 0.0001406250084983185
End of batch 15, lr: 0.0001500000071246177
End of batch 16, lr: 0.0001593750057509169
End of batch 17, lr: 0.0001687500043772161
18/32 [==================>...........] - 0s 10ms/step - loss: 0.0867 - lr: 8.9063e-05 - batch: 8.5000
End of batch 18, lr: 0.0001781250030035153
End of batch 19, lr: 0.0001875000016298145
End of batch 20, lr: 0.00019687501480802894
End of batch 21, lr: 0.00020625001343432814
End of batch 22, lr: 0.00021562501206062734
23/32 [========================>.....] - 0s 10ms/step - loss: 0.0882 - lr: 1.1250e-04 - batch: 11.0000
End of batch 23, lr: 0.00022500001068692654
End of batch 24, lr: 0.00023437500931322575
End of batch 25, lr: 0.00024375000793952495
End of batch 26, lr: 0.00025312500656582415
End of batch 27, lr: 0.0002625000197440386
28/32 [============================>.] - 0s 11ms/step - loss: 0.0890 - lr: 1.3594e-04 - batch: 13.5000
End of batch 28, lr: 0.00027187500381842256
End of batch 29, lr: 0.000281250016996637
End of batch 30, lr: 0.00029062500107102096
End of batch 31, lr: 0.0003000000142492354
32/32 [==============================] - 5s 11ms/step - loss: 0.0893 - lr: 1.5469e-04 - batch: 15.5000
```
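A hedged reading of the log, not stated in the report: the Keras progress bar displays a running mean of each logged value over the epoch, not the latest value. The printed figures match that averaging exactly, which can be checked with a few lines:

```python
import numpy as np

# Per-batch learning rates logged by the callback for batches 0..5
# (the schedule is linear: lr = 9.375e-06 * (batch + 1)).
lrs = 9.375e-06 * np.arange(1, 7)

print(lrs.mean())            # 3.28125e-05, matching the bar's "lr: 3.2813e-05"
print(np.arange(6).mean())   # 2.5, matching the bar's "batch: 2.5000"
```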
tensorflow/tensorflow
Process is aborted (core dumped) when axis is a large negative integer in a call to tf.gather
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.16.0. Custom code: yes. OS / mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: I was actually using 2.10.0, but I found this issue persists in 2.16.0.dev20231209 (tf-nightly). Here is the code to reproduce:

```python
import tensorflow as tf

params = tf.constant([0.69])
indices = tf.constant([16])
axis = tf.constant(-9223372036854775808, dtype=tf.int64)
tf.gather(params, indices, axis=axis)
```

The process is directly killed by the system when running the above code. Here is the related output:

```
2023-12-09 22:48:16.035701: F tensorflow/core/framework/tensor.h:832] Check failed: new_num_elements == NumElements() (0 vs. 1)
Aborted (core dumped)
```

Standalone code to reproduce the issue:

```python
import tensorflow as tf

params = tf.constant([0.69])
indices = tf.constant([16])
axis = tf.constant(-9223372036854775808, dtype=tf.int64)
tf.gather(params, indices, axis=axis)
```

Relevant log output: no response.
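A minimal user-level guard, sketched as an assumption about how one might avoid the crash until it is fixed upstream (the helper name is illustrative): reject axes outside the valid range before the value ever reaches the kernel. Note that -9223372036854775808 is the most negative int64, so any range check catches it.

```python
import tensorflow as tf

def safe_gather(params, indices, axis):
    # Valid axes for a rank-r tensor are [-r, r); anything else is rejected here
    # instead of triggering the CHECK failure inside the kernel.
    rank = int(tf.rank(params))
    axis_val = int(axis)
    if not -rank <= axis_val < rank:
        raise ValueError(f"axis {axis_val} out of range for rank {rank}")
    return tf.gather(params, indices, axis=axis_val)
```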
tensorflow/tensorflow
Advisory GHSA-9jjw-hf72-3mxw contains invalid semver
Bug
Issue type: Documentation Bug. Reproduced with TensorFlow Nightly: no. Source: source. TensorFlow version: n/a. Custom code: no. OS platform and distribution: n/a. Mobile device: n/a. Python version: n/a. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: the semver string is "2.4.0rc0", but it should be "2.4.0-rc.0". This causes problems for tools and scripts that parse the advisory database.

Standalone code to reproduce the issue: n/a.

Relevant log output: no response.
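A short illustration of the parsing problem the report describes, under the assumption that the `semver` and `packaging` packages are installed: "2.4.0rc0" is a valid PEP 440 version string but not valid SemVer, while "2.4.0-rc.0" uses SemVer's hyphenated prerelease form.

```python
import semver
from packaging.version import Version

print(Version("2.4.0rc0"))                     # PEP 440 accepts it
print(semver.VersionInfo.parse("2.4.0-rc.0"))  # valid SemVer prerelease

try:
    semver.VersionInfo.parse("2.4.0rc0")       # not valid SemVer
except ValueError as err:
    print(err)
```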
tensorflow/tensorflow
Multi-GPU strategy scope causes a save_weights error
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: 2.13.1. Custom code: yes. OS platform and distribution: Ubuntu 20.04. Mobile device: no response. Python version: 3.8. Bazel / GCC version: no response. CUDA/cuDNN version: 11.8. GPU model and memory: 2x A6000 (48 GB).

Current behavior: I have a custom layer within a model that is created and saved within a multi-GPU scope. Everything works well without the MirroredStrategy scope (thus running on a single GPU), but when running on multiple GPUs using the strategy scope, the names of the model's weights get truncated and essentially only resemble "kernel" and "bias". This leads to duplicate entries in the save_weights function, causing an HDF5 error. Code to reproduce is below (you need to have 2 GPUs), and log output is also attached.

Standalone code to reproduce the issue:

```python
import os
import shutil
import tensorflow as tf
from contextlib import nullcontext
from tensorflow.keras.layers import Input, Conv2D, Layer, GroupNormalization
from tensorflow.keras.models import Model, load_model

class MyLayer(Layer):
    def __init__(self, channels, **kwargs):
        super().__init__(**kwargs)
        self.norm = GroupNormalization(epsilon=1e-5)
        self.proj1 = Conv2D(channels, 1)
        self.proj2 = Conv2D(channels, 1)

    def call(self, inputs):
        return self.proj2(self.proj1(self.norm(inputs)))

class ModelMock(Model):
    def __init__(self, img_height, img_width, name=None):
        x_input = Input((img_height, img_width, 3), name="x_input")
        x = Conv2D(320, kernel_size=3, padding="same")(x_input)
        x = MyLayer(320)(x)
        output = Conv2D(16, kernel_size=3, padding="same")(x)
        super().__init__(x_input, output, name=name)

if __name__ == "__main__":
    strategy = tf.distribute.MirroredStrategy()
    print("Number of devices: {}".format(strategy.num_replicas_in_sync))
    if os.path.exists("test"):
        shutil.rmtree("test")
    os.mkdir("test")
    fail = True
    with strategy.scope() if fail else nullcontext():
        print("Creating")
        model = ModelMock(256, 256)
        for i, w in enumerate(model.weights):
            print(i, w.name)
        model.save_weights("test/model1.hdf5")
        model.save("test/model1")
        model = load_model("test/model1", compile=False)
        for i, w in enumerate(model.weights):
            print(i, w.name)
        model.save_weights("test/model2.hdf5")
        print("Done")
```

Relevant log output:

```
Number of devices: 2
Creating
0 conv2d/kernel:0
1 conv2d/bias:0
2 my_layer/group_normalization/gamma:0
3 my_layer/group_normalization/beta:0
4 my_layer/conv2d_1/kernel:0
5 my_layer/conv2d_1/bias:0
6 my_layer/conv2d_2/kernel:0
7 my_layer/conv2d_2/bias:0
8 conv2d_3/kernel:0
9 conv2d_3/bias:0
0 kernel:0
1 bias:0
2 gamma:0
3 beta:0
4 kernel:0
5 bias:0
6 kernel:0
7 bias:0
8 kernel:0
9 bias:0
Traceback (most recent call last):
  File ".../check_weight_names.py", line 48, in <module>
    model.save_weights("test/model2.hdf5")
  File "/usr/local/lib/python3.8/dist-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/usr/local/lib/python3.8/dist-packages/h5py/_hl/group.py", line 183, in create_dataset
    dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
  File "/usr/local/lib/python3.8/dist-packages/h5py/_hl/dataset.py", line 163, in make_new_dset
    dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl, dapl=dapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5d.pyx", line 138, in h5py.h5d.create
ValueError: Unable to create dataset (name already exists)
```
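A hedged mitigation sketch, not a confirmed upstream fix: the TensorFlow checkpoint format keys weights by object path rather than by variable name, so the truncated, duplicated "kernel:0"/"bias:0" names do not collide there the way they do in a flat HDF5 file.

```python
# Continuing the repro above: `model` is the reloaded model whose weight
# names are truncated. Saving in the TF checkpoint format avoids the
# HDF5 duplicate-name error.
model.save_weights("test/model2_ckpt", save_format="tf")
model.load_weights("test/model2_ckpt").expect_partial()
```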
tensorflow/tensorflow
Wrong gradient calculated for the tf.keras.layers.Add API
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: source. TensorFlow version: TF 2.14.0. Custom code: yes. OS platform and distribution: Windows / Colab. Mobile device: no response. Python version: 3.10. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: forward-mode differentiation for this case is inconsistent with the gradient calculated in reverse mode; they should be the same.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

input_0_tensor = tf.random.uniform([], minval=0.3, maxval=1, dtype=tf.float32)
input_0 = tf.identity(input_0_tensor)
input_1_tensor = tf.random.uniform([], minval=1.1, maxval=10, dtype=tf.float32)
input_1 = tf.identity(input_1_tensor)
inputs = [input_0, input_1]
layer = tf.keras.layers.Add()

with tf.GradientTape(persistent=True) as g:
    g.watch(inputs)
    res_backward = layer(inputs)
grad_backward = g.jacobian(res_backward, input_0)
print("res_backward:", res_backward)
print("grad_backward:", grad_backward)

tangent_1 = tf.constant(1, dtype=tf.float32)
tangent_2 = tf.constant(1, dtype=tf.float32)
tangents = [tangent_1, tangent_2]
with tf.autodiff.ForwardAccumulator(inputs, tangents) as acc:
    res_forward = layer(inputs)
grad_jvp = acc.jvp(res_forward)
print("res_forward:", res_forward)
print("grad_forward:", grad_jvp)
```

Relevant log output:

```
res_backward: tf.Tensor(9.794308, shape=(), dtype=float32)
grad_backward: tf.Tensor(1.0, shape=(), dtype=float32)
res_forward: tf.Tensor(9.794308, shape=(), dtype=float32)
grad_forward: tf.Tensor(2.0, shape=(), dtype=float32)
```
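One possible reading of these numbers, not asserted in the report: for the Add layer, $f(x_0, x_1) = x_0 + x_1$, the reverse-mode call above reports the Jacobian with respect to a single input, while the forward accumulator propagates both unit tangents at once, so the two values need not coincide:

```latex
\frac{\partial f}{\partial x_0} = \frac{\partial f}{\partial x_1} = 1,
\qquad
\mathrm{JVP} = \frac{\partial f}{\partial x_0}\,v_0 + \frac{\partial f}{\partial x_1}\,v_1
             = 1\cdot 1 + 1\cdot 1 = 2 .
```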
tensorflow/tensorflow
tensorflow.org version links point to the latest version for 2.11 onward
Bug
Issue type: Documentation Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.11. Custom code: no. OS platform and distribution: all. Mobile device: all. Python version: all. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: the links to versions 2.11, 2.12, 2.13, and 2.14 all point to the latest docs, i.e. instead of pointing to the version-specific pages (or to a similar URL for 2.12 and 2.13). If the script that generates this page is available on GitHub, I'm happy to fix it, but I just can't find it. Note that the links to the release notes are okay.

Standalone code to reproduce the issue:

Step 1: go to the versions page. Step 2: click on r2.11, r2.12, or r2.13. All of these lead to the wrong page. Once 2.15 is released, I'm guessing r2.14 will also be wrong (it will point to the latest docs).

Relevant log output: n/a.
tensorflow/tensorflow
tf.keras.models.load_model can't handle UNC paths
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: TF 2.14.0. Custom code: yes. OS platform and distribution: Windows 10. Mobile device: no response. Python version: 3.11.6. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: I recently filed a bug about saving tf.keras models in the SavedModel format using a UNC path like `\\computername\path\to\folder` (see #62319). A workaround for the time being is to use the Keras save format instead, so the following works: `model.save(r"\\computername\path\to\folder\model.keras", save_format="keras")`. However, loading that model again using the same UNC path is still broken. Running `tf.keras.models.load_model(r"\\computername\path\to\folder\model.keras")` raises the error shown below. The problem once again seems to be with the UNC path, because replacing it with a path using the drive's letter resolves the problem: if J: is set as an alias for `\\computername`, then `tf.keras.models.load_model(r"J:\path\to\folder\model.keras")` works as expected. But doing this is not really a solution for me, because then I would have to rely on others using the same letter J: for the network drive in order to run my code.

Standalone code to reproduce the issue:

```python
import pathlib
import tensorflow as tf

# The issue can be reproduced using, for instance, the path to the root of
# the D: drive expressed as a UNC path:
path = r"\\localhost\d$"
path = pathlib.WindowsPath(path)
path = path / "test"
# If we uncomment the following line and replace the path string with one
# that includes the drive's letter, no error occurs:
# path = pathlib.WindowsPath("D:/test")
path.mkdir(exist_ok=True, parents=True)

# Set up and save a dummy model
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(8))
model.save(path / "model.keras", save_format="keras")

# Loading the model using the same UNC path raises a UnicodeDecodeError
model = tf.keras.models.load_model(path / "model.keras")
```

Relevant log output:

```
Traceback (most recent call last):
  File "D:\keras_load_error.py", line 26, in <module>
    main()
  File "D:\keras_load_error.py", line 23, in main
    model = tf.keras.models.load_model(path / "model.keras")
  File "D:\venv\Lib\site-packages\keras\src\saving\saving_api.py", line 254, in load_model
    return saving_lib.load_model(...)
  File "D:\venv\Lib\site-packages\keras\src\saving\saving_lib.py", line 281, in load_model
    raise e
  File "D:\venv\Lib\site-packages\keras\src\saving\saving_lib.py", line 234, in load_model
    with zipfile.ZipFile(gfile_handle, "r") as zf:
  File "C:\Program Files\Python311\Lib\zipfile.py", line 1302, in __init__
    self._RealGetContents()
  File "C:\Program Files\Python311\Lib\zipfile.py", line 1365, in _RealGetContents
    endrec = _EndRecData(fp)
  File "C:\Program Files\Python311\Lib\zipfile.py", line 292, in _EndRecData
    fpin.seek(0, 2)
  File "D:\venv\Lib\site-packages\tensorflow\python\util\deprecation.py", line 588, in new_func
    return func(*args, **kwargs)
  File "D:\venv\Lib\site-packages\tensorflow\python\lib\io\file_io.py", line 139, in seek
    self._preread_check()
  File "D:\venv\Lib\site-packages\tensorflow\python\lib\io\file_io.py", line 77, in _preread_check
    self._read_buf = _pywrap_file_io.BufferedInputStream(...)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 89: invalid start byte
```
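A hedged workaround sketch until UNC handling is fixed: since the `.keras` format is a single zip archive, copy it to a local temporary file with the standard library and load from there, sidestepping TensorFlow's file I/O layer for the network path. The UNC path below is illustrative.

```python
import pathlib
import shutil
import tempfile
import tensorflow as tf

# Hypothetical UNC location of the saved model.
unc_model = pathlib.WindowsPath(r"\\computername\path\to\folder\model.keras")

with tempfile.TemporaryDirectory() as tmp:
    local_copy = pathlib.Path(tmp) / "model.keras"
    shutil.copyfile(unc_model, local_copy)   # plain Python copy handles UNC fine
    model = tf.keras.models.load_model(local_copy)
```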
tensorflow/tensorflow
Documentation of tf.nn.depthwise_conv2d fails to mention a limitation on strides
Bug
Issue type: Documentation Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: 2.14.0. Custom code: no. OS platform and distribution: macOS 13.6. Mobile device: none. Python version: 3.11. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: apparently TensorFlow's tf.nn.depthwise_conv2d recently lost the ability to work with unequal strides (see #60391); apparently it was supported up to TF 2.11. This is mentioned in the documentation of tf.keras.layers.SeparableConv2D, albeit IMHO it should be more emphasized, given that this is a limitation specific to depthwise convolutions.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

layer1 = tf.nn.depthwise_conv2d(
    np.ones((2, 3, 4, 5)),
    filter=np.ones((1, 2, 5, 1)),
    strides=(1, 1, 2, 1),
    padding="SAME",
)
```

Relevant log output:

```
InvalidArgumentError: {{function_node __wrapped__DepthwiseConv2dNative_device_/job:localhost/replica:0/task:0/device:CPU:0}} Current implementation only supports equal length strides in the row and column dimensions. [Op:DepthwiseConv2dNative] name:
```
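A companion sketch, added for contrast and not part of the report: with equal row and column strides the same call succeeds, which is exactly the undocumented constraint the report is about.

```python
import numpy as np
import tensorflow as tf

out = tf.nn.depthwise_conv2d(
    np.ones((2, 3, 4, 5), dtype=np.float32),
    filter=np.ones((1, 2, 5, 1), dtype=np.float32),
    strides=(1, 2, 2, 1),   # strides[1] == strides[2], so the kernel accepts it
    padding="SAME",
)
print(out.shape)  # (2, 2, 2, 5)
```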
tensorflow/tensorflow
Remove or update the zh-cn translation of the installation instructions
Bug
The latest versions of TensorFlow do not support GPU on Windows native; however, the zh-cn translation says nothing about it. The translation has now become outdated and misleading. When I tried to contribute to the l10n (tensorflow/docs-l10n), I only found a DO_NOT_TRANSLATE file (docs-l10n: site/zh-cn/install), and the README (docs-l10n DO_NOT_TRANSLATE) says that tensorflow.org does not translate time-sensitive sections like the installation instructions. If the installation instructions will not be translated any more, please remove the translation.
tensorflow/tensorflow
EfficientNet transfer learning: initializing with imagenet weights changes the model
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: 2.14. Custom code: no. OS platform and distribution: Linux (Ubuntu 22.04). Mobile device: no response. Python version: 3.10. Bazel / GCC version: no response. CUDA/cuDNN version: 11.8. GPU model and memory: no response.

Current behavior: my current pipeline consists of: 1. training a model using an EfficientNet backbone with weights="imagenet"; 2. transferring the trained model weights to the prod environment; 3. initializing the production model with weights=None and loading the previously learned weights; 4. comparing the model's outputs between the training and production environments; they are different. Obviously, I expected the outputs to be the same. After lots of investigation I found the culprit, which is not mentioned anywhere in the documentation (GitHub link, L359). It turns out that when the model is initialized with weights="imagenet", an additional rescaling layer is added. I understand the reason behind it (reproducing the imagenet results), but the above situation proves how misleading the implementation is. Why didn't I just save the whole model instead of transferring only the weights? Well, because the model was trained with mixed precision on a GPU, and transferring it this way to a production environment that uses a CPU would make it unusable (NaNs all around). What I would expect ideally: the same model structure no matter which weights are loaded. What I would expect to minimize the impact of this issue: when the weights are loaded into the production model, the mismatch between the numbers of layers is detected and a warning is raised. See the test Colab notebook with the code from below (link).

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
from keras.applications.efficientnet import EfficientNetB3, preprocess_input

img_shape = (300, 300, 3)
x = np.ones((1, *img_shape)) * 255

train_model = EfficientNetB3(weights="imagenet", include_top=False, input_shape=img_shape)
train_model.save_weights("weights.h5")

prod_model = EfficientNetB3(weights=None, include_top=False, input_shape=img_shape)
prod_model.load_weights("weights.h5")

y_train = train_model.predict(preprocess_input(x), verbose=0)[0, :2, :2, :2]
y_prod = prod_model.predict(preprocess_input(x), verbose=0)[0, :2, :2, :2]
print(f"train model output (initialized with imagenet weights):\n{y_train}")
print(f"production model output (initialized with None weights):\n{y_prod}")
```

Relevant log output:

```
train model output (initialized with imagenet weights):
-0.27523986 -0.25020567 -0.26998693 -0.27772853 -0.2728562 -0.27376127 -0.24821427 -0.23993279
production model output (initialized with None weights):
-0.2746129 -0.89038897 -0.25032952 -0.1611695 -0.1986168 -0.0976367 -0.17414406 -0.26978934
```
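A small diagnostic sketch, suggested here rather than taken from the report: compare the layer lists of the two graphs before transferring any weights, to surface the structural difference near the input that the report attributes to the extra rescaling.

```python
from keras.applications.efficientnet import EfficientNetB3

a = EfficientNetB3(weights="imagenet", include_top=False, input_shape=(300, 300, 3))
b = EfficientNetB3(weights=None, include_top=False, input_shape=(300, 300, 3))

# Print the first few layer types of each model; any extra rescaling-related
# layer should show up near the input.
print([type(l).__name__ for l in a.layers[:5]])
print([type(l).__name__ for l in b.layers[:5]])
print(len(a.layers), len(b.layers))
```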
tensorflow/tensorflow
tf.truncatemod does not support the half and bfloat16 data types
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: 2.14.0. Custom code: yes. OS / mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: following the documentation, tf.truncatemod supports the half and bfloat16 data types, but in practice it does not.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
import warnings
warnings.filterwarnings("ignore")

x = tf.constant(np.random.rand(2, 2), dtype="half")
y = tf.constant(np.random.randint(0, 100), dtype="half")
out = tf.truncatemod(x, y)  # crash
print(out)

x = tf.constant(np.random.rand(2, 2), dtype="bfloat16")
y = tf.constant(np.random.randint(0, 100), dtype="bfloat16")
out = tf.truncatemod(x, y)  # crash
print(out)
```

Relevant log output:

```
NotFoundError: Could not find device for node: {{node TruncateMod}} = TruncateMod[T=DT_BFLOAT16]
All kernels registered for op TruncateMod:
  device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF]
  device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF]
  device='DEFAULT'; T in [DT_INT32]
  device='GPU'; T in [DT_INT32]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_INT32]
 [Op:TruncateMod] name:

NotFoundError: Could not find device for node: {{node TruncateMod}} = TruncateMod[T=DT_HALF]
All kernels registered for op TruncateMod:
  device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF]
  device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF]
  device='DEFAULT'; T in [DT_INT32]
  device='GPU'; T in [DT_INT32]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_INT32]
 [Op:TruncateMod] name:
```
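A hedged workaround sketch while only float32/float64/int kernels exist on CPU: run the op in float32 and cast back. Note that the round trip can change results that are not exactly representable in half or bfloat16.

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(2, 2), dtype=tf.half)
y = tf.constant(float(np.random.randint(1, 100)), dtype=tf.half)

# Compute in float32 (which has a registered CPU kernel), then cast back.
out = tf.cast(tf.truncatemod(tf.cast(x, tf.float32), tf.cast(y, tf.float32)),
              tf.half)
print(out)
```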
tensorflow/tensorflow
ELU activation implementation
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.13.0. Custom code: yes. OS platform and distribution: Ubuntu 20.04. Mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: the implementation of the ELU function is wrong. The ELU function requires the alpha value, as given in this paper: (image). But in TensorFlow, ELU is implemented without the alpha: (image).

Standalone code to reproduce the issue: I have compared the results with torch (because the ELU function in torch is implemented with alpha) and manually with numpy, and you can see that TF is not using the alpha and returns the wrong result.

```python
import tensorflow as tf
import numpy as np
import torch

print("tf version:", tf.version.VERSION, "\n")
x = np.array([-3.0])
alpha = 0.5
np_result = np.where(x > 0, x, np.multiply(alpha, np.expm1(x))).astype(x.dtype)
tf_result = tf.nn.elu(tf.constant([-3.0]))
torch_result = torch.nn.functional.elu(torch.tensor(-3.0), alpha=alpha)
print("tf result:", tf_result, "\n")
print("torch result:", torch_result, "\n")
print("numpy result:", np_result, "\n")
```

Relevant log output:

```
tf version: 2.13.0
tf result: tf.Tensor([-0.95021296], shape=(1,), dtype=float32)
torch result: tensor(-0.4751)
numpy result: [-0.47510647]
```
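A hedged note alongside the report: tf.nn.elu takes no alpha parameter (alpha is effectively fixed at 1.0), but the alpha-scaled variant does exist in the Keras API. A minimal check that reproduces the torch/numpy value:

```python
import tensorflow as tf

x = tf.constant([-3.0])
print(tf.keras.activations.elu(x, alpha=0.5))  # ~ -0.4751, matching torch and numpy
print(tf.keras.layers.ELU(alpha=0.5)(x))       # same computation, as a layer
```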
tensorflow/tensorflow
Segmentation fault in the TFLite interpreter
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.13.0. Custom code: yes. OS platform and distribution: Linux (Ubuntu 22.04.03). Mobile device: no response. Python version: 3.8.17. Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: I created a simple TensorFlow model and converted it to a TFLite model. Upon calling the TFLite interpreter's allocate_tensors() method, I found that TFLite segfaults, perhaps similar to this issue (link).

Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf
from tflite import Model
import tflite_runtime.interpreter as tflite

if __name__ == "__main__":
    data1 = np.arange(-128, 128).reshape(16, 16, 1).astype(np.float32)
    data2 = np.arange(-128, 128).reshape(16, 16, 1).astype(np.float32)
    merge = tf.keras.layers.Subtract()

    def dataset_generator():
        for _ in range(1):
            arr1 = data1.reshape((1, *data1.shape))
            arr2 = data2.reshape((1, *data2.shape))
            yield [arr1.astype(np.float32), arr2.astype(np.float32)]

    # Base TF model
    inputs1 = tf.keras.layers.Input(shape=data1.shape, dtype=np.float32)
    inputs2 = tf.keras.layers.Input(shape=data2.shape, dtype=np.float32)
    merged = merge([inputs1, inputs2])
    tf_model = tf.keras.Model([inputs1, inputs2], merged)

    # TFL model
    converter = tf.lite.TFLiteConverter.from_keras_model(tf_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = dataset_generator
    tflite_quant_model = converter.convert()
    tfl_model = Model.GetRootAsModel(tflite_quant_model, 0)

    # Write model to file
    model_file = "segfault_test.tfl"
    with open(model_file, "wb") as fp:
        fp.write(tflite_quant_model)

    # Create interpreter
    interpreter = tflite.Interpreter(
        model_path=model_file,
        experimental_op_resolver_type=tflite.OpResolverType.BUILTIN_REF)

    # This call causes a segfault
    interpreter.allocate_tensors()
```

Relevant log output:

```
2023-09-11 11:38:16.876852: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-09-11 11:38:16.877887: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-09-11 11:38:16.894424: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:7630] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-09-11 11:38:16.894443: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-09-11 11:38:16.894454: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-09-11 11:38:16.898500: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-11 11:38:17.196991: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[... deprecation warnings about tf.distributions -> tfp.distributions omitted ...]
2023-09-11 11:38:17.461403: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2158] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Skipping registering GPU devices...
2023-09-11 11:38:17.559681: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: /tmp/tmpnwirm5mk
.../tensorflow/lite/python/convert.py:928: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.
2023-09-11 11:38:17.578426: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored output_format.
2023-09-11 11:38:17.578436: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:368] Ignored drop_control_dependency.
2023-09-11 11:38:17.583376: I tensorflow/cc/saved_model/loader.cc:316] SavedModel load for tags { serve }; Status: success: OK. Took 4645 microseconds.
2023-09-11 11:38:17.586236: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
fully_quantize: 0, inference_type: 6, input_inference_type: FLOAT32, output_inference_type: FLOAT32
Aborted (core dumped)
```
tensorflow/tensorflow
Docs: spurious favicon in template
Bug
I can't work out where the template for this lives, but the TensorFlow docs (at least the homepage and a couple of random pages I spot-checked) have a spurious favicon link to an image that doesn't exist: `<link href="/static/en/site-assets/images/marketing/favicon.png" rel="shortcut icon">`. This is shadowed by another one earlier in the `<head>`, and so doesn't seem to be causing any immediate problems, but it still seems like it might be a good idea to clean it up.
tensorflow/tensorflow
Keras docs source-code links point to 404
Bug
Example: if I go here and click "source code", I get a 404 (link anchors L70, L3991), and it's the same no matter which version of the API docs I click from. Speaking of which, if you go to the versions page, the 2.12 and 2.11 links don't point to those versions of the API docs; they go to the current version. (Unless you use the links in the left sidebar; those are still okay.) Except that there's no link to the current version, which is 2.13.
tensorflow/tensorflow
Output mismatch between a direct pass and a loop pass through a Dense layer
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: 2.11.0. Custom code: no. OS platform and distribution: Linux (Ubuntu 20.04). Mobile device: no response. Python version: 3.8. Bazel / GCC version: no response. CUDA/cuDNN version: 12.0. GPU model and memory: no response.

Current behavior: when passing a large 3D tensor through a Dense layer using two different methods, the outputs are not always equal. Specifically, when the input tensor is relatively small (e.g. (2, 128, 42, 128)) the outputs are equal, but when the input tensor is larger (e.g. (2, 8192, 42, 379)) the outputs differ. The tf.debugging.assert_near(y1, y2) statement does not trigger any assertion error.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

# Create a random tensor
x = tf.random.uniform(shape=(2, 8192, 42, 379))

# Create a dense layer
layer = tf.keras.layers.Dense(128)

# Pass the input through the layer to get the output
y1 = layer(x)

# Pass the input using a for loop over the third dimension
y2 = tf.stack([layer(x[:, :, i]) for i in range(x.shape[2])], axis=2)

# Check if the outputs are the same
print(tf.reduce_all(tf.equal(y1, y2)))
tf.debugging.assert_near(y1, y2)
```

Relevant log output: no response.
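A hedged explanation sketch, not taken from the report: the two paths reduce over the feature axis with different kernel shapes, so float32 summation order differs, and bitwise equality is the wrong test for large tensors. Comparing with a tolerance shows the discrepancy is on the order of normal rounding error (shapes reduced here so the snippet runs quickly):

```python
import numpy as np
import tensorflow as tf

x = tf.random.uniform(shape=(2, 64, 8, 379))
layer = tf.keras.layers.Dense(128)

y1 = layer(x)
y2 = tf.stack([layer(x[:, :, i]) for i in range(x.shape[2])], axis=2)

print(bool(tf.reduce_all(tf.equal(y1, y2))))           # may be False
print(float(tf.reduce_max(tf.abs(y1 - y2))))           # tiny, ~1e-5 scale
print(np.allclose(y1.numpy(), y2.numpy(), atol=1e-4))  # True in practice
```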
tensorflow/tensorflow
tf.keras.callbacks.SidecarEvaluatorModelExport doc page looks broken
Bug
Issue type: Documentation Bug. Reproduced with TensorFlow Nightly: no. Source: source. TensorFlow version: 2.13.0. Custom code: no. OS / mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behavior: raw HTML tags are displayed and the layout is collapsed when you access this link.

Standalone code to reproduce the issue: please access the link from your browser. If it is not reproducible, I will share my detailed environment.

Relevant log output: no response.
tensorflowtensorflow
stringlookup layer do not retrieve vocabulary after save and load the model
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source source tensorflow version 2 11 2 12 2 13 custom code no os platform and distribution linux ubuntu 20 04 1 mobile device no response python version 3 9 5 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior we notice we could pickle the model right after build it but unpickling would fail after save and load it from the disk upon further investigation we realize the error be due to the vocabulary of the stringlookup layer which be become an empty list after the tf keras model load model operation the unpickle issue happen on tf 2 11 onwards the unpickling work on tf 2 8 2 9 and 2 10 even though the empty vocabulary issue be still there use the minimal reproducible example below before save the model if we inspect the stringlookup layer we get full model layer 1 get config out 9 name string lookup trainable true dtype int64 invert false max token none num oov indice 1 oov token unk mask token none output mode int sparse false pad to max token false vocabulary listwrapper a b idf weight none encode utf 8 after save and load to the disk we get full model load layer 1 get config out 10 name string lookup trainable true dtype int64 invert false max token none num oov indice 1 oov token unk mask token none output mode int sparse false pad to max token false vocabulary listwrapper idf weight none encode utf 8 we be able to circumvent the issue by create a new class as follow tf keras util register keras serializable class mystringlookup tf keras layer stringlookup def get config self base config super get config custom vocabulary self get vocabulary return base config custom however it would be nice if we didn t have to create this wrapper standalone code to reproduce the issue shell import tensorflow as tf import pickle model input tf keras input shape 1 dtype tf int64 lookup tf keras layer stringlookup vocabulary a b model input output tf keras layer dense 10 lookup full model tf keras model model input output this part should work model byte pickle dump full model model recover pickle load model byte this part should throw an error full model save tmp temp model full model load tf keras model load model tmp temp model model byte 2 pickle dump full model load model recover 2 pickle load model byte 2 relevant log output shell valueerror traceback most recent call last file 1 1 model recover 2 pickle load model byte 2 file databrick python lib python3 9 site package kera save pickle util py 48 in deserialize model from bytecode serialize model 46 model save lib load model filepath 47 except exception as e 48 raise e 49 else 50 return model file databrick python lib python3 9 site package kera save pickle util py 46 in deserialize model from bytecode serialize model 40 f write serialized model 41 when load direct import will work for most custom object 42 though it will require get config to be implement 43 some custom object e g an activation in a dense layer 44 serialize as a string by dense get config will require 45 a custom object scope 46 model save lib load model filepath 47 except exception as e 48 raise e file databrick python lib python3 9 site package keras save experimental saving lib py 196 in load model filepath custom object 194 h5 file close 195 except exception as e 196 raise e 197 else 198 return model file databrick python lib python3 9 site package keras save experimental saving lib py 183 in load model filepath 
```
    181 config_dict = json.load(config_json)
    182 # Construct the model from the configuration file in the archive.
    183 model = deserialize_keras_object(config_dict, custom_objects)
    184 h5_file = h5py.File(tf.io.gfile.join(temp_path, _VARS_FNAME), "r")
    185 print(h5_file)

File /databricks/python/lib/python3.9/site-packages/keras/saving/experimental/serialization_lib.py:318, in deserialize_keras_object(config, custom_objects)
    # Instantiate the class from its config inside a custom object scope
    # so that we can catch any custom objects that the config refers to.
    with object_registration.custom_object_scope(custom_objects):
        return cls.from_config(inner_config)

File /databricks/python/lib/python3.9/site-packages/keras/engine/training.py:3114, in Model.from_config(cls, config, custom_objects)
    functional_model_keys = ["name", "layers", "input_layers", "output_layers"]
    if all(key in config for key in functional_model_keys):
        inputs, outputs, layers = functional.reconstruct_from_config(config, custom_objects)
        model = cls(inputs=inputs, outputs=outputs, name=config.get("name"))
        functional.connect_ancillary_layers(model, layers)

File /databricks/python/lib/python3.9/site-packages/keras/engine/functional.py:1470, in reconstruct_from_config
    # First, we create all layers and enqueue nodes to be processed.
    for layer_data in config["layers"]:
        process_layer(layer_data)
    # Then we process nodes in order of layer depth. Nodes that cannot yet
    # be processed (if the inbound node does not yet exist) are re-enqueued,
    # and the process is repeated until all nodes are processed.
    while unprocessed_nodes: ...

File /databricks/python/lib/python3.9/site-packages/keras/engine/functional.py:1451, in process_layer(layer_data)
    # Instantiate layer.
    from keras.layers import deserialize as deserialize_layer
    layer = deserialize_layer(layer_data, custom_objects=custom_objects)
    created_layers[layer_name] = layer
    node_count_by_layer[layer] = int(_should_skip_first_node(layer))

File /databricks/python/lib/python3.9/site-packages/keras/layers/serialization.py:252, in deserialize(config, custom_objects)
    """Instantiates a layer from a config dictionary."""
    populate_deserializable_objects()
    return serialization.deserialize_keras_object(
        config,
        module_objects=LOCAL.ALL_OBJECTS,
        custom_objects=custom_objects,
        printable_module_name="layer",
    )

File /databricks/python/lib/python3.9/site-packages/keras/saving/legacy/serialization.py:527, in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    with object_registration.CustomObjectScope(custom_objects):
        deserialized_obj = cls.from_config(cls_config)
    # else: then `cls` may be a function returning a class; in this case,
    # by convention, config holds the kwargs of the function.

File /databricks/python/lib/python3.9/site-packages/keras/engine/base_layer.py:860, in Layer.from_config(cls, config)
    """Creates a layer from its config. This method is the reverse of get_config."""
    return cls(**config)

File /databricks/python/lib/python3.9/site-packages/keras/layers/preprocessing/string_lookup.py:333, in StringLookup.__init__(self, max_tokens, num_oov_indices, mask_token, oov_token, vocabulary, idf_weights, encoding, invert, output_mode, sparse, pad_to_max_tokens, **kwargs)
    del kwargs["dtype"]
    self.encoding = encoding
    super().__init__(
        max_tokens=max_tokens,
        num_oov_indices=num_oov_indices,
        mask_token=mask_token,
        oov_token=oov_token,
        vocabulary=vocabulary,
        vocabulary_dtype=tf.string,
        idf_weights=idf_weights,
        invert=invert,
        output_mode=output_mode,
        sparse=sparse,
        pad_to_max_tokens=pad_to_max_tokens,
        **kwargs,
    )
    base_preprocessing_layer.keras_kpl_gauge.get_cell("StringLookup").set(True)

File /databricks/python/lib/python3.9/site-packages/keras/layers/preprocessing/index_lookup.py:323, in IndexLookup.__init__(...)
    self.idf_weights_const = self.idf_weights.value()
    if vocabulary is not None:
        self.set_vocabulary(vocabulary, idf_weights)
    else:
        # When restoring from a Keras SavedModel, the loading code will
        # expect to find and restore a lookup_table attribute on the layer.
        # This table needs to be uninitialized as a StaticHashTable cannot
        # be initialized twice.
        self.lookup_table = self._uninitialized_lookup_table()

File /databricks/python/lib/python3.9/site-packages/keras/layers/preprocessing/index_lookup.py:510, in IndexLookup.set_vocabulary(self, vocabulary, idf_weights)
    idf_weights = np.array(idf_weights)
    if vocabulary.size == 0:
        raise ValueError(
            f"Cannot set an empty vocabulary, you passed {vocabulary}."
        )
    oov_start = self._oov_start_index()
    token_start = self._token_start_index()

ValueError: Cannot set an empty vocabulary, you passed
```
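The bottom frame points at IndexLookup.set_vocabulary rejecting an empty vocabulary, which suggests the saved config carried an empty vocabulary list for the StringLookup layer. The final error can be reproduced directly, without the archive; a minimal sketch (not the reporter's code):

```python
import tensorflow as tf

# Minimal reproduction of the last frame: set_vocabulary() rejects an empty
# vocabulary, which is what the deserialized StringLookup config apparently
# contained.
layer = tf.keras.layers.StringLookup()
try:
    layer.set_vocabulary([])
except ValueError as e:
    print(e)  # Cannot set an empty vocabulary, you passed ...
```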
tensorflowtensorflow
MLIR: fix tf.StridedSlice lowering to TOSA with new_axis_mask / shrink_axis_mask
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: Yes. Source: source. TensorFlow version: tf 2.13.0rc2. Custom code: Yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: no response. Python version: 3.9.1. Bazel version: 6.1.2. GCC/compiler version: clang 15.0.2. CUDA/cuDNN version: no response. GPU model and memory: no response.

Current behavior: the following MLIR input is not supported:

```mlir
func.func @test_strided_slice_new_axis_mask(%arg0: tensor<1x14x8xf32>) -> tensor<1x14x8x1xf32> {
  %strides = "tf.Const"() {device = "", value = dense<1> : tensor<4xi32>} : () -> tensor<4xi32>
  %begin_end = "tf.Const"() {device = "", value = dense<0> : tensor<4xi32>} : () -> tensor<4xi32>
  %res = "tf.StridedSlice"(%arg0, %begin_end, %begin_end, %strides) {begin_mask = 7 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 7 : i64, new_axis_mask = 8 : i64, shrink_axis_mask = 0 : i64} : (tensor<1x14x8xf32>, tensor<4xi32>, tensor<4xi32>, tensor<4xi32>) -> tensor<1x14x8x1xf32>
  func.return %res : tensor<1x14x8x1xf32>
}

func.func @test_strided_slice_shrink_axis_mask(%arg0: tensor<1x14x8x1xf32>) -> tensor<1x14x8xf32> {
  %strides = "tf.Const"() {device = "", value = dense<1> : tensor<4xi32>} : () -> tensor<4xi32>
  %begin = "tf.Const"() {device = "", value = dense<0> : tensor<4xi32>} : () -> tensor<4xi32>
  %end = "tf.Const"() {device = "", value = dense<[0, 0, 0, 1]> : tensor<4xi32>} : () -> tensor<4xi32>
  %res = "tf.StridedSlice"(%arg0, %begin, %end, %strides) {begin_mask = 7 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 7 : i64, new_axis_mask = 0 : i64, shrink_axis_mask = 8 : i64} : (tensor<1x14x8x1xf32>, tensor<4xi32>, tensor<4xi32>, tensor<4xi32>) -> tensor<1x14x8xf32>
  func.return %res : tensor<1x14x8xf32>
}
```

Standalone code to reproduce the issue: use the new MLIR tests to see the error.

Relevant log output: I created a pull request to fix the issue.
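For reference, the same mask semantics can be seen from Python; a small sketch of what the two MLIR functions compute (my own illustration, not from the report):

```python
import tensorflow as tf

x = tf.zeros([1, 14, 8])

# begin_mask=7 / end_mask=7 (bits 0-2) take the full extent of the first three
# dims; new_axis_mask=8 (bit 3) inserts a new size-1 axis at position 3.
y = tf.strided_slice(
    x, begin=[0, 0, 0, 0], end=[0, 0, 0, 0], strides=[1, 1, 1, 1],
    begin_mask=7, end_mask=7, new_axis_mask=8)
print(y.shape)  # (1, 14, 8, 1)

# shrink_axis_mask=8 removes that size-1 axis again (taking index begin[3]=0).
z = tf.strided_slice(
    y, begin=[0, 0, 0, 0], end=[0, 0, 0, 1], strides=[1, 1, 1, 1],
    begin_mask=7, end_mask=7, shrink_axis_mask=8)
print(z.shape)  # (1, 14, 8)
```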
tensorflowtensorflow
Could not run GPU on Jupyter
Bug
Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: 2.12.0. Custom code: Yes. OS platform / mobile device / Bazel / GCC / CUDA-cuDNN / GPU: no response. Python version: 3.10.11.

Current behaviour: I could not get TensorFlow to run on GPU; TF sees the GPU in a terminal session but not in JupyterLab. Edit: I found a way to see it in JupyterLab, but it must be repeated manually each time and is still erratic; misconfiguration seems to happen.

Standalone code to reproduce the issue:

Eventually, after hours, I found a temporary solution: set the paths each time before launching JupyterLab:

```
CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn; print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib
```

NB: if I instead persist this via the conda activation hook:

```
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn; print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
```

it somehow messes with Jupyter, because the kernel selected from Jupyter no longer corresponds to the kernel set up by the script. Now I can see the GPU in Jupyter, but it still crashes (before, on CPU, it did not) when I run this script using this library (ParametricUMAP) installed as in the description:

```python
embedder = ParametricUMAP(...)  # all the params
# now launched on GPU
with tf.device('/GPU:0'):
    embedding = embedder.fit_transform(np.array([t.ravel() for t in train_data]))
```

The code fails with the log output below. If I close the JupyterLab connection, go back to the conda environment, set CUDNN_PATH and LD_LIBRARY_PATH again, install nvcc (`conda install -c nvidia cuda-nvcc=11.3.58`), configure the XLA CUDA directory:

```
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
printf 'export XLA_FLAGS=--xla_gpu_cuda_data_dir=$CONDA_PREFIX/lib/\n' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# Copy libdevice file to the required path
mkdir -p $CONDA_PREFIX/lib/nvvm/libdevice
cp $CONDA_PREFIX/lib/libdevice.10.bc $CONDA_PREFIX/lib/nvvm/libdevice/
```

and relaunch JupyterLab, this time I get the GPU visible in JupyterLab too. Running

```python
with tf.device('/GPU:0'):
    spec_embedding = spec_embedder.fit_transform(np.array([t.ravel() for t in train_data]))
```

yields:

```
2023-06-06 21:55:04.944257: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_grad/concat/split/split_dim' with dtype int32 [[{{node gradients/split_grad/concat/split/split_dim}}]]
```

and I cannot understand why it raises warnings or errors about a CPU device when I run on GPU (I am not even sure it falls back to CPU, actually). Please advise how to sync JupyterLab and conda: I followed up with ipykernel, but it seems, after a lot of checks, that the environment variables are not correctly read (not sure if kernel.json fails to be updated properly with the path of the CUDA libraries). Please consider adding a guide for running TF on Jupyter; my situation is that I need to run from a remote cluster, and I think it is a frequent situation. Hope this feedback is useful.

Relevant log output (the "libdevice not found" warning pair repeats many times; duplicates trimmed below):
```
2023-06-06 21:40:08.385763: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_grad/concat/split/split_dim' with dtype int32 [[{{node gradients/split_grad/concat/split/split_dim}}]]
2023-06-06 21:40:10.938429: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8600
2023-06-06 21:40:11.641291: I tensorflow/compiler/xla/service/service.cc:169] XLA service 0x7f89f489b8d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-06-06 21:40:11.641337: I tensorflow/compiler/xla/service/service.cc:177]   StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
    ... (StreamExecutor devices 1-3: identical NVIDIA GeForce RTX 2080 Ti entries) ...
2023-06-06 21:40:11.646324: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabled MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2023-06-06 21:40:11.673008: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:530] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice. Searched for CUDA in the following directories: ./cuda_sdk_lib, /usr/local/cuda-11.8, /usr/local/cuda. You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions. For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.
2023-06-06 21:40:11.673259: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:274] libdevice is required by this HLO module but was not found at ./libdevice.10.bc
2023-06-06 21:40:11.673658: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:362 : INTERNAL: libdevice not found at ./libdevice.10.bc
2023-06-06 21:40:11.673685: I tensorflow/core/common_runtime/executor.cc:1197] [/job:localhost/replica:0/task:0/device:GPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: libdevice not found at ./libdevice.10.bc [[{{node StatefulPartitionedCall_16}}]]
    ... (the "libdevice is required ... / OP_REQUIRES failed at xla_ops.cc:362" pair repeats ~20 more times with later timestamps) ...

InternalError                             Traceback (most recent call last)
Cell In[33], line 31
      7 spec_embedder = ParametricUMAP(
      8     metric="euclidean",
      9     min_dist=0.1,
     ...
     27     n_training_epochs=1,
     28 )
     30 with tf.device('/GPU:0'):
---> 31     spec_embedding = spec_embedder.fit_transform(np.array([t.ravel() for t in train_data]))

File ~/miniconda3/envs/conda01/lib/python3.10/site-packages/umap/parametric_umap.py:217, in ParametricUMAP.fit_transform
    return super().fit_transform(X, y)
File ~/miniconda3/envs/conda01/lib/python3.10/site-packages/umap/umap_.py:2772, in UMAP.fit_transform
    self.fit(X, y)
File ~/miniconda3/envs/conda01/lib/python3.10/site-packages/umap/parametric_umap.py:202, in ParametricUMAP.fit
    return super().fit(X, y)
File ~/miniconda3/envs/conda01/lib/python3.10/site-packages/umap/umap_.py:2684, in UMAP.fit
    self.embedding_, aux_data = self._fit_embed_data(self._raw_data[index], self.n_epochs, init, random_state)
    # Assign any points that are fully disconnected from our manifold(s) to
    # have embedding coordinates of np.nan; these are filtered by plotting
    # functions automatically.
File ~/miniconda3/envs/conda01/lib/python3.10/site-packages/umap/parametric_umap.py:462, in ParametricUMAP._fit_embed_data
    history = self.parametric_model.fit(
        edge_dataset,
        epochs=self.loss_report_frequency * self.n_training_epochs,
        steps_per_epoch=steps_per_epoch,
        max_queue_size=100,
        validation_data=validation_data,
        **self.keras_fit_kwargs,
    )
File ~/.local/lib/python3.10/site-packages/keras/utils/traceback_utils.py:70, in error_handler
    raise e.with_traceback(filtered_tb) from None
File ~/miniconda3/envs/conda01/lib/python3.10/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, inputs, attrs, num_outputs)

InternalError: Graph execution error:
Detected at node 'StatefulPartitionedCall_16' defined at (most recent call last):
    (frames through runpy / ipykernel / tornado / asyncio / IPython omitted)
    File ".../umap/parametric_umap.py", line 1150, in train_step
        self.optimizer.apply_gradients(zip(gradients, trainable_variables))
    File ".../keras/optimizers/optimizer.py", line 1245, in apply_grad_to_update_var
        return self._update_step_xla(grad, var, id(self._var_key(var)))
Node: 'StatefulPartitionedCall_16'
libdevice not found at ./libdevice.10.bc
	 [[{{node StatefulPartitionedCall_16}}]] [Op:__inference_train_function_4496]
```
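One way to avoid re-exporting the variables per shell session is to bake them into the Jupyter kernel spec, so the kernel process starts with them regardless of which shell launched JupyterLab. A hedged sketch (the spec path and kernel name are assumptions; find yours with `jupyter kernelspec list`):

```python
import json
import os

# Hypothetical helper: patch the "env" field of an existing ipykernel spec so
# JupyterLab launches the kernel with the CUDA paths already set.
spec_path = os.path.expanduser("~/.local/share/jupyter/kernels/tf-gpu/kernel.json")
prefix = os.environ["CONDA_PREFIX"]

with open(spec_path) as f:
    spec = json.load(f)

spec.setdefault("env", {}).update({
    "LD_LIBRARY_PATH": f"{prefix}/lib",
    "XLA_FLAGS": f"--xla_gpu_cuda_data_dir={prefix}/lib",
})

with open(spec_path, "w") as f:
    json.dump(spec, f, indent=2)
```

Whether this resolves the libdevice lookup depends on the paths actually containing `nvvm/libdevice/libdevice.10.bc`, as in the workaround above.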
tensorflowtensorflow
Weird memory usage of shuffle in tf.data.Dataset
Bug
Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: No. Source: binary. TensorFlow version: tf 2.12. Custom code: No. OS platform and distribution: Ubuntu 22.04. Python version: 3.8. Mobile device / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behaviour: Hi, I've read the tf.data docs on randomly shuffling input data, which say that "using a large buffer size in dataset shuffling is not recommended", but the shuffle still costs way more memory than I would expect. For example, I expected a full shuffle of 50,000,000 ints to use only about 1 GB of memory, but the following code (after printing) uses essentially 10 GB. This leads to a practical concern: if I set buffer_size=1024 in dataset.shuffle, would the actual memory usage of the buffer be 10x that of 1024 elements?

Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

data_size = 50000000
tf_dataset = tf.data.Dataset.from_tensor_slices(np.arange(data_size))
tf_dataset = iter(tf_dataset.shuffle(data_size))
print(next(tf_dataset))
```

Relevant log output: (none)
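One plausible explanation (my reading, not confirmed in the report) is that the shuffle buffer stores one tensor object per element, so for scalar elements the fixed per-object overhead dwarfs the 8-byte payload. A common mitigation is to shuffle at batch granularity, which amortizes that overhead at the cost of only approximate shuffling; a sketch:

```python
import numpy as np
import psutil
import tensorflow as tf

def rss_gb():
    # Resident set size of this process, in GiB.
    return psutil.Process().memory_info().rss / 1024**3

data_size = 50_000_000
ds = tf.data.Dataset.from_tensor_slices(np.arange(data_size))

# Shuffle whole batches instead of scalars: the buffer now holds ~49k tensors
# of 1024 int64 values each (~400 MB payload) instead of 50M scalar tensors.
ds = ds.batch(1024).shuffle(data_size // 1024).unbatch()

it = iter(ds)
print(next(it), f"RSS: {rss_gb():.2f} GiB")
```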
tensorflowtensorflow
Error message is inconsistent with documentation in tf.nn.leaky_relu
Bug
Click to expand. Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: tf 2.12.0. Custom code: Yes. OS platform and distribution: Win11. Mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behaviour: according to the docs, the argument `features` can be float16, float32, float64, int32, int64, but the error message in the following code snippet indicates that the type of `features` can only be float16/float32/float64, without the int types.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

results = {}
try:
    features = True
    results["res"] = tf.nn.leaky_relu(features=features)
except Exception as e:
    results["err"] = "Error:" + str(e)
print(results)
# {'err': "Error:Value for attr 'T' of bool is not in the list of allowed
#  values: half, bfloat16, float, double\n\t; NodeDef: {{node LeakyRelu}};
#  Op<name=LeakyRelu; signature=features:T -> activations:T;
#  attr=alpha:float,default=0.2; attr=T:type,default=DT_FLOAT,
#  allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]>"}
```

Relevant log output: no response.
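If recent releases behave as I believe, the docs and the error describe different layers: the Python wrapper casts *integer* inputs to float32 before dispatching, so int32/int64 work even though the LeakyRelu kernel itself only registers float types; bool is neither float nor integer, so it reaches the kernel unchanged and trips the kernel's narrower allowed-values list. A quick check:

```python
import tensorflow as tf

# int32 is accepted: the Python wrapper casts integer features to float32
# before calling the LeakyRelu kernel.
print(tf.nn.leaky_relu(tf.constant([-2, 3], dtype=tf.int32)))
# tf.Tensor([-0.4  3. ], shape=(2,), dtype=float32)

# bool is not cast, so it hits the kernel's registered-types check directly.
try:
    tf.nn.leaky_relu(tf.constant([True, False]))
except Exception as e:
    print(type(e).__name__, str(e)[:80])
```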
tensorflowtensorflow
Speech recognition: no code is present
Bug
Please add the source code for Android for speech recognition; currently only the resource files are present. @synandi @tensorflow-copybara, can you please help?
tensorflowtensorflow
tf.raw_ops.GatherV2 API doc page does not properly describe the argument batch_dims
Bug
Click to expand. Issue type: Documentation Bug. Have you reproduced the bug with tf-nightly: No. Source: source. TensorFlow version: tf 2.12.0. Custom code: Yes. OS / mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behaviour: the doc page for tf.raw_ops.GatherV2 (here) does not describe how batch_dims is used by the operator. I found one Stack Overflow comment (here) that says batch_dims=n informs TF that the first n dimensions of the tensor are batch dimensions. If this is indeed correct, it should be included in the doc page; otherwise the behavior of the batch_dims parameter is difficult to understand.

Standalone code to reproduce the issue: it's a missing detail in the documentation for tf.raw_ops.GatherV2.

Relevant log output: no response.
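For illustration, the Stack Overflow description matches what tf.gather (which lowers to GatherV2) does; a small example I wrote to show the semantics:

```python
import tensorflow as tf

params = tf.constant([[10, 11, 12],
                      [20, 21, 22]])   # shape (2, 3): two batches of 3 values
indices = tf.constant([[2, 0],
                       [0, 1]])        # per-batch indices, shape (2, 2)

# batch_dims=1: the leading axis of params and indices is a shared batch axis,
# so row i of indices gathers within row i of params:
# result[i, j] = params[i, indices[i, j]].
print(tf.gather(params, indices, batch_dims=1))
# tf.Tensor([[12 10]
#            [20 21]], shape=(2, 2), dtype=int32)

# batch_dims=0 (the default): indices index into axis 0 of params as a whole.
print(tf.gather(params, indices, batch_dims=0).shape)  # (2, 2, 3)
```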
tensorflowtensorflow
Memory leak in forward pass (e.g. of ResNet50 model) with TensorFlow 2.12.0 and Python 3.11
Bug
The following minimal example reproduces the memory leak I ran into (no GPU, just CPU).

memleak.py:

```python
import numpy as np
import psutil
import tensorflow as tf

model = tf.keras.applications.ResNet50()  # VGG19 seems to not leak
# tf.config.threading.set_inter_op_parallelism_threads(0) and
# tf.config.threading.set_intra_op_parallelism_threads(0) do not help
inp = (np.random.rand(1, 224, 224, 3) * 255).astype("uint8")
for run in range(1, 9999999):
    model(inp)
    memory_usage_in_mib = psutil.Process().memory_info().rss / 1024 / 1024
    print(f"memory usage after {run} run(s) in MiB: {memory_usage_in_mib:.3f}", flush=True)
```

Dockerfile:

```dockerfile
FROM python:3.11.2
RUN pip install --no-cache-dir tensorflow==2.12.0 psutil==5.9.4
# Disable the Docker cache from this stage on, see <ADD skipcache ...>
ADD memleak.py .
RUN python memleak.py
```

Output of `docker build --rm` (excerpts; the usage grows steadily):

```
memory usage after 1 run(s) in MiB: 604.324
memory usage after 2 run(s) in MiB: 606.906
...
memory usage after 12 run(s) in MiB: 607.422
...
memory usage after 498 run(s) in MiB: 626.242
memory usage after 502 run(s) in MiB: 626.500
...
memory usage after 1996 run(s) in MiB: 683.477
memory usage after 2001 run(s) in MiB: 683.734
...
memory usage after 9996 run(s) in MiB: 960.258
memory usage after 10001 run(s) in MiB: 960.508
...
memory usage after 24997 run(s) in MiB: 1547.840
memory usage after 25002 run(s) in MiB: 1544.711
```

When using the same TensorFlow version 2.12.0 but with Python 3.10.10 instead of 3.11.2, the memory usage does not grow.
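While the leak is investigated, one workaround worth trying (a sketch under the assumption that the repeated NumPy-to-tensor wrapping in eager mode is involved; not verified to eliminate this particular leak) is to convert the input once and reuse a single traced function:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50()

# Trace the model once and feed a constant tensor, so each iteration avoids
# per-call Python-side conversions and retracing.
infer = tf.function(model)
inp = tf.constant((np.random.rand(1, 224, 224, 3) * 255).astype("uint8"))

for _ in range(100):
    out = infer(inp)
```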
tensorflowtensorflow
Delete legacy Java client from TensorFlow main repository
Bug
The TensorFlow main repository still contains the old Java client, based on TF1.x, that was replaced a few years ago by the new version maintained by SIG JVM. This is very misleading for users who want to discover the capability of running TensorFlow models in Java; just this week, a new example of such a question appeared on the forum. This issue is to start the process of deleting this client for good in the TF main repo. We could start by replacing this README to simply say that this client is deprecated and provide links to the new repo; then we can proceed to the folder deletion, making sure it won't break any code, CI jobs or external scripts (like the documentation ones). If needed, we at SIG JVM can take care of pushing a series of pull requests to achieve this goal. CC @bhack @Craigacp
tensorflowtensorflow
Computing Jacobian for tf.image.rot90 throws error: "Encountered an exception while vectorizing the 'jacobian' computation"
Bug
Click to expand. Issue type: Bug. Source: source. TensorFlow version: tf-nightly. Custom code: Yes. OS platform and distribution: Linux Ubuntu 20.04. Python version: 3.9. Mobile device / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behaviour: computing the Jacobian for tf.image.rot90 fails and throws "Encountered an exception while vectorizing the 'jacobian' computation". This only happens when I set jit_compile=True.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

image = tf.random.uniform([2, 2, 1, 1], minval=0, maxval=1, dtype=tf.float64)

@tf.function(jit_compile=True)
def rot_func(image):
    return tf.image.rot90(image, k=3)

with tf.GradientTape(persistent=True) as tape:
    tape.watch(image)
    output = rot_func(image)
print(output.shape)
gradient = tape.jacobian(output, image)
print(gradient)
```

Relevant log output:

```
(2, 1, 2, 1)
ValueError: in user code:
    ValueError: Dimension 2 in both shapes must be equal, but are 1 and 2.
    Shapes are [4,2,1,2,1] and [2,2,1,1].
    Encountered an exception while vectorizing the 'jacobian' computation.
    Vectorization can be disabled by setting experimental_use_pfor to False.
```
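The exception itself names a knob; whether it sidesteps the underlying bug is worth checking, but disabling the pfor vectorizer makes the Jacobian fall back to a while_loop. A sketch continuing the repro above:

```python
import tensorflow as tf

image = tf.random.uniform([2, 2, 1, 1], dtype=tf.float64)

@tf.function(jit_compile=True)
def rot_func(image):
    return tf.image.rot90(image, k=3)

with tf.GradientTape() as tape:
    tape.watch(image)
    output = rot_func(image)

# experimental_use_pfor=False computes the Jacobian row-by-row with a
# while_loop instead of the pfor vectorizer (slower, but no vectorization).
gradient = tape.jacobian(output, image, experimental_use_pfor=False)
print(gradient.shape)  # (2, 1, 2, 1, 2, 2, 1, 1)
```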
tensorflowtensorflow
TensorFlow customized Embedding not working on GPU
Bug
Click to expand. Issue type: Bug. Source: binary. TensorFlow version: v2.3.0-rc2-23-gb36436b087 (2.3.0). Custom code: No. OS platform and distribution: Ubuntu 17.04. Python version: 3.7. CUDA/cuDNN version: 10.1 / 7.6.5. GPU model and memory: Tesla V100, 32 G. Mobile device / Bazel: no response.

Current behaviour: I made a TensorFlow model like the following:

```python
# Imports added for completeness.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model
from tensorflow.keras.layers import Bidirectional, LSTM, MaxPool1D, Dense

class EnhancedEmbedding(tf.keras.layers.Embedding):
    def __init__(self, input_dim, output_dim, embeddings_initializer='uniform',
                 embeddings_regularizer=None, activity_regularizer=None,
                 embeddings_constraint=None, mask_zero=False, input_length=None,
                 **kwargs):
        super().__init__(input_dim, output_dim, embeddings_initializer,
                         embeddings_regularizer, activity_regularizer,
                         embeddings_constraint, mask_zero, input_length, **kwargs)
        self.five = tf.constant(0.5)
        self.zero = tf.constant(0)

    def embed(self, inputs):
        return super().call(inputs)

    def map_2(self, tokens):
        identifiers = self.embed(tokens[0])
        cur_word = self.embed(tokens[2])
        return tf.cond(
            tf.equal(tf.shape(tokens[0]), self.zero),
            lambda: tf.squeeze(cur_word),
            lambda: tf.squeeze(tf.reduce_mean(identifiers) * self.five
                               + cur_word * self.five))

    def map_1(self, inputs):
        return tf.map_fn(fn=lambda x: self.map_2(x), elems=inputs, dtype=tf.float32)

    def call(self, inputs):
        final_embedding = tf.map_fn(fn=lambda x: self.map_1(x), elems=inputs,
                                    dtype=tf.float32)
        return final_embedding


class EnhancedModel(Model):
    def __init__(self, embed_dim, hidden_dim, vocab_size, label_size, seq_len,
                 pretrained_weights):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.embed_dim = embed_dim
        self.label_size = label_size
        self.activation = tf.keras.activations.tanh
        self.num_layers = 1
        self.embed = EnhancedEmbedding(
            vocab_size, embed_dim,
            embeddings_initializer=keras.initializers.Constant(pretrained_weights))
        self.encoder = Bidirectional(LSTM(hidden_dim, return_sequences=True,
                                          input_shape=(seq_len, embed_dim)))
        self.pool = MaxPool1D(hidden_dim * 2)
        self.decoder = Dense(self.label_size)

    def call(self, inputs):
        embedding = self.embed(inputs)
        embedding = tf.reshape(embedding, [-1, 400, 32])
        lstm_out = self.encoder(embedding)
        lstm_out = tf.transpose(lstm_out, perm=[0, 2, 1])
        pool_out = self.pool(lstm_out)
        out = tf.squeeze(pool_out, [1])
        out = self.decoder(out)
        return out
```

Actually, it is a model cited from others; I only changed the embedding layer and the input. The input is a RaggedTensor with shape (batch_size, 400, 3, None), and I use the map_1 and map_2 functions in EnhancedEmbedding to generate the final embedding for the following layers. Everything works fine except that the model does not work with GPU: I looked at GPU utilization using nvidia-smi and the result is about 5%, while CPU utilization is about 100%. The GPU can be used in the virtual env (tf.test.is_gpu_available() == True).

Standalone code to reproduce the issue: the training code is here, but it cannot be reproduced since the pretrained vectors and the training data are too large.

Relevant log output: no response.
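The nested tf.map_fn/tf.cond structure forces many tiny per-element ops, which tends to keep the GPU idle regardless of placement; a first diagnostic step is to log where ops actually run. A minimal sketch (my own illustration, not from the report):

```python
import tensorflow as tf

# Print the device every op is assigned to. With the nested map_fn/cond
# pattern above, expect many small ops (and likely CPU placements for the
# ragged/control-flow pieces) rather than a few large GPU kernels.
tf.debugging.set_log_device_placement(True)

with tf.device("/GPU:0"):
    x = tf.random.uniform([4, 8])
    y = tf.reduce_mean(tf.matmul(x, x, transpose_b=True))
print(y)
```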
tensorflowtensorflow
Tried to export a function which references an 'untracked' resource Tensor
Bug
Click to expand. Issue type: Bug. Source: source. TensorFlow version: 2.8.2. Custom code: Yes. OS platform and distribution: Colab. Python version: 3.7. Mobile device / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behaviour: I am not able to export models with a custom signature that contains preprocessing layers as part of the model graph. This issue happens for different kinds of layers; in my case it was Normalization, IntegerLookup and StringLookup.

Standalone code to reproduce the issue (link to Colab):

```python
def model_fn():
    age_input = L.Input(shape=(1,), dtype=tf.float32, name='age')
    sex_input = L.Input(shape=(1,), dtype=tf.float32, name='sex')
    cp_input = L.Input(shape=(len(cp_vocab),), dtype=tf.float32, name='cp')
    thal_input = L.Input(shape=(len(thal_vocab),), dtype=tf.float32, name='thal')
    concat_inputs = L.concatenate([age_input, sex_input, cp_input, thal_input])
    x = L.Dense(32, activation='relu')(concat_inputs)
    x = L.Dropout(0.5)(x)
    output = L.Dense(1, activation='sigmoid')(x)
    return tf.keras.models.Model(
        inputs=[age_input, sex_input, cp_input, thal_input], outputs=output)

model = model_fn()

def custom_signature(model):
    input_signature = [[
        tf.TensorSpec([1], tf.int64, name='age'),
        tf.TensorSpec([1], tf.int64, name='sex'),
        tf.TensorSpec([1], tf.int64, name='cp'),
        tf.TensorSpec([1], tf.string, name='thal'),
    ]]

    @tf.function(input_signature=input_signature)
    def serve_fn(inputs):
        age_processed = age_preprocessing(inputs[0])
        sex_processed = tf.cast(sex_preprocessing(inputs[1]), dtype=tf.float32)
        cp_processed = cp_preprocessing(inputs[2])
        thal_processed = thal_preprocessing(inputs[3])
        preprocessed_inputs = {
            'age': age_processed,
            'sex': sex_processed,
            'cp': tf.expand_dims(cp_processed, 0),
            'thal': tf.expand_dims(thal_processed, 0),
        }
        probs = model(preprocessed_inputs)
        return {'probs': probs}

    return serve_fn

tf.saved_model.save(
    obj=model, export_dir=export_dir,
    signatures={'serving_default': custom_signature(model)})
```

Relevant log output:

```
AssertionError: Tried to export a function which references an 'untracked' resource Tensor("3629:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be 'tracked' by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.
Trackable Python objects referring to this tensor (from gc.get_referrers, limited to two hops):
```
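The assertion message points at the fix: the captured lookup tables must be reachable from the object being saved. A hedged workaround sketch (the ExportModule class is my own; the other names reuse the snippet above, so it is not standalone) is to export an object that owns both the preprocessing layers and the serving function:

```python
import tensorflow as tf

class ExportModule(tf.Module):
    def __init__(self, model, preprocessors):
        super().__init__()
        self.model = model
        # Assigning the layers as attributes makes their lookup tables /
        # normalization variables tracked by the SavedModel.
        self.preprocessors = preprocessors

module = ExportModule(model, [age_preprocessing, sex_preprocessing,
                              cp_preprocessing, thal_preprocessing])
tf.saved_model.save(module, export_dir,
                    signatures={'serving_default': custom_signature(model)})
```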
tensorflowtensorflow
tf.image.ssim_multiscale has a bug: as a loss function it produces NaN; the bug didn't happen before, please fix it
Bug
Click to expand. Issue type: Bug. Source: source. TensorFlow version: tf 2.9.1. Custom code: Yes. OS platform and distribution: Windows. Python version: 3.8. CUDA/cuDNN version: 11.2 / 8.1. GPU model and memory: RTX A6000. Mobile device / Bazel: no response.

Current behaviour: tf.image.ssim_multiscale has a bug: used as a loss function it produces NaN during training. The bug didn't happen in the previous TensorFlow versions; I think another person has a similar issue with the SSIM loss. Colab's TensorFlow version is 2.8.2 and it doesn't have this issue, but I really need to use TensorFlow 2.9.1 for my project. Please help me fix it, thank you.

Standalone code to reproduce the issue:

```python
msssim = tf.reduce_mean(tf.image.ssim_multiscale(real_image, generated_image, 2))
```

Relevant log output:

```
Epoch: 1/10 | 6.7856 | ETA: 1:45:39 | D_loss: nan | G_loss: nan | MS_SSIM: nan | L1: nan | L2: nan | entropy: nan | G_cost: nan
```
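A hedged debugging sketch (my own, not confirmed as the cause here): MS-SSIM downsamples the images several times, so small spatial dims or values outside [0, max_val] can push intermediate terms to NaN. Clamping the inputs and guarding the loss makes the NaN visible instead of silently poisoning training; max_val=2.0 follows the snippet above:

```python
import tensorflow as tf

def ms_ssim_loss(real, generated, max_val=2.0):
    # Keep inputs inside the range the metric assumes.
    real = tf.clip_by_value(real, 0.0, max_val)
    generated = tf.clip_by_value(generated, 0.0, max_val)
    score = tf.image.ssim_multiscale(real, generated, max_val)
    # Fail loudly at the first NaN/Inf instead of propagating it.
    score = tf.debugging.check_numerics(score, "ms-ssim produced NaN/Inf")
    return 1.0 - tf.reduce_mean(score)
```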
tensorflowtensorflow
Failed to set new cublas math mode: CUBLAS_STATUS_INVALID_VALUE
Bug
Issue type: Bug. Source: binary. TensorFlow version: v2.9.0-18-gd8ce9f9c301 (2.9.1). Custom code: No. OS platform and distribution: Linux Ubuntu 20.04.4 LTS. Mobile device / Python / Bazel / GCC / CUDA-cuDNN / GPU: no response.

Current behaviour: I have a dynamic Keras model named symbol_net. When executing forward computation (calling the `__call__` method), it sometimes crashes as below, if there's a Dense layer in the model. I have searched the internet and tried many solutions, including combinations of them, like:

```python
import tensorflow as tf  # type: ignore
from tensorflow import keras
from keras import layers  # type: ignore
from keras import backend as K

physical_devices = tf.config.list_physical_devices('GPU')
if len(physical_devices) > 0:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.333
session = tf.compat.v1.Session(config=config)
K.set_session(session)
```

but none of them work. I have a GPU with 12 GiB on a multi-user machine; when I was running the code there remained ~12000 MiB for me, so it's enough, and my model is quite small, which won't take a lot of memory:

```
2022-08-21 23:09:42.546282: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
2022-08-21 23:09:42.546307: E tensorflow/stream_executor/cuda/cuda_blas.cc:197] failed to set new cublas math mode: CUBLAS_STATUS_INVALID_VALUE
2022-08-21 23:09:42.546320: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at matmul_op_impl.h:438 : INTERNAL: Failed initializing math mode (output shape [2,2,2,2])
Traceback (most recent call last):
  File "/home/colin/code/nnsmith/nnsmith/graph_gen_2.py", line 1899, in <module>
    ic(net(*input_list))
  File ".../keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File ".../tensorflow/python/eager/execute.py", line 54, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ...)
tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calling layer "symbol_net" (type SymbolNet).
Graph execution error: Detected at node 'dense/Tensordot/MatMul' defined at (most recent call last):
  (frames through keras training.py:490 __call__, base_layer.py:1014 __call__,
   graph_gen_2.py:547/576 call, then:)
  File ".../keras/layers/core/dense.py", line 224, in call
    outputs = tf.tensordot(inputs, self.kernel, [[rank - 1], [0]])
Node: 'dense/Tensordot/MatMul' Failed initializing math mode
	 [[{{node dense/Tensordot/MatMul}}]] [Op:__inference___call___146]
Call arguments received by layer "symbol_net" (type SymbolNet):
  args=(tf.Tensor(shape=(2, 2, 2, 2), dtype=float32), tf.Tensor(shape=(1, 1, 1, 1), dtype=float32))
  kwargs={'training': None}
```

Standalone code to reproduce the issue: currently my code is large, sorry.

Relevant log output (the repeated "successful NUMA node read from SysFS had negative value (-1)" and "Created device ..." messages from device enumeration are trimmed):

```
2022-08-21 23:09:55.580410: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    ... (message repeated for each device enumeration pass) ...
2022-08-21 23:09:55.602081: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-08-21 23:09:55.916113: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4013 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3080 Ti, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-08-21 23:09:57.669085: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
2022-08-21 23:09:57.669107: E tensorflow/stream_executor/cuda/cuda_blas.cc:197] failed to set new cublas math mode: CUBLAS_STATUS_INVALID_VALUE
2022-08-21 23:09:57.669119: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at matmul_op_impl.h:438 : INTERNAL: Failed initializing math mode (output shape [1,1])
Traceback (most recent call last):
  File "/home/colin/code/nnsmith/nnsmith/graph_gen_2.py", line 1899, in <module>
    ic(net(*input_list))
  ...
tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calling layer "symbol_net" (type SymbolNet).
Graph execution error: Detected at node 'dense/MatMul' defined at (most recent call last):
  File ".../keras/layers/core/dense.py", line 221, in call
    outputs = tf.matmul(a=inputs, b=self.kernel)
Node: 'dense/MatMul' Failed initializing math mode
	 [[{{node dense/MatMul}}]] [Op:__inference___call___156]
Call arguments received by layer "symbol_net" (type SymbolNet):
  args=(tf.Tensor(shape=(2, 2, 2, 1), dtype=float32), tf.Tensor(shape=(1,), dtype=float32))
  kwargs={'training': None}
```
tensorflowtensorflow
InactiveRpcError of RPC that terminated with StatusCode.FAILED_PRECONDITION, details: "Attempting to use uninitialized value dense_2/bias"
Bug
I get the following error from my Python gRPC client when trying to predict with the TF Serving server:

```
InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.FAILED_PRECONDITION
    details = "Attempting to use uninitialized value dense_2/bias
	 [[{{node dense_2/bias/read}}]]"
    debug_error_string = "{"created":"@1654790154.977771219","description":"Error received from peer ipv4:10.1.25.14:9002","file":"src/core/lib/surface/call.cc","file_line":903,"grpc_message":"Attempting to use uninitialized value dense_2/bias\n\t [[{{node dense_2/bias/read}}]]","grpc_status":9}"
>
```

I found a description of the problem, but it is not informative for me: "Operation was rejected because the system is not in a state required for the operation's execution" (gRPC status codes).

System information: Python 3.9; tensorflow-serving-api 2.8.0; tensorflow 2.8.0.

What can be the reason for this error, and how could it be fixed?
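In my experience (a hedged interpretation, not confirmed by the report), "Attempting to use uninitialized value <var>" from TF Serving usually means the exported model contains TF1-style variables that were never initialized (or restored from a checkpoint) in the session used for export. A sketch of an export path that avoids it; `model_dir`, `x`, and `y` are placeholders for the real export code:

```python
import tensorflow.compat.v1 as tf1

with tf1.Session() as sess:
    # Initialize (or saver.restore(sess, ckpt)) before exporting, so the
    # SavedModel's variables have values TF Serving can load.
    sess.run(tf1.global_variables_initializer())
    tf1.saved_model.simple_save(sess, model_dir,
                                inputs={"x": x}, outputs={"y": y})
```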
tensorflowtensorflow
tf.gather_nd and tf.gather have inconsistent type checking for batch_dims
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution: Ubuntu 18.04. Mobile device: N/A. TensorFlow installed from: binary. TensorFlow version: 2.8.0. Python version: 3.8. Bazel / GCC version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

params = tf.random.uniform([3, 1, 12, 64], dtype=tf.float32)
indices = tf.random.uniform([35, 2], minval=0, maxval=1, dtype=tf.int64)
batch_dims = False

tf.gather_nd(params, indices, batch_dims=batch_dims)  # passes
tf.gather(params, indices, batch_dims=batch_dims)     # InvalidArgumentError
```

Detailed error message:

```
InvalidArgumentError: Value for attr 'Taxis' of bool is not in the list of allowed values: int32, int64
; NodeDef: {{node GatherV2}}; Op<name=GatherV2; signature=params:Tparams, indices:Tindices, axis:Taxis -> output:Tparams; attr=batch_dims:int,default=0; attr=Tparams:type; attr=Tindices:type,allowed=[DT_INT32, DT_INT64]; attr=Taxis:type,allowed=[DT_INT32, DT_INT64]> [Op:GatherV2]
```

Describe the current behavior: in the above code, batch_dims is a bool, not an int. tf.gather complains about this type mismatch and throws InvalidArgumentError; however, tf.gather_nd does an implicit conversion and converts False to 0. There is an inconsistency in the type checking.

Describe the expected behavior: either allow implicit bool-to-int conversion in all cases, or throw an error in all cases.
tensorflowtensorflow
InvalidArgumentError: Node 'while_grad' connected to invalid output 4 of source node 'while', which has 4 outputs
Bug
System information: Have I written custom code: No. OS platform and distribution: Colab. Mobile device: N/A. TensorFlow installed from: Colab. TensorFlow version: 2.8.0 (Colab). Python version: 3.7.12. Bazel / GCC / CUDA-cuDNN / GPU: N/A.

Describe the current behavior: I get the exception "Node 'gradients/while_grad/while_grad' connected to invalid output 4 of source node 'while', which has 4 outputs. Try using tf.compat.v1.experimental.output_all_intermediates(True)." when executing a no_op without any control dependencies, but I also get the same exception when executing my real code. I'm not sure if these are two separate issues or the same one.

Describe the expected behavior: a no_op without control dependencies should not depend on anything, so I would never expect such an exception; for my real code, I also would not expect such an exception.

Standalone code to reproduce the issue (to be executed with eager mode disabled):

```python
v = tf.Variable(1.)

def cond(i, x):
    return tf.less(i, 10)

def body(i, x):
    return i + 1, x + 1

j, y = tf.while_loop(cond, body, [0., v])
loss = tf.reduce_sum(y ** 2)

session.run(tf.compat.v1.global_variables_initializer())

opt = tf.compat.v1.train.GradientDescentOptimizer(0.1)
opt_op = opt.minimize(loss)

no_op = tf.no_op()
session.run(no_op)  # here it crashes already

print(session.run((j, y, opt_op)))
```

(Colab link)

Other info / logs: when executing locally, I additionally see this log output:

```
2022-02-19 23:07:57.636413: W tensorflow/c/c_api.cc:349] Operation '{name:'while' id:11 op device:{requested: '', assigned: ''} def:{{{node while}} = StatelessWhile[T=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_VARIANT], _lower_using_switch_merge=true, _num_original_outputs=5, _read_only_resource_inputs=[], _stateful_parallelism=false, body=while_body_8[], cond=while_cond_7[], output_shapes=[...], parallel_iterations=10](while/loop_counter, while/maximum_iterations, Const, ReadVariableOp, gradients/while_grad/Placeholder_1:0, accumulator:0)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.

Traceback (most recent call last):
  File ".../tensorflow/python/client/session.py", line 1380, in _do_call
    return fn(*args)
  File ".../tensorflow/python/client/session.py", line 1362, in _run_fn
    self._extend_graph()
  File ".../tensorflow/python/client/session.py", line 1403, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Node 'gradients/while_grad/while_grad': Connecting to invalid output 4 of source node while which has 4 outputs. Try using tf.compat.v1.experimental.output_all_intermediates(True).
```

(The same InvalidArgumentError is then re-raised from session.run(no_op) during handling of the above exception.)

My hypothesis is that the first session call, session.run(tf.compat.v1.global_variables_initializer()), somehow freezes the while-loop function graph (which is strange, though, as it does not depend on it), and then the further code which adds the gradients causes this warning ("Operation ... StatelessWhile ... was changed by setting attribute after it was run by a session"). I don't really understand why the first session call does that, even though it is independent from the loop, and I don't really understand why it causes the error for the no_op execution. Is there any workaround?
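The error message itself names a workaround; a sketch of the repro with the suggested flag applied before building the loop (whether it fully resolves the reporter's real code is untested here):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
# Keep all intermediate outputs of v2 control flow so the TF1-style gradient
# code can connect to them; must be set before the while_loop is built.
tf.compat.v1.experimental.output_all_intermediates(True)

v = tf.Variable(1.)
j, y = tf.while_loop(lambda i, x: tf.less(i, 10),
                     lambda i, x: (i + 1, x + 1), [0., v])
loss = tf.reduce_sum(y ** 2)
opt_op = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.compat.v1.Session() as session:
    session.run(tf.compat.v1.global_variables_initializer())
    session.run(tf.no_op())
    print(session.run((j, y, opt_op)))
```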
tensorflowtensorflow
document whether the GPU execution of concrete functions is asynchronous
Bug
URL(s) with the issue: the performance documentation ("see the speed-up" section).

Description of issue (what needs changing): the comment in the example code in the performance documentation states: "tf.matmul can return before completing the matrix multiplication (e.g., can return after enqueueing the operation on a CUDA stream). The x.numpy() call below will ensure that all enqueued operations have completed." In contrast, the benchmark script in the "see the speed-up" section does not contain the .numpy() call:

def power(x, y):
    result = tf.eye(10, dtype=tf.dtypes.int32)
    for _ in range(y):
        result = tf.matmul(x, result)
    return result

print("Eager execution:", timeit.timeit(lambda: power(x, 100), number=1000))

power_as_graph = tf.function(power)
print("Graph execution:", timeit.timeit(lambda: power_as_graph(x, 100), number=1000))

Is the second example correct? Don't we need a .numpy() call to ensure that the computation has completed?

Improve the documentation of graph function execution: while reading through the guides ("graph execution" vs. "eager execution") I did not find any reference to the possibly asynchronous nature of function execution. Can we improve the guides by stating that a graph function can return before completing the computation (e.g., can return after enqueueing the operations on a CUDA stream)? If this is not the case, can we state that explicitly?

async_scope: the description at the linked page mentions that function calls inside the scope can return before finishing the actual execution, but looking into the source code indicates that the scope controls whether remote execution is synchronous. What is the status of a function executed on a local GPU — is it affected by this scope? Could you expand the docstring with this information?
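To illustrate the concern, a small timing sketch (my own, not from the docs) comparing the two measurement styles; if execution is asynchronous, the first number could undercount the real work:

import timeit
import tensorflow as tf

x = tf.random.uniform(shape=[10, 10], minval=-1, maxval=2, dtype=tf.dtypes.int32)

@tf.function
def power_as_graph(x, y):
    result = tf.eye(10, dtype=tf.dtypes.int32)
    for _ in range(y):
        result = tf.matmul(x, result)
    return result

# May measure only the time needed to enqueue the work on the device stream.
print(timeit.timeit(lambda: power_as_graph(x, 100), number=1000))
# .numpy() copies the result to the host, which forces the computation
# to finish before the timer stops.
print(timeit.timeit(lambda: power_as_graph(x, 100).numpy(), number=1000))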
tensorflowtensorflow
MKL fused batch norm op tests fail
Bug
system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: RHEL 7; TensorFlow installed from: source; TensorFlow version: 2.5.0 / 2.6.0; Python version: 3.8; Bazel version (if compiling from source): 3.7.2; GCC/compiler version (if compiling from source): 10.3

describe the current behavior: according to [link], we build without --config=mkl, and when running the tests of the build, the test //tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test fails:

[----------] 5 tests from 1 test suite ran. (197 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 5 tests, listed below:
[  FAILED  ] FusedBatchNormOpTest/0.Training, where TypeParam = float
[  FAILED  ] FusedBatchNormOpTest/0.TrainingRunningMean, where TypeParam = float
[  FAILED  ] FusedBatchNormOpTest/0.Inference, where TypeParam = float
[  FAILED  ] FusedBatchNormOpTest/0.InferenceIgnoreAvgFactor, where TypeParam = float
[  FAILED  ] FusedBatchNormOpTest/0.FusedBatchNormGradV3, where TypeParam = float

The last one (FusedBatchNormGradV3) seemingly succeeds on TF 2.6, while the other 4 fail on both 2.5 and 2.6. The tests succeed when using --config=mkl and on other systems. It seems the AMD EPYC CPUs are affected — another Intel node works fine — so that might be related, although the Intel CPUs are a bit older (Broadwell) and we use -march=native.

other info / logs: test logs: [test.log] [test.log]

Command used:

CC_OPT_FLAGS="-O3 -march=native -fno-math-errno -fPIC" bazel test --config=noaws --config=nogcp --config=nohdfs --compilation_mode=opt --config=opt --subcommands --verbose_failures --jobs=64 --copt=-fPIC --action_env=PYTHONPATH --action_env=EBPYTHONPREFIXES --action_env=PYTHONNOUSERSITE=1 --distinct_host_configuration=false --test_output=errors --build_tests_only --local_test_jobs=64 --test_env=CUDA_VISIBLE_DEVICES=-1 --test_timeout=3600 //tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test
tensorflowtensorflow
constant folding ignores epsilon
Bug
system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from: binary (pip); TensorFlow version: v2.6.0-rc2-32-g919f693420e 2.6.0 (CPU version); Python version: 3.6.9

When running the script, I'm getting:

epsilon=1e-09, dtype=float32
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [0. 0. -15.942385 -inf]

epsilon=1e-07, dtype=float32
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [1.1920928e-07 1.1920928e-07 -1.5249238e+01 -1.5942385e+01]

epsilon=1e-09, dtype=float64
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [1.00000008e-09 0.00000000e+00 -1.61081453e+01 -2.07232658e+01]

epsilon=1e-07, dtype=float64
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [9.99999951e-08 9.89999951e-08 -1.54249485e+01 -1.61180957e+01]

add(sub(1, logits), epsilon) is simplified by grappler constant folding to sub(1 + epsilon, logits). The user doesn't expect inf, because epsilon is used exactly for this purpose. It's expected to see:

epsilon=1e-09, dtype=float32
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [0. 0. -15.9340315 -20.723267]

epsilon=1e-07, dtype=float32
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [1.1920928e-07 1.1920928e-07 -1.5333239e+01 -1.6118095e+01]

epsilon=1e-09, dtype=float64
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [1.00000008e-09 0.00000000e+00 -1.61081453e+01 -2.07232658e+01]

epsilon=1e-07, dtype=float64
in:  [0.000000e+00 1.000000e-09 9.999999e-01 1.000000e+00]
out: [9.99999951e-08 9.89999951e-08 -1.54249485e+01 -1.61180957e+01]

[image]

As we can see, it's impossible to represent 1 + 1e-9 in float32: it's rounded to 1.0, which causes inf, which very likely will break the training with NaNs. This case was extracted from a model using hard negative mining; the input comes from a sigmoid.

Contributing: do you want to contribute a PR? (yes/no): yes. Briefly describe your candidate solution if contributing: constant folding should not simplify a + b into a when b != 0.

standalone code to reproduce the issue: does it make sense to prepare a patch resolving this issue?
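A minimal numerical sketch of the effect (my own reduction; the original script is not attached): in float32 the folded constant 1 + 1e-9 rounds back to exactly 1.0, so the rewritten expression reaches log(0):

import tensorflow as tf

eps = tf.constant(1e-9, dtype=tf.float32)
logits = tf.constant([0.0, 1e-9, 0.9999999, 1.0], dtype=tf.float32)

# As written: log((1 - logits) + eps); the argument never reaches zero.
print(tf.math.log((1.0 - logits) + eps).numpy())
# After folding: log((1 + eps) - logits); float32(1 + 1e-9) == 1.0,
# so the last element becomes log(0) = -inf.
print(tf.math.log((1.0 + eps) - logits).numpy())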
tensorflowtensorflow
the word vectors obtained from the word2vec tutorial are very bad
Bug
URL(s) with the issue: [the word2vec tutorial]

Description of issue (what needs changing): I ran the word2vec code from the tutorial without changes, but the word vectors from the model are very bad — they do not reflect the semantic meaning at all. Here are the results shown in the Embedding Projector: [screenshot: selection_002]
tensorflowtensorflow
file systems are not working on Windows through the tf.io.gfile.GFile interface
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Windows; mobile device: n/a; TensorFlow installed from (source or binary): binary, 2.5.0; TensorFlow version: 2.5.0; Python version: 3.8; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

describe the current behavior: it looks like on Windows, file systems (e.g. GCS) only work through tf.io.read_file, but do not work through tf.io.gfile.GFile(...).read(). On Windows, check the following command:

python3 -c "import tensorflow as tf; print(tf.version.VERSION); tf.io.read_file('gs://1234567')"

It throws an error of:

tensorflow.python.framework.errors_impl.InvalidArgumentError: GCS path doesn't contain an object name: 'gs://1234567' [Op:ReadFile]

This indicates the GCS file system is at least available. On the other hand, the following command:

python3 -c "import tensorflow as tf; print(tf.version.VERSION); tf.io.gfile.GFile('gs://1234567').read()"

throws an error of:

tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme 'gs' not implemented (file: 'gs://1234567')

which indicates the GCS file system is not even registered.

describe the expected behavior / contributing: do you want to contribute a PR? (yes/no): yes

standalone code to reproduce the issue: see the description above. The issue is due to the duplication of Env::Default in multiple places; it is exposed on Windows due to --config=monolithic being passed to bazel, I think. I have identified the places that cause the issue and will submit a PR soon.

cc @mihaimaruseac @vnvo2409 @kvignesh1420 @terrytangyuan @burgerkingeater
tensorflowtensorflow
negative sampling in word2vec tutorial
Bug
URL(s) with the issue: [word2vec tutorial] — "negative sampling", "negative sampling for one skip-gram", and "generate training data" sections.

Description of issue (what needs changing): I was going through the tutorial on skip-gram word2vec, and I noticed that positive-sample candidates are also negative-sample candidates.

Clearer description: for example, take the sentence "the wide road shimmered in the hot sun" and window_size=2 for tf.keras.preprocessing.sequence.skipgrams. The positive skip-grams for "road" therefore include (road, the), (road, wide), (road, shimmered), (road, in). I guess "the", "wide", "shimmered", "in" should not later be labeled as negative skip-grams for "road", right?

P.S. I'm a newbie.
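To make the example concrete, a small sketch using the tutorial's sentence (my own illustration; negative_samples=1.0 is an assumption — the tutorial draws its negatives separately):

import tensorflow as tf

sentence = "the wide road shimmered in the hot sun".split()
vocab = {w: i for i, w in enumerate(sorted(set(sentence)), start=1)}
sequence = [vocab[w] for w in sentence]

# Positive pairs come from the +/-2 window; negatives are drawn uniformly
# from the whole vocabulary, so window words can reappear as "negatives".
pairs, labels = tf.keras.preprocessing.sequence.skipgrams(
    sequence,
    vocabulary_size=len(vocab) + 1,
    window_size=2,
    negative_samples=1.0)

inv = {i: w for w, i in vocab.items()}
for (target, context), label in zip(pairs, labels):
    print(inv[target], inv[context], label)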
tensorflowtensorflow
exception when saving a custom RNN model with a constant in the call function when using SavedModel format
Bug
system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Linux RedHat 7.8.2; mobile device: n/a; TensorFlow installed from: binary; TensorFlow version: 2.4.1 and 2.3.1 tested; Python version: 3.6.8; Bazel version (if compiling from source): no; GCC/compiler version (if compiling from source): no; CUDA/cuDNN version: no; GPU model and memory: no

current behavior: when saving, in SavedModel format, an RNN model with a constant in the call function, like shown below, we get an exception.

desired behaviour: we should be able to save the model defined below in SavedModel format, just like we can save it in h5 format.

code to reproduce the issue:

import tensorflow as tf
import tensorflow.keras as tfk
import tensorflow.keras.backend as K
import numpy as np

class MinimalRNNCell(tfk.layers.Layer):

    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform',
                                      name='kernel')
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer='uniform',
            name='recurrent_kernel')
        self.built = True

    def call(self, inputs, states=None, constants=None, training=False, *args, **kwargs):
        prev_output = states[0]
        print('constants', constants)
        h = K.dot(inputs, self.kernel) + constants[0]
        output = h + K.dot(prev_output, self.recurrent_kernel)
        return output, [output]

    def get_config(self):
        return dict(**super().get_config(), units=self.units)

cell = MinimalRNNCell(32)
x = tfk.Input((None, 5), name='x')
z = tfk.Input((1,), name='z')
layer = tfk.layers.RNN(cell, name='rnn')
y = layer(x, constants=z)
model = tfk.Model(inputs=[x, z], outputs=y)
model.compile(optimizer='adam', loss='mse')

model.save('tmp.h5')  # this works OK
model_loaded = tfk.models.load_model('tmp.h5',
                                     custom_objects={'MinimalRNNCell': MinimalRNNCell})
print(model_loaded.predict((np.array([[[0, 0, 0, 0, 0]]]), np.array([[0]]))))  # this works OK

model.save('tmp2')  # this throws an exception

other info / logs: the stdout from the above is:

constants ... (printed several times while tracing)
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
constants ... (printed again)

I'll attach the full exception here; anyone should be able to reproduce. However, the main error is:

ValueError: Dimensions must be equal, but are 32 and 5 for '{{node add}} = AddV2[T=DT_FLOAT](MatMul, constants)' with input shapes: [?,32], [?,?,5].

For some reason the name of z changes to 'constants', and the shape changes from the expected (None, 1) to (None, None, 5). Any ideas appreciated. Thanks in advance.
tensorflowtensorflow
quantization-aware training model has weird inference behavior
Bug
Hi, I have a pretrained detection model that I trained in TensorFlow 2.3 with fp32 precision. I used this model's weights as the initial pretrained weights for quantization-aware training (QAT). During training I could see that training converged and gave logical predictions (I visualized results during training). When trying to load the quantization-aware-training weights with TensorFlow for inference, the model behaves differently: when applying in inference the same preprocessing (normalization of mean/std), the prediction probability is always around 0.25. Also, all the predicted bounding boxes in inference are really small, around 15 pixels in height/width, while during training I could see predictions with logical bounding boxes of varied dimensions and with much higher probability. This behavior appears even on images that were used for training, so it's not related to an overfitting issue.

Technical constraints of my implementation:

1. I use TensorFlow's tf.quantization.quantize_and_dequantize in specific places in my model, after the activation function, which appears in the following pattern: Conv2D -> BatchNormalization -> Activation. I attach here an example of such a pattern:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, ZeroPadding2D, LeakyReLU, ReLU, BatchNormalization
from tensorflow.keras.regularizers import l2

def myconv(x, filters, kernel_size, strides=1, batch_norm=True, is_quantized=False):
    if strides == 1:
        padding = 'same'
    else:
        x = ZeroPadding2D(((1, 0), (1, 0)))(x)  # top-left half padding
        padding = 'valid'
    x = Conv2D(filters=filters, kernel_size=kernel_size, strides=strides,
               padding=padding, use_bias=not batch_norm,
               kernel_regularizer=l2(0.0005))(x)
    if batch_norm:
        x = BatchNormalization()(x)
    if is_quantized:
        # We'll use the leaky version of ReLU6 for stability of low-precision
        # computation. We define the leaky ReLU using ReLU for the conversion,
        # constrained to TensorRT later.
        x = ReLU(negative_slope=0.1, max_value=6)(x)
    else:
        x = LeakyReLU(alpha=0.1)(x)
    if is_quantized:
        # Use the TF syntax of quantization; apply quantization only on the conv op,
        # after the activation.
        x = tf.quantization.quantize_and_dequantize(x, input_min=-64, input_max=64,
                                                    range_given=False)
        # input_min/input_max should be ignored due to range_given=False.
    return x

This function is called when building the model, both for training and inference. The reason I place the quantize_and_dequantize op specifically after the activation function is that this model should be converted to TensorRT later (see their QAT guideline on quantized training); note that this is the place to apply quantization ops for QAT, due to the layer fusion TensorRT applies in the conversion and optimization process. I also tried applying QAT using the TensorFlow Model Optimization toolkit, as described here, but this resulted in a model that includes nodes unsupported by TensorRT, such as ConvInteger. A TF model that uses quantize_and_dequantize produces a model that is possible to convert to TRT, but it has the issue I described above, due to the irregular behavior in TF during inference. Here is an example of the quant ops in Netron for the inference model: [qat structure example]. This is a TensorFlow-related issue, since the issue appears during inference in TensorFlow with the QAT weights.

2. I create the model for inference using the following commands:

tf.keras.backend.set_learning_phase(1)
self.model = self.build_model()

where build_model internally also calls the function I attached here, which contains the quantization op after every activation function. Is there any additional command that needs to be applied in inference when using tf.quantization.quantize_and_dequantize? I remember in TF 1.x there was tf.contrib.quantize.create_eval_graph(), but I didn't see anything similar to it in TF 2.3. I would prefer a solution that lets me still use tf.quantization.quantize_and_dequantize, due to the TensorRT constraints I described, since I can't use the Model Optimization toolkit's quantize_model.

system information: OS platform and distribution: Linux Ubuntu 18.04; TensorFlow version: 2.3.0; Python version: 3.6
tensorflowtensorflow
crash when using tf.nn.local_response_normalization across multiple GPUs
Bug
Models using tf.nn.local_response_normalization train OK but crash upon evaluation when parallelized over multiple GPUs. The same model does not crash when run on a single GPU. Replacing tf.nn.local_response_normalization with keras.layers.BatchNormalization, the model trains OK and evaluates OK on single or multiple GPUs. So it seems to me the problem is with tf.nn.local_response_normalization over multiple GPUs. The number of GPUs is allocated via SLURM, e.g.:

#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --gres=gpu:2   (here I allocate 2 GPUs)
#SBATCH --mem=16G

system information: Red Hat Enterprise Linux Server 7.6 (Maipo); SLURM 19.05.4; TensorFlow 2.4.1 installed via conda; Python 3.7.10; CUDA compilation tools release 10.1, V10.1.168; multiple NVIDIA Tesla V100 with 32 GB RAM.

describe the current behavior: when training on two GPUs, the code below trains OK but crashes when evaluating and dumps core:

50/51 [...] - ETA: 0s - loss: 1.6090 - accuracy: 0.2106
2021-03-24 17:11:41.834444: F tensorflow/stream_executor/cuda/cuda_dnn.cc:535] Check failed: cudnnSetTensorNdDescriptor(handle_.get(), elem_type, nd, dims.data(), strides.data()) == CUDNN_STATUS_SUCCESS (3 vs. 0) batch_descriptor: {count: 0 feature_map_count: 1 spatial: 224 224 value_min: 0.000000 value_max: 0.000000 layout: BatchYXDepth}
Aborted (core dumped)

If I train on only one GPU, the code below trains OK and evaluates OK.

describe the expected behavior: trains and evaluates OK on two GPUs.

standalone code to reproduce the issue:

def create_model():
    inputs = keras.Input(shape=(224, 224, 3))
    x = tf.cast(inputs, tf.float32)
    x = keras.layers.Conv2D(1, (2, 2), strides=(1, 1), padding='same')(x)
    x = keras.layers.Lambda(tf.nn.local_response_normalization)(x)
    # if I use x = keras.layers.BatchNormalization()(x) then OK
    x = keras.layers.Activation('relu')(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(5, activation='softmax')(x)
    return keras.Model(inputs=inputs, outputs=x, name='toy')

if __name__ == '__main__':
    training_data, validation_data, testing_data = load_img_dataset('path_to_data', (224, 224))
    # these are tensorflow.python.data.ops.dataset_ops.BatchDataset
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = create_model()
        optimizer = tf.keras.optimizers.Adam()
        metrics = ['accuracy']
        model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=metrics)
    history = model.fit(training_data, epochs=1, validation_data=validation_data)
    loss, accuracy = model.evaluate(testing_data)  # crashes here
tensorflowtensorflow
accuracy of ResNet50/VGG16 after tfmot.quantization.keras.quantize_model is really low
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Linux Ubuntu 16.04; mobile device: n/a; TensorFlow installed from (source or binary): binary; TensorFlow version: v2.4; Python version: 3.8; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

describe the current behavior: the accuracy of ResNet50/VGG16 after tfmot.quantization.keras.quantize_model is really low. I tested them with the ImageNet2012 subset (validation portion). For ResNet50, the pretrained model imported from keras.applications has top-1 accuracy of 70% before quantization and 0.1% after quantization.

describe the expected behavior: I expected the accuracy to drop only slightly.

standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to a Colab/Jupyter/any notebook.

other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
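The report includes no repro, so here is a minimal sketch of the presumed workflow (my own reconstruction; the dataset loading and evaluation details are hypothetical placeholders):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

base = tf.keras.applications.ResNet50(weights='imagenet')
q_model = tfmot.quantization.keras.quantize_model(base)
q_model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

# val_ds: an ImageNet validation tf.data.Dataset preprocessed with
# tf.keras.applications.resnet50.preprocess_input (loading omitted here).
# q_model.evaluate(val_ds)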
tensorflowtensorflow
quantization-aware training does not seem to perform per-channel quantization with AllValuesQuantizer
Bug
system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code; OS platform and distribution: Windows 10; TensorFlow installed from: Anaconda Navigator; TensorFlow version: tensorflow-gpu 2.3.0; Python version: 3.7.9; CUDA/cuDNN version: cudatoolkit 10.1.243, cuDNN 7.6.5; GPU model and memory: GTX 1070, 8 GB

describe the current behavior: I have quantized two CNN models with QAT (AllValuesQuantizer), one with per-tensor and one with per-axis (per-channel) quantization. When saving the models in h5 format and inspecting them in Netron, I note that each QuantizeWrapper layer's parameters both have scalar kernel_min and kernel_max.

describe the expected behavior: as I have understood from this paper, the min/max values of the kernel are what define the scale and zero-point quantization parameters. For per-tensor quantization it is reasonable that the model only has a single min and max value, as the whole tensor has the same scale and zero point. However, for per-channel quantization, where each channel has its own scale and zero point, I believe that kernel_min and kernel_max should be vectors. Why aren't they?

Here is one of my per-tensor models (AllValues): [image]. Here is one of my per-axis models (AllValues): [image].

In this GitHub issue, someone mentions that QAT automatically uses per-tensor quantization as of March 2020, but that this is subject to change. To me it looks like QAT (at least AllValuesQuantizer) still only uses per-tensor quantization. If that's the case, why is there a parameter I can set to enable per-tensor quantization (see AllValuesQuantizer's per_axis boolean)? I also noted in the source code for the AllValuesQuantizer (L291-L366) that self.per_axis is never passed to the next function, so what is that variable even used for? So, does QAT even perform per-channel quantization? It doesn't seem like it to me. How can I use per-channel quantization with the AllValuesQuantizer?

standalone code to reproduce (my model):

import tensorflow as tf
from tensorflow import keras
import tensorflow_model_optimization as tfmot
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

# possible quantization-aware quantizers
qat_all_values = tfmot.quantization.keras.quantizers.AllValuesQuantizer
qat_last_value = tfmot.quantization.keras.quantizers.LastValueQuantizer
qat_ma = tfmot.quantization.keras.quantizers.MovingAverageQuantizer


def quantization_aware_training(model, save, w_bits, a_bits, symmetric, per_axis,
                                narrow_range, quantizer, batch_size=64, epochs=2):
    # create quantized model's name string
    name = model.name
    name = name + '_' + str(w_bits) + 'wbit_' + str(a_bits) + 'abit'
    if symmetric:
        name = name + '_sym'
    else:
        name = name + '_asym'
    if narrow_range:
        name = name + '_narr'
    else:
        name = name + '_full'
    if per_axis:
        name = name + '_perch'
    else:
        name = name + '_perten'
    if quantizer == qat_all_values:
        name = name + '_av'
    elif quantizer == qat_last_value:
        name = name + '_lv'
    elif quantizer == qat_ma:
        name = name + '_ma'

    # quantization
    quantize_apply = tfmot.quantization.keras.quantize_apply
    quantize_model = tfmot.quantization.keras.quantize_model
    quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
    clone_model = tf.keras.models.clone_model
    quantize_scope = tfmot.quantization.keras.quantize_scope
    supported_layers = [tf.keras.layers.Conv2D]

    class Quantizer(tfmot.quantization.keras.QuantizeConfig):
        # Configure how to quantize weights.
        def get_weights_and_quantizers(self, layer):
            return [(layer.kernel, tfmot.quantization.keras.quantizers.LastValueQuantizer(
                num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]

        # Configure how to quantize activations.
        def get_activations_and_quantizers(self, layer):
            return [(layer.activation, tfmot.quantization.keras.quantizers.MovingAverageQuantizer(
                num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]

        def set_quantize_weights(self, layer, quantize_weights):
            # Add this line for each item returned in get_weights_and_quantizers,
            # in the same order.
            layer.kernel = quantize_weights[0]

        def set_quantize_activations(self, layer, quantize_activations):
            # Add this line for each item returned in get_activations_and_quantizers,
            # in the same order.
            layer.activation = quantize_activations[0]

        # Configure how to quantize outputs (may be equivalent to activations).
        def get_output_quantizers(self, layer):
            return []

        def get_config(self):
            return {}

    class ConvQuantizer(Quantizer):
        # Configure weights to quantize with 4 bits instead of 8 bits.
        def get_weights_and_quantizers(self, layer):
            return [(layer.kernel, quantizer(num_bits=w_bits, symmetric=symmetric,
                                             narrow_range=narrow_range, per_axis=per_axis))]

        # Configure how to quantize activations.
        def get_activations_and_quantizers(self, layer):
            return [(layer.activation, tfmot.quantization.keras.quantizers.MovingAverageQuantizer(
                num_bits=a_bits, symmetric=False, narrow_range=False, per_axis=False))]

    class DepthwiseQuantizer(Quantizer):
        # Configure weights to quantize with 4 bits instead of 8 bits.
        def get_weights_and_quantizers(self, layer):
            return [(layer.depthwise_kernel, quantizer(num_bits=w_bits, symmetric=symmetric,
                                                       narrow_range=narrow_range, per_axis=per_axis))]

        # Configure how to quantize activations.
        def get_activations_and_quantizers(self, layer):
            return [(layer.activation, tfmot.quantization.keras.quantizers.MovingAverageQuantizer(
                num_bits=a_bits, symmetric=False, narrow_range=False, per_axis=False))]

    # Instead of simply using quantize_annotate_model or quantize_model we must use
    # quantize_annotate_layer, since it's the only one with a quantize_config argument.
    def quantize_all_layers(layer):
        if isinstance(layer, tf.keras.layers.DepthwiseConv2D):
            return quantize_annotate_layer(layer, quantize_config=DepthwiseQuantizer())
        elif isinstance(layer, tf.keras.layers.Conv2D):
            return quantize_annotate_layer(layer, quantize_config=ConvQuantizer())
        return layer

    annotated_model = clone_model(model, clone_function=quantize_all_layers)

    with quantize_scope({'Quantizer': Quantizer,
                         'ConvQuantizer': ConvQuantizer,
                         'DepthwiseQuantizer': DepthwiseQuantizer}):
        q_aware_model = quantize_apply(annotated_model)

    # Compile and train model.
    optimizer = keras.optimizers.Adam(learning_rate=0.001)
    q_aware_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                          optimizer=optimizer,
                          metrics=['sparse_categorical_accuracy'])
    (train_images, train_labels), _ = keras.datasets.cifar10.load_data()
    q_aware_model.fit(train_images, train_labels, batch_size=batch_size,
                      epochs=epochs, verbose=1, validation_split=0.1)
    if save:
        save_path = 'models/temp/' + name
        q_aware_model.save(save_path + '.h5')
    return q_aware_model


def temp_net(dropout=0.1):
    model = keras.Sequential()
    model.add(keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=(32, 32, 3)))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation('relu'))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(10, activation='softmax'))
    model._name = 'temp_net'
    return model


if __name__ == '__main__':
    q_model = quantization_aware_training(model=temp_net(), save=True, w_bits=8, a_bits=8,
                                          symmetric=False, narrow_range=False, per_axis=False,
                                          quantizer=qat_all_values, batch_size=64, epochs=1)
tensorflowtensorflow
integer quantization converts bias of 32-bit float type in Conv2D to 8-bit int type in TFLite
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): —; OS platform and distribution: Ubuntu 16.04; mobile device: RPi 3B; TensorFlow installed from: source; TensorFlow version: 2.3.1; Python version: 3.6; Bazel version (if compiling from source): —; GCC/compiler version (if compiling from source): 5.4.0; CUDA/cuDNN version: —; GPU model and memory: —

describe the current behavior: there is an error when using the network. The network is a kind of ResNet18, and int8 quantization was applied to it using TFLite, but the bias of one of the Conv2D ops has int8 type while the other biases have int32 type. Inference with this model then does not work, as shown below: [image]

describe the expected behavior: —

standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to a Colab/Jupyter/any notebook. I share with you the Keras and TFLite versions of ResNet18: benchmark_model.zip

other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
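The exact conversion script isn't shown, so here is a minimal sketch of a full-integer post-training quantization path under stated assumptions (the model path and calibration inputs below are hypothetical):

import tensorflow as tf

# hypothetical path to the attached Keras ResNet18
model = tf.keras.models.load_model('resnet18.h5')

def representative_dataset():
    for _ in range(100):
        # hypothetical calibration inputs shaped like the model input
        yield [tf.random.uniform((1, 224, 224, 3))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
# For Conv2D, full-integer conversion is expected to produce int8 weights
# and int32 biases; an int8 bias, as reported above, breaks inference.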
tensorflowtensorflow
missing GPU op for zeros_like for RaggedTensorVariant; error occurs when ragged tensors are fed through tf.map_fn
Bug
system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, included below; OS platform and distribution: Ubuntu 20.10; mobile device: n/a; TensorFlow installed from: pip (binary); TensorFlow version: v1.12.1-49539-g18d8bcbe72b 2.5.0-dev20210123; Python version: 3.8.6 | packaged by conda-forge | (default, Nov 27 2020, 19:31:52) [GCC 9.3.0]; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: 11.0/8; GPU model and memory: TITAN X (Pascal), computeCapability 6.1

describe the current behavior: I have a Keras layer, RescaleB, that accepts a ragged tensor with shape (batch, time, in_dim). The layer calls map_fn to process each example in the batch separately, scaling the values along the inner dimension by a trainable gain vector. The details of the operation aren't critical, but the ragged tensor going into map_fn is. Using this layer fails with:

No unary variant unary_op function found for unary variant op enum: 1 Variant type_name: RaggedTensorVariant for device type: GPU

on a node whose name ends with rescale_b/map/while/TensorArrayV2Write/TensorListSetItem/grad/zeros_like, which suggests that the zeros_like operation isn't defined for ragged tensors on GPU. In this simple example I also include RescaleA, which accomplishes the same task using tf.ragged.map_flat_values — although in my real use case I need map_fn; this is a simplified example.

describe the expected behavior: I'd expect RescaleB and RescaleA to function identically.

standalone code to reproduce the issue: I've reproduced the issue locally with tf-nightly-gpu (TF 2.5), but I can't seem to get the nightly version to see the GPU on Colab. The Colab notebook uses TF 2.4, but the issue remains in the TF 2.5 nightly.

other info / logs: include any logs or source code that would be helpful to diagnose the problem; this may be the same issue as #44231, but hopefully the additional detail here is helpful.
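The notebook isn't reproduced here, so below is a small sketch of the two approaches under stated assumptions (hypothetical shapes; gain as a plain variable rather than a layer weight). The map_fn variant is the one whose gradient reportedly needs the missing GPU zeros_like kernel:

import tensorflow as tf

rt = tf.ragged.constant([[[1., 2.], [3., 4.]], [[5., 6.]]], ragged_rank=1)
gain = tf.Variable([0.5, 2.0])

# RescaleA-style: scale the flat values directly; no per-example map needed.
a = tf.ragged.map_flat_values(lambda v: v * gain, rt)

# RescaleB-style: map over examples; each example is a dense [time, 2] tensor.
with tf.GradientTape() as tape:
    b = tf.map_fn(
        lambda x: x * gain, rt,
        fn_output_signature=tf.RaggedTensorSpec(
            shape=[None, 2], dtype=tf.float32, ragged_rank=0))
    loss = tf.reduce_sum(b.flat_values)
# On GPU, this backward pass is reportedly where the variant zeros_like
# error shows up; on CPU it computes the gradient of gain normally.
print(tape.gradient(loss, [gain]))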
tensorflowtensorflow
race condition in port picker leads to test failures
Bug
system information: TensorFlow installed from (source or binary): source; TensorFlow version: 2.4.1; Python version: 3.7.4; Bazel version (if compiling from source): 3.7.1; GCC/compiler version (if compiling from source): 8.3.0; CUDA/cuDNN version: 11.0

describe the current behavior: running the TensorFlow tests in parallel through bazel leads to various failures, all ultimately with a failed start of the gRPC server due to "Address already in use" as the reason. I suspect a race condition in tensorflow/core/platform/default/net.cc, function PickUnusedPortOrDie: when multiple processes try to pick a random unused port, they may end up picking a port just picked — but not yet used — by another test process. This does not always happen, but often enough to become a problem on our 6-GPU system (i.e. 6 parallel processes). I also think the problem is amplified by the use of rand() without properly (i.e. randomly, or even at all) seeding it first, which means the processes likely try the same ports in the same order. This is supported by the output of the next test:

W tensorflow/core/platform/default/net.cc:65] bind(port:52649) failed: Address already in use
W tensorflow/core/platform/default/net.cc:65] bind(port:54915) failed: Address already in use
W tensorflow/core/platform/default/net.cc:65] bind(port:64017) failed: Address already in use

I.e., how high is the chance that 3 random ports, in order, are already used? Hence the ports chosen are not random.

describe the expected behavior: tests succeed by choosing really unused ports. I'd suggest blocking a port through other means — currently a static std::unordered_set is used, but e.g. a temporary file/folder would be more appropriate.

other info / logs:

[ RUN      ] CAPI.RemoteExecuteSilentCopiesAsyncFunc
2021-01-21 21:18:32.222204: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-21 21:18:32.224170: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-01-21 21:18:32.478962: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: pciBusID: 0035:05:00.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.50GiB deviceMemoryBandwidth: 836.37GiB/s
2021-01-21 21:18:32.478977: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-01-21 21:18:32.481822: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2021-01-21 21:18:32.481866: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.10
2021-01-21 21:18:32.483470: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-01-21 21:18:32.484308: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-01-21 21:18:32.486499: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-01-21 21:18:32.488202: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10
2021-01-21 21:18:32.492019: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.7
2021-01-21 21:18:32.528329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-01-21 21:18:33.198519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-01-21 21:18:33.198528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0
2021-01-21 21:18:33.198533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N
2021-01-21 21:18:33.211048: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:1/device:GPU:0 with 30150 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-32GB, pci bus id: 0035:05:00.0, compute capability: 7.0)
E0121 21:18:33.212193952  159657 server_chttp2.cc:40] {"created":"@1611263913.212114250","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1611263913.212112418","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":342,"referenced_errors":[{"created":"@1611263913.212096081","description":"Unable to configure socket","fd":33,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":216,"referenced_errors":[{"created":"@1611263913.212090911","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1611263913.212112068","description":"Unable to configure socket","fd":33,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":216,"referenced_errors":[{"created":"@1611263913.212108551","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}
2021-01-21 21:18:33.212232: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:533] UNKNOWN: Could not start gRPC server
tensorflow/c/eager/c_api_remote_test_util.cc:81: Failure
Value of: tensorflow::GrpcServer::Create(server_def, tensorflow::Env::Default(), &worker_server1).ok()
  Actual: false
Expected: true
[  FAILED  ] CAPI.RemoteExecuteSilentCopiesAsyncFunc (998 ms)

Other failing tests are: //tensorflow/c/eager:c_api_cluster_test_gpu, //tensorflow/c/eager:c_api_remote_function_test_gpu, //tensorflow/c/eager:c_api_remote_test_gpu
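To illustrate the suggested direction (the real fix would live in net.cc, so this Python version is purely a sketch): reserve each candidate port by atomically creating a lock file before binding, using a properly seeded RNG.

import errno
import os
import random
import socket
import tempfile

def pick_unused_port():
    lock_dir = os.path.join(tempfile.gettempdir(), "tf_port_locks")
    os.makedirs(lock_dir, exist_ok=True)
    rng = random.SystemRandom()  # properly seeded, unlike a bare rand()
    while True:
        port = rng.randint(32768, 60999)
        lock_path = os.path.join(lock_dir, str(port))
        try:
            # O_EXCL makes the reservation atomic across processes.
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL)
        except OSError as e:
            if e.errno == errno.EEXIST:
                continue  # another test process reserved this port first
            raise
        os.close(fd)
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("", port))
            except OSError:
                os.unlink(lock_path)
                continue  # in use by something outside the lock scheme
        return port  # caller should unlink the lock file when done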
tensorflowtensorflow
auto-generated Cortex-M0 hello_world make project fails to link
Bug
TensorFlow Micro system information: host OS platform and distribution: Linux Pop!_OS (5.8.0-7625-generic); TensorFlow installed from: source; TensorFlow version (commit SHA if source): 2.3.2; target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): Cortex-M0

describe the problem (please provide the exact sequence of commands/steps when you ran into the problem): I downloaded the master branch as a zip, extracted the archive, navigated to the root of the directory structure, and ran:

make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-m0 microlite

followed by:

make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-m0 generate_hello_world_make_project

Afterwards I cd to tensorflow/lite/micro/tools/make/gen/cortex_m_generic_cortex-m0_default/prj/hello_world/make and execute make, which results in the following error:

tensorflow/lite/micro/cortex_m_generic/debug_log.cc:25:10: fatal error: tensorflow/lite/micro/cortex_m_generic/debug_log_callback.h: No such file or directory
   25 | #include "tensorflow/lite/micro/cortex_m_generic/debug_log_callback.h"
compilation terminated.
make: *** [Makefile:36: tensorflow/lite/micro/cortex_m_generic/debug_log.o] Error 1

I'm able to resolve this by copying debug_log_callback.h from tensorflow/lite/micro/cortex_m_generic to tensorflow/lite/micro/tools/make/gen/cortex_m_generic_cortex-m0_default/prj/hello_world/make/tensorflow/lite/micro/cortex_m_generic. Executing make from within tensorflow/lite/micro/tools/make/gen/cortex_m_generic_cortex-m0_default/prj/hello_world/make now runs smoothly and produces the individual .o files, until the first call to the linker:

arm-none-eabi-g++ -std=c++11 -fno-rtti -fno-exceptions -fno-threadsafe-statics -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -mcpu=cortex-m0 -mfpu=auto -DTF_LITE_MCU_DEBUG_LOG -mthumb -mfloat-abi=soft -funsigned-char -mlittle-endian -Wno-type-limits -Wno-unused-private-field -fomit-frame-pointer -MD -DCPU_M0=1 -I. -I./third_party/gemmlowp -I./third_party/flatbuffers/include -I./third_party/ruy -o hello_world tensorflow/lite/micro/all_ops_resolver.o tensorflow/lite/micro/cortex_m_generic/debug_log.o tensorflow/lite/micro/memory_helpers.o tensorflow/lite/micro/micro_allocator.o tensorflow/lite/micro/micro_error_reporter.o tensorflow/lite/micro/micro_interpreter.o tensorflow/lite/micro/micro_profiler.o tensorflow/lite/micro/micro_string.o tensorflow/lite/micro/micro_time.o tensorflow/lite/micro/micro_utils.o tensorflow/lite/micro/recording_micro_allocator.o tensorflow/lite/micro/recording_simple_memory_allocator.o tensorflow/lite/micro/simple_memory_allocator.o tensorflow/lite/micro/test_helpers.o tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.o tensorflow/lite/micro/memory_planner/greedy_memory_planner.o tensorflow/lite/micro/memory_planner/linear_memory_planner.o tensorflow/lite/micro/testing/test_conv_model.o tensorflow/lite/c/common.o tensorflow/lite/core/api/error_reporter.o tensorflow/lite/core/api/flatbuffer_conversions.o tensorflow/lite/core/api/op_resolver.o tensorflow/lite/core/api/tensor_utils.o tensorflow/lite/kernels/internal/quantization_util.o tensorflow/lite/kernels/kernel_util.o tensorflow/lite/schema/schema_utils.o tensorflow/lite/micro/kernels/activations.o tensorflow/lite/micro/kernels/add.o tensorflow/lite/micro/kernels/arg_min_max.o tensorflow/lite/micro/kernels/ceil.o tensorflow/lite/micro/kernels/circular_buffer.o tensorflow/lite/micro/kernels/comparisons.o tensorflow/lite/micro/kernels/concatenation.o tensorflow/lite/micro/kernels/conv.o tensorflow/lite/micro/kernels/conv_test_common.o tensorflow/lite/micro/kernels/depthwise_conv.o tensorflow/lite/micro/kernels/dequantize.o tensorflow/lite/micro/kernels/detection_postprocess.o tensorflow/lite/micro/kernels/elementwise.o tensorflow/lite/micro/kernels/ethosu.o tensorflow/lite/micro/kernels/flexbuffers_generated_data.o tensorflow/lite/micro/kernels/floor.o tensorflow/lite/micro/kernels/fully_connected.o tensorflow/lite/micro/kernels/fully_connected_common.o tensorflow/lite/micro/kernels/hard_swish.o tensorflow/lite/micro/kernels/kernel_runner.o tensorflow/lite/micro/kernels/kernel_util.o tensorflow/lite/micro/kernels/l2norm.o tensorflow/lite/micro/kernels/logical.o tensorflow/lite/micro/kernels/logistic.o tensorflow/lite/micro/kernels/maximum_minimum.o tensorflow/lite/micro/kernels/mul.o tensorflow/lite/micro/kernels/neg.o tensorflow/lite/micro/kernels/pack.o tensorflow/lite/micro/kernels/pad.o tensorflow/lite/micro/kernels/pooling.o tensorflow/lite/micro/kernels/prelu.o tensorflow/lite/micro/kernels/quantize.o tensorflow/lite/micro/kernels/quantize_common.o tensorflow/lite/micro/kernels/reduce.o tensorflow/lite/micro/kernels/reshape.o tensorflow/lite/micro/kernels/resize_nearest_neighbor.o tensorflow/lite/micro/kernels/round.o tensorflow/lite/micro/kernels/shape.o tensorflow/lite/micro/kernels/softmax.o tensorflow/lite/micro/kernels/split.o tensorflow/lite/micro/kernels/split_v.o tensorflow/lite/micro/kernels/strided_slice.o tensorflow/lite/micro/kernels/sub.o tensorflow/lite/micro/kernels/svdf.o tensorflow/lite/micro/kernels/svdf_common.o tensorflow/lite/micro/kernels/tanh.o tensorflow/lite/micro/kernels/transpose_conv.o tensorflow/lite/micro/kernels/unpack.o tensorflow/lite/micro/examples/hello_world/main.o tensorflow/lite/micro/examples/hello_world/main_functions.o tensorflow/lite/micro/examples/hello_world/model.o tensorflow/lite/micro/examples/hello_world/output_handler.o tensorflow/lite/micro/examples/hello_world/constants.o -Wl,--fatal-warnings -Wl,--gc-sections -lm

which errors with:

/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-abort.o): in function `abort': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/stdlib/abort.c:59: undefined reference to `_exit'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-exit.o): in function `exit': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/stdlib/exit.c:64: undefined reference to `_exit'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-sbrkr.o): in function `_sbrk_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/sbrkr.c:51: undefined reference to `_sbrk'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-signalr.o): in function `_kill_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/signalr.c:53: undefined reference to `_kill'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-signalr.o): in function `_getpid_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/signalr.c:83: undefined reference to `_getpid'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-writer.o): in function `_write_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/writer.c:49: undefined reference to `_write'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-closer.o): in function `_close_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/closer.c:47: undefined reference to `_close'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-fstatr.o): in function `_fstat_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/fstatr.c:55: undefined reference to `_fstat'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-isattyr.o): in function `_isatty_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/isattyr.c:52: undefined reference to `_isatty'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-lseekr.o): in function `_lseek_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/lseekr.c:49: undefined reference to `_lseek'
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v6-m/nofp/libc.a(lib_a-readr.o): in function `_read_r': /build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v6-m/nofp/newlib/libc/reent/readr.c:49: undefined reference to `_read'
collect2: error: ld returned 1 exit status
make: *** [Makefile:42: hello_world] Error 1
tensorflowtensorflow
add window build to nightly libtensorflow c package
Bug
The C API page ("Nightly libtensorflow C packages") says that libtensorflow packages are built nightly and uploaded to GCS for all supported platforms, but the libtensorflow-nightly GCS bucket does not have builds for Windows. The README's list of official builds also indicates that Windows nightlies should be available. Please make Windows libtensorflow nightlies available for download.
tensorflowtensorflow
modular filesystems: inconsistent RenameFile behaviour
Bug
(This is not a contribution.) The new modular filesystem includes a RenameFile (L485) operation. The docstring states that the implementation must throw a tf.FailedPrecondition error if either the src or the dst path is a directory. This is also part of the test suite (L859-L882). However, the gfile.rename API description (L537) says: "Renames or moves a file/directory." Current implementations of the filesystem interface follow the latter description and will happily rename a directory, e.g. S3 (L1109-L1151) and GCS (L1011-L1024). Should we follow the existing implementations or the interface documentation?
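For reference, a minimal probe of the Python-level behavior the gfile docstring describes (whether this should raise for directories is exactly the question above; paths are illustrative):

import tensorflow as tf

tf.io.gfile.makedirs("/tmp/rename_src")
# The gfile docstring says "Renames or moves a file/directory", while the
# modular-filesystem contract says this must fail with FailedPreconditionError
# when src is a directory. Current local/S3/GCS implementations rename it.
tf.io.gfile.rename("/tmp/rename_src", "/tmp/rename_dst", overwrite=True)
print(tf.io.gfile.isdir("/tmp/rename_dst"))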
tensorflowtensorflow
no registered 'ResourceScatterNdUpdate' OpKernel for GPU
Bug
system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: CentOS 7; TensorFlow installed from: source; TensorFlow version: 2.4.0rc4; Python version: 3.7.4; Bazel version (if compiling from source): 3.4.1; GCC/compiler version (if compiling from source): GCC 8.3.0; CUDA/cuDNN version: 10.1; GPU model and memory: V100

describe the current behavior: a test shows that a GPU implementation for bool inputs of ResourceScatterNdUpdate is seemingly missing. The test is tensorflow/python/kernel_tests/batch_scatter_op_test.py, ScatterTest.testBooleanScatterUpdate.

standalone code to reproduce the issue: run bazel test.

other info / logs:

ERROR: testBooleanScatterUpdate (__main__.ScatterTest)
Traceback (most recent call last):
  File "/tmp/bazel_tf/20db8ac50b74c328e6dea9b20829b459/execroot/org_tensorflow/bazel-out/ppc-opt/bin/tensorflow/python/kernel_tests/batch_scatter_op_test.runfiles/org_tensorflow/tensorflow/python/kernel_tests/batch_scatter_op_test.py", line 91, in testBooleanScatterUpdate
    update0 = state_ops.batch_scatter_update(var, [1], [True])
  File ".../org_tensorflow/tensorflow/python/util/deprecation.py", line 340, in new_func
    return func(*args, **kwargs)
  File ".../org_tensorflow/tensorflow/python/ops/state_ops.py", line 915, in batch_scatter_update
    ref, final_indices, updates, use_locking=use_locking)
  File ".../org_tensorflow/tensorflow/python/ops/state_ops.py", line 368, in scatter_nd_update
    name=name)
  File ".../org_tensorflow/tensorflow/python/ops/gen_state_ops.py", line 740, in resource_scatter_nd_update
    _ops.raise_from_not_ok_status(e, name)
  File ".../org_tensorflow/tensorflow/python/framework/ops.py", line 6862, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: No registered 'ResourceScatterNdUpdate' OpKernel for 'GPU' devices compatible with node {{node ResourceScatterNdUpdate}}
	(OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_BOOL, Tindices=DT_INT32, use_locking=true
	.  Registered:
  device='GPU'; T in [DT_COMPLEX128]; Tindices in [DT_INT64]
  device='GPU'; T in [DT_COMPLEX128]; Tindices in [DT_INT32]
  device='GPU'; T in [DT_COMPLEX64]; Tindices in [DT_INT64]
  device='GPU'; T in [DT_COMPLEX64]; Tindices in [DT_INT32]
  device='GPU'; T in [DT_DOUBLE]; Tindices in [DT_INT64]
  device='GPU'; T in [DT_DOUBLE]; Tindices in [DT_INT32]
  device='GPU'; T in [DT_FLOAT]; Tindices in [DT_INT64]
  device='GPU'; T in [DT_FLOAT]; Tindices in [DT_INT32]
  device='GPU'; T in [DT_HALF]; Tindices in [DT_INT64]
  device='GPU'; T in [DT_HALF]; Tindices in [DT_INT32]
  device='GPU'; T in [DT_INT64]; Tindices in [DT_INT64]
  device='GPU'; T in [DT_INT64]; Tindices in [DT_INT32]
  device='GPU'; T in [DT_INT32]; Tindices in [DT_INT64]
  device='GPU'; T in [DT_INT32]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_STRING]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_STRING]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_COMPLEX128]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_COMPLEX128]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_COMPLEX64]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_COMPLEX64]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_DOUBLE]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_DOUBLE]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_BFLOAT16]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_BFLOAT16]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_HALF]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_HALF]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_INT32]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_INT32]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_INT8]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_INT8]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_UINT8]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_UINT8]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_INT16]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_INT16]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_UINT16]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_UINT16]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_UINT32]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_UINT32]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_INT64]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_INT64]; Tindices in [DT_INT32]
  device='CPU'; T in [DT_UINT64]; Tindices in [DT_INT64]
  device='CPU'; T in [DT_UINT64]; Tindices in [DT_INT32]
 [Op:ResourceScatterNdUpdate]
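A minimal sketch of the failing combination (my own reduction, assuming eager mode on a machine with a GPU; per the error dump, bool kernels exist only on CPU):

import tensorflow as tf

with tf.device("/GPU:0"):
    var = tf.Variable([True, False, False, True])
    # Boolean updates hit ResourceScatterNdUpdate, which reportedly has no
    # registered bool kernel on GPU, so this is expected to raise NotFoundError.
    var.scatter_nd_update(indices=[[1], [2]], updates=[True, True])
print(var.numpy())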
tensorflowtensorflow
using mixed precision causes incorrect loss/metric values
Bug
TensorFlow version: 2.4.0-rc3, compiled from source; GPU: RTX 3080, 10 GB; CUDA/cuDNN: 11.1/8; Bazel version: 3.1.0; Windows 10.

I decided to use mixed precision to speed up the training, but there are some issues. I used this code:

policy = keras.mixed_precision.Policy('mixed_float16')
keras.mixed_precision.set_global_policy(policy)

model.compile(
    optimizer=keras.mixed_precision.LossScaleOptimizer(keras.optimizers.Adam(learning_rate=1e-2)),
    loss=dice_loss,
    metrics=[dice_coefficient])

Training didn't show any NaN loss or metrics, but when evaluating the model (or at each epoch end) the val loss and metric are incorrect: they are fixed over epochs. The Colab notebook is here [link]. Also, the values of model.predict(x) and model(x) are different: where the former is NaN, the latter seems normal.
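A small probe (my own suggestion, assuming model and a batch x from the notebook above) that may help localize where predict diverges from a direct call:

import numpy as np

y_call = model(x, training=False).numpy()
y_pred = model.predict(x)
# Both outputs should be float32 if the final layer keeps its dtype;
# a NaN appearing only in predict() would point at the predict path.
print(y_call.dtype, y_pred.dtype)
print(np.isnan(y_call).any(), np.isnan(y_pred).any())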
tensorflowtensorflow
Resource exhausted: MemoryError: unable to allocate
Bug
For a similar question, see #38414.

system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Windows 10; TensorFlow installed from: pip install tensorflow-gpu==2.4.0rc3; TensorFlow version: v2.4.0-rc2-20-g68f236364c 2.4.0-rc3; Python version: 3.7.9; CUDA/cuDNN version: CUDA 11.1; GPU model and memory: GeForce RTX 3090, 24GiB

describe the current behavior: an error occurs when training reaches the second epoch:

MemoryError: Unable to allocate 184. MiB for an array with shape (64, 26, 26, 3, 371) and data type float32

When the problem occurs I have 70 GB of RAM and 5 GB of video memory left on my system, yet it is "unable to allocate 184 MiB". To hide this problem, simply reduce frozen_batch_size from 64 to 32, i.e. reduce the batch size.

describe the expected behavior: there should be no error.

standalone code to reproduce the issue: for code and data please see [link].

other info / logs:

15/15 [==============================] - ETA: 0s - loss: 7711.7482
2020-11-28 09:40:12.434912: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:592] layout failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride -1)
2020-11-28 09:40:12.449082: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:592] remapper failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride -1)
2020-11-28 09:40:12.582863: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:592] remapper failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride -1)
2020-11-28 09:40:14.685870: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,647,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.686045: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.686110: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.686801: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.687007: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,826,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.686816: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.687972: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,769,512,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.689583: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,768,512,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.689721: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,678,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.690045: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,768,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.690171: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,683,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.772685: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,770,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.775723: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,640,640,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.776161: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.777899: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,776,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.782664: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,683,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.791474: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,512,932,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.791955: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.792808: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.793188: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,559,512,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.793469: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cast_op.cc:109 : Resource exhausted: OOM when allocating tensor with shape[1,640,494,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.793868: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at image_resizer_state.h:142 : Resource exhausted: OOM when allocating tensor with shape[1,416,416,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2020-11-28 09:40:14.816077: W tensorflow/core/framework/op_kernel.cc:1751] Resource exhausted: MemoryError: Unable to allocate 184. MiB for an array with shape (64, 26, 26, 3, 371) and data type float32
Traceback (most recent call last):
  File "R:\ProgramData\Anaconda3\envs\ml\lib\site-packages\tensorflow\python\ops\script_ops.py", line 247, in __call__
    return func(device, token, args)
  File "R:\ProgramData\Anaconda3\envs\ml\lib\site-packages\tensorflow\python\ops\script_ops.py", line 135, in __call__
    ret = self._func(*args)
  File "R:\ProgramData\Anaconda3\envs\ml\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 620, in wrapper
    return func(*args, **kwargs)
  File "R:\ml\bug_test\xyolo\yolo3\utils.py", line 358, in preprocess_true_boxes
    dtype='float32') for l in range(num_layers)]
  File "R:\ml\bug_test\xyolo\yolo3\utils.py", line 358, in <listcomp>
    dtype='float32') for l in range(num_layers)]
MemoryError: Unable to allocate 184. MiB for an array with shape (64, 26, 26, 3, 371) and data type float32
tensorflowtensorflow
AttributeError: 'TensorArray' object has no attribute 'mark_used' with tf.function
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab
- TensorFlow version (use command below): v2.3.0-54-gfcc4b966f1 2.3.1

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with the command below.

Describe the current behavior

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

class Test(tf.Module):
    def __init__(self):
        self.log_like_list = None
        self.i = tf.constant(0)

    @tf.function
    def __call__(self, samples):

        @tf.function
        def rnd():
            return tfd.Normal(0., 1.).sample() * tfd.Normal(3., 1.).sample()

        if self.log_like_list is None:
            self.log_like_list = tf.TensorArray(tf.float32, size=samples)

        def cond(x, i):
            return tf.less(i, samples)

        def body(x, i):
            x = x.write(i, tf.math.reduce_sum(tfd.Normal(rnd(), 1.).log_prob(0.4)))
            # AttributeError: 'TensorArray' object has no attribute 'mark_used':
            # x.write(i, tf.math.reduce_sum(tfd.Normal(rnd(), 1.).log_prob(0.4))).mark_used()
            return x, i + 1

        self.log_like_list, self.i = tf.while_loop(cond, body, [self.log_like_list, self.i])
        self.log_like = self.log_like_list.stack()
        self.log_like = tf.math.reduce_mean(self.log_like)
        loss = -self.log_like
        return loss

t = Test()
t(5)
t.log_like
t.log_like_list  # the code below does not run
t.log_like_list.stack()
```

Describe the expected behavior
How can I get the values of t.log_like and t.log_like_list?

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook (#scrollTo=yj19j0ec_cj).

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
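A minimal sketch of the pattern that does work in graph mode (names such as fill are hypothetical, not from the report): treat the TensorArray as a loop-carried value, reassign the handle that write() returns, and return the stacked tensor instead of storing the TensorArray on the module between calls.

```python
import tensorflow as tf

@tf.function
def fill(n):
    ta = tf.TensorArray(tf.float32, size=n)

    def cond(ta, i):
        return tf.less(i, n)

    def body(ta, i):
        # write() returns a new TensorArray handle; it must be reassigned
        # (there is no .mark_used() in TF 2).
        ta = ta.write(i, tf.cast(i, tf.float32) * 2.0)
        return ta, i + 1

    ta, _ = tf.while_loop(cond, body, [ta, tf.constant(0)])
    return ta.stack()  # materialize the values before leaving the function

print(fill(tf.constant(5)))  # tf.Tensor([0. 2. 4. 6. 8.], shape=(5,), dtype=float32)
```

Returning the stacked tensor also answers the "how can I get the values" question: read the function's return value rather than the TensorArray attribute, which only holds a symbolic handle after tracing.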
tensorflowtensorflow
TF_RegisterLogListener in the C API does not seem to work
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.3.0
- Python version: N/A
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
When I call TF_RegisterLogListener in the C API, it doesn't seem to work as described: my listener is never called, and TensorFlow log messages still go to the console. The API description is: "Register a listener method that processes printed messages. If any listeners are registered, the print operator will call all listeners with the printed messages and immediately return without writing to the logs."

Describe the expected behavior
My listener should be called, and the messages should not be written to the console.

Standalone code to reproduce the issue

```cpp
extern "C" void LogListener(const char* message) {
  // This is never called, and TensorFlow log messages go to the console instead.
  std::cout << "TensorFlow message: " << message << std::endl;
}

// ...

TF_RegisterLogListener(LogListener);
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow
Bug: Keras save_model does not properly save optimizer state
Bug
Edit: Looks like this is a dupe of #42749. I'll leave this up for now, since that issue does not have as reproducible a high-level example, but feel free to close this.

Happens at least for Adam; does not apply to SGD, for example (did not test with others). Tested on tf-nightly and tf 2.3.0.

TL;DR: Running a tf.keras Model through tf.keras.models.load_model(save(...)) does not properly preserve the state of the optimizer for certain optimizers (see #42749 for more details). Per the docs, the savefile includes:
- the model architecture, allowing to re-instantiate the model
- the model weights
- the state of the optimizer, allowing to resume training exactly where you left off

Full example:

```python
#!/usr/bin/env python3
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Define a minimal model
inp = keras.layers.Input((1,))
out = keras.layers.Dense(1)(inp)
m1 = keras.Model(inp, out)
m1.compile(loss="mae", optimizer="adam")

# Create some test data
x, y = np.random.random((100, 1)), np.random.random((100, 1))

# Fit the model to the test data to get everything initialized
m1.fit(x, y, verbose=0)

def roundtrip(model: keras.Model) -> keras.Model:
    save_dir = "/tmp/mymodel"
    model.save(save_dir)
    restored = keras.models.load_model(save_dir)
    return restored

# Create a copy of the fitted m1
m2 = roundtrip(m1)

# Weights are preserved correctly - this passes
np.testing.assert_allclose(m1.predict(x), m2.predict(x))

# Now let's train once more
m1.fit(x, y, verbose=0)
# Since optimizer weights (state) are not preserved, this fit call results in
# different weights in m2, which makes the predictions differ.
m2.fit(x, y, verbose=0)
try:
    np.testing.assert_allclose(m1.predict(x), m2.predict(x), rtol=0.1)  # large relative tolerance!
except AssertionError:
    print("AssertionError: model predictions differ")

# Diagnosis: optimizer weights are not preserved
weights1 = m1.optimizer.get_weights()
m3 = roundtrip(m1)
weights3 = m3.optimizer.get_weights()
try:
    assert weights1 == weights3
except AssertionError:
    print("AssertionError: optimizer weights differ")
    print(f"{weights1}\nvs\n{weights3}")

# So far, we can't even restore the weights without training:
try:
    m3.optimizer.set_weights(weights1)
except Exception as e:
    print(str(e).split("Provided weights")[0])
```
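Until this is fixed, a workaround sketch (my own, not an official API; it assumes a single throwaway train_on_batch step is enough to make the optimizer create its slot variables, as in the repro above):

```python
import pickle
import tensorflow as tf
from tensorflow import keras

def save_optimizer_state(model, path):
    # Persist the optimizer variables separately from the SavedModel.
    with open(path, "wb") as f:
        pickle.dump(model.optimizer.get_weights(), f)

def restore_optimizer_state(model, path, x, y):
    # Run one throwaway step so the optimizer creates its slot variables;
    # without this, set_weights() complains it was expecting 0 weights.
    saved_model_weights = model.get_weights()
    model.train_on_batch(x[:1], y[:1])
    model.set_weights(saved_model_weights)  # undo the throwaway step
    with open(path, "rb") as f:
        model.optimizer.set_weights(pickle.load(f))
```

Usage would be save_optimizer_state(m1, "/tmp/opt.pkl") before saving, then restore_optimizer_state(m3, "/tmp/opt.pkl", x, y) after load_model, at the cost of one slightly perturbed-and-restored batch.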
tensorflowtensorflow
TF 2.4: KerasTensor breaks type compatibility
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-rc0 and tf-nightly
- Python version: 3.8

The problem
Since version 2.4, functional Keras models use KerasTensor instead of tf.Tensor as the layer output type. Unfortunately, KerasTensor doesn't subclass tf.Tensor, which breaks isinstance(x, tf.Tensor) checks (keras_tensor.py L63). The release notes recommend using tf.is_tensor instead. In my opinion this is not really Pythonic, and it breaks compatibility with the TF types RFC (cc @mdanatg), which even mentions that tf.is_tensor is expected to be deprecated eventually.

Concretely, switching from isinstance(x, tf.Tensor) to tf.is_tensor is also not possible in all cases; e.g., it breaks the usage of static type checkers like pytype or typeguard. A common pattern, which can also be found in TensorFlow Addons (cc @seanpmorgan), is the following:

```python
from typeguard import typechecked
import tensorflow as tf

@typechecked
def foo(x: tf.Tensor):
    print(x.dtype)

foo(tf.keras.Input(shape=(32, 32, 3)))  # throws in tf 2.4, since isinstance is used for typechecking
```

Possible solutions
1. Make KerasTensor a subclass of tf.Tensor (@mihaimaruseac, @fchollet: is there a reason why this isn't currently the case?).
2. Make KerasTensor a subclass of types.Tensor (types/core.py L40-L54). I don't see any disadvantage of doing this in general, but it wouldn't really fix this issue, since types.Tensor is not exposed as part of the public API yet, so users would need to rely on private TensorFlow APIs.
3. In user code, this could be fixed by directly relying on KerasTensor to replace the usage of tf.Tensor with:

```python
from typing import Union
from tensorflow.python.keras.engine.keras_tensor import KerasTensor

TensorType = Union[tf.Tensor, KerasTensor]
```

I do not think this is a proper solution, since it requires users to rely on internal APIs and implementation details that might change in the future.

I am curious to hear back from you on what the best practice for type checking of tensors is, or whether I am just missing something obvious here.
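A defensive variant of solution 3 (a sketch of my own, not an endorsed pattern): fall back gracefully if the private keras_tensor module moves in a future release, so the annotation degrades to plain tf.Tensor instead of crashing at import time.

```python
from typing import Union

import tensorflow as tf
from typeguard import typechecked

try:
    # Private API: this import path may change between releases.
    from tensorflow.python.keras.engine.keras_tensor import KerasTensor
    TensorLike = Union[tf.Tensor, KerasTensor]
except ImportError:
    TensorLike = tf.Tensor

@typechecked
def foo(x: TensorLike) -> None:
    print(x.dtype)

# Passes on TF 2.4 whenever the KerasTensor import above succeeded.
foo(tf.keras.Input(shape=(32, 32, 3)))
```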
tensorflowtensorflow
The size argument of TensorArray works only when specified by a Python int, not by a tf.Tensor
Bug
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux lz 5.4.0-48-generic #52~18.04.1-Ubuntu
- TensorFlow installed from (source or binary): in a conda env, pip install tensorflow-gpu
- TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0
- Python version: 3.8
- CUDA/cuDNN version: conda install cudatoolkit=10.1 cudnn=7.6.5
- GPU model and memory: 4 x Titan Xp, 12 GB

Describe the current behavior
Reference issue: #43698. Thanks to @saxenasaurabh's answer, I could initialize the TensorArray with a fixed size. However, when I try to specify the size of the TensorArray with a tf.Tensor taken from tf.shape, the leading dimension of the tensor obtained after stacking will be None:

```
embs shape: TensorShape([2, 3, 180, 320, 32])
cor_l shape: TensorShape([None, 2, 180, 320])
cor_prob1 shape: TensorShape([2, None, 180, 320])
cor_prob2 shape: TensorShape([2, None, 180, 320, 1])
cor_prob3 shape: TensorShape([2, None, 180, 320, 32])
cor_prob4 shape: TensorShape([2, 180, 320, None, 32])
cor_prob5 shape: TensorShape([2, 180, 320, None])
align_feas shape: TensorShape([2, 180, 320, 3, 32])
align_feas shape: TensorShape([2, 180, 320, 96])
```

Part of the code is shown below:

```python
align_feas_shape = tf.shape(align_feas)  # [b, n, h, w, c]
b = align_feas_shape[0]
n = align_feas_shape[1]
h = align_feas_shape[2]
w = align_feas_shape[3]
c = align_feas_shape[4]
# some other code omitted here
embs = tf.reshape(embs, (b, n, h, w, -1))
tf.print('embs shape:', embs.shape)
cor_l = tf.TensorArray(dtype=tf.float32, size=n)  # TensorFlow bug here?

def cond(i, n, inputs, arr):
    return tf.less(i, n)

def body(i, n, inputs, arr):
    emb_nbr = inputs[:, i]
    cor_tmp = tf.reduce_sum(emb_nbr * emb_ref, axis=3)  # [b, h, w]
    arr = arr.write(i, cor_tmp)
    i = tf.add(i, 1)
    return i, n, inputs, arr

_, _, _, cor_l = tf.while_loop(cond, body, [0, n, embs, cor_l])  # [n, b, h, w]
cor_l = cor_l.stack()  # [n, b, h, w]
tf.print('cor_l shape:', cor_l.shape)
cor_l = tf.transpose(cor_l, (1, 0, 2, 3))  # [b, n, h, w]
cor_prob = tf.sigmoid(cor_l)  # [b, n, h, w]
tf.print('cor_prob1 shape:', cor_prob.shape)
cor_prob = tf.expand_dims(cor_prob, axis=4)  # [b, n, h, w, 1]
tf.print('cor_prob2 shape:', cor_prob.shape)
cor_prob = tf.tile(cor_prob, (1, 1, 1, 1, c))  # [b, n, h, w, c]
cor_prob = tf.transpose(cor_prob, (0, 2, 3, 1, 4))  # [b, h, w, n, c]
cor_prob = tf.reshape(cor_prob, (b, h, w, -1))  # [b, h, w, n*c]
align_feas = tf.transpose(align_feas, (0, 2, 3, 1, 4))  # [b, h, w, n, c]
tf.print('align_feas shape:', align_feas.shape)
align_feas = tf.reshape(align_feas, (b, h, w, -1)) * cor_prob
tf.print('align_feas shape:', align_feas.shape)
```

Looking at the comment where the suspected TensorFlow bug is pointed out: if size is specified by n, which is a tf.Tensor, the stacked output will have a None leading dimension. Otherwise, when the size is designated by a Python int, e.g.

cor_l = tf.TensorArray(dtype=tf.float32, size=self.nframes)

where self.nframes is a Python int, everything is fine and the first dimension of the tensor will be fixed. The output is then as follows:

```
embs shape: TensorShape([2, 3, 180, 320, 32])
cor_l shape: TensorShape([3, 2, 180, 320])
cor_prob1 shape: TensorShape([2, 3, 180, 320])
cor_prob2 shape: TensorShape([2, 3, 180, 320, 1])
cor_prob3 shape: TensorShape([2, 3, 180, 320, 32])
cor_prob4 shape: TensorShape([2, 180, 320, 3, 32])
cor_prob5 shape: TensorShape([2, 180, 320, 96])
align_feas shape: TensorShape([2, 180, 320, 3, 32])
align_feas shape: TensorShape([2, 180, 320, 96])
```

Thus I am wondering if this is a bug in TensorFlow, since the documentation says: "size (optional int32 scalar Tensor): the size of the TensorArray. Required if handle is not provided." Please feel free to contact me if more testing code is required.
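A standalone sketch of a workaround (the function correlate and the constant NFRAMES are hypothetical stand-ins for the model code above): keep the dynamic tf.Tensor size, then re-attach the statically known frame count with set_shape() after stack(), so downstream ops see a fixed leading dimension.

```python
import tensorflow as tf

NFRAMES = 3  # statically known frame count (self.nframes in the snippet above)

@tf.function
def correlate(embs):  # embs: [b, n, h, w, c]
    n = tf.shape(embs)[1]
    ta = tf.TensorArray(tf.float32, size=n)  # size is a tf.Tensor, so stack() loses the dim
    for i in tf.range(n):
        ta = ta.write(i, tf.reduce_sum(embs[:, i], axis=3))  # [b, h, w]
    out = ta.stack()
    out.set_shape([NFRAMES, None, None, None])  # re-attach the known leading dim
    return out

x = tf.random.normal([2, NFRAMES, 4, 5, 6])
print(correlate(x).shape)  # (3, 2, 4, 5)
```

Passing element_shape=tf.TensorShape([...]) to tf.TensorArray can similarly pin the per-element static shape when it is known.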
tensorflowtensorflow
Saving and loading a Keras model with an RNN layer that uses both multiple inputs and constants results in a ValueError when using model.predict
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.3.0
- Python version: 3.8.5
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: -
- GPU model and memory: -

Describe the current behavior
When using an RNN layer that has multiple inputs as well as constant inputs, saving it to disk in the h5 file format and loading it back will produce the following when using model.predict:

```
tensorflow/python/keras/engine/input_spec.py:155, in assert_input_compatibility:
    raise ValueError('Layer ' + layer_name + ' expects ...')
ValueError: Layer rnn expects 3 inputs, but it received 2 input tensors. Inputs received: [...]
```

Describe the expected behavior
I expect equal results, and thus no ValueError, when calling model.predict on a certain set of inputs and doing it again after saving and loading the model.

Standalone code to reproduce the issue

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class RNNCellWithConstants(keras.layers.Layer):
    def __init__(self, units, constant_size, **kwargs):
        self.units = units
        self.state_size = units
        self.constant_size = constant_size
        super(RNNCellWithConstants, self).__init__(**kwargs)

    def build(self, input_shape):
        self.input_kernel = self.add_weight(
            shape=(input_shape[0][-1], self.units),
            initializer="uniform", name="kernel")
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer="uniform", name="recurrent_kernel")
        self.constant_kernel = self.add_weight(
            shape=(self.constant_size, self.units),
            initializer="uniform", name="constant_kernel")
        self.built = True

    def call(self, inputs, states, constants):
        x1 = inputs[0]
        [prev_output] = states
        [constant] = constants
        h_input = keras.backend.dot(x1, self.input_kernel)
        h_state = keras.backend.dot(prev_output, self.recurrent_kernel)
        h_const = keras.backend.dot(constant, self.constant_kernel)
        output = h_input + h_state + h_const
        return output, [output]

    def get_config(self):
        config = {"units": self.units, "constant_size": self.constant_size}
        base_config = super(RNNCellWithConstants, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

x1 = keras.Input((None, 5))
x2 = keras.Input((None, 5))
c = keras.Input((3,))
cell = RNNCellWithConstants(32, constant_size=3)
layer = keras.layers.RNN(cell)
y = layer([x1, x2], constants=c)

model = keras.models.Model([x1, x2, c], y)
model.compile(optimizer="rmsprop", loss="mse")
model.train_on_batch(
    [np.zeros((6, 5, 5)), np.zeros((6, 5, 5)), np.zeros((6, 3))],
    np.zeros((6, 32)))

# Test basic case
x1_np = np.random.random((6, 5, 5))
x2_np = np.random.random((6, 5, 5))
c_np = np.random.random((6, 3))
y_np = model.predict([x1_np, x2_np, c_np])

model.save("test.h5")
loaded_model = keras.models.load_model(
    "test.h5", custom_objects={"RNNCellWithConstants": RNNCellWithConstants})
loaded_y_np = loaded_model.predict([x1_np, x2_np, c_np])
assert np.array_equal(y_np, loaded_y_np)
```

Other info / logs
I also figured out why this happens: Python's standard JSON encoder converts lists and tuples into arrays, and the decoder always turns arrays into lists. When the RNN's __call__ function gets called with a list instead of a tuple, the following line

full_input_spec = generic_utils.to_list(nest.map_structure(lambda _: None, inputs)) + additional_specs

will map that input differently than if it were a tuple. I will be submitting a PR to fix this.
tensorflowtensorflow
training.tracking data structures (List) are not properly serialized/deserialized
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.1
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.3.0-54-gfcc4b966f1 2.3.1
- Python version: 3.6.9
- CUDA/cuDNN version: 10.1
- GPU model and memory: GTX 2080 Ti

Describe the current behavior
Weights loaded from a layer containing sublayers in a tracking.data_structures.List are not properly listed in the weights attribute. This will not only cause confusion but will also sabotage any further optimization or modification of the weights. They are properly restored, though, and the issue is not present when ListWrapper is used.

Describe the expected behavior
Variables of sublayers contained in a List should be contained in weights when the model is loaded again.

Standalone code to reproduce the issue

```python
import tensorflow as tf

class TestLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(TestLayer, self).__init__(**kwargs)
        self.static_layer = tf.keras.layers.Dense(128)
        self.my_layers = tf.python.training.tracking.data_structures.List()
        for i in range(4):
            layer = tf.keras.layers.Dense(128)
            self.my_layers.append(layer)

    def call(self, x):
        x = self.static_layer(x)
        for layer in self.my_layers:
            x = layer(x)
        return x

    def get_config(self):
        return super(TestLayer, self).get_config()

model = tf.keras.Sequential([TestLayer()])
x = tf.constant(42.0, shape=(1, 1))
y1 = model(x)
model.save("my_test_model", save_format="tf")
model_loaded = tf.keras.models.load_model("my_test_model")
y2 = model_loaded(x)

# Output
model.summary()
model_loaded.summary()
print("# vars:", len(model.weights), len(model_loaded.weights))
print("diff:", tf.norm(y1 - y2))
```

Output:

```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
test_layer (TestLayer)       (None, 128)               66304
=================================================================
Total params: 66,304
Trainable params: 66,304
Non-trainable params: 0

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
test_layer (TestLayer)       (None, 128)               256
=================================================================
Total params: 256
Trainable params: 256
Non-trainable params: 0

# vars: 10 2
diff: tf.Tensor(0.0, shape=(), dtype=float32)
```
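Since the report notes the problem does not occur with ListWrapper, a workaround sketch is to assign a plain Python list to the layer attribute; Keras auto-wraps such attribute assignments in a trackable ListWrapper (TestLayerFixed is a hypothetical name):

```python
import tensorflow as tf

class TestLayerFixed(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.static_layer = tf.keras.layers.Dense(128)
        # A plain list is auto-converted to a trackable ListWrapper on
        # assignment, which serializes/deserializes correctly per the report.
        self.my_layers = [tf.keras.layers.Dense(128) for _ in range(4)]

    def call(self, x):
        x = self.static_layer(x)
        for layer in self.my_layers:
            x = layer(x)
        return x
```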
tensorflowtensorflow
Discrepancy between Operation Semantics documentation and documentation of tf2xla/python/xla.conv
Bug
URL(s) with the issue: conv (convolution), tf2xla/python/xla.py L239-L269

Description of issue (what needs changing):
Clear description: tf2xla/python/xla.py conv (L239-L269) points to the Operation Semantics for ConvWithGeneralPadding, but actually wraps the more general ConvGeneralDilated. It would make sense to actually have documentation about the operation semantics of this more general operation.
tensorflowtensorflow
tf.data.experimental.dense_to_ragged_batch fails with input from a generator with unspecified shape in TF 2.3
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Both
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows/Linux/Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary (PyPI)
- TensorFlow version (use command below): 2.1, 2.2, 2.3
- Python version: 3.6
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.1 / 7.6
- GPU model and memory: Titan RTX / P100

Describe the expected behavior
In tf 2.1, 2.2 and 2.3, batching variable-length elements works fine when generated from tensor slices:

```python
ds = tf.data.Dataset.from_tensor_slices(tf.range(4))

# Generate variable-length elements via map: the first element will have
# length 1, subsequent elements will have length 2.
def f(x):
    if x == 0:
        return tf.ones(1)
    else:
        return tf.ones(2)

ds = ds.map(f)

# Inspect individual elements
print("unbatched shapes:")
for batch in ds:
    print(batch.shape)

# Batch into ragged tensors
ds = ds.apply(tf.data.experimental.dense_to_ragged_batch(batch_size=2))

# Inspect batched elements
print("batched shapes:")
for batch in ds:
    print(batch.to_tensor().shape)
```

Output:

```
unbatched shapes:
(1,)
(2,)
(2,)
(2,)
batched shapes:
(2, 2)
(2, 2)
```

Now, in tf 2.1 and 2.2, this also works when the dataset consumes elements from a generator:

```python
# Generate elements via a generator: the first element will have length 1,
# subsequent elements will have length 2.
def gen():
    for i in range(4):
        if i == 0:
            yield tf.ones(1)
        else:
            yield tf.ones(2)

ds = tf.data.Dataset.from_generator(gen, output_types=tf.float32)

print("unbatched shapes:")
for batch in ds:
    print(batch.shape)

ds = ds.apply(tf.data.experimental.dense_to_ragged_batch(batch_size=2))

print("batched shapes:")
for batch in ds:
    print(batch.to_tensor().shape)
```

The output is the same as above. As expected, we get identical output both before and after batching.

Describe the current behavior
In tf 2.3, the generator version results in an error:

```
InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [1] and element 1 had shape [2]. [Op:IteratorGetNext]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>()
     16
     17 print("batched shapes:")
---> 18 for batch in ds:
     19     print(batch.to_tensor().shape)

InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [1] and element 1 had shape [2].
```

The release notes for 2.3 mention: "tf.data.experimental.dense_to_ragged_batch works correctly with tuples" and "tf.data.experimental.dense_to_ragged_batch to output variable ragged rank". Presumably this issue is related to these changes. Here's the relevant implementation for the actual batching in tf 2.2 (L371-L426) and in tf 2.3 (L380-L452). As suggested by the changes above, the behavior prior to 2.3 is achieved again when the output shape is specified, even if unknown:

```python
def gen():
    for i in range(4):
        if i == 0:
            yield tf.ones(1)
        else:
            yield tf.ones(2)

# Create the generator, explicitly specifying the (unknown) shape.
ds = tf.data.Dataset.from_generator(
    gen, output_types=tf.float32, output_shapes=tf.TensorShape([None]))

print("unbatched shapes:")
for batch in ds:
    print(batch.shape)

ds = ds.apply(tf.data.experimental.dense_to_ragged_batch(batch_size=2))

print("batched shapes:")
for batch in ds:
    print(batch.to_tensor().shape)
```

The output is again identical to the first example.

It's definitely less convenient to have to specify the output shape in the generator, requiring some refactoring when updating to 2.3. Maybe the shape could just default to unknown when batching if unspecified in the generator? I appreciate that this is an experimental function, which is noted in the tf.data.experimental docs: "Note that the tf.data.experimental API is not subject to the same backwards compatibility guarantees as tf.data, but we will provide deprecation advice in advance of removing existing functionality." If this is intended behavior, then perhaps it could be documented somewhere, as it does remove the functionality of being able to ragged-batch elements from a generator without specifying output shapes.

Standalone code to reproduce the issue
- Colab tf 2.1 (works)
- Colab tf 2.2 (works)
- Colab tf 2.3 (fails)
- Colab tf 2.3 with output_shapes specified (works)

Other info / logs
Full traceback of the error in tf 2.3:

```
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py in execution_mode(mode)
   2101       ctx.executor = executor_new
-> 2102       yield
   2103     finally:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in _next_internal(self)
    757             output_types=self._flat_output_types,
    758             output_shapes=self._flat_output_shapes)
    759

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_dataset_ops.py in iterator_get_next(iterator, output_types, output_shapes, name)
   2609     except _core._NotOkStatusException as e:
   2610       _ops.raise_from_not_ok_status(e, name)
   2611     except _core._FallbackException:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   6842   # pylint: disable=protected-access
   6843   six.raise_from(core._status_to_exception(e.code, message), None)
   6844   # pylint: enable=protected-access

/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [1] and element 1 had shape [2]. [Op:IteratorGetNext]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>()
     16
     17 print("batched shapes:")
---> 18 for batch in ds:
     19     print(batch.to_tensor().shape)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in __next__(self)
    734
    735   def __next__(self):  # For Python 3 compatibility
    736     return self.next()
    737
    738   def _next_internal(self):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in next(self)
    770   def next(self):
    771     try:
    772       return self._next_internal()
    773     except errors.OutOfRangeError:
    774       raise StopIteration

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in _next_internal(self)
    762       return self._element_spec._from_compatible_tensor_list(ret)  # pylint: disable=protected-access
    763     except AttributeError:
    764       return structure.from_compatible_tensor_list(self._element_spec, ret)
    765
    766   @property

/usr/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     97             value = type()
     98         try:
     99             self.gen.throw(type, value, traceback)
    100         except StopIteration as exc:
    101             # Suppress StopIteration *unless* it's the same exception that

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py in execution_mode(mode)
   2103     finally:
   2104       ctx.executor = executor_old
   2105       executor_new.wait()
   2106
   2107

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/executor.py in wait(self)
     65   def wait(self):
     66     """Waits for ops dispatched in this executor to finish."""
     67     pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
     68
     69   def clear_error(self):

InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [1] and element 1 had shape [2].
```
tensorflowtensorflow
Docs on using TPUs with custom training loops can be misleading
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: "Train a model using a custom training loop" (TPU guide)

Description of issue (what needs changing):
After reading the docs on how to use a TPU with a custom training loop, I went ahead and tried training my model on the TPU. The training loop I implemented looked like:

```python
for step in range(steps_per_epoch):
    train_step(train_iterator)
    if step % 1000 == 0:
        print(step)
```

When I ran this with my model, I would see steps printed very quickly, but then after a while I'd see an OOM error, since my model didn't fit in memory. I spent a long time trying to figure out how I could get an OOM error after successfully training for thousands of steps. Eventually I realized that train_step, with its internal strategy.run call, doesn't block on completion of the training step, and if I instead ran the following loop:

```python
for step in range(steps_per_epoch):
    train_step(train_iterator)
    if step % 1000 == 0:
        print(optimizer.iterations.numpy())
```

I would see the OOM before any steps completed, as expected. When initially reading the docs it was not at all clear to me this would happen, so I think it would be nice if the docs mentioned that strategy.run is non-blocking. I'm pretty new to TF 2, so maybe I missed some doc that would've given me this understanding; if that's the case, I apologize.
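A self-contained sketch of the synchronizing pattern the report arrives at (the toy model and the get_strategy() stand-in are mine; on a real TPU setup the strategy would be a TPUStrategy): fetching any concrete value produced by the step, e.g. via float() or .numpy(), forces a device-to-host transfer and therefore blocks until the dispatched step has finished.

```python
import tensorflow as tf

strategy = tf.distribute.get_strategy()  # stand-in; a TPUStrategy in the real setup

dataset = tf.data.Dataset.from_tensor_slices(tf.random.normal([64, 4])).batch(8)
train_iterator = iter(strategy.experimental_distribute_dataset(dataset))

w = tf.Variable(tf.zeros([4, 1]))
optimizer = tf.keras.optimizers.SGD(0.1)

@tf.function
def train_step(iterator):
    def step_fn(x):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
        grads = tape.gradient(loss, [w])
        optimizer.apply_gradients(zip(grads, [w]))
        return loss
    per_replica_loss = strategy.run(step_fn, args=(next(iterator),))
    return strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_loss, axis=None)

for step in range(8):
    loss = train_step(train_iterator)
    if step % 4 == 0:
        print(step, float(loss))  # float()/.numpy() blocks until the step finished
```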
tensorflowtensorflow
Sampled softmax loss: weights and logits don't get gradients
Bug
System information
- OS Platform and Distribution: Linux Ubuntu 20.04
- TensorFlow version: 2.2.0
- Python version: 3.8.2
- CUDA/cuDNN version: 10.2 / 7.6.2
- GPU model and memory: Titan X

Describe the current behavior
In order to use tf.nn.sampled_softmax_loss, weights and biases need to be provided as inputs. I believe that internally, rows from those tensors are selected based on the samples and the computation is performed. The problem is that if you create a model with a final Dense layer and provide the weights and biases of that layer as inputs to tf.nn.sampled_softmax_loss, you end up receiving a warning that gradients for them were not computed:

```
WARNING:tensorflow:Gradients do not exist for variables ['my_model/dense_1/kernel:0', 'my_model/dense_1/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['my_model/dense_1/kernel:0', 'my_model/dense_1/bias:0'] when minimizing the loss.
```

As a consequence, they never get updated during training.

Describe the expected behavior
Gradients for those tensors should be computed, and they should get updated.

Standalone code to reproduce the issue

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

num_classes = 500
num_epochs = 3
num_samples = 10000
batch_size = 10
learning_rate = 0.001

y = np.random.randint(0, num_classes, num_samples, dtype=np.int64)
x = np.expand_dims(y.astype(np.float32), -1)
x_test = x[:10]
y_test = y[:10]

class MyModel(Model):
    def __init__(self, num_classes, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.dense1 = Dense(10)
        self.dense2 = Dense(num_classes)
        self.first_step = True

    def call(self, inputs, training=None, mask=None):
        hidden = self.dense1(inputs)
        if training and not self.first_step:
            return None, hidden
        else:
            logits = self.dense2(hidden)
            return logits, hidden

class SampledSoftmaxCrossEntropyLoss(tf.keras.losses.Loss):
    def __init__(self, decoder_obj=None, num_classes=0):
        super().__init__()
        self.decoder_obj = decoder_obj
        self.num_classes = num_classes

    def call(self, labels, hidden):
        labels = tf.cast(tf.expand_dims(labels, -1), tf.int64)
        weights = tf.transpose(self.decoder_obj.get_weights()[0])
        biases = self.decoder_obj.get_weights()[1]
        sampled_values = tf.random.uniform_candidate_sampler(
            true_classes=labels, num_true=1, num_sampled=5,
            range_max=self.num_classes, unique=False)
        loss_val = tf.nn.sampled_softmax_loss(
            weights=weights, biases=biases, labels=labels, inputs=hidden,
            num_sampled=5, num_classes=self.num_classes,
            sampled_values=sampled_values)
        return loss_val

my_model = MyModel(num_classes)
optimizer = SGD(learning_rate=learning_rate)
sampled_loss = SampledSoftmaxCrossEntropyLoss(
    decoder_obj=my_model.dense2, num_classes=num_classes)

def train_step(model, loss, optimizer, inputs, targets):
    with tf.GradientTape() as tape:
        logits, hidden = model(inputs, training=True)
        loss_val = loss(targets, hidden)
    grads = tape.gradient(loss_val, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss_val

def predict(model, inputs):
    logits, _ = model(inputs, training=True)
    predictions = tf.argmax(logits, -1)
    return predictions

x_batches = np.split(x, 100)
y_batches = np.split(y, 100)

print(x_test)
print(predict(my_model, x_test))

first_batch = True
for i in range(num_epochs):
    for x_batch, y_batch in zip(x_batches, y_batches):
        if first_batch:
            print("Weights and biases after first batch:")
            print(my_model.dense2.get_weights()[0])
            print(my_model.dense2.get_weights()[1])
            first_batch = False
        loss_val = train_step(my_model, sampled_loss, optimizer, x_batch, y_batch)
        print(loss_val)

print(x_test)
print(predict(my_model, x_test))
print("Weights and biases after training:")
print(my_model.dense2.get_weights()[0])
print(my_model.dense2.get_weights()[1])
```
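My reading of the likely cause, not an official fix: get_weights() returns NumPy arrays, so the GradientTape has no way to trace back to the Dense layer's variables. Referencing the layer's kernel and bias variables directly keeps the graph connected; a sketch of the loss rewritten that way:

```python
import tensorflow as tf

class SampledSoftmaxCrossEntropyLossFixed(tf.keras.losses.Loss):
    def __init__(self, decoder_obj=None, num_classes=0):
        super().__init__()
        self.decoder_obj = decoder_obj
        self.num_classes = num_classes

    def call(self, labels, hidden):
        labels = tf.cast(tf.expand_dims(labels, -1), tf.int64)
        # .kernel / .bias are tf.Variables, so the tape can differentiate
        # through them (unlike the ndarrays from get_weights()).
        weights = tf.transpose(self.decoder_obj.kernel)  # [num_classes, dim]
        biases = self.decoder_obj.bias
        return tf.nn.sampled_softmax_loss(
            weights=weights, biases=biases, labels=labels, inputs=hidden,
            num_sampled=5, num_classes=self.num_classes)
```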
tensorflowtensorflow
AutoGraph fails to convert nested if/else in a for loop
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0
- Python version: 3.6
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: -
- GPU model and memory: P100

Describe the current behavior
Unexpected behavior of the tf.function decorator: the decorator fails to convert if/else inside a loop, and because of this behavior it can't be used standalone or inside a tf.keras train_step function.

Standalone code to reproduce the issue

```python
import tensorflow as tf
import numpy as np

class Sampler:
    def __init__(self, sample_size=10):
        self.sample_size = tf.Variable(sample_size, dtype=tf.int32)
        self.samples = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)

    @tf.function
    def get_new_samples(self, data):
        size = tf.shape(data)[0]
        new_samples = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
        for i in range(size):
            if self.samples.size() < self.sample_size:
                self.samples.write(i, data[i])
                new_samples.write(i, data[i])
            else:
                if tf.random.uniform([1]) < 0.5:
                    idx = np.random.randint(0, size)
                    new_sample = self.samples.read(idx)
                    self.samples.write(idx, data[i])
                    new_samples.write(i, new_sample)
                else:
                    new_samples.write(i, data[i])
        return new_samples.stack()

    def __call__(self, data):
        return tf.cond(tf.equal(self.sample_size, 0),
                       true_fn=lambda: data,
                       false_fn=self.get_new_samples(data))

s = Sampler()
s(tf.convert_to_tensor(np.random.rand(5, 3).astype(np.float32)))
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
OperatorNotAllowedInGraphError: in user code:

    <ipython-input>:13 get_new_samples
        self.samples.write(i, data[i])
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/tf_should_use.py:235 wrapped
        return _add_should_use_warning(fn(*args, **kwargs))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/tensor_array_ops.py:1159 write
        return self._implementation.write(index, value, name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/tensor_array_ops.py:833 write
        self._write(index, value)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/tensor_array_ops.py:796 _write
        if index < 0:
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:778 __bool__
        self._disallow_bool_casting()
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:545 _disallow_bool_casting
        "using a `tf.Tensor` as a Python `bool`")
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:532 _disallow_when_autograph_enabled
        " decorating it directly with @tf.function.".format(task))

    OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.
```
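A sketch of a rewrite that AutoGraph accepts (my own restructuring, under two assumptions: TensorArray.write must have its returned handle reassigned as a loop-carried value, and per-step randomness should come from tf.random rather than NumPy so it is re-drawn each step). The original code also stores a TensorArray on self, which does not carry state across traces, so here the arrays stay local to the function:

```python
import tensorflow as tf

@tf.function
def get_new_samples(data, sample_size):
    size = tf.shape(data)[0]
    samples = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    new_samples = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    for i in tf.range(size):
        if samples.size() < sample_size:
            samples = samples.write(samples.size(), data[i])      # reassign!
            new_samples = new_samples.write(i, data[i])
        elif tf.random.uniform([]) < 0.5:
            idx = tf.random.uniform([], maxval=samples.size(), dtype=tf.int32)
            old = samples.read(idx)
            samples = samples.write(idx, data[i])
            new_samples = new_samples.write(i, old)
        else:
            new_samples = new_samples.write(i, data[i])
    return new_samples.stack()

print(get_new_samples(tf.random.normal([5, 3]), tf.constant(2)))
```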
tensorflowtensorflow
OP_REQUIRES failure in C++ LoadSavedModel with Conv2D (with bias) feeding into an Add
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux (latest)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): installed from [arch community] tensorflow-opt-cuda 2.3.0rc2-2
- TensorFlow version (use command below): 2.3.0-rc2 (see output below)
- Python version: 3.8.4
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): 10.1
- CUDA/cuDNN version: cuda 11.0.2-1, cudnn 8.0.0.180-2
- GPU model and memory: NVIDIA Titan Xp, 12 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

```
2020-07-26 12:25:21.992520: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
unknown 2.3.0-rc2
```

Describe the current behavior
I have successfully trained a little custom Keras ResNet with skip connections using the Python side of things. Simplified, the portion of the network that errors during C++ loading looks like: conv2d -> lrelu -> conv2d -> add -> out. When I save this Keras model using model.save("foo"), then attempt to load and execute it using the C++ layer, it is unable to run, reporting:

Fusion is not implemented: [BiasAdd, Add] [[node foo_test/resid_add/add]]

Describe the expected behavior
I would have expected it to load and optimize correctly so I could use it for inference in a C++ application.

Standalone code to reproduce the issue
Files attached. You can see the steps in recreate.sh, but basically: run the Python file (this will save a model), then compile and run the C++ program to load the model.

Other info / logs

```
2020-07-26 13:21:36.941101: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
2020-07-26 13:21:36.960504: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /home/kimball/development/bugreport/foo
2020-07-26 13:21:36.961289: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-07-26 13:21:36.961304: I tensorflow/cc/saved_model/loader.cc:234] Reading SavedModel debug info (if present) from: /home/kimball/development/bugreport/foo
2020-07-26 13:21:36.961385: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-07-26 13:21:36.985670: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3601000000 Hz
2020-07-26 13:21:36.986383: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55de6bc0eaa0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-26 13:21:36.986394: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-07-26 13:21:36.987511: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-07-26 13:21:37.057104: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-26 13:21:37.057480: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55de6bc0e030 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-07-26 13:21:37.057492: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): TITAN Xp, Compute Capability 6.1
2020-07-26 13:21:37.057594: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-26 13:21:37.057904: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: TITAN Xp computeCapability: 6.1 coreClock: 1.582GHz coreCount: 30 deviceMemorySize: 11.88GiB deviceMemoryBandwidth: 510.07GiB/s
2020-07-26 13:21:37.057932: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
2020-07-26 13:21:37.059248: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.11
2020-07-26 13:21:37.059806: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-07-26 13:21:37.059948: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-07-26 13:21:37.061270: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-07-26 13:21:37.061599: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.11
2020-07-26 13:21:37.061666: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-07-26 13:21:37.061692: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-26 13:21:37.061985: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-26 13:21:37.062276: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-07-26 13:21:37.062290: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
2020-07-26 13:21:37.268129: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-26 13:21:37.268149: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2020-07-26 13:21:37.268154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
2020-07-26 13:21:37.268252: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-26 13:21:37.268624: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-26 13:21:37.268930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10019 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-07-26 13:21:37.269287: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2020-07-26 13:21:37.275552: I tensorflow/cc/saved_model/loader.cc:199] Restoring SavedModel bundle.
2020-07-26 13:21:37.388514: I tensorflow/cc/saved_model/loader.cc:183] Running initialization op on SavedModel bundle at path: /home/kimball/development/bugreport/foo
2020-07-26 13:21:37.391535: I tensorflow/cc/saved_model/loader.cc:303] SavedModel load for tags { serve }; Status: success: OK. Took 431036 microseconds.
Loaded model OK
2020-07-26 13:21:37.422171: W tensorflow/core/framework/op_kernel.cc:1744] OP_REQUIRES failed at conv_ops_fused_impl.h:700 : Unimplemented: Fusion is not implemented: [BiasAdd, Add]
Error: Fusion is not implemented: [BiasAdd, Add]
	 [[{{node foo_test/resid_add/add}}]]
```

bugreport.zip
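A possible mitigation sketch, under the assumption that the grappler "remapper" pass (which creates the fused Conv2D kernels mentioned in the error) is the culprit: turn remapping off in the SessionOptions passed to LoadSavedModel. This is my own experiment, not a confirmed fix for this report.

```cpp
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/protobuf/rewriter_config.pb.h"

int main() {
  tensorflow::SessionOptions session_options;
  // Disable the grappler remapper so no fused [Conv2D, BiasAdd, Add]
  // kernel is requested at run time.
  session_options.config.mutable_graph_options()
      ->mutable_rewrite_options()
      ->set_remapping(tensorflow::RewriterConfig::OFF);

  tensorflow::RunOptions run_options;
  tensorflow::SavedModelBundle bundle;
  auto status = tensorflow::LoadSavedModel(
      session_options, run_options, "/home/kimball/development/bugreport/foo",
      {tensorflow::kSavedModelTagServe}, &bundle);
  return status.ok() ? 0 : 1;
}
```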
tensorflowtensorflow
AutoGraph applied to Keras custom loss during eager execution
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 x64
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): tf 2.4.0-dev20200707
- Python version: 3.7
- CUDA/cuDNN version: 10.1 / 7.6
- GPU model and memory: bug appears on several computers with different GPUs

Describe the current behavior
TensorFlow applies AutoGraph to Keras custom losses even in eager execution, meaning that we can't debug the loss anymore unless using tf.print. This did not happen in previous versions of TensorFlow. Note that it happens both when run_eagerly=True is set in model.compile and when tf.config.run_functions_eagerly(True) is set.

Describe the expected behavior
When run_eagerly=True is passed to the model during compilation, we should expect TensorFlow to run eagerly in the loss function.

Standalone code to reproduce the issue

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Custom model: AutoGraph is not applied in eager execution, so debugging is possible
class CustomModel(keras.models.Model):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.layer = tf.keras.layers.Dense(3)  # can debug here

    def call(self, inputs, training=None, mask=None):
        x = self.layer(inputs)  # can debug here
        return x

# Custom loss: AutoGraph is applied in eager execution, so debugging is impossible
class CustomLoss(keras.losses.Loss):
    def call(self, y_true, y_pred):
        x = tf.reduce_mean(tf.abs(y_pred - y_true))  # cannot debug here
        return x

if __name__ == "__main__":
    data = np.random.random((1000, 3)).astype(np.float32)
    model = CustomModel()
    model.compile(loss=CustomLoss(), run_eagerly=True)
    model.fit(x=data, y=data, batch_size=32)
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
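A debugging sketch reusing CustomModel/CustomLoss from the snippet above: calling the loss's call() method directly on concrete tensors bypasses fit() and any wrapping Keras does, so breakpoints and prints work again. This is a workaround for inspection, not a fix for the reported behavior.

```python
import numpy as np
import tensorflow as tf

model = CustomModel()
loss_fn = CustomLoss()

x = tf.constant(np.random.random((8, 3)).astype(np.float32))
y_true = tf.constant(np.random.random((8, 3)).astype(np.float32))

y_pred = model(x)
value = loss_fn.call(y_true, y_pred)  # plain Python call, fully eager
print(float(value))
```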
tensorflowtensorflow
Removing training nodes from a frozen graph fails using TransformGraph
Bug
I get TensorRT conversion failures due to ops not supported in TensorRT. By now I have come to the conclusion that, to optimize/prune the original TensorFlow graph, the frozen graph should contain only supported ops as per TensorRT, and it should then convert easily to UFF. To do this I want to remove training nodes. The mappings I used with the TransformGraph options are below; other than input shape changes and Identity nodes, no other nodes were removed (e.g. Switch, Exit, Add).

1. Which options should I choose in TransformGraph to remove Switch/Exit/Add nodes?
2. I used optimize_for_inference and remove_training_nodes without success.

Attached: saved model, frozen_graph.pb, TransformGraph notebook. I want to remove these nodes (see graph image). Thanks!
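For reference, a sketch of a typical transforms list for the TF 1.x Graph Transform Tool (the "input"/"output" node names and the input shape are placeholders for the real graph's values, and frozen_graph_def is assumed to be the GraphDef loaded from frozen_graph.pb). Note that Switch/Exit nodes come from tf.cond/tf.while control flow, and to my knowledge no transform removes them generically; they only disappear if the control flow itself is constant-folded away.

```python
from tensorflow.tools.graph_transforms import TransformGraph

transforms = [
    'strip_unused_nodes(type=float, shape="1,416,416,3")',
    'remove_nodes(op=Identity, op=CheckNumerics)',
    'fold_constants(ignore_errors=true)',
    'fold_batch_norms',
]
optimized_graph_def = TransformGraph(
    frozen_graph_def, ["input"], ["output"], transforms)
```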
tensorflowtensorflow
tf.io.gfile (GCS) fails to work on openSUSE
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux, openSUSE Tumbleweed
- TensorFlow installed from (source or binary): binary (conda)
- TensorFlow version (use command below): unknown 2.1.0
- Python version: 3.7.5
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior

```python
import tensorflow as tf
tf.io.gfile.listdir("gs://some-bucket")  # replace w/ bucket of your choice
```

This code gives an error:

```
2020-06-01 15:43:56.684531: W tensorflow/core/platform/cloud/google_auth_provider.cc:178] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Unavailable: Error executing an HTTP request: libcurl code 77 meaning 'Problem with the SSL CA cert (path? access rights?)', error details: error setting certificate verify locations: CAfile: /etc/ssl/certs/ca-certificates.crt CApath: none". Retrieving token from GCE failed with "Aborted: All 10 retry attempts failed. The last failure: Unavailable: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
```

After that it hangs for a while and then raises a NotFoundError. I believe this is because the libcurl packaged with TensorFlow doesn't know where to find the CA certificates bundle file on openSUSE, which is at /etc/ssl/ca-bundle.pem rather than /etc/ssl/certs/ca-certificates.crt. Also, I installed through miniconda, so there's another equivalent file at $CONDA_PREFIX/ssl/cacert.pem. Neither of these seems to be found by TensorFlow. This code (curl_http_request.cc, L129-L132) suggests that the bundle file's location can be customized with the CURL_CA_BUNDLE env variable; however, this doesn't change the behavior as far as I can tell; the error is still raised.

Describe the expected behavior
It should list the contents of the bucket.
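Since the reporter says the CURL_CA_BUNDLE variable seems to be ignored, a blunt workaround sketch is to create the Debian-style path that appears in the error message, pointing at openSUSE's real bundle (an assumption on my part that the hardcoded lookup path is the only blocker):

```sh
sudo mkdir -p /etc/ssl/certs
sudo ln -s /etc/ssl/ca-bundle.pem /etc/ssl/certs/ca-certificates.crt
```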
tensorflowtensorflow
[MLIR:XLA] Invalid IR passes verification
Bug
With TensorFlow HEAD at 3ffb4ad2d43311f41155b6e00fd105e50df685da (May 31), the following snippet should fail verification with tf-opt, but it doesn't:

```mlir
func @main(%arg0: memref<4x64x128x3xf32>) -> tuple<> {
  "xla_lhlo.copy"(%arg0, %arg0) : (memref<4x64x128x3xf32>, memref<4x64x128x3xf32>) -> ()
  "xla_lhlo.terminator"() : () -> ()
}
```

To reproduce: bazel-bin/tensorflow/compiler/mlir/tf-opt -verify <input>.mlir

cc @joker-eph @pifon2a @sherhut
tensorflowtensorflow
[MLIR:XLA] HLO-to-LHLO conversion issue when an operand is a constant tensor
Bug
With the master branch as of 831a55584749593400807e0baa7478476b5dbc70 (May 26), the xla_hlo-to-lhlo lowering doesn't convert completely when the operand of an op (in this example below, that of broadcast_in_dim) is a constant tensor. To reproduce, please use:

```mlir
func @main(%arg0: tensor<4x64x128x3xf32>, %arg1: tensor<5x3x3x10xf32>) -> tensor<4x30x42x10xf32> {
  %cst = constant dense<0.000000e+00> : tensor<f32>
  %0 = "xla_hlo.broadcast_in_dim"(%cst) {broadcast_dimensions = dense<[]> : tensor<0xi64>, name = "broadcast.6"} : (tensor<f32>) -> tensor<4x30x42x10xf32>
  return %0 : tensor<4x30x42x10xf32>
}
```

and tf-opt -hlo-legalize-to-lhlo constant_tensor_lowering.mlir:

```
constant_tensor_lowering.mlir:3:8: error: 'xla_lhlo.broadcast_in_dim' op operand #0 must be memref of floating-point or signless integer or complex-type values, but got 'tensor<f32>'
  %0 = "xla_hlo.broadcast_in_dim"(%cst) {broadcast_dimensions = dense<[]> : tensor<0xi64>, name = "broadcast.6"} : (tensor<f32>) -> tensor<4x30x42x10xf32>
constant_tensor_lowering.mlir:3:8: note: see current operation: "xla_lhlo.broadcast_in_dim"(%cst, %0) {broadcast_dimensions = dense<[]> : tensor<0xi64>, name = "broadcast.6"} : (tensor<f32>, memref<4x30x42x10xf32>) -> ()
```

Using -print-ir-after-all reveals the operand of the lhlo broadcast_in_dim op wasn't converted to a memref:

```
tuple.mlir:4:10: error: 'xla_lhlo.broadcast_in_dim' op operand #0 must be memref of floating-point or signless integer or complex-type values, but got 'tensor<f32>'
tuple.mlir:4:10: note: see current operation: "xla_lhlo.broadcast_in_dim"(%cst, %0) {broadcast_dimensions = dense<[]> : tensor<0xi64>, name = "broadcast.6"} : (tensor<f32>, memref<4x30x42x10xf32>) -> ()
// *** IR Dump After mlir::detail::VerifierPass Failed ***
module {
  func @main(%arg0: memref<4x64x128x3xf32>, %arg1: memref<5x3x3x10xf32>) {  // no predecessors
    %cst = "std.constant"() {name = "constant.5", value = dense<0.000000e+00> : tensor<f32>} : () -> tensor<f32>
    %0 = "std.alloc"() : () -> memref<4x30x42x10xf32>
    "xla_lhlo.broadcast_in_dim"(%cst, %0) {broadcast_dimensions = dense<[]> : tensor<0xi64>, name = "broadcast.6"} : (tensor<f32>, memref<4x30x42x10xf32>) -> ()
    %1 = "std.alloc"() : () -> memref<4x30x42x10xf32>
    "xla_lhlo.convolution"(%arg0, %arg1, %1) {batch_group_count = 1 : i64, dimension_numbers = {input_batch_dimension = 0 : i64, input_feature_dimension = 3 : i64, input_spatial_dimensions = dense<[1, 2]> : tensor<2xi64>, kernel_input_feature_dimension = 2 : i64, kernel_output_feature_dimension = 3 : i64, kernel_spatial_dimensions = dense<[0, 1]> : tensor<2xi64>, output_batch_dimension = 0 : i64, output_feature_dimension = 3 : i64, output_spatial_dimensions = dense<[1, 2]> : tensor<2xi64>}, feature_group_count = 1 : i64, lhs_dilation = dense<1> : tensor<2xi64>, name = "convolution.4", padding = dense<0> : tensor<2x2xi64>, precision_config = ["DEFAULT", "DEFAULT"], rhs_dilation = dense<1> : tensor<2xi64>, window_strides = dense<[2, 3]> : tensor<2xi64>} : (memref<4x64x128x3xf32>, memref<5x3x3x10xf32>, memref<4x30x42x10xf32>) -> ()
    %2 = "std.alloc"() : () -> memref<4x30x42x10xf32>
    "xla_lhlo.maximum"(%0, %1, %2) {name = "maximum.7"} : (memref<4x30x42x10xf32>, memref<4x30x42x10xf32>, memref<4x30x42x10xf32>) -> ()
    "xla_lhlo.terminator"() : () -> ()
  }  // sym_name = "main", type = (memref<4x64x128x3xf32>, memref<5x3x3x10xf32>) -> ()
}  // module_terminator
```

This is nothing specific to broadcast_in_dim; it happens with, say, xla_hlo.add as well. This is likely a missing conversion for std.constant that needs to be completed.
tensorflowtensorflow
tf.queue.FIFOQueue throws NotFoundError on enqueue/dequeue if created in graph mode
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Windows 7 and Ubuntu 18.04.3 LTS (Colab)
- TensorFlow installed from (source or binary): pip install tf-nightly-gpu
- TensorFlow version (use command below): v1.12.1-32502-g2544e4e277 2.3.0-dev20200523 (Windows and Colab)
- Python version: 3.6.6 (Windows), 3.6.9 (Colab)
- CUDA/cuDNN version: 10.1 / 7.6.3.30 (Windows), unknown (Colab)
- GPU model and memory: NVIDIA 1080 Ti, 11 GB (Windows), unknown (Colab)

Describe the current behavior
When creating a tf.queue.FIFOQueue inside a tf.function-decorated function, enqueueing/dequeueing tensors from the queue fails with a NotFoundError mentioning a non-existent resource of name localhost/<some numbers>/<C++-mangled name of class tensorflow::QueueInterface>. All operations work fine if the queue is created in eager mode.

Describe the expected behavior
Either the graph-mode execution should succeed just like in eager mode, or the unsuitability of queue creation for graph mode should be documented as API.

Standalone code to reproduce the issue (Linux case, Windows case):

```python
import tensorflow as tf

@tf.function
def foo():
    queue = tf.queue.FIFOQueue(1, [tf.string], [tf.TensorShape([])])
    queue.enqueue("hello")
    s = queue.dequeue()
    tf.print(s)

foo()
```

Logs are attached: log.txt
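A workaround sketch (my own, based on the report's observation that eager creation works): create the queue once outside the tf.function so the resource outlives the trace, and only enqueue/dequeue inside the compiled function.

```python
import tensorflow as tf

# Created in eager mode, so the queue resource persists across traces.
queue = tf.queue.FIFOQueue(1, [tf.string], [tf.TensorShape([])])

@tf.function
def roundtrip():
    queue.enqueue("hello")
    return queue.dequeue()

print(roundtrip().numpy())  # b'hello'
```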
tensorflowtensorflow
failed to legalize operation 'xla_hlo.return'
Bug
I am using TensorFlow installed from source and running on Google Cloud. My git commit id is b52f058cb7fe2aee523fb2f0ae8ba712d2339b3a.

I am trying to test the reduce op. I generated a frozen graph in pbtxt format using a very simple program applying reduce_sum to a [4, 3, 2] tensor. The GraphDef module I generated from it is given below:

```mlir
module attributes {tf.versions = {bad_consumers = [], min_consumer = 0 : i32, producer = 175 : i32}} {
  func @main(%arg0: tensor<4x3x2xf32>) -> tensor<f32> attributes {tf.entry_function = {control_outputs = "", inputs = "placeholder", outputs = "sum_0"}} {
    %graph = tf_executor.graph {
      %outputs, %control = tf_executor.island wraps "tf.Const"() {device = "", value = dense<[0, 1, 2]> : tensor<3xi32>} : () -> tensor<3xi32>
      %outputs_0, %control_1 = tf_executor.island wraps "tf.Sum"(%arg0, %outputs) {device = "", keep_dims = false} : (tensor<4x3x2xf32>, tensor<3xi32>) -> tensor<f32>
      tf_executor.fetch %outputs_0 : tensor<f32>
    }
    return %graph : tensor<f32>
  }
}
```

Above is the GraphDef file. I converted it into XLA HLO using tf-opt -tf-executor-island-coarsening -canonicalize -xla-legalize-tf, which gives the output:

```mlir
module attributes {tf.versions = {bad_consumers = [], min_consumer = 0 : i32, producer = 175 : i32}} {
  func @main(%arg0: tensor<4x3x2xf32>) -> tensor<f32> attributes {tf.entry_function = {control_outputs = "", inputs = "placeholder", outputs = "sum_0"}} {
    %0 = xla_hlo.constant dense<[0, 1, 2]> : tensor<3xi32>
    %1 = tensor_cast %0 : tensor<3xi32> to tensor<3xi32>
    %2 = "xla_hlo.convert"(%arg0) : (tensor<4x3x2xf32>) -> tensor<4x3x2xf32>
    %3 = xla_hlo.constant dense<0.000000e+00> : tensor<f32>
    %4 = "xla_hlo.reduce"(%2, %3) ( {
    ^bb0(%arg1: tensor<f32>, %arg2: tensor<f32>):  // no predecessors
      %6 = xla_hlo.add %arg1, %arg2 : tensor<f32>
      "xla_hlo.return"(%6) : (tensor<f32>) -> ()
    }) {dimensions = dense<[0, 1, 2]> : tensor<3xi64>} : (tensor<4x3x2xf32>, tensor<f32>) -> tensor<f32>
    %5 = "xla_hlo.convert"(%4) : (tensor<f32>) -> tensor<f32>
    return %5 : tensor<f32>
  }
}
```

Now I am trying to convert the above HLO file to LHLO using tf-opt -hlo-legalize-to-lhlo, but a segmentation fault occurs:

```
b.txt:12:7: error: failed to legalize operation 'xla_hlo.return' that was explicitly marked illegal
      "xla_hlo.return"(%6) : (tensor<f32>) -> ()
b.txt:12:7: note: see current operation: "xla_hlo.return"(%10) : (tensor<f32>) -> ()
Please submit a bug report and include the crash backtrace.
Stack dump:
0.	Program arguments: tf-opt -hlo-legalize-to-lhlo b.txt
tf-opt 0x7fdd05d
tf-opt 0x7fdb1ad
tf-opt 0x7fdb6fd
/lib64/libpthread.so.0 0x12dc0 (0x7fac3735fdc0)
tf-opt 0x7f7ba86
tf-opt 0x7f7eb0e
tf-opt 0x7f7deca
tf-opt 0x7f7e3e6
tf-opt 0x7f834b6
tf-opt 0x7f2e932
tf-opt 0x7f2f7ca
tf-opt 0x7f345dd
tf-opt 0x7d322dc
tf-opt 0x7f1890b
tf-opt 0x7f18c0c
tf-opt 0x7f18c0c
tf-opt 0x7f18c0c
tf-opt 0x7f1995f
tf-opt 0x7e0ebc9
tf-opt 0x7e13c89
tf-opt 0x7e13cfa
tf-opt 0x7e1ae85
tf-opt 0x5964480
tf-opt 0x59648c5
tf-opt 0x5964a80
tf-opt 0x930390
/lib64/libc.so.6 __libc_start_main+0xf3 (0x7fac36d95873)
tf-opt 0xa38cce
Segmentation fault (core dumped)
```

This is the whole output. Is something still missing currently in the repo?
tensorflowtensorflow
Documentation for ensuring CUPTI for profiling is misleading
Bug
URL(s) with the issue: "Install the Profiler and GPU prerequisites"

Description of issue (what needs changing):
The documentation says to run

ldconfig -p | grep libcupti

to check that CUPTI exists on the path, and to run

export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH

to fix it if it is not on the path. However, the documentation can be misleading in situations where an old install of CUDA 10.0 has been replaced with 10.1. At least on my installation, my output when checking the path is as below:

```console
tyler@lambda2:/usr/local/cuda/bin$ ldconfig -p | grep libcupti
        libcupti.so.10.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcupti.so.10.0
        libcupti.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcupti.so
```

Reading the documentation, this suggests to me that I do indeed have a version of libcupti on the path and that everything should work. However, when I train my model with the profiler on, I see the following error logs in the console:

```
2020-05-13 15:49:23.364143: I tensorflow/core/profiler/lib/profiler_session.cc:163] Profiler session started.
2020-05-13 15:49:23.364212: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1365] Profiler found 1 GPUs
2020-05-13 15:49:23.364588: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcupti.so.10.1'; dlerror: libcupti.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.1/lib64
2020-05-13 15:49:23.364606: E tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1415] function cupti_interface_->Subscribe(&subscriber_, (CUpti_CallbackFunc)ApiCallback, this) failed with error: CUPTI could not be loaded or symbol could not be found.
```

After double-checking that I have CUDA 10.1 installed (and not 10.2), I did the below:

```console
tyler@lambda2:~$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64
tyler@lambda2:~$ echo $LD_LIBRARY_PATH
/usr/local/cuda-10.1/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/extras/CUPTI/lib64
```

This then allowed the profiler to load CUPTI:

```
2020-05-13 18:18:51.560268: I tensorflow/core/profiler/lib/profiler_session.cc:163] Profiler session started.
2020-05-13 18:18:51.560338: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1365] Profiler found 1 GPUs
2020-05-13 18:18:51.561266: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcupti.so.10.1
```

However, rerunning the command from the documentation for checking that CUPTI is on the path gives the same output as before:

```console
tyler@lambda2:~$ ldconfig -p | grep libcupti
        libcupti.so.10.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcupti.so.10.0
        libcupti.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcupti.so
```

Desired fix: after updating my path, I would expect ldconfig -p | grep libcupti to show that /usr/local/cuda/extras/CUPTI/lib64 (with version 10.1) is available. Additionally, I believe the documentation should explicitly state that running ldconfig -p | grep libcupti should show libcupti.so.10.1 or greater.

Submitting a pull request? No; I'm not sure what a good way to check for "10.1 or greater" would be.
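For context, ldconfig -p reads the dynamic linker's cache, which LD_LIBRARY_PATH never updates; that explains why the documented check stays stale. A sketch of making the check agree with reality (paths assume a default /usr/local/cuda-10.1 install; the .conf filename is arbitrary):

```sh
echo "/usr/local/cuda-10.1/extras/CUPTI/lib64" | sudo tee /etc/ld.so.conf.d/cupti-10-1.conf
sudo ldconfig
ldconfig -p | grep libcupti   # should now also list libcupti.so.10.1
```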
tensorflowtensorflow
control dependency with assert equal
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: the tf.control_dependencies documentation entry.

Description of issue (what needs changing): do we need to mention the debugging use case in the returns section?

Clear description: we declare in the note: "Note: In TensorFlow 2 with eager and/or Autograph, you should not require this method, as code executes in the expected order. Only use tf.control_dependencies when working with v1-style code or in a graph context such as inside Dataset.map." But there is no direct reference to the assert_equal use case. For example, why should someone use this method, and how is it useful? Take a look at the issue here.
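For illustration, a minimal sketch of the v1-style pattern the note alludes to (my own example, not taken from the docs): without the explicit dependency, nothing in the graph consumes the assert op, so running `z` would silently skip the check.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = tf.compat.v1.placeholder(tf.float32, shape=[None])

# The assert op has no consumers, so it only runs if we force it to run
# before the ops that actually get fetched.
assert_op = tf.compat.v1.assert_equal(tf.shape(x), tf.shape(y))
with tf.control_dependencies([assert_op]):
    z = x + y
```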
tensorflowtensorflow
unintended tf.distribute.ReplicaContext.merge_call behavior on tpu
Bug
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04, Colab
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0rc3
- Python version: 3
- CUDA/cuDNN version: 7.6
- GPU model and memory: GTX 1080 Ti

Describe the current behavior: the argument passed to merge_fn is the original input tensor.

Describe the expected behavior: according to the docs, merge_fn is a "function that joins arguments from threads that are given as PerReplica".

Standalone code to reproduce the issue:

```python
import os
import tensorflow as tf

if 'COLAB_TPU_ADDR' in os.environ:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
        tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)
else:
    strategy = tf.distribute.MirroredStrategy()

@tf.function
def step_fn():
    v = tf.zeros([10])

    def merge_fn(strategy, key):
        print(key)

    context = tf.distribute.get_replica_context()
    context.merge_call(merge_fn, args=(v,))

strategy.run(step_fn)
```

Other info / logs: on a two-GPU machine the output is

```
PerReplica:{
  0: <tf.Tensor 'zeros:0' shape=(10,) dtype=float32, device=/job:localhost/replica:0/task:0/device:GPU:0>,
  1: <tf.Tensor 'replica_1/zeros:0' shape=(10,) dtype=float32, device=/job:localhost/replica:0/task:0/device:GPU:1>
}
```

On a Colab TPU the output is

```
Tensor("zeros:0", shape=(10,), dtype=float32)
```
tensorflowtensorflow
inconsistency in mirroredstrategy evaluation results for batch-dependent metrics
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): Colab, and from pip locally
- TensorFlow version (use command below): 2.2.0-rc4, but also locally on 2.1
- Python version: 3.6.9 (and 3.6.8 locally)
- CUDA/cuDNN version: 10.1
- GPU model and memory: Quadro P5000 locally, Colab otherwise

Describe the current behavior: when I use MirroredStrategy for model evaluation with a whole-batch-dependent metric, there is some inconsistency in the metric returned. This has to do with the fact that the batch is split before being sent to the different devices, and the metric is computed on each device before being averaged on the master device.

Describe the expected behavior: I would like the metric evaluation to be independent of whether I use MirroredStrategy or not.

Standalone code to reproduce the issue (you can check that the metric computation is otherwise consistent by changing the flag to False):

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# logical device separation
physical_devices = tf.config.list_physical_devices('GPU')
if True:
    tf.config.set_logical_device_configuration(
        physical_devices[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=8000),
         tf.config.LogicalDeviceConfiguration(memory_limit=8000)])

# my full-batch-dependent loss
def my_loss(y_true, y_pred):
    return tf.reduce_max(tf.abs(y_true - y_pred))

# my toy model
mirrored_strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.ReductionToOneDevice())
with mirrored_strategy.scope():
    model_distributed = Sequential([Dense(10)])
    model_distributed.compile(loss=my_loss)

# my toy data
x = tf.random.normal([32, 10])
y = tf.random.normal([32, 10])

# my experiment
metric = model_distributed.evaluate(x, y)
print(metric)
y_pred = model_distributed.predict(x)
print(my_loss(y, y_pred))
```

Other info / logs: a sample output from the previous code would for example be:

```
1/1 [==============================] - 0s 1ms/step - loss: 4.1214
4.121423721313477
tf.Tensor(4.1544814, shape=(), dtype=float32)
```

An obvious solution is to not use evaluate but predict, and iterate myself over the dataset (in my real use case I use a dataset and NCCL), computing the metric myself. But I then lose some nice properties of evaluate, like the callbacks, and I have to compute manually what is potentially a range of metrics. Maybe this isn't a bug, in which case it would be nice to have a warning in the docs. I also would like to know if there is a way to still use evaluate, maybe with a custom cross-device op.
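To make the workaround mentioned above concrete, here is a minimal sketch (my phrasing, not the reporter's code): gather all predictions first, since predict() concatenates the per-replica outputs, and apply the batch-dependent metric once on the full arrays.

```python
# predict() returns predictions for the whole input, already concatenated
# across replicas, so the max is taken over the full batch exactly once.
y_pred = model_distributed.predict(x)
consistent_metric = my_loss(y, y_pred)
```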
tensorflowtensorflow
serializing a tensor and writing to tf.train.Example from within a graph
Bug
I would like to write TensorFlow Example records to a TFRecordWriter from inside an Autograph-generated graph. I am running inference at scale over millions of examples, and so don't want to collect all results in memory, but write them out as I go. I am reading from a dataset, and running everything inside a graph is way faster than breaking out every batch to process and save results, so I just want to be able to write results from within the graph.

The documentation for TensorFlow 2.0 states the following: "The simplest way to handle non-scalar features is to use tf.serialize_tensor to convert tensors to binary-strings. Strings are scalars in TensorFlow." However, tf.io.serialize_tensor returns a tensor of byte-strings, and creating an Example proto requires a BytesList, not a tensor. How do I write a tf.train.Example to a TFRecord from inside a graph?

Code to reproduce (TensorFlow version 2.x):

```python
import tensorflow as tf

@tf.function
def example_write():
    writer = tf.io.TFRecordWriter('test.tfr')
    x = tf.constant([0, 1, 2, 3])
    x = tf.io.serialize_tensor(x)
    feature = {'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[x]))}
    ex = tf.train.Example(features=tf.train.Features(feature=feature))
    writer.write(ex.SerializeToString())

example_write()
```

And the error:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     12     writer.write(ex.SerializeToString())
     13
---> 14 example_write()

8 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    966           except Exception as e:  # pylint:disable=broad-except
    967             if hasattr(e, "ag_error_metadata"):
--> 968               raise e.ag_error_metadata.to_exception(e)
    969             else:
    970               raise

TypeError: in user code:

    <ipython-input>:6 example_write
        feature = ...

    TypeError: <tf.Tensor ...> has type Tensor, but expected one of: bytes
```
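For what it's worth, one workaround (a sketch under the assumption that per-element Python overhead is acceptable, not an official answer) is to hop out of the graph just for the proto construction with tf.py_function, since tf.train.Example needs concrete bytes rather than a symbolic tensor:

```python
import tensorflow as tf

writer = tf.io.TFRecordWriter('test.tfr')

def write_example(x):
    # Runs eagerly inside py_function, so .numpy() yields concrete bytes.
    serialized = tf.io.serialize_tensor(x).numpy()
    feature = {'data': tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[serialized]))}
    ex = tf.train.Example(features=tf.train.Features(feature=feature))
    writer.write(ex.SerializeToString())
    return 0

@tf.function
def graph_write(x):
    # The graph only holds a py_function node; the proto work happens in Python.
    return tf.py_function(write_example, [x], tf.int32)

graph_write(tf.constant([0, 1, 2, 3]))
writer.close()
```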
tensorflowtensorflow
autograph and tf.function do not work on tpu
Bug
It seems that TPUs only support the Keras fit function, and are unable to use functions built with tf.function and Autograph. TensorFlow version: 2.1. Python version: 3.6. The issue can be reproduced in Colab.
tensorflowtensorflow
[ko, zh-cn] how to get chinese/korean fonts to work in matplotlib + colab
Bug
The default matplotlib setup in Colab doesn't include Chinese or Korean fonts, so these characters don't render. I believe this is one of the reasons we have not been translating figure text. I can get the browser to render this text by outputting SVG text:

```python
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
matplotlib.rcParams['svg.fonttype'] = 'none'
```

but that messes up a lot of the formatting. Some searching shows that it might just be a matter of installing the right fonts and adding them to the matplotlib rc configuration, but I haven't found an end-to-end setup that works yet. Does anyone have experience setting this up?
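One possible end-to-end setup, as a sketch: the apt package, font path, and family name below are assumptions to verify rather than a tested recipe.

```python
# In a Colab cell, install a Korean font first:
#   !apt-get -qq install fonts-nanum
import matplotlib.font_manager as fm
import matplotlib.pyplot as plt

# Register the freshly installed font with matplotlib and select it.
font_path = '/usr/share/fonts/truetype/nanum/NanumGothic.ttf'
fm.fontManager.addfont(font_path)
plt.rcParams['font.family'] = 'NanumGothic'
plt.rcParams['axes.unicode_minus'] = False  # avoid broken minus glyphs

plt.title('한국어 제목')
plt.plot([1, 2, 3])
plt.show()
```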
tensorflowtensorflow
tf.function-decorated functions fail in graph mode if any branch of a conditional would be invalid at runtime
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0
- Python version: 3.7.6

Describe the current behavior: as far as I have followed the development of TF's tf.function decoration (autotrace), the aim is to mostly write idiomatic Python and have TF take care of building a corresponding TF op graph, with this feature being a highlight of TF2. The simple function below, where one branch can only be properly executed when the conditional is met, is idiomatic Python yet fails in graph mode, although it works in eager mode. I am aware that I can work around the situation, but I assume its not working as-is is a bug.

Describe the expected behavior: the function works identically in both eager and graph mode.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

@tf.function
def test_function(tensor):
    if tf.size(tensor) == 2:
        return tensor[1]
    else:
        return tensor[0]

tf.config.experimental_run_functions_eagerly(True)
print(test_function(tf.constant([1])))
print(test_function(tf.constant([1, 2])))
print(test_function(tf.constant([1, 2, 3])))

tf.config.experimental_run_functions_eagerly(False)
print(test_function(tf.constant([1])))
print(test_function(tf.constant([1, 2])))
print(test_function(tf.constant([1, 2, 3])))
```

```
$ TF_CPP_MIN_LOG_LEVEL=2 python test.py
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(1, shape=(), dtype=int32)
Traceback (most recent call last):
  File "test.py", line 20, in <module>
    print(test_function(tf.constant([1])))
  File "tensorflow_core/python/eager/def_function.py", line 568, in __call__
  File "tensorflow_core/python/eager/def_function.py", line 615, in _call
  File "tensorflow_core/python/eager/def_function.py", line 497, in _initialize
  File "tensorflow_core/python/eager/function.py", line 2389, in _get_concrete_function_internal_garbage_collected
  File "tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function
  File "tensorflow_core/python/eager/function.py", line 2593, in _create_graph_function
  File "tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func
  File "tensorflow_core/python/eager/def_function.py", line 439, in wrapped_fn
  File "tensorflow_core/python/framework/func_graph.py", line 968, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:

    test.py:7 test_function
        return tensor[1]
    tensorflow_core/python/ops/array_ops.py:898 _slice_helper
    tensorflow_core/python/ops/array_ops.py:1064 strided_slice
        shrink_axis_mask=shrink_axis_mask)
    tensorflow_core/python/ops/gen_array_ops.py:9535 strided_slice
        shrink_axis_mask=shrink_axis_mask, name=name)
    tensorflow_core/python/framework/op_def_library.py:742 _apply_op_helper
        attrs=attr_protos, op_def=op_def)
    tensorflow_core/python/framework/func_graph.py:595 _create_op_internal
        compute_device)
    tensorflow_core/python/framework/ops.py:3322 _create_op_internal
        op_def=op_def)
    tensorflow_core/python/framework/ops.py:1786 __init__
        control_input_ops)
    tensorflow_core/python/framework/ops.py:1622 _create_c_op
        raise ValueError(str(e))

    ValueError: slice index 1 of dimension 0 out of bounds. for 'strided_slice' (op: 'StridedSlice') with input shapes: [1], [1], [1], [1] and with computed input tensors: input[1] = <1>, input[2] = <2>, input[3] = <1>.
```
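One possible workaround (my sketch, not part of the report): Autograph lowers the if/else to tf.cond, which traces both branches, so every slice must be valid for every input shape. Computing the index instead keeps a single, always-valid slice:

```python
import tensorflow as tf

@tf.function
def test_function_safe(tensor):
    # tf.equal yields 1 only when the size is exactly 2, in which case
    # index 1 is guaranteed to be in bounds; otherwise we take index 0.
    index = tf.cast(tf.equal(tf.size(tensor), 2), tf.int32)
    return tensor[index]
```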
tensorflowtensorflow
tf.saved_model: AssertionError: called a function referencing variables which have been deleted
Bug
I am facing an issue when returning a tf.saved_model.load object from inside a function and then trying to use it; it does not work. I have a file sample.py:

```python
# sample.py
import tensorflow as tf

def load_model(model_dir):
    model_load = tf.saved_model.load(model_dir)
    model = model_load.signatures["serving_default"]
    print(model_load)
    return model
```

When I execute main.py:

```python
# main.py
from sample import load_model

model_dir = ...  # some path of a saved model
model1 = load_model(model_dir)
```

if I print the model variables, I get the following error: "AssertionError: Called a function referencing variables which have been deleted. This likely means that function-local variables were created and not referenced elsewhere in the program. This is generally a mistake; consider storing variables in an object attribute on first call."

But if I load the model with the same code inside main.py, not using the function, it works fine:

```python
# main.py
model_load = tf.saved_model.load(model_dir)
model = model_load.signatures["serving_default"]
```

If I print the model variables, it's working as expected.
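A sketch of a workaround, which seems consistent with the error text: the signature's function references the variables owned by the loaded root object, so keeping that object alive prevents them from being garbage-collected when the function returns.

```python
# sample.py
import tensorflow as tf

def load_model(model_dir):
    model_load = tf.saved_model.load(model_dir)
    model = model_load.signatures["serving_default"]
    # Return the root object too; the caller must keep a reference to it,
    # or its variables are deleted and the signature becomes unusable.
    return model, model_load
```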
tensorflowtensorflow
hangs on out-of-memory error
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0 and nightly, using the tensorflow/tensorflow:2.1.0-gpu-py3 and tensorflow/tensorflow:nightly-gpu-py3 containers
- CUDA/cuDNN version: CUDA 10.1 (in container), driver 440.33.01
- GPU model and memory: P100 and V100

Describe the current behavior: TensorFlow hangs when it hits out-of-memory, after it dumps the out-of-memory message.

Describe the expected behavior: TensorFlow should exit with a non-zero return code on OOM.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras import backend as K
import numpy as np

def random_image_generator(batch_size, num_classes, input_shape):
    template = 2 * num_classes * np.random.random((num_classes,) + input_shape)
    random_data = np.random.normal(loc=0, scale=1., size=input_shape)
    while True:
        y = np.random.randint(0, num_classes, size=(batch_size,))
        x = np.zeros((batch_size,) + input_shape, dtype=np.float32)
        for i in range(batch_size):
            x[i] = template[y[i]] + random_data
        x_array = np.array(x)
        y_array = tf.keras.utils.to_categorical(y, num_classes)
        yield (x_array, y_array)

def run_model():
    K.set_image_data_format('channels_first')
    image_dim = 5000
    input_shape = (3, image_dim, image_dim)
    num_classes = 15
    batch_size = 1

    model_class = tf.keras.applications.ResNet50
    model = model_class(weights=None, include_top=True,
                        input_shape=input_shape, classes=num_classes)
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

    random_generator = random_image_generator(batch_size, num_classes, input_shape)
    model.fit(random_generator, steps_per_epoch=10, epochs=1)

run_model()
```

Other info / logs: this program hangs after dumping the out-of-memory error on 16 GB and 32 GB GPUs (P100 and V100 tested); the program used to exit on TensorFlow 1.15. This happens on both the 2.1.0 and nightly containers on Intel x86 systems. I originally hit this on a from-source build of TensorFlow 2.1.0 on ppc64le; on that system I attached gdb and dumped the stacks. The code seems to be hanging on the three thread stacks noted in the attachment: threethreadstacks.txt.
tensorflowtensorflow
tf.data.Dataset unusable with steps_per_epoch (standard training loop)
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Mint
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7.4

Describe the current behavior: this is the underlying issue, basically:
- Datasets can be used as/converted to an iterator; once the iterator reaches the end of the dataset, it does not restart and yields no more samples.
- steps_per_epoch for the fit method of a Keras model can be used to specify the number of batches to run per epoch, more exactly the number of times the iterator is advanced/dereferenced. steps_per_epoch is required e.g. if not the full dataset should be traversed, either due to external requirements or to avoid using the trailing incomplete batch. steps_per_epoch is absolutely required to be set for MultiWorkerMirroredStrategy ("Train the model with MultiWorkerMirroredStrategy").
- Whether a new iterator is created for each epoch out of the dataset is determined by DataAdapter.should_recreate_iterator (L241-L242). Note that the iterator absolutely must be recreated for all common use cases: iterating over the full dataset; iterating over the full dataset except the trailing incomplete batch; iterating over a random subset of the full dataset per epoch. It should not be recreated for infinite datasets, and maybe if the full dataset should be consumed over multiple epochs (but why? and what should happen if the dataset is exhausted after some epochs? usually restart, right?).
- In the current TF 2.1.0, the implementation of DataAdapter.should_recreate_iterator is `return self._get_size() is not None or steps_per_epoch is None` (L208). This is wrong: for datasets, the size is always None (see the linked change, which intends for this to be changed), and, as motivated above, having steps_per_epoch set is a common use case where the iterator should still be recreated on each epoch.
- This was recently changed to `self._user_steps is None or cardinality.cardinality(self._dataset).numpy() == self._user_steps` (diff f8dd40712ac721c1b363e1a1ec44c1a3, r741-r747). This is also wrong, again assuming user_steps is set (see the above reasons). First, the cardinality might be unknown (e.g. for any TFDS dataset and for TFRecordDataset it is unknown, I guess due to the use of interleave in the dataset reader). Second, even if the size is known, it may not be equal to the number of steps; a common example is skipping the last batch.

Describe the expected behavior: this is a design issue and hence hard to resolve. In general it would be good to eliminate the unknown size of a dataset, but when reading data line by line from a file it might not be known upfront, so the user would have to input the size of the dataset, or specify explicitly "auto", which would iterate over the whole dataset once to get the number of samples. This can be costly, but should not be needed in general (e.g. TFDS knows the number of samples).

I think the sane approach would be to default to recreating the iterator on each epoch unless the dataset is known to be infinite. This might still be wrong for cases I can't imagine right now, but is correct for all cases I can think of. Maybe even allow the user to specify this; this default is IMO much better than the current one.

The other approach would be to recreate the iterator when it runs out of data after starting an epoch. This would partially solve the issue, but fails for:
- omitting the trailing batch: it would yield the incomplete batch from the last epoch first, although that should be skipped;
- using only some random samples per epoch while shuffling before each epoch: it would happily consume the rest of the samples, and not see the samples used in an earlier epoch until it runs out of data.

With the unknown size fixed, one could also fix the check at (diff f8dd40712ac721c1b363e1a1ec44c1a3, r741-r747) to check whether user_steps yields one epoch (plus an optional trailing batch), e.g. something like `size // steps == 1`. That would be better than the previous approach (recreate the iterator when out of data), as the use case of omitting the trailing batch is covered, but it would still fail for the others.

So my suggestion would be to optionally avoid unknown sizes, and by default recreate the iterator unless the dataset is infinite, allowing the user to overwrite this explicitly (maybe via with_options on the dataset).

Code to reproduce the issue (somewhat reduced example code, based on e.g. "Train the model with MultiWorkerMirroredStrategy"):

```python
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.python.data.experimental.ops import cardinality
import numpy as np

tfds.disable_progress_bar()

# Scale MNIST data from [0, 255] to [0, 1]
def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    return model

batch_size = 64
if False:
    examples = np.ones((10 * batch_size, 28, 28, 1))
    labels = np.ones((examples.shape[0],))
    dataset = tf.data.Dataset.from_tensor_slices((examples, labels))
    num_examples = examples.shape[0]
else:
    datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
    dataset = datasets['test']
    num_examples = info.splits['test'].num_examples

x = dataset.map(scale).cache().shuffle(10000).batch(batch_size)
model = build_and_compile_cnn_model()
card = cardinality.cardinality(x)
num_batches = sum(1 for _ in x)
full_batches = num_examples // batch_size
print("samples: %s\nbatches: %s (%s full)\ncardinality: %s"
      % (num_examples, num_batches, full_batches, card))
model.fit(x=x, epochs=2, steps_per_epoch=full_batches)
```

Other info / logs: there is a warning:

```
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 312 batches). You may need to use the repeat() function when building your dataset.
```

It seems that adding repeat(), and hence creating an infinite dataset, is a viable option. However, the docs also state: "In TF 1.X, the idiomatic way to create epochs was through the repeat transformation. In TF 2.0, tf.data.Dataset objects are Python iterables, which makes it possible to also create epochs through Python iteration." So it seems that it should not be required, and as per the above explanation, the return value of DataAdapter.should_recreate_iterator is not correct.
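For completeness, a sketch of the workaround the warning suggests, applied to the reproduction above: make the dataset infinite and tell fit() explicitly how many steps make up one epoch.

```python
# repeat() makes the dataset infinite, so the iterator never runs dry;
# drop_remainder keeps every epoch at exactly `full_batches` full batches.
x_repeated = dataset.map(scale).cache().shuffle(10000).batch(
    batch_size, drop_remainder=True).repeat()
model.fit(x=x_repeated, epochs=2, steps_per_epoch=full_batches)
```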
tensorflowtensorflow
file_io.get_matching_files fails for valid filenames that contain globs
Bug
The function file_io.get_matching_files says it takes a filepath, but actually it takes a glob. This means that if you save your checkpoints to a folder whose name contains glob characters, say `x[abc]`, then you can't load the latest checkpoint using something like:

```python
def load_checkpoint(sess, checkpoint_path):
    saver = tf.train.Saver(tf.global_variables())
    ckpt = tf.train.get_checkpoint_state(checkpoint_path)
    tf.logging.info('Loading model %s.', ckpt.model_checkpoint_path)
    saver.restore(sess, ckpt.model_checkpoint_path)
```

where `checkpoint_path = 'log/x[abc]'`.
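As a possible client-side workaround (a sketch; I have not verified that TensorFlow's matcher honours this escaping), Python's standard library can escape glob metacharacters before the path reaches get_matching_files:

```python
import glob

# Wraps each metacharacter in [] so it matches literally:
# 'log/x[abc]' -> 'log/x[[]abc]'
escaped = glob.escape('log/x[abc]')
```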
tensorflowtensorflow
tf.io.gfile.glob does not list all files in a google cloud storage bucket
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3

Describe the current behavior: when listing files with tf.io.gfile.glob, not all images are returned; it seems the folders are not resolved recursively. When using the same path with gsutil, we get the correct image count.

Describe the expected behavior: when using the same gs:// path with gsutil, we get the correct amount of images.

Code to reproduce the issue: in order to reproduce the behavior, I prepared a Google bucket with the following structure. The bucket is publicly accessible, so please feel free to use it to reproduce the behavior on your end:

```
gs://tensorflow-issue-reproduction/Level0/
gs://tensorflow-issue-reproduction/Level0/Level1/
gs://tensorflow-issue-reproduction/Level0/Level1/Level2/
gs://tensorflow-issue-reproduction/Level0/Level1/Level2/Level3/
```

In summary, we have 4 JPG images nested in different folder levels.

TensorFlow 2 code to reproduce:

```python
files = tf.io.gfile.glob("gs://tensorflow-issue-reproduction/**/*.jpg")
print(len(files))  # finds 1 file
```

gsutil command, which works properly:

```
gsutil du gs://tensorflow-issue-reproduction/**/*.jpg | wc -l  # finds 4 files
```

Good regards, Sascha
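Until this is resolved, a workaround sketch (assuming the nesting depth is known and small, as in the reproduction bucket) is to expand each level with a single `*` component, which tf.io.gfile.glob does handle:

```python
import tensorflow as tf

# Glob each nesting depth explicitly and concatenate the results.
base = "gs://tensorflow-issue-reproduction/"
files = []
for depth in range(1, 5):  # Level0 ... Level0/Level1/Level2/Level3
    files += tf.io.gfile.glob(base + "*/" * depth + "*.jpg")
print(len(files))
```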
tensorflowtensorflow
collective all_gather fails to collect tensors across multiple tf tasks in between-graph distributed execution
Bug
System information:
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (.whl)
- TensorFlow version (use command below): tensorflow-gpu 2.0.0
- Python version: 3.6.8
- CUDA/cuDNN version: 10.0 / 7.6.4 (as recommended), with NCCL support
- GPU model and memory: GeForce GTX 1080 Ti

Describe the current behavior: first, we need the two files below.

cluster.py:

```python
import time
from multiprocessing import Process

from tensorflow.core.protobuf import config_pb2
from tensorflow.python.training.server_lib import Server

cluster_spec = {'worker': ['localhost:14286', 'localhost:14287']}
GROUP_SIZE = 4


def configure(group_size):
    gpu_options = config_pb2.GPUOptions(
        visible_device_list='0,1',
        per_process_gpu_memory_fraction=0.7 / group_size)
    experimental = config_pb2.ConfigProto.Experimental(
        collective_nccl=True,
        collective_group_leader='/job:worker/replica:0/task:0')
    return config_pb2.ConfigProto(gpu_options=gpu_options,
                                  experimental=experimental)


class TFCluster(object):

    def __init__(self, cluster_spec):
        self.cluster_spec = cluster_spec
        self.num_workers = len(self.cluster_spec.get('worker'))
        self.tf_servers = []

    def start(self):

        def server(job_name: str, task_index: int):
            s = Server(self.cluster_spec, job_name=job_name,
                       task_index=task_index, config=configure(GROUP_SIZE))
            s.join()

        assert self.num_workers > 1
        for i in range(self.num_workers):
            self.tf_servers.append(
                Process(target=server, args=('worker', i), daemon=True))
        for proc in self.tf_servers:
            proc.start()

    def stop(self):
        for proc in self.tf_servers:
            proc.terminate()


if __name__ == '__main__':
    cluster = TFCluster(cluster_spec)
    cluster.start()
    time.sleep(5)
    input('Press enter to stop')
    cluster.stop()
```

task.py:

```python
import argparse

import numpy as np
import tensorflow as tf

from tensorflow.core.protobuf import config_pb2
from tensorflow.python import ops
from tensorflow.python.client.session import Session
from tensorflow.python.ops import collective_ops

from cluster import cluster_spec, GROUP_SIZE

var = np.array([[0.1], [1.1], [2.1], [3.1], [4.1], [5.1], [6.1], [7.1]])


def test_collective(job_name, task_index, num_gpus):
    worker_device = '/job:%s/task:%d' % (job_name, task_index)
    master_target = 'grpc://' + cluster_spec[job_name][task_index]
    print('session target:', master_target)

    with ops.Graph().as_default(), Session(target=master_target) as sess:

        def run(x):
            run_options = config_pb2.RunOptions()
            # different positive graph keys for different tasks
            # to avoid racing conditions
            run_options.experimental.collective_graph_key = task_index + 1
            return sess.run(x, options=run_options)

        with ops.device('/job:worker/task:0/device:CPU:0'):
            # make sure all tasks use the same variable
            w = tf.Variable(var, name='w')

        targets, collectives = [], []
        for i in range(num_gpus):
            with ops.device(worker_device + '/device:GPU:' + str(i)):
                t = w + 0.2 * task_index + 0.1 * i
                targets.append(t)
                collectives.append(collective_ops.all_gather(
                    t, group_size=GROUP_SIZE, group_key=1, instance_key=1))
                # collective_ops.all_reduce(t, group_size=GROUP_SIZE,
                #                           group_key=1, instance_key=1,
                #                           merge_op='Add', final_op='Div')

        run(tf.compat.v1.global_variables_initializer())
        var_value = run(w)
        print('variable value:', var_value)
        target_values = run(targets)
        print('target values:', target_values)
        collective_values = run(collectives)
        print('collective values:', collective_values)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.register('type', 'bool', lambda v: v.lower() == 'true')
    parser.add_argument('--job_name', type=str, default='')
    parser.add_argument('--task_index', type=int, default=0)
    FLAGS, unparsed = parser.parse_known_args()
    NUM_GPUS_PER_NODE = 2
    test_collective(FLAGS.job_name, FLAGS.task_index,
                    num_gpus=NUM_GPUS_PER_NODE)
```

Then one can use the following commands to run the experiments:

```bash
python cluster.py
python task.py --job_name=worker --task_index=0
python task.py --job_name=worker --task_index=1
```

Experiment 1 (failure): run the above code without changes. Note that GROUP_SIZE is 4; the program is set up to all-gather the 4 tensors across task 0 GPU 0, task 0 GPU 1, task 1 GPU 0 and task 1 GPU 1, but it gets stuck. If one turns on the environment variable TF_CPP_MIN_VLOG_LEVEL=1, one will notice that it successfully gathers the values but gets stuck right after that; specifically, after `tensorflow/core/common_runtime/collective_rma_local.cc:105 PostToPeer`.

Experiment 2 (success): run the above code with GROUP_SIZE = 2. On each task, the program gathers the 2 tensors on the 2 GPUs of that task; for example, on task 0 it gathers task 0 GPU 0 and task 0 GPU 1. It succeeds.

Experiment 3 (success): run the above code with the original GROUP_SIZE = 4, but with the all_reduce op provided in the commented code instead of all_gather. It succeeds.

Describe the expected behavior: it is expected that all of the experiments above succeed; however, experiment 1 fails while 2 and 3 succeed. Experiment 3 means the collective executor works in a multi-task way of executing the graph for all_reduce; experiment 2 means all_gather works if it only collects tensors within one graph; experiment 1 is the bug, where all_gather fails to collect tensors across tasks.

Code to reproduce the issue: see above.
tensorflowtensorflow
tf.image.ssim_multiscale does not work in tf 2.0.0
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from: binary (pip)
- TensorFlow version: 2.0.0
- Python version: 3.5.2
- CUDA version: 10.1
- GPU model and memory: GTX 1060, 6 GB

```python
# image batches
video1 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)
video2 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)
```

ssim works fine, but when I use the multiscale ssim I get the following error message. What am I doing wrong, and how do I fix this?

ssim:

```python
ssim_score = tf.image.ssim(img1=video1, img2=video1, max_val=1.0)
print(ssim_score)
# tf.Tensor([1. 1. 1. 1. 1. 1. 1. 1.], shape=(8,), dtype=float32)
```

ms-ssim:

```python
ms_ssim_score = tf.image.ssim_multiscale(img1=video1, img2=video2, max_val=1.0)
```

```
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
----> 1 ms_ssim_score = tf.image.ssim_multiscale(img1=video1, img2=video2, max_val=1.0)

.../site-packages/tensorflow_core/python/ops/image_ops_impl.py in ssim_multiscale(img1, img2, max_val, power_factors, filter_size, filter_sigma, k1, k2)
   3405             filter_sigma=filter_sigma,
   3406             k1=k1,
-> 3407             k2=k2)
   3408         mcs.append(nn_ops.relu(cs))
   3409

.../site-packages/tensorflow_core/python/ops/image_ops_impl.py in _ssim_per_channel(img1, img2, max_val, filter_size, filter_sigma, k1, k2)
   3174       math_ops.greater_equal(shape1[-3:-1], filter_size),
   3175       [shape1, filter_size],
-> 3176       summarize=8)
   3177   checks.append(control_flow_ops.Assert(
   3178       math_ops.reduce_all(

InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: [8 8 8 1], [11]
```
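For what it's worth, the assert in the traceback compares the spatial dimensions after repeated 2x downsampling (64 -> 32 -> 16 -> 8 by the fourth scale) against the 11x11 filter, which is my reading of the check rather than documented behavior. A sketch of two ways around it:

```python
import tensorflow as tf

# Either use inputs large enough to survive four 2x downsamplings
# with the default five power_factors and filter_size=11...
big1 = tf.random.uniform(shape=[8, 256, 256, 1], minval=0, maxval=1)
big2 = tf.random.uniform(shape=[8, 256, 256, 1], minval=0, maxval=1)
print(tf.image.ssim_multiscale(big1, big2, max_val=1.0))

# ...or reduce the number of scales and the filter size for small images.
video1 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)
video2 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)
print(tf.image.ssim_multiscale(video1, video2, max_val=1.0,
                               power_factors=[0.5, 0.5], filter_size=7))
```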
tensorflowtensorflow
cannot seek on write-only tf.gfile.GFile
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Linux Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: v1.14.0-rc1-22-gaf24dc9 1.14.0
- Python version: 3.6

Describe the current behavior: calling seek on a tf.gfile.GFile opened in write-only mode raises tensorflow.python.framework.errors_impl.PermissionDeniedError.

Describe the expected behavior: GFile should support the Python io semantics, which allow seek on a write-only file. More generally, it would be preferable if GFile followed the API of Python's io.IOBase.

Code to reproduce the issue:

```python
import tensorflow as tf

with tf.io.gfile.GFile('test.txt', 'w') as f:
    f.seek(0)
```

Other info / logs:

```
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File ".../venv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File ".../venv/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 146, in seek
    self._preread_check()
  File ".../venv/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 82, in _preread_check
    "File isn't open for reading")
tensorflow.python.framework.errors_impl.PermissionDeniedError: File isn't open for reading
```
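A sketch of a possible workaround, assuming (as the traceback suggests) that the failure is only the pre-read check: open the file in a readable mode such as 'w+', which still truncates on open but should satisfy the check.

```python
import tensorflow as tf

# 'w+' is a readable mode, so seek()'s pre-read check passes; behavior on
# non-local filesystems (e.g. GCS) is untested here.
with tf.io.gfile.GFile('test.txt', 'w+') as f:
    f.write('hello')
    f.seek(0)
    print(f.read())
```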
tensorflowtensorflow
interrelation of collections and scope counts is not clear from documentation
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: the GraphKeys documentation.

Description of issue (what needs changing): the GraphKeys docs lack an explanation of how the VARIABLE_STORE and VARSCOPE keys relate to the other collections, i.e. to variable creation within a variable scope context. In my modelling, since I am building a DNN from scratch, it is a condition sine qua non to estimate memory usage, as done in this post for the VGGNet case. However, to me it is not clear which tensors sum up to the computation's memory allocation; even if allocation happens at runtime, you should be able to pre-calculate the estimate from the graph built before running. Without a clear picture of the relation between scope counts, variable creation and usage, writing such a memory-estimation function today is hard. I am using the VARIABLES collection; I think it subsumes the variables used in any training session.

Clear description: collections are created in the modelling process with the intent of variable management. For some functional reasons, the developer must have a clear image of the intent of each collection and their interrelations. This is somewhat done in GraphKeys, but the mentioned keys are lacking.

Submit a pull request? (Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the doc style guide.)

More info: I have seen the RFC and know that in the 2.0 beta, Variable becomes an abstract class and the management and implementation are more flexible, but right now I do not have time to migrate the code. So I think this stuff would be good to keep updated, if that is possible and desirable for the TF team; more people may be in the same situation.
tensorflowtensorflow
potential bugs found with static analysis
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.14.0
- Python version: 2.7.12
- Bazel version (if compiling from source): 0.26.1
- GCC/compiler version (if compiling from source): g++ (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior:

Issue 1 (L445-L447): `dst_copy != nullptr` is checked immediately after `if (dst_copy == nullptr) continue;`. Is one of the comparisons supposed to be different, or can the TF_RET_CHECK be removed?

Issue 2 (L126-L128): this code is unreachable, because both branches of the `if` above it return. Either it should be deleted, or some part of the `if` modified.

Issue 3 (L218-L220): the condition is always true; my guess is that `&&` was intended instead of `||`.

Issue 4 (L55): should this return `void`, since nothing gets returned in any branch?

Issue 5 (L112-L125): ScopedTracer doesn't obey the rule of three, but the SCOPED_TRACE macro expands to a call of its copy constructor. This should be safe in practice because of copy elision, I believe, but it is undesirable to rely on that. Unfortunately, changing it to `auto tracer = MakeScopedTracer(this, loc, begin, complete, __VA_ARGS__);` would infer an initializer_list; is that right?
tensorflowtensorflow
low gpu usage of rnn layer under mirroredstrategy
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Ubuntu 14.04
- TensorFlow installed from: binary
- TensorFlow version: 2.0.0b1
- Python version: 3.6.8
- CUDA/cuDNN version: 10.0
- GPU model and memory: Titan X

Describe the current behavior: RNN layers have poor performance and low GPU usage when used with MirroredStrategy. Monitoring nvidia-smi, part of the execution seems to run sequentially on each GPU.

Describe the expected behavior: with MirroredStrategy, the model is expected to run in parallel.

Code to reproduce the issue: consider the dummy training code below. It generates examples with random shapes and applies a stack of LSTMCells on batches of sequences on 3 GPUs. If you replace the RNN layer by e.g. a stack of Dense layers, the parallelism is visibly improved.

```python
import tensorflow as tf


class MyModel(tf.keras.layers.Layer):

    def __init__(self):
        super(MyModel, self).__init__()
        cell = tf.keras.layers.StackedRNNCells(
            [tf.keras.layers.LSTMCell(512) for _ in range(12)])
        self.rnn = tf.keras.layers.RNN(cell)

    def call(self, inputs):
        return self.rnn(inputs)


dataset = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform([10000], minval=1, maxval=80, dtype=tf.int32))
dataset = dataset.shuffle(10000)
dataset = dataset.map(lambda t: tf.zeros([t, 512]))
dataset = dataset.padded_batch(
    64, padded_shapes=tf.compat.v1.data.get_output_shapes(dataset))
dataset = dataset.repeat()
dataset = dataset.prefetch(1)

devices = ["/gpu:0", "/gpu:1", "/gpu:2"]
strategy = tf.distribute.MirroredStrategy(devices=devices)

with strategy.scope():
    dataset = strategy.experimental_distribute_dataset(dataset)
    model = MyModel()
    optimizer = tf.keras.optimizers.Adam()

    def _step(inputs):
        outputs = model(inputs)
        loss = tf.keras.losses.MeanSquaredError(
            reduction=tf.keras.losses.Reduction.SUM)(
                tf.zeros_like(outputs), outputs)
        variables = model.trainable_variables
        gradients = optimizer.get_gradients(loss, variables)
        optimizer.apply_gradients(zip(gradients, variables))
        return loss

    @tf.function
    def _train():
        for inputs in dataset:
            loss = strategy.experimental_run_v2(_step, args=(inputs,))

    _train()
```

cc @jkamalu
tensorflowtensorflow
add built-in helper functions for _bytes_feature, _float_feature and _int64_feature from the "using tfrecords and tf.example" page
Bug
The TF documentation page "Using TFRecords and tf.Example" lists these helper functions:

```python
# The following functions can be used to convert a value to a type compatible
# with tf.Example.

def _bytes_feature(value):
    """Returns a bytes_list from a string / byte."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(value):
    """Returns a float_list from a float / double."""
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _int64_feature(value):
    """Returns an int64_list from a bool / enum / int / uint."""
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
```

Searching GitHub code shows these have been cut and pasted 3453 times into other projects, and presumably many more times besides. Could TF include helper functions for these and other common tf.train.Feature / Example helpers?

System information:
- TensorFlow version: 1.13.1
- Are you willing to contribute it: yes, at some point

Describe the feature and the current behavior/state: Examples and Features are recommended as the canonical way to store TF datasets; however, understanding the protobufs is non-trivial: they are multiple layers deep and have a verbose API.

Will this change the current API? How? This will make the API simpler and more Pythonic for building the usual Features and Examples.

Who will benefit from this feature? All users building datasets.

Any other info: none.
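For context, a minimal usage sketch of the helpers above (the feature names here are made up for illustration):

```python
import tensorflow as tf

# Build one Example from scalar values using the three helpers.
example = tf.train.Example(features=tf.train.Features(feature={
    'image_raw': _bytes_feature(b'\x00\x01\x02'),
    'score': _float_feature(0.5),
    'label': _int64_feature(3),
}))

# Serialize it into a TFRecord file.
with tf.io.TFRecordWriter('data.tfrecord') as writer:
    writer.write(example.SerializeToString())
```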
tensorflowtensorflow
make documentation link to c++ code
Bug
System information:
- TensorFlow version: all
- Doc link: all the Python API documentation, for example tf.nn.conv2d_transpose

Describe the documentation issue: the Python API documentation often points to the Python code on GitHub where the operation is defined. For example, for tf.nn.conv2d_transpose it links to this code. Unfortunately, most operations are fairly thin wrappers around C++ operations, and since the link from Python to C++ is automatically generated (in this example it's gen_nn_ops.conv2d_backprop_input), it is not trivial to find the corresponding C++ code. The mapping is in Bazel code, really hard to find. Many people have been bothered by this problem, as you can see by searching for gen_nn_ops on StackOverflow (for example, this question).

It would be great if the documentation could point to both the Python function and the C++ operation. In this case, the source code is in tensorflow/core/kernels/conv_grad_input_ops.cc#L265. To find it, I had to search locally on my computer for the gen_nn_ops.py file, where I found that gen_nn_ops.conv2d_backprop_input just calls the Conv2DBackpropInput operation. But then I had to go back to GitHub to search for its C++ source code, since the TensorFlow binary does not include it. And it is tricky to find, since the C++ operation is also dynamically registered, so the actual name of the class is Conv2DCustomBackpropInputOp (search for REGISTER_KERNEL_BUILDER with "Conv2DBackpropInput").

Help: we welcome contributions by users. Will you be able to update or submit a PR (use the doc style guide) to fix the doc issue? I'm not sure where I could contribute this fix.