repository
stringclasses
156 values
issue title
stringlengths
1
1.01k
labels
stringclasses
8 values
body
stringlengths
1
270k
tensorflowtensorflow
unit test nn op pool op 3d test cpu break with onednn enable
Bug
click to expand issue type bug source source tensorflow version git head custom code no os platform and distribution ubuntu 20 04 mobile device n a python version 3 8 10 bazel version 5 3 0 gcc compiler version 9 3 1 cuda cudnn version n a gpu model and memory n a current behaviour shell tensorflow python kernel test nn op pool op 3d test cpu fail with tensorflow python framework error impl abortederror function node wrap mklnativeavgpool3dgrad device job localhost replica 0 task 0 device cpu 0 compute receive an exception status 2 message could not create a descriptor for a pool backward propagation primitive in file tensorflow core kernel mkl mkl avgpoole op cc 298 op avgpool3dgrad standalone code to reproduce the issue shell bazel bazelrc usertools cpu bazelrc test config sigbuild local cache cache test result no crosstool top sigbuild r2 10 config cuda crosstool toolchain job 16 test timeout 30 50 1 1 test env tf enable onednn opt 1 tensorflow python kernel test nn op pool op 3d test cpu relevant log output shell error testavgpool3dgrademptyinput main poolingtest poolingtest testavgpool3dgrademptyinput traceback most recent call last file root cache bazel bazel root e953b164f58eb4c9598ad736d787ff39 execroot org tensorflow bazel out k8 opt bin tensorflow python kernel test nn op pool op 3d test cpu runfiles org tensorflow tensorflow python kernel test nn op pool op 3d test py line 152 in testavgpool3dgrademptyinput t gen nn op avgpool3dgrad file root cache bazel bazel root e953b164f58eb4c9598ad736d787ff39 execroot org tensorflow bazel out k8 opt bin tensorflow python kernel test nn op pool op 3d test cpu runfiles org tensorflow tensorflow python util tf export py line 422 in wrapper return f kwargs file root cache bazel bazel root e953b164f58eb4c9598ad736d787ff39 execroot org tensorflow bazel out k8 opt bin tensorflow python kernel test nn op pool op 3d test cpu runfiles org tensorflow tensorflow python ops gen nn op py line 448 in avg pool3d grad op raise from not ok status e name file root cache bazel bazel root e953b164f58eb4c9598ad736d787ff39 execroot org tensorflow bazel out k8 opt bin tensorflow python kernel test nn op pool op 3d test cpu runfiles org tensorflow tensorflow python framework op py line 7215 in raise from not ok status raise core status to exception e from none pylint disable protect access tensorflow python framework error impl abortederror function node wrap mklnativeavgpool3dgrad device job localhost replica 0 task 0 device cpu 0 compute receive an exception status 2 message could not create a descriptor for a pool backward propagation primitive in file tensorflow core kernel mkl mkl avgpoole op cc 298 op avgpool3dgrad
tensorflowtensorflow
why my code get well f1 score on old version
Bug
click to expand issue type bug source binary tensorflow version 2 10 0 2 7 0 2 4 0 custom code no os platform and distribution no response mobile device no response python version 3 7 10 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour hi I adopt different tensorflow and kera version with my code but it return well f 1 score on old version the reproduction result be as follow tensorflow keras f1 score 2 7 0 2 7 0 0 29846938775510207 2 4 0 2 4 0 0 7376928728875827 I also test it with tensorflow 2 10 0 and the f 1 score be 0 6031746031746031 standalone code to reproduce the issue I post the code on colad click I to reproduce relevant log output no response
tensorflowtensorflow
typo in
Bug
click to expand issue type documentation bug source binary tensorflow version 2 10 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen here be the statement in the link there be two feature description detailed description of patent summary patent abastract and I think the word abastract be a typo standalone code to reproduce the issue shell and I think the word abastract be a typo in the link above relevant log output no response
tensorflowtensorflow
contain some information of
Bug
click to expand issue type documentation bug source binary tensorflow version 2 10 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen in the link below it should have introduce some info about the binaryfocalcrossentropy but it put some info of binarycrossentropy as below binary cross entropy loss be often use for binary 0 or 1 classification task the loss function require the follow input y true true label this be either 0 or 1 y pre predict value this be the model s prediction I e a single float point value which either represent a logit I e value in inf inf when from logit true or a probability I e value in 0 1 when from logit false standalone code to reproduce the issue shell click on the link above relevant log output no response
tensorflowtensorflow
two unit test failure on high cpu core count machine
Bug
click to expand issue type bug source source tensorflow version git head custom code no os platform and distribution ubuntu 20 04 mobile device n a python version 3 8 10 bazel version 5 3 0 gcc compiler version 10 2 1 cuda cudnn version n a gpu model and memory n a current behaviour shell tensorflow python data experimental kernel test service worker tag test and tensorflow python data experimental kernel test service local worker test timeout which be down to the element in the dataset be prefetche one per cpu core this can result in unexpected exception due to end of sequence cause the test to fail when the cpu core count be more than approximately 200 standalone code to reproduce the issue shell bazel test test timeout 30 50 1 1 flaky test attempt 1 test output all cache test result no config nonccl config mkl aarch64 threadpool copt mtune generic copt march armv8 a copt o3 test env tf enable onednn opt 1 verbose failure build tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 test tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 build test only tensorflow python data experimental kernel test service worker tag test tensorflow python data experimental kernel test service local worker test relevant log output shell error testmultipleconsumer test mode graph tfapiversion 2 main localworkerst localworkerst testmultipleconsumer test mode graph tfapiversion 2 testmultipleconsumer test mode graph tfapiversion 2 mode graph tf api version 2 traceback most recent call last file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow python data experimental kernel test service local worker test runfiles org tensorflow tensorflow python client session py line 1378 in do call return fn args file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow python data experimental kernel test service local worker test runfiles org tensorflow tensorflow python client session py line 1361 in run fn return self call tf sessionrun option feed dict fetch list file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow python data experimental kernel test service local worker test runfiles org tensorflow tensorflow python client session py line 1454 in call tf sessionrun return tf session tf sessionrun wrapper self session option feed dict tensorflow python framework error impl outofrangeerror end of sequence node iteratorgetnext 6
tensorflowtensorflow
fix typo error on calibrator py
Bug
fix typo error arugment to argument fix 58383
tensorflowtensorflow
misspell of the word argument
Bug
l97
tensorflowtensorflow
check fail in fix unigram candidate sampler
Bug
click to expand issue type bug source source tensorflow version tf 2 10 custom code yes os platform and distribution linux ubuntu 20 04 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell check fail range weight size 10 vs 3 standalone code to reproduce the issue shell import tensorflow as tf import numpy as np true class 0 1 2 3 4 5 6 7 8 9 num true 2 num sample 4 unique true range max 10 distortion 1 0 num reserve ids 0 num shard 1 shard 0 unigram 0 1 0 8 0 1 seed none sampler tf random fix unigram candidate sampler true class num true num sample unique range max distortion distortion num reserve id num reserve ids num shard num shard shard shard unigram unigram seed seed relevant log output shell f tensorflow core kernel range sampler cc 264 check fail range weight size 10 vs 3 abort
tensorflowtensorflow
fail to build tf with rocm 5 2
Bug
click to expand issue type bug source source tensorflow version tf 2 10 custom code no os platform and distribution linux ubuntu 20 04 4 lts mobile device no response python version 3 8 10 bazel version 5 3 0 gcc compiler version 9 4 0 cuda cudnn version no response gpu model and memory amd mi210 current behaviour when I build tf from source with rocm 5 3 0 the error happen error an error occur during the fetch of repository local config rocm traceback most recent call last file global home jzzeng package tensorflow tensorflow 2 10 third party gpu rocm configure bzl line 869 column 38 in rocm autoconf impl create local rocm repository repository ctx file global home jzzeng package tensorflow tensorflow 2 10 third party gpu rocm configure bzl line 547 column 35 in create local rocm repository rocm config get rocm config repository ctx bash bin find rocm config script file global home jzzeng package tensorflow tensorflow 2 10 third party gpu rocm configure bzl line 395 column 30 in get rocm config config find rocm config repository ctx find rocm config script file global home jzzeng package tensorflow tensorflow 2 10 third party gpu rocm configure bzl line 373 column 41 in find rocm config exec result exec find rocm config repository ctx script path file global home jzzeng package tensorflow tensorflow 2 10 third party gpu rocm configure bzl line 369 column 19 in exec find rocm config return execute repository ctx python bin c decompress and execute cmd file global home jzzeng package tensorflow tensorflow 2 10 third party remote config common bzl line 230 column 13 in execute fail error in fail repository command fail a bug happen error define rocrand version be either not present in file global software rocm rocm 5 3 0 rocrand include rocrand version h or its value be not an integer literal global software rocm rocm 5 3 0 rocrand include rocrand version h give copyright c 2022 advanced micro devices inc all right reserve ifndef rocm wrapper rocrand version h define rocm wrapper rocrand version h if define rocm no wrapper header warn define rocm wrapper give warning include file include include rocrand rocrand version h else give warning if define msc ver pragma message warn this file be deprecate use the header file from opt rocm 5 3 0 include rocrand rocrand version h by use include elif define gnuc pragma message warn this file be deprecate use the header file from opt rocm 5 3 0 include rocrand rocrand version h by use include endif include file define rocm wrapper give warning include include rocrand rocrand version h undef rocm wrapper give warn endif define rocm no wrapper header warn define rocm wrapper give warn endif rocm wrapper rocrand version h so in rocm 5 2 the version should be read from include rocrand rocrand version h instead of rocrand include rocrand version h it look that tensorflow do not handle this thing note that in rocm 5 1 rocrand include rocrand version h do have version information standalone code to reproduce the issue shell build from source against rocm 5 2 relevant log output no response
tensorflowtensorflow
versione scheme of cve patch
Bug
I post this question first to the tensorflow forum but I think it fit here well I m use tf 2 7 2 and be wonder if the recently publish cf e g cve 2022 36001 be fix in 2 7 2 as far as I understand they be fix first in 2 7 4 the github security advisory be not complety obvious to I it say patched version be 2 7 2 2 8 1 2 9 1 2 10 0 while 2 10 0 be patch and doesn t contain the vulnerability the other version be not patch when look at the code on github correspond to the tag also the release note only mention the cve in 2 7 4 2 8 3 2 9 2 and 2 10 0 other project like nodebb for example just mention the fix release version under patch version which make a clear split between affected and unaffected version e g so be there a release tag 2 7 2 with the cherry pick fix include or be the secure release only 2 7 4 2 8 2 8 3 2 9 2 9 2 if not what be the rationale behind list patched and unpatched version number together under patch version click to expand issue type documentation bug source source tensorflow version 2 7 2 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell standalone code to reproduce the issue shell relevant log output no response
tensorflowtensorflow
check fail in tf raw op assignaddvariableop when value have dtype of float64
Bug
click to expand issue type bug source binary tensorflow version 2 10 custom code no os platform and distribution ubuntu 20 04 4 lts mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell when pass value that have dtype of float64 it will trigger check fail result in abort core dump standalone code to reproduce the issue shell import tensorflow as tf from tensorflow python eager import context input1 tf raw op varhandleop dtype tf int32 shape 2 3 share name context anonymous name input2 tf constant dtype tf float64 tf raw op assignaddvariableop resource input1 value input2 relevant log output shell abort core dump 2022 10 26 05 24 02 781619 f tensorflow core framework tensor cc 719 check fail dtype expect dtype 3 vs 2 double expect get int32 type for more q to quit c to continue without page bt 10 thread 1 python receive signal sigabrt abort gi raise sig sig entry 6 at sysdep unix sysv linux raise c 50 50 sysdep unix sysv linux raise c no such file or directory gdb bt 10 0 gi raise sig sig entry 6 at sysdep unix sysv linux raise c 50 1 0x00007f316f18a859 in gi abort at abort c 79 2 0x00007f3156020c5c in tensorflow internal logmessagefatal logmessagefatal this 0x7fff07149690 in chrg vtt parm at tensorflow core platform default log cc 375 3 0x00007f312aa65733 in tensorflow tensor checktype this 0x47ddb80 expect dtype tensorflow dt double at tensorflow core framework tensor cc 719 4 0x00007f31477b0222 in tensorflow tensor shape this 0x47ddb80 new size at tensorflow core framework tensor h 890 5 0x00007f31477af708 in tensorflow tensor flat this 0x47ddb80 at tensorflow core framework tensor h 564 6 0x00007f314c308717 in tensorflow status tensorflow preparetoupdatevariable tensorflow opkernelcontext tensorflow tensor bool from usr local lib python3 8 dist package tensorflow python pywrap tensorflow internal so 7 0x00007f314c25eb12 in tensorflow assignupdatevariableop compute tensorflow opkernelcontext from usr local lib python3 8 dist package tensorflow python pywrap tensorflow internal so 8 0x00007f312ad60c82 in tensorflow threadpooldevice compute this 0x230a7b0 op kernel 0x50a2f10 context 0x7fff07149fe0 at tensorflow core common runtime threadpool device cc 184 9 0x00007f312ac0d09c in tensorflow anonymous namespace singlethreadedexecutorimpl run this 0x4d91eb0 args at tensorflow core common runtime single thread executor cc 445
tensorflowtensorflow
undefined symbol xla hlocomputation collectunreachableroot const with tf nightly pip package
Bug
click to expand issue type bug source binary tensorflow version tf nightly cpu 2 12 0 dev20221019 and new custom code yes os platform and distribution ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour start from version 2 12 0 dev20221019 horovod can not be build correctly with tf nightly tf nightly cpu 2 12 0 dev20221018 and early be fine upon import of horovod tensorflow an undefined symbol error be encounter tensorflow python framework error impl notfounderror usr local lib python3 8 dist package horovod tensorflow mpi lib cpython 38 x86 64 linux gnu so undefined symbol znk3xla14hlocomputation23collectunreachablerootsev in demangle form that be xla hlocomputation collectunreachableroot const have the implementation be move to a different dynamic library recently horovod currently link to libtensorflow framework so 2 and to pywrap tensorflow internal so standalone code to reproduce the issue shell pip install tf nightly cpu collect tf nightly cpu download tf nightly cpu 2 12 0 dev20221024 cp39 cp39 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 225 0 mb 225 0 225 0 mb 1 2 mb s eta 0 00 00 pip install v horovod tensorflow librarie l horovod tf nightly venv lib python3 9 site package tensorflow l libtensorflow framework so 2 horovod tf nightly venv lib python3 9 site package tensorflow python pywrap tensorflow internal so find tensorflow l horovod tf nightly venv lib python3 9 site package tensorflow l libtensorflow framework so 2 horovod tf nightly venv lib python3 9 site package tensorflow python pywrap tensorflow internal so find suitable version 2 12 0 dev20221024 minimum require be 1 15 0 python c import horovod tensorflow error message quote below under relevant log output relevant log output shell 2022 10 24 16 12 49 274846 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag traceback most recent call last file line 1 in file horovod tf nightly venv lib python3 9 site package horovod tensorflow init py line 27 in from horovod tensorflow import elastic file horovod tf nightly venv lib python3 9 site package horovod tensorflow elastic py line 24 in from horovod tensorflow function import broadcast object broadcast object fn broadcast variable file horovod tf nightly venv lib python3 9 site package horovod tensorflow function py line 24 in from horovod tensorflow mpi op import allgather broadcast broadcast file horovod tf nightly venv lib python3 9 site package horovod tensorflow mpi op py line 53 in raise e file horovod tf nightly venv lib python3 9 site package horovod tensorflow mpi op py line 50 in mpi lib load library mpi lib get ext suffix file horovod tf nightly venv lib python3 9 site package horovod tensorflow mpi op py line 45 in load library library load library load op library filename file horovod tf nightly venv lib python3 9 site package tensorflow python framework load library py line 54 in load op library lib handle py tf tf loadlibrary library filename tensorflow python framework error impl notfounderror horovod tf nightly venv lib python3 9 site package horovod tensorflow mpi lib cpython 39 x86 64 linux gnu so undefined symbol znk3xla14hlocomputation23collectunreachablerootsev
tensorflowtensorflow
tensorflow autograph could not transform function
Bug
click to expand issue type bug source source tensorflow version 2 10 0 custom code no os platform and distribution window 10 enterprise mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a function to find the value in one tensor close to another tensor return a warning it also return the output as expect standalone code to reproduce the issue shell import tensorflow as tf sampling function randomly sample space pair def space sampler nsim s tf random uniform shape nsim 1 minval 0 maxval 2 0 v tf random uniform shape nsim 1 minval 1 0 maxval 1 0 return s v find interior value close to boundary value tf function experimental relax shape true def boundarypenalty s int v int s bind v bind interior tf concat s int v int 1 bind tf concat s bind v bind 1 y n tf one 1 tf shape bind 1 for I in range bind shape 0 tf autograph experimental set loop option shape invariant y n tf tensorshape none tf autograph experimental set loop option shape invariant interior tf tensorshape none index tf argmin tf reduce sum tf math square difference interior bind I 1 y n tf concat y n interior index none 0 interior tf concat interior index interior index 1 0 y n y n 1 return y n call the function s interior v interior space sampler 12 s boundary v boundary space sampler 6 boundarypenalty s interior v interior s boundary v boundary relevant log output shell warn tensorflow autograph could not transform and will run it as be please report this to the tensorflow team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause set loop option must be the first statement in the loop block to silence this warning decorate the function with tf autograph experimental do not convert warning autograph could not transform and will run it as be please report this to the tensorflow team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause set loop option must be the first statement in the loop block to silence this warning decorate the function with tf autograph experimental do not convert
tensorflowtensorflow
tinyml o reilly error running test hello world test
Bug
hi I clone successfully the git repository from tinyml o reilly but when I run the command make f tensorflow lite micro tool make makefile test hello world test I get the follow answer make tensorflow lite micro tool make makefile no such file or directory make no rule to make target tensorflow lite micro tool make makefile stop and if I look into the directory micro I find only a readme md file what do I have to do thank you very much for your answer andr koller
tensorflowtensorflow
a check fail can be trigger in mappeek
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 and 2 12 0 dev20221018 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version cuda 11 5 gpu model and memory no response current behaviour shell a crash due to check fail can be trigger in mappeek standalone code to reproduce the issue shell import os os environ tf enable onednn opt 0 import tensorflow as tf import numpy as np print tf version for in range 20 try capacity 0 memory limit 0 dtype 0 tf uint64 dtype 1 tf float32 dtype dtype 0 dtype 1 container share name key tf saturate cast tf random uniform 6 14 4 minval 0 maxval 64 dtype tf int64 dtype tf int64 indice tf saturate cast tf random uniform 2 minval 0 maxval 64 dtype tf int64 dtype tf int32 re tf raw op mappeek capacity capacity memory limit memory limit dtype dtype container container share name share name key key index index except pass relevant log output shell f tensorflow core framework tensor cc 733 check fail 1 numelement 1 vs 336 must have a one element tensor abort core dump
tensorflowtensorflow
a check fail can be trigger in lstmblockcell
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 and 2 12 0 dev20221018 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version cuda 11 5 gpu model and memory no response current behaviour shell a crash due to check fail can be trigerre standalone code to reproduce the issue shell import os os environ tf enable onednn opt 0 import tensorflow as tf import numpy as np print tf version for in range 20 try forget bias 112 66590343649887 cell clip 67 12389445926587 use peephole false x tf saturate cast tf random uniform 2 16 minval 0 maxval 64 dtype tf int64 dtype tf half cs prev tf saturate cast tf random uniform 2 0 minval 0 maxval 64 dtype tf int64 dtype tf half h prev tf saturate cast tf random uniform 2 0 minval 0 maxval 64 dtype tf int64 dtype tf half w tf saturate cast tf random uniform 16 0 minval 0 maxval 64 dtype tf int64 dtype tf half wci tf saturate cast tf random uniform 5 minval 0 maxval 64 dtype tf int64 dtype tf half wcf tf saturate cast tf random uniform 16 minval 0 maxval 64 dtype tf int64 dtype tf half wco tf saturate cast tf random uniform 13 minval 0 maxval 64 dtype tf int64 dtype tf half b tf saturate cast tf random uniform 0 minval 0 maxval 64 dtype tf int64 dtype tf half re tf raw op lstmblockcell forget bias forget bias cell clip cell clip use peephole use peephole x x cs prev cs prev h prev h prev w w wci wci wcf wcf wco wco b b except pass relevant log output shell f tensorflow core kernel rnn lstm op gpu cu cc 277 non ok status gpulaunchkernel lstm gate grid dim 2d block dim 2d 0 cu stream gate data b datum cs prev datum wci data wcf datum wco datum o datum h datum ci datum cs data co data I data f datum forget bias cell clip batch size cell size status internal invalid configuration argument abort core dump
tensorflowtensorflow
a check fail can be triggerre in grublockcell
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 and 2 12 0 dev20221018 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version cuda 11 5 gpu model and memory no response current behaviour shell a check fail can be triggerre in grublockcell which can lead to a crash standalone code to reproduce the issue shell import tensorflow as tf import numpy as np print tf version for in range 20 try x tf random uniform 1 0 1 dtype tf float32 h prev tf random uniform 1 1 1 dtype tf float32 w ru tf random uniform 1 2 1 1 1 1 dtype tf float32 w c tf random uniform 1 1 1 dtype tf float32 b ru tf random uniform 2 dtype tf float32 b c tf random uniform 1 dtype tf float32 re tf raw op grublockcell x x h prev h prev w ru w ru w c w c b ru b ru b c b c except pass relevant log output shell f tensorflow core framework tensor shape cc 45 check fail ndim dim 2 vs 3 ask for tensor of 2 dimension from a tensor of 3 dimension abort core dump
tensorflowtensorflow
typesec error sinerelu advaced activation function
Bug
click to expand issue type bug source source tensorflow version 2 9 2 custom code yes os platform and distribution google colab mobile device no response python version 3 7 15 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell previously I have call sinerelu function in subclass model building method use string alia through get custom object unfortunately batchnormalization function fail in that method produce an interesting error of invalid tesnor rank don t want to be carry out I then swtiche to something call sequential method yada our favourite proven way but this time it show a typesec error with the sinerelu function which be totally weird because I have imprte and run the functional call in previous cell and all work perfectly fine except while I call the function within the core model architecture I be really adamant at this point to get sinerelu function to work so much I will even write custom code in the keras contrib library or tensorflow if that what take to fix this error I could get away with swish function but I win t if I m not wrong tensorflow can t match class typsec from keras contrib library standalone code to reproduce the issue shell import tensorflow as tf import numpy as np from tensorflow import kera from tensorflow keras import layer from tensorflow keras layer import conv2d dense flatten maxpool2d input dropout categoryencode batchnormalization relu from keras contrib layers advanced activation sinerelu import sinerelu from tensorflow keras import model from keras util generic util import get custom object from tensorflow kera preprocesse image import imagedatagenerator from keras layers import activation from keras layer pool max pooling2d import maxpooling2d call custom function get custom object update sinerelu activation sinerelu relevant log output shell 891 3 fail to convert r to tensor s type value name e 892 893 raise typeerror f could not build a typespec for value of 894 f unsupported type type value 895 typeerror could not build a typespec for of unsupported type
tensorflowtensorflow
keras load lstm gru model with constant mask initial state will raise error
Bug
click to expand issue type bug source binary tensorflow version tf 2 6 3 custom code yes os platform and distribution redhat 7 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell traceback most recent call last file home runner work sample code example py line 28 in dd tf keras model load model lstm h5 compile false option load option file home runner anaconda3 envs late env lib python3 8 site package keras util traceback util py line 67 in error handler raise e with traceback filter tb from none file home runner anaconda3 envs late env lib python3 8 site package keras backend py line 1470 in int shape shape x shape attributeerror float object have no attribute shape standalone code to reproduce the issue shell import numpy as np import tensorflow as tf import tensorflow kera backend as k input t tf keras input shape 28 10 batch size 2 dtype float32 rdm value np one 2 28 astype np float32 rdm value 20 0 mask value k constant np array rdm value dtype bool m state tf keras initializers glorotuniform shape 2 6 dtype float32 c state tf keras initializers glorotuniform shape 2 6 dtype float32 init state value m state c state mask value tf keras input shape 28 batch size 2 dtype bool lstm tf keras layers lstm 6 return sequence true return state true bias initializer random uniform time major false input input t mask mask value train false initial state init state value keras model tf keras model input t lstm keras model save lstm h5 load option tf save model loadoption allow partial checkpoint true dd tf keras model load model lstm h5 compile false option load option print dd input relevant log output no response
tensorflowtensorflow
check fail can be trigger in blocklstm
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 and 2 12 0 dev20221018 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version cuda 11 5 gpu model and memory no response current behaviour shell a check fail can be trigger standalone code to reproduce the issue shell import os os environ tf enable onednn opt 0 import tensorflow as tf import numpy as np print tf version for in range 20 try with tf device gpu 0 forget bias 121 22699269620765 cell clip 106 82307555235684 use peephole false seq len max tf saturate cast tf random uniform 13 11 0 minval 0 maxval 64 dtype tf int64 dtype tf int64 x tf random uniform 1 3 15 dtype tf float32 cs prev tf random uniform 3 0 dtype tf float32 h prev tf random uniform 3 0 dtype tf float32 w tf random uniform 15 0 dtype tf float32 wci tf random uniform 0 dtype tf float32 wcf tf random uniform 0 dtype tf float32 wco tf random uniform 0 dtype tf float32 b tf random uniform 0 dtype tf float32 re tf raw ops blocklstm forget bias forget bias cell clip cell clip use peephole use peephole seq len max seq len max x x cs prev cs prev h prev h prev w w wci wci wcf wcf wco wco b b except pass relevant log output shell 2022 10 20 10 45 56 646548 f tensorflow core framework tensor cc 733 check fail 1 numelement 1 vs 0 must have a one element tensor abort core dump
tensorflowtensorflow
significant increase in the size of macos wheel
Bug
click to expand issue type bug source source tensorflow version tf nightly 2 11 custom code no os platform and distribution macos mobile device no response python version 3 7 3 8 3 9 3 10 bazel version 5 3 0 gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell pr 55941 refactore and deduplicate tensorflow c dependency from pywrap tensorflow internal so into libtensorflow cc so this change increase the size of macos pip package from 240 mb to 350 mb standalone code to reproduce the issue shell relevant log output no response
tensorflowtensorflow
internal error fail to run on the give interpreter tensorflow lite kernel reshape cc 66 num input element num output element 1024 1633876594
Bug
click to expand issue type bug source binary tensorflow version tf2 3 2 8 custom code yes os platform and distribution ubuntu 20 04 4 lts mobile device android python version 3 6 3 7 bazel version 4 2 1 gcc compiler version gcc version 9 4 0 ubuntu 9 4 0 1ubuntu1 20 04 1 cuda cudnn version no response gpu model and memory no response current behaviour shell native crash standalone code to reproduce the issue shell the code can not be share due to company privacy regulation and I would like to know the possible reason for this type of issue relevant log output shell for tflite2 3 abort message scudo error race on chunk header at address 0x007b45478370 lib arm64 libtensorflowlite jni so java org tensorflow lite nativeinterpreterwrapper run 100 fortify pthread mutex lock call on a destroy mutex 0x7dc6aeab18 abort message scudo error race on chunk header at address 0x007b4546caf0 internal error fail to run on the give interpreter tensorflow lite kernel reshape cc 66 num input element num output element 1024 1633876594 for tflite2 8 internal error fail to run on the give interpreter tensorflow lite kernel reshape cc 85 num input element num output element 1024 1407367860
tensorflowtensorflow
tf range have accumulate floating point error on cpu
Bug
click to expand issue type bug source binary tensorflow version tf 2 10 0 custom code yes os platform and distribution linux ubuntu 20 04 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour tf range have accumulate great and great floating point error on cpu in the example below range 10 10 0 01 give 9 980267 9 990267 when the correct result should be 9 98 9 99 and this deviation be too much in my opinion standalone code to reproduce the issue shell import tensorflow as tf with tf device cpu tensor1 tf range 10 10 0 01 10 9 99 9 980267 9 990267 print tensor1 with tf device gpu tensor2 tf range 10 10 0 01 10 9 99 9 98 9 99 print tensor2 assert np allclose tensor1 tensor2 assertionerror relevant log output shell tf tensor 10 9 99 9 98 9 970266 9 980267 9 990267 shape 2000 dtype float32 tf tensor 10 9 99 9 98 9 969999 9 98 9 99 shape 2000 dtype float32 assertionerror
tensorflowtensorflow
tf scan have inconsistent result on cpu gpu
Bug
click to expand issue type bug source binary tensorflow version tf 2 10 custom code yes os platform and distribution linux ubuntu 20 04 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell tf scan have inconsistent result on cpu gpu on cpu 10 20 30 0 0 0 on gpu 10 20 30 5332261958806667264 8722786653543858176 4459375070678089728 standalone code to reproduce the issue shell import tensorflow as tf with tf device cpu input datum np array 10 20 30 11 12 13 result tf scan lambda a x a x 2 input datum o result numpy print o with tf device gpu input datum np array 10 20 30 11 12 13 result tf scan lambda a x a x 2 input datum o result numpy print o relevant log output shell 10 20 30 0 0 0 10 20 30 5332261958806667264 8722786653543858176 4459375070678089728
tensorflowtensorflow
I train custom pose classification model but its giving error at the time of inference
Bug
I m use window 10 system get this error I follow the official blog of tensorflow for model training this blog hasn t provide inference script so I try official for tflite inference but get isse can not set tensor get value of type float32 but expect type uint8 for input 0 name serve default input 0
tensorflowtensorflow
tf raw op truncatediv documentation be not consistent
Bug
click to expand issue type documentation bug source binary tensorflow version 2 9 1 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell tf raw op truncatediv be only support integer value while it be write in the first line in the documentation the parameter section be say that all numeric dtype be allow standalone code to reproduce the issue shell tf raw op truncatediv x np array 1 y np array 1 relevant log output shell tf raw op truncatediv x np array 1 y np array 1 2022 10 07 22 46 30 214942 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcuda so 1 dlerror libcuda so 1 can not open share object file no such file or directory 2022 10 07 22 46 30 214981 w tensorflow stream executor cuda cuda driver cc 269 fail call to cuinit unknown error 303 2022 10 07 22 46 30 214999 I tensorflow stream executor cuda cuda diagnostic cc 156 kernel driver do not appear to be run on this host 653368f87bf1 proc driver nvidia version do not exist 2022 10 07 22 46 30 215221 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag traceback most recent call last file usr lib python3 8 code py line 90 in runcode exec code self local file line 1 in file usr local lib python3 8 dist package tensorflow python util tf export py line 400 in wrapper return f kwargs file usr local lib python3 8 dist package tensorflow python ops gen math op py line 11558 in truncate div return truncate div eager fallback file usr local lib python3 8 dist package tensorflow python ops gen math op py line 11603 in truncate div eager fallback result execute execute b truncatediv 1 input input flat file usr local lib python3 8 dist package tensorflow python eager execute py line 54 in quick execute tensor pywrap tfe tfe py execute ctx handle device name op name tensorflow python framework error impl notfounderror could not find device for node node truncatediv truncatediv t dt double all kernel register for op truncatediv device gpu t in dt uint64 device gpu t in dt uint32 device gpu t in dt int8 device gpu t in dt int64 device gpu t in dt int16 device gpu t in dt uint16 device gpu t in dt uint8 device cpu t in dt int64 device cpu t in dt int32 device cpu t in dt int16 device cpu t in dt int8 device cpu t in dt uint64 device cpu t in dt uint32 device cpu t in dt uint16 device cpu t in dt uint8 op truncatediv
tensorflowtensorflow
another check fail in conv2dbackpropfilter
Bug
click to expand issue type bug source binary tensorflow version tf 2 10 and 2 11 0 dev20221005 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version cuda 11 5 gpu model and memory no response current behaviour shell in current implementation of conv2dbackpropfilter argument shape be not check carefully as a result a check fail can be trigger which can lead to crash and do the bug be similar to 57980 but have different input and output so they might have different cause standalone code to reproduce the issue shell import os os environ tf enable onednn opt 1 import tensorflow as tf print tf version with tf device gpu 0 input tf random uniform 1 1 1 1 1 dtype tf bfloat16 filter size tf saturate cast tf random uniform 1 minval 128 maxval 129 dtype tf int64 dtype tf int32 out backprop tf random uniform 1 1 1 1 dtype tf bfloat16 stride 1 1 1 use cudnn on gpu true padding same explicit padding datum format nhwc dilation 1 1 1 1 re tf raw op conv2dbackpropfilter input input filter size filter size out backprop out backprop stride stride use cudnn on gpu use cudnn on gpu padding padding explicit padding explicit padding datum format datum format dilation dilation relevant log output shell 2022 10 05 16 56 46 002944 f tensorflow core util tensor format h 427 check fail index 0 index num total dim invalid index from the dimension 3 0 c abort core dump
tensorflowtensorflow
check fail in conv2dbackpropfilter
Bug
click to expand issue type bug source binary tensorflow version tf 2 10 and 2 11 0 dev20221005 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version cuda 11 5 gpu model and memory no response current behaviour shell in current implementation of conv2dbackpropfilter argument shape be not check carefully as a result a check fail can be trigger which can lead to a crash and do the bug can be replicate when run with gpu standalone code to reproduce the issue shell import os os environ tf enable onednn opt 1 import tensorflow as tf print tf version with tf device gpu 0 input tf random uniform 1 1 1 1 1 1 dtype tf bfloat16 filter size tf saturate cast tf random uniform 1 minval 128 maxval 129 dtype tf int64 dtype tf int32 out backprop tf random uniform dtype tf bfloat16 stride 1 1 1 1 1 1 use cudnn on gpu true padding valid explicit padding datum format nhwc dilation 1 1 1 1 re tf raw op conv2dbackpropfilter input input filter size filter size out backprop out backprop stride stride use cudnn on gpu use cudnn on gpu padding padding explicit padding explicit padding datum format datum format dilation dilation relevant log output shell 2022 10 05 16 49 28 663172 f tensorflow core kernel mkl mkl conv grad filter op cc 671 check fail tensorshapeutil makeshape filter tensor vec filter tf shape ok true 0 vs 1 abort core dump
tensorflowtensorflow
undocumented explicit padding exception in tf nn max pool and tf nn max pool2d
Bug
click to expand issue type documentation bug source source tensorflow version tf 2 10 custom code yes os platform and distribution microsoft window 11 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell tf nn max pool raise 2 exception that be not mention in the documentation 1 for input tensor of rank 5 explicit padding be not support this requirement be likely to be infer by most user but maybe a mention in raise section might be useful for some 2 nchw vect c be not support with explicit padding this be also the case for tf nn max pool1d and tf nn max pool2d nchw vect c be mention as a valid option in a datum format docstring of tf nn max pool2d for max pool1d the documentation be probably ok as it be because user be unlikely to set datum format to nchw vect c for max pool and max pool2d a suggestion be to revise the docstring of datum format or add this requirement in a raise section or remove nchw vect c from the documentation if it be not allow in general standalone code to reproduce the issue shell import tensorflow as tf from tensorflow python op import array op x array op one 2 2 2 2 2 tf nn max pool x ksize 2 stride 2 padding 0 0 1 1 1 1 0 0 relevant log output shell valueerror explicit padding be not support with an input tensor of rank 5 receive padding 0 0 1 1 1 1 0 0 standalone code to reproduce the issue shell modify from max pool s api doc example import tensorflow as tf from tensorflow python op import array op matrix tf constant 0 0 1 7 0 2 0 0 5 2 0 0 0 0 9 8 reshape tf reshape matrix 1 4 4 1 result tf nn max pool2d reshape ksize 2 stride 2 padding 0 0 1 1 1 1 0 0 datum format nchw vect c relevant log output shell valueerror datum format nchw vect c be not support with explicit padding receive padding 0 0 1 1 1 1 0 0
tensorflowtensorflow
tf nn gelu raise an exception when feature dtype be not a float point tensor
Bug
click to expand issue type documentation bug source source tensorflow version tf 2 10 custom code yes os platform and distribution no response mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell tf nn gelu raise an exception when feature dtype be not a float point tensor the documentation do not mention this requirement a suggestion be to revise the docstring of feature or add a raise section to tf nn gelu documentation issue 54475 and pr 54550 previously discuss about input type a change be make in the code but not in the documentation standalone code to reproduce the issue shell x tf constant 3 1 0 1 3 y tf nn gelu x relevant log output shell valueerror feature dtype must be a float point tensor receive feature dtype
tensorflowtensorflow
tf nn ctc loss documentation do not mention that blank index must be provide when label be a sparsetensor
Bug
click to expand issue type documentation bug source source tensorflow version tf 2 10 custom code no os platform and distribution microsoft window 11 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell tf nn ctc loss raise an exception when label be a sparsetensor and blank index be not provide the documentation do not mention this a suggestion be to revise the docstring for either label or blank index or add a raise section to tf nn ctc loss documentation the code below be modify from python kernel test nn op ctc loss op test py standalone code to reproduce the issue shell import tensorflow as tf from tensorflow python op import random op from tensorflow python framework import dtype from tensorflow python op import array op batch size 8 num label 6 max label length 5 num frame 12 label random op random uniform batch size max label length minval 1 maxval num label dtype dtype int64 logit random op random uniform num frame batch size num label label length random op random uniform batch size minval 2 maxval max label length dtype dtype int64 label mask array op sequence mask label length maxlen max label length dtype label length dtype label label mask logit length num frame batch size sp label tf sparse from dense label tf nn ctc loss sp label logit label length logit length relevant log output shell traceback most recent call last file tfteste py line 23 in tf nn ctc loss sp label logit label length logit length file tensorflow python util traceback util py line 153 in error handler raise e with traceback filter tb from none file c tensorflow python op ctc op py line 937 in ctc loss v3 raise valueerror valueerror argument blank index must be provide when label be a sparsetensor
tensorflowtensorflow
tf nn conv2d transpose abort with large output shape
Bug
click to expand issue type bug source binary tensorflow version 2 10 0 custom code no os platform and distribution ubuntu 18 04 4 lts x86 64 mobile device no response python version 3 7 6 bazel version no response gcc compiler version no response cuda cudnn version n a gpu model and memory no response current behaviour shell tf nn conv2d transpose crash with abort with large output shape standalone code to reproduce the issue shell import numpy as np import tensorflow as tf tf nn conv2d transpose input np one 2 2 2 2 output shape 114078056 179835296 stride 10 filter 1 relevant log output shell 2022 10 03 23 45 34 556541 w tensorflow compiler xla stream executor platform default dso loader cc 64 could not load dynamic library libcuda so 1 dlerror libcuda so 1 can not open share object file no such file or directory 2022 10 03 23 45 34 556569 w tensorflow compiler xla stream executor cuda cuda driver cc 265 fail call to cuinit unknown error 303 2022 10 03 23 45 34 556596 I tensorflow compiler xla stream executor cuda cuda diagnostic cc 163 no nvidia gpu device be present dev nvidia0 do not exist 2022 10 03 23 45 34 556893 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 avx512f fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2022 10 03 23 45 34 595200 f tensorflow core framework tensor shape cc 201 non ok status initdim dim size status invalid argument encounter overflow when multiply 41030521935729152 with 22001 result 1 abort core dump
tensorflowtensorflow
extension file tensorflow core platform default rule cc bzl have error when install tflite support 0 4 2 use bazel 4 2 2
Bug
click to expand issue type bug source source tensorflow version tf 2 custom code no os platform and distribution raspbian buster mobile device no response python version 3 7 bazel version 4 2 2 gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell I try to install tflite support 0 4 2 use bazel on my raspberry pi raspbian buster since I can t install it directly use pip3 I expect that instal from bazel will result in complete installation of tflite support however when I try to build the package it give I this error I try several time but the result be still the same error traceback most recent call last file home pi cache bazel bazel pi e377461fba63f25ab8757a7080bd56fe external org tensorflow tensorflow core platform default rule cc bzl line 6 column 28 in cc shared library native cc shared library error no native function or rule cc shared library error analysis of target tensorflow lite support tool pip package build pip package fail build aborted error loading package org tensorflow tensorflow in home pi cache bazel bazel pi e377461fba63f25ab8757a7080bd56fe external org tensorflow tensorflow tensorflow bzl in home pi cache bazel bazel pi e377461fba63f25ab8757a7080bd56fe external org tensorflow tensorflow core platform rule cc bzl extension file tensorflow core platform default rule cc bzl have error the code in tensorflow core platform default rule cc bzl be provide an indirection layer to bazel cc rule load tensorflow core platform default rule cc bzl cc binary cc binary cc import cc import cc library cc library cc shared library cc shared library cc test cc test cc binary cc binary cc import cc import cc library cc library cc shared library cc shared library cc test cc test standalone code to reproduce the issue shell I just use this command no custom code bazel build c opt tensorflow lite support tool pip package build pip package that I learn from the command run ok but eventually fail build do not complete successfully 7 package load 10 target configure thank relevant log output no response
tensorflowtensorflow
markdown failure for tf dialect documentation
Bug
click to expand issue type documentation bug source source tensorflow version n a custom code no os platform and distribution n a mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the bottom part of this documentation be completely a mess now look like a markdown failure link the syntax error be somewhere in the tf tensorarrayconcatv3 mlir tf tensorarrayconcatv3op op standalone code to reproduce the issue shell open your chrome scroll to the very bottom relevant log output no response
tensorflowtensorflow
random set seed do not work on tensorflow macos
Bug
click to expand issue type bug source source tensorflow version 2 10 custom code no os platform and distribution macos 13 0 ventura beta 7 mobile device no response python version 3 9 13 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory m1 metal current behaviour shell use tf random set seed work perfectly on google colab environment yet use tensorflow macos version it do not seem to work at all it s not give I an error either everytime I run my code I get different result code I give as an example use numpy but even use datum import from panda as dataframe do not work which work on google colab my miniconda installation and python installation be all fresh I be use python version 3 10 but the apple website do not mention that so uninstalled miniconda and reinstall it use an old version effectively downgrade python but that do not fix the problem either standalone code to reproduce the issue shell import tensorflow as tf import numpy as np x np array 7 0 4 0 1 0 2 0 5 0 8 0 11 0 14 0 y np array 3 0 6 0 9 0 12 0 15 0 18 0 21 0 24 0 tf random set seed 42 model tf keras sequential tf keras layer dense 1 model compile loss tf keras loss mae optimizer tf keras optimizer sgd metric mae model fit tf expand dim x axis 1 y epoch 5 relevant log output first time epoch 1 5 2022 09 30 18 25 08 200023 I tensorflow core grappler optimizer custom graph optimizer registry cc 114 plugin optimizer for device type gpu be enable 1 1 1s 583ms step loss 15 3376 mae 15 3376 epoch 2 5 1 1 0s 18ms step loss 15 0563 mae 15 0563 epoch 3 5 1 1 0s 21ms step loss 14 8426 mae 14 8426 epoch 4 5 1 1 0s 17ms step loss 14 7101 mae 14 7101 epoch 5 5 1 1 0s 38ms step loss 14 5776 mae 14 5776 second time epoch 1 5 1 1 0s 222ms step loss 11 3347 mae 11 3347 epoch 2 5 1 1 0s 10ms step loss 11 2022 mae 11 2022 epoch 3 5 1 1 0s 15ms step loss 11 0697 mae 11 0697 epoch 4 5 1 1 0s 16ms step loss 10 9372 mae 10 9372 epoch 5 5 1 1 0s 19ms step loss 10 8047 mae 10 8047 2022 09 30 18 28 03 323935 I tensorflow core grappler optimizer custom graph optimizer registry cc 114 plugin optimizer for device type gpu be enable
tensorflowtensorflow
don t show correct formula
Bug
click to expand issue type documentation bug source source tensorflow version 2 8 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen definition mathcal l y s sum I y I cdot log leave frac exp s I sum j exp s j right standalone code to reproduce the issue shell a bug happen definition mathcal l y s sum I y I cdot log leave frac exp s I sum j exp s j right relevant log output no response
tensorflowtensorflow
template code do not work
Bug
click to expand issue type bug source source tensorflow version 2 8 custom code no os platform and distribution colab mobile device colab python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen standalone code to reproduce the issue shell a bug happen relevant log output shell input 0 of layer batch normalization be incompatible with the layer expect axis 2 of input shape to have value 1 but receive input with shape none none 136 call argument receive by layer model f type functional input mask tf tensor shape none none dtype bool float feature tf tensor shape none none 136 dtype float64 training true mask none
tensorflowtensorflow
tensorflow lite do not support tf math rsqrt operation conversion
Bug
system information os platform and distribution e g linux ubuntu 16 04 window 10 64bit tensorflow instal from source or binary python 3 8 2 pip 22 1 pip package tensorflow version or github sha if from source tf version 2 9 0 provide the text output from tflite convert copy and paste here 2022 09 29 14 49 40 084984 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx avx2 to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2022 09 29 14 49 40 575045 I tensorflow core common runtime gpu gpu device cc 1532 create device job localhost replica 0 task 0 device gpu 0 with 2807 mb memory device 0 name nvidia geforce mx230 pci bus i d 0000 01 00 0 compute capability 6 1 warning tensorflow compile the loaded model but the compile metric have yet to be build model compile metric will be empty until you train or evaluate the model c python38 lib site package tensorflow lite python convert py 766 userwarning statistic for quantize input be expect but not specify continue anyway warning warn statistic for quantize input be expect but not 2022 09 29 14 49 40 988706 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 362 ignore output format 2022 09 29 14 49 40 989008 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 365 ignore drop control dependency 2022 09 29 14 49 40 990306 I tensorflow cc save model reader cc 43 reading savedmodel from c user dell appdata local temp tmpv6cmtx6 m 2022 09 29 14 49 40 993266 I tensorflow cc save model reader cc 81 read meta graph with tag serve 2022 09 29 14 49 40 993545 I tensorflow cc save model reader cc 122 reading savedmodel debug info if present from c user dell appdata local temp tmpv6cmtx6 m 2022 09 29 14 49 40 997743 I tensorflow compiler mlir mlir graph optimization pass cc 354 mlir v1 optimization pass be not enable 2022 09 29 14 49 40 998221 I tensorflow cc save model loader cc 228 restore savedmodel bundle 2022 09 29 14 49 41 024607 I tensorflow cc save model loader cc 212 running initialization op on savedmodel bundle at path c user dell appdata local temp tmpv6cmtx6 m 2022 09 29 14 49 41 030886 I tensorflow cc save model loader cc 301 savedmodel load for tag serve status success ok take 40573 microsecond 2022 09 29 14 49 41 037367 I tensorflow compiler mlir tensorflow util dump mlir util cc 263 disable mlir crash reproducer set env var mlir crash reproducer directory to enable 2022 09 29 14 49 41 053320 I tensorflow compiler mlir lite flatbuffer export cc 1972 estimate count of arithmetic op 0 op equivalently 0 mac estimate count of arithmetic op 0 op equivalently 0 mac fully quantize 0 inference type 6 input inference type 9 output inference type 9 error illegal scale inf standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook code import tensorflow as tf import numpy as np import pathlib def make model datum tf constant np arange 60000 reshape 200 10 10 3 60000 dtype tf float32 input var tf keras input shape 10 10 3 out var tf math rsqrt input var model tf keras model model input var out var model datum return model datum def save model model path output sqrt h5 tf keras model save model model path h5 sqrt h5 def convert model model datum def representative datum gen for input value in datum input value input value np newaxis yield input value shape should be 1
tensorflowtensorflow
nan error multiple gpu training with mirroredstrategy rtx a5000 pcie4 0
Bug
click to expand issue type bug source binary tensorflow version tf 2 10 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 10 bazel version no response gcc compiler version no response cuda cudnn version cuda11 0 cudnn8 1 0 gpu model and memory rtx a5000 24 gb current behaviour shell I move 8 nvidia rtx a5000 from an old server asrockrack 3u8 g c612 2 intel xeon e5 2640 v4 with 8 pcie3 0 lane detail here specification to a more recent server populate with 10 nvidia rtx a5000 include the 8 one from the former server 2 new rtx a5000 supermicro sys 420gp tnr 2 intel xeon gold 5317 with 12 pcie4 0 lane detail here I keep the same os version ubuntu 20 04 with kernel 5 13 0 52 generic and same driver version 515 57 all the 10 rtx a5000 be visible with nvidia smi but when I start to execute my code I have nan in loss and model s output after few iteration less than 10 sometimes at the second iteration it happen only when I use multiple gpu I write code that reproduce the error I use docker image provide by tensorflow tensorflow tensorflow 2 10 0 gpu jupyter sha tensorflow tensorflow sha256 a72deb34d32e26cf4253608b0e86ebb4e5079633380c279418afb5a131c499d6 choice of cross device op tf distribute reductiontoonedevice reduce to device cpu 0 if I use tf distribute mirroredstrategy cross device op tf distribute ncclallreduce the call to distribute train step be hang on the new server need to docker stop the container whatever the cross device op the script be work properly on the old server what you can see in the output everything be ok for the first replicate no nan in his input per sample loss but for the second replicate if 2 gpu we have nan value in image use in train step and nan what be strange image from the distribute dataset do not contain nan value while image collect after the train step contain nan and these nan value be only present on the second replica or more if more than 2 gpu never on the 1st replica after search on past issue and stackoverflow as first action I disable intel vmx within bio option same error after re install driver even to 515 76 same error after I roll back to 515 57 also I add old nvidia titanx pascal into the old server to test the code and before when the 8 rtx a5000 be in the old server everything be work well with my experiment feel free to ask I any question or more detail good regard standalone code to reproduce the issue shell import tensorflow as tf import numpy as np import os from pprint import pprint def main print f tf version git version tf version git version print f tf version version tf version version print tf config list physical device pprint tf config list physical device fashion mnist tf keras datasets fashion mnist train image train label fashion mnist load datum create 4d np ndarray n h w n h w 1 train image train image np newaxis scale pixel to 0 1 train image train image np float32 255 get the tf distribute strategy strategy tf distribute mirroredstrategy cross device op tf distribute ncclallreduce test strategy tf distribute mirroredstrategy cross device op tf distribute reductiontoonedevice reduce to device cpu 0 print f strategy strategy print f number of device strategy num replicas in sync some training argument buffer size len train image batch size per replica 64 global batch size batch size per replica strategy num replicas in sync create tf datum dataset train dataset tf datum dataset from tensor slice train image train label shuffle buffer size batch global batch size extend to a distribute one version train dist dataset strategy experimental distribute dataset train dataset with strategy scope set reduction to none so you can do the reduction afterwards and divide by global batch size loss object tf keras loss sparsecategoricalcrossentropy from logit true reduction tf keras loss reduction none def compute loss label prediction per example loss loss object label prediction average loss tf nn compute average loss per example loss global batch size global batch size return average loss per example loss with strategy scope model create model optimizer tf keras optimizer adam learning rate 0 01 def train step input image label input with tf gradienttape as tape prediction model image training true average loss per example loss compute loss label prediction gradient tape gradient average loss model trainable variable optimizer apply gradient zip gradient model trainable variable return average loss on all example per example loss batch size tensor and input image return average loss per example loss image tf function def distribute train step dataset input per replica average loss per replica per example loss per replica image strategy run train step args dataset input return strategy reduce tf distribute reduceop sum per replica average loss axis none per replica per example loss per replica image train dist iter iter train dist dataset iteration 0 for in range 10 get next image label from the dataset image label next train dist iter do we have nan in these image print f iteration iteration if strategy num replicas in sync 1 print t nan in per replica image from dataset replica np isnan image numpy any for replica image in enumerate image value else if single replica cpu or mono gpu specific print statement print t nan in per replica image from dataset 0 np isnan image numpy any average loss per replica per sample loss per replica image distribute train step image label print f t average loss average loss if strategy num replicas in sync 1 do we have nan in image return by train step and distribute train step print t nan in per replica image replica np isnan image numpy any for replica image in enumerate per replica image value do we have nan in loss per sample per replica print t nan in per replica per sample loss replica np isnan per sample loss numpy any for replica per sample loss in enumerate per replica per sample loss value print n else if single replica cpu or mono gpu specific print statement do we have nan in image return by train step and distribute train step print t nan in per replica image 0 np isnan per replica image numpy any do we have nan in loss per sample per replica print t nan in per replica per sample loss 0 np isnan per replica per sample loss numpy any print n iteration 1 def create model model tf keras sequential tf keras layer conv2d 32 3 activation relu tf keras layer maxpooling2d tf keras layer conv2d 64 3 activation relu tf keras layer maxpooling2d tf keras layer flatten tf keras layer dense 64 activation relu tf keras layer dense 10 return model if name main main relevant log output shell on new server with 2 gpu nan in image use in the train step after 1 iteration cuda visible device 0 1 python nan error tf py log iteration 0 nan in per replica image from dataset 0 false 1 false average loss 2 320328712463379 nan in per replica image 0 false 1 false nan in per replica per sample loss 0 false 1 false iteration 1 nan in per replica image from dataset 0 false 1 false average loss nan nan in per replica image 0 false 1 true nan in per replica per sample loss 0 false 1 false iteration 2 nan in per replica image from dataset 0 false 1 false average loss nan nan in per replica image 0 false 1 true nan in per replica per sample loss 0 true 1 false iteration 3 nan in per replica image from dataset 0 false 1 false average loss nan nan in per replica image 0 false 1 true nan in per replica per sample loss 0 true 1 false on new server with 1 gpu no nan no issue shell cuda visible device 0 python nan error tf py log iteration 0 nan in per replica image from dataset 0 false average loss 2 3019003868103027 nan in per replica image 0 false nan in per replica per sample loss 0 false iteration 1 nan in per replica image from dataset 0 false average loss 2 2981200218200684 nan in per replica image 0 false nan in per replica per sample loss 0 false iteration 2 nan in per replica image from dataset 0 false average loss 2 0697689056396484 nan in per replica image 0 false nan in per replica per sample loss 0 false iteration 3 nan in per replica image from dataset 0 false average loss 1 8743340969085693 nan in per replica image 0 false nan in per replica per sample loss 0 false if I use the same script with 1 or 2 or more gpu on the old server everything be work well
tensorflowtensorflow
tflite give wrong result after reshape with bool tensor
Bug
1 system information tf 2 10 0 2 code provide code to help we reproduce your issue use one of the follow option option a reference colab notebook option b paste your code here or provide a link to a custom end to end colab python import tensorflow as tf print tf version from keras import layer def get tflite callable model inp dict converter tf lite tfliteconverter from concrete function func model call get concrete function inp dict trackable obj model converter target spec support op tf lite opsset tflite builtin enable tensorflow lite op tf lite opsset select tf op enable tensorflow op tflite bytes converter convert interpreter tf lite interpreter model content tflite bytes runner interpreter get signature runner return runner class mymodule tf module def init self super init self const tf constant true shape 2 2 dtype tf bool tf function def call self x x tf logical or self const x work fine x tf reshape x 2 2 1 1 after reshape the result be wrong return x inp x tf constant true shape 2 dtype tf bool m mymodule out m inp print f out runner get tflite callable m inp error out runner inp print f out python 2 10 0 true true true true output 0 array true false false false wrong result with tflite
tensorflowtensorflow
roberta example from tfhub produce error during variant host device copy non dma copy attempt of tensor type string
Bug
click to expand issue type bug source source tensorflow version 2 8 2 custom code yes os platform and distribution no response mobile device no response python version 3 7 14 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen I would like to use the roberta base model from tfhub I be try to run the example below although I get an error when I try to feed sentence to model as input I get the follow error invalid argument during variant host device copy non dma copy attempt of tensor type string I be use python 3 7 tensorflow 2 8 standalone code to reproduce the issue shell define a text embed model text input tf keras layers input shape dtype tf string preprocessor hub keraslayer encoder input preprocessor text input encoder hub keraslayer trainable true encoder output encoder encoder input pool output encoder output pool output batch size 768 sequence output encoder output sequence output batch size seq length 768 model tf keras model text input pool output you can embed your sentence as follow sentence tf constant your text here print model sentence relevant log output shell invalidargumenterror graph execution error 2 root error s find 0 invalid argument during variant host device copy non dma copy attempt of tensor type string node map tensorarrayunstack tensorlistfromtensor model preprocesse statefulpartitionedcall statefulpartitionedcall statefulpartitionedcall bpe sentencepiece tokenizer statefulpartitionedcall raggedfromrowsplit 1 rowpartitionfromrowsplit assert non negative assert less equal assert assertguard else 18720 raggedfromrowsplit 1 rowpartitionfromrowsplit assert non negative assert less equal assert assertguard assert data 0 135 1 invalid argument during variant host device copy non dma copy attempt of tensor type string node map tensorarrayunstack tensorlistfromtensor 0 successful operation 0 derive error ignore op inference train function 534484
tensorflowtensorflow
asset file vanishe after load save the model twice
Bug
click to expand issue type bug source binary tensorflow version tf 2 4 custom code no os platform and distribution linux mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour when I create the tf module with an asset file inside the textfileinitializer and save load the module twice the asset file will vanish after the 2nd save and the loading will fail complain the absolute path of that file can not be find import shutil import tensorflow as tf class primarymodule tf module def init self name none super primarymodule self init name name initializer tf lookup textfileinitializer chzhu vocab txt key dtype tf int64 key index 0 value dtype tf int64 value index 1 delimiter self table tf lookup staticvocabularytable initializer 1 tf function def call self input return self table lookup input model primarymodule print model tf constant 509323409 dtype tf int64 tf save model save model asset model import new tf save model load asset model print import new tf constant 509323409 dtype tf int64 tf save model save import new wrap asset model error when load the model for the second time it will be try to find the original path instead of path in the asset folder inside model shutil rmtree asset model shutil rm chzhu vocab txt tf save model load wrap asset model error see full stack trace below filenotfounderror chzhu vocab txt no such file or directory when change initializer to be self initializer in the object constructor it will work as normal import tensorflow as tf class primarymodule tf module def init self name none super primarymodule self init name name self initializer tf lookup textfileinitializer chzhu vocab txt key dtype tf int64 key index 0 value dtype tf int64 value index 1 delimiter self table tf lookup staticvocabularytable self initializer 1 tf function def call self input return self table lookup input same saving logic I do a deep dive into the model save logic in tf 2 4 and find the difference here when save the model for the first time the model saver will save the object dependency into the savedobjectgraph object graph def in the meta graph def object graph def node child node i d 1 local name table child node i d 2 local name signature child node i d 5 local name call user object identifi generic user object version producer 1 min consumer 1 node child node i d 3 local name initializer child node i d 6 local name create resource child node i d 7 local name initialize child node i d 8 local name destroy resource resource as we can see above initializer be not the attribute of the top level object and be the child of vocabularytable which be a resource object the dependency will look like object table initializer asset during the first time load the loader will try to recreate all the object in the savedobjectgraph the resource object will be recreate as restoredresource which inherit the base trackable not autotrackable when call setattr in the add object graph edge it will not update checkpoint dependency unlike normal object inherit autotrackable which override the setattr function to update the attribute dependency therefore the table initializer asset dependency be not entirely recover during the second time save it will find out the asset object in the object dependency via breadth first traversal and add that into asset info in the fill meta graph def we will create asset initializer op for all asset in asset info in the fill meta graph def the asset initializer and its downstream variable will be write into init op and the main graph will be update to initialize the variable from the corresponding asset initializer however since in the previous loading stage the checkpoint dependency will be empty for the table attribute and we can not access the asset object via breadth first traversal of the object graph thus we be get empty asset info and 2 and 3 above will not happen standalone code to reproduce the issue shell import shutil import tensorflow as tf class primarymodule tf module def init self name none super primarymodule self init name name initializer tf lookup textfileinitializer chzhu vocab txt key dtype tf int64 key index 0 value dtype tf int64 value index 1 delimiter self table tf lookup staticvocabularytable initializer 1 tf function def call self input return self table lookup input model primarymodule print model tf constant 509323409 dtype tf int64 tf save model save model asset model import new tf save model load asset model print import new tf constant 509323409 dtype tf int64 tf save model save import new wrap asset model error when load the model for the second time shutil rmtree asset model tf save model load wrap asset model relevant log output shell traceback most recent call last file home chzhu tf2 trainer tf2 trainer component training test asset testing py line 32 in tf save model load custom model wrap asset model file home chzhu tf2 trainer build tf2 trainer component training environment development venv lib python3 7 site package tensorflow python save model load py line 859 in load return load internal export dir tag option root file home chzhu tf2 trainer build tf2 trainer component training environment development venv lib python3 7 site package tensorflow python save model load py line 893 in load internal str err n if try to load on a different device from the filenotfounderror custom model asset model asset chzhu vocab txt no such file or directory node statefulpartitionedcall text file init initializetablefromtextfilev2 op inference restore function body 375
tensorflowtensorflow
tf image crop and resize crash abort when give num box 0
Bug
click to expand issue type bug source binary tensorflow version 2 11 0 dev20220916 custom code no os platform and distribution ubuntu 18 04 4 lts x86 64 mobile device no response python version 3 7 6 bazel version no response gcc compiler version no response cuda cudnn version n a gpu model and memory no response current behaviour shell tf image crop and resize crash abort when give num box 0 standalone code to reproduce the issue shell import numpy as np import tensorflow as tf tf image crop and resize crop size 1 1 box index np one 0 1 box np one 0 4 image np one 2 2 2 2 relevant log output shell 2022 09 19 20 55 05 906144 f tensorflow core framework tensor shape cc 45 check fail ndim dim 1 vs 2 ask for tensor of 1 dimension from a tensor of 2 dimension abort core dump
tensorflowtensorflow
integer divide by 0 during fuse convolution with onednn on cpu support avx512 instruction
Bug
click to expand issue type bug source binary tensorflow version 2 9 1 2 10 0 custom code no os platform and distribution window 10 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no gpu current behaviour with onednn optimisation enable and a cpu support avx512 instruction an integer divide by zero exception occur during execution of fuse convolution with certain input size to reproduce 1 use a machine equip with a cpu support avx512 instruction e g intel i7 1165g7 2 enable onednn optimisation by set the tf enable onednn opt environment variable to 1 and disable gpu usage by set the cuda visible device environment variable to 1 3 run the script below 4 the python exe process will crash with an integer divide by zero exception the problem can be work around by set the dnnl max cpu isa variable to e g avx2 in order to prevent onednn from use avx512 instruction the issue occur in tensorflow 2 9 1 and 2 10 0 instal use pip to debug it I ve build 2 9 1 tensorflow from source with some debug symbol and obtain the following stack trace 0x0 pywrap tensorflow internal dnnl impl cpu x64 brgemm convolution util brg block t est eff 1x1 0x38d 0x1 pywrap tensorflow internal dnnl impl cpu x64 brgemm convolution util brg block t calc block 1x1 0x9fc 0x2 pywrap tensorflow internal dnnl impl cpu x64 brgemm convolution util init 1x1 conf 0x434 0x3 pywrap tensorflow internal dnnl impl cpu x64 brgemm 1x1 convolution fwd t 71 pd t init 0x2f4 0x4 pywrap tensorflow internal dnnl impl primitive desc t create pd t 0x17c 0x5 pywrap tensorflow internal dnnl primitive desc iterator operator 0x2c1 0x6 pywrap tensorflow internal dnnl primitive desc iterator create 0x1de 0x7 pywrap tensorflow internal dnnl primitive desc primitive desc 0x73 0x8 pywrap tensorflow internal tensorflow mklconvfwdprimitive setup 0x6d7 0x9 pywrap tensorflow internal tensorflow mklconvfwdprimitive mklconvfwdprimitive 0x1a8 0xa pywrap tensorflow internal tensorflow mklconvfwdprimitivefactory get 0x14c 0xb pywrap tensorflow internal tensorflow mklconvop compute 0x1ba0 0xc pywrap tensorflow internal tensorflow threadpooldevice compute 0x4a 0xd pywrap tensorflow internal tensorflow anonymous namespace executorstate processsync 0x13f 0xe pywrap tensorflow internal tensorflow anonymous namespace executorstate process 0xf58 0xf pywrap tensorflow internal std func class operator 0xf 0x10 pywrap tensorflow internal tensorflow thread eigenenvironment executetask 0x13 0x11 pywrap tensorflow internal eigen threadpooltempl workerloop 0x4b6 0x12 pywrap tensorflow internal std func class operator 0xf 0x13 pywrap tensorflow internal tensorflow thread eigenenvironment createthread l2 operator 0x38 0x14 pywrap tensorflow internal std invoke 0x38 0x15 pywrap tensorflow internal std invoker ret call 0x38 0x16 pywrap tensorflow internal std func impl no alloc void do call 0x41 0x17 pywrap tensorflow internal std thread invoke 0 0x18 0x18 ucrtbase thread start 0x42 standalone code to reproduce the issue python import tensorflow as tf conv2d tf keras layer conv2d filter 8 kernel size 1 1 padding same relu tf keras layers relu tf function def fuse conv bias relu x y conv2d x y relu y return y x shape 1 2048 2048 8 input tf random normal x shape output fuse conv bias relu input print success relevant log output no response
tensorflowtensorflow
crash with no error message but exit code 1073740791 0xc0000409 on cuda 11 7
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 2 tf 2 10 both affect do not test previous version custom code yes os platform and distribution window 10 19043 mobile device no response python version 3 9 13 bazel version no response gcc compiler version no response cuda cudnn version 11 7 8401 gpu model and memory no response current behaviour shell any python script that attempt to train a tensorflow model immediately crash during model fit call standalone code to reproduce the issue shell import tensorflow as tf from tensorflow keras model import model load model from tensorflow keras layer import x train y train x test y test tf keras datasets fashion mnist load datum x train x test x train 255 0 x test 255 0 normalize to between 0 1 model layer xin input 28 28 1 x conv2d 32 3 3 activation relu xin x dropout 0 4 x x conv2d 64 3 3 activation relu x x dropout 0 4 x x conv2d 128 3 3 activation relu x x dropout 0 4 x x flatten x x dense 128 activation swish x x dropout 0 5 x xout dense 10 x model model input xin outputs xout model compile optimizer tf keras optimizer adam 1e 3 loss tf keras loss sparsecategoricalcrossentropy from logit true metric accuracy callback tf keras callback earlystopping monitor val accuracy patience 8 restore good weight true tf keras callbacks reducelronplateau monitor val accuracy factor 0 1 patience 5 verbose 1 model summary model fit x train y train epoch 100 batch size 16 validation datum x test y test callback callback relevant log output shell c program file python39 python exe c user alien document pycharm project irs ml test py 2022 09 18 12 12 09 933811 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx avx2 to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2022 09 18 12 12 10 296700 I tensorflow core common runtime gpu gpu device cc 1532 create device job localhost replica 0 task 0 device gpu 0 with 9436 mb memory device 0 name nvidia geforce rtx 3080 ti pci bus i d 0000 2b 00 0 compute capability 8 6 model model layer type output shape param input 1 inputlayer none 28 28 1 0 conv2d conv2d none 26 26 32 320 dropout dropout none 26 26 32 0 conv2d 1 conv2d none 24 24 64 18496 dropout 1 dropout none 24 24 64 0 conv2d 2 conv2d none 22 22 128 73856 dropout 2 dropout none 22 22 128 0 flatten flatten none 61952 0 dense dense none 128 7929984 dropout 3 dropout none 128 0 dense 1 dense none 10 1290 total param 8 023 946 trainable param 8 023 946 non trainable param 0 epoch 1 100 2022 09 18 12 12 11 925564 I tensorflow stream executor cuda cuda dnn cc 384 load cudnn version 8401 process finish with exit code 1073740791 0xc0000409 image can see gpu memory be briefly take up before the crash and gpu usage spike to 100 for a second too
tensorflowtensorflow
tf random poisson crash abort
Bug
click to expand issue type bug source binary tensorflow version 2 11 0 dev20220914 custom code no os platform and distribution ubuntu 18 04 4 lts x86 64 mobile device no response python version 3 7 6 bazel version no response gcc compiler version no response cuda cudnn version n a gpu model and memory no response current behaviour shell tf random poisson crash abort standalone code to reproduce the issue shell import numpy as np import tensorflow as tf tf random poisson lam np one 10 10 11 2 shape 27 187 229 relevant log output shell 2022 09 16 19 45 10 220556 f tensorflow core util work sharder cc 34 check fail total 0 0 vs 1751281096 abort core dump
tensorflowtensorflow
doc change
Bug
do some change 57565
tensorflowtensorflow
align alloc might not be define
Bug
click to expand issue type bug source source tensorflow version 2 0 custom code no os platform and distribution cento 6 0 mobile device no response python version 3 10 bazel version 5 2 0 gcc compiler version 7 1 0 cuda cudnn version no response gpu model and memory no response current behaviour shell info analyze target tensorflow lite c tensorflowlite c 0 package load 0 target configure info find 1 target error root tensorflow 2 10 0 tensorflow lite build 505 11 compile tensorflow lite interpreter builder cc fail exit 1 gcc fail error execute command usr local bin gcc u fortify source fstack protector wall wunuse but set parameter wno free nonheap object fno omit frame pointer g0 o2 d fortify source 1 dndebug ffunction section remain 47 argument skip tensorflow lite interpreter builder cc in member function virtual void tflite anonymous mallocdataallocator allocate size t size t tensorflow lite interpreter builder cc 312 12 error align alloc be not declare in this scope return align alloc use alignment use size target tensorflow lite c tensorflowlite c fail to build align alloc be not define for a gcc compiler of version 7 1 0 stdc11 and c 14 shell root gcc dm e dev null grep stdc version define stdc version 201112l root echo include int main if cplusplus 201703l std cout c 17 n else if cplusplus 201402l std cout c 14 n else if cplusplus 201103l std cout c 11 n else if cplusplus 199711l std cout c 98 n else std cout pre standard c n g x c root a out c 14 if we refer to your code l312 you need tflite use std align alloc to be define which happen with all those condition l49 l57 c align alloc be available via cstdlib stdlib h with c 17 c11 if cplusplus 201703l stdc version 201112l if define android android api 28 neither apple nor window provide align alloc if define apple define win32 define tflite use std align alloc endif endif endif however have c11 be not enough align malloc be mainly ship with c 17 so the first condition should have be if cplusplus 201703l stdc version 201112l and not a simple or this l39 also need to change standalone code to reproduce the issue shell just run the compilation with gcc 7 1 0 build c opt tensorflow lite c tensorflowlite c relevant log output shell info analyze target tensorflow lite c tensorflowlite c 0 package load 0 target configure info find 1 target error root tensorflow 2 10 0 tensorflow lite build 505 11 compile tensorflow lite interpreter builder cc fail exit 1 gcc fail error execute command usr local bin gcc u fortify source fstack protector wall wunuse but set parameter wno free nonheap object fno omit frame pointer g0 o2 d fortify source 1 dndebug ffunction section remain 47 argument skip tensorflow lite interpreter builder cc in member function virtual void tflite anonymous mallocdataallocator allocate size t size t tensorflow lite interpreter builder cc 312 12 error align alloc be not declare in this scope return align alloc use alignment use size target tensorflow lite c tensorflowlite c fail to build
tensorflowtensorflow
importerror can not import name test from partially initialize module tensorflow api v2 internal
Bug
click to expand issue type bug source source tensorflow version 2 9 2 custom code yes os platform and distribution debian gnu linux 11 bullseye mobile device no response python version 3 9 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell I m use tensorflow in docker container but I get this error when try to import this module error can be solve if I install another or same version of this module but for production its bad solution I have instal this package pip freeze absl py 1 2 0 anyio 3 4 0 asgi lifespan 1 0 1 asgiref 3 4 1 astroid 2 11 7 astunparse 1 6 3 asyncio dgram 2 1 2 attrs 22 1 0 autoflake 1 4 bandit 1 7 4 black 22 3 0 cachetool 5 2 0 certifi 2021 10 8 charset normalizer 2 0 7 click 8 1 3 dataclasse 0 6 deap 1 3 1 dill 0 3 5 1 fastapi 0 73 0 flatbuffer 1 12 gast 0 4 0 gitdb 4 0 9 gitpython 3 1 27 google auth 2 11 0 google auth oauthlib 0 4 6 google pasta 0 2 0 grpcio 1 48 1 h11 0 12 0 h5py 3 7 0 httpcore 0 14 7 httptool 0 3 0 httpx 0 22 0 idna 3 3 importlib metadata 4 12 0 iniconfig 1 1 1 isort 5 10 1 joblib 0 17 0 keras 2 9 0 kera preprocesse 1 1 2 lazy object proxy 1 7 1 libclang 14 0 6 lxml 4 7 1 markdown 3 4 1 markupsafe 2 1 1 mccabe 0 7 0 mypy 0 942 mypy extension 0 4 3 numpy 1 21 3 oauthlib 3 2 0 opt einsum 3 3 0 packaging 21 3 panda 1 3 4 pathspec 0 10 1 pbr 5 10 0 platformdir 2 5 2 plotly 4 8 1 pluggy 1 0 0 protobuf 3 19 4 py 1 11 0 pyasn1 0 4 8 pyasn1 module 0 2 8 pydantic 1 9 0 pyflake 2 5 0 pylint 2 13 2 pyparsing 3 0 9 pyt 7 1 1 pyt asyncio 0 18 3 pyt timeout 2 1 0 python dateutil 2 8 2 python dotenv 0 19 2 pytz 2021 3 pyyaml 6 0 request 2 26 0 request oauthlib 1 3 1 retry 1 3 3 rfc3986 1 5 0 rsa 4 9 scapy 2 4 5 scikit learn 0 24 2 scipy 1 7 2 six 1 16 0 smmap 5 0 0 sniffio 1 2 0 starlette 0 17 1 stevedore 4 0 0 stopit 1 1 2 tensorboard 2 9 1 tensorboard datum server 0 6 1 tensorboard plugin wit 1 8 1 tensorflow 2 9 2 tensorflow estimator 2 9 0 tensorflow io gcs filesystem 0 26 0 termcolor 1 1 0 threadpoolctl 3 0 0 toml 0 10 2 tomli 2 0 1 tpot 0 11 5 tqdm 4 62 3 type pyyaml 6 0 5 type extension 3 10 0 2 update checker 0 18 0 urllib3 1 26 7 uvicorn 0 17 1 uvloop 0 16 0 vulture 2 3 watchgod 0 7 websocket 10 1 werkzeug 2 2 2 wrapt 1 14 1 zipp 3 8 1 standalone code to reproduce the issue shell root c666258f6b9d app python3 python 3 9 9 main mar 28 2022 09 18 32 gcc 10 2 1 20210110 on linux type help copyright credit or license for more information import tensorflow relevant log output shell 2022 09 06 09 42 52 620866 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudart so 11 0 dlerror libcudart so 11 0 can not open share object file no such file or directory 2022 09 06 09 42 52 620890 I tensorflow stream executor cuda cudart stub cc 29 ignore above cudart dlerror if you do not have a gpu set up on your machine traceback most recent call last file line 1 in file usr local lib python3 9 site package tensorflow init py line 45 in from api v2 import internal file usr local lib python3 9 site package tensorflow api v2 internal init py line 22 in from import test importerror can not import name test from partially initialize module tensorflow api v2 internal most likely due to a circular import usr local lib python3 9 site package tensorflow api v2 internal init py
tensorflowtensorflow
tflite model maker not export to quantise int8
Bug
1 system information this error happen both on my pc os platform and distribution window 10 amd64 v10 0 tensorflow installation pip package or build from source from pip tf version v2 9 0 18 gd8ce9f9c301 2 9 1 on python 3 9 tensorflow library and on google colab 2 code what I be try to do train a model and export a quantise int8 version for later use with the tpu usb accelerator I export a non quantize version for comparison what I get the model which be suppose to be quantize always export identically to the un quantize version with the same message when run model export that statistic for quantize input be expect but not specify continue anyway link to the export model import import tensorflow as tf assert tf version startswith 2 from tflite model maker config import quantizationconfig from tflite model maker image classifier import dataloader from tflite model maker import model spec from tflite model maker import image classifier train datum folder path to my google drive all image be jpeg data dataloader from folder train datum folder path train datum val datum datum split 0 8 model spec model spec get mobilenet v2 model image classifier create train datum model spec model spec validation datum val datum test data folder path go to my google drive all image be jpeg test datum dataloader from folder test data folder path first export the model with default setting tflite filename default model tflite model export export dir tflite filename tflite filename then export as quantize quant tflite filename int8 model tflite quantization config quantizationconfig for int8 test datum model export export dir tflite filename quant tflite filename quantization config quantization config 3 failure after conversion if the conversion be successful but the generate model be wrong then state what be wrong model do not produce a quantize version the output file be of identical file size and the log below be the same for both 5 any other info log model export tflite tflite filepath tflite filename quantization config quantization config info tensorflow asset write to c user sm251 appdata local temp tmpymme3pmk asset info tensorflow asset write to c user sm251 appdata local temp tmpymme3pmk asset 2022 09 06 14 35 11 483922 I tensorflow core grappler device cc 66 number of eligible gpu core count 8 compute capability 0 0 0 2022 09 06 14 35 11 484556 I tensorflow core grappler cluster single machine cc 358 start new session c user sm251 appdata roam python python39 site package tensorflow lite python convert py 766 userwarning statistic for quantize input be expect but not specify continue anyway warning warn statistic for quantize input be expect but not 2022 09 06 14 35 21 939062 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 362 ignore output format 2022 09 06 14 35 21 939219 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 365 ignore drop control dependency fully quantize 0 inference type 6 input inference type 3 output inference type 3 info tensorflow label file be inside the tflite model with metadata info tensorflow label file be inside the tflite model with metadata info tensorflow save label in c user sm251 appdata local temp tmpf7kb5uh9 label txt info tensorflow save label in c user sm251 appdata local temp tmpf7kb5uh9 label txt
tensorflowtensorflow
ondevice prediction produce same result for all input in a batch
Bug
issue type bug source binary tensorflow version 2 9 2 custom code no os platform and distribution mac m1 window x64 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour the model work well in python but when the model be export as a savedmodel and predict concrete function be call from c api all the input in the same batch produce exact same result test in mac m1 and window x64 same behaviour standalone code to reproduce the issue python import tensorflow as tf import tensorflow probability as tfp print tf version def create model board width board height class renjumodel tf module def init self l2 penalty beta 1e 4 define the tensorflow neural network 1 input self input tf keras input shape 4 board height board width dtype tf dtype float32 name input self transpose input tf keras layers lambda lambda x tf transpose x 0 2 3 1 self input 2 common network layer self conv1 tf keras layer conv2d name conv1 filter 32 kernel size 3 3 padding same datum format channel last activation tf keras activations relu kernel regularizer tf keras regularizer l2 l2 penalty beta self transpose input self conv2 tf keras layer conv2d name conv2 filter 64 kernel size 3 3 padding same datum format channel last activation tf keras activations relu kernel regularizer tf keras regularizer l2 l2 penalty beta self conv1 self conv3 tf keras layer conv2d name conv3 filter 128 kernel size 3 3 padding same datum format channel last activation tf keras activations relu kernel regularizer tf keras regularizer l2 l2 penalty beta self conv2 3 1 action network self action conv tf keras layer conv2d name action conv filter 4 kernel size 1 1 padding same datum format channel last activation tf keras activations relu kernel regularizer tf keras regularizer l2 l2 penalty beta self conv3 flatten tensor self action conv flat tf keras layer reshape 1 4 board height board width name action conv flat self action conv 3 2 full connected layer the output be the log probability of move on each slot on the board self action fc tf keras layer dense board height board width activation tf nn log softmax name action fc kernel regularizer tf keras regularizer l2 l2 penalty beta self action conv flat 4 evaluation network self evaluation conv tf keras layer conv2d name evaluation conv filter 2 kernel size 1 1 padding same datum format channel last activation tf keras activations relu kernel regularizer tf keras regularizer l2 l2 penalty beta self conv3 self evaluation conv flat tf keras layer reshape 1 2 board height board width name evaluation conv flat self evaluation conv self evaluation fc1 tf keras layer dense 64 activation tf keras activations relu name evaluation fc1 kernel regularizer tf keras regularizer l2 l2 penalty beta self evaluation conv flat self evaluation fc2 tf keras layer dense 1 activation tf keras activation tanh name evaluation fc2 kernel regularizer tf keras regularizer l2 l2 penalty beta self evaluation fc1 self output tf keras layers concatenate self action fc self evaluation fc2 self model tf keras model input self input output self output name renju model self model summary self lr tf variable 0 002 trainable false dtype tf dtype float32 tf function input signature tf tensorspec none 1 board height board width 1 tf float32 tf tensorspec none 1 board height board width 1 tf float32 def custom loss label prediction act probs label value label tf split label board height board width 1 axis 2 act prob prediction value prediction tf split prediction board height board width 1 axis 2 tf print act probs label summarize 1 tf print value label summarize 1 tf print act prob prediction summarize 1 tf print value prediction summarize 1 value loss tf reduce mean tf loss mean square error value label value prediction policy loss tf negative tf reduce mean tf reduce sum tf multiply act probs label act prob prediction 2 total loss policy loss value loss tf print value loss value loss policy loss policy loss total loss total loss return total loss self model compile optimizer tf keras optimizer adam learning rate self lr loss custom loss metric accuracy tf function input signature tf tensorspec none 4 board height board width tf float32 def predict self state batch if tf shape state batch 0 1 tf print state batch summarize 1 prediction self model state batch if tf shape state batch 0 1 tf print prediction summarize 1 return tf split prediction board height board width 1 axis 2 tf function input signature tf tensorspec shape none 4 board height board width dtype tf float32 tf tensorspec shape none 1 board height board width dtype tf float32 tf tensorspec shape none 1 1 dtype tf float32 tf tensorspec shape 1 dtype tf float32 def train self state batch prob batch score batch lr label tf concat prob batch score batch axis 2 self lr assign tf gather lr 0 with tf gradienttape as tape prediction self model state batch training true forward pass the loss function be configure in compile loss self model compile loss label prediction regularization loss self model loss gradient tape gradient loss self model trainable variable self model optimizer apply gradient zip gradient self model trainable variable entropy tf negative tf reduce mean tf reduce sum tf exp prediction prediction 2 return loss entropy tf function input signature tf tensorspec shape dtype tf string def save self checkpoint path tensor name weight name for weight in self model weight tensor to save weight read value for weight in self model weight tf raw op save filename checkpoint path tensor name tensor name datum tensor to save name save return checkpoint path tf function input signature tf tensorspec shape dtype tf string def restore self checkpoint path restore tensor for var in self model weight restore tf raw op restore file pattern checkpoint path tensor name var name dt var dtype name restore var assign restore restore tensor var name restore return checkpoint path tf function input signature tf tensorspec shape none dtype tf float32 def random choose with dirichlet noice self prob concentration 0 3 tf one tf size prob dist tfp distribution dirichlet concentration p 0 75 prob 0 25 dist sample 1 0 sample tf random categorical tf math log p 1 return sample 0 select index return renjumodel model create model 15 15 model model save renju 15x15 model save format tf signature predict model predict get concrete function train model train get concrete function save model save get concrete function restore model restore get concrete function random choose with dirichlet noice model random choose with dirichlet noice get concrete function if call predict method from python you can clearly see the output of each input in the same batch be different python input 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 model predict get concrete function tf convert to tensor input but if try to call predict method of savedmodel from c api the output be exactly the same for multiple input in the same batch relevant log output the predict method print out the input and output via tf print python def predict self state batch if tf shape state batch 0 1 tf print state batch summarize 1 prediction self model state batch if tf shape state batch 0 1 tf print prediction summarize 1 return tf split prediction board height board width 1 axis 2 and the follow log be from stdout when call from c produce by the tf print above 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 6252575 16 6252575 16 554945 16 6252575 16 5581188 16 6252575 16 5571423 16 6252575 16 5570812 16 6252575 16 5592785 16 6252575 16 5568371 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 556715 16 5604382 16 5638561 16 1153088 16 5592785 16 5548229 16 5536633 16 6252575 16 6252575 16 6252575 16 6252575 16 5546398 16 6252575 16 6251965 16 5716686 16 3776989 15 2738171 15 9649181 16 549757 16 2080822 15 4714489 16 3686047 16 5711803 16 6251965 16 6252575 16 5547 16 6252575 16 6252575 16 57057 16 5050793 16 4986095 16 1685314 15 5485363 15 7727184 16 2866344 16 1884899 16 2140026 16 5062389 16 570631 16 6252575 16 6252575 16 5578747 16 5541515 16 4574718 16 237318 12 7237072 14 7688122 15 8670177 3 9945817 14 1392956 13 9726696 14 495863 16 5278454 16 3515148 16 5543957 16 5592785 16 6252575 16 5604382 16 3097668 16 0515881 10 2605724 2 71552896 3 54908609 5 74897623 3 83827066 4 30714273 13 6384411 16 1675549 15 1023083 16 5556774 16 6252575 16 5547 16 5589123 16 4617443 16 3551769 15 2829113 5 34351969 3 03394938 1 94203043 4 53840494 3 27900553 15 9859142 16 3760509 16 4613781 16 5639782 16 554945 16 6252575 16 4714489 16 5536022 15 8650646 4 17939615 5 1966691 2 17225504 2 46821451 2 10627604 4 61872721 3 8862443 15 8424816 16 5542126 15 9301281 16 6252575 16 5547 16 5638561 15 9878063 16 3649426 14 6254406 4 10206461 4 06068277 3 20484781 4 07081461 5 61872721 15 9856701 16 383131 16 4535046 16 5583019 16 5559216 16 6252575 16 5608654 16 3747082 16 4586315 14 030653 3 08790445 3 91151285 6 01582193 3 30915689 3 78974771 11 8531628 16 1242809 16 1317883 16 5604382 16 6252575 16 5598888 16 5536633 16 4726696 16 5026379 14 5284557 10 9210339 15 3974743 4 61561441 15 4461193 11 3503551 12 4599743 16 4962292 16 3061047 16 5563488 16 5615978 16 6252575 16 6252575 16 57057 16 5019054 16 5019054 13 358778 16 1637096 15 8333263 16 3820934 16 0954113 16 1836071 16 5053844 16 5707531 16 6252575 16 6252575 16 5546398 16 6252575 16 6251965 16 570509 16 4332409 15 9699841 16 4579 16 4486217 15 909193 16 3141 15 8626842 16 5714855 16 6251965 16 6252575 16 5547619 16 6252575 16 6252575 16 6252575 16 6252575 16 5536633 16 5536022 16 5640392 15 6290417 16 5579967 16 5569592 16 5561047 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 5554943 16 6252575 16 5639172 16 6252575 16 5551281 16 6252575 16 5547 16 6252575 16 5608654 16 6252575 16 5559216 16 6252575 16 6252575 0 242623821 16 6252575 16 6252575 16 554945 16 6252575 16 5581188 16 6252575 16 5571423 16 6252575 16 5570812 16 6252575 16 5592785 16 6252575 16 5568371 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 556715 16 5604382 16 5638561 16 1153088 16 5592785 16 5548229 16 5536633 16 6252575 16 6252575 16 6252575 16 6252575 16 5546398 16 6252575 16 6251965 16 5716686 16 3776989 15 2738171 15 9649181 16 549757 16 2080822 15 4714489 16 3686047 16 5711803 16 6251965 16 6252575 16 5547 16 6252575 16 6252575 16 57057 16 5050793 16 4986095 16 1685314 15 5485363 15 7727184 16 2866344 16 1884899 16 2140026 16 5062389 16 570631 16 6252575 16 6252575 16 5578747 16 5541515 16 4574718 16 237318 12 7237072 14 7688122 15 8670177 3 9945817 14 1392956 13 9726696 14 495863 16 5278454 16 3515148 16 5543957 16 5592785 16 6252575 16 5604382 16 3097668 16 0515881 10 2605724 2 71552896 3 54908609 5 74897623 3 83827066 4 30714273 13 6384411 16 1675549 15 1023083 16 5556774 16 6252575 16 5547 16 5589123 16 4617443 16 3551769 15 2829113 5 34351969 3 03394938 1 94203043 4 53840494 3 27900553 15 9859142 16 3760509 16 4613781 16 5639782 16 554945 16 6252575 16 4714489 16 5536022 15 8650646 4 17939615 5 1966691 2 17225504 2 46821451 2 10627604 4 61872721 3 8862443 15 8424816 16 5542126 15 9301281 16 6252575 16 5547 16 5638561 15 9878063 16 3649426 14 6254406 4 10206461 4 06068277 3 20484781 4 07081461 5 61872721 15 9856701 16 383131 16 4535046 16 5583019 16 5559216 16 6252575 16 5608654 16 3747082 16 4586315 14 030653 3 08790445 3 91151285 6 01582193 3 30915689 3 78974771 11 8531628 16 1242809 16 1317883 16 5604382 16 6252575 16 5598888 16 5536633 16 4726696 16 5026379 14 5284557 10 9210339 15 3974743 4 61561441 15 4461193 11 3503551 12 4599743 16 4962292 16 3061047 16 5563488 16 5615978 16 6252575 16 6252575 16 57057 16 5019054 16 5019054 13 358778 16 1637096 15 8333263 16 3820934 16 0954113 16 1836071 16 5053844 16 5707531 16 6252575 16 6252575 16 5546398 16 6252575 16 6251965 16 570509 16 4332409 15 9699841 16 4579 16 4486217 15 909193 16 3141 15 8626842 16 5714855 16 6251965 16 6252575 16 5547619 16 6252575 16 6252575 16 6252575 16 6252575 16 5536633 16 5536022 16 5640392 15 6290417 16 5579967 16 5569592 16 5561047 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 6252575 16 5554943 16 6252575 16 5639172 16 6252575 16 5551281 16 6252575 16 5547 16 6252575 16 5608654 16 6252575 16 5559216 16 6252575 16 6252575 0 242623821
tensorflowtensorflow
unncessary semicolon in tutorial terminate statement possible documentation improvement
Bug
click to expand issue type documentation bug source source tensorflow version 2 9 1 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the documentation find at under head training loop as well as additional documentation on that tutorial guide page have unnecessary semicolon at end of statement not necessary for python syntactically for example plt legend under training loop heading throw warn error in certain ide with ai assist code review such as vscode because of the superfluous I kindly recommend discontinuation of unnecessary semicolon and revision of this page to remove all that be not need per proper syntax or convention kindly let I know if any more information be need thank you for consideration standalone code to reproduce the issue shell import tensorflow as tf import matplotlib from matplotlib import pyplot as plt matplotlib rcparam figure figsize 9 6 x tf linspace 2 2 201 x tf cast x tf float32 def f x y x 2 2 x 5 return y y f x tf random normal shape 201 plt plot x numpy y numpy label datum plt plot x f x label ground truth plt legend excerpt from relevant log output shell n a
tensorflowtensorflow
tflitecoremldelegateoption documentation be not up to date
Bug
it currently show the follow structure which be not up to date typedef struct we have dummy for now as we can t have empty struct in c char dummy tflitecoremldelegateoption see current definition in
tensorflowtensorflow
indentation bug in tensorflow main expert tutorial
Bug
click to expand issue type documentation bug source binary tensorflow version 2 8 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell there be an indentation bug in the main tensorflow expert colab tutorial locate here the code do not run because there be an indentation problem tf function def train step image label with tf gradienttape as tape training true be only need if there be layer with different behavior during training versus inference e g dropout prediction model image training true loss loss object label prediction gradient tape gradient loss model trainable variable problem need indentation optimizer apply gradient zip gradient model trainable variable train loss loss train accuracy label prediction standalone code to reproduce the issue shell run the colab all the way through and you will see that it choke on the last step relevant log output no response
tensorflowtensorflow
segmentation fault and float point exception pop up while training use intel cpus with onednn optimization enable
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 custom code no os platform and distribution linux ubuntu 18 04 mobile device no response python version 3 7 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell while I train a 3d unet like model use intel cpus with onednn optimization enable segmentation fault and float point exception pop up with specific configuration config 1 input size 512 512 7 1 h w c n batch size 80 error segmentation fault core dump config 2 input size 512 512 8 1 batch size 80 run without any error config 3 input size 512 512 32 1 batch size 1 4 16 any of they have the same reuslt error float point exception core dump standalone code to reproduce the issue shell this be the link to the source code it be a zip file with 2 python script in it train py be the script that define datum loader and the train task unet 3d layer py be the script that define the model note because the dataset we re use be private I use numpy to generate random train datum in the source code to reproduce the problem to reproduce config 1 1 open train py 2 uncomment line 17 41 and 42 3 comment line 18 19 44 45 47 and 48 4 in the terminal run python train py to run the code config 2 1 open train py 2 uncomment line 18 44 and 45 3 comment line 17 19 41 42 47 and 48 4 in the terminal run python train py to run the code config 3 1 open train py 2 uncomment line 19 47 and 48 3 modify the variable batch size in line 48 4 comment line 17 18 41 42 44 and 45 5 in the terminal run python train py to run the code relevant log output segmentation fault shell 2022 08 31 10 29 25 997447 I tensorflow core util util cc 169 onednn custom operation be on you may see slightly different numerical result due to float point round off error from different computation order to turn they off set the environment variable tf enable onednn opt 0 2022 08 31 10 29 26 001968 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudart so 11 0 dlerror libcudart so 11 0 can not open share object file no such file or directory 2022 08 31 10 29 26 001989 I tensorflow stream executor cuda cudart stub cc 29 ignore above cudart dlerror if you do not have a gpu set up on your machine 2022 08 31 10 29 27 476182 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcuda so 1 dlerror libcuda so 1 can not open share object file no such file or directory 2022 08 31 10 29 27 476213 w tensorflow stream executor cuda cuda driver cc 269 fail call to cuinit unknown error 303 2022 08 31 10 29 27 476230 I tensorflow stream executor cuda cuda diagnostic cc 156 kernel driver do not appear to be run on this host user cob 1932c proc driver nvidia version do not exist 2022 08 31 10 29 27 476445 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 avx512f avx512 vnni fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag epoch 1 10 2022 08 31 10 29 31 296605 w tensorflow core common runtime forward type inference cc 231 type inference fail this indicate an invalid graph that escape type check error message invalid argument expect compatible input type but input 1 type i d tft optional args type i d tft product args type i d tft tensor args type i d tft bool be neither a subtype nor a supertype of the combine input precede it type i d tft optional args type i d tft product args type i d tft tensor args type i d tft legacy variant while infer type of node dice loss cond output 11 2 40 eta 29 12 loss 2 1932segmentation fault core dump float point exception shell 2022 08 31 10 27 02 379986 I tensorflow core util util cc 169 onednn custom operation be on you may see slightly different numerical result due to float point round off error from different computation order to turn they off set the environment variable tf enable onednn opt 0 2022 08 31 10 27 02 384512 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudart so 11 0 dlerror libcudart so 11 0 can not open share object file no such file or directory 2022 08 31 10 27 02 384533 I tensorflow stream executor cuda cudart stub cc 29 ignore above cudart dlerror if you do not have a gpu set up on your machine 2022 08 31 10 27 03 883968 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcuda so 1 dlerror libcuda so 1 can not open share object file no such file or directory 2022 08 31 10 27 03 883996 w tensorflow stream executor cuda cuda driver cc 269 fail call to cuinit unknown error 303 2022 08 31 10 27 03 884012 I tensorflow stream executor cuda cuda diagnostic cc 156 kernel driver do not appear to be run on this host user cob 1932c proc driver nvidia version do not exist 2022 08 31 10 27 03 884231 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 avx512f avx512 vnni fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag epoch 1 10 2022 08 31 10 27 07 807669 w tensorflow core common runtime forward type inference cc 231 type inference fail this indicate an invalid graph that escape type check error message invalid argument expect compatible input type but input 1 type i d tft optional args type i d tft product args type i d tft tensor args type i d tft bool be neither a subtype nor a supertype of the combine input precede it type i d tft optional args type i d tft product args type i d tft tensor args type i d tft legacy variant while infer type of node dice loss cond output 11 float point exception core dump
tensorflowtensorflow
when dtype be tf uint64 tf image convert image dtype throw exception
Bug
click to expand issue type bug source source tensorflow version tf2 4 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell tf image convert image dtype support data type for image and dtype of uint8 uint16 uint32 uint64 int8 int16 int32 int64 float16 float32 float64 bfloat16 but when dtype be uint64 it throw exception standalone code to reproduce the issue shell import tensorflow as tf result try arg 0 1 2 3 4 5 6 7 8 9 10 11 12 dtype tf uint64 saturate false result re tf image convert image dtype arg 0 dtype dtype saturate saturate except exception as e result err error str e print result err error value for attr t of uint64 be not in the list of allow value bfloat16 half float double uint8 int8 uint16 int16 int32 int64 complex64 complex128 n t nodedef node mul op z t attr t type allow dt bfloat16 dt half dt float dt double dt uint8 dt int8 dt uint16 dt int16 dt int32 dt int64 dt complex64 dt complex128 be commutative true op mul relevant log output no response
tensorflowtensorflow
tf clip by norm documentation wrong
Bug
click to expand issue type documentation bug source source tensorflow version tf2 8 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell clip norm should be a 0 d scalar tensor 0 write on documentation args however when the clip norm be a negative integral the code work standalone code to reproduce the issue shell import tensorflow as tf result try arg 0 tf random uniform 1 5 dtype tf float32 result re tf clip by norm arg 0 clip norm 1 except exception as e result err error str e print result re relevant log output no response
tensorflowtensorflow
update tf math tan dtype
Bug
update the tf math tan dtype to the support dtype fix
tensorflowtensorflow
doc be incorrect for tf raw op atan
Bug
click to expand issue type documentation bug source binary tensorflow version 2 9 1 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the doc for tf raw op atan indicate integer type be support but this be not the case as show below with colab code standalone code to reproduce the issue shell import tensorflow as tf tf raw op atan x tf constant 1 dtype tf int16 relevant log output shell notfounderror could not find device for node node atan atan t dt int16 all kernel register for op atan device gpu t in dt double device gpu t in dt float device gpu t in dt half device gpu t in dt bfloat16 device cpu t in dt complex128 device cpu t in dt complex64 device cpu t in dt double device cpu t in dt float device cpu t in dt bfloat16 device cpu t in dt half op atan
tensorflowtensorflow
model improve bad with gradienttape than with fit
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 1 custom code yes os platform and distribution linux ubuntu 22 04 1 mobile device no response python version 3 8 3 10 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell I already open a question on stackoverflow for this but I believe it could be some issue with tensorflow the same model use either custom training with gradienttape or keras model fit will perform differently the custom run will stop improve early and will take long to improve at all I ve also include a google colab notebook in order to reproduce the issue standalone code to reproduce the issue shell def build model kernel regularizer l2 0 0001 dropout 0 001 recurrent dropout 0 x1 input 62 x2 input 62 3 x embed 30 100 mask zero true x1 x concatenate x x2 x bidirectional lstm 500 return sequence true kernel regularizer kernel regularizer dropout dropout recurrent dropout recurrent dropout x x bidirectional lstm 500 return sequence false kernel regularizer kernel regularizer dropout dropout recurrent dropout recurrent dropout x x activation softmax x x dense 1000 x x dense 500 x x dense 250 x x dense 1 bias initializer one x x tf math abs x return model input x1 x2 outputs x optimizer adam learning rate 0 0001 model build model model compile optimizer optimizer loss mse metric mse option tf datum option option experimental distribute auto shard policy autoshardpolicy datum dat train tf datum dataset from generator generator lambda output type tf int32 tf float32 tf float32 dat train dat train with option option keras training model fit dat train epoch 50 custom training for epoch in range 50 for x1 x2 y in dat train with tf gradienttape as tape y pre model x1 x2 training true loss model loss y y pre grad tape gradient loss model trainable variable model optimizer apply gradient zip grad model trainable variable relevant log output no response
tensorflowtensorflow
model maker object detection tutorial can not export model
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 google colab mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on a mobile device tensorflow instal from source or binary source tensorflow version use command below 2 8 0 python version 3 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version same as google colab gpu model and memory same as google colab exact command to reproduce you can collect some of this information use our environment capture script you can obtain the tensorflow version with bash python c import tensorflow as tf print tf version git version tf version version describe the problem when run the line model export export dir in the colab notebook of the example the error occur source code log typeerror traceback most recent call last in 1 model export export dir 8 frame usr local lib python3 7 dist package tensorflow example lite model maker core task custom model py in export self export dir tflite filename label filename vocab filename save model filename tfjs folder name export format kwargs 130 tflite filepath os path join export dir tflite filename 131 export tflite kwargs kwargs get param self export tflite kwargs 132 self export tflite tflite filepath export tflite kwargs 133 tf compat v1 log info 134 tensorflow lite model export successfully s tflite filepath usr local lib python3 7 dist package tensorflow example lite model maker core task object detector py in export tflite self tflite filepath quantization config with metadata export metadata json file 195 writer util load file tflite filepath 196 self model spec config mean rgb 197 self model spec config stddev rgb label filepath 198 writer util save file writer populate tflite filepath 199 usr local lib python3 7 dist package tensorflow lite support metadata python metadata writer object detector py in create for inference cls model buffer input norm mean input norm std label file path score calibration md 293 input md input md 294 output category md output category md 295 output score md output score md usr local lib python3 7 dist package tensorflow lite support metadata python metadata writer object detector py in create from metadata info cls model buffer general md input md output location md output category md output score md output number md 224 b flatbuffer builder 0 225 b finish 226 model metadata pack b 227 metadata metadatapopulator metadata file identifier 228 usr local lib python3 7 dist package tensorflow lite support metadata metadata schema py generate py in pack self builder 2698 subgraphmetadatalist 2699 for I in range len self subgraphmetadata 2700 subgraphmetadatalist append self subgraphmetadata I pack builder 2701 modelmetadatastartsubgraphmetadatavector builder len self subgraphmetadata 2702 for I in reversed range len self subgraphmetadata usr local lib python3 7 dist package tensorflow lite support metadata metadata schema py generate py in pack self builder 1018 inputtensormetadatalist 1019 for I in range len self inputtensormetadata 1020 inputtensormetadatalist append self inputtensormetadata I pack builder 1021 subgraphmetadatastartinputtensormetadatavector builder len self inputtensormetadata 1022 for I in reversed range len self inputtensormetadata usr local lib python3 7 dist package tensorflow lite support metadata metadata schema py generate py in pack self builder 256 processunitslist 257 for I in range len self processunit 258 processunitslist append self processunit I pack builder 259 tensormetadatastartprocessunitsvector builder len self processunit 260 for I in reversed range len self processunit usr local lib python3 7 dist package tensorflow lite support metadata metadata schema py generate py in pack self builder 2076 def pack self builder 2077 if self option be not none 2078 option self option pack builder 2079 processunitstart builder 2080 processunitaddoptionstype builder self optionstype usr local lib python3 7 dist package tensorflow lite support metadata metadata schema py generate py in pack self builder 3013 for I in reversed range len self mean 3014 builder prependfloat32 self mean I 3015 mean builder endvector 3016 if self std be not none 3017 if np be not none and type self std be np ndarray typeerror endvector miss 1 require positional argument vectornumelem
tensorflowtensorflow
opencl delegate issue with model that output result from their intermediate node
Bug
click to expand issue type bug source source tensorflow version 2 9 1 nightly version custom code no os platform and distribution android mobile device test on snapdragon 888 865 855 python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the opencl delegate generate all zero output in model that output result from their intermediate node this issue do not happen with other delegate like xnnpack standalone code to reproduce the issue shell we have implement a small tool with comprehensive documentation to reproduce the mention issue here be the link to the repository relevant log output no response
tensorflowtensorflow
tf lite api reference on internet page do not match code for interpreter option h
Bug
click to expand issue type documentation bug source source tensorflow version master branch custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the tensorflow api reference for tf lite be show a different name for a method for interpreter option h the webpage show the name setdynamicallocationforlargetensor setdynamicallocationforlargetensor but the c code have the name optimizememoryforlargetensor I expect that both name be equal either the on or the other standalone code to reproduce the issue shell n a relevant log output no response
tensorflowtensorflow
analyzer wrapper model analyzer cc have typo cause c2001 error while compile
Bug
click to expand issue type bug source source tensorflow version tf 2 9 custom code no os platform and distribution window 10 mobile device no response python version 3 10 bazel version 5 0 0 gcc compiler version msvc cuda cudnn version 8 4 1 gpu model and memory geforce gtx 750 ti current behaviour shell while build below error pop up tensorflow lite python analyzer wrapper model analyzer cc 197 error c2001 newline in constant this be cause by non ascii quote in the line 197 and 198 standalone code to reproduce the issue shell this be fix by below patch please review diff git a tensorflow lite python analyzer wrapper model analyzer cc b tensorflow lite python analyzer wrapper model analyzer cc index 370a4eff470 d4609faab33 100644 a tensorflow lite python analyzer wrapper model analyzer cc b tensorflow lite python analyzer wrapper model analyzer cc 194 8 194 8 void dump model signature def std stringstream out stream return out stream ksectionsplitter out stream your tflite model have signature length signature def s n n out stream your tflite model have signature length signature def s n n for int I 0 I signature length I auto signature def signature get I out stream signature I key relevant log output no response
tensorflowtensorflow
colab cudnn issue involve lstm
Bug
click to expand issue type bug source source tensorflow version 2 9 custom code yes os platform and distribution linux mobile device no response python version python 3 7 13 bazel version no response gcc compiler version no response cuda cudnn version cuda version 11 2 gpu model and memory nvidia smi 460 32 03 driver version 460 32 03 current behaviour shell encounter this error suddenly on a notebook that be previously work it seem that the forward lstm be cause issue for the graph the lstm be in a bidirectional wrapper I have also see the backward lstm have the same issue to my knowledge google colab do not switch gpu driver version or tf keras version I have not try to replicate the issue locally because I do not have a gpu when I comment out the gpu connection I still run in to the issue so I believe it relate to either the os colab run on or the tf keras version or also perhaps the python version itself interestingly I do have the train model that I run in another notebook without gpu it fail in the same way when run on cpu outside of any training loop standalone code to reproduce the issue shell here be a link to the colab with the issue in cell 2 code for issue from tensorflow keras layer import dense bidirectional flatten lstm batchnormalization tensorflow version 2 9 lstm neuron 160 seq len 12 patience 5 lr 2 5e 5 alpha 1 batchsize 48 dropoutval 25 epoch 1000 import datum as dataframe alldata pickle load open binancealldataforrl pkl rb inputdata pickle load open binanceinputdataforrl pkl rb directory os getcwd nvidia smi device name tf test gpu device name if device name device gpu 0 raise systemerror gpu device not find print find gpu at format device name essential function def preprocess df df outofsamplepct df df drop close open high low volume volume denominator 1 df dropna inplace true sequential datum prev day deque maxlen seq len for I in df value prev day append n for n in I 1 if len prev day seq len sequential datum append np array prev day I 1 x train y train x test y test testcutoff np int 1 outofsamplepct len sequential datum for export for seq target in sequential datum testcutoff x train append seq y train append target for seq target in sequential datum testcutoff x test append seq y test append target output section return np array x train np array y train np array x test np array y test def preprocess df pre df df df drop close open high low volume volume denominator 1 df dropna inplace true extra prediction datum sequential datum prev day deque maxlen seq len for I in df value prev day append n for n in I 1 if len prev day seq len sequential datum append np array prev day I 1 x pre y pre for seq target in sequential datum x pre append seq y pre append target return np array x pre np array y pre experimental past present predictor grab datum run model indlength len alldata mainhold length np shape alldata close wasup np nan np zero length wasup period alldata close value period alldata open value 1 length 0 period 1 alldata wasup wasup 1 test train datum that have zero overlap w oos datum inputdata alldata alldata inputlength wasup target variable length np shape inputdata close wasup np nan np zero length wasup period inputdata close value period inputdata open value 1 length 0 period 1 inputdata wasup wasup 1 inputlabel alldata column val inputlabel inputdata column time sort alldata index value last 5pct int outofsamplepct len alldata testcutoff time last 5pct mainholder for export datum alldata alldata dropna axis 0 how any inputdata inputdata dropna axis 0 how any x train y train x test y test preprocess df alldata outofsamplepct x pre y pre preprocess df pre inputdata custom loss function def custom loss function y true y pre position tf math multiply y true y pre omniscience perform the good sh tf math reduce mean position tf math reduce std position sharp return sh custom metric function def custom metric function y true y pre position tf math multiply y true y pre sh tf math reduce mean position tf math reduce std position sharp return sh custom metric function 2 def custom metric function2 y true y pre position tf math multiply y true y pre r tf math reduce mean position return r build residual model input tf keras input shape seq len x train shape 2 original x bidirectional lstm lstm neuron activation tanh return sequence true dropout dropoutval kernel initializer he uniform input x batchnormalization x x rnn bidirectional lstm lstm neuron activation tanh return sequence true dropout dropoutval kernel initializer he uniform x x rnn batchnormalization x rnn x tf keras layer add x x rnn add this back in and take out the concat below x bidirectional lstm np int lstm neuron activation tanh return sequence true dropout dropoutval kernel initializer he uniform x x flatten x x batchnormalization x out dense 1 activation tanh x model tf keras model model input input output out model summary opt tf keras optimizer adam learning rate lr checkpoint filepath allstock epoch 02d val custom metric function 10f checkpoint modelcheckpoint model model format filepath monitor val custom metric function verbose 1 save well only true save weight only false mode min earlystopping es earlystopping monitor val custom metric function mode min patience patience model compile optimizer opt metric custom metric function custom metric function2 loss custom loss function history model fit x train y train validation datum x test y test epochs epoch batch size batchsize callback es checkpoint verbose 1 use multiprocesse false worker 1 shuffle true saving model and oos metric identify the good model maximal accuracy tix allstock for root dir file in os walk os getcwd model def find s ch return i d for i d ltr in enumerate s if ltr ch acc0 0 for name in dir if name 0 len tix tix file pertain to specific industry dashspot dashspot find name acc float name dashspot 1 1 dashspot 1 13 if acc acc0 acc0 acc bestfilename name print good model bestfilename model tf keras model load model os getcwd model bestfilename compile false os rename os getcwd model bestfilename os getcwd model bestmodel export model to local drive zip r content bestmodel zip content model bestmodel from google colab import file file download content bestmodel zip file download content pca pkl file download content comp pkl opt tf keras optimizer adam model compile optimizer opt loss custom loss function metric custom metric function relevant log output shell unknownerror graph execution error fail to find the dnn implementation node cudnnrnn model bidirectional forward lstm partitionedcall op inference train function 17056
tensorflowtensorflow
be the estimate of how big tflite be on arm incorrect for io
Bug
click to expand issue type documentation bug source binary tensorflow version 2 9 1 custom code no os platform and distribution no response mobile device ios python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour when try to link against tflite for io base on the cocoapod we notice a nearly 3 4 mb increase in our binary size as oppose to the documentation which state that the tensorflow lite binary be 1 mb when all 125 support operator be link for 32 bit arm build and less than 300 kb when use only the operator need for support the common image classification model inceptionv3 and mobilenet admittedly this be for 64 bit arm but I don t believe that should make a difference per bloaty the top symbol come from file size vm size 30 1 1 27mi 30 1 1 27mi tflite acquireflexdelegate 16 9 728ki 16 9 728ki tflitexnnpackdelegatedelete 12 8 548ki 12 8 548ki tflitesignaturerunnerdelete 9 4 405ki 9 4 405ki tfliteopaquenodegetoutput 8 7 372ki 8 6 372ki tflitedelegatecreate 6 4 276ki 6 4 276ki tflitetensorcopytobuffer look into it far it look like the flex delegate be allowliste though I wouldn t normally expect it to be I think this lead to an incorrect estimate of the binary size on the tflite doc link above be there any reason why the flexdelegate need to be link for io similar question for xnnpack can t they be extras to install like the metal or coreml delegate make those two delegate optional would reduce the binary size by nearly 2 mb a near 50 decrease standalone code to reproduce the issue shell n a relevant log output no response
tensorflowtensorflow
integer dtype not support for tensorflow math tan
Bug
click to expand issue type documentation bug source source tensorflow version 2 9 1 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the documentation here state that it support integer but it doesn t standalone code to reproduce the issue shell import tensorflow print tensorflow math tan tensorflow constant 1 2 3 dtype int32 extra it doesn t support any integer type relevant log output shell file usr local lib python3 8 dist package tensorflow python framework op py line 7164 in raise from not ok status raise core status to exception e from none pylint disable protect access tensorflow python framework error impl notfounderror could not find device for node node tan tan t dt int32 all kernel register for op tan device gpu t in dt double device gpu t in dt float device gpu t in dt half device cpu t in dt complex128 device cpu t in dt complex64 device cpu t in dt double device cpu t in dt float device cpu t in dt bfloat16 device cpu t in dt half op tan
tensorflowtensorflow
fix typo
Bug
fix a small typo in graph property cc and update a missing punctuation in tf doc fix the issue 56618
tensorflowtensorflow
opencl crash in model convert by the new version of tf e g tf 2 9
Bug
1 system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow installation pip package or build from source source tensorflow library version if pip package or github sha if build from source tf 2 8 tf 2 9 tf nightly 2 code we have implement a small tool with a comprehensive readme to reproduce the mention issue here be the link to the repository 3 failure after conversion converting model in which there be at least one dense layer feed by a 3d input should be do use old version of tensorflow we have test v2 3 and v2 4 otherwise the obtain tflite model will crash with the opencl delegate with this error message error tflitegpudelegate init fully connect amount of input datum should match weight width
tensorflowtensorflow
tf keras layer dense documentation wrong
Bug
click to expand issue type documentation bug source source tensorflow version tf2 4 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the parameter unit be a positive integer on documentation however when unit be 0 code work and when unit be 1 the error message be error dimension 1 must be 0 so unit be not a negative integer whose value can be set to positive integer and zero standalone code to reproduce the issue shell import tensorflow as tf result try arg 0 0 activation softmax arg class tf keras layer dense arg 0 activation activation arg input tf random uniform 3 50 1 dtype tf float32 result re arg class arg input except exception as e result err error str e print result result re relevant log output no response
tensorflowtensorflow
the type of parameter prefix in tf keras backend get uid documentation wrong
Bug
click to expand issue type documentation bug source source tensorflow version tf2 4 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell parameter prefix must be a string on documentation however I find that when type of prefix be bool tuple integer the code work but when type of prefix be list code throw an exception error unhashable type list I think support type like integer should be add to documentation standalone code to reproduce the issue shell import tensorflow as tf result try arg 0 5 result re tf keras backend get uid arg 0 except exception as e result err error str e print result result re 1 relevant log output no response
tensorflowtensorflow
the type of parameter device type in tf config list logical device wrong
Bug
click to expand issue type documentation bug source source tensorflow version tf2 4 custom code yes os platform and distribution win11 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell device type be optional string on documentation however I find that when device type be list int float or other type code work it just return an empty list so type restriction should be remove from the documentation standalone code to reproduce the issue shell import tensorflow as tf result try arg 0 51 result re tf config list logical device arg 0 except exception as e result err error str e print result result re relevant log output no response
tensorflowtensorflow
update documentation of tf ab
Bug
click to expand issue type documentation bug source source tensorflow version tf2 4 custom code yes os platform and distribution win11 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the input of tf abs shoule be a tensor or sparsetensor of type float16 float32 float64 int32 int64 complex64 or complex128 on documentation that be to say input doesn t support the type of int8 tensor so if I input a int8 tensor it would throw exception but I find that when I input int8 tensor it work well I think it would be well to add an example to illustrate how the code work in the case of int8 or add support for int8 type on documentation standalone code to reproduce the issue shell import tensorflow as tf result try arg 0 tf saturate cast tf random uniform minval 0 maxval 2 dtype tf int64 dtype tf int8 print arg 0 tf tensor 1 shape dtype int8 result re tf abs arg 0 except exception as e result err error str e print result re relevant log output no response
tensorflowtensorflow
problem with constructor of tensor
Bug
click to expand issue type bug source source tensorflow version tf2 9 1 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell l699 there be a problem with the constructor of tensor pass datatype to shape standalone code to reproduce the issue shell tensor tensor datatype type shape type buf nullptr l699 relevant log output no response
tensorflowtensorflow
win fft2d try to link m lib
Bug
click to expand issue type bug source source tensorflow version 2 9 1 custom code no os platform and distribution window 10 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell can not compile tensorflow lite on window use cmake build system fail at linker stage while try to link m lib the math library be only suitable for link on macos and linux will provide a patch for this issue standalone code to reproduce the issue shell build on any 2 9 x branch on window use cmake build will fail while try to link non existent m lib at fft2d link stage relevant log output no response
tensorflowtensorflow
header tensorflow tsl platform stack frame h not include with tf nightly pip package
Bug
click to expand issue type bug source binary tensorflow version v1 12 1 79472 g4aaefec710e 2 11 0 dev20220810 custom code yes os platform and distribution no response mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour horovod do not build with tf nightly anymore see the code include tensorflow core framework op h at l34 header file in tensorflow core be include with the pip package however this ultimately include tensorflow tsl platform stack frame h which do not come with the package this appear to have change with cc joker eph standalone code to reproduce the issue shell include tensorflow core framework op h relevant log output shell 2022 08 10t16 38 11 6788601z 44 157 3 66 building cxx object horovod tensorflow cmakefile tensorflow dir mpi op cc o 2022 08 10t16 38 11 6792823z 44 157 3 cd tmp pip req build ggufvm1f build temp linux x86 64 3 8 relwithdebinfo horovod tensorflow usr bin c deigen mpl2 only 1 dhave gloo 1 dhave mpi 1 dtensorflow version 9999999999 dtensorflow export I tmp pip req build ggufvm1f third party httprequ include I tmp pip req build ggufvm1f third party boost assert include I tmp pip req build ggufvm1f third party boost config include I tmp pip req build ggufvm1f third party boost core include I tmp pip req build ggufvm1f third party boost detail include I tmp pip req build ggufvm1f third party boost iterator include I tmp pip req build ggufvm1f third party boost lockfree include I tmp pip req build ggufvm1f third party boost mpl include I tmp pip req build ggufvm1f third party boost parameter include I tmp pip req build ggufvm1f third party boost predef include I tmp pip req build ggufvm1f third party boost preprocessor include I tmp pip req build ggufvm1f third party boost static assert include I tmp pip req build ggufvm1f third party boost type trait include I tmp pip req build ggufvm1f third party boost utility include I tmp pip req build ggufvm1f third party lbfgs include I tmp pip req build ggufvm1f third party gloo I tmp pip req build ggufvm1f third party flatbuffer include isystem usr local lib python3 8 dist package tensorflow include I usr local lib python3 8 dist package tensorflow include d glibcxx use cxx11 abi 1 deigen max align byte 64 pthread fpic wall ftree vectorize mf16c mavx mfma o3 g dndebug fpic std c 17 o cmakefile tensorflow dir mpi op cc o c tmp pip req build ggufvm1f horovod tensorflow mpi op cc 2022 08 10t16 38 11 6795960z 44 157 4 in file include from tmp pip req build ggufvm1f horovod common gloo mpi mpi context h 25 2022 08 10t16 38 11 8295689z 44 157 4 from tmp pip req build ggufvm1f horovod common gloo gloo context h 25 2022 08 10t16 38 11 8296650z 44 157 4 from tmp pip req build ggufvm1f horovod common gloo gloo controller h 19 2022 08 10t16 38 11 8297220z 44 157 4 from tmp pip req build ggufvm1f horovod common gloo gloo controller cc 16 2022 08 10t16 38 11 8298063z 44 157 4 tmp pip req build ggufvm1f horovod common gloo mpi half h in function void horovod common halfbits2float const short unsigned int float 2022 08 10t16 38 11 8298871z 44 157 4 tmp pip req build ggufvm1f horovod common gloo mpi half h 76 11 warning dereferencing type pun pointer will break strict aliasing rule wstrict aliase 2022 08 10t16 38 11 8299357z 44 157 4 76 re reinterpret cast f 2022 08 10t16 38 11 8299655z 44 157 4 2022 08 10t16 38 13 5852575z 44 159 3 in file include from usr local lib python3 8 dist package tensorflow include tensorflow core platform status h 32 2022 08 10t16 38 13 7360889z 44 159 3 from usr local lib python3 8 dist package tensorflow include tensorflow core lib core status h 19 2022 08 10t16 38 13 7361596z 44 159 3 from usr local lib python3 8 dist package tensorflow include tensorflow core framework resource base h 20 2022 08 10t16 38 13 7362253z 44 159 3 from usr local lib python3 8 dist package tensorflow include tensorflow core framework resource handle h 21 2022 08 10t16 38 13 7362956z 44 159 3 from usr local lib python3 8 dist package tensorflow include tensorflow core framework type h 32 2022 08 10t16 38 13 7363571z 44 159 3 from usr local lib python3 8 dist package tensorflow include tensorflow core framework op def builder h 28 2022 08 10t16 38 13 7364233z 44 159 3 from usr local lib python3 8 dist package tensorflow include tensorflow core framework full type inference util h 23 2022 08 10t16 38 13 7364870z 44 159 3 from usr local lib python3 8 dist package tensorflow include tensorflow core framework op h 24 2022 08 10t16 38 13 7365433z 44 159 3 from tmp pip req build ggufvm1f horovod tensorflow mpi op cc 34 2022 08 10t16 38 13 7366144z 44 159 3 usr local lib python3 8 dist package tensorflow include tensorflow core platform stack frame h 19 10 fatal error tensorflow tsl platform stack frame h no such file or directory 2022 08 10t16 38 13 7366631z 44 159 3 19 include tensorflow tsl platform stack frame h 2022 08 10t16 38 13 7366941z 44 159 3 2022 08 10t16 38 13 7367206z 44 159 3 compilation terminate 2022 08 10t16 38 13 7367601z 44 159 3 make 2 horovod tensorflow cmakefile tensorflow dir build make 453 horovod tensorflow cmakefile tensorflow dir mpi op cc o error 1
tensorflowtensorflow
docker hub ubuntu version documentation error
Bug
click to expand issue type documentation bug source binary tensorflow version n a custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell on docker hub it say image build after may 20 2019 tf nightly plus tf version 1 14 and onward be base on ubuntu 18 04 early image be base on ubuntu 16 04 however if I pull tensorflow tensorflow late gpu I get root d98e1fb52f91 cat etc lsb release distrib i d ubuntu distrib release 20 04 distrib codename focal distrib description ubuntu 20 04 4 lt the documentation should be fix to accurately indicate which ubuntu version you ll get for each image standalone code to reproduce the issue shell n a relevant log output no response
tensorflowtensorflow
please update the simplify chinese document or at least set an indicator to tell reader the document be out of date
Bug
click to expand issue type documentation bug source source tensorflow version na custom code no os platform and distribution na mobile device na python version na bazel version na gcc compiler version na cuda cudnn version na gpu model and memory na current behaviour android cc bazel build c opt config android arm64 tensorflow lite delegates gpu gl delegate for static library bazel build c opt config android arm64 tensorflow lite delegates gpu libtensorflowlite gpu gl so for dynamic library android cc bazel build c opt config android arm64 tensorflow lite delegates gpu delegate for static library bazel build c opt config android arm64 tensorflow lite delegates gpu libtensorflowlite gpu delegate so for dynamic library I read document in all 20 language but only the simplified chinese version be out of date and it doesn t have even an indicator to tell I that it be out of date the old version library gl delegate its header gl delegate h show c absl deprecate use tflitegpudelegatev2create define in delegate h instead tfl capi export tflitedelegate tflitegpudelegatecreate const tflitegpudelegateoption option standalone code to reproduce the issue shell na relevant log output shell na
tensorflowtensorflow
when I set unroll of lstm to true conversion problem occur
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 custom code yes os platform and distribution windows10 mobile device no response python version 3 9 current behaviour shell my model use the lstm layer during training parameter of lstm unroll false and stateful false after train I convert the model to tflite if I still set unroll false and stateful false during conversion then there be a unidirectionalsequencelstm layer in the tflite file unfortunately I need to use tensorflow lite for micro unidirectionalsequencelstm layer be not support in tensorflow lite for micro now so I need to set unroll true and stateful false during conversion this setting can avoid use unidirectionalsequencelstm test result display that tensorflow lite for micro support the model with parameter unroll true and stateful false however the result of inference with unroll true and stateful false be different from that of inference with unroll false and stateful false how to make the two result consistent standalone code to reproduce the issue shell during train p dense unit self nbin activation tanh p p dense unit self nbin activation tanh p p lstm unit self nbin activation tanh return sequence true stateful false p p dense unit self nbin activation softplus p be lambda self forward e e dense unit self nbin activation linear be e lstm unit self nbin activation tanh return sequence true stateful false e z e p z dense unit self wlen z activation linear z during convert p dense unit self nbin activation tanh p p dense unit self nbin activation tanh p p lstm unit self nbin activation tanh return sequence true unroll true stateful false p p dense unit self nbin activation softplus p be lambda self forward e e dense unit self nbin activation linear be e lstm unit self nbin activation tanh return sequence true unroll true stateful false e z e p z dense unit self wlen z activation linear z relevant log output shell the result of inference with unroll true and stateful false be different from that of inference with unroll false and stateful false
tensorflowtensorflow
matrixtriangularsolve error invalidargumenterror input matrix be not invertible
Bug
click to expand issue type bug source binary tensorflow version v2 7 0 rc1 69 gc256c071bb2 custom code no os platform and distribution ubuntu 20 04 4 lts mobile device n a python version 3 8 10 bazel version n a gcc compiler version n a cuda cudnn version n a gpu model and memory n a current behaviour when a matrix be not invertible I get an exception tensorflow python framework error impl invalidargumenterror input matrix be not invertible op matrixtriangularsolve while compute the gradient I expect the gradient to be fill with nan value in the appropriate location as occur on the forward pass in the code sample I provide below note that there be two set of datum store in x the first one have an ill define cholesky decomposition of the covariance matrix whereas the second one be fine I could still use the gradient of the second one to adjust my parameter despite it be in the same batch with the first one but I be prevent from do so because the matrixtriangularsolve operation raise a hard exception rather than fill the appropriate portion of the gradient tensor with nan as be the typical behavior for other op even if I split the datum into separate batch keras will still choke on this hard exception and fail to complete the training epoch if matrixtriangularsolve return nan in the gradient user can add their own code to assert that nan be not present in the gradient and raise a hard exception if that be the desire behavior or importantly we have the option to use a function with a custom gradient on the input tensor and block the nan gradient from corrupt our parameter while continue to train but as long as matrixtriangularsolve raise an exception itself user be leave with little to no control over how the problem be handle standalone code to reproduce the issue python3 import tensorflow as tf import tensorflow probability as tfp x tf variable 0 96169615 0 03600049 0 86897445 0 14993548 0 32782674 0 05467033 0 70350194 0 45990896 0 24882722 0 19966352 0 22277904 0 18279016 0 996189 0 8356074 0 32451844 0 9954922 0 95547783 0 7614579 with tf gradienttape as tape y tfp stat cholesky covariance x sample axis 1 tf print y g tape gradient y x relevant log output python3 2022 08 03 15 25 26 880273 w tensorflow core kernel linalg cholesky op cc 56 cholesky decomposition be not successful eigen llt fail with error code 1 fill low triangular output with nan traceback most recent call last file usr lib python3 dist package ipython core interactiveshell py line 3331 in run code exec code obj self user global ns self user n file line 10 in g tape gradient y x file usr local lib python3 8 dist package tensorflow python eager backprop py line 1084 in gradient flat grad imperative grad imperative grad file usr local lib python3 8 dist package tensorflow python eager imperative grad py line 71 in imperative grad return pywrap tfe tfe py tapegradient file usr local lib python3 8 dist package tensorflow python eager backprop py line 159 in gradient function return grad fn mock op out grad file usr local lib python3 8 dist package tensorflow python ops linalg grad py line 471 in choleskygrad l inverse linalg op matrix triangular solve l file usr local lib python3 8 dist package tensorflow python util traceback util py line 153 in error handler raise e with traceback filter tb from none file usr local lib python3 8 dist package tensorflow python framework op py line 7107 in raise from not ok status raise core status to exception e from none pylint disable protect access tensorflow python framework error impl invalidargumenterror input matrix be not invertible op matrixtriangularsolve nan 0 2 78880322e 11 nan nan 0 nan nan nan 0 375321597 0 0 0 317106605 0 049177371 0 0 169663742 0 178507909 0 000427246094
tensorflowtensorflow
attributeerror userobject object have no attribute add slot
Bug
issue type bug source binary tensorflow version 2 6 custom code yes current behaviour I have model save use savedmodel format the model be train on kaggle tpu and save in the follow way python in kaggle tpu it must be save in the following way save locally tf save model saveoption experimental io device job localhost final model save final model option save locally now it seem like if I try to load this model in the following way python model tf save model load final model attributeerror traceback most recent call last tmp ipykernel 17 3033429325 py in 1 model tf save model load final model attributeerror userobject object have no attribute add slot but if I do as follow it work model tf keras model load model final model but for some reason I need to make tf save model load api work instead tf keras model load model api what approach should we take here
tensorflowtensorflow
assertionerror duplicate registration for type optimizer in keras optimizer tensorflow 2 9 1
Bug
click to expand issue type bug source source tensorflow version tf 2 9 1 custom code yes os platform and distribution macos mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell exception error in keras regualrization and optimizer 1 un instal and re instal tensorflow kera from scratch but not luck 2 create a new env and start from scracth and use late tf 2 9 1 but no luck as same issue be see iot dl akram isheriff m rbna data iot dl akram isheriff m rbna data python c import tensorflow as tf print tf version git version tf version version v2 9 0 18 gd8ce9f9c301 2 9 1 iot dl akram isheriff m rbna datum standalone code to reproduce the issue shell import numpy as np import panda as pd import matplotlib pyplot as plt import time from keras model import sequential from keras layers recurrent import lstm from keras callbacks import tensorboard from keras callbacks import modelcheckpoint from tensorflow python keras layers core import reshape dense dropout activation flatten epoch 100 batch size 50 each training datum point will be length 100 1 since the last value in each sequence be the label sequence length 100 omp error issue fix some machine may show error import os os environ kmp duplicate lib ok true input data generation def prepare data datum train start train end test start test end print length of datum len datum training datum print preparae training datum result for index in range train start train end sequence length result append data index index sequence length result np array result result result mean normalize result print training data shape result shape train result train start train end np random shuffle train x train train 1 y train train 1 test datum print create test datum result for index in range test start test end sequence length result append data index index sequence length result np array result result result mean normalize result print test data shape format result shape x test result 1 y test result 1 print shape x train np shape x train print shape x test np shape x test x train np reshape x train x train shape 0 x train shape 1 1 x test np reshape x test x test shape 0 x test shape 1 1 return x train y train x test y test model genration function def generate model model sequential first lstm layer define the input sequence length model add lstm input shape sequence length 1 1 unit 32 return sequence true model add dropout 0 2 second lstm layer with 128 unit model add lstm unit 128 return sequence true model add dropout 0 2 third lstm layer with 100 unit model add lstm unit 100 return sequence false model add dropout 0 2 densely connect output layer with the linear activation function model add dense unit 1 model add activation linear model compile loss mean squared error optimizer rmsprop return model function for result normalisation def normalize result result mean result mean result std result std result result mean result result std return result result mean function for run the model def run model model none data none global start time time time print loading datum datum b pd read csv user akram akram code folder iot dl har data cpu utilisation csv parse date 0 infer datetime format true datum datum b cpu utilisation as matrix train on first 700 sample and test on next 300 sample test set have anomaly x train y train x test y test prepare data datum 0 600 400 700 tensor board set tensorboard tensorboard log dir log histogram freq 0 write graph true write image true if model be none model generate model try print training checkpointer modelcheckpoint filepath lstm result checkpoint 02 hdf5 verbose 1 save well only true monitor loss model fit x train y train batch size batch size epoch epoch callback checkpointer tensorboard validation split 0 05 print predict predict model predict x test model fit x train y train batch size batch size nb epoch 1 callback checkpointer tensorboard print reshaping predict predict np reshape predict predict size except keyboardinterrupt print prediction exception print training duration format time time global start time return model y test 0 try plt figure figsize 20 8 plt plot y test len y test b label observe plt plot predict len y test g label predict plt plot y test predict 2 r label root mean square deviation plt legend plt show except exception as e print plot exception print str e print training duration format time time global start time return model y test predict call the run function model y test predict run model relevant log output shell traceback most recent call last file user akram akram code folder iot dl har model lstm security threat py line 6 in from keras model import sequential file user akram opt anaconda3 envs iot dl lib python3 9 site package kera init py line 25 in from keras import model file user akram opt anaconda3 envs iot dl lib python3 9 site package kera model init py line 18 in from keras engine functional import functional file user akram opt anaconda3 envs iot dl lib python3 9 site package keras engine functional py line 25 in from keras engine import base layer file user akram opt anaconda3 envs iot dl lib python3 9 site package kera engine base layer py line 40 in from keras mixed precision import loss scale optimizer file user akram opt anaconda3 envs iot dl lib python3 9 site package kera mix precision loss scale optimizer py line 20 in from keras optimizer v2 import optimizer v2 file user akram opt anaconda3 envs iot dl lib python3 9 site package kera optimizer v2 optimizer v2 py line 1461 in tf internal save model load register revive type file user akram opt anaconda3 envs iot dl lib python3 9 site package tensorflow python save model revive type py line 133 in register revive type raise assertionerror f duplicate registration for type identifi assertionerror duplicate registration for type optimizer
tensorflowtensorflow
break link to model mitigation guide for variable scope python
Bug
click to expand issue type documentation bug source source tensorflow version 2 9 1 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the link to the model mitigation guide be break in the documentation for variable scope in the second paragraph of migrate to tf2 the current link point to instead of the link on line 2154 just need https add to prevent it from be treat as a relative url standalone code to reproduce the issue shell it s live in the current documentation relevant log output no response
tensorflowtensorflow
fix fail unit test quantization op quantization op test
Bug
the failure be report in this pr fix the issue
tensorflowtensorflow
c api index error in shapeinference c api
Bug
click to expand issue type bug source binary tensorflow version 2 9 1 custom code no os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell error when use shape inference c api tf shapeinferencecontextgetinput and tf shapeinferencecontextsetoutput even the index be not out of range there be an out of range error cause check the source code we can see the condition 0 I I cc ctx num input which be wrong obviously source code here be source code link l146 standalone code to reproduce the issue shell no standalone test need relevant log output shell check fail tf ok tf getcode status 0 vs 3
tensorflowtensorflow
gradient calculate in reverse mode and forward mode be not equal for tf compat v1 kera layer globalavgpool2d
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 custom code yes os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell the jacobian matrix calculate by reverse mode be not equal to that compute in forward mode standalone code to reproduce the issue shell import tensorflow as tf import numpy as np input tf constant 0 1695373 0 2855878 0 67758346 0 2562498 0 11648738 0 76352525 0 61349154 0 12928855 0 0187813 0 72227335 0 9399148 0 3782258 0 4337635 0 4784329 0 28710878 dtype tf float32 globalavgpool2d tf compat v1 kera layer globalavgpool2d data format channel last keepdim 2 2 with tf gradienttape as g g watch input re backward globalavgpool2d input jacob g jacobian re backward input grad fwd arr for I in range tf size input tangent tf reshape tf one hot I tf size input shape input shape with tf autodiff forwardaccumulator input tangent as acc re forward globalavgpool2d input jvp acc jvp res forward grad fwd arr append jvp jacob fwd tf reshape tf convert to tensor grad fwd arr shape jacob shape np testing assert allclose jacob jacob fwd relevant log output shell assertionerror not equal to tolerance rtol 1e 07 atol 0 mismatch element 20 45 44 4 max absolute difference 0 2 max relative difference 1 x array 0 2 0 2 y array 0 2 0
tensorflowtensorflow
message convert variable to constant be deprecate
Bug
click to expand issue type documentation bug source binary tensorflow version 2 8 2 custom code no os platform and distribution linux ubuntu 18 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour tf compat v1 graph util convert variable to constant give a warning message warn tensorflow from 2 convert variable to constant from tensorflow python framework graph util impl be deprecate and will be remove in a future version instruction for update use tf compat v1 graph util convert variable to constant it s confuse here I do use tf compat v1 graph util convert variable to constant why do it still give a warning message standalone code to reproduce the issue py import tensorflow as tf tf compat v1 graph util convert variable to constant tf compat v1 session tf compat v1 graphdef relevant log output no response
tensorflowtensorflow
use tf function on a model call will not utilize gpu nor vram but log placement to gpu
Bug
click to expand issue type bug source source tensorflow version 2 9 1 custom code yes os platform and distribution window 11 mobile device no response python version 3 9 12 bazel version no response gcc compiler version no response cuda cudnn version 11 2 8 6 gpu model and memory 3090 24 gb current behaviour shell utilize tf function on a function that call model a b aka model call although eager tensor have all be allocate to the gpu actual training do not happen on the gpu rather the cpu notice 20x slow training speed than when utilize nest tf function in model call which be not allow in mirroredstrategy standalone code to reproduce the issue shell model call def call self x y val self disc gradient x y self gen gradient x y return val model call wrapper tf function this be the problematic tf function if this tf function be move to the below tf function it work fine def run batch self batch x batch y val self model batch x batch y return val gradient function tf function def get disc gradient self x y calc val self disc optimizer apply gradient disc gradient var return disc gradient obviously this code currently do nothing it s a minimal reproducible example relevant log output shell warning tensorflow use mirroredstrategy eagerly have significant overhead currently we will be work on improve this in the future but for now please wrap call for each replica or experimental run or run inside a tf function to get the good performance the above log be relevant as it try to make we wrap the entire model call in a tf function which be where I discover this entire issue this issue be replicateable w o use mirroredstrategy all that matter be that tf function be wrap a model call no other relevant log output could be find only way to notice the issue be the extremely long batch time along with the gpu only utilize 3 gb of vram compare to a full 20 gb when the tf function be move
tensorflowtensorflow
keras metric mean do not support cross replica context
Bug
click to expand issue type bug source binary tensorflow version 2 9 1 custom code yes os platform and distribution linux ubunto 18 04 6 lts mobile device no response python version 3 9 12 bazel version no response gcc compiler version no response cuda cudnn version 11 2 8 gpu model and memory no response current behaviour shell try to call update state on a kera metric mean object within a cross replica context specifically tf distribute mirroredstrategy raise valueerror synconreadvariable do not support assign add in cross replica context when aggregation be set to tf variableaggregation sum reproducible in tf 2 8 0 as well standalone code to reproduce the issue shell import tensorflow as tf from tensorflow import kera with tf distribute mirroredstrategy scope metric kera metric mean metric update state 0 relevant log output shell info tensorflow use mirroredstrategy with device job localhost replica 0 task 0 device gpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 valueerror traceback most recent call last input in 8 in 1 with tf distribute mirroredstrategy scope 2 metric kera metric mean 3 metric update state 0 file anaconda3 envs tf 2 9 0 lib python3 9 site package kera util metric util py 70 in update state wrapper decorate metric obj args kwargs 64 raise valueerror 65 try to run metric update state in replica context when 66 the metric be not create in tpustrategy scope 67 make sure the keras metric be create in tpustrategy scope 69 with tf util graph context for symbolic tensor args kwargs 70 update op update state fn args kwargs 71 if update op be not none update op will be none in eager execution 72 metric obj add update update op file anaconda3 envs tf 2 9 0 lib python3 9 site package kera metric base metric py 140 in metric new update state fn args kwargs 137 control status tf internal autograph control status ctx 138 ag update state tf internal autograph tf convert 139 obj update state control status 140 return ag update state args kwargs file anaconda3 envs tf 2 9 0 lib python3 9 site package tensorflow python autograph impl api py 689 in convert decorator wrapper args kwargs 687 try 688 with conversion ctx 689 return convert call f args kwargs option option 690 except exception as e pylint disable broad except 691 if hasattr e ag error metadata file anaconda3 envs tf 2 9 0 lib python3 9 site package tensorflow python autograph impl api py 331 in convert call f args kwargs caller fn scope option 329 if conversion be in allowlist cache f option 330 log log 2 allowliste s from cache f 331 return call unconverted f args kwargs option false 333 if ag ctx control status ctx status ag ctx status disable 334 log log 2 allowliste s autograph be disable in context f file anaconda3 envs tf 2 9 0 lib python3 9 site package tensorflow python autograph impl api py 458 in call unconverted f args kwargs option update cache 455 return f self call args kwargs 457 if kwargs be not none 458 return f args kwargs 459 return f args file anaconda3 envs tf 2 9 0 lib python3 9 site package kera metric base metric py 465 in reduce update state self value sample weight 463 value sum tf reduce sum value 464 with tf control dependency value sum 465 update total op self total assign add value sum 467 exit early if the reduction doesn t have a denominator 468 if self reduction metric util reduction sum file anaconda3 envs tf 2 9 0 lib python3 9 site package tensorflow python distribute value py 1313 in synconreadvariable assign add self value use lock name read value 1310 if ds context in cross replica context and 1311 not value util in replica update context 1312 value util mark as unsaveable 1313 return value util on read assign add cross replica 1314 self value read value read value 1315 else 1316 return super synconreadvariable 1317 self assign add value use lock name read value file anaconda3 envs tf 2 9 0 lib python3 9 site package tensorflow python distribute value util py 197 in on read assign add cross replica var value read value 195 if ds context in cross replica context 196 if var aggregation vs variableaggregation sum 197 raise valueerror 198 synconreadvariable do not support assign add in 199 cross replica context when aggregation be set to 200 tf variableaggregation sum 201 return assign on each device var assign add on device 202 value read value valueerror synconreadvariable do not support assign add in cross replica context when aggregation be set to tf variableaggregation sum
tensorflowtensorflow
unit test quantization op quantization op test fail on mkl aarch64
Bug
click to expand issue type bug source source tensorflow version git head custom code no os platform and distribution cento 7 mobile device n a python version 3 8 10 bazel version 5 1 1 gcc compiler version 10 2 1 cuda cudnn version n a gpu model and memory n a current behaviour shell unit test tensorflow python kernel test quantization op quantization op test fail with segfault introduce by standalone code to reproduce the issue shell bazel test test timeout 300 500 1 1 flaky test attempt 3 test output all cache test result no noremote accept cache config nonccl config mkl aarch64 copt mtune generic copt march armv8 a copt o3 test env tf enable onednn opt 1 copt fopenmp linkopt lgomp build tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu test tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu verbose failure build test only job 75 tensorflow python kernel test quantization op quantization op test relevant log output shell test output for tensorflow python kernel test quantization op quantization op test 2022 07 22 11 25 14 622796 I tensorflow core util util cc 175 experimental onednn custom operation be on if you experience issue please turn they off by set the environment variable tf enable onednn opt 0 run test under python 3 8 13 tmp workspace venv cp38 cp38 bin python3 run fakequantwithminmaxvarsopt test invalid input info tensorflow run test invalid input in graph mode i0722 11 25 15 747941 281472890588256 test util py 1490 run test invalid input in graph mode warn tensorflow from opt python cp38 cp38 lib python3 8 contextlib py 83 tensorflowtestcase test session from tensorflow python framework test util be deprecate and will be remove in a future version instruction for update use self session or self cache session instead w0722 11 25 15 748297 281472890588256 deprecation py 350 from opt python cp38 cp38 lib python3 8 contextlib py 83 tensorflowtestcase test session from tensorflow python framework test util be deprecate and will be remove in a future version instruction for update use self session or self cache session instead info tensorflow time main fakequantwithminmaxvarsoptest test invalid input 0 08 i0722 11 25 15 824039 281472890588256 test util py 2460 time main fakequantwithminmaxvarsoptest test invalid input 0 08 info tensorflow run test invalid input in eager mode i0722 11 25 15 824797 281472890588256 test util py 1499 run test invalid input in eager mode info tensorflow time main fakequantwithminmaxvarsoptest test invalid input 0 05s i0722 11 25 15 870929 281472890588256 test util py 2460 time main fakequantwithminmaxvarsoptest test invalid input 0 05s ok fakequantwithminmaxvarsopt test invalid input run fakequantwithminmaxvarsopt test session skip fakequantwithminmaxvarsopt test session run fakequantwithminmaxvarsperchannelopt test invalid input info tensorflow run test invalid input in graph mode i0722 11 25 15 872137 281472890588256 test util py 1490 run test invalid input in graph mode info tensorflow time main fakequantwithminmaxvarsperchanneloptest test invalid input 0 01s i0722 11 25 15 883851 281472890588256 test util py 2460 time main fakequantwithminmaxvarsperchanneloptest test invalid input 0 01s info tensorflow run test invalid input in eager mode i0722 11 25 15 884400 281472890588256 test util py 1499 run test invalid input in eager mode info tensorflow time main fakequantwithminmaxvarsperchanneloptest test invalid input 0 01s i0722 11 25 15 890146 281472890588256 test util py 2460 time main fakequantwithminmaxvarsperchanneloptest test invalid input 0 01s ok fakequantwithminmaxvarsperchannelopt test invalid input run fakequantwithminmaxvarsperchannelopt test session skip fakequantwithminmaxvarsperchannelopt test session run quantizedownandshrinkrangeopt test invalid input info tensorflow run test invalid input in graph mode i0722 11 25 15 891172 281472890588256 test util py 1490 run test invalid input in graph mode info tensorflow time main quantizedownandshrinkrangeopt test invalid input 0 0s i0722 11 25 15 895756 281472890588256 test util py 2460 time main quantizedownandshrinkrangeopt test invalid input 0 0s info tensorflow run test invalid input in eager mode i0722 11 25 15 896286 281472890588256 test util py 1499 run test invalid input in eager mode info tensorflow time main quantizedownandshrinkrangeopt test invalid input 0 02 i0722 11 25 15 911795 281472890588256 test util py 2460 time main quantizedownandshrinkrangeopt test invalid input 0 02 ok quantizedownandshrinkrangeopt test invalid input run quantizedownandshrinkrangeopt test session skip quantizedownandshrinkrangeopt test session run quantizedaddopt test invalid input info tensorflow run test invalid input in graph mode i0722 11 25 15 912965 281472890588256 test util py 1490 run test invalid input in graph mode info tensorflow time main quantizedaddopt test invalid input 0 01s i0722 11 25 15 919473 281472890588256 test util py 2460 time main quantizedaddopt test invalid input 0 01s info tensorflow run test invalid input in eager mode i0722 11 25 15 920017 281472890588256 test util py 1499 run test invalid input in eager mode info tensorflow time main quantizedaddopt test invalid input 0 01s i0722 11 25 15 933510 281472890588256 test util py 2460 time main quantizedaddopt test invalid input 0 01s ok quantizedaddopt test invalid input run quantizedaddopt test session skip quantizedaddopt test session run quantizedavgpoolingopt test invalid input info tensorflow run test invalid input in graph mode i0722 11 25 15 934696 281472890588256 test util py 1490 run test invalid input in graph mode info tensorflow time main quantizedavgpoolingopt test invalid input 0 01s i0722 11 25 15 941932 281472890588256 test util py 2460 time main quantizedavgpoolingopt test invalid input 0 01s info tensorflow run test invalid input in eager mode i0722 11 25 15 942471 281472890588256 test util py 1499 run test invalid input in eager mode fatal python error segmentation fault current thread 0x0000ffff83a84c60 most recent call first file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python eager execute py line 54 in quick execute file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python ops gen nn op py line 6987 in quantize avg pool eager fallback file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python ops gen nn op py line 6934 in quantize avg pool file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python kernel test quantization op quantization op test py line 170 in test invalid input file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python framework test util py line 1504 in run eagerly file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python framework test util py line 1520 in decorate file opt python cp38 cp38 lib python3 8 unitt case py line 633 in calltestmethod file opt python cp38 cp38 lib python3 8 unitt case py line 676 in run file opt python cp38 cp38 lib python3 8 unitt case py line 736 in call file opt python cp38 cp38 lib python3 8 unitt suite py line 122 in run file opt python cp38 cp38 lib python3 8 unitt suite py line 84 in call file opt python cp38 cp38 lib python3 8 unitt suite py line 122 in run file opt python cp38 cp38 lib python3 8 unitt suite py line 84 in call file opt python cp38 cp38 lib python3 8 unitt runner py line 176 in run file opt python cp38 cp38 lib python3 8 unitt main py line 271 in runtest file opt python cp38 cp38 lib python3 8 unitt main py line 101 in init file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles absl py absl test absltest py line 2537 in run and get test result file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles absl py absl test absltest py line 2568 in run test file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles absl py absl test absltest py line 2156 in run in app file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles absl py absl test absltest py line 2049 in main file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python platform googlet py line 51 in g main file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles absl py absl app py line 258 in run main file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles absl py absl app py line 312 in run file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python platform googlet py line 60 in main wrapper file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python platform benchmark py line 503 in benchmark main file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python platform googlet py line 62 in main file root cache bazel bazel root 7043a081cadd05f91bd91c35f2a2c120 execroot org tensorflow bazel out aarch64 opt bin tensorflow python kernel test quantization op quantization op test runfiles org tensorflow tensorflow python kernel test quantization op quantization op test py line 347 in target tensorflow python kernel test quantization op quantization op test up to date bazel bin tensorflow python kernel test quantization op quantization op test info elapse time 156 845s critical path 120 79 info 216 process 1 internal 215 local info build complete 1 test fail 216 total action tensorflow python kernel test quantization op quantization op test fail in 3 out of 3 in 2 7
tensorflowtensorflow
unit test failure on high cpu core count machine
Bug
click to expand issue type bug source source tensorflow version git head custom code no os platform and distribution cento 7 mobile device n a python version 3 8 10 bazel version 5 1 1 gcc compiler version 10 2 1 cuda cudnn version n a gpu model and memory n a current behaviour shell tensorflow python data experimental kernel tests service cross trainer cache test fail if there be more than 48 cpu core in the machine be use to test standalone code to reproduce the issue shell bazel test test timeout 300 500 1 1 flaky test attempt 2 test output all cache test result no config nonccl copt mtune generic copt march armv8 a copt o3 verbose failure build tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu test tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu build test only tensorflow python data experimental kernel tests service cross trainer cache test relevant log output shell fail testconcurrentreader test mode graph tfapiversion 2 main crosstrainercachet crosstrainercachet testconcurrentreader test mode graph tfapiversion 2 testconcurrentreader test mode graph tfapiversion 2 mode graph tf api version 2 traceback most recent call last file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow python data experimental kernel tests service cross trainer cache test runfiles absl py absl testing parameterized py line 314 in bind param test return test method self testcase param file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow python data experimental kernel tests service cross trainer cache test runfiles org tensorflow tensorflow python framework test combination py line 362 in decorate execute test method file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow python data experimental kernel tests service cross trainer cache test runfiles org tensorflow tensorflow python framework test combination py line 345 in execute test method test method kwarg to pass file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow python data experimental kernel tests service cross trainer cache test runfiles org tensorflow tensorflow python data experimental kernel tests service cross trainer cache test py line 97 in testconcurrentreader self assertequal self evaluate iterator j I assertionerror 9 0
tensorflowtensorflow
different output at inference when different version of tensorflow be use to perform quantization during conversion to tflite
Bug
1 system information os platform and distribution e g linux ubuntu 16 04 colaboratory tensorflow installation pip package or build from source pip package tensorflow library version if pip package or github sha if build from source issue with various version of tenrorflow 2 code provide code to help we reproduce your issue use one of the follow option option a reference colab notebook 1 reference tensorflow model colab demonstrate how to build your tf model 2 reference tensorflow lite model colab demonstrate how to convert your tf model to a tf lite model with quantization if use and run tflite inference if possible you can paste link or attach file by drag drop they below provide link to your update version of the above two colab notebook provide link to your tensorflow model and optionally tensorflow lite model option b paste your code here or provide a link to a custom end to end colab perform conversion and dynamic range quantization import tensorflow as tf converter tf lite tfliteconverter from save model save model dir converter optimization tf lite optimize default tflite quant model converter convert perform inference with the generate model import numpy as np import cv2 from matplotlib import pyplot as plt load in image resize and normalize fn content 114 jpeg img cv2 imread fn 1 img cv2 resize img 256 128 interpolation cv2 inter area input img astype np float32 127 5 1 input np expand dim input 0 load tflite model and set param mod path content model convert tflite interpreter tf lite interpreter model path mod path interpreter allocate tensor input index interpreter get input detail 0 index output index interpreter get output detail 0 index run inference interpreter set tensor input index input interpreter invoke borderi interpreter get tensor output index print type borderi border list borderi 0 print border 3 failure after conversion if the conversion be successful but the generate model be wrong then state what be wrong the issue here can be reproduce by use different version of tensorflow to convert and quantize the model check below for the version and the result version 2 3 result 0 3330792 0 27843374 0 40366906 0 250709 0 48135388 0 24179175 0 5636318 0 24119636 0 6400584 0 2480475 0 6005912 0 3724205 0 5628794 0 3754966 0 48778155 0 40332332 0 3985254 0 3810564 0 34678322 0 3575413 0 52223235 0 3023828 0 5825569 0 30930337 version 2 4 result 0 3341408 0 28175732 0 40583873 0 25392115 0 48184934 0 24735713 0 55686283 0 2499246 0 63230157 0 26033315 0 6035842 0 37444225 0 5606153 0 37672445 0 4867457 0 39785832 0 4086929 0 38542932 0 35176694 0 3601483 0 51643425 0 31068143 0 5710402 0 31282204 version 2 5 result 0 3330438 0 28234786 0 40442967 0 25366008 0 48140293 0 24701637 0 55590045 0 24874169 0 63329554 0 2600382 0 60310024 0 373787 0 56032354 0 37653822 0 48664606 0 39639705 0 40806895 0 38597408 0 3511423 0 35862684 0 5150639 0 30999073 0 57102746 0 31190303 version 2 7 result 0 3341408 0 28175732 0 40583873 0 25392115 0 48184934 0 24735713 0 55686283 0 2499246 0 63230157 0 26033315 0 6035842 0 37444225 0 5606153 0 37672445 0 4867457 0 39785832 0 4086929 0 38542932 0 35176694 0 3601483 0 51643425 0 31068143 0 5710402 0 31282204 version 2 8 2 result 0 33199123 0 28214237 0 40430748 0 2546782 0 48055232 0 24780622 0 555871 0 2505081 0 63259435 0 2612171 0 6028995 0 3754816 0 5595857 0 37812153 0 48534507 0 397737 0 40594587 0 38545215 0 3501406 0 35963872 0 5159489 0 31203791 0 57132494 0 31412107 important this happen only when we apply dynamic range quantization without quantization the output be the same for the different version of tensorflow and they be 0 33383682 0 28292328 0 40593028 0 2521686 0 4803361 0 24693146 0 5551163 0 24844697 0 63013196 0 26205748 0 60453784 0 37450537 0 55849314 0 37527987 0 4858575 0 3947457 0 4057536 0 38485324 0 3498182 0 35843956 0 51736057 0 30842167 0 5722974 0 31274813 so the question here be why the quantization algorithm be different at the different version of tensorflow and in the end which one be correct 4 optional rnn conversion support if convert tf rnn to tflite fuse rnn op please prefix rnn in the title 5 optional any other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
tensorflowtensorflow
tf linalg eigh provide inaccurate result conflict with tf linalg eig
Bug
click to expand issue type bug source binary tensorflow version 2 8 2 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour I be work on a project in which I need to compute the eigenvalue of each matrix element in a batch while implement I discover that tf linalg eigh be return different result from tf linalg eig to statistical significance weirdly both tf linalg eig and tf linalg eigval return the same value as do tf linalg eigh and tf linalg eigval matrix 1 0 1 2 have eigenvalue 2 1 with eigenvector 0 1 and 1 1 respectively both eig and eigval get this correct but both eigh and eigvalsh incorrectly return the value 0 38196601 2 61803399 standalone code to reproduce the issue see the following for the above buggy example relevant log output no response
tensorflowtensorflow
opencl delegate generate 0 and random value with tf stack
Bug
click to expand issue type bug source source tensorflow version 2 9 1 nightly version custom code no os platform and distribution android mobile device test on snapdragon 888 865 python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell our model with tf stack pack in the tflite version node generate wrong result with the opencl delegate our experiment show that the output of a pack node in a tflite model contain lot of zero and random value when we use opencl delegate this issue do not happen with other delegate like xnnpack this issue be very similar to the issue that we report before here standalone code to reproduce the issue shell we have implement a small tool to reproduce the mention issue with the tf stack node here be the link to the repository relevant log output no response
tensorflowtensorflow
error during fit on distribute dataset with multiple gpu valueerror when provide a distribute dataset you must specify the number of step to run
Bug
click to expand issue type bug source binary tensorflow version 2 9 custom code no os platform and distribution linux debian 4 19 194 3 mobile device no response python version 3 8 10 bazel version no response gcc compiler version no response cuda cudnn version 11 7 8 3 gpu model and memory tesla v100 pcie 32 gb x2 current behaviour shell when try to train a model on multiple gpu we get a valueerror at the start of fit method the error be cause by the dataset distribution do through mirroredstrategy and strategy experimental distribute dataset if the distribution be remove from the code no error occur but the dataset be not distribute standalone code to reproduce the issue shell import tensorflow as tf def parse function example float feature tf io fixedlenfeature tf float32 default value 0 0 feature description f1 float feature f2 float feature f3 float feature f4 float feature f5 float feature label tf io fixedlenfeature tf int64 default value 0 sample tf io parse example example feature description label sample label feature tf stack sample f1 sample f2 sample f3 sample f4 sample f5 axis 1 return feature label gpu tf config list logical device gpu strategy tf distribute mirroredstrategy gpu batch size per replica 256 batch size batch size per replica strategy num replicas in sync train filename training datum tfrec train dataset tf datum tfrecorddataset train filename batch batch size map parse function val filename val datum tfrec val dataset tf datum tfrecorddataset val filename batch batch size map parse function train dataset strategy experimental distribute dataset train dataset val dataset strategy experimental distribute dataset val dataset with strategy scope mdl tf keras sequential tf keras layers inputlayer input shape 5 tf keras layer dense 5 tf keras layer dense 1 activation sigmoid mdl compile tf keras optimizer adam loss tf keras loss binarycrossentropy h mdl fit train dataset validation datum val dataset verbose 0 epoch 50 batch size batch size relevant log output shell traceback most recent call last file main py line 76 in h mdl fit file usr local lib python3 8 dist package kera util traceback util py line 67 in error handler raise e with traceback filter tb from none file usr local lib python3 8 dist package kera engine datum adapter py line 755 in validate args raise valueerror when provide a distribute dataset you must valueerror when provide a distribute dataset you must specify the number of step to run
tensorflowtensorflow
opencl delegate generate 0 and inf value with reduce sum
Bug
click to expand issue type bug source source tensorflow version tf 2 8 tf 2 9 custom code no os platform and distribution android mobile device test on snapdragon 888 865 python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell my model with tf math reduce sum sum in the tflite version node generate wrong result with the opencl delegate my experiment show that the output of a sum node in a tflite model contain lot of zero and inf value when we use opencl delegate this issue do not happen with other delegate like xnnpack standalone code to reproduce the issue shell here be a very simple model with a reduce sum node in the structure x0 input shape 23 512 x1 tf math reduce sum x0 axis 1 model model x0 x1 name test model compile optimizer adam loss categorical crossentropy metric accuracy model save reduce sum h5 the tflite version of this model generate zero and inf value with the opencl delegate relevant log output no response
tensorflowtensorflow
numpy 1 23 0 cause unit test failure
Bug
click to expand issue type bug source source tensorflow version git head custom code no os platform and distribution cento 7 mobile device n a python version 3 8 10 bazel version 5 1 1 gcc compiler version 10 2 1 cuda cudnn version n a gpu model and memory n a current behaviour shell the late release of numpy ie 1 23 0 be now cause unit test failure see standalone code to reproduce the issue shell python m pip install numpy 1 23 0 bazel test test timeout 300 500 1 1 flaky test attempt 1 test output all cache test result no noremote accept cache config nonccl build tag filter no oss oss serial gpu tpu benchmark test v1only require gpu test tag filter no oss oss serial gpu tpu benchmark test v1only require gpu verbose failure build test only tensorflow python kernel tests control flow scan op test cpu tensorflow python kernel tests array op pad op test cpu tensorflow python kernel tests array op concat op test cpu tensorflow python kernel tests array op array op test cpu tensorflow python kernel tests array op pad op test gpu tensorflow python kernel tests array op concat op test gpu tensorflow python kernel tests array op split op test cpu tensorflow python kernel tests array op array op test gpu tensorflow python kernel tests array op split op test gpu tensorflow python kernel tests array op slice op test gpu tensorflow python kernel tests array op slice op test cpu tensorflow python kernel tests control flow scan op test gpu relevant log output shell sample faliure error test1d main cumprodt cumprodtest test1d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 251 in test1d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 219 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 210 in compare np out handle option np cumprod x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error test2d main cumprodtest cumprodt test2d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 258 in test2d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 219 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 210 in compare np out handle option np cumprod x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error test3d main cumprodt cumprodt test3d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 265 in test3d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 219 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 210 in compare np out handle option np cumprod x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error test6d main cumprodt cumprodt test6d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 272 in test6d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 219 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 210 in compare np out handle option np cumprod x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error testempty main cumprodt cumprodt testempty traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 226 in testempty self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 219 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 210 in compare np out handle option np cumprod x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error testnan main cumprodt cumprodtest testnan traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 244 in testnan self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 219 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 210 in compare np out handle option np cumprod x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error test1d main cumsumt cumsumt test1d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 118 in test1d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 86 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 77 in compare np out handle option np cumsum x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error test2d main cumsumt cumsumt test2d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 125 in test2d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 86 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 77 in compare np out handle option np cumsum x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error test3d main cumsumt cumsumt test3d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 132 in test3d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 86 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 77 in compare np out handle option np cumsum x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error test6d main cumsumt cumsumt test6d traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 139 in test6d self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 86 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 77 in compare np out handle option np cumsum x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error testempty main cumsumt cumsumt testempty traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 93 in testempty self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 86 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 77 in compare np out handle option np cumsum x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error testlarge main cumsumt cumsumt testlarge traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 2157 in decorate return func self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 146 in testlarge self compareall x 0 file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 86 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 77 in compare np out handle option np cumsum x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index error testnan main cumsumt cumsumt testnan traceback most recent call last file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python framework test util py line 1625 in decorate return f self args kwargs file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 111 in testnan self compareall x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 86 in compareall self compare x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 77 in compare np out handle option np cumsum x axis exclusive reverse file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 47 in handle option x numpy reverse x axis file root cache bazel bazel root db210f68f81d95ddcca9ae96b16ed72c execroot org tensorflow bazel out k8 opt bin tensorflow python kernel tests control flow scan op test cpu runfiles org tensorflow tensorflow python kernel tests control flow scan op test py line 37 in numpy reverse return x ix indexerror only integer slice ellipsis numpy newaxis none and integer or boolean array be valid index run 29 test in 4 799s fail error 13 skip 2
tensorflowtensorflow
save and load a dataset return a different element
Bug
click to expand issue type bug source binary tensorflow version 2 9 1 custom code no os platform and distribution macos monterey mobile device no response python version 3 7 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour I be try to save and subsequently load a dataset I be observe that after load the dataset the element of it be many time different from the save one in order to run the code I have two text file in the follow structure with content shell test neg file txt once again mr costner have drag out a movie pos file txt I go and see this movie last I be expect that the two loop should print an identical tensor standalone code to reproduce the issue python import tensorflow as tf from tensorflow import kera from tensorflow keras import layer import pickle def create int ds test ds max sequence length 10 max token 20000 text vectorization layer textvectorization max tokens max tokens output mode int output sequence length max sequence length text only train ds test ds map lambda x y x text vectorization adapt text only train ds int test ds test ds map lambda x y text vectorization x y num parallel call 4 return int test ds batch size 1 test ds keras util text dataset from directory test batch size batch size int test ds create int ds test ds take test ds int test ds take 1 print f type of dataset type take test ds print f spec take test ds element spec for e in take test ds print e save tf datum experimental save take test ds int test ds with open element spec wb as out also save the element spec to disk for future loading pickle dump take test ds element spec out load with open element spec rb as in es pickle load in new take test ds tf datum experimental load int test ds element spec es print type new take test ds for e in new take test ds print e shoud print the same as the previous for loop relevant log output type of dataset spec tensorspec shape none 10 dtype tf int64 name none tensorspec shape none dtype tf int32 name none
tensorflowtensorflow
tf image rgb to hsv fail on back prop when dtype be double
Bug
click to expand issue type bug source source tensorflow version tf 2 9 custom code yes os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell can not compute the gradient for tf image rgb to hsv with float64 input it be work fine with forward pass standalone code to reproduce the issue shell import tensorflow as tf image tf random uniform 5 5 3 dtype tf float64 x tf image rgb to hsv image pass with tf gradienttape persistent true as g g watch image x tf image rgb to hsv image grad g gradient x image invalidargumenterror relevant log output shell invalidargumenterror can not compute mul as input 1 zero base be expect to be a float tensor but be a double tensor op mul
tensorflowtensorflow
tf 2 9 1 gpu not detect
Bug
click to expand issue type bug source binary tensorflow version tf 2 9 1 custom code no os platform and distribution win10 21h2 x64 mobile device no response python version 3 9 13 bazel version no response gcc compiler version no response cuda cudnn version cudatoolkit 11 2 2 cudnn 8 1 0 77 gpu model and memory no response current behaviour shell I install tf 2 9 1 by conda install c conda forge cudatoolkit 11 2 cudnn 8 1 0 python3 m pip install tensorflow verify install python3 c import tensorflow as tf print tf config list physical device gpu tf do not detect gpu standalone code to reproduce the issue shell python3 c import tensorflow as tf print tf config list physical device gpu output empty relevant log output no response