tensorflow/tensorflow
CTC loss: "W tensorflow/core/util/ctc/ctc_loss_calculator.h:499] No valid path found"
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.0
- Python version: 3.6.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA 11.2
- GPU model and memory: running on CPU

**Describe the current behavior**
When using `tf.nn.ctc_loss` with a dense label tensor it works correctly, but when using it with a sparse label tensor, although the result is right, it raises a warning:

```
W tensorflow/core/util/ctc/ctc_loss_calculator.h:499] No valid path found.
```

I'm not sure if this is a real bug or something else.

**Describe the expected behavior**
It shouldn't raise the warning.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

labels = [[1, 2, 1, 0, 0],
          [1, 1, 0, 0, 0],
          [1, 1, 1, 1, 1]]
logits = [[[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
          [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
          [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]]
label_length = [3, 2, 5]
logit_length = [5, 5, 5]

label_tensor = tf.convert_to_tensor(labels, dtype=tf.int32)
label_tensor_sparse = tf.sparse.from_dense(label_tensor)
logit_tensor = tf.convert_to_tensor(logits, dtype=tf.float32)
label_length_tensor = tf.convert_to_tensor(label_length, dtype=tf.int32)
logit_length_tensor = tf.convert_to_tensor(logit_length, dtype=tf.int32)

loss_dense = tf.nn.ctc_loss(label_tensor, logit_tensor, label_length_tensor,
                            logit_length_tensor, logits_time_major=False)
print(loss_dense.numpy())

loss_sparse = tf.nn.ctc_loss(label_tensor_sparse, logit_tensor, None,
                             logit_length_tensor, logits_time_major=False, blank_index=0)
print(loss_sparse.numpy())
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
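For context on when CTC emits this warning: a valid CTC alignment must insert a blank between every pair of adjacent repeated labels, so the minimum number of logit frames a label sequence needs is its length plus its number of adjacent repeats. The helper below is a hypothetical sketch of that rule (my own function, not TensorFlow code), which shows one common situation that produces "No valid path found".

```python
def min_ctc_input_length(labels):
    """Smallest number of logit frames for which a CTC path to `labels` exists.

    CTC must emit a blank between adjacent repeated labels, so each repeated
    pair costs one extra frame on top of one frame per label.
    """
    repeats = sum(1 for prev, cur in zip(labels, labels[1:]) if prev == cur)
    return len(labels) + repeats

print(min_ctc_input_length([1, 2, 1]))        # 3
print(min_ctc_input_length([1, 1, 1, 1, 1]))  # 9
```

For example, a label sequence like the third row above, `[1, 1, 1, 1, 1]`, needs at least 9 frames, so 5 logit frames cannot cover it; whether that is what triggers the warning in this particular repro would need confirmation from the CTC implementation.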
tensorflow/tensorflow
RuntimeError: tensorflow/lite/kernels/conv.cc:349 input->dims->data[3] != filter->dims->data[3] (64 != 1). Node number 13 (CONV_2D) failed to prepare.
Bug
Hello, I am using a semantic segmentation model. The model is a subclassed custom model; it has been trained and saved successfully. I also converted it into a TFLite version, but when I try to run inference with the TFLite model, it shows the mentioned error in `allocate_tensors()`. I have also tried with tf-nightly (2.7) and TF 2.5, but it shows the same error while allocating tensors. Any help will be highly appreciated, thanks.

1. **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow version: 2.5

2. **Code**
Provide code to help us reproduce your issue using one of the following options:
Option A: Reference colab notebooks
1. Reference TensorFlow Model Colab notebook
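The error text means the CONV_2D kernel found 64 channels in the input tensor but only 1 channel in the filter, i.e. a channel mismatch introduced somewhere in training/saving/conversion. A hedged pure-Python sketch of the consistency check the TFLite kernel is effectively performing (names and shapes here are illustrative, not the actual `conv.cc` source):

```python
def check_conv2d_channels(input_shape, filter_shape):
    """Mimics the check behind 'input->dims->data[3] != filter->dims->data[3]'.

    input_shape:  NHWC, e.g. (1, 128, 128, 64)
    filter_shape: OHWI as stored for a TFLite CONV_2D, e.g. (32, 3, 3, 64)
    """
    if input_shape[3] != filter_shape[3]:
        raise RuntimeError(
            "conv.cc-style failure: input channels %d != filter channels %d"
            % (input_shape[3], filter_shape[3]))

check_conv2d_channels((1, 128, 128, 64), (32, 3, 3, 64))  # passes
```

A filter whose last dimension is 1 against a 64-channel input (as in the report) would trip this check, which usually points at a depthwise-vs-regular convolution mix-up or a wrongly inferred input shape during conversion.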
tensorflow/tensorflow
Hexagon delegate crashes on max pooling operation
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: Snapdragon 855 dev platform
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.6.0
- Python version: 3.8.0
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
After converting YOLOv4 from TensorFlow to TFLite, the model runs well on a mobile device on CPU. When running on the DSP using the Hexagon delegate, I get the following error:

```
INFO: Initialized TensorFlow Lite runtime.
INFO: TfLiteHexagonDelegate delegate: 535 nodes delegated out of 568 nodes with 2 partitions.
INFO: Replacing 535 node(s) with delegate (TfLiteHexagonDelegate) node, yielding 5 partitions.
Timestamp: Sun May 30 16:21:13 2021
Log: hexagon/ops/src/op_maxpool_d32.c:567: insufficient width padding
hexagon/src/execute.c:167: execute failed on node id=689 err=-1
hexagon/src/interface.c:1297: failed in execute_inner, error: fail
Failed to execute graph.
ERROR: Node number 568 (TfLiteHexagonDelegate) failed to invoke.
```

**Describe the expected behavior**
I expect it to run successfully.

**Standalone code to reproduce the issue**
See attached model.

**Other info / logs**
I've narrowed it down to a single max pool operation that is later concatenated. The operation in question takes a 1x13x13x512 input tensor and computes `tf.nn.max_pool(input_data, ksize=13, padding='SAME', strides=1)`. Other instances of this operator with ksize=9 and ksize=5 (all other parameters the same) work just fine. Another thing I noticed is that, depending on what else is going on in the model, the converter sometimes emits a TFLite model that works; however, this is intermittent and I was unable to reproduce it reliably. See the model attached here.
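The "insufficient width padding" message plausibly relates to how much zero padding SAME requires: a ksize=13 window over a width-13 input at stride 1 needs 6 pixels of padding on each side, noticeably more than the ksize=9 and ksize=5 cases that work. A small sketch of the standard SAME-padding arithmetic (my own helper, not delegate code), assuming the delegate caps the padding it can handle:

```python
import math

def same_padding(in_size, ksize, stride):
    """Total SAME padding for one spatial dim, split into (before, after)."""
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + ksize - in_size, 0)
    return total // 2, total - total // 2

# The failing pooling op pads 6 pixels per side; the working ones pad 4 and 2.
print(same_padding(13, 13, 1))  # (6, 6)
print(same_padding(13, 9, 1))   # (4, 4)
print(same_padding(13, 5, 1))   # (2, 2)
```

Whether 6 pixels per side actually exceeds the `op_maxpool_d32` limit would need confirmation against the Hexagon NN sources; the arithmetic above only shows why this op is the outlier among the three.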
tensorflow/tensorflow
TensorFlow 2 memory leak when using tf.function
Bug
I am posting this question here because no one was able to answer it for over a month. I hope someone here can provide a solution.
tensorflow/tensorflow
Attributes section in Keras Model documentation is not generated properly
Bug
The Model attributes section in the documentation has 2 issues:

1. It is duplicated: there are both "attributes"/"attributes" and "attributes_1"/"attributes_1" sections (you might have to look at the table of contents to spot the difference).
2. Some attributes are missing even though they are documented and have the `@property` decorator in the GitHub code. E.g. `metrics` (L635-L683) is missing, but `metrics_names` (L685-L723) is included.

Note that the documentation link is nightly and the code links are the current master. I guess there might be an issue with how the attributes section is generated.
tensorflow/tensorflow
Unable to generate train record, but for test record it works fine
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): I used the guide at …
- TensorFlow version (use command below): 2.6.0
- Python version: Python 3.8.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
When following the guide's command to change the format from XML to record, it doesn't work for train but works for test.

**Describe the expected behavior**
It should output something like: `Successfully created the TFRecord file: /home/jrhin/tensorflow/workspace/training_demo/annotations/train.record`

**Contributing**
- Do you want to contribute a PR? (yes/no):
- Briefly describe your candidate solution (if contributing):

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook.

```
python generate_tfrecord.py -x tensorflow/workspace/training_demo/images/train -l tensorflow/workspace/training_demo/annotations/label_map.pbtxt -o tensorflow/workspace/training_demo/annotations/train.record
```

**Other info / logs**

```
Traceback (most recent call last):
  File "generate_tfrecord.py", line 172, in <module>
    tf.app.run()
  File "/home/jrhin/anaconda3/lib/python3.8/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/jrhin/anaconda3/lib/python3.8/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/jrhin/anaconda3/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "generate_tfrecord.py", line 162, in main
    tf_example = create_tf_example(group, path)
  File "generate_tfrecord.py", line 136, in create_tf_example
    classes.append(class_text_to_int(row['class']))
  File "generate_tfrecord.py", line 105, in class_text_to_int
    return label_map_dict[row_label]
KeyError: 'w'
```
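The traceback bottoms out in `KeyError: 'w'` inside `class_text_to_int`, i.e. a class name read from the training XML annotations is missing from (or spelled differently than) the entries in `label_map.pbtxt`. A hypothetical defensive version of that lookup (the names mirror the script's, but this helper is my own sketch, not the tutorial code) makes the mismatch visible instead of raising a bare `KeyError`:

```python
def class_text_to_int(row_label, label_map_dict):
    """Map an annotation class name to its label-map id, failing loudly."""
    try:
        return label_map_dict[row_label]
    except KeyError:
        raise ValueError(
            "annotation class %r is not in the label map (%s); "
            "check the XML files under images/train for typos"
            % (row_label, sorted(label_map_dict))
        ) from None

print(class_text_to_int('cat', {'cat': 1, 'dog': 2}))  # 1
```

Given that the test split converts fine, a single mislabeled box (here apparently the class `'w'`) in one of the train XML files is the likely culprit.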
tensorflow/tensorflow
Several links to the GitHub code are broken
Bug
As the title says, some links from the documentation to the corresponding code on GitHub are broken; here is an example. I believe this is due to the removal of the Python module. Also, some other links have a line offset, such as `conv2d`; this is probably because some imports were removed.
tensorflow/tensorflow
Symmetric convolved with symmetric returns non-symmetric
Bug
**System information**
- OS Platform and Distribution: Arch Linux 5.13.12
- TensorFlow installed from: pip
- TensorFlow version: v2.6.0-rc2-32-g919f693420e 2.6.0
- Python version: 3.9
- CUDA/cuDNN: running on CPU for now

When convolving a symmetric kernel with itself, I expect the result to also be symmetric. This is what I get when using NumPy, SciPy, or JAX, but for some reason it is not the case when using TensorFlow. I don't know if I misunderstood something in the documentation or if it is like this on purpose, but it is surely not obvious. If this is not a bug: why does TensorFlow work like this, and how could I overcome this behaviour to get a 2D cross-correlation/convolution right?

**Standalone code to reproduce the issue**
See the comparison below; consider a symmetric Laplacian filter being convolved with itself:

```python
import numpy as np

def test_convolution_jax(filt):
    from jax import numpy as jnp, lax
    filt = jnp.asarray(filt)
    return lax.conv(filt[None, None], filt[None, None],
                    window_strides=(1, 1), padding='SAME')[0, 0]

def test_convolution_tf(filt):
    import tensorflow as tf
    filt = tf.Variable(filt)
    return tf.nn.convolution(filt[None, ..., None], filt[..., None, None],
                             strides=(1, 1), padding='SAME')[0, ..., 0]

filt = np.asarray([[0, 1, 0],
                   [1, 4, 1],
                   [0, 1, 0]], dtype=np.float32)

print('jax', test_convolution_jax(filt), sep='\n', end='\n' * 2)
print('tf', test_convolution_tf(filt), sep='\n')
```

Output:

```
jax
[[ 1  4  1]
 [ 4 18  4]
 [ 1  4  1]]

tf
tf.Tensor(
[[ 1  4  1]
 [ 8 18  8]
 [ 1  4  1]], shape=(3, 3), dtype=float32)
```
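As a reference point: a zero-padded 'same' cross-correlation of any centrosymmetric kernel with itself must be symmetric, because correlating f with itself at lag (di, dj) gives the same value as at lag (-di, -dj). The pure-Python check below is my own naive implementation (not the TensorFlow kernel) and only demonstrates what the symmetric baseline looks like; it does not explain which backend above is at fault.

```python
def corr2d_same(x, k):
    """Naive zero-padded 'same' 2D cross-correlation of x with kernel k."""
    n, m, kh, kw = len(x), len(x[0]), len(k), len(k[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for u in range(kh):
                for v in range(kw):
                    ii, jj = i + u - ph, j + v - pw
                    if 0 <= ii < n and 0 <= jj < m:
                        out[i][j] += x[ii][jj] * k[u][v]
    return out

filt = [[0.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 0.0]]
res = corr2d_same(filt, filt)
# the self-correlation of a centrosymmetric kernel is itself symmetric:
assert res == [list(row) for row in zip(*res)]       # transpose-symmetric
assert res == [row[::-1] for row in res[::-1]]        # flip-symmetric
```

Any backend whose 'same' result for this input breaks these symmetries is doing something other than plain zero-padded cross-correlation of the given operands.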
tensorflow/tensorflow
TF 2.6: loaded subclassed model fails to be retrained
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Dockerfile:

```dockerfile
FROM python:3.8
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "test.py"]
```

with `requirements.txt` containing numpy and tensorflow, and `docker build -t python-docker .`
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: not tested
- TensorFlow installed from (source or binary): Docker, on a Mac mini 2018
- TensorFlow version (use command below): TF 2.6
- Python version: 3.8
- Bazel version (if compiling from source): not tested, unlikely to be related to this issue
- GCC/Compiler version (if compiling from source): not tested, unlikely to be related to this issue
- CUDA/cuDNN version: no CUDA on this Mac mini
- GPU model and memory: N/A

**Describe the current behavior**
Typing `docker run -it --rm -v $PWD:/usr/src/app -w /usr/src/app python-docker python test.py` in my environment ends up with the following error. More specifically, `predict(x)` works perfectly; however, `fit(x, y)` is the cause of:

```
ValueError: No gradients provided for any variable: ['abs:0', 'non_distributional_model/dense_1/kernel:0', 'non_distributional_model/dense_1/bias:0', 'non_distributional_model/dense_2/kernel:0', 'non_distributional_model/dense_2/bias:0', 'non_distributional_model/output_layer/kernel:0', 'non_distributional_model/output_layer/bias:0'].
```

**Describe the expected behavior**
The loaded model should be retrainable.

**Standalone code to reproduce the issue**

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Dense

batch_size = 2
num_actions = 11
state_dim = 1

# custom loss function
def huber_loss(y_true, y_pred, max_grad=1.):
    a = tf.abs(y_true - y_pred)
    less_than_max = 0.5 * tf.square(a)
    greater_than_max = max_grad * (a - 0.5 * max_grad)
    return tf.where(a <= max_grad, x=less_than_max, y=greater_than_max)

@tf.keras.utils.register_keras_serializable()
class MeanHuberLoss(keras.losses.Loss):
    def __init__(self, name='mean_huber_loss', **kwargs):
        super(MeanHuberLoss, self).__init__(name=name, **kwargs)

    def call(self, y_true, y_pred):
        error = huber_loss(y_true, y_pred)
        # the reduce_mean is automatically done by default
        return error

@tf.keras.utils.register_keras_serializable()
class DirectMappingForAbs(keras.metrics.Metric):
    def __init__(self, name='direct_mapping_for_abs', **kwargs):
        super(DirectMappingForAbs, self).__init__(name=name, **kwargs)
        self.output_value = tf.Variable(initial_value=0., name='abs',
                                        shape=tf.TensorShape(None),
                                        validate_shape=False, dtype=tf.float32)

    def update_state(self, value, sample_weight=None):
        self.output_value.assign(value)

    def result(self):
        return self.output_value

    def reset_state(self):
        self.output_value.assign(0.)

@tf.keras.utils.register_keras_serializable()
class NonDistributionalModel(keras.Model):
    def __init__(self, name='non_distributional_model', num_outputs=None, trainable=True):
        super(NonDistributionalModel, self).__init__(name=name)
        self.loss_tracker = keras.metrics.Mean(name='loss')
        # returns a tensor with the same shape as the input tensor
        self.abs_metric = DirectMappingForAbs(name='abs')
        self.criterion = MeanHuberLoss()
        self.layer_1 = Dense(10, trainable=trainable, activation='relu', name='dense_1')
        self.layer_2 = Dense(10, trainable=trainable, activation='relu', name='dense_2')
        self.output_layer = Dense(num_outputs, trainable=trainable, activation=None,
                                  name='output_layer')

    def call(self, inputs):
        inputs = tf.cast(inputs, tf.float32)
        layer_1 = self.layer_1(inputs)
        layer_2 = self.layer_2(layer_1)
        output = self.output_layer(layer_2)
        return output

    @tf.function
    def train_step(self, data):
        state, target = data
        target = tf.stop_gradient(target)
        with tf.GradientTape() as tape:
            logits = self(state, training=True)  # forward pass
            loss = self.criterion(target, logits)
        trainable_vars = self.trainable_variables
        grads = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(grads, trainable_vars))
        self.loss_tracker.update_state(loss)
        self.abs_metric.update_state(
            tf.cast(tf.reduce_mean(tf.math.abs(target - logits), axis=1), tf.float32))
        return {'loss': self.loss_tracker.result(), 'abs': self.abs_metric.result()}

    @property
    def metrics(self):
        return [self.loss_tracker, self.abs_metric]

class History(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.loss = []
        self.abs = []

    def on_train_batch_end(self, batch, logs=None):
        self.loss.append(logs.get('loss'))
        self.abs.append(logs.get('abs'))

class DQNModel:
    def __init__(self, alpha, batch_size, num_outputs, trainable=True):
        self.alpha = alpha
        self.batch_size = batch_size
        self.num_outputs = num_outputs
        self.trainable = trainable
        self.model = self.build_model()

    def build_model(self):
        lr_schedule = keras.optimizers.schedules.CosineDecayRestarts(
            initial_learning_rate=self.alpha, first_decay_steps=1000)
        model = NonDistributionalModel(num_outputs=self.num_outputs,
                                       trainable=self.trainable)
        model.compile(optimizer=Adam(lr_schedule))
        return model

    def predict(self, state):
        return self.model.predict(state)

    def train(self, state, target):
        history = History()
        return self.model.fit(state, target, batch_size=self.batch_size,
                              epochs=1, verbose=0, callbacks=[history])

class Critic(object):
    def __init__(self, alpha, batch_size):
        self.alpha = alpha
        self.batch_size = batch_size
        self.eval_model = DQNModel(alpha=alpha, batch_size=batch_size,
                                   num_outputs=num_actions, trainable=True)

    def learn(self):
        x = np.random.random((self.batch_size, 1))
        y = np.random.random((self.batch_size, num_actions))
        history = self.eval_model.train(x, y)
        return tf.squeeze(history.history['loss'])

critic = Critic(alpha=0.1, batch_size=batch_size)
critic.learn()
critic.eval_model.model.save('./model_save')
loaded_model = keras.models.load_model('./model_save')
print('saved model weights {} type {}'.format(
    len(critic.eval_model.model.get_weights()), type(critic.eval_model.model)))
print('loaded model weights {} type {}'.format(
    len(loaded_model.get_weights()), type(loaded_model)))
critic.eval_model.model.summary()
loaded_model.summary()

x = np.random.random((batch_size, 1))
y = np.random.random((batch_size, num_actions))
# let's check
np.testing.assert_allclose(critic.eval_model.predict(x), loaded_model.predict(x))
print('Continue to train the loaded model\n')
history = History()
result = loaded_model.fit(x, y, batch_size=batch_size, epochs=1, verbose=0,
                          callbacks=[history])
```

**Other info / logs**

```
2021-08-27 12:28:38.619800: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-08-27 12:28:38.619860: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-08-27 12:28:40.667183: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-08-27 12:28:40.667238: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2021-08-27 12:28:40.667269: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (a3f95dfc3a77): /proc/driver/nvidia/version does not exist
2021-08-27 12:28:40.667437: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-27 12:28:40.722464: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
WARNING:tensorflow:Gradients do not exist for variables ['abs:0'] when minimizing the loss.
2021-08-27 12:28:41.133216: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
saved model weights 9 type NonDistributionalModel
loaded model weights 9 type NonDistributionalModel
Model: "non_distributional_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              multiple                  20
_________________________________________________________________
dense_2 (Dense)              multiple                  110
_________________________________________________________________
output_layer (Dense)         multiple                  121
=================================================================
Total params: 253
Trainable params: 251
Non-trainable params: 2
_________________________________________________________________
Model: "non_distributional_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              multiple                  20
_________________________________________________________________
dense_2 (Dense)              multiple                  110
_________________________________________________________________
output_layer (Dense)         multiple                  121
=================================================================
Total params: 253
Trainable params: 251
Non-trainable params: 2
_________________________________________________________________
Continue to train the loaded model

Traceback (most recent call last):
  File "test2.py", line 182, in <module>
    result = loaded_model.fit(x, y, batch_size=batch_size, epochs=1, verbose=0, callbacks=[history])
  File "/usr/local/lib/python3.8/site-packages/keras/engine/training.py", line 1184, in fit
    tmp_logs = self.train_function(iterator)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 885, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 933, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 759, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3066, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3463, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3298, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1007, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 668, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 994, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    /usr/local/lib/python3.8/site-packages/keras/engine/training.py:853 train_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.8/site-packages/keras/engine/training.py:842 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1286 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.8/site-packages/keras/engine/training.py:835 run_step  **
        outputs = model.train_step(data)
    /usr/local/lib/python3.8/site-packages/keras/engine/training.py:791 train_step
        self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
    /usr/local/lib/python3.8/site-packages/keras/optimizer_v2/optimizer_v2.py:522 minimize
        return self.apply_gradients(grads_and_vars, name=name)
    /usr/local/lib/python3.8/site-packages/keras/optimizer_v2/optimizer_v2.py:622 apply_gradients
        grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
    /usr/local/lib/python3.8/site-packages/keras/optimizer_v2/utils.py:72 filter_empty_gradients
        raise ValueError("No gradients provided for any variable: %s." %

    ValueError: No gradients provided for any variable: ['abs:0', 'non_distributional_model/dense_1/kernel:0', 'non_distributional_model/dense_1/bias:0', 'non_distributional_model/dense_2/kernel:0', 'non_distributional_model/dense_2/bias:0', 'non_distributional_model/output_layer/kernel:0', 'non_distributional_model/output_layer/bias:0'].
```

Thank you for your time.
tensorflow/tensorflow
A suspected bug in binary_crossentropy
Bug
Hi, I found that the calculation result of the binary crossentropy loss function is different from other deep learning libraries such as Theano. Here is an example code using TensorFlow 2.6.0:

```python
import numpy as np
import tensorflow as tf

y_true = np.array([0, 1, 0], dtype=np.float32)
y_pred = np.array([0.9999999, 0.9999999, 0.0000001], dtype=np.float32)
res = tf.keras.backend.binary_crossentropy(y_true, y_pred)
print('loss:', res.numpy())
```

The result is:

```
loss: [15.333239  0.  0.]
```

And this is the code using Theano 1.0.4:

```python
import numpy as np
from theano import tensor as T

y_true = np.array([0, 1, 0], dtype=np.float32)
y_pred = np.array([0.9999999, 0.9999999, 0.0000001], dtype=np.float32)
res = T.nnet.binary_crossentropy(y_pred, y_true)
print('loss:', res.eval())
```

The result is:

```
loss: [1.5942385e+01 1.1920930e-07 1.1920930e-07]
```

I then found the cause of the inconsistency: TensorFlow uses the epsilon to calculate the loss value, which I think is redundant, because the output has already been clipped using epsilon earlier. Here is the source code location (L5047-L5052). Besides, I found that categorical crossentropy doesn't use the epsilon to calculate the loss value, which makes me more suspicious of the usage of epsilon in binary crossentropy (L4906-L4908). Thanks.
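The reported numbers are consistent with this reading: with epsilon = 1e-7, clipping alone gives roughly -log(1e-7) ≈ 16.1 for a confident wrong prediction, while clipping and then adding epsilon again inside the log gives roughly -log(2e-7) ≈ 15.4, close to the 15.33 TensorFlow prints (float32 rounding plausibly accounts for the remaining gap). A pure-Python sketch of the two variants (my own reimplementation, assuming this double use of epsilon; it is not the Keras source):

```python
import math

EPS = 1e-7  # Keras backend epsilon

def bce_clip_only(y_true, y_pred):
    """Clip the prediction, then take plain logs (the Theano-like result)."""
    p = min(max(y_pred, EPS), 1.0 - EPS)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

def bce_clip_plus_eps(y_true, y_pred):
    """Clip, then also add epsilon inside each log (the suspected TF behaviour)."""
    p = min(max(y_pred, EPS), 1.0 - EPS)
    return -(y_true * math.log(p + EPS) + (1 - y_true) * math.log(1 - p + EPS))

print(bce_clip_only(0.0, 0.9999999))      # ~16.12, i.e. -log(1e-7)
print(bce_clip_plus_eps(0.0, 0.9999999))  # ~15.42, i.e. -log(2e-7)
```

The extra epsilon only matters near the clipping boundary, which is exactly where the two libraries diverge in the example above.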
tensorflow/tensorflow
TF 2.6 model save error for a customized model and loss function
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Dockerfile:

```dockerfile
FROM python:3.8
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "test.py"]
```

with `requirements.txt` containing numpy and tensorflow, and `docker build -t python-docker .`
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: not tested
- TensorFlow installed from (source or binary): Docker, on a Mac mini 2018
- TensorFlow version (use command below): TF 2.6
- Python version: 3.8
- Bazel version (if compiling from source): not tested, unlikely to be related to this issue
- GCC/Compiler version (if compiling from source): not tested, unlikely to be related to this issue
- CUDA/cuDNN version: no CUDA on this Mac mini
- GPU model and memory: N/A

**Describe the current behavior**
Typing `docker run -it --rm -v $PWD:/usr/src/app -w /usr/src/app python-docker python test.py` in my environment ends up with the following error, if `loss = self.compiled_loss(target, logits)` together with `model.compile(optimizer=Adam(lr_schedule), loss=mean_huber_loss)` is used:

```
KeyError: "Failed to add concrete function 'b'__inference_train_step_1086'' to object-based SavedModel as it captures tensor <tf.Tensor: shape=(), dtype=resource, value=...> which is unsupported or not reachable from root. One reason could be that a stateful object or a variable that the function depends on is not assigned to an attribute of the serialized trackable object (see SaveTest.test_captures_unreachable_variable)."
```

If instead I use `loss = self.criterion(target, logits)` and `model.compile(optimizer=Adam(lr_schedule))`, everything works perfectly.

**Describe the expected behavior**
The model should be properly saved to the assigned folder regardless of which of the two combinations above is used.

**Standalone code to reproduce the issue**

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.optimizers import Adam

batch_size = 2 ** 7
num_actions = 11
state_dim = 1

def huber_loss(y_true, y_pred, max_grad=1.):
    a = tf.abs(y_true - y_pred)
    less_than_max = 0.5 * tf.square(a)
    greater_than_max = max_grad * (a - 0.5 * max_grad)
    return tf.where(a <= max_grad, x=less_than_max, y=greater_than_max)

def mean_huber_loss(y_true, y_pred):
    return tf.reduce_mean(huber_loss(y_true, y_pred))

class NonDistributionalModel(keras.Model):
    def __init__(self, inputs, outputs):
        super(NonDistributionalModel, self).__init__(inputs=inputs, outputs=outputs)
        self.loss_tracker = keras.metrics.Mean(name='loss')
        # returns a tensor with the same shape as the input tensor
        self.abs_metric = keras.metrics.MeanTensor(name='abs')
        self.criterion = mean_huber_loss

    @tf.function
    def train_step(self, data):
        state, target = data
        with tf.GradientTape() as tape:
            logits = self(state, training=True)
            loss = self.compiled_loss(target, logits)  # fails to save
            # loss = self.criterion(target, logits)    # saves fine
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        self.loss_tracker.update_state(loss)
        self.abs_metric.update_state(tf.reduce_mean(tf.math.abs(target - logits), axis=1))
        return {'loss': self.loss_tracker.result(), 'abs': self.abs_metric.result()}

    @property
    def metrics(self):
        # We list our metric objects here so that reset_state() can be called
        # automatically at the start of each epoch or at the start of evaluate().
        # If you don't implement this property, you have to call reset_state()
        # yourself at the time of your choosing.
        return [self.loss_tracker, self.abs_metric]

inputs = keras.Input(shape=(state_dim,))
outputs = keras.layers.Dense(num_actions)(inputs)
model = NonDistributionalModel(inputs, outputs)
lr_schedule = tf.keras.optimizers.schedules.CosineDecayRestarts(
    initial_learning_rate=0.1, first_decay_steps=1000)
model.compile(optimizer=Adam(lr_schedule), loss=mean_huber_loss)  # fails to save
# model.compile(optimizer=Adam(lr_schedule))                      # saves fine

x = np.random.random((batch_size, 1))
y = np.random.random((batch_size, num_actions))
model.fit(x, y, batch_size=batch_size, epochs=1)
model.save('./model')
```

**Other info / logs**

```
2021-08-25 08:06:24.613536: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-08-25 08:06:24.613609: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
memory usage: 0.2776603698730469 GB
total processing time: 1.830595807s
2021-08-25 08:06:26.613800: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-08-25 08:06:26.613940: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2021-08-25 08:06:26.613981: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (b9da2514de51): /proc/driver/nvidia/version does not exist
2021-08-25 08:06:26.614187: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-25 08:06:26.669676: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
1/1 [==============================] - 1s 567ms/step - loss: 0.1817 - abs: 0.5033
2021-08-25 08:06:27.407536: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/saved_model/function_serialization.py", line 65, in serialize_concrete_function
    bound_inputs.append(node_ids[capture])
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/util/object_identity.py", line 139, in __getitem__
    return self._storage[self._wrap_key(key)]
KeyError: <_ObjectIdentityWrapper wrapping ...>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test1.py", line 238, in <module>
    model.save('./model')
  File "/usr/local/lib/python3.8/site-packages/keras/engine/training.py", line 2145, in save
    save.save_model(self, filepath, overwrite, include_optimizer, save_format, ...)
  File "/usr/local/lib/python3.8/site-packages/keras/saving/save.py", line 149, in save_model
    saved_model_save.save(model, filepath, overwrite, include_optimizer, ...)
  File "/usr/local/lib/python3.8/site-packages/keras/saving/saved_model/save.py", line 90, in save
    saved_nodes, node_paths = save_lib.save_and_return_nodes(...)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 1228, in save_and_return_nodes
    _build_meta_graph(obj, signatures, options, meta_graph_def))
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 1399, in _build_meta_graph
    return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 1362, in _build_meta_graph_impl
    object_graph_proto = _serialize_object_graph(...)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 936, in _serialize_object_graph
    serialized = function_serialization.serialize_concrete_function(...)
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/saved_model/function_serialization.py", line 67, in serialize_concrete_function
    raise KeyError(...)
KeyError: "Failed to add concrete function 'b'__inference_train_step_1086'' to object-based SavedModel as it captures tensor <tf.Tensor: shape=(), dtype=resource, value=...> which is unsupported or not reachable from root. One reason could be that a stateful object or a variable that the function depends on is not assigned to an attribute of the serialized trackable object (see SaveTest.test_captures_unreachable_variable)."
```

Any ideas? Thank you for your time.
tensorflow/tensorflow
Java API: IllegalStateException happened while running a model loaded from SavedModel, and the Graph instance cannot close itself
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes, I will attach it below
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 / Windows 10 1909
- TensorFlow installed from (source or binary): Java, via Maven:

```xml
<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>tensorflow</artifactId>
    <version>1.15.0</version>
</dependency>
<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>libtensorflow</artifactId>
    <version>1.15.0</version>
</dependency>
<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>libtensorflow_jni_gpu</artifactId>
    <version>1.15.0</version>
</dependency>
```

- TensorFlow version (use command below): 1.15.0

**Describe the current behavior**
I tried to load a SavedModel exported from Keras. It all worked well until I tried to run the session, when strange things occurred: the code just couldn't continue, and threw no exception. When I used the try-catch-finally style instead of the try-with-resources style, I finally got the error message below:

```
java.lang.IllegalStateException: Error while reading resource variable dense_2/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_2/kernel)
	[[{{node dense_2/MatMul/ReadVariableOp}}]]
```

What's more, even though I got the error message, the Graph instance couldn't close itself. When I paused the test code in IDEA, I found the code stopped at an Object.wait() call, which means the Graph's refcount kept the value 1 all the time; the code couldn't escape from the Graph.close() method.

To prove the correctness of the SavedModel, I tried to load the model in Python code like below:

```python
import tensorflow as tf
import numpy as np

export_path = './test'
input_data = np.random.random((1, 30))
with tf.Session(graph=tf.Graph()) as sess:
    loaded = tf.saved_model.loader.load(sess, ['serve'], export_path)
    graph = tf.get_default_graph()
    print(graph.get_operations())
    x = sess.graph.get_tensor_by_name('rp_input:0')
    y = sess.graph.get_tensor_by_name('dense_2/Sigmoid:0')
    scores = sess.run(y, feed_dict={x: input_data})
    print('predict: %d' % np.argmax(scores, 1))
```

It worked well and printed the predicted result. In that case, I think the problem may not lie in the model, maybe. I tried hard to find a solution or workaround on StackOverflow and in the issues here. I saw several problems similar to mine, but they all occurred in Python. The second issue seems the most alike, but the model export method is different.

**Standalone code to reproduce the issue**
Here is my model: model.zip. Here is the test code. Because it fails all the time, I omitted the code to close the resources.

```java
public void test_09_justTestAPI() {
    float[] a = new float[]{
        1.53672f, 2.047399f, 1.42194f, 1.494959f, 0.69123f, 0.39482f,
        0.236573f, 0.733827f, 0.531855f, 0.973978f, 1.704854f, 2.085134f,
        1.615931f, 1.723842f, 0.102458f, 0.017833f, 0.693043f, 1.263669f,
        0.217664f, 1.058611f, 1.300499f, 2.260938f, 1.156857f, 1.291565f,
        0.42401f, 0.069758f, 0.252202f, 0.808431f, 0.189161f, 0.490556f};
    long[] shape = new long[]{1, 30};
    try {
        SavedModelBundle savedModelBundle = SavedModelBundle.load("...", "serve");
        Graph graph = savedModelBundle.graph();
        Tensor<Float> data = Tensor.create(shape, FloatBuffer.wrap(a));
        Session session = new Session(graph);
        Session.Runner runner = session.runner().feed("rp_input", data).fetch("dense_2/Sigmoid");
        float[][] res = new float[1][1];
        Tensor out = (Tensor) runner.run().get(0);
        out.copyTo(res);
        BigDecimal pro = BigDecimal.valueOf(res[0][0]);
    } catch (Exception e) {
        throw e;
    }
}
```

**Other info / logs**
The model was produced by the WeBank federated learning program. In their code, the model is built using the Keras API:

```python
def load_model(nn_struct_json):
    return tf.keras.models.model_from_json(nn_struct_json, custom_objects={})
```

The JSON content is defined like:

```json
{
  "nn_define": {
    "class_name": "Sequential",
    "config": {
      "name": "sequential",
      "layers": [
        {"class_name": "RepeatVector", "config": {"name": "rp", "n": 1}},
        {"class_name": "LSTM", "config": {"name": "lstm", "units": 32}},
        {"class_name": "Dense", "config": {"name": "dense", "trainable": true, "dtype": "float32", "units": 64, "activation": "relu"}},
        {"class_name": "Dense", "config": {"name": "dense_2", "trainable": true, "dtype": "float32", "units": 1, "activation": "sigmoid"}}
      ]
    },
    "keras_version": "2.2.4",
    "tf_backend": "tensorflow"
  }
}
```

The model is saved by the code below:

```python
def export_model(self):
    with tempfile.TemporaryDirectory() as tmp_path:
        try:
            tf.keras.models.save_model(self._model, filepath=tmp_path, save_format='tf')
        except NotImplementedError:
            import warnings
            warnings.warn('Saving the model as SavedModel is still in experimental stages.')
            tf.keras.experimental.export_saved_model(self._model, saved_model_path=tmp_path)
        model_bytes = zip_dir_as_bytes(tmp_path)
    return model_bytes
```

You can check the code at this link. In case it helps, here is the log of my test code:

```
2021-08-24 15:11:00.368680: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: ...
2021-08-24 15:11:00.377471: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2021-08-24 15:11:00.382175: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2021-08-24 15:11:00.390552: I tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2021-08-24 15:11:00.409095: I tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: ...
2021-08-24 15:11:00.416363: I tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 47658 microseconds.
```

I am really stuck on this problem; I would appreciate it if someone could help me out. Many thanks.
tensorflowtensorflow
TF 2.6 breaks interface of model.compile: unable to pass tf.keras.optimizers.Adam
Bug
TF 2.6:

```python
from tensorflow.python.keras.optimizer_v1 import Optimizer
from tensorflow.python.keras.optimizer_v2 import optimizer_v2

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
ic(optimizer)
ic(isinstance(optimizer, (Optimizer, optimizer_v2.OptimizerV2)))
# -> False
```

The check is False, so we cannot pass the optimizer to model.compile now, because in tensorflow/python/keras/optimizers.py:

```python
@keras_export('keras.optimizers.get')
def get(identifier):
  """Retrieves a Keras Optimizer instance.

  Args:
    identifier: Optimizer identifier, one of
      - String: name of an optimizer
      - Dictionary: configuration dictionary
      - Keras Optimizer instance (it will be returned unchanged)
      - TensorFlow Optimizer instance (it will be wrapped as a Keras Optimizer)

  Returns:
    A Keras Optimizer instance.

  Raises:
    ValueError: If `identifier` cannot be interpreted.
  """
  if isinstance(identifier, (Optimizer, optimizer_v2.OptimizerV2)):
    return identifier
```
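The failure mode above — an object that works as an Adam optimizer yet fails an `isinstance` check against a differently-imported base class — can be reproduced without TensorFlow. In 2.6, Keras moved to the standalone `keras` package, so `tf.keras.optimizers.Adam` presumably no longer derives from the classes under `tensorflow.python.keras`. A minimal sketch of the mechanism, using hypothetical stand-in classes (the class names below are illustrative, not the real ones):

```python
# Two modules each define their own OptimizerV2 base class, as happened
# when Keras split out of the TensorFlow source tree in 2.6.
class OptimizerV2TfTree:       # stand-in for tensorflow.python.keras...OptimizerV2
    pass

class OptimizerV2KerasPkg:     # stand-in for keras...OptimizerV2
    pass

class Adam(OptimizerV2KerasPkg):  # tf.keras.optimizers.Adam now derives from keras
    pass

opt = Adam()
print(isinstance(opt, OptimizerV2KerasPkg))  # True: check against the new base passes
print(isinstance(opt, OptimizerV2TfTree))    # False: legacy check in optimizers.get() fails
```

The classes are structurally identical, but `isinstance` compares class identity, not shape — which is why the legacy import path in `optimizers.get()` rejects the new optimizer.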
tensorflowtensorflow
Document XLA core functionality
Bug
URL(s) with the issue: [...]

Description of issue (what needs changing): there are references throughout the XLA docs to various core types that aren't fully documented. These include `XlaBuilder`, `XlaOp`, `XlaComputation`, and `Parameter`; there may be others. It would be really helpful if these were documented.

Clear description: ideally there would be comprehensive API docs covering all public functionality (how and why to use each), but anything in this direction would be most appreciated.
Correct links: where they exist.
Parameters defined: no. Returns defined: no. Raises listed and defined: no. Usage example: some.
Request visuals, if applicable: I don't know; I don't think they're crucial, and I don't know if they'd help.
Submit a pull request? No.
tensorflowtensorflow
DeprecationWarning: the imp module is deprecated in favour of importlib
Bug
Python version: 3.8.10. TensorFlow version: 2.6.0.

Hi team, I keep getting this deprecation warning while using TensorFlow: `DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses`. I know it's just a warning, but do you plan to update this library?

Best
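Until the library drops its `imp` usage, the warning can be silenced with the standard `warnings` machinery. A minimal sketch (the message regex below matches the warning text quoted above; adjust it if your message differs):

```python
import warnings

IMP_MSG = ("the imp module is deprecated in favour of importlib; "
           "see the module's documentation for alternative uses")

# Demonstrate the filter: record every warning that would be shown,
# but install an "ignore" rule for the imp deprecation first.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # show everything by default...
    warnings.filterwarnings(         # ...except the imp deprecation
        "ignore",
        message="the imp module is deprecated in favour of importlib.*",
        category=DeprecationWarning,
    )
    warnings.warn(IMP_MSG, DeprecationWarning)  # what `import imp` triggers

print(len(caught))  # -> 0: the warning was filtered out
```

The same `warnings.filterwarnings(...)` call placed before `import tensorflow` suppresses the warning for a whole program run.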
tensorflowtensorflow
Resource exhausted: EfficientNetB7 OOM
Bug
1. System information
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Colab
- TensorFlow installation (pip package or built from source): 2.6.0
- TensorFlow library version (if pip package), or GitHub SHA (if built from source): all other libraries cloned from [...]

2. Error code (the full error is saved in error_code):

```python
epochs = 40
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
```

Error:

```
ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[64,192,300,300] and type half
      on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
      [[node sequential_2/model_2/efficientnetb7/block2a_expand_conv/Conv2D
        (defined at lib/python3.7/threading.py:926)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add
report_tensor_allocations_upon_oom to RunOptions for current allocation info.
This isn't available when running in eager mode.
      [[div_no_nan_1/ReadVariableOp_26]]
  (1) Resource exhausted: OOM when allocating tensor with shape[64,192,300,300] and type half
      on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
      [[node sequential_2/model_2/efficientnetb7/block2a_expand_conv/Conv2D
        (defined at lib/python3.7/threading.py:926)]]
0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_308507]
```

I use EfficientNetB7, but it fails to fit the model because resources are exhausted in Colab, so I would like to know how to fix this error.

3. After the error: first, I added operations such as batch normalization, because the EfficientNetB7 parameters are too large and cause OOM, but it didn't help. Second, I followed the code on [...], but the TensorFlow version is different, so I couldn't use it. Third, I've tried similar code on [...], but I don't know if this is the solution; the error remains the same.
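The failing allocation above is easy to size by hand: a tensor of shape [64, 192, 300, 300] in half precision holds 64·192·300·300 elements at 2 bytes each, roughly 2 GiB for that single intermediate activation — which is why reducing the batch size (or the 300×300 input resolution) is usually the first thing to try. A quick back-of-the-envelope check:

```python
# Size of the activation tensor named in the OOM message:
# shape [64, 192, 300, 300], dtype half (2 bytes per element).
batch, channels, h, w = 64, 192, 300, 300
bytes_per_elem = 2  # float16

elements = batch * channels * h * w
size_gib = elements * bytes_per_elem / 2**30
print(f"{size_gib:.2f} GiB")  # ~2.06 GiB for one intermediate activation

# Halving the batch size halves this single allocation:
half_batch_gib = (batch // 2) * channels * h * w * bytes_per_elem / 2**30
print(f"{half_batch_gib:.2f} GiB")  # ~1.03 GiB
```

And this is only one of many activations EfficientNetB7 keeps alive for backpropagation, so a Colab GPU runs out of memory well before the model's parameters alone would suggest.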
tensorflowtensorflow
Android audio classification example does not work on a Pixel with Android 10
Bug
URL(s) with the issue: [...]

Description of issue (what needs changing): perhaps indicate that although hardware may support Android 6, it may not be capable enough to run the example.

Clear description: the README.md file indicates that any device supporting Android 6 with audio support is sufficient. While the example works fine on my Pixel 4 XL (Android 11), it would not on my Pixel (Android 10). The screen controls are displayed, but it only displays "silence" as the classification and does not update the bars to the right as it does on my 4 XL. The slider controls are also extremely slow to respond, somewhere on the order of 3/4 of a second to a second. I think the hardware just isn't capable enough to run TensorFlow, even though it does support Android 10.
tensorflowtensorflow
TypeError: 'int' object is not callable
Bug
TF version: 2.6.0. Using `tf.random.set_seed(7)` produces `TypeError: 'int' object is not callable`.
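This error usually means the name being called is bound to an integer rather than a function — for example, something earlier in the session rebound the attribute (an accidental `tf.random.set_seed = 7` instead of a call). I can't confirm that is what happened here, but the mechanism is easy to reproduce without TensorFlow:

```python
def set_seed(seed):
    """Stand-in for a seeding function such as tf.random.set_seed."""
    return seed

set_seed(7)      # normal call: fine

set_seed = 7     # accidental assignment rebinds the name to an int

try:
    set_seed(7)  # now "calling" an integer
except TypeError as exc:
    message = str(exc)

print(message)   # -> 'int' object is not callable
```

If this is the cause, restarting the interpreter (or re-importing a clean `tensorflow`) makes the call work again.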
tensorflowtensorflow
google.protobuf.message.DecodeError: Error parsing message when using tf.function
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 21.04
- TensorFlow installed from (source or binary): conda (binary)
- TensorFlow version: TF 2.5 (v2.5.0-rc3-213-ga4dfb8d1a71, 2.5.0)
- Python version: Python 3.8
- CUDA/cuDNN version: CUDA 11.4, cudatoolkit 11.0.221, cuDNN 8.2.1.32
- GPU model and memory: NVIDIA Titan RTX, 24 GB

Describe the current behavior: I wrote a custom model involving the tf.gather_nd function. When the train_step function is not decorated with @tf.function, the model trains fine; but when I decorate train_step with @tf.function, I get the error `google.protobuf.message.DecodeError: Error parsing message`.

Describe the expected behavior: the model can be trained.

Contributing — do you want to contribute a PR? No.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np


class GatherModel(tf.keras.Model):
    def __init__(self, ind1, w1):
        super(GatherModel, self).__init__()
        self.ind1 = ind1
        self.w1 = tf.cast(w1, tf.float32)
        self.lambda1 = tf.Variable(initial_value=tf.constant(0.1),
                                   trainable=True, name='lambda1')

    def call(self, inputs, training=0):
        y = inputs
        for i in range(5):
            y = tf.transpose(y, [1, 2, 3, 0])
            y = tf.gather_nd(y, self.ind1)
            y = y * self.w1
            y = tf.reduce_sum(y, 0)
            y = tf.transpose(y, [3, 0, 1, 2])
            y = y * self.lambda1
        return y


@tf.function
def train_step(model, inputs, labels, loss, optimizer):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=1)
        losses = loss(labels, predictions)
    grads = tape.gradient(losses, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return losses


if __name__ == '__main__':
    ind1 = np.random.randint(0, 300, (367, 217, 721, 2))
    w1 = np.random.normal(size=(367, 217, 721, 1, 1))
    model = GatherModel(ind1, w1)
    inputs = tf.random.normal((2, 256, 256, 1))
    labels = tf.random.normal((2, 217, 721, 1))
    loss = tf.keras.losses.MeanSquaredError()
    optimizer = tf.keras.optimizers.Adam(0.001)
    for i in range(10):
        ll = train_step(model, inputs, labels, loss, optimizer)
        print(ll)
```

In Google Colab, it says that the program crashes because of exhausting the RAM.

Other info / logs: none.
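One plausible cause of the DecodeError (an assumption on my part, not confirmed in the report): `self.ind1` and `self.w1` are plain constants, so when `train_step` is traced they are baked into the graph as constant nodes, and a serialized GraphDef exceeding protobuf's 2 GB message limit fails to parse. Sizing the repro's constants shows the forward pass alone captures about 1.07 GiB, and additional copies made during gradient construction or graph optimization can plausibly push the serialized graph past the limit; the usual workaround is to store such large arrays in `tf.Variable`s, which are not embedded in the GraphDef:

```python
# Approximate serialized size of the constants captured by the traced function.
# np.random.randint returns int64 by default; w1 is cast to float32 in the model.
ind1_elems = 367 * 217 * 721 * 2
w1_elems = 367 * 217 * 721 * 1 * 1

ind1_bytes = ind1_elems * 8  # int64 indices
w1_bytes = w1_elems * 4      # float32 weights

total_gib = (ind1_bytes + w1_bytes) / 2**30
print(f"{total_gib:.2f} GiB of constants per captured copy")  # ~1.07 GiB
```

This would also be consistent with the Colab crash: each retrace or graph-optimization pass handling another gigabyte-scale copy of these constants exhausts the RAM.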
tensorflowtensorflow
How to record loss with tf.summary in tf.function graph mode
Bug
TensorFlow 2.5, Python 3.7.

```python
import tensorflow as tf
import numpy as np
import datetime


class Dense(tf.Module):
    def __init__(self, input_dim, output_size, name=None):
        super(Dense, self).__init__(name=name)
        self.w = tf.Variable(tf.random.normal([input_dim, output_size]), name='w')
        self.b = tf.Variable(tf.zeros([output_size]), name='b')

    def __call__(self, x):
        y = tf.matmul(x, self.w) + self.b
        return tf.nn.relu(y)


model = Dense(2, 4)


class Test(object):
    def __init__(self):
        self.output = model([[7.0, 3.0]])
        self.optimizer = tf.compat.v1.train.AdamOptimizer(0.4)
        self.step = 0
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        logdir = 'logs/test8/%s' % stamp
        self.summary_writer = tf.summary.create_file_writer(logdir)
        tf.summary.trace_on(graph=True, profiler=False)
        for i in range(10):
            self.run()

    def compare(self, y_true, output):
        return tf.square(y_true - output)

    def loss_fn(self):
        y_true = tf.ones([1, 4])
        self.output = model([[7.0, 3.0]])
        comp = tf.py_function(self.compare, [y_true, self.output], tf.float32)
        loss = tf.reduce_mean(comp)
        # error:
        # output2 = model([[7.0, 3.0]])
        # loss = tf.reduce_mean(tf.square(y_true - output2))
        tf.print(loss)
        tf.print(self.step)
        with self.summary_writer.as_default():
            tf.summary.scalar('loss', loss, step=self.step)
        self.step += 1
        return loss

    # @tf.function(autograph=True, jit_compile=True)
    @tf.function
    def run(self):
        train_op = self.optimizer.minimize(self.loss_fn)


test = Test()
```

As can be seen from the output, the variable self.step is not updated inside loss_fn. The reason is simple: from what is said in the docs, a step value must be passed into each op via the `step` argument ("TensorBoard requires a step value to render the data as a time series; explicit passing is necessary because the global step from TF 1.x has been removed, so each op must know the desired step"). However, according to [...], when eager execution is enabled, `loss` should be a Python function that takes no arguments and computes the value to be minimized. These two requirements seem to contradict each other when trying to use tf.summary inside loss_fn. Any solution? I just want to record the loss with tf.summary.
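The behavior described (`self.step` never advancing) is the standard tf.function tracing semantics: Python-side statements like `self.step += 1` run only while the function body is being traced, after which the compiled graph is replayed without re-executing Python. A toy model of that mechanism, with no TensorFlow required (the `MiniFunction` class below is an illustrative stand-in, not the real implementation); the usual fix is to keep the counter in a `tf.Variable` and call `assign_add` on it, because variable ops do become part of the traced graph:

```python
class MiniFunction:
    """Toy model of tf.function: runs the Python body once to record ops,
    then replays the recording without re-running Python side effects."""

    def __init__(self, fn):
        self.fn = fn
        self.traced = None

    def __call__(self):
        if self.traced is None:
            ops = []
            self.fn(ops.append)    # Python body executes only here (trace time)
            self.traced = ops
        return list(self.traced)   # later calls replay the recorded ops


state = {"step": 0}

def loss_fn(record):
    state["step"] += 1             # Python-side increment: trace time only
    record(("scalar", "loss", state["step"]))

run = MiniFunction(loss_fn)
for _ in range(3):
    run()

print(state["step"])  # -> 1: incremented once at trace time, not three times
```

Under these semantics, `tf.summary.scalar(..., step=self.step)` bakes the trace-time integer into the graph forever, which is exactly why every summary lands on the same step.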
tensorflowtensorflow
Cannot convert model to TensorFlow.js from TF Hub by URL
Bug
I was using tensorflowjs_converter to convert a TFLite model to a TensorFlow.js model. The model can be downloaded from the repository with a web browser, but conversion fails when using tensorflowjs_converter. I just redid `pip install tensorflowjs` to make sure I am on the latest version.

```
tensorflowjs_converter --input_format=tf_hub [...] web_model
```

It outputs an HTTP 404, but my connection is fine. When I run the example code, it works fine and downloads the model:

```
tensorflowjs_converter --input_format=tf_hub [...] web_model
```

If you can tell me the reason, or can successfully convert the TFLite model from the above URL, I will appreciate it a lot. Thank you so much.
tensorflowtensorflow
Time series documentation doesn't display on Chrome
Bug
URL(s) with the issue: the weather dataset.

Description of issue (what needs changing): in Chrome, the weather dataset table only shows a portion of the table. Also, I have been struggling to learn from this documentation; it would be much easier to learn if there were at least one complete script (preferably one per section) in a single code block. For example, it would be much easier to understand if I could see a code block that has everything needed to train and test a single-step and a multi-step model. It would also help if all the variables' names were consistent with the diagrams and their purposes were described in the text — e.g. `multi_step_dense` / `CONV_WIDTH` are not described, and their purpose can only be guessed at from the diagram.

Request visuals, if applicable: —
tensorflowtensorflow
Link leads to 404 on food101 dataset page
Bug
What's wrong: the homepage link for Food-101 on the food101 page of the catalog section of TensorFlow Datasets does not work.

Link of the page: [...]

Description: the link given for the homepage of Food-101 doesn't work [image]; it leads to a 404 "page not found" [image].

Correct links: link to the correct paper.

Submit a pull request: are you planning to also submit a pull request to fix the issue? (See the docs contributor guide, docs API guide, and the docs style guide.)
tensorflowtensorflow
Can't create multiple instances of tf.keras.Model subclasses
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 20.04
- Mobile device: no
- TensorFlow installed from (source or binary): pip
- TensorFlow version: v2.6.0-rc2-32-g919f693420e, 2.6.0
- Python version: 3.9.6
- Bazel / GCC / CUDA/cuDNN version: N/A
- GPU model and memory: GTX 2080 Ti

Describe the current behavior. Here is an example triggering the issue:

```python
import tensorflow as tf


class MyModel(tf.keras.Model):
    def __init__(self, input_shape):
        super(MyModel, self).__init__()
        self.my_model_input_shape = input_shape
        self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        input_layer = tf.keras.layers.Input(self.my_model_input_shape)
        output_layer = self.call(input_layer)
        super(MyModel, self).__init__(inputs=input_layer, outputs=output_layer)

    def call(self, inputs, training=None):
        x = self.dense1(inputs)
        return self.dense2(x) + x


model_1 = MyModel(10)
model_1.summary()
model_2 = MyModel(20)
model_2.summary()
```

It fails at the creation of model_2 with:

```
File "lib/python3.9/site-packages/tensorflow/python/training/tracking/base.py", line 530, in _method_wrapper
    result = method(self, *args, **kwargs)
TypeError: __init__() missing 2 required positional arguments: 'inputs' and 'outputs'
```

Without the second (two-param) `super().__init__(...)` call at the end of MyModel.__init__, it runs OK, but the summary and model_1.layers output are different. The one with the second `__init__` call contains more and better information, such as the input and the last addition layer, and a "Connected to" column:

```
Model: "my_model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, 10)]         0
__________________________________________________________________________________________________
dense (Dense)                   (None, 5)            55          input_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 5)            30          dense[0][0]
__________________________________________________________________________________________________
tf.__operators__.add (TFOpLambd (None, 5)            0           dense_1[0][0]
                                                                 dense[0][0]
==================================================================================================
Total params: 85
Trainable params: 85
Non-trainable params: 0
```

I would much prefer the one with better information; I learned this from various examples, such as [...] (issuecomment-432846634).

Describe the expected behavior: creating multiple instances of the subclass should have no issue, and the variables and models should be independent from each other.

Contributing — do you want to contribute a PR? No.

Standalone code to reproduce the issue: see above.

Other info / logs: none.
tensorflowtensorflow
Normalization and quantization of input issue
Bug
I have observed differences between the documentation and the examples.

Reference — normalization and quantization parameters: [image: tflite input normalization]

But in the examples I have found differences: [image]

We are not doing normalization when input and output are int8 and uint8 — why is that so? [image]

As per the documentation, the output does not need normalization — why are we doing this in this example? And in the example we are not using any scale and zero point; we are not using the quantization params.
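For reference, the relationship the quantization parameters encode is real_value = scale × (quantized_value − zero_point), while normalization (subtracting the dataset mean and dividing by the std) is a separate, float-domain step; a model trained on already-normalized data can fold that step into its quantization parameters, which may be one reason examples look inconsistent with the docs. A small sketch of both steps — the mean/std/scale/zero_point values below are made up for illustration, not taken from any real model:

```python
# Illustrative parameters only; real values come from the model's
# input/output tensor metadata.
MEAN, STD = 127.5, 127.5           # float-domain normalization
SCALE, ZERO_POINT = 1.0 / 127, 0   # int8 quantization parameters

def normalize(pixel):
    """uint8 pixel in [0, 255] -> float in [-1, 1]."""
    return (pixel - MEAN) / STD

def quantize(real):
    """float -> int8 under the affine scheme q = real/scale + zero_point."""
    q = round(real / SCALE) + ZERO_POINT
    return max(-128, min(127, q))   # clamp to the int8 range

def dequantize(q):
    """int8 -> float: real = scale * (q - zero_point)."""
    return SCALE * (q - ZERO_POINT)

print(quantize(normalize(255)))   # -> 127
print(quantize(normalize(0)))     # -> -127
print(round(dequantize(127), 4))  # -> 1.0
```

When scale and zero point already map raw pixel values onto the int8 grid (as in the example being questioned), applying a separate normalization step would double-scale the input, so skipping it can be intentional rather than a bug.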
tensorflowtensorflow
UnauthenticatedError: ioctl failed when creating tensors after initializing JAX on a TPU
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): TPU v3-8 VM, so Ubuntu 20.04.2 LTS
- TensorFlow installed from (source or binary): binary, but I'm using what comes pre-installed on the VM
- TensorFlow version: unknown 2.6.0; when I run `pip freeze` the version is tf-nightly 2.6.0
- Python version: Python 3.8.5
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A (using a TPU)

Describe the current behavior: I'm trying to load a dataset using TensorFlow Datasets and run code with JAX on a TPU VM. I'd like to do some preprocessing operations on that dataset in TensorFlow; however, after initializing JAX, TensorFlow can't seem to create any tensors. Here's an MWE. Setup: first create a Cloud TPU VM, SSH into it, and install JAX:

```
sudo pip3 install "jax[tpu]==0.2.18" -f [...]
```

Now I run:

```python
import jax
import tensorflow as tf

print('jax process: {} / {}, local devices: {}'.format(
    jax.process_index(), jax.process_count(), jax.local_devices()), flush=True)
x = tf.constant(1.0, dtype=tf.float32)
```

and get:

```
UnauthenticatedError                      Traceback (most recent call last)
<ipython-input> in <module>
      2 import tensorflow as tf
      3 print('jax process: ...', flush=True)
----> 4 x = tf.constant(1.0, dtype=tf.float32)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
    264   return _constant_impl(value, dtype, shape, name, verify_shape=False,
    265                         allow_broadcast=True)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    275     return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    301   t = convert_to_eager_tensor(value, ctx, dtype)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
     97   ctx.ensure_initialized()
     98   return ops.EagerTensor(value, ctx.device_name, dtype)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py in ensure_initialized(self)
    554           context_handle = pywrap_tfe.TFE_NewContext(opts)

UnauthenticatedError: ioctl failed
```

My hunch about what's going on: I suspect, but am not sure, that the problem is that TensorFlow is trying to create tensors on the TPU. This isn't intended, because I'd like it to create tensors on the CPU instead. I've tried looking through this guide and also adding this line of code right after importing TensorFlow:

```python
tf.config.experimental.set_visible_devices([], 'GPU')
```

which I saw in a bunch of tutorials, but it doesn't help — which is perhaps not surprising, because there aren't any GPUs on a TPU VM. When I run `tf.config.list_logical_devices()` afterwards, I get:

```
[LogicalDevice(name='/device:CPU:0', device_type='CPU'),
 LogicalDevice(name='/device:TPU_SYSTEM:0', device_type='TPU_SYSTEM'),
 LogicalDevice(name='/device:TPU:0', device_type='TPU'),
 LogicalDevice(name='/device:TPU:1', device_type='TPU'),
 LogicalDevice(name='/device:TPU:2', device_type='TPU'),
 LogicalDevice(name='/device:TPU:3', device_type='TPU'),
 LogicalDevice(name='/device:TPU:4', device_type='TPU'),
 LogicalDevice(name='/device:TPU:5', device_type='TPU'),
 LogicalDevice(name='/device:TPU:6', device_type='TPU'),
 LogicalDevice(name='/device:TPU:7', device_type='TPU')]
```

More surprisingly, for some reason, even when I run

```python
tf.config.experimental.set_visible_devices([], 'GPU')
tf.config.experimental.set_visible_devices([], 'TPU_SYSTEM')
tf.config.experimental.set_visible_devices([], 'TPU')
```

I get the same result, so I'm not quite sure how to hide TPUs from TensorFlow — or, for that matter, how anyone else was able to do so.

Describe the expected behavior: I'd like the tensor to be initialized (a constant value of 1.0) on the CPU. The reason I'm creating a tensor here is to do some data preprocessing and augmentation, like what is used for the Flax ImageNet example, and that augmentation should live on the CPU.

Contributing — do you want to contribute a PR? No, I'm not sure what the fix is or what I'm doing wrong.

Standalone code to reproduce the issue: see above.

Other info / logs: inspecting the logs in /tmp/tpu_logs I find:

```
$ cat /tmp/tpu_logs/tpu_driver.t1v-n-e14a9395-w-0.rowanz.log.ERROR
Log file created at: 2021/08/12 23:45:57
Running on machine: t1v-n-e14a9395-w-0
Binary: Built on May 19 2021 03:34:24 (1621420438)
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0812 23:45:57.936586   18120 kernel_dma_mapper.cc:88] Error setting number simple with: FAILED_PRECONDITION: ioctl failed
[type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0812 23:45:57.936686   18119 kernel_dma_mapper.cc:88] Error setting number simple with: FAILED_PRECONDITION: ioctl failed
[type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0812 23:45:57.936775   18120 tensor_node.cc:436] 0000:00:05.0 PE0 C1 MC-1 TN0: failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed
[type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0812 23:45:57.936795   18119 tensor_node.cc:436] 0000:00:07.0 PE0 C3 MC-1 TN0: failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed
[type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
```
tensorflowtensorflow
OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
Bug
I'm trying to implement a triplet-loss-based NN, and for this I have implemented a custom image generator. When I start `model.fit` I get the following error:

```
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```

even though I'm not even iterating a single tensor in my code. My code is listed below.

The image generator:

```python
class TripletDataGenerator(tf.keras.utils.Sequence):
    # NOTE: on model.fit(), shuffle=False must be used

    def __init__(self, df: pd.DataFrame, batch_size=256, shuffle=True,
                 rescale: float = 1. / 255, target_img_size=(128, 128),
                 preprocess_function=None,
                 rand_preproc_single=None,  # ImageDataGenerator or dict
                 rand_preproc_batch: list = None):
        self.batch_counter = 0
        self.last_batch_index = 0
        if shuffle:
            self.triplets_df = df.sample(frac=1).reset_index(drop=True)  # randomize it
        else:
            self.triplets_df = df.reset_index(drop=True)
        self.preprocess_function = preprocess_function
        self.rescale = rescale
        self.target_img_size = target_img_size
        assert batch_size > 0, 'minimum batch size is 1, must be positive'
        self.batch_size = batch_size
        self.shuffle = shuffle
        # indexes of rows; every batch we draw 2 samples: 1 similar and 1 dissimilar
        self.indexes = np.arange(len(self.triplets_df))
        if self.shuffle:
            np.random.shuffle(self.indexes)
        self.rand_preproc_single = rand_preproc_single
        self.rand_preproc_batch = rand_preproc_batch

    def __len__(self):
        """Denotes the number of batches per epoch."""
        return len(self.triplets_df) // self.batch_size

    def __getitem__(self, index):
        """Generates one batch of data."""
        # generate indexes of the batch
        indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
        self.last_batch_index = index
        anchors, positives, negatives = [], [], []
        for idx in indexes:
            a, p, n = self.triplets_df.iloc[idx]
            anchors.append(self._load_image(a))
            positives.append(self._load_image(p))
            negatives.append(self._load_image(n))
        anchors = np.array(anchors, dtype='float32')
        positives = np.array(positives, dtype='float32')
        negatives = np.array(negatives, dtype='float32')
        labels = np.zeros(self.batch_size)
        if self.rand_preproc_batch is not None:
            for func in self.rand_preproc_batch:
                anchors = func(anchors)
                positives = func(positives)
                negatives = func(negatives)
        return [anchors, positives, negatives], labels

    def on_epoch_end(self):
        """Updates indexes after each epoch."""
        self.batch_counter += self.last_batch_index + 1  # indexes start from 0
        if self.batch_counter >= len(self):
            if self.shuffle:
                np.random.shuffle(self.indexes)
                self.triplets_df = self.triplets_df.sample(frac=1).reset_index(drop=True)
            self.batch_counter = 0
        else:
            self.indexes = np.append(self.indexes[self.last_batch_index + 1:],
                                     self.indexes[:self.last_batch_index + 1])

    def _load_image(self, path):
        """Loads an image using TensorFlow tools.
        :param path: absolute path (referring to the project's folder) to the image
        :return: an image array
        """
        if self.rand_preproc_single is not None:
            if isinstance(self.rand_preproc_single, ImageDataGenerator):
                img_arr = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
                img_arr = self.rand_preproc_single.random_transform(img_arr)
                img_arr = cv2.resize(img_arr, self.target_img_size)
            else:
                img_arr = my_utils.image_augmentation(path, self.rand_preproc_single)
        else:
            img_arr = cv2.imread(path)
            img_arr = cv2.cvtColor(img_arr, cv2.COLOR_BGR2RGB)
            img_arr = cv2.resize(img_arr, self.target_img_size)
        if self.preprocess_function is not None:
            img_arr = self.preprocess_function(img_arr)
        elif self.rescale is not None:
            img_arr = img_arr * self.rescale
        return img_arr
```

The model's architecture is as follows:

```python
def get_siamese_model(input_shape, conv2d_filters):
    # define the tensors for the two input images
    anchor_input = Input(input_shape, name='anchor_input')
    positive_input = Input(input_shape, name='positive_input')
    negative_input = Input(input_shape, name='negative_input')
    body = build_body(input_shape, conv2d_filters)
    # generate the encodings (feature vectors) for the two images
    encoded_a = body(anchor_input)
    encoded_p = body(positive_input)
    encoded_n = body(negative_input)
    ap_distance = tf.reduce_sum(tf.square(encoded_a - encoded_p), axis=1, keepdims=True)
    an_distance = tf.reduce_sum(tf.square(encoded_a - encoded_n), axis=1, keepdims=True)
    # connect the inputs with the outputs
    siamese_net = Model(inputs=[anchor_input, positive_input, negative_input],
                        outputs=[ap_distance, an_distance])
    return siamese_net
```

and the loss is:

```python
def get_loss(margin=1.0):
    def triplet_loss(y_true, y_pred):
        # The output of the network is a tuple containing the distances between
        # the anchor and the positive example, and the anchor and the negative example.
        ap_distance, an_distance = y_pred
        # Compute the triplet loss by subtracting both distances and
        # making sure we don't get a negative value.
        loss = ap_distance - an_distance
        loss = tf.maximum(loss + margin, 0.0)
        return loss
    return triplet_loss
```

Main:

```python
if __name__ == '__main__':
    epochs = CONFIGURATION['epochs']
    batch_size = CONFIGURATION['batch_size']
    img_size = CONFIGURATION['img_size']
    monitor = CONFIGURATION['monitor']
    patience = CONFIGURATION['patience']
    embedding_nodes = CONFIGURATION['embedding_nodes']
    learning_rate = CONFIGURATION['learning_rate']
    steps_per_epoch = CONFIGURATION['steps_per_epoch']
    validation_steps = CONFIGURATION['validation_steps']
    conv2d_filters = CONFIGURATION['conv2d_filters']
    margin = CONFIGURATION['margin']
    data_file = 'lfw_triplets.csv'
    augmentation_params = None
    # augmentation_params = dict(resize=img_size[:-1], random_gray_scale=0.2,
    #                            random_contrast_range=(0.65, 1.5),
    #                            hsv_noise_max_amp=(0.02, 0.2, 0),
    #                            max_brightness_delta=0.15, lr_flip=True, ud_flip=False,
    #                            rotate_range=30, random_shift=(0.1, 0.1),
    #                            random_zoom_range=0.3, rescale=1. / 255)

    notes = '\n'.join([f'batch size: {batch_size}',
                       f'learning rate: {learning_rate}',
                       f'embedding nodes: {embedding_nodes}',
                       f'data file path: {data_file}',
                       f'BatchNormalization used: {CONFIGURATION["add_batchnorm"]}',
                       f'conv2d filters count: {conv2d_filters}',
                       'loss: triplet loss',
                       'augmentation with hsv, brightness, contrast'])
    np.random.seed(42)
    tf.random.set_seed(42)
    t = time.time()
    train_gen, test_gen = DataframeGeneratorClass.create_train_test_generators(
        csv_path=data_file, pairs_gen=False, validation_split=0.3, shuffle=True,
        batch_size=CONFIGURATION['batch_size'], rescale=1. / 255,
        img_size=CONFIGURATION['img_size'][:-1], preprocess_func=None,
        rand_preproc_single=None, rand_preproc_batch=None)
    dt = time.time() - t
    print(f'Took {dt} seconds to create train generator with {len(train_gen)} batches '
          f'and test generator with {len(test_gen)} batches')
    siamese_model = get_siamese_model(input_shape=img_size, conv2d_filters=conv2d_filters)
    siamese_model.summary()
    loss_func = get_loss(margin=margin)
    siamese_model.compile(optimizer=Adam(learning_rate=0.0001), loss=loss_func)
    early_stop = tf.keras.callbacks.EarlyStopping(monitor=monitor, min_delta=1e-5,
                                                  patience=patience, verbose=1,
                                                  mode='auto', restore_best_weights=True)
    date_string = datetime.datetime.today().strftime('%d-%m-%y_%H-%M-%S')
    os.mkdir(f'checkpoints/{date_string}')
    chk_point = tf.keras.callbacks.ModelCheckpoint(f'checkpoints/{date_string}',
                                                   monitor=CONFIGURATION['monitor'],
                                                   verbose=1, save_best_only=True,
                                                   save_weights_only=True)
    callbacks = [early_stop, chk_point,
                 tf.keras.callbacks.TensorBoard(log_dir=f'logs/triplets_{date_string}',
                                                write_images=True)]
    history = siamese_model.fit(train_gen,
                                shuffle=False,  # it's mandatory when using my custom generator
                                epochs=epochs, steps_per_epoch=steps_per_epoch,
                                validation_steps=validation_steps, callbacks=callbacks,
                                validation_data=test_gen)
    notes += f'\n\nchk point monitor: {chk_point.best}'
    my_utils.save_results(notes=notes, history_obj=history,
                          directory_dst='results/models', model=siamese_model,
                          date_str=f'triplets_{date_string}')
```
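Independently of Keras, the batch arithmetic that a `tf.keras.utils.Sequence` subclass like the generator above must implement (`__len__` counting full batches, `__getitem__` slicing the index array) can be sketched in plain Python. The names here are illustrative, not taken from the report:

```python
# Minimal sketch of the batch-index arithmetic behind a Keras-style
# Sequence generator. BatchIndexer is a made-up illustrative name.

class BatchIndexer:
    def __init__(self, n_samples, batch_size):
        self.indexes = list(range(n_samples))
        self.batch_size = batch_size

    def __len__(self):
        # Number of full batches per epoch. Floor division drops the ragged
        # remainder, matching `len(df) // batch_size` in the generator above,
        # so trailing samples are never drawn.
        return len(self.indexes) // self.batch_size

    def __getitem__(self, batch_index):
        # Slice out one contiguous batch of indexes.
        start = batch_index * self.batch_size
        return self.indexes[start:start + self.batch_size]

indexer = BatchIndexer(n_samples=10, batch_size=3)
print(len(indexer))   # 3 full batches; sample 9 is never drawn
print(indexer[2])     # [6, 7, 8]
```

The same floor-division contract is why `steps_per_epoch` must not exceed `len(generator)` when fitting.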
tensorflow/tensorflow
Multiple versions of TensorFlow co-installed potentially cause ABI mismatch
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): both
- TensorFlow version (use command below): 2.6.0
- Python version: 3.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
When multiple TensorFlow instances are installed in different parts of your Python path, TensorFlow will attempt to load kernel libraries from all of them, potentially resulting in an ABI mismatch:

```
$ python3 -c 'import tensorflow'
No protocol specified
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/sclarkson/.local/lib/python3.8/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_main_dir)
  File "/home/sclarkson/.local/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 154, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: /usr/lib/python3/dist-packages/tensorflow/core/kernels/libtfkernel_sobol_op.so: undefined symbol: _ZNK10tensorflow8OpKernel11TraceStringB5cxx11ERKNS_15OpKernelContextEb
```

Observed above: the TensorFlow instance in `~/.local/lib/python3.8` attempts to load a shared library from `/usr/lib/python3/dist-packages`. Because of different compilation options, the library from the system-wide install expects symbols that do not exist in the pip install.

Describe the expected behavior
TensorFlow should only load kernels from its own install.

Contributing
- Do you want to contribute a PR? (yes/no): yes
- Briefly describe your candidate solution (if contributing): modify the kernel preloader to only use its own install.

Standalone code to reproduce the issue
Compile TensorFlow from source and install it system-wide. Install TensorFlow from pip with `pip install --user tensorflow`. Then run `python -c 'import tensorflow'`.

Other info
This is a continuation of #42978.
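As a quick diagnostic for the situation described above, the Python path can be scanned for duplicate copies of a package. This is an illustrative sketch (the helper name is made up), not TensorFlow's own library-loading logic:

```python
# Find every copy of a package visible on sys.path, to diagnose the
# "two installs" situation described in the report above.
import os
import sys

def find_package_copies(pkg_name):
    copies = []
    for entry in sys.path:
        candidate = os.path.join(entry, pkg_name, "__init__.py")
        if os.path.isfile(candidate):
            copies.append(candidate)
    return copies

# With both a --user and a system-wide install, this would report two
# paths, e.g. under ~/.local/lib/... and /usr/lib/python3/dist-packages/...
print(find_package_copies("tensorflow"))
```

If more than one path comes back, `import tensorflow` resolves to the first one, while a kernel preloader that walks all of `sys.path` may still pick up libraries from the other.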
tensorflow/tensorflow
AUC in the "Classification on imbalanced data" tutorial
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: "Check training history" #2

Description of issue (what needs changing):
The resampled model has the highest AUC, as shown below, but the chart plotting the ROC (#3) shows that the baseline AUC is the highest. I think the AUC calculated by the `model.evaluate` method may be incorrect:

> Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives.

AUC calculated by the `model.evaluate` method:
- Baseline: 0.9296237826347351
- Weighted: 0.9428448677062988
- Resampled: 0.9575912952423096

I have tried another method to check the correct value. The results of calculating the AUC using the `sklearn.metrics.roc_auc_score` method are as follows. As expected, it can be confirmed that the AUC of the baseline is the highest.

AUC calculated by the `sklearn.metrics.roc_auc_score` method:

```python
baseline_auc = sklearn.metrics.roc_auc_score(test_labels, test_predictions_baseline)
weighted_auc = sklearn.metrics.roc_auc_score(test_labels, test_predictions_weighted)
resampled_auc = sklearn.metrics.roc_auc_score(test_labels, test_predictions_resampled)
```

- Baseline: 0.9685415795364084
- Weighted: 0.9387766618590307
- Resampled: 0.9665411665226982

Template checklist:
- Clear description? For example, why should someone use this method? How is it useful?
- Correct links? Is the link to the source code correct?
- Parameters defined? Are all parameters defined and formatted correctly?
- Returns defined? Are return values defined?
- Raises listed and defined? Are the errors defined?
- Usage example? Is there a usage example? See the API guide on how to write testable usage examples.
- Request visuals, if applicable. Are there currently visuals? If not, will it clarify the content?
- Submit a pull request? After confirming this issue with other people, I will make a pull request.
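As a third cross-check, independent of both `model.evaluate` and scikit-learn, ROC AUC can be computed exactly from the rank statistic (Mann-Whitney U): the fraction of positive/negative pairs in which the positive scores higher, with ties counted as half. Keras's `AUC` metric is a threshold-bucketed approximation, so small gaps against an exact value are normal; gaps of the size reported above are not. A minimal sketch:

```python
# Exact ROC AUC via the pairwise rank formulation (Mann-Whitney U).
# Pure Python, so it is independent of both Keras and sklearn.

def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairs where the positive outranks the negative; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This is O(P*N) and only suitable as a sanity check on small test sets, but it removes any question of threshold bucketing from the comparison.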
tensorflow/tensorflow
Version tag mismatch: GitHub v2.5.0, Python 2.5.1
Bug
If you build from source from the GitHub tag v2.5.0, it produces a module whose version string is 2.5.1.
tensorflow/tensorflow
Mixed precision not working with stateful LSTM/GRU
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 (Colab and own machine)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: not tested
- TensorFlow installed from (source or binary): pip (from a conda env)
- TensorFlow version (use command below): 2.5.0
- Python version: 3.7.11
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: 11.2
- GPU model and memory: tested with Tesla T4 (Colab) and GeForce RTX 3090 (24 GB RAM)

Describe the current behavior
It seems it is not possible to use stateful RNNs (LSTM/GRU) with mixed precision. Try running the following on Colab (Tesla T4):

```python
import tensorflow as tf
tf.keras.mixed_precision.set_global_policy('mixed_float16')

data = tf.random.uniform((1, 64, 16), minval=0, maxval=1, dtype=tf.float16)
rnn = tf.keras.layers.GRU(1024, return_sequences=True, stateful=True)
rnn(data)
```

I get this error:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 rnn(data)

11 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
    666
    667     if initial_state is None and constants is None:
--> 668       return super(RNN, self).__call__(inputs, **kwargs)
    669
    670     # If any of `initial_state` or `constants` are specified and are Keras

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
   1028           with autocast_variable.enable_auto_cast_variables(
   1029               self._compute_dtype_object):
-> 1030             outputs = call_fn(inputs, *args, **kwargs)
   1031
   1032         if self._activity_regularizer:

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state)
    456     else:
--> 457       last_output, outputs, runtime, states = self._defun_gru_call(
    458           inputs, initial_state, training, mask, row_lengths)
    459
    460     if self.stateful:

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/recurrent_v2.py in _defun_gru_call(self, inputs, initial_state, training, mask, sequence_lengths)
    527       # Under eager context, check the device placement and prefer the
    528       if can_use_gpu:
--> 529         last_output, outputs, new_h, runtime = gpu_gru(**gpu_gru_kwargs)
    530       else:
    531         last_output, outputs, new_h, runtime = standard_gru(

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/recurrent_v2.py in gpu_gru(inputs, init_h, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths)
    690   outputs, h, _, _ = gen_cudnn_rnn_ops.CudnnRNN(
--> 691       input=inputs, input_h=init_h, input_c=0, params=params,
    692       is_training=True, rnn_mode='gru')
    693
    694   last_output = outputs[-1]

/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/tf_export.py in wrapper(*args, **kwargs)
    402           "Please pass these args as kwargs instead."
    403           .format(f=f.__name__, kwargs=f_argspec.args))
--> 404     return f(**kwargs)
    405
    406   return tf_decorator.make_decorator(f, wrapper, decorator_argspec=f_argspec)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_cudnn_rnn_ops.py in cudnn_rnn(input, input_h, input_c, params, rnn_mode, input_mode, direction, dropout, seed, seed2, is_training, name)
    101         input_mode=input_mode, direction=direction, dropout=dropout,
    102         seed=seed, seed2=seed2, is_training=is_training, name=name,
--> 103         ctx=_ctx)
    104     except _core._SymbolicException:
    105       pass  # Add nodes to the TensorFlow graph.

/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_cudnn_rnn_ops.py in cudnn_rnn_eager_fallback(input, input_h, input_c, params, rnn_mode, input_mode, direction, dropout, seed, seed2, is_training, name, ctx)
    171     is_training = True
    172   is_training = _execute.make_bool(is_training, "is_training")
--> 173   _attr_T, _inputs_T = _execute.args_to_matching_eager([input, input_h, input_c, params], ctx, [_dtypes.half, _dtypes.float32, _dtypes.float64, ])
    174   (input, input_h, input_c, params) = _inputs_T
    175   _inputs_flat = [input, input_h, input_c, params]

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in args_to_matching_eager(l, ctx, allowed_dtypes, default_dtype)
    278       dtype = tensor.dtype
    279   else:
--> 280     ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
    281
    282   # TODO(slebedev): consider removing this as it leaks a Keras concept.

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in <listcomp>(.0)
    278       dtype = tensor.dtype
    279   else:
--> 280     ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
    281
    282   # TODO(slebedev): consider removing this as it leaks a Keras concept.

/usr/local/lib/python3.7/dist-packages/tensorflow/python/profiler/trace.py in wrapped(*args, **kwargs)
    161       with Trace(trace_name, **trace_kwargs):
    162         return func(*args, **kwargs)
--> 163     return func(*args, **kwargs)
    164
    165   return wrapped

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
   1533       raise ValueError(
   1534           "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
-> 1535           (dtype.name, value.dtype.name, value))
   1536   return value
   1537

ValueError: Tensor conversion requested dtype float16 for Tensor with dtype float32
```

Perhaps this has something to do with the dtype of the initial state (the last ValueError seems to be printing that tensor, hence the zeros with dtype float32). However, even if I explicitly set the initial state with a float16 tensor, the issue persists. It makes no difference whether I run an LSTM or a GRU layer (same error).

I also ran this code on my own Ubuntu machine with a GeForce RTX 3090 (24 GB RAM) and got the following:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute MatMul as input #1(zero-based) was expected to be a float tensor but is a half tensor [Op:MatMul]
```

Describe the expected behavior
Without `stateful=True`, the code runs without a problem, returning a tensor with shape (1, 64, 1024) and dtype float16. Mixed precision is surely expected to work with stateful RNN layers? The documentation doesn't seem to indicate otherwise, unless I'm missing something.
tensorflow/tensorflow
InputSpec missing float64 support, and wrong error message
Bug
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0
- Python version: 3.8.5

Describe the current behavior
`InputSpec` seems to have an issue when the `dtype` argument is set to `'float64'`. In this case an error is produced that doesn't make much sense:

```
ValueError: Input 0 of layer sequential is incompatible with the layer: expected dtype=float64, found dtype=<dtype: 'float32'>
```

and, strangely, the opposite error (`expected dtype=float32, found dtype=<dtype: 'float64'>`) occurs when only the dtype is set to `'float32'`. The issue is illustrated in the following example with a simple custom layer.

Standalone code to reproduce the issue

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

tf.keras.backend.set_floatx('float64')

class TempLayer(tf.keras.layers.Activation):
    def __init__(self):
        super(keras.layers.Activation, self).__init__()
        self.input_spec = keras.layers.InputSpec(dtype='float64', shape=(1, 1))  # 'float32'/float also do not work

    def call(self, input_1, training=False):
        return input_1

model = keras.Sequential([TempLayer()])
x = tf.constant([[1.4392]])
print(np.array2string(model.predict(x, steps=1), separator=','))
```
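For reference, the kind of dtype validation involved here can be sketched in plain Python. This is illustrative only — `check_dtype` and its message format are assumptions mimicking the report's error text, not Keras internals. The point is that a consistent check compares the spec's dtype against the actual input dtype, so a float64 input against a float64 spec should simply pass:

```python
# Illustrative dtype check shaped like the Keras error message above.
def check_dtype(spec_dtype, input_dtype, layer_name="sequential"):
    if spec_dtype is not None and input_dtype != spec_dtype:
        raise ValueError(
            f"Input 0 of layer {layer_name} is incompatible with the layer: "
            f"expected dtype={spec_dtype}, found dtype={input_dtype}")

check_dtype("float64", "float64")       # consistent dtypes: no error
try:
    check_dtype("float64", "float32")   # genuine mismatch: error is correct
except ValueError as exc:
    print(exc)
```

The bug report's case is the first call: both the global floatx and the spec are float64, yet the layer still reports a float32/float64 mismatch.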
tensorflow/tensorflow
VAE model causes TensorFlow "Gradients do not exist for variables" error
Bug
I was constructing a variational auto-encoder (VAE) model in TensorFlow 2.15.0, but when I go to fit it I get the TensorFlow "Gradients do not exist for variables" warning for the variables in the decoder part of the VAE.

Here is the code for the encoder:

```python
encoder = Sequential([
    Conv2D(filters=32, kernel_size=(4, 4), strides=(2, 2), padding='same',
           activation='relu', input_shape=input_image_shape, name='encoder_conv2d_0'),
    BatchNormalization(name='encoder_batchnorm_0'),
    Conv2D(filters=64, kernel_size=(4, 4), strides=(2, 2), padding='same',
           activation='relu', name='encoder_conv2d_1'),
    BatchNormalization(name='encoder_batchnorm_1'),
    Flatten(name='encoder_flatten'),
    Dense(tfpl.MultivariateNormalTriL.params_size(latent_dim), name='encoder_dense'),
    tfpl.MultivariateNormalTriL(latent_dim, activity_regularizer=kl_regularizer,
                                dtype=tf.float64, name='encoder_outdist')
], name='encoder')
```

and for the decoder:

```python
decoder = Sequential([
    Dense(units=9 * 9 * 64, activation='relu', input_shape=(latent_dim,),
          name='decoder_dense'),
    Reshape((9, 9, 64), name='decoder_reshape'),
    UpSampling2D(size=(2, 2), name='decoder_upsampl_0'),  # 18 x 18 x 32
    Conv2D(filters=128, kernel_size=(3, 3), padding='same', activation='relu',
           name='decoder_conv2d_0'),
    UpSampling2D(size=(2, 2), name='decoder_upsampl_1'),  # 36 x 36 x 16
    Conv2D(filters=128, kernel_size=(3, 3), padding='same', activation='relu',
           name='decoder_conv2d_1'),
    Conv2D(filters=3, kernel_size=(3, 3), padding='same', name='decoder_conv2d_2'),
    Flatten(name='decoder_flatten'),
    tfpl.IndependentBernoulli(event_shape=input_image_shape, name='decoder_outdist')
], name='decoder')
```

Here are the model summaries for the encoder and decoder:

```
Model: "encoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
encoder_conv2d_0 (Conv2D)    (None, 18, 18, 32)        1568
encoder_batchnorm_0 (BatchNo (None, 18, 18, 32)        128
encoder_conv2d_1 (Conv2D)    (None, 9, 9, 64)          32832
encoder_batchnorm_1 (BatchNo (None, 9, 9, 64)          256
encoder_flatten (Flatten)    (None, 5184)              0
encoder_dense (Dense)        (None, 5)                 25925
encoder_outdist (Multivariat multiple                  0
=================================================================
Total params: 60,709
Trainable params: 60,517
Non-trainable params: 192
_________________________________________________________________
None

Model: "decoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
decoder_dense (Dense)        (None, 5184)              15552
decoder_reshape (Reshape)    (None, 9, 9, 64)          0
decoder_upsampl_0 (UpSamplin (None, 18, 18, 64)        0
decoder_conv2d_0 (Conv2D)    (None, 18, 18, 128)       73856
decoder_upsampl_1 (UpSamplin (None, 36, 36, 128)       0
decoder_conv2d_1 (Conv2D)    (None, 36, 36, 128)       147584
decoder_conv2d_2 (Conv2D)    (None, 36, 36, 3)         3459
decoder_flatten (Flatten)    (None, 3888)              0
decoder_outdist (Independent multiple                  0
=================================================================
Total params: 240,451
Trainable params: 240,451
Non-trainable params: 0
```

Here is the probabilistic loss function:

```python
@tf.function
def reconstruction_loss(batch_of_images, decoding_dist):
    """
    This function should compute and return the average expected reconstruction
    loss, as defined above. The function takes batch_of_images (Tensor containing
    a batch of input images to the encoder) and decoding_dist (output distribution
    of the decoder, after passing the image batch through the encoder and decoder)
    as arguments. The function should return the scalar average expected
    reconstruction loss.
    """
    batch_losses = decoding_dist.log_prob(batch_of_images)
    loss = -tf.math.reduce_sum(batch_losses) / batch_of_images.shape[0]
    return loss
```

Here is how the encoder and decoder are connected for model fitting:

```python
vae = Model(inputs=encoder.inputs, outputs=decoder(encoder.outputs[0]))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0005)
vae.compile(optimizer=optimizer, loss=reconstruction_loss)
```

and here is how the fitting is done:

```python
history = vae.fit(x=train_dataset, validation_data=test_dataset,
                  epochs=50, verbose=True)
```

Finally, here is the error message I'm getting from the fitting step:

```
Epoch 1/50
WARNING:tensorflow:Gradients do not exist for variables ['decoder_dense/kernel:0', 'decoder_dense/bias:0', 'decoder_conv2d_0/kernel:0', 'decoder_conv2d_0/bias:0', 'decoder_conv2d_1/kernel:0', 'decoder_conv2d_1/bias:0', 'decoder_conv2d_2/kernel:0', 'decoder_conv2d_2/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['decoder_dense/kernel:0', 'decoder_dense/bias:0', 'decoder_conv2d_0/kernel:0', 'decoder_conv2d_0/bias:0', 'decoder_conv2d_1/kernel:0', 'decoder_conv2d_1/bias:0', 'decoder_conv2d_2/kernel:0', 'decoder_conv2d_2/bias:0'] when minimizing the loss.
40/40 [==============================] - 4s 29ms/step - loss: 0.1177 - val_loss: 0.0263
Epoch 2/50
40/40 [==============================] - 0s 10ms/step - loss: 0.0406 - val_loss: 0.0381
```
tensorflow/tensorflow
AttributeError: '_UserObject' object has no attribute 'add_slot'
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v2.6.0-rc1-12-gb61c987109e 2.6.0-rc1
- Python version: 3.8.5
- Bazel version (if compiling from source): 3.7.2
- GCC/Compiler version (if compiling from source): 7.5.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior
`git checkout r2.6`, build the source (CPU only), install, generate an MNIST model, then try to convert it — there's an issue.

1. Install:

```
sudo apt install python3-dev python3-pip
pip install -U --user pip numpy wheel
pip install -U --user keras_preprocessing --no-deps
bazel build //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-2.6.0rc1-cp38-cp38-linux_x86_64.whl
pip install tensorflow_model_optimization
```

2. Generate the model: refer to the code below.

3. Convert:

```
bazel-bin/tensorflow/lite/python/tflite_convert --saved_model_dir=mnist_qat_notdocker_float_model --output_file=oo
```

Error message:

```
File "/home/tensorflow2/tensorflow/bazel-bin/tensorflow/lite/python/tflite_convert.runfiles/org_tensorflow/tensorflow/python/saved_model/load.py", line 448, in _load_nodes
    slot_variable = optimizer_object.add_slot(
AttributeError: '_UserObject' object has no attribute 'add_slot'
```

Describe the expected behavior

Contributing
- Do you want to contribute a PR? (yes/no):
- Briefly describe your candidate solution (if contributing):

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook.

```python
import tempfile
import os
import tensorflow as tf
from tensorflow import keras

# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input images so that each pixel value is between 0 and 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture.
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=1, validation_split=0.1)
model.save('mnist_qat_notdocker_float_model')
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Parameter description generated weirdly in tf.keras.metrics.AUC
Bug
URL(s) with the issue:

Description of issue (what needs changing): The description for `num_labels` is generated weirdly. [Screenshot: "Screen Shot 2021-08-02 at 9.30.05 AM"]
tensorflow/tensorflow
DNN implementation error
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template

System information
- Have I written custom code: yes
- System: Windows 10
- TensorFlow installed from (source or binary): described below
- TensorFlow version (use command below): 2.5.0
- Python version: 3.8
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: CUDA 11.2, cuDNN 8.1
- GPU model and memory: NVIDIA GeForce RTX 2080 Ti

Describe the current behavior
I am getting an error — `UnknownError: Failed to find the dnn implementation. [Op:CudnnRNN]` — after I installed tensorflow-text, and no matter what I do I cannot get rid of it. Here is exactly what happened.

First installation:
1. In Anaconda Navigator, under Environments, install keras 2.4.3 and keras-gpu 2.4.3.
2. Try running the program (code below); get the following error: `NotImplementedError: Cannot convert a symbolic Tensor (lstm/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.`
3. In Anaconda Navigator, under Environments, downgrade numpy 1.2x to numpy 1.19.2 / numpy-base 1.19.2.
4. Run the program, and it works.
5. In Anaconda Prompt I do `conda install pandas`, `conda install matplotlib`.
6. Run the program in Jupyter Notebook — still works.
7. In Anaconda Prompt I do `pip install tensorflow-hub` and get the following error when trying to import it in a temp file: `ImportError: cannot import name 'parameter_server_strategy_v2' from 'tensorflow.python.distribute' (C:\Users\shapi\anaconda3\envs\temp1\lib\site-packages\tensorflow\python\distribute\__init__.py)`.
8. `pip uninstall tensorflow-hub`, then `conda install tensorflow-hub`.
9. Run the program in Jupyter Notebook — still works.
10. In Anaconda Prompt I do `python -c "import tensorflow as tf; print(tf.__version__)"` → 2.5.0.
11. `pip install --user tensorflow-text==2.5.0`.
12. Try to run the program and get the error: `UnknownError: Failed to find the dnn implementation. [Op:CudnnRNN]`.
13. Shut down the computer and try to make a new environment:

In Navigator:
1. Create a conda environment with Python 3.8.
2. In Anaconda Navigator, under Environments, install keras 2.4.3 and keras-gpu 2.4.3.
3. In Anaconda Navigator, under Environments, downgrade numpy to 1.19.2 / numpy-base 1.19.2.
4. In Anaconda Prompt: `conda install tensorflow-hub`; `python -c "import tensorflow as tf; print(tf.__version__)"` → 2.5.0.
5. Try to run the program and get the error: `UnknownError: Failed to find the dnn implementation. [Op:CudnnRNN]`.

The problem is that at this point the program worked fine in the first installation. I have checked my path variables and they didn't change. I tried adding `allow_growth=True` and that didn't help. I am completely stuck; I have no idea what happened, or why even a regular installation will not work anymore.

Program:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import time
from tensorflow import keras

physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], enable=True)

def build_model1():
    macro_data = tf.keras.Input(shape=(None, 3))
    whole_seq_output, final_memory_state, final_carry_state = layers.LSTM(
        16, dropout=.95, input_shape=(None, 3),
        return_sequences=True, return_state=True)(macro_data)
    sdf_network = tf.keras.Model(inputs=macro_data, outputs=whole_seq_output,
                                 name='sdf_network')
    return sdf_network

temp = np.array([[[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]])
sdf_network = build_model1()
sdf_network(temp)
```

Full error:

```
UnknownError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 import numpy as np
      2 sdf_network = build_model1()
----> 3 sdf_network(temp)

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
   1028           with autocast_variable.enable_auto_cast_variables(
   1029               self._compute_dtype_object):
-> 1030             outputs = call_fn(inputs, *args, **kwargs)
   1031
   1032         if self._activity_regularizer:

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\functional.py in call(self, inputs, training, mask)
    418         a list of tensors if there are more than one outputs.
    419     """
--> 420     return self._run_internal_graph(
    421         inputs, training=training, mask=mask)
    422

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\functional.py in _run_internal_graph(self, inputs, training, mask)
    554
    555         args, kwargs = node.map_arguments(tensor_dict)
--> 556         outputs = node.layer(*args, **kwargs)
    557
    558         # Update tensor_dict.

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\layers\recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
    666
    667     if initial_state is None and constants is None:
--> 668       return super(RNN, self).__call__(inputs, **kwargs)
    669
    670     # If any of `initial_state` or `constants` are specified and are Keras

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
   1028           with autocast_variable.enable_auto_cast_variables(
   1029               self._compute_dtype_object):
-> 1030             outputs = call_fn(inputs, *args, **kwargs)
   1031
   1032         if self._activity_regularizer:

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\layers\recurrent_v2.py in call(self, inputs, mask, training, initial_state)
   1257       #   GPU implementation when GPU is available.
   1258       if can_use_gpu:
-> 1259         last_output, outputs, new_h, new_c, runtime = gpu_lstm(
   1260             **gpu_lstm_kwargs)
   1261       else:

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\layers\recurrent_v2.py in gpu_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths)
   1509     # Reverse axis 0 since the input is already converted to time major.
   1510     inputs = array_ops.reverse(inputs, axis=[0])
-> 1511   outputs, h, c, _, _ = gen_cudnn_rnn_ops.CudnnRNN(
   1512       input=inputs, input_h=init_h, input_c=init_c, params=params,
   1513       is_training=True, rnn_mode='lstm')

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\util\tf_export.py in wrapper(*args, **kwargs)
    402           "Please pass these args as kwargs instead."
    403           .format(f=f.__name__, kwargs=f_argspec.args))
--> 404     return f(**kwargs)
    405
    406   return tf_decorator.make_decorator(f, wrapper, decorator_argspec=f_argspec)

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\ops\gen_cudnn_rnn_ops.py in cudnn_rnn(input, input_h, input_c, params, rnn_mode, input_mode, direction, dropout, seed, seed2, is_training, name)
     96       pass
     97     try:
---> 98       return cudnn_rnn_eager_fallback(
     99           input, input_h, input_c, params, rnn_mode=rnn_mode,
    100           input_mode=input_mode, direction=direction, dropout=dropout,

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\ops\gen_cudnn_rnn_ops.py in cudnn_rnn_eager_fallback(input, input_h, input_c, params, rnn_mode, input_mode, direction, dropout, seed, seed2, is_training, name, ctx)
    176       direction=direction, dropout=dropout, seed=seed, seed2=seed2,
    177       is_training=is_training)
--> 178   _result = _execute.execute(b"CudnnRNN", 4, inputs=_inputs_flat,
    179                              attrs=_attrs, ctx=ctx, name=name)
    180   if _execute.must_record_gradient():

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     57   try:
     58     ctx.ensure_initialized()
---> 59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:

UnknownError: Failed to find the dnn implementation. [Op:CudnnRNN]
```
tensorflow/tensorflow
Inception v4: cannot find node in the .pb file
Bug
Hi all,

Converting the TensorFlow model InceptionV4 to ONNX format errors out in `tensorflow_core/python/framework/graph_util_impl.py`. The model is `inception_v4_2016_09_09_frozen.pb`. I can find the node name `graph/InceptionV4/Logits/Predictions` with the script:

```python
import tensorflow as tf
from tensorflow.python.platform import gfile
import os

model_dir = '/home/yon/workspace/models/tf1/inception_v4'
model_name = 'inception_v4_2016_09_09_frozen.pb'
# graph/InceptionV4/Logits/Predictions

def create_graph():
    with gfile.GFile(os.path.join(model_dir, model_name), 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
        # Import the graph from graph_def into the current default Graph.
        tf.import_graph_def(graph_def, name='graph')

create_graph()
tensor_name_list = [tensor.name
                    for tensor in tf.compat.v1.get_default_graph().as_graph_def().node]
result_file = os.path.join(model_dir, 'result.txt')
with open(result_file, 'w') as f:
    for tensor_name in tensor_name_list:
        f.write(tensor_name + '\n')
```

Error:

```
2021-07-30 16:41:47,830 - WARNING - From /home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

Traceback (most recent call last):
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tf2onnx/convert.py", line 605, in <module>
    main()
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tf2onnx/convert.py", line 213, in main
    graph_def, inputs, outputs = tf_loader.from_graphdef(args.graphdef, args.inputs, args.outputs)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 315, in from_graphdef
    frozen_graph = freeze_session(sess, input_names=input_names, output_names=output_names)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 262, in freeze_session
    graph_def = convert_variables_to_constants(sess, graph_def, output_node_names)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py", line 277, in convert_variables_to_constants
    inference_graph = extract_sub_graph(input_graph_def, output_node_names)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py", line 197, in extract_sub_graph
    _assert_nodes_are_present(name_to_node, dest_nodes)
  File "/home/yon/workspace/anaconda3/envs/nx_cross_compilers/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py", line 152, in _assert_nodes_are_present
    assert d in name_to_node, "%s is not in graph" % d
AssertionError: graph/InceptionV4/Logits/Predictions is not in graph
```

System information
- OS Platform and Distribution: Linux Ubuntu 18.04
- Python: 3.7.10
- tensorflow-estimator: 1.15.1 (binary)
- tensorflow-gpu: 1.15.5 (binary)
- onnx: 1.6.0
- onnxruntime: 1.3.0
- tf2onnx: 1.9.1
tensorflow/tensorflow
autograph set_verbosity doc example shows using an integer value for the env var, but only strings can be used
Bug
URL(s) with the issue: Description of issue (what needs changing): the documentation's usage description for setting verbosity through an environment variable uses an integer value. However, this is not possible, as environment variables must be strings. In the current state, the following code example is provided:

```py
import os
import tensorflow as tf
os.environ['AUTOGRAPH_VERBOSITY'] = 5  # Verbosity is now 5
tf.autograph.set_verbosity(0)  # Verbosity is now 0
os.environ['AUTOGRAPH_VERBOSITY'] = 1  # No effect, because set_verbosity was already called
```

However, it should actually be:

```py
import os
import tensorflow as tf
os.environ['AUTOGRAPH_VERBOSITY'] = '5'  # Verbosity is now 5
tf.autograph.set_verbosity(0)  # Verbosity is now 0
os.environ['AUTOGRAPH_VERBOSITY'] = '1'  # No effect, because set_verbosity was already called
```
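The root cause generalizes beyond autograph: `os.environ` only accepts string values, so any documented example that assigns an integer fails at runtime. A minimal stdlib-only sketch of the failure and the corrected form:

```python
import os

# os.environ values must be strings; assigning an int raises TypeError
failed = False
try:
    os.environ["AUTOGRAPH_VERBOSITY"] = 5  # the (incorrect) integer form
except TypeError:
    failed = True

os.environ["AUTOGRAPH_VERBOSITY"] = "5"  # the corrected string form
level = int(os.environ["AUTOGRAPH_VERBOSITY"])  # consumer parses it back to int
```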
tensorflow/tensorflow
reduce_sum output in TFLite is incorrect
Bug
1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library (version, if pip package, or github SHA, if built from source): 2.5.0

2. Code
Provide code to help us reproduce your issue using one of the following options:

```python
import tensorflow as tf
import numpy as np

keras = tf.keras
layers = keras.layers

def infer(tflite_model, img):
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    interpreter.set_tensor(input_details['index'], img)
    interpreter.invoke()
    output_details = interpreter.get_output_details()[0]
    output = interpreter.get_tensor(output_details['index'])
    return output

def convert_to_tflite(model):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.experimental_new_converter = True
    tflite_model = converter.convert()
    return tflite_model

# image to test
np.random.seed(1)
input_shape = (12, 14, 18)
img = np.random.random((1, *input_shape)).astype(np.float32) * 7.0 - 3.5

# create a model
i = layers.Input(shape=input_shape)
x = tf.quantization.fake_quant_with_min_max_args(i, min=0.0, max=3.984375)
x = tf.reduce_sum(x, axis=None)
x = tf.quantization.fake_quant_with_min_max_args(x, min=-4.0, max=3.96875)
model = keras.Model(inputs=i, outputs=x)

tf_output = model.predict(img)
tflite_model = convert_to_tflite(model)
tflite_output = infer(tflite_model, img)
print(f"tensorflow output: {tf_output}, tflite output: {tflite_output}")
```

3. Failure after conversion
If the conversion is successful, but the generated model is wrong, then state what is wrong: I get a huge difference between the TF model and the TFLite model: tensorflow output: 3.96875, tflite output: -2.78125. The TensorFlow model gives completely different results than the TFLite model. Furthermore, before the reduce_sum I quantized the tensor using only positive values, so it does not make sense that the output of the model is negative, which is what I get.
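To see why a negative result is surprising here, it helps to model what fake quantization does. The sketch below is a simplified pure-Python approximation (8-bit, nudging of the range ignored), not TFLite's actual kernel: every value is clamped to the quantization range and rounded to the nearest level, so a sum of values fake-quantized into [0, 3.984375] can never be negative.

```python
def fake_quant(x, qmin, qmax, num_bits=8):
    """Clamp x to [qmin, qmax] and round to the nearest of 2**num_bits levels."""
    levels = 2 ** num_bits - 1
    step = (qmax - qmin) / levels
    clamped = min(max(x, qmin), qmax)
    return qmin + round((clamped - qmin) / step) * step

# every input is clamped to >= 0, so the sum of fake-quantized inputs
# should never be negative
vals = [-1.2, 0.5, 3.9, 7.0]
q = [fake_quant(v, 0.0, 3.984375) for v in vals]
total = sum(q)
```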
tensorflow/tensorflow
Documentation: outdated installation requirements
Bug
Hi, on the page with the installation instructions, the listed requirements have not been updated: it lists Python 3.6-3.8 explicitly. According to the pip installation guide, Python 3.9 is a valid requirement. I suggest updating the page. Best regards, Thomas
tensorflow/tensorflow
keras.io xray TPU example fails on 2.6.0-rc1
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 colab mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device none tensorflow instal from source or binary binary tensorflow version use command below 2 6 0 rc1 python version 3 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version current colab july 27 2021 gpu model and memory tpu 8 core colab standard you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior example fail in model fit describe the expect behavior complete notebook finish in colab with tpu standard memory size contribute do you want to contribute a pr yes no no briefly describe your candidate solution if contribute standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook open this example in colab set runtime to tpu with standard size memory install tensorflow 2 6 0 rc1 at begin pip uninstall y tensorflow pip install q tensorflow 2 6 0 rc1 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach run entire notebook via f9 you will eventually receive this stack trace in model fit before finish any epoch notfounderror traceback most recent call last in 1 with strategy scope 2 
model build model 3 4 metric 5 tf keras metric binaryaccuracy 26 frame in build model 3 input keras input shape image size 0 image size 1 3 4 x preprocesse rescale 1 0 255 input 5 x layer conv2d 16 3 activation relu pad same x 6 x layer conv2d 16 3 activation relu pad same x 7 x layer maxpool2d x usr local lib python3 7 dist package kera engine base layer py in call self args kwargs 975 if in functional construction mode self input args kwargs input list 976 return self functional construction call input args kwargs 977 input list 978 979 maintain info about the layer call stack usr local lib python3 7 dist package kera engine base layer py in functional construction call self input args kwargs input list 1113 check input assumption set after layer build e g input shape 1114 output self keras tensor symbolic call 1115 input input mask args kwargs 1116 1117 if output be none usr local lib python3 7 dist package kera engine base layer py in keras tensor symbolic call self input input mask args kwargs 846 return tf nest map structure keras tensor kerastensor output signature 847 else 848 return self infer output signature input args kwargs input mask 849 850 def infer output signature self input args kwargs input mask usr local lib python3 7 dist package kera engine base layer py in infer output signature self input args kwargs input mask 884 overridden 885 todo kaftan do we maybe build here or have we already do it 886 self maybe build input 887 input self maybe cast input input 888 output call fn input args kwargs usr local lib python3 7 dist package kera engine base layer py in maybe build self input 2657 operation 2658 with tf util maybe init scope self 2659 self build input shape pylint disable not callable 2660 we must set also ensure that the layer be mark as build and the build 2661 shape be store since user define build function may not be call usr local lib python3 7 dist package keras layers convolutional py in build self input shape 202 constraint self 
kernel constraint 203 trainable true 204 dtype self dtype 205 if self use bias 206 self bias self add weight usr local lib python3 7 dist package kera engine base layer py in add weight self name shape dtype initializer regularizer trainable constraint use resource synchronization aggregation kwargs 661 synchronization synchronization 662 aggregation aggregation 663 cache device cache device 664 if regularizer be not none 665 todo fchollet in the future this should be handle at the usr local lib python3 7 dist package tensorflow python training tracking base py in add variable with custom getter self name shape dtype initializer getter overwrite kwarg for getter 816 dtype dtype 817 initializer initializer 818 kwarg for getter 819 820 if we set an initializer and the variable process it tracking will not usr local lib python3 7 dist package kera engine base layer util py in make variable name shape dtype initializer trainable cache device validate shape constraint use resource collection synchronization aggregation partitioner 127 synchronization synchronization 128 aggregation aggregation 129 shape variable shape if variable shape else none 130 131 usr local lib python3 7 dist package tensorflow python op variable py in call cls args kwargs 264 def call cls args kwargs 265 if cls be variablev1 266 return cls variable v1 call args kwargs 267 elif cls be variable 268 return cls variable v2 call args kwargs usr local lib python3 7 dist package tensorflow python op variable py in variable v1 call cls initial value trainable collection validate shape cache device name variable def dtype expect shape import scope constraint use resource synchronization aggregation shape 225 synchronization synchronization 226 aggregation aggregation 227 shape shape 228 229 def variable v2 call cls usr local lib python3 7 dist package tensorflow python op variable py in getter kwargs 65 66 def getter kwargs 67 return capture getter capture previous kwargs 68 69 return getter usr local lib 
python3 7 dist package tensorflow python distribute distribute lib py in creator with resource var next creator kwargs 2125 checkpoint restore uid none 2126 2127 create self create variable next creator kwargs 2128 2129 if checkpoint restore uid be not none usr local lib python3 7 dist package tensorflow python distribute tpu strategy py in create variable self next creator kwargs 1167 self container strategy real mirror creator 1168 distribute util tpu variable class mapping 1169 distribute util tpu variable policy mapping kwargs 1170 1171 def gather to implementation self value destination axis option usr local lib python3 7 dist package tensorflow python distribute distribute util py in create mirror variable strategy real mirror creator class mapping policy mapping kwargs 306 here 307 with tape stop record 308 value list real mirror creator kwargs 309 mirroredvariable be recreate during save model loading and its 310 component variable value list will have none initializer we usr local lib python3 7 dist package tensorflow python distribute tpu strategy py in real mirror creator kwargs 1146 with maybe init scope 1147 initial value initial value if callable 1148 initial value else initial value 1149 1150 if I 0 usr local lib python3 7 dist package kera initializers initializer v2 py in call self shape dtype kwargs 515 else 516 limit math sqrt 3 0 scale 517 return self random generator random uniform shape limit limit dtype 518 519 def get config self usr local lib python3 7 dist package kera initializers initializer v2 py in random uniform self shape minval maxval dtype 971 op tf random uniform 972 return op 973 shape shape minval minval maxval maxval dtype dtype seed self seed 974 975 def truncate normal self shape mean stddev dtype usr local lib python3 7 dist package tensorflow python util dispatch py in wrapper args kwargs 204 call target and fall back on dispatcher if there be a typeerror 205 try 206 return target args kwargs 207 except typeerror valueerror 
208 note convert to eager tensor currently raise a valueerror not a usr local lib python3 7 dist package tensorflow python op random op py in random uniform shape minval maxval dtype seed name 313 result math op multiply result maxval 314 else 315 result math op add result maxval minval minval name name 316 todo b 132092188 c shape inference inside functional op do not 317 cross funcgraph boundary since that information be only available in usr local lib python3 7 dist package tensorflow python op math ops py in binary op wrapper x y 1365 r binary op wrapper use different force same dtype value 1366 x y maybe promote tensor x y force same dtype false 1367 return func x y name name 1368 except typeerror valueerror as e 1369 even if dispatch the op fail the rhs may be a tensor aware usr local lib python3 7 dist package tensorflow python util dispatch py in wrapper args kwargs 204 call target and fall back on dispatcher if there be a typeerror 205 try 206 return target args kwargs 207 except typeerror valueerror 208 note convert to eager tensor currently raise a valueerror not a usr local lib python3 7 dist package tensorflow python op math ops py in subtract x y name 546 dispatch add dispatch support 547 def subtract x y name none 548 return gen math op sub x y name 549 550 usr local lib python3 7 dist package tensorflow python ops gen math op py in sub x y name 10642 return result 10643 except core notokstatusexception as e 10644 op raise from not ok status e name 10645 except core fallbackexception 10646 pass usr local lib python3 7 dist package tensorflow python framework op py in raise from not ok status e name 6939 message e message name name if name be not none else 6940 pylint disable protect access 6941 six raise from core status to exception e code message none 6942 pylint enable protect access 6943 usr local lib python3 7 dist package six py in raise from value from value notfounderror eagerconst be neither a type of a primitive operation nor a name of a 
function register in binary run on n d02110c1 w 0 make sure the operation or function be register in the binary run in this process op sub
tensorflow/tensorflow
The max version of bazel supported by TensorFlow should be increased to 4.1.0 or higher
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information
- OS Platform and Distribution: Windows 10 21H1
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.4

Describe the problem
I'm a member of the Microsoft vcpkg team. We recently upgraded the version of bazel used in vcpkg to 4.1.0; then TensorFlow builds fail with the following error:

"You have bazel 4.1.0 installed. Please downgrade your bazel installation to version 3.99.0 or lower to build TensorFlow! To downgrade: download the installer for the older version, then run the installer."

This issue is due to the fact that TensorFlow has set the maximum supported version of bazel to 3.99.0 on line 53 of the file. To fix this issue, the max version of bazel supported by TensorFlow should be increased to 4.1.0 or higher.

Any other info / logs
I have tested that TensorFlow could be installed successfully when using bazel 4.1.0.
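The gate that produces this message is a simple min/max comparison on the installed bazel version. A hypothetical pure-Python simplification of such a check (the default bounds below are illustrative, not the actual values in TensorFlow's configure script):

```python
def parse_version(v):
    """Parse "4.1.0" into the comparable tuple (4, 1, 0)."""
    return tuple(int(part) for part in v.split("."))

def check_bazel_version(installed, min_version="3.7.2", max_version="3.99.0"):
    """Return True if the installed bazel falls inside the supported range."""
    return parse_version(min_version) <= parse_version(installed) <= parse_version(max_version)
```

With the old cap, `check_bazel_version("4.1.0")` is False; raising `max_version` to "4.1.0" makes it pass, which is exactly the change the report requests.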
tensorflow/tensorflow
bug
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior describe the expect behavior contribute do you want to contribute a pr yes no briefly describe your candidate solution if contribute standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
tensorflow/tensorflow
Code not visible in dark theme in the TensorFlow documentation
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 window 10 wsl2 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary source you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior describe the expect behavior contribute do you want to contribute a pr yes no no briefly describe your candidate solution if contribute standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach this be the currently output show in tensorflow documentation image
tensorflow/tensorflow
update gast mirror url
Bug
Updates the mirrored URL for the gast archive. Fix for #50777.
tensorflow/tensorflow
Bug with optimizer_v2 (OptimizerV2) set_weights method
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): none
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux (Ubuntu/Debian)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.1 (2.4.x)
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: 11.0
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`: 2.4.1

Describe the current behavior
I am seeing an issue in setting Adam optimizer weights in the optimizer_v2 implementation. Looking at the code here (#L154-L160), it seems that to be able to set the weights, the current optimizer must have been initialized and set to the right size in terms of array lengths. If this assertion is correct, setting weights at on_train_begin, or prior to the first gradient calculation, is a bit too soon to set optimizer weights. In earlier versions this was not a problem, and optimizer set_weights used to work at on_train_begin. I suspect this is a bug in how slots and weights are internally structured.

Code to reproduce the issue is here. To run, try `python debug_optimizer_issue.py`, then try again `python debug_optimizer_issue.py --start_epoch 2`. The first run finishes, and the second one, where it's supposed to start from epoch 2, fails with an error such as:

ValueError: You called `set_weights(weights)` on optimizer adam with a weight list of length 9, but the optimizer was expecting 0 weights. Provided weights: [120, array([0.0000000e+00, 0.0000000e+00, 0. ...

I looked around in the adam (v2) code to see if there are any other APIs that can help with creating slots etc., but I didn't see any that do not require knowledge of internal implementation details. Then, as a crazy thought, I used array extension on the actual weights to bypass the bug in set_weights, like this: `model.optimizer.weights.extend(ref_model.optimizer.get_weights())`. This gets past the training regime, but the next model save errors into: AttributeError: 'numpy.int64' object has no attribute 'name'.

Worth noting, the same problem exists with SGD, which is met with the following related error: ValueError: You called `set_weights(weights)` on optimizer SGD with a weight list of length 1, but the optimizer was expecting 0 weights. Provided weights: [120]...

Please note this code is written for TF2 and will not work as-is in TF1. Also, in earlier versions of TF (2.1), support for loading h5 weights in a distribute scope was not there; in that case a callback was necessary to set the weights (for more, see here). Also worth noting that I appreciate that, instead of set_weights, I can just load the model like the following:

```python
if args.start_epoch:
    model = load_model(f'model_{args.start_epoch}.h5')
    deduced_epoch = int(ref_model.optimizer.iterations.numpy()) // (len(train_images) // args.batch_size)
    if deduced_epoch != args.start_epoch:
        raise ValueError(...)
    model.set_weights(ref_model.get_weights())
    model.optimizer.set_weights(ref_model.optimizer.get_weights())
```

and that works fine, but for the use case I have, I need to set the weights and optimizer weights explicitly.

Describe the expected behavior
I expect set_weights to work naturally.

Contributing - Do you want to contribute a PR? (yes/no): Briefly describe your candidate solution (if contributing):
Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.
Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
bug
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior describe the expect behavior contribute do you want to contribute a pr yes no briefly describe your candidate solution if contribute standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
tensorflow/tensorflow
TFLite conversion bug: NumDimensions(input) != 4 (3 != 4); Node number 0 (RESIZE_BILINEAR) failed to prepare
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.4.2
- Python version: 3.6.9

Describe the current behavior
I am trying to quantize the tf.image.resize function to TFLite, but the following error is raised:

RuntimeError: tensorflow/lite/kernels/resize_bilinear.cc:73 NumDimensions(input) != 4 (3 != 4). Node number 0 (RESIZE_BILINEAR) failed to prepare.

Standalone code to reproduce the issue
The code below leads to the error:

```python
import tensorflow as tf
print(tf.version.VERSION)

inputs = tf.keras.Input(shape=(720, 1280, 3), batch_size=1)
outputs = tf.image.resize(inputs, [360, 640])
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()

def representative_dataset_generator():
    for _ in range(20):
        image = tf.random.uniform(shape=(1, 720, 1280, 3), dtype=tf.float32)
        yield [image]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = representative_dataset_generator
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = [tf.int8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```

With TF 2.5.0 the error changes and becomes ValueError: The inference_input_type and inference_output_type must be tf.float32, as shown in this Google Colab. This is also a problem, because I want to do full integer quantization. Since this operation has its TFLite equivalent (tfl.resize_bilinear, TFL::ResizeBilinearOp), I expect to be able to quantize it without a problem. Looking forward to hearing from you.
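The RuntimeError quoted here is a rank check: TFLite's ResizeBilinear kernel expects a 4-D NHWC tensor, so a 3-D (H, W, C) input fails at Prepare time. A stdlib-only sketch of that check (hypothetical; it mirrors the error string, not the actual C++ kernel):

```python
def prepare_resize_bilinear(input_shape):
    """Reject non-4-D inputs the way TFLite's ResizeBilinear Prepare step does."""
    rank = len(input_shape)
    if rank != 4:
        raise RuntimeError(
            f"resize_bilinear.cc NumDimensions(input) != 4 ({rank} != 4). "
            "Node number 0 (RESIZE_BILINEAR) failed to prepare.")
    return input_shape


ok = prepare_resize_bilinear((1, 720, 1280, 3))   # NHWC batch: accepted
try:
    prepare_resize_bilinear((720, 1280, 3))       # batch dim dropped: rejected
    rank3_ok = True
except RuntimeError:
    rank3_ok = False
```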
tensorflow/tensorflow
TF2: saved_model.pb from exporter_main_v2.py is different than saved_model.pb from the official TF2 zoo
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.3.0
- Python version: 3.6.9
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 11.4
- GPU model and memory: GeForce RTX 3090

I am trying to convert a re-trained TF2 object detection SSD MobileNetV2 model to a proprietary framework. I have successfully re-trained the network and it runs properly; however, I am having trouble with converting the saved_model.pb to the other framework. The conversion script from the SDK I am working with performs optimizations on the saved_model.pb using meta_optimizer.cc, which returns an empty graph after running through my re-trained model. I used exporter_main_v2.py to export my re-trained checkpoint to the saved_model.pb which I am having trouble with.

The issue is not with my training or checkpoint, but with the export process from checkpoint to a saved_model.pb using exporter_main_v2.py. I know this because I downloaded the SSD MobileNetV2 model from the TF2 zoo to test with. I have no issues converting the official saved_model.pb file found in the official repo, but when I try to convert the official checkpoint found in the repo to a saved_model.pb using exporter_main_v2.py, I face the same issue trying to convert the newly produced saved_model.pb file to the proprietary framework. This means that something wrong is happening when executing the exporter_main_v2.py script.

Describe the expected behavior
The exported saved_model.pb file should not be different than the official saved_model.pb file found in the official repo. The following is what I get, showing 0 nodes and 0 edges (grappler empty graph): (image)

Standalone code to reproduce the issue
The model I downloaded is the one from the zoo. The command I used to export the official checkpoint to a saved_model.pb is:

```
python models/research/object_detection/exporter_main_v2.py --input_type image_tensor --pipeline_config_path pipeline.config --trained_checkpoint_dir checkpoint --output_directory exported_model
```
tensorflow/tensorflow
AttributeError: module 'tensorflow' has no attribute 'report_uninitialized_variables'
Bug
I'm using TensorFlow version 2 and get this issue when I run this code:

```python
uninitialized_variables = set(i.decode('ascii') for i in tf.report_uninitialized_variables())
init_op = tf.variables_initializer(
    [v for v in tf.global_variables() if v.name.split(':')[0] in uninitialized_variables])
```

Error: AttributeError: module 'tensorflow' has no attribute 'report_uninitialized_variables'
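In TF2 these TF1 graph-mode APIs live under `tf.compat.v1` (`tf.compat.v1.report_uninitialized_variables`, `tf.compat.v1.variables_initializer`, `tf.compat.v1.global_variables`). Note that report_uninitialized_variables returns base names without the `:0` output index, which is why the snippet splits each variable name on `:`. That filtering step can be sketched without TensorFlow (a hypothetical helper for illustration):

```python
def filter_uninitialized(variable_names, uninitialized):
    """Keep tensor names whose base name (before ':0') was reported uninitialized."""
    return [name for name in variable_names if name.split(":")[0] in uninitialized]
```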
tensorflow/tensorflow
ValueError: destinations can not be empty
Bug
I'm having this bug when I try to: get the features at the layer before the last layer, compute the loss against the expected features using cosine similarity, then propagate back to the input (I update the input, not the network's weights) with optimizer.minimize(loss, var_list=[input]). I can not find any information about this bug. Please give me advice.
tensorflow/tensorflow
tf.keras.preprocessing.timeseries_dataset_from_array function is broken in latest code
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- TensorFlow installed from (source or binary): latest source
- TensorFlow version (use command below): latest source
- Python version: 3.x.x
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: 11.4
- GPU model and memory:

Describe the current behavior
The error says that there is a circular keras reference. However, if I install last month's code, it works.

Describe the expected behavior
tensorflow/tensorflow
gast archive URL error
Bug
There's an error in the mirrored URL for the gast archive (#L441). That URL gives an error: "The specified key does not exist." The URL should be [...]. We've only whitelisted storage.googleapis.com in our firewall, so when the build tries to fall back to the official files.pythonhosted.org source, it fails.
tensorflow/tensorflow
InvalidArgumentError: 2 root error(s) found
Bug
System information: Colab with GPU, latest TFLite version (Colab link). Failure after conversion.

Logs:
INFO:tensorflow:Load image with size: 25002, num_label: 2, labels: cat, dog.
INFO:tensorflow:Retraining the models...
WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.
Model: "sequential_2"
Total params: 3,415,586
Trainable params: 2,562
Non-trainable params: 3,413,024

Traceback (most recent call last, 9 frames):
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_examples/lite/model_maker/core/task/image_classifier.py", line 320, in _create
    image_classifier.train(train_data, validation_data)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_examples/lite/model_maker/core/task/image_classifier.py", line 180, in train
    self.history = lib.train_model(self.model, hparams, train_data_and_size, validation_data_and_size)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py", line 251, in train_model
    validation_data=valid_data, validation_steps=validation_steps, callbacks=callbacks)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 1183, in fit
    tmp_logs = self.train_function(iterator)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 917, in _call
    return self._stateless_fn(*args, **kwds)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3023, in __call__
    return graph_function._call_flat(filtered_flat_args, captured_inputs=graph_function.captured_inputs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1960, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 596, in call
    ctx=ctx)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, inputs, attrs, num_outputs)

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Trying to decode BMP format using a wrong op. Use `decode_bmp` or `decode_image` instead. Op used: DecodePng
	 [[{{node cond_else_1/cond/DecodePng}}]]
	 [[IteratorGetNext]]
	 [[IteratorGetNext/_2]]
  (1) Invalid argument: Trying to decode BMP format using a wrong op. Use `decode_bmp` or `decode_image` instead. Op used: DecodePng
	 [[{{node cond_else_1/cond/DecodePng}}]]
	 [[IteratorGetNext]]
0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_33212]

Function call stack: train_function -> train_function
tensorflow/tensorflow
tfa.activations.snake: wrong function in docs
Bug
URL(s) with the issue:
Description of issue (what needs changing): the function computed in the source code is x + (1 - cos(2 * frequency * x)) / (2 * frequency), but the function shown in the docs does not match it.
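For reference, a minimal NumPy sketch of the snake activation as the source computes it, x + (1 - cos(2*frequency*x)) / (2*frequency), which is algebraically equal to x + sin²(frequency*x)/frequency. This is an illustrative reimplementation, not the tfa code itself:

```python
import numpy as np

def snake(x, frequency=1.0):
    """Snake activation: x + (1 - cos(2*f*x)) / (2*f),
    equivalently x + sin(f*x)**2 / f."""
    x = np.asarray(x, dtype=np.float64)
    return x + (1.0 - np.cos(2.0 * frequency * x)) / (2.0 * frequency)

# The two algebraic forms agree (sin^2 t = (1 - cos 2t) / 2):
xs = np.linspace(-3.0, 3.0, 7)
assert np.allclose(snake(xs, 2.0), xs + np.sin(2.0 * xs) ** 2 / 2.0)
```

Whichever form the docs print, it should reduce to this expression.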
tensorflow/tensorflow
Markdown renders incorrectly in Operation Semantics: BatchNormGrad
Bug
URL(s) with the issue: BatchNormGrad
Description of issue (what needs changing): the BatchNormGrad table renders incorrectly.
Clear description: see image.
Correct links: none found yet.
Markdown to HTML: done using Google's internal doc system.
tensorflow/tensorflow
Snapshot not saved by tf.data.experimental.save
Bug
Unable to load a tf.data dataset at a later stage, as the snapshot is not saved while running `tf.data.experimental.save(ds_train, path=train_ds_path, compression=None, shard_func=None)`. What is the solution for this? I have to save a tf.data dataset and reload it later.
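For what it's worth, this is the round-trip I would expect to work (the path here is a temporary directory, a stand-in for the reporter's train_ds_path; an empty snapshot directory after the save call would reproduce the bug). On TF 2.3-2.5 the API is tf.data.experimental.save/load as in the report; newer releases expose the same functionality as Dataset.save/Dataset.load, so the sketch handles both:

```python
import tempfile
import tensorflow as tf

path = tempfile.mkdtemp()  # stand-in for the reporter's train_ds_path
ds = tf.data.Dataset.range(5)

if hasattr(tf.data.Dataset, "save"):
    # Newer TF: same functionality as the experimental API below.
    ds.save(path)
    restored = tf.data.Dataset.load(path)
else:
    # TF 2.3-2.5, as in the report; element_spec must match what was saved.
    tf.data.experimental.save(ds, path, compression=None, shard_func=None)
    restored = tf.data.experimental.load(path, element_spec=ds.element_spec)

# Element order across shards is not guaranteed, so compare as a set.
values = sorted(int(v) for v in restored.as_numpy_iterator())
```

If the directory at `path` stays empty after the save call, that is the reported failure.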
tensorflow/tensorflow
Error when applying tf.map_fn to a symbolic tensor
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
TensorFlow installed from (source or binary): conda
TensorFlow version (use command below): tensorflow-gpu 2.3.0 (from conda, install he13fc11_0)
Python version: 3.8.10
Bazel version (if compiling from source): no
GCC/compiler version (if compiling from source): no
CUDA/cuDNN version: cudatoolkit 10.1.243 (from conda), cudnn 7.6.5 (from conda)
GPU model and memory: NVIDIA GeForce GT 730M, 1 GB

Current behavior: error when applying tf.map_fn to a symbolic tensor.
Expected behavior: no error when applying tf.map_fn to a symbolic tensor.

Standalone code to reproduce the issue (this is a toy example; of course I would like to perform a specific per-row function):

import tensorflow as tf
inputs = tf.keras.layers.Input(shape=(1, 1, 3), batch_size=4, dtype=tf.double)
result = tf.map_fn(lambda x: x, inputs)
# I would like the function to be applied to each of the 4 elements of the batch dimension.

Traceback (most recent call last):
  File "...", line 4, in <module>
    result = tf.map_fn(lambda x: x, inputs)
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\util\deprecation.py", line 574, in new_func
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\ops\map_fn.py", line 633, in map_fn_v2
    return map_fn(...)
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\ops\map_fn.py", line 440, in map_fn
    elems_batchable_ta = [...]
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\ops\map_fn.py", line 441, in <listcomp>
    tensor_array_ops.TensorArray(...)
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\ops\tensor_array_ops.py", line 1071, in __init__
    ..., implementation=implementation)
  File "C:\ProgramData\Anaconda3\envs\py3tf2.3\lib\site-packages\tensorflow\python\ops\tensor_array_ops.py", line 718, in __init__
    self._tensor_array = [None for _ in range(size)]
TypeError: 'Tensor' object cannot be interpreted as an integer
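As a possible workaround (not a fix for the underlying bug): wrapping the map_fn call in a Keras Lambda layer defers it until the model runs, so map_fn never operates directly on the bare symbolic tensor. A sketch under that assumption, with the same shapes as the report:

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(1, 1, 3), batch_size=4, dtype=tf.double)
# map_fn runs inside the layer's call, applied per batch element.
mapped = tf.keras.layers.Lambda(lambda t: tf.map_fn(lambda x: x, t))(inputs)
model = tf.keras.Model(inputs, mapped)

out = model(tf.ones((4, 1, 1, 3), dtype=tf.double))
```

The identity lambda stands in for the reporter's per-row function.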
tensorflow/tensorflow
element_spec in tf.data.experimental.load for variable image shapes
Bug
I am facing an issue while using:

check = tf.data.experimental.load(val_ds_path, element_spec=(tf.TensorSpec(shape=(224, 224, 3), dtype=tf.uint8, name=None), tf.TensorSpec(shape=(), dtype=tf.int64, name=None)))

Here the syntax requires the element_spec argument to be filled, while I have images of variable shape in the dataset. How do I specify the shape in such a case? Please suggest.
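One way to express variable image shapes in an element_spec is to use None for the dimensions that vary. Whether tf.data.experimental.load accepts that for a given snapshot depends on how it was written, so the sketch below only demonstrates the spec itself, on a toy variable-shape dataset standing in for the reporter's images:

```python
import tensorflow as tf

# None marks dimensions that differ between elements (height/width here).
spec = (tf.TensorSpec(shape=(None, None, 3), dtype=tf.uint8),
        tf.TensorSpec(shape=(), dtype=tf.int64))

def gen():  # toy stand-in for the reporter's image dataset
    yield tf.zeros((224, 224, 3), tf.uint8), tf.constant(0, tf.int64)
    yield tf.zeros((100, 150, 3), tf.uint8), tf.constant(1, tf.int64)

ds = tf.data.Dataset.from_generator(gen, output_signature=spec)
shapes = [tuple(img.shape) for img, _ in ds]
```

The same spec tuple could then be passed as element_spec when loading.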
tensorflow/tensorflow
module 'tensorflow._api.v2.summary' has no attribute 'scalar'
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code
OS platform and distribution (e.g., Linux Ubuntu 16.04): Red Hat 7.6
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 2.5.0
Python version: 3.6.8
Bazel version (if compiling from source): n/a
GCC/compiler version (if compiling from source): n/a
CUDA/cuDNN version: n/a
GPU model and memory: n/a

Current behavior: when running on one machine, executing the line `tf.summary.scalar('train_ll_per_seq', ll_per_seq)` raises: module 'tensorflow._api.v2.summary' has no attribute 'scalar'. Interestingly enough, on another machine (also RHEL 7.6) the code executes as expected.

Expected behavior: the above scalar should be recognized on both machines.

Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible please share a link to Colab/Jupyter/any notebook):

import tensorflow as tf
tf.summary.scalar('train_ll_per_seq', ll_per_seq)
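On an installation where tf.summary resolves correctly, the TF2 call also needs a default writer and a step argument; a minimal sketch (the log directory is a temp dir and 0.5 is a placeholder value, neither is from the report):

```python
import os
import tempfile
import tensorflow as tf

logdir = tempfile.mkdtemp()  # stand-in log directory
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
    # TF2 tf.summary.scalar requires step=; 0.5 stands in for ll_per_seq.
    tf.summary.scalar("train_ll_per_seq", 0.5, step=1)
writer.flush()

event_files = [f for f in os.listdir(logdir) if "tfevents" in f]
```

If this raises the AttributeError from the report, the tensorflow package on that machine is shadowed or broken rather than the API being absent in 2.5.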
tensorflow/tensorflow
Update errors_impl.py
Bug
Adds a code example for tf.errors.ResourceExhaustedError. Fixes #29847.
tensorflow/tensorflow
Calculation of gradients of a 2D convolution operation through GradientTape returns None
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Mint 20.1
TensorFlow installed from (source or binary): conda
TensorFlow version (use command below): 2.4.1
Python version: 3.9.4
CUDA/cuDNN version: 10.1.243 / 7.6.5 (both through conda)
GPU model and memory: GeForce RTX 2060 Rev. A, 6 GB

Current behavior: currently I am trying to use GradientTape to capture the gradients of a 2D convolution operation (tf.nn.conv2d) on a test matrix. However, when I go to actually fetch the gradients, tape.gradient returns None instead of the expected gradients. I tested the exact same code except I disabled eager execution and didn't use GradientTape, and it returned the gradients just fine.

Expected behavior: when using tf.nn.conv2d inside of GradientTape, TensorFlow should successfully calculate the gradients.

Standalone code to reproduce the issue. This code works:

import os
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

# sizes (fixed); strides, in_channels, out_channels are 1 for now
x_size = 4
w_size = 3  # use an odd number here
x_shape = (1, x_size, x_size, 1)
w_shape = (w_size, w_size, 1, 1)
out_shape = (1, x_size - w_size + 1, x_size - w_size + 1, 1)
strides = (1, 1, 1, 1)

# numpy values
x_np = np.random.randint(10, size=x_shape)
w_np = np.random.randint(10, size=w_shape)
out_scale_np = np.random.randint(10, size=out_shape)

# tf forward
x = tf.constant(x_np, dtype=tf.float32)
w = tf.constant(w_np, dtype=tf.float32)
out = tf.nn.conv2d(input=x, filters=w, strides=strides, padding="VALID")
out_scale = tf.constant(out_scale_np, dtype=tf.float32)
f = tf.reduce_sum(tf.multiply(out, out_scale))

# tf backward
d_out = tf.gradients(f, out)[0]
d_x = tf.gradients(f, x)[0]
d_w = tf.gradients(f, w)[0]
print(d_out, d_x, d_w)

This code does not:

import os
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
import numpy as np
import tensorflow as tf

# sizes (fixed); strides, in_channels, out_channels are 1 for now
with tf.GradientTape() as g:
    x_size = 4
    w_size = 3  # use an odd number here
    x_shape = (1, x_size, x_size, 1)
    w_shape = (w_size, w_size, 1, 1)
    out_shape = (1, x_size - w_size + 1, x_size - w_size + 1, 1)
    strides = (1, 1, 1, 1)

    # numpy values
    x_np = np.random.randint(10, size=x_shape)
    w_np = np.random.randint(10, size=w_shape)
    out_scale_np = np.random.randint(10, size=out_shape)

    # tf forward
    x = tf.constant(x_np, dtype=tf.float32)
    w = tf.constant(w_np, dtype=tf.float32)
    out = tf.nn.conv2d(input=x, filters=w, strides=strides, padding="VALID")
    out_scale = tf.constant(out_scale_np, dtype=tf.float32)
    f = tf.reduce_sum(tf.multiply(out, out_scale))

# tf backward
d_out = g.gradient(f, out)
d_x = g.gradient(f, x)
d_w = g.gradient(f, w)
print(d_out, d_x, d_w)
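For what it's worth, the likely cause in the second snippet is that x and w are tf.constant tensors, which GradientTape does not track unless they are explicitly watched (a tf.Variable would be tracked automatically). A sketch of the watched version, with the same shapes as the report:

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.randint(10, size=(1, 4, 4, 1)), dtype=tf.float32)
w = tf.constant(np.random.randint(10, size=(3, 3, 1, 1)), dtype=tf.float32)

# persistent=True allows calling gradient() more than once on the tape.
with tf.GradientTape(persistent=True) as g:
    g.watch(x)  # constants must be watched explicitly
    g.watch(w)
    out = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="VALID")
    f = tf.reduce_sum(out)

d_x = g.gradient(f, x)  # no longer None
d_w = g.gradient(f, w)
```

Note also that only operations executed inside the tape context are recorded.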
tensorflow/tensorflow
exporter_main_v2.py on official TF2 OD checkpoint produces a saved_model.pb different from the official saved_model.pb
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
TensorFlow installed from (source or binary): pip
TensorFlow version (use command below): 2.3.0
Python version: 3.6.9
CUDA/cuDNN version: CUDA 11.4
GPU model and memory: GeForce RTX 3090

I am trying to convert a retrained TF2 Object Detection SSD MobileNetV2 model to a proprietary framework. I have successfully retrained the network and it runs properly. However, I am having trouble converting the saved_model.pb to the other framework: the conversion script from the SDK I am working with performs optimizations on the saved_model.pb using meta_optimizer.cc, which returns an empty graph after running through my retrained model. I used exporter_main_v2.py to export my retrained checkpoint to the saved_model.pb I am having trouble with.

The issue is not with my training or checkpoint, but with the export process from checkpoint to saved_model.pb using exporter_main_v2.py. I know this because I downloaded the SSD MobileNetV2 model from the TF2 zoo to test with. I have no issue converting the official saved_model.pb file found in the official repo, but when I try to convert the official checkpoint found in the repo to a saved_model.pb using exporter_main_v2.py, I face the same issue trying to convert the newly produced saved_model.pb to the proprietary framework. This means something wrong is happening when executing the exporter_main_v2.py script.

Expected behavior: the exported saved_model.pb file should not be different from the official saved_model.pb file found in the official repo. The following is what I get: it shows 0 nodes and 0 edges (Grappler: empty graph).

Standalone code to reproduce the issue: the model I downloaded is <link>. The command I used to export the official checkpoint to a saved_model.pb is:

python models/research/object_detection/exporter_main_v2.py --input_type image_tensor --pipeline_config_path pipeline.config --trained_checkpoint_dir checkpoint --output_directory exported_model
tensorflow/tensorflow
Issue with Conv1D when groups > 1, using our own TensorFlow build
Bug
System information
OS platform and distribution: Linux Red Hat Enterprise 8.1
TensorFlow installed from: source
TensorFlow version: v2.5.0-0-ga4dfb8d1a71 (2.5.0); more generally, any version starting from 2.3.1
Python version: 3.7.10
Bazel version: 3.7.2
GCC compiler version: 8.3.1
CUDA/cuDNN version: 11.2 / 8.0
GPU model and memory: NVIDIA V100

Current behavior: when using our own TensorFlow build, starting at version 2.3.1 (which added the `groups` parameter), the following code snippet fails for groups greater than 1:

import tensorflow as tf
import traceback

for g in [1, 2, 4]:
    try:
        print(f"groups = {g}")
        c = tf.keras.layers.Conv1D(4, 4, groups=g)
        print(c(tf.ones((2, 16, 4))))
    except Exception:
        traceback.print_exc()
    finally:
        print()

Output is as follows:

groups = 1
tf.Tensor([[[0.28654966 0.9454404 1.1466699 0.91166556] ... (identical rows repeated)]], shape=(2, 13, 4), dtype=float32)

groups = 2
Traceback (most recent call last):
  File "groups.py", line 8, in <module>
    print(c(tf.ones((2, 16, 4))))
  File ".../site-packages/tensorflow/python/keras/engine/base_layer.py", line 1030, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File ".../site-packages/tensorflow/python/keras/layers/convolutional.py", line 249, in call
    outputs = self._convolution_op(inputs, self.kernel)
  ...
  File ".../site-packages/tensorflow/python/ops/gen_nn_ops.py", line 932, in conv2d
    _ops.raise_from_not_ok_status(e, name)
  File ".../site-packages/tensorflow/python/framework/ops.py", line 6897, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
tensorflow.python.framework.errors_impl.InvalidArgumentError: input and filter must have the same depth: 4 vs 2 [Op:Conv2D]

groups = 4
Traceback (most recent call last):
  ... (same frames as above) ...
tensorflow.python.framework.errors_impl.InvalidArgumentError: input and filter must have the same depth: 4 vs 1 [Op:Conv2D]

Expected behavior: if I run the same code snippet using a TensorFlow build from pip, I do not get any error; all three group settings print a (2, 13, 4) tensor (with identical rows per sample, e.g. [0.05965281 1.473797 0.5487337 0.34858704] for groups = 1, [0.555179 0.11330175 0.22858751 1.1606797] for groups = 2, [0.9354149 0.7340534 0.01698902 0.4960574] for groups = 4). I have a hard time understanding how this can be build dependent.

Any ideas would be greatly appreciated.
tensorflow/tensorflow
TensorFlow 2.3.0 does not respect the no_proxy environment variable
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.5 LTS
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 (2.3.0)
Python version: 3.6.2
Bazel version (if compiling from source): n/a
GCC/compiler version (if compiling from source): n/a
CUDA/cuDNN version: n/a
GPU model and memory: n/a

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Current behavior: TensorFlow does not recognize the environment variables no_proxy and no_grpc_proxy. We have set http_proxy and no_proxy in the environment, and TF seems to only pick up http_proxy, using the HTTP proxy for the inter-worker gRPC communication, and gets errors like this:

{"created": "@1625564310.040212700", "description": "Error received from peer ipv4:<proxy-ip>:8080", "file": "external/com_github_grpc_grpc/src/core/lib/surface/call.cc", "file_line": 1056, "grpc_message": "Socket closed", "grpc_status": 14} [Op:CollectiveBcastRecv]

This is the TF_CONFIG:
TF_CONFIG = {"cluster": {"worker": ["10.0.0.7:45661", "10.0.0.7:46771"]}, "task": {"type": "worker", "index": 0}, "environment": "cloud"}

These are the proxy-related env vars:
http_proxy=<...>
no_proxy=<proxy-ip>,localhost,10.0.0.4,10.0.0.5,10.0.0.6,10.244.0.0/16,10.0.0.0/8

Expected behavior: gRPC should not use the proxy, since we have defined the IPs that must not go through it.

Contributing: do you want to contribute a PR? (yes/no): no. Briefly describe your candidate solution (if contributing): -

Standalone code to reproduce the issue: -
Other info / logs: -
tensorflow/tensorflow
TFLite conversion of LSTM model does not work with multiple batch sizes
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): Kaggle notebook (standard Ubuntu)
TensorFlow installed from (source or binary): pip install tensorflow==2.5.0
TensorFlow version (use command below): 2.5.0
Python version: 3.7
CUDA/cuDNN version: CPU only
GPU model and memory: CPU only

Current behavior: I trained (on GPU) and saved an LSTM model in Keras and converted it to TFLite.

model = Sequential()
model.add(Masking(mask_value=-1, input_shape=(n_timesteps, n_features)))  # this layer is used in the final model
model.add(LSTM(64, input_shape=(n_timesteps, n_features)))  # this layer is used in the final model
model.add(RepeatVector(n_timesteps))
model.add(LSTM(64, return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))
optimizer = Adam(learning_rate=0.001, epsilon=1e-04)
model.compile(optimizer=optimizer, loss='mse')
model.fit(x_train, x_train, epochs=1000, verbose=2)

# saving encoder
encoder = Model(inputs=model.input, outputs=model.layers[1].output)
encoder.save('encoder.h5')

Using both the experimental and non-experimental converter, I am able to convert the model to TFLite, where the shape signature is (-1, x, y), i.e. the first dimension is the batch size.

from tensorflow.keras.models import load_model
encoder = load_model('encoder.h5')

# following code from <...>
run_model = tf.function(lambda x: encoder(x))
batch_size = None
steps = 6952
input_size = 20
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([batch_size, steps, input_size], encoder.inputs[0].dtype))
encoder.save('encoder', save_format='tf', signatures=concrete_func)

# save the model as tflite
converter = tf.lite.TFLiteConverter.from_saved_model('encoder')
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open('encoder.tflite', 'wb') as f:
    f.write(tflite_model)

Now when I use the model:

interpreter = tf.lite.Interpreter(model_path='encoder.tflite')
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
data = np.vstack([a, a]).astype(np.float32)  # shape (2, 6952, 20)
interpreter.resize_tensor_input(input_details[0]['index'], data.shape)  # shape (2, 6952, 20)
interpreter.resize_tensor_input(output_details[0]['index'], [data.shape[0], 64])  # shape (2, 64)
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])

I get the following error on invoke():

RuntimeError: tensorflow/lite/kernels/concatenation.cc:80 t->dims->data[d] != t0->dims->data[d] (2 != 1). Node number 26 (CONCATENATION) failed to prepare. Node number 28 (WHILE) failed to invoke.

Expected behavior: an output of size (2, 64).

Related issues that I've looked at: <links>
tensorflow/tensorflow
Colab TPU + TF 2.5: UnavailableError: Socket closed
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): Google Colab environment
TensorFlow version (use command below): 2.5
Python version: 3.7.10

Context: my model was training OK until a certain number of steps, then threw the error mentioned in issue #50522, which I then came to realize was related to the <...> in my TFRecord files causing different batch output sizes. As an attempt to fix it, I adapted the code to run with a dynamic batch size by using tf.shape(x). Coincidentally or not, when trying to perform training, the error "UnavailableError: Socket closed" began showing up. After further research, as pointed out by the docs, dynamic shapes are not supported, so I removed the tf.shape op and added drop_remainder=True to my dataset batch config. Despite that, the error persists, although after enabling drop_remainder the code is identical to the previous working version.

Obs: dropping the remainder and tf.shape were added as attempts to deal with batch lengths different from the original configuration, as the data need to be properly reshaped according to the length of the augmented batch, and the batch size varies according to the number of samples within my TFRecords that is not a multiple.

Code:

def train_model(train_path, validation_path, buffer_size, epochs, steps_per_epoch, model, data_augmentation=preprocess_model):
    train_filenames = get_filenames_tpu(train_path)
    random.shuffle(train_filenames)
    validation_filenames = get_filenames_tpu(validation_path)
    random.shuffle(validation_filenames)

    dataset_length = 91758
    train_size = int(dataset_length * 0.7)
    validation_size = dataset_length - train_size
    batch_size = 42 * tpu_strategy.num_replicas_in_sync
    shapes = 32 * batch_size

    data_reshape = lambda x, y: (tf.reshape(x, (shapes, 224, 224, 3)),
                                 (tf.reshape(y[0], (shapes, 1000)),
                                  tf.reshape(y[1], (shapes, 516)),
                                  tf.reshape(y[2], (shapes, 124))))
    # apply augmentation, then tile the labels to the correct length
    augmentation_pipeline = lambda x, y: (data_augmentation(tf.expand_dims(x, axis=0)),
                                          (tf.tile(tf.reshape(y[0], (1, 1000)), (32, 1)),
                                           tf.tile(tf.reshape(y[1], (1, 516)), (32, 1)),
                                           tf.tile(tf.reshape(y[2], (1, 124)), (32, 1))))

    AUTO = tf.data.AUTOTUNE
    train_dataset = tf.data.TFRecordDataset(filenames=train_filenames, buffer_size=int(1e8), num_parallel_reads=AUTO).map(parse_fn, num_parallel_calls=AUTO).shuffle(buffer_size=buffer_size, reshuffle_each_iteration=True)
    train_dataset = train_dataset.map(augmentation_pipeline, num_parallel_calls=AUTO)
    train_dataset = train_dataset.batch(batch_size=batch_size, drop_remainder=True)
    train_dataset = train_dataset.map(data_reshape, num_parallel_calls=AUTO)
    train_dataset = train_dataset.repeat()
    train_dataset = train_dataset.prefetch(AUTO)

    # create a validation dataset
    validation_dataset = tf.data.TFRecordDataset(filenames=validation_filenames, num_parallel_reads=AUTO).map(parse_fn, num_parallel_calls=AUTO)
    validation_dataset = validation_dataset.batch(batch_size)
    validation_dataset = validation_dataset.prefetch(AUTO)
    validation_dataset = validation_dataset.repeat(1)
    validation_steps = validation_size // batch_size

    history = model.fit(x=train_dataset, epochs=epochs, steps_per_epoch=steps_per_epoch,
                        validation_data=validation_dataset, validation_steps=validation_steps)
    return history

Performing training:

losses = {'class_0': categorical_crossentropy, 'class_1': categorical_crossentropy, 'class_2': categorical_crossentropy}
metrics = ['categorical_accuracy']
optimizer = Adam(learning_rate=5e-3)
weights_file = None
with tpu_strategy.scope():
    resnet_50v2 = load_and_configure_model(optimizer, losses, metrics, weights_file)

base_directory = 'gs://<...>/2015_tfrecords'
train_path = base_directory + '/train'
validation_path = base_directory + '/validation'
buffer_size = 10240
epochs = 30
steps_per_epoch = 192

resnet_50v2.summary()
history = train_model(train_path, validation_path, buffer_size, epochs, steps_per_epoch, resnet_50v2)
plot_training_history3(history)

Error:

Total params: 26,925,160
Trainable params: 7,825,000
Non-trainable params: 19,100,160
Epoch 1/30
UnavailableError                          Traceback (most recent call last)
<ipython-input-...> in <module>
      4
      5 resnet_50v2.summary()
----> 6 history = train_model(train_path, validation_path, buffer_size, epochs, steps_per_epoch, resnet_50v2)
      7 plot_training_history3(history)
      8
(14 frames)
/usr/local/lib/python3.7/dist-packages/six.py in raise_from(value, from_value)
UnavailableError: Socket closed
tensorflow/tensorflow
Grappler error when Softmax input has a variable dimension
Bug
System information
Have I written custom code: yes
OS platform and distribution: Ubuntu 18.04
TensorFlow installed from: binary
TensorFlow version: 2.5.0
Python version: 3.6.9
CUDA/cuDNN version: 11.2 / 8.1.0
GPU model and memory: NVIDIA Quadro P620, 4 GB

Current behavior: the Grappler pass logs an error when Softmax is used in a tf.function with one or more variable dimensions (e.g. a variable batch size). This log started appearing in TensorFlow 2.5. It does not seem to cause issues when running the model, but it might indicate a bug in the Grappler implementation.

Expected behavior: this operation should not produce any warning or error.

Contributing: do you want to contribute a PR? no. Briefly describe your candidate solution (if contributing): -

Standalone code to reproduce the issue:

import tensorflow as tf
input_signature = [tf.TensorSpec([None, 20], tf.float32)]
softmax = tf.function(tf.nn.softmax, input_signature=input_signature)
softmax(tf.random.uniform((2, 20)))

Other info / logs: the code above logs the following error:

2021-07-02 07:34:12.478300: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:689] Error in PredictCost() for the op: op: "Softmax" attr { key: "T" value { type: DT_FLOAT } } inputs { dtype: DT_FLOAT shape { unknown_rank: true } } device { type: "GPU" vendor: "NVIDIA" model: "Quadro P620" frequency: 1442 num_cores: 4 environment { key: "architecture" value: "6.1" } environment { key: "cuda" value: "11020" } environment { key: "cudnn" value: "8100" } num_registers: 65536 l1_cache_size: 24576 l2_cache_size: 524288 shared_memory_size_per_multiprocessor: 98304 memory_size: 3092316160 bandwidth: 96128000 } outputs { dtype: DT_FLOAT shape { unknown_rank: true } }
tensorflow/tensorflow
Custom trained ssd_mobilenet_v1_fpn_shared_box_predictor COCO model cannot detect objects with TensorFlow
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
OS platform and distribution: macOS Big Sur 11.2.1
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no (the model will run on iOS)
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): 1.14.0
Python version: 3.6
Bazel version (if compiling from source):
GCC/compiler version (if compiling from source):
CUDA/cuDNN version:
GPU model and memory:
Exact command to reproduce:

Describe the problem: Hi all, I created a custom trained TFLite model for my TensorFlow object detection project, which will run on iOS, but somehow inference (object detection) does not work properly. During inference the model always detects the first trained item (the first item in my labelmap.txt file) and gives some wrong score as the object detection prediction. Does anyone have an idea what the problem could be? Here is my project flow, step by step:

1. I trained my images with the ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync model.
1.1 Code: master branch
1.2 Command:
python3.7 train.py --logtostderr --train_dir=/Users/.../temp/tensorflow_last_model/train --pipeline_config_path=/Users/.../temp/tensorflow_last_model/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync.config
Note: I can also attach my config file if it is relevant.
1.3 Output:
INFO:tensorflow:global step 89547: loss = 0.0628 (8.283 sec/step)
INFO:tensorflow:global step 89548: loss = 0.0596 (9.801 sec/step)

2. Converted the trained model to a tflite_graph.pb file.
2.1 Code: master branch
2.2 Command:
python3.6 models/research/object_detection/export_tflite_ssd_graph.py --pipeline_config_path=/Users/emre/Documents/temp/tensorflow_last_model/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync.config --trained_checkpoint_prefix=/Users/emre/Documents/temp/tensorflow_last_model/train/model.ckpt-90728 --output_directory=/Users/emre/Documents/temp/tensorflow_last_model/lite
2.3 Output: the script prints its sys.path (site-packages including object_detection 0.1, lvis 0.5.3, apache-beam 2.27.0, avro-python3 1.10.1, tensorflow-model-optimization 0.5.0, tensorflow-datasets 4.2.0, tensorflow-addons 0.12.1, ...) and then runs (output truncated).
3 6 lib python3 6 site package seqeval 1 2 2 py3 6 egg library framework python framework version 3 6 lib python3 6 site package sentencepiece 0 1 95 py3 6 macosx 10 9 x86 64 egg library framework python framework version 3 6 lib python3 6 site package py cpuinfo 7 0 0 py3 6 egg library framework python framework version 3 6 lib python3 6 site package psutil 5 8 0 py3 6 macosx 10 9 x86 64 egg library framework python framework version 3 6 lib python3 6 site package opencv python headless 4 5 1 48 py3 6 macosx 10 9 x86 64 egg library framework python framework version 3 6 lib python3 6 site package oauth2client 4 1 3 py3 6 egg library framework python framework version 3 6 lib python3 6 site package kaggle 1 5 10 py3 6 egg library framework python framework version 3 6 lib python3 6 site package google cloud bigquery 2 7 0 py3 6 egg library framework python framework version 3 6 lib python3 6 site package gin config 0 4 0 py3 6 egg library framework python framework version 3 6 lib python3 6 site package dataclasse 0 8 py3 6 egg library framework python framework version 3 6 lib python3 6 site package opencv python 4 5 1 48 py3 6 macosx 10 9 x86 64 egg library framework python framework version 3 6 lib python3 6 site package pymongo 3 11 3 py3 6 macosx 10 9 x86 64 egg library framework python framework version 3 6 lib python3 6 site package pydot 1 4 1 py3 6 egg library framework python framework version 3 6 lib python3 6 site package pyarrow 2 0 0 py3 6 macosx 10 9 x86 64 egg library framework python framework version 3 6 lib python3 6 site package mock 2 0 0 py3 6 egg library framework python framework version 3 6 lib python3 6 site package httplib2 0 17 4 py3 6 egg library framework python framework version 3 6 lib python3 6 site package hdfs 2 5 8 py3 6 egg library framework python framework version 3 6 lib python3 6 site package fastavro 1 3 1 py3 6 macosx 10 9 x86 64 egg library framework python framework version 3 6 lib python3 6 site package dill 0 3 1 1 py3 
6 egg library framework python framework version 3 6 lib python3 6 site package crcmod 1 7 py3 6 macosx 10 9 x86 64 egg 2021 06 29 19 23 16 734088 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma info tensorflow find and fix 2 match i0629 19 23 17 298635 4339817984 exporter py 140 find and fix 2 match info tensorflow find and fix 0 match i0629 19 23 17 328562 4339817984 exporter py 140 find and fix 0 match warn tensorflow from library framework python framework version 3 6 lib python3 6 site package tensorflow python tool freeze graph py 127 checkpoint exist from tensorflow python training checkpoint management be deprecate and will be remove in a future version instruction for update use standard file apis to check for file with this prefix w0629 19 23 17 652231 4339817984 deprecation py 323 from library framework python framework version 3 6 lib python3 6 site package tensorflow python tool freeze graph py 127 checkpoint exist from tensorflow python training checkpoint management be deprecate and will be remove in a future version instruction for update use standard file apis to check for file with this prefix info tensorflow restore parameter from user emre documents temp tensorflow last model train model ckpt 89545 i0629 19 23 18 414107 4339817984 saver py 1280 restore parameter from user emre documents temp tensorflow last model train model ckpt 89545 warn tensorflow from library framework python framework version 3 6 lib python3 6 site package tensorflow python tool freeze graph py 233 convert variable to constant from tensorflow python framework graph util impl be deprecate and will be remove in a future version instruction for update use tf compat v1 graph util convert variable to constant w0629 19 23 19 047316 4339817984 deprecation py 323 from library framework python framework version 3 6 lib python3 6 site package tensorflow python tool freeze graph py 233 convert 
variable to constant from tensorflow python framework graph util impl be deprecate and will be remove in a future version instruction for update use tf compat v1 graph util convert variable to constant warn tensorflow from library framework python framework version 3 6 lib python3 6 site package tensorflow python framework graph util impl py 270 extract sub graph from tensorflow python framework graph util impl be deprecate and will be remove in a future version instruction for update use tf compat v1 graph util extract sub graph w0629 19 23 19 047602 4339817984 deprecation py 323 from library framework python framework version 3 6 lib python3 6 site package tensorflow python framework graph util impl py 270 extract sub graph from tensorflow python framework graph util impl be deprecate and will be remove in a future version instruction for update use tf compat v1 graph util extract sub graph info tensorflow freeze 333 variable i0629 19 23 19 428654 4339817984 graph util impl py 311 freeze 333 variable info tensorflow convert 333 variable to const op i0629 19 23 19 579565 4339817984 graph util impl py 364 convert 333 variable to const op 2021 06 29 19 23 19 800005 I tensorflow tool graph transform transform graph cc 317 apply strip unused nodes 3 convert tflite graph pb to tflite file 3 1code master branch 3 2command python3 6 tflite convert py output file test tflite graph def file tflite graph pb input array normalize input image tensor output array tflite detection postprocess tflite detection postprocess 1 tflite detection postprocess 2 tflite detection postprocess 3 input shape 1 640 640 3 allow custom op 3 3output 2021 06 29 19 15 19 235482 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma hallo 2021 06 29 19 15 19 931976 I tensorflow lite toco import tensorflow cc 1336 convert unsupported operation tflite detection postprocess 2021 06 29 19 15 19 976190 I tensorflow 
lite toco graph transformation graph transformation cc 39 before remove unused op 900 operator 1329 array 0 quantize 2021 06 29 19 15 20 001280 I tensorflow lite toco graph transformation graph transformation cc 39 before general graph transformation 900 operator 1329 array 0 quantize 2021 06 29 19 15 20 111332 I tensorflow lite toco graph transformation graph transformation cc 39 after general graph transformation pass 1 187 operator 417 array 0 quantize 2021 06 29 19 15 20 116690 I tensorflow lite toco graph transformation graph transformation cc 39 after general graph transformation pass 2 182 operator 407 array 0 quantize 2021 06 29 19 15 20 121787 I tensorflow lite toco graph transformation graph transformation cc 39 before group bidirectional sequence lstm rnn 182 operator 407 array 0 quantize 2021 06 29 19 15 20 124878 I tensorflow lite toco graph transformation graph transformation cc 39 before dequantization graph transformation 182 operator 407 array 0 quantize 2021 06 29 19 15 20 132192 I tensorflow lite toco allocate transient array cc 345 total transient array allocate size 52428800 byte theoretical optimal value 39321600 byte 2021 06 29 19 15 20 133140 I tensorflow lite toco toco tooling cc 433 estimate count of arithmetic op 103 489 billion note that a multiply add be count as 2 op 2021 06 29 19 15 20 134138 w tensorflow lite toco tflite operator cc 2112 ignore unsupported type in list attribute with key output type 4 I download follow project for tflite object detection and camera capture code and 5 I modify and merge they a little bit so if I click the capture button capture the camera view it detect object which be train in my model from capture photo and print out the item number class i d in labelmap txt and score of the detect object predication 6 if I execute my custom train model on macos my code print out always the first class i d and some wrong score it do not matter which photo I show 6 1ex first photo 2021 06 29 19 19 14 526 python 14486 
591568 select fps 30 not available on this platform bounding box array 0 77786434 0 4134923 0 8920009 0 5161122 dtype float32 class i d 0 0 score 0 84353364 6 2ex second photo 2021 06 29 19 19 20 729 python 14486 591568 select fps 30 not available on this platform bounding box array 0 7591856 0 41594303 0 8673101 0 5112757 dtype float32 class i d 0 0 score 0 9267361 bounding box array 0 42481518 0 6758945 0 53535295 0 7300209 dtype float32 class i d 0 0 score 0 50106883 bounding box array 0 6508195 0 6713367 0 7614183 0 72627753 dtype float32 class i d 0 0 score 0 4232775 6 3ex third photo 2021 06 29 19 19 27 549 python 14486 591568 select fps 30 not available on this platform bounding box array 0 6415653 0 2917411 0 7683315 0 38982102 dtype float32 class i d 0 0 score 0 9865758 if I use another tflite pre train model with my code racoon and squirrel detection sd mobilenet v2 quantize coco model link it work totally fine I get follow output if I show a squirrel or racoon photo to camera and capture 6 4 ex racoon 2021 07 01 21 27 53 884 python 21077 785260 select fps 30 not available on this platform bounding box array 0 10106477 0 5392377 0 5041685 0 9078557 dtype float32 class i d 2 0 score 0 9921875 6 5 ex squirrel 2021 07 01 21 28 15 506 python 21077 785260 select fps 30 not available on this platform bounding box array 0 28669962 0 52812684 0 83007085 0 8363091 dtype float32 class i d 1 0 score 0 95703125 additional note if I use pretraine frozen ssd mobilenet v1 fpn share box predictor 640x640 coco14 sync model link with the code from follow website link it work also totally fine so I do not think that ssd mobilenet v1 fpn share box predictor 640x640 coco14 sync model have a basic problem so I guess something wrong be happen in step 2 convert pb file to tflite graph pb and or step3 convert tflite graph pb file to tflite I would be grateful if you give I some advice thank a lot in advance source code log
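For what it is worth, one quick sanity check when a TFLite detector seems to report only the first class: the TFLite_Detection_PostProcess op emits zero-based class indices, so mapping them onto a labelmap.txt whose ids start at 1 (or that contains a background row) without an offset makes every detection look like the first trained item. A minimal sketch of that mapping — the labelmap contents and the offset handling here are assumptions for illustration, not taken from the project above:

```python
def class_index_to_label(class_index, labels, index_offset=0):
    """Map a detector's (float) class index onto a labelmap entry.

    `labels` is the labelmap.txt contents in file order; `index_offset`
    compensates for a labelmap that includes a background/placeholder row.
    """
    return labels[int(class_index) + index_offset]

# Hypothetical two-class labelmap, in file order:
labels = ["racoon", "squirrel"]
```

If the raw boxes look sane but the class index is always 0.0 with odd scores, the mismatch is more likely in this index-to-label mapping or in the exported model's classification head than in the camera-capture code.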
tensorflowtensorflow
Update resource_handle.h
Bug
Change access modifiers to private; fix for #50481.
tensorflowtensorflow
XLA operation semantics documentation contradicts itself
Bug
URL(s) with the issue: the Slice entry of the XLA operation semantics page.

Description of issue (what needs changing / clear description): the number of parameters listed in the description does not match the operation's signature. (image attached)
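For reference when fixing the entry: Slice takes the operand plus two index lists of equal rank (with strides optional in some frontends), and output dimension d has size limit_indices[d] - start_indices[d]. A one-dimensional sketch of the documented semantics — these names follow the docs page, not actual XLA client code:

```python
def slice_1d(operand, start_index, limit_index):
    # Per the Slice semantics: output[i] = operand[start_index + i]
    # for i in [0, limit_index - start_index).
    return [operand[start_index + i] for i in range(limit_index - start_index)]
```

So the parameter list in the prose should enumerate exactly the operand and the two (or three, with strides) index arguments shown in the signature.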
tensorflowtensorflow
Reappeared bug: TypeError: Parameter to MergeFrom() must be instance of same class: expected tensorflow.TensorShapeProto got tensorflow.TensorShapeProto
Bug
The bug which was fixed in TensorFlow 2.4 seems to have reappeared in TensorFlow 2.5 in a different way. I already commented on that bug, but as it is closed I am not able to reopen it, and the cause seems to be a bit different, so I am opening this new bug report. When running my code on TF 2.4 everything works fine, including the linked .ipynb example, but when I run it with TF 2.5 I get this error:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model = tf.keras.Sequential([
      2     tf.keras.layers.Flatten(input_shape=(28, 28)),
      3     tf.keras.layers.Dense(128, activation='relu'),
      4     tf.keras.layers.Dense(10)
      5 ])

/usr/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    520     self._self_setattr_tracking = False  # pylint: disable=protected-access
    521     try:
--> 522       result = method(self, *args, **kwargs)
    523     finally:
    524       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/usr/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py in __init__(self, layers, name)
    112
    113     # Skip the init in FunctionalModel since model doesn't have input/output yet
--> 114     super(functional.Functional, self).__init__(  # pylint: disable=bad-super-call
    115         name=name, autocast=False)
    116     base_layer.keras_api_gauge.get_cell('Sequential').set(True)

/usr/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    520     self._self_setattr_tracking = False  # pylint: disable=protected-access
    521     try:
--> 522       result = method(self, *args, **kwargs)
    523     finally:
    524       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/usr/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
    316     self._steps_per_execution = None
    317
--> 318     self._init_batch_counters()
    319     self._base_model_initialized = True
    320

/usr/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    520     self._self_setattr_tracking = False  # pylint: disable=protected-access
    521     try:
--> 522       result = method(self, *args, **kwargs)
    523     finally:
    524       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/usr/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in _init_batch_counters(self)
    324     # `evaluate`, and `predict`.
    325     agg = variables.VariableAggregationV2.ONLY_FIRST_REPLICA
--> 326     self._train_counter = variables.Variable(0, dtype='int64', aggregation=agg)
    327     self._test_counter = variables.Variable(0, dtype='int64', aggregation=agg)
    328     self._predict_counter = variables.Variable(

/usr/lib/python3.8/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
    260       return cls._variable_v1_call(*args, **kwargs)
    261     elif cls is Variable:
--> 262       return cls._variable_v2_call(*args, **kwargs)
    263     else:
    264       return super(VariableMetaclass, cls).__call__(*args, **kwargs)

/usr/lib/python3.8/site-packages/tensorflow/python/ops/variables.py in _variable_v2_call(cls, initial_value, trainable, validate_shape, caching_device, name, variable_def, dtype, import_scope, constraint, synchronization, aggregation, shape)
    242     if aggregation is None:
    243       aggregation = VariableAggregation.NONE
--> 244     return previous_getter(
    245         initial_value=initial_value,
    246         trainable=trainable,

/usr/lib/python3.8/site-packages/tensorflow/python/ops/variables.py in <lambda>(**kws)
    235         shape=None):
    236   """Call on Variable class. Useful to force the signature."""
    237   previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
--> 238   for _, getter in ops.get_default_graph()._variable_creator_stack:  # pylint: disable=protected-access
    239     previous_getter = _make_getter(getter, previous_getter)

/usr/lib/python3.8/site-packages/tensorflow/python/ops/variable_scope.py in default_variable_creator_v2(next_creator, **kwargs)
   2660   shape = kwargs.get("shape", None)
   2661
-> 2662   return resource_variable_ops.ResourceVariable(
   2663       initial_value=initial_value,
   2664       trainable=trainable,

/usr/lib/python3.8/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
    262       return cls._variable_v2_call(*args, **kwargs)
    263     else:
--> 264       return super(VariableMetaclass, cls).__call__(*args, **kwargs)
    265
    266

/usr/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape)
   1582       self._init_from_proto(variable_def, import_scope=import_scope)
   1583     else:
-> 1584       self._init_from_args(
   1585           initial_value=initial_value,
   1586           trainable=trainable,

/usr/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape)
   1736           else:
   1737             shape = initial_value.shape
-> 1738           handle = eager_safe_variable_handle(
   1739               initial_value=initial_value,
   1740               shape=shape,

/usr/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py in eager_safe_variable_handle(initial_value, shape, shared_name, name, graph_mode)
    235   """
    236   dtype = initial_value.dtype.base_dtype
--> 237   return _variable_handle_from_shape_and_dtype(shape, dtype, shared_name, name,
    238                                                graph_mode, initial_value)
    239

/usr/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py in _variable_handle_from_shape_and_dtype(shape, dtype, shared_name, name, graph_mode, initial_value)
    175     handle_data.is_set = True
    176     handle_data.shape_and_type.append(
--> 177         cpp_shape_inference_pb2.CppShapeInferenceResult.HandleShapeAndType(
    178             shape=shape.as_proto(), dtype=dtype.as_datatype_enum))
    179

TypeError: Parameter to MergeFrom() must be instance of same class: expected tensorflow.TensorShapeProto got tensorflow.TensorShapeProto.
```
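As context for triaging: this TypeError typically appears when two *distinct* generated copies of the same proto class are alive in one process (for example, mismatched protobuf wheels, or a stale generated `_pb2` module shadowing the one TensorFlow bundles), since protobuf's MergeFrom rejects an instance of a class that merely shares the name. A minimal, non-TensorFlow illustration of that failure mode:

```python
def make_class():
    # Simulates two separate code-generation passes producing the
    # "same" message class under the same name.
    return type("TensorShapeProto", (), {})

A = make_class()
B = make_class()

# Same name, but different types - which is exactly what an
# isinstance-style check inside MergeFrom rejects, producing the
# confusing "expected X got X" wording.
assert A.__name__ == B.__name__
assert A is not B
assert not isinstance(A(), B)
```

Verifying that the installed `protobuf` package and `tensorflow` come from one consistent environment (no second copy on `sys.path`) is usually the first diagnostic step.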
tensorflowtensorflow
Grammatical mistakes in the documentation of tf.keras.Model.load_weights
Bug
URL(s) with the issue (please provide a link to the documentation entry): for example, the load_weights entry.

Description of issue (what needs changing): the documentation is a bit confusing, with terms like "False weights", "should be the same as when", etc. It could be refined.
tensorflowtensorflow
OSError: SavedModel file does not exist at ... error
Bug
Hello, please help. I am not much of an expert on TensorFlow. I have followed The AI Guy's tutorial and created my custom YOLOv4 weights. I cannot convert them to TensorFlow. Please help.

```
(yolov4-gpu) C:\Users\LENOVO L3\yolov4-custom-functions> python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./data/images/car2.jpg --plate
Traceback (most recent call last):
  File "detect.py", line 146, in <module>
    app.run(main)
  File "C:\Users\LENOVO L3\anaconda3\envs\yolov4-gpu\lib\site-packages\absl\app.py", line 312, in run
    _run_main(main, args)
  File "C:\Users\LENOVO L3\anaconda3\envs\yolov4-gpu\lib\site-packages\absl\app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "detect.py", line 49, in main
    saved_model_loaded = tf.saved_model.load(FLAGS.weights, tags=[tag_constants.SERVING])
  File "C:\Users\LENOVO L3\anaconda3\envs\yolov4-gpu\lib\site-packages\tensorflow\python\saved_model\load.py", line 590, in load
    return load_internal(export_dir, tags, options)
  File "C:\Users\LENOVO L3\anaconda3\envs\yolov4-gpu\lib\site-packages\tensorflow\python\saved_model\load.py", line 601, in load_internal
    loader_impl.parse_saved_model_with_debug_info(export_dir))
  File "C:\Users\LENOVO L3\anaconda3\envs\yolov4-gpu\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 56, in parse_saved_model_with_debug_info
    saved_model = _parse_saved_model(export_dir)
  File "C:\Users\LENOVO L3\anaconda3\envs\yolov4-gpu\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 113, in _parse_saved_model
    constants.SAVED_MODEL_FILENAME_PB))
OSError: SavedModel file does not exist at: ./checkpoints/custom-416/{saved_model.pbtxt|saved_model.pb}
```

I had run the command below before running the above, but I think the weights file did not convert as expected:

```
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4
```

Any help is much appreciated.
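A cheap pre-flight check before calling tf.saved_model.load is to confirm the directory actually contains a SavedModel protobuf: the loader looks for saved_model.pb or saved_model.pbtxt at the top level of the path you pass in. A small pure-Python sketch (the path in the usage note is from the traceback; everything else is generic):

```python
import os


def looks_like_saved_model(export_dir):
    # tf.saved_model.load expects the *directory*, which must contain
    # saved_model.pb or saved_model.pbtxt at its top level.
    return any(
        os.path.isfile(os.path.join(export_dir, name))
        for name in ("saved_model.pb", "saved_model.pbtxt")
    )
```

If this returns False for ./checkpoints/custom-416, then the earlier save_model.py step wrote somewhere else (note it was run with --output ./checkpoints/yolov4-416, a different directory than detect.py is loading) or failed before writing.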
tensorflowtensorflow
Colab TF 2.5: reshaping images and labels after augmentation pipeline generates error during training
Bug
system information have I write custom code adapt this tutorial scrollto ltavr 4cp1rp to run over custom implementation of model training example os platform and distribution google colab enviroment tensorflow 2 5 0 tpu gpu backend python version python 3 7 10 I have a tf data pipeline adapt to run on tpu accord to the tutorial mention above after parse the tfrecord it basically map to an augmentation pipeline model that produce 32 augment version of every input image then it tile the label and reshape the datum transform it into batch training run fine untill step 249 from first epoch when reshape generate an error the request shape be equal to 1233125376 which correspond to batch size x num generate image augmentation x image dimension namely 32x8 x 32 x 224 x 224 x 3 whereas the input tensor which be equal to 125239296 actually correspond to 26 x 32 x 224 x 224 x 3 I don t have a clue why this be happen error python epoch 1 30 249 2007 eta 32 46 loss 14 8993 class 1 loss 5 9974 class 2 loss 5 1770 class 3 loss 3 7249 class 1 categorical accuracy 0 0472 class 2 categorical accuracy 0 0721 class 3 categorical accuracy 0 1524 invalidargumenterror traceback most recent call last in 4 5 resnet 50v2 summary 6 history train model train path validation path buffer size epoch step per epoch resnet 50v2 7 plot training history3 history 8 invalidargumenterror 2 root error s find 0 invalid argument function node inference train function 214827 input to reshape be a tensor with 125239296 value but the request shape have 1233125376 node reshape multideviceiteratorgetnextfromshard remotecall iteratorgetnext 1 invalid argument function node inference train function 214827 input to reshape be a tensor with 125239296 value but the request shape have 1233125376 node reshape multideviceiteratorgetnextfromshard remotecall iteratorgetnext cluster train function execute 2 0 59 0 successful operation 7 derive error ignore python def train model train path validation path buffer size 
epoch step per epoch model datum augmentation preprocesse model train filename get filenamestpu train path random shuffle train filename validation filename get filenamestpu validation path random shuffle validation filename dataset length 91758 train size dataset length 0 7 validation size dataset length train size batch size 32 tpu strategy num replicas in sync 32 x 8 tpu arg 0 32 batch size datum reshape lambda x y tf reshape x shape tpu arg 0 224 224 3 tf reshape y 0 shape tpu arg 0 1000 tf reshape y 1 shape tpu arg 0 516 tf reshape y 2 shape tpu arg 0 124 augmentation pipeline lambda x y datum augmentation tf expand dim x axis 0 tf tile tf reshape y 0 1 1000 32 1 tf tile tf reshape y 1 1 516 32 1 tf tile tf reshape y 2 1 124 32 1 auto tf datum autotune train dataset tf datum tfrecorddataset buffer size int 1e 8 num parallel read auto filename train filename map parse fn num parallel call auto shuffle buffer size buffer size reshuffle each iteration true train dataset train dataset map augmentation pipeline num parallel call auto batch batch size train dataset train dataset map datum reshape num parallel call auto train dataset train dataset repeat train dataset train dataset prefetch auto create a validation dataset validation dataset tf datum tfrecorddataset num parallel read auto filename validation filename map parse fn num parallel call auto validation dataset validation dataset map augmentation pipeline num parallel call auto batch batch size map datum reshape num parallel call auto validation dataset validation dataset prefetch auto validation dataset validation dataset repeat 1 validation step validation size batch size history model fit x train dataset epoch epoch step per epoch step per epoch validation datum validation dataset validation step validation step return history training fn python loss class1 categoricalcrossentropy class2 categoricalcrossentropy class3 categoricalcrossentropy metric categorical accuracy optimizer adam lr 5e 3 weight file 
none with tpu strategy scope resnet 50v2 load and configure model optimizer loss metric weight file base directory gs 2015 tfrecord train path base directory train validation path base directory validation buffer size 10240 epoch 30 step per epoch 2007 resnet 50v2 summary history train model train path validation path buffer size epoch step per epoch resnet 50v2 plot training history3 history I also accept suggestion on how to vectorize the datum augmentation pipeline since when I input a batch of image instead of a single tensor it run on parallel make the generate image come in a random sequence hence make tile the label unfeasible
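The two element counts in the error are consistent with a final partial batch rather than with the augmentation step itself: the reshape target assumes a full global batch of 256 inputs (32 per replica x 8 replicas), while the tensor actually delivered at step 249 holds only 26 inputs. The arithmetic, using the image and augmentation sizes from the post:

```python
IMG = 224 * 224 * 3   # elements per augmented image
AUG = 32              # augmented copies generated per input image

requested = 256 * AUG * IMG   # reshape target for a full global batch
actual = 26 * AUG * IMG       # what the failing batch actually held

assert requested == 1233125376   # the "requested shape" in the error
assert actual == 125239296       # the "input to reshape" in the error
```

If that diagnosis holds, passing drop_remainder=True to the batch() call (or making the dataset size divisible by the global batch size) is the usual fix, since a reshape with hard-coded dimensions cannot absorb a smaller final batch.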
tensorflowtensorflow
ValueError: Python inputs incompatible with input_signature
Bug
Environment: TF 2.2, Linux, 2080 Ti, CUDA 10.2.

I used tf.keras.Model and tf.GradientTape to build my custom BERT-CRF model, and now I want to use TensorFlow Model Serving to deploy the model, so the first thing I need to do is save the model as a .pb file. I tried to use tf.saved_model.save to save the model, but I ran into a dtype problem: in TF 2.2 I cannot change the dtype of a tensor to tf.int32 — whether I use constant, cast, or convert_to_tensor — and Variable cannot change dtype to tf.int32 either; their output forms are changed to int32. I saw on the internet that you can set @tf.function with tf.TensorSpec before the call function, but I still ran into the problem that the dtype could not be converted to tf.int32. What I want to ask is: if you use tf.keras.Model to build your own model, how can you save the model as .pb? Or can you tell me how to change a tensor to tf.int32 in TF 2.2?

Model:

```python
class MyBertCrf(tf.keras.Model):
    def __init__(self, use_crf, input_dim, output_dim):
        super(MyBertCrf, self).__init__()
        self.use_crf = use_crf
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.bert = TFBertModel.from_pretrained('hfl/chinese-bert-wwm-ext')
        self.dropout = tf.keras.layers.Dropout(0.3)
        self.dense = tf.keras.layers.Dense(self.output_dim)
        self.other_params = tf.Variable(tf.random.uniform(shape=(output_dim, output_dim)))

    @tf.function(input_signature=[
        tf.TensorSpec([None, 128], name='ids', dtype=tf.int32),
        tf.TensorSpec([None, 128], name='masks', dtype=tf.int32),
        tf.TensorSpec([None, 128], name='tokens', dtype=tf.int32),
        tf.TensorSpec([None, 128], name='target', dtype=tf.int32),
        tf.TensorSpec([1], name='input_seq_len', dtype=tf.int32)])
    def call(self, ids, masks, tokens, target, input_seq_len):
        hidden = self.bert(ids, masks, tokens)[0]
        dropout_inputs = self.dropout(hidden, 1)
        logistic_seq = self.dense(dropout_inputs)
        print(hidden)
        print(ids, masks, tokens, target, input_seq_len)
        if self.use_crf:
            log_likelihood, self.other_params = tfa.text.crf.crf_log_likelihood(
                logistic_seq, target, input_seq_len, self.other_params)
            decode_predict, crf_scores = tfa.text.crf_decode(
                logistic_seq, self.other_params, input_seq_len)
            return decode_predict, log_likelihood, crf_scores
        else:
            prob_seq = tf.nn.softmax(logistic_seq)
            return prob_seq, None, None
```

Error raised:

```
ValueError: Python inputs incompatible with input_signature:
  inputs: (
    tf.Tensor([[ 101 6163 1906 ...    0    0    0] ... [ 101 2769 2682 ...    0    0    0]], shape=(16, 128), dtype=int32),
    tf.Tensor([[1 1 1 ... 0 0 0] ... [1 1 1 ... 0 0 0]], shape=(16, 128), dtype=int32),
    tf.Tensor([[0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0]], shape=(16, 128), dtype=int32),
    tf.Tensor([[2 2 2 ... 0 0 0] ... [1 1 1 ... 0 0 0]], shape=(16, 128), dtype=int32),
    tf.Tensor([19 19 28 28 21 27 19 20 32 22 23 29 27 21 29 25], shape=(16,), dtype=int32))
  input_signature: (
    TensorSpec(shape=(None, 128), dtype=tf.int32, name='ids'),
    TensorSpec(shape=(None, 128), dtype=tf.int32, name='masks'),
    TensorSpec(shape=(None, 128), dtype=tf.int32, name='tokens'),
    TensorSpec(shape=(None, 128), dtype=tf.int32, name='target'),
    TensorSpec(shape=(1,), dtype=tf.int32, name='input_seq_len'))

Process finished with exit code 1
```
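Judging from the log, the mismatch is in the last argument: call is traced with an input_seq_len of shape (16,) — one length per example in the batch of 16 — while the signature pins that input to shape [1]. A tiny model of the compatibility rule TensorFlow applies (None matches any size; fixed sizes must agree exactly); this is an illustration of the rule, not TensorFlow source:

```python
def spec_matches(spec_shape, actual_shape):
    # Mirrors TensorSpec compatibility: same rank, and every fixed
    # dimension must agree; None acts as a wildcard.
    if len(spec_shape) != len(actual_shape):
        return False
    return all(dim is None or dim == actual
               for dim, actual in zip(spec_shape, actual_shape))


assert not spec_matches([1], [16])           # the failing input_seq_len spec
assert spec_matches([None], [16])            # a batch-sized lengths vector
assert spec_matches([None, 128], [16, 128])  # the other four inputs
```

So, assuming the sequence lengths really are per-example, declaring tf.TensorSpec([None], name='input_seq_len', dtype=tf.int32) should let the same traced function accept any batch size.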
tensorflowtensorflow
The Phi definition of GELU is incorrect
Bug
URL(s) with the issue:

Description of issue (what needs changing / clear description): currently, the definition of Phi(x) in the GELU document is the following:

\Phi(x) = \frac{x}{2}\left(1 + \tanh\left(\sqrt{\frac{2}{\pi}} \cdot (x + 0.044715 \cdot x^3)\right)\right)

But it seems that this is the definition of gelu(x), not of Phi(x), in the original paper. I expect the definition of Phi(x) to be the following:

\Phi(x) = \frac{1}{2}\left(1 + \tanh\left(\sqrt{\frac{2}{\pi}} \cdot (x + 0.044715 \cdot x^3)\right)\right)

Or the document should define gelu(x) directly when approximate is True, like the following:

\mathrm{gelu}(x) = \frac{x}{2}\left(1 + \tanh\left(\sqrt{\frac{2}{\pi}} \cdot (x + 0.044715 \cdot x^3)\right)\right)
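The claim is easy to verify numerically: the expression with the x/2 factor matches x * Phi(x) — i.e. the GELU itself — and not the standard normal CDF Phi(x). A quick check against the erf-based CDF:

```python
import math


def phi(x):
    # Exact standard normal CDF.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def docs_formula(x):
    # The expression currently labelled Phi(x) in the docs
    # (the tanh approximation with the x/2 prefactor).
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))


x = 0.7
assert abs(docs_formula(x) - x * phi(x)) < 1e-3  # matches gelu(x) = x * Phi(x)
assert abs(docs_formula(x) - phi(x)) > 0.2       # clearly not Phi(x) itself
```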
tensorflowtensorflow
API docs: tf.keras.Sequential.predict_classes is deprecated
Bug
predict_classes: tf.keras.Sequential.predict_classes is deprecated. There should be a deprecation sign, similar to Model.predict_generator's API doc.

sequential.py#L441-L468:

```python
warnings.warn('`model.predict_classes()` is deprecated and '
              'will be removed after 2021-01-01. '
              'Please use instead:'
              '* `np.argmax(model.predict(x), axis=-1)`, '
              '  if your model does multi-class classification '
              '  (e.g. if it uses a `softmax` last-layer activation).'
              '* `(model.predict(x) > 0.5).astype("int32")`, '
              '  if your model does binary classification '
              '  (e.g. if it uses a `sigmoid` last-layer activation).')
```

(image)

cc @lamberta @markdaoust
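For the docs entry, the replacement snippets from the warning behave as follows on toy model outputs (NumPy only; the probability values are made up for illustration):

```python
import numpy as np

# Multi-class (softmax) model output: argmax over the last axis
# replaces predict_classes.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])
assert np.argmax(probs, axis=-1).tolist() == [1, 0]

# Binary (sigmoid) model output: threshold at 0.5.
sigmoid_out = np.array([[0.8], [0.2]])
assert (sigmoid_out > 0.5).astype("int32").tolist() == [[1], [0]]
```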
tensorflowtensorflow
Softmax layer: unexpected and confusing error message
Bug
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0
- Python version: 3.8.5

Describe the current behavior: I am getting a strange error message for the following model. The error seems to originate from the Softmax layer at the end. The message is the following:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 2 and 4 for node sof43628/add (AddV2, T=DT_DOUBLE) with inputs Placeholder, sof43628/mul and input shapes: [?,4,2,1], [?,4,2].
```

The error is unexpected, since I think the model should work, and it is confusing, since in the Softmax layer there is no requirement for matching dimensions. What could be the reason for this error?

Standalone code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

in0mas92349 = tf.keras.layers.Input(shape=[4, 2, 1])
in0dep9315 = tf.keras.layers.Input(shape=[1, 2, 1])
in0ave93999 = tf.keras.layers.Input(shape=[2, 1])
mas92349 = keras.layers.Masking(mask_value=1, name='mas92349')(in0mas92349)
dep9315 = keras.layers.DepthwiseConv2D((1, 2), strides=(1, 1), padding='same', name='dep9315')(in0dep9315)
zer31495 = keras.layers.ZeroPadding2D(padding=((3, 0), (0, 0)), name='zer31495')(dep9315)
max17507 = keras.layers.Maximum(name='max17507')([mas92349, zer31495])
ave93999 = keras.layers.AveragePooling1D(pool_size=2, name='ave93999')(in0ave93999)
res89 = keras.layers.Reshape((1, 1, 1), name='res89')(ave93999)
zer84427 = keras.layers.ZeroPadding2D(padding=((3, 0), (1, 0)), name='zer84427')(res89)
sub42930 = keras.layers.Subtract(name='sub42930')([max17507, zer84427])
sof43628 = keras.layers.Softmax(axis=1, name='sof43628')(sub42930)
model = tf.keras.Model(inputs=[in0mas92349, in0dep9315, in0ave93999], outputs=[sof43628])

in0mas92349 = tf.constant([[[[1.8457], [1.4313]], [[1.8795], [1.1303]],
                            [[1.4792], [1.9199]], [[1.996], [1.583]]]])
in0dep9315 = tf.constant([[[[0.6111], [0.9861]]]])
in0ave93999 = tf.constant([[[1.1779], [1.8115]]])
print(np.array2string(model.predict([in0mas92349, in0dep9315, in0ave93999], steps=1), separator=', '))
```
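One hedged reading of the error: the Masking layer attaches a mask that propagates through Maximum and Subtract to the Softmax layer, and when Keras' Softmax receives a mask it adds a large negative bias to the masked positions before normalizing — that hidden add is where shapes can disagree, which would explain why a layer with no obvious dimension requirement complains about dimensions (note the failing node is named sof43628/add). A NumPy sketch of that masked-softmax mechanism (an illustration, not Keras source):

```python
import numpy as np


def masked_softmax(logits, mask, axis=1):
    # Masked positions receive a large negative additive bias, so they
    # end up with ~zero probability; the bias must broadcast against
    # the logits, which is where a shape mismatch would surface.
    adder = (1.0 - mask.astype(logits.dtype)) * -1e9
    z = logits + adder  # <- corresponds to the 'add' node in the error
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


logits = np.zeros((2, 3))
mask = np.array([[1, 1, 0],
                 [1, 1, 1]])
out = masked_softmax(logits, mask, axis=1)
assert np.allclose(out.sum(axis=1), 1.0)
assert out[0, 2] < 1e-6  # masked entry pushed to ~zero probability
```

If that is indeed the cause here, the mask produced for a [?,4,2,1] input does not line up with the softmax inputs, and the shape mismatch is in the mask path rather than in the data path.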
tensorflowtensorflow
TensorFlow 2.x model training indefinitely in graph mode
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): `pip install tensorflow`
- Python version: Python 3.9.5
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
I am training the model in both eager execution mode and graph mode. The model trains well in eager execution mode; however, it runs indefinitely in graph mode. I have tried to debug this in multiple ways, with no success.

**Describe the expected behavior**
The model should train the same way in both eager execution and graph mode.

**Contributing — Do you want to contribute a PR? (yes/no):**
Briefly describe your candidate solution (if contributing):

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook.

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # comment out to enable eager execution mode
print("INFO: eager mode:", tf.executing_eagerly())


# For easy reset of notebook state.
class CustomModelV2(tf.keras.Model):
    def __init__(self):
        super(CustomModelV2, self).__init__()
        self.encoder = Encoder(32)
        self.encoder.build(input_shape=(None, 32))
        self.loss_tracker = tf.keras.metrics.Mean(name="loss")

    def call(self, inputs, training=False):
        return self.encoder(inputs, training)

    @property
    def metrics(self):
        # We list our metric objects here so that reset_states() can be called
        # automatically at the start of each epoch, or at the start of evaluate().
        # If you don't implement this property, you have to call reset_states()
        # yourself at the time of your choosing.
        return [self.loss_tracker]

    @tf.function
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and on what you
        # pass to fit().
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self.call(x, training=True)  # forward pass
            # Compute the loss value (the loss function is configured in compile()).
            r_loss = tf.keras.losses.mean_squared_error(y, y_pred)
            loss = r_loss
        # Compute gradients.
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights.
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss).
        self.loss_tracker.update_state(loss)
        # Return a dict mapping metric names to current value.
        return {"loss": self.loss_tracker.result()}


class Encoder(tf.keras.Model):
    def __init__(self, input_size):
        super(Encoder, self).__init__(name="encoder")
        self.input_layer = DenseLayer(128, input_size, 0.0, 0.0, "float32")
        self.hidden_layer1 = DenseLayer(128, 128, 0.001, 0.0, "float32")
        self.dropout_layer1 = tf.keras.layers.Dropout(0.2)
        self.hidden_layer2 = DenseLayer(64, 128, 0.001, 0.0, "float32")
        self.dropout_layer2 = tf.keras.layers.Dropout(0.2)
        self.hidden_layer3 = DenseLayer(64, 64, 0.001, 0.0, "float32")
        self.dropout_layer3 = tf.keras.layers.Dropout(0.2)
        self.output_layer = LinearLayer(64, 64, 0.001, 0.0, "float32")

    def call(self, input_data, training=False):
        fx = self.input_layer(input_data)
        fx = self.hidden_layer1(fx)
        if training:
            fx = self.dropout_layer1(fx)
        fx = self.hidden_layer2(fx)
        if training:
            fx = self.dropout_layer2(fx)
        fx = self.hidden_layer3(fx)
        if training:
            fx = self.dropout_layer3(fx)
        return self.output_layer(fx)


class LinearLayer(tf.keras.layers.Layer):
    def __init__(self, units, input_dim, weight_regularizer, bias_regularizer, d_type):
        super(LinearLayer, self).__init__()
        self.w = self.add_weight(
            name="w_linear",
            shape=(input_dim, units),
            initializer=tf.keras.initializers.RandomUniform(
                minval=-tf.cast(tf.math.sqrt(6 / (input_dim + units)), dtype=d_type),
                maxval=tf.cast(tf.math.sqrt(6 / (input_dim + units)), dtype=d_type),
                seed=16751),
            regularizer=tf.keras.regularizers.l1(weight_regularizer),
            trainable=True)
        self.b = self.add_weight(
            name="b_linear",
            shape=(units,),
            initializer=tf.zeros_initializer(),
            regularizer=tf.keras.regularizers.l1(bias_regularizer),
            trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b


class DenseLayer(tf.keras.layers.Layer):
    def __init__(self, units, input_dim, weight_regularizer, bias_regularizer, d_type):
        super(DenseLayer, self).__init__()
        self.w = self.add_weight(
            name="w_dense",
            shape=(input_dim, units),
            initializer=tf.keras.initializers.RandomUniform(
                minval=-tf.cast(tf.math.sqrt(6.0 / (input_dim + units)), dtype=d_type),
                maxval=tf.cast(tf.math.sqrt(6.0 / (input_dim + units)), dtype=d_type),
                seed=16751),
            regularizer=tf.keras.regularizers.l1(weight_regularizer),
            trainable=True)
        self.b = self.add_weight(
            name="b_dense",
            shape=(units,),
            initializer=tf.zeros_initializer(),
            regularizer=tf.keras.regularizers.l1(bias_regularizer),
            trainable=True)

    def call(self, inputs):
        x = tf.matmul(inputs, self.w) + self.b
        return tf.nn.elu(x)


# Just use fit as usual.
x = tf.data.Dataset.from_tensor_slices(np.random.random((5000, 32)))
y_numpy = np.random.random((5000, 1))
y = tf.data.Dataset.from_tensor_slices(y_numpy)

x_window = x.window(30, shift=10, stride=1)
flat_x = x_window.flat_map(lambda t: t)
flat_x_scaled = flat_x.map(lambda t: t * 2)

y_window = y.window(30, shift=10, stride=1)
flat_y = y_window.flat_map(lambda t: t)
flat_y_scaled = flat_y.map(lambda t: t * 2)

z = tf.data.Dataset.zip((flat_x_scaled, flat_y_scaled)) \
    .batch(32).cache().shuffle(buffer_size=32) \
    .prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

# Stopping criterion: if the training loss doesn't go down by 1e-3, early stop.
early_stop_cb = tf.keras.callbacks.EarlyStopping(
    monitor="loss", min_delta=1e-3, verbose=1, mode="min",
    patience=3, baseline=None, restore_best_weights=True)

# Construct and compile an instance of CustomModel.
model = CustomModelV2()
model.compile(optimizer=tf.optimizers.Adagrad(0.01))
history = model.fit(z, epochs=3, callbacks=[early_stop_cb])
```

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Output in graph mode:

```
WARNING:tensorflow:Output output_1 missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to output_1.
WARNING:tensorflow:From C:\Users\jain432\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\optimizer_v2\adagrad.py:87: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor.
Train on None steps
Epoch 1/3
 479916/Unknown - 667s 1ms/step - batch: 239957.5000 - size: 1.0000 - loss: 2.1716e-04
```

Output in eager mode:

```
Epoch 1/3
468/468 - 2s 3ms/step - loss: 0.4173
Epoch 2/3
468/468 - 1s 3ms/step - loss: 0.3695
Epoch 3/3
468/468 - 1s 3ms/step - loss: 0.3608
```
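The issue's training setup relies on an early-stopping callback (`monitor="loss"`, `min_delta=1e-3`, `patience=3`). As a framework-agnostic illustration of the bookkeeping such a callback performs (plain Python, not the Keras implementation, and ignoring details like weight restoration), the stopping rule can be sketched as:

```python
def early_stop_epochs(losses, min_delta=1e-3, patience=3):
    """Return the number of epochs actually run before early stopping.

    Training stops after `patience` consecutive epochs in which the loss
    did not improve on the best value seen so far by at least `min_delta`.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best - min_delta:
            best = loss      # real improvement: remember it and reset patience
            wait = 0
        else:
            wait += 1        # no meaningful improvement this epoch
            if wait >= patience:
                return epoch  # stop here
    return len(losses)        # never triggered: all epochs ran

# The loss stalls from epoch 3 on, so training stops at epoch 5.
print(early_stop_epochs([0.5, 0.4, 0.3995, 0.3993, 0.3991, 0.399]))
```

With `min_delta=1e-3`, improvements smaller than 0.001 per epoch count as stalling, which is why a slowly converging graph-mode run can be cut off while an eager run with larger per-epoch improvements keeps going.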
tensorflow/tensorflow
assign_add not working on GPU
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Linux Ubuntu 16.04
- TensorFlow version: 2.5.0
- Python version: 3.9
- CUDA/cuDNN version: 11.2
- GPU model and memory: 4x GeForce 1080 Ti

**Describe the current behavior**
I have implemented a patch for optimizers which should perform simple gradient accumulation (see link on Stack Overflow). The interesting observation here is that using just `assign`, the implementation seems to work on CPU as well as on GPU. Obviously this would not accumulate gradients and is therefore pointless. However, using `assign_add`, as would be required for GA, the model does not converge anymore if I run the training with one GPU. I have also tried to work around this by computing `acc_grad[i].assign(acc_grad[i] + new_grad[i] / n)`, but this is not working either. [image]

**Describe the expected behavior**
If `assign_add` worked here, the model should do at least something, but in the good case start to converge.

**Standalone code to reproduce the issue**
It's not entirely standalone, but the code below would be used as follows:

```python
model.build(...)
optimizer = get_patched_optimizer(optimizer, n, model.trainable_variables)
model.compile(optimizer=optimizer)
```

where we have:

```python
class GradientAccumulationPatch:
    def __init__(self, n: int, orig_apply_gradients, trainable_variables):
        self.n = tf.constant(n, dtype=tf.int64)
        policy = tf.keras.mixed_precision.global_policy()
        self.variable_dtype = policy.variable_dtype
        self.accu_gradients = [tf.Variable(tf.zeros(g.shape, dtype=g.dtype))
                               for g in trainable_variables]
        self.current_step = tf.Variable(0, dtype=tf.int64)
        self.orig_apply_gradients = orig_apply_gradients

    def apply_gradients(self, grads_and_vars, *args, **kwargs):
        can_apply = self.can_apply_on_next_step()
        # 1.0 whenever we want to apply gradients, 0.0 otherwise
        apply = tf.cast(can_apply, dtype=self.variable_dtype)
        # will be 0.0 if apply is 1.0, and vice versa
        keep = tf.cast(tf.logical_not(can_apply), dtype=self.variable_dtype)

        grads_and_vars = list(grads_and_vars)
        gradients = [grad for grad, _ in grads_and_vars]
        trainable_variables = [var for _, var in grads_and_vars]

        # accumulate gradients
        for i, grad in enumerate(gradients):
            # FIXME: should be assign_add!
            self.accu_gradients[i].assign(grad / tf.cast(self.n, dtype=grad.dtype))

        # multiply each gradient with our apply signal
        final_gradients = [grad * apply for grad in self.accu_gradients]
        self.orig_apply_gradients(zip(final_gradients, trainable_variables),
                                  *args, **kwargs)

        # this will reset our buffers whenever keep is 0.0
        for g in self.accu_gradients:
            g.assign(g * keep)

    def apply_accu_gradients(self, trainable_variables, *args, **kwargs):
        # call the original apply_gradients function
        self.orig_apply_gradients(zip(self.accu_gradients, trainable_variables),
                                  *args, **kwargs)
        # reset all accumulated gradients to zero
        for i in range(len(self.accu_gradients)):
            self.accu_gradients[i].assign(tf.zeros_like(trainable_variables[i]))

    def can_apply_on_next_step(self):
        """Returns True if gradients should be applied, False otherwise."""
        # increment; always do this first
        self.current_step.assign_add(1)
        count_mod_steps = tf.math.mod(self.current_step, self.n)
        return tf.equal(count_mod_steps, 0)


def get_patched_optimizer(optimizer, n, trainable_variables):
    """Patch optimizer for gradient accumulation.

    :param optimizer: The optimizer to patch.
    :param n: The number of accumulation steps before applying gradients.
    :param trainable_variables: Trainable parameters of the model.
    :return: A patched optimizer for gradient accumulation.
    """
    accumulator = GradientAccumulationPatch(
        n=n,
        orig_apply_gradients=optimizer.apply_gradients,
        trainable_variables=trainable_variables)
    # replace the original function
    optimizer.apply_gradients = accumulator.apply_gradients
    return optimizer
```

**Update:** I have now tried this on MNIST and it appears to work for that example. Any idea why this might not be working in my example above?

<details><summary>MNIST example (click me)</summary>

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


class GradientAccumulationPatch:
    def __init__(self, n: int, orig_apply_gradients, trainable_variables):
        self.n = tf.constant(n, dtype=tf.int64)
        policy = tf.keras.mixed_precision.global_policy()
        self.variable_dtype = policy.variable_dtype
        self.accu_gradients = [tf.Variable(tf.zeros(g.shape, dtype=g.dtype))
                               for g in trainable_variables]
        self.current_step = tf.Variable(0, dtype=tf.int64)
        self.orig_apply_gradients = orig_apply_gradients

    def apply_gradients(self, grads_and_vars, *args, **kwargs):
        can_apply = self.can_apply_on_next_step()
        # 1.0 whenever we want to apply gradients, 0.0 otherwise
        apply = tf.cast(can_apply, dtype=self.variable_dtype)
        # will be 0.0 if apply is 1.0, and vice versa
        keep = tf.cast(tf.logical_not(can_apply), dtype=self.variable_dtype)

        grads_and_vars = list(grads_and_vars)
        gradients = [grad for grad, _ in grads_and_vars]
        trainable_variables = [var for _, var in grads_and_vars]

        # accumulate gradients
        for i, grad in enumerate(gradients):
            self.accu_gradients[i].assign_add(
                grad / tf.cast(self.n, dtype=self.variable_dtype))

        # multiply each gradient with our apply signal
        final_gradients = [grad * apply for grad in self.accu_gradients]
        self.orig_apply_gradients(zip(final_gradients, trainable_variables),
                                  *args, **kwargs)

        # this will reset our buffers whenever keep is 0.0
        for g in self.accu_gradients:
            g.assign(g * keep)

    def apply_accu_gradients(self, trainable_variables, *args, **kwargs):
        # call the original apply_gradients function
        self.orig_apply_gradients(zip(self.accu_gradients, trainable_variables),
                                  *args, **kwargs)
        # reset all accumulated gradients to zero
        for i in range(len(self.accu_gradients)):
            self.accu_gradients[i].assign(tf.zeros_like(trainable_variables[i]))

    def can_apply_on_next_step(self):
        """Returns True if gradients should be applied, False otherwise."""
        # increment; always do this first
        self.current_step.assign_add(1)
        count_mod_steps = tf.math.mod(self.current_step, self.n)
        return tf.equal(count_mod_steps, 0)


def get_patched_optimizer(optimizer, n, trainable_variables):
    """Patch optimizer for gradient accumulation.

    :param optimizer: The optimizer to patch.
    :param n: The number of accumulation steps before applying gradients.
    :param trainable_variables: Trainable parameters of the model.
    :return: A patched optimizer for gradient accumulation.
    """
    accumulator = GradientAccumulationPatch(
        n=n,
        orig_apply_gradients=optimizer.apply_gradients,
        trainable_variables=trainable_variables)
    # replace the original function
    optimizer.apply_gradients = accumulator.apply_gradients
    return optimizer


def get_ffn_model(input_size: int, output_size: int, hidden_size: int = 64) -> keras.Model:
    input_layer = layers.Input(shape=input_size)
    x = input_layer
    x = layers.Dense(units=hidden_size, activation="tanh")(x)
    x = layers.Dense(units=hidden_size, activation="tanh")(x)
    x = layers.Dense(units=output_size, activation="softmax")(x)
    return keras.Model(inputs=input_layer, outputs=x)


def make_dataset(inputs, targets, batch_size: int):
    def sample_generator():
        while True:
            idx = np.random.randint(0, len(inputs))
            yield inputs[idx].flatten(), tf.one_hot(targets[idx], depth=num_classes)

    inputs = inputs.astype(np.float32) / 255.0
    inputs = np.expand_dims(inputs, axis=-1)
    num_classes = len(set(targets))
    input_shape = (np.prod(inputs[0].shape),)
    target_shape = (num_classes,)
    return tf.data.Dataset.from_generator(
        lambda: sample_generator(),
        output_types=(tf.float32, tf.float32),
        output_shapes=(input_shape, target_shape),
    ).padded_batch(batch_size)


def main():
    train_batch_size = 1
    valid_batch_size = 10
    grad_acc_n = 10
    steps_per_epoch = 1000 * grad_acc_n  # make sure we have the same number of updates

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    train_data = make_dataset(x_train, y_train, batch_size=train_batch_size)
    valid_data = make_dataset(x_test, y_test, batch_size=valid_batch_size)

    input_size = train_data.element_spec[0].shape[-1]
    output_size = train_data.element_spec[1].shape[-1]

    model = get_ffn_model(input_size=input_size, output_size=output_size,
                          hidden_size=128)
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.0005)
    optimizer = get_patched_optimizer(optimizer, n=grad_acc_n,
                                      trainable_variables=model.trainable_variables)

    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_data,
              epochs=10,
              steps_per_epoch=steps_per_epoch // train_batch_size,
              validation_data=valid_data,
              validation_steps=10)


if __name__ == "__main__":
    main()
```

</details>
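The intended semantics of the patch above can be checked independently of TensorFlow. This is a plain-Python sketch (scalar "gradients", no tf.Variable) of what `assign_add(grad / n)` followed by an apply-and-reset every `n` steps should produce:

```python
def accumulated_updates(grads, n):
    """Average each run of n per-step gradients into one applied update.

    grads: per-step scalar gradients; len(grads) must be a multiple of n.
    Returns the list of updates that would actually be applied.
    """
    updates = []
    acc = 0.0
    for step, g in enumerate(grads, start=1):
        acc += g / n          # the assign_add(grad / n) accumulation step
        if step % n == 0:     # can_apply_on_next_step: every n-th step
            updates.append(acc)
            acc = 0.0         # reset the accumulator buffer
    return updates

# Two micro-batches per update: the applied updates are the running means.
print(accumulated_updates([1.0, 2.0, 3.0, 4.0], n=2))  # [1.5, 3.5]
```

If `assign` is used instead of `assign_add`, only the last micro-batch's `grad / n` survives, which matches the "works but is pointless" observation in the report.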
tensorflow/tensorflow
How to convert a known tensor?
Bug
Hi, in distributed training I must use a `for` loop to build a matrix by concatenation, but I cannot convert the result to a constant tensor. The code below reproduces the bug:

```python
import tensorflow as tf
import numpy as np

np.random.seed(123)
tf.random.set_seed(123)


class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()

    def call(self, inputs):
        @tf.function(autograph=True)
        def f():
            v = tf.constant([0])
            for i in tf.range(3):
                tf.autograph.experimental.set_loop_options(
                    shape_invariants=[(v, tf.TensorShape([None]))])
                v = tf.concat([v, [i]], 0)
            return v

        print(tf.constant(f()))  # bug
        return tf.math.reduce_sum(f())


strategy = tf.distribute.MirroredStrategy(
    devices=["device:GPU:%d" % i for i in range(2)],
    cross_device_ops=None)

train_dataset = tf.data.Dataset.from_tensor_slices(
    np.random.randint(0, 12, size=(100, 3)).astype(np.int32)).shuffle(10).batch(4)
train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)

with strategy.scope():
    model = MyModel()


def train_step(inputs):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
    return predictions


@tf.function
def distributed_train_step(dataset_inputs):
    per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)


p = 0
for x in train_dist_dataset:
    p = distributed_train_step(x)
```

I want to convert the known tensor to a constant. How do I deal with this? Thanks.
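For reference, the value the traced loop computes is data-independent: `v` starts as `[0]` and appends the loop index on each iteration. A plain-Python emulation (an illustration, not TensorFlow code) makes the expected value explicit; the reason `tf.constant(f())` fails inside the traced function is that there `v` is a symbolic tensor with no concrete value at trace time, whereas the Python-level version has one:

```python
def f_traced():
    # Plain-Python equivalent of the traced loop body:
    # v starts as [0] and each iteration appends the loop index 0, 1, 2.
    v = [0]
    for i in range(3):
        v = v + [i]   # the tf.concat([v, [i]], 0) step
    return v

print(f_traced())  # the value the graph would compute: [0, 0, 1, 2]
```

In graph mode the usual options are to compute such loop-free constants in Python before tracing, or to keep the result symbolic and only materialize it (e.g. via `.numpy()`) outside the `tf.function`.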
tensorflow/tensorflow
API docs: tf.Tensor — incoherent sentence
Bug
**URL(s) with the issue:**

**Description of issue (what needs changing):** The first code snippet at the URL comes in the middle of a sentence, which makes it incoherent.

**Submit a pull request?** I am new to open source and wish to fix it myself. Can you please direct me to the file wherein the "View aliases" and code snippet are included? In the definition given at L264, removing the newline character after the first line in the multiline comment would solve the issue.
tensorflow/tensorflow
TFLite converts split of int64 tensor but fails to evaluate the conversion
Bug
**System information**
- OS Platform and Distribution: macOS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from: binary
- TensorFlow version: 2.5.0
- Python version: 3.8

**Standalone code to reproduce the issue**

```python
def test_split_export():
    @tf.function
    def test_fn(inp: tf.Tensor) -> tf.Tensor:
        return tf.concat(tf.split(inp, 3, axis=1), axis=1)

    concrete_f = test_fn.get_concrete_function(tf.TensorSpec([None, 3], tf.int64))
    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_f])
    tflite_model = converter.convert()
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
```

**Describe the current behavior**
The example converts fine, but attempting to instantiate evaluation breaks in `allocate_tensors` with the error:

```
E RuntimeError: tensorflow/lite/kernels/split.cc:90 input_type == kTfLiteFloat32 || input_type == kTfLiteUInt8 || input_type == kTfLiteInt8 || input_type == kTfLiteInt16 || input_type == kTfLiteInt32 was not true. Node number 0 (SPLIT) failed to prepare.
```

**Describe the expected behavior**
TFLite 1 does not produce this error. Ideally the behavior should be identical to TFLite 1; I think it down-converted all int64s to int32. If TFLite does not do down-conversion, then split should be able to support int64 tensors. Thank you.
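The traced function is just a split-then-concat round trip along axis 1, so for any dtype it should be the identity. A plain-Python sketch of that round trip (lists standing in for a 2-D tensor; an illustration, not the TFLite kernel) shows why the op's result is dtype-independent even though the TFLite SPLIT kernel rejects int64 inputs:

```python
def split_concat_roundtrip(rows, parts):
    """Split each row into `parts` equal chunks along axis 1, then
    concatenate the chunks back along axis 1 (what the tf.function does)."""
    n = len(rows[0]) // parts
    # chunks[i] is the i-th column-slice of every row
    chunks = [[row[i * n:(i + 1) * n] for row in rows] for i in range(parts)]
    out = []
    for r in range(len(rows)):
        joined = []
        for c in chunks:
            joined.extend(c[r])   # re-concatenate along axis 1
        out.append(joined)
    return out

# Round trip is the identity, regardless of element type.
print(split_concat_roundtrip([[1, 2, 3], [4, 5, 6]], 3))  # [[1, 2, 3], [4, 5, 6]]
```

A practical workaround until int64 is supported would be to cast the input to int32 before the split, assuming the values fit in 32 bits.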
tensorflow/tensorflow
Update prefetching_ops.py
Bug
Adds an example of using `prefetch_to_device`. Fixes
tensorflow/tensorflow
Image transformations require SciPy, but SciPy is already installed
Bug
Hello, I'm running TensorFlow 2.4.1 on a Windows 10 computer (system info below). I'm having problems trying to launch a CNN model using the `ImageDataGenerator`, fed from a pandas DataFrame. After defining the data generator and the model, I get the following error when running `model.fit`:

```
Image transformations require SciPy. Install SciPy.
```

However, I can confirm that SciPy is installed, as running `import scipy` raises no error.

The image generator is defined as follows:

```python
train_datagen = ImageDataGenerator(
    horizontal_flip=False,
    vertical_flip=False,
    rescale=1 / 255.0
).flow_from_dataframe(
    dataframe=X,
    x_col='image_name',
    y_col='response',
    shuffle=False,
    directory=src_path,
    target_size=(128, 128),
    class_mode=None)
```

The error occurs when running:

```python
model.fit(train_datagen, epochs=5)
```

**System information**
- OS Platform and Distribution: Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.1
- Python version: 3.8.5
- CUDA/cuDNN version: 11.3 / 8.2.1
- GPU model and memory: NVIDIA GeForce RTX 2060
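A common cause of "module X is missing" errors despite a successful `import X` elsewhere is that the script runs under a different interpreter or virtual environment than the shell used for the test import. As a small stdlib-only diagnostic sketch (it uses `json` and a made-up module name as stand-ins, since SciPy may not be present everywhere), one can check which interpreter is running and whether it can locate a module:

```python
import importlib.util
import sys


def check_module(name):
    """Report whether `name` is importable by THIS interpreter, and from where.

    importlib.util.find_spec locates the module without importing it, so it is
    a safe way to verify the environment the failing script actually uses.
    """
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "{}: NOT importable by {}".format(name, sys.executable)
    return "{}: found at {}".format(name, spec.origin)


print(check_module("json"))                 # stdlib module, always present
print(check_module("no_such_module_xyz"))   # hypothetical missing module
```

Running the same check for `"scipy"` inside the environment where `model.fit` fails would confirm whether the generator's interpreter can actually see the SciPy installation.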
tensorflow/tensorflow
Crash in backward pass of grouped Conv1D
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.5.0
- Python version: 3.7
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

The backward pass of a grouped convolution (`Conv1D`) crashes:

```
InvalidArgumentError: filter_sizes does not have enough elements, requested 896, got 224
	 [[node gradient_tape/sequential/conv1d/conv1d/Conv2DBackpropFilter (defined at <ipython-input-1>) ]] [Op:__inference_train_function_898]
```

The forward pass (a simple `call`) works. Code is available in a Colab.
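The 896-vs-224 mismatch in the error is consistent with the backward pass requesting the kernel size of an *ungrouped* convolution, which is exactly `groups` times larger than the grouped kernel. A small arithmetic sketch (the shapes below are hypothetical, chosen only to reproduce the reported numbers with `groups=4`; they are not taken from the issue's model):

```python
def conv1d_filter_elems(kernel_size, in_channels, out_channels, groups=1):
    """Number of elements in a (grouped) Conv1D kernel:
    kernel_size * (in_channels / groups) * out_channels."""
    assert in_channels % groups == 0, "in_channels must be divisible by groups"
    return kernel_size * (in_channels // groups) * out_channels


grouped = conv1d_filter_elems(kernel_size=2, in_channels=16,
                              out_channels=28, groups=4)    # 224
ungrouped = conv1d_filter_elems(kernel_size=2, in_channels=16,
                                out_channels=28, groups=1)  # 896

# The ratio of the two sizes is exactly the group count.
print(grouped, ungrouped, ungrouped // grouped)
```

The fact that "requested" is `groups` times "got" suggests `Conv2DBackpropFilter` receives filter sizes that ignore the group count.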
tensorflow/tensorflow
NotImplementedError: Cannot convert a symbolic Tensor to a numpy array
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): stock
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian 11
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): source and binary
- TensorFlow version (use command below): 2.5 and nightly
- Python version: 3.9
- Bazel version (if compiling from source): 3.7.2
- GCC/compiler version (if compiling from source): gcc 11
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
`tf.reduce_mean` fails on a ragged tensor with a NotImplementedError exception (see the full error below). It happens in graph (non-eager) mode only. Furthermore, it probably does not happen with older versions of numpy (< 1.19), but it definitely happens on numpy 1.20.3.

**Describe the expected behavior**
It must calculate the mean.

**Contributing — Do you want to contribute a PR? (yes/no):** yes
Briefly describe your candidate solution (if contributing): The current code does not catch this type of exception, which is probably new in numpy 1.20.

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook.

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Following is the full error message:

```
NotImplementedError: in user code:

    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/keras/engine/training.py:869 train_function
        return step_function(self, iterator)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/keras/engine/training.py:859 step_function
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/distribute/distribute_lib.py:1286 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica
        return fn(*args, **kwargs)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/keras/engine/training.py:852 run_step
        outputs = model.train_step(data)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/keras/engine/training.py:813 train_step
        self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:539 minimize
        grads_and_vars = self._compute_gradients(
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:591 _compute_gradients
        grads_and_vars = self._get_gradients(tape, loss, var_list, grad_loss)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:473 _get_gradients
        grads = tape.gradient(loss, var_list, grad_loss)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/eager/backprop.py:1084 gradient
        flat_grad = imperative_grad.imperative_grad(
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/eager/imperative_grad.py:71 imperative_grad
        return pywrap_tfe.TFE_Py_TapeGradient(
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/eager/backprop.py:159 _gradient_function
        return grad_fn(mock_op, *out_grads)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/ops/math_grad.py:522 _UnsortedSegmentSumGrad
        return _GatherDropNegatives(grad, op.inputs[1])[0], None, None
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/ops/math_grad.py:488 _GatherDropNegatives
        array_ops.ones([array_ops.rank(gathered)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:206 wrapper
        return target(*args, **kwargs)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/ops/array_ops.py:3216 ones
        output = _constant_if_small(one, shape, dtype, name)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/ops/array_ops.py:2900 _constant_if_small
        if np.prod(shape) < 1000:
    <__array_function__ internals>:5 prod
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/numpy/core/fromnumeric.py:3030 prod
        return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/numpy/core/fromnumeric.py:87 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    /home/eli/virtenvs/tf_mkl/lib/python3.9/site-packages/tensorflow/python/framework/ops.py:867 __array__
        raise NotImplementedError(

    NotImplementedError: Cannot convert a symbolic Tensor (gradient_tape/tree_model/inner_node/RaggedReduceMean/RaggedReduceSum/Sub:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.
```
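The trigger in the traceback is `np.prod(shape) < 1000` applied to a shape containing a symbolic dimension, which forces a numpy conversion of a graph tensor. A plain-Python sketch of the guard a fix would need (using `None` to stand in for an unknown/symbolic dimension; an illustration, not the TensorFlow patch):

```python
def static_num_elements(shape):
    """Product of the dims of a static shape, or None if any dim is unknown.

    np.prod on a shape with a symbolic dimension is what raises the
    NotImplementedError above; checking for unknown dims first avoids
    forcing a numpy conversion at graph-construction time.
    """
    n = 1
    for d in shape:
        if d is None:          # unknown/symbolic dimension
            return None        # size is not statically known
        n *= d
    return n


print(static_num_elements([2, 3, 4]))   # 24: fully static, safe to fold
print(static_num_elements([None, 3]))   # None: must stay a graph op
```

Only when the product is statically known can `_constant_if_small` safely decide to fold the `ones` tensor into a constant.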
tensorflow/tensorflow
TF containers do not contain TensorRT — impossible to use TF-TRT
Bug
cc @bixia1 @sanjoy @pkanwar23 @WhiteFangBuck

We just noticed today that the TensorFlow containers seem to be incorrectly built:

```bash
docker run -it --rm --gpus all --shm-size=2g --ulimit memlock=-1 --ulimit stack=67108864 tensorflow/tensorflow:latest-gpu

root@9bc628c19dc5:/workspace# python
>>> from tensorflow.python.compiler.tensorrt import trt_convert as trt
>>> conversion_params = trt.TrtConversionParams()
>>> converter = trt.TrtGraphConverterV2(conversion_params=conversion_params)
2021-06-17 20:18:13.799235: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvrtc.so.11.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-06-17 20:18:13.799292: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)
```

As you can see, TensorRT cannot be loaded and seems to be missing, maybe linked to the TRT7 upgrade. This is an issue because TF-TRT uses this container extensively in our talks, tutorials and blog posts (NV and Google side). Thanks for your help in addressing it.
tensorflow/tensorflow
Unable to change `task` parameter in TensorFlow Decision Forests from default CLASSIFICATION
Bug
**System information**
- Have I written custom code: No
- OS: Linux Ubuntu 18.04
- TensorFlow installed from: pip
- TensorFlow version: 2.5.0
- Python version: 3.8.10
- TensorFlow Decision Forests version: 0.1.6

**Current behavior**
Attempting to use the TensorFlow Decision Forests module for regression, following this example ("Training configuration"). The problematic behavior is that the only way to change the task from the default CLASSIFICATION to anything else is by specifying `task=tf.keras.Task.REGRESSION` (or RANKING), but `module 'tensorflow.keras' has no attribute 'Task'`, and so the model can only be used for classification, as this is the default setting. This behavior occurs both locally as well as in Google Colab notebooks.

**Expected behavior**
The module `tf.keras` would contain the attribute `Task`, and the `task` parameter would be adjustable.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf
import tensorflow_decision_forests as tfdf

model = tfdf.keras.GradientBoostedTreesModel(task=tf.keras.Task.REGRESSION)
```
tensorflow/tensorflow
keras generic_utils mangles ConvLSTM2D default layer name
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.1
- Python version: 3.9.4

**Describe the current behavior**

```python
>>> import tensorflow as tf
>>> tf.keras.layers.ConvLSTM2D(1, 1).name
'conv_lst_m2d'
```

**Describe the expected behavior**

```python
>>> import tensorflow as tf
>>> tf.keras.layers.ConvLSTM2D(1, 1).name
'conv_lstm_2d'
```

**Other info**
The problem is caused here (L2435): if a name is not provided to a layer, it uses the generic_utils function `to_snake_case`, which shows unexpected behavior here:

```python
>>> from tensorflow.python.keras.utils import generic_utils
>>> generic_utils.to_snake_case('ConvLSTM2D')
'conv_lst_m2d'
```

An ad-hoc fix in generic_utils seems inelegant; moreover, changing the regexes in the linked function will likely have repercussions for many other names where this may not be an issue. Thoughts on setting a default name within `ConvLSTM2D.__init__` instead (L154)? Something passed to super like:

```python
name=kwargs.get('name', backend.unique_object_name('conv_lstm_2d', zero_based=True))
```

I don't have an elegant solution here.
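The mangling can be reproduced with the same two regex passes that Keras' `generic_utils.to_snake_case` applies (reimplemented here in plain Python as a sketch of its behavior): the first pass looks for an uppercase letter followed by lowercase letters or digits, and in `ConvLSTM2D` the only such run is `M2`, so the underscore lands between `LST` and `M2`:

```python
import re


def to_snake_case(name):
    # First pass: insert '_' before an uppercase run like 'M2'
    # (an uppercase letter followed by lowercase letters or digits).
    intermediate = re.sub(r"(.)([A-Z][a-z0-9]+)", r"\1_\2", name)
    # Second pass: insert '_' at every lowercase-to-uppercase boundary.
    insecure = re.sub(r"([a-z])([A-Z])", r"\1_\2", intermediate).lower()
    if insecure[0] != "_":
        return insecure
    return "private" + insecure


# 'M2' matches [A-Z][a-z0-9]+, so 'ConvLSTM2D' -> 'ConvLST_M2D' -> 'conv_lst_m2d'
print(to_snake_case("ConvLSTM2D"))  # conv_lst_m2d
print(to_snake_case("Dense"))       # dense
```

This also shows why a general regex fix is risky: the same `[A-Z][a-z0-9]+` rule is what correctly splits names like `Conv2DTranspose`, so special-casing the layer name (as suggested above) is the safer route.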
tensorflow/tensorflow
ValueError: Image not in JPEG format, for custom object detection using TFLite Model Maker
Bug
**URL(s) with the issue:** Codelab / Colab notebook link for custom object detection using TFLite Model Maker (#scrollTo=p79NHCx0xFqb)

**Description of issue (what needs changing):** I have been following the above-mentioned codelab for training a model for custom object detection using TFLite Model Maker. However, when I try importing the image dataset as a CSV from a local path, I face an error that says `ValueError: Image not in JPEG format`, although all the training, validation and test images are of the said format, i.e. JPEG.

**Clear description:** Below is a screenshot of the same. [image]

**Correct links:** Is the link to the source code correct? Yes (CSV file).
**Parameters defined:** Are all parameters defined and formatted correctly? Yes.
**Returns defined:** Are return values defined? Yes.
**Raises listed and defined:** Are the errors defined? For example, "Usage example".
**Usage example:** Is there a usage example? Yes, the example is taken from the codelab (step 8).
tensorflow/tensorflow
URL of tf.raw_ops.SparseReduceSum comprises irrelevant garbage values
Bug
**URL(s) with the issue:** Please provide a link to the documentation entry, for example.

**Description of issue (what needs changing):** The hyperlink of the op `tf.raw_ops.SparseReduceSum` has some garbage/irrelevant values like "souten", "beti", "jeetendra", "rekha", etc.
tensorflow/tensorflow
Update call method in base_layer
Bug
Adds a description about enabling eager mode for debugging purposes. Fixes
tensorflow/tensorflow
Basic image classification tutorial missing minor step
Bug
**URL(s) with the issue:** The step where the last plot is drawn.

The following code is missing `plt.show()` to display the plot, despite the plot being shown as displayed in the doc:

```python
plot_value_array(1, predictions_single[0], test_labels)
plt.xticks(range(10), class_names, rotation=45)
# missing plt.show() here
```
tensorflow/tensorflow
Loaded runtime cuDNN library 8.0.5 but source was compiled with 8.1.0
Bug
OS: Windows 10. Python: 3.9.1. TensorFlow: 2.5.0. CUDA: 11.3. cuDNN: 8.2.0.

Output of `nvcc --version`:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Mar_21_19:24:09_Pacific_Daylight_Time_2021
Cuda compilation tools, release 11.3, V11.3.58
Build cuda_11.3.r11.3/compiler.29745058_0
```

Python build CUDA version: 64_112; build cuDNN version: 64_8.

I encountered the problem stated in the title when testing a model which I trained on the same machine. During training, a log message stated that cuDNN version 8200 was loaded:

```
2021-06-11 15:49:39.798483: I tensorflow/stream_executor/cuda/cuda_dnn.cc:359] Loaded cuDNN version 8200
```

But then, during testing, the message below appeared:

```
2021-06-11 15:49:10.188342: E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] Loaded runtime CuDNN library: 8.0.5 but source was compiled with: 8.1.0. CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from source, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
```

How should I solve the problem? Thank you.
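The compatibility rule quoted in the error message can be stated mechanically, which makes the symptom easier to interpret: at test time the loader found cuDNN 8.0.5 on the library search path, even though 8.2.0 (version 8200) was loaded during training, so some other cuDNN copy is shadowing the installed one. A small sketch of that version check (plain Python, not the actual loader logic):

```python
def cudnn_compatible(compiled, loaded):
    """cuDNN compatibility rule from the error message: the runtime library
    must have a matching major version and an equal-or-higher minor version
    relative to the version the binary was compiled against."""
    c_major, c_minor = compiled[0], compiled[1]
    l_major, l_minor = loaded[0], loaded[1]
    return l_major == c_major and l_minor >= c_minor


print(cudnn_compatible((8, 1, 0), (8, 0, 5)))  # False: the reported failure
print(cudnn_compatible((8, 1, 0), (8, 2, 0)))  # True: the version actually installed
```

Since 8.2.0 would pass the check, the practical fix is usually to find and remove (or reorder on PATH) the stray 8.0.5 DLL that gets picked up in the test configuration.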
tensorflow/tensorflow
Code in the documentation results in warning messages
Bug
**URL(s) with the issue:** Please provide a link to the documentation entry, for example: "Using the GradientTape: a first end-to-end example".

**Description of issue (what needs changing):** The code needs to be modified so that the warning messages go away. Please find the GitHub gist that demonstrates the warnings.
tensorflow/tensorflow
TensorFlow loading saved model throws google.protobuf.message.DecodeError: Error parsing message
Bug
Background: I set up a new MBP with macOS Big Sur 11.4, and now my TensorFlow model-loading script no longer works, although it worked previously and still works on my other laptop. I tried to set up exactly the same Python virtual environment, and even used exactly the same Dockerfile; still, one machine works and the other does not, so apparently this is an OS setup issue. I just cannot figure out what is wrong and have been hitting a wall for days. Please point out what I could check to fix this, and if possible please don't close this issue, as I already asked the question on Stack Overflow yesterday and there is no answer so far. The other confusing part is that even inside the Docker container one laptop works and the other does not, so something on the physical host OS must be affecting the container.

The error traceback:

```
Traceback (most recent call last):
  File "venvs/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/saved_model/loader_impl.py", line 68, in parse_saved_model
    saved_model.ParseFromString(file_content)
google.protobuf.message.DecodeError: Error parsing message

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "naive_tf1_model_loader.py", line 55, in <module>
    TF1Model("tf1_model/object_detection/1")
  File "naive_tf1_model_loader.py", line 14, in __init__
    self.sess, [tf.saved_model.SERVING], model_path)
  File "venvs/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "venvs/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/saved_model/loader_impl.py", line 268, in load
    loader = SavedModelLoader(export_dir)
  File "venvs/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/saved_model/loader_impl.py", line 284, in __init__
    self._saved_model = parse_saved_model(export_dir)
  File "venvs/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/saved_model/loader_impl.py", line 71, in parse_saved_model
    raise IOError("Cannot parse file %s: %s." % (path_to_pb, str(e)))
OSError: Cannot parse file b'.../saved_model.pb': Error parsing message.
```

The model was downloaded from the TensorFlow model zoo (this one: [1]). FYI, the model-loading script:

```python
import time

import tensorflow as tf

INPUT_KEY_TO_TENSOR = "input_key_to_tensor"
OUTPUT_KEY_TO_TENSOR = "output_key_to_tensor"


class TF1Model:
    def __init__(self, model_path):
        self.sess = tf.compat.v1.Session()
        if hasattr(tf.compat.v1.saved_model, "load"):
            graph_meta_def = tf.compat.v1.saved_model.load(
                self.sess, [tf.saved_model.SERVING], model_path)
        else:
            graph_meta_def = tf.compat.v1.saved_model.loader.load(
                self.sess, [tf.saved_model.SERVING], model_path)
        signatures = graph_meta_def.signature_def
        self.signature_tensor_mapping = {}
        for signature_name in signatures:
            self.signature_tensor_mapping[signature_name] = {
                INPUT_KEY_TO_TENSOR: {}, OUTPUT_KEY_TO_TENSOR: {}}
            indiv_sig_data = self.signature_tensor_mapping[signature_name]
            inputs = signatures[signature_name].inputs
            for k in inputs:
                tensor = self.sess.graph.get_tensor_by_name(inputs[k].name)
                indiv_sig_data[INPUT_KEY_TO_TENSOR][k] = tensor
            outputs = signatures[signature_name].outputs
            for k in outputs:
                tensor = self.sess.graph.get_tensor_by_name(outputs[k].name)
                indiv_sig_data[OUTPUT_KEY_TO_TENSOR][k] = tensor

    def predict(self, payload):
        start = time.time()
        payload_sig = payload["signature_name"]
        feeds = {
            self.signature_tensor_mapping[payload_sig][INPUT_KEY_TO_TENSOR][name]:
                payload["inputs"][name]
            for name in payload["inputs"]
        }
        res = self.sess.run(
            self.signature_tensor_mapping[payload_sig][OUTPUT_KEY_TO_TENSOR],
            feed_dict=feeds)
        print("prediction takes {}s".format(time.time() - start))
        return res


if __name__ == "__main__":
    model = TF1Model("tf1_model/object_detection/1")
    payload = {"signature_name": "serving_default",
               "inputs": {"inputs": [[[[0, 0, 0], [0, 0, 0]]]]}}
    for _ in range(3):
        model.predict(payload)
```
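Since the same Dockerfile behaves differently on the two hosts, one thing worth ruling out first is that the `saved_model.pb` bytes themselves differ between machines: a git-lfs pointer file, a truncated download, or an HTML error page saved as the `.pb` are common ways `ParseFromString` ends up raising "Error parsing message". The helper below is a hypothetical diagnostic sketch, not part of the reporter's script; the heuristics it applies are assumptions about likely failure modes:

```python
import hashlib
import os


def diagnose_pb(path):
    """Best-effort diagnosis of a saved_model.pb that fails to parse.

    Run this on both the working and the failing machine and compare
    the output; if size or digest differ, the problem is the file
    itself rather than the TensorFlow installation.
    """
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    head = data[:512].lstrip()
    if size == 0:
        verdict = "empty file"
    elif head.startswith(b"version https://git-lfs"):
        verdict = "git-lfs pointer file, not the real model"
    elif head[:1] in (b"<", b"{"):
        verdict = "looks like HTML/JSON (bad download?), not a protobuf"
    else:
        verdict = "binary content; compare digests across machines"
    return {"size": size, "sha256": digest, "verdict": verdict}
```

Diffing the dictionaries produced on the two laptops is a quick way to tell whether this is data corruption on one host or a genuine environment (protobuf/TensorFlow version) mismatch.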
tensorflow/tensorflow
tensorflow.python.keras.applications.vgg16 model gets low accuracy on the ILSVRC2012 validation dataset
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.4
- Python version: 3.7
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA 11.0, cuDNN 8.0
- GPU model and memory: GTX 1060, 6 GB
- Exact command to reproduce: N/A

Describe the problem

I imported the VGG16 model from tensorflow.python.keras.applications.vgg16 and evaluated it on the ILSVRC2012 validation dataset. According to the documented results, the top-1 and top-5 accuracy of the VGG16 model should be 71.3% and 90.1%, but I got only 65.7% and 86.8% on the ILSVRC2012 validation dataset. I downloaded the validation dataset and loaded it using image_dataset_from_directory, imported from tensorflow.keras.preprocessing. I preprocessed the images using preprocess_input, imported from tensorflow.python.keras.applications.vgg16. I have also dealt with the inconsistency between the original ImageNet ILSVRC2012 ids and the class indices that the pretrained model uses. I wonder why I got low accuracy. I suspect that the image-resize interpolation method affects the data distribution. I suggest that the image preprocessing method should be explicitly documented, since it might vary from model to model.

I also found a bug: image_dataset_from_directory(..., interpolation="nearest") returns a dataset whose data type is uint8 instead of float32, while the other interpolation methods always return float32 datasets.

Source code:

```python
from __future__ import print_function

import sys

import numpy as np
from tensorflow.python.keras.applications.vgg16 import VGG16
from tensorflow.python.keras.applications.vgg16 import decode_predictions
from tensorflow.python.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image_dataset_from_directory


def preprocess(img, label):
    img = preprocess_input(img)
    return img, label


if __name__ == "__main__":
    model = VGG16(include_top=True, weights="imagenet")
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])  # -> 65.68%
    # model.compile(optimizer="adam",
    #               loss="sparse_categorical_crossentropy",
    #               metrics=["sparse_top_k_categorical_accuracy"])  # -> 86.75%
    ds = image_dataset_from_directory(
        "C:/ILSVRC2012/ILSVRC2012_img_val",
        labels=list(np.loadtxt("new_val_truth.txt", dtype=int)),
        label_mode="int",
        color_mode="rgb",
        batch_size=32,
        image_size=(224, 224),
        shuffle=False,
        interpolation="bilinear")
    ds = ds.map(preprocess)
    model.evaluate(ds, verbose=1)
```

Logs:

```
1563/1563 [==============================] - 684s 432ms/step - loss: 1.4227 - accuracy: 0.6568
1563/1563 [==============================] - 682s 430ms/step - loss: 1.4227 - sparse_top_k_categorical_accuracy: 0.8675
```
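The suspicion about preprocessing can be checked directly. In its default "caffe" mode, VGG16's preprocess_input flips RGB to BGR and subtracts the per-channel ImageNet means; the sketch below re-derives that in plain NumPy for illustration (it is not the Keras implementation), and the explicit cast also shows why a uint8 dataset from interpolation="nearest" would be dangerous if it reached this arithmetic unconverted:

```python
import numpy as np

# Per-channel ImageNet means in BGR order, as used by the "caffe"
# preprocessing mode that VGG16 relies on.
BGR_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)


def caffe_preprocess(rgb_batch):
    """Illustrative re-derivation of VGG16-style preprocessing:
    cast to float32, flip RGB -> BGR, subtract the ImageNet means."""
    x = np.asarray(rgb_batch).astype(np.float32)  # cast first: subtracting a
                                                  # float mean from uint8 data
                                                  # would otherwise misbehave
    x = x[..., ::-1]                              # RGB -> BGR channel flip
    return x - BGR_MEANS
```

Evaluating a small subset with and without this zero-centering (or with a different interpolation) is a cheap way to isolate how much of the accuracy gap comes from preprocessing versus the label-index mapping.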