| repository | issue title | labels | body |
|---|---|---|---|
| tensorflow/tensorflow | Excessive memory consumption and preparation runtime of tf.keras.backend.max in a custom layer with mask | Bug | **System information** Have I written custom code: yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: 2.2.0.dev20200303. Python version: 3.6.9. CUDA/cuDNN version: CPU only. GPU model and memory: CPU only. **Describe the current behavior** Memory consumption seems to be proportional to `num_iterations` and is thus excessive; most likely there is a memory leak. The runtime until the first fit result appears is also extremely slow: 15 seconds until the first `fit` call and 55 seconds until the result of the first fit, while the remaining fits run through in less than a second. Apparently the runtime is spent on memory management and not on the actual max-function evaluation. When using `tf.keras.backend.max` to compute a mask together with `tf.stack` in a real setup, memory consumption increases steadily until running out of memory at approx. 30 GB; in contrast, without computing the mask, memory consumption does not go beyond approx. 1 GB. **Describe the expected behavior** I would expect memory consumption to be independent of `num_iterations` and thus much lower, and the preparation runtime to be much shorter. **Code to reproduce the issue** `import tensorflow as tf`<br>`import numpy as np`<br>`batch_size = 100`<br>`dim_input = 100`<br>`dim_output = 1`<br>`num_iterations = 100  # will consume approx. 5 GB RAM when set to 1000`<br>`class CustomMask(tf.keras.layers.Layer):`<br>`    def __init__(self):`<br>`        super(CustomMask, self).__init__()`<br>`    def compute_mask(self, inputs, mask=None):`<br>`        batch_size = inputs.shape[0]`<br>`        batch_maxes = tf.keras.backend.max(inputs, axis=1)`<br>`        for batch in range(batch_size):`<br>`            for i in range(num_iterations):`<br>`                maximum = tf.keras.backend.max(batch_maxes[batch])`<br>`        return None`<br>`    def call(self, inputs, mask=None):`<br>`        return inputs`<br>`model = tf.keras.Sequential()`<br>`model.add(tf.keras.layers.Input(batch_input_shape=(batch_size, dim_input)))`<br>`model.add(CustomMask())`<br>`model.add(tf.keras.layers.Dense(dim_output))`<br>`model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])`<br>`training_input = np.zeros((batch_size, dim_input))`<br>`training_output = np.zeros((batch_size, dim_output))`<br>`model.fit(training_input, training_output, batch_size=batch_size)` **Other info / logs** If my usage of `tf.keras.backend.max` is wrong with regard to memory consumption and/or runtime, please let me know. I need to call it frequently within `compute_mask` to compute a custom mask in conjunction with `tf.stack`; however, the latter does not seem to be the problem, which is why I left it out of the stripped-down code. |
| tensorflow/tensorflow | TFLite allocate_tensors() fails: CONCATENATION failed to prepare | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.6.9. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. **Describe the current behavior** Creating a tf.keras model and exporting it to TFLite causes an error when trying to allocate the tensors for inference. The error is: `RuntimeError: tensorflow/lite/kernels/concatenation.cc:68 t->dims->size != t0->dims->size (0 != 4) Node number 3 (CONCATENATION) failed to prepare.` **Describe the expected behavior** The exported model should load without errors. **Standalone code to reproduce the issue** A Colab gist reproducing the error can be found here (scrollTo=2Tz7z1kNvZk): `import numpy as np`<br>`import tensorflow as tf`<br>`from tensorflow.keras.layers import Input, Conv2D, Concatenate, Add`<br>`from tensorflow.keras.models import Model`<br>`if __name__ == '__main__':`<br>`    print(tf.version.GIT_VERSION, tf.version.VERSION)`<br>`    test_image = np.random.randn(8, 388, 420, 1)`<br>`    # model`<br>`    m_input = Input(shape=test_image.shape[1:])`<br>`    d1 = Conv2D(8, 3, dilation_rate=1, padding='same', use_bias=False)(m_input)`<br>`    d2 = Conv2D(8, 3, dilation_rate=2, padding='same', use_bias=False)(m_input)`<br>`    d16 = Conv2D(8, 3, dilation_rate=16, padding='same', use_bias=False)(m_input)`<br>`    add1 = Add()([d2, d16])`<br>`    m_output = Concatenate()([d1, d2, add1])`<br>`    model = Model(inputs=m_input, outputs=m_output)`<br>`    model.summary()`<br>`    model.compile(optimizer='rmsprop', loss='mse')`<br>`    pred = model.predict(test_image)`<br>`    print(pred.shape)`<br>`    # conversion`<br>`    converter = tf.lite.TFLiteConverter.from_keras_model(model)`<br>`    tflite_model = converter.convert()`<br>`    with open('model.tflite', 'wb') as fp:`<br>`        fp.write(tflite_model)`<br>`    # inference`<br>`    interpreter = tf.lite.Interpreter(model_path='model.tflite')`<br>`    input_details = interpreter.get_input_details()`<br>`    interpreter.allocate_tensors()`<br>`    print('done')` **Other info / logs** `RuntimeError                              Traceback (most recent call last)`<br>`     31 interpreter = tf.lite.Interpreter(model_path='model.tflite')`<br>`     32 input_details = interpreter.get_input_details()`<br>`---> 33 interpreter.allocate_tensors()`<br>`/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter.py in allocate_tensors(self)`<br>`    245 def allocate_tensors(self):`<br>`    246     self._ensure_safe()`<br>`--> 247     return self._interpreter.AllocateTensors()`<br>`/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py in AllocateTensors(self)`<br>`    110     return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)`<br>`RuntimeError: tensorflow/lite/kernels/concatenation.cc:68 t->dims->size != t0->dims->size (0 != 4) Node number 3 (CONCATENATION) failed to prepare.` |
| tensorflow/tensorflow | ImportError: No module named 'builtins' | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from: pip3. TensorFlow version: 2.1.0. **Describe the current behavior** (Screenshot from 2020-03-10 08:30:00 attached.) I was unable to complete the util tests. **Standalone code to reproduce the issue** Running `bazel test` (with flags) on `tensorflow/python/keras`. **Other info / logs** Include any logs or source code that would be helpful to diagnose the problem; large logs and files should be attached. |
| tensorflow/tensorflow | Ops with SparseTensor on GPU give results on CPU | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code. OS platform and distribution: Windows 10. TensorFlow installed from: binary (conda). TensorFlow version: 2.1.0. Python version: 3.7. CUDA/cuDNN version: CUDA 10.1. GPU model and memory: NVIDIA GTX 1060, 6 GB. **Describe the current behavior** When executing operations on sparse tensors on the GPU within a `tf.device` block, the result is stored on the CPU. This is causing a huge slowdown in my program, as the data is being copied back and forth between CPU and GPU multiple times. I have verified that this occurs with the functions `tf.sparse.sparse_dense_matmul` and `tf.sparse.reduce_sum`; it may also occur with others. Is this the intended behavior? Am I missing something? **Describe the expected behavior** The result of an operation on a sparse tensor on the GPU should stay on the GPU. **Standalone code to reproduce the issue** See this Colab notebook. **Other info / logs** Thanks in advance. |
| tensorflow/tensorflow | Document the purpose of the tf.keras.preprocessing module | Bug | URL(s) with the issue; description of issue (what needs changing): The module documentation is very terse and only states that it contains preprocessing utilities; it does not state their specific purpose. It would be helpful if it defined the intended use of the `tf.keras.preprocessing` module, e.g. to clean up or transform `tf.data.Dataset`s before they are fed to the model. Also, since the `tf.feature_column` module has similar functionality, it would be nice to describe when to use one or the other, or how they are intended to be used together. |
| tensorflow/tensorflow | ValueError: Cannot convert a Tensor of dtype resource to a NumPy array | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: n/a, as it can be reproduced in Google Colab. Mobile device: n/a. TensorFlow version: 2.1. Bazel/GCC/CUDA/GPU: n/a. **Describe the current behavior** It results in the error `InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array` when running the first snippet, but works fine when `tf.keras.Input` is replaced with `tf.Variable` in the second snippet. **Describe the expected behavior** The code should work fine with `tf.keras.Input` as well. **Standalone code to reproduce the issue** Code with error: `import tensorflow as tf`<br>`num_uids = 50`<br>`input_uid = tf.keras.layers.Input(shape=(1,), dtype=tf.int32, batch_size=32)`<br>`params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)`<br>`params = tf.gather_nd(params, input_uid)`<br>`input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32, batch_size=32)`<br>`combined = tf.concat([params, input_shared_features], axis=1)`<br>`net = tf.keras.layers.Dense(128)(combined)` Working code: `import tensorflow as tf`<br>`num_uids = 50`<br>`input_uid = tf.Variable(tf.ones((32, 1), dtype=tf.int32))`<br>`params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)`<br>`params = tf.gather_nd(params, input_uid)`<br>`input_shared_features = tf.Variable(tf.ones((32, 128), dtype=tf.float32))`<br>`combined = tf.concat([params, input_shared_features], axis=1)`<br>`net = tf.keras.layers.Dense(128)(combined)` Please find the GitHub gist; there is also a Stack Overflow question associated with this issue. |
| tensorflow/tensorflow | AutoGraph error with for loop in Keras loss | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10 x64. TensorFlow installed from: binary (pip). TensorFlow version: TF 2.1.0 as well as TF 2.2.0 (2.2.0.dev20200304). Python version: 3.7. CUDA/cuDNN version: 10.1/7.6. GPU model and memory: the bug appears on several computers with different GPUs. **Describe the current behavior** The error `tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.` is raised when using a for loop over a tensor dimension in a custom Keras loss in AutoGraph mode. Notice that when run eagerly, the bug does not appear. Notice also that using a similar for loop in a custom Keras model works both in AutoGraph and eager mode; the bug is specific to Keras losses. As expected, replacing the loop with a call to `tf.map_fn` works correctly. **Describe the expected behavior** The behavior should be the same as when run eagerly, without any error. **Standalone code to reproduce the issue** `import numpy as np`<br>`import tensorflow as tf`<br>`from tensorflow import keras`<br>`from tensorflow.keras import layers`<br>`# A custom loss with a for loop raises an error in AutoGraph mode;`<br>`# a similar for loop in a Keras model works as expected.`<br>`class CustomLoss(keras.losses.Loss):`<br>`    def call(self, y_true, y_pred):`<br>`        x = y_true - y_pred`<br>`        for i in tf.range(tf.shape(y_true)[0]):  # the error is raised here`<br>`            x += 1`<br>`        return tf.reduce_mean(x)`<br>`if __name__ == '__main__':`<br>`    data = np.random.random((1000, 3)).astype(np.float32)`<br>`    inputs = tf.keras.Input(shape=(3,))`<br>`    outputs = tf.keras.layers.Dense(3)(inputs)`<br>`    model = tf.keras.Model(inputs=inputs, outputs=outputs)`<br>`    model.compile(loss=CustomLoss())  # does not work`<br>`    # model.compile(loss=CustomLoss(), run_eagerly=True)  # works`<br>`    model.fit(x=data, y=data)` **Other info / logs** Full traceback in attachment: logs.txt |
| tensorflow/tensorflow | TypeError: 'UserObject' object is not callable: why does tf.saved_model.load fail? | Bug | **System information** TensorFlow installed from: binary. TensorFlow version: 2.1.0. Python version: 3.7.4. **Describe the current behavior** `Traceback (most recent call last):`<br>`  File "C:/pyfiles/tensorflow2.x/load_model.py", line 12, in <module>`<br>`    print(model(tf.random.normal((1, 3))))`<br>`TypeError: 'UserObject' object is not callable` **Standalone code to reproduce the issue** Saving: `import os`<br>`import tensorflow as tf`<br>`os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'`<br>`class Model(tf.keras.Model):`<br>`    def __init__(self):`<br>`        super(Model, self).__init__()`<br>`        self.d = tf.keras.layers.Dense(2)`<br>`    def call(self, x, training=True, mask=None):`<br>`        return self.d(x)`<br>`model = Model()`<br>`# high-level API`<br>`model.predict(tf.random.normal((2, 3)))`<br>`model.save('save_high', save_format='tf')`<br>`# low-level API`<br>`tf.saved_model.save(model, 'save_low')` Loading: `import os`<br>`import tensorflow as tf`<br>`# high-level API`<br>`model = tf.keras.models.load_model('save_high')`<br>`# low-level API`<br>`model = tf.saved_model.load('save_low')`<br>`print(model(tf.random.normal((1, 3))))  # error` If I use the high-level API to save and load, it runs successfully. If I use `tf.saved_model.save`, it saves successfully, with warnings: `WARNING:tensorflow:Skipping full serialization of Keras model <__main__.Model object at 0x0000028C0515F6D8>, because its inputs are not defined.`<br>`2020-03-09 18:20:37.304479: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.`<br>`WARNING:tensorflow:Skipping full serialization of Keras layer, because it is not built.` Then loading fails. So what does "'UserObject' object is not callable" mean, and how can I fix this? Thanks for any help. |
| tensorflow/tensorflow | Undefined reference error while building lite-for-ESP32 person_detection example | Bug | **TensorFlow Micro system information** Host OS platform and distribution: Windows 7. TensorFlow installed from: source. TensorFlow version: latest (commit SHA). Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): ESP32. **Describe the problem** I cloned the whole latest TensorFlow branch on my Win7 PC. I intend to use TensorFlow Lite for Microcontrollers, and I want to try the included person_detection example first to make sure everything works out of the box. So I followed the README instructions by running `make -f tensorflow/lite/micro/tools/make/Makefile TARGET=esp generate_person_detection_esp_project`. According to the Makefile, five third-party downloads should be fetched first; however, I got an MD5-mismatch error for each and every single one of them, so I had to hack the Makefile to disable the MD5 checksum checking and manually download those five required downloads and install them in the `C:\projects\tensorflow\tensorflow\lite\micro\tools\make\downloads` folder. Then the make seemed to run for a few minutes and stopped at the following error: `[888/894] Building CXX object esp-idf/.../flatbuffer_conversions.cc.obj`<br>`cc1plus.exe: warning: command line option '-std=c11' is valid for C/ObjC but not for C++`<br>`[893/894] Linking CXX executable person_detection.elf  FAILED: person_detection.elf`<br>`C:/Users/tianhao/.espressif/tools/xtensa-esp32-elf/esp-2019r2-8.2.0/xtensa-esp32-elf/bin/ld.exe: esp-idf/main/libmain.a(main_functions.cc.obj): in function 'loop':`<br>`C:/projects/tensorflow/tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32/prj/person_detection/esp-idf/build/../main/main_functions.cc:111: undefined reference to 'RespondToDetection(tflite::ErrorReporter*, unsigned char, unsigned char)'`<br>`collect2.exe: error: ld returned 1 exit status`<br>`ninja: build stopped: subcommand failed.`<br>`ninja failed with exit code 1` I have also attached the full log file for reference (build_log.txt). Please show me how to fix this problem and get at least the example built and running. Thanks. Please provide the exact sequence of commands/steps when you ran into the problem. |
| tensorflow/tensorflow | Precision with top_k does not compute the precision on average as stated in the API | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. **Description of issue (what needs changing)** This method does not compute the precision on average when `top_k` is set. This could lead to bad evaluations, especially when `sample_weight` is set and used as counts. To see the issue, it's enough to run: `m = tf.keras.metrics.Precision(top_k=2)`<br>`m.update_state([0, 0, 1, 1], [1, 1, 1, 1])`<br>`print('Final result:', m.result().numpy())` This returns 0, but should return 0.5. It always computes the precision according to the given order and returns 0; however, it should return 0.5 if it is computed on average. |
| tensorflow/tensorflow | TensorFlow 2.0 does not support this code | Bug | TensorFlow 2.0 does not support `tf.contrib.training.bucket_by_sequence_length`. What can I use instead of this? |
| tensorflow/tensorflow | OperatorNotAllowedInGraphError occurs when using tf.function | Bug | **System information** Python 3.6.9 on Ubuntu 18.04, using TensorFlow 2.1.0 (GPU). Hardware: Quadro P6000. **Issue description** I tried to use a map function to split a string into multiple tensors with the following sample code: `@tf.function`<br>`def csv_text_parse(line):`<br>`    f0, f1, f2 = tf.strings.split(line, sep=',')`<br>`    return f0, f1, f2`<br>`csv_text_parse(b'0.2,hello,image')` But if I change the above function to: `@tf.function`<br>`def csv_text_parse2(line):`<br>`    return tf.strings.split(line, sep=',')` the result will be fine. |
| tensorflow/tensorflow | TFLite inference throws an error at one of the runs | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): followed the official classification example. OS platform and distribution: Android 9. Mobile device: Huawei 9 Lite. TensorFlow version: `implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'`, `implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'`, `implementation 'org.tensorflow:tensorflow-lite-support:0.0.0-nightly'`, `implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly'`. **Describe the current behavior** I have a model that has 1 input and 2 outputs. I have written the code below for inference using `run` rather than `runForMultipleInputsOutputs`. So first I have a query: would running it with `runForMultipleInputsOutputs` rather than calling `run` twice make it run faster, or does `runForMultipleInputsOutputs` do the same internally? Now, I get a correct inference for the first output layer, but for the second output inference I get the error message: `java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 32 bytes and a Java Buffer with 8 bytes.` **Describe the expected behavior** It should run OK. **Standalone code to reproduce the issue** Initialization code: `tflite = new Interpreter(tfliteModel, tfliteOptions);`<br>`// load labels from the label files`<br>`labelsAge = ageLabelList;`<br>`labelsGender = genderLabelList;`<br>`// read type and shape of input and output tensors, respectively`<br>`int imageTensorIndex = 0;`<br>`int[] imageShape = tflite.getInputTensor(imageTensorIndex).shape(); // {1, height, width, 3}`<br>`imageSizeY = imageShape[1];`<br>`imageSizeX = imageShape[2];`<br>`DataType imageDataType = tflite.getInputTensor(imageTensorIndex).dataType();`<br>`// create the input tensor`<br>`inputImageBuffer = new TensorImage(imageDataType);`<br>`// output for age`<br>`int probabilityAgeTensorIndex = 0; // age`<br>`int[] probabilityAgeShape = tflite.getOutputTensor(probabilityAgeTensorIndex).shape(); // {1, NUM_CLASSES}`<br>`DataType probabilityAgeDataType = tflite.getOutputTensor(probabilityAgeTensorIndex).dataType();`<br>`// create the output tensor and its processor`<br>`outputProbabilityAgeBuffer = TensorBuffer.createFixedSize(probabilityAgeShape, probabilityAgeDataType);`<br>`// create the post-processor for the output probability`<br>`probabilityAgeProcessor = new TensorProcessor.Builder().add(getPostprocessNormalizeOp()).build();`<br>`// output for gender`<br>`int probabilityGenderTensorIndex = 1; // gender`<br>`int[] probabilityGenderShape = tflite.getOutputTensor(probabilityGenderTensorIndex).shape(); // {1, NUM_CLASSES}`<br>`DataType probabilityGenderDataType = tflite.getOutputTensor(probabilityGenderTensorIndex).dataType();`<br>`outputProbabilityGenderBuffer = TensorBuffer.createFixedSize(probabilityGenderShape, probabilityGenderDataType);`<br>`probabilityGenderProcessor = new TensorProcessor.Builder().add(getPostprocessNormalizeOp()).build();` Inference code: `public AgeGenderValues estimateAgeGender(final Bitmap bitmap) {`<br>`    // log this method so that it can be analyzed with systrace`<br>`    Trace.beginSection("estimateImage");`<br>`    Trace.beginSection("loadImage");`<br>`    inputImageBuffer = loadImage(bitmap);`<br>`    Trace.endSection();`<br>`    // run the inference calls`<br>`    Trace.beginSection("ageRunInference");`<br>`    tflite.run(inputImageBuffer.getBuffer(), outputProbabilityAgeBuffer.getBuffer().rewind());`<br>`    Trace.endSection();`<br>`    Trace.beginSection("genderRunInference");`<br>`    // error thrown at the statement below`<br>`    tflite.run(inputImageBuffer.getBuffer(), outputProbabilityGenderBuffer.getBuffer().rewind());`<br>`    Trace.endSection();`<br>`    // get the map of label and probability`<br>`    Map<String, Float> labeledAgeProbability = new TensorLabel(labelsAge, probabilityAgeProcessor.process(outputProbabilityAgeBuffer)).getMapWithFloatValue();`<br>`    Map<String, Float> labeledGenderProbability = new TensorLabel(labelsGender, probabilityGenderProcessor.process(outputProbabilityGenderBuffer)).getMapWithFloatValue();`<br>`    Trace.endSection();`<br>`    AgeGenderValues tmpAgeGenderValues = new AgeGenderValues();`<br>`    // get top-k results`<br>`    tmpAgeGenderValues.age = getTopKProbability(labeledAgeProbability);`<br>`    tmpAgeGenderValues.gender = getTopKProbability(labeledGenderProbability);`<br>`    return tmpAgeGenderValues;`<br>`}` Graph: AgeGenderMultitaskSPCNN. |
| tensorflow/tensorflow | tf.ragged.map_flat_values with tf.argmax on the ragged axis produces a wrongly shaped result | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from: pip. TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.7.4. Bazel/GCC: n/a. CUDA/cuDNN version: v10.1.243. GPU model and memory: GeForce RTX 2080, 8 GB. **Describe the current behavior** Using a RaggedTensor and mapping its flat values into `tf.argmax`, with the flattened ragged dimension as the axis to argmax against, yields a wrongly shaped result: on a (batch, None, n) RaggedTensor, using `map_flat_values(tf.argmax, ..., -2)` results in a (batch, None) tensor. **Describe the expected behavior** I'd expect it to result in a (batch, n) tensor. **Standalone code to reproduce the issue** (bracket placement in the constant reconstructed; the flat values are 0..9 with an innermost dimension of 2): `import tensorflow as tf`<br>`foo = tf.ragged.constant([[[0, 1]], [[2, 3], [4, 5]], [[6, 7]], [[8, 9]]], dtype=tf.int64, ragged_rank=1)`<br>`# mapping into the ragged axis; should produce a (4, 2)`<br>`bar = tf.ragged.map_flat_values(tf.argmax, foo, -2)`<br>`bar.shape  # TensorShape([4, None])`<br>`# mapping into the non-ragged axis`<br>`bar = tf.ragged.map_flat_values(tf.argmax, foo, -1)`<br>`bar.shape  # notice the same shape: TensorShape([4, None])` What I expected is something like this: `row_splits = foo.row_splits`<br>`old_row = row_splits[0]`<br>`bars = []`<br>`for row in row_splits[1:]:`<br>`    bars.append(tf.argmax(foo.values[old_row:row], -2))`<br>`    old_row = row`<br>`bar = tf.stack(bars)`<br>`bar.shape  # TensorShape([4, 2])` |
| tensorflow/tensorflow | tf.image.ssim_multiscale breaks in TensorFlow 2.1.0-rc2 | Bug | **System information** Python 3.7.6 on Windows 10 x64, using TensorFlow 2.1.0-rc2 (GPU). Hardware: pciBusID 0000:01:00.0, name TITAN X (Pascal), computeCapability 6.1, coreClock 1.531GHz, coreCount 28, deviceMemorySize 12.00GiB, deviceMemoryBandwidth 447.48GiB/s. **Describe the current behavior** `tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.` **Describe the expected behavior** The code should print the word "done". **Standalone code to reproduce the issue** `import tensorflow as tf`<br>`tf.test.gpu_device_name()`<br>`print(tf.__version__)`<br>`# build model`<br>`img_input = tf.keras.layers.Input(shape=(128, 128, 1))`<br>`img_output = tf.keras.layers.Convolution2D(1, 1)(img_input)`<br>`model = tf.keras.models.Model(img_input, img_output)`<br>`# add reconstruction loss; toggle between the next 2 lines of code to see that ssim_multiscale does not work but a simple MSE does`<br>`loss = tf.reduce_mean(tf.image.ssim_multiscale(img_input, img_output, 1.0))  # this loss does not work`<br>`# loss = tf.reduce_mean((img_input - img_output) ** 2)  # this loss works`<br>`model.add_loss(loss)`<br>`model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-4), loss=None)`<br>`model.summary()`<br>`print('done')` The error I get when using the ssim_multiscale loss is: `tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.` **Other info / logs** This problem is present in 1.15.0 and 2.1.0; this bug is not present in 1.13.1. I have tried several image metrics in `tf.image`, including `ssim` and `psnr`, and they all result in the same error. |
| tensorflow/tensorflow | matplotlib.mlab has been removed; please update the code | Bug | URL(s) with the issue: a link to the documentation entry. **Description of issue (what needs changing)** A bug I found in the TensorFlow tutorial "Gradient Boosted Trees: model understanding": `matplotlib.mlab` was removed in matplotlib version 3.1.0, but the tutorial still uses its `griddata` function for plotting, so I cannot reproduce the results by following it. Would you mind adjusting this to correct code? I am a beginner with TensorFlow and Python, so fixing it by myself is difficult. Thanks. |
| tensorflow/tensorflow | Hexagon delegate not working with quantized EfficientNet-Lite0 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. **System information** Mobile device: Intrinsyc SD820 dev board. TensorFlow installed from: binary. TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.6.9. Bazel version: 1.2.1. GCC/compiler version: 7.4.0. CUDA/cuDNN version and GPU model: n/a. **Describe the current behavior** The Hexagon DSP delegate works well on some Google-hosted quantized TFLite models (e.g. MobileNet) using the benchmark_model CLI. However, when using benchmark_model on the newly released EfficientNet-Lite models from the URL below, the DSP delegate fails to engage (0 nodes delegated). Example benchmark run: `adb shell benchmark_model --use_hexagon=true --input_layer=images --input_layer_shape=1,224,224,3 --graph=/sdcard/efficientnet-lite0-int8.tflite` Output (excerpt): `num_threads: [1]  min_warmup_runs: [1]  min_warmup_run_duration_seconds: [0.5]`<br>`use_legacy_nnapi: [0]  allow_fp16: [0]  require_full_delegation: [0]  enable_op_profiling: [0]  max_profiling_buffer_entries: [1024]  use_gpu: [0]  use_hexagon: [1]  hexagon_lib_path: [/data/local/tmp]  use_nnapi: [0]`<br>`Loaded model efficientnet-lite0-int8.tflite`<br>`INFO: Initialized TensorFlow Lite runtime.`<br>`remote_handle_control available and used`<br>`INFO: Created TensorFlow Lite delegate for Hexagon.`<br>`INFO: Hexagon delegate: 0 nodes delegated out of 64 nodes. Applied Hexagon delegate.`<br>`The input model file size (MB): 5.42276`<br>`Initialized session in 93.33ms`<br>`count=9 first=69875 curr=58811 min=58605 max=69875 avg=60623.8 std=3476`<br>`count=50 first=60688 curr=58755 min=58527 max=61817 avg=59263.9 std=711`<br>`Average inference timings in us: Warmup: 60623.8, Init: 93330, Inference: 59263.9`<br>`Peak memory footprint (MB): init=2.44922 overall=9.28125` At first I thought it was due to the new Quantize node at the beginning of the EfficientNet-Lite0 network, but pointing input_layer to the next node doesn't help. Is this a DSP delegate bug, or an issue with post-training quantization vs. quantization-aware training? If so, when can we expect TF2 quantization-aware training and/or DSP-delegate-compatible EfficientNet-Lite models? Thanks. **Describe the expected behavior** The majority of the 64 nodes in the quantized EfficientNet-Lite0 model above should have been delegated to the DSP, and the inference time should have been much faster. **Standalone code to reproduce the issue / other info** Include any logs or source code that would be helpful to diagnose the problem; large logs and files should be attached. |
| tensorflow/tensorflow | ResNet models in tf.keras.applications contain a bias term which should not be there | Bug | Pretrained ResNet models available as part of tf.keras.applications include a bias weight in all the convolutional layers, which is weird. What is even weirder is that the pretrained weights contain all zeros for the bias weights, which is definitely a problem. ResNet models do not use a bias term because of the use of batch normalization; even in the TensorFlow models repository, the ResNet construction code does not add a bias term in convolutions. The following code shows the weights of the ResNet50 model in tf.keras.applications: `import tensorflow as tf`<br>`model = tf.keras.applications.resnet50.ResNet50(include_top=False, weights='imagenet')`<br>`print(model.trainable_variables)` A small portion of the output showing the bias terms (sign information lost in this report): `array([1.76406968e-02, 2.18379945e-02, 6.38491847e-03, 1.56918354e-02, 1.33828130e-02, 7.58931879e-03, ..., 4.60040662e-03, 8.99072620e-05], dtype=float32)` This is definitely an issue which needs to be cleared up, as a lot of people depend upon tf.keras.applications. |
tensorflowtensorflow | ConvLSTM2D mixed precision cast | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04. TensorFlow installed from (source or binary) / TensorFlow version (use command below): 2.1.0. **Describe the current behavior** Presumably the dtype isn't correctly cast for ConvLSTM2D when using a mixed precision policy. The issue is not present in other convolutional layers such as Conv2D. **Describe the expected behavior** The filter of the Conv2D op for ConvLSTM2D should be float16 when using the mixed_float16 mixed precision policy. **Standalone code to reproduce the issue**

```python
import tensorflow as tf

policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.keras.mixed_precision.experimental.set_policy(policy)

x = tf.keras.Input(shape=(1, 10, 10, 3))
clstm1 = tf.keras.layers.ConvLSTM2D(filters=1, strides=1, kernel_size=1,
                                    padding='same', return_sequences=True)(x)
```

**Other info / logs**

```
tensorflow/core/python/framework/op_def_library.py in _apply_op_helper(op_type_name, name, **keywords)
    502               "%s type %s of argument '%s'." %
    503               (prefix, dtypes.as_dtype(attrs[input_arg.type_attr]).name,
--> 504                inferred_from[input_arg.type_attr]))
    505
    506         types = [values.dtype]

TypeError: Input 'filter' of 'Conv2D' Op has type float32 that does not match type float16 of argument 'input'.
```
tensorflowtensorflow | tf.where raises ValueError for RaggedTensor arguments | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): OS X. TensorFlow installed from (source or binary): `pip install tensorflow`. TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de410 2.1.0. Python version: 3.7.4. **Describe the current behavior** Calling `tf.where` with a RaggedTensor argument raises `ValueError: TypeError: object of type 'RaggedTensor' has no len()`. **Describe the expected behavior** No exception is raised; according to the documentation, `tf.where` is supposed to support RaggedTensors. **Standalone code to reproduce the issue**

```python
import tensorflow as tf

digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6]])
tf.where(tf.equal(digits, 1), 1, 0)
```

**Other info / logs**

```
In [4]: tf.where(tf.equal(digits, 1), 1, 0)

_FallbackException                        Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py in select_v2(condition, t, e, name)
   8679         ctx._context_handle, tld.device_name, "SelectV2", name,
   8680         tld.op_callbacks, condition, t, e)
-> 8681         return _result

_FallbackException: This function does not handle the case of the path where all inputs are not already EagerTensors.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-4> in <module>
----> 1 tf.where(tf.equal(digits, 1), 1, 0)

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py in where_v2(condition, x, y, name)
   3928     return gen_array_ops.where(condition=condition, name=name)
   3929   elif x is not None and y is not None:
-> 3930     return gen_math_ops.select_v2(condition=condition, t=x, e=y, name=name)
   3931   else:
   3932     raise ValueError("x and y must both be non-None or both be None.")

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py in select_v2_eager_fallback(condition, t, e, name, ctx)
   8706   _attr_T, _inputs_T = _execute.args_to_matching_eager([t, e], ctx)
   8707   (t, e) = _inputs_T
-> 8708   condition = _ops.convert_to_tensor(condition, _dtypes.bool)
   8709   _inputs_flat = [condition, t, e]

/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
     94     dtype = dtypes.as_dtype(dtype).as_datatype_enum
     95   ctx.ensure_initialized()
---> 96   return ops.EagerTensor(value, ctx.device_name, dtype)

ValueError: TypeError: object of type 'RaggedTensor' has no len()
```
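For reference, the semantics the reporter expects from `tf.where` on a ragged condition can be sketched in plain Python over nested lists; this is a hypothetical illustration of elementwise selection, not the TensorFlow implementation:

```python
def ragged_where(condition, x, y):
    # elementwise select over a ragged (nested-list) condition:
    # pick x where the condition is true, y otherwise, preserving row lengths
    return [[x if c else y for c in row] for row in condition]

digits = [[3, 1, 4, 1], [], [5, 9, 2], [6]]
mask = [[d == 1 for d in row] for row in digits]
# ragged_where(mask, 1, 0) -> [[0, 1, 0, 1], [], [0, 0, 0], [0]]
```

Each output row has the same length as the corresponding input row, which is exactly the ragged structure `tf.where` would need to preserve.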
tensorflowtensorflow | Quantization problem while reproducing the person detection example | Bug | **TensorFlow Micro system information** Host OS platform and distribution: Ubuntu 18.04.4 LTS. TensorFlow version: 1.15. Target platform: OpenMV Cam H7. **Problem description** I am trying to reproduce the person detection example. The documentation is not completely up to date, but with slight modifications I was able to train, freeze, and convert the model into tflite. Because of the target I need full integer quantization, so I set the input and output as uint8. When I try to load the model on the OpenMV, the quantize layer fails with the following error:

```
tensorflow/lite/micro/kernels/quantize.cc:51 input->type == kTfLiteFloat32 || input->type == kTfLiteInt16 was not true.
```

I guess that the problem is in the post-training quantization: the output model has an input of uint8 type and afterwards it has a quantize layer which tries to convert uint8 to int8. **Post-training quantization**

```python
import io
import numpy as np
import tensorflow as tf
from PIL import Image

def representative_dataset_gen():
    record_iterator = tf.python_io.tf_record_iterator(path='coco_dataset/val.record-00000-of-00010')
    count = 0
    for string_record in record_iterator:
        example = tf.train.Example()
        example.ParseFromString(string_record)
        a = io.BytesIO(example.features.feature['image/encoded'].bytes_list.value[0])
        image = Image.open(a)
        image = image.resize((96, 96))
        image = image.convert('L')
        array = np.array(image)
        array = np.expand_dims(array, axis=2)
        array = np.expand_dims(array, axis=0)
        array = ((array / 127.5) - 1.0).astype(np.float32)
        yield [array]
        count += 1
        if count > 300:
            break

converter = tf.lite.TFLiteConverter.from_frozen_graph('freeze.pb', ['input'], ['MobilenetV1/Predictions/Reshape_1'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
open('quantized.tflite', 'wb').write(tflite_quant_model)
```
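The uint8-vs-int8 mismatch described above comes down to a zero-point shift: with the same scale, a uint8 quantized value maps to the corresponding int8 value by subtracting 128. A minimal sketch of that conversion:

```python
def uint8_to_int8(q):
    # same scale, zero point shifted by 128: uint8 [0, 255] -> int8 [-128, 127]
    return q - 128

# uint8_to_int8(0) -> -128, uint8_to_int8(255) -> 127
```

This is the arithmetic the inserted Quantize layer performs, which is why a runtime that only accepts float32 or int16 input to that op rejects the converted model.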
tensorflowtensorflow | Training of tf.keras.layers.RNN with preceding Reshape using tf.shape fails | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device (if the issue happens on a mobile device): —. TensorFlow installed from: binary. TensorFlow version: 2.2.0-dev20200303. Python version: 3.6.9. Bazel version (if compiling from source): —. GCC/compiler version (if compiling from source): —. CUDA/cuDNN version: CPU only. GPU model and memory: CPU only. **Describe the current behavior** The model compiles, but training fails. **Describe the expected behavior** I would expect training to succeed. **Code to reproduce the issue**

```python
import tensorflow as tf
import numpy as np

batch_size = 1
num_units = 1
dim_others = 10

# build model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(shape=(None, dim_others)))
dim_time_read = tf.shape(model.output)[1]
model.add(tf.keras.layers.Reshape(target_shape=(dim_time_read, dim_others)))
model.add(tf.keras.layers.RNN(cell=tf.keras.layers.GRUCell(units=num_units)))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# train
training_input1 = np.zeros((batch_size, 1, dim_others))
training_input2 = np.zeros((batch_size, 2, dim_others))
training_output = np.zeros((batch_size, num_units))
model.fit(training_input1, training_output)
model.fit(training_input2, training_output)
```

**Other info / logs** Apparently the Reshape layer is not required in this stripped-down example; if one leaves it out, training succeeds. However, in my actual setup I need a Reshape layer before the RNN layer due to a preceding Conv2D layer (the context is training a CRNN with variable-length input). Using padding and masking is not an option, unfortunately, since tf.keras.layers.Conv2D does currently not support masking. If my usage of tf.shape is wrong, or if there is an alternative or workaround, please let me know. Traceback in case of failure:

```
Traceback (most recent call last):
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: strided_slice:0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 27, in <module>
    model.fit(training_input1, training_output)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 62, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 775, in fit
    tmp_logs = train_function(iterator)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 644, in _call
    return self._stateless_fn(*args, **kwds)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
    self.captured_inputs)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 74, in quick_execute
    "tensors, but found {}".format(keras_symbolic_tensors))
tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found
```
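One workaround worth trying for the issue above is to let Keras infer the time dimension itself by passing `-1` in the Reshape target shape instead of a `tf.shape` result. How a `-1` entry is resolved can be sketched in plain Python (a hypothetical illustration of the rule, not the Keras implementation):

```python
def resolve_target_shape(total_elements, target_shape):
    # sketch of how a single -1 entry in a reshape target shape is resolved
    # from the total element count of the incoming tensor
    known = 1
    for d in target_shape:
        if d != -1:
            known *= d
    assert total_elements % known == 0, "shapes are incompatible"
    return tuple(total_elements // known if d == -1 else d for d in target_shape)

# resolve_target_shape(2 * 10, (-1, 10)) -> (2, 10)
```

Resolving the unknown dimension this way happens inside the op rather than via a symbolic `tf.shape` tensor captured at model-building time, which is what triggers the `_SymbolicException` above.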
tensorflowtensorflow | TF2.1 cannot save a model trained under a distribution strategy, while TF2.0 could | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04. TensorFlow installed from (source or binary) / TensorFlow version: TF2.1 and TF2.0, from source. CUDA/cuDNN version / GPU model and memory: CUDA 10.1. You can obtain the TensorFlow version with: (TF 1.0) `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; (TF 2.0) `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`. **Describe the current behavior** A model trained under a distribution strategy cannot be saved in TF2.1, while it can be saved in TF2.0. **Standalone code to reproduce the issue** (the example code is copied from another issue)

```python
import tensorflow as tf

def build_and_compile_model():
    inputs = tf.keras.Input((20,))
    x = tf.keras.layers.BatchNormalization()(inputs)
    y = tf.keras.layers.Dense(2)(x)
    model = tf.keras.Model(inputs=inputs, outputs=y)
    model.compile(
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    return model

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = build_and_compile_model()
model.save('test', save_format='tf')
```

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. [image]
tensorflowtensorflow | TensorFlow fails in building GRU models in some cases | Bug | **System information** Have I written custom code (as opposed to using examples directory): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10, TensorFlow backend. TensorFlow version: 1.15.0. Python version: 3.6.9. CUDA/cuDNN version: —. GPU model and memory: —. **Describe the current behavior** When I directly build a model with a GRU layer on TensorFlow, I get a variable multiplication error, raised in tensorflow/core/python/ops/resource_variable_ops.py, line 1229. The error reveals an unsatisfactory implementation on TensorFlow in supporting variable multiplication:

```
RuntimeError: `variable *= value` not supported. Use `var.assign(var * value)` to modify the variable, or `var = var * value` to get a new Tensor object.
```

A similar issue also happens in LSTM and SimpleRNN. For the detailed parameters of GRU, you can refer to the following code snippet. **Key insight** The error indicates that variable multiplication of the form `variable *= value` is not well supported on TensorFlow; it should be extended to the full mode to conduct multiplication. This causes TensorFlow to be unable to build the model. **Code to reproduce the issue**

```python
import numpy as np
import keras.layers as L
from keras.engine import Model, Input

# using tensorflow as keras backend; input dtype defaults to float32
gru_kwargs = {'units': 2, 'dropout': 0.20430343923336958,
              'recurrent_dropout': 0.7597739154146002, 'implementation': 2,
              'reset_after': True, 'use_bias': True, 'return_sequences': False,
              'return_state': False, 'go_backwards': False, 'stateful': True,
              'unroll': False}
simplernn_kwargs = {'units': 2, 'dropout': 0.9030407578803185, 'use_bias': True,
                    'recurrent_dropout': 0.8988069898639027,
                    'return_sequences': False, 'return_state': False,
                    'go_backwards': True, 'stateful': True, 'unroll': True}

input_10 = np.random.random([2, 10, 8])
layer = L.recurrent.GRU(**gru_kwargs)
x = Input(batch_shape=input_10.shape)
y = layer(x)
bk_model = Model(x, y)
print('finish')
```
tensorflowtensorflow | TensorFlow can run and build a model with the corner case Dense(units=0) | Bug | **System information** Have I written custom code (as opposed to using examples directory): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10, TensorFlow backend. TensorFlow version: 1.15.0. Python version: 3.6.9. CUDA/cuDNN version: —. GPU model and memory: —. **Describe the current behavior** When I set Dense `units=0`, TensorFlow can build the model normally. `units=0` is an obviously unreasonable parameter which should be stopped before building the model, but TensorFlow treats it as a normal parameter. The model saved by TensorFlow may lead to potential risks. Does `units=0` have any special effect in TensorFlow? I don't see a corresponding instruction in the documentation. If not, should TensorFlow set a check for such unreasonable parameters to avoid risk and incorrect usage in models? **Code to reproduce the issue**

```python
import numpy as np
import keras.layers as L
import keras.backend as K
from keras.engine import Model, Input

# using tensorflow as keras backend; input dtype defaults to float32
kwargs = {'units': 0}
input_10 = np.random.random([1, 32, 32, 16])
layer = L.core.Dense(**kwargs)
x = Input(batch_shape=input_10.shape)
y = layer(x)
bk_model = Model(x, y)
print('finish')
```
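The kind of upfront check the report asks for can be sketched in a few lines; `validate_units` is a hypothetical helper, not an existing Keras function:

```python
def validate_units(units):
    # hypothetical parameter check of the kind the report asks for:
    # reject a non-positive layer width before any model is built
    if not isinstance(units, int) or units <= 0:
        raise ValueError(f"`units` must be a positive integer, got {units!r}")
    return units
```

With such a check in the layer constructor, `Dense(units=0)` would fail immediately with a clear message instead of producing a degenerate saved model.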
tensorflowtensorflow | TensorFlow can build and even run a model with Conv2D kernel_size=0 | Bug | **System information** Have I written custom code (as opposed to using examples directory): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10 / Linux Ubuntu 18.04, TensorFlow backend. TensorFlow version: 1.15.0 (CPU). Python version: 3.6.9. CUDA/cuDNN version: —. GPU model and memory: —. **Describe the current behavior** When I build a model with the unreasonable parameter Conv2D `kernel_size=0` on TensorFlow, it runs normally and even generates and saves a model. When I use this model to predict, TensorFlow spends about 5 minutes and still can't return an output. `kernel_size=0` seems like a corner case, because in the convolution operation it is impossible to calculate with a kernel size of 0. Does `kernel_size=0` have some special meaning in TensorFlow? I have not found any description of this case in the documentation. If it has no special meaning, should TensorFlow set a check for such unreasonable parameters to avoid risk and incorrect usage in models? **Code to reproduce the issue**

```python
import os
import numpy as np
import keras.layers as L
from keras.models import load_model
from keras.engine import Model, Input

kwargs = {'filters': 19, 'kernel_size': 0, 'padding': 'valid', 'strides': (2, 4),
          'dilation_rate': 1, 'data_format': 'channels_first'}
input_10 = np.random.random([1, 32, 32, 16])
layer = L.convolutional.Conv2D(**kwargs)
x = Input(batch_shape=input_10.shape)
y = layer(x)
bk_model = Model(x, y)
model_path = os.path.join('model.h5')
bk_model.save(model_path)
bk_model = load_model(model_path)
output = bk_model.predict(input_10)
print('finish')
```
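To see why `kernel_size=0` is degenerate, consider the standard output-length formula for a "valid"-padding convolution; the sketch below is an illustration of that formula, not TensorFlow's shape-inference code:

```python
def conv_output_length(input_length, kernel_size, stride):
    # standard "valid"-padding convolution output length;
    # with kernel_size == 0 the formula still returns a number,
    # but it no longer describes a real convolution, which is why
    # an upfront parameter check is wanted
    return (input_length - kernel_size) // stride + 1

# conv_output_length(32, 3, 1) -> 30
# conv_output_length(32, 0, 2) -> 17, a size no real convolution produces
```

Because the formula happily produces an output size for `kernel_size=0`, the model builds, but each output element is then a reduction over an empty window, which is undefined.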
tensorflowtensorflow | Improve the TFDS getting-started documentation | Bug | **URL(s) with the issue / description of issue / what needs changing** I completed step 1 and went on to get started with TFDS. I launched the Colab to continue with the overview; the Colab is a great option to easily run Python and TensorFlow. I completed the first command to install tensorflow and tensorflow-datasets. [image] The download ran, but it was not clear which version of TensorFlow was downloaded. The reason I was confused and wanted to know which version was installed is the disclaimer above, which states version 1.15 is required. In the second command I received an error message after running the Python script. [image] It was not clear to me whether this was just a warning or an error due to my current TensorFlow version. Step 2 was delightful. [image] Adding a disclaimer to include citations is great; however, why is it after the download step? This seems out of place and disrupts the developer workflow. Next was step 3, to initiate eager execution. Without a baseline on what eager execution is, I felt required to read the eager execution page before I could move forward. It's frustrating when a developer guide links out to other documentation and I feel compelled to read the other page, because it causes disruption in grasping one concept at a time. This frustration can be a drop-off point for developers trying to onboard to TensorFlow. [image] `enable_v2_behavior` is the command run after asking the user to enable eager execution; why is that? After reading the eager execution documentation this was clear, but it took time to dig for this info. Step 5: understanding what the `tfds.load` function does was frustrating. [image] I'm strongly encouraged to read the official TensorFlow guide, which is over 30 pages of material. I was 5 steps into this getting-started guide and then sent to another page that would reasonably take 4 focused hours to complete in addition. This is very frustrating when I am just trying to get an overview of TensorFlow Datasets. Step 5 does a great job here showing an example directly in relation to the above paragraph on versioning: I'm delighted and can move on without needing to read the hyperlinks. [image] Step 7 was confusing, since it states we can achieve the same output using the DatasetBuilder, but when you run the cell it only outputs the `ds_train` variable as opposed to building the graph. [image] **What should happen** I have organized answers to the above friction points into the following groupings. **TensorFlow installation**: to identify which version of TensorFlow I had installed, I ran a grep command in the Colab to output the following. [image] Having something like this output during installation would help users know what is downloaded and executed in the install command. **Eager execution**: a simple way to clarify what eager execution is would be a one-sentence definition in the guide, for example: "TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later." This way I get a quick understanding and don't feel compelled to read the linked page, which is a very long document. I liken this to applications having a tooltip: in consumer-facing applications, adding quick, non-intrusive explanations respects the user and keeps your users engaged and on the same page. Additionally, adding the following message to define the `enable_v2_behavior` command would help clarify that eager execution is enabled by default in TensorFlow 2. [image] **Linking to the official guide for TensorFlow Datasets**: [image] we need to respect the user and provide simplicity when onboarding someone new to TFDS; they have invested time to make it down to the 5th step. If it is imperative that the user get a baseline understanding of the TensorFlow API first, then we should put the disclaimer at the top of the overview to go read the guide before continuing. If it is not necessary, then we should summarize the API guide into 3-5 concise pillars of information that are required for the user to understand the rest of the overview. When the user completes the overview, we can encourage them to go deeper and read the rest of the guide. An analogy: when building a website, you respect the user by building a lightweight, modern, performant site, lazy-loading images only when they are needed, to improve performance and minimize how much data your users need to download. We should apply the same principle to information. **DatasetBuilder**: when introducing the DatasetBuilder, we should place this information right after step 5, "Call load", to show the two ways of loading datasets side by side. This way the user does not need to scroll back up the documentation and re-read before step 6, plotting the dataset.
tensorflowtensorflow | Broken link inside the friction log Google Doc | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. **URL(s) with the issue / description of issue / what needs changing** Inside the friction log document there is a broken link to the bug/performance template. Correct link: I see there are two templates (performance, bug); the friction log should update these links in the Google Doc.
tensorflowtensorflow | Gradient of einsum is incorrect for complex numbers | Bug | **System information** Attached is a small script reproducing the problem. OS platform and distribution: Ubuntu 18.04 LTS. TensorFlow installed from: pip. TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.7.5. Observed both on CPU and GPU. **Describe the current behavior** The gradient of this `tf.matmul` expression with respect to `p` is computed correctly (`h` is an n-by-n complex128 matrix, `p` is a float64 number, and `zero` is a float64 zero):

```python
def f(p, h):
    h1 = tf.complex(p, zero) * h
    return tf.abs(tf.reduce_sum(tf.matmul(h1, h1)))
```

The value of the following `tf.einsum` expression is the same and computed correctly, while the gradient with respect to `p` is wrong:

```python
def f(p, h):
    h1 = tf.complex(p, zero) * h
    return tf.abs(tf.reduce_sum(tf.einsum('ab,bc->ac', h1, h1)))
```

The problem happens only when the matrix is complex. **Describe the expected behavior** The two functions should produce the same value (which is working fine), and their gradients with respect to `p` should be the same (which is not happening). **Standalone code to reproduce the issue**
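A standard way to decide which of the two gradients above is wrong is a central finite-difference check; the helper below is a generic sketch of that technique, independent of TensorFlow:

```python
def numeric_grad(f, p, eps=1e-6):
    # central finite difference: a standard cross-check for an autodiff
    # gradient such as the einsum gradient reported above
    return (f(p + eps) - f(p - eps)) / (2 * eps)

# for f(p) = p**2 the check recovers f'(3) ~= 6
```

Evaluating `numeric_grad` on both versions of `f` and comparing against `tf.GradientTape` would pinpoint the einsum path as the one that disagrees.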
tensorflowtensorflow | Bug with callbacks in tf.keras models | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 19.10. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary) / TensorFlow version: binary, `pip install tensorflow==2.0`. Python version: 3.7.5. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version / GPU model and memory: n/a. **Describe the current behavior** When force-stopping a model during training, all the callbacks' `on_epoch_end` / `on_train_end` methods are called. This does not happen with a Keras model. **Describe the expected behavior** The other callbacks, namely `on_epoch_end` and `on_train_end`, aren't supposed to be called. **Standalone code to reproduce the issue** Provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook. **Other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | TF2.x API docs: example code in tf.Module raises an error; fix it | Bug | **URL(s) with the issue / description of issue / what needs changing** I extracted the example code, ran it, and got an error. It seems to be caused by inconsistent kwargs names:

```python
class Dense(tf.Module):
  def __init__(self, in_features, output_features, name=None):
    super(Dense, self).__init__(name=name)
    self.w = tf.Variable(
        tf.random.normal([input_features, output_features]), name='w')
    self.b = tf.Variable(tf.zeros([output_features]), name='b')
  def __call__(self, x):
    y = tf.matmul(x, self.w) + self.b
    return tf.nn.relu(y)

class MLP(tf.Module):
  def __init__(self, input_size, sizes, name=None):
    super(MLP, self).__init__(name=name)
    self.layers = []
    with self.name_scope:
      for size in sizes:
        self.layers.append(Dense(input_size=input_size, output_size=size))
        input_size = size
  @tf.Module.with_name_scope
  def __call__(self, x):
    for layer in self.layers:
      x = layer(x)
    return x

mlp = MLP(input_size=100, sizes=[30, 30])
```

Output:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 mlp = MLP(input_size=100, sizes=[30, 30])

<ipython-input> in __init__(self, input_size, sizes, name)
     17     with self.name_scope:
     18       for size in sizes:
---> 19         self.layers.append(Dense(input_size=input_size, output_size=size))
     20         input_size = size
     21

TypeError: __init__() got an unexpected keyword argument 'input_size'
```

**Submit a pull request?** I think only a few lines need to be modified to fix it:

```python
class Dense(tf.Module):
  def __init__(self, input_size, output_size, name=None):
    super(Dense, self).__init__(name=name)
    self.w = tf.Variable(tf.random.normal([input_size, output_size]), name='w')
    self.b = tf.Variable(tf.zeros([output_size]), name='b')
  def __call__(self, x):
    y = tf.matmul(x, self.w) + self.b
    return tf.nn.relu(y)

class MLP(tf.Module):
  def __init__(self, input_size, sizes, name=None):
    super(MLP, self).__init__(name=name)
    self.layers = []
    with self.name_scope:
      for size in sizes:
        self.layers.append(Dense(input_size=input_size, output_size=size))
        input_size = size
  @tf.Module.with_name_scope
  def __call__(self, x):
    for layer in self.layers:
      x = layer(x)
    return x
```

Should I submit a PR to fix this?
tensorflowtensorflow | Unable to remove a model and release GPU memory | Bug | **System information** Custom code; Ubuntu 16.04; TensorFlow installed from source/with pip; TensorFlow version v2.0.0; Python version 3.6; CUDA 10.0; Titan Xp. At first I tried to load the pretrained Transformer-base model A and then add additional word embeddings (a dynamic vocabulary) to it; however, that doesn't work, i.e., one can't add values to a tf.Variable. Instead, I created another model B with the size of the dynamic vocabulary and set its weights from the existing Transformer model A. After that I no longer need the initial model A, so I want to remove it and clear its GPU memory. I tried the following commands (to delete only model A, not model B), but they didn't work:

```python
tf.keras.backend.clear_session()
del model
gc.collect()
```

According to #36465 I could use the multiprocessing approach, but then I would have to remove both model A and model B, so I am looking for a cleaner solution. Thanks.
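For reference, the multiprocessing workaround mentioned in the issue can be sketched in plain Python; `run_isolated` is a hypothetical helper, and the sketch assumes a POSIX system where the "fork" start method is available:

```python
import multiprocessing

def run_isolated(task):
    # sketch of the multiprocessing workaround: run the work in a child
    # process so that *all* of its memory (on a real setup, including GPU
    # allocations made by TensorFlow) is released when the process exits;
    # assumes a POSIX system where "fork" is available
    ctx = multiprocessing.get_context("fork")
    queue = ctx.Queue()
    proc = ctx.Process(target=lambda q: q.put(task()), args=(queue,))
    proc.start()
    result = queue.get()
    proc.join()
    return result
```

The drawback the reporter points out applies here too: everything created inside `task` is destroyed when the child exits, so model B would have to be rebuilt or its weights returned through the queue.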
tensorflowtensorflow | Dataset is not cached after a model.predict call; issue present in TF2.0.0 but not in TF2.1.0 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): issue present on Linux and Windows. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0.0 (no issue with 2.1.0). Python version: 3.7.5. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6. GPU model and memory: issue happens with and without GPU. **Describe the current behavior** The dataset is not correctly put in cache after the first call to `predict`, as it should be, so the 2nd call to `predict` returns different predictions than the 1st call, despite `cache` having been used on the given dataset. **Describe the expected behavior** The two calls to `predict` in the code below should return the same predictions, and thus the assert should pass. **Standalone code to reproduce the issue**

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(name='input_data', shape=(1,), dtype='float32')
dummy_model = tf.keras.models.Model(inputs=inputs, outputs=inputs * 2)
ds = tf.data.Dataset.range(10).shuffle(10).batch(1).cache()
# len(list(ds))  # the assert passes if we uncomment this line, because it forces the caching of the dataset
assert (dummy_model.predict(ds) == dummy_model.predict(ds)).all()
```

Provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook. **Other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
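The caching semantics at play in the issue above can be sketched in plain Python; `cached` is a hypothetical stand-in for `Dataset.cache()`, showing why forcing one complete pass (the commented `len(list(ds))` line) works around the bug:

```python
import random

def cached(make_iterator):
    # sketch of Dataset.cache() semantics: the first *complete* pass records
    # the elements; later passes replay the recording instead of re-running
    # the (possibly random) source pipeline
    store, filled = [], False
    def iterate():
        nonlocal filled
        if filled:
            yield from store
            return
        store.clear()
        for item in make_iterator():
            store.append(item)
            yield item
        filled = True
    return iterate

rng = random.Random(0)
ds = cached(lambda: iter(rng.sample(range(10), 10)))
first, second = list(ds()), list(ds())
# first == second: the shuffled order is replayed from the cache
```

If the first pass is abandoned before completion, the cache is never marked as filled and the next pass re-runs the random source, which is the behavior the reporter observed from `predict` in TF2.0.0.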
tensorflowtensorflow | tf.name_scope does not obey its own documented rule | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS (GNU/Linux 4.14.137 x86_64). TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.1.0-0-ge5bf8de410. Python version: 3.6.9 (default, Nov 7 2019, 10:44:02). According to the documentation, if the scope name already exists, the name will be made unique by appending "_n". For example, calling my_op a second time will generate "my_op_1", etc. However, it turns out that this is not the case, nor did I manage to find any code inside tf.name_scope that might enforce uniqueness.

import tensorflow as tf
layer = tf.keras.Input(shape=(None,))

def get_shape(layer):
    with tf.name_scope('scope') as scope:
        return tf.shape(layer, name=scope)

get_shape(layer)  # the first call succeeds
get_shape(layer)  # the second call fails

Traceback (most recent call last):
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/framework/ops.py", line 1619, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Duplicate node name in graph: 'scope'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in get_shape
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/array_ops.py", line 519, in shape_v2
    return shape(input, name, out_type)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/array_ops.py", line 545, in shape
    return shape_internal(input, name, optimize=True, out_type=out_type)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/array_ops.py", line 573, in shape_internal
    return gen_array_ops.shape(input, name=name, out_type=out_type)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/gen_array_ops.py", line 8234, in shape
    "Shape", input=input, out_type=out_type, name=name)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/framework/op_def_library.py", line 742, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/framework/func_graph.py", line 595, in _create_op_internal
    compute_device)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/framework/ops.py", line 3322, in _create_op_internal
    op_def=op_def)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/framework/ops.py", line 1786, in __init__
    control_input_ops)
  File "tensorflow-2.1.0/python3.6/tensorflow_core/python/framework/ops.py", line 1622, in _create_c_op
    raise ValueError(str(e))
ValueError: Duplicate node name in graph: 'scope' |
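The "append _n" rule the documentation describes can be sketched in plain Python — a hypothetical helper for illustration only, not the actual TensorFlow implementation the report is about:

```python
# Sketch of the documented "append _n" uniquification rule (illustrative only).
class NameScopeRegistry:
    def __init__(self):
        self._counts = {}

    def unique_name(self, name):
        # First use returns the name unchanged; later uses append "_1", "_2", ...
        n = self._counts.get(name, 0)
        self._counts[name] = n + 1
        return name if n == 0 else f"{name}_{n}"

registry = NameScopeRegistry()
print(registry.unique_name("my_op"))  # my_op
print(registry.unique_name("my_op"))  # my_op_1
print(registry.unique_name("my_op"))  # my_op_2
```

The report's point is that tf.name_scope in TF 2.1 raises a duplicate-node-name error instead of behaving like this sketch.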
tensorflowtensorflow | Given examples on the website have no model.predict example | Bug | First of all, you have a great website for understanding the TensorFlow library. I am a newbie and want to understand it. The point is: I went through the examples, and except for basic image classification, there is no example of how to feed my own data, which has no labels, and get predictions on my own data. |
tensorflowtensorflow | Getting KeyError 'stack/values_0' when using a custom network layer built with tf.keras.layers.Layer | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from: Anaconda. Python version: 3.7.3. CUDA/cuDNN version: 10.0 / 7.0. GPU model and memory: 6 GB. TensorFlow version: 2.0.0. Custom network layer:

import tensorflow as tf

class ROIPoolingLayer(tf.keras.layers.Layer):
    def __init__(self, pooled_height, pooled_width, **kwargs):
        self.pooled_height = pooled_height
        self.pooled_width = pooled_width
        super(ROIPoolingLayer, self).__init__(**kwargs)

    @staticmethod
    def _pool_roi(feature_map, roi, pooled_height, pooled_width):
        """Applies ROI pooling to a single image and a single ROI."""
        # Compute the region of interest
        feature_map_height = int(feature_map.shape[0])
        feature_map_width = int(feature_map.shape[1])
        h_start = tf.cast(feature_map_height * roi[0], dtype=tf.int32)
        w_start = tf.cast(feature_map_width * roi[1], dtype=tf.int32)
        h_end = tf.cast(feature_map_height * roi[2], dtype=tf.int32)
        w_end = tf.cast(feature_map_width * roi[3], dtype=tf.int32)
        region = feature_map[h_start:h_end, w_start:w_end, :]
        # Divide the region into non-overlapping areas
        region_height = h_end - h_start
        region_width = w_end - w_start
        h_step = tf.cast(region_height / pooled_height, dtype=tf.int32)
        w_step = tf.cast(region_width / pooled_width, dtype=tf.int32)
        areas = [[(i * h_step, j * w_step,
                   (i + 1) * h_step if i + 1 < pooled_height else region_height,
                   (j + 1) * w_step if j + 1 < pooled_width else region_width)
                  for j in range(pooled_width)]
                 for i in range(pooled_height)]
        # Take the maximum of each area and stack the result
        def pool_area(x):
            return tf.reduce_max(region[x[0]:x[2], x[1]:x[3], :], axis=[0, 1])
        pooled_features = tf.stack([[pool_area(x) for x in row] for row in areas])
        return pooled_features

    @staticmethod
    def _pool_rois(feature_map, rois, pooled_height, pooled_width):
        """Applies ROI pooling for a single image and various ROIs."""
        def curried_pool_roi(roi):
            return ROIPoolingLayer._pool_roi(feature_map, roi,
                                             pooled_height, pooled_width)
        pooled_areas = tf.map_fn(curried_pool_roi, rois, dtype=tf.float32)
        return pooled_areas

    def compute_output_shape(self, input_shape):
        """Returns the shape of the ROI pooling layer output."""
        feature_map_shape, rois_shape = input_shape
        assert feature_map_shape[0] == rois_shape[0]
        batch_size = feature_map_shape[0]
        n_rois = rois_shape[1]
        n_channels = feature_map_shape[3]
        return tuple([batch_size, n_rois,
                      self.pooled_height, self.pooled_width, n_channels])

    def get_config(self):
        config = {'pooled_height': self.pooled_height,
                  'pooled_width': self.pooled_width}
        base_config = super(ROIPoolingLayer, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def call(self, inputs, **kwargs):
        """Maps the input tensors of the ROI layer to its output."""
        def curried_pool_rois(x):
            return ROIPoolingLayer._pool_rois(x[0], x[1],
                                              self.pooled_height, self.pooled_width)
        pooled_areas = tf.map_fn(curried_pool_rois, inputs, dtype=tf.float32)
        if pooled_areas.shape[1] == 1:
            return tf.squeeze(pooled_areas, axis=1)
        return pooled_areas

Test code:

import numpy as np
import tensorflow as tf
from net_layer.roi_pool import ROIPoolingLayer

def test_for_tf2():
    input_img = tf.keras.Input(shape=(200, 100, 1), batch_size=1, name='input_img')
    roi_regions = ROIPoolingLayer(3, 7)(
        [input_img,
         np.asarray([[[0.5, 0.2, 0.7, 0.4], [0.0, 0.0, 1.0, 1.0]]], dtype=np.float32)])
    fc0 = tf.keras.layers.Flatten()(roi_regions)
    fc1 = tf.keras.layers.Dense(30, activation=None, name='pose_ren_output')(fc0)
    model = tf.keras.Model(inputs=input_img, outputs=fc1, name='test_model')
    model.summary()

def main():
    test_for_tf2()

if __name__ == '__main__':
    main()

Error info:

Traceback (most recent call last):
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/test/roi_pooling_test.py", line 59, in <module>
    main()
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/test/roi_pooling_test.py", line 55, in main
    test_for_tf2()
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/test/roi_pooling_test.py", line 48, in test_for_tf2
    roi_regions = ROIPoolingLayer(3, 7)([input_img, np.asarray([[[0.5, 0.2, 0.7, 0.4], [0.0, 0.0, 1.0, 1.0]]], dtype=np.float32)])
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/net_layer/roi_pool.py", line 95, in call
    pooled_areas = tf.map_fn(curried_pool_rois, inputs, dtype=tf.float32)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/map_fn.py", line 268, in map_fn
    maximum_iterations=n)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2714, in while_loop
    loop_vars = body(*loop_vars)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
    body = lambda i, lv: (i + 1, orig_body(*lv))
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/map_fn.py", line 257, in compute
    packed_fn_values = fn(packed_values)
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/net_layer/roi_pool.py", line 93, in curried_pool_rois
    return ROIPoolingLayer._pool_rois(x[0], x[1], self.pooled_height, self.pooled_width)
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/net_layer/roi_pool.py", line 61, in _pool_rois
    pooled_areas = tf.map_fn(curried_pool_roi, rois, dtype=tf.float32)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/map_fn.py", line 268, in map_fn
    maximum_iterations=n)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2714, in while_loop
    loop_vars = body(*loop_vars)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
    body = lambda i, lv: (i + 1, orig_body(*lv))
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/map_fn.py", line 257, in compute
    packed_fn_values = fn(packed_values)
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/net_layer/roi_pool.py", line 59, in curried_pool_roi
    return ROIPoolingLayer._pool_roi(feature_map, roi, pooled_height, pooled_width)
  File "/home/lxz/PycharmProjects/pose_ren_tf2/src/net_layer/roi_pool.py", line 49, in _pool_roi
    pooled_features = tf.stack([[pool_area(x) for x in row] for row in areas])
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 1165, in stack
    return gen_array_ops.pack(values, axis=axis, name=name)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 6304, in pack
    "Pack", values=values, axis=axis, name=name)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 793, in _apply_op_helper
    op_def=op_def)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3360, in create_op
    attrs, op_def, compute_device)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3429, in _create_op_internal
    op_def=op_def)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1792, in __init__
    self._control_flow_post_processing()
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1800, in _control_flow_post_processing
    for input_tensor in self.inputs:
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 2167, in inputs
    for tf_output in tf_outputs]
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 2167, in <listcomp>
    for tf_output in tf_outputs]
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3801, in _get_tensor_by_tf_output
    op = self._get_operation_by_tf_operation(tf_output.oper)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3765, in _get_operation_by_tf_operation
    return self._get_operation_by_name_unsafe(op_name)
  File "/home/lxz/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3761, in _get_operation_by_name_unsafe
    return self._nodes_by_name[name]
KeyError: 'stack/values_0' |
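Independently of the KeyError itself (which is triggered by the stacked list comprehension inside nested tf.map_fn calls), the area-partitioning arithmetic in _pool_roi can be checked with NumPy alone. The following is an illustrative re-implementation of just that arithmetic, not the TensorFlow code path that fails:

```python
import numpy as np

def pool_region(region, pooled_height, pooled_width):
    """Max-pool an (H, W, C) region into a (pooled_height, pooled_width, C) grid."""
    region_height, region_width = region.shape[0], region.shape[1]
    h_step = region_height // pooled_height
    w_step = region_width // pooled_width
    out = np.empty((pooled_height, pooled_width, region.shape[2]), dtype=region.dtype)
    for i in range(pooled_height):
        for j in range(pooled_width):
            # The last cell in each row/column absorbs the remainder,
            # mirroring the "if i + 1 < pooled_height else region_height" logic.
            h_end = (i + 1) * h_step if i + 1 < pooled_height else region_height
            w_end = (j + 1) * w_step if j + 1 < pooled_width else region_width
            out[i, j] = region[i * h_step:h_end, j * w_step:w_end].max(axis=(0, 1))
    return out

region = np.arange(5 * 7 * 1, dtype=np.float32).reshape(5, 7, 1)
pooled = pool_region(region, 3, 7)
print(pooled.shape)  # (3, 7, 1)
```

Because the values are monotonically increasing, the bottom-right pooled cell must contain the region's global maximum.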
tensorflowtensorflow | Blas GEMM launch failed / failed to create cublas handle: CUBLAS_STATUS_INTERNAL_ERROR | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template. System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10. TensorFlow installed from (source or binary) / TensorFlow version (use command below): TensorFlow 2.0.0. Python version: 3.7.3. CUDA/cuDNN version: CUDA 10.2, cuDNN 7.6.5. GPU model and memory: GeForce GTX 1060, 6 GB. You can collect some of this information using our environment capture script. Error log as below:

2020-03-03 20:36:53.881317: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_INTERNAL_ERROR
2020-03-03 20:36:53.886872: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_INTERNAL_ERROR
Traceback (most recent call last):
  File "main.py", line 69, in <module>
    train()
  File "main.py", line 61, in train
    train_epoch(epoch)
  File "main.py", line 43, in train_epoch
    out = model(x)
  File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\sequential.py", line 270, in call
    outputs = layer(inputs, **kwargs)
  File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\layers\core.py", line 1056, in call
    outputs = gen_math_ops.mat_mul(inputs, self.kernel)
  File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 6126, in mat_mul
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(200, 784), b.shape=(784, 512), m=200, n=512, k=784 [Op:MatMul]

Describe the current behavior: two errors: 1) failed to create cublas handle; 2) Blas GEMM launch failed. I don't know where the problem is. This is my first time using a GPU for coding; I thought the installation of CUDA and TensorFlow went well, and everything was OK until the problem occurred as above. |
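A frequent cause of CUBLAS_STATUS_INTERNAL_ERROR at startup is that TensorFlow tries to grab essentially all GPU memory at once; enabling memory growth often helps. The sketch below is guarded so it degrades gracefully when TensorFlow or a GPU is absent; whether it fixes this particular report is an assumption, not a confirmed diagnosis:

```python
import importlib.util

def enable_gpu_memory_growth():
    """Ask TensorFlow to allocate GPU memory on demand instead of all at once."""
    if importlib.util.find_spec("tensorflow") is None:
        return "tensorflow not installed"
    import tensorflow as tf
    gpus = tf.config.experimental.list_physical_devices("GPU")
    for gpu in gpus:
        # Must be called before the first GPU op initializes the device.
        tf.config.experimental.set_memory_growth(gpu, True)
    return f"memory growth enabled on {len(gpus)} gpu(s)"

print(enable_gpu_memory_growth())
```

Call this at the very top of the training script, before building any model.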
tensorflowtensorflow | InvalidArgumentError: indices[0,4] = 10 is not in [0, 10) [[node model/embedding_2/embedding_lookup (defined at 38)]] [Op:__inference_distributed_function_20003] Errors may have originated from an input operation. Input Source operations connected to node model/embedding_2/embedding_lookup: model/embedding_2/embedding_lookup/19444 (defined at C:\Users\naik9\AppData\Local\Programs\Python\Python37\lib\contextlib.py:112) Function call stack: distributed_function | Bug |

Train on 10240 samples, validate on 1284 samples
Epoch 1/30
  40/10240 - ETA: 4:26
WARNING:tensorflow:Can save best model only with val_categorical_accuracy available, skipping.

InvalidArgumentError - Traceback (most recent call last):
  File "<ipython-input>", line 1, in <module>
    train_model(cnn, 'cnn', use_pos, use_meta, use_dep)
  File "<ipython-input>", in train_model(name, use_pos, use_meta, use_dep), around lines 36-40
    ({'main_input': x_val, 'aux_input': x_val_meta, 'dep_input': x_val_dep},
     {'main_output': y_val}),
    callbacks=[tb, csv_logger, checkpoint])
    else: model.fit(...)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 342, in fit
    total_epochs=epochs)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 128, in run_one_epoch
    batch_outs = execution_function(iterator)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py", line 98, in execution_function
    distributed_function(input_fn))
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\eager\def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\eager\def_function.py", line 632, in _call
    return self._stateless_fn(*args, **kwds)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\eager\function.py", line 2363, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\eager\function.py", line 1611, in _filtered_call
    self.captured_inputs)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\eager\function.py", line 1692, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\eager\function.py", line 545, in call
    ctx=ctx)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "C:\Users\n\AppData\Local\Programs\Python\Python37\lib\site-packages\six.py", in raise_from
InvalidArgumentError: indices[0,4] = 10 is not in [0, 10)
  [[node model/embedding_2/embedding_lookup (defined at 38)]] [Op:__inference_distributed_function_20003]
Errors may have originated from an input operation.
Input Source operations connected to node model/embedding_2/embedding_lookup:
  model/embedding_2/embedding_lookup/19444 (defined at C:\Users\n\AppData\Local\Programs\Python\Python37\lib\contextlib.py:112)
Function call stack: distributed_function |
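The error says that token id 10 was looked up in an embedding table whose valid index range is [0, 10), i.e. the Embedding layer's input_dim is one too small for the data. A small framework-independent sanity check (hypothetical helper name) catches this before training starts:

```python
def check_embedding_inputs(sequences, input_dim):
    """Verify every token id fits an embedding table with `input_dim` rows."""
    max_id = max(tok for seq in sequences for tok in seq)
    if max_id >= input_dim:
        raise ValueError(
            f"max token id {max_id} is not in [0, {input_dim}); "
            f"use input_dim >= {max_id + 1}")
    return max_id

x_train = [[1, 3, 9, 10, 2]]   # token id 10 would overflow a 10-row table
try:
    check_embedding_inputs(x_train, input_dim=10)
except ValueError as e:
    print(e)
print(check_embedding_inputs(x_train, input_dim=11))  # 10
```

The usual fix is to size the layer as Embedding(input_dim=max_id + 1, ...), since ids are zero-based.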
tensorflowtensorflow | WARNING:tensorflow:AutoGraph could not transform ... and will run it as-is | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10 Professional. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: no. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de410. Python version: 3.6.8. Bazel/GCC version (if compiled from source): n/a. CUDA/cuDNN version: CUDA 10.2, cuDNN 7.6. GPU model and memory: n/a. You can collect some of this information using our environment capture script. Describe the current behavior / describe the expected behavior / standalone code to reproduce the issue — here is some code:

class MultiHeadAttention(keras.layers.Layer):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        self.num_heads = num_heads
        self.d_model = d_model
        assert self.d_model % self.num_heads == 0
        self.depth = self.d_model // self.num_heads
        self.wq = keras.layers.Dense(self.d_model)
        self.wk = keras.layers.Dense(self.d_model)
        self.wv = keras.layers.Dense(self.d_model)
        self.dense = keras.layers.Dense(self.d_model)

    def call(self, q, k, v, mask):
        batch_size = tf.shape(q)[0]

Other info / logs — the main warning (maybe it is a bug):

WARNING:tensorflow:AutoGraph could not transform <...> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: expected exactly one node node, found []

(the same warning is repeated many times in the log, once per method AutoGraph fails to convert) |
tensorflowtensorflow | Saving a model with tf.keras.layers.RNN and stateful=True with save_format='tf' fails | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device, if the issue happens on a mobile device: no. TensorFlow installed from: binary. TensorFlow version: 2.2.0.dev20200228. Python version: 3.6.9. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: CPU only. GPU model and memory: CPU only. Describe the current behavior: saving a tf.keras.Sequential model containing tf.keras.layers.RNN with stateful=True using save_format='tf' fails. Describe the expected behavior: saving should succeed. Code to reproduce the issue:

import tensorflow as tf

number_of_cells = 2
model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(batch_input_shape=(1, 1, 1)))
cells = []
for _ in range(number_of_cells):
    cells.append(tf.keras.layers.GRUCell(10))
model.add(tf.keras.layers.RNN(cells, stateful=True))
model.compile()
model.save('rnn_tf', save_format='tf')
model2 = tf.keras.models.load_model('rnn_tf')

Other info / logs: saving succeeds with save_format='h5'. Traceback in case of failure:

Traceback (most recent call last):
  File "test.py", line 18, in <module>
    model.save('rnn_tf', save_format='tf')
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1044, in save
    signatures, options)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 138, in save_model
    signatures, options)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save.py", line 78, in save
    save_lib.save(model, filepath, signatures, options)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 951, in save
    obj, export_dir, signatures, options, meta_graph_def)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 1027, in _build_meta_graph
    options.namespace_whitelist)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 629, in _fill_meta_graph_def
    signatures = _generate_signatures(signature_functions, resource_map)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 497, in _generate_signatures
    function, mapped_inputs, resource_map)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 449, in _call_function_with_mapped_captures
    function, resource_map)
  File "/home/test/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 372, in _map_captures_to_created_tensors
    .format(interior))
AssertionError: Tried to export a function which references untracked object Tensor("2164:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly. |
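Since the report notes that the H5 path works, another common stopgap while tf-format saving is broken is to persist only the weights and rebuild the architecture from code. The pattern is sketched here with a stand-in model class so it runs standalone; with Keras the analogues would be model.save_weights / model.load_weights:

```python
import os
import pickle
import tempfile

class TinyModel:
    """Stand-in for a model whose architecture is defined in code."""
    def __init__(self):
        self.weights = [0.0] * 10

def save_weights(model, path):
    # Persist only the weights; the architecture is not serialized at all.
    with open(path, "wb") as f:
        pickle.dump(model.weights, f)

def build_and_restore(path):
    model = TinyModel()          # architecture comes from code, not the file
    with open(path, "rb") as f:
        model.weights = pickle.load(f)
    return model

m = TinyModel()
m.weights = [1.0, 2.0]
path = os.path.join(tempfile.mkdtemp(), "weights.pkl")
save_weights(m, path)
m2 = build_and_restore(path)
print(m2.weights)  # [1.0, 2.0]
```

This sidesteps the SavedModel function-tracing step entirely, at the cost of needing the model-building code at load time.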
tensorflowtensorflow | How to deserialize from a dict with tf.keras.losses.get | Bug | URL(s) with the issue: ... Description of issue (what needs changing): currently there is no documentation at all. It is fairly straightforward to use by inputting a string denoting a default class name, e.g.:

identifier = 'categorical_crossentropy'
tf.keras.losses.get(identifier)

However, I am having issues with dictionary objects, e.g.:

identifier = {'class_name': 'categorical_crossentropy', 'config': {'from_logits': True}}
tf.keras.losses.get(identifier)

returns:

Traceback (most recent call last):
  File "main.py", line 85, in <module>
    loss = tf.keras.losses.get(jsn)
  File "C:\Users\jopatterson\Documents\autoprime_ml\env\lib\site-packages\tensorflow_core\python\keras\losses.py", line 1186, in get
    return deserialize(identifier)
  File "C:\Users\jopatterson\Documents\autoprime_ml\env\lib\site-packages\tensorflow_core\python\keras\losses.py", line 1175, in deserialize
    printable_module_name='loss function')
  File "C:\Users\jopatterson\Documents\autoprime_ml\env\lib\site-packages\tensorflow_core\python\keras\utils\generic_utils.py", line 315, in deserialize_keras_object
    return cls(**cls_config)
TypeError: categorical_crossentropy() missing 2 required positional arguments: 'y_true' and 'y_pred'

I believe it is failing because cls is the already-resolved loss function, and it is passing cls_config as its input rather than using it as parameters during initialization. Clear description: this is a very useful method for abstracting implementations of loss objects. Correct links: this is where the issue occurs, within the deserialize_keras_object function (L382). Parameters defined: there currently is no documentation for this, as identifier can be a string, dictionary, or callable. Returns defined: returns are not defined, but it is fairly obvious that it returns a loss function. Raises listed and defined: no. Usage example: no. Request visuals, if applicable: no. Submit a pull request?: I would do this if I had enough knowledge to do so; unfortunately, I only know how it works with identifiers as strings. |
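The reporter's hypothesis — that the dict form fails because the retrieved object is a plain loss *function* being called with the config, instead of a loss *class* being instantiated from it — can be illustrated with a pure-Python mock of the registry. The names below are stand-ins, not the Keras source:

```python
class CategoricalCrossentropy:
    """Mock loss class standing in for tf.keras.losses.CategoricalCrossentropy."""
    def __init__(self, from_logits=False):
        self.from_logits = from_logits

REGISTRY = {"CategoricalCrossentropy": CategoricalCrossentropy}

def get(identifier):
    # String form: return the registered object as-is.
    if isinstance(identifier, str):
        return REGISTRY[identifier]
    # Dict form: instantiate the class from its config -- the behavior the
    # reporter expected from tf.keras.losses.get with a dict identifier.
    if isinstance(identifier, dict):
        cls = REGISTRY[identifier["class_name"]]
        return cls(**identifier.get("config", {}))
    raise TypeError(f"cannot interpret identifier: {identifier!r}")

loss = get({"class_name": "CategoricalCrossentropy",
            "config": {"from_logits": True}})
print(loss.from_logits)  # True
```

In the real API, the reported TypeError suggests the dict maps 'class_name' onto the function categorical_crossentropy, for which cls(**config) is a plain call missing y_true/y_pred; using the class name 'CategoricalCrossentropy' in the dict may behave differently, but that is an assumption to verify against the Keras source.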
tensorflowtensorflow | tf.keras.experimental.WideDeepModel example has wrong inputs for the constructor | Bug | The tutorial example for tf.keras.experimental.WideDeepModel instantiates the class with the DNN model as the first argument and the linear model as the second, as shown in example 4. However, the class should be instantiated with the linear model first, then the DNN model, as shown at L72 of the source. Please update the tutorial example. |
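A minimal mock makes clear why the positional order matters: swapping the arguments produces a model that silently routes each sub-model to the wrong role. The class below is a stand-in mirroring the argument order the report says the source defines (linear first, then DNN), not the real Keras class:

```python
class WideDeepModel:
    """Stand-in mirroring the reported argument order: linear first, DNN second."""
    def __init__(self, linear_model, dnn_model):
        self.linear_model = linear_model
        self.dnn_model = dnn_model

linear = "linear_model"   # placeholders for real Keras sub-models
dnn = "dnn_model"

combined = WideDeepModel(linear, dnn)   # order per the source
swapped = WideDeepModel(dnn, linear)    # the tutorial's order: no error, wrong wiring
print(combined.linear_model)  # linear_model
print(swapped.linear_model)   # dnn_model
```

Because both arguments are positional models, the mistake raises no exception, which is what makes the documentation error easy to miss.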
tensorflowtensorflow | Docs say that sigmoid is mapped to TFLite, but the TFLite schema doesn't mention sigmoid | Bug | The docs list tf.sigmoid as mappable to TFLite here, but the TFLite schema doesn't mention sigmoid. |
tensorflowtensorflow | steps_per_epoch not propagated to Keras callbacks in tf-nightly | Bug | This behavior was tested with tf-nightly 2.2.0.dev20200302 on macOS. Some of our Keras callbacks rely on accessing self.params['steps'] in order to take action based on how far along the process is into the current epoch (e.g., learning-rate schedules). However, it appears that this param is no longer being set properly, even when model.fit(..., steps_per_epoch=...) is called. I did some digging, and it looks like the issue can be traced to the way CallbackList's 'steps' is being set here (L753). The data_adapter object is an instance of DatasetAdapter, which is initialized with the steps_per_epoch param and assigns it to self._user_steps, but the get_size() method always returns None here (L704). Is there a workaround for this? Ideally, we'd like callbacks to have access to the steps_per_epoch information without having to pass it in to each callback manually. You can see an example of how we use this in Horovod in the LearningRateWarmupCallback from the tensorflow2_keras_mnist.py (L77) example. As you can see in the implementation of that callback here (L108), we are attempting to access self.params.get('steps'), which used to get set correctly prior to v2.2. cc @reedwm @pkanwar23 |
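The manual workaround the report wants to avoid — passing steps_per_epoch into each callback — can at least be made non-intrusive by preferring the framework-supplied value and falling back only when it is missing. A framework-independent sketch (mocking the params mechanism; names are hypothetical):

```python
class WarmupCallback:
    """Callback that prefers params['steps'] but accepts an explicit fallback."""
    def __init__(self, steps_per_epoch=None):
        self._steps_per_epoch = steps_per_epoch
        self.params = {}   # in Keras, the framework sets this via set_params()

    @property
    def steps(self):
        # Use the framework-provided value when present, else the fallback.
        return self.params.get("steps") or self._steps_per_epoch

cb = WarmupCallback(steps_per_epoch=500)
cb.params = {"steps": None}   # what the report says tf-nightly 2.2 provides
print(cb.steps)               # 500
cb.params = {"steps": 100}    # what earlier versions provided
print(cb.steps)               # 100
```

Once the regression is fixed, callbacks written this way automatically go back to trusting the framework value.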
tensorflowtensorflow | Cloning a model loaded from disk (SavedModel) fails | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10. Mobile device, if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d382ca (2.0.0). Python version: 3.7.6. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. You can collect some of this information using our environment capture script. Describe the current behavior: when loading a simple Keras Sequential model with two Conv2D layers, stored in SavedModel format, from disk and applying the tf.keras.models.clone_model function, a TypeError ("Keyword argument not understood:", 'filters') is thrown. Describe the expected behavior: I expect tf.keras.models.clone_model to return a clone of the provided model and not fail with an exception. Standalone code to reproduce the issue: |
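A workaround often suggested for clone failures on revived models is to rebuild from the config and copy the weights over, rather than calling clone_model. Sketched here with a mock model class so it runs standalone; with Keras the analogous calls would be Model.from_config(model.get_config()) plus set_weights(model.get_weights()), and whether that succeeds on a revived SavedModel in 2.0 is an assumption to verify:

```python
class MockModel:
    """Stand-in exposing the get_config/from_config/weights protocol Keras uses."""
    def __init__(self, filters):
        self.filters = filters
        self.weights = []

    def get_config(self):
        return {"filters": self.filters}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

def clone_via_config(model):
    # Rebuild a fresh instance from the serialized config...
    clone = model.__class__.from_config(model.get_config())
    # ...then copy the trained weights over.
    clone.weights = list(model.weights)
    return clone

m = MockModel(filters=32)
m.weights = [1, 2, 3]
c = clone_via_config(m)
print(c.filters, c.weights)  # 32 [1, 2, 3]
```

The key property is that the clone is a new object sharing no state with the original, which is exactly what clone_model is supposed to provide.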
tensorflow/tensorflow | tf.debugging.assert_shapes | Bug | URL(s) with the issue: Description of issue (what needs changing): The documentation for the shapes argument states that it expects a dict, but in fact, in many circumstances, it is not possible to assemble such a dict because eager tensors are unhashable. So while it is possible to submit a dict here in graph mode, eager mode expects a list of (key, value) pairs. It should also state whether tf.Variables can be used as keys. Usage example: the usage example seems to confuse the two concepts and provides a mixup of both in invalid Python syntax: tf.assert_shapes(x: ('N', 'Q'), y: ('N', 'D'), param: ('Q',), scalar: ()). This seems to be fixed already in master. Further, tf.assert_shapes should probably be changed to tf.debugging.assert_shapes. Submit a pull request? Yes.
tensorflow/tensorflow | ModuleNotFoundError: No module named 'utils' | Bug | Hi, I want to use `from utils import lazy_property` in my code, but I can't. I think that code is for TF1. Can you let me know what code serves the same role in TF2? If not, please let me know the appropriate code. Thanks.
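For context, `lazy_property` was never part of TensorFlow itself; TF1-era tutorials typically shipped it in a local `utils` module alongside the example code, which is why the import fails when that file is missing. A minimal reimplementation of the helper (pure Python, no TF dependency) looks like this:

```python
import functools


def lazy_property(fn):
    """Memoize a property: compute once on first access, then cache.

    A reimplementation of the `lazy_property` helper that TF1-era
    tutorials kept in a local `utils` module; it was never part of
    TensorFlow, so `from utils import lazy_property` only works if
    you have that module yourself.
    """
    attr_name = "_lazy_" + fn.__name__

    @property
    @functools.wraps(fn)
    def wrapper(self):
        if not hasattr(self, attr_name):
            setattr(self, attr_name, fn(self))
        return getattr(self, attr_name)

    return wrapper


class Model:
    call_count = 0

    @lazy_property
    def prediction(self):
        Model.call_count += 1  # expensive graph construction would go here
        return 42


m = Model()
print(m.prediction, m.prediction, Model.call_count)  # 42 42 1
```

In TF2, eager execution removes most of the original motivation (caching graph-construction ops), but the decorator still works unchanged if you want the same caching behavior.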
tensorflow/tensorflow | Default dtype for complex sparse tensors is complex128 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04; TensorFlow installed from (source or binary) / TensorFlow version (use command below): pip. Describe the current behavior: When I instantiate a complex sparse tensor, the default dtype is complex128, whereas with a float sparse tensor it is float32. Describe the expected behavior: The default dtype for complex sparse tensors should probably be complex64, since we can't set the dtype in the tensor creation. Standalone code to reproduce the issue:

a = tf.sparse.SparseTensor(indices=[[0, 0]], values=[1.0j], dense_shape=[2, 2])
print(a.dtype)

This gives tf.complex128. Other info / logs: This can be circumvented by using NumPy to fix the dtype of the values, but it isn't ideal IMHO.
tensorflow/tensorflow | Guarantees for the logs argument in Keras callbacks | Bug | URL(s) with the issue: Description of issue (what needs changing): The documentation for the Keras Callback base class contains the following generic statement about the logs parameter passed to its methods: the logs dictionary that callback methods take as argument will contain keys for quantities relevant to the current batch or epoch; and, on the Keras custom callback page: the logs dict contains the loss value, and all the metrics at the end of a batch or epoch; examples include the loss and mean absolute error. Since Python passes objects by reference, the question becomes whether write access to this logs parameter is allowed and supported. An example use case would be a custom callback that populates the logs dictionary with some additional information that would then automatically be displayed in the progress bar and TensorBoard, and recorded by the History and CSV callbacks. Therefore, I think the documentation should clearly state whether: 1. write access to the logs dict is forbidden, in which case it might be worthwhile to pass a non-writeable dict-like type; 2. write access to logs is allowed and will not have any side effects on any other callbacks, i.e., each callback gets an independent copy; 3. the logs dict is writable and changes to it are visible to any further callbacks; this would also require specifying in which order the callbacks are processed.
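The distinction between the three scenarios matters precisely because Python passes the dict by reference: if all callbacks receive the same object, a write in one callback is visible to every later one. A minimal, framework-free sketch of scenario 3 (the function names are made-up stand-ins, not Keras APIs):

```python
def framework_metrics(logs):
    # Stand-in for the framework populating loss/metrics at batch end.
    logs["loss"] = 0.25


def enriching_callback(logs):
    # Scenario 3: a callback writes an extra key into the shared dict.
    logs["custom_metric"] = 2 * logs["loss"]


def history_callback(logs, history):
    # A later callback sees whatever earlier callbacks wrote.
    history.append(dict(logs))


history = []
logs = {}  # one shared dict, passed by reference to every callback
framework_metrics(logs)
enriching_callback(logs)
history_callback(logs, history)
print(history)  # [{'loss': 0.25, 'custom_metric': 0.5}]
```

Whether Keras actually behaves like this (one shared dict) or like scenario 2 (per-callback copies) is exactly what the documentation leaves unspecified.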
tensorflow/tensorflow | tensorflow/stream_executor/lib/statusor.cc:34: Attempting to fetch value instead of handling error Internal: failed to get device attribute 13 for device 0: CUDA_ERROR_UNKNOWN: unknown error | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10; TensorFlow installed from (source or binary): tried with both pip and conda; TensorFlow version (use command below): 2.1.0; Python version: 3.6; CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5; GPU model and memory: NVIDIA MX250, 2 GB. Describe the current behavior: TensorFlow 2.1.0 gets installed, and it can also be imported into Python. When I try to list the available GPUs:

from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
print("Num GPUs Available:", len(tf.config.experimental.list_physical_devices('GPU')))

the following error occurs:

2020-02-29 18:26:23.596966: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-02-29 18:26:24.237060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:02:00.0 name: GeForce MX250 computeCapability: 6.1 coreClock: 1.582GHz coreCount: 3 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 44.76GiB/s
2020-02-29 18:26:24.240888: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-02-29 18:26:24.544030: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-02-29 18:26:24.727079: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-02-29 18:26:24.811098: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-02-29 18:26:24.994088: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-02-29 18:26:25.075048: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-02-29 18:26:25.508640: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-02-29 18:26:25.513010: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: failed to get device attribute 13 for device 0: CUDA_ERROR_UNKNOWN: unknown error

When it's run inside a Jupyter notebook, the kernel freezes for a while and then restarts. Other info / logs: Capture2
tensorflow/tensorflow | Wrong `name` parameter default in Nadam optimizer docs | Bug | URL(s) with the issue: Description of issue (what needs changing): The optional `name` attribute is said to default to "Adamax" rather than "Nadam". Clear description: "name: Optional name for the operations created when applying gradients. Defaults to 'Adamax'." Correct link (L33, L238): the source code correctly shows the default as name='Nadam'; therefore, this is a documentation issue. Submit a pull request? Not at the moment.
tensorflow/tensorflow | encode_png function should be exported from the tf.io module | Bug | URL(s) with the issue: Description of the issue (what needs changing): Currently, all image decoding and encoding functions are part of the tf.io module, but the encode_png function is still part of the tf.image module. Change required: change tf.image.encode_png to tf.io.encode_png. Submit a pull request? I will be happy to help.
tensorflow/tensorflow | Saving and loading a nested model fails | Bug | Edit: I adjusted the bug description; the bug appears in a different place than I thought before. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No; OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10; Mobile device: n/a; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): unknown, 2.1.0 (installation from the conda repository); Python version: 3.7.6; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Describe the current behavior: I created a simple nested tf.keras model with an input node and a Sequential model containing two convolutional layers:

import tensorflow as tf
model_inside = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, 3, input_shape=(None, None, 1)),
    tf.keras.layers.Conv2D(2, 1, activation='softmax')])
model_outside_input = tf.keras.Input(shape=(256, 256, 1))
model_outside = model_inside(model_outside_input)
model_outside = tf.keras.Model(inputs=model_outside_input, outputs=model_outside)

Saving this model to disk in SavedModel format and loading it again results in a TypeError: "list indices must be integers or slices, not NoneType". According to my observations, this error only occurs with nested models. Describe the expected behavior: I expect the loaded model to be exactly the same as before saving it, and loading should not lead to an error. Standalone code to reproduce the issue:
tensorflow/tensorflow | Running distributed training on MNIST fails | Bug | My dependencies: Python 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux. Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow as tf >>> print(tf.__version__) 2.2.0.dev20200227. Here is my code: I used both ParameterServerStrategy and MultiWorkerMirroredStrategy; all fail. I can successfully run this tutorial locally, and I followed this link to run distributed training, but it always fails. Instead of running multi-worker locally, I run on 3 machines. Below is the error message:

2020-02-27 21:54:15.735213: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/cloudera/parcels/CDH/lib64:/opt/jdk1.8.0_221/jre/lib/amd64/server
2020-02-27 21:54:15.735272: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-02-27 21:54:15.735320: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (kevin2): /proc/driver/nvidia/version does not exist
2020-02-27 21:54:15.735784: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-27 21:54:15.849834: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2494140000 Hz
2020-02-27 21:54:15.933775: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55fd9c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-27 21:54:15.933870: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-02-27 21:54:16.139575: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job ps -> {0 -> kevin2:38518}
2020-02-27 21:54:16.139650: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> localhost:43854, 1 -> kevin2:36868}
2020-02-27 21:54:16.157359: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:390] Started server with target: grpc://localhost:43854
2020-02-27 21:54:16.157667: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:394] Server already started (target: grpc://localhost:43854)
WARNING:tensorflow:From /yarn/nm/usercache/root/appcache/application_1582468300204_0090/container_1582468300204_0090_01_000003/tf2.zip/tf2/lib64/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /yarn/nm/usercache/root/appcache/application_1582468300204_0090/container_1582468300204_0090_01_000003/tf2.zip/tf2/lib64/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /yarn/nm/usercache/root/appcache/application_1582468300204_0090/container_1582468300204_0090_01_000003/tf2.zip/tf2/lib64/python3.6/site-packages/tensorflow_estimator/python/estimator/util.py:96: DistributedIteratorV1.initialize (from tensorflow.python.distribute.input_lib) is deprecated and will be removed in a future version. Instructions for updating: Use the iterator's `initializer` property instead.
WARNING:tensorflow:From /yarn/nm/usercache/root/appcache/application_1582468300204_0090/container_1582468300204_0090_01_000003/tf2.zip/tf2/lib64/python3.6/site-packages/tensorflow_estimator/python/estimator/util.py:96: DistributedIteratorV1.initialize (from tensorflow.python.distribute.input_lib) is deprecated and will be removed in a future version. Instructions for updating: Use the iterator's `initializer` property instead.
tensorflow/tensorflow | AttributeError: module 'tensorflow' has no attribute 'app' in TensorFlow 2.1.0 | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10; TensorFlow installed from (source or binary): source; TensorFlow version (use command below): v2.1.0; Python version: 3.7.6; CUDA/cuDNN version: 10.1 / 7.6; GPU model and memory: GTX 1050, 8 GB. Describe the current behavior: I wrote flags = tf.compat.v1.flags.FLAGS in my file, but there is still an error on flags = tf.compat.v1.app.flags.FLAGS: AttributeError: module 'tensorflow' has no attribute 'app'.
tensorflow/tensorflow | representative_dataset gen input order | Bug | In TFLite post-training integer quantization, a converter representative_dataset is necessary. However, the documentation never specifies the order of the fed inputs: are they ordered by lexicographic order of the names, by size of the shapes, or even in random order when multiple inputs are present? It is totally a guessing game.
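For a model with multiple inputs, a representative dataset generator yields one list of calibration inputs per step, and the converter consumes each list positionally; the open question above is which position maps to which model input. A minimal, framework-free sketch of such a generator (the two input names and shapes here are made up for illustration):

```python
import random


def representative_dataset_gen(num_steps=3):
    """Yield one list of calibration inputs per step.

    The converter consumes each yielded list positionally; the issue
    is that the mapping from position to model input is undocumented
    when the model has several inputs.
    """
    for _ in range(num_steps):
        image_like = [[random.random() for _ in range(4)] for _ in range(4)]
        mask_like = [random.random() for _ in range(4)]
        # Unspecified: does index 0 go to the "image" input, to the
        # lexicographically first input name, or somewhere else?
        yield [image_like, mask_like]


samples = list(representative_dataset_gen())
print(len(samples), len(samples[0]))  # 3 calibration steps, 2 inputs each
```

In real use, the yielded values would be float32 arrays matching the model's input shapes and the generator would be passed as `converter.representative_dataset`; the positional-mapping ambiguity is exactly what this issue asks to have documented.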
tensorflow/tensorflow | TensorFlow 2.0 incompatible with gunicorn | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template) System information: Have I written custom code: (not specified); OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from: binary; TensorFlow version: 2.0; Keras version: tested with 2.3.1 and 2.3.0; Python version: 3.6.5; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Describe the current behavior: I have a feed-forward NN, and I saved it as .h5, and I can make predictions. But when I set up an endpoint using Flask and gunicorn and then call my model for prediction, I get the following message: AttributeError: 'gevent._local.local' object has no attribute 'value'. Describe the expected behavior: The expected behavior would be that when I make a request (curl -X POST), it returns the prediction for my request.
tensorflow/tensorflow | MUL doesn't work on GPU delegate, with error "Dimensions cannot be reduced to linear" | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.6; Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: iPhone XR, iPhone XS; TensorFlow installed from (source or binary): installed from source; TensorFlow version (use command below): 1.14.0; Python version: 3.6; Bazel version (if compiling from source): 1.2.1; GCC/compiler version (if compiling from source): 4.2.1. Describe the current behavior: MUL doesn't work with multiple axes while using the GPU delegate; no issue while using the CPU. Tried to create a dummy TFLite graph with simply a multiply operation on an input of shape (1, 1, N, M) and a variable with the same shape. While verifying with the TFLite iOS benchmark app with the GPU delegate, the benchmark app reports a failure to apply the GPU delegate on the MUL operation. Note that the model runs fine without the GPU delegate. Describe the expected behavior: The MUL operation works on the GPU delegate and supports multiple axes. Code to reproduce the issue: Here's the dummy model that should reproduce the issue in the TFLite iOS benchmark app: dummy_model.tflite. Other info / logs:

Graph: /private/var/containers/Bundle/Application/DD8D26F4-6BC1-4EFB-A444-EA2FB7F411D9/TFLiteBenchmark.app/dummy_model_2.tflite
Input layers: [input_feature_placeholder]
Input shapes: [1,1,2,80]
Input value ranges: []
Allow fp16: [0]
Require full delegation: [0]
Enable op profiling: [0]
Max profiling buffer entries: [1024]
CSV file to export profiling data to: []
Use gpu: [1]
Allow lower precision in gpu: [1]
GPU delegate wait type: [aggressive]
Loaded model /private/var/containers/Bundle/Application/DD8D26F4-6BC1-4EFB-A444-EA2FB7F411D9/TFLiteBenchmark.app/dummy_model_2.tflite
2020-02-26 15:16:14.857526-0500 TFLiteBenchmark[444:628154] Initialized TensorFlow Lite runtime.
2020-02-26 15:16:14.857843-0500 TFLiteBenchmark[444:628154] Created TensorFlow Lite delegate for Metal.
2020-02-26 15:16:14.858337-0500 TFLiteBenchmark[444:628154] Metal GPU Frame Capture Enabled
2020-02-26 15:16:14.859181-0500 TFLiteBenchmark[444:628154] Metal API Validation Enabled
2020-02-26 15:16:14.932455-0500 TFLiteBenchmark[444:628154] TfLiteGpuDelegate Prepare: MUL: dimensions cannot be reduced to linear
2020-02-26 15:16:14.932670-0500 TFLiteBenchmark[444:628154] Node number 2 (TfLiteMetalDelegate) failed to prepare.
2020-02-26 15:16:14.932800-0500 TFLiteBenchmark[444:628154] tensorflow/lite/kernels/mul.cc:74 input1->type == input2->type (1 != 10)
2020-02-26 15:16:14.932891-0500 TFLiteBenchmark[444:628154] Node number 1 (MUL) failed to prepare.
Failed to apply GPU delegate.
tensorflow/tensorflow | "Entity could not be transformed" error | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template) System information: Have I written custom code: Yes; OS platform and distribution: macOS 10.15.2 Catalina; TensorFlow installed from (source or binary): conda; Python version: 3.6.7; using CPU. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Using TensorFlow 2.0.0. Describe the current behavior: I am trying to implement custom layers, and I keep getting the following errors:

WARNING:tensorflow:Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING: Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING:tensorflow:Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING: Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING:tensorflow:Entity <function initialize_variables at 0x1ac8e150e0> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING: Entity <function initialize_variables at 0x1ac8e150e0> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
Model: "model_8"

I do not care about the Lambda layers, as those should be non-trainable. However, I want the bin_conv2d layer to be trainable, and it appears that something is preventing that, but I cannot figure out why. I also do not understand the last warning about the entity "function initialize_uninitialized_variables". Any help with this would be greatly appreciated. To give an idea of what I'm trying to do: I generate a binary kernel in bin_conv2d and then element-wise multiply it with each input. I am wrapping everything in TimeDistributed, as my inputs are videos, and I want the output to be evaluated as such. The original implementation of bin_conv2d used the standard convolution, and that compiled fine. It seems that once I changed it to the element-wise multiply, it broke down. The weird thing is that the binarize function seems to be what breaks when I change the code, even though I am not changing it at all. Here is a link to my gist. Describe the expected behavior: Standalone code to reproduce the issue: Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook. Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Here is the output when I set tf.autograph.set_verbosity(3, True):

INFO:tensorflow:Converted call: args: kwargs:
Converted call: args: kwargs:
INFO:tensorflow:Not whitelisted: default rule
Not whitelisted: default rule
INFO:tensorflow:Not whitelisted: default rule
Not whitelisted: default rule
INFO:tensorflow:Not whitelisted: default rule
Not whitelisted: default rule
INFO:tensorflow:Cache hit for entity key, subkey frozenset() -> ConvertedEntityFactoryInfo(tf__call in tmppbrtju_8)
Cache hit for entity key, subkey frozenset() -> ConvertedEntityFactoryInfo(tf__call in tmppbrtju_8)
INFO:tensorflow:Error transforming entity
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
Error transforming entity
WARNING:tensorflow:Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING: Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
INFO:tensorflow:Converted call: args: kwargs:
Converted call: args: kwargs:
INFO:tensorflow:Cache hit for entity key, subkey frozenset() -> ConvertedEntityFactoryInfo(tf__streak in tmpqw9832gt)
Cache hit for entity key, subkey frozenset() -> ConvertedEntityFactoryInfo(tf__streak in tmpqw9832gt)
INFO:tensorflow:Error transforming entity
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
Error transforming entity
WARNING:tensorflow:Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING: Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
INFO:tensorflow:Converted call: <function initialize_variables at 0x1acd33c320> args: kwargs:
Converted call: <function initialize_variables at 0x1acd33c320> args: kwargs:
INFO:tensorflow:Cache hit for entity <function initialize_variables at 0x1acd33c320> key, subkey frozenset({'initializer_map'}) -> ConvertedEntityFactoryInfo(tf__initialize_variables in tmphhcwjhr8)
Cache hit for entity <function initialize_variables at 0x1acd33c320> key, subkey frozenset({'initializer_map'}) -> ConvertedEntityFactoryInfo(tf__initialize_variables in tmphhcwjhr8)
INFO:tensorflow:Error transforming entity <function initialize_variables at 0x1acd33c320>
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
Error transforming entity <function initialize_variables at 0x1acd33c320>
WARNING:tensorflow:Entity <function initialize_variables at 0x1acd33c320> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING: Entity <function initialize_variables at 0x1acd33c320> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
INFO:tensorflow:Converted call: args: kwargs:
Converted call: args: kwargs:
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
Model: "model_9"
INFO:tensorflow:Converted call: <function permutation at 0x1acd0b20e0> args: kwargs:
Converted call: <function permutation at 0x1acd0b20e0> args: kwargs:
INFO:tensorflow:Whitelisted <function permutation at 0x1acd0b20e0>: DoNotConvert rule for tensorflow
Whitelisted <function permutation at 0x1acd0b20e0>: DoNotConvert rule for tensorflow
INFO:tensorflow:Converted call: <function slice_batch_index at 0x1acd0b2f80> args: kwargs:
Converted call: <function slice_batch_index at 0x1acd0b2f80> args: kwargs:
INFO:tensorflow:Whitelisted <function slice_batch_index at 0x1acd0b2f80>: DoNotConvert rule for tensorflow
Whitelisted <function slice_batch_index at 0x1acd0b2f80>: DoNotConvert rule for tensorflow
INFO:tensorflow:Converted call: <function grab_batch at 0x1acd0b2440> args: kwargs:
Converted call: <function grab_batch at 0x1acd0b2440> args: kwargs:
INFO:tensorflow:Whitelisted <function grab_batch at 0x1acd0b2440>: DoNotConvert rule for tensorflow
Whitelisted <function grab_batch at 0x1acd0b2440>: DoNotConvert rule for tensorflow
Train on 16 samples
Epoch 1/10
WARNING:tensorflow:Gradients do not exist for variables ['time_distributed_280/kernel:0'] when minimizing the loss.
INFO:tensorflow:Converted call: <function initialize_variables at 0x1acd133950> args: kwargs:
Converted call: <function initialize_variables at 0x1acd133950> args: kwargs:
INFO:tensorflow:Cache hit for entity <function initialize_variables at 0x1acd133950> key, subkey frozenset({'initializer_map'}) -> ConvertedEntityFactoryInfo(tf__initialize_variables in tmphhcwjhr8)
Cache hit for entity <function initialize_variables at 0x1acd133950> key, subkey frozenset({'initializer_map'}) -> ConvertedEntityFactoryInfo(tf__initialize_variables in tmphhcwjhr8)
INFO:tensorflow:Error transforming entity <function initialize_variables at 0x1acd133950>
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
Error transforming entity <function initialize_variables at 0x1acd133950>
WARNING:tensorflow:Entity <function initialize_variables at 0x1acd133950> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
WARNING: Entity <function initialize_variables at 0x1acd133950> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause:
Traceback (most recent call last):
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 506, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 324, in convert
    return _instantiate(entity, converted_entity_info, free_nonglobal_var_names)
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 266, in _instantiate
    factory = converted_entity_info.get_factory()
  File "/Users/matthew/opt/miniconda3/envs/engs89/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/conversion.py", line 92, in get_factory
    assert self.module_name in sys.modules
AssertionError
WARNING:tensorflow:Gradients do not exist for variables ['time_distributed_280/kernel:0'] when minimizing the loss.
tensorflowtensorflow | LookupError when calculating the gradient of a gradient with an RNN on GPU | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS
- TensorFlow installed from (source or binary): binary (Anaconda / pip)
- TensorFlow version (use command below): v1.12.1-25080-gca585e7, 2.2.0-dev20200218
- Python version: 3.6.10
- CUDA/cuDNN version: CUDA 10.2, cuDNN 7.6.2
- GPU model and memory: GeForce RTX 2080 Ti, 12 GB VRAM

Describe the current behavior: When trying to implement a gradient penalty for a WGAN-GP, which requires calculating the gradient of a tensor that itself depends on a gradient, the program terminates with a LookupError telling me: `LookupError: gradient registry has no entry for: CudnnRNNBackprop`. This occurs only on the GPU version of TensorFlow, and only if a recurrent layer is used (GRU or LSTM doesn't matter). On the CPU the provided minimum working example runs as expected. The error does not occur when training a normal model with a recurrent layer on the GPU (no gradient penalty). I tried tensorflow-gpu 2.0 (installed via conda), tensorflow-gpu 2.1 (installed via pip) and tf-nightly-gpu 2.2 (more specifically 2.2.0-dev20200218, installed via pip); the error is the same for all GPU versions. The operation works with tensorflow-gpu if the parameter `unroll` of the GRU layer is set to `True`, since this disables the use of the cuDNN implementation of the GRU layer (a deviation from the default parameters). This does not solve the actual problem, but might indicate that there is just a small bug in the interface to cuDNN.

Standalone code to reproduce the issue (this code does not make much sense, but is shorter than providing a full optimization loop for a WGAN-GP, and produces the same error):

```python
import tensorflow as tf
import tensorflow.keras as k
import tensorflow.keras.layers as kl
import numpy as np

physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)

def gradient_penalty(model, input_data):
    # get gradients
    input_data = tf.convert_to_tensor(input_data)
    with tf.GradientTape() as t:
        t.watch(input_data)
        pred = model(input_data)
    grad = t.gradient(pred, [input_data])[0]
    # define gradient penalty
    slopes = tf.sqrt(tf.reduce_sum(tf.square(grad), axis=[1, 2]))
    gp = tf.reduce_mean((slopes - 1.) ** 2)
    return gp

if __name__ == '__main__':
    # model with recurrent layer
    model = k.Sequential([
        kl.InputLayer(input_shape=(50, 20)),
        kl.GRU(100),
        kl.Dense(1),
    ])
    # optimizer
    opt = tf.optimizers.Adam()
    # dummy data
    data = np.random.normal(0, 1, (8, 50, 20)).astype(np.float32)
    # optimize
    with tf.GradientTape() as tape:
        gp = gradient_penalty(model, data)
    grads = tape.gradient(gp, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
```

Other info / logs (full log):

```
/home/hendrik/anaconda3/envs/mls22/bin/python /home/hendrik/PycharmProjects/GAN/MinimumMinimumWorkingExample.py
2020-02-26 13:30:08.149540: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-02-26 13:30:08.172329: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-26 13:30:08.172601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1558] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2020-02-26 13:30:08.172716: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-26 13:30:08.173554: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-26 13:30:08.174413: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-26 13:30:08.174551: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-26 13:30:08.175425: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-26 13:30:08.175952: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-26 13:30:08.177916: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-26 13:30:08.178512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Adding visible gpu devices: 0
2020-02-26 13:30:08.187368: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-26 13:30:08.208283: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3600000000 Hz
2020-02-26 13:30:08.208513: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55cac97d38c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-26 13:30:08.208523: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-02-26 13:30:08.267804: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55cac97f6830 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-02-26 13:30:08.267814: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
[... device enumeration repeated ...]
2020-02-26 13:30:08.268778: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Adding visible gpu devices: 0
2020-02-26 13:30:08.269304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1099] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-26 13:30:08.269310: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105]      0
2020-02-26 13:30:08.269314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1118] 0:   N
2020-02-26 13:30:08.269856: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1244] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9813 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-02-26 13:30:08.667776: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-26 13:30:09.284739: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
Traceback (most recent call last):
  File "/home/hendrik/PycharmProjects/GAN/MinimumMinimumWorkingExample.py", line 31, in <module>
    grads = tape.gradient(gp, model.trainable_variables)
  File "/home/hendrik/anaconda3/envs/mls22/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py", line 1048, in gradient
    unconnected_gradients=unconnected_gradients)
  File "/home/hendrik/anaconda3/envs/mls22/lib/python3.6/site-packages/tensorflow/python/eager/imperative_grad.py", line 77, in imperative_grad
    compat.as_str(unconnected_gradients.value))
  File "/home/hendrik/anaconda3/envs/mls22/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py", line 145, in _gradient_function
    grad_fn = ops._gradient_registry.lookup(op_name)  # pylint: disable=protected-access
  File "/home/hendrik/anaconda3/envs/mls22/lib/python3.6/site-packages/tensorflow/python/framework/registry.py", line 97, in lookup
    "%s registry has no entry for: %s" % (self._name, name))
LookupError: gradient registry has no entry for: CudnnRNNBackprop

Process finished with exit code 1
```

Thanks in advance for your time.
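The penalty term itself is easy to sanity-check outside TensorFlow. A NumPy sketch for a linear critic `f(x) = sum(w * x)`, whose input gradient is `w` for every sample, so the per-sample slope reduces to `||w||` (all names here are illustrative, not from the report):

```python
import numpy as np

def gradient_penalty_linear(w, x):
    # For f(x) = sum(w * x), df/dx == w for every sample in the batch,
    # so each per-sample slope is ||w|| and the penalty is (||w|| - 1)^2.
    grads = np.broadcast_to(w, x.shape)                    # (batch, T, D)
    slopes = np.sqrt(np.sum(grads ** 2, axis=(1, 2)))      # one slope per sample
    return float(np.mean((slopes - 1.0) ** 2))
```

With a unit-norm `w` the penalty is exactly zero, matching the WGAN-GP target of unit gradient norm.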
tensorflowtensorflow | Misleading converter error when no concrete function is given | Bug |
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 18.04): Linux Ubuntu 18.04
- TensorFlow installed from: source
- TensorFlow version: 2.1.0

Command used to run the converter or code if you're using the Python API (if possible, please share a link to Colab/Jupyter/any notebook):

```python
converter = tf.lite.TFLiteConverter.from_saved_model(model_path)
tflite_model = converter.convert()
```

The output from the converter invocation:

```
ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.
```

Failure details: When attempting to convert a model in which no ConcreteFunction has been defined, the error message implies that there are multiple. As someone new to TF and TFLite, I found this very confusing as to where my concrete functions were being defined. Inspecting the code, in the `convert` method at line 417 the ValueError is thrown when anything but exactly one concrete function is defined in the model (rightly so), but the error message implies that more than one has been defined:

```python
if len(self._funcs) != 1:
  raise ValueError("This converter can only convert a single "
                   "ConcreteFunction. Converting multiple functions is "
                   "under development.")
```
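The fix the report argues for is a check that distinguishes "none found" from "more than one found". A hypothetical rewrite of that guard (function and message wording are my own, not TFLite's):

```python
def check_single_concrete_function(funcs):
    # Hypothetical sketch of the check at line 417: report zero functions
    # and multiple functions as two different errors.
    if len(funcs) == 0:
        raise ValueError(
            "No ConcreteFunction was found in the SavedModel; make sure "
            "it was exported with at least one signature.")
    if len(funcs) > 1:
        raise ValueError(
            "This converter can only convert a single ConcreteFunction; "
            "converting multiple functions is under development.")
    return funcs[0]
```

A user who exported an empty SavedModel would then see the "No ConcreteFunction" message instead of one about multiple functions.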
tensorflowtensorflow | Reintroduced typo in tf.keras.layers.Attention doc example | Bug |
URL(s) with the issue:
Description of issue (what needs changing): A typo was reintroduced in v2.1. The line must be `value_embeddings = token_embedding(value_input)`. cc @rabitt
tensorflowtensorflow | Documentation issue about tf.math.xlog1py (nightly-only API) | Bug |
URL(s) with the issue:
Description of issue (what needs changing): The documentation describes `tf.math.xlog1py` as part of the stable TF 2.1 API, but I believe this API has not been released yet; `tf.math.xlog1py` is only available in the nightly builds at this point (it was added in January 2020). FYI, I ran into this issue while using tfp-nightly, which depends on tf-nightly: `module 'tensorflow_core.api.v2.math' has no attribute 'xlog1py'`. The documentation should not have been published under TensorFlow Core v2.1.0. Why is this the case? If it is due to a mistake, could we improve the process so that we have a nightly doc and a stable doc?
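For readers hitting the `AttributeError` above, the documented contract of `xlog1py` is small enough to sketch in pure Python (a scalar stand-in for illustration, not the TF implementation): it returns `x * log1p(y)`, except that it is defined to be 0 when `x == 0`, even where `log1p(y)` would blow up.

```python
import math

def xlog1py(x, y):
    # Scalar sketch of the documented semantics: 0 when x == 0,
    # otherwise x * log(1 + y).
    return 0.0 if x == 0 else x * math.log1p(y)
```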
tensorflowtensorflow | Documentation needs to be upgraded to Python 3 | Bug |
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.
URL(s) with the issue:
Description of issue (what needs changing): Since the PSF has officially stopped its support for Python 2, the documentation needs to be upgraded to Python 3 (pip2 -> pip3).
Clear description: are the links to the source code correct?
Parameters defined: are all parameters defined and formatted correctly?
Returns defined: are return values defined?
Raises listed and defined: are the errors defined? For example, `raises: Usage example`.
Usage example: is there a usage example? See the API guide on how to write testable usage examples.
Request visuals, if applicable: are there currently visuals? If not, will they clarify the content?
Submit a pull request? Yes, I'll be planning to also submit a pull request to fix the issue. See the docs contributor guide, the docs API guide, and the docs style guide.
tensorflowtensorflow | Usage documentation needed for tf.image.yuv_to_rgb | Bug |
URL(s) with the issue:
Description of issue (what needs changing): The `tf.image.yuv_to_rgb` method specifies a YUV input of shape `(h, w, 3)` and an RGB output of the same shape. It also notes "The output is only well defined if the Y values in `images` are in [0, 1], and the U and V values are in [-0.5, 0.5]." However, YUV is natively encoded in H x W x 1.5 bytes, with values ranging from 0 to 255. Considering that multiple YUV/RGB conversion standards exist, it is unclear what preprocessing steps need to be done by a user who wants to pass YUV input to their network.
Usage example: More documentation on the proper usage of this method would be highly helpful, specifically: an example showing how to preprocess a raw YUV image of size H x W x 1.5 bytes to the expected shape of `(h, w, 3)` with normalized values (Y in [0, 1], U/V in [-0.5, 0.5]); and an example of how one might append this method to an RGB-trained model to enable it to accept YUV input during inference. A likely scenario might be exporting a frozen model to an Android device that natively captures in YUV.
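As a starting point for the first requested example, here is a hedged NumPy sketch of unpacking a planar I420 buffer (H*W luma bytes followed by quarter-resolution U and V planes; full-range values assumed, and the function name is mine) into the `(h, w, 3)` float layout with the ranges `yuv_to_rgb` expects:

```python
import numpy as np

def i420_to_planar(buf, h, w):
    # buf: flat uint8 array of length h*w*3//2 (I420 layout assumed).
    y = buf[:h * w].reshape(h, w).astype(np.float32) / 255.0          # Y in [0, 1]
    u = buf[h * w:h * w + h * w // 4].reshape(h // 2, w // 2)
    v = buf[h * w + h * w // 4:].reshape(h // 2, w // 2)
    # Upsample chroma to full resolution (nearest neighbour) and centre
    # around zero so U, V land in [-0.5, 0.5].
    u = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) / 255.0 - 0.5
    v = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) / 255.0 - 0.5
    return np.stack([y, u, v], axis=-1)                               # (h, w, 3)
```

The resulting array could then (under these assumptions) be fed to `tf.image.yuv_to_rgb`; which conversion matrix is appropriate still depends on the camera's colour standard.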
tensorflowtensorflow | Model class API: predict "gradient updates" | Bug |
URL(s) with the issue: (Model class API, `predict`)
Description of issue (what needs changing): The documentation for `Model.predict` says "batch_size: Integer or None. Number of samples per gradient update." But unless I'm missing something, there are no gradient updates while predicting.
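The report is right that during prediction `batch_size` only controls how inputs are chunked for memory reasons; nothing about the model changes between chunks. A tiny pure-Python sketch of that semantics (names are illustrative):

```python
def predict_in_batches(fn, xs, batch_size):
    # batch_size only decides how the inputs are split up; fn (standing
    # in for the model) is identical for every chunk, so the result is
    # independent of the chosen batch size.
    out = []
    for i in range(0, len(xs), batch_size):
        out.extend(fn(x) for x in xs[i:i + batch_size])
    return out
```

A wording like "Number of samples per batch of computation" would describe this accurately.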
tensorflowtensorflow | TFLite interpreter: AllocateTensors fails | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): Docker image (nightly)
- TensorFlow version (use command below): tensorflow 2.2.0-dev20200218

Describe the current behavior: In the Python tf.lite interpreter everything works as expected. In C++, however, `interpreter->AllocateTensors()` fails when using the attached model (see the link to a minimal example below). Absolutely no clue is given, no error; I have no idea why this happens. I'm not using any custom ops.

Describe the expected behavior: The C++ code should give the same results as the Python code (tf.lite interpreter).

Standalone code to reproduce the issue can be found here:
tensorflowtensorflow | Request for documentation of tf.compat.v1.profiler.Profiler | Bug |
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.
URL(s) with the issue:
Description of issue (what needs changing): The README linked for this function is a 404, so there is no supporting material for its usage, and the sample code is not complete either. For example:

```python
profiler.profile_name_scope(options=(option_builder.ProfileOptionBuilder
    .trainable_variables_parameter()))
```

There is no declaration of `option_builder`, so it's hard for me to reproduce the results from this sample code. Could TensorFlow provide new supporting documentation for this function?
tensorflowtensorflow | Shape information is lost with DepthwiseConv2D | Bug |
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.15
- CUDA/cuDNN version: 10.2
- GPU model and memory: RTX 2060

Describe the current behavior: In some cases (`use_bias=False`, `dilation_rate=1`, `data_format='channels_first'`), the static shape information is lost after `DepthwiseConv2D`.

Describe the expected behavior: It should behave the same way for `channels_last` and `channels_first`. When running the code in the gist below, `channels_last` prints `shape=32`, while `channels_first` prints an unknown shape; it should be `shape=32` in both cases.

Standalone code to reproduce the issue: see the linked gist.

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
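The static shape the layer should report is pure arithmetic over the spatial dimension and does not depend on `data_format` at all, which is what makes the discrepancy above a bug. A sketch of that arithmetic (modelled on the usual Keras output-length helper; names are mine):

```python
def conv_output_length(input_length, kernel_size, stride=1, padding='same', dilation=1):
    # Static-shape computation for one spatial dimension; identical for
    # channels_first and channels_last, since only spatial dims change.
    if input_length is None:
        return None                       # genuinely unknown dimension
    dilated = kernel_size + (kernel_size - 1) * (dilation - 1)
    if padding == 'same':
        length = input_length
    else:  # 'valid'
        length = input_length - dilated + 1
    return (length + stride - 1) // stride
```

For the report's case (input 32, kernel 3, `same` padding, stride 1), this yields 32 regardless of data format.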
tensorflowtensorflow | UnsatisfiedLinkError on nightly build of TensorFlow Lite | Bug |
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.
System information:
- Mobile device: Android
- TensorFlow installed from: Gradle (JCenter)

Describe the current behavior: In our sample project we run into `UnsatisfiedLinkError: lib/arm64/libtensorflowlite_gpu_jni.so` using the nightly build. If we switch to 2.1.0, that doesn't have the problem. The Android guide to TensorFlow Lite also suggests using the nightly build, and I don't think this is good for production builds.

Describe the expected behavior: It doesn't run into UnsatisfiedLinkError at runtime.

Standalone code to reproduce the issue: your sample project.
tensorflowtensorflow | Jupyter notebook kernel dies when running a UNet with TensorFlow 2.1.0 | Bug |
System information:
- OS: Linux Red Hat 8
- conda install tensorflow-gpu
- TensorFlow version: 2.1.0
- Python version: 3.7.4 (Jupyter notebook)
- CUDA/cuDNN version: 10.2
- CPU: AMD Threadripper 3960X
- GPU: 2x RTX Titans
- Memory: 128 GB Corsair 3200 MHz

I recently upgraded to TensorFlow 2.1.0 from TensorFlow 2.0.0 and now my code will not run. Before the first epoch ends I get a message saying "The kernel appears to have died. It will restart automatically." The code uses MirroredStrategy to distribute the UNet to the GPUs. I don't get this error when I remove MirroredStrategy and run on a single GPU.

Originally I was running my code with TensorFlow 2.0.0 using a Jupyter notebook and it would work fine, except that when I would try to load my model and make a prediction, I got an error at `model.predict` saying `AttributeError: 'Model' object has no attribute 'loss'`. I called `model.summary()` after loading the model and it showed the model was empty, so I upgraded to TensorFlow 2.1.0 and called `model.summary()`; this time it shows a readout. However, I still get the same `AttributeError: 'Model' object has no attribute 'loss'`. Is there currently an incompatibility between MirroredStrategy and TensorFlow 2.1.0?

```python
def get_model(optimizer, loss_metric, metrics, lr=1e-4):
    with tf.device('/job:localhost/replica:0/task:0/device:GPU:0'):
        inputs = Input((sample_width, sample_height, sample_depth, 1))
        conv1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(inputs)
        conv1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv1)
        pool1 = MaxPooling3D(pool_size=(2, 2, 2))(conv1)
        drop1 = Dropout(0.5)(pool1)

        conv2 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(drop1)
        conv2 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv2)
        pool2 = MaxPooling3D(pool_size=(2, 2, 2))(conv2)
        drop2 = Dropout(0.5)(pool2)

        conv3 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(drop2)
        conv3 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(conv3)
        pool3 = MaxPooling3D(pool_size=(2, 2, 2))(conv3)
        drop3 = Dropout(0.3)(pool3)

        conv4 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(drop3)
        conv4 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(conv4)
        pool4 = MaxPooling3D(pool_size=(2, 2, 2))(conv4)
        drop4 = Dropout(0.3)(pool4)

        conv5 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(drop4)
        conv5 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(conv5)

    with tf.device('/job:localhost/replica:0/task:0/device:GPU:1'):
        up6 = concatenate([Conv3DTranspose(256, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv5), conv4], axis=4)
        conv6 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(up6)
        conv6 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(conv6)

        up7 = concatenate([Conv3DTranspose(128, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv6), conv3], axis=4)
        conv7 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(up7)
        conv7 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(conv7)

        up8 = concatenate([Conv3DTranspose(64, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv7), conv2], axis=4)
        conv8 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(up8)
        conv8 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv8)

        up9 = concatenate([Conv3DTranspose(32, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv8), conv1], axis=4)
        conv9 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(up9)
        conv9 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv9)

        conv10 = Conv3D(1, (1, 1, 1), activation='sigmoid')(conv9)

        model = Model(inputs=[inputs], outputs=[conv10])
        model.compile(optimizer=optimizer(lr=lr), loss=loss_metric, metrics=metrics)
    return model

smooth = 1.

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return -dice_coef(y_true, y_pred)

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = get_model(optimizer=Adam, loss_metric=dice_coef_loss, metrics=[dice_coef], lr=1e-4)

observe_var = 'dice_coef'
strategy = 'max'  # greater dice_coef is better
model_checkpoint = ModelCheckpoint('{epoch:04d}.model', monitor=observe_var, save_best_only=True)

model.fit(train_x, train_y, batch_size=2, epochs=1000, verbose=1, shuffle=True,
          validation_split=0.2, callbacks=[model_checkpoint])
model.save('FinalModel')
model.summary()
```

```
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         multiple                  0
conv3d (Conv3D)              multiple                  896
conv3d_1 (Conv3D)            multiple                  27680
max_pooling3d (MaxPooling3D) multiple                  0
dropout (Dropout)            multiple                  0
conv3d_2 (Conv3D)            multiple                  55360
conv3d_3 (Conv3D)            multiple                  110656
max_pooling3d_1 (MaxPooling3 multiple                  0
dropout_1 (Dropout)          multiple                  0
conv3d_4 (Conv3D)            multiple                  221312
conv3d_5 (Conv3D)            multiple                  442496
max_pooling3d_2 (MaxPooling3 multiple                  0
dropout_2 (Dropout)          multiple                  0
conv3d_6 (Conv3D)            multiple                  884992
conv3d_7 (Conv3D)            multiple                  1769728
max_pooling3d_3 (MaxPooling3 multiple                  0
dropout_3 (Dropout)          multiple                  0
conv3d_8 (Conv3D)            multiple                  3539456
conv3d_9 (Conv3D)            multiple                  7078400
conv3d_transpose (Conv3DTran multiple                  1048832
concatenate (Concatenate)    multiple                  0
conv3d_10 (Conv3D)           multiple                  3539200
conv3d_11 (Conv3D)           multiple                  1769728
conv3d_transpose_1 (Conv3DTr multiple                  262272
concatenate_1 (Concatenate)  multiple                  0
conv3d_12 (Conv3D)           multiple                  884864
conv3d_13 (Conv3D)           multiple                  442496
conv3d_transpose_2 (Conv3DTr multiple                  65600
concatenate_2 (Concatenate)  multiple                  0
conv3d_14 (Conv3D)           multiple                  221248
conv3d_15 (Conv3D)           multiple                  110656
conv3d_transpose_3 (Conv3DTr multiple                  16416
concatenate_3 (Concatenate)  multiple                  0
conv3d_16 (Conv3D)           multiple                  55328
conv3d_17 (Conv3D)           multiple                  27680
conv3d_18 (Conv3D)           multiple                  33
=================================================================
Total params: 22,575,329
Trainable params: 22,575,329
Non-trainable params: 0
```

Prediction code:

```python
norm_image = imageData
model = load_model('FinalModel', compile=False)
print(model.summary())
prediction = model.predict(norm_image, verbose=1)
```

Attribute error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     75
     76 print('Predicting the labels')
---> 77 prediction = model.predict(norm_image, verbose=1)

~/conda/envs/gputest/lib/site-packages/tensorflow_core/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1011         max_queue_size=max_queue_size,
   1012         workers=workers,
-> 1013         use_multiprocessing=use_multiprocessing)

~/conda/envs/gputest/lib/site-packages/tensorflow_core/python/keras/engine/training_v2.py in predict(self, model, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    496         model, ModeKeys.PREDICT, x=x, batch_size=batch_size, verbose=verbose,
    497         steps=steps, callbacks=callbacks, max_queue_size=max_queue_size,
--> 498         workers=workers, use_multiprocessing=use_multiprocessing, **kwargs)

~/conda/envs/gputest/lib/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _model_iteration(self, model, mode, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    424           max_queue_size=max_queue_size,
    425           workers=workers,
--> 426           use_multiprocessing=use_multiprocessing)
    427     total_samples = _get_total_number_of_samples(adapter)
    428     use_sample = total_samples is not None

~/conda/envs/gputest/lib/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    644       standardize_function = None
--> 645       x, y, sample_weights = standardize(
    646           x, y, sample_weight=sample_weights)

~/conda/envs/gputest/lib/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2358     is_compile_called = False
   2359     if not self._is_compiled and self.optimizer:
-> 2360       self._compile_from_inputs(all_inputs, y_input, x, y)
   2361       is_compile_called = True

~/conda/envs/gputest/lib/site-packages/tensorflow_core/python/keras/engine/training.py in _compile_from_inputs(self, all_inputs, target, orig_inputs, orig_target)
   2609     self.compile(
   2610         optimizer=self.optimizer,
-> 2611         loss=self.loss,
   2612         metrics=self._compile_metrics,
   2613         weighted_metrics=self._compile_weighted_metrics,

AttributeError: 'Model' object has no attribute 'loss'
```

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
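For checking predictions from a model loaded with `compile=False` (which sidesteps the `AttributeError` but leaves no compiled metric), a NumPy mirror of the Dice coefficient used in the report is handy (my own helper, not part of the report):

```python
import numpy as np

def dice_coef_np(y_true, y_pred, smooth=1.0):
    # NumPy version of the Keras dice_coef above: flatten both masks,
    # take the overlap, and smooth numerator and denominator alike.
    yt = y_true.ravel().astype(np.float64)
    yp = y_pred.ravel().astype(np.float64)
    intersection = np.sum(yt * yp)
    return (2.0 * intersection + smooth) / (np.sum(yt) + np.sum(yp) + smooth)
```

For binary masks this is 1.0 on a perfect match and approaches 0 as the overlap vanishes, matching the `monitor='dice_coef', mode='max'` checkpointing intent.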
tensorflowtensorflow | Portuguese version | Bug | Is it possible to correct some of the Portuguese notebooks, as they have some typos? I'm a Portuguese native speaker.
tensorflowtensorflow | Convolution operations such as Conv2D do not detect the corner case kernel_size=0, which leads to unexpected results | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.15.0 (CPU)
- Python version: 3.6.9

Describe the current behavior: When I use convolution-related operations, TensorFlow doesn't seem to handle the corner case `kernel_size=0` well. The problem can be divided into two parts.

1. When `padding='same'` is set, `SeparableConv2D`, `DepthwiseConv2D`, `Conv2D` and `Conv2DTranspose` take `kernel_size=0` as a normal value to calculate the padding/crop shapes, and finally report a negative-shape error. For example, `Conv2DTranspose` reports:

```
ValueError: Crops cannot be negative for 'conv2d_transpose_1/atrous_conv2d_transpose/BatchToSpaceND' (op: 'BatchToSpaceND') with input shapes: [9,10,10,2], [2,2], [2,2] and with computed input tensors: input[1] = <3 3>, input[2] = <2 0 2 0>.
```

raised at `tensorflow_core/python/framework/ops.py`, line 1610. The negative values are actually created in `tensorflow_core/python/ops/nn_ops.py`; for example, the negative value that leads to the padding error of `DepthwiseConv2D` is calculated around line 619 of `nn_ops.py` (see the snapshot of `nn_ops.py`, pic1).

2. When `padding='valid'` is set, the situation is even worse. `DepthwiseConv2D` and `SeparableConv2D` can build the model and even predict, but the output is an all-zero matrix. `Conv2D` can also build and save the model, but it seems to get stuck in an infinite loop when predicting: it takes a long time and eventually produces no result. Only `Conv2DTranspose` behaves normally and reports the following error in `tensorflow_core/python/client/session.py`, line 1470:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Conv2DCustomBackpropInput: Size of out_backprop doesn't match computed: actual = 16, computed = 17 spatial_dim: 1 input: 16 filter: 0 output: 16 stride: 1 dilation: 1
  [[node conv2d_transpose_1/atrous_conv2d_transpose (Conv2DBackpropInput)]]
```

From the description above, we conclude that TensorFlow seems to lack a critical check on whether `kernel_size` is 0 when performing convolution-related operations, which is a dangerous corner case. This illegal parameter should not be carried into the calculation, but TensorFlow uses it to build a layer and even uses this layer to process the input, producing an all-zero matrix as output. This should be a logical bug.

Standalone code to reproduce the issue:

```python
import os
import numpy as np
import keras.layers as l
import keras.backend as k
import importlib
from keras.engine import Model, Input

# uses TensorFlow as the Keras backend; input dtype defaults to float32
kwargs = {'filters': 8, 'kernel_size': 0, 'padding': 'same', 'strides': 2,
          'dilation_rate': 1, 'data_format': 'channels_last'}  # Conv2D / SeparableConv2D
kwargs = {'kernel_size': 0, 'padding': 'same', 'strides': 2,
          'dilation_rate': 1, 'data_format': 'channels_last'}  # DepthwiseConv2D

inputs = 10 * np.random.random((1, 32, 32, 16))
layer = l.convolutional.DepthwiseConv2D(**kwargs)
# you can use Conv2D / SeparableConv2D / Conv2DTranspose instead of DepthwiseConv2D:
# layer = l.convolutional.SeparableConv2D(**kwargs)
# layer = l.convolutional.Conv2D(**kwargs)

x = Input(batch_shape=inputs.shape)
y = layer(x)
bk_model = Model([x], [y])
model_path = os.path.join('', 'model.h5')
bk_model.save(model_path)

from keras.models import load_model
model = load_model(model_path)
output = model.predict(inputs)
print('Finished')
```
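The missing check the report argues for is a one-line guard rejecting non-positive kernel sizes before any padding/crop arithmetic runs. A hypothetical sketch of such a validator (name and message are mine, not TensorFlow's):

```python
def validate_kernel_size(kernel_size):
    # Hypothetical guard: normalize scalar/tuple input and reject any
    # non-positive entry before it reaches the shape computation.
    ks = tuple(kernel_size) if isinstance(kernel_size, (tuple, list)) else (kernel_size,)
    if any(k <= 0 for k in ks):
        raise ValueError("kernel_size must be a positive integer, got %r" % (kernel_size,))
    return ks
```

Run at layer construction, this would turn the all-zero outputs and cryptic negative-crop errors above into an immediate, explicit ValueError.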
tensorflowtensorflow | There is no documentation related to decode_predictions and preprocess_input in any Keras application | Bug |
URL(s) with the issue:
Description of issue (what needs changing): The docs corresponding to these two functions must be added to each application model's docs.
Do you plan to also submit a pull request to fix the issue? Yes, will mention these issues soon in those PRs.
tensorflowtensorflow | recompute_grad computes gradients incorrectly when the same tensor is passed in multiple argument positions | Bug |
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux 5.5.3-arch1-1
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de, 2.1.0
- Python version: 3.7.5
- CUDA/cuDNN version: 10.2 / 7
- GPU model and memory: 1070 Ti, 8 GB

Describe the current behavior: Passing the same tensor as two arguments to a function decorated with `recompute_grad` leads to incorrect gradient computation.

Describe the expected behavior: The gradients should be computed properly. The `tf.custom_gradient` code uses `experimental_ref()`s to deduplicate variables; doing the same for input tensors would resolve this issue.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

@tf.recompute_grad
def broken_add(a, b):
    return a + b

x = tf.ones(3, dtype=tf.float32)
with tf.GradientTape() as g:
    g.watch(x)
    z = tf.reduce_sum(broken_add(x, x))
print(g.gradient(z, x).numpy())
```

This outputs `[4. 4. 4.]` instead of the correct value `[2. 2. 2.]`.

Other info / logs: The following lines (L410-L414) are used to deduplicate variables in `tf.custom_gradient`; adding the same deduplication for input tensors should resolve the issue.
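The doubling can be modelled without TensorFlow. Inside the recomputed graph, the inner tape sees the same tensor in both argument slots, so the gradient handed back for each slot already aggregates over all occurrences; summing those per-slot gradients again double-counts. A pure-Python toy of the fix (identity-based deduplication, standing in for `experimental_ref()`; all names are mine):

```python
def combine_slot_grads(slot_grads, args, dedupe=True):
    # slot_grads[i] is the full gradient w.r.t. the unique tensor behind
    # slot i (the inner tape already summed over every occurrence).
    # Without deduplication, a tensor passed in two slots is counted twice.
    totals, seen = {}, set()
    for g, a in zip(slot_grads, args):
        key = id(a)
        if dedupe and key in seen:
            continue
        seen.add(key)
        totals[key] = totals.get(key, 0.0) + g
    return totals
```

For `z = sum(x + x)`, each slot's inner gradient is already 2; the undeduplicated sum yields the buggy 4, the deduplicated one the correct 2.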
tensorflowtensorflow | Colab TF2.x: socket closed, "Error received from peer" | Bug | I was trying to train a GAN. Here is the code (full code on Colab):

with strategy.scope():
    # generator model
    generator_model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=noise_shape),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.Dense(1024, activation='relu'),
        tf.keras.layers.Dense(784, activation='tanh'),
        tf.keras.layers.Reshape(img_shape)])
    generator_model.compile(optimizer=tf.keras.optimizers.Adam(0.0002, 0.5),
                            loss='binary_crossentropy', metrics=['accuracy'])
    generator_model.summary()

    # discriminator model
    discriminator_model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=img_shape),
        tf.keras.layers.Dense(1024, activation='relu'),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')])
    discriminator_model.compile(optimizer=tf.keras.optimizers.Adam(0.0002, 0.5),
                                loss='binary_crossentropy')
    discriminator_model.summary()

    # combined model
    z = tf.keras.layers.Input(shape=noise_shape)
    discriminator_model.trainable = False
    valid = discriminator_model(generator_model(z))
    combined_model = tf.keras.models.Model(z, valid)
    combined_model.compile(loss='binary_crossentropy',
                           optimizer=tf.keras.optimizers.Adam(0.0002, 0.5))
    combined_model.summary()

# train
for epoch in range(10000):
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    imgs = x_train[idx]
    noise = np.random.normal(0, 1, (batch_size, 100))
    gen_imgs = generator_model.predict(noise.astype(np.float32))

    # train discriminator
    d_loss_real = discriminator_model.fit(imgs.astype(np.float32), np.ones((batch_size, 1)))
    d_loss_fake = discriminator_model.fit(gen_imgs.astype(np.float32), np.zeros((batch_size, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # train combined model
    noise = np.random.normal(0, 1, (batch_size * 2, 100))
    valid_y = np.array([1] * (batch_size * 2))
    g_loss = combined_model.fit(noise.astype(np.float32), valid_y)

When I run it, it throws "socket closed":

Train on 128 samples
 32/128 [=====>........................] - ETA: 7s
UnavailableError                          Traceback (most recent call last)
--> d_loss_real = discriminator_model.fit(imgs.astype(np.float32), np.ones((batch_size, 1)))

(10 frames, running through tensorflow_core/python/keras/engine/training.py fit, training_v2.py fit / run_one_epoch, training_v2_utils.py _non_none_constant_value / execution_function, util/nest.py map_structure, framework/tensor_util.py constant_value, and framework/ops.py numpy / _numpy, ending in six.py raise_from:)

UnavailableError: Socket closed
Additional GRPC error information:
{"created":"@1582482234.677879877","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Socket closed","grpc_status":14}
tensorflowtensorflow | How to get an integer batch size in Keras model.fit? | Bug | I'm trying to use model.fit on a Sequential model consisting of custom layers subclassing tf.keras.layers.Layer. Using GradientTape, where I feed every batch in explicitly, works fine, including in graph mode with tf.function. Trying to use the high-level Keras API for training:

model.compile(loss=loss_fn, optimizer='adam')
model.fit(x_train, y_train)

I get a bunch of "ValueError: None values not supported" for things like:

def call(self, x):
    epsilon = tf.random.normal(x.shape)  # reparametrization trick

since x.shape[0] is None. So the question is: how do I get an integer batch size when using model.fit? I tried

model.compile(loss=loss_fn, optimizer='adam')
model.fit(x_train, y_train, batch_size=64, steps_per_epoch=x_train.shape[0] // 64)

but that makes no difference: x.shape[0] remains None during graph creation.
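A common workaround (a sketch of mine, not from the report): build the shape at run time with tf.shape, which returns a tensor and therefore tolerates an unknown static batch dimension:

```python
import tensorflow as tf

class Reparam(tf.keras.layers.Layer):
    def call(self, x):
        # tf.shape(x) is evaluated at run time, so it works even while
        # the static batch dimension is still None during tracing.
        epsilon = tf.random.normal(shape=tf.shape(x))
        return x + epsilon

layer = Reparam()
out = layer(tf.zeros((4, 3)))
print(out.shape)  # (4, 3)
```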
tensorflowtensorflow | Multi-head, multi-loss model with GradientTape | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue — what needs to change: I have a model with two heads and two losses. I want to optimize my model in such a way that each loss propagates separately into its own head branch.
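A minimal sketch of one way to do this (layer sizes, names, and losses are made up for illustration): compute both losses under one tape; the gradient of each loss term then flows only through its own head, plus the shared trunk:

```python
import tensorflow as tf

# Hypothetical two-head model: a shared trunk with two small output branches.
inp = tf.keras.Input(shape=(8,))
trunk = tf.keras.layers.Dense(16, activation="relu")(inp)
out_a = tf.keras.layers.Dense(1, name="head_a")(trunk)
out_b = tf.keras.layers.Dense(1, name="head_b")(trunk)
model = tf.keras.Model(inp, [out_a, out_b])

opt = tf.keras.optimizers.Adam()
x = tf.random.normal((4, 8))
y_a, y_b = tf.zeros((4, 1)), tf.ones((4, 1))

with tf.GradientTape() as tape:
    pred_a, pred_b = model(x, training=True)
    loss_a = tf.reduce_mean(tf.square(pred_a - y_a))
    loss_b = tf.reduce_mean(tf.square(pred_b - y_b))
    total = loss_a + loss_b  # each term only touches its own head (and the trunk)

grads = tape.gradient(total, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
```

To keep the branches fully independent, one could instead call tape.gradient once per loss against that head's own variables (using a persistent tape).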
tensorflowtensorflow | Shape issue in keras.metrics.sparse_top_k_categorical_accuracy with multiple dimensions | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version: v1.12.1-24394-gc24d2f9 2.2.0-dev20200210. Python version: 3.5.2. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior / expected behavior / standalone code to reproduce the issue: when there are multiple dimensions (e.g. for image data), the behavior of tf.metrics.sparse_top_k_categorical_accuracy and tf.metrics.top_k_categorical_accuracy differs both from each other and from their categorical crossentropy equivalents. In particular, as of [the commit] that was used to fix [an earlier issue], tf.metrics.sparse_top_k_categorical_accuracy flattens the extra dims, resulting in a different output shape compared to sparse_categorical_crossentropy, while tf.metrics.top_k_categorical_accuracy raises an error saying there must be only 2 dimensions. The docs don't say much about what should be expected. The flattened shape also causes an error when passing weights to tf.keras.metrics.SparseTopKCategoricalAccuracy. See minimal repro code here.
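For reference, a toy sketch (mine, not the multi-dimensional repro from the issue) of the metric's behavior in the plain 2-D case it was designed for:

```python
import tensorflow as tf

m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1)
y_true = tf.constant([1, 0])                    # integer labels, shape (batch,)
y_pred = tf.constant([[0.1, 0.9], [0.8, 0.2]])  # per-class scores, shape (batch, classes)
m.update_state(y_true, y_pred)

acc = float(m.result())
print(acc)  # 1.0 -- both top-1 predictions match their labels
```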
tensorflowtensorflow | TensorFlow 2.x prevents me from using two tf.data.Datasets in multiprocessing | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS Catalina 10.15.3. Mobile device: no. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.0 and 2.1.0. Python version: 3.7.4. GCC/compiler version: n/a. CUDA/cuDNN version / GPU model and memory: no GPU.

Describe the current behavior: in my multiprocessing program I create a Queue for inter-process communication, but when I use two different tf.data.Datasets in different processes, my program gets stuck.

Describe the expected behavior: when I replace one of these tf.data.Datasets with a list, everything works. Ideally both should work; this piece of code was ported from TF 1.x, where it also worked well.

Standalone code to reproduce the issue:

import os
from multiprocessing import Process, Queue
import tensorflow as tf


class Trainable(object):
    def __init__(self):
        self.queue = Queue()
        self.valid_handle = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 1, 2, 3, 4, 5])
        self.train_handle = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6, 7, 8, 9])

    def eval(self, q):
        print('process to write: %s' % os.getpid())
        tmp = 1
        for parsed_record in self.valid_handle:
            print('parsed record', tmp, parsed_record)
            self.queue.put([q + 1, tmp])

    def train(self):
        process = None
        print('process to read: %s' % os.getpid())
        for context in self.train_handle:
            if process:
                valid_detail = self.queue.get()
                process = None
                print(valid_detail)
                if 8 in valid_detail:
                    print('early stop')
                    break
            process = Process(target=self.eval, args=(context.numpy(),))
            process.start()
        if not self.queue.empty():
            valid_detail = self.queue.get()
            process = None
            print(valid_detail)


if __name__ == '__main__':
    model = Trainable()
    model.train()

Other info / logs: here is my question posted on Stack Overflow.
tensorflowtensorflow | Can't find docs on SSL support for distributed training | Bug | Describe the current behavior: I haven't been able to find documentation on whether SSL is used during distributed training with tf.distribute strategies over gRPC. Describe the expected behavior: I should be able to easily find this information in the documentation, and, if it is supported, I should easily be able to turn SSL on/off in distributed training for when I prefer security vs. performance.
tensorflowtensorflow | image resize: tensor as size argument not working in tf.function | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): a custom layer using tf.image.resize. OS platform and distribution: Linux Mint 19.3 Cinnamon (Ubuntu base). Mobile device: no. TensorFlow installed from: pip. TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.6.9.

Describe the current behavior: tf.image.resize works in eager (standard) mode with a tensor as the size argument; it stops working if wrapped in tf.function.

Describe the expected behavior: it should work as in eager mode. Possible reason: inside tf.function, the size tensor in the implementation of tf.image.resize is not evaluated, so the cast to an appropriate value fails and None is returned.

Error (traceback condensed; the full stack runs through keras/engine/base_layer.py __call__, eager/def_function.py __call__ / _initialize, eager/function.py _get_concrete_function_internal_garbage_collected / _maybe_define_function / _create_graph_function, and framework/func_graph.py func_graph_from_py_func before reaching the converted code):

ValueError: in converted code:

    .../src/shared/net/model/layers/base.py:228 call
        scaled_conv = tf.image.resize(conv, destination_size, preserve_aspect_ratio=True, antialias=True)
    .../tensorflow_core/python/ops/image_ops_impl.py:1357 resize_image_v2
        skip_resize_if_same=False)
    .../tensorflow_core/python/ops/image_ops_impl.py:1100 _resize_images_common
        math_ops.cast(new_height_const, dtypes.float32),
    .../tensorflow_core/python/util/dispatch.py:180 wrapper
        return target(*args, **kwargs)
    .../tensorflow_core/python/ops/math_ops.py:705 cast
        x = ops.convert_to_tensor(x, name="x")
    .../tensorflow_core/python/framework/tensor_util.py:439 make_tensor_proto
        raise ValueError("None values not supported.")

    ValueError: None values not supported.

Standalone code to reproduce the issue: sorry, not yet the time, but the relevant code is a custom layer for scaling features to a size that depends on another feature's size:

class Scale(keras.layers.Layer):
    def __init__(self, destination_channels=None, name='scale', **kwargs):
        super().__init__(name=name, **kwargs)
        self.destination_channels = destination_channels

    def build(self, input_shape):
        if self.destination_channels is None:
            self.destination_channels = input_shape[-1]
        self.compress_input = keras.layers.Convolution2D(
            int(input_shape[-1] / 2), kernel_size=1, padding='same',
            activation=tf.nn.leaky_relu,
            kernel_initializer=tf.initializers.he_normal(),
            bias_initializer=tf.initializers.he_uniform())
        self.conv = keras.layers.Convolution2D(
            input_shape[-1], kernel_size=3, padding='same',
            activation=tf.nn.leaky_relu,
            kernel_initializer=tf.initializers.he_normal(),
            bias_initializer=tf.initializers.he_uniform())
        self.pool = keras.layers.MaxPool2D(pool_size=3, strides=1, padding='same')
        self.compress_output = keras.layers.Convolution2D(
            self.destination_channels, kernel_size=1, padding='same',
            activation=tf.nn.leaky_relu,
            kernel_initializer=tf.initializers.he_normal(),
            bias_initializer=tf.initializers.he_uniform())
        super().build(input_shape)

    def call(self, inputs, destination_size):
        compressed_input = self.compress_input(inputs)
        conv = self.conv(compressed_input)
        pool = self.pool(inputs)
        scaled_conv = tf.image.resize(conv, destination_size,
                                      preserve_aspect_ratio=True, antialias=True)
        scaled_pool = tf.image.resize(pool, destination_size,
                                      preserve_aspect_ratio=True, antialias=True)
        concat = keras.layers.concatenate([scaled_pool, scaled_conv])
        compressed_output = self.compress_output(concat)
        return compressed_output

This works as shown; it stops working if @tf.function is added to call(self, inputs, destination_size). Calling code:

def call(self, inputs):
    input_res, input_shc = inputs
    scale = tf.cast(input_shc.shape[1:3], dtype=tf.int32)
    scale_2 = tf.cast(scale / 2, dtype=tf.int32)
    scale_4 = tf.cast(scale / 4, dtype=tf.int32)
    scale_8 = tf.cast(scale / 8, dtype=tf.int32)
    ...
    big_normal = self.big_normal(big_shared2 + shc, scale_2)
    return big_normal

big_normal is an instance of the Scale class. Thanks in advance.
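For contrast, a minimal sketch (mine, not from the report) showing that a tensor-valued size does trace inside tf.function when preserve_aspect_ratio keeps its default of False; the reported failure appears specific to the preserve_aspect_ratio=True path in that 2.1 build:

```python
import tensorflow as tf

@tf.function
def resize_fn(img, size):
    # size is a 1-D int32 tensor; with preserve_aspect_ratio=False the op
    # consumes it directly instead of trying to read static Python values.
    return tf.image.resize(img, size)

out = resize_fn(tf.zeros((1, 8, 8, 3)), tf.constant([4, 4], dtype=tf.int32))
print(out.shape)  # (1, 4, 4, 3)
```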
tensorflowtensorflow | How to optimize a complex phase? | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS 10.15.3. TensorFlow installed from: binary, 2.1.0. Describe the current behavior: a real loss function that uses complex numbers internally requires variables to be complex, but this leads to complex gradients even though they should be real. Describe the expected behavior: in TF 1.x I used to circumvent this problem by defining my real variables as e.g.

x = tf.complex(tf.Variable(1.0, dtype=tf.float32), tf.constant(0.0, dtype=tf.float32))

TF 2.x cannot compute gradients if I use this trick. Standalone code to reproduce the issue: for example, I want to optimize the real angle x in the complex phase exp(1j*x):

def loss(x):
    return tf.abs(tf.exp(1j * x) - 1.0)  # minimized by x = 0.0

x = tf.Variable(1.0, dtype=tf.complex64)  # forced to have dtype tf.complex64
lr = 0.01
for n in range(10):
    with tf.GradientTape() as tape:
        l = loss(x)
    grad = tape.gradient(l, x)
    x.assign_sub(lr * grad)
    print(x)
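A sketch of one way to keep the variable real in TF 2.x (my own workaround, not from the issue): build the complex phase inside the loss with tf.complex, so the tape differentiates back to the real variable and returns a real gradient:

```python
import tensorflow as tf

x = tf.Variable(1.0, dtype=tf.float32)  # stays real

def loss():
    phase = tf.exp(tf.complex(0.0, x))                       # exp(1j * x)
    return tf.square(tf.abs(phase - tf.complex(1.0, 0.0)))   # real-valued: |e^{ix}-1|^2

lr = 0.1
for _ in range(50):
    with tf.GradientTape() as tape:
        l = loss()
    g = tape.gradient(l, x)  # real gradient, same dtype as x
    x.assign_sub(lr * g)

print(float(x))  # converges toward 0, the minimizer
```

The squared magnitude is used instead of tf.abs alone so the loss stays smooth at the optimum.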
tensorflowtensorflow | Does passing the training flag to tf.keras.Sequential do anything? | Bug | URL(s) with the issue: "Define the loss and gradient function" (please provide a link to the documentation entry). Description of issue — what needs to change: it is unclear whether the Sequential class makes use of a training flag fed into it during training/inference, as the tutorial above implies. Clear description: when building a custom model subclassed from tf.keras.Model, the standard signature for writing the call is as follows:

def call(self, inputs, training=None, mask=None):

If my class includes submodels of the form Sequential, I am able to pass this flag forward, but I'm unaware whether it does anything, as the documentation for the class doesn't mention this flag. Looking at the customization tutorial above, however, the flag is passed into a Sequential model that does not include layers whose behavior changes between training and inference, so I don't know if that flag is doing anything. Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined (for example, raises)? Usage example: is there a usage example? (See the API guide on how to write testable usage examples.) Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
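As a quick empirical check (a sketch I'd expect to hold, using Dropout because its behavior differs between the two modes): Sequential does appear to forward the flag to its layers:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dropout(0.5)])
x = tf.ones((1, 100))

train_out = model(x, training=True)    # dropout active: some units zeroed, rest scaled
infer_out = model(x, training=False)   # dropout inactive: identity

print(float(tf.reduce_min(train_out)), float(tf.reduce_min(infer_out)))
```

With 100 units at rate 0.5, at least one unit is dropped in training mode with overwhelming probability, while inference mode passes the input through unchanged.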
tensorflowtensorflow | If a GPU is available, raises "Cholesky decomposition was not successful. The input might not be valid."; otherwise, in CPU mode, the official unit test is OK | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from (source or binary): pip install tensorflow-gpu==1.15.0 (tf_env.txt). python -c "import tensorflow as tf; print(tf.__version__)": 1.15.0. python -c "import tensorflow as tf; print(tf.test.gpu_device_name())": Found device 0 with properties: name: Tesla V100-SXM2-32GB, major: 7, minor: 0, memoryClockRate (GHz): 1.53. Creating TensorFlow device (/device:GPU:0 with 30555 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-32GB, pci bus id: 0000:8a:00.0, compute capability: 7.0).

Describe the current behavior:
CUDA_VISIBLE_DEVICES=-1 python gmm_test.py GmmTest.test_fit  -> OK
CUDA_VISIBLE_DEVICES=0  python gmm_test.py GmmTest.test_fit  -> ERROR ("Original stack trace for 'Cholesky'")

Describe the expected behavior: the official gmm_test.py should pass the unit test in GPU mode: CUDA_VISIBLE_DEVICES=0 python gmm_test.py GmmTest.test_fit -> OK.

Standalone code to reproduce the issue (the bare minimum necessary to generate the problem; if possible please share a link to a Colab/Jupyter notebook):
wget -O gmm_test.py <url>
CUDA_VISIBLE_DEVICES=0 python gmm_test.py GmmTest.test_fit

Other info / logs: python_trace_err.log, which contains the Python backtrace info.
tensorflowtensorflow | Array Relu, which is an input to the Conv operator producing the output array conv2d_1/BiasAdd, is lacking min/max data, which is necessary for quantization | Bug | Hi, I am facing the following issue while converting my frozen graph (.pb) to .tflite:

Array Relu, which is an input to the Conv operator producing the output array conv2d_1/BiasAdd, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.
Fatal Python error: Aborted

Current thread 0x00007f637125f740 (most recent call first):
  File "/home/shubhamsingh/miniconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33 in execute
  File "/home/shubhamsingh/miniconda3/lib/python3.7/site-packages/absl/app.py", line 250 in _run_main
  File "/home/shubhamsingh/miniconda3/lib/python3.7/site-packages/absl/app.py", line 299 in run
  File "/home/shubhamsingh/miniconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40 in run
  File "/home/shubhamsingh/miniconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59 in main
  File "/home/shubhamsingh/miniconda3/bin/toco_from_protos", line 8 in <module>
Aborted (core dumped)

I used the following script to convert the model:

tflite_convert --output_file=mode_new_yoyo1.tflite --graph_def_file=frozen_model.pb --inference_type=QUANTIZED_UINT8 --input_arrays=input_1 --output_arrays=dense1/Softmax,dense2/Softmax --mean_values=0 --std_dev_values=255

I used a WideResNet model with two output nodes.
tensorflowtensorflow | TensorFlow AutoGraph could not transform | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.

Downloading and preparing dataset imdb_reviews (80.23 MiB) to /root/tensorflow_datasets/imdb_reviews/subwords8k/1.0.0...
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/subwords8k/1.0.0.incomplete7ztmsy/imdb_reviews-train.tfrecord
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/subwords8k/1.0.0.incomplete7ztmsy/imdb_reviews-test.tfrecord
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/subwords8k/1.0.0.incomplete7ztmsy/imdb_reviews-unsupervised.tfrecord
Dataset imdb_reviews downloaded and prepared to /root/tensorflow_datasets/imdb_reviews/subwords8k/1.0.0. Subsequent calls will reuse this data.

Then the following two warnings are emitted repeatedly (each appears many times, from both the WARNING:tensorflow and WARNING:absl loggers):

WARNING:tensorflow:AutoGraph could not transform <...> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: module 'gast' has no attribute 'Num'

WARNING:tensorflow:AutoGraph could not transform <...> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
tensorflowtensorflow | Cannot convert Tensor object to numpy array | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code: yes
- OS platform and distribution: macOS Catalina 10.15.2
- TensorFlow installed from: conda
- TensorFlow version (use command below): 2.0.0
- Python version: 3.6.7
- CPU

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: I am writing a custom layer where I need a kernel to be element-wise multiplied with the input. I am trying to convert the input tensor into a numpy array using `K.eval(input)`, but I get the following error: `AttributeError: 'Tensor' object has no attribute 'numpy'`.

Describe the expected behavior.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem). Here is the custom layer that I am trying to implement:

```python
import numpy as np
import tensorflow
from tensorflow.keras import backend as K
from tensorflow.keras.layers import InputSpec, Layer, Dense, Conv2D, Lambda, Multiply
from tensorflow.keras import constraints
from tensorflow.keras import initializers
from binary_ops import binarize


class BinaryConv2D(Conv2D):
    '''Binarized Convolution2D layer.
    References: "BinaryNet: Training Deep Neural Networks with Weights and
    Activations Constrained to +1 or -1"
    '''
    def __init__(self, filters, kernel_lr_multiplier='Glorot',
                 bias_lr_multiplier=None, H=1., **kwargs):
        super(BinaryConv2D, self).__init__(filters, **kwargs)
        self.H = H
        self.kernel_lr_multiplier = kernel_lr_multiplier
        self.bias_lr_multiplier = bias_lr_multiplier

    def build(self, input_shape):
        if self.data_format == 'channels_first':
            channel_axis = 1
        else:
            channel_axis = -1
        if input_shape[channel_axis] is None:
            raise ValueError('The channel dimension of the inputs '
                             'should be defined. Found `None`.')
        input_dim = input_shape[channel_axis]
        kernel_shape = self.kernel_size + (input_dim, self.filters)

        base = self.kernel_size[0] * self.kernel_size[1]
        if self.H == 'Glorot':
            nb_input = int(input_dim * base)
            nb_output = int(self.filters * base)
            self.H = np.float32(np.sqrt(1.5 / (nb_input + nb_output)))
            print('Glorot H: {}'.format(self.H))

        if self.kernel_lr_multiplier == 'Glorot':
            nb_input = int(input_dim * base)
            nb_output = int(self.filters * base)
            self.kernel_lr_multiplier = np.float32(
                1. / np.sqrt(1.5 / (nb_input + nb_output)))
            print('Glorot learning rate multiplier: {}'.format(self.kernel_lr_multiplier))

        self.kernel_constraint = Clip(-self.H, self.H)
        self.kernel_initializer = initializers.RandomUniform(-self.H, self.H)
        self.kernel = self.add_weight(shape=kernel_shape,
                                      initializer=self.kernel_initializer,
                                      name='kernel',
                                      regularizer=self.kernel_regularizer,
                                      constraint=self.kernel_constraint)

        if self.use_bias:
            self.lr_multipliers = [self.kernel_lr_multiplier, self.bias_lr_multiplier]
            self.bias = self.add_weight((self.output_dim,),
                                        initializer=self.bias_initializer,
                                        name='bias',
                                        regularizer=self.bias_regularizer,
                                        constraint=self.bias_constraint)
        else:
            self.lr_multipliers = [self.kernel_lr_multiplier]
            self.bias = None

        # Set input spec.
        self.input_spec = InputSpec(ndim=4, axes={channel_axis: input_dim})
        self.built = True

    def call(self, inputs):
        binary_kernel = binarize(self.kernel, H=self.H)
        print(type(K.eval(binary_kernel)))
        bk_temp = np.reshape(K.eval(binary_kernel)[0][1],
                             (self.kernel_size[0], self.kernel_size[0], 1))
        bk_cube = np.zeros((30, 30, 30, 1))
        bk_cube[:] = bk_temp
        outputs = inputs * bk_cube
        if self.use_bias:
            outputs = K.bias_add(outputs, self.bias, data_format=self.data_format)
        if self.activation is not None:
            return self.activation(outputs)
        return outputs

    def get_config(self):
        config = {'H': self.H,
                  'kernel_lr_multiplier': self.kernel_lr_multiplier,
                  'bias_lr_multiplier': self.bias_lr_multiplier}
        base_config = super(BinaryConv2D, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
```

I should add that this error comes up only when I try to compile this into a model; if I just call `build` then it works fine.

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
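The `binarize` helper imported from `binary_ops` above is not shown in the report; in BinaryNet-style code it typically maps each weight to ±H by sign. A hypothetical plain-Python sketch of that operation (not the reporter's actual tensor implementation):

```python
def binarize(weights, H=1.0):
    """Deterministic BinaryNet-style binarization: map each weight to +H
    or -H by its sign (zero is conventionally mapped to +H here)."""
    return [H if w >= 0 else -H for w in weights]

print(binarize([0.3, -0.2, 0.0, -1.7], H=1.0))  # [1.0, -1.0, 1.0, -1.0]
```

In the real layer this runs on tensors (e.g. via a sign op with a straight-through gradient), which is why evaluating it with `K.eval` on a symbolic graph tensor fails.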
tensorflowtensorflow | ImportError: DLL load failed: The specified module could not be found | Bug |

```
C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\python.exe C:/AI/imageclassification.py
Using TensorFlow backend.
Traceback (most recent call last):
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/AI/imageclassification.py", line 1, in <module>
    from keras.preprocessing.image import ImageDataGenerator
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\__init__.py", line 3, in <module>
    from . import utils
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\utils\__init__.py", line 6, in <module>
    from . import conv_utils
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\utils\conv_utils.py", line 9, in <module>
    from .. import backend as K
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\__init__.py", line 1, in <module>
    from .load_backend import epsilon
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\load_backend.py", line 90, in <module>
    from .tensorflow_backend import *
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend.py", line 5, in <module>
    import tensorflow as tf
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 101, in <module>
    from tensorflow_core import *
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\__init__.py", line 40, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
    module = self._load()
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 44, in _load
    module = _importlib.import_module(self.__name__)
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\ravikumarm\AppData\Local\Programs\Python\Python37\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.

Failed to load the native TensorFlow runtime.
See for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

Process finished with exit code 1
```
tensorflowtensorflow | Not able to load a saved model with custom layer | Bug | Please make sure that this is a bug as per our policy.

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS: CentOS
- tensorflow-gpu 2.1.0 (tensorflow 2.0.0 installed in the conda version); conda version: 4.8.1
- uses CUDA; CUDA version: 10.1
- GPU: Tesla K80
- Python 3.6

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: I have created a custom layer and trained the model using that layer, but when I load the model back it is not able to load all the weight values.

Describe the expected behavior: it should be able to load the weight values.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):

```python
from __future__ import absolute_import, division, print_function, unicode_literals
from tensorflow.keras import initializers
from tensorflow.keras import layers
from tensorflow.keras.models import load_model
from tensorflow.keras.layers import Input, Dense
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.utils import tf_utils
from tensorflow.keras.layers import LeakyReLU
import tensorflow as tf
import tensorflow.keras
from tensorflow.keras.callbacks import *
from tensorflow.keras.losses import mse
import numpy as np

# Create a custom layer
class CustomBatchNormalization(layers.Layer):
    def __init__(self, momentum=0.99, epsilon=1e-3,
                 beta_initializer='zeros', gamma_initializer='ones',
                 moving_mean_initializer='zeros', moving_range_initializer='ones',
                 **kwargs):
        super(CustomBatchNormalization, self).__init__(**kwargs)
        self.supports_masking = True
        self.momentum = momentum
        self.epsilon = epsilon
        self.beta_initializer = initializers.get(beta_initializer)
        self.gamma_initializer = initializers.get(gamma_initializer)
        self.moving_mean_initializer = initializers.get(moving_mean_initializer)
        self.moving_range_initializer = initializers.get(moving_range_initializer)

    def build(self, input_shape):
        dim = input_shape[-1]
        shape = (dim,)
        self.gamma = self.add_weight(shape=shape, name='gamma',
                                     initializer=self.gamma_initializer,
                                     trainable=True)
        self.beta = self.add_weight(shape=shape, name='beta',
                                    initializer=self.beta_initializer,
                                    trainable=True)
        self.moving_mean = self.add_weight(shape=shape, name='moving_mean',
                                           initializer=self.moving_mean_initializer,
                                           trainable=False)
        self.moving_range = self.add_weight(shape=shape, name='moving_range',
                                            initializer=self.moving_range_initializer,
                                            trainable=False)

    def call(self, inputs, training=None):
        input_shape = inputs.shape
        if training == False:
            scale = (inputs - self.moving_mean) / (self.moving_range + self.epsilon)
            return self.gamma * scale + self.beta
        mean = tf.math.reduce_mean(inputs, axis=0)
        maxr = tf.math.reduce_max(inputs, axis=0)
        minr = tf.math.reduce_min(inputs, axis=0)
        range_diff = tf.math.subtract(maxr, minr)
        self.moving_mean = tf.math.add(self.momentum * self.moving_mean,
                                       (1 - self.momentum) * mean)
        self.moving_range = tf.math.add(self.momentum * self.moving_range,
                                        (1 - self.momentum) * range_diff)
        scale = tf.math.divide(tf.math.subtract(inputs, mean),
                               range_diff + self.epsilon)
        return tf.math.add(tf.math.multiply(self.gamma, scale), self.beta)

    def get_config(self):
        config = {
            'momentum': self.momentum,
            'epsilon': self.epsilon,
            'beta_initializer': initializers.serialize(self.beta_initializer),
            'gamma_initializer': initializers.serialize(self.gamma_initializer),
            'moving_mean_initializer': initializers.serialize(self.moving_mean_initializer),
            'moving_range_initializer': initializers.serialize(self.moving_range_initializer),
        }
        base_config = super(CustomBatchNormalization, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape

# Create your model
inp = Input(shape=(4,))
batch_norm_1 = CustomBatchNormalization(dynamic=True)(inp)
densout = Dense(24, activation='linear')(batch_norm_1)
densout = LeakyReLU(alpha=0.3)(densout)
batch_norm_2 = CustomBatchNormalization(dynamic=True)(densout)
densout = Dense(128, activation='linear')(batch_norm_2)
densout = LeakyReLU(alpha=0.3)(densout)
batch_norm_out = CustomBatchNormalization(dynamic=True)(densout)
out = Dense(5, activation='linear')(batch_norm_out)
test_nw = tf.keras.models.Model(inp, out)

# Compile it
test_nw.compile(tf.keras.optimizers.Adam(), loss=mse,
                experimental_run_tf_function=False)

path_hdf5 = ...  # path to save this model
earlystopper = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save_rh = ModelCheckpoint(path_hdf5, save_best_only=True,
                              monitor='val_loss', mode='min')

X = np.random.randn(4, 4)
y = np.random.randn(4, 5)
X_val = np.random.randn(4, 4)
y_val = np.random.randn(4, 5)
test_nw.fit(X, y, batch_size=4, epochs=10,
            validation_data=(X_val, y_val),
            callbacks=[earlystopper, mcp_save_rh])

# Now restart the kernel and load the model
dict_lay = {'CustomBatchNormalization': CustomBatchNormalization}
mod = load_model(path_hdf5, custom_objects=dict_lay)
```

I get an error saying that:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 mod = load_model(path_hdf5, custom_objects=dict_lay)

~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
    144   if (h5py is not None and (
    145       isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
--> 146     return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
    147
    148   if isinstance(filepath, six.string_types):

~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)
    169
    170     # set weights
--> 171     load_weights_from_hdf5_group(f['model_weights'], model.layers)
    172
    173   if compile:

~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group(f, layers)
    695                          str(len(symbolic_weights)) +
    696                          ' weights, but the saved weights have ' +
--> 697                          str(len(weight_values)) + ' elements.')
    698     weight_value_tuples += zip(symbolic_weights, weight_values)
    699   K.batch_set_value(weight_value_tuples)

ValueError: Layer #0 (named "custom_batch_normalization_17" in the current model) was found to correspond to layer custom_batch_normalization_17 in the save file. However the new layer custom_batch_normalization_17 expects 4 weights, but the saved weights have 2 elements.
```

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
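The layer above normalizes by the batch's (max − min) range rather than its variance. A plain-Python sketch of that training-mode math (stdlib only, with made-up input values; not the TensorFlow implementation):

```python
def range_normalize(batch, gamma=1.0, beta=0.0, epsilon=1e-3):
    """Normalize a 1-D batch by its mean and (max - min) range, then
    apply the learned scale/shift -- the per-feature math used by the
    CustomBatchNormalization layer above in training mode."""
    mean = sum(batch) / len(batch)
    range_diff = max(batch) - min(batch)
    return [gamma * (x - mean) / (range_diff + epsilon) + beta for x in batch]

out = range_normalize([1.0, 2.0, 3.0])
# mean = 2.0, range = 2.0 (+ epsilon), so values come out near [-0.5, 0.0, 0.5]
```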
tensorflowtensorflow | Is NonMaxSuppressionV4 supported? | Bug | System information: Linux Ubuntu 16.04; TensorFlow 2.1.0.

Provide the text output from tflite_convert:

```
Here is a list of operators for which you will need custom implementations: NonMaxSuppressionV4
```

Code:

```python
selected_indices_padded, num_valid = tf.image.non_max_suppression_padded(
    boxes=box_squeeze, scores=scores, max_output_size=20,
    score_threshold=0.7, iou_threshold=0.5, pad_to_max_output_size=True)
```

According to the documentation, NonMaxSuppressionV4 is already supported. However, when I use it to build my model and try to convert it into tflite, the error above comes out. I tried a solution with `tf.lite.OpsSet.SELECT_TF_OPS` and the error goes away:

```python
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
```

But I cannot run it properly with the native TFLite interpreter; another error shows as below:

```
RuntimeError: Regular TensorFlow ops are not supported by this interpreter.
Make sure you apply/link the Flex delegate before inference.
Node number 92 (FlexNonMaxSuppressionV4) failed to prepare.
```
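For reference, the greedy selection that non-max suppression performs can be sketched in plain Python (this shows the algorithm only, not the TFLite kernel; the box and score values below are made up):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (y1, x1, y2, x2)."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    def area(box):
        return (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def non_max_suppression(boxes, scores, max_output_size,
                        iou_threshold=0.5, score_threshold=0.0):
    """Greedy NMS: visit boxes in descending score order, dropping any
    box that overlaps an already-selected box above iou_threshold."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_threshold),
                   key=lambda i: scores[i], reverse=True)
    selected = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in selected):
            selected.append(i)
        if len(selected) == max_output_size:
            break
    return selected

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores, 20))  # [0, 2] -- box 1 overlaps box 0
```

The padded variant (`NonMaxSuppressionV4`) additionally pads the index list to `max_output_size` and returns how many entries are valid.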
tensorflowtensorflow | How can I add a new OS to the tested and supported list? | Bug | URL(s) with the issue:

Description of issue (what needs changing): Could someone instruct me on how to add a new OS to the tested and supported list? I would like to put in some effort for openSUSE and SLE. There must be some steps to make sure an OS is tested and supported, right?
tensorflowtensorflow | Basic LSTM tflite model created with new experimental converter results in model with incorrect output shape | Bug | System information:
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Google Colab
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): 2.2.0-dev20200218

Command used to run the converter, or code if you're using the Python API:

```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
tflite_model = converter.convert()
open('mnist_lstm_f32.tflite', 'wb').write(tflite_model)
```

The output from the converter invocation: conversion is successful.

Failure details: I am creating a basic MNIST classification model which includes a single LSTM layer, using the new experimental converter option. The model can be converted to tflite format successfully, and inference from the tflite model using the Python interpreter API gives correct results. However, the model's output shape is just an empty array, and running the model on Android requires the output buffer size to be specified, so the model crashes. I expected the output shape to be [1, 10]; however, that is not the case, as shown below:

```
{'name': 'Identity', 'index': 52, 'shape': array([], dtype=int32),
 'shape_signature': array([], dtype=int32), 'dtype': <...>,
 'quantization': (0.0, 0),
 'quantization_parameters': {'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32),
                             'quantized_dimension': 0},
 'sparsity_parameters': {}}
```

Any other info / logs: Colab notebook.
tensorflowtensorflow | Memory leak in TensorFlow 2.0 dataset when using group_by_window | Bug | System information:
- OS platform: Google Cloud, Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.0.0
- Python version: 3.7
- CUDA/cuDNN version: 10.1
- GPU model and memory: Tesla P100, 16 GB
- Run in graph mode: yes

This creates a memory leak:

```python
def pairwise_batch_iterator(tf_records, no_threads=14, batch_size=64, num_epochs=50):
    dataset = make_dataset(tf_records, no_threads)
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.apply(tf.data.experimental.group_by_window(
        key_func=lambda elem, *args: elem,
        reduce_func=lambda _, window: window.batch(batch_size),
        window_size=batch_size))
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    return dataset
```

This works fine:

```python
def pairwise_batch_iterator(tf_records, no_threads=14, batch_size=64, num_epochs=50):
    dataset = make_dataset(tf_records, no_threads)
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    return dataset
```

I am training a pairwise ranking model where I have to group by query id; that's why I am using `tf.data.experimental.group_by_window`, and this is creating a memory leak. If I use the second version of the code I don't face any issue, but I have to group by query id.
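For readers unfamiliar with the op, `group_by_window` buckets elements by a key and emits a batch whenever a bucket reaches `window_size`. A stdlib-Python sketch of that behaviour (hypothetical query/document pairs; this is the semantics, not the tf.data implementation):

```python
def group_by_window(elements, key_func, window_size):
    """Bucket elements by key_func; yield a full batch whenever a bucket
    reaches window_size, flushing any trailing partial buckets at the end."""
    buckets = {}
    for elem in elements:
        key = key_func(elem)
        buckets.setdefault(key, []).append(elem)
        if len(buckets[key]) == window_size:
            yield buckets.pop(key)
    for leftover in buckets.values():
        yield leftover

# Group (query_id, doc) pairs by query id, two per batch.
pairs = [(1, 'a'), (2, 'x'), (1, 'b'), (2, 'y'), (1, 'c')]
batches = list(group_by_window(pairs, key_func=lambda p: p[0], window_size=2))
# [[(1, 'a'), (1, 'b')], [(2, 'x'), (2, 'y')], [(1, 'c')]]
```

Note that the op must hold partially filled buckets in memory, which is why a leak (or simply unbounded bucket growth when there are many distinct keys) shows up here and not in the plain `repeat`/`prefetch` pipeline.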
tensorflowtensorflow | TensorFlow 2.1.0 is not using GPU | Bug | System information:
- OS platform: Google Cloud, Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.1
- Python version: 3.7
- CUDA/cuDNN version: 10.1
- GPU model and memory: Tesla P100, 16 GB

When I train any model using TensorFlow 2.1, it runs on CPU only. Even though the model shows as placed on the GPU, GPU utilization stays at 0 and CPU utilization is very high.
tensorflowtensorflow | "Saves everything ... the optimizer's configuration" vs "it is not able to save TensorFlow optimizers" | Bug | URL(s) with the issue:

Description of issue (what needs changing): The first reference to the optimizer's configuration is unqualified; the second reference to optimizers is only partially qualified with "from tf.train". Does this mean tf.keras optimizer state is stored but not tf.train optimizer state? Or does this mean no optimizer state is stored? Full text as follows:

> This technique saves everything: the weight values, the model's configuration (architecture), the optimizer's configuration. Keras saves models by inspecting the architecture. Currently, it is not able to save TensorFlow optimizers (from tf.train). When using those you will need to re-compile the model after loading, and you will lose the state of the optimizer.
tensorflowtensorflow | TypeError: Tensor is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key (in Colab) | Bug | When trying to build the following model I get the error "Tensor is unhashable" (see figure):

```python
input_data = Input(name='the_input', shape=(208, 224, 224, 3), dtype='float32')
layer1 = TimeDistributed(MobileNet(weights='imagenet', include_top=False))(input_data)
layer2 = TimeDistributed(GlobalAveragePooling2D())(layer1)

# Create bidirectional LSTM
for i in range(0, n_layers):
    x = Bidirectional(CuDNNLSTM(20, kernel_initializer=kernel_init_rnn,
                                bias_initializer=bias_init_rnn,
                                unit_forget_bias=True,
                                return_sequences=True),
                      merge_mode='sum',
                      name='cudnn_bi_lstm' + str(i + 1))(layer2)

x = TimeDistributed(Dense(units=20, kernel_initializer=kernel_init_dense,
                          bias_initializer=bias_init_dense,
                          activation='relu', name='fc_4'))(x)
x = TimeDistributed(Dropout(dropout, name='dropout_4'))(x)

# Output layer with softmax
y_pred = TimeDistributed(Dense(units=30, kernel_initializer=kernel_init_dense,
                               bias_initializer=bias_init_dense,
                               activation='softmax', name='softmax'))(x)

model = Model(inputs=input_data, outputs=y_pred)
model.summary()
```

Colab issue.
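The error message's suggestion (`tensor.experimental_ref()`) exists because, with tensor equality enabled, `==` on tensors is element-wise, so tensors stop being usable as dict keys. The idea can be illustrated in plain Python with lists, which are unhashable for the same reason (a hypothetical `Ref` wrapper standing in for what `experimental_ref()` returns):

```python
class Ref:
    """Identity-based hashable wrapper: two Refs compare equal only when
    they wrap the very same object, mirroring tensor.experimental_ref()."""
    def __init__(self, obj):
        self.obj = obj
    def __hash__(self):
        return id(self.obj)
    def __eq__(self, other):
        return isinstance(other, Ref) and self.obj is other.obj

a, b = [1, 2], [1, 2]        # equal contents, distinct objects
d = {Ref(a): 'first', Ref(b): 'second'}  # {a: ...} would raise TypeError
print(d[Ref(a)])  # first
```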
tensorflowtensorflow | Not able to use `training` argument in the call method for a custom layer | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- OS: CentOS
- tensorflow-gpu 2.1.0 (tensorflow 2.0.0 installed in the conda version); conda version: 4.8.1
- uses CUDA; CUDA version: 10.1
- GPU: Tesla K80
- Python 3.6

Describe the current behavior: not able to use the training flag when designing a custom layer; getting an error saying that:

```
OperatorNotAllowedInGraphError            Traceback (most recent call last)
~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    841                 with auto_control_deps.AutomaticControlDependencies() as acd:
--> 842                   outputs = call_fn(cast_inputs, *args, **kwargs)
    843                   # Wrap Tensors in `outputs` in `tf.identity` to avoid

~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    236       if hasattr(e, 'ag_error_metadata'):
--> 237         raise e.ag_error_metadata.to_exception(e)
    238       else:

OperatorNotAllowedInGraphError: in converted code:

    :43 call
        if not training:
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:765 __bool__
        self._disallow_bool_casting()
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:534 _disallow_bool_casting
        self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:523 _disallow_in_graph_mode
        " this function with @tf.function.".format(task))

    OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
```

Describe the expected behavior: it should not give an error.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem). This is the class that I made:

```python
from tensorflow.keras import initializers
from tensorflow.keras import layers

class CustomBatchNormalization(layers.Layer):
    def __init__(self, momentum=0.99, epsilon=1e-3,
                 beta_initializer='zeros', gamma_initializer='ones',
                 moving_mean_initializer='zeros', moving_range_initializer='ones',
                 **kwargs):
        self.momentum = momentum
        self.epsilon = epsilon
        self.beta_initializer = initializers.get(beta_initializer)
        self.gamma_initializer = initializers.get(gamma_initializer)
        self.moving_mean_initializer = initializers.get(moving_mean_initializer)
        self.moving_range_initializer = initializers.get(moving_range_initializer)
        super().__init__(**kwargs)

    def build(self, input_shape):
        dim = input_shape[-1]
        shape = (dim,)
        self.gamma = self.add_weight(shape=shape, name='gamma',
                                     initializer=self.gamma_initializer,
                                     trainable=True)
        self.beta = self.add_weight(shape=shape, name='beta',
                                    initializer=self.beta_initializer,
                                    trainable=True)
        self.moving_mean = self.add_weight(shape=shape, name='moving_mean',
                                           initializer=self.moving_mean_initializer,
                                           trainable=False)
        self.moving_range = self.add_weight(shape=shape, name='moving_range',
                                            initializer=self.moving_range_initializer,
                                            trainable=False)

    def call(self, inputs, training=None):
        input_shape = inputs.shape
        if not training:
            scale = (inputs - self.moving_mean) / (self.moving_range + self.epsilon)
            return self.gamma * scale + self.beta
        mean = tf.math.reduce_mean(inputs, axis=0)
        maxr = tf.math.reduce_max(inputs, axis=0)
        minr = tf.math.reduce_min(inputs, axis=0)
        range_diff = tf.math.subtract(maxr, minr)
        self.moving_mean = tf.math.add(self.momentum * self.moving_mean,
                                       (1 - self.momentum) * mean)
        self.moving_range = tf.math.add(self.momentum * self.moving_range,
                                        (1 - self.momentum) * range_diff)
        scale = tf.math.divide(tf.math.subtract(inputs, mean),
                               range_diff + self.epsilon)
        return tf.math.add(tf.math.multiply(self.gamma, scale), self.beta)

    def get_config(self):
        config = {
            'momentum': self.momentum,
            'epsilon': self.epsilon,
            'beta_initializer': initializers.serialize(self.beta_initializer),
            'gamma_initializer': initializers.serialize(self.gamma_initializer),
            'moving_mean_initializer': initializers.serialize(self.moving_mean_initializer),
            'moving_range_initializer': initializers.serialize(self.moving_range_initializer),
        }
        base_config = super(CustomBatchNormalization, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape
```

Below is the network:

```python
inp = Input(shape=(64,))
batch_norm_1 = CustomBatchNormalization()(inp)
densout = Dense(128, activation='linear')(batch_norm_1)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range(6):
    batch_norm_i = CustomBatchNormalization()(densout)
    densout = Dense(128, activation='linear')(batch_norm_i)
    densout = LeakyReLU(alpha=0.3)(densout)
batch_norm_out = CustomBatchNormalization()(densout)
out = Dense(64, activation='linear')(batch_norm_out)
inp_rh_cbn = tf.keras.models.Model(inp, out)
```

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Here is the full error log:

```
OperatorNotAllowedInGraphError            Traceback (most recent call last)
~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    841                 with auto_control_deps.AutomaticControlDependencies() as acd:
--> 842                   outputs = call_fn(cast_inputs, *args, **kwargs)
    843                   # Wrap Tensors in `outputs` in `tf.identity` to avoid

~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    236       if hasattr(e, 'ag_error_metadata'):
--> 237         raise e.ag_error_metadata.to_exception(e)
    238       else:

OperatorNotAllowedInGraphError: in converted code:

    :43 call
        if not training:
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:765 __bool__
        self._disallow_bool_casting()
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:534 _disallow_bool_casting
        self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:523 _disallow_in_graph_mode
        " this function with @tf.function.".format(task))

    OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
      1 inp = Input(shape=(64,))
----> 2 batch_norm_1 = CustomBatchNormalization()(inp)
      3 densout = Dense(128, activation='linear')(batch_norm_1)
      4 densout = LeakyReLU(alpha=0.3)(densout)
      5 for i in range(6):

~/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    852                     'dynamic. Pass `dynamic=True` to the class '
    853                     'constructor.\nEncountered error:\n"""\n' +
--> 854                     str(e) + '\n"""')
    855               else:
    856                 # We will use static shape inference to return symbolic tensors

TypeError: You are attempting to use Python control flow in a layer that was not declared to be dynamic. Pass `dynamic=True` to the class constructor.
Encountered error:
"""
in converted code:

    :43 call
        if not training:
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:765 __bool__
        self._disallow_bool_casting()
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:534 _disallow_bool_casting
        self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
    /home/ankitesh/miniconda3/envs/CbrainCustomLayer/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:523 _disallow_in_graph_mode
        " this function with @tf.function.".format(task))

    OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
"""
```
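The error above arises because `if not training:` evaluates a symbolic tensor as a Python bool in graph mode; Keras' built-in layers sidestep this by dispatching on the flag's type and only falling back to a graph conditional when the flag is symbolic. A plain-Python sketch of that dispatch idea (the `symbolic_cond` parameter is a hypothetical stand-in for `tf.cond`):

```python
def smart_cond(training, true_fn, false_fn, symbolic_cond=None):
    """Branch immediately when `training` is a plain Python bool;
    otherwise defer to a graph-friendly conditional (tf.cond in
    TensorFlow, stubbed out here as `symbolic_cond`)."""
    if isinstance(training, bool):
        return true_fn() if training else false_fn()
    return symbolic_cond(training, true_fn, false_fn)

# With a concrete bool, no symbolic machinery is needed:
print(smart_cond(True, lambda: 'batch stats', lambda: 'moving stats'))   # batch stats
print(smart_cond(False, lambda: 'batch stats', lambda: 'moving stats'))  # moving stats
```

This is only an illustration of the pattern, not the actual Keras helper; in the layer itself the equivalent fix is to branch with `tf.cond`/`tf_utils.smart_cond` instead of a bare `if`.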
tensorflowtensorflow | There aren't enough elements in this dataset for each shard to have at least one element (elements: 1, shards: 2) | Bug | Environment:
- Python 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0] on linux
- tensorflow-cpu 2.1.0
- tensorflow-datasets 2.0.0

Description: Raspbian GNU/Linux 10 (buster), using the code from the example. TF_CONFIG below:

```
{"cluster": {"worker": ["rpicluster1:2222", "rpicluster2:2222"]}, "task": {"type": "worker", "index": 0}}
{"cluster": {"worker": ["rpicluster1:2222", "rpicluster2:2222"]}, "task": {"type": "worker", "index": 1}}
```

The errors below seem to suggest sharding is not working:

```
2020-02-18 03:41:12.788327: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> rpicluster1:2222, 1 -> localhost:2222}
2020-02-18 03:41:12.789117: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:390] Started server with target: grpc://localhost:2222
2020-02-18 03:41:21.023441: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2222, 1 -> rpicluster2:2222}
2020-02-18 03:41:21.024802: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:390] Started server with target: grpc://localhost:2222
WARNING:tensorflow:`eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
WARNING:tensorflow:`eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
WARNING:tensorflow:`eval_strategy` is not passed in. No distribution strategy will be used for evaluation.
WARNING:tensorflow:`eval_strategy` is not passed in. No distribution strategy will be used for evaluation.
WARNING:tensorflow:`eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
WARNING:tensorflow:`eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
WARNING:tensorflow:`eval_strategy` is not passed in. No distribution strategy will be used for evaluation.
WARNING:tensorflow:`eval_strategy` is not passed in. No distribution strategy will be used for evaluation.
2020-02-18 03:41:34.450011: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]]
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 15 batches). You may need to use the repeat() function when building your dataset.
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 15 batches). You may need to use the repeat() function when building your dataset.
2020-02-18 03:41:35.775198: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Invalid argument: There aren't enough elements in this dataset for each shard to have at least one element (# elems = 1, # shards = 2). If you are using datasets with distribution strategy, consider setting the auto sharding policy to either DATA or OFF using the `experimental_distribute.auto_shard_policy` option of `tf.data.Options()`. [[{{node MultiDeviceIteratorGetNextFromShard}}]] [[RemoteCall]] [[IteratorGetNext]]
2020-02-18 03:41:35.795862: I tensorflow/core/profiler/lib/profiler_session.cc:225] Profiler session started.
Found cluster spec: {'worker': ['rpicluster1:2222', 'rpicluster2:2222']}
Found TF_CONFIG: {'cluster': {'worker': ['rpicluster1:2222', 'rpicluster2:2222']}, 'task': {'type': 'worker', 'index': 0}}
Creating strategy; number of workers: 2
Creating dataset inside scope
Creating model inside scope
Starting to fit model
Train for 5 steps
Epoch 1/3
2020-02-18 03:41:36.589112: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.
2020-02-18 03:41:39.502892: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-02-18 03:41:39.615315: E tensorflow/core/common_runtime/ring_alg.cc:279] Aborting RingReduce with Cancelled: RPC request was cancelled
2020-02-18 03:41:39.615446: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Cancelled: RPC request was cancelled
2020-02-18 03:41:39.615763: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at collective_ops.cc:253 : Cancelled: RPC request was cancelled
2020-02-18 03:41:39.615850: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Cancelled: RPC request was cancelled [[{{node CollectiveReduce}}]]
Found cluster spec: {'worker': ['rpicluster1:2222', 'rpicluster2:2222']}
Found TF_CONFIG: {'cluster': {'worker': ['rpicluster1:2222', 'rpicluster2:2222']}, 'task': {'type': 'worker', 'index': 1}}
Creating strategy; number of workers: 2
Creating dataset inside scope
Creating model inside scope
Starting to fit model
Train for 5 steps
Epoch 1/3
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 753, in on_start
    yield
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 397, in fit
    prefix='val_')
  File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 771, in on_epoch
    self.callbacks.on_epoch_end(epoch, epoch_logs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/callbacks.py", line 302, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/callbacks.py", line 990, in on_epoch_end
    self._save_model(epoch=epoch, logs=logs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/callbacks.py", line 1040, in _save_model
    self.model.save(filepath, overwrite=True)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/network.py", line 1008, in save
    signatures, options)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/saving/save.py", line 115, in save_model
    signatures, options)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/saving/saved_model/save.py", line 78, in save
    save_lib.save(model, filepath, signatures, options)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/saved_model/save.py", line 916, in save
    object_saver.save(utils_impl.get_variables_path(export_dir))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/training/tracking/util.py", line 1168, in save
    file_prefix=file_prefix_tensor, object_graph_tensor=object_graph_tensor)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/training/tracking/util.py", line 1116, in _save_cached_when_graph_building
    save_op = saver.save(file_prefix)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/training/saving/functional_saver.py", line 230, in save
    sharded_saves.append(saver.save(shard_prefix))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/training/saving/functional_saver.py", line 69, in save
    tensors.append(spec.tensor)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/training/saving/saveable_object.py", line 52, in tensor
    return self._tensor() if callable(self._tensor) else self._tensor
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/values.py", line 1252, in _tensor
    return strategy.extended.read_var(sync_on_read_variable)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/mirrored_strategy.py", line 769, in read_var
    return replica_local_var._get_cross_replica()  # pylint: disable=protected-access
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/values.py", line 1347, in _get_cross_replica
    self, axis=None)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/distribute_lib.py", line 808, in reduce
    return self._extended._reduce(reduce_op, value)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/distribute_lib.py", line 1449, in _reduce
    device_util.current() or "/device:CPU:0")
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/collective_all_reduce_strategy.py", line 528, in _reduce_to
    reduce_op, value, destinations=destinations)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/cross_device_ops.py", line 282, in reduce
    destinations)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/cross_device_ops.py", line 1038, in reduce_implementation
    all_reduced = self._batch_all_reduce(reduce_op, [per_replica_value])[0]
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/cross_device_ops.py", line 1118, in _batch_all_reduce
    dense_results = self._do_batch_all_reduce_dense(reduce_op, dense_values)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/cross_device_ops.py", line 1160, in _do_batch_all_reduce_dense
    communication_hint)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/distribute/cross_device_utils.py", line 368, in build_collective_reduce
    return collective_all_reduce()
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/def_function.py", line 638, in _call
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/function.py", line 1611, in _filtered_call
    self.captured_inputs)
  File "/usr/local
```
lib python3 7 dist package tensorflow core python eager function py line 1692 in call flat ctx args cancellation manager cancellation manager file usr local lib python3 7 dist package tensorflow core python eager function py line 545 in call ctx ctx file usr local lib python3 7 dist package tensorflow core python eager execute py line 67 in quick execute six raise from core status to exception e code message none file line 3 in raise from tensorflow python framework error impl cancellederror rpc request be cancel node collectivereduce define at usr lib python3 7 contextlib py 130 op inference collective all reduce 1457 function call stack collective all reduce during handling of the above exception another exception occur traceback most recent call last file main 2 py line 98 in multi worker model fit x train dataset epoch 3 step per epoch 5 callback callback file usr local lib python3 7 dist package tensorflow core python keras engine training py line 819 in fit use multiprocesse use multiprocesse file usr local lib python3 7 dist package tensorflow core python keras engine training distribute py line 790 in fit args kwargs file usr local lib python3 7 dist package tensorflow core python keras engine training distribute py line 777 in wrapper mode dc coordinatormode independent worker file usr local lib python3 7 dist package tensorflow core python distribute distribute coordinator py line 853 in run distribute coordinator task i d session config rpc layer file usr local lib python3 7 dist package tensorflow core python distribute distribute coordinator py line 360 in run single worker return worker fn strategy file usr local lib python3 7 dist package tensorflow core python keras engine training distribute py line 772 in worker fn return method model kwargs file usr local lib python3 7 dist package tensorflow core python keras engine training v2 py line 397 in fit prefix val file usr lib python3 7 contextlib py line 130 in exit self gen throw type value traceback 
file usr local lib python3 7 dist package tensorflow core python keras engine training v2 py line 757 in on start self callback call end hook mode file usr local lib python3 7 dist package tensorflow core python keras callbacks py line 262 in call end hook self on train end file usr local lib python3 7 dist package tensorflow core python keras callbacks py line 379 in on train end callback on train end log file usr local lib python3 7 dist package tensorflow core python keras callbacks py line 966 in on train end self training state delete backup file usr local lib python3 7 dist package tensorflow core python keras distribute multi worker training state py line 173 in delete backup track autotrackable delattr self model ckpt save epoch file usr local lib python3 7 dist package tensorflow core python training tracking track py line 94 in delattr super autotrackable self delattr name attributeerror ckpt save epoch warn tensorflow from usr local lib python3 7 dist package tensorflow core python op resource variable op py 1786 call baseresourcevariable init from tensorflow python op resource variable op with constraint be deprecate and will be remove in a future version instruction for update if use keras pass constraint argument to layer warn tensorflow from usr local lib python3 7 dist package tensorflow core python op resource variable op py 1786 call baseresourcevariable init from tensorflow python op resource variable op with constraint be deprecate and will be remove in a future version instruction for update if use keras pass constraint argument to layer 2020 02 18 03 41 41 376299 w tensorflow core common runtime eager context cc 349 unable to destroy server object so release instead server don t support clean shutdown srun error rpicluster2 task 1 exit with exit code 1 2020 02 18 03 41 45 023032 e tensorflow core common runtime ring alg cc 279 abort ringreduce with invalid argument derive there aren t enough element in this dataset for each shard to have at 
least one element elem 1 shard 2 if you be use dataset with distribution strategy consider set the auto sharde policy to either datum or off use the experimental distribute auto shard policy optionof tf datum option node multideviceiteratorgetnextfromshard remotecall iteratorgetnext 2020 02 18 03 41 45 023246 w tensorflow core common runtime base collective executor cc 217 basecollectiveexecutor startabort invalid argument derive there aren t enough element in this dataset for each shard to have at least one element elem 1 shard 2 if you be use dataset with distribution strategy consider set the auto sharde policy to either datum or off use the experimental distribute auto shard policy optionof tf datum option node multideviceiteratorgetnextfromshard remotecall iteratorgetnext 2020 02 18 03 41 45 023715 w tensorflow core framework op kernel cc 1655 op require fail at collective op cc 253 invalid argument derive there aren t enough element in this dataset for each shard to have at least one element elem 1 shard 2 if you be use dataset with distribution strategy consider set the auto sharde policy to either datum or off use the experimental distribute auto shard policy optionof tf datum option node multideviceiteratorgetnextfromshard remotecall iteratorgetnext 2020 02 18 03 41 45 024006 w tensorflow core common runtime base collective executor cc 217 basecollectiveexecutor startabort invalid argument derive there aren t enough element in this dataset for each shard to have at least one element elem 1 shard 2 if you be use dataset with distribution strategy consider set the auto sharde policy to either datum or off use the experimental distribute auto shard policy optionof tf datum option node multideviceiteratorgetnextfromshard remotecall iteratorgetnext collectivereduce traceback most recent call last file main 2 py line 98 in multi worker model fit x train dataset epoch 3 step per epoch 5 callback callback file usr local lib python3 7 dist package tensorflow core 
python keras engine training py line 819 in fit use multiprocesse use multiprocesse file usr local lib python3 7 dist package tensorflow core python keras engine training distribute py line 790 in fit args kwargs file usr local lib python3 7 dist package tensorflow core python keras engine training distribute py line 777 in wrapper mode dc coordinatormode independent worker file usr local lib python3 7 dist package tensorflow core python distribute distribute coordinator py line 853 in run distribute coordinator task i d session config rpc layer file usr local lib python3 7 dist package tensorflow core python distribute distribute coordinator py line 360 in run single worker return worker fn strategy file usr local lib python3 7 dist package tensorflow core python keras engine training distribute py line 772 in worker fn return method model kwargs file usr local lib python3 7 dist package tensorflow core python keras engine training v2 py line 397 in fit prefix val file usr lib python3 7 contextlib py line 130 in exit self gen throw type value traceback file usr local lib python3 7 dist package tensorflow core python keras engine training v2 py line 771 in on epoch self callback on epoch end epoch epoch log file usr local lib python3 7 dist package tensorflow core python keras callbacks py line 302 in on epoch end callback on epoch end epoch log file usr local lib python3 7 dist package tensorflow core python keras callbacks py line 990 in on epoch end self save model epoch epoch log log file usr local lib python3 7 dist package tensorflow core python keras callbacks py line 1040 in save model self model save filepath overwrite true file usr local lib python3 7 dist package tensorflow core python keras engine network py line 1008 in save signature option file usr local lib python3 7 dist package tensorflow core python keras save save py line 115 in save model signature option file usr local lib python3 7 dist package tensorflow core python keras save save model 
save py line 78 in save save lib save model filepath signature option file usr local lib python3 7 dist package tensorflow core python save model save py line 916 in save object saver save util impl get variable path export dir file usr local lib python3 7 dist package tensorflow core python training track util py line 1168 in save file prefix file prefix tensor object graph tensor object graph tensor file usr local lib python3 7 dist package tensorflow core python training track util py line 1116 in save cache when graph building save op saver save file prefix file usr local lib python3 7 dist package tensorflow core python training save functional saver py line 230 in save sharde save append saver save shard prefix file usr local lib python3 7 dist package tensorflow core python training save functional saver py line 69 in save tensor append spec tensor file usr local lib python3 7 dist package tensorflow core python training save saveable object py line 52 in tensor return self tensor if callable self tensor else self tensor file usr local lib python3 7 dist package tensorflow core python distribute value py line 1252 in tensor return strategy extend read var sync on read variable file usr local lib python3 7 dist package tensorflow core python distribute mirror strategy py line 769 in read var return replica local var get cross replica pylint disable protect access file usr local lib python3 7 dist package tensorflow core python distribute value py line 1347 in get cross replica self axis none file usr local lib python3 7 dist package tensorflow core python distribute distribute lib py line 808 in reduce return self extend reduce reduce op value pylint disable protect access file usr local lib python3 7 dist package tensorflow core python distribute distribute lib py line 1449 in reduce device util current or device cpu 0 0 file usr local lib python3 7 dist package tensorflow core python distribute collective all reduce strategy py line 528 in reduce to reduce 
op value destination destination file usr local lib python3 7 dist package tensorflow core python distribute cross device op py line 282 in reduce destination file usr local lib python3 7 dist package tensorflow core python distribute cross device op py line 1038 in reduce implementation all reduce self batch all reduce reduce op per replica value 0 file usr local lib python3 7 dist package tensorflow core python distribute cross device op py line 1118 in batch all reduce dense result self do batch all reduce dense reduce op dense value file usr local lib python3 7 dist package tensorflow core python distribute cross device op py line 1160 in do batch all reduce dense i d communication hint file usr local lib python3 7 dist package tensorflow core python distribute cross device util py line 368 in build collective reduce return collective all reduce file usr local lib python3 7 dist package tensorflow core python eager def function py line 568 in call result self call args kwd file usr local lib python3 7 dist package tensorflow core python eager def function py line 638 in call return self concrete stateful fn filter call canon args canon kwd pylint disable protect access file usr local lib python3 7 dist package tensorflow core python eager function py line 1611 in filter call self capture input file usr local lib python3 7 dist package tensorflow core python eager function py line 1692 in call flat ctx args cancellation manager cancellation manager file usr local lib python3 7 dist package tensorflow core python eager function py line 545 in call ctx ctx file usr local lib python3 7 dist package tensorflow core python eager execute py line 67 in quick execute six raise from core status to exception e code message none file line 3 in raise from tensorflow python framework error impl invalidargumenterror derive there aren t enough element in this dataset for each shard to have at least one element elem 1 shard 2 if you be use dataset with distribution strategy 
consider set the auto sharde policy to either datum or off use the experimental distribute auto shard policy optionof tf datum option node multideviceiteratorgetnextfromshard remotecall iteratorgetnext collectivereduce op inference collective all reduce 1477 function call stack collective all reduce 2020 02 18 03 41 47 749541 w tensorflow core common runtime eager context cc 349 unable to destroy server object so release instead server don t support clean shutdown 1 5 eta 22ssrun error rpicluster1 task 0 exit with exit code 1 |
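The InvalidArgumentError in the log above comes from tf.data auto-sharding: with two workers, each worker keeps every second element of the dataset, so a dataset holding a single element leaves one worker with nothing and training aborts. A minimal plain-Python sketch of that sharding rule (the `shard` helper is hypothetical; it mimics the documented behavior of `tf.data.Dataset.shard(num_shards, index)`):

```python
def shard(elements, num_shards, index):
    # Mimics tf.data's Dataset.shard(num_shards, index): keep every
    # num_shards-th element of the input, starting at position `index`.
    return elements[index::num_shards]

dataset = ["batch0"]  # only one element, as in the failing run (# elems = 1)

worker0 = shard(dataset, num_shards=2, index=0)  # ["batch0"]
worker1 = shard(dataset, num_shards=2, index=1)  # [] -> "not enough elements" error
```

As the error message itself suggests, the usual workaround is to set the auto-shard policy to DATA or OFF via the `experimental_distribute.auto_shard_policy` field of `tf.data.Options()` before applying `dataset.with_options(...)`, or to make the dataset large enough that every shard gets at least one element.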
tensorflow/tensorflow | Documentation is very unclear; it lacks formulas in many APIs | Bug | For example, in SparseCategoricalAccuracy, the words in this API do not help to give a clear picture of what it is doing. Why not give a formula? A formula associated with an example would be clear enough for this API.
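The formula the reporter is asking for is short: SparseCategoricalAccuracy takes the argmax of each prediction vector, compares it to the integer label, and averages the matches. A sketch in plain Python, independent of TensorFlow (the function name is illustrative, not the library's API):

```python
def sparse_categorical_accuracy(y_true, y_pred):
    """Fraction of samples where argmax(y_pred[i]) == y_true[i].

    y_true: list of integer class labels.
    y_pred: list of per-class score vectors (one per sample).
    """
    correct = sum(
        1 for label, scores in zip(y_true, y_pred)
        # argmax over the score vector: index of the largest score
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return correct / len(y_true)

# Sample 0: argmax([0.1, 0.9, 0.0]) = 1 == label 1 -> correct
# Sample 1: argmax([0.05, 0.95, 0.0]) = 1 != label 2 -> wrong
sparse_categorical_accuracy([1, 2], [[0.1, 0.9, 0.0], [0.05, 0.95, 0.0]])  # 0.5
```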