tensorflow/tensorflow
TFLite: assertion failure if the shape of a dynamic output tensor changes between invokes
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.13.1 (CPU)
- Python version: 3.6.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Moved from #26248. If an op depends on (1) a dynamic tensor and (2) a normal intermediate tensor, the computation of (2) is performed before (1), and the dims of (1) change between interpreter invokes, then an assertion failure is triggered:

```
Error: tensorflow/lite/simple_memory_arena.cc:100 erase alloc count 1 0 1
```

A unit test is attached in #26248. On my box it is also reproducible with the Python snippet below, which generates a model dynamically. However, my reviewer could only reproduce the issue with the unit test, not with the script, so the script may be missing something. To help debugging, a TFLite model file with which I can reproduce the issue is uploaded: fail.tar.gz

```python
import numpy as np
import tensorflow as tf

I = tf.placeholder(name='I', dtype=tf.float32, shape=(1,))
p = tf.placeholder(name='p', dtype=tf.int32, shape=(1, 2))
a = tf.placeholder(name='a', dtype=tf.float32, shape=(1,))
n = tf.negative(a)
o = tf.add(tf.pad(I, p), n)

test_input = np.array([0], dtype=np.float32)
test_pad = np.array([[2, 2]], dtype=np.int32)
test_pad_2 = np.array([[4, 4]], dtype=np.int32)
test_a = np.array([1], dtype=np.float32)

def test_tf():
    with tf.Session() as sess:
        out = sess.run(o, {I: test_input, p: test_pad, a: test_a})
        assert out.shape == (5,)
        out = sess.run(o, {I: test_input, p: test_pad_2, a: test_a})
        assert out.shape == (9,)

test_tf()

# Convert
with tf.Session() as sess:
    conv = tf.lite.TFLiteConverter.from_session(sess, [I, p, a], [o])
    lite_model_bytes = conv.convert()

with open('fail.tflite', mode='wb') as fp:
    fp.write(lite_model_bytes)

def test_tflite():
    interp = tf.lite.Interpreter(model_content=lite_model_bytes)
    # The tensor indices below are just hard-coded from the printed details.
    print(interp.get_input_details())
    print(interp.get_output_details())
    interp.allocate_tensors()
    interp.set_tensor(4, test_input)
    interp.set_tensor(5, test_pad)
    interp.set_tensor(3, test_a)
    interp.invoke()
    assert interp.get_tensor(0).shape == (5,)
    interp.set_tensor(5, test_pad_2)
    interp.invoke()
    assert interp.get_tensor(0).shape == (9,)

test_tflite()
```
tensorflow/tensorflow
Tutorial: TF 2.0 text generation with `keras.layers.LSTM` fails to generate sensible text
Bug
**System information**
- TensorFlow version: 2.0.0-alpha0
- Doc link: Generate text

**Describe the documentation issue**

After 10 epochs (the default), the model generates unreadable text. This was through running the default Colab notebook via the docs, as loaded from GitHub.

```py
print(generate_text(model, start_string=u"ROMEO: "))
```

Output:

```sh
ROMEO: vinlnkenkjvenvjnkj3nkjkenqukcjukenqenvenjvenknqkjjnkjqnk
zzzzzzzzzzz33yywk3jvayyyy3njv3nkjvjqnukkenjnvajjnkkxjjnkjkkjjjnkyyyjjnujnukc
ujvenkevbwenvqnkjjusjhjjjnkdenvkenvqvqvkenu
ujhenjjnjnxqgdvxjukaqchenkyyyjj3nvjvmanqnkjqkenvenvjnnjnhjqukenxenkenv3njnumensenvvenvenvanqnvjqkqvhmjjnjnjnyukenjvjnknvanukevxx3nxukenvqvujken3nkjhjukenx
qukenqkjjnkjnnukenkenvnqnvjkyjjnujhenvjhyjjvanjjkyv3nvywqjk3kjvjqukkcjkynvjqkeqnkeennkjqjjnkyv3nkjjnuqdnkjkzzxj3nv3nqnukenkeqnkenv3nkyvqvanchenkjjnkyvjyan
yy33yyvjxmanqnkenvnvavdnvankjqnlyvqvhivajkjqukenkenkjjqukejnnqskcjukenjhiven3nenvuvenvnvanjnvayyvenqvkejv3nkjx3nkyjjxqvnvhenvanjkenjjnn
vqujkhenvqchinqevk
jvjhyyyywqukemkyvjjnqvqvkajenjqqkkenusk
jjhev
nvenv3nvjnjvenvenkyvjvmayb
yyvjjnjjnukcukexxx3nvaqyeyyyyy3y3vjyyyvjmjnjkk3jjnkkyv3njnkyvqvjqnkekycjukyv3nvevjkyyyjjjjukcendnzzzzrnuvenvvqlnjvakkenkjnneqkejnnkyyv
```

Training output (w/ loss):

```py
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
```

```sh
Epoch 1/10
172/172 - 37s 213ms/step - loss: 2.6031
Epoch 2/10
172/172 - 33s 190ms/step - loss: 1.9073
Epoch 3/10
172/172 - 34s 198ms/step - loss: 1.6539
Epoch 4/10
172/172 - 33s 194ms/step - loss: 1.5161
Epoch 5/10
172/172 - 34s 196ms/step - loss: 1.4312
Epoch 6/10
172/172 - 33s 189ms/step - loss: 1.3695
Epoch 7/10
172/172 - 33s 190ms/step - loss: 1.3196
Epoch 8/10
172/172 - 34s 199ms/step - loss: 1.2751
Epoch 9/10
172/172 - 34s 196ms/step - loss: 1.2337
Epoch 10/10
172/172 - 34s 196ms/step - loss: 1.1941
```

- I've validated that the source data on GCS is OK.
- The char2idx mapping is correct and reversible as expected; no typos.
- Reducing the temperature: predictions are still invalid; as expected, they aren't really close to what we attempted to train.
- Running for 30 epochs: loss 0.6435; continuing to debug.

Far from an expert, but filing this as a heads-up.
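For context on the temperature knob mentioned above: the tutorial divides logits by a temperature before sampling. A NumPy sketch (function name and seed handling are mine, not from the notebook) of how temperature reshapes the sampling distribution:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, seed=0):
    # Lower temperature sharpens the distribution toward the argmax;
    # higher temperature flattens it toward uniform.
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

At very low temperatures this almost always returns the index of the largest logit, so if low-temperature output is still gibberish the problem is in the learned distribution, not the sampling step.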
tensorflow/tensorflow
ModuleNotFoundError: No module named 'tensorflow.compat.v2'
Bug
I was going through the Jupyter notebooks on TFP that are associated with the TF Dev Summit. Running the line

```python
import tensorflow.compat.v2 as tf
```

returns the error:

```
ModuleNotFoundError: No module named 'tensorflow.compat.v2'
```

Running `tf.VERSION` shows it's 1.13. I'm assuming this is related to not having TF 2.0 installed and only having TF 1.13 installed. Is a separate, second installation of TF 2.0 needed to `import tensorflow.compat.v2`, or can both be installed into the same virtual environment? Just want to make sure installing both alongside each other won't break either or both.

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:

```
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
```

Output: `b'unknown' 1.13.1`
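A quick way to check whether a submodule like `tensorflow.compat.v2` is importable in the current environment, without crashing a notebook, is `importlib.util.find_spec`. This helper (name mine, for illustration) returns a boolean instead of raising:

```python
import importlib.util

def has_module(name):
    # True if `name` (possibly dotted) can be imported in this environment.
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is itself absent.
        return False
```

For example, `has_module("tensorflow.compat.v2")` would be False in a TF 1.13 environment and True once a release that provides `compat.v2` is installed.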
tensorflow/tensorflow
TF 2.0 API Docs: tf.zeros_like
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Raises: list and define. No raises list.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
TF 2.0 API Docs: tf.zeros_initializer
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Raises: list and define. No raises list.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
TF 2.0 API Docs: tf.zeros
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Visuals: if applicable. No visuals are included.
- Raises: list and define. No raises list.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
TF 2.0 API Docs: tf.unique
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Visuals: if applicable. No visuals are included.
- Raises: list and define. No raises list.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
TF 2.0 API Docs: tf.unstack
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Visuals: if applicable. No visuals are included.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
TF 2.0 API Docs: tf.argsort
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Usage example: no usage example is provided.
- Visuals: if applicable. No visuals are included.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
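For the missing usage example, the semantics are the same as NumPy's `argsort`: the op returns the indices that would sort the input. A runnable NumPy analogue (not the TF docs' wording, just an illustration of the behavior a doc example could show):

```python
import numpy as np

values = np.array([3.0, 1.0, 5.0, 2.0])
order = np.argsort(values)          # indices that would sort `values`
assert order.tolist() == [1, 3, 0, 2]
assert values[order].tolist() == [1.0, 2.0, 3.0, 5.0]
```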
tensorflow/tensorflow
TF 2.0 API Docs: tf.math.argmin
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Usage example: no usage example is provided.
- Visuals: if applicable. No visuals are included.
- Raises: list and define. No raises list.

Much like #26530, documentation for `tf.math.argmin` is created from a generated file (`python/ops/gen_math_ops.py`). A link to the file that generates `python/ops/gen_math_ops.py` would be handy for users. Related files to be updated: `tensorflow/core/api_def/base_api/api_def_ArgMin.pbtxt`, `tensorflow/core/ops/math_ops.cc`.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
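For the missing usage example, the behavior matches NumPy's `argmin`: the index of the smallest entry along the given axis. A runnable NumPy analogue (illustrative only, not the eventual doc text):

```python
import numpy as np

x = np.array([[4, 1, 9],
              [2, 8, 0]])
assert np.argmin(x, axis=1).tolist() == [1, 2]    # per-row index of the minimum
assert np.argmin(x, axis=0).tolist() == [1, 0, 1]  # per-column index of the minimum
```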
tensorflow/tensorflow
TensorFlow official website tutorial error
Bug
**System information**
- TensorFlow version:
- Doc link: "Preprocess the data"

**Describe the documentation issue**
`plt.show()` is missing.
tensorflow/tensorflow
TF 2.0 API Docs: tf.math.argmax
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: doc_template)

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- Usage example: no usage example is provided.
- Visuals: if applicable. No visuals are included.
- Raises: list and define. No raises list.

Much like #25802, documentation for `tf.math.argmax` is created from a generated file (`python/ops/gen_math_ops.py`). A link to the file that generates `python/ops/gen_math_ops.py` would be handy for users. Related files to be updated: `tensorflow/core/api_def/base_api/api_def_ArgMax.pbtxt`, `tensorflow/core/ops/math_ops.cc`.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
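For the missing usage example, the behavior matches NumPy's `argmax`: the index of the largest entry along the given axis. A runnable NumPy analogue (illustrative only, not the eventual doc text):

```python
import numpy as np

x = np.array([[1, 9, 3],
              [4, 5, 6]])
assert np.argmax(x, axis=1).tolist() == [1, 2]    # per-row index of the maximum
assert np.argmax(x, axis=0).tolist() == [1, 0, 1]  # per-column index of the maximum
```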
tensorflow/tensorflow
Failed to import TRTEngineOp
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1
- Python version: 3.6.3
- CUDA/cuDNN version: 9.0
- GPU model and memory: Tesla P4

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
`python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`

**Describe the current behavior**

Failed to load a TensorRT (trt_convert) saved model:

```python
meta_graph_def = tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], model_path)
```

```
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 269, in load
    return loader.load(sess, tags, import_scope, **saver_kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 420, in load
    **saver_kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 350, in load_graph
    meta_graph_def, import_scope=import_scope, **saver_kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1457, in _import_meta_graph_with_return_elements
    **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 399, in import_graph_def
    _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 159, in _RemoveDefaultAttrs
    op_def = op_dict[node.op]
KeyError: 'TRTEngineOp'
```

But when I add an extra import statement, the original code works: I must `import tensorflow.contrib.tensorrt as trt`, although this is an unused import for my code.

**Describe the expected behavior**

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem:

```python
tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], tensorrt_saved_model_path)
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
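The reason the "unused" import fixes the load is a common registration-by-import pattern: importing the module registers the op with the runtime as a side effect, so a graph loader that looks ops up by name can find `TRTEngineOp`. A toy, hypothetical registry (no real TensorFlow APIs involved) illustrating the mechanism:

```python
# Hypothetical op registry, for illustration only.
OP_REGISTRY = {}

def register_op(name):
    # Decorator that records the kernel under `name` at import time.
    def decorator(fn):
        OP_REGISTRY[name] = fn
        return fn
    return decorator

@register_op("TRTEngineOp")
def trt_engine_op(*inputs):
    # Stand-in kernel; the real one lives in the TensorRT plugin library.
    return inputs

def lookup_op(name):
    # A loader that resolves ops by name raises KeyError('TRTEngineOp')
    # if the registering module was never imported.
    return OP_REGISTRY[name]
```

In this sketch, if the module containing the `@register_op` line is never imported, `lookup_op("TRTEngineOp")` raises `KeyError` exactly like the traceback above.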
tensorflow/tensorflow
Broken link in TF 2.0 Alpha "Distributed training in TensorFlow" documentation
Bug
**System information**
- TensorFlow version: tf-2.0-alpha
- Doc link: Examples and tutorials

**Describe the documentation issue**
The second link ("tutorial") in the text shown below, in the "Examples and tutorials" section, is missing:

> 2. Tutorial to train Fashion MNIST with TPUStrategy (currently uses `disable_eager_execution`)

[image]
tensorflow/tensorflow
IDE cannot resolve module tf.keras
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.10
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): 2.0 alpha
- Python version: 3.6

**Describe the current behavior**
After `import tensorflow as tf`, PyCharm cannot resolve the module `tf.keras` and reports an error: Cannot find reference 'keras' in '__init__.py'. But when running the program, everything works well.

**Describe the expected behavior**
`tf.keras` imports successfully, with PyCharm autocompletion.

**Code to reproduce the issue**
```python
import tensorflow as tf
```

**Other info / logs**
It seems that there is no import statement for the keras module in `__init__.py` of the tensorflow package. When I add `from tensorflow.python import keras` to `__init__.py` manually, everything works well. Maybe there is some problem with package importing after keras was moved from `api` to `python`.
tensorflow/tensorflow
TF 2.0 API documentation issue: tf.lookup.StaticHashTable usage example is incorrect
Bug
**System information**
- TensorFlow version: 2.0 preview
- Doc link: tf.lookup.StaticHashTable (class `StaticHashTable`)

**Describe the documentation issue**
The example usage describes using `StaticHashTable.init.run()`, which is not possible since there is no `init` attribute, and it's missing from the documentation. This comment (L321 in `lookup_ops.py`) indicates that it's definitely not the correct way to initialize the table in TF 2.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?

I don't think so, because I really have no idea what the correct way to initialize the table is. I've tried a few things, as shown here (file `tokenizer_layer_tf2.py`, L50), but to no avail, usually ending in a `FailedPreconditionError`.
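For readers unfamiliar with the op, the semantics being documented are simple: a table whose key/value set is fixed at construction and whose lookups of missing keys return a default value. A plain-Python analogue (class name mine; this is not the TF API, just the contract the docs should describe):

```python
class StaticLookupTable:
    # Analogue of a static hash table: immutable after construction,
    # missing keys map to a default value instead of raising.
    def __init__(self, keys, values, default_value):
        self._table = dict(zip(keys, values))
        self._default = default_value

    def lookup(self, key):
        return self._table.get(key, self._default)
```

For example, `StaticLookupTable(["a", "b"], [1, 2], -1).lookup("z")` yields the default `-1`. The open question in this issue is only how the real TF 2 table gets initialized, not what lookup does.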
tensorflow/tensorflow
tflite_convert: Unknown layer: VladPoole
Bug
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS X 10.14.3
- TensorFlow installed from (source or binary): source
- TensorFlow version (or github SHA if from source): 1.13.1

Provide the text output from `tflite_convert`:

```
ValueError: Unknown layer: VladPoole
```

Also, please include a link to a GraphDef or the model if possible.

**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Command line:

```
tflite_convert --output_file=weights_keras.tflite --keras_model_file=weights_keras.h5
```

Result:

```
Traceback (most recent call last):
  File "/anaconda3/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 442, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 438, in run_main
    _convert_model(tflite_flags)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 122, in _convert_model
    converter = _get_toco_converter(flags)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 109, in _get_toco_converter
    return converter_fn(**converter_kwargs)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 370, in from_keras_model_file
    keras_model = _keras.models.load_model(model_file)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/saving.py", line 234, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/saving.py", line 324, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py", line 74, in deserialize
    printable_module_name='layer')
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
    list(custom_objects.items())))
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1263, in from_config
    process_layer(layer_data)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1249, in process_layer
    layer = deserialize_layer(layer_data, custom_objects=custom_objects)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py", line 74, in deserialize
    printable_module_name='layer')
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 181, in deserialize_keras_object
    config, module_objects, custom_objects, printable_module_name)
  File "/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 166, in class_and_config_for_serialized_keras_object
    raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
ValueError: Unknown layer: VladPoole
```
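The traceback shows deserialization failing because `VladPoole` is evidently a custom layer class that is not in Keras's built-in registry, and `tflite_convert` gives no way to pass `custom_objects`. A loose, hypothetical sketch (names mine) of the lookup the deserializer is performing, which shows why a custom-objects mapping is the missing piece:

```python
def resolve_layer_class(class_name, builtins, custom_objects=None):
    # Resolve a serialized layer by name: built-in registry first,
    # then the caller-supplied custom_objects mapping.
    custom_objects = custom_objects or {}
    if class_name in builtins:
        return builtins[class_name]
    if class_name in custom_objects:
        return custom_objects[class_name]
    raise ValueError('Unknown layer: ' + class_name)
```

Loading the model in Python with the custom class supplied (rather than via the CLI) sidesteps the lookup failure in this sketch; whether the real converter can be fed the class is the question this issue raises.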
tensorflow/tensorflow
TF 2.0 API Docs: tf.math.atan and tf.math.asin
Bug
**System information**
- TensorFlow version: 2.0
- Doc link: tf.math.atan, tf.math.asin

**Describe the documentation issue**
Documentation for `tf.math.asin` and `tf.math.atan` is created from a generated file (`python/ops/gen_math_ops.py`). The documentation can be modified by editing the appropriate `.pbtxt` files within the `tensorflow/tensorflow/core/api_def/base_api` directory of the source repository. Both of these math operations could use a clearer description, usage examples, and a list specifying the errors raised by these operations.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue? Definitely :smile:
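For the requested usage examples, the math is the standard inverse-trig definition: `asin` and `atan` invert `sin` and `tan` on their principal ranges. A runnable stdlib analogue (illustrative only, not the eventual doc text):

```python
import math

# asin/atan are the inverses of sin/tan on their principal ranges
assert abs(math.asin(1.0) - math.pi / 2) < 1e-12   # asin(1) = pi/2
assert abs(math.atan(1.0) - math.pi / 4) < 1e-12   # atan(1) = pi/4
assert abs(math.sin(math.asin(0.5)) - 0.5) < 1e-12  # round-trip
```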
tensorflow/tensorflow
Using a for-loop over input images in a tf.data.Dataset: TypeError: zip argument #1 must support iteration
Bug
I tried using `tf.data.Dataset` to load my own images from a local directory, but when I want to feed batch-sized data to the model I get a `TypeError: zip argument #1 must support iteration`. Here is my code:

```python
import tensorflow as tf
tf.enable_eager_execution()
import os
import time
import numpy as np
import matplotlib.pyplot as plt
import PIL
from IPython.display import clear_output

train_dataset = tf.data.Dataset.list_files(image_path + 'train_imgs/*.png')
train_dataset = train_dataset.shuffle(200)
train_dataset = train_dataset.map(lambda x: load_image(x))
train_dataset = train_dataset.batch(16)

val_dataset = tf.data.Dataset.list_files(image_path + 'val_imgs/*.png')
val_dataset = val_dataset.shuffle(200)
val_dataset = val_dataset.map(lambda x: load_image(x))
val_dataset = val_dataset.batch(16)

test_dataset = tf.data.Dataset.list_files(image_path + 'test_imgs/*.png')
test_dataset = test_dataset.shuffle(200)
test_dataset = test_dataset.map(lambda x: load_image(x))
test_dataset = test_dataset.batch(16)

# train/test dataset batch shape: (16, 256, 256, 1)

class Downsample(tf.keras.Model):

    def __init__(self, filters, size, apply_batchnorm=True):
        super(Downsample, self).__init__()
        self.apply_batchnorm = apply_batchnorm
        initializer = tf.random_normal_initializer(0., 0.02)
        self.conv1 = tf.keras.layers.Conv2D(filters, (size, size), strides=2,
                                            padding='same',
                                            kernel_initializer=initializer)
        if self.apply_batchnorm:
            self.batchnorm = tf.keras.layers.BatchNormalization()

    def call(self, x, training):
        x = self.conv1(x)
        if self.apply_batchnorm:
            x = self.batchnorm(x, training=training)
        x = tf.nn.leaky_relu(x)
        return x

class Upsample(tf.keras.Model):

    def __init__(self, filters, size, apply_dropout=False):
        super(Upsample, self).__init__()
        self.apply_dropout = apply_dropout
        initializer = tf.random_normal_initializer(0., 0.02)
        self.up_conv = tf.keras.layers.Conv2DTranspose(filters, (size, size),
                                                       strides=2,
                                                       padding='same',
                                                       kernel_initializer=initializer,
                                                       use_bias=False)
        self.batchnorm = tf.keras.layers.BatchNormalization()
        if self.apply_dropout:
            self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, x, training):
        x = self.up_conv(x)
        x = self.batchnorm(x, training=training)
        if self.apply_dropout:
            x = self.dropout(x, training=training)
        x = tf.nn.relu(x)
        return x

class Network(tf.keras.Model):

    def __init__(self):
        super(Network, self).__init__()
        initializer = tf.random_normal_initializer(0., 0.02)
        self.down1 = Downsample(64, 4, apply_batchnorm=False)
        self.down2 = Downsample(128, 4)
        self.down3 = Downsample(256, 4)
        self.down4 = Downsample(512, 4)
        self.down5 = Downsample(512, 4)
        self.down6 = Downsample(512, 4)
        self.down7 = Downsample(512, 4)
        self.down8 = Downsample(512, 4)
        self.up1 = Upsample(512, 4, apply_dropout=True)
        self.up2 = Upsample(512, 4, apply_dropout=True)
        self.up3 = Upsample(512, 4, apply_dropout=True)
        self.up4 = Upsample(512, 4)
        self.up5 = Upsample(256, 4)
        self.up6 = Upsample(128, 4)
        self.up7 = Upsample(64, 4)
        self.last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, (4, 4),
                                                    strides=2,
                                                    padding='same',
                                                    kernel_initializer=initializer)

    @tf.contrib.eager.defun
    def call(self, x, training):
        # x shape == (bs, 256, 256, 3)
        x1 = self.down1(x, training=training)    # (bs, 128, 128, 64)
        x2 = self.down2(x1, training=training)   # (bs, 64, 64, 128)
        x3 = self.down3(x2, training=training)   # (bs, 32, 32, 256)
        x4 = self.down4(x3, training=training)   # (bs, 16, 16, 512)
        x5 = self.down5(x4, training=training)   # (bs, 8, 8, 512)
        x6 = self.down6(x5, training=training)   # (bs, 4, 4, 512)
        x7 = self.down7(x6, training=training)   # (bs, 2, 2, 512)
        x8 = self.down8(x7, training=training)   # (bs, 1, 1, 512)
        x9 = self.up1(x8, x7, training=training)    # (bs, 2, 2, 1024)
        x10 = self.up2(x9, x6, training=training)   # (bs, 4, 4, 1024)
        x11 = self.up3(x10, x5, training=training)  # (bs, 8, 8, 1024)
        x12 = self.up4(x11, x4, training=training)  # (bs, 16, 16, 1024)
        x13 = self.up5(x12, x3, training=training)  # (bs, 32, 32, 512)
        x14 = self.up6(x13, x2, training=training)  # (bs, 64, 64, 256)
        x15 = self.up7(x14, x1, training=training)  # (bs, 128, 128, 128)
        x16 = self.last(x15)                        # (bs, 256, 256, 3)
        x16 = tf.nn.tanh(x16)
        return x16

def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()
        for input_image in dataset:
            print(input_image.shape)
            with tf.GradientTape() as net_tape:
                output = net(input_image, training=True)
                loss = net_loss(input_image, output)
            net_gradients = net_tape.gradient(loss, net.variables)
            optimizer.apply_gradients(zip(net_gradients, net.variables))
        if (epoch + 1) % 1 == 0:
            clear_output(wait=True)
            for inp, tar in test_dataset.take(1):
                generate_images(net, inp, tar)
        # saving (checkpoint) every 20 epochs
        if (epoch + 1) % 20 == 0:
            checkpoint.save(file_prefix=checkpoint_prefix)
        print('Time taken for epoch {} is {} sec\n'.format(epoch + 1,
                                                           time.time() - start))

train(train_dataset, EPOCHS)
```

Now I get this error:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 train(train_dataset, EPOCHS)

<ipython-input> in train(dataset, epochs)
      6             print(input_image.shape)
      7             with tf.GradientTape() as net_tape:
----> 8                 output = net(input_image, training=True)
      9                 loss = net_loss(input_image, output)
     10

/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    701
    702       if not in_deferred_mode:
--> 703         outputs = self.call(inputs, *args, **kwargs)
    704         if outputs is None:
    705           raise ValueError('A layer\'s `call` method should return a Tensor ...')

/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/network.py in call(self, inputs, training, mask)
    718     outputs = self._run_internal_graph(inputs,
    719                                        training=training,
--> 720                                        mask=mask)
    721     return outputs
    722

/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/network.py in _run_internal_graph(self, inputs, training, mask)
    853     # does not return a list the same size as `call`
    854     tensor_map = {}
--> 855     for x, y, mask in zip(self.inputs, inputs, masks):
    856       tensor_map[str(id(x))] = (y, mask)
    857

TypeError: zip argument #1 must support iteration
```

The `input_image` tensor shape is (16, 256, 256, 1).
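One thing the shape comments in the code above suggest (e.g. `self.up1(x8, x7)` producing `(bs, 2, 2, 1024)` from two 512-channel tensors): the second positional argument is a U-Net skip connection that is presumably meant to be concatenated with the upsampled features, yet `Upsample.call(self, x, training)` accepts only one tensor, so `x7` is bound to `training`. A NumPy sketch (function name mine) of the concatenation the shape comments imply:

```python
import numpy as np

def upsample_with_skip(x, skip):
    # U-Net style skip connection: join decoder activations with the
    # matching encoder activations along the channel (last) axis.
    return np.concatenate([x, skip], axis=-1)

x = np.zeros((16, 2, 2, 512))     # decoder output after upsampling
skip = np.zeros((16, 2, 2, 512))  # matching encoder activation
assert upsample_with_skip(x, skip).shape == (16, 2, 2, 1024)
```

This is offered as a reading of the intended shapes, not as the cause of the reported `TypeError`.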
tensorflow/tensorflow
tf_upgrade_v2 fails if the file contains f-strings, giving pasta.base.annotate.AnnotationError
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.3
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): pip install tensorflow==2.0.0-alpha0
- TensorFlow version (use command below): 2.0.0-alpha0
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
`tf_upgrade_v2` fails if the file contains an f-string.

**Describe the expected behavior**
`tf_upgrade_v2` does not fail if the file contains an f-string.

**Code to reproduce the issue**
File `foo.py`:

```python
print(f"tf_upgrade_v2 fails to convert f-strings like this one {42}")
```

Command that produces the error:

```
tf_upgrade_v2 --infile foo.py --outfile foo_tf20.py
```

**Other info / logs**

```
Traceback (most recent call last):
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1161, in visit
    super(AstAnnotator, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 127, in visit
    super(BaseVisitor, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 47, in wrapped
    f(self, node, *args, **kwargs)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1213, in visit_Num
    self.attr(node, 'content', contentargs, deps=('n',), default=str(node.n))
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1352, in attr
    attr_parts.append(attr_val)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1210, in contentargs
    lambda: self.token(self.tokens.next_of_type(token.NUMBER).src)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/token_generator.py", line 347, in next_of_type
    self._lines[token.start[0] - 1]))
ValueError: Expected NUMBER but found:
line 1: print(f"tf_upgrade_v2 fails to convert f-strings like this one {42}")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/anaconda3/envs/tf2.0/bin/tf_upgrade_v2", line 10, in <module>
    sys.exit(main())
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/tensorflow/tools/compatibility/tf_upgrade_v2_main.py", line 110, in main
    args.input_file, args.output_file, upgrade)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/tensorflow/tools/compatibility/tf_upgrade_v2_main.py", line 33, in process_file
    upgrader.process_file(in_filename, out_filename)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/tensorflow/tools/compatibility/ast_edits.py", line 494, in process_file
    temp_file)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/tensorflow/tools/compatibility/ast_edits.py", line 548, in process_opened_file
    self.update_string_pasta(''.join(lines), in_filename)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/tensorflow/tools/compatibility/ast_edits.py", line 510, in update_string_pasta
    t = pasta.parse(text)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/__init__.py", line 25, in parse
    annotator.visit(t)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1161, in visit
    super(AstAnnotator, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 127, in visit
    super(BaseVisitor, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 47, in wrapped
    f(self, node, *args, **kwargs)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 211, in visit_Module
    self.generic_visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 261, in generic_visit
    self.visit(item)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1161, in visit
    super(AstAnnotator, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 127, in visit
    super(BaseVisitor, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 47, in wrapped
    f(self, node, *args, **kwargs)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 574, in visit_Expr
    self.visit(node.value)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1161, in visit
    super(AstAnnotator, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 127, in visit
    super(BaseVisitor, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 47, in wrapped
    f(self, node, *args, **kwargs)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 703, in visit_Call
    any_args = self.visit_call_arguments35(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 770, in visit_call_arguments35
    self.visit(arg)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1161, in visit
    super(AstAnnotator, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 127, in visit
    super(BaseVisitor, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 261, in generic_visit
    self.visit(item)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1161, in visit
    super(AstAnnotator, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 127, in visit
    super(BaseVisitor, self).visit(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/anaconda3/envs/tf2.0/lib/python3.6/ast.py", line 263, in generic_visit
    self.visit(value)
  File "/anaconda3/envs/tf2.0/lib/python3.6/site-packages/pasta/base/annotate.py", line 1163, in visit
    raise AnnotationError(e)
pasta.base.annotate.AnnotationError: Expected NUMBER but found:
line 1: print(f"tf_upgrade_v2 fails to convert f-strings like this one {42}")
```
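The traceback shows pasta's annotator choking while re-tokenizing the f-string's embedded `{42}`. As a workaround until the tool handles them, files containing f-strings can be detected up front with the standard `ast` module, since f-strings parse to `JoinedStr` nodes (helper name mine, for illustration):

```python
import ast

def contains_fstring(source):
    # f-strings appear in the AST as ast.JoinedStr nodes (Python 3.6+),
    # so their presence can be checked before running the upgrade tool.
    tree = ast.parse(source)
    return any(isinstance(node, ast.JoinedStr) for node in ast.walk(tree))
```

For example, one could skip (or pre-convert) files where `contains_fstring(open(path).read())` is true before invoking `tf_upgrade_v2` on them.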
tensorflow/tensorflow
Android: TFLite call results in an NPE
Bug
System information:
- OS platform and distribution: Linux (Ubuntu 16.04 / 18.04)
- TensorFlow installed from (source or binary): source
- TensorFlow version (training): 1.11.0 with GPU support; custom model based on ssdlite_mobilenet_v2_coco
- TensorFlow version (converting): tried with 1.11 and 1.12

We are trying to do object detection in an Android app. To do this we use an ssdlite_mobilenet_v2_coco pretrained network and continued training on our own dataset. We created the TFLite model using these scripts:

python3 tensorflow/models/research/object_detection/export_tflite_ssd_graph.py --pipeline_config_path=input/ssdlite_mobilenet_v2_coco/pipeline.config --trained_checkpoint_prefix=input/ssdlite_mobilenet_v2_coco/model.ckpt-381700 --output_directory=output --add_postprocessing_op=true

tflite_convert --output_file=output/ssdlite_mobilenet_v2_coco.tflite --graph_def_file=input/ssdlite_mobilenet_v2_coco.pb (input arrays: float; output arrays: concat, concat_1; input shape: 1,300,300,3)

What the app basically does: it takes a pre-recorded video, decodes it frame by frame using FFmpegMediaMetadataRetriever, and passes the bitmaps into TFLite to detect objects in them. The app is built with Gradle and we are using org.tensorflow:tensorflow-lite:1.12.0, but we basically get the same error with 1.11. We scale the bitmap down to 300x300, convert it from ARGB into 3 float channels, and call TFLite like this:

Log.v(TAG, "feeding tflite")
outputLocations = Array(1) { Array(NUM_DETECTIONS) { FloatArray(4) } }
outputClasses = Array(1) { FloatArray(NUM_DETECTIONS) }
outputScores = Array(1) { FloatArray(NUM_DETECTIONS) }
numDetections = FloatArray(1)
val inputArray = arrayOf<Any>(imgData)
val outputMap = HashMap<Int, Any>()
outputMap.put(0, outputLocations)
outputMap.put(1, outputClasses)
outputMap.put(2, outputScores)
outputMap.put(3, numDetections)
Log.v(TAG, "running tflite")
tflite.runForMultipleInputsOutputs(inputArray, outputMap)
Log.v(TAG, "returned from tflite")
val recognitions = ArrayList(NUM_DETECTIONS)

The error we are getting is:

2019-02-28 11:52:33.486 26807-26879 com.package.xxxxxx A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0 in tid 26879 (xxxxxx), pid 26807 (xxxxxx)
A/DEBUG: Build fingerprint: google/sdk_gphone_x86/generic_x86:9/PSR1.180720.075/5124027:userdebug/dev-keys
A/DEBUG: Revision: 0
A/DEBUG: ABI: x86
A/DEBUG: pid: 26807, tid: 26879, name: xxxxxx  com.package.xxxxxx
A/DEBUG: signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0
A/DEBUG: Cause: null pointer dereference
A/DEBUG: eax 00000000  ebx 00000000  ecx 00000000  edx 00000000
A/DEBUG: edi c75094a8  esi 00000000
A/DEBUG: ebp c75090f8  esp c7509070  eip c757c77b
A/DEBUG: backtrace:
A/DEBUG:   #00 pc 0007277b  /data/app/com.package.xxxxxx-ygil4a0ylttbbhpio26swa/lib/x86/libtensorflowlite_jni.so
A/DEBUG:   #01 pc 00074fe0  (same library)
A/DEBUG:   #02 pc 0007590d  (same library)
A/DEBUG:   #03 pc 000755b0  (same library)
A/DEBUG:   #04 pc 00076322  (same library)
A/DEBUG:   #05 pc 0013389c  (same library)
A/DEBUG:   #06 pc 00132fa7  (same library)
A/DEBUG:   #07 pc 00132e37  (same library)
A/DEBUG:   #08 pc 0016550e  (same library)
A/DEBUG:   #09 pc 0008f065  /system/lib/libc.so (pthread_start(void*)+53)
A/DEBUG:   #10 pc 0002485b  /system/lib/libc.so (start_thread+75)
E/system/bin/tombstoned: Tombstone written to: /data/tombstones/tombstone_35

As you can see, the NPE occurs deep inside libtensorflow, and we are basically running out of ideas what we could do to fix it, so any help is appreciated. It happens on both a physical device and the Android sandbox (API 28). As a starting point we used the TensorFlow TFLite demo from the tensorflow repository.
tensorflow/tensorflow
Met "unsupported operator of type Cast" when quantizing a model obtained from quantization-aware training
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, I tried to use quantization-aware training, post-training quantization and the TFLite converter to train and convert a custom MobileNetV2 model
- OS platform and distribution: macOS 10.13.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: none
- TensorFlow installed from (source or binary): conda install tensorflow
- TensorFlow version: 1.13
- Python version: 3.6.8
- Bazel version (if compiling from source): none
- GCC/compiler version (if compiling from source): none
- CUDA/cuDNN version: none
- GPU model and memory: none

I was training a MobileNetV2 model using quantization-aware training as defined in tf.contrib.quantize, following the directions in the docs. tflite_convert was then used to convert the quantization-aware trained model to 8-bit fixed point. Everything was fine until I saved the frozen graph. I then used the toco command-line tool and got this error message:

2019-03-08 16:13:24.208096: F tensorflow/lite/toco/toco_tooling.h:38] Check failed: s.ok() Unimplemented: this graph contains an operator of type Cast for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).
Fatal Python error: Aborted

I also tried the TFLite converter Python API (convert from session, convert from frozen graph and convert from saved model) and all gave the same error message. By printing out the ops in the graph I found some Cast ops in the graph, mostly in batch-norm layers. I guess these Cast operators are used to build the training graph and fold batch norm, and should be deleted when building the eval graph or freezing the graph, but I still got them in my frozen graph. So I would like to ask for some help getting rid of them and other unsupported ops and converting the model to 8-bit. The nodes in the graph_def are saved here; the saved frozen graph is available here.
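The converter error above names the op type it cannot quantize; one way to locate such ops before conversion is to scan the graph's node list for them. The sketch below is purely illustrative and does not use the real GraphDef API: nodes are assumed to be plain (name, op_type) pairs, and the node names are made up.

```python
# Toy sketch (not the real GraphDef API): given nodes as (name, op_type)
# pairs, list the ones whose quantized form TOCO does not implement,
# e.g. the Cast ops left over from batch-norm folding.
UNSUPPORTED_QUANTIZED_OPS = {"Cast"}  # assumption: just the op from the error

def find_unsupported(nodes):
    """Return names of nodes whose op type has no quantized kernel."""
    return [name for name, op in nodes if op in UNSUPPORTED_QUANTIZED_OPS]

# Hypothetical node list for illustration only.
nodes = [
    ("MobilenetV2/Conv/Conv2D", "Conv2D"),
    ("MobilenetV2/Conv/BatchNorm/Cast", "Cast"),
    ("MobilenetV2/Logits/output", "Identity"),
]
print(find_unsupported(nodes))  # -> ['MobilenetV2/Conv/BatchNorm/Cast']
```

With a real frozen graph one would iterate over graph_def.node and read node.op instead of these tuples.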
tensorflow/tensorflow
Android TFLite benchmark: performance issue with DeepLab segmentation model (DEPTHWISE_CONV_2D)
Bug
System information (the benchmark tests were carried out using the following tools and devices):
- Bazel version: build label 0.23.1
- TensorFlow version: 1.13.0
- Android device: OnePlus 3

Describe the current behavior:
We trained a segmentation model using DeepLab with MobileNet on TF 1.13.0 to replicate the segmentation model provided by TensorFlow (i.e. deeplabv3_257_mv_gpu.tflite), using the sample code in the repo, for PASCAL VOC. However, there is a significant difference in average running time for DEPTHWISE_CONV_2D when we benchmarked the two tflite models with the TFLite Android benchmark tool: in the official model the average time is around 28 ms, whereas in our own model it is around 103 ms. The only difference between the two float models is the quantization level (i.e. ours is 0 to 255 and the official model has -1 to 0.99). There is also a difference in model size (official: 2.7 MB vs ours: 3.3 MB). It seems the official model is using a new kernel for DEPTHWISE_CONV_2D and hence runs significantly faster compared to our trained model. Was the official model trained or optimised using TF 2.0 tools? What can be done to achieve a similar DEPTHWISE_CONV_2D speed using TF 1.13.0, or do we have to migrate and retrain our model using TF 2.0?

Describe the expected behavior:
The DEPTHWISE_CONV_2D performance should be the same in the official model and the replicated model.

Code to reproduce the issue (benchmark command):
bazel build -c opt --config=android_arm64 --cxxopt=-std=c++11 --copt=-DTFLITE_PROFILING_ENABLED tensorflow/lite/tools/benchmark:benchmark_model
adb shell /data/local/tmp/benchmark_model --graph=/data/local/tmp/deeplabv3_257_mv_gpu.tflite --num_threads=1

Benchmark results, official model:
Number of nodes executed: 70
Summary by node type:
node type          count  avg ms   avg %     cdf %     mem KB  times called
CONV_2D            38     137.792  76.447%   76.447%   0.000   38
DEPTHWISE_CONV_2D  17     27.808   15.428%   91.875%   0.000   17
RESIZE_BILINEAR    3      13.577   7.533%    99.408%   0.000   3
CONCATENATION      1      0.528    0.293%    99.701%   0.000   1
ADD                10     0.410    0.227%    99.928%   0.000   10
AVERAGE_POOL_2D    1      0.129    0.072%    100.000%  0.000   1
Timings (microseconds): count=50 first=180683 curr=179967 min=177601 max=186179 avg=180284 std=1486
Memory (bytes): count=0
70 nodes observed

Replicated model:
Number of nodes executed: 70
Summary by node type:
node type          count  avg ms   avg %     cdf %     mem KB  times called
CONV_2D            38     141.189  54.356%   54.356%   0.000   38
DEPTHWISE_CONV_2D  17     103.263  39.755%   94.110%   0.000   17
RESIZE_BILINEAR    3      14.178   5.458%    99.568%   0.000   3
CONCATENATION      1      0.554    0.213%    99.782%   0.000   1
ADD                10     0.437    0.168%    99.950%   0.000   10
AVERAGE_POOL_2D    1      0.130    0.050%    100.000%  0.000   1
Timings (microseconds): count=50 first=258703 curr=259361 min=255581 max=269627 avg=259789 std=3844
Memory (bytes): count=0
70 nodes observed

Other info / logs:
However, using TF 1.12 with Bazel 0.16.0, the benchmark_model build fails for the official model but works with the replicated model.
Build: bazel build -c opt --config=android_arm64 --cxxopt=-std=c++11 --linkopt=-llog --copt=-DTFLITE_PROFILING_ENABLED tensorflow/contrib/lite/tools/benchmark:benchmark_model
Run: adb shell /data/local/tmp/benchmark_model --graph=/data/local/tmp/deeplabv3_257_mv_gpu.tflite --num_threads=1

TF Lite benchmark output:
adb: /opt/intel/intelpython27/lib/libcrypto.so.1.0.0: no version information available (required by adb)
STARTING!
Num runs: [50]
Inter-run delay (seconds): [-1]
Num threads: [1]
Benchmark name: []
Output prefix: []
Warmup runs: [1]
Graph: [/data/local/tmp/deeplabv3_257_mv_gpu.tflite]
Input layers: []
Input shapes: []
Use nnapi: [0]
nnapi error: unable to open library libneuralnetworks.so
Loaded model /data/local/tmp/deeplabv3_257_mv_gpu.tflite
resolved reporter
Didn't find op for builtin opcode 'DEPTHWISE_CONV_2D' version '2'
Registration failed.
Failed to construct interpreter
Aborted

Can you provide the corresponding optimised .pb file, before tflite conversion, for the same model?
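The only stated difference between the two float models is the input range (ours 0 to 255, the official model roughly -1 to 1). A minimal numpy sketch of mapping 0-255 pixels into a symmetric unit range is below; the x / 127.5 - 1 mapping is a common MobileNet preprocessing convention and an assumption here, not something taken from the official model's metadata.

```python
import numpy as np

def rescale_to_unit_range(img_u8):
    """Map uint8 pixels in [0, 255] to float32 values in [-1, 1]."""
    return img_u8.astype(np.float32) / 127.5 - 1.0

img = np.array([[0, 128, 255]], dtype=np.uint8)
scaled = rescale_to_unit_range(img)
print(scaled)  # 0 -> -1.0, 255 -> 1.0
```

Whether this rescaling alone explains the kernel-speed difference is exactly the open question in the report.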
tensorflow/tensorflow
Broken link in a notebook
Bug
Notebook: intro_to_cnns (TF 2.0 alpha tutorials, images). In the last cell there is a broken link (404): "As you can see, our simple CNN has achieved a test accuracy of over 99%. Not bad for a few lines of code! For another style of writing a CNN, using the Keras subclassing API and a GradientTape, head here."
tensorflow/tensorflow
Docs: dead link in "Recurrent Neural Networks"
Bug
System information: TensorFlow version: master. Doc link: language_modeling. Describe the documentation issue: the link for the Penn Tree Bank is no longer accessible. The correct URL might be the one with upper case for LDC99T42.
tensorflow/tensorflow
Segmentation error in tensorflow/tensorflow/examples/label_image/main.cc
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution: Linux Ubuntu 16.04
- Mobile device: 
- TensorFlow installed from (source or binary): source
- TensorFlow version: r1.13
- Python version: 3.6
- Bazel version (if compiling from source): 0.23
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 9.0 / 7.0
- GPU model and memory: 1080 Ti, 11 GB

Describe the current behavior: segmentation fault.

Code to reproduce the issue: tensorflow/tensorflow/examples/label_image/main.cc

Other info / logs:
I think the reason is TF_RETURN_IF_ERROR(env->NewRandomAccessFile(filename, &file)) at line 100.
tensorflow/tensorflow
Load data tutorial: model fails to converge
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: doc_template)

System information:
- TensorFlow version:
- Doc link: site/en/tutorials/load_data/images.ipynb

Describe the documentation issue:
The model fails to converge most of the time. I changed steps_per_epoch to run through all the data, but it still fails to converge a lot of the time, just getting stuck at high loss and low accuracy. I noticed the last layer outputs logits; is this expected?

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
tf2 upgrade: contrib usages and several other TF 1.x APIs uncaught by the tf2 upgrade script
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Debian 8
- Mobile device: N/A
- TensorFlow installed from (source or binary): pip (binary)
- TensorFlow version: tf-2 nightly as of 3/7/19; the comparison version is 1.13.1
- Python version: 3.4.2
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior:
In order to make sure that my crusty old TF 1.x code works on TF 2, I'm using the tf_upgrade_v2 script as documented. However, on the first bit of code I tried, I was told that it detected 0 issues that require attention, and the report it gave was essentially blank. Additionally, the diff between the old file and the new file shows no changes. This would have been great, except there are a few things that are very obviously incorrect:
- On lines 42 and 43 there are uses of tf.contrib.lookup, which should be moved to tf.lookup.
- On line 63 there is a call to tf.sparse_tensor_to_dense, which should be moved to tf.sparse.to_dense.
- On lines 118-121 there are a tf.Session context and a table initializer, which no longer exist.

Describe the expected behavior:
I would hope that the above four issues would be pointed out by the upgrade script as needing my attention, even if it cannot, for some reason, fix them automatically (although I imagine the one on line 63 can be done with a string replacement). Note: I still haven't gotten this script, despite some manual upgrading, to work on TF 2, but pre-manual-upgrade it works fine on TF 1.13.1, although it gives me a bunch of deprecation warnings.

Code to reproduce the issue: see the attached gist.
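For illustration, the kind of textual check the report is asking for can be sketched in a few lines. This is a toy, not the real tf_upgrade_v2 rename machinery; the symbol tables below contain only the cases from this report.

```python
# Toy detector for TF 1.x-only symbols the upgrade script should flag.
# RENAMES: symbols with a known 2.x replacement; MANUAL: symbols that
# need hand attention (e.g. Session contexts). Illustrative tables only.
RENAMES = {
    "tf.contrib.lookup": "tf.lookup",
    "tf.sparse_tensor_to_dense": "tf.sparse.to_dense",
}
MANUAL = ["tf.Session", "tf.tables_initializer"]

def report(source):
    """Return (line number, message) pairs for each flagged usage."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for old, new in RENAMES.items():
            if old in line:
                findings.append((lineno, "rename %s -> %s" % (old, new)))
        for sym in MANUAL:
            if sym in line:
                findings.append((lineno, "%s needs manual attention" % sym))
    return findings

code = ("table = tf.contrib.lookup.index_table_from_file(f)\n"
        "dense = tf.sparse_tensor_to_dense(sp)\n"
        "with tf.Session() as sess:\n")
for lineno, msg in report(code):
    print(lineno, msg)
```

A real tool would parse the AST rather than match substrings, which is presumably why the script missed these attribute chains in the first place.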
tensorflow/tensorflow
Where are the control flow ops like Abort used?
Bug
Previously, the TF control flow primitives were Switch, Merge, Enter, NextIteration and Exit; we could see them in a simple while-loop graph. Currently, in your r1.13 and r2.0 docs, it seems the control flow ops have changed a lot. How can I see these ops in a GraphDef? Are they still used in while_loop and cond?
tensorflow/tensorflow
Bug report: wrong container set in OpsTestBase::AddResourceInput (easily fixed)
Bug
File: tensorflow/core/kernels/ops_testutil.h, class OpsTestBase, function AddResourceInput: the resource container is set to "" (empty) when the default container should be used. Due to CLA problems I can't contribute right now; please review pull request #26428 from ppwwyyxx. Thank you!

System information:
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary (python3 pip)
- TensorFlow version: 1.12.0
- Python version: 3.5.2
- Bazel version (if compiling from source): 0.22.0
- GCC/compiler version (if compiling from source): 5.4.0
- CUDA/cuDNN version: 10.0
- GPU model and memory:
tensorflow/tensorflow
tensorflow.org site not working (potential service worker issue)
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: doc_template)

System information:
- TensorFlow version:
- Doc link:

Describe the documentation issue:
None of the tensorflow.org pages are working on Safari (latest macOS) for me. It seems related to service workers. The site works fine on Safari Technology Preview and on Google Chrome. I might have visited the page during the last 48 hours pre-launch. (Screenshot: Skjermbilde 2019-03-06 kl. 23.11.45)

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue? Yes, if applicable.
tensorflow/tensorflow
Docs: dead link in "Build and load a SavedModel"
Bug
System information: TensorFlow version: master. Describe the documentation issue: the link for TensorFlow Serving is no longer available in the doc "Build and load a SavedModel".
tensorflow/tensorflow
Keras model saving is not working when the graph is finalized
Bug
- TensorFlow installed from (source or binary): from pip
- TensorFlow version: 1.13.1
- Python version: 3.6

TL;DR: a Keras model used in static graph and session mode cannot save its weights when the graph is finalized.

import tensorflow as tf

model = tf.keras.models.Sequential([tf.keras.layers.Dense(256)])
x = tf.zeros([10, 3])  # dummy input
model(x)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

# Finalize the graph (or it's done automatically in MonitoredSession):
tf.get_default_graph().finalize()

model.save('/tmp/keras_test')  # error

The error is: RuntimeError: Graph is finalized and cannot be modified.

Stacktrace (paths abbreviated to PREFIX):

File "PREFIX/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1415, in save_weights
    saving.save_weights_to_hdf5_group(f, self.layers)
File "PREFIX/.../keras/engine/saving.py", line 742, in save_weights_to_hdf5_group
    weight_values = K.batch_get_value(symbolic_weights)
File "PREFIX/.../keras/backend.py", line 2819, in batch_get_value
    return get_session().run(tensors)
File "PREFIX/.../keras/backend.py", line 482, in get_session
    _initialize_variables(session)
File "PREFIX/.../keras/backend.py", line 758, in _initialize_variables
    [variables_module.is_variable_initialized(v) for v in candidate_vars])
File "PREFIX/.../python/util/tf_should_use.py", line 193, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs))
File "PREFIX/.../python/ops/variables.py", line 2924, in is_variable_initialized
    return state_ops.is_variable_initialized(variable)
File "PREFIX/.../python/ops/state_ops.py", line 133, in is_variable_initialized
    return ref.is_initialized(name=name)
File "PREFIX/.../python/ops/resource_variable_ops.py", line 833, in is_initialized
    return gen_resource_variable_ops.var_is_initialized_op(self.handle, name)
File "PREFIX/.../python/ops/gen_resource_variable_ops.py", line 1334, in var_is_initialized_op
    "VarIsInitializedOp", resource=resource, name=name)
File "PREFIX/.../python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
File "PREFIX/.../python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
File "PREFIX/.../python/framework/ops.py", line 3272, in create_op
    self._check_not_finalized()
File "PREFIX/.../python/framework/ops.py", line 2945, in _check_not_finalized
    raise RuntimeError("Graph is finalized and cannot be modified.")
RuntimeError: Graph is finalized and cannot be modified.

The op being created here is (from keras/backend.py#L761):

is_initialized = session.run(
    [variables_module.is_variable_initialized(v) for v in candidate_vars])

Well, I think we should not create and call this new op to check whether a variable is initialized. Why is it implemented this way? Not only does it not work here, it would also result in op leaks, even in cases where it would work because the graph is not finalized. (Update: it seems that the results are cached through Keras initialization, so that won't be the case.) There is another way to check whether a variable is initialized without creating a new operation.
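The failure mode is that saving builds a brand-new VarIsInitializedOp on a graph that can no longer grow. A toy illustration, deliberately not real TF classes, of why a check op cached before finalization survives while a freshly created one cannot:

```python
# Toy illustration (not real TF classes): creating a new op on a
# finalized graph raises, so any "is initialized?" check that builds a
# fresh op cannot run after finalize(); a cached op still can.
class Graph:
    def __init__(self):
        self.ops, self.finalized = [], False

    def finalize(self):
        self.finalized = True

    def create_op(self, name):
        if self.finalized:
            raise RuntimeError("Graph is finalized and cannot be modified.")
        self.ops.append(name)
        return name

g = Graph()
cached_check = g.create_op("VarIsInitializedOp")  # built before finalize
g.finalize()
print(cached_check)  # fine: reusing the cached op adds nothing to the graph
try:
    g.create_op("VarIsInitializedOp")  # what the save path effectively did
except RuntimeError as e:
    print(e)
```

This is only a sketch of the ordering argument, not of how Keras should fix it.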
tensorflow/tensorflow
Missing warning or documentation for upcast in sparse_softmax_cross_entropy_with_logits
Bug
In the sparse_softmax_cross_entropy_with_logits function inside tensorflow/python/ops/nn_ops.py, logits that are in fp16 are automatically upcast to fp32; at the end of the function the result is cast back down to fp16. This behaviour ought to be exposed in documentation and/or a warning for users doing mixed-precision training. The expected behaviour would be that the entire loss (softmax cross-entropy) is done in fp16. Relevant lines:

logits = ops.convert_to_tensor(logits)
precise_logits = math_ops.cast(logits, dtypes.float32) if (
    dtypes.as_dtype(logits.dtype) == dtypes.float16) else logits
# ...
cost, _ = gen_nn_ops.sparse_softmax_cross_entropy_with_logits(
    precise_logits, labels, name=name)

Thanks so much!
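To see why the kernel bothers upcasting, consider a long fp16 sum; the reduction inside softmax cross-entropy is exactly this shape of computation. The numpy sketch below is illustrative only and stands in for the TF kernels: naive fp16 accumulation stalls once the running total reaches 2048, while doing the arithmetic in fp32 and casting down at the end, as nn_ops.py does with the logits, keeps the result exact.

```python
import numpy as np

# Naive fp16 accumulation: once the running sum reaches 2048, adding 1.0
# rounds away entirely (float16 spacing at 2048 is 2.0), so the loop stalls.
acc16 = np.float16(0.0)
for _ in range(10000):
    acc16 = np.float16(acc16 + np.float16(1.0))

# Same reduction carried out in float32, then cast down at the end,
# mirroring the upcast-then-downcast pattern in nn_ops.py.
acc32 = np.float32(0.0)
for _ in range(10000):
    acc32 = np.float32(acc32 + np.float32(1.0))
acc32_as_fp16 = np.float16(acc32)  # 10000 is exactly representable in fp16

print(acc16)         # 2048.0
print(acc32_as_fp16) # 10000.0
```

Whether that precision guarantee should override a user's explicit fp16 request is the documentation question raised above.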
tensorflow/tensorflow
GPU placement of tf.nn.conv2d during tf.data.Dataset.map call causes UnimplementedError (NHWC)
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux 4.20.13-arch1-1 ARCH #1 SMP PREEMPT Wed Feb 27 19:10:28 UTC 2019 x86_64 GNU/Linux
- Mobile device: none
- TensorFlow installed from (source or binary): community/python-tensorflow-opt-cuda
- TensorFlow version: 1.13.1
- Python version: 3.7.2
- Bazel version (if compiling from source): none
- GCC/compiler version (if compiling from source): none
- CUDA/cuDNN version: 10.1 / 7.5
- GPU model and memory: GeForce GTX 1080 Ti, 11 GB

Describe the current behavior:
Evaluation of tf.nn.conv2d in a tf.data.Dataset.map call fails during the session.run call of tf.data.Iterator.get_next with the error:

tensorflow.python.framework.errors_impl.UnimplementedError: Generic conv implementation only supports NHWC tensor format for now.

See the full error log and the attached code below for the full example. It seems that the graph is successfully created, but the evaluation of the tf.nn.conv2d call, which is implicitly placed on the GPU, is not possible. This is somehow related to the fact that this tf.nn.conv2d call is wrapped in a tf.data.Dataset.map call. Setting data_format='NHWC' or use_cudnn_on_gpu=False produces the same error.

Describe the expected behavior:
tf.nn.conv2d should be evaluated, which it is iff the convolution is explicitly placed on the CPU, e.g. with tf.device('/cpu:0') or CUDA_VISIBLE_DEVICES="".

Code to reproduce the issue:

import tensorflow as tf

class IteratorInitializerHook(tf.train.SessionRunHook):
    """Hook to initialise the data iterator after the Session is created."""

    def __init__(self, func=None):
        super(IteratorInitializerHook, self).__init__()
        self.iterator_initializer_func = func

    def after_create_session(self, session, coord):
        """Initialise the iterator after the session has been created."""
        self.iterator_initializer_func(session)

if __name__ == '__main__':
    def apply_kernel(tensor):
        kernel = tf.random_normal([3, 3])
        t = tf.expand_dims(tensor, 0)
        t = tf.expand_dims(t, -1)
        k = tf.expand_dims(kernel, -1)
        k = tf.expand_dims(k, -1)
        # TODO: the following line fails during the session.run call of
        # tf.data.Iterator.get_next (last line in this file)
        tf_conv = tf.nn.conv2d(t, k, [1, 1, 1, 1], 'SAME')
        return tf.squeeze(tf_conv)

    def do_some_stuff(x, y):
        x = apply_kernel(x)
        return x, y

    n, image_shape = 100, (256, 256)
    ds = tf.data.Dataset.from_tensor_slices(
        (tf.random_uniform((n,) + image_shape), tf.random_uniform((n,))))
    ds = ds.map(do_some_stuff)
    iterator = tf.data.Iterator.from_structure(ds.output_types, ds.output_shapes)
    data = iterator.get_next()
    ds_init_op = iterator.make_initializer(ds)

    with tf.train.SingularMonitoredSession(
            hooks=[IteratorInitializerHook(lambda s: s.run(ds_init_op))],
            config=tf.ConfigProto(log_device_placement=True)) as sess:
        sess.run(data)

Other info / logs:
Here is the full error log including device placement: tf_conv_nhwc_issue.log. Here is a zipped version of the Python example: tf_conv_nhwc_issue.py.zip. Not sure if this is important, but it seems that all TensorFlow ops from the function called by tf.data.Dataset.map do not show up in the device placement log.
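The error message is about tensor layout. For reference, NHWC and NCHW differ only in axis order, which a transpose converts between; this numpy sketch is illustrative only and is not the fix for the bug:

```python
import numpy as np

# NHWC <-> NCHW layout conversion, the two data formats the error message
# is about. TF's conv kernels pick one of these layouts internally.
nhwc = np.zeros((1, 256, 256, 3), dtype=np.float32)  # batch, height, width, channels
nchw = np.transpose(nhwc, (0, 3, 1, 2))              # batch, channels, height, width
print(nchw.shape)  # (1, 3, 256, 256)
back = np.transpose(nchw, (0, 2, 3, 1))
print(back.shape)  # (1, 256, 256, 3)
```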
tensorflow/tensorflow
Add minor versions for cuDNN/CUDA in tested build configurations table
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: doc_template)

System information:
- TensorFlow version: 1.12
- Doc link: tested build configurations

Describe the documentation issue:
The tested build configurations table does not supply adequate information for the user, since it does not provide the minor versions of the cuDNN and CUDA releases tested in the builds. Please add this information.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
tensorflow.org devsite: code containers not working properly
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: doc_template)

System information:
- TensorFlow version: 2.0
- Doc link: (link)

Describe the documentation issue:
As seen in the GIF file uploaded at the link, the code container for the "Download a package" section isn't working correctly: on decreasing the window size, the code container initially gets cut in half, without a scroll feature, and then gets completely removed from the webpage.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue? Yes.
tensorflow/tensorflow
Converting a tensor2tensor Transformer to a TF Lite graph
Bug
I am trying to convert a frozen Transformer model from tensor2tensor to a TF Lite graph:

tflite_convert --output_file=/tmp/tf-lite.mdl --graph_def_file=freeze.pb --input_arrays=wave_input --output_arrays=output

I am getting the following error:

Traceback (most recent call last):
  File "/home/sfalk/miniconda3/envs/t2t/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/home/sfalk/miniconda3/envs/t2t/lib/python3.5/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/sfalk/miniconda3/envs/t2t/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/sfalk/miniconda3/envs/t2t/lib/python3.5/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
    _convert_model(tflite_flags)
  File "/home/sfalk/miniconda3/envs/t2t/lib/python3.5/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 162, in _convert_model
    output_data = converter.convert()
  File "/home/sfalk/miniconda3/envs/t2t/lib/python3.5/site-packages/tensorflow/contrib/lite/python/lite.py", line 453, in convert
    **converter_kwargs)
  File "/home/sfalk/miniconda3/envs/t2t/lib/python3.5/site-packages/tensorflow/contrib/lite/python/convert.py", line 342, in toco_convert_impl
    input_data.SerializeToString())
  File "/home/sfalk/miniconda3/envs/t2t/lib/python3.5/site-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
    (stdout, stderr))
RuntimeError: TOCO failed. See console for info.

It would seem that there are a lot of unsupported operations. Does this cause the error? The last line of the tflite_convert output says:

F tensorflow/contrib/lite/toco/tooling_util.cc:968] Check failed: array.has_shape()

I don't know if anybody can make sense out of that. Is it possible to fix this, or is there simply no way to convert this graph at this point in time?

Full tflite_convert output (condensed; the original log repeats these lines many times):

2019-03-06 13:53:51: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: RandomStandardNormal, SplitV, Cos, RFFT, ComplexAbs, LinSpace, ListDiff, Abs, Where, GatherNd, ScatterNd, MatrixBandPart, Enter
I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute (for an RFFT node under stft and a Where node under the encoder pad remover)
I tensorflow/contrib/lite/toco/import_tensorflow.cc:189] Unsupported data type in placeholder op (repeated)
operation enter 2019 03 06 13 53 51 607760 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 607766 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 607802 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 607870 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation loopcond 2019 03 06 13 53 51 607876 I tensorflow contrib lite toco import tensorflow cc 1127 op node miss output type attribute transformer ext while loopcond 2019 03 06 13 53 51 608650 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608661 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608668 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608675 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608682 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608688 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608694 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608701 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608708 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608714 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608720 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608727 I 
tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608733 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608740 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608746 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608752 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608833 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608871 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608910 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 608919 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 609056 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 609098 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 609238 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 609276 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 609419 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 609460 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 609746 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 609797 I tensorflow contrib lite toco import tensorflow cc 1080 convert 
unsupported operation enter 2019 03 06 13 53 51 609872 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 609893 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 610025 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 610086 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 610326 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 610390 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 610453 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 610462 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 610832 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 610871 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 610896 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 611271 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 611311 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 611333 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 611369 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 611379 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 611499 I tensorflow 
contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 611536 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 611675 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 611712 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 611849 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 611886 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 612169 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 612207 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 612257 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 612266 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 612391 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 612526 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 612746 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 612783 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 612833 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 612842 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 613232 I tensorflow contrib lite toco import tensorflow cc 1080 convert 
unsupported operation listdiff 2019 03 06 13 53 51 613295 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 613330 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 613729 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 613768 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 613793 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 613841 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 613862 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 613986 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 614023 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 614161 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 614213 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 614362 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 614411 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 614710 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 614774 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 614847 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 614868 I 
tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 615001 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 615051 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 615280 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 615343 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 615419 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 615440 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 615813 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 615863 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 615899 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616299 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation listdiff 2019 03 06 13 53 51 616350 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616386 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616447 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616456 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616582 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616591 I tensorflow contrib lite toco import tensorflow cc 1080 convert 
unsupported operation enter 2019 03 06 13 53 51 616597 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616605 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616611 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616618 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616624 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616630 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616636 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616643 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616649 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616656 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616662 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616669 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616675 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 616682 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation enter 2019 03 06 13 53 51 617107 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation isfinite 2019 03 06 13 53 51 617115 I tensorflow contrib lite toco import tensorflow cc 1127 op node miss output type attribute transformer ext while reducelogsumexp isfinite 
2019 03 06 13 53 51 617189 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617197 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617203 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617209 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617215 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617221 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617227 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617232 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617237 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617243 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617249 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617255 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617261 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617267 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617310 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617317 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617323 I 
tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617329 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617335 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617341 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617346 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617352 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617357 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617363 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617369 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617374 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617380 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617385 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617391 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617446 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617453 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617459 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation gathernd 2019 03 06 13 53 51 617488 I tensorflow contrib lite toco 
import tensorflow cc 1080 convert unsupported operation exit 2019 03 06 13 53 51 617494 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation exit 2019 03 06 13 53 51 617499 I tensorflow contrib lite toco import tensorflow cc 1080 convert unsupported operation exit 2019 03 06 13 53 51 956887 f tensorflow contrib lite toco tooling util cc 968 check fail array have shape abort core dump system information os platform and distribution e g linux ubuntu 16 04 ubuntu 16 04 tensorflow instal from source or binary pip tensorflow version use command below 1 12 0 python version 3 5 6 cuda cudnn version 9 0 gpu model and memory 4x geforce 1080 gtx uname spomvi linux 48 16 04 1 ubuntu smp tue jan 29 18 03 48 utc 2019 x86 64 x86 64 x86 64 gnu linux pip freeze grep tensor mesh tensorflow 0 0 5 tensor2tensor 1 12 0 tensorboard 1 12 0 tensorflow gpu 1 12 0 tensorflow metadata 0 9 0 tensorflow probability 0 5 0
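As an aside, a quick way to make sense of a noisy TOCO log like the one above is to tally the distinct unsupported ops. This is a throwaway helper written for this report, not part of TensorFlow:

```python
import re
from collections import Counter

def count_unsupported_ops(log_text):
    # Each offending line ends with "Converting unsupported operation: <OpName>",
    # so collect the op names and count how often each appears.
    return Counter(re.findall(r"unsupported operation:?\s+(\w+)", log_text))

sample = (
    "... Converting unsupported operation: ListDiff\n"
    "... Converting unsupported operation: ListDiff\n"
    "... Converting unsupported operation: GatherNd\n"
)
counts = count_unsupported_ops(sample)
```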
tensorflow/tensorflow
Allow building TF for NVIDIA GPU targets below sm_35 if XLA is not enabled
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: build_template)

System information:
OS Platform and Distribution: Gentoo
TensorFlow installed from (source or binary): source
TensorFlow version: 1.13.1
Python version: 3.6.4
Installed using virtualenv? pip? conda?: pip in venv
Bazel version (if compiling from source): 0.21
GCC/Compiler version (if compiling from source): 7.4.0
CUDA/cuDNN version: 10.0
GPU model and memory: GTX 650 Ti

I was able to successfully build TF from source with XLA enabled and compute capability 3.0. However, when a session is created, the Python interpreter exits, complaining about insufficient compute capability:

```
>>> import tensorflow as tf
>>> tf.Session()
2019-03-06 12:49:41.776396: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3500130000 Hz
2019-03-06 12:49:41.776727: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x556553ef5ee0 executing computations on platform Host. Devices:
2019-03-06 12:49:41.776741: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0)
2019-03-06 12:49:41.809556: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-03-06 12:49:41.810593: I tensorflow/compiler/xla/service/platform_util.cc:194] StreamExecutor cuda device (0) is of insufficient compute capability: 3.5 required, device is 3.0
2019-03-06 12:49:41.810666: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: no supported devices found for platform CUDA
```

If I reconfigure TF to disable XLA and rebuild, again with compute capability 3.0, then TF works fine. So I guess a simple check (compute capability >= 3.5 when XLA is enabled) could at least prevent building a non-functional TF.
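The suggested guard could be as simple as comparing the configured compute capabilities against XLA's minimum. A minimal sketch of such a check (a hypothetical helper, not TF's actual configure script; the 3.5 floor comes from the platform_util.cc error above):

```python
# Hypothetical configure-time guard: flag a build that enables XLA:GPU
# while targeting compute capabilities below the 3.5 minimum reported
# in the runtime error.
XLA_GPU_MIN_CAPABILITY = (3, 5)

def check_build_config(xla_enabled, cuda_compute_capabilities):
    """cuda_compute_capabilities: list of 'major.minor' strings, e.g. ['3.0']."""
    if not xla_enabled:
        return []  # non-XLA builds may target older GPUs
    bad = []
    for cap in cuda_compute_capabilities:
        major, minor = (int(p) for p in cap.split("."))
        if (major, minor) < XLA_GPU_MIN_CAPABILITY:
            bad.append(cap)
    return bad  # non-empty -> configure should warn or abort
```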
tensorflow/tensorflow
Kubernetes GPU docs / k8s-compatible Docker image
Bug
TensorFlow version: 1.13.1
Doc link: the existing documentation on how to set up TensorFlow with Docker + GPU

The existing documentation on how to set up TensorFlow with Docker + GPU is great. Unfortunately, it doesn't work with Kubernetes. The Kubernetes Engine GPU guide suggests using the nvidia/cuda:10.0-runtime-ubuntu18.04 Docker image; the TensorFlow docs suggest tensorflow/tensorflow:latest-gpu. Ideally we could use them both as base images, but that's impossible. And of course this also depends on which VM image you're using for the k8s nodes. It would be great if the docs provided some guidance on this setup. It would be really awesome if a Docker image compatible with one of the k8s VM images were available, containing both the necessary NVIDIA support for a k8s node and the complete tensorflow-gpu package environment.
tensorflow/tensorflow
TF 2.0 API docs: tf.keras.activations.softmax
Bug
System information:
TensorFlow version: 2.0
Doc link: tf.keras.activations.softmax

Describe the documentation issue:
The softmax activation function is not described in detail, and there is no recommendation about when to use it. There is no usage example. The description of the return value could be more useful.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue? Yes.
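For reference, the kind of usage example the docs could include: a minimal, numerically stable softmax sketched in plain Python (illustrative only, not the tf.keras implementation):

```python
import math

def softmax(xs):
    # Subtracting the max before exponentiating avoids overflow without
    # changing the result (softmax is invariant to shifting its inputs).
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Typical use: turn a vector of logits into a probability distribution,
# e.g. as the final activation of a multi-class classifier.
probs = softmax([2.0, 1.0, 0.1])
```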
tensorflow/tensorflow
extend_with_weight_decay function doesn't exist
Bug
System information:
TensorFlow version: 1.12
Doc link: MomentumWOptimizer, AdamWOptimizer and DecoupledWeightDecayExtension

Describe the documentation issue:
In MomentumWOptimizer, AdamWOptimizer and DecoupledWeightDecayExtension, the documentation refers to extend_with_weight_decay:

```python
extend_with_weight_decay(tf.train.MomentumOptimizer, weight_decay=weight_decay)
```

AFAIK there is no extend_with_weight_decay, only extend_with_decoupled_weight_decay, which seems to serve a similar purpose:

```python
# Create a MyAdamW object
MyAdamW = extend_with_decoupled_weight_decay(tf.train.AdamOptimizer)
optimizer = MyAdamW(weight_decay=0.001, learning_rate=0.001)
```

Is my understanding correct? I could submit a PR if necessary.
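To illustrate why the "decoupled" variant exists, here is a plain-Python sketch of the two update rules (function names made up for illustration; this is not TF code):

```python
def sgd_l2_step(w, grad, lr, l2):
    # L2 regularization folded into the gradient ("coupled" decay):
    # the penalty term l2 * w is added to the gradient before the update.
    return w - lr * (grad + l2 * w)

def sgd_decoupled_step(w, grad, lr, wd):
    # Decoupled weight decay (the AdamW idea): the weights are shrunk
    # directly, outside the gradient-based part of the update.
    return w - lr * grad - wd * w

# For plain SGD the two coincide when wd == lr * l2; for adaptive
# optimizers like Adam they generally do not, which is the point of
# the decoupled extension.
```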
tensorflow/tensorflow
Low GPU usage for inference when multithreaded vs multiprocess (C++)
Bug
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS Platform and Distribution: Windows 7, 64-bit
Mobile device: no
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): 1.13.1
Python version: 3.6
GCC/Compiler version (if compiling from source): VS2015
CUDA/cuDNN version: 10.0 / 7.5
GPU model and memory: RTX 2060, 6 GB

Describe the current behavior:
I'm using Python to build a graph with tf.keras layers that contains CuDNNLSTM, TimeDistributed and Dense layers. It crashes the RTX 2060 (there's a bug report for that already; that's not this issue) if I don't set the allow_growth option or limit RAM usage. I'm using the model for inference from C++. My batch size is known as soon as the program starts, but it can vary each time it's started, so the graph doesn't have a fixed input size. I'm creating several threads with one session in each. TensorFlow GPU usage is low (~38%), and it eventually gobbles up all available GPU RAM if it's not manually limited. I can't simply increase the batch size, because different weights are loaded on each run. Now, the issue here is that it doesn't matter much how many parallel sessions I run in multiple threads: GPU utilization stays low. But if I limit GPU RAM usage so that I can run two separate processes (not threads), they can both use about half the RAM, and GPU utilization increases to ~80%. Why can't TensorFlow figure out a way to do that in just one process with multiple threads? It's the same input and result, just multi-thread vs multi-process.

Describe the expected behavior:
TF should utilize at least 80% of the GPU in my case, since I'm just running inference sessions in parallel, in one process, instead of having to resort to limiting RAM usage and starting several processes.

Code to reproduce the issue:
Too much code to extract to make a barebones example, but I could do it if absolutely necessary.

Other info / logs:
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
TF 2.0 checkpoint breaking change for all optimizers
Bug
Type of breakage: breakage with changed code

APIs that are affected:
1. tf.train.GradientDescentOptimizer
2. tf.train.MomentumOptimizer
3. tf.train.RMSPropOptimizer
4. tf.train.AdamOptimizer
5. tf.train.AdadeltaOptimizer
6. tf.train.AdagradOptimizer
7. tf.train.FtrlOptimizer

Side note: there are two extra tf.contrib-owned optimizers that will also have breaking changes:
1. tf.contrib.opt.NadamOptimizer
2. tf.contrib.opt.AdaMaxOptimizer

Description of change:
The current endpoints tf.train.XXXOptimizer are being deprecated in favor of tf.keras.optimizers.XXX. Specifically:
1. tf.train.GradientDescentOptimizer(lr) -> tf.keras.optimizers.SGD(learning_rate)
2. tf.train.MomentumOptimizer(lr, momentum) -> tf.keras.optimizers.SGD(learning_rate, momentum)
3. tf.train.RMSPropOptimizer(lr) -> tf.keras.optimizers.RMSprop(learning_rate)
4. tf.train.AdamOptimizer(lr, beta1, beta2) -> tf.keras.optimizers.Adam(learning_rate, beta_1, beta_2)
5. tf.train.AdadeltaOptimizer(lr) -> tf.keras.optimizers.Adadelta(learning_rate)
6. tf.train.AdagradOptimizer(lr) -> tf.keras.optimizers.Adagrad(learning_rate)
7. tf.train.FtrlOptimizer(lr) -> tf.keras.optimizers.Ftrl(learning_rate)

TensorFlow users of tf.train.XXXOptimizer will be updated to tf.keras.optimizers.XXX, and checkpoints from the old tf.train.XXXOptimizer calls will no longer work.

Variable name change mapping: the tf.keras.optimizers.XXX weights are in a different format than the existing tf.train.XXXOptimizer; there isn't a direct mapping for that.

Target time window: undecided, since the update requires non-trivial user-side changes.
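The rename mapping above, written out as plain data (a hypothetical lookup table for migration tooling, not an official API):

```python
# Deprecated tf.train optimizer endpoints and their tf.keras.optimizers
# replacements, as listed in this issue. Names only: checkpoints do NOT
# carry over, since the keras optimizers store their weights in a
# different format.
OPTIMIZER_MIGRATION = {
    "tf.train.GradientDescentOptimizer": "tf.keras.optimizers.SGD",
    "tf.train.MomentumOptimizer":        "tf.keras.optimizers.SGD",
    "tf.train.RMSPropOptimizer":         "tf.keras.optimizers.RMSprop",
    "tf.train.AdamOptimizer":            "tf.keras.optimizers.Adam",
    "tf.train.AdadeltaOptimizer":        "tf.keras.optimizers.Adadelta",
    "tf.train.AdagradOptimizer":         "tf.keras.optimizers.Adagrad",
    "tf.train.FtrlOptimizer":            "tf.keras.optimizers.Ftrl",
}

def keras_replacement(old_name):
    """Return the tf.keras.optimizers replacement for a deprecated endpoint."""
    return OPTIMIZER_MIGRATION[old_name]
```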
tensorflow/tensorflow
GPU support build instructions suggest old Docker image
Bug
System information:
OS: Linux Ubuntu 18.04.2 LTS
TensorFlow installed from: Docker image 1.13.1-gpu-py3-jupyter
TensorFlow version: should be 1.13.1
Python version: from Docker, appears to be 3.5.2
Installed using pip: n/a, don't get that far
Bazel version (if compiling from source): missing in the Docker image
CUDA/cuDNN version: CUDA 10, from the Docker image
GPU model and memory (/proc/cpuinfo excerpt):

```
processor       : 11
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Core(TM) i7 CPU X 980 @ 3.33GHz
stepping        : 2
microcode       : 0x13
cpu MHz         : 1630.792
cache size      : 12288 KB
physical id     : 0
siblings        : 12
core id         : 10
cpu cores       : 6
apicid          : 21
initial apicid  : 21
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt aes lahf_lm epb pti tpr_shadow vnmi flexpriority ept vpid dtherm ida arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips        : 6675.50
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
```

Describe the problem:
The Docker image tensorflow/tensorflow:1.13.1-gpu-py3-jupyter doesn't build as per the instructions here (GPU support). Using the tensorflow/tensorflow:1.13.1-gpu-py3-jupyter Docker image, the build fails at the first step (configure): there's nothing in the tensorflow directory. Using the tensorflow/tensorflow:nightly-devel-gpu-py3 Docker image, everything works fine, and after being built, TensorFlow can be imported in Python within the Docker container. In that image there is source code in tensorflow. Also, in the 1.13.1 Docker container, /usr/local/lib has only python3.5; in the nightly-devel-gpu-py3 image there is a bazel directory.
tensorflowtensorflow
Bug: tf.keras update ops for computing running mean and variance for BatchNorm cause an error
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Debian GNU/Linux 9
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12
- Python version: 3.6
- CUDA/cuDNN version: CUDA 9.0
- GPU model and memory: Titan X (Pascal)

I am using the tf.keras API to build a convnet, and I train it with a TensorFlow custom training loop using the graph API. The convnet contains BatchNorm layers, and as I don't use model.fit for training, I have to manage the updates to the moving mean and variance manually. However, this causes an error. Here is a minimal reproducible case:

```python
import numpy as np
import tensorflow as tf

from tensorflow.python.keras import layers
from tensorflow.python.keras import initializers
from tensorflow.python.keras import models

tfsum = tf.contrib.summary

height = 480
width = 640


def conv(x, num_out_layers, kernel_size, stride, activation_fn='relu',
         kernel_initializer=initializers.VarianceScaling()):
    x = layers.Conv2D(num_out_layers, kernel_size=kernel_size, strides=stride,
                      padding='same', activation=None,
                      kernel_initializer=kernel_initializer)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation(activation_fn)(x)
    return x


def build_cnn(img_shape):
    inputs = tf.keras.layers.Input(img_shape)
    conv1 = conv(inputs, 64, 7, 1)
    outputs = conv(conv1, 3, 7, 1)
    model = models.Model(inputs=inputs, outputs=outputs)
    return model


def main_graph():
    learning_rate = 0.001
    batch_size = 1
    num_steps = 100
    train_dir = ...

    dataset_size = 20
    images = np.random.random_sample(
        (dataset_size, height, width, 3)).astype(np.float32)
    dataset = tf.data.Dataset.from_tensor_slices(images)
    dataset = dataset.repeat().batch(batch_size)
    iterator = dataset.make_one_shot_iterator()
    image = iterator.get_next()
    print(image)

    global_step = tf.train.get_or_create_global_step()
    summary_writer = tfsum.create_file_writer(train_dir, flush_millis=10000)
    with summary_writer.as_default(), \
            tfsum.record_summaries_every_n_global_steps(10):
        model = build_cnn((height, width, 3))
        predictions = model(image, training=True)
        update_ops = model.updates
        train_var_list = model.trainable_variables
        predictions = tf.ensure_shape(predictions,
                                      (batch_size, height, width, 3))
        loss = tf.reduce_mean(tf.abs(predictions - image))
        tf.contrib.summary.scalar('loss', loss)
        optimizer = tf.train.AdamOptimizer(learning_rate)
        with tf.control_dependencies(update_ops):
            train_op = optimizer.minimize(loss, global_step=global_step,
                                          var_list=train_var_list)

    sess = tf.Session()
    with sess, summary_writer.as_default():
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        tf.contrib.summary.initialize(graph=tf.get_default_graph())
        for i in range(num_steps + 1):
            _, global_step_val, loss_val, _ = sess.run(
                [train_op, global_step, loss, tfsum.all_summary_ops()])
            print(f'iter: {global_step_val:06d}, loss: {loss_val:04.4f}')


if __name__ == '__main__':
    main_graph()
```

This results in the following error message:

```
Caused by op 'input_1', defined at:
  File "train_bug.py", line 104, in <module>
    main_graph()
  File "train_bug.py", line 72, in main_graph
    model = build_cnn((height, width, 3))
  File "train_bug.py", line 44, in build_cnn
    inputs = tf.keras.layers.Input(img_shape)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/keras/engine/input_layer.py", line 229, in Input
    input_tensor=tensor)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/keras/engine/input_layer.py", line 112, in __init__
    name=self.name)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1747, in placeholder
    return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 5206, in placeholder
    "Placeholder", dtype=dtype, shape=shape, name=name)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
    op_def=op_def)
  File "/bs/eldar/3dshape/work/app/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,480,640,3]
	 [[node input_1 (defined at train_bug.py:44) = Placeholder[dtype=DT_FLOAT, shape=[?,480,640,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
```

Removing `with tf.control_dependencies(update_ops):` makes it run, but then the BatchNorm statistics are not being updated. How can I use Keras in this setting?
tensorflowtensorflow
training parameter in Keras model passed as None in 1.13
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution: Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): v1.13.1-0-g6612da8951 1.13.1
- Python version: 3.5.2
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA 10, cuDNN 7.5
- GPU model and memory: GTX 1050 Ti, 4 GB

Describe the current behavior: as described in the third example in the documentation for the Keras Model class, a boolean `training` parameter can be used in the `call` method of subclassed models. However, the parameter is passed as None when it should be True.

Describe the expected behavior: the `training` parameter should be passed as True when the model is training.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()


class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, inputs, training=False):
        print('training:', training)
        return self.dense(inputs)


model = MyModel()
model.compile(optimizer=tf.train.AdagradOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

inp = np.ones((5, 3), dtype=np.float32)
out = np.ones((5, 4), dtype=np.float32)

model(inp)           # training should be False
model.fit(inp, out)  # training should be True
```

Other info / logs: this only happens in TF 1.13 and not in 1.12. I also tried 2.0-alpha and the bug is still present.
tensorflowtensorflow
TF crashes when tensor size is increased
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): not sure
- TensorFlow version (use command below): 1.12.0
- Python version: 3.6.8
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: release 9.0, V9.0.176 / 9.0
- GPU model and memory: GeForce RTX 2080 Ti, 11 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the current behavior: I am running custom code using TF. Part of the code includes a 2D convolution using tf.nn.conv2d. The code was running great, but when I increased the size of one of the tensors by a factor of 2, everything crashed and I got this error:

```
2019-... 12:29:56.729188: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-... 12:29:56.988944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.65
pciBusID: 0000:65:00.0
totalMemory: 11.00GiB freeMemory: 8.99GiB
2019-... 12:29:56.995629: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-... 12:29:57.467089: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-... 12:29:57.467875: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-... 12:29:57.468098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-... 12:29:57.468449: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8665 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:65:00.0, compute capability: 7.5)
2019-... 12:30:06.043926: E tensorflow/stream_executor/cuda/cuda_driver.cc:981] failed to synchronize the stop event: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
2019-... 12:30:06.044411: E tensorflow/stream_executor/cuda/cuda_timer.cc:55] Internal: error destroying CUDA event in context 000001F8E589DCD0: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
2019-... 12:30:06.044888: E tensorflow/stream_executor/cuda/cuda_timer.cc:60] Internal: error destroying CUDA event in context 000001F8E589DCD0: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
2019-... 12:30:06.045401: F tensorflow/stream_executor/cuda/cuda_dnn.cc:231] Check failed: status == CUDNN_STATUS_SUCCESS (7 vs. 0) Failed to set cuDNN stream.
```

The problem might not be connected to the convolution operation, but it fails after I increased the size of this tensor. Can you help me solve this issue? Thank you very much, Gilad.

Describe the expected behavior: -

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): -

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow
Trained model inference on GPU of NVIDIA TX2 gets poor, even erroneous, results
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): -
- OS Platform and Distribution: Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: NVIDIA TX2
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.11.0
- Python version: 3.5
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: CUDA 9.0, cuDNN 7.1.5
- GPU model and memory: 8 GB

Describe the current behavior: I trained the model on the server and deployed the same version of TensorFlow on the TX2, but when I run the trained model with the GPU on the TX2, I get much worse results than on the server. Running the model on the CPU of the TX2 does not cause this problem.

Describe the expected behavior:
1. The results of running on the GPU of the server should be the same as the results of the GPU run on the TX2; there should not be such a big gap.
2. The GPU running results on the TX2 should be the same as the CPU running results on the TX2.

Code to reproduce the issue: -

Other info / logs: -
tensorflowtensorflow
tf.python no longer available in 1.13.1
Bug
Sorry if this is not exactly a documentation bug, but it is for us a big change in 1.13.1 that is not addressed in the release notes. I realize that tf.python was never part of the public API, but it used to be available, and after upgrading from 1.12 it isn't any longer. The reason we need it is that it is heavily used internally by TensorFlow code, and occasionally we need to vendor some of this code. It would be really inconvenient to do this by maintaining our own fork of TensorFlow; instead, we just copy a Python file from the TensorFlow source into our repo and make the changes we need. Is there some way to keep doing this in 1.13.1 with minimal changes, i.e. without finding all the uses of tf.python and changing them to their public API equivalents? Also, BTW, why do the Python parts of the TensorFlow source use tf.python so much in the first place? Couldn't they just use the public API?
tensorflowtensorflow
TF 2.0: strided_slice issue with empty slice
Bug
Empty arrays cause a TypeError with strided_slice. A one-liner to reproduce would be:

```python
tf.constant([1, 2, 3])[tf.constant([], dtype=tf.int32)]
```

which returns:

```
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices
```

Getting the numpy equivalent:

```python
np.array([1, 2, 3])[np.array([], dtype=np.int32)]
```

works as expected.
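For comparison, here is a minimal pure-Python stand-in for the indexing semantics the report expects (`gather` is a hypothetical helper for illustration, not the TF op): each index selects one element, so an empty index list should simply yield an empty result rather than raise a TypeError.

```python
def gather(values, indices):
    # numpy-style fancy indexing: one output element per index,
    # so an empty index list yields an empty result, not an error
    return [values[i] for i in indices]
```

For example, `gather([1, 2, 3], [])` returns `[]`, mirroring the numpy behavior shown above.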
tensorflowtensorflow
TF 2.0 conversion script fails to parse IPython functions
Bug
The tf_upgrade_v2 tool fails to parse commands with common IPython functions, for example `!pip install tf-nightly` and `%matplotlib inline`:

```
ERROR: Failed to parse.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/tools/compatibility/ast_edits.py", line 510, in update_string_pasta
    t = pasta.parse(text)
  File "/usr/local/lib/python3.6/dist-packages/pasta/__init__.py", line 23, in parse
    t = ast_utils.parse(src)
  File "/usr/local/lib/python3.6/dist-packages/pasta/base/ast_utils.py", line 56, in parse
    tree = ast.parse(sanitize_source(src))
  File "/usr/lib/python3.6/ast.py", line 35, in parse
    return compile(source, filename, mode, PyCF_ONLY_AST)
  File "<unknown>", line 37
    !pip install tf-nightly
SyntaxError: invalid syntax
```
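A plausible workaround, sketched below, is to drop IPython-only lines before handing the source to `ast.parse`. This is an assumption about a preprocessing step, not part of the real tf_upgrade_v2 tool; `strip_magics` is a hypothetical helper:

```python
import ast


def strip_magics(src: str) -> str:
    """Drop IPython-only lines (%magics, !shell commands) so ast.parse succeeds."""
    kept = [line for line in src.splitlines()
            if not line.lstrip().startswith(("%", "!"))]
    return "\n".join(kept)


notebook_src = "!pip install tf-nightly\n%matplotlib inline\nx = 1\n"
ast.parse(strip_magics(notebook_src))  # no longer raises SyntaxError
```

A fuller solution would remember the stripped lines and splice them back after conversion, since they are valid in the notebook even though `ast` rejects them.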
tensorflowtensorflow
Rename tf.nn.batch_normalization
Bug
System information:
- TensorFlow version: 1.13
- Doc link: (see above)

Describe the documentation issue: tf.nn.batch_normalization can actually be used to implement layer normalization or group normalization as well. Therefore the name of the function is quite confusing, since batch normalization is more or less a special case of this function when you feed in the right mean and variance tensors. I think something like `normalization` could work as well and would be less confusing.
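To make the point concrete, here is a small pure-Python sketch (a toy, with the scale/offset terms omitted and illustrative names): batch norm computes the statistics per feature across the batch, layer norm per example across the features, but both then apply the identical `(x - mean) / sqrt(var + eps)` transform that tf.nn.batch_normalization implements.

```python
from statistics import mean, pvariance


def normalize(x, mu, var, eps=1e-3):
    # the shared core of batch/layer/group norm (scale and offset omitted)
    return [(v - mu) / (var + eps) ** 0.5 for v in x]


batch = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]  # 2 examples, 3 features

# "batch norm": statistics per feature, computed across the batch axis
cols = list(zip(*batch))
bn = list(zip(*(normalize(c, mean(c), pvariance(c)) for c in cols)))

# "layer norm": statistics per example, computed across the feature axis
ln = [normalize(row, mean(row), pvariance(row)) for row in batch]
```

Only the axes over which `mean` and `pvariance` are taken differ, which is exactly why the function name suggests a narrower scope than it has.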
tensorflowtensorflow
tf.image.crop_and_resize weird alignment behavior
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: N/A
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): b'v1.13.0-rc2-0-gc865ec5621' 1.13.0-rc2
- Python version: 3.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

This is a follow-up on #issuecomment-468830039 about tf.image.crop_and_resize. cc @martinwicke

Suppose I have an image that looks like this:

```
 0  1  2  3  4
 5  6  7  8  9
10 11 12 13 14
15 16 17 18 19
20 21 22 23 24
```

I want to crop the 2x2 patch that contains [[6, 7], [11, 12]] and upsample it to 4x4. I expect to get the following output:

```
 4.5   5    5.5   6
 7     7.5  8     8.5
 9.5  10   10.5  11
12    12.5 13    13.5
```

I think this is a reasonable expectation. The above output is also what I get if I do resize-and-crop instead of tf.image.crop_and_resize, after the fix yesterday that addressed the alignment issue in the resize ops:

```python
import tensorflow as tf
import numpy as np
from tensorflow.python.ops.image_ops_impl import resize_images_v2

arr = np.arange(25).astype('float32').reshape((5, 5))
input4d = tf.reshape(arr, [1, 5, 5, 1])
resized = resize_images_v2(input4d, [10, 10], method='bilinear')[0, :, :, 0]
print(resized[2:6, 2:6])  # crop
# prints the expected output; see a Colab proof (#scrollto=t1zlhi5uumv)
```

OK, so what is the correct box I should provide for crop_and_resize in order to get the above output? Here is what the doc says:

> boxes: A Tensor of type float32. A 2-D tensor of shape [num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so as the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values.

It turns out that the correct box I should use is [3/16, 3/16, 9/16, 9/16]. If you cannot tell why it is 3/16 and 9/16 from the above documentation, you and I are on the same page.

```python
import tensorflow as tf
import numpy as np
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()

# want to crop 2x2 out of a 5x5 image and resize to 4x4
image = np.arange(25).astype('float32').reshape((5, 5))
target = 4
print(tf.image.crop_and_resize(
    image[None, :, :, None],
    np.asarray([[3. / 16, 3. / 16, 9. / 16, 9. / 16]]),
    [0], [target, target])[0, :, :, 0])
# prints the expected output
```

The crop_and_resize function has weird alignment issues like those fixed in #6720. It's less of a problem than #6720, because at least we can provide some box coordinates to make it work as expected, and you can say it's just how this function is defined. There is actually a formula (L104-L135) that I use in my code to compute the coordinates in order to use this function. But I do hope this function can have a well-defined behavior and fit reasonable expectations. In my experiments, this ill-posed behavior actually hurt my model, and I believe it also hurts other models, like TF's object detection.
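Assuming the "reasonable" result is the one a half-pixel-aligned resize would produce, the box coordinates can be derived rather than guessed. The sketch below is a hypothetical helper mirroring the kind of formula referenced in the report; it maps a pixel-aligned crop to the `[c1, c2]` pair crop_and_resize expects, reproducing the 3/16 and 9/16 values:

```python
def cnr_box(start, size, out, image_size):
    """Normalized (c1, c2) coordinates for tf.image.crop_and_resize.

    start: first pixel row/col of the crop, size: crop extent in pixels,
    out: output size along this axis, image_size: source extent in pixels.
    """
    # sample positions a half-pixel-aligned resize of the patch would use
    first = start + 0.5 * size / out - 0.5
    last = first + (out - 1) * size / out
    # crop_and_resize samples at c * (image_size - 1), so normalize by that
    return first / (image_size - 1), last / (image_size - 1)


# the 2x2 patch starting at row/col 1 of a 5x5 image, resized to 4x4
cnr_box(1, 2, 4, 5)  # -> (0.1875, 0.5625), i.e. (3/16, 9/16)
```

The unintuitive `image_size - 1` divisor is exactly the corner-aligned convention quoted from the docs above, which is why the "obvious" box `[1/5, 1/5, 3/5, 3/5]` does not give the expected result.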
tensorflowtensorflow
tensorflow.python.client.device_lib.list_local_devices() crashes when none of the GPUs have enough free memory
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): see below
- OS Platform and Distribution: Linux Ubuntu 18.10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1 (b'v1.13.1-0-g6612da8951')
- Python version: 3.6.7
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.5.0.56
- GPU model and memory: 0: GTX 1060 Ti, 6 GB; 1: GTX 1050 Ti, 4 GB

Describe the current behavior: if I run list_local_devices() in order to get the number of available GPUs while almost all of the memory is already allocated on all the GPUs, the process crashes due to CUDA_ERROR_OUT_OF_MEMORY. There's no problem at all if at least one GPU has a substantial amount of free memory. Error message:

```
2019-03-02 12:26:38.852155: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-02 12:26:38.989613: W tensorflow/compiler/xla/service/platform_util.cc:240] unable to create StreamExecutor for CUDA:0: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 6370295808
2019-03-02 12:26:38.999115: W tensorflow/compiler/xla/service/platform_util.cc:240] unable to create StreamExecutor for CUDA:1: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 4236312576
2019-03-02 12:26:38.999399: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: no supported devices found for platform CUDA
```

Describe the expected behavior: if none of the GPUs are available, it could just return the empty list, not crash itself. I wrote my model-training code so that it fully makes use of all the available resources, but I have another piece of code for doing various other stuff, and I want it to run on CPU when GPUs are not available. I expect get_available_gpus() to return the empty list when none of them are available.

Code to reproduce the issue:

```python
from tensorflow.python.client import device_lib


def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']


get_available_gpus()
```

Other info / logs: the result of nvidia-smi:

```
Sat Mar  2 12:41:52 2019
NVIDIA-SMI 418.39       Driver Version: 418.39       CUDA Version: 10.1
GPU 0: GeForce GTX 106...  Persistence-M: On   Bus-Id: 00000000:01:00.0  Disp.A: On
       Fan 40%  Temp 55C  Perf P2  Pwr 41W / 120W   Memory 6033MiB / 6075MiB  GPU-Util 51%  Compute M. Default
GPU 1: GeForce GTX 105...  Persistence-M: On   Bus-Id: 00000000:03:00.0  Disp.A: Off
       Fan 30%  Temp 41C  Perf P0  Pwr N/A / 75W    Memory 3997MiB / 4040MiB  GPU-Util 59%  Compute M. Default

Processes:
GPU 0  PID  1960  G  /usr/lib/xorg/Xorg     27MiB
GPU 0  PID  2557  G  /usr/bin/gnome-shell   60MiB
GPU 0  PID 13385  C  python               5933MiB
GPU 1  PID 13385  C  python               3985MiB
```

Here is my current workaround:

```python
def get_available_gpus(workaround=True):
    if workaround:
        ret = os.spawnl(os.P_WAIT, python_path, python_path,
                        'metax_get_num_gpus.py')
        if ret != 0:  # detected aborted process
            return []
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']
```

For now I'm creating a child process to see if it crashes.
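The spawn-based workaround can also be written with the stdlib `subprocess` module. The sketch below is a hedged variant of that idea (the inline query string is an assumption about what the child should run, and `probe_gpus_in_subprocess` is a hypothetical name): any child crash, including a CUDA-level abort, is treated as "no GPUs available" instead of killing the caller.

```python
import subprocess
import sys


def probe_gpus_in_subprocess():
    """Count GPUs in a child process so a hard CUDA abort cannot
    take down the parent process."""
    code = (
        "from tensorflow.python.client import device_lib; "
        "print(sum(d.device_type == 'GPU' "
        "for d in device_lib.list_local_devices()))"
    )
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        # child crashed (e.g. CUDA_ERROR_OUT_OF_MEMORY) or TF is absent
        return 0
    return int(proc.stdout.strip())
```

Because the device query runs in its own interpreter, a fatal `F ... statusor.cc` check failure only kills the child; the caller just sees a nonzero return code.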
tensorflowtensorflow
Custom model's build method is not called automatically
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: macOS 10.13.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf.VERSION: 2.0.0-dev20190301, tf.GIT_VERSION: v1.12.0-9345-g4eeb2714f4
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: when creating a custom model with a build method (e.g. if one of the model's layers has a size that depends on the input shape, such as a reconstruction layer), the model cannot be trained unless I explicitly call build with a tf.TensorShape. Moreover, I cannot specify an input shape.

Describe the expected behavior: I expect the custom model to be built automatically the first time it is called, e.g. by the fit method.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

X_train = np.random.randn(1000, 8)
y_train = np.random.rand(1000, 1)


class ReconstructingRegressor(keras.models.Model):
    def __init__(self, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.hidden = keras.layers.Dense(30, activation='elu')
        self.out = keras.layers.Dense(output_dim)

    def build(self, batch_input_shape):
        n_inputs = batch_input_shape[-1]
        self.reconstruct = keras.layers.Dense(n_inputs)
        super().build(batch_input_shape)

    def call(self, inputs):
        Z = self.hidden(inputs)
        reconstruction = self.reconstruct(Z)
        reconstruction_loss = tf.reduce_mean(tf.square(reconstruction - inputs))
        self.add_loss(0.1 * reconstruction_loss)
        return self.out(Z)


model = ReconstructingRegressor(1)
# model.build(tf.TensorShape([None, 8]))  # works if I add this line
model.compile(loss='mse', optimizer='nadam')
history = model.fit(X_train, y_train, epochs=2)  # AttributeError, see below
```

Other info / logs: here is the stack trace:

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     27 # model.build(tf.TensorShape([None, 8]))  # works if I add this line
     28 model.compile(loss='mse', optimizer='nadam')
---> 29 history = model.fit(X_train, y_train, epochs=2)

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    804           steps=steps_per_epoch,
    805           validation_split=validation_split,
--> 806           shuffle=shuffle)
    807
    808     # Prepare validation data.

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2501         else:
   2502           cast_inputs = x_input
-> 2503         self._set_inputs(cast_inputs)
   2504     else:
   2505       y_input = y

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    454     self._setattr_tracking = False  # pylint: disable=protected-access
    455     try:
--> 456       result = method(self, *args, **kwargs)
    457     finally:
    458       self._setattr_tracking = previous_value  # pylint: disable=protected-access

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _set_inputs(self, inputs, outputs, training)
   2773       outputs = self.call(inputs, training=training)
   2774     else:
-> 2775       outputs = self.call(inputs)
   2776
   2777     # Reset to the previously saved value. If call had added metrics
        # or losses, then the contained symbolic tensors will have been set in call.

<ipython-input> in call(self, inputs)
     19     def call(self, inputs):
     20         Z = self.hidden(inputs)
---> 21         reconstruction = self.reconstruct(Z)
     22         reconstruction_loss = tf.reduce_mean(tf.square(reconstruction - inputs))
     23         self.add_loss(0.1 * reconstruction_loss)

AttributeError: 'ReconstructingRegressor' object has no attribute 'reconstruct'
```
tensorflowtensorflow
TF 2.0 API docs: tf.custom_gradient
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: doc_template)

System information:
- TensorFlow version: 1.13.1
- Doc link: (see above)

I've been having some difficulty using custom_gradient recently. Before implementing the op I actually have in mind, I'm trying to make a simple polynomial op with a custom gradient, in an effort to get my head around the requirements. The documentation is not at all clear on how sums over batch and output indices should be done, and until recently wasn't clear on whether we should return the full Jacobian of the operation or just the vector-Jacobian product (VJP) function. I did notice some explanation of the latter issue made it in in the 1.12 -> 1.13 update, which was helpful. I think this problem applies to both the derivatives w.r.t. the op's inputs and w.r.t. the parameters (called grad_xs and grad_vars in the documentation). However, when I incorporated the VJP tidbit in my sample code, I got correct results for the former but not the latter; I believe because the implicit sum over the batch is already present in the supplied grad_ys, and because the VJP also collapses out the indices across grad_ys. So the lack of clarity about summing accidentally only hit me when trying to write the grad_vars part. In my gist (linked above) I had to use a reduce_sum when computing dy/dp (which for a polynomial is the corresponding power of x); since dy/dx doesn't depend on x, this issue didn't appear when writing the grad_xs part. In addition to simply saying what you mean in this documentation (the sum of the gradients over examples is not "the gradients"; only the per-example gradients truly deserve that name, but I suppose that particular fight is a losing cause), it would be good if there were at least some examples that exercise the grad_vars part and the variables=None keyword argument.
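To make the batch-summation point concrete, here is a tiny pure-Python sketch of the VJP for a toy polynomial op y_i = p * x_i**3 (hypothetical names, no TensorFlow involved): the input gradient stays per-example, while the gradient of the shared parameter p needs an explicit sum over the batch, exactly the reduce_sum discussed above.

```python
def poly_forward(xs, p):
    # toy elementwise op with a shared scalar parameter: y_i = p * x_i**3
    return [p * x ** 3 for x in xs]


def poly_vjp(xs, p, dys):
    """VJP of poly_forward given upstream gradients dys (one per example)."""
    # grad w.r.t. the inputs stays per-example: dL/dx_i = dy_i * 3 * p * x_i**2
    grad_xs = [dy * 3 * p * x ** 2 for x, dy in zip(xs, dys)]
    # grad w.r.t. the shared parameter p must be summed over the batch,
    # because p contributes to every example's output
    grad_p = sum(dy * x ** 3 for x, dy in zip(xs, dys))
    return grad_xs, grad_p
```

The asymmetry is the source of the confusion: x_i appears in exactly one output, so its gradient needs no reduction, while p appears in all of them.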
tensorflowtensorflow
Even in eager mode, Keras passes custom models non-eager tensors in fit
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: macOS 10.14
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14.1-dev20190301
- Python version: 3.7

Describe the current behavior: when calling keras.Model.fit on a custom model, it seems the model is passed a graph-mode tensor instead of an eager tensor, even when in eager mode.

Describe the expected behavior: if in eager mode, the tensors passed to the call method of a custom model should be eager tensors; otherwise the advantages of eager mode, like the ability to use native control flow, are lost.

Code to reproduce the issue:

```python
import tensorflow as tf
tf.enable_v2_behavior()
from tensorflow import keras


class MyModel(keras.Model):
    def call(self, x):
        if x > 0:
            return x - 1
        else:
            return x + 1


m = MyModel()
m(tf.constant(0))  # this works, returns 1 as expected
m.compile(loss='mse', optimizer='sgd')
m.fit(tf.constant(0), tf.constant(1))  # this fails
```

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 m.fit(tf.constant(0), tf.constant(1))

~/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    804           steps=steps_per_epoch,
    805           validation_split=validation_split,
--> 806           shuffle=shuffle)
    807
    808     # Prepare validation data.

~/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2501         else:
   2502           cast_inputs = x_input
-> 2503         self._set_inputs(cast_inputs)
   2504     else:
   2505       y_input = y

~/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    454     self._setattr_tracking = False  # pylint: disable=protected-access
    455     try:
--> 456       result = method(self, *args, **kwargs)
    457     finally:
    458       self._setattr_tracking = previous_value  # pylint: disable=protected-access

~/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _set_inputs(self, inputs, outputs, training)
   2773       outputs = self.call(inputs, training=training)
   2774     else:
-> 2775       outputs = self.call(inputs)
   2776
   2777     # Reset to the previously saved value. If call had added metrics
        # or losses, then the contained symbolic tensors will have been set in call.

<ipython-input> in call(self, x)
      1 class MyModel(keras.Model):
      2     def call(self, x):
----> 3         if x > 0:
      4             return x - 1
      5         else:

~/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in __bool__(self)
    658       `TypeError`.
    659     """
--> 660     raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "
    661                     "Use `if t is not None:` instead of `if t:` to test if a "
    662                     "tensor is defined, and use TensorFlow ops such as "

TypeError: Using a tf.Tensor as a Python bool is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
```
tensorflowtensorflow
AttributeError: module 'pandas' has no attribute 'compat'
Bug
Hi, I was trying to use ScipyOptimizerInterface in TensorFlow, but it gives the following attribute error: AttributeError: module 'pandas' has no attribute 'compat'. Going through the discussion threads on the TensorFlow GitHub page, I have upgraded dask, downgraded pandas, and reinstalled the tensorflow and scipy packages. Unfortunately, it is still giving me the same AttributeError. Does anyone have a similar issue and can help me resolve it? I can use TensorFlow normally for other minimization algorithms (tested Adam), but for scipy's BFGS implementation I am getting this attribute error. I am running the code on a Linux CentOS system with Python 3.6 and TensorFlow 1.12.0; the pandas version is 0.24.0. I tried to downgrade pandas to 0.19.2, but it breaks other parts of my code which use the f2py library. Any thoughts on how to fix this issue?
tensorflowtensorflow
tf boosted trees doesn't consume the tf dataset as expected ("incorrect checksum for freed object - object was probably modified after being freed")
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS / CloudML
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1
- Python version: 2.7
- CPU training

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`.

**Describe the current behavior**
The model doesn't consume the dataset object properly. My input_fn is:

```python
def build_training_input_fn(file_pattern, train=False):
  """Creates an input function reading from transformed data.

  Args:
    file_pattern: base filename of examples.

  Returns:
    The input function for training or eval.
  """
  def _parse_record(record):
    transformed_metadata = metadata_io.read_metadata(
        os.path.join('gs://path-to-bucket/transform',
                     transform_fn_io.TRANSFORMED_METADATA_DIR))
    transformed_feature_spec = transformed_metadata.schema.as_feature_spec()
    transformed_features = tf.parse_single_example(record, transformed_feature_spec)
    cols_to_remove = [label_key]
    transformed_label = transformed_features.pop(label_key)
    transformed_features = {key: value for key, value in transformed_features.items()
                            if key not in cols_to_remove}
    return transformed_features, transformed_label

  def input_fn(train=train):
    """Input function for training and eval."""
    files = tf.data.Dataset.list_files(file_pattern=file_pattern)
    dataset = files.apply(tf.data.experimental.parallel_interleave(
        lambda filename: tf.data.TFRecordDataset(filename),
        cycle_length=32, block_length=1, sloppy=True))
    if train:
      dataset = dataset.repeat(None)
    dataset = dataset.apply(tf.data.experimental.map_and_batch(
        map_func=_parse_record, batch_size=64, drop_remainder=False,
        num_parallel_batches=16))
    return dataset

  return input_fn
```

I train the model using:

```python
classifier = tf.estimator.BoostedTreesClassifier(...)
input_fn_train = build_training_input_fn(file_pattern=train_directory, train=True)
classifier.train(input_fn=input_fn_train)
```

**Describe the expected behavior**
The model should consume the dataset provided by the train input_fn and build trees.

**Code to reproduce the issue**
Unfortunately I'm unable to share this.

**Other info / logs**
When training on CloudML the error is:

```
The replica master 0 exited with a non-zero status of 11 (SIGSEGV).
```

Locally I get:

```
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step to Optimizer.apply_gradients or Optimizer.minimize.
Python(71585,0x700009986000) malloc: *** error for object 0x7fe0d6f99e00: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
```

As a side note, if I simply yield the data using the snippet below, I get the `OutOfRangeError` error as expected:

```python
dataset = input_fn_eval()
next_val = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
  while True:
    try:
      x, y = sess.run(next_val)
      print(x)
      sleep(0)
      predictions = classifier.predict(input_fn=lambda: (x, y))
      print([i for i in predictions])
    except tf.errors.OutOfRangeError as e:
      print(e)
      print('expected behaviour')
      break
```
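The try/except pattern in the side note is the standard way to drain a one-shot iterator: pull elements until the end-of-sequence signal. A pure-Python analogue (using `StopIteration` in place of `tf.errors.OutOfRangeError`) looks like this:

```python
def drain(iterator):
    """Consume an iterator the way the snippet above consumes the dataset:
    keep pulling until the end-of-sequence exception, then stop cleanly."""
    seen = []
    while True:
        try:
            seen.append(next(iterator))   # analogue of sess.run(next_val)
        except StopIteration:             # analogue of tf.errors.OutOfRangeError
            break
    return seen

print(drain(iter(range(3))))  # [0, 1, 2]
```

That this kind of loop terminates correctly while `BoostedTreesClassifier.train` crashes suggests the problem is in how the estimator consumes the dataset, not in the dataset itself.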
tensorflow/tensorflow
contrib AdaMax implementation produces NaNs on GPU
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): tested on 1.12 and 1.13
- Python version: 3.6
- CUDA/cuDNN version: 9
- GPU model and memory: verified on 1080 Ti and Titan V

**Describe the current behavior**
On GPU, AdaMax from `tf.contrib.opt.AdaMaxOptimizer` appears to apply NaNs to variables. It seems fine on CPU, and, bizarrely, if any op is put as a control dependency on the `apply_gradients` call, then everything seems fine.

**Describe the expected behavior**
Not to produce NaNs.

**Code to reproduce the issue**
```python
import tensorflow as tf

a = tf.get_variable('a', shape=[10000])
b = tf.get_variable('b', shape=[10000])
loss = tf.minimum(tf.reduce_mean(a), tf.reduce_mean(b))
sess = tf.Session()

print('normally:')
opt_op = tf.contrib.opt.AdaMaxOptimizer().minimize(loss)
sess.run(tf.global_variables_initializer())
for _ in range(10):
    print(sess.run([opt_op, loss])[1])

print('with noop:')
opt = tf.contrib.opt.AdaMaxOptimizer()
grads_and_vars = opt.compute_gradients(loss)
noop = tf.no_op()
with tf.control_dependencies([noop]):
    opt_op_fixed = opt.apply_gradients(grads_and_vars)
sess.run(tf.global_variables_initializer())
for _ in range(10):
    print(sess.run([opt_op_fixed, loss])[1])
```

The above code produces, on my machine:

```
normally:
3.637023e-05
0.00037572667
nan
nan
nan
nan
nan
nan
nan
nan
with noop:
9.1626454e-05
0.00018933268
0.00028703889
0.00038474516
0.0004824514
0.0005801577
0.0006778639
0.00077557017
0.0008732763
0.00097098254
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
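For reference, here is the AdaMax update rule (Kingma & Ba, Algorithm 2) written out in plain Python. This is a sketch of the published algorithm, not the tf.contrib kernel, but it shows one place a NaN can plausibly arise: the division by the infinity-norm accumulator `u`, which needs an epsilon guard while `u` is still zero.

```python
def adamax_step(theta, g, m, u, t, lr=0.002, b1=0.9, b2=0.999, eps=1e-8):
    """One AdaMax update step for a scalar parameter.
    m: first-moment estimate; u: exponentially weighted infinity norm;
    t: 1-based timestep.  Without `eps`, u == 0 makes the division blow up."""
    m = b1 * m + (1 - b1) * g
    u = max(b2 * u, abs(g))
    theta = theta - (lr / (1 - b1 ** t)) * m / (u + eps)
    return theta, m, u

# minimize f(x) = x^2 for a few steps; the iterate should move toward 0, not NaN
x, m, u = 3.0, 0.0, 0.0
for t in range(1, 51):
    x, m, u = adamax_step(x, 2 * x, m, u, t)
print(x)
```

Whether the GPU kernel's NaNs come from a missing guard like this, or from the race that the control dependency happens to hide, is exactly what the repro above is probing.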
tensorflow/tensorflow
Estimator training hangs on multiple GPUs if the dataset doesn't have enough elements to feed both GPUs' last batch
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): distributed training, one node, multiple GPUs
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): TF 1.12
- Python version: 3.6.8
- CUDA/cuDNN version: 9.0
- GPU model and memory: 2x GTX 1080, 8 GB

**Describe the current behavior**
Basically, if the dataset doesn't have enough elements to feed both GPUs' last batch, the training hangs:
- If you don't have enough to feed the first GPU's last batch and don't want to drop the last batch, then the training hangs.
- If you don't have enough to feed the first GPU's last batch and want to drop the last batch, then you're fine.
- If you have enough to feed the first GPU's last batch but not the second GPU's last batch and don't want to drop the last batch, then the training hangs.
- If you have enough to feed the first GPU's last batch but not the second GPU's last batch and want to drop the last batch, then the training hangs.

**Describe the expected behavior**
- If you don't have enough to feed the first GPU's last batch and don't want to drop the last batch, then run the first GPU's partial batch and do nothing with the second GPU.
- If you don't have enough to feed the first GPU's last batch and want to drop the last batch, then drop the last batch for both GPUs.
- If you have enough to feed the first GPU's last batch but not the second GPU's last batch and don't want to drop the last batch, then run the first GPU's entire batch and run the second GPU's partial batch.
- If you have enough to feed the first GPU's last batch but not the second GPU's last batch and want to drop the last batch, then run the first GPU's entire batch and do nothing with the second GPU.

**Code to reproduce the issue**
```python
import tensorflow as tf

# Play with sample_count = 5, 6, 7 and drop_remainder = True/False
# to reproduce the issue.
sample_count = 5
drop_remainder = False

def run():
    run_config = tf.estimator.RunConfig(
        session_config=tf.ConfigProto(allow_soft_placement=True),
        train_distribute=tf.contrib.distribute.MirroredStrategy(num_gpus=2))
    estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
    estimator.train(train_input_fn)

# "times two" dataset
def train_input_fn():
    return tf.data.Dataset.range(sample_count) \
        .repeat(1) \
        .map(lambda x: (x, x * 2)) \
        .batch(2, drop_remainder)

# "times two" model
def model_fn(features, labels, mode):
    input_layer = tf.cast(tf.reshape(features, [-1, 1]), tf.float32)
    expected_output = tf.cast(tf.reshape(labels, [-1, 1]), tf.float32)
    logits = tf.layers.dense(input_layer, 1, None, False)
    loss = tf.losses.mean_squared_error(expected_output, logits)
    logging_hook = tf.train.LoggingTensorHook(
        tensors={'features_value': features}, every_n_iter=1)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.AdamOptimizer(0.001)
        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op,
                                          training_hooks=[logging_hook])

if __name__ == '__main__':
    tf.logging.set_verbosity(tf.logging.DEBUG)
    run()
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
INFO:tensorflow:Initializing RunConfig with distribution strategies.
INFO:tensorflow:Not using Distribute Coordinator.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp0ofz0qx1
INFO:tensorflow:Using config: {'_save_checkpoints_steps': None, '_device_fn': None, '_experimental_distribute': None, '_task_type': 'worker', '_tf_random_seed': None, '_keep_checkpoint_every_n_hours': 10000, '_distribute_coordinator_mode': None, '_service': None, '_save_summary_steps': 100, '_model_dir': '/tmp/tmp0ofz0qx1', '_master': '', '_keep_checkpoint_max': 5, '_train_distribute': <...>, '_protocol': None, '_task_id': 0, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true, '_is_chief': True, '_num_worker_replicas': 1, '_global_id_in_cluster': 0, '_evaluation_master': '', '_log_step_count_steps': 100, '_cluster_spec': <...>, '_eval_distribute': None, '_num_ps_replicas': 0}
2019-02-28 11:18:22.645478: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-28 11:18:22.818624: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-28 11:18:22.820057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.847 pciBusID: 0000:01:00.0 totalMemory: 7.90GiB freeMemory: 7.11GiB
2019-02-28 11:18:22.954140: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-28 11:18:22.955822: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties: name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.847 pciBusID: 0000:02:00.0 totalMemory: 7.93GiB freeMemory: 7.81GiB
2019-02-28 11:18:22.957142: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2019-02-28 11:18:23.349095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-28 11:18:23.349133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 1
2019-02-28 11:18:23.349139: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N Y
2019-02-28 11:18:23.349143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1:   Y N
2019-02-28 11:18:23.349775: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:0 with 6853 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-02-28 11:18:23.350097: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:1 with 7535 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
INFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0
INFO:tensorflow:Configured nccl all-reduce.
2019-02-28 11:18:23.372783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2019-02-28 11:18:23.373002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-28 11:18:23.373032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 1
2019-02-28 11:18:23.373038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N Y
2019-02-28 11:18:23.373043: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1:   Y N
2019-02-28 11:18:23.373272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6853 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-02-28 11:18:23.373346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 7535 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:batch_all_reduce invoked for batches size = 1 with algorithm = nccl, num_packs = 1, agg_small_grads_max_bytes = 0 and agg_small_grads_max_group = 10
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2019-02-28 11:18:23.707824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2019-02-28 11:18:23.707940: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-28 11:18:23.707963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 1
2019-02-28 11:18:23.707967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N Y
2019-02-28 11:18:23.707988: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1:   Y N
2019-02-28 11:18:23.708250: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6853 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-02-28 11:18:23.708475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 7535 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmp0ofz0qx1/model.ckpt.
INFO:tensorflow:loss = 47.902126, step = 0
INFO:tensorflow:features_value = [2 3]
```
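The hang cases line up with how the final per-step batch splits across replicas. A small arithmetic sketch (plain Python, assuming the per-step batch is split evenly across GPUs and the leading replicas are filled first) makes the starved-replica situation explicit:

```python
def last_step_batches(sample_count, batch_size, num_gpus, drop_remainder):
    """Sketch of how many elements each replica receives on the final step
    when a dataset of `sample_count` elements is batched with `batch_size`
    and each per-step batch is split across `num_gpus` replicas."""
    batches = []
    n = sample_count
    while n > 0:
        if n >= batch_size:
            batches.append(batch_size)
            n -= batch_size
        else:
            if not drop_remainder:
                batches.append(n)  # partial final batch
            n = 0
    last = batches[-1]
    share = batch_size // num_gpus
    # split the last batch across replicas; earlier replicas fill up first
    return [min(max(last - g * share, 0), share) for g in range(num_gpus)]

print(last_step_batches(5, 2, 2, drop_remainder=False))  # [1, 0]
```

With `sample_count = 5` and `drop_remainder = False` the last batch holds a single element, so GPU 0 gets 1 element and GPU 1 gets 0; the starved replica is what the synchronous training step ends up waiting on. With `drop_remainder = True` the partial batch is discarded and both replicas get a full share.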
tensorflow/tensorflow
tf-gpu 1.13.1: 35% smaller batch size before OOM vs tf-gpu 1.11.0
Bug
**System information**
- OS Platform and Distribution: Windows 7
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.11.0 / 1.13.1
- Python version: 3.6.5
- CUDA/cuDNN version: 9 / 7.1.4, 10 / 7.4.1
- GPU model and memory: GTX 1060 6 GB

**Describe the current behavior**
I have a standard AE network with a pixel-shuffler layer. On TF 1.11.0 (CUDA 9) the maximum batch size for my GTX 1060 6 GB is 132, but after upgrading to TF 1.13.1 (CUDA 10) TF cannot handle the same batch size: it produces an OOM error, and the maximum is now 90 for my card.

**Describe the expected behavior**
Expected no performance downgrade when upgrading TensorFlow.

**Code to reproduce the issue**
```python
import numpy as np
import tensorflow as tf

keras = tf.keras
KL = keras.layers
K = keras.backend

bgr_shape = (128, 128, 3)
batch_size = 132  # max on tf 1.11.0, cuda 9
# batch_size = 86  # max on tf 1.13.1, cuda 10

class PixelShuffler(keras.layers.Layer):
    def __init__(self, size=(2, 2), data_format=None, **kwargs):
        super(PixelShuffler, self).__init__(**kwargs)
        self.size = size

    def call(self, inputs):
        input_shape = K.int_shape(inputs)
        if len(input_shape) != 4:
            raise ValueError('Inputs should have rank ' + str(4) +
                             '; received input shape: ' + str(input_shape))
        batch_size, h, w, c = input_shape
        if batch_size is None:
            batch_size = -1
        rh, rw = self.size
        oh, ow = h * rh, w * rw
        oc = c // (rh * rw)
        out = K.reshape(inputs, (batch_size, h, w, rh, rw, oc))
        out = K.permute_dimensions(out, (0, 1, 3, 2, 4, 5))
        out = K.reshape(out, (batch_size, oh, ow, oc))
        return out

    def compute_output_shape(self, input_shape):
        if len(input_shape) != 4:
            raise ValueError('Inputs should have rank ' + str(4) +
                             '; received input shape: ' + str(input_shape))
        height = input_shape[1] * self.size[0] if input_shape[1] is not None else None
        width = input_shape[2] * self.size[1] if input_shape[2] is not None else None
        channels = input_shape[3] // self.size[0] // self.size[1]
        if channels * self.size[0] * self.size[1] != input_shape[3]:
            raise ValueError('channels of input and size are incompatible')
        return (input_shape[0], height, width, channels)

    def get_config(self):
        config = {'size': self.size}
        base_config = super(PixelShuffler, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

def upscale(dim):
    def func(x):
        return PixelShuffler()(
            KL.Conv2D(dim * 4, kernel_size=3, strides=1, padding='same')(x))
    return func

inp = KL.Input(bgr_shape)
x = inp
x = KL.Conv2D(128, 5, strides=2, padding='same')(x)
x = KL.Conv2D(256, 5, strides=2, padding='same')(x)
x = KL.Conv2D(512, 5, strides=2, padding='same')(x)
x = KL.Conv2D(1024, 5, strides=2, padding='same')(x)
x = KL.Dense(1024)(KL.Flatten()(x))
x = KL.Dense(8 * 8 * 1024)(x)
x = KL.Reshape((8, 8, 1024))(x)
x = upscale(512)(x)
x = upscale(256)(x)
x = upscale(128)(x)
x = upscale(64)(x)
x = KL.Conv2D(3, 5, strides=1, padding='same')(x)

model = keras.models.Model(inp, x)
model.compile(optimizer=keras.optimizers.Adam(lr=5e-5, beta_1=0.5, beta_2=0.999),
              loss='mae')

training_data = np.zeros((batch_size, 128, 128, 3))
loss = model.train_on_batch(training_data, training_data)
print('fine')
```

**Other info / logs**
```
... bfc_allocator.cc:641] 1 Chunks of size 12032 totalling 11.8KiB
2019-02-28 19:45:23.516100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 4 Chunks of size 19200 totalling 75.0KiB
2019-02-28 19:45:23.517100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 4 Chunks of size 38400 totalling 150.0KiB
2019-02-28 19:45:23.517100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 4 Chunks of size 262144 totalling 1.00MiB
2019-02-28 19:45:23.517100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 368640 totalling 360.0KiB
2019-02-28 19:45:23.517100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 4 Chunks of size 1179648 totalling 4.50MiB
2019-02-28 19:45:23.517100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 5 Chunks of size 3276800 totalling 15.63MiB
2019-02-28 19:45:23.517100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 4 Chunks of size 4718592 totalling 18.00MiB
2019-02-28 19:45:23.520100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 3 Chunks of size 13107200 totalling 37.50MiB
2019-02-28 19:45:23.520100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 17028352 totalling 16.24MiB
2019-02-28 19:45:23.521100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 17694720 totalling 16.88MiB
2019-02-28 19:45:23.521100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 17694976 totalling 16.88MiB
2019-02-28 19:45:23.521100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 3 Chunks of size 18874368 totalling 54.00MiB
2019-02-28 19:45:23.521100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 23592960 totalling 22.50MiB
2019-02-28 19:45:23.521100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 5 Chunks of size 52428800 totalling 250.00MiB
2019-02-28 19:45:23.529100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 5 Chunks of size 75497472 totalling 360.00MiB
2019-02-28 19:45:23.529100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 94371840 totalling 90.00MiB
2019-02-28 19:45:23.530100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 100362240 totalling 95.71MiB
2019-02-28 19:45:23.530100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 2 Chunks of size 188743680 totalling 360.00MiB
2019-02-28 19:45:23.530100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 194688000 totalling 185.67MiB
2019-02-28 19:45:23.530100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 12 Chunks of size 268435456 totalling 3.00GiB
2019-02-28 19:45:23.530100: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 552317184 totalling 526.73MiB
2019-02-28 19:45:23.530100: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 5.02GiB
2019-02-28 19:45:23.530100: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats: Limit: 5838622720 InUse: 5393793792 MaxInUse: 5708028928 NumAllocs: 434 MaxAllocSize: 1363673088
2019-02-28 19:45:23.531100: W tensorflow/core/common_runtime/bfc_allocator.cc:271] 1x
2019-02-28 19:45:23.531100: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at conv_grad_input_ops.cc:1054 : Resource exhausted: OOM when allocating tensor with shape[90,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "D:\DeepFaceLab\_internal\bin\DeepFaceLab\test.py", line 87, in <module>
    loss = model.train_on_batch(training_data, training_data)
  File "D:\DeepFaceLab\_internal\bin\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1188, in train_on_batch
    outputs = self.train_function(ins)  # pylint: disable=not-callable
  File "D:\DeepFaceLab\_internal\bin\lib\site-packages\tensorflow\python\keras\backend.py", line 3076, in __call__
    run_metadata=self.run_metadata)
  File "D:\DeepFaceLab\_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
    run_metadata_ptr)
  File "D:\DeepFaceLab\_internal\bin\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[90,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node training/Adam/gradients/conv2d_1/Conv2D_grad/Conv2DBackpropInput}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
```
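To put the log numbers in context, the failing allocation can be computed by hand. A small sketch (pure Python; assumes float32, i.e. 4 bytes per element):

```python
def tensor_mib(shape, bytes_per_elem=4):
    """Memory for one dense float32 tensor of the given shape, in MiB."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2 ** 20

# the tensor the allocator failed on, per the traceback:
print(tensor_mib((90, 128, 64, 64)))   # 180.0 (matches the 188743680-byte chunks)
# the same activation at the batch size that fit on TF 1.11:
print(tensor_mib((132, 128, 64, 64)))  # 264.0
```

Since activation memory scales linearly with batch size, a drop from 132 to 90 before OOM corresponds to roughly 30% more memory per sample being consumed somewhere in the TF 1.13 / CUDA 10 stack; which component is responsible is exactly what this report is asking about.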
tensorflow/tensorflow
No registered 'Placeholder' OpKernel for XLA_TPU_JIT devices compatible with node {{node tpu_139872799811176/input_2}}
Bug
Please have a look at my original post here. It is no longer a tf.keras problem; now I get:

```
RuntimeError: Compilation failed: Compilation failure: Detected unsupported operations when trying to compile graph cluster_15366487156777984482 on XLA_TPU_JIT: Placeholder (No registered 'Placeholder' OpKernel for XLA_TPU_JIT devices compatible with node {{node tpu_139872799811176/input_2}}. Registered: device='TPU'; device='CPU'; device='GPU'; device='XLA_CPU')
	 [[{{node tpu_139872799811176/input_2}}]]
```
tensorflow/tensorflow
TF 2.0 API Docs: tf.keras.activations.relu
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:doc_template

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
The description is minimal, not written with complete sentences, and lacks recommendations of when and when not to use the symbol. There are no usage examples. The parameters are described only briefly and with inconsistent capitalization. The return object description could be more useful.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue? Yes
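As an illustration of the kind of usage example the page could carry, here is a scalar pure-Python sketch of the documented semantics of `tf.keras.activations.relu` (identity above `threshold`, capped at `max_value` if given, leaky slope `alpha` below `threshold`). This is a sketch for the docs, not the TensorFlow implementation:

```python
def relu(x, alpha=0.0, max_value=None, threshold=0.0):
    """Scalar sketch of the documented relu behavior:
    - below `threshold`: a leaky slope of `alpha` (0.0 = ordinary ReLU)
    - above `threshold`: the identity, optionally capped at `max_value`."""
    if x < threshold:
        return alpha * (x - threshold)
    if max_value is not None and x > max_value:
        return max_value
    return x

print(relu(5.0))                   # 5.0  (plain ReLU: identity for x > 0)
print(relu(-2.0, alpha=0.1))       # -0.2 (leaky slope below the threshold)
print(relu(10.0, max_value=6.0))   # 6.0  (ReLU6-style cap)
```

Examples like these, plus one sentence on when a leaky `alpha` or a `max_value` cap is useful, would address most of the gaps listed above.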
tensorflow/tensorflow
App loader failure at tensorflow.org makes website unusable
Bug
**System information**
- TensorFlow version: N/A
- Doc link:

My locale is set to uk-UA. When I visit tensorflow.org, the app loader tries to load a localized script (notice the added `uk` suffix); the file does not exist. Since `devsite_app.js` provides the main functionality, none of the links, buttons, and selectors that use JavaScript are operational. This includes the top nav bar, the side nav bar, and the search. It also makes it very hard to navigate to the APIs and to browse the API pages. It is not possible to change the language, because the language drop-down also uses JavaScript. The problem persists across different browsers (Firefox, Safari, Chrome, Vivaldi), and I assume it also affects other locales for which a localized `devsite_app.js` is not available, for instance Belarusian (be), Croatian (hr), etc. Ideally, the app loader should check whether the localized version exists; if not, it should load the unlocalized version instead. I'm sorry if this is not the right place for the bug; however, it is directly related to the documentation and I could not find any other place to submit it.
tensorflow/tensorflow
TensorFlow device (gpu:0) is not registered
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code: Yes
- OS Platform and Distribution: Red Hat Enterprise Linux Server release 7.4
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.10.0
- Python version: 3.6.6
- CUDA/cuDNN version: V9.2.88
- GPU model and memory: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz

**Describe the current behavior**
Currently I am using distributed TensorFlow to connect two machines: one with only a CPU (ps) and one with a CPU and GPU (worker). The worker machine runs only the cluster and server commands followed by a join command and then remains idle. The training algorithm is run on the ps machine: after defining the cluster and the server, the training is run, but I get `E tensorflow/core/grappler/clusters/utils.cc:128] Not found: TensorFlow device (gpu:0) is not registered` between each epoch of training, and it is not run on the GPU (the delay is very big).

**Describe the expected behavior**
The training should be able to run on the GPU of the worker machine.

**Code to reproduce the issue**
Code run on the worker machine:

```python
import tensorflow as tf

cluster = tf.train.ClusterSpec({'worker': ['172.24.145.121:2222'],
                                'ps': ['172.24.145.14:2222']})
server = tf.train.Server(cluster, job_name='worker', task_index=0)
server.join()
```

And on the ps machine:

```python
import tensorflow as tf
import time
from tensorflow.examples.tutorials.mnist import input_data

cluster = tf.train.ClusterSpec({'worker': ['172.24.145.121:2222'],
                                'ps': ['172.24.145.14:2222']})
server = tf.train.Server(cluster, job_name='ps', task_index=0)

tf.logging.set_verbosity(tf.logging.ERROR)
mnist = input_data.read_data_sets('/tmp/data/', one_hot=True)

n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500
n_classes = 10
batch_size = 100

x = tf.placeholder('float', [None, 784], name='x')
y = tf.placeholder('float', name='y')

def neural_network_model(data):
    with tf.device('/job:worker/task:0/gpu:0'):
        hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_nodes_hl1]), name='layer1_w'),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl1]), name='layer1_b')}
        hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2]), name='layer2_w'),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl2]), name='layer2_b')}
        hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3]), name='layer3_w'),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl3]), name='layer3_b')}
        output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes]), name='output_w'),
                        'biases': tf.Variable(tf.random_normal([n_classes]), name='output_b')}
        l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'], name='l1')
        l1 = tf.nn.relu(l1)
        l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'], name='l2')
        l2 = tf.nn.relu(l2)
        l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'], name='l3')
        l3 = tf.nn.relu(l3)
        output = tf.add(tf.matmul(l3, output_layer['weights']), output_layer['biases'], name='output')
        return output

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 10
    with tf.Session(server.target) as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            start_time = time.time()
            for _ in range(int(mnist.train.num_examples / batch_size)):
                # x_train size 28x28
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c
            duration = time.time() - start_time
            print('Epoch', epoch + 1, 'completed out of', hm_epochs, 'loss:', epoch_loss)
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}),
              'Duration:', duration)

train_neural_network(x)
```

**Other info / logs**
```
E tensorflow/core/grappler/clusters/utils.cc:128] Not found: TensorFlow device (gpu:0) is not registered
```
tensorflow/tensorflow
MirroredStrategy returns AssertionError when run with custom Estimator
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary source tensorflow version use command below 1 12 python version 3 6 bazel version if compile from source 0 17 2 gcc compiler version if compile from source 7 3 cuda cudnn version cuda 10 0 cudnn 7 4 gpu model and memory nvidia geforce gtx 1060 ti describe the current behavior estimator model fail if pass a tf contrib distribute mirroredstrategy as a parameter to the runconfig object describe the expect behavior I be try to parallelize an estimator model build from keras model via tf keras estimator model to estimator which work fine unless I pass the train distribute parameter of runconfig a mirroredstrategy when create the model see below the work runconfig be comment out the non work one not so my goal be to run the model on the google cloud platform on a couple of gpu at once but first I need to get the code run locally I want to use the mirror strategy to parallelize the training by split the batch across the gpu the input be a tfrecord file with greyscale image of 40x40x1 and 3 class label I be intend to use a cnn and rnn on this datum but distribute doesn t work with this simple example of a mlp any help code to reproduce the issue import tensorflow as tf from tensorflow import kera as k import argparse import random def parse function proto define constant for image imsize 40 num channel 1 define your tfrecord feature key key to feature x tf fixedlenfeature imsize imsize num channel tf float32 image height image width num channel y tf fixedlenfeature tf int64 load one example 
parse feature tf parse single example proto key to feature extract image and label image parse feature x label tf cast parse feature y tf int32 label tf add label tf constant 1 tf int32 add 1 to transform interval 1 1 to categorical interval 0 2 label tf one hot label depth 3 one hot encoding return image label def create dataset filepath shuffle buffer batch size n epoch random seed num parall call dataset tf datum tfrecorddataset filepath map the parser on every filepath in the array you can set the number of parallel loader here dataset dataset map parse function num parallel call num parall call set the number of datapoint you want to load and shuffle dataset dataset shuffle shuffle buffer random seed repeat n epoch set the batchsize dataset dataset batch batch size prefetch dataset prefetch batch size return dataset def simplemodel in shape 40 40 1 n out 3 dropout rate 0 3 model k model sequential fully connect layer model add k layer flatten input shape in shape model add k layer dense 1024 activation tanh kernel regularizer k regularizer l2 0 01 model add k layer dropout dropout rate model add k layer dense 128 activation tanh kernel regularizer k regularizer l2 0 01 model add k layer dropout dropout rate in the end add another dense layer and an output layer model add k layer dense 32 activation tanh kernel regularizer k regularizer l2 0 01 model add k layer dropout dropout rate model add k layer dense n out activation softmax return model if name main get parameter parser argparse argumentparser description keras gc example nn test parser add argument job dir type str help gcs location to write checkpoint and export model parser add argument train file type str help gcs location of train datum tfrecord file location parser add argument vali file type str help gcs location validation data tfrecord file location parser add argument n cpu type int help number of process use for read and parse the datum for model input parser add argument n gpu type int help 
number of gpu use for train the model args parser parse args argument n gpu args n gpu num parallel process args n cpu train file args train file vali file args vali file parameter dropout 0 5 num class 3 1 0 1 image size 40 num channel 1 learning rate 0 0001 epoch 3 shuffle buffer 50000 number of sample from which it will sample batch size 100 input size image size image size num channel checkpoint step 2000 rseed int random random 2 16 number of sample num set 6310 num set vali 550 set length 1000 num sample num set set length step per epoch num sample batch size num sample vali num set vali set length step per epoch vali num sample vali batch size assemble the model train model simplemodel in shape input size n out num class dropout rate dropout optim tf train adamoptimizer learning rate learning rate train model compile optimizer optim loss categorical crossentropy metric accuracy tf log set verbosity tf log info strategy tf contrib distribute mirroredstrategy num gpu n gpu runconfig tf estimator runconfig model dir args job dir save checkpoint step checkpoint step train distribute strategy runconfig tf estimator runconfig model dir args job dir save checkpoint step checkpoint step transform model to estimator e train model tf keras estimator model to estimator keras model train model config runconfig train spec tf estimator trainspec input fn lambda create dataset train file shuffle buffer batch size epoch rseed num parallel process max step epoch step per epoch eval spec tf estimator evalspec input fn lambda create dataset vali file shuffle buffer batch size epoch rseed num parallel process step step per epoch vali throttle sec 10 tf estimator train and evaluate e train model train spec eval spec other info log info tensorflow initialize runconfig with distribution strategy info tensorflow not use distribute coordinator info tensorflow use the keras model provide 2019 02 28 13 39 21 467595 I tensorflow stream executor cuda cuda gpu executor cc 964 successful 
NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-28 13:39:21.467913: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7715
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.37GiB
2019-02-28 13:39:21.467921: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-02-28 13:39:21.621140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-28 13:39:21.621157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-02-28 13:39:21.621160: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-02-28 13:39:21.621270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5139 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:Using config: {'_model_dir': 'output/try1', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 2000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } }, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <...>, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <...>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 2000 or save_checkpoints_secs None.
2019-02-28 13:39:21.625686: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-02-28 13:39:21.625718: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-28 13:39:21.625721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-02-28 13:39:21.625724: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-02-28 13:39:21.625814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:0 with 5139 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0
INFO:tensorflow:Configured nccl all-reduce.
2019-02-28 13:39:21.651311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-02-28 13:39:21.651333: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-28 13:39:21.651337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-02-28 13:39:21.651339: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-02-28 13:39:21.651435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5139 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Error reported to Coordinator:
Traceback (most recent call last):
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 297, in stop_on_exception
    yield
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/mirrored_strategy.py", line 795, in run
    self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1195, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/keras.py", line 278, in model_fn
    labels)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/keras.py", line 201, in _clone_and_build_model
    optimizer_iterations=global_step)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/models.py", line 476, in clone_and_build_model
    target_tensors=target_tensors)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/training/checkpointable/base.py", line 474, in _method_wrapper
    method(self, *args, **kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 634, in compile
    for loss_tensor in self.losses:
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 667, in losses
    losses += self._unfiltered_losses
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 571, in _unfiltered_losses
    losses += layer.losses
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 377, in losses
    loss_tensor = regularizer()
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 434, in _tag_unconditional
    loss = loss()
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 629, in _loss_for_variable
    with ops.colocate_with(v):
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4094, in colocate_with
    with self._colocate_with_for_gradient(op, None, ignore_existing):
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4146, in _colocate_with_for_gradient
    op = internal_convert_to_tensor_or_indexed_slices(op, as_ref=True)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1307, in internal_convert_to_tensor_or_indexed_slices
    value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/values.py", line 439, in _tensor_conversion_mirrored
    assert not as_ref
AssertionError

Traceback (most recent call last):
  File "/home/aibox/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<...>", line 1, in <module>
    runfile('/home/aibox/Documents/ast_research_dev/r5/r5_nn_emotional_valence_liquid_data/gcloud/trainer/mirroredt.py', args=['--job-dir', 'output/try1', '--train-file', '/mnt/data/r5/data_ai/r5_liquid_tfrecord/r5_6310x1000x40x40_train_data.tfrecord', '--vali-file', '/mnt/data/r5/data_ai/r5_liquid_tfrecord/r5_550x1000x40x40_vali_data.tfrecord', '--n-cpu', '6', '--n-gpu', '1'], wdir='/home/aibox/Documents/ast_research_dev/r5/r5_nn_emotional_valence_liquid_data/gcloud/trainer')
  File "/opt/pycharm-community-2018.3.1/helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/opt/pycharm-community-2018.3.1/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/home/aibox/Documents/ast_research_dev/r5/r5_nn_emotional_valence_liquid_data/gcloud/trainer/mirroredt.py", line 138, in <module>
    tf.estimator.train_and_evaluate(e_train_model, train_spec, eval_spec)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 471, in train_and_evaluate
    return executor.run()
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 610, in run
    return self.run_local()
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 711, in run_local
    saving_listeners=saving_listeners)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 354, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1205, in _train_model
    return self._train_model_distributed(input_fn, hooks, saving_listeners)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1316, in _train_model_distributed
    self.config)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/training/distribute.py", line 721, in call_for_each_tower
    return self._call_for_each_tower(fn, *args, **kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/mirrored_strategy.py", line 556, in _call_for_each_tower
    return _call_for_each_tower(self, fn, *args, **kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/mirrored_strategy.py", line 183, in _call_for_each_tower
    coord.join(threads)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
    six.reraise(*self._exc_info_to_raise)
  File "/home/aibox/.local/lib/python3.6/site-packages/six.py", line 693, in reraise
    raise value
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 297, in stop_on_exception
    yield
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/mirrored_strategy.py", line 795, in run
    self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1195, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/keras.py", line 278, in model_fn
    labels)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/estimator/keras.py", line 201, in _clone_and_build_model
    optimizer_iterations=global_step)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/models.py", line 476, in clone_and_build_model
    target_tensors=target_tensors)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/training/checkpointable/base.py", line 474, in _method_wrapper
    method(self, *args, **kwargs)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 634, in compile
    for loss_tensor in self.losses:
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 667, in losses
    losses += self._unfiltered_losses
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 571, in _unfiltered_losses
    losses += layer.losses
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 377, in losses
    loss_tensor = regularizer()
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 434, in _tag_unconditional
    loss = loss()
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 629, in _loss_for_variable
    with ops.colocate_with(v):
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4094, in colocate_with
    with self._colocate_with_for_gradient(op, None, ignore_existing):
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4146, in _colocate_with_for_gradient
    op = internal_convert_to_tensor_or_indexed_slices(op, as_ref=True)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1307, in internal_convert_to_tensor_or_indexed_slices
    value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/aibox/.local/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/values.py", line 439, in _tensor_conversion_mirrored
    assert not as_ref
AssertionError
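The AssertionError comes from `_tensor_conversion_mirrored` rejecting the `as_ref=True` conversion that `colocate_with` requests while Keras collects regularizer losses. A simplified stand-alone model of that interaction (plain Python, not TensorFlow code; all names here merely mirror the frames in the traceback):

```python
class MirroredVariable:
    """Stand-in for the per-device variable wrapper in tf.contrib.distribute."""

def _tensor_conversion_mirrored(value, as_ref=False):
    # Mirrors values.py:439 in the traceback: a mirrored variable cannot be
    # converted to a ref tensor, only to a regular (value) tensor.
    assert not as_ref
    return ("tensor", value)

def colocate_with(op):
    # Mirrors ops.py:4146: colocate_with converts its argument with
    # as_ref=True, which is what trips the assertion above.
    return _tensor_conversion_mirrored(op, as_ref=True)

# An ordinary (as_ref=False) conversion succeeds:
ok = _tensor_conversion_mirrored(MirroredVariable())

# The colocate_with path, taken while gathering regularizer losses
# (base_layer.py:629), hits the assertion:
failed = False
try:
    colocate_with(MirroredVariable())
except AssertionError:
    failed = True
```

The sketch suggests why the crash only appears when the Keras model carries regularizer losses and is trained under MirroredStrategy: only that combination routes a mirrored variable through the ref-conversion path.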
tensorflow/tensorflow
converting a saved model to tflite using tensorflow built from source gives an error
Bug
system information: OS Platform and Distribution: Linux Ubuntu 16.04; TensorFlow installed from: source; TensorFlow version: 1.12; Python version: 3.6; Bazel version: 0.19.1; GCC/compiler version: 5.4.0

Describe the current behavior: converting a saved model to TFLite using tflite_convert gives me the following error, but when I convert the same model using the TensorFlow binary I do not find the issue.

Other info / logs:

tflite_convert --output_file=model_lite.tflite --saved_model_dir=model/saved_model --input_shapes=1 --input_arrays=x_pre --output_arrays=y_pre

Traceback (most recent call last):
  File "/home/vinay/venv/tensorflow_src_1.12/bin/tflite_convert", line 6, in <module>
    from tensorflow.contrib.lite.python.tflite_convert import main
  File "/home/vinay/venv/tensorflow_src_1.12/lib/python3.6/site-packages/tensorflow/contrib/__init__.py", line 48, in <module>
    from tensorflow.contrib import distribute
  File "/home/vinay/venv/tensorflow_src_1.12/lib/python3.6/site-packages/tensorflow/contrib/distribute/__init__.py", line 34, in <module>
    from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
  File "/home/vinay/venv/tensorflow_src_1.12/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in <module>
    from tensorflow.contrib.tpu.python.ops import tpu_ops
  File "/home/vinay/venv/tensorflow_src_1.12/lib/python3.6/site-packages/tensorflow/contrib/tpu/__init__.py", line 69, in <module>
    from tensorflow.contrib.tpu.python.ops.tpu_ops import *
  File "/home/vinay/venv/tensorflow_src_1.12/lib/python3.6/site-packages/tensorflow/contrib/tpu/python/ops/tpu_ops.py", line 39, in <module>
    resource_loader.get_path_to_datafile("_tpu_ops.so"))
  File "/home/vinay/venv/tensorflow_src_1.12/lib/python3.6/site-packages/tensorflow/contrib/util/loader.py", line 56, in load_op_library
    ret = load_library.load_op_library(path)
  File "/home/vinay/venv/tensorflow_src_1.12/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 60, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Unrecognized type
An op that loads optimization
parameters into HBM for embedding must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

parameters: A tensor containing the initial embedding table parameters to use in embedding lookup using the Adagrad optimization algorithm.
accumulators: A tensor containing the initial embedding table accumulators to use in embedding lookup using the Adagrad optimization algorithm.
table_name: Name of this table; must match a name in the TPUEmbeddingConfiguration proto (overrides table_id).
num_shards: Number of shards into which the embedding tables are divided.
shard_id: Identifier of shard for this operation.
table_id: Index of this table in the EmbeddingLayerConfiguration proto (DEPRECATED).

in attr "<description repeated>"; in OpDef:

name: "LoadTPUEmbeddingAdagradParameters"
input_arg {
  name: "parameters"
  description: "<description repeated>"
  type: DT_FLOAT
  type_attr: "<description repeated>"
  number_attr: "<description repeated>"
  type_list_attr: "<description repeated>"
}
input_arg {
  name: "accumulators"
  description: "<description repeated>"
  type: DT_FLOAT
  type_attr: "<description repeated>"
  number_attr: "<description repeated>"
  type_list_attr: "<description repeated>"
}
attr {
  name: "<description repeated>"
  type: "<description repeated>"
  default_value { i: 1 }
  description: "<description repeated>"
  has_minimum: true
  minimum: 1
}
attr {
  name: "<description repeated>"
  type: "<description repeated>"
  default_value { s: "" }
  description: "<description repeated>"
}
attr {
  name: "<description repeated>"
  type: "<description repeated>"
  description: "<description repeated>"
}
attr {
  name: "<description repeated>"
  type: "<description repeated>"
  description: "<description repeated>"
}
summary: "<description repeated>"
description: "<description repeated>"

(each "<description repeated>" stands for the full "An op that loads optimization parameters into HBM for embedding..." block quoted above, which the original output repeats verbatim in every field; the log is truncated at this point)
a checkpoint before a training loop be nexecute n nparameter a tensor contain the initial embed table parameter to use in embed nlookup use the adagrad optimization algorithm naccumulator a tensor contain the initial embed table accumulator to use in embed nlookup use the adagrad optimization algorithm ntable name name of this table must match a name in the n tpuembeddingconfiguration proto override table i d nnum shard number of shard into which the embed table be divide nshard i d identifier of shard for this operation ntable i d index of this table in the embeddinglayerconfiguration proto n deprecate n be stateful true thank in advance
tensorflow/tensorflow
tf.graph_util.convert_variables_to_constants converts a different pb; parameters differ from the original ckpt
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`

**Describe the current behavior**
The result of using the tf.graph_util.convert_variables_to_constants API is different from freeze_graph.freeze_graph. When I use freeze_graph.freeze_graph, the result is the same as the ckpt, but it differs from the output when I use the tf.graph_util.convert_variables_to_constants API. Why?

```python
# Using the tf.graph_util.convert_variables_to_constants API
with tf.get_default_graph().as_default():
    input_image = tf.placeholder(tf.float32, shape=[None, None, None, 3], name='input_image')
    input_image = readdata.mean_image_subtraction(input_image)
    with slim.arg_scope(resnet_v1.resnet_arg_scope(weight_decay=1e-5)):
        output = resnet_v1.resnet_v1_50(input_image, is_training=False,
                                        scope='resnet_v1_50', num_classes=FLAGS.num_classes)
    saver = tf.train.Saver()
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        # logdir argument omitted in the original report
        tf.train.write_graph(sess.graph.as_graph_def(), '.', 'resnet50.pbtxt', as_text=False)
        saver.restore(sess, 'model.ckpt-5723')
        pb_file_path = 'model_5723.pb'
        constant_graph = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ['resnet_v1_50/predictions/Softmax'])
        with tf.gfile.FastGFile(pb_file_path, mode='wb') as f:
            f.write(constant_graph.SerializeToString())
```

```python
# Using the freeze_graph API
with tf.get_default_graph().as_default():
    input_image = tf.placeholder(tf.float32, shape=[None, None, None, 3], name='input_image')
    input_image = readdata.mean_image_subtraction(input_image)
    with slim.arg_scope(resnet_v1.resnet_arg_scope(weight_decay=1e-5)):
        output = resnet_v1.resnet_v1_50(input_image, is_training=False,
                                        scope='resnet_v1_50', num_classes=FLAGS.num_classes)
    saver = tf.train.Saver()
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        tf.train.write_graph(sess.graph.as_graph_def(), '.', 'resnet50.pbtxt', as_text=False)
        # input_saver argument reconstructed as '' to match the freeze_graph signature
        freeze_graph.freeze_graph('resnet50.pbtxt', '', True, 'model.ckpt-5723',
                                  'resnet_v1_50/predictions/Softmax',
                                  'save/restore_all', 'save/Const:0',
                                  'model_freeze.pb', True, '')
```

**Describe the expected behavior**
Using the tf.graph_util.convert_variables_to_constants API, the output parameters should be the same as the ckpt.

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
TensorFlow Lite model requests a bigger buffer than necessary
Bug
Hi. I created a custom model using Keras in TensorFlow (version tf-nightly 1.13.1) and used the official tool, the method tf.lite.TFLiteConverter.from_keras_model_file, to build the TensorFlow Lite model. After creating the model, I reviewed the input shape and nothing seemed wrong. The input and output shapes in the TensorFlow Lite model are:

```
{'name': 'input_1', 'index': 59, 'shape': array([1, 240, 240, 3], dtype=int32), 'quantization': (0.0, 0)}
{'name': 'dense/Softmax', 'index': 57, 'shape': array([1, 6], dtype=int32), 'quantization': (0.0, 0)}
```

You can note that the input shape is [1, 240, 240, 3], so I expected the buffer to have a size of 172800 units. However, when I try to run the model on an Android device I get the following error:

```
E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.megacode, PID: 15067
    java.lang.RuntimeException: Unable to create application com.megacode.base.ApplicationBase:
    java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 691200 bytes and a ByteBuffer with 172800 bytes.
        at android.app.ActivityThread.handleBindApplication(ActivityThread.java:5771)
        at android.app.ActivityThread.-wrap2(ActivityThread.java)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1648)
```

I don't understand why the model requests an input buffer of 691200 units. If someone has a suggestion, I would appreciate it.
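The size mismatch in the error above is consistent with a float32 input tensor: the interpreter counts bytes, not elements. A minimal sketch of the arithmetic (plain Python; the 4-bytes-per-element figure assumes the model input is float32, which is an assumption, not something stated in the report):

```python
from functools import reduce

shape = (1, 240, 240, 3)                      # TFLite input tensor shape
elements = reduce(lambda a, b: a * b, shape)  # number of scalar values
FLOAT32_BYTES = 4                             # assumption: float (non-quantized) model
bytes_needed = elements * FLOAT32_BYTES       # bytes the interpreter expects

print(elements)      # 172800
print(bytes_needed)  # 691200, matching the error message
```

So a ByteBuffer allocated on the Java side with a capacity of 4 * 240 * 240 * 3 bytes would likely match what the interpreter expects for a float model.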
tensorflow/tensorflow
TF 2.0 API docs: tf.keras
Bug
Please make sure that this is a documentation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:doc_template

**System information**
- TensorFlow version: 2.0
- Doc link:

**Describe the documentation issue**
- The link defined in python/keras/api/_v2/keras/__init__.py is pointing to a broken link. The description currently says that detailed documentation and user guides are available at keras.io; it would be better to point users to a working location, otherwise the experience feels broken.
- It would be great to list some key differences between tf.keras and Keras as an independent project (keras.io), or at least point out that there is no 1:1 mapping and what each one has that the other one doesn't. This has been discussed in blog posts and forums, but the official documentation should at least have a high-level overview with a few sentences.

**We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?**
Yes.
tensorflow/tensorflow
Dataset not reshuffled between epochs in eager mode
Bug
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.14
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.0-dev20190227
- Python version: 3.7

**Describe the current behavior**
Iterating a shuffled dataset returns elements in the same order each time it is iterated over, and the order is unaffected by the random seed.

**Describe the expected behavior**
The order should be deterministic given the random seed, and there should be a way to reshuffle between epochs.

**Code to reproduce the issue**

```python
import tensorflow as tf
tf.enable_eager_execution()

d = tf.data.Dataset.range(5).shuffle(5)
tf.set_random_seed(0)

elems = [item.numpy() for item in d]
print('first epoch', elems)
elems = [item.numpy() for item in d]
print('second epoch', elems)
```

Running this twice produces:

```
# First run
first epoch [4, 2, 1, 3, 0]
second epoch [4, 2, 1, 3, 0]

# Second run
first epoch [3, 1, 0, 2, 4]
second epoch [3, 1, 0, 2, 4]
```

So even though the same random seed is used in both runs, the order of elements differs between them; additionally, the order is the same between epochs, which you wouldn't necessarily want in your training loop.
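The behavior asked for above, an order that is deterministic given the seed yet different on every epoch, can be sketched independently of tf.data by deriving a per-epoch seed from a base seed. This is plain illustrative Python; `epoch_order` is a hypothetical helper, not a TensorFlow API:

```python
import random

def epoch_order(items, base_seed, epoch):
    # Derive a deterministic per-epoch seed: reruns with the same base_seed
    # reproduce exactly, while consecutive epochs see different orders.
    rng = random.Random(hash((base_seed, epoch)))
    order = list(items)
    rng.shuffle(order)
    return order
```

In tf.data itself, `Dataset.shuffle` exposes a `reshuffle_each_iteration` argument that is meant to control exactly this per-epoch reshuffling.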
tensorflow/tensorflow
output_dir inconsistency between model_to_estimator and export_savedmodel
Bug
**Problem explanation**
Since I want to train my neural network on the Google ML cloud, I am trying to convert a Keras model to a TensorFlow tf.estimator. A tutorial explaining how to do this can be found in "Training Keras with GPUs & serving predictions with Cloud ML Engine" (Google Cloud AI Huddle); the Jupyter notebook that accompanies this tutorial can be found on kaggle.com. Unfortunately, while following this tutorial, I ran into some problems when trying to convert a Keras model into a tf.estimator using the model_to_estimator function while supplying the output_dir argument, and subsequently saving this estimator using the export_savedmodel function. I receive the following error:

```
ValueError: Couldn't find trained model at estimator_model.
```

A solution to overcome this problem has been given in the following StackOverflow post; however, I am reporting the issue here in case it is not yet solved.

**System information**
- Tested on Kaggle kernels and Windows 10 Pro
- TF installed from (source or binary): v1.12.0
- Python version: 3.6.8
- CUDA/cuDNN version: 9.0
- GPU model and memory: Nvidia K80 GPUs, 13 GB RAM

**Describe the current behaviour**
Currently, because the model_to_estimator function saves the trained model in a 'keras' subfolder under the user-specified output_dir, while the export_savedmodel function uses the model's output_dir parameter, and thus the user-specified parent folder, to look for the trained model, I get the following error: ValueError: Couldn't find trained model at estimator_model.

**Describe the expected behaviour**
I expected the export_savedmodel function to successfully find the trained model in the output_dir itself instead of the 'keras' folder and save the TF model.

**Current workaround**
Currently, I need to move the model files from the 'keras' folder to the output_dir to successfully save the model.

**Code to reproduce the issue**
The issue can be reproduced by using the code provided. Create a Keras model and save it:

```python
import keras

model = keras.Sequential()
model.add(keras.layers.Dense(units=1, activation='sigmoid', input_shape=(10,)))
model.compile(loss='binary_crossentropy', optimizer='sgd')
model.save('model.h5')
```

Next, convert the model to an estimator with tf.keras.estimator.model_to_estimator, add an input receiver function, and export it in the SavedModel format with estimator.export_savedmodel:

```python
# Convert the Keras model to a TF estimator
tf_files_path = ...  # value elided in the original report
estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir=tf_files_path)

def serving_input_receiver_fn():
    return tf.estimator.export.build_raw_serving_input_receiver_fn(
        {model.input_names[0]: tf.placeholder(tf.float32, shape=[None, 10])})

# Export the estimator
export_path = ...  # value elided in the original report
estimator.export_savedmodel(export_path,
                            serving_input_receiver_fn=serving_input_receiver_fn())
```

Error log:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 export_path = estimator.export_savedmodel(export_path, serving_input_receiver_fn=serving_input_receiver_fn)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py in export_savedmodel(self, export_dir_base, serving_input_receiver_fn, assets_extra, as_text, checkpoint_path, strip_default_attrs)
    661         checkpoint_path=checkpoint_path,
    662         strip_default_attrs=strip_default_attrs,
--> 663         mode=model_fn_lib.ModeKeys.PREDICT)
    664
    665   def export_saved_model(

/opt/conda/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py in _export_saved_model_for_mode(self, export_dir_base, input_receiver_fn, assets_extra, as_text, checkpoint_path, strip_default_attrs, mode)
    787         as_text=as_text,
    788         checkpoint_path=checkpoint_path,
--> 789         strip_default_attrs=strip_default_attrs)
    790
    791   def _export_all_saved_models(

/opt/conda/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py in _export_all_saved_models(self, export_dir_base, input_receiver_fn_map, assets_extra, as_text, checkpoint_path, strip_default_attrs)
    876                                                    self._model_dir)
    877     if not checkpoint_path:
--> 878       raise ValueError("Couldn't find trained model at %s." % self._model_dir)
    879
    880     export_dir = export_helpers.get_timestamped_export_dir(export_dir_base)

ValueError: Couldn't find trained model at estimator_model.
```
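As a stop-gap matching the workaround described above, the checkpoint files that model_to_estimator writes under the 'keras' subfolder can be copied up into the parent model_dir before calling export_savedmodel. A minimal sketch: the 'keras' subfolder layout is taken from the behaviour reported above, and `promote_keras_checkpoint` is a hypothetical helper, not a TensorFlow API:

```python
import os
import shutil

def promote_keras_checkpoint(model_dir):
    """Copy files from <model_dir>/keras up into <model_dir> so that
    export_savedmodel finds the trained checkpoint where it looks for it.
    (Hypothetical helper; the 'keras' subfolder is the layout reported above.)"""
    keras_dir = os.path.join(model_dir, "keras")
    if not os.path.isdir(keras_dir):
        return []
    copied = []
    for name in os.listdir(keras_dir):
        src = os.path.join(keras_dir, name)
        if os.path.isfile(src):
            shutil.copy(src, os.path.join(model_dir, name))
            copied.append(name)
    return sorted(copied)
```

Calling this on the estimator's model_dir right before export_savedmodel reproduces the manual file move described in the workaround section.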
tensorflow/tensorflow
RuntimeError: Graph is finalized and cannot be modified
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): N/A
- TensorFlow version (use command below): 1.12.0
- Python version: 2.7.x
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the problem**
I use TFRecord files to train a stacked autoencoder. I want to train each encoding layer for 1000 steps. I try to create batches from the features and labels and use them to train my network. However, when I run my code I get an error:

```
RuntimeError: Graph is finalized and cannot be modified.
```

indicating that the problem comes from the following function:

```python
def train_layer(output_layer, layer_loss, optimizer):
    """Train each encoding layer for 1000 steps."""
    layer_name = output_layer.name.split('/')[0]
    print('Pretraining {}'.format(layer_name))
    num_steps = 1000
    step = 1
    features, labels = train_input_fn()
    input_l = tf.reshape(features, [-1, FLAGS.image_rows, FLAGS.image_cols, 1])
    while step < num_steps:
        instance_batch, label_batch = tf.train.shuffle_batch(
            [input_l], batch_size=5, capacity=200, min_after_dequeue=100)
        out_layer, layer_loss_val, _ = sess.run(
            [output_layer, layer_loss, optimizer],
            feed_dict={features: instance_batch, labels: label_batch})
        print(layer_loss_val)
        step += 1
    print('layer finished')
```
tensorflow/tensorflow
CRF functions in TensorFlow 2.0
Bug
Hello. It seems like CRF (tensorflow.contrib.crf) was moved to TensorFlow Probability in TensorFlow 2.0, but I couldn't find it there. Where can I find the CRF-related features in TensorFlow 2.0?
tensorflow/tensorflow
TensorFlow run with the C++ API causes Xavier crash
Bug
- TensorFlow version: 1.12.0-rc2
- Device: Xavier (NV Power Mode: MAXN)
- CUDA: 10.0.117
- cuDNN: 7.3.1

Kernel code:

```cpp
void SemanticSegPro::getImgData(uchar *data) {
    // template arguments reconstructed; they were elided in the original report
    auto input_tensor_map = input_tensor.tensor<float, 4>();
    for (int y = 0; y < height; ++y) {
        const uchar *source_row = data + y * width * 3;
        for (int x = 0; x < width; ++x) {
            const uchar *source_pixel = source_row + x * 3;
            for (int c = 0; c < 3; ++c) {
                const uchar source_value = source_pixel[c];
                input_tensor_map(0, y, x, c) = source_value;
            }
        }
    }

    QString s = QDateTime::currentDateTime().toString("yyyy-MM-dd hh:mm:ss.zzz");
    qDebug() << "bf run" << inputTensorName.toStdString().c_str()
             << outputTensorName.toStdString().c_str();
    // The kernel code runs here
    Status status = session->Run({{inputTensorName.toStdString(), input_tensor}},
                                 {outputTensorName.toStdString()}, {}, &output);
    if (!status.ok()) {
        qDebug() << "Error: run failed";
    }
    for (int i = 0; i < height; ++i) {
        for (int j = 0; j < width; ++j) {
            outdata[i * width + j] = tmap(0, i, j);
        }
    }
    emit segData((uchar *)outdata);
}
```

The program abnormally terminates or crashes. I use the same program on a PC with a GTX 1070 and it runs well. I also use the Python API on the Xavier platform and it runs well too. Why does the program crash with Xavier and the C++ API?
tensorflow/tensorflow
Docs: non-determinism in TF
Bug
**Describe the feature and the current behavior/state**
Documentation explaining determinism and sources of non-determinism in TF applications. It should cover things such as having multiple readers, seeds in specific ops, seeds in NumPy, PYTHONHASHSEED, non-deterministic ops in CUDA (and why TF does not support passing a deterministic flag even though it would be slower), TF_CUDNN_USE_AUTOTUNE, and probably some others I do not know about.

**Who will benefit with this feature**
I take part in many Kaggle competitions, and very often people there have problems with non-determinism in NNs.
tensorflow/tensorflow
TensorFlow 1.13 can't deprecate tf.layers.batch_normalization: no existing replacement
Bug
I'm updating to TensorFlow 1.13 from 1.12. tf.layers.batch_normalization is being deprecated; the note states: "batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.batch_normalization instead." First, there is no keras.layers.batch_normalization. There is a keras.layers.BatchNormalization, but that is a different function and a direct replacement for tf.layers.BatchNormalization. Therefore, either make a keras.layers.batch_normalization function or remove the deprecation warning for tf.layers.batch_normalization. Here's the link to the batch_normalization function. Thanks.
tensorflow/tensorflow
TensorFlow 1.13 can't deprecate tf.layers.conv2d: no existing replacement
Bug
I'm updating to TensorFlow 1.13 from 1.12. tf.layers.conv2d is being deprecated; the note states: "conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.conv2d instead." First, there is no keras.layers.conv2d. There is a keras.layers.Conv2D, but that is a different function and a direct replacement for tf.layers.Conv2D. Therefore, either make a keras.layers.conv2d function or remove the deprecation warning for tf.layers.conv2d. Here's the link to the conv2d function. Thanks.
tensorflow/tensorflow
Why deprecate tf.layers.dense?
Bug
In the TensorFlow 1.13 documentation of tf.layers.dense there is a warning: "Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use keras.layers.dense instead." I am very confused, because this function is very easy to use and I use it very frequently. Why did you deprecate it? By the way, I cannot find the alternative keras.layers.dense; there is only keras.layers.Dense.
tensorflow/tensorflow
TensorFlow hangs when specifying nccl/xring as the all-reduce alg
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.0-rc0
- Python version: 2.7.12
- Bazel version (if compiling from source): 0.20.0
- GCC/Compiler version (if compiling from source): 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.11)
- CUDA/cuDNN version: CUDA 9.0.176, cuDNN 7.2.1.38
- GPU model and memory: Tesla V100, 32480 MiB

**Describe the current behavior**
I train the NeuMF model in a distributed environment and find that TensorFlow hangs forever when training finishes. Below are some details about the distributed training:

- model: NeuMF; dataset: ml-1m
- num workers: 2; num GPUs per worker: 4
- strategy: tf.contrib.distribute.MirroredStrategy
- reduce alg: replaced 'pscpu/pscpu' with 'nccl/xring'
- NCCL version: 2.3.7-1+cuda9.0

The distributed training runs 120 steps and then TensorFlow quits when using the default all-reduce algorithm 'pscpu/pscpu' of MirroredStrategy. When the default all-reduce algorithm is replaced with 'nccl/xring', the distributed training runs 120 steps but TensorFlow hangs forever. Below is the screen output for the two conditions.

Log on a normal exit:

```
I0226 10:04:21.481378 140388122244864 basic_session_run_hooks.py:247] loss = 0.3570031, step = 120 (1.436 sec)
I0226 10:04:21.483242 140388122244864 basic_session_run_hooks.py:680] global_step/sec: 6.96639
I0226 10:04:22.065148 140388122244864 util.py:168] Finalize strategy.
I0226 10:04:22.066411 140388122244864 basic_session_run_hooks.py:594] Saving checkpoints for 125 into /tmp/ncf_model/model.ckpt.
I0226 10:04:22.903732 140388122244864 estimator.py:359] Loss for final step: 0.35619634.
I0226 10:04:22.906255 140388122244864 data_preprocessing.py:391] Shutting down train data creation subprocess.
```

Log when hanging:

```
I0226 10:13:19.793405 140644590810880 basic_session_run_hooks.py:247] loss = 0.360583, step = 100 (1.755 sec)
I0226 10:13:19.795331 140644590810880 basic_session_run_hooks.py:680] global_step/sec: 5.70165
I0226 10:13:21.573970 140644590810880 basic_session_run_hooks.py:247] loss = 0.35865137, step = 110 (1.781 sec)
I0226 10:13:21.576019 140644590810880 basic_session_run_hooks.py:680] global_step/sec: 5.616
I0226 10:13:23.344115 140644590810880 basic_session_run_hooks.py:247] loss = 0.35676146, step = 120 (1.770 sec)
I0226 10:13:23.347656 140644590810880 basic_session_run_hooks.py:680] global_step/sec: 5.64505
```

Comparing the two logs: TensorFlow does not go on to save the checkpoint and output the final loss when using the 'nccl/xring' all-reduce algorithm. How can I solve this problem? Thanks.

**Code to reproduce the issue**
I got the NeuMF model source code from the official models repo. The original code only supports local training with Estimator, so I added the code below to support distributed training:

```python
distribution = tf.contrib.distribute.MirroredStrategy(num_gpus_per_worker=num_gpus)
run_config = tf.estimator.RunConfig(model_dir=model_dir,
                                    log_step_count_steps=10,
                                    train_distribute=distribution,
                                    eval_distribute=distribution)
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir=model_dir,
                                   config=run_config, params=params)
train_spec = tf.estimator.TrainSpec(train_input_fn, max_steps=5000, hooks=train_hooks)
eval_spec = tf.estimator.EvalSpec(eval_input_fn, steps=100)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```

The cross-device ops implementation of MirroredStrategy is MultiWorkerAllReduce, which is defined in tensorflow/python/distribute/cross_device_ops.py. I changed the value of the all_reduce_alg parameter of __init__ of class MultiWorkerAllReduce from 'pscpu/pscpu' to 'nccl/xring'.

**Other info / logs**
When hanging, the stacks of the two workers are:

```
# Worker 1: blocked in a gRPC RunStep call
#0  0x00007f990858ba13 in epoll_wait () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f988f250d98 in pollset_work () from /home/zxy/.local/.../_pywrap_tensorflow_internal.so
#2  0x00007f988f270dcf in cq_pluck(grpc_completion_queue*, void*, gpr_timespec, void*) ()
#3  0x00007f988f27119b in grpc_completion_queue_pluck ()
#4  0x00007f988f1dc898 in grpc::CoreCodegen::grpc_completion_queue_pluck(grpc_completion_queue*, void*, gpr_timespec, void*) ()
#5  0x00007f988882ad02 in grpc::CompletionQueue::Pluck(grpc::internal::CompletionQueueTag*) ()
#6  0x00007f988882f056 in grpc::internal::BlockingUnaryCallImpl<tensorflow::RunStepRequest, tensorflow::RunStepResponse>::BlockingUnaryCallImpl(...) ()
#7  0x00007f988882f26d in tensorflow::grpc::MasterService::Stub::RunStep(grpc::ClientContext*, tensorflow::RunStepRequest const&, tensorflow::RunStepResponse*) ()
#8  0x00007f98886b3f6d in tensorflow::GrpcRemoteMaster::RunStep(tensorflow::CallOptions*, tensorflow::RunStepRequestWrapper*, tensorflow::MutableRunStepResponseWrapper*) ()
#9  0x00007f98886adfc9 in tensorflow::GrpcSession::RunProto(tensorflow::CallOptions*, tensorflow::MutableRunStepRequestWrapper*, tensorflow::MutableRunStepResponseWrapper*) ()
#10 0x00007f98886aec52 in tensorflow::GrpcSession::RunHelper(...) ()
#11 0x00007f98886af3e0 in tensorflow::GrpcSession::Run(...) ()
#12 0x00007f9888693f42 in tensorflow::SessionRef::Run(...) ()
#13 0x00007f98888997a1 in TF_Run_Helper(...) ()
#14 0x00007f9888899f9e in TF_SessionRun ()
#15 0x00007f988868fbc9 in tensorflow::TF_SessionRun_wrapper_helper(...) ()
#16 0x00007f988868fc62 in tensorflow::TF_SessionRun_wrapper(...) ()
#17 0x00007f988864a70a in _wrap_TF_SessionRun_wrapper ()
#18 0x00000000004bc4aa in PyEval_EvalFrameEx ()
#19 0x00000000004b9b66 in PyEval_EvalCodeEx ()
#20 0x00000000004c1f56 in PyEval_EvalFrameEx ()
#21 0x00000000004b9b66 in PyEval_EvalCodeEx ()
#22 0x00000000004d5669 in ?? ()
#23 0x00000000004a587e in PyObject_Call ()
#24 0x00000000004be51e in PyEval_EvalFrameEx ()
```

```
# Worker 2: blocked in GrpcServer::Join
#0  0x00007f65d1baf98d in pthread_join (threadid=140057709098752, thread_return=0x0) at pthread_join.c:90
#1  0x00007f6540377b97 in std::thread::join() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2  0x00007f654ef477a0 in tensorflow::(anonymous namespace)::StdThread::~StdThread() () from /home/zxy/.local/.../libtensorflow_framework.so
#3  0x00007f6551b6c032 in tensorflow::GrpcServer::Join() () from /home/zxy/.local/.../_pywrap_tensorflow_internal.so
#4  0x00007f6551be4a8c in TF_ServerJoin () from /home/zxy/.local/.../_pywrap_tensorflow_internal.so
#5  0x00007f655198d756 in _wrap_TF_ServerJoin () from /home/zxy/.local/.../_pywrap_tensorflow_internal.so
#6  0x00000000004bc4aa in PyEval_EvalFrameEx ()
#7  0x00000000004b9b66 in PyEval_EvalCodeEx ()
... (further alternating PyEval_EvalFrameEx / PyEval_EvalCodeEx frames) ...
#24 0x00000000004eb69f in ?? ()
#25 0x00000000004e58f2 in PyRun_FileExFlags ()
#26 0x00000000004e41a6 in PyRun_SimpleFileExFlags ()
#27 0x00000000004938ce in Py_Main ()
#28 0x00007f65d17fd830 in __libc_start_main (main=0x493370, argc=30, argv=0x7ffdb8177748, ...) at ../csu/libc-start.c:291
#29 0x0000000000493299 in _start ()
```
tensorflow/tensorflow
For an SSD MobileNetV2 model, TFLite quantized uint8 inference is slower on some Android devices than float inference
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no, used the code in the TFLite example app
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac 10.14.2
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: Samsung Galaxy S8 (Snapdragon 835, API 26); HTC Bolt (Snapdragon 810, API 24)
- TensorFlow installed from (source or binary): binary, using implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly' on Android
- TensorFlow version (use command below): v1.12.0-rc0-17-g7b08198113 1.12.0-rc1
- Python version: 2.7.15
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
On the HTC Bolt, the performance of the SSD MobileNetV2 model is about the same for a quantized uint8 model as for a float model. However, on the Samsung Galaxy S8, the quantized uint8 model is about twice as fast.

**Describe the expected behavior**
I would expect the quantized uint8 model to be faster in every scenario; this is what I observe, for example, on iOS.
tensorflow/tensorflow
2.0: compile() is leaking memory
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 (2 different machines)
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): GPU 2.0.0-dev20190214, GPU 2.0.0-dev20190224
- Python version: 3.6.6
- CUDA/cuDNN version: 10
- GPU model and memory: 7 GB K5200, 4 GB GTX 970

**Describe the current behavior**
compile() leaks memory.

**Describe the expected behavior**
compile() clears the memory when overwritten.

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
for i in range(100000):
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
```

Memory slowly increases, about 20 MB at a time. I discovered this while working on a genetic algorithm which generates models which are then compiled and tested, but it filled my memory, and I have narrowed it down to this simple code.
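For a reproduction script like the one above, the growth can be quantified with Python's stdlib tracemalloc, which reports net allocations across repeated calls. A generic sketch, not TF-specific: the leaky function here is a deliberate stand-in for model.compile, not the real call:

```python
import tracemalloc

def net_allocated_bytes(fn, iterations=100):
    """Call fn() repeatedly and return the net bytes still allocated afterwards."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        fn()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

# Deliberately leaky stand-in: every call keeps its allocation alive.
retained = []
def leaky():
    retained.append([0] * 1000)

def non_leaky():
    _ = [0] * 1000  # allocated, then immediately freed
```

Swapping the stand-in for `lambda: model.compile(...)` would show whether each compile call retains memory, which is what the loop above suggests.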
tensorflow/tensorflow
Big memory consumption: conv2d vs conv3d
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04.5 LTS (Xenial Xerus)
- TensorFlow installed from: binary
- TensorFlow version: 1.12.0
- Python version: Python 3.5.2
- CUDA/cuDNN version: CUDA 9.0 (V9.0.176), cuDNN 7.4.2
- GPU model and memory: NVIDIA Tesla V100-SXM2 32 GB, driver version 384.145

Describe the problem
A conv2d layer consumes a lot of memory compared to the same operation performed by conv3d. I have 2 independent graphs:
1. input shape (16, 224, 224, 4) -> conv2d (padding SAME, filter (3, 3, 4, 16)) -> bias_add (shape (16,))
2. input shape (1, 16, 224, 224, 4) -> conv3d (padding SAME, filter (1, 3, 3, 4, 16)) -> bias_add (shape (16,))

tf.profiler reports that the conv2d operation in the first graph consumes 2900 MB, whereas conv3d, which I presume should perform the same operation (since the leading dimension of the filter is 1), consumes 269.2 MB, an order of magnitude less. Also, for the graph with conv2d I see 2 additional layers being injected:
- `conv2d_0_TransposeNHWCToNCHW-LayoutOptimizer`, consuming 16.78 MB
- `conv2d_0_0_TransposeNCHWToNHWC-LayoutOptimizer`, consuming 67.11 MB

These layers disappear if I remove the bias_add operation, but memory consumption still stays the same. Am I missing something obvious, or is my expectation that conv2d vs conv3d do the same thing in my case wrong?

Source code / logs
```python
import os, time
os.environ['CUDA_VISIBLE_DEVICES'] = os.environ.get('CUDA_VISIBLE_DEVICES', '1')
import tensorflow as tf
import numpy as np

def test_conv2d():
    inputs = tf.placeholder(tf.float32, shape=(16, 224, 224, 4), name='input_2d')
    filters = tf.get_variable(dtype=tf.float32, shape=(3, 3, 4, 16), name='filter_2d')
    conv = tf.nn.conv2d(inputs, filters, strides=(1, 1, 1, 1), padding='SAME',
                        dilations=(1, 1, 1, 1))
    vbias1 = tf.get_variable(name='bias_2d', shape=(16,), dtype=tf.float32)
    lbias1 = tf.nn.bias_add(conv, vbias1)
    return lbias1

def test_conv3d():
    inputs = tf.placeholder(tf.float32, shape=(1, 16, 224, 224, 4), name='input_3d')
    filters = tf.get_variable(dtype=tf.float32, shape=(1, 3, 3, 4, 16), name='filter_3d')
    conv = tf.nn.conv3d(inputs, filters, strides=(1, 1, 1, 1, 1), padding='SAME',
                        dilations=(1, 1, 1, 1, 1))
    vbias1 = tf.get_variable(name='bias_3d', shape=(16,), dtype=tf.float32)
    lbias1 = tf.nn.bias_add(conv, vbias1)
    return lbias1

gpu_options = tf.GPUOptions(allow_growth=True, visible_device_list=str(0))
config = tf.ConfigProto(gpu_options=gpu_options)
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

inputs_l = [
    (test_conv2d(), {'input_2d:0': np.random.random((16, 224, 224, 4))}),
    (test_conv3d(), {'input_3d:0': np.random.random((1, 16, 224, 224, 4))}),
]
for output, inputs in inputs_l:
    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(output, inputs, options=run_options, run_metadata=run_metadata)
        tf.profiler.profile(
            tf.get_default_graph(),
            run_meta=run_metadata,
            cmd='op',
            options=tf.profiler.ProfileOptionBuilder.time_and_memory())
```

Profiler output:
```
1) Profile:
node name | requested bytes | total execution time | accelerator execution time | cpu execution time
Conv2D                                          2900.02MB (100.00%, 97.19%), 2.70sec (100.00%, 99.83%), 14.11ms (100.00%, 97.40%), 2.68sec (100.00%, 99.84%)
conv2d_0_TransposeNHWCToNCHW-LayoutOptimizer      16.78MB (2.81%, 0.56%),    4.23ms (0.17%, 0.16%),    48us (2.60%, 0.33%),      4.18ms (0.16%, 0.16%)
BiasAdd                                                0B (0.00%, 0.00%),     207us (0.02%, 0.01%),    168us (2.27%, 1.16%),     39us (0.00%, 0.00%)
conv2d_0_0_TransposeNCHWToNHWC-LayoutOptimizer    67.11MB (2.25%, 2.25%),    202us (0.01%, 0.01%),    160us (1.10%, 1.10%),     42us (0.00%, 0.00%)
VariableV2                                         2.56KB (0.00%, 0.00%),     20us (0.00%, 0.00%),      0us (0.00%, 0.00%),     20us (0.00%, 0.00%)

2) Profile:
node name | requested bytes | total execution time | accelerator execution time | cpu execution time
Conv3D        269.20MB (100.00%, 100.00%), 137.06ms (100.00%, 99.85%), 64.03ms (100.00%, 99.74%), 73.03ms (100.00%, 99.94%)
BiasAdd             0B (0.00%, 0.00%),        197us (0.15%, 0.14%),    168us (0.26%, 0.26%),       29us (0.06%, 0.04%)
VariableV2      2.56KB (0.00%, 0.00%),         12us (0.01%, 0.01%),     0us (0.00%, 0.00%),        12us (0.02%, 0.02%)
```
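For reference, the sense in which the two graphs compute "the same operation" can be sketched in NumPy: a conv3d filter whose leading (depth) dimension is 1 carries exactly the conv2d weights, so with stride 1 each depth slice of the conv3d output uses the same math as the conv2d. This is an illustrative check of the weight layouts, not the TensorFlow kernels themselves:

```python
import numpy as np

# conv2d filter layout: (kh, kw, in_channels, out_channels)
f2d = np.arange(3 * 3 * 4 * 16, dtype=np.float32).reshape(3, 3, 4, 16)

# Equivalent conv3d filter layout: (kd, kh, kw, in_channels, out_channels)
# with kd == 1 -- same weights, one extra unit axis in front.
f3d = f2d[np.newaxis, ...]

assert f3d.shape == (1, 3, 3, 4, 16)
# With kd == 1, the single depth slice of the 3-D filter is the 2-D filter.
assert np.array_equal(f3d[0], f2d)
```

This supports the expectation in the report: identical weights and identical per-slice arithmetic, so the 10x memory gap appears to come from the kernel/algorithm selection, not from the operation itself.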
tensorflow/tensorflow
tf.one_hot crashes when indices are tf.uint8
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04; Windows 7
- TensorFlow installed from (source or binary): official pip source (tensorflow-gpu)
- TensorFlow version (use command below): 1.12.0
- Python version: 3.6
- CUDA/cuDNN version: 9.0 / 7.5
- GPU model and memory: 1080Ti, 12 GB

Describe the current behavior
tf.one_hot crashes when the indices tensor has dtype tf.uint8. The error message shows:
```
Check failed: new_num_elements == NumElements
```

Code to reproduce the issue

Other info / logs
I've also tested under TF 1.4.1 and TF 1.10.0 (both on GPU, on different machines); both have the same problem.
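Until this is fixed, a plausible workaround (my suggestion, not from the report) is to cast the indices to a wider integer type such as int32 before calling `tf.one_hot`. The framework-free NumPy sketch below illustrates the cast-first pattern with a hypothetical `safe_one_hot` helper:

```python
import numpy as np

def safe_one_hot(indices, depth):
    # Widen narrow integer indices up front; tf.one_hot reportedly
    # crashes when fed tf.uint8 indices, so the cast happens before
    # the encoding step.
    idx = np.asarray(indices).astype(np.int32)
    out = np.zeros((idx.size, depth), dtype=np.float32)
    out[np.arange(idx.size), idx] = 1.0
    return out

labels = np.array([0, 2, 1], dtype=np.uint8)
onehot = safe_one_hot(labels, depth=3)
```

In TensorFlow terms the equivalent would be `tf.one_hot(tf.cast(labels, tf.int32), depth)`, keeping uint8 only for storage.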
tensorflow/tensorflow
"sorry, unimplemented: non-trivial designated initializers not supported"
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:build_template

System information
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04.5 LTS; Docker images tensorflow/tensorflow:latest and tensorflow/tensorflow:devel
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): source
- TensorFlow version: commit a2bb5db1bf7931b0dc2cd08e53b8798489568198
- Python version: Python 2.7.12
- Installed using virtualenv? pip? conda?: source
- Bazel version (if compiling from source): 0.19.2
- GCC/compiler version (if compiling from source): gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
- CUDA/cuDNN version:
- GPU model and memory:

Describe the problem
Some of the tests don't compile because of the incorrect order of parameters, which causes the following error:
```
sorry, unimplemented: non-trivial designated initializers not supported
```
The problem is caused by the order in the designated initializers, which is different from the order in the struct. How should TensorFlow be tested so that this test passes? It doesn't even pass on the official Google TensorFlow image.

Provide the exact sequence of commands/steps that you executed before running into the problem:
```
bazel test --config=opt --test_size_filters=small,medium //tensorflow/lite/toco/tflite:operator_test
```

Any other info / logs
tensorflow/tensorflow
Warnings raised for deprecated collections.abc usage
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): b'v1.13.0-rc1-0-g63c13ff' 1.13.0-rc1
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: irrelevant

Describe the current behavior
TensorFlow raises warnings like:
```
jumping/simulation_tests/test_simulation.py::test_gpu_inference_devices[tensorflow]
  /home/neil/.pyenv/versions/3.7.0/lib/python3.7/site-packages/tensorflow/python/util/nest.py:823: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    _pywrap_tensorflow.RegisterType("Mapping", _collections.Mapping)

jumping/simulation_tests/test_simulation.py::test_gpu_inference_devices[tensorflow]
  /home/neil/.pyenv/versions/3.7.0/lib/python3.7/site-packages/tensorflow/python/util/nest.py:824: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    _pywrap_tensorflow.RegisterType("Sequence", _collections.Sequence)

jumping/simulation_tests/test_simulation.py::test_gpu_inference_devices[tensorflow]
  /home/neil/.pyenv/versions/3.7.0/lib/python3.7/site-packages/tensorflow/python/training/checkpointable/util.py:448: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
```

Describe the expected behavior
These warnings should not be raised.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):
```python
import warnings
warnings.simplefilter('error')

import tensorflow
```

Other info / logs
I can do this myself if the patch will be accepted.
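Until the library itself migrates to `collections.abc`, the noise can be suppressed locally with a targeted filter around the offending import. A sketch, using a stand-in `noisy_import` function instead of the real `import tensorflow` so the pattern is self-contained:

```python
import warnings

def noisy_import():
    # Stand-in for `import tensorflow`, which emits the DeprecationWarning
    # described in the report.
    warnings.warn(
        "Using or importing the ABCs from 'collections' instead of from "
        "'collections.abc' is deprecated", DeprecationWarning)
    return "module"

# Scope the suppression so other DeprecationWarnings still surface.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    module = noisy_import()  # no warning escapes this block
```

This keeps a strict `warnings.simplefilter('error')` policy usable for the rest of the test suite while tolerating the one known upstream offender.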
tensorflow/tensorflow
Cross-compilation from Windows 10 to Ubuntu 16.04
Bug
System information
- TensorFlow version: tf-r1.12.0
- If possible, provide a doc link: N/A

Describe the documentation issue
I can't find any official tutorial on how to cross-compile TensorFlow on the main platforms, other than the less-than-descriptive official tutorial on cross-compiling for the Pi. I'm not sure if this is more of a docs issue than a feature request, but I was looking for any help compiling for Ubuntu 16.04 from a Windows 10 machine. Since there are quite a bunch of error-prone steps to cross-compile, with many options to configure and files to write in a specific way, I would really benefit from some guidance, even if minimal. There are a few tutorials on Medium, but those don't target the same platforms and/or are old. Is this something that could be added to TensorFlow's website?

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
No, unless I manage to do it myself.
tensorflow/tensorflow
"sorry, unimplemented: non-trivial designated initializers not supported"
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:build_template

System information
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.1 LTS
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): source
- TensorFlow version: commit a2bb5db1bf7931b0dc2cd08e53b8798489568198
- Python version: 2.7.15rc1
- Installed using virtualenv? pip? conda?: source
- Bazel version (if compiling from source): 0.20.0
- GCC/compiler version (if compiling from source): gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
- CUDA/cuDNN version:
- GPU model and memory:

Describe the problem
During compilation I get an error:
```
sorry, unimplemented: non-trivial designated initializers not supported
```
It can be solved by setting all function pointers to NULL and applying the initialization in the same order as in the struct definition.

Provide the exact sequence of commands/steps that you executed before running into the problem:
```
bazel test --verbose_failures //tensorflow/lite/delegates/nnapi:nnapi_delegate
```

Any other info / logs
I attached a diff which fixes this error for me: diff.txt
tensorflow/tensorflow
TensorFlow Lite tests on x86
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:build_template

System information
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.1 LTS
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): source
- TensorFlow version: commit a2bb5db1bf7931b0dc2cd08e53b8798489568198
- Python version: 2.7.15rc1
- Installed using virtualenv? pip? conda?: source
- Bazel version (if compiling from source): 0.20.0
- GCC/compiler version (if compiling from source): gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
- CUDA/cuDNN version:
- GPU model and memory:

Describe the problem
TensorFlow Lite tests seem to require the Android SDK even though we want to run them on x86.

Provide the exact sequence of commands/steps that you executed before running into the problem:
```
bazel test --local_resources 40000,23,2 //tensorflow/lite/...
```

Any other info / logs
```
ERROR: /home/tclbot/.cache/bazel/_bazel_tclbot/eee9defa2fa4c4fc557baa005719ebd9/external/bazel_tools/tools/android/BUILD:391:1: Executing genrule @bazel_tools//tools/android:no_android_sdk_repository_error failed (Exit 1): This build requires an Android SDK. Please add the android_sdk_repository rule to your WORKSPACE.
```
tensorflow/tensorflow
TFLite model converted from pb file yields different output values
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave version 10.14.3
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: iPhone XS, iOS 12.1.4
- TensorFlow installed from (source or binary): Bazel tools built from source; Python imports installed with pip (tensorflow 1.12.0)
- TensorFlow version (use command below): Bazel tools: GitHub branch r1.13, commit bade323390591fff6fc82b7eeb4a6cc30f807389 (Fri Feb 22 11:00:40 2019 -0800); Python imports: v1.12.0-rc2-3-ga6d8ffae09 1.12.0
- Python version: Python 2.7.10
- Bazel version (if compiling from source): build label 0.22.0
- GCC/compiler version (if compiling from source): Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1; Apple LLVM version 10.0.0 (clang-1000.11.45.5); Target: x86_64-apple-darwin18.2.0; Thread model: posix; InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Exact commands to reproduce
I trained a custom model based on MobileNetV2, with a few convolution layers and fully connected layers on top, to output ranking scores of input images. I tried to convert the model to TFLite for running on iOS, but found that the output values of the TFLite model are different from those of the original model, even with the same input. The same behaviour is observed with the TFLite interpreter for iOS and for Python, and also with the Bazel tools. Models trained with different MobileNetV2 input sizes (224, 160, 96) also produce similar behaviour. I suppose the output values for the TFLite and original TensorFlow models should be the same.

The following link is a zip of the model files and outputs, uploaded onto Google Drive.

```shell
# Path variables
CKPT_META_PATH=checkpoint/mvfn_096/ckpt-15000.meta
CKPT_WEIGHT_PATH=checkpoint/mvfn_096/ckpt-15000
FREEZE_PB_PATH=mvfn_096.pb
TFLITE_PATH=mvfn_096.tflite
TB_PATH=tb_log
TFLITE_VIS_HTML_PATH=mvfn_096_tflite.html
INPUT_IMAGE_SIZE=96
TF_PATH=path/to/tensorflow/repo

# Freeze graph
python $TF_PATH/tensorflow/python/tools/freeze_graph.py \
  --input_binary=true \
  --input_meta_graph=$CKPT_META_PATH \
  --input_checkpoint=$CKPT_WEIGHT_PATH \
  --output_graph=$FREEZE_PB_PATH \
  --output_node_names=ranker_score_func

# Convert pb to tflite
bazel run //tensorflow/lite/toco:toco -- \
  --input_file=$FREEZE_PB_PATH \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_file=$TFLITE_PATH \
  --output_format=TFLITE \
  --inference_type=FLOAT \
  --inference_input_type=FLOAT \
  --input_arrays=input_image \
  --output_arrays=ranker_score_func \
  --input_shapes=1,$INPUT_IMAGE_SIZE,$INPUT_IMAGE_SIZE,3   # fix input batch size to 1

# pb to TensorBoard for visualization
python $TF_PATH/tensorflow/python/tools/import_pb_to_tensorboard.py \
  --model_dir=$FREEZE_PB_PATH \
  --log_dir=$TB_PATH

# Visualize tflite
bazel run //tensorflow/lite/tools:visualize -- $TFLITE_PATH $TFLITE_VIS_HTML_PATH

# Diff tflite and pb
bazel run //tensorflow/lite/testing:tflite_diff_example_test -- \
  --tensorflow_model=$FREEZE_PB_PATH \
  --tflite_model=$TFLITE_PATH \
  --input_layer=input_image \
  --input_layer_type=float \
  --input_layer_shape=1,$INPUT_IMAGE_SIZE,$INPUT_IMAGE_SIZE,3 \
  --output_layer=ranker_score_func
```

Output of tflite_diff_example_test (the pb model and the tflite model have different output values):
```
2019-02-25 05:34:28.723184: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
There were errors in invocation. Output tensor 170 (index 0): got 1.76843, but expected 4.20755
There were errors in invocation. Output tensor 170 (index 0): got 0.762267, but expected 3.9918
There were errors in invocation. Output tensor 170 (index 0): got 0.110205, but expected 4.65119
There were errors in invocation. Output tensor 170 (index 0): got 1.10238, but expected 4.45592
There were errors in invocation. Output tensor 170 (index 0): got 1.5811, but expected 4.54539
There were errors in invocation. Output tensor 170 (index 0): got 1.34377, but expected 4.36198
There were errors in invocation. Output tensor 170 (index 0): got 1.87406, but expected 4.3289
There were errors in invocation. Output tensor 170 (index 0): got 2.33834, but expected 4.49968
There were errors in invocation. Output tensor 170 (index 0): got 1.16959, but expected 4.78387
There were errors in invocation. Output tensor 170 (index 0): got 1.22668, but expected 4.53048
There were errors in invocation. Output tensor 170 (index 0): got 0.570301, but expected 4.27984
There were errors in invocation. Output tensor 170 (index 0): got 0.875985, but expected 4.32995
```
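When debugging a mismatch like the one in the log above, it helps to summarize the error numerically rather than eyeballing individual invocations. A small sketch with a hypothetical `report_mismatch` helper (the example values are made up, not the model's real outputs):

```python
import numpy as np

def report_mismatch(tflite_out, tf_out, atol=1e-3):
    # Compare the two runtimes' outputs for the same input batch and
    # report aggregate error statistics.
    tflite_out = np.asarray(tflite_out, dtype=np.float64)
    tf_out = np.asarray(tf_out, dtype=np.float64)
    err = np.abs(tflite_out - tf_out)
    return {
        "max_abs_err": float(err.max()),
        "mean_abs_err": float(err.mean()),
        "within_tol": bool(np.all(err <= atol)),
    }

# Example with illustrative outputs only:
result = report_mismatch([1.0, 2.0, 3.0], [1.0, 2.5, 3.0])
```

Feeding it the TFLite interpreter's output and the frozen-graph session's output for the same image would show whether the divergence is a small numeric drift or, as the log suggests here, a structural conversion problem.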
tensorflow/tensorflow
Docker tensorflow/tensorflow:latest-gpu: slow initialisation of GPU
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:build_template

System information
- OS platform and distribution: Ubuntu 16.04, using the Docker latest-gpu image
- Mobile device: No
- TensorFlow installed from (source or binary): N/A
- TensorFlow version: 1.13.0-rc1
- Python version: Python 3.5.2
- Installed using virtualenv? pip? conda?: N/A
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
```
- GPU model and memory: Quadro M1200
```
==============NVSMI LOG==============
Timestamp                       : Mon Feb 25 00:50:20 2019
Driver Version                  : 410.78
CUDA Version                    : 10.0

Attached GPUs                   : 1
GPU 00000000:01:00.0
    Product Name                : Quadro M1200
    Product Brand               : Quadro
    Display Mode                : Disabled
    Display Active              : Disabled
    Persistence Mode            : Enabled
    Accounting Mode             : Disabled
    Accounting Mode Buffer Size : 4000
    Driver Model                : Current N/A, Pending N/A
    Serial Number               : N/A
    GPU UUID                    : GPU-d9093d17-7927-a053-9104-426e68b1d4ac
    Minor Number                : 0
    VBIOS Version               : 82.07.BB.00.13
    MultiGPU Board              : No
    Board ID                    : 0x100
    GPU Part Number             : N/A
    Inforom Version             : Image Version N/A, OEM Object N/A, ECC Object N/A, Power Management Object N/A
    GPU Operation Mode          : Current N/A, Pending N/A
    GPU Virtualization Mode     : None
    IBMNPU Relaxed Ordering Mode: N/A
    PCI                         : Bus 0x01, Device 0x00, Domain 0x0000, Device Id 0x13B610DE, Bus Id 00000000:01:00.0, Sub System Id 0x224D17AA
    GPU Link Info               : PCIe Generation Max 3 / Current 3; Link Width Max 16x / Current 16x
    Bridge Chip                 : Type N/A, Firmware N/A
    Replays since reset         : 0
    Tx Throughput               : 0 KB/s
    Rx Throughput               : 0 KB/s
    Fan Speed                   : N/A
    Performance State           : P0
    Clocks Throttle Reasons     : Idle Not Active; Applications Clocks Setting Active; SW Power Cap Not Active; HW Slowdown Not Active; HW Thermal Slowdown N/A; HW Power Brake Slowdown N/A; Sync Boost Not Active; SW Thermal Slowdown Not Active; Display Clock Setting Not Active
    FB Memory Usage             : Total 4043 MiB, Used 3813 MiB, Free 230 MiB
    BAR1 Memory Usage           : Total 256 MiB, Used 3 MiB, Free 253 MiB
    Compute Mode                : Default
    Utilization                 : Gpu 0 %, Memory 0 %, Encoder 0 %, Decoder 0 %
    Encoder Stats               : Active Sessions 0, Average FPS 0, Average Latency 0
    FBC Stats                   : Active Sessions 0, Average FPS 0, Average Latency 0
    Ecc Mode                    : Current N/A, Pending N/A
    ECC Errors                  : Volatile and Aggregate, Single Bit and Double Bit (Device Memory, Register File, L1 Cache, L2 Cache, Texture Memory, Texture Shared, CBU, Total): all N/A
    Retired Pages               : Single Bit ECC N/A, Double Bit ECC N/A, Pending N/A
    Temperature                 : GPU Current Temp 37 C, GPU Shutdown Temp N/A, GPU Slowdown Temp 96 C, GPU Max Operating Temp 92 C, Memory Current Temp N/A, Memory Max Operating Temp N/A
    Power Readings              : Power Management N/A, Power Draw N/A, Power Limit N/A, Default Power Limit N/A, Enforced Power Limit N/A, Min Power Limit N/A, Max Power Limit N/A
    Clocks                      : Graphics 993 MHz, SM 993 MHz, Memory 2505 MHz, Video 893 MHz
    Applications Clocks         : Graphics N/A, Memory N/A
    Default Applications Clocks : Graphics N/A, Memory N/A
    Max Clocks                  : Graphics 1150 MHz, SM 1150 MHz, Memory 2505 MHz, Video 1035 MHz
    Max Customer Boost Clocks   : Graphics N/A
    Clock Policy                : Auto Boost N/A, Auto Boost Default N/A
    Processes
        Process ID 1123  : Type G, Name /usr/lib/xorg/Xorg, Used GPU Memory 8 MiB
        Process ID 31763 : Type C, Name python, Used GPU Memory 3791 MiB
```

Describe the problem
When using my GPU, it takes several minutes (just over 4 minutes) to initialise before doing anything. The issue does not exist when using the CPU.

Provide the exact sequence of commands/steps that you executed before running into the problem:
```shell
docker run -it -u $(id -u):$(id -g) --runtime=nvidia -v $(realpath ~/tensorflow):/tf tensorflow/tensorflow:latest-gpu bash
python test.py
```

Contents of test.py:
```python
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```

Any other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached)

Logs while running the test script (note the gap of over 4 minutes between 05:46:52 and 05:51:01, at the point where libcublas is opened):
```
Downloading data from ...
11493376/11490434 - 0s 0us/step
11501568/11490434 - 0s 0us/step
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/layers/core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
2019-02-25 05:46:52.561440: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-25 05:46:52.628689: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-25 05:46:52.629997: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x50be7d0 executing computations on platform CUDA. Devices:
2019-02-25 05:46:52.630035: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Quadro M1200, Compute Capability 5.0
2019-02-25 05:46:52.664820: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2019-02-25 05:46:52.666234: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5128500 executing computations on platform Host. Devices:
2019-02-25 05:46:52.666318: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0):
2019-02-25 05:46:52.666979: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Quadro M1200 major: 5 minor: 0 memoryClockRate(GHz): 1.148
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.90GiB
2019-02-25 05:46:52.667052: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-02-25 05:46:52.669065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-25 05:46:52.669122: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-02-25 05:46:52.669152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-02-25 05:46:52.669563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3696 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0)
Epoch 1/5
2019-02-25 05:51:01.254939: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
60000/60000 - 5s 84us/sample - loss: 0.2207 - acc: 0.9348
Epoch 2/5
60000/60000 - 5s 79us/sample - loss: 0.0960 - acc: 0.9714
Epoch 3/5
60000/60000 - 5s 78us/sample - loss: 0.0697 - acc: 0.9774
Epoch 4/5
60000/60000 - 5s 79us/sample - loss: 0.0536 - acc: 0.9826
Epoch 5/5
60000/60000 - 5s 76us/sample - loss: 0.0430 - acc: 0.9857
10000/10000 - 0s 29us/sample - loss: 0.0606 - acc: 0.9813
```
tensorflow/tensorflow
Cannot load a Keras model with a custom initializer, regularizer, or constraint function
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Mac OS X 10.13.6
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf.__version__ = '2.0.0-dev20190222'; tf.__git_version__ = 'v1.12.0-8615-g74016a0d51'
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
I cannot load a model containing a custom initializer, a custom regularizer, or a custom constraint if they are defined as regular functions rather than by subclassing the appropriate classes.

Describe the expected behavior
I expected it to work, since the model otherwise works fine and is saved correctly.

Code to reproduce the issue
The following model uses a custom initializer, a custom regularizer, and a custom constraint. It works fine and saves fine, but cannot be loaded. You can try using only one at a time; they all fail:
```python
import tensorflow as tf
from tensorflow import keras
import numpy as np

def my_glorot_initializer(shape, dtype=tf.float32):
    stddev = tf.sqrt(2. / (shape[0] + shape[1]))
    return tf.random.normal(shape, stddev=stddev, dtype=dtype)

def my_l1_regularizer(weights):
    return tf.reduce_sum(tf.abs(0.01 * weights))

def my_positive_weights(weights):
    return tf.nn.relu(weights)

X_train = np.random.randn(100, 2)
y_train = np.random.randn(100, 1)

model = keras.models.Sequential([
    keras.layers.Dense(1,
                       kernel_regularizer=my_l1_regularizer,
                       kernel_constraint=my_positive_weights,
                       kernel_initializer=my_glorot_initializer),
])
model.compile(loss="mse", optimizer="nadam")
model.fit(X_train, y_train, epochs=2)
model.save("my_model.h5")

model = keras.models.load_model(
    "my_model.h5",
    custom_objects={
        "my_l1_regularizer": my_l1_regularizer,
        "my_positive_weights": my_positive_weights,
        "my_glorot_initializer": my_glorot_initializer,
    })
```

Other info / logs
Here's the stacktrace:
```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     31         "my_l1_regularizer": my_l1_regularizer,
     32         "my_positive_weights": my_positive_weights,
     33         "my_glorot_initializer": my_glorot_initializer,
---> 34     })

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py in load_model(filepath, custom_objects, compile)
    214     model_config = json.loads(model_config.decode('utf-8'))
    215     model = model_config_lib.model_from_config(model_config,
--> 216                                                custom_objects=custom_objects)
    217
    218     # set weights

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects)
     53     Sequential.from_config(config)
     54   from tensorflow.python.keras.layers import deserialize  # pylint: disable=g-import-not-at-top
---> 55   return deserialize(config, custom_objects=custom_objects)

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
     77       module_objects=globs,
     78       custom_objects=custom_objects,
---> 79       printable_module_name='layer')

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    190           custom_objects=dict(
    191               list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 192               list(custom_objects.items())))
    193       with CustomObjectScope(custom_objects):
    194         return cls.from_config(cls, config)

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/sequential.py in from_config(cls, config, custom_objects)
    349     for layer_config in layer_configs:
    350       layer = layer_module.deserialize(layer_config,
--> 351                                        custom_objects=custom_objects)
    352       model.add(layer)
    353     if not model.inputs and build_input_shape:

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
     77       module_objects=globs,
     78       custom_objects=custom_objects,
---> 79       printable_module_name='layer')

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    192               list(custom_objects.items())))
    193       with CustomObjectScope(custom_objects):
--> 194         return cls.from_config(cls, config)
    195     else:
    196       # Then `cls` may be a function returning a class.

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py in from_config(cls, config)
    414         A layer instance.
    415     """
--> 416     return cls(**config)
    417
    418   def compute_output_shape(self, input_shape):

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/layers/core.py in __init__(self, units, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, **kwargs)
    930     self.activation = activations.get(activation)
    931     self.use_bias = use_bias
--> 932     self.kernel_initializer = initializers.get(kernel_initializer)
    933     self.bias_initializer = initializers.get(bias_initializer)
    934     self.kernel_regularizer = regularizers.get(kernel_regularizer)

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/initializers.py in get(identifier)
    176   elif isinstance(identifier, six.string_types):
    177     config = {'class_name': str(identifier), 'config': {}}
--> 178     return deserialize(config)
    179   elif callable(identifier):
    180     return identifier

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/initializers.py in deserialize(config, custom_objects)
    165       module_objects=globals(),
    166       custom_objects=custom_objects,
--> 167       printable_module_name='initializer')
    168

~/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    199       custom_objects = custom_objects or {}
    200       with CustomObjectScope(custom_objects):
--> 201         return cls(**config)
    202   elif isinstance(identifier, six.string_types):
    203     function_name = identifier

TypeError: my_glorot_initializer() missing 1 required positional argument: 'shape'
```
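A workaround that sidesteps the failing code path (my suggestion, not part of the report) is to define the custom objects as classes exposing `get_config()`, so that deserialization rebuilds them through the constructor instead of calling the bare function with the saved config as keyword arguments. A framework-free sketch of the pattern, using a hypothetical `MyL1Regularizer` class:

```python
class MyL1Regularizer:
    # Mirrors the Keras regularizer protocol: callable on the weights,
    # plus get_config() so save/load can round-trip the object.
    def __init__(self, factor=0.01):
        self.factor = factor

    def __call__(self, weights):
        # L1 penalty: factor * sum(|w|); plain Python here for illustration.
        return self.factor * sum(abs(w) for w in weights)

    def get_config(self):
        return {"factor": self.factor}

# What load_model effectively does: rebuild the object from its config.
original = MyL1Regularizer(factor=0.5)
restored = MyL1Regularizer(**original.get_config())
```

In the real model this would mean subclassing `keras.regularizers.Regularizer` (and the initializer/constraint base classes analogously); the `cls(**config)` call that currently crashes then receives constructor arguments it actually accepts.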
tensorflow/tensorflow
Switching between TF 1.x and TF 2.0 in the API browser loses the context
Bug
System information
- TensorFlow version: from doc link

Describe the documentation issue
When I'm browsing the documentation, I often need to switch from TF 1.x to TF 2.0 or vice versa. Unfortunately, the context is lost when I do that. For example, visit the tf.constant documentation for TF 2.0, then from the drop-down menu at the top (labeled "API r2.0") select any TF 1.x version: notice that you do not land on the tf.constant page. The same is true in the reverse direction.

This also makes it hard to find TF 2.0 documentation by searching: I generally land on a TF 1.x page, and when I switch to TF 2.0 the context is lost. Of course, many functions have been removed or moved, but this should not be a problem, since most of them are still available in tf.compat.v1.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
Why not, but I would need someone to point me to the code that handles this.
tensorflow/tensorflow
Need to upgrade the Russian translation of the page tutorials/keras/basic_classification
Bug
System information
- TensorFlow version: not important
- Doc link:

Describe the documentation issue
While reading the Russian translation of this documentation page, I found a few places with a not entirely natural use of the Russian language (I'm a native Russian speaker). I also noticed a few small typos. I carefully read the whole page, compared it with the original article in English, and made some updates. The PR with these changes will be sent in the next couple of minutes after opening this issue.

PR with updates:
tensorflow/tensorflow
tf-nightly-gpu-2.0-preview: Dataset and tf.function error
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes tensorflow instal from source or binary pip tensorflow version use command below tf nightly gpu 2 0 preview python version python 3 in google colab cuda cudnn version 10 gpu model and memory tesla k80 describe the current behavior when use gpu runtime in google colab with tf nightly gpu 2 0 preview tensorflow version create a dataset with tensorflow dataset inside a tf function decorate function result in an exception regard incompatible device type describe the expect behavior expect behavior be not receive an exception remove tf function from the train method result in correct behavior the code also work well without gpu support tf nightly 2 0 preview code to reproduce the issue python import tensorflow dataset as tfds import tensorflow as tf import numpy as np class mymodel tf keras model def init self super mymodel self init self layer1 tf keras layer dense 20 activation relu self layer2 tf keras layer dense 10 def call self x training x self layer1 x x self layer2 x return x def create dataset def process feature image label feature image feature label return tf reshape image 1 np float32 255 0 label datum builder tfds builder mnist dataset datum builder as dataset split tfds split train dataset dataset map process batch 32 repeat 1 return dataset avg loss tf metric mean tf function def train model optimizer dataset create dataset step 0 for image label in dataset step 1 with tf gradienttape as tape logit model image true loss tf nn sparse softmax cross entropy with logit logit logit label label loss tf math reduce mean loss avg loss update state loss grad tape gradient loss model trainable variable optimizer apply gradient zip grad model trainable variable if tf equal step 20 0 tf print avg loss result avg loss reset state num epoch 2 model mymodel optimizer tf keras optimizer adam for in range num epoch train model optimizer other info log link to 
the original Google Colab file. The encountered exception:

```
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>()
     59 optimizer = tf.keras.optimizers.Adam()
     60 for _ in range(num_epochs):
---> 61   train(model, optimizer)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    436       # lifting succeeded, so variables are initialized and we can run the
    437       # stateless function.
--> 438       return self._stateless_fn(*args, **kwds)
    439     else:
    440       canon_args, canon_kwds = self._canonicalize_function_inputs(args, kwds)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   1251     """Calls a graph function specialized to the inputs."""
   1252     graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 1253     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   1254
   1255   @property

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _filtered_call(self, args, kwargs)
    537     """
    538     return self._call_flat(
--> 539         (t for t in nest.flatten((args, kwargs))
    540          if isinstance(t, (ops.Tensor,
    541                            resource_variable_ops.ResourceVariable))))

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args)
    590     # Only need to override the gradient in graph mode and when we have outputs.
    591     if context.executing_eagerly() or not self.outputs:
--> 592       outputs = self._inference_function.call(ctx, args)
    593     else:
    594       self._register_gradient()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in call(self, ctx, args)
    380               attrs=("executor_type", executor_type,
    381                      "config_proto", config),
--> 382               ctx=ctx)
    383       # Replace empty list with None
    384       outputs = outputs or None

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     64     else:
     65       message = e.message
---> 66     six.raise_from(core._status_to_exception(e.code, message), None)
     67   except TypeError as e:
     68     if any(ops._is_keras_symbolic_tensor(x) for x in inputs):

/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:CPU:0 vs /job:localhost/replica:0/task:0/device:GPU:0 [Op:__inference_train_925768]
```
tensorflow/tensorflow
super() does not work within a tf.function
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Mac OS X 10.13.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf.__version__ = '2.0.0-dev20190222', tf.__git_version__ = 'v1.12.0-8615-g74016a0d51'
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: When I call a method decorated by tf.function, I get an error if it uses super(): RuntimeError: super(): __class__ cell not found.

Describe the expected behavior: I expect no error; tf.function should ensure that super() works normally.

Code to reproduce the issue:

```python
import tensorflow as tf

class A:
    def foo(self, x):
        return x + 1

class B(A):
    @tf.function
    def bar(self, x):
        return super().foo(x)

b = B()
b.bar(5)  # raises RuntimeError
```

Other info / logs: I can work around this issue in multiple ways. The easiest is to replace super() with super(B, self), but it's 2019; who still uses Python 2-style super()? Alternatively, I can work around the issue by using autograph=False. This shows that the issue is linked to autograph recognizing only super(B, self), not super():

```python
import tensorflow as tf

class A:
    def foo(self, x):
        return x + 1

class B(A):
    @tf.function(autograph=False)
    def bar(self, x):
        return super().foo(x)

b = B()
b.bar(5)  # okay, returns 6
```

I can also work around this issue by calling super() outside of the method, e.g. in the constructor:

```python
import tensorflow as tf

class A:
    def foo(self, x):
        return x + 1

class B(A):
    def __init__(self):
        self.super = super()

    @tf.function
    def bar(self, x):
        return self.super.foo(x)

b = B()
b.bar(5)  # okay, returns 6
```

I tried to work around it using a tf.init_scope, but I could not get it to work (not sure why).

Here is the full stack trace for the first example code:

```pycon
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
      9
     10 b = B()
---> 11 b.bar(5)

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    424       # This is the first call of __call__, so we have to initialize.
    425       initializer_map = {}
--> 426       self._initialize(args, kwds, add_initializers_to=initializer_map)
    427       if self._created_variables:
    428         try:

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    368     self._concrete_stateful_fn = (
    369         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 370             *args, **kwds))
    371
    372     def invalid_creator_scope(*unused_args, **unused_kwds):

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   1278     if self.input_signature:
   1279       args, kwargs = None, None
-> 1280     graph_function, _, _ = self._maybe_define_function(args, kwargs)
   1281     return graph_function
   1282

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   1545           or call_context_key not in self._function_cache.missed):
   1546         self._function_cache.missed.add(call_context_key)
-> 1547         graph_function = self._create_graph_function(args, kwargs)
   1548         self._function_cache.primary[cache_key] = graph_function
   1549         return graph_function, args, kwargs

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   1477             arg_names=arg_names,
   1478             override_flat_arg_shapes=override_flat_arg_shapes,
-> 1479             capture_by_value=self._capture_by_value),
   1480         self._function_attributes)
   1481

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    636       tf_decorator.rewrap(python_func, original_func, converted_func)
    637
--> 638     func_outputs = python_func(*func_args, **func_kwargs)
    639
    640     # invariant: `func_outputs` contains only Tensors, IndexedSlices,

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    315       # __wrapped__ allows AutoGraph to swap in a converted function. We give
    316       # the function a weak reference to itself to avoid a reference cycle.
--> 317       return weak_wrapped_fn().__wrapped__(*args, **kwds)
    318     weak_wrapped_fn = weakref.ref(wrapped_fn)
    319

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py in bound_method_wrapper(*args, **kwargs)
   2060     # If __wrapped__ was replaced, then it is always an unbound function
   2061     # that takes self as first argument.
-> 2062     return wrapped_fn(weak_instance(), *args, **kwargs)
   2063   weak_bound_method_wrapper = weakref.ref(bound_method_wrapper)
   2064

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    629             optional_features=autograph_options,
    630             force_conversion=True,
--> 631         ), *args, **kwargs)
    632
    633       # Wrapping around a decorator allows checks like tf_inspect.getargspec

~/.virtualenvs/tf2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, owner, options, *args, **kwargs)
    358     return f(*args, **kwargs)
    359
--> 360   result = converted_f(*effective_args, **kwargs)
    361
    362   # The converted function's closure is simply inserted into the function's

/var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmpd7lvn4li.py in tf__bar(self, x)
      4   retval_ = None
      5   do_return = True
----> 6   retval_ = ag__.converted_call(foo, super, ag__.ConversionOptions(recursive=True, verbose=0, strip_decorators=(tf.function, defun, ag__.convert, ag__.do_not_convert, ag__.converted_call), force_conversion=False, optional_features=(), internal_convert_user_code=True), (x,), {})
      7   return retval_
      8

RuntimeError: super(): __class__ cell not found
```
tensorflow/tensorflow
MirroredStrategy doesn't use GPUs
Bug
System information
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): pip install tensorflow-gpu==1.12
- TensorFlow version (use command below): 1.12
- Python version: 3.5.2
- CUDA/cuDNN version: CUDA 9, cuDNN 7.3
- GPU model and memory: 2x NVIDIA Titan X

Describe the current behavior: I was working on rewriting a script from the queue/threading approach to the tf.data.Dataset approach of providing data. I got really nice throughput of data, with over 90% utilization of the GPUs. Now that I have rewritten it, when starting the training with MirroredStrategy the GPUs are not used at all, and I get the following output:

```
INFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0
INFO:tensorflow:Configured nccl all-reduce.
INFO:tensorflow:Calling model_fn.
```

At this point I am thinking there is some issue with TF 1.12.

Code to reproduce the issue: Here is basically the structure of my code. I tried out different things, like training directly with tf.keras fit with multi_gpu_model, but it didn't work out. Basically I am trying to reproduce the functionality of the RandomShuffleQueue I had before, with multiple threads filling up the queue.

```python
model = Model(inputs=inputs, outputs=y_output)
optimizer = tf.train.AdamOptimizer(learning_rate)
model = utils.multi_gpu_model(model, gpus=num_gpus, cpu_relocation=True)
model.compile(loss=loss_func, optimizer=optimizer)

def generator(n):
    while True:
        try:
            # ...
            yield imgbatch
        except ValueError:
            pass

def get_generator(n):
    return partial(generator, n)

def dataset(n):
    return tf.data.Dataset.from_generator(
        get_generator(n),
        output_types=(tf.float32, tf.float32),
        output_shapes=(tf.TensorShape([None, None, 1]),
                       tf.TensorShape([None, None, 1])))

def input_fn():
    ds = tf.data.Dataset.range(len(datasets)).apply(
        tf.data.experimental.parallel_interleave(
            dataset, cycle_length=len(datasets), sloppy=True))
    ds = ds.map(map_func=lambda imgbatch: processImage(img, lbl),
                num_parallel_calls=12)
    ds = ds.shuffle(shuffle_size)
    ds = ds.batch(batch_size)
    ds = ds.prefetch(1)
    return ds

strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)
config = tf.estimator.RunConfig(train_distribute=strategy)
estimator = tf.keras.estimator.model_to_estimator(model, config=config)
estimator.train(lambda: input_fn())
```

Any help would be greatly appreciated, since I'm stuck on it for a while now.
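One detail of the pipeline above that is easy to miss: `Dataset.from_generator` expects a zero-argument callable, which is why the code binds `n` ahead of time with `functools.partial`. A self-contained sketch of just that pattern (the finite dummy generator is illustrative; the real generator above yields image batches forever):

```python
from functools import partial

def generator(n):
    # Stand-in for the image-batch producer: yields n dummy "batches".
    # The real one loops forever and yields (image, label) pairs.
    for i in range(n):
        yield i * 10

def get_generator(n):
    # from_generator needs a callable taking no arguments,
    # so bind n up front instead of passing generator(n) (a generator
    # object, which would be exhausted after one epoch).
    return partial(generator, n)

batches = list(get_generator(3)())
```

Each call to `get_generator(3)()` produces a fresh generator, which is what lets the dataset restart iteration between epochs.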
tensorflow/tensorflow
TF 2.0: tf.keras losses duplication
Bug
System information
- Windows 10
- tf.__version__: b'v1.11.0-rc2-4-gc19e29306c' 1.11.0, Anaconda, Python 3.6.5
- GPU: GeForce GTX 1070 with Max-Q Design
- TensorFlow 2.0 GPU preview installed via pip

I'm building a reinforcement learning framework on top of TensorFlow 2.0 using the tf.keras API, and I've come across the following issue. The 2.0 API docs for tf.keras.losses show many objects that are not actually available in the preview package. For example, the loss classes such as Huber, Hinge, etc. are not accessible.

1. Why are those classes not included in the preview package?
2. Why are there both classes and functions for many of the same loss types? That seems like unnecessary duplication.
2a. Why is there a Huber class but no huber function?
3. I'd love to contribute PRs and help fix these issues; would that be desired?

Edit: this has also been noticed here.
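On question 2, the usual rationale in Keras-style APIs is that the function form is a stateless computation, while the class form carries configuration and can be serialized with a model. A minimal pure-Python sketch of that split (illustrative names only, not the actual tf.keras implementation):

```python
def mse(y_true, y_pred):
    # Function form: stateless, just computes the value from two
    # sequences of floats.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

class MeanSquaredError:
    # Class form: wraps the function, carries configuration, and can
    # report that configuration for model serialization.
    def __init__(self, name="mean_squared_error"):
        self.name = name

    def __call__(self, y_true, y_pred):
        return mse(y_true, y_pred)

    def get_config(self):
        return {"name": self.name}
```

Under this reading, the two forms are not pure duplication: the class is what a `compile()`-style API can save and restore, and the function is the convenient shorthand. Why some losses got only one of the two forms in the preview is exactly the question raised above.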