tensorflow/tensorflow
TFLite only slightly faster with GPU on Helio P22
Bug

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: N/A
- Mobile device: Xiaomi Redmi 6
- TensorFlow installed from (source or binary): source
- TensorFlow version: r1.13
- Python version: N/A
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: PowerVR GE8320

Describe the current behavior:
Running the TFLite demo, the average inference time with 1 thread is 480 ms, but it is only 370 ms with the GPU. With 4 threads it is 360 ms.

Describe the expected behavior:
I'd expect GPU performance to be significantly better than 4-thread CPU. Granted, it's not a great GPU, but I'd still expect it to be substantially faster than the CPU.

Other info / logs:
Xiaomi Redmi 6; OS: Android 8.1 (Oreo), planned upgrade to Android 9.0 (Pie); chipset: MediaTek MT6762 Helio P22 (12 nm); CPU: octa-core 2.0 GHz Cortex-A53; GPU: PowerVR GE8320.
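For reference, averaged latency figures like the ones quoted above can be gathered with a small timing harness. This is a generic sketch, not the demo app's own benchmarking code; `run_once` is a stand-in for the real workload (with TFLite it would wrap `interpreter.invoke()`):

```python
import time

def average_inference_ms(run_once, warmup=3, iterations=50):
    """Average wall-clock latency (ms) of a zero-argument callable."""
    for _ in range(warmup):          # warm-up runs are excluded from the average
        run_once()
    start = time.perf_counter()
    for _ in range(iterations):
        run_once()
    return (time.perf_counter() - start) * 1000.0 / iterations

# Stand-in workload; in the demo this would be one model invocation.
ms = average_inference_ms(lambda: sum(range(10_000)))
print(f"{ms:.3f} ms/inference")
```

Warm-up matters here because the first GPU-delegate invocation includes shader compilation, which would otherwise skew the average.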
tensorflow/tensorflow
Assigning to a resource variable with varying shape violates the shape information of the tensor it is assigned from
Bug

Hi, I ran into problems when assigning to a variable from a tensor with dynamic shape. In the end it boils down to using a resource variable and having a strided-slice tensor to assign from (see the MWE below). When assigning to a resource variable from a strided-slice tensor, the shape information of that tensor becomes corrupted.

System information:
- Unexpected behaviour in custom code; minimum working example below
- Windows 10
- TensorFlow installed from: pip wheel
- TensorFlow version: tf.VERSION = 1.13.1, tf.GIT_VERSION = b'unknown', tf.COMPILER_VERSION = MSVC 190024215 (sanity check: array([1]))
- Python version: 3.6.7 (default, Feb 28 2019, 07:28:18) [MSC v.1900 64 bit (AMD64)] on win32
- Also occurs without GPU support

Describe the current behavior. The output is:

```
result: [array([1, 2]), [array([1]), array([1, 2]), array([2])]]
var value before: [3]
var value after: [1 2]
```

Note the contradiction: the tensor with elements [1 2] reports shape [1].

Describe the expected behavior. Expected output:

```
result: [array([1, 2]), [array([2]), array([1, 2]), array([2])]]
var value before: [3]
var value after: [1 2]
```

Code to reproduce the issue:

```python
import tensorflow as tf

def mwe():
    u = tf.range(3, 4)  # some tensor of shape [1]
    v = tf.range(1, 3)  # some tensor of shape [2]
    # Forget the shape of v; otherwise the assign op will not build
    # with use_resource=True.
    v = tf.placeholder_with_default(v, [None])
    # Random stride; leaving this out does not produce the error.
    value_to_assign = v[::1]
    var = tf.Variable(u, use_resource=True, validate_shape=False)
    assign_op = tf.assign(var, value_to_assign, validate_shape=False)
    observe_op = [tf.shape(value_to_assign), value_to_assign, tf.shape(v)]
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        var_value_before = sess.run(var)
        result = sess.run([assign_op, observe_op])
        var_value_after = sess.run(var)
        print('result:', result)
        print('var value before:', var_value_before)
        print('var value after:', var_value_after)

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
mwe()
```

Note: either using use_resource=False, leaving out the assign op when running the session, or leaving out the stride op (v[::1]) yields the expected behaviour. This can be tested with the following routine:

```python
def mwe_options():
    for use_resource in [0, 1]:
        for additional_stride in [0, 1]:
            u = tf.range(3, 4)  # some tensor of shape [1]
            v = tf.range(1, 3)  # some tensor of shape [2]
            v = tf.placeholder_with_default(v, [None])
            # Random stride; leaving this out does not produce the error.
            value_to_assign = v[::1] if additional_stride else v
            var = tf.Variable(u, use_resource=use_resource, validate_shape=False)
            assign_op = tf.assign(var, value_to_assign, validate_shape=False)
            observe_op = [tf.shape(value_to_assign), value_to_assign]
            with tf.Session() as sess:
                tf.global_variables_initializer().run()
                for add_assign_op in [0, 1]:
                    if add_assign_op:
                        result = sess.run([assign_op, observe_op])
                    else:
                        result = sess.run(observe_op)
                    var_value = sess.run(var)
                    print(f'use_resource: {use_resource}, '
                          f'additional_stride: {additional_stride}, '
                          f'add_assign_op: {add_assign_op}, '
                          f'result: {result}, var: {var_value}')
```

Output:

```
use_resource: 0, additional_stride: 0, add_assign_op: 0, [array([2]), array([1, 2])], var: [3]
use_resource: 0, additional_stride: 0, add_assign_op: 1, [array([2]), array([1, 2])], var: [1 2]
use_resource: 0, additional_stride: 1, add_assign_op: 0, [array([2]), array([1, 2])], var: [3]
use_resource: 0, additional_stride: 1, add_assign_op: 1, [array([2]), array([1, 2])], var: [1 2]
use_resource: 1, additional_stride: 0, add_assign_op: 0, [array([2]), array([1, 2])], var: [3]
use_resource: 1, additional_stride: 0, add_assign_op: 1, [array([2]), array([1, 2])], var: [1 2]
use_resource: 1, additional_stride: 1, add_assign_op: 0, [array([2]), array([1, 2])], var: [3]
use_resource: 1, additional_stride: 1, add_assign_op: 1, [array([1]), array([1, 2])], var: [1 2]
```
tensorflow/tensorflow
Wrong MD5 checksum for FlatBuffers when building TensorFlow Lite Micro using make
Bug

System information:
- OS platform and distribution: Ubuntu 16.04
- Mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version: ba63891c8b
- Python version: N/A
- Installed using virtualenv? pip? conda?: N/A
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the problem:
When running `make -f tensorflow/lite/experimental/micro/tools/make/Makefile test`, the command fails when downloading the FlatBuffers archive, since there is a mismatch with the MD5 sum.

Provide the exact sequence of commands / steps that you executed before running into the problem:

```
make -f tensorflow/lite/experimental/micro/tools/make/Makefile test
```

Any other info / logs:

```
tensorflow/lite/experimental/micro/tools/make/download_and_extract.sh 7e8191b24853d75de2af87622ad293ba tensorflow/lite/experimental/micro/tools/make/downloads/gemmlowp
tensorflow/lite/experimental/micro/tools/make/download_and_extract.sh 3811552512049fac3af419130904bc55 tensorflow/lite/experimental/micro/tools/make/downloads/flatbuffers
Checksum error: expected 3811552512049fac3af419130904bc55 but found 02c64880acb89dbd57eebacfd67200d8
tensorflow/lite/experimental/micro/tools/make/Makefile:198: recipe for target 'tensorflow/lite/experimental/micro/tools/make/downloads/flatbuffers' failed
make: *** [tensorflow/lite/experimental/micro/tools/make/downloads/flatbuffers] Error 1
```
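The failing step is an MD5 comparison between the expected digest and the actual digest of the downloaded archive. A minimal Python sketch of that kind of check (`md5_matches` is a hypothetical helper for illustration, not part of the Makefile or download script):

```python
import hashlib

def md5_matches(data: bytes, expected_hex: str) -> bool:
    """Return True if the MD5 digest of `data` equals the expected hex string,
    mirroring the checksum comparison the download script performs."""
    return hashlib.md5(data).hexdigest() == expected_hex.lower()

payload = b"example archive contents"
digest = hashlib.md5(payload).hexdigest()
print(md5_matches(payload, digest))        # True: matching checksum
print(md5_matches(payload, "0" * 32))      # False: stale/wrong checksum
```

A mismatch like the one in the log typically means the pinned checksum went stale after the upstream archive was re-generated, rather than a corrupted download.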
tensorflow/tensorflow
Converter error for quantization-aware trained tf.keras applications MobileNetV2
Bug

Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version: r1.13
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.0
- GPU model and memory: 2080 Ti

Describe the current behavior:
During conversion of the quantization-aware trained MobileNetV2 model from tf.keras, it raises the following error message:

```
F tensorflow/lite/toco/tooling_util.cc:1702] Array expanded_conv/project/BN/FusedBatchNorm, which is an input to the Conv operator producing the output array block_1_expand_relu/Relu6, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.
Aborted (core dumped)
```

Describe the expected behavior:
It works for models built with tf.contrib.slim. I feel like tf.contrib.quantize.create_eval_graph doesn't support the BN layers from tf.keras.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

```python
import os
import numpy as np
import tensorflow as tf

working_path = '/tmp/tflite'

# Quantization-aware training graph
tf.keras.backend.set_learning_phase(1)
inputs = tf.keras.Input(shape=(224, 224, 3), name='input')
y_true = tf.keras.Input(shape=(1000,), name='label')
keras_model = tf.keras.applications.MobileNetV2(
    input_tensor=inputs, alpha=1.0, weights=None, include_top=True)
y_pred = keras_model.output
loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)

graph = tf.get_default_graph()
tf.contrib.quantize.create_training_graph(input_graph=graph, quant_delay=0)
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.00625).minimize(loss)
saver = tf.train.Saver()
with tf.Session() as sess:
    input_data = np.random.rand(10, 224, 224, 3)
    label = np.zeros((10, 1000))
    label[:, 2] = np.ones(10)
    sess.run(tf.global_variables_initializer())
    for i in range(3):
        _, loss_value = sess.run([train_step, loss],
                                 feed_dict={inputs: input_data, y_true: label})
        print(loss_value)
    # Save
    saver.save(sess, os.path.join(working_path, 'checkpoints/model.ckpt'))

# Load and convert to tflite
tf.reset_default_graph()
tf.keras.backend.set_learning_phase(0)
inputs = tf.keras.Input(shape=(224, 224, 3), name='input')
keras_model = tf.keras.applications.MobileNetV2(
    input_tensor=inputs, alpha=1.0, weights=None, include_top=True)
outputs = keras_model.output

# Insert fake-quant nodes
graph = tf.get_default_graph()
tf.contrib.quantize.create_eval_graph(graph)
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, tf.train.latest_checkpoint(
        os.path.join(working_path, 'checkpoints')))
    # Freeze graph
    graph_def = graph.as_graph_def()
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, graph_def, [outputs.op.name])
    tf.io.write_graph(frozen_graph, working_path, 'frozen_graph.pb')

# Convert to tflite
graph_def_file = os.path.join(working_path, 'frozen_graph.pb')
input_arrays = ['input']
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, [outputs.op.name],
    input_shapes={'input': [1, 224, 224, 3]})
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
converter.inference_input_type = tf.lite.constants.QUANTIZED_UINT8
converter.quantized_input_stats = {'input': (0., 255.)}
tflite_model = converter.convert()
open(os.path.join(working_path, 'converted_model.tflite'), 'wb').write(tflite_model)
```

Other info / logs (full traceback):

```
Traceback (most recent call last):
  File "/home/mgou/project/tf_lightweight_yolov3_backbone_pretrain/train_recog_quant.py", line 91, in <module>
    tflite_model = converter.convert()
  File "/home/mgou/virtualenvs/tf113_py36/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 455, in convert
    **converter_kwargs)
  File "/home/mgou/virtualenvs/tf113_py36/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 442, in toco_convert_impl
    input_data.SerializeToString(),
  File "/home/mgou/virtualenvs/tf113_py36/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 205, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-05-15 02:27:33.413703: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1161 operators, 1728 arrays (0 quantized)
2019-05-15 02:27:33.440573: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1161 operators, 1728 arrays (0 quantized)
2019-05-15 02:27:33.608574: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 137 operators, 260 arrays (1 quantized)
2019-05-15 02:27:33.610239: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 137 operators, 260 arrays (1 quantized)
2019-05-15 02:27:33.610999: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After pre-quantization graph transformations pass 1: 76 operators, 199 arrays (1 quantized)
2019-05-15 02:27:33.611690: F tensorflow/lite/toco/tooling_util.cc:1702] Array expanded_conv/project/BN/FusedBatchNorm, which is an input to the Conv operator producing the output array block_1_expand_relu/Relu6, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.
Aborted (core dumped)
```
tensorflow/tensorflow
AutoGraph fails for keyword-only arguments
Bug

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Linux
- TensorFlow installed from (source or binary): pip install tf-nightly-gpu-2.0-preview
- TensorFlow version: v1.12.1-1847-gc095504 2.0.0-dev20190514
- Python version: 3.7.3

Describe the current behavior:
AutoGraph complains when compiling a function with keyword-only arguments. Example output:

```
W0515 01:46:22.158518 139635868194560 ag_logging.py:145] Entity could not be transformed and will be executed as-is. Some features (e.g. tensor-dependent conditionals and loops) may not work as expected. Error details can be found in the logs when running with the env variable AUTOGRAPH_VERBOSITY >= 1. Please report this to the AutoGraph team. Cause: Unexpected error transforming. If you believe this is due to a bug, please set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output when filing the bug report. Caused by: inconsistent nodes: None (NoneType) and None (NoneType)
```

Describe the expected behavior:
AutoGraph works for keyword-only arguments.

Code to reproduce the issue:

```python
import tensorflow as tf

@tf.function
def f(*, a):
    return a * 2

f(a=0)
```
tensorflow/tensorflow
"Transfer learning using a pretrained ConvNet" tutorial should use a sigmoid activation
Bug

URL(s) with the issue:

Description of issue (what needs changing):
The classification head should use a sigmoid activation.

Clear description:
The tutorial has this paragraph: "You don't need an activation function here because this prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0."

I think we need a sigmoid activation: later we use loss='binary_crossentropy' in model.compile, and binary_crossentropy by default has from_logits=False and expects a probability, as documented here and here.
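The mismatch described here can be checked numerically: treating a raw logit as if it were a probability gives a very different cross-entropy than applying the sigmoid first. A plain-Python sketch (my own illustration, not the tutorial's code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(y, p, eps=1e-7):
    """Binary cross-entropy with the input treated as a probability,
    i.e. what loss='binary_crossentropy' (from_logits=False) expects.
    Inputs are clipped to (eps, 1 - eps) as Keras backends do."""
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

y, logit = 1.0, -3.0
with_sigmoid = bce(y, sigmoid(logit))  # sigmoid head feeding the loss
raw_logit = bce(y, logit)              # raw logit fed to a probability loss

print(round(with_sigmoid, 4))  # 3.0486
print(round(raw_logit, 2))     # 16.12
```

The raw logit falls outside [0, 1], gets clipped, and the loss value (and its gradient) no longer reflects the model's actual confidence; hence either the sigmoid head or from_logits=True is needed.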
tensorflow/tensorflow
TensorFlow Lite demo app gives wrong results with the GPU delegate on Honor Play (Android 9.0, GPU Turbo)
Bug

System information:
- OS platform and distribution: Android 9.0 (API 28)
- Mobile device: Huawei Honor Play, Android 9.0, GPU Mali-G72 MP12, GPU Turbo
- TensorFlow installed from: tensorflow-lite 0.0.0-gpu-experimental

Describe the current behavior:
The TensorFlow Lite GPU delegate demo application gives incorrect results for image classification when run on the GPU, whereas the same float model gives correct results when run on the CPU. Even the quantized model in the demo application gives correct inference results. We also tried the official DeepLab model for semantic segmentation on the same phone, but in this scenario too it gives wrong results (square stripes instead of the correct mask). The same models run fine on other phones with Adreno GPUs, and also on some phones with Mali GPUs (e.g. Samsung A8, Android 9.0).

Describe the expected behavior:
TensorFlow Lite GPU inference should give the same results on CPU and GPU on all Android phones.

Other info / logs:
It looks like the phone uses a new feature called GPU Turbo. Initially the models worked correctly with the stock Android 8.1 OS, but after the 9.0 upgrade the TFLite models give wrong results.
tensorflow/tensorflow
defun: repeated addition explodes
Bug

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Colab, or WSL Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from: binary
- TensorFlow version: 1.13.1 or 1.14.1-dev20190514
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
Using defun in eager mode, this very simple function seems to cause havoc. In particular, the first run takes a long time and a lot of memory; both scale seemingly exponentially with the `num_iters` parameter below.

```python
import time
import tensorflow as tf

tf.enable_v2_behavior()

def f(d, num_iters):
    x = tf.random.uniform([d])
    for i in range(num_iters):
        x = x + x
    return x

# Danger: increasing this much beyond 20 may use up all your memory.
n_iters = 22

t0 = time.time()
f(10, n_iters)
print(time.time() - t0)  # on the order of ms

f_g = tf.contrib.eager.defun(f)
for i in range(3):
    t0 = time.time()
    f_g(10, n_iters)
    print(time.time() - t0)  # around 30 s on the first run for n_iters=22; fast after that
```

Replacing `x = tf.random.uniform([d])` with `x = tf.ones([d])` eliminates the problem, as does replacing `x = x + x` with `x = 2 * x`. In graph mode the problem does not occur:

```python
import time
import tensorflow as tf

def graph_fun(node):
    with tf.Session() as sess:
        for i in range(3):
            t0 = time.time()
            sess.run(node)
            print(time.time() - t0)

def f(d, num_iters):
    x = tf.random_uniform([d])
    for i in range(num_iters):
        x = x + x
    return x

n_iters = 100
tf.reset_default_graph()
t0 = time.time()
node = f(10, n_iters)
print(time.time() - t0)  # around 0.4 s
graph_fun(node)  # around 0.6 s on the first run, then essentially zero
```

Describe the expected behavior:
This very simple defun'd function should execute quickly, including the graph-building and graph-optimization steps.
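The exponential scaling is consistent with a trace that duplicates the operand of `x + x` instead of sharing it: each iteration then doubles the expression size. A pure-Python illustration of that effect (an analogy only, not TensorFlow's actual tracing machinery):

```python
def count_nodes(expr):
    """Count nodes in a nested-tuple expression tree, walking duplicates."""
    if not isinstance(expr, tuple):
        return 1
    return 1 + sum(count_nodes(child) for child in expr[1:])

def unrolled_add(num_iters):
    """Build the tree for `x = x + x` repeated num_iters times,
    treating the two operands as separate copies rather than one shared node."""
    x = "x0"
    for _ in range(num_iters):
        x = ("add", x, x)
    return x

for n in (1, 5, 10):
    print(n, count_nodes(unrolled_add(n)))  # 3, 63, 2047: i.e. 2**(n+1) - 1
```

With sharing (common-subexpression elimination, which graph mode's optimizer performs), the same computation needs only `n + 1` nodes, which would match the fast graph-mode timings in the report.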
tensorflow/tensorflow
Keep version fixed when searching the API docs
Bug

When using the search bar in the API docs with a version in the URL, any search will include pages from different API versions, which means you often end up in the wrong module and have to click back to the correct version again.

How to reproduce:
1. Go to the API docs, type in "unsorted_segment" and wait for the results to appear (see image).
2. Choose the top page result.
3. You're now in the 1.9 API rather than 1.13.
tensorflow/tensorflow
Possibly wrong comment in minimax_discriminator_loss
Bug

URL(s) with the issue: L468-L477

Description of issue (what needs changing):
On line 468 the comment reads `log(1 - label_smoothing * sigmoid(d_x))`. Shouldn't it be `(1 - label_smoothing) * log(sigmoid(d_x))`? I'm uncertain about the label-smoothing part, but I think the argument of the log is wrong.

On line 477 the comment reads `log(sigmoid(d_g_x))`. Shouldn't it be `log(1 - sigmoid(d_g_x))`?
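For context, the standard minimax discriminator loss that the corrected comments would describe can be sketched numerically. `minimax_discriminator_loss` below is my own illustration for single logits; in particular, placing `label_smoothing` as a factor on the real-data term is my reading of one-sided label smoothing, not taken from the TF-GAN source:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def minimax_discriminator_loss(d_real, d_fake, label_smoothing=0.0):
    """-(1 - label_smoothing) * log(sigmoid(d_real)) - log(1 - sigmoid(d_fake)),
    i.e. the classic GAN discriminator objective for a single pair of logits."""
    real_term = -(1.0 - label_smoothing) * math.log(sigmoid(d_real))
    fake_term = -math.log(1.0 - sigmoid(d_fake))
    return real_term + fake_term

# A confident, correct discriminator (high logit on real, low on fake) has low loss;
# a confidently wrong one has high loss.
print(minimax_discriminator_loss(4.0, -4.0))
print(minimax_discriminator_loss(-4.0, 4.0))
```

Note that both log arguments here match the issue's suggested corrections: `log(sigmoid(...))` for the real term and `log(1 - sigmoid(...))` for the generated term.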
tensorflow/tensorflow
Error installing tf-nightly-2.0-preview with pip
Bug

Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information:
- OS platform and distribution: Ubuntu 18.04.2 LTS
- TensorFlow installed from (source or binary): binary
- TensorFlow version: latest tf-nightly-2.0-preview
- Python version: 3.6.7
- Installed using virtualenv? pip? conda?: pip

Describe the problem / exact sequence of commands that you executed before running into the problem:
The official Colab link for the TensorBoard 2.0 demo is able to reproduce the bug.

```
pip install -q tf-nightly-2.0-preview
```

Error:

```
thinc 6.12.1 has requirement wrapt<1.11.0,>=1.10.0, but you'll have wrapt 1.11.1 which is incompatible.
```
tensorflow/tensorflow
TF 2.0 API docs: typo / extra symbol in the pip install copyable code
Bug

Existing URL(s) with the issue:

Description of issue (what needs changing):
The copyable code block has extra characters at the end of `pip install tensorflow==2.0.0-alpha0`.

Clear description:
The copyable code block should be changed to the following to make copying easy:

```
pip install tensorflow==2.0.0-alpha0
```

Correct links:
This is the overview page for the TF 2.0 API, but I couldn't find the docstring in the referenced sources.

Submit a pull request?
Would love to, but I couldn't find where in the sources this lives as a docstring or is generated.
tensorflow/tensorflow
Unsupported operation MEAN while trying to apply GpuDelegate to tflite
Bug

Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- I am writing a custom Android app built in Android Studio, using the method mentioned here (Android with Android Studio)
- Host system: Linux Ubuntu 16.04
- Target device: Android S8
- TFLite installed from (source or binary): nightly AAR org.tensorflow:tensorflow-lite:0.0.0-nightly
- TFLite GPU installed from (source or binary): nightly AAR org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly
- GPU model and memory: Adreno 540

Describe the current behavior:
When I attempt to load my tflite model on the Android device after applying the GpuDelegate as described here (Android), it fails, saying that MEAN is an unsupported operation. See the traceback below.

Describe the expected behavior:
The net loads and runs fine if I do not apply the GPU delegate before attempting to load it. The only difference in the code is the following lines:

```java
delegate = new GpuDelegate();
tfliteOptions.addDelegate(delegate);
```

Code to reproduce the issue:
Unfortunately I cannot provide my tflite file, but it obviously contains a MEAN operation. Is there a script anywhere in the TFLite repo that prints out all the operations contained in a tflite file? I looked briefly but was unable to find one.

Other info / logs:

```
05-13 12:09:56.101  3348  3348 E AndroidRuntime: FATAL EXCEPTION: main
Process: org.qus.viewqualtflite, PID: 3348
java.lang.RuntimeException: Unable to start activity ComponentInfo{org.qus.viewqualtflite/org.qus.viewqualtflite.ViewqualActivity}: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Next operations are not supported by GPU delegate:
MEAN: Operation is not supported.
First 43 operations will run on the GPU, and the remaining 7 on the CPU.
TfLiteGpuDelegate Prepare: Tensor ref has unsupported number of dimensions: 5
Node number 50 (TfLiteGpuDelegate) failed to prepare.
    at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2955)
    at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3030)
    at android.app.ActivityThread.-wrap11(Unknown Source:0)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1696)
    at android.os.Handler.dispatchMessage(Handler.java:105)
    at android.os.Looper.loop(Looper.java:164)
    at android.app.ActivityThread.main(ActivityThread.java:6938)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:327)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1374)
Caused by: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Next operations are not supported by GPU delegate:
MEAN: Operation is not supported.
First 43 operations will run on the GPU, and the remaining 7 on the CPU.
TfLiteGpuDelegate Prepare: Tensor ref has unsupported number of dimensions: 5
Node number 50 (TfLiteGpuDelegate) failed to prepare.
    at org.tensorflow.lite.NativeInterpreterWrapper.applyDelegate(Native Method)
    at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:83)
    at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:60)
    at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:224)
    at org.qus.viewqualtflite.TensorflowQusRunner.create(TensorflowQusRunner.java:154)
    at org.qus.viewqualtflite.ViewqualActivity.onCreate(ViewqualActivity.java:200)
    at android.app.Activity.performCreate(Activity.java:7183)
    at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1220)
    at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2908)
    ... 9 more
```
tensorflow/tensorflow
TF 2.0.0-alpha: bad performance compared to TF 1.13.1 with model.fit for a 1-layer tf.keras Sequential model (linear regression)
Bug

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Nope, this only concerns the performance of model.fit for a tf.keras Sequential model, comparing version 2.0.0-alpha to 1.13.1
- OS platform and distribution: tested on Windows 10 64-bit (local machine) and on Google Colab, with similar outcomes
- TensorFlow installed from (source or binary): pip install tensorflow / pip install tensorflow==2.0.0-alpha0
- TensorFlow version: 2.0.0-alpha vs 1.13.1
- Python version: 3.7.3

Describe the current behavior:
Much higher MSE loss with TensorFlow 2.0.0-alpha when running 500 epochs of SGD using model.fit on a 1-layer tf.keras Sequential model implementing a simple linear regression (6 data points). When running the same code first with TensorFlow 1.13.1, performance is well as expected; then with 2.0.0-alpha, results run on Google Colab are shown below (CPU only; similar performance difference on my local machine, Windows 10 64-bit, i5-7200U CPU, with the Python version specified above).

1.13.1:

```
Epoch 500/500
6/6 [==============================] - 0s 238us/sample - loss: 2.0710e-05
```

2.0.0-alpha:

```
Epoch 500/500
6/6 [==============================] - 0s 542us/sample - loss: 0.2409
```

As you can see, the MSE loss is several orders of magnitude worse for 2.0.0-alpha.

Describe the expected behavior:
My understanding of TensorFlow 2.0 is that the performance should be the same (or similar) for this code.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

```python
# pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
import numpy as np
from tensorflow import keras

print(tf.__version__)

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model.fit(xs, ys, epochs=500)
print(model.predict([10.0]))
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
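As a baseline supporting that this regression problem is trivially learnable: a dependency-free SGD loop on the same six points (whose underlying relation is y = 2x - 1) reaches a near-zero MSE well within 500 epochs. The learning rate 0.01 is my choice for this sketch, not necessarily what either TF version's 'sgd' default uses:

```python
# The six data points from the report; the underlying relation is y = 2x - 1.
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]

w, b, lr = 0.0, 0.0, 0.01  # assumed learning rate for this sketch
for _ in range(500):
    for x, y in zip(xs, ys):          # one SGD update per sample
        err = (w * x + b) - y
        w -= lr * 2.0 * err * x       # d(err^2)/dw
        b -= lr * 2.0 * err           # d(err^2)/db

mse = sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(w, b, mse)  # w -> 2, b -> -1, mse near zero
```

That even plain SGD converges here suggests the 2.0.0-alpha loss of 0.2409 reflects a behavioral difference in the training loop or optimizer defaults rather than an inherently hard fit.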
tensorflow/tensorflow
Custom tflite model fails to run with GpuDelegate on Android P
Bug

Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution:
- Mobile device: Android P
- TensorFlow installed from (source or binary): pip
- TensorFlow version: 1.12.2
- Python version: 3.5
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
Building with GpuDelegate fails on Android P (Java demo).

Describe the expected behavior:
The build passes.

Code to reproduce the issue:
I converted my model like this:

```shell
freeze_graph \
  --input_graph=eval.pb \
  --input_checkpoint=model_quant_self/model.ckpt-1919 \
  --output_graph=frozen_eval_graph.pb \
  --output_node_names=softmax

tflite_convert \
  --output_file=poolnet_gzq.tflite \
  --graph_def_file=model_gzq.pb \
  --inference_type=FLOAT \
  --input_arrays=Placeholder \
  --input_shapes=1,224,224,3 \
  --output_arrays=oup
```

Other info / logs. Here is the log:

```
2019-05-13 16:25:04.870 20456-22983 W System.err: java.lang.RuntimeException: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: GpuDelegate Prepare: fuse_auto_input: failed
Node number 133 (GpuDelegate) failed to prepare.
2019-05-13 16:25:04.870 20456-22983 W System.err: Caused by: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: GpuDelegate Prepare: fuse_auto_input: failed
Node number 133 (GpuDelegate) failed to prepare.
```

The failed node 133 is my output node (sigmoid/softmax); I have tried both and both fail.
tensorflow/tensorflow
Keras fit_generator with validation does not respect the verbose argument
Bug

System information:
- OS platform and distribution: Windows 10 1809
- TensorFlow installed from (source or binary): pip
- TensorFlow version: 1.13 (GPU)
- Python version: 3.6

Bug:
When using fit_generator with validation, the progress bar is printed regardless of the verbose setting. The cause is that line 216 of tensorflow/python/keras/engine/training_generator.py calls model_iteration to do the validation but does not pass the verbose parameter, meaning the default value of verbose=1 is used.

Solution: pass the missing parameter through, e.g.:

```python
# Run the test loop every epoch during training.
if do_validation and not callbacks.model.stop_training:
    val_results = model_iteration(
        model,
        validation_data,
        steps_per_epoch=validation_steps,
        batch_size=batch_size,
        class_weight=class_weight,
        workers=workers,
        use_multiprocessing=use_multiprocessing,
        max_queue_size=max_queue_size,
        verbose=verbose,
        mode='test')
```
tensorflowtensorflow
tf 2 0 keras model train on batch allocate extremely large cpu memory
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 window 10 x64 1809 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary pip tensorflow version use command below 2 0 0 dev20190504 python version 3 6 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 gpu model and memory geforce gtx 1070 8 gb describe the current behavior when I try to train on batch tf 2 0 allcaote memory endless and finally crash use tf keras backend set learning phase 1 can be temporary solution but this trick doesn work with lateset build dev20190511 describe the expect behavior the model can be train regardless of tf keras backend set learning phase 1 because I use model train on batch it should handle learn phase internally code to reproduce the issue import tensorflow as tf import numpy as np def conv x filter kernel size stride 1 1 padding same initializer he normal c tf keras layer conv2d filter filter kernel size kernel size stride stride padding padding kernel initializer initializer use bias false x return c def conv bn x filter kernel size stride 1 1 padding same initializer he normal bn gamma initializer one c conv x filter filter kernel size kernel size stride stride padding padding initializer initializer c bn tf keras layers batchnormalization gamma initializer bn gamma initializer c return c bn def conv bn relu x filter kernel size stride 1 1 padding same initializer he normal bn gamma initializer one c bn conv bn x filter filter kernel size kernel size stride stride padding padding initializer initializer bn gamma initializer bn gamma initializer return tf keras layers activation relu c bn def conv gap x output filter kernel size 1 1 x conv x filter output filter kernel size kernel size x tf keras layer globalaveragepooling2d x return x def my 
block x output filter inter filter c1 conv bn relu x inter filter 1 1 c2 conv bn relu c1 inter filter 3 3 c3 conv bn c2 output filter 1 1 bn gamma initializer zero p tf keras layer add c3 x return tf keras layers activation relu p def my block inc x output filter inter filter strides1x1 1 1 strides2x2 2 2 c1 conv bn relu x inter filter 1 1 stride strides1x1 c2 conv bn relu c1 inter filter 3 3 stride strides2x2 c3 conv bn c2 output filter 1 1 bn gamma initializer zero stride np multiply strides1x1 strides2x2 s conv bn x output filter 1 1 stride stride shortcut p tf keras layer add c3 s return tf keras layers activation relu p def repeat block x block delegate count kwargs assert count 0 for in range count x block delegate x kwargs return x this line make trick tf keras backend set learning phase 1 shape 299 299 3 input tf keras input shape dtype float32 output conv bn relu input 256 4 7 7 stride 2 2 output tf keras layers maxpool2d 3 3 stride 2 2 padding same output output my block inc outputs 256 256 4 strides2x2 1 1 output repeat block output block delegate my block count 2 output filter 256 inter filter 256 4 output my block inc output 512 512 4 strides2x2 2 2 output repeat block output block delegate my block count 7 output filter 512 inter filter 512 4 output my block inc output 1024 1024 4 strides2x2 2 2 output repeat block output block delegate my block count 40 output filter 1024 inter filter 1024 4 output my block inc output 1024 1024 4 strides2x2 2 2 output repeat block output block delegate my block count 16 output filter 1024 inter filter 1024 4 output my block inc output 1024 1024 4 strides2x2 2 2 output repeat block output block delegate my block count 16 output filter 1024 inter filter 1024 4 output my block inc outputs 2048 2048 4 strides2x2 2 2 output repeat block output block delegate my block count 6 output filter 2048 inter filter 2048 4 output conv gap output 1024 output tf keras layers activation sigmoid output model tf keras model model inputs 
input output output name resnet custom v2 optimizer tf optimizer adam 0 001 model compile optimizer optimizer loss binary crossentropy metric mean absolute error def make batch x np one shape 0 shape 1 shape 2 dtype float32 y np one 1024 dtype float32 return x y dataset tf datum dataset from tensor slice 0 1 2 3 4 5 6 7 8 9 dataset dataset map lambda x tf py function make batch x tf float32 tf float32 dataset dataset batch 10 for x y in dataset step result model train on batch x x y y print f loss step result 0 If you uncomment tf.keras.backend.set_learning_phase(1), the model may be successfully trained, but with tf-2.0.dev20190511 the training always fails regardless of tf.keras.backend.set_learning_phase(1).
tensorflow/tensorflow
Keras fit_generator fails in graph mode when input is a dict
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): simple Keras model combined from examples
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS X 10.14
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-alpha0
- Python version: 2.7
- Bazel version (if compiling from source): no
- GCC/Compiler version (if compiling from source): no
- CUDA/cuDNN version: no (CPU version)
- GPU model and memory: no (CPU version)

Describe the current behavior
When running the provided code, it fails only when fit_generator executes in graph mode. In the other cases fit and fit_generator work well.

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     61 # fails
     62 model.run_eagerly = False
---> 63 model.fit_generator(input_fn())

/usr/local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.pyc in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1513         shuffle=shuffle,
   1514         initial_epoch=initial_epoch,
-> 1515         steps_name='steps_per_epoch')
   1516
   1517   def evaluate_generator(self,

/usr/local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_generator.pyc in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
    255
    256       # ... is deferred or not compiled
--> 257       batch_outs = batch_function(*batch_data)
    258       if not isinstance(batch_outs, list):
    259         batch_outs = [batch_outs]

/usr/local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.pyc in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
   1248       else:
   1249         if not isinstance(K.symbolic_learning_phase(), int):
-> 1250           ins = x + y + sample_weights + [True]
   1251         else:
   1252           ins = x + y + sample_weights

TypeError: unsupported operand type(s) for +: 'dict' and 'list'
```

Describe the expected behavior
tf.keras.Model.fit_generator should work properly with inputs of type dict.

Code to reproduce the issue

```python
import numpy as np
import tensorflow as tf

def input_fn():
    x = np.random.random((1024, 10))
    y = np.random.randint(2, size=(1024, 1))
    x = tf.cast(x, tf.float32)
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    dataset = dataset.shuffle(100)
    dataset = dataset.batch(32)
    dataset = dataset.repeat(10)

    def extract_features(x, y):
        features = {'x': x}
        z = tf.zeros_like(x)
        return features, y

    dataset = dataset.map(extract_features)
    return dataset

class MyModel0(tf.keras.Model):
    def __init__(self):
        super(MyModel0, self).__init__()
        self.features = tf.keras.layers.DenseFeatures(
            [tf.feature_column.numeric_column('x', shape=(10,))])
        self.dense1 = tf.keras.layers.Dense(16, activation='relu')
        self.dense2 = tf.keras.layers.Dense(1, activation='sigmoid')

    def call(self, inputs, training=None, mask=None):
        outputs = self.features(inputs)
        outputs = self.dense1(outputs)
        outputs = self.dense2(outputs)
        return outputs

model = MyModel0()
model.compile(loss='binary_crossentropy',
              optimizer=tf.keras.optimizers.Adam(lr=0.05))

# works
model.run_eagerly = True
model.fit(input_fn())
# works
model.run_eagerly = False
model.fit(input_fn())
# works
model.run_eagerly = True
model.fit_generator(input_fn())
# fails
model.run_eagerly = False
model.fit_generator(input_fn())
```
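For reference, the root of the traceback above is plain list concatenation: Keras builds the batch arguments as `ins = x + y + sample_weights`, which assumes `x` is a list. A minimal stdlib-only sketch of the failure and of one possible flattening workaround (the helper name `build_ins` is hypothetical, not Keras API):

```python
def build_ins(x, y, sample_weights):
    # Keras' train_on_batch effectively does `x + y + sample_weights`,
    # which raises TypeError when x is a dict (dict + list is undefined).
    if isinstance(x, dict):
        # Hypothetical workaround: flatten the dict values into a list first.
        x = list(x.values())
    return x + y + sample_weights

# A list input works directly; a dict input needs flattening first.
ins = build_ins({"x": [[1.0], [2.0]]}, [[0], [1]], [])
```

This mirrors the failure mode only; the real fix would have to live inside `train_on_batch` so that dict-valued `x` is normalized before concatenation.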
tensorflow/tensorflow
ConcreteFunction does not raise an error when the input and tf.TensorSpec are not compatible with each other
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): OS X
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: none
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14.1.dev20190509
- Python version: 3.7.1
- Bazel version (if compiling from source): none
- GCC/Compiler version (if compiling from source): none
- CUDA/cuDNN version: none
- GPU model and memory: none

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior

```python
import tensorflow as tf
tf.enable_v2_behavior()

@tf.function
def test_rank(x):
    return x

test_rank_cf = test_rank.get_concrete_function(
    tf.TensorSpec([None, None], tf.float32))
# runs smoothly; should raise an error here
test_rank_cf(tf.random.normal([2, 3, 4]))
```

Describe the expected behavior
An error should be raised if the input and tf.TensorSpec are not compatible with each other.

Code to reproduce the issue
See the above.
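The compatibility check the report expects is easy to state: a spec of shape (None, None) fixes the rank at 2 while leaving both dimensions free, so a rank-3 input should be rejected. A stdlib-only sketch of that rule (a restatement for illustration, not TensorFlow's actual implementation):

```python
def shapes_compatible(spec_shape, input_shape):
    """Sketch of the rank/dimension rule a (None, None) TensorSpec implies:
    ranks must match, and every known dimension must agree; a None entry
    in the spec matches any size in that position."""
    if len(spec_shape) != len(input_shape):
        return False  # rank mismatch, as in the report's example
    return all(s is None or s == i for s, i in zip(spec_shape, input_shape))

# The report's case: a (None, None) spec vs. a (2, 3, 4) input.
compatible = shapes_compatible((None, None), (2, 3, 4))  # should be False
```

Under this rule the call `test_rank_cf(tf.random.normal([2, 3, 4]))` above would raise instead of running silently.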
tensorflow/tensorflow
Unexpected tf.version.GIT_VERSION for nightly builds
Bug
In Colab, use the following commands to reproduce:

```
!pip uninstall tensorflow -y -q
!pip install tf-nightly-gpu==1.14.1.dev20190510 -q

import tensorflow as tf
tf.version.GIT_VERSION
```

which shows `'v1.12.1-1705-g978532afa9'`. The last hex is not any git commit. BTW, shouldn't it be the git version used to build that package instead? Gentle ping @gunan and @yifeif
tensorflow/tensorflow
About the Transformer tutorial
Bug
transformer top of page have anyone run this experiment and the result of my run have not reach the official result I post my code and help I find the reason python import tensorflow dataset as tfds import tensorflow as tf import time import numpy as np import matplotlib pyplot as plt from tqdm auto import tqdm import os os environ tf cpp min log level 3 print tf version if tf test be gpu available device gpu 0 else device cpu 0 print 1 read dataset and token example metadata tfds load ted hrlr translate pt to en with info true as supervise true train example val example example train example validation tokenizer en tfds feature text subwordtextencoder build from corpus en numpy for pt en in train example target vocab size 2 13 tokenizer pt tfds feature text subwordtextencoder build from corpus pt numpy for pt en in train example target vocab size 2 13 buffer size 20000 batch size 64 add a start and end token to the input and target def encode lang1 lang2 lang1 tokenizer pt vocab size tokenizer pt encode lang1 numpy tokenizer pt vocab size 1 lang2 tokenizer en vocab size tokenizer en encode lang2 numpy tokenizer en vocab size 1 return lang1 lang2 note to keep this example small and relatively fast drop example with a length of over 40 token max length 40 def filter max length x y max length max length return tf logical and tf size x max length tf size y max length operation inside map run in graph mode and receive a graph tensor that do not have a numpy attribute the tokenizer expect a string or unicode symbol to encode it into integer hence you need to run the encoding inside a tf py function which receive an eager tensor have a numpy attribute that contain the string value def tf encode pt en return tf py function encode pt en tf int64 tf int64 print 2 encode and padded batch train dataset train example map tf encode train dataset train dataset filter filter max length cache the dataset to memory to get a speedup while read from it train dataset train dataset 
cache train dataset train dataset shuffle buffer size pad batch batch size pad shape 1 1 train dataset train dataset prefetch tf datum experimental autotune val dataset val example map tf encode val dataset val dataset filter filter max length padded batch batch size pad shape 1 1 def get angle pos I d model angle rate 1 np power 10000 2 I 2 np float32 d model return pos angle rate def positional encoding position d model angle rad get angle np arange position np newaxis np arange d model np newaxis d model apply sin to even indice in the array 2i sin np sin angle rad 0 2 apply cos to odd index in the array 2i 1 cosine np cos angle rad 1 2 pos encode np concatenate sin cosine axis 1 pos encode pos encoding np newaxis return tf cast pos encode dtype tf float32 def create padding mask seq seq tf cast tf math equal seq 0 tf float32 add extra dimension so that we can add the padding to the attention logit return seq tf newaxis tf newaxis batch size 1 1 seq len def create look ahead mask size mask 1 tf linalg band part tf one size size 1 0 return mask seq len seq len def scale dot product attention q k v mask calculate the attention weight q k v must have match lead dimension the mask have different shape depend on its type padding or look ahead but it must be broadcastable for addition args q query shape seq len q depth k key shape seq len k depth v value shape seq len v depth mask float tensor with shape broadcastable to seq len q seq len k default to none return output attention weight matmul qk tf matmul q k transpose b true seq len q seq len k scale matmul qk dk tf cast tf shape k 1 tf float32 scale attention logit matmul qk tf math sqrt dk add the mask to the scale tensor if mask be not none scale attention logit mask 1e9 softmax be normalize on the last axis seq len k so that the score add up to 1 attention weight tf nn softmax scale attention logit axis 1 seq len q seq len k output tf matmul attention weight v seq len v depth return output attention weight class 
multiheadattention tf keras layers layer def init self d model num head super multiheadattention self init self num head num head self d model d model assert d model self num head 0 self depth d model self num head self wq tf keras layer dense d model self wk tf keras layer dense d model self wv tf keras layer dense d model self dense tf keras layer dense d model def split head self x batch size split the last dimension into num head depth transpose the result such that the shape be batch size num head seq len depth x tf reshape x batch size 1 self num head self depth return tf transpose x perm 0 2 1 3 def call self v k q mask batch size tf shape q 0 q self wq q batch size seq len d model k self wk k batch size seq len d model v self wv v batch size seq len d model q self split head q batch size batch size num head seq len q depth k self split head k batch size batch size num head seq len k depth v self split head v batch size batch size num head seq len v depth scale attention shape batch size num head seq len v depth attention weight shape batch size num head seq len q seq len k scale attention attention weight scale dot product attention q k v mask scale attention tf transpose scale attention perm 0 2 1 3 batch size seq len v num head depth concat attention tf reshape scale attention batch size 1 self d model batch size seq len v d model output self dense concat attention batch size seq len v d model return output attention weight def point wise feed forward network d model dff return tf keras sequential tf keras layer dense dff activation relu batch size seq len dff tf keras layer dense d model batch size seq len d model class encoderlayer tf keras layers layer def init self d model num head dff rate 0 1 super encoderlayer self init self mha multiheadattention d model num head self ffn point wise feed forward network d model dff self layernorm1 tf keras layer layernormalization epsilon 1e 6 self layernorm2 tf keras layer layernormalization epsilon 1e 6 self 
dropout1 tf keras layers dropout rate self dropout2 tf keras layers dropout rate def call self x training mask attn output self mha x x x mask batch size input seq len d model attn output self dropout1 attn output training training out1 self layernorm1 x attn output batch size input seq len d model ffn output self ffn out1 batch size input seq len d model ffn output self dropout2 ffn output training training out2 self layernorm2 out1 ffn output batch size input seq len d model return out2 class decoderlayer tf keras layers layer def init self d model num head dff rate 0 1 super decoderlayer self init self mha1 multiheadattention d model num head self mha2 multiheadattention d model num head self ffn point wise feed forward network d model dff self layernorm1 tf keras layer layernormalization epsilon 1e 6 self layernorm2 tf keras layer layernormalization epsilon 1e 6 self layernorm3 tf keras layer layernormalization epsilon 1e 6 self dropout1 tf keras layers dropout rate self dropout2 tf keras layers dropout rate self dropout3 tf keras layers dropout rate def call self x enc output training look ahead mask padding mask enc output shape batch size input seq len d model attn1 attn weights block1 self mha1 x x x look ahead mask batch size target seq len d model attn1 self dropout1 attn1 training training out1 self layernorm1 attn1 x attn2 attn weights block2 self mha2 enc output enc output out1 padding mask batch size target seq len d model attn2 self dropout2 attn2 training training out2 self layernorm2 attn2 out1 batch size target seq len d model ffn output self ffn out2 batch size target seq len d model ffn output self dropout3 ffn output training training out3 self layernorm3 ffn output out2 batch size target seq len d model return out3 attn weights block1 attn weights block2 class encoder tf keras layers layer def init self num layer d model num head dff input vocab size rate 0 1 super encoder self init self d model d model self num layer num layer self embed tf 
keras layer embed input vocab size d model self pos encode positional encoding input vocab size self d model self enc layer encoderlayer d model num head dff rate for in range num layer self dropout tf keras layers dropout rate def call self x training mask seq len tf shape x 1 add embed and position encode x self embed x batch size input seq len d model x tf math sqrt tf cast self d model tf float32 x self pos encode seq len x self dropout x training training for I in range self num layer x self enc layer I x training mask return x batch size input seq len d model class decoder tf keras layers layer def init self num layer d model num head dff target vocab size rate 0 1 super decoder self init self d model d model self num layer num layer self embed tf keras layer embed target vocab size d model self pos encode positional encoding target vocab size self d model self dec layer decoderlayer d model num head dff rate for in range num layer self dropout tf keras layers dropout rate def call self x enc output training look ahead mask padding mask seq len tf shape x 1 attention weight x self embed x batch size target seq len d model x tf math sqrt tf cast self d model tf float32 x self pos encode seq len x self dropout x training training for I in range self num layer x block1 block2 self dec layer I x enc output training look ahead mask padding mask attention weight decoder layer block1 format I 1 block1 attention weight decoder layer block2 format I 1 block2 x shape batch size target seq len d model return x attention weight create the transformer transformer consist of the encoder decoder and a final linear layer the output of the decoder be the input to the linear layer and its output be return class transformer tf keras model def init self num layer d model num head dff input vocab size target vocab size rate 0 1 super transformer self init self encoder encoder num layer d model num head dff input vocab size rate self decoder decoder num layer d model num head dff 
target vocab size rate self final layer tf keras layer dense target vocab size def call self inp tar training enc padding mask look ahead mask dec padding mask enc output self encoder inp training enc padding mask batch size inp seq len d model dec output shape batch size tar seq len d model dec output attention weight self decoder tar enc output training look ahead mask dec padding mask final output self final layer dec output batch size tar seq len target vocab size return final output attention weight num layer 4 d model 128 dff 512 num head 8 input vocab size tokenizer pt vocab size 2 target vocab size tokenizer en vocab size 2 dropout rate 0 1 class customschedule tf keras optimizer schedule learningrateschedule def init self d model warmup step 4000 super customschedule self init self d model d model self d model tf cast self d model tf float32 self warmup step warmup step def call self step arg1 tf math rsqrt step arg2 step self warmup step 1 5 return tf math rsqrt self d model tf math minimum arg1 arg2 learning rate customschedule d model optimizer tf keras optimizer adam learning rate beta 1 0 9 beta 2 0 98 epsilon 1e 9 temp learning rate schedule customschedule d model plt plot temp learning rate schedule tf range 40000 dtype tf float32 plt ylabel learning rate plt xlabel train step loss and metric since the target sequence be pad it be important to apply a padding mask when calculate the loss loss object tf keras loss sparsecategoricalcrossentropy from logit true reduction none def loss function real pre mask tf math logical not tf math equal real 0 loss loss object real pre mask tf cast mask dtype loss dtype loss mask return tf reduce mean loss train loss tf keras metric mean name train loss train accuracy tf keras metric sparsecategoricalaccuracy name train accuracy val loss tf keras metric mean name val loss val accuracy tf keras metric sparsecategoricalaccuracy name val accuracy training and checkpointing transformer transformer num layer d model num 
head dff input vocab size target vocab size dropout rate def create mask inp tar encoder padding mask enc padding mask create padding mask inp use in the 2nd attention block in the decoder this padding mask be use to mask the encoder output dec padding mask create padding mask inp use in the 1st attention block in the decoder it be use to pad and mask future token in the input receive by the decoder look ahead mask create look ahead mask tf shape tar 1 dec target padding mask create padding mask tar combine mask tf maximum dec target padding mask look ahead mask return enc padding mask combine mask dec padding mask create the checkpoint path and the checkpoint manager this will be use to save checkpoint every n epoch checkpoint path checkpoint train ckpt tf train checkpoint transformer transformer optimizer optimizer ckpt manager tf train checkpointmanager ckpt checkpoint path max to keep 5 if a checkpoint exist restore the late checkpoint if ckpt manager late checkpoint ckpt restore ckpt manager late checkpoint print late checkpoint restore epoch 200 train num len 1 for in train dataset tf function def train step inp tar tar inp tar 1 tar real tar 1 enc padding mask combine mask dec padding mask create mask inp tar inp with tf gradienttape as tape prediction transformer inp tar inp true enc padding mask combine mask dec padding mask loss loss function tar real prediction gradient tape gradient loss transformer trainable variable optimizer apply gradient zip gradient transformer trainable variable train loss loss train accuracy tar real prediction tf function def val step inp tar tar inp tar 1 tar real tar 1 enc padding mask combine mask dec padding mask create mask inp tar inp prediction transformer inp tar inp enc padding mask enc padding mask look ahead mask combined mask dec padding mask dec padding mask training false loss loss function tar real prediction ppl tf exp loss val loss ppl val accuracy tar real prediction print 3 traning model portuguese be use as 
the input language and english be the target language for epoch in range epoch train loss reset state train accuracy reset states val loss reset state val accuracy reset states print epoch format epoch 1 start time time inp portuguese tar english with tqdm total train num batch size as pbar for inp tar in train dataset train step inp tar pbar update batch size for inp tar in val dataset val step inp tar end time time print train loss 4f ttrain acc 2f t val loss 4f tval acc 2f t time 2f s format train loss result train accuracy result 100 val loss result val accuracy result 100 end start def evaluate inp sentence start token tokenizer pt vocab size end token tokenizer pt vocab size 1 inp sentence be portuguese hence add the start and end token inp sentence start token tokenizer pt encode inp sentence end token encoder input tf expand dim inp sentence 0 as the target be english the first word to the transformer should be the english start token decoder input tokenizer en vocab size output tf expand dim decoder input 0 for I in range max length enc padding mask combine mask dec padding mask create mask encoder input output prediction shape batch size seq len vocab size prediction attention weight transformer encoder input output false enc padding mask combine mask dec padding mask select the last word from the seq len dimension prediction prediction 1 batch size 1 vocab size predict i d tf cast tf argmax prediction axis 1 tf int32 return the result if the predict i d be equal to the end token if tf equal predict i d tokenizer en vocab size 1 return tf squeeze output axis 0 attention weight concatentate the predicted i d to the output which be give to the decoder as its input output tf concat output predict i d axis 1 return tf squeeze output axis 0 attention weight def plot attention weight attention sentence result layer fig plt figure figsize 16 8 sentence tokenizer pt encode sentence attention tf squeeze attention layer axis 0 for head in range attention shape 0 ax 
fig add subplot 2 4 head 1 plot the attention weight ax matshow attention head 1 cmap viridis fontdict fontsize 10 ax set xtick range len sentence 2 ax set ytick range len result ax set ylim len result 1 5 0 5 ax set xticklabel tokenizer pt decode I for I in sentence fontdict fontdict rotation 90 ax set yticklabel tokenizer en decode I for I in result if I tokenizer en vocab size fontdict fontdict ax set xlabel head format head 1 plt tight layout plt show def translate sentence plot result attention weight evaluate sentence predict sentence tokenizer en decode I for I in result if I tokenizer en vocab size print input format sentence print predict translation format predict sentence if plot plot attention weight attention weight sentence result plot print 4 evaluate model translate este um problema que temos que resolver print real translation this be a problem we have to solve translate os meus vizinhos ouviram sobre esta ideia print real translation and my neighboring home hear about this idea translate vou ent o muito rapidamente partilhar convosco algumas hist rias de algumas coisas m gicas que aconteceram print real translation so I ll just share with you some story very quickly of some magical thing that have happen you can pass different layer and attention block of the decoder to the plot parameter translate este o primeiro livro que eu fiz plot decoder layer4 block2 print real translation this be the first book I ve ever do
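One place worth sanity-checking when results fall short of the official numbers is the learning-rate schedule. The CustomSchedule in the code above implements lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5); a stdlib-only restatement (this mirrors the tf.keras schedule for inspection, it is not a replacement for it):

```python
import math

def transformer_lr(step, d_model=128, warmup_steps=4000):
    """Pure-Python restatement of CustomSchedule: linear warmup for the
    first `warmup_steps`, then inverse-square-root decay."""
    arg1 = 1.0 / math.sqrt(step)
    arg2 = step * warmup_steps ** -1.5
    return (1.0 / math.sqrt(d_model)) * min(arg1, arg2)

# The rate should rise toward warmup_steps and decay afterwards.
warmup_lr = transformer_lr(1)
peak_lr = transformer_lr(4000)
late_lr = transformer_lr(40000)
```

If a reimplementation of the schedule does not peak near `warmup_steps`, training with the paper's Adam settings (beta_2=0.98, epsilon=1e-9) tends to underperform, which is one plausible cause of the gap described above.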
tensorflow/tensorflow
Issues with the Transformer guide
Bug
URL(s) with the issue:

Description of issue (what needs changing): formatting and normalization layers.

Clear description:
1. When describing the multi-head attention, there is this text in one line: "Multi-head attention consists of four parts: Linear layers and split into heads. Scaled dot-product attention. Concatenation of heads. Final linear layer." I guess these were meant as bullet items; it is not formatted correctly.
2. The EncoderLayer and DecoderLayer use LayerNormalization. There is no LayerNormalization in keras.layers; there is only BatchNormalization.
3. The evaluate step creates masks. No masks are needed, since (a) there is no padding for a single sentence, and (b) there is no look-ahead, since we are trying to predict the next word.
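On point 2: LayerNormalization only appears in newer tf.keras releases, so readers on older versions may indeed not find it. What the guide's layer computes is simple to state; a stdlib-only sketch of the per-example computation (an illustration of the math, not the tf.keras layer itself):

```python
import math

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-6):
    """Normalize a single example over its last axis, then scale and
    shift: gamma * (x - mean) / sqrt(var + eps) + beta."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]

out = layer_norm([1.0, 2.0, 3.0])  # roughly zero-mean, unit-variance
```

Unlike BatchNormalization, the statistics here come from a single example's features rather than from the batch, which is why the guide cannot simply substitute one for the other.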
tensorflow/tensorflow
TFLite GPU delegate: SUB operator not supported
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, but the model should be running
- OS Platform and Distribution: Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: OnePlus 3, Android 8.0
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.13
- Python version: 3.6.8
- Bazel version (if compiling from source): 0.24.1
- GCC/Compiler version (if compiling from source): 5.4.0
- CUDA/cuDNN version: nil
- GPU model and memory: nil

Describe the current behavior
The TFLite GPU delegate benchmark tool provides support for the SUB operator to run on the GPU of the mobile. The SUB operator, which is included in the model, is not run on the GPU as part of the GPU delegate but falls back to the CPU.

Describe the expected behavior
The SUB operator should be run on the GPU, as per the documentation provided.

Code to reproduce the issue
Attached with this are the model and error log. The model is a modified version of the DeepLab GPU delegate model provided by Google; input size is 197.

Graph appended code (trial.tflite — SUB model):

```python
output1 = tf.reshape(tf.strided_slice(
    tf.get_default_graph().get_tensor_by_name('ResizeBilinear_2:0'),
    begin=[0, 0, 0, 0], end=[1, 197, 197, 1], strides=[1, 1, 1, 1]),
    shape=[-1, 1])
output2 = tf.reshape(tf.strided_slice(
    tf.get_default_graph().get_tensor_by_name('ResizeBilinear_2:0'),
    begin=[0, 0, 0, 1], end=[1, 197, 197, 2], strides=[1, 1, 1, 1]),
    shape=[-1, 1])
output3 = tf.subtract(output2, output1)
```

Benchmark tool log (trial.tflite — SUB model):

```
adb shell /data/local/tmp/benchmark_model_gpu --graph=/data/local/tmp/trial.tflite --use_gpu=true
Loaded model /data/local/tmp/trial.tflite
resolved reporter
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Next operations are not supported by GPU delegate:
SUB: Incorrect operation type passed
First 74 operations will run on the GPU, and the remaining 1 on the CPU.
Applied GPU delegate.
Initialized session in 744.972ms
Running benchmark for at least 1 iterations and at least 0.5 seconds
count=11 first=91729 curr=37009 min=36876 max=91729 avg=46306.5 std=16106
Running benchmark for at least 50 iterations and at least 1 seconds
count=50 first=37205 curr=37165 min=36706 max=37530 avg=37075.8 std=158

Summary by node type
node type   count   avg_ms   avg %     cdf %      mem KB   times called
DELEGATE    1       37.034   99.906%   99.906%    0.000    1
SUB         1       0.035    0.094%    100.000%   0.000    1

Timings (microseconds): count=50 first=37200 curr=37161 min=36700 max=37520 avg=37070.4 std=158
Memory (bytes): count=0
2 nodes observed
Average inference timings in us: Warmup: 46306.5, Init: 744972, no stats: 37075.8
```

TFLite file: the tflite file is attached below (trial.tflite). Screenshot of modified part: [image]
tensorflow/tensorflow
Custom layer build() should fail if it includes tensor computation
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0
- Python version: any

Describe the current behavior
Since it is required that there not be tensor computation in a layer's build method — we noticed this because our layer serialization failed after this commit — the fix was to move the tensor computation into the call method. Though it was a silent failure, and it would have been very difficult to troubleshoot if it hadn't failed on a nightly.

Describe the expected behavior
The build method should raise some kind of error saying that tensor computation is not allowed in build. It should also probably be mentioned in the documentation of TF or Keras; it's sort of suggested, but not well enough IMO, especially given the hard-to-find failures it produces.

Code to reproduce the issue
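A toy, framework-free illustration of the contract the report asks Keras to enforce — build() may only create state (weights), while all tensor math belongs in call(). This is a sketch of the principle with hypothetical names, not tf.keras code:

```python
class SketchLayer:
    """Toy layer illustrating the build/call split. Computing results in
    build() would bake them into the layer's state at construction time,
    which is what breaks serialization in the report above."""

    def __init__(self):
        self.built = False
        self.scale = None

    def build(self, input_len):
        # Create state only; no computation on inputs happens here.
        self.scale = [1.0] * input_len
        self.built = True

    def call(self, inputs):
        if not self.built:
            self.build(len(inputs))
        # All per-input computation lives in call().
        return [w * v for w, v in zip(self.scale, inputs)]
```

An enforcing framework could, for example, run build() in a mode where creating anything other than weights raises, which is the kind of loud failure the report requests.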
tensorflow/tensorflow
Keras code in Colab from tensorflow.org is not showing TensorFlow version details
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior

Describe the expected behavior

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Building TensorFlow Lite for an arm64 board fails
Bug
System information
- Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): source
- TensorFlow version: r1.14

Describe the problem
I get build errors: undefined reference to `NnApiImplementation()`.

Provide the exact sequence of commands/steps that you executed before running into the problem
I followed the guide. Here is the exact sequence of commands:

```shell
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout -b r1.14 origin/r1.14
./tensorflow/lite/tools/make/download_dependencies.sh
./tensorflow/lite/tools/make/build_aarch64_lib.sh
```

Any other info / logs
I get these errors when I execute build_aarch64_lib.sh:

```
/home/administrator/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/lib/libtensorflow-lite.a(nnapi_delegate.o): In function `tflite::NNAPIAllocation::~NNAPIAllocation()':
nnapi_delegate.cc:(.text+0x28): undefined reference to `NnApiImplementation()'
/home/administrator/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/lib/libtensorflow-lite.a(nnapi_delegate.o): In function `tflite::NNAPIAllocation::NNAPIAllocation(char const*, tflite::ErrorReporter*)':
nnapi_delegate.cc:(.text+0x1a4): undefined reference to `NnApiImplementation()'
/home/administrator/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/lib/libtensorflow-lite.a(nnapi_delegate.o): In function `tflite::NNAPIDelegate::~NNAPIDelegate()':
nnapi_delegate.cc:(.text+0x218): undefined reference to `NnApiImplementation()'
nnapi_delegate.cc:(.text+0x234): undefined reference to `NnApiImplementation()'
/home/administrator/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/lib/libtensorflow-lite.a(nnapi_delegate.o): In function `tflite::addTensorOperands(tflite::Subgraph*, ANeuralNetworksModel*, unsigned int*, std::vector<...>*)':
nnapi_delegate.cc:(.text+0x2b8): undefined reference to `NnApiImplementation()'
/home/administrator/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/lib/libtensorflow-lite.a(nnapi_delegate.o):nnapi_delegate.cc:(.text+0x578): more undefined references to `NnApiImplementation()' follow
collect2: error: ld returned 1 exit status
tensorflow/lite/tools/make/Makefile:227: recipe for target '/home/administrator/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/bin/minimal' failed
make: *** [/home/administrator/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/bin/minimal] Error 1
make: *** Waiting for unfinished jobs....
```
tensorflow/tensorflow
tflite gpu delegate produce very different result
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution linux ubuntu 16 04 mobile device if the issue happen on mobile device lg v30 android 8 0 0 tensorflow instal from source or binary source tensorflow version use command below 0 0 0 gpu experimental python version 3 6 5 describe the current behavior when deeplab mobilenetv2 coco voc trainaug be run on a sample image use gpu delegate the result be significantly different to that generate use cpu the tflite model be convert use tflite convert output file mobilenetv2 coco voc trainaug tflite graph def file frozen graph input array sub 7 output array resizebilinear 3 inference type float inference input type float and here s the tflite model use cpu give the following use random colour screenshot 2019 05 10 13 30 19 gpu delegate give the following use random colour screenshot 2019 05 10 13 29 13 basically all pixel here be wrongly identify to be of the same class at index 0 because the score for class 0 be always the high similarly if mobilenetv2 ade20k train be use the two also give very different result cpu give the following use random colour screenshot 2019 05 10 04 10 59 gpu delegate give the following use random colour screenshot 2019 05 10 04 09 41 here s the tflite model use indeed the model use operation like space to batch but as far as I know these operation should only affect speed performance describe the expect behavior cpu and gpu delegate should produce the same mask
tensorflow/tensorflow
tensorflow v2 variable name uniquification for keras layer in eager be inconsistent
Bug
tensorflow v2 0a when create e g keras model I would assume that when I run make generator model twice in eager mode that the trainable variable name be identical why would I assume this because the tf train checkpoint and checkpointable api make you believe that variable be couple with their corresponding object class and uniquification of variable would be no long necessary and indeed this be the case when create a variable with the same name twice as can be see at the end of the code what do I get instead in the below example the variable of the second make generator model call will be uniquified first call dense kernel 0 batch normalization v2 gamma 0 batch normalization v2 beta 0 conv2d transpose kernel 0 batch normalization v2 1 gamma 0 batch normalization v2 1 beta 0 conv2d transpose 1 kernel 0 batch normalization v2 2 gamma 0 batch normalization v2 2 beta 0 conv2d transpose 2 kernel 0 second dense 1 kernel 0 batch normalization v2 3 gamma 0 batch normalization v2 3 beta 0 conv2d transpose 3 kernel 0 batch normalization v2 4 gamma 0 batch normalization v2 4 beta 0 conv2d transpose 4 kernel 0 batch normalization v2 5 gamma 0 batch normalization v2 5 beta 0 conv2d transpose 5 kernel 0 third dense kernel 0 batch normalization v2 gamma 0 batch normalization v2 beta 0 conv2d transpose kernel 0 batch normalization v2 1 gamma 0 batch normalization v2 1 beta 0 conv2d transpose 1 kernel 0 batch normalization v2 2 gamma 0 batch normalization v2 2 beta 0 conv2d transpose 2 kernel 0 fourth dense kernel 0 batch normalization v2 gamma 0 batch normalization v2 beta 0 conv2d transpose kernel 0 batch normalization v2 1 gamma 0 batch normalization v2 1 beta 0 conv2d transpose 1 kernel 0 batch normalization v2 2 gamma 0 batch normalization v2 2 beta 0 conv2d transpose 2 kernel 0 manual creation python import tensorflow as tf from tensorflow keras import layer def make generator model model tf keras sequential model add layer dense 7 7 256 use bias false input shape 100 model 
add layer batchnormalization model add layer leakyrelu model add layer reshape 7 7 256 assert model output shape none 7 7 256 note none be the batch size model add layer conv2dtranspose 128 5 5 stride 1 1 padding same use bias false assert model output shape none 7 7 128 model add layer batchnormalization model add layer leakyrelu model add layer conv2dtranspose 64 5 5 stride 2 2 padding same use bias false assert model output shape none 14 14 64 model add layer batchnormalization model add layer leakyrelu model add layer conv2dtranspose 1 5 5 stride 2 2 padding same use bias false activation tanh assert model output shape none 28 28 1 return model m1 make generator model noise tf random normal 1 100 generate image m1 noise training false print v name for v in m1 trainable variable m2 make generator model noise tf random normal 1 100 generate image m2 noise training false print v name for v in m2 trainable variable with tf graph as default m1 make generator model noise tf random normal 1 100 generate image m1 noise training false print v name for v in m1 trainable variable with tf graph as default m2 make generator model noise tf random normal 1 100 generate image m2 noise training false print v name for v in m2 trainable variable a tf variable 1 name test b tf variable 1 name test print a print b
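The naming behavior in the report is consistent with a per-graph name registry: in eager mode both models share one global registry, so the second model's layer names get uniquified with `_1`, `_2`, ... suffixes, while each `with tf.Graph().as_default()` block gets a fresh registry and the names reset. A rough stdlib-only sketch of such a registry (illustrative only, not TF's actual implementation):

```python
import collections

def make_uniquifier():
    """Loosely mimics a graph-scoped name registry that appends
    _1, _2, ... to repeated layer/variable base names."""
    counts = collections.defaultdict(int)
    def unique(name):
        n = counts[name]
        counts[name] += 1
        return name if n == 0 else "%s_%d" % (name, n)
    return unique

# one shared registry (eager mode): the second model's names shift
g1 = make_uniquifier()
print(g1("dense"))   # dense
print(g1("dense"))   # dense_1
# a fresh registry (a new graph): names start over, matching the
# third and fourth outputs in the report
g2 = make_uniquifier()
print(g2("dense"))   # dense
```

This also explains why `tf.Variable(1, name='test')` twice yields two variables with the same name at the end of the snippet: raw variables in eager are tracked by object, not by a uniquified graph name.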
tensorflow/tensorflow
building tensorflow lite micro for target bluepill fails
Bug
system information os platform and distribution e g linux ubuntu 16 04 ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device na tensorflow instal from source or binary source tensorflow version b02f70947d python version 3 6 instal use virtualenv pip conda na bazel version if compile from source 0 25 1 gcc compiler version if compile from source arm none eabi g 7 3 1 cuda cudnn version na gpu model and memory na describe the problem when build tensorflow lite micro for the bluepill target the build fail on the file tensorflow lite experimental micro kernels depthwise conv cc with the follow message in file include from tensorflow lite kernels internal common h 49 0 from tensorflow lite experimental micro kernels depthwise conv cc 18 tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h in constructor gemmlowp mutex mutex tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h 70 13 error pthread mutex init be not declare in this scope mutex pthread mutex init m null pthread mutex t tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h in destructor gemmlowp mutex mutex tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h 71 14 error pthread mutex destroy be not declare in this scope mutex pthread mutex destroy m tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h 71 14 note suggest alternative pthread mutexattr t mutex pthread mutex destroy m pthread mutexattr t tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h in member function void gemmlowp mutex lock tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h 73 17 error pthread mutex lock be not declare in this scope void lock pthread mutex lock m tensorflow lite experimental micro tool make download gemmlowp 
profiling instrumentation h 73 17 note suggest alternative pthread mutex t void lock pthread mutex lock m pthread mutex t tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h in member function void gemmlowp mutex unlock tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h 74 19 error pthread mutex unlock be not declare in this scope void unlock pthread mutex unlock m tensorflow lite experimental micro tool make download gemmlowp profiling instrumentation h 74 19 note suggest alternative pthread mutex t void unlock pthread mutex unlock m pthread mutex t tensorflow lite experimental micro tool make makefile 209 recipe for target tensorflow lite experimental micro tool make gen bluepill cortex m3 obj tensorflow lite experimental micro kernels depthwise conv o fail make tensorflow lite experimental micro tool make gen bluepill cortex m3 obj tensorflow lite experimental micro kernels depthwise conv o error 1 provide the exact sequence of command step that you execute before run into the problem make f tensorflow lite experimental micro tool make makefile target bluepill test any other info log revert commit a52f5b54e8 fix the build issue
tensorflow/tensorflow
tf.data.Dataset.padded_batch does not actually pad data when working on multiple gpus
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device x tensorflow instal from source or binary binary tensorflow version use command below v1 12 0 9492 g2c319fb415 2 0 0 alpha0 python version 3 6 bazel version if compile from source x gcc compiler version if compile from source x cuda cudnn version 10 0 gpu model and memory tesla k80 12 gb ram exact command to reproduce see test below describe the problem the problem appear when one want to use a multi gpu compile model with variable shape datum feed from a tf data pipeline batch datum use padded batch compulsory here since the data be of variable shape do not seem to really pad the tensor comprise in a batch call fit or predict after do so raise valueerror input tensor shape do not match for distribute tensor inputs perreplica see below a mvce a python generator be use to generate tensor of variable shape 4 4 1 then 5 5 1 then 4 4 1 then 5 5 1 etc a tf datum dataset be build use tf datum dataset from generator include a consistency check a dummy model be build use the tf keras functional api and a tf distribute mirroredstrategy on several gpu source code log mvce import tensorflow as tf import numpy as np def data generator create a generator of variable sized tensor tensor 1 4 4 1 tensor 2 5 5 1 tensor 3 4 4 1 tensor 4 5 5 1 I 0 while true size 4 I 2 4 I 2 1 x tf random normal shape size yield x I 1 def build datum pipeline build a tf data pipeline yield variably sized tensor dataset tf datum dataset from generator generator datum generator output type tf float32 output shape none none 1 pad batch should output batch size 5 5 1 shape tensor dataset dataset padded batch batch size 2 pad shape none none 1 padding value 0 juste make sure that the tf datum pipeline be as expect it iter dataset batch 1 next it tf 
datum experimental get single element dataset batch 2 next it tf datum experimental get single element dataset np testing assert allclose batch 1 shape batch 2 shape np testing assert allclose batch 1 shape 2 5 5 1 return dataset def run on multi gpu build multi gpu model use mirroredstrategy and try to predict strat tf distribute mirroredstrategy with strat scope I tf keras layers input none none 1 model tf keras model model I I model compile optimizer adam loss binary crossentropy dataset build datum pipeline model predict dataset step 1 verbose 1 if name main run on multi gpu
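The invariant the consistency check above asserts, and which appears to be violated under `MirroredStrategy`, is that `padded_batch` pads every element of a batch up to the maximum size along each `None` dimension before the batch is split across replicas. That invariant can be sketched without tf.data at all (a hypothetical helper, not the tf.data implementation):

```python
def pad_batch(batch, pad_value=0.0):
    """Pad a list of 2-d row-lists to the max height/width found in
    the batch, i.e. what padded_batch with padded_shapes=(None, None)
    is expected to produce. Stdlib-only illustration."""
    h = max(len(m) for m in batch)
    w = max(len(r) for m in batch for r in m)
    out = []
    for m in batch:
        rows = [r + [pad_value] * (w - len(r)) for r in m]
        rows += [[pad_value] * w for _ in range(h - len(rows))]
        out.append(rows)
    return out

a = [[1.0, 2.0], [3.0, 4.0]]   # a 2x2 element
b = [[5.0, 6.0, 7.0]]          # a 1x3 element
padded = pad_batch([a, b])
print([(len(m), len(m[0])) for m in padded])  # [(2, 3), (2, 3)]
```

After padding, both elements share the same shape, so splitting the batch across replicas should never produce mismatched per-replica tensor shapes; the `InputsPerReplica` error suggests the split is seeing the unpadded elements.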
tensorflow/tensorflow
tutorial next steps page is giving 404
Bug
url(s) with the issue:

description of the issue (what needs to change): this page is not opening.

clear description: the link is getting a 404.

requested visuals (if applicable): the tensorflow "404: page not found" page.
tensorflow/tensorflow
tflite calibration and quantization: i/o min/max mismatch on some operators
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 arch linux 5 0 10 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary source tensorflow version use command below run base commit 3ea8756ce6d08a473d78347fb7b876ad5c1be973 relate pr relate issue python version 3 7 3 bazel version if compile from source 0 25 0 gcc compiler version if compile from source 8 3 0 cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior during calibration and quantization see tensorflow lite tool optimize calibration the tensor feed a relu operator show that its minimum value be 0 instead of a negative number this may lead to a quantize model that produce erroneous result however I currently can not confirm if the say behavior be cause erroneous quantization describe the expect behavior I expect relu s input minimum value for quantization to match the input tensor s actual minimum value a negative real number code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem an mscoco train float point 32bit model that I be attempt to calibrate and quantize be available at calibrate and quantize py be an example script that run tf lite s calibration and quantization tool it can be execute as such adjust path as need pythonpath path to tensorflow tpu model official 
retinanet pythonpath python calibrate and quantize py input tflite file retinanet float32 tflite output tflite file retinanet int8 tflite train file pattern path to mscoco tfrecord train other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach while run the attach python script it report the follow warning note the output min max be different from the input min max for op max pool 2d at index 2 in subgraph 0 this be legal but should happen rarely note the output min max be different from the input min max for op resize bilinear at index 77 in subgraph 0 this be legal but should happen rarely note the output min max be different from the input min max for op resize bilinear at index 80 in subgraph 0 this be legal but should happen rarely and during one my gdb session I check the i o min max value of a max pool 2d operator highlight by one of the warning above the output minimum value of the max pool 2d operator be 0 instead of 19 3 which would throw the say warning thread 1 calibrate and q hit breakpoint 2 tflite optimize anonymous namespace quantizeopoutput model 0x55822760c890 subgraph idx 0 op idx 2 property output idx 0 error reporter 0x558223944030 at tensorflow lite tool optimize quantize model cc 504 504 printf gdb p min 5 19 2778091 gdb p max 6 22 6024609 gdb p output tensor quantization min 0 10 0 gdb p op code 11 tflite builtinoperator max pool 2d gdb p output tensor quantization max 0 12 22 6024609 gdb bt 0 tflite optimize anonymous namespace quantizeopoutput model 0x55822760c890 subgraph idx 0 op idx 2 property output idx 0 error reporter 0x558223944030 at tensorflow lite tool optimize quantize model cc 504 1 0x00007f78cc28497c in tflite optimize anonymous namespace quantizeweightsinputoutput builder 0x7fff304b4860 model 0x55822760c890 allow float false error reporter 0x558223944030 at tensorflow lite tool optimize quantize model 
cc 570 2 0x00007f78cc284faf in tflite optimize quantizemodel builder 0x7fff304b4860 model 0x55822760c890 input type 0x7fff304b4844 tflite tensortype int8 output type 0x7fff304b4848 tflite tensortype int8 allow float false error reporter 0x558223944030 at tensorflow lite tool optimize quantize model cc 638 3 0x00007f78cc26a8c1 in tflite calibration wrapper calibrationwrapper quantizemodel this 0x5582274c7790 input py type 1 output py type 1 allow float false at tensorflow lite python optimize calibration wrapper cc 201 4 0x00007f78cc268f88 in wrap calibrationwrapper quantizemodel args 0x7f78cc697458 at bazel out k8 dbg bin tensorflow lite python optimize tensorflow lite wrap calibration wrapper cc 3418 5 0x00007f795c552e68 in pymethoddef rawfastcallkeyword from usr lib libpython3 7 m so 1 0 6 0x00007f795c553101 in pycfunction fastcallkeyword from usr lib libpython3 7 m so 1 0 7 0x00007f795c5c3d19 in pyeval evalframedefault from usr lib libpython3 7 m so 1 0 8 0x00007f795c5526db in pyfunction fastcallkeyword from usr lib libpython3 7 m so 1 0 9 0x00007f795c5c36ea in pyeval evalframedefault from usr lib libpython3 7 m so 1 0 10 0x00007f795c50bd09 in pyeval evalcodewithname from usr lib libpython3 7 m so 1 0 11 0x00007f795c552882 in pyfunction fastcallkeyword from usr lib libpython3 7 m so 1 0 12 0x00007f795c5bff9c in pyeval evalframedefault from usr lib libpython3 7 m so 1 0 13 0x00007f795c50bd09 in pyeval evalcodewithname from usr lib libpython3 7 m so 1 0 14 0x00007f795c552882 in pyfunction fastcallkeyword from usr lib libpython3 7 m so 1 0 15 0x00007f795c5bf22d in pyeval evalframedefault from usr lib libpython3 7 m so 1 0 16 0x00007f795c5526db in pyfunction fastcallkeyword from usr lib libpython3 7 m so 1 0 17 0x00007f795c5bf22d in pyeval evalframedefault from usr lib libpython3 7 m so 1 0 18 0x00007f795c50bd09 in pyeval evalcodewithname from usr lib libpython3 7 m so 1 0 19 0x00007f795c552882 in pyfunction fastcallkeyword from usr lib libpython3 7 m so 1 0 20 
0x00007f795c5c36ea in pyeval evalframedefault from usr lib libpython3 7 m so 1 0 21 0x00007f795c50bd09 in pyeval evalcodewithname from usr lib libpython3 7 m so 1 0 type for more q to quit c to continue without page q
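To see why the clamped output minimum matters, it helps to compute the affine quantization parameters each range would produce: a scale mismatch between an op's input and output is exactly what the calibration warnings flag. A hedged stdlib sketch of the usual int8 affine scheme (the nudging details in the real tool may differ):

```python
def quant_params(rmin, rmax, qmin=-128, qmax=127):
    """Compute int8 affine-quantization scale/zero-point from a
    recorded [rmin, rmax] range, forcing the range to contain 0 as
    affine quantizers do. Assumes a non-degenerate range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

# input range reported by the gdb session for the max_pool_2d input:
s_in, z_in = quant_params(-19.2778091, 22.6024609)
# output range recorded with min clamped to 0 (the suspicious case):
s_out, z_out = quant_params(0.0, 22.6024609)
# different scales mean input and output live on different grids,
# which is what "output min/max differ from input min/max" warns about
print(s_in != s_out)  # True
```

For a max-pool, the output range can legitimately be a subset of the input range (the max of non-negative activations never goes negative), but a hard clamp of the recorded minimum to 0 on an op whose input is genuinely negative would be a calibration bug.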
tensorflow/tensorflow
bug in keras guide
Bug
url s with the issue multiple gpu description of issue what need change the last line of this guide produce a bug when run on the associated colab notebook maybe everywhere I haven t check raise list and define invalidargumenterror traceback most recent call last usr local lib python3 6 dist package tensorflow python client session py in do call self fn args 1333 try 1334 return fn args 1335 except error operror as e 19 frame invalidargumenterror no opkernel be register to support op ncclallreduce use by node training tfoptimizer ncclallreduce with these attrs reduction sum share name c0 t dt float num device 1 register device cpu gpu xla cpu xla gpu register kernel node training tfoptimizer ncclallreduce during handling of the above exception another exception occur invalidargumenterror traceback most recent call last usr local lib python3 6 dist package tensorflow python client session py in do call self fn args 1346 pass 1347 message error interpolation interpolate message self graph 1348 raise type e node def op message 1349 1350 def extend graph self invalidargumenterror no opkernel be register to support op ncclallreduce use by node training tfoptimizer ncclallreduce define at usr local lib python3 6 dist package tensorflow estimator python estimator estimator py 1254 with these attrs reduction sum share name c0 t dt float num device 1 register device cpu gpu xla cpu xla gpu register kernel be there currently visual if not will it clarify the content submit a pull request no I m only begin in tf keras
tensorflow/tensorflow
tensorflow v2: tensorflow.python.ops.linalg_ops.svd hangs
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 win 10 tensorflow instal from source or binary pip tensorflow version use command below 2 0a gpu python version 3 7 3 cuda cudnn version 10 0 gpu model and memory rtx 2060 describe the current behavior python import tensorflow as tf from tensorflow python ops import linalg ops mat tf random uniform 2048 2048 s u v linalg op svd mat when run this example with my tensorflow gpu installation it hang until I cancel with ctrl c which take very long 2019 05 08 13 20 05 994717 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 2019 05 08 13 20 06 002398 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library nvcuda dll 2019 05 08 13 20 06 250271 I tensorflow core common runtime gpu gpu device cc 1467 find device 0 with property name geforce rtx 2060 major 7 minor 5 memoryclockrate ghz 1 71 pcibusid 0000 1f 00 0 totalmemory 6 00gib freememory 4 89gib 2019 05 08 13 20 06 256846 I tensorflow core common runtime gpu gpu device cc 1546 add visible gpu device 0 2019 05 08 13 20 06 803341 I tensorflow core common runtime gpu gpu device cc 1015 device interconnect streamexecutor with strength 1 edge matrix 2019 05 08 13 20 06 806295 I tensorflow core common runtime gpu gpu device cc 1021 0 2019 05 08 13 20 06 810187 I tensorflow core common runtime gpu gpu device cc 1034 0 n 2019 05 08 13 20 06 812341 I tensorflow core common runtime gpu gpu device cc 1149 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 4618 mb memory physical gpu device 0 name geforce rtx 2060 pci bus i d 0000 1f 00 0 compute capability 7 5 2019 05 08 13 20 07 023250 I tensorflow core kernel cuda solver cc 159 create cudasolver handle for stream 000001c2bfa888a0 traceback most recent call last file svd 
check py line 6 in s u v linalg op svd mat file c progam miniconda envs tf2 gpu lib site package tensorflow python op linalg ops py line 418 in svd tensor compute uv compute uv full matrix full matrix name name file c progam miniconda envs tf2 gpu lib site package tensorflow python ops gen linalg ops py line 2409 in svd full matrix full matrix keyboardinterrupt on tensorflow most recent v1 version it just work also when run the follow it work as well python import tensorflow as tf from tensorflow python ops import linalg ops mat tf random uniform 2048 2048 with tf device device cpu 0 s u v linalg op svd mat
tensorflow/tensorflow
godoc link error
Bug
url(s) with the issue:

description of issue (what needs to change): the link doesn't work. it strips the trailing ".md" off the url when it is redirected through tensorflow.org.

code (correct links?): it is correct, I think; the redirect is broken. for example, if you change … to … it works.

parameters defined: n/a. returns defined: n/a. raises listed and defined: n/a. usage example: n/a. requested visuals (if applicable): n/a.

submitting a pull request? I don't think a pr is the solution here; if there is a better url though, I'm happy to submit a pr.
tensorflow/tensorflow
gfile.Copy does not overwrite dst file properly on posix filesystems
Bug
describe the current behavior:
gfile.Copy(oldpath, newpath, overwrite=True) does not truncate the destination file before overwriting. that means if the src file is shorter than the dst file, the resulting dst file contains a mix of the two.

describe the expected behavior:
gfile.Copy(oldpath, newpath, overwrite=True) results in the dst file having exactly the same content as the src file.

code to reproduce the issue:

```shell
echo aaa > a.txt
echo bbbbbb > b.txt
python3 -c "from tensorflow import gfile; gfile.Copy('a.txt', 'b.txt', overwrite=True)"
cat b.txt
# aaa
# bb
```

tested with pip3 install tensorflow==1.13.1, python 3.5.2. b.txt should have "aaa" as the content, not "aaa\nbb". ref:
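The underlying failure mode is easy to reproduce with plain posix file operations: opening the destination without O_TRUNC and writing from offset 0 leaves the old tail in place. A stdlib-only reproduction of that mix (the helper name is hypothetical, not the gfile implementation):

```python
import os
import tempfile

def copy_no_truncate(src, dst):
    """Reproduce the reported bug: open dst for in-place writing
    WITHOUT truncating, then write src's bytes from offset 0."""
    with open(src, "rb") as f:
        data = f.read()
    fd = os.open(dst, os.O_WRONLY | os.O_CREAT)  # note: no os.O_TRUNC
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")
with open(a, "w") as f:
    f.write("aaa\n")
with open(b, "w") as f:
    f.write("bbbbbb\n")
copy_no_truncate(a, b)
with open(b) as f:
    print(repr(f.read()))  # 'aaa\nbb\n' -- the old dst tail survives
```

Adding os.O_TRUNC to the open flags (or calling os.ftruncate after the write) yields the expected "aaa\n".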
tensorflow/tensorflow
move the dockerfile to ubuntu 18.04
Bug
the current dockerfiles (l22) have been based off of 16.04. it would be better if we could move to 18.04; the corresponding versions of tf serving are already using an 18.04 base ubuntu in their dockerfiles.
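A minimal sketch of the kind of base-image bump being requested (the exact tag and surrounding instructions are assumptions, not the contents of the actual file):

```dockerfile
# before
# FROM ubuntu:16.04

# after: match what the tf serving dockerfiles already use
FROM ubuntu:18.04
```

In practice the bump usually also means revalidating apt package names and the default python/gcc versions that differ between 16.04 and 18.04.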
tensorflow/tensorflow
tf.distribute behaves inconsistently when using custom loss (bug)
Bug
system information os platform and distribution e g linux ubuntu 16 04 win10 tensorflow instal from source or binary pip tensorflow version use command below 2 0 alpha gpu python version 3 7 describe the current behavior update I try the late nightly building still no luck and now more error info show tensorflow python framework error impl internalerror fail copy input tensor from job localhost replica 0 task 0 device cpu 0 to job localhost replica 0 task 0 device gpu 0 in order to run deleteiterator no unary variant device copy function find for direction 1 and variant type index class tensorflow data iteratorresource deleter op deleteiterator below be on alpha 2 0 gpu I m try to transfer my tf1 0 code to 2 0 these day and want to make use of the distribution strategy to optimize the multi gpu training scheme simply description here my goal be adopt a complex loss that compute by args more than just y true y pre in a distribution manner my old version implementation of the custom loss be follow fchollet suggestion issuecomment 473139603 by use add loss add metric function however if I want to use tf distribution the add loss way be not allow valueerror we currently do not support compile the model with distribution strategy if model add loss tensor or model add metric tensor have be call so I be use another work around from here a work simplify code be import tensorflow as tf from tensorflow python import kera from tensorflow python keras import layer as kl from tensorflow python keras import model as km import numpy as np print tf version def my custom loss wrapper input weight def real loss y true y pre expand the out put of binary crossentropy 64 64 to 64 64 1 to match the shape of input weight bce loss tf expand dim keras loss binary crossentropy y true y pre axis 1 return bce loss input weight return real loss fake img np one 64 64 3 fake label np one 64 64 1 fake weight np one 64 64 1 5 dataset tf datum dataset from tensor fake img fake weight fake label 
repeat 100 batch 10 img input kl input shape 64 64 3 weight input kl input shape 64 64 1 x kl conv2d 32 3 3 stride 2 padding same img input x kl conv2dtranspose 32 3 3 stride 2 padding same x mask kl conv2d 1 1 1 stride 1 activation sigmoid name mask x model km model input img input weight input output mask model compile loss my custom loss wrapper weight input return a function real loss y true y pre optimizer sgd model fit dataset epoch 1000 since above code be work good let s enable the distribution strategy strategy tf distribute mirroredstrategy cross device op tf distribute hierarchicalcopyallreduce with strategy scope img input kl input shape 64 64 3 weight input kl input shape 64 64 1 x kl conv2d 32 3 3 stride 2 padding same img input x kl conv2dtranspose 32 3 3 stride 2 padding same x mask kl conv2d 1 1 1 stride 1 activation sigmoid name mask x model km model input img input weight input output mask model compile loss mse loss my custom loss wrapper weight input return a function real loss y true y pre optimizer sgd model fit dataset epoch 1000 no luck this time tensorflow python framework error impl invalidargumenterror you must feed a value for placeholder tensor input 2 with dtype float and shape 64 64 1 node input 2 mask sample weight 1 11 op inference keras scratch graph 1609 I be try to debug this by trace the actual input value in this file however it s exactly same with the non distribute version and the training engine just can not get the input 2 which I double check do have value match the type and shape in actual input my very personal guess be that in the strategy scope the context somehow nest and the feeding datum process therefore fail when custom loss involve with another layer of the model like in my example input weight except y true and y pre if I don use the input weight value in the function the other code can still work def my custom loss wrapper input weight def real loss y true y pre expand the out put of binary crossentropy 64 64 
to 64 64 1 to match the shape of input weight bce loss tf expand dim keras loss binary crossentropy y true y pre axis 1 return bce loss return real loss I do not know if this the reason why add loss not support by tf distribution now any thought be appreciate describe the expect behavior custom loss with multiple input layer work consistent in both distribute and non distribute env
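The wrapper pattern at the core of the report, capturing an extra input in a closure so the loss keeps the `(y_true, y_pred)` signature keras expects, can be sketched without tf at all. That also makes clear the closure itself is sound; the failure is in how the distributed fit feeds the captured placeholder. A hypothetical stdlib sketch of the pattern:

```python
import math

def loss_wrapper(weights):
    """Closure-based per-element weighting: `weights` is captured so
    the inner loss still has the (y_true, y_pred) signature. Pure
    stdlib illustration of the pattern, not keras code."""
    def weighted_bce(y_true, y_pred):
        eps = 1e-7
        total = 0.0
        for t, p, w in zip(y_true, y_pred, weights):
            p = min(max(p, eps), 1 - eps)
            bce = -(t * math.log(p) + (1 - t) * math.log(1 - p))
            total += w * bce
        return total / len(y_true)
    return weighted_bce

loss_fn = loss_wrapper(weights=[1.0, 5.0])   # second pixel counts 5x
lo = loss_fn([1.0, 1.0], [0.9, 0.9])
hi = loss_fn([1.0, 1.0], [0.9, 0.5])         # error lands on the heavy pixel
print(hi > lo)  # True
```

The mirrored-strategy failure then comes down to the captured `weight_input` not being fed on each replica, even though the same closure works in the non-distributed case.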
tensorflow/tensorflow
typo in tensorflow for poets
Bug
existing url containing the issue: (step 3).

description of issue (what needs to change): typo.

clear description: "mobilenets are a a small, efficient convolutional neural network" should be "mobilenets are a small, efficient convolutional neural network".

submitting a pr? I couldn't find the source for this codelab to fix the error. is this tutorial obsolete, or has it been updated for tensorflow 2.0? the issue has also been reported here.
tensorflow/tensorflow
loading images
Bug
there are some problems in loading images:

```python
for n in range(3):
  image_path = random.choice(all_image_paths)
  display.display(display.Image(image_path))
  print(caption_image(image_path))
  print()
```

error:

```
KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>
      2   image_path = random.choice(all_image_paths)
      3   display.display(display.Image(image_path))
      4   print(caption_image(image_path))
      5   print()

<ipython-input> in caption_image(image_path)
      3 def caption_image(image_path):
      4     image_rel = pathlib.Path(image_path).relative_to(data_root)
      5     return "Image (CC BY 2.0) " + ' - '.join(attributions[str(image_rel)].split(' - ')[:-1])
      6

KeyError: 'daisy/3764116502_f394428ee0_n.jpg'
```

please, what is the problem?
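One common cause of exactly this KeyError is a path-separator mismatch: the attribution keys use forward slashes, while `str()` of a relative path on windows produces backslashes. Whether that is the reporter's situation is a guess, but the mechanism is easy to show with pathlib (the attribution entry below is made up for illustration):

```python
import pathlib

# on windows, str() of a relative path uses backslashes:
win_rel = pathlib.PureWindowsPath("daisy") / "3764116502_f394428ee0_n.jpg"

# attribution keys in the tutorial's license file use forward slashes
attributions = {"daisy/3764116502_f394428ee0_n.jpg": "photographer - cc-by"}

print(str(win_rel) in attributions)        # False: 'daisy\\3764116502...'
print(win_rel.as_posix() in attributions)  # True: separators normalized
```

If that is the cause, looking up `attributions[image_rel.as_posix()]` instead of `attributions[str(image_rel)]` makes the lookup platform-independent.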
tensorflowtensorflow
tf signal inverse stft attributeerror int object have no attribute value in 2 0 0 alpha0
Bug
there appears to be a bug in tf.signal.inverse_stft: when testing for `real_frames.shape[1].value is None`, `real_frames.shape[1]` is an integer and does not have a `.value`.

reproducible code:

```python
print(tf.__version__)
frame_length = 512
frame_step = 256
signal = tf.random.uniform(shape=[1000])
x = tf.signal.stft(signal, frame_length, frame_step)
y = tf.signal.inverse_stft(x, frame_length, frame_step)
```

2.0.0-alpha0:

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 tf.signal.inverse_stft(x, frame_length, frame_step)

/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tensorflow/python/ops/signal/spectral_ops.py in inverse_stft(stfts, frame_length, frame_step, fft_length, window_fn, name)
    242   if (frame_length_static is None or
    243       real_frames.shape.ndims is None or
    244       real_frames.shape[1].value is None):
    245     real_frames = real_frames[..., :frame_length]
    246     real_frames_rank = array_ops.rank(real_frames)

AttributeError: 'int' object has no attribute 'value'
```
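The crash is a tf1-style shape access: in 2.x, `shape[1]` is already a plain int (or None) rather than a Dimension object carrying a `.value` attribute. A tiny stdlib sketch of a compatibility accessor (the `Dimension` class here is a mock standing in for the tf1 type, not tensorflow code):

```python
class Dimension:
    """Mock of the tf1-era Dimension, which wrapped the int in an
    object with a .value attribute (None when unknown)."""
    def __init__(self, value):
        self.value = value

def dim_value(dim):
    """Handle both styles: Dimension objects (with .value) and the
    plain ints/None that tf2 shapes yield."""
    return dim.value if hasattr(dim, "value") else dim

print(dim_value(Dimension(512)))  # 512
print(dim_value(512))             # 512
print(dim_value(None))            # None
```

This is essentially the shape of the fix needed inside spectral_ops: read the dimension through an accessor that tolerates both representations instead of assuming `.value` exists.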
tensorflow/tensorflow
shape error occurs after compiling keras model with tf.keras.losses.CategoricalCrossentropy
Bug
system information os platform and distribution linux ubuntu 18 04 tensorflow instal from source or binary binary tensorflow version 1 13 1 python version 3 6 3 describe the current behavior if I want to specify for example a reduction method of the loss function I will need to explicitly create an instance of tf keras loss categoricalcrossentropy and pass it to model compile instead of pass the categorical crossentropy keyword however compile a model with an instance of tf keras loss categoricalcrossentropy result in the follow shape error when call model fit afterwards invalidargumenterror logit and label must have the same first dimension get logit shape 1000 10 and label shape 10000 node loss 18 dense 37 loss categoricalcrossentropy sparsesoftmaxcrossentropywithlogit sparsesoftmaxcrossentropywithlogit describe the expect behavior I be expect similar behavior for both categorical crossentropy and tf keras loss categoricalcrossentropy code to reproduce the issue a short mnist example I realize that remove the to categorical transformation resolve the shape error but the model do not learn properly anymore from tensorflow python keras datasets import mnist from tensorflow python keras util import to categorical import tensorflow as tf import numpy as np x train y train x test y test mnist load datum x train x test x train astype float 255 x test astype float 255 x train x test x train reshape len x train 28 28 1 x test reshape len x test 28 28 1 y train y test to categorical y train to categorical y test model definition model tf keras sequential tf keras layer conv2d 16 8 stride 2 padding same activation relu input shape 28 28 1 tf keras layers maxpool2d 2 1 tf keras layer conv2d 32 4 stride 2 padding valid activation relu tf keras layers maxpool2d 2 1 tf keras layer flatten tf keras layer dense 32 activation relu tf keras layer dense 10 activation softmax optimizer tf train gradientdescentoptimizer learning rate 0 1 this cause error loss tf keras loss 
categoricalcrossentropy from logit false use the keyword everything work loss categorical crossentropy compile model with keras model compile optimizer optimizer loss loss metric categorical accuracy train model with keras model fit x train y train epoch 5 batch size 1000 validation datum x test y test verbose 2 thank you
tensorflowtensorflow
eigen version bump break nightly avx512 build with gcc 6 3
Bug
system information os platform and distribution e g linux ubuntu 16 04 debian 9 9 mobile device e g iphone 8 pixel 2 samsung galaxy n a tensorflow instal from source or binary source tensorflow version e432bf03931f4062f7c5e3a1553aff61a7294751 python version 2 7 13 instal use virtualenv pip conda virtualenv bazel version if compile from source 0 24 1 gcc compiler version if compile from source debian 6 3 0 18 deb9u1 6 3 0 20170516 cuda cudnn version n a gpu model and memory n a describe the problem in a gce vm with avx512 support cpu bazel build config opt tensorflow tool pip package build pip package any other info log in file include from usr lib gcc x86 64 linux gnu 6 include immintrin h 59 0 from external eigen archive unsupported eigen cxx11 eigen src core util configurevectorization h 318 from external eigen archive unsupported eigen cxx11 eigen core 22 from external eigen archive unsupported eigen cxx11 tensor 14 from third party eigen3 unsupported eigen cxx11 tensor 1 from tensorflow core framework tensor shape h 21 from tensorflow core kernel conv grad op h 164 from tensorflow core kernel conv grad filter op cc 21 usr lib gcc x86 64 linux gnu 6 include avx512vlbwintrin h in function typename eigen internal enable if mask load available packet type eigen internal ploadu const typename eigen internal unpacket trait type typename eigen internal unpacket trait mask t with packet eigen internal packet16h usr lib gcc x86 64 linux gnu 6 include avx512vlbwintrin h 105 1 error inline fail in call to always inline m256i mm256 maskz loadu epi16 mmask16 const void target specific option mismatch mm256 maskz loadu epi16 mmask16 u void const p in file include from external eigen archive unsupported eigen cxx11 eigen core 202 0 from external eigen archive unsupported eigen cxx11 tensor 14 from third party eigen3 unsupported eigen cxx11 tensor 1 from tensorflow core framework tensor shape h 21 from tensorflow core kernel conv grad op h 164 from tensorflow core kernel conv 
grad filter op cc 21 external eigen archive unsupported eigen cxx11 eigen src core arch gpu packetmathhalf h 598 38 note call from here result x mm256 maskz loadu epi16 mask from target tensorflow tool pip package build pip package fail to build use verbose failure to see the command line of fail build step info elapse time 387 379s critical path 200 24s info 763 process 763 local fail build do not complete successfully bisect to b53fd7648b5ca7eaeceb602617433be6e9a4abec report the follow error in file include from usr lib gcc x86 64 linux gnu 6 include immintrin h 59 0 from external eigen archive eigen src core util configurevectorization h 318 from external eigen archive eigen core 22 from third party eigen3 eigen core 1 from tensorflow compiler xla service cpu runtime conv2d h 19 from tensorflow compiler xla service cpu runtime conv2d cc 16 usr lib gcc x86 64 linux gnu 6 include avx512vlbwintrin h in function typename eigen internal enable if mask load available packet type eigen internal ploadu const typename eigen internal unpacket trait type typename eigen internal unpacket trait mask t with packet eigen internal packet16h usr lib gcc x86 64 linux gnu 6 include avx512vlbwintrin h 105 1 error inline fail in call to always inline m256i mm256 maskz loadu epi16 mmask16 const void target specific option mismatch mm256 maskz loadu epi16 mmask16 u void const p in file include from external eigen archive eigen core 202 0 from third party eigen3 eigen core 1 from tensorflow compiler xla service cpu runtime conv2d h 19 from tensorflow compiler xla service cpu runtime conv2d cc 16 external eigen archive eigen src core arch gpu packetmathhalf h 598 38 note call from here result x mm256 maskz loadu epi16 mask from target tensorflow tool pip package build pip package fail to build use verbose failure to see the command line of fail build step info elapse time 639 500 critical path 90 25 info 4690 process 4690 local fail build do not complete successfully revert 
b53fd7648b5ca7eaeceb602617433be6e9a4abec resolve the issue
tensorflowtensorflow
tflite doc conv 2d transpose transpose conv
Bug
exist url contain the issue description of issue what need change tensorflow r1 13 conv 2d transpose op be not present in tensorflow lite schema after glimpse toco source code tf nn conv2d transpose conv2dbackpropinput be convert to transpose conv l1778 so update tflite documentation replace conv 2d transpose with transpose conv would be nice
tensorflowtensorflow
dataset in tf2 0 lack of key property and method which already in tf1 13
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mac mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary pip install tensorflow version use command below 2 0 alpha python version 3 6 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior dataset in tf2 0 lack of key property and method which already in tf1 13 for example property output shape output type method make one shot iterator describe the expect behavior these key property and method should in dataset of tf2 0 code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
tensorflowtensorflow
pass tf datum dataset to model predict raise valueerror the batch size argument must not be specify when use dataset as an input
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution 19 04 tensorflow instal from source or binary pip install tensorflow 2 0 0 alpha0 tensorflow version use command below 2 0 0 alpha python version 3 7 3 describe the current behavior run simple classification example with keras interface describe the expect behavior predict result of fit model with tf datum dataset use model predict code to reproduce the issue from sklearn import dataset from sklearn model selection import train test split from sklearn preprocesse import standardscaler onehotencod import tensorflow as tf from tensorflow python import keras iris dataset load iris scl standardscaler ohe onehotencod category auto datum norm scl fit transform iris datum datum target ohe fit transform iris target reshape 1 1 toarray train datum val datum train target val target train test split data norm datum target test size 0 1 train datum test datum train target test target train test split train datum train target test size 0 2 train dataset tf datum dataset from tensor slice train datum train target train dataset train dataset batch 32 repeat test dataset tf datum dataset from tensor slice test datum test target test dataset test dataset batch 32 repeat val dataset tf datum dataset from tensor slice val datum val target val dataset val dataset batch 12 repeat mdl keras sequential kera layer dense 16 input dim 4 activation relu keras layer dense 8 activation relu keras layer dense 8 activation relu keras layer dense 3 activation softmax mdl compile optimizer keras optimizer adam 0 01 loss kera loss categorical crossentropy metric keras metric categorical accuracy history mdl fit train dataset epoch 10 step per epoch 15 validation datum val dataset validation step 12 result mdl evaluate test dataset step 15 comparison mdl predict class test dataset other info log e0504 15 10 24 153471 139666315605824 ultratb py 149 internal python 
error in the inspect module below be the traceback from this internal error traceback most recent call last file home ispmarin lib venvs dl lib python3 7 site package ipython core interactiveshell py line 3296 in run code exec code obj self user global ns self user n file line 41 in comparison mdl predict class test dataset file home ispmarin lib venvs dl lib python3 7 site package tensorflow python keras engine sequential py line 313 in predict class proba self predict x batch size batch size verbose verbose file home ispmarin lib venvs dl lib python3 7 site package tensorflow python keras engine training py line 1122 in predict batch size self validate or infer batch size batch size step x file home ispmarin lib venvs dl lib python3 7 site package tensorflow python keras engine training py line 1780 in validate or infer batch size raise valueerror the batch size argument must not be specify when valueerror the batch size argument must not be specify when use dataset as an input
tensorflowtensorflow
custom tf 2 0 training loop perform considerably bad than keras fit generator can t understand why
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 2 lts mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below v1 12 0 9492 g2c319fb415 2 0 0 alpha0 python version 3 6 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version 10 1 v10 1 105 gpu model and memory rtx 2080 ti 11 gb you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior currently try to reproduce the result from keras model fit generator with a custom training loop in tf2 0 code run without any issue but the mae loss for the custom training loop be 3 0 whereas the mae loss for the keras model fit generator be significantly well at 2 0 describe the expect behavior I would expect the loss to be roughly equivalent code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem the datum set be extremely large but each sample consist of 4 field of length 14 995 target be a single value in second import tensorflow as tf import numpy as np import os import panda as pd import time from sklearn import preprocesse import shutil import my class tf import datum and massage os chdir home aj datum lanl earthquake prediction cv index pd read csv current datum cv assignments csv delimiter header none value astype int16 evaluation 
indice pd read csv current datum validation indice original csv delimiter header none value astype int64 eval index cv index np hsplit evaluation indice 2 train pd read csv current datum newfeature csv delimiter header none value astype float32 train datum other info np hsplit train 2 target og row eq ind cv ind np hsplit other info 4 target target astype float16 og row og row astype int64 eq ind eq ind astype int64 cv ind cv ind astype int64 mod eval pd read csv current datum validation index modify csv delimiter header none value astype int64 mod eval index mod cv index np hsplit mod eval 4 logtrain pd read csv current datum newfeature logtransforme csv delimiter header none value astype float32 log std log skew log kurt log sixth np hsplit logtrain 7 train datum log np concatenate log std log skew log kurt log sixth axis 1 del logtrain log std log skew log kurt log sixth other info def safe mkdir path create a directory if there isn t one already try os mkdir path except oserror pass def del dir name if os path isdir save model format name shutil rmtree save model format name if os path isdir error plot format name shutil rmtree error plot format name if os path isdir train and test loss format name shutil rmtree train and test loss format name fold 1 boolz cv ind fold cv train train datum log boolz reshape 1 cv target target boolz reshape 1 cv eqs eq ind boolz reshape 1 scaler preprocesse standardscaler fit cv train cv train scaler transform cv train cv val scaler transform train datum log batch size 64 lookback 14995 offset 15000 if np max mod eval index len train datum log prevent from divide twice on accident when re run code mod eval index mod eval index 10 train gen my class tf datagenerator data cv train target cv target indice cv eqs min index 0 max index none batch size batch size lookback lookback offset offset shuffle start true shuffle feed true val gen my class tf valdatagenerator data cv val target target eval index mod eval index cv index mod cv 
index cv fold batch size batch size lookback lookback class crnn tf keras model def init self super crnn self init consider locallyconnected1d self conv1 tf keras layer conv1d filter 32 kernel size 50 stride 1 padding same activation none kernel initializer he uniform name conv1a self pool1 tf keras layer maxpool1d pool size 100 stride none name pool1 self gru1 tf keras layers gru unit 32 name gru1 self dense1 tf keras layer dense unit 16 activation none name dense1 self output1 tf keras layer dense unit 1 activation relu name output1 self lrelu tf keras layers leakyrelu alpha 0 1 self mae tf keras loss meanabsoluteerror self optimizer tf keras optimizer sgd lr 1e 3 momentum 0 nesterov true def call self input x self conv1 input x self lrelu x x self pool1 x x self gru1 x x self dense1 x x self lrelu x return self output1 x def train step self sample label with tf gradienttape as tape prediction self call sample loss self mae label prediction gradient tape gradient loss self trainable variable self optimizer apply gradient zip gradient self trainable variable self train loss loss def eval once self sample label prediction self call sample loss self mae label prediction self eval loss loss def train self num epoch self train loss tf keras metric mean name train loss self eval loss tf keras metric mean name eval loss self store gradient np empty num epoch for epoch in range num epoch start time time time self train loss reset state self eval loss reset state for sample label in train gen self train step sample label train gen on epoch end for sample label in val gen self eval once sample label print epoch 0 time 1 2f train loss 2 2f test loss 3 2f format epoch 1 time time start time self train loss result self eval loss result tf keras backend clear session model crnn model train 20 model2 crnn model2 compile optimizer tf keras optimizer sgd lr 1e 3 momentum 0 nesterov true loss mae history model2 fit generator generator train gen validation data val gen epoch 20 
worker 1 use multiprocesse false verbose 2 callback check this to see what be different between keras fit generator and your fit model3 crnn model3 compile optimizer model3 optimizer loss model3 mae history3 model3 fit generator generator train gen validation data val gen epoch 20 worker 1 use multiprocesse false verbose 2 callback other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach I have test to see that this model can overfit by train it on only a single sample and it be able to overfit in both scenario I be use custom generator of the class tf keras util sequence but they work equivalently under each training scenario let I know if you need any additional information in order to help I out with this issue thank attach the custom generator as well as some of the datum in text file format the large file be too large to upload but you can use the datum from here along with the build feature r script I upload build feature txt validation indice original txt validation index modify txt my class txt
tensorflowtensorflow
change doc issue format for plain text display
Bug
issue template be display in a text input box so shouldn t use markdown format b 131912565
tensorflowtensorflow
suggestion for neural machine translation with attention tutorial
Bug
please make sure that this be a documentation issue as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag doc template system information tensorflow version 2 0 doc link describe the documentation issue 1 max length function be not need the kera preprocesse sequence pad sequence already pad the sequence to max length one can just get max length by input seq shape 1 2 need to explain why we skip 0 for convert and loss function I think it be because of padding 3 validation set be not use this could be an good example to explain how we evaluate the model 4 gru unit in encoder dense in attention weight calculation gru unit in decoder all have the same unit parameter this be not necessary a note could be add to explain that same unit be use for convenience we welcome contribution by user will you be able to update submit a pr use the doc style guide to fix the doc issue yes
tensorflowtensorflow
retrain image detection with mobilenet for use in tfjs failure in tensorflowjs converter
Bug
it be unclear whether this be a documentation or a programming issue if misfile please say so and suggest whether a new issue should be raise or what else can be do system information tensorflow version v1 13 1 0 g6612da8951 1 13 1 see 27538 v1 12 0 9492 g2c319fb415 2 0 0 alpha0 see 27539 and tf nightly 2 0 preview see doc link and describe the documentation issue both document mention but do not describe conversion to mobile model yet conversion use tensorflowjs converter fail in every attempt see which mention two attempt by I as well as question by other with the same problem we welcome contribution by user will you be able to update submit a pr use the doc style guide to fix the doc issue as soon as it be work I would gladly fix and or augment the doc I have not get tensorflowjs converter to work on a retrain mobilenet model despite the doc state that it should work e g export your model this save model can load for inference later or convert to tflite or tfjs
tensorflowtensorflow
trtgraphconverterv2 do not preserve output name in the signature def
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 tensorflow instal from source or binary source tensorflow version use command below master from april 22nd python version 3 6 7 bazel version if compile from source 0 24 gcc compiler version if compile from source 7 4 cuda cudnn version 10 0 7 5 0 gpu model and memory gtx 1080 ti describe the current behavior if you use trtgraphconverterv2 to convert a function in a save model to use trt it do not preserve the output name in the signature def of the save model if the save function decorate with tf function return a dict output a a output b b the name output a and output b be in the save model after conversion with trtgraphconverterv2 they be change to the default name output 0 and output 1 describe the expect behavior the name of the output should not change this break all code that load the model and rely on the correct name code to reproduce the issue take any save model that contain a function return a dict then run this conversion param trt convert default trt conversion param replace precision mode trt convert trtprecisionmode fp16 max batch size 1 max workspace size byte 8000000000 trt converter trt convert trtgraphconverterv2 input save model dir your save model input save model signature key your key conversion param conversion param trt converter convert trt converter save your save model use save model cli to inspect the save model
tensorflowtensorflow
distribute tensorflow do not work when use more than 26 worker
Bug
system information os platform and distribution e g linux ubuntu 16 04 rhel 7 5 tensorflow version use command below tf alpha 2 0 and tf 1 13 cpu only python version 2 7 intel xeon gold 6150 2 7ghz 18 core 16 core enable 24 75 mb l3 cache max turbo freq 3 7ghz min 3 4ghz 180 gb ram six channel 4 8 tb of disk space I have a performance issue use distribute tf for cpu machine with multiworkerstrategy use 58 worker however it seem that it do not work as the follow error warn log before flag parsing go to stderr w0502 21 43 31 821069 47528963346944 cross device op py 1106 not all device in tf distribute strategy be visible to tensorflow warning log before flag parsing go to stderr w0502 21 43 31 821974 47863205896704 cross device op py 1106 not all device in tf distribute strategy be visible to tensorflow terminate call after throw an instance of std system error what resource temporarily unavailable terminate call after throw an instance of std system error what resource temporarily unavailable terminate call after throw an instance of std system error what resource temporarily unavailable typically how many worker can be use for this scheme and how to solve this problem be it because I do not have sufficient memory
tensorflowtensorflow
break link in doc api docs python tf test
Bug
please make sure that this be a documentation issue as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag doc template system information tensorflow version tensorflow core 1 13 doc link describe the documentation issue the link point to the testing guide be break break link a snapshot of the break link work as of 22nd feb 2019 we welcome contribution by user will you be able to update submit a pr use the doc style guide to fix the doc issue
tensorflowtensorflow
random uniform not support for tflite conversion
Bug
system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow instal from source or binary source tensorflow version or github sha if from source tf nightly 1 14 0 rc0 provide the text output from tflite convert some of the operator in the model be not support by the standard tensorflow lite runtime and be not recognize by tensorflow if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use add average pool 2d batch to space nd cast concatenation conv 2d fully connect greater equal mul reshape resize bilinear space to batch nd here be a list of operator for which you will need custom implementation randomuniform also please include a link to a graphdef or the model if possible any other info log I follow 26300 by use converter target op tf lite opsset tflite builtin tf lite opsset select tf op but I still can t create tflite file correctly
tensorflowtensorflow
tf 2 0 tf function throw exception
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below pip install tensorflow gpu 2 0 0 alpha0 python version 3 7 3 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version cuda 10 cudnn 7 5 gpu model and memory geforce gtx 980 4 g describe the current behavior I port deep deterministic policy gradient algorithm to tf 2 0 and it work fine when I run the function with with tf device gpu 0 but fail when I try to wrap the function with tf function code to reproduce the issue code repo other info log miniconda3 envs rl bin python medium seagull use I seagull github ddpg ddpg pendulam pendulam train script py warn log before flag parsing go to stderr w0501 21 58 48 871961 139821025609536 tf log py 161 entity could not be transform and will be stage without change error detail can be find in the log when run with the env variable autograph verbosity 1 please report this to the autograph team cause unexpected error transforming if you believe this be due to a bug please set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output when file the bug report cause by node have ctx unset traceback most recent call last file medium seagull use I seagull github ddpg ddpg pendulam pendulam train script py line 39 in score ddpg file miniconda3 envs rl lib python3 7 site package tensorflow python eager def function py line 426 in call self initialize args kwd add initializer to initializer map file miniconda3 envs rl lib python3 7 site package tensorflow python eager def function py line 370 in initialize args kwd file miniconda3 envs rl lib python3 7 site package tensorflow python eager function py line 1313 in get 
concrete function internal garbage collect graph function self maybe define function args kwargs file miniconda3 envs rl lib python3 7 site package tensorflow python eager function py line 1580 in maybe define function graph function self create graph function args kwargs file miniconda3 envs rl lib python3 7 site package tensorflow python eager function py line 1512 in create graph function capture by value self capture by value file miniconda3 envs rl lib python3 7 site package tensorflow python framework func graph py line 694 in func graph from py func func output python func func args func kwargs file miniconda3 envs rl lib python3 7 site package tensorflow python eager def function py line 317 in wrap fn return weak wrap fn wrap args kwd file miniconda3 envs rl lib python3 7 site package tensorflow python framework func graph py line 686 in wrapper args kwargs file miniconda3 envs rl lib python3 7 site package tensorflow python autograph impl api py line 392 in convert call result convert f effective args kwargs file tmp tmp0cbzqvh py line 51 in tf ddpg ag for stmt ag convert call range none ag conversionoption recursive true verbose 0 strip decorator tf function defun 1 ag convert ag do not convert ag convert call force conversion false optional feature internal convert user code true 1 n episode 1 none loop body 1 file miniconda3 envs rl lib python3 7 site package tensorflow python autograph operators control flow py line 81 in for stmt return py for stmt iter extra test body init state file miniconda3 envs rl lib python3 7 site package tensorflow python autograph operators control flow py line 90 in py for stmt state body target state file tmp tmp0cbzqvh py line 37 in loop body 1 state score break 1 ag for stmt ag convert call range none ag conversionoption recursive true verbose 0 strip decorator tf function defun 3 ag convert ag do not convert ag convert call force conversion false optional feature internal convert user code true max t extra test loop 
body state score break 1 file miniconda3 envs rl lib python3 7 site package tensorflow python autograph operators control flow py line 81 in for stmt return py for stmt iter extra test body init state file miniconda3 envs rl lib python3 7 site package tensorflow python autograph operators control flow py line 90 in py for stmt state body target state file tmp tmp0cbzqvh py line 21 in loop body action ag convert call act agent ag conversionoption recursive true verbose 0 strip decorator tf function defun 4 ag convert ag do not convert ag convert call force conversion false optional feature internal convert user code true state 1 0 file miniconda3 envs rl lib python3 7 site package tensorflow python autograph impl api py line 392 in convert call result convert f effective args kwargs file tmp tmpsruok5vd py line 6 in tf act action ag convert call actor local self ag conversionoption recursive true verbose 0 strip decorator tf function defun ag convert ag do not convert ag convert call defun 1 ag convert ag do not convert ag convert call force conversion false optional feature internal convert user code true tf convert to tensor state file miniconda3 envs rl lib python3 7 site package tensorflow python framework op py line 1108 in convert to tensor v2 as ref false file miniconda3 envs rl lib python3 7 site package tensorflow python framework op py line 1186 in internal convert to tensor ret conversion func value dtype dtype name name as ref as ref file miniconda3 envs rl lib python3 7 site package tensorflow python framework constant op py line 304 in constant tensor conversion function return constant v dtype dtype name name file miniconda3 envs rl lib python3 7 site package tensorflow python framework constant op py line 245 in constant allow broadcast true file miniconda3 envs rl lib python3 7 site package tensorflow python framework constant op py line 283 in constant impl allow broadcast allow broadcast file miniconda3 envs rl lib python3 7 site package 
tensorflow python framework tensor util py line 476 in make tensor proto getdensedimension value valueerror argument must be a dense tensor array 0 0 0 99759441 0 06932093 0 get shape 1 5 but want 1
tensorflowtensorflow
tf 2 0 api docs tf image extract image patch
Bug
exist url contain the issue description of issue what need change clear description nbsp nbsp nbsp nbsp nbsp nbsp only a single sentence description be provide usage example nbsp nbsp nbsp nbsp nbsp nbsp no usage example be provide raise list and define nbsp nbsp nbsp nbsp nbsp nbsp error be not define visual if applicable nbsp nbsp nbsp nbsp nbsp nbsp no visual be include
tensorflowtensorflow
race condition with keras model to estimator in distribute mode
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): CentOS 7
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12.0
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
We are using distributed TensorFlow as described here, with ParameterServerStrategy. Basically, we start the TF server on every worker and then run train_and_evaluate on each worker. The estimator function is serialized, sent to each worker, and then executed to create the estimator, build the graph and start the training. This works with standard estimator models, but not when using a Keras model with model_to_estimator. Does this still seem the advised way to do distributed learning with Keras? We also tried the new standalone mode, without any success. Some nodes fail with an IO error when trying to save the first checkpoint concurrently: when creating the estimator on each worker, model_to_estimator is called on each worker, which calls _save_first_checkpoint (L457).

Describe the expected behavior:
Being able to train Keras models in distributed mode.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):
Basically, we execute this code on each worker:

```python
def estimator_fn():
    estimator = tf.keras.estimator.model_to_estimator(model, config=config)
    return estimator

# on each worker
tf.estimator.train_and_evaluate(estimator_fn(), train_spec, eval_spec)
```

The full code example is here.

Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached):

Stacktrace of the failure:

```
Traceback (most recent call last):
  File "task_commons.py", line 59, in _get_experiment
    experiment = dill.loads(client.kv.wait(KV_EXPERIMENT_FN))
  File "__init__.py", line 233, in __new__
    experiment_fn
  File "keras_example.py", line 76, in experiment_fn
  File "/tmp/347e8353-e113-45e6-bc3a-a89be4f9788a/install/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/e0ceba8cc1b266d3356296be2708e12fb322668e/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/tensorflow/python/estimator/keras.py", line 484, in model_to_estimator
    config)
  File "/tmp/347e8353-e113-45e6-bc3a-a89be4f9788a/install/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/e0ceba8cc1b266d3356296be2708e12fb322668e/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/tensorflow/python/estimator/keras.py", line 367, in _save_first_checkpoint
    saver.save(sess, latest_path)
  File "/tmp/347e8353-e113-45e6-bc3a-a89be4f9788a/install/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/e0ceba8cc1b266d3356296be2708e12fb322668e/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/tensorflow/python/training/saver.py", line 1441, in save
    {self.saver_def.filename_tensor_name: checkpoint_file})
  File "/tmp/347e8353-e113-45e6-bc3a-a89be4f9788a/install/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/e0ceba8cc1b266d3356296be2708e12fb322668e/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/tmp/347e8353-e113-45e6-bc3a-a89be4f9788a/install/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/e0ceba8cc1b266d3356296be2708e12fb322668e/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/tmp/347e8353-e113-45e6-bc3a-a89be4f9788a/install/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/e0ceba8cc1b266d3356296be2708e12fb322668e/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/tmp/347e8353-e113-45e6-bc3a-a89be4f9788a/install/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/e0ceba8cc1b266d3356296be2708e12fb322668e/tensorflow-1.12.0-cp36-cp36m-manylinux1_x86_64.whl/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: viewfs://root/keras/keras_model.ckpt.index.tempstate16835974976294242898; Input/output error
	 [[node save/SaveV2 (defined at keras_example.py:76) = SaveV2[dtypes=[DT_FLOAT, DT_INT64, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, SGD/decay/Read/ReadVariableOp, SGD/iterations/Read/ReadVariableOp, SGD/lr/Read/ReadVariableOp, SGD/momentum/Read/ReadVariableOp, dense/bias/Read/ReadVariableOp, dense/kernel/Read/ReadVariableOp, dense_1/bias/Read/ReadVariableOp, dense_1/kernel/Read/ReadVariableOp, dense_2/bias/Read/ReadVariableOp, dense_2/kernel/Read/ReadVariableOp, global_step, training/SGD/Variable/Read/ReadVariableOp, training/SGD/Variable_1/Read/ReadVariableOp, training/SGD/Variable_2/Read/ReadVariableOp, training/SGD/Variable_3/Read/ReadVariableOp, training/SGD/Variable_4/Read/ReadVariableOp, training/SGD/Variable_5/Read/ReadVariableOp)]]
```
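Until the race itself is fixed, a mitigation on the user side could be to let only one task write the first checkpoint and have the others wait for it to appear on the shared filesystem. A minimal sketch (the function name and the polling protocol are mine, not a TensorFlow API):

```python
import os
import time

def save_first_checkpoint_once(ckpt_index_path, is_chief, save_fn,
                               timeout_s=120.0, poll_s=0.5):
    """Only the chief task writes the first checkpoint; other tasks poll for it.

    save_fn is whatever actually writes the checkpoint (e.g. a saver.save
    wrapper). Returns True once the checkpoint index file exists, False on
    timeout.
    """
    if is_chief:
        save_fn()
        return True
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if os.path.exists(ckpt_index_path):
            return True
        time.sleep(poll_s)
    return False
```

The same idea could also be expressed with a distributed lock or a barrier, but file-existence polling matches how TensorFlow workers already discover checkpoints.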
tensorflow/tensorflow
Error using dynamic RNN and TFLiteConverter
Bug
System information:
- OS platform and distribution: Windows x64
- TensorFlow installed from: Anaconda
- TensorFlow version: 2.0 alpha
- Python version: 3.7
- Instance on: CPU
- Exact command to reproduce:

Describe the problem:
I am trying to convert a TensorFlow LSTM model to TensorFlow Lite. After running the conversion script on the v1 version, the modifications I made for some of the errors I got were:

1. Enabling control flow v2:
```python
from tensorflow.python.ops import control_flow_util
control_flow_util.ENABLE_CONTROL_FLOW_V2 = True
```
2. `tf.compat.v1.disable_eager_execution()`: with this it was doing the normal training and testing, but it was unable to convert.
3. Commenting out this code in `envs/tf_env/lib/site-packages/tensorflow/lite/experimental/examples/lstm/rnn_cell.py`, line 346:
```python
if input_size.value is None:
    raise ValueError("Could not infer input size from inputs.get_shape()[-1]")
```

The current error is:
```
E tensorflow/core/framework/op_kernel.cc:1355] OpKernel ('op: "NoOp" device_type: "CPU"') for unknown op: NoOp
```
The full stack trace is in the link below.

Source code / logs:
Installation: install Anaconda 3, then `pip install tensorflow==2.0.0-alpha0`. See the code and full logs here.
tensorflow/tensorflow
Website claims that there is no internet connection
Bug
JavaScript on the website runs some sort of detection to see if there is network connectivity, or tries to establish a connection in a surprising way. This fails, and I get the message "There is no internet connection", which is clearly wrong: I am writing this issue over the same internet connection.
tensorflow/tensorflow
make test_micro_speech loops
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: build_template)

System information:
- Linux edh-VirtualBox 4.18.0-17-generic #18~18.04.1-Ubuntu SMP Fri Mar 15 15:27:12 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- TensorFlow installed from (source or binary): git
- TensorFlow version: 1.13.1
- Python version: Python 2.7.15rc1
- Installed using virtualenv? pip? conda?: pip
- Bazel version (if compiling from source): no
- GCC/compiler version (if compiling from source): gcc version 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
- CUDA/cuDNN version: no
- GPU model and memory: no

Describe the problem:
The following command, described in the README located in tensorflow/lite/experimental/micro/examples/micro_speech, loops forever:

```
make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_micro_speech
```

Provide the exact sequence of commands/steps that you executed before running into the problem. I followed the sequence described in the README:
1. make -f tensorflow/lite/experimental/micro/tools/make/Makefile
2. make -f tensorflow/lite/experimental/micro/tools/make/Makefile test_micro_speech

Any other info / logs:
The reason is a while(true) in the micro_speech source, but the Makefile should run micro_speech_test instead. So I think the issue is in the Makefile, but I don't understand this part of the Makefile:

```
tensorflow/lite/experimental/micro/testing/test_linux_binary.sh tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/bin/micro_speech 'ALL TESTS PASSED'
```
tensorflow/tensorflow
TF 2.0: using tf.py_function with tf.string type inputs in Dataset.map generates a dtype warning
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10 x64 1809
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 'unknown' 2.0.0-dev20190428
- Python version: 3.6.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.0, cuDNN for CUDA 10.0 Windows 10 x64 v7.5.0.56
- GPU model and memory: GeForce GTX 1070, 8 GB

Describe the current behavior:
Using tf.py_function with tf.string type inputs generates warnings like this:

```
W0429 14:24:18.965364 13252 backprop.py:820] The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.string
```

This warning does not show with v1.12.0-9492-g2c319fb415 (2.0.0-alpha0), but 2.0.0-dev20190428 does show it.

Describe the expected behavior:
The dtype warning should not be shown.

Code to reproduce the issue:

```python
import tensorflow as tf

def transform_tag_python(x):
    return 1.0

dataset = tf.data.Dataset.from_tensor_slices(["tag1"])
dataset = dataset.map(lambda x: tf.py_function(transform_tag_python, [x], tf.float32))
for sample in dataset:
    print(sample)
```
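Until the warning itself is removed, it can be hidden with a standard logging filter; this is a sketch under the assumption that TF emits the message through Python's logging machinery (which the tf_logging/backprop.py origin suggests). The filter class name is mine:

```python
import logging

class WatchedDtypeFilter(logging.Filter):
    """Drop only the spurious 'dtype of the watched tensor' warning."""

    def filter(self, record):
        # Returning False suppresses the record; everything else passes through.
        return "dtype of the watched tensor" not in record.getMessage()

# Attached e.g. via:
#   logging.getLogger("tensorflow").addFilter(WatchedDtypeFilter())
```

This keeps all other TensorFlow warnings visible, unlike raising the global log level.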
tensorflow/tensorflow
tf.image and eager execution bug
Bug
For TensorFlow 1.13 and 2.0.0a, tf.image requires an explicit tf.constant instead of a numpy array:

```python
import tensorflow as tf
import numpy as np
print(tf.__version__)

image = np.random.random((512, 512, 1))
tf.square(image * image).shape  # works

image = np.random.random((512, 512, 1))
tf.image.ssim_multiscale(image, image, max_val=1).shape  # fails

image = tf.constant(np.random.random((512, 512, 1)))
tf.image.ssim_multiscale(image, image, max_val=1)  # works
```
tensorflow/tensorflow
TF 2.0 API docs: tf.audio.encode_wav
Bug
API docs update for:
- Correct links: no
- Clear description: no; no usage example
- Parameters defined: yes (not verified against current code)
- Returns defined: yes (not verified against current code)
- Raises listed and defined: no
- Visuals (if applicable): no

I will submit a PR; please assign this bug to me. Part of issue #28237, issue #28236.
tensorflow/tensorflow
TF 2.0 API docs: tf.audio.decode_wav
Bug
API docs update for:
- Correct links: no
- Clear description: no; no usage example
- Parameters defined: yes (not verified against current code)
- Returns defined: yes (not verified against current code)
- Raises listed and defined: no
- Visuals (if applicable): no

I will submit a PR; please assign this bug to me.
tensorflow/tensorflow
TF 2.0 API docs: tf.audio
Bug
Docs update needed for: a high-level description and a usage example. I plan to submit a PR for this issue; please assign it to me.
tensorflow/tensorflow
Warning recommends using nonexistent tf.keras.layers.CuDNNLSTM layer
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip install tensorflow-gpu==2.0.0-alpha0
- TensorFlow version (use command below): 2.0.0-alpha0
- Python version: 3.7.3
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.0
- GPU model and memory: GeForce GTX 1080 Ti, 11175 MiB

Describe the current behavior:
In tf-gpu 2.0, building an LSTM layer emits a warning recommending tf.keras.layers.CuDNNLSTM, but that layer does not exist.

Describe the expected behavior:
Either have a tf.keras.layers.CuDNNLSTM, or remove the warning.

Code to reproduce the issue:

This raises the warning:

```python
import tensorflow as tf
import tensorflow.keras.layers as ll
inputs = ll.Input((100, 50))
x = ll.LSTM(100)(inputs)
```

This tries to use CuDNNLSTM:

```python
import tensorflow as tf
import tensorflow.keras.layers as ll
inputs = ll.Input((100, 50))
x = tf.keras.layers.CuDNNLSTM(100)(inputs)
```

Other info / logs:

The warning message:

```
W0428 17:18:46.256715 140569873639168 tf_logging.py:161] Note that this layer is not optimized for performance. Please use tf.keras.layers.CuDNNLSTM for better performance on GPU.
```

When trying to call CuDNNLSTM:

```
AttributeError: module 'tensorflow.keras.layers' has no attribute 'CuDNNLSTM'
```
tensorflow/tensorflow
Tensor.graph is meaningless when eager execution is enabled (TF 2.0, when compiling a model)
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): pip install tensorflow-gpu==2.0.0-alpha0
- TensorFlow version (use command below): 2.0.0-alpha0
- Python version: 3.7.3
- CUDA/cuDNN version: 10.0
- GPU model and memory: GeForce GTX 1080 Ti, 11175 MiB

Describe the current behavior:
After migrating to TF 2.0, when compiling a custom model I wrote I get the error "AttributeError: Tensor.graph is meaningless when eager execution is enabled." The model compiled as expected in TF 1.13.

Describe the expected behavior:
Compile the model without raising any exception.

Code to reproduce the issue:
Run this script, passing this file as an argument, to create an .h5 Keras model; then load_model the resulting .h5 file and call model.compile('adam', 'categorical_crossentropy').

Other info / logs:

```
In [5]: model.compile('adam', 'categorical_crossentropy', ['acc'])
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-5> in <module>
----> 1 model.compile('adam', 'categorical_crossentropy', ['acc'])

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    454     self._setattr_tracking = False  # pylint: disable=protected-access
    455     try:
--> 456       result = method(self, *args, **kwargs)
    457     finally:
    458       self._setattr_tracking = previous_value  # pylint: disable=protected-access

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    428             loss_weights[2], outputs[2], loss_fns
    429             # layer losses
--> 430         self.total_loss = self._prepare_total_loss(skip_target_indices, masks)
    431
    432         # Functions for train, test and predict will

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _prepare_total_loss(self, skip_target_indices, masks)
   1729
   1730         # Add regularization penalties and other layer-specific losses.
   1731         if self.losses:
-> 1732           total_loss += losses_utils.scale_loss_for_distribution(
   1733               math_ops.add_n(self.losses))

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in losses(self)
    664     with ops.init_scope():
    665       if context.executing_eagerly():
--> 666         return [loss for loss in losses
    667                 if loss.graph == ops.get_default_graph()]
    668

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in <listcomp>(.0)
    665       if context.executing_eagerly():
--> 666         return [loss for loss in losses
    667                 if loss.graph == ops.get_default_graph()]
    668
    669     relevant_inputs

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in graph(self)
    937   def graph(self):
--> 938     raise AttributeError(
    939       "Tensor.graph is meaningless when eager execution is enabled.")
    940
    941   @property

AttributeError: Tensor.graph is meaningless when eager execution is enabled.
```

model.summary():

```
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape              Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 4, 108, 108)       0
conv2d_0 (TimeDistributed)      (None, 4, 108, 108, 32)   320         input_1[0][0]
maxpool_0 (TimeDistributed)     (None, 4, 54, 54, 32)     0           conv2d_0[0][0]
batchnorm_0 (TimeDistributed)   (None, 4, 54, 54, 32)     128         maxpool_0[0][0]
conv2d_1 (TimeDistributed)      (None, 4, 54, 54, 32)     9248        batchnorm_0[0][0]
maxpool_1 (TimeDistributed)     (None, 4, 27, 27, 32)     0           conv2d_1[0][0]
batchnorm_1 (TimeDistributed)   (None, 4, 27, 27, 32)     128         maxpool_1[0][0]
conv2d_2 (TimeDistributed)      (None, 4, 27, 27, 64)     18496       batchnorm_1[0][0]
maxpool_2 (TimeDistributed)     (None, 4, 13, 13, 64)     0           conv2d_2[0][0]
batchnorm_2 (TimeDistributed)   (None, 4, 13, 13, 64)     256         maxpool_2[0][0]
conv2d_3 (TimeDistributed)      (None, 4, 13, 13, 128)    73856       batchnorm_2[0][0]
maxpool_3 (TimeDistributed)     (None, 4, 6, 6, 128)      0           conv2d_3[0][0]
batchnorm_3 (TimeDistributed)   (None, 4, 6, 6, 128)      512         maxpool_3[0][0]
flatten (TimeDistributed)       (None, 4, 4608)           0           batchnorm_3[0][0]
dropout (TimeDistributed)       (None, 4, 4608)           0           flatten[0][0]
unified_lstm (UnifiedLSTM)      (None, 4, 2048)           54534144    dropout[0][0]
time_distributed (TimeDistributed) (None, 4, 1)           2049        unified_lstm[0][0]
flatten_1 (Flatten)             (None, 4)                 0           time_distributed[0][0]
softmax (Softmax)               (None, 4)                 0           flatten_1[0][0]
dot (Dot)                       (None, 2048)              0           unified_lstm[0][0]
                                                                      softmax[0][0]
dense_1 (Dense)                 (None, 1024)              2098176     dot[0][0]
dropout_1 (Dropout)             (None, 1024)              0           dense_1[0][0]
dense_2 (Dense)                 (None, 512)               524800      dropout_1[0][0]
dropout_2 (Dropout)             (None, 512)               0           dense_2[0][0]
dense_3 (Dense)                 (None, 2)                 1026        dropout_2[0][0]
==================================================================================================
Total params: 57,263,139
Trainable params: 57,262,627
Non-trainable params: 512
__________________________________________________________________________________________________
```
tensorflow/tensorflow
Documentation for C++ APIs
Bug
Is there any detailed documentation for the C++ APIs, besides the versioned examples, build instructions and c_api.h? For example: creating a session and tensors, running queries, getting tensor results, etc.
tensorflow/tensorflow
TFLite: tflite file with a single ADD op produces duplicate outputs
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): r1.13
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior:
I have created a tflite file with a single ADD op. It has two inputs and one output. When reading this tflite file with the Interpreter (e.g. tensorflow/lite/python):

```python
import sys
import numpy as np
from tensorflow.lite.python import interpreter as interpreter_wrapper

interpreter = interpreter_wrapper.Interpreter(model_path=sys.argv[1])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)
```

```
[{'name': 'input0', 'index': 0, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)},
 {'name': 'input1', 'index': 1, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
[{'name': 'output0', 'index': 2, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)},
 {'name': 'output0', 'index': 2, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
```

Code using the C++ interpreter also reports duplicate outputs (2 2), even though the dump of the ADD (builtin_code 0) shows one output:

```
Interpreter has 3 tensors and 1 nodes
Inputs: 0 1
Outputs: 2 2

Tensor   0 input0   kTfLiteFloat32  kTfLiteArenaRw  40 bytes (0.0 MB)  2 5
Tensor   1 input1   kTfLiteFloat32  kTfLiteArenaRw  40 bytes (0.0 MB)  2 5
Tensor   2 output0  kTfLiteFloat32  kTfLiteArenaRw  40 bytes (0.0 MB)  2 5

Node   0 Operator Builtin Code   0
  Inputs: 0 1
  Outputs: 2
```

Describe the expected behavior:
get_output_details() returns a unique list of outputs.

Code to reproduce the issue:
Use the attached tflite file to reproduce the issue: add.tflite.zip
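The expected behavior (a unique list of outputs) amounts to an order-preserving deduplication of the reported output indices. A minimal sketch of that fix on the caller's side (the function name is mine, not a TFLite API):

```python
def unique_outputs(indices):
    """Drop duplicate tensor indices while keeping first-seen order."""
    seen = set()
    result = []
    for i in indices:
        if i not in seen:
            seen.add(i)
            result.append(i)
    return result

print(unique_outputs([2, 2]))  # -> [2]
```

Applying the same dedup inside get_output_details() (keyed on 'index') would give the expected single-entry list for this model.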
tensorflow/tensorflow
tf.tuple can't be indexed with a tensor in graph mode
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-dev20190427
- Python version: 3.6.7
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior:
tf.tuple can't be indexed by a tensor in graph mode.

Describe the expected behavior:
tf.tuple should support indexing with a tensor in graph mode.

Code to reproduce the issue:

```python
import tensorflow as tf

@tf.function
def func(a):
    i = tf.constant(0)
    return a[i]

t = tf.tuple([tf.constant(0), tf.constant(1)])
a = func(t)
```
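A pure-Python analogue of the failure (my illustration, not TensorFlow code): tf.tuple returns an ordinary Python tuple, and a Python tuple only accepts objects usable as plain integer indices, so an index that is a symbolic object (which presumably has no concrete `__index__`) falls over before any graph op is built:

```python
t = (0, 1)

class FakeTensor:
    """Stands in for a symbolic tf.Tensor used as an index."""

try:
    t[FakeTensor()]
except TypeError as e:
    print("indexing failed:", e)
```

Eager tensors can convert themselves to Python ints, which would explain why the same indexing works outside tf.function.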
tensorflow/tensorflow
404 error on TensorFlow Lite model repository
Bug
Please make sure that this is a documentation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: doc_template)

System information:
- TensorFlow version:
- Doc link:

Describe the documentation issue:
This link is unreachable: "Table 1: top-1 accuracy of floating-point and fully quantized CNNs on the ImageNet validation dataset. Our pre-trained models are available in the TensorFlow Lite model repository. The code used to generate these models is available."

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue?
tensorflow/tensorflow
TF 2.0: tf.summary.scalar claims it sees a whole tensor when fed a scalar
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): my script is included below
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16
- TensorFlow installed from (source or binary): conda environment with pip install tensorflow==2.0.0-alpha0
- TensorFlow version (use command below): 2.0.0-alpha0. I also, just to check, ran pip install --upgrade tb-nightly, I believe on the 25th.
- Python version: 3.6
- CUDA/cuDNN version: no GPU used

Describe the current behavior:
When I run the simple MNIST script below with the tf.summary.scalar line commented out, it runs fine. When I try to log a scalar inside Model.call, I get an error message that seems to indicate tf.summary.scalar is seeing the original tensor somehow, as opposed to the numpy sum result that I am passing to tf.summary.scalar.

Describe the expected behavior:
According to the docs, this would appear to be the way to log scalar values in TensorBoard 2.0. Am I incorrect?

Code to reproduce the issue:

```python
# Learning the very basics by running MNIST in TF 2.0
import tensorflow as tf
from tensorflow.keras import datasets, layers
import numpy as np
import datetime

l2_reg = 0.0000001
learning_rate = 1e-5
num_labels = 10

class MyFirstConvnet(tf.keras.Model):

    def __init__(self):
        super(MyFirstConvnet, self).__init__()
        self.layer1 = layers.Conv2D(32, (3, 3), activation='relu',
                                    kernel_regularizer=tf.keras.regularizers.l2(l2_reg))
        self.layer2 = layers.Conv2D(64, (3, 3), activation='relu',
                                    kernel_regularizer=tf.keras.regularizers.l2(l2_reg))
        self.pool = layers.MaxPooling2D((2, 2))
        self.layer3 = layers.Conv2D(32, (3, 3), activation='relu',
                                    kernel_regularizer=tf.keras.regularizers.l2(l2_reg))
        self.flatten = layers.Flatten()
        self.classifier = layers.Dense(num_labels, activation='softmax')
        self.batchnorm1 = layers.BatchNormalization(scale=False)
        self.batchnorm2 = layers.BatchNormalization(scale=False)

    def call(self, inputs):
        x = self.batchnorm1(self.layer1(inputs))
        x = self.layer2(x)
        tf.summary.scalar('layer 2 activation sum', data=np.sum(x, axis=None))  # error thrown here
        x = self.batchnorm2(self.pool(x))
        x = self.layer3(x)
        x = self.flatten(x)
        return self.classifier(x)

if __name__ == '__main__':
    # load up MNIST
    (train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
    train_images = train_images.reshape(60000, 28, 28, 1).astype(np.float32) / 255.0
    test_images = test_images.reshape(10000, 28, 28, 1).astype(np.float32) / 255.0

    # model must be compiled, which integrates information about training
    # and stores it in the model structure
    model = MyFirstConvnet()
    optimizer = tf.keras.optimizers.Adam(lr=learning_rate)
    model.compile(optimizer=optimizer,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # set up tensorboard
    logdir = 'logs/scalars/' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
    file_writer = tf.summary.create_file_writer(logdir)
    file_writer.set_as_default()
    tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)

    # train
    model.fit(train_images, train_labels, epochs=10, batch_size=32,
              validation_split=0.05, shuffle=True,
              callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                          patience=1,
                                                          restore_best_weights=True),
                         tensorboard_callback])

    # test
    model.evaluate(test_images, test_labels, verbose=1)
```

Other info / logs:

Error message:

```
2019-04-26 08:41:48.064844: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-04-26 08:41:48.089158: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2394490000 Hz
2019-04-26 08:41:48.090339: I tensorflow/compiler/xla/service/service.cc:162] XLA service 0x55b1385baf40 executing computations on platform Host. Devices:
2019-04-26 08:41:48.090379: I tensorflow/compiler/xla/service/service.cc:169]   StreamExecutor device (0)
Traceback (most recent call last):
  File "hello_mnist.py", line 55, in <module>
    tensorboard_callback])
  File "/home/menarcj/othersoftware/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 806, in fit
    shuffle=shuffle,
  File "/home/menarcj/othersoftware/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 2503, in _standardize_user_data
    self._set_inputs(cast_inputs)
  File "/home/menarcj/othersoftware/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/menarcj/othersoftware/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 2775, in _set_inputs
    outputs = self.call(inputs)
  File "hello_mnist.py", line 28, in call
    tf.summary.scalar('layer 2 activation sum', data=np.sum(x, axis=None))  # error thrown here
  File "/home/menarcj/othersoftware/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorboard/plugins/scalar/summary_v2.py", line 61, in scalar
    tf.debugging.assert_scalar(data)
  File "/home/menarcj/othersoftware/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/ops/check_ops.py", line 1718, in assert_scalar_v2
    assert_scalar(tensor=tensor, message=message, name=name)
  File "/home/menarcj/othersoftware/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/ops/check_ops.py", line 1751, in assert_scalar
    % (message or '', tensor.name, shape))
ValueError: Expected scalar shape for conv2d_1/Relu:0, saw shape: (None, 24, 24, 64)
```
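A note on the np.sum call in call(): on an actual NumPy array it really does return a zero-dimensional scalar, so the surprise here is plausibly that x is still a symbolic tensor at that point and np.sum never collapses it. A quick check of the NumPy side (my illustration, not part of the bug report):

```python
import numpy as np

# Shape reported in the error: (None, 24, 24, 64); use a concrete batch of 2.
x = np.ones((2, 24, 24, 64), dtype=np.float32)
s = np.sum(x, axis=None)
print(np.ndim(s), float(s))  # 0 73728.0
```

With a real array the summary call would receive a true scalar; the error only makes sense if tf.summary.scalar ends up watching the full activation tensor instead.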
tensorflow/tensorflow
tensorflow.org display error
Bug
Hello there. Something displays incorrectly on tensorflow.org, with a bunch of messy output, just like this: [image] [image]
tensorflow/tensorflow
Link to images guide broken
Bug
System information:
- TensorFlow version: N/A
- Doc link:

Describe the documentation issue:
On the page, the link within the text "see the images guide" is broken. It currently directs to a URL that 404'd.

Edit: removed content from the template unrelated to the issue.
tensorflow/tensorflow
MobileNet v2 model cannot be converted to tflite (missing ops)
Bug
It is not possible to convert a retrained model built on top of the MobileNet v2 feature vector to TensorFlow Lite with the tf.lite.TFLiteConverter.from_concrete_function method. During the conversion there are errors about unsupported operations and data types (see below).

System information:
- TensorFlow version: 2.0.0-alpha0
- Eager mode: True
- TensorFlow Hub version: 0.4.0
- Is GPU available: True

Code to reproduce the issue: Colab project with outputs (#scrollTo=jyt0zw6l8le5)

Stacktrace:

```
ConverterError                            Traceback (most recent call last)
<ipython-input> in <module>()
     11 # Convert the model.
     12 converter = tf.lite.TFLiteConverter.from_concrete_function(concrete_func)
---> 13 tflite_model = converter.convert()
     14 open('converted_model.tflite', 'wb').write(tflite_model)

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
    203       stderr = _try_convert_to_unicode(stderr)
    204       raise ConverterError(
--> 205           "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
    206     finally:
    207       # Must manually cleanup files.

ConverterError: TOCO failed. See console for info.
2019-04-25 19:20:27.592164: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-04-25 19:20:27.592225: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-04-25 19:20:27.594555: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-04-25 19:20:27.594629: I tensorflow/lite/toco/import_tensorflow.cc:1335] Converting unsupported operation: StatefulPartitionedCall
2019-04-25 19:20:27.604371: F tensorflow/lite/toco/import_tensorflow.cc:114] Check failed: attr.value_case() == AttrValue::kType (1 vs. 6)
Fatal Python error: Aborted

Current thread 0x00007f9bd09c4780 (most recent call first):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33 in execute
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251 in _run_main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300 in run
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40 in run
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59 in main
  File "/usr/local/bin/toco_from_protos", line 10 in <module>
Aborted (core dumped)
```
tensorflow/tensorflow
Keras ValueError stops AutoGraph building
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.0.0-dev20190424
- Python version: 3.7.1
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: cudatoolkit 10.0.130 0, cudnn 7.3.1 cuda10.0_0
- GPU model and memory: GeForce RTX 2080 Ti

Describe the current behavior:
Calling a Keras layer without calling build automatically infers the shape of the trainable variables. This works both in eager mode and graph mode in the current 2.0 alpha version. However, running the provided code in the 2.0.0-dev20190424 version gives the following error message:

```
W0425 12:08:40.775576 139922429134656 tf_logging.py:161] Entity could not be transformed and will be executed as-is. Some features (e.g. tensor-dependent conditionals and loops) may not work as expected. Error details can be found in the logs when running with the env variable AUTOGRAPH_VERBOSITY >= 1. Please report this to the AutoGraph team. Cause: ValueError during conversion: Weights for model sequential have not yet been created. Weights are created when the Model is first called on inputs or build() is called with an input_shape.
```

Describe the expected behavior:

Code to reproduce the issue:

```python
import os
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

model = models.Sequential([layers.Dense(1, activation='relu')])
optimizer = optimizers.SGD()

# Is this line needed in graph mode?
model.build((None, 1))

@tf.function
def update(batch):
    with tf.GradientTape() as tape:
        output = model(batch)
    grads = tape.gradient(output, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

if __name__ == '__main__':
    batch = tf.zeros((1, 1), dtype=tf.float32)
    update(batch)
```
tensorflow/tensorflow
float16 for training/storage and running on TFLite CPU and GPU
Bug
TensorFlow Lite now supports the GPU on mobile phones, but it only supports float32 and float16, and the model size with float32 is too large. So, will float16 be available for training and storage, and for converting to tflite?
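The storage argument behind the question is simple arithmetic: halving the element width halves the raw weight bytes. A quick NumPy illustration (this says nothing about TFLite's actual file format or GPU delegate support, just the size math):

```python
import numpy as np

# One million weights stored at two different precisions.
weights32 = np.zeros(1_000_000, dtype=np.float32)
weights16 = weights32.astype(np.float16)

print(weights32.nbytes, weights16.nbytes)  # 4000000 2000000
```

The cast itself is lossy for values outside float16's range (about 6.5e4), which is one reason frameworks often keep float32 master weights during training even when storing or serving in float16.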
tensorflow/tensorflow
TensorFlow Lite: conversion from h5 to tflite increases file size
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.13.1-0-g6612da8951 1.13.1
- Python version: 3.7.3
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.0
- GPU model and memory: RTX 2070, 8 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior:
tflite conversion from an h5 file increases the size from 141 KB to 508 KB.

Describe the expected behavior:
A reduction in size.

Code to reproduce the issue:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(11, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(trainingData, trainingLabels, epochs=400)

tf.keras.models.save_model(model, keras_file)
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open('converted_model.tflite', 'wb').write(tflite_model)
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow
tf.data.Dataset performance issues
Bug
System information:
OS platform and distribution: Windows 10
TensorFlow installed from (source or binary): binary
TensorFlow version: 1.13.1
Python version: 3.7
CUDA/cuDNN version: 10.0 / 7.4.1
GPU model and memory: NVIDIA Titan V
CPU make/model: 2x Intel Xeon E5-2620 v4 (8 cores / 16 logical each)
Data drive: Samsung SSD 960 EVO 1 TB

Describe the current behavior: I currently use the tf.data.Dataset API to load image pairs for super-resolution. I believe, based on the minimal examples I could find, that the method below is as optimized as I can get for my use case. However, when I grow my path lists from 550 to 7950 items, it slows down over 3x. It doesn't seem like this part of the pipeline should scale so poorly, since the batches themselves are the same size and the process of mapping batches (mostly IO) should be parallelized across the 32 CPU cores of the machine. Any ideas? Pertinent code below:

    # lr_paths, hr_paths: flat lists of paths to LR/HR images respectively
    def load_image(path):
        return tf.image.decode_png(tf.read_file(path), 3)

    # create the dataset
    dataset = tf.data.Dataset.from_tensor_slices((lr_paths, hr_paths))
    dataset = dataset.apply(tf.data.experimental.shuffle_and_repeat(count=FLAGS.max_epochs))
    dataset = dataset.apply(tf.data.experimental.map_and_batch(
        lambda x, y: (load_image(x), load_image(y)),
        FLAGS.batch_size,
        num_parallel_batches=max(1, cpu_count() // FLAGS.batch_size)))
    dataset = dataset.apply(tf.data.experimental.prefetch_to_device(device='/gpu:0'))

    # query for the iterator
    iterator = dataset.make_one_shot_iterator()
    im_lr_batch, im_hr_batch = iterator.get_next()
    # preprocess the image batches with crops, data augmentation, type conversion
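One variant worth benchmarking against the pipeline above: shuffle only the cheap path strings, decode with a parallel map, and let AUTOTUNE pick the parallelism and prefetch depth. This is a sketch (the make_dataset helper is hypothetical, not the reporter's code), written against the TF 2.x eager API.

```python
import tensorflow as tf

def load_image(path):
    # Decode a single PNG from disk into an HxWx3 uint8 tensor.
    return tf.image.decode_png(tf.io.read_file(path), channels=3)

def make_dataset(lr_paths, hr_paths, batch_size):
    ds = tf.data.Dataset.from_tensor_slices((lr_paths, hr_paths))
    # Shuffling path strings is cheap, so the buffer can cover the
    # whole list; decoding happens after, in parallel.
    ds = ds.shuffle(len(lr_paths)).repeat()
    ds = ds.map(lambda lr, hr: (load_image(lr), load_image(hr)),
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
    ds = ds.batch(batch_size)
    # AUTOTUNE lets the runtime pick the prefetch depth dynamically.
    return ds.prefetch(tf.data.experimental.AUTOTUNE)
```

Comparing this against the map_and_batch version at both 550 and 7950 paths would help isolate whether the slowdown comes from the shuffle buffer, the batching fusion, or the IO itself.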
tensorflowtensorflow
Issue with tf.keras.layers.DenseFeatures when model.build() is called
Bug
System information:
Have I written custom code: yes, my own example code
OS platform and distribution: Linux
Mobile device: no
TensorFlow installed from (source or binary): binary, from the tensorflow/tensorflow:2.0.0a0-gpu-py3 Docker image
TensorFlow version: 2.0.0a0
Python version: 3.5
CUDA/cuDNN version: n/a
GPU model and memory: n/a

Describe the current behavior: tf.keras.layers.DenseFeatures is a new Keras layer added in TF 2.0a0. It is the first layer for a tf.keras model that accepts a dict of tensors as the input. If tf.keras.layers.DenseFeatures is used as the first layer in a Keras model created by subclassing tf.keras.Model, model.build(input_shape) fails with any type of input_shape.

Describe the expected behavior: model.build(input_shape) succeeds and the corresponding model variables are created.

Code to reproduce the issue:

    import tensorflow as tf

    feature_columns = [tf.feature_column.numeric_column(header)
                       for header in ['c1', 'c2']]

    class TestModel(tf.keras.Model):
        def __init__(self, feature_columns):
            super(TestModel, self).__init__()
            self.feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
            self.dense_layer = tf.keras.layers.Dense(8)

        def call(self, inputs):
            x = self.feature_layer(inputs)
            return self.dense_layer(x)

    model = TestModel(feature_columns=feature_columns)
    shapes = {'c1': (1, 1), 'c2': (1, 1)}  # changing to ((1, 1), (1, 1)) also fails
    model.build(shapes)
    print(model.trainable_variables)

Other info: provided with the repro Dockerfile, and a possible solution to it: network.build(input_shape) should accept a dict as the input.
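Until build() accepts dict shapes, a common work-around is to trace the model once on a sample dict input instead of calling build(); tracing creates the same variables. A sketch (using a Sequential model for brevity; the sample dict values are illustrative):

```python
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column(h) for h in ["c1", "c2"]]

model = tf.keras.Sequential([
    tf.keras.layers.DenseFeatures(feature_columns),
    tf.keras.layers.Dense(8),
])

# Instead of model.build(input_shape), call the model once on a sample
# dict; this traces the layers and creates their variables.
sample = {"c1": tf.constant([[1.0]]), "c2": tf.constant([[2.0]])}
_ = model(sample)
print(len(model.trainable_variables))
```

After the trace, the Dense layer's kernel and bias exist, so the model can be inspected or have weights loaded before training.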
tensorflowtensorflow
TF 2.0 consumes twice as much memory as TF 1.x or CNTK
Bug
System information:
OS platform and distribution: Windows 10 x64 1809
TensorFlow installed from (source or binary): pip install tensorflow-gpu==2.0.0-alpha0
TensorFlow version: v1.12.0-9492-g2c319fb415 2.0.0-alpha0
Python version: 3.6.7
CUDA/cuDNN version: CUDA 10.0, cuDNN for CUDA 10.0 Windows 10 x64 v7.5.0.56
GPU model and memory: GeForce GTX 1070, 8 GB

Describe the current behavior: evaluating a TF 2.0 Keras model allocates twice as much memory as TF 1.x or CNTK.

Describe the expected behavior: memory usage of TF 2.0 should be the same as, or similar to, other libraries; not double.

Code to reproduce the issue, for TF 2.0 or 1.x:

    import numpy as np
    import tensorflow as tf

    tf.config.gpu.set_per_process_memory_growth(True)
    size = 28000
    inputs = tf.keras.Input((size,), dtype='float32')
    outputs = tf.keras.layers.Dense(size)(inputs)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.predict(np.ones((1, size), dtype=np.float32))
    print('complete')
    while True: pass

For TF 1.x or CNTK with Keras:

    import keras
    import numpy as np

    size = 28000
    inputs = keras.Input((size,), dtype='float32')
    outputs = keras.layers.Dense(size)(inputs)
    model = keras.Model(inputs=inputs, outputs=outputs)
    model.predict(np.ones((1, size), dtype=np.float32))
    print('complete')
    while True: pass

With an 8 GB VRAM GPU, TF 1.x and CNTK work successfully, while the TF 2.0 code fails due to a resource-exhausted exception.
tensorflowtensorflow
matmul for RaggedTensor (tf.ragged.ragged_dense_matmul)
Bug
System information:
TensorFlow version (you are using): 2.0.0a0

Describe the feature and the current behavior/state: allow computing a matrix multiplication between a RaggedTensor and a standard dense tensor, returning a RaggedTensor. The behavior would be similar to tf.sparse.sparse_dense_matmul. For instance, given a RaggedTensor of shape [batch, None, channels_in] and a dense tensor of shape [channels_in, channels_out], the operation would return a RaggedTensor of shape [batch, None, channels_out] where all entries along the ragged dimension have been multiplied by the dense tensor.

Will this change the current API? How could it be implemented? As a standalone operation, like tf.sparse.sparse_dense_matmul, or it could be part of the existing API, for instance the way tf.math reductions already support RaggedTensor.

Who will benefit from this feature? Anybody working with irregular data, with applications ranging from computer vision (irregular images) to deep learning on graphs. Given the possibility to have matmul and add on RaggedTensor, one could implement dense, convolutional and recurrent layers operating directly on irregular data.
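Until such an op exists, a minimal sketch of the requested behavior for the [batch, None, channels_in] case: multiply the flat values and keep the row partition intact. This assumes ragged_rank=1 and the RaggedTensor.with_flat_values helper available in recent builds.

```python
import tensorflow as tf

def ragged_dense_matmul(rt, dense):
    # rt: RaggedTensor of shape [batch, None, c_in] (ragged_rank=1);
    # dense: dense tensor of shape [c_in, c_out].
    # Multiplying the flat values of shape [total_rows, c_in] applies
    # the dense matrix to every ragged entry, and with_flat_values
    # reattaches the original row partition unchanged.
    return rt.with_flat_values(tf.matmul(rt.flat_values, dense))

rt = tf.ragged.constant([[[1., 2.], [3., 4.]],
                         [[5., 6.]]], ragged_rank=1)
w = tf.constant([[1., 0.], [0., 1.]])  # identity, so output equals input
out = ragged_dense_matmul(rt, w)
```

The same trick generalizes to an added bias (rt.with_flat_values(rt.flat_values + b)), which covers the dense-layer-on-ragged-data use case mentioned above.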
tensorflowtensorflow
The Keras examples should load data with allow_pickle=True
Bug
System information:
Have I written custom code: no, I was trying the tutorial "Keras basic text classification"
OS platform and distribution: macOS High Sierra 10.13.5
TensorFlow installed from (source or binary): binary
TensorFlow version: 1.13.1
Python version: 3.6.2
NumPy version: 1.16.3

Describe the current behavior: when I try the following code from the basic text classification sample:

    (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

it fails with the error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/datasets/imdb.py", line 86, in load_data
        x_train, labels_train = f['x_train'], f['y_train']
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/numpy/lib/npyio.py", line 262, in __getitem__
        return format.read_array(bytes, **self.pickle_kwargs)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/numpy/lib/format.py", line 692, in read_array
        raise ValueError("Object arrays cannot be loaded when "
    ValueError: Object arrays cannot be loaded when allow_pickle=False

According to the changes of NumPy, the default value of allow_pickle was changed from True to False. Please update the sample code accordingly, maybe passing in allow_pickle=True explicitly to np.load.

Describe the expected behavior: the basic text classification example can be followed without errors.

Other info/logs: when I downgraded NumPy from 1.16.3 to 1.16.1, whose default value is still allow_pickle=True, I could finish the basic text classification example successfully.
tensorflowtensorflow
Tensor indexing loses shape information
Bug
System information:
Have I written custom code: yes
OS platform and distribution: Windows 10
TensorFlow installed from (source or binary): binary
TensorFlow version: v1.12.0-9492-g2c319fb415 2.0.0-alpha0
Python version: 3.7.3

Describe the current behavior: slicing tensors, e.g. my_tensor[...], loses their shape information. Using tf.slice does seem to work, but for that I need to have all dimensions defined, which is more often than not simply not the case.

Describe the expected behavior: shape information is kept and adjusted to match the sliced data.
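When the static shape does get lost inside a graph, a common work-around is to reattach whatever is statically known with tf.ensure_shape (or Tensor.set_shape), which both documents the shape and checks it at runtime. A sketch:

```python
import tensorflow as tf

@tf.function
def take_first_three(x):
    sliced = x[:, :3]
    # If the slice's static shape degrades to [None, None] inside the
    # traced graph, reattach the part we know: the last axis is 3.
    # ensure_shape raises at runtime if the actual shape disagrees.
    return tf.ensure_shape(sliced, [None, 3])

out = take_first_three(tf.zeros([4, 5]))
print(out.shape)
```

This doesn't fix the underlying shape-inference gap, but it keeps downstream layers that require a known inner dimension (e.g. Dense) working.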
tensorflowtensorflow
Deprecation warnings in confusion_matrix.py
Bug
OS: Ubuntu 18.04.2
tensorflow-gpu 2.0.0-alpha0

Warning logs:

    WARNING: Logging before flag parsing goes to stderr.
    W0423 15:59:53.297839 140450503571264 deprecation.py:323] From .../python3.6/site-packages/tensorflow/python/ops/confusion_matrix.py:194: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
    Instructions for updating: Use tf.cast instead.
    W0423 15:59:53.298299 140450503571264 deprecation.py:323] From .../python3.6/site-packages/tensorflow/python/ops/confusion_matrix.py:195: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
    Instructions for updating: Use tf.cast instead.
tensorflowtensorflow
TF 2.0: inconsistency when timing operations
Bug
I have been trying to benchmark the performance of the convolution operation against speed-ups such as depth-separable convolutions [1] and rank-separable convolutions [2]. I'm using the code pasted below:

    import time
    import numpy as np
    import tensorflow as tf

    # define a scenario
    image_size = 320
    channels_x_batch_size = 2048  # channels * batch_size
    repeats = 200
    skip = 10

    # build various operations
    class BuildNormalOp(tf.keras.Model):
        def __init__(self, kernel_size, channels):
            super(BuildNormalOp, self).__init__()
            self.normal = tf.keras.layers.Conv2D(channels, kernel_size, padding='same')

        def call(self, x):
            return self.normal(x)

    class BuildRankOp(tf.keras.Model):
        def __init__(self, kernel_size, channels):
            super(BuildRankOp, self).__init__()
            self.rs1 = tf.keras.layers.Conv2D(channels, (kernel_size, 1), padding='same')
            self.rs2 = tf.keras.layers.Conv2D(channels, (1, kernel_size), padding='same')

        def call(self, x):
            out = self.rs1(x)
            out = self.rs2(out)
            return out

    class BuildDepthOp(tf.keras.Model):
        def __init__(self, kernel_size, channels):
            super(BuildDepthOp, self).__init__()
            self.depthwise = tf.keras.layers.DepthwiseConv2D(kernel_size, padding='same')
            self.pointwise = tf.keras.layers.Conv2D(channels, 1, padding='same')

        def call(self, x):
            out = self.depthwise(x)
            out = self.pointwise(out)
            return out

    def build_ops_all(channels, kernel_size):
        normal = BuildNormalOp(kernel_size, channels)
        rank = BuildRankOp(kernel_size, channels)
        depth = BuildDepthOp(kernel_size, channels)
        return normal, rank, depth

    def time_op(op):
        """Benchmark operation."""
        with tf.device('gpu'):
            image = tf.random.normal(shape=(batch_size, image_size, image_size, channels), dtype=tf.float32)
            for i in range(repeats + skip):
                if i == skip:
                    start = time.time()  # don't time initial runs
                op(image)
            end = time.time()
        chk = np.round((end - start) / repeats * 1000, 2)
        return chk

    if __name__ == '__main__':
        # benchmark with various channel sizes
        for channels in [64, 128, 256, 512]:
            # adjust batch size so GPU doesn't run out of memory
            batch_size = channels_x_batch_size // channels
            # benchmark with various kernel sizes
            for param in [3, 5, 7]:
                normal, rank_separable, depth_separable = build_ops_all(channels, param)
                print('channels:', channels, 'kernel size:', param)
                time_normal = time_op(normal)
                time_rank = time_op(rank_separable)
                time_depth = time_op(depth_separable)
                print('normal method: {}ms\t rank-separable method: {}ms\t depth-separable method: {}ms\n'
                      .format(time_normal, time_rank, time_depth))

This results in the following output:

    channels: 64 kernel size: 3 -- normal: 5.31ms, rank-separable: 22.44ms, depth-separable: 0.59ms
    channels: 64 kernel size: 5 -- normal: 35.56ms, rank-separable: 27.95ms, depth-separable: 0.65ms
    channels: 64 kernel size: 7 -- normal: 39.56ms, rank-separable: 33.5ms, depth-separable: 0.63ms
    channels: 128 kernel size: 3 -- normal: 13.7ms, rank-separable: 30.75ms, depth-separable: 0.6ms
    channels: 128 kernel size: 5 -- normal: 16.85ms, rank-separable: 41.63ms, depth-separable: 0.54ms
    channels: 128 kernel size: 7 -- normal: 61.79ms, rank-separable: 46.81ms, depth-separable: 0.58ms
    channels: 256 kernel size: 3 -- normal: 12.75ms, rank-separable: 47.77ms, depth-separable: 0.56ms
    channels: 256 kernel size: 5 -- normal: 137.72ms, rank-separable: 64.87ms, depth-separable: 0.57ms
    channels: 256 kernel size: 7 -- normal: 159.38ms, rank-separable: 64.92ms, depth-separable: 0.61ms
    channels: 512 kernel size: 3 -- normal: 23.17ms, rank-separable: 93.72ms, depth-separable: 0.57ms
    channels: 512 kernel size: 5 -- normal: 86.4ms, rank-separable: 119.05ms, depth-separable: 0.55ms
    channels: 512 kernel size: 7 -- normal: 537.36ms, rank-separable: 130.35ms, depth-separable: 0.55ms

My concerns are:
1. Inconsistent running times with kernel size 5 and increasing channel sizes.
2. Constant running time with the depth-separable method.

In PyTorch there are certain flags that need to be set to True to time operations. Has TensorFlow 2 also introduced such flags? Also, have TF developers managed to achieve the theoretical speedup claimed in [1] for depth-wise convolutions?

[1] MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, A. Howard et al.
[2] DecomposeMe: Simplifying ConvNets for End-to-End Learning, J. Alvarez, L. Petersson
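The near-constant depth-separable numbers suggest the GPU kernels are being launched asynchronously and never waited on, so the loop only measures launch overhead. A sketch of a timing loop that forces synchronization (the timed helper is hypothetical, not part of the report):

```python
import time
import tensorflow as tf

def timed(op, x, repeats=50, skip=5):
    # GPU kernels are launched asynchronously; pulling the result back
    # to the host with .numpy() makes each iteration block until the
    # device has actually finished, so the wall-clock time reflects
    # real kernel time instead of just launch overhead.
    for i in range(repeats):
        if i == skip:
            start = time.time()  # skip warm-up iterations
        _ = op(x).numpy()
    return (time.time() - start) / (repeats - skip) * 1000.0

ms = timed(lambda t: t + 1.0, tf.ones([64, 64]))
print(round(ms, 3), "ms per call")
```

Rerunning the benchmark above with a synchronizing loop like this should make the depth-separable numbers scale with channels and kernel size, which would resolve concern 2.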
tensorflowtensorflow
Misalignment in BahdanauAttention documentation
Bug
System information:
TensorFlow version: 1.13
Doc link: (BahdanauAttention API page)

Under the description for the __init__ function, in the args, memory_sequence_length should be its own point and not part of memory.

We welcome contributions by users. Will you be able to update/submit a PR (use the doc style guide) to fix the doc issue? Yes, I can submit a PR for this.
tensorflowtensorflow
Failed to get device properties, error code: 30
Bug
Describe the current behavior: I commented under issue #26255, but the original poster closed the issue as his problem was solved by updating to TensorFlow 2. I am opening a new issue because updating to the pre-release is not an option, and I have no way to even trap this error to try to handle it. Plus, it is an unknown error code, so there is no hint as to how to proceed. Unknown error and failure to initialize GPU:

    2019-04-19 13:16:22.838705: E tensorflow/core/grappler/clusters/utils.cc:83] Failed to get device properties, error code: 30
    Failed to initialize GPU device #0: unknown error

My configuration: Windows 10 Home, TensorFlow 1.13.1, Python 3.5, GTX 1060 mobile (Max-Q).

It doesn't happen every time I run my program. I have localized it to running load_model from Keras. Before reaching that point, I have imported TensorFlow and verified that a GPU is available:

    if tf.test.is_gpu_available():
        logging.debug('GPU is available')

Is there a way to catch this error, or check for it and recover (attempt to reinitialize)? Thanks.

Describe the expected behavior: report the actual error, and provide a mechanism to catch the exception and handle it.

Code to reproduce the issue: the failure is intermittent.
tensorflowtensorflow
graph_transformations/propagate_fixed_sizes.cc: Unhandled operator type CTCBeamSearchDecoder
Bug
System information:
Have I written custom code: yes
OS platform and distribution: Ubuntu 16.04
TensorFlow installed from (source or binary): source
TensorFlow version: 1.13.1
Python version: 3.7
Bazel version: 0.24.1
GCC/compiler version: 5.4.0
CUDA/cuDNN version: 10.0 / 7.5

Exact command to reproduce:

    bazel-bin/tensorflow/lite/toco/toco \
      --input_file=model.pb --output_file=model.tflite \
      --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
      --input_arrays=input_audio,input_lengths,input_label/values,input_label/indices \
      --output_arrays=label_1,label_2,label_3,label_4,label_5,label_6,label_7,label_8,label_9,label_10,weight_1,weight_2,weight_3,weight_4,weight_5,weight_6,weight_7,weight_8,weight_9,weight_10,neg_log_prob \
      --target_ops=TFLITE_BUILTINS,SELECT_TF_OPS --allow_custom_ops \
      --input_shapes=1,49,82:1:49:49,2

I also tried:

    tflite_convert --graph_def_file=model.pb --output_file=model.tflite \
      --input_arrays=input_audio,input_lengths,input_label/values,input_label/indices \
      --output_arrays=label_1,...,label_10,weight_1,...,weight_10,neg_log_prob \
      --target_ops=TFLITE_BUILTINS,SELECT_TF_OPS --enable_select_tf_ops --allow_custom_ops \
      --input_shapes=1,49,82:1:49:49,2

Describe the problem: my frozen model contains a CTC beam search decoder operator:

    inputs = tf.placeholder(tf.float32, [1, None, FLAGS.freq_dim * FLAGS.stack_frames], name='input_audio')
    lens = tf.placeholder(tf.int32, [1], name='input_lengths')
    logits = model_gru(inputs, lens, FLAGS.freq_dim * FLAGS.stack_frames,
                       FLAGS.hidden_dim, FLAGS.voc_size + 1, FLAGS.num_layers)
    labels = tf.sparse_placeholder(tf.int32, name='input_label')
    neg_log_prob = tf.nn.ctc_loss(labels, logits, lens, time_major=False)
    tf.identity(neg_log_prob, name='neg_log_prob')
    decoded, _ = tf.nn.ctc_beam_search_decoder(tf.transpose(logits, [1, 0, 2]), lens, FLAGS.beam, FLAGS.n)
    for k in range(FLAGS.n):
        tf.cast(decoded[k].values, dtype=tf.int32, name='label_%d' % (k + 1))
        tf.reciprocal(tf.nn.ctc_loss(tf.cast(decoded[k], dtype=tf.int32), logits, lens, time_major=False),
                      name='weight_%d' % (k + 1))
    saver = tf.train.Saver(tf.global_variables())
    with tf.Session() as sess:
        print('loading model from %s' % FLAGS.checkpoint)
        sess.run(tf.global_variables_initializer())
        saver.restore(sess, FLAGS.checkpoint)
        print('node names:\n')
        for node in tf.get_default_graph().as_graph_def().node:
            print(node.name)
        output_nodes = ['neg_log_prob']
        for k in range(FLAGS.n):
            output_nodes += ['weight_%d' % (k + 1), 'label_%d' % (k + 1)]
        frozen_graph_def = graph_util.convert_variables_to_constants(sess, sess.graph_def, output_nodes)
        tf.train.write_graph(frozen_graph_def, os.path.dirname(FLAGS.output_file),
                             os.path.basename(FLAGS.output_file), as_text=False)

When I try to convert the pb model to tflite, I get (the many repeated "Converting unsupported operation" lines are condensed here):

    2019-04-22 14:53:34: I tensorflow/stream_executor/platform/default/dso_loader.cc:43] Successfully opened dynamic library libcudart.so.10.0
    I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayV3, TensorArrayScatterV3, Enter (many times), LoopCond, TensorArrayReadV3, TensorArrayWriteV3, Exit, TensorArraySizeV3, TensorArrayGatherV3, CTCLoss (x10), Reciprocal (x10)
    I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20 (several times)
    I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 209 operators, 353 arrays (0 quantized)
    I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 206 operators, 345 arrays (0 quantized)
    I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 206 operators, 345 arrays (0 quantized)
    F tensorflow/lite/toco/graph_transformations/propagate_fixed_sizes.cc:2425] Unhandled operator type 'CTCBeamSearchDecoder'
    Aborted (core dumped)

How could I avoid this problem? I didn't find any issue containing such a problem.
tensorflowtensorflow
Weird behavior using a tf.Variable in a tf.data.experimental.map_and_batch callback function
Bug
System information:
Have I written custom code: yes
OS platform and distribution: Ubuntu 18.04
Mobile device: n/a
TensorFlow installed from (source or binary): binary, 1.13.1 from conda
TensorFlow version: 1.13.1
Python version: 3.7.3

Code to reproduce the issue:

    import tensorflow as tf

    def foo(x):
        array = tf.Variable(lambda: tf.zeros((20,), dtype=tf.float32),
                            trainable=False, use_resource=True)
        inds = tf.reshape(x, (-1, 1))
        vals = tf.reshape(tf.cast(x, tf.float32), (-1,))
        array_assign = array.assign(tf.zeros_like(array))
        with tf.control_dependencies([array_assign]):
            array = array.scatter_nd_update(inds, tf.cast(vals, dtype=tf.float32))
        return array

    dataset = tf.data.Dataset.range(20)
    # dataset = dataset.map(foo)
    # dataset = dataset.batch(5)
    dataset = dataset.apply(tf.data.experimental.map_and_batch(foo, 5, num_parallel_batches=1))
    iterator = dataset.make_initializable_iterator()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.initializer)
        next_element = iterator.get_next()
        fetches = sess.run(next_element)
        print(fetches)

Describe the current behavior: if using a normal map and batch, one gets:

    [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

If using map_and_batch, one gets changing results like:

    [[0 1 2 3 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 1 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 1 2 3 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 1 2 3 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
     [0 1 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

Describe the expected behavior: there should always be only one, non-duplicated, non-zero entry per line.

Other info/logs: not sure if this is related to #27507. On a colab with tf-nightly, #27507 seems fixed but this issue still reproduces.
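Independently of the bug, map functions that mutate a shared tf.Variable are inherently racy under fused/parallel transformations. A stateless rewrite of the callback (a TF 2.x eager sketch, not the reporter's code) avoids the shared state entirely:

```python
import tensorflow as tf

def foo(x):
    # Stateless alternative to the tf.Variable + scatter_nd_update
    # approach: build the 20-wide row directly with one_hot, so
    # parallel map invocations cannot race on a shared buffer.
    return tf.one_hot(x, depth=20, dtype=tf.float32) * tf.cast(x, tf.float32)

ds = tf.data.Dataset.range(20).map(foo).batch(5)
```

Each row now has exactly one non-zero entry (value i at index i) regardless of how the runtime parallelizes or fuses the map and batch.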
tensorflowtensorflow
TFLite segmentation model throws an error on Android
Bug
System information:
What is the top-level directory of the model you are using: models/research/deeplab
Have I written custom code: yes
OS platform and distribution: Ubuntu 16.04
TensorFlow installed from (source or binary): source
TensorFlow version: 1.13
Bazel version: bazel-0.24.0-installer-darwin-x86_64
CUDA/cuDNN version: n/a
GPU model and memory: no GPU used

Exact command to reproduce:

    tflite_convert --output_file=test.lite \
      --graph_def_file=frozen_inference_graph.pb \
      --input_arrays=ImageTensor --output_arrays=SemanticPredictions \
      --input_shapes=1,600,450,3 --inference_input_type=QUANTIZED_UINT8 \
      --inference_type=FLOAT --mean_values=128 --std_dev_values=128

Describe the current behavior: I am using TFLite for semantic segmentation. I have a model trained (on DeepLab) to segment objects from the background. I converted this model (the frozen inference graph) into tflite format using the command above. The model loads on Android, but when I try to run inference it gives me this error:

    Caused by: java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: third_party/tensorflow/lite/kernels/unpack.cc:54 NumDimensions(input) > 1 was not true. Node number 4 (UNPACK) failed to prepare.

How do I resolve this error?
tensorflowtensorflow
AUTHORS references a non-existent file
Bug
Describe the documentation issue: the AUTHORS file references a CONTRIBUTORS file ("This file is distinct from the CONTRIBUTORS file; see the latter for an explanation"), but no CONTRIBUTORS or CONTRIBUTORS.md file exists. If CONTRIBUTORS is somewhere else, this should be made explicit in AUTHORS.

We welcome contributions by users. Can you submit a PR? This is a question for maintainers, and a PR is not yet appropriate.
tensorflowtensorflow
Java API runs much slower than Python API
Bug
Predicting on an image with the Python API only takes 15 ms, but with the same model the Java API needs 500 ms.
tensorflowtensorflow
Using a TensorFlow model to predict with the Java API: why is it slower than the Python or C API?
Bug
I use a TensorFlow model to predict with the Java API; why is it slower than the Python or C API?
tensorflowtensorflow
TF 2.0 custom training loop for Keras example should provide information about tf.keras.backend.set_learning_phase
Bug
System information: TensorFlow version: tensorflow-gpu 2.0.0-alpha0. Doc link: "Using the GradientTape: a first end-to-end example". Describe the documentation issue: that sample does not use tf.keras.backend.set_learning_phase for the manual training loop, so if the model has BatchNormalization or other layers that depend on the training state, it will not be trained correctly. There is no description of tf.keras.backend.set_learning_phase even on the BatchNormalization page.
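A minimal sketch of the point above (assuming TF 2.x): in a custom training loop, the training flag can be passed explicitly on the forward call, so layers such as BatchNormalization behave correctly without relying on the global tf.keras.backend.set_learning_phase. The toy model and shapes here are hypothetical, not taken from the linked example.

```python
import tensorflow as tf

# Hypothetical toy model; BatchNormalization behaves differently in
# training vs. inference mode, which is why the flag matters.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.BatchNormalization(),
])

x = tf.random.normal((2, 4))

# Inside a custom loop, pass training=True explicitly on the forward pass
# instead of relying on a global learning phase.
with tf.GradientTape() as tape:
    y_train = model(x, training=True)   # uses batch statistics

y_infer = model(x, training=False)      # uses moving averages

print(tuple(y_train.shape))  # (2, 8)
```

Passing the flag per call is also what later TF 2.x guides recommend over the global setter.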
tensorflowtensorflow
TF 2.0: the word2vec_basic.py example is not updated for 2.0.0-alpha0
Bug
This template is for miscellaneous issues not covered by the other issue categories. For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to StackOverflow. If you are reporting a vulnerability, please use the dedicated reporting process. For high-level discussions about TensorFlow, or for questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to the relevant mailing list.
tensorflowtensorflow
TF 2.0: tf.assert_equal function raises InternalError with unsigned 16/32/64-bit types
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): colab.google.com (Linux). Mobile device: not tested. TensorFlow installed from source or binary: binary (nightly). TensorFlow version (use command below): 2.0.0-dev20190420. Python version: 3.6.7. CUDA/cuDNN version: nvidia-smi 418.56, driver version 410.79, CUDA version 10.0. GPU model and memory: Tesla T4. Describe the current behavior: as documented, the assert_equal function accepts any dtype, but it raises an InternalError with types uint16/32/64 (tested with CPU and GPU). Describe the expected behavior: no InternalError should be raised with types uint16/32/64. Code to reproduce the issue: import tensorflow as tf; print(tf.version); for each of tf.uint16, tf.uint32, tf.uint64: lhs = tf.constant([5, 0, 7, 11], dtype=tf.uint16); rhs = tf.constant([5, 0, 7, 11], dtype=tf.uint16); tf.assert_equal(lhs, rhs). The same may apply to other assert functions; tf.assert_greater raises the exception too: lhs = tf.constant(7, dtype=tf.uint16); rhs = tf.constant(9, dtype=tf.uint16); tf.assert_greater(lhs, rhs). Other info / logs: InternalError traceback (most recent call last): in 10 exp tf constant 0 0 3 10 dtype dtype 11 re bitwise op bitwise and lhs rhs 12 tf assert equal re exp true usr local lib python3 6 dist package tensorflow python op check op py in assert equal v2 x y message summarize name 452 execution or if x and y be statically know 453 454 return assert equal x x y y summarize summarize message message name name 455 456 usr local lib python3 6 dist package tensorflow python op check op py in assert equal x y datum summarize message name 496 497 if context execute eagerly 498 eq math op equal x y 499 condition math op reduce all eq 500 if not condition usr local
lib python3 6 dist package tensorflow python ops gen math op py in equal x y name 3449 else 3450 message e message 3451 six raise from core status to exception e code message none 3452 add node to the tensorflow graph 3453 try usr local lib python3 6 dist package six py in raise from value from value internalerror could not find valid device for node node node equal all kernel register for op equal device xla cpu t in dt float dt double dt int32 dt uint8 dt int8 dt complex64 dt int64 dt bool dt qint8 dt quint8 dt qint32 dt bfloat16 dt complex128 dt half device xla gpu t in dt float dt double dt int32 dt uint8 dt int8 dt complex64 dt int64 dt bool dt qint8 dt quint8 dt qint32 dt bfloat16 dt half device xla cpu jit t in dt float dt double dt int32 dt uint8 dt int8 dt complex64 dt int64 dt bool dt qint8 dt quint8 dt qint32 dt bfloat16 dt complex128 dt half device xla gpu jit t in dt float dt double dt int32 dt uint8 dt int8 dt complex64 dt int64 dt bool dt qint8 dt quint8 dt qint32 dt bfloat16 dt half device cpu t in dt bool device cpu t in dt string device cpu t in dt complex128 device cpu t in dt complex64 device cpu t in dt int64 device cpu t in dt int32 device cpu t in dt bfloat16 device cpu t in dt int16 device cpu t in dt int8 device cpu t in dt uint8 device cpu t in dt double device cpu t in dt half device cpu t in dt float device gpu t in dt bool device gpu t in dt complex128 device gpu t in dt complex64 device gpu t in dt int64 device gpu t in dt int16 device gpu t in dt int8 device gpu t in dt int32 device gpu t in dt uint8 device gpu t in dt double device gpu t in dt half device gpu t in dt float op equal
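The "could not find valid device for node Equal" message above indicates the Equal kernel is simply not registered for the unsigned dtypes. A hedged workaround sketch (not a fix for the missing kernels): cast the unsigned tensors to a supported integer dtype before comparing, since Equal/All kernels are registered for int32 and int64.

```python
import tensorflow as tf

# Same values as the repro above, in a dtype that triggers the bug.
lhs = tf.constant([5, 0, 7, 11], dtype=tf.uint16)
rhs = tf.constant([5, 0, 7, 11], dtype=tf.uint16)

# Workaround: do the comparison in int32, where the kernels exist.
# uint16 fits losslessly in int32 (uint32/uint64 would need int64 or care
# with values above 2**63 - 1).
tf.debugging.assert_equal(tf.cast(lhs, tf.int32), tf.cast(rhs, tf.int32))

equal = bool(tf.reduce_all(tf.equal(tf.cast(lhs, tf.int32),
                                    tf.cast(rhs, tf.int32))))
print(equal)  # True
```

This only sidesteps the issue in user code; the report itself is about the missing uint16/32/64 kernel registrations.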
tensorflowtensorflow
TF2.0: "not JSON serializable" error thrown when using a tf.keras.activations operator in a Keras model
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10 x64 1809. TensorFlow installed from source or binary: pip. TensorFlow version (use command below): tensorflow-gpu 2.0.0a0. Python version: 3.6.7. CUDA/cuDNN version: 10. GPU model and memory: GeForce 1070. Describe the current behavior: when I use a tf.keras.activations operator in my Keras model, serialization of the model fails due to a "not JSON serializable" error. Describe the expected behavior: it should be serialized without any error. Code to reproduce the issue: from tensorflow import keras; from tensorflow.keras import layers; inputs = keras.Input(shape=(784,), name='digits'); # x = layers.Activation('relu')(inputs); x = keras.activations.relu(inputs); outputs = layers.Dense(10, activation='softmax', name='predictions')(x); model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp'); model.summary(); model.save('path_to_my_model.h5'). If you change from Activation to relu, it fails to serialize.
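A minimal sketch of the distinction (assuming TF 2.x): wrapping the activation in a layers.Activation layer keeps the model serializable, because Activation is a proper Layer with a JSON-friendly config, whereas splicing the bare tf.keras.activations.relu function call into the functional graph is what the report above says triggers the error. The layer sizes and names here are illustrative, not the reporter's exact model.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,), name="digits")
x = tf.keras.layers.Dense(64)(inputs)
# Serializable variant: Activation is a Layer whose get_config() is a
# plain dict, so it survives the JSON round-trip.
x = tf.keras.layers.Activation("relu")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax",
                                name="predictions")(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs, name="mlp")
config = model.to_json()  # succeeds; no "not JSON serializable" error
print("Activation" in config)  # True
```

Using the string form activation="relu" directly on a Dense layer, as in the last layer above, serializes cleanly for the same reason.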
tensorflowtensorflow
I understand that Java should be faster than Python, but not like this
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes — Java creates the session; the .pb comes from Python: ConfigProto config = ConfigProto.newBuilder().setAllowSoftPlacement(true).setLogDevicePlacement(false).clearLogDevicePlacement().setGpuOptions(GPUOptions.newBuilder().setAllowGrowth(true)).setInterOpParallelismThreads(1).build(); sess = new Session(graph, config.toByteArray()); AnjosLog log = new AnjosLog(); log.begin(); log.begin2Series("create data"); float[][][] x = new float[1][this.audioLen][this.audioFeatLen]; x[0] = datum; log.stop2Series(); log.begin2Series("create tensor"); Tensor tx = Tensor.create(x); log.stop2Series(); log.begin2Series("use model"); Tensor y = sess.runner().feed(inputVar, tx).fetch(labelVar).run().get(0); log.stop2Series(); log.begin2Series("take result"); float[][][] result = new float[(int) y.shape()[0]][(int) y.shape()[1]][(int) y.shape()[2]]; y.copyTo(result); log.stop2Series(); log.stop(). OS platform and distribution (e.g. Linux Ubuntu 16.04): Win7, CPU. TensorFlow installed from source or binary: binary. TensorFlow version (use command below): 1.8.0 (Java). Python version: 3.5.6. Bazel version: none. GCC/compiler version: none. CUDA/cuDNN version: none. GPU model and memory: none. Describe the current behavior: using Python 3.5 to predict the same single sample: series start 2019-04-20 13:06:18, series end 2019-04-20 13:06:18, total time 0.24s (0.00 min). Using Java 1.8 to predict the same single sample: start time 2019-04-20 12:57:43.924; create data: 0.00s; create tensor: 0.09s; use model: 0.69s; take result: 0.03s; end time 2019-04-20 12:57:44.738; total time 0.81s (0.01 min). Describe the expected behavior: I understand that Java should be faster than Python, but not like this. Code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including a traceback, please include the full traceback; large logs and files should be attached.
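The timing breakdown above shows most of the Java cost sits in the single "use model" call. A common explanation (an assumption here, not something this report confirms) is that the first Session.run pays one-time graph-initialization cost, so fair benchmarks warm up before timing. A language-neutral sketch of that measurement pattern, with a trivial stand-in workload instead of a real model:

```python
import time

def timed(fn, warmup=3, iters=10):
    # Hypothetical helper: run fn a few times untimed first, then report
    # the average steady-state latency, excluding one-time setup cost.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Stand-in for a single inference call.
avg = timed(lambda: sum(range(1000)))
print(avg >= 0.0)  # True
```

Applied to the Java case, the equivalent would be invoking sess.runner()...run() a few times before starting the stopwatch and averaging the remaining runs.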
tensorflowtensorflow
TF 2.0: issue with TPUStrategy / initialize_tpu_system
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Colab. TensorFlow installed from source or binary: pip install tensorflow-gpu==2.0.0-alpha0. TensorFlow version (use command below): 2.0.0-alpha0. Python version: 3.6.7 (Colab). Describe the current behavior: an error occurs when trying to instantiate a simple Keras model to run on a TPU on Colab using TPUStrategy. There seems to be an internal problem with the worker name. If the worker name is set to "worker", then an exception is raised during the call to initialize_tpu_system: /job:tpu_worker/replica:0/task:1/device:CPU:0 unknown device. If the worker name is set to "tpu_worker", the strategy is properly initialized, but another exception is raised later when creating the Keras model: Error copying tensor to device: /job:worker/replica:0/task:0/device:TPU:0. I have read issue #26513 and placed a call to experimental_connect_to_host before calling initialize_tpu_system, but it does not help. Code: pip install tensorflow-gpu==2.0.0-alpha0; import tensorflow as tf; import os; import sys; tpu_worker = 'grpc://' + os.environ['COLAB_TPU_ADDR']; worker_name = 'tpu_worker'; tf.config.experimental_connect_to_host(tpu_worker, worker_name); resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_worker); tf.tpu.experimental.initialize_tpu_system(resolver); strategy = tf.distribute.experimental.TPUStrategy(resolver); devices = tf.config.experimental_list_devices(); print(*devices, sep='\n'); with strategy.scope(): model = tf.keras.Sequential([tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax')]); model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']). Describe the expected behavior: the model should be properly instantiated. Code to reproduce the issue: see above. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if
include traceback please include the full traceback large log and file should be attach exception when worker name worker invalidargumenterror traceback most recent call last in 10 11 resolver tf distribute cluster resolver tpuclusterresolver tpu worker 12 tf tpu experimental initialize tpu system resolver 13 strategy tf distribute experimental tpustrategy resolver 14 device tf config experimental list device usr local lib python3 6 dist package tensorflow python tpu tpu strategy util py in initialize tpu system cluster resolver 91 with op device get first tpu host device cluster resolver 92 output tpu functional op tpupartitionedcall 93 args device ordinal 0 tout dtype string f func name 94 serialize topology output 0 numpy 95 else usr local lib python3 6 dist package tensorflow python ops gen tpu op py in tpu partition call args device ordinal tout f name 5606 else 5607 message e message 5608 six raise from core status to exception e code message none 5609 add node to the tensorflow graph 5610 if not isinstance tout list tuple usr local lib python3 6 dist package six py in raise from value from value invalidargumenterror job tpu worker replica 0 task 1 device cpu 0 unknown device exception when worker name tpu worker runtimeerror traceback most recent call last in 22 tf keras layer flatten 23 tf keras layer dense 64 activation relu 24 tf keras layer dense 10 activation softmax 25 26 usr local lib python3 6 dist package tensorflow python training tracking base py in method wrapper self args kwargs 454 self setattr track false pylint disable protect access 455 try 456 result method self args kwargs 457 finally 458 self setattr track previous value pylint disable protect access usr local lib python3 6 dist package tensorflow python keras engine sequential py in init self layer name 106 if layer 107 for layer in layer 108 self add layer 109 110 property usr local lib python3 6 dist package tensorflow python training tracking base py in method wrapper self args kwargs 
454 self setattr track false pylint disable protect access 455 try 456 result method self args kwargs 457 finally 458 self setattr track previous value pylint disable protect access usr local lib python3 6 dist package tensorflow python keras engine sequential py in add self layer 167 and create the node connect the current layer 168 to the input layer we just create 169 layer x 170 set input true 171 usr local lib python3 6 dist package tensorflow python keras engine base layer py in call self input args kwargs 592 build layer if applicable if the build method have be 593 overridden 594 self maybe build input 595 explicitly pass the learning phase placeholder to call if 596 the training argument be leave unspecified by the user usr local lib python3 6 dist package tensorflow python keras engine base layer py in maybe build self input 1711 only call build if the user have manually overridden the build method 1712 if not hasattr self build be default 1713 self build input shape 1714 we must set self build since user define build function be not 1715 constrain to set self build usr local lib python3 6 dist package tensorflow python keras layers convolutional py in build self input shape 163 constraint self kernel constraint 164 trainable true 165 dtype self dtype 166 if self use bias 167 self bias self add weight usr local lib python3 6 dist package tensorflow python keras engine base layer py in add weight self name shape dtype initializer regularizer trainable constraint partitioner use resource synchronization aggregation kwargs 375 collection collection 376 synchronization synchronization 377 aggregation aggregation 378 backend track variable variable 379 usr local lib python3 6 dist package tensorflow python training tracking base py in add variable with custom getter self name shape dtype initializer getter overwrite kwarg for getter 620 new variable getter 621 name name shape shape dtype dtype initializer initializer 622 kwargs for getter 623 624 if we set an 
initializer and the variable process it tracking will not usr local lib python3 6 dist package tensorflow python keras engine base layer util py in make variable name shape dtype initializer trainable cache device validate shape constraint use resource collection synchronization aggregation partitioner 150 collection collection 151 synchronization synchronization 152 aggregation aggregation 153 return v 154 usr local lib python3 6 dist package tensorflow python op variable py in call cls args kwargs 210 def call cls args kwargs 211 if cls be variablev1 212 return cls variable v1 call args kwargs 213 elif cls be variable 214 return cls variable v2 call args kwargs usr local lib python3 6 dist package tensorflow python op variable py in variable v1 call cls initial value trainable collection validate shape cache device name variable def dtype expect shape import scope constraint use resource synchronization aggregation 173 use resource use resource 174 synchronization synchronization 175 aggregation aggregation 176 177 def variable v2 call cls usr local lib python3 6 dist package tensorflow python op variable py in getter kwargs 56 to avoid capture loop variable 57 def getter kwargs 58 return capture getter capture previous kwargs 59 return getter 60 usr local lib python3 6 dist package tensorflow python distribute distribute lib py in creator with resource var args kwargs 821 kwargs use resource true 822 kwargs distribute strategy strategy 823 return self create variable args kwargs 824 825 def distribute getter getter args kwargs usr local lib python3 6 dist package tensorflow python distribute tpu strategy py in create variable self next creator args kwargs 439 return create tpu mirror variable 440 self container strategy device map logical device 441 real mirror creator args kwargs 442 443 def reduce to self reduce op value destination usr local lib python3 6 dist package tensorflow python distribute tpu strategy py in create tpu mirror variable strategy device 
map logical device real mirror creator args kwargs 101 with tape stop record 102 device device map logical to actual device logical device 103 value list real mirror creator device args kwargs 104 result value tpumirroredvariable 105 strategy device map value list aggregation usr local lib python3 6 dist package tensorflow python distribute tpu strategy py in real mirror creator device args kwargs 432 kwargs initial value initial value fn 433 with context device policy context device placement silent 434 v next creator args kwargs 435 assert not isinstance v value tpumirroredvariable 436 value list append v usr local lib python3 6 dist package tensorflow python op variable py in kwargs 152 aggregation variableaggregation none 153 call on variable class useful to force the signature 154 previous getter lambda kwargs default variable creator none kwargs 155 for getter in op get default graph variable creator stack pylint disable protect access 156 previous getter make getter getter previous getter usr local lib python3 6 dist package tensorflow python ops variable scope py in default variable creator next creator kwargs 2490 cache device cache device name name dtype dtype 2491 constraint constraint variable def variable def 2492 import scope import scope distribute strategy distribute strategy 2493 else 2494 return variable refvariable usr local lib python3 6 dist package tensorflow python op variable py in call cls args kwargs 214 return cls variable v2 call args kwargs 215 else 216 return super variablemetaclass cls call args kwargs 217 218 usr local lib python3 6 dist package tensorflow python op resource variable op py in init self initial value trainable collection validate shape cache device name dtype variable def import scope constraint distribute strategy 420 name name 421 dtype dtype 422 constraint constraint 423 424 def repr self usr local lib python3 6 dist package tensorflow python op resource variable op py in init from args self initial value trainable 
collection cache device name dtype constraint 543 with op name scope initializer device context manager none 544 initial value op convert to tensor 545 initial value if init from fn else initial value 546 name initial value dtype dtype 547 self handle eager safe variable handle usr local lib python3 6 dist package tensorflow python keras engine base layer util py in 132 type init op initializer type init op v2 initializer 133 initializer initializer 134 init val lambda initializer shape dtype dtype 135 variable dtype dtype base dtype 136 if use resource be none usr local lib python3 6 dist package tensorflow python op init op v2 py in call self shape dtype 432 else 433 limit math sqrt 3 0 scale 434 return self random generator random uniform shape limit limit dtype 435 436 def get config self usr local lib python3 6 dist package tensorflow python op init op v2 py in random uniform self shape minval maxval dtype 795 op random op random uniform 796 return op 797 shape shape minval minval maxval maxval dtype dtype seed self seed 798 799 def truncate normal self shape mean stddev dtype usr local lib python3 6 dist package tensorflow python op random op py in random uniform shape minval maxval dtype seed name 238 with op name scope name random uniform shape minval maxval as name 239 shape shapetensor shape 240 minval op convert to tensor minval dtype dtype name min 241 maxval op convert to tensor maxval dtype dtype name max 242 seed1 seed2 random seed get seed seed usr local lib python3 6 dist package tensorflow python framework op py in convert to tensor value dtype name prefer dtype dtype hint 1048 prefer dtype deprecation deprecate argument lookup 1049 dtype hint dtype hint prefer dtype prefer dtype 1050 return convert to tensor v2 value dtype prefer dtype name 1051 1052 usr local lib python3 6 dist package tensorflow python framework op py in convert to tensor v2 value dtype dtype hint name 1106 name name 1107 prefer dtype dtype hint 1108 as ref false 1109 1110 usr 
local lib python3 6 dist package tensorflow python framework op py in internal convert to tensor value dtype name as ref prefer dtype ctx accept symbolic tensor 1184 1185 if ret be none 1186 ret conversion func value dtype dtype name name as ref as ref 1187 1188 if ret be notimplemente usr local lib python3 6 dist package tensorflow python framework constant op py in constant tensor conversion function v dtype name as ref 302 as ref false 303 as ref 304 return constant v dtype dtype name name 305 306 usr local lib python3 6 dist package tensorflow python framework constant op py in constant value dtype shape name 243 244 return constant impl value dtype shape name verify shape false 245 allow broadcast true 246 247 usr local lib python3 6 dist package tensorflow python framework constant op py in constant impl value dtype shape name verify shape allow broadcast 251 ctx context context 252 if ctx execute eagerly 253 t convert to eager tensor value ctx dtype 254 if shape be none 255 return t usr local lib python3 6 dist package tensorflow python framework constant op py in convert to eager tensor value ctx dtype 108 return op eagertensor 109 value handle device dtype tensor 110 t op eagertensor value handle device dtype 111 scalar cache cache key t 112 return t runtimeerror error copy tensor to device job worker replica 0 task 0 device tpu 0 job worker replica 0 task 0 device tpu 0 unknown device