repository: stringclasses (156 values)
issue title: stringlengths (1–1.01k)
labels: stringclasses (8 values)
body: stringlengths (1–270k)
tensorflow/tensorflow
TensorFlow 2.0: add regularization loss
Bug
In TensorFlow 1.x I can add the regularization loss using code like this:

```python
regularization_loss = tf.add_n(tf.losses.get_regularization_losses())
total_loss = loss + regularization_loss
```

But in the TensorFlow 2.0.0-beta1 API, `tf.losses.get_regularization_losses` has been removed, so how can I add that loss in this case?

System information: TensorFlow version 2.0.0-beta1
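In TF 2.x, Keras layers record their own regularization penalties, and the training loop sums them into the data loss (in TensorFlow this is `model.losses` plus `tf.add_n`). A minimal pure-Python sketch of that bookkeeping, with illustrative names that are not the TF API:

```python
# Conceptual sketch of TF 2.x-style regularization accounting:
# each layer exposes its own penalty via a `losses` property, and the
# total loss is the data loss plus the sum of all layer penalties.
# `Dense` and `total_loss` here are hypothetical stand-ins, not TF code.

class Dense:
    def __init__(self, weights, l2=0.01):
        self.weights = weights
        self.l2 = l2

    @property
    def losses(self):
        # L2 penalty contributed by this layer
        return [self.l2 * sum(w * w for w in self.weights)]

def total_loss(data_loss, layers):
    # Sum the per-layer penalties into the data loss
    reg = sum(l for layer in layers for l in layer.losses)
    return data_loss + reg

layers = [Dense([1.0, -2.0]), Dense([0.5])]
print(total_loss(10.0, layers))  # 10.0525
```

The point of the sketch is that the global loss-collection registry of TF 1.x is replaced by per-object loss lists that the caller sums explicitly.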
tensorflow/tensorflow
Teacher forcing in the Transformer tutorial
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry. For example: Training and checkpointing.

Description of issue (what needs changing): teacher forcing seems to not be implemented.

Clear description: the documentation here mentions that the training uses teacher forcing. However, from the code shown, it doesn't seem that this is implemented. The variable `tar_real` is the true output, but it seems to only be used for loss and accuracy computation. Please let me know if I'm making a mistake here. Thanks in advance.
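For context, in seq2seq training teacher forcing usually consists of nothing more than feeding the ground-truth sequence, shifted by one position, to the decoder: the decoder input is the target minus its last token, and the prediction target is the same sequence shifted left. A minimal sketch of that split (names follow the tutorial's `tar_inp`/`tar_real` convention; plain lists stand in for tensors):

```python
# Teacher forcing as a sequence shift: the decoder is fed the ground
# truth shifted right (tar_inp) and trained to predict the next token
# (tar_real). The shift itself is the teacher-forcing mechanism.

def teacher_forcing_split(tar):
    tar_inp = tar[:-1]   # decoder input: ground truth, minus last token
    tar_real = tar[1:]   # prediction target: ground truth, shifted left
    return tar_inp, tar_real

tar = ["<start>", "a", "b", "<end>"]
tar_inp, tar_real = teacher_forcing_split(tar)
print(tar_inp)   # ['<start>', 'a', 'b']
print(tar_real)  # ['a', 'b', '<end>']
```

So if the training step feeds `tar_inp` (ground truth) rather than the model's own predictions to the decoder, teacher forcing is implemented by construction, even though `tar_real` only appears in the loss.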
tensorflow/tensorflow
Restoring a Keras model fails inside a distribution strategy scope
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Arch Linux
- TensorFlow installed from (source or binary): binary, using pip
- TensorFlow version (use command below): both v1.14.0-rc1-22-gaf24dc9 (1.14.0) and v2.0.0-beta0-17-g8e423e3 (2.0.0-beta1)
- Python version: 3.7.3
- CUDA/cuDNN version: CUDA 10.1.168-4, cuDNN 7.6.1.34-1
- GPU model and memory: NVIDIA Quadro P2000, 4 GB

Describe the current behavior: inside a distribution strategy scope, restoring a Keras model that has been trained at all with `tf.keras.models.load_model` raises the exception shown below while handling the optimizer. In particular, it looks a bit similar to #28599 if you squint, but many details differ.

Describe the expected behavior: restoring the model should succeed.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
path = "/tmp/model.hdf5"

with strategy.scope():
    # Construct model
    model = tf.keras.models.Sequential(
        [tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(optimizer=tf.keras.optimizers.SGD(),
                  loss=tf.keras.metrics.mse)

    # Do a fit so the optimizer weights are created
    # (removing this lets the restore succeed)
    model.fit(np.array([1.]), np.array([1.]))

    # Save and attempt to restore
    tf.keras.models.save_model(model, path)
    tf.keras.models.load_model(path)
```

Other info / logs: traceback for TF 2.0 (TF 1.14 is the same except for line numbers):

```
File "tensorflow/python/keras/saving/save.py", line 137, in load_model
  return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "tensorflow/python/keras/saving/hdf5_format.py", line 187, in load_model_from_hdf5
  model._make_train_function()
File "tensorflow/python/keras/engine/training.py", line 1974, in _make_train_function
  params=self._collected_trainable_weights, loss=self.total_loss)
File "tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 491, in get_updates
  grads = self.get_gradients(loss, params)
File "tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 391, in get_gradients
  grads = gradients.gradients(loss, params)
File "tensorflow/python/ops/gradients_impl.py", line 158, in gradients
  unconnected_gradients)
File "tensorflow/python/ops/gradients_util.py", line 543, in _GradientsHelper
  for x in xs]
File "tensorflow/python/ops/gradients_util.py", line 543, in <listcomp>
  for x in xs]
File "tensorflow/python/distribute/values.py", line 643, in handle
  raise ValueError("`handle` is not available outside the replica context")
ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call
```
tensorflow/tensorflow
tf.data.experimental.make_csv_dataset cannot decompress files
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.5, but also tested on RedHat 7
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): pip3
- TensorFlow version (use command below): 1.13.1
- Python version: 3.7.3
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: CPU

Describe the current behavior: `tf.contrib.data.make_csv_dataset` cannot decompress a gzipped file.

Describe the expected behavior: when `compression_type` is set to "GZIP", it should decompress a gzip file.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem), Python 3:

```python
import tensorflow as tf
import pandas as pd

sample_iris = {"Id": [1, 2, 3],
               "SepalLengthCm": [5.1, 4.9, 4.7],
               "SepalWidthCm": [3.5, 3.0, 3.2],
               "Species": ["Iris-setosa", "Iris-setosa", "Iris-setosa"]}
df = pd.DataFrame(sample_iris)
df.to_csv("iris_compressed.csv.gz", compression="gzip", index=False)

train_file = "iris_compressed.csv.gz"
train_batch_size = 4
select_columns = ["Id", "SepalLengthCm", "SepalWidthCm", "Species"]
dataset = tf.data.experimental.make_csv_dataset(
    train_file, train_batch_size,
    column_names=None, column_defaults=None, label_name="Species",
    select_columns=select_columns, field_delim=",",
    use_quote_delim=True, na_value="", header=True, num_epochs=None,
    shuffle=True, shuffle_buffer_size=100, shuffle_seed=None,
    prefetch_buffer_size=tf.data.experimental.AUTOTUNE,
    num_parallel_reads=1, sloppy=True, num_rows_for_inference=100,
    compression_type=tf.constant("GZIP"))
```

As a sanity check:

```bash
gunzip iris_compressed.csv.gz
cat iris_compressed.csv
```

works as expected, returning:

```
1,5.1,3.5,Iris-setosa
2,4.9,3.0,Iris-setosa
3,4.7,3.2,Iris-setosa
```

Other info / logs (full traceback):

```
Traceback (most recent call last):
  File "test.py", line 38, in <module>
    compression_type=tf.constant("GZIP"))
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/data/experimental/ops/readers.py", line 547, in make_csv_dataset_v1
    compression_type, ignore_errors)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/data/experimental/ops/readers.py", line 434, in make_csv_dataset_v2
    column_names = _infer_column_names(filenames, field_delim, use_quote_delim)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/data/experimental/ops/readers.py", line 164, in _infer_column_names
    column_names = next(csv.reader(f, **csv_kwargs))
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 220, in next
    return self.__next__()
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 214, in __next__
    retval = self.readline()
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 179, in readline
    return self._prepare_value(self._read_buf.ReadLineAsString())
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 98, in _prepare_value
    return compat.as_str_any(val)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/util/compat.py", line 117, in as_str_any
    return as_str(value)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/util/compat.py", line 87, in as_text
    return bytes_or_text.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
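The traceback is consistent with the column-name inference step reading the raw gzip bytes instead of decompressing first: the gzip magic number is `0x1f 0x8b`, which puts exactly the reported byte `0x8b` at position 1. A standard-library sketch of the behaviour the reporter expects:

```python
# Reading the header of a gzip-compressed CSV requires decompression;
# reading the raw bytes yields the gzip magic number 0x1f 0x8b, the
# source of the "byte 0x8b in position 1" UnicodeDecodeError.
import csv
import gzip
import io

raw = io.BytesIO()
with gzip.GzipFile(fileobj=raw, mode="wb") as gz:
    gz.write(b"Id,SepalLengthCm\n1,5.1\n")

# The compressed stream starts with the gzip magic: 0x8b sits at position 1.
assert raw.getvalue()[:2] == b"\x1f\x8b"

# Decompressing first lets the header be parsed as text.
raw.seek(0)
with gzip.open(raw, mode="rt") as f:
    header = next(csv.reader(f))
print(header)  # ['Id', 'SepalLengthCm']
```

This is what `compression_type="GZIP"` presumably should make the column-inference path do before handing bytes to the CSV reader.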
tensorflow/tensorflow
DistributedDataset iteration fails with data of type string
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template

System information:
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04.5 LTS
- TensorFlow installed from (source or binary): conda
- TensorFlow version (use command below): 1.14.0
- Python version: 3.6.8
- CUDA/cuDNN version: 10.0
- GPU model and memory: 8 x Tesla P100-PCIE, 16 GB

Describe the current behavior: I have noticed an issue while iterating over a DistributedDataset (using a `tf.distribute.MirroredStrategy`) that contains data of type string. With eager execution enabled, iterating works perfectly well, but a RuntimeError is raised once the end of the dataset is reached (cf. logs below).

Describe the expected behavior: the exception is never raised. As when the dataset does not contain string data, iteration stops and the rest of the code is executed.

Code to reproduce the issue:

```python
import tensorflow as tf
tf.enable_eager_execution()
print("TensorFlow version: {}".format(tf.__version__))

raw = tf.random.uniform((256, 20), maxval=10, dtype=tf.int32)
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas: {}".format(strategy.num_replicas_in_sync))

dataset_1 = tf.data.Dataset.from_tensor_slices(raw)
dataset_1 = dataset_1.batch(64)
dataset_2 = tf.data.Dataset.from_tensors("this is a test").repeat(256).batch(64)

dist_dataset_1 = strategy.experimental_distribute_dataset(dataset_1)
dist_dataset_2 = strategy.experimental_distribute_dataset(dataset_2)

print("Iterating over dataset 2")
for i, examples in enumerate(dataset_2):
    print("batch {}".format(i))

print("Iterating over distributed dataset 1")
for i, examples in enumerate(dist_dataset_1):
    print("batch {}".format(i))

print("Iterating over distributed dataset 2")
for i, examples in enumerate(dist_dataset_2):
    print("batch {}".format(i))
```

Other info / logs:

```
TensorFlow version: 1.14.0
Number of replicas: 8
Iterating over dataset 2
batch 0
batch 1
batch 2
batch 3
Iterating over distributed dataset 1
batch 0
batch 1
batch 2
batch 3
Iterating over distributed dataset 2
batch 0
batch 1
batch 2
batch 3

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     25
     26 print("Iterating over distributed dataset 2")
---> 27 for i, examples in enumerate(dist_dataset_2):
     28     print("batch {}".format(i))

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py in __next__(self)
    225   def __next__(self):
    226     try:
--> 227       return self.get_next()
    228     except errors.OutOfRangeError:
    229       raise StopIteration

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py in get_next(self, name)
    254       return data
    255
--> 256     global_has_value, replicas = _get_next_as_optional(self, self._strategy)
    257     results = []
    258     for i, worker in enumerate(self._input_workers.worker_devices):

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py in _get_next_as_optional(iterator, strategy, name)
    160     with ops.device(worker):
    161       worker_has_value, next_element = (
--> 162           iterator._iterators[i].get_next_as_list(new_name))  # pylint: disable=protected-access
    163       # Collective all-reduce requires explicit devices for inputs.
    164       with ops.device("/cpu:0"):

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py in get_next_as_list(...)
    719             data.has_value(),
    720             lambda: data.get_value(),
--> 721             lambda: _dummy_tensor_fn(data.value_structure))
    722         results.append(real_data)
    723       # pylint: enable=cell-var-from-loop

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    505               'in a future version' if date is None else ('after %s' % date),
    506               instructions)
--> 507       return func(*args, **kwargs)
    508
    509     doc = _add_deprecated_arg_notice_to_docstring(

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in cond(pred, true_fn, false_fn, strict, name, fn1, fn2)
   1955       result = true_fn()
   1956     else:
--> 1957       result = false_fn()
   1958     if not strict:
   1959       result = _UnpackIfSingleton(result)

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py in <lambda>()
    719             data.has_value(),
    720             lambda: data.get_value(),
--> 721             lambda: _dummy_tensor_fn(data.value_structure))
    722             results.append(real_data)

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py in _dummy_tensor_fn(value_structure)
    637     for feature_shape, feature_type in zip(value_structure._flat_shapes,
    638                                            value_structure._flat_types):
--> 639       result.append(create_dummy_tensor(feature_shape, feature_type))
    640
    641     if isinstance(value_structure, structure.NestedStructure):

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py in create_dummy_tensor(feature_shape, feature_type)
    630
    631       # Create the dummy tensor.
--> 632       dummy_tensor = array_ops.zeros(tensor_shape.TensorShape(dims), feature_type)
    633       return dummy_tensor
    634

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py in zeros(shape, dtype, name)
   1869   # Create a constant if it won't be very big. Otherwise create a fill op
   1870   # to prevent serialized GraphDefs from becoming too large.
--> 1871   output = _constant_if_small(zero, shape, dtype, name)
   1872   if output is not None:
   1873     return output

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py in _constant_if_small(value, shape, dtype, name)
   1827   try:
-> 1828     if np.prod(shape) < 1000:
   1829       return constant(value, shape=shape, dtype=dtype, name=name)
   1830   except TypeError:
   1831     # Happens when shape is a Tensor, list with Tensor elements, etc.

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
    244
--> 245   return _constant_impl(value, dtype, shape, name, verify_shape=False,
    246                         allow_broadcast=True)
    247
    248

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    252   ctx = context.context()
    253   if ctx.executing_eagerly():
--> 254     t = convert_to_eager_tensor(value, ctx, dtype)
    255     if shape is None:
    256       return t

conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
    113     return t
    114   else:
--> 115     return ops.EagerTensor(value, handle, device, dtype)
    116
    117

RuntimeError: Error copying tensor to device: /job:localhost/replica:0/task:0/device:GPU:0. Can't copy Tensor with type string to device /job:localhost/replica:0/task:0/device:GPU:0
```
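Reading the traceback, the failure happens when the exhausted iterator substitutes a dummy "zeros" element per replica (`_dummy_tensor_fn` → `create_dummy_tensor`): a zeros tensor works for numeric dtypes on any device, but a string element cannot be placed on a GPU. A pure-Python sketch of that failure mode, with hypothetical names that stand in for the TF internals:

```python
# Sketch of the suspected root cause: at end-of-input, the distributed
# iterator pads each replica with a dummy "zero" element of the dataset's
# element type. Numeric dtypes have an obvious device-friendly zero;
# a string element cannot live on a GPU, producing the RuntimeError at
# the end of the log. Hypothetical stand-in, not the TF implementation.

def create_dummy_element(dtype):
    zeros = {"int32": 0, "float32": 0.0}
    if dtype in zeros:
        return zeros[dtype]
    if dtype == "string":
        # GPU devices cannot hold string tensors.
        raise RuntimeError(
            "Can't copy Tensor with type string to device GPU:0")
    raise TypeError("unsupported dtype: {}".format(dtype))

print(create_dummy_element("int32"))    # 0
print(create_dummy_element("float32"))  # 0.0
```

Under this reading, the bug is not in the user's pipeline: iteration over string data should simply stop at end-of-input instead of building a string dummy tensor on a GPU device.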
tensorflow/tensorflow
Mismatch between the description of batch_dot and TensorFlow's implementation
Bug
URL(s) with the issue:

Description of issue (what needs changing): I raised an issue on the PlaidML repo, and after some back and forth we determined that the documentation for `batch_dot` doesn't quite match the actual implementation in the TensorFlow code.

Clear description: a `batch_dot` with x of shape (1, 2, 6, 2), y of shape (1, 2, 2, 3), and axes (3, 1) has an output shape of (1, 2, 6, 3). Whereas by the TF definition of the output shape — "a tensor with shape equal to the concatenation of x's shape (less the dimension that was summed over) and y's shape (less the batch dimension and the dimension that was summed over); if the final rank is 1, we reshape it to (batch_size, 1)" — it sounds like it should have an output shape of (1, 2, 6, 2, 3).

Submitting a pull request: I am not planning to submit a PR at this time, but I may do it later.
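The discrepancy can be checked with NumPy: treating both leading axes as batch dimensions, which is effectively what the implementation computes, gives the observed (1, 2, 6, 3), not the (1, 2, 6, 2, 3) that the docstring's concatenation rule predicts. A sketch (NumPy assumed; this reproduces the observed behaviour, not the `batch_dot` source):

```python
# Contract the last axis of x against the contraction axis of y while
# batching over the two leading axes: shape comes out (1, 2, 6, 3),
# matching what the reporter observed from batch_dot. The docstring's
# "concatenate the remaining shapes" rule would instead predict
# (1, 2, 6, 2, 3).
import numpy as np

x = np.zeros((1, 2, 6, 2))
y = np.zeros((1, 2, 2, 3))

# a, b: batch axes; i: rows of x; k: summed axis; j: columns of y
out = np.einsum("abik,abkj->abij", x, y)
print(out.shape)  # (1, 2, 6, 3)
```

In other words, the implementation behaves like a batched matmul over all leading axes, while the documented rule describes only a single batch dimension.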
tensorflow/tensorflow
Unable to train model on multiple GPUs using MirroredStrategy in TF 2.0
Bug
I have two Tesla T4 GPUs from Google. I was using this example to split the dataset into test and train batches in order to train it on multiple GPUs, with the model defined inside `tf.distribute.MirroredStrategy`, as described here: "Using tf.distribute.Strategy with Keras". The model trains normally without `tf.distribute.MirroredStrategy` on a single GPU, but not when I want to train it on multiple GPUs. You can find the error log below. Another dummy example program in the docs for using MirroredStrategy is here: "Using tf.distribute.Strategy with Keras". On executing this, it trains and utilises both GPUs. Between these two examples, I noticed one difference, which is the use of `padded_batch` vs `batch`; it works fine in the latter case. I also have a model of mine where I was using `padded_batch`; that model also cannot be trained, although the error there was a bit different. Please suggest any way to get rid of this problem. Thank you.

```
Train on None steps
Epoch 1/10
W0718 14:21:33.186410 139786698037056 cross_device_ops.py:764] Efficient allreduce is not supported for 1 IndexedSlices
W0718 14:21:38.304410 139786698037056 cross_device_ops.py:764] Efficient allreduce is not supported for 1 IndexedSlices
2019-07-18 14:21:39.456127: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] implementation_selector failed: Invalid argument: Invalid format of input node name: replica_1/sequential/bidirectional/StatefulPartitionedCall/replica_1/StatefulPartitionedCall:8 Expected: {forward_node_name}:{index}
2019-07-18 14:21:40.179202: W tensorflow/core/grappler/optimizers/implementation_selector.cc:199] Skipping optimization due to error while loading function libraries: Invalid argument: Functions '__inference___backward_standard_lstm_356864_357366' and '__inference___backward_standard_lstm_356864_357366_specialized_for_StatefulPartitionedCall_1_at___inference_distributed_function_360209' both implement 'lstm_3542e53d-8ce9-47ba-b422-dc9202236064' but their signatures do not match.
2019-07-18 14:21:40.669817: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1483] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2019-07-18 14:21:40.752732: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-07-18 14:21:51.150641: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:111] Filling up shuffle buffer (this may take a while): 17050 of 50000
2019-07-18 14:22:00.643784: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:162] Shuffle buffer filled.
2019-07-18 14:22:00.664615: W tensorflow/core/framework/op_kernel.cc:1546] OP_REQUIRES failed at partitioned_function_ops.cc:113 : Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:1 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_102, and the dst node is while_2/RetVal
2019-07-18 14:22:00.664706: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:1 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_102, and the dst node is while_2/RetVal
	 [[node replica_1/sequential/bidirectional/StatefulPartitionedCall/metrics/accuracy/div_no_nan/ReadVariableOp_1_50]]
2019-07-18 14:22:00.664771: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:1 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_102, and the dst node is while_2/RetVal
	 [[node replica_1/sequential/bidirectional/StatefulPartitionedCall/replica_1/metrics/accuracy/AssignAddVariableOp_1_41]]
2019-07-18 14:22:00.665117: W tensorflow/core/framework/op_kernel.cc:1546] OP_REQUIRES failed at partitioned_function_ops.cc:113 : Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:1 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_100, and the dst node is while_0/RetVal
2019-07-18 14:22:00.665167: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:1 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_102, and the dst node is while_2/RetVal
	 [[node replica_1/sequential/bidirectional/StatefulPartitionedCall]]
2019-07-18 14:22:00.666533: W tensorflow/core/framework/op_kernel.cc:1546] OP_REQUIRES failed at partitioned_function_ops.cc:113 : Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:0 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_101, and the dst node is while_1/RetVal
2019-07-18 14:22:00.679274: W tensorflow/core/framework/op_kernel.cc:1546] OP_REQUIRES failed at partitioned_function_ops.cc:113 : Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:0 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_100, and the dst node is while_0/RetVal

Traceback (most recent call last):
  File "colab.py", line 94, in <module>
    model.fit(train_data, epochs=10, validation_data=test_data)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.py", line 643, in fit
    use_multiprocessing=use_multiprocessing)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_distributed.py", line 681, in fit
    steps_name='steps_per_epoch')
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 294, in model_iteration
    batch_outs = f(actual_inputs)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/keras/distribute/distributed_training_utils.py", line 813, in execution_function
    return [out.numpy() for out in distributed_function(input_fn)]
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/eager/def_function.py", line 428, in __call__
    return self._stateless_fn(*args, **kwds)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/eager/function.py", line 1335, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/eager/function.py", line 589, in _filtered_call
    (t for t in nest.flatten((args, kwargs), expand_composites=True)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/eager/function.py", line 671, in _call_flat
    outputs = self._inference_function.call(ctx, args)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/eager/function.py", line 445, in call
    ctx=ctx)
  File "/home/rishabh/.local/lib/python2.7/site-packages/tensorflow/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "/home/rishabh/.local/lib/python2.7/site-packages/six.py", line 737, in raise_from
    raise value
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:1 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_102, and the dst node is while_2/RetVal
	 [[node replica_1/sequential/bidirectional/StatefulPartitionedCall (defined at /usr/lib/python2.7/threading.py:801) ]]
	 [[metrics/accuracy/div_no_nan/ReadVariableOp_1_50]]
  (1) Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:1 vs. /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_102, and the dst node is while_2/RetVal
	 [[node replica_1/sequential/bidirectional/StatefulPartitionedCall (defined at /usr/lib/python2.7/threading.py:801) ]]
0 successful operations.
1 derived errors ignored. [Op:__inference_distributed_function_360209]

Function call stack:
distributed_function -> distributed_function
```
tensorflow/tensorflow
Keras Adam optimizer unsupported by GPU
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): preinstalled in Colab
- TensorFlow version (use command below): 1.14.0
- Python version: 3.6.8
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: -
- GPU model and memory: GPU on Colab

Describe the current behavior: I run a Keras Adam optimizer with a CNN network. The code works fine with CPU. If I turn on GPU in the notebook and rerun the same code, I get an exception.

Describe the expected behavior: no exception.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

```python
# Activate GPU
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)

training_samples = 100
input_shape = (16, 512, 1)
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform((32, 16, 512, 1), dtype=tf.float32),
     tf.random.uniform((32,), dtype=tf.float32)))
dataset = dataset.shuffle(32).repeat()

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    initializer = 'he_uniform'
    nb_filts = [8, 16, 32, 400]
    out_size = 1
    model = tf.keras.models.Sequential()
    model.add(keras.layers.Conv2D(nb_filts[0], kernel_size=(3, 3), activation='relu',
                                  padding='same', kernel_initializer=initializer,
                                  bias_initializer=initializer, input_shape=input_shape))
    model.add(keras.layers.Conv2D(nb_filts[0], kernel_size=(3, 3), activation='relu',
                                  padding='same', kernel_initializer=initializer,
                                  bias_initializer=initializer))
    model.add(keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same'))
    model.add(keras.layers.Conv2D(nb_filts[1], kernel_size=(3, 3), activation='relu',
                                  padding='same', kernel_initializer=initializer,
                                  bias_initializer=initializer))
    model.add(keras.layers.Conv2D(nb_filts[1], kernel_size=(3, 3), activation='relu',
                                  padding='same', kernel_initializer=initializer,
                                  bias_initializer=initializer))
    model.add(keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same'))
    model.add(keras.layers.Conv2D(nb_filts[2], kernel_size=(3, 3), activation='relu',
                                  padding='same', kernel_initializer=initializer,
                                  bias_initializer=initializer))
    model.add(keras.layers.Conv2D(nb_filts[2], kernel_size=(3, 3), activation='relu',
                                  padding='same', kernel_initializer=initializer,
                                  bias_initializer=initializer))
    model.add(keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same'))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(1024, activation='relu', kernel_initializer=initializer,
                                 bias_initializer=initializer))
    model.add(keras.layers.Dense(nb_filts[3], activation='relu', kernel_initializer=initializer,
                                 bias_initializer=initializer))
    model.add(keras.layers.Dense(out_size))
    optimizer = tf.keras.optimizers.Adam(1e-3)
    model.compile(optimizer=optimizer, loss='mean_absolute_error',
                  metrics=['mean_squared_error', 'mean_absolute_error'])

with strategy.scope():
    batch_size = 32
    nb_epochs = 1
    history = model.fit(dataset.batch(batch_size, drop_remainder=True),
                        epochs=nb_epochs,
                        steps_per_epoch=training_samples // batch_size,
                        verbose=1)
```

Other info / logs (full traceback):

```
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1355     try:
-> 1356       return fn(*args)
   1357     except errors.OpError as e:

14 frames
InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' used by node Adam/NcclAllReduce with these attrs: [shared_name="c0", T=DT_FLOAT, num_devices=1, reduction="sum"]
Registered devices: [CPU, GPU, XLA_CPU, XLA_GPU]
Registered kernels:
	 [[Adam/NcclAllReduce]]

During handling of the above exception, another exception occurred:

InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' used by node Adam/NcclAllReduce (defined at 4) with these attrs: [shared_name="c0", T=DT_FLOAT, num_devices=1, reduction="sum"]
Registered devices: [CPU, GPU, XLA_CPU, XLA_GPU]
Registered kernels:
	 [[Adam/NcclAllReduce]]
```
tensorflow/tensorflow
Can XLA compile tf.estimator.DNNLinearCombinedEstimator?
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: build_template

System information:
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): source (1.13)
- TensorFlow version: 1.13
- Python version: 2.7, installed using virtualenv/pip/conda
- Bazel version (if compiling from source): 0.19
- GCC/compiler version (if compiling from source): 4.8
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the problem: trying to AOT-compile a model trained using XLA. Error:

```
Node dnn/input_from_feature_columns/input_layer/xyz_axyz_embedding/weights/SparseReshape not registered
Node dnn/input_from_feature_columns/input_layer/ad_ad_id_embedding/ad_ad_id_embedding/weights/SparseReshape not registered
```

Is there any ETA on this being supported? How can I AOT XLA-compile `tf.estimator.DNNLinearCombinedEstimator`? Is this even possible with current support?

Provide the exact sequence of commands/steps that you executed before running into the problem:

```
bazel-bin/tensorflow/compiler/aot/tfcompile --graph=mygraph.pb --config=graph_config.pbtxt --cpp_class=mynamespace::MyComputation
```

Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
tf.py_function in a tf.data.Dataset pipeline doesn't work with TPUEstimator
Bug
Environment is Google Colab with a TPU runtime.

Describe the current behavior: I have an input pipeline that contains `tf.py_function` as one of its processing steps when training a model with TPUEstimator. When I run the code I get the following error: No registered 'EagerPyFunc' OpKernel for CPU devices compatible with node {{node EagerPyFunc}} ... [[input_pipeline_task0/MakeIterator]]. According to the documentation, "the input pipeline generated by your input_fn is run on CPU". Running the same input code with a standard Estimator on CPU/GPU works just fine. Manually placing the dataset and all processing steps on the CPU with `tf.device('/cpu:0')` also fails with the same error when training with TPUEstimator.

Describe the expected behavior: I should be able to run Python code on the CPU as part of my input pipeline when training on TPUs.

Code to reproduce the issue: the notebook linked above is nearly identical; I've just added the following line to the training data code cell:

```python
idx = tf.py_function(lambda x: x, [idx], tf.int32)
```
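For context, `tf.py_function` wraps an arbitrary Python callable so it can run as one step of the input pipeline, which requires a host with a Python interpreter; a TPU worker's input pipeline has no such kernel, which is one plausible reading of the missing-`EagerPyFunc` error. A pure-Python sketch of the concept (plain generators stand in for datasets; `py_function` and `dataset_map` here are illustrative, not the TF API):

```python
# Conceptual sketch of tf.py_function in a tf.data pipeline: a Python
# callable is invoked once per element as the dataset is consumed. The
# callable must execute where a Python runtime exists (the host CPU).

def py_function(func, inp):
    # Call back into the Python runtime for one element.
    return func(*inp)

def dataset_map(dataset, func):
    # Lazily apply the wrapped callable to each element, like Dataset.map.
    for element in dataset:
        yield py_function(func, [element])

pipeline = dataset_map(range(3), lambda x: x * 2)
print(list(pipeline))  # [0, 2, 4]
```

The sketch only illustrates why such a step is pinned to the host: the per-element callback is ordinary Python, not a compiled op.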
tensorflow/tensorflow
Feedback on TensorFlow 2.0 beta documentation issues
Bug
Hi, please accept the following as feedback on my experience of TensorFlow 2.0 beta:

1. Read the migration guide and figured I would give it a go.
2. Installed without issue in my Mac conda environment. This was to be the highlight of my success.
3. The documentation site isn't searchable by version, and given the frequency and amount of change here, it would be nice to find a way to locate where various things have moved to.
4. By way of example, PhasedLSTMCell: I know the contrib module is going in 2.0, but it's still in the GitHub 2.0 branch, which is misleading. Far more misleading is the comment here: "created in contrib, eventual plan to move to core" — with no indication of where one might currently find it.
5. Figured it might be in addons, but no luck there.
6. I wanted to use tf.keras for the first time. Maybe it's my pip/conda environment, but no matter what I did, I could not get it to import unless I did `import tensorflow.python.keras`. Did I miss this in the docs? Because I swear I didn't read it anywhere.

After some more poking around, I decided I'd had enough exposure to 2.0 and went back to 1.14. It did give me some minutes of excitement, however. I would love 2.0 to be speedily and widely adopted; the API looks a lot cleaner from what I read of it. I didn't get to use any of it ultimately, and I think some improvements around how the documentation is accessed would help uptake. I'm willing to contribute to help this along if I can.
tensorflow/tensorflow
Serialization of Keras object fails if called with different input sizes
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v1.12.1-6461-gc6352706d6 (1.14.0)
- Python version: 3.6.7
- Bazel version (if compiling from source): 0.26.1
- GCC/compiler version (if compiling from source): 7.4.0
- CUDA/cuDNN version: 10.0 / 7.5
- GPU model and memory: NVIDIA GeForce GTX 1080 Ti, 11 GB

Describe the current behavior: when I try to save a function of a `tf.Module` as a SavedModel that calls another function with different input shapes, it fails with the following error:

```
W0717 17:37:44.384423 139641189812032 save.py:129] Skipping full serialization of object <__main__.Outer object at 0x7f009a2e94e0>, because an error occurred while tracing layer functions. Error message: in converted code:

    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/save.py:539 call_and_return_conditional_losses
        return layer_call(inputs), layer.get_losses_for(inputs)
    test_signature.py:32 call
        return self.inner(x, dummy=dummy), self.inner(x_small, dummy=dummy)
    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:708 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py:48 wrapped_call
        outputs, losses = call_fn(inputs)
    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/save.py:506 __call__
        self.call_collection.add_trace(*args, **kwargs)
    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/save.py:467 add_trace
        fn.original_get_concrete_function(*args, **kwargs)
    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/save.py:515 original_get_concrete_function
        return super(LayerCall, self).get_concrete_function(*args, **kwargs)
    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py:692 get_concrete_function
        concrete = self._stateful_fn.get_concrete_function(*args, **kwargs)
    /home/salscheid/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py:1750 get_concrete_function
        (str(args), str(self.input_signature)))

    ValueError: Python inputs incompatible with input_signature: inputs ..., input_signature (TensorSpec(shape=(None, 64, 64, 8), dtype=tf.float32, name='input_1'))
```

Describe the expected behavior: the model can be saved successfully.

Code to reproduce the issue: the following test case allows reproducing the issue:

```python
import tensorflow as tf

class Inner(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(
            8, (3, 3), kernel_initializer=tf.keras.initializers.he_normal(),
            padding='same', name='conv1')

    def call(self, x, dummy=False):
        x = self.conv1(x)
        return x

class Outer(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.down = tf.keras.layers.Conv2D(
            8, (3, 3), strides=(2, 2),
            kernel_initializer=tf.keras.initializers.he_normal(),
            padding='same', name='down')
        self.inner = Inner()

    def call(self, x, dummy=False):
        x_small = self.down(x)
        return self.inner(x, dummy=dummy), self.inner(x_small, dummy=dummy)

class Infer(tf.Module):
    def __init__(self):
        super().__init__()
        # Decorate the inference function with tf.function
        self.infer = tf.function(self.infer, input_signature=[
            tf.TensorSpec([1, 64, 64, 8], tf.float32, 'prev_img')])
        self.outer = Outer()

    def infer(self, inputs):
        return self.outer(inputs, False)

# Create model
infer = Infer()

# Save the trained model
signature_dict = {'infer': infer.infer}
saved_model_dir = '/tmp/saved_model'
tf.saved_model.save(infer, saved_model_dir, signature_dict)
```
tensorflowtensorflow
slim
Bug
thank you for submit a tensorflow documentation issue per our github policy we only address code doc bug performance issue feature request and build installation issue on github the tensorflow doc be open source to get involve read the documentation contributor guide url s with the issue please provide a link to the documentation entry for example description of issue what need change clear description for example why should someone use this method how be it useful correct link be the link to the source code correct parameter define be all parameter define and format correctly return define be return value define raise list and define be the error define for example raise usage example be there a usage example request visual if applicable be there currently visual if not will it clarify the content submit a pull request be you plan to also submit a pull request to fix the issue see the docs contributor guide and the doc style guide
tensorflowtensorflow
Using XLA with Tacotron 2 is slower than without XLA
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: CentOS
- TensorFlow installed from (source or binary): binary
- TensorFlow version: v1.12.0-0-ga6d8ffae09 1.12.0
- Python version: 3.6.8
- CUDA/cuDNN version: V9.0.176 / 7
- GPU model and memory: P40

Describe the current behavior: Before using XLA, Tacotron 2 runs in about 0.5 s. However, after enabling XLA it increases to about 0.7 s, which is much slower.

I enable XLA like this:

    config = tf.ConfigProto()
    config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
    self.session = tf.Session(config=config)

Any thoughts would be appreciated, thanks.
tensorflowtensorflow
What replaces tf.data get_output_shapes in TF 2?
Bug
TF 2 datasets don't appear to have the output_shapes property, and there is no tf.data get_output_shapes. What do we use instead?
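A possible answer, based on the public TF 2 API (verify against your exact version): the TF 1 accessor survives under `tf.compat.v1.data.get_output_shapes`, and newer TF 2 releases expose the combined shape/dtype structure as `Dataset.element_spec`:

```python
import tensorflow as tf

# Each element of this dataset is one row of the 4x2 matrix.
ds = tf.data.Dataset.from_tensor_slices(tf.zeros([4, 2]))

# Newer TF 2 releases: inspect the (shape, dtype) of dataset elements.
spec = ds.element_spec          # TensorSpec with shape (2,) and dtype float32

# Compatibility shim mirroring the TF 1 dataset.output_shapes behavior.
shape = tf.compat.v1.data.get_output_shapes(ds)
print(spec, shape)
```

For nested datasets (tuples or dicts of tensors), `element_spec` returns the same nested structure of `TensorSpec` objects.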
tensorflowtensorflow
tf.sparse.to_dense does not work on sparse tensors with string values
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Linux Ubuntu 18.04.2 LTS (Bionic Beaver)
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.14.0
- Python version: 3.6.7

Describe the current behavior: Trying to convert a SparseTensor of type string into the corresponding dense tensor using tf.sparse.to_dense throws an exception:

    TypeError: Expected string passed to parameter 'default_value' of op 'SparseToDense', got 0 of type 'int' instead. Error: Expected string, got 0 of type 'int' instead.

Describe the expected behavior: As with an integer-valued SparseTensor, I expect tf.sparse.to_dense to return a dense tensor with string values.

Code to reproduce the issue:

    import tensorflow as tf

    sample_int = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
    sample_string = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]], values=['a', 'b'], dense_shape=[3, 4])
    tf.sparse.to_dense(sample_int)
    tf.sparse.to_dense(sample_string)

There is also a Colab notebook where you can execute the code directly.

Other info: I came across this issue while reading in some TFRecords that include a VarLenFeature with a string type list. Everything works fine with VarLenFeatures with integers. Right now I have not even found a way to convert the SparseTensor's string values into a new tensor as a workaround.
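A sketch of a possible workaround, assuming the failure comes from the integer default of the `default_value` parameter: passing an explicit string `default_value` avoids the type error:

```python
import tensorflow as tf

sample_string = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 2]], values=["a", "b"], dense_shape=[3, 4])

# The default for default_value is the int 0, which fails type checking
# for string sparse tensors; pass an empty string explicitly instead.
dense = tf.sparse.to_dense(sample_string, default_value="")
print(dense)
```

The resulting tensor has shape (3, 4), with `b'a'` at [0, 0], `b'b'` at [1, 2], and empty strings elsewhere.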
tensorflowtensorflow
Couldn't find and import the graph_transforms module in TensorFlow 1.14
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: CentOS 7.6
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.14
- Python version: 2.7
- Bazel version (if compiling from source): 0.24.1

Describe the current behavior:

    import tensorflow.tools.graph_transforms as graph_transforms

gives the error message:

    AttributeError: 'module' object has no attribute 'graph_transforms'

Describe the expected behavior: Successfully import the graph_transforms module. This code successfully runs with TF 1.13 but fails on TF 1.14.

Code to reproduce the issue:

    import tensorflow.tools.graph_transforms as graph_transforms

Other info: The installed graph_transforms module seems to have been moved to the tensorflow_core folder in site-packages. Is this as expected?
tensorflowtensorflow
How to pad input sequences for implementing cuDNN LSTM in TF 2.0
Bug
For the cuDNN LSTM implementation, according to the docs there are 6 requirements, as follows: activation == tanh, recurrent_activation == sigmoid, recurrent_dropout == 0, unroll is False, use_bias is True, and inputs are not masked or strictly right-padded. According to my understanding, the last one says the input sequences must not be right-padded. However, in text classification I want to pad sequences, and I am using the function padded_batch. Now I do not know how to left-pad using this function; as far as I could check, it always right-pads the input sequences, which would mean one can never use the cuDNN LSTM for training on GPUs if they are padding the input. However, it can be done easily using the tf.keras pad_sequences function defined here.
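For what it's worth, `tf.keras.preprocessing.sequence.pad_sequences` lets you choose the padding side explicitly (note: the requirement in the docs reads as inputs that *are* strictly right-padded, i.e. `padding='post'`, being the allowed case for the cuDNN path):

```python
import tensorflow as tf

seqs = [[1, 2, 3], [4, 5]]

# Right padding (zeros appended at the end).
right = tf.keras.preprocessing.sequence.pad_sequences(seqs, padding='post')

# Left padding (zeros prepended) -- the default behavior.
left = tf.keras.preprocessing.sequence.pad_sequences(seqs, padding='pre')

print(right.tolist())  # [[1, 2, 3], [4, 5, 0]]
print(left.tolist())   # [[1, 2, 3], [0, 4, 5]]
```

You can then build the dataset from the already-padded array instead of relying on `padded_batch`.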
tensorflowtensorflow
AttributeError: module 'tensorflow' has no attribute 'get_default_graph'
Bug
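This error typically appears when TF-1-style code (or the standalone `keras` package) runs against TF 2. A minimal sketch of the usual fixes, assuming that is the cause here:

```python
import tensorflow as tf

# In TF 2, graph-mode APIs such as get_default_graph moved under tf.compat.v1.
g = tf.compat.v1.get_default_graph()

# If the error comes from the standalone `keras` package calling into TF,
# importing tf.keras instead usually resolves it:
from tensorflow import keras
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
```

If neither applies, pinning TensorFlow 1.x is the remaining option for old code.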
tensorflowtensorflow
GPU device creation fails when using the cuda_malloc allocator
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.14 / master
- Python version: 3.6
- Bazel version (if compiling from source): 0.24.1
- GCC/compiler version (if compiling from source): 7.4.0
- CUDA/cuDNN version: 10.0
- GPU model and memory: RTX 2080

Describe the current behavior: GPU device creation fails when the cuda_malloc allocator is selected, with the error "No allocator statistics". This is because of an allocator stats check introduced between releases 1.13.1 and 1.14 in the GPU common runtime, in the BaseGPUDeviceFactory CreateGPUDevice function (L1310-L1312). This check fails when using the cuda_malloc allocator because the virtual allocator method GetStats is never overridden.

Describe the expected behavior: GPU device creation should succeed if the user specifies use of the cuda_malloc allocator.

Code to reproduce the issue: Use the TF_GPU_ALLOCATOR environment variable to select the cuda_malloc allocator:

    export TF_GPU_ALLOCATOR=cuda_malloc

Then, in a Python shell, try to use the GPU:

    import tensorflow as tf
    tf.test.is_gpu_available()
tensorflowtensorflow
AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
Bug
TensorFlow version: 1.14.0, Python 3.6, number of GPUs: 3.

Sample Keras code to reproduce this error:

    import tensorflow as tf
    from keras.applications import Xception
    from keras.utils import multi_gpu_model
    import numpy as np

    num_samples = 1000
    height = 224
    width = 224
    num_classes = 1000

    model = Xception(weights=None, input_shape=(height, width, 3), classes=num_classes)
    gpus = tf.config.experimental.list_physical_devices('GPU')
    print(gpus)
    parallel_model = multi_gpu_model(model, gpus=2)

Error message:

    device_string: /device:GPU:0
    (the line above is repeated many times)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/keras/utils/multi_gpu_utils.py", line 227, in multi_gpu_model
        outputs = model(inputs)
      File "/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py", line 457, in __call__
        output = self.call(inputs, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/keras/engine/network.py", line 564, in call
        output_tensors, _, _ = self.run_internal_graph(inputs, masks)
      File "/usr/local/lib/python3.6/dist-packages/keras/engine/network.py", line 721, in run_internal_graph
        layer.call(computed_tensor, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/keras/layers/normalization.py", line 185, in call
        epsilon=self.epsilon)
      File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 1858, in normalize_batch_in_training
        if not _has_nchw_support() and list(reduction_axes) == [0, 2, 3]:
      File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 291, in _has_nchw_support
        explicitly_on_cpu = _is_current_explicit_device('CPU')
      File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 266, in _is_current_explicit_device
        device = _get_current_tf_device()
      File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 247, in _get_current_tf_device
        g._apply_device_functions(op)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 4581, in _apply_device_functions
        op._set_device_from_string(device_string)
    AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
tensorflowtensorflow
GradientTape with tf.math.reduce_euclidean_norm disconnects
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: macOS Mojave
- TensorFlow installed from (source or binary): binary (conda)
- TensorFlow version: 2.0.0b1
- Python version: 3.7.3
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a, running on CPU

Describe the current behavior: Using GradientTape to track the gradient of tf.math.reduce_euclidean_norm directly, the gradient disconnects and returns None.

Describe the expected behavior: I expect a gradient to be returned. If I decompose the function into the three constituent sequential operations (square the elements, sum them reducing the dimension, and square-root the resulting tensor), I get the gradient connected up as expected.

Code to reproduce the issue:

    import tensorflow as tf

    x = tf.constant([3.0, -1.2, 17.0])

    # calculate Euclidean norm of an ND vector by the TF implementation
    with tf.GradientTape() as t2:
        t2.watch(x)
        z2 = tf.math.reduce_euclidean_norm(x, axis=-1)
    dz2_dx = t2.gradient(z2, x)
    print('z: {}\ngradient: {}'.format(z2, dz2_dx))

    # calculate Euclidean norm of an ND vector by decomposed operations
    with tf.GradientTape() as t:
        t.watch(x)
        x_sq = tf.math.square(x)
        x_sum_sq = tf.math.reduce_sum(x_sq, axis=0)
        z = tf.math.sqrt(x_sum_sq)
    dz_dx = t.gradient(z, x)
    print('z: {}\ngradient: {}'.format(z, dz_dx))

Other info: The example code gives

    z: 17.30433464050293
    gradient: None
    z: 17.30433464050293
    gradient: [ 0.17336696 -0.06934679  0.9824128 ]

With the reduce_euclidean_norm operator the gradient is dropped.
tensorflowtensorflow
Keras custom metric raises error when update_state returns an op
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 macos mojave 10 14 4 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below 1 14 0 python version 3 6 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior I be try to build a custom metric for kera which work with tensorflow 1 12 now after upgrade to python 1 14 I get the error show below I be return the result of tf group in the update state method of the metric which be of course an op what puzzle I be that tensorflow python keras util metric util update confusion matrix variable which be use by many of the other builtin metric like precision do the exact same thing to make sure that the error be not cause by my own implementation I copy the implementation of tf keras metric precision into my own file and try to run it it get the same error however when I substitute this custom metric with the builtin it work the code to reproduce this be show below describe the expect behavior the custom metric should work as expect code to reproduce the issue python import tensorflow as tf from tensorflow python keras metric import metric from tensorflow python keras util import metric util import tensorflow kera backend as k from tensorflow python 
op import math op from tensorflow python ops import init op import numpy as np from tensorflow python keras util generic util import to list class precision metric this be a 1 1 copy of the code in tensorflow python keras metric def init self threshold none top k none class i d none name none dtype none super precision self init name name dtype dtype self init threshold threshold self top k top k self class i d class i d default threshold 0 5 if top k be none else metric util neg inf self threshold metric util parse init threshold threshold default threshold default threshold self true positive self add weight true positive shape len self threshold initializer init op zeros initializer self false positive self add weight false positive shape len self threshold initializer init op zeros initializer def update state self y true y pre sample weight none return metric util update confusion matrix variable metric util confusionmatrix true positive self true positive metric util confusionmatrix false positive self false positive y true y pre threshold self threshold top k self top k class i d self class i d sample weight sample weight def result self result math op div no nan self true positive self true positive self false positive return result 0 if len self threshold 1 else result def reset state self num threshold len to list self threshold k batch set value v np zero num threshold for v in self variable def get config self config threshold self init threshold top k self top k class i d self class i d base config super precision self get config return dict list base config item list config item if name main x tf keras input 10 y hat tf keras layer dense 1 activation sigmoid x model tf keras model model input x output y hat model compile optimizer tf keras optimizer sgd 0 01 loss binary crossentropy metric precision however the builtin metric work model compile optimizer tf keras optimizer sgd 0 01 loss binary crossentropy metric tf keras metric precision x np random 
uniform 1 1 size 100 10 astype np float32 y np random choice 0 1 size 100 astype np float32 model fit x y other info log traceback most recent call last file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python framework func graph py line 676 in convert x op convert to tensor or composite x file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python framework op py line 1479 in convert to tensor or composite value value dtype dtype name name as ref false file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python framework op py line 1518 in internal convert to tensor or composite accept composite tensor true file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python framework op py line 1224 in internal convert to tensor ret conversion func value dtype dtype name name as ref as ref file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python framework op py line 6696 in operation conversion error op name dtype name as ref typeerror can t convert operation group dep to tensor target dtype none name none as ref false during handling of the above exception another exception occur traceback most recent call last file metric bug py line 75 in metric precision file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python training tracking base py line 457 in method wrapper result method self args kwargs file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python keras engine training py line 330 in compile mask self prepare output mask file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python keras engine training py line 2170 in handle metric target output output mask file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python keras engine training py line 2118 in handle per output metric mask file user denis anaconda2 envs dev p36 lib python3 6 site 
package tensorflow python keras engine training py line 2094 in call metric fn strategy self distribution strategy file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python keras distribute distribute training util py line 1054 in call replica local fn return fn args kwargs file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python keras engine training util py line 873 in call metric function return metric fn y true y pre sample weight weight file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python keras metrics py line 170 in call update op self update state args kwargs pylint disable not callable file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python keras util metric util py line 73 in decorate update op update state fn args kwargs file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python eager def function py line 414 in call self initialize args kwd add initializer to initializer map file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python eager def function py line 357 in initialize args kwd file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python eager function py line 1349 in get concrete function internal garbage collect graph function self maybe define function args kwargs file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python eager function py line 1652 in maybe define function graph function self create graph function args kwargs file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python eager function py line 1545 in create graph function capture by value self capture by value file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python framework func graph py line 720 in func graph from py func expand composite true file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow 
python util nest py line 515 in map structure structure 0 func x for x in entry file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python util nest py line 515 in structure 0 func x for x in entry file user denis anaconda2 envs dev p36 lib python3 6 site package tensorflow python framework func graph py line 682 in convert str python func type x typeerror to be compatible with tf contrib eager defun python function must return zero or more tensor in compilation of wrap fn at 0xb34ec5d08 find return value of type which be not a tensor
tensorflowtensorflow
No way to generate HTML docs from source
Bug
Description of issue (what needs changing): According to the linked discussion, there was no open-source way to generate HTML docs from the TensorFlow source code in March of 2016. Is that still the case? I haven't been able to find any way to generate HTML docs from the source. Generating offline HTML docs is useful for having access to offline docs without having to scrape www.tensorflow.org.
tensorflowtensorflow
HistogramFixedWidth in TFLite
Bug
System information:
- OS platform and distribution: Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): 2.0 beta

Provide the text output from tflite_convert:

    Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of operators for which you will need custom implementations: HistogramFixedWidth.

Code to reproduce:

    nbins = 5
    value_range = [0.0, 5.0]
    new_values = [1.0, 0.0, 1.5, 2.0, 5.0, 15]

    root = tf.train.Checkpoint()
    root.nbins = tf.Variable(nbins)
    root.value_range = tf.Variable(value_range)
    root.f = tf.function(lambda x: tf.histogram_fixed_width(x, root.value_range, root.nbins))

    input_data = tf.convert_to_tensor(new_values)
    concrete_func = root.f.get_concrete_function(input_data)
    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
    tflite_model = converter.convert()
tensorflowtensorflow
Variation in TensorFlow execution latency
Bug
System information:
- Neural network: LeNet-300-100
- Target platform: Xeon Platinum 8000 series (Amazon AWS EC2 C5 instance, supports AVX-512; 72 cores, 2-4 threads, up to 2 instance cores and 2 vCPUs)
- Baseline framework: TensorFlow
- AWS image version: AMI ID: Deep Learning AMI (Ubuntu) Version 23.1 (ami-07262a45de118922e)
- Python version: p27

Describe the problem: To collect baseline results for the TensorFlow model of LeNet-300-100, I measure the accuracy and inference latency using the MNIST dataset with a batch size of 16 and the full 10000 images. Analyzing the TensorFlow inference runtime results, there are inconsistencies on subsequent sessions. Please review the attached ref_times for inconsistent runtimes for various batches: ref_times.txt

Please review the following and advise on this.

Source code (logging time per session):

    with tf.Session(graph=graph, config=config) as sess:
        curr_batch = 0
        warm_up = 5
        for i in range(warm_up):
            sess.run(outputs, feed_dict={inputs: zeros})
        while True:
            batch = input_reader.receive_batch()
            if batch is None:
                break
            start_monotonic = monotonic()
            preds = sess.run(outputs, feed_dict={inputs: batch})
            sys.stderr.write("batch inference time: {}s\n".format(monotonic() - start_monotonic))
            output_writer.send(preds)

Please let me know if you need any other details. Please review and advise me on this.
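One common knob for reducing run-to-run latency variance on many-core CPUs is pinning the TF thread pools, so op scheduling is more deterministic across sessions. A sketch (the thread counts below are illustrative assumptions, not a recommendation; tune them for your CPU):

```python
import tensorflow as tf

# Fix the sizes of the intra-op and inter-op thread pools so scheduling
# does not vary with whatever defaults TF picks per session.
config = tf.compat.v1.ConfigProto(
    intra_op_parallelism_threads=2,
    inter_op_parallelism_threads=1)

with tf.compat.v1.Session(config=config) as sess:
    pass  # run warm-up and timed inference here
```

Combining this with the warm-up loop already in the script above usually tightens the latency distribution.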
tensorflowtensorflow
freeze_graph does not support tf.control_dependencies
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (PyPI tensorflow-gpu 1.14.0)
- TensorFlow version: 1.14.0
- Python version: 3.6
- CUDA/cuDNN version: CUDA 10, cuDNN 7
- GPU model and memory: NVIDIA 1080

Describe the current behavior: This is the simple code I want to freeze, with the 'out' node as the output node name:

    print_op = tf.print(tf.ones((2, 2)))
    with tf.control_dependencies([print_op]):
        out = tf.zeros((3,))

By freezing the 'out' node with freeze_graph.freeze_graph, what I get from the protobuf only contains nodes like StringFormat, PrintV2 and ones; there is no other information to show their execution order. It seems like the execution order managed by tf.control_dependencies is lost from the frozen graph.

Code to reproduce the issue: Freeze any large or small graph that uses tf.control_dependencies and check how it is described in the frozen protobuf data.
tensorflowtensorflow
Strange bug with uint64 and tf.constant
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Linux
- TensorFlow installed from (source or binary): pip
- TensorFlow version: 1.13
- Python version: 2.7

Describe the current behavior: Running the following code:

    import tensorflow as tf

    xx = tf.constant([54043195528445964, 72057594037927941, 54043195528445957,
                      54043195528445954, 108086391056891910], dtype=tf.int64)
    yy = tf.cast(xx, dtype=tf.uint64)
    qqq = tf.constant([1, 2, 3, 4, 5])
    www = tf.constant([1, 2, 3, 4, 5])

    sess = tf.Session()
    with sess.as_default():
        print(sess.run(xx))
        print(sess.run(yy))

the output is:

    [ 54043195528445964  72057594037927941  54043195528445957  54043195528445954 108086391056891910]
    [0 0 0 0 0]

while it should have been:

    [ 54043195528445964  72057594037927941  54043195528445957  54043195528445954 108086391056891910]
    [ 54043195528445964  72057594037927941  54043195528445957  54043195528445954 108086391056891910]

However, the following code:

    import tensorflow as tf

    xx = tf.constant([54043195528445964, 72057594037927941, 54043195528445957,
                      54043195528445954, 108086391056891910], dtype=tf.int64)
    yy = tf.cast(xx, dtype=tf.uint64)

    sess = tf.Session()
    with sess.as_default():
        print(sess.run(xx))
        print(sess.run(yy))

prints the correct output:

    [ 54043195528445964  72057594037927941  54043195528445957  54043195528445954 108086391056891910]
    [ 54043195528445964  72057594037927941  54043195528445957  54043195528445954 108086391056891910]

This is really weird.
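For comparison, in TF 2.x eager mode the same int64-to-uint64 cast appears to behave correctly, which suggests the problem is specific to graph-mode constant handling in 1.13:

```python
import tensorflow as tf

xx = tf.constant([54043195528445964, 72057594037927941,
                  108086391056891910], dtype=tf.int64)
# Cast to the unsigned type; eager execution returns the values directly.
yy = tf.cast(xx, dtype=tf.uint64)
print(yy.numpy())  # values preserved, not zeroed out
```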
tensorflowtensorflow
TensorArray objects used as dataset.reduce state lose inferred shape
Bug
System information:
- TensorFlow version: 2.0
- Python version: 3

Describe the current behavior: TensorArray objects passed as accumulators to dataset.reduce lose their inferred shape; subsequent calls to TensorArray.concat return a fully unknown shape.

Describe the expected behavior: The element shape of the TensorArray should be partially known, consistent with the behavior of an equivalent tf.while_loop.

Code to reproduce the issue:

    import tensorflow as tf

    @tf.function
    def compute():
        arr = tf.TensorArray(tf.float32, 1, dynamic_size=True)

        def body(i, arr):
            real_logits = tf.random.normal((5, 1))
            arr = arr.write(tf.cast(i, tf.int32), real_logits)
            i += 1
            return i, arr

        def cond(i, arr):
            return i < 10

        _, arr = tf.while_loop(cond, body, (0, arr))
        c = arr.concat()
        tf.print('TensorArray concat shape:', c.shape, 'rank:', c.shape.rank)
        return c

    @tf.function
    def compute_ds():
        arr = tf.TensorArray(tf.float32, 1, dynamic_size=True)

        def body(state, _):
            i, arr = state
            real_logits = tf.random.normal((5, 1))
            arr = arr.write(tf.cast(i, tf.int32), real_logits)
            i += 1
            return i, arr

        en_ds = tf.data.Dataset.range(10).enumerate()
        _, arr = en_ds.reduce((0, arr), body)
        c = arr.concat()
        tf.print('TensorArray concat shape:', c.shape, 'rank:', c.shape.rank)
        return c

    print('with tf.while_loop:')
    compute()
    print()
    print('with tf.data Dataset.reduce:')
    compute_ds()

Output:

    with tf.while_loop:
    TensorArray concat shape: TensorShape([None, 1]) rank: 2

    with tf.data Dataset.reduce:
    TensorArray concat shape: TensorShape(None) rank: None
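A possible workaround (my assumption, based on the TensorArray API): pass `element_shape` when constructing the TensorArray, so the shape does not have to be inferred across the `dataset.reduce` boundary:

```python
import tensorflow as tf

# Declaring the element shape up front sidesteps the lost shape inference.
arr = tf.TensorArray(tf.float32, size=1, dynamic_size=True,
                     element_shape=tf.TensorShape([5, 1]))

def body(state, _):
    i, ta = state
    ta = ta.write(i, tf.random.normal((5, 1)))
    return i + 1, ta

en_ds = tf.data.Dataset.range(10).enumerate()
_, arr = en_ds.reduce((tf.constant(0, tf.int32), arr), body)
c = arr.concat()
print(c.shape)  # second dimension is known
```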
tensorflowtensorflow
TFLite speech example fails to train with output_representation 'spec' or 'mfcc' (examples/lite/examples/speech_commands/ml)
Bug
My environment: working sample link; virtual environment: Anaconda Navigator; editor: VS Code; mode of execution: VS Code integrated terminal with conda envs; OS: Mac OS X; TensorFlow version: 1.13.1 (as required in the sample requirements).

When I download the sample and run it as-is per the README file, everything works perfectly fine. But when I try to change the output_representation parameter value to 'spec' or 'mfcc', it doesn't work. I get the error

    ValueError: total size of new array must be unchanged

in model.py, line 59: x = Reshape((800, 20))(x). After a quick traceback, I found that the spectrogram fingerprint size is taken as 257x98 for every second, so I changed that line to x = Reshape((257, 98))(x) and it successfully passed through, but instead I get the following:

    Traceback (most recent call last):
      File "/Users/minimaci73/anaconda3/envs/samples/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1659, in _create_c_op
        c_op = c_api.TF_FinishOperation(op_desc)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Negative dimension size caused by subtracting 3 from 1 for 'conv1d_12/convolution/Conv2D' (op: 'Conv2D') with input shapes: [?,1,1,192], [1,3,192,256].

This happens at line 99: x = _reduce_conv(x, 256, 3). When I download the .io example model and open it in Netron, I can clearly see that it uses AudioSpectrogram. What are all the changes to be done to train the model with 'spec'/'mfcc', and the 'mfcc' and 'raw' representations?
tensorflowtensorflow
tpuestimator function require train batch size to be set when use tpu be true
Bug
the tpuestimator constructor (L2590) requires train_batch_size to be set if use_tpu is true. in cases where I only want to use a TPU estimator to predict, that means I have to pass in an arbitrary value for train_batch_size. looking deeply into the code, I can't pinpoint why train_batch_size needs to be set when on a TPU, but I'm assuming it's required somewhere deep in the code. it was very confusing for me, especially since the documentation is conflicting (pull request #37, issue 297115525). maybe an option should be added: if train_batch_size is not set but one of the other two (eval and predict) is, then emit a warning and pass an arbitrary value for train_batch_size. otherwise, maybe the documentation should be clearer that train_batch_size must always be set when on a TPU, even if you are only predicting or evaluating.
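The constraint the reporter describes can be sketched as a plain-Python check; the function name and signature here are illustrative stand-ins, not TPUEstimator's actual internals:

```python
def check_tpu_batch_sizes(use_tpu, train_batch_size=None,
                          eval_batch_size=None, predict_batch_size=None):
    """Illustrative stand-in for the constructor check the reporter hit:
    train_batch_size is demanded whenever use_tpu is True, even when the
    estimator will only ever evaluate or predict."""
    if use_tpu and train_batch_size is None:
        raise ValueError("train_batch_size is required when use_tpu is True")
    return True

# predict-only use still forces a dummy training batch size:
try:
    check_tpu_batch_sizes(use_tpu=True, predict_batch_size=8)
except ValueError as e:
    print(e)
check_tpu_batch_sizes(use_tpu=True, train_batch_size=1, predict_batch_size=8)
```

The reporter's proposal amounts to relaxing the first branch: warn and substitute a default when only eval or predict batch sizes are supplied.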
tensorflowtensorflow
tf distribute mirroredstrategy incompatible with tf estimator training when define tf train scaffold with saver
Bug
please make sure that this is a bug. as per our github policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on github. tag: bug_template

system information
have I written custom code (as opposed to using a stock example script provided in tensorflow): no
os platform and distribution (e.g., linux ubuntu 16.04): ubuntu 18.04
mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on a mobile device: no
tensorflow installed from (source or binary): binary
tensorflow version (use command below): 1.14.0
python version: 3.7.3
cuda/cudnn version: 10.0 / 7.1
gpu model and memory: titan xp 12 gb x 4

describe the current behavior
when I use tf.estimator together with tf.distribute.MirroredStrategy for single-worker multi-gpu training, I meet the following error if I try to define a tf.train.Scaffold for tf.estimator.EstimatorSpec to configure the saver parameters. if I just remove the scaffold, everything works fine. the multi-gpu training for estimator follows this tutorial (scrollTo=098zb3vvhuv):

    File ".../tensorflow/python/distribute/distribute_lib.py", line 126, in _require_cross_replica_or_default_context_extended
      raise RuntimeError("Method requires being in cross-replica context, use "
    RuntimeError: Method requires being in cross-replica context, use get_replica_context().merge_call()

code to reproduce the issue
here is my minimal snippet of code to reproduce this error:

```python
import tensorflow as tf
from tensorflow.python.keras.applications import MobileNetV2

l = tf.keras.layers

def input_fn():
    dataset = tf.data.Dataset.from_tensor_slices({
        "features": tf.random_normal(shape=(1, 224, 224, 3), dtype=tf.float32),
        "labels": tf.random_uniform(shape=(1,), minval=0, maxval=2, dtype=tf.int32)})
    dataset = dataset.repeat()
    dataset = dataset.batch(2)
    return dataset

def model_fn(features, labels, mode):
    input_tensor = features["features"]
    labels = features["labels"]
    if mode == tf.estimator.ModeKeys.TRAIN:
        model = MobileNetV2(input_shape=(224, 224, 3), classes=2, weights=None)
        outputs = model(input_tensor)
        loss = tf.losses.sparse_softmax_cross_entropy(labels, outputs)
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=tf.train.get_global_step())
        # define scaffold
        saver = tf.train.Saver(sharded=True,
                               keep_checkpoint_every_n_hours=1,
                               save_relative_paths=True)
        tf.add_to_collection(tf.GraphKeys.SAVERS, saver)
        scaffold = tf.train.Scaffold(saver=saver)
        # removing the scaffold makes this code work
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss,
                                          train_op=train_op, scaffold=scaffold)

# multi-gpu configuration for the estimator
devices = ["/device:GPU:0", "/device:GPU:1"]
strategy = tf.distribute.MirroredStrategy(devices=devices)
config = tf.estimator.RunConfig(model_dir="./test_multi_gpu_train",
                                train_distribute=strategy)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
estimator.train(input_fn=input_fn, steps=1000)
```

other info / logs
the full error log (paths under /data/fanzong/miniconda3/envs/tf-cuda10/lib/python3.7/site-packages abbreviated to "..."):

    Traceback (most recent call last):
      File "test_multi_gpus.py", line 45, in <module>
        estimator.train(input_fn=input_fn, steps=1000)
      File ".../tensorflow_estimator/python/estimator/estimator.py", line 367, in train
        loss = self._train_model(input_fn, hooks, saving_listeners)
      File ".../tensorflow_estimator/python/estimator/estimator.py", line 1156, in _train_model
        return self._train_model_distributed(input_fn, hooks, saving_listeners)
      File ".../tensorflow_estimator/python/estimator/estimator.py", line 1219, in _train_model_distributed
        self._config._train_distribute, input_fn, hooks, saving_listeners)
      File ".../tensorflow_estimator/python/estimator/estimator.py", line 1299, in _actual_train_model_distributed
        self.config)
      File ".../tensorflow/python/distribute/distribute_lib.py", line 1555, in call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
      File ".../tensorflow/python/distribute/mirrored_strategy.py", line 693, in _call_for_each_replica
        fn, args, kwargs)
      File ".../tensorflow/python/distribute/mirrored_strategy.py", line 195, in _call_for_each_replica
        coord.join(threads)
      File ".../tensorflow/python/training/coordinator.py", line 389, in join
        six.reraise(*self._exc_info_to_raise)
      File ".../six.py", line 693, in reraise
        raise value
      File ".../tensorflow/python/training/coordinator.py", line 297, in stop_on_exception
        yield
      File ".../tensorflow/python/distribute/mirrored_strategy.py", line 911, in run
        self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
      File ".../tensorflow_estimator/python/estimator/estimator.py", line 1146, in _call_model_fn
        model_fn_results = self._model_fn(features=features, **kwargs)
      File "test_multi_gpus.py", line 34, in model_fn
        save_relative_paths=True)
      File ".../tensorflow/python/training/saver.py", line 825, in __init__
        self.build()
      File ".../tensorflow/python/training/saver.py", line 837, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File ".../tensorflow/python/training/saver.py", line 875, in _build
        build_restore=build_restore)
      File ".../tensorflow/python/training/saver.py", line 497, in _build_internal
        per_device = self._GroupByDevices(saveables)
      File ".../tensorflow/python/training/saver.py", line 404, in _GroupByDevices
        pydev.canonical_name(spec.tensor.device) for spec in saveable.specs)
      File ".../tensorflow/python/training/saver.py", line 404, in <genexpr>
        pydev.canonical_name(spec.tensor.device) for spec in saveable.specs)
      File ".../tensorflow/python/training/saving/saveable_object.py", line 52, in tensor
        return self._tensor() if callable(self._tensor) else self._tensor
      File ".../tensorflow/python/distribute/values.py", line 1358, in tensor
        return strategy.extended.read_var(sync_on_read_variable)
      File ".../tensorflow/python/distribute/mirrored_strategy.py", line 768, in read_var
        return replica_local_var._get_cross_replica()  # pylint: disable=protected-access
      File ".../tensorflow/python/distribute/values.py", line 1424, in _get_cross_replica
        axis=None)
      File ".../tensorflow/python/distribute/distribute_lib.py", line 832, in reduce
        return super(StrategyV1, self).reduce(reduce_op, value, axis)
      File ".../tensorflow/python/distribute/distribute_lib.py", line 552, in reduce
        _require_cross_replica_or_default_context_extended(self._extended)
      File ".../tensorflow/python/distribute/distribute_lib.py", line 126, in _require_cross_replica_or_default_context_extended
        raise RuntimeError("Method requires being in cross-replica context, use "
    RuntimeError: Method requires being in cross-replica context, use get_replica_context().merge_call()
tensorflowtensorflow
scatter nd update doesn t work with string
Bug
system information (I reproduced this issue in a new tensorflow official docker image)
have I written custom code (as opposed to using a stock example script provided in tensorflow): no
os platform and distribution (e.g., linux ubuntu 16.04): ubuntu 18.04
tensorflow installed from (source or binary): binary
tensorflow version (use command below): 1.14.0
python version: 2.7.15

describe the current behavior
in my model I need to maintain an extremely long 2-d variable tensor, which has several columns and many rows, and its dtype is string. in every training step I need to update only several individual rows of that tensor. tf.scatter_nd_update meets my requirement perfectly, except that it doesn't work with strings (in fact, as a contrast, tf.scatter_nd does work). since the documentation doesn't mention that ref can't be strings, I think it may be a bug.

describe the expected behavior
I hope tf.scatter_nd_update supports string refs, and I really need this feature in my project. so if it can't be fixed quickly, any workaround suggestion, including modifying some source code, is also welcome.

code to reproduce the issue

    import tensorflow as tf

    ref = tf.Variable(["qq", "ww", "ee", "rr"])
    indices = tf.constant([[4], [3], [1], [7]])
    updates = tf.constant(["aa", "dd", "cc", "bb"])
    update = tf.scatter_nd_update(ref, indices, updates)
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print(sess.run(update))

other info / logs

    Traceback (most recent call last):
      File "<stdin>", line 2, in <module>
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 950, in run
        run_metadata_ptr)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1173, in _run
        feed_dict_tensor, options, run_metadata)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1350, in _do_run
        run_metadata)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1370, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'ScatterNdUpdate' used by node ScatterNdUpdate (defined at <stdin>:1) with these attrs: [_class=["loc:@Variable"], use_locking=true, Tindices=DT_INT32, T=DT_STRING]
    Registered devices: [CPU, XLA_CPU]
    Registered kernels:
      device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_COMPLEX128]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_COMPLEX128]; Tindices in [DT_INT32]
      (similar DT_INT64/DT_INT32 index registrations follow for DT_COMPLEX64, DT_DOUBLE, DT_FLOAT, DT_BFLOAT16, DT_HALF, DT_INT8, DT_UINT8, DT_INT16, DT_UINT16, DT_INT32 and DT_INT64 -- DT_STRING is absent)
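Since the reporter notes that tf.scatter_nd does accept strings, one possible workaround (sketched here in NumPy, not tested against TF 1.14) is to scatter both the updates and a boolean mask into fresh tensors and combine them with a where-style select, then assign the result back to the variable:

```python
import numpy as np

def scatter_nd_update_strings(ref, indices, updates):
    """NumPy sketch of the scatter-then-select workaround:
    scatter the updates and a boolean mask, then pick per element."""
    scattered = np.zeros_like(ref)        # yields '' for string dtypes
    mask = np.zeros(len(ref), dtype=bool)
    scattered[indices] = updates
    mask[indices] = True
    return np.where(mask, scattered, ref)

ref = np.array(["qq", "ww", "ee", "rr"])
print(scatter_nd_update_strings(ref, [3, 1], ["bb", "dd"]))
# ['qq' 'dd' 'ee' 'bb']
```

The TF equivalent would scatter with tf.scatter_nd (which supports strings), select with tf.where, and write back with ref.assign(...); whether every step accepts DT_STRING in 1.14 is an assumption that would need checking.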
tensorflowtensorflow
tf while loop with tf keras layers lstm break
Bug
system information
have I written custom code (as opposed to using a stock example script provided in tensorflow): yes
os platform and distribution (e.g., linux ubuntu 16.04):
mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on a mobile device:
tensorflow installed from (source or binary): binary
tensorflow version (use command below): the july 12 py36 gpu 2.0 nightly preview
python version: 3.6
cuda/cudnn version: 10 / 7
gpu model and memory: 3x geforce gtx with 8 gb

describe the current behavior
first I want to mention that LSTM not working with distribution strategies is already being looked into here. I want to highlight this as a separate issue because it likely has a different source. basically, when dynamically decoding a sequence with an LSTM and tf.while_loop, the code breaks (see logs below for more details). this does not happen with an RNN(LSTMCell) configuration, but LSTM is the only cudnn access point (aside from GRU, which also does not work in this configuration).

describe the expected behavior
the code should use the optimized cudnn LSTM implementation and behave like the RNN(LSTMCell) approach, i.e. not fail.

code to reproduce the issue

other info / logs

    2019-07-12 11:37:10.386140: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1558] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
    2019-07-12 11:38:30.548248: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] function_optimizer failed: Invalid argument: Input 1 of node seq3/seq_encoder/while/body/_195/TensorListPushBack_49 was passed int32 from seq3/seq_encoder/while/body/_195/decoder_c/lstm_3/StatefulPartitionedCall:9 incompatible with expected variant.
    2019-07-12 11:38:37.853257: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] function_optimizer failed: Invalid argument: Input 1 of node seq3/seq_encoder/while/body/_195/TensorListPushBack_49 was passed int32 from seq3/seq_encoder/while/body/_195/decoder_c/lstm_3/StatefulPartitionedCall:9 incompatible with expected variant.
    2019-07-12 11:38:39.689929: W tensorflow/core/common_runtime/process_function_library_runtime.cc:672] Ignoring multi-device function optimization failure: Invalid argument: Input 1 of node seq3/seq_encoder/while/body/_195/TensorListPushBack_77 was passed int32 from seq3/seq_encoder/while/body/_195/decoder_c/lstm_2/StatefulPartitionedCall:9 incompatible with expected variant.
    2019-07-12 11:38:45.280991: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
    2019-07-12 11:38:45.755520: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at partitioned_function_ops.cc:113 : Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:0 vs /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_20/Exit_94, and the dst node is while_0_RetVal
    2019-07-12 11:38:45.755562: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:0 vs /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_20/Exit_94, and the dst node is while_0_RetVal [node seq3/seq_encoder/while/body/_195/decoder_c/lstm_2/StatefulPartitionedCall/If_9/else/_2424/gradients/while_grad/while_grad/body/_11561/gradients/TensorArrayV2Read/TensorListGetItem_grad/TensorListLength/TensorListPopBack_1920]
    2019-07-12 11:38:45.755854: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Cannot place the graph because a reference or resource edge connects colocation groups with incompatible assigned devices: /job:localhost/replica:0/task:0/device:GPU:0 vs /job:localhost/replica:0/task:0/device:CPU:0. The edge src node is while_20/Exit_94, and the dst node is while_0_RetVal [node seq3/seq_encoder/while/body/_195/decoder_c/lstm_2/StatefulPartitionedCall]
    [I 11:38:49.971 NotebookApp] Saving file at /seq3_lstm_cuda.ipynb
tensorflowtensorflow
mirror strategy you dataset iterator run out of datum interrupting training
Bug
system information
have I written custom code (as opposed to using a stock example script provided in tensorflow): yes
os platform and distribution (e.g., linux ubuntu 16.04): ubuntu 18.04
tensorflow installed from (source or binary): pip, tensorflow-gpu==2.0.0-beta1
tensorflow version (use command below): 2.0.0-beta1
python version: 3.7
gpu model and memory: titan rtx x 2 (2 x 24 gb), p100 x 2 (2 x 16 gb)

describe the problem
error in keras model.fit when using MirroredStrategy with a tensorflow dataset for both training and validation. a single gpu card works fine, whether using MirroredStrategy (with devices set to ["/gpu:0"]) or not; the problem only occurs when using multiple gpu cards. the error displayed:

    training_arrays.py:325: Your dataset iterator ran out of data; interrupting training. Make sure that your iterator can generate at least `validation_steps * epochs` batches.

currently the only working solution for me is manually setting validation_steps in keras model.fit. tensorflow dataset .repeat() and/or .take() do not work. setting the validation batch size to 2 (1 for each gpu) also does not work. a similar issue is here (but closed).
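The manual workaround amounts to telling fit exactly how many validation batches exist: ceil(num_examples / global_batch), where the global batch is the per-replica batch times the number of replicas. The numbers below are hypothetical, for illustration only:

```python
# hypothetical sizes for illustration
num_val_examples = 1000
per_replica_batch = 32
num_replicas = 2            # e.g. two GPUs under MirroredStrategy

global_batch = per_replica_batch * num_replicas
validation_steps = -(-num_val_examples // global_batch)  # ceiling division
print(validation_steps)  # 16
```

The computed value would then be passed as model.fit(..., validation_steps=validation_steps), which is the manual setting the reporter found to work.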
tensorflowtensorflow
documentation tf 1 14 miss documentation for batch dim in tf gather
Bug
url(s) with the issue:
description of issue (what needs changing):
in tf 1.14, tf.batch_gather is marked as deprecated and the keyword batch_dims has been added to tf.gather to handle the batched version. however, the documentation of tf.gather has only been updated with the type of batch_dims, not with how to use it or what it does. the old tf.batch_gather function was documented in tf 1.13. tf.gather is a bit more complex than the old tf.batch_gather, so maybe the documentation of the underlying _batch_gather function (L3514) could be used. these two existing pieces of documentation could be used to complete the existing documentation of tf.gather.
correct links: yes
parameters defined: yes
returns defined: yes
raises listed and defined: yes
usage example: not for using batch_dims
submit a pull request: no
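For reference, the semantics the missing docs should describe: with batch_dims=1, tf.gather behaves like the old tf.batch_gather, i.e. each batch row of the indices indexes the matching row of the params (output[b, i] = params[b, indices[b, i]]). A NumPy sketch of that semantics (not TF's actual implementation):

```python
import numpy as np

def gather_batch_dims1(params, indices):
    # semantics of tf.gather(params, indices, batch_dims=1):
    # row b of `indices` gathers from row b of `params`
    return np.stack([p[i] for p, i in zip(params, indices)])

params = np.array([[10, 11, 12],
                   [20, 21, 22]])
indices = np.array([[2, 0],
                    [1, 1]])
print(gather_batch_dims1(params, indices))
# [[12 10]
#  [21 21]]
```

A plain tf.gather (batch_dims=0) would instead index the first axis of params as a whole, which is exactly the distinction the current docs fail to spell out.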
tensorflowtensorflow
rpi3 g 4 8 do not exist on debian buster june 2019
Bug
there is no way I am going to fill out this template. update your docs, please.
tensorflowtensorflow
documentation in tf 1 13 1 tf keras experimental export do not exist despite be document
Bug
url(s) with the issue:
description of issue (what needs changing):
ok, so let's say that for some arbitrary reason out of your control (*cough* sagemaker *cough*) you are pegged to tensorflow 1.13.1. the cool new tf.keras.experimental.export feature looks a lot easier than building all that stuff yourself, so you go to try and use it. unfortunately, you get hit with the following error:

    Traceback (most recent call last):
      File "documented_example.py", line 10, in <module>
        saved_to_path = tf.keras.experimental.export(
    AttributeError: 'module' object has no attribute 'export'

correct links: it is not. the part that is defined in tensorflow/python/keras/saving/saved_model.py links to a page which gets a 404.
parameters defined: probably
returns defined: very possibly
raises listed and defined: yep
usage example: yes; in fact, it doesn't work. for posterity, here's that usage example copy-pasted into a github gist.
submit a pull request: nope
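A defensive pattern when an API is documented but may not have shipped in the pinned release is to gate on the version string before calling it. This sketch assumes, per the reporter's 1.13.1 AttributeError, that the keras export entry point only became usable around 1.14 (the helper name is hypothetical):

```python
def has_keras_export(tf_version):
    """Hypothetical gate: treat the keras export entry point as available
    only from 1.14 onwards, per the reporter's 1.13.1 AttributeError."""
    major, minor = (int(part) for part in tf_version.split(".")[:2])
    return (major, minor) >= (1, 14)

print(has_keras_export("1.13.1"))  # False
print(has_keras_export("1.14.0"))  # True
```

At runtime the equivalent check is simply `hasattr(tf.keras.experimental, "export")`, which avoids hard-coding any version boundary at all.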
tensorflowtensorflow
r1 13 1 importerror for tf keras application imagenet util preprocess input
Bug
system information
os platform and distribution (e.g., linux ubuntu 16.04): mac os mojave 10.14.4
tensorflow installed from (source or binary): binary (pip)
tensorflow version (use command below): 1.13.1
python version: 3.6.7
cuda/cudnn version: no gpu
gpu model and memory: no gpu

describe the current behavior
none of the following imports work on the current version:

```python
from tensorflow.keras.applications import imagenet_utils
from tensorflow.keras.applications.imagenet_utils import preprocess_input
from tensorflow.keras.applications import preprocess_input
```

describe the expected behavior
the aforementioned imports should work. I reviewed the source code for branch r1.13 and examined the file tensorflow/python/keras/applications/__init__.py (L74-L86); preprocess_input appears to be missing.

code to reproduce the issue
provided above
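As a stopgap while the import is broken, the default ('caffe') behavior of imagenet_utils.preprocess_input is simple enough to reimplement: flip RGB to BGR and subtract the ImageNet channel means. A NumPy sketch that mirrors only the documented default mode (not the 'tf' or 'torch' modes):

```python
import numpy as np

_BGR_MEANS = np.array([103.939, 116.779, 123.68])

def preprocess_input_caffe(x):
    """Stand-in for imagenet_utils.preprocess_input default ('caffe') mode:
    RGB -> BGR channel flip, then per-channel mean subtraction, no scaling."""
    x = np.asarray(x, dtype="float64")[..., ::-1]  # RGB -> BGR
    return x - _BGR_MEANS

img = np.zeros((1, 2, 2, 3))
print(preprocess_input_caffe(img)[0, 0, 0])  # each channel is minus its BGR mean
```

An alternative stopgap, untested here, is importing from the underlying keras_applications package directly if it is installed.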
tensorflowtensorflow
more spurious deprecation warning
Bug
similar to #27897

system information
os platform and distribution: linux 4.14.79 x86_64 with ubuntu 18.04 bionic
tensorflow installed from: pip install tensorflow==2.0.0-beta1
tensorflow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
python version: 3.6.8
this is happening in colab (sandbox.google.com)

describe the current behavior
when using new apis that replace old apis, you get deprecation warnings as if you were still using the old api.

describe the expected behavior
if I use the new apis, I should not get deprecation warnings.

code to reproduce the issue

```python
from __future__ import absolute_import, division, print_function, unicode_literals

import functools
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

TRAIN_DATA_URL = ...  # (url not preserved in this copy of the report)
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)

LABEL_COLUMN = 'survived'
LABELS = [0, 1]

def get_dataset(file_path):
    dataset = tf.data.experimental.make_csv_dataset(
        file_path,
        batch_size=12,  # artificially small to make examples easier to show
        label_name=LABEL_COLUMN,
        na_value="?",
        num_epochs=1,
        ignore_errors=True)
    return dataset

raw_train_data = get_dataset(train_file_path)
```

output:

    WARNING: Logging before flag parsing goes to stderr.
    W0711 17:34:31.453707 140627566475136 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/experimental/ops/readers.py:498: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.

```python
CATEGORIES = {
    'sex': ['male', 'female'],
    'class': ['First', 'Second', 'Third'],
    'deck': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
    'embark_town': ['Cherbourg', 'Southhampton', 'Queenstown'],
    'alone': ['y', 'n']
}

categorical_columns = []
for feature, vocab in CATEGORIES.items():
    cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
        key=feature, vocabulary_list=vocab)
    categorical_columns.append(tf.feature_column.indicator_column(cat_col))

MEANS = {
    'age': 29.631308,
    'n_siblings_spouses': 0.545455,
    'parch': 0.379585,
    'fare': 34.385399
}

def process_continuous_data(mean, data):
    # normalize data
    data = tf.cast(data, tf.float32) * 1 / (2 * mean)
    return tf.reshape(data, [-1, 1])

numerical_columns = []
for feature in MEANS.keys():
    num_col = tf.feature_column.numeric_column(
        feature,
        normalizer_fn=functools.partial(process_continuous_data, MEANS[feature]))
    numerical_columns.append(num_col)

preprocessing_layer = tf.keras.layers.DenseFeatures(
    categorical_columns + numerical_columns)

def get_model(hidden_units=[100, 100]):
    model = tf.keras.Sequential([preprocessing_layer])
    for units in hidden_units:
        model.add(tf.keras.layers.Dense(units, activation='relu'))
    return model

train_data = raw_train_data.shuffle(500)
model = get_model()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_data, epochs=20)
```

output:

    Epoch 1/20
    W0711 17:34:32.313002 140627566475136 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/feature_column/feature_column_v2.py:2655: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Use tf.where in 2.0, which has the same broadcast rule as np.where
    W0711 17:34:32.347570 140627566475136 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/feature_column/feature_column_v2.py:4215: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
    Instructions for updating:
    The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
    W0711 17:34:32.350716 140627566475136 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/feature_column/feature_column_v2.py:4270: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
    Instructions for updating:
    The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
tensorflowtensorflow
very bad performance use gradient tape
Bug
system information
have I written custom code: yes
os platform and distribution: ubuntu 18.04.2
tensorflow installed from (source or binary): binary (pip)
tensorflow version (use command below): 2.0.0-beta1
python version: 3.6.8
cuda/cudnn version: 10.0 / 7
gpu model and memory: tesla k80

I'm trying to learn tensorflow 2.0, so I built a toy model and trained it using keras' fit method. everything works well, but when I try to implement the training loop from scratch, training happens very, very slowly: keras' fit method trains the model in 1 min 41 sec, while the training code I've written takes more than 8 min. below is my model definition:

```python
model = tf.keras.Sequential()
model.add(layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu',
                        input_shape=(28, 28, 1)))
model.add(layers.AveragePooling2D())
model.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu'))
model.add(layers.AveragePooling2D())
model.add(layers.Flatten())
model.add(layers.Dense(units=120, activation='relu'))
model.add(layers.Dense(units=84, activation='relu'))
model.add(layers.Dense(units=10, activation='softmax'))
```

below I'm defining the loss, optimizer and accuracy:

```python
optimizer = tf.keras.optimizers.Adam()
objective = tf.keras.losses.SparseCategoricalCrossentropy()
metric = tf.keras.metrics.SparseCategoricalAccuracy()
```

and below is my training loop:

```python
%%time
with tf.device('/gpu:0'):
    for epoch in range(20):
        cumulative_loss = 0.0
        metric.reset_states()
        for images, labels in dataset:
            with tf.GradientTape() as tape:
                predictions = model(images, training=True)
                loss = objective(labels, predictions)
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
            cumulative_loss += loss
            metric.update_state(labels, predictions)
        print('epoch: {} loss: {} accuracy: {}'.format(
            epoch, cumulative_loss.numpy() / (batches + 1), metric.result()))
```

I'm running my notebook in google colab.
tensorflowtensorflow
tpu have xla compilation issue on tf 1 14
Bug
I be get an issue with use xla on the cloud tpu on tensorflow version 1 14 system information use google s cloud tpu with tensorflow 1 14 v1 14 0 rc1 22 gaf24dc91b5 1 14 0 tf env txt system info sanity check log message remove full tf env txt attach above check python python version 3 5 3 python branch python build version default sep 27 2018 17 25 39 python compiler version gcc 6 3 0 20170516 python implementation cpython check os platform os linux os kernel version 1 smp sit jun 22 23 33 41 pdt 2019 os release version 4 14 111 os platform linux 4 14 111 x86 64 with debian 9 9 linux distribution debian 9 9 linux os distribution debian 9 9 mac version uname uname result system linux node cs 6000 devshell vm cb1acb17 794a 49c5 8346 cd612beb1d0d release 4 14 111 version 1 smp sit jun 22 23 33 41 pdt 2019 machine x86 64 processor architecture 64bit machine x86 64 be we in docker no compiler c debian 6 3 0 18 deb9u1 6 3 0 20170516 copyright c 2016 free software foundation inc this be free software see the source for copy condition there be no warranty not even for merchantability or fitness for a particular purpose check pip numpy 1 16 4 protobuf 3 8 0 tensorflow 1 14 0 tensorflow estimator 1 14 0 check for virtualenv false tensorflow import tf version version 1 14 0 tf version git version v1 14 0 rc1 22 gaf24dc91b5 tf version compiler version 4 8 5 sanity check edit out env ld library path be unset dyld library path be unset nvidia smi tf env collect sh line 147 nvidia smi command not find cuda lib tensorflow instal from info name tensorflow version 1 14 0 summary tensorflow be an open source machine learning framework for everyone home page author email license apache 2 0 location usr local lib python3 5 dist package python version major minor micro releaselevel serial 3 5 3 final 0 bazel version build label 0 27 1 build time tue jul 2 17 49 35 2019 1562089775 build timestamp 1562089775 build timestamp as int 1562089775 describe the current behavior when I run my 
model which be a partial implementation of transformer use xla I get an error message the error message be the follow warn log before flag parsing go to stderr w0710 13 59 32 655435 139705705027328 deprecation wrapper py 119 from issue py 24 the name tf data iterator be deprecate please use tf compat v1 datum iterator instead w0710 13 59 32 655846 139705705027328 deprecation py 323 from issue py 25 datasetv1 output type from tensorflow python data op dataset op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output type dataset w0710 13 59 32 655990 139705705027328 deprecation py 323 from issue py 26 datasetv1 output shape from tensorflow python data op dataset op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output shape dataset w0710 13 59 32 663649 139705705027328 deprecation wrapper py 119 from issue py 68 the name tf get variable be deprecate please use tf compat v1 get variable instead w0710 13 59 32 707169 139705705027328 deprecation wrapper py 119 from issue py 85 the name tf rsqrt be deprecate please use tf math rsqrt instead w0710 13 59 32 714751 139705705027328 deprecation py 506 from usr local lib python3 5 dist package tensorflow python op init op py 1251 call variancescale init from tensorflow python op init op with dtype be deprecate and will be remove in a future version instruction for update call initializer instance with the dtype argument instead of pass it to the constructor w0710 13 59 33 743603 139705705027328 deprecation wrapper py 119 from issue py 205 the name tf train gradientdescentoptimizer be deprecate please use tf compat v1 train gradientdescentoptimizer instead w0710 13 59 35 527529 139705705027328 deprecation wrapper py 119 from issue py 206 the name tf train get or create global step be deprecate please use tf compat v1 train get or create global step instead w0710 13 59 35 536509 139705705027328 deprecation wrapper py 119 
From issue.py:208: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
W0710 13:59:35.547362 139705705027328 deprecation_wrapper.py:119] From issue.py:30: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2019-07-10 13:59:35.547951: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-10 13:59:35.799823: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz
2019-07-10 13:59:35.800045: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608f773ae90 executing computations on platform Host. Devices:
2019-07-10 13:59:35.800062: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0)
W0710 13:59:35.811006 139705705027328 deprecation_wrapper.py:119] From issue.py:31: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.
2019-07-10 13:59:36.203915: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2019-07-10 13:59:36.572778: W tensorflow/core/framework/allocator.cc:107] Allocation of 67141632 exceeds 10% of system memory.
2019-07-10 13:59:36.861743: W tensorflow/core/framework/allocator.cc:107] Allocation of 67141632 exceeds 10% of system memory.
2019-07-10 13:59:37.125698: W tensorflow/core/framework/allocator.cc:107] Allocation of 67141632 exceeds 10% of system memory.
2019-07-10 13:59:37.551537: W tensorflow/core/framework/allocator.cc:107] Allocation of 67141632 exceeds 10% of system memory.
W0710 13:59:39.391257 139705705027328 deprecation.py:323] From /usr/local/lib/python3.5/dist-packages/tensorflow/python/data/ops/iterator_ops.py:348: Iterator.output_types (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.data.get_output_types(iterator).
W0710 13:59:39.444635 139705705027328 deprecation.py:323] From /usr/local/lib/python3.5/dist-packages/tensorflow/python/data/ops/iterator_ops.py:349: Iterator.output_shapes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.data.get_output_shapes(iterator).
W0710 13:59:39.445140 139705705027328 deprecation.py:323] From /usr/local/lib/python3.5/dist-packages/tensorflow/python/data/ops/iterator_ops.py:351: Iterator.output_classes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.data.get_output_classes(iterator).
2019-07-10 13:59:44.506360: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at xla_ops.cc:343 : Invalid argument: Detected unsupported operations when trying to compile graph cluster_13546192731870215987[] on XLA_CPU_JIT: TemporaryVariable (No registered 'TemporaryVariable' OpKernel for XLA_CPU_JIT devices compatible with node {{node gradients/AddN_13/tmp_var}}){{node gradients/AddN_13/tmp_var}}
This error might be occurring with the use of xla.compile. If it is not necessary that every Op be compiled with XLA, an alternative is to use auto_jit with OptimizerOptions.global_jit_level = ON_2 or the environment variable TF_XLA_FLAGS=tf_xla_auto_jit=2, which will attempt to use XLA to compile as much of the graph as the compiler is able to.
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/
session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when trying to compile graph cluster_13546192731870215987[] on XLA_CPU_JIT: TemporaryVariable (No registered 'TemporaryVariable' OpKernel for XLA_CPU_JIT devices compatible with node {{node gradients/AddN_13/tmp_var}}){{node gradients/AddN_13/tmp_var}}
This error might be occurring with the use of xla.compile. If it is not necessary that every Op be compiled with XLA, an alternative is to use auto_jit with OptimizerOptions.global_jit_level = ON_2 or the environment variable TF_XLA_FLAGS=tf_xla_auto_jit=2, which will attempt to use XLA to compile as much of the graph as the compiler is able to.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "issue.py", line 221, in <module>
    sys.exit(main(sys.argv))
  File "issue.py", line 217, in main
    hlo = get_hlo(transformer_model_fn, transformer_input_fn)
  File "issue.py", line 33, in get_hlo
    sess.run(loss)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when trying to compile graph cluster_13546192731870215987[] on XLA_CPU_JIT: TemporaryVariable (No registered 'TemporaryVariable' OpKernel for XLA_CPU_JIT devices compatible with node {{node gradients/AddN_13/tmp_var}}){{node gradients/AddN_13/tmp_var}}
This error might be occurring with the use of xla.compile. If it is not necessary that every Op be compiled with XLA, an alternative is to use auto_jit with OptimizerOptions.global_jit_level = ON_2 or the environment variable TF_XLA_FLAGS=tf_xla_auto_jit=2, which will attempt to use XLA to compile as much of the graph as the compiler is able to.

Describe the expected behavior
I expect there to be no error messages.

Code to reproduce the issue
Reproduce with: python3 issue.py (issue.py.zip attached). issue.py is the following (also zipped above):

import sys
import tensorflow as tf
import numpy as np
from tensorflow.contrib.compiler.xla import compile

hidden_size = 2048
filter_size = 8196
num_heads = 32
d_k = hidden_size // num_heads
d_k_root = d_k ** 0.5
word_len = 512

def get_hlo(model_fn, input_fn):
    def xla_model_fn(features, labels):
        spec = model_fn(features, labels, mode='train', params=None)
        with tf.control_dependencies([spec.train_op]):
            return tf.identity(spec.loss, name=spec.loss.op.name)
    train_ds = input_fn().repeat()
    iterator = tf.data.Iterator.from_structure(train_ds.output_types,
                                               train_ds.output_shapes)
    (loss,) = compile(xla_model_fn, inputs=iterator.get_next())
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.make_initializer(train_ds))
        sess.run(loss)

def transformer_input_fn():
    feat_shape = (1, word_len, hidden_size)
    label_shape = (1, 1, 1, word_len)
    model_input = tf.image.convert_image_dtype(
        tf.reshape(tf.constant(np.random.uniform(size=feat_shape)),
                   feat_shape), tf.float32)
    model_expected = tf.image.convert_image_dtype(
        tf.reshape(tf.constant(np.random.uniform(size=label_shape)),
                   label_shape), tf.float32)
    dataset = (model_input, model_expected)
    dataset = tf.data.Dataset.from_tensor_slices(dataset)
    return dataset

def transformer_model_fn(features, labels, mode, params):
    layer_norm_scale = tf.get_variable(
        'layer_norm_scale', [hidden_size], initializer=tf.ones_initializer())
    layer_norm_bias = tf.get_variable(
        'layer_norm_bias', [hidden_size], initializer=tf.zeros_initializer())

    def layer_normalization(net):
        mean = tf.reduce_mean(net, axis=[-1], keepdims=True)
        mean = tf.multiply(mean, -1)
        diff = tf.add(net, mean)
        var = tf.reduce_mean(tf.square(diff), axis=[-1], keepdims=True)
        var = tf.add(var, 1e-6)
        net = tf.multiply(diff, tf.rsqrt(var))
        return tf.add(tf.multiply(net, layer_norm_scale), layer_norm_bias)

    def transformer_fc(net, size, use_bias=True):
        return tf.keras.layers.Dense(size, activation=None,
                                     use_bias=use_bias)(net)

    def feed_forward(net):
        net = transformer_fc(net, filter_size)
        net = tf.nn.relu(net)
        net = tf.nn.dropout(net, rate=0.1)
        net = transformer_fc(net, hidden_size)
        return net

    def multi_head_attention(q, k, v, bias):
        def split_heads(net):
            net = tf.reshape(net, (-1, word_len, num_heads, d_k))
            net = tf.transpose(net, (0, 2, 1, 3))
            return net

        def combine_heads(net):
            net = tf.transpose(net, (0, 2, 1, 3))
            net = tf.reshape(net, (-1, word_len, hidden_size))
            return net

        q = transformer_fc(q, hidden_size, use_bias=False)
        k = transformer_fc(k, hidden_size, use_bias=False)
        v = transformer_fc(v, hidden_size, use_bias=False)
        q = split_heads(q)
        k = split_heads(k)
        v = split_heads(v)
        q = tf.multiply(q, 1 / d_k_root)
        x = tf.matmul(q, k, transpose_b=True)
        x = tf.add(x, bias)
        x = tf.nn.softmax(x)
        x = tf.nn.dropout(x, rate=0.1)
        x = tf.matmul(x, v)
        x = combine_heads(x)
        x = transformer_fc(x, hidden_size, use_bias=False)
        return x

    def encoder_block(net, enc_bias):
        residual = layer_normalization(net)
        residual = multi_head_attention(residual, residual, residual, enc_bias)
        residual = tf.nn.dropout(residual, rate=0.1)
        net = tf.add(residual, net)
        residual = layer_normalization(net)
        residual = feed_forward(residual)
        residual = tf.nn.dropout(residual, rate=0.1)
        net = tf.add(residual, net)
        return layer_normalization(net)

    def encode(enc_input, enc_bias):
        net = tf.nn.dropout(enc_input, rate=0.1)
        net = encoder_block(net, enc_bias)
        return net

    def decoder_block(net, enc_output, dec_bias, enc_dec_bias):
        residual = layer_normalization(net)
        residual = multi_head_attention(residual, residual, residual, dec_bias)
        residual = tf.nn.dropout(residual, rate=0.1)
        net = tf.add(residual, net)
        residual = layer_normalization(net)
        residual = multi_head_attention(residual, enc_output, enc_output,
                                        enc_dec_bias)
        residual = tf.nn.dropout(residual, rate=0.1)
        net = tf.add(residual, net)
        residual = layer_normalization(net)
        residual = feed_forward(residual)
        residual = tf.nn.dropout(residual, rate=0.1)
        net = tf.add(residual, net)
        return layer_normalization(net)

    def decode(dec_input, enc_output, dec_bias, enc_dec_bias):
        net = tf.nn.dropout(dec_input, rate=0.1)
        net = decoder_block(net, enc_output, dec_bias, enc_dec_bias)
        return net

    labels = tf.stop_gradient(labels)
    enc_input = features
    dec_input = features
    enc_bias = labels
    enc_dec_bias = labels
    dec_bias = labels
    target = features
    enc_output = encode(enc_input, enc_bias)
    dec_output = decode(dec_input, enc_output, dec_bias, enc_dec_bias)
    net = dec_output
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=target, logits=net))
    train_step = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)
    global_step = tf.train.get_or_create_global_step()
    update_global_step = tf.assign(global_step, global_step + 1,
                                   name='update_global_step')
    return tf.estimator.EstimatorSpec(
        mode, loss=loss, train_op=tf.group(train_step, update_global_step))

def main(args):
    hlo = get_hlo(transformer_model_fn, transformer_input_fn)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Other info / logs
The issue can actually be avoided if the issue.py file is modified slightly, changing

hidden_size = 2048
filter_size = 8196
num_heads = 32

to

hidden_size = 2048 // 4
filter_size = 8196 // 4
num_heads = 32 // 4

on lines 7-9. Note that this fix worked on the Google Cloud setup used here and may not work on other machines. The issue was also reproduced on multiple TF versions:

tensorflow/tensorflow:1.13.0rc0-py3: was able to reproduce the issue
tensorflow/tensorflow:1.13.0rc1-py3: was able to reproduce the issue
tensorflow/tensorflow:1.13.0rc2-py3: was able to reproduce the issue
tensorflow/tensorflow:1.13.1-py3: was able to reproduce the issue
tensorflow/tensorflow:1.14.0-py3: was able to reproduce the issue
tensorflow/tensorflow:2.0.0a0-py3: "No module named tensorflow.contrib" import error

All of these TF versions were run on vanilla TF Docker images. The fix above, where I divide the numbers by 4, also fixes the issue on all those versions as well. TF versions 2 and higher were not tested because of significant API changes; for this I would need to spend some time creating a new Python file with new API calls to get the issue reproduced.

[Image: issue reproduced using issue.py]
[Image: issue avoided by dividing variables by 4]

Expected behavior:

WARNING: Logging before flag parsing goes to stderr.
W0710 15:25:47.346601
140584685344512 deprecation wrapper py 119 from issue py 24 the name tf data iterator be deprecate please use tf compat v1 datum iterator instead w0710 15 25 47 347033 140584685344512 deprecation py 323 from issue py 25 datasetv1 output type from tensorflow python data op dataset op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output type dataset w0710 15 25 47 347194 140584685344512 deprecation py 323 from issue py 26 datasetv1 output shape from tensorflow python data op dataset op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output shape dataset w0710 15 25 47 356488 140584685344512 deprecation wrapper py 119 from issue py 68 the name tf get variable be deprecate please use tf compat v1 get variable instead w0710 15 25 47 417571 140584685344512 deprecation wrapper py 119 from issue py 85 the name tf rsqrt be deprecate please use tf math rsqrt instead w0710 15 25 47 425961 140584685344512 deprecation py 506 from usr local lib python3 5 dist package tensorflow python op init op py 1251 call variancescale init from tensorflow python op init op with dtype be deprecate and will be remove in a future version instruction for update call initializer instance with the dtype argument instead of pass it to the constructor w0710 15 25 48 473849 140584685344512 deprecation wrapper py 119 from issue py 205 the name tf train gradientdescentoptimizer be deprecate please use tf compat v1 train gradientdescentoptimizer instead w0710 15 25 49 911786 140584685344512 deprecation wrapper py 119 from issue py 206 the name tf train get or create global step be deprecate please use tf compat v1 train get or create global step instead w0710 15 25 49 917340 140584685344512 deprecation wrapper py 119 from issue py 208 the name tf assign be deprecate please use tf compat v1 assign instead w0710 15 25 49 928683 140584685344512 deprecation wrapper py 119 from issue py 30 the name 
tf session be deprecate please use tf compat v1 session instead 2019 07 10 15 25 49 929245 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 07 10 15 25 49 939914 I tensorflow core platform profile util cpu util cc 94 cpu frequency 2200000000 hz 2019 07 10 15 25 49 940074 I tensorflow compiler xla service service cc 168 xla service 0x5563e2d4be50 execute computation on platform host device 2019 07 10 15 25 49 940087 I tensorflow compiler xla service service cc 175 streamexecutor device 0 w0710 15 25 49 940532 140584685344512 deprecation wrapper py 119 from issue py 31 the name tf global variable initializer be deprecate please use tf compat v1 global variable initializer instead 2019 07 10 15 25 50 432006 w tensorflow compiler jit mark for compilation pass cc 1412 one time warn not use xla cpu for cluster because envvar tf xla flag tf xla cpu global jit be not set if you want xla cpu either set that envvar or use experimental jit scope to enable xla cpu to confirm that xla be active pass vmodule xla compilation cache 1 as a proper command line flag not via tf xla flag or set the envvar xla flag xla hlo profile w0710 15 25 50 529966 140584685344512 deprecation py 323 from usr local lib python3 5 dist package tensorflow python data op iterator op py 348 iterator output type from tensorflow python data op iterator op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output type iterator w0710 15 25 50 530351 140584685344512 deprecation py 323 from usr local lib python3 5 dist package tensorflow python data op iterator op py 349 iterator output shape from tensorflow python data op iterator op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output shape iterator w0710 15 25 50 530480 140584685344512 deprecation py 323 from usr local lib python3 5 dist package tensorflow 
python data op iterator op py 351 iterator output class from tensorflow python data op iterator op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output class iterator 2019 07 10 15 26 03 872235 w tensorflow core framework allocator cc 107 allocation of 123777024 exceed 10 of system memory
tensorflowtensorflow
TensorFlowLiteSwift fails pod validation
Bug
System information
OS platform and distribution: macOS 10.14.5
TensorFlowLiteSwift pod version: 0.2.0 (commit 477447155b)
CocoaPods version: 1.7.3
Xcode version: 10.2.1

Describe the problem
When running the command pod spec lint, tests fail because the i386 architecture cannot be found. This happens not only on my machine but also on our private pods system. Failing validation means we cannot use the pod for our app without turning off the tests.

Provide the exact sequence of commands / steps that you executed before running into the problem
1. cd to the TensorFlowLiteSwift directory
2. Run pod spec lint --verbose

Any other info / logs
Here's the output: I was able to resolve this issue by upping the iOS deployment target in the podspec from 9.0 to 11.0. I believe pod spec lint wants to compile a 32-bit version of TensorFlowLiteC for iOS 9 and 10, but the TensorFlowLiteC binary only contains x86_64 and no i386. A more robust solution might be to compile for i386, or it may be possible to configure the podspec to only execute tests on ARM and x86_64.
tensorflowtensorflow
decode_wav sample_rate output cannot be passed to tf.signal.linear_to_mel_weight_matrix
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): combined the output of decode_wav with the sample in "signal: MFCCs from log_mel_spectrograms"
OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
TensorFlow installed from (source or binary): conda binary
TensorFlow version (use command below): 2.0.0-dev20190702, git version: unknown
Python version: 3.6.7
Bazel version (if compiling from source): no
GCC/compiler version (if compiling from source): no
CUDA/cuDNN version:
GPU model and memory: Surface Book NVIDIA GPU

Describe the current behavior
The output of decode_wav is a tuple of (wav data, sample_rate). sample_rate is int32, but linear_to_mel_weight_matrix expects a float32 sample rate. If the sample rate is cast using tf.cast(sample_rate, float32), then a TypeError is thrown with the message: "Using a tf.Tensor as a Python bool is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor."

Describe the expected behavior
The sample_rate output of decode_wav can be used as input to linear_to_mel_weight_matrix.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem)

import tensorflow as tf

def load_and_mel_file(path_tensor):
    pcm, sample_rate = tf.audio.decode_wav(path_tensor)
    # Mismatch in types between the output of decode_wav and the input
    # to linear_to_mel_weight_matrix.
    sr_f = tf.cast(sample_rate, tf.float32)
    print(pcm, sample_rate, sr_f)
    # A 1024-point STFT with frames of 64 ms and 75% overlap.
    stfts = tf.signal.stft(pcm, frame_length=1024, frame_step=256,
                           fft_length=1024)
    spectrograms = tf.abs(stfts)
    # Warp the linear-scale spectrograms into the mel-scale.
    num_spectrogram_bins = stfts.shape[-1]
    lower_edge_hertz, upper_edge_hertz, num_mel_bins = 80.0, 7600.0, 80
    linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins, num_spectrogram_bins, sr_f,
        lower_edge_hertz, upper_edge_hertz)
    mel_spectrograms = tf.tensordot(
        spectrograms, linear_to_mel_weight_matrix, 1)
    mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(
        linear_to_mel_weight_matrix.shape[-1:]))
    # Compute a stabilized log to get log-magnitude mel-scale spectrograms.
    log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6)
    print(log_mel_spectrograms)
    return log_mel_spectrograms

path_ds = tf.data.Dataset.list_files('*.wav')
mel_ds = path_ds.map(load_and_mel_file)

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
      1 # build datasets
      2 train_ds = build_data_pairs_from_dirs(train_dirs)
      3 test_ds = build_data_pairs_from_dirs(test_dirs)

<ipython-input> in build_data_pairs_from_dirs(source_dirs)
     54
     55     # convert to mel
     56     clean_mel_ds = clean_path_ds.map(load_and_mel_file, num_parallel_calls=AUTOTUNE)
     57     noisy_mel_ds = noisy_path_ds.map(load_and_mel_file, num_parallel_calls=AUTOTUNE)
     58

c:\users\benhe\.conda\envs\homl2\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in map(self, map_func, num_parallel_calls)
   1887       return DatasetV1Adapter(
   1888           ParallelMapDataset(
   1889               self, map_func, num_parallel_calls, preserve_cardinality=False))
   1890
   1891   @deprecation.deprecated(None, "Use `tf.data.Dataset.map()

c:\users\benhe\.conda\envs\homl2\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, input_dataset, map_func, num_parallel_calls, use_inter_op_parallelism, preserve_cardinality, use_legacy_function)
   3333         self._transformation_name(),
   3334         dataset=input_dataset,
   3335         use_legacy_function=use_legacy_function)
   3336     self._num_parallel_calls = ops.convert_to_tensor(
   3337         num_parallel_calls, dtype=dtypes.int32, name="num_parallel_calls")

c:\users\benhe\.conda\envs\homl2\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_
kwargs 2677 resource tracker tracking resourcetracker 2678 with track resource tracker scope resource tracker 2679 self function wrapper fn get concrete function internal 2680 if add to graph 2681 self function add to graph op get default graph c user benhe conda envs homl2 lib site package tensorflow core python eager function py in get concrete function internal self args kwargs 1418 bypass error check when get a graph function 1419 graph function self get concrete function internal garbage collect 1420 args kwargs 1421 we re return this concrete function to someone and they may keep a 1422 reference to the funcgraph without keep a reference to the c user benhe conda envs homl2 lib site package tensorflow core python eager function py in get concrete function internal garbage collect self args kwargs 1412 if self input signature 1413 args kwargs none none 1414 graph function self maybe define function args kwargs 1415 return graph function 1416 c user benhe conda envs homl2 lib site package tensorflow core python eager function py in maybe define function self args kwargs 1716 graph function self function cache primary get cache key none 1717 if graph function be none 1718 graph function self create graph function args kwargs 1719 self function cache primary cache key graph function 1720 return graph function args kwargs c user benhe conda envs homl2 lib site package tensorflow core python eager function py in create graph function self args kwargs override flat arg shape 1602 arg name arg name 1603 override flat arg shape override flat arg shape 1604 capture by value self capture by value 1605 self function attribute 1606 c user benhe conda envs homl2 lib site package tensorflow core python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 784 convert func 785 786 func output python func 
func args func kwargs 787 788 invariant func output contain only tensor compositetensor c user benhe conda envs homl2 lib site package tensorflow core python data op dataset op py in wrapper fn args 2671 attribute defun kwargs 2672 def wrapper fn args pylint disable miss docstre 2673 ret wrapper helper args 2674 ret structure to tensor list self output structure ret 2675 return op convert to tensor t for t in ret c user benhe conda envs homl2 lib site package tensorflow core python data op dataset op py in wrapper helper args 2616 nest args nest args 2617 2618 ret autograph tf convert func ag ctx nest args 2619 if func return a list of tensor nest flatten and 2620 op convert to tensor would conspire to attempt to stack c user benhe conda envs homl2 lib site package tensorflow core python autograph impl api py in wrapper args kwargs 220 except exception as e pylint disable broad except 221 if hasattr e ag error metadata 222 raise e ag error metadata to exception type e 223 else 224 raise typeerror in convert code 27 load and mel file linear to mel weight matrix tf signal linear to mel weight matrix c user benhe conda envs homl2 lib site package tensorflow core python ops signal mel op py 155 linear to mel weight matrix low edge hertz upper edge hertz dtype c user benhe conda envs homl2 lib site package tensorflow core python ops signal mel ops py 74 validate argument if sample rate 0 0 c user benhe conda envs homl2 lib site package tensorflow core python framework op py 692 bool raise typeerror use a tf tensor as a python bool be not allow typeerror use a tf tensor as a python bool be not allow use if t be not none instead of if t to test if a tensor be define and use tensorflow op such as tf cond to execute subgraph condition on the value of a tensor
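The TypeError at the bottom of the traceback comes from a Python-level guard inside mel_ops.py ("if sample_rate <= 0.0"), which cannot evaluate a symbolic tensor as a bool. A minimal pure-Python sketch of that failure mode and of the usual workaround follows; the helper name validate_sample_rate is made up for illustration and is not part of the TensorFlow sources:

```python
def validate_sample_rate(sample_rate):
    # Mirrors the Python-level check inside mel_ops.py
    # ("if sample_rate <= 0.0: raise ValueError(...)").  When sample_rate
    # is a symbolic tensor, the comparison itself raises TypeError
    # ("Using a tf.Tensor as a Python bool is not allowed"), which is why
    # a common workaround is to pass a plain Python float (e.g. 16000.0
    # for 16 kHz WAV files) instead of the tensor returned by decode_wav,
    # whenever the rate is known ahead of time.
    if sample_rate <= 0.0:
        raise ValueError("sample_rate must be positive, got %r" % sample_rate)
    return float(sample_rate)
```

For example, validate_sample_rate(16000) returns 16000.0, while a symbolic tensor argument would fail at the comparison before the function body could do anything useful.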
tensorflowtensorflow
trace wallclock performance discrepancy
Bug
Describe the bug
Running a converted PyTorch model has some curious performance characteristics: running the .pb model directly in TensorFlow is ~2.5 times slower than the ONNX runtime.

onnx runtime took 4.83842
tensorflow took 12.63854

Looking at the trace information shows that work is only being done for about 3.5 seconds, so the question is: what is session.run doing for the other 9 seconds?

[Image: onnx_tf_rnn_perf]

To reproduce

#!/usr/bin/env python
"""Benchmark ONNX model."""
import time
from argparse import ArgumentParser

import numpy as np
import numpy.testing
import onnxruntime as runtime
import tensorflow as tf
from tensorflow.python.client import timeline

def main(args):
    x = np.random.randn(args.chunksize, args.batchsize, 1).astype(np.float32)

    # ONNX runtime
    options = runtime.SessionOptions()
    session = runtime.InferenceSession(args.model + '.onnx', options)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name
    t0 = time.time()
    onnxrt_outs = session.run([output_name], {input_name: x})
    onnxrt_outs = np.array(onnxrt_outs[0])
    t1 = time.time()
    print('onnx runtime took %.5f' % (t1 - t0))

    # TensorFlow
    with tf.compat.v1.Session() as session:
        with tf.io.gfile.GFile(args.model + '.pb', 'rb') as graph:
            g = tf.compat.v1.GraphDef()
            g.ParseFromString(graph.read())
        tf.import_graph_def(g)
        graph = tf.get_default_graph()
        first = graph.get_tensor_by_name('import/input:0')
        last = graph.get_operations()[-1].outputs[0]
        run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
        run_metadata = tf.RunMetadata()
        t0 = time.time()
        tensorflow_outs = session.run(last, feed_dict={first: x},
                                      options=run_options,
                                      run_metadata=run_metadata)
        t1 = time.time()
        tl = timeline.Timeline(run_metadata.step_stats)
        ctf = tl.generate_chrome_trace_format()
        with open('trace_file.json', 'w') as f:
            f.write(ctf)
        print('tensorflow took %.5f' % (t1 - t0))

    numpy.testing.assert_allclose(onnxrt_outs, tensorflow_outs, atol=1e-4)

if __name__ == '__main__':
    parser = ArgumentParser()
    parser.add_argument('model')
    parser.add_argument('--batchsize', type=int, default=1000)
    parser.add_argument('--chunksize', type=int, default=1000)
    main(parser.parse_args())

model file: (attached)
onnx-tensorflow: python-onnx-tensorflow
Python version: 3.5.3
ONNX version: 1.5.0
TensorFlow version: 1.14.0

Additional context
trace file: (attached)
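One caveat worth noting about a benchmark like the one above: the first call to a session also pays one-time costs (graph pruning and optimization, lazy memory allocation), so timing a single run conflates setup with steady-state inference. A small generic timing helper that discards warm-up calls first, sketched here with a made-up name (it is not part of either runtime's API):

```python
import time

def benchmark(fn, warmup=2, runs=10):
    # Run `fn` a few untimed times first so one-time setup costs
    # (graph optimization, lazy allocation, caches) are excluded,
    # then return the mean wall-clock seconds over `runs` timed calls.
    for _ in range(warmup):
        fn()
    t0 = time.time()
    for _ in range(runs):
        fn()
    return (time.time() - t0) / runs
```

With the script above, something like benchmark(lambda: session.run(last, feed_dict={first: x})) versus benchmark(lambda: session.run([output_name], {input_name: x})) would compare the steady-state cost of the two runtimes rather than their first-call cost.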
tensorflowtensorflow
Missing Graph Editor documentation
Bug
URL(s) with the issue:

Description of issue (what needs changing): the Graph Editor API docs (first link above) link to the Graph Editor guide (second link above), which is a 404 error.

Clear description: the broken link to the Graph Editor guide needs to either be fixed, or removed and replaced with some actual comprehensive documentation on using the Graph Editor. At the moment there is no documentation on Graph Editor usage available anywhere.
tensorflowtensorflow
Output of sysconfig.get_link_flags() does not seem to be suitable for Mac
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): Mac High Sierra 10.13.6
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0
Python version: Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 26 2018, 19:50:54)
Bazel version (if compiling from source):
GCC/compiler version (if compiling from source): Apple LLVM version 9.0.0 (clang-900.0.39.2)
CUDA/cuDNN version:
GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
When running the example of creating a custom op (from the link above) on a Mac, the compilation stage (see "Build the op library") fails with the following error message:

ld: library not found for -l:libtensorflow_framework.1.dylib
clang: error: linker command failed with exit code 1 (use -v to see invocation)

This appears to be because the specified linker option -l:libtensorflow_framework.1.dylib, returned by sysconfig.get_link_flags(), is not valid for ld on the Mac. The man page for ld says the following:

-lx   This option tells the linker to search for libx.dylib or libx.a in the library search path. If string x is of the form y.o, then that file is searched for in the same places, but without prepending 'lib' or appending '.a' or '.dylib' to the filename.

I believe that the correct format for this flag is -ltensorflow_framework.1, since the linker will prepend 'lib' and append '.dylib'. A workaround for this is to replace this line:

TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )

with this:

TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()).replace("-l:libtensorflow_framework.1.dylib", "-ltensorflow_framework.1"))') )

Alternatively, there is, I believe, a fix for the underlying cause here.

Describe the expected behavior
The linking process should be performed successfully and produce a library file called zero_out.so.

Code to reproduce the issue
On a Mac, create a file called zero_out.cc with the following code (copied from the guide):

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

REGISTER_OP("ZeroOut")
    .Input("to_zero: int32")
    .Output("zeroed: int32")
    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));
      return Status::OK();
    });

class ZeroOutOp : public OpKernel {
 public:
  explicit ZeroOutOp(OpKernelConstruction* context) : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    // Grab the input tensor
    const Tensor& input_tensor = context->input(0);
    auto input = input_tensor.flat<int32>();

    // Create an output tensor
    Tensor* output_tensor = NULL;
    OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(),
                                                     &output_tensor));
    auto output_flat = output_tensor->flat<int32>();

    // Set all but the first element of the output tensor to 0.
    const int N = input.size();
    for (int i = 1; i < N; i++) {
      output_flat(i) = 0;
    }

    // Preserve the first input value if possible.
    if (N > 0) output_flat(0) = input(0);
  }
};

REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_CPU), ZeroOutOp);

Then run:

TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2

You should see an error like this:

ld: library not found for -l:libtensorflow_framework.1.dylib
clang: error: linker command failed with exit code 1 (use -v to see invocation)
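The string-replace workaround in the report can also be written as a small helper that rewrites any GNU-ld-style "-l:libX.dylib" flag into the form Apple's ld accepts. This is a sketch; the function name fix_mac_link_flags is made up for illustration, and it assumes the flag list looks like the one get_link_flags() returned here:

```python
def fix_mac_link_flags(flags):
    # Apple's ld does not understand GNU ld's "-l:filename" syntax, so
    # rewrite "-l:libX.dylib" into "-lX", letting the linker prepend
    # "lib" and append ".dylib" itself.  Other flags pass through as-is.
    fixed = []
    for flag in flags:
        if flag.startswith("-l:lib") and flag.endswith(".dylib"):
            fixed.append("-l" + flag[len("-l:lib"):-len(".dylib")])
        else:
            fixed.append(flag)
    return fixed
```

For example, ["-L/opt/tf/lib", "-l:libtensorflow_framework.1.dylib"] becomes ["-L/opt/tf/lib", "-ltensorflow_framework.1"], matching the flag form the reporter argues is correct.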
tensorflowtensorflow
Eager execution drastically increases tf.keras model.fit runtime
Bug
System information
Have I written custom code: yes
OS platform and distribution: Linux Mint 19.1
TensorFlow installed from: source
TensorFlow version: v2.0.0-beta1-0-g8e423e3d56
Python version: 3.6.8
Bazel version: 0.26.0
GCC version: 7.4.0
CUDA/cuDNN version: 10.0 / 7.5
GPU model and memory: NVIDIA Quadro P1000, 4 GB GDDR5

Describe the current behavior
My issue regards a performance degradation induced by enabling eager execution, in a context when no eager tensor should be created apart from the model's weights, to which I do not need access. As of now, my installation (compiled from source based on yesterday's status of the r2.0 branch) does not have TF 2.0 behaviour enabled by default; thus I compared the run times for a supervised learning task depending on whether I enable v2 behavior or simply enable resource variables. It turns out the former yields a significant decrease in performance.

On my initial example, which I do not include as it relies on custom data and model layers, the run time went from two minutes to five in eager mode: the first epoch was really slow, while the following ones proved increasingly fast, stabilizing at 11-12 seconds (19 ms/step). By contrast, without eager, all epochs ran at the same speed, for 8-9 seconds (15 ms/step).

I reproduced this issue on an example that uses mock data (i.e. properly-shaped, randomly-generated values, since learning performance is not at stake here) and exclusively layers taken from tensorflow.keras (no custom bits), whose code I present below. As it happens, not enabling eager execution results in a 13m49s runtime, while enabling it increases it to 19m17s, i.e. an increase of about 40 percent.

The task at hand consists in fitting a binary classifier that takes variable-length sequences of vectors as input. For this, I set up a tensorflow.data.Dataset instance which, for simplification, contains random data (in the provided code, based on pre-generated numpy arrays of data) and use the padded_batch method to format my inputs as desired. In the mock example, the model consists of a layer of 100 LSTM cells, with input masking due to length variability and default tanh activation, topped with a single feed-forward unit with sigmoid activation.

Describe the expected behavior
I can understand that eager execution would create a slight overhead; however, here it proves huge, while no mechanism whatsoever requires it. I do not know what can be done about it per se, but if such an overhead is to be expected, I think it would be nice to be able to disable eager in 2.0 (and I mean properly disable it, not use tf.compat instructions that are bound to be wiped out at some point). Alternatively, I believe that it could be great to enable using old-style (not eager) tensors as layer weights, through e.g. a boolean option, so that in the settings when accessing those weights as eager tensors is not required (which I believe is a majority of cases, especially since some Keras methods allow to effectively pull out the weights as numpy arrays), no overhead would be implied by a seemingly useless eager declaration. Of course, I might be missing an existing feature allowing to do so, in which case I would be most glad to be pointed to a way to optimize run times when eager execution is enabled, maybe by enforcing the isolation of the operations in a compiled graph.

Code to reproduce the issue

# coding: utf-8

"""Test script to measure runtime performance on mock data.

Set up the EAGER constant on line 18 to toggle the use of eager
execution, provided it is not enabled by default; otherwise, change
the instructions on lines 31 and 33. Then call python3 on this file
to run it.
"""

import os
import time

import numpy as np
import tensorflow as tf

EAGER = False

def main(eager):
    """Set up and fit a model on mock data, with or without eager execution.

    The model consists of a layer of 100 LSTM cells topped with a single
    dense unit with sigmoid activation. Its fitting is bound not to yield
    actual accuracy improvements, as the data used is purely random, but
    aims at measuring performance as to runtime.
    """
    if eager:
        tf.enable_v2_behavior()
    else:
        tf.enable_resource_variables()
    # Set up the classifier model, using custom embedded units.
    inputs = tf.keras.Input((None, 100), dtype=tf.
float32 length tf keras input dtype tf int64 mask tf sequence mask length output tf keras layers lstm 100 input mask mask output tf keras layer dense 1 activation sigmoid output model tf keras model input length output set up the training and validation datum dataset setup mock dataset train valid dataset take 500 dataset skip 500 fit the model model compile adam binary crossentropy accuracy history model fit train repeat step per epoch 500 epoch 10 validation datum valid repeat validation step 100 def setup mock dataset n batch 600 batch size 32 return a tf datum dataset yield batch of random datum the input datum consist of a couple of tensor one with zero padded sequence of vector of size 100 the other with the true sequence length the target datum consist of sequence wise binary value generate some random mock input and target datum n sample n batch batch size length 1 np random choice 100 size n sample replace true input np random normal size length sum 100 target np random choice 2 size n sample 1 replace true set up a generator yield shuffle training sample def generator yield individual training sample nonlocal input target length n sample start 0 for I in range n sample end start length I yield input start end length I target I start end set up a tensorflow dataset base on the previous output shape tf tensorshape none 100 tf tensorshape tf tensorshape 1 dataset tf datum dataset from generator generator tf float32 tf int64 tf int64 output shape have the dataset output datum batch and return it return dataset padded batch batch size output shape if name main start time clock main eager duration time clock start min sec duration 60 duration 60 print total duration I m I min sec additional log here be the script s print out with eager execution enable epoch 1 10 2019 07 10 15 58 59 009116 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcubla so 10 0 500 500 83 165ms step loss 0 6983 accuracy 0 5009 val loss 0 
6990 val accuracy 0 5034 epoch 2 10 500 500 72 144ms step loss 0 6712 accuracy 0 5904 val loss 0 7125 val accuracy 0 4931 epoch 3 10 500 500 72 143ms step loss 0 6056 accuracy 0 6841 val loss 0 7604 val accuracy 0 4947 epoch 4 10 500 500 72 144ms step loss 0 4445 accuracy 0 8073 val loss 0 9025 val accuracy 0 5031 epoch 5 10 500 500 72 145ms step loss 0 2502 accuracy 0 9169 val loss 1 1817 val accuracy 0 5069 epoch 6 10 500 500 72 144ms step loss 0 1250 accuracy 0 9699 val loss 1 4860 val accuracy 0 5091 epoch 7 10 500 500 72 145ms step loss 0 0724 accuracy 0 9859 val loss 1 6498 val accuracy 0 5047 epoch 8 10 500 500 72 143ms step loss 0 0537 accuracy 0 9891 val loss 1 7936 val accuracy 0 5038 epoch 9 10 500 500 72 143ms step loss 0 0351 accuracy 0 9948 val loss 1 8995 val accuracy 0 5066 epoch 10 10 500 500 72 144ms step loss 0 0297 accuracy 0 9949 val loss 2 0393 val accuracy 0 5041 total duration 19m17 without enable eager execution epoch 1 10 500 500 46 91ms step loss 0 6989 acc 0 5006 val loss 0 6999 val acc 0 4894 epoch 2 10 500 500 45 91ms step loss 0 6708 acc 0 5932 val loss 0 7138 val acc 0 4875 epoch 3 10 500 500 45s 90ms step loss 0 6048 acc 0 6849 val loss 0 7705 val acc 0 4975 epoch 4 10 500 500 45s 90ms step loss 0 4453 acc 0 8104 val loss 0 9408 val acc 0 4950 epoch 5 10 500 500 46 92ms step loss 0 2518 acc 0 9169 val loss 1 2647 val acc 0 5009 epoch 6 10 500 500 46 91ms step loss 0 1206 acc 0 9726 val loss 1 5996 val acc 0 4994 epoch 7 10 500 500 45 91ms step loss 0 0766 acc 0 9839 val loss 1 8580 val acc 0 4975 epoch 8 10 500 500 46 91ms step loss 0 0569 acc 0 9884 val loss 1 9623 val acc 0 4975 epoch 9 10 500 500 46 92ms step loss 0 0476 acc 0 9897 val loss 2 0733 val acc 0 4966 epoch 10 10 500 500 46 91ms step loss 0 0316 acc 0 9946 val loss 2 2490 val acc 0 5044 total duration 13m49
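Not part of the original report, but for reference: the usual TF 2.x way to avoid per-op eager overhead is to trace the per-step computation into a graph with tf.function. A minimal sketch on toy data (the model and names here are illustrative only, not the reporter's LSTM setup):

```python
import numpy as np
import tensorflow as tf

# Toy data and a trivial model standing in for the report's classifier.
x = np.random.normal(size=(256, 8)).astype(np.float32)
y = np.random.choice(2, size=(256, 1)).astype(np.float32)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
model.build((None, 8))  # create weights before tracing
loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.SGD(0.1)


@tf.function  # traces the step into a graph, dropping per-op eager overhead
def train_step(inputs, targets):
    with tf.GradientTape() as tape:
        loss = loss_fn(targets, model(inputs))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss


for _ in range(10):
    loss = train_step(x, y)
print(float(loss))
```

The first call pays a tracing cost (which matches the slow first epoch described above); subsequent calls run the compiled graph.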
tensorflow/tensorflow
Which Bazel version is okay for TensorFlow source install?
Bug
URL(s) with the issue:
Description of issue (what needs changing): It is purely a logical issue. The instructions say: "Install Bazel 0.24.1, the build tool used to compile TensorFlow." But under "Set up Bazel to build C++" they say: "Ensure you installed Bazel 0.23.0 or lower." 0.24.1 > 0.23.0.
tensorflow/tensorflow
init_from_checkpoint performs incorrectly
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from: source
- TensorFlow version: 1.14
- Python version: 2.7
- Bazel version: N/A
- GCC/compiler version: N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: GTX Titan X

Describe the current behavior
I have a variable stored in a ckpt called 'prsr/prior/conv1/weights' and I want to do a map assignment to a variable called 'prsr/prior/conv1/weights_1:0'. When I execute the init_from_checkpoint instruction, it gives the following error:

```
ValueError: Assignment map with scope only name "prsr/prior/conv1" should map to scope only "prsr/prior/conv1/weights". Should be 'scope/': 'other_scope/'.
```

Describe the expected behavior
I think it should correctly assign the variable 'prsr/prior/conv1/weights' from the ckpt into the current variable 'prsr/prior/conv1/weights_1:0', as indicated in the official API documentation:

```python
# Initialize partitioned variables using variable's name
init_from_checkpoint('/tmp/model.ckpt', {'old_scope_2/var3': 'new_scope_2/var3'})
```
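For context, a minimal working sketch of init_from_checkpoint's mapping semantics, using the TF1 compat API (the variable names here are hypothetical, not the reporter's): full-name keys map to a variable (or another full name), while scope-to-scope mappings must end with "/" on both sides.

```python
import os
import tempfile

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

ckpt_path = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# Save a checkpoint containing a variable named "scope/var".
with tf.Graph().as_default():
    with tf.variable_scope("scope"):
        tf.get_variable("var", initializer=np.arange(4.0))
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, ckpt_path)

# In a fresh graph, initialize a differently-scoped variable from it.
with tf.Graph().as_default():
    with tf.variable_scope("new_scope"):
        new_var = tf.get_variable("var", shape=(4,), dtype=tf.float64)
    # Full-name key mapped to a variable object; a scope-to-scope mapping
    # would instead look like {"scope/": "new_scope/"}.
    tf.train.init_from_checkpoint(ckpt_path, {"scope/var": new_var})
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        restored = sess.run(new_var)
print(restored)  # -> [0. 1. 2. 3.]
```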
tensorflow/tensorflow
Colocation error with SparseApply ops and ResourceVariable
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Windows 10
- Mobile device: N/A
- TensorFlow installed from: binary
- TensorFlow version: 1.14.0
- Python version: 3.6.8
- Bazel version: N/A
- GCC/compiler version: N/A
- CUDA/cuDNN version: 10.0 / 7.4.1
- GPU model and memory: GeForce GTX 980 Ti

Describe the current behavior
TensorFlow attempts to colocate ops with incompatible devices when the ops are related to SparseApply* (e.g. SparseApplyRMSProp) operations, which results in an error. This only occurs when using ResourceVariable, not RefVariable.

Describe the expected behavior
There should be no error, and ops should be assigned to their appropriate devices (CPU vs GPU).

Code to reproduce the issue

```python
import tensorflow as tf
import numpy as np

DO_ERROR = True

with tf.Graph().as_default() as graph:
    my_var = tf.Variable(np.ones(5), use_resource=True)
    with tf.device('/gpu:0' if DO_ERROR else None):
        gathered = tf.gather(my_var, [0, 2, 4])
        opt_op = tf.train.MomentumOptimizer(0.1, 0.1).minimize(gathered)

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(opt_op)
```

Other info / logs

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation Variable/IsInitialized/VarIsInitializedOp: Could not satisfy explicit device specification '' because the node {{node Variable/IsInitialized/VarIsInitializedOp}} (defined at /tmp/….py:7) placed on device ('No device assignments were active during op 'Variable/IsInitialized/VarIsInitializedOp' creation.') was colocated with a group of nodes that required incompatible device '/device:GPU:0'.
All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='' supported_device_types_=[CPU] possible_devices_=[])
Const: GPU CPU
VarHandleOp: GPU CPU
AssignVariableOp: GPU CPU
VarIsInitializedOp: GPU CPU
ResourceGather: GPU CPU
ReadVariableOp: GPU CPU
StridedSlice: GPU CPU
Unique: GPU CPU
Shape: GPU CPU
UnsortedSegmentSum: GPU CPU
Cast: GPU CPU
ResourceSparseApplyMomentum: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  Variable/Initializer/initial_value (Const)
  Variable (VarHandleOp)
  Variable/IsInitialized/VarIsInitializedOp (VarIsInitializedOp)
  Variable/Assign (AssignVariableOp)
  Variable/Read/ReadVariableOp (ReadVariableOp)
  Gather (ResourceGather): /device:GPU:0
  Variable/Momentum/Initializer/zeros (Const)
  Variable/Momentum (VarHandleOp)
  Variable/Momentum/IsInitialized/VarIsInitializedOp (VarIsInitializedOp)
  Variable/Momentum/Assign (AssignVariableOp)
  Variable/Momentum/Read/ReadVariableOp (ReadVariableOp)
  Momentum/update_Variable/Unique (Unique)
  Momentum/update_Variable/Shape (Shape)
  Momentum/update_Variable/strided_slice/stack (Const)
  Momentum/update_Variable/strided_slice/stack_1 (Const)
  Momentum/update_Variable/strided_slice/stack_2 (Const)
  Momentum/update_Variable/strided_slice (StridedSlice)
  Momentum/update_Variable/UnsortedSegmentSum (UnsortedSegmentSum)
  Momentum/update_Variable/Cast (Cast)
  Momentum/update_Variable/Cast_1 (Cast)
  Momentum/update_Variable/ResourceSparseApplyMomentum (ResourceSparseApplyMomentum)

Node: Variable/IsInitialized/VarIsInitializedOp (defined at /tmp/….py:7)
Additional information about colocations: No node-device colocations were active during op 'Variable/IsInitialized/VarIsInitializedOp' creation. No device assignments were active during op 'Variable/IsInitialized/VarIsInitializedOp' creation.
```
tensorflow/tensorflow
Request for ComplexAbs and RFFT operations in TF Lite for TensorFlow 2.0
Bug
system information os platform and distribution e g linux ubuntu 16 04 macos 10 14 5 tensorflow instal from source or binary binary tensorflow version or github sha if from source 2 0 0 dev20190709 provide the text output from tflite convert some of the operator in the model be not support by the standard tensorflow lite runtime if those be native tensorflow operator you might be able to use the extended runtime by pass enable select tf op or by set target op tflite builtin select tf op when call tf lite tfliteconverter otherwise if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use add concatenation div floor div fully connect gather log maximum mul pack pad range reduce max reduce min reshape shape split v stride slice sub here be a list of operator for which you will need custom implementation complexab rfft also please include a link to a graphdef or the model if possible any other info log full traceback convertererror traceback most recent call last in 4 converter target op tf lite opsset select tf op 5 6 tflite model converter convert miniconda3 envs wakeword lib python3 6 site package tensorflow core lite python lite py in convert self 438 input tensor input tensor 439 output tensor output tensor 440 converter kwargs 441 442 if self be calibration quantize miniconda3 envs wakeword lib python3 6 site package tensorflow core lite python convert py in toco convert impl input data input tensor output tensor args kwargs 409 datum toco convert protos model flag serializetostre 410 toco flag serializetostre 411 input datum serializetostre 412 return datum 413 miniconda3 envs wakeword lib python3 6 site package tensorflow core lite python convert py in toco convert protos model flags str toco flags str input data str 170 stderr try convert to unicode stderr 171 raise convertererror 172 toco fail see console for info n 
s n s n stdout stderr 173 finally 174 must manually cleanup file convertererror toco fail see console for info 2019 07 09 16 34 24 077376 I tensorflow lite toco import tensorflow cc 1336 convert unsupported operation rfft 2019 07 09 16 34 24 077408 I tensorflow lite toco import tensorflow cc 1336 convert unsupported operation complexab 2019 07 09 16 34 24 078122 I tensorflow lite toco graph transformation graph transformation cc 39 before remove unused op 47 operator 85 array 0 quantize 2019 07 09 16 34 24 078475 I tensorflow lite toco graph transformation graph transformation cc 39 before general graph transformation 47 operator 85 array 0 quantize 2019 07 09 16 34 24 079060 I tensorflow lite toco graph transformation graph transformation cc 39 after general graph transformation pass 1 40 operator 70 array 0 quantize 2019 07 09 16 34 24 079501 I tensorflow lite toco graph transformation graph transformation cc 39 after general graph transformation pass 2 39 operator 69 array 0 quantize 2019 07 09 16 34 24 079910 I tensorflow lite toco graph transformation graph transformation cc 39 before group bidirectional sequence lstm rnn 39 operator 69 array 0 quantize 2019 07 09 16 34 24 080126 I tensorflow lite toco graph transformation graph transformation cc 39 before dequantization graph transformation 39 operator 69 array 0 quantize 2019 07 09 16 34 24 080490 I tensorflow lite toco allocate transient array cc 345 total transient array allocate size 384 byte theoretical optimal value 192 byte 2019 07 09 16 34 24 080931 e tensorflow lite toco toco tooling cc 466 we be continually in the process of add support to tensorflow lite for more op it would be helpful if you could inform we of how this conversion go by open a github issue at and paste the follow some of the operator in the model be not support by the standard tensorflow lite runtime if those be native tensorflow operator you might be able to use the extended runtime by pass enable select tf op or by set target op 
tflite builtin select tf op when call tf lite tfliteconverter otherwise if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use add concatenation div floor div fully connect gather log maximum mul pack pad range reduce max reduce min reshape shape split v stride slice sub here be a list of operator for which you will need custom implementation complexab rfft traceback most recent call last file user ben miniconda3 envs wakeword bin toco from protos line 10 in sys exit main file user ben miniconda3 envs wakeword lib python3 6 site package tensorflow core lite toco python toco from protos py line 59 in main app run main execute argv sys argv 0 unparse file user ben miniconda3 envs wakeword lib python3 6 site package tensorflow core python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file user ben miniconda3 envs wakeword lib python3 6 site package absl app py line 300 in run run main main args file user ben miniconda3 envs wakeword lib python3 6 site package absl app py line 251 in run main sys exit main argv file user ben miniconda3 envs wakeword lib python3 6 site package tensorflow core lite toco python toco from protos py line 33 in execute output str tensorflow wrap toco tococonvert model str toco str input str exception we be continually in the process of add support to tensorflow lite for more op it would be helpful if you could inform we of how this conversion go by open a github issue at and paste the follow some of the operator in the model be not support by the standard tensorflow lite runtime if those be native tensorflow operator you might be able to use the extended runtime by pass enable select tf op or by set target op tflite builtin select tf op when call tf lite tfliteconverter otherwise if you have a custom implementation for they you can disable this 
error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, DIV, FLOOR_DIV, FULLY_CONNECTED, GATHER, LOG, MAXIMUM, MUL, PACK, PAD, RANGE, REDUCE_MAX, REDUCE_MIN, RESHAPE, SHAPE, SPLIT_V, STRIDED_SLICE, SUB. Here is a list of operators for which you will need custom implementations: ComplexAbs, RFFT.

I'm trying to convert a model with tf.lite and running into this error. There is another issue that requests the RFFT operator as well, but it seems to be for TensorFlow 1.x. There is a commit (diff ed4b7d597384e8e4b1210b7558a16640) that whitelists the RFFT operation; however, my conversion fails. Is this only implemented in TensorFlow 1.x right now? I'm converting my model using this code:

```python
converter = tf.lite.TFLiteConverter.from_keras_model(test_model)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
```

I've tried using both `converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]` and `converter.target_ops = [tf.lite.OpsSet.SELECT_TF_OPS]`, but nothing changes.
tensorflow/tensorflow
TF2 GPU: tf.distribute causes crash when using RNN models
Bug
System information
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from: binary
- TensorFlow version: v1.12.1-5670-g718503b075 2.0.0-dev20190707
- Python version: 3.6.4
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6.1
- GPU model and memory: 3x Titan Xp

tf.distribute causes a crash when using an RNN model; it works fine when using a CNN. If the `with mirrored_strategy.scope():` block in the code below is removed, it works well.

Code to reproduce the issue

```python
import numpy as np
import tensorflow as tf

total_data_size = 10000
x = np.random.randint(100, size=(total_data_size, 100, 20)) / 100
x = x.astype(np.float32)
y = np.random.randint(2, size=total_data_size).astype(np.int32)

dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.batch(12)

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(3, activation='sigmoid'),
    ])
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])
model.fit(dataset)
```

Logs

Applying a constraint manually following the optimizer update step.
2019-07-09 12:01:15.720305: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] implementation_selector failed: Invalid argument: Invalid format of input node name: 'replica_1/sequential/lstm/StatefulPartitionedCall:replica_1/StatefulPartitionedCall:0' expecting {forward_node_name}:{index}
2019-07-09 12:01:16.056015: W tensorflow/core/grappler/optimizers/implementation_selector.cc:199] Skipping optimization due to error while loading function libraries: Invalid argument: Functions '__inference___backward_standard_lstm_8517_9019' and '__inference___backward_standard_lstm_8517_9019_specialized_for_replica_2_StatefulPartitionedCall_at___inference_distributed_function_9976' both implement 'lstm_e2ea6704-e320-4be8-b8e0-8ad71afc296b' but their signatures do not match.
2019-07-09 12:01:16.282647: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1558] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_
global jit be not set if you want xla cpu either set that envvar or use experimental jit scope to enable xla cpu to confirm that xla be active pass vmodule xla compilation cache 1 as a proper command line flag not via tf xla flag or set the envvar xla flag xla hlo profile 2019 07 09 12 01 16 325991 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 07 09 12 01 16 940501 w tensorflow core framework op kernel cc 1622 op require fail at partition function op cc 113 invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 device gpu 2 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval 2019 07 09 12 01 16 940501 w tensorflow core framework op kernel cc 1622 op require fail at partition function op cc 113 invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 device gpu 0 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval 2019 07 09 12 01 16 940560 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 device gpu 0 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval node sequential lstm statefulpartitionedcall 2019 07 09 12 01 16 940597 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job 
localhost replica 0 task 0 device gpu 2 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval node replica 2 sequential lstm statefulpartitionedcall groupcrossdevicecontroledge 0 adam adam update 1 1 const 143 2019 07 09 12 01 16 940632 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 device gpu 2 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval node replica 2 sequential lstm statefulpartitionedcall metric accuracy div no nan readvariableop 1 110 2019 07 09 12 01 16 940810 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 device gpu 2 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval node replica 2 sequential lstm statefulpartitionedcall 2019 07 09 12 01 16 943854 w tensorflow core framework op kernel cc 1622 op require fail at partition function op cc 113 invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 device gpu 1 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval traceback most recent call last file test run py line 25 in model fit dataset file usr local lib python3 6 site package tensorflow core python keras engine training py line 668 in fit use multiprocesse use multiprocesse file usr local lib python3 6 site package tensorflow core python keras 
engine training distribute py line 680 in fit step name step per epoch file usr local lib python3 6 site package tensorflow core python keras engine training array py line 294 in model iteration batch out f actual input file usr local lib python3 6 site package tensorflow core python keras distribute distribute training util py line 854 in execution function return out numpy for out in distribute function input fn file usr local lib python3 6 site package tensorflow core python eager def function py line 429 in call return self stateless fn args kwd file usr local lib python3 6 site package tensorflow core python eager function py line 1662 in call return graph function filter call args kwargs pylint disable protect access file usr local lib python3 6 site package tensorflow core python eager function py line 635 in filter call self capture input file usr local lib python3 6 site package tensorflow core python eager function py line 733 in call flat output self inference function call ctx args file usr local lib python3 6 site package tensorflow core python eager function py line 459 in call ctx ctx file usr local lib python3 6 site package tensorflow core python eager execute py line 67 in quick execute six raise from core status to exception e code message none file line 3 in raise from tensorflow python framework error impl invalidargumenterror 2 root error s find 0 invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 device gpu 0 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval node sequential lstm statefulpartitionedcall define at usr local lib python3 6 site package tensorflow core python framework op py 1654 1 invalid argument can not place the graph because a reference or resource edge connect colocation group with incompatible assign device job localhost replica 0 task 0 
device gpu 2 vs job localhost replica 0 task 0 device cpu 0 the edge src node be while 22 exit 100 and the dst node be while 0 retval node replica 2 sequential lstm statefulpartitionedcall define at usr local lib python3 6 site package tensorflow core python framework op py 1654 0 successful operation 2 derive error ignore op inference distribute function 9976 function call stack distribute function distribute function
tensorflow/tensorflow
TOCO input_shapes not working as expected
Bug
Hello, I have converted a DeepLabV3 model to tflite using toco, with the following command:

```
toco \
  --graph_def_file=/home/abdullah/frozen_inference_graph.pb \
  --output_file=model1.tflite \
  --output_format=TFLITE \
  --input_arrays=sub_7 \
  --output_arrays=ResizeBilinear_3 \
  --input_shapes=1,1024,1024,3
```

So the tflite model should take a (1, 1024, 1024, 3) input, but when I try to test this model on my laptop (for testing I use the code below), it gives a dimension-mismatch error and is still expecting an input of (1, 513, 513, 3). Here is the inference code:

```python
import tensorflow as tf
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

interpreter = tf.contrib.lite.Interpreter(
    model_path="/home/abdullah/Documents/company_work/tflite/model1.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# quantization: none, use the input type
input_type = input_details[0]['dtype']
size = 1024
image = Image.open('/home/abdullah/Pictures/xyz.jpg')
image = image.resize((size, size))
input_shape = input_details[0]['shape']
image = np.array(image)
image = image.reshape(1, size, size, 3)
image = image / 127. - 1
input_data = image.astype(input_type)
print(input_details)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
output_data = output_data.reshape(size, size, 21)
labels = np.argmax(output_data, -1)
labels = labels.reshape(size, size)
plt.imshow(labels)
plt.show()
print(labels.shape)
print(labels)
```

I am getting the following error:

```
Traceback (most recent call last):
  File "infer.py", line 36, in <module>
    interpreter.set_tensor(input_details[0]['index'], input_data)
  File "/home/abdullah/anaconda3/envs/tflow/lib/python3.6/site-packages/tensorflow/contrib/lite/python/interpreter.py", line 151, in set_tensor
    self._interpreter.SetTensor(tensor_index, value)
  File "/home/abdullah/anaconda3/envs/tflow/lib/python3.6/site-packages/tensorflow/contrib/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 133, in SetTensor
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_SetTensor(self, i, value)
ValueError: Cannot set tensor: Dimension mismatch
```

But when I set size = 513, it works like a charm. Moreover, when I print `input_details = interpreter.get_input_details()`, it prints:

```
[{'name': 'sub_7', 'index': 283, 'shape': array([1, 513, 513, 3], dtype=int32), 'dtype': …, 'quantization': (0.0, 0)}]
```

Kindly tell me how I should solve this.

System information
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: pip install tensorflow
- TensorFlow version: 1.10.1
- Python version: 3.6.8
- Bazel version: N/A
- GCC/compiler version: N/A
- CUDA/cuDNN version: 10.0 / 7.4
- GPU model and memory: GTX 1060, 16 GB RAM
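Not part of the original report, but the Python Interpreter can also resize an input tensor at runtime with resize_tensor_input, instead of relying on the shape baked in at conversion time. A minimal sketch on a toy model (the model here is illustrative, not the reporter's DeepLab graph; this uses the TF 2.x converter API):

```python
import numpy as np
import tensorflow as tf

# Build a trivial Keras model and convert it to TFLite in memory.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_index = interpreter.get_input_details()[0]['index']
# Resize the input tensor BEFORE allocating tensors.
interpreter.resize_tensor_input(input_index, [8, 4])
interpreter.allocate_tensors()
interpreter.set_tensor(input_index, np.zeros((8, 4), np.float32))
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
print(out.shape)  # -> (8, 2)
```

Note that resizing only works for shapes the graph can actually handle; fixed spatial dimensions inside the model still require reconversion.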
tensorflow/tensorflow
No output_shape after tf.keras layer/model build and call: is it intended?
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 16.04
- TensorFlow installed from: binary
- TensorFlow version: 2.0.0-beta1 and tf-nightly-gpu-2.0-preview 2.0.0.dev20190708
- Python version: 3.6.8
- CUDA/cuDNN version: CUDA 10.0
- GPU model and memory: without and with GPU (P100)

Describe the current behavior
Hi all, when I tried to use model.summary(), the output shapes were printed as "multiple". After a few tries, I realized that the output shape of a Keras layer/model is not determined even after the layer/model is built and called. Here are short examples:

1. Keras layer:

```python
import tensorflow as tf

dense = tf.keras.layers.Dense(2)
dense.build(input_shape=(3,))
input_tensor = tf.ones((5, 3), tf.float32)
output_tensor = dense(input_tensor)
# The line below raises AttributeError:
# "The layer has never been called and thus has no defined output shape."
print(dense.output_shape)
```

2. Keras model:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Dense(4),
])
model.build(input_shape=(None, 3))
input_tensor = tf.ones((5, 3), tf.float32)
output_tensor = model(input_tensor)
# The line below raises AttributeError:
# "The layer has never been called and thus has no defined output shape."
print(model.output_shape)
```

Describe the expected behavior
I think Keras layers/models should have output shapes, but they don't. Please see if it's intended. I've just started to migrate from TF 1.x to TF 2.0 and to use the Keras APIs, so I might be wrong when using them.

Code to reproduce the issue: described above.

Other info / logs: (none)
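Not part of the original report, but for comparison: output_shape is populated when the layer is called on a symbolic tf.keras.Input (functional API), and compute_output_shape can be queried directly for concrete shapes. A minimal sketch:

```python
import tensorflow as tf

# Calling the layer on a symbolic Input (functional API) records its
# input/output history, so output_shape becomes defined.
inputs = tf.keras.Input(shape=(3,))
dense = tf.keras.layers.Dense(2)
outputs = dense(inputs)
print(dense.output_shape)  # -> (None, 2)

# For a concrete batch shape, compute_output_shape can be used instead.
print(dense.compute_output_shape((5, 3)))  # -> (5, 2)
```

Calling the layer on an eager tensor (as in the report) does not record this history, which is why the attribute raises in that case.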
tensorflow/tensorflow
Custom op documentation: -D_GLIBCXX_USE_CXX11_ABI=0 not required anymore
Bug
URL(s) with the issue: "Compile the op using your system compiler (TensorFlow binary installation)", last note of the paragraph.
Description of issue (what needs changing): The documentation states that custom ops for the binary pip packages should be compiled with -D_GLIBCXX_USE_CXX11_ABI=0. For me, this flag actually has to be removed (Python 3.7, tensorflow-gpu 1.14.0 from pip), so I guess TF 1.14 is no longer built with gcc 4. If someone can confirm, I could open a PR.
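A way to sidestep hard-coding the ABI define at all (not claimed by the report itself, but available in recent releases) is to query the flags the installed wheel was actually built with, via tf.sysconfig:

```python
import tensorflow as tf

# The wheel reports the exact compile/link flags it needs, including the
# right _GLIBCXX_USE_CXX11_ABI value, so custom ops need not hard-code
# -D_GLIBCXX_USE_CXX11_ABI=0.
compile_flags = tf.sysconfig.get_compile_flags()
link_flags = tf.sysconfig.get_link_flags()
abi_flags = [f for f in compile_flags if "_GLIBCXX_USE_CXX11_ABI" in f]
print(abi_flags)
```

These flags can then be passed straight to the compiler, e.g. `g++ -std=c++11 -shared op.cc -o op.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2`, which stays correct whatever ABI the wheel was built with.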
tensorflow/tensorflow
Keras TimeDistributed on a Model creates duplicate layers and is inconsistent with TimeDistributed on a layer
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tensorflow-gpu 1.14.0
- Python version: 3.6.7
- CUDA/cuDNN version: 10
- GPU model and memory: GTX 1060M

Describe the current behavior

Wrapping a model in a TimeDistributed layer creates duplicate nodes in the graph. The docs say "all models are callable, just like layers" [link], so let's create a simple Dense model wrapped in a TimeDistributed layer:

```python
inner_input = keras.layers.Input((2,))
dense = keras.layers.Dense(2, activation='relu')(inner_input)
inner_model = keras.Model(inputs=inner_input, outputs=dense)

full_input = keras.layers.Input((2, 2))
td_2 = keras.layers.TimeDistributed(inner_model)(full_input)
model = keras.Model(inputs=full_input, outputs=td_2)
model.compile('sgd', 'mse')
```

You end up with this: [bad_td image]

Firstly, if you follow the documented approach you end up with an additional dense layer (bottom left). This eats up memory, and happens because you build the inner model and then rebuild it again when you build the TimeDistributed model. This can be avoided by parameterizing your model building [link], but that can be a very painful workaround if your model is complex. For demonstration's sake, here's the model as an object:

```python
class InnerModel(keras.Model):
    def __init__(self):
        super(InnerModel, self).__init__()
        self.dense = keras.layers.Dense(2, activation='relu')

    def call(self, inputs, training=None, mask=None):
        out = self.dense(inputs)
        return out

    def compute_output_shape(self, input_shape):
        return (input_shape[0], 2)


inputs = keras.layers.Input((2, 2))
td_model = InnerModel()
time_dist = keras.layers.TimeDistributed(td_model)(inputs)
model = keras.Model(inputs=inputs, outputs=time_dist)
model.compile('sgd', 'mse')
```

Here's the improved graph: [well_td image]. The additional dense layer is gone, but there are still two dense layers inside, and they have different contents too: [well_td_internals image]

Describe the expected behavior

Compare this to what you get if you just wrap the dense layer itself:

```python
full_input = keras.layers.Input((2, 2))
td_2 = keras.layers.TimeDistributed(
    keras.layers.Dense(2, activation='relu'))(full_input)
model = keras.Model(inputs=full_input, outputs=td_2)
model.compile('sgd', 'mse')
```

[good_td image]

I'd expect wrapping a model to result in a very similar looking graph to wrapping a layer.

Code to reproduce the issue

Full code for creating the graphs: [link]
tensorflowtensorflow
DeepLab iOS tflite: using DeepLab in an iOS application
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: iPhone 6
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12
- Python version: 3.7
- Bazel version (if compiling from source): 0.27
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior

DeepLab doesn't segment on iOS.

Describe the expected behavior

DeepLab segmentation in an iOS application.

Code to reproduce the issue

```shell
tflite_convert \
  --output_format=TFLITE \
  --inference_type=FLOAT \
  --inference_input_type=FLOAT \
  --input_arrays=sub_2 \
  --input_shapes=1,224,224,3 \
  --output_arrays=ResizeBilinear_2 \
  --output_file=/Users/karizma/Downloads/deeplabv3_mnv2_pascal_trainvall_freeze_224.tflite \
  --graph_def_file=/Users/karizma/Downloads/deeplabv3_mnv2_pascal_trainvall_freeze_224.pb \
  --mean_values=128 \
  --std_dev_values=127 \
  --allow_custom_ops \
  --post_training_quantize
```

Other info / logs

Hi, I have trained DeepLab on my custom dataset (200x150) with 224 as crop size, and during testing it detects for crops with crop size 224. Now what I need is to integrate my model into an iOS application. I was able to successfully convert the model to tflite, but I do not detect anything, and I don't get what the problem is: when I convert a DeepLab pretrained MobileNet model [link] it works for me on mobile, while my own model does not. However, I have tested my model (the .pb model) with Python code and it does detect. This is my model architecture; I hope it will be helpful to understand what's going on. [screenshots]
tensorflowtensorflow
TF 2.0: dataset iteration, dynamic TensorArray and reduce operations
Bug
System information
- Have I written custom code: yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): OS X
- TensorFlow installed from (source or binary): binary, 2.0.0-beta1
- TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
- Python version: 3.6

Describe the current behaviour

I'm trying to apply a reduce operation over the result of a TensorArray concatenation. The concatenation happens in a for loop generated by iterating over a dataset. The resulting value of the reduce operation is a malformed tensor: the reported shape is [] but the actual value is a float32 of shape [1]. This makes the resulting tensor effectively unusable, because TF will then fail either because of the shape information or because of the actual value of the tensor. Remark: if the for loop is generated from the tf.range operation, everything works as expected.

Describe the expected behaviour

Getting a valid result from the different reduce operations when applied to the result of a TensorArray concat operation, when the array is filled in a for loop generated by iterating over a dataset.

Code to reproduce the issue

```python
import tensorflow as tf

mean = tf.keras.metrics.Mean()
a = tf.random.uniform((10, 2))
d = tf.data.Dataset.from_tensor_slices(a).batch(2)


@tf.function
def compute_mean(mean, dataset):  # I don't use the dataset at all
    arr = tf.TensorArray(tf.float32, 1, dynamic_size=True)
    for i in tf.range(10):  # simple for loop
        real_logits = tf.random.normal((5, 1))
        arr = arr.write(tf.cast(i, tf.int32), real_logits)
    all_real_logits = arr.concat()
    score = tf.reduce_mean(all_real_logits)
    tf.print(tf.shape(score), score)  # [] 0.0653904751
    mean.update_state(score)
    return mean.result()


@tf.function
def compute_error(mean, dataset):  # I use the dataset only to get the index
    arr = tf.TensorArray(tf.float32, 1, dynamic_size=True)
    for i, _ in dataset.enumerate():  # dataset for loop with enumerate
        real_logits = tf.random.normal((5, 1))
        arr = arr.write(tf.cast(i, tf.int32), real_logits)
    all_real_logits = arr.concat()
    score = tf.reduce_mean(all_real_logits)
    tf.print(tf.shape(score), score)  # [] [0.256373167]  <- brackets!
    mean.update_state(score)
    return mean.result()


@tf.function
def compute_error2(mean, dataset):  # I only use the dataset to simulate a for loop
    arr = tf.TensorArray(tf.float32, 1, dynamic_size=True)
    i = tf.constant(0, tf.int32)
    for _ in dataset:  # dataset for loop
        real_logits = tf.random.normal((5, 1))
        arr = arr.write(tf.cast(i, tf.int32), real_logits)
        i = i + 1
    all_real_logits = arr.concat()
    score = tf.reduce_mean(all_real_logits)
    # score = tf.reduce_sum(all_real_logits)  # same behaviour with reduce_sum
    tf.print(tf.shape(score), score)  # [] [0.256373167]  <- brackets!
    mean.update_state(score)
    return mean.result()


# Works well:
print(compute_mean(mean, d))

# Breaks because the shape is wrong: for the score var we have shape []
# but the actual value is a float of shape [1]. Yet we can't do score[0]
# because the shape is []. In the end the score var becomes unusable.
# Error: Cannot update variable with shape [] using a Tensor with
# shape [1], shapes must be equal.
print(compute_error(mean, d))

# It seems that the error occurs as long as the call to arr.write is
# inside a for loop generated by iteration on a dataset. Switching the
# reduce operation does not change the behaviour.
print(compute_error2(mean, d))
```
tensorflowtensorflow
Op Less registered twice
Bug
In the 1.14 branch, the CPU kernel for op Less is registered twice for the type bfloat16 (see [l19]: first on line 19 and then again on line 21). This has the effect of throwing an error — InvalidArgumentError: Multiple OpKernel registrations match NodeDef — whenever a bfloat16 comparison on a CPU is attempted.
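The failure mode is easy to see with a toy registry. This is a conceptual sketch in plain Python of my own making (not TensorFlow's actual C++ registration machinery), assuming kernels are keyed by (op, device, dtype):

```python
class DuplicateKernelError(Exception):
    pass


class KernelRegistry:
    """Toy model of a kernel registry keyed by (op, device, dtype)."""

    def __init__(self):
        self._kernels = {}

    def register(self, op, device, dtype, kernel):
        key = (op, device, dtype)
        if key in self._kernels:
            # Mirrors "Multiple OpKernel registrations match NodeDef":
            # two kernels would match the same lookup key.
            raise DuplicateKernelError("duplicate registration for %r" % (key,))
        self._kernels[key] = kernel


registry = KernelRegistry()
registry.register("Less", "CPU", "bfloat16", object())       # like line 19
try:
    registry.register("Less", "CPU", "bfloat16", object())   # like line 21
    duplicated = False
except DuplicateKernelError:
    duplicated = True
```

In real TensorFlow the conflict only surfaces when a matching NodeDef is looked up, but the root cause is the same: two registrations for one (op, device, dtype) triple.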
tensorflowtensorflow
The flag log_dir is defined twice
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14.0
- Python version: Python 3.5
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: no
- GPU model and memory: n/a

Describe the current behavior

DuplicateFlagError: The flag 'log_dir' is defined twice.

Code to reproduce the issue

```python
import tensorflow as tf
from absl import flags

flags.DEFINE_string('log_dir', 'log', 'log directory')
```
tensorflowtensorflow
TensorFlow Lite model for on-device speech recognizer
Bug
Description of issue: there's a mention of a TensorFlow Lite model in the Google AI team blog, made publicly available through the Model Optimization Toolkit in the TensorFlow Lite library. Where can I find this model? And is this the right place to ask?
tensorflowtensorflow
InvalidArgumentError: Retval[0] does not have value, when combining tf.case and L2 regularization
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0
- Python version: 2.7.15
- CUDA/cuDNN version: 7
- GPU model and memory: GTX 1070, 8 GB

Describe the current behavior

Getting the following error when using tf.case together with slim L2 regularization:

```
Traceback (most recent call last):
  File "/home/yfeng23/test/tf_case_test.py", line 20, in <module>
    print(sess.run(loss))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Retval[0] does not have value
```

Describe the expected behavior

No error.

Code to reproduce the issue

Provide a reproducible test case that is the bare minimum necessary to generate the problem:

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

x = tf.zeros((1, 8))
fn = lambda: slim.fully_connected(
    x, 4, weights_regularizer=slim.l2_regularizer(0.1))
pred_fn_pairs = [(tf.equal(0, 0), fn),
                 (tf.equal(1, 0), fn)]
y = tf.case(pred_fn_pairs, exclusive=True)
loss = tf.losses.get_regularization_loss()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss))
```
tensorflowtensorflow
Batched tf.linalg.eigh is much slower on GPU than on CPU for many small matrices
Bug
Describe the current behavior

See title.

Describe the expected behavior

There shouldn't be such a big discrepancy between the two; see below.

Code to reproduce the issue

```python
import tensorflow as tf

sym = lambda a: 0.5 * (a + tf.matrix_transpose(a))

with tf.device('/cpu:0'):
    tf.linalg.eigh(sym(tf.random_uniform((100000, 2, 2))))  # fast: ~0.02 s

with tf.device('/gpu:0'):
    tf.linalg.eigh(sym(tf.random_uniform((100000, 2, 2))))  # slow: ~7.3 s
```

System information
- OS Platform: Linux-4.4.0-154-generic-x86_64-with-debian-stretch-sid
- GPU: GeForce GTX TITAN X
- Python version: 3.7.3
- tf.VERSION: 1.13.1
- tf.COMPILER_VERSION: 5.4.0
- CUDA 10, cuDNN 7
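For what it's worth, the per-matrix work here is tiny — the eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]] have a closed form — so the batch dimension, not the math, should dominate the cost. A plain-Python sketch of that closed form (my own illustration, not TF's kernel):

```python
import math


def eigh_2x2(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]], ascending."""
    mean = 0.5 * (a + c)
    # Distance from the mean of the diagonal to each eigenvalue.
    radius = math.hypot(0.5 * (a - c), b)
    return mean - radius, mean + radius


# [[2, 1], [1, 2]] has eigenvalues 1 and 3.
lo, hi = eigh_2x2(2.0, 1.0, 2.0)
```

With so little arithmetic per matrix, kernel-launch and batching overhead plausibly explains why the GPU path loses here, but that is my speculation, not a diagnosis.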
tensorflowtensorflow
TF 2.0: categorical_column_with_vocabulary_list not usable in custom training loop
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
- Python version: 3.6.8

Describe the current behavior

Outside of fit (e.g. in a custom training loop), categorical_column_with_vocabulary_list results in an error. I have provided a modified version of "Classify structured data" which demonstrates this. The error is:

ValueError: Column dtype and SparseTensors dtype must be compatible. key: thal, column dtype: ..., tensor dtype: ...

Describe the expected behavior

The code runs without causing an error.

Code to reproduce the issue

It should be directly copy-pasteable:

```python
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split


def df_to_dataset(df, shuffle=True, batch_size=32):
    df = df.copy()
    labels = df.pop('target')
    ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(df))
    ds = ds.batch(batch_size)
    return ds


def generate_features():
    feature_columns = []
    feature_layer_inputs = {}
    thal = tf.feature_column.categorical_column_with_vocabulary_list(
        'thal', ['fixed', 'normal', 'reversible'])
    thal_one_hot = tf.feature_column.indicator_column(thal)
    feature_columns.append(thal_one_hot)
    feature_layer_inputs['thal'] = tf.keras.Input(
        shape=(1,), name='thal', dtype=tf.string)
    return feature_columns, feature_layer_inputs


def create_model(feature_columns, feature_layer_inputs):
    input_layer = tf.keras.layers.DenseFeatures(feature_columns)(feature_layer_inputs)
    l1 = tf.keras.layers.Dense(128, activation='relu')(input_layer)
    l2 = tf.keras.layers.Dense(128, activation='relu')(l1)
    output = tf.keras.layers.Dense(1, activation='sigmoid')(l2)
    model = tf.keras.Model(
        inputs=[v for v in feature_layer_inputs.values()], outputs=output)
    return model


def make_loss(loss_object):
    def loss(model, x, y):
        y_pred = model(x)
        return loss_object(y_true=y, y_pred=y_pred)
    return loss


def grad(model, inputs, targets, loss):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)


def fit(epochs, train_ds, model, optimizer, loss_obj):
    loss = make_loss(loss_obj)
    for epoch in range(epochs):
        for i, (x, y) in enumerate(train_ds):
            loss_value, grads = grad(model, x, y, loss)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))


if __name__ == '__main__':
    URL = '...'
    df = pd.read_csv(URL)
    custom_training = True

    train, test = train_test_split(df, test_size=0.2)
    train, val = train_test_split(train, test_size=0.2)

    # hardcoded stuff
    batch_size = 32
    train_ds = df_to_dataset(train, batch_size=batch_size)

    # create model and features
    feature_columns, feature_layer_inputs = generate_features()
    model = create_model(feature_columns, feature_layer_inputs)

    if custom_training:
        print('Trying custom training')
        bce = tf.keras.losses.BinaryCrossentropy()
        adam = tf.keras.optimizers.Adam()
        fit(epochs=5, train_ds=train_ds, model=model,
            optimizer=adam, loss_obj=bce)
    else:
        print('Using predefined fit')
        model.compile(optimizer=adam, loss='binary_crossentropy',
                      metrics=['accuracy'])
        model.fit(train_ds, epochs=5)
```

If you flip the custom_training variable (line 72) between True and False, you'll see what I mean.

Other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

1. The complete relevant stacktrace is:

```
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1758, in <module>
    main()
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1752, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1147, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/Users/ian.quah/PycharmProjects/tf2/dataset_issue.py", line 90, in <module>
    model=model, optimizer=adam, loss_obj=bce)
  File "/Users/ian.quah/PycharmProjects/tf2/dataset_issue.py", line 65, in fit
    loss_value, grads = grad(model, x, y, loss)
  File "/Users/ian.quah/PycharmProjects/tf2/dataset_issue.py", line 57, in grad
    loss_value = loss(model, inputs, targets)
  File "/Users/ian.quah/PycharmProjects/tf2/dataset_issue.py", line 50, in loss
    y_pred = model(x)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 753, in call
    return self._run_internal_graph(inputs, training=training, mask=mask)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 895, in _run_internal_graph
    output_tensors = layer(computed_tensors, **kwargs)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 474, in call
    self._state_manager)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 4299, in get_dense_tensor
    return transformation_cache.get(self, state_manager)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 2562, in get
    transformed = column.transform_feature(self, state_manager)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 4238, in transform_feature
    transformation_cache, state_manager)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 3714, in get_sparse_tensors
    transformation_cache.get(self, state_manager), None)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 2562, in get
    transformed = column.transform_feature(self, state_manager)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 3692, in transform_feature
    return self._transform_input_tensor(input_tensor)
  File "/anaconda3/envs/mlpl/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 3668, in _transform_input_tensor
    self.key, self.dtype, input_tensor.dtype))
ValueError: Column dtype and SparseTensors dtype must be compatible. key: thal, column dtype: ..., tensor dtype: ...
```

2. Placing a debugger on line 50 leads me to feature_column_v2.py, specifically _transform_input_tensor. The input_tensor arg to _transform_input_tensor is:

```
SparseTensor(indices=tf.Tensor(
[[ 0  0] [ 1  0] [ 2  0] [ 3  0] [ 4  0] [ 5  0] [ 6  0] [ 7  0]
 [ 8  0] [ 9  0] [10  0] [11  0] [12  0] [13  0] [14  0] [15  0]
 [16  0] [17  0] [18  0] [19  0] [20  0] [21  0] [22  0] [23  0]
 [24  0] [25  0] [26  0] [27  0] [28  0] [29  0] [30  0] [31  0]],
 shape=(32, 2), dtype=int64),
values=tf.Tensor(
[49 34 58 46 59 47 55 58 41 68 62 51 61 46 39 48 37 41 51 59 51 70 60 57
 54 60 52 44 65 49 44 59], shape=(32,), dtype=int32),
dense_shape=tf.Tensor([32  1], shape=(2,), dtype=int64))
```

which seems strange — it's like it forgot that it had already transformed those variables.
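The crash above boils down to a dtype compatibility check: the vocabulary column expects raw string values, but by the second transformation it is handed already-transformed integer ids. A toy plain-Python sketch of that check (my illustration only, not TF's feature-column code):

```python
def transform_vocab_column(key, column_dtype, values):
    """Toy version of the dtype check a vocabulary column performs
    before looking values up in its vocabulary."""
    expected = {"string": str, "int64": int}[column_dtype]
    if not all(isinstance(v, expected) for v in values):
        tensor_dtype = type(values[0]).__name__
        raise ValueError(
            "Column dtype and SparseTensors dtype must be compatible. "
            "key: %s, column dtype: %s, tensor dtype: %s"
            % (key, column_dtype, tensor_dtype))
    return values


# Strings pass for a string-vocabulary column...
transform_vocab_column("thal", "string", ["fixed", "normal"])

# ...but already-transformed integer ids do not, which is the crash above.
try:
    transform_vocab_column("thal", "string", [49, 34, 58])
    mismatch_raised = False
except ValueError:
    mismatch_raised = True
```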
tensorflowtensorflow
Add warning to dropout that it violates Google patent US9406017B2
Bug
Description of issue (what needs changing): Google, who to a large extent runs this project, has patented dropout (see [patent]). This patent is not listed as a pledged patent in Google's list of patents, and is thus not covered by the Google Open Patent Non-Assertion Pledge. Since this could cause serious issues for users, it needs to be documented.
tensorflowtensorflow
Keras has a memory leak when passing a Dataset object to the predict function
Bug
Summary: performance degrades quickly and memory increases consistently when calling the Keras predict function in a loop with a Dataset object. This does not happen when passing predict a numpy array, or when passing in a tensor from a Dataset iterator.

System information
- Have I written custom code: minimally reproducible example below, using only stock 1.14.0 code
- OS Platform and Distribution: Ubuntu 18.04 / Linux Mint 19.1
- TensorFlow installed from (source or binary): pip install tensorflow-gpu (example does not use the GPU: CUDA_VISIBLE_DEVICES=-1)
- TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc9 1.14.0
- Python version: 3.7.3

Describe the current behavior

Looping over model.predict(x=my_dataset) in a continuous loop degrades in performance after a few hundred iterations. The minimally reproducible example below starts at 0.04 s per loop iteration and, within about a minute of running, is near 0.5 s per loop iteration; memory continues to climb. This does not happen when passing a numpy array to model.predict(x=my_ndarray). The problem is also less severe when passing in a tf.data iterator rather than a tf.data.Dataset: with an iterator the performance still degrades, but at a fifth to a tenth of the rate. The cause of the difference between the Dataset performance and the iterator performance is likely at training_utils.py:1314, where Keras creates a new iterator for each predict loop. The issue is completely ameliorated when passing predict the tensor produced by my_dataset.make_one_shot_iterator().get_next(); in this case no additional dataset operations appear to be created by Keras in the predict loop.

Describe the expected behavior

Multiple calls to predict should not degrade in performance over time when passing in a Dataset.

Code to reproduce the issue

This code reproduces the issue and is copy-paste runnable; performance will degrade significantly within 30 seconds of running this example:

```python
import tensorflow as tf
import numpy as np
import time

SIZE = 5000

inp = tf.keras.layers.Input(shape=(SIZE,), dtype=tf.float32)
x = tf.keras.layers.Dense(units=SIZE)(inp)
model = tf.keras.Model(inputs=inp, outputs=x)

np_data = np.random.rand(1, SIZE)
ds = tf.data.Dataset.from_tensor_slices(np_data).batch(1).repeat()

debug_time = time.time()
while True:
    model.predict(x=ds, steps=1)
    print('Processing time: {:.2f}'.format(time.time() - debug_time))
    debug_time = time.time()
```

This example demonstrates that passing a numpy array does not have the same issue:

```python
import tensorflow as tf
import numpy as np
import time

SIZE = 5000

inp = tf.keras.layers.Input(shape=(SIZE,), dtype=tf.float32)
x = tf.keras.layers.Dense(units=SIZE)(inp)
model = tf.keras.Model(inputs=inp, outputs=x)

np_data = np.random.rand(1, SIZE)

debug_time = time.time()
while True:
    model.predict(x=np_data)  # use the numpy array directly
    print('Processing time: {:.2f}'.format(time.time() - debug_time))
    debug_time = time.time()
```

This issue started as a Stack Overflow question, and I decided to post it here when I realized that predict creates a new iterator on each predict loop iteration, and works when the get_next tensor is passed in directly.
tensorflowtensorflow
TF 2.0 API docs: tf.image.image_gradients
Bug
URL(s) with the issue:

Description of issue (what needs changing):

Usage example: no usage example provided.

Submit a pull request? Yes.
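Since the docs lack one, here is the kind of usage example I would expect, sketched in plain Python so the semantics are explicit: dy holds the difference to the next row (last row zero) and dx the difference to the next column (last column zero). This is my reading of the op's behavior; the real API operates on a 4-D batch of images rather than a bare 2-D list:

```python
def image_gradients(image):
    """dy[i][j] = image[i+1][j] - image[i][j], with the last row zero;
    dx[i][j] = image[i][j+1] - image[i][j], with the last column zero."""
    h, w = len(image), len(image[0])
    dy = [[image[i + 1][j] - image[i][j] if i < h - 1 else 0
           for j in range(w)] for i in range(h)]
    dx = [[image[i][j + 1] - image[i][j] if j < w - 1 else 0
           for j in range(w)] for i in range(h)]
    return dy, dx


dy, dx = image_gradients([[1, 2],
                          [4, 8]])
# dy == [[3, 6], [0, 0]]
# dx == [[1, 0], [4, 0]]
```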
tensorflowtensorflow
Remove global_step from the usage examples of CosineDecay and other related classes
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:

Description of issue (what needs changing): the usage examples of CosineDecay, CosineDecayRestarts, LinearCosineDecay and NoisyLinearCosineDecay take a parameter global_step, which does not correspond to the definition.

Clear description: the script defines the various learning-rate decay classes used for network training.

Usage example: there is a usage example; however, it does not correspond to the definition.

Submit a pull request? I can, if required.
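For reference, the decay itself only needs the current step — here is the cosine decay schedule's formula transcribed into plain Python (an illustrative sketch of the documented math, not the TF class itself):

```python
import math


def cosine_decay(initial_lr, step, decay_steps, alpha=0.0):
    """lr * ((1 - alpha) * 0.5 * (1 + cos(pi * step / decay_steps)) + alpha),
    with step clamped to decay_steps; alpha sets the floor as a fraction of
    the initial learning rate."""
    step = min(step, decay_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / decay_steps))
    return initial_lr * ((1.0 - alpha) * cosine + alpha)


cosine_decay(1.0, 0, 100, alpha=0.1)    # 1.0 at the start of the schedule
cosine_decay(1.0, 100, 100, alpha=0.1)  # 0.1, the floor (alpha * initial_lr)
```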
tensorflowtensorflow
tf.data.Dataset.list_files: is the returned order deterministic when shuffle=False?
Bug
URL(s) with the issue: list_files

Description of issue (what needs changing):

In the doc above it says:

> NOTE: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.

So if shuffle=False is passed, it will return a deterministic order. But if we check the source code of the function, it calls the following function to get matching files (l769):

```python
@staticmethod
def list_files(file_pattern, shuffle=None, seed=None):
    ...
    matching_files = gen_io_ops.matching_files(file_pattern)
```

If we check the description of gen_io_ops.matching_files, it says:

```python
@tf_export('matching_files')
def matching_files(pattern, name=None):
  r"""Returns the set of files matching one or more glob patterns.

  Note that this routine only supports wildcard characters in the
  basename portion of the pattern, not in the directory portion.
  Note also that the order of filenames returned can be non-deterministic.

  Args:
    pattern: A `Tensor` of type `string`. Shell wildcard pattern(s).
      Scalar or vector of type string.
    name: A name for the operation (optional).

  Returns:
    A `Tensor` of type `string`.
  """
```

The same note appears in the documentation defined in the generated file python/ops/gen_io_ops.py ("Note also that the order of filenames returned can be non-deterministic") and in python/training/input.py ("Note: The order of the files returned can be non-deterministic"). Checking the source code of the function (l63), both tf.io.matching_files and tf.io.match_filenames_once call gen_io_ops.matching_files. I think it is quite confusing here.
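The deterministic order promised by shuffle=False would have to come from sorting the matches, since the underlying globbing op makes no ordering promise. A plain-Python sketch of that workaround (my own illustration, not TF code):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ("b.txt", "a.txt", "c.txt"):
        open(os.path.join(d, name), "w").close()

    # glob alone makes no ordering promise (filesystem-dependent)...
    matches = glob.glob(os.path.join(d, "*.txt"))

    # ...so sort explicitly to get a reproducible, deterministic order.
    ordered = sorted(os.path.basename(p) for p in matches)
```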
tensorflowtensorflow
Operation has been marked as not fetchable
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04 / Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13 and 1.14
- Python version: 3
- CUDA/cuDNN version: 10.0 / 7
- GPU model and memory: NVIDIA GTX 960M, 4 GB

Describe the current behavior

I want to reference a list item inside tf.while_loop using a loop variable. Instead of printing the list items one after another, it says the following:

ValueError: Operation 'while/Identity' has been marked as not fetchable.

Describe the expected behavior

The expected behaviour is to print the list items.

Code to reproduce the issue

```python
import tensorflow as tf


class A:
    def __init__(self):
        self.lst = [1, 2, 3]
        self.sess = tf.Session()
        self.total_length = tf.constant(len(self.lst))

    def loop(self, i):
        pr = tf.print(i, 'current value:', self.lst[i.eval(session=self.sess)])
        with tf.control_dependencies([pr]):
            i = tf.add(i, 1)
        return i

    def cond(self, i):
        return tf.less(i, self.total_length)

    def run(self):
        i = tf.constant(0)
        while_op = tf.while_loop(self.cond, self.loop, [i])
        final_i = self.sess.run(while_op)


if __name__ == '__main__':
    obj = A()
    obj.run()
```

I have tried eager execution as well, but the same error pops up.

Other info / logs

This issue was originally a comment by me in this thread [issuecomment-502346438]. It was confirmed by mrry that this is indeed a bug. Thanks for all the help.
tensorflowtensorflow
Significantly reduced validation accuracy when switching from alpha0 to beta0/1
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): the code is strongly based on a stock example, but uses the Kaggle Aerial Cactus Identification dataset
- OS Platform and Distribution: Google Colab
- TensorFlow installed from binary via pip install tensorflow-gpu==2.0.0-beta0
- TensorFlow version: 2.0.0-beta0
- Python version: 3.6.7
- CUDA/cuDNN version: n/a (Google Colab as of 05/07/2019)
- GPU model and memory: n/a (Google Colab as of 05/07/2019)

Describe the current behavior

With beta0 and beta1, the training/validation history is as follows: [beta0/1 plot]

This is a very low validation accuracy. This can of course happen even with code based on an example; the issue is that it only appears with the beta0 and beta1 builds, not with alpha0 (see below), when using exactly the same code and training data.

Describe the expected behavior

With alpha0 and the same code, the history looks as follows: [alpha0 plot]

An upgrade of the TensorFlow version should not affect the resulting accuracy in such a manner.

Code to reproduce the issue

1. Open this Google Colab: [link]
2. Run the code with alpha0 (chosen version).
3. Make note of the training history plot.
4. Restart the runtime.
5. Run the code with beta0 or beta1 (chosen version).
6. Make note of the training history plot.
7. Observe that with no changes to the code, but using beta0 instead of alpha0, the validation accuracy goes down from ~95% to ~90% — with beta1 even to ~80%.

Other info / logs

My best guess is that changes have been made to the pretrained MobileNetV2 or to the Adam optimizer; otherwise the drastic loss in accuracy is hard to explain.
tensorflowtensorflow
Batch size affects prediction output in RNN layers (LSTM, GRU)
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): Docker
- TensorFlow version (use command below): 1.12.3 and 2.0.0-beta1
- Python version: 2.7.12 and 3.5.2
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior

The output during prediction from RNN layers (i.e. LSTM and GRU) is dependent on the batch size when stateful=False.

Describe the expected behavior

The predicted output tensor for a given input tensor should always produce the same result, regardless of the size of the batch. The sample can be found in: [link]

Code to reproduce the issue

```python
from __future__ import print_function

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import LSTM, GRU

if __name__ == '__main__':
    with tf.device('/cpu:0'):
        batches = list(range(1, 10))
        shape = (1000, 512)

        inputs = tf.keras.Input(shape=shape)
        rnn = GRU(shape[1] // 2, return_sequences=True)(inputs)
        model = tf.keras.Model(inputs=inputs, outputs=rnn)

        results = []
        for i, batch in enumerate(batches):
            x = tf.ones((batch,) + shape)
            y = model.predict_on_batch(x)
            results.append(y[0])

        for b, x in list(zip(batches, results))[1:]:
            print(b, np.max(np.abs(results[0] - x)))
        if not all(np.allclose(x, results[0]) for x in results[1:]):
            raise ValueError('Varying batch size produces different results')
```

Other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
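One plausible mechanism for small discrepancies like this (my speculation, not a confirmed diagnosis of the report above): with different batch sizes the backend may reduce sums in a different order, and floating-point addition is not associative, so tiny differences can appear even for identical inputs:

```python
# Floating-point addition is not associative: grouping changes the result.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
# left  == 0.6000000000000001
# right == 0.6
```

Whether the differences observed in the report are of this tiny magnitude or something larger (which would indicate a genuine bug) is exactly what the np.max(np.abs(...)) printout above measures.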
tensorflowtensorflow
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry, for example:

Description of issue (what needs changing):

plt.xlabel(class_names[train_labels[i]]) — train_labels[i] is a numpy.float64, but class_names requires an integer index.

Clear description: for example, why should someone use this method? How is it useful?

Correct links: is the link to the source code correct?

Parameters defined: are all parameters defined and formatted correctly?

Returns defined: are return values defined?

Raises listed and defined: are the errors defined? For example, raises:

Usage example: is there a usage example?

Request visuals, if applicable: are there currently visuals? If not, will they clarify the content?

Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflowtensorflow
Mixed precision mode in Keras: 'AutoCastVariable' object is not subscriptable
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 LTS
- TensorFlow installed from (source or binary): official TensorFlow Docker image tensorflow/tensorflow:1.14.0-gpu-py3
- TensorFlow version (use command below): 1.14.0
- Python version: 3.6.8

Describe the current behavior

The following error occurs in the code below when trying to enable mixed precision mode with keras.mixed_precision.experimental.set_policy('infer_float32_vars'):

```
  File "repr.py", line 16, in call
    return keras.backend.dot(inputs, w[:16, :16])
TypeError: 'AutoCastVariable' object is not subscriptable
```

It looks like ResourceVariable (which is returned by self.add_weight with mixed precision off) supports slice operations, while AutoCastVariable (returned by self.add_weight when mixed precision is on) doesn't. It's possible to work around this issue by converting the variable into a tensor, as shown on line 15, but it's not clear whether that is the straightforward way to perform slice ops on variables.

Describe the expected behavior

Should work without any errors.

Code to reproduce the issue

```python
import numpy as np
from tensorflow import keras

# Comment this line out to make the code complete successfully.
keras.mixed_precision.experimental.set_policy('infer_float32_vars')


class MyLayer(keras.layers.Layer):
    def build(self, input_shape):
        self.w = self.add_weight(shape=(16, 16))

    def call(self, inputs, **kwargs):
        w = self.w
        # Uncomment the workaround line below to make it work with mixed precision on.
        # w = keras.backend.cast(w, dtype=w.dtype)
        return keras.backend.dot(inputs, w[:16, :16])


inputs = keras.layers.Input(shape=(16,))
outputs = MyLayer()(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
model.predict(np.zeros(shape=(16, 16)))
```
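The asymmetry is easy to model in plain Python: a wrapper object that forwards most behavior but does not define __getitem__ loses subscripting, and converting back to the wrapped value restores it. A conceptual sketch of my own (not the real AutoCastVariable class):

```python
class Wrapped:
    """Stand-in for a variable wrapper that (like the report) lacks __getitem__."""

    def __init__(self, value):
        self._value = value  # e.g. a list standing in for the variable's tensor

    def read_value(self):
        # Stands in for "cast/convert the variable back to a plain tensor".
        return self._value


w = Wrapped([10, 20, 30])

try:
    w[0]  # TypeError: 'Wrapped' object is not subscriptable
    subscriptable = True
except TypeError:
    subscriptable = False

# The convert-first workaround does allow subscripting.
first = w.read_value()[0]
```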
tensorflowtensorflow
Poor documentation of tf.saved_model builder
Bug
URL(s) with the issue:

Description of issue (what needs changing): the documentation is unclear on several points, and especially hard to understand for people who are new to TensorFlow.

Clear description: what is the difference between using the SavedModel builder and other ways of exporting models and graphs? For someone who just wants to use a pre-trained TensorFlow model, it is very hard to get an overview over all the different types of formats, etc. For example, what is the difference to `tf.io.write_graph`? As far as I could find out, I need to use the SavedModel machinery because it is able to add tags (which `tf.io` is not able to do, and which are needed for serving). The docs are very unclear on the whole topic of how to use pre-trained models in a custom application — for example, where `tf.io.write_graph` is used. There are also no data types given for the parameters, so as someone new to TF I was absolutely unable to guess what should go there; the only descriptions are "foo signature" and "foo assets", and there are no examples showing how these parameters are properly used. I managed to use `saved_model.simple_save`, but it is deprecated, and from the documentation of the SavedModel builder I have no idea how I could replicate the same functionality as with `simple_save`.

Usage example: there is example code, but there is no complete example on how to create a frozen graph, or on how to export (and import again) a pre-trained model.

Submit a pull request? I am an absolute beginner to TF, so I cannot correct the docs in a meaningful way, sorry.
tensorflowtensorflow
Intel TensorFlow (MKL) throws "Could not initialize a memory descriptor" (CPU/GPU work fine)
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Red Hat Enterprise 7.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1
- Python version: 3.6
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: TensorFlow with MKL-DNN (intel-tensorflow) throws the exception "Could not initialize a memory descriptor" in file tensorflow/core/kernels/mkl_concat_op.cc:380.

Describe the expected behavior: no exception thrown.

Code to reproduce the issue:

```
pip install intel-tensorflow
tar xvzf testcase-2367.tar.gz   # attached
cd testcase-2367
python testcase-2367.py
```

Other info / logs: this inference network runs fine on tensorflow (CPU) and tensorflow-gpu; only the MKL-DNN TensorFlow fails. Yes, this is similar to issue #23145, but it is definitely not fixed in r1.13.1. It is also not fixed in r1.14, which I confirmed by compiling from source (although the line number changes).

testcase-2367.tar.gz
tensorflowtensorflow
Poor Feature/Example serialization performance
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10 Enterprise
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): unknown 2.0.0-beta1
- Python version: 3.7.2
- CUDA/cuDNN version: 10.0 / 7.5
- GPU model and memory: GeForce GTX Titan X, 12 GB

Describe the current behavior: the rate of serializing Examples is unreasonably slow. In particular, performance is slow for features that are sequences of values. I have a dataset with numerous fields, including two that are represented by short fixed-length 1-dimensional tensors. When excluding those two features, serialization happens in a slow but manageable amount of time; adding either of those two features causes it to take many times as long. Notably, when mapping a dataset to a function that performs serialization, increasing the number of threads does not significantly impact performance and the CPU remains mostly idle. I do see marked performance improvements and higher CPU usage when parallelizing other map functions, so it could be that there is some kind of global bottleneck for this operation.

Describe the expected behavior: records should be serialized in a reasonable amount of time.

Code to reproduce the issue: the following code serializes 10000 records in about 6s on a particular machine. Note that if I replace the map function with one that simply returns a constant, it takes less than 1s to complete, so the serialization is the problem.

```python
def make_example_1(*data_list):
    feature_dict = {
        'a': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[0]])),
        'b': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[1]])),
        'c': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[2]])),
        'd': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[3]])),
        'e': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[4]])),
        'f': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[5]])),
        'g': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[6]])),
        'h': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[7]])),
        'i': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[8]])),
        'j': tf.train.Feature(int64_list=tf.train.Int64List(value=[data_list[9]])),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
    return example.SerializeToString()

def make_example_1_wrapper(*inputs):
    data = inputs[0]
    return tf.py_function(
        make_example_1,
        [data['a'], data['b'], data['c'], data['d'], data['e'],
         data['f'], data['g'], data['h'], data['i'], data['j']],
        Tout=tf.string)

def feature_test_1(num_threads=None):
    source = {'a': tf.constant([0]), 'b': tf.constant([1]), 'c': tf.constant([2]),
              'd': tf.constant([3]), 'e': tf.constant([4]), 'f': tf.constant([5]),
              'g': tf.constant([6]), 'h': tf.constant([7]), 'i': tf.constant([8]),
              'j': tf.constant([9])}
    ds = tf.data.Dataset.from_tensors(source).repeat(10000)
    ds = ds.map(make_example_1_wrapper, num_threads)
    it = iter(ds)
    for x in it:
        pass
```

The following example uses about the same input data size as the previous one, but uses 2 features of length 5 instead of 10 features of length 1. The execution time increases to 17s, highlighting the problem with sequences as Int64Lists:

```python
def make_example_2(*data_list):
    feature_dict = {
        'a': tf.train.Feature(int64_list=tf.train.Int64List(value=data_list[0])),
        'b': tf.train.Feature(int64_list=tf.train.Int64List(value=data_list[1])),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
    return example.SerializeToString()

def make_example_2_wrapper(*inputs):
    data = inputs[0]
    return tf.py_function(make_example_2, [data['a'], data['b']], Tout=tf.string)

def feature_test_2(num_threads=None):
    source = {'a': tf.constant([0, 1, 2, 3, 4]),
              'b': tf.constant([5, 6, 7, 8, 9])}
    ds = tf.data.Dataset.from_tensors(source).repeat(10000)
    ds = ds.map(make_example_2_wrapper, num_threads)
    it = iter(ds)
    for x in it:
        pass
```

Other info / logs: a possibly related performance issue was mentioned by many users in #16933, although that issue was closed over a year ago due to inactivity.
tensorflowtensorflow
TF 1.14 Keras: probable bug in network._map_graph_network
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): both OS X and Ubuntu
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): 1.14.0 binary
- TensorFlow version (use command below): 1.14
- Python version: 3.6
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: I have a CNN model that worked well with TensorFlow 1.13.1 (building the network, training the network, and predicting from the network), but after the update to TensorFlow 1.14.0 I am getting the following exception:

```
ValueError: Graph disconnected: cannot obtain value for tensor
Tensor("dropout_1/cond/Merge:0", shape=(160, 160, 16), dtype=float32) at layer "concatenate".
The following previous layers were accessed without issue:
['net_input', 'initial_conv2d', 'batch_normalization', 'activation', 'conv2d', 'dropout']
```

Other info / logs: after debugging and going through the implementation of `network._map_graph_network`, it seems to me that there is either some bug or significantly changed behaviour between TensorFlow 1.13.1 and TensorFlow 1.14.0 in the graph-disconnection check. When I run the code with TensorFlow 1.13.1, the TensorBoard graph looks as follows:

[Screenshot 2019-07-03 at 16.36.37]

From this, I am not sure why the layer `dropout_1` (the conditional dropout) is checked in the `concatenate` layer; I assume that `dropout` should have been checked there.
tensorflowtensorflow
TF 2.0 Keras: tf.keras Concatenate — "Graph disconnected" when concatenating non-sequentially
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary (pip install)
- TensorFlow version (use command below): tensorflow-gpu 2.0.0-beta1
- Python version: 3.6
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: an error arises during Concatenate when I run the following code:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, Concatenate

inputs = keras.Input(shape=(256, 256, 3))
x = Conv2D(16, 3, padding='same', activation='relu')(inputs)
x_list = [x]
for i in range(3):
    x = Conv2D(16, 3, padding='same', activation='relu')(x)
    x_list.append(x)
x = Concatenate(3)(x_list)
model = keras.Model(inputs=inputs, outputs=x)
model.summary()
```

```
ValueError: Graph disconnected: cannot obtain value for tensor
Tensor("conv2d_31/Identity:0", shape=(None, 256, 256, 16), dtype=float32) at layer "concatenate_8".
The following previous layers were accessed without issue: ['input_9', 'conv2d_29', 'conv2d_30']
```

This issue does not occur in a TensorFlow 1.x environment, only TF 2.0.

Describe the expected behavior: the Concatenate function works properly when using a sequential model — that is, if I swap in `for i in range(1)` rather than `for i in range(3)` above, the code executes cleanly. However, the non-sequential, repeated concatenation in the loop leaves the "Graph disconnected" error. Furthermore, the error is also eliminated when using `tf.concat`, so the following code also executes cleanly:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, Concatenate

inputs = keras.Input(shape=(256, 256, 3))
x = Conv2D(16, 3, padding='same', activation='relu')(inputs)
x_list = [x]
for i in range(3):
    x = Conv2D(16, 3, padding='same', activation='relu')(x)
    x_list.append(x)
x = tf.concat(x_list, 3)
model = keras.Model(inputs=inputs, outputs=x)
model.summary()
```

Therefore I do have a working alternative, but there does appear to be an issue with the Keras Concatenate function.
tensorflowtensorflow
tf.keras datasets not batching correctly
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code: yes
- OS platform and distribution: Linux Mint 19
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
- Python version: 3.6

Describe the current behavior: using the return value of `tf.keras.datasets.cifar10` in `model.fit` seems to process the entire dataset in one batch, independent of the given batch size.

Describe the expected behavior: both versions should take the same amount of time.

Code to reproduce the issue:

```python
from tensorflow import keras

(x, y), _ = keras.datasets.cifar10.load_data()
x = (x / 255.0).reshape(x.shape[0], -1)
model = keras.Sequential([keras.layers.Dense(10)])

# Commenting out these lines results in way slower training:
x = x[0:10]
y = y[0:10]

model.compile('sgd', 'sparse_categorical_crossentropy')
model.fit(x, y, epochs=1, batch_size=1, steps_per_epoch=10)
```

Other info / logs: the example as written:

```
10/10 - 0s 5ms/step - loss: 7.4634
```

and with the marked lines commented out:

```
10/10 - 8s 786ms/step - loss: 5.7648
```
tensorflowtensorflow
Keras Reshape layer in functional API seems buggy
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Arch Linux, 5.1.15 kernel
- TensorFlow installed from (source or binary): Arch repository
- TensorFlow version (use command below): 1.14.0
- Python version: 3.7.3
- CUDA/cuDNN version: n/a (using CPU)
- GPU model and memory: n/a

Describe the current behavior: I am getting the following error: "Input to reshape is a tensor with 24576 values, but the requested shape has 1536".

Describe the expected behavior: this sample code should run. Editing out the skip connection makes the code run fine. Do note that I have successfully used the Reshape functionality just fine in straight feedforward models; for some reason this bug crops up when using Reshape in a skip connection, as shown in the sample code below.

Code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

def model():
    x = tf.keras.layers.Input((4, 4, 3))  # 4x4 images with 3 channels
    # Create an 8x8 image with 3 channels by upsampling with stride 2:
    y = tf.keras.layers.Conv2DTranspose(3, 4, 2, padding='same')(x)
    # Linear transformation of the input:
    linear = tf.keras.layers.Dense(8 * 8 * 3)(x)
    # Reshape the output of the linear transformation to the same shape
    # as the upsampled image:
    linear = tf.keras.layers.Reshape((8, 8, 3))(linear)
    # Add them together:
    y = tf.keras.layers.Add()([y, linear])
    return tf.keras.Model(inputs=x, outputs=y)

model = model()

# Model input data
array = np.random.randn(128, 4, 4, 3).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices(array).batch(8)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
dataset_output = model(next_element)

# Session
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Evaluate
print(sess.run(dataset_output))
print(sess.run(dataset_output))
```

Other info / logs (traceback condensed to the relevant frames):

```
InvalidArgumentError: Input to reshape is a tensor with 24576 values, but the requested shape has 1536
         [[node model_8/reshape_8/Reshape (defined at home/jaap/Dropbox/python_projects/code_preprocessing/testing.py:33)]]

Original stack trace for 'model_8/reshape_8/Reshape':
  ...
  File "/usr/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 467, in call
    (array_ops.shape(inputs)[0],) + self.target_shape)
  File "/usr/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 7715, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
```
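The numbers in the error message line up with a shape mismatch rather than a Reshape bug as such: Dense acts only on the last axis, so `Dense(8*8*3)` applied to a (4, 4, 3) tensor yields shape (4, 4, 192), not a flat vector of 192 values. A quick arithmetic check (batch size 8, as in the script; this is my own analysis, not a confirmed diagnosis from the thread):

```python
batch = 8
dense_units = 8 * 8 * 3              # 192 units requested from Dense
dense_output = 4 * 4 * dense_units   # Dense keeps the leading (4, 4) axes -> 3072 per sample
reshape_target = 8 * 8 * 3           # what Reshape((8, 8, 3)) expects per sample -> 192

print(batch * dense_output)    # 24576 -- the "tensor with 24576 values"
print(batch * reshape_target)  # 1536  -- the "requested shape has 1536"
```

If the intent was a flat linear projection of the whole 4x4x3 input, inserting a `Flatten()` between the Input and the Dense layer would make the element counts match; that is a guess at the intent, not a confirmed fix.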
tensorflowtensorflow
Memory leak in eager mode when creating Keras models in a loop
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Arch Linux
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: not tested
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.12.1-5259-ge703239 1.15.0-dev20190629
- Python version: 3.7.3
- Bazel version (if compiling from source): not compiled from source
- GCC/compiler version (if compiling from source): not compiled from source
- CUDA/cuDNN version: using CPU
- GPU model and memory: using CPU

Describe the current behavior: in eager execution, when creating a tf.keras Sequential model inside a loop and discarding it immediately, memory usage increases over time. The following code shows this by printing the used memory at each iteration:

```python
import psutil
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

for _ in range(100):
    tf.keras.Sequential([tf.keras.layers.Dense(3000, input_dim=3000)])
    print(psutil.virtual_memory().used / 2 ** 30)
```

Output:

```
1.0170440673828125
1.0506706237792969
1.0841865539550781
1.1179122924804688
4.285423278808594
4.318950653076172
4.35223388671875
```

The same result happens when using the functional API or the Model subclassing API. Adding `tf.keras.backend.clear_session()` in the loop solves the leak in all cases, like in graph mode. (To see this effect well, one should additionally use `gc.collect()` in the loop.)

Describe the expected behavior: while adding `tf.keras.backend.clear_session()` to the loop helps, this should not be necessary, because in eager execution there is no graph to clear, which according to the documentation seems to be the only thing this function does ("Destroys the current TF graph and creates a new one."). Therefore it is also surprising that this function helps at all during eager execution. The expected behavior is that there is no memory leak, even without `tf.keras.backend.clear_session()`.

Code to reproduce the issue: code is in the description above.

Other info / logs: nothing here.
tensorflowtensorflow
TensorFlow 2.0 Keras: multi_gpu_model only utilizes one GPU
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary (pip install)
- TensorFlow version (use command below): tensorflow-gpu 2.0.0-beta1
- Python version: 3.6
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: cudatoolkit 10.0.130, cuDNN 7.6.0
- GPU model and memory: 4x NVIDIA GeForce GTX 1080 Ti

Describe the current behavior: when using multi_gpu_model (i.e. `tf.keras.utils.multi_gpu_model`) in TensorFlow 2.0 to distribute a job across multiple GPUs (4), only one GPU appears to be used. That is, when monitoring GPU usage, only one GPU shows substantial dedicated GPU memory usage and GPU utilization.

Describe the expected behavior: each of the 4 GPUs should indicate that memory is being copied to the device and processed.

Code to reproduce the issue: while my issue arises with custom code using `model.fit_generator`, I was able to replicate the issue using `model.fit` with the code provided in the documentation:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import Xception
from tensorflow.keras.utils import multi_gpu_model
import numpy as np

num_samples = 1000
height = 224
width = 224
num_classes = 1000

# Instantiate the base model (or "template" model).
# We recommend doing this with under a CPU device scope,
# so that the model's weights are hosted on CPU memory.
# Otherwise they may end up hosted on a GPU, which would
# complicate weight sharing.
with tf.device('/cpu:0'):
    model = Xception(weights=None,
                     input_shape=(height, width, 3),
                     classes=num_classes)

# Replicate the model on 8 GPUs.
# This assumes that your machine has 8 available GPUs.
parallel_model = multi_gpu_model(model, gpus=4)  # gpus changed to 4
parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# Generate dummy data.
x = np.random.random((num_samples, height, width, 3))
y = np.random.random((num_samples, num_classes))

parallel_model.summary()

# This `fit` call will be distributed on 8 GPUs.
# Since the batch size is 256, each GPU will process 32 samples.
parallel_model.fit(x, y, epochs=20, batch_size=16)  # batch_size changed to 16
```
tensorflowtensorflow
TensorFlow does not work without tcmalloc in some cases (boosted trees)
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): n/a
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 16.04.4
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary, from pip
- TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0
- Python version: 3.5.2
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Exact commands to reproduce:
1. Launch a Jupyter notebook.
2. Download the tutorial from Google Colab.
3. Run the notebook.

Describe the problem: Hi, I am currently learning TensorFlow from the tutorials on the TF site, but during the exercises in the tutorial I got a strange error: TensorFlow was constantly crashing with the code, although it was provided by the official site. I tried both a Jupyter notebook and plain Python code. So I googled the problem a little bit and found that there is a workaround, i.e.:

```shell
sudo apt install libtcmalloc-minimal4
export LD_PRELOAD="/usr/lib/libtcmalloc_minimal.so.4"
```

and then the code worked flawlessly. However, this leaves other questions, and these are what I really wonder:
1. Does this problem only happen in some boundary cases, like mine? Perhaps I am missing some configuration — I would be glad to be told.
2. If not — that is, if tcmalloc is necessary for TensorFlow — considering that tcmalloc is not distributed with every default Linux (for example Ubuntu) installation, might there be a better way to avoid this situation?

Source code / logs: before applying the tcmalloc package, the Jupyter kernel died with this message: "Kernel Restarting: The kernel appears to have died. It will restart automatically." Here is a snippet of the log messages:

```
Error in `/home/sungjin/virtualenvs/boost/bin/python3': malloc(): memory corruption (fast): 0x00007fe0e804d6d0
Backtrace:
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7fe1f56097e5]
```
tensorflowtensorflow
The docs are unscrollable with JavaScript disabled
Bug
URL(s) with the issue:

Description of issue (what needs changing): the CSS rule `body.pending { overflow: hidden; }` should be removed.

Clear description: JS is considered harmful, so the docs should be usable without JS. The same problem is present in the Android and Fuchsia docs.
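One common progressive-enhancement alternative to removing the rule outright (a sketch of a possible fix, not the site's actual stylesheet — the `js` class convention is an assumption): only apply the scroll lock once a script has confirmed it is running, so non-JS visitors keep a scrollable page:

```css
/* Applied only after a script adds the `js` class to <html>;
   without JavaScript the body remains scrollable. */
html.js body.pending {
  overflow: hidden;
}
```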
tensorflowtensorflow
TFLite conversion of Conv1D with dilation_rate > 1
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Google Colab
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tensorflow 2.0.0-beta1
- Python version: Python 3

Describe the current behavior: after converting a Conv1D op to TensorFlow Lite, the interpreter cannot allocate tensors:

```
tensorflow/lite/kernels/space_to_batch_nd.cc:96 NumDimensions(op_context.input) != kInputDimensionNum (3 != 4)
Node number 0 (SPACE_TO_BATCH_ND) failed to prepare.
```

Describe the expected behavior: the TFLite model should be able to load and execute.

Code to reproduce the issue:

```python
# pip install -q tensorflow==2.0.0-beta1
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import *

def get_model():
    inputs = tf.keras.Input(shape=(10, 40))
    # No error when dilation_rate=1:
    layer = Conv1D(32, 3, dilation_rate=2, padding='same', use_bias=False)(inputs)
    layer = GlobalMaxPooling1D()(layer)
    outputs = Dense(2)(layer)
    model = Model(inputs=inputs, outputs=outputs)
    return model

model = get_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open('trained_model.tflite', 'wb').write(tflite_model)

interpreter = tf.lite.Interpreter(model_path='trained_model.tflite')
interpreter.allocate_tensors()
```

Other info / logs: the problem does not occur when dilation_rate=1.
tensorflowtensorflow
tf.keras predict_generator gets stuck with use_multiprocessing=True
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Debian 9.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14
- Python version: 3.5
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.0
- GPU model and memory: Tesla P100, 16280 MiB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior
When I use `model.predict_generator` with `use_multiprocessing=True`, the code gets stuck.

Describe the expected behavior
Ideally the code should not get stuck, and all cores should be used for prediction.

Code to reproduce the issue

```python
from tensorflow.keras.layers import Conv3D, MaxPool3D, Flatten, Dense
from tensorflow.keras.layers import Dropout, Input, BatchNormalization
from tensorflow.keras.layers import AvgPool3D
from tensorflow.keras import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import Sequence
from tensorflow.keras import callbacks
from tensorflow.keras.layers import concatenate, add
from tensorflow.keras.estimator import model_to_estimator
from tensorflow.keras.utils import multi_gpu_model
import tensorflow as tf
import numpy as np  # used by the generator below; missing from the original report

def build_model(input_shape=(128, 128, 50, 1), n_classes=3, multilabel=False):

    def spatial_reduction_block(input, block_name):
        filters = input.shape.as_list()[-1]
        with tf.name_scope(block_name):
            maxpool = MaxPool3D(pool_size=(2, 2, 2), strides=(2, 2, 2), padding='same')(input)
            conv_a_0 = Conv3D(filters=filters // 4, kernel_size=(3, 3, 3), strides=(2, 2, 2), padding='same', activation='relu')(input)
            conv_b_0 = Conv3D(filters=filters, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(input)
            conv_c_0 = Conv3D(filters=filters, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(input)
            conv_b_1 = Conv3D(filters=5 * filters // 16, kernel_size=(3, 3, 3), strides=(2, 2, 2), padding='same', activation='relu')(conv_b_0)
            conv_c_1 = Conv3D(filters=5 * filters // 16, kernel_size=(3, 3, 3), strides=(1, 1, 1), padding='same', activation='relu')(conv_c_0)
            conv_c_2 = Conv3D(filters=7 * filters // 16, kernel_size=(3, 3, 3), strides=(2, 2, 2), padding='same', activation='relu')(conv_c_1)
            concat_output = concatenate([maxpool, conv_a_0, conv_b_1, conv_c_2])
        return concat_output

    def residual_convolution_block(input, block_name):
        filters = input.shape.as_list()[-1]
        with tf.name_scope(block_name):
            conv_a_0 = Conv3D(filters=filters // 2, kernel_size=(3, 3, 3), strides=(1, 1, 1), padding='same', activation='relu')(input)
            conv_b_0 = Conv3D(filters=filters // 2, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(input)
            conv_c_0 = Conv3D(filters=filters // 2, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(input)
            conv_b_1 = Conv3D(filters=filters // 2, kernel_size=(3, 3, 3), strides=(1, 1, 1), padding='same', activation='relu')(conv_b_0)
            conv_c_1 = Conv3D(filters=filters // 2, kernel_size=(3, 3, 3), strides=(1, 1, 1), padding='same', activation='relu')(conv_c_0)
            conv_c_2 = Conv3D(filters=filters // 2, kernel_size=(3, 3, 3), strides=(1, 1, 1), padding='same', activation='relu')(conv_c_1)
            concat_output = concatenate([conv_a_0, conv_b_1, conv_c_2])
            conv_d_0 = Conv3D(filters=filters, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(concat_output)
            add_1 = add([conv_d_0, input])
        return add_1

    if not multilabel:
        activation_fn = 'softmax'
    else:
        activation_fn = 'sigmoid'

    input = Input(shape=input_shape, name='input')
    conv_1 = Conv3D(filters=64, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(input)
    spatial_reduction_block_1 = spatial_reduction_block(conv_1, 'spatial_reduction_block_1')
    residual_convolution_block_1 = residual_convolution_block(spatial_reduction_block_1, 'residual_convolution_block_1')
    spatial_reduction_block_2 = spatial_reduction_block(residual_convolution_block_1, 'spatial_reduction_block_2')
    residual_convolution_block_2 = residual_convolution_block(spatial_reduction_block_2, 'residual_convolution_block_2')
    conv_2 = Conv3D(filters=512, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(residual_convolution_block_2)
    maxpool_1 = MaxPool3D(pool_size=(2, 2, 2), strides=(2, 2, 2), padding='valid')(conv_2)
    conv_3 = Conv3D(filters=1024, kernel_size=(1, 1, 1), strides=(1, 1, 1), padding='same', activation='relu')(maxpool_1)
    maxpool_2 = MaxPool3D(pool_size=(2, 2, 2), strides=(2, 2, 2), padding='valid')(conv_3)
    flatten = Flatten()(maxpool_2)
    dropout_1 = Dropout(rate=0.2)(flatten)
    dense_1 = Dense(512, activation='sigmoid')(dropout_1)
    dropout_2 = Dropout(rate=0.2)(dense_1)
    output = Dense(n_classes, activation=activation_fn, name='output')(dropout_2)
    model = Model(inputs=input, outputs=output)
    return model

model = build_model((128, 128, 50, 1), 3, False)

class MyGenerator(Sequence):
    def __init__(self, x_set, y_set, batch_size, augment=False):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.augment = augment

    def __len__(self):
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        x = [read_image(filename, self.augment) for filename in batch_x]  # reads a numpy array named `filename`
        y = [read_label(label) for label in batch_y]  # reads the label
        return np.array(x), np.array(y)

test_generator = MyGenerator(x_test, y_test, eval_batch_size, augment=False)
pred = model.predict_generator(test_generator, verbose=1, use_multiprocessing=True, steps=eval_steps)
```

Other info / logs
N/A
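Setting the multiprocessing hang aside, the generator's batch slicing above is plain index arithmetic. A minimal pure-Python sketch of the same `__len__`/`__getitem__` logic (no TensorFlow; the toy data here is hypothetical, not from the report):

```python
import math

class ToyBatcher:
    """Mimics the index math of the Keras Sequence above, without TF."""

    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches; the last batch may be smaller than batch_size.
        return int(math.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        lo, hi = idx * self.batch_size, (idx + 1) * self.batch_size
        return self.x[lo:hi], self.y[lo:hi]

batcher = ToyBatcher(list(range(10)), list(range(10, 20)), batch_size=4)
# 10 samples with batch size 4 -> 3 batches of sizes 4, 4, 2
```

Because `Sequence` subclasses are indexable like this, Keras can shard batches across workers; the report is that with `use_multiprocessing=True` those workers never make progress.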
tensorflowtensorflow
Bug: tf.layers.dropout does not work
Bug
Code to reproduce the issue:

```python
sess = tf.Session()
x = tf.ones((4, 4))
y = tf.layers.dropout(x, 0.5)
sess.run(y)
```

This version works well:

```python
sess = tf.Session()
x = tf.ones((4, 4))
y = tf.nn.dropout(x, 0.5)
sess.run(y)
```

TensorFlow version: 1.13.3
OS: Ubuntu 18.04
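For context, `tf.layers.dropout` has a `training` argument that defaults to `False`, in which mode it returns the input unchanged, whereas `tf.nn.dropout` always drops units; that is consistent with what the two snippets show. The inverted-dropout behavior itself can be sketched in plain NumPy (an illustration of the technique, not TensorFlow's implementation):

```python
import numpy as np

def inverted_dropout(x, rate, training, rng):
    """Zero each unit with probability `rate`; scale survivors by 1/(1-rate).

    In inference mode (training=False) the input passes through unchanged,
    which is what tf.layers.dropout does when `training` is left at its
    default of False -- hence the "not working" observation above.
    """
    if not training:
        return x
    keep_prob = 1.0 - rate
    mask = rng.random_sample(x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.RandomState(0)
x = np.ones((4, 4))
y_eval = inverted_dropout(x, 0.5, training=False, rng=rng)   # identity
y_train = inverted_dropout(x, 0.5, training=True, rng=rng)   # zeros and 2.0s
```

With rate 0.5 the surviving ones are scaled to 2.0, so the expected value per unit stays 1.0.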
tensorflowtensorflow
AttributeError: module 'tensorflow' has no attribute 'matrix_band_part'
Bug
TensorFlow 2.0.0-alpha0:

```
AttributeError: module 'tensorflow' has no attribute 'matrix_band_part'
```
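In TF 2.x the op lives in the `tf.linalg` namespace, so `tf.linalg.band_part` is the replacement for the removed top-level `tf.matrix_band_part`. What the op computes can be sketched in NumPy (illustrative only, not TensorFlow's implementation):

```python
import numpy as np

def band_part(x, num_lower, num_upper):
    """Keep the in-band entries of a matrix and zero the rest.

    An entry (i, j) is in band when (num_lower < 0 or i - j <= num_lower)
    and (num_upper < 0 or j - i <= num_upper) -- the rule documented for
    tf.linalg.band_part; a negative bound means "keep the whole triangle".
    """
    m, n = x.shape
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    in_band = ((num_lower < 0) | (i - j <= num_lower)) & \
              ((num_upper < 0) | (j - i <= num_upper))
    return np.where(in_band, x, 0)

x = np.ones((3, 3))
lower_triangular = band_part(x, -1, 0)   # same effect as np.tril(x)
```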
tensorflowtensorflow
TF 2.0: minor changes to the "Load images with tf.data" tutorial; fix image links
Bug
URL(s) with the issue / description of main issue: please take a look at the .jpg/.jpeg links in the "Load images with tf.data" tutorial, in the "Inspect the images" section. When you inspect the site elements, they appear to be missing a letter "g" (as in .jpeg or .jpg). Other minor suggestions: rearrangement of words to fix some grammar; addition/subtraction of commas, colons and spaces in the English and Python for consistency. Will submit a PR right away for your review. @MarkDaoust @lamberta
tensorflowtensorflow
Need better documentation for BestExporter
Bug
In the documentation for `BestExporter`, the examples mentioned do not specify how to write a `compare_fn`; by default it takes the loss. How would one use it with a custom metric that is evaluated in the `model_fn`?
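A `compare_fn` receives the two eval-metric dicts and returns `True` when the current run should replace the best one. For a custom metric where larger is better, something like the following should work (a sketch: `my_metric` is a hypothetical metric name, and only the `(best_eval_result, current_eval_result)` signature is pinned down by the docs):

```python
def larger_is_better(best_eval_result, current_eval_result, metric_key='my_metric'):
    """Return True if the current eval beat the best one on `metric_key`.

    Mirrors the shape of BestExporter's default loss comparator, but for a
    metric that should be maximized instead of minimized. `my_metric` is an
    assumed name -- use whatever key your model_fn puts in eval_metric_ops.
    """
    if not best_eval_result or metric_key not in best_eval_result:
        raise ValueError('best_eval_result is missing metric %s' % metric_key)
    if not current_eval_result or metric_key not in current_eval_result:
        raise ValueError('current_eval_result is missing metric %s' % metric_key)
    return current_eval_result[metric_key] > best_eval_result[metric_key]

# Would be wired up roughly as (untested sketch):
# exporter = tf.estimator.BestExporter(
#     serving_input_receiver_fn=serving_input_fn,
#     compare_fn=larger_is_better,
#     exports_to_keep=5)
```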
tensorflowtensorflow
tf-slim trained MobilenetV2 model gives poor accuracy with is_training=False when saving and restoring the checkpoint
Bug
The steps I have followed:

1. I have trained the MobilenetV2 model using the slim framework with the CIFAR-10 dataset.

Run training:

```shell
python train_image_classifier.py \
  --train_dir=${TRAIN_DIR} \
  --dataset_name=cifar10 \
  --dataset_split_name=train \
  --dataset_dir=${DATASET_DIR} \
  --model_name=mobilenet_v2 \
  --preprocessing_name=mobilenet_v2 \
  --max_number_of_steps=100000 \
  --batch_size=48 \
  --save_interval_secs=120 \
  --save_summaries_secs=120 \
  --log_every_n_steps=100 \
  --optimizer=adagrad \
  --learning_rate=0.1 \
  --learning_rate_decay_factor=0.1 \
  --num_epochs_per_decay=200 \
  --weight_decay=0.004 \
  --moving_average_decay=0.9999
```

2. After training I could get a proper accuracy with the evaluation options below.

Run evaluation:

```shell
python eval_image_classifier.py \
  --checkpoint_path=${TRAIN_DIR} \
  --eval_dir=${TRAIN_DIR} \
  --dataset_name=cifar10 \
  --dataset_split_name=test \
  --dataset_dir=${DATASET_DIR} \
  --model_name=mobilenet_v2
```

3. I have saved the checkpoint with is_training=False from eval_image_classifier.py itself, converted the checkpoint to a .pb file, and then measured the evaluation accuracy. The accuracy is very low.

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version (use command below): 1.13
- Python version: 3.7.3
- Bazel version (if compiling from source): 24.1
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior
The model accuracy changes after saving the evaluation graph with is_training=False.

Describe the expected behavior
The model should give the same accuracy after saving with is_training=False.

Code to reproduce the issue
Training with the slim framework.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
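One common cause of exactly this symptom (hedged: not verified against this particular run) is batch normalization: with `is_training=False` the network switches from batch statistics to the stored moving averages, so if `moving_mean`/`moving_variance` were stale or not restored correctly, accuracy collapses. What the inference path computes, in NumPy:

```python
import numpy as np

def batchnorm_inference(x, gamma, beta, moving_mean, moving_var, eps=1e-3):
    """Batch norm at inference time.

    Normalizes with the *stored* moving statistics, not the statistics of
    the current batch. If the stored values are stale (e.g. never updated
    during training, or lost when the checkpoint was re-saved), the output
    distribution shifts and downstream accuracy drops.
    """
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

x = np.array([2.0, 4.0, 6.0])
# Correct statistics (mean 4, var ~4): activations come out centered.
good = batchnorm_inference(x, gamma=1.0, beta=0.0, moving_mean=4.0, moving_var=4.0 - 1e-3)
# Stale default statistics (mean 0, var ~1): activations stay shifted.
stale = batchnorm_inference(x, gamma=1.0, beta=0.0, moving_mean=0.0, moving_var=1.0 - 1e-3)
```

Checking whether the moving-average variables in the converted .pb match those in the checkpoint would be a reasonable first debugging step.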
tensorflowtensorflow
tflite_convert h5 to tflite: "Init node conv1/kernel/Assign doesn't exist in graph"
Bug
System information
- OS platform and distribution: Windows 7
- TensorFlow installed from: tensorflow-gpu 1.14.0
- Python version: 3.7.3
- CUDA/cuDNN version: 10.1 / 7.6.1
- GPU model and memory: GTX 1080 Ti, 11 GB

Describe the current behavior
After transfer learning on the tensorflow.keras MobileNetV2 and saving it as .h5:

```shell
tflite_convert --keras_model_file=tl_mobilenetv2.h5 --output_file=tl_mobilenetv2.tflite --allow_custom_ops
```

Describe the issue
The .tflite is generated, but an error message shows:

```
E tensorflow/core/grappler/grappler_item_builder.cc:637 Init node conv1/kernel/Assign doesn't exist in graph
```

Other info / logs

```
Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10143 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-07-01 18:16:28.142672: E tensorflow/core/grappler/grappler_item_builder.cc:637] Init node conv1/kernel/Assign doesn't exist in graph
```
tensorflowtensorflow
TF 2.0: "Skipping optimization due to error while loading function libraries"
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.0.0-beta1
- Python version: 3.6.0

Describe the current behavior
I'm trying to reproduce the results from the tutorial example about text classification with an RNN provided by TensorFlow. However, this warning message constantly appears: "Skipping optimization due to error while loading function libraries: Invalid argument". I tried other optimizers and LSTM or GRU architectures, but nothing changed.

Code to reproduce the issue

```python
from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf
print(tf.__version__)

import tensorflow_datasets as tfds

# plot results
import matplotlib.pyplot as plt

def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_' + string])
    plt.xlabel('Epochs')
    plt.ylabel(string)
    plt.legend([string, 'val_' + string])
    plt.show()

# see available datasets
print(tfds.list_builders())

dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
tokenizer = info.features['text'].encoder
print('Vocabulary size: {}'.format(tokenizer.vocab_size))

sample_string = 'TensorFlow is cool'
tokenized_string = tokenizer.encode(sample_string)
print('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer.decode(tokenized_string)
print('The original string: {}'.format(original_string))
assert original_string == sample_string
for ts in tokenized_string:
    print('{} ----> {}'.format(ts, tokenizer.decode([ts])))

BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)

# build the model
emb_size = 64
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])

history = model.fit(train_dataset, epochs=1, validation_data=test_dataset)

test_loss, test_acc = model.evaluate(test_dataset)
print('Test loss: {}'.format(test_loss))
print('Test accuracy: {}'.format(test_acc))
```

It seems that many other users are experiencing similar issues on TF 2.0 beta.
tensorflowtensorflow
Colab notebook crashes due to RAM overuse in the "Explore overfitting and underfitting" tutorial
Bug
URL(s) with the issue
Please provide a link to the documentation entry (for example, the notebook section at #scrollto=lqg3mxf5xsjr).

Description of issue (what needs changing)
The notebook crashes on the code block that trains the baseline model, reporting that all RAM has been consumed.

Clear description
Users should be able to complete the entire notebook without hitting resource limits. Maybe the model is not defined correctly. This is the summary:

```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 16)                160016
_________________________________________________________________
dense_1 (Dense)              (None, 16)                272
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 17
=================================================================
Total params: 160,305
Trainable params: 160,305
Non-trainable params: 0
```
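The parameter counts in that summary are consistent with a small `Dense(16)` stack on 10000-dimensional multi-hot inputs (a dense layer has `in_dim * units + units` parameters), which suggests the model itself is tiny and the RAM blow-up comes from elsewhere in the notebook. A quick arithmetic check:

```python
def dense_params(in_dim, units):
    """Parameter count of a fully connected layer: weights plus biases."""
    return in_dim * units + units

p1 = dense_params(10000, 16)   # matches the 160016 in the summary
p2 = dense_params(16, 16)      # matches 272
p3 = dense_params(16, 1)       # matches 17
total = p1 + p2 + p3           # matches 160,305
```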
tensorflowtensorflow
TensorFlow needs a long startup time
Bug
When I run the example, it takes around 5 minutes at "Adding visible gpu devices: 0" before TensorFlow begins to compute. As far as I know, a lot of people have had this problem since a year ago, but there seems to be no effective way to solve it. The only way I know of is to compile from source, which is not a good option for a beginner. My environment is Win10, tensorflow-gpu 2.0-beta1, CUDA 10.0, cuDNN 7.6, Python 3.6, with a GTX 850M. When will the problem be fixed?
tensorflowtensorflow
Problems passing a tensor attr to a custom op in eager execution mode
Bug
System information
- Windows 10
- TensorFlow installed from: binary
- TensorFlow version: 1.14
- Python version: 3.7
- CUDA/cuDNN version: 10

I am defining a new custom op in C++ which takes in a single attribute of type tensor and a single input tensor variable. A stripped version of the op code is below:

```cpp
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

REGISTER_OP("DoStuff")
    .Attr("attr: tensor = { dtype: DT_FLOAT }")
    .Input("in: float")
    .Output("out: float");

class DoStuffOp : public OpKernel {
 public:
  explicit DoStuffOp(OpKernelConstruction* context) : OpKernel(context) {
    OP_REQUIRES_OK(context, context->GetAttr("attr", &attr_));
    // ...
  }

  void Compute(OpKernelContext* context) override {
    // ...
  }

 private:
  Tensor attr_;
};

REGISTER_KERNEL_BUILDER(Name("DoStuff").Device(DEVICE_CPU), DoStuffOp);
```

I can compile the op into a .so file fine. Now the following code runs:

```python
import tensorflow as tf
import numpy as np  # missing from the original snippet

dostufflib = tf.load_op_library('build/do_stuff.so')
sess = tf.InteractiveSession()
sample_in = np.random.rand(3, 3)
sample_in_t = tf.convert_to_tensor(sample_in, dtype=np.float32)
sample_attr = np.zeros((3, 3), dtype=np.float32)
sample_attr_t = tf.contrib.util.make_tensor_proto(sample_attr)
y = dostufflib.do_stuff(sample_in_t, attr=sample_attr_t)
```

However, if I try to use eager execution mode, i.e.:

```python
import tensorflow as tf
import numpy as np

tf.compat.v1.enable_eager_execution()
dostufflib = tf.load_op_library('build/do_stuff.so')
sample_in = np.random.rand(3, 3)
sample_in_t = tf.convert_to_tensor(sample_in, dtype=np.float32)
sample_attr = np.zeros((3, 3), dtype=np.float32)
sample_attr_t = tf.contrib.util.make_tensor_proto(sample_attr)
y = dostufflib.do_stuff(sample_in_t, attr=sample_attr_t)
```

I get the following error:

```
tensorflow.python.framework.errors_impl.UnimplementedError: Attr sample_locs has unhandled type 6
```
tensorflowtensorflow
How to calculate gradients for a meta-learning loop
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf-nightly 2.0.0.dev20190628
- Python version: 3.6.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.0
- GPU model and memory:

Describe the current behavior
I want to compute the gradients of a loss function with respect to a model in order to do meta-learning, and use those gradients to define the same model with new weights. I get None as the value when I use Input layers.

Describe the expected behavior
I was expecting it to work the same whether I define my inputs as Input layers or as tf.random.uniform tensors.

Code to reproduce the issue

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Flatten, Dense, Input

with tf.GradientTape() as t:
    model = tf.keras.models.Sequential([
        Conv2D(filters=64, kernel_size=3, activation='relu'),
        Conv2D(filters=64, kernel_size=3, activation='relu'),
        Conv2D(filters=64, kernel_size=3, activation='relu'),
        Conv2D(filters=64, kernel_size=3, activation='relu'),
        Flatten(),
        Dense(10, activation='softmax')
    ])
    train_inputs = Input(shape=(28, 28, 1))
    train_labels = Input(shape=(10,))
    # train_inputs = tf.random.uniform(shape=(1, 28, 28, 1), dtype=tf.float32)
    # train_labels = tf.random.uniform(shape=(1, 10), dtype=tf.float32)
    train_outputs = model(train_inputs)
    loss = tf.losses.categorical_crossentropy(train_labels, train_outputs)
d_weights = t.gradient(loss, model.trainable_weights)
print(d_weights)
```

Other info / logs
If I uncomment those two commented lines, d_weights is calculated and printed; when they are commented out, I get this error:

```
2019-06-28 14:27:30.352043: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-28 14:27:30.386438: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2208000000 Hz
2019-06-28 14:27:30.387394: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2f99b30 executing computations on platform Host. Devices:
2019-06-28 14:27:30.387417: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "home/siavash/programming/meta_learning_framework/model.py", line 27, in <module>
    d_weights = t.gradient(loss, model.trainable_weights)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py", line 1001, in gradient
    unconnected_gradients=unconnected_gradients)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/imperative_grad.py", line 76, in imperative_grad
    compat.as_str(unconnected_gradients.value))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py", line 666, in _ones
    return _fast_fill(value, shape, dtype)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py", line 621, in _fast_fill
    constant_op.constant(shape, dtype=dtypes.int32)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/constant_op.py", line 246, in constant
    allow_broadcast=True)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/constant_op.py", line 254, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/constant_op.py", line 115, in convert_to_eager_tensor
    return ops.EagerTensor(value, handle, device, dtype)
ValueError: TypeError: object of type 'Tensor' has no len()
```
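What the poster is after — taking the gradient of a loss with respect to the weights, then defining new weights as `theta - alpha * grad` — can be illustrated with an analytic gradient on a toy linear model in plain NumPy (a sketch of the MAML-style inner step, not a fix for the TF code above; all names here are hypothetical):

```python
import numpy as np

def loss_and_grad(w, x, y):
    """Squared error of a linear model y_hat = x @ w, with its analytic
    gradient d(loss)/dw = 2 * x.T @ (x @ w - y)."""
    err = x @ w - y
    return float(err @ err), 2.0 * x.T @ err

def inner_step(w, x, y, lr=0.1):
    """One MAML-style inner update: new weights defined *from* the gradient,
    so a later outer loss can differentiate through them."""
    _, g = loss_and_grad(w, x, y)
    return w - lr * g

x = np.array([[1.0], [2.0]])
y = np.array([2.0, 4.0])          # generated by the true weight w = 2
w0 = np.zeros(1)
w1 = inner_step(w0, x, y)         # adapted weights
loss0, _ = loss_and_grad(w0, x, y)
loss1, _ = loss_and_grad(w1, x, y)
```

In TensorFlow the same step would use `tf.GradientTape` on concrete tensors (as the working, uncommented-random-tensor variant of the reporter's code does) rather than on symbolic `Input` placeholders.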
tensorflowtensorflow
TensorFlow Lite conversion misshapes bias vector of FullyConnected
Bug
System information
- Have I written custom code: tflite converter code is straight from an example script
- OS platform and distribution: macOS 10.14.5
- TensorFlow installed from: pip install tf-nightly and pip install tensorflow
- TensorFlow version: tested on v1.12.1-5178-gbafa0371c8 (1.15.0-dev20190628) and v1.13.0-rc2-5-g6612da8951 (1.13.1)
- Python version: 3.6.5

Describe the current behavior
The tflite converter incorrectly shapes the bias for the FullyConnected operator. Specifically, in my test case (see the attached model below), in the original frozen-graph model MatMul_6 takes a product of a 32x12 matrix and a 12x1 vector, then Add_7 adds a 32x1 vector to it as a bias. The converted tflite model puts these two operations together into a FullyConnected op, and somehow its bias (MatMul_6_bias) is incorrectly shaped as a single-element vector. Consequently, the inference results of this tflite model are incorrect.

Describe the expected behavior
The bias vector should be shaped as it is in the original frozen-graph model.

Code to reproduce the issue
tflite_bias_shape_issue.zip — this zip file contains debug.pb (TF frozen-graph model) and debug.tflite (tflite converted model). The conversion code is taken straight from the documentation:

```python
import tensorflow as tf

graph_def_file = 'debug.pb'
input_arrays = ['input']
output_arrays = ['Reshape_1']

converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open('debug.tflite', 'wb').write(tflite_model)
```

The model has extra operators in the beginning (Sub, Div, Gather) just because I did not have time to rebuild the bare-minimal test case, but I think it is already simple enough.
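The effect of the misshaped bias is easy to see in NumPy: a FullyConnected layer computes y = W @ x + b, and when b is collapsed to a single element it broadcasts one value across every output instead of adding a per-output bias (illustrative sketch, using a small 4x3 W rather than the 32x12 one in the attached model):

```python
import numpy as np

rng = np.random.RandomState(0)
W = rng.rand(4, 3)                     # stands in for the 32x12 weight matrix
x = rng.rand(3)                        # stands in for the 12x1 input vector
b = np.array([1.0, -1.0, 0.5, 2.0])    # correct per-output bias, shape (4,)

y_correct = W @ x + b       # what the frozen graph computes
y_broken = W @ x + b[:1]    # a 1-element bias: broadcasts b[0] to every output
```

Both results have the output shape, so the bug would not surface as a shape error at inference time — only as wrong numbers, matching the report.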