repository | issue title | labels | body
|---|---|---|---|
tensorflow/tensorflow | Value of sqrt(2) is calculated incorrectly and inconsistently | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 18.04.1 LTS
- Mobile device: n/a
- TensorFlow installed from (source or binary): not known
- TensorFlow version: 1.14.0
- Python version: 3.6.8
- Bazel version / GCC version (if compiling from source): not known
- CUDA/cuDNN version: 10.0 / 7.6.0
- GPU model and memory: Tesla K80, 12 GB

**Describe the current behavior**
I just computed sqrt(2) 100 times in Google Colab. The values are computed inconsistently, and slightly incorrectly, within a single tensor.

**Describe the expected behavior**
sqrt(2) should be computed the same way every time.

**Code to reproduce the issue**
```python
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf

tf.enable_eager_execution()

sqrt2s = tf.sqrt(tf.constant([2.0] * 100, dtype=tf.float32))
print(sqrt2s)
print(sqrt2s[0].numpy())
print(sqrt2s[1].numpy())
print(sqrt2s[0].numpy() == sqrt2s[1].numpy())
```

The result (96 entries of 1.4142134 followed by four entries of 1.4142135):
```
tf.Tensor([1.4142134 1.4142134 1.4142134 ... 1.4142135 1.4142135 1.4142135 1.4142135], shape=(100,), dtype=float32)
1.4142134
1.4142135
False
```

**Other info / logs** The notebook is here.
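For reference, the correctly rounded float32 value of sqrt(2) can be checked with plain NumPy; the value reported for most tensor entries above is one unit-in-the-last-place below it. This is an illustrative sketch, not part of the original report:

```python
import math
import numpy as np

# Correctly rounded float32 sqrt(2): round the float64 result to float32.
correct = np.float32(math.sqrt(2.0))
print(correct)  # 1.4142135

# The value reported for most tensor entries in the issue is a distinct,
# adjacent float32 value, one ulp below the correctly rounded one.
reported = np.float32(1.4142134)
print(correct == reported)  # False
print(float(correct) - float(reported) > 0)  # True
```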
tensorflow/tensorflow | Bug in TFLite converter when converting a tf.gather operation (r1.14) | Bug |

**System information**
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): built from r1.14
- TensorFlow version (or github SHA if from source): 1.14

I may have found a bug in the TFLite converter and I want to report it. I am not sure whether you are aware of it or not, but I will explain it to be safe. The following conversion code gives an error while making the TFLite model; it fails a dimension CHECK during the conversion.

```python
import tensorflow as tf
import numpy as np

export_dir = './test_saved_model'
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

weights = np.random.rand(32, 512)
x = tf.placeholder(tf.float32, shape=(None, 512))
w = tf.constant(weights, dtype=tf.float32, name='kernel')
z = x * w
gdim = tf.constant(3, dtype=tf.int32)
gaxis = tf.constant(0, dtype=tf.int32)
wg = tf.get_variable('kernel', [32, 512])
y = tf.gather(wg, gdim, axis=gaxis)
res = y

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder.add_meta_graph_and_variables(sess, ['test'])
    builder.save(as_text=True)

    converter = tf.lite.TFLiteConverter.from_session(sess, [x], [res])
    # Options for the converter
    converter.dump_graphviz_dir = './graph_dir'  # output graph visualization (.dot) files
    converter.dump_graphviz_video = True
    tflite_model = converter.convert()
```

I found that it is caused by `resolve_constant_gather.cc#L42`. It seems that the `Gather` function assumes that the coords array is always a 1-D array; however, `tf.gather` may also accept a 0-D array (a scalar value), so the check fails when the coords array is a scalar. My solution is the following modification of the `Gather` function. It works for me when I personally build and install from the modified source. I hope someone can check this and fix the issue if necessary.

```cpp
// Gathers data from axis 0.
template <ArrayDataType Type>
inline void Gather(const Array& input_array, const Array& coords_array,
                   Array* output_array) {
  const Shape& input_shape = input_array.shape();
  const std::vector<DataType<Type>>& input_data =
      input_array.GetBuffer<Type>().data;
  const Shape& coords_shape = coords_array.shape();
  const std::vector<int32>& coords_data =
      coords_array.GetBuffer<ArrayDataType::kInt32>().data;
  const Shape& output_shape = output_array->shape();
  std::vector<DataType<Type>>& output_data =
      output_array->GetMutableBuffer<Type>().data;
  output_data.resize(RequiredBufferSizeForShape(output_shape));

  int stride = 1;
  for (int i = 1; i < input_shape.dimensions_count(); ++i) {
    stride *= input_shape.dims(i);
  }

  if (coords_shape.dimensions_count() == 0) {
    // When the coords array is 0-D (a scalar constant).
    CHECK_EQ(stride, output_data.size());
    DCHECK_GE(coords_data[0], 0);
    DCHECK_LT(coords_data[0], input_shape.dims(0));
    DataType<Type>* out = output_data.data();
    const DataType<Type>* in = input_data.data() + coords_data[0] * stride;
    memcpy(out, in, sizeof(DataType<Type>) * stride);
  } else {
    // When the coords array is a 1-D array.
    CHECK_EQ(coords_shape.dims(0), output_array->shape().dims(0));
    // Let's make sure we have enough space for all the elements in the
    // memcpy below, which writes `stride` elements starting at i * stride.
    CHECK_EQ(stride * coords_shape.dims(0), output_data.size());
    for (int i = 0; i < coords_shape.dims(0); ++i) {
      DCHECK_GE(coords_data[i], 0);
      DCHECK_LT(coords_data[i], input_shape.dims(0));
      DataType<Type>* out = output_data.data() + i * stride;
      const DataType<Type>* in = input_data.data() + coords_data[i] * stride;
      memcpy(out, in, sizeof(DataType<Type>) * stride);
    }
  }
}
```
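The rank behavior the patch handles can be illustrated with NumPy's `take`, which mirrors the axis-0 gather semantics: a scalar index drops the gathered axis, while a 1-D index array keeps it. An illustrative sketch, not the converter code:

```python
import numpy as np

w = np.arange(32 * 4).reshape(32, 4)

# A scalar index drops the gathered axis: the result has one dimension less.
scalar_out = np.take(w, 3, axis=0)
print(scalar_out.shape)  # (4,)

# A 1-D index array keeps the axis: the result stays 2-D.
vector_out = np.take(w, [3], axis=0)
print(vector_out.shape)  # (1, 4)

# The gathered data is the same either way.
print(np.array_equal(scalar_out, vector_out[0]))  # True
```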
tensorflow/tensorflow | The precision difference between TensorFlow and NumPy for matrix multiplication | Bug |

Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.13.1
- Python version: 3.6.0
- Bazel / GCC / CUDA / cuDNN / GPU model and memory: n/a

**Describe the current behavior**
TensorFlow matrix multiplication on the CPU does not reproduce the same result as NumPy matrix multiplication for float32. Furthermore, `x · y` and `(yᵀ · xᵀ)ᵀ` produce different results. Is this normal behavior? If it is, which version of the matrix product should be taken as the correct output?

**Describe the expected behavior**
Theoretically, `x · y` and `(yᵀ · xᵀ)ᵀ` should evaluate to the same value, and their value should be the same as the NumPy version.

**Code to reproduce the issue** (a minimal reproducible test case)
```python
import numpy as np
import tensorflow as tf

x = np.arange(150, dtype=np.float32).reshape(15, 10)
y = np.arange(200, dtype=np.float32).reshape(20, 10)

x_tf = tf.Variable(x)
xt_tf = tf.Variable(x.T)
y_tf = tf.Variable(y)
yt_tf = tf.Variable(y.T)

yxt1_tf = tf.matmul(y_tf, xt_tf)
yxt2_tf = tf.transpose(tf.matmul(x_tf, yt_tf))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    yxt1_tf_val, yxt2_tf_val = sess.run([yxt1_tf, yxt2_tf])

yxt1 = y @ x.T
yxt2 = (x @ y.T).T

print(np.max(np.abs(yxt1 - yxt1_tf_val)), np.max(np.abs(yxt1 - yxt2)))
print(np.max(np.abs(yxt1 - yxt2_tf_val)), np.max(np.abs(yxt1_tf_val - yxt2_tf_val)))
```
which outputs:
```
0.00012207031 0.0
0.00012207031 0.00012207031
```

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached.
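Differences of this magnitude typically come from floating-point addition not being associative, so different summation orders (different BLAS kernels, transposed layouts) can legitimately disagree in the last bits. A minimal, framework-free NumPy illustration of the non-associativity (a sketch, not the matmul code paths themselves):

```python
import numpy as np

a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(1.0)

# Same three numbers, two summation orders, two different float32 results.
left = (a + b) + c   # cancellation happens first: 0 + 1 = 1
right = a + (b + c)  # -1e8 + 1 rounds back to -1e8 (ulp at 1e8 is 8), so the sum is 0
print(left, right)   # 1.0 0.0
```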
tensorflow/tensorflow | tflite_runtime installation doesn't provide the interpreter package | Bug |

**System information**
- Platform 1: NAME=Ubuntu, VERSION=16.04.5 LTS (Xenial Xerus), ID=ubuntu, ID_LIKE=debian, PRETTY_NAME="Ubuntu 16.04.5 LTS", VERSION_ID=16.04
- Platform 2: PRETTY_NAME="Mendel GNU/Linux 3 (Chef)", NAME="Mendel GNU/Linux", VERSION_ID=3, VERSION="3 (Chef)", ID=mendel, ID_LIKE=debian
- Mobile device: n/a
- TensorFlow installed from (source or binary): didn't install; tried to use tflite_runtime following "Run an inference using tflite_runtime"
- TensorFlow version: n/a
- Python version: Mendel: 3.5.3; Ubuntu 16.04: 3.6.9 (installed using virtualenv)
- Bazel version: n/a
- GCC version: Mendel: gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516; Ubuntu 16.04: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

I tried to install tflite_runtime following "Run an inference using tflite_runtime". When I try to import it as follows:

```python
from tflite_runtime import interpreter
```

I get the following error on both devices (Ubuntu 16.04 and Mendel):

```
>>> from tflite_runtime import interpreter
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
```

But just doing `import tflite_runtime` doesn't give me an error on either platform. I tried using IntelliSense and it shows me that no interpreter API is available. Am I doing something wrong? How can I fix this issue?

**Describe the problem** Provide the exact sequence of commands/steps that you executed before running into the problem. Include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback.
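Independent of the packaging question above, note that in Python a plain `import package` does not automatically make a submodule available as an attribute unless the package's `__init__.py` imports it; the submodule must be imported explicitly. The stdlib `xml` package shows the same behavior, so this is an illustrative sketch of the import semantics, not of tflite_runtime itself:

```python
import importlib
import xml

# Plain `import xml` does not load the `xml.dom` submodule,
# so attribute access would fail at this point in a fresh interpreter.
print(hasattr(xml, 'dom'))  # False

# Importing the submodule explicitly works, and also binds the attribute.
dom = importlib.import_module('xml.dom')
print(dom.__name__)         # xml.dom
print(hasattr(xml, 'dom'))  # True
```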
tensorflow/tensorflow | Non-TensorFlow code gets executed only on the first run of a tf.function | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: macOS 10.14
- Mobile device: n/a
- TensorFlow installed from (source or binary): pip-installed wheel
- TensorFlow version: 2.0.0-beta1
- Python version: 3.7.3
- GPU model and memory: CPU only

**Describe the current behavior**
Non-TensorFlow code gets executed only on the first run of a tf.function.

**Describe the expected behavior**
Either an exception is thrown, or the code is executed every time. As it is, it just invites bugs.

**Code to reproduce the issue**
```python
import tensorflow as tf
print(tf.__version__)

class MyObj:
    def __init__(self):
        self.value = 0

obj = MyObj()

@tf.function
def with_py_side_effects(tensorflow_stuff, o):
    # do my complex tf stuff
    o.value += 1
    return tensorflow_stuff

for i in range(5):
    print(i, obj.value)
    a = with_py_side_effects(None, obj)
```

**Other info / logs** My output:
```
2019-08-06 11:53:11.888584: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2.0.0-beta1
0 0
1 1
2 1
3 1
4 1

Process finished with exit code 0
```
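The output above matches tf.function's tracing model: the Python body runs only while a graph is being traced, and subsequent calls with the same input signature replay the traced graph without re-running the Python code. A plain-Python caching sketch of that model (the `trace_once` decorator is a hypothetical toy, not a TensorFlow API):

```python
import functools

def trace_once(fn):
    """Toy model of tracing: run the Python body once, then replay its result."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if 'trace' not in cache:
            cache['trace'] = fn(*args)  # Python side effects happen only here
        return cache['trace']
    return wrapper

class MyObj:
    def __init__(self):
        self.value = 0

obj = MyObj()

@trace_once
def with_py_side_effects(o):
    o.value += 1  # side effect: runs only during the single "trace"
    return None

for _ in range(5):
    with_py_side_effects(obj)
print(obj.value)  # 1, not 5 — mirroring the reported tf.function behavior
```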
tensorflow/tensorflow | UniqueV2 reports incorrect output shape | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 16.04.6 LTS
- Mobile device: n/a
- TensorFlow installed from (source or binary): pip
- TensorFlow version: v2.0.0-beta0-16-g1d91213fe7, 2.0.0-beta1
- Python version: 3.5.2
- Bazel version / GCC version (if compiling from source): n/a
- CUDA/cuDNN version: 10.0.130
- GPU model and memory: Tesla K40c, 11441 MiB

**Describe the current behavior**
When not using eager execution, UniqueV2 always reports its first output to have rank 1.

**Describe the expected behavior**
UniqueV2 should report its first output to have the same rank as its input.

**Code to reproduce the issue**
The bug can be exposed by forcing non-eager execution through `tf.function` or `tf.compat.v1.disable_eager_execution()`. The former is demonstrated below:

```python
import tensorflow as tf
from tensorflow.python.ops import gen_array_ops

def unique_rank(x, axis):
    unique = gen_array_ops.unique_v2(x, axis)
    return tf.rank(unique[0])

# 2-D input should produce a 2-D output.
x = tf.ones((2, 2))
print("UniqueV2 output 0 rank:", tf.function(unique_rank)(x, [0]))
print("UniqueV2 output 0 rank, executed eagerly:", unique_rank(x, [0]))
```

This outputs:
```
UniqueV2 output 0 rank: tf.Tensor(1, shape=(), dtype=int32)
UniqueV2 output 0 rank, executed eagerly: tf.Tensor(2, shape=(), dtype=int32)
```
but should output:
```
UniqueV2 output 0 rank: tf.Tensor(2, shape=(), dtype=int32)
UniqueV2 output 0 rank, executed eagerly: tf.Tensor(2, shape=(), dtype=int32)
```
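For comparison, NumPy's `unique` with an `axis` argument preserves the input rank, which is the behavior the eager path above shows and the graph shape function should report. An illustrative sketch:

```python
import numpy as np

x = np.ones((2, 2))

# unique along axis 0 collapses duplicate rows but keeps the rank.
u = np.unique(x, axis=0)
print(u.ndim)   # 2 — same rank as the 2-D input
print(u.shape)  # (1, 2) — the two identical rows collapse to one
```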
tensorflow/tensorflow | freeze_graph: "NodeDef expected inputs '' do not match 1 inputs specified" with tf.keras layers | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 18.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): Anaconda binary (tensorflow-gpu)
- TensorFlow version: b'unknown' 1.13.1
- Python version: 3.6.7
- Bazel version / GCC version (if compiling from source): n/a
- CUDA/cuDNN version: n/a (conda binary)
- GPU model and memory: GTX 1080 Ti, 11 GB

**Describe the current behavior**
When I try to freeze my TF graph that uses tf.keras layers, I get the following exception:

```
Traceback (most recent call last):
  File "/home/stefan/miniconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 426, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: {{node model/batch_normalization_v1/cond/Const_2}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/stefan/Documents/work/asr/reproduce_bug/bug.py", line 100, in <module>
    export()
  File "/home/stefan/Documents/work/asr/reproduce_bug/bug.py", line 95, in export
    filename_tensor_name='')
  File "/home/stefan/miniconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 146, in freeze_graph_with_def_protos
    importer.import_graph_def(input_graph_def, name='')
  File "/home/stefan/miniconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/stefan/miniconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 430, in import_graph_def
    raise ValueError(str(e))
ValueError: NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: {{node model/batch_normalization_v1/cond/Const_2}}
```

It seems to be related to the tf.keras BatchNormalization layer: when I pass `training=True` or `training=False` to it, the error goes away.

**Describe the expected behavior**
The graph should freeze no matter the `training` parameter.

**Code to reproduce the issue** (standalone script required to reproduce)
```python
import os
import tensorflow as tf
from tensorflow.python.keras.datasets import mnist
from tensorflow.python.tools import freeze_graph


class Model:
    def __init__(self):
        self.conv = tf.keras.layers.Conv2D(filters=64, kernel_size=5, activation=tf.nn.relu)
        self.batch_norm = tf.keras.layers.BatchNormalization()
        self.flatten = tf.keras.layers.Flatten()
        self.logits = tf.keras.layers.Dense(units=10)

    def forward(self, inputs):
        with tf.name_scope('model'):
            conv_out = self.conv(inputs)
            norm_out = self.batch_norm(conv_out)
            flat_out = self.flatten(norm_out)
            logits = self.logits(flat_out)
            return tf.identity(logits, name='logits')

    @staticmethod
    def loss(logits, labels):
        with tf.name_scope('loss'):
            return tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)


def train():
    def map_fn(x, y):
        return (tf.expand_dims(tf.math.divide(tf.cast(x, tf.float32), 255), axis=2),
                tf.cast(y, tf.int32))

    (x_train, y_train), _ = mnist.load_data()
    dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
    dataset = dataset.map(map_fn).batch(128).repeat(10)
    iterator = dataset.make_one_shot_iterator()
    inputs, labels = iterator.get_next()

    model = Model()
    logits = model.forward(inputs)
    loss = Model.loss(logits, labels)

    with tf.name_scope('optimizer'):
        optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(
            loss, global_step=tf.train.get_or_create_global_step())

    with tf.train.MonitoredTrainingSession(checkpoint_dir='/tmp') as sess:
        while not sess.should_stop():
            print(sess.run([optimizer, loss])[1])


def export():
    tf.reset_default_graph()
    export_dir = '/tmp/frozen'
    model = Model()
    inputs = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1),
                            name='input_placeholder')
    model.forward(inputs)

    with open('/tmp/checkpoint') as f:
        line = f.readline()
        checkpoint = line[line.find('"') + 1:line.rfind('"')]
        print(checkpoint)

    with tf.Session(graph=tf.get_default_graph()) as sess:
        saver = tf.train.Saver()
        sess.run(tf.global_variables_initializer())
        print(os.path.join('/tmp', checkpoint))
        saver.restore(sess, os.path.join('/tmp', checkpoint))

        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING], strip_default_attrs=True)
        builder.save()

        graph = tf.graph_util.remove_training_nodes(
            sess.graph.as_graph_def(), protected_nodes=['logits'])
        freeze_graph.freeze_graph_with_def_protos(
            input_graph_def=graph,
            input_saver_def=None,
            input_saved_model_dir=export_dir,
            saved_model_tags=tf.saved_model.tag_constants.SERVING,
            input_checkpoint=os.path.join('/tmp', checkpoint),
            output_node_names='logits',
            output_graph=os.path.join('/tmp', 'freeze.pb'),
            clear_devices=True,
            initializer_nodes='',
            restore_op_name='',
            filename_tensor_name='')


if __name__ == '__main__':
    train()
    export()
```

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback.
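The `training=True`/`training=False` workaround reported above is consistent with how a Python boolean argument is resolved at graph-construction time: a concrete bool lets the layer keep a single branch, whereas leaving it unset makes the layer emit a runtime conditional over both branches, and it is that `cond` subgraph that trips the freezing step. A plain-Python sketch of the static-vs-runtime branching idea (the `build_graph` helper is hypothetical, a toy model rather than Keras internals):

```python
def build_graph(training):
    """Toy model: a Python bool picks one branch statically; None keeps a runtime cond."""
    if training is None:
        # Both branches must be kept in the graph and selected at run time.
        return ['cond', 'train_branch', 'inference_branch']
    # A concrete bool is resolved while building: only one branch survives.
    return ['train_branch'] if training else ['inference_branch']

print(build_graph(None))   # ['cond', 'train_branch', 'inference_branch']
print(build_graph(False))  # ['inference_branch']
print(build_graph(True))   # ['train_branch']
```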
tensorflow/tensorflow | Custom op with registered gradient function errors in eager backprop | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: CentOS
- Mobile device: n/a
- TensorFlow installed from (source or binary): pip
- TensorFlow version: 2.0.0-beta1
- Python version: 3.6.9
- Bazel version / GCC version (if compiling from source): n/a
- CUDA/cuDNN version: v10.0.130 / v7.0.5
- GPU model and memory: NVIDIA Titan Xp

**Describe the current behavior**
If I run the code directly:
```
2019-08-06 11:32:33.687311: E tensorflow/stream_executor/cuda/cuda_driver.cc:1003] failed to synchronize the stop event: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
2019-08-06 11:32:33.687356: E tensorflow/stream_executor/gpu/gpu_timer.cc:78] INVALID_ARGUMENT: error recording CUDA event on stream 0x7fce8a062590: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
2019-08-06 11:32:33.687558: F tensorflow/core/kernels/reduction_gpu_kernels.cu.h:644] Non-OK-status: CudaLaunchKernel(ColumnReduceKernel, grid_dim, block_dim, 0, cu_stream, in, T(temp_storage), flat.data(), extent_x, extent_y, op, init) status: Internal: an illegal memory access was encountered
Aborted (core dumped)
```

If I run the code with `cuda-memcheck python train_simple.py`:
```
Traceback (most recent call last):
  File "train_simple.py", line 71, in <module>
    grads = tape.gradient(loss, model.trainable_variables)
  File "/medium_disk1/fordata/web_server/kangfu/anaconda3/envs/tensorflow-gpu/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py", line 1002, in gradient
    unconnected_gradients=unconnected_gradients)
  File "/medium_disk1/fordata/web_server/kangfu/anaconda3/envs/tensorflow-gpu/lib/python3.6/site-packages/tensorflow/python/eager/imperative_grad.py", line 76, in imperative_grad
    compat.as_str(unconnected_gradients.value))
  File "/medium_disk1/fordata/web_server/kangfu/anaconda3/envs/tensorflow-gpu/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py", line 137, in _gradient_function
    return grad_fn(mock_op, *out_grads)
  File "/medium_disk4/fordata/web_server/kangfu/hdrnet/tf/hdrnet_ops.py", line 24, in _bilateral_slice_grad
    grid_tensor, guide_tensor, input_tensor, grads, has_offset=has_offset)
  File "<string>", line 355, in bilateral_slice_apply_grad
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Failed launching BilateralSliceApplyGradKernel [Op:BilateralSliceApplyGrad]
```

**Describe the expected behavior**
The code should calculate gradients properly without errors. The custom op runs correctly in inference mode; however, it throws the bug during `model.fit` or the code below:

```python
with tf.GradientTape() as tape:
    output = model(full_res)
    loss = tf.keras.losses.MSE(output, full_res)
grads = tape.gradient(loss, model.trainable_variables)
```

**Code to reproduce the issue**
I define the custom ops like below:
```cpp
REGISTER_KERNEL_BUILDER(Name("BilateralSlice").Device(DEVICE_GPU), BilateralSliceOp);
REGISTER_KERNEL_BUILDER(Name("BilateralSliceGrad").Device(DEVICE_GPU), BilateralSliceGradOp);
REGISTER_KERNEL_BUILDER(Name("BilateralSliceApply").Device(DEVICE_GPU), BilateralSliceApplyOp);
REGISTER_KERNEL_BUILDER(Name("BilateralSliceApplyGrad").Device(DEVICE_GPU), BilateralSliceApplyGradOp);
```
It seems like the custom op does not work correctly in eager mode.

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback.
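When a custom op's registered gradient misbehaves, a quick sanity test is to compare the analytic gradient against a central finite difference before involving the GPU kernel at all. A framework-free NumPy sketch of that check, using f(x) = x² as a stand-in for the custom op (illustrative only):

```python
import numpy as np

def forward(x):
    return x ** 2

def analytic_grad(x):
    # The "registered" gradient of the op: d/dx x^2 = 2x.
    return 2.0 * x

def numeric_grad(f, x, eps=1e-5):
    # Central finite difference approximation of the gradient.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = np.array([0.5, -1.5, 3.0])
diff = np.max(np.abs(analytic_grad(x) - numeric_grad(forward, x)))
print(diff < 1e-6)  # True: analytic and numeric gradients agree
```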
tensorflow/tensorflow | Behavior of tf.data.Dataset when steps_per_epoch is set | Bug |

Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

**URL(s) with the issue:** Please provide a link to the documentation entry; for example: tf.data.Dataset.shuffle, Model.fit.

**Description of issue (what needs changing):** My tf.data.Dataset does not have `repeat` set, which I understood to mean it should go on forever. At the end of `steps_per_epoch`, does the tf.data.Dataset shuffle itself, does it pick up from where it left off, or does it reset? I couldn't find a clear explanation online from the googling I did. My dataset is about 14 million examples, and the loss seems to be decreasing between epochs with `steps_per_epoch` set. I'm just worried that it's fitting on the same X samples again and again; it's not entirely clear to me what's happening in the background with `fit`.
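For intuition, a dataset iterator without `repeat` behaves like a single Python generator that `fit` keeps pulling from: after `steps_per_epoch` batches, the next "epoch" picks up where the last one left off rather than restarting or reshuffling. A plain-Python sketch of that continue-where-it-left-off behavior (illustrative, not the tf.data implementation):

```python
def batches(data, batch_size):
    """One pass over the data, like a dataset without repeat()."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

data = list(range(10))
it = batches(data, batch_size=2)

steps_per_epoch = 2
epoch1 = [next(it) for _ in range(steps_per_epoch)]
epoch2 = [next(it) for _ in range(steps_per_epoch)]
print(epoch1)  # [[0, 1], [2, 3]]
print(epoch2)  # [[4, 5], [6, 7]] — continues; it does not restart at [0, 1]
```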
tensorflow/tensorflow | libtensorflow_framework.so: No such file or directory | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 18.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.14.0
- Python version: py36
- Bazel version (if compiling from source): bazel release 0.26.1
- GCC version (if compiling from source): 7.3.0
- CUDA/cuDNN version: n/a

I tried to include the op in the code as follows:

```python
id_table_ops_module = tf.load_op_library(
    resource_loader.get_path_to_datafile('libid_table_ops.so'))
```

only to get the traceback information:

```
Traceback (most recent call last):
  File "wide_deep.py", line 12, in <module>
    import jarvis.tensorflow as jtf
  File "/opt/ml/job/python/jarvis/tensorflow/__init__.py", line 4, in <module>
    from jarvis.tensorflow.feature_transform import *
  File "/opt/ml/job/python/jarvis/tensorflow/feature_transform.py", line 10, in <module>
    from jarvis.tensorflow import id_column
  File "/opt/ml/job/python/jarvis/tensorflow/id_column.py", line 44, in <module>
    from jarvis.tensorflow.id_table_ops import IdHashTable
  File "/opt/ml/job/python/jarvis/tensorflow/id_table_ops.py", line 29, in <module>
    resource_loader.get_path_to_datafile('libid_table_ops.so'))
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: libtensorflow_framework.so: cannot open shared object file: No such file or directory
```

The traceback shows that there is no `libtensorflow_framework.so`. I tried to find this file in the TF 1.14.0 binary version (not a compiled version) and only found `site-packages/tensorflow/libtensorflow_framework.so.1`, but still no `libtensorflow_framework.so`. So what's the correct way to include the .so file? Is this a bug in TF 1.14?
tensorflow/tensorflow | Passing a variable as learning rate to the Adam optimizer does not work as expected | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary, through pip3
- TensorFlow version: v2.0.0-beta0-16-g1d91213fe7, 2.0.0-beta1
- Python version: sys.version_info(major=3, minor=5, micro=6, releaselevel='final', serial=0)
- Bazel version / GCC version (if compiling from source): n/a
- CUDA/cuDNN version: 10.0.130, driver 410.48, cuDNN 10.0-linux-x64-v7.4.2.24
- GPU model and memory: GeForce GTX 1080 with 7598 MB memory

**Describe the current behavior**
If a tf.Variable is passed as the learning rate to the Adam optimizer and the variable is later changed, that does not seem to affect the optimizer. Instead, the optimizer seems to cache the value of the variable at the time when the optimizer was constructed.

**Describe the expected behavior**
My expectation was that if I pass a tf.Variable as the `learning_rate` argument to tf.keras.optimizers.Adam and later assign a new value to the variable, that would affect the optimization.

**Code to reproduce the issue** (a minimal reproducible test case)
```python
import tensorflow as tf
print(tf.version.GIT_VERSION, tf.version.VERSION)
import sys
print(sys.version_info)

tf_a = tf.Variable(1.0)
print('Variable tf_a initialized to {}'.format(tf_a.numpy()))
tf_lr = tf.Variable(0.1, trainable=False)
tf_opt = tf.keras.optimizers.Adam(learning_rate=tf_lr)

@tf.function
def train_step():
    with tf.GradientTape() as tf_tape:
        tf_loss = tf_a ** 2
    tf_grads = tf_tape.gradient(tf_loss, [tf_a])
    tf_opt.apply_gradients(zip(tf_grads, [tf_a]))

print('After one step with learning rate {}: '.format(tf_lr.numpy()), end='')
train_step()
print('variable tf_a is {}'.format(tf_a.numpy()))

tf_lr.assign(0.0)
for _ in range(10):
    print('After another step, now with learning rate {}: '.format(tf_lr.numpy()), end='')
    train_step()
    print('variable tf_a is {}'.format(tf_a.numpy()))
```

The code above produces the following output on my system:
```
v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
sys.version_info(major=3, minor=5, micro=6, releaselevel='final', serial=0)
Variable tf_a initialized to 1.0
After one step with learning rate 0.10000000149011612: variable tf_a is 0.8999971747398376
After another step, now with learning rate 0.0: variable tf_a is 0.8004083633422852
After another step, now with learning rate 0.0: variable tf_a is 0.7015821933746338
After another step, now with learning rate 0.0: variable tf_a is 0.6039347052574158
After another step, now with learning rate 0.0: variable tf_a is 0.5079591274261475
After another step, now with learning rate 0.0: variable tf_a is 0.41423195600509644
After another step, now with learning rate 0.0: variable tf_a is 0.3234161138534546
After another step, now with learning rate 0.0: variable tf_a is 0.23625943064689636
After another step, now with learning rate 0.0: variable tf_a is 0.1535806804895401
After another step, now with learning rate 0.0: variable tf_a is 0.07624538242816925
After another step, now with learning rate 0.0: variable tf_a is 0.005127914249897003
```

As you can see, tf_a keeps changing at a fast pace. My expectation was that after setting the learning-rate variable to 0.0, updates would no longer change tf_a.

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback.
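The distinction being reported can be reproduced without TensorFlow: an optimizer that copies the learning-rate value at construction ignores later assignments, while one that reads it from its source on every step sees them. A plain-Python sketch with two hypothetical optimizer classes, illustrating the capture-vs-read difference only:

```python
class CachingOptimizer:
    """Copies the learning-rate value once, at construction time."""
    def __init__(self, lr_box):
        self.lr = lr_box['lr']  # value captured here, never re-read

    def step(self, param, grad):
        return param - self.lr * grad

class LiveOptimizer:
    """Reads the learning rate from its source on every step."""
    def __init__(self, lr_box):
        self.lr_box = lr_box

    def step(self, param, grad):
        return param - self.lr_box['lr'] * grad

lr_box = {'lr': 0.1}
caching, live = CachingOptimizer(lr_box), LiveOptimizer(lr_box)
lr_box['lr'] = 0.0  # "assign 0.0" after the optimizers are constructed

print(caching.step(1.0, 2.0))  # 0.8 — still uses the captured 0.1
print(live.step(1.0, 2.0))     # 1.0 — sees the updated 0.0, parameter unchanged
```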
tensorflow/tensorflow | Passing a callable learning rate to the Adam optimizer does not work as documented | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary, through pip3
- TensorFlow version: v2.0.0-beta0-16-g1d91213fe7, 2.0.0-beta1
- Python version: sys.version_info(major=3, minor=5, micro=6, releaselevel='final', serial=0)
- Bazel version / GCC version (if compiling from source): n/a
- CUDA/cuDNN version: 10.0.130, driver 410.48, cuDNN 10.0-linux-x64-v7.4.2.24
- GPU model and memory: GeForce GTX 1080 with 7598 MB memory

**Describe the current behavior**
The Adam optimizer does not seem to keep calling the supplied learning-rate callable. It seems like it is called once, or a few times, but then a cached value is repeatedly used in updates.

**Describe the expected behavior**
According to the documentation, it should be possible to pass a callable that takes no arguments and returns the actual value to use as the learning rate to tf.keras.optimizers.Adam, and this can be useful for changing these values across different invocations of optimizer functions. My expectation was that the Adam optimizer would keep calling the supplied callable at each update, i.e. from within `apply_gradients`.

**Code to reproduce the issue** (a minimal reproducible test case)
```python
import tensorflow as tf
print(tf.version.GIT_VERSION, tf.version.VERSION)
import sys
print(sys.version_info)

tf_a = tf.Variable(1.0)
print('Variable tf_a initialized to {}'.format(tf_a.numpy()))

lr = 0.1

def get_lr():
    global lr
    return lr

tf_opt = tf.keras.optimizers.Adam(learning_rate=get_lr)

@tf.function
def train_step():
    with tf.GradientTape() as tf_tape:
        tf_loss = tf_a ** 2
    tf_grads = tf_tape.gradient(tf_loss, [tf_a])
    tf_opt.apply_gradients(zip(tf_grads, [tf_a]))

print('After one step with learning rate {}: '.format(get_lr()), end='')
train_step()
print('variable tf_a is {}'.format(tf_a.numpy()))

lr = 0.0
for _ in range(10):
    print('After another step, now with learning rate {}: '.format(get_lr()), end='')
    train_step()
    print('variable tf_a is {}'.format(tf_a.numpy()))
```

The code above produces the following output on my system:
```
v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
sys.version_info(major=3, minor=5, micro=6, releaselevel='final', serial=0)
Variable tf_a initialized to 1.0
After one step with learning rate 0.1: variable tf_a is 0.8999971747398376
After another step, now with learning rate 0.0: variable tf_a is 0.8004083633422852
After another step, now with learning rate 0.0: variable tf_a is 0.7015821933746338
After another step, now with learning rate 0.0: variable tf_a is 0.6039347052574158
After another step, now with learning rate 0.0: variable tf_a is 0.5079591274261475
After another step, now with learning rate 0.0: variable tf_a is 0.41423195600509644
After another step, now with learning rate 0.0: variable tf_a is 0.3234161138534546
After another step, now with learning rate 0.0: variable tf_a is 0.23625943064689636
After another step, now with learning rate 0.0: variable tf_a is 0.1535806804895401
After another step, now with learning rate 0.0: variable tf_a is 0.07624538242816925
After another step, now with learning rate 0.0: variable tf_a is 0.005127914249897003
```

As you can see, tf_a keeps changing at a fast pace. My expectation was that after setting the learning rate to 0.0, updates would no longer change tf_a.

**Other info / logs** Include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback.
tensorflow/tensorflow | InvalidArgumentError: Cannot assign a device for operation embedding_1 | Bug | System information: custom code (not a stock example); OS: Fedora 29 (5.1.18), also tested on Ubuntu 18.10; TensorFlow installed from tensorflow/tensorflow:latest-gpu-py3-jupyter; TensorFlow version: 1.14.0; Python version: 3.6.8; CUDA/cuDNN version: 10.0.130; GPU model and memory: GeForce RTX 2080 Ti, 11 GB.

Describe the current behavior: I am using Keras and trying to fit a model that contains an Embedding layer. When I call model.fit_generator I get this error:

    InvalidArgumentError: Cannot assign a device for operation embedding/embeddings/Initializer/random_uniform/sub:
    Could not satisfy explicit device specification '' because the node was colocated with a group of nodes that
    required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices:
    [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0,
    /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].
    Colocation debug info: colocation group had the following types and supported devices:
    Root Member(assigned_device_name_index_=1,
                requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0',
                assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0',
                resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0',
                supported_device_types_=[CPU], possible_devices_=[])
    Identity: GPU CPU XLA_CPU XLA_GPU
    Const: GPU CPU XLA_CPU XLA_GPU
    ResourceSparseApplyRMSProp: CPU
    RandomUniform: GPU CPU XLA_CPU XLA_GPU
    ReadVariableOp: GPU CPU XLA_CPU XLA_GPU
    Sub: GPU CPU XLA_CPU XLA_GPU
    Add: GPU CPU XLA_CPU XLA_GPU
    Mul: GPU CPU XLA_CPU XLA_GPU
    VarIsInitializedOp: GPU CPU XLA_CPU XLA_GPU
    VarHandleOp: GPU CPU XLA_CPU XLA_GPU
    AssignVariableOp: GPU CPU XLA_CPU XLA_GPU
    ResourceGather: GPU CPU XLA_CPU XLA_GPU

ResourceSparseApplyRMSProp looks strange to me. After getting this error I cannot fit a new, simplified model because I get the error again; I get it even when I just run tensorflow.keras.backend.get_value(model.optimizer.lr).

Describe the expected behavior: the model fits without any problem, like the same model without the Embedding layer (no d_input).

Code to reproduce the issue:

    lstm_input = Input(shape=(30, 5))
    steady_input = Input(shape=(3,), name='steady_float')
    d_input = Input(shape=(1,), name='steady_d_n')
    x1 = layers.Bidirectional(layers.LSTM(512, activation='relu', return_sequences=True))(lstm_input)
    x1 = layers.Bidirectional(layers.LSTM(256, activation='relu'))(x1)
    x1 = Model(inputs=lstm_input, outputs=x1)
    x2 = layers.Dense(512, activation='relu')(steady_input)
    x2 = Model(inputs=steady_input, outputs=x2)
    x3 = layers.Embedding(12, 3)(d_input)
    x3 = layers.Flatten()(x3)
    x3 = layers.Dense(512, activation='relu')(x3)
    x3 = Model(inputs=d_input, outputs=x3)
    x = layers.concatenate([x1.output, x2.output, x3.output])
    x = layers.Dense(128, activation='relu')(x)
    y1_output_tensor = layers.Dense(5, name='y1')(x)
    y2_output_tensor = layers.Dense(5, name='y2')(x)
    model = Model(inputs=[x1.input, x2.input, x3.input], outputs=[y1_output_tensor, y2_output_tensor])
    ep_n = 200
    learning_rate = 0.001
    decay_rate = learning_rate / ep_n
    momentum = 0.7
    model.compile(optimizer=RMSprop(lr=learning_rate, momentum=momentum, decay=decay_rate), loss=['mae', 'mae'])
    # train_gen and test_gen are simple generators with shuffled batches, batch_size = 128
    history = model.fit_generator(train_gen, steps_per_epoch=1000, epochs=ep_n,
                                  validation_data=test_gen, validation_steps=x_test.shape[0] // batch_size)

Other info / logs: sometimes I get this error on a simplified model:

    # define two sets of inputs
    inputA = Input(shape=(1,))
    inputB = Input(shape=(128,))
    # the first branch operates on the first input
    x = Embedding(1000, 3)(inputA)
    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation='relu')(x)
    x = layers.Dense(2048, activation='relu')(x)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dense(512, activation='relu')(x)
    x = layers.Dense(256, activation='relu')(x)
    x = Dense(128, activation='relu')(x)
    x = Dense(64, activation='relu')(x)
    x = Model(inputs=inputA, outputs=x)
    # the second branch operates on the second input
    y = Dense(1024, activation='relu')(inputB)
    y = Dense(512, activation='relu')(y)
    y = Dense(256, activation='relu')(y)
    y = Dense(128, activation='relu')(y)
    y = Dense(64, activation='relu')(y)
    y = Model(inputs=inputB, outputs=y)
    # combine the output of the two branches
    combined = layers.concatenate([x.output, y.output])
    # apply an FC layer and then a regression prediction on the combined output
    z = Dense(2, activation='relu')(combined)
    z = Dense(1, activation='linear')(z)
    # our model accepts the inputs of the two branches and outputs a single value
    model = Model(inputs=[x.input, y.input], outputs=z)
    ep_n = 10
    learning_rate = 0.001
    decay_rate = learning_rate / ep_n
    momentum = 0.7
    model.compile(optimizer=RMSprop(lr=learning_rate, momentum=momentum, decay=decay_rate))
    input_array_a = np.random.randint(1000, size=(500000, 1))
    input_array_b = np.random.randint(32, size=(500000, 128))
    output_array = np.random.randint(9, size=(500000, 1))
    def generator_shuffle(x_a, x_b, y, batch_size=1024):
        max_index = len(x_a) - 1
        while 1:
            rows = np.random.randint(0, max_index, batch_size)
            yield [x_a[rows], x_b[rows]], y[rows]
    tr_gen = generator_shuffle(input_array_a, input_array_b, output_array)
    history = model.fit_generator(tr_gen, steps_per_epoch=3, epochs=10)

The issue probably occurs depending on CPU usage. I am attaching the full log: gpu_error.txt
tensorflow/tensorflow | Custom loss function fails with sample_weight and batch_size > 1 | Bug | System information: custom code: yes; OS: Debian 9.9; TensorFlow installed from conda (-c anaconda); TensorFlow version: 1.14.0; Python version: 3.7.3; GPU model and memory: n/a (tested in CPU mode).

Describe the current behavior: an error occurs when training an LSTM with a custom loss function while using sample_weight and batch_size > 1. The error does not occur if batch_size = 1 or if sample_weight = None.

Describe the expected behavior: I would expect custom loss functions to work irrespective of batch size and sample weights.

Code to reproduce the issue — here is a minimal example:

    import numpy as np
    import tensorflow as tf

    batch_size = 32  # no problem if this is 1
    sequence_len = 1
    embedding_size = 100
    x_train = np.random.randn(batch_size, sequence_len, embedding_size)
    y_train = np.random.randn(batch_size, embedding_size)
    sample_weight = np.random.randn(batch_size)  # no problem if this is None

    train_input = tf.keras.Input(shape=(sequence_len, embedding_size), batch_size=batch_size)
    lstm_layer = tf.keras.layers.LSTM(200, return_sequences=False)(train_input)
    dense_layer = tf.keras.layers.Dense(embedding_size)(lstm_layer)
    model = tf.keras.models.Model(inputs=train_input, outputs=dense_layer)
    model.summary()

    # Custom loss function. This could of course be replaced with
    # tf.keras.losses.mean_squared_error, but I have a use case where
    # I need a custom loss function.
    class CustomLoss(tf.keras.losses.Loss):
        def call(self, y_true, y_pred):
            return tf.reduce_mean(tf.math.squared_difference(y_true, y_pred))

    model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.001), loss=CustomLoss())
    model.train_on_batch(x_train, y=y_train, sample_weight=sample_weight)

Other info / logs: in #29026, @pavithrasv pointed out that loss functions from tf.losses do not work with Keras and suggested using loss functions from tf.keras.losses instead (thanks again). Consequently, I think that defining a custom loss function using the tf.keras.losses.Loss base class should be possible. Please note that in my actual use case I have a more complex custom loss function for which I need some math operations from tf.math.

Traceback:

    Traceback (most recent call last):
      File "/home/john/phd/gitlab/literary-lstm/bug_minimal_example_03.py", line 38, in <module>
        sample_weight=sample_weight)
      File "/home/john/miniconda3/envs/py-tf/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1175, in train_on_batch
        outputs = self.train_function(ins)  # pylint: disable=not-callable
      File "/home/john/miniconda3/envs/py-tf/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3292, in __call__
        run_metadata=self.run_metadata)
      File "/home/john/miniconda3/envs/py-tf/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1458, in __call__
        run_metadata_ptr)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Can not squeeze dim[0],
    expected a dimension of 1, got 32 [[{{node loss_1/dense_loss/weighted_loss/Squeeze}}]]
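The squeeze failure above is a shape mismatch between the loss value and the per-sample weights. A minimal pure-Python sketch (illustrative only, not the Keras internals — `per_sample_mse` and `weighted_loss` are made-up names) of why per-sample weighting needs one loss value per sample: a loss reduced over the feature axis only has shape (batch,), so it can be multiplied element-wise with a length-batch weight vector, while a loss fully reduced to a scalar cannot.

```python
# Illustrative sketch (not Keras internals): per-sample weighting needs one
# loss value per sample. Plain nested lists stand in for tensors.

def per_sample_mse(y_true, y_pred):
    # one value per sample: mean over the feature axis only
    return [sum((t - p) ** 2 for t, p in zip(ts, ps)) / len(ts)
            for ts, ps in zip(y_true, y_pred)]

def weighted_loss(per_sample, weights):
    # element-wise weighting requires len(per_sample) == len(weights)
    assert len(per_sample) == len(weights)
    return sum(l * w for l, w in zip(per_sample, weights)) / len(weights)

y_true = [[1.0, 2.0], [3.0, 4.0]]
y_pred = [[1.0, 1.0], [2.0, 4.0]]
losses = per_sample_mse(y_true, y_pred)    # [0.5, 0.5], one entry per sample
total = weighted_loss(losses, [1.0, 3.0])  # 1.0
```

A loss that instead collapsed both samples into a single scalar would have nothing of length 2 left to line up against the two sample weights, which is the shape complaint the traceback reports.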
tensorflow/tensorflow | TF 2.0 nightly (190803) generates W0803 errors during prediction | Bug | System information: OS: Colab; TensorFlow installed from binary: pip install tf-nightly-gpu-2.0-preview; TensorFlow version: 2.0.0.dev20190803; Python installed via pip.

Describe the problem: when running prediction I receive a great number of errors with the same message, which impedes performance. There was no such problem with tf 2.0.0-beta1. Sample error text:

    W0803 19:06:51.695915 140613664049024 training_utils.py:1211 When passing input data as arrays,
    do not specify `steps_per_epoch`/`steps` argument. Please use `batch_size` instead.

Exact sequence of commands/steps executed before running into the problem:

    forecast = []
    for time in range(len(series) - window_size):
        forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
    forecast = forecast[split_time - window_size:]
    results = np.array(forecast)[:, 0, 0]
tensorflow/tensorflow | A correct way to use tf.contrib.opt.AdamWOptimizer | Bug | tf.contrib.opt.AdamWOptimizer requires two arguments, weight_decay and learning_rate. Since the learning rate is usually decayed during training, should weight_decay also be decayed with the same schedule? Would you please provide an example?
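The question above can be illustrated without the TensorFlow API. The sketch below is a hedged, pure-Python model of a decoupled-weight-decay step in which weight_decay follows the same schedule as the learning rate — one common convention, not an official recommendation; `schedule`, `base_lr`, `base_wd`, and `decoupled_step` are illustrative names only.

```python
# Pure-Python sketch of a decoupled weight-decay (AdamW-style, with the Adam
# moments omitted for brevity) update where both the learning rate and the
# weight decay are scaled by the same schedule.

def schedule(step, decay_steps=100):
    # simple linear decay from 1.0 to 0.0 over decay_steps
    return max(0.0, 1.0 - step / decay_steps)

def decoupled_step(w, grad, step, base_lr=0.1, base_wd=0.01):
    s = schedule(step)
    lr, wd = base_lr * s, base_wd * s   # both follow the same schedule
    return w - lr * grad - wd * w       # decay applied directly to the weight

w = 1.0
for step in range(100):
    w = decoupled_step(w, grad=0.0, step=step)
# with zero gradients only the weight decay acts, shrinking w toward 0
# but never past it, and the shrinkage stops once the schedule reaches 0
```

If weight_decay were instead held constant while the learning rate decayed, the effective regularization relative to the gradient step would grow over training, which is usually not what one wants.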
tensorflow/tensorflow | Serialization problem in Keras when using add_loss (or the tf extension for save) | Bug | System information: WSL (Win10) Ubuntu 18.04 (it also happens on a real Ubuntu 18); tf-nightly-gpu-2.0-preview 2.0.0.dev20190802 (happens on CPU and GPU); Python 3.7.

Describe the current behavior: I have a simple test that serializes and deserializes a model which has no compiled loss, just one added with add_loss.

Describe the expected behavior: correct serialization and deserialization of the model.

Code 1 to reproduce the issue (it occurs both with the pure loss function, which on earlier versions would return a JSON error but now seems OK, and with the Lambda wrapper):

    import tensorflow as tf
    import tensorflow.keras as tf_k

    model = tf_k.models.Sequential()
    model.add(tf_k.layers.Conv2D(32, (3, 3), activation='relu', batch_size=32,
                                 input_shape=(28, 28, 1), padding='same', strides=2))
    model.add(tf_k.layers.Conv2DTranspose(64, (3, 3), strides=2, padding='same'))
    model.add(tf_k.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same'))
    # loss = tf_k.layers.Lambda(lambda i: tf_k.losses.mse(*i))([model.inputs[0], model.outputs[0]])
    loss = tf_k.losses.mean_squared_error(model.inputs[0], tf.zeros_like(model.inputs[0]))
    model.add_loss(loss)
    model.compile(optimizer='adam')
    model.save('teste.h5')
    model = tf.keras.models.load_model('teste.h5')

Loading fails (startup logs about libcuda, XLA, etc. omitted):

    W0803 03:38:05.683956 139979377280832 training_utils.py:1320 Output conv2d_1 missing from loss
    dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting
    any data to be passed to conv2d_1.
    Traceback (most recent call last):
      File ".../tensorflow_core/python/framework/ops.py", line 1554, in _create_c_op
        c_op = c_api.TF_FinishOperation(op_desc)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 1 inputs specified of 2 inputs
    in Op while building NodeDef 'tf_op_layer_SquaredDifference/SquaredDifference' using
    Op<name=SquaredDifference; signature=(x:T, y:T) -> z:T; attr=T:type,
    allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_COMPLEX64,
    DT_COMPLEX128]; is_commutative=true>

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "teste.py", line 14, in <module>
        model = tf.keras.models.load_model('teste.h5')
      ...
      File ".../tensorflow_core/python/keras/engine/sequential.py", line 195, in add
        output_tensor = layer(self.outputs[0])
      File ".../tensorflow_core/python/keras/engine/base_layer.py", line 799, in __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
      ...
      File ".../tensorflow_core/python/framework/ops.py", line 1557, in _create_c_op
        raise ValueError(str(e))
    ValueError: 1 inputs specified of 2 inputs in Op while building NodeDef
    'tf_op_layer_SquaredDifference/SquaredDifference' ... (process finished with exit code 1)

Code 2 to reproduce the issue: the same model, but saved and loaded with the tf suffix (model.save('teste_tf'); tf.keras.models.load_model('teste_tf')). Only the error message changes; loading fails with:

    ValueError: Could not find matching function to call loaded from the SavedModel. Got:
      Positional arguments (1 total):
        * Tensor(shape=(32, 28, 28, 1), dtype=float32)
      Keyword arguments: {}
    Expected these arguments to match one of the following 1 option(s):
    Option 1:
      Positional arguments (1 total):
        * TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs/0')
      Keyword arguments: {}

Is there something wrong with this simple example, something I am missing, or does add_loss still have some problems with serialization/deserialization? I have noticed a bunch of errors being fixed since my last report about this custom use of Keras. A workaround that seems OK in this nightly (it was not working at the time of another issue I reported) is to use a custom compiled loss:

    def my_loss(y_true, y_pred):
        return y_true

    def my_loss2(y_true, y_pred):
        return y_pred

    model.compile(optimizer='adam', loss=my_loss)
    model.save('teste.h5')
    model = tf.keras.models.load_model('teste.h5',
                                       custom_objects={'my_loss': my_loss, 'my_loss2': my_loss2})
    print(model)

This runs to completion (process finished with exit code 0). But as an extension of these tests I noticed the tf version does not work correctly, which seems to be another related bug: saving the same model with model.save('teste_tf') and loading it with the same custom_objects fails with:

    Traceback (most recent call last):
      File "teste.py", line 25, in <module>
        model = tf.keras.models.load_model('teste_tf', custom_objects={'my_loss': my_loss, 'my_loss2': my_loss2})
      ...
      File ".../tensorflow_core/python/keras/utils/generic_utils.py", line 210, in deserialize_keras_object
        raise ValueError('Unknown ' + printable_module_name + ': ' + object_name)
    ValueError: Unknown loss function: my_loss

The real point of my interest in add_loss is that it is, at least as far as I know, the only way to use losses in a flexible way; I still need very flexible losses, and add_loss seems an interesting way to get them.
tensorflow/tensorflow | tf.gradients with unconnected_gradients='zero' returns wrong shape for unconnected resource variables | Bug | System information: custom code (not a stock example): no; OS: Linux 5.2.1-arch1-1-ARCH x86_64; TensorFlow installed from binary; TensorFlow version: v1.14.0-rc1-22-gaf24dc91b5; Python version: 3.6.8; CUDA/cuDNN version: n/a; GPU model and memory: n/a.

Describe the current behavior: calling tf.gradients with unconnected_gradients='zero' returns a scalar for unconnected resource variables.

Describe the expected behavior: calling tf.gradients with unconnected_gradients='zero' returns an appropriately shaped zero tensor for unconnected resource variables.

Code to reproduce the issue:

    a = tf.Variable(initial_value=[2., 3.])
    b = tf.Variable(initial_value=[3., 4.], use_resource=True)
    c = tf.constant(0.)
    print(tf.gradients(c, [a, b], unconnected_gradients='zero'))
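The expected behavior in the report can be sketched without TensorFlow. Below is a minimal pure-Python model (all names — `zeros_like`, `grads_with_zero` — are illustrative, not the TF implementation) of what 'zero' unconnected gradients should mean: an unconnected variable yields a zero tensor of the variable's own shape, never a bare scalar.

```python
# Pure-Python sketch of shape-preserving zero gradients for unconnected
# variables; plain nested lists stand in for tensors.

def zeros_like(x):
    # recurse through nested lists, replacing every leaf with 0.0
    if isinstance(x, list):
        return [zeros_like(v) for v in x]
    return 0.0

def grads_with_zero(connected, variables):
    # `connected` maps id(variable) -> true gradient for connected variables;
    # everything else falls back to a same-shaped zero tensor
    return [connected.get(id(v), zeros_like(v)) for v in variables]

a = [2.0, 3.0]
b = [3.0, 4.0]
g = grads_with_zero({}, [a, b])  # nothing connected to the target
# g is [[0.0, 0.0], [0.0, 0.0]]: zeros with the variables' shapes,
# matching the report's expectation (not a scalar 0)
```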
tensorflow/tensorflow | Optimizer Adam does not propagate hyperparameter updates on the first update | Bug | System information: custom code (not a stock example): no; OS: Ubuntu 18.04 64-bit; TensorFlow version: 2.0.0b1; Python version: 3.6.

Describe the current behavior: using tf.keras.optimizers.Adam and updating a hyperparameter results in the hyperparameter not being updated on the first update.

Describe the expected behavior: hyperparameters update on the first assignment, i.e. they are initialized as tf.Variables in the constructor of the optimizer, not on the first call.

Code to reproduce the issue:

    adam = tf.keras.optimizers.Adam()
    adam.learning_rate = 0.00001  # does not update the learning rate, but changes the Python-typed
                                  # hyperparameter in the _hyper dictionary to a tf.Variable
    adam.learning_rate = 0.00001  # this finally updates the tf.Variable

Calling adam._hyper before the update yields {'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999}; calling adam._hyper after the first update yields tf.Variable entries for learning_rate, decay, beta_1 and beta_2.
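One plausible mechanism for the behavior above can be sketched in pure Python: if the setter lazily promotes a plain float to a Variable-like box and initializes the box from the *old* value, the first assignment is swallowed and only subsequent assignments write through. This is an illustrative reconstruction, not the actual Keras optimizer code; `Box`, `set_hyper`, and `get_hyper` are made-up names.

```python
# Sketch of a lazy float-to-Variable promotion that loses the first write.

class Box:                               # stands in for tf.Variable
    def __init__(self, value):
        self.value = value
    def assign(self, value):
        self.value = value

class Optimizer:
    def __init__(self):
        self._hyper = {"learning_rate": 0.001}
    def set_hyper(self, name, value):
        prev = self._hyper[name]
        if isinstance(prev, Box):
            prev.assign(value)           # later assignments work
        else:
            self._hyper[name] = Box(prev)  # bug: promotes using the OLD value
    def get_hyper(self, name):
        v = self._hyper[name]
        return v.value if isinstance(v, Box) else v

opt = Optimizer()
opt.set_hyper("learning_rate", 1e-5)
first = opt.get_hyper("learning_rate")   # still 0.001: the first set was lost
opt.set_hyper("learning_rate", 1e-5)
second = opt.get_hyper("learning_rate")  # now 1e-05
```

The fix suggested by the report — creating the Variables eagerly in the constructor — removes the promotion branch entirely, so every assignment goes through `assign`.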
tensorflow/tensorflow | TF 2.0 API docs: tf.tuple | Bug | Documentation (contributor guide). URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): correct links — the GitHub link for tf.tuple leads to tf.tuple_v2; usage example — there is no usage example.
tensorflow/tensorflow | TF 2.0 API docs: tf.train.list_variables | Bug | URL(s) with the issue: (link to the documentation entry). Description of the issue (what needs changing): a Raises section covering the exceptions raised for errors, and a few drawings to be added to make it easier to understand. Raises listed and defined? No — the errors are not listed, and neither are the returns. Are there currently visuals, and if not, would they clarify the content? No, there are no current visuals. Submit a pull request? No, I am not willing to submit a pull request.
tensorflow/tensorflow | TF 2.0 API docs: tf.keras.backend.random_uniform | Bug | URL(s) with the issue: (link to the documentation entry). Description of issue (what needs changing): the function has no usage example, and no raises are documented. Raises listed and defined? No raises listed or defined. Usage example? No usage example. Submit a pull request? No.
tensorflow/tensorflow | TF 2.0 API docs: tf.nn.log_softmax | Bug | URL(s) with the issue: (link to the documentation entry). Description of the issue (what needs changing): no errors defined; no visuals. Are the errors defined? There are no errors defined. Visuals, if applicable: there are no visuals available; I think visuals are needed to clarify the content.
tensorflow/tensorflow | TF 2.0 API docs: tf.io.extract_jpeg | Bug | URL(s) with the issue: (link to the documentation entry). Description of issue (what needs changing): no link to the GitHub source code provided; no Raises provided in the description; no usage example. Correct links: no link provided. Raises listed and defined: raises are not defined/listed. Usage example: there is no usage example. Requested visuals, if applicable: currently no visuals; they may not be required.
tensorflow/tensorflow | TF2: UnicodeDecodeError when using tf.saved_model.save | Bug | System information: Have I written custom code: yes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.0-dev20190731. Python version: 3.6.7. CUDA/cuDNN version: 10.1. GPU model and memory: GTX 1080 Ti.

Describe the current behavior: the code crashes with the following exception:

Traceback (most recent call last):
  File "mwe.py", line 10, in <module>
    tf.saved_model.save(model, '/tmp/model')
  File "lib/python3.6/site-packages/tensorflow_core/python/saved_model/save.py", line 869, in save
    saveable_view, asset_info.asset_index)
  File "lib/python3.6/site-packages/tensorflow_core/python/saved_model/save.py", line 624, in _serialize_object_graph
    _write_object_proto(obj, obj_proto, asset_file_def_index)
  File "lib/python3.6/site-packages/tensorflow_core/python/saved_model/save.py", line 663, in _write_object_proto
    metadata=obj._tracking_metadata
  File "lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2816, in _tracking_metadata
    metadata = json.loads(super(Model, self)._tracking_metadata)
  File "lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2198, in _tracking_metadata
    metadata['config'] = self.get_config()
  File "lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 878, in get_config
    layer_config = layer.get_config()
  File "lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2378, in get_config
    self.node_def.SerializeToString().decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 68: invalid start byte

This seems to be a regression: code I was running some weeks/months ago with earlier TF2 nightlies worked, and this appears to me as unexpected behavior that might have been introduced by the commit referenced above. The call SerializeToString().decode('utf-8') seems incorrect, since SerializeToString can return arbitrary binary data. Per the protobuf parsing-and-serialization docs, "note that the bytes are binary, not text; we only use the str type as a convenient container" — decodability into utf-8 cannot be guaranteed. This bug might have gone unnoticed because many other serializations just happen to be utf-8 decodable.

Describe the expected behavior: a properly saved model.

Code to reproduce the issue:

import tensorflow as tf
import tensorflow.keras as keras

inputs = keras.layers.Input((None, None, None), dtype='float32')
outputs = tf.image.extract_patches(inputs,
                                   sizes=[1, 128, 128, 1],
                                   strides=[1, 128, 128, 1],
                                   rates=[1, 1, 1, 1],
                                   padding='VALID')
model = keras.Model(inputs=inputs, outputs=outputs)
tf.saved_model.save(model, '/tmp/model') |
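The core of the report above — that SerializeToString() output is arbitrary bytes, not UTF-8 text — can be shown without protobuf at all. A minimal sketch; the byte values below are stand-ins chosen so that 0x80 appears as an invalid UTF-8 start byte, exactly as in the reported traceback:

```python
import base64

# Stand-in for SerializeToString() output: arbitrary binary data.
raw = b"\x00\x08\x80\xff"

# Decoding arbitrary bytes as UTF-8 can fail, as in the reported crash.
try:
    raw.decode("utf-8")
    decodable = True
except UnicodeDecodeError:
    decodable = False

print(decodable)  # False

# A byte-safe container (e.g. base64) round-trips binary data without
# assuming UTF-8 decodability.
text = base64.b64encode(raw).decode("ascii")
assert base64.b64decode(text) == raw
```

This illustrates why the reporter argues the decode('utf-8') call is incorrect: it only succeeds when the serialized bytes happen to be valid UTF-8.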
tensorflow/tensorflow | Using class_weight in fit_generator causes continuous increase in CPU memory usage until depletion (OOM) | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04.2 LTS. Mobile device: not relevant. TensorFlow installed from (source or binary); TensorFlow version (use command below): tf-nightly-gpu 1.15.0-dev20190728. Python version: 3.7.4. CUDA/cuDNN version: release 10.0, V10.0.130; nvidia-smi: 418.43, driver version 418.43, CUDA version 10.1. GPU model and memory: GeForce RTX 2080 Ti, 11 GB.

Describe the current behavior: using class_weight in fit_generator causes the training process to continuously consume more and more CPU RAM until depletion. There is a step increase in memory usage after each epoch; see below for the reproducible example. To keep the reproducible example small I decreased the size of the dataset and the batch size, which still shows the trend of increasing memory; while training with my actual data it depletes the full 128 GB of RAM by 70 epochs. In the code below, if you comment out class_weight, the program trains without depleting memory.

Describe the expected behavior: not leaking memory.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

import tensorflow as tf
tf.enable_eager_execution()
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import CuDNNLSTM, Dense
from tensorflow.keras.optimizers import Adadelta

feature_count = 25
batch_size = 16
look_back = 5
target_groups = 10

def random_data_generator():
    x_data_size = (batch_size, look_back, feature_count)  # batches, lookback, features
    x_data = np.random.uniform(low=1.0, high=5, size=x_data_size)
    y_data_size = (batch_size, target_groups)
    y_data = np.random.randint(low=1, high=21, size=y_data_size)
    return x_data, y_data

def get_simple_dataset_generator():
    while True:
        yield random_data_generator()

def build_model():
    model = Sequential()
    model.add(CuDNNLSTM(feature_count,
                        batch_input_shape=(batch_size, look_back, feature_count),
                        stateful=False))
    model.add(Dense(target_groups, activation='softmax'))
    optimizer = Adadelta(learning_rate=1.0, epsilon=None)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer)
    return model

def run_training():
    model = build_model()
    train_generator = get_simple_dataset_generator()
    validation_generator = get_simple_dataset_generator()
    class_weights = {0: 2, 1: 8, 2: 1, 3: 4, 4: 8, 5: 35, 6: 30, 7: 4, 8: 5, 9: 3}
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=1,
                        epochs=1000,
                        verbose=2,
                        validation_data=validation_generator,
                        validation_steps=20,
                        max_queue_size=10,
                        workers=0,
                        use_multiprocessing=False,
                        class_weight=class_weights)

if __name__ == '__main__':
    run_training()

Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached): memory usage when class_weight is used (attachment: mem_with_class_weights); memory without class_weight (attachment: mem_without_class_weights). |
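For readers unfamiliar with the feature being blamed above: class_weight scales each sample's loss by the weight assigned to its class. A rough pure-Python sketch of the idea (illustrative only, not the Keras internals — Keras does this conversion inside its training utilities):

```python
# With one-hot labels, the per-sample weight is looked up via the argmax
# of the label row in the class_weight dict.
def apply_class_weight(labels, class_weight):
    def argmax(row):
        return max(range(len(row)), key=row.__getitem__)
    return [class_weight[argmax(row)] for row in labels]

weights = apply_class_weight([[0, 1, 0], [1, 0, 0]],
                             {0: 0.2, 1: 1.8, 2: 1.0})
print(weights)  # [1.8, 0.2]
```

The leak reported here suggests something in that label-to-weight conversion path retains references across epochs when class_weight is supplied.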
tensorflow/tensorflow | Error when changing Transformer hyperparameters | Bug | URL(s) with the issue: "Set hyperparameters". Description of issue (what needs changing): I've been working with the "Transformer model for language understanding" notebook on my own dataset. I got it to work with the default hyperparameters. The tutorial explains that I can create a Transformer-XL by adjusting the hyperparameters to those that were used in the paper. I changed them to the suggested values and I am now getting a ValueError when I try to train:

ValueError: Tensor's shape (8220, 128) is not compatible with supplied shape (8220, 512)

I think this means that some object is not configured properly (not using the hyperparameter variables), but I can't figure out where it's happening. I tried restarting the runtime and running everything again, but it didn't help.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): I made small adjustments to the provided code in the Transformer notebook to accommodate my own data; I also changed the values of the hyperparameters of the model. OS platform and distribution: I am running the notebook in a Colab GPU runtime (Linux Ubuntu 18.04.2). TensorFlow installed from (source or binary): I'm not exactly sure — I installed it using the command provided with the notebook: pip install -q tensorflow-gpu==2.0.0-beta1. TensorFlow version: tensorflow-gpu 2.0.0-beta1. Python version: 3.6.8. CUDA version: Cuda 10.0.130. GPU model and memory: I'm not sure how to get this info, but it's a Colab GPU runtime.

Code snippet: I changed the output encoder from a SubwordTextEncoder to a TokenTextEncoder:

tokenizer_out = tfds.features.text.TokenTextEncoder(unique_concepts)

I changed the tf_encode function to use a single argument instead of two:

def tf_encode(element):
    return tf.py_function(encode, [element[0], element[1]], [tf.int64, tf.int64])

and I changed the hyperparameter values:

num_layers = 6
d_model = 512
dff = 2048
num_heads = 8 |
tensorflow/tensorflow | SequenceFeatures gets overly complex when used with a sequence_numeric_column | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: macOS 10.13.6. TensorFlow installed from (source or binary): from pip install. TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1. Python version: v3.6.7:6ec5cf24b7, Oct 20 2018, 03:02:14.

Describe the current behavior: the behaviour of the SequenceFeatures layer is really different between when it's used with a feature column produced with sequence_numeric_column and when it's used with a feature column produced with sequence_categorical_column_with_identity and embedding_column. I've already reported an issue where the call method of SequenceFeatures used with sequence_numeric_column unnecessarily requires its input to be a SparseTensor instead of a regular Tensor. Today I was experimenting with the Keras functional API by building a model using two SequenceFeatures layers along with both a sequence_numeric_column and an embedding_column. I got graphical depictions of the model using tf.keras.Model.summary and tf.keras.utils.plot_model, and discovered that SequenceFeatures with sequence_numeric_column gets decomposed into a million sub-layers of type TensorFlowOpLayer, as opposed to the clean and to-the-point depiction of SequenceFeatures with embedding_column. This has the only effect of overloading the results of summary and plot_model with not-so-relevant information, instead of giving a concise insight into the graph. Here is the obtained plot (large and complex plot of the model). The upper part of the plot is due to the conversion of the input into a sparse tensor, so this is related to the previous issue; the focus of the issue I'm opening today is the other part, which should not exist in my opinion.

Describe the expected behavior: assuming my previous issue is solved, I think that the result of plot_model should simply look like the attached image.

Code to reproduce the issue:

import numpy as np
import tensorflow as tf
from tensorflow.feature_column import (embedding_column,
                                       sequence_categorical_column_with_identity,
                                       sequence_numeric_column)
from tensorflow.keras import Input, Model
from tensorflow.keras.experimental import SequenceFeatures
from tensorflow.keras.layers import Input, Dense
# On my computer I had to use a custom plot_model instead of
# tf.keras.utils.plot_model because of a bug (see nota bene below).
from deep.modeltodot import plot_model
# from tensorflow.keras.utils import plot_model

print(tf.__version__, tf.__git_version__)

# Model preparation
seq_fc_dense = sequence_numeric_column('dense_feat')
seq_layer_dense = SequenceFeatures(seq_fc_dense, name='dense_feat_layer')
nb_cat = 5
seq_fc_cat = sequence_categorical_column_with_identity('cat_feat', nb_cat)
seq_fc_cat = embedding_column(seq_fc_cat, 2)
seq_layer_cat = SequenceFeatures(seq_fc_cat, name='cat_feat_layer')

input_dense = Input(shape=(None,), name='dense_feat')
input_cat = Input(shape=(None,), name='cat_feat', dtype=tf.int32)

# We need to convert input_dense to a sparse tensor (see the previous issue)
zero = tf.constant(0, dtype=tf.float32)
indices = tf.where(tf.not_equal(input_dense, zero))
values = tf.gather_nd(input_dense, indices)
sparse = tf.SparseTensor(indices, values,
                         tf.cast(tf.shape(input_dense), dtype=tf.int64))

x_dense = seq_layer_dense({'dense_feat': sparse})[0]
x_cat = seq_layer_cat({'cat_feat': input_cat})[0]
x = tf.concat([x_dense, x_cat], 1)
output = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[input_dense, input_cat], outputs=[output])
model.summary()
plot_model(model)

Nota bene: the custom plot_model that I used to make my plots is the same as tf.keras.utils.plot_model, except that I replaced the following imports in keras/utils/vis_utils.py:

try:
    # pydot-ng is a fork of pydot that is better maintained.
    import pydot_ng as pydot
except ImportError:
    # pydotplus is an improved version of pydot
    try:
        import pydotplus as pydot
    except ImportError:
        # Fall back on pydot if necessary.
        try:
            import pydot
        except ImportError:
            pydot = None

by "import pydot_ng as pydot", and I removed the call to check_pydot in model_to_dot because it was raising an exception. |
tensorflow/tensorflow | Incomplete description of LSTM call params | Bug | URL(s) with the issue: recurrent.py, L2461. Description of issue (what needs changing): the current documentation reads:

Call arguments:
  inputs: A 3D tensor.
  mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked.
  training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used.
  initial_state: List of initial state tensors to be passed to the first call of the cell.

It would be worth mentioning that mask, training and initial_state are optional. If initial_state is not provided, default zeros are imputed by calling cell.get_initial_state or cell.zero_state (see tensorflow/python/ops/rnn.py, lines 677 or 1382). Due to the inheritance of the RNN layer (incl. mixins) and missing documentation at some places (I haven't found any comment on get_initial_state), it's quite hard to figure this out.

Submit a pull request? I'd like to leave it to someone who really knows the Keras internals; to me it's still a bit cryptic, i.e. where exactly the get_initial_state fn comes from, etc. |
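The optional-state behavior the report above asks to be documented can be shown with a simplified pure-Python sketch. The function name and shapes here are illustrative, not the real Keras signatures; the point is only the default-to-zeros contract of cell.get_initial_state / cell.zero_state:

```python
# Sketch: when initial_state is not provided, the RNN imputes a zero state
# of shape (batch_size, units); a caller-supplied state is used as-is.
def rnn_initial_state(batch_size, units, initial_state=None):
    if initial_state is None:
        initial_state = [[0.0] * units for _ in range(batch_size)]
    return initial_state

print(rnn_initial_state(2, 3))                 # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(rnn_initial_state(1, 2, [[5.0, 6.0]]))   # [[5.0, 6.0]]
```

Documenting exactly this fallback next to the initial_state argument would resolve the reported gap.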
tensorflow/tensorflow | DistributionStrategy is not supported by tf.keras.models.Model.fit_generator | Bug | Hi, recently I've encountered a NotImplementedError while trying to apply the fit_generator method of a tf.keras.models.Model with a multi-worker distribution strategy. It has been almost a year since this handler was added to the code (diff de9b96ac2d81503324cbbbe21732031f), and I'm wondering whether to expect an implementation to be added any time soon — with the release of TF 2.0, for example. Making an effort to find a workaround, I've tried to transform the generator into a tf.data.Dataset via tf.data.Dataset.from_generator and to replace fit_generator by fit, but encountered a similar problem: the obtained object has type DatasetV1Adapter, which is also incompatible with distribution strategies. I dare to assume that for a wide society of TF users, and for me in particular, this functionality would be of great interest. Dealing with large domain-specific data sets that don't fit into memory, one often has no choice other than to write a custom data generator, and when big data is involved, distributed training might be crucial. I would highly appreciate any information on the current state of the problem, or possible workarounds, from the TensorFlow developer team. Thanks in advance. System information: TensorFlow version you are using: 2.0.0-dev20190729. Are you willing to contribute it (yes/no): no. |
tensorflow/tensorflow | contrib.receptive_field: ValueError: Weight layer's name input to conv layer does not end with '/read' | Bug | I want to compute the receptive field of some convolutional models defined using tf.keras. I get the graph directly from the session in which the model was built, and run compute_receptive_field_from_graph_def:

from tensorflow.contrib import receptive_field
from tensorflow.keras.layers import Input, Conv2D, AvgPool2D, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.backend import get_session

def build_model(input_shape):
    inp = Input(shape=input_shape, name='my_input')
    x = Conv2D(32, (3, 3))(inp)
    x = AvgPool2D((2, 2))(x)
    x = Conv2D(32, (3, 3))(x)
    x = AvgPool2D((2, 2))(x)
    x = Conv2D(32, (3, 3), name='my_output')(x)
    x = Flatten()(x)
    x = Dense(100)(x)
    x = Dense(10, activation='softmax')(x)
    return x

model = build_model((64, 64, 3))
g = get_session().graph
receptive_field.compute_receptive_field_from_graph_def(g, 'my_input', 'my_output/BiasAdd')

I set the name of the output as the last name of the operations in my 'my_output' layer, obtained by checking g.get_operations(), i.e. 'my_output/BiasAdd', but I get a ValueError:

Traceback (most recent call last):
  <ipython-input>, line 1:
    receptive_field.compute_receptive_field_from_graph_def(g, 'my_input', 'my_output/BiasAdd')
  File "miniconda3/envs/phaunos_ml/lib/python3.6/site-packages/tensorflow/contrib/receptive_field/python/util/receptive_field.py", line 273, in compute_receptive_field_from_graph_def
    (kernel_size_x, kernel_size_y, stride_x, stride_y, padding_x, padding_y) = parse_layer_parameters.get_layer_params(node, name_to_node, node_info[node.name].input_size)
  File "miniconda3/envs/phaunos_ml/lib/python3.6/site-packages/tensorflow/contrib/receptive_field/python/util/parse_layer_parameters.py", line 277, in get_layer_params
    # if node.op == "Conv2D" or node.op == "DepthwiseConv2dNative":
    #   stride_x, stride_y = _stride_size(node, name_to_node)
    kernel_size_x, kernel_size_y = _conv_kernel_size(node, name_to_node)
  File "miniconda3/envs/phaunos_ml/lib/python3.6/site-packages/tensorflow/contrib/receptive_field/python/util/parse_layer_parameters.py", line 87, in _conv_kernel_size
    # if not weights_layer_read_name.endswith("/read"):
    raise ValueError("Weight layer's name input to conv layer does not end with '/read'")
ValueError: Weight layer's name input to conv layer does not end with '/read'

I could not find any layer ending with 'read', as suggested. I also tried just 'my_output', but got "ValueError: Output node was not found". I also tried to pass g.as_graph_def() instead of g as the function's first argument, and failed. So here I am. |
tensorflow/tensorflow | TensorFlow website | Bug | URL(s) with the issue. Description of issue (what needs changing): can't open the URL. |
tensorflow/tensorflow | Specifying a ShapeHandle's shape when creating a new op in TensorFlow: rank problem of the output tensor | Bug | System: Ubuntu 18.04, TensorFlow 1.13.1, cuDNN 7.4.2, CUDA 10.2, NVIDIA 430.26, Python 3.6, GPU 2080 Ti.

I have successfully compiled the op registration file, and tested it when only using this file. But during the training process, when I try to call the function defined in the op, these errors are encountered, which vary every time: "Segmentation fault (core dumped)", or "double free or corruption (!prev) Aborted (core dumped)", or:

Traceback (most recent call last):
  File "home/tifo_kj/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1659, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 1 but is rank 99648624 for 'MyOp' (op: 'MyOp') with input shapes: [50,2048,3].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 327, in <module>
    network.build_graph(training=True)
  File "main.py", line 36, in build_graph
    reuse=None, bn_decay=self.bn_decay, up_ratio=opt.u
  File "home/graph.py", line 87, in graph
    p_out = myfunc(x)
  File "home/myop.py", line 19, in myfunc
    return myop_module.myfunc(x)
  File "<string>", line 68, in myfunc
  File "home/tifo_kj/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def
  File "home/tifo_kj/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "home/tifo_kj/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def
  File "home/tifo_kj/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1823, in __init__
    control_input_ops
  File "home/tifo_kj/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1662, in _create_c_op
    raise ValueError(str(e))
ValueError: Shape must be rank 1 but is rank 99648624 for 'MyOp' (op: 'MyOp') with input shapes: [50,1000,3].

Please note that the number 99648624 above is uncertain; sometimes it can be 0 or any weird number. Below is the code for registering the op in TensorFlow, where I specify the output's dimensions as (b, 200, 200, 1):

.SetShapeFn([](tensorflow::shape_inference::InferenceContext* c) {
  tensorflow::shape_inference::ShapeHandle input;
  TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 3, &input));
  tensorflow::shape_inference::ShapeHandle dims2;
  TF_RETURN_IF_ERROR(c->MakeShapeFromShapeTensor(200, &dims2));
  tensorflow::shape_inference::ShapeHandle dims3;
  TF_RETURN_IF_ERROR(c->MakeShapeFromShapeTensor(200, &dims3));
  tensorflow::shape_inference::ShapeHandle dims4;
  TF_RETURN_IF_ERROR(c->MakeShapeFromShapeTensor(1, &dims4));
  tensorflow::shape_inference::ShapeHandle output =
      c->MakeShape({c->Dim(input, 0), c->Dim(dims2, 0), c->Dim(dims3, 0), c->Dim(dims4, 0)});
  c->set_output(0, output);
  return Status::OK();
})

I believe that in my code for the op registration the output shape has already been successfully determined. There is no similar question that I can find online; please help. |
tensorflow/tensorflow | TF Lite quantization with representative data: memory keeps growing | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): 16.04. Mobile device: —. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.14.0-rc1-22-gaf24dc91b5 1.14.0. Python version: 3.6.3.

Describe the current behavior: the memory keeps growing with the given representative dataset. The problem doesn't come from how I load the data, since there is no problem if I just loop over my data. Probably something in the quantization process is holding on to the data.

Describe the expected behavior: no memory leak.

Code to reproduce the issue:

import os
import collections
import psutil
import tensorflow as tf
from tensorflow import lite

def representative_data_loader(FLAGS, tgt_h, tgt_w):
    # get the image name list
    input_dir = FLAGS.representative_dataset  # a path to some image dataset
    if input_dir is None:
        raise ValueError('Input directory is not provided')
    if not os.path.exists(input_dir):
        raise ValueError('Input directory not found')
    image_list_lr_temp = os.listdir(input_dir)
    image_list_lr = [os.path.join(input_dir, f) for f in image_list_lr_temp
                     if f.split('.')[-1] in ('png', 'jpeg', 'jpg')]
    image_list_ds = tf.data.Dataset.from_tensor_slices(image_list_lr)
    image_list_ds = image_list_ds.shuffle(len(image_list_lr))
    image_list_ds = image_list_ds.repeat()

    # read in and preprocess the images
    def preprocess_image(image):
        image = tf.image.decode_image(image, channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)
        image = tf.image.random_crop(image, [tgt_h, tgt_w, 3])
        return image

    def load_and_preprocess_image(path):
        image = tf.read_file(path)
        return preprocess_image(image)

    image_ds = image_list_ds.map(load_and_preprocess_image,
                                 num_parallel_calls=tf.data.experimental.AUTOTUNE)
    image_ds = image_ds.batch(1)

    # push paths and images into a list
    Data = collections.namedtuple('Data', 'paths_lr inputs')
    return Data(paths_lr=image_list_ds, inputs=image_ds)

def memory():
    # from somewhere online
    pid = os.getpid()
    py = psutil.Process(pid)
    memory_use = py.memory_info()[0] / 2. ** 30  # memory use in GB... I think
    print('memory use:', memory_use)

def main():
    in_ch = 3
    up_factor = 2
    h, w = 32, 32  # h, w = 536, 536
    inputs_raw = tf.placeholder(tf.float32, shape=[1, h, w, in_ch], name='inputs_raw')
    outputs = tf.split(value=inputs_raw,
                       num_or_size_splits=tf.constant([int(h / 2), int(h / 2)]),
                       axis=1)[0]
    # to test the dataset
    data = representative_data_loader(FLAGS, h, w)
    memory()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # to test the dataset
        image_iter = data.inputs.make_initializable_iterator()
        sess.run(image_iter.initializer)
        image_el = image_iter.get_next()
        converter = lite.TFLiteConverter.from_session(sess, [inputs_raw], [outputs])
        converter.optimizations = [lite.Optimize.DEFAULT]

        def representative_data_gen():
            for i in range(1000):
                print(i, end=' ')
                memory()
                yield [sess.run(image_el)]

        converter.representative_dataset = representative_data_gen
        tflite_model = converter.convert()

Other info / logs: memory just keeps growing until it runs out. |
tensorflow/tensorflow | TF 2.0: tf.function causes dataset iteration to crash in multi-GPU mode | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. Tag: bug_template.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Colab. TensorFlow installed from (source or binary); TensorFlow version (use command below): tf-nightly-gpu-2.0-preview. Python version: 3.6.

Describe the current behavior: when using the tf.function decorator for a function iterating over a dataset in multi-GPU mode, the Colab notebook crashes and gives the following log:

tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] function_optimizer failed: Invalid argument: Node 'while/body/_21/replica_1/StatefulPartitionedCall': Control dependency must come after regular dependencies
terminate called after throwing an instance of 'std::out_of_range'
  what(): vector::_M_range_check
KernelRestarter: restarting kernel (1/5), keep random ports

It works perfectly without the tf.function decorator.

Describe the expected behavior: the dataset should iterate as it does in eager mode, without the tf.function decorator.

Code to reproduce the issue:

with strategy.scope():
    def compute_loss(labels, predictions):
        per_example_loss = loss_object(labels, predictions)
        return tf.nn.compute_average_loss(per_example_loss,
                                          global_batch_size=GLOBAL_BATCH_SIZE)

    def train_step(cat_cols, num_cols, labels):
        with tf.GradientTape() as tape:
            inputs = tf.unstack(cat_cols, axis=1) + [num_cols]
            predictions = model(inputs=inputs, training=True)
            loss = compute_loss(labels, predictions)
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        predictions = tf.round(predictions)
        train_accuracy.update_state(labels, predictions)
        train_auc.update_state(labels, predictions)
        return loss

    @tf.function
    def distributed_train_step(dist_train):
        total_loss = 0.0
        num_batches = 0.0
        for cat, num, labels in dist_train:
            per_replica_losses = strategy.experimental_run_v2(
                train_step, args=(cat, num, labels))
            total_loss += strategy.reduce(
                tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
            num_batches += 1
        return total_loss / num_batches

    def train(dist_train):
        for epoch in range(NUM_EPOCHS):
            print(f'Training epoch {epoch + 1}', end=' ')
            train_loss = distributed_train_step(dist_train)
            template = 'Epoch {}, Loss: {:.3f}, Accuracy: {:.3f}, AUC: {:.3f}'
            print(template.format(epoch + 1, train_loss,
                                  train_accuracy.result() * 100,
                                  train_auc.result()))
            train_accuracy.reset_states()
            train_auc.reset_states()

    train(dist_train)

Edit: did some more digging, and I was able to narrow the issue down to the line:

gradients = tape.gradient(loss, model.trainable_variables) |
tensorflow/tensorflow | tflite: changing weights | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): OSX. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.12.1-7529-g3e0ad8a004 2.0.0-dev20190731. Python version: 3.6.8.

I'm attempting to implement soft conditional computation. In this approach, the convolution weights are calculated dynamically for each input, with a concrete function. This works; however, tflite seems to freeze the weights from the first input. I was able to resolve this by calling resize_tensor_input and allocate_tensors for each image, at the cost of a small performance hit:

# NOTE: need to fake-resize the input + reallocate tensors
interpreter.resize_tensor_input(0, [1, in_size, in_size, nb_channels])
interpreter.allocate_tensors()

My first thought was that tflite hardcodes the weights as an optimisation; however, upon looking through the source code this doesn't seem to be the case, leading me to think this is a bug.

Example to reproduce the issue:

import tensorflow as tf
import numpy as np

REALLOCATE_TENSORS = False  # turn this on to reallocate tensors every run (fixes tflite accuracy)

nb_channels = 5
wt_size = 100
in_size = 30

class DynamicWeights(tf.keras.layers.Layer):
    def __init__(self, channels, select_size, **kwargs):
        super().__init__()
        self.channels = channels
        self.select_size = select_size

    def call(self, inputs):
        x = inputs[0]
        wts = inputs[1]
        x = tf.nn.conv2d(x, wts, [1, 1], 'VALID')
        return x

input_wts = tf.keras.layers.Input(shape=(1, nb_channels, wt_size), dtype=tf.float32)
input_data = tf.keras.layers.Input(shape=(in_size, in_size, nb_channels), dtype=tf.float32)
x = DynamicWeights(channels=wt_size, select_size=nb_channels)([input_data, input_wts])
model = tf.keras.Model(inputs=[input_data, input_wts], outputs=x)

# Get the concrete function from the Keras model
model_fn = tf.function(lambda x: model(x))
model_fn_concrete = model_fn.get_concrete_function([input_data, input_wts])

tflite_converter = tf.lite.TFLiteConverter.from_concrete_functions([model_fn_concrete])
tflite_converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = tflite_converter.convert()

# Load the tflite model and allocate tensors
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

for _ in range(10):
    data = np.array(np.random.random_sample((1, in_size, in_size, nb_channels)),
                    dtype=np.float32)
    wts = np.array(np.random.random_sample((1, 1, nb_channels, wt_size)),
                   dtype=np.float32)

    out_ref = model([data, wts])
    out_fn = model_fn_concrete(tf.constant(data), tf.constant(wts))

    # NOTE: need to fake-resize the input + reallocate tensors
    if REALLOCATE_TENSORS:
        interpreter.resize_tensor_input(0, [1, in_size, in_size, nb_channels])
        interpreter.allocate_tensors()

    # Call tflite
    interpreter.set_tensor(input_details[0]['index'], data)
    interpreter.set_tensor(input_details[1]['index'], wts)
    interpreter.invoke()
    out_test = interpreter.get_tensor(output_details[0]['index'])

    out_ref = out_ref.numpy()
    out_fn = out_fn.numpy()
    diff = 100 * np.sum(np.abs(out_test - out_ref)) / np.sum(np.abs(out_ref))
    diff_fn = 100 * np.sum(np.abs(out_fn - out_ref)) / np.sum(np.abs(out_ref))
    test_sum = np.sum(out_test)
    print('diff concrete function is %f, diff tflite is %f' % (diff_fn, diff))

Example output (without the resize/allocate fix):

diff concrete function is 0.000000, diff tflite is 0.000002
diff concrete function is 0.000000, diff tflite is 33.312505
diff concrete function is 0.000000, diff tflite is 34.966878
diff concrete function is 0.000000, diff tflite is 31.750450
diff concrete function is 0.000000, diff tflite is 33.716212
diff concrete function is 0.000000, diff tflite is 34.539647
diff concrete function is 0.000000, diff tflite is 35.657966
diff concrete function is 0.000000, diff tflite is 35.823278
diff concrete function is 0.000000, diff tflite is 36.413545
diff concrete function is 0.000000, diff tflite is 33.483852

Edit: changed the above example to include a variable that turns the fix on/off. |
tensorflow/tensorflow | BatchMatMul, Merge, Switch | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10. TensorFlow installed from (source or binary): GitHub. TensorFlow version (or GitHub SHA if from source): 1.14.0.

Provide the text output from tflite_convert — TOCO failed, see console for info:

2019-07-30 13:09:37.522804: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 67 operators, 113 arrays (0 quantized)
2019-07-30 13:09:37.523543: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 65 operators, 110 arrays (0 quantized)
2019-07-30 13:09:37.524229: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 65 operators, 110 arrays (0 quantized)
2019-07-30 13:09:37.525344: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 49 operators, 87 arrays (0 quantized)
2019-07-30 13:09:37.526111: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 49 operators, 87 arrays (0 quantized)
2019-07-30 13:09:37.526697: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 49 operators, 87 arrays (0 quantized)
2019-07-30 13:09:37.527380: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 256 bytes, theoretical optimal value: 128 bytes.
2019-07-30 13:09:37.527969: E tensorflow/lite/toco/toco_tooling.cc:456] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a GitHub issue and pasting the following: Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, FULLY_CONNECTED, GREATER, LESS_EQUAL, LOGISTIC, MUL, PACK, REDUCE_MAX, REDUCE_MIN, SELECT, SPLIT, TANH, TRANSPOSE, UNPACK. Here is a list of operators for which you will need custom implementations: BatchMatMul, Merge, Switch.

Also, please include a link to a GraphDef or the model if possible.

Any other info / logs: I was trying to convert a static LSTM model with an initial state as input. If the initial state is None, it converts to a tflite model correctly. Include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached:

ConverterError                            Traceback (most recent call last)
<ipython-input> in <module>
      2 converter = tf.lite.TFLiteConverter.from_frozen_graph(output_graph, input_arrays=['input_tensor', 'input_state'],
      3                                                       output_arrays=['output', 'output_state'])
----> 4 tflite_model = converter.convert()
      5 open('tf_lstm_v2.tflite', 'wb').write(tflite_model)

anaconda3/envs/tf-1.14/lib/site-packages/tensorflow/lite/python/lite.py in convert(self)
    896           input_tensors=self._input_tensors,
    897           output_tensors=self._output_tensors,
--> 898           **converter_kwargs)
    899     else:
    900       result = _toco_convert_graph_def(

anaconda3/envs/tf-1.14/lib/site-packages/tensorflow/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, *args, **kwargs)
    402   data = toco_convert_protos(model_flags.SerializeToString(),
    403                              toco_flags.SerializeToString(),
--> 404                              input_data.SerializeToString())
    405   return data
    406

anaconda3/envs/tf-1.14/lib/site-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
    170       stderr = _try_convert_to_unicode(stderr)
--> 171       raise ConverterError(
    172           "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
    173     finally:
    174       # Must manually cleanup files.

ConverterError: TOCO failed. See console for info. |
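The converter's own error text in the report above points at the usual workaround: allow the extended (select TF ops) runtime so that BatchMatMul, Merge and Switch fall back to TensorFlow kernels. A sketch of that configuration, assuming the same frozen-graph inputs the reporter used; the file name and array names come from the reported traceback and are otherwise placeholders, and this is not a tested fix for this particular model:

```python
import tensorflow as tf

# Same frozen-graph setup as in the reported traceback (names assumed).
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'output_graph.pb',
    input_arrays=['input_tensor', 'input_state'],
    output_arrays=['output', 'output_state'])

# Per the converter's error message: enable the select-TF-ops runtime so
# unsupported ops are delegated to TensorFlow kernels instead of failing.
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
```

Note that the resulting model then requires a TFLite runtime built with the Flex/select-TF-ops delegate.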
tensorflowtensorflow | Documentation is missing an explanation of how to write custom loss functions for the Keras API | Bug | URL(s) with the issue: "Specify a loss, metrics, and an optimizer". Description of issue (what needs changing): the documentation is missing how to write custom loss functions, and how to use the keras.losses.Loss class to write a loss function that takes more inputs than just y_true and y_pred, despite the section saying it will show how to write custom losses and metrics. Currently it only shows how to write custom metrics and in-layer losses. Clear description: with this method, people can implement loss functions not shipped with TensorFlow, or their own variations of existing losses. It also allows losses without a keras.losses exposure, such as tf.nn.weighted_cross_entropy_with_logits, to be used from model.compile() without the need to write a custom training loop. Correct links: N/A. Parameters defined: N/A. Returns defined: N/A. Raises listed and defined: N/A. Usage example: there is no documentation, but a good example would port a loss function from tensorflow.nn into a class extending keras.losses.Loss, and show how to use a user-written function without the keras.losses.Loss class. Request visuals, if applicable: none are there, but a portion of code similar to the custom-metric code already shown would be a good visual. A custom loss function I wrote in Python follows:

    class WeightedBinaryCrossEntropy(keras.losses.Loss):
        """
        pos_weight: scalar affecting the loss contribution of the positive class.
        weight: scalar affecting the entire loss. The negative class alone can be
            up-weighted by passing 1/weight to pos_weight, then multiplying in
            whatever other scalar you have in mind via weight.
        """
        def __init__(self, pos_weight, weight, from_logits=False,
                     reduction=keras.losses.Reduction.AUTO,
                     name='weighted_binary_crossentropy'):
            super(WeightedBinaryCrossEntropy, self).__init__(reduction=reduction, name=name)
            self.pos_weight = pos_weight
            self.weight = weight
            self.from_logits = from_logits

        def call(self, y_true, y_pred):
            if not self.from_logits:
                with tf.name_scope('weighted_cross_entropy'):
                    # Manually calculate the weighted cross entropy,
                    # qz*log(sigmoid(x)) + (1-z)*log(1-sigmoid(x)),
                    # where z are the labels, x the logits and q the weight.
                    # Since the values passed have already been through a sigmoid,
                    # sigmoid(x) is replaced with y_pred here.
                    # 1e-6 is added as an epsilon to avoid passing zero into the log.
                    x_1 = y_true * self.pos_weight * tf.math.log(y_pred + 1e-6)
                    x_2 = (1 - y_true) * tf.math.log(1 - y_pred + 1e-6)
                    # Must be negated, as the expression above is maximized,
                    # before being passed to the optimizer.
                    return -tf.add(x_1, x_2) * self.weight
            # Use the built-in function when given logits.
            return tf.nn.weighted_cross_entropy_with_logits(
                y_true, y_pred, self.pos_weight) * self.weight

Submitting a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
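The arithmetic in the loss above can be checked without TensorFlow. The sketch below is a plain-Python restatement (an illustration, not the class in the issue) of the formula -(q·z·log σ(x) + (1-z)·log(1-σ(x))) that tf.nn.weighted_cross_entropy_with_logits implements; with pos_weight = 1 it reduces to ordinary binary cross-entropy.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def weighted_bce(z, x, pos_weight):
    """Loss = -(pos_weight*z*log(sigmoid(x)) + (1-z)*log(1-sigmoid(x)))."""
    p = sigmoid(x)
    return -(pos_weight * z * math.log(p) + (1 - z) * math.log(1 - p))

# With x = 0, sigmoid(x) = 0.5, so the loss is log(2) for either class.
assert abs(weighted_bce(1, 0.0, 1.0) - math.log(2)) < 1e-12
assert abs(weighted_bce(0, 0.0, 1.0) - math.log(2)) < 1e-12
# For a positive example, doubling pos_weight doubles the loss.
assert abs(weighted_bce(1, 1.5, 2.0) - 2 * weighted_bce(1, 1.5, 1.0)) < 1e-12
```

This also makes the epsilon in the issue's code easier to motivate: without it, a y_pred of exactly 0 or 1 would send log() to negative infinity.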
tensorflowtensorflow | Error: Failed to prepare for TPU. generic::failed_precondition: Custom op already assigned to a different TPU | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04 LTS. Mobile device: no. TensorFlow installed from (source or binary): Docker image latest-gpu-py3. TensorFlow version: 1.14.0. Python version: 3.6. CUDA/cuDNN version: 10.1. GPU model and memory: RTX 2080 Ti.

I'm trying to compile a TFLite model with the edgetpu_compiler for use with the Coral TPU stick. I have two versions of the .tflite file; the first one deliberately does not use TPU acceleration, to compare performance. I invoke the TFLiteConverter with the following configuration:

    converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(keras_file)
    converter.inference_input_type = tf.float32
    converter.inference_output_type = tf.float32
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    open(..., 'wb').write(tflite_model)

For the TPU version I chose the following options:

    converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(keras_file)
    converter.representative_dataset = representative_dataset_gen
    converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    open(..., 'wb').write(tflite_model)

To check the output of these I used the visualize.py script; the second graph has quantize nodes in it. To my understanding, I have to convert the second .tflite file with the edgetpu_compiler (the first one is, expectedly, not convertible, as it is not quantized). This seems to work, as the log file shows:

    Edge TPU Compiler version 2.0.258810407
    Input: linearmodel-1.14.0-uint8.tflite
    Output: linearmodel-1.14.0-uint8_edgetpu.tflite
    Operator          Count   Status
    FULLY_CONNECTED   1       Mapped to Edge TPU
    QUANTIZE          2       Mapped to Edge TPU

However, when I use it with the C++ API I run into an error while allocating tensors:

    ERROR: Failed to prepare for TPU. generic::failed_precondition: Custom op already assigned to a different TPU.
    ERROR: Node number 0 (edgetpu-custom-op) failed to prepare.

I have not found a solution to this problem. I can, however, load the non-compiled .tflite model, although I suspect this provides no time savings. I have uploaded all .tflite models, their HTML visualizations, and the edgetpu log with verbosity 10 to a GitHub repo of mine. Is this issue already known to you?
tensorflowtensorflow | conv2d_transpose params tensor shape differs from conv2d shape | Bug | System information: Have I written custom code: yes. OS platform and distribution: Ubuntu 16.04. Mobile device: not tested. TensorFlow installed from: binary. TensorFlow version: 1.13 and 1.12. Python version: 3.7.3 and 3.6.8. CUDA/cuDNN version: 10.0 / 7.3.1. GPU model and memory: NVIDIA driver 418.56, 11178 MiB.

Describe the current behavior: the kernel tensors for conv2d and conv2d_transpose layers in contrib.layers behave differently. The number of outputs determines the number of feature maps (filters) and sits at the last position, (h, w, in, out), for conv2d, while for conv2d_transpose layers it is (h, w, out, in), as the following code example shows. Is this behaviour wanted? When building models where the input and output dimensions differ, I need to manually transpose the last two dimensions in order for the model to be trainable.

Describe the expected behavior: the number of outputs for conv2d_transpose layers should sit at the same position as for usual conv2d layers. This problem is independent of the data format and of the save/loading procedure.

Code to reproduce the issue: please set a breakpoint at the last print and evaluate the shapes dictionary. You will see that the conv2d layer conv1 has kernel shape (3, 3, 1, 32), indicating that 1 is the depth (number of channels) of the input and 32 the number of filters (output feature maps). If you look at the conv2d_transpose layer up1, num_outputs is set to 16, but its kernel shape is (3, 3, 16, 32), indicating the input dimension is 16 even though it is 32 coming from the conv2d layer conv3 preceding it. I believe this shape should instead be (3, 3, 32, 16), because num_outputs for the conv2d_transpose layer is set to 16.

    import tensorflow as tf
    import tensorflow.contrib.layers as layers
    from tensorflow.examples.tutorials.mnist import input_data

    mnist = input_data.read_data_sets('tmp/data', one_hot=True)
    train_x = mnist.train.images
    train_y = mnist.train.labels
    test_x = mnist.test.images

    def shapes_of_built_model(layer_names):
        layer_name_and_shape = {}
        for name in layer_names:
            layer_name_and_shape[name] = tf.get_collection(
                tf.GraphKeys.GLOBAL_VARIABLES, scope=name)[0].shape
        return layer_name_and_shape

    def model(inputs):
        # encoder
        conv1 = layers.conv2d(inputs, num_outputs=32, kernel_size=(3, 3), scope='conv1')
        conv_str2 = layers.conv2d(conv1, num_outputs=32, kernel_size=(3, 3), stride=2, scope='conv_str2')
        conv2 = layers.conv2d(conv_str2, num_outputs=32, kernel_size=(3, 3), scope='conv2')
        encoded = layers.conv2d(conv2, num_outputs=32, kernel_size=(3, 3), stride=2, scope='encoded')
        conv3 = layers.conv2d(encoded, num_outputs=32, kernel_size=(3, 3), scope='conv3')
        upsample1 = layers.conv2d_transpose(conv3, num_outputs=16, kernel_size=3, stride=2, scope='up1')
        upsample2 = layers.conv2d_transpose(upsample1, num_outputs=32, kernel_size=3, stride=2, scope='up2')
        logits = layers.conv2d(upsample2, num_outputs=1, kernel_size=(3, 3), scope='logits', padding='SAME')
        decoded = tf.sigmoid(logits, name='reconstructed')
        return decoded

    with tf.name_scope('inputs'):
        x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='x')
        y = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='y')

    output_logits = model(x)

    with tf.variable_scope('train'):
        with tf.variable_scope('loss'):
            loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
                labels=y, logits=output_logits), name='loss')
        tf.summary.scalar('loss', loss)
        with tf.variable_scope('optimizer'):
            optimizer = tf.train.AdamOptimizer(learning_rate=0.05, name='Adam-op')
            optimizer = optimizer.minimize(loss)
        with tf.variable_scope('accuracy'):
            correct_prediction = tf.equal(tf.argmax(output_logits, 1), tf.argmax(y, 1),
                                          name='correct_pred')
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')
        tf.summary.scalar('accuracy', accuracy)
        with tf.variable_scope('prediction'):
            cls_prediction = tf.argmax(output_logits, axis=1, name='predictions')

    init = tf.global_variables_initializer()
    merged = tf.summary.merge_all()
    sess = tf.InteractiveSession()
    sess.run(init)

    names = ['conv1', 'conv_str2', 'conv2', 'encoded', 'conv3', 'up1', 'up2']
    shapes = shapes_of_built_model(names)
    print(shapes)  # set breakpoint here
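The two shape conventions described in the report can be stated without TensorFlow. The helper below is hypothetical (not a TF API), but it encodes the observed behaviour — conv2d kernels are (h, w, in, out) while contrib's conv2d_transpose kernels come out as (h, w, out, in) — and reproduces the shapes the breakpoint reveals.

```python
# Hypothetical helper encoding the shape convention the issue observes.
def conv2d_kernel_shape(k, in_ch, out_ch, transpose=False):
    h, w = (k, k) if isinstance(k, int) else k
    # conv2d: (h, w, in, out); conv2d_transpose: (h, w, out, in)
    return (h, w, out_ch, in_ch) if transpose else (h, w, in_ch, out_ch)

# conv1: 1 input channel, num_outputs=32 -> (3, 3, 1, 32)
assert conv2d_kernel_shape(3, 1, 32) == (3, 3, 1, 32)
# up1: 32 channels in from conv3, num_outputs=16 -> observed (3, 3, 16, 32),
# whereas the reporter expected (3, 3, 32, 16)
assert conv2d_kernel_shape(3, 32, 16, transpose=True) == (3, 3, 16, 32)
```

Seen this way, the transpose convention is a mirror of the forward one, which is why manually transposing the last two axes makes the checkpoints line up.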
tensorflowtensorflow | external/androidndk/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/.../arm-linux-androideabi/bin/ld: error: cannot open Foundation: No such file or directory | Bug | System information: OS platform and distribution: macOS 10.12.6. TensorFlow installed from (source or binary): source. TensorFlow version: v1.14.0. Python version: 3.7, installed using conda. Bazel version (if compiling from source): 2.5.0. Compiler version (clang --version output):

    Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
    Apple LLVM version 9.0.0 (clang-900.0.39.2)
    Target: x86_64-apple-darwin16.7.0
    Thread model: posix
    InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

Describe the problem: I downloaded the tag 1.14.0 from git and hit this problem after executing the following command:

    bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt=-std=c++11 --cpu=armeabi-v7a

Any other info / logs:

    ERROR: /Users/qbq/wzk/zhangshexin/tensorflow-1.14.0/tensorflow/contrib/android/BUILD:60:1: Linking of rule '//tensorflow/contrib/android:libtensorflow_inference.so' failed (Exit 1)
    external/androidndk/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/lib/gcc/arm-linux-androideabi/4.9.x/../../../../arm-linux-androideabi/bin/ld: error: cannot open Foundation: No such file or directory
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    Target //tensorflow/contrib/android:libtensorflow_inference.so failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 1318.638s, Critical Path: 169.36s
    INFO: 886 processes: 886 local.
    FAILED: Build did NOT complete successfully
tensorflowtensorflow | Failed to compile tensorflow/lite/experimental/ruy/pack_avx512.cc | Bug | System information: Have I written custom code: yes, but not in the failing part. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from (source or binary): source. TensorFlow version: master branch. Python version: 3.6. Bazel version: 0.21. GCC compiler version: gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Compiler output:

    tensorflow/lite/experimental/ruy/pack_avx512.cc: In function 'void ruy::{anonymous}::HalfPackFloatAvx512(const float*, const float*, int, int, int, float*, float*)':
    tensorflow/lite/experimental/ruy/pack_avx512.cc:343:41: error: cannot convert '__m512' {aka '__vector(16) float'} to '__m512i' {aka '__vector(8) long long int'} in assignment
       t0 = LoaduTwo(src_ptr0, src_ptr4);
    (the same error repeats at 344:41, 345:41 and 346:41 for
       t1 = LoaduTwo(src_ptr1, src_ptr5);
       t2 = LoaduTwo(src_ptr2, src_ptr6);
       t3 = LoaduTwo(src_ptr3, src_ptr7);)
    tensorflow/lite/experimental/ruy/pack_avx512.cc:363:9: error: '_mm256_storeu_epi32' was not declared in this scope
       _mm256_storeu_epi32(pack_ptr + 0 * 16, _mm512_castsi512_si256(r0));
    tensorflow/lite/experimental/ruy/pack_avx512.cc:363:9: note: suggested alternative: '_mm256_store_epi64'
    tensorflow/lite/experimental/ruy/pack_avx512.cc:382:55: error: cannot convert '__m512' {aka '__vector(16) float'} to '__m512i' {aka '__vector(8) long long int'} in assignment
       t0 = MaskLoaduTwo(row_mask, src_ptr0, src_ptr4);
    (the same error repeats at 383:55, 384:55 and 385:55 for
       t1 = MaskLoaduTwo(row_mask, src_ptr1, src_ptr5);
       t2 = MaskLoaduTwo(row_mask, src_ptr2, src_ptr6);
       t3 = MaskLoaduTwo(row_mask, src_ptr3, src_ptr7);)
    tensorflow/lite/experimental/ruy/pack_avx512.cc:402:9: error: '_mm256_storeu_epi32' was not declared in this scope
       _mm256_storeu_epi32(trailing_buf + 0 * 16, _mm512_castsi512_si256(r0));
    tensorflow/lite/experimental/ruy/pack_avx512.cc:402:9: note: suggested alternative: '_mm256_store_epi64'
    tensorflow/lite/experimental/ruy/pack_avx512.cc: In function 'void ruy::Pack8bitAvx512(const std::int8_t*, std::int8_t, const std::int8_t*, int, int, int, std::int8_t*, std::int32_t*)':
    tensorflow/lite/experimental/ruy/pack_avx512.cc:465:3: error: 'memset' was not declared in this scope
       memset(trailing_buf, 0, kTrailingBufSize * sizeof(std::int8_t));
    tensorflow/lite/experimental/ruy/pack_avx512.cc:465:3: note: suggested alternative: 'offsetof'
    tensorflow/lite/experimental/ruy/pack_avx512.cc:500:5: error: 'memcpy' was not declared in this scope
       memcpy(packed_ptr + layout.kCols * non_trailing_rows, trailing_buf, ...);
    tensorflow/lite/experimental/ruy/pack_avx512.cc:500:5: note: suggested alternative: 'mempcpy'
    tensorflow/lite/experimental/ruy/pack_avx512.cc: In function 'void ruy::PackFloatAvx512(const float*, const float*, int, int, int, float*)':
    tensorflow/lite/experimental/ruy/pack_avx512.cc:516:5: error: 'memset' was not declared in this scope
       memset(trailing_buf, 0, sizeof(trailing_buf));
    tensorflow/lite/experimental/ruy/pack_avx512.cc:516:5: note: suggested alternative: 'offsetof'
    tensorflow/lite/experimental/ruy/pack_avx512.cc:524:5: error: 'memcpy' was not declared in this scope
       memcpy(packed_ptr + 16 * non_trailing_rows, trailing_buf, ...);
    tensorflow/lite/experimental/ruy/pack_avx512.cc:524:5: note: suggested alternative: 'mempcpy'
tensorflowtensorflow | Add FP16 precision support to the TFLite external C API | Bug | System information: TensorFlow version you are using: master repository. Are you willing to contribute it (Yes/No): Yes. Describe the feature and the current behavior/state: add FP16 precision support to the TFLite external C API. Will this change the current API? How? This change has no influence on the current API. Who will benefit from this feature? TFLite external C API users. Any other info: here is my patch.
tensorflowtensorflow | Transfer learning trained by a custom TF 2.0 training loop performs worse than Keras fit() | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04.2 LTS. Mobile device: N/A. TensorFlow installed from (source or binary): pip. TensorFlow version: 2.0.0-beta1. Python version: 3.6.7. Bazel version: N/A. GCC compiler version: 7.4.0. CUDA/cuDNN version: 7.6.0. GPU model and memory: GTX 1660 Ti, 6 GB. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: (TF 1.0) python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"; (TF 2.0) python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)".

Describe the current behavior: I performed transfer learning on a pretrained model with a custom tf.GradientTape training loop and with Keras fit(). Both use the same settings, but the custom training loop performs much worse than Keras fit(). I have no idea what the problem is. I asked the question on Stack Overflow but did not get the answer I wanted.

Describe the expected behavior: the loss and accuracy of the model trained with tf.GradientTape should be similar to those of the model trained by Keras fit() with the same settings.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers

    physical_devices = tf.config.experimental.list_physical_devices('GPU')
    try:
        tf.config.experimental.set_memory_growth(physical_devices[0], True)
    except:
        pass

    cifar10 = tf.keras.datasets.cifar10
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()

    def process_data(img, lbl):
        img = tf.image.resize(img, (96, 96))
        img = (img - 128) / 128
        return img, lbl

    train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(50000).batch(128)
    test_data = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(128)
    train_data = train_data.map(process_data)
    test_data = test_data.map(process_data)

    # load the pretrained model
    base_model = keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                                include_top=False, pooling='avg')
    x = base_model.outputs[0]
    outputs = layers.Dense(10, activation=tf.nn.softmax)(x)
    model = keras.Model(inputs=base_model.inputs, outputs=outputs)

    # train with Keras fit()
    model.compile(optimizer=keras.optimizers.Adam(),
                  loss=keras.losses.SparseCategoricalCrossentropy(),
                  metrics=['accuracy'])
    history = model.fit(train_data, epochs=1)

The result is: loss: 0.4345 - accuracy: 0.8585.

    # train with tf.GradientTape
    optimizer = keras.optimizers.Adam()
    train_loss = keras.metrics.Mean()
    train_acc = keras.metrics.SparseCategoricalAccuracy()

    def train_step(data, labels):
        with tf.GradientTape() as gt:
            pred = model(data)
            loss = keras.losses.sparse_categorical_crossentropy(labels, pred)
        grads = gt.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_loss(loss)
        train_acc(labels, pred)

    model = keras.Model(inputs=base_model.inputs, outputs=outputs)
    for xs, ys in train_data:
        train_step(xs, ys)
    print('train_loss: {:.3f}, train_acc: {:.3f}'.format(
        train_loss.result(), train_acc.result()))

The result is: train_loss: 12.832, train_acc: 0.099.

Other info / logs: if the model is trained by tf.GradientTape with a smaller learning rate (0.0001; the default is 0.001) it works well (train_loss: 0.275, train_acc: 0.915), but that is not the real solution I expect, just a workaround.
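One plausible explanation for the learning-rate sensitivity (an assumption, not confirmed in the report): the loop calls the function form keras.losses.sparse_categorical_crossentropy, which returns a per-example loss vector, and tape.gradient of a non-scalar target sums its components, while fit() averages them. That makes every gradient roughly batch_size (128) times larger, which mimics a 128x larger learning rate. The arithmetic can be illustrated without TensorFlow, using a toy per-example loss l_i(w) = (w - x_i)^2:

```python
# d/dw of the summed loss is exactly n times d/dw of the mean loss.
def grad_sum(w, xs):
    return sum(2 * (w - x) for x in xs)  # d/dw sum_i (w - x_i)^2

def grad_mean(w, xs):
    return grad_sum(w, xs) / len(xs)

xs = [0.5, 1.0, 2.0, 4.0]
assert abs(grad_sum(1.0, xs) - len(xs) * grad_mean(1.0, xs)) < 1e-12
# With batch size 128, an unreduced (summed) loss scales the gradient
# by ~128, which is why lr=0.0001 behaves like fit() with lr~0.0128.
```

Under this hypothesis, reducing the loss with tf.reduce_mean inside the tape (or using the keras.losses.SparseCategoricalCrossentropy class, which applies its own reduction) would be the fix rather than shrinking the learning rate.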
tensorflowtensorflow | TF 2.0 memory leak caused by autograph retracing due to bound-method argument | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Fedora 30. TensorFlow installed from (source or binary): binary. TensorFlow version: v2.0.0-beta0-16-g1d91213fe7, 2.0.0-beta1. Python version: 3.5.7.

Describe the current behavior: factoring out a train-step function into our custom model leads to a memory leak, due to continuously retracing the full autograph epoch/dataset loop. The root cause is that bound instance methods are not identical: model.step is not model.step.

Describe the expected behavior: according to issue 36175 ("identity of bound methods") on the Python tracker, this is expected behavior in Python, so I merely want to clarify the autograph behaviour and document it for other TensorFlow users. I guess there might be reasons to use identity (is) comparison on arguments when deciding whether an autograph needs retracing, but if it were possible to use equality (==) comparison for arguments that are methods, this surprising behaviour could be avoided.

Code to reproduce the issue:

    import tensorflow as tf

    @tf.function
    def run_epoch(dataset, step):
        print('retrace!')
        for x in dataset:
            step(x)

    class Model:
        def step(self, x):
            return x * 2

    dataset = tf.data.Dataset.from_tensor_slices(list(range(128)))
    model = Model()
    for i in range(20):
        # Leads to a retrace of run_epoch on every call, due to non-identity:
        # model.step is not model.step
        run_epoch(dataset, model.step)

Other info / logs: possible workarounds are to store the bound method in a variable and thus keep using the same instance (step_fn = model.step), or to declare the method as a staticmethod and pass the model instance explicitly as the first parameter.
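The Python-level behaviour behind the leak can be demonstrated without TensorFlow: every attribute access on an instance creates a fresh bound-method object, so identity comparison fails while equality succeeds, and caching the method once restores a stable identity (this is the first workaround the report mentions).

```python
class Model:
    def step(self, x):
        return x * 2

m = Model()
# Each attribute access builds a new bound-method wrapper around the
# same underlying function, so identity differs on every access...
assert m.step is not m.step
# ...while equality compares __self__ and __func__, which match:
assert m.step == m.step
assert m.step.__func__ is Model.step

# Caching the bound method once gives a single stable object, which is
# why passing step_fn instead of model.step avoids the retrace:
step_fn = m.step
assert step_fn is step_fn
assert step_fn(3) == 6
```

This is why an identity-keyed trace cache sees a "new" callable on every call of run_epoch(dataset, model.step), while run_epoch(dataset, step_fn) hits the cache.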
tensorflowtensorflow | TF 1.14 Keras Model throws exception when input is not the deepest node | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 16.04. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version: 1.14.0. Python version: 3.6.9. Bazel version: N/A. GCC compiler version: N/A. CUDA/cuDNN version: 10.1. GPU model and memory: Titan V.

Describe the current behavior: TensorFlow 1.14 introduces a bug in keras.Model. The specific change is the introduction of lines L849-L850: they assume that a model's inputs will be deeper than any other node. When that is not true, any node whose depth is equal to or greater than the inputs' does not get evaluated, producing an exception. The problem is demonstrated by the following script: it creates a model in which an input and a variable have the same depth. In TensorFlow 1.13 this runs correctly, but in 1.14 it throws an exception:

    Traceback (most recent call last):
      File "test.py", line 17, in <module>
        print(model(model.inputs))
      File "/home/peastman/miniconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 634, in __call__
        outputs = call_fn(inputs, *args, **kwargs)
      File "/home/peastman/miniconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 751, in call
        return self._run_internal_graph(inputs, training=training, mask=mask)
      File "/home/peastman/miniconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 903, in _run_internal_graph
        assert str(id(x)) in tensor_dict, 'Could not compute output ' + str(x)
    AssertionError: Could not compute output Tensor("add/add:0", shape=(1,), dtype=float32)

Describe the expected behavior: calling the model should return a tensor, not throw an exception.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

    import tensorflow as tf
    import tensorflow.keras.layers as layers

    class Variable(layers.Layer):
        def __init__(self, initial_value, **kwargs):
            super(Variable, self).__init__(**kwargs)
            self.var = tf.Variable(initial_value)

        def call(self, inputs):
            return self.var

    var = Variable(1.0)
    inputs = layers.Input(shape=(1,))
    output = layers.add([inputs, var])
    model = tf.keras.Model(inputs=inputs, outputs=output)
    print(model(model.inputs))

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | 404 error during Oxford pets tutorial dataset loading | Bug | System information: OS platform and distribution: Windows 10. TensorFlow installed from: binary. TensorFlow version: GPU 2.0 beta. Python version: 3.6, installed using pip. CUDA/cuDNN version: 10.0 / 7.5. GPU model and memory: NVIDIA GTX 1050, 4 GB.

Describe the problem (problematic tutorial: the segmentation tutorial): with the code copied from the tutorial, when loading the Oxford pets dataset from tensorflow_datasets I get: tensorflow_datasets.core.download.downloader.DownloadError: Failed to get url .../~vgg/data/pets/data/images.tar.gz. HTTP code: 404. I can visit the site in a web browser with no problem and can even download the dataset.

The exact sequence of commands executed before running into the problem:

    from __future__ import absolute_import, division, print_function, unicode_literals
    import tensorflow as tf
    from tensorflow_examples.models.pix2pix import pix2pix
    import tensorflow_datasets as tfds
    tfds.disable_progress_bar()
    from IPython.display import clear_output
    import matplotlib.pyplot as plt

    # problematic line
    dataset, info = tfds.load('oxford_iiit_pet:3.0.0', with_info=True)

Any other info / logs: tensorflow_datasets.core.download.downloader.DownloadError: Failed to get url .../~vgg/data/pets/data/images.tar.gz. HTTP code: 404. Am I doing something wrong, or has the site structure changed?
tensorflowtensorflow | Cannot use large dimensions in Embedding layer on GPUs | Bug | I am using the latest TF 2.0 nightly build and trying to train an LSTM model for text classification on a very large dataset of 16,455,928 sentences. For the embedding layer I have a vocabulary size of 366,856, and I used 1000 as the embedding dimension, at which the two GPUs (Tesla T4, from Google) run out of memory. Since I cannot lower the vocabulary size, I instead used a lower embedding dimension (100), at which the model starts training. My question: is there a way I can use a higher embedding dimension, perhaps by putting sets of my model's layers on different GPUs? If so, what is the way to do that in TF 2.0? Also, will using more GPUs help? Thank you.
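A back-of-envelope estimate makes the OOM in this report plausible. The sketch below (plain arithmetic, using the numbers from the issue; the 3x factor for Adam-style optimizer slots is a rough assumption) shows the embedding table alone approaches the memory of a single T4-class GPU once optimizer state is counted:

```python
# float32 weights = 4 bytes each.
vocab_size, embed_dim = 366_856, 1000
table_bytes = vocab_size * embed_dim * 4          # weights only
table_gib = table_bytes / 2**30                    # ~1.37 GiB

# Adam keeps two extra slots (m and v) per weight, so training holds
# roughly 3x the table, before activations and gradients are counted.
training_gib = 3 * table_bytes / 2**30

assert 1.3 < table_gib < 1.4
assert training_gib > 4.0
```

With embed_dim = 100 the same table is ~0.14 GiB, consistent with the model starting to train at the lower dimension. Splitting layers across GPUs (model parallelism) is one way to fit the larger table; adding more GPUs with plain data parallelism replicates the table on each device and does not reduce per-GPU memory.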
tensorflowtensorflow | Quantization for SplitV | Bug | System information: OS platform and distribution: Ubuntu 16.04. TensorFlow installed from (source or binary): pip install tensorflow==1.14. TensorFlow version (or github SHA if from source): 1.14.0.

The text output from tflite_convert:

    tflite_convert --output_file=quantized.tflite --graph_def_file=mixnet.pb --inference_type=QUANTIZED_UINT8 --input_arrays=truediv --output_arrays=Softmax --mean_values=128 --std_dev_values=127 --default_ranges_min=0 --default_ranges_max=6

Also, please include a link to the GraphDef or the model if possible. Any other info / logs:

    W tensorflow/lite/toco/graph_transformations/quantize.cc:132 Constant array mixnet_s/mixnet_model/blocks_0/conv2d/kernel lacks MinMax information. To make up for that, we will now compute the MinMax from actual array elements. That will result in quantization parameters that probably do not match whichever arithmetic was used during training, and thus will probably be a cause of poor inference accuracy.
    2019-07-30 01:54:53.127377: F tensorflow/lite/toco/toco_tooling.h:38 Check failed: s.ok() Unimplemented: this graph contains an operator of type SplitV for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).
    Fatal Python error: Aborted
tensorflowtensorflow | Workers are out of sync with MultiWorkerMirroredStrategy | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Darwin 18.0.0 x86_64 i386 64-bit. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.0-dev20190729. Python version: 3.6.8.

Describe the current behavior: I followed the guide to try MultiWorkerMirroredStrategy with Keras. As I understand it, training should be synchronized across workers, but after I start training as follows:

    python example_tf2_local.py 0 chief
    python example_tf2_local.py 0 worker

I find the worker and chief train at different paces. E.g. if I start the chief first, the chief does not wait for the worker; it just starts its own training. If I start worker and chief at the same time, I still sometimes see one fall a few epochs behind the other. But as the documentation states, MultiWorkerMirroredStrategy implements synchronous distributed training across multiple workers. The same happens if I simply start two workers without a chief.

Describe the expected behavior: according to the documentation, MultiWorkerMirroredStrategy implements synchronous distributed training across multiple workers, so the workers need to be in sync during training.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

    from __future__ import absolute_import, division, print_function, unicode_literals
    import datetime
    import json
    import os
    import tensorflow_datasets as tfds
    import tensorflow as tf
    import subprocess
    import shlex
    import sys

    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
    tfds.disable_progress_bar()

    BUFFER_SIZE = 60000
    BATCH_SIZE = 64
    NUM_WORKERS = 2
    GLOBAL_BATCH_SIZE = NUM_WORKERS * BATCH_SIZE

    if __name__ == '__main__':
        worker_addr = 'localhost:9999'
        chief_addr = 'localhost:9998'
        os.environ['TF_CONFIG'] = json.dumps({
            'cluster': {'worker': [worker_addr], 'chief': [chief_addr]},
            'task': {'type': sys.argv[2], 'index': int(sys.argv[1])}
        })
        print('TF_CONFIG:', os.environ['TF_CONFIG'])

        def scale(image, label):
            image = tf.cast(image, tf.float32)
            image /= 255
            return image, label

        def build_and_compile_cnn_model():
            model = tf.keras.Sequential([
                tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
                tf.keras.layers.MaxPooling2D(),
                tf.keras.layers.Flatten(),
                tf.keras.layers.Dense(64, activation='relu'),
                tf.keras.layers.Dense(10, activation='softmax')
            ])
            model.compile(
                loss=tf.keras.losses.sparse_categorical_crossentropy,
                optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
                metrics=['accuracy'])
            return model

        datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
        train_datasets_unbatched = datasets['train'].map(scale).shuffle(BUFFER_SIZE)
        train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)
        with strategy.scope():
            multi_worker_model = build_and_compile_cnn_model()
        multi_worker_model.fit(x=train_datasets, epochs=100)

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
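One detail worth checking in a repro like the one above: each process must export a complete, consistent TF_CONFIG before the strategy is constructed (in the script above, the strategy is created at import time, before TF_CONFIG is set, which may matter). The cluster-spec construction itself can be sketched and sanity-checked without TensorFlow; the addresses and task indices below mirror the issue's hypothetical localhost setup.

```python
import json
import os

# Sketch: build the TF_CONFIG each process must export before creating
# MultiWorkerMirroredStrategy. Addresses/indices are hypothetical.
def make_tf_config(task_type, index):
    return json.dumps({
        "cluster": {
            "worker": ["localhost:9999"],
            "chief": ["localhost:9998"],
        },
        "task": {"type": task_type, "index": index},
    })

os.environ["TF_CONFIG"] = make_tf_config("chief", 0)
cfg = json.loads(os.environ["TF_CONFIG"])
assert cfg["task"] == {"type": "chief", "index": 0}
assert set(cfg["cluster"]) == {"worker", "chief"}
```

If the processes create their strategies before TF_CONFIG is populated, each one can resolve to a single-worker cluster and train independently, which would match the unsynchronized behaviour described.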
tensorflowtensorflow | TensorFlow 1.14 Keras functional API, mixed with ops using placeholders, throws InvalidArgumentError: You must feed a value for placeholder tensor | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version: 1.14.0. Python version: 3.6.8. Bazel version: N/A. GCC compiler version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: a Keras layer that has an input dependent on a TensorFlow placeholder throws an InvalidArgumentError at op-creation time, asking for a value to be fed for the placeholder. Specifically, this happens during the base_layer_utils create_keras_history input step in the __call__ function of the layer, where the inputs to the Keras layer are passed through a GraphExecutionFunction object made during backend.function(op_inputs). This exception is new in TensorFlow 1.14.0.

Describe the expected behavior: as in previous versions of TensorFlow, I would not expect the InvalidArgumentError to be thrown while I am still building the graph, mixing Keras with TensorFlow.

Code to reproduce the issue:

    import tensorflow as tf

    def compile_error():
        tf_graph = tf.Graph()
        with tf_graph.as_default():
            image = tf.keras.Input(shape=[224, 224, 3], dtype=tf.float32, name='image')
            scale = tf.placeholder(dtype=tf.float32, shape=[], name='scale')
            scaled_image = image * scale
            conv = tf.keras.layers.Conv2D(filters=32, kernel_size=3,
                                          name='conv2d')(scaled_image)
            # Errors on the call here, due to create_keras_history making
            # a GraphExecutionFunction:
            # tensorflow.python.framework.errors_impl.InvalidArgumentError:
            # You must feed a value for placeholder tensor 'scale' with
            # dtype float [[node scale]]

    def compile_succeeds():
        tf_graph = tf.Graph()
        with tf_graph.as_default():
            image = tf.keras.Input(shape=[224, 224, 3], dtype=tf.float32, name='image')
            scale = tf.placeholder(dtype=tf.float32, shape=[], name='scale')
            scaled_image = tf.keras.layers.Lambda(
                function=lambda tensors: tensors[0] * tensors[1])([image, scale])
            conv = tf.keras.layers.Conv2D(filters=32, kernel_size=3,
                                          name='conv2d')(scaled_image)
            # This succeeds.

    if __name__ == '__main__':
        try:
            compile_error()
            print('Functional API compilation succeeded')
        except Exception as e:
            print('Functional API compilation error:', e)
        compile_succeeds()
        print('Explicit Keras Lambda layer compilation succeeded')

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | TF 2.0: gRPC error in TPUStrategy.experimental_distribute_dataset | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. TensorFlow installed from (source or binary): binary. TensorFlow version: TF 2.0 beta 1. Python version: 3.5. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: (1) TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"; (2) TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)".

Describe the current behavior: The following error is raised:

InternalError: Failed copying input tensor from /job:worker/replica:0/task:0/device:CPU:0 to /job:worker/replica:0/task:1/device:CPU:0 in order to run ExperimentalAutoShardDataset: Unable to parse tensor proto. Additional GRPC error information: {"created":"@1564422681.083878500","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Unable to parse tensor proto","grpc_status":3} [Op:ExperimentalAutoShardDataset]

Describe the expected behavior: Works without any error.

Code to reproduce the issue: Running the following on Colab produces the error:

!pip3 install tensorflow==2.0.0b1 > /dev/null

import tensorflow as tf
import os

cluster = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_host(cluster.get_master())
tf.tpu.experimental.initialize_tpu_system(cluster)
strategy = tf.distribute.experimental.TPUStrategy(cluster)

dataset = tf.data.Dataset.range(100).batch(16)
distributed_dataset = strategy.experimental_distribute_dataset(dataset)

cc @srjoglekar246 @vbardiovskyg |
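The ExperimentalAutoShardDataset op that fails above belongs to tf.data's multi-worker auto-sharding. Conceptually, index-based sharding gives each worker every num_shards-th element of the dataset, the same semantics as the documented tf.data.Dataset.shard(num_shards, index). A minimal pure-Python sketch of that policy (the function name and list-based form are illustrative, not the TensorFlow implementation):

```python
def shard(elements, num_shards, index):
    """Keep every num_shards-th element starting at `index`,
    mirroring tf.data.Dataset.shard(num_shards, index) semantics."""
    if not 0 <= index < num_shards:
        raise ValueError("index must be in [0, num_shards)")
    return [x for i, x in enumerate(elements) if i % num_shards == index]

# Two workers splitting range(100), as in the reproduction above:
worker0 = shard(range(100), 2, 0)  # elements 0, 2, 4, ...
worker1 = shard(range(100), 2, 1)  # elements 1, 3, 5, ...
```

Note that with this policy the shards are disjoint and together cover the dataset, which is why auto-sharding must move tensors between worker tasks in the first place.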
tensorflowtensorflow | tf.concat throws an error after another call of tf.concat if values is a single tensor or a list of length 1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution: macOS 10.13.6. TensorFlow installed from: pip. TensorFlow version: v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1. Python version: v3.6.7:6ec5cf24b7, Oct 20 2018, 03:02:14.

Describe the current behavior: Assume that I have already made a previous call of tf.concat. When calling tf.concat again with a values argument which is either a list of Tensor objects of length 1 or a single Tensor object, I get the following error: "Duplicate node name in graph: 'concat'". It seems to be a name conflict.

Describe the expected behavior: From the tf.concat documentation: "values: A list of Tensor objects or a single Tensor." tf.concat should not throw an error when values is a list of length 1 or a single Tensor object, and no name conflict should arise.

Code to reproduce the issue:

import tensorflow as tf
from tensorflow.keras import Input

print('Using TensorFlow version: {}; git version: {}'.format(tf.version.VERSION, tf.version.GIT_VERSION))

i = Input(shape=[3])
j = Input(shape=[4])

try:
    print(tf.concat([i, j], axis=1))
except Exception as e:
    print(type(e)); print(e)
try:
    print(tf.concat([i, j], axis=1))
except Exception as e:
    print(type(e)); print(e)
try:
    print(tf.concat([i], axis=1))
except Exception as e:
    print(type(e)); print(e)
try:
    print(tf.concat(i, axis=1))
except Exception as e:
    print(type(e)); print(e)

which outputs:

Using TensorFlow version: 2.0.0-beta1; git version: v2.0.0-beta0-16-g1d91213fe7
Tensor("concat:0", shape=(None, 7), dtype=float32)
Tensor("concat_1:0", shape=(None, 7), dtype=float32)
<class 'ValueError'> Duplicate node name in graph: 'concat'
<class 'ValueError'> Duplicate node name in graph: 'concat'

If I comment out both print(tf.concat([i, j], axis=1)) lines, then print(tf.concat([i], axis=1)) does not fail, but print(tf.concat(i, axis=1)) does. If I also comment out print(tf.concat([i], axis=1)), then print(tf.concat(i, axis=1)) resolves without error.

Other info / logs: Full error log:

InvalidArgumentError                      Traceback (most recent call last)
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1550     try:
-> 1551       c_op = c_api.TF_FinishOperation(op_desc)
   1552     except errors.InvalidArgumentError as e:
InvalidArgumentError: Duplicate node name in graph: 'concat'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 print(tf.concat([i], axis=1))
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py, line 180, in wrapper: return target(*args, **kwargs)
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py, line 1284, in concat: return identity(values[0], name=scope)
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py, line 180, in wrapper: return target(*args, **kwargs)
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py, line 86, in identity: ret = gen_array_ops.identity(input, name=name)
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py, line 4253, in identity: "Identity", input=input, name=name
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py, line 788, in _apply_op_helper: op_def=op_def
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py, line 465, in create_op: compute_device=compute_device
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py, line 507, in new_func: return func(*args, **kwargs)
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/framework/ops.py, line 3296, in create_op: op_def=op_def
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/framework/ops.py, line 1713, in __init__: self._c_op = ops._create_c_op(self._graph, node_def, grouped_inputs, control_input_ops)
Documents/beta1/lib/python3.6/site-packages/tensorflow/python/framework/ops.py, line 1554, in _create_c_op: raise ValueError(str(e))  # convert to ValueError for backwards compatibility
ValueError: Duplicate node name in graph: 'concat' |
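The "Duplicate node name in graph: 'concat'" error arises because graph-mode ops need unique names, and repeated base names are normally uniquified with a _1, _2, ... suffix (visible above in "concat:0" vs "concat_1:0"). A small sketch of that uniquification scheme (illustrative only, not the actual TensorFlow code), which also suggests the workaround of passing an explicit unique name to tf.concat:

```python
class NameUniquifier:
    """Append _1, _2, ... to repeated base names, as TF graphs do."""
    def __init__(self):
        self._counts = {}

    def unique(self, base):
        n = self._counts.get(base, 0)
        self._counts[base] = n + 1
        return base if n == 0 else f"{base}_{n}"

u = NameUniquifier()
names = [u.unique("concat") for _ in range(3)]
# names == ["concat", "concat_1", "concat_2"]
```

The bug report then amounts to: the single-tensor fast path of tf.concat emits an Identity op without running the name through this uniquification, so a second op reuses the literal name 'concat'.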
tensorflowtensorflow | TF 2.0: how to globally force CPU? | Bug | In TF 1.x it was possible to force CPU-only execution with config = tf.ConfigProto(device_count={'GPU': 0}). However, ConfigProto doesn't exist in TF 2.0, and changing an OS environment variable seems very clunky. What's the TF 2.0 way of doing this? |
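One TF 2.0-era replacement for device_count={'GPU': 0} is to hide the GPUs before TensorFlow initializes them, either via tf.config.experimental.set_visible_devices([], 'GPU') or by setting CUDA_VISIBLE_DEVICES before importing TensorFlow. A hedged sketch of the environment-variable route (the tf.config call is left commented since it needs an actual TensorFlow install to run):

```python
import os

# Must be set before `import tensorflow` for it to take effect;
# "-1" makes the CUDA runtime report no usable GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# Alternatively, after import (TF 2.0 API; sketch, not run here):
# import tensorflow as tf
# tf.config.experimental.set_visible_devices([], "GPU")
```

The env-var route is indeed the "clunky" option the report mentions; the tf.config call is the programmatic equivalent, but it must run before any GPU has been initialized.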
tensorflowtensorflow | Cannot run build_ios_universal_lib.sh | Bug | Following the docs, I am running tensorflow/lite/tools/make/build_ios_universal_lib.sh. However, when I run it, it gives me this error:

Undefined symbols for architecture x86_64:
  "tflite::ResourceVariable::ResourceVariable()", referenced from:
      tflite::Interpreter::Interpreter(...) in benchmark_lib.a(interpreter.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [/Users/broccoli/Downloads/tensorflow-master/tensorflow/lite/tools/make/gen/ios_x86_64/bin/benchmark_model] Error 1

Is there any way to fix this error? |
tensorflowtensorflow | What is the right way to use intra_op_parallelism_threads and inter_op_parallelism_threads? | Bug | Hi. I created a session with tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1). When I run the session I use the top command to observe the situation, but I find the program still uses ~1700% CPU. Why does this happen? What's the right way to control the number of cores/threads used by TensorFlow? Thx. |
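One plausible explanation for the ~1700% CPU reading (an assumption, not a confirmed diagnosis for this report) is that libraries TensorFlow links against, such as OpenMP/MKL kernels, maintain their own thread pools that ConfigProto does not govern; those are capped via environment variables set before import. A sketch of that setup, with the TF 1.x session code from the report kept as comments since it needs a real install to run:

```python
import os

# Cap thread pools configured outside of ConfigProto; these must be
# set before TensorFlow (or anything loading MKL/OpenMP) is imported.
os.environ["OMP_NUM_THREADS"] = "1"   # OpenMP / MKL compute kernels
os.environ["MKL_NUM_THREADS"] = "1"

# Then, in TF 1.x as in the report (sketch, not run here):
# import tensorflow as tf
# config = tf.ConfigProto(intra_op_parallelism_threads=1,
#                         inter_op_parallelism_threads=1)
# sess = tf.Session(config=config)
```

intra_op controls parallelism inside a single op, inter_op controls how many ops run concurrently; neither reaches into pools owned by external BLAS/OpenMP runtimes.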
tensorflowtensorflow | Model does not converge during training with a distribute strategy and tf.py_function in dataset.map | Bug | System information: Have I written custom code: Yes. OS platform and distribution: Windows 10 Pro. TensorFlow installed from: binary (pip). TensorFlow version: 2.0.0-beta1. Python version: 3.6.8. CUDA/cuDNN version: 10.0. GPU model and memory: Tesla K80.

Describe the current behavior: The model does not converge and training is extremely slow with a distribute strategy if tf.py_function is called in dataset.map. It works fine without a distribute strategy, or when the map function is implemented without tf.py_function.

Describe the expected behavior: Successful training with a distribute strategy when tf.py_function is called in dataset.map.

Code to reproduce the issue (reproducible in Colaboratory with TF 1.14.0):

import tensorflow as tf
from tensorflow import keras

def get_keys():
    return list(range(-1000, 1000))

# Generate positive features and label.
def generate_positive():
    x = tf.random.uniform(shape=[], minval=0.0, maxval=20.0)
    y = tf.random.uniform(shape=[], minval=0.0, maxval=20.0)
    label = tf.constant(1)
    return x, y, label

# Generate negative features and label.
def generate_negative():
    x = tf.random.uniform(shape=[], minval=40.0, maxval=100.0)
    y = tf.random.uniform(shape=[], minval=40.0, maxval=100.0)
    label = tf.constant(0)
    return x, y, label

# Generate pos/neg features and label.
def generate_pf(key):
    if key > 1:
        x = tf.random.uniform(shape=[], minval=0.0, maxval=20.0)
        y = tf.random.uniform(shape=[], minval=0.0, maxval=20.0)
        label = tf.constant(1)
    else:
        x = tf.random.uniform(shape=[], minval=40.0, maxval=100.0)
        y = tf.random.uniform(shape=[], minval=40.0, maxval=100.0)
        label = tf.constant(0)
    x = tf.math.sign(tf.random.uniform(shape=[], minval=-1.0, maxval=1.0)) * tf.abs(x)
    y = tf.math.sign(tf.random.uniform(shape=[], minval=-1.0, maxval=1.0)) * tf.abs(y)
    features = tf.stack([x, y])
    return features, label

# Dataset map function; works with or without a distribute strategy.
def map_tf(key):
    x, y, label = tf.cond(tf.greater(key, 1), generate_positive, generate_negative)
    x = tf.math.sign(tf.random.uniform(shape=[], minval=-1.0, maxval=1.0)) * tf.abs(x)
    y = tf.math.sign(tf.random.uniform(shape=[], minval=-1.0, maxval=1.0)) * tf.abs(y)
    features = tf.stack([x, y])
    return features, label

# Dataset map function; works only without a distribute strategy.
def map_py_func(key):
    features, label = tf.py_function(func=generate_pf, inp=[key], Tout=[tf.float32, tf.int32])
    # TODO: skip shape setting if training without a distribute strategy.
    features.set_shape([2])
    label.set_shape([1])
    return features, label

def get_dataset():
    keys = get_keys()
    dataset = tf.data.Dataset.from_tensor_slices(keys)
    dataset = dataset.repeat()
    dataset = dataset.shuffle(buffer_size=len(keys))
    # Replace map_py_func with map_tf to have successful training.
    dataset = dataset.map(lambda key: map_py_func(key), num_parallel_calls=tf.data.experimental.AUTOTUNE)
    dataset = dataset.batch(100)
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    return dataset

def get_model():
    x_input = keras.Input([2])
    x = keras.layers.Dense(units=4, activation='relu')(x_input)
    x = keras.layers.Dense(units=4, activation='relu')(x)
    x = keras.layers.Dense(units=1, activation='sigmoid')(x)
    return keras.Model(inputs=x_input, outputs=x)

server = tf.distribute.Server.create_local_server()
train_dataset = get_dataset()
val_dataset = get_dataset()
# strategy = tf.distribute.OneDeviceStrategy("/gpu:0")
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"],
                                          cross_device_ops=tf.distribute.ReductionToOneDevice())
with strategy.scope():
    model = get_model()
    optimizer = keras.optimizers.Adam(learning_rate=1e-2)
    model.compile(optimizer=optimizer,
                  loss=keras.losses.BinaryCrossentropy(name='loss'),
                  metrics=[keras.metrics.BinaryAccuracy(name='accuracy')],
                  run_eagerly=False)
model.fit(train_dataset, initial_epoch=0, epochs=20, steps_per_epoch=50,
          validation_data=val_dataset, validation_steps=1, validation_freq=1)

Other info / logs:

  1/100 - ETA: 11:27 - loss: 15.6796 - accuracy: 0.5120
  2/100 - ETA: 6:00 - loss: 18.0561 - accuracy: 0.4938
  3/100 - ETA: 4:10 - loss: 19.7856 - accuracy: 0.4705
  4/100 - ETA: 3:14 - loss: 18.6577 - accuracy: 0.4707
  5/100 - ETA: 2:42 - loss: 17.9387 - accuracy: 0.4689
  6/100 - ETA: 2:19 - loss: 17.1595 - accuracy: 0.4741
  7/100 - ETA: 2:04 - loss: 16.9366 - accuracy: 0.4744
  8/100 - ETA: 1:53 - loss: 16.6326 - accuracy: 0.4814
  9/100 - ETA: 1:44 - loss: 16.1086 - accuracy: 0.4825
 10/100 - ETA: 1:36 - loss: 15.7002 - accuracy: 0.4848 |
tensorflowtensorflow | Converting to tflite: Unexpected value for attribute 'data_format'. Expected 'NHWC' | Bug | I'm trying to convert a frozen TensorFlow graph (see model here). I found a lot of related bug reports on this issue, but none of them was actually solved: #30411, #7967, #24491.

System information: OS platform and distribution: Linux Ubuntu 19.04 64-bit. TensorFlow installed from (source or binary): binary (CPU). TensorFlow version: v1.14.0-rc1-22-gaf24dc9 1.14.0. Python version: 3.7.3.

Describe the current behavior: Running in Jupyter I get the following error:

ConverterError: TOCO failed. See console for info.
2019-07-28 21:08:26.912035: F tensorflow/lite/toco/import_tensorflow.cc:2619] Check failed: status.ok() Unexpected value for attribute 'data_format'. Expected 'NHWC'
Fatal Python error: Aborted
Current thread 0x00007fc8ee681740 (most recent call first):
  File "/home/paul/anaconda3/envs/openvino/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33 in execute
  File "/home/paul/.local/lib/python3.7/site-packages/absl/app.py", line 251 in _run_main
  File "/home/paul/.local/lib/python3.7/site-packages/absl/app.py", line 300 in run
  File "/home/paul/anaconda3/envs/openvino/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40 in run
  File "/home/paul/anaconda3/envs/openvino/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59 in main
  File "/home/paul/anaconda3/envs/openvino/bin/toco_from_protos", line 10 in <module>
Aborted (core dumped)

Run over shell:

Traceback (most recent call last):
  File "tensorflow_issue_tflite.py", line 10, in <module>
    tflite_model = converter.convert()
  File "/home/paul/anaconda3/envs/openvino/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 898, in convert
    **converter_kwargs)
  File "/home/paul/anaconda3/envs/openvino/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 404, in toco_convert_impl
    input_data.SerializeToString())
  File "/home/paul/anaconda3/envs/openvino/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-07-28 21:35:22.220584: F tensorflow/lite/toco/import_tensorflow.cc:2619] Check failed: status.ok() Unexpected value for attribute 'data_format'. Expected 'NHWC'
Fatal Python error: Aborted
Current thread 0x00007fae1bce0740 (most recent call first): (same frames as above)
Aborted (core dumped)

Code to reproduce the issue:

import tensorflow as tf

graph_def_file = 'model/lcnn_29v2_cpu.pb'
input_arrays = ['0']
output_arrays = ['MatMul']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open('model/lcnn_29v2.tflite', 'wb').write(tflite_model)

URL |
tensorflowtensorflow | TFLite: No implementation found for long org.tensorflow.lite.NativeInterpreterWrapper.createErrorReporter(int) | Bug | System information: OS platform and distribution: Android 5.1.1 (API 22). Mobile device: Xiaomi Redmi 3. TensorFlow installed from: official binary. TensorFlow version: TensorFlow Lite 1.14.0.

Describe the current behavior: TensorFlow Lite 1.13.1 works fine on all devices I tested, whereas TensorFlow Lite 1.14.0 is broken on the Xiaomi Redmi 3 (Android 5.1.1, API 22); other devices are OK. I get a runtime error when the Interpreter is created.

Describe the expected behavior: No error.

Code to reproduce the issue:

interpreter = new Interpreter(tfliteModel, null);

Other info / logs:

W/linker: /data/app/eu.yesse.readerdemo.debug-2/lib/arm64/libtensorflowlite_jni.so: unused DT entry: type 0x6ffffffe arg 0x2020
          /data/app/eu.yesse.readerdemo.debug-2/lib/arm64/libtensorflowlite_jni.so: unused DT entry: type 0x6fffffff arg 0x3
E/art: dlopen("/data/app/eu.yesse.readerdemo.debug-2/lib/arm64/libtensorflowlite_jni.so", RTLD_LAZY) failed: dlopen failed: cannot locate symbol "__register_atfork" referenced by "/data/app/eu.yesse.readerdemo.debug-2/lib/arm64/libtensorflowlite_jni.so"
W/System.err: TensorFlowLite: failed to load native library: dlopen failed: cannot locate symbol "__register_atfork" referenced by "/data/app/eu.yesse.readerdemo.debug-2/lib/arm64/libtensorflowlite_jni.so"
W/linker: (same two unused DT entry warnings repeated)
E/art: (same dlopen failure repeated)
W/System.err: TensorFlowLite: failed to load native library: (same message repeated)
E/art: No implementation found for long org.tensorflow.lite.NativeInterpreterWrapper.createErrorReporter(int) (tried Java_org_tensorflow_lite_NativeInterpreterWrapper_createErrorReporter and Java_org_tensorflow_lite_NativeInterpreterWrapper_createErrorReporter__I)
D/AndroidRuntime: Shutting down VM
E/AndroidRuntime: FATAL EXCEPTION: main
    Process: eu.yesse.readerdemo.debug, PID: 12710
    java.lang.UnsatisfiedLinkError: No implementation found for long org.tensorflow.lite.NativeInterpreterWrapper.createErrorReporter(int) (tried Java_org_tensorflow_lite_NativeInterpreterWrapper_createErrorReporter and Java_org_tensorflow_lite_NativeInterpreterWrapper_createErrorReporter__I)
        at org.tensorflow.lite.NativeInterpreterWrapper.createErrorReporter(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:58)
        at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:224)
        at eu.yesse.reader.commons.internal.detector.BlockingCornersClassSingleDetector.<init>(BlockingCornersClassSingleDetector.java:76)
        at eu.yesse.reader.commons.internal.detector.BlockingCornersClassMultiDetector.<init>(BlockingCornersClassMultiDetector.java:25)
        at eu.yesse.reader.commons.shared.detector.AsyncCornersClassMultiDetectorImpl.<init>(AsyncCornersClassMultiDetectorImpl.java:36)
        at eu.yesse.reader.tempregdoc.internal.TempRegDocReaderManager.createDetector(TempRegDocReaderManager.java:79)
        at eu.yesse.reader.tempregdoc.internal.TempRegDocReaderManager.<init>(TempRegDocReaderManager.java:48)
        at eu.yesse.reader.TempRegDocReader.getReader(TempRegDocReader.java:20)
        at eu.yesse.readerdemo.activities.TempRegDocActivity.onCreate(TempRegDocActivity.java:24)
        at android.app.Activity.performCreate(Activity.java:6093)
        at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1106)
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2295)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2404)
        at android.app.ActivityThread.access$900(ActivityThread.java:154)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1315)
        at android.os.Handler.dispatchMessage(Handler.java:102)
        at android.os.Looper.loop(Looper.java:135)
        at android.app.ActivityThread.main(ActivityThread.java:5296)
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:372)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:912)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:707) |
tensorflowtensorflow | Keras model.fit crashes with large batch size and 0 validation split | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution: CentOS 7. TensorFlow installed from (source or binary): binary. TensorFlow version: tf-gpu 1.13.1. Python version: 3.6.2. CUDA/cuDNN version: 10 / 7.4.1. GPU model and memory: GTX 1080 Ti, 10 GB.

Describe the current behavior: An error is raised when fitting a tf.keras model when the training dataset size is less than the batch size and validation_split is 0.0. If I use a batch size less than the dataset size, or set a nonzero validation_split, the fit is fine. Using the original Keras, fitting is fine in every case. The error is:

Traceback (most recent call last):
  File "crf_tf.py", line 123, in <module>
    model.fit(x, y, batch_size=16, epochs=50, validation_split=0.0)
  File "/opt/userhome/ichongxiang/conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 880, in fit
    validation_steps=validation_steps)
  File "/opt/userhome/ichongxiang/conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 329, in model_iteration
    batch_outs = f(ins_batch)
  File "/opt/userhome/ichongxiang/conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 3076, in __call__
    run_metadata=self.run_metadata)
  File "/opt/userhome/ichongxiang/conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/opt/userhome/ichongxiang/conda/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot squeeze dim[0], expected a dimension of 1, got 2 [[node metrics/crf_accuracy/ArithmeticOptimizer/ReorderCastLikeAndValuePreserving_bool_Squeeze]] [[node crf/cond/Maximum]]

Describe the expected behavior: Fits successfully.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
# import keras
# from keras import backend as K
import numpy as np

class CRF(keras.layers.Layer):
    def __init__(self, num_tags, **kwargs):
        super(CRF, self).__init__(**kwargs)
        self.num_tags = num_tags
        self.input_spec = keras.layers.InputSpec(min_ndim=3)
        self.supports_masking = True

    def get_config(self):
        config = {'num_tags': self.num_tags}
        base_config = super(CRF, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def build(self, input_shape):
        assert len(input_shape) == 3
        if input_shape[-1] is None:
            raise ValueError('The last dimension of the inputs to `CRF` should be defined. Found `None`.')
        if input_shape[-1] != self.num_tags:
            raise ValueError('The last dimension of the input shape must be equal to the output shape. Use a linear layer if needed.')
        self.transitions = self.add_weight(name='transitions',
                                           shape=[self.num_tags, self.num_tags],
                                           initializer='glorot_uniform',
                                           trainable=True)
        self.built = True

    def call(self, inputs, mask=None):
        seq_lens = get_seq_lens(inputs, mask)
        viterbi_sequence, _ = tf.contrib.crf.crf_decode(inputs, self.transitions, seq_lens)
        output = K.one_hot(viterbi_sequence, self.num_tags)
        return K.in_train_phase(inputs, output)

    def compute_output_shape(self, input_shape):
        return input_shape[:2] + (self.num_tags,)

    def compute_mask(self, inputs, mask=None):
        if mask is not None:
            return K.any(mask, axis=1)
        return mask

def get_seq_lens(inputs, mask=None):
    if mask is not None:
        return K.sum(K.cast(mask, dtype='int32'), axis=-1)
    else:
        shape = K.int_shape(inputs)
        return K.ones(shape[:1], dtype='int32') * shape[1]

def crf_loss(y_true, y_pred):
    crf, idx = y_pred._keras_history[:2]
    inputs = crf.get_input_at(idx)
    mask = crf.get_input_mask_at(idx)
    seq_lens = get_seq_lens(inputs, mask)
    y_true = K.cast(K.argmax(y_true, axis=-1), dtype='int32')
    log_likelihood, crf.transitions = tf.contrib.crf.crf_log_likelihood(
        y_pred, y_true, seq_lens, transition_params=crf.transitions)
    return K.mean(-log_likelihood)

def crf_accuracy(y_true, y_pred):
    crf, idx = y_pred._keras_history[:2]
    inputs = crf.get_input_at(idx)
    mask = crf.get_input_mask_at(idx)
    seq_lens = get_seq_lens(inputs, mask)
    viterbi_sequence, _ = tf.contrib.crf.crf_decode(inputs, crf.transitions, seq_lens)
    y_true = K.cast(K.argmax(y_true, -1), dtype='int32')
    judge = K.cast(K.equal(viterbi_sequence, y_true), K.floatx())
    if mask is None:
        return K.mean(judge)
    else:
        mask = K.cast(mask, K.floatx())
        return K.sum(judge * mask) / K.sum(mask)

NUM_WORDS = 20
NUM_FEATURES = 100
NUM_TAGS = 5

inputs = keras.layers.Input(shape=(None,))
embed = keras.layers.Embedding(10, NUM_FEATURES, mask_zero=True)(inputs)
scores = keras.layers.TimeDistributed(keras.layers.Dense(NUM_TAGS))(embed)
crf = CRF(NUM_TAGS)
outputs = crf(scores)
model = keras.models.Model(inputs, outputs)
model.summary()

x = np.array([[1, 2, 3, 4, 0, 0], [4, 5, 6, 0, 0, 0]])
y = np.array([[1, 3, 4, 2, 0, 0], [2, 1, 3, 0, 0, 0]])
y = np.eye(NUM_TAGS)[y]
print(x); print(x.shape)
print(y); print(y.shape)
model.compile(optimizer='adam', loss=crf_loss, metrics=[crf_accuracy])
model.fit(x, y, batch_size=16, epochs=50, validation_split=0.0)

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
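For reference, Keras carves the validation set off the end of the arrays before batching, so with validation_split=0.0 no validation data should exist at all, which is what makes the crash above surprising. A pure-Python sketch of that split arithmetic (illustrative of the documented validation_split behavior, not the Keras source itself):

```python
def split_for_validation(n_samples, validation_split):
    """Return (train_count, val_count) the way Keras' validation_split is
    documented: the last `validation_split` fraction becomes validation data."""
    split_at = int(n_samples * (1.0 - validation_split))
    return split_at, n_samples - split_at

# The report trains on 2 samples with batch_size=16 and validation_split=0.0:
train_n, val_n = split_for_validation(2, 0.0)
# train_n == 2, val_n == 0, so no validation step should run at all.
```

Since val_n is 0, the InvalidArgumentError must come from the training path itself when the whole (undersized) dataset fits into one batch, not from validation data.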
tensorflowtensorflow | Dilated tf.keras.layers.Conv2D can't estimate output shape | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution: Ubuntu 18.04. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.0-beta1. Python version: 3.6.8. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: When creating a tf.keras model with the functional API, a dilated convolution from tf.keras.layers can't estimate its output shape.

Describe the expected behavior: The dilated convolution can estimate the output shape, as the convolution op does.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

import tensorflow as tf

inputs = tf.keras.Input((100, 100, 3))
results = tf.keras.layers.Conv2D(filters=10, kernel_size=[3, 3], padding='same', dilation_rate=2)(inputs)
print(results.shape)

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
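The expected shape is easy to derive by hand: with 'same' padding the spatial output size is ceil(input / stride) regardless of dilation, and a k-tap kernel with dilation d has effective size k + (k-1)(d-1). So the example above should report (None, 100, 100, 10). A small sketch of that arithmetic (helper names are illustrative):

```python
import math

def effective_kernel(k, dilation):
    # A k-tap kernel with dilation d covers k + (k-1)*(d-1) input positions.
    return k + (k - 1) * (dilation - 1)

def conv_output_size(n, k, stride=1, dilation=1, padding="same"):
    k_eff = effective_kernel(k, dilation)
    if padding == "same":
        return math.ceil(n / stride)
    # "valid" padding:
    return (n - k_eff) // stride + 1

print(conv_output_size(100, 3, dilation=2, padding="same"))   # 100
print(conv_output_size(100, 3, dilation=2, padding="valid"))  # 96
```

Dilation only matters for 'valid' padding (through the effective kernel size), which is why the shape in the bug report is fully determined and the layer should be able to infer it.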
tensorflowtensorflow | conda-installed TensorFlow (MKL version) raises error: could not create a dilated convolution forward descriptor | Bug | Keras version: 2.2.4. TensorFlow version: 1.13.1. CPU: i3 330.

Exception stack (Epoch 1/5):

AbortedError                              Traceback (most recent call last)
<ipython-input> in <module>
     13     validation_steps=math.ceil(val_flow.samples / val_flow.batch_size),
     14     steps_per_epoch=math.ceil(train_flow.samples / train_flow.batch_size),
     15     callbacks=[checkpoint, early, tb, csv_logger])
D:\soft\Anaconda3\lib\site-packages\keras\legacy\interfaces.py, line 91, in wrapper: return func(*args, **kwargs)  # "Update your `fit_generator` call to the Keras 2 API" signature wrapper
D:\soft\Anaconda3\lib\site-packages\keras\engine\training.py, line 1418, in fit_generator: initial_epoch=initial_epoch)
D:\soft\Anaconda3\lib\site-packages\keras\engine\training_generator.py, line 217, in fit_generator: outs = model.train_on_batch(x, y, sample_weight=sample_weight, class_weight=class_weight)
D:\soft\Anaconda3\lib\site-packages\keras\engine\training.py, line 1217, in train_on_batch: outputs = self.train_function(ins)
D:\soft\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py, line 2715, in __call__: return self._call(inputs)
D:\soft\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py, line 2675, in _call: fetched = self._callable_fn(*array_vals)
D:\soft\Anaconda3\lib\site-packages\tensorflow\python\client\session.py, line 1439, in __call__: run_metadata_ptr)
D:\soft\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py, line 528, in __exit__: c_api.TF_GetCode(self.status.status))
AbortedError: Operation received an exception: Status: 3, message: could not create a dilated convolution forward descriptor, in file tensorflow/core/kernels/mkl_conv_ops.cc:1111 [[node conv1_1/convolution]]

The code works well with the version without MKL. |
tensorflowtensorflow | load embedding into the graph fail with libprotobuf error google protobuf io zero copy stream impl lite cc 164 can not allocate buffer large than kint32max for stringoutputstream | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 macos 10 14 tensorflow instal from source or binary binary tensorflow version use command below 1 13 1 python version python 3 6 7 anaconda describe the current behavior load 1 4mil 100dim embedding into the graph word 1457657 dim 100 describe the expect behavior want to be able to use tfrecord datum set with word and graph to look up index via tf table and then look up embedding vector in parallel compute use 3 dim tensor this use to work with small set of embedding any way to overcome this problem without rewrite datum feed code to reproduce the issue w_embed_vocab = tf.constant(embdic.vocab, dtype=tf.string, shape=[embdic.vocab_size], name='w_embed_vocab') w_embed_vocab_table = lookup_ops.index_table_from_tensor(w_embed_vocab, default_value=0, name='word_embidx_tbl') w_embedding = tf.get_variable(name='word_embedding', shape=[embdic.vocab_size, embdic.dim], initializer=tf.constant_initializer(np.asmatrix(embdic.embedding), dtype=tf.float32), trainable=False) other info log no other log just one error message libprotobuf error google protobuf io zero copy stream impl lite cc 164 can not allocate buffer large than kint32max for stringoutputstream
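For scale, a back-of-the-envelope sketch of why baking this embedding matrix into the serialized graph approaches protobuf's per-buffer ceiling (kint32max, about 2 GiB). Note that np.asmatrix over a plain Python structure defaults to float64, doubling the payload before the dtype=tf.float32 cast; the assumption that the tensor ends up serialized more than once (e.g. in both the GraphDef and a MetaGraphDef) is mine, not stated in the report.

```python
# Rough size estimate for the reported vocabulary; kint32max is the
# hard per-buffer limit protobuf's StringOutputStream enforces.
KINT32MAX = 2**31 - 1          # 2147483647 bytes

vocab, dim = 1_457_657, 100    # figures from the report

f32_bytes = vocab * dim * 4    # ~0.58 GB if stored as float32
f64_bytes = vocab * dim * 8    # ~1.17 GB: np.asmatrix promotes to float64

# A single float64 copy already eats over half the budget; any duplication
# of the tensor during serialization (an assumption about where the
# failure occurs) pushes past the limit.
print(f64_bytes, 2 * f64_bytes > KINT32MAX)   # 1166125600 True
```

The usual TF 1.x workaround is to keep the matrix out of the graph entirely: give the variable a tf.placeholder as its initial value and feed the numpy array once via feed_dict when running the variable's initializer, so the weights travel over the session feed path instead of the serialized GraphDef.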
tensorflowtensorflow | tensorflow lite gpu ios memory leak | Bug | I use tensorflowlitegpuexperimental for inference I have 60 model if I only ever use one model for inference it work well but if I switch between different model at run time a memory leak occur I only call interpreter = nullptr and deletegpudelegate delegate to free resource so have I miss some other free function or be this a bug in tensorflowlitegpuexperimental also what be the newest version number of tensorflowlitegpuexperimental
tensorflowtensorflow | bug issue tf case incompatible with list comprehension | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow custom os platform and distribution e g linux ubuntu 16 04 uname a linux archlinux 5 1 5 arch1 2 arch 1 smp preempt mon may 27 03 37 39 utc 2019 x86 64 gnu linux mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device none tensorflow instal from source or binary binary tensorflow version use command below v1 14 0 rc1 22 gaf24dc91b5 1 14 0 python version 3 6 bazel version if compile from source none gcc compiler version if compile from source none cuda cudnn version none gpu model and memory none describe the current behavior bug 1 the function random_choice below have the correct behaviour only when use the decorator @tf.function if the decorator be not use tf.case no longer consider the predicate and always output the same value bug 2 even if the decorator @tf.function be use the function random_choice do not have the correct behaviour if I use a list comprehension to create the list of predicate function pair although it have the correct behaviour when the list be populate iteratively with append in a for loop describe the expect behavior 1 decorate the function random_choice with @tf.function should not affect tf.case unless I be not aware of the intricacy of tf.function and tf.case 2 choose a for loop or a list comprehension to create the list of predicate function pair should not affect tf.case unless again I be not aware of the intricacy of tf.function and tf.case code to reproduce the issue

import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.backend as K
import tensorflow_probability as tfp

K.clear_session()

def fct1(x):
    return x + 5

def fct2(x):
    return x - 5

# bug 1: if you remove @tf.function the function random_choice no longer
# works because tf.case no longer considers idx
@tf.function
def random_choice(x, choices):
    idx = tfp.distributions.Bernoulli(probs=0.5).sample(1)[0]
    outputs = [c(x) for c in choices]
    # bug 2: if you use @tf.function but you replace this code
    #     fns = []
    #     for o in outputs:
    #         fns.append(lambda: o)
    # by this one
    #     fns = [lambda: o for o in outputs]
    # then tf.case no longer considers idx
    fns = []
    for o in outputs:
        fns.append(lambda: o)
    y = tf.case([(tf.equal(idx, i), fn) for i, fn in enumerate(fns)])
    return outputs, idx, y

x = tf.constant(4.0)
choices = [fct1, fct2]
outputs, idx, y = random_choice(x, choices)
sess = tf.Session()
for i in range(10):
    outputs_np, idx_np, y_np = sess.run([outputs, idx, y])
    print(y_np, outputs_np, idx_np)

example output for the code above this be the expect behaviour 9 9 1 1 1 1 1 1 9 9 9 9 1 1 9 9 9 9 1 1 when remove @tf.function or when use @tf.function but use a list comprehension to create fns this be the unexpected behaviour 1 9 1 9 1 1 1 9 1 9 1 9 1 1 1 1 1 1 1 9 other info log I could reproduce this bug in tensorflow 2 0 0 beta1 v2 0 0 beta0 16 g1d91213fe7
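The list-comprehension symptom in bug 2 is consistent with Python's late-binding closures: every `lambda: o` closes over the loop variable itself, not its value at that iteration, so once tracing finishes all branches return the last `o`. A minimal pure-Python sketch (independent of TensorFlow; the names are illustrative):

```python
# Demonstrates late-binding closures, a plausible cause of "bug 2".
outputs = [1, 9]

# Each lambda closes over the *variable* o, so every call sees the
# final value the loop left in o.
fns_late = [lambda: o for o in outputs]
print([f() for f in fns_late])      # [9, 9]

# Binding o as a default argument freezes the per-iteration value.
fns_bound = [lambda o=o: o for o in outputs]
print([f() for f in fns_bound])     # [1, 9]
```

Note that a plain for loop appending `lambda: o` late-binds in exactly the same way in pure Python, so whatever makes the append variant behave correctly under @tf.function in the report likely involves how tracing captures the tensors; the default-argument idiom sidesteps the question in both forms.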
tensorflowtensorflow | issue use tf keras modelcheckpoint when distribute under multiworkermirroredstrategy | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 window 10 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below tf gpu 2 0 0b1 python version 3 6 3 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 7 4 1 gpu model and memory 2 x gv100 32 gb you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior raise an error when use tf keras callback modelcheckpoint in the callback list when training use kera under the multiworkermirroredstrategy distribution strategy on a single machine error be tensorflow python framework error impl notfounderror no register varhandleop opkernel for gpu device compatible with node node varhandleop opkernel be find but attribute didn t match request attribute container dtype dt int32 shape share name cd2c89b7 88b7 44c8 ad83 06c2a9158347 describe the expect behavior should save a checkpoint model file and not crash code to reproduce the issue run script call run distribute training minimal example py import json import subprocess import os def create tf config name i d return cluster worker localhost 9999 chief localhost 9997 task type name index i d def set tf config i d name worker os environ tf config json dump create tf 
config name i d def start process cluster def key device none process list if key in cluster def for I in enumerate cluster def key if device be not none os environ cuda visible device str device device 1 else os environ cuda visible device process list append subprocess popen python distribute training minimal example py job name key job i d str I return process list device if name main cluster def create tf config cluster process list this list device start process cluster def chief device 0 for key in chief worker ps this list device start process cluster def key device process list extend this list os environ cuda visible device 0 1 os environ tf config for p in process list p wait distribute worker script call distribute training minimal example py from run distribute training minimal example import create tf config set tf config use custom check point false def parse argument import argparse parser argparse argumentparser parser add argument job name type str default worker help type of job this process be run parser add argument job i d type int default 0 help I d of this job type for this process to run return parser parse args args parse argument tf config create tf config cluster def tf config cluster set tf config args job i d args job name be chief args job name chief print be chief str be chief batchsize len cluster def worker import tensorflow as tf strategy tf distribute experimental multiworkermirroredstrategy with strategy scope def create simple model return tf keras sequential tf keras layer conv2d 32 3 activation relu pad same kernel regularizer tf keras regularizer l2 0 04 input shape 128 128 1 tf keras layer conv2d 1 3 activation relu pad same kernel regularizer tf keras regularizer l2 0 04 tf keras layer dense 1 activation softmax def localise cross entropy y true y pre ratio 1 0 positive error ratio y true tf keras backend log 0 0000001 y pre negative error 1 y true tf keras backend log 1 0000001 y pre error positive error negative error 
return tf keras backend mean error def localise cross entropy loss y true y pre ratio 1 0 return localise cross entropy y true y pre ratio def create datum import numpy as np datum set for I in range 20 ip np random random 128 128 1 astype np float32 op np random randint 0 2 128 128 1 astype np float32 datum set append ip op return datum set model create simple model model summary trainingdata create data model compile adam loss localise cross entropy loss metric localise cross entropy split int len trainingdata 0 8 traindata valdata trainingdata split trainingdata split def create ram generator datum while true for I in datum yield I def tensorflow generator training datum getter batchsize none import tensorflow as tf def getter generator while true item next data getter yield item shape none none 1 none none 1 dataset tf datum dataset from generator generator getter generator output type tf float32 tf float32 output shape shape if batchsize be not none dataset dataset batch batchsize return dataset def generatorise datum train gen create ram generator datum train gen tensorflow generator training train gen batchsize batchsize return train gen train gen generatorise traindata val gen generatorise valdata if not use custom check point callback list tf keras callbacks modelcheckpoint tmp hdf5 else from tensorflow keras callback import callback class custommodelcheckpointcallback callback def init self path model be chief task super custommodelcheckpointcallback self init self model model self path path self be chief be chief task def on epoch end self epoch log none if self be chief self model save self path callback list custommodelcheckpointcallback tmp hdf5 model be chief model fit train gen epoch 3 shuffle false callback callback list validation data val gen step per epoch len traindata validation step len valdata run first script will cause the issue set use custom check point to true in the second script will remove the error other info log include any log or 
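The custom callback in the workaround saves only when the task is the chief, and that decision can be derived from the same TF_CONFIG the launcher script writes. A small sketch of the decision logic (pure Python; the convention that worker 0 acts as chief when no explicit chief task exists is an assumption, not something the report states):

```python
import json
import os

def is_chief(tf_config):
    """Decide whether this task should write checkpoints.

    Only one task per cluster may touch the model file: the explicit
    'chief' task if one exists, otherwise worker 0 by convention.
    """
    task = tf_config.get("task", {})
    cluster = tf_config.get("cluster", {})
    if task.get("type") == "chief":
        return True
    return (task.get("type") == "worker"
            and task.get("index") == 0
            and "chief" not in cluster)

# In a worker process TF_CONFIG is set by the launcher before spawning:
tf_config = json.loads(os.environ.get("TF_CONFIG", "{}"))

# Example matching the reporter's cluster definition:
cfg = {"cluster": {"worker": ["localhost:9999"], "chief": ["localhost:9997"]},
       "task": {"type": "worker", "index": 0}}
print(is_chief(cfg))   # False: an explicit chief task exists

cfg["task"] = {"type": "chief", "index": 0}
print(is_chief(cfg))   # True
```

Routing all filesystem writes through a check like this mirrors what the reporter's custommodelcheckpointcallback does and avoids the varhandleop path that the built-in modelcheckpoint triggers on non-chief workers.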
source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach f ffa dev deep learn dev dist env script python exe f ffa dev deep learn dev dist run distribute training minimal example py be chief false be chief true 2019 07 26 12 00 27 116726 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library nvcuda dll 2019 07 26 12 00 27 117037 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library nvcuda dll 2019 07 26 12 00 27 421095 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 9e 00 0 2019 07 26 12 00 27 421633 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 27 425381 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 27 426214 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 2019 07 26 12 00 27 443567 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 9e 00 0 2019 07 26 12 00 27 443960 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 27 448508 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 27 497949 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 5b 00 0 2019 07 26 12 00 27 498407 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 27 
502742 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 27 503686 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 2019 07 26 12 00 27 519979 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 5b 00 0 2019 07 26 12 00 27 520543 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 27 523778 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 28 548291 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 28 548686 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 28 548929 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 28 554809 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 9e 00 0 compute capability 7 0 2019 07 26 12 00 28 569853 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 9e 00 0 2019 07 26 12 00 28 570305 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 28 573342 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 28 573626 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 28 573956 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 28 574136 I tensorflow 
core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 28 576977 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device job worker replica 0 task 0 device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 9e 00 0 compute capability 7 0 2019 07 26 12 00 28 581376 I tensorflow core distribute runtime rpc grpc channel cc 250 initialize grpcchannelcache for job chief 0 localhost 9997 2019 07 26 12 00 28 581735 I tensorflow core distribute runtime rpc grpc channel cc 250 initialize grpcchannelcache for job worker 0 localhost 9999 2019 07 26 12 00 28 599655 I tensorflow core distribute runtime rpc grpc server lib cc 365 start server with target grpc localhost 9999 2019 07 26 12 00 28 602463 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 9e 00 0 2019 07 26 12 00 28 603395 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 28 607136 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 28 607483 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 28 607841 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 28 608102 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 28 611206 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 9e 00 0 compute capability 7 0 warn log before flag parsing go to stderr w0726 12 00 28 611904 51176 cross device op py 1164 some request device in tf distribute strategy be not visible to tensorflow job worker replica 0 task 0 device gpu 0 2019 07 26 12 00 28 613309 I tensorflow core common 
runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 28 613666 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 28 613873 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 28 618119 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 5b 00 0 compute capability 7 0 2019 07 26 12 00 28 634094 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 5b 00 0 2019 07 26 12 00 28 634513 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 28 637145 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 28 637447 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 28 637741 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 28 637928 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 28 640533 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device job chief replica 0 task 0 device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 5b 00 0 compute capability 7 0 2019 07 26 12 00 28 646908 I tensorflow core distribute runtime rpc grpc channel cc 250 initialize grpcchannelcache for job chief 0 localhost 9997 2019 07 26 12 00 28 647352 I tensorflow core distribute runtime rpc grpc channel cc 250 initialize grpcchannelcache for job worker 0 localhost 9999 2019 07 26 12 00 28 666115 I tensorflow core distribute runtime rpc grpc server lib cc 365 start server with target grpc localhost 9997 2019 07 26 12 00 
28 668862 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 5b 00 0 2019 07 26 12 00 28 669380 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 28 672907 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 28 673317 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 28 673744 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 28 673983 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 28 677930 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 5b 00 0 compute capability 7 0 warn log before flag parsing go to stderr w0726 12 00 28 675424 29652 cross device op py 1164 some request device in tf distribute strategy be not visible to tensorflow job chief replica 0 task 0 device gpu 0 model sequential layer type output shape param model sequential layer type output shape param conv2d conv2d none 128 128 32 320 conv2d 1 conv2d none 128 128 1 289 conv2d conv2d none 128 128 32 320 dense dense none 128 128 1 2 conv2d 1 conv2d none 128 128 1 289 dense dense none 128 128 1 2 total param 611 trainable param 611 non trainable param 0 total param 611 trainable param 611 non trainable param 0 w0726 12 00 30 992845 51176 deprecation py 323 from f ffa dev deep learn dev dist env lib site package tensorflow python data op dataset op py 505 py func from tensorflow python op script op be deprecate and will be remove in a future version instruction for update tf py func be deprecate in tf v2 instead there be two option available in v2 tf py function take a python function which manipulate tf 
eager tensor instead of numpy array it s easy to convert a tf eager tensor to an ndarray just call tensor numpy but have access to eager tensor mean tf py function s can use accelerator such as gpu as well as be differentiable use a gradient tape tf numpy function maintain the semantic of the deprecate tf py func it be not differentiable and manipulates numpy array it drop the stateful argument make all function stateful w0726 12 00 30 992845 29652 deprecation py 323 from f ffa dev deep learn dev dist env lib site package tensorflow python data op dataset op py 505 py func from tensorflow python op script op be deprecate and will be remove in a future version instruction for update tf py func be deprecate in tf v2 instead there be two option available in v2 tf py function take a python function which manipulate tf eager tensor instead of numpy array it s easy to convert a tf eager tensor to an ndarray just call tensor numpy but have access to eager tensor mean tf py function s can use accelerator such as gpu as well as be differentiable use a gradient tape tf numpy function maintain the semantic of the deprecate tf py func it be not differentiable and manipulates numpy array it drop the stateful argument make all function stateful w0726 12 00 31 031976 51176 distribute coordinator py 825 eval fn be not pass in the worker fn will be use if an evaluator task exist in the cluster w0726 12 00 31 031976 51176 distribute coordinator py 829 eval strategy be not pass in no distribution strategy will be use for evaluation w0726 12 00 31 031976 29652 distribute coordinator py 825 eval fn be not pass in the worker fn will be use if an evaluator task exist in the cluster w0726 12 00 31 031976 29652 distribute coordinator py 829 eval strategy be not pass in no distribution strategy will be use for evaluation 2019 07 26 12 00 31 036410 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 
pcibusid 0000 9e 00 0 [interleaved stdout from the chief and worker process garble the next several log lines they repeat the usual per process message find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 5b 00 0 dlopen checker stub cc 25 gpu library be statically link skip dlopen check gpu device cc 1763 add visible gpu device 0 gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix gpu device cc 1187 0 gpu device cc 1200 0 n] 2019 07 26 12 00 31 050217 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 9e 00 0 compute capability 7 0 w0726 12 00 31 044387 51176 cross device op py 1164 some request device in tf distribute strategy be not visible to tensorflow job worker replica 0 task 0 device gpu 0 2019 07 26 12 00 31 051178 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 5b 00 0 compute capability 7 0 2019 07
26 12 00 31 054114 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 9e 00 0 2019 07 26 12 00 31 054942 I tensorflow stream executor pw0726 12 00 31 054794 29652 cross device op py 1164 some request device in tf distribute strategy be not visible to tensorflow job chief replica 0 task 0 device gpu 0 latform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 31 057696 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name quadro gv100 major 7 minor 0 memoryclockrate ghz 1 627 pcibusid 0000 5b 00 0 2019 07 26 12 00 31 058142 I tensorflow stream executor platform default dlopen checker stub cc 25 gpu library be statically link skip dlopen check 2019 07 26 12 00 31 058764 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 31 059078 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 31 059395 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 31 059590 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 31 062322 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 07 26 12 00 31 062664 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 07 26 12 00 31 063059 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 07 26 12 00 31 063278 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 07 26 12 00 31 063637 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 9e 00 0 compute capability 7 0 w0726 12 00 31 062981 51176 cross device op py 1164 some request device in 
tf distribute strategy be not visible to tensorflow job worker replica 0 task 0 device gpu 0 2019 07 26 12 00 31 066050 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 25826 mb memory physical gpu device 0 name quadro gv100 pci bus i d 0000 5b 00 0 compute capability 7 0 w0726 12 00 31 065107 29652 cross device op py 1164 some request device in tf distribute strategy be not visible to tensorflow job chief replica 0 task 0 device gpu 0 2019 07 26 12 00 31 075080 w tensorflow core grappler optimizer data auto shard cc 334 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication 2019 07 26 12 00 31 076454 w tensorflow core grappler optimizer data auto shard cc 334 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication 2019 07 26 12 00 31 126357 w tensorflow core grappler optimizer data auto shard cc 334 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication 2019 07 26 12 00 31 126335 w tensorflow core grappler optimizer data auto shard cc 334 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication train on 16 step validate on 4 step train on 16 step validate on 4 step traceback most recent call last file distribute training minimal example py line 113 in traceback most recent call last file distribute training minimal example py line 113 in model fit train gen epoch 3 shuffle false callback callback list validation data val gen step per epoch len traindata validation step len valdata file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training py line 643 in fit model fit train gen epoch 3 shuffle false callback callback list validation data val gen step per epoch len traindata validation step len valdata file f ffa dev deep learn
dev dist env lib site package tensorflow python keras engine training py line 643 in fit use multiprocesse use multiprocesse file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training distribute py line 776 in wrapper use multiprocesse use multiprocesse file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training distribute py line 776 in wrapper mode dc coordinatormode independent worker file f ffa dev deep learn dev dist env lib site package tensorflow python distribute distribute coordinator py line 853 in run distribute coordinator mode dc coordinatormode independent worker file f ffa dev deep learn dev dist env lib site package tensorflow python distribute distribute coordinator py line 853 in run distribute coordinator task i d session config rpc layer file f ffa dev deep learn dev dist env lib site package tensorflow python distribute distribute coordinator py line 360 in run single worker task i d session config rpc layer file f ffa dev deep learn dev dist env lib site package tensorflow python distribute distribute coordinator py line 360 in run single worker return worker fn strategy file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training distribute py line 771 in worker fn return worker fn strategy file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training distribute py line 771 in worker fn return fn instance model kwargs file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training distribute py line 681 in fit return fn instance model kwargs file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training distribute py line 681 in fit step name step per epoch file f ffa dev deep learn dev dist env lib site package tensorflow python keras engine training array py line 252 in model iteration step name step per epoch file f ffa dev deep learn 
dev/dist_env/lib/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 252, in model_iteration
    callbacks._call_begin_hook(mode)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/keras/callbacks.py", line 246, in _call_begin_hook
    self.on_train_begin()
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/keras/callbacks.py", line 362, in on_train_begin
    callback.on_train_begin(logs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/keras/callbacks.py", line 905, in on_train_begin
    self.model, self.filepath)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/keras/distribute/multi_worker_training_state.py", line 60, in __init__
    initial_value=CKPT_SAVED_EPOCH_UNUSED_VALUE, name=CKPT_SAVED_EPOCH)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/variables.py", line 262, in __call__
    return cls._variable_v2_call(*args, **kwargs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/variables.py", line 256, in _variable_v2_call
    shape=shape)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/variables.py", line 60, in getter
    return captured_getter(captured_previous, **kwargs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/distribute/distribute_lib.py", line 1250, in creator_with_resource_vars
    return self._create_variable(*args, **kwargs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 368, in _create_variable
    real_mirrored_creator, *args, **kwargs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/distribute/mirrored_strategy.py", line 251, in _create_mirrored_variable
    value_list = real_mirrored_creator(devices, *args, **kwargs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 355, in _real_mirrored_creator
    v = next_creator(*args, **kwargs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/variables.py", line 237, in <lambda>
    previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/variable_scope.py", line 2551, in default_variable_creator_v2
    shape=shape)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/variables.py", line 264, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 464, in __init__
    shape=shape)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 618, in _init_from_args
    graph_mode=self._in_graph_mode)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 225, in eager_safe_variable_handle
    shape, dtype, shared_name, name, graph_mode, initial_value)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 141, in variable_handle_from_shape_and_dtype
    container=container)
  File "f/ffa/dev/deep_learn/dev/dist_env/lib/site-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 1612, in var_handle_op
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: No registered 'VarHandleOp' OpKernel for GPU devices compatible with node {{node VarHandleOp}}. An OpKernel was found, but attributes didn't match. Requested attributes: container="", dtype=DT_INT32, shape=[], shared_name="cd2c89b7-88b7-44c8-ad83-06c2a9158347". Registered: device='CPU'; device='GPU', dtype in [DT_HALF]; device='GPU', dtype in [DT_FLOAT]; device='GPU', dtype in [DT_DOUBLE]; device='GPU', dtype in [DT_BOOL]; device='GPU', dtype in [DT_COMPLEX64]; device='GPU', dtype in [DT_COMPLEX128]; device='GPU', dtype in [DT_INT64]; device='GPU', dtype in [DT_VARIANT] [[VarHandleOp]] name: ckpt_saved_epoch
2019-07-26 12:00:31.325341: W tensorflow/core/common_runtime/eager/context.cc:232] Unable to destroy server object, so releasing instead. Servers don't support clean shutdown.
2019-07-26 12:00:31.339402: W tensorflow/core/common_runtime/eager/context.cc:232] Unable to destroy server object, so releasing instead. Servers don't support clean shutdown.
Process finished with exit code 0
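The NotFoundError above says there is no GPU kernel for an int32 VarHandleOp (the ckpt_saved_epoch counter created by the multi-worker checkpoint callback). A minimal sketch of the usual workaround, assuming TF 2.x eager execution: pin int32 resource variables to the CPU explicitly, where a kernel is registered.

```python
import tensorflow as tf

# int32 resource variables have no GPU VarHandleOp kernel (the registered GPU
# dtypes in the error above are half/float/double/bool/complex/int64/variant),
# so place the counter variable on the CPU explicitly. The name mirrors the
# variable from the traceback.
with tf.device("/cpu:0"):
    ckpt_saved_epoch = tf.Variable(-1, dtype=tf.int32, name="ckpt_saved_epoch")

print(int(ckpt_saved_epoch.numpy()))  # -1
```

The same device-pinning idea is what a framework-side fix would have to apply around the variable creation seen in multi_worker_training_state.py above.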
tensorflowtensorflow | SavedModel and serving preprocessing | Bug | Using tensorflow 2.0.0-beta1; OS: macOS 10.14.6; Python 3.7. This might be a docs or feature request, not sure, so please forgive my general posting of the issue. I used to use estimator.export_savedmodel(export_dir, serving_input_receiver_fn), which would allow me to define a serving_input_receiver_fn where I could transform my inputs. Such transformations in my case are: decode JPEG, convert to float, resize. With Keras being the recommended high-level API in 2.0, I'm looking for information on how to do this. An example model is:

    inputs = l.Input(shape=(224, 224, 3), name="input_image")
    mobilenet = hub.KerasLayer(mobilenet_url)(inputs)
    logits = l.Dense(units=1, activation=tf.nn.sigmoid, name="prediction")(mobilenet)
    mobilenet_model = tf.keras.Model(inputs=inputs, outputs=logits)
    # ... train ...

and then:

    tf.saved_model.save(mobilenet_model, export_dir="mobilenet_finetuned")

Currently the read/decode/resize is being done in a tf.data.Dataset map function. I can only see a way to provide a TensorSpec describing the inputs, but no way to provide a preprocessing function when saving the model here for serving. Is this a documentation issue? If so, guidance on how it's done would be nice, and I'd be happy to contribute to the docs. Is it not possible to do this preprocessing with the saved model with Keras, and if I want to do this, should I stick to estimators?
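One way to get serving-time preprocessing with tf.saved_model.save is to export an explicit serving signature that accepts encoded image bytes. A sketch under TF 2.x; the tiny tf.Module below stands in for the fine-tuned model, and names like TinyClassifier and serving_fn are illustrative, not from the report:

```python
import os
import tempfile
import tensorflow as tf

class TinyClassifier(tf.Module):
    """Stand-in for the trained model above (hypothetical zero weights)."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.zeros([3, 1]))

    def __call__(self, images):
        pooled = tf.reduce_mean(images, axis=[1, 2])     # (batch, 3)
        return tf.nn.sigmoid(tf.matmul(pooled, self.w))  # (batch, 1)

model = TinyClassifier()

@tf.function(input_signature=[tf.TensorSpec([None], tf.string, name="image_bytes")])
def serving_fn(image_bytes):
    # The preprocessing that used to live in serving_input_receiver_fn:
    # decode JPEG, convert to float, resize.
    def _preprocess(jpeg_bytes):
        image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)
        return tf.image.resize(image, [224, 224])

    images = tf.map_fn(_preprocess, image_bytes, dtype=tf.float32)
    return {"prediction": model(images)}

export_dir = os.path.join(tempfile.mkdtemp(), "mobilenet_finetuned")
tf.saved_model.save(model, export_dir, signatures=serving_fn)
print(os.path.isdir(export_dir))  # True
```

The exported signature then accepts raw JPEG bytes, so at serving time the decode/convert/resize no longer needs to live in the tf.data pipeline.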
tensorflowtensorflow | I0725 19:06:30.700653 7708 tf_logging.py:115 Saver not created because there are no variables in the graph to restore, when training the tf estimator model | Bug |

    estimator = tf.contrib.estimator.DNNEstimator(
        head=multi_label_head,
        hidden_units=[64, 10],
        feature_columns=[description_embeddings])
    labels = np.array(train_encoded)
    features = {"description": np.array(train_descriptions)}
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        features, labels, shuffle=True, batch_size=100, num_epochs=20)
    estimator.train(input_fn=train_input_fn)

    INFO:tensorflow:Calling model_fn.
    I0725 19:06:29.207903 7708 tf_logging.py:115 Calling model_fn.
    INFO:tensorflow:Saver not created because there are no variables in the graph to restore
    I0725 19:06:30.700653 7708 tf_logging.py:115 Saver not created because there are no variables in the graph to restore
    INFO:tensorflow:Saver not created because there are no variables in the graph to restore
    I0725 19:06:31.963634 7708 tf_logging.py:115 Saver not created because there are no variables in the graph to restore

    ValueError: Feature description is not in features dictionary.

    Originally defined at:
      File "c:\users\appdata\local\continuum\anaconda3\lib\site-packages\tensorflow\python\estimator\canned\dnn.py", line 108, in _dnn_logit_fn
        name='dnn')
      File "c:\users\appdata\local\continuum\anaconda3\lib\site-packages\tensorflow\python\estimator\canned\dnn.py", line 143, in __init__
        create_scope_now=False)
      File "c:\users\appdata\local\continuum\anaconda3\lib\site-packages\tensorflow\python\feature_column\feature_column.py", line 323, in __init__
        self._name, _internal_input_layer, create_scope_now_=create_scope_now)
      File "c:\users\appdata\local\continuum\anaconda3\lib\site-packages\tensorflow\python\ops\template.py", line 154, in make_template
        **kwargs)
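For reference, the ValueError at the end means the key the feature column was built with must literally appear in the features dict the input function returns. A small sketch of that key-matching rule (hypothetical column and data, shown with tf.keras.layers.DenseFeatures instead of an estimator):

```python
import numpy as np
import tensorflow as tf

# The column's key ("description") must match a key of the features dict.
column = tf.feature_column.embedding_column(
    tf.feature_column.categorical_column_with_identity("description", num_buckets=4),
    dimension=3)
layer = tf.keras.layers.DenseFeatures([column])

features = {"description": np.array([[0], [2], [3]], dtype=np.int64)}
out = layer(features)  # works: dict key matches the column name
print(tuple(out.shape))  # (3, 3)
# layer({"descriptions": features["description"]}) raises the same
# "Feature description is not in features dictionary." ValueError.
```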
tensorflowtensorflow | TFLite interpreter.allocate_tensors() failed to prepare (not kTfLiteInt8/kTfLiteUInt8) | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Ubuntu 16.04; mobile device: N/A; TensorFlow installed from: binary; TensorFlow version: v1.14.0-rc1-22-gaf24dc91b5 1.14.0; Python version: 3.6.3.

Describe the current behavior: the graph only consists of tf.split (which is what I pinned down to for this issue). After quantization with a representative dataset, the interpreter fails to allocate tensors:

    RuntimeError: tensorflow/lite/kernels/dequantize.cc:62 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 was not true. Node number 0 (DEQUANTIZE) failed to prepare.

I assume tf.split cannot be quantized, and nothing is to be quantized actually, since if I add tf.lite.OpsSet.TFLITE_BUILTINS_INT8 to fully quantize into int8, it tells me SPLIT_V is not supported. So this simple graph should not have any quantize/dequantize at the tf.split op; thus checking the type (whether it is int8) and having a dequantize layer here seems not to make sense.

Describe the expected behavior: the interpreter allocates tensors successfully.

Code to reproduce the issue:

    input_raw = tf.placeholder(tf.float32, shape=(1, 32, 32, 1), name='input_raw')
    output = tf.split(value=input_raw, num_or_size_splits=2, axis=1)[0]
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        converter = lite.TFLiteConverter.from_session(sess, [input_raw], [output])
        converter.optimizations = [lite.Optimize.DEFAULT]
        def representative_data_gen():
            for i in range(1):
                yield [np.random.random_sample((1, 32, 32, 1)).astype(np.float32)]
                # yield [sess.run(tf.random_normal(input_raw.get_shape(), dtype=tf.float32, seed=1))]  # both have the issue
        converter.representative_dataset = representative_data_gen
        tflite_model = converter.convert()
        open('converted_model_quant_test.tflite', 'wb').write(tflite_model)
    interpreter = lite.Interpreter(model_path='converted_model_quant_test.tflite')
    interpreter.allocate_tensors()

Other info / logs:

    Traceback (most recent call last):
      File "main.py", line 532, in <module>
        interpreter.allocate_tensors()
      File "proj/gpu/xxx/local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py", line 95, in allocate_tensors
        return self._interpreter.AllocateTensors()
      File "proj/gpu/xxx/local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
        return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
    RuntimeError: tensorflow/lite/kernels/dequantize.cc:62 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 was not true. Node number 0 (DEQUANTIZE) failed to prepare.
tensorflowtensorflow | TensorRT: slowdown native→FP32 and FP16→INT8, file size increases | Bug | System information: have I written custom code: no; OS platform and distribution: Linux Ubuntu 16.04, nvcr.io/nvidia/tensorrt:19.02-py3 (rel 19.02); mobile device: N/A; TensorFlow installed from: binary; TensorFlow version: v1.12.1-7024-g24b3e6cf73 1.15.0-dev20190725; Python version: 3.5.2; Bazel/GCC: N/A; CUDA/cuDNN version: 10.0.130 / 7.4.2 (as per the above-linked container); GPU model and memory: Tesla V100, 32 GB. Relevant output from pip freeze: tf-estimator-nightly 1.14.0.dev2019072201, tf-nightly-gpu 1.15.0.dev20190725.

Describe the current behavior. Inference speed (frames per second):

    model     | native | FP32 | FP16 | INT8
    tiny yolo |  348   | 333  | 402  | 415
    big yolo  |  125   | 140  | 243  | 208

1. Why is there a slowdown for tiny native→FP32? (@pooyadavoodi, same as here: issuecomment-513938562)
2. Why is this slowdown not consistent for big?
3. Why is there a slowdown for big FP16→INT8?

Model size (megabytes):

    model     | native | FP32 | FP16 | INT8
    tiny yolo |   35   |  67  |  44  |  51
    big yolo  |  238   | 439  | 288  | 332

I understand that there is currently an issue where the new graph's weights are saved twice to the .pb file (30717, 30789). Once the weights in the table are adjusted for this double weight saving, the resulting sizes for FP32 and FP16 seem reasonable.
1. Why is there an increase in size for FP16→INT8?

Describe the expected behavior: I am trying to quantize two different YOLO models (one tiny, one normal) with TensorRT. The goals of this quantization are: 1. speed up inference, 2. decrease model size. As quantization and conversion proceed from native→FP32→FP16→INT8, I expect inference time to decrease (FPS to increase) and model size to decrease.

Code to reproduce the issue: I am using this script and a few helper functions from here; the two exact scripts that I use are do.py and utility.py. Here are the tiny model and the big model. My command for running the experiments:

    python do.py --frozen_graph big_yolov3_frozen.pb (or tiny_yolov3_frozen.pb) --native --fp32 --fp16 --int8 --batch_size 32 (or 128 for tiny) --output_dir workspace --input_node input --output_node output_boxes

Other info / logs — I ran a couple of experiments just to make sure that the results are consistent:

    network: native tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 350.2, mean: 348.0, uncertainty: 1.4, jitter: 5.1, latency median: 0.36551, mean: 0.36846, 99th_p: 0.42946, 99th_uncertainty: 0.01453
    network: tftrt_fp32 tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 340.9, mean: 333.4, uncertainty: 1.5, jitter: 4.4, latency median: 0.37546, mean: 0.38470, 99th_p: 0.47469, 99th_uncertainty: 0.06110
    network: tftrt_fp16 tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 403.3, mean: 402.3, uncertainty: 0.6, jitter: 3.6, latency median: 0.31740, mean: 0.31824, 99th_p: 0.34266, 99th_uncertainty: 0.00263
    network: tftrt_int8 tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 417.7, mean: 414.9, uncertainty: 1.1, jitter: 4.4, latency median: 0.30641, mean: 0.30873, 99th_p: 0.35451, 99th_uncertainty: 0.01511
    network: native big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 125.2, mean: 124.7, uncertainty: 0.3, jitter: 1.4, latency median: 0.25553, mean: 0.25677, 99th_p: 0.28257, 99th_uncertainty: 0.00308
    network: tftrt_fp32 big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 140.3, mean: 140.2, uncertainty: 0.4, jitter: 1.9, latency median: 0.22802, mean: 0.22839, 99th_p: 0.25419, 99th_uncertainty: 0.00890
    network: tftrt_fp16 big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 237.6, mean: 242.5, uncertainty: 1.4, jitter: 5.4, latency median: 0.13469, mean: 0.13245, 99th_p: 0.17733, 99th_uncertainty: 0.04387
    network: tftrt_int8 big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 210.1, mean: 207.5, uncertainty: 1.5, jitter: 2.7, latency median: 0.15231, mean: 0.15657, 99th_p: 0.16928, 99th_uncertainty: 0.16613
    network: native tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 357.4, mean: 354.3, uncertainty: 1.7, jitter: 14.5, latency median: 0.35814, mean: 0.36215, 99th_p: 0.44575, 99th_uncertainty: 0.00629
    network: tftrt_fp32 tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 324.9, mean: 319.4, uncertainty: 1.4, jitter: 2.5, latency median: 0.39401, mean: 0.40173, 99th_p: 0.49218, 99th_uncertainty: 0.06484
    network: tftrt_fp16 tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 376.1, mean: 372.8, uncertainty: 1.1, jitter: 1.7, latency median: 0.34036, mean: 0.34363, 99th_p: 0.38601, 99th_uncertainty: 0.02051
    network: tftrt_int8 tiny_yolov3_frozen.pb, batchsize 128, steps 100, fps median: 392.1, mean: 391.3, uncertainty: 0.5, jitter: 1.8, latency median: 0.32645, mean: 0.32717, 99th_p: 0.33765, 99th_uncertainty: 0.01967
    network: native big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 124.3, mean: 124.0, uncertainty: 0.4, jitter: 1.7, latency median: 0.25737, mean: 0.25842, 99th_p: 0.31292, 99th_uncertainty: 0.00355
    network: tftrt_fp32 big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 141.0, mean: 140.7, uncertainty: 0.3, jitter: 0.3, latency median: 0.22690, mean: 0.22761, 99th_p: 0.24239, 99th_uncertainty: 0.00581
    network: tftrt_fp16 big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 247.4, mean: 245.9, uncertainty: 1.0, jitter: 4.4, latency median: 0.12934, mean: 0.13044, 99th_p: 0.16018, 99th_uncertainty: 0.02011
    network: tftrt_int8 big_yolov3_frozen.pb, batchsize 32, steps 100, fps median: 206.0, mean: 204.5, uncertainty: 1.4, jitter: 1.0, latency median: 0.15536, mean: 0.15885, 99th_p: 0.16454, 99th_uncertainty: 0.17423
tensorflowtensorflow | how to release a compiled Keras model | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Ubuntu 18.04; mobile device: N/A; TensorFlow installed from: binary; TensorFlow version: 1.14.0; Python version: 3.6.7.

Describe the current behavior: it seems that TensorFlow increases memory usage when I compile a Keras model several times. Is there some way to release unused compiled models?

Describe the expected behavior: release a compiled model when I don't need it anymore.

Code to reproduce the issue (the bare minimum necessary to generate the problem):

    model.compile(...)  # freeze
    # ... training ...
    model.compile(...)  # fine-tuning
    # ... training ...
    # (repeated)
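A workaround sketch for the growth described above, assuming tf.keras: clear the backend session between compiles so the graph state of earlier compiled models can be garbage-collected (the build function below is hypothetical, and this is a mitigation, not a guaranteed fix for every leak):

```python
import gc
import tensorflow as tf

def fresh_compiled_model():
    # Drop graph state accumulated by previous compile() calls, then let the
    # garbage collector reclaim the old model before building a new one.
    tf.keras.backend.clear_session()
    gc.collect()
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
    return model

# e.g. the freeze -> fine-tune -> recompile loop from the report:
for _ in range(3):
    model = fresh_compiled_model()
print(model.optimizer is not None)  # True
```

Note clear_session() invalidates all models built in the old session, so it only helps at points where the previous model is no longer needed.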
tensorflowtensorflow | tensorflow.keras.optimizers.Adadelta inconsistency with keras.optimizers.Adadelta | Bug | System information: OS platform and distribution: Linux 4.9.0-8-amd64 x86_64 with Debian 9.9; TensorFlow version: v1.14.0-9-gc407b045b8 1.14.0; Python version: 3.6.6.

Describe the current behavior: take a look at the following two Kaggle kernels (1, 2). Both of them run the same code with the same OS, Python version and TensorFlow version; the only difference is that the first one imports from keras and the second one from tensorflow.keras. The optimizer Adadelta produces a different result because the default configurations are not the same. The difference lies in the learning rate:

    print(keras.optimizers.Adadelta().get_config())
    {'lr': 1.0, 'rho': 0.95, 'decay': 0.0, 'epsilon': 1e-07}
    print(tensorflow.keras.optimizers.Adadelta().get_config())
    {'name': 'Adadelta', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.95, 'epsilon': 1e-07}

Describe the expected behavior: keras.optimizers.Adadelta and tensorflow.keras.optimizers.Adadelta should produce the same result, as shown by the optimizer Adam. Looking at the source code of both: the Adam optimizer code, in optimizers.py and in optimizer_v2/adam.py, has default parameters consistent with the keras optimizer code. This is not true for Adadelta: while the code in optimizers.py is consistent, the code in optimizer_v2/adadelta.py is not.

Code to reproduce the issue: the code needed to reproduce the issue is available by downloading it from the given Kaggle kernel links (simply click the three grey dots on the upper right corner and click "download code").
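Until the defaults agree, the obvious user-side workaround is to pass the rate explicitly so both imports behave identically; a sketch using the values from the configs printed above:

```python
import tensorflow as tf

# tf.keras defaults to learning_rate=0.001 while standalone keras used lr=1.0;
# pinning the hyperparameters removes the dependence on either default.
opt = tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95, epsilon=1e-7)
cfg = opt.get_config()
print(cfg["learning_rate"], cfg["rho"])
```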
tensorflowtensorflow | invalid results on some GPUs (probably einsum) | Bug | System information: I have written custom code; Linux Ubuntu 19.04 / Debian 9.9; TensorFlow installed from binary; TensorFlow versions: 2.0.0-beta1, 1.14, 1.15.0; Python version: 3.7.1 / 3.7.3; CUDA/cuDNN version: 10.0 / 10.1; GPU model and memory: Tesla T4, Tesla P4, P100 (does not exist on K80).

Describe the current behavior: the results on the GPU are significantly different from the CPU. They are correct on the CPU but wrong on the GPU; the difference on the GPU is much bigger than zero.

Describe the expected behavior: the printouts of the differences should be zero.

Code to reproduce the issue (TF 2.0.0-beta1):

    import tensorflow as tf
    import numpy as np

    @tf.function
    def sample_y_nn_tf(w, x_sample, config):
        layers = config['layers']
        current_w_index = 0
        h = x_sample
        w_shape = tf.shape(input=w)
        l = layers[0]
        w_current = tf.slice(w, begin=[0, 0, 0, 0],
                             size=[w_shape[0], w_shape[1], w_shape[2], l[0] * l[1]])
        w_current = tf.reshape(w_current,
                               [config['batch_size'], w_shape[1], w_shape[2]] + list(l))
        b_current = tf.slice(w, begin=[0, 0, 0, l[0] * l[1]],
                             size=[w_shape[0], w_shape[1], w_shape[2], l[1]])
        h = tf.einsum('lm,ikjmn->likjn', h, w_current) + b_current
        current_w_index = current_w_index + (l[0] + 1) * l[1]
        for l in layers[1:]:
            h = tf.nn.elu(h)
            w_current = tf.slice(w, begin=[0, 0, 0, current_w_index],
                                 size=[w_shape[0], w_shape[1], w_shape[2], l[0] * l[1]])
            w_current = tf.reshape(w_current,
                                   [config['batch_size'], w_shape[1], w_shape[2]] + list(l))
            b_current = tf.slice(w, begin=[0, 0, 0, current_w_index + l[0] * l[1]],
                                 size=[w_shape[0], w_shape[1], w_shape[2], l[1]])
            h = tf.einsum('likjm,ikjmn->likjn', h, w_current) + b_current
            current_w_index = current_w_index + (l[0] + 1) * l[1]
        h = tf.squeeze(h, axis=4)
        return h

    config = {'batch_size': 8, 'layers': [(2, 2), (2, 1)]}
    np.random.seed(111)
    w = np.random.normal(size=(config['batch_size'], 64, 512, 13)).astype(dtype=np.float32)
    x_sample = np.random.normal(size=(100, 2)).astype(dtype=np.float32)
    permutation = tf.random.shuffle(tf.range(config['batch_size'], dtype=tf.int32), seed=111)
    print('permutation:', permutation)
    with tf.device('cpu:0'):
        mean = sample_y_nn_tf(w=w, x_sample=x_sample, config=config)
        mean_p = sample_y_nn_tf(w=tf.gather(w, permutation, axis=0), x_sample=x_sample, config=config)
        print('difference on cpu:', np.sum(np.abs(tf.gather(mean, permutation, axis=1) - mean_p)))
    with tf.device('gpu:0'):
        mean = sample_y_nn_tf(w=w, x_sample=x_sample, config=config)
        mean_p = sample_y_nn_tf(w=tf.gather(w, permutation, axis=0), x_sample=x_sample, config=config)
        print('difference on gpu:', np.sum(np.abs(tf.gather(mean, permutation, axis=1) - mean_p)))

Output:

    difference on cpu: 0.0
    difference on gpu: 37953784.0

TF 1.14:

    from tensorflow.python.framework.versions import VERSION
    import tensorflow as tf
    import numpy as np
    import os
    print(tf.__version__)

    # (same sample_y_nn_tf function as above)

    config = {'batch_size': 8, 'layers': [(2, 2), (2, 1)]}
    np.random.seed(111)
    w = np.random.normal(size=(config['batch_size'], 64, 512, 13)).astype(dtype=np.float32)
    precision = np.exp(np.random.normal(size=(config['batch_size'], 64, 512))).astype(dtype=np.float32)
    x_sample = np.random.normal(size=(100, 2)).astype(dtype=np.float32)
    permutation = tf.random.shuffle(tf.range(config['batch_size'], dtype=tf.int32), seed=111)
    conf = tf.ConfigProto()
    conf.gpu_options.allow_growth = True
    sess = tf.Session(config=conf)
    print('permutation:', sess.run(permutation))
    with tf.device('cpu:0'):
        mean = sample_y_nn_tf(w=w, x_sample=x_sample, config=config)
        mean_p = sample_y_nn_tf(w=tf.gather(w, permutation, axis=0), x_sample=x_sample, config=config)
        net_cpu = tf.gather(mean, permutation, axis=1) - mean_p
    with tf.device('/device:gpu:0'):
        mean = sample_y_nn_tf(w=w, x_sample=x_sample, config=config)
        mean_p = sample_y_nn_tf(w=tf.gather(w, permutation, axis=0), x_sample=x_sample, config=config)
        net_gpu = tf.gather(mean, permutation, axis=1) - mean_p
    net_cpu_result = sess.run(net_cpu)
    net_gpu_result = sess.run(net_gpu)
    print('tensorflow version {}'.format(VERSION))
    print('difference on cpu:', np.sum(np.abs(net_cpu_result)))
    print('difference on gpu:', np.sum(np.abs(net_gpu_result)))

Output:

    difference on cpu: 0.0
    difference on gpu: 39042744.0

Other info / logs: I did not check the TF2 version on K80/P100. The bug probably persists when using pythonic slicing operations.
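A reduced CPU-side sanity check in the spirit of the report, assuming TF 2.x eager execution (shapes shrunk from the original; on an affected GPU, the analogous comparison under tf.device('gpu:0') is what produces the huge difference):

```python
import numpy as np
import tensorflow as tf

np.random.seed(0)
h = np.random.normal(size=(5, 2)).astype(np.float32)           # like x_sample
w = np.random.normal(size=(3, 4, 6, 2, 2)).astype(np.float32)  # like w_current

with tf.device("/cpu:0"):
    out = tf.einsum("lm,ikjmn->likjn", h, w)  # the first contraction above

# NumPy as an independent reference implementation of the same contraction.
ref = np.einsum("lm,ikjmn->likjn", h, w)
print(np.abs(out.numpy() - ref).max() < 1e-4)  # True on CPU
```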
tensorflowtensorflow | TF 2.0 docs: include tf.function in site/en/r2/tutorials/generative/cvae.ipynb | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: site/en/r2/tutorials/generative/cvae.ipynb. Description of issue (what needs changing): implement the tf.function decorator in the computations to improve performance and highlight one of the TF 2.0 features. Clear description: when implemented in Colab, the performance improves from a 30s/epoch average to 3.6s/epoch, which is a huge benefit and well worth highlighting. Recommended, by adding only 4 lines of code. Submit a pull request? I can do that, yes. (Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.)
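The suggested change amounts to decorating the tutorial's heavy per-batch functions; an illustrative sketch with a toy function (not the actual CVAE code):

```python
import tensorflow as tf

@tf.function  # traces the Python function into a graph on first call
def train_step(x):
    return tf.reduce_sum(tf.square(x))

result = train_step(tf.constant([1.0, 2.0, 3.0]))
print(float(result))  # 14.0
```

Running the train/loss steps as compiled graphs rather than op-by-op eager execution is where the reported ~30s to ~3.6s per-epoch improvement comes from.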
tensorflowtensorflow | union of string sets cannot be converted to a dense tensor, raises TypeError | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: macOS 10.13.6; mobile device: N/A; TensorFlow installed from: binary; TensorFlow version: 2.0.0-dev20190724 (git version v1.12.1-6931-g2b5ece29d3); Python version: 3.6.8; Bazel version: N/A; GCC version: N/A; CUDA/cuDNN version: N/A; GPU model and memory: N/A.

Describe the current behavior: I get an exception when computing the union of two string sets and trying to convert the output to a dense tensor: TypeError: Cannot convert 0 to EagerTensor of dtype string.

Describe the expected behavior: I should get no exception.

Code to reproduce the issue:

    import tensorflow as tf
    a = tf.constant(['a', 'b', 'c', 'd', 'e'])
    b = tf.constant(['c', 'e', 'g', 'h', 'd'])
    tf.sparse.to_dense(tf.sets.union(a, b))

Other info / logs — here's the stacktrace:

    TypeError                                 Traceback (most recent call last)
          2 a = tf.constant(['a', 'b', 'c', 'd', 'e'])
          3 b = tf.constant(['c', 'e', 'g', 'h', 'd'])
    ----> 4 tf.sparse.to_dense(tf.sets.union(a, b))

    miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/ops/sparse_ops.py in sparse_tensor_to_dense(sp_input, default_value, validate_indices, name)
       1478     default_value=default_value,
       1479     validate_indices=validate_indices,
    -> 1480     name=name)
       1481
       1482

    miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_sparse_ops.py in sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value, validate_indices, name)
       3158       "SparseToDense", name, _ctx._post_execution_callbacks, sparse_indices,
       3159       output_shape, sparse_values, default_value, "validate_indices",
    -> 3160       validate_indices)
       3161       return _result
       3162     except _core._FallbackException:

    TypeError: Cannot convert 0 to EagerTensor of dtype string
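The TypeError comes from tf.sparse.to_dense's default_value, which defaults to 0 and cannot be cast to a string; passing an empty string works. A sketch, assuming the TF 2.x tf.sets API, with the inputs made rank-2 since the set ops treat the last dimension as the set:

```python
import tensorflow as tf

a = tf.constant([["a", "b", "c", "d", "e"]])
b = tf.constant([["c", "e", "g", "h", "d"]])

# default_value=0 cannot become a string tensor; use "" as the fill value.
dense = tf.sparse.to_dense(tf.sets.union(a, b), default_value="")
print(dense.numpy())
```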
tensorflowtensorflow | std::out_of_range thrown while converting a lite quantization model with a representative dataset | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Ubuntu 16.04; mobile device: N/A; TensorFlow installed from: binary; TensorFlow version: v1.14.0-rc1-22-gaf24dc91b5 1.14.0; Python version: 3.6.3.

Describe the current behavior: when converting to a lite quantized model with a representative dataset, it throws std::out_of_range when the model includes tf.split. My model implements pixel shuffle with some split and concat operations; I narrowed it down to tf.split, which is causing the issue.

Describe the expected behavior: converts successfully.

Code to reproduce the issue:

    input_raw = tf.placeholder(tf.float32, shape=(1, 32, 32, 1), name='input_raw')
    output = tf.split(value=input_raw, num_or_size_splits=2, axis=1)[0]
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        converter = lite.TFLiteConverter.from_session(sess, [input_raw], [output])
        converter.optimizations = [lite.Optimize.DEFAULT]
        def representative_data_gen():
            for i in range(1):
                yield [np.random.random_sample((1, 32, 32, 1)).astype(np.float32)]
                # yield [sess.run(tf.random_normal(input_raw.get_shape(), dtype=tf.float32, seed=1))]  # both have the issue
        converter.representative_dataset = representative_data_gen
        tflite_model = converter.convert()

Other info / logs:

    W0725 10:03:24.455737 140039335962368 deprecation_wrapper.py:119 From main.py:419: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
    Tensor("split:0", shape=(1, 16, 32, 1), dtype=float32)
    W0725 10:03:24.459982 140039335962368 deprecation_wrapper.py:119 From main.py:431: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
    2019-07-25 10:03:24.464880: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: xxxoss/python3/3.6.3_gpu_tf1131_cuda10/ubuntu16_x86_64/lib:xxxtools/lsf/10.10.1/linux2.6-glibc2.3-x86_64/lib:xxxoss/tcl/8.4.19/x86_64/lib:xxxoss/tcl/8.4.19/x86_64/lib/tcl8.4:xxxoss/tcl/8.4.19/x86_64/lib/tclx8.4:/lib64:/usr/lib64:/lib
    2019-07-25 10:03:24.464949: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
    2019-07-25 10:03:24.465006: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (a07ws012123): /proc/driver/nvidia/version does not exist
    2019-07-25 10:03:24.465369: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-07-25 10:03:24.508462: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2394230000 Hz
    2019-07-25 10:03:24.508764: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4283070 executing computations on platform Host. Devices:
    2019-07-25 10:03:24.508788: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0)
    W0725 10:03:24.521992 140039335962368 deprecation_wrapper.py:119 From main.py:432: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.
    2019-07-25 10:03:24.525090: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
    2019-07-25 10:03:24.526496: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
    2019-07-25 10:03:24.526607: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
    2019-07-25 10:03:24.528117: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
    2019-07-25 10:03:24.528143: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
    2019-07-25 10:03:24.528153: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
    2019-07-25 10:03:24.530117: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
    2019-07-25 10:03:24.530193: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
    2019-07-25 10:03:24.532220: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
    2019-07-25 10:03:24.532249: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant_folding: Graph size after: 3 nodes (-2), 2 edges (0), time = 0.465ms.
    INFO: Initialized TensorFlow Lite runtime.
    terminate called after throwing an instance of 'std::out_of_range'
      what():  _Map_base::at
    Aborted (core dumped)
tensorflow/tensorflow | Contradictory Bazel versions suggested | Bug | URL(s) with the issue: "Install Bazel" (build-from-source docs). Description of issue (what needs changing): it should be Bazel 0.23.0, not 0.24.1. Clear description: the first and last sentences contradict each other, "Install Bazel 0.24.1" versus "Ensure you installed Bazel 0.23.0 or lower". Correct links / parameters defined / returns defined / raises listed / usage example / request visuals, if applicable: n/a. Submit a pull request? No.
tensorflow/tensorflow | "Not found: Resource ... does not exist" exception thrown at runtime | Bug | I am facing a similar error to the one mentioned above; I will try my best to help resolve this issue, as it benefits my work as well. It is the same problem as issue #22631.

OS platform and distribution: Linux Ubuntu x86_64, 4.15.0-52-generic kernel. TensorFlow installed from: conda 4.7.5. TensorFlow version: 1.13.1. Bazel version: n/a. CUDA/cuDNN version: 10.0. GPU model and memory: Tesla V100-SXM2, 16 GB. Exact command to reproduce: see below. Mobile device: n/a.

    import tensorflow as tf
    import numpy as np

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True

    def discriminative_loss(y_true, y_pred):
        """Computes the loss for a batch of images.
        Args:
          y_true: (N, h, w), each element contains the ground-truth instance id.
          y_pred: (N, h, w, d), a d-dimensional vector for each pixel.
        Returns:
          loss computed for each image in the batch.
        """
        def compute_loss(inputs):
            prediction = inputs[1]
            label = inputs[0]
            # number of clusters in the ground truth
            clusters, _ = tf.unique(tf.reshape(label, [-1]))
            # compute cluster mean (and variance) for each cluster
            def compute_mean(c):
                mask = tf.equal(label[..., 0], c)
                masked_pixels = tf.boolean_mask(prediction, mask)
                cluster_mean = tf.reduce_mean(masked_pixels, axis=0)
                return cluster_mean
            cluster_means = tf.map_fn(compute_mean, clusters, dtype=tf.float32)
            return tf.reduce_mean(cluster_means)
        # we want to know the loss for each image in the batch
        loss = tf.map_fn(compute_loss, (y_true, y_pred), dtype=tf.float32)
        return loss

    def discriminative_loss_working(y_true, y_pred):
        # compute the loss for only the first image in the batch
        prediction = y_pred[0]
        label = y_true[0]
        # number of clusters in the ground truth
        clusters, _ = tf.unique(tf.reshape(label, [-1]))
        # compute cluster mean (and variance) for each cluster
        def compute_mean(c):
            mask = tf.equal(label[..., 0], c)
            masked_pixels = tf.boolean_mask(prediction, mask)
            return tf.reduce_mean(masked_pixels, axis=0)
        cluster_means = tf.map_fn(compute_mean, clusters, dtype=tf.float32)
        return tf.reduce_mean(cluster_means)

    class MyModel(tf.keras.Model):
        def __init__(self, input_shape):
            super(MyModel, self).__init__()
            self.conv = tf.keras.layers.Conv2D(filters=4, kernel_size=(1, 1))

        def call(self, inputs):
            return self.conv(inputs)

    input_shape = (1, 128, 128, 3)

    def my_gen():
        while True:
            x = np.random.rand(1, input_shape[1], input_shape[2], 3)
            y = np.random.randint(11000, 11015, (input_shape[1], input_shape[2], 1))
            yield x, y

    train_dataset = tf.data.Dataset.from_generator(
        my_gen, (tf.float32, tf.float32),
        (tf.TensorShape([1, 128, 128, 3]), tf.TensorShape([128, 128, 1])))
    train_dataset = train_dataset.batch(1)
    train_dataset = train_dataset.repeat()

    model = MyModel(input_shape=input_shape)
    # this is a fix to make loading weights possible
    x = tf.zeros((1,) + input_shape)
    x = tf.zeros(input_shape)
    y = model(x)

    with tf.Session(config=config):
        optimizer = tf.keras.optimizers.SGD(lr=0.0001)
        model.compile(loss=discriminative_loss, optimizer=optimizer)
        model.fit(train_dataset, epochs=5, steps_per_epoch=2)

TF error logs:
tensorflow/tensorflow | tf.tensordot documentation describes non-existent a_axes and b_axes parameters | Bug | URL(s) with the issue: the tf.tensordot API docs. Description of issue (what needs changing): there are no a_axes and b_axes parameters, but the documentation describes the function as if there were.
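For reference, the actual signature is tf.tensordot(a, b, axes, name=None); the a_axes / b_axes wording only makes sense inside the prose description of the axes argument. For 2-D inputs with axes=1 the operation reduces to an ordinary matrix product, which the following stdlib-only sketch illustrates (illustrative helper name, not TensorFlow's implementation):

```python
# Pure-Python sketch of what tensordot computes for 2-D inputs with
# axes=1: contract the last axis of `a` with the first axis of `b`,
# i.e. an ordinary matrix product.

def tensordot_2d(a, b):
    """Contract the last axis of `a` with the first axis of `b`."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "contracted dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

print(tensordot_2d([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

The general axes argument (an integer, or a pair of axis lists) generalizes this contraction to higher ranks.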
tensorflow/tensorflow | LSTM predictions are numerically inconsistent for the last few instances | Bug | The predictions you get may differ slightly depending on input length and position within it: e.g., if you have 11 instances of input, you get one answer for the first 8 and a different answer for the last 3. I write "may" as it happened to me with probability around 0.4; "slightly" means on the order of the least significant bit of the float32 mantissa.

System information: Yes, I have written custom code (supplied below as a reprex in R, using keras); tried on two platforms with identical results.
Platform A: Linux Ubuntu 16.04.5 LTS; TensorFlow version 1.7.0 (git version v1.7.0-3-g024aecf414, compiler version 4.8.4); Python 2.7.12.
Platform B: Linux Ubuntu 14.04.6 LTS; TensorFlow (tried both) version 1.12.0 (git version v1.12.0-0-ga6d8ffae09, compiler version 4.8.5); Python 2.7.6.
Both TensorFlow installs from binary. Not a mobile device. CUDA/cuDNN version: not used. GPU model and memory: not used.

Describe the current behavior: if the first dimension of x has n rows, I will get one value for 0 <= i < bitwAnd(n, -4) but a possibly different value for bitwAnd(n, -4) <= i < n (these are C/Python-style 0-based indices; for R's 1-based indexing it is 0 < i <= bitwAnd(n, -4) versus bitwAnd(n, -4) < i <= n).

Describe the expected behavior: reproducible predictions from the same input instance, independent of row number or input length. (I use "row" loosely for a slice of a tensor with a given fixed first index, e.g. x[i,,] or p[i,].)

Code to reproduce the issue: this is a reprex written in R; I'd be happy to port it to another language if that's preferable.

    options(digits = 8)
    fake <- function(shape) {
      # arbitrary but reproducible
      array(seq_len(prod(shape)) * 2.71 %% 1.04, shape)
    }
    library(keras)
    shape <- c(30, 5)
    model <- keras_model_sequential() %>%
      layer_lstm(units = 2, input_shape = shape) %>%
      set_weights(list(fake(c(5, 8)), fake(c(2, 8)), fake(8)))
    n <- 11  # not a multiple of 4
    x <- array(rep(fake(shape), each = n), c(n, shape))  # n copies of identical input
    p <- predict(model, x)
    p  # all predictions should match, but the last n %% 4 rows differ
    #>            [,1]       [,2]
    #>  [1,] 0.46561426 0.22865930
    #>  [2,] 0.46561426 0.22865930
    #>  [3,] 0.46561426 0.22865930
    #>  [4,] 0.46561426 0.22865930
    #>  [5,] 0.46561426 0.22865930
    #>  [6,] 0.46561426 0.22865930
    #>  [7,] 0.46561426 0.22865930
    #>  [8,] 0.46561426 0.22865930
    #>  [9,] 0.46561423 0.22865926
    #> [10,] 0.46561423 0.22865926
    #> [11,] 0.46561423 0.22865926
    (t(p) - p[1, ]) * 2^26  # the differences are low bits
    #>      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11]
    #> [1,]    0    0    0    0    0    0    0    0    2     2     2
    #> [2,]    0    0    0    0    0    0    0    0    3     3     3
    stopifnot(t(p) == p[1, ])  # all should be equal
    #> Error in eval(expr, envir, enclos): t(p) == p[1, ] are not all TRUE
    # in contrast:
    x12 <- array(rep(fake(shape), each = 12), c(12, shape))
    p12 <- predict(model, x12)
    stopifnot(t(p12) == p12[1, ])  # all is well for n = 12

Created on 2019-07-25 by the reprex package (v0.2.1.9000). Other info / logs: none.
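The magnitude claimed in the report above ("the least significant bit of the float32 mantissa") can be checked with the standard library alone: the two distinct prediction values printed, 0.46561426 and 0.46561423, are separated by roughly one float32 ULP in that binade. A stdlib-only sketch (illustrative helper names, no TensorFlow required):

```python
# Round the two printed values to float32 and measure their separation.
# In the binade [0.25, 0.5) the float32 spacing (ULP) is 2**-25.
import struct

def as_float32(x):
    """Round a Python float to the nearest representable float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

lo = as_float32(0.46561423)
hi = as_float32(0.46561426)
ulp = 2.0 ** -25  # float32 spacing for values in [0.25, 0.5)
print(hi - lo)               # ~3e-08, i.e. about one ULP
print((hi - lo) <= 2 * ulp)  # True: a last-bit-level discrepancy
```

This supports the report's reading that the inconsistency is rounding-level, most plausibly from a differently vectorized code path for the trailing (non-multiple-of-4) rows.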
tensorflow/tensorflow | TF 2.0 API docs: tf.math.sqrt | Bug | Documentation contributor guide; TensorFlow v2.0. Link to doc: the tf.math.sqrt API page. Doc issue description:
1. The generated file in which this symbol is defined (i.e. python/ops/gen_math_ops.py) is given as plain text rather than as a link to the source file; it would be great if it were properly linked to the source file to enable change proposals.
2. There isn't a clear distinction between the choice of tf.sqrt versus tf.math.sqrt, nor of the implications (performance-wise, say) of choosing one over the other.
3. The usage example isn't a complete code sample but largely syntax-like: one of the parameters the function takes is represented by a placeholder symbol rather than a real-valued tensor.
4. There should also be a mention of at least the common errors that may arise as a result of incorrect usage of this function.
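On point 3, a complete example would show a real tensor and the float32 precision a reader should expect. That precision can be illustrated with the standard library alone (no TensorFlow needed; the helper name here is illustrative):

```python
# What a float32 sqrt result looks like: the float64 value rounded to
# the nearest representable float32.
import math
import struct

def as_float32(x):
    """Round a Python float to the nearest representable float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

exact = math.sqrt(2.0)   # float64: 1.4142135623730951
f32 = as_float32(exact)  # nearest float32 to sqrt(2)
print(f32)               # 1.4142135381698608
print(abs(f32 - exact) < 1e-7)  # True: error within one float32 ULP
```

A docs example along these lines would make both the expected values and the precision caveats concrete.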
tensorflow/tensorflow | Missing shuffle argument on validation call during training | Bug | On TensorFlow 1.14, OS Ubuntu 16.04: when I call fit() on a tf.keras model using HDF5Matrix for both the training and validation data, I set the argument shuffle='batch' and it works for the training batches. However, it fails when the validation batches start: TypeError ("TypeError while preparing batch. If using HDF5 input data, pass shuffle='batch'"). The problem seems to be that the shuffle argument is missing on the validation call during training (L424): the recursive call of model_iteration does not pass the argument from the parent training call into the validation call. Simply adding shuffle=shuffle to the argument list should fix the issue. Best, Andre
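The diagnosis above, a recursive call silently dropping a keyword argument, is a common bug pattern. Here is a stdlib-only sketch with hypothetical names (not the actual tf.keras source) showing how the default leaks into the inner call and how forwarding the argument fixes it:

```python
# Record every (data, shuffle) pair the hypothetical batch-preparation
# step sees, so the leak is visible.
calls = []

def prepare_batches(data, shuffle):
    calls.append((data, shuffle))

def model_iteration(data, shuffle=True, validation_data=None, fixed=False):
    prepare_batches(data, shuffle)
    if validation_data is not None:
        if fixed:
            # the proposed one-line fix: forward the parent's argument
            model_iteration(validation_data, shuffle=shuffle)
        else:
            # BUG: `shuffle` is not forwarded, so validation silently
            # falls back to the default value
            model_iteration(validation_data)

model_iteration("train", shuffle="batch", validation_data="val")
print(calls)  # [('train', 'batch'), ('val', True)]  <- 'batch' was lost

calls.clear()
model_iteration("train", shuffle="batch", validation_data="val", fixed=True)
print(calls)  # [('train', 'batch'), ('val', 'batch')]
```

With HDF5 inputs, the leaked default is what triggers the TypeError on the validation pass.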
tensorflow/tensorflow | Relation/difference between container and scope context managers is not clear | Bug | From their respective docs, the relation between these two context managers is not clear. (The variable_scope / name_scope inter-relation is fine.) From the source code one can see that container and scope are clearly two things that inter-relate but are not the same thing. Should I use container only in eager mode? Why not use only variable_scope or name_scope instead? Other questions will likely come up as people use the low-level API to specialize in TensorFlow development, so I think this is an issue. Thank you.
tensorflow/tensorflow | TF-TRT does not return the correct batch size | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution: Ubuntu 16.04. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: 1.13.1. Python version: 2.7. Bazel / GCC version: n/a. CUDA/cuDNN version: 10.1. GPU model and memory: T4. Describe the current behavior: TF-TRT seems to return max_batch_size instead of returning the fed batch size. This breaks current code badly. Describe the expected behavior: return the batch size as plain TF does. Code to reproduce the issue: should be simple to reproduce. Other info / logs: none.
tensorflow/tensorflow | fit and save methods fail with a subclassed model using a SequenceFeatures layer with sequence_numeric_column | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.13.6. TensorFlow installed from: pip install. TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1. Python version: v3.6.7 (6ec5cf24b7, Oct 20 2018, 03:02:14).

Describe the current behavior: tf.keras.Model.fit and tf.keras.models.save_model fail and return an error when applied to a model created by subclassing the tf.keras.Model class which uses a SequenceFeatures layer, when SequenceFeatures.__init__ is called on a feature column created with tf.feature_column.sequence_numeric_column. Note that the issue does not occur when the SequenceFeatures layer is created from a feature column built with tf.feature_column.embedding_column, as my reproduction code below shows. Note also that the call method of the subclassed model using sequence_numeric_column works fine in eager execution and in graph mode (see the example code). The error message I get with fit and save_model is: ValueError: Cannot convert a partially known TensorShape to a Tensor: (None, 9). Hence it seems to occur because I do not explicitly set the batch size, but the last batch usually has a size less than batch_size, so I am not interested in setting it to a fixed value before training. (As a side note, I did not find how to do it anyway: my experiments with passing a batch_input_shape keyword argument to SequenceFeatures.__init__ failed, as if the argument were always ignored, and this combines poorly with nested structures of inputs like the dictionary of tensors I have here.)

Describe the expected behavior: the embedding_column case (which does not fail) suggests that the sequence_numeric_column case should behave similarly and should not make the functions fail. It does not make sense to me that not knowing the batch size is relevant in one case and not in the other. It may be due to the use somewhere of tf.TensorShape (which is static and returns the None) instead of tf.shape (which is dynamic). Additionally, it seems standard practice to leave the batch dimension as None for the reason given above, so this should be supported.

Code to reproduce the issue:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import Model
    from tensorflow.python.feature_column.feature_column_v2 import EmbeddingColumn
    from tensorflow.keras.layers import LSTM, Dense
    from tensorflow.keras.experimental import SequenceFeatures

    print('Using TensorFlow version {}. Git version: {}'.format(
        tf.version.VERSION, tf.version.GIT_VERSION))

    class Toy(Model):
        def __init__(self, fc_list, nb_features, name='toy_model', **kwargs):
            super(Toy, self).__init__(name=name, **kwargs)
            self.fc_list = fc_list
            self.dict_layers = {}
            for fc in self.fc_list:
                fc_name = fc.name
                self.dict_layers[fc_name] = SequenceFeatures(fc)
            self.lstm = LSTM(64, return_sequences=False)
            self.output_layer = Dense(nb_features, activation='softmax')

        def call(self, inputs, training=None):
            dict_applied_layers = {}
            for fc in self.fc_list:
                fc_name = fc.name
                if type(fc) == EmbeddingColumn:
                    dict_applied_layers[fc_name] = self.dict_layers[fc_name](inputs)[0]
                else:
                    # we need to convert inputs[fc_name] to a sparse tensor
                    zero = tf.constant(0, dtype=tf.float32)
                    dense = inputs[fc_name]
                    indices = tf.where(tf.not_equal(dense, zero))
                    values = tf.gather_nd(dense, indices)
                    sparse = tf.SparseTensor(indices, values, dense.shape)
                    dict_applied_layers[fc_name] = self.dict_layers[fc_name]({fc_name: sparse})[0]
            x = tf.concat([v for _, v in dict_applied_layers.items()], axis=-1)
            x = self.lstm(x)
            x = self.output_layer(x)
            return x

    # dataset parameters
    nb_batches = 15
    batch_size = 24
    sequence_length = 9
    nb_features = 10

    # dataset construction
    input_dense = tf.constant(np.random.normal(0, 1, (nb_batches, batch_size, sequence_length)))
    input_dense = tf.cast(input_dense, dtype=tf.float32)
    input_cat = tf.constant(np.random.randint(0, nb_features, (nb_batches, batch_size, sequence_length)))
    input_dict = {'dense': input_dense, 'categorical': input_cat}
    input_dataset = tf.data.Dataset.from_tensor_slices(input_dict)
    target_cat = tf.constant(np.random.randint(0, high=nb_features, size=(nb_batches, batch_size)))
    target_dataset = tf.data.Dataset.from_tensor_slices(target_cat)
    training_dataset = tf.data.Dataset.zip((input_dataset, target_dataset))

    # feature column definitions
    fc_dense = tf.feature_column.sequence_numeric_column('dense')
    fc_cat = tf.feature_column.sequence_categorical_column_with_identity('categorical', nb_features)
    embedding_units = 16
    fc_cat = tf.feature_column.embedding_column(fc_cat, embedding_units)

    # model parameters
    rnn_units = 64
    # training parameters
    epochs = 2

    # try the model with the sequence_numeric_column feature column
    model = Toy([fc_dense], nb_features)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    try:
        model.fit(x=training_dataset, epochs=epochs)
        model.evaluate(x=training_dataset)
    except ValueError as e:
        print(e)
    try:
        path = '/tmp/test'
        model.save(path)
        new_model = tf.keras.models.load_model(path)
        new_model.evaluate(x=training_dataset)
    except ValueError as e:
        print(e)

    # try the call method with the sequence_numeric_column feature column
    model = Toy([fc_dense], nb_features)
    try:
        for x, y in training_dataset:
            model(x)
        print('call works')
    except ValueError as e:
        print(e)

    # try the call method in graph mode with the sequence_numeric_column feature column
    @tf.function
    def call_graph(model, inputs):
        return model(inputs)

    model = Toy([fc_dense], nb_features)
    try:
        for x, y in training_dataset:
            call_graph(model, x)
        print('call in graph mode works')
    except ValueError as e:
        print(e)

    # try the model with the embedding_column feature column
    model = Toy([fc_cat], nb_features)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    try:
        model.fit(x=training_dataset, epochs=epochs)
        model.evaluate(x=training_dataset)
    except ValueError as e:
        print(e)
    try:
        path = '/tmp/test'
        model.save(path)
        new_model = tf.keras.models.load_model(path)
        new_model.evaluate(x=training_dataset)
    except ValueError as e:
        print(e)

Note that trying to build the model first by inserting the lines

    for x, y in training_dataset.take(1):
        model(x)

just before calling fit does not change anything.
tensorflow/tensorflow | Subclassed models prevent the computation of intermediate values in graph mode | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution: Google Colab. TensorFlow version (use command below): TensorFlow 2.0 beta1. Describe the current behavior: computing intermediate values within a subclassed model doesn't work in graph mode. Describe the expected behavior: it should most likely work, as it was possible in TensorFlow 1.x. Code to reproduce the issue: try it directly in Google Colab; set USE_EAGER_MODE to True/False to switch between the two cases.
tensorflow/tensorflow | tf.keras.layers.Embedding causes a memory leak | Bug | System information: Have I written custom code: Yes. OS platform and distribution: Linux Mint 19.1. TensorFlow installed from: binary, using pip. TensorFlow version: 2.0.0-beta1 (v2.0.0-beta0-16-g1d91213fe7). Python version: 3.6.8. CUDA/cuDNN version: 10.0 / 7.5. GPU model and memory: Nvidia Quadro P1000, 4 GB GDDR5.

Describe the current behavior: a GPU (edit: CPU as well, see the addendum below) memory leak rapidly emerges from using a high-dimensional tf.keras.layers.Embedding layer. To be more precise, I was working on Transformer networks and found that when I try to fit one (e.g. on the Portuguese-to-English translation task presented in this official tutorial), a GPU memory leak emerges after a few iterations. Based on this StackOverflow post (42512916), I rapidly came to suspect that the issue comes from the learnable Embedding layers at the base of both the encoder and decoder parts of the network. To further assess the issue and its source, I implemented a pseudo-Transformer network (see code link below) that is stripped of most technical components the actual model embarks (e.g. I removed positional encoding, residual connections, the masking mechanism, etc.). The rationale was to provide more condensed and faster-running code to document this issue, but also to confirm that the leak does not come from custom layers or any complex data-processing mechanism. The provided code includes a data pre-processing pipeline entirely based on the aforementioned tutorial, a model-construction function that makes use of the Keras functional API, and a main function to call the former and start the fitting process. On my computer everything runs fine and I can see the first few fitting iterations pass, until an ugly stack of allocation error messages shows up (see the full logs link below), whose informative part seems to be:

    W tensorflow/core/framework/op_kernel.cc:1546] OP_REQUIRES failed at cwise_ops_common.cc:70: Resource exhausted: OOM when allocating tensor

Addendum: I re-ran the provided code disabling access to the GPU, and it turns out there also is high memory usage when running on CPU. During the first epoch (and mostly during its first half) memory usage goes up multiple GB (in my case from 2 to 10 GB, with an increase from 2 to 7 within the first 60 training steps out of 704) and keeps slowly increasing throughout the following epochs, with minor decreases between increases (thus displaying local plateaux, which I would guess relate to the loading/discarding of data batches). Although this is a bit less of a problem than with the GPU, since it is relatively common to have quite some RAM available plus some swap space on Linux, it still does not feel right that fitting the fake model on a dataset which can be fully loaded in memory (creating a list of eager tensors from the tf.data.Dataset object containing the batched, padded training set results in a marginal usage of around 100 MB of RAM) would end up using 16 GB of RAM. I would also like to note that calling gc.collect() after training does not empty the used RAM, which is only freed instantly when ending the Python process.

Describe the expected behavior: the fitting process should go on fine and the memory should not get saturated; I would expect some tensors to be de-allocated as iterations pass.

Code to reproduce the issue: the script I wrote to illustrate the issue is publicly accessible as a gist here. Other info / logs: the full error stack with GPU enabled is publicly accessible as a gist here.
tensorflow/tensorflow | The use of lists and tuples is vague in tf.data.Dataset.map | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04. TensorFlow installed from: binary. TensorFlow version (use command below): 1.14.0. Python version: 3.7.3. CUDA/cuDNN version: 10.1.168 / 7.6.0 (but it also fails in CPU-only mode). GPU model and memory: GTX 1080, 8 GB. Exact command to reproduce:

    import tensorflow as tf

    inputs = tf.constant([10, 20, 30])
    ds = tf.data.Dataset.from_tensor_slices(inputs)
    out_ds1 = ds.map(lambda x: [x + 1, x + 2, x + 3])
    out_ds2 = ds.map(lambda x: [[x + 1, x + 2, x + 3]])
    out_ds3 = ds.map(lambda x: [[x + 1], [x + 2], [x + 3]])

I think TensorFlow treats tuples as nested structures of tensors and lists as tensors, as shown in the linked docs. However, tf.data.Dataset.map doesn't behave like this: in the code above I expected ds1, ds2 and ds3 to have element shapes (3,), (1, 3) and (3, 1) respectively, rather than each being a nested structure of scalar tensors, since a top-level list should be treated as a tensor rather than as a nested structure. A workaround (57146524, comment 100834064) is to use tf.convert_to_tensor to force-convert [x + 1, x + 2, x + 3] into a single tensor, but it's not elegant. PS: it's confusing to output shapes only with tuples; why not use lists to represent shapes and tuples to represent nested structures of tensors, just like the output of tensor.shape and tensor.get_shape() (e.g., out)?
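The tuple-as-structure convention the report cites can be sketched in plain Python. Note this is an illustrative toy, not tf.nest: TensorFlow's actual flattening also descends into lists, which is exactly the ambiguity the report complains about.

```python
# A toy flatten that follows the *documented* convention the report
# quotes: tuples are nested structure, everything else (including a
# plain list) is a single leaf.

def flatten(structure):
    """Flatten tuples recursively; treat anything else as one leaf."""
    if isinstance(structure, tuple):
        return [leaf for item in structure for leaf in flatten(item)]
    return [structure]

print(flatten((1, (2, 3))))  # [1, 2, 3]   -- tuple = nested structure
print(flatten([1, 2, 3]))    # [[1, 2, 3]] -- list = a single leaf
```

Under this convention a lambda returning a top-level list would yield one tensor of shape (3,); the surprise in the report is that Dataset.map instead flattens the list into three scalar components.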
tensorflow/tensorflow | Update 1.14 behavioral changes to mention | Bug | tf.keras.optimizers.Adadelta's default learning rate changed.
tensorflow/tensorflow | [Bug] TF 2.0 Keras: pointwise Conv2D numerically inconsistent in a Keras model vs. a manual run | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04, Docker container 2.0.0b1-gpu-py3-jupyter. CUDA/cuDNN version: CUDA 10.2. GPU model and memory: GeForce GTX 1080. Exact command to reproduce: see the script below.

Describe the problem: wrapping an identical tf.nn.conv2d operation that has a kernel size of 1x1 in a tf.keras.Model, then calling the model (with or without predict) on identical data, produces different results. The differences are small but accumulate through a deep network if many pointwise convs are used.

Source code / logs:

    import tensorflow as tf
    import numpy as np

    np.random.seed(123)
    pool = np.ones((32, 64, 64, 64), np.float32)
    w1 = np.random.randn(1, 1, 64, 64).astype(np.float32)

    # manual convolution
    conv1 = tf.nn.conv2d(pool, w1, 1, padding='SAME').numpy()

    # the same convolution op via a Keras model
    tmp_input = tf.keras.Input(shape=(64, 64, 64), dtype='float32')
    tmp_out = tf.nn.conv2d(tmp_input, w1, 1, padding='SAME')
    # predict can also be removed
    conv2 = tf.keras.Model(inputs=tmp_input, outputs=tmp_out).predict(pool)

    # the individual errors are small but they compound through a deep network
    print('disagreement between manual and Keras-model-wrapped conv:',
          np.abs(conv1 - conv2).sum())

This script can be run in a fresh pull of the Docker container.
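For context, last-bit disagreements of this kind are consistent with ordinary floating-point non-associativity: if the wrapped and unwrapped paths dispatch to kernels that reduce in a different order, the rounded results can legitimately differ. A stdlib-only demonstration that merely reordering a sum changes the result:

```python
# Floating-point addition is not associative: the same three operands
# summed in a different order round to different results.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

A 1x1 conv over 64 channels is a 64-term dot product per pixel, so any change in accumulation order (different kernel, fusion, or layout) can shift the last bits in exactly the way the script above measures.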
tensorflow/tensorflow | TF 2.0 Keras: Conv2D loses shape with dilation_rate other than 1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1. Python version: 3.7. Bazel version (if compiling from source): n/a. GCC compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior: when I use a Conv2D layer with a dilation_rate other than 1, the output loses its shape.

Describe the expected behavior: I expect to see a calculated shape with None only in the first (batch) dimension.

Code to reproduce the issue:

    from tensorflow.keras import Input
    from tensorflow.keras.layers import Conv2D

    tensor = Input(shape=(512, 512, 3))
    y1 = Conv2D(filters=256, kernel_size=1, dilation_rate=1, name='dilation_1')(tensor)
    y6 = Conv2D(filters=256, kernel_size=1, dilation_rate=6, name='dilation_6')(tensor)
    print(y1)
    print(y6)

Other info / logs: the output is

    Tensor("dilation_1/Identity:0", shape=(None, 512, 512, 256), dtype=float32)
    Tensor("dilation_6/Identity:0", shape=(None, None, None, 256), dtype=float32)
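For reference, the expected static shape is fully computable here: with 'same' padding and stride 1, the spatial size is unchanged regardless of dilation, so (None, 512, 512, 256) should be inferable in both cases. A stdlib-only sketch of the usual convolution arithmetic (assumed formulas, not TensorFlow's shape-inference code):

```python
# Standard conv-arithmetic formulas (illustrative helpers).

def same_output_size(input_size, stride=1):
    """'same' padding: ceil(input / stride), independent of dilation."""
    return -(-input_size // stride)  # ceiling division

def effective_kernel(k, dilation):
    """Spatial extent of a dilated kernel."""
    return k + (k - 1) * (dilation - 1)

print(same_output_size(512))   # 512 -- known regardless of dilation
print(effective_kernel(1, 6))  # 1  -- a 1x1 kernel is unaffected by dilation
print(effective_kernel(3, 6))  # 13
```

Since a 1x1 kernel has no spatial extent to dilate, the dilation_rate=6 case in the report should be shape-identical to the dilation_rate=1 case, which makes the (None, None, None, 256) result look like a shape-inference gap rather than genuine ambiguity.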
tensorflow/tensorflow | TF 2.0 nightly: tf.keras.estimator.model_to_estimator got an unexpected keyword argument 'use_v2_estimator' | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.5. Mobile device: No. TensorFlow installed from source or binary: No (binary). TensorFlow version (use command below): tf-nightly-2.0-preview 2.0.0.dev20190721; `import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)` gives v1.12.1-6727-g97b7aa03b7 2.0.0-dev20190721. Python version: 3.6.6. Bazel / GCC / CUDA-cuDNN / GPU: n/a.

Describe the current behavior: the following code was working with some earlier releases, but now it crashes:

    estimator = tf.keras.estimator.model_to_estimator(keras_model=model, config=training_config)

Describe the expected behavior: it should work out of the box, as before.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

    import tensorflow as tf
    import tensorflow_datasets as tfds
    from absl import logging
    logging.set_verbosity(logging.INFO)

    # define the estimator's input_fn
    steps_per_epoch = 5
    batch_size = 64
    num_epochs = 5

    def input_fn():
        datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
        mnist_train, mnist_test = datasets['train'], datasets['test']
        buffer_size = 10000
        batch_size = 64

        def scale(image, label):
            image = tf.cast(image, tf.float32)
            image /= 255
            return image, label[..., tf.newaxis]

        train_data = mnist_train.map(scale).shuffle(buffer_size).batch(batch_size)
        return train_data.repeat()

    # define train & eval specs
    train_spec = tf.estimator.TrainSpec(input_fn=input_fn,
                                        max_steps=steps_per_epoch * num_epochs)
    eval_spec = tf.estimator.EvalSpec(input_fn=input_fn, steps=steps_per_epoch)

    def make_model():
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu',
                                   kernel_regularizer=tf.keras.regularizers.l2(0.02),
                                   input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dropout(0.1),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Dense(10, activation='softmax')
        ])

    model = make_model()
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    strategy = None
    strategy = tf.distribute.MirroredStrategy()
    # config tf.estimator to use a given strategy
    training_config = tf.estimator.RunConfig(train_distribute=strategy)
    estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                      config=training_config)
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

Other info / logs (full traceback):

    WARNING: Logging before flag parsing goes to stderr.
    W0721 17:02:51.036228 4662060480 cross_device_ops.py:1207] There are non-GPU devices in tf.distribute.Strategy, not using nccl allreduce.
    I0721 17:02:51.037434 4662060480 run_config.py:558] Initializing RunConfig with distribution strategies.
    I0721 17:02:51.038416 4662060480 estimator_training.py:167] Not using Distribute Coordinator.

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
         64 estimator = tf.keras.estimator.model_to_estimator(
         65     keras_model=model,
    ---> 66     config=training_config)
         67
         68

    ~/anaconda/release/conda_envs/envs/gcp_dl_2.0_nightly/lib/python3.6/site-packages/tensorflow_core/python/keras/estimator/__init__.py in model_to_estimator_v2(keras_model, keras_model_path, custom_objects, model_dir, config, checkpoint_format)
        164       config=config,
        165       checkpoint_format=checkpoint_format,
    --> 166       use_v2_estimator=True)
        167 # LINT.ThenChange(//tensorflow_estimator/python/estimator/keras.py)

    TypeError: model_to_estimator() got an unexpected keyword argument 'use_v2_estimator'
tensorflow/tensorflow | tf.contrib.factorization.WALSMatrixFactorization: AttributeError: 'module' object has no attribute 'WALSMatrixFactorization' | Bug | System information (template fields left unfilled by the reporter): Have I written custom code: (blank). OS platform and distribution: (blank). Mobile device: (blank). TensorFlow installed from (source or binary): (blank). TensorFlow version: (blank). Python version: (blank). Bazel / GCC / CUDA-cuDNN / GPU model and memory: (blank).

Describe the current behavior: I just ran a WALS model through a gcloud ml-engine job; I had previously run it from my Mac terminal. I didn't change any code, but over here it always fails with the following error:

    experiment_fn ... tf.contrib.factorization.WALSMatrixFactorization
    AttributeError: 'module' object has no attribute 'WALSMatrixFactorization'

The job was submitted with (paths reconstructed; separators were lost in transcription):

    gcloud ml-engine jobs submit training wals_190721_012226 \
      --region us-east1 \
      --module-name walsmodel.task \
      --runtime-version 1.14 --python-version 3.5 \
      --package-path /home/mabodx/anguis/news_recommendation/10_recommend/walsmodel \
      --job-dir gs://buzzbreak/news_recommendation/2019-07-21T01:18:02Z \
      --staging-bucket gs://buzzbreak \
      --scale-tier BASIC_GPU \
      -- \
      --output-dir gs://buzzbreak/news_recommendation/2019-07-21T01:18:02Z/model_trained_190721_012212 \
      --input-path gs://buzzbreak/news_recommendation/2019-07-21T01:18:02Z/news_data \
      --num-epochs 500 --n-items 39681 --n-users 38781 --top-k 1000

Describe the expected behavior: (blank). Code to reproduce the issue: (blank). Other info / logs: (blank).
tensorflow/tensorflow | Inconsistency between using keras and tf.keras | Bug | **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version: TensorFlow 2
- Python version: 3.6.8
- Installed using virtualenv? pip? conda?: pip
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

There is an inconsistency when using keras or tf.keras to fit the model to the same dataset. As everyone can see, everything is identical in both scripts except that the layers and other objects are imported from either tf.keras or pure keras. The code is attached. I am absolutely confused: I have double-checked the code line by line, and everything is the same in both scripts except that the modules are imported from either pure keras or tf.keras. It is absolutely disappointing to me. Why would they be totally different? The code is the same except for where the layers are imported from — why should these models show such completely different behavior?

The modules/layers can be imported either as

```python
from tensorflow.python.keras.layers import Input, Dense, BatchNormalization, GaussianNoise, GaussianDropout
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.utils import np_utils
from tensorflow.python.keras.callbacks import CSVLogger, History
```

or as

```python
from keras.layers import Input, Dense, BatchNormalization, GaussianNoise, GaussianDropout
from keras.models import Model
from keras.utils import np_utils
from keras.callbacks import CSVLogger, History
```

Create model (everything in the model description is the same):

```python
inputs = Input(shape=(19671,), name='input')
input_0 = BatchNormalization(name='input_0')(inputs)
input_1 = Dense(1024, activation='linear', name='input_1')(input_0)
input_2 = BatchNormalization(name='input_2')(input_1)
input_3 = Dense(128, activation='softplus', name='input_3')(input_2)
input_4 = BatchNormalization(name='input_4')(input_3)
encoded = Dense(units=12, activation='relu', name='encoded')(input_4)
input_5 = Dense(512, activation='softplus', name='input_5')(encoded)
decoded_tcga = Dense(units=19671, activation='linear', name='m_rna')(input_5)
decoded_micro_rna = Dense(units=2588, activation='linear', name='mi_rna')(input_5)
cl_0 = Dense(units=27, activation='softmax', name='cl_tissue')(encoded)
cl_2 = Dense(units=33, activation='softmax', name='cl_disease')(encoded)

scae = Model(inputs=inputs, outputs=[decoded_tcga, decoded_micro_rna, cl_0, cl_2])
scae.compile(optimizer='nadam',
             loss=['mse', 'mse', 'cosine_proximity', 'cosine_proximity'],
             loss_weights=[0.001, 0.001, 0.5, 0.5],
             metrics={'m_rna': ['mse', 'mae'], 'mi_rna': ['mse', 'mae'],
                      'cl_tissue': 'acc', 'cl_disease': 'acc'})
```

Loading the dataset (the dataset is too large to attach, about 5.5 GB):

```python
from sklearn.preprocessing import LabelEncoder, normalize
import pandas as pd
import numpy as np

seed = 2019
np.random.seed(seed=seed)

local_dataset_folder = '/home/ermia/MEGA/PycharmProject/VSC/TCGA/full_version/TCGA/dataset/'
local_results_folder = '/home/ermia/MEGA/PycharmProject/VSC/TCGA/full_version/TCGA/results/'

dataset_folder = local_dataset_folder
df_m_rna_address = dataset_folder + 'fpkm.csv'
df_mi_rna_address = dataset_folder + 'mirna.csv'
df_tissue_address = dataset_folder + 'tissue.csv'
df_disease_address = dataset_folder + 'disease.csv'

df_m_rna = np.loadtxt(df_m_rna_address, delimiter=',')
df_mi_rna = np.loadtxt(df_mi_rna_address, delimiter=',')
df_tissue = np.ravel(pd.DataFrame.as_matrix(pd.read_csv(df_tissue_address, delimiter=',', header=None)))
df_disease = np.ravel(pd.DataFrame.as_matrix(pd.read_csv(df_disease_address, delimiter=',', header=None)))

df_m_rna = normalize(X=df_m_rna, axis=0, norm='max')
df_mi_rna = normalize(X=df_mi_rna, axis=0, norm='max')

label_encoder_tissue = LabelEncoder()
label_encoder_tissue.fit(df_tissue)
encoded_tissue = label_encoder_tissue.transform(df_tissue)

label_encoder_disease = LabelEncoder()
label_encoder_disease.fit(df_disease)
encoded_disease = label_encoder_disease.transform(df_disease)

categorical_tissue = np_utils.to_categorical(encoded_tissue)
categorical_disease = np_utils.to_categorical(encoded_disease)

m_rna = df_m_rna
mi_rna = df_mi_rna

indices = np.arange(m_rna.shape[0])  # indices 0..10750
np.random.shuffle(indices)
m_rna = m_rna[indices]
mi_rna = mi_rna[indices]
categorical_tissue = categorical_tissue[indices]
categorical_disease = categorical_disease[indices]

m_rna_train = m_rna[0:9750]
m_rna_test = m_rna[9750:10750]
mi_rna_train = mi_rna[0:9750]
mi_rna_test = mi_rna[9750:10750]
categorical_tissue_train = categorical_tissue[0:9750]
categorical_tissue_test = categorical_tissue[9750:10750]
categorical_disease_train = categorical_disease[0:9750]
categorical_disease_test = categorical_disease[9750:10750]

print('Data loading has just been finished!')
print(m_rna.shape, mi_rna.shape, categorical_tissue.shape, categorical_disease.shape)

batch_size = 64
nb_epoch = 200
```

Fitting the model:

```python
# csv_logger and history are instances of the CSVLogger and History
# callbacks imported above (their construction is not shown in the report)
scae.fit(m_rna_train,
         [m_rna_train, mi_rna_train, categorical_tissue_train, categorical_disease_train],
         batch_size=batch_size,
         epochs=nb_epoch,
         callbacks=[csv_logger, history],
         validation_data=(m_rna_test,
                          [m_rna_test, mi_rna_test, categorical_tissue_test, categorical_disease_test]),
         verbose=2)
```

The first model does not converge at all, and the accuracy of the subproblems is about 0. @fchollet |
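The core of the data pipeline above is shuffling the features and every label array with one shared permutation before taking a contiguous train/test split. A minimal NumPy sketch of that pattern (toy shapes — not the actual 10750-sample, 5.5 GB dataset from the report):

```python
import numpy as np

rng = np.random.default_rng(2019)

# toy stand-ins for the m_rna features and one label array
n = 100
x = rng.normal(size=(n, 5))
y = rng.integers(0, 3, size=n)

# one shared permutation keeps every array aligned with its labels
idx = rng.permutation(n)
x, y = x[idx], y[idx]

# contiguous split after shuffling (the report uses 9750/1000)
x_train, x_test = x[:80], x[80:]
y_train, y_test = y[:80], y[80:]

assert x_train.shape == (80, 5) and x_test.shape == (20, 5)
```

Shuffling each array with its own independent permutation would silently decorrelate inputs from targets, which is why a single index array is reused for all of them.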
tensorflow/tensorflow | tf.keras.models.load_model fails on Sequential models | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf-nightly-gpu-2.0-preview 2.0.0.dev20190718
- Python version: 3.7

**Describe the current behavior**
Attempting to run tf.keras.models.load_model on a tf.keras.Sequential with lazily generated inputs produces an error.

**Describe the expected behavior**
No error is thrown when running the script below; model2 is a clone of model.

**Code to reproduce the issue**

```python
import tempfile
import tensorflow as tf


def main():
    batch_size = 3
    image_shape = (32, 32, 3)
    inputs = tf.random.uniform((batch_size, *image_shape))

    model = tf.keras.Sequential((
        tf.keras.layers.Conv2D(
            filters=16,
            kernel_size=3,
            strides=2,
            padding='same',
            activation='linear'),
    ))
    model(inputs)

    with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
        tf.keras.models.save_model(model, fd.name, overwrite=True)
        model2 = tf.keras.models.load_model(fd.name, compile=False)

    print(model2.summary())


if __name__ == '__main__':
    main()
```

**Other info / logs**
Running the script above produces the following error:

```
Traceback (most recent call last):
  File "/home/kristian/conda/envs/softlearning-2/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/kristian/conda/envs/softlearning-2/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/kristian/github/hartikainen/softlearning-2/tests/test_sequential_serialization.py", line 31, in <module>
    main()
  File "/home/kristian/github/hartikainen/softlearning-2/tests/test_sequential_serialization.py", line 25, in main
    model2 = tf.keras.models.load_model(fd.name, compile=False)
  File "/home/kristian/conda/envs/softlearning-2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 138, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/kristian/conda/envs/softlearning-2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 165, in load_model_from_hdf5
    load_weights_from_hdf5_group(f['model_weights'], model.layers)
  File "/home/kristian/conda/envs/softlearning-2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 671, in load_weights_from_hdf5_group
ValueError: You are trying to load a weight file containing 1 layers into a model with 0 layers.
Exception ignored in: Traceback (most recent call last):
  File "/home/kristian/conda/envs/softlearning-2/lib/python3.7/site-packages/tensorflow_core/python/eager/context.py", line 316, in __del__
TypeError: argument of type 'NoneType' is not iterable
``` |
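The failing pattern above is a save/load round trip through a named temporary file. As a library-agnostic sketch of that round trip (using json and a hypothetical layer-config dict as stand-ins — this is not the tf.keras HDF5 format, just the file-handling pattern the report exercises):

```python
import json
import tempfile

# hypothetical stand-in for a one-layer model config (illustrative only)
config = {"layers": [{"type": "Conv2D", "filters": 16,
                      "kernel_size": 3, "strides": 2}]}

# save to a named temporary file, then load it back from the same
# path -- the same round trip the report performs with
# tf.keras.models.save_model / load_model
with tempfile.NamedTemporaryFile(
        mode="w+", suffix=".json", delete=True) as fd:
    json.dump(config, fd)
    fd.flush()
    with open(fd.name) as f:
        restored = json.load(f)

assert restored == config
assert len(restored["layers"]) == 1  # a faithful clone keeps its layers
```

The bug report boils down to this last invariant failing for Sequential models built with lazily inferred inputs: the restored model comes back with 0 layers instead of 1.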
tensorflow/tensorflow | TF 2.0 tf.distribute examples not working (NCCL all-reduce issue) | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0b1
- Python version: 3.6
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: 10 / 7.4
- GPU model and memory: TITAN Xp

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
When I try to run one of the tf.distribute examples, the model fails using the default NCCL all-reduce cross-device ops.

NCCL all-reduce error:

```
InternalError                             Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model.fit(train_dataset, epochs=50, callbacks=callbacks)

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    641           max_queue_size=max_queue_size,
    642           workers=workers,
--> 643           use_multiprocessing=use_multiprocessing)
    644
    645   def evaluate(self,

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_distributed.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    679         validation_steps=validation_steps,
    680         validation_freq=validation_freq,
--> 681         steps_name='steps_per_epoch')
    682
    683   def evaluate(self,

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    292         else:
    293           actual_inputs = ins
--> 294         batch_outs = f(actual_inputs)
    295       except errors.OutOfRangeError:
    296         if is_dataset:

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/keras/distribute/distributed_training_utils.py in execution_function(input_fn)
    811   def execution_function(input_fn):
    812     # `numpy` translates Tensors to values in Eager mode.
--> 813     return [out.numpy() for out in distributed_function(input_fn)]
    814   return execution_function
    815

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    426       # Lifting succeeded, so variables are initialized and we can run the
    427       # stateless function.
--> 428       return self._stateless_fn(*args, **kwds)
    429     else:
    430       canon_args, canon_kwds = \

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   1333     """Calls a graph function specialized to the inputs."""
   1334     graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 1335     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   1336
   1337   @property

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/eager/function.py in _filtered_call(self, args, kwargs)
    587
    588     return self._call_flat(
--> 589         [t for t in nest.flatten((args, kwargs), expand_composites=True)
    590          if isinstance(t, (ops.Tensor,
    591                            resource_variable_ops.ResourceVariable))])

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/eager/function.py in _call_flat(self, args)
    669     # Only need to override the gradients in graph mode and when we have outputs.
    670     if context.executing_eagerly() or not self.outputs:
--> 671       outputs = self._inference_function.call(ctx, args)
    672     else:
    673       self._register_gradient()

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/eager/function.py in call(self, ctx, args)
    443             attrs=("executor_type", executor_type,
    444                    "config_proto", config),
--> 445             ctx=ctx)
    446       # Replace empty list with None
    447       outputs = outputs or None

virtualenvs/physics3/lib/python3.6/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     if any(ops._is_keras_symbolic_tensor(x) for x in inputs):

virtualenvs/physics3/lib/python3.6/site-packages/six.py in raise_from(value, from_value)

InternalError: 2 root error(s) found.
  (0) Internal: Internal error [[node Adam_NcclAllReduce_8 (defined at 1) ]]
  (1) Internal: Internal error [[node Adam_NcclAllReduce_8 (defined at 1) ]]
     [[GroupCrossDeviceControlEdges_4/Adam/Adam/update_9/Const_679]]
0 successful operations.
9 derived errors ignored. [Op:__inference_distributed_function_9447]

Function call stack:
distributed_function -> distributed_function
```

Switching to hierarchical all-reduce in the scope allows the code to run, but with a performance penalty and more errors.

Hierarchical all-reduce errors:

```
2019-07-19 15:06:47.632060: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-07-19 15:06:47.947109: W tensorflow/core/framework/model.cc:475] Failed to find a tunable parameter that would decrease the output time. This means that the autotuning optimization got stuck in a local maximum. The optimization attempt will be aborted.
2019-07-19 15:06:48.925479: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-07-19 15:07:02.595311: I tensorflow/core/profiler/lib/profiler_session.cc:174] Profiler session started.
2019-07-19 15:07:02.610888: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcupti.so.10.0
2019-07-19 15:07:07.254401: I tensorflow/core/platform/default/device_tracer.cc:641] Collecting 1045 kernel records, 173 memcpy records
2019-07-19 15:07:12.732921: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[GroupCrossDeviceControlEdges_8/Adam/Adam/update_9/Const_879]]
2019-07-19 15:07:12.732922: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[Replica_4/metrics/accuracy/AssignAddVariableOp_1_375]]
2019-07-19 15:07:12.732962: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[Adam/Adam/group_deps_951]]
2019-07-19 15:07:12.733034: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[Replica_9/metrics/accuracy/AssignAddVariableOp_1_203]]
2019-07-19 15:07:12.732989: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[GroupCrossDeviceControlEdges_0/Adam/Adam/update_9/Const_631]]
2019-07-19 15:07:12.732981: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[Replica_3/metrics/accuracy/AssignAddVariableOp_1_359]]
2019-07-19 15:07:12.733029: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[metrics/accuracy/div_no_nan/ReadVariableOp_16_450]]
2019-07-19 15:07:12.732962: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_9}}]]
2019-07-19 15:07:12.733232: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[GroupCrossDeviceControlEdges_2/Adam/Adam/update_9/Const_727]]
2019-07-19 15:07:12.733380: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
	 [[GroupCrossDeviceControlEdges_0/Adam/Adam/update_9/Const_763]]
2019-07-19 15:07:12.735619: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_8}}]]
[I 15:08:10.336 LabApp] Saving file at k
```

Further investigation has revealed that this only occurs when I add the 10th GPU on the node to the strategy. Using the first 9 GPUs is fine, but including the 10th GPU with any other combination of GPUs leads to this behavior (confirmed on multiple independent systems). After reinstalling the CUDA drivers and restarting the node I still get an out-of-range error, but the model trains. The model also trains when only using the 10th GPU.

Out of range error with NCCL all-reduce (repeated device-enumeration dumps from the second and third context initializations are identical to the first and have been removed):

```
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:1c:00.0
2019-07-20 12:10:09.708295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 1 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:1d:00.0
2019-07-20 12:10:09.710603: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 2 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:1e:00.0
2019-07-20 12:10:09.712948: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 3 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:3d:00.0
2019-07-20 12:10:09.715631: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 4 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:3e:00.0
2019-07-20 12:10:09.717939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 5 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:3f:00.0
2019-07-20 12:10:09.720316: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 6 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:40:00.0
2019-07-20 12:10:09.720644: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-07-20 12:10:09.722035: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-07-20 12:10:09.723341: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-07-20 12:10:09.723662: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-07-20 12:10:09.725225: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-07-20 12:10:09.726450: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-07-20 12:10:09.730176: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-07-20 12:10:09.761821: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0, 1, 2, 3, 4, 5, 6
2019-07-20 12:10:09.762583: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-07-20 12:10:11.263993: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3c306c0 executing computations on platform CUDA. Devices:
2019-07-20 12:10:11.264051: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): TITAN Xp, Compute Capability 6.1
2019-07-20 12:10:11.264071: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (1): TITAN Xp, Compute Capability 6.1
2019-07-20 12:10:11.264088: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (2): TITAN Xp, Compute Capability 6.1
2019-07-20 12:10:11.264104: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (3): TITAN Xp, Compute Capability 6.1
2019-07-20 12:10:11.264119: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (4): TITAN Xp, Compute Capability 6.1
2019-07-20 12:10:11.264135: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (5): TITAN Xp, Compute Capability 6.1
2019-07-20 12:10:11.264151: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (6): TITAN Xp, Compute Capability 6.1
2019-07-20 12:10:11.273304: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300010000 Hz
2019-07-20 12:10:11.275776: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x45d7fb0 executing computations on platform Host. Devices:
2019-07-20 12:10:11.275796: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0):
2019-07-20 12:10:11.362271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-20 12:10:11.362286: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 1 2 3 4 5 6
2019-07-20 12:10:11.362292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N Y Y Y Y Y Y
2019-07-20 12:10:11.362296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 1:   Y N Y Y Y Y Y
2019-07-20 12:10:11.362300: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 2:   Y Y N Y Y Y Y
2019-07-20 12:10:11.362304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 3:   Y Y Y N Y Y Y
2019-07-20 12:10:11.362309: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 4:   Y Y Y Y N Y Y
2019-07-20 12:10:11.362313: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 5:   Y Y Y Y Y N Y
2019-07-20 12:10:11.362317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 6:   Y Y Y Y Y Y N
2019-07-20 12:10:11.378594: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11424 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:1c:00.0, compute capability: 6.1)
2019-07-20 12:10:11.380969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11424 MB memory) -> physical GPU (device: 1, name: TITAN Xp, pci bus id: 0000:1d:00.0, compute capability: 6.1)
2019-07-20 12:10:11.385188: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 11424 MB memory) -> physical GPU (device: 2, name: TITAN Xp, pci bus id: 0000:1e:00.0, compute capability: 6.1)
2019-07-20 12:10:11.389861: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 11424 MB memory) -> physical GPU (device: 3, name: TITAN Xp, pci bus id: 0000:3d:00.0, compute capability: 6.1)
2019-07-20 12:10:11.393361: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:4 with 11424 MB memory) -> physical GPU (device: 4, name: TITAN Xp, pci bus id: 0000:3e:00.0, compute capability: 6.1)
2019-07-20 12:10:11.397087: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:5 with 11424 MB memory) -> physical GPU (device: 5, name: TITAN Xp, pci bus id: 0000:3f:00.0, compute capability: 6.1)
2019-07-20 12:10:11.400702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:6 with 11424 MB memory) -> physical GPU (device: 6, name: TITAN Xp, pci bus id: 0000:40:00.0, compute capability: 6.1)
2019-07-20 12:10:26.189075: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1483] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2019-07-20 12:10:26.273130: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-07-20 12:10:28.489688: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-07-20 12:10:36.757915: I tensorflow/core/profiler/lib/profiler_session.cc:174] Profiler session started.
2019-07-20 12:10:36.759100: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcupti.so.10.0
2019-07-20 12:10:40.328991: I tensorflow/core/platform/default/device_tracer.cc:641] Collecting 722 kernel records, 110 memcpy records
2019-07-20 12:10:45.501340: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
	 [[GroupCrossDeviceControlEdges_0/Adam/Adam/update_6/Const_297]]
2019-07-20 12:10:45.501340: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
	 [[Replica_3/metrics/accuracy/AssignAddVariableOp_1_187]]
2019-07-20 12:10:45.501340: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
	 [[GroupCrossDeviceControlEdges_5/Adam/Adam/update_5_1/Const_365]]
2019-07-20 12:10:45.501340: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
	 [[PermConstNCHWToNHWC-LayoutOptimizer_16]]
2019-07-20 12:10:45.501348: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
	 [[GroupCrossDeviceControlEdges_2/Adam/Adam/update_6/Const_421]]
2019-07-20 12:10:45.501348: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
	 [[GroupCrossDeviceControlEdges_2/Identity_1_523]]
2019-07-20 12:10:45.501354: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
	 [[Replica_5/loss/mul_260]]
2019-07-20 12:10:45.502198: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext_2}}]]
```

**Describe the expected behavior**

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
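For context on what the failing NcclAllReduce op is supposed to compute: an all-reduce leaves every replica holding the reduction (here, the sum) of all replicas' tensors. A minimal single-process NumPy sketch of those semantics (purely illustrative — NCCL's actual ring/tree algorithms and the GPU transport are not modeled):

```python
import numpy as np

def all_reduce_sum(replica_tensors):
    """All-reduce (sum): every replica receives the elementwise
    sum of all replicas' inputs."""
    total = np.sum(replica_tensors, axis=0)
    return [total.copy() for _ in replica_tensors]

# per-replica gradients on three hypothetical devices
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
reduced = all_reduce_sum(grads)

# every replica ends up with the same summed gradient [9.0, 12.0]
assert all(np.allclose(r, [9.0, 12.0]) for r in reduced)
```

In MirroredStrategy this reduction runs once per gradient per step, which is why a failure in the collective (as in the logs above) aborts the whole training function rather than a single replica.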
tensorflow/tensorflow | Dense does not flatten inputs with rank > 2 and behaves exactly like TimeDistributed(Dense) | Bug |

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.13.6
- TensorFlow installed from (source or binary): from pip install
- TensorFlow version (use command below): v2.0.0-beta0-16-g1d91213fe7, 2.0.0-beta1
- Python version: v3.6.7:6ec5cf24b7, Oct 20 2018, 03:02:14

Describe the current behavior
A note in the Dense documentation says: "Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel." I don't see this happening in real life. Instead, Dense behaves on a rank-3 tensor as it would behave if it were wrapped in a TimeDistributed layer, making me question the utility of TimeDistributed at all.

Describe the expected behavior
Dense should flatten its input like the documentation says. In the first example below, the shape of the kernel weight of dense should be (5*3, 2) = (15, 2) instead of (3, 2), which is the shape of dense2 (as expected in the case of dense2).

Code to reproduce the issue

First example:

```python
import tensorflow as tf
import numpy as np

print("Using TensorFlow version {} (git version {})".format(tf.version.VERSION, tf.version.GIT_VERSION))

tf.random.set_seed(12)
np.random.seed(12)
init = tf.keras.initializers.GlorotUniform(seed=12)

inp = tf.constant(np.random.normal(0, 1, (1, 5, 6)))
inp = tf.cast(inp, dtype=tf.float32)
gru = tf.keras.layers.GRU(3, return_sequences=True)(inp)
print(gru.shape)  # (1, 5, 3)

dense = tf.keras.layers.Dense(2, kernel_initializer=init, bias_initializer=init)
print(dense(gru))
# tf.Tensor(..., shape=(1, 5, 2), dtype=float32), values (signs omitted here):
# 1.5456871 0.5280464 0.11647969 0.20553198 0.58126366
# 0.16031623 0.22882831 0.22649539 0.62777793 0.32470667

for w in dense.weights:
    print(w.shape)
# (3, 2) -- instead of (5*3, 2) = (15, 2) if Dense indeed flattened its input -- and (2,)

tddense = tf.keras.layers.TimeDistributed(dense)
print(tddense(gru))
# If dense's kernel had shape (15, 2), this should result in the following error:
#   InvalidArgumentError: Matrix size-incompatible: In[0]: [5,3], In[1]: [15,2] [Op:MatMul]
# but instead what we get is the same output as without TimeDistributed, without error.

dense2 = tf.keras.layers.Dense(2, kernel_initializer=init, bias_initializer=init)
tddense = tf.keras.layers.TimeDistributed(dense2)
print(tddense(gru))  # same values again, shape=(1, 5, 2), dtype=float32

for w in dense2.weights:
    print(w.shape)  # (3, 2) as expected, and (2,)
```

Second example, with a rank even larger than 3:

```python
import tensorflow as tf

print("Using TensorFlow version {} (git version {})".format(tf.version.VERSION, tf.version.GIT_VERSION))

inp = tf.keras.Input(shape=(10, 25, 25, 3))

dense_layer1 = tf.keras.layers.Dense(78)
x = dense_layer1(inp)
print("Output shape without TimeDistributed:")
print(x.shape)

dense_layer2 = tf.keras.layers.Dense(78)
y = tf.keras.layers.TimeDistributed(dense_layer2)(inp)
print("Output shape with TimeDistributed:")
print(y.shape)

print("Weight shapes without TimeDistributed:")
for weight in dense_layer1.trainable_weights:
    if len(weight.shape) == 2:
        print("kernel shape:")
    else:
        print("bias shape:")
    print(weight.shape)

print("Weight shapes with TimeDistributed:")
for weight in dense_layer2.trainable_weights:
    if len(weight.shape) == 2:
        print("kernel shape:")
    else:
        print("bias shape:")
    print(weight.shape)
```

whose output is:

```
Using TensorFlow version 2.0.0-beta1 (git version v2.0.0-beta0-16-g1d91213fe7)
Output shape without TimeDistributed:
(None, 10, 25, 25, 78)
Output shape with TimeDistributed:
(None, 10, 25, 25, 78)
Weight shapes without TimeDistributed:
kernel shape:
(3, 78)
bias shape:
(78,)
Weight shapes with TimeDistributed:
kernel shape:
(3, 78)
bias shape:
(78,)
```

We see in this example that Dense and TimeDistributed(Dense) behave the same, in that they only touch the last dimension of the input. |
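Not part of the original report: the equivalence the reporter observes can be checked outside TensorFlow, because applying a (features_in, units) kernel to a rank-3 tensor contracts only the last axis, which is exactly what a per-timestep (TimeDistributed-style) application computes. A minimal NumPy sketch, with shapes borrowed from the first example and all names illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 5, 3))    # (batch, time, features), like the GRU output above
kernel = rng.normal(size=(3, 2))  # Dense-style kernel: (features_in, units)
bias = rng.normal(size=(2,))

# What Dense actually does on rank-3 input: contract only the LAST axis.
dense_out = np.einsum('btf,fu->btu', x, kernel) + bias

# What TimeDistributed(Dense) does: apply the same (3, 2) kernel to each timestep.
td_out = np.stack([x[:, t, :] @ kernel + bias for t in range(x.shape[1])], axis=1)

# The two are numerically identical, which is the behavior the report observes.
assert np.allclose(dense_out, td_out)

# What the docstring described instead: flatten to (batch, 5*3) and use a (15, 2) kernel,
# which is a genuinely different computation with a different output shape.
flat_kernel = rng.normal(size=(15, 2))
flat_out = x.reshape(1, -1) @ flat_kernel + bias
assert flat_out.shape == (1, 2)
```

This makes concrete why the documented flattening and the observed per-timestep behavior cannot both be true: they do not even agree on the output shape.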
tensorflow/tensorflow | Base64 strings containing `=` | Bug |

URL(s) with the issue:
Description of issue (what needs changing): some base64 strings contain `=`.
Clear description: It could be useful to add this information after this paragraph: "Input may or may not have padding at the end. See EncodeBase64 for padding. Web-safe means that input must use `-` and `_` instead of `+` and `/`." |
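The paragraph quoted in this report can be illustrated with Python's standard base64 module (not TensorFlow; purely to show the two alphabets and the `=` padding the report is about). The input bytes are chosen so the encoding exercises the two alphabet-dependent symbols:

```python
import base64

raw = b"\xfb\xff"  # chosen so the encoding uses characters 62 and 63

std = base64.b64encode(raw)              # standard alphabet: '+' and '/'
websafe = base64.urlsafe_b64encode(raw)  # web-safe alphabet: '-' and '_' instead

assert std == b"+/8="
assert websafe == b"-_8="

# The '=' padding may appear at the end in either alphabet; the two alphabets
# differ only in those two symbols, not in how padding works.
assert base64.urlsafe_b64decode(websafe) == raw
```

This shows why "web-safe" and "padded" are orthogonal properties: a web-safe string can still end in `=`, which is presumably what the report wants the documentation to state explicitly.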
tensorflow/tensorflow | TF2: memory leak during implicit conversion from EagerTensor to numpy | Bug |

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tensorflow-gpu 2.0.0-beta1
- Python version: 3.7
- CUDA/cuDNN version: 10.0 / 7.5
- GPU model and memory: GTX 1080 Ti

Running the following code on GPU causes a memory leak on my machine during the implicit conversion to numpy. The leak does not manifest when doing the explicit conversion r = np.sum(x.numpy()).

Code to reproduce the issue

```python
import tensorflow as tf
import numpy as np

n = 100000
x = tf.zeros(10000)  # placed on GPU
for i in range(n):
    r = np.sum(x)
```
|
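Not from the report: the "implicit conversion" happens through NumPy's `__array__` protocol, which np.sum triggers on every loop iteration, whereas converting once explicitly (x.numpy() in TF) reuses a single ndarray. A small sketch with a hypothetical FakeTensor standing in for an EagerTensor, counting how often the conversion hook runs:

```python
import numpy as np

class FakeTensor:
    """Hypothetical stand-in for an EagerTensor; counts implicit conversions."""
    conversions = 0

    def __init__(self, data):
        self._data = np.asarray(data)

    def __array__(self, dtype=None, copy=None):
        # NumPy calls this hook each time the object is handed to a NumPy function.
        FakeTensor.conversions += 1
        return self._data if dtype is None else self._data.astype(dtype)

x = FakeTensor(np.zeros(10))

for _ in range(5):
    np.sum(x)            # implicit: __array__ runs on every iteration
assert FakeTensor.conversions == 5

FakeTensor.conversions = 0
x_np = np.asarray(x)     # explicit, one-time conversion (analogous to x.numpy())
for _ in range(5):
    np.sum(x_np)         # operates on a plain ndarray; no further conversions
assert FakeTensor.conversions == 1
```

This does not reproduce the GPU leak itself; it only illustrates why the explicit-conversion variant in the report performs one conversion instead of n of them.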
tensorflow/tensorflow | TensorRT INT8 calibration table is missing | Bug |

Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.13.0-rc1-0-g54c5616308
- Python version: 3.5.2
- CUDA/cuDNN version: CUDA 10
- GPU model and memory: Tesla V100, 16 GB

Describe the current behavior
TF-TRT fails to collect calibration info and cannot create the INT8 inference graph.

Code to reproduce the issue
Here is the model: (link)

```python
def run_calibration(calib_graph, dataset):
    tf.reset_default_graph()
    tf_config = tf.ConfigProto()
    tf_config.gpu_options.allow_growth = True
    x = np.random.randint(0, 10000, (1, 40))
    with tf.Graph().as_default() as g:
        input_, output = tf.import_graph_def(
            graph_def=calib_graph,
            return_elements=['x', 'model/dense/BiasAdd'],
            name='')
        input_ = input_.outputs[0]
        output = output.outputs[0]
        sess = tf.Session(config=tf_config, graph=g)
        for i in range(10):
            val = sess.run(output, {input_: x})
    return calib_graph


def int8quant():
    with tf.Session() as sess:
        saver = tf.train.import_meta_graph('/workspace/exps/ptb/ptb_rnn_128/ptb_rnn_128-72600.meta')
        saver.restore(sess, '/workspace/exps/ptb/ptb_rnn_128/ptb_rnn_128-72600')
        your_outputs = ['model/dense/BiasAdd']
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            sess, tf.get_default_graph().as_graph_def(), output_node_names=your_outputs)
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,
        outputs=your_outputs,
        max_batch_size=10,
        max_workspace_size_bytes=2 << 30,
        precision_mode='INT8',
        minimum_segment_size=2)  # minimum number of nodes in an engine
    int8graph = run_calibration(trt_graph, None)
    int8_graph = trt.calib_graph_to_infer_graph(int8graph)
```

Other info / logs
Here is the trace log:

```
2019-07-19 03:47:34.167937: E tensorflow/contrib/tensorrt/log/trt_logger.cc:38] DefaultLogger Tensor: TensorRTOutputPH_0 cannot be both input and output
2019-07-19 03:47:34.167960: E tensorflow/contrib/tensorrt/log/trt_logger.cc:38] DefaultLogger Network must have at least one output
2019-07-19 03:47:38.553760: E tensorflow/contrib/tensorrt/log/trt_logger.cc:38] DefaultLogger Tensor (Unnamed ITensor* 3) is uniformly zero; network calibration failed
2019-07-19 03:47:38.578654: E tensorflow/contrib/tensorrt/convert/convert_graph.cc:220] Calibration table is empty
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/workspace/code/rnnquant/trt_convert.py", line 99, in <module>
    main()
  File "/workspace/code/rnnquant/trt_convert.py", line 95, in main
    int8quant()
  File "/workspace/code/rnnquant/trt_convert.py", line 89, in int8quant
    int8_graph = trt.calib_graph_to_infer_graph(int8graph)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tensorrt/python/trt_convert.py", line 416, in calib_graph_to_infer_graph
    int(msg[0]))
tensorflow.python.framework.errors_impl.UnknownError: Calibration table is missing. This shouldn't have happened!
```
|