Dataset columns: repository (string, 156 distinct values) · issue title (string, 1–1.01k chars) · labels (string, 8 distinct values) · body (string, 1–270k chars)
tensorflow/tensorflow
The TensorFlow framework shared object contains public hwloc symbols, which causes unexpected hwloc behavior when it is linked dynamically together with some other library
Bug
**System information**
- Have I written custom code: no
- OS platform and distribution: Ubuntu 16.04, Fedora 30, and possibly any other Linux
- TensorFlow installed from: C binary package (libtensorflow-cpu-linux-x86_64-1.14.0.tar.gz)
- TensorFlow version (use command below): 1.14.0
- Python version: not applicable

**Describe the current behavior**
When I run `nm libtensorflow_framework.so.1.14.0 | grep " T " | grep hwloc` on Linux, I get the following output:

```
00000000012d9fc0 T hwloc_bitmap_copy
00000000012d9fb0 T hwloc_bitmap_dup
00000000012da6e0 T hwloc_bitmap_fill
00000000012dbc20 T hwloc_bitmap_first
00000000012dbc90 T hwloc_bitmap_first_unset
00000000012d9ea0 T hwloc_bitmap_free
00000000012e78a0 T hwloc_topology_insert_group_object
00000000012e7a50 T hwloc_topology_insert_misc_object
00000000012e7110 T hwloc_topology_is_thissystem
00000000012e7b00 T hwloc_topology_load
00000000012e5bf0 T hwloc_topology_reconnect
00000000012e8330 T hwloc_topology_restrict
```

and many other publicly available hwloc interfaces that are included in libtensorflow_framework.so as public symbols. So when some application links with TensorFlow and with some other library that dynamically links with hwloc, part of the hwloc interface calls resolve into libhwloc.so, but other calls resolve into libtensorflow_framework.so (shared-object dependency structure: tensorflow -> hwloc). For example, when the library (some_lib in the scheme) calls `hwloc_topology_init`, it goes to the hwloc copy located inside libtensorflow_framework.so, but when it calls the `hwloc_topology_get_complete_cpuset` method, it goes to the shared hwloc library. This means that we have two instances of the hwloc library, divided between two shared objects. This situation causes unexpected hwloc behavior inside other libraries (e.g. the some_lib library in the scheme).

**Describe the expected behavior**
The expected behavior is to not deliver any hwloc interfaces as part of the public interface of the TensorFlow shared object.

**Standalone code to reproduce the issue**
To reproduce this issue we need at least two source files: one for the application and one for the library.

some_lib.cpp:

```cpp
#include <hwloc.h>
#include <cstdio>

void print_bitmap(hwloc_const_bitmap_t bitmap) {
    if (bitmap == NULL) {
        printf("Mask is NULL\n");
        return;
    }
    char* buf = new char[256];
    hwloc_bitmap_snprintf(buf, 256, bitmap);
    printf("Mask is %s\n", buf);
    delete[] buf;
}

extern "C" void print_topology_info() {
    hwloc_topology_t topology;
    hwloc_cpuset_t process_cpu_affinity_mask;
    hwloc_nodeset_t process_node_affinity_mask;

    // Parse topology
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    // Get process affinity masks
    process_cpu_affinity_mask = hwloc_bitmap_alloc();
    process_node_affinity_mask = hwloc_bitmap_alloc();
    hwloc_get_cpubind(topology, process_cpu_affinity_mask, 0);
    hwloc_cpuset_to_nodeset(topology, process_cpu_affinity_mask,
                            process_node_affinity_mask);

    print_bitmap(process_cpu_affinity_mask);
    print_bitmap(process_node_affinity_mask);

    printf("Complete:\n");
    print_bitmap(hwloc_topology_get_complete_cpuset(topology));
    print_bitmap(hwloc_topology_get_complete_nodeset(topology));

    printf("Allowed:\n");
    print_bitmap(hwloc_topology_get_allowed_cpuset(topology));
    print_bitmap(hwloc_topology_get_allowed_nodeset(topology));
}
```

application.cpp:

```cpp
extern "C" void print_topology_info();

int main() {
    print_topology_info();
}
```

Build steps (another compiler is also applicable):

```
g++ -c -fPIC some_lib.cpp
g++ -shared -o some_lib.so some_lib.o -lhwloc
g++ application.cpp -o some_app -ltensorflow some_lib.so
```

Output:

```
Mask is 0x000000ff
Mask is 0x0
Complete:
Mask is NULL
Mask is 0x00000001
Allowed:
Mask is 0x000000ff
Mask is 0x00000001
```

Expected output:

```
Mask is 0x000000ff
Mask is 0xf...f
Complete:
Mask is 0x000000ff
Mask is 0xf...f
Allowed:
Mask is 0x000000ff
Mask is 0xf...f
```

**Other info / logs**
Proof of the different hwloc interface locations. Here is some information obtained via gdb during application debugging. Shared object locations:

```
0x00007fffecdf1430 - 0x00007fffece1869a  tbbbind (hwloc)
0x00007fffed750e80 - 0x00007fffee5971b4  libtensorflow_framework.so.1 (static hwloc)
```

The hwloc method addresses can easily be correlated with the corresponding shared object:

```
0x7fffecdf66a0  tbbbind (hwloc)
0x7fffecdf80c0  tbbbind (hwloc)
0x7fffecdfe2a0  tbbbind (hwloc)
0x7fffeef4d6c8  libtensorflow_framework.so.1 (static hwloc)
0x7fffeef4d644  libtensorflow_framework.so.1 (static hwloc)
0x7fffeef4d686  libtensorflow_framework.so.1 (static hwloc)
0x7fffeef4d665  libtensorflow_framework.so.1 (static hwloc)
0x7fffeef4d6a7  libtensorflow_framework.so.1 (static hwloc)
```

**Workaround**
Add the `-lhwloc` flag to the application compilation string to directly link the hwloc shared object. After this change, all hwloc interfaces will be located within the hwloc shared object.
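One conventional fix for this class of problem (not part of the original report) is to link the shared object with a linker version script that keeps only the intended public prefixes global and localizes everything else; the `TF_*`/`TFE_*` prefixes below are an assumption about the intended C API surface, and the file name is hypothetical:

```
/* hypothetical tf_exported_symbols.lds, passed via
 * -Wl,--version-script=tf_exported_symbols.lds            */
VERS_1.0 {
  global:
    TF_*;       /* assumed public C API prefix */
    TFE_*;      /* assumed eager C API prefix  */
  local:
    *;          /* everything else, including hwloc_*, becomes hidden */
};
```

With hwloc symbols localized, the dynamic linker can no longer interpose them, so all hwloc calls from other libraries resolve into libhwloc.so.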
tensorflow/tensorflow
Segmentation fault in ctc_decode function
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.7.6
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

**Describe the current behavior**
A segmentation fault occurs in `tf.keras.backend.ctc_decode` when passing a large value for `top_paths` and setting `greedy` to `False`. If the value is far enough out of the int32 range, the function handles the error properly by throwing an exception in Python, but when it is around the boundary of the range, the function produces a segfault.

**Describe the expected behavior**
No segfault.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

y_pred = [[[4, 7, 3], [2, 2, 8], [4, 9, 1], [0, 3, 0]],
          [[3, 9, 3], [8, 1, 4], [1, 0, 1], [6, 3, 8]],
          [[4, 0, 4], [5, 3, 9], [2, 2, 2], [2, 4, 5]]]
input_length = [3, 2, 2]
top_paths = 2147483697
tf.keras.backend.ctc_decode(y_pred, input_length, False, 100, top_paths)
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
Segmentation fault (core dumped)
```
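The "around the boundary" observation can be made concrete with a quick stdlib-only check; the wraparound interpretation below is my own illustration of why a naive int32 cast would misbehave here, not something confirmed from the TensorFlow source:

```python
# top_paths from the repro exceeds INT32_MAX by only 50, so an unchecked
# cast to int32 wraps to a large negative count instead of raising.
INT32_MAX = 2**31 - 1                            # 2147483647

top_paths = 2147483697
overflow_by = top_paths - INT32_MAX              # 50
wrapped = (top_paths + 2**31) % 2**32 - 2**31    # value after int32 wraparound
```

A much larger value (say 2**40) fails TensorFlow's range check and raises cleanly; a value this close to the boundary is where the reported segfault appears.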
tensorflow/tensorflow
Could not load dynamic library 'cudart64_101.dll'
Bug
**System information**
- Windows 10, laptop ASUS GL553VW
- TensorFlow installed with pip: tensorflow-gpu 2.2.0
- Python 3.8.3
- CUDA version: 10.1 Update 2
- cuDNN: cudnn-10.1-windows10-x64-v7.6.5.32
- NVIDIA GTX 960M

I want to use TensorFlow with the GPU, but I keep receiving this error when I import TensorFlow: Could not load dynamic library 'cudart64_101.dll'.

The output of `pip list`:
absl-py 0.9.0, astunparse 1.6.3, cachetools 4.1.0, certifi 2020.6.20, chardet 3.0.4, gast 0.3.3, google-auth 1.18.0, google-auth-oauthlib 0.4.1, google-pasta 0.2.0, grpcio 1.30.0, h5py 2.10.0, idna 2.9, Keras-Preprocessing 1.1.2, Markdown 3.2.2, numpy 1.19.0, oauthlib 3.1.0, opt-einsum 3.2.1, pip 19.2.3, protobuf 3.12.2, pyasn1 0.4.8, pyasn1-modules 0.2.8, requests 2.24.0, requests-oauthlib 1.3.0, rsa 4.6, scipy 1.4.1, setuptools 41.2.0, six 1.15.0, tensorboard 2.2.2, tensorboard-plugin-wit 1.6.0.post3, tensorflow-gpu 2.2.0, tensorflow-gpu-estimator 2.2.0, termcolor 1.1.0, urllib3 1.25.9, Werkzeug 1.0.1, wheel 0.34.2, wrapt 1.12.1

The output of `pip debug --verbose`:

```
pip version: pip 19.2.3 from c:\users\ghassen\downloads\lisadetection\lisadetection\venv2\lib\site-packages\pip (python 3.8)
sys.version: 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)]
sys.executable: c:\users\ghassen\downloads\lisadetection\lisadetection\venv2\scripts\python.exe
sys.getdefaultencoding: utf-8
sys.getfilesystemencoding: utf-8
locale.getpreferredencoding: cp1252
sys.platform: win32
sys.implementation.name: cpython
WARNING: Config variable 'Py_DEBUG' is unset, Python ABI tag may be incorrect
Compatible tags: 15
  cp38-cp38-win_amd64
  cp38-none-win_amd64
  py3-none-win_amd64
  cp38-none-any
  cp3-none-any
  py38-none-any
  py3-none-any
  py37-none-any
  py36-none-any
  py35-none-any
  py34-none-any
  py33-none-any
  py32-none-any
  py31-none-any
  py30-none-any
```

For `import tensorflow as tf`, this is the output:

```
2020-06-24 14:50:11.230153: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-06-24 14:50:11.236957: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
```

And for `tf.test.is_gpu_available()`, the output is:

```
WARNING:tensorflow:From <stdin>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.config.list_physical_devices('GPU') instead.
2020-06-24 14:51:00.146205: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-06-24 14:51:00.171655: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x21868deee70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-24 14:51:00.182635: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-06-24 14:51:00.190956: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-06-24 14:51:01.031439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 960M computeCapability: 5.0
coreClock: 1.176GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 74.65GiB/s
2020-06-24 14:51:01.045344: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-06-24 14:51:01.051842: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_10.dll'; dlerror: cublas64_10.dll not found
2020-06-24 14:51:01.058515: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found
2020-06-24 14:51:01.065241: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found
2020-06-24 14:51:01.072266: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
2020-06-24 14:51:01.080353: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusparse64_10.dll'; dlerror: cusparse64_10.dll not found
2020-06-24 14:51:01.087954: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.dll not found
2020-06-24 14:51:01.095638: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1598] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2020-06-24 14:51:01.186835: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-24 14:51:01.194906: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-06-24 14:51:01.198498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-06-24 14:51:01.206694: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x21876e35be0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-24 14:51:01.214517: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
False
```
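On Windows, these DLLs are resolved via the `PATH` environment variable, so a small helper (my own, not part of the report) can show whether the runtime TensorFlow 2.2 expects — `cudart64_101.dll`, per the log above — is reachable at all:

```python
import os

def find_dll(name, search_path=None):
    """Return the directories on a PATH-style string that contain `name`.

    An empty result for "cudart64_101.dll" would explain the warning in the
    log: the CUDA 10.1 runtime is either not installed or not on PATH.
    """
    path = search_path if search_path is not None else os.environ.get("PATH", "")
    return [d for d in path.split(os.pathsep)
            if d and os.path.isfile(os.path.join(d, name))]
```

For example, `find_dll("cudart64_101.dll")` returning `[]` while tensorflow-gpu 2.2.0 is installed points to a missing or unlisted CUDA 10.1 `bin` directory rather than a TensorFlow bug.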
tensorflow/tensorflow
ValueError: Shapes (1, 107, 3) and (1, 107, 2) are incompatible
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab (Linux version 4.19.104)
- TensorFlow installed from (source or binary): the Colab system installed it
- TensorFlow version (use command below): tf 2.2.0
- Python version: 3.6

**Describe the current behavior**
I was doing some work on tensor-train decomposition. First, I represented a video as a tensor; the tensor dimension is 4, with shape (frames, width, height, channels), and the tensor shape is (107, 60, 80, 3). Second, I converted the tf tensor to a TT-tensor via the t3f library. Third, I wrote a Riemannian dimensionality-reduction function for the TT-tensor. In the end, I reduced the TT-tensor's dimension via my function, but the bug came out in that last step. What I want to say is that the bug didn't come out when I represented another datum as a tensor with shape (107, 60, 80, 2); in that situation the code ran very well.

The exit:

```
ValueError                                Traceback (most recent call last)
<ipython-input-34-e5f2500494ee> in <module>()
      1 logs = []
      2 for i in range(1000):
----> 3     f_step()
      4     if i % 10 == 0:
      5         print(f)

4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py in assert_is_compatible_with(self, other)
   1115
   1116     if not self.is_compatible_with(other):
-> 1117       raise ValueError("Shapes %s and %s are incompatible" % (self, other))
   1118
   1119   def most_specific_compatible_shape(self, other):

ValueError: Shapes (1, 107, 3) and (1, 107, 2) are incompatible
```

**Describe the expected behavior**
I expect the video converted to a TT-tensor to also have its dimension reduced by the function mentioned above without any error, just like the (107, 60, 80, 2)-shaped datum.

**Standalone code to reproduce the issue**
Here is the bug code (Colab address): link. Here is the video datum address, whose shape is (107, 60, 80, 3): link. Here is the other datum address, whose shape is (107, 60, 80, 2): link.
tensorflow/tensorflow
Error when computing gradients of a reloaded SavedModel containing an if clause
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): only the snippet below
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.4
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.2.0 / 2.3.0-dev20200622
- Python version: 3.7.6
- CUDA/cuDNN version: CUDA 10.1.243, cuDNN 7.6.4
- GPU model and memory: Tesla K80, 12 GB

**Describe the current behavior**
Computing the gradient of a reloaded SavedModel containing an if clause raises an error if I save a model containing an if clause as follows:

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
The link to the stable version of this codelab is broken
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

**URL(s) with the issue:**
Please provide a link to the documentation entry, for example:

**Description of issue (what needs changing):**

**Clear description**
For example, why should someone use this method? How is it useful?

**Correct links**
Is the link to the source code correct?

**Parameters defined**
Are all parameters defined and formatted correctly?

**Returns defined**
Are return values defined?

**Raises listed and defined**
Are the errors defined? For example,

**Usage example**
Is there a usage example? See the API guide on how to write testable usage examples.

**Request visuals, if applicable**
Are there currently visuals? If not, will it clarify the content?

**Submit a pull request?**
Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide and the docs style guide.
tensorflow/tensorflow
TFLite quantized hard-swish operator precision issue
Bug
Hi TFLite team,

Git hash: b8a267a9fe95dea518cb04c726031e96874d26a0, dated at Jun 9, 2020.

**Description:** When I ran quantized MobileNet V3, I found an infrequent precision issue in the quantized hard-swish operator. Let me directly walk through the example to start. Note: I have already stripped off every other operator in the MobileNet V3 model and resized tensors to 1 to make it clear and easy to run. The text below is the TFLite MLIR:

```
module attributes {tfl.description = "TOCO Converted.", tfl.schema_version = 3 : i32} {
  func @main(%arg0: tensor<1x!quant.uniform<...>>) -> tensor<1x!quant.uniform<...>>
      attributes {tf.entry_function = {inputs = "input",
                  outputs = "MobilenetV3/expanded_conv_6/expand/hard_swish/mul_1"}} {
    %0 = "tfl.hard_swish"(%arg0) : (tensor<1x!quant.uniform<...>>) -> tensor<1x240x!quant.uniform<...>>
    return %0 : tensor<1x!quant.uniform<...>>
  }
}
```

A mismatch happens when the input element is 128, and here's how I calculate the expected number:

```
real input value      = (quantized input value - input zp) * input scale
                      = (128 - 105) * 0.13308307528495789 = 3.0609113
real output value     = x * relu6(x + 3) / 6 = x if x >= 3, so = 3.0609113
quantized output value = round_to_nearest(real output value / output scale) + output zp
                      = round(3.0609113 / 0.075572937726974487) + 5
                      = round(40.502742) + 5 = 41 + 5 = 46
```

while TFLite gives me an answer of 45 in this case, which I don't think is correct. And that's because when rescaling the output back to quantized space, the number is too close to 40.5, and the TFLite reference implementation might lose precision somewhere, so it gets below 40.5 in this case and leads to the output 45 eventually. BTW, all other quantized input numbers run fine.

When I looked at the reference implementation under tflite/kernels/internal/reference/reference_ops.h, it seems like it's using fixed point (S0.15) to represent the relu-ish value, even when it's 1.0, which leads to an error factor of 32767/32768 for each fixed-point int16 multiplier. If I have to guess, that could be a source of error. Another possibility could be float vs double: I noticed that in HardSwishPrepare it stores input and output scales as float type instead of double.

Thanks in advance!
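The float reference calculation above can be reproduced directly; everything below is taken from the numbers in the report (Python's `round` agrees with round-to-nearest here because the value is not exactly at .5):

```python
def hard_swish(x):
    # Float reference: x * relu6(x + 3) / 6
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

IN_SCALE, IN_ZP = 0.13308307528495789, 105   # input quantization params
OUT_SCALE, OUT_ZP = 0.075572937726974487, 5  # output quantization params

q_in = 128
real_in = (q_in - IN_ZP) * IN_SCALE           # ~3.06091, the problematic input
real_out = hard_swish(real_in)                # x >= 3, so relu6 saturates: real_out == real_in
q_out = round(real_out / OUT_SCALE) + OUT_ZP  # round(40.5027...) + 5 -> 46
```

The pre-rounding value lands only ~0.003 above 40.5, which is why a small fixed-point error factor like 32767/32768 in the kernel is enough to pull it below 40.5 and produce 45.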
tensorflow/tensorflow
Kronecker product
Bug
Doesn't TensorFlow have a function for the Kronecker product, like np.kron in NumPy?
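For reference, the semantics being asked for can be sketched in a few lines of pure Python (my own illustration); the same block structure can reportedly be assembled in TensorFlow from reshapes and broadcasting, or via `tf.linalg.LinearOperatorKronecker`, though neither is a direct drop-in for `np.kron`:

```python
def kron(a, b):
    """Kronecker product of two 2-D matrices given as nested lists.

    Mirrors np.kron for 2-D inputs: out[i*p + k][j*q + l] = a[i][j] * b[k][l].
    """
    m, n = len(a), len(a[0])
    p, q = len(b), len(b[0])
    out = [[0] * (n * q) for _ in range(m * p)]
    for i in range(m):
        for j in range(n):
            for k in range(p):
                for l in range(q):
                    out[i * p + k][j * q + l] = a[i][j] * b[k][l]
    return out
```

For example, `kron([[1, 2], [3, 4]], [[0, 1], [1, 0]])` produces the 4x4 block matrix with each `a[i][j]` scaling a copy of `b`.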
tensorflow/tensorflow
Incorrect documentation for model_from_config
Bug
**URL(s) with the issue:**

**Description of issue (what needs changing):** The tf.keras.models.model_from_config function can create only layers, not the complete model as is described in the documentation. The correct usage is mentioned at (but not described in) the main documentation for the tf.keras.Model class:

> Calling `config = model.get_config()` will return a Python dict containing the configuration of the model. The same model can then be reconstructed via `Sequential.from_config(config)` (for a Sequential model) or `Model.from_config(config)` (for a Functional API model).

**Clear description:** The behavior of tf.keras.models.model_from_config does not correspond to the documentation. Moreover, it is even more confusing when compared with similar methods like tf.keras.models.model_from_json / model.to_json and tf.keras.models.model_from_yaml / model.to_yaml, while model_from_config fails:

```
tf.keras.models.model_from_config(model.get_config())

KeyError                                  Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 tf.keras.models.model_from_config(model.get_config())

anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects)
     53       Sequential.from_config(config)
     54   from tensorflow.python.keras.layers import deserialize  # pylint: disable=g-import-not-at-top
---> 55   return deserialize(config, custom_objects=custom_objects)
     56
     57

anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
     99       'SequenceFeatures': sfc.SequenceFeatures
    100
--> 101   layer_class_name = config['class_name']
    102   if layer_class_name in deserialization_table:
    103     config['class_name'] = deserialization_table[layer_class_name]

KeyError: 'class_name'
```

Correct usage: `tf.keras.Model.from_config(model.get_config())`
tensorflow/tensorflow
Invalid test case in test_zero_padding_2d in convolutional_test.py
Bug
**System information**
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04

**Describe the current behavior**
At L654 they try to test different data-format configurations (i.e. channels_first / channels_last), but for the code in the above-mentioned link this is meaningless: it always takes the last input. We should construct the inputs separately under the different conditions. Even though it doesn't cause any error, it should be fixed.
tensorflow/tensorflow
Segmentation fault: interpreter->SetNumThreads (C++)
Bug
**System information**
- Hardware: Freescale i.MX6 Quad/DualLite processor (ARMv7 processor rev 10, v7l)
- OS platform and distribution: Yocto-built Linux distribution, kernel 4.9.4
- The TF Lite library was built with common options, using the default Makefile (C++ API)

**Describe the problem**
Cross-compiling the minimal TF Lite interpreter C++ code, to which I added the lines to set the number of threads to use, results in a segmentation error when running the program. The relevant code:

```cpp
int main(int argc, char* argv[]) {
  if (argc != 2) {
    fprintf(stderr, "minimal <tflite model>\n");
    return 1;
  }
  const char* filename = argv[1];

  int startt, endd;
  startt = clock();

  // Load model
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile(filename);
  TFLITE_MINIMAL_CHECK(model != nullptr);

  // Build the interpreter
  tflite::ops::builtin::BuiltinOpResolver resolver;
  InterpreterBuilder builder(*model, resolver);
  std::unique_ptr<Interpreter> interpreter;

  std::cout << "Allocating one number of threads" << std::endl;
  int numthreads = 1;
  interpreter->SetNumThreads(numthreads);
  std::cout << "Threads allocated" << std::endl;

  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);
  endd = clock();

  // Allocate tensor buffers.
  TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
  printf("=== Pre-invoke Interpreter State ===\n");
  tflite::PrintInterpreterState(interpreter.get());

  double time_taken = double(endd - startt) / double(CLOCKS_PER_SEC);
  std::cout << "4 threads should be selected" << std::endl;
  std::cout << "Time taken to load in the model in tflite using cpp api: "
            << std::fixed << time_taken << std::setprecision(5);
  std::cout << " sec " << std::endl;
```

The model I've been using is a custom ENet I trained in TensorFlow and converted to TFLite using both TFLite built-in ops and TF ops, though I believe the problem has nothing to do with the model itself.

**Please provide the exact sequence of commands/steps that you executed when you ran into the problem**
Running gdb and checking the stacktrace results in the following image.

I also cross-compiled a standard C++ program using multiple threads, which ran just fine, so that excludes problems concerning firmware. For the record, the multithreaded program I got running is the following:

```cpp
#include <iostream>
#include <cstdlib>
#include <pthread.h>

using namespace std;

#define NUM_THREADS 4

void* PrintHello(void* threadid) {
  long tid;
  tid = (long)threadid;
  cout << "Hello World! Thread ID, " << tid << endl;
  pthread_exit(NULL);
}

int main() {
  pthread_t threads[NUM_THREADS];
  int rc;
  int i;
  for (i = 0; i < NUM_THREADS; i++) {
    cout << "main() : creating thread, " << i << endl;
    rc = pthread_create(&threads[i], NULL, PrintHello, (void*)(long)i);
    if (rc) {
      cout << "Error: unable to create thread, " << rc << endl;
      exit(1);
    }
  }
  pthread_exit(NULL);
}
```

How do I go about this problem? What is causing this?
tensorflow/tensorflow
Writing a custom py_function inside a custom layer: gradient backpropagation
Bug
Hi, I'm writing a custom py_function inside a custom layer, but I get errors during gradient backpropagation. I'm using TensorFlow 1.14 with eager execution enabled. While debugging I reduced the complexity of the code, and now I'm just trying a 1-D DCT of the input; I still get the error. Here is my code:

```python
from scipy.special import expit
from scipy.fftpack import dct, idct
import time

def fdct(a):
    return dct(a.T, norm='ortho').T

def fdct_top(inp):
    return tf.convert_to_tensor(np.array([fdct(i) for i in list(inp.numpy())]))

def fidct(a):
    return idct(a.T, norm='ortho').T

def fidct_top(inp):
    return tf.convert_to_tensor(np.array([fidct(i) for i in list(inp.numpy())]))

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, y_mat):
        super(MyDenseLayer, self).__init__()
        self.w_mat = y_mat

    def build(self, input_shape):
        self.w = tf.Variable(initial_value=self.w_mat, trainable=True)

    def call(self, input):
        dct_i = tf.py_function(fdct_top, [input], 'float32')
        dct_i.set_shape(tf.TensorShape([None, 10]))
        dct_q = tf.multiply(dct_i, 1 / self.w)
        print(dct_q)
        z_floor_not_differentiable = tf.floor(dct_q)
        z_floor_differentiable = dct_q - tf.stop_gradient(dct_q - z_floor_not_differentiable)
        print(z_floor_differentiable)
        dct_deg = tf.multiply(z_floor_differentiable, tf.cast(self.w, 'float32'))
        dct_deg = tf.py_function(fidct_top, [dct_deg], 'float32')
        dct_deg.set_shape(tf.TensorShape([None, 10]))
        return dct_deg

img_inputs1 = keras.Input(shape=(10,))
q_mat = 5 * np.ones(10).astype(np.float32)
mul_layer = MyDenseLayer(q_mat)
output = mul_layer(img_inputs1)
model = keras.Model(inputs=img_inputs1, outputs=output)
model.summary()

x = 10 * np.random.rand(100, 10).astype(np.float32)
y = x
optimizer = tf.keras.optimizers.RMSprop(0.1)
model.compile(loss=tf.keras.losses.MSE, optimizer=optimizer)
history = model.fit(x, y, epochs=50)
```

```
InvalidArgumentError: Input to reshape is a tensor with 10 values, but the requested shape has 320
[[node training_1/RMSprop/gradients/mul_3_grad/Reshape]] [Op: StatefulPartitionedCall]
```
tensorflow/tensorflow
What exactly does tf.signal.fft compute?
Bug
**URL(s) with the issue:**
Please provide a link to the documentation entry, for example:

**Description of issue (what needs changing):** The description is simply "Fast Fourier transform", which doesn't fully specify what the exact function computed is. Is there a normalization term of 1/sqrt(N) or 1/N, or is the normalization constant entirely in the inverse FFT (which has equally underspecified documentation)?

**Clear description:** A mathematical formula that specifies what tf.signal.fft implements would be nice. Likewise for the other FFT methods, the inverse FFT methods, and the STFT methods.
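For concreteness, here is a pure-Python sketch of the convention in question, assuming tf.signal.fft follows the common NumPy/FFTW default (no factor on the forward transform, the full 1/N on the inverse) — which is exactly the kind of statement the documentation should make explicit:

```python
import cmath

def dft(x):
    """Unnormalized forward DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT carrying the entire 1/N normalization."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]
```

Under this convention the DC bin of the forward transform is the plain sum of the input (no 1/sqrt(N) or 1/N), and `idft(dft(x))` recovers `x`; a one-line formula like the docstrings above would resolve the ambiguity in the docs.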
tensorflow/tensorflow
Looping over tf.range in a tf.function is slower than looping over range
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Colab, TensorFlow 2 (v2.2.0-0-g2b96f3662b 2.2.0)
- Tested on both CPU and GPU (GPU is much worse)

**Describe the current behavior**
When timing a simple tf.function that uses a loop, tf.range is much slower than using range, but tf.range is the one recommended in the docs. Moreover, it is said that loops over non-tensors will be unrolled during tracing, which does not happen: normal range is being traced as a loop.

```python
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def test(x):
    for i in tf.range(100):
        x = x + tf.constant(1.1)
    return x

%timeit test(tf.range(1000, dtype='float32'))
```

```
100 loops, best of 3: 2.12 ms per loop
10 loops, best of 3: 70.2 ms per loop   (on GPU)
```

while using range or np.arange takes about 300 microseconds on both CPU and GPU.

**Describe the expected behavior**
1. There is a documentation issue, in that it currently always recommends tf.range.
2. The documentation should specify when Python loops are not unrolled.
3. tf.range performance should be the same as range when used in a traced loop.

Also note that np.arange is faster than tf.range and comparable to range.
tensorflow/tensorflow
TF2 Keras DenseFeatures layer not referring to dataset fields by name
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): Colab default
- TensorFlow version (use command below): 2.2
- Python version: 3.7
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

**Describe the current behavior**
While using tf.keras.layers.DenseFeatures as an input layer, TF doesn't give any error in case the input dataset is missing a field with the required field name as given in DenseFeatures. However, this works absolutely fine when directly using tf.keras.Input (it gives the error `KeyError: fieldname`).

**Describe the expected behavior**
tf.keras.layers.DenseFeatures should work exactly like tf.keras.Input.

**Standalone code to reproduce the issue** (using a Colab notebook)

1. dummy_model_1 (correct behavior): the code below uses only tf.keras.Input as inputs to tf.keras.Model and gives an error if the dataset is missing a required field. Here 'review' was not passed as a field in the input dataset, so TF raises the error.

```python
def dummy_model_1(params):
    metrics = [keras.metrics.RootMeanSquaredError(name='rmse')]
    b = tf.keras.Input(dtype=tf.string, name='condition')
    c = tf.keras.Input(dtype=tf.string, name='review')
    model = tf.keras.Model(inputs=[b, c], outputs=[b, c])
    # set optimizer
    opt = tf.keras.optimizers.Adam(lr=params['lr'], beta_1=params['beta_1'],
                                   beta_2=params['beta_2'], epsilon=params['epsilon'])
    # compile model
    model.compile(loss='mean_squared_error', optimizer=opt, metrics=metrics)
    # print summary
    print(model.summary())
    return model
```

```
KeyError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1147 predict_function
        outputs = self.distribute_strategy.run(...)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1122 predict_step
        return self(x, training=False)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:927 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py:719 call
        convert_kwargs_to_constants=base_layer_utils.call_context().saving)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py:826 _run_internal_graph
        inputs = self._flatten_to_reference_inputs(inputs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py:926 _flatten_to_reference_inputs
        return [tensors[inp._keras_history.layer.name] for inp in ref_inputs]

    KeyError: 'review'
```

2. dummy_model_2 (wrong behavior): the code below uses tf.keras.layers.DenseFeatures and tf.keras.Input as inputs to tf.keras.Model and does not give an error if the dataset is missing required fields. In fact, I am not sure how it maps data from the dataset to the model inputs. Here 'review' and 'drugsName' were both passed as incorrect field names and don't exist in the input dataset; TF does not give any error. It should have given errors for both fields. It seems that in this case TF just picks up values from the dataset as per the index and not actually by the field name.

```python
def dummy_model_2(params):
    metrics = [keras.metrics.BinaryAccuracy(name='accuracy'),
               keras.metrics.RootMeanSquaredError(name='rmse')]
    a = tf.keras.layers.DenseFeatures(feature_columns=f)(
        {'drugsName': tf.keras.Input(name='drugsName', shape=(1,), dtype=tf.string)})
    b = tf.keras.Input(dtype=tf.string, name='condition')
    c = tf.keras.Input(dtype=tf.string, name='review')
    model = tf.keras.Model(
        inputs=[{'drugsName': tf.keras.Input(name='drugsName', shape=(1,), dtype=tf.string)}, b, c],
        outputs=[b, c])
    # set optimizer
    opt = tf.keras.optimizers.Adam(lr=params['lr'], beta_1=params['beta_1'],
                                   beta_2=params['beta_2'], epsilon=params['epsilon'])
    # compile model
    model.compile(loss='mean_squared_error', optimizer=opt, metrics=metrics)
    # print summary
    print(model.summary())
    return model
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Regression on on_train_batch_begin callback with no batch number and size
Bug
> Yep, this should work in TF 2.1 as well. Here's a full example (had to fix the metric code a bit):
>
> ```python
> import tensorflow as tf
>
> class Count(tf.keras.metrics.Metric):
>     def __init__(self, name=None, dtype=None, **kwargs):
>         super(Count, self).__init__(name, dtype, **kwargs)
>         self.count = tf.Variable(0)
>
>     def update_state(self, y_true, y_pred, sample_weight=None):
>         first_tensor = tf.nest.flatten(y_true)[0]
>         batch_size = tf.shape(first_tensor)[0]
>         self.count.assign_add(batch_size)
>
>     def result(self):
>         return tf.identity(self.count)
>
> class PrintInfo(tf.keras.callbacks.Callback):
>     def on_train_batch_end(self, batch, logs):
>         print('Batch number: {}'.format(batch))
>         print('Samples seen this epoch: {}'.format(logs['counter']))
>
> model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
> model.compile(optimizer='sgd', loss='mse', metrics=[Count(name='counter')])
> x, y = tf.ones((10, 10)), tf.ones((10, 1))
> model.fit(x, y, batch_size=2, callbacks=[PrintInfo()], verbose=2)
> ```

Originally posted by @omalleyt12 in issuecomment-630325421

I would like to use the same method but with on_train_batch_begin, and it doesn't work: the logs actually remain empty, even after using the new metric. How can I use the batch size in a callback with on_train_batch_begin?
tensorflow/tensorflow
`conv1d_transpose` documentation inconsistent with code and unclear
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

**URL(s) with the issue:** Please provide a link to the documentation entry.

**Description of issue (what needs changing):** The documentation claims that the padding options `'valid'` and `'same'` are supported, but following the code path to `deconv_output_length` (at line 140, see #L140), there is an option for `'full'` as well. Additionally, the equation provided for calculating the output shape merely says `padding` for a variable which is represented as a string in the API. This makes for a guessing game of how to achieve the desired output shape.

- **Clear description:** For example, why should someone use this method? How is it useful?
- **Correct links** (#L16): Is the link to the source code correct?
- **Parameters defined:** Are all parameters defined and formatted correctly?
- **Returns defined:** Are return values defined?
- **Raises listed and defined:** Are the errors defined? For example,
- **Usage example:** Is there a usage example? See the API guide on how to write testable usage examples.
- **Request visuals, if applicable:** Are there currently visuals? If not, will it clarify the content?
- **Submit a pull request?** Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
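For reference, the padding-to-output-length relationship can be stated in plain Python (a sketch of the usual transposed-convolution sizing rules, with the `'full'` branch being the undocumented one the issue refers to; the exact TF behavior should be checked against `deconv_output_length` in the source):

```python
def deconv_output_length(length, filter_size, padding, stride):
    """Output length of a 1-D transposed convolution (sketch, no output_padding)."""
    if padding == 'valid':
        return length * stride + max(filter_size - stride, 0)
    if padding == 'same':
        return length * stride
    if padding == 'full':
        return length * stride - (stride + filter_size - 2)
    raise ValueError("unknown padding: {!r}".format(padding))

print(deconv_output_length(10, filter_size=3, padding='same', stride=2))   # 20
print(deconv_output_length(10, filter_size=3, padding='valid', stride=2))  # 21
```

Spelling the three branches out like this is exactly the clarification the docs could carry instead of the bare word `padding` in the shape formula.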
tensorflow/tensorflow
Segmentation fault when running TFLite's benchmark_model
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution: Darwin Kernel Version 19.5.0 (for model conversion); Linux 4.9.140-tegra (for running the benchmark tool)
- Mobile device: N/A
- TensorFlow installed from (source or binary): installed the TFLite benchmark_model tool using the instructions given here
- TensorFlow version: N/A
- Python version: N/A
- Bazel version / GCC version / CUDA/cuDNN version / GPU model and memory: N/A

**Describe the current behavior**
I converted the emotion_ferplus ONNX model to TF using onnx-tensorflow, and then converted the TF model to TFLite using `tflite_convert`:

```shell
onnx-tf convert -i model.onnx -o emotion.pb
tflite_convert --enable_v1_converter \
  --graph_def_file=emotion.pb \
  --output_file=emotion.tflite \
  --output_format=TFLITE \
  --input_shapes=1,1,64,64 \
  --input_arrays=Input3 \
  --output_arrays=Plus692_Output_0 \
  --inference_type=FLOAT \
  --input_data_types=FLOAT
```

Then I ran the benchmark_model tool on the TFLite model I got from the previous step:

```
$ bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model --graph=emotion.tflite --num_runs=10 --num_threads=4
STARTING!
Duplicate flags: num_threads
Min num runs: [10]
Min runs duration (seconds): [1]
Max runs duration (seconds): [150]
Inter-run delay (seconds): [-1]
Num threads: [0]
Benchmark name: []
Output prefix: []
Min warmup runs: [1]
Min warmup runs duration (seconds): [0.5]
Graph: [emotion.tflite]
Input layers: []
Input shapes: []
Input value ranges: []
Input layer values files: []
Allow fp16: [0]
Require full delegation: [0]
Enable op profiling: [0]
Max profiling buffer entries: [1024]
CSV File to export profiling data to: []
Enable platform-wide tracing: [0]
#threads used for CPU inference: [0]
Max number of delegated partitions: [0]
Min nodes per partition: [0]
External delegate path: []
External delegate options: []
Use gpu: [0]
Use xnnpack: [0]
Loaded model emotion.tflite
The input model file size (MB): 35.0461
Initialized session in 1.645ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
Segmentation fault (core dumped)
```

**Describe the expected behavior**
The benchmark tool should run without crashing and give model performance metrics like the following:

```
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=10 first=101249 curr=46906 min=46491 max=101249 avg=52839.8 std=16163

Running benchmark for at least 10 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=22 first=46543 curr=46554 min=46473 max=49668 avg=46957.8 std=627

Inference timings in us: Init: 218485, First inference: 101249, Warmup (avg): 52839.8, Inference (avg): 46957.8
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the
actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=1.23047 overall=28.8906
```

**Standalone code to reproduce the issue:** Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

**Other info / logs:** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
TF 1.15: unexpected segmentation fault for seemingly reasonable and correct code
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution: CentOS 7, but libtensorflow.so was built in the official Docker image tensorflow/serving:1.15.0-devel-gpu
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.15.0
- Python version: 3
- Bazel version (if compiling from source): 0.24.1
- GCC/compiler version (if compiling from source): 5.4.0
- CUDA/cuDNN version: 10.0
- GPU model and memory: 1080Ti, 11178 MiB

**Describe the current behavior**
This is a by-product of TensorFlow Serving. In the official Docker image tensorflow/serving:1.15.0-devel-gpu we built TensorFlow Serving 1.15 based on TensorFlow 1.15. Everything was OK in TensorFlow Serving mode and it has been deployed in our production environment. However, we hit an inter-process bottleneck with the current TensorFlow Serving, so we considered using direct local GPU inference. Luckily, TensorFlow is already included in TensorFlow Serving (in the `tensorflow` folder). We built libtensorflow_cc using the following commands:

```shell
cd tensorflow
export TF_NEED_CUDA=1
export TF_NEED_S3=1
export TF_CUDA_COMPUTE_CAPABILITIES="3.5,5.2,6.1"
export TF_NEED_GCP=1
export TF_NEED_JEMALLOC=0
export TF_NEED_HDFS=0
export TF_NEED_OPENCL=0
export TF_NEED_MKL=0
export TF_NEED_VERBS=0
export TF_NEED_MPI=0
export TF_DOWNLOAD_MKL=0
export TF_NEED_GDR=0
export TF_ENABLE_XLA=0
export TF_CUDA_CLANG=0
export TF_NEED_OPENCL_SYCL=0
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc
export PYTHON_BIN_PATH=/usr/bin/python
export PYTHON_LIB_PATH=/usr/lib/python2.7/site-packages
export CC_OPT_FLAGS="-march=native"
bazel build -c opt --config=cuda --copt=-mavx --verbose_failures //tensorflow:libtensorflow_cc.so
```

The build was OK:

```
INFO: From Compiling external/snappy/snappy-sinksource.cc:
cc1plus: warning: command line option '-Wno-implicit-function-declaration' is valid for C/ObjC but not for C++
INFO: From Compiling external/snappy/snappy-stubs-internal.cc:
cc1plus: warning: command line option '-Wno-implicit-function-declaration' is valid for C/ObjC but not for C++
INFO: From Compiling external/snappy/snappy.cc:
cc1plus: warning: command line option '-Wno-implicit-function-declaration' is valid for C/ObjC but not for C++
Target //tensorflow:libtensorflow_cc.so up-to-date:
  bazel-bin/tensorflow/libtensorflow_cc.so
INFO: Elapsed time: 1258.352s, Critical Path: 455.73s
INFO: 7436 processes: 7436 local.
INFO: Build completed successfully, 11163 total actions
```

Then I imported the following files into my project: libtensorflow_cc.so, libtensorflow_framework.so.1, c_api.h, tf_attrtype.h, tf_datatype.h, tf_status.h, tf_tensor.h. My code calls only the C API of TensorFlow, which was tested against an old CPU TensorFlow 1.13 build, and the main code logic is OK. The main code is below:

```cpp
// init session
TF_Session* sess;
TF_SessionOptions* sess_opts = TF_NewSessionOptions();
if (proto_len > 0) {
  TF_SetConfig(sess_opts, (void*)options_proto, proto_len, status);
  if (TF_GetCode(status) != TF_OK) {
    TF_DeleteSessionOptions(sess_opts);
    return NULL;
  }
}
sess = TF_NewSession(graph, sess_opts, status);

// session infer; most crashes occur here
TF_SessionRun(...);
```

**Special note:** due to environment and other dependency issues, my project has to be built on a local GPU CentOS box whose default glibc is old (2.17). In order to exclude differences in GCC and glibc, I built my project using the same GCC 5.4.0 and the same glibc 2.23 as tensorflow/serving:1.15.0-devel-gpu. Here are the changes to CMakeLists.txt:

```cmake
set(THIRD_LINK ${THIRD_LINK}
    -Wl,-rpath=/myf12/code/glibc-2.23-install/lib
    -Wl,--dynamic-linker=/myf12/code/glibc-2.23-install/lib/ld-2.23.so
    -lrt)
list(INSERT THIRD_LINK 0 -L/myf12/code/glibc-2.23-install/lib)
```

where `THIRD_LINK` is used in `target_link_libraries(xxx ${THIRD_LINK})`. The build was OK, and here is the ldd info:

```
$ ldd bin/debug/fst
linux-vdso.so.1 (0x00007ffc51718000)
libdl.so.2 => /myf12/code/glibc-2.23-install/lib/libdl.so.2 (0x00007f2c4f224000)
libtensorflow_cc.v1.15.avx.so => /myf12/code/asr_kernel/third/tensorflow/libtensorflow_cc.v1.15.avx.so (0x00007f2c11963000)
librt.so.1 => /myf12/code/glibc-2.23-install/lib/librt.so.1 (0x00007f2c1175b000)
libjemalloc.so.2 => /mnt/lustre/cm_share/global/src/misc/jemalloc-5.2.1/lib/libjemalloc.so.2 (0x00007f2c112cc000)
libpthread.so.0 => /myf12/code/glibc-2.23-install/lib/libpthread.so.0 (0x00007f2c110af000)
libstdc++.so.6 => /mnt/lustre/cm_share/global/src/dev/gcc-5.4.0/lib64/libstdc++.so.6 (0x00007f2c10d34000)
libm.so.6 => /myf12/code/glibc-2.23-install/lib/libm.so.6 (0x00007f2c10a2e000)
libgcc_s.so.1 => /mnt/lustre/cm_share/global/src/dev/gcc-5.4.0/lib64/libgcc_s.so.1 (0x00007f2c10817000)
libc.so.6 => /myf12/code/glibc-2.23-install/lib/libc.so.6 (0x00007f2c10476000)
/myf12/code/glibc-2.23-install/lib/ld-2.23.so => /lib64/ld-linux-x86-64.so.2 (0x00007f2c4f42a000)
libcusparse.so.10.0 => /mnt/lustre/cm_share/global/src/dev/cuda-10.0/lib64/libcusparse.so.10.0 (0x00007f2c0ca0c000)
libcusolver.so.10.0 => /mnt/lustre/cm_share/global/src/dev/cuda-10.0/lib64/libcusolver.so.10.0 (0x00007f2c04325000)
libnvinfer.so.5 => /mnt/lustre/cm_share/global/src/machinelearning/tensorrt/TensorRT-5.0.2.6/lib/libnvinfer.so.5 (0x00007f2bfcec8000)
libnvinfer_plugin.so.5 => /mnt/lustre/cm_share/global/src/machinelearning/tensorrt/TensorRT-5.0.2.6/lib/libnvinfer_plugin.so.5 (0x00007f2bfc995000)
libcublas.so.10.0 => /mnt/lustre/cm_share/global/src/dev/cuda-10.0/lib64/libcublas.so.10.0 (0x00007f2bf83ff000)
libcudnn.so.7 => /mnt/lustre/cm_share/global/src/dev/cudnn-7.5.1/lib64/libcudnn.so.7 (0x00007f2be2de0000)
libcufft.so.10.0 => /mnt/lustre/cm_share/global/src/dev/cuda-10.0/lib64/libcufft.so.10.0 (0x00007f2bdc92b000)
libcurand.so.10.0 => /mnt/lustre/cm_share/global/src/dev/cuda-10.0/lib64/libcurand.so.10.0 (0x00007f2bd87c4000)
libcudart.so.10.0 => /mnt/lustre/cm_share/global/src/dev/cuda-10.0/lib64/libcudart.so.10.0 (0x00007f2bd854a000)
libgomp.so.1 => /mnt/lustre/cm_share/global/src/dev/gcc-5.4.0/lib64/libgomp.so.1 (0x00007f2bd8327000)
libnvToolsExt.so.1 => /mnt/lustre/cm_share/global/src/dev/cuda-10.0/lib64/libnvToolsExt.so.1 (0x00007f2bd811e000)
```

The program crashes due to a segmentation fault, as below:

```
2020-06-22 12:03:56.005532: I external/org_tensorflow/tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-06-22 12:03:56.007051: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-06-22 12:03:56.650162: I .../gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-22 12:03:56.650296: I .../gpu_device.cc:1186]      0
2020-06-22 12:03:56.650307: I .../gpu_device.cc:1199] 0:   N
2020-06-22 12:03:56.652890: F external/org_tensorflow/tensorflow/core/common_runtime/device.cc:28] Check failed: DeviceNameUtils::ParseFullName(name, &parsed_name_) Invalid device name
```

The core dump is:

```
#0  0x00007f0482651298 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x00007f048265271a in __GI_abort () at abort.c:89
#2  0x00007f048e307a34 in tensorflow::internal::LogMessageFatal::~LogMessageFatal() from /myf12/code/asr_kernel/third/tensorflow/libtensorflow_cc.v1.15.avx.so
#3  0x00007f048dc2921f in tensorflow::Device::Device(tensorflow::Env*, tensorflow::DeviceAttributes const&) from libtensorflow_cc.v1.15.avx.so
#4  0x00007f048dc62e64 in tensorflow::LocalDevice::LocalDevice(tensorflow::SessionOptions const&, tensorflow::DeviceAttributes const&) from libtensorflow_cc.v1.15.avx.so
#5  0x00007f048d4d9166 in tensorflow::BaseGPUDevice::BaseGPUDevice(tensorflow::SessionOptions const&, std::string const&, ...) from libtensorflow_cc.v1.15.avx.so
#6  0x00007f048d4e40c8 in tensorflow::GPUDeviceFactory::CreateGPUDevice(tensorflow::SessionOptions const&, std::string const&, ...) from libtensorflow_cc.v1.15.avx.so
#7  0x00007f048d4dcc9e in tensorflow::BaseGPUDeviceFactory::CreateGPUDevice(tensorflow::SessionOptions const&, std::string const&, ...) from libtensorflow_cc.v1.15.avx.so
#8  0x00007f048d4e1f2a in tensorflow::BaseGPUDeviceFactory::CreateDevices(tensorflow::SessionOptions const&, std::string const&, std::vector<...>*) from libtensorflow_cc.v1.15.avx.so
#9  0x00007f048dc299c9 in tensorflow::DeviceFactory::AddDevices(tensorflow::SessionOptions const&, std::string const&, std::vector<...>*) from libtensorflow_cc.v1.15.avx.so
#10 0x00007f048c709161 in tensorflow::DirectSessionFactory::NewSession(tensorflow::SessionOptions const&, tensorflow::Session**) from libtensorflow_cc.v1.15.avx.so
#11 0x00007f048dca7380 in tensorflow::NewSession(tensorflow::SessionOptions const&, tensorflow::Session**) from libtensorflow_cc.v1.15.avx.so
```

Then I patched the following line (gpu_device.cc#L1265), from

```cpp
const string device_name = strings::StrCat(name_prefix, "/device:GPU:", tf_gpu_id.value());
```

into

```cpp
const string device_name = name_prefix + "/device:GPU:" + std::to_string(tf_gpu_id.value());
```

The previous crash was avoided (see the new "Created TensorFlow device" log below), but on rerun it crashed again, in `InferStatically`:

```
2020-06-22 12:08:22.185006: I .../dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-06-22 12:08:22.186563: I .../gpu_device.cc:1768] Adding visible gpu devices: 0
2020-06-22 12:08:23.047640: I .../gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-22 12:08:23.047693: I .../gpu_device.cc:1186]      0
2020-06-22 12:08:23.047705: I .../gpu_device.cc:1199] 0:   N
2020-06-22 12:08:23.050308: I .../gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8164 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0, compute capability: 6.1)
Segmentation fault (core dumped)
```

```
#0  0x00007fa69ce6ce96 in std::_Hashtable<..., std::__detail::_Select1st, std::equal_to<...>, std::hash<...>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<...>>::_M_find_before_node(unsigned long, tensorflow::NodeDef const* const&, unsigned long) const [clone .isra.856] from /myf12/code/asr_kernel/third/tensorflow/libtensorflow_cc.v1.15.avx.so
#1  0x00007fa69ce76041 in tensorflow::grappler::VirtualScheduler::GetNodeStateOrCreateIt(tensorflow::NodeDef const*) from libtensorflow_cc.v1.15.avx.so
#2  0x00007fa69ce78d9b in tensorflow::grappler::VirtualScheduler::Init(tensorflow::grappler::GrapplerItem const*) from libtensorflow_cc.v1.15.avx.so
#3  0x00007fa69ce569df in tensorflow::grappler::AnalyticalCostEstimator::PredictCosts(tensorflow::GraphDef const&, tensorflow::RunMetadata*, tensorflow::grappler::Costs*) const from libtensorflow_cc.v1.15.avx.so
#4  0x00007fa69ce4f37f in tensorflow::grappler::VirtualCluster::Run(tensorflow::grappler::GrapplerItem const&, tensorflow::RunMetadata*) from libtensorflow_cc.v1.15.avx.so
#5  0x00007fa69cdddb5a in tensorflow::grappler::GraphMemory::InferStatically(std::unordered_map<...> const&) from libtensorflow_cc.v1.15.avx.so
#6  0x00007fa69cdce427 in tensorflow::grappler::(anonymous namespace)::IdentifySwappingCandidates(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem*, std::unordered_set<...>*, std::unordered_map<...>*) from libtensorflow_cc.v1.15.avx.so
#7  0x00007fa69cdd091d in tensorflow::grappler::(anonymous namespace)::SwappingPass(tensorflow::RewriterConfig::MemOptType, tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem*, std::unordered_set<...>*) from libtensorflow_cc.v1.15.avx.so
#8  0x00007fa69cdd3a97 in tensorflow::grappler::MemoryOptimizer::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) from libtensorflow_cc.v1.15.avx.so
#9  0x00007fa69cd01e5a in tensorflow::grappler::MetaOptimizer::RunOptimizer(...) from libtensorflow_cc.v1.15.avx.so
#10 0x00007fa69cd03431 in tensorflow::grappler::MetaOptimizer::OptimizeGraph(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) from libtensorflow_cc.v1.15.avx.so
#11 0x00007fa69cd04c64 in tensorflow::grappler::MetaOptimizer::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) from libtensorflow_cc.v1.15.avx.so
#12 0x00007fa69cd070af in tensorflow::grappler::RunMetaOptimizer(tensorflow::grappler::GrapplerItem const&, tensorflow::ConfigProto const&, tensorflow::DeviceBase*, tensorflow::grappler::Cluster*, tensorflow::GraphDef*) from libtensorflow_cc.v1.15.avx.so
#13 0x00007fa69ccf5fa8 in tensorflow::GraphExecutionState::OptimizeGraph(tensorflow::BuildGraphOptions const&, std::unique_ptr<...>*, std::unique_ptr<...>*) from libtensorflow_cc.v1.15.avx.so
#14 0x00007fa69ccf8361 in tensorflow::GraphExecutionState::BuildGraph(tensorflow::BuildGraphOptions const&, std::unique_ptr<...>*) from libtensorflow_cc.v1.15.avx.so
#15 0x00007fa69bd9e46a in tensorflow::DirectSession::CreateGraphs(tensorflow::BuildGraphOptions const&, std::unordered_map<...>*, std::unique_ptr<...>*, tensorflow::DirectSession::RunStateArgs*, absl::InlinedVector<...>*, absl::InlinedVector<...>*, long long*) from libtensorflow_cc.v1.15.avx.so
#16 0x00007fa69bd9f9de in tensorflow::DirectSession::CreateExecutors(tensorflow::CallableOptions const&, std::unique_ptr<...>*, std::unique_ptr<...>*, tensorflow::DirectSession::RunStateArgs*) from libtensorflow_cc.v1.15.avx.so
#17 0x00007fa69bda1d64 in tensorflow::DirectSession::GetOrCreateExecutors(absl::Span<...>, absl::Span<...>, absl::Span<...>, tensorflow::DirectSession::ExecutorsAndKeys**, tensorflow::DirectSession::RunStateArgs*) from libtensorflow_cc.v1.15.avx.so
#18 0x00007fa69bda3648 in tensorflow::DirectSession::Run(tensorflow::RunOptions const&, std::vector<...> const&, std::vector<...> const&, std::vector<...> const&, std::vector<...>*, tensorflow::RunMetadata*) from libtensorflow_cc.v1.15.avx.so
#19 0x00007fa696f70911 in TF_Run_Helper(tensorflow::Session*, char const*, TF_Buffer const*, std::vector<...> const&, std::vector<...> const&, TF_Tensor**, std::vector<...> const&, TF_Buffer*, TF_Status*) [clone .constprop.628] from libtensorflow_cc.v1.15.avx.so
#20 0x00007fa696f71179 in TF_SessionRun from libtensorflow_cc.v1.15.avx.so
```

Then I patched the following line (at line 2140), from

```cpp
fed_ports[tensor_id.node()].insert(tensor_id.index());
```

into

```cpp
if (fed_ports.find(tensor_id.node()) == fed_ports.end()) {
  std::unordered_set<int> ports({tensor_id.index()});
  fed_ports[tensor_id.node()] = ports;
} else {
  fed_ports[tensor_id.node()].insert(tensor_id.index());
}
```

This crash was fixed, but on rerun it crashed yet again, in `GetNodeStateOrCreateIt`:

```
#0  0x00007f348a9f4e46 in std::_Hashtable<...>::_M_find_before_node(unsigned long, tensorflow::NodeDef const* const&, unsigned long) const [clone .isra.856] from /myf12/code/asr_kernel/third/tensorflow/libtensorflow_cc.v1.15.avx.so
#1  0x00007f348a9fe1f5 in tensorflow::grappler::VirtualScheduler::GetNodeStateOrCreateIt(tensorflow::NodeDef const*) from libtensorflow_cc.v1.15.avx.so
#2  0x00007f348aa00e5d in tensorflow::grappler::VirtualScheduler::Init(tensorflow::grappler::GrapplerItem const*) from libtensorflow_cc.v1.15.avx.so
#3  0x00007f348a9de98f in tensorflow::grappler::AnalyticalCostEstimator::PredictCosts(tensorflow::GraphDef const&, tensorflow::RunMetadata*, tensorflow::grappler::Costs*) const from libtensorflow_cc.v1.15.avx.so
#4  0x00007f348a9d732f in tensorflow::grappler::VirtualCluster::Run(tensorflow::grappler::GrapplerItem const&, tensorflow::RunMetadata*) from libtensorflow_cc.v1.15.avx.so
#5  0x00007f348a965b0a in tensorflow::grappler::GraphMemory::InferStatically(std::unordered_map<...> const&) from libtensorflow_cc.v1.15.avx.so
```

After quickly patching two bugs, this was unexpected: it seems impossible that such obvious bugs would exist in the TensorFlow 1.15 release. I strongly suspect it is related to some build or runtime configuration. Can you help check it? Thanks.
tensorflow/tensorflow
Incorrect error message for valid input of `tf.math.segment_*`
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution: Linux Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.7.6
- Bazel version / GCC version / CUDA/cuDNN version / GPU model and memory: N/A

**Describe the current behavior**
When passing a 0-D tensor for `data` and an empty array for `segment_ids`, the `tf.math.segment_*` ops (e.g. `tf.math.segment_mean`) throw an error. On the command line it says:

```
InvalidArgumentError: segment_ids should be the same size as dimension 0 of input.
```

In Colab it crashes due to a core dump at:

```
F tensorflow/core/framework/tensor_shape.cc:435] Check failed: d < dims() (0 vs. 0)
```

For a 0-D `data` tensor, dimension 0 does not exist, so shouldn't an empty `segment_ids` array be a valid input? Also, the same input behaves differently in Colab and on the command line, which is strange behavior in itself.

**Describe the expected behavior**
If the input above is valid, I would expect no error to be thrown. If it is invalid, I would expect a more straightforward error message and an update to the documentation specifying that the `data` tensor must not be a scalar.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf
import numpy as np

tf.math.segment_mean(np.uint16(10), np.array([]).astype(np.int64), name=None)
```

**Other info / logs:** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
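For context, the segment ops reduce `data` along its first dimension according to `segment_ids`, which is why a 0-D `data` tensor has no dimension 0 to match against. The semantics can be sketched in plain NumPy (a restatement of the documented behavior, not TF's implementation):

```python
import numpy as np

def segment_mean(data, segment_ids):
    """Mean of data rows grouped by sorted, contiguous segment ids (sketch)."""
    data = np.asarray(data)
    segment_ids = np.asarray(segment_ids, dtype=np.int64)
    if data.ndim == 0:
        # A scalar has no dimension 0, so no segment_ids length can match it.
        raise ValueError("data must be at least 1-D")
    if len(segment_ids) != data.shape[0]:
        raise ValueError("segment_ids should be the same size as dimension 0 of input")
    n = int(segment_ids.max()) + 1 if len(segment_ids) else 0
    return np.array([data[segment_ids == i].mean(axis=0) for i in range(n)])

print(segment_mean([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1]))  # [1.5 3.5]
```

Under this reading, the scalar `data` case is a shape error in its own right, so a dedicated "data must be at least rank 1" message (rather than the dimension-0 comparison, or a `CHECK` crash) would match what the reporter asks for.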
tensorflow/tensorflow
Gradient with respect to input returns None using GradientTape
Bug
**System information**
- OS Platform and Distribution: Google Colab
- TensorFlow version: 2.2.0

**Describe the current behavior**
I am trying to calculate gradients of a KL-divergence scalar loss value w.r.t. the input. I implemented this with two approaches, and both return `None`:

1. Differentiating w.r.t. a tensor → output `None`
2. Differentiating w.r.t. a variable → output `None`

The following works with TensorFlow 1.x, which uses `tf.gradients`; however, `GradientTape` returns `None`, and it is very frustrating now as I have been trying to overcome this for the last 3 weeks. Please provide some support; the Colab link is provided below.

**Describe the expected behavior**
The gradient matrix should contain values; until then I can't work on building my model.

**Standalone code to reproduce the issue:** updated link

**Other info / logs:** The code below was taken from this source. I also followed a similar issue at this link, but it doesn't help me solve my problem. Any help is highly appreciated. Thanks for your time.
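A common cause worth ruling out: `tf.GradientTape` records operations only on trainable variables and explicitly watched tensors, so gradients w.r.t. a plain input tensor come back `None` unless `tape.watch(x)` is called before the forward pass. The rule can be sketched with a toy tape in plain Python (an illustration of the watching rule, not TF's implementation):

```python
class ToyTape:
    """Returns grads only for 'watched' names, mimicking GradientTape's rule."""
    def __init__(self):
        self.watched = set()

    def watch(self, name):
        self.watched.add(name)

    def gradient(self, grads, name):
        # grads: analytically known gradients for this toy example
        return grads[name] if name in self.watched else None

tape = ToyTape()
grads = {"x": 2.0, "w": 3.0}      # pretend d(loss)/dx = 2, d(loss)/dw = 3
tape.watch("w")                   # variables are watched automatically in TF
print(tape.gradient(grads, "x"))  # None -> the reporter's symptom
tape.watch("x")                   # the fix: watch the input tensor too
print(tape.gradient(grads, "x"))  # 2.0
```

In actual TF 2.x code the corresponding fix is `with tf.GradientTape() as tape: tape.watch(x)` before computing the loss, then `tape.gradient(loss, x)`.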
tensorflow/tensorflow
Inconsistent tf.name_scope behaviour with custom Model in TF 2.x
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 google colab environment mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary colab pre instal tensorflow version use command below 2 3 0 nightly python version 3 6 9 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a colab cpu environment use gpu model and memory n a describe the current behavior I be try to use tf name scope inside a custom model which be inherit from tf keras model and it have custom layer which be inherit from tf keras layers layer describe the expect behavior I expect the scope name that be add with the help of tf name scope as a prefix to the weight name e g weight conv1 conv2d 35 kernel 0 should be name as conv block conv1 conv2d 35 kernel 0 when use with tf name scope conv block standalone code to reproduce the issue class config weight decay 0 0005 weight init seed 42 bnorm momentum 0 9 config config class convlayer tf keras layers layer def init self name filter k size 3 3 stride 1 padding same use bnorm true super convlayer self init name name trainable true dtype tf float32 self use bnorm use bnorm self weight decay config weight decay self k reg tf keras regularizer l2 self weight decay self k init tf keras initializers glorotnormal seed config weight init seed self b init tf zeros initializer self conv tf keras layer conv2d filter filter kernel size k size stride stride padding padding activation none use bias not self use bnorm kernel initializer self k init bias initializer self b init kernel regularizer self k reg if self use bnorm self bn tf keras layer batchnormalization momentum config bnorm momentum self mx pool tf keras layers maxpool2d pool size 3 3 stride 2 padding padding self act tf nn relu def call self x training false 
```python
        if self.use_bnorm:
            return self.act(self.mx_pool(self.bn(self.conv(x), training=training)))
        else:
            return self.act(self.mx_pool(self.conv(x)))

class DenseLayer(tf.keras.layers.Layer):
    def __init__(self, name, units, use_bnorm=True, use_act=True):
        super(DenseLayer, self).__init__(name=name)
        self.use_bnorm = use_bnorm
        self.use_act = use_act
        self.weight_decay = config.weight_decay
        self.k_reg = tf.keras.regularizers.l2(self.weight_decay)
        self.k_init = tf.keras.initializers.GlorotNormal(seed=config.weight_init_seed)
        self.b_init = tf.zeros_initializer()
        self.fc = tf.keras.layers.Dense(units=units, activation=None,
                                        use_bias=not self.use_bnorm,
                                        kernel_initializer=self.k_init,
                                        bias_initializer=self.b_init,
                                        kernel_regularizer=self.k_reg)
        if self.use_bnorm:
            self.bn = tf.keras.layers.BatchNormalization(momentum=config.bnorm_momentum)
        if self.use_act:
            self.act = tf.nn.relu

    def call(self, x, training=False):
        if self.use_bnorm:
            op = self.bn(self.fc(x), training=training)
        else:
            op = self.fc(x)
        if self.use_act:
            return self.act(op)
        else:
            return op

class ClassifierModel(tf.keras.Model):
    def __init__(self, num_classes=2):
        super(ClassifierModel, self).__init__(name='ClassifierModel')
        self.num_classes = num_classes
        # Input will be of shape (None, 128, 128, 3)
        with tf.name_scope('conv_block'):
            self.conv1 = ConvLayer('conv1', 32, (3, 3), 1, 'same', True)   # 64 x 64 x 32
            self.conv2 = ConvLayer('conv2', 48, (3, 3), 1, 'same', True)   # 32 x 32 x 48
            self.conv3 = ConvLayer('conv3', 72, (3, 3), 1, 'same', True)   # 16 x 16 x 72
            self.conv4 = ConvLayer('conv4', 96, (3, 3), 1, 'same', True)   # 8 x 8 x 96
            self.conv5 = ConvLayer('conv5', 128, (3, 3), 1, 'same', True)  # 4 x 4 x 128
        self.gap = tf.keras.layers.GlobalAveragePooling2D()
        self.mx_pool = tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=2, padding='same')
        with tf.name_scope('dense_block'):
            self.fc1 = DenseLayer('fc1', 64, True)         # 64
            self.fc2 = DenseLayer('fc2', 2, False, False)  # 2

    def call(self, x, training=False):
        # x: (b, 128, 128, 3); training=True/False is used for the batch-norm layers
        layer_1 = self.conv1(x, training=training)
        layer_1 = self.mx_pool(layer_1)
        layer_2 = self.conv2(layer_1, training=training)
        layer_2 = self.mx_pool(layer_2)
        layer_3 = self.conv3(layer_2, training=training)
        layer_3 = self.mx_pool(layer_3)
        layer_4 = self.conv4(layer_3, training=training)
        layer_4 = self.mx_pool(layer_4)
        layer_5 = self.conv5(layer_4, training=training)
        layer_5 = self.mx_pool(layer_5)
        gap = self.gap(layer_5)
        fc1 = self.fc1(gap, training=training)
        logits = self.fc2(fc1, training=None)
        output = tf.nn.softmax(logits)
        return logits, output

clf = ClassifierModel()
clf.build(input_shape=tf.TensorShape((None, 128, 128, 3)))
for v in clf.trainable_variables:
    print(v.name, '\t\t', v.shape)
```

Generated output is as below:

```
conv1/conv2d_35/kernel:0 		 (3, 3, 3, 32)
conv1/batch_normalization_41/gamma:0 		 (32,)
conv1/batch_normalization_41/beta:0 		 (32,)
conv2/conv2d_36/kernel:0 		 (3, 3, 32, 48)
conv2/batch_normalization_42/gamma:0 		 (48,)
conv2/batch_normalization_42/beta:0 		 (48,)
conv3/conv2d_37/kernel:0 		 (3, 3, 48, 72)
conv3/batch_normalization_43/gamma:0 		 (72,)
conv3/batch_normalization_43/beta:0 		 (72,)
conv4/conv2d_38/kernel:0 		 (3, 3, 72, 96)
conv4/batch_normalization_44/gamma:0 		 (96,)
conv4/batch_normalization_44/beta:0 		 (96,)
conv5/conv2d_39/kernel:0 		 (3, 3, 96, 128)
conv5/batch_normalization_45/gamma:0 		 (128,)
conv5/batch_normalization_45/beta:0 		 (128,)
fc1/dense_12/kernel:0 		 (128, 64)
fc1/batch_normalization_46/gamma:0 		 (64,)
fc1/batch_normalization_46/beta:0 		 (64,)
fc2/dense_13/kernel:0 		 (64, 2)
fc2/dense_13/bias:0 		 (2,)
```

Output should be as below:

```
conv_block/conv1/conv2d_35/kernel:0 		 (3, 3, 3, 32)
conv_block/conv1/batch_normalization_41/gamma:0 		 (32,)
conv_block/conv1/batch_normalization_41/beta:0 		 (32,)
conv_block/conv2/conv2d_36/kernel:0 		 (3, 3, 32, 48)
conv_block/conv2/batch_normalization_42/gamma:0 		 (48,)
conv_block/conv2/batch_normalization_42/beta:0 		 (48,)
conv_block/conv3/conv2d_37/kernel:0 		 (3, 3, 48, 72)
conv_block/conv3/batch_normalization_43/gamma:0 		 (72,)
conv_block/conv3/batch_normalization_43/beta:0 		 (72,)
conv_block/conv4/conv2d_38/kernel:0 		 (3, 3, 72, 96)
conv_block/conv4/batch_normalization_44/gamma:0 		 (96,)
conv_block/conv4/batch_normalization_44/beta:0 		 (96,)
conv_block/conv5/conv2d_39/kernel:0 		 (3, 3, 96, 128)
conv_block/conv5/batch_normalization_45/gamma:0 		 (128,)
conv_block/conv5/batch_normalization_45/beta:0 		 (128,)
dense_block/fc1/dense_12/kernel:0 		 (128, 64)
dense_block/fc1/batch_normalization_46/gamma:0 		 (64,)
dense_block/fc1/batch_normalization_46/beta:0 		 (64,)
dense_block/fc2/dense_13/kernel:0 		 (64, 2)
dense_block/fc2/dense_13/bias:0 		 (2,)
```

Colab notebook link

Other info / logs: The same issue is reproduced on my local machine, where Python 3.6.10 is installed with TensorFlow 2.1.0.
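The expected output above assumes that entering `tf.name_scope` in `__init__` would prefix the Keras variable names with `conv_block/` and `dense_block/`. As a minimal illustration of the naming scheme the reporter expects (plain Python, no TensorFlow; `NameScope` and `make_variable_name` are hypothetical helpers, not TF APIs), variable names would be built by joining the enclosing scope names with `/`:

```python
class NameScope:
    """Toy context manager mimicking hierarchical name scoping."""
    _stack = []

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        NameScope._stack.append(self.name)
        return self

    def __exit__(self, *exc):
        NameScope._stack.pop()


def make_variable_name(base):
    """Join all active scope names with the variable's own name."""
    return "/".join(NameScope._stack + [base]) + ":0"


with NameScope("conv_block"):
    with NameScope("conv1"):
        name = make_variable_name("kernel")

print(name)  # conv_block/conv1/kernel:0
```

In TF2 Keras, variable names are actually derived from the layer-name hierarchy (the `name=` arguments), which is why the `conv_block`/`dense_block` scopes opened around layer construction do not appear in the reported names.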
tensorflow/tensorflow
Keras layer weights/sublayers get deleted when creating a model with them; model.summary() and plot_model still show those weights as part of the graph, though
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab environment
- TensorFlow installed from (source or binary): Google Colab default
- Python version: Python 3 (Google Colab default)
- CUDA/cuDNN version: Google Colab default
- GPU model and memory: tested on both Google Colab P100 GPU and CPU

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`
- TensorFlow version (use command below): 2.2.0

```
2020-06-20 21:44:17.003371: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
v2.2.0-0-g2b96f3662b 2.2.0
```

Describe the current behavior
I created a new model using two layers from an old model. However, now all of the layer weights from the old model are not showing up in the new model. `model.summary()` and `tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=False, show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96)` still have those weights, so I think they are a part of the graph, but when I print them out those weights/layers are missing altogether.

Describe the expected behavior
All weights from the component layers should be in the model.

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook. Here is a Colab notebook with a minimal example that reproduces the issue, and here is the code:

```python
# !pip install transformers -q
# %tensorflow_version 2.x
from transformers import TFBertModel, AutoModel, TFRobertaModel, AutoTokenizer
import tensorflow as tf
import tensorflow_addons as tfa
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
from tensorflow import keras
from tensorflow.keras import layers
from copy import deepcopy

logger = tf.get_logger()
logger.info(tf.__version__)

def get_mini_model():
    tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)
    layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
    layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])

    inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32,
                                     name='input_Q', batch_size=None)
    hidden1 = layer9(inputHiddenVals, None, None)
    hidden2 = layer10(hidden1[0], None, None)
    modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=hidden2)
    del tempModel
    return modelNew

@tf.function
def loss_fn(_, probs):
    bs = tf.shape(probs)[0]
    labels = tf.eye(bs, bs)
    return tf.losses.categorical_crossentropy(labels, probs, from_logits=True)

model = get_mini_model()
model.compile(loss=loss_fn,
              optimizer=tfa.optimizers.AdamW(weight_decay=1e-4,
                                             learning_rate=1e-5, epsilon=1e-06))

# Get model and layers directly to compare
tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)
layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])

# Only one layer, and that layer also has missing weights
for i, var in enumerate(model.weights):
    print(model.weights[i].name)

# Full weights for one layer
for i, var in enumerate(layer9.weights):
    print(layer9.weights[i].name)

# Test what the correct output should be
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
inputt = tokenizer.encode('This is a sentence', return_tensors='tf')
outt = tempModel(inputt)[0]
# Test model -- output is not the same
modelOut = model(outt)

# model.summary() somehow lists the weights
model.summary()
# Model diagram shows the correct connections between all the layers
tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=False,
                          show_layer_names=True, rankdir='TB',
                          expand_nested=False, dpi=96)
```

Edit: I also tried making the layers from scratch and setting the weights directly, and got the same result. Here's a Colab notebook that does this, and here's the code:

```python
# !pip install transformers -q
# %tensorflow_version 2.x
from transformers import TFBertModel, AutoModel, TFRobertaModel, AutoTokenizer
import tensorflow as tf
import tensorflow_addons as tfa
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Dense, Dropout
import numpy as np
import os

logger = tf.get_logger()
logger.info(tf.__version__)

class TFBertSelfAttention2(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        if config.hidden_size % config.num_attention_heads != 0:
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of "
                "attention heads (%d)" % (config.hidden_size, config.num_attention_heads))
        self.num_attention_heads = config.num_attention_heads
        assert config.hidden_size % config.num_attention_heads == 0
        self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size
        self.query = tf.keras.layers.Dense(
            self.all_head_size,
            kernel_initializer=get_initializer(config.initializer_range), name="query2")
        self.key = tf.keras.layers.Dense(
            self.all_head_size,
            kernel_initializer=get_initializer(config.initializer_range), name="key2")
        self.value = tf.keras.layers.Dense(
            self.all_head_size,
            kernel_initializer=get_initializer(config.initializer_range), name="value2")
        self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob)

    def transpose_for_scores(self, x, batch_size):
        x = tf.reshape(x, (batch_size, -1, self.num_attention_heads,
                           self.attention_head_size))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, inputs, training=False):
        hidden_states, attention_mask, head_mask, output_attentions = inputs
        batch_size = shape_list(hidden_states)[0]
        mixed_query_layer = self.query(hidden_states)
        mixed_key_layer = self.key(hidden_states)
        mixed_value_layer = self.value(hidden_states)
        query_layer = self.transpose_for_scores(mixed_query_layer, batch_size)
        key_layer = self.transpose_for_scores(mixed_key_layer, batch_size)
        value_layer = self.transpose_for_scores(mixed_value_layer, batch_size)

        # Take the dot product between "query" and "key" to get the raw attention scores
        attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True)
        # (batch_size, num_heads, seq_len_q, seq_len_k)
        dk = tf.cast(shape_list(key_layer)[-1], tf.float32)  # scale attention_scores
        attention_scores = attention_scores / tf.math.sqrt(dk)

        if attention_mask is not None:
            # Apply the attention mask (precomputed for all layers in the
            # TFBertModel call() function)
            attention_scores = attention_scores + attention_mask

        # Normalize the attention scores to probabilities
        attention_probs = tf.nn.softmax(attention_scores, axis=-1)

        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper
        attention_probs = self.dropout(attention_probs, training=training)

        # Mask heads if we want to
        if head_mask is not None:
            attention_probs = attention_probs * head_mask

        context_layer = tf.matmul(attention_probs, value_layer)
        context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3])
        context_layer = tf.reshape(context_layer,
                                   (batch_size, -1, self.all_head_size))
        # (batch_size, seq_len_q, all_head_size)
        outputs = ((context_layer, attention_probs)
                   if cast_bool_to_primitive(output_attentions) is True
                   else (context_layer,))
        return outputs

class TFBertSelfOutput2(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(
            config.hidden_size,
            kernel_initializer=get_initializer(config.initializer_range), name="dense2")
        self.LayerNorm = tf.keras.layers.LayerNormalization(
            epsilon=config.layer_norm_eps, name="LayerNorm2")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    def call(self, inputs, training=False):
        hidden_states, input_tensor = inputs
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states, training=training)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states

class TFBertAttention2(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.self_attention = TFBertSelfAttention2(config, name="self2")
        self.dense_output = TFBertSelfOutput2(config, name="output2")

    def prune_heads(self, heads):
        raise NotImplementedError

    def call(self, inputs, training=False):
        input_tensor, attention_mask, head_mask, output_attentions = inputs
        self_outputs = self.self_attention(
            [input_tensor, attention_mask, head_mask, output_attentions],
            training=training)
        attention_output = self.dense_output([self_outputs[0], input_tensor],
                                             training=training)
        outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
        return outputs

class TFBertIntermediate2(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(
            config.intermediate_size,
            kernel_initializer=get_initializer(config.initializer_range), name="dense2")
        if isinstance(config.hidden_act, str):
            self.intermediate_act_fn = ACT2FN[config.hidden_act]
        else:
            self.intermediate_act_fn = config.hidden_act

    def call(self, hidden_states):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.intermediate_act_fn(hidden_states)
        return hidden_states

class TFBertOutput2(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(
            config.hidden_size,
            kernel_initializer=get_initializer(config.initializer_range), name="dense2")
        self.LayerNorm = tf.keras.layers.LayerNormalization(
            epsilon=config.layer_norm_eps, name="LayerNorm2")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    def call(self, inputs, training=False):
        hidden_states, input_tensor = inputs
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states, training=training)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states

class TFBertLayer2(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.attention = TFBertAttention2(config, name="attention2")
        self.intermediate = TFBertIntermediate2(config, name="intermediate2")
        self.bert_output = TFBertOutput2(config, name="output2")

    def call(self, inputs, training=False):
        hidden_states, attention_mask, head_mask, output_attentions = inputs
        attention_outputs = self.attention(
            [hidden_states, attention_mask, head_mask, output_attentions],
            training=training)
        attention_output = attention_outputs[0]
        intermediate_output = self.intermediate(attention_output)
        layer_output = self.bert_output([intermediate_output, attention_output],
                                        training=training)
        outputs = (layer_output,) + attention_outputs[1:]  # add attentions if we output them
        return outputs

class TFBertSelfAttention(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        if config.hidden_size % config.num_attention_heads != 0:
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of "
                "attention heads (%d)" % (config.hidden_size, config.num_attention_heads))
        self.num_attention_heads = config.num_attention_heads
        assert config.hidden_size % config.num_attention_heads == 0
        self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size
        self.query = tf.keras.layers.Dense(
            self.all_head_size,
            kernel_initializer=get_initializer(config.initializer_range), name="query")
        self.key = tf.keras.layers.Dense(
            self.all_head_size,
            kernel_initializer=get_initializer(config.initializer_range), name="key")
        self.value = tf.keras.layers.Dense(
            self.all_head_size,
            kernel_initializer=get_initializer(config.initializer_range), name="value")
        self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob)

    def transpose_for_scores(self, x, batch_size):
        x = tf.reshape(x, (batch_size, -1, self.num_attention_heads,
                           self.attention_head_size))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, inputs, training=False):
        hidden_states, attention_mask, head_mask, output_attentions = inputs
        batch_size = shape_list(hidden_states)[0]
        mixed_query_layer = self.query(hidden_states)
        mixed_key_layer = self.key(hidden_states)
        mixed_value_layer = self.value(hidden_states)
        query_layer = self.transpose_for_scores(mixed_query_layer, batch_size)
        key_layer = self.transpose_for_scores(mixed_key_layer, batch_size)
        value_layer = self.transpose_for_scores(mixed_value_layer, batch_size)

        # Take the dot product between "query" and "key" to get the raw attention scores
        attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True)
        # (batch_size, num_heads, seq_len_q, seq_len_k)
        dk = tf.cast(shape_list(key_layer)[-1], tf.float32)  # scale attention_scores
        attention_scores = attention_scores / tf.math.sqrt(dk)

        if attention_mask is not None:
            # Apply the attention mask (precomputed for all layers in the
            # TFBertModel call() function)
            attention_scores = attention_scores + attention_mask

        # Normalize the attention scores to probabilities
        attention_probs = tf.nn.softmax(attention_scores, axis=-1)

        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper
        attention_probs = self.dropout(attention_probs, training=training)

        # Mask heads if we want to
        if head_mask is not None:
            attention_probs = attention_probs * head_mask

        context_layer = tf.matmul(attention_probs, value_layer)
        context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3])
        context_layer = tf.reshape(context_layer,
                                   (batch_size, -1, self.all_head_size))
        # (batch_size, seq_len_q, all_head_size)
        outputs = ((context_layer, attention_probs)
                   if cast_bool_to_primitive(output_attentions) is True
                   else (context_layer,))
        return outputs

class TFBertSelfOutput(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(
            config.hidden_size,
            kernel_initializer=get_initializer(config.initializer_range), name="dense")
        self.LayerNorm = tf.keras.layers.LayerNormalization(
            epsilon=config.layer_norm_eps, name="LayerNorm")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    def call(self, inputs, training=False):
        hidden_states, input_tensor = inputs
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states, training=training)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states

class TFBertAttention(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.self_attention = TFBertSelfAttention(config, name="self")
        self.dense_output = TFBertSelfOutput(config, name="output")

    def prune_heads(self, heads):
        raise NotImplementedError

    def call(self, inputs, training=False):
        input_tensor, attention_mask, head_mask, output_attentions = inputs
        self_outputs = self.self_attention(
            [input_tensor, attention_mask, head_mask, output_attentions],
            training=training)
        attention_output = self.dense_output([self_outputs[0], input_tensor],
                                             training=training)
        outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
        return outputs

class TFBertIntermediate(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(
            config.intermediate_size,
            kernel_initializer=get_initializer(config.initializer_range), name="dense")
        if isinstance(config.hidden_act, str):
            self.intermediate_act_fn = ACT2FN[config.hidden_act]
        else:
            self.intermediate_act_fn = config.hidden_act

    def call(self, hidden_states):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.intermediate_act_fn(hidden_states)
        return hidden_states

class TFBertOutput(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(
            config.hidden_size,
            kernel_initializer=get_initializer(config.initializer_range), name="dense")
        self.LayerNorm = tf.keras.layers.LayerNormalization(
            epsilon=config.layer_norm_eps, name="LayerNorm")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    def call(self, inputs, training=False):
        hidden_states, input_tensor = inputs
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states, training=training)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states

class TFBertLayer(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.attention = TFBertAttention(config, name="attention")
        self.intermediate = TFBertIntermediate(config, name="intermediate")
        self.bert_output = TFBertOutput(config, name="output")

    def call(self, inputs, training=False):
        hidden_states, attention_mask, head_mask, output_attentions = inputs
        attention_outputs = self.attention(
            [hidden_states, attention_mask, head_mask, output_attentions],
            training=training)
        attention_output = attention_outputs[0]
        intermediate_output = self.intermediate(attention_output)
        layer_output = self.bert_output([intermediate_output, attention_output],
                                        training=training)
        outputs = (layer_output,) + attention_outputs[1:]  # add attentions if we output them
        return outputs

configBase = {
    'attention_probs_dropout_prob': 0.1,
    'bos_token_id': 0,
    'eos_token_id': 2,
    'hidden_act': 'gelu',
    'hidden_dropout_prob': 0.1,
    'hidden_size': 768,
    'initializer_range': 0.02,
    'intermediate_size': 3072,
    'layer_norm_eps': 1e-05,
    'max_position_embeddings': 514,
    'model_type': 'roberta',
    'num_attention_heads': 12,
    'num_hidden_layers': 12,
    'pad_token_id': 1,
    'type_vocab_size': 1,
    'vocab_size': 50265,
}

class AttrDict(dict):
    def __init__(self, *args, **kwargs):
        super(AttrDict, self).__init__(*args, **kwargs)
        self.__dict__ = self

config = AttrDict(configBase)

def get_initializer(initializer_range=0.02):
    """Creates a tf initializer (truncated normal) with the given range.

    Args:
        initializer_range: float, initializer range for stddev.

    Returns:
        TruncatedNormal initializer with stddev = initializer_range.
    """
    return tf.keras.initializers.TruncatedNormal(stddev=initializer_range)

def gelu(x):
    """Gaussian Error Linear Unit.

    Original implementation of the GELU activation function in the Google BERT
    repo when initially created. For information: OpenAI GPT's GELU is slightly
    different (and gives slightly different results):
    0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
    Also see ...
    """
    cdf = 0.5 * (1.0 + tf.math.erf(x / tf.math.sqrt(2.0)))
    return x * cdf

ACT2FN = {'gelu': tf.keras.layers.Activation(gelu)}

def shape_list(x):
    """Deal with dynamic shape in TensorFlow cleanly."""
    static = x.shape.as_list()
    dynamic = tf.shape(x)
    return [dynamic[i] if s is None else s for i, s in enumerate(static)]

def cast_bool_to_primitive(bool_variable, default_tensor_to_true=False):
    """Function arguments can be inserted as boolean tensors and bool variables
    to cope with Keras serialization. We need to cast `output_attentions` to a
    correct bool if it is a tensor.

    Args:
        default_tensor_to_true: bool, whether a tensor should default to True
            in case the tensor has no numpy attribute.
    """
    # If bool variable is a tensor and has a numpy value
    if tf.is_tensor(bool_variable):
        if hasattr(bool_variable, 'numpy'):
            return bool(bool_variable.numpy())
        elif default_tensor_to_true:
            return True
    else:
        # Variable is bool
        return bool_variable

def get_2_transformerLayers(numb):
    tokenizer = AutoTokenizer.from_pretrained('allenai/biomed_roberta_base')
    inputt = tokenizer.encode('This is a sentence', return_tensors='tf')
    tempModel = TFRobertaModel.from_pretrained('allenai/biomed_roberta_base',
                                               from_pt=True)
    outt = tempModel(inputt)[0]
    t_layer11 = TFBertLayer(config, name='layer{}{}'.format(11, numb))
    t_layer12 = TFBertLayer2(config, name='layer{}{}'.format(12, numb))
    t_layer11([outt, None, None, None])
    t_layer12([outt, None, None, None])
    t_layer11.set_weights(tempModel.layers[0].encoder.layer[10].get_weights())
    t_layer12.set_weights(tempModel.layers[0].encoder.layer[11].get_weights())
    t_layer12.intermediate.intermediate_act_fn = tf.keras.activations.tanh
    del tokenizer
    del tempModel
    return t_layer11, t_layer12

def get_mini_model():
    p_trans11, p_trans12 = get_2_transformerLayers(6)
    inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32,
                                     name='input_Q', batch_size=None)
    p_outputs = p_trans11([inputHiddenVals, None, None, None])[0]
    p_outputsFinal = p_trans12([p_outputs, None, None, None])[0]
    modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=p_outputsFinal)
    return modelNew

@tf.function
def loss_fn(_, probs):
    bs = tf.shape(probs)[0]
    labels = tf.eye(bs, bs)
    return tf.losses.categorical_crossentropy(labels, probs, from_logits=True)

model = get_mini_model()
model.compile(loss=loss_fn,
              optimizer=tfa.optimizers.AdamW(weight_decay=1e-4,
                                             learning_rate=1e-5, epsilon=1e-06))

for i, var in enumerate(model.trainable_weights):
    print(model.trainable_weights[i].name)
```
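The `TFBertSelfAttention` layers above all compute the same core operation: scaled dot-product attention, `softmax(QKᵀ/√d)·V`. As a minimal sketch of just that math (plain Python, no TensorFlow; the function names are mine, not from the issue), ignoring batching, heads, masking, and dropout:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of vectors (seq_len x d). Returns seq_len x d context.

    Mirrors the attention_scores / attention_probs / context_layer steps of
    the TFBertSelfAttention.call above, without batching or head splitting.
    """
    d = len(K[0])
    # Raw scores: Q . K^T, scaled by sqrt(d) (the "dk" scaling above)
    scores = [[sum(q[i] * k[i] for i in range(d)) / math.sqrt(d) for k in K]
              for q in Q]
    # Normalize scores to probabilities (tf.nn.softmax(..., axis=-1))
    probs = [softmax(row) for row in scores]
    # Context = probs . V (tf.matmul(attention_probs, value_layer))
    return [[sum(p[j] * V[j][i] for j in range(len(V))) for i in range(d)]
            for p in probs]

# Each query attends almost entirely to its matching key, so the context
# rows approximate the corresponding value rows.
ctx = scaled_dot_product_attention([[10.0, 0.0], [0.0, 10.0]],
                                   [[10.0, 0.0], [0.0, 10.0]],
                                   [[1.0, 0.0], [0.0, 1.0]])
print(ctx)
```

This is only the math; the Keras layers above add the learned `query`/`key`/`value` projections, multi-head reshaping, attention masking, and dropout around it.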
tensorflow/tensorflow
SparseTensor from_dense conversion error when dtype=tf.string
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): conda binary
- TensorFlow version (use command below): unknown 2.1.0
- Python version: 3.7.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.2 / 7.6.5
- GPU model and memory: GTX 1070, 8 GB

Describe the current behavior
Fails to convert the tensor to a SparseTensor representation.

Describe the expected behavior
`b` should contain the SparseTensor representation of `a`.

Standalone code to reproduce the issue

```python
a = tf.constant(list('ababa'))
print(a)
b = tf.sparse.from_dense(a)
print(b)
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
Traceback (most recent call last):
  File "test.py", line 264, in <module>
    b = tf.sparse.from_dense(a)
  File "/home/alex/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/sparse_ops.py", line 121, in from_dense
    math_ops.not_equal(tensor, array_ops.constant(0, tensor.dtype)))
  File "/home/alex/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 258, in constant
    allow_broadcast=True)
  File "/home/alex/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 266, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/alex/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
TypeError: Cannot convert 0 to EagerTensor of dtype string
```
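The traceback shows the cause: `from_dense` compares every element against the scalar `0`, which cannot be cast to a string tensor. Conceptually, the dense-to-sparse conversion only needs a dtype-appropriate "zero" (the empty string for `tf.string`). A plain-Python sketch of that conversion for a 1-D input (my own helper, not the TensorFlow implementation), returning COO-style `(indices, values, dense_shape)`:

```python
def sparse_from_dense(dense_values, zero=""):
    """Convert a 1-D dense list to (indices, values, dense_shape), treating
    `zero` as the implicit default element. For strings, the natural default
    is "" rather than the integer 0 that caused the reported TypeError."""
    indices = [i for i, v in enumerate(dense_values) if v != zero]
    values = [dense_values[i] for i in indices]
    return indices, values, len(dense_values)

# All elements of 'ababa' are non-empty, so nothing is dropped:
print(sparse_from_dense(list("ababa")))   # ([0, 1, 2, 3, 4], ['a', 'b', 'a', 'b', 'a'], 5)
# Empty strings are treated as the implicit default:
print(sparse_from_dense(["a", "", "b"]))  # ([0, 2], ['a', 'b'], 3)
```

This mirrors what one would expect `tf.sparse.from_dense` to do for `tf.string` inputs once the comparison constant respects the tensor's dtype.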
tensorflow/tensorflow
Concatenating weights in Keras custom layer using add_weight fails while computing gradients
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): N/A, it can be reproduced in Google Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): N/A
- TensorFlow version (use command below): 2.2.0 and tf-nightly
- Python version: 3.6.7 (Google Colab)
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
While writing a custom layer, I need to have multiple weight matrices concatenated together. If I do this in the `build` function using `tf.concat`, I get the error `ValueError: No gradients provided for any variable`. However, if I make a list of weights in `build` and concatenate them in `call`, it works.

Describe the expected behavior
It should work with `tf.concat` as well.

Standalone code to reproduce the issue
This is the Colab link to reproduce the issue. Providing the code below as well:

```python
import tensorflow as tf
import numpy as np
from tensorflow.keras.optimizers import Adam
print(tf.__version__)

class MultiInputLinear(tf.keras.layers.Layer):
    def __init__(self, output_dim=32, n_inputs=2):
        super(MultiInputLinear, self).__init__()
        self.output_dim = output_dim
        self.n_inputs = n_inputs

    def build(self, input_shape):
        self.input_dim = input_shape[0][1]
        self.W = tf.concat([
            self.add_weight(name=f'W_{i}',
                            shape=(self.input_dim, self.output_dim),
                            initializer='random_normal', trainable=True)
            for i in range(self.n_inputs)
        ], axis=0)

    def call(self, inputs):
        support = tf.concat(inputs, axis=-1)
        return tf.matmul(support, self.W)

n = 100
A = [np.random.normal(size=(n, n)) for _ in range(2)]
y = np.random.binomial(1, .1, size=(n, 32))
A_in = [tf.keras.layers.Input(batch_size=n, shape=(n,)) for _ in range(2)]
y_ = MultiInputLinear(y.shape[-1], 2)(A_in)
model = tf.keras.models.Model(inputs=A_in, outputs=y_)
model.compile(loss='categorical_crossentropy', optimizer=Adam())
model.fit(A, y, batch_size=n)
```

Working code:

```python
import tensorflow as tf
import numpy as np
from tensorflow.keras.optimizers import Adam

class MultiInputLinear(tf.keras.layers.Layer):
    def __init__(self, output_dim=32, n_inputs=2):
        super(MultiInputLinear, self).__init__()
        self.output_dim = output_dim
        self.n_inputs = n_inputs

    def build(self, input_shape):
        self.input_dim = input_shape[0][1]
        self.W_list = [
            self.add_weight(name=f'W_{i}',
                            shape=(self.input_dim, self.output_dim),
                            initializer='random_normal', trainable=True)
            for i in range(self.n_inputs)
        ]

    def call(self, inputs):
        support = tf.concat(inputs, axis=-1)
        W = tf.concat(self.W_list, axis=0)
        return tf.matmul(support, W)

n = 100
A = [np.random.normal(size=(n, n)) for _ in range(2)]
y = np.random.binomial(1, .1, size=(n, 32))
A_in = [tf.keras.layers.Input(batch_size=n, shape=(n,)) for _ in range(2)]
y_ = MultiInputLinear(y.shape[-1], 2)(A_in)
model = tf.keras.models.Model(inputs=A_in, outputs=y_)
model.compile(loss='categorical_crossentropy', optimizer=Adam())
model.fit(A, y, batch_size=n)
```

Other info / logs

```
ValueError: No gradients provided for any variable: ['multi_input_linear_4/W_0:0', 'multi_input_linear_4/W_1:0'].
```
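A plausible reading of the failure (my interpretation, not an authoritative explanation from the thread): `tf.concat` in `build` runs once, eagerly, and snapshots the variables' values into a plain tensor; the loss is then computed from that snapshot, so there is no differentiable path back to the `W_i` variables, whereas concatenating in `call` re-reads the variables on every forward pass. The distinction can be illustrated with a plain-Python analogy (no TensorFlow; value copies stand in for snapshot tensors):

```python
# Two "variables" the optimizer would update
w0, w1 = [1.0, 2.0], [3.0, 4.0]

# "build-time" concat: copies the current values once, like an eager tf.concat
W_build = w0 + w1

# "call-time" concat: re-reads the variables on every call
def W_call():
    return w0 + w1

w0[0] = 99.0          # simulate an optimizer update to the variable
print(W_build[0])     # 1.0  -- stale snapshot; updates are invisible
print(W_call()[0])    # 99.0 -- re-reading sees the updated variable
```

The working version of the layer is the call-time pattern: keep `add_weight` results as variables and perform the `tf.concat` inside `call`, so the concatenation participates in the traced computation and gradients reach each `W_i`.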
tensorflow/tensorflow
ALBERT from TF Hub doesn't work with GradientTape
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian GNU/Linux 9.12
- TensorFlow installed from (source or binary): automatic installation for GCP Deep Learning VM
- TensorFlow version (use command below): 2.1.0
- TensorFlow Hub version: 0.8.0
- Python version: 3.7.6
- CUDA/cuDNN version: 10.1
- GPU model and memory: NVIDIA V100

Describe the current behavior
I get an error when trying to run ALBERT from TF Hub inside a `tf.GradientTape` context. The error occurs at the forward pass, before calling `tape.gradient`. The error does not occur when the model call is placed outside of the `tf.GradientTape` context.

Describe the expected behavior
No such error should occur; the model should run and gradients should be calculated properly.

Standalone code to reproduce the issue

```python
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras
import numpy as np

def get_model():
    global max_seq_length
    global batch_size
    input_word_ids = keras.layers.Input(batch_shape=(batch_size, max_seq_length),
                                        dtype=tf.int32, name="input_word_ids")
    input_mask = keras.layers.Input(batch_shape=(batch_size, max_seq_length),
                                    dtype=tf.int32, name="input_mask")
    segment_ids = keras.layers.Input(batch_shape=(batch_size, max_seq_length),
                                     dtype=tf.int32, name="segment_ids")
    albert_layer = hub.KerasLayer(..., trainable=True, name="albert_layer")
    pooled_output, sequence_output = albert_layer([input_word_ids, input_mask,
                                                   segment_ids])
    output = keras.layers.Dense(2)(sequence_output)
    model = keras.Model(inputs=[input_word_ids, input_mask, segment_ids],
                        outputs=output)
    print(model.summary())
    return model

batch_size = 4
max_seq_length = 16
model = get_model()

input_ids = 5 * np.ones((4, 16), dtype=np.int32)
input_mask = np.ones((4, 16), dtype=np.int32)
segment_ids = np.zeros((4, 16), dtype=np.int32)

with tf.GradientTape(persistent=True) as tape:
    logits = model({"input_word_ids": input_ids,
                    "input_mask": input_mask,
                    "segment_ids": segment_ids})
print(logits)
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
Traceback (most recent call last):
  File "albert_gradient_tape_test.py", line 42, in <module>
    "segment_ids": segment_ids})
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 717, in call
    convert_kwargs_to_constants=base_layer_utils.call_context().saving)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 891, in _run_internal_graph
    output_tensors = layer(computed_tensors, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_hub/keras_layer.py", line 218, in call
    lambda: f(*args, training=False))
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/smart_cond.py", line 56, in smart_cond
    return false_fn()
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_hub/keras_layer.py", line 218, in <lambda>
    lambda: f(*args, training=False))
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/saved_model/load.py", line 438, in _call_attribute
    return instance.__call__(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 606, in _call
    results = self._stateful_fn(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2362, in __call__
    graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2593, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 439, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/saved_model/function_deserialization.py", line 241, in restored_function_body
    return _call_concrete_function(function, inputs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/saved_model/function_deserialization.py", line 72, in _call_concrete_function
    result = function._call_flat(flat_tensor_inputs, function._captured_inputs)  # pylint: disable=protected-access
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/saved_model/load.py", line 99, in _call_flat
    cancellation_manager)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1697, in _call_flat
    forward_function, args_with_tangents = forward_backward.forward()
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1423, in forward
    self._inference_args, self._input_tangents)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1185, in forward
    self._forward_and_backward_functions(inference_args, input_tangents)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1379, in _forward_and_backward_functions
    outputs, inference_args, input_tangents)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 882, in _build_functions_for_outputs
    outputs[output_index])
  File "/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/ops/default_gradient.py", line 45, in shape_and_dtype
    "of a variable without handle data:\n%s" % str(t))
ValueError: Internal error: Tried to take gradients (or similar) of a variable without handle data:
Tensor("StatefulPartitionedCall:949", shape=(), dtype=resource)
```
tensorflow/tensorflow
tf.tpu.experimental.initialize_tpu_system fails to work on nightly build
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colaboratory (Linux)
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf-nightly, v2.3.0-dev20200619
- Python version: v3.6.9
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A
- TPU: Google Colab runtime with TPU accelerator

Describe the current behavior
`tf.tpu.experimental.initialize_tpu_system(tpu_cluster_resolver)` raises an unexpected error. The stack trace containing this error is provided underneath. The sample notebook that was being used to train an EfficientNetB0 model with TPUStrategy containing the error message is provided here.

Describe the expected behavior
A ResNet50 model (as EfficientNetB0 is only present in tf-nightly) with similar code was able to run successfully with TPUStrategy, and there was no such error reported while calling `tf.tpu.experimental.initialize_tpu_system`. A notebook with the corresponding training code for TF v2.2 can be found here.

Standalone code to reproduce the issue
Just calling `tf.tpu.experimental.initialize_tpu_system` using the standard mechanism on a Colab runtime with TPU should suffice:

```python
tpu_url = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tpu_cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_url)
tf.config.experimental_connect_to_cluster(tpu_cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(tpu_cluster_resolver)
strategy = tf.distribute.experimental.TPUStrategy(tpu_cluster_resolver)
```

Other info / logs

```
Running on TPU ['10.57.138.26:8470']
INFO:tensorflow:Initializing the TPU system: grpc://10.57.138.26:8470
INFO:tensorflow:Initializing the TPU system: grpc://10.57.138.26:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Clearing out eager caches
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>()
      6
      7 tf.config.experimental_connect_to_cluster(tpu)
----> 8 tf.tpu.experimental.initialize_tpu_system(tpu)
      9 tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)

3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/tpu/tpu_strategy_util.py in initialize_tpu_system(cluster_resolver)
    101       context.context()._clear_caches()  # pylint: disable=protected-access
    102
--> 103   serialized_topology = output.numpy()
    104
    105   # TODO(b/134094971): Remove this when lazy tensor copy in multi-device
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in numpy(self)
   1061     """
   1062     # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.
-> 1063     maybe_arr = self._numpy()  # pylint: disable=protected-access
   1064     return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
   1065
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
   1029       return self._numpy_internal()
   1030     except core._NotOkStatusException as e:  # pylint: disable=protected-access
-> 1031       six.raise_from(core._status_to_exception(e.code, e.message), None)  # pylint: disable=protected-access
   1032
   1033   @property
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified; Op<name=_Send; signature=tensor:T -> ; attr=T:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>; NodeDef: {{node _Send}}
```

Typically, this bug is prevalent only on nightly builds (master branch) and not on the TF v2.2 release.

cc @tanzhenyu
tensorflow/tensorflow
Random in tf.data.Dataset.map is not random if not coming from TensorFlow
Bug
**System information**

```
== check os ==
os: Linux
os kernel version: #53~18.04.1-Ubuntu SMP Thu Jun 4 14:58:26 UTC 2020
os release version: 5.3.0-59-generic
os platform: Linux-5.3.0-59-generic-x86_64-with-Ubuntu-18.04-bionic
linux distribution: ('Ubuntu', '18.04', 'bionic')
mac version:

== check python ==
python version: 3.6.9
python branch:
python build version: ('default', 'Apr 18 2020 01:56:04')
python compiler version: GCC 8.4.0
python implementation: CPython

== compiler ==
c++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

== check pips ==
numpy                  1.18.4
protobuf               3.12.2
tensorflow             2.1.1
tensorflow-addons      0.10.0
tensorflow-estimator   2.1.0

== tensorflow ==
tf.version.VERSION = 2.1.1
tf.version.GIT_VERSION = v2.1.0-33-g3ffdb91
tf.version.COMPILER_VERSION = 7.3.1 20180303
```

**Standalone code to reproduce the issue**

```python
import tensorflow as tf
import numpy as np
import random as rd

dataset = tf.data.Dataset.from_tensor_slices(np.arange(5)).map(
    lambda annotation: (rd.random(), np.random.rand(), tf.random.uniform(())))
for el in dataset:
    print(el)
```

**Describe the current behavior**
The values for every element of the dataset are the same for Python- or NumPy-based random, but not for TensorFlow-based random.

**Describe the expected behavior**
The values for tf/np/python-based randomness are a new random value for every dataset element, or a warning/error is raised to avoid falling into this trap and debugging.
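The constant values come from graph tracing: `Dataset.map` runs the mapped Python function once to build a graph, so `rd.random()` and `np.random.rand()` execute a single time and their results are baked in as constants, while `tf.random.uniform` becomes a graph op evaluated per element. A TensorFlow-free sketch of this trace-once behavior (the `trace` helper is hypothetical, for illustration only):

```python
import random

def trace(fn):
    # Mimics how Dataset.map traces a Python function once: plain Python
    # calls run immediately and their results are frozen into the graph.
    baked = fn()           # executed exactly once, at trace time
    return lambda: baked   # every later "element" reuses the frozen result

traced = trace(lambda: random.random())
values = {traced() for _ in range(5)}        # one distinct value, five "elements"
fresh = {random.random() for _ in range(5)}  # re-drawn on each call
```

Under this reading the fix in user code is to use `tf.random.uniform` (a graph op) inside `map`, which is exactly the one source of randomness the report shows varying per element.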
tensorflow/tensorflow
TFLite GPU delegate with OpenCL backend produces wrong results
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: OnePlus 6 (Snapdragon 845)
- TensorFlow installed from (source or binary): source (tag v2.1.1) and binary
- TensorFlow version (use command below): v2.1.1 (`python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"` gives v2.1.0-33-g3ffdb91 2.1.1)
- Python version: Python 3.6.9
- Bazel version (if compiling from source): bazel 0.29.1
- GCC/Compiler version (if compiling from source): gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
- CUDA/cuDNN version: from `nvidia-smi`: NVIDIA-SMI 440.33.01, Driver Version 440.33.01, CUDA Version 10.2; from `nvcc --version`: Cuda compilation tools, release 10.1, V10.1.243; no cuDNN
- GPU model and memory: RTX 2080 Ti, 11 GB memory

Build environment information: tf_env_build.txt
Execution environment information: tf_env_exec.txt

**Describe the current behavior**
I used a YOLOv3 (COCO dataset) .tflite model to detect objects, and the TFLite OpenCL GPU delegate produces erroneous results compared with the TFLite CPU backend.

TFLite CPU: [image]
TFLite OpenCL GPU delegate: [image]

Note the tie on Zidane is not detected with the OpenCL GPU delegate. At first, the detection problem was encountered on a OnePlus 6T. To ensure the problem is only with the OpenCL GPU delegate and not with the OpenGL ES GPU delegate, I built both the OpenCL and OpenGL ES GPU delegates on Linux to double-check the results.

Linux OpenCL: [image]
Linux OpenGL ES: [image]

**Describe the expected behavior**
The TFLite OpenCL GPU delegate outputs the same results as the TFLite CPU backend.

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. The results from the CPU, OpenCL and OpenGL ES GPU delegates were generated with the arguments `--delegate <delegate dir>/libtensorflowlite_gpu_delegate.so` and `--delegate <delegate dir>/libtensorflowlite_gpu_gl.so`, respectively. The model can be downloaded from the link. A pre-built docker image can be downloaded with `docker pull zldrobit/cudagl:opencl`. Clone the git repo, download the model to `weights/yolov3-coco_fp32.tflite` and run `python3 tflite_detect.py`; the results will be generated in the `output` folder.

PS: I was unable to provide a Colab notebook due to the configuration difficulty of OpenCL and OpenGL on the Colab environment. The .so files are based on TensorFlow tag v2.1.1, and I only added initialization and cleanup code; once I have time, I will publish the build process ASAP.

PPS: I could not run the OpenGL ES GPU delegate on the OnePlus 6T. The most critical error message may be `E libEGL : call to OpenGL ES API with no current context (logged once per thread)`.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Test set is empty for imdb.load_data with low maxlen values
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.6

**Describe the current behavior**
With the current `imdb.load_data`, the following results are seen for different values of `maxlen`:

| load_data | len(x_train) | len(x_test) |
|---|---|---|
| `imdb.load_data(maxlen=50)` | 1035 | 0 |
| `imdb.load_data(maxlen=100)` | 5736 | 0 |
| `imdb.load_data(maxlen=200)` | 25000 | 3913 |
| `imdb.load_data()` | 25000 | 25000 |

Analysis: we can observe that when `maxlen` is low, the number of test samples can be 0. This is because the train and test data are concatenated, then the samples with length over `maxlen` are removed, and the first 25,000 are considered as training data.

**Describe the expected behavior**
The number of test samples should not be zero.

Fix: this can be fixed when the data is filtered first, to remove the samples with length over `maxlen`, and then concatenated for further processing. The following are the results after the fix:

| load_data | len(x_train) | len(x_test) |
|---|---|---|
| `imdb.load_data(maxlen=50)` | 477 | 558 |
| `imdb.load_data(maxlen=100)` | 2773 | 2963 |
| `imdb.load_data(maxlen=200)` | 14244 | 14669 |
| `imdb.load_data()` | 25000 | 25000 |

**Standalone code to reproduce the issue**

```python
from tensorflow.keras.datasets import imdb

(x_train, y_train), (x_test, y_test) = imdb.load_data(maxlen=50)
print(len(x_train), len(x_test))  # this gives 1035, 0

(x_train, y_train), (x_test, y_test) = imdb.load_data(maxlen=100)
print(len(x_train), len(x_test))  # this gives 5736, 0

(x_train, y_train), (x_test, y_test) = imdb.load_data(maxlen=200)
print(len(x_train), len(x_test))  # this gives 25000, 3913
```
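The proposed fix can be sketched without TensorFlow: filtering each split by length before concatenating keeps both splits populated, whereas filtering after concatenation and then cutting at the original training-set size can drain the test side entirely. All names below are hypothetical stand-ins, with tiny synthetic "samples":

```python
def split_then_filter(train, test, maxlen):
    # Current behavior: concatenate, filter, then cut at the original
    # training-set size -- so for small maxlen the test slice can be empty.
    pooled = [s for s in train + test if len(s) <= maxlen]
    return pooled[:len(train)], pooled[len(train):]

def filter_then_split(train, test, maxlen):
    # Proposed fix: filter each split independently, then concatenate.
    return ([s for s in train if len(s) <= maxlen],
            [s for s in test if len(s) <= maxlen])

train = [[0] * 3] * 3 + [[0] * 9]   # three short samples, one long
test = [[0] * 3] + [[0] * 9] * 3    # one short sample, three long

x_train, x_test = split_then_filter(train, test, maxlen=5)
# all surviving samples land in the "train" slice; x_test is empty
x_train2, x_test2 = filter_then_split(train, test, maxlen=5)
# each split keeps its own short samples
```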
tensorflow/tensorflow
tensorflow.keras.layers.add causes exception in to_json
Bug
**System information**
- Custom code: below
- OS: Windows 10
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Keras version: 2.3.0-tf
- Python version: 3.6
- Running on CPU

**Describe the current behavior**
`to_json` causes an exception in `nest.pack_sequence_as`.

**Describe the expected behavior**
`to_json` should work.

**Standalone code to reproduce the issue**

```python
# Keras/TensorFlow bug: exception in to_json (nest.pack_sequence_as).
# Change kl.add to tf.add_n and there is no problem.
# Using TensorFlow 2.2.0, Keras 2.3.0-tf.
import tensorflow.keras.layers as kl
import tensorflow.keras.models as km
import tensorflow as tf

inputs = kl.Input(batch_shape=(2, 2))
sk = []
o1 = kl.Dense(17, name='d1')(inputs)
sk.append(o1)
o2 = kl.Dense(17, name='d2')(o1)
sk.append(o2)
x1 = kl.add(sk, name='a1')
o3 = kl.Dense(17, name='d3')(o2)
sk.append(o3)
x2 = kl.add(sk, name='a2')
m = km.Model(inputs=inputs, outputs=[x1, x2])
m.summary()
junk = m.to_json()
```
tensorflow/tensorflow
TFLite model fails to prepare for inference
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian 9
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: —
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.14
- Python version: 3.7
- Bazel version (if compiling from source): 0.28.0
- GCC/Compiler version (if compiling from source): —
- CUDA/cuDNN version: 10
- GPU model and memory: P100, 16 GB VRAM

**Describe the current behavior**
After obtaining a .pb model with `export_tflite_ssd_graph.py`, I'm using toco to create a detect.tflite model. I cannot run inference with this model or compile it for the TPU. I'm getting this error when running inference with detect.tflite:

```
Traceback (most recent call last):
  File "test_tflite.py", line 33, in <module>
    main()
  File "test_tflite.py", line 21, in main
    interpreter.allocate_tensors()
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter.py", line 242, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/usr/local/lib/python3.5/dist-packages/tflite_runtime/interpreter_wrapper.py", line 110, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: tensorflow/lite/kernels/kernel_util.cc:129
std::abs(input_product_scale - bias_scale) <= 1e-6 * std::min(input_product_scale, bias_scale) was not true.
Node number 108 (CONV_2D) failed to prepare.
```

I suspect it involves using sigmoid as a score converter; when I use softmax, the model works fine. I understand the difference between the two, but I suspect that the postprocessing op is not capable of handling `num_classes` of 2 when using sigmoid.

**Describe the expected behavior**

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to a Colab/Jupyter or any notebook.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
tf.signal.rfft2d/3d documentation refers to Tcomplex and input as arguments
Bug
URL(s) with the issue: [link]

**Description of issue (what needs changing)**
Clear description: in the Args section there are `input` and `Tcomplex`. `Tcomplex` no longer exists, and `input` should be `input_tensor`.

Running the code

```python
tf.signal.rfft([1.], Tcomplex=tf.complex64)
```

gets the exception

```python
TypeError: rfft() got an unexpected keyword argument 'Tcomplex'
```

- Parameters defined: yes
- Returns defined: yes
- Raises listed and defined: no

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0-rc3
- Python version: 3.8.2

Related issue: #39520
tensorflow/tensorflow
tf.signal.irfft2d/3d documentation refers to input and Treal as arguments
Bug
URL(s) with the issue: [link]

**Description of issue (what needs changing)**
Clear description: in the Args section there are `input` and `Treal`. `Treal` no longer exists, and `input` should be `input_tensor`.

Running the code

```python
tf.signal.irfft([1.], Treal=tf.float32)
```

gets the exception

```python
TypeError: irfft() got an unexpected keyword argument 'Treal'
```

and if running the code

```python
tf.signal.irfft(input=[1.])
```

gets the exception

```python
TypeError: irfft() got an unexpected keyword argument 'input'
```

- Parameters defined: yes
- Returns defined: yes
- Raises listed and defined: no

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0-rc3
- Python version: 3.8.2
tensorflow/tensorflow
Unclear rank/dimension dependency of weights in sigmoid_cross_entropy documentation
Bug
URL(s) with the issue: [link]

**Description of issue (what needs changing)**
Clear description: unclear rank dependency of the input `weights`. According to the documentation, `weights` could have the same rank as `labels` and must be broadcastable to `labels`, but it is unclear what `labels` is.

- Parameters defined: yes
- Returns defined: yes
- Raises listed and defined: yes

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0-rc3
- Python version: 3.8.2
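"Broadcastable to labels" follows the usual NumPy-style broadcasting rules: a scalar, a per-example weight of shape (batch, 1), or a full (batch, num_classes) tensor all broadcast against labels of shape (batch, num_classes). A NumPy sketch of the shapes involved (the shapes here are illustrative assumptions, not taken from the docs):

```python
import numpy as np

labels = np.ones((4, 3))                   # (batch_size, num_classes)
per_example = np.arange(4.).reshape(4, 1)  # same rank as labels: (batch, 1)
scalar = 2.0

# Both weight shapes broadcast to labels' shape under NumPy-style rules;
# an incompatible shape such as (3, 1) against (4, 3) would raise.
w1 = np.broadcast_to(per_example, labels.shape)
w2 = np.broadcast_to(scalar, labels.shape)
```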
tensorflow/tensorflow
tf.nn.swish documentation refers to name as an argument
Bug
URL(s) with the issue: [link]

**Description of issue (what needs changing)**
Clear description: in the Args section there is an input `name`, but it is not in the signature and the function doesn't accept the argument.

Running the code

```python
tf.nn.swish([1.], name=None)
```

gets the exception

```python
TypeError: swish() got an unexpected keyword argument 'name'
```

- Parameters defined: yes
- Returns defined: yes
- Raises listed and defined: no

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0-rc3
- Python version: 3.8.2
tensorflow/tensorflow
"AutoGraph could not transform" warning when code contains a multi-line string with backslashes
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7.3
- CUDA/cuDNN version: 10.1 / 7
- GPU model and memory: NVIDIA GeForce RTX 2080 Ti

**Describe the current behavior**
An AutoGraph warning appears if a custom Keras layer's `call` method contains a multi-line string joined by backslashes. If the multi-line string is joined using brackets, however, no warning appears. It does not seem to influence the calculations in any way, but it is a fun bug to encounter.

```
WARNING:tensorflow:AutoGraph could not transform <...> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Expected exactly one node, found [...]
WARNING: AutoGraph could not transform <...> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Expected exactly one node, found [...]
```

**Describe the expected behavior**
No warning appears.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf
tf.autograph.set_verbosity(10, alsologtostdout=True)
from tensorflow.keras.layers import Layer, Input

class SlashphobicLayer(Layer):
    def call(self, inputs):
        s = "foo" \
            "bar"
        print(s)
        return inputs

x = Input(shape=(1,))
y = SlashphobicLayer()(x)
```

**Other info / logs**
AutoGraph log: `converted_call` args/kwargs, "not whitelisted: default rule" (three times), "Entity is not cached for key/subkey", then:

```
Error transforming entity <...>
Traceback (most recent call last):
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 526, in converted_call
    converted_f = conversion.convert(target_entity, program_ctx)
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\conversion.py", line 326, in convert
    free_nonglobal_var_names)
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\conversion.py", line 240, in _convert_with_cache
    entity, program_ctx)
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\conversion.py", line 475, in convert_entity_to_ast
    nodes, name, entity_info = convert_func_to_ast(o, program_ctx)
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\conversion.py", line 634, in convert_func_to_ast
    node, source = parser.parse_entity(f, future_features=future_features)
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\pyct\parser.py", line 207, in parse_entity
    return _attempt_to_parse_normal_source(source, future_features)
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\pyct\parser.py", line 111, in _attempt_to_parse_normal_source
    return parse_str(source, preamble_len=len(future_features)), source
  File "D:\miniconda3\lib\site-packages\tensorflow_core\python\autograph\pyct\parser.py", line 230, in parse_str
    raise ValueError('Expected exactly one node, found {}'.format(nodes))
ValueError: Expected exactly one node, found [...]
WARNING:tensorflow:AutoGraph could not transform <...> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Expected exactly one node, found [...]
```
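Until AutoGraph handles backslash continuations, a workaround consistent with the report is to join the literal with parentheses instead, which produces an identical string:

```python
# String literal continued with a backslash -- the form that triggers
# the AutoGraph warning when it appears inside a transformed call().
s_backslash = "foo" \
              "bar"

# The same literal joined via parenthesized implicit concatenation --
# per the report, this form produces no warning.
s_parens = ("foo"
            "bar")
```

Both forms are equivalent at the string level; only AutoGraph's source re-parsing treats them differently.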
tensorflow/tensorflow
Iterator.make_initializer returns None
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.2.0
- Python version: 3.6
- Bazel version (if compiling from source): —
- GCC/Compiler version (if compiling from source): 7.5.0
- CUDA/cuDNN version: 10.2 / 7.6
- GPU model and memory: GeForce RTX 2080, 12 GB

**Describe the current behavior**
I'm trying to run code written in TensorFlow 1.1 using TensorFlow 2.2. I already ran `tf_upgrade_v2`, which changed most of the unsupported things. However, while creating an iterator:

```python
train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes)
train_data = train_data.map(map_func=map_func, num_parallel_calls=self.num_threads)
# prefetch data
train_data = train_data.prefetch(10)
# create an iterator of the correct shape and type
iterator = tf.compat.v1.data.Iterator.from_structure(
    tf.compat.v1.data.get_output_types(train_data),
    tf.compat.v1.data.get_output_shapes(train_data))
```

and when I initialize it:

```python
train_init_op = iterator.make_initializer(train_data)
```

the initialization operator `train_init_op` is `None`, and I get the following error when I run `sess.run(train_init_op)`:

```
  File "/home/pvnieo/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 958, in run
    run_metadata_ptr)
  File "/home/pvnieo/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1166, in _run
    self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
  File "/home/pvnieo/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 477, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/home/pvnieo/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 263, in for_fetch
    type(fetch)))
TypeError: Fetch argument None has invalid type <class 'NoneType'>
```

How can I solve this issue?
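In TF 2.x with eager execution enabled, the initializable-iterator pattern has no direct equivalent: a `tf.data` dataset is a plain Python iterable, so the session/initializer pair is replaced by direct iteration (or one keeps the old code path alive with `tf.compat.v1.disable_eager_execution()`). A TensorFlow-free sketch of the shape of that migration, with hypothetical `gen_function`/`map_func` stand-ins:

```python
def gen_function():
    # stand-in for the generator behind Dataset.from_generator
    yield from range(3)

def map_func(x):
    # stand-in for the per-element preprocessing in Dataset.map
    return x * 2

# TF1 style: iterator = Iterator.from_structure(...);
#            train_init_op = iterator.make_initializer(train_data);
#            sess.run(train_init_op); sess.run(next_element) in a loop.
# TF2 style: the pipeline itself is the iterable -- just loop over it.
train_data = map(map_func, gen_function())
batches = list(train_data)  # i.e. "for batch in train_data: ..."
```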
tensorflow/tensorflow
tf.image.flip_left_right and tf.image.flip_up_down incorrectly assume rank-3 images and flip along the wrong axis
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, see below
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0 (`python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"` gives unknown 2.0.0)
- Python version: 3.7

**Describe the current behavior**
Setup: under some circumstances (in my case, when a tf.Tensor is inside a tf.data.Dataset and it is being conditionally transformed inside a `tf.data.Dataset.map` call), the shape of a tensor becomes unknown. This may be expected behavior.

Still, behavior: when `tf.image.flip_left_right` is applied to this tf.Tensor, the function incorrectly assumes a rank-3 shape and flips the image along the wrong axis.

**Describe the expected behavior**
When `tf.image.flip_left_right` is applied to the tf.Tensor, the function does not assume a rank-3 shape and flips the image along the correct axis.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

def correct_image_flip_left_right(image):
    return tf.reverse(image, axis=[-2])

patch_tf = False  # change this to True to fix the bug
if patch_tf:
    tf.image.flip_left_right = correct_image_flip_left_right

image_input = tf.convert_to_tensor([
    [  # batch 0
        [  # y == 0
            [0, 1, 2],    # x == 0, channels 0, 1, 2
            [3, 4, 5],    # x == 1, channels 0, 1, 2
        ],
        [  # y == 1
            [6, 7, 8],    # x == 0, channels 0, 1, 2
            [9, 10, 11],  # x == 1, channels 0, 1, 2
        ],
    ],
])

image_flipped_directly = tf.image.flip_left_right(image_input)
expected_output = tf.convert_to_tensor([
    [  # batch 0
        [  # y == 0
            [3, 4, 5],    # x == 0, channels 0, 1, 2
            [0, 1, 2],    # x == 1, channels 0, 1, 2
        ],
        [  # y == 1
            [9, 10, 11],  # x == 0, channels 0, 1, 2
            [6, 7, 8],    # x == 1, channels 0, 1, 2
        ],
    ],
])
tf.assert_equal(image_flipped_directly, expected_output)
print("Directly calling tf.image.flip_left_right works as expected")

def generator():
    yield image_input

dataset = tf.data.Dataset.from_generator(generator, output_types=tf.int32)

def flip_it(image, do_flip: bool):
    if do_flip:
        return tf.image.flip_left_right(image)
    else:
        return image

dataset = dataset.map(lambda image: flip_it(image, tf.constant(True)))
image_flipped_via_dataset_map = next(iter(dataset))
print("image_flipped_via_dataset_map:")
print(image_flipped_via_dataset_map)
print("expected_output:")
print(expected_output)
# this assertion fails even though it shouldn't (unless patch_tf is True)
tf.assert_equal(image_flipped_via_dataset_map, expected_output)
print("If you can see this message, image_flipped_via_dataset_map == expected_output")
```

Output:

```
$ python tf_image_flip.py
2020-06-18 17:10:08.875275: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Directly calling tf.image.flip_left_right works as expected
image_flipped_via_dataset_map:
tf.Tensor(
[[[[ 6  7  8]
   [ 9 10 11]]
  [[ 0  1  2]
   [ 3  4  5]]]], shape=(1, 2, 2, 3), dtype=int32)
expected_output:
tf.Tensor(
[[[[ 3  4  5]
   [ 0  1  2]]
  [[ 9 10 11]
   [ 6  7  8]]]], shape=(1, 2, 2, 3), dtype=int32)
Traceback (most recent call last):
  File "tf_image_flip.py", line 54, in <module>
    tf.assert_equal(image_flipped_via_dataset_map, expected_output)
  File "C:\Users\uib58003\AppData\Local\Continuum\miniconda3\lib\site-packages\tensorflow_core\python\ops\check_ops.py", line 456, in assert_equal_v2
    return assert_equal(x=x, y=y, summarize=summarize, message=message, name=name)
  File "C:\Users\uib58003\AppData\Local\Continuum\miniconda3\lib\site-packages\tensorflow_core\python\ops\check_ops.py", line 546, in assert_equal
    (message or '', index_and_values_str, summary_msg))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Condition x == y did not hold.
Indices of first 3 different values:
[[0 0 0 0]
 [0 0 0 1]
 [0 0 0 2]]
Corresponding x values:
[6 7 8]
Corresponding y values:
[3 4 5]
First 3 elements of x:
[6 7 8]
First 3 elements of y:
[3 4 5]
```

**Other info / logs**
The bug is in `image_ops_impl._flip`, called by `image_ops_impl.flip_left_right` (L537-L546). TensorFlow should not assume, based on `shape.ndims is None`, that the image is 3-dimensional, and it should not call `fix_image_flip_shape`, which further builds on this incorrect assumption (L315-L320). This causes the flipping to happen along the wrong axis. The `flip_left_right` function should be implemented by `array_ops.reverse(image, axis=[-2])`, which always flips along the correct axis for any number of ranks, as long as the dimensions end in (height, width, channels).

The same bug probably appears in `tf.image.flip_up_down` for the same reason (based on looking at the source code, but I haven't tested this); that function should always apply `tf.reverse(image, axis=[-3])`, for the same reason as before.
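The rank-agnostic fix proposed above is easy to check outside TensorFlow: reversing axis -2 flips the width dimension for both a single HWC image and a batched NHWC tensor, which is what `tf.reverse(image, axis=[-2])` does. A NumPy sketch:

```python
import numpy as np

image = np.arange(12).reshape(2, 2, 3)  # (height, width, channels)
batch = image[np.newaxis]               # (1, height, width, channels)

# np.flip(..., axis=-2) reverses the width axis regardless of rank,
# mirroring the proposed tf.reverse(image, axis=[-2]).
flipped_image = np.flip(image, axis=-2)
flipped_batch = np.flip(batch, axis=-2)

# Flipping the batch and then dropping the batch dimension matches
# flipping the bare image -- the operation is rank-agnostic.
```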
tensorflow/tensorflow
tensor_scatter_nd_add doesn't support complex64 in tf 2.3-dev
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.3-dev
- Python version: 3.6.8
- CUDA/cuDNN version: 10.1
- GPU model and memory: Quadro P5000, 16 GB

**Describe the current behavior**
When using `tf.tensor_scatter_nd_add` with complex data, I get the following error:

```
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
----> 1 tf.tensor_scatter_nd_add(tf.transpose(to_update), arr_ind, update)

~/workspace/tfkbnufft/venv/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py in tensor_scatter_add(tensor, indices, updates, name)
  10686         return _result
  10687       except _core._NotOkStatusException as e:
> 10688         _ops.raise_from_not_ok_status(e, name)
  10689       except _core._FallbackException:
  10690         pass

~/workspace/tfkbnufft/venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   6841   message = e.message + (" name: " + name if name is not None else "")
   6842   # pylint: disable=protected-access
-> 6843   six.raise_from(core._status_to_exception(e.code, message), None)
   6844   # pylint: enable=protected-access
   6845

~/workspace/tfkbnufft/venv/lib/python3.6/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Unsupported dtype complex64 [Op:TensorScatterAdd]
```

**Describe the expected behavior**
`tf.tensor_scatter_nd_add` should work with complex data.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

to_update = tf.ones([1, 640000], dtype=tf.complex64)
arr_ind = tf.range(324000)[:, None]
update = tf.cast(tf.random.normal([324000, 1], dtype=tf.float32), tf.complex64)
tf.tensor_scatter_nd_add(tf.transpose(to_update), arr_ind, update)
```

Colab link: [here]

**Other info / logs**
This problem only appears with tf-nightly and on GPU.
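Because scatter-add is linear, a possible workaround while the op lacks a complex64 kernel is to scatter the real and imaginary parts separately and recombine them (in TensorFlow this would use `tf.math.real`/`tf.math.imag` and `tf.complex`). The decomposition itself can be checked with NumPy:

```python
import numpy as np

tensor = np.ones(6, dtype=np.complex64)
indices = np.array([0, 2, 2])  # index 2 receives two accumulated updates
updates = np.array([1 + 2j, 3 - 1j, 0 + 1j], dtype=np.complex64)

# Reference: complex scatter-add done in one shot.
direct = tensor.copy()
np.add.at(direct, indices, updates)

# Workaround: scatter the real and imaginary parts independently,
# then recombine -- valid because scatter-add is linear.
real = tensor.real.copy()
imag = tensor.imag.copy()
np.add.at(real, indices, updates.real)
np.add.at(imag, indices, updates.imag)
recombined = (real + 1j * imag).astype(np.complex64)
```

This doubles the scatter work but stays on supported real dtypes.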
tensorflow/tensorflow
Predicting the model with features on TPU: AssertionError: Could not compute output Tensor("dense_3/Identity:0", shape=(None, 1), dtype=float32)
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): —
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Kaggle (using TPU)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: —
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.2
- Python version: 3.7.6
- Bazel version (if compiling from source): —
- GCC/Compiler version (if compiling from source): —
- CUDA/cuDNN version: —
- GPU model and memory: —

**Describe the current behavior**
After successful training, prediction with the model does not happen, even though the dataset is right, and it throws this error:

```
AssertionError: Could not compute output Tensor("dense_3/Identity:0", shape=(None, 1), dtype=float32)
```

**Describe the expected behavior**
The model should successfully predict on the dataset.

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to a Colab/Jupyter or any notebook.

**Other info / logs**
Logs are in logs.txt (attached).
tensorflow/tensorflow
tf.io.matching_files hangs given a certain pattern
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.6.9
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
`tf.io.matching_files` hangs and does not terminate when a certain character sequence is passed at the beginning of the `pattern` argument.

**Describe the expected behavior**
As long as the sequence is not placed at the beginning, the function terminates with proper error handling; I would expect a similar behavior for my input below.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

tf.io.matching_files(pattern)  # pattern beginning with the problematic sequence
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
TensorFlow roadmap link in README is broken
Bug
The link to the TensorFlow roadmap in the README is broken.

This template is for miscellaneous issues not covered by the other issue categories. For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to StackOverflow. If you are reporting a vulnerability, please use the dedicated reporting process. For high-level discussions about TensorFlow, please post to [the discussion list]; for questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to [the developers list].
tensorflow/tensorflow
Keras callbacks stopped receiving some params with TensorFlow 2.2.0
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Fedora 31
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: Python 3.7.5
- CUDA/cuDNN version: no GPU
- GPU model and memory: no GPU

**Describe the current behavior**

When defining a custom Keras callback, the `set_params` method receives a subset of the parameters in TensorFlow 2.2.0 in comparison to what it used to receive in TensorFlow 2.1.1. In TensorFlow 2.2.0 it receives:

```
{'verbose': 1, 'epochs': 2, 'steps': 1}
```

**Describe the expected behavior**

I would expect to get the same params as with TensorFlow 2.1.1, which were:

```
{'batch_size': 4, 'epochs': 2, 'steps': None, 'samples': 4, 'verbose': 1, 'do_validation': False, 'metrics': ['loss']}
```

**Standalone code to reproduce the issue**

I used the following script to test the behavior with the two versions of TensorFlow, and also tried with `keras` versus `tf.keras`:

```python
import numpy as np
# import keras
import tensorflow.keras as keras


def build_xor_data():
    x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y_train = np.array([0, 1, 1, 0], dtype=float)
    return x_train, y_train


def build_xor_model():
    input_layer = keras.layers.Input(shape=(2,))
    hidden_layer = keras.layers.Dense(2, activation='sigmoid')(input_layer)
    output_layer = keras.layers.Dense(1, activation='sigmoid')(hidden_layer)
    model = keras.models.Model(inputs=input_layer, outputs=output_layer)
    model.compile(loss='mse', optimizer='adam')
    return model


x_train, y_train = build_xor_data()
model = build_xor_model()


class DebugCallback(keras.callbacks.Callback):
    def set_params(self, params):
        print('set_params', locals())


print('keras version:', keras.__version__)
model.fit(x_train, y_train, batch_size=4, epochs=2, callbacks=[DebugCallback()])
```

I get the following output:

| | keras | tf.keras |
|---|---|---|
| tf 2.1.1 | `{'batch_size': 4, 'epochs': 2, 'steps': None, 'samples': 4, 'verbose': 1, 'do_validation': False, 'metrics': ['loss']}` | `{'batch_size': 4, 'epochs': 2, 'steps': 1, 'samples': 4, 'verbose': 0, 'do_validation': False, 'metrics': ['loss']}` |
| tf 2.2.0 | `{'batch_size': 4, 'epochs': 2, 'steps': None, 'samples': 4, 'verbose': 1, 'do_validation': False, 'metrics': ['loss']}` | `{'verbose': 1, 'epochs': 2, 'steps': 1}` |

In TensorFlow 2.2.0 we no longer get the following parameters: `batch_size`, `samples`, `do_validation`, `metrics`. After reading the release notes, section "Breaking Changes", I could expect to no longer receive `metrics`. Are the other missing params also linked to that breaking change?
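Until it is clarified whether dropping those keys is intentional, a callback can read `params` defensively so that the same class works under both the TF 2.1-style and TF 2.2-style dicts. A minimal sketch (plain Python, hypothetical `RobustCallback` name, not part of Keras):

```python
# Hypothetical defensive callback: read params via dict.get() so the
# callback works whether Keras passes the full TF 2.1-style dict or the
# reduced TF 2.2-style dict containing only 'verbose', 'epochs', 'steps'.
class RobustCallback:
    def set_params(self, params):
        self.params = params
        self.batch_size = params.get('batch_size')          # None under TF 2.2.0
        self.samples = params.get('samples')                # None under TF 2.2.0
        self.do_validation = params.get('do_validation', False)
        self.metrics = params.get('metrics', [])            # [] under TF 2.2.0
        self.epochs = params.get('epochs')
        self.steps = params.get('steps')
```

This avoids `KeyError`s when the reduced dict is passed, at the cost of having to handle `None`/empty defaults downstream.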
tensorflow/tensorflow
GatherV2 bug when converting from pb to tflite
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.1
- Python version: 3.5
- Bazel version (if compiling from source): 0.16.0
- GCC/Compiler version (if compiling from source): 7.4.0
- CUDA/cuDNN version: 10.2
- GPU model and memory: RTX 2080 Ti

`python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"` reports: v2.1.0-rc2-17-ge5bf8de 2.1.0

**Describe the current behavior**

(Screenshot from 2020-06-16 17-29-43 attached.)

I am trying to create a tflite model from this pb, but it fails to convert this GatherV2 because:

```
2020-06-16 17:32:12.974945: F tensorflow/lite/toco/graph_transformations/resolve_constant_gather.cc:65] Check failed: stride * coords_shape.dims(0) == output_data->size() (131072 vs. 1310720)
```

The code is from `resolve_constant_gather.cc`:

```
CHECK_EQ(stride * coords_shape.dims(0), output_data->size());
```

Is this desired? I think this is a bug.

**Describe the expected behavior**

The model should convert without the check failing.

**Standalone code to reproduce the issue**
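The failing `CHECK_EQ` enforces a simple shape invariant: a constant gather on axis 0 produces `coords_shape.dims(0)` rows of `stride` elements each, so `stride * coords_shape.dims(0)` must equal the output buffer size (the reported 131072 vs. 1310720 differ by exactly a factor of 10, which may hint at an extra dimension the transformation does not account for — that reading is my assumption, not confirmed). A pure-Python sketch of the invariant, with made-up toy shapes:

```python
# Sketch of the shape arithmetic behind the CHECK in
# resolve_constant_gather.cc (toy shapes, axis-0 gather): each selected
# index contributes one row of `stride` elements to the output.
def gather_axis0(params, coords):
    # params: list of equal-length rows; coords: list of row indices
    return [params[i] for i in coords]

stride = 4                                   # elements per row
params = [[r * stride + c for c in range(stride)] for r in range(8)]
coords = [0, 2, 7]
out = gather_axis0(params, coords)
out_size = sum(len(row) for row in out)
assert out_size == stride * len(coords)      # the invariant the CHECK asserts
```

When the actual output buffer is larger than `stride * coords_shape.dims(0)`, as in the report, the check fires.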
tensorflow/tensorflow
Conversion error when trying to convert a model using BeamSearchDecoder from TensorFlow Addons (RNN)
Bug
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.15.5
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): 2.2

**Command used to run the converter or code if you're using the Python API**

```python
converter = tf.lite.TFLiteConverter.from_keras_model(inference_model)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_quantized_model = converter.convert()
```

**The output from the converter invocation**

```
2020-06-16 12:24:25.843841: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-06-16 12:24:25.843958: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-06-16 12:24:25.913996: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: graph_to_optimize
2020-06-16 12:24:25.914107: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: Graph size after: 460 nodes (66), 546 edges (111), time = 11.364ms.
2020-06-16 12:24:25.914119: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: Graph size after: 460 nodes (0), 546 edges (0), time = 9.66ms.
2020-06-16 12:24:25.914126: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: while_body_5784
2020-06-16 12:24:25.914133: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2020-06-16 12:24:25.914140: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914151: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: TensorArrayV2Write_1/cond_true_6956
2020-06-16 12:24:25.914160: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914166: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914171: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: TensorArrayV2Write_2/cond_true_6974
2020-06-16 12:24:25.914177: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914183: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914193: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: while_cond_5783
2020-06-16 12:24:25.914199: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914205: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914210: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: TensorArrayV2Write/cond_true_6938
2020-06-16 12:24:25.914215: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914228: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914235: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: model_predictive_typing/addons_beam_search_decoder/decoder/while_body_6435
2020-06-16 12:24:25.914240: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: Graph size after: 521 nodes (0), 597 edges (0), time = 4.097ms.
2020-06-16 12:24:25.914244: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: Graph size after: 521 nodes (0), 597 edges (0), time = 4.171ms.
2020-06-16 12:24:25.914248: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: BeamSearchDecoderStep/cond_true_6915
2020-06-16 12:24:25.914253: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914258: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914264: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: BeamSearchDecoderStep/cond_false_6916
2020-06-16 12:24:25.914270: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914276: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914279: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: TensorArrayV2Write_2/cond_false_6975
2020-06-16 12:24:25.914283: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914289: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914294: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: TensorArrayV2Write_1/cond_false_6957
2020-06-16 12:24:25.914298: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914302: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914307: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: model_predictive_typing/addons_beam_search_decoder/decoder/while_cond_6434
2020-06-16 12:24:25.914313: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914319: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-16 12:24:25.914323: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: TensorArrayV2Write/cond_false_6939
2020-06-16 12:24:25.914328: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-06-16 12:24:25.914333: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
Traceback (most recent call last):
  File "lite_conversion.py", line 60, in <module>
    main()
  File "lite_conversion.py", line 53, in main
    tflite_quantized_model = converter.convert()
  File "/home/gc/miniconda3/envs/grammatica-tf2-gpu/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 459, in convert
    self._funcs[0], lower_control_flow=False)
  File "/home/gc/miniconda3/envs/grammatica-tf2-gpu/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 706, in convert_variables_to_constants_v2_as_graph
    func, lower_control_flow, aggressive_inlining)
  File "/home/gc/miniconda3/envs/grammatica-tf2-gpu/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 461, in _convert_variables_to_constants_v2_impl
    node_defs, tensor_data, name_to_node)
  File "/home/gc/miniconda3/envs/grammatica-tf2-gpu/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 286, in _get_control_flow_function_data
    arg_types[idx] = get_resource_type(node.input[idx - 1])
  File "/home/gc/miniconda3/envs/grammatica-tf2-gpu/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 259, in get_resource_type
    node_name = get_source_node_name_through_identities(node_name)
  File "/home/gc/miniconda3/envs/grammatica-tf2-gpu/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 254, in get_source_node_name_through_identities
    while name_to_node[node_name].op == "Identity":
KeyError: 'beamsearchdecoderstep_cond_input_1:0'
```

**Any other info / logs**

I am trying to convert a seq2seq model to TF Lite; however, I am facing some issues with the BeamSearchDecoder from TensorFlow Addons. My model works fine in Python and the source code looks like this:

```python
class MySeq2SeqModel(tf.keras.models.Model):
    def __init__(self, vocab_size: int, input_len: int, output_len: int,
                 batch_size, rnn_units: int = 64, dense_units: int = 64,
                 embedding_dims: int = 256, **kwargs):
        super(MySeq2SeqModel, self).__init__(**kwargs)
        # Base attributes
        self.vocab_size = vocab_size
        self.input_len = input_len
        self.output_len = output_len
        self.rnn_units = rnn_units
        self.dense_units = dense_units
        self.embedding_dims = embedding_dims
        self.batch_size = batch_size
        # Beam search attributes
        self.beam_width = 3
        # Encoder
        self.encoder_embedding = layers.Embedding(vocab_size, embedding_dims, input_length=input_len)
        self.encoder_rnn = layers.LSTM(rnn_units, return_sequences=True, return_state=True)
        # Decoder
        self.decoder_embedding = layers.Embedding(vocab_size, embedding_dims, input_length=output_len)
        self.decoder_rnncell = tf.keras.layers.LSTMCell(rnn_units)
        # Attention
        self.attention_mechanism = tfa.seq2seq.LuongAttention(dense_units)
        self.rnn_cell = self.build_rnn_cell(batch_size=batch_size)
        # Output
        self.dense_layer = tf.keras.layers.Dense(vocab_size)
        self.inference_decoder = BeamSearchDecoder(
            cell=self.rnn_cell,
            beam_width=self.beam_width,
            output_layer=self.dense_layer,
            # as tf.nn.embedding_lookup is not supported by tflite
            embedding_fn=lambda ids: tf.gather(tf.identity(self.decoder_embedding.variables[0]), ids),
            coverage_penalty_weight=0.0,
            dynamic=False,
            parallel_iterations=1,
            maximum_iterations=output_len)

    def call(self, inputs, training=None, mask=None):
        # Encoder
        encoder_emb = self.encoder_embedding(inputs[0])
        encoder_outputs, state_h, state_c = self.encoder_rnn(encoder_emb)
        decoder_emb = self.decoder_embedding(inputs[1])
        tiled_a = tfa.seq2seq.tile_batch(encoder_outputs, multiplier=self.beam_width)
        tiled_a_tx = tfa.seq2seq.tile_batch(state_h, multiplier=self.beam_width)
        tiled_c_tx = tfa.seq2seq.tile_batch(state_c, multiplier=self.beam_width)
        start_tokens = tf.fill([1], start_id)
        self.attention_mechanism.setup_memory(tiled_a)
        final_outputs, final_state = self.inference_decoder(
            embedding=None,
            start_tokens=start_tokens,
            end_token=eos_id,
            initial_state=self.build_decoder_initial_state(
                size=1 * self.beam_width,
                encoder_state=[tiled_a_tx, tiled_c_tx],
                Dtype=tf.float32))
        return final_outputs.predicted_ids
```
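For context on the `tile_batch` calls in the model above: per the TensorFlow Addons documentation, `tfa.seq2seq.tile_batch` repeats each batch entry `multiplier` (here `beam_width`) times along the batch axis, so the encoder outputs and states line up with the beam-expanded decoder batch. A pure-Python sketch of that behavior (toy values, my simplification of the real op):

```python
# Sketch of what tfa.seq2seq.tile_batch does to the encoder outputs above:
# each batch entry is repeated `multiplier` (= beam_width) times along
# the batch axis, so batch size b becomes b * multiplier.
def tile_batch(batch, multiplier):
    return [entry for entry in batch for _ in range(multiplier)]

encoder_outputs = [['h0'], ['h1']]          # batch of 2 (toy values)
tiled = tile_batch(encoder_outputs, 3)      # beam_width = 3
assert len(tiled) == 6
assert tiled[:3] == [['h0'], ['h0'], ['h0']]
```

The real op does the same on the leading dimension of a tensor (or nested structure of tensors) rather than a Python list.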
tensorflow/tensorflow
RuntimeError: tensorflow/lite/kernels/range.cc:39 (start > limit && delta < 0) || (start < limit && delta > 0) was not true. Node number 3 (RANGE) failed to invoke. Node number 393 (WHILE) failed to invoke. Current error: RuntimeError: tensorflow/lite/kernels/reshape.cc:55 stretch_dim != -1 (0 != -1) Node number 83 (RESHAPE) failed to prepare.
Bug
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version (or github SHA if from source):

**Command used to run the converter or code if you're using the Python API**

```python
import numpy as np
import tensorflow as tf
from tensorflow_tts.processor import LJSpeechProcessor

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='fastspeech.tflite')
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print('input_details:', input_details)
print('output_details:', output_details)
print(input_details[0])
print(input_details[1])
print(input_details[2])

# fastspeech inference
attention_mask = interpreter.tensor(interpreter.get_input_details()[0]["index"])
speaker_ids = interpreter.tensor(interpreter.get_input_details()[1]["index"])
input_ids = interpreter.tensor(interpreter.get_input_details()[2]["index"])

input_ids = tf.convert_to_tensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]], tf.int32)
attention_mask = tf.convert_to_tensor(
    [[True, True, True, True, True, True, True, True, True, True]], tf.bool)
speaker_ids = tf.convert_to_tensor([0], tf.int32)

out_p = interpreter.tensor(interpreter.get_output_details()[0]["index"])
interpreter.invoke()
interpreter.invoke()
interpreter.invoke()
print('done')

# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
masked_mel_before = interpreter.get_tensor(output_details[2]['index'])
print(masked_mel_before)
```

**The output from the converter invocation**

(The `dtype` values below were lost from my log; the input dtypes match the `tf.convert_to_tensor` calls above.)

```
input_details: [{'name': 'attention_mask', 'index': 0, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'dtype': <class 'numpy.bool_'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'input_ids', 'index': 1, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'dtype': <class 'numpy.int32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'speaker_ids', 'index': 2, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.int32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
output_details: [{'name': 'Identity', 'index': 585, 'shape': array([], dtype=int32), 'shape_signature': array([], dtype=int32), 'dtype': ?, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'Identity_1', 'index': 621, 'shape': array([], dtype=int32), 'shape_signature': array([], dtype=int32), 'dtype': ?, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'Identity_2', 'index': 537, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'dtype': ?, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
{'name': 'attention_mask', 'index': 0, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'dtype': <class 'numpy.bool_'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
{'name': 'input_ids', 'index': 1, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([ 1, 10], dtype=int32), 'dtype': <class 'numpy.int32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
{'name': 'speaker_ids', 'index': 2, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.int32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
2020-06-16 06:20:21.460754: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-06-16 06:20:21.460788: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-06-16 06:20:21.460819: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] no NVIDIA GPU device is present: /dev/nvidia0 does not exist
2020-06-16 06:20:21.461131: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-16 06:20:21.468935: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2593990000 Hz
2020-06-16 06:20:21.469795: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f0cfc000b20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-16 06:20:21.469821: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
  File "test_tflite.py", line 28, in <module>
    interpreter.invoke()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py", line 511, in invoke
    self._interpreter.Invoke()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 113, in Invoke
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)
RuntimeError: tensorflow/lite/kernels/range.cc:39 (start > limit && delta < 0) || (start < limit && delta > 0) was not true. Node number 3 (RANGE) failed to invoke. Node number 393 (WHILE) failed to invoke.
```

**Also, please include a link to the saved model or GraphDef**

Saved model: fastspeech.tflite

**Failure details**

Conversion was successful, but there is a runtime error: `interpreter.invoke()` fails.

**Any other info / logs**

pb conversion:

```python
import yaml
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

from tensorflow_tts.configs import FastSpeechConfig
from tensorflow_tts.models import TFFastSpeech

with open('examples/fastspeech/conf/fastspeech.v3.yaml') as f:
    config = yaml.load(f, Loader=yaml.Loader)
config = FastSpeechConfig(**config["fastspeech_params"])
fastspeech = TFFastSpeech(config=config, name="fastspeech")
fastspeech._build()
fastspeech.load_weights("examples/fastspeech/pretrained/model-150000.h5",
                        by_name=True, skip_mismatch=True)
tf.saved_model.save(fastspeech, 'test_saved')
```

fastspeech model code:

```python
import numpy as np
import tensorflow as tf


def get_initializer(initializer_range=0.02):
    """Creates a `tf.initializers.truncated_normal` with the given range.
    Args:
        initializer_range: float, initializer range for stddev.
    Returns:
        TruncatedNormal initializer with stddev = `initializer_range`.
    """
    return tf.keras.initializers.TruncatedNormal(stddev=initializer_range)


def gelu(x):
    """Gaussian Error Linear Unit."""
    cdf = 0.5 * (1.0 + tf.math.erf(x / tf.math.sqrt(2.0)))
    return x * cdf


def gelu_new(x):
    """Smoother Gaussian Error Linear Unit."""
    cdf = 0.5 * (1.0 + tf.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))
    return x * cdf


def swish(x):
    """Swish activation function."""
    return x * tf.sigmoid(x)


def mish(x):
    return x * tf.math.tanh(tf.math.softplus(x))


ACT2FN = {
    "identity": tf.keras.layers.Activation('linear'),
    "tanh": tf.keras.layers.Activation('tanh'),
    "gelu": tf.keras.layers.Activation(gelu),
    "relu": tf.keras.activations.relu,
    "swish": tf.keras.layers.Activation(swish),
    "gelu_new": tf.keras.layers.Activation(gelu_new),
    "mish": tf.keras.layers.Activation(mish),
}


class TFFastSpeechEmbeddings(tf.keras.layers.Layer):
    """Construct charactor/phoneme/positional/speaker embeddings."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.vocab_size = config.vocab_size
        self.hidden_size = config.hidden_size
        self.initializer_range = config.initializer_range
        self.config = config

        self.position_embeddings = tf.keras.layers.Embedding(
            config.max_position_embeddings + 1,
            config.hidden_size,
            weights=[self._sincos_embedding()],
            name="position_embeddings",
            trainable=False,
        )

        if config.n_speakers > 1:
            self.encoder_speaker_embeddings = tf.keras.layers.Embedding(
                config.n_speakers,
                config.hidden_size,
                embeddings_initializer=get_initializer(self.initializer_range),
                name="speaker_embeddings",
            )
            self.speaker_fc = tf.keras.layers.Dense(units=config.hidden_size, name="speaker_fc")

    def build(self, input_shape):
        """Build shared charactor/phoneme embedding layers."""
        with tf.name_scope("charactor_embeddings"):
            self.charactor_embeddings = self.add_weight(
                "weight",
                shape=[self.vocab_size, self.hidden_size],
                initializer=get_initializer(self.initializer_range),
            )
        super().build(input_shape)

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Get charactor embeddings of inputs.
        Args:
            1. charactor, Tensor (int32) shape [batch_size, length].
            2. speaker_id, Tensor (int32) shape [batch_size].
        Returns:
            Tensor (float32) shape [batch_size, length, embedding_size].
        """
        return self._embedding(inputs, training=training)

    def _embedding(self, inputs, training=False):
        """Applies embedding based on inputs tensor."""
        input_ids, speaker_ids = inputs

        input_shape = tf.shape(input_ids)
        seq_length = input_shape[1]

        position_ids = tf.range(1, seq_length + 1, dtype=tf.int32)[tf.newaxis, :]

        # create embeddings
        inputs_embeds = tf.gather(self.charactor_embeddings, input_ids)
        position_embeddings = self.position_embeddings(position_ids)

        # sum embedding
        embeddings = inputs_embeds + position_embeddings

        if self.config.n_speakers > 1:
            speaker_embeddings = self.encoder_speaker_embeddings(speaker_ids)
            speaker_features = tf.math.softplus(self.speaker_fc(speaker_embeddings))
            # extended speaker embeddings
            extended_speaker_features = speaker_features[:, tf.newaxis, :]
            embeddings += extended_speaker_features

        return embeddings

    def _sincos_embedding(self):
        position_enc = np.array(
            [
                [pos / np.power(10000, 2.0 * (i // 2) / self.hidden_size) for i in range(self.hidden_size)]
                for pos in range(self.config.max_position_embeddings + 1)
            ]
        )

        position_enc[:, 0::2] = np.sin(position_enc[:, 0::2])
        position_enc[:, 1::2] = np.cos(position_enc[:, 1::2])

        # pad embedding
        position_enc[0] = 0.0

        return position_enc


class TFFastSpeechSelfAttention(tf.keras.layers.Layer):
    """Self attention module for fastspeech."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        if config.hidden_size % config.num_attention_heads != 0:
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of attention heads (%d)"
                % (config.hidden_size, config.num_attention_heads)
            )
        self.output_attentions = config.output_attentions
        self.num_attention_heads = config.num_attention_heads
        self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size

        self.query = tf.keras.layers.Dense(
            self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query"
        )
        self.key = tf.keras.layers.Dense(
            self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key"
        )
        self.value = tf.keras.layers.Dense(
            self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value"
        )

        self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob)

    def transpose_for_scores(self, x, batch_size):
        """Transpose to calculate attention scores."""
        x = tf.reshape(x, (batch_size, -1, self.num_attention_heads, self.attention_head_size))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        hidden_states, attention_mask = inputs

        batch_size = tf.shape(hidden_states)[0]
        mixed_query_layer = self.query(hidden_states)
        mixed_key_layer = self.key(hidden_states)
        mixed_value_layer = self.value(hidden_states)

        query_layer = self.transpose_for_scores(mixed_query_layer, batch_size)
        key_layer = self.transpose_for_scores(mixed_key_layer, batch_size)
        value_layer = self.transpose_for_scores(mixed_value_layer, batch_size)

        attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True)
        dk = tf.cast(tf.shape(key_layer)[-1], tf.float32)  # scale attention_scores
        attention_scores = attention_scores / tf.math.sqrt(dk)

        if attention_mask is not None:
            # extended_attention_masks for self attention encoder.
            extended_attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :]
            extended_attention_mask = tf.cast(extended_attention_mask, tf.float32)
            extended_attention_mask = (1.0 - extended_attention_mask) * -1e9
            attention_scores = attention_scores + extended_attention_mask

        # Normalize the attention scores to probabilities.
        attention_probs = tf.nn.softmax(attention_scores, axis=-1)
        attention_probs = self.dropout(attention_probs, training=training)

        context_layer = tf.matmul(attention_probs, value_layer)
        context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3])
        context_layer = tf.reshape(context_layer, (batch_size, -1, self.all_head_size))

        outputs = (context_layer, attention_probs) if self.output_attentions else (context_layer,)
        return outputs


class TFFastSpeechSelfOutput(tf.keras.layers.Layer):
    """Fastspeech output of self attention module."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(
            config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense"
        )
        self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        hidden_states, input_tensor = inputs

        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states, training=training)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states


class TFFastSpeechAttention(tf.keras.layers.Layer):
    """Fastspeech attention module."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.self_attention = TFFastSpeechSelfAttention(config, name="self")
        self.dense_output = TFFastSpeechSelfOutput(config, name="output")

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        input_tensor, attention_mask = inputs

        self_outputs = self.self_attention([input_tensor, attention_mask], training=training)
        attention_output = self.dense_output([self_outputs[0], input_tensor], training=training)
        masked_attention_output = attention_output * tf.cast(
            tf.expand_dims(attention_mask, 2), dtype=tf.float32
        )
        outputs = (masked_attention_output,) + self_outputs[1:]  # add attentions if we output them
        return outputs


class TFFastSpeechIntermediate(tf.keras.layers.Layer):
    """Intermediate representation module."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.conv1d_1 = tf.keras.layers.Conv1D(
            config.intermediate_size,
            kernel_size=config.intermediate_kernel_size,
            kernel_initializer=get_initializer(config.initializer_range),
            padding="same",
            name="conv1d_1",
        )
        self.conv1d_2 = tf.keras.layers.Conv1D(
            config.hidden_size,
            kernel_size=config.intermediate_kernel_size,
            kernel_initializer=get_initializer(config.initializer_range),
            padding="same",
            name="conv1d_2",
        )
        if isinstance(config.hidden_act, str):
            self.intermediate_act_fn = ACT2FN[config.hidden_act]
        else:
            self.intermediate_act_fn = config.hidden_act

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        hidden_states, attention_mask = inputs

        hidden_states = self.conv1d_1(hidden_states)
        hidden_states = self.intermediate_act_fn(hidden_states)
        hidden_states = self.conv1d_2(hidden_states)

        masked_hidden_states = hidden_states * tf.cast(
            tf.expand_dims(attention_mask, 2), dtype=tf.float32
        )
        return masked_hidden_states


class TFFastSpeechOutput(tf.keras.layers.Layer):
    """Output module."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        hidden_states, input_tensor = inputs

        hidden_states = self.dropout(hidden_states, training=training)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states


class TFFastSpeechLayer(tf.keras.layers.Layer):
    """Fastspeech module (FFT module on the paper)."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.attention = TFFastSpeechAttention(config, name="attention")
        self.intermediate = TFFastSpeechIntermediate(config, name="intermediate")
        self.bert_output = TFFastSpeechOutput(config, name="output")

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        hidden_states, attention_mask = inputs

        attention_outputs = self.attention([hidden_states, attention_mask], training=training)
        attention_output = attention_outputs[0]
        intermediate_output = self.intermediate([attention_output, attention_mask], training=training)
        layer_output = self.bert_output([intermediate_output, attention_output], training=training)
        masked_layer_output = layer_output * tf.cast(
            tf.expand_dims(attention_mask, 2), dtype=tf.float32
        )
        outputs = (masked_layer_output,) + attention_outputs[1:]  # add attentions if we output them
        return outputs


class TFFastSpeechEncoder(tf.keras.layers.Layer):
    """Fast Speech encoder module."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.output_attentions = config.output_attentions
        self.output_hidden_states = config.output_hidden_states
        self.layer = [
            TFFastSpeechLayer(config, name="layer_._{}".format(i))
            for i in range(config.num_hidden_layers)
        ]

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        hidden_states, attention_mask = inputs

        all_hidden_states = ()
        all_attentions = ()
        for _, layer_module in enumerate(self.layer):
            if self.output_hidden_states:
                all_hidden_states = all_hidden_states + (hidden_states,)

            layer_outputs = layer_module([hidden_states, attention_mask], training=training)
            hidden_states = layer_outputs[0]

            if self.output_attentions:
                all_attentions = all_attentions + (layer_outputs[1],)

        # Add last layer
        if self.output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        outputs = (hidden_states,)
        if self.output_hidden_states:
            outputs = outputs + (all_hidden_states,)
        if self.output_attentions:
            outputs = outputs + (all_attentions,)
        return outputs  # outputs, (hidden states), (attentions)


class TFFastSpeechDecoder(TFFastSpeechEncoder):
    """Fast Speech decoder module."""

    def __init__(self, config, **kwargs):
        super().__init__(config, **kwargs)
        self.config = config

        # create decoder positional embedding
        self.decoder_positional_embeddings = tf.keras.layers.Embedding(
            config.max_position_embeddings + 1,
            config.hidden_size,
            weights=[self._sincos_embedding()],
            name="position_embeddings",
            trainable=False,
        )

        if config.n_speakers > 1:
            self.decoder_speaker_embeddings = tf.keras.layers.Embedding(
                config.n_speakers,
                config.hidden_size,
                embeddings_initializer=get_initializer(config.initializer_range),
                name="speaker_embeddings",
            )
            self.speaker_fc = tf.keras.layers.Dense(units=config.hidden_size, name="speaker_fc")

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        hidden_states, speaker_ids, encoder_mask, decoder_pos = inputs

        # calculate new hidden states
        hidden_states = hidden_states + self.decoder_positional_embeddings(decoder_pos)

        if self.config.n_speakers > 1:
            speaker_embeddings = self.decoder_speaker_embeddings(speaker_ids)
            speaker_features = tf.math.softplus(self.speaker_fc(speaker_embeddings))
            # extended speaker embeddings
            extended_speaker_features = speaker_features[:, tf.newaxis, :]
            hidden_states += extended_speaker_features

        return super().call([hidden_states, encoder_mask], training=training)

    def _sincos_embedding(self):
        position_enc = np.array(
            [
                [pos / np.power(10000, 2.0 * (i // 2) / self.config.hidden_size)
                 for i in range(self.config.hidden_size)]
                for pos in range(self.config.max_position_embeddings + 1)
            ]
        )

        position_enc[:, 0::2] = np.sin(position_enc[:, 0::2])
        position_enc[:, 1::2] = np.cos(position_enc[:, 1::2])

        # pad embedding
        position_enc[0] = 0.0

        return position_enc


class TFTacotronPostnet(tf.keras.layers.Layer):
    """Tacotron-2 postnet."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.conv_batch_norm = []
        for i in range(config.n_conv_postnet):
            conv = tf.keras.layers.Conv1D(
                filters=config.postnet_conv_filters
                if i < config.n_conv_postnet - 1
                else config.num_mels,
                kernel_size=config.postnet_conv_kernel_sizes,
                padding="same",
                name="conv_._{}".format(i),
            )
            batch_norm = tf.keras.layers.BatchNormalization(name="batch_norm_._{}".format(i))
            self.conv_batch_norm.append((conv, batch_norm))
        self.dropout = tf.keras.layers.Dropout(rate=config.postnet_dropout_rate, name="dropout")
        self.activation = [tf.nn.tanh] * (config.n_conv_postnet - 1) + [tf.identity]

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        outputs, mask = inputs

        extended_mask = tf.cast(tf.expand_dims(mask, axis=2), tf.float32)
        for i, (conv, bn) in enumerate(self.conv_batch_norm):
            outputs = conv(outputs)
            outputs = bn(outputs)
            outputs = self.activation[i](outputs)
            outputs = self.dropout(outputs, training=training)
        return outputs * extended_mask


class TFFastSpeechDurationPredictor(tf.keras.layers.Layer):
    """FastSpeech duration predictor module."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.conv_layers = []
        for i in range(config.num_duration_conv_layers):
            self.conv_layers.append(
                tf.keras.layers.Conv1D(
                    config.duration_predictor_filters,
                    config.duration_predictor_kernel_sizes,
                    padding="same",
                    name="conv_._{}".format(i),
                )
            )
            self.conv_layers.append(
                tf.keras.layers.LayerNormalization(
                    epsilon=config.layer_norm_eps, name="LayerNorm_._{}".format(i)
                )
            )
            self.conv_layers.append(tf.keras.layers.Activation(tf.nn.relu6))
            self.conv_layers.append(tf.keras.layers.Dropout(config.duration_predictor_dropout_probs))
        self.conv_layers_sequence = tf.keras.Sequential(self.conv_layers)
        self.output_layer = tf.keras.layers.Dense(1)

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic."""
        encoder_hidden_states, attention_mask = inputs
        attention_mask = tf.cast(tf.expand_dims(attention_mask, 2), tf.float32)

        # mask encoder hidden states
        masked_encoder_hidden_states = encoder_hidden_states * attention_mask

        # pass though first layer
        outputs = self.conv_layers_sequence(masked_encoder_hidden_states)
        outputs = self.output_layer(outputs)
        masked_outputs = outputs * attention_mask
        return tf.squeeze(tf.nn.relu6(masked_outputs), -1)  # make sure positive value


class TFFastSpeechLengthRegulator(tf.keras.layers.Layer):
    """FastSpeech lengthregulator module."""

    def __init__(self, config, **kwargs):
        """Init variables."""
        super().__init__(**kwargs)
        self.config = config

    @tf.function(experimental_relax_shapes=True)
    def call(self, inputs, training=False):
        """Call logic.
        Args:
            1. encoder_hidden_states, Tensor (float32) shape [batch_size, length, hidden_size].
            2. durations_gt, Tensor (float32/int32) shape [batch_size, length].
        """
        encoder_hidden_states, durations_gt = inputs
        outputs, encoder_masks = self._length_regulator(encoder_hidden_states, durations_gt)
        return outputs, encoder_masks

    def _length_regulator(self, encoder_hidden_states, durations_gt):
        """Length regulator logic."""
        sum_durations = tf.reduce_sum(durations_gt, axis=-1)  # [batch_size]
        max_durations = tf.reduce_max(sum_durations)

        input_shape = tf.shape(encoder_hidden_states)
        batch_size = input_shape[0]
        hidden_size = input_shape[-1]

        # initialize output hidden states and encoder masking.
        outputs = tf.zeros(shape=[0, max_durations, hidden_size], dtype=tf.float32)
        encoder_masks = tf.zeros(shape=[0, max_durations], dtype=tf.int32)

        def condition(i, batch_size, outputs, encoder_masks,
                      encoder_hidden_states, durations_gt, max_durations):
            return tf.less(i, batch_size)

        def body(i, batch_size, outputs, encoder_masks,
                 encoder_hidden_states, durations_gt, max_durations):
            repeats = durations_gt[i]
            real_length = tf.reduce_sum(repeats)
            pad_size = max_durations - real_length
            masks = tf.sequence_mask([real_length], max_durations, dtype=tf.int32)
            repeat_encoder_hidden_states = tf.repeat(
                encoder_hidden_states[i], repeats=repeats, axis=0
            )
            repeat_encoder_hidden_states = tf.expand_dims(
                tf.pad(repeat_encoder_hidden_states, [[0, pad_size], [0, 0]]), 0
            )  # [1, max_durations, hidden_size]
            outputs = tf.concat([outputs, repeat_encoder_hidden_states], axis=0)
            encoder_masks = tf.concat([encoder_masks, masks], axis=0)
            return [i + 1, batch_size, outputs, encoder_masks,
                    encoder_hidden_states, durations_gt, max_durations]

        # initialize iteration i.
        i = tf.constant(0, dtype=tf.int32)
        _, _, outputs, encoder_masks, _, _, _ = tf.while_loop(
            condition,
            body,
            [i, batch_size, outputs, encoder_masks,
             encoder_hidden_states, durations_gt, max_durations],
            shape_invariants=[
                i.get_shape(),
                batch_size.get_shape(),
                tf.TensorShape([None, None, self.config.hidden_size]),
                tf.TensorShape([None, None]),
                encoder_hidden_states.get_shape(),
                durations_gt.get_shape(),
                max_durations.get_shape(),
            ],
        )

        return outputs, encoder_masks


class TFFastSpeech(tf.keras.Model):
    """TF Fastspeech module."""

    def __init__(self, config, **kwargs):
        """Init layers for fastspeech."""
        super().__init__(**kwargs)
        self.embeddings = TFFastSpeechEmbeddings(config, name="embeddings")
        self.encoder = TFFastSpeechEncoder(config, name="encoder")
        self.duration_predictor = TFFastSpeechDurationPredictor(config, name="duration_predictor")
        self.length_regulator = TFFastSpeechLengthRegulator(config, name="length_regulator")
        self.decoder = TFFastSpeechDecoder(config, name="decoder")
        self.mel_dense = tf.keras.layers.Dense(units=config.num_mels, name="mel_before")
        self.postnet = TFTacotronPostnet(config=config, name="postnet")

    def _build(self):
        """Dummy input for building model."""
        # fake inputs
        input_ids = tf.convert_to_tensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]], tf.int32)
        attention_mask = tf.convert_to_tensor(
            [[True, True, True, True, True, True, True, True, True, True]], tf.bool
        )
        speaker_ids = tf.convert_to_tensor([0], tf.int32)
        self(input_ids, attention_mask, speaker_ids)

    @tf.function(
        experimental_relax_shapes=True,
        input_signature=[
            tf.TensorSpec(shape=[None, 10], dtype=tf.int32),
            tf.TensorSpec(shape=[None, 10], dtype=tf.bool),
            tf.TensorSpec(shape=[None], dtype=tf.int32),
        ],
    )
    def call(self, input_ids, attention_mask, speaker_ids, training=False):
        """Call logic."""
        embedding_output = self.embeddings([input_ids, speaker_ids], training=training)
        encoder_output = self.encoder([embedding_output, attention_mask], training=training)
        last_encoder_hidden_states = encoder_output[0]

        # duration predictor: here use last_encoder_hidden_states; u can use more
        # hidden_states layers rather than just use last_hidden_states of
```

(The model code above is cut off at this point in my logs.)
encoder for duration predictor duration output self duration predictor last encoder hide state attention mask batch size length speed ratio tf convert to tensor np array 1 0 dtype tf float32 duration gts tf cast tf math round duration output tf int32 length regulator output encoder mask self length regulator last encoder hide state duration gts training training create decoder positional embed decoder pos tf range 1 tf shape length regulator output 1 1 dtype tf int32 mask decoder pos tf expand dim decoder pos 0 encoder mask decoder output self decoder length regulator output speaker ids encoder mask mask decoder pos training training last decoder hide state decoder output 0 here u can use sum or concat more than 1 hide state layer from decoder mel before self mel dense last decoder hide states mel after self postnet mel before encoder mask training training mel before output mel before mel after duration output model10 keras model model input input ids attention mask speaker ids output output return output
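The _length_regulator while-loop above just repeats each encoder frame by its duration and pads every utterance in the batch to the longest expanded length. A minimal NumPy sketch of the same idea (toy shapes and names, not the FastSpeech API):

```python
import numpy as np

def length_regulate(encoder_hidden, durations):
    """Repeat each frame by its duration, then pad to the batch max length."""
    expanded = [np.repeat(h, d, axis=0) for h, d in zip(encoder_hidden, durations)]
    max_len = max(e.shape[0] for e in expanded)
    outputs = np.stack(
        [np.pad(e, [(0, max_len - e.shape[0]), (0, 0)]) for e in expanded]
    )
    masks = np.stack(
        [np.arange(max_len) < e.shape[0] for e in expanded]
    ).astype(np.int32)
    return outputs, masks

# batch=1, length=3, hidden=2; durations say: frame0 twice, frame1 once, frame2 thrice
hidden = np.arange(6, dtype=np.float32).reshape(1, 3, 2)
out, mask = length_regulate(hidden, np.array([[2, 1, 3]]))
print(out.shape)  # (1, 6, 2)
```

The tf.while_loop in the class does the same per-example repeat/pad, just in graph mode with explicit shape invariants.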
tensorflowtensorflow
undefined references when using TensorFlow Lite on the Arduino Nano 33 BLE Sense with PlatformIO
Bug
TensorFlow Micro system information: Ubuntu 20.04 on x86_64; TensorFlow built from source (2.2.0); target: Arduino Nano 33 BLE Sense.

Hi, I'm trying to get micro_speech running on an Arduino Nano 33 BLE Sense, using a model I've trained myself on Google Colab. As Google Colab currently uses TensorFlow 2.2.0, this model is also 2.2.0. Using the Arduino IDE 1.8.12, the sample micro_speech does work and I can get inference to occur. However, that sample is a long way from TensorFlow 2.2.0 — in fact it's many versions behind — and as a result I'm unable to get my model to run on this version. In addition, this project is a stepping stone to another project which will swap out the model for other models and introduce other libraries, which introduces further development and debugging complexity. For this reason I've chosen to adopt the PlatformIO + VSCode development environment. With this combination I can debug the code step by step, which is a feature not supported by the Microsoft VSCode Arduino extension or the Arduino IDE itself, and I can also use unit-testing technology that isn't available elsewhere.

Unfortunately, my code is running into "undefined reference" errors on build, in what is a fairly trivial port of the micro_speech application to PlatformIO. I get these same errors building the code using the local TensorFlow Arduino library zip, which I built myself from the TensorFlow 2.2.0 master branch a couple of days ago. May I please get some advice? Any idea why undefined references are appearing in local builds of micro_speech, or my port of it? It is also reported in issue 27629. Given the explosion of options, which development environment is the recommended approach for products which extend beyond hello_world, targeting microcontrollers?

- Arduino (online / Arduino IDE) with local TensorFlow Arduino library zip
- TensorFlow Lite, bazel from the command line
- PlatformIO IDE (VSCode, Atom)
- Microsoft Arduino extension over VSCode
- Entirely manual arm-none-eabi-g++

Executing task: platformio run --verbose

Processing nano33ble (platform: nordicnrf52; board: nano33ble;
framework: arduino; debug_tool: jlink; upload_protocol: jlink)
CONFIGURATION:
PLATFORM: Nordic nRF52 4.2.1 > Arduino Nano 33 BLE
HARDWARE: NRF52840 64MHz, 256KB RAM, 960KB Flash
DEBUG: Current (jlink) External (cmsis-dap, jlink)
PACKAGES:
 - framework-arduino-nrf52-mbedos 1.1.3
 - tool-sreccat 1.164.0
 - toolchain-gccarmnoneeabi 1.80201.181220 (8.2.1)
LDF: Library Dependency Finder
LDF Modes: Finder ~ chain, Compatibility ~ soft
Framework incompatible library /home/ian/.platformio/packages/framework-arduino-nrf52-mbedos/libraries/MbedMemoryStatus
Found 7 compatible libraries (more details about library compatibility modes: ldf_compat_mode)
Scanning dependencies...
Dependency Graph
|-- <PDM> 1.0 (/home/ian/.platformio/packages/framework-arduino-nrf52-mbedos/libraries/PDM)
|-- <micro_features> (/home/ian/Documents/PlatformIO/Projects/nano33take01/lib/micro_features)
|-- <tensorflow> (/home/ian/Documents/PlatformIO/Projects/nano33take01/lib/tensorflow)
Building in release mode

arm-none-eabi-g++ -o .pio/build/nano33ble/firmware.elf -T linker_script.ld
  -DMBED_APP_SIZE=0xf0000 -DMBED_APP_START=0x10000 -DMBED_BOOT_STACK_SIZE=2048
  -Wl,--gc-sections -Wl,--wrap,_calloc_r -Wl,--wrap,_free_r -Wl,--wrap,_malloc_r
  -Wl,--wrap,_memalign_r -Wl,--wrap,_realloc_r -Wl,--wrap,atexit -Wl,--wrap,exit
  -Wl,--wrap,main -Wl,-n -mcpu=cortex-m4 -mfloat-abi=softfp -mfpu=fpv4-sp-d16 -mthumb
  --specs=nano.specs --specs=nosys.specs -Wl,--as-needed
  .pio/build/nano33ble/src/audio_provider.cpp.o
  .pio/build/nano33ble/src/command_responder.cpp.o
  .pio/build/nano33ble/src/feature_provider.cpp.o
  .pio/build/nano33ble/src/main.cpp.o
  .pio/build/nano33ble/src/recognize_commands.cpp.o
  -L.pio/build/nano33ble
  -L/home/ian/.platformio/packages/framework-arduino-nrf52-mbedos/variants/ARDUINO_NANO33BLE
  -L/home/ian/.platformio/packages/framework-arduino-nrf52-mbedos/variants/ARDUINO_NANO33BLE/libs
  -Wl,--start-group -Wl,--whole-archive
  .pio/build/nano33ble/libe06/libPDM.a .pio/build/nano33ble/lib5e8/libtensorflow.a
  .pio/build/nano33ble/lib082/libmicro_features.a
  .pio/build/nano33ble/libFrameworkArduinoVariant.a .pio/build/nano33ble/libFrameworkArduino.a
  -lmbed -lcc_310_core -lcc_310_ext -lcc_310_trng -Wl,--no-whole-archive
  -lstdc++ -lsupc++ -lm -lc -lgcc -lnosys -Wl,--end-group

/home/ian/.platformio/packages/toolchain-gccarmnoneeabi/.../arm-none-eabi/bin/ld:
.pio/build/nano33ble/src/command_responder.cpp.o: in function `RespondToCommand(tflite::ErrorReporter*, long, char const*, unsigned char, bool)':
/home/ian/Documents/PlatformIO/Projects/nano33take01/src/command_responder.cpp:48: undefined reference to `tflite::ErrorReporter::Report(char const*, ...)'
(the same `tflite::ErrorReporter::Report(char const*, ...)` reference is also reported from
feature_provider.cpp:102, micro_mutable_op_resolver.h:62/68/78/98/108, main.cpp:59/114/172,
recognize_commands.cpp:46/60/74, recognize_commands.h:66/79 and micro_features_generator.cpp:52)
micro_mutable_op_resolver.h:62: undefined reference to `tflite::ParseOpData(tflite::Operator const*, tflite::BuiltinOperator, tflite::ErrorReporter*, tflite::BuiltinDataAllocator*, void**)'
main.cpp:75: undefined reference to `tflite::ops::micro::Register_DEPTHWISE_CONV_2D()'
main.cpp:80: undefined reference to `tflite::ops::micro::Register_FULLY_CONNECTED()'
main.cpp:85: undefined reference to `tflite::ops::micro::Register_SOFTMAX()'
main.cpp:90: undefined reference to `tflite::ops::micro::Register_RESHAPE()'
main.cpp:98: undefined reference to `tflite::MicroInterpreter::MicroInterpreter(tflite::Model const*, tflite::MicroOpResolver const&, unsigned char*, unsigned int, tflite::ErrorReporter*)'
main.cpp:102: undefined reference to `tflite::MicroInterpreter::AllocateTensors()'
main.cpp:109: undefined reference to `tflite::MicroInterpreter::input(unsigned int)'
main.cpp:131: undefined reference to `tflite::MicroInterpreter::~MicroInterpreter()'
main.cpp:157: undefined reference to `tflite::MicroInterpreter::Invoke()'
main.cpp:164: undefined reference to `tflite::MicroInterpreter::output(unsigned int)'
main.cpp.o:(.data._ZZ5setupE20micro_error_reporter+0x0): undefined reference to `vtable for tflite::MicroErrorReporter'
.pio/build/nano33ble/lib082/libmicro_features.a(micro_features_generator.cpp.o): in function `InitializeMicroFeatures(tflite::ErrorReporter*)':
micro_features_generator.cpp:50: undefined reference to `FrontendPopulateState'
micro_features_generator.cpp:79: undefined reference to `FrontendProcessSamples'
collect2: error: ld returned 1 exit status
*** [.pio/build/nano33ble/firmware.elf] Error 1
========================= [FAILED] Took 1.25 seconds =========================
The terminal process terminated with exit code: 1
Terminal will be reused by tasks, press any key to close it.
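Undefined references to symbols that should live in libtensorflow.a usually mean the defining translation units (micro_error_reporter.cc, micro_interpreter.cc, the micro op kernels, the micro-frontend) were never compiled into the archive; running arm-none-eabi-nm --defined-only on the .a and grepping for the missing symbols will confirm. One hedged configuration sketch worth trying — lib_ldf_mode is a real PlatformIO setting, but whether it fixes this specific build is an assumption, not a verified fix:

```ini
; platformio.ini -- sketch only, based on the environment shown in the log
[env:nano33ble]
platform = nordicnrf52
board = nano33ble
framework = arduino
; The log shows LDF running in "chain" mode. "deep+" makes the Library
; Dependency Finder evaluate preprocessor conditionals and scan all library
; sources, so translation units that are only referenced through headers
; are less likely to be skipped when building libtensorflow.a.
lib_ldf_mode = deep+
```

If nm shows the symbols are genuinely absent from the archive, the library's source filter (or the generated Arduino zip) is the place to look next.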
tensorflowtensorflow
predict() results are different from training/validation using the same data
Bug
Versions: TensorFlow 2.0.0, Keras 2.3.1, Ubuntu 18.

I use predict() to check the model performance on the validation data, but it is totally different from the training log file. predict() and validation during training use the same dataset.

The top layers of my model:

    model = Sequential()
    model.add(Embedding(input_dim=num_words,
                        output_dim=embedding_dim,
                        weights=[emb],  # remove this if you want to train your own weights
                        input_length=maxlen,
                        trainable=train_embedding,
                        mask_zero=True))
    model.add(Dropout(0.2))  # heuristic
    # model outputs 1-of-num_classes prediction
    model.add(BatchNormalization())
    model.add(Dense(dense_layer_size, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    self.model = model
    self.model.compile(optimizer='adam',
                       loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                       metrics=[tf.keras.metrics.AUC(curve='PR', name='pr'),
                                tf.keras.metrics.AUC(curve='ROC', name='roc'),
                                'accuracy',
                                tf.keras.metrics.Precision(),
                                tf.keras.metrics.Recall(),
                                tf.keras.metrics.BinaryAccuracy()])

Training code:

    self.model.fit_generator(train_generator,
                             epochs=epochs,
                             verbose=verbose,
                             validation_data=validation_generator,
                             shuffle=True,
                             callbacks=[checkpoint],
                             class_weight=class_weight)

Prediction and computing AUC-PR:

    pad_x = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)
    pro = model.predict(pad_x, batch_size=batch_size)
    pro = np.squeeze(pro)
    y_test = np.squeeze(y_test)
    from sklearn import metrics
    precision, recall, thresholds = metrics.precision_recall_curve(y_test, pro)
    pr = metrics.auc(recall, precision)
    print('pr: {}'.format(pr))

Using predict(), the computed PR AUC is 0.2803784709997255, but the validation PR during training is ~0.4. I tried some solutions I found, such as model.save(), or:

    from keras.backend import manual_variable_initialization
    manual_variable_initialization(True)

1. When I save the whole model weights, does the save function also save the embedding layer?
2. I am doing a binary classification task; am I correctly using the metrics in the model.compile() function?
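One thing that stands out in the snippet above: the last layer already applies a sigmoid, yet the loss is built with from_logits=True, so during training the loss treats the probabilities as logits and effectively squashes them twice. This is an observation about the posted code, not a confirmed diagnosis, but the effect is easy to see in plain Python:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

logit = 2.0
p = sigmoid(logit)    # what a sigmoid output layer produces, ~0.881
p_twice = sigmoid(p)  # what from_logits=True implicitly does to it, ~0.707

# The doubly-squashed value is pulled toward 0.5, so losses/metrics computed
# with the wrong from_logits setting disagree with numbers computed directly
# on predict() probabilities (e.g. sklearn's precision_recall_curve).
assert abs(p - 0.881) < 0.001
assert abs(p_twice - 0.707) < 0.001
```

With a sigmoid output layer, the usual pairing is BinaryCrossentropy(from_logits=False); with from_logits=True, the output layer should have no activation.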
tensorflowtensorflow
post-training full integer quantization produces a model with float input/output
Bug
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): run from a Colab notebook
- TensorFlow installed from (source or binary): run from a Colab notebook
- TensorFlow version (or github SHA if from source): v2.2.0-0-g2b96f3662b
- Command used to run the converter, or code if you're using the Python API: Colab notebook

Failure details: the conversion is successful; however, the quantized model with float input/output (/tmp/mnist_tflite_models/mnist_model_quant.tflite) and the one with supposedly int8 input/output (/tmp/mnist_tflite_models/mnist_model_quant_io.tflite) are identical. I've verified this by running diff mnist_model_quant.tflite mnist_model_quant_io.tflite, which results in an empty output — that is, the files are identical.

Any other info / logs: here's what mnist_model_quant_io.tflite looks like when I open it with Netron. Rather than having a quantize node right after the float input and a dequantize node right before the float output, I would like to have int8 input/output directly. How do I do that?
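For context on what true int8 I/O means for the caller: in newer converters (around TF 2.3 / tf-nightly at the time of this report), setting converter.inference_input_type = tf.int8 and converter.inference_output_type = tf.int8 alongside full-integer quantization is the documented way to drop the float edges; whether the 2.2.0 Colab flow supports it is exactly what this issue is about. With int8 I/O, the application must do the affine (de)quantization itself. A minimal sketch of that arithmetic (the scale/zero-point values below are illustrative, not from the reported model):

```python
# real_value = scale * (int8_value - zero_point)
def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

# Typical parameters for an input normalized to [0, 1]:
scale, zero_point = 1.0 / 255.0, -128

q = quantize(0.5, scale, zero_point)
print(q)  # 0 -- mid-gray maps to the middle of the int8 range
print(round(dequantize(q, scale, zero_point), 3))  # ~0.502, the round-trip value
```

The quantize/dequantize nodes visible in Netron perform exactly this mapping inside the graph; int8 I/O just moves it out to the caller.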
tensorflowtensorflow
little bug
Bug
I think it's a bug. (image attachment)
tensorflowtensorflow
using own dataset in micro_speech project doesn't work
Bug
TensorFlow Micro system information:
- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): currently working with Google Colab
- TensorFlow installed from (source or binary) / TensorFlow version (commit SHA if source): TensorFlow 1.x
- Target platform (e.g., Arm Mbed OS, Arduino Nano 33, etc.): Arduino Nano 33 BLE Sense

Describe the problem: Hello everyone, I just started to get into the world of machine learning and deep learning, and I still need to learn a lot, but I hope there might be someone who faced similar problems to the one I'm trying to solve at the moment. A few weeks ago I started working with the micro_speech example for microcontrollers. My goal was, and still is, to get the project working with my own wake word. To save time I decided to expand the Speech Commands dataset with a folder containing my own data. My wanted wake word is a German one — I'm not sure if this is important information, but I'm trying to give as much information as possible. At first I only used samples of my own voice; to get more samples in a short time, I combined the original recordings with slightly manipulated ones. After that I uploaded my data to Google Drive so that I could import it into Colab. I changed the wanted-words section and the data-dir section according to my wanted words. Training seemed to work well, and I changed the code in the micro_speech project in the Arduino IDE and uploaded it to the Arduino. I expected the green LED to flash when I say the wake word and the blue one to do so if I say some other word from the dataset, as intended in the original project, but no matter what I say, only the blue LED flashes.

In a second step I changed my dataset, because I thought it might be a problem that my dataset only consisted of recordings of my own voice. So I collected new data with recordings from different people and tried everything that I did before with my new data. This time the Arduino behaved a bit differently: the green LED flashes randomly when nothing is said at all, and when I say my wake word the blue LED still flashes as it did before.

As I said at the beginning, I'm new to this topic, so I can only imagine some possible causes for my issue. Please feel free to correct me if some things don't make sense:
- I didn't collect enough samples of my wake word (at the moment there are 820 samples in the folder).
- There might be a problem with wanting to detect only one word instead of two or more.
- I forgot to change something in the Arduino project code, or did something wrong.
- It could be a problem that my dataset is a mix of English and German words.

Please provide the exact sequence of commands/steps when you ran into the problem: after training, I copied the code and pasted it into micro_features/tiny_conv_micro_features_model_data.cpp, and I also corrected the data length parameter at the end of the source file. Then I changed micro_features/micro_model_settings.cpp and micro_features/micro_model_settings.h according to my wake word and the number of labels that I've got. Finally, I went to arduino_command_responder.cpp and changed the first if-condition to match found_command[0] == 'h', as the wake word begins with an 'h'. I erased the second if-condition because I don't have a second wake word. Then I uploaded the project to the Arduino. If there is any information missing, please tell me and I will provide it. Kind regards, eeesi
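When mixing your own recordings into a Speech Commands style dataset, a quick sanity check is that every label folder has a comparable number of clips, since heavy imbalance between the wake word and the background labels can produce exactly this "one LED always wins" behavior. A small helper for that (the toy folder names below are stand-ins for a real dataset directory):

```python
import os
import tempfile

def count_clips(data_dir):
    """Count .wav files per label folder in a Speech Commands style dataset."""
    counts = {}
    for label in sorted(os.listdir(data_dir)):
        folder = os.path.join(data_dir, label)
        if os.path.isdir(folder):
            counts[label] = sum(f.endswith(".wav") for f in os.listdir(folder))
    return counts

# Toy directory standing in for a real dataset:
root = tempfile.mkdtemp()
for label, n in [("hallo", 3), ("yes", 2)]:
    os.makedirs(os.path.join(root, label))
    for i in range(n):
        open(os.path.join(root, label, "{}.wav".format(i)), "w").close()

print(count_clips(root))  # {'hallo': 3, 'yes': 2}
```

For reference, the stock Speech Commands words have a few thousand clips each, so 820 samples for one label is on the small side relative to the rest of the dataset.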
tensorflowtensorflow
passing labels=None to image_dataset_from_directory doesn't work
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- TensorFlow installed from (source or binary): tf-nightly
- TensorFlow version (use command below): v1.12.1-34068-g9a70ab8813 2.3.0-dev20200614
- Python version: 3.7.7

Describe the current behavior: gives an error:

    ValueError: `labels` argument should be a list/tuple of integer labels, of the same size as the number of image files in the target directory. If you wish to infer the labels from the subdirectory names in the target directory, pass `labels="inferred"`. If you wish to get a dataset that only contains images (no labels), pass `labels=None`.

Describe the expected behavior: should return a dataset that only contains images, like the error message says.

Standalone code to reproduce the issue:

    import tensorflow as tf
    train_images = tf.keras.preprocessing.image_dataset_from_directory('images', labels=None)

Other info / logs:

    Traceback (most recent call last):
      File "train.py", line 5, in <module>
        labels=None
      File "/opt/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/preprocessing/image_dataset.py", line 145, in image_dataset_from_directory
        '`labels` argument should be a list/tuple of integer labels, of'
    ValueError: `labels` argument should be a list/tuple of integer labels, of the same size as the number of image files in the target directory. If you wish to infer the labels from the subdirectory names in the target directory, pass `labels="inferred"`. If you wish to get a dataset that only contains images (no labels), pass `labels=None`.
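The traceback suggests the argument check never special-cases None: anything other than the string 'inferred' falls into the explicit-list branch and fails length validation. A simplified, hypothetical sketch of that logic (not the actual Keras source) reproduces the behavior:

```python
def check_labels(labels, num_files):
    # Hypothetical simplification of the check implied by the traceback:
    # None is not 'inferred', so it gets validated as if it were a list of
    # per-file labels -- and a non-list of the wrong "length" fails.
    if labels != 'inferred':
        if not isinstance(labels, (list, tuple)) or len(labels) != num_files:
            raise ValueError(
                '`labels` argument should be a list/tuple of integer labels ...')
    return labels

try:
    check_labels(None, 10)
except ValueError:
    print('raises, just like labels=None in the report')
```

In other words, the docstring/error message promises a labels=None path that the validation code doesn't yet have — which is what makes this a bug rather than a usage error.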
tensorflowtensorflow
TF 2.2.0: error with model.fit(): `class_weight` is only supported for Models with a single output
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): using a custom loss function
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0
- Python version: 3.6.8
- CUDA/cuDNN version: 10.1 / 7.6
- GPU model and memory: NVIDIA T4 Tensor Core GPU (AWS g4dn.xlarge), 16 GB

Describe the current behavior: error with the model.fit() function when using class_weight as a dictionary mapping class indices (integers) to a weight (float) value, for example {1: 0.6, 2: 0.4}.

Traceback:

    Traceback (most recent call last):
      File "train.py", line 117, in <module>
        main()
      File "train.py", line 112, in main
        use_multiprocessing=True)
      File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
        return method(self, *args, **kwargs)
      File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 815, in fit
        model=self)
      File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1117, in __init__
        dataset = dataset.map(_make_class_weight_map_fn(class_weight))
      ... (intermediate frames through tensorflow/python/data/ops/dataset_ops.py,
      tensorflow/python/eager/function.py, func_graph.py and the autograph
      conversion machinery elided) ...
      File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1246, in _class_weights_map_fn
        "`class_weight` is only supported for Models with a single output.")
    ValueError: `class_weight` is only supported for Models with a single output.

My model inputs and outputs: [input], [output].

Describe the expected behavior: this was working perfectly with TF 2.1.0 and other previous versions. After upgrading to 2.2.0 I'm getting this error, and no changes were made to the model architecture or any other part of the model.fit() function.
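A workaround often suggested for this 2.2.0 restriction is to pre-compute per-sample weights from the class-weight dictionary and pass them to fit() as sample_weight instead, which does not hit the single-output check. A sketch with assumed integer labels (not the original poster's code):

```python
import numpy as np

def class_weight_to_sample_weight(labels, class_weight):
    """Map each integer label to its class weight."""
    return np.asarray([class_weight[int(y)] for y in labels], dtype=np.float32)

y_train = np.array([1, 2, 2, 1, 1])
sample_weight = class_weight_to_sample_weight(y_train, {1: 0.6, 2: 0.4})
print(sample_weight)  # [0.6 0.4 0.4 0.6 0.6]

# model.fit(x_train, y_train, sample_weight=sample_weight, ...)
# (for multi-output models, Keras also accepts per-output sample weights)
```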
tensorflowtensorflow
image normalization preprocessing in the TensorFlow Lite iOS object detection example
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: iPhone X
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0
- Python version: 3.5
- Bazel version (if compiling from source): no
- GCC/compiler version (if compiling from source): no
- CUDA/cuDNN version: not related, running on CPU
- GPU model and memory: not related, running on CPU

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior
Hi, can anyone help confirm which one is correct for the object detection model used in the examples (COCO MobileNet SSD v1)?

From the TensorFlow Lite iOS object detection example, it uses `x / 255.0` to normalize the image in preprocessing (L319). The model is not quantized, so pixels are converted to float:

```swift
let bytes = Array<UInt8>(unsafeData: byteData) ?? []
var floats = [Float]()
for i in 0 ..< bytes.count {
    floats.append(Float(bytes[i]) / 255.0)
}
```

The Android example instead normalizes with `(x - IMAGE_MEAN) / IMAGE_STD`:

```java
imgData.putFloat((((pixelValue >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
```

While from the SSD MobileNet v1 feature extractor, it uses `(2.0 / 255.0) * resized_inputs - 1.0`, which is different from the above two (L78):

```python
def preprocess(self, resized_inputs):
  """SSD preprocessing.

  Maps pixel values to the range [-1, 1]. The preprocessing assumes an input
  value range of [0, 255].

  Args:
    resized_inputs: a [batch, height, width, channels] float tensor
      representing a batch of images.

  Returns:
    preprocessed_inputs: a [batch, height, width, channels] float tensor
      representing a batch of images.
  """
  return (2.0 / 255.0) * resized_inputs - 1.0
```

Describe the expected behavior
We get different accuracy when using the same model (SSD MobileNet v1) on PC vs iOS, with the TensorFlow Lite example code and the pre-trained SSD MobileNet v1 model. How much accuracy loss on iOS is expected? I'd appreciate any help. Thanks.
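For reference, the three normalization schemes above can be compared numerically. The sketch below is a minimal NumPy comparison, assuming the commonly used `IMAGE_MEAN = IMAGE_STD = 127.5` for the mean/std variant (an assumption; the example's actual constants may differ). With those constants, the mean/std formula and the SSD `(2/255)*x - 1` formula coincide exactly, while `x/255` lands in a different range:

```python
import numpy as np

# Raw 8-bit pixel values.
pixels = np.array([0.0, 64.0, 127.5, 255.0], dtype=np.float32)

# iOS example style: x / 255.0 -> values in [0, 1]
ios = pixels / 255.0

# Mean/std style, assuming IMAGE_MEAN = IMAGE_STD = 127.5 -> values in [-1, 1]
IMAGE_MEAN, IMAGE_STD = 127.5, 127.5
mean_std = (pixels - IMAGE_MEAN) / IMAGE_STD

# SSD feature extractor style: (2/255) * x - 1 -> values in [-1, 1]
ssd = (2.0 / 255.0) * pixels - 1.0

# (x - 127.5) / 127.5 == 2x/255 - 1, so mean_std and ssd are identical;
# ios differs from both, which would explain an accuracy gap.
print(ios)
print(mean_std)
print(ssd)
```

So if the model was trained with the SSD preprocessing, a `x/255` input pipeline feeds it values shifted by a constant offset, which is a plausible source of the accuracy difference.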
tensorflow/tensorflow
'tf.Dilation2D' op is neither a custom op nor a flex op
Bug
System information
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Google Colab, Python 3.6
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or GitHub SHA if from source):

Provide the text output from tflite_convert:

```
TensorFlow: Saver not created because there are no variables in the graph to restore.
ConverterError                 Traceback (most recent call last)
<ipython-input> in <module>()
      6     tf.lite.OpsSet.SELECT_TF_OPS]
      7
----> 8 tflite_model = converter.convert()
      9
     10 # Save the TF Lite model.

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    225       stdout = _try_convert_to_unicode(stdout)
    226       stderr = _try_convert_to_unicode(stderr)
--> 227       raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
    228     finally:
    229       # Must manually cleanup files.

ConverterError: See console for info.
2020-06-12 18:51:22.076900: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:89] Ignored output_format.
2020-06-12 18:51:22.076954: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:95] Ignored drop_control_dependency.
2020-06-12 18:51:22.626844: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 18874368 exceeds 10% of system memory.
2020-06-12 18:51:22.954133: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F
2020-06-12 18:51:23.172150: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2000134999 Hz
2020-06-12 18:51:23.172487: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561ef1c2b2c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-12 18:51:23.172520: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-06-12 18:51:23.180594: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-06-12 18:51:23.271482: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-06-12 18:51:23.271997: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561ef1c2b100 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-12 18:51:23.272026: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla P4, Compute Capability 6.1
2020-06-12 18:51:23.272188: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-06-12 18:51:23.272524: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:04.0 name: Tesla P4 computeCapability: 6.1 coreClock: 1.1135GHz coreCount: 20 deviceMemorySize: 7.43GiB deviceMemoryBandwidth: 178.99GiB/s
2020-06-12 18:51:23.272923: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-06-12 18:51:23.274901: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-06-12 18:51:23.276599: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-06-12 18:51:23.277233: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-06-12 18:51:23.279053: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-06-12 18:51:23.279837: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-06-12 18:51:23.283468: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-06-12 18:51:23.283603: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-06-12 18:51:23.284015: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-06-12 18:51:23.285247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-06-12 18:51:23.288818: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-06-12 18:51:23.293809: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-12 18:51:23.293841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0
2020-06-12 18:51:23.293870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N
2020-06-12 18:51:23.297246: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-06-12 18:51:23.297674: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-06-12 18:51:23.298025: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-06-12 18:51:23.298071: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5523 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:00:04.0, compute capability: 6.1)
2020-06-12 18:51:30.236028: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 18874368 exceeds 10% of system memory.
error: loc("dilation2d"): 'tf.Dilation2D' op is neither a custom op nor a flex op
error: loc("dilation2d_1"): 'tf.Dilation2D' op is neither a custom op nor a flex op
[the same error is repeated for dilation2d_2 through dilation2d_32]
error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag): Dilation2D (33 occurrences).
Traceback (most recent call last):
  File "/usr/local/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: <unknown>:0: error: loc("dilation2d"): 'tf.Dilation2D' op is neither a custom op nor a flex op
[the same error is repeated for dilation2d_1 through dilation2d_32]
<unknown>:0: error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag): Dilation2D (33 occurrences).
```

Standalone code to reproduce the issue
First, download this saved model. Second, run:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.target_spec.supported_types = [tf.float16]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
```

I am posting this to request the Dilation2D op for TFLite. Thanks.
tensorflow/tensorflow
TFLite model does not work with PNG images
Bug
Hello, I have trained a semantic segmentation model using VGG16 and U-Net. The model produces good results, irrespective of image type. But when I converted the model to TFLite, I noticed that it does not work with PNG files. Here is an example: [image] [image]. When tried with a JPG, however, the results seem fine compared to the original model. An example: [image] [image]. I hope someone has the answer, because I apply the same preprocessing to both images, and the original model, as I said, works fine with both types.
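One possible explanation worth checking (an assumption, not confirmed in the report): PNG files often decode to 4 channels (RGBA), while JPEGs decode to 3 (RGB), so a model trained on 3-channel input silently receives a misaligned tensor for PNGs. A minimal NumPy sketch of the shape mismatch and the fix of dropping the alpha plane:

```python
import numpy as np

# Simulated decoded images: JPEGs decode to 3 channels, PNGs often to 4 (RGBA).
jpg = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
png = np.dstack([jpg, np.full((256, 256), 255, dtype=np.uint8)])  # add opaque alpha plane

print(jpg.shape)  # (256, 256, 3)
print(png.shape)  # (256, 256, 4)

# Dropping the alpha channel restores the layout the model was trained on.
rgb = png[..., :3]
assert rgb.shape == jpg.shape
```

If this is the cause, converting PNGs to RGB (e.g. slicing off the alpha channel before normalization) should make the TFLite results match the JPG results.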
tensorflow/tensorflow
AbstractRNNCell documentation
Bug
The documentation for `states` in AbstractRNNCell could be clearer. The documentation for AbstractRNNCell does not make it clear that `states` is a tuple. This was a gotcha for me: when I defined a custom RNN cell that has a single state, it kept adding an axis to that state whenever I performed an operation on it. For example, this code within the `call` method of a class implementing AbstractRNNCell:

```python
logging.info(f"states: {states}")
logging.info(f"state_update: {state_update}")
new_states = tf.math.add(states, state_update)
logging.info(f"new_states: {new_states}")
```

leads to the confusing output:

```
06/12 12:28 root INFO states: (...)
06/12 12:28 root INFO state_update: Tensor("add_1:0", shape=(32, 4), dtype=float32)
06/12 12:28 root INFO new_states: Tensor("add_2:0", shape=(1, 32, 4), dtype=float32)
```

Upon implementing the state as a tuple of length one, the issue was solved. I think this could be made clearer in the documentation. Many thanks.
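The extra-axis behavior described above can be reproduced outside TensorFlow: adding a tuple that wraps a single array to a bare array broadcasts the wrapper as a leading dimension. A minimal NumPy sketch (the shapes mirror the logs above):

```python
import numpy as np

state = np.zeros((32, 4), dtype=np.float32)
update = np.ones((32, 4), dtype=np.float32)

# `states` arrives as a tuple of one tensor, not a bare tensor.
states = (state,)

# Adding to the tuple directly treats it as a (1, 32, 4) stack of states:
surprising = np.add(states, update)
print(surprising.shape)  # (1, 32, 4) -- the unexpected extra axis

# Unpacking the single state first keeps the shape stable:
expected = np.add(states[0], update)
print(expected.shape)    # (32, 4)
```

So the fix reported above (treating the state as a tuple of length one and indexing into it) avoids the implicit broadcast.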
tensorflow/tensorflow
GPU-accelerated LSTMs/GRUs crash randomly with InternalError: [Derived] Failed to call ThenRnnBackward with model config
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10 Pro, build 19041
- TensorFlow installed from (source or binary): `pip install tensorflow`
- TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.7.4
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
- GPU model and memory: NVIDIA Titan RTX (24 GB), RTX 2080 Ti (11 GB)
- NVIDIA driver version: 450.99

Describe the current behavior
Both the Jupyter notebook and the extracted Python script of the TensorFlow text classification tutorial crash randomly when training locally on my GPU, with the following traceback:

```
tensorflow.python.framework.errors_impl.InternalError: [Derived] Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0, [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 64, 64, 1, 2537, 64, 64]
	 [[{{node CudnnRNN}}]]
	 [[sequential/bidirectional/forward_lstm/StatefulPartitionedCall]]
	 [[gradient_tape/sequential/embedding/embedding_lookup/Reshape/_38]] [Op:__inference_train_function_6128]

Function call stack:
train_function -> train_function -> train_function
```

I found two similar issues, #37942 and #35950. The method suggested in #37942 does not work, and it still crashes.

Describe the expected behavior
The example tutorial notebook should run smoothly from top to bottom without random crashes.

Standalone code to reproduce the issue
GitHub gist here. Code:

```python
import tensorflow_datasets as tfds
import tensorflow as tf
import matplotlib.pyplot as plt


def plot_graphs(history, metric):
    plt.plot(history.history[metric])
    plt.plot(history.history['val_' + metric])
    plt.xlabel("Epochs")
    plt.ylabel(metric)
    plt.legend([metric, 'val_' + metric])
    plt.show()


dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
                          as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
encoder = info.features['text'].encoder
print('Vocabulary size: {}'.format(encoder.vocab_size))

sample_string = 'Hello TensorFlow.'
encoded_string = encoder.encode(sample_string)
print('Encoded string is {}'.format(encoded_string))
original_string = encoder.decode(encoded_string)
print('The original string: "{}"'.format(original_string))
assert original_string == sample_string
for index in encoded_string:
    print('{} ----> {}'.format(index, encoder.decode([index])))

BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_dataset.padded_batch(BATCH_SIZE)

for example_batch, label_batch in train_dataset.take(20):
    print("Batch shape:", example_batch.shape)
    print("Label shape:", label_batch.shape)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(encoder.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.Adam(1e-4),
              metrics=['accuracy'])
history = model.fit(train_dataset, epochs=10,
                    validation_data=test_dataset,
                    validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
```

Other info / logs

```
Epoch 1/10
2020-06-11 23:48:47.036226: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-06-11 23:48:47.417459: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
 39/391 - ETA: 33s - loss: 0.6931 - accuracy: 0.5028
2020-06-11 23:48:52.108366: E tensorflow/stream_executor/dnn.cc:613] CUDNN_STATUS_INTERNAL_ERROR
in tensorflow/stream_executor/cuda/cuda_dnn.cc(1986): 'cudnnRNNBackwardData( cudnn.handle(), rnn_desc.handle(), model_dims.max_seq_length, output_desc.handles(), output_data.opaque(), output_desc.handles(), output_backprop_data.opaque(), output_h_desc.handle(), output_h_backprop_data.opaque(), output_c_desc.handle(), output_c_backprop_data.opaque(), rnn_desc.params_handle(), params.opaque(), input_h_desc.handle(), input_h_data.opaque(), input_c_desc.handle(), input_c_data.opaque(), input_desc.handles(), input_backprop_data.opaque(), input_h_desc.handle(), input_h_backprop_data.opaque(), input_c_desc.handle(), input_c_backprop_data.opaque(), workspace.opaque(), workspace.size(), reserve_space_data.opaque(), reserve_space_data.size())'
2020-06-11 23:48:52.109818: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at cudnn_rnn_ops.cc:1922 : Internal: Failed to call ThenRnnBackward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0, [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 64, 64, 1, 1615, 64, 64]
Traceback (most recent call last):
  File "lesson1.py", line 59, in <module>
    validation_steps=30)
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\keras\engine\training.py", line 66, in _method_wrapper
    return method(self, *args, **kwargs)
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\keras\engine\training.py", line 848, in fit
    tmp_logs = train_function(iterator)
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\eager\def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\eager\def_function.py", line 611, in _call
    return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\eager\function.py", line 2420, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\eager\function.py", line 1665, in _filtered_call
    self.captured_inputs)
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\eager\function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\eager\function.py", line 598, in call
    ctx=ctx)
  File "C:\Users\han\.virtualenvs\tensorflow-in-practice-9xcfuv0y\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InternalError: [Derived] Failed to call ThenRnnBackward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0, [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 64, 64, 1, 1615, 64, 64]
	 [[{{node gradients/CudnnRNN_grad/CudnnRNNBackprop}}]]
	 [[StatefulPartitionedCall_1]]
	 [[gradient_tape/sequential/embedding/embedding_lookup/Reshape/_38]] [Op:__inference_train_function_6172]

Function call stack:
train_function -> train_function -> train_function
```
tensorflow/tensorflow
Custom training loop inconsistent with Keras fit for vector variables
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab, macOS 10.15.5
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): 2.2.0
- Python version: 3.7.7
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: None
- GPU model and memory: None

Describe the current behavior
Training a linear model using a custom training loop and tf.GradientTape is inconsistent with tf.keras model.fit(), and the variables are updated in the wrong way. Scalar variables are updated correctly; vector variables get the wrong gradients.

Describe the expected behavior
The final results should be close in both cases.

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

```python
import tensorflow as tf
import numpy as np

nn = 40
x = np.random.normal(size=(nn, 3)).astype(np.float32)
y = (4 * x[:, 0] + 0.2 * x[:, 1] + 0.3 * x[:, 2] + 1.5
     + np.random.normal(size=nn, scale=1).astype(np.float32))
x = tf.convert_to_tensor(x)
y = tf.convert_to_tensor(y)

modelx = tf.keras.Sequential([tf.keras.layers.Dense(1)])
modelx(x)
opt = tf.keras.optimizers.Adam(learning_rate=0.1)
for i in range(400):
    with tf.GradientTape() as tape:
        yhat = modelx(x)
        loss = tf.reduce_sum(tf.math.squared_difference(yhat, y))
    grads = tape.gradient(loss, modelx.trainable_weights)
    opt.apply_gradients(zip(grads, modelx.trainable_weights))
modelx.variables
```

A colab to reproduce the bug.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
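One likely source of the discrepancy (an assumption about the repro above, not confirmed in the report): `yhat` from `Dense(1)` has shape `(nn, 1)` while `y` has shape `(nn,)`, so the squared difference silently broadcasts to an `(nn, nn)` matrix and the "loss" sums pairwise errors rather than per-sample residuals. A minimal NumPy sketch of the shape trap:

```python
import numpy as np

nn = 40
yhat = np.random.randn(nn, 1).astype(np.float32)  # Dense(1)-style output: shape (nn, 1)
y = np.random.randn(nn).astype(np.float32)        # targets: shape (nn,)

# Broadcasting turns the elementwise difference into an (nn, nn) matrix,
# so summing it accumulates nn*nn pairwise errors instead of nn residuals.
diff = np.square(yhat - y)
print(diff.shape)        # (40, 40)

# Flattening yhat (or reshaping y to (nn, 1)) restores the intended loss.
diff_ok = np.square(yhat[:, 0] - y)
print(diff_ok.shape)     # (40,)
```

If this is what is happening, squeezing `yhat` (or reshaping `y`) in the custom loop should bring its result in line with `model.fit()`.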
tensorflow/tensorflow
ValueError: tf.function-decorated function tried to create variables on non-first call
Bug
Standalone code to reproduce the issue:
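The issue body does not include a reproduction. A minimal sketch of the usual cause of this error (an assumption about what was hit, not taken from the report): a `tf.Variable` created inside a `tf.function` body is re-created on every trace, which TensorFlow rejects; creating the variable once outside the traced body and capturing it avoids the error.

```python
import tensorflow as tf

# Anti-pattern: the variable would be created on every trace of the function.
# @tf.function
# def bad(x):
#     v = tf.Variable(1.0)  # -> ValueError: tf.function-decorated function
#     return v * x          #    tried to create variables on non-first call

# Fix: create the variable once, outside the traced body, and capture it.
class Scaler(tf.Module):
    def __init__(self):
        self.v = tf.Variable(2.0)

    @tf.function
    def __call__(self, x):
        return self.v * x

scaler = Scaler()
print(float(scaler(tf.constant(3.0))))  # 6.0
print(float(scaler(tf.constant(5.0))))  # 10.0 -- second call traces without error
```

The same pattern applies to Keras layers: allocate weights in `__init__` or `build`, not inside the traced `call` body.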
tensorflow/tensorflow
TF Lite: TfLiteGpuDelegate Init: CONV_2D: unsupported data type for FLOAT32 tensor
Bug
System information
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
- TensorFlow installed from (source or binary): `pip install tensorflow`
- TensorFlow version (or GitHub SHA if from source): 2.2.0
- TensorFlow Lite version: org.tensorflow:tensorflow-lite:2.2.0, org.tensorflow:tensorflow-lite-gpu:2.2.0

Command used to run the converter, or code if you're using the Python API
The code used to run the converter can be found in this notebook. In particular, the part of interest is:

```python
from tensorflow import lite

converter = lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('model_unet_v2_f1lo_b14_e60_lr0.001_44_optimize.tflite', 'wb') as f:
    f.write(tflite_model)
```

The output from the converter invocation
The optimized .tflite model is successfully generated.

Also, please include a link to the saved model or GraphDef
Trained model: the optimized .tflite model. The model uses this implementation of the U-Net and takes a 256x256x3 image and outputs a 256x256x1 grayscale mask.

Failure details
The model without optimization works perfectly. The model with optimization works correctly when executed on the CPU. However, when I try to initialize the TFLite interpreter with the GPU delegate using the following code, I get the error `java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: TfLiteGpuDelegate Init: CONV_2D: unsupported data type for FLOAT32 tensor`:

```java
MappedByteBuffer model = FileUtil.loadMappedFile(activity,
        "unet_v2_f1lo_b14_e60_lr0.001_44_optimize.tflite");
Interpreter.Options options = new Interpreter.Options();
options.addDelegate(new GpuDelegate());
interpreter = new Interpreter(model, options);
```

Any other info / logs
Traceback of the exception:

```
I tflite: Created TensorFlow Lite delegate for GPU.
I tflite: Initialized TensorFlow Lite runtime.
W System.err: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: TfLiteGpuDelegate Init: CONV_2D: unsupported data type for FLOAT32 tensor
W System.err: TfLiteGpuDelegate Prepare: delegate is not initialized
W System.err: Node number 36 (TfLiteGpuDelegateV2) failed to prepare.
W System.err: Restored previous execution plan after delegate application failure.
W System.err: 	at org.tensorflow.lite.NativeInterpreterWrapper.applyDelegate(Native Method)
W System.err: 	at org.tensorflow.lite.NativeInterpreterWrapper.applyDelegate(NativeInterpreterWrapper.java:318)
W System.err: 	at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:82)
W System.err: 	at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:63)
W System.err: 	at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:237)
W System.err: 	at it.unipr.advmobdev.whereiswally.ModelExecutor.loadInterpreter(ModelExecutor.java:245)
W System.err: 	at it.unipr.advmobdev.whereiswally.ModelExecutor.run(ModelExecutor.java:160)
```

I noticed that in this commit the setQuantizedModelsAllowed method (L66-L76) was added to the GpuDelegate Options class. This method is not available in my current version of the library. Is it possible that the optimized (quantized) model cannot be run with the GPU delegate? I was unable to find this information in the documentation.
tensorflow/tensorflow
tensorflow/python/tpu:tpu_test and tensorflow/python/tpu:dataset_test test case failures
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): N/A
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): both source and binary
- TensorFlow version (use command below): v2.2.0-0-g2b96f3662b 2.2.0
- Python version: 3.6.9
- Bazel version (if compiling from source): 2.0.0
- GCC/compiler version (if compiling from source): 7.5.0
- CUDA/cuDNN version: -
- GPU model and memory: -

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior
Test case failures for both tests.
- For tensorflow/python/tpu:dataset_test, it fails with `RuntimeError: /job:coordinator/replica:0/task:0/device:CPU:0 unknown device.` (see the attached test log for dataset_test).
- For tensorflow/python/tpu:tpu_test, it fails with `RuntimeError: Attempting to capture an EagerTensor without building a function.` (see the attached test log for tpu_test).

Describe the expected behavior
Both test cases should pass.

Standalone code to reproduce the issue
For tpu_test, I copied the test case here. For dataset_test, I copied a portion of the test up to the code section that triggers the error, as shown in the test log, here. You can also run these via bazel test (with the no_oss tag removed) if you build TensorFlow from source.

Other info / logs
The test logs are attached in the current-behavior section above. I think there are also two things worth noting:
- Both test cases pass when eager execution is disabled via ops.disable_eager_execution() (with a different error, but it could be related; I learned about disabling eager execution from there).
- Looking into dataset_test, the "unknown device" failure is due to a function call to LookupDevice. It looks like the device for the context is not in device_map, see below:

```
(gdb) p device_map
$8 = std::unordered_map with 8 elements = {
  ["XLA_CPU:0"] = 0x374f740,
  ["/device:XLA_CPU:0"] = 0x374f740,
  ["/job:localhost/replica:0/task:0/cpu:0"] = 0x1b88f10,
  ["/device:CPU:0"] = 0x1b88f10,
  ["/job:localhost/replica:0/task:0/device:CPU:0"] = 0x1b88f10,
  ["CPU:0"] = 0x1b88f10,
  ["/job:localhost/replica:0/task:0/device:XLA_CPU:0"] = 0x374f740,
  ["/job:localhost/replica:0/task:0/XLA_CPU:0"] = 0x374f740
}
(gdb) p name
$9 = "/job:coordinator/replica:0/task:0/device:CPU:0"
Breakpoint 2, tensorflow::StaticDeviceMgr::LookupDevice (absl::string_view, tensorflow::Device const**)
    at tensorflow/core/common_runtime/device_mgr.cc:112
Breakpoint already hit 1 time.
```

Thanks, Ruixin
tensorflow/tensorflow
Cannot resume training using model.save and load_model
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: CentOS Linux 7.6.1810
- TensorFlow installed from: binary (pip)
- TensorFlow version: 2.2.0-rc3
- Python version: 3.6.4
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.0
- GPU model and memory: 2x TitanX, 12 GB

**Describe the current behavior**
I am using TensorFlow 2.2.0 on a multi-GPU system and need to train a large network for several days. I save the model weights with the optimizer state using `model.save`. When I reload the model using `tf.keras.models.load_model`, the loss spikes sharply on TensorBoard and the accuracy also shows a sudden drop. Though the loss recovers within the epoch, this does not comply with the intended behavior of saving the training state using `model.save`.

**Describe the expected behavior**
The API should be able to save and resume training from the very same point after loading a model from an h5 file.

**Standalone code to reproduce the issue**
This code is a minimal reproducible example. It was tested on a multi-GPU system with 8 GPUs. The re-run of the script is achieved by deleting the current model and distribution strategy and re-initializing them, to simulate stopping and restarting the training process.

```python
import os
import glob
import numpy as np
import tensorflow as tf

tf.__version__
gpus = tf.config.experimental.list_logical_devices("GPU")
print(gpus)

result_dir = os.path.join(os.getcwd(), "test_results")
checkpoint_frequency = 16
log_every = 1
batch_size_per_gpu = 16
num_gpus = len(gpus)
global_batch_size = batch_size_per_gpu * num_gpus


def get_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters=32, strides=1, kernel_size=(4, 4),
                               input_shape=(28, 28, 1)),
        tf.keras.layers.Activation("relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])
    return model


class SparseCategoricalLoss(tf.keras.losses.Loss):
    def __init__(self, num_classes, name="SparseCategoricalLoss",
                 from_logits=False, loss_weight=1.0, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.num_classes = num_classes
        self.name = name
        self.from_logits = from_logits
        self.loss_weight = loss_weight

    def loss_fn(self, y_true, y_pred):
        labels = y_true[..., 0:self.num_classes]
        logits = y_pred[..., 0:self.num_classes]
        loss = tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=self.from_logits, name=self.name,
            reduction=tf.keras.losses.Reduction.NONE)(labels, logits)
        loss = self.loss_weight * loss
        return loss

    def call(self, y_true, y_pred):
        total_loss = self.loss_fn(y_true, y_pred)
        return total_loss

    def get_config(self):
        config = super().get_config().copy()
        config.update({
            "num_classes": self.num_classes,
            "name": self.name,
            "loss_weight": self.loss_weight,
        })
        return config


loss = SparseCategoricalLoss(num_classes=10, from_logits=True, name="categorical_loss")

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = get_model()
    optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001, epsilon=1.0,
                                            momentum=0.9, rho=0.9)
    model.compile(optimizer=optimizer, loss=loss, metrics=["acc"])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train, 3)
x_test = np.expand_dims(x_test, 3)


class LoggingCallback(tf.keras.callbacks.Callback):
    def __init__(self, result_dir, log_every, initial_step=0,
                 checkpoint_frequency=None, **kwargs):
        super().__init__(**kwargs)
        # Create result directory
        self.result_dir = result_dir
        if not os.path.exists(result_dir):
            os.makedirs(result_dir)
        # Create checkpoint directory
        checkpoint_dir = os.path.join(self.result_dir, "checkpoints")
        if not os.path.exists(checkpoint_dir):
            os.makedirs(checkpoint_dir)
        # Create tensorboard directory
        tensorboard_dir = os.path.join(self.result_dir, "tensorboard")
        if not os.path.exists(tensorboard_dir):
            os.makedirs(tensorboard_dir)
        self.log_every = log_every
        self.checkpoint_frequency = checkpoint_frequency
        self.train_writer = tf.summary.create_file_writer(
            os.path.join(tensorboard_dir, "train"))
        self.step = initial_step

    # Write metrics to tensorboard
    def write_metrics_tensorboard(self, logs):
        with self.train_writer.as_default():
            for name, value in logs.items():
                if name in ("batch", "size"):
                    continue
                tf.summary.scalar(name, value, step=self.step)

    def on_batch_end(self, batch, logs=None):
        self.step += 1
        # Write metrics to tensorboard
        if self.step % self.log_every == 0:
            self.write_metrics_tensorboard(logs)
        # Save model checkpoint (weights + optimizer state)
        if self.checkpoint_frequency and self.step % self.checkpoint_frequency == 0:
            name = "model_step_%d.h5" % self.step
            path = os.path.join(self.result_dir, "checkpoints", name)
            self.model.save(path)


callback = LoggingCallback(result_dir=result_dir, log_every=log_every,
                           checkpoint_frequency=checkpoint_frequency)
model.fit(x=x_train, y=y_train, batch_size=global_batch_size, epochs=7,
          validation_data=(x_test, y_test), callbacks=[callback], verbose=1)

del model
del strategy

previous_checkpoints = glob.glob(os.path.join(result_dir, "checkpoints", "*"))
previous_checkpoints.sort(
    key=lambda x: int(os.path.basename(x).split("_")[2].replace(".h5", "")))
latest_checkpoint = previous_checkpoints[-1]
print("Found latest checkpoint: %s" % latest_checkpoint)
initial_step = int(os.path.basename(latest_checkpoint).split("_")[2].replace(".h5", ""))
print("Resuming training from step %d" % initial_step)

new_callback = LoggingCallback(result_dir=result_dir, log_every=log_every,
                               initial_step=initial_step,
                               checkpoint_frequency=checkpoint_frequency)

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.models.load_model(
        latest_checkpoint,
        custom_objects={"SparseCategoricalLoss": SparseCategoricalLoss})

model.fit(x=x_train, y=y_train, batch_size=global_batch_size, epochs=10,
          validation_data=(x_test, y_test), callbacks=[new_callback], verbose=1)
```

Here is a link to a Colab showing the output.

**Other info / logs**
The TensorBoard entries look like this: [tensorboard screenshot]. This is a toy example using MNIST; after about 26k steps, when the training is restarted, the loss spikes up, indicating that the last saved checkpoint did not save the training configuration correctly. I was training an Inception-ResNet network for several days, and the spike in the loss is very concerning when I restart the training, shown below: [tensorboard screenshot, Inception-ResNet]
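The failure mode above (resuming matches the uninterrupted run only when the optimizer state is restored along with the weights) can be illustrated without TensorFlow. A minimal pure-Python sketch, with a toy momentum optimizer and `pickle` standing in for `model.save`/`load_model` (not the Keras implementation):

```python
import pickle

def sgd_momentum_step(w, v, grad, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update; v is the optimizer's internal state."""
    v = momentum * v + grad
    return w - lr * v, v

def grad(w):
    return 2 * w  # gradient of the toy loss w**2

# Uninterrupted run: 10 steps.
w, v = 1.0, 0.0
for _ in range(10):
    w, v = sgd_momentum_step(w, v, grad(w))
uninterrupted = w

# Interrupted run: checkpoint weights AND optimizer state after 5 steps.
w, v = 1.0, 0.0
for _ in range(5):
    w, v = sgd_momentum_step(w, v, grad(w))
blob = pickle.dumps({"w": w, "v": v})   # analogous to model.save with optimizer state

state = pickle.loads(blob)              # analogous to load_model
w, v = state["w"], state["v"]
for _ in range(5):
    w, v = sgd_momentum_step(w, v, grad(w))
resumed = w
assert resumed == uninterrupted  # restoring the full state reproduces the trajectory

# Restoring only the weights (momentum buffer reset to 0) diverges:
# the analogue of the loss spike after a restart.
state = pickle.loads(blob)
w, v = state["w"], 0.0
for _ in range(5):
    w, v = sgd_momentum_step(w, v, grad(w))
assert w != uninterrupted
```

If `load_model` truly restored everything, the resumed and uninterrupted curves should coincide exactly, as in the first assertion; a reset slot variable produces exactly the kind of jump the TensorBoard curves above show.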
tensorflow/tensorflow
Error in the TPU .ipynb notebook
Bug
I'm trying to implement the code in this notebook. The lines for updating the training loss and accuracy are incorrect:

```python
training_loss.update_state(loss * strategy.num_replicas_in_sync)
training_accuracy.update_state(labels, logits)
```

I don't understand the intent behind updating the loss with the product of the number of replicas and the batch loss, but it gives the wrong result. Changing the line to

```python
training_loss.update_state(labels, logits)
```

appears to solve the bug. I also changed the definition of `training_loss` from a `metrics.Mean` to a `metrics.SparseCategoricalCrossentropy`.
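For context, whether that line is right depends on how `loss` was scaled earlier in the notebook: under `tf.distribute`, per-replica losses are conventionally divided by the global batch size (e.g. via `tf.nn.compute_average_loss`), and only then does multiplying by `num_replicas_in_sync` recover a per-replica mean. A pure-Python sketch of that arithmetic, with toy numbers rather than the notebook's actual values:

```python
# Per-example losses on 2 hypothetical replicas.
replica_losses = [[1.0, 3.0], [5.0, 7.0]]
global_batch_size = 4
num_replicas = 2

# Global mean loss over the whole batch.
global_mean = sum(sum(r) for r in replica_losses) / global_batch_size  # 4.0

# The tf.distribute convention: each replica reports sum / GLOBAL batch size
# (what tf.nn.compute_average_loss produces), so that summing replica losses
# in the backward pass yields the global mean.
scaled = [sum(r) / global_batch_size for r in replica_losses]  # [1.0, 3.0]
assert sum(scaled) == global_mean

# Multiplying such a scaled loss by num_replicas_in_sync recovers the
# per-replica mean; averaging those per-replica means across replicas
# equals the global mean only when replicas see equal batch sizes.
per_replica_means = [s * num_replicas for s in scaled]  # [2.0, 6.0]
assert sum(per_replica_means) / num_replicas == global_mean
```

If `loss` is instead already a per-replica mean, the extra factor of `num_replicas_in_sync` overcounts by exactly that factor, which would explain the wrong metric value reported here.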
tensorflow/tensorflow
TF Lite C API zero output
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: Snapdragon 645, Android 10
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v2.2.0
- Python version: 2.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.2
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
I'm running inference on the same model with the C++ API (using libtensorflowlite.so) and the C API (using libtensorflowlite_c.so) on Android, NDK r18b. While the C++ code works well and passes the test cases, the C version outputs all zeros.

**Describe the expected behavior**
The outputs of the C and C++ versions should be exactly the same.

**Standalone code to reproduce the issue**

```c
void forward(const float *real, const float *imag, float *pout) {
    TfLiteInterpreterAllocateTensors(interpreter);
    TfLiteTensor *tflite_real = TfLiteInterpreterGetInputTensor(interpreter, 0);
    TfLiteTensor *tflite_imag = TfLiteInterpreterGetInputTensor(interpreter, 1);
    TfLiteTensorCopyFromBuffer(tflite_real, real, n1 * sizeof(float));
    TfLiteTensorCopyFromBuffer(tflite_imag, imag, n2 * sizeof(float));
    TfLiteInterpreterInvoke(interpreter);
    const TfLiteTensor *out = TfLiteInterpreterGetOutputTensor(interpreter, 0);
    TfLiteTensorCopyToBuffer(out, pout, n_out * sizeof(float));
}
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
AutoGraph could not transform
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.1
- GPU model and memory: 2080 Ti, 11 GB

**Describe the current behavior**
The following warning is printed:

```
WARNING:tensorflow:AutoGraph could not transform <function parse at 0x7f3d8c126cb0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux,
export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: bad argument number for Name: 3, expecting 4
```

**Describe the expected behavior**
No warning.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

def bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def float_tensor_to_bytes_feature(value):
    return bytes_feature(
        tf.io.serialize_tensor(tf.convert_to_tensor(value, dtype=tf.float32)).numpy())

def parse_dataset(dataset, features_description, n_parallel_calls=None):
    def parse(example_proto):
        deserialized_dict = tf.io.parse_single_example(example_proto, features_description)
        return deserialized_dict
    parsed_dataset = dataset.map(parse, num_parallel_calls=n_parallel_calls)
    return parsed_dataset

v = [1, 2, 3]
features = {'x': float_tensor_to_bytes_feature(v)}
example_proto = tf.train.Example(features=tf.train.Features(feature=features))
example = example_proto.SerializeToString()
serialized_tensors = [example]
dataset = tf.data.Dataset.from_tensor_slices(serialized_tensors)
features_description = {'x': tf.io.FixedLenFeature([], tf.string)}
parsed_dataset = parse_dataset(dataset, features_description)
print(next(iter(parsed_dataset)))
```
tensorflow/tensorflow
Mask R-CNN conversion succeeds but requires Select TF ops despite selecting only TFLITE_BUILTINS
Bug
**System information**
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): the tensorflow 2.2.0-gpu-jupyter Docker image, so binary
- TensorFlow version (or github SHA if from source): 2.2.0

**Command used to run the converter or code**
If you're using the Python API, and if possible, please share a link to Colab/Jupyter/any notebook:

```python
from tensorflow import lite
import numpy as np

converter = lite.TFLiteConverter.from_keras_model(model.keras_model)
converter.target_spec.supported_ops = [lite.OpsSet.TFLITE_BUILTINS]
converter.experimental_new_converter = True
converter.allow_custom_ops = False
converter.representative_dataset = (np.random.random((256, 256, 3)) * 255).astype(np.uint8)
tflite_model = converter.convert()
open('converted_model.tflite', 'wb').write(tflite_model)
```

The output from the converter invocation: 256614676

Also, please include a link to the saved model or GraphDef — what would be a good way to provide this? The original model parameters are from the Matterport repo, here.

**Failure details**
If the conversion is successful but the generated model is wrong, state what is wrong: the produced tflite model works on my Android device, but it requires the TF Select ops library and fails to accept the provided GpuDelegate.

**RNN conversion support**
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.

**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
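For anyone triaging: a Select-TF-ops (Flex) requirement comes from graph ops that have no TFLite builtin equivalent, and the reported behavior suggests the new converter emitted those as Flex ops instead of failing even though only TFLITE_BUILTINS was selected. A hypothetical sketch of that partitioning, with a made-up builtin set and op list — the real inventory comes from inspecting the generated .tflite file, not from this toy:

```python
# Deliberately tiny, made-up subset of the builtin op set.
TFLITE_BUILTINS = {"CONV_2D", "RESHAPE", "SOFTMAX", "ADD"}

def flex_ops_needed(model_ops, builtins=TFLITE_BUILTINS):
    """Return the ops that would have to be emitted as Flex (Select TF) ops."""
    return sorted(op for op in model_ops if op not in builtins)

# Hypothetical op list; CropAndResize (tf.image.crop_and_resize, used by
# Mask R-CNN's ROIAlign) is a typical op with no builtin equivalent.
mask_rcnn_like_ops = ["CONV_2D", "RESHAPE", "CropAndResize", "SOFTMAX"]
print(flex_ops_needed(mask_rcnn_like_ops))  # ['CropAndResize']
```

With `allow_custom_ops=False` and builtins-only `supported_ops`, one would expect a non-empty result like this to abort conversion rather than silently produce a model that needs the Flex delegate at runtime.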
tensorflow/tensorflow
Annoying documentation for TensorFlow 2.2.0
Bug
I see the implementations of `tf.reduce_mean` and `tf.reduce_sum`, but they are not in the documentation for TF 2.2.0. It is really annoying, as it creates giant confusion in our code. Please keep the documentation and the implementation in sync.
tensorflow/tensorflow
Documentation: instructions on installing TensorFlow with CUDA support don't work
Bug
OS: Ubuntu 18.04
Graphics card: NVIDIA 1050 Ti

Problem: following the instructions under "Install CUDA with apt" gives the following error:

```
Unpacking libcudnn7-dev (7.6.4.38-1+cuda10.1) ...
Errors were encountered while processing:
 /tmp/apt-dpkg-install-fjai3s/55-libnvidia-compute-450_450.36.06-0ubuntu1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
```

after executing this step:

```
sudo apt-get install --no-install-recommends cuda-10-1 libcudnn7=7.6.4.38-1+cuda10.1 libcudnn7-dev=7.6.4.38-1+cuda10.1
```

Additional info: the complete message after running the above command is:

Reading package lists... Done. Building dependency tree. Reading state information... Done. The following packages were automatically installed and are no longer required: libnvidia-common-440 libnvidia-extra-440; use 'sudo apt autoremove' to remove them. The following additional packages will be installed: cuda-command-line-tools-10-1 cuda-compiler-10-1 cuda-cudart-10-1 cuda-cudart-dev-10-1 cuda-cufft-10-1 cuda-cufft-dev-10-1 cuda-cuobjdump-10-1 cuda-cupti-10-1 cuda-curand-10-1 cuda-curand-dev-10-1 cuda-cusolver-10-1 cuda-cusolver-dev-10-1 cuda-cusparse-10-1 cuda-cusparse-dev-10-1 cuda-demo-suite-10-1 cuda-documentation-10-1 cuda-driver-dev-10-1 cuda-drivers cuda-drivers-450 cuda-gdb-10-1 cuda-gpu-library-advisor-10-1 cuda-libraries-10-1 cuda-libraries-dev-10-1 cuda-license-10-1 cuda-license-10-2 cuda-memcheck-10-1 cuda-misc-headers-10-1 cuda-npp-10-1 cuda-npp-dev-10-1 cuda-nsight-10-1 cuda-nsight-compute-10-1 cuda-nsight-systems-10-1 cuda-nvcc-10-1 cuda-nvdisasm-10-1 cuda-nvgraph-10-1 cuda-nvgraph-dev-10-1 cuda-nvjpeg-10-1 cuda-nvjpeg-dev-10-1 cuda-nvml-dev-10-1 cuda-nvprof-10-1 cuda-nvprune-10-1 cuda-nvrtc-10-1 cuda-nvrtc-dev-10-1 cuda-nvtx-10-1 cuda-nvvp-10-1 cuda-runtime-10-1 cuda-samples-10-1 cuda-sanitizer-api-10-1 cuda-toolkit-10-1 cuda-tools-10-1 cuda-visual-tools-10-1 default-jre default-jre-headless libcublas-dev libcublas10 libnvidia-cfg1-450 libnvidia-common-450 libnvidia-compute-450 libnvidia-decode-450 libnvidia-encode-450 libnvidia-fbc1-450 libnvidia-gl-450
libnvidia ifr1 450 nsight compute 2019 5 0 nsight system 2019 5 2 nvidia compute util 450 nvidia dkms 450 nvidia driver 450 nvidia kernel common 450 nvidia kernel source 450 nvidia modprobe nvidia settings nvidia util 450 openjdk 11 jre openjdk 11 jre headless xserver xorg video nvidia 450 suggest package font ipafont gothic font ipafont mincho font wqy microhei font wqy zenhei the follow package will be remove libnvidia cfg1 440 libnvidia compute 440 libnvidia decode 440 libnvidia encode 440 libnvidia fbc1 440 libnvidia fbc1 440 i386 libnvidia gl 440 libnvidia ifr1 440 nvidia compute util 440 nvidia dkms 440 nvidia driver 430 nvidia driver 440 nvidia kernel common 440 nvidia kernel source 440 nvidia util 440 xserver xorg video nvidia 440 the follow new package will be instal cuda 10 1 cuda command line tool 10 1 cuda compiler 10 1 cuda cudart 10 1 cuda cudart dev 10 1 cuda cufft 10 1 cuda cufft dev 10 1 cuda cuobjdump 10 1 cuda cupti 10 1 cuda curand 10 1 cuda curand dev 10 1 cuda cusolver 10 1 cuda cusolver dev 10 1 cuda cusparse 10 1 cuda cusparse dev 10 1 cuda demo suite 10 1 cuda documentation 10 1 cuda driver dev 10 1 cuda driver cuda driver 450 cuda gdb 10 1 cuda gpu library advisor 10 1 cuda librarie 10 1 cuda library dev 10 1 cuda license 10 1 cuda license 10 2 cuda memcheck 10 1 cuda misc header 10 1 cuda npp 10 1 cuda npp dev 10 1 cuda nsight 10 1 cuda nsight compute 10 1 cuda nsight system 10 1 cuda nvcc 10 1 cuda nvdisasm 10 1 cuda nvgraph 10 1 cuda nvgraph dev 10 1 cuda nvjpeg 10 1 cuda nvjpeg dev 10 1 cuda nvml dev 10 1 cuda nvprof 10 1 cuda nvprune 10 1 cuda nvrtc 10 1 cuda nvrtc dev 10 1 cuda nvtx 10 1 cuda nvvp 10 1 cuda runtime 10 1 cuda sample 10 1 cuda sanitizer api 10 1 cuda toolkit 10 1 cuda tool 10 1 cuda visual tool 10 1 default jre default jre headless libcublas dev libcublas10 libcudnn7 libcudnn7 dev libnvidia cfg1 450 libnvidia common 450 libnvidia compute 450 libnvidia decode 450 libnvidia encode 450 libnvidia fbc1 450 libnvidia gl 450 
libnvidia ifr1 450 nsight compute 2019 5 0 nsight system 2019 5 2 nvidia compute util 450 nvidia dkms 450 nvidia driver 450 nvidia kernel common 450 nvidia kernel source 450 nvidia modprobe nvidia settings nvidia util 450 openjdk 11 jre openjdk 11 jre headless xserver xorg video nvidia 450 0 upgrade 79 newly instal 16 to remove and 239 not upgrade need to get 0 b 2 205 mb of archive after this operation 4 855 mb of additional disk space will be use do you want to continue y n extract template from package 100 reading database 294935 file and directory currently instal remove nvidia driver 430 440 59 0ubuntu0 18 04 1 remove nvidia driver 440 440 82 0ubuntu0 0 18 04 2 remove xserver xorg video nvidia 440 440 82 0ubuntu0 0 18 04 2 remove libnvidia cfg1 440 amd64 440 82 0ubuntu0 0 18 04 2 remove libnvidia encode 440 amd64 440 82 0ubuntu0 0 18 04 2 remove libnvidia decode 440 amd64 440 82 0ubuntu0 0 18 04 2 remove nvidia util 440 440 82 0ubuntu0 0 18 04 2 remove libnvidia fbc1 440 i386 440 82 0ubuntu0 0 18 04 2 remove libnvidia fbc1 440 amd64 440 82 0ubuntu0 0 18 04 2 remove libnvidia ifr1 440 amd64 440 82 0ubuntu0 0 18 04 2 remove libnvidia gl 440 amd64 440 82 0ubuntu0 0 18 04 2 remove nvidia compute util 440 440 82 0ubuntu0 0 18 04 2 remove nvidia dkms 440 440 82 0ubuntu0 0 18 04 2 remove all dkms module do info disable nvidia debug parse usr share ubuntu driver common quirk dell latitude debug parse usr share ubuntu driver common quirk put your quirk here debug parse usr share ubuntu driver common quirk lenovo thinkpad update initramfs defer update trigger activate remove nvidia kernel common 440 440 82 0ubuntu0 0 18 04 2 update initramfs defer update trigger activate remove nvidia kernel source 440 440 82 0ubuntu0 0 18 04 2 remove libnvidia compute 440 amd64 440 82 0ubuntu0 0 18 04 2 select previously unselecte package cuda license 10 1 reading database 294368 file and directory currently instal prepare to unpack 00 cuda license 10 1 10 1 243 1 amd64 deb unpack cuda 
license 10 1 10 1 243 1 select previously unselecte package cuda misc header 10 1 prepare to unpack 01 cuda misc header 10 1 10 1 243 1 amd64 deb unpack cuda misc header 10 1 10 1 243 1 select previously unselecte package cuda nvcc 10 1 prepare to unpack 02 cuda nvcc 10 1 10 1 243 1 amd64 deb unpack cuda nvcc 10 1 10 1 243 1 select previously unselecte package cuda cuobjdump 10 1 prepare to unpack 03 cuda cuobjdump 10 1 10 1 243 1 amd64 deb unpack cuda cuobjdump 10 1 10 1 243 1 select previously unselecte package cuda nvprune 10 1 prepare to unpack 04 cuda nvprune 10 1 10 1 243 1 amd64 deb unpack cuda nvprune 10 1 10 1 243 1 select previously unselecte package cuda compiler 10 1 prepare to unpack 05 cuda compiler 10 1 10 1 243 1 amd64 deb unpack cuda compiler 10 1 10 1 243 1 select previously unselecte package cuda nvdisasm 10 1 prepare to unpack 06 cuda nvdisasm 10 1 10 1 243 1 amd64 deb unpack cuda nvdisasm 10 1 10 1 243 1 select previously unselecte package cuda gdb 10 1 prepare to unpack 07 cuda gdb 10 1 10 1 243 1 amd64 deb unpack cuda gdb 10 1 10 1 243 1 select previously unselecte package cuda nvprof 10 1 prepare to unpack 08 cuda nvprof 10 1 10 1 243 1 amd64 deb unpack cuda nvprof 10 1 10 1 243 1 select previously unselecte package cuda sanitizer api 10 1 prepare to unpack 09 cuda sanitizer api 10 1 10 1 243 1 amd64 deb unpack cuda sanitizer api 10 1 10 1 243 1 select previously unselecte package cuda memcheck 10 1 prepare to unpack 10 cuda memcheck 10 1 10 1 243 1 amd64 deb unpack cuda memcheck 10 1 10 1 243 1 select previously unselecte package cuda cudart 10 1 prepare to unpack 11 cuda cudart 10 1 10 1 243 1 amd64 deb unpack cuda cudart 10 1 10 1 243 1 select previously unselecte package cuda driver dev 10 1 prepare to unpack 12 cuda driver dev 10 1 10 1 243 1 amd64 deb unpack cuda driver dev 10 1 10 1 243 1 select previously unselecte package cuda cudart dev 10 1 prepare to unpack 13 cuda cudart dev 10 1 10 1 243 1 amd64 deb unpack cuda cudart dev 10 1 
10 1 243 1 select previously unselecte package cuda cupti 10 1 prepare to unpack 14 cuda cupti 10 1 10 1 243 1 amd64 deb unpack cuda cupti 10 1 10 1 243 1 select previously unselecte package cuda gpu library advisor 10 1 prepare to unpack 15 cuda gpu library advisor 10 1 10 1 243 1 amd64 deb unpack cuda gpu library advisor 10 1 10 1 243 1 select previously unselecte package cuda nvtx 10 1 prepare to unpack 16 cuda nvtx 10 1 10 1 243 1 amd64 deb unpack cuda nvtx 10 1 10 1 243 1 select previously unselecte package cuda command line tool 10 1 prepare to unpack 17 cuda command line tool 10 1 10 1 243 1 amd64 deb unpack cuda command line tool 10 1 10 1 243 1 select previously unselecte package openjdk 11 jre headless amd64 prepare to unpack 18 openjdk 11 jre headless 11 0 7 10 2ubuntu2 18 04 amd64 deb unpack openjdk 11 jre headless amd64 11 0 7 10 2ubuntu2 18 04 select previously unselecte package default jre headless prepare to unpack 19 default jre headless 2 3a1 11 68ubuntu1 18 04 1 amd64 deb unpack default jre headless 2 1 11 68ubuntu1 18 04 1 select previously unselecte package openjdk 11 jre amd64 prepare to unpack 20 openjdk 11 jre 11 0 7 10 2ubuntu2 18 04 amd64 deb unpack openjdk 11 jre amd64 11 0 7 10 2ubuntu2 18 04 select previously unselecte package default jre prepare to unpack 21 default jre 2 3a1 11 68ubuntu1 18 04 1 amd64 deb unpack default jre 2 1 11 68ubuntu1 18 04 1 select previously unselecte package cuda nsight 10 1 prepare to unpack 22 cuda nsight 10 1 10 1 243 1 amd64 deb unpack cuda nsight 10 1 10 1 243 1 select previously unselecte package cuda nvvp 10 1 prepare to unpack 23 cuda nvvp 10 1 10 1 243 1 amd64 deb unpack cuda nvvp 10 1 10 1 243 1 select previously unselecte package cuda nvrtc 10 1 prepare to unpack 24 cuda nvrtc 10 1 10 1 243 1 amd64 deb unpack cuda nvrtc 10 1 10 1 243 1 select previously unselecte package cuda nvrtc dev 10 1 prepare to unpack 25 cuda nvrtc dev 10 1 10 1 243 1 amd64 deb unpack cuda nvrtc dev 10 1 10 1 243 1 select 
previously unselecte package cuda cusolver 10 1 prepare to unpack 26 cuda cusolver 10 1 10 1 243 1 amd64 deb unpack cuda cusolver 10 1 10 1 243 1 select previously unselecte package cuda cusolver dev 10 1 prepare to unpack 27 cuda cusolver dev 10 1 10 1 243 1 amd64 deb unpack cuda cusolver dev 10 1 10 1 243 1 select previously unselecte package cuda license 10 2 prepare to unpack 28 cuda license 10 2 10 2 89 1 amd64 deb unpack cuda license 10 2 10 2 89 1 select previously unselecte package libcublas10 prepare to unpack 29 libcublas10 10 2 2 89 1 amd64 deb unpack libcublas10 10 2 2 89 1 select previously unselecte package libcublas dev prepare to unpack 30 libcubla dev 10 2 2 89 1 amd64 deb unpack libcublas dev 10 2 2 89 1 select previously unselecte package cuda cufft 10 1 prepare to unpack 31 cuda cufft 10 1 10 1 243 1 amd64 deb unpack cuda cufft 10 1 10 1 243 1 select previously unselecte package cuda cufft dev 10 1 prepare to unpack 32 cuda cufft dev 10 1 10 1 243 1 amd64 deb unpack cuda cufft dev 10 1 10 1 243 1 select previously unselecte package cuda curand 10 1 prepare to unpack 33 cuda curand 10 1 10 1 243 1 amd64 deb unpack cuda curand 10 1 10 1 243 1 select previously unselecte package cuda curand dev 10 1 prepare to unpack 34 cuda curand dev 10 1 10 1 243 1 amd64 deb unpack cuda curand dev 10 1 10 1 243 1 select previously unselecte package cuda cusparse 10 1 prepare to unpack 35 cuda cusparse 10 1 10 1 243 1 amd64 deb unpack cuda cusparse 10 1 10 1 243 1 select previously unselecte package cuda cusparse dev 10 1 prepare to unpack 36 cuda cusparse dev 10 1 10 1 243 1 amd64 deb unpack cuda cusparse dev 10 1 10 1 243 1 select previously unselecte package cuda npp 10 1 prepare to unpack 37 cuda npp 10 1 10 1 243 1 amd64 deb unpack cuda npp 10 1 10 1 243 1 select previously unselecte package cuda npp dev 10 1 prepare to unpack 38 cuda npp dev 10 1 10 1 243 1 amd64 deb unpack cuda npp dev 10 1 10 1 243 1 select previously unselecte package cuda nvml dev 10 1 
prepare to unpack 39 cuda nvml dev 10 1 10 1 243 1 amd64 deb unpack cuda nvml dev 10 1 10 1 243 1 select previously unselecte package cuda nvjpeg 10 1 prepare to unpack 40 cuda nvjpeg 10 1 10 1 243 1 amd64 deb unpack cuda nvjpeg 10 1 10 1 243 1 select previously unselecte package cuda nvjpeg dev 10 1 prepare to unpack 41 cuda nvjpeg dev 10 1 10 1 243 1 amd64 deb unpack cuda nvjpeg dev 10 1 10 1 243 1 select previously unselecte package nsight compute 2019 5 0 prepare to unpack 42 nsight compute 2019 5 0 2019 5 0 14 1 amd64 deb unpack nsight compute 2019 5 0 2019 5 0 14 1 select previously unselecte package cuda nsight compute 10 1 prepare to unpack 43 cuda nsight compute 10 1 10 1 243 1 amd64 deb unpack cuda nsight compute 10 1 10 1 243 1 select previously unselecte package nsight system 2019 5 2 prepare to unpack 44 nsight system 2019 5 2 2019 5 2 16 b54ef97 amd64 deb unpack nsight system 2019 5 2 2019 5 2 16 b54ef97 select previously unselecte package cuda nsight system 10 1 prepare to unpack 45 cuda nsight system 10 1 10 1 243 1 amd64 deb unpack cuda nsight system 10 1 10 1 243 1 select previously unselecte package cuda nvgraph 10 1 prepare to unpack 46 cuda nvgraph 10 1 10 1 243 1 amd64 deb unpack cuda nvgraph 10 1 10 1 243 1 select previously unselecte package cuda nvgraph dev 10 1 prepare to unpack 47 cuda nvgraph dev 10 1 10 1 243 1 amd64 deb unpack cuda nvgraph dev 10 1 10 1 243 1 select previously unselecte package cuda visual tool 10 1 prepare to unpack 48 cuda visual tool 10 1 10 1 243 1 amd64 deb unpack cuda visual tool 10 1 10 1 243 1 select previously unselecte package cuda tool 10 1 prepare to unpack 49 cuda tool 10 1 10 1 243 1 amd64 deb unpack cuda tool 10 1 10 1 243 1 select previously unselecte package cuda sample 10 1 prepare to unpack 50 cuda sample 10 1 10 1 243 1 amd64 deb unpack cuda sample 10 1 10 1 243 1 select previously unselecte package cuda documentation 10 1 prepare to unpack 51 cuda documentation 10 1 10 1 243 1 amd64 deb unpack cuda 
documentation 10 1 10 1 243 1 select previously unselecte package cuda librarie dev 10 1 prepare to unpack 52 cuda librarie dev 10 1 10 1 243 1 amd64 deb unpack cuda library dev 10 1 10 1 243 1 select previously unselecte package cuda toolkit 10 1 prepare to unpack 53 cuda toolkit 10 1 10 1 243 1 amd64 deb unpack cuda toolkit 10 1 10 1 243 1 select previously unselecte package libnvidia common 450 prepare to unpack 54 libnvidia common 450 450 36 06 0ubuntu1 all deb check for exist driver runfile install var lib dpkg tmp ci preinst 6 var lib dpkg tmp ci preinst not find unpack libnvidia common 450 450 36 06 0ubuntu1 prepare to unpack 55 libnvidia compute 450 450 36 06 0ubuntu1 amd64 deb unpack libnvidia compute 450 amd64 450 36 06 0ubuntu1 dpkg error processing archive tmp apt dpkg install fjai3s 55 libnvidia compute 450 450 36 06 0ubuntu1 amd64 deb unpack try to overwrite usr lib x86 64 linux gnu libnvidia allocator so which be also in package libnvidia extra 440 amd64 440 82 0ubuntu0 0 18 04 2 select previously unselecte package libnvidia decode 450 amd64 prepare to unpack 56 libnvidia decode 450 450 36 06 0ubuntu1 amd64 deb unpack libnvidia decode 450 amd64 450 36 06 0ubuntu1 select previously unselecte package libnvidia encode 450 amd64 prepare to unpack 57 libnvidia encode 450 450 36 06 0ubuntu1 amd64 deb unpack libnvidia encode 450 amd64 450 36 06 0ubuntu1 select previously unselecte package libnvidia fbc1 450 amd64 prepare to unpack 58 libnvidia fbc1 450 450 36 06 0ubuntu1 amd64 deb unpack libnvidia fbc1 450 amd64 450 36 06 0ubuntu1 select previously unselecte package libnvidia gl 450 amd64 prepare to unpack 59 libnvidia gl 450 450 36 06 0ubuntu1 amd64 deb unpack libnvidia gl 450 amd64 450 36 06 0ubuntu1 select previously unselecte package libnvidia ifr1 450 amd64 prepare to unpack 60 libnvidia ifr1 450 450 36 06 0ubuntu1 amd64 deb unpack libnvidia ifr1 450 amd64 450 36 06 0ubuntu1 select previously unselecte package nvidia compute util 450 prepare to unpack 
61 nvidia compute util 450 450 36 06 0ubuntu1 amd64 deb unpack nvidia compute util 450 450 36 06 0ubuntu1 select previously unselecte package nvidia kernel source 450 prepare to unpack 62 nvidia kernel source 450 450 36 06 0ubuntu1 amd64 deb unpack nvidia kernel source 450 450 36 06 0ubuntu1 select previously unselecte package nvidia kernel common 450 prepare to unpack 63 nvidia kernel common 450 450 36 06 0ubuntu1 amd64 deb unpack nvidia kernel common 450 450 36 06 0ubuntu1 select previously unselecte package nvidia dkms 450 prepare to unpack 64 nvidia dkms 450 450 36 06 0ubuntu1 amd64 deb unpack nvidia dkms 450 450 36 06 0ubuntu1 select previously unselecte package nvidia util 450 prepare to unpack 65 nvidia util 450 450 36 06 0ubuntu1 amd64 deb unpack nvidia util 450 450 36 06 0ubuntu1 select previously unselecte package libnvidia cfg1 450 amd64 prepare to unpack 66 libnvidia cfg1 450 450 36 06 0ubuntu1 amd64 deb unpack libnvidia cfg1 450 amd64 450 36 06 0ubuntu1 select previously unselecte package xserver xorg video nvidia 450 prepare to unpack 67 xserver xorg video nvidia 450 450 36 06 0ubuntu1 amd64 deb unpack xserver xorg video nvidia 450 450 36 06 0ubuntu1 select previously unselecte package nvidia driver 450 prepare to unpack 68 nvidia driver 450 450 36 06 0ubuntu1 amd64 deb unpack nvidia driver 450 450 36 06 0ubuntu1 select previously unselecte package nvidia modprobe prepare to unpack 69 nvidia modprobe 450 36 06 0ubuntu1 amd64 deb unpack nvidia modprobe 450 36 06 0ubuntu1 select previously unselecte package nvidia setting prepare to unpack 70 nvidia setting 450 36 06 0ubuntu1 amd64 deb unpack nvidia setting 450 36 06 0ubuntu1 select previously unselecte package cuda driver 450 prepare to unpack 71 cuda driver 450 450 36 06 1 amd64 deb unpack cuda driver 450 450 36 06 1 select previously unselecte package cuda driver prepare to unpack 72 cuda driver 450 36 06 1 amd64 deb unpack cuda driver 450 36 06 1 select previously unselecte package cuda librarie 10 
1 prepare to unpack 73 cuda librarie 10 1 10 1 243 1 amd64 deb unpack cuda library 10 1 10 1 243 1 select previously unselecte package cuda runtime 10 1 prepare to unpack 74 cuda runtime 10 1 10 1 243 1 amd64 deb unpack cuda runtime 10 1 10 1 243 1 select previously unselecte package cuda demo suite 10 1 prepare to unpack 75 cuda demo suite 10 1 10 1 243 1 amd64 deb unpack cuda demo suite 10 1 10 1 243 1 select previously unselecte package cuda 10 1 prepare to unpack 76 cuda 10 1 10 1 243 1 amd64 deb unpack cuda 10 1 10 1 243 1 select previously unselecte package libcudnn7 prepare to unpack 77 libcudnn7 7 6 4 38 1 cuda10 1 amd64 deb unpack libcudnn7 7 6 4 38 1 cuda10 1 select previously unselecte package libcudnn7 dev prepare to unpack 78 libcudnn7 dev 7 6 4 38 1 cuda10 1 amd64 deb unpack libcudnn7 dev 7 6 4 38 1 cuda10 1 error be encounter while process tmp apt dpkg install fjai3s 55 libnvidia compute 450 450 36 06 0ubuntu1 amd64 deb e sub process usr bin dpkg return an error code 1
tensorflow/tensorflow
tf.raw_ops.CollectivePermute bug caused by strange device numbering
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0
- Python version: 3.6

**Describe the current behavior**
The 8 cores are numbered as 0 1 2 3 6 7 4 5 by `CollectivePermute` and `tensorflow.python.tpu.ops.tpu_ops.all_to_all`.

**Describe the expected behavior**
They should be numbered as 0 1 2 3 4 5 6 7.

**Standalone code to reproduce the issue**

```python
import os
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

@tf.function
def step_fn():
    context = tf.distribute.get_replica_context()
    v = context.replica_id_in_sync_group
    v = tf.raw_ops.CollectivePermute(
        input=v,
        source_target_pairs=[[0, 1], [1, 2], [2, 3], [3, 4],
                             [4, 5], [5, 6], [6, 7], [7, 0]])
    return v

ret = strategy.run(step_fn)
print(ret)
```

**Other info / logs**
The following is the output:

```
PerReplica: {
  0: tf.Tensor(5, shape=(), dtype=int32),
  1: tf.Tensor(0, shape=(), dtype=int32),
  2: tf.Tensor(1, shape=(), dtype=int32),
  3: tf.Tensor(2, shape=(), dtype=int32),
  4: tf.Tensor(7, shape=(), dtype=int32),
  5: tf.Tensor(4, shape=(), dtype=int32),
  6: tf.Tensor(3, shape=(), dtype=int32),
  7: tf.Tensor(6, shape=(), dtype=int32)
}
```

The correct output should be 7 0 1 2 3 4 5 6.
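The observed output is consistent with `source_target_pairs` being applied over the physical core order 0 1 2 3 6 7 4 5 rather than over logical replica ids. A pure-Python model of `CollectivePermute` that reproduces both the expected and the observed results (an illustration of the numbering claim above, not the XLA implementation):

```python
def collective_permute(values, source_target_pairs):
    """Replica `target` receives the value held by replica `source`;
    replicas that are not a target of any pair receive 0."""
    out = [0] * len(values)
    for source, target in source_target_pairs:
        out[target] = values[source]
    return out

pairs = [(i, (i + 1) % 8) for i in range(8)]   # the pairs from the snippet
replica_ids = list(range(8))

# Expected: each replica receives its left neighbour's id.
assert collective_permute(replica_ids, pairs) == [7, 0, 1, 2, 3, 4, 5, 6]

# Observed: the same cyclic shift, but along the order 0 1 2 3 6 7 4 5.
order = [0, 1, 2, 3, 6, 7, 4, 5]               # logical id at each position
pos = {rid: p for p, rid in enumerate(order)}
observed = [order[(pos[r] - 1) % 8] for r in range(8)]
assert observed == [5, 0, 1, 2, 7, 4, 3, 6]    # matches the PerReplica output
```

The second assertion reproduces the reported output exactly, which supports reading this as a device-numbering issue rather than a wrong permutation per se.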
tensorflow/tensorflow
The parameters of BatchNormalization are not updated
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: (not specified)
- TensorFlow installed from: official website
- TensorFlow version: 2.2.0
- Python version: 3.7.6
- Bazel / GCC version (if compiled from source): N/A
- CUDA/cuDNN version: No
- GPU model and memory: No

**Describe the current behavior**: I was simply changing the code from `tf.layers` to `tf.keras.layers` based on the migration instructions on the official website, but it turns out the parameters of `tf.keras.layers.BatchNormalization` are not updated properly, while everything works fine for `tf.layers.batch_normalization`. I do not know the reason.

**Describe the expected behavior**: the parameters of `tf.keras.layers.BatchNormalization` should be updated, just like what is done for `tf.layers.batch_normalization`.

**Standalone code to reproduce the issue**:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np

def get_positive(batch_size):
    train_images = np.zeros((50, 28, 28, 1), dtype=np.float32) + 1
    train_labels = np.int8(np.zeros((50, 1)) + 1)
    dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
    dataset = dataset.repeat(1).batch(batch_size).prefetch(1)
    return dataset.make_one_shot_iterator().get_next()

def get_negative(batch_size):
    train_images = np.zeros((50, 28, 28, 1), dtype=np.float32)
    train_labels = np.int8(np.zeros((50, 1)))
    dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
    dataset = dataset.repeat(1).batch(batch_size).prefetch(1)
    return dataset.make_one_shot_iterator().get_next()

def tf_model(features, is_training):
    with tf.variable_scope('tf_model', reuse=tf.AUTO_REUSE):
        net = tf.layers.conv2d(features, 8, (3, 3), strides=(2, 2), activation=tf.nn.relu)  # 13x13x8
        net = tf.layers.batch_normalization(net, training=is_training)
        net = tf.layers.conv2d(net, 1, (3, 3), strides=(2, 2), activation=tf.nn.relu)  # 6x6x1
        net = tf.layers.flatten(net)  # 36
        net = tf.layers.dense(net, 1)
    return net

def tf_keras_model(features, is_training):
    with tf.variable_scope('tf_model', reuse=tf.AUTO_REUSE):
        net = tf.keras.layers.Conv2D(8, (3, 3), strides=(2, 2), activation=tf.nn.relu)(features)  # 13x13x8
        net = tf.keras.layers.BatchNormalization()(net, training=is_training)
        net = tf.keras.layers.Conv2D(1, (3, 3), strides=(2, 2), activation=tf.nn.relu)(net)  # 6x6x1
        net = tf.keras.layers.Flatten()(net)  # 36
        net = tf.keras.layers.Dense(1)(net)
    return net

def get_bn_vars(collection):
    moving_mean, moving_variance = None, None
    for var in collection:
        name = var.name.lower()
        if "variance" in name:
            moving_variance = var
        if "mean" in name:
            moving_mean = var
    if moving_mean is not None and moving_variance is not None:
        return moving_mean, moving_variance
    raise ValueError("Unable to find moving mean and variance")

def main_layers(case):
    positive, positive_labels = get_positive(10)
    negative, negative_labels = get_negative(10)
    model_true = tf_model(positive, True)
    loss = tf.losses.sigmoid_cross_entropy(positive_labels, model_true)
    if case == 2:
        model_false = tf_model(negative, True)
        loss += tf.losses.sigmoid_cross_entropy(negative_labels, model_false)
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.GradientDescentOptimizer(1e-3).minimize(loss)
    mean, variance = get_bn_vars(tf.global_variables())
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        while True:
            try:
                loss_value, _ = sess.run([loss, train_op])
                print("loss:", loss_value)
            except tf.errors.OutOfRangeError:
                break
        print(sess.run([mean, variance]))

def main_tf_keras_layers(case):
    tf.keras.backend.set_learning_phase(True)
    positive, positive_labels = get_positive(10)
    negative, negative_labels = get_negative(10)
    model_true = tf_keras_model(positive, True)
    loss = tf.losses.sigmoid_cross_entropy(positive_labels, model_true)
    if case == 2:
        model_false = tf_model(negative, True)
        loss += tf.losses.sigmoid_cross_entropy(negative_labels, model_false)
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.GradientDescentOptimizer(1e-3).minimize(loss)
    mean, variance = get_bn_vars(tf.global_variables())
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        while True:
            try:
                loss_value, _ = sess.run([loss, train_op])
                print("loss:", loss_value)
            except tf.errors.OutOfRangeError:
                break
        print(sess.run([mean, variance]))
```

If we run `main_layers(1)` we get:

```
loss: 0.69315034
loss: 0.6928972
loss: 0.69264734
loss: 0.6923976
loss: 0.6921481
[array([0., 0., 0., 0.01786391, 0.02162646, 0., 0.00420831, 0.], dtype=float32),
 array([0.95099014, 0.95099014, 0.95099014, 0.95099014,
        0.95099014, 0.95099014, 0.95099014, 0.95099014], dtype=float32)]
```

However, if we run `main_tf_keras_layers(1)` we get:

```
loss: 0.6930456
loss: 0.69101626
loss: 0.688996
loss: 0.68698376
loss: 0.68497455
[array([0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32),
 array([1., 1., 1., 1., 1., 1., 1., 1.], dtype=float32)]
```

It turns out that the parameters of `tf.keras.layers.BatchNormalization` are not updated properly; they would be expected to be similar to those of `tf.layers.batch_normalization`.

**Other info / logs**: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow
Encountering fatal Python error with TFLite experimental converter
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: TensorFlow latest-gpu Docker image on an Ubuntu 20.04 host on a GCP Compute Engine VM
- Mobile device: N/A
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.6.9
- Bazel / GCC version (if compiled from source): N/A
- CUDA/cuDNN version: 10.2 (using the Docker tensorflow-gpu image)
- GPU model and memory: Tesla V100-SXM2, 16 GB

**Describe the current behavior**: I'm using the new QAT API demonstrated in the docs here. Then I'm taking the resulting model and converting it to TFLite, again referencing the docs. Everything works as advertised until converting the model, where I encounter an error (see logs below).

**Describe the expected behavior**: I expected to get a converted model I could save as a TFLite binary. I was only able to do this when explicitly disabling the experimental converter.

**Standalone code to reproduce the issue**: I don't have this available at the moment; I will try to provide it when able. In the meantime, this is the code snippet surrounding the problem — let me know if this is insufficient:

```python
model = tfmot.quantization.keras.quantize_model(model)
model.compile(optimizer=optimizer, loss=loss)
history = model.fit(train_dataset, epochs=FLAGS.epochs,
                    callbacks=callbacks, validation_data=val_dataset)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# converter.experimental_new_converter = False  # must uncomment this line to convert successfully
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('output.tflite', 'wb') as f:
    f.write(tflite_model)
```

Our model is made up of the following Keras layers: Add, Concatenate, Conv2D, Input, Lambda, LeakyReLU, MaxPool2D, UpSampling2D, ZeroPadding2D and BatchNormalization, organized into several sub-models. Each submodel is quantized on its own, because the tfmot API does not have native support for submodels; this issuecomment-623586866 is the recommended workaround until that is added.

**Other info / logs**: it's a long one. Here is what was spit out. All lines following line 59 were output together at the end: from my point of view, the console didn't update from line 59 for several minutes, until it all appeared together. I checked nvidia-smi while I waited and noticed GPU VRAM usage was at 100% (almost 16 GB being used by the Python process running this script), but GPU utilization was at 0%. Checking htop showed RAM and CPU usage very low at the time.
tensorflowtensorflow
Request to include tf.depth_to_space in TFLite
Bug
**System information**
- OS platform and distribution: Windows 10
- TensorFlow installed from: binary
- TensorFlow version (or github SHA if from source): 1.14

**Provide the text output from tflite_convert**:

```
ValueError: Didn't find custom op for name 'DepthToSpace' with version 1. Registration failed.
```

**Standalone code to reproduce the issue**:

```python
import tensorflow as tf

graph_def_file = 'frozen_model.pb'
input_arrays = ['Placeholder']
output_arrays = ['DepthToSpace']
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays,
    input_shapes={'Placeholder': [200, 200, 200, 4]})
converter.allow_custom_ops = True
tflite_model = converter.convert()
open('fixed_shape.tflite', 'wb').write(tflite_model)
```

Link to the model: (not provided)

**Any other info / logs**: I tried testing with TensorFlow version 2.0 and also tf-nightly; it did not work out.
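For reference, the rearrangement that the missing `DepthToSpace` op performs can be sketched in pure Python (an illustrative stand-in, not the TensorFlow kernel): channel data of each spatial position is moved into a `block x block` spatial neighborhood, following the NHWC semantics documented for `tf.nn.depth_to_space`.

```python
def depth_to_space(x, block):
    """Pure-Python sketch of DepthToSpace on a single image.

    x: nested lists indexed [h][w][c], with the channel count divisible
    by block**2. Returns a (H*block) x (W*block) x (C // block**2) array,
    where output[h*block+i][w*block+j][c] = x[h][w][(i*block + j)*Cout + c].
    """
    H, W, C = len(x), len(x[0]), len(x[0][0])
    assert C % (block * block) == 0, "channels must be divisible by block**2"
    Cout = C // (block * block)
    out = [[[0] * Cout for _ in range(W * block)] for _ in range(H * block)]
    for h in range(H):
        for w in range(W):
            for i in range(block):
                for j in range(block):
                    for c in range(Cout):
                        # each group of Cout channels fills one output pixel
                        out[h * block + i][w * block + j][c] = x[h][w][(i * block + j) * Cout + c]
    return out
```

For example, a 1x1x4 input `[[[0, 1, 2, 3]]]` with block size 2 becomes the 2x2x1 output `[[[0], [1]], [[2], [3]]]`.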
tensorflowtensorflow
fused argument of BatchNormalization is not saved in the model
Bug
The Batch Normalization layer's `fused` argument is not part of the saved model (h5/json). Tested with TF version 2.1.0 (CPU), Python 3.7.7.

```python
from tensorflow.keras import layers
from tensorflow.keras.models import Model

x = layers.Input((32, 32))
m = Model(x, layers.BatchNormalization(fused=False)(x))
print('fused' in m.to_json())
```

Output: `False`. The expected output would be `True`.

I found this issue when trying to get equal predictions in different versions of TensorFlow. I made a model with Batch Normalization layers and tested the model in TF 1.12 and TF 2.1.0 (both CPU versions, if that matters). With the same input I got almost the same predictions, but they differ at the 4th or 5th decimal; the difference may become larger if the model is deep. Once I manually added `"fused": false` to the model JSON, the predictions became exactly the same, at least after the batch norm layer. Fixing this issue can help during sanity checks of the model for developers like me who work with different TF versions simultaneously.
tensorflowtensorflow
import util issue
Bug
Please go to Stack Overflow for help and support. If you open a GitHub issue, here is our policy: 1. It must be a bug, a feature request, or a significant problem with the documentation (for small doc fixes please send a PR instead). 2. The form below must be filled out. 3. It shouldn't be a TensorBoard issue (those go here). Here's why we have that policy: TensorFlow developers respond to issues; we want to focus on work that benefits the whole community, e.g. fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed; we want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.

**System information**: Have I written custom code (as opposed to using a stock example script provided in TensorFlow) / OS platform and distribution (e.g. Linux Ubuntu 16.04) / Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device / TensorFlow installed from (source or binary) / TensorFlow version (use command below) / Python version / Bazel version (if compiling from source) / GCC/compiler version (if compiling from source) / CUDA/cuDNN version / GPU model and memory / Exact command to reproduce. You can collect some of this information using our environment capture script. You can obtain the TensorFlow version with: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`.

**Describe the problem**: describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.

**Source code / logs**: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
tensorflowtensorflow
Floating point exception while executing the tf.unravel_index function
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Linux Ubuntu 18.04.3 LTS
- Mobile device: No
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.6.9
- Bazel / GCC version (if compiled from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**: when passing a 0 in the `dims` argument of the `tf.unravel_index` function, a floating point exception occurs because of a division by zero in the mod op function at tensorflow/core/kernels/unravel_index_op.cc:29.

**Describe the expected behavior**: no crash at the C++ level; I would expect an exception in Python saying that the `dims` argument should not contain 0.

**Standalone code to reproduce the issue**:

```python
import tensorflow as tf
tf.unravel_index(indices=[2, 5, 7], dims=[3, 0])
```

**Other info / logs**: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
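The mechanics of the crash can be seen in a pure-Python sketch of unravel_index (an illustrative stand-in, not the TensorFlow kernel): the inner loop takes `index % dim` for each dimension, so a 0 in `dims` divides by zero — a `ZeroDivisionError` in Python, but an unchecked SIGFPE in C++. The guard shown here is the kind of validation the report asks for.

```python
def unravel_index(indices, dims):
    """Convert flat indices into coordinate tuples for a grid of shape `dims`.

    Raises ValueError (instead of crashing) if any dimension is zero,
    since the modulo/division below would otherwise divide by zero.
    """
    if any(d == 0 for d in dims):
        raise ValueError("all entries in dims must be non-zero, got %r" % (dims,))
    coords = []
    for idx in indices:
        cur, coord = idx, []
        for d in reversed(dims):
            coord.append(cur % d)  # divides by d: must not be 0
            cur //= d
        coords.append(tuple(reversed(coord)))
    return coords
```

With the report's inputs, `unravel_index([2, 5, 7], (3, 3))` returns `[(0, 2), (1, 2), (2, 1)]`, while `dims=(3, 0)` raises a clean `ValueError` rather than killing the process.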
tensorflowtensorflow
2.3 nightly build produces errors when initializing the system in TF documentation's Colab tutorial
Bug
**System information**
- Have I written custom code: No
- TensorFlow installed from: pip (tf-nightly)
- TensorFlow version: 2.3.0-dev20200605

**Describe the current behavior**: the TPU won't initialize in Colab using the nightly build.

**Describe the expected behavior**: runs the notebook successfully, as one would if using TF 2.2.

**Standalone code to reproduce the issue**:
1. Load in Colab.
2. Run `!pip install tf-nightly` in a new cell before running anything else.
3. Run the TPU initialization section of the notebook.
4. Observe the following error:

```
InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified;
Op<name=_Send; signature=tensor:T -> ; attr=T:type; attr=tensor_name:string;
attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string;
attr=client_terminated:bool,default=false; is_stateful=true>; NodeDef: {{node _Send}}
```
tensorflowtensorflow
TFLite cross-compile arm64 build error
Bug
As I was currently trying TensorFlow Lite for the arm64 architecture, I just tried to follow the instructions from below ("Cross-compile for arm64"), but I simply get a compilation error:

```
tensorflow/lite/tools/make/downloads/ruy/ruy/cpuinfo.cc:9:21: fatal error: cpuinfo.h: No such file or directory
```

I was surprised this starter build does not work out of the box. BTW, I was trying to do the above in an Ubuntu 16.04 VM.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Linux Ubuntu 16.04
- Mobile device: (n/a)
- TensorFlow installed from: `git clone`
- TensorFlow version: latest
- Python / Bazel / GCC / CUDA / GPU / exact command to reproduce: (not provided)

**Source code / logs**: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
tensorflowtensorflow
tf.summary.flush segfaults when writer is not a valid tf.summary.SummaryWriter object
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0 and v2.1.0-rc2-17-ge5bf8de 2.1.0
- Python version: 3.7.6
- Bazel / GCC version (if compiled from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**: `tf.summary.flush` doesn't have an input validity check to ensure `writer` is a `tf.summary.SummaryWriter`; giving it an ndarray as `writer` makes it segfault.

**Describe the expected behavior**: the function shouldn't segfault and should have a proper input check.

**Standalone code to reproduce the issue**:

```python
import tensorflow as tf
import numpy as np
tf.summary.flush(writer=np.random.rand(2, 2))  # causes segfault
```

**Other info / logs**: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
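The fix the report asks for is a guard at the Python layer before the call crosses into native code. A minimal sketch of that pattern, with a hypothetical `SummaryWriter` stand-in (not the real TensorFlow class): validate the type and raise `TypeError`, so a bad argument fails safely instead of crashing the process.

```python
class SummaryWriter:
    """Hypothetical stand-in for tf.summary.SummaryWriter."""

    def flush(self):
        # the real class would flush buffered events via native code
        return True


def flush(writer):
    """Guarded flush: reject anything that is not a SummaryWriter
    before handing it to (potentially crash-prone) native code."""
    if not isinstance(writer, SummaryWriter):
        raise TypeError(
            "writer must be a SummaryWriter, got %s" % type(writer).__name__)
    return writer.flush()
```

With this guard, `flush(np.random.rand(2, 2))` would raise a clean `TypeError` rather than a segfault.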
tensorflowtensorflow
AttributeError: 'PerReplica' object has no attribute 'begin'
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Linux Ubuntu
- TensorFlow installed from: source
- TensorFlow version: 1.14
- Python version: Python 2

**Describe the current behavior**: I'm trying to distribute evaluation on 2 GPUs on my local dev server using MirroredStrategy, but I'm getting errors as follows:

```
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 477, in evaluate, name=name
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 517, in _actual_eval, return _evaluate()
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 508, in _evaluate, output_dir=self.eval_dir(name))
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1609, in _evaluate_run, config=self._session_config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/evaluation.py", line 269, in _evaluate_once, session_creator=session_creator, hooks=hooks) as session
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1007, in __init__, stop_grace_period_secs=stop_grace_period_secs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 713, in __init__, h.begin()
AttributeError: 'PerReplica' object has no attribute 'begin'
```

I also noticed an "is not used by distribute strategy" message in the logs, which I don't get in distributed training using MirroredStrategy:

```
2020-06-04 18:40:02.149347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 14006 MB memory) -> physical GPU (device: 0, name: Quadro RTX 5000, pci bus id: 0000:17:00.0, compute capability: 7.5)
2020-06-04 18:40:02.149933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:1 with 14039 MB memory) -> physical GPU (device: 1, name: Quadro RTX 5000, pci bus id: 0000:65:00.0, compute capability: 7.5)
INFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0
```

My distributed model evaluation is as follows:

```python
strategy = tf.compat.v1.distribute.MirroredStrategy()
self.run_config = tf.estimator.RunConfig(
    model_dir=self.job_dir,
    save_summary_steps=self.save_summary_steps,
    session_config=session_config,
    eval_distribute=strategy)
estimator = tf.estimator.Estimator(model_fn=eval_model_fn, config=self.run_config)
eval_result = estimator.evaluate(
    input_fn=input_tf_dataset,
    steps=eval_steps,
    name=self.eval_name,
    hooks=hooks,
    checkpoint_path=weights_path)
```

**Describe the expected behavior**: I expect distributed evaluation to work, because I also have distributed training + evaluation working properly, as follows:

```python
self.run_config = tf.estimator.RunConfig(
    model_dir=self.job_dir,
    save_checkpoints_steps=self.save_checkpoints_steps,
    save_checkpoints_secs=self.save_checkpoints_secs,
    keep_checkpoint_max=self.keep_checkpoint_max,
    save_summary_steps=self.save_summary_steps,
    session_config=session_config,
    train_distribute=distribute_strategy)
train_spec = tf.estimator.TrainSpec(
    input_fn=train_input_tf_dataset, max_steps=max_steps, hooks=train_hooks)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_tf_dataset, steps=eval_steps,
    name=self.eval_name, hooks=eval_hooks)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```

**Standalone code to reproduce the issue**: I don't have standalone code to reproduce the issue right now.

**Other info / logs**: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. I also found this issue, but in my case, even if I remove the hooks I still get a similar error.
tensorflowtensorflow
TFLite cannot run inference on converted model: Regular TensorFlow ops are not supported by this interpreter
Bug
**System information**: OSX, TF 2.3.0-dev20200602

**Command used to run the converter or code** (if you're using the Python API):

Conversion code:

```python
converter = tf.lite.TFLiteConverter.from_saved_model(curr_dir + '/saved_model')
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile(curr_dir + '/model.tflite', 'wb') as f:
    f.write(tflite_model)
```

Inference code (to compare inference):

```python
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
```

The model I'm trying to convert to TFLite and run inference on is ssdlite_mobilenetv2, obtained from the model zoo.

**Failure details**: the conversion is successful; however, I cannot run inference. Here is the error that I run into:

```
RuntimeError: Regular TensorFlow ops are not supported by this interpreter.
Make sure you apply/link the Flex delegate before inference.
Node number 3 (FlexTensorArrayV3) failed to prepare.
```

I've been playing around with converter settings with no luck, i.e. combinations of:

```python
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
```

With none of the settings above set, or with the supported ops set, I can convert the model but cannot run inference (with a similar error as above). With optimizations set to DEFAULT, it gives me an error during conversion.
tensorflowtensorflow
GPU delegate gives different results from CPU
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Linux Ubuntu 18.04
- Mobile device (if the issue happens on a mobile device): Pixel 3 XL
- TensorFlow installed from: source
- TensorFlow version: r2.2.0 and master
- Python version: 3.6
- Bazel version (if compiled from source): 3.0.0
- GCC/compiler version (if compiled from source): default
- CUDA/cuDNN version: 10.0
- GPU model and memory: (not provided)

**Describe the current behavior**: the GPU delegate gives very different results from the CPU. I was able to hard-code the source in tensorflow/lite/delegates/gpu/common/model_builder.cc to allow some operations to be delegated to the GPU. Out of 270 operations, delegating just one CONV_2D produces very similar results to the CPU-only ones; delegating a few more operations seems to produce big differences; delegating just the first MUL operation produces very different results.

**Describe the expected behavior**: the results should be close.

**Standalone code to reproduce the issue**: I personally hijacked the TFLite tools benchmark code and gave a sample image as deterministic input instead of random input. I would love to provide my code changes if it helps.

**Other info / logs**: the TFLite model was converted from the InsightFace ArcFace MXNet model (3.2 model); a link to download the TFLite file is provided, and the graph of the above model visualized is also attached (official_arcface_no_sub.zip).
tensorflowtensorflow
Time series forecasting tutorial has a bias issue
Bug
**URL(s) with the issue**: (time series forecasting tutorial)

**Description of issue (what needs changing?)**: in the time series forecasting tutorial, the normalization is done prior to obtaining the time-series windows. Consider this:

```python
uni_data = (uni_data - uni_train_mean) / uni_train_std
```

This is done before:

```python
x_train_uni, y_train_uni = univariate_data(uni_data, 0, TRAIN_SPLIT,
                                           univariate_past_history,
                                           univariate_future_target)
```

This causes the past-history samples to use values of the future target as well during the normalization. This is a bias: in reality we cannot use future values to normalize current values. This, I think, is a bias and a bug.

**Correct links / parameters defined / returns defined / raises listed and defined / usage example**: normalization should be done after the extraction of the sequences, and only use the LHS of each sequence. I still do not know whether normalizing the RHS of the sequence is desired, but it does not hurt as long as we denormalize.

**Request visuals, if applicable / Submit a pull request?**
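The fix the report proposes — scale each window using statistics from its history (the LHS) only, and apply the same statistics to the target so it can be denormalized later — can be sketched in pure Python. The function name and signature here are illustrative, not from the tutorial.

```python
def normalize_window(history, future):
    """Normalize one forecasting window using only its past values.

    `history` is the LHS (model input), `future` the RHS (targets).
    Mean and std are computed from `history` alone, so no future value
    leaks into the scaling; the same stats are applied to `future`,
    which keeps the transform invertible (denormalizable).
    """
    mean = sum(history) / len(history)
    var = sum((v - mean) ** 2 for v in history) / len(history)
    std = var ** 0.5 or 1.0  # guard against a constant history
    scale = lambda values: [(v - mean) / std for v in values]
    return scale(history), scale(future)
```

Applied per extracted window, this keeps the evaluation free of look-ahead bias: the statistics used at time t are exactly the ones that would have been available at time t.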
tensorflowtensorflow
Incorrect results of MklMaxPoolGrad
Bug
**System information**
- OS platform and distribution: Arch Linux 5.5.2-arch1-1 x86_64
- TensorFlow installed from: source
- TensorFlow version: v1.12.1-33097-g83eb4048ba 2.2.0 and v2.2.0-0-g2b96f3662b 2.2.0
- Python version: 3.6.10
- Bazel version: 3.0.0 for master, 2.0.0 for r2.2
- GCC/compiler version: GCC 9.3.0

The packages were built with the commands:

```bash
bazel build --config=mkl //tensorflow/tools/pip_package:build_pip_package
# build pip package; for master (commit 83eb40):
bazel-bin/tensorflow/tools/pip_package/build_pip_package --nightly_flag master-83eb40
# for r2.2:
bazel build --config=mkl //tensorflow/tools/pip_package:build_pip_package
```

**Describe the current behavior**: the gradient of the 2-D max pooling is wrong.

Code:

```python
import numpy as np
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()

x = np.array([[3, 0, 0, 2, 3],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 3],
              [0, 0, 0, 0, 0],
              [1, 1, 3, 8, 6]]).astype(np.float32).reshape(1, 5, 5, 1)
x_t = tf.compat.v1.placeholder(tf.float32, shape=(1, 5, 5, 1))
w = np.array([1]).reshape(1, 1, 1, 1).astype(np.float32)
conv_t = tf.nn.conv2d(x_t, w, [1, 1, 1, 1], 'SAME')
pool_t = tf.nn.max_pool(conv_t, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')
grad_t, = tf.gradients(ys=pool_t, xs=conv_t)
tensors = [conv_t, pool_t, grad_t]
tensors = [tf.squeeze(t) for t in tensors]
with tf.compat.v1.Session() as sess:
    conv, pool, grad = sess.run(tensors, feed_dict={x_t: x})
    print('conv:\n', conv, '\npool:\n', pool, '\ngrad:\n', grad)
```

Output:

```
conv:
[[3. 0. 0. 2. 3.]
 [0. 0. 0. 0. 1.]
 [0. 0. 0. 1. 3.]
 [0. 0. 0. 0. 0.]
 [1. 1. 3. 8. 6.]]
pool:
[[3. 2.]
 [0. 1.]]
grad:
[[1. 0. 1. 0. 0.]
 [0. 0. 0. 0. 0.]
 [1. 0. 1. 0. 0.]
 [0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0.]]
```

**Describe the expected behavior**: if we run the code with TF_DISABLE_MKL=1, the gradient will be:

```
[[1. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0.]
 [1. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0.]]
```

Note that the positions of the second 1s in the first and the third rows are different.

**Other info / logs**: if I directly feed the input to max_pool, the result is correct:

```python
x_t = tf.compat.v1.placeholder(tf.float32, shape=(1, 5, 5, 1))
pool_t = tf.nn.max_pool(x_t, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')
grad_t, = tf.gradients(ys=pool_t, xs=x_t)
tensors = [tf.squeeze(t) for t in [x_t, pool_t, grad_t]]
with tf.compat.v1.Session() as sess:
    xv, pool, grad = sess.run(tensors, feed_dict={x_t: x})
    print('x:\n', xv, '\npool:\n', pool, '\ngrad:\n', grad)
```

which prints the same `x` and `pool` as above, with the correct gradient (1s at positions (0,0), (0,3), (2,0) and (2,3)).

In addition, if I replace the conv2d with relu (which is also rewritten), the result is also correct:

```python
relu_t = tf.nn.relu(x_t)
pool_t = tf.nn.max_pool(relu_t, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')
grad_t, = tf.gradients(ys=pool_t, xs=relu_t)
tensors = [tf.squeeze(t) for t in [relu_t, pool_t, grad_t]]
with tf.compat.v1.Session() as sess:
    relu, pool, grad = sess.run(tensors, feed_dict={x_t: x})
    print('relu:\n', relu, '\npool:\n', pool, '\ngrad:\n', grad)
```

which again prints the correct gradient.

I checked the logs with TF_CPP_MIN_VLOG_LEVEL=1 and confirmed that the ops were rewritten. It seems that the result is affected by the convolution.
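The expected (non-MKL) gradient can be verified by hand with a tiny pure-Python reference (an assumption-laden sketch, not TensorFlow code: 2x2 window, stride 2, VALID padding, upstream gradient of ones): each pooling window routes its entire gradient to the cell holding the window's first maximum.

```python
def max_pool_grad_2x2(x):
    """Reference gradient of 2x2/stride-2 VALID max pooling on a 2-D list.

    For each window, adds 1.0 at the position of the first maximum
    (ties broken in row-major order, matching the non-MKL kernel's
    behavior on the example in this report).
    """
    rows, cols = len(x), len(x[0])
    grad = [[0.0] * cols for _ in range(rows)]
    for r in range(0, rows - 1, 2):
        for c in range(0, cols - 1, 2):
            cells = [(r + i, c + j) for i in range(2) for j in range(2)]
            br, bc = max(cells, key=lambda p: x[p[0]][p[1]])  # first max wins ties
            grad[br][bc] += 1.0
    return grad
```

On the report's 5x5 input, this puts 1s at (0,0), (0,3), (2,0) and (2,3) — the `TF_DISABLE_MKL=1` result — confirming that the MKL gradient, which places 1s at columns 0 and 2 instead, routes the window-(0,2..3) and window-(2,2..3) gradients to the wrong cells.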
tensorflowtensorflow
tensorflow overfit and underfit
Bug
Running `%tensorboard --logdir {logdir}/sizes` gives: `UsageError: Line magic function '%tensorboard' not found.`
tensorflowtensorflow
tf.compat.v1.setdiff1d documentation refers to out_idx as an argument
Bug
**URL(s) with the issue**: (tf.compat.v1.setdiff1d documentation)

**Description of issue (what needs changing?)**: in the Args section there is an input `out_idx`, but it is not in the signature, and the function doesn't accept the argument.

Running the code:

```python
tf.compat.v1.setdiff1d([1], [1], out_idx=tf.dtypes.int32, name=None)
```

gets the exception:

```
TypeError: setdiff1d() got an unexpected keyword argument 'out_idx'
```

Parameters defined: Yes. Returns defined: Yes. Raises listed and defined: No.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: macOS Mojave 10.14
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0-rc3
- Python version: 3.8.2
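For context on what the stray doc entry refers to: `setdiff1d` returns two tensors — the values of `x` not in `y`, plus their indices in `x` — and `out_idx` names the dtype of that second output, not a keyword the function accepts. A pure-Python sketch of the two-output behavior (illustrative, not the TensorFlow implementation):

```python
def setdiff1d(x, y):
    """Return (values, indices): elements of x not present in y,
    in x's order, along with their positions in x -- the second
    output whose dtype the documented 'out_idx' is meant to select."""
    exclude = set(y)
    pairs = [(i, v) for i, v in enumerate(x) if v not in exclude]
    values = [v for _, v in pairs]
    indices = [i for i, _ in pairs]
    return values, indices
```

For example, `setdiff1d([1, 2, 3, 4], [2, 4])` gives `([1, 3], [0, 2])`.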
tensorflowtensorflow
Unclear type/dimension dependency of filters in conv1d/3d_transpose documentation
Bug
**URL(s) with the issue**: (tf.nn.conv1d_transpose / conv3d_transpose documentation)

**Description of issue (what needs changing?)**: unclear type and dimension dependency of the input `filters`. According to the document, `filters` should have the same type as `value`, and the `in_channels` dimension must match that of `value` — but it is unclear what `value` is.

Parameters defined: Yes. Returns defined: Yes. Raises listed and defined: No (the raises list is not provided or defined).

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: macOS Mojave 10.14
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0-rc3
- Python version: 3.8.2
tensorflowtensorflow
Please add an additional check in TfLiteQuantizationFree
Bug
Hi, please add an additional check for the quantization params before using them in the function (L89). The bug appears when the default TfLiteConverterCalculator is used (MediaPipe framework); please look at their source (L317). As you can see, the quant params are set to nullptr. Therefore, add an additional check for similar situations on other frameworks. For example:

```c
if (q_params != NULL) {
  if (q_params->scale) {
    TfLiteFloatArrayFree(q_params->scale);
    q_params->scale = NULL;
  }
  if (q_params->zero_point) {
    TfLiteIntArrayFree(q_params->zero_point);
    q_params->zero_point = NULL;
  }
  free(q_params);
}
```

Thanks!
tensorflowtensorflow
Running `python freeze_graph.py --input_graph=C:\Users\Administrator\Desktop\5v6g6na\tensorflowfile\birds_data2\slim\tmp\inception_v3_inf_graph.pb --input_checkpoint=tmp/train_logs/model.ckpt-721 --input_binary=true --output_node_names=InceptionV3/Predictions/Reshape_1 --output_graph=tmp/frozen_graph.pb`
Bug
```
WARNING:tensorflow:From freeze_graph.py:124: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
W0602 23:33:08.629297 12556 deprecation.py:323] From freeze_graph.py:124: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Fatal Python error: Segmentation fault

Current thread 0x0000310c (most recent call first):
  File "d:\91userdata\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 384 in get_matching_files_v2
  File "d:\91userdata\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 363 in get_matching_files
  File "d:\91userdata\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\checkpoint_management.py", line 372 in checkpoint_exists
  File "d:\91userdata\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\util\deprecation.py", line 324 in new_func
  File "freeze_graph.py", line 124 in freeze_graph_with_def_protos
  File "freeze_graph.py", line 357 in freeze_graph
  File "freeze_graph.py", line 374 in main
  File "freeze_graph.py", line 482 in run_main
  File "d:\91userdata\anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 250 in _run_main
  File "d:\91userdata\anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 299 in run
  File "d:\91userdata\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run
  File "freeze_graph.py", line 483 in run_main
  File "freeze_graph.py", line 487 in <module>
```

I get this error. TF 1.14.0 (CPU).
tensorflow/tensorflow
RNN with `unroll=True` cannot be converted using `from_keras_model`
Bug
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): tf-nightly
- TensorFlow version (or github SHA if from source): version is 2.2.0-dev20200508, git version is v1.12.1-31489-g6047d50555

**Describe the current behavior**
Conversion fails with the following ValueError:

```
ValueError: Cannot unroll a RNN if the time dimension is undefined. If using a Sequential model, specify the time dimension by passing an `input_shape` or `batch_input_shape` argument to your first layer. If your first layer is an Embedding, you can also use the `input_length` argument. If using the functional API, specify the time dimension by passing a `shape` or `batch_shape` argument to your Input layer.
```

This seems to be due to the following addition to the `from_keras_model` conversion:

```python
self._keras_model.save(temp_dir, save_format="tf")
```

**Describe the expected behavior**
Conversion should succeed, as it did in previous versions.

**Standalone code to reproduce the issue**

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model


class SimpleModel(Model):

    def __init__(self):
        super().__init__()
        self.model_name = 'mnist'
        self.train_data = None
        self.test_data = None
        self.calib_data = None
        self.num_calib = 1000
        # Data preprocessing: normalize the input images so that each
        # pixel value is between 0 and 1.
        self.pre_process = lambda x: x / 255.0
        self.load_data()

    def load_data(self):
        # Load the MNIST dataset: (images, labels) pairs.
        mnist = tf.keras.datasets.mnist
        self.train_data, self.test_data = mnist.load_data()
        self.calib_data = self.pre_process(
            self.train_data[0][0:self.num_calib]).astype(np.float32)

    def train(self):
        cell = tf.keras.layers.GRUCell(3)
        model = tf.keras.models.Sequential([
            tf.keras.layers.Input(shape=(28, 28), name='input'),
            # tf.keras.layers.LSTM(32),
            tf.keras.layers.RNN(cell, unroll=True),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
        ])
        model.summary()
        train_images = self.pre_process(self.train_data[0])
        train_labels = self.train_data[1]
        test_images = self.pre_process(self.test_data[0])
        test_labels = self.test_data[1]
        # Train the digit classification model.
        model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        model.fit(train_images, train_labels, epochs=1,
                  validation_data=(test_images, test_labels))
        # Dump SavedModel (another bug here):
        # model.save(str(self.savedmodel_dir))
        return model

    def eval(self, tflite_model_path: str):
        interpreter = tf.lite.Interpreter(model_path=str(tflite_model_path))
        interpreter.allocate_tensors()
        input_index = interpreter.get_input_details()[0]["index"]
        output_index = interpreter.get_output_details()[0]["index"]
        test_images = self.pre_process(self.test_data[0])
        test_labels = self.test_data[1]
        # Run predictions on every image in the test dataset.
        prediction_digits = []
        for test_image in test_images:
            # Pre-processing: add batch dimension and convert to float32
            # to match the model's input data format.
            test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
            interpreter.set_tensor(input_index, test_image)
            # Run inference.
            interpreter.invoke()
            # Post-processing: remove batch dimension and find the digit
            # with highest probability.
            output = interpreter.tensor(output_index)
            digit = np.argmax(output()[0])
            prediction_digits.append(digit)
        # Compare prediction results with ground truth labels to
        # calculate accuracy.
        accurate_count = 0
        for index in range(len(prediction_digits)):
            if prediction_digits[index] == test_labels[index]:
                accurate_count += 1
        accuracy = accurate_count * 1.0 / len(prediction_digits)
        return accuracy

    def get_calib_data_func(self):
        def representative_data_gen():
            for input_value in self.calib_data:
                input_value = np.expand_dims(input_value, axis=0).astype(np.float32)
                yield [input_value]
        return representative_data_gen


if __name__ == '__main__':
    temp = SimpleModel()
    model = temp.train()
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = temp.get_calib_data_func()
    # Save int8 tflite.
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
        tf.lite.OpsSet.SELECT_TF_OPS
    ]
    converter.experimental_new_converter = True
    tflite_model_int8 = converter.convert()
    open('lstm_unrolled_int8.tflite', 'wb').write(tflite_model_int8)
```
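For context on why the error message insists on a defined time dimension: unrolling replicates the cell once per timestep as a fixed chain of ops, so the number of timesteps must be a concrete integer at graph-construction time. A minimal NumPy sketch of the idea (an illustration, not TensorFlow's implementation; the function and weight names are made up):

```python
import numpy as np


def unrolled_rnn(inputs, w_x, w_h):
    """Minimal sketch of what RNN unrolling means.

    `inputs` has shape (timesteps, features). The recurrent step is
    emitted once per timestep as a fixed chain of operations, so
    `timesteps` must be a concrete integer; a dynamic (None) time
    dimension has no fixed length to unroll over, which is what the
    converter's ValueError is about.
    """
    timesteps = inputs.shape[0]          # must be known up front
    h = np.zeros(w_h.shape[0])
    for t in range(timesteps):           # one explicit step per timestep
        h = np.tanh(inputs[t] @ w_x + h @ w_h)
    return h
```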
tensorflow/tensorflow
`class_weight` causes InvalidArgumentError on TPU
Bug
Hi, I was using `class_weight` for my unbalanced dataset and I was getting this error:

```
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-...> in <module>
      6     steps_per_epoch=steps_per_epoch,
      7     validation_data=get_validation_dataset(),
----> 8     class_weight=class_weigth,
      9 )

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    817           max_queue_size=max_queue_size,
    818           workers=workers,
--> 819           use_multiprocessing=use_multiprocessing)
    820
    821   def evaluate(self, ...)

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(...)
    340             mode=ModeKeys.TRAIN,
    341             training_context=training_context,
--> 342             total_epochs=epochs)
    343         cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN)

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(...)
    126           step=step, mode=mode, size=current_batch_size) as batch_logs:
    127         try:
--> 128           batch_outs = execution_function(iterator)
    129         except (StopIteration, errors.OutOfRangeError):

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn)
     96     # `numpy` translates Tensors to values in Eager mode.
     97     return nest.map_structure(_non_none_constant_value,
---> 98                               distributed_function(input_fn))

/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py in _numpy(self)
    908       return self._numpy_internal()
    909     except core._NotOkStatusException as e:
--> 910       six.raise_from(core._status_to_exception(e.code, e.message), None)

InvalidArgumentError: {{function_node __inference_distributed_function_431665}} Compilation failure: Detected unsupported operations when trying to compile graph have_valid_nonscalar_shape_true_402714_const_0[] on XLA_TPU_JIT: DenseToDenseSetOperation (No registered 'DenseToDenseSetOperation' OpKernel for XLA_TPU_JIT devices compatible with node {{node have_valid_nonscalar_shape/...}})
DenseToDenseSetOperation registered kernels:
  device='CPU'; T in [DT_INT8, DT_INT16, DT_INT32, DT_INT64, DT_UINT8, DT_UINT16, DT_STRING]

	 [[loss/dense_4_loss/weighted_loss/broadcast_weights/assert_broadcastable/is_valid_shape/...]]
	TPU compilation failed
	 [[tpu_compile_succeeded_assert/_2606253191027783421/_10]]
```

I am using a Kaggle notebook with a TPU; the TensorFlow version is 2.1.0 and the Keras version is 2.2.4-tf. Here's the Kaggle notebook to reproduce the error. Thanks!
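For reference, the `class_weight` dict passed to `model.fit` is just a per-class loss multiplier. A common way to build it for an unbalanced dataset is the "balanced" heuristic, `n_samples / (n_classes * count_c)`, sketched here in plain Python (the helper name is made up; this is an illustration of the argument's contents, not of the TPU bug itself):

```python
from collections import Counter


def balanced_class_weights(labels):
    """Sketch of the usual 'balanced' class-weight heuristic:
    weight_c = n_samples / (n_classes * count_c), so rarer classes get
    proportionally larger weights in the loss."""
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {c: n_samples / (n_classes * count) for c, count in counts.items()}

# e.g. model.fit(x, y, class_weight=balanced_class_weights(y_train), ...)
```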
tensorflow/tensorflow
Docs recommend stateful training, but the batches are shuffled
Bug
I'm reading this tutorial page from the documentation. The GRU layer is stateful, so it remembers its state between batches, but the batches are shuffled. Therefore I think the `stateful` parameter should be `False`.
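A toy illustration of why shuffling and statefulness conflict (plain Python, not a real RNN; the function name is made up): when state is carried across batches, the result depends on batch order, so shuffled batches feed the carried state continuations it was never meant for.

```python
def run_stateful(batches):
    """Toy stand-in for a stateful recurrent layer: the state persists
    across batches, so the outcome depends on the order in which the
    batches arrive. A stateless layer would reset `state` to 0.0 at the
    start of each batch instead."""
    state = 0.0
    history = []
    for batch in batches:
        for x in batch:
            state = 0.5 * state + x   # state carried over between batches
        history.append(state)
    return history
```

Feeding the same batches in a different order produces a different final state, which is exactly what shuffling does to a `stateful=True` layer.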
tensorflow/tensorflow
TF can't access GPU
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.1.0
- Python version: 3.6
- CUDA/cuDNN version: 10
- GPU model and memory: RTX 2060

**Describe the current behavior**
(see attached image)

**Describe the expected behavior**
TensorFlow should be able to use the GPU, but in the current case it is not able to, while PyTorch is able to in the same scenario.
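One common cause of "TensorFlow sees no GPU while PyTorch works" is a CUDA/cuDNN version mismatch: pip PyTorch wheels bundle their own CUDA runtime, whereas TensorFlow 2.1 expects the system-installed CUDA 10.1 / cuDNN 7.6 from the tested build configurations table. A rough sketch of that sanity check (version table abbreviated, and the helper name is made up):

```python
# Version pairs taken from the TensorFlow "tested build configurations"
# table (GPU section); treat this as a sketch, not an exhaustive list.
TESTED_CUDA = {
    "2.1.0": ("10.1", "7.6"),
    "2.0.0": ("10.0", "7.4"),
    "1.14.0": ("10.0", "7.4"),
}


def check_gpu_stack(tf_version, cuda, cudnn):
    """Return a list of mismatches between the installed CUDA/cuDNN and
    what this TensorFlow release was built against. For example, a
    CUDA 10.0 install with TF 2.1 (built against 10.1) is a classic
    cause of the GPU not being detected."""
    expected = TESTED_CUDA.get(tf_version)
    if expected is None:
        return ["unknown TensorFlow version: " + tf_version]
    problems = []
    if cuda != expected[0]:
        problems.append("CUDA %s installed, %s expected" % (cuda, expected[0]))
    if cudnn != expected[1]:
        problems.append("cuDNN %s installed, %s expected" % (cudnn, expected[1]))
    return problems
```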
tensorflow/tensorflow
Formatting issue in `tf.ragged.constant` documentation
Bug
**URL(s) with the issue:**

**Description of issue (what needs changing):**

**Clear description:**
Formatting issue in the Args section: the formatting of the description of `ragged_rank` is problematic. The default value should read `max(0, K - 1 - len(inner_shape))`.

- Parameters defined? yes
- Returns defined? yes
- Raises listed and defined? yes
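For clarity, the default the report is referring to can be written as a one-line helper (plain Python, illustration only; the function name is made up, and `K` is the nesting depth of scalar values in `pylist` per the docs):

```python
def default_ragged_rank(k, inner_shape=()):
    """Sketch of the documented default for tf.ragged.constant's
    ragged_rank argument: max(0, K - 1 - len(inner_shape)), where K is
    the nesting depth of the scalar values in pylist."""
    return max(0, k - 1 - len(inner_shape))
```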
tensorflow/tensorflow
`tf.hessians` documentation refers to `colocate_gradients_with_ops` as an argument
Bug
**URL(s) with the issue:**

**Description of issue (what needs changing):**

**Clear description:**
In the Args section there is an input `colocate_gradients_with_ops`, but it is not in the signature and the function doesn't accept the argument.

Running the code:

```python
tf.hessians(1, 1, colocate_gradients_with_ops=None)
```

gets the exception:

```
TypeError: HessiansV2() got an unexpected keyword argument 'colocate_gradients_with_ops'
```

- Parameters defined? yes
- Returns defined? yes
- Raises listed and defined? yes

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0-rc3
- Python version: 3.8.2
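As a reminder of what the function computes (independent of the doc bug), `tf.hessians` returns the matrix of second partial derivatives of a scalar function. A finite-difference sketch in NumPy (illustration only; TensorFlow does this with autodiff, and the helper name is made up):

```python
import numpy as np


def hessian_fd(f, x, eps=1e-4):
    """Finite-difference sketch of a Hessian: H[i, j] is the second
    partial derivative of the scalar function f at point x, estimated
    from four function evaluations per entry."""
    n = len(x)
    h = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n)
            e_i[i] = eps
            e_j = np.zeros(n)
            e_j[j] = eps
            # Central-style mixed second difference.
            h[i, j] = (f(x + e_i + e_j) - f(x + e_i)
                       - f(x + e_j) + f(x)) / eps**2
    return h

# e.g. the Hessian of f(x) = x0**2 + 3*x0*x1 is [[2, 3], [3, 0]] everywhere.
```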
tensorflow/tensorflow
Unclear shape dependency of `value` in `tf.keras.backend.moving_average_update` documentation
Bug
**URL(s) with the issue:**

**Description of issue (what needs changing):**

**Clear description:**
Unclear shape dependency of the input `value`. According to the documentation, `value` should have the same shape as `variable`, but it is unclear what `variable` is.

- Parameters defined? yes
- Returns defined? yes
- Raises listed and defined? no
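For readers hitting the same doc: per the documentation, the update performed is `variable = variable * momentum + value * (1 - momentum)`, where `variable` is the running average being maintained (e.g. a batch-norm moving mean) and `value` is the new observation folded into it, hence the same-shape requirement. A plain-NumPy sketch (the helper name is made up; this returns the new value rather than assigning in place):

```python
import numpy as np


def moving_average_update_sketch(variable, value, momentum):
    """Sketch of the exponential moving average update the docs describe:
        variable = variable * momentum + value * (1 - momentum)
    `variable` is the running average; `value` is the new observation,
    which is why the two must share a shape (elementwise arithmetic)."""
    return variable * momentum + value * (1.0 - momentum)
```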
tensorflow/tensorflow
Test case `//tensorflow/python/eager:def_function_test_cpu_only` fails on TF 2.2 due to inconsistent XLA flags
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): N/A
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.2.0
- Python version: 3.8.2
- Bazel version (if compiling from source): 2.0.0
- GCC/compiler version (if compiling from source): gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
The test case `//tensorflow/python/eager:def_function_test_cpu_only` fails with the following error log:

```
ERROR: test_experimental_compile_raises_exception_when_xla_is_unsupported (__main__.DefFunctionCpuOnlyTest)
DefFunctionCpuOnlyTest.test_experimental_compile_raises_exception_when_xla_is_unsupported
Traceback (most recent call last):
  File ".../def_function_test_cpu_only.py", line 46, in test_experimental_compile_raises_exception_when_xla_is_unsupported
    fn(1, 1, 2, 3)
  File ".../tensorflow/python/eager/def_function.py", line 576, in __call__
    result = self._call(*args, **kwds)
  File ".../tensorflow/python/eager/def_function.py", line 650, in _call
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
  File ".../tensorflow/python/eager/function.py", line 1661, in _filtered_call
    return self._call_flat(...)
  File ".../tensorflow/python/eager/function.py", line 1745, in _call_flat
    return self._build_call_outputs(self._inference_function.call(...))
  File ".../tensorflow/python/eager/function.py", line 593, in call
    outputs = execute.execute(...)
  File ".../tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ...)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node __inference_fn_7}} = __inference_fn_7[_XlaMustCompile=true, config_proto=..., executor_type=""]().
Uncompilable nodes:
Unique: unsupported op: No registered 'Unique' OpKernel for XLA_CPU_JIT devices compatible with node {{node Unique}}
	Stacktrace:
		Node: __inference_fn_7, function:
		Node: Unique, function: __inference_fn_7

----------------------------------------------------------------------
Ran 2 tests in 0.318s

FAILED (errors=1, skipped=1)
```

**Describe the expected behavior**
The test case should raise a `ValueError` ("Attempted to use experimental_compile, but XLA support is not linked in. Rebuild with --define=with_xla_support=true"), which is captured by `self.assertRaisesRegexp`, and the test case should pass.

**Standalone code to reproduce the issue**
I modified the test case a little bit and it can be reproduced from this gist.

**Other info / logs**
I looked into this issue; it seems to be caused by inconsistent internal APIs. When debugging this test file I got the following result:

```
> .../def_function_test_cpu_only.py(47)test_experimental_compile_raises_exception_when_xla_is_unsupported()
-> fn(1, 1, 2, 3)
(Pdb) p test_util.is_xla_enabled()
False
(Pdb) p pywrap_tfe.TF_IsXlaEnabled()
True
```

Basically, the test proceeds if `test_util.is_xla_enabled()` is `False`, and it would raise the exception as intended when `pywrap_tfe.TF_IsXlaEnabled()` is `False`. This inconsistency causes the exception not to be raised correctly. One way to fix it would be to discard `test_util.is_xla_enabled()` and only use `pywrap_tfe.TF_IsXlaEnabled()`, but I think maybe it's better to solve the inconsistency itself. I noticed that the `pywrap_tfe.TF_IsXlaEnabled()` API and this test case were added in the same commit, and I assume the cause of the inconsistency is that the function `SetXlaIsEnabled` is called incorrectly. Could you look into this issue and check why the function is called when XLA is not enabled? Thanks, Sidong
tensorflow/tensorflow
InvalidArgumentError: Incompatible shapes: [200,10] vs. [200,5] [[node LogicalAnd_3 (defined at ...:1)]]
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, I have written custom code
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): I am running into this problem on Google Colab, which has TensorFlow 2.2.0
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): Google Colab setup (TensorFlow version 2.2.0)
- TensorFlow version (use command below): TensorFlow 2.2.0
- Python version: Python 3.6.9

**Describe the problem**
While the model is being trained, it randomly fails in any of the iterations of an epoch with an "Incompatible shapes" error for a `LogicalAnd` node, with the shapes differing as [200,10] vs. [200,5]. The differing axis-1 values here are actually 2*batch_size and batch_size, so if I change the batch size the shapes change accordingly. Also, I'm not seeing this error on my local machine (TensorFlow version 2.2.0-rc1, on a CPU) when running this same code, but on Google Colab I tried it both on CPU and GPU and I face this error.

**Source code / logs**
Deep network containing pretrained BERT and ResNet50V2 networks. The final-layer outputs of both networks are concatenated and connected to a final softmax layer of 2 neurons. There are 4 input layers for the complete network: 3 for BERT and 1 for ResNet50V2. Here's the architecture code for the network:

```python
def complete_model():
    input_word_ids = tf.keras.layers.Input((max_sequence_length,),
                                           dtype=tf.int32, name='input_word_ids')
    input_mask = tf.keras.layers.Input((max_sequence_length,),
                                       dtype=tf.int32, name='input_mask')
    input_segment = tf.keras.layers.Input((max_sequence_length,),
                                          dtype=tf.int32, name='input_segment')
    img_input = tf.keras.layers.Input((256, 256, 3))

    bert_layer = hub.KerasLayer(bert_path, trainable=True)
    sequence_output = bert_layer([input_word_ids, input_mask, input_segment])
    bert_out = tf.keras.layers.GlobalAveragePooling1D()(sequence_output)

    resnet_model = tf.keras.applications.ResNet50V2(include_top=False,
                                                    weights='imagenet',
                                                    input_tensor=img_input,
                                                    pooling='avg')
    resnet_out = resnet_model.output

    x = tf.keras.layers.concatenate([bert_out, resnet_out])
    x = tf.keras.layers.Dropout(0.4)(x)
    out = tf.keras.layers.Dense(2, activation='softmax')(x)

    model = tf.keras.models.Model(
        inputs=[img_input, input_word_ids, input_mask, input_segment],
        outputs=out)
    return model
```

The model is created and then compiled and trained with the `tf.keras` fit API. I created a `tf.keras.utils.Sequence` object for generating inputs in batches in that format. Below is the error trace which I get:

```
Epoch 1/5
  18/1700 [..............................] - ETA: 1:31:14 - loss: 1.6233 - accuracy: 0.5889 - auc: 0.5584

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 hist = model.fit(train_dataseq, epochs=5, verbose=1,
                         steps_per_epoch=8500//5,
                         validation_data=valid_dataseq,
                         validation_steps=500//5)

8 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
     64 def _method_wrapper(self, *args, **kwargs):
     65   if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66     return method(self, *args, **kwargs)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(...)
    846               batch_size=batch_size):
    847           callbacks.on_train_batch_begin(step)
--> 848           tmp_logs = train_function(iterator)
    849           # Catches OutOfRangeError for datasets of unknown size.
    850           # This blocks until the batch has finished executing.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    578       xla_context.Exit()
    579     else:
--> 580       result = self._call(*args, **kwds)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    609       # In this case we have created variables on the first call, so we run the
    610       # defunned version which is guaranteed to never create variables.
--> 611       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   2418     with self._lock:
   2419       graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 2420     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1744       # No tape is watching; skip to running the function.
-> 1745       return self._build_call_outputs(self._inference_function.call(
   1746           ctx, args, cancellation_manager=cancellation_manager))

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
---> 59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument:  Incompatible shapes: [200,10] vs. [200,5]
	 [[node LogicalAnd_3 (defined at ...:1) ]]
	 [[assert_less_equal/Assert/AssertGuard/pivot_f/_2738/_167]]
  (1) Invalid argument:  Incompatible shapes: [200,10] vs. [200,5]
	 [[node LogicalAnd_3 (defined at ...:1) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_64050]

Function call stack:
train_function -> train_function
```
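For context on the error message itself: the failing node sits under an `assert_broadcastable`-style shape check, and the shapes [200,10] and [200,5] are simply not broadcast-compatible. A plain-Python sketch of the NumPy-style rule (illustration only; the helper name is made up, and this is not TensorFlow's implementation):

```python
def broadcast_compatible(shape_a, shape_b):
    """NumPy-style broadcasting check: dimensions are compared
    right-to-left and must be equal or 1. (200, 10) vs (200, 5) fails on
    the last axis, which is the kind of mismatch behind
    'Incompatible shapes: [200,10] vs. [200,5]'."""
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True
```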
tensorflow/tensorflow
TF 2.2 halts training on 2x 2080 Ti with NVLink bridge
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04 LTS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): `pip3 install tensorflow`
- TensorFlow version (use command below): 2.2
- Python version: 3.8
- Bazel version (if compiling from source): no
- GCC/compiler version (if compiling from source): gcc 7
- CUDA/cuDNN version: 10.1 / 7.6.5, NCCL 2.6.4
- GPU model and memory: 2 x 2080 Ti, 11 GB, NVLink bridge

**Describe the current behavior**
Training halts at `model.fit`, and then Linux halts: no response to my mouse/keyboard, and I have to reboot.

**Describe the expected behavior**
Training should run.

**Standalone code to reproduce the issue**
The code is a TF 2.2 official example, with TF 2.2, CUDA 10.1, cuDNN (new, for CUDA 10.1) 7.6.5, NCCL 2.6.4, and Linux Ubuntu 20.04 LTS.

**Other info / logs**
If I remove the NVLink bridge and reboot, the above code runs fine and both GPUs have load. I tested the NVLink bridge with CUDA's utility code for NVLink speed; I think the NVLink bridge is functional and the two GPUs' peer-to-peer speed is fast. Logs can't be obtained because I have to reboot my computer: the whole Linux system halts and has no response.