| repository | issue title | labels | body |
|---|---|---|---|
| tensorflow/tensorflow | TensorFlow 2.x version used in 1.x tutorial | Bug | URL(s) with the issue: (see description). Description of issue (what needs changing): the tutorial corresponding to the 1.x version on GitHub has the 2.x version used inside it, thus leaving no tutorial corresponding to 1.x, at least for "Save and Restore". Clear description: please find the screenshot at this link. Correct links: is the link to the source code correct? Yes. Parameters defined: are all parameters defined and formatted correctly? N/A. Returns defined: are return values defined? N/A. Usage example: is there a usage example? No — a usage example for "Save and Restore" is not present for the TensorFlow 1.x version. Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? N/A. Submit a pull request? No. (Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.) |
| tensorflow/tensorflow | Broken link for tf.debugging.assert_same_float_dtype | Bug | URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): the link for tf.debugging.assert_same_float_dtype is dead. Clear description: the previous link leads to a dead page; the new link must be used — the .md file extension is missing in the link. Submit a pull request? Yes; will be updating the issue soon enough. |
| tensorflow/tensorflow | tf.keras Model.fit: strange behaviour after upgrade from 2.1 to 2.2-rc1 | Bug | I was experimenting with tf.keras.applications.inception_v3.InceptionV3 for classifying skin cancer lesions. It was going smoothly until Colaboratory upgraded its VMs' TF version from 2.1 to 2.2-rc1. Now, when loading the model from disk, it says: `WARNING:tensorflow: Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.` But most importantly, Model.fit is no longer able to properly process a keras.utils.Sequence: during training, the steps per epoch are no longer inferred from the Sequence object (it shows "1/Unknown"), and it does not actually terminate the epoch. Snippet to reproduce the issue: Colab code (HAM10000 dataset, ISIC dataset). The code stopped working overnight after Colab updated their VM image. In the code, the HAM dataset is automatically downloaded and rearranged, provided you have a kaggle.json file with the API token; the ISIC dataset needs to be downloaded manually from their official website or from my GDrive (here). NB: even if I create a new network and start training it with a Sequence, the steps per epoch are not inferred either. |
| tensorflow/tensorflow | tf.data Dataset.cache behaves differently with filter for TF 1.x | Bug | Please make sure that this is a bug (as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: N/A. TensorFlow installed from: binary (pip). TensorFlow version: v1.15.0-rc3-22-g590d6eef7e 1.15.0. Python / Bazel / GCC / CUDA-cuDNN versions, GPU model and memory: N/A. Describe the current behavior: when applying filter before cache on a tf.data.Dataset, the output when caching to a file differs from caching to memory. Describe the expected behavior: they should be the same; TF 2.x works as expected. Standalone code to reproduce the issue — cache to memory: `import tensorflow as tf; tf.enable_eager_execution(); data = tf.data.Dataset.from_tensor_slices(list(range(50))); data = data.filter(lambda x: tf.random.uniform([]) < 0.5); data = data.cache(); outputs = [x for x in data]; outputs_1 = [x for x in data]; assert len(outputs) == len(outputs_1); for x, y in zip(outputs, outputs_1): assert x.numpy() == y.numpy()`. Cache to file: same script, but with `data = data.cache('/tmp/dummy_data')`. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. |
| tensorflow/tensorflow | I met a problem with HdfsWritableFile::Append | Bug | System information: OS platform and distribution: Linux CentOS 7. TensorFlow installed from source or binary: found in TF 1.10, reproduced in master. Python version: found in Python 2.7, reproduced in Python 3.6.5. Bazel version: found in 0.15.2, reproduced in 2.0.0. GCC version: found in 4.8.5, reproduced in 7.3.0. I met a problem with HdfsWritableFile::Append. Background: (1) I save models and checkpoints in HDFS; (2) my users want to add a big dict (30 million data, above 3 GB) to the graph. The problem: HDFS aborts (quits) when TF saves graph.pbtxt to HDFS. Part of the log: File "/usr/lib/python2.7/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 450, in after_create_session ("graph.pbtxt"); File ".../tensorflow/python/framework/graph_io.py", line 71, in write_graph (text_format.MessageToString(graph_def)); File ".../tensorflow/python/lib/io/file_io.py", line 434, in atomic_write_string_to_file (write_string_to_file(temp_pathname, contents)); File ".../tensorflow/python/lib/io/file_io.py", line 314, in write_string_to_file (f.write(file_content)); File ".../tensorflow/python/lib/io/file_io.py", line 111, in write (compat.as_bytes(file_content), self._writable_file, self._status); File ".../tensorflow/python/framework/errors_impl.py", line 519, in __exit__ (c_api.TF_GetCode(self.status.status)); InvalidArgumentError: viewfs://hadoop-meituan-xxxx01/user/hadoop-waimai/xxxx/model/model_date/graph.pbtxt.tmp04d0a32366f548ec9f3aa629600fa19f: Invalid argument. I investigated this by logging, and concluded that the graph is too big to save. Problem code: `Status HdfsWritableFile::Append(StringPiece data) { if (libhdfs()->hdfsWrite(fs_, file_, data.data(), static_cast<tSize>(data.size())) == -1) { return IOError(filename_, errno); } ... }` — data.size() returns uint64_t, but hdfsWrite only accepts an int, so there is a problem when appending a big string (len > INT_MAX). I changed the HdfsWritableFile::Append function to solve my problem, and it worked, so I want to make a pull request. |
| tensorflow/tensorflow | TensorFlow in Docker behaves differently to host, produces NaN | Bug | System information: Have I written custom code: yes. OS platform and distribution: Linux Mint 19.3 (Ubuntu). TensorFlow installed from: binary. TensorFlow version: tf-nightly. Python version: 3.8 and 3.6 (two systems). Host: nvidia-smi 435.21, driver version 435.21, CUDA version 10.1, TF 2.2.0-dev20200325. Docker: nvidia-smi 435.21, driver version 435.21, CUDA version 10.1, TF 2.2.0-dev20200325. Describe the current behavior: in Docker, the gradient calculation produces only NaNs; on the host, the same model produces normal gradients. Describe the expected behavior: in Docker, the same gradients should be produced. Standalone code to reproduce the issue: cannot be provided, sorry — it is a very big model and the problem is hard to locate; if I can locate the problem I will provide standalone code. Other info / logs: gradient magnitudes on host: 3.31210213e-05, 2.52332793e-05, 2.79605388e-06, 1.31225511e-06, 1.79109138e-05, 3.6471597e-06, 4.21071627e-06, 5.53799646e-06, 4.40907201e-07, 7.86542842e-07, 6.31262401e-06, 7.09808887e-07, 2.62982558e-05, 6.01945476e-06, 3.60058812e-06, 4.37716608e-06, 5.87437571e-05, 4.91119135e-06 |
| tensorflow/tensorflow | GPU-accelerated LSTMs crash randomly with InternalError: [Derived] Failed to call ThenRnnBackward with model config | Bug | System information: Have I written custom code: yes. OS platform and distribution: Windows 10 Pro N, build 17763. Mobile device: no. TensorFlow installed from: PyPI. TensorFlow version: v2.1.0-rc2-17-ge5bf8de410 2.1.0. Python version: 3.7.6. CUDA/cuDNN version: CUDA 10.1, cuDNN for CUDA 10.1 Windows 10 x64 v7.6.5.32. GPU model and memory: GTX 1060, 6 GB. Describe the current behavior: Dear TensorFlow developers, my Jupyter notebook that is training some LSTMs on the GPU crashes after some time with the following traceback: InternalError: [Derived] Failed to call ThenRnnBackward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0; [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 100, 100, 1, 249, 32, 100]; node gradients/CudnnRNN_grad/CudnnRNNBackprop, StatefulPartitionedCall_1, op __inference_distributed_function_7604. Function call stack: distributed_function -> distributed_function -> distributed_function. The crash happens after a random number of epochs — sometimes 6, sometimes 130, sometimes 300. It also crashes on different Windows machines with different GPUs. Please see this minimal notebook to reproduce the behaviour, which also includes the whole stacktrace. In the stacktrace I can find the following line: "130 # TODO(kaftan): File bug about tf function and errors.OutOfRangeError" — I wonder if this is connected to this issue. On a CPU, training works well and stably. Thank you kindly in advance for your consideration and great work. Describe the expected behavior: the GPU-accelerated LSTM should not crash randomly. Standalone code to reproduce the issue / other info / logs: for the full traceback, please check the gist from above. |
| tensorflow/tensorflow | Roadmap link broken | Bug | On the tensorflow.org website, the below roadmap link is not working. |
| tensorflow/tensorflow | CUDNN_STATUS_INTERNAL_ERROR after hours of hyperparameter optimization | Bug | System information: Have I written custom code: I'm using a completely standard TensorFlow/Keras implementation, see the code below. OS platform and distribution: Windows 10. TensorFlow installed from: binary. TensorFlow version: tensorflow 2.1.0. Python version: 3.7. CUDA/cuDNN version, GPU model and memory: NVIDIA GTX 1070 8 GB; CUDA compilation tools release 10.1, v10.1.243. Describe the current behavior: I had been training and tuning hyperparameters using Keras Tuner for about 2 hours when I noticed that no new trials were being created in the tuner project folder. This is my training setup (image), and this is my dynamic model (image). After I noticed the standstill, I checked my terminal and got the following error (error-message image). Describe the expected behavior: I would expect the training to keep running; it completed 48 permutations, training each one 3 times, i.e. 144 models, without error, so the sudden interruption and kernel restart are weird to me. Standalone code to reproduce the issue: in addition to the code referenced above, I've pretty much followed the setup in the TensorFlow time-series forecasting tutorial (single-step model). |
| tensorflow/tensorflow | Cannot import ImageDataGenerator from tensorflow.keras (GPU version) | Bug | System information: Have I written custom code: no. OS platform and distribution: Linux Ubuntu 16.04.6 LTS. Mobile device: no. TensorFlow installed from: binary. TensorFlow version: 2.1.0. Python version: 3.7.5. Bazel version: none. GCC version: none. CUDA/cuDNN version: 10.1 / 7.3.1. GPU model and memory: GeForce RTX 2080 Ti, 10989 MiB. Describe the current behavior: I am migrating from TensorFlow 1 to 2, and my code all ran correctly in TensorFlow 1.13/1.15. I hit this problem when trying to import ImageDataGenerator, no matter whether via tf.keras.preprocessing.image or tf.python.keras.preprocessing.image. Describe the expected behavior: in TensorFlow 1.13/1.15 and TensorFlow 2.0.0 (CPU version), `from tensorflow.keras.preprocessing.image import ImageDataGenerator` imports ImageDataGenerator normally, but on TensorFlow 2.1.0 (GPU version) it shows an error, even if I add `.python` right after `tensorflow`. Here, "CPU version" or "GPU version" refers to the hardware status of the PC I used. Standalone code to reproduce the issue: option 1: `from tensorflow.keras.preprocessing.image import ImageDataGenerator`; option 2: `from tensorflow.python.keras.preprocessing.image import ImageDataGenerator`. Other info / logs: for now, the only fix I can think of is to revise the content of an __init__.py somewhere within the tensorflow package, but that would not be a good solution, so I wonder if anyone can help me with this — thanks. Traceback (most recent call last): File "<stdin>", line 1, in <module>; File "/home/daiwei/conda/envs/daiwei_tensorflow/lib/python3.7/site-packages/tensorflow_core/python/keras/preprocessing/__init__.py", line 23, in <module>: import keras_preprocessing; ModuleNotFoundError: No module named 'keras_preprocessing' |
| tensorflow/tensorflow | tf.Module.with_name_scope: AttributeError: __enter__ | Bug | System information: Have I written custom code: no. OS platform and distribution: Linux Mint 19.3 (Ubuntu). TensorFlow installed from: binary. TensorFlow version: tf-nightly. Python version: 3.8 and 3.6. Describe the current behavior: an exception is raised when tf.Module.with_name_scope is used; the context manager is not implemented properly. Describe the expected behavior: works without errors. Standalone code to reproduce the issue (indentation compressed): `import tensorflow as tf; class DummyModel(tf.keras.layers.Layer): def __init__(self, name='DummyModel', **kwargs): super().__init__(name=name, **kwargs); @tf.Module.with_name_scope def build(self, input_shape): super().build(input_shape); @tf.Module.with_name_scope def call(self, inputs): return inputs; dm = DummyModel(); print(dm(1))`. Other info / logs: AttributeError traceback (most recent call last): ... tensorflow/python/keras/engine/base_layer.py in __call__: line 968 `with backend.name_scope(self._name_scope()):`, line 969 `self._maybe_build(inputs)`; ... base_layer.py in _maybe_build: line 2365 `with tf_utils.maybe_init_scope(self):`, line 2366 `self.build(input_shapes)  # pylint: disable=not-callable`; ... tensorflow/python/module/module.py in method_with_name_scope: line 288 `with self.name_scope:`, line 289 `return method(self, *args, **kwargs)`; AttributeError: __enter__ |
| tensorflow/tensorflow | MicroInterpreter error: segfault / stack smashing detected inside MicroInterpreter::Invoke() | Bug | TensorFlow Micro system information: host OS platform and distribution: Ubuntu 18.04.4 64-bit. TensorFlow installed from: source. TensorFlow version (commit SHA): 0c487d64172c64d60a93bc98cf5ea07f1a8e95ba. Target platform: Ubuntu 18.04.4 64-bit Linux, Raspberry Pi (4.19.97-v7). Compilers: g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 and arm-linux-gnueabihf-g++ 8.4.0. Python: 3.7.6. Describe the problem: I am trying to use a quantized model with TensorFlow Lite Micro and get a segmentation error inside interpreter.Invoke(). The debugger shows the segmentation error occurs on return from Eval() in conv.cc, on node 28 (CONV_2D), and the stack is corrupted; the error message is "stack smashing detected: terminated" (with compiler flags -fstack-protector-all -Wstack-protector). My test is simply the person_detection example with the model replaced by mobilenet_v1_0.25_224_quant from the TensorFlow Lite pre-trained models site, with kTensorArenaSize increased sufficiently, the model input/output sizes changed to 224x224x3 and 1x1001, and the additional required operators pulled in. I also tried a few other models: another quantized model, mobilenet_v1_0.25_192_quant, shows the same segfault, but the regular floating-point models mobilenet_v1_0.25_192 and mobilenet_v1_0.25_224 run OK over many loops. Has anyone seen a similar problem, or are there limitations of TensorFlow Lite Micro I should be aware of? Exact steps to reproduce: the problem can be reproduced at this commit of a forked tensorflow repo. Build commands: `git clone ...; cd tensorflow; git checkout bf2fef29281108c553c16e14dd4632b9a629de3c; bazel build tensorflow/lite/micro/examples/person_detection:person_detection -c dbg --copt=-fstack-protector-all --copt=-Wstack-protector --copt=-fno-omit-frame-pointer; bazel-bin/tensorflow/lite/micro/examples/person_detection/person_detection` → "stack smashing detected: terminated; Aborted (core dumped)". The coredump stack shows corruption: #0 __GI_raise at sysdeps/unix/sysv/linux/raise.c:51; #1 0x00007fd4e9c5b801 in __GI_abort at abort.c:79; #2 0x00007fd4e9ca4897 in __libc_message ("*** %s ***: terminated") at sysdeps/posix/libc_fatal.c:181; #3 0x00007fd4e9d4fcd1 in __GI___fortify_fail_abort ("stack smashing detected") at fortify_fail.c:33; #4 0x00007fd4e9d4fc92 in __stack_chk_fail at stack_chk_fail.c:29; #5 0x0000559e534cab55 in tflite::ops::micro::conv::Eval(context=0x559e536c9280, node=0x559e536c5b18) at tensorflow/lite/micro/kernels/conv.cc:269; #6-#10 0x42f6730442f67304 (corrupted frames). Files changed from the original person_detection example: main_functions.cc, model_settings.h, person_detect_model_data.cc. Changes in main_functions.cc: `constexpr int kTensorArenaSize = 1400 * 1024;`, `static tflite::MicroOpResolver<5> micro_op_resolver;`, `micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_RESHAPE, tflite::ops::micro::Register_RESHAPE());`, `micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_SOFTMAX, tflite::ops::micro::Register_SOFTMAX(), 1, 2);`. Changes in model_settings.h: `constexpr int kNumCols = 224; constexpr int kNumRows = 224; constexpr int kNumChannels = 3; constexpr int kCategoryCount = 1001;`. Changes in person_detect_model_data.cc: the file was regenerated with data from mobilenet_v1_0.25_224_quant.tflite (from the pre-trained quantized MobileNet v1 models hosted on the models site) via `xxd -i -c 12 mobilenet_v1_0.25_224_quant.tflite > model_data.cc`, then the model data replaced that in person_detect_model_data.cc, updating the array name and size. Thanks for your help. |
| tensorflow/tensorflow | tf.keras.losses.cosine_similarity is not a negative quantity | Bug | URL(s) with the issue: (documentation page). Description of issue (what needs changing): the documentation states that tf.keras.losses.cosine_similarity is "a negative quantity between -1 and 0, where 0 indicates orthogonality and values closer to -1 indicate greater similarity" — but this is actually not true: tf.keras.losses.cosine_similarity can return positive values. Usage example: `import tensorflow as tf; tf.keras.losses.cosine_similarity([1., 1.], [1., 1.])` |
| tensorflow/tensorflow | Wrong loss calculation with multi-output model when using model.fit | Bug | System information: Have I written custom code: yes, though the code sample below is adapted from an official TensorFlow tutorial with minimal changes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: official pip package, version 2.1.0. Python version: 3.6.9. CUDA/cuDNN version: CUDA 10.1, GTX 1070. Describe the current behavior: consider the following MNIST example: `import numpy as np; import tensorflow as tf; inputs = tf.keras.Input(shape=(784,), name='digits'); x = tf.keras.layers.Dense(64, activation='relu', name='dense_1')(inputs); x = tf.keras.layers.Dense(64, activation='relu', name='dense_2')(x); outputs = tf.keras.layers.Dense(10, name='predictions', activation='softmax')(x); model = tf.keras.Model(inputs=inputs, outputs=outputs); model.compile(optimizer=tf.keras.optimizers.SGD(), loss={'predictions': tf.keras.losses.SparseCategoricalCrossentropy()}, metrics=[tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')]); (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data(); x_train = x_train.reshape(60000, 784).astype('float32') / 255; x_test = x_test.reshape(10000, 784).astype('float32') / 255; y_train = y_train.astype('float32'); y_test = y_test.astype('float32'); x_val = x_train[-10000:]; y_val = y_train[-10000:]; x_train = x_train[:-10000]; y_train = y_train[:-10000]; train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(buffer_size=1024).repeat().batch(64); val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(64, drop_remainder=True); test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(64); model.fit(train_dataset, epochs=3, steps_per_epoch=1000, validation_data=val_dataset)`. The output is: Epoch 1/3: loss 0.9192, accuracy 0.7615, val_loss 0.4060, val_accuracy 0.8914; Epoch 2/3: loss 0.3786, accuracy 0.8939, val_loss 0.3147, val_accuracy 0.9127; Epoch 3/3: loss 0.3211, accuracy 0.9080, val_loss 0.2796, val_accuracy 0.9196. This works just fine: the loss decreases and the accuracy increases. However, if I simply add another dummy output to the model, training becomes erroneous. Let's just change the model definition line to `model = tf.keras.Model(inputs=inputs, outputs=[x, outputs])`. Now the output is: "WARNING:tensorflow: Output dense_2 missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to dense_2." and, each epoch, "WARNING:tensorflow: Gradients do not exist for variables ['predictions/kernel:0', 'predictions/bias:0'] when minimizing the loss.", with loss going only from ~5.78 to ~5.12 and prediction accuracy stuck around 0.10. Basically, the network is not training at all because, as the warning says, gradients do not exist for the predictions variables when minimizing the loss. Describe the expected behavior: adding an additional output to the model should not affect the training process in any way; note that in model.compile I specified the loss function using a dictionary, so I shouldn't need to modify the rest of the code after adding a dummy output. |
| tensorflow/tensorflow | Error when deserialising BatchNormalization layer from YAML | Bug | System information: Have I written custom code: yes, see the example below. OS platform and distribution: Windows 10, Anaconda3. TensorFlow installed from: binary (pip). TensorFlow version: tensorflow-gpu 2.1.0 from pip. Python version: 3.7.7. CUDA/cuDNN version: 10.2. GPU model and memory: GTX 1080. Describe the current behavior: when deserialising from YAML a model that contains a BatchNormalization layer, I get the exception yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object/apply:tensorflow.python.training.tracking.data_structures.ListWrapper'. Indeed, inspecting the YAML shows that this tag is in the BN layer, as follows: `... class_name: BatchNormalization / config: / axis: !!python/object/apply:tensorflow.python.training.tracking.data_structures.ListWrapper [3] ...`. I have included a script below that seems to reproduce this. Oddly, I then tried the same script on an Ubuntu system with tensorflow 2.1.0 and it appeared OK, so this might only apply to tensorflow-gpu. Describe the expected behavior: the script below serialises and deserialises to/from YAML without error. Standalone code to reproduce the issue: `import tensorflow as tf; input_layer = tf.keras.layers.Input(shape=(32, 32, 3)); outputs = tf.keras.layers.BatchNormalization()(input_layer); model = tf.keras.Model(inputs=input_layer, outputs=outputs); yaml_out = model.to_yaml(); model2 = tf.keras.models.model_from_yaml(yaml_out)`. Other info / logs — full traceback: Traceback (most recent call last): File "bin/reproduce_bug.py", line 13, in <module>: model2 = tf.keras.models.model_from_yaml(yaml_out); File "C:/Users/nitbi_000/Anaconda3/envs/tf2-gpu/lib/site-packages/tensorflow_core/python/keras/saving/model_config.py", line 76, in model_from_yaml: config = yaml.load(yaml_string); File ".../yaml/__init__.py", line 114, in load: return loader.get_single_data(); File ".../yaml/constructor.py", line 51, in get_single_data: return self.construct_document(node); File ".../yaml/constructor.py", line 60, in construct_document: for dummy in generator; File ".../yaml/constructor.py", line 413, in construct_yaml_map: value = self.construct_mapping(node); File ".../yaml/constructor.py", line 218, in construct_mapping: return super().construct_mapping(node, deep=deep); File ".../yaml/constructor.py", line 143, in construct_mapping: value = self.construct_object(value_node, deep=deep); File ".../yaml/constructor.py", line 100, in construct_object: data = constructor(self, node); File ".../yaml/constructor.py", line 429, in construct_undefined: node.start_mark); yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object/apply:tensorflow.python.training.tracking.data_structures.ListWrapper' in "<unicode string>", line 24, column 13: axis: !!python/object/apply:tensorflow... |
| tensorflow/tensorflow | tf.tpu.experimental.initialize_tpu_system raises gRPC error | Bug | Please make sure that this is a bug (as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub). System information: Have I written custom code: no. OS platform and distribution: Windows 10. Mobile device: N/A. TensorFlow installed from: using Google Colab. TensorFlow version: 2.0.0. Python version: 3.6. Describe the current behavior: when I try to set up the TPU system in Google Colab with TensorFlow 2.0.0, tf.tpu.experimental.initialize_tpu_system raises a gRPC error. It worked normally until about 12 hours ago, and I haven't changed any code. What should I try in order to solve this problem? Standalone code to reproduce the issue / any other info / logs: if I use TensorFlow 2.1.0 it works fine, but I want to use TensorFlow 2.0.0 because I want to use the TPU. |
| tensorflow/tensorflow | ArgMax error on DeepLab quantized model with TFLite Hexagon delegate | Bug | OS platform and distribution: Ubuntu 18.04.4 LTS. TensorFlow installed from: source (built from `git clone --recurse-submodules`). Python version: 3.6.9. Bazel version: 2.0.0 / 1.14. GCC version: gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0. Device: Redmi Note 7 Pro, Hexagon 685 DSP, Android 10.0, MIUI 11. Describe the current behavior: when I try to run the official DeepLab model (with quantization-aware training) in the TFLite benchmark tool using the Hexagon delegate, it fails with the following error. Command: `adb shell /data/local/tmp/benchmark_model --graph=/data/local/tmp/frozen_inference_graph_dm05_5.tflite --use_hexagon=true --hexagon_profiling=true` (adb also prints "/opt/intel/intelpython27/lib/libcrypto.so.1.0.0: no version information available"). Benchmark parameters: min_num_runs 50; min_run_duration_seconds 1; max_run_duration_seconds 150; inter_run_delay_seconds -1; num_threads 1; min_warmup_runs 1; min_warmup_run_duration_seconds 0.5; use_legacy_nnapi 0; allow_fp16 0; require_full_delegation 0; enable_op_profiling 0; max_profiling_buffer_entries 1024; max_delegated_partitions 0; use_gpu 0; allow_lower_precision_in_gpu 1; use_hexagon 1; hexagon_lib_path /data/local/tmp; hexagon_profiling 0; use_nnapi 0; use_xnnpack 0. Output: Loaded model /data/local/tmp/frozen_inference_graph_dm05_5.tflite; INFO: Initialized TensorFlow Lite runtime; loaded libcdsprpc.so; INFO: Created TensorFlow Lite delegate for Hexagon; INFO: Hexagon delegate: 71 nodes delegated out of 71 nodes, and the model graph will be completely executed with the delegate. The input model file size (MB): 0.746232. Initialized session in 315.521 ms. Running benchmark for at least 1 iteration and at least 0.5 seconds, but terminating if exceeding 150 seconds. Then, repeated for each invocation (7 times total): "hexagon/ops/src/op_argminmax_8_d32.c:119: argminmax_8_d32: out too small; hexagon/src/execute.c:142: execute failed on node id=37b err=-1; hexagon/src/interface.c:1174: fail in execute_inner; Error: Failed to execute graph; State: Failed to execute graph; Error in node number 71; TfLiteHexagonDelegate: failed to invoke". count=7 first=84641 curr=81095 min=79276 max=84641 avg=80591 std=1766. Benchmarking failed. If I remove ArgMax, the model works (with nearest-neighbour resizing). The model does not seem to work with int32/int64 ArgMax as the final layer, even though the operator is supported according to the Hexagon delegate FAQ. The final resize has output shape 513x513, so I tried replacing bilinear resize with nearest-neighbour resizing, but the benchmark still fails. I also verified the setup by correctly running a MobileNet v2 quantized model. Both models, with int32 and int64 ArgMax, give the same "argminmax_8_d32: out too small" error (Hexagon library v1.14). Describe the expected behavior: benchmark_model should run the quantized DeepLab model without any problem. Other info / logs: Android NDK 20; benchmark tool built from latest source with Bazel 2.0. Here are the three models that I've tried: quant_aware_deeplab_dm05_513.zip |
tensorflowtensorflow | run_eagerly model option doesn't work in Keras losses | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10 x64. TensorFlow installed from (source or binary): binary (pip). TensorFlow version (use command below): TF 2.2.0 / 2.2.0-dev20200323. Python version: 3.7. CUDA/cuDNN version: 10.1 / 7.6. GPU model and memory: the bug appears on several computers with different GPUs.

Describe the current behavior: when fitting a Keras model in eager mode by compiling it with the option run_eagerly=True, the loss is not run eagerly even though the rest of the model is. This bug does not appear when using tf.config.experimental_run_functions_eagerly(True) instead of the run_eagerly option to run the model eagerly; in that case the loss is run eagerly.

Describe the expected behavior: both tf.config.experimental_run_functions_eagerly(True) and run_eagerly=True should have the same behaviour when fitting a model: run all of the model, including its loss, eagerly.

Standalone code to reproduce the issue:

    import numpy as np
    import tensorflow as tf

    # custom loss
    class CustomLoss(tf.keras.losses.Loss):
        def call(self, y_true, y_pred):
            print(tf.executing_eagerly())
            x = y_true - y_pred
            return tf.reduce_mean(x)

    if __name__ == '__main__':
        data = np.random.random((16, 1000, 3)).astype(np.float32)
        inputs = tf.keras.Input(shape=(1000, 3))
        outputs = tf.keras.layers.Dense(3)(inputs)
        model = tf.keras.Model(inputs=inputs, outputs=outputs)
        # tf.config.experimental_run_functions_eagerly(True)  # runs the custom loss eagerly
        model.compile(loss=CustomLoss())
        # model.compile(loss=CustomLoss(), run_eagerly=True)   # does NOT run the custom loss eagerly
        model.fit(x=data, y=data)

Other info / logs: log_compile_eagerly.txt, log_experimental_run_eagerly.txt attached.
tensorflowtensorflow | migrating TF1 to TF2 | Bug | Hi, I'm a newbie with TensorFlow. How do I use embedding_rnn_seq2seq in TensorFlow v2? In TensorFlow v1 it was in the contrib legacy_seq2seq module.
tensorflowtensorflow | freezing a TensorFlow model | Bug | The model I use is a meta graph, on Windows 10 ver. 1903; my model is a clone. I am trying to freeze a TensorFlow training graph. Training from scratch creates four files:

1. model.ckpt-38055.data-00000-of-00001
2. model.ckpt-38055.index
3. model.ckpt-38055.meta
4. checkpoint

I would like to convert them (or only the needed ones) into one file, graph.pb. I used this source:

    import tensorflow as tf

    meta_path = 'model.ckpt-38055.meta'  # your .meta file
    config = tf.ConfigProto()
    with tf.Session(config=config) as sess:
        with tf.device('/cpu:0'):
            # restore the graph
            saver = tf.train.import_meta_graph(meta_path)
            # load weights
            saver.restore(sess, tf.train.latest_checkpoint('.'))
            saver.restore(sess, 'model.ckpt-38055')
            output_node_names = ['out']  # reconstruction layer
            # freeze the graph
            frozen_graph_def = tf.graph_util.convert_variables_to_constants(
                sess, sess.graph_def, output_node_names)
            # save the frozen graph
            with open('output_graph.pb', 'wb') as f:
                f.write(frozen_graph_def.SerializeToString())

I encountered an error:

    2020-03-24 15:00:30.107728: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
    Traceback (most recent call last):
      File "freeze_graph.py", line 5, in <module>
        ... sess, sess.graph_def, ...
      File "C:\Users\lerror\Anaconda3\lib\site-packages\tensorflow\python\framework\graph_util_impl.py", line 227, in convert_variables_to_constants
        inference_graph = extract_sub_graph(input_graph_def, output_node_names)
      File "C:\Users\lerror\Anaconda3\lib\site-packages\tensorflow\python\framework\graph_util_impl.py", line 171, in extract_sub_graph
        _assert_nodes_are_present(name_to_node, dest_nodes)
      File "C:\Users\lerror\Anaconda3\lib\site-packages\tensorflow\python\framework\graph_util_impl.py", line 131, in _assert_nodes_are_present
        assert d in name_to_node, "%s is not in graph" % d
    AssertionError: out is not in graph

Hope you can help. Thank you.
tensorflowtensorflow | tf.keras.losses.SparseCategoricalCrossentropy documentation issue | Bug | URL(s) with the issue: the tf.keras.losses.SparseCategoricalCrossentropy API page. Description of issue (what needs changing, clear description): on the third line of the second code box, the original probabilities are [[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]]; they should be [[.9, .05, .05], [.05, .89, .06], [.05, .01, .94]].
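The proposed correction is easy to verify: each row in that example is meant to be a categorical probability distribution over three classes, so it must sum to 1. A small numpy check (my own sketch, not part of the docs):

```python
import numpy as np

# rows as currently printed in the docs vs. the proposed correction
original = np.array([[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])
corrected = np.array([[.9, .05, .05], [.05, .89, .06], [.05, .01, .94]])

# every row of a categorical distribution must sum to 1
print(original.sum(axis=1))   # the middle row sums to 1.99, not 1
print(corrected.sum(axis=1))  # every row sums to 1
```

The middle row is the offending one: [.5, .89, .6] sums to 1.99, while the corrected [.05, .89, .06] sums to 1 exactly.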
tensorflowtensorflow | cannot convert model containing categorical_column_with_vocabulary_list op | Bug | System information. OS platform and distribution (e.g. Linux Ubuntu 16.04): CentOS. TensorFlow installed from (source or binary): binary. TensorFlow version (or github SHA if from source): 2.2.0rc0.

Command used to run the converter, or code if you're using the Python API (if possible, please share a link to a Colab/Jupyter notebook):

    import tensorflow as tf
    import os

    model_dir = 'model_feature_column_example'
    category = tf.constant(['a', 'b', 'a', 'c', 'c', 'a'])
    label = tf.constant([1, 0, 1, 0, 0, 0])
    ds = tf.data.Dataset.from_tensor_slices(({'category': category}, label))
    ds = ds.batch(2)
    fc_category = tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            'category', vocabulary_list=['a', 'b', 'c']))
    feature_layer = tf.keras.layers.DenseFeatures([fc_category])
    model = tf.keras.Sequential([
        feature_layer,
        tf.keras.layers.Dense(10, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(ds, epochs=2)

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.allow_custom_ops = True
    converter.experimental_new_converter = True
    converter.experimental_new_quantizer = True
    # convert the model
    tflite_model = converter.convert()
    open(os.path.join(model_dir, 'output.tflite'), 'wb').write(tflite_model)

The output from the converter invocation: Cannot convert a Tensor of dtype resource to a NumPy array. Also, output from saved_model_cli:

    saved_model_cli show --dir model_feature_column_example --tag_set serve --signature_def serving_default
    The given SavedModel SignatureDef contains the following input(s):
      inputs['category'] tensor_info:
          dtype: DT_STRING
          shape: (-1, 1)
          name: serving_default_category:0
    The given SavedModel SignatureDef contains the following output(s):
      outputs['output_1'] tensor_info:
          dtype: DT_FLOAT
          shape: (-1, 1)
          name: StatefulPartitionedCall_7:0
    Method name is: tensorflow/serving/predict

Failure details: Cannot convert a Tensor of dtype resource to a NumPy array. According to my analysis, this might be caused by some hashtable ops which create table handles, and my additional question is whether TFLiteConverter can convert models containing the ops HashTableV2 (with its initialization) and LookupTableImportV2. Thank you.

Any other info / logs — full log:

    Traceback (most recent call last):
      File "feature_column_example.py", line 62, in <module>
        export_keras_hashtable_model()
      File "feature_column_example.py", line 58, in export_keras_hashtable_model
        tflite_model = converter.convert()
      File "/root/tf2.2/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 464, in convert
        self._funcs[0], lower_control_flow=False)
      File "/root/tf2.2/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py", line 706, in convert_variables_to_constants_v2_as_graph
        func, lower_control_flow, aggressive_inlining)
      File "/root/tf2.2/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py", line 457, in _convert_variables_to_constants_v2_impl
        tensor_data = _get_tensor_data(func)
      File "/root/tf2.2/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py", line 217, in _get_tensor_data
        data = val_tensor.numpy()
      File "/root/tf2.2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 961, in numpy
        maybe_arr = self._numpy()  # pylint: disable=protected-access
      File "/root/tf2.2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 929, in _numpy
        six.raise_from(core._status_to_exception(e.code, e.message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.
tensorflowtensorflow | validation_split parameter in model.fit leads to severe crashes due to enormous stacktrace | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no. TensorFlow installed from (source or binary): installed from PyPI (Python Package Index), version 2.1.0. TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de410 2.1.0. Python version: 3.7.6. Bazel version (if compiling from source): -. GCC/compiler version (if compiling from source): -. CUDA/cuDNN version: nvcc V, release 10.1, V10.1.105. GPU model and memory: GTX 1060, 6 GB.

Describe the current behavior: I created the following model and I am using it in a Jupyter notebook to learn some data:

    model = Sequential()
    model.add(LSTM(100, input_shape=input_shape, return_sequences=True))
    model.add(LSTM(100, return_sequences=True))
    model.add(LSTM(100))
    model.add(Dense(number_of_classes, activation='softmax'))
    model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['acc'])
    history = model.fit(x=model_data.data_train_padded_normalized,
                        y=model_data.data_labels_ohc,
                        validation_split=0.2, epochs=500, verbose=1)

When I import all layers from the keras package, the code works successfully on the CPU. Yet I would like to use the GPU-accelerated LSTMs (formerly known as CuDNNLSTM, merged in TF 2.1 into LSTM) to train my model; I am using a Jupyter notebook. Sadly, once I changed my imports from the old keras package to tensorflow.keras, and thus to the GPU-accelerated LSTMs, I suffered from sudden major crashes — and by major I mean a crashing Jupyter server that crashed my Firefox as well; when using PyCharm with the Jupyter notebook plugin, PyCharm crashed too. So I wondered why I had those massive problems across different processes on my machine, occurring whenever I executed TensorFlow in my notebook.

As it turns out, I could trace the problem to the following line [0]: it appears that, as part of raising the ValueError, `x` is rendered into its message by the str.format call — here `x` is my complete training data set. This results in a dump of 262 MB of text into stderr as part of the stacktrace, which then leads to the consequent crashes (crashing Jupyter server, crashing browser via the websocket connection to localhost). I found that out by piping the stream to a file: the exception's message has a len() of 275168064 characters.

Next I wondered whether the bug is still present in the newest version of TensorFlow. I could see in git that the corresponding line of code does not exist anymore when installing tf-nightly (version v1.12.1-27849-g9278bbfc24 2.2.0-dev20200323), as the whole file training_v2.py does not exist anymore. Subsequently I re-ran my code with the current tf-nightly from PyPI. First I got error [1], as I did not provide a numpy array. After changing my model.fit to the following:

    history = model.fit(x=np.array(model_data.data_train_padded_normalized),
                        y=np.array(model_data.data_labels_ohc),
                        validation_split=0.2, epochs=500, verbose=1)

I ran into the following error [2]:

    Traceback (most recent call last):
      File "C:\Users\joo\AppData\Local\Programs\Python\Python37\lib\contextlib.py", line 130, in __exit__
        self.gen.throw(type, value, traceback)
      File "C:\venv\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2805, in variable_creator_scope
        yield
      File "C:\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 854, in fit
        epoch_logs = copy.copy(logs)
    UnboundLocalError: local variable 'logs' referenced before assignment

It appears that `logs` does not exist here [2]. Not using validation_split as part of model.fit solved all my problems for TF 2.1.0, but I had to find that out the hard way. Moreover, I could not resolve the problem for tf-nightly at all, even when omitting the parameter, as other exceptions popped up. Thanks in advance for checking the code and improving TensorFlow.

[0] #L539  [1] #L1386  [2] #L857

Describe the expected behavior: I would expect an exception that does not return over 260 MB of text as part of the stack trace. The important part of the stack trace at the beginning of the message — i.e. that validation_split cannot be used — is also not shown, as the 260 MB of `x` overfill the non-infinite buffer of my IDE in a split second, thus omitting the whole stack trace together with my training data.
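The failure mode described above can be illustrated without TensorFlow: the exception message grows with the data because the full array is formatted into the string. A pure-numpy sketch of my own (variable names hypothetical) showing how summarized vs. full array formatting changes the message size by orders of magnitude:

```python
import numpy as np

data = np.zeros((500, 200))  # stand-in for a large training array

# default str() of a large numpy array is summarized with '...',
# so embedding it in an error message stays cheap
short_msg = "Unsupported value: {}".format(data)

# formatting every element instead makes the message scale with the
# data size -- the behaviour that produced the 262 MB stacktrace above
full_msg = "Unsupported value: {}".format(
    np.array2string(data, threshold=data.size + 1))

print(len(short_msg), len(full_msg))
```

With a real multi-hundred-megabyte training set, the full formatting is what overflows stderr and the notebook websocket.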
tensorflowtensorflow | ko: remove references to the tf-nightly-2.0-preview package | Bug | The tf-nightly-2.0-preview package does not exist anymore. Please upgrade all notebooks to use tensorflow_version 2.x, like:

    try:
        # %tensorflow_version only exists in Colab
        %tensorflow_version 2.x
    except Exception:
        pass

Affected: site/ko/tutorials/keras/text_classification.ipynb, site/ko/tutorials/keras/overfit_and_underfit.ipynb, site/ko/tutorials/structured_data/feature_columns.ipynb
tensorflowtensorflow | zh-cn: remove references to the tf-nightly-2.0-preview package | Bug | The tf-nightly-2.0-preview package does not exist anymore. Please upgrade all notebooks to use tensorflow_version 2.x, like:

    try:
        # %tensorflow_version only exists in Colab
        %tensorflow_version 2.x
    except Exception:
        pass

Affected: site/zh-cn/lite/convert/python_api.md, site/zh-cn/tutorials/estimator/boosted_trees.ipynb
tensorflowtensorflow | ja: remove references to the tf-nightly-2.0-preview package | Bug | The tf-nightly-2.0-preview package does not exist anymore. Please upgrade all notebooks to use tensorflow_version 2.x, like:

    try:
        # %tensorflow_version only exists in Colab
        %tensorflow_version 2.x
    except Exception:
        pass

Affected: site/ja/lite/convert/python_api.md, site/ja/guide/function.ipynb, site/ja/tutorials/load_data/csv.ipynb
tensorflowtensorflow | Keras conv layer weights cannot be updated during distributed training mode | Bug | OS: Ubuntu 18.04. TensorFlow: 2.1.0. Python: 3.7. GPU: 4 GPUs. Trying to train a GAN network on multiple GPUs; the network has a spectral-normalization layer that updates the convolution layer weights. The sample code is provided below, and the error is shown in the attached screenshot:

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        conv = tf.keras.layers.Conv2D(5, 3, 1)
        x = tf.random.normal((1, 5, 5, 3))
        conv(x)

    @tf.function
    def update_w(w, nw):
        w.assign(nw)

    x2 = tf.random.normal((3, 3, 3, 5))
    strategy.experimental_run_v2(update_w, args=(conv.kernel, x2))

(error screenshot attached)
tensorflowtensorflow | replacing VarHandleOp in Keras with VariableV2 | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing, clear description): TF 1.13.1. I can't find anything detailed about the difference between VarHandleOp and VariableV2. As far as I know, it seems that Keras implements variables with VarHandleOp, while TF 1.x uses VariableV2. How can I convert VarHandleOp to VariableV2 in tf.keras? I'm using some model-processing tools that can only run with the VariableV2 op.

Correct links: the links above are all I could find. Parameters defined: -. Returns defined: -. Raises listed and defined: -. Usage example (is this a usage example? is that a kind of example?): -. Request visuals, if applicable: -. Submit a pull request: -.
tensorflowtensorflow | input tensor has different dims from those described in the comment | Bug | URL(s) with the issue: the "validate our input shape" section. Description of issue (what needs changing, clear description): in the following lines of the second code block of that section, it is claimed that the input is a 2D tensor:

    // The property "dims" tells us the tensor's shape. It has one element for
    // each dimension. Our input is a 2D tensor containing 1 element, so "dims"
    // should have size 2.
    TF_LITE_MICRO_EXPECT_EQ(2, input->dims->size);

However, based on the notebook where the model is defined, the input should be a 1D tensor. Submit a pull request: no.
tensorflowtensorflow | tf.metrics.mean_cosine_distance fails during distributed evaluation | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Debian GNU/Linux 9. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.15.2-1-g61ff2cb 1.15.2. Python version: 3.7.6. CUDA/cuDNN version: CUDA 10, cuDNN 7.6.5. GPU model and memory: 2x Tesla K80.

Describe the current behavior: tf.metrics.mean_cosine_distance fails at the end of distributed evaluation with MirroredStrategy:

    TypeError: Fetch argument PerReplica:{0 /replica:0/task:0/device:GPU:0: ...,
    1 /replica:0/task:0/device:GPU:1: ...} has invalid type; must be a string or
    Tensor. (Can not convert a PerReplica into a Tensor or Operation.)

Non-distributed evaluation — that is, with RunConfig(eval_distribute=None) or with a single GPU only — finishes without errors.

Standalone code to reproduce the issue:

    import numpy as np
    import tensorflow as tf

    def model_fn(features, labels, mode):
        predictions = tf.layers.dense(features, 2)
        metrics = {'cos': tf.metrics.mean_cosine_distance(labels, predictions, 1)}
        return tf.estimator.EstimatorSpec(
            mode=mode, predictions=predictions, loss=tf.constant(0.1),
            train_op=None, eval_metric_ops=metrics)

    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices(
            (np.array([[1., 1.]]), np.array([[2., 2.]])))
        dataset = dataset.repeat()
        dataset = dataset.batch(1, drop_remainder=True)
        return dataset

    if __name__ == '__main__':
        gpus = tf.config.experimental.list_physical_devices('GPU')
        assert len(gpus) > 1, 'need more than 1 GPU to run'
        strategy = tf.distribute.MirroredStrategy()
        run_config = tf.estimator.RunConfig(train_distribute=strategy,
                                            eval_distribute=strategy)
        estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
        print(estimator.evaluate(input_fn, steps=5))

Other info / logs: logs_1.15.txt attached.
tensorflowtensorflow | can't convert pb model file to tflite | Bug | When I run the following code:

    import tensorflow as tf

    path = r'C:\Users\lawssss\Desktop\convert_pb_2_tflite\frozen_inference_graph_steelroll.pb'
    inputs = ['image_tensor']
    outputs = ['detection_boxes']
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        path, inputs, outputs,
        input_shapes={'image_tensor': [1, 640, 360, 3]})
    converter.post_training_quantize = True
    tflite_model = converter.convert()
    open('frozen_inference_graph_steelroll.tflite', 'wb').write(tflite_model)

I meet a fatal error which says:

    F tensorflow/lite/toco/tooling_util.cc:2258] Check failed: array.data_type == array.final_data_type
    Array "image_tensor" has mis-matching actual and final data types
    (data_type=uint8, final_data_type=float).
    Fatal Python error: Aborted

How do I solve this problem?
tensorflowtensorflow | broken link | Bug | URL(s) with the issue: in the above README file, the MapReduce pull-request link is broken. I've corrected the link; successfully merging #813 will resolve this issue. Hey @mihaimaruseac, kindly review it.
tensorflowtensorflow | typo in the upgrade guide | Bug | URL(s) with the issue: the "Recommended upgrade process" section. Description of issue (what needs changing): there are two consecutive "will"s in the point "Run the upgrade script". Submit a pull request (are you planning to also submit a pull request to fix the issue)? Yes, I'll be opening a PR soon.
tensorflowtensorflow | problem with custom metrics even for h5 models | Bug | This is how I define and save the custom metric fbeta and the VGG model on Colab:

    from sklearn.model_selection import train_test_split
    from keras.preprocessing.image import ImageDataGenerator
    from keras.models import Sequential, Model
    from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
    from keras.optimizers import SGD
    from keras import backend

    def fbeta(y_true, y_pred, beta=2):
        y_pred = backend.clip(y_pred, 0, 1)
        tp = backend.sum(backend.round(backend.clip(y_true * y_pred, 0, 1)), axis=1)
        fp = backend.sum(backend.round(backend.clip(y_pred - y_true, 0, 1)), axis=1)
        fn = backend.sum(backend.round(backend.clip(y_true - y_pred, 0, 1)), axis=1)
        p = tp / (tp + fp + backend.epsilon())
        r = tp / (tp + fn + backend.epsilon())
        # calculate fbeta, averaged across each class
        fbeta_score = backend.mean((1 + beta ** 2) * (p * r) /
                                   (beta ** 2 * p + r + backend.epsilon()))
        return fbeta_score

    def vgg16_model(in_shape=(128, 128, 3), out_shape=17):
        # load model
        model = VGG16(include_top=False, input_shape=in_shape)
        # mark loaded layers as not trainable
        for layer in model.layers:
            layer.trainable = False
        # allow the last VGG block to be trainable
        model.get_layer('block5_conv1').trainable = True
        model.get_layer('block5_conv2').trainable = True
        model.get_layer('block5_conv3').trainable = True
        model.get_layer('block5_pool').trainable = True
        # add new classifier layers
        flat1 = Flatten()(model.layers[-1].output)
        class1 = Dense(128, activation='relu', kernel_initializer='he_uniform')(flat1)
        output = Dense(out_shape, activation='sigmoid')(class1)
        # define new model
        model = Model(inputs=model.inputs, outputs=output)
        # compile model
        opt = SGD(lr=0.01, momentum=0.9)
        model.compile(optimizer=opt, loss='binary_crossentropy', metrics=[fbeta])
        return model

    # create the model and save it in Google Drive
    model.save('model.h5')
    model_file = drive.CreateFile({'title': 'model.h5'})
    model_file.SetContentFile('model.h5')
    model_file.Upload()
    print('model was saved')
    drive.CreateFile({'id': model_file.get('id')})
    print('model downloaded to Google Drive')

The error happens when I load the model with the following:

    from pandas import read_csv
    from keras.preprocessing.image import load_img
    from keras.preprocessing.image import img_to_array
    from keras.models import load_model

    model = load_model(path)
    # I also tried:
    model = load_model(path, custom_objects={'fbeta': fbeta})
    result = model.predict(img)
    print(result[0])

and I get this error: ValueError: Unknown metric function: fbeta. I believe all packages and paths are imported and defined correctly, and the model is saved correctly on Google Drive as well. I tried all three other posts (#33646, #33648, #36390) regarding similar customized-metric issues and tried all the potential fixes and workarounds; unfortunately, none of them worked. I am surprised that some community members claim that h5 models should not have this issue.
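Independent of the loading failure, the metric itself can be sanity-checked outside Keras. Here is a pure-numpy re-implementation of the fbeta above (my own sketch mirroring the backend ops, not the Keras API), useful for confirming predictions once the model finally loads:

```python
import numpy as np

def fbeta_np(y_true, y_pred, beta=2, eps=1e-7):
    # mirror of the Keras-backend fbeta: clip, round, then per-sample counts
    y_pred = np.clip(y_pred, 0, 1)
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)), axis=1)
    fp = np.sum(np.round(np.clip(y_pred - y_true, 0, 1)), axis=1)
    fn = np.sum(np.round(np.clip(y_true - y_pred, 0, 1)), axis=1)
    p = tp / (tp + fp + eps)
    r = tp / (tp + fn + eps)
    return np.mean((1 + beta ** 2) * p * r / (beta ** 2 * p + r + eps))

y_true = np.array([[1., 0., 1.], [0., 1., 0.]])
print(fbeta_np(y_true, y_true))          # perfect predictions score ~1
print(fbeta_np(y_true, 1.0 - y_true))    # fully wrong predictions score 0
```

Passing this same `fbeta` function via `custom_objects={'fbeta': fbeta}` to `load_model` is the documented mechanism for custom metrics, which is why the remaining error above suggests a serialization bug rather than user error.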
tensorflowtensorflow | does Keras disable regularization (dropout/noise) layers when evaluating the validation data during model.fit? | Bug | URL(s) with the issue: fit/predict documentation. Description of issue (what needs changing, clear description): I can't find a clear statement on whether or not the regularization layers (noise, dropout) are bypassed when the validation data is processed to calculate the validation loss during a model.fit call with validation data provided. I can see in the source code of the Dropout layer that it branches based on the `training` argument of `call` (#L181), but it is completely unclear to me whether or not the validation pass is considered training — after all, I'm calling the fit function.
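The mechanism being asked about can be illustrated without Keras. A minimal inverted-dropout sketch of my own (not the Keras implementation) showing how the `training` flag switches the layer between stochastic and identity behaviour — the question above boils down to which value of this flag the validation pass uses:

```python
import numpy as np

def dropout(x, rate, training, rng):
    """Inverted dropout: random zeroing at train time, identity at eval time."""
    if not training:
        return x  # evaluation/validation: the layer is a no-op
    mask = rng.random(x.shape) >= rate
    # rescale survivors so the expected activation is unchanged
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones(10_000)
train_out = dropout(x, rate=0.5, training=True, rng=rng)
eval_out = dropout(x, rate=0.5, training=False, rng=rng)

print(train_out.mean())               # close to 1.0 in expectation, but noisy
print(np.array_equal(eval_out, x))    # True: inputs pass through untouched
```

(For the record, Keras runs validation and prediction passes in inference mode, i.e. with `training=False`, so dropout is disabled there; the report is right that the fit documentation could state this explicitly.)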
tensorflowtensorflow | TensorFlow Java API in Scala with scala-reflect runtime compilation | Bug | System information: I am using the HelloWorld example from the Java API. I am executing it from Scala without any problem if I compile the code at compile time; instead, when I run it using scala-reflect (and therefore runtime compilation), the session runner is not found.

OS platform and distribution: Linux Ubuntu 18.04. JDK: OpenJDK version 1.8.0_242 (OpenJDK Runtime Environment build 1.8.0_242-8u242-b08-0ubuntu3~18.04-b08, OpenJDK 64-Bit Server VM build 25.242-b08, mixed mode). TensorFlow, from build.sbt:

    lazy val commonSettings = Seq(
      scalaVersion := "2.12.10",
      libraryDependencies ++= Seq(
        // to support runtime compilation
        "org.scala-lang" % "scala-reflect" % scalaVersion.value,
        "org.scala-lang" % "scala-compiler" % scalaVersion.value,
        // for tensorflow java
        "org.tensorflow" % "tensorflow" % "1.15.0",
        "org.tensorflow" % "proto" % "1.15.0",
        "org.tensorflow" % "libtensorflow_jni" % "1.15.0"
      )
    )
    lazy val test_proj = (project in file(".")).settings(commonSettings)

Describe the current behavior: basically, when compiling at runtime, the runner does not get executed. It compiles properly, but when executed the runner does not exist. Here is the short stacktrace:

    java.lang.NoSuchMethodError: org.tensorflow.Session.runner()Lorg/tensorflow/Session$Runner;
      at __wrapper$1$f093d26a3c504d4381a37ef78b6c3d54.__wrapper$1$f093d26a3c504d4381a37ef78b6c3d54$$anonfun$wrapper$1$15

Describe the expected behavior: I expect that both pre-compiled and runtime-compiled code will have the same behavior.

Standalone code to reproduce the issue. Here is what works:

    import org.tensorflow.{Graph, Session, Tensor, TensorFlow}

    val g = new Graph()
    val value = "Hello from " + TensorFlow.version()
    val t = Tensor.create(value.getBytes("UTF-8"))
    // The Java API doesn't yet include convenience functions for adding operations.
    g.opBuilder("Const", "MyConst").setAttr("dtype", t.dataType()).setAttr("value", t).build()
    val s = new Session(g)
    val output = s.runner().fetch("MyConst").run().get(0)

And here is what does not work:

    import scala.reflect.runtime.{universe => ru}
    import scala.tools.reflect.ToolBox

    val fnStr = """
      import org.tensorflow.Graph
      import org.tensorflow.Session
      import org.tensorflow.Tensor
      import org.tensorflow.TensorFlow

      val g = new Graph()
      val value = "Hello from " + TensorFlow.version()
      val t = Tensor.create(value.getBytes("UTF-8"))
      g.opBuilder("Const", "MyConst").setAttr("dtype", t.dataType()).setAttr("value", t).build()
      val s = new Session(g)
      s.runner().fetch("MyConst").run().get(0)
    """
    val mirror = ru.runtimeMirror(getClass.getClassLoader)
    val tb = mirror.mkToolBox()
    var t = tb.parse(fnStr)
    val fn = tb.eval(t).asInstanceOf[Any]
    // and finally execute the function
    fn

Before submitting this issue I posted a question on Stack Overflow. I will try the Scala API provided by the project linked there.
tensorflowtensorflow | missing link | Bug | URL(s) with the issue: the tf.backend link is missing. Pull request #812: successfully merging this PR will close this issue.
tensorflowtensorflow | broken link | Bug | URL(s) with the issue: in this repo, the tf framework "executor" link is broken. Pull request: successfully merging #811 will close this issue. Hey @markdaoust, @lamberta and @mihaimaruseac, would you please review this pull request?
tensorflowtensorflow | broken link in community forums md | Bug | URL(s) with the issue: forums.md. Description of the issue (what needs changing): in forums.md, the "How to get help" link ("Get Help") is broken; clicking it shows a 404 error. Submit a pull request: hey @markdaoust and @lamberta, would you please assign me this issue and provide details on how to fix it? I'd love to fix this issue.
tensorflowtensorflow | missing interpreter initialization code in the codelab | Bug | The codelab in question is "Build a handwritten digit classifier app with TensorFlow Lite". Step 4.6 introduces the following snippet of code:

    // Read input shape from model file
    val inputShape = interpreter.getInputTensor(0).shape()
    inputImageWidth = inputShape[1]
    inputImageHeight = inputShape[2]
    modelInputSize = FLOAT_TYPE_SIZE * inputImageWidth * inputImageHeight * PIXEL_SIZE

    // Finish interpreter initialization
    this.interpreter = interpreter

Here `interpreter` is a value that has not been initialized in any previous code snippet, and the code fails to compile. Looking at the finalized code in the `finish` directory of the downloadable sample, step 4.6 should probably have included this code snippet to initialize `interpreter`:

    // Initialize TF Lite Interpreter with NNAPI enabled
    val options = Interpreter.Options()
    options.setUseNNAPI(true)
    val interpreter = Interpreter(model, options)
tensorflowtensorflow | UnavailableError: Socket closed, while using a custom training loop | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. TensorFlow version: 2.1.0. About hardware and software system information: I'm using a Kaggle kernel, so more information can be found there. Describe the current behavior: I'm getting errors like the one below ("UnavailableError: Socket closed"); this happens during the training loop (screenshot from 2020-03-21 09:41:52 attached). Describe the expected behavior: the model is supposed to train normally. Standalone code to reproduce the issue: link to the Kaggle kernel. Other info / logs: as described in the notebook linked above, this usually happens when using some combination of the following: long epochs, heavy models, or loops using too much memory.
tensorflowtensorflow | missing arguments when the inputs of a concrete function contain lists | Bug | System information. TensorFlow version: v2.1.0. Describe the current behavior:

    import tensorflow as tf

    @tf.function
    def test(a, b, c):
        pass

    c_test = test.get_concrete_function(
        a=[tf.TensorSpec([None, 1]), tf.TensorSpec([None, 1])],
        b=tf.TensorSpec([None, 2]),
        c=[tf.TensorSpec([None, 3]), tf.TensorSpec([None, 3])])

    c_test(
        a=[tf.random.normal([2, 1]), tf.random.normal([2, 1])],
        b=tf.random.normal([2, 2]),
        c=[tf.random.normal([2, 3]), tf.random.normal([2, 3])])

The function test has two arguments, a and c, that should be of list type. I create a concrete function c_test from test, but when c_test is called as in the code above, it throws:

    Expected argument names ['a', 'a_1', 'b', 'c', 'c_1'] but got values for
    ['a', 'b', 'c']. Missing: ['c_1', 'a_1'].

It seems that the function only receives the first element of each list argument.
tensorflowtensorflow | it seems that TensorFlow needs a check for the unreasonable parameter input_dim=0 in the Embedding layer | Bug | System information. Have I written custom code (as opposed to using the examples directory): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04, Windows 10): Linux Ubuntu 18.04. TensorFlow version: 2.1.0 (CPU), installed directly with pip install tensorflow. Python version: 3.6.9. CUDA/cuDNN version: -. GPU model and memory: -.

Describe the current behavior: when I build a model with the illogical parameter input_dim=0 in the Embedding layer, TensorFlow uses this unreasonable parameter to build, and even save, the model. The detailed behavior while building the model is shown in the attached screenshot.

Key insight: to sum up, input_dim (or output_dim) equal to 0 is an unreasonable corner case, and TensorFlow seems to lack a check for it. This may lead TensorFlow users to create, and even save, a wrong model, which brings potential risks in subsequent usage.

Code to reproduce the issue:

    import numpy as np
    import tensorflow.keras.layers as L
    from tensorflow.keras import Model, Input
    import tensorflow
    import os

    print(tensorflow.__version__)
    kwargs = {'input_dim': 0,  # you can also set output_dim to 0 to test
              'output_dim': 18,
              'mask_zero': True,
              'input_length': 10}
    x = np.random.random((32, 10))
    layer = L.Embedding(**kwargs)
    x_input = Input(batch_shape=x.shape)
    y = layer(x_input)
    bk_model = Model(x_input, y)
    model_path = os.path.join('model.h5')
    bk_model.save(model_path)
    print('finished')
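The kind of check being requested is straightforward to express. Below is a hypothetical standalone sketch — the function name and messages are invented for illustration, this is not TensorFlow code — showing argument validation that rejects the degenerate corner case up front instead of silently building a broken model:

```python
def validate_embedding_config(input_dim, output_dim):
    """Reject degenerate Embedding hyperparameters before building the layer."""
    if input_dim < 1:
        raise ValueError(
            f"input_dim must be >= 1 (got {input_dim}); an embedding needs a "
            "non-empty vocabulary.")
    if output_dim < 1:
        raise ValueError(
            f"output_dim must be >= 1 (got {output_dim}); each token needs a "
            "non-empty embedding vector.")

# valid settings pass silently
validate_embedding_config(input_dim=100, output_dim=18)

# the corner case from this report is rejected up front
try:
    validate_embedding_config(input_dim=0, output_dim=18)
except ValueError as e:
    print(e)
```

Failing fast at construction time, rather than at save or load time, is the usual design choice for hyperparameter validation: the error then points at the line that introduced the bad value.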
tensorflowtensorflow | MSE loss computes different values depending on shape | Bug | System information: Have I written custom code: yes. OS platform and distribution: Linux Ubuntu 19.04. TensorFlow version: 2.1.0. Python version: 3.7.5. CUDA/cuDNN version: 10.1. GPU model and memory: GTX Titan X. Describe the current behavior: when MSE is calculated on two vectors, if one has an additional dimension, the result is different. For instance, if one is shaped (128,) and the other (128, 1), the final value is different from what is calculated with vectors containing the same values but both shaped (128,). Moreover, the scores computed for the 4 combinations I tried — mse((128,), (128,)), mse((128, 1), (128,)), mse((128,), (128, 1)), mse((128, 1), (128, 1)) — are all different, so I don't really trust the computation at all; there's likely a bug somewhere. Describe the expected behavior: either the computation is performed correctly, or an error is raised about the inputs having different shapes. Silent errors like this are difficult to debug. Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

target = np.array([
    180.858, 53.000732, 161.95107, 135.10085, 5.4330907, 86.42976, 4.4581985, 32.90381,
    153.1223, 78.94036, 190.12129, 157.32057, 8.215049, 17.959564, 21.816954, 40.21217,
    50.351727, 38.70819, 52.955853, 213.77878, 142.41376, 127.22313, 164.2927, 74.497314,
    74.87329, 14.303827, 164.1599, 190.37862, 63.337723, 74.058975, 70.482285, 40.203323,
    47.59432, 17.782892, 70.3338, 127.87029, 12.542, 31.236902, 70.227974, 81.60634,
    186.79362, 176.01068, 118.73177, 74.14537, 56.437016, 98.60682, 3.1523242, 9.694114,
    11.809049, 16.225067, 4.6299715, 194.44075, 138.53864, 118.06511, 201.88641, 85.310356,
    91.92171, 107.94937, 44.26706, 93.79351, 9.981134, 40.544876, 131.26842, 7.305799,
    97.13315, 94.43779, 146.48007, 24.092182, 32.081444, 32.98506, 93.73731, 65.58341,
    36.74394, 57.02824, 78.452866, 6.0548353, 11.639992, 114.853455, 15.473761, 24.454018,
    127.82523, 68.350876, 41.449036, 39.643234, 45.420197, 0.9474962, 111.20463, 10.079266,
    79.32773, 93.07437, 111.04116, 47.006187, 68.18549, 36.195724, 100.86029, 74.86413,
    13.0117655, 293.18875, 39.411587, 121.270706, 142.66888, 23.961506, 81.58176, 137.42886,
    31.068184, 73.448364, 90.646164, 133.64107, 88.79693, 117.37866, 54.3003, 181.60715,
    100.147194, 179.99359, 24.455635, 59.38088, 135.56046, 67.400925, 151.78516, 212.14339,
    202.64584, 66.06116, 1.9135226, 244.05527, 70.778275, 50.001457, 194.73297, 33.012333])

predictions = np.array([
    0.12198464, 0.09282801, 0.09430753, 0.06222287, 0.07448876, 0.03799684, 0.02936049, 0.03837839,
    0.04432172, 0.01919999, 0.07735521, 0.04389271, 0.09087924, 0.05364547, 0.01343504, 0.04935993,
    0.02090639, 0.04636865, 0.06702548, 0.09186736, 0.11273132, 0.0611049, 0.06820674, 0.07969542,
    0.02481739, 0.04868209, 0.08474196, 0.0776654, 0.03664336, 0.04501346, 0.06626878, 0.03605408,
    0.02785883, 0.01698643, 0.09615672, 0.07914701, 0.02611066, 0.0447035, 0.08619086, 0.04838634,
    0.07977191, 0.06319098, 0.04025086, 0.05129454, 0.02673621, 0.05525842, 0.0054835, 0.04647385,
    0.02476176, 0.02783814, 0.11566448, 0.08409265, 0.03792451, 0.03227585, 0.0632838, 0.08329175,
    0.04616582, 0.06513302, 0.07169756, 0.05911999, 0.05913429, 0.01704707, 0.06693612, 0.04886937,
    0.02549478, 0.04468452, 0.07630262, 0.05455045, 0.06637821, 0.01789702, 0.11108026, 0.03976684,
    0.0171865, 0.13416564, 0.02845822, 0.05074854, 0.04896633, 0.05221288, 0.03563176, 0.05014472,
    0.05413034, 0.0347496, 0.0645119, 0.04159546, 0.01868404, 0.0582131, 0.0336203, 0.04432501,
    0.03495208, 0.02673723, 0.09592278, 0.02579375, 0.01584711, 0.02812203, 0.03840974, 0.02530819,
    0.08957738, 0.14304015, 0.03345468, 0.06080145, 0.09284427, 0.04770067, 0.07064755, 0.04004309,
    0.02097335, 0.08742893, 0.04389744, 0.0479476, 0.05911161, 0.0748862, 0.06840549, 0.0580482,
    0.05427855, 0.10075781, 0.01691986, 0.04473659, 0.0634447, 0.03176469, 0.05624699, 0.12614223,
    0.08688905, 0.02355402, 0.0871409, 0.0734048, 0.02676748, 0.02766727, 0.08999605, 0.03465028])

mse = tf.keras.losses.MeanSquaredError()
tf.config.experimental_run_functions_eagerly(True)
print(mse(target, predictions))
print(mse(target[:, tf.newaxis], predictions))
print(mse(target, predictions[:, tf.newaxis]))
print(mse(target[:, tf.newaxis], predictions[:, tf.newaxis]))
```

Output:

```
tf.Tensor(10867.5537109375, shape=(), dtype=float64)
tf.Tensor(10868.94140625, shape=(), dtype=float64)
tf.Tensor(10868.9404296875, shape=(), dtype=float64)
tf.Tensor(10867.552734375, shape=(), dtype=float64)
```
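A plausible root cause (my assumption, not confirmed in the report) is NumPy/TF-style broadcasting: an (n,) vector against an (n, 1) vector broadcasts to an (n, n) grid of differences before the mean is taken, so nothing errors but the answer silently changes. A pure-Python sketch of that effect:

```python
# Pure-Python sketch (no TensorFlow) of how mixing shapes (n,) and (n, 1)
# can change an MSE: broadcasting compares every target against every
# prediction instead of pairing them element-wise.
def mse_elementwise(t, p):
    # both inputs treated as shape (n,)
    return sum((a - b) ** 2 for a, b in zip(t, p)) / len(t)

def mse_broadcast(t, p):
    # (n, 1) against (n,) broadcasts to an (n, n) grid of differences
    diffs = [(a - b) ** 2 for a in t for b in p]
    return sum(diffs) / len(diffs)

t = [1.0, 2.0, 3.0]
p = [0.5, 1.5, 3.5]
print(mse_elementwise(t, p))  # 0.25
print(mse_broadcast(t, p))    # 2.25 -- a silently different answer
```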
tensorflowtensorflow | Min epsilon of fused batch norm is incorrect | Bug | System information (custom code, OS, mobile device, install source, TF/Python/Bazel/GCC/CUDA versions, GPU): N/A for all fields. The source code in the current master branch has this comment (L1516-L1519); however, the statement in the comment is no longer correct. According to the cuDNN documentation: "The value of epsilon was required to be greater than or equal to CUDNN_BN_MIN_EPSILON, which is defined in the cudnn.h file to the value 1e-5. This threshold value is now lowered to 0.0 to allow a wider range of epsilon values." If compatibility with earlier versions of cuDNN is needed, the minimum should be obtained by using the CUDNN_BN_MIN_EPSILON constant defined by cudnn.h. Or maybe it's better to just throw an error when a wrong epsilon is provided by the user; personally, I don't like my parameters being silently changed. This can cause issues like `nn.fused_batch_norm` and `nn.batch_normalization` producing different results.
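To illustrate the two policies the report contrasts (silently clamping epsilon to a floor versus raising an error), here is a pure-Python sketch; the floor value 1e-5 is taken from the quote above, and both function names are invented for the sketch, not the TensorFlow implementation:

```python
CUDNN_BN_MIN_EPSILON = 1e-5  # floor quoted in the issue, not read from cudnn.h

def clamp_epsilon(eps):
    # current behavior described in the issue: silently raise eps to the floor
    return max(eps, CUDNN_BN_MIN_EPSILON)

def check_epsilon(eps):
    # the behavior the reporter prefers: fail loudly instead of changing it
    if eps < CUDNN_BN_MIN_EPSILON:
        raise ValueError(
            f"epsilon {eps} is below the minimum {CUDNN_BN_MIN_EPSILON}")
    return eps

print(clamp_epsilon(1e-8))  # the user's 1e-8 silently becomes 1e-05
```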
tensorflowtensorflow | [Bug] TF 2.2.0rc0 fails with AMP and Horovod 0.19.1 in Keras compile/fit | Bug | With the recent changes in the TensorFlow Keras optimizer API and Horovod, we did some testing and found that the following configuration is now broken: TensorFlow 2.2.0rc0 + Horovod 0.19.1 + AMP + Keras model.compile()/fit(). @sanjoy @pkanwar23, could we make sure to fix this one before TF 2.2.0 gets officially published? It's still an RC release for now. If needed, you can use this Docker container, which contains the right set of dependencies and is based on the public tf2.2.0rc0 container:

```bash
docker pull born2data/tensorflow_hvd:0.19.1-tf-2.2.0rc0
```

Code to reproduce:

```bash
mpirun -np 2 -H localhost:2 -bind-to none -map-by slot \
    -x NCCL_DEBUG=VERSION -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl openib --allow-run-as-root python main.py
```

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Horovod: initialize Horovod
hvd.init()

# Horovod: pin GPU to be used to process local rank (one GPU per process)
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

(mnist_images, mnist_labels), _ = \
    tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
     tf.cast(mnist_labels, tf.int64))
)
dataset = dataset.repeat().shuffle(10000).batch(128)

policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16', loss_scale=128)
tf.keras.mixed_precision.experimental.set_policy(policy)

mnist_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),
    tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Horovod: adjust learning rate based on number of GPUs.
opt = tf.optimizers.Adam(0.001)

# Horovod: add Horovod DistributedOptimizer.
opt = hvd.DistributedOptimizer(opt)

# Horovod: specify `experimental_run_tf_function=False` to ensure TensorFlow
# uses hvd.DistributedOptimizer() to compute gradients.
mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),
                    optimizer=opt,
                    metrics=['accuracy'],
                    experimental_run_tf_function=False)

callbacks = [
    # Horovod: broadcast initial variable states from rank 0 to all other
    # processes. This is necessary to ensure consistent initialization of all
    # workers when training is started with random weights or restored from a
    # checkpoint.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Train the model.
# Horovod: adjust number of steps based on number of GPUs.
mnist_model.fit(dataset, steps_per_epoch=500 // hvd.size(),
                callbacks=callbacks, epochs=24,
                verbose=1 if hvd.rank() == 0 else 0)
```

Error:

```python
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:503 train_function
    outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:473 train_step
    _minimize(tape, self.optimizer, loss, self.trainable_variables)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1739 _minimize
    optimizer.apply_gradients(zip(gradients, trainable_variables))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/mixed_precision/experimental/loss_scale_optimizer.py:232 apply_gradients
    args=(grads_and_vars, name, all_reduce_sum_gradients))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2420 merge_call
    return self._merge_call(merge_fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2427 _merge_call
    return merge_fn(self._strategy, *args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/mixed_precision/experimental/loss_scale_optimizer.py:256 _apply_gradients_cross_replica
    control_flow_ops.no_op)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/smart_cond.py:54 smart_cond
    return true_fn()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/mixed_precision/experimental/loss_scale_optimizer.py:248 apply_fn
    all_reduce_sum_gradients)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/mixed_precision/experimental/loss_scale_optimizer.py:262 _apply_gradients
    name, all_reduce_sum_gradients)
/usr/local/lib/python3.6/dist-packages/horovod/_keras/__init__.py:73 apply_gradients
    raise Exception('`apply_gradients()` was called without a call to '

Exception: `apply_gradients()` was called without a call to `get_gradients()` or `_aggregate_gradients`. If you're using TensorFlow 2.0, please specify `experimental_run_tf_function=False` in `compile()`.
```

Please let me know how I can help. cc @nluehr @reedwm @tgaddair @cliffwoolley @omalleyt12 @houtom
tensorflowtensorflow | There is a bug where strings are not displayed in iOS dark mode | Bug | System information: mobile device: iPhone 8, iOS 13. Describe the current behavior: [url: image classification iOS example] There is a bug where strings are not displayed in iOS dark mode; there is no bug in light mode. Other info/logs: the reason for this is that the background and the text are the same color.
tensorflowtensorflow | ja load_data/csv.ipynb uses the non-existent dataset output_shapes | Bug | This tutorial is currently broken. It also needs a TensorFlow 2.x version.
tensorflowtensorflow | Some notebooks use tf-nightly-2.0-preview | Bug | This package does not exist anymore. Please upgrade all notebooks to use TensorFlow 2.x.
tensorflowtensorflow | Several notebooks fail on keras.experimental.export_saved_model | Bug |

```python
# Export the model to a SavedModel
keras.experimental.export_saved_model(model, 'path_to_saved_model')
# Recreate the exact same model
new_model = keras.experimental.load_from_saved_model('path_to_saved_model')
```

These should just be `model.save('path_to_saved_model')` and `keras.models.load_model('path_to_saved_model')`.
tensorflowtensorflow | ImportError: DLL load failed: a dynamic link library (DLL) initialization routine failed (translated from the Portuguese "uma rotina de inicialização da biblioteca de vínculo dinâmico (DLL) falhou") | Bug |

```
Traceback (most recent call last):
  File "C:\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Python\Python36\lib\imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Python\Python36\lib\imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: a dynamic link library (DLL) initialization routine failed.

During handling of the above exception, another exception occurred: ...
```
tensorflowtensorflow | tf.io.gfile.glob misses some patterns using tf-nightly | Bug | System information: template fields (custom code, OS, mobile device, install source, Python/Bazel/GCC/CUDA versions, GPU) not filled in; TensorFlow version: tf-nightly. Describe the current behavior: cc @Conchylicultor, please have a look at this issue from tfds (tensorflow/datasets#1670). Tests are failing for the plantvillage and the300w_lp datasets because, in the `_generate_examples` function of both plant_village.py (L142) and the300w_lp.py (L116), `tf.io.gfile.glob` does not correctly match all example patterns. However, Python's `glob` solves the issue (see PR tensorflow/datasets#1684). Describe the expected behavior: `tf.io.gfile.glob` must match all provided patterns so that all required examples are generated. Standalone code to reproduce the issue: please have a look at this Colab notebook; it contains the full traceback as well as the problem with `tf.io.gfile.glob` and how Python `glob` solves it. Although `glob` fixes this issue, we have to use `tf.io.gfile` because we need to support GCS and other distributed file systems.
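As a point of reference for the expected matching semantics, the standard library's `fnmatch` implements glob-style patterns; a minimal sketch with invented file names (note that `fnmatch`'s `*` also crosses `/` separators, unlike path-aware `glob.glob`):

```python
# Sketch of glob-style matching with the stdlib; the paths below are
# invented for illustration, not the actual tfds dataset layout.
from fnmatch import fnmatchcase

files = [
    "plant_village/Tomato___Late_blight/img1.jpg",
    "plant_village/Tomato_healthy/img2.jpg",
]
pattern = "plant_village/*/*.jpg"
matched = [f for f in files if fnmatchcase(f, pattern)]
print(matched)  # both paths match the pattern
```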
tensorflowtensorflow | tflite experimental_new_converter: incorrect model for Bidirectional GRU | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, provided below. OS platform and distribution: macOS 10.12.6. Mobile device (if the issue happens on a mobile device): Samsung Note 9. TensorFlow installed from source or binary: binary. TensorFlow version: 2.1.0. Python/Bazel/GCC/CUDA versions, GPU model and memory: not provided. Describe the current behavior: TFLite produces a graph with 1 output of size 1 × 4 bytes. Describe the expected behavior: TFLite produces a graph with 1 output of size 64 × 4 bytes. Standalone code to reproduce the issue:

```python
import tensorflow as tf

words = tf.keras.Input(shape=(1,), name='words', dtype=tf.int32)
embedding_output = tf.keras.layers.Embedding(8000, 100, input_length=1)(words)
modified_embedding = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32))(embedding_output)
model = tf.keras.Model(inputs=words, outputs=modified_embedding)
model.make_predict_function()
model.summary()
model.save('testmodel')

converter = tf.lite.TFLiteConverter.from_saved_model('testmodel')
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.experimental_new_converter = True
tflite_model = converter.convert()
open('testmodel.tflite', 'wb').write(tflite_model)
```

If possible, please share a link to a Colab/Jupyter or any other notebook. Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback, and attach large logs and files.
tensorflowtensorflow | TF 2.2 Keras fit: class_weight is being cast to int | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. TensorFlow version: introduced since 2.2.0rc0. Environment: Google Colaboratory, TensorFlow 2.2.0rc0 with Python 3. Describe the current behavior: the class weights passed to `tf.keras.Model.fit` are converted to int64, which leads to all class weights below 1 being set to 0. Describe the expected behavior: the class weights are kept as float values so that results can be more precise based on the class weights. Other info/logs: I tracked down the piece of code responsible for this behavior (L1258 of the linked file):

```python
class_weight_tensor = ops.convert_to_tensor_v2(
    [int(class_weight[c]) for c in class_ids], dtype='int64')
```

(previously `[class_weight[c] for c in class_ids]`). It seems this commit for TF 2.2 introduced it (diff f8dd40712ac721c1b363e1a1ec44c1a3), but I cannot figure out why it was changed like this. As a temporary workaround, I tried normalizing my lowest weight to 1, but then I still get float values for the other weights, so even after rounding they will not be as precise as I need. Since I use a tf.data Dataset loaded from GCS, I cannot simply pass sample weights directly myself as a workaround either. So I am not sure whether this is a bug or an intended change; in case it is intended, I would like to know what the alternative would be.
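A pure-Python sketch of why the cast is destructive (the weight dict below is invented; it mirrors the effect of the quoted line, not the Keras code itself): any weight below 1 truncates to 0, silently removing that class from the loss.

```python
# Sketch: float class weights vs. the int64 cast described in the issue.
class_weight = {0: 0.25, 1: 1.0, 2: 3.5}
class_ids = sorted(class_weight)

as_float = [class_weight[c] for c in class_ids]       # pre-2.2 behavior
as_int64 = [int(class_weight[c]) for c in class_ids]  # 2.2.0rc0 behavior

print(as_float)  # [0.25, 1.0, 3.5]
print(as_int64)  # [0, 1, 3] -> the 0.25 weight is silently zeroed
```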
tensorflowtensorflow | EfficientDet: ValueError: Cannot find the Placeholder op that is an input to the ReadVariableOp | Bug | System information: OS platform and distribution: Ubuntu 18.04 x86_64, CUDA 10. TensorFlow installed from source or binary: binary. TensorFlow version (or GitHub SHA if from source): tf-nightly 2.2.0.dev20200319. Python version: 3.6. Command used to run the converter, or code if you're using the Python API: I am trying to perform weight quantization using the converter's `from_saved_model`, but I am suffering from the problem that the input op of the SavedModel is not recognized correctly by the converter. You can confirm in the following steps that the input op is included in the SavedModel, yet an error occurs when the conversion is performed.

1. Download the EfficientDet (google/automl) trained checkpoint from the link here (this is the officially released EfficientDet model), then unzip the downloaded tar.gz file to any location.

2. Clone the google/automl repository:

```console
sudo pip3 install tf-nightly
git clone https://github.com/google/automl
cd automl/efficientdet
```

3. The following script freezes the downloaded checkpoint:

```console
python3 model_inspect.py \
    --model_name=efficientdet-d0 \
    --delete_logdir=False \
    --freeze=True \
    --runmode=freeze \
    --input_image_size=512 \
    --ckpt_path=${HOME}/Downloads/efficientdet-d0 \
    --logdir=${HOME}/Downloads/efficientdet-d0/log \
    --export_ckpt=${HOME}/Downloads/efficientdet-d0/export \
    --threads=4
```

4. The following Python script extracts the name of the input layer; it indicates that the input layer name of the model is `input`:

```python
### tf-nightly 2.2.0.dev20200319
import os
import tensorflow as tf
from tensorflow.python import ops

def get_graph_def_from_file(graph_filepath):
    tf.compat.v1.reset_default_graph()
    with ops.Graph().as_default():
        with tf.compat.v1.gfile.GFile(graph_filepath, 'rb') as f:
            graph_def = tf.compat.v1.GraphDef()
            graph_def.ParseFromString(f.read())
            return graph_def

# Look up the name of the placeholder for the input node
graph_def = get_graph_def_from_file('efficientdet-d0_train.pb')
input_name = ''
for node in graph_def.node:
    if node.op == 'Placeholder':
        print('efficientdet-d0_train Input Node Name:', node.name)
        input_name = node.name  # this will be the input node: 'input'
```

5. The following script converts efficientdet-d0_train.pb, and a SavedModel is created in the saved_model folder:

```python
### tf-nightly 2.2.0.dev20200319
import os
import shutil
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants
from tensorflow.python import ops

def get_graph_def_from_file(graph_filepath):
    tf.compat.v1.reset_default_graph()
    with ops.Graph().as_default():
        with tf.compat.v1.gfile.GFile(graph_filepath, 'rb') as f:
            graph_def = tf.compat.v1.GraphDef()
            graph_def.ParseFromString(f.read())
            return graph_def

def convert_graph_def_to_saved_model(export_dir, graph_filepath, input_name, outputs):
    graph_def = get_graph_def_from_file(graph_filepath)
    with tf.compat.v1.Session(graph=tf.Graph()) as session:
        tf.import_graph_def(graph_def, name='')
        tf.compat.v1.saved_model.simple_save(
            session,
            export_dir,
            # Change input_image to the node name if you know the name: 'input'
            inputs={input_name: session.graph.get_tensor_by_name('{}:0'.format(node.name))
                    for node in graph_def.node if node.op == 'Placeholder'},
            outputs={t.rstrip(':0'): session.graph.get_tensor_by_name(t)
                     for t in outputs}
        )
        print('Optimized graph converted to SavedModel!')

# Look up the name of the placeholder for the input node
graph_def = get_graph_def_from_file('efficientdet-d0_train.pb')
input_name = 'input'
outputs = ['class_net/class-predict/BiasAdd:0',
           'class_net/class-predict_1/BiasAdd:0',
           'class_net/class-predict_2/BiasAdd:0',
           'class_net/class-predict_3/BiasAdd:0',
           'class_net/class-predict_4/BiasAdd:0',
           'box_net/box-predict/BiasAdd:0',
           'box_net/box-predict_1/BiasAdd:0',
           'box_net/box-predict_2/BiasAdd:0',
           'box_net/box-predict_3/BiasAdd:0',
           'box_net/box-predict_4/BiasAdd:0']

# Convert this to a TF Serving compatible model
shutil.rmtree('./saved_model', ignore_errors=True)
convert_graph_def_to_saved_model('./saved_model', 'efficientdet-d0_train.pb',
                                 input_name, outputs)
```

6. Confirm that the SavedModel was generated correctly with the following command:

```console
$ saved_model_cli show --dir saved_model --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 512, 512, 3)
        name: input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['box_net/box-predict/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 64, 64, 36)
        name: box_net/box-predict/BiasAdd:0
    outputs['box_net/box-predict_1/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 32, 32, 36)
        name: box_net/box-predict_1/BiasAdd:0
    outputs['box_net/box-predict_2/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 16, 16, 36)
        name: box_net/box-predict_2/BiasAdd:0
    outputs['box_net/box-predict_3/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 8, 8, 36)
        name: box_net/box-predict_3/BiasAdd:0
    outputs['box_net/box-predict_4/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 4, 4, 36)
        name: box_net/box-predict_4/BiasAdd:0
    outputs['class_net/class-predict/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 64, 64, 810)
        name: class_net/class-predict/BiasAdd:0
    outputs['class_net/class-predict_1/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 32, 32, 810)
        name: class_net/class-predict_1/BiasAdd:0
    outputs['class_net/class-predict_2/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 16, 16, 810)
        name: class_net/class-predict_2/BiasAdd:0
    outputs['class_net/class-predict_3/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 8, 8, 810)
        name: class_net/class-predict_3/BiasAdd:0
    outputs['class_net/class-predict_4/BiasAdd'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 4, 4, 810)
        name: class_net/class-predict_4/BiasAdd:0
  Method name is: tensorflow/serving/predict
```

7. Finally, an error occurs when performing weight quantization with the following Python script:

```python
### tf-nightly 2.2.0.dev20200319
import tensorflow as tf

# Weight quantization - input/output=float32
converter = tf.lite.TFLiteConverter.from_saved_model('./saved_model')
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
tflite_quant_model = converter.convert()
with open('efficientdet-d0_train.tflite', 'wb') as w:
    w.write(tflite_quant_model)
print('Weight quantization complete! - efficientdet-d0_train.tflite')
```

The following error occurs even though the input Placeholder does exist per the check in the previous step:

```
ValueError: Cannot find the Placeholder op that is an input to the ReadVariableOp.
```

The output from the converter invocation:

```console
2020-03-20 17:36:18.835931: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64:/opt/intel/openvino_2020.1.023/opencv/lib:/opt/intel/openvino_2020.1.023/deployment_tools/ngraph/lib:/opt/intel/opencl:/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64
2020-03-20 17:36:18.836031: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: (same path list as above)
2020-03-20 17:36:18.836056: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:tensorflow:Importing a function (metagraph import) with ops with custom gradients. Will likely fail if a gradient is requested.
(the warning above is repeated many times)
```
custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient be request warn tensorflow import a function metagraph import with op with custom gradient will likely fail if a gradient 
```text
2020-03-20 17:36:20 I stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-03-20 17:36:20 I core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: name: GeForce GTX 1070 with Max-Q Design, computeCapability: 6.1, coreClock: 1.2655GHz, coreCount: 16, deviceMemorySize: 7.93GiB, deviceMemoryBandwidth: 238.66GiB/s
2020-03-20 17:36:20 W stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64 plus the OpenVINO 2020.1.023 deployment directories
  (the same warning is emitted for libcublas.so.10, libcufft.so.10, libcurand.so.10, libcusolver.so.10 and libcusparse.so.10)
2020-03-20 17:36:20 I stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-20 17:36:20 W core/common_runtime/gpu/gpu_device.cc:1592] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly. Skipping registering GPU devices...
2020-03-20 17:36:20 I core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-03-20 17:36:20 I compiler/xla/service/service.cc:168] XLA service initialized for platform Host and for platform CUDA (this does not guarantee that XLA will be used)
  (the whole device-discovery block, including all six dso_loader warnings, is printed a second time at 17:36:21 when grappler starts a new session)
2020-03-20 17:36:21 I core/grappler/optimizers/meta_optimizer.cc:814] Optimization results for grappler item: graph_to_optimize
2020-03-20 17:36:21 I core/grappler/optimizers/meta_optimizer.cc:816]   function_optimizer: function_optimizer did nothing. time = 0.002ms.

Traceback (most recent call last):
  File "03_weight_quantization.py", line 12, in <module>
    tflite_quant_model = converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/lite.py", line 423, in convert
    self._funcs[0], lower_control_flow=False)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/convert_to_constants.py", line 506, in convert_variables_to_constants_v2
    raise ValueError("Cannot find the Placeholder op that is an input "
ValueError: Cannot find the Placeholder op that is an input to the ReadVariableOp.
```

Failure details: the error occurs even though the input placeholder does exist (verified in a final check step). Any other info / logs: the .pb file, the checkpoints, and the script I used for the check are committed below. |
tensorflow/tensorflow | Predictions are not accurate after loading the saved model (resume classification) | Bug |

System information:
- Have I written custom code: yes
- OS platform and distribution: Linux
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0
- Python version: 3.7
- Mobile device / CUDA / GPU: n/a

Describe the current behavior: we created a sample resume-classification model and saved it after it reached more than 95% accuracy on both the training and the test set. After loading the model back and predicting the class with tf.argmax, we observe some ambiguity in the predictions: the correct label is often not predicted.

Describe the expected behavior: predictions should remain accurate after loading the model.

Standalone code to reproduce the issue: resume classification v1.5.zip (attached). |
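One common cause of this exact symptom — high accuracy at train time, scrambled labels after reload — is rebuilding the class-index mapping in a different order at inference time, so the tf.argmax indices decode to the wrong label names. This is only a hypothesis here (the attached notebook would confirm or rule it out), and the labels below are made up for illustration:

```python
# Made-up labels for illustration; not taken from the attached notebook.
train_labels = ["java_developer", "data_scientist", "devops_engineer"]

# Mapping fixed at training time; this exact ordering must be saved
# alongside the model weights.
label_to_index = {label: i for i, label in enumerate(train_labels)}

# Decoding argmax output must invert the *training* mapping...
index_to_label = {i: label for label, i in label_to_index.items()}
assert index_to_label[1] == "data_scientist"

# ...whereas re-deriving the mapping from an unordered collection at
# inference time can silently permute the classes.
reloaded = sorted(set(train_labels))  # a different order than train_labels
assert reloaded != train_labels
```

If the mapping is rebuilt like `reloaded` above, the model's outputs are fine but every decoded label is shifted, which looks exactly like "inaccurate predictions after loading".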
tensorflow/tensorflow | Add more ops to TFLite | Bug |

System information: Windows 10, TensorFlow 2.0.

Converter error:

```text
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter().
Here is a list of builtin operators you are using: ADD, AVERAGE_POOL_2D, CONV_2D, DEPTHWISE_CONV_2D, FULLY_CONNECTED, MUL, SUB.
Here is a list of operators for which you will need custom implementations: IdentityN.
```
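A configuration sketch of the TF Select fallback that the message points at — assuming the TF 2.x Python converter API; the SavedModel path is hypothetical and must be replaced with the real model directory:

```python
import tensorflow as tf

# Hypothetical path; point this at your actual SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # keep native TFLite kernels where possible
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TensorFlow kernels (e.g. IdentityN)
]

tflite_model = converter.convert()
```

Note that a model converted with SELECT_TF_OPS needs the Select-TF-Ops (Flex) runtime library linked into the app for those fallback kernels to be available at inference time.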
tensorflow/tensorflow | AlreadyExistsError when using large tensors | Bug |

System information:
- Have I written custom code: yes
- OS platform and distribution: Ubuntu 18.04.3 LTS and Ubuntu 16.04.6 LTS
- TensorFlow installed from: pip
- Python version: 3.6.8 and 3.7.5
- CUDA/cuDNN version: not using GPU
- TensorFlow version: 2.1.0

Describe the current behavior: the code below raises tensorflow.python.framework.errors_impl.AlreadyExistsError.

Describe the expected behavior: it should not raise that error.

Standalone code to reproduce the issue. This is a minimal example; I could only trigger the error when using tf.GradientTape — without the tape the code runs perfectly, but I need to take the gradient:

```python
import tensorflow as tf
import numpy as np


class C:
    def __init__(self, n=2000):
        self.ae = tf.Variable(np.eye(n), trainable=True, dtype=tf.float32)
        self.aa = tf.Variable(np.eye(n), trainable=True, dtype=tf.float32)
        self.fr = tf.Variable(0.5, trainable=True, dtype=tf.float32)
        self.kp = tf.Variable(np.zeros(n), trainable=False, dtype=tf.float32)

    @tf.function
    def loss_op(self, k: tf.Tensor, a: tf.Tensor, s: tf.Tensor):
        l = tf.constant(0.0)

        def loop_fn(i, k, l):
            p = tf.clip_by_value(k[a[i]], 0.01, 0.99)
            l = l + s[i] * tf.math.log(p) + (1 - s[i]) * tf.math.log(1 - p)
            at = self.aa[a[i]]
            gk = self.ae[a[i]] * k + at
            lk = k + at * k
            k = tf.clip_by_value(k + s[i] * gk + (1 - s[i]) * self.fr * lk, 0.0, 1.0)
            return i + 1, k, l

        def loop_cond(i: tf.Tensor, *unused):
            return tf.logical_and(tf.greater_equal(s[i], 0), tf.less(i, 199))

        _, _, l = tf.while_loop(loop_cond, loop_fn, [0, k, l], back_prop=True)
        return l

    @tf.function
    def regularizer(self, tensor: tf.Tensor):
        return tf.reduce_sum(tf.math.log(tf.abs(tensor) + 1))

    @tf.function
    def train_op(self, a, s, opt):
        with tf.GradientTape() as tape:
            loss = self.loss_op(self.kp, a, s)
            aal = self.regularizer(self.aa)
            ael = self.regularizer(self.ae)
            o = loss + 0.5 * (aal + ael)
        train_vars = [self.ae, self.aa, self.fr]
        gradients = tape.gradient(o, train_vars)
        opt.apply_gradients(zip(gradients, train_vars))
        return loss


c = C()
opt = tf.optimizers.Adam(learning_rate=1e-3)
a = np.arange(200, dtype=np.int32)
s = np.ones(200, dtype=np.float32)
c.train_op(a, s, opt)
```

Here is a stack trace, if that is helpful (condensed):

```text
2020-03-20 09:56:30 W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at variable_ops.cc:104 : Already exists: Resource __per_step_0/StatefulPartitionedCall_3/gradients/while_grad/while_grad_body_193/gradients/AddN/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
(the same "Already exists" message is logged several more times, and the collective executor aborts with it as well)
Traceback (most recent call last):
  File "error.py", line 69, in <module>
    c.train_op(a, s, o)
  ...
  File "/home/jonas/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Resource __per_step_0/StatefulPartitionedCall_3/gradients/while_grad/while_grad_body_193/gradients/AddN/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE [Op:__inference_train_op_806]
Function call stack: train_op
```
tensorflow/tensorflow | Micro: kernel_concatenation_test fails with "Aborted (core dumped)" | Bug |

TensorFlow Micro system information:
- Host OS platform and distribution: Ubuntu 16.04
- TensorFlow installed from: source
- TensorFlow version (commit SHA): 765ddddb22
- Target platform: Linux x86_64

Describe the problem: running

```shell
make -f tensorflow/lite/micro/tools/make/Makefile test_kernel_concatenation_test
```

results in the following output:

```text
tensorflow/lite/micro/testing/test_linux_binary.sh: line 46:  4201 Aborted (core dumped) $1 > ${MICRO_LOG_FILENAME} 2>&1
tensorflow/lite/micro/tools/make/Makefile:321: recipe for target 'test_kernel_concatenation_test' failed
make: *** [test_kernel_concatenation_test] Error 134
```

Please provide the exact sequence of commands/steps when you ran into the problem: the single make command above.
tensorflow/tensorflow | Questions about the transformer.ipynb tutorial | Bug |

Hi, I am new to transformers and I am trying to understand this transformer tutorial. In the tutorial, the transformer has an encoder, a decoder, and a final linear layer, but in the paper the transformer has a softmax layer after the final linear layer. I think the line `predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)` correctly returns the expected output anyway, but I would like to know:

1. Why is the softmax layer not implemented in this tutorial? If I add a softmax layer after the final linear layer and train the model, will the prediction results be different, and why?
2. Why does this tutorial use two separate vocabularies instead of a shared subword vocabulary — maybe to keep the example simple? If I want to use a shared vocabulary, should I implement the shared-embedding-and-softmax-weights part from tensor2tensor?

Thanks!
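On the first question: softmax is strictly increasing in each logit, so taking the argmax of the raw logits and the argmax of the softmax probabilities always picks the same id — which is why the tutorial's decode line works without an explicit softmax. A quick NumPy sketch (not from the tutorial itself):

```python
import numpy as np

# Softmax is monotonic per class, so it never changes which index is largest.
def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 3.2,  0.3]])
assert np.array_equal(logits.argmax(axis=-1), softmax(logits).argmax(axis=-1))
```

Training is a separate matter: if the loss is built with `from_logits=True` (as the tutorial's sparse categorical cross-entropy is), the softmax is already applied inside the loss, so adding another explicit softmax layer before it would effectively apply softmax twice and change the gradients.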
tensorflow/tensorflow | Unexpected notebook cell generated in Japanese documentation | Bug |

I found an issue: after conversion to docs, an extra notebook cell is generated in the Japanese version (see the attached image and the original notebook).
tensorflow/tensorflow | SavedModelEstimator not found in TF 2 | Bug |

System information:
- Have I written custom code: yes
- OS platform and distribution: macOS
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0
- Python version: 3.7
- Mobile device / Bazel / GCC / CUDA / GPU: n/a

Describe the current behavior: tf.estimator.experimental.SavedModelEstimator is not found.

Describe the expected behavior: according to the export declaration (line 112 of the linked estimator source), it should be available as tf.estimator.experimental.SavedModelEstimator.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
tf.estimator.experimental.SavedModelEstimator
```

```text
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow_estimator.python.estimator.api._v2.estimator.experimental' has no attribute 'SavedModelEstimator'
```
tensorflowtensorflow | TensorFlow Lite error: tensorflow/lite/kernels/concatenation.cc:52 "axis 0 was not true" | Bug | System information: OS platform and distribution: macOS Catalina 10.15.3. TensorFlow installed from (source or binary): binary. TensorFlow version (or GitHub SHA if from source): 1.15.2. Command used to run the converter, or code if you're using the Python API: git clone the repo, run the first file to get a saved model (it takes about one minute), then run the second file to get a tflite file named bi-lstm.tflite under model/bi-lstm, and copy that file into the Xcode project. I can't allocateTensors(). I found this problem is caused by this Python code: output = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, char_embeddings, sequence_length=word_lengths, dtype=tf.float32, time_major=True); output_fw, output_bw = output; output = tf.concat([output_fw, output_bw], axis=1). The Swift code is: guard let modelPath = Bundle.main.path(forResource: "bi-lstm", ofType: "tflite") else { print("Failed to load the model file"); return }; var options = Interpreter.Options(); options.threadCount = 4; let interpreter = try Interpreter(modelPath: modelPath, options: options); try interpreter.allocateTensors(); print(interpreter.inputTensorCount). The output from the converter invocation: 2020-03-20 07:49:14.148393+0800 SignatureTextClassfication[25938:3910176] Initialized TensorFlow Lite runtime. TensorFlow Lite Error: tensorflow/lite/kernels/concatenation.cc:52 axis 0 was not true. TensorFlow Lite Error: Node number 1 (CONCATENATION) failed to prepare. Failed to create the interpreter with error: Failed to allocate memory for input tensors. Also, please include a link to the saved model or GraphDef (put the link here or attach it to the issue): bi-lstm.tflite.zip. Failure details: if the conversion was successful but the generated model is wrong, state what is wrong (produces wrong results and/or decreased accuracy; produces correct results but the model is slower than expected; model generated from old converter). Any other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
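For reference, tf.concat requires every input to have the same size on all axes except the concatenation axis, and it resolves negative axes against the rank. A minimal pure-Python sketch of that shape rule (the helper name and the example shapes are mine, not taken from the model above):

```python
def concat_output_shape(shapes, axis):
    """Shape of concatenating tensors of the given shapes along `axis`.

    Mirrors tf.concat's rule: all dimensions except `axis` must match.
    """
    rank = len(shapes[0])
    axis = axis % rank  # resolve negative axes, as tf.concat does
    out = list(shapes[0])
    for shape in shapes[1:]:
        for d in range(rank):
            if d != axis and shape[d] != out[d]:
                raise ValueError("incompatible shapes %r vs %r" % (shapes[0], shape))
        out[axis] += shape[axis]
    return tuple(out)

# Forward/backward RNN outputs with time_major=True: (time, batch, units).
fw = (20, 1, 128)
bw = (20, 1, 128)
print(concat_output_shape([fw, bw], axis=-1))  # (20, 1, 256)
```

If the converter cannot resolve the concat axis at prepare time (e.g. with dynamic shapes from the RNN), the TF_LITE_ENSURE check on the axis in concatenation.cc is exactly the kind of check that fails.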
tensorflowtensorflow | Single-precision tf.math.erf with TF2 produces slightly different results from TF1 and SciPy | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS 10.15.2 and Colab. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.1.0-0-ge5bf8de410 2.1.0. Python version: 3.7.3. Describe the current behavior: tf.math.erf with float32 can produce slightly different results between TF1 and TF2; SciPy is consistent with TF1. Describe the expected behavior: I wouldn't expect any difference between the two, although I confess I'm not sure this is actually a bug; maybe it's an expected side effect of some other change. Standalone code to reproduce the issue. With TF2: import tensorflow as tf; import scipy; import numpy as np; print(tf.math.erf(1.0606601).numpy()); print(scipy.special.erf(np.array(1.0606601, dtype=np.float32))) — produces 0.86638564 and 0.8663856. With TF1: import tensorflow as tf; import scipy; import numpy as np; print(tf.Session().run(tf.math.erf(1.0606601))); print(scipy.special.erf(np.array(1.0606601, dtype=np.float32))) — produces 0.8663856 and 0.8663856. With double precision the results are consistently 0.8663855711671024.
tensorflowtensorflow | Issue with calling tf.device | Bug | Please make sure that this is a build/installation issue; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: build_template). System information: Ubuntu 19.10, installed from pip. TensorFlow version: 2.1.0. Python version: 3.7.4. Installed using virtualenv/pip/conda: pip. GCC compiler version (if compiled from source): 7.3.0. CUDA/cuDNN version: 10.1. GPU model and memory: RTX 2060 Super, 8 GB. Describe the problem: calling with tf.device(tf.compat.v1.train.replica_device_setter(worker_device=worker, ps_device="/cpu:0", ps_tasks=1)): gives an error that functions are not supported. Any other info / logs: Traceback (most recent call last): File "export_detection.py", line 38, in <module>; File ".../contextlib.py", line 112, in __enter__: return next(self.gen); File ".../superpoint/experiment.py", line 73, in init_graph: net(n_gpus=n_gpus, **config["model"]); File ".../superpoint/models/base_model.py", line 122, in __init__: self._build_graph(); File ".../base_model.py", line 275, in _build_graph: self._pre_graph(); File ".../base_model.py", line 223, in _pre_graph; File ".../base_model.py", line 157, in _gpu_tower: worker_device=worker, ps_device="/cpu:0", ps_tasks=1; File ".../tensorflow_core/python/framework/ops.py", line 5078, in device_v2: raise RuntimeError("tf.device does not support functions."); RuntimeError: tf.device does not support functions.
tensorflowtensorflow | UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. | Bug | System information: OS platform and distribution: Ubuntu 18.04, Linux kernel 5.3.0-40-generic #32~18.04.1-Ubuntu SMP Mon Feb 3 14:05:59 UTC 2020 x86_64 GNU/Linux. TensorFlow installed from source as written in the GPU-support guide; also tried from binary; the build had all flags off except CUDA=y. TensorFlow version: 2.1.0. Python version: 3.6.9. Bazel: 0.29.1. GCC compiler version: 7.5.0. CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5 (also tried 7.6.1). GPU model and memory: NVIDIA GeForce 1660 Ti, 6 GB. I tried to execute the simple MNIST CNN example, but with tensorflow.keras imports rather than keras. Error: UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node sequential/conv2d/Conv2D (defined at <stdin>:70)]] [Op:__inference_distributed_function_1004] Function call stack: distributed_function. I first got this error when installing tensorflow-gpu from binary via pip; I then thought that for CUDA 10.1 I had to manually build TensorFlow via Bazel, but after a successful build I got the same error. I also tried changing the cuDNN version from 7.6.5 to 7.6.1, but again it did not help. CUDA was downloaded from the NVIDIA website and works perfectly. cuDNN was installed correctly by downloading it from the NVIDIA website and copying it into /usr/local/cuda: cp -P cuda/include/cudnn.h /usr/local/cuda-<cuda version>/include; cp -P cuda/lib64/libcudnn* /usr/local/cuda-<cuda version>/lib64; chmod a+r /usr/local/cuda-<cuda version>/lib64/libcudnn*. If I try to train a model without convolution layers, everything works great with GPU computation, for example: model = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10)]). If you need any additional information I will add it. Please help me solve this issue; I truly don't know what else to try. By the way, as far as I am aware I need CUDA 10.1 because only this CUDA supports my NVIDIA GeForce 1660 Ti.
tensorflowtensorflow | Keras unable to clone_model a load_model result saved in TF format | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, see the attached example. OS platform and distribution: CentOS 7. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.1. Python version: 3.6.8. Bazel, GCC, CUDA/cuDNN, GPU model and memory: n/a. Describe the current behavior: attempting to clone a loaded model that has been saved in TF format (i.e. a directory rather than .h5) raises an error. This appears to be because the loaded model is of a slightly malformed type. Describe the expected behavior: clone_model should be able to clone derived types; load_model should return a true Model instance even when loading a TF-format model. Standalone code to reproduce the issue: import tensorflow as tf; # make a dummy functional model; x = tf.keras.layers.Input((10,)); y = tf.keras.layers.Dense(10)(x); a = tf.keras.models.Model(x, y); # save model with TF format; a.save('foo.model'); print('type of a is {0}'.format(type(a))); print('a _is_graph_network: {0}'.format(a._is_graph_network)); # read in the model; b = tf.keras.models.load_model('foo.model'); print('type of b is {0}'.format(type(b))); print('b _is_graph_network: {0}'.format(b._is_graph_network)); # try to clone b; errors because b is not of the correct type; c = tf.keras.models.clone_model(b). Other info / logs — output: type of a is <...>; a _is_graph_network: True; type of b is <...>; b _is_graph_network: False; then a ValueError: clone_model (tensorflow_core/python/keras/models.py, lines 420-423) dispatches to _clone_functional_model(model, input_tensors=input_tensors, layer_fn=clone_function), which at lines 163-167 checks "if not model._is_graph_network: raise ValueError('Expected `model` argument to be a functional `Model` instance, but got a subclass model instead.')"; ValueError: Expected `model` argument to be a functional `Model` instance, but got a subclass model instead. I've tried overwriting the _is_graph_network attribute, but that just pushes the problem off to another line. The reason I have to load and then clone is so I can recreate old models with specific datatypes: because TF 2.0.1 can't save mixed_float16 models, I have to save them as float32 and fix them up after loading. Normally one would just rebuild the model from the actual commands, but I've made backwards-compatibility-breaking changes since then, and I'd still like to be able to use my old models. I'm looking at just updating to TF 2.2, but I don't control my environment, so that may not be an option.
tensorflowtensorflow | tf.contrib module not found in TensorFlow; training and graph export not possible | Bug | Since Tuesday around 9am CEST I have not been able to proceed with training from a checkpoint, or even to export the current checkpoint via export_inference_graph.py. Tested on both TF 1.15 and 2.1.0, and it worked flawlessly up until yesterday. Is there any workaround currently? System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux (Google Colab). TensorFlow version: 1.15 and 2.1.0. Python version: 3.6.9. GPU model and memory: depends on which one Colab assigns me. Please provide the entire URL of the model you are using. Describe the current behavior: with PYTHONPATH=/content/models/research:/content/models/research/slim, running object_detection/builders/model_builder_test.py fails: Traceback (most recent call last): File "object_detection/builders/model_builder_test.py", line 23: from object_detection.builders import model_builder; File ".../builders/model_builder.py", line 22: from object_detection.builders import box_predictor_builder; File ".../builders/box_predictor_builder.py", line 20: from object_detection.predictors import convolutional_box_predictor; File ".../predictors/convolutional_box_predictor.py", line 23: slim = tf.contrib.slim; AttributeError: module 'tensorflow' has no attribute 'contrib'. Same with train.py: File "object_detection/legacy/train.py", line 48: from tensorflow.contrib import framework as contrib_framework; ModuleNotFoundError: No module named 'tensorflow.contrib'. And model_main.py: File "object_detection/model_main.py", line 26: from object_detection import model_lib; File ".../model_lib.py", line 27: from object_detection import eval_util; File ".../eval_util.py", line 40: slim = tf.contrib.slim; AttributeError: module 'tensorflow' has no attribute 'contrib'. And even export_inference_graph.py: File "object_detection/export_inference_graph.py", line 108: from object_detection import exporter; File ".../exporter.py", line 20: from tensorflow.contrib.quantize.python import graph_matcher; ModuleNotFoundError: No module named 'tensorflow.contrib'. Describe the expected behavior: testing the model builder (object_detection/builders/model_builder_test.py) reports successful tests (17 in TF1, 10 in TF2). Code to reproduce the issue: python object_detection/builders/model_builder_test.py, or just run this notebook in Colab. Update 1: version 1.14 works, partly; training fails after the first evaluation due to a TypeError in numpy on float64-to-int conversion.
tensorflowtensorflow | Problem with documentation tutorial | Bug | Hello TF team, I was trying the embeddings tutorial as given on the TF docs webpage. The two lines below give me an error, even though I am following the same documentation as written on the webpage: train_batches = train_data.shuffle(1000).padded_batch(10); test_batches = test_data.shuffle(1000).padded_batch(10). [image] I have manually tried to fix the error by passing padded_shapes as None or (None, None), but both of these have thrown errors as well.
tensorflowtensorflow | make command fails due to outdated flatbuffers | Bug | TensorFlow Micro system information: host OS platform and distribution: Ubuntu 16.04. TensorFlow installed from (source or binary): source. TensorFlow version (commit SHA if source): 2decf5694a. Describe the problem: when running make -f tensorflow/lite/micro/tools/make/Makefile test, the following error occurs: In file included from tensorflow/lite/core/api/op_resolver.h:20, from tensorflow/lite/micro/micro_interpreter.h:20, from tensorflow/lite/micro/examples/person_detection_experimental/person_detection_test.cc:23: tensorflow/lite/schema/schema_generated.h: In function 'const char* tflite::EnumNameTensorType(tflite::TensorType)': tensorflow/lite/schema/schema_generated.h:416:20: error: 'IsOutRange' is not a member of 'flatbuffers'. This is because IsOutRange is not available in flatbuffers v1.11.0, which is what the make system currently downloads. If I replace the flatbuffers directory in tensorflow/lite/micro/tools/make/downloads with the version 1.12.0 flatbuffers source from here, it compiles fine, except for the test micro_features_generator_test, which fails because it cannot find kiss_fft.h. Exact sequence of commands/steps to reproduce: make -f tensorflow/lite/micro/tools/make/Makefile test
tensorflowtensorflow | applications.resnet: does ResNet50 mean ResNet50 or ResNet34? | Bug | URL(s) with the issue: (see links). Description of issue (what needs changing): TF provides two types of ResNet model. The first is ResNet50, implemented by the code at L423-L441, where stack1 is the basic version of the residual function (L64-L127); the second is ResNet50V2, implemented by the code at L483-L501, where stack2 is the bottleneck version of the residual function (L175-L192). The original paper lists the different ResNet variants in Table 1. By the original definition, the network built from the basic residual block should be the 34-layer ResNet of Table 1. In the PyTorch implementation (L244-L266) they accordingly call the first one ResNet34, so I suggest that this ResNet50 should change its name.
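The layer counts in Table 1 of the ResNet paper follow directly from the stage configuration: basic blocks contribute 2 convolutions each and bottleneck blocks 3, plus the stem convolution and the final fully connected layer. A small arithmetic check I added (the stage configuration [3, 4, 6, 3] is the one from the paper):

```python
def resnet_depth(blocks_per_stage, convs_per_block):
    """Count weighted layers: stem conv + residual convs + final FC layer.

    Basic blocks have 2 convs per block, bottleneck blocks have 3
    (He et al., "Deep Residual Learning", Table 1).
    """
    return 1 + convs_per_block * sum(blocks_per_stage) + 1

stages = [3, 4, 6, 3]
print(resnet_depth(stages, convs_per_block=2))  # 34 -> basic blocks give ResNet-34
print(resnet_depth(stages, convs_per_block=3))  # 50 -> bottleneck blocks give ResNet-50
```

So with the [3, 4, 6, 3] configuration, a model named ResNet50 is only 50 layers deep if it uses bottleneck blocks, which is the crux of the naming question above.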
tensorflowtensorflow | TFDS tests fail with tf-nightly | Bug | System information: TensorFlow version: tf-nightly. Python version: 3.6. Describe the current behavior: tensorflow-datasets tests fail with tf-nightly, while they work fine with TF 1.15 and TF 2.1. Describe the expected behavior: tests should pass. Standalone code to reproduce the issue: Colab link (#scrollTo=Xb6mPLOcRtq7). More info: (see link).
tensorflowtensorflow | Encoding images as TF_STRING tensors in the C API | Bug | TF version: 1.15.0. OS: Windows 10, 64-bit. Compiler: MSVC 2017. I'm attempting to load a TF SavedModel and run inference on it using the C API for TF version 1.15.0. Essential layout — input tensor(s): encoded_image_string_tensor:0; output tensor(s) (not exhaustive): detection_boxes:0, detection_scores:0, detection_classes:0. While there is some documentation in the API header that describes how TF tensors of type TF_STRING are encoded, I can't seem to find any concrete examples or illustrations, which is probably why I keep running into an error when attempting to encode an image. I picked up some parts from this unchecked answer on StackOverflow to get started: // char* image contains a pointer to the image; // const unsigned int imageSize contains the size of the image; std::vector<int64_t> inputDims = { /* … */ }; size_t encodedSize = TF_StringEncodedSize(imageSize); size_t totalSize = TF_DataTypeSize(TF_UINT64) + encodedSize; char* encodedInput = new char[totalSize]; for (size_t i = 0; i < TF_DataTypeSize(TF_UINT64); ++i) { encodedInput[i] = 0; } TF_StringEncode(image, imageSize, encodedInput + 8, encodedSize, status); if (TF_GetCode(status) != TF_OK) { std::cerr << "Failed to encode image\n"; // the code enters this block, and TF_Message(status) writes nothing to the output stream; std::cerr << TF_Message(status) << std::endl; return false; } TF_Tensor* input = TF_NewTensor(TF_STRING, inputDims.data(), inputDims.size(), encodedInput, totalSize, nullptr, 0); Is there any guide on how to encode images as TF_STRING-type tensors? Thanks. Please use Netron to view the model if necessary: saved_model.zip
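For what it's worth, the TF 1.x TF_STRING tensor layout as I read it from c_api.h is: one 8-byte little-endian uint64 offset per element (relative to the start of the encoded region), followed by each element encoded as a base-128 varint length plus the raw bytes. A pure-Python sketch of that layout, to make the byte arithmetic in the C++ snippet above concrete (this is my reading of the header, not official sample code):

```python
import struct

def _varint(n):
    """Base-128 varint, as TF_StringEncode writes the length prefix."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_tf_string_tensor(elements):
    """Build the data buffer for a TF 1.x TF_STRING tensor from byte strings."""
    offsets, data = [], bytearray()
    for payload in elements:
        offsets.append(len(data))            # offset into the encoded region
        data += _varint(len(payload)) + payload
    header = b"".join(struct.pack("<Q", o) for o in offsets)
    return header + bytes(data)

buf = encode_tf_string_tensor([b"hello"])
# 8-byte offset table (one element at offset 0) + varint length 5 + payload
assert buf == struct.pack("<Q", 0) + b"\x05hello"
```

For a single scalar element this matches the C++ code's arithmetic: totalSize = 8 (one uint64 offset, zeroed) + TF_StringEncodedSize(imageSize), with the encoded string written at offset 8.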
tensorflowtensorflow | scatter_nd_add has a legacy docstring from scatter_nd | Bug | URL(s) with the issue: (see link). Description of issue (what needs changing): these sentences — "indices is an integer tensor containing indices into a new tensor of shape `shape`. The last dimension of indices can be at most the rank of `shape`" — appear to be a direct copy-paste from the docs of scatter_nd; however, there is no `shape` argument in scatter_nd_add. Clear description: the docs should instead refer to `tensor.shape`. Correct links (are the links to the source code correct?): there is no link — that's another issue, though, I guess — only to the TF v1 docs. Parameters defined (are all parameters defined and formatted correctly?): yes. Returns defined (are return values defined?): yes. Raises listed and defined (are the errors defined?): no errors are raised, apparently. Usage example (is there a usage example?): yes. Request visuals if applicable (are there currently visuals? If not, would they clarify the content?): no, but there are some in scatter_nd; I guess that's enough. Submit a pull request? (Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.): not right now.
tensorflowtensorflow | enable_mixed_precision_graph_rewrite error | Bug | TensorFlow 1.14 and Ubuntu 18.04. I'm trying mixed-precision training using tf.train.experimental.enable_mixed_precision_graph_rewrite. After adding the enable_mixed_precision_graph_rewrite wrapper to the dnn_optimizer in the census wide-and-deep model (L147), I get a TensorFlow internal error: ValueError: Tensor("current_loss_scale/Read/ReadVariableOp:0", shape=(), dtype=float32) must be from the same graph as Tensor("head/weighted_loss/Sum:0", shape=(), dtype=float32). That is, m = tf.estimator.DNNLinearCombinedClassifier(model_dir=model_dir, linear_feature_columns=crossed_columns, dnn_feature_columns=deep_columns, dnn_hidden_units=[100, 50]) was changed to m = tf.estimator.DNNLinearCombinedClassifier(model_dir=model_dir, linear_feature_columns=crossed_columns, dnn_feature_columns=deep_columns, dnn_optimizer=tf.train.experimental.enable_mixed_precision_graph_rewrite(tf.compat.v1.train.GradientDescentOptimizer(0.05)), dnn_hidden_units=[100, 50]). Maybe it's a compatibility error between Estimator and enable_mixed_precision_graph_rewrite. Please give some help, thanks.
tensorflowtensorflow | Getting different results in eager and graph mode when using Reduction.NONE on a loss object | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template). System information: OS platform and distribution: Ubuntu. TensorFlow version: 2.2.0. Colab example: (see link). Related tensorflow/addons issue: (see link).
tensorflowtensorflow | SparseTensor: wrong exception message when passing an argument with the wrong type | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Mint 19.3 (Ubuntu). TensorFlow installed from (source or binary): binary. TensorFlow version: both tf-nightly and TF 2.1. Python version: 3.8 and 3.6. Describe the current behavior: when passing an int32 tensor as the dense_shape of a SparseTensor, the last error displayed is: ValueError: Unable to create a eager SparseTensor. Check that your shapes are correctly defined; eager SparseTensors don't support unknown dimensions. Got shapes [4 4 4 4]. Looking back in the stack trace, the real error is: ValueError: Tensor conversion requested dtype int64 for Tensor with dtype int32. Describe the expected behavior: one of the following: (1) the conversion should not fail; (2) the last error should be "expected int64 tensor for `dense_shape` argument, got int32 tensor". Standalone code to reproduce the issue: import tensorflow as tf; # works: indices = tf.cast([[1, 1, 1, 1], [1, 3, 1, 1]], dtype=tf.int64); shape = tf.cast([4, 4, 4, 4], dtype=tf.int64); heat_map = tf.SparseTensor(indices=indices, values=tf.ones(tf.shape(indices)[0]), dense_shape=shape); # fails: indices = tf.cast([[1, 1, 1, 1], [1, 3, 1, 1]], dtype=tf.int64); shape = tf.cast([4, 4, 4, 4], dtype=tf.int32); heat_map = tf.SparseTensor(indices=indices, values=tf.ones(tf.shape(indices)[0]), dense_shape=shape). Other info / logs: Traceback (most recent call last): File ".../tensorflow/python/framework/sparse_tensor.py", line 142, in __init__: dense_shape, name="dense_shape", dtype=dtypes.int64; File ".../tensorflow/python/framework/ops.py", line 1317, in convert_to_tensor; ValueError: Tensor conversion requested dtype int64 for Tensor with dtype int32. During handling of the above exception, another exception occurred: Traceback (most recent call last): (ptvsd debugger and runpy frames) File ".../test_sparse_tensor.py", line 11: heat_map = tf.SparseTensor(indices=indices, values=tf.ones(tf.shape(indices)[0]), dense_shape=shape); File ".../tensorflow/python/framework/sparse_tensor.py", line 148, in __init__; ValueError: Unable to create a eager SparseTensor. Check that your shapes are correctly defined; eager SparseTensors don't support unknown dimensions. Got shapes [4 4 4 4].
tensorflowtensorflow | 2.x: SparseTensor shape becomes unknown after some operations if tf.function is used | Bug | With tf.function, if an argument x of a function is a 2-D tf.SparseTensor, its shape is (None, None). However, after some operations, such as tf.sparse.transpose and tf.sparse.reduce_sum, the shape of the resulting tensor becomes unknown. Please refer to this script for reproduction.
tensorflowtensorflow | Segmentation fault when using Cloud TPU | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Debian GNU/Linux 9.12 (stretch). Mobile device: n/a. TensorFlow installed from: preinstalled by Google Cloud Platform. TensorFlow version: v1.15.0-rc3-22-g590d6ee. Python version: Python 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516] on linux2. Bazel version: n/a. GCC compiler version: n/a. CUDA/cuDNN version / GPU model and memory: TPU provided by Google Cloud, version v3-8. Describe the current behavior: I requested a VM and the corresponding TPU using: ctpu up --zone=europe-west4-a --tpu-size=v3-8 --name=europe4 --tf-version=1.15 --machine-type=n1-standard-2 --project=my-project. After connecting to it, I started training a translation model using: t2t-trainer --model=transformer --hparams_set=transformer_tpu --problem=translate_ende_wmt32k_packed --train_steps=10 --eval_steps=3 --data_dir=gs://my-project/data --output_dir=gs://my-project/training/tmp --use_tpu=True --cloud_tpu_name=europe4 --tpu_num_shards=8 --schedule=train. This is the command specified in the tutorial "Training a language model on a pod", but the training does not start: the command is aborted due to a segmentation fault. Output of my_user@europe4: t2t-trainer (flags as above): WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/expert_utils.py:68: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead. WARNING:tensorflow:The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see … (for I/O related ops). If you
depend on functionality not list there please file an issue warn tensorflow from usr local lib python2 7 dist package tensor2tensor util adafactor py 27 the name tf train optimizer be deprecate please use tf compat v1 train optimizer instead warn tensorflow from usr local lib python2 7 dist package tensor2tensor util multistep optimizer py 32 the name tf train adamoptimizer be deprecate please use tf compat v1 train adamoptimizer instead warn tensorflow from usr local lib python2 7 dist package mesh tensorflow op py 4237 the name tf train checkpointsaverlistener be deprecate please use tf estimator checkpointsaverlistener instead warn tensorflow from usr local lib python2 7 dist package mesh tensorflow op py 4260 the name tf train sessionrunhook be deprecate please use tf estimator sessionrunhook instead warn tensorflow from usr local lib python2 7 dist package tensor2tensor model research neural stack py 38 the name tf nn rnn cell rnncell be deprecate please use tf compat v1 nn rnn cell rnncell instead warn tensorflow from usr local lib python2 7 dist package tensor2tensor rl gym util py 235 the name tf log info be deprecate please use tf compat v1 log info instead warn tensorflow from usr local lib python2 7 dist package tensor2tensor util trainer lib py 111 the name tf optimizeroption be deprecate please use tf compat v1 optimizeroption instead warn tensorflow from usr local lib python2 7 dist package tensorflow gan python estimator tpu gan estimator py 42 the name tf estimator tpu tpuestimator be deprecate please use tf compat v1 estimator tpu tpuestimator instead warn tensorflow from usr local bin t2 t trainer 32 the name tf log set verbosity be deprecate please use tf compat v1 log set verbosity instead warn tensorflow from usr local bin t2 t trainer 32 the name tf log info be deprecate please use tf compat v1 log info instead warn tensorflow from usr local bin t2 t trainer 33 the name tf app run be deprecate please use tf compat v1 app run instead warn 
tensorflow from usr local lib python2 7 dist package tensor2tensor util hparams lib py 49 the name tf gfile exist be deprecate please use tf io gfile exist instead w0316 12 49 12 574630 140506290238912 deprecation wrapper py 119 from usr local lib python2 7 dist package tensor2tensor util hparams lib py 49 the name tf gfile exist be deprecate please use tf io gfile exist instead warn tensorflow from usr local lib python2 7 dist package tensor2tensor util trainer lib py 839 the name tf set random seed be deprecate please use tf compat v1 set random seed instead w0316 12 49 12 673067 140506290238912 deprecation wrapper py 119 from usr local lib python2 7 dist package tensor2tensor util trainer lib py 839 the name tf set random seed be deprecate please use tf compat v1 set random seed instead warn tensorflow from usr local lib python2 7 dist package tensor2tensor util trainer lib py 116 the name tf graphoption be deprecate please use tf compat v1 graphoption instead w0316 12 49 12 675126 140506290238912 deprecation wrapper py 119 from usr local lib python2 7 dist package tensor2tensor util trainer lib py 116 the name tf graphoption be deprecate please use tf compat v1 graphoption instead warn tensorflow from usr local lib python2 7 dist package tensor2tensor util trainer lib py 129 the name tf gpuoption be deprecate please use tf compat v1 gpuoption instead w0316 12 49 12 675529 140506290238912 deprecation wrapper py 119 from usr local lib python2 7 dist package tensor2tensor util trainer lib py 129 the name tf gpuoption be deprecate please use tf compat v1 gpuoption instead i0316 12 49 12 683859 140506290238912 discovery py 271 url be request get i0316 12 49 12 725476 140506290238912 discovery py 867 url be request get i0316 12 49 12 725781 140506290238912 transport py 157 attempt refresh to obtain initial access token i0316 12 49 12 812311 140506290238912 discovery py 271 url be request get i0316 12 49 12 851186 140506290238912 discovery py 867 url be request get 
i0316 12 49 12 851486 140506290238912 transport py 157 attempt refresh to obtain initial access token warn tensorflow from usr local lib python2 7 dist package tensor2tensor datum generator text encoder py 940 the name tf gfile open be deprecate please use tf io gfile gfile instead w0316 12 49 12 991070 140506290238912 deprecation wrapper py 119 from usr local lib python2 7 dist package tensor2tensor datum generator text encoder py 940 the name tf gfile open be deprecate please use tf io gfile gfile instead warn tensorflow estimator s model fn include param argument but param be not pass to estimator w0316 12 49 13 249932 140506290238912 estimator py 1984 estimator s model fn include param argument but param be not pass to estimator info tensorflow use config save checkpoint sec none num ps replicas 0 keep checkpoint max 20 task type worker global i d in cluster 0 be chief true tpu config tpuconfig iteration per loop 100 num shard 8 num core per replica none per host input for training 2 tpu job name none initial infeed sleep sec none input partition dim none eval training input configuration 2 cluster spec model dir gs my project training tmp protocol none save checkpoint step 1000 keep checkpoint every n hour 10000 service none session config gpu option per process gpu memory fraction 0 95 allow soft placement true graph option cluster def job name worker task key 0 value 10 240 1 18 8470 isolate session state true use tpu true tf random seed none save summary step 100 device fn none cluster experimental distribute none num worker replicas 1 task i d 0 log step count step none experimental max worker delay sec none evaluation master u grpc 10 240 1 18 8470 eval distribute none train distribute none master u grpc 10 240 1 18 8470 i0316 12 49 13 251069 140506290238912 estimator py 209 use config save checkpoint sec none num ps replicas 0 keep checkpoint max 20 task type worker global i d in cluster 0 be chief true tpu config tpuconfig iteration per loop 100 num 
```
num_shards=8, num_cores_per_replica=None, per_host_input_for_training=2,
tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None,
eval_training_input_configuration=2), '_cluster_spec': ...,
'_model_dir': 'gs://my-project/training_tmp', '_protocol': None,
'_save_checkpoints_steps': 1000, '_keep_checkpoint_every_n_hours': 10000,
'_service': None, '_session_config': gpu_options { per_process_gpu_memory_fraction: 0.95 }
allow_soft_placement: true graph_options { } cluster_def { job { name: "worker"
tasks { key: 0 value: "10.240.1.18:8470" } } } isolate_session_state: true,
'_use_tpu': True, '_tf_random_seed': None, '_save_summary_steps': 100,
'_device_fn': None, '_cluster': None, '_experimental_distribute': None,
'_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': None,
'_experimental_max_worker_delay_secs': None,
'_evaluation_master': u'grpc://10.240.1.18:8470', '_eval_distribute': None,
'_train_distribute': None, '_master': u'grpc://10.240.1.18:8470'

INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/bin/t2t_trainer.py:328: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.
INFO:tensorflow:Querying Tensorflow master (grpc://10.240.1.18:8470) for TPU system metadata.
2020-03-16 12:49:14.373167: W tensorflow/core/distributed_runtime/rpc/grpc_session.cc:356] GrpcSession::ListDevices will initialize the session with an empty graph and other defaults because the session has not yet been created.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 18153004822558697212)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 3131077452827063453)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 285863435827645433)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 7574921377020815195)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 6156291304405420504)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 2147180529096251620)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 16404941304364531224)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 4697263980141791991)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 15388571662384413779)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 15368005949461242606)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 2717797883427288239)
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:num_partitions = 1 partition_id = 0
INFO:tensorflow:Reading data files from gs://my-project/data/translate_ende_wmt32k_packed-train*
INFO:tensorflow:partition: 0 num_data_files: 100
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/data_generators/problem.py:680: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/data_reader.py:275: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/data_reader.py:37: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/data_reader.py:417: output_shapes (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.compat.v1.data.get_output_shapes(dataset)`.
INFO:tensorflow:Setting T2TModel mode to 'train'
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/autograph/converters/directives.py:117: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/autograph/impl/api.py:255: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead.
INFO:tensorflow:Transforming feature 'inputs' with symbol_modality_33288_512.bottom
INFO:tensorflow:Transforming feature 'inputs_position' with identity_modality.bottom
INFO:tensorflow:Transforming feature 'inputs_segmentation' with identity_modality.bottom
INFO:tensorflow:Transforming feature 'targets' with symbol_modality_33288_512.targets_bottom
INFO:tensorflow:Transforming feature 'targets_position' with identity_modality.bottom
INFO:tensorflow:Transforming feature 'targets_segmentation' with identity_modality.bottom
INFO:tensorflow:Building model body
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/models/transformer.py:96: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/layers/common_layers.py:3077: The name tf.layers.dense is deprecated. Please use tf.compat.v1.layers.dense instead.
Segmentation fault
```

Describe the expected behavior: the training should start. This worked 2 weeks ago.

Standalone code to reproduce the issue: request a VM and TPU using

```shell
ctpu up --zone=europe-west4-a --tpu-size=v3-8 --name=europe4 --tf-version=1.15 --machine-type=n1-standard-2 --project=my-project
```

then generate the data using

```shell
t2t-datagen --problem=translate_ende_wmt32k_packed --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR
```

and finally start the training using

```shell
t2t-trainer --model=transformer --hparams_set=transformer_tpu --problem=translate_ende_wmt32k_packed --eval_steps=3 --data_dir=$DATA_DIR --output_dir=$MODEL_DIR/translate_ende --use_tpu=True --cloud_tpu_name=$TPU_NAME --train_steps=10
```

Both steps are copied from the tutorial "Train a language model on a single TPU or a Pod". This did work 2 weeks ago; I'm not sure what changed in the meantime. Training using only a CPU works fine. |
tensorflowtensorflow | Class type changes after saving and loading | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Arch Linux; TensorFlow installed from: binary; TensorFlow version: 2.1.0; Python version: 3.8.1; Bazel / GCC / CUDA/cuDNN / GPU: N/A.

Describe the current behavior: creating a model containing a certain layer class (in my case tf.keras.layers.BatchNormalization), the class changes after saving and loading the model. Before loading it is of type tensorflow.python.keras.layers.normalization_v2.BatchNormalization, which is the same as tf.keras.layers.BatchNormalization, but after loading it is of type tensorflow.python.keras.layers.normalization.BatchNormalization, which is not the same.

Describe the expected behavior: the type of the layer remains unchanged after saving and loading.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.BatchNormalization()])
model.build(input_shape=(None, 1))
model.save("/tmp/model.h5")
loaded_model = tf.keras.models.load_model("/tmp/model.h5")

print(isinstance(model.layers[0], tf.keras.layers.BatchNormalization))         # True
print(isinstance(loaded_model.layers[0], tf.keras.layers.BatchNormalization))  # False
# AttributeError: module 'tensorflow' has no attribute 'python'
import tensorflow.python.keras.layers.normalization
print(isinstance(loaded_model.layers[0],
                 tensorflow.python.keras.layers.normalization.BatchNormalization))
```

Other info: the reason I want to do this is that I want to freeze updating of the batch-normalization weights, which can be done by setting the layer's training to False. My goal is to do something like this:

```python
for l in model.layers:
    if isinstance(l, tf.keras.layers.BatchNormalization):
        l.training = False
```
 |
tensorflowtensorflow | set_shape is not loaded from saved model | Bug | System information: TensorFlow version: 2.1.0; Python version: 3.6.10.

Describe the current behavior: when loading a saved Keras model that contains a set_shape on a tensor, this is not loaded:

```python
import tensorflow as tf

inp = tf.keras.Input((None, 3))
inp.set_shape((None, 2, 3))
x = tf.keras.layers.Dense(3)(inp)
model = tf.keras.Model(inp, x)
model.summary()
model.save("test.h5")
loaded = tf.keras.models.load_model("test.h5")
loaded.summary()
```

model.summary():

```
Layer (type)                 Output Shape              Param #
input_1 (InputLayer)         [(None, None, 3)]         0
dense (Dense)                (None, 2, 3)              12
```

loaded.summary():

```
Layer (type)                 Output Shape              Param #
input_1 (InputLayer)         [(None, None, 3)]         0
dense (Dense)                (None, None, 3)           12
```

The shapes are not identical. |
tensorflowtensorflow | Conv2D calculation results are inconsistent with PyTorch | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; Python version: 3.6.5; CUDA/cuDNN version: 10.1.

Describe the current behavior: for some specific data, the calculation results of TensorFlow and PyTorch are inconsistent.

Describe the expected behavior: the calculation results of TensorFlow and PyTorch are consistent, or the difference is small.

Standalone code to reproduce the issue:

```python
# if target interface:
weight_torch = torch.from_numpy(np.full([11, 11, 3, 1], 1, np.float64).transpose(3, 2, 0, 1))
output_pytorch_cpu = F.conv2d(
    torch.from_numpy(input_pytorch.numpy().transpose(0, 3, 1, 2)),
    weight_torch, padding=0, stride=4,
).numpy().transpose(0, 2, 3, 1)
```

Other info / logs: TensorFlow output:

```
 31.42857361  31.4285183   31.42857361
 69.24489594  69.24489594  59.3673439
100.89792633 100.89792633 100.89792633
133.26530457 133.26530457 133.26530457
 60.22449875  60.22449875  60.22449875
187.46939087 187.46939087 187.46939087
205.918396   205.918396   205.918396
 96.42860413  96.42860413  96.42860413
 61.42854309  61.42854309  61.42854309
148.89811707 148.89811707 148.89811707
 96.28572083  96.28572083  96.28572083
 83.02045441  83.02045441  83.02045441
 87.89794922  87.89794922  87.89794922
135.12240601 135.12240601 135.12240601
```

PyTorch output:

```
 31.428574  31.428518  31.428574
 59.367344  69.244896  69.244896
100.89793  100.89793  100.89793
133.2653   133.2653   133.2653
 60.2245    60.2245    60.2245
187.46939  187.46939  187.46939
205.9184   205.9184   205.9184
 96.428604  96.428604  96.428604
 61.428543  61.428543  61.428543
148.89812  148.89812  148.89812
 96.28572   96.28572   96.28572
 83.020454  83.020454  83.020454
 87.89795   87.89795   87.89795
135.1224   135.1224   135.1224
```
 |
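When two frameworks disagree like this, a plain NumPy direct convolution can serve as a neutral reference. The following is a sketch under assumed NHWC layout, VALID padding, and stride 4; the input shape and all-ones kernel are hypothetical, not the reporter's data:

```python
import numpy as np

def conv2d_valid(x, w, stride=4):
    """Direct NHWC 'VALID' convolution in float64 — a naive reference
    implementation, not TensorFlow's or PyTorch's actual kernel."""
    n, h, wd, cin = x.shape
    kh, kw, _, cout = w.shape
    oh = (h - kh) // stride + 1
    ow = (wd - kw) // stride + 1
    out = np.zeros((n, oh, ow, cout), dtype=np.float64)
    for i in range(oh):
        for j in range(ow):
            patch = x[:, i * stride:i * stride + kh, j * stride:j * stride + kw, :]
            # Contract over the kernel's height, width, and input channels.
            out[:, i, j, :] = np.tensordot(patch, w, axes=([1, 2, 3], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
x = rng.random((1, 47, 47, 3))
w = np.ones((11, 11, 3, 1))          # all-ones kernel: each output is a patch sum
ref = conv2d_valid(x, w)
print(ref.shape)  # (1, 10, 10, 1)
```

Comparing both frameworks against such a float64 reference makes it possible to tell which one deviates, rather than only that they differ.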
tensorflowtensorflow | No documentation on how to convert Session.run calls to tf.function calls | Bug | URL(s) with the issue: "1. Replace v1.Session.run calls".

Description of issue (what needs changing): there is no information provided to the end user on how to convert simple Session.run calls into tf.function calls for TensorFlow 2. For people who are only interested in running saved models, and not building their own architectures, the lack of information makes it difficult to fully migrate away from TensorFlow 1.x.

Clear description: if I am doing a SavedModel-based system with TensorFlow 1.x, where I am provided the model (I did not make the model), there should be a direct explanation of how to convert Session.run calls into more modern tf.function calls. Here is an example of the code I'm trying to convert to TensorFlow 2, but I can't complete the conversion because of a lack of documentation for this use case:

```python
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], path_to_model)
    cap = cv2.VideoCapture(camera_id)
    ret, frame = cap.read()
    ret, encoded = cv2.imencode(".jpg", frame)
    inferred = sess.run(
        ["detection_scores:0", "detection_boxes:0"],
        feed_dict={"encoded_image_string_tensor:0": [encoded.tobytes()]},
    )
```

Essentially I'm looking for a piece of documentation with code equivalents for these sorts of examples.

Correct links: not applicable. Parameters defined: not applicable. Returns defined: not applicable. Raises listed and defined: not applicable. Usage example: not applicable. Request visuals, if applicable: not applicable.

Submit a pull request? I can't submit a pull request because of the lack of documentation on the issue at hand. I could, however, submit a pull request to resolve the problem once I know how to resolve the problem. |
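For context, the TF2 shape of the same call can be sketched as follows. This is a hypothetical toy module standing in for the reporter's detection SavedModel — the signature name, input argument name, and output keys are assumptions, not the real model's:

```python
import tensorflow as tf

# In TF2 you call the loaded signature's concrete function directly,
# instead of sess.run with string tensor names and a feed_dict.
class ToyDetector(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def serve(self, encoded_image_string_tensor):
        n = tf.shape(encoded_image_string_tensor)[0]
        # Dummy outputs shaped like a typical detection model's.
        return {"detection_scores": tf.zeros([n, 100]),
                "detection_boxes": tf.zeros([n, 100, 4])}

module = ToyDetector()
tf.saved_model.save(module, "/tmp/toy_detector", signatures=module.serve)

loaded = tf.saved_model.load("/tmp/toy_detector")
infer = loaded.signatures["serving_default"]
out = infer(encoded_image_string_tensor=tf.constant([b"fake-jpeg-bytes"]))
print(sorted(out.keys()))  # ['detection_boxes', 'detection_scores']
```

For a real frozen TF1 SavedModel the analogous entry point is `tf.saved_model.load(path).signatures["serving_default"]`, with the encoded JPEG bytes passed as the signature's named argument.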
tensorflowtensorflow | Generated docs for TF 2.1 and TF 2.2 are missing information present in the source docs | Bug | URL(s) with the issue: the issue affects many pages; here is one example (TF 2.2, TF 2.1, TF 2.0).

Description of issue (what needs changing): the generated docs for TF 2.1 and TF 2.2 are missing important information, notably the whole documentation of the `__init__` method. Information like the detailed learning-rate computation

```python
def decayed_learning_rate(step):
    return initial_learning_rate * decay_rate ** (step / decay_steps)
```

is missing in TF 2.1 and TF 2.2. However, the information is still present in the source (see L64–L134).

Further examples: the Adam optimizer is also affected (see TF 2.2 vs. TF 2.0). The TF 2.0 version contains a lot of math describing how Adam works which is not present in the TF 2.2 docs. |
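The decay formula quoted from the missing docstring can be sanity-checked numerically; the hyperparameter values below are assumptions chosen only for illustration:

```python
# Exponential decay as described in the missing __init__ docstring:
# decayed = initial_learning_rate * decay_rate ** (step / decay_steps)
initial_learning_rate = 0.1
decay_rate = 0.5
decay_steps = 1000

def decayed_learning_rate(step):
    return initial_learning_rate * decay_rate ** (step / decay_steps)

print(decayed_learning_rate(0))     # 0.1
print(decayed_learning_rate(1000))  # 0.05
print(decayed_learning_rate(2000))  # 0.025
```

With these values the learning rate halves every `decay_steps` steps — exactly the behavior a reader cannot infer from the stripped TF 2.1/2.2 pages.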
tensorflowtensorflow | Lots of links are broken | Bug | Description of the issue (what needs changing): there are lots of broken links in the r2.0 site/en/api_docs/python/index.md. When someone clicks these links, it shows a 404 error.

Solution: when I checked, I found out every link is correct — for example, the tf.DType link is broken on GitHub but works fine on the TensorFlow website. As the number of broken links is big, I'll suggest adding a message like "Our TensorFlow 2.0 RC docs have moved to ...". I have seen this type of message in some of the TensorFlow docs pull requests.

Hey @dynamicwebpaige @lamberta @MarkDaoust, please assign me to do this — I'd love to address this issue. |
tensorflowtensorflow | Loaded runtime CuDNN library: 7.5.1 but source was compiled with: 7.6.5 | Bug | Constructing a simple LSTM model and feeding it arbitrary stuff, I get the error:

```
tensorflow.python.framework.errors_impl.UnknownError: Fail to find the dnn implementation. [Op:CudnnRNN]
```

I traced the error a couple of lines up and saw that it starts with the error you see in the title.

System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Windows 10; TensorFlow installed from: binary; TensorFlow version: 2.1 and 2.2-nightly (same behavior); Python version: 3.7; CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6, exactly as recommended in the installation guide. |
tensorflowtensorflow | Bug in C++ API AddSymbolicGradients | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: CentOS Linux 7.4; mobile device: unknown; TensorFlow installed from: source; TensorFlow version: 2.0; Python version: 3.7; GCC/compiler version: 8.3; CUDA/cuDNN version: none; GPU model and memory: none.

Describe the current behavior: I build a graph, then call AddSymbolicGradients to generate gradients, but I find that the order of the generated gradients does not match the inputs. My inputs are {a, b}, but the generated gradients are {b_grad, a_grad}.

Describe the expected behavior: the gradient order matches the inputs; the gradients of inputs {a, b} should be {a_grad, b_grad}.

My work to solve the problem: I checked the code in tensorflow/cc/framework/gradients.cc (L532):

```cpp
size_t dx_index = 0;
for (const Edge* e : n->in_edges()) {
  if (e->IsControlEdge()) continue;
  if (dx_index == dx.size()) {
    return errors::Internal("Invalid gradient output index: ", dx_index,
                            " size: ", dx.size());
  }
  TF_RETURN_IF_ERROR(
      BackpropAlongEdge(dx[dx_index++], {e->src(), e->src_output()}));
}
```

I think `dx_index` in the code should be `e->dst_input()`, because `in_edges()` returns an unordered set whose order does not match `dx`, and the input index of an edge is `dst_input()`. The right code may be:

```cpp
for (const Edge* e : n->in_edges()) {
  if (e->IsControlEdge()) continue;
  int dx_index = e->dst_input();
  if (dx_index >= dx.size()) {
    return errors::Internal("Invalid gradient output index: ", dx_index,
                            " size: ", dx.size());
  }
  TF_RETURN_IF_ERROR(
      BackpropAlongEdge(dx[dx_index], {e->src(), e->src_output()}));
}
```

After I modified the code, my program seems to work fine. |
tensorflowtensorflow | Example given in tf.image.rgb_to_yuv contradicts the description | Bug | Description of issue (what needs changing): 1. The example image x has pixel values which are not in the range [0, 1], so it can't be fed to rgb_to_yuv directly without scaling. 2. Users of the API need examples which are close to practical scenarios; in this case nobody wants to see the values changed by the function, but they want a correct implementation and preprocessing example.

Submit a pull request? Yes. |
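The preprocessing step the report asks for can be sketched without TensorFlow; the image shape below is a hypothetical example, and feeding the scaled array to `tf.image.rgb_to_yuv` is the assumed next step:

```python
import numpy as np

# Scale uint8 RGB pixels into [0, 1] before calling tf.image.rgb_to_yuv,
# which the docs say expects values in that range.
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
scaled = img.astype(np.float32) / 255.0
print(0.0 <= scaled.min() and scaled.max() <= 1.0)  # True
```

Documenting this one-line rescale alongside the example image would make the docstring's example consistent with its own input requirements.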
tensorflowtensorflow | tf.linalg.triangular_solve segfaults instead of broadcasting | Bug | System information: have I written custom code: no; OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from: pip install tensorflow; TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0; Python version: 3.7.6.

Describe the current behavior: tf.linalg.triangular_solve segfaults when the shapes don't match and tf.linalg.triangular_solve hasn't been run before. If it has successfully been run before, it raises an InvalidArgumentError instead.

Describe the expected behavior: not to segfault; ideally, to broadcast.

Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

shape1 = (3, 3)
shape2 = (1, 3, 3)
# Works (i.e. fails) either way around:
if np.random.rand() < 0.5:
    shape1, shape2 = shape2, shape1

a = tf.convert_to_tensor(np.tril(np.random.randn(*shape1)))
b = tf.convert_to_tensor(np.random.randn(*shape2))

segfault = True
if segfault:
    tf.linalg.triangular_solve(a, b, lower=True)  # segfaults
else:
    tf.linalg.triangular_solve(tf.squeeze(a), tf.squeeze(b), lower=True)  # works fine
    tf.linalg.triangular_solve(a, b, lower=True)  # raises InvalidArgumentError
```

May be related to #25391. |
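For reference, NumPy's general solver broadcasts leading batch dimensions the way the reporter expects. This is a sketch using `np.linalg.solve` (NumPy has no dedicated triangular solve in its core `linalg` module); the diagonal boost is only there to keep the random matrix well-conditioned:

```python
import numpy as np

rng = np.random.default_rng(0)
# Lower-triangular system matrix, made well-conditioned for the demo.
a = np.tril(rng.standard_normal((3, 3))) + 3.0 * np.eye(3)
b = rng.standard_normal((1, 3, 3))

# np.linalg.solve broadcasts a's (3, 3) against b's leading batch dim,
# instead of crashing on the rank mismatch.
x = np.linalg.solve(a, b)
print(x.shape)  # (1, 3, 3)
print(np.allclose(a @ x[0], b[0]))  # True
```

The expected TF behavior would be analogous: treat `(3, 3)` versus `(1, 3, 3)` as a batch-broadcast, or at worst raise a clean shape error rather than segfault.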
tensorflowtensorflow | ValueError: Cannot use 'loss_head_conv_0_0' as input to 'Merge_2/MergeSummary' because 'loss_head_conv_0_0' is in a while loop | Bug | I am trying to create my own neural network with several outputs; it follows that there are several target vectors. These target vectors dynamically change during training and depend on the predictions of the neural network. I wrote about this neural network earlier in #37468.

System information: OS platform and distribution: Debian GNU/Linux 9.11 (stretch); TensorFlow version: 2.1.0.

The program works fine when I run it on CPU, but when run on TPU it prints an error:

```
ValueError: Cannot use 'loss_head_conv_0_0' as input to 'Merge_2/MergeSummary' because 'loss_head_conv_0_0' is in a while loop. See info log for more details.
```

What could be the reason for the error? Here is the complete code:

```python
import tensorflow.compat.v1 as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, Activation, Input, BatchNormalization, Layer
from tensorflow.core.protobuf import rewriter_config_pb2
import numpy as np
import tensorflow_datasets as tfds

height = 5
width = 5
use_tpu = True
train_batch_size = 8 * 8 if use_tpu else 1
steps = 20000
learning_rate = 1e-4
iterations_per_loop = 100
log_step_count_steps = 100
use_async_checkpointing = False
if use_async_checkpointing:
    save_checkpoints_steps = None
else:
    save_checkpoints_steps = max(500, iterations_per_loop)
model_dir = "gs://my_storage/model"
data_dir = "gs://my_storage/datasets"
tpu = "grpc://10.3.101.2:8470"
gcp_project = "my_project"
tpu_zone = "us-central1"

if use_tpu:
    tpu_cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
        tpu, zone=tpu_zone, project=gcp_project)
    master = tpu_cluster_resolver.get_master()
else:
    tpu_cluster_resolver = None
    master = None


class _Conv2D:
    def __init__(self, x, filters, kernel_size, name, strides=(1, 1),
                 padding="same", activation="relu", reuse=True):
        with tf.variable_scope(name, reuse=reuse):
            self.name = name
            self.x = Conv2D(filters, kernel_size, strides=strides,
                            padding=padding, name=name)(x)
            bn_name = name + "_bn"
            self.x = BatchNormalization(scale=False, name=bn_name)(self.x)
            ac_name = name + "_ac"
            self.x = Activation(activation, name=ac_name)(self.x)


class OutputLayer(Layer):
    def __init__(self, name, **kwargs):
        super(OutputLayer, self).__init__(name=name, **kwargs)

    def call(self, inputs):
        return inputs


def make_input_fn(dataset_fn, params):
    def input_fn(params):
        x_train = dataset_fn()[0]["train"]
        batch_size = params["batch_size"]
        y_true = tf.random_uniform(shape=(8 * batch_size, 32 * 32 * 8),
                                   minval=0.0, maxval=1.0,
                                   dtype=tf.dtypes.float32, seed=7777)

        def preprocess(x, y):
            x = tf.cast(x, tf.float32) * (1 / 255)
            label_dic = {}
            for h in range(height):
                for w in range(width):
                    label_dic["head_conv_{}_{}".format(h, w)] = y_true
            return x, label_dic

        dataset = (x_train
                   .map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
                   .repeat()
                   .shuffle(128, seed=7777, reshuffle_each_iteration=True)
                   .batch(8 * batch_size, drop_remainder=True)
                   .prefetch(1))
        return dataset
    return input_fn


def get_model(features, input_shape, reuse):
    with tf.variable_scope("model", reuse=reuse):
        inputs = Input(shape=input_shape)
        seqs = []
        n_filters = 8
        for h in range(height):
            seq = []
            for w in range(width):
                if not seq:
                    if h == 0 and w == 0:
                        seq.append(_Conv2D(inputs, n_filters, (3, 3),
                                           name="conv_{}_{}".format(h, w), reuse=reuse))
                    else:
                        seq.append(_Conv2D(seqs[-1][0].x, n_filters, (3, 3),
                                           name="conv_{}_{}".format(h, w), reuse=reuse))
                else:
                    seq.append(_Conv2D(seq[-1].x, n_filters, (3, 3),
                                       name="conv_{}_{}".format(h, w), reuse=reuse))
            seqs.append(seq)

        tmp = np.array([[layer for layer in seq] for seq in seqs]).ravel()
        outputs = []
        heads = []
        for layer in tmp:
            outputs.append(OutputLayer(name="output_" + layer.name)(layer.x))
            heads.append(tf.estimator.RegressionHead(
                label_dimension=32 * 32 * 8, name="head_" + layer.name))

        model = Model(inputs=inputs, outputs=outputs)
        head = tf.estimator.MultiHead(heads)
        opt = tf.keras.optimizers.Adam(learning_rate=learning_rate)
        metrics = ["accuracy"]
        model.compile(loss="mean_squared_error", optimizer=opt, metrics=metrics)
        model.summary()
        return model, head


def model_fn(features, labels, mode, params):
    batch_size = 8 * params["batch_size"]
    model, head = get_model(features, params["input_shape"], reuse=False)
    logits_train = model(features)
    logits_train_dic = {}
    i = 0
    for h in range(height):
        for w in range(width):
            logits_train[i] = tf.reshape(logits_train[i], (batch_size, 32 * 32 * 8))
            logits_train_dic["head_conv_{}_{}".format(h, w)] = logits_train[i]
            i += 1
    pred_classes = tf.argmax(logits_train, axis=1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.tpu.TPUEstimatorSpec(mode, predictions=pred_classes)

    new_labels = {}
    for key in labels:
        new_labels[key] = labels[key][0]
    loss = 0.0
    for key in logits_train_dic:
        logits = logits_train_dic[key]
        loss += tf.square(new_labels[key] - logits)
    loss_op = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
    if params["use_tpu"]:
        optimizer = tf.tpu.CrossShardOptimizer(optimizer)
    train_op_fn = lambda loss_op: optimizer.minimize(
        loss_op, global_step=tf.train.get_global_step())
    estim_specs = head.create_estimator_spec(
        features={"x": features}, labels=new_labels, mode=mode,
        logits=logits_train_dic, train_op_fn=train_op_fn)
    return estim_specs


tf.logging.set_verbosity(tf.logging.INFO)
tf.disable_v2_behavior()

dataset_fn = lambda: tfds.load(name="cifar10", with_info=True, as_supervised=True,
                               try_gcs=True, data_dir=data_dir)
info = dataset_fn()[1]
n_samples = info.splits["train"].get_proto().statistics.num_examples
n_classes = info.features["label"].num_classes
train_shape = info.features["image"].shape

tf.config.set_soft_device_placement(True)
config = tf.estimator.tpu.RunConfig(
    master=master,
    model_dir=model_dir,
    save_checkpoints_steps=save_checkpoints_steps,
    log_step_count_steps=log_step_count_steps,
    session_config=tf.ConfigProto(
        graph_options=tf.GraphOptions(
            rewrite_options=rewriter_config_pb2.RewriterConfig(
                disable_meta_optimizer=True))),
    tpu_config=tf.estimator.tpu.TPUConfig(
        iterations_per_loop=iterations_per_loop,
        per_host_input_for_training=tf.estimator.tpu.InputPipelineConfig.PER_HOST_V2))

params = dict(use_tpu=use_tpu, input_shape=train_shape, learning_rate=learning_rate)
model = tf.estimator.tpu.TPUEstimator(
    model_fn=model_fn, use_tpu=use_tpu, config=config,
    train_batch_size=train_batch_size, params=params)
model.train(make_input_fn(dataset_fn, params), steps=steps)
```
 |
tensorflowtensorflow | Keras connectivity metadata is not set correctly in TF 2.2.0rc0 | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Windows 10; mobile device: N/A; TensorFlow installed from: binary; TensorFlow version: 2.2.0rc0; Python version: 3.6.10; Bazel / GCC / CUDA/cuDNN / GPU: N/A.

Describe the current behavior: if a Keras model is applied to an input with a mismatched but compatible shape, the model is applied correctly, but none of the Keras connectivity metadata (e.g. inbound/outbound nodes, or _keras_history) is updated.

Describe the expected behavior: the Keras connectivity metadata should be updated, so after the model is applied there should be new inbound/outbound nodes for that application, and all the tensors created by that application should have their _keras_history set appropriately. This is the behaviour in TF 2.1.0.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

for input_shape in [(1,), (1, 1)]:
    print("input_shape:", input_shape)
    sub_in = tf.keras.Input((1,))
    relu_layer = tf.keras.layers.ReLU()
    sub_out = relu_layer(sub_in)
    submodel = tf.keras.Model(sub_in, sub_out)
    assert len(relu_layer.inbound_nodes) == 1
    inp = tf.keras.Input(input_shape)
    out = submodel(inp)
    assert len(relu_layer.inbound_nodes) == 2
```

The `assert len(relu_layer.inbound_nodes) == 2` condition fails when `input_shape == (1, 1)`, indicating that no new inbound node is created when `submodel` is applied to `inp`.

Other info/logs: the issue is that when this function is applied (L926), it creates new tensors which do not have a _keras_history set. Then this condition evaluates to False (L947), so the _set_connectivity_metadata function is never called here (L952). |
tensorflowtensorflow | Loading a pre-trained ResNet-50 for the object detection model on TensorFlow 2.0 | Bug | URL(s) with the issue: (see below). Description of issue (what needs changing): Hi, in "Train a vanilla ResNet-50 based RetinaNet" it says to use the path to the pre-trained ResNet-50 checkpoint, but there is no link to any pre-trained model. [image] I tried to use the ResNet-50 which is in the linked repository because, being an official one, it is likely to be a correct implementation. [image] Unfortunately, when I run

```shell
python main.py --strategy_type=one_device --num_gpus=1 --model_dir=my_model --mode=train --config_file=my_retinanet.yaml
```

with this YAML:

```yaml
type: retinanet
train:
  # pretrained model
  checkpoint:
    path: home/hongkuny/hongkuny/keras_resnet50/gpu-8-fp32-eager-graph-cfit
    prefix: resnet50/
  train_file_pattern: tfrecords/train.record
eval:
  eval_file_pattern: tfrecords/test.record
```

I get nothing loaded, because the weights seem to be wrongly named in this file, so they do not match. [image] I also tried some other models from the zoo, assuming we might load weights from a ResNet-based object detection model, but I got the same problem. I think we should add a link to a correct checkpoint of a compatible pre-trained model, in order to avoid roaming around incompatible models. Regards, Swann |
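A way to diagnose this kind of mismatch (a hedged sketch, not part of the original issue: the `Block` class and paths below are illustrative stand-ins) is to list the variable names actually stored in a checkpoint with `tf.train.list_variables` and compare them against the names the live model expects — in practice you would point it at the pre-trained ResNet-50 checkpoint:

```python
import tempfile

import tensorflow as tf

# A tiny stand-in with named variables; in practice, point list_variables
# at the pre-trained ResNet-50 checkpoint instead.
class Block(tf.Module):
    def __init__(self):
        super().__init__()
        self.kernel = tf.Variable(tf.zeros([3, 4]), name="kernel")
        self.bias = tf.Variable(tf.zeros([4]), name="bias")

block = Block()
ckpt = tf.train.Checkpoint(block=block)
prefix = ckpt.save(tempfile.mkdtemp() + "/demo_ckpt")

# Every (name, shape) pair stored in the checkpoint; comparing these
# against the names the model tries to restore reveals why "nothing
# gets loaded" when the naming schemes differ.
stored = tf.train.list_variables(prefix)
for name, shape in stored:
    print(name, shape)
```

If the stored names follow a different naming scheme than the model's variables, a plain restore silently matches nothing, which is consistent with the behaviour described above.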
tensorflowtensorflow | InaccessibleTensorError when appending to a list while looping | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary. Python version: 2.7.15. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior: I get an InaccessibleTensorError when looping over a tensor and appending to a Python list, but only sometimes.

Standalone code to reproduce the issue (the error happens at the line `my_list.append(new_val)`):

```python
from __future__ import absolute_import, division, print_function
import tensorflow as tf

def list_each_row_times_two(og_values):
    my_list = []
    for old_val in og_values:
        new_val = tf.math.multiply(old_val, 2)
        my_list.append(new_val)  # fails here
        # tf.print(new_val)  # replacing the line above with this works
    return my_list

if __name__ == "__main__":
    test_func = tf.function(list_each_row_times_two)

    # this works
    og_values = []
    for _ in range(3):
        og_values.append(tf.random.uniform([2]))
    new_values = test_func(og_values)
    print("we did it")

    # does not work
    og_values = tf.random.uniform([3, 2])
    new_values = test_func(og_values)
    print("we did it again")
```

Other info / logs (paths below abbreviate `/home/philip/ros_ws/src/real_world/venv/local/lib/python2.7/site-packages/`):

```
we did it
Traceback (most recent call last):
  File "unkown_looping_bug.py", line 30, in <module>
    new_values = test_func(og_values)
  File ".../tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File ".../tensorflow_core/python/eager/def_function.py", line 606, in _call
    results = self._stateful_fn(*args, **kwds)
  File ".../tensorflow_core/python/eager/function.py", line 2362, in __call__
    graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
  File ".../tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File ".../tensorflow_core/python/eager/function.py", line 2593, in _create_graph_function
    capture_by_value=self._capture_by_value)
  File ".../tensorflow_core/python/framework/func_graph.py", line 983, in func_graph_from_py_func
    expand_composites=True)
  File ".../tensorflow_core/python/util/nest.py", line 568, in map_structure
    structure[0], [func(*x) for x in entries]
  File ".../tensorflow_core/python/framework/func_graph.py", line 945, in convert
    x = deps_ctx.mark_as_return(x)
  File ".../tensorflow_core/python/framework/auto_control_deps.py", line 167, in mark_as_return
    tensor = array_ops.identity(tensor)
  File ".../tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File ".../tensorflow_core/python/ops/array_ops.py", line 267, in identity
    ret = gen_array_ops.identity(input, name=name)
  File ".../tensorflow_core/python/ops/gen_array_ops.py", line 3829, in identity
    "Identity", input=input, name=name)
  File ".../tensorflow_core/python/framework/op_def_library.py", line 742, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File ".../tensorflow_core/python/framework/func_graph.py", line 591, in _create_op_internal
    inp = self.capture(inp)
  File ".../tensorflow_core/python/framework/func_graph.py", line 641, in capture
    tensor.graph, self)
tensorflow.python.framework.errors_impl.InaccessibleTensorError: The tensor 'Tensor("mul:0", shape=(2,), dtype=float32)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=while_body_56, id=139762147406480); accessed from: FuncGraph(name=list_each_row_times_two, id=139762146659984).
``` |
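A common workaround for this error (a sketch under the assumption that the per-row results are wanted as a single stacked tensor; this is not the issue author's code) is to accumulate loop results in a `tf.TensorArray`, which is designed to cross the `tf.function` while-loop graph boundary that plain Python lists cannot:

```python
import tensorflow as tf

@tf.function
def rows_times_two(values):
    # TensorArray survives the while-loop graph boundary that makes
    # plain Python lists raise InaccessibleTensorError.
    n = tf.shape(values)[0]
    ta = tf.TensorArray(values.dtype, size=n)
    for i in tf.range(n):
        ta = ta.write(i, values[i] * 2.0)
    return ta.stack()

out = rows_times_two(tf.ones([3, 2]))
print(out)
```

The first variant in the repro works because a Python list of eager tensors is unrolled at trace time, while a tensor argument turns the loop into a graph `while_loop`, whose body tensors are not visible to the outer list.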
tensorflowtensorflow | Error retrieving regularization losses after adding them to a pretrained model | Bug | It appears that adding regularization losses to a pre-trained model causes an error when the losses are later retrieved. I am having this problem with TensorFlow 2.2; its git version is v1.12.1-26428-gcb73044. The reproducing code is shown below:

```python
import tensorflow as tf

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.model = tf.keras.applications.ResNet101(include_top=False, weights='imagenet')

    def build(self, input_shape=None):
        for layer in self.model.layers:
            if type(layer) == tf.keras.layers.Conv2D:
                layer.add_loss(lambda: tf.keras.regularizers.l2(1e-5)(layer.kernel))

    def call(self, x):
        return self.net(x)

if __name__ == '__main__':
    m = Model()
    x = tf.random.uniform(shape=(1, 3, 512, 512))
    m.build()
    print(m.losses)
```

The error message is: `AttributeError: 'Activation' object has no attribute 'kernel'`. |
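The `AttributeError` is consistent with Python's late-binding closures rather than with TensorFlow itself: every `lambda` created in the loop closes over the same `layer` variable, which after the loop refers to the last layer (an `Activation`, which has no `kernel`). A minimal pure-Python sketch of the pitfall and the usual default-argument fix (the classes here are stand-ins, not Keras classes):

```python
# Stand-ins for layers: Conv has a kernel, Activation does not.
class Conv:
    def __init__(self, kernel):
        self.kernel = kernel

class Activation:
    pass

layers = [Conv(2.0), Conv(3.0), Activation()]

# Buggy pattern, mirroring the issue: each deferred loss reads the loop
# variable `layer`, which ends up bound to the Activation instance.
buggy_losses = []
for layer in layers:
    if isinstance(layer, Conv):
        buggy_losses.append(lambda: layer.kernel ** 2)

try:
    [f() for f in buggy_losses]
    raised = False
except AttributeError:  # 'Activation' object has no attribute 'kernel'
    raised = True

# Fix: bind the current layer as a default argument so each lambda
# keeps its own reference.
fixed_losses = []
for layer in layers:
    if isinstance(layer, Conv):
        fixed_losses.append(lambda layer=layer: layer.kernel ** 2)

values = [f() for f in fixed_losses]
print(raised, values)  # → True [4.0, 9.0]
```

In the issue's code the same fix would be `layer.add_loss(lambda layer=layer: ...)`, so each deferred loss captures its own convolution layer.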
tensorflowtensorflow | tf.config.experimental.list_physical_devices('GPU') stopped listing my GPU after updating from 2.0.1 to 2.1.0 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information (template fields not filled in): custom code; OS platform and distribution; mobile device; TensorFlow installed from (source or binary); TensorFlow version; Python version; Bazel version; GCC/compiler version; CUDA/cuDNN version; GPU model and memory.

Describe the current behavior: I'm using the following lines to set allow_growth on TF 2.0, like I used to with tf.config and sessions on TF 1:

```python
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
tf.config.experimental.set_memory_growth(physical_devices[0], True)
```

I've updated from 2.0.1 to 2.1.0 via pip install, and suddenly `tf.config.experimental.list_physical_devices('GPU')` stopped listing my GPU as an available device.

Describe the expected behavior: when downgrading to 2.0.1 again via pip install, everything works again.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
tf.config.experimental.set_memory_growth(physical_devices[0], True)
```

Other info / logs: n/a. |
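When a GPU disappears after an upgrade, a quick first check (a diagnostic sketch; the usual cause is a CUDA/cuDNN version mismatch with the new wheel, which this snippet reports but does not fix) is whether the installed build was compiled with CUDA at all and what devices it can see:

```python
import tensorflow as tf

print("TensorFlow:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())

gpus = tf.config.experimental.list_physical_devices('GPU')
print("Visible GPUs:", gpus)

# Only touch memory growth when a GPU is actually present, so the
# script degrades gracefully on CPU-only machines.
if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)
```

If `Built with CUDA` is `True` but the GPU list is empty, the runtime found no usable driver/CUDA combination, which matches the behaviour reported for the 2.1.0 wheel here.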
tensorflowtensorflow | Roadmap link is broken | Bug | URL(s) with the issue: README.md → Resources → the Roadmap link is broken. Description of issue (what needs changing): a "page not found" error occurs. |
tensorflowtensorflow | TensorFlow Hexagon TFLite benchmark fails with quantized MobileNetV2 | Bug | OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.4 LTS. TensorFlow installed from (source or binary): source. TensorFlow version: built from `git clone --recurse-submodules`. Python version: 3.6.9. Bazel version (if compiling from source): 2.0.0. GCC/compiler version (if compiling from source): gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0. Device: Redmi Note 7 Pro, Hexagon 685 DSP, Android 10.0, MIUI 11.

Describe the current behavior: when I try to run the official quantized MobileNet V2 model in the TFLite benchmark using the Hexagon delegate, it fails with the following error:

```
$ adb shell /data/local/tmp/benchmark_model_tf15 --graph=/data/local/tmp/mobilenet_v2_1.0_224_quant.tflite --enable_op_profiling=true --use_hexagon=true
adb: /opt/intel/intelpython27/lib/libcrypto.so.1.0.0: no version information available (required by adb)
STARTING!
Min num runs: [50]
Min runs duration (seconds): [1]
Max runs duration (seconds): [150]
Inter-run delay (seconds): [-1]
Num threads: [1]
Benchmark name: []
Output prefix: []
Min warmup runs: [1]
Min warmup runs duration (seconds): [0.5]
Graph: [/data/local/tmp/mobilenet_v2_1.0_224_quant.tflite]
Input layers: []
Input shapes: []
Input value ranges: []
Use legacy nnapi: [0]
Allow fp16: [0]
Require full delegation: [0]
Enable op profiling: [1]
Max profiling buffer entries: [1024]
CSV File to export profiling data to: []
Use gpu: [0]
Allow lower precision in gpu: [1]
Use Hexagon: [1]
Hexagon lib path: [/data/local/tmp]
Hexagon profiling: [0]
Use nnapi: [0]
Use xnnpack: [0]
Loaded model /data/local/tmp/mobilenet_v2_1.0_224_quant.tflite
INFO: Initialized TensorFlow Lite runtime.
loaded libcdsprpc.so
INFO: Created TensorFlow Lite delegate for Hexagon.
INFO: Hexagon delegate: 65 nodes delegated out of 65 nodes.
timestamp: Wed Mar 11 12:45:53 2020
log: hexagon/src/newnode.c:413: node 2 quantize_int32_ref: bad input count 12
hexagon/src/newnode.c:763: node id=0x2 ctor fail
error: failed to prepare graph
state: failed to prepare graph
ERROR: Node number 65 (TfLiteHexagonDelegate) failed to prepare.
ERROR: Restored original execution plan after delegate application failure.
Failed to apply Hexagon delegate.
Benchmarking failed.
```

All the .so files were initially copied to /data/local/tmp on the device, and the delegate is created successfully. I tried adding `--use_nnapi=true` along with the `--use_hexagon` option, but it does not make any difference. I followed the instructions from the official documentation to build the Hexagon delegate and libraries:

```shell
bazel build --config=android_arm64 tensorflow/lite/experimental/delegates/hexagon/hexagon_nn:libhexagon_interface.so
adb push bazel-bin/tensorflow/lite/experimental/delegates/hexagon/hexagon_nn/libhexagon_interface.so /data/local/tmp
adb push libhexagon_nn_skel*.so /data/local/tmp
```

The 32-bit version gives an error, so I used the arm64 Hexagon library v1.14. Model: mobilenet_v2_1.0_224_quant. Also, does the Hexagon delegate execute a TFLite model generated by post-training quantization (full int8)? When I tried another custom quantized model (post-training quantization with int8 input and output), it showed `INFO: Hexagon delegate: 0 nodes delegated out of 158 nodes` and seemed to fall back to the CPU.

Describe the expected behavior: the benchmark tool should run the quantized model without any problem.

Other info / logs: Android NDK 20; benchmark tool built from latest source with Bazel 2.0. Here are the two models I have tried to benchmark and the corresponding benchmark library files: hexfile.zip |
tensorflowtensorflow | Confusing behavior of GRUCell | Bug |

```python
cell = layers.GRUCell(4)
x = tf.random.normal((2, 2, 3))
print(cell.get_initial_state(x))         # returns a single tensor
cell(x[0], [cell.get_initial_state(x)])  # requires and returns a list of tensors
```

Why would `GRUCell.get_initial_state` return a tensor while calling the `GRUCell` requires a list of tensors as input? Is this a desired behavior? (Code tested in TF 2.1.) Furthermore, while the cell returns output and state separately, the RNN returns the output and state together in a single list:

```python
cell = layers.LSTMCell(4)
x = tf.random.normal((2, 2, 3))
s = cell.get_initial_state(x)
cell_output, state = cell(inputs=x[1], states=s)  # works fine
print(cell_output)
print(state)

rnn = layers.RNN(cell, return_state=True, return_sequences=True)
rnn_output, state = rnn(x, initial_state=s)  # error: RNN returns the output and state in a single list
```

I'm wondering why RNN and the cell behave differently. |
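A sketch of the wrapping that reconciles the two call conventions (an illustration, not the issue author's code; the zero initial state is built by hand here rather than via `get_initial_state`, to stay agnostic to version differences in that method): the cell's `states` argument is treated as a list with one entry per state tensor, so the single state tensor a GRU carries can simply be wrapped before the call:

```python
import tensorflow as tf

cell = tf.keras.layers.GRUCell(4)
x = tf.random.normal((2, 3))   # batch of 2, feature size 3
state = tf.zeros((2, 4))       # GRU carries a single state tensor of size `units`

# Wrap the single state tensor in a list to match the cell's call
# signature; the cell returns (output, new_states).
output, new_state = cell(x, [state])
print(output.shape)  # (2, 4)
```

The list convention exists so that cells with multiple state tensors (e.g. LSTM's hidden and carry states) share one interface with single-state cells like GRU, which is why `RNN` also threads states around as lists.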
tensorflowtensorflow | 58 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow); OS platform and distribution (e.g., Linux Ubuntu 16.04); mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device; TensorFlow installed from (source or binary); TensorFlow version (use command below); Python version; Bazel version (if compiling from source); GCC/compiler version (if compiling from source); CUDA/cuDNN version; GPU model and memory. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`. Describe the current behavior. Describe the expected behavior. Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter or any notebook. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached. |
tensorflowtensorflow | third_party/tensorflow/compiler/aot:codegen_test: no such directory found | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): "To update the golden files, flip update_golden to true and run the following Bazel test: `bazel test --test_strategy=local third_party/tensorflow/compiler/aot:codegen_test`". Clear description: line 154 — I want to update the golden files, but the given directory is not present in the third-party library. For example: why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined (for example, raises)? Usage example: is there a usage example? See the API guide on how to write testable usage examples. Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request?: are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide. |